

FIT5202 - Data processing for Big Data (SSB 2025)

Assignment 1: Analysing Food Delivery Data

Due Date: 23:55 Friday 17/Jan/2025 (End of week 3)

Weight: 10% of the final marks

Background

Food delivery services have become an integral part of modern society, revolutionizing the way we consume meals and interact with the food industry. These platforms, accessible through websites and mobile apps, provide a convenient bridge between restaurants and consumers, allowing users to browse menus, place orders, and have food delivered directly to their doorstep with just a few taps. In today's fast-paced world, where time is a precious commodity, food delivery services offer an invaluable solution, catering to busy lifestyles, limited mobility, and the ever-present desire for convenience. They empower individuals to enjoy a diverse range of cuisines without leaving their homes or offices, support local restaurants by expanding their reach, and have even become a crucial lifeline during times of crisis, such as lockdowns and emergencies, ensuring access to essential sustenance and supporting the economy. As a result of its convenience and the increasing preference for on-demand services, food delivery has become a very important part of modern life, impacting everything from our daily routines to the broader economic landscape.

In the food delivery industry, accurate on-time delivery prediction is paramount. Big data processing allows companies to achieve this by analyzing vast datasets encompassing order details, driver performance, real-time traffic, and even weather.

Sophisticated algorithms leverage this data to build predictive models. These models learn from historical trends, for example, a restaurant's longer preparation times during peak hours or a driver's faster navigation in specific areas. Real-time data, like driver GPS location and live traffic, further refine these predictions, enabling dynamic adjustments to estimated delivery times.

The benefits are substantial. Firstly, customer satisfaction improves with reliable delivery estimates and transparent communication regarding delays. Secondly, operational efficiency increases through optimized driver scheduling and route planning, leading to reduced costs and faster deliveries. Furthermore, accurate predictions empower proactive measures to mitigate delays. The system can alert customers of potential issues, offer compensation, and trigger interventions like expediting order preparation. If an order is not delivered on time, quality after-sales service should follow, such as offering refunds, providing future discounts, or simply offering a sincere apology.

By mastering on-time delivery prediction through big data, food delivery companies gain a crucial competitive edge. They can meet and exceed customer expectations, foster loyalty, and drive sustainable growth in a demanding market. As the industry evolves, leveraging big data for accurate delivery forecasting will remain a key differentiator for success.

This series of assignments will immerse you in the world of big data analytics, specifically within the context of a modern, data-driven application: food delivery services. We will explore the entire lifecycle of data processing, from analyzing historical information to building and deploying real-time machine learning models. Each assignment builds upon the last, culminating in a comprehensive understanding of how big data technologies can be leveraged to optimize performance and enhance user experience.

In the first assignment (A1), we will delve into historical datasets, performing data analysis to uncover key trends and patterns related to delivery times, order volumes, and other crucial metrics. This foundational understanding will pave the way for Assignment 2A, where we will harness the power of Apache Spark's MLlib to construct and train machine learning models, focusing on predicting delivery times with accuracy and efficiency. Finally, Assignment 2B will elevate our analysis to the real-time domain, utilizing Apache Spark Structured Streaming to process live data streams and dynamically adjust predictions, providing a glimpse into the cutting-edge techniques driving modern, responsive applications. Through this hands-on journey, you will gain practical experience with industry-standard tools and develop a strong conceptual understanding of how big data powers the dynamic world of on-demand services.

In A1, we will perform historical data analysis using Apache Spark, using the RDD, DataFrame, and SQL APIs learnt in Topics 1-4.

The Dataset

The dataset can be downloaded from Moodle.

You will find the following files after extracting the zip file:

1)  delivery_order.csv: Contains food order records.

2)  geolocation.csv: Contains geographical information about restaurants and delivery locations.

3)  delivery_person.csv: Contains basic driver information, their rating and vehicle information.

The metadata of the dataset can be found in the appendix at the end of this document. (Note: the dataset is a mixture of real-life and synthetic data, so some anomalies may exist in the dataset. Data cleansing is not mandatory in this assignment.)

Assignment Information

The assignment consists of three parts: Working with RDDs, Working with DataFrames, and Comparison of three forms of Spark abstractions. In this assignment, you are required to implement various solutions based on RDDs and DataFrames in PySpark for the given queries related to food delivery data analysis.

Getting Started

● Download your dataset from Moodle.

● Download the template file for submission purposes:

● A1_template.ipynb: write your solution in this Jupyter notebook. Rename it using the format A1_xxx0000.ipynb, where xxx0000 is your authcate; this file contains your code solution.

● For this assignment, you will use Python 3+ and PySpark 3.5.0. (The environment is provided as a Docker image, the same one you use in labs.)

Part 1: Working with RDDs (30%)

In this section, you need to create RDDs from the given datasets, perform partitioning on these RDDs, and use various RDD operations to answer the queries.

1.1 Data Preparation and Loading (5%)

1. Write the code to create a SparkContext object using SparkSession. To create a SparkSession, you first need to build a SparkConf object that contains information about your application. Use Melbourne time as the session timezone, give your application an appropriate name, and run Spark locally with 4 cores on your machine. (A sketch covering this part follows the list below.)

2.  Load the CSV files into multiple RDDs.

3. For each RDD, remove the header rows and display the total count and first 10 records.

4. Drop records with invalid information (NaN or Null) in any column.
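As a starting point, here is a minimal PySpark sketch of this part. It assumes the CSV files sit in the working directory and that fields contain no quoted commas (otherwise use a proper CSV parser such as Python's csv module); repeat load_rdd for the other two files.

```python
from pyspark import SparkConf
from pyspark.sql import SparkSession

# Build a SparkConf with an app name, 4 local cores, and Melbourne time.
conf = (SparkConf()
        .setMaster("local[4]")
        .setAppName("FIT5202 A1 Food Delivery Analysis")
        .set("spark.sql.session.timeZone", "Australia/Melbourne"))

spark = SparkSession.builder.config(conf=conf).getOrCreate()
sc = spark.sparkContext  # the SparkContext behind the SparkSession

def load_rdd(path):
    """Load a CSV into an RDD of column lists, with the header removed."""
    raw = sc.textFile(path)
    header = raw.first()
    return (raw.filter(lambda line: line != header)
               .map(lambda line: line.split(",")))

orders_rdd = load_rdd("delivery_order.csv")
print(orders_rdd.count())
for record in orders_rdd.take(10):
    print(record)

# Drop records with NaN/Null-like values in any column.
INVALID = {"", "nan", "null", "na"}
orders_rdd = orders_rdd.filter(
    lambda cols: all(c.strip().lower() not in INVALID for c in cols))
```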

1.2 Data Partitioning in RDD (15%)

1. For each RDD, using Spark's default partitioning, print out the total number of partitions and the number of records in each partition. (5%)

2.  Answer the following questions:

a.  How many partitions do the above RDDs have?

b. How is the data in these RDDs partitioned by default, when we do not explicitly specify any partitioning strategy? Can you explain why it is partitioned into this number of partitions?

c. Assuming we are querying the dataset based on order timestamp, can you think of a better strategy for partitioning the data based on your available hardware resources?

Write your explanation in Markdown cells. (5%)

3. Create a user-defined function (UDF) to transform a timestamp to ISO format (YYYY-MM-DD HH:mm:ss), then call the UDF to transform the two timestamps (order_ts and ready_ts) into order_datetime and ready_datetime, as in the sketch below. (5%)
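A minimal sketch of the partition inspection and the timestamp UDF, reusing orders_rdd from 1.1. The column positions ORDER_TS and READY_TS are hypothetical; take the real positions from the metadata, and adjust the parsing if the raw timestamps are not Unix epoch seconds.

```python
from datetime import datetime

# 1. Default partitioning: total partitions, then records per partition.
#    glom() gathers each partition into a list so its length can be counted.
print("Number of partitions:", orders_rdd.getNumPartitions())
for i, n in enumerate(orders_rdd.glom().map(len).collect()):
    print(f"Partition {i}: {n} records")

# 3. UDF: Unix timestamp (seconds) -> ISO format 'YYYY-MM-DD HH:mm:ss'.
def to_iso(ts):
    return datetime.fromtimestamp(int(ts)).strftime("%Y-%m-%d %H:%M:%S")

ORDER_TS, READY_TS = 3, 4  # hypothetical column positions
orders_rdd = orders_rdd.map(
    lambda cols: cols + [to_iso(cols[ORDER_TS]),    # order_datetime
                         to_iso(cols[READY_TS])])   # ready_datetime
```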

1.3 Query/Analysis (10%)

For this part, write relevant RDD operations to answer the following questions.

1. Extract weekday (Monday-Sunday) information from orders and print the total number of orders for each weekday. (5%)

2. Show a list of type_of_order and the average preparation time in minutes (ready_ts - order_ts); a sketch of both queries follows this list. (5%)
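A sketch of both queries with plain RDD operations, reusing orders_rdd and the hypothetical ORDER_TS/READY_TS positions from the previous sketch; TYPE_COL is likewise a placeholder position for type_of_order.

```python
from datetime import datetime

TYPE_COL = 2  # hypothetical position of type_of_order

def weekday(ts):
    return datetime.fromtimestamp(int(ts)).strftime("%A")  # Monday..Sunday

# 1. Total number of orders per weekday.
per_day = (orders_rdd
           .map(lambda cols: (weekday(cols[ORDER_TS]), 1))
           .reduceByKey(lambda a, b: a + b))
print(per_day.collect())

# 2. Average preparation time in minutes per type_of_order:
#    accumulate (sum, count) per key, then divide.
avg_prep = (orders_rdd
            .map(lambda cols: (cols[TYPE_COL],
                 (int(cols[READY_TS]) - int(cols[ORDER_TS])) / 60.0))
            .aggregateByKey((0.0, 0),
                lambda acc, v: (acc[0] + v, acc[1] + 1),
                lambda a, b: (a[0] + b[0], a[1] + b[1]))
            .mapValues(lambda s: s[0] / s[1]))
print(avg_prep.collect())
```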

Part 2. Working with DataFrames (45%)

In this section, you need to load the given datasets into PySpark DataFrames and use DataFrame functions to answer the queries.

2.1 Data Preparation and Loading (5%)

1. Load the CSV files into separate DataFrames. When you create your DataFrames, please refer to the metadata file and think about the appropriate data type for each column (see the sketch after this subsection).

2. Display the schema of the DataFrames.

When the dataset is large, do you need all the columns? How can you optimize memory usage? Do you need a customized data partitioning strategy? (Note: think about these questions, but you do not need to answer them.)
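A sketch of schema-based loading; the column names and types below are placeholders, so substitute the real ones from the appendix metadata.

```python
from pyspark.sql.types import (StructType, StructField,
                               StringType, LongType)

# Placeholder schema; replace names/types with the metadata in the appendix.
order_schema = StructType([
    StructField("order_id", StringType(), True),
    StructField("type_of_order", StringType(), True),
    StructField("order_ts", LongType(), True),
    StructField("ready_ts", LongType(), True),
    # ... remaining columns per the metadata
])

orders_df = spark.read.csv("delivery_order.csv",
                           header=True, schema=order_schema)
orders_df.printSchema()
```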

2.2 Query/Analysis (40%)

Implement the following queries using DataFrames. You need to be able to perform operations like transforming, filtering, sorting, joining, and grouping using the functions provided by the DataFrame API.

1. Write a function to encode/transform weather conditions to integers and drop the original string column. You can decide your own encoding scheme (e.g., Sunny = 0, Cloudy = 1, Fog = 2, etc.); a sketch follows this list. (5%)

2. Calculate the number of orders for each hour. Show the results in a table and plot a bar chart. (5%)

3. Join the delivery_order DataFrame with the geolocation DataFrame, calculate the distance between each restaurant and its delivery location, and store the distance in a new column named delivery_distance. (Hint: you may need to install an additional library like GeoPandas to calculate the distance between two points, or compute it directly with a haversine formula as in the sketch after this list.) (5%)

4.  Using the data from 3, find the top 10 drivers travelling the longest distance. (5%)

5. For each type of order, plot a histogram of meal preparation time. The plot can be done with multiple legends or sub-plots. (Note: you can decide your bin size.) (10%)


6. (Open Question) Explore the dataset and use a delivery person's rating as a performance indicator. Is a lower rating usually correlated with a longer delivery time? What might be the contributing factors behind drivers' low ratings? Please include one plot and a discussion based on your observations (no word limit, but please keep it concise). (10%)
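For queries 1 and 3 above, here is a hedged sketch in the DataFrame API. The column names (weather, restaurant_lat, restaurant_lon, delivery_lat, delivery_lon), the join key order_id, and the geo_df variable (the geolocation DataFrame from 2.1) are all assumptions to be mapped onto the real metadata; the haversine formula avoids the need for an extra library such as GeoPandas.

```python
from pyspark.sql import functions as F

# Query 1: encode weather conditions as integers, then drop the string.
weather_codes = {"Sunny": 0, "Cloudy": 1, "Fog": 2}  # extend as needed
mapping = F.create_map(
    [F.lit(x) for kv in weather_codes.items() for x in kv])
orders_df = (orders_df
             .withColumn("weather_code", mapping[F.col("weather")])
             .drop("weather"))

# Query 3: join, then compute the haversine distance (km) in a new column.
joined_df = orders_df.join(geo_df, "order_id")  # assumed join key
lat1, lon1 = F.radians("restaurant_lat"), F.radians("restaurant_lon")
lat2, lon2 = F.radians("delivery_lat"), F.radians("delivery_lon")
a = (F.sin((lat2 - lat1) / 2) ** 2
     + F.cos(lat1) * F.cos(lat2) * F.sin((lon2 - lon1) / 2) ** 2)
joined_df = joined_df.withColumn(
    "delivery_distance", 2 * 6371.0 * F.asin(F.sqrt(a)))  # Earth radius km
```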

Part 3: RDDs vs DataFrames vs Spark SQL (25%)

Implement the following queries using RDDs, DataFrames, and Spark SQL separately. Log the time taken for each query in each approach using the "%%time" built-in magic command in Jupyter Notebook, and discuss the performance difference between these three approaches.

(Complex Query) Calculate the time taken on the road, defined as the total time taken minus the restaurant's order preparation time, i.e., total time - (ready_ts - order_ts). For each road_condition, using a 10-minute bucket size for time on the road (e.g., 0-10, 10-20, 20-30, etc.), show the percentage of orders in each bucket.

(Note: you can reuse the loaded data/variables from Parts 1 and 2.)

(Hint: you may create intermediate RDDs/DataFrames for this query; a DataFrame sketch follows.)
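A DataFrame sketch of the bucket query as a starting point; delivered_ts and road_condition are assumed column names, and %%time must be the first line of the Jupyter cell to log its wall time.

```python
%%time
from pyspark.sql import functions as F
from pyspark.sql.window import Window

# Time on the road in minutes: total time minus preparation time
# reduces to delivered_ts - ready_ts (assumed epoch seconds).
on_road = orders_df.withColumn(
    "road_minutes", (F.col("delivered_ts") - F.col("ready_ts")) / 60.0)

# 10-minute buckets: 0-10 -> 0, 10-20 -> 10, 20-30 -> 20, ...
buckets = on_road.withColumn(
    "bucket", (F.floor(F.col("road_minutes") / 10) * 10).cast("int"))

# Percentage of each bucket within every road_condition.
w = Window.partitionBy("road_condition")
(buckets.groupBy("road_condition", "bucket").count()
        .withColumn("pct", F.round(100.0 * F.col("count")
                                   / F.sum("count").over(w), 2))
        .orderBy("road_condition", "bucket")
        .show())
```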

1) Implement the above query using RDDs, DataFrames, and SQL separately and print the results. (Note: the three different approaches should produce the same results.) (15%)

2) Which one is the easiest to implement, in your opinion? Log the time taken for each query and observe the query execution times: among RDD, DataFrame, and Spark SQL, which is the fastest, and why? Please include proper references. (Maximum 500 words.) (10%)

Submission

You should submit your final version of the assignment solution online via Moodle. You must submit the following files:

- Your Jupyter notebook file (e.g., A1_authcate.ipynb).

- A PDF file saved from the Jupyter notebook with all output, following the naming format A1_authcate.pdf.
