Homework Overview
Vast amounts of digital data are generated each day, but raw data is often not immediately “usable”. Instead, we are interested in the information content of the data such as what patterns are captured? This assignment covers useful tools for acquiring, cleaning, storing, and visualizing datasets. In questions 1 & 2, we’ll perform a simple end-to-end analysis using data from The Movie Database (TMDb). We will collect movie data via API, store the data in csv files, and analyze data using SQL queries. For Q3, we will complete a D3 warmup to prepare our students for visualization questions in HW2. Q4 & 5 will provide an opportunity to explore other industry tools used to acquire, store, and clean datasets.
The maximum possible score for this homework is 100 points.
Important Notes
a. Every assignment has a generous 48-hour grace period, allowing students to address unexpected minor issues without facing penalties. You may use it without asking.
b. Before the grace period expires, you may resubmit as many times as you need.
c. TA assistance is not guaranteed during the grace period.
d. Submissions during the grace period will display as “late” but will not incur a penalty.
e. We will not accept any submissions after the grace period ends.
C. You may discuss ideas with other students at the "whiteboard" level (e.g., how cross-validation works, use HashMap instead of an array) and review any relevant materials online. However, each student must write up and submit their own answers.
D. All incidents of suspected dishonesty, plagiarism, or violations of the Georgia Tech Honor Code will be subject to the institute’s Academic Integrity procedures, directly handled by the Office of Student Integrity (OSI). Consequences can be severe, e.g., academic probation or dismissal, a 0 grade for assignments concerned, and prohibition from withdrawing from the class.
Submission Notes
A. All questions are graded on the Gradescope platform, accessible through Canvas.
B. We will not accept submissions anywhere else outside of Gradescope.
C. Submit all required files as specified in each question. Make sure they are named correctly.
D. You may upload your code periodically to Gradescope to obtain feedback on your code. There are no hidden test cases. The score you see on Gradescope is what you will receive.
a. The code is free of syntax errors (verified by running it locally)
b. All methods have been implemented
c. The correct file was submitted with the correct name
d. No extra packages or files were imported
Do I need to use the specific version of the software listed?
Q1 [40 points] Collect data from TMDb to build a co-actor network
Leveraging the power of APIs for data acquisition, you will build a co-actor network of highly rated movies using information from The Movie Database (TMDb). Through data collection and analysis, you will create a graph showing the relationships between actors based on their highly rated movies. This will not only highlight the practical application of APIs in collecting rich datasets, but also introduce the importance of graphs in understanding and visualizing the real-world dataset.
Technology:
• Python 3.10.x
• TMDb API version 3
Allowed Libraries:
• The Python Standard Library and Requests only.
Max runtime:
• 10 minutes. Submissions exceeding this will receive zero credit.
Deliverables:
• Q1.py: The completed Python file
• nodes.csv: The csv file containing nodes
• edges.csv: The csv file containing edges
Follow the instructions found in Q1.py to complete the Graph class, the TMDbAPIUtils class, and the one global function. The Graph class will serve as a re-usable way to represent and write out your collected graph data. The TMDbAPIUtils class will be used to work with the TMDb API for data retrieval.
Tasks and point breakdown
a. The graph is undirected, thus {a, b} and {b, a} refer to the same undirected edge in the graph; keep only either {a, b} or {b, a} in the Graph object. A node’s degree is the number of (undirected) edges incident on it. In- and out-degrees are not defined for undirected graphs.
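Since {a, b} and {b, a} are the same undirected edge, one common approach is to normalize each pair to a canonical order before storing it. The sketch below is only an illustration of that idea; the class and method names in Q1.py differ, and this is not the required implementation.

```python
# Sketch: normalizing undirected edges so {a, b} and {b, a} collapse to one entry.
# UndirectedEdgeSet, add_edge, and degree are illustrative names, not the Q1.py API.
class UndirectedEdgeSet:
    def __init__(self):
        self.edges = set()

    def add_edge(self, a, b):
        if a == b:
            return  # ignore self-pairs
        self.edges.add(tuple(sorted((a, b))))  # canonical order dedupes mirror pairs

    def degree(self, node):
        # degree = number of undirected edges incident on the node
        return sum(node in e for e in self.edges)

g = UndirectedEdgeSet()
g.add_edge("102", "101")
g.add_edge("101", "102")  # mirror of the edge above; the set keeps only one
g.add_edge("101", "103")
```

After these three calls the set holds exactly two edges, and node "101" has degree 2.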
a. Create a TMDb account and follow the instructions on this document to obtain an API key.
b. Be sure to use the key, not the token. The key is the shorter of the two.
c. Refer to the TMDb API Documentation as you work on this question.
a. If an actor's name has comma characters (“,”), remove those characters before writing that name into the CSV files.
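Because the output files are comma-separated, stripping commas from names before writing them avoids malformed rows. A minimal sketch of the idea, using the standard csv module (the sample names, header, and in-memory buffer are only illustrative; real data comes from the TMDb API and is written to nodes.csv):

```python
import csv
import io

# Illustrative rows; the real data comes from the TMDb API.
nodes = [(101, "Jackson, Samuel L."), (102, "Hanks, Tom")]

buf = io.StringIO()  # stands in for open("nodes.csv", "w", newline="")
writer = csv.writer(buf)
writer.writerow(["id", "name"])
for node_id, name in nodes:
    writer.writerow([node_id, name.replace(",", "")])  # drop comma characters from the name

out = buf.getvalue()
```

The written name becomes "Jackson Samuel L." with the comma removed but the space intact.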
Q2 [35 points] SQLite
You will modify the given Q2.py file by adding SQL statements to it. We suggest testing your SQL locally on your computer using interactive tools to speed up testing and debugging, such as DB Browser for SQLite.
Technology:
• SQLite release 3.37.2
• Python 3.10.x
Allowed Libraries:
• Do not modify import statements. Everything you need to complete this question has been imported for you. Do not use other libraries for this question.
Max runtime:
• 10 minutes. Submissions exceeding this will receive zero credit.
Deliverables:
• Q2.py: Modified file containing all the SQL statements you have used to answer parts a - h in the proper sequence.
- If the final output asks for a decimal column, format it to two places using printf(). Do NOT use the ROUND() function, as in rare cases, it works differently on different platforms. If you need to sort that column, be sure you sort it using the actual decimal value and not the string returned by printf.
- A sample class has been provided to show example SQL statements; you can turn off this output by changing the global variable SHOW from True to False.
- In this question, you must only use INNER JOIN when performing a join between two tables, except for part g. Other types of joins may result in incorrect results.
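The printf/ordering note above can be illustrated with a toy table (a sketch only; the table and column names here are made up, not the homework schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (name TEXT, val REAL)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [("a", 9.5), ("b", 10.25)])

# Format with printf for display, but ORDER BY the real numeric value:
# sorting descending by the printf STRING would wrongly put '9.50' before '10.25'.
rows = conn.execute(
    "SELECT name, printf('%.2f', val) FROM t ORDER BY val DESC"
).fetchall()
# rows == [('b', '10.25'), ('a', '9.50')]
```

Note that printf('%.2f', 9.5) pads to '9.50', giving the two-decimal display format the question asks for.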
Tasks and point breakdown
i. movies
1. id (integer)
2. title (text)
3. score (real)
ii. movie_cast
1. movie_id (integer)
2. cast_id (integer)
3. cast_name (text)
4. birthday (text)
5. popularity (real)
i. Write Python code that imports the .csv files into the individual tables. This will include looping though the file and using the ‘INSERT INTO’ SQL command. Make sure you use
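The loop-and-INSERT pattern can be sketched generically as below. This uses an in-memory database and inline data for illustration; the assignment reads from the provided .csv files instead, and whether those files carry a header row is determined by the data, so adjust accordingly.

```python
import csv
import io
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE movies (id INTEGER, title TEXT, score REAL)")

# io.StringIO stands in for open("movies.csv", newline=""); the rows are made up.
csv_text = "1,Movie A,7.7\n2,Movie B,8.1\n"
for row in csv.reader(io.StringIO(csv_text)):
    # Parameterized 'INSERT INTO' per row; SQLite's type affinity converts the strings.
    conn.execute("INSERT INTO movies VALUES (?, ?, ?)", row)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM movies").fetchone()[0]
```
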
c. [5 points] Vertical Database Partitioning. Database partitioning is an important technique that divides large tables into smaller tables, which may help speed up queries. Create a new table cast_bio from the movie_cast table. Be sure that the values are unique when inserting into the new cast_bio table. Read this page for an example of vertical database partitioning.
i. cast_bio
1. cast_id (integer)
2. cast_name (text)
3. birthday (text)
4. popularity (real)
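The unique-rows requirement can be met with INSERT INTO ... SELECT DISTINCT. A toy sketch (the schema follows the column lists above, but the data is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE movie_cast (movie_id INTEGER, cast_id INTEGER,"
    " cast_name TEXT, birthday TEXT, popularity REAL)"
)
conn.executemany(
    "INSERT INTO movie_cast VALUES (?, ?, ?, ?, ?)",
    [
        (1, 7, "A", "1970-01-01", 3.5),
        (2, 7, "A", "1970-01-01", 3.5),  # same bio fields as the row above
        (3, 8, "B", "1980-02-02", 1.2),
    ],
)
conn.execute("CREATE TABLE cast_bio (cast_id INTEGER, cast_name TEXT, birthday TEXT, popularity REAL)")
# DISTINCT collapses the duplicate bio rows for cast_id 7 into one
conn.execute("INSERT INTO cast_bio SELECT DISTINCT cast_id, cast_name, birthday, popularity FROM movie_cast")
n = conn.execute("SELECT COUNT(*) FROM cast_bio").fetchone()[0]
```

cast_bio ends up with two rows even though movie_cast has three.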
a. movie_index for the id column in movies table
b. cast_index for the cast_id column in movie_cast table
c. cast_bio_index for the cast_id column in cast_bio table
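Index creation is a single CREATE INDEX statement per index. A sketch against the movies table from the schema above, with a sanity check via sqlite_master:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE movies (id INTEGER, title TEXT, score REAL)")
conn.execute("CREATE INDEX movie_index ON movies (id)")

# SQLite records indexes in sqlite_master, which is handy for verifying your work.
found = conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'index' AND name = 'movie_index'"
).fetchone()
```

The other two indexes follow the same pattern on their respective tables and columns.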
a. Output format and example value:
7.70
a. Output format and example row values (cast_name,appearance_count):
Harrison Ford,2
a. Output format and example values (movie_title,score,cast_count):
Star Wars: Holiday Special,75.01,12
Games,58.49,33
6. [4 points] Get high scoring actors. Find the top ten cast members who have the highest average movie scores.
a. Exclude movies with score < 25 before calculating average_score.
b. Include only cast members who have appeared in three or more movies with score >= 25.
i. Output format and example value (cast_id,cast_name,average_score):
8822,Julia Roberts,53.00
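The exclude-then-aggregate shape of this query (filter rows in WHERE before averaging, apply the appearance threshold in HAVING) looks like this on a toy table. The table and data here are illustrative only, not the Q2 schema or the full required query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE appearances (cast_id INTEGER, cast_name TEXT, score REAL)")
conn.executemany("INSERT INTO appearances VALUES (?, ?, ?)", [
    (1, "A", 50), (1, "A", 60), (1, "A", 70),  # three movies with score >= 25
    (2, "B", 90), (2, "B", 95),                # only two qualifying movies
    (1, "A", 10),                              # excluded by WHERE: score < 25
])
rows = conn.execute("""
    SELECT cast_id, cast_name, printf('%.2f', AVG(score)) AS average_score
    FROM appearances
    WHERE score >= 25              -- drop low-scoring movies before averaging
    GROUP BY cast_id, cast_name
    HAVING COUNT(*) >= 3           -- keep casts with three or more qualifying movies
    ORDER BY AVG(score) DESC       -- order by the numeric value, not the printf string
    LIMIT 10
""").fetchall()
```

Only cast 1 survives the HAVING clause, with an average of (50 + 60 + 70) / 3 = 60.00.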
7. [2 points] Creating views. Create a view (virtual table) called good_collaboration that lists pairs of actors who have had a good collaboration as defined here. Each row in the view describes one pair of actors who appeared in at least 2 movies together AND the average score of these movies is >= 40.
The view should have the format:
good_collaboration(cast_member_id1,cast_member_id2,movie_count,average_movie_score)
For symmetrical or mirror pairs, only keep the row in which cast_member_id1 has a lower numeric value. For example, for ID pairs (1, 2) and (2, 1), keep the row with IDs (1, 2). There should not be any “self-pair” where cast_member_id1 is the same as cast_member_id2. Remember that creating a view will not produce any output, so you should test your view with a few simple select statements during development. One such test has already been added to the code as part of the auto-grading. NOTE: Do not submit any code that creates a ‘TEMP’ or ‘TEMPORARY’ view that you may have used for testing.
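The mirror-pair rule is typically enforced with an id1 < id2 condition in a self-join, which also rules out self-pairs in the same stroke. A toy illustration of just that pattern (not the assignment's full view, which also needs the movie-count and average-score conditions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE movie_cast (movie_id INTEGER, cast_id INTEGER)")
conn.executemany("INSERT INTO movie_cast VALUES (?, ?)",
                 [(1, 10), (1, 20), (2, 10), (2, 20)])
conn.execute("""
    CREATE VIEW pairs AS
    SELECT a.cast_id AS cast_member_id1,
           b.cast_id AS cast_member_id2,
           COUNT(*)  AS movie_count
    FROM movie_cast a
    INNER JOIN movie_cast b
        ON a.movie_id = b.movie_id
       AND a.cast_id < b.cast_id   -- keeps (10, 20); drops (20, 10) and self-pairs
    GROUP BY a.cast_id, b.cast_id
""")
rows = conn.execute("SELECT * FROM pairs").fetchall()
```

The two actors appear together in both movies, so the view contains the single row (10, 20, 2).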
8. [4 points] Find the best collaborators. Get the 5 cast members with the highest average scores from the good_collaboration view, and call this score the collaboration_score. This score is the average of the average_movie_score corresponding to each cast member, including actors in cast_member_id1 as well as cast_member_id2.
a. Order your output by collaboration_score in descending order, then by cast_name alphabetically.
b. Output format and example values (cast_id,cast_name,collaboration_score):
2,Mark Hamil,99.32
1920,Winoa Ryder,88.32
a. [1 point] Import movie overview data from the movie_overview.csv into a new FTS table called movie_overview with the schema:
movie_overview
id (integer)
overview (text)
• Go to the sqlite3 downloads page: https://www.sqlite.org/download.html
• Download the dll file for your system
• Navigate to your Python packages folder, e.g., C:\Users\... ...\Anaconda3\pkgs\sqlite-3.29.0-he774522_0\Library\bin
• Drop the downloaded .dll file into that bin folder.
• In your IDE, import sqlite3 again; FTS should now be enabled.
b. [1 point] Count the number of movies whose overview field contains the word ‘fight’. Matches are not case sensitive. Match full words, not word parts/sub-strings.
i. Example:
Allowed: ‘FIGHT’, ‘Fight’, ‘fight’, ‘fight.’
Disallowed: ‘gunfight’, ‘fighting’, etc.
ii. Output format and example value:
12
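Whole-word, case-insensitive matching is exactly what an FTS MATCH on a single term gives you with the default tokenizer, which case-folds ASCII and splits on punctuation. A toy sketch using FTS4 (your sqlite3 build must have FTS compiled in; the sample overviews are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE movie_overview USING fts4(id, overview)")
conn.executemany("INSERT INTO movie_overview VALUES (?, ?)", [
    (1, "A FIGHT broke out."),   # matches: matching is case-insensitive
    (2, "The fight."),           # matches: trailing punctuation is not part of the token
    (3, "He kept fighting."),    # no match: a different word, not a sub-string match
    (4, "A gunfight at noon."),  # no match: 'fight' is only a sub-string here
])
count = conn.execute(
    "SELECT COUNT(*) FROM movie_overview WHERE overview MATCH 'fight'"
).fetchone()[0]
```

Only the first two rows match, so count is 2.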
i. Example:
Allowed: ‘In Space there was a program’, ‘In this space program’
Disallowed: ‘In space you are not subjected to the laws of gravity. A program.’
ii. Output format and example value:
6
Q3 [15 points] D3 Warmup - Visualizing Wildlife Trafficking by Species
Read chapters 4-8 of Scott Murray’s Interactive Data Visualization for the Web, 2nd edition (sign in using your GT account, e.g., [email protected]). This reading provides an important foundation you will need for Homework 2. The question and autograder have been developed and tested for D3 version 5 (v5), while the book covers v4. What you learn from the book is transferable to v5, as v5 introduced few breaking changes. We also suggest briefly reviewing chapters 1-3 for background information on web development.
Technology:
• D3 Version 5 (included in the lib folder)
• Chrome 97.0 (or newer): the browser for grading your code
• Python HTTP server (for local testing)
Allowed Libraries:
• The D3 library is provided to you in the lib folder. You must NOT use any D3 libraries (d3*.js) other than the ones provided.
Deliverables:
• Q3.html: Modified file containing all html, javascript, and any css code required to produce the bar plot. Do not include the D3 libraries or the q3.csv dataset.
IMPORTANT NOTES:
• We have provided sections of skeleton code and comments to help you complete the implementation. While you do not need to remove them, you need to write additional code to make things work.
• All d3*.js files are provided in the lib folder and referenced using relative paths in your html file. For example, since the file “Q3/Q3.html” uses d3, its header contains: <script type="text/javascript" src="lib/d3/d3.min.js"></script>. It is incorrect to use an absolute path such as: <script type="text/javascript" src="http://d3js.org/d3.v5.min.js"></script>. The 3 files that are referenced are:
• In your html / js code, use a relative path to read the dataset file. For example, since Q3 requires reading data from the q3.csv file, the path must be “q3.csv” and NOT an absolute path such as “C:/Users/polo/HW1-skeleton/Q3/q3.csv”. Absolute paths are specific locations that exist only on your computer, which means your code will NOT run on our machines when we grade, and you will lose points.
Tasks and point breakdown
Q3.html: When run in a browser, should display a horizontal bar plot with the following specifications:
- [3.5 points] The bar plot must display one bar for each of the five most trafficked species by count. Each bar’s length corresponds to the number of wildlife trafficking incidents involving that species between 2015 and 2023, represented by the ‘count’ column in our dataset.
- [1 point] The bars must have the same fixed thickness, and there must be some space between the bars, so they do not overlap.
- [3 points] The plot must have visible X and Y axes that scale according to the generated bars. That is, the axes are driven by the data that they are representing. They must not be hard-coded. The x-axis must be a <g> element having the id: “x_axis” and the y-axis must be a <g> element having the id: “y_axis”.
- [2 points] Set x-axis label to ‘Count’ and y-axis label to ‘Species’. The x-axis label must be a <text> element having the id: “x_axis_label” and the y-axis label must be a <text> element having the id: “y_axis_label”.
- [2 points] Use a linear scale for the X-axis to represent the count (recommended function: d3.scaleLinear()). Only display ticks and labels at every 500 interval. The X-axis must be displayed below the plot.
- [2 points] Use a categorical scale for the Y-axis to represent the species names (recommended function: d3.scaleBand()). Order the species names from greatest to least on ‘Count’ and limit the output to the top 5 species. The Y-axis must be displayed to the left of the plot.
- [1 point] Set the HTML title tag and display a title for the plot. Those two titles are independent of each other and need to be set separately. Set the HTML title tag (i.e., <title> Wildlife Trafficking Incidents per Species (2015 to 2023)</title>). Position the title “Wildlife Trafficking Incidents per Species (2015 to 2023)” above the bar plot. The title must be a <text> element having the id: “title”.
- [0.25 points] Add your GT username (usually includes a mix of letters and numbers) to the area beneath the bottom-right of the plot. The GT username must be a <text> element having the id: “credit”
- [0.25 points] Fill each bar with a unique color. We recommend using a colorblind-safe palette.
NOTE: Gradescope will render your plot using Chrome and present you with a Dropbox link to view the screenshot of your plot as the autograder sees it. This visual feedback helps you adjust and identify errors, e.g., a blank plot indicates a serious error. Your design does not need to replicate the solution plot. However, the autograder requires the following DOM structure (including using correct IDs for elements) and sizing attributes to know how your chart is built.
width: 900
height: 370
+-- <g id="container"> containing Q3.a plot elements
    +-- <g id="bars"> containing bars
    +-- <g id="x_axis"> x-axis
    |   +-- (x-axis elements)
    +-- <text id="x_axis_label"> x-axis label
    +-- <g id="y_axis"> y-axis
    |   +-- (y-axis elements)
    +-- <text id="y_axis_label"> y-axis label
    +-- <text id="credit"> GTUsername
    +-- <text id="title"> chart title
Q4 [5 points] OpenRefine
Technology:
• OpenRefine 3.6.2
Deliverables:
• properties_clean.csv: Export the final table as a csv file.
• changes.json: A list of the changes made to the file, in json format. Go to the 'Undo/Redo' tab → 'Extract' → 'Export'. This downloads 'history.json'; rename it to 'changes.json'.
• Q4Observations.txt: A text file with answers to parts b.i, b.ii, b.iii, b.iv, b.v, b.vi. Provide each answer on a new line in the output format specified, followed by one blank line.
Tasks and point breakdown
a. Run OpenRefine and point your browser at http://127.0.0.1:3333.
b. We use a products dataset from Mercari, derived from a Kaggle competition (Mercari Price Suggestion Challenge). If you are interested in the details, visit the data description page. We have sampled a subset of the dataset, provided as "properties.csv".
c. Choose "Create Project" → This Computer → properties.csv. Click "Next".
d. You will now see a preview of the data. Click "Create Project" at the upper right corner.
2. [5 points] Clean/Refine the Data
a. [0.5 point] Select the category_name column and choose ‘Facet by Blank’ (Facet → Customized Facets → Facet by blank) to filter out the records that have blank values in this column. Provide the number of rows that return True in Q4Observations.txt. Exclude these rows.
NOTE: OpenRefine maintains a log of all changes. You can undo changes by the "Undo/Redo" button at the upper left corner. You must follow all the steps in order and submit the final cleaned data file properties_clean.csv. The changes made by this step need to be present in the final submission. If they are not done at the beginning, the final number of rows can be incorrect and raise errors by the autograder.
b. [1 point] Split the column category_name into multiple columns without removing the original column. For example, a row with “Kids/Toys/Dolls & Accessories” in the category_name column would be split across the newly created columns as “Kids”, “Toys” and “Dolls & Accessories”. Use the existing functionality in OpenRefine that creates multiple columns from an existing column based on a separator (i.e., in this case ‘/’) and does not remove the original category_name column. Provide the number of new columns that are created by this operation, excluding the original category_name column.
c. [0.5 points] Select the column name and apply the Text Facet (Facet → Text Facet). Cluster the values (Edit Cells → Cluster and Edit …); this opens a window where you can choose different “methods” and “keying functions” to use while clustering. Choose the keying function that produces the smallest number of clusters under the “Key Collision” method. Click ‘Select All’ and ‘Merge Selected & Re-Cluster’.
d. [1 point] Replace the null values in the brand_name column with the text “Unknown” (Edit Cells → Transform). Provide the expression used.
e. [0.5 point] Create a new column high_priced with the values 0 or 1 based on the “price” column with the following conditions: if the price is greater than 90, high_priced should be set as 1, else 0. Provide the GREL expression used to perform this.
Q5 [5 points] Introduction to Python Flask
Technology:
• Python 3.10.x
• Flask
Allowed Libraries:
• Python standard libraries
• Libraries already imported in Q5.py
Deliverables:
• Q5.py: Completed Python file with your changes