In this assessment you are required to answer five questions using Spark. For each question you will
need to write code which uses the appropriate transformations and actions.
Our main input file for this assessment is called DataCoSupplyChainDataset.csv, a subset of a dataset
from Kaggle, which contains supply chains used by a company called DataCo Global. There is a second
file provided, called DescriptionDataCoSupplyChain.csv, which describes the columns in the main file.
You should use the following template file to write your code: test3_solutions.py. See the video
instructions provided with the assessment instructions for an example of how to use the template.
Q1. Load the data, convert to dataframe and apply appropriate column names and variable types.
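A minimal sketch of the kind of load Q1 asks for, assuming the CSV has a header row; the `clean_name` helper and the use of `inferSchema` are illustrative choices, not requirements of the template.

```python
# Sketch for Q1, assuming a header row in the CSV. The renaming helper
# is hypothetical -- adapt it to the column names you actually want.
def clean_name(name):
    """Normalise a raw CSV header into a Spark-friendly column name."""
    return name.strip().replace(" ", "_")

# With a SparkSession called spark, the load might look like:
# df = (spark.read
#         .option("header", "true")       # first row holds column names
#         .option("inferSchema", "true")  # guess int/double/string types
#         .csv("DataCoSupplyChainDataset.csv"))
# df = df.toDF(*[clean_name(c) for c in df.columns])
```

Inferred types should still be checked against DescriptionDataCoSupplyChain.csv and cast explicitly where the guess is wrong.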
Q2. Determine what proportion of all transactions is attributed to each customer segment in the dataset,
e.g. Consumer = x%, Corporate = y%, etc.
This question uses the Customer Segment field.
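One way to approach Q2 is to count transactions per segment and divide by the overall total; the sketch below assumes RDD-style rows with a `Customer_Segment` attribute, which is an assumption about how you named the columns in Q1.

```python
# Sketch for Q2: count per segment, then convert each count to a
# percentage of the total. The helper mirrors the final mapValues step.
def to_percent(count, total):
    """Share of the total, expressed as a percentage."""
    return 100.0 * count / total

# Spark side (sketch, column name assumed):
# seg_counts = rdd.map(lambda r: (r.Customer_Segment, 1)) \
#                 .reduceByKey(lambda a, b: a + b)
# total = rdd.count()
# proportions = seg_counts.mapValues(lambda c: to_percent(c, total))
```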
Q3. Determine which three products had the highest total sales.
This question uses the Order Item Total and Product Name fields.
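For Q3 the usual shape is: sum Order Item Total per Product Name, then keep the three largest sums. The `top_n` helper below mirrors what Spark's `takeOrdered` does with a key function; the column names in the comment are assumptions.

```python
import heapq

# Sketch for Q3: aggregate per product, then take the three largest.
def top_n(pairs, n=3):
    """Return the n (name, total) pairs with the largest totals."""
    return heapq.nlargest(n, pairs, key=lambda kv: kv[1])

# Spark side (sketch, column names assumed):
# totals = rdd.map(lambda r: (r.Product_Name, float(r.Order_Item_Total))) \
#             .reduceByKey(lambda a, b: a + b)
# top3 = totals.takeOrdered(3, key=lambda kv: -kv[1])
```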
Q4. For each transaction type, determine the average item cost.
This question uses the Type, Order Item Product Price and Order Item Quantity fields.
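Following the hint about avoiding the mean action, Q4 can carry a (sum, count) pair per transaction type and divide only at the end. The tuple layout below is an assumption about how the rows were parsed, not the template's own structure.

```python
# Sketch for Q4: per-type average item cost without the mean action.
def to_sum_count(ttype, price, qty):
    """One (type, (line total, item count)) pair for a transaction."""
    return (ttype, (price * qty, qty))

def add_pairs(a, b):
    """Combine two (sum, count) pairs element-wise."""
    return (a[0] + b[0], a[1] + b[1])

# Spark side (sketch, column names assumed):
# avg_cost = rdd.map(lambda r: to_sum_count(r.Type,
#                                           r.Order_Item_Product_Price,
#                                           r.Order_Item_Quantity)) \
#               .reduceByKey(add_pairs) \
#               .mapValues(lambda sc: sc[0] / sc[1])
```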
Q5. What is the first name of the most regular customer in Puerto Rico? (Repeat transactions by the
same customer should not count as separate customers.)
This question uses the Customer Country, Customer Fname and Customer Id fields.
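One reading of Q5 is: count transactions per Customer Id (so repeat purchases accumulate under one customer rather than inflating the customer count), then take the id with the most transactions and report its first name. The in-memory helper below mirrors the `reduceByKey` plus `max(key=...)` chain; all names are assumptions.

```python
from collections import Counter

# Sketch for Q5: most frequent customer in a filtered country.
def most_regular(transactions):
    """transactions: list of (customer_id, fname) pairs for one country.

    Returns the first name of the customer with the most transactions.
    """
    counts = Counter(cid for cid, _ in transactions)
    best_id, _ = max(counts.items(), key=lambda kv: kv[1])
    return next(fname for cid, fname in transactions if cid == best_id)

# Spark side (sketch, column names assumed):
# pr = rdd.filter(lambda r: r.Customer_Country == "Puerto Rico")
# counts = pr.map(lambda r: ((r.Customer_Id, r.Customer_Fname), 1)) \
#            .reduceByKey(lambda a, b: a + b)
# top = counts.max(key=lambda kv: kv[1])   # ((id, fname), n_transactions)
```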
• Q4 will probably be easier if you don’t use the mean action.
• In Q5 you can tell the max action which field it should compare on, i.e. max(key=lambda x: ...)