
Python Data Analysis | AI, Ethics, and Society Homework Project #3


In this assignment, you’ll continue the process of exploring relationships in data. You’ll accomplish this
task by computing some basic inferential statistical measures on a natural language-based dataset.

Natural language processing is concerned with the ability to process and analyze large amounts of natural
language data, whether for automated sentence completion in emails, conversational agents and chatbots,
or AI tools to help journalists. In this assignment, we will work with data from a classifier built to identify
toxicity in comments from Wikipedia Talk Pages. The model is built from a dataset of 127,820 Talk Page
comments, each labeled by human raters as toxic or non-toxic. A toxic comment is defined as a “rude,
disrespectful, or unreasonable comment that is likely to make you leave a discussion.”

Step 1:

• Download the modified dataset available on CANVAS – toxity_per_attribute.csv:

• <Wiki_ID>: a unique identifier associated with each Wikipedia comment
• <TOXICITY>: a toxicity label, 1 if the comment was considered toxic and 0 if the
comment was considered neutral or healthy
• <subgroup> columns: one column per human attribute; True if the comment mentioned this
identity.
• Due to the sensitivity of the material, the comment text was removed to construct the modified
dataset. The original data source can be found at:
https://github.com/conversationai/unintended-ml-bias-analysis/tree/master/data
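As a starting point, the file can be loaded with pandas. The sketch below is illustrative: in the real assignment you would read toxity_per_attribute.csv directly, but here a tiny inline sample with hypothetical rows and subgroup columns stands in for it so the snippet is self-contained.

```python
import io
import pandas as pd

# In the real assignment you would load the CANVAS file directly:
#   df = pd.read_csv("toxity_per_attribute.csv")
# A tiny inline sample (hypothetical rows) stands in for it here.
sample = io.StringIO(
    "Wiki_ID,TOXICITY,male,female,christian,muslim\n"
    "101,1,True,False,False,True\n"
    "102,0,False,True,True,False\n"
    "103,0,False,False,False,False\n"
)
df = pd.read_csv(sample)

# Every column other than Wiki_ID and TOXICITY is a boolean
# subgroup (identity) column.
subgroup_cols = [c for c in df.columns if c not in ("Wiki_ID", "TOXICITY")]
print(subgroup_cols)  # → ['male', 'female', 'christian', 'muslim']
```

pandas parses the literal `True`/`False` strings into a boolean dtype automatically, which is convenient for the filtering and conversion steps later on.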

Step 2:

Identify the protected class categories and the members associated with each protected class category.
• For each protected class category, identify its relevant protected class members (e.g. christian +
muslim + X -> Religion)
• Provide the classification results (i.e. the list of protected class categories and their associated
protected class members)
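One convenient way to record this classification is a dict mapping each category to its member columns. The member lists below are illustrative only; substitute the actual subgroup column names found in the CSV.

```python
# Illustrative classification: category -> member columns.
# Replace these lists with the real subgroup column names in the dataset.
protected_classes = {
    "Gender": ["male", "female", "transgender"],
    "Religion": ["christian", "muslim", "jewish"],
    "Age": ["younger", "older"],
}

# Invert the mapping to verify each member belongs to exactly one category.
member_to_category = {
    member: category
    for category, members in protected_classes.items()
    for member in members
}
```

Keeping the mapping in one place makes the later steps (column conversion and column combination) a simple loop over `protected_classes`.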

Step 3:

• Create a reduced data set by deleting any row whose subgroup columns are all FALSE.
Note: this reduced data set is the one you will use in all subsequent steps.
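The row filter can be expressed in one line with pandas. The frame below is a toy stand-in for the full dataset loaded in Step 1; row 103 mentions no identity, so it is dropped.

```python
import pandas as pd

# Toy frame standing in for the full dataset loaded in Step 1.
df = pd.DataFrame({
    "Wiki_ID": [101, 102, 103],
    "TOXICITY": [1, 0, 0],
    "male": [True, False, False],
    "female": [False, True, False],
})
subgroup_cols = ["male", "female"]

# Keep only rows where at least one subgroup column is True, i.e.
# delete the rows that are FALSE in every subgroup column.
reduced = df[df[subgroup_cols].any(axis=1)].reset_index(drop=True)
print(reduced["Wiki_ID"].tolist())  # → [101, 102]
```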

• Using the reduced data set, define an ordering scheme for each protected class category by
assigning values to each of its protected class members. Convert FALSE to 0 and TRUE to a
unique numerical value for each subgroup member based on a subjective ordering of who you
believe would be least/most impacted by negative toxicity (e.g. for gender identity: FALSE = 0;
male = 1; female = 2; binary = 3; etc.). You may also combine group members and assign
numerical values based on your belief about similarities among the group members (e.g. gender
identity: FALSE = 0; all others = 1; female = 2).
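The FALSE = 0 / TRUE = rank conversion can be done per column. The ranks below follow the gender identity example above and are purely hypothetical; the assignment asks you to choose and justify your own ordering.

```python
import pandas as pd

# Hypothetical ranks for the gender identity example above; the
# ordering is subjective and must be justified in your write-up.
gender_ranks = {"male": 1, "female": 2, "binary": 3}

df = pd.DataFrame({
    "male":   [True, False, False],
    "female": [False, True, False],
    "binary": [False, False, True],
})

# FALSE -> 0, TRUE -> the member's assigned rank.
for col, rank in gender_ranks.items():
    df[col] = df[col].astype(int) * rank
print(df["binary"].tolist())  # → [0, 0, 3]
```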

• Using your assigned numerical values, create a compacted data set by combining the columns
associated with the related protected class members into one column representing the protected
class category (e.g. combine all columns related to Religion into one Religion column).
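With member columns already converted to ranks, combining them needs a rule for comments that mention more than one member of the same category. One hedged convention, used in the sketch below, is to take the maximum rank; a sum or a priority rule would also be defensible as long as you state your choice.

```python
import pandas as pd

# Member columns already converted to ranks (previous step).
ranked = pd.DataFrame({
    "male":   [1, 0, 0],
    "female": [0, 2, 2],
})

# Combine into one category column; here: max rank wins when a
# comment mentions several members of the same category.
compact = pd.DataFrame()
compact["Gender"] = ranked[["male", "female"]].max(axis=1)
print(compact["Gender"].tolist())  # → [1, 2, 2]
```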

• Calculate the correlation between each protected class category and TOXICITY. Provide the
correlation coefficients in table format and identify the strength of each correlation. Select the three
highest correlation coefficients and plot the data for the correlated variables, indicating the
correlation strength. [Note: there may or may not be any strong correlations in this dataset.]

• As guidance, you can use Evans, J. D. (1996). Straightforward Statistics for the Behavioral
Sciences. Brooks/Cole Publishing, which suggests the following interpretation of the absolute
value of the correlation coefficient:

.00-.19 “very weak” correlation
.20-.39 “weak” correlation
.40-.59 “moderate” correlation
.60-.79 “strong” correlation
.80-1.0 “very strong” correlation
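The Evans scale above translates directly into a small helper for labeling each coefficient in your results table:

```python
def correlation_strength(r: float) -> str:
    """Map |r| to the Evans (1996) strength labels listed above."""
    a = abs(r)
    if a < 0.20:
        return "very weak"
    if a < 0.40:
        return "weak"
    if a < 0.60:
        return "moderate"
    if a < 0.80:
        return "strong"
    return "very strong"

print(correlation_strength(-0.45))  # → moderate
```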

Example Output (for illustrative purposes only):

Classification Results – Protected Class Variables:
• Religion: christian, muslim
• Age: younger, older

