Machine Learning Homework 01: Learning Theory and Linear Predictors
Submission Instructions
- For the writing part: please submit only a PDF file named [ComputingID]-hw01.pdf. We recommend using LaTeX; a template is available on the course webpage. If you are not familiar with LaTeX, handwriting your solutions and scanning them to PDF will also work.
- For the coding part: by default, we will use Python. Your submission should be an IPython notebook file named [ComputingID]-hw01.ipynb.
Questions (20 points)
- The Bayes Predictor (4 points) For a binary classification problem, if we know the data distribution D over X × Y with Y = {+1, −1}, we can define the Bayes predictor as
$$f_{\mathcal{D}}(x) = \begin{cases} +1 & \text{if } P[y = +1 \mid x] > \frac{1}{2} \\ -1 & \text{if } P[y = -1 \mid x] > \frac{1}{2} \end{cases} \tag{1}$$
Note that P[y = +1 | x] + P[y = −1 | x] = 1. Please show that this is the optimal predictor. In other words, for any predictor h, we have
$$L_{\mathcal{D}}(f_{\mathcal{D}}) \le L_{\mathcal{D}}(h) \tag{2}$$
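Before attempting the proof, it can help to sanity-check the claim numerically. The sketch below (the toy joint distribution and all names are illustrative, not part of the assignment) enumerates every deterministic predictor on a three-point domain and confirms that none beats the Bayes predictor:

```python
import itertools
import numpy as np

# Toy joint distribution over X = {0, 1, 2} and Y = {+1, -1}; the
# numbers are invented for illustration and sum to 1.
p_pos = np.array([0.10, 0.25, 0.15])  # P(x, y = +1)
p_neg = np.array([0.20, 0.05, 0.25])  # P(x, y = -1)

# Bayes predictor (equation (1)): +1 wherever P[y = +1 | x] > 1/2,
# which here is equivalent to p_pos > p_neg.
bayes = np.where(p_pos > p_neg, 1, -1)
bayes_err = np.where(bayes == 1, p_neg, p_pos).sum()

# Enumerate every deterministic predictor h : X -> {+1, -1}.
for h in itertools.product([1, -1], repeat=3):
    err = np.where(np.array(h) == 1, p_neg, p_pos).sum()
    assert err >= bayes_err - 1e-12  # equation (2): L_D(f_D) <= L_D(h)
print(f"Bayes error = {bayes_err:.2f}; no predictor does better.")
```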
- Selection of Hypothesis Spaces (8 points) In lecture, we discussed how to identify the decision boundary for a mixture of Gaussian distributions. As an exercise, please replace the distribution with the following mixture of Gaussians:
$$\mathcal{D} = \underbrace{\tfrac{1}{2}\,\mathcal{N}(x;\, 0,\, 1)}_{y=-1} + \underbrace{\tfrac{1}{2}\,\mathcal{N}\!\left(x;\, \tfrac{2}{3}\pi,\, 0.5\right)}_{y=+1} \tag{3}$$
Please answer the following questions for the new data distribution:
(a) (2 points) What is the decision boundary $b_{\text{Bayes}}$ of the Bayes predictor? That is, the Bayes predictor can be written as

$$f_{\mathcal{D}}(x) = \begin{cases} +1 & x > b_{\text{Bayes}} \\ -1 & x < b_{\text{Bayes}} \end{cases} \tag{4}$$
(b) (1 point) What is the true error of the Bayes predictor, $L_{\mathcal{D}}(f_{\mathcal{D}})$?
(c) (2 points) With the following hypothesis space $\mathcal{H}$ and the data distribution in equation (3), find the best hypothesis $h^* \in \mathcal{H}$ and report the corresponding decision boundary $b^*$:

$$\mathcal{H} = \left\{ \tfrac{i}{400} : i \in [1200] \right\} \tag{5}$$

(d) (1 point) What is the true error of $h^*$, $L_{\mathcal{D}}(h^*)$?
(e) (1 point) Following a data generation procedure similar to the one in the demo code, sample 100 data points from each component and label them accordingly. Then, with the same hypothesis space $\mathcal{H}$ from equation (5) and these 200 training examples, find the hypothesis $h_S$ that minimizes the empirical error and report the corresponding decision boundary $b_S$ (see the sketch after this list).
(f) (1 point) What is the true error of $h_S$, $L_{\mathcal{D}}(h_S)$?
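For parts (e) and (f), here is a minimal sketch of the sampling and empirical-risk-minimization steps in NumPy. The seed and variable names are illustrative, and it assumes the 0.5 in equation (3) is a standard deviation; if it denotes a variance, use `np.sqrt(0.5)` as the scale.

```python
import numpy as np

rng = np.random.default_rng(0)

# 100 samples per component of equation (3); 0.5 is treated as the
# standard deviation here -- if it is a variance, use np.sqrt(0.5).
x_neg = rng.normal(0.0, 1.0, size=100)            # labeled y = -1
x_pos = rng.normal(2 * np.pi / 3, 0.5, size=100)  # labeled y = +1
x = np.concatenate([x_neg, x_pos])
y = np.concatenate([-np.ones(100), np.ones(100)])

# H = {i/400 : i in [1200]}: each b is a threshold predicting
# +1 when x > b and -1 when x < b, as in equation (4).
thresholds = np.arange(1, 1201) / 400.0
emp_errors = [(np.where(x > b, 1.0, -1.0) != y).mean() for b in thresholds]

b_S = thresholds[int(np.argmin(emp_errors))]
print(f"b_S = {b_S:.4f}, empirical error = {min(emp_errors):.3f}")
```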
- Perceptron Algorithm (3 points) Implement the Perceptron algorithm on a simple example.
The data you need for the implementation is in the file data.txt, which is released together with the assignment. Compared to the pseudocode in our lecture, T has been removed from line 3, because in practice we do not know the actual value of T. Instead, we can monitor the predictions on all data points and stop the algorithm once the classifier makes correct predictions on all examples.
1: Input: S = {(x1, y1), . . . , (xm, ym)}
2: Initialize w(1) = (0, . . . , 0)
3: for t = 1, 2, . . . do
4:     i ← ((t − 1) mod m) + 1
5:     if yi⟨w(t), xi⟩ ≤ 0 then
6:         w(t+1) ← w(t) + yixi
7:     end if
8: end for
9: Output: the final w(t)
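Below is a minimal Python sketch of this loop. The format of data.txt is an assumption (one example per line, features first, label ±1 in the last column); adjust the loading to match the released file.

```python
import numpy as np

# Assumed format of data.txt: one example per line, features first,
# label (+1 or -1) in the last column -- adjust to the released file.
data = np.loadtxt("data.txt")
X, y = data[:, :-1], data[:, -1]
m = len(y)

w = np.zeros(X.shape[1])
passes = 0
while True:
    mistakes = 0
    for i in range(m):  # one pass corresponds to m steps of the for-t loop
        if y[i] * np.dot(w, X[i]) <= 0:
            w = w + y[i] * X[i]
            mistakes += 1
    passes += 1
    if mistakes == 0:  # all examples classified correctly: stop
        break
print(f"converged after {passes} passes; w = {w}")
```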
- Logistic Regression (2 points) Given a training set S = {(x1, y1), . . . , (xm, ym)}, the loss function of logistic regression is defined as

$$L(h_w, S) = \frac{1}{m} \sum_{i=1}^{m} \log\left(1 + \exp(-y_i \langle w, x_i \rangle)\right) \tag{6}$$
Please show that the gradient of $L(h_w, S)$ with respect to $w$ is

$$\frac{dL(h_w, S)}{dw} = \frac{1}{m} \sum_{i=1}^{m} \frac{\exp(-y_i \langle w, x_i \rangle)}{1 + \exp(-y_i \langle w, x_i \rangle)} \left(-y_i x_i\right) \tag{7}$$
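One way to check equation (7) before deriving it is to compare the analytic gradient against central finite differences on random data. A minimal NumPy sketch (the shapes, seed, and data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))          # m = 5 examples, d = 3 features
y = rng.choice([-1.0, 1.0], size=5)
w = rng.normal(size=3)

def loss(w):
    # equation (6): (1/m) * sum_i log(1 + exp(-y_i <w, x_i>))
    return np.mean(np.log1p(np.exp(-y * (X @ w))))

# Analytic gradient from equation (7).
s = np.exp(-y * (X @ w))
grad = (X * (-(s / (1 + s)) * y)[:, None]).mean(axis=0)

# Central finite differences, one coordinate at a time.
eps = 1e-6
numeric = np.array([(loss(w + eps * e) - loss(w - eps * e)) / (2 * eps)
                    for e in np.eye(3)])
print(np.allclose(grad, numeric, atol=1e-6))  # expect True
```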
- Linear Regression (3 points) The loss function of linear regression with ℓ2 regularization is defined as
$$L_{\ell_2}(h_w, S) = \sum_{i=1}^{m} \left(h_w(x_i) - y_i\right)^2 + \lambda \|w\|_2^2 \tag{8}$$
Please show that the solution of this problem, when A + λI is invertible, is
$$w = (A + \lambda I)^{-1} b \tag{9}$$
where $I$ is the identity matrix, and $A$ and $b$ are defined as
$$A = \sum_{i=1}^{m} x_i x_i^T, \qquad b = \sum_{i=1}^{m} y_i x_i \tag{10}$$
Note that {xi} are column vectors.
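Assuming $h_w(x) = \langle w, x \rangle$, the closed form in equation (9) can be sanity-checked numerically: the gradient of equation (8) should vanish at $w = (A + \lambda I)^{-1} b$. A minimal NumPy sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
m, d, lam = 50, 4, 0.1
X = rng.normal(size=(m, d))  # row i is x_i^T
y = rng.normal(size=m)

# Equation (10): A = sum_i x_i x_i^T, b = sum_i y_i x_i.
A = X.T @ X
b = X.T @ y

# Equation (9): w = (A + lambda * I)^{-1} b.
w = np.linalg.solve(A + lam * np.eye(d), b)

# Gradient of equation (8) with h_w(x) = <w, x>:
# 2 * sum_i (<w, x_i> - y_i) x_i + 2 * lambda * w, which should vanish.
grad = 2 * X.T @ (X @ w - y) + 2 * lam * w
print(np.allclose(grad, 0, atol=1e-8))  # expect True
```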
Please report your answers in the homework submission and also submit your code with the file name [ComputingID]-hw01.py or [ComputingID]-hw01.ipynb. Without a code submission, you will receive a 50% deduction on the total points for this problem.