
2 Question 1: Perceptron training algorithm (30%)

Below is the usual code to import libraries we need. You can modify it as needed.

[ ]: import numpy as np              # mathematical operations
     import matplotlib.pyplot as plt # plotting
     import time
     import pylab as pl
     from IPython import display
     %matplotlib inline

     from perceptron import *
     %load_ext autoreload
     %autoreload 2
     %reload_ext autoreload

1. Starting from the boilerplate code provided in perceptron.py, implement the perceptron training algorithm as seen in the lecture. This perceptron takes a 2D point as input and classifies it as belonging to either class 0 or 1. By changing the number_of_samples parameter, different sample sizes can be generated for both classes.

You need to address the sections of the code marked with #TODO.

[ ]: number_of_samples = 100
     max_number_of_epochs = 10

     X = np.random.rand(number_of_samples, 2)
     X = np.append(X, np.ones((X.shape[0], 1)), axis=1)
     Y = X[:, 1] > X[:, 0]
     Y = np.float32(Y)
     Y = Y.reshape((number_of_samples, 1))

     p = Perceptron(3)
     p.train(X, Y, max_number_of_epochs)
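Since the boilerplate in perceptron.py is not reproduced in this notebook, the sketch below shows one possible shape for a completed Perceptron class. Only the constructor call Perceptron(3) and the train(X, Y, max_number_of_epochs) signature are taken from the cell above; the step activation, the update rule, and the learning_rate default are assumptions based on the standard perceptron algorithm.

```python
import numpy as np

class Perceptron:
    """Minimal perceptron sketch: a 2D point plus a bias input (3 weights)."""

    def __init__(self, number_of_inputs):
        # Random initialization (question 4 mentions random initialization)
        self.weights = np.random.rand(number_of_inputs, 1)

    def predict(self, X):
        # Step activation: class 1 if the weighted sum is positive, else 0
        return np.float32(X @ self.weights > 0)

    def train(self, X, Y, max_number_of_epochs, learning_rate=1.0):
        for epoch in range(max_number_of_epochs):
            errors = 0
            for x, y in zip(X, Y.ravel()):
                prediction = 1.0 if x @ self.weights.ravel() > 0 else 0.0
                if prediction != y:
                    # Classic perceptron update: w <- w + lr * (y - prediction) * x
                    self.weights += learning_rate * (y - prediction) * x.reshape(-1, 1)
                    errors += 1
            if errors == 0:  # a full pass with no mistakes: converged
                return epoch + 1
        return max_number_of_epochs
```

The update only fires on misclassified samples, so once a full epoch passes without errors, the weights stop changing and training can halt early.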

2. The code plots the ground-truth boundary between the two classes. We know this is the ground truth because it is the boundary from which we generated the data. However, even though the estimated line can correctly classify all the training samples, it might not agree with the ground-truth boundary.

• Explain why this happens.

• What is the potential disadvantage of such a decision boundary?

• How could you change the training algorithm to reduce this disadvantage? (Implementation not required)

[ ]: ### Your answer here (use multiple cells if required, to answer any question in this assignment)

3. In some training sessions, you might observe the boundary oscillating between two solutions and not reaching 100% accuracy. Discuss why this can happen and modify the training algorithm to correct this behaviour. We will call this the modified algorithm. (Hint: learning rate)

[ ]: ### Your answer here
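As an illustration of the hint, one common fix is to shrink the learning rate over epochs so that late updates cannot keep flipping the boundary back and forth between two solutions. The function below is a self-contained sketch, not the perceptron.py API: it trains a raw weight vector directly, and the decay schedule initial_lr / (1 + epoch) is chosen purely for illustration.

```python
import numpy as np

def train_with_decay(X, Y, max_number_of_epochs, initial_lr=1.0):
    """Perceptron training with a decaying learning rate (illustrative sketch)."""
    weights = np.random.rand(X.shape[1], 1)
    for epoch in range(max_number_of_epochs):
        lr = initial_lr / (1.0 + epoch)  # step size shrinks every epoch
        errors = 0
        for x, y in zip(X, Y.ravel()):
            prediction = 1.0 if x @ weights.ravel() > 0 else 0.0
            if prediction != y:
                weights += lr * (y - prediction) * x.reshape(-1, 1)
                errors += 1
        if errors == 0:  # no mistakes in a full pass: stop early
            break
    return weights
```

Because the update magnitude decays, a pair of samples that previously pushed the boundary past each other in alternating directions now nudges it by ever smaller amounts, letting it settle between them.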

4. Random initialization causes the algorithm to converge in a different number of epochs on each run. Execute the training algorithm on sample sizes of 10, 50, 500 and 5000. Report in a table the mean number of epochs required to converge for both the original and the modified algorithm. Which algorithm performs better, and why? Is there a clear winner?

[ ]: ### Your answer here
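One way to produce the table is to repeat training from fresh random weights several times per sample size and average the epoch counts. The sketch below does this for the plain algorithm only; runs=20 and max_epochs=1000 are arbitrary choices, and in the assignment you would call your own Perceptron from perceptron.py (and its modified variant) rather than the inline training loop used here for self-containment.

```python
import numpy as np

def epochs_to_converge(X, Y, max_epochs=1000, lr=1.0):
    """Train a plain perceptron; return the epochs until a clean pass (sketch)."""
    weights = np.random.rand(X.shape[1], 1)   # fresh random initialization
    for epoch in range(max_epochs):
        errors = 0
        for x, y in zip(X, Y.ravel()):
            prediction = 1.0 if x @ weights.ravel() > 0 else 0.0
            if prediction != y:
                weights += lr * (y - prediction) * x.reshape(-1, 1)
                errors += 1
        if errors == 0:
            return epoch + 1
    return max_epochs  # did not converge within the budget

def mean_epochs(number_of_samples, runs=20):
    """Average epochs-to-convergence over several random datasets and inits."""
    counts = []
    for _ in range(runs):
        X = np.random.rand(number_of_samples, 2)
        X = np.append(X, np.ones((X.shape[0], 1)), axis=1)
        Y = np.float32(X[:, 1] > X[:, 0]).reshape(-1, 1)
        counts.append(epochs_to_converge(X, Y))
    return float(np.mean(counts))
```

For the table, call mean_epochs once per sample size (10, 50, 500, 5000) with each of the two training algorithms, keeping the number of runs the same so the means are comparable.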

3 Question 2: Multiclass classifier (30%)

Pat yourselves on the back: you have successfully trained a binary classifier. Now it's time to move to 3 classes. The dataset we are going to work on is shown below and contains three classes represented by different colors.
