Please fill out the conceptual questions and submit to Gradescope under hw1-knn conceptual (make sure you submit to the correct assignment depending on whether you’re a 1470 or 2470 student). You must type your submissions and upload them as a PDF. We recommend using LaTeX.
Getting the Stencil
Please click here to get the stencil code. Reference this guide for more information about GitHub and GitHub Classroom.
Getting the Data
Use the download.sh bash script to download the data (if it exists for that homework). You can run a bash script either directly with ./download.sh or with bash ./download.sh.
Using the Virtualenv
We will be using conda, an open source package and environment management system, to set up virtual environments this year. Please reference this guide for more info on setting up your virtual environment.
Once you have a Conda environment set up, please make sure that you have all of these packages (csci1470.yml) installed. Note that if you have an M1/M2 MacBook, you should instead check that you have the packages listed in Jeff Heaton’s tensorflow-apple-metal-conda.yml installed.
Work on this assignment off of the stencil code provided, but do not change the stencil except where specified. Changing the stencil will make your code incompatible with the autograder and result in a low grade. You shouldn’t change any method signatures or add any trainable parameters to __init__ that we don’t give you (other instance variables are fine).
Make sure that you are using Python version 3.7 or higher on this assignment and all future assignments. You can check this in the terminal with python --version.
Most department machines should have Python 3.7. Please see the virtual environment setup guide for installing Python 3.7 on your local machine.
This assignment also requires the NumPy and Matplotlib packages. You can install them using pip or run the assignment in the virtual environment.
You can also check out the Python virtual environment guide to set up TensorFlow 2.5 on your local machine.
If this assignment sounds like a lot, don’t worry! Remember the famous quote, usually attributed to Henry Ford, “nothing is particularly hard if you divide it into small jobs”. The same goes for this assignment. Our stencil provides several functions and a class with several methods. You are also given Jupyter Notebooks to help test your code incrementally and visualize the intermediate steps.
Step 1. Preprocess the data
Before training a network, you will need to clean your data. This includes retrieving, altering, and formatting the data into the inputs for your network. For this assignment, you will be working on the MNIST dataset and the CIFAR dataset. Both can be downloaded through the download.sh script, but they’re also linked below.
The MNIST dataset can be found here: http://yann.lecun.com/exdb/mnist/ (note: the site is served over plain HTTP only). The training data contains 60,000 examples (handwritten digits) split across two files: one contains the image pixel data and the other contains the class labels.
The CIFAR-10 dataset can be found here: https://www.cs.toronto.edu/~kriz/cifar.html. It is a dataset of RGB color images of pixel size 32 by 32, and it has 10 possible classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck.
Note: For the CIFAR dataset, the training data is divided into batches. Since we won’t be using batches for this assignment, you’ll need to combine all of the batches to create your training set.
You should train your network using only the training data and then test your network’s accuracy on the testing data.
The MNIST dataset
This is the part where you write get_data_MNIST.
The MNIST data files are gzipped. You can use the gzip library to read these files from Python. To open a gzipped file from Python you can use the following code:
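A minimal, self-contained sketch of reading a gzipped file is below. It writes a tiny gzipped file first so the example runs on its own; the file name example.gz is illustrative, and for the homework you would open the MNIST .gz files instead:

```python
import gzip

# Write a tiny gzipped file so this sketch is self-contained.
payload = bytes([0, 64, 128, 255])
with gzip.open("example.gz", "wb") as f:
    f.write(payload)

# Reading it back works the same way as reading the MNIST files:
# gzip.open returns a file-like object whose read() yields the
# decompressed bytes.
with gzip.open("example.gz", "rb") as bytestream:
    raw = bytestream.read()

print(raw == payload)  # True
```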
You might find np.frombuffer helpful to convert from a buffer of bytes to a NumPy array. Here, you should also use the dtype argument to specify the data to be of type np.uint8.
The testing and training data are in the following format. You can read and ignore headers:
- train-images-idx3-ubyte.gz: 16-byte header followed by 60K training images. Each image consists of 784 single-byte integers (0-255) representing pixel intensities.
- train-labels-idx1-ubyte.gz: 8-byte header followed by 60K training labels. Each label is a single-byte integer from 0-9 representing the class.
- t10k-images-idx3-ubyte.gz: 16-byte header followed by 10K testing images.
- t10k-labels-idx1-ubyte.gz: 8-byte header followed by 10K testing labels.
When processing images, you should normalize the pixel values so that they range from 0 to 1. Each pixel is a single byte (0-255), so cast the data to np.float32 and then divide by 255.0; working in floating point also avoids any numerical overflow issues.
The inputs to your model should be a batch of images. Each image is a 28-by-28 matrix of pixel values, but you will need to flatten each image into a vector of length 784. You might find numpy.reshape (see documentation here) helpful for this.
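Putting these steps together, here is a hedged sketch of the parsing logic. The helper name parse_mnist_images and the synthetic bytes are illustrative, not part of the stencil:

```python
import numpy as np

def parse_mnist_images(raw: bytes, num_images: int) -> np.ndarray:
    """Skip the 16-byte header, normalize pixels to [0, 1], flatten to 784."""
    pixels = np.frombuffer(raw, dtype=np.uint8, offset=16)
    images = pixels.astype(np.float32) / 255.0
    return images.reshape(num_images, 784)

# Synthetic stand-in for the decompressed file contents: a 16-byte header
# followed by two all-gray (value 128) images.
fake_raw = bytes(16) + bytes([128]) * (2 * 784)
images = parse_mnist_images(fake_raw, 2)
print(images.shape, images.dtype)  # (2, 784) float32
```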
The CIFAR-10 dataset:
This is the part where you write get_data_CIFAR.
- get_data_CIFAR already has a few lines to get the file paths to the data.
- The training data is currently split into batches (i.e. different files). You’ll need to combine all of the batches into one dataset by unpickling each file and concatenating them into one list. You’ll also want to get the label names from the meta file and convert them from binary strings to text strings. If you need help unpickling, take a look at the explanation of the dataset layout here.
- You will find that the labels are not very useful for human eyes. For the sake of convenience, let’s re-label them to be more descriptive to us.
- At this point, your inputs are still two dimensional. You will want to reshape your inputs and transpose them.
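As an illustration of that reshape-and-transpose step: each CIFAR-10 row is 3072 bytes laid out as 1024 red, then 1024 green, then 1024 blue values. The array below is synthetic stand-in data, not the real dataset:

```python
import numpy as np

# Two synthetic "images" in CIFAR's flat row format (N, 3072).
flat = (np.arange(2 * 3072) % 256).astype(np.uint8).reshape(2, 3072)

# (N, 3072) -> (N, 3, 32, 32) -> channels-last (N, 32, 32, 3)
images = flat.reshape(-1, 3, 32, 32).transpose(0, 2, 3, 1)
print(images.shape)  # (2, 32, 32, 3)
```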
You should not normalize the image pixel values in this function. You will do that later in the HW1_CIFAR.ipynb notebook. The reason is that the CIFAR images must be prepared differently for the KNN model and the ResNet model.
For this assignment, you will use only subsets of the whole MNIST and CIFAR datasets.
You will eventually need to use the whole dataset later in the course to train neural network models because they need a large amount of data. However, using the whole dataset is not necessary for the simplified KNN for this assignment and you will end up wasting too much time waiting for your code to finish running.
You’ll write the get_specific_class and get_subset functions in preprocess.py. Here, get_specific_class is a helper for get_subset that returns the images and labels for a specific class. For example, if the keyword arguments are specific_class="cat" and num=50, the function will return the first 50 cat images and labels from the image and label arrays. You can implement this function however you want, but you will probably find NumPy mask operations very useful.
Then, the get_subset function will repeatedly call the get_specific_class function for each class included in the keyword argument class_list.
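As a rough illustration (not the required implementation), a boolean mask can pull out one class in a couple of lines. The tiny arrays below are made up:

```python
import numpy as np

def get_specific_class(image, label, specific_class=0, num=None):
    """Return the first `num` images/labels whose label equals specific_class."""
    mask = label == specific_class        # boolean mask over all examples
    return image[mask][:num], label[mask][:num]

labels = np.array([0, 1, 0, 2, 0, 1])
images = np.arange(6 * 4).reshape(6, 4)   # six tiny 4-pixel "images"
subset_images, subset_labels = get_specific_class(
    images, labels, specific_class=0, num=2)
print(subset_labels)  # [0 0]
```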
Step 2. Fill in the model
Let’s finish implementing the KNN algorithm in the KNN_Model.py file. The model is going to pick K images that are closest to the target image, look at the labels of the neighboring images, and return the most frequent label among the neighbors. There are three methods in the KNN_Model class that you need to finish implementing.
- get_neighbor_counts: Construct the NumPy array distances, which contains the squared Euclidean distances between the new image (new_image) and all images in the training set (image_train). Then, you need to find the nearest neighboring images’ indices in the training set, self.image_train, and put them in the NumPy array, nearest_indices.
- predict: Take the majority vote on the K nearest neighbors’ labels and return this as the label for the example.
- get_prediction_array: Make predictions on multiple images by repeatedly calling the predict method.
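The core of these methods amounts to the following logic. This is a stand-alone sketch with a hypothetical name (knn_predict) and made-up 2-D data, not the stencil's exact signatures:

```python
import numpy as np

def knn_predict(new_image, image_train, label_train, k=3):
    # Squared Euclidean distance from new_image to every training image.
    # (No square root is needed: it doesn't change the neighbor ranking.)
    diffs = image_train - new_image
    distances = np.sum(diffs * diffs, axis=1)
    # Indices of the k nearest training images.
    nearest_indices = np.argsort(distances)[:k]
    # Majority vote over the neighbors' labels.
    votes = np.bincount(label_train[nearest_indices])
    return int(np.argmax(votes))

# Tiny made-up example: two clusters of 2-D "images".
train = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
train_labels = np.array([0, 0, 1, 1])
print(knn_predict(np.array([0.1, 0.2]), train, train_labels, k=3))  # 0
```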