- The work has to be finished individually. Plagiarism will be dealt with seriously.
- Learn to do control experiments
- Learn that there are alternatives to Softmax/Cross Entropy when training DNNs
A: Use the "Issues" tab of this repo.
A: We'll discuss the homework in the following experiment course (held every two weeks). Homework turned in after the discussion will be capped at 90 marks.
A: The algorithm is TBD.
A: You can choose to skip extra_32x32.mat when training. Find the file common.py in 01-svhn and set use_extra_data = True or False to control this (see the data-loading sketch below).
A: Open http://ufldl.stanford.edu/housenumbers and download the Format 2 data (train_32x32.mat, test_32x32.mat, extra_32x32.mat).
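For concreteness, here is a minimal sketch of how a `use_extra_data` switch can gate loading of the Format 2 .mat files. The real common.py in 01-svhn is not reproduced here; the file paths and the helper name below are illustrative assumptions, while the `'X'`/`'y'` keys and the 1..10 label convention (10 stands for digit 0) are the documented SVHN Format 2 layout.

```python
import numpy as np
import scipy.io as sio

use_extra_data = False  # set to True to also train on extra_32x32.mat

def load_train_data():
    # Hypothetical helper; the actual common.py may organize this differently.
    files = ['train_32x32.mat']
    if use_extra_data:
        files.append('extra_32x32.mat')
    xs, ys = [], []
    for name in files:
        mat = sio.loadmat(name)
        xs.append(mat['X'])          # images, shape (32, 32, 3, N)
        ys.append(mat['y'].ravel())  # labels in 1..10; 10 denotes digit 0
    X = np.concatenate(xs, axis=3)
    y = np.concatenate(ys, axis=0) % 10  # remap label 10 -> 0
    return X, y
```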
(They should be named q1.1.diff, q1.2.diff, q1.3.diff, q1.4.diff.)
- Q2
  - Change the cross-entropy loss to the squared Euclidean distance between the model's predicted probability vector and the one-hot vector of the true label. (A sketch of this loss follows the list.)
- Q3
  - Change all pooling layers to Lp pooling. (A sketch follows the list.)
  - A description of Lp pooling is at https://www.computer.org/csdl/proceedings/icpr/2012/2216/00/06460867.pdf
- Q4
  - Try Lp regularization with different values of p. (Pick the p with the best accuracy and name the diff q4.1.diff.)
  - Use a negative coefficient for the Lp regularization term, i.e. change L_model + L_reg to L_model - L_reg. (Should be named q4.2.diff; a sketch follows the list.)
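For Q2, a minimal sketch of the squared-Euclidean-distance loss, written in PyTorch (the framework choice is an assumption; the assignment's own training code may use something else):

```python
import torch
import torch.nn.functional as F

def squared_distance_loss(logits, labels, num_classes=10):
    # Predicted probabilities (softmax is kept; only the loss changes).
    probs = F.softmax(logits, dim=1)
    # One-hot vectors of the true labels.
    one_hot = F.one_hot(labels, num_classes).float()
    # Squared Euclidean distance per sample, averaged over the batch.
    return ((probs - one_hot) ** 2).sum(dim=1).mean()

# Usage: loss = squared_distance_loss(model(x), y)
# instead of: loss = F.cross_entropy(model(x), y)
```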
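For Q3, Lp pooling replaces the max/average over each window with (sum of x^p over the window)^(1/p); p = 1 gives a scaled average pooling and p -> infinity approaches max pooling. A sketch using PyTorch's built-in nn.LPPool2d (again, the framework is an assumption):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(1, 16, 32, 32)

# Built-in Lp pooling with p = 2 over 2x2 windows.
pool = nn.LPPool2d(norm_type=2.0, kernel_size=2, stride=2)
y = pool(x)

# The same computation written out by hand for p = 2:
# sum x^p over each window (avg * window size), then take the p-th root.
p = 2.0
manual = (F.avg_pool2d(x.pow(p), kernel_size=2, stride=2) * 4).pow(1.0 / p)
print(torch.allclose(y, manual, atol=1e-5))  # True
```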
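For Q4, the Lp regularization term is the sum of |w|^p over the weights (p = 2 is ordinary weight decay), and q4.2 simply flips its sign in the total loss. A sketch, where the coefficient `lam` and the toy model are illustrative assumptions:

```python
import torch
import torch.nn as nn

def lp_regularization(model, p):
    # Sum of |w|^p over all trainable parameters.
    return sum(w.abs().pow(p).sum() for w in model.parameters())

model = nn.Linear(10, 2)  # stand-in for the real network
x, y = torch.randn(4, 10), torch.randint(0, 2, (4,))

lam = 1e-4   # regularization strength (assumed value)
sign = 1.0   # q4.2: set to -1.0, turning L_model + L_reg into L_model - L_reg
loss = nn.functional.cross_entropy(model(x), y) \
       + sign * lam * lp_regularization(model, p=2.0)
loss.backward()
```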