Homework 1

Requirements

  • The work must be completed individually. Plagiarism will be dealt with seriously.

Goals

  • Learn to run controlled experiments
  • Learn that there are alternatives to Softmax/Cross-Entropy when training DNNs

FAQ

Q: Where can I get support for this homework?

A: Use "Issues" of this repo.

Q: What's the deadline for the homework?

A: We'll discuss each homework in the following lab session (held every two weeks). Homework turned in after the discussion will be capped at 90 marks.

Q: How will the score of each homework affect the final course score?

A: The scoring algorithm is TBD.

Q: I don't have access to a GPU. How do I finish the homework in time?

A: You can skip extra_32x32.mat when training. Find the file common.py in 01-svhn and set use_extra_data = False (or True) to control whether the extra data is used.
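
As a rough illustration of how the flag might work (a sketch only; the actual common.py in the repo may be organized differently):

```python
# Hypothetical sketch of the data-loading part of common.py.
import numpy as np
import scipy.io

use_extra_data = False  # set True to also train on extra_32x32.mat (much slower)

def load_train_set():
    names = ["train_32x32.mat"]
    if use_extra_data:
        names.append("extra_32x32.mat")
    xs, ys = [], []
    for name in names:
        mat = scipy.io.loadmat(name)
        xs.append(mat["X"])          # images, shape (32, 32, 3, N)
        ys.append(mat["y"].ravel())  # labels, shape (N,)
    return np.concatenate(xs, axis=3), np.concatenate(ys)
```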

Q: Where can I find the dataset files?

A: Open http://ufldl.stanford.edu/housenumbers and download the format 2 data: train_32x32.mat, test_32x32.mat, and extra_32x32.mat.
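
For reference, the format 2 files are MATLAB matrices. A quick sanity check after downloading (using scipy, not required by the homework) looks like:

```python
import scipy.io

data = scipy.io.loadmat("train_32x32.mat")
X, y = data["X"], data["y"]
print(X.shape)  # (32, 32, 3, 73257): 32x32 RGB images stacked along the last axis
print(y.shape)  # (73257, 1): labels are 1..10, where 10 stands for the digit 0
```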

Questions

  • Q1: Finding alternatives to softmax

(Should be named q1.1.diff, q1.2.diff, q1.3.diff, q1.4.diff)

  • Q2: Regression vs Classification (Should be named q2.diff)

    • Change the cross-entropy loss to the squared Euclidean distance between the model's predicted probability vector and the one-hot vector of the true label. (A sketch follows this list.)
  • Q3: Lp pooling (Should be named q3.diff; a sketch follows this list.)

  • Q4: Regularization

    • Try Lp regularization with different values of p. (Pick the p that gives the best accuracy and name the diff q4.1.diff.)
    • Flip the sign of the Lp regularization term, i.e., change L_model + L_reg to L_model - L_reg. (Should be named q4.2.diff; a sketch follows this list.)
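
For Q2, here is a minimal sketch of the squared-distance loss, assuming a PyTorch setup (the course code may use a different framework; squared_distance_loss is a name chosen here for illustration):

```python
import torch
import torch.nn.functional as F

def squared_distance_loss(logits, labels):
    # Softmax turns raw logits into a probability vector per example.
    probs = F.softmax(logits, dim=1)
    # One-hot encode the integer labels to match the probability vector.
    one_hot = F.one_hot(labels, num_classes=logits.size(1)).float()
    # Squared Euclidean distance per example, averaged over the batch.
    return ((probs - one_hot) ** 2).sum(dim=1).mean()

loss = squared_distance_loss(torch.randn(4, 10), torch.randint(0, 10, (4,)))
```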
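For Q3, Lp pooling computes (sum over the window of x^p)^(1/p) instead of a max or mean. If the model is in PyTorch, nn.LPPool2d already implements this, so the diff could be as small as swapping the pooling layer (p = 3 here is just an example):

```python
import torch
import torch.nn as nn

# Before: nn.MaxPool2d(kernel_size=2, stride=2)
pool = nn.LPPool2d(norm_type=3, kernel_size=2, stride=2)

x = torch.rand(1, 16, 32, 32)  # (batch, channels, H, W), non-negative inputs
print(pool(x).shape)           # torch.Size([1, 16, 16, 16])
```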
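For Q4, one way to add an Lp penalty over all model weights, again assuming PyTorch; setting sign = -1.0 turns L_model + L_reg into the L_model - L_reg variant asked for in q4.2 (lp_penalty, weight, and sign are illustrative names, not from the course code):

```python
import torch
import torch.nn as nn

def lp_penalty(model, p):
    # Sum of |w|^p over every parameter tensor (the p-norm raised to the p-th power).
    return sum(param.abs().pow(p).sum() for param in model.parameters())

model = nn.Linear(10, 10)  # stand-in for the course model
logits = model(torch.randn(4, 10))
loss_model = nn.functional.cross_entropy(logits, torch.randint(0, 10, (4,)))

p, weight, sign = 3.0, 1e-4, 1.0  # sign = -1.0 for the q4.2 variant
loss = loss_model + sign * weight * lp_penalty(model, p)
loss.backward()
```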