README #14

Merged
merged 15 commits over 2 years ago
4 handin3/Code/regularization.m
@@ -9,8 +9,8 @@
9 9 modelSmaller = train(data, C/100, gamma)
10 10
11 11 [f b] = dividesupportvectors(C, model.SVs, model.sv_coef);
12   -[fl bl] = dividesupportvectors(C, modelLarger.SVs, modelLarger.sv_coef);
13   -[fs bs] = dividesupportvectors(C, modelSmaller.SVs, ...
  12 +[fl bl] = dividesupportvectors(C*100, modelLarger.SVs, modelLarger.sv_coef);
  13 +[fs bs] = dividesupportvectors(C/100, modelSmaller.SVs, ...
14 14 modelSmaller.sv_coef);
15 15
16 16 disp(sprintf('Original: #SVs: %d\t#free SVs: %d\t#bounded SVs: %d', ...
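The fix above passes the scaled cost (C*100 and C/100) through to dividesupportvectors instead of the original C. The function body is not part of this diff, but a minimal sketch of what it plausibly does, assuming the LIBSVM convention that sv_coef(i) = y_i * alpha_i, so |sv_coef(i)| at the box constraint C marks a bounded support vector and anything below it a free one:

```matlab
% Hypothetical sketch of dividesupportvectors (only its call signature
% appears in the diff). Assumes LIBSVM's sv_coef = y_i * alpha_i.
function [free, bounded] = dividesupportvectors(C, SVs, sv_coef)
    tol = 1e-8;                           % numerical tolerance at the bound
    atBound = abs(sv_coef) > C - tol;     % alpha_i == C: bounded SVs
    bounded = SVs(atBound, :);
    free    = SVs(~atBound, :);           % 0 < alpha_i < C: free SVs
end
```

This makes clear why the bug mattered: comparing alpha against the unscaled C misclassifies every support vector of the rescaled models as free or bounded.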
47 handin3/README.txt
... ... @@ -1 +1,46 @@
1   -TODO
  1 +README
  2 +======
  3 +
  4 +This is the README for the hand-in for assignment 3 in Statistical Methods for Machine Learning.
  5 +
  6 +About the code
  7 +--------------
  8 +All code is implemented in MATLAB.
  9 +
  10 +Neural Network
  11 +--------------
  12 +Run nntrain.m, which
  13 +* creates weight matrices representing eight neural networks,
  14 +* trains them for 25000 batch learning iterations,
  15 +* measures the error on the training and test sets every 50 iterations,
  16 +* plots the solutions and the error trajectories,
  17 +* saves these plots to solution20ns.eps, solution2.eps, error20ns.eps and error2ns.eps.
  18 +
  19 +The eight neural networks are:
  20 +* four with a hidden layer of 20 neurons,
  21 +* four with two hidden neurons.
  22 +
  23 +Both groups of neural networks are trained with learning rates of
  24 +* 0.001,
  25 +* 0.0001,
  26 +* 0.00001 and
  27 +* 0.000001.
  28 +
  29 +After running nntrain, these eight weight matrices can be found in WsCells and the error rates in ErrVecTrain and ErrVecTest.
  30 +
  31 +Each major step involves supporting functions, and alternatives sometimes exist: for example, instead of using initWsRandNoShortcuts(...) to produce an initial weight matrix without shortcut connections, initWsRand(...) can be used to get a neural network with shortcut connections.
  32 +
  33 +
  34 +
  35 +Support Vector Machines
  36 +-----------------------
  37 +The implementation of the SVM exercise depends on the current version of the LIBSVM Matlab interface being compiled and available in Matlab. It will not run correctly if Matlab picks its own builtin function svmtrain instead.
  38 +Run runsvm.m to start most of the calculations for the SVM exercise. It prints the results of the model selection and kernel inspection tasks (except for 2.2.2) to the console. The plot of bounded and free support vectors is saved as freeBoundesSVs.eps in the current directory.
  39 +For 2.2.2, run regularization.m.
  40 +Subtasks like the model selection (modelselect.m) and the calculation of bounded and free support vectors (dividesupportvectors.m) have been saved in separate functions and should be self-explanatory.
  41 +
  42 +Authors
  43 +-------
  44 +Philip Pickering <pgpick@gmx.at>
  45 +Marco Eilers <eilers.marco@googlemail.com>
  46 +Thomas Bracht Laumann Jespersen <laumann.thomas@gmail.com>
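The README's warning about MATLAB shadowing the LIBSVM interface with its own builtin svmtrain can be checked up front. A minimal sketch; the path is an assumption, point it at wherever the compiled interface actually lives:

```matlab
% Verify MATLAB resolves svmtrain to the compiled LIBSVM MEX interface,
% not the Statistics Toolbox builtin. The path is a placeholder.
addpath('libsvm/matlab');   % assumed location of the compiled interface
which svmtrain              % should report a .mex* file, not a toolbox .m
```

Running `which svmtrain` before runsvm.m catches the name clash immediately instead of producing confusing errors mid-run.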
BIN  handin3/handin3.pdf
Binary file not shown
2  handin3/handin3.tex
@@ -253,7 +253,7 @@ \subsubsection{Effect of the regularization parameter}
253 253
254 254 The file \texttt{regularization.m} performs the outlined procedure by first training the SVM model using the values for $C$ and $\gamma$ found during model selection. Then it trains two other models, one in which $C$ is multiplied by a hundred and one in which it is divided by a hundred.
255 255
256   -The most notable change is in the number of support vectors. There's a total of 93 support vectors for the ``original'' value of $C$---87 of which are bounded. When $C$ is a hundred times larger, the number of support vectors drop to just 19, all of which are free. Conversely, when dividing $C$ by a hundred we get an increase in the number of support vectors to 199, but again all of them are free.
  256 +The most notable change is in the number of support vectors. There's a total of 93 support vectors for the ``original'' value of $C$---87 of which are bounded. When $C$ is a hundred times larger, the number of support vectors drops to just 19, 14 of which are bounded. Conversely, when dividing $C$ by a hundred we get an increase in the number of support vectors to 199, and again most of them (195) are bounded.
257 257
258 258 \subsubsection{Scaling behaviour}
259 259
