
N2D2 versions conflict #15

Closed
nassim-abderrahmane opened this issue Mar 21, 2018 · 5 comments

@nassim-abderrahmane

There is an anomaly with N2D2: I get different results with two versions of N2D2.

Current version "21/03/2018": accuracy = 77.92 %
Earlier version "16/03/2017": accuracy = 95.37 %

This problem occurs only for spiking networks.

I am using the command "n2d2 mnist28_300_10_Spike.ini -test" with both versions.

@olivierbichler-cea
Contributor

Hello,
Could you attach the mnist28_300_10_Spike.ini file that causes the problem?

@nassim-abderrahmane
Author

nassim-abderrahmane commented Mar 22, 2018 via email

@olivierbichler-cea
Contributor

I am able to reproduce the problem.
We will investigate!

@olivierbichler-cea
Contributor

olivierbichler-cea commented Mar 30, 2018

The problem arises because of the weights normalization that is performed after the test in frame mode, which likely changed between the two versions of N2D2.
The following lines are executed at the end of a test:

// Normalize the network's free parameters (weights) and export them
deepNet->normalizeFreeParameters();
deepNet->exportNetworkFreeParameters("weights_normalized");

// Normalize the outputs range as well, then export the resulting weights
deepNet->normalizeOutputsRange(outputsRange, 0.25);
deepNet->exportNetworkFreeParameters("weights_range_normalized");

Then, when the spike test follows, the weights used are the ones normalized by the second normalize command.
Long story short, to obtain the same results as with the previous N2D2 version, just re-run N2D2 without the "-test" argument after the learning.
You can also load specific weights for the test with the "-w <weights_folder>" argument, as in the sketch below.
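
For reference, here is a minimal sketch of the two workarounds, assuming the INI file from the original report and assuming "-w" can be combined with "-test" as shown:

# Spike test reusing the weights from the last learning run
# (matches the behavior of the previous N2D2 version):
n2d2 mnist28_300_10_Spike.ini

# Or explicitly load a given set of exported weights for the test:
n2d2 mnist28_300_10_Spike.ini -w weights_range_normalized -test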

Now, the rationale behind the automatic normalization after the test in frame mode is that the outputs range normalization should improve the spike coding. In your case it does not, and I suspect that this is just luck; on bigger networks the score would likely fall apart...
There are indeed several issues with your INI simulation:

  • You are using a ReLU activation on the output layer. Chances are high that this causes some neurons to become silent. You should use a softmax (with loss) during the learning and drop it for the test. In any case, the output layer should not have an activation function (or should use a Linear activation); see the INI sketch after this list.
  • Bias is not supported in spike mode, so you should set NoBias=1.
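
To illustrate both points, here is a sketch of the output layers in N2D2's INI format; the section names ([fc2], [softmax]) and the Input/NbOutputs values are placeholders for your actual network, so treat this as an assumed layout rather than a drop-in fix:

; Output layer: no activation function (Linear) and no bias,
; since bias is not supported in spike mode
[fc2]
Input=fc1
Type=Fc
NbOutputs=10
ActivationFunction=Linear
NoBias=1

; Softmax with loss, used during the learning only;
; drop this section for the spike test
[softmax]
Input=fc2
Type=Softmax
NbOutputs=10
WithLoss=1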

Taking this into account, I achieve for the same network a test score of 98.20% in frame mode and 97.98% in spike mode, using the "-w weights_range_normalized" weights and a terminate delta of 20.

@olivierbichler-cea
Contributor

The problem appears to be solved, closing the issue.
Please don't hesitate to reopen it if needed!
