
Log Vars Units #70

Closed
keeperbitz opened this issue Oct 26, 2020 · 6 comments

@keeperbitz

I converted a DNN to an SNN, and I am trying to extract the log vars that correspond to the average firing rates, number of operations, etc. on Intel Loihi. I was able to load a dictionary of quantities ('avg_rate', 'operations_ann', etc.) after running on Intel Loihi. However, I am unsure what the units are for these variables. For example, the synaptic_operations_b_t values are decimals < 1, whereas the documentation seems to refer to a number of operations. Is there somewhere that lists the units used in this monitoring? The quantities I am interested in are ['spiketrains_n_b_l_t', 'avg_rate', 'top5err_ann', 'neuron_operations_b_t', 'input_b_l_t', 'activations_n_b_l', 'top1err_ann', 'top1err_b_t', 'true_classes_b', 'operations_ann', 'mem_n_b_l_t', 'input_image_b_l', 'synaptic_operations_b_t', 'top5err_b_t'].

Thanks!

@rbodo
Contributor

rbodo commented Oct 30, 2020

Apologies, these logs are not well documented.

  • The operations (both neuron and synaptic) are reported as millions of ops, i.e. (# ops) / 1e6 (see the loading sketch after this list).
  • spiketrains_n_b_l_t is a binary array with ones indicating spikes and zeros everywhere else.
  • avg_rate is the total number of spikes that were generated during the simulation of a batch, divided by the batch size, the number of neurons in the network, and the simulation duration.
  • top1err_b_t is the top-1 error of the converted SNN over time, for each test sample in the given batch.
  • top1err_ann is the top-1 error of the original ANN, averaged over the same test samples used for the SNN and reported as a floating-point number in the range [0, 1].
  • top5err_ann is the top-k error of the ANN (k as specified in the config, default 1).
  • input_b_l_t contains the input spike train (matrix of 0's and 1's).
  • activations_n_b_l are the ANN activations in arbitrary units.
  • true_classes_b contains the target labels for classification in integer format, not one-hot encoded.
  • operations_ann is the number of floating-point operations (FLOP) of the ANN, in units of million FLOP. We count each multiply-accumulate (MAC) operation in the ANN as two FLOP.
  • mem_n_b_l_t is the membrane potential of each neuron over time, in arbitrary units.
  • input_image_b_l is the batch of input images as provided by the user.
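In case it helps, here is a minimal sketch of how the logged quantities could be inspected, assuming the toolbox saved them as NumPy .npz archives in the run's log directory (the path below is hypothetical and depends on your paths/runlabel settings):

import numpy as np

# Hypothetical path; adjust to the log directory configured for your run.
data = np.load('log/gui/test/log_vars/0.npz')
print(data.files)  # names of the logged variables in this batch

# Neuron and synaptic operations are reported in millions of ops,
# so multiply by 1e6 to recover the raw operation counts.
syn_ops = data['synaptic_operations_b_t'] * 1e6     # shape (batch_size, num_timesteps)
neuron_ops = data['neuron_operations_b_t'] * 1e6

# Total SNN operations per test sample, accumulated over the simulation duration.
total_snn_ops = syn_ops.sum(axis=-1) + neuron_ops.sum(axis=-1)

# operations_ann is the FLOP count of one ANN forward pass, also in millions
# (summed here in case it is stored per layer).
total_ann_flops = np.sum(data['operations_ann']) * 1e6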

@wusaifei

wusaifei commented Nov 1, 2020

@rbodo Hello, what do these parameters in plot_vars and log_vars mean?

log_vars = {'activations_n_b_l', 'spiketrains_n_b_l_t', 'input_b_l_t',
            'mem_n_b_l_t', 'synaptic_operations_b_t', 'neuron_operations_b_t',
            'all'}
plot_vars = {'activations', 'spiketrains', 'spikecounts', 'spikerates',
             'input_image', 'error_t', 'confusion_matrix', 'correlation',
             'hist_spikerates_activations', 'normalization_activations',
             'operations', 'v_mem', 'all'}

Could you please explain what each of these means?

@wusaifei

wusaifei commented Nov 2, 2020

@rbodo Hello, can "Number of operations of ANN: 4131959181" in the output below be understood as the total number of operations the ANN performs during the computation?
And is the number of operations of the SNN given by "Number of neurons: 2232677", by "Number of synapses: 2026076528", or by the sum of the two?
Looking forward to your reply!
[screenshot of the toolbox console output]

@rbodo
Contributor

rbodo commented Nov 3, 2020

The "number of operations of ANN" is the number of floating point operations needed to do one forward pass on an input sample.
The number of ops of the SNN depends on network activity at runtime and changes from sample to sample, so the toolbox cannot give you a single number like for the ANN before simulating. So you cannot directly infer the number of ops from neuron or synapse count.
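For intuition, the ANN number follows the usual static FLOP estimate, counting one multiply-accumulate (MAC) as two FLOP as noted above. A rough sketch with made-up layer sizes (not the toolbox's actual counting code):

def dense_flops(fan_in, num_units):
    # Each output unit performs fan_in MACs; one MAC counts as two FLOP.
    return 2 * fan_in * num_units

def conv2d_flops(out_h, out_w, out_channels, kernel_h, kernel_w, in_channels):
    # Each output position of each output channel performs
    # kernel_h * kernel_w * in_channels MACs.
    return 2 * out_h * out_w * out_channels * kernel_h * kernel_w * in_channels

print(conv2d_flops(32, 32, 64, 3, 3, 64))  # ~75.5 million FLOP
print(dense_flops(4096, 10))               # ~0.08 million FLOP

The SNN count, by contrast, is accumulated at runtime from the spikes actually emitted, which is why it is logged per sample and per timestep.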

@rbodo
Contributor

rbodo commented Nov 3, 2020

In reply to the question above about the meaning of the parameters in plot_vars and log_vars:

The log_vars are explained in the post above. The plot_vars are listed below (a short config sketch showing how to select these variables follows the list).

  • activations are the ANN activations in feature maps of hidden layers.
  • spiketrains are the spikes of hidden layer neurons over time.
  • spikecounts are the total number of spikes of the network over time.
  • spikerates are the SNN firing rates in feature maps of hidden layers.
  • input_image is the input image as provided by the user.
  • error_t is the classification error over time.
  • confusion_matrix is the heatmap showing the distribution of correct / incorrect classifications.
  • correlation is a correlation plot between activations and spikerates for each layer.
  • hist_spikerates_activations shows the histogram of spikerates and activations of a layer for the current sample.
  • normalization_activations shows the histogram of activations before and after applying weight normalization to the ANN.
  • operations shows the number of synaptic operations over time.
  • v_mem is the membrane potential of each neuron of a layer over time.
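For reference, here is a minimal sketch of how these variables could be selected when building the toolbox config with configparser (the chosen variables are just examples, and the output filename is hypothetical):

import configparser

config = configparser.ConfigParser()
config['output'] = {
    # Values are written as Python set literals, as in the lists above.
    'log_vars': str({'synaptic_operations_b_t', 'neuron_operations_b_t'}),
    'plot_vars': str({'spikerates', 'error_t', 'operations', 'correlation'})}

# Hypothetical filename; pass this config file to the toolbox when running it.
with open('config', 'w') as f:
    config.write(f)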

@wusaifei

wusaifei commented Nov 3, 2020

@rbodo Thank you for your reply!
