
Help for using .aedat format file #63

Closed
hyeongilee opened this issue Jun 12, 2020 · 10 comments

@hyeongilee

Hi,
I'm trying to run a classification task using DVS camera data (.aedat format), and I'm confused about how to set the "label_dict" option in the config file.

For example, I have an .aedat file containing a sequence about 2 minutes long, in which the class label varies over time: for the first 10 seconds the class is 0, for the next 10 seconds the class is 1, and so on.

Thanks.

@rbodo commented Jun 13, 2020

The aedat input format works similarly to the flow_from_directory method of ImageDataGenerator in tf/keras, which reads images from folders that each represent a class.

That means you would have to cut your aedat sequence into shorter files that only contain one class. Then structure your dataset directory like this:

\MyDVSdata
    \apples
        00.aedat
        01.aedat
        ...
    \oranges
        00.aedat
        01.aedat
        ...
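In case a starting point helps, here is a minimal sketch of how such a split could be done, assuming the jAER AEDAT 2.0 layout (ASCII header lines starting with '#', followed by big-endian 8-byte events of int32 address + int32 timestamp in microseconds); the function name split_aedat2 is made up for this example:

import struct

def split_aedat2(path, clip_length_us, out_prefix):
    """Split one long AEDAT 2.0 recording into fixed-length clips."""
    with open(path, 'rb') as f:
        # Collect the ASCII header lines, then read the binary event data.
        header, pos, line = b'', f.tell(), f.readline()
        while line.startswith(b'#'):
            header += line
            pos, line = f.tell(), f.readline()
        f.seek(pos)
        data = f.read()
    data = data[:len(data) - len(data) % 8]         # drop any trailing partial event
    events = list(struct.iter_unpack('>ii', data))  # (address, timestamp_us) pairs
    t0 = events[0][1]
    clips = {}
    for addr, ts in events:
        clips.setdefault((ts - t0) // clip_length_us, []).append((addr, ts))
    for idx, evs in sorted(clips.items()):
        # Each clip keeps the original header so it remains a valid .aedat file.
        with open('%s_%02d.aedat' % (out_prefix, idx), 'wb') as out:
            out.write(header)
            for addr, ts in evs:
                out.write(struct.pack('>ii', addr, ts))

# Example: 10-second clips for the 'apples' class:
# split_aedat2('recording.aedat', 10_000_000, 'MyDVSdata/apples/clip')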

The [input] section in your config file would look like this:

[input]
dataset_format = aedat
label_dict = {'apples': '0', 'oranges': '1'}  # Map from folder name to class index in output layer.

For training your model, you may have binned the DVS events into frames. The following arguments allow reconstructing these frames from the event stream at test time:

num_dvs_events_per_sample = 2000     # How many events to accumulate into one frame.
chip_size = (240, 180)               # The dimensions of your DVS sensor
frame_gen_method = rectified_sum     # rectified_sum: Discard polarity. Other possible value: ``signed_sum`` (keep polarity while adding up events into frame).
is_x_first = False                   # Depending on the axis ordering convention of your framework (numpy, PIL, ...) you may have to swap / flip the x/y coordinates.            
is_x_flipped = True
is_y_flipped = True
do_clip_three_sigma = True           # Outlier removal
maxpool_subsampling = True           # When events fall on the same pixel address within one time bin, we can either add them up (which results in a spike burst), or keep only one of them (which I found to be more robust)
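To make the effect of these options concrete, here is a rough numpy illustration (not the toolbox's actual implementation; the events_to_frame name and the (x, y, polarity) event format are assumptions):

import numpy as np

def events_to_frame(events, chip_size=(240, 180),
                    frame_gen_method='rectified_sum', maxpool_subsampling=True):
    # Accumulate (x, y, polarity) events, with polarity in {-1, +1}, into one frame.
    frame = np.zeros(chip_size)
    for x, y, p in events:
        value = 1 if frame_gen_method == 'rectified_sum' else p  # discard vs. keep polarity
        if maxpool_subsampling:
            frame[x, y] = value   # keep only one event per pixel
        else:
            frame[x, y] += value  # sum repeated events (spike bursts)
    return frame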

Our simulator is time-stepped, which means we can only approximate the asynchronous nature of the DVS event stream. We achieve this by feeding in very thin frames (a few microseconds wide). You can specify the bin width in microseconds here:

eventframe_width = 10                      

Smaller values mean more accurate "asynchronous" mode but also more computations.
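As a hypothetical illustration of that binning (the to_thin_frames name and event layout are made up for this sketch):

import numpy as np

def to_thin_frames(events, eventframe_width=10):
    # events: numpy array of rows (x, y, polarity, timestamp_us), sorted by time.
    ts = events[:, 3]
    bin_ids = (ts - ts[0]) // eventframe_width
    # One group of events per `eventframe_width`-microsecond bin.
    return [events[bin_ids == b] for b in np.unique(bin_ids)]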

@hyeongilee (Author)

Thanks.
I wrote some Python code modeled on /example/mnist_keras_brian2.py, using .aedat files.

But the following error occurs:

Traceback (most recent call last):
  File "KH_keras_brian2.py", line 164, in
    main(config_filepath)
  File "/home/student1/.local/lib/python3.5/site-packages/snntoolbox/bin/run.py", line 31, in main
    run_pipeline(config)
  File "/home/student1/.local/lib/python3.5/site-packages/snntoolbox/bin/utils.py", line 145, in run_pipeline
    results = run(spiking_model, **testset)
  File "/home/student1/.local/lib/python3.5/site-packages/snntoolbox/bin/utils.py", line 220, in wrapper
    results.append(run_single(snn, **testset))
  File "/home/student1/.local/lib/python3.5/site-packages/snntoolbox/bin/utils.py", line 142, in run
    return snn.run(**test_set)
  File "/home/student1/.local/lib/python3.5/site-packages/snntoolbox/simulation/utils.py", line 549, in run
    self.parsed_model.layers[0].batch_input_shape, int))
AttributeError: 'InputLayer' object has no attribute 'batch_input_shape'

How can I solve the problem?

@rbodo commented Jun 16, 2020

Please update the toolbox to the latest PyPI (or better: development) version. The error you are seeing is due to a breaking change in Keras.
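For reference, that would be something along these lines (the repository URL is where the project is hosted on GitHub):

pip install --upgrade snntoolbox
pip install git+https://github.com/NeuromorphicProcessorProject/snn_toolbox  # development version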

@hyeongilee (Author)

Updating to the development version solved the problem above.

But the following error occurs. I think it is saying that evaluating the ANN requires input images, and that a method to create such images from an .aedat file is not implemented.

Should I implement this part?

Traceback (most recent call last):
  File "KH_keras_INI_2.py", line 165, in
    main(config_filepath)
  File "/home/student1/.local/lib/python3.5/site-packages/snntoolbox/bin/run.py", line 31, in main
    run_pipeline(config)
  File "/home/student1/.local/lib/python3.5/site-packages/snntoolbox/bin/utils.py", line 81, in run_pipeline
    num_to_test, **testset)
  File "/home/student1/.local/lib/python3.5/site-packages/snntoolbox/parsing/model_libs/keras_input_lib.py", line 209, in evaluate
    raise NotImplementedError
NotImplementedError

When I set the evaluate_ann option to False, I think the weights are set to 0, and nothing appears in the example plots.

@rbodo commented Jun 18, 2020

When using aedat as input, you should turn off the evaluate_ann option. This option allows testing the original ANN before conversion, but that requires that you have the DVS dataset preprocessed as frames.

What do you mean by "nothing appears in the plotting example"? The weights shouldn't be set to 0; all that's happening is that the original model is not tested.

@hyeongilee (Author)

The activations of the layers are almost all 0, as shown below.
[image "0Activations": plot of layer activations, nearly all zero]

I'm building a classification network on my own dataset that distinguishes whether a short input sequence shows a glass bottle or a mug, as shown below.
[image: example from the dataset]

If the weights are not 0, then why are the activation values all 0?

@rbodo commented Jun 19, 2020

OK, what you see here is unrelated to the evaluate_ann option.

The reason the activations are so low is not that the weights are modified but that the input image is so sparse.

Remember that your input is a stream of DVS events. If we want to apply the ANN to these events, we need to bin them into frames. The toolbox allows you to specify how many events should go into one frame:

[input]
num_dvs_events_per_sample = 2000

I would try to increase this number until you get frames that look reasonable. (If you have all plots enabled, you can find the created frames in the log dir.)
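For a rough sense of scale, assuming the 240 x 180 chip_size from above: the sensor has 240 * 180 = 43,200 pixels, so 2000 events touch at most about 4.6% of them, and with maxpool_subsampling each touched pixel holds a single unit value. Well over 95% of the input frame is then zero, which is why the activations come out so small.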

@hyeongilee (Author)

I increased num_dvs_events_per_sample to 200000, but the input image doesn't change noticeably...
Only the noise that looks like a line at x ~ 100 (I'm not sure what it is) is a bit sharper.
[image: input_image]

There seems to be no problem with the input aedat file.

I have attached the input file and the code. Could you please help me?
test.zip

@rbodo commented Jun 21, 2020

Which version of the aedat file format are you using? I'm guessing 3. The toolbox supports versions 1 and 2, so I think what you see here is the result of incorrectly decoding the binary values read from the file.

You could probably implement this yourself; unfortunately, I won't be able to work on this.
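If it helps as a starting point: jAER-style files typically begin with an ASCII header line such as #!AER-DAT2.0 or #!AER-DAT3.1, so a quick version check might look like this (a sketch based on that convention, not toolbox code):

def aedat_version(path):
    # The first line of a jAER .aedat file usually looks like "#!AER-DAT2.0".
    with open(path, 'rb') as f:
        first = f.readline()
    if first.startswith(b'#!AER-DAT'):
        return first[len(b'#!AER-DAT'):].strip().decode()
    return None  # no recognizable header; version unknown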

rbodo closed this as completed on Jul 7, 2020