Help for using .aedat format file #63
The aedat input format works similarly to the `flow_from_directory` method of Keras' `ImageDataGenerator`, which reads images from folders that represent a class. That means you would have to cut your aedat sequence into shorter files that each contain only one class. Then structure your dataset directory like this:
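As a sketch of such a layout (the directory, class, and file names below are hypothetical):

```
dataset/
├── class0/
│   ├── sample0.aedat
│   └── sample1.aedat
└── class1/
    ├── sample0.aedat
    └── sample1.aedat
```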
The [input] section in your config file would look like this:
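A sketch of such a section, assuming the key names (`dataset_format` and `dataset_path` are guesses based on this thread; check the toolbox documentation for the exact spelling):

```ini
[input]
dataset_format = aedat
dataset_path = path/to/dataset
```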
For training your model, you may have binned the DVS events into frames. The following arguments allow reconstructing these frames from the event stream at test time:
Our simulator is time stepped, which means we can only approximate the asynchronous nature of the DVS event stream. We achieve this by feeding in very thin frames (a few microseconds in width). You can specify the bin width in microseconds here:
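For example (the key name `eventframe_width` is an assumption; check your toolbox version's config documentation for the exact option):

```ini
[input]
# Width of one thin frame of events, in microseconds (assumed key name).
eventframe_width = 10
```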
Smaller values mean a more accurate "asynchronous" mode but also more computation.
Thanks. But the following error occurs: Traceback (most recent call last): How can I solve this problem?
Please update the toolbox to the latest pypi (or better: development) version. The error you are seeing is due to a breaking change in Keras.
After updating to the development version, the problem above was solved. But the following error occurs, and I think it is saying that the ANN needs training, and that a method for creating an image from the aedat file to serve as the ANN input is not implemented. Should I implement this part? Traceback (most recent call last): When I set the evaluate_ann option to False, I think the weights are set to 0, and nothing appears in the plotting example.
When using aedat as input, you should turn off the evaluate_ann option. This option allows testing the original ANN before conversion, but that requires that you have the DVS dataset preprocessed as frames. What do you mean by "nothing appears in the plotting example"? The weights shouldn't be set to 0; all that's happening is that the original model is not tested.
OK, what you see here is unrelated to the evaluate_ann option. The reason the activations are so low is not that the weights are modified but that the input image is so sparse. Remember that your input is a stream of DVS events. If we want to apply the ANN to these events, we need to bin them into frames. The toolbox allows you to specify how many events should go into one frame:
I would try to increase this number until you get frames that look reasonable. (If you have all plots enabled, you can find the created frames in the log dir.)
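The option meant here appears later in this thread as `num_dvs_events_per_sample`. As a rough, hypothetical sketch of what such event-count binning does (the function and variable names are made up for illustration):

```python
import numpy as np

def bin_events_to_frame(xs, ys, num_events, width, height):
    """Accumulate the first `num_events` DVS events into a 2D frame
    by counting how many events fall on each pixel."""
    frame = np.zeros((height, width), dtype=np.int32)
    # np.add.at handles repeated indices correctly (unlike frame[ys, xs] += 1).
    np.add.at(frame, (ys[:num_events], xs[:num_events]), 1)
    return frame

# Toy event stream: three events on a 4x4 sensor, two at the same pixel.
xs = np.array([0, 1, 1])
ys = np.array([2, 3, 3])
frame = bin_events_to_frame(xs, ys, num_events=3, width=4, height=4)
print(frame[3, 1])  # two events accumulated at pixel (x=1, y=3)
```

Increasing the event count per frame fills in more pixels, which is why sparse-looking frames suggest the number is too low.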
I increased num_dvs_events_per_sample to 200000, but the input image still doesn't change in a reasonable way... There seems to be no problem with the input aedat file. I have attached the input file and the code. Could you please help me?
Which version of the aedat file format are you using? I'm guessing version 3. The toolbox supports versions 1 and 2, so I think what you see here is the effect of incorrectly decoding the binary values read from the file. You could probably implement this yourself; unfortunately I won't be able to work on it.
Hi,
I'm trying to run a classification task using a DVS camera (.aedat format), and I'm confused about how to set the "label_dict" option in the config file.
For example, I have an .aedat file containing a sequence about 2 minutes long, and the class label varies over time: for the first 10 seconds the class is 0, for the next 10 seconds the class is 1, and so on.
Thanks.
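As suggested at the top of this thread, one approach is to cut the sequence into shorter files that each contain a single class. A minimal sketch of grouping events into fixed-length time windows (assuming you can read the event timestamps, here in microseconds, into a plain list; all names are illustrative):

```python
def slice_events_by_time(ts, window_us):
    """Split a sorted list of event timestamps into consecutive windows
    of fixed length, returning the (start, end) index pair per window."""
    t0 = ts[0]          # start of the current window
    start = 0           # index of the first event in the current window
    slices = []
    for i, t in enumerate(ts):
        # Close windows (possibly empty ones) until t fits in the current one.
        while t - t0 >= window_us:
            slices.append((start, i))
            start = i
            t0 += window_us
    slices.append((start, len(ts)))
    return slices

# Toy stream: timestamps in microseconds, cut into 10-second windows.
ts = [0, 2_000_000, 9_000_000, 11_000_000, 19_500_000, 21_000_000]
windows = slice_events_by_time(ts, window_us=10_000_000)
print(windows)  # [(0, 3), (3, 5), (5, 6)]
```

Each index pair could then be written out as its own short file, labeled with the class active during that window.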