
How to train MEGA on my own dataset? #13

Closed
ZJU-lishuang opened this issue Apr 10, 2020 · 11 comments

Comments

@ZJU-lishuang

No description provided.

@Scalsol
Owner

Scalsol commented Apr 12, 2020

You should organize your dataset like ImageNet VID and generate imageset files like those in datasets/ILSVRC2015/ImageSets. For example, in VID_train_15frames.txt every line contains four fields: the video folder, a placeholder with no meaning (but I still keep it), the frame number, and the video length. You can check the dataset for further details.
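A minimal sketch of generating such an imageset file for a custom dataset (the folder name, the placeholder value, and the 15-frame sampling interval are all assumptions here; match them to your own layout):

```python
# Sketch: emit VID_train_15frames.txt-style lines for one custom video.
# Each line: <video folder> <placeholder (no meaning)> <frame number> <video length>
def make_imageset_lines(video_folder, video_length, interval=15):
    return [
        f"{video_folder} 1 {frame} {video_length}"
        for frame in range(0, video_length, interval)
    ]

# Hypothetical folder name and video length for illustration.
with open("VID_train_15frames.txt", "w") as f:
    for line in make_imageset_lines("train/my_video_0000", 300):
        f.write(line + "\n")
```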

@ZJU-lishuang
Author

What needs to be modified if the number of categories changes?

NUM_CLASSES in configs/BASE_RCNN_{}gpu.yaml
classes and classes_map in mega_core/data/datasets/vid.py

Does anything else need to change?

@Scalsol
Owner

Scalsol commented Apr 13, 2020

> What needs to be modified if the number of categories changes?
>
> NUM_CLASSES in configs/BASE_RCNN_{}gpu.yaml
> classes and classes_map in mega_core/data/datasets/vid.py
>
> Does anything else need to change?

I think it's enough.
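For reference, a hypothetical edit for a three-class dataset might look like this (the class names below are placeholders; classes_map should hold whatever identifiers appear in your annotation XMLs):

```python
# Hypothetical classes / classes_map for a 3-class custom dataset, as they
# might appear in mega_core/data/datasets/vid.py.
# '__background__' must stay at index 0.
classes = ['__background__', 'car', 'person', 'bicycle']
classes_map = ['__background__', 'car', 'person', 'bicycle']

# NUM_CLASSES in configs/BASE_RCNN_{}gpu.yaml must then match:
NUM_CLASSES = len(classes)  # 3 foreground classes + background = 4
```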

@ZJU-lishuang
Author

How can MEGA be tested if the current frame may only access information from previous frames?
What needs to be modified?

@Scalsol
Owner

Scalsol commented Apr 15, 2020

For local frames, change MEGA.MAX_OFFSET to zero, MEGA.KEY_FRAME_LOCATION to -MEGA.MIN_OFFSET, and MEGA.ALL_FRAME_INTERVAL to -MEGA.MIN_OFFSET + 1. For global frames there is no elegant way to modify the current code to support that. One way to achieve it is to keep a queue that stores all previous frames, and randomly select several of them at test time. Hope this helps.
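Concretely, assuming MEGA.MIN_OFFSET is -12 in your config (an illustrative value only; double-check the option path against the repo's config files), the test-time settings would be:

```yaml
# Hypothetical config fragment: restrict MEGA's local frames to
# the current frame and earlier ones only.
MODEL:
  VID:
    MEGA:
      MIN_OFFSET: -12
      MAX_OFFSET: 0            # no future frames
      KEY_FRAME_LOCATION: 12   # -MIN_OFFSET
      ALL_FRAME_INTERVAL: 13   # -MIN_OFFSET + 1
```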

@ZJU-lishuang
Author

Thanks for your help. I will give it a try.

@HzZHoO

HzZHoO commented Jul 27, 2020


Sorry to bother you. @ZJU-lishuang
May I ask how you annotated your own dataset? Does your dataset have annotations for every frame of the videos, just like the VID dataset?

@ZJU-lishuang
Author

Yes, the dataset is just like the VID dataset.

@Dawn-LX

Dawn-LX commented Aug 26, 2020

> What needs to be modified if the number of categories changes?
>
> NUM_CLASSES in configs/BASE_RCNN_{}gpu.yaml
> classes and classes_map in mega_core/data/datasets/vid.py
>
> Does anything else need to change?

Thank you very much! I only modified classes and classes_map in mega_core/data/datasets/vid.py, and I encountered a RuntimeError:
RuntimeError: copy_if failed to synchronize: device-side assert triggered
CUDA error 59: Device-side assert triggered

Then I also modified NUM_CLASSES in configs/BASE_RCNN_{}gpu.yaml to the corresponding number of classes, and the problem was fixed.
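That assert fires when a ground-truth label index exceeds the size of the classifier head, so the two places must agree. A quick sanity check, as a sketch with placeholder names rather than the repo's actual code:

```python
# If len(classes) in vid.py and NUM_CLASSES in BASE_RCNN_{}gpu.yaml
# disagree, labels can index past the classifier head on the GPU and
# trigger the device-side assert above. Placeholder values:
classes = ['__background__', 'car', 'person', 'bicycle']  # from vid.py
NUM_CLASSES = 4  # from configs/BASE_RCNN_{}gpu.yaml

assert NUM_CLASSES == len(classes), (
    f"NUM_CLASSES ({NUM_CLASSES}) != len(classes) ({len(classes)})"
)
```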

@Jezzzzz

Jezzzzz commented Nov 6, 2020

> Yes, the dataset is just like the VID dataset.

Hi, how did you annotate your dataset like the VID dataset?

@ZJU-lishuang
Author

labelImg is enough.
