Regarding running CutLER on Custom dataset #16
Hi, sorry for the late reply. I'll do my best to answer all of your questions, but please let me know if I miss anything.
Hope these answers help.
Closing it now. Please feel free to reopen it if you have further questions.
This was referenced Apr 3, 2023
Hi Folks, excellent read and amazing work! I've been trying to run CutLER on my dataset and have some queries about running the experiment, as well as some clarifications regarding the paper in general. Please let me know if this is not the appropriate medium for the questions; if so, I'll send an email instead. Thanks!
```python
from detectron2.data.datasets import register_coco_instances

register_coco_instances("my_dataset", {}, "json_annotation.json", "path/to/image/dir")
```

Where do I need to run this code snippet from? Can I just create a Jupyter notebook in the CutLER folder and run these snippets? If I do, I also need to provide the annotations file, but I'm trying to use the MaskCut approach discussed in the paper to generate the pseudo ground truth. In that case, how do I pass the .json file to register the dataset?
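For context on what `register_coco_instances` expects, here is a rough sketch of the COCO-style annotation file it reads. All file names, ids, and coordinates below are hypothetical placeholders, not actual CutLER/MaskCut outputs:

```python
import json

# Minimal COCO-style annotation dict of the kind register_coco_instances reads.
# Every value here is a made-up placeholder for illustration only.
coco = {
    "images": [
        {"id": 1, "file_name": "img_0001.jpg", "height": 480, "width": 640},
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,
            "category_id": 1,
            "bbox": [10, 20, 100, 150],  # [x, y, width, height]
            "area": 100 * 150,
            "iscrowd": 0,
            "segmentation": [[10, 20, 110, 20, 110, 170, 10, 170]],
        },
    ],
    "categories": [{"id": 1, "name": "fg"}],
}

# Writing it out gives a file you could point register_coco_instances at.
with open("json_annotation.json", "w") as f:
    json.dump(coco, f)
```

The registration call itself only needs to run once per process, before training starts, so any script or notebook that executes ahead of the trainer (and can import detectron2) should work.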
```shell
--save-path imagenet_train_fixsize480_tau0.15_N3.json
```

However, the naming convention of the json file generated by running maskcut.py is different. So, while running merge_jsons.py, are we supposed to pass the `imagenet_train_fixsize480_tau0.15_N3.json`, or the one that was generated after running maskcut.py?

```shell
python train_net.py --num-gpus 8 \
  --config-file model_zoo/configs/CutLER-ImageNet/cascade_mask_rcnn_R_50_FPN.yaml \
  --test-dataset imagenet_train \
  --eval-only TEST.DETECTIONS_PER_IMAGE 30 \
  MODEL.WEIGHTS output/model_final.pth \  # load previous stage/round checkpoints
  OUTPUT_DIR output/  # path to save model predictions
```
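For what it's worth, merging per-shard annotation files is conceptually just concatenating the `images` and `annotations` lists while reassigning ids so they stay unique. A stdlib-only sketch of that idea (not the actual merge_jsons.py logic, and the toy shard contents are made up):

```python
def merge_coco_shards(shards):
    """Combine per-shard COCO-style dicts, reassigning image and
    annotation ids so they remain unique in the merged file."""
    merged = {"images": [], "annotations": [],
              "categories": shards[0]["categories"]}
    img_id = ann_id = 0
    for shard in shards:
        remap = {}  # old image id (per shard) -> new global image id
        for img in shard["images"]:
            img_id += 1
            remap[img["id"]] = img_id
            merged["images"].append({**img, "id": img_id})
        for ann in shard["annotations"]:
            ann_id += 1
            merged["annotations"].append(
                {**ann, "id": ann_id, "image_id": remap[ann["image_id"]]})
    return merged

# Two toy shards standing in for per-process MaskCut outputs.
cats = [{"id": 1, "name": "fg"}]
shard_a = {"images": [{"id": 1, "file_name": "a.jpg"}],
           "annotations": [{"id": 1, "image_id": 1, "category_id": 1}],
           "categories": cats}
shard_b = {"images": [{"id": 1, "file_name": "b.jpg"}],
           "annotations": [{"id": 1, "image_id": 1, "category_id": 1}],
           "categories": cats}

merged = merge_coco_shards([shard_a, shard_b])
```

Note how both shards reuse image id 1 locally, which is why the remap step is needed before the lists can be concatenated.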
Could you please explain a little bit about these parameters?
And when we want to get the annotations using the following command:

```shell
python tools/get_self_training_ann.py \
  --new-pred output/inference/coco_instances_results.json \  # load model predictions
  --prev-ann DETECTRON2_DATASETS/imagenet/annotations/imagenet_train_fixsize480_tau0.15_N3.json \  # path to the old annotation file
  --save-path DETECTRON2_DATASETS/imagenet/annotations/cutler_imagenet1k_train_r1.json \  # path to save a new annotation file
  --threshold 0.7
```

Here we are passing `coco_instances_results.json` as the model predictions, but are we supposed to pass anything else instead if we are doing custom training on our own dataset? Could you elaborate on what that file is and whether it will be generated when we train?
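As I understand it, `coco_instances_results.json` is the prediction dump detectron2's COCOEvaluator writes during an `--eval-only` run, and the `--threshold 0.7` flag keeps only confident predictions as new pseudo-labels. A toy sketch of that filtering step (the prediction entries below are invented examples, not real outputs):

```python
def filter_predictions(preds, threshold=0.7):
    """Keep only predictions whose confidence score clears the threshold,
    loosely mimicking the pseudo-label selection in a self-training round."""
    return [p for p in preds if p.get("score", 0.0) >= threshold]

# Toy stand-ins for entries in coco_instances_results.json.
preds = [
    {"image_id": 1, "category_id": 1, "bbox": [0, 0, 10, 10], "score": 0.91},
    {"image_id": 1, "category_id": 1, "bbox": [5, 5, 20, 20], "score": 0.42},
]

kept = filter_predictions(preds, threshold=0.7)
```

With a 0.7 threshold only the first toy prediction survives; raising or lowering the threshold trades pseudo-label precision against recall.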
I have some more theoretical doubts as well; let me know if I should add them to this issue or create a separate one. Thanks, and sorry for the extended and (possibly) trivial queries regarding semantics.