The training schedule #17
Comments
I actually trained using their scripts but could not recreate their results (prev-class AP50: tensor(43.3941)). One thing to note is that they continue numbering the epochs across tasks, so the second task resumes at epoch 50 and trains for an additional 50 epochs.
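In case the continued numbering is confusing, here is a minimal sketch of how the logged epoch counter advances across tasks, assuming 50 training epochs per task; the names and structure are illustrative, not taken from the repo:

```python
# Minimal sketch of epoch numbering continuing across tasks,
# assuming 50 training epochs per task (hypothetical names, not repo code).
EPOCHS_PER_TASK = 50

def epoch_range(task_id: int, epochs_per_task: int = EPOCHS_PER_TASK):
    """Return the (start, end) epoch numbers logged for a 1-indexed task."""
    start = (task_id - 1) * epochs_per_task
    return start, start + epochs_per_task

for task in range(1, 5):
    start, end = epoch_range(task)
    print(f"Task {task}: epochs {start}..{end - 1}")
# Task 2 resumes at epoch 50 and runs through epoch 99, matching the note above.
```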
Yeah, I did notice that they continue numbering the epochs across tasks. But even so, their scripts are apparently different from what is described in the paper.
Hello @luckychay @orrzohar-stanford, the paper uses 2 open-world splits, and I have updated the repo with configs for both splits. Can you please let me know which split config is causing the problem?
Dear authors,
Thanks for your reply. I am using the old splits from ORE; in fact, the config is not causing any problem for me. I am just confused about how many epochs I should train and finetune for in the incremental steps. I notice that your newly uploaded scripts train for about 5 epochs on tasks 2, 3, and 4, and then finetune for 45, 30, and 20 epochs respectively, which is much less than the 50 training epochs described in the paper. How could that happen? I am not familiar with this part, and thank you for your patience.
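For reference, here is how I would tabulate that schedule (the per-task numbers restate the comment above; the dict itself is illustrative and not parsed from configs/OWOD_new_split.sh):

```python
# Schedule as described in this thread (values restated from the comment above;
# this dict is illustrative, not parsed from configs/OWOD_new_split.sh).
schedule = {
    1: {"train": 50, "finetune": 0},
    2: {"train": 5,  "finetune": 45},
    3: {"train": 5,  "finetune": 30},
    4: {"train": 5,  "finetune": 20},
}

for task, s in schedule.items():
    total = s["train"] + s["finetune"]
    print(f"Task {task}: {s['train']} train + {s['finetune']} finetune = {total} epochs")
# Task 2 sums to 50 epochs, but tasks 3 and 4 sum to 35 and 25, which is what
# prompted the question about the paper's 50-epoch training schedule.
```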
The weights are uploaded in the repository. The results you have shared look pretty close, so the gap might just be due to an environment or machine change, but you can always visualize how the unknown classes respond in your code and check.
Dear @akshitac8 and @orrzohar-stanford, may I ask how long it takes to train the model on OWOD_split_task1 for 50 epochs using 8 V100 GPUs? I only have 2 RTX 3090 GPUs, and I am trying to estimate whether training is feasible and how long it might take. Thanks.
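Not an author, but a back-of-envelope way to estimate this yourself: time one epoch on your own machine and scale linearly with GPU count. Every number in the sketch below is a placeholder assumption, not a measured figure:

```python
# Back-of-envelope estimate for scaling training time from 8x V100 to 2x RTX 3090.
# All numbers here are assumptions for illustration; time one epoch locally
# to calibrate before trusting the result.
hours_per_epoch_8x_v100 = 0.5   # hypothetical per-epoch time on the authors' setup
epochs = 50

# Assume roughly linear scaling with GPU count and comparable per-GPU throughput
# (this ignores batch-size and memory differences, which can matter for
# DETR-style models).
scale = 8 / 2
est_hours = hours_per_epoch_8x_v100 * epochs * scale
print(f"Estimated wall-clock time on 2x RTX 3090: ~{est_hours:.0f} hours")
```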
Dear Author,
In the paper, I see that every task is trained for 50 epochs and finetuned for 20 epochs. However, the training schedule in configs/OWOD_new_split.sh follows a different setting. Is there anything I missed? Looking forward to your reply. Thanks.