
Unstable training when saving zero-shot weights #9

Closed
joshmyersdean opened this issue Dec 4, 2023 · 0 comments
Hello!

I am trying to reproduce the Pascal Part results when going from base to novel categories, and I am getting different results when saving my own weights versus using the ones you provide in datasets/metadata.

For example, these are my results after 2000 iterations (1 GPU) when using the saved weights you provide:
[screenshot: results with provided weights]

And these are my results after saving new weights with tools/pascal_part_base_clip_name.py:
[screenshot: results with regenerated weights]

The only difference between the two sets of weights appears to be in precision (around the 1e-4 place). Do you know why there would be such a discrepancy?
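For reference, here is a minimal sketch of how I am quantifying the difference between the two weight files. The file paths are placeholders for the provided and regenerated weights; the simulated arrays below just stand in for two embedding matrices that differ only at the 1e-4 level (e.g., fp16 vs. fp32 round-off), which matches what I observe:

```python
import numpy as np

# Hypothetical paths -- substitute the actual provided vs. regenerated files:
# a = np.load("datasets/metadata/<provided_weights>.npy")
# b = np.load("<regenerated_weights>.npy")

# Simulated stand-ins: two weight matrices differing only at the 1e-4 level.
rng = np.random.default_rng(0)
a = rng.standard_normal((93, 512)).astype(np.float32)
b = (a + rng.uniform(-1e-4, 1e-4, a.shape)).astype(np.float32)

# Largest element-wise discrepancy between the two matrices.
max_abs_diff = float(np.max(np.abs(a - b)))

# Per-row cosine similarity, since these act as class-name embeddings.
cos = np.sum(a * b, axis=1) / (
    np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
)

print(f"max |a-b| = {max_abs_diff:.2e}")
print(f"min cosine similarity = {cos.min():.6f}")
```

Even with the cosine similarities this close to 1, training behaves very differently, which is what surprises me.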

Thank you!
Josh
