Eval runs OK the first time but then throws a "ZeroDivisionError: division by zero" #98
This generally means that your image is not in the val set. Note that preds_filt is the filtered predictions: predictions for images not in COCO val 2014 are removed.
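To illustrate the failure mode described above: if every prediction is filtered out because its image id is missing from the COCO val 2014 set, any averaging step downstream divides by zero. This is only a minimal sketch; the function and field names below are assumptions, not the repo's actual code.

```python
def filter_preds(preds, valid_image_ids):
    """Keep only predictions whose image id is in the val set (hypothetical sketch)."""
    return [p for p in preds if p["image_id"] in valid_image_ids]

def average_score(preds_filt):
    """Average a score over the filtered predictions."""
    total = sum(p["score"] for p in preds_filt)
    # When preds_filt is empty, len(preds_filt) == 0 and this raises
    # ZeroDivisionError: division by zero -- the error from the issue title.
    return total / len(preds_filt)

preds = [{"image_id": 123, "score": 0.9}]
val_ids = set()  # image 123 is not in COCO val 2014
preds_filt = filter_preds(preds, val_ids)

try:
    average_score(preds_filt)
except ZeroDivisionError:
    print("no predictions survived filtering")
```

So the symptom is a division error, but the underlying problem is an empty filtered prediction list.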
OK, but why would the inference work the first time I run it? The viz.json file was created correctly that first time.
I'm confused too, sorry. I think some other issues mention this problem as well.
It's probably something silly somewhere, but I am not familiar with the code base, so I would have to do quite some digging to identify why I get this behavior from the second run onward. Thanks for this great work, by the way; really cool project.
OK, so I quickly replicated your Colab notebook locally in a Jupyter notebook on my machine, and it works just fine. I'll try to tweak it further. Obviously this is just the inference part, so if folks want the training part too, they need to stick with the full repo code base.
I had the same problem when doing evaluation: it works the first time but then throws this division-by-zero error every time after.
It's happening on my end as well. Every pretrained ResNet model can only run once; after that, eval.py throws the zero-division error.
Tell me if the solution in #100 fixes it. If so, I will push a bug fix.
Wow, thanks for the quick reply! I tried that one, but it doesn't seem to fix the problem. The code looks like this, but I don't know if that's what you intended it to be:
Sorry, the fix in #100 is already there. Interesting; are you using master?
Do you want to try setting --language_eval to 0?
Yes, I am using master. I will try that! Give me a sec.
Remove what I asked you to add. That was wrong.
Sorry, this may sound a bit silly, but by …
I am in the middle of something, but I believe language_eval is a command-line argument; would you mind trying it?
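For readers unfamiliar with how such a switch is usually wired: below is a minimal sketch of a --language_eval flag done with argparse. The flag name comes from the thread; the type, default, and help text are assumptions, not necessarily what the repo does.

```python
import argparse

parser = argparse.ArgumentParser()
# Assumed wiring: an integer switch where 1 runs the COCO language metrics
# (which need the image to be in the val 2014 annotations) and 0 skips them.
parser.add_argument("--language_eval", type=int, default=1,
                    help="1 = run language metrics, 0 = skip them")

opt = parser.parse_args(["--language_eval", "0"])

if opt.language_eval == 0:
    print("skipping language evaluation")
```

With the flag at 0, the filtering-and-scoring step that divides by the number of valid predictions would never run, which is presumably why it was suggested here.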
Yeah, I already tried that, but …
I am on Python 3.7.8, PyTorch 1.6, and Windows 10, if that info helps.
Can you show the full error traceback? |
Replace lines 74 and 75 with:
Thanks for the fix. Now it produces the JSON file, but the content of the JSON file is always the same no matter how I change what's in …
add --force |
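A guess at why --force helps: the symptoms (identical captions across runs, results frozen after the first run) suggest the script reuses a previously written result file unless forced to regenerate it. The sketch below shows that caching pattern; the function name, file path, and flag plumbing are all assumptions for illustration, not the repo's actual code.

```python
import json
import os

def run_eval(images, out_path="viz.json", force=False):
    """Hypothetical eval wrapper that caches its output JSON."""
    # If a result file already exists and force is False, the stale results
    # from the first run are returned unchanged -- matching the reported
    # behavior of the JSON content never updating between runs.
    if os.path.isfile(out_path) and not force:
        with open(out_path) as f:
            return json.load(f)
    # Placeholder "inference": real code would run the captioning model here.
    results = [{"image": img, "caption": "caption for " + img} for img in images]
    with open(out_path, "w") as f:
        json.dump(results, f)
    return results
```

Under this reading, --force simply bypasses the cache check, which is why the JSON finally changes once it is passed.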
Ah it works! Thanks for all the help. |
Hi all, I am having the same problem. I tried the proposed fix without success. Which folder do you make the …
Hey, I have the same problem: my JSON file shows the same captions no matter the images. I used --force as you said, but it returned the following error:
Hi,
I got the eval code to run OK the first time on an image, but when I try to run it again, on the same image or on any other image, I get the following error message.
Is there some buffer that I need to clean up somewhere between each inference run?
The same pattern occurs if I change the pretrained model used for evaluation: it works the first time, then throws this division-by-zero error every time after.
Thanks for your help.