In the Jupyter viewer, INV and DIR tests show the predictions of the two sentences in reverse order. #34
Hi! Could you please give me a more complete example test, so I can take a closer look? E.g., what do you see if you print
Hi, just guessing you are Chinese from your username, so I'm posting one example for you to take a look at. Examples: I removed the punctuation of the sentence for the INV test; because it's an NER model, test.conf is None. Another issue is that if the sentence and pred of a test case are too long, Jupyter does not show the complete pred in the test case's HTML box, because the pred box does not wrap lines automatically the way the sentence box does. So I need to change the width of the Jupyter cell to 200% to show the whole content.
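The punctuation-removal perturbation described above can be sketched in plain Python (the helper name `strip_punctuation` is illustrative, not part of the CheckList API):

```python
import string


def strip_punctuation(sentence):
    """Remove all punctuation so an INV test can check that
    the model's predictions stay unchanged."""
    return sentence.translate(str.maketrans('', '', string.punctuation))


# Each INV test case pairs the original sentence with its perturbation;
# the test expects the same prediction for both members of a pair.
pairs = [(s, strip_punctuation(s)) for s in ["I live in New York, USA."]]
```

The comma and period are stripped, so the NER model is queried on `"I live in New York USA"` alongside the original sentence.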
Thanks for catching both bugs! (And yes, I'm Chinese :P)
Thank you for doing that. I want to ask: can I just copy some folders like checklist/viewer and checklist/visual_interface and then reinstall the project? I ask because I have modified a lot of files in the project.
Yeah that works, just copy-n-replace |
Hi, I copy-n-replaced /checklist/viewer/static/ just like you said, and then ran pip install -e . in the project. But I get the output below, and I don't know whether it reinstalled successfully: Installing collected packages: checklist
I can't really guess what's going on, but hopefully this Stack Overflow thread can help. Another hack to try is essentially forcing pip to upgrade the package.
Thanks for the reply. I didn't do anything, but when I tested in Jupyter again I found the bug is gone!
Hi :) The strange thing is that I only get this error when I test a model other than the released ones (amazon, google, etc.). Are there particular format rules to follow besides saving the tests_n500 predictions in a txt file with the format "pred - prob for 0 - prob for 1 - prob for 2"? Sentiment-laden words in context Example fails:
Hm, this is odd. Can you provide us with a small example? |
Yes, of course, and thank you.
2 0.99984 0.000000 0.00016
I report the first lines of the textual summary too; the problem persists only when negative labels are involved:
Vocabulary
single positive words
single negative words Example fails:
single neutral words Example fails:
Sentiment-laden words in context Example fails:
neutral words in context Example fails:
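The per-line layout described above ("pred prob0 prob1 prob2", e.g. `2 0.99984 0.000000 0.00016`) could be read like this; the function name is illustrative, not part of CheckList:

```python
def parse_pred_line(line):
    """Parse one line of a prediction file laid out as:
    'pred prob_for_0 prob_for_1 prob_for_2'."""
    fields = line.split()
    pred = int(fields[0])          # predicted label
    probs = [float(x) for x in fields[1:]]  # one probability per class
    return pred, probs


pred, probs = parse_pred_line("2 0.99984 0.000000 0.00016")
# pred == 2, probs == [0.99984, 0.0, 0.00016]
```

Whitespace-splitting keeps the parser tolerant of single or multiple spaces between fields.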
Sorry for the long delay in responding. The format you are using is
If I use Suite.summary(), I get the correct result, like:
Example fails: 1 (0.8) I'm the guy. 0 (0.9) I'm eth guy.
But with Suite.Summarizer(), the result shown in Jupyter may look like:
I'm the guy.→ I'm eth guy. Pred: 0(0.9)→1(0.8)
I can't find where the bug happens, so please help me debug it. Thanks!
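The swap reported above can be reproduced with a plain-Python sketch (no CheckList internals; all names are illustrative). If the viewer reverses only the predictions of a test case's (sentence, prediction) pairs, the original sentence ends up paired with the perturbed sentence's prediction:

```python
# An INV test case as an ordered list of (sentence, prediction) pairs,
# matching the Suite.summary() output in the report.
case = [("I'm the guy.", "1 (0.8)"), ("I'm eth guy.", "0 (0.9)")]

# Suite.summary() keeps the stored order:
summary_view = case

# A viewer bug that reverses only the predictions would display:
sentences = [s for s, _ in case]
preds = [p for _, p in reversed(case)]
viewer_view = list(zip(sentences, preds))
# → [("I'm the guy.", "0 (0.9)"), ("I'm eth guy.", "1 (0.8)")]
```

This matches the observed Jupyter output, where `Pred: 0(0.9)→1(0.8)` is attached to `I'm the guy.→ I'm eth guy.` in the opposite order from the textual summary.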