Fix #776. #777
Conversation
I really like this solution.
Would you like to add a test as well? I am also fine with merging now and adding a test case later ourselves.
```python
elif isinstance(metric, list):
    ret = {}
    for m in metric:
        ret.update(compute_metrics(m, preds, labels))
```
Nice one! The recursive call covers all cases, even registered metrics or lists of lists.
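For readers following the thread, here is a minimal, self-contained sketch of how such a recursive dispatch can work. The `simple_metrics` table and `registered_metrics` dict below are stand-ins for illustration, not FARM's exact internals:

```python
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical lookup tables standing in for FARM's metric registry.
simple_metrics = {
    "acc": lambda preds, labels: {"acc": accuracy_score(labels, preds)},
    "f1_macro": lambda preds, labels: {"f1_macro": f1_score(labels, preds, average="macro")},
}
registered_metrics = {}  # custom metrics registered at runtime


def compute_metrics(metric, preds, labels):
    if isinstance(metric, str) and metric in simple_metrics:
        return simple_metrics[metric](preds, labels)
    elif isinstance(metric, str) and metric in registered_metrics:
        return registered_metrics[metric](preds, labels)
    elif isinstance(metric, list):
        # The branch from this PR: recurse into each entry and merge the
        # resulting dicts, so lists (even nested ones) just work.
        ret = {}
        for m in metric:
            ret.update(compute_metrics(m, preds, labels))
        return ret
    else:
        raise KeyError(f"Unknown metric: {metric}")


# A list (even a nested one) yields a single merged dict:
print(compute_metrics(["acc", ["f1_macro"]], [0, 1, 1, 0], [0, 1, 0, 0]))
```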
I would be fine with adding a test, but I had difficulties getting started with how testing works in FARM. Is there anything that needs to be done in addition to the normal installation procedure to get set up for development, and what is the proper way to run the unit tests?
Hey @johann-petrak, see Line 34 in 751b48f.
Let me know if you face any problems with this.
Thanks - I still get what looks like a dependency problem:
I do have transformers version 4.5.0 installed (which matches requirements.txt), but the module transformers.tokenization_albert does not exist. I had a quick look at the current transformers source code, and it seems there is a class with that name, just in a different location.
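For reference, assuming the standard transformers 4.x layout (where per-model modules moved under `transformers.models`), the difference looks like this:

```python
# transformers 4.x moved per-model modules under transformers.models, so the
# flat 3.x-style import fails:
#   from transformers.tokenization_albert import AlbertTokenizer  # ModuleNotFoundError
# The tokenizer is still available at the new location:
from transformers.models.albert.tokenization_albert import AlbertTokenizer
# ...or, more robustly, via the top-level package:
# from transformers import AlbertTokenizer
```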
It seems that your tests are using another (outdated) FARM installation. The import that is flagged in your logs (farm/modeling/tokenization.py:25) shouldn't reference transformers.tokenization_albert in the current version. If you run this from an IDE, you probably need to make sure that the test is using your current project code. If you run this from the terminal, make sure pytest is picked up from the right environment.
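One quick way to verify which installation a test run actually picks up is plain Python; nothing here is FARM-specific beyond the package name:

```python
# Drop this at the top of a test (or run it in the same interpreter pytest
# uses) to see which installations are actually on the path:
import farm
import transformers

print("FARM loaded from:", farm.__file__)
print("transformers version:", transformers.__version__)
# If farm.__file__ points outside your project checkout, pytest is importing
# a stale installation; running `python -m pytest` from the project root with
# the intended interpreter usually resolves this.
```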
Thanks, your observation helped point me in the right direction. After uninstalling pytest from the base and all other non-env locations, it worked. Out of 92 tests, 11 were deselected and 81 selected; 80 passed and 1 failed:
Since I could not find an existing unit test module for the metric module, I have created an initial one that tests the list support; it could eventually cover many more of the calls in evaluation.metric.
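A minimal sketch of what such a test can look like. It assumes the `compute_metrics(metric, preds, labels)` signature from the diff above; the import path and the metric names "acc" and "f1_macro", as well as plain lists being accepted as inputs, are assumptions about the registry:

```python
import pytest
from farm.evaluation.metrics import compute_metrics  # assumed module path


def test_compute_metrics_with_list():
    preds = [0, 1, 1, 0]
    labels = [0, 1, 0, 0]

    # Passing a list of metric names should return one merged dict
    # containing the keys of every individual metric.
    single_acc = compute_metrics("acc", preds, labels)
    combined = compute_metrics(["acc", "f1_macro"], preds, labels)

    assert set(single_acc) <= set(combined)
    assert combined["acc"] == pytest.approx(single_acc["acc"])
```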
Ok, that's really nasty and seems like a pytest bug. Good to know. Regarding the failing test: it seems to pass in CI, so it's probably only a minor precision fluctuation in your local run. From my side this seems ready to be merged, or do you want to add anything else here?
Fine from my side!
I came across this failing test too when I upgraded PyTorch to 1.8.1 in this PR. We will relax the test there; see Line 97 in 24a5b05.
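For illustration, relaxing a flaky floating-point assertion typically means widening the tolerance. A hypothetical before/after (the metric name and values here are made up, not the actual test):

```python
import pytest

# Hypothetical flaky assertion: exact values drift slightly across
# pytorch versions and hardware.
result = {"loss": 0.81249}

# Before: tight tolerance, fails on minor numeric drift.
# assert result["loss"] == pytest.approx(0.8125, abs=1e-5)

# After: a relaxed absolute tolerance absorbs the fluctuation.
assert result["loss"] == pytest.approx(0.8125, abs=1e-3)
```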
Also add a Python docstring for the compute_metrics function.
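A possible docstring, sketched from the behavior discussed in this thread; the wording is a suggestion, not the merged text:

```python
def compute_metrics(metric, preds, labels):
    """
    Compute one or more evaluation metrics for the given predictions.

    :param metric: Name of a built-in metric, a registered custom metric,
                   or a (possibly nested) list of such names. For a list,
                   each entry is resolved recursively and the resulting
                   dicts are merged.
    :param preds: Predicted labels.
    :param labels: Ground-truth labels.
    :return: Dict mapping metric names to their computed values.
    """
```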