Fix Remote Eval Script #68
Conversation
@@ -1,4 +1,3 @@
-import random

def evaluate(user_submission_file, phase_codename, test_annotation_file=None, **kwargs):
Can you update the order of arguments to be submission_file, test_annotation_file, phase_codename, **kwargs?
@Ram81 I think they will have to change the code in main.py
to correctly pass the test annotation file if we make it a positional argument. I made it None
and placed it at the end so they can pass a default, or skip it and use phase_codename
to find the correct file. Wdyt?
I see, that makes sense. In that case, can you add a comment in the evaluate method of this form:
'''
# Load test annotation file for current phase
test_annotation_file = json.load(open("{phase_codename}_path", "r"))
'''
(Note: json.load, not json.loads, since we are reading from a file object.)
I have made a couple of changes accordingly. Please let me know if they look okay.
LGTM. Nicely done!
This PR fixes the remote eval script template by replacing it with a locally tested, working version.