Submission modules #180
Conversation
… model scoring with a plug-in repo mechanism
high-level comments
- cool!
- I find the similar-sounding methods in `submission/__init__.py` and `score_model.py` confusing. Suggestion: move the `score_model` method to `brainscore/__init__.py`, and rename `score_model.py` to `submission/__init__.py`
- there's some mixing of coding styles: `os.path` vs `pathlib`, and string-formatting vs f-strings
- the manual file-writing could be made more robust with pandas I think
- the manual db-writing could be made more robust with an ORM library
- is there a way for us to unit-test parts of this? e.g. by mocking jenkins and the database away and asserting that the right things are written?
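The unit-testing suggestion above could be sketched roughly as follows. Note that `save_score` and its `connection` parameter are hypothetical names invented for illustration, not code from this PR:

```python
from unittest import mock

def save_score(connection, model_name, score):
    # Hypothetical helper: writes one scoring result through a DB connection.
    cursor = connection.cursor()
    cursor.execute(
        "INSERT INTO scores (model, score) VALUES (%s, %s)",
        (model_name, score),
    )
    connection.commit()

def test_save_score_writes_expected_row():
    # Mock the database away and assert that the right things are written.
    connection = mock.Mock()
    save_score(connection, "alexnet", 0.42)
    connection.cursor.return_value.execute.assert_called_once_with(
        "INSERT INTO scores (model, score) VALUES (%s, %s)",
        ("alexnet", 0.42),
    )
    connection.commit.assert_called_once()

test_save_score_writes_expected_row()
```

The same pattern (a `mock.Mock()` standing in for the external service) would apply to the jenkins client: the test then asserts on the calls made to the mock rather than on real side effects.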
@mschrimpf: I've worked through your comments. I don't think an ORM is necessary; I only have one insert statement, so this should be fine as it is. Also, I didn't add the code to …
OK, everything looks good except the argument list in https://github.com/brain-score/brain-score/pull/180/files#diff-e76a4ede37cda5bffaf698022bbf6f24R85, which should either use keyword arguments or a dict with the arguments, as we talked about.
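The two alternatives mentioned for the argument list might look like this; the function and parameter names here are made up for illustration and are not the PR's actual signature:

```python
def run_submission(zip_path, email, public, workers, branch):
    # Hypothetical entry point with several parameters; names are illustrative.
    return {"zip_path": zip_path, "email": email, "public": public,
            "workers": workers, "branch": branch}

# Hard to read as a long positional list:
#   run_submission("model.zip", "user@example.org", True, 8, "master")

# Option 1: keyword arguments make each value self-documenting at the call site.
result = run_submission(zip_path="model.zip", email="user@example.org",
                        public=True, workers=8, branch="master")

# Option 2: collect the arguments in a dict and unpack it with **.
config = {"zip_path": "model.zip", "email": "user@example.org",
          "public": True, "workers": 8, "branch": "master"}
result = run_submission(**config)
```

Both options also make it harder to accidentally swap two same-typed positional arguments.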
great work!
This PR adds code that allows model submissions via a zip file or a git repository.
The modules unpack and run the submitted models and automatically save the results to the configured database.
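A minimal sketch of the unpack step for a zip submission might look as follows; the function name and directory handling are assumptions for illustration, not the PR's actual code:

```python
import tempfile
import zipfile
from pathlib import Path

def unpack_submission(zip_path, work_dir=None):
    """Extract a submitted zip into a working directory and return its path.

    Hypothetical helper; the real submission module may differ.
    """
    work_dir = Path(work_dir or tempfile.mkdtemp())
    with zipfile.ZipFile(zip_path) as archive:
        archive.extractall(work_dir)
    return work_dir
```

After unpacking, the submitted models would be imported and scored, and the results written to the configured database.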