Renaming and saving run results #2696
Conversation
Codecov Report
Patch coverage:

Additional details and impacted files

@@            Coverage Diff             @@
##           develop    #2696       +/-   ##
============================================
- Coverage    61.40%   48.58%   -12.82%
============================================
  Files          256      236       -20
  Lines        42954    32146    -10808
  Branches       349      349
============================================
- Hits         26374    15619    -10755
+ Misses       16580    16527       -53
Just one comment about repeated code; LGTM other than that.
@@ -163,12 +164,33 @@ def get_fields(self, samples, eval_key):

        return fields

    def rename(self, samples, eval_key, new_eval_key):
Saw this same method above in multiple classes; can we extract it out somewhere to remain DRY?
I was following along with how the cleanup() and get_fields() methods are implemented explicitly on each evaluation protocol. The implementations for each protocol are currently similar, but there's no inherent reason why they would always be the same, so I think some repeated code is okay in this case.
I see what you're saying, but I can't say I agree with the logic. I feel like this is why, when we add something to the OSS backend, we'll have multiple places to add it. That's probably fine if you're the one making the edit, but anyone else might easily miss a spot.
As it currently stands, there are XXXEvaluation base classes for each type of evaluation protocol: classification, detection, regression, and segmentation. The abstraction that would remove the duplicate code is: "this class of evaluation protocol supports either sample-level or frame-level fields. When processing sample-level fields, it populates a single field whose name matches the eval_key. When processing frame-level fields, it populates frame-level and sample-level fields whose names match eval_key." For that class of methods, we could abstract get_fields(), rename(), and cleanup() into a mixin and share the code.

It doesn't seem that useful to me, though. Sometimes when there are a bunch of fancy subclass implementations, it becomes less clear how to implement a new instance of the interface (e.g., a new label type to evaluate) because it seems like you have to follow how the base class does some things rather than just doing whatever you need for your case.
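For readers following along, here is a minimal sketch of what the mixin described above could look like. This is an illustration, not the actual FiftyOne code: the class name EvalFieldsMixin, the _eval_fields() hook, and the samples._dataset accessor are assumptions, while rename_sample_field() and delete_sample_field() are existing Dataset methods.

```python
# Hypothetical sketch of the mixin discussed above; not actual FiftyOne code
class EvalFieldsMixin:
    """Shared logic for evaluation protocols whose field names are
    derived directly from the eval key."""

    def _eval_fields(self, samples, eval_key):
        # Assumed hook: subclasses would override this if they populate
        # fields beyond the default eval_key-named one
        return [eval_key]

    def get_fields(self, samples, eval_key):
        return self._eval_fields(samples, eval_key)

    def rename(self, samples, eval_key, new_eval_key):
        dataset = samples._dataset  # assumed accessor for the source dataset
        for field in self._eval_fields(samples, eval_key):
            # Swap the old eval key prefix for the new one
            new_field = field.replace(eval_key, new_eval_key, 1)
            dataset.rename_sample_field(field, new_field)

    def cleanup(self, samples, eval_key):
        dataset = samples._dataset
        for field in self._eval_fields(samples, eval_key):
            dataset.delete_sample_field(field)
```

The tradeoff raised in the comment above still applies: each protocol subclass would inherit these defaults, at the cost of new protocol implementations having to understand the hook before deviating from it.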
* adding factory methods for cuboids and rotated boxes
* documenting
* linting
* adding new label types
* adding support for overriding the shape's filled status
* always treat polygons as filled
* removing duplicate 3D label descriptions
* cleanup
* documenting point cloud datasets
* initial text similarity in app
* 782 - add see more to similarity menu, separate by text/image
* save sortByImage and sortByText differently in atom
* update graphQL schema to get supportsPrompts of brain methods
* add warning when brain key exists but does not support text prompts
* cleanup
* fix graphql schema bug
* adding support for querying by vectors
* adds support for text prompts
* updating docstrings
* show indication when there is no brain key
* adds support for text prompts
* update based on PR review
* copy tweaks
* only show brain keys that supportsPrompt in text mode
* refactor components and clean up
* cleanup
* break up the main component
* refactor utils and useEffect
* code pass
* fix info link
* update cls for similarity brain run condition
* fix viewbar parsing validation error
* toggle icons
* fix k input setting issue and tune icons to grey color
* rm inapplicable sort
* fix extend stage bug and viewbar casting issue
* fix export name inconsistency
* add brain run method type into graphQL and replace the cls hack for getting similarity brain methods
* convert brainMethods.config.type from list to string
* add hack back as a fallback for the sort/ route dataset.brainMethods.config.type bug
* sentence case
* linting
* apply extended stages first
* apply extended stages before filters
* add loading state after similarity run submission + correct the tooltip
* loading icon style
* path folder name de-capitalized to fix build issue
* path issue with build
* removing duplicate docs
* add loading icon for image search + trying to add brainmethod.config.type to data in Sort.py
* fix duplicate progress icons
* use setattr to overwrite bound
* linting
* add progress icon for imagesearch

---------

Co-authored-by: brimoor <brimoor@umich.edu>
Co-authored-by: Ritchie Martori <ritchie@voxel51.com>
Co-authored-by: Benjamin Kane <ben@tapes.co>
Change log
* Added rename_annotation_run(), rename_evaluation(), and rename_brain_run() methods
* Added a type argument to list_annotation_runs(), list_evaluations(), and list_brain_runs() to retrieve runs of a specific type (e.g., similarity indexes or visualizations within the full set of brain keys)
* Added a save() method to all RunResults objects that allows for saving updates to results to the database
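A brief usage sketch of the additions in this change log. The dataset, eval keys, and brain key below are made up for illustration; the sketch assumes FiftyOne's standard quickstart zoo dataset and that fob.Similarity is an accepted value for the new type argument.

```python
import fiftyone as fo
import fiftyone.zoo as foz
import fiftyone.brain as fob

# Hypothetical dataset and run names for illustration
dataset = foz.load_zoo_dataset("quickstart")

# Rename an existing evaluation run and its associated fields
dataset.evaluate_detections(
    "predictions", gt_field="ground_truth", eval_key="eval"
)
dataset.rename_evaluation("eval", "eval2")

# List only brain runs of a specific type via the new `type` argument
fob.compute_similarity(dataset, brain_key="img_sim")
print(dataset.list_brain_runs(type=fob.Similarity))

# Persist updates to saved results via the new save() method
results = dataset.load_evaluation_results("eval2")
results.save()
```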