[issue-799] coco split into train/test; updated coco dataset module api #805
Conversation
Check out this pull request on ReviewNB: see visual diffs & provide feedback on Jupyter Notebooks. Powered by ReviewNB.
update: @ItayGabbay @nirhutnik

In general it seems good to me, but I would prefer that Gabbay take a look, as it's his code. If I understand correctly, this means that sometimes we get average precision = None (why?), and in those cases you want it to count as 0. I'm not sure what is done later with that information: if it's summed up, then OK, but I guess it is averaged again, and in that case maybe it should just be ignored and not counted in the denominator either. But I haven't dived into the code yet.
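To make the distinction concrete, here is a hypothetical sketch (not the actual deepchecks code) of the two aggregation behaviors being discussed, for per-class average precision where some classes come back as None:

```python
# Hypothetical sketch, not the deepchecks implementation: two ways to
# aggregate per-class average precision when some classes have AP = None.
def mean_ap_none_as_zero(aps):
    """Count missing APs as 0 -- they still weigh down the mean."""
    vals = [0.0 if ap is None else ap for ap in aps]
    return sum(vals) / len(vals)

def mean_ap_ignore_none(aps):
    """Drop missing APs from the numerator and the denominator."""
    vals = [ap for ap in aps if ap is not None]
    return sum(vals) / len(vals) if vals else 0.0

aps = [0.5, None, 0.7]
print(mean_ap_none_as_zero(aps))  # ~0.4
print(mean_ap_ignore_none(aps))   # 0.6
```

The two choices give different scores for the same results, which is why it matters whether the None is summed, averaged, or dropped downstream.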
also, about those two lines:

```python
80:    if train and test:
81:        train.validate_shared_label(test)
```

If I got it right, in an object-detection task the label is a 2D array, and for each row we should probably only verify that the number of columns is the same?
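A hypothetical sketch of the column-count check suggested here (the real `validate_shared_label` in the PR may check more than this):

```python
import numpy as np

# Hypothetical sketch of the suggested check, not the PR's actual code.
def validate_shared_label(train_labels, test_labels):
    """Detection labels are 2D arrays (one row per bounding box), so
    only the number of columns has to match between the datasets."""
    if train_labels.shape[1] != test_labels.shape[1]:
        raise ValueError(
            f"label column mismatch: {train_labels.shape[1]} "
            f"vs {test_labels.shape[1]}"
        )

train = np.zeros((3, 5))  # 3 boxes, 5 columns (class id + 4 bbox coords)
test = np.zeros((7, 5))   # a different number of rows is fine
validate_shared_label(train, test)  # no error
```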
@yromanyshyn I think that Gabbay's PR solves that anyway, as he transfers label validation to the LabelEncoder classes.
```python
    pin_memory: bool = True,
    object_type: Literal['Dataset', 'DataLoader'] = 'DataLoader'
) -> t.Union[DataLoader, vision.VisionDataset]:
    """Get the COCO dataset and return a dataloader.
```
I would call it the COCO 128 dataset, as the COCO dataset is much larger than that.
Also, did we validate the license of this dataset? And of the model?
Changed; it is GNU Version 3.
makefile
```diff
-	xargs -P4 -I'{}' $(JUPYTER) nbconvert --execute '{}' \
-		--to notebook --stdout > /dev/null
+	$(JUPYTER) nbconvert --execute $$(find ./docs/source/examples -name "*.ipynb") --to notebook --stdout > /dev/null
```
why?
Weird, but I initially thought that this was causing the 'notebook check' failure. I have undone it.
Maybe you are correct; I'm checking this now.
```diff
@@ -79,7 +79,10 @@ def compute(self):
                 **self._compute_ap_recall(ev["scores"], ev["matched"], ev["NP"])
             }
         if self.return_ap_only:
-            res = torch.tensor([res[k]["AP"] for k in sorted(res.keys())])
+            res = torch.tensor([
```
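The added line is cut off in the diff; a purely hypothetical completion, consistent with the "count None as 0" behavior discussed earlier in the thread, would coerce missing AP values before building the tensor:

```python
# Hypothetical completion of the truncated diff -- the actual PR code
# may differ. Replace a per-class AP of None with 0 so the list can be
# passed to torch.tensor without error.
res = {"cat": {"AP": 0.5}, "dog": {"AP": None}}
ap_values = [
    res[k]["AP"] if res[k]["AP"] is not None else 0
    for k in sorted(res.keys())
]
print(ap_values)  # [0.5, 0] -- ready for torch.tensor(ap_values)
```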
I think that's OK. This may happen when the model detects a class that doesn't exist in the test set.
Anyway, we are planning to replace this module in the near future.
ok
resolves #799