Fix nested tensors predictions compatibility with fp16 #3516

Merged (3 commits, Nov 1, 2021)

Conversation

@tcapelle (Contributor) commented Oct 29, 2021

Modifies how the first tensor is retrieved for the float16 dtype check, so that nested tensor predictions are supported.

Warning: This PR needs the flatten function added in this other PR.
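For context, a minimal sketch of the idea (not the exact fastai diff); `_tensors` and `first_pred_is_fp16` are hypothetical names standing in for the flatten-based retrieval described above:

```python
import torch

def _tensors(o):
    # Hypothetical helper: lazily yield leaf tensors from nested tuples/lists.
    if isinstance(o, torch.Tensor): yield o
    elif isinstance(o, (list, tuple)):
        for item in o: yield from _tensors(item)

def first_pred_is_fp16(pred):
    # Look at the first tensor, wherever it sits in the nesting, and
    # check whether the model produced half-precision outputs.
    return next(_tensors(pred)).dtype == torch.float16

preds = (torch.zeros(2, dtype=torch.float16), [torch.zeros(3)])
print(first_pred_is_fp16(preds))  # True
```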

@tcapelle requested a review from jph00 as a code owner on October 29, 2021 09:33

@tcapelle changed the title from "fix nested tensors" to "Fix nested tensors predictions compatibility with fp16" on Oct 29, 2021
@tcapelle (Contributor, Author) commented Oct 29, 2021

@jph00 do you know why the test is not passing? My bad: it's missing the fastcore integration of the flatten PR.

@jph00 (Member) commented Oct 29, 2021

@tcapelle can you please try changing `listify(flatten(...))` to `concat(...)`, since that's the same thing now?

@tcapelle (Contributor, Author) commented Oct 30, 2021

@jph00 we have a conflict with `torch_core.concat`.
I did a quick fix, but found another bug. Try:

```python
from fastcore.dispatch import retain_type  # import added for a runnable repro

retain_type([1, 2, 3], tuple([1]))  # returns [1, 2, 3]
```

It should return `(1, 2, 3)`.

@jph00 (Member) commented Oct 31, 2021

OK, happy to leave it as-is then. Well spotted on the `concat` name clash - that's actually been there for ages, but no-one had noticed before!
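For readers following along, a minimal illustration of the kind of clash being described, assuming both modules export a name `concat` through fastai's customary star-imports:

```python
# fastcore's concat flattens collections, while fastai.torch_core defines
# its own concat for joining tensors. With star-imports, whichever module
# is imported last silently shadows the other:
from fastcore.basics import *    # one concat
from fastai.torch_core import *  # shadows it with another concat

# Code below this point that expects the fastcore behaviour gets the
# torch_core version instead, with no warning raised.
```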

@jph00 (Member) commented Oct 31, 2021

Just another thought: what if instead of `listify(flatten(self.pred))[0]` we did `next(flatten(self.pred))`? That would avoid needlessly allocating extra memory.
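A small sketch of the difference, assuming fastcore's `flatten` is the generator added in the linked PR and `listify` is fastcore's usual list coercion:

```python
from fastcore.basics import flatten, listify  # assumed locations in fastcore

nested = [[1, 2], (3, [4, 5])]
first_eager = listify(flatten(nested))[0]  # materialises the whole flattened list first
first_lazy  = next(flatten(nested))        # stops after yielding the first leaf
assert first_eager == first_lazy == 1
```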

@jph00 (Member) commented Nov 1, 2021

Looks like it's all sorted! I'll update the fastcore version dep too.

@jph00 merged commit 85e0d1a into fastai:master on Nov 1, 2021
@jph00 added the bug label on Mar 25, 2022