
Fixing image segmentation with inference mode. #14204

Merged: 2 commits, Oct 29, 2021

Conversation

Narsil (Contributor) commented Oct 29, 2021

What does this PR do?

It seems DETR models are modifying tensors in place, which is not allowed by the
inference_mode context manager, effectively breaking the image-segmentation pipeline for
torch > 1.9 users.
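
For context, here is a generic illustration of the restriction (not necessarily the exact DETR code path): tensors created under inference_mode become "inference tensors", and in-place updates to them outside the context raise a RuntimeError on torch >= 1.9, whereas no_grad does not enforce this.

    import torch

    # Sketch of the restriction, assuming torch >= 1.9.
    with torch.inference_mode():
        logits = torch.randn(1, 10)  # created as an "inference tensor"

    # In-place update outside the inference_mode block raises:
    #   RuntimeError: Inplace update to inference tensor outside InferenceMode is not allowed
    # The same line under torch.no_grad() works fine.
    logits += 1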

This PR proposes to override the context manager with no_grad in image-segmentation only,
so the pipeline works again without inspecting the underlying model. A method is used to
perform the switch.
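
For illustration, a minimal sketch of what such a method-based switch could look like (the class layout and the get_inference_context name are assumptions for the sketch, not necessarily the exact code in this PR):

    import torch


    class Pipeline:
        def get_inference_context(self):
            # Default: prefer inference_mode when the installed torch provides it
            # (torch >= 1.9), otherwise fall back to no_grad.
            return getattr(torch, "inference_mode", torch.no_grad)

        def forward(self, model_inputs):
            inference_context = self.get_inference_context()
            with inference_context():
                # self._forward would run the model-specific forward pass.
                return self._forward(model_inputs)


    class ImageSegmentationPipeline(Pipeline):
        def get_inference_context(self):
            # DETR-style models hit the in-place restriction under inference_mode,
            # so this pipeline alone falls back to no_grad.
            return torch.no_grad

Other pipelines keep the default, so inference_mode is retained everywhere the models support it.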

Fixes # (issue)

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a GitHub issue or the forum? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@mishig25 @LysandreJik

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
patrickvonplaten (Contributor) commented:

Fine with this solution, but a better one might actually be to disable all of these:

        if torch.isinf(hidden_states.clone()).any() or torch.isnan(hidden_states.clone()).any():
            clamp_value = torch.finfo(hidden_states.dtype).max - 1000
            hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value) 

for inference, to make inference mode possible. Those "value" clamping checks are IMO only really useful to allow fp16 training (not so much for fp16 inference), and thus we could disable them with an if self.training statement.

@patil-suraj and I have somewhat unsuccessfully added lots of those statements to T5 in the hope of enabling fp16 training, which turned out to not work very well... With bfloat16 becoming more widely available in PyTorch, those statements might not serve a good purpose anymore anyway.

=> So I would be fine with hiding them behind a self.training check.
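
For illustration, a rough sketch of what hiding the clamp behind self.training could look like (the layer skeleton is made up; only the clamp logic mirrors the snippet quoted above):

    import torch
    from torch import nn


    class DecoderLayer(nn.Module):
        def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
            # ... attention / feed-forward blocks would go here ...

            # Only run the fp16 overflow clamp while training; skipping it at
            # inference time would keep the forward pass compatible with
            # torch.inference_mode.
            if self.training and (
                torch.isinf(hidden_states).any() or torch.isnan(hidden_states).any()
            ):
                clamp_value = torch.finfo(hidden_states.dtype).max - 1000
                hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value)
            return hidden_states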

Pinging @stas00 here as well to hear his opinion :-)

Narsil (Contributor, Author) commented Oct 29, 2021

Agreed with @patrickvonplaten, it would be better if we could keep inference_mode and work with the other fix.

sgugger (Collaborator) commented Oct 29, 2021

This fix is fine for a quick patch (a patch release is going to be made in the next couple of hours), and we can make a better one afterward. So I would merge this now :-)

LysandreJik (Member) left a comment


Thank you @Narsil!

@LysandreJik LysandreJik merged commit b338596 into huggingface:master Oct 29, 2021
@Narsil Narsil deleted the fix_image_segmentation branch October 29, 2021 15:38
LysandreJik pushed a commit that referenced this pull request Oct 29, 2021
* Fixing image segmentation for inference mode.

* Update src/transformers/pipelines/base.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Albertobegue pushed a commit to Albertobegue/transformers that referenced this pull request Jan 27, 2022
* Fixing image segmentation for inference mode.

* Update src/transformers/pipelines/base.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>