
Conversation

gmagogsfm
Contributor

@gmagogsfm gmagogsfm commented Dec 14, 2020

Adding a flag torch_jit_disable_warning_prints to optimize interpreter performance by suppressing a (potentially large) number of warnings.warn calls.

This works around a mismatch between TorchScript's warning behavior and Python's. By default, Python triggers a given warning only once per location, but TorchScript doesn't support that. As a result, the same warning triggers and prints on every inference run, hurting performance.
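
To make the issue concrete, here is a minimal sketch (not part of this PR; the noisy_forward function below is hypothetical) of how a warnings.warn inside a scripted function fires on every call, whereas the eager-Python default would report it only once per call site:

```python
# Hypothetical repro sketch: under the TorchScript interpreter the warning
# below is emitted on every call, while the same function run eagerly would
# warn only once per call site under Python's default warning filter.
import warnings

import torch

@torch.jit.script
def noisy_forward(x: torch.Tensor) -> torch.Tensor:
    warnings.warn("this code path is slow")
    return x * 2

for _ in range(3):
    noisy_forward(torch.ones(2))  # scripted: warning printed three times
```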

@facebook-github-bot facebook-github-bot added cla signed oncall: jit Add this issue/PR to JIT oncall triage queue labels Dec 14, 2020
Contributor

@facebook-github-bot facebook-github-bot left a comment


@gmagogsfm has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.


@codecov

codecov bot commented Dec 14, 2020

Codecov Report

Merging #49313 (3ec71b1) into master (94a3d4b) will increase coverage by 0.00%.
The diff coverage is 50.00%.

@@           Coverage Diff           @@
##           master   #49313   +/-   ##
=======================================
  Coverage   80.56%   80.56%           
=======================================
  Files        1874     1874           
  Lines      202776   202777    +1     
=======================================
+ Hits       163362   163363    +1     
  Misses      39414    39414           


@gmagogsfm gmagogsfm marked this pull request as ready for review December 15, 2020 08:11
@SplitInfinity

By default, Python triggers a given warning only once per location, but TorchScript doesn't support that.

Why not?

@gmagogsfm
Contributor Author

By default, Python triggers a given warning only once per location, but TorchScript doesn't support that.

Why not?

It's a bit of a long story; there are a couple of reasons:

  1. Python relies on the call stack to tell different "warnings.warn" call sites apart. TorchScript can't accurately mimic this behavior because of all the inlining and unrolling we do during optimization.
  2. Even if we accept that we won't fully match Python and settle for the weaker option of making each warning node fire only once, it is still very difficult because there is no good place to store the fact that "this warning node has triggered before". The two places I can think of are "Code", which is supposed to represent the program, and "InterpreterState", which is supposed to capture the execution state. "Code" doesn't work because it is meant to stay unchanged during execution, so it cannot hold execution state. "InterpreterState" doesn't work either because the caller can (and does) destroy and recreate it between inference calls, so the fact that a warning has already fired is lost between calls.

I am not entirely happy with the current approach because it completely suppresses all warnings, but I couldn't come up with a good alternative. Let me know if you know of a good way to implement "limit the number of warnings fired per process, regardless of how many independent inference calls are made".
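
For reference, the "once per location" behavior discussed above is plain CPython: the default warning filter keys on the call site, so the same warnings.warn line prints only the first time it executes. A small standard-library-only sketch:

```python
# Plain-Python illustration (no TorchScript involved): with the default
# filter, a warning is deduplicated per call site, so it prints only once
# even though noisy() runs three times.
import warnings

def noisy():
    warnings.warn("only shown the first time this line runs")

for _ in range(3):
    noisy()
```

This per-call-site bookkeeping is exactly what is hard to reproduce once the JIT has inlined and unrolled the original call sites away.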


@SplitInfinity SplitInfinity left a comment


Hm, okay.

@facebook-github-bot
Contributor

@gmagogsfm merged this pull request in 7518f54.

hwangdeyu pushed a commit to hwangdeyu/pytorch that referenced this pull request Jan 6, 2021
…ings.warn (pytorch#49313)

Summary:
Adding a flag torch_jit_disable_warning_prints to optimize interpreter performance by suppressing a (potentially large) number of warnings.warn calls.

This works around a mismatch between TorchScript's warning behavior and Python's. By default, Python triggers a given warning only once per location, but TorchScript doesn't support that. As a result, the same warning triggers and prints on every inference run, hurting performance.

Pull Request resolved: pytorch#49313

Reviewed By: SplitInfinity

Differential Revision: D25534274

Pulled By: gmagogsfm

fbshipit-source-id: eaeb57a335c3e6c7eb259671645db05d781e80a2
