
[MO] relax condition for casting into FP16 #8083

Closed

Conversation

pavel-esir
Contributor

@pavel-esir pavel-esir commented Oct 19, 2021

Root cause and solution 1: Some models contain -inf values in the original framework file, and these models work correctly after conversion to FP16. However, #6590 introduced a strict limit-check condition. This condition needs to be relaxed in ChangeOutputTypeAttributes.py.

Ticket: CVS-65735
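To illustrate solution 1, here is a minimal NumPy sketch of a relaxed limit check (the helper name `check_fp16_overflow` is hypothetical, not the actual MO code): values that are already ±inf in the FP32 model pass through unflagged, and only finite values whose magnitude exceeds the FP16 range are reported as overflowing.

```python
import numpy as np

FP16_MAX = np.finfo(np.float16).max  # 65504.0

def check_fp16_overflow(values):
    """Hypothetical relaxed limit check: +/-inf values from the
    original FP32 model are allowed (they stay inf in FP16); only
    finite values outside the FP16 range count as overflow."""
    values = np.asarray(values, dtype=np.float32)
    return np.isfinite(values) & (np.abs(values) > FP16_MAX)

vals = [-np.inf, 1e10, 3.14, np.inf]
print(check_fp16_overflow(vals).tolist())  # [False, True, False, False]
```

Under the stricter check from #6590, the ±inf entries would also have been flagged, which is what broke models that legitimately contain -inf in the framework file.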

Root cause and solution 2: Some models use the patterns input_1/Maximum(input_2, eps) and input_1/Add(input_2, eps) to prevent division by zero. In FP32 networks, eps is usually so small (e.g. 1e-9, 1e-12, ...) that after casting to FP16 it collapses to zero, which can lead to division by zero if input_2 is also zero. To prevent this, the DivisionToZeroFP16Resolver.py transformation replaces eps in such patterns with the smallest normal FP16 value.
Solution 2 was moved into nGraph in the separate PR #8676.

Ticket: xxx-66045
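The eps collapse behind solution 2 can be reproduced with NumPy. This is only a sketch of the idea; the clamping below illustrates replacing eps with the smallest normal FP16 value, not the actual transformation code.

```python
import numpy as np

# eps values common in FP32 models collapse to zero when cast to FP16
eps = np.float32(1e-12)
print(np.float16(eps))  # 0.0 -> Maximum(x, eps) no longer guards the division

# clamp eps to the smallest normal FP16 value before casting
fp16_tiny = np.finfo(np.float16).tiny  # ~6.1e-05
safe_eps = max(float(eps), float(fp16_tiny))
print(np.float16(safe_eps))  # nonzero, so the guard survives the cast
```

Anything below the smallest subnormal FP16 value (about 6e-8) rounds to exactly zero, so a 1e-12 eps offers no protection at all after conversion.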

Code:

  • Comments
  • Code style (PEP8)
  • Transformation generates reshape-able IR
  • Transformation preserves original framework node names

Validation:

  • Unit tests
  • Framework operation tests: n/a
  • Transformation tests: n/a
  • Model Optimizer IR Reader check: there are some problems on Dien, but they are not caused by my changes

Documentation:

  • Supported frameworks operations list: n/a
  • Guide on how to convert the public model: n/a
  • User guide update: n/a

@pavel-esir pavel-esir added the bug Something isn't working label Oct 19, 2021
@pavel-esir pavel-esir added this to the 2022.1 milestone Oct 19, 2021
@pavel-esir pavel-esir requested review from a team, mvafin, sadolini and evolosen and removed request for a team October 19, 2021 08:13
@openvino-pushbot openvino-pushbot added the category: MO Model Optimizer label Oct 19, 2021
@pavel-esir
Contributor Author

pavel-esir commented Oct 22, 2021

Discussed offline with @mvafin. He asked to restore the unit tests for values cast to infinity and to add a warning when values are cast to zero. Both requests are addressed.

@rkazants please review

@rkazants
Contributor

@evolosen, please review it as well. Thanks!

@pavel-esir pavel-esir marked this pull request as draft November 11, 2021 20:36
@pavel-esir pavel-esir marked this pull request as ready for review November 12, 2021 11:42
@pavel-esir
Contributor Author

After @mvafin's changes in #7588, the importance of the ChangeOutputTypeAttributes.py transformation has been lowered.

Also added a solution for division by zero in FP16 models. @sadolini @sevhabert @mvafin @rkazants please take another look at the latest changes.

Contributor

@rkazants rkazants left a comment


We discussed proceeding with a different solution, namely fusing the sub-graph into an MVN operation with the original epsilon value.

@pavel-esir
Contributor Author

We discussed proceeding with a different solution, namely fusing the sub-graph into an MVN operation with the original epsilon value.

As we agreed on Friday (19 Nov), the transformation that modifies small eps values in specific patterns was moved into nGraph in a separate PR. But the current PR is still needed.

@pavel-esir pavel-esir marked this pull request as draft December 3, 2021 10:10
@pavel-esir
Contributor Author

Closing, since the old mechanism for converting to FP16 will no longer be supported in MO.

@pavel-esir pavel-esir closed this Dec 3, 2021
@pavel-esir pavel-esir deleted the relax_fp16_limit_checks branch December 3, 2021 13:02
Labels
bug Something isn't working category: MO Model Optimizer

6 participants