This repository was archived by the owner on Jul 1, 2025. It is now read-only.

Conversation

@jfix71 (Contributor) commented Jan 18, 2021

Summary:

  • Add RescaleQuantized parallelization support to graph opts' parallelization code
  • On NNPI, mirror Rescale parallelization for FC/Relus that come before it
  • Sink Reshapes below Quantize and ConvertTo
  • Remove unnecessary ConvertTo when following a Dequantize (i.e. just change the elem kind of the Dequantize instead)
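The Reshape-sinking and ConvertTo-elimination bullets can be illustrated with a minimal sketch. Glow's actual passes are C++ over its Node IR; the `Node` class and the `sink_reshape` / `fold_convert_into_dequantize` helpers below are hypothetical Python stand-ins that only show the rewrite patterns, not Glow's API.

```python
from dataclasses import dataclass, field

# Hypothetical miniature node IR -- Glow's real graph optimizer is C++;
# this only illustrates the rewrite patterns described above.
@dataclass
class Node:
    kind: str                               # e.g. "Reshape", "Quantize", "ConvertTo", "Dequantize"
    inputs: list = field(default_factory=list)
    attrs: dict = field(default_factory=dict)

def sink_reshape(node):
    """Rewrite Quantize(Reshape(x)) -> Reshape(Quantize(x)); same for ConvertTo.

    Quantize and ConvertTo are elementwise, so they commute with a pure
    layout change like Reshape; sinking the Reshape below them can expose
    further fusions underneath.
    """
    if node.kind in ("Quantize", "ConvertTo") and node.inputs and node.inputs[0].kind == "Reshape":
        reshape = node.inputs[0]
        new_inner = Node(node.kind, [reshape.inputs[0]], dict(node.attrs))
        return Node("Reshape", [new_inner], dict(reshape.attrs))
    return node

def fold_convert_into_dequantize(node):
    """Rewrite ConvertTo(Dequantize(x)) -> Dequantize(x) that emits the
    ConvertTo's destination element kind directly, making the ConvertTo
    unnecessary."""
    if node.kind == "ConvertTo" and node.inputs and node.inputs[0].kind == "Dequantize":
        deq = node.inputs[0]
        attrs = dict(deq.attrs)
        attrs["elem_kind"] = node.attrs["elem_kind"]
        return Node("Dequantize", list(deq.inputs), attrs)
    return node
```

For example, `sink_reshape(Quantize(Reshape(x)))` returns a `Reshape` whose input is a `Quantize` of `x`, and `fold_convert_into_dequantize(ConvertTo(Dequantize(x)))` returns a single `Dequantize` with the ConvertTo's element kind.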

Differential Revision: D25947824


fbshipit-source-id: 897e0aa507293647fdf5ff58d0119427dcee5aee
@facebook-github-bot

This pull request was exported from Phabricator. Differential Revision: D25947824

@facebook-github-bot

This pull request has been merged in b0c3ec2.

facebook-github-bot pushed a commit to pytorch/pytorch that referenced this pull request Jan 24, 2021
Summary:
Pull Request resolved: pytorch/glow#5257

- Add RescaleQuantized parallelization support to graph opts' parallelization code
- On NNPI, mirror Rescale parallelization for FC/Relus that come before it
- Sink Reshapes below Quantize and ConvertTo
- Remove unnecessary ConvertTo when following a Dequantize (i.e. just change the elem kind of the Dequantize instead)

Test Plan: Added unit tests

Reviewed By: hyuen, mjanderson09

Differential Revision: D25947824

fbshipit-source-id: 771abd36a1bc7270bf1f901d1ec6cb6d78e9fd1f
