This repository was archived by the owner on Jul 1, 2025. It is now read-only.

Conversation

842974287
Contributor

Summary: Quantized linear has packed parameters. We want to unpack them so that it is easier for graph optimizations and the importer to deal with the weight and bias directly. A customized remapping function is used to unpack quantized linear and map it to acc_op.linear.

Differential Revision: D27451237
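For readers who have not looked at the packed layout before, the sketch below is a minimal illustration (not code from this diff) of what "unpacking" means for `torch.nn.quantized.Linear`. It relies on the module's internal `_weight_bias()` helper purely for demonstration; the customized remapping function in this stack performs the equivalent unpacking during acc tracing and then maps the op to `acc_op.linear`, whose implementation is not reproduced here.

```python
import torch
import torch.nn as nn

# Build a toy quantized linear layer. Its weight and bias are stored inside a
# packed-parameter object rather than as ordinary module attributes, which is
# what makes the op awkward for graph optimizations and importers.
float_linear = nn.Linear(4, 3)
qlinear = nn.quantized.Linear(4, 3)
qlinear.set_weight_bias(
    torch.quantize_per_tensor(
        float_linear.weight.detach(), scale=0.05, zero_point=0, dtype=torch.qint8
    ),
    float_linear.bias.detach(),
)

# Unpack into an explicit (weight, bias) pair that graph passes can inspect.
# _weight_bias() is an internal helper of the quantized Linear module and is
# used here only for illustration.
weight, bias = qlinear._weight_bias()
print(weight.shape, bias.shape)  # torch.Size([3, 4]) torch.Size([3])

# The dequantized weight matches the original float weight up to quantization error.
print(torch.allclose(weight.dequantize(), float_linear.weight.detach(), atol=0.05))  # True
```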

@facebook-github-bot

This pull request was exported from Phabricator. Differential Revision: D27451237

842974287 added a commit to 842974287/glow that referenced this pull request May 13, 2021
Pull Request resolved: pytorch/pytorch#57483
Pull Request resolved: pytorch#5622

Reviewed By: gcatron, jfix71, khabinov

Differential Revision: D27451237

fbshipit-source-id: 18b1836cdee9f2f078d6aaaddd1cb14a7c43890f
@facebook-github-bot

This pull request was exported from Phabricator. Differential Revision: D27451237

842974287 added a commit to 842974287/glow that referenced this pull request May 13, 2021
Pull Request resolved: pytorch/pytorch#57483
Pull Request resolved: pytorch#5622

Reviewed By: gcatron, jfix71, khabinov

Differential Revision: D27451237

fbshipit-source-id: 1640bf3747e6baef03cafc39f6749109e723878a
@facebook-github-bot

This pull request was exported from Phabricator. Differential Revision: D27451237

842974287 pushed a commit to 842974287/glow that referenced this pull request May 13, 2021
Pull Request resolved: pytorch/pytorch#57483
Pull Request resolved: pytorch#5622

Differential Revision: D27451237

fbshipit-source-id: 291644983064e877dde1e1348fa3c423a4a9561f
842974287 pushed a commit to 842974287/glow that referenced this pull request May 13, 2021
Pull Request resolved: pytorch/pytorch#57483
Pull Request resolved: pytorch#5622

Differential Revision: D27451237

fbshipit-source-id: bd6c3af4837890b57b394cae7385e3bdd4aca52b
842974287 pushed a commit to 842974287/glow that referenced this pull request May 13, 2021
Pull Request resolved: pytorch/pytorch#57483
Pull Request resolved: pytorch#5622

Differential Revision: D27451237

fbshipit-source-id: 6410fa0a3becab1f0162eb847ac8a8f253ccd398
Pull Request resolved: pytorch/pytorch#57483
Pull Request resolved: pytorch#5622

Reviewed By: gcatron, jfix71, khabinov

Differential Revision: D27451237

fbshipit-source-id: d8fd32321f6f4450731e32e1f56a91228484a9a4
@facebook-github-bot

This pull request was exported from Phabricator. Differential Revision: D27451237

842974287 added a commit to 842974287/pytorch that referenced this pull request May 14, 2021
…ytorch#57483)

Pull Request resolved: pytorch#57483
Pull Request resolved: pytorch/glow#5622

Test Plan: `buck test glow/fb/fx/nnpi_importer:test_importer`

Reviewed By: gcatron, jfix71, khabinov

Differential Revision: D27451237

fbshipit-source-id: cbdc0b875ff7425f14f8c59cd7b86eb4145624bc
842974287 pushed a commit to 842974287/glow that referenced this pull request May 14, 2021
Pull Request resolved: pytorch/pytorch#57483
Pull Request resolved: pytorch#5622

Differential Revision: D27451237

fbshipit-source-id: b707f491efdf7e7b09b8e9fd052f0b8d4b6c5536
842974287 pushed a commit to 842974287/pytorch that referenced this pull request May 14, 2021
…ytorch#57483)

Pull Request resolved: pytorch#57483
Pull Request resolved: pytorch/glow#5622

Test Plan: `buck test glow/fb/fx/nnpi_importer:test_importer`

Differential Revision: D27451237

fbshipit-source-id: f2760ebe84b2afcd4125cc2ed3334bcf6d4fca14
@facebook-github-bot

This pull request has been merged in 94b78b9.

facebook-github-bot pushed a commit to pytorch/pytorch that referenced this pull request May 15, 2021
…57483)

Pull Request resolved: #57483
Pull Request resolved: pytorch/glow#5622

Test Plan: `buck test glow/fb/fx/nnpi_importer:test_importer`

Reviewed By: gcatron, jfix71, khabinov

Differential Revision: D27451237

fbshipit-source-id: e46e961734788fd5333e227ca6143fd37c33204e
krshrimali pushed a commit to krshrimali/pytorch that referenced this pull request May 19, 2021
…ytorch#57483)

Pull Request resolved: pytorch#57483
Pull Request resolved: pytorch/glow#5622

Test Plan: `buck test glow/fb/fx/nnpi_importer:test_importer`

Reviewed By: gcatron, jfix71, khabinov

Differential Revision: D27451237

fbshipit-source-id: e46e961734788fd5333e227ca6143fd37c33204e