[ONNX] ONNX doesn't support exporting non-persistent buffer included models in FakeMode #107211
Labels
module: onnx
Related to torch.onnx
onnx-triaged
triaged by ONNX team
triaged
This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
Comments
titaiwangms
added
module: onnx
Related to torch.onnx
onnx-triaged
triaged by ONNX team
labels
Aug 15, 2023
titaiwangms
added a commit
that referenced
this issue
Aug 15, 2023
1. Add a list of HF models to CI tests. The PR intends to build them from Config, but some of them are not supported with Config. NOTE: models loaded from pre-trained weights could hit the [uint8/bool conflict](huggingface/transformers#21013) when a newer version of transformers is used.
   - Dolly has a torch.fx.Node in an OnnxFunction attribute, which is currently not supported.
   - Falcon and MPT contain user code that Dynamo does not support.
2. Only update GPT2 to export with real tensors from Config, as FakeMode raises unequal-input errors between PyTorch and ORT. The reason is that [non-persistent buffers are not supported](#107211). [ghstack-poisoned]
soulitzer
added
the
triaged
This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
label
Aug 17, 2023
pytorchmergebot
pushed a commit
that referenced
this issue
Aug 23, 2023
1. Add a list of HF models to CI tests. The PR intends to build them from Config, but some of them are not supported with Config. NOTE: models loaded from pre-trained weights could hit the [uint8/bool conflict](huggingface/transformers#21013) when a newer version of transformers is used.
   - Dolly has a torch.fx.Node in an OnnxFunction attribute, which is currently not supported.
   - Falcon and MPT contain user code that Dynamo does not support.
2. Only update GPT2 to export with real tensors from Config, as FakeMode raises unequal-input errors between PyTorch and ORT. The reason is that [non-persistent buffers are not supported](#107211).
Pull Request resolved: #107247 Approved by: https://github.com/wschin, https://github.com/BowenBao
@titaiwangms IIRC this should work with the
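One possible direction (a sketch only, not necessarily the fix adopted in the linked PR): `nn.Module.named_buffers()` iterates over all buffers, including non-persistent ones, so an exporter could collect the buffers that `state_dict()` omits. `ToyModel` and the `mask` buffer below are hypothetical names for illustration.

```python
import torch
import torch.nn as nn

class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Non-persistent buffer, as in GPT2-style attention masks
        self.register_buffer("mask", torch.ones(2, 2), persistent=False)

model = ToyModel()

# state_dict() skips non-persistent buffers...
print("mask" in model.state_dict())

# ...but named_buffers() still yields them, so they can be
# collected explicitly when serializing initializers.
all_buffers = dict(model.named_buffers())
print("mask" in all_buffers)
```

Whether this is sufficient for the FakeMode export path depends on where the initializer data is gathered, which the sketch does not address.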
To avoid out-of-memory issues when exporting models to ONNX, we detach the parameters and persistent buffers via state_dict().
However, some models, for example GPT2, contain non-persistent buffers, which state_dict() does not include. The ONNX graph then complains about the missing buffers, because they are absent from the external data for the model initializers. This case can be reproduced when we use Config to
create_model()
.cc @BowenBao @thiagocrepaldi @wschin
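A minimal sketch of the behavior described above, using a toy module rather than GPT2 itself (`ToyModel` and the buffer names are illustrative, not from the actual models):

```python
import torch
import torch.nn as nn

class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 4)
        # Persistent buffer: included in state_dict()
        self.register_buffer("persistent_buf", torch.zeros(4))
        # Non-persistent buffer: excluded from state_dict(), so it
        # cannot be detached/externalized the same way as parameters
        self.register_buffer("non_persistent_buf", torch.ones(4), persistent=False)

    def forward(self, x):
        return self.linear(x) + self.non_persistent_buf

model = ToyModel()
keys = model.state_dict().keys()
print("persistent_buf" in keys)       # True
print("non_persistent_buf" in keys)   # False
```

Since the non-persistent buffer is still used in `forward`, the exported graph references it, but it is missing from the data saved off `state_dict()`.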