Enforce order on input keys to export #545
Conversation
@natuan we need to regenerate the input_shapes arg based on the new inputs that will be passed through so we log properly. Additionally, can we add a warning in the event that the keys in the inputs do not match the keys in the forward function's signature?
I fixed the first one you mentioned. For the warning: the input produced by the tokenizer is a Dict, so by issuing that warning we would imply an ordering over it, right? (Which is what this fix enforces.) In pytorch/utils/exporter we did have this type of warning when the input batch is a Dict but not an OrderedDict.
I was mainly concerned that we are now technically filtering out inputs based on whether or not they are defined in the forward function. This might be a good thing to do, but we should have a warning in place to say that the inputs were filtered and not everything was passed through. On the other side, if not all inputs defined in the forward are found, it would be good to call that out as well, since some may have defaults and therefore would not fail the forward.
Put in a warning.
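The approach discussed above can be sketched as follows: reorder the tokenizer's input Dict to match the forward signature before export, warn when inputs are dropped because they are not defined in the forward, and warn when required forward arguments are missing from the inputs. This is a minimal illustration of the idea, not the actual implementation in this PR; the helper name and its exact warning messages are assumptions.

```python
import inspect
import warnings
from collections import OrderedDict


def order_inputs_for_export(forward_fn, inputs):
    # Hypothetical helper sketching this PR's approach: order the input
    # keys to match the forward signature before torch.onnx.export.
    sig = inspect.signature(forward_fn)
    param_names = list(sig.parameters)

    # Keep only inputs that the forward function accepts, in signature order
    ordered = OrderedDict(
        (name, inputs[name]) for name in param_names if name in inputs
    )

    # Warn if any inputs were filtered out (not defined in the forward)
    dropped = sorted(set(inputs) - set(ordered))
    if dropped:
        warnings.warn(
            f"Inputs {dropped} are not defined in the model forward "
            "function and will be dropped from the ONNX export"
        )

    # Warn if required forward arguments are missing from the inputs;
    # arguments with defaults would not fail the forward, so call them out
    missing = [
        p.name
        for p in sig.parameters.values()
        if p.name not in inputs and p.default is inspect.Parameter.empty
    ]
    if missing:
        warnings.warn(
            f"Forward arguments {missing} have no default value and were "
            "not found in the export inputs"
        )

    return ordered
```

The returned OrderedDict can then be passed to the PyTorch ONNX exporter so that the positional order of the traced inputs is deterministic regardless of the key order the tokenizer happened to produce.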
* Enforce order on input keys to export
* Warn if input dropped from onnx export
* Update README.md for transformers to note the quantization conversion issue (#539)
  * Update README.md
  * Update integrations/huggingface-transformers/README.md
* Enforce order on input keys to export (#545)
  * Enforce order on input keys to export
  * Warn if input dropped from onnx export
* Restrict mistune version to fix docs build (#547)
* quantization fixes for transformers flows (#548)
  * quantization fixes for transformers flows
  * match on class name instead
  * quality
* set release branch version to 0.10.1
* Revert "Update README.md for transformers to note the quantization conversion issue (#539)"
  * This reverts commit 9304997.

Co-authored-by: Mark Kurtz <mark@neuralmagic.com>
Co-authored-by: Jeannie Finks <74554921+jeanniefinks@users.noreply.github.com>
Co-authored-by: Tuan Nguyen <tuan@neuralmagic.com>
Co-authored-by: Michael Goin <michael@neuralmagic.com>
Putting back the ordering of the input keys passed into the PyTorch ONNX export.