Ask a Question

Question

I have two questions.

1. https://github.com/onnx/models/tree/master/text/machine_comprehension/gpt-2

The ONNX file from the link above appears to have only one input, input_ids, so the position_ids and attention_mask inputs seem to be unused. Is there any problem with this single-input ONNX file performing text generation? I am asking because the ONNX file used in the link below has 15 inputs.

https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/python/tools/transformers/notebooks/Inference_GPT2_with_OnnxRuntime_on_CPU.ipynb

2. You mentioned the text-generation code shown in the capture below. Using the code you provided, text generation worked properly for one sentence, but continuous sentence generation did not. I wonder if this is because the structure of the ONNX model we used does not handle past inputs.

Additionally, the above code differs from the original sample.py code: it only handles outputs[0], and the torch.multinomial sampling and top_k algorithms are missing. Is this due to the ONNX model structure, which takes only one input (no past state)?
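For context, a single-input model can still generate continuous text if the full token sequence is re-fed on every step (the past-state inputs only make this faster, not possible). The sketch below, a minimal assumption-laden illustration rather than the code from the issue, shows such a loop plus a numpy stand-in for the torch.topk + torch.multinomial sampling that sample.py uses. The model is stubbed with a fake logits function so the snippet runs without the real .onnx file; the commented-out lines show what an actual onnxruntime call would look like.

```python
import numpy as np

def sample_top_k(logits, k=40, temperature=1.0, rng=None):
    """Top-k sampling on a 1-D logits vector; a numpy stand-in for the
    torch.topk + torch.multinomial step in the original sample.py."""
    rng = rng or np.random.default_rng()
    logits = logits / temperature
    top_idx = np.argpartition(logits, -k)[-k:]   # indices of the k largest logits
    top_logits = logits[top_idx]
    probs = np.exp(top_logits - top_logits.max())
    probs /= probs.sum()
    return int(top_idx[rng.choice(k, p=probs)])

def generate(run_model, input_ids, max_new_tokens=20, k=40):
    """Generation loop for a single-input model: with no past-state inputs,
    the FULL sequence must be re-fed every step (slower, but correct)."""
    ids = list(input_ids)
    for _ in range(max_new_tokens):
        logits = run_model(np.array([ids], dtype=np.int64))  # (1, seq, vocab)
        next_id = sample_top_k(logits[0, -1], k=k)           # sample from last position
        ids.append(next_id)
    return ids

# Stub standing in for an onnxruntime call; with the real model it would be:
#   sess = onnxruntime.InferenceSession("gpt2.onnx")  # hypothetical filename
#   logits = sess.run(None, {sess.get_inputs()[0].name: input_ids})[0]
def fake_model(input_ids):
    vocab = 50257  # GPT-2 vocabulary size
    rng = np.random.default_rng(input_ids.shape[1])
    return rng.normal(size=(1, input_ids.shape[1], vocab))

out = generate(fake_model, [15496, 11], max_new_tokens=5, k=40)
print(out)
```

If one-shot generation works but continuation fails, a common cause is feeding only the newest token (which is only valid for models with past-state inputs) instead of the whole accumulated sequence, as done above.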
Further information
Relevant Area (e.g. model usage, backend, best practices, pre-/post- processing, converters):
Is this issue related to a specific model?
Model name (e.g. mnist):
Model opset (e.g. 7):
Notes
Any additional information, code snippets.