
Where are the top_k and sample_sequence functions in your code? #1

Open
shp776 opened this issue Aug 9, 2021 · 0 comments

shp776 commented Aug 9, 2021

Hello, like you, I am working toward completing text generation by calling the GPT ONNX model through the C++ API. I am wondering where the top_k_logits and sample_sequence functions from sample.py (the original GPT-2 source) are handled in your post-processing code. I couldn't find anything in your code that operates on the logits, nor any equivalent of the torch functions used in sample.py.
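
For reference, the logits filtering I mean is roughly the following. This is my own C++ sketch of what top_k_logits does conceptually, not code taken from your repository, and the function name is just illustrative:

```cpp
#include <algorithm>
#include <functional>
#include <limits>
#include <vector>

// Keep only the k largest logits; every other entry is pushed to -inf so the
// subsequent softmax/sampling step can never select those tokens.
// (Hypothetical helper, re-expressing the idea behind top_k_logits in C++.)
std::vector<float> top_k_filter(std::vector<float> logits, std::size_t k) {
    if (k == 0 || k >= logits.size()) {
        return logits;  // by convention, k == 0 means "no filtering"
    }
    // Find the k-th largest logit value.
    std::vector<float> sorted(logits);
    std::nth_element(sorted.begin(), sorted.begin() + (k - 1), sorted.end(),
                     std::greater<float>());
    const float threshold = sorted[k - 1];
    // Mask out everything below that threshold.
    for (float& v : logits) {
        if (v < threshold) {
            v = -std::numeric_limits<float>::infinity();
        }
    }
    return logits;
}
```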

Also, the ONNX model you used has 13 outputs, but the ORT call in your main function requests only one output. It looks like a combined output, but I wonder whether this differs from the original PyTorch text-generation algorithm (sample_sequence + top_k_logits). In short, I would like to understand how your text-generation code works.
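
To make my question concrete, this is roughly how I would expect a single output to be requested through the ONNX Runtime C++ API. It is only a sketch under my own assumptions; the input and output names here are guesses, not necessarily the ones in your model:

```cpp
#include <onnxruntime_cxx_api.h>

// Run one forward pass and ask ORT for just one of the model's outputs.
// "input_ids" and "logits" are assumed names; the real GPT-2 export may also
// expose 12 "present" key/value tensors as the remaining outputs.
void run_once(Ort::Session& session, Ort::Value& input_ids) {
    const char* input_names[]  = {"input_ids"};  // assumed input name
    const char* output_names[] = {"logits"};     // request only the logits
    auto outputs = session.Run(Ort::RunOptions{nullptr},
                               input_names, &input_ids, 1,
                               output_names, 1);
    // outputs[0] would hold the [batch, seq_len, vocab] logits tensor;
    // top-k filtering and sampling would then apply to its last time step.
    float* logits = outputs[0].GetTensorMutableData<float>();
    (void)logits;
}
```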

Your code has been a great inspiration, so thank you very much for your work. I look forward to your answer.
