Encoding/decoding NLP model in tensorflow lite (fine-tuned GPT2) #549
@lu-wang-g Hi Lu, it seems to be an issue related to the GPT2Tokenizer. Do you know which team maintains it?
You should be able to do step 2 by copying data from context_idx to input_data. Let me know what your problem is when doing so.
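A minimal sketch of that step, assuming the TFLite Python interpreter and that context_idx holds the token ids from the tokenizer (the model path and id values below are placeholders, not from the original colab):

```python
import numpy as np
import tensorflow as tf

# Hypothetical model path; substitute the converted fine-tuned GPT-2 model.
interpreter = tf.lite.Interpreter(model_path="gpt2_finetuned.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()

# context_idx: token ids produced by the tokenizer (placeholder values).
context_idx = [464, 3290, 318, 845, 13779]

# Copy the ids into a tensor matching the expected [batch, 5] input shape.
input_data = np.array([context_idx], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], input_data)
interpreter.invoke()
```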
Dear @lu-wang-g, many thanks for your answer.
So let me rephrase the questions we have now:

- About the shapes of the input/output tensors: I am not sure I understand what the 5 in the printed input data shape stands for. Should it not be the length of the sequence, which may vary? Or is it a minimum length?
- About the input encoding: should we encode the inputs in a specific way beforehand and feed encoder_outputs to interpreter.set_tensors?
- About the output decoding: is next_token_logits right as it is? I am not sure which part of output_data should be taken to be sampled (here, greedily).

You can find a colab with our process here:
According to the error message, the input shape is [batch, 5]. Your model doesn't seem to accept variable input length. You can verify it by printing out the interpreter's input details (see the sketch below).
It depends on what your output tensor is. It seems like your model is a text-to-text model, so the output contains the ids of the tokens that represent the output text. Similarly, if the output shape is [batch, N], N will be the maximum number of ids the model can return.
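A sketch of both checks with the TFLite Python API (the model path is a placeholder):

```python
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="gpt2_finetuned.tflite")
interpreter.allocate_tensors()

# A fixed second dimension (here 5) means the model only accepts that sequence length.
print(interpreter.get_input_details()[0]["shape"])

# Check whether the output holds token ids ([batch, N]) or per-position
# logits ([batch, N, vocab_size]) before deciding how to decode it.
print(interpreter.get_output_details()[0]["shape"])
```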
Not that I'm aware of. What is the encoder in your link? Do you have pointers?
Inference should use exactly the same encoder/decoder as the training script, and encoding varies from model to model.
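For a GPT-2 fine-tuned with the Hugging Face library (an assumption; the thread doesn't show the training script), the matching encode/decode round trip would look like:

```python
from transformers import GPT2Tokenizer

# Load the same tokenizer the training script used; "gpt2" is a placeholder,
# a fine-tuned model would typically point at its own saved directory.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

context_idx = tokenizer.encode("Hello, how are you")  # words -> token ids
print(context_idx)
print(tokenizer.decode(context_idx))                  # token ids -> words
```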
I don't have context on what the output tensor is, i.e. what each dimension in the shape (1, 5, 50257) refers to; it would help if you could provide that information. In general, always refer to the training script for pre/post-processing. If you can point me to the training script, I can take a look as well.
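If the three dimensions are (batch, sequence position, vocabulary) — which matches GPT-2's 50257-token vocabulary but is only an assumption the training script would need to confirm — greedy sampling would read the logits at the last position, roughly:

```python
import numpy as np
import tensorflow as tf
from transformers import GPT2Tokenizer

# Same hypothetical setup as the earlier sketches.
interpreter = tf.lite.Interpreter(model_path="gpt2_finetuned.tflite")
interpreter.allocate_tensors()
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

# ...set the [1, 5] input tensor of token ids and call interpreter.invoke() as above...

output_data = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])

# Logits for the token that follows the context sit at the last sequence position.
next_token_logits = output_data[0, -1, :]

# Greedy decoding: take the single highest-scoring vocabulary id.
next_token_id = int(np.argmax(next_token_logits))
print(tokenizer.decode([next_token_id]))
```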
Closing the issue. Feel free to reopen.
Thanks @lu-wang-g, sorry I have not had time to look into it since last time.
@Guillaume-slize Hi, I know it has been a long time. Do you remember how you solved this issue with the output decoding?
We are in the process of building a small virtual assistant and would like it to be able to run a fine-tuned version of GPT-2 on a Raspberry Pi with a Coral accelerator.
So far, we have managed to convert our model to TFLite and to get first results. We know how to convert from words to indices with the tokenizer used previously, but then we need a bigger tensor as input to the interpreter. We are missing the conversion from indices to tensors. Is there a simple way to do this?
You can find our pseudo-code here; we are stuck at steps 2 and 6:
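For the indices-to-tensor step, one possible approach is a small padding helper (a sketch; the fixed length 5 and the EOS padding id are assumptions based on the shapes discussed above, not taken from the original pseudo-code):

```python
import numpy as np

def ids_to_input_tensor(ids, seq_len=5, pad_id=50256, dtype=np.int32):
    """Pad or truncate a list of token ids to the interpreter's fixed [1, seq_len] shape."""
    ids = list(ids)[:seq_len]
    ids += [pad_id] * (seq_len - len(ids))  # pad short sequences with GPT-2's EOS id
    return np.array([ids], dtype=dtype)

# Hypothetical usage with a 3-token context:
input_data = ids_to_input_tensor([464, 3290, 318])
print(input_data.shape)  # (1, 5)
```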