
Encoding/decoding NLP model in tensorflow lite (fine-tuned GPT2) #549

Closed
Guillaume-slize opened this issue Jun 6, 2021 · 7 comments

@Guillaume-slize
We are in the process of building a small virtual assistant and would like it to be able to run a fine-tuned version of GPT-2 on a Raspberry Pi with a Coral accelerator.

So far, we have managed to convert our model to TFLite and to get first results. We know how to convert from words to indices with the original tokenizer, but then we need a bigger tensor as input to the interpreter. We are missing the conversion from indices to tensors. Is there a simple way to do this?

You can find our pseudo-code here; we are stuck at steps 2 and 6:

import tensorflow as tf
import numpy as np
from transformers import GPT2Tokenizer
 
#Prelude
TF_MODEL_PATH_LITE = "/path/model.tflite"
 
interpreter = tf.lite.Interpreter(model_path=TF_MODEL_PATH_LITE)
interpreter.allocate_tensors()
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
input_shape = input_details[0]['shape']
 
#1-Encode input, giving you indices
context_idx = tokenizer.encode("Hello world.", return_tensors = "tf")
 
#2- How to convert context_idx to an appropriate np.array?
input_data = np.array(np.random.random_sample(input_shape), dtype=np.int32) #dummy input for now
 
#3- feed input
interpreter.set_tensor(input_details[0]['index'], input_data)
 
#4- Run model
interpreter.invoke()
 
#5- Get output as tensor
output_data = interpreter.get_tensor(output_details[0]['index'])
 
#6- How to decode this np.array back to an idx?
output_idx=np.random.randint(100) #dummy for now ...
 
#7- Decode Output from idx to word
string_tf = tokenizer.decode(output_idx, skip_special_tokens=True)


@wangtz
Member

wangtz commented Jun 10, 2021

@lu-wang-g Hi Lu, this seems to be an issue related to GPT2Tokenizer; do you know which team maintains the GPT2Tokenizer?

@lu-wang-g
Member

You should be able to do step 2 by copying the data from context_idx to input_data. Let me know what your problem is when doing so.
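For illustration, a minimal sketch of what that copy could look like, assuming the model expects int32 token ids with exactly the shape reported in input_details (the zeros used for unused positions are placeholders, not necessarily the right PAD id):

import numpy as np

# context_idx comes from tokenizer.encode(..., return_tensors="tf") and has shape (1, seq_len)
ids = context_idx.numpy().astype(np.int32)

# Build a buffer matching the interpreter's expected input shape, e.g. (1, 5)
input_data = np.zeros(input_shape, dtype=np.int32)
copy_len = min(ids.shape[1], input_shape[1])
input_data[0, :copy_len] = ids[0, :copy_len]

interpreter.set_tensor(input_details[0]['index'], input_data)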

@lu-wang-g lu-wang-g self-assigned this Jun 10, 2021
@Guillaume-slize
Author

Guillaume-slize commented Jun 17, 2021

Dear @lu-wang-g ,

Many thanks for your answer.
When we try what you suggest, this is what we get:

ValueError                                Traceback (most recent call last)

<ipython-input-40-d8a95a34331a> in <module>()
     24 
     25 #3- feed input
---> 26 interpreter.set_tensor(input_details[0]['index'], input_data)
     27 
     28 #4- Run inference

/usr/local/lib/python3.7/dist-packages/tensorflow/lite/python/interpreter.py in set_tensor(self, tensor_index, value)
    570       ValueError: If the interpreter could not set the tensor.
    571     """
--> 572     self._interpreter.SetTensor(tensor_index, value)
    573 
    574   def resize_tensor_input(self, input_index, tensor_size, strict=False):

ValueError: Cannot set tensor: Dimension mismatch. Got 4 but expected 5 for dimension 1 of input 0.

So let me rephrase the questions we have now:

About the shapes of the input/output tensors:

I am not sure I understand what this 5 stands for in the input data shape that is printed. Shouldn't it be the length of the sequence, which may vary? Or is it a minimum length?
Similarly, what do the output shape dimensions stand for? Shouldn't they be the size of the dictionary and the length of the input fed?

About the input encoding:

Should we encode the inputs in a specific way beforehand and feed encoder_outputs to interpreter.set_tensor?
I saw that in the Hugging Face library they first pass it to an encoder before feeding it to the model, cf. this link:
https://github.com/huggingface/transformers/blob/master/src/transformers/generation_tf_utils.py#L320
Is there a simple way to do this similarly in this context?
Or is it simply a one-hot encoding that we need?

About the output decoding:

Is next_token_logits right as such? I am not sure which part of output_data should be sampled (here, greedily) in order to get the next token index. It currently seems to produce a string instead of a single token.

You can find a colab with our process here:
https://colab.research.google.com/drive/1dPzO058qtS0VHO3BmrrJgGLxXjrH6CHd?usp=sharing

@lu-wang-g
Member

> About the shapes of the input/output tensors:
>
> I am not sure I understand what this 5 stands for in the input data shape that is printed. Shouldn't it be the length of the sequence, which may vary? Or is it a minimum length?

According to the error message, the input shape is [batch, 5]. Your model doesn't seem to accept variable input length. You can verify this by printing out input_shape, where input_shape = input_details[0]['shape'], and checking whether any dimension has the value -1. Normally for NLP models, this dimension (5 in this case) is the maximum number of ids accepted. If your number of ids is less than 5, pad with the PAD token from the vocabulary file. See our text classification app for an example, and also see the input processing logic.
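As a rough sketch of that padding step (assumptions: the fixed length is input_shape[1], the model takes int32 ids, and the EOS id is reused as the PAD id because the stock GPT-2 tokenizer has no dedicated PAD token; whatever id the training script actually used for padding must be used here instead):

max_len = int(input_shape[1])              # e.g. 5, per the error message
pad_id = tokenizer.eos_token_id            # assumption: reuse EOS as PAD; must match the training setup

prompt_ids = tokenizer.encode("Hello world.")   # plain Python list of token ids
padded_ids = prompt_ids[:max_len] + [pad_id] * (max_len - len(prompt_ids))

input_data = np.array([padded_ids], dtype=np.int32)  # shape (1, max_len)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()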

> Similarly, what do the output shape dimensions stand for? Shouldn't they be the size of the dictionary and the length of the input fed?

It depends on what your output tensor is. It seems like your model is a text-to-text model, so the output contains the ids of the tokens that represent the output text. Similarly, if the output shape is [batch, N], N will be the maximum number of ids that can be returned by this model.

> About the input encoding:
>
> Should we encode the inputs in a specific way beforehand and feed encoder_outputs to interpreter.set_tensor? I saw that in the Hugging Face library they first pass it to an encoder before feeding it to the model, cf. this link:
> https://github.com/huggingface/transformers/blob/master/src/transformers/generation_tf_utils.py#L320
> Is there a simple way to do this similarly in this context?

Not that I'm aware of. What is the encoder in your link? Do you have pointers?

> Or is it simply a one-hot encoding that we need?

Inference should use exactly the same encoder/decoder as the training script, and encoding varies from model to model.

> About the output decoding:
>
> Is next_token_logits right as such? I am not sure which part of output_data should be sampled (here, greedily) in order to get the next token index. It currently seems to produce a string instead of a single token.
>
> You can find a colab with our process here:
> https://colab.research.google.com/drive/1dPzO058qtS0VHO3BmrrJgGLxXjrH6CHd?usp=sharing

I don't have context on what the output tensor is, i.e. what each dimension in the shape (1, 5, 50257) refers to. It would be helpful if you could provide that information. In general, always refer to the training script for pre/post-processing. If you can point me to the training script, I can help take a look as well.
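If the output really is per-position logits over the GPT-2 vocabulary, i.e. shape (batch, sequence_length, vocab_size), a greedy next-token step could look roughly like the sketch below; which position you read the logits from is an assumption here and has to match how the model was exported (prompt_ids is the unpadded id list from the padding sketch above):

# Assumed setup: prompt_ids holds the unpadded prompt token ids.
output_data = interpreter.get_tensor(output_details[0]['index'])  # e.g. shape (1, 5, 50257)

# Greedy pick: take the logits at the last real (non-PAD) input position,
# then argmax over the vocabulary axis to get the next token id.
next_token_logits = output_data[0, len(prompt_ids) - 1, :]
next_token_id = int(np.argmax(next_token_logits))

next_word = tokenizer.decode([next_token_id], skip_special_tokens=True)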

@lu-wang-g
Member

Closing the issue. Feel free to reopen.

@Guillaume-slize
Author

Thanks @lu-wang-g, sorry, I have not had time to look into it since then.

@krishnarajk

@Guillaume-slize Hi, I know it has been a long time. Do you remember how you solved the output decoding issue?
