diff --git a/tgi-messages-api.md b/tgi-messages-api.md
index 6a2d2cf4b2..d3415ded28 100644
--- a/tgi-messages-api.md
+++ b/tgi-messages-api.md
@@ -121,7 +121,7 @@ for message in chat_completion:
 
 Behind the scenes, TGI’s Messages API automatically converts the list of messages into the model’s required instruction format using its [chat template](https://huggingface.co/docs/transformers/chat_templating).
 
-> ##### 💡 Certain OpenAI features, like function calling, are not compatible with TGI. Currently, the Messages API supports the following chat completion parameters: `stream`, `max_new_tokens`, `frequency_penalty`, `logprobs`, `seed`, `temperature`, and `top_p`.
+> ##### 💡 Certain OpenAI features, like function calling, are not compatible with TGI. Currently, the Messages API supports the following chat completion parameters: `stream`, `max_tokens`, `frequency_penalty`, `logprobs`, `seed`, `temperature`, and `top_p`.
 
 ### With the JavaScript client
 
@@ -299,4 +299,4 @@ endpoint.delete()
 
 The new Messages API in Text Generation Inference provides a smooth transition path from OpenAI models to open LLMs. We can’t wait to see what use cases you will power with open LLMs running on TGI!
 
-_See [this notebook](https://github.com/andrewrreed/hf-notebooks/blob/main/tgi-messages-api-demo.ipynb) for a runnable version of the code outlined in the post._
\ No newline at end of file
+_See [this notebook](https://github.com/andrewrreed/hf-notebooks/blob/main/tgi-messages-api-demo.ipynb) for a runnable version of the code outlined in the post._
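
The substantive change above corrects the parameter name to `max_tokens` (the OpenAI-style name the Messages API expects) from TGI's native `max_new_tokens`. A minimal sketch of how a caller might guard against the unsupported parameters the note warns about — the `SUPPORTED_PARAMS` set is taken from the corrected doc text, while the helper function and the commented-out client call are illustrative, not part of TGI or the `openai` library:

```python
# Parameters the TGI Messages API accepts, per the corrected note in the diff.
SUPPORTED_PARAMS = {
    "stream", "max_tokens", "frequency_penalty",
    "logprobs", "seed", "temperature", "top_p",
}

def build_request_kwargs(**params):
    """Reject chat completion parameters TGI's Messages API does not support
    (e.g. function calling via `tools`), returning the rest unchanged."""
    unsupported = set(params) - SUPPORTED_PARAMS
    if unsupported:
        raise ValueError(f"Not supported by TGI Messages API: {sorted(unsupported)}")
    return params

kwargs = build_request_kwargs(max_tokens=100, temperature=0.7, top_p=0.95, seed=42)
# The validated kwargs could then be forwarded to an OpenAI-compatible client, e.g.:
# client.chat.completions.create(model="tgi", messages=[...], **kwargs)
```

Note the OpenAI naming (`max_tokens`): sending TGI's native `max_new_tokens` through this endpoint would be rejected by the check above, mirroring the rename the diff makes.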