GPTSimpleVectorIndex azure embedding error #990
Comments
I'm running into the same problem myself. Somehow the embedding models are not respecting the maximum token length from the PromptHelper, and I couldn't find any alternative way to supply that information from the documentation. Could someone help us solve this?
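To illustrate why a chunk-size limit matters here: embedding endpoints reject any input longer than the model's context window, so documents have to be split into chunks below that limit before being embedded. The sketch below uses a whitespace "tokenizer" purely as a stand-in for a real one (such as tiktoken); the function name and budget are illustrative, not part of any library API.

```python
def chunk_by_token_budget(text: str, max_tokens: int) -> list[str]:
    """Split text into pieces of at most max_tokens tokens.

    A whitespace split stands in for a real tokenizer; with a real
    model you would count tokens the way the embedding model does.
    """
    tokens = text.split()
    return [
        " ".join(tokens[i:i + max_tokens])
        for i in range(0, len(tokens), max_tokens)
    ]

# A 10-token document with a budget of 4 yields chunks of 4, 4, and 2 tokens,
# each safely under the limit.
doc = " ".join(f"word{i}" for i in range(10))
chunks = chunk_by_token_budget(doc, max_tokens=4)
print(len(chunks))                                # 3
print(all(len(c.split()) <= 4 for c in chunks))   # True
```

If the index builder never applies such a limit (or applies one larger than the embedding model's window), the request fails with exactly the "maximum context length" error reported in this issue.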
It doesn't seem like the embedding operation uses the PromptHelper. I got past that error by explicitly setting chunk_size_limit:
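For reference, a minimal configuration sketch of that fix, assuming the era of llama_index that `GPTSimpleVectorIndex` belongs to. The deployment names, directory path, and exact keyword names here are illustrative assumptions and may differ in your installed release; this is a sketch, not a verified recipe (it also requires live Azure OpenAI credentials, so it is not runnable as-is):

```python
# Hedged sketch: pass chunk_size_limit when building the service context
# so that chunks stay under the embedding model's context window.
# "text-davinci-003", "data", and 512 are placeholder values.
from langchain.llms import AzureOpenAI
from langchain.embeddings import OpenAIEmbeddings
from llama_index import (
    GPTSimpleVectorIndex,
    LangchainEmbedding,
    LLMPredictor,
    ServiceContext,
    SimpleDirectoryReader,
)

llm_predictor = LLMPredictor(llm=AzureOpenAI(deployment_name="text-davinci-003"))
embed_model = LangchainEmbedding(OpenAIEmbeddings())

service_context = ServiceContext.from_defaults(
    llm_predictor=llm_predictor,
    embed_model=embed_model,
    chunk_size_limit=512,  # keep chunks under the embedding model's limit
)

documents = SimpleDirectoryReader("data").load_data()
index = GPTSimpleVectorIndex.from_documents(
    documents, service_context=service_context
)
```

The key point, per the comments in this thread, is that the limit must be set where the index is constructed; setting it only on the PromptHelper did not affect the embedding calls.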
I'm having the exact same problem when using the Azure OpenAI example.
This didn't solve the issue in my case.
This solved the error for me. Maybe the Azure OpenAI demo could be updated accordingly?
Hi, @kwin-wang! I'm here to help the LlamaIndex team manage their backlog, and I wanted to let you know that we are marking this issue as stale. Based on my understanding, the issue you reported was about the code requesting more tokens than the maximum context length allowed by the model, and it seems to have been resolved by setting chunk_size_limit. Before we close this issue, we wanted to check with you whether it is still relevant to the latest version of the LlamaIndex repository. If it is, please let us know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days. Thank you for your contribution to the LlamaIndex repository!
After following the Azure OpenAI demo example and running the code, I received the error message above. How can I fix this problem?