2024-03-28 06:21:00,853 [llm_connection.py:411 - stream_gpt_completion() ] INFO: > Request model: gpt-4-turbo-preview
2024-03-28 06:21:00,872 [llm_connection.py:468 - stream_gpt_completion() ] INFO: problem with request (status 500): {"error":"Model with key 'gpt-4-turbo-preview' not loaded."}
2024-03-28 06:21:00,872 [llm_connection.py:274 - wrapper() ] ERROR: There was a problem with request to openai API: API responded with status code: 500. Request token size: 23 tokens. Response text: {"error":"Model with key 'gpt-4-turbo-preview' not loaded."}
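The 500 response above means the local server (LM Studio here) does not have a model loaded under the key `gpt-4-turbo-preview` that the client sent. OpenAI-compatible servers typically expose a `GET /v1/models` endpoint listing the keys they will actually accept. A minimal sketch for checking this, assuming a stdlib-only client; `parse_model_ids` and `loaded_model_ids` are hypothetical helper names, not part of gpt-pilot:

```python
import json
from urllib.request import urlopen

def parse_model_ids(payload: dict) -> list[str]:
    # An OpenAI-compatible /v1/models response lists models under "data",
    # each with an "id" field; that id is the model key the server accepts.
    return [m["id"] for m in payload.get("data", [])]

def loaded_model_ids(base_url: str) -> list[str]:
    # Query the local server for the models it has actually loaded,
    # e.g. loaded_model_ids("http://localhost:1234/v1").
    with urlopen(f"{base_url.rstrip('/')}/models") as resp:
        return parse_model_ids(json.load(resp))
```

If the configured model name does not appear in that list, the request will fail regardless of which endpoint path is used.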
Version
Visual Studio Code extension
Operating System
Windows 11
What happened?
By changing the endpoint and API key from OpenAI to LM Studio:
When using
OPENAI_ENDPOINT=http://localhost:1234/v1

I know this is not the endpoint one is supposed to use, but just to troubleshoot I also tried this to see what would happen:
OPENAI_ENDPOINT=http://localhost:1234/v1/chat/completions
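A likely reason the second form misbehaves: OpenAI-style clients append the request path (such as `/chat/completions`) to the configured base URL themselves, so the base URL should normally stop at `/v1`. A minimal sketch of normalizing the value; `normalize_openai_endpoint` is a hypothetical helper for illustration, not a gpt-pilot function:

```python
def normalize_openai_endpoint(url: str) -> str:
    # OpenAI-style clients append paths like /chat/completions to the
    # configured base URL on their own, so strip them if present.
    url = url.rstrip("/")
    for suffix in ("/chat/completions", "/completions"):
        if url.endswith(suffix):
            url = url[: -len(suffix)]
    return url

print(normalize_openai_endpoint("http://localhost:1234/v1/chat/completions"))
# http://localhost:1234/v1
```

With the full path configured, the client would effectively request `/v1/chat/completions/chat/completions`, which the local server cannot route.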