I'm implementing the o1 model through the AzureOpenAI integration. So far, I've updated `openai.py` and the TaskWeaver config JSON. When I launch the application with Chainlit (`chainlit run app.py`), I get the following error:
TypeError: OpenAIService.chat_completion() takes from 2 to 6 positional arguments but 7 were given
I suspect this is caused by recent API changes, or by a parameter mismatch between the o1 model integration and the Azure API interface.
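For context, this error pattern is reproducible in isolation. The sketch below uses assumed names (it is not TaskWeaver's actual code): a method with one required and four optional parameters accepts "from 2 to 6" positional arguments once `self` is counted, so a call site that passes one extra positional argument raises exactly the reported message:

```python
# Hypothetical sketch (assumed names; not TaskWeaver's actual code).
class OpenAIService:
    # self + 1 required + 4 optional -> "takes from 2 to 6 positional arguments"
    def chat_completion(self, messages, temperature=0, max_tokens=None,
                        top_p=1, stream=False):
        return {"role": "assistant", "content": "ok"}

svc = OpenAIService()
messages = [{"role": "user", "content": "hello"}]

# A call site updated for o1 that passes one extra positional argument
# (e.g. a reasoning-effort value) triggers the reported TypeError:
try:
    svc.chat_completion(messages, 0, None, 1, False, "high")
except TypeError as err:
    msg = str(err)

print(msg)  # ... takes from 2 to 6 positional arguments but 7 were given
```

If that is what is happening here, the fix is to align the call site with the method signature: either add the new parameter to `chat_completion()` in `openai.py`, or pass the extra value as a keyword argument that the signature actually declares.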
To Reproduce
Steps to reproduce the behavior:
1. Start the service with: chainlit run app.py
2. Type any user query (e.g. "hello" or "what can you do?")
3. Wait for the response
4. See the error message in the terminal logs immediately after the query is processed:
TypeError: OpenAIService.chat_completion() takes from 2 to 6 positional arguments but 7 were given
Expected behavior
The service should process the query and respond normally, without raising a parameter-count error.
Environment Information:
- OS: Windows
- Python version: 3.13.2
- LLM that you're using: o1 through AzureOpenAI
- Other configurations except the LLM API/key related: default TaskWeaver config; no changes other than the model-integration adjustments
Additional context
I'll continue debugging, but I wanted to check early in case others in the community have hit similar integration issues, or in case a recent change to the AzureOpenAI API introduced a breaking change. If anyone has insights or has faced a similar issue, please share!
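One quick way to narrow this down while debugging is to print the signature of the installed method and compare it against the arguments the failing call passes. A minimal sketch, using a stand-in class (in a real session you would import `OpenAIService` from TaskWeaver instead):

```python
import inspect

# Hypothetical stand-in for the real class defined in openai.py.
class OpenAIService:
    def chat_completion(self, messages, temperature=0, max_tokens=None,
                        top_p=1, stream=False):
        ...

sig = inspect.signature(OpenAIService.chat_completion)
print(sig)                  # the declared parameters, in order
print(len(sig.parameters))  # 6 including self, matching the "2 to 6" in the error
```

Counting `self`, a signature like this accepts at most 6 positional arguments, so any extra positional value at the call site will reproduce the error above.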