- @NawarA Slowly but steadily, it's getting there: https://github.com/langchain-ai/langchain/pull/21484/files
- @mbonet do you happen to know if it will allow us to use an existing assistant by passing in the assistant ID, instead of only creating a new assistant?
Feature request
Hi, I love the LangChain library, but it seems to lack support for OpenAI's newest Assistant capabilities, which makes it hard to keep using LangChain.
For example, the OpenAI Threads API now supports streaming, but when I try to stream using the existing implementation, I just get the fully baked result back (it's not actually a stream of tokens, which is my mental model of a stream).
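To illustrate the mental model described above — tokens arriving incrementally rather than one fully materialized result — here is a minimal sketch. The `streamTokens` generator is a hypothetical stand-in for a token stream from the Threads API, not the real SDK:

```typescript
// Hypothetical stand-in for a token stream from the Assistants Threads API:
// each yielded chunk is a token delta, not the finished answer.
async function* streamTokens(): AsyncGenerator<string> {
  for (const token of ["Hello", ", ", "world", "!"]) {
    yield token;
  }
}

// A caller can render each token as it arrives, instead of waiting
// for the whole response to be baked before seeing anything.
async function consumeStream(): Promise<string[]> {
  const seen: string[] = [];
  for await (const token of streamTokens()) {
    seen.push(token);
  }
  return seen;
}
```

The current behavior the post complains about is equivalent to receiving only `seen.join("")` at the end, with no intermediate chunks.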
Also, if something fails (for example, the schema is rejected), the OpenAI Assistant API complains that a run is active and won't accept any more requests from the source code until a developer manually cancels the run, or waits out a non-developer-set time period for the run to expire. A better approach: if `agentExecutor` crashes or rejects, the `OpenAIAssistantRunnable` wrapper auto-cancels the crashed run. Alternatively (the worse option), running `agentExecutor.invoke` returns the `runId`, allowing a developer to try/catch and cancel the run themselves. Without this level of error handling, a dev has to wait for a failure, then find the last run that's still active and manually cancel it. DX is much better in the scenario I'm suggesting.
Finally, the OpenAI Assistant API supports `truncationStrategy` and a number of other params that are hidden or not settable by developers. In practice, setting a truncation strategy is super important, as are the other options that devs are supposed to have access to.
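The error-handling behavior requested above can be sketched roughly as follows. `RunsClient` and `invokeWithAutoCancel` are hypothetical stand-ins (neither the real OpenAI SDK nor the real LangChain API); the point is only the shape of the fix: surface the `runId`, and cancel the run on failure so the thread is not left locked.

```typescript
// Hypothetical stand-in for the OpenAI runs API surface used below.
interface RunsClient {
  create(threadId: string, opts: { assistantId: string }): Promise<{ runId: string }>;
  cancel(threadId: string, runId: string): Promise<void>;
}

// Sketch of the proposed wrapper behavior: create the run, and if
// execution throws, auto-cancel the run before rethrowing, so the
// developer never has to hunt down and cancel a stuck run by hand.
async function invokeWithAutoCancel(
  runs: RunsClient,
  threadId: string,
  assistantId: string,
  execute: (runId: string) => Promise<string>,
): Promise<string> {
  const { runId } = await runs.create(threadId, { assistantId });
  try {
    return await execute(runId);
  } catch (err) {
    await runs.cancel(threadId, runId); // free the thread, then surface the error
    throw err;
  }
}
```

Exposing `runId` to the caller (the "worse" fallback in the post) is the same pattern with the try/catch moved into user code.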
I'd love it if LangChain could fully support the 3 features above, as OpenAI Assistant + LangChain seem like an indispensable DX, if done right.
Motivation
I'm using a combo of OpenAI + LangChain, and I'm at a fork in the road, because I need streaming and better error handling. Do I stop using agentExecutor and switch to the OpenAI Assistant V2 API directly, or can I fully rely on LangChain for these basics?
I'd like to rely on LangChain, so I wrote this ticket... and I'm sure I'm not the only one wishing for these features, as I think OpenAI will double down on its Assistant API.
Proposal (If applicable)
See above