
[Langchain_Community]: OpenLLM Client Fixes + Added Timeout Parameter #17478

Merged: 6 commits merged into langchain-ai:master from open_llm_client_fix on Feb 19, 2024

Conversation

keenborder786 (Contributor) commented:

  • OpenLLM was using an outdated method to get the final text output from an openllm client invocation, which raised an error; this is now corrected.
  • OpenLLM's _identifying_params was reading the openllm client configuration through outdated attributes, which also raised an error.
  • Updated the docstring for OpenLLM.
  • Added a timeout parameter that is passed through to the underlying openllm client (a hedged sketch follows this list).
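
For context, here is a minimal sketch of what the corrected call path looks like. It is illustrative only: the server URL, prompt, and timeout value are not from this PR, and it assumes an openllm version whose HTTPClient accepts a timeout and whose generate() returns a GenerationOutput carrying an outputs list (per the schema link referenced later in this thread).

```python
import openllm

# Hypothetical server address and timeout; both are assumptions for the sketch.
client = openllm.client.HTTPClient("http://localhost:3000", timeout=60)

res = client.generate("What is the capital of France?")

# The fix: older code read a removed `responses` attribute; the completions
# now live under `outputs`, each chunk exposing a `.text` field.
print(res.outputs[0].text)
```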


dosubot added the size:S, Ɑ: models, and 🤖:bug labels on Feb 13, 2024
keenborder786 (Contributor, Author) commented:

@baskaryan

keenborder786 (Contributor, Author) commented:

@eyurtsev

baskaryan (Collaborator) commented:

cc @aarnphm

baskaryan merged commit 43dc5d3 into langchain-ai:master on Feb 19, 2024
58 checks passed
keenborder786 deleted the open_llm_client_fix branch on February 25, 2024
haydeniw pushed a commit to haydeniw/langchain that referenced this pull request on Feb 27, 2024; the commit message mirrors the PR description above.
eyurtsev pushed a commit that referenced this pull request on Apr 9, 2024:
…20007)

Same changes as the merged [PR](#17478), but for the
async client, as the same issues persist.

- Replaced the 'responses' attribute of OpenLLM's GenerationOutput schema with 'outputs'.
  Reference:
  https://github.com/bentoml/OpenLLM/blob/66de54eae7e420a3740ddd77862fd7f7b7d8a222/openllm-core/src/openllm_core/_schemas.py#L135

- Added a timeout parameter for the async client (a hedged sketch follows this commit message).

---------

Co-authored-by: Seray Arslan <seray.arslan@knime.com>
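
A minimal sketch of the async counterpart described above. AsyncHTTPClient and its awaitable generate() are assumptions inferred from this summary, not code copied from the PR; the URL, prompt, and timeout value are illustrative.

```python
import asyncio

import openllm


async def main() -> None:
    # Hypothetical address and timeout, mirroring the sync sketch earlier.
    client = openllm.client.AsyncHTTPClient("http://localhost:3000", timeout=60)
    res = await client.generate("What is the capital of France?")
    # Same schema fix as the sync path: read `outputs`, not `responses`.
    print(res.outputs[0].text)


asyncio.run(main())
```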
junkeon pushed a commit to UpstageAI/langchain that referenced this pull request on Apr 16, 2024; the commit message matches the #20007 message above.
hinthornw pushed a commit that referenced this pull request on Apr 26, 2024; the commit message matches the #20007 message above.
Labels
🤖:bug (Related to a bug, vulnerability, unexpected error with an existing feature)
Ɑ: models (Related to LLMs or chat model modules)
size:S (This PR changes 10-29 lines, ignoring generated files)

3 participants