
Add Anyscale endpoints #213

Merged: 24 commits merged from feat/anyscale into main on Jan 8, 2024
Conversation

@plaguss (Contributor) commented on Jan 5, 2024

Description

This PR adds Anyscale endpoints on top of the OpenAI integration.

Would close #121
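For context, a minimal usage sketch of the new class (the module path matches src/distilabel/llm/anyscale.py from this PR; the constructor arguments and the task import shown here are assumptions and may differ from the final implementation):

```python
# Hypothetical sketch: AnyscaleLLM reuses the OpenAI client under the hood but
# points it at the Anyscale Endpoints API.
import os

from distilabel.llm import AnyscaleLLM          # added in src/distilabel/llm/anyscale.py
from distilabel.tasks import TextGenerationTask  # assumed task class

llm = AnyscaleLLM(
    model="mistralai/Mistral-7B-Instruct-v0.1",  # any model served by Anyscale Endpoints
    task=TextGenerationTask(),
    api_key=os.getenv("ANYSCALE_API_KEY"),       # or omit and rely on the env var
)
```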

@plaguss marked this pull request as ready for review on January 5, 2024 10:59
@plaguss self-assigned this on Jan 5, 2024
@alvarobartt (Member) left a comment

LGTM, but shouldn't we add a check to ensure that OpenAILLM and AnyscaleLLM are not used within the same Pipeline? AFAIK that would fail, since both OPENAI_BASE_URL and OPENAI_API_KEY are modified, right?

Review comments (outdated, resolved): examples/anyscale-endpoints.py, src/distilabel/llm/anyscale.py
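A hypothetical illustration of the collision being raised (the key values are placeholders; the base URL is the usual Anyscale Endpoints one):

```python
import os

# 1) OpenAILLM configured through environment variables:
os.environ["OPENAI_API_KEY"] = "sk-openai-..."

# 2) AnyscaleLLM later overrides the same variables for its own client:
os.environ["OPENAI_API_KEY"] = "esecret_anyscale-..."
os.environ["OPENAI_BASE_URL"] = "https://api.endpoints.anyscale.com/v1"

# From here on, an OpenAILLM that relies on the environment would silently hit
# the Anyscale endpoint with the Anyscale key.
```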
@alvarobartt (Member) commented

> LGTM, but shouldn't we add a check to ensure that OpenAILLM and AnyscaleLLM are not used within the same Pipeline? AFAIK that would fail, since both OPENAI_BASE_URL and OPENAI_API_KEY are modified, right?

I guess it would work fine as long as we don't rely on the environment variables, right? But I'm unsure how that's handled internally by OpenAI: will the key provided via argument override whatever is set in the environment variable?

alvarobartt and others added 6 commits January 5, 2024 14:16
Co-authored-by: Alvaro Bartolome <alvaro@argilla.io>
@gabrielmbmb (Member) left a comment

LGTM! Could we add some unit tests to the OpenAILLM class similar to the ones from VertexAILLM?
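A sketch of what such a test could look like, loosely modelled on the VertexAILLM tests; the patch target, constructor arguments, and asserted attribute are assumptions about distilabel's internals, not the tests that ended up in this PR:

```python
from unittest import mock

from distilabel.llm import OpenAILLM
from distilabel.tasks import TextGenerationTask  # assumed task class


@mock.patch("openai.OpenAI")  # assumed patch target; avoids real network calls
def test_openai_llm(mock_openai: mock.MagicMock) -> None:
    llm = OpenAILLM(
        model="gpt-3.5-turbo",
        task=TextGenerationTask(),
        api_key="sk-fake",  # passed explicitly so no env var is needed
    )
    assert llm.model_name == "gpt-3.5-turbo"  # assumed attribute name
```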

@plaguss (Contributor, Author) commented on Jan 5, 2024

> LGTM, but shouldn't we add a check to ensure that OpenAILLM and AnyscaleLLM are not used within the same Pipeline? AFAIK that would fail, since both OPENAI_BASE_URL and OPENAI_API_KEY are modified, right?

> I guess it would work fine as long as we don't rely on the environment variables, right? But I'm unsure how that's handled internally by OpenAI: will the key provided via argument override whatever is set in the environment variable?

It seems that the argument takes precedence in the OpenAI client, so it should work fine unless both OpenAILLM and AnyscaleLLM are used with the same environment variable names. I've updated the docs to take this into account: AnyscaleLLM will look for ANYSCALE_API_KEY instead of OPENAI_API_KEY, in case someone prefers the environment variables.
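For reference, a small sketch of the precedence behaviour described above, using the OpenAI client directly (the key values are placeholders):

```python
import os

from openai import OpenAI

os.environ["OPENAI_API_KEY"] = "sk-from-env"

client = OpenAI(
    api_key="esecret_from-arg",                        # explicit argument wins over OPENAI_API_KEY
    base_url="https://api.endpoints.anyscale.com/v1",  # Anyscale Endpoints base URL
)
assert client.api_key == "esecret_from-arg"

# With the docs update, AnyscaleLLM reads ANYSCALE_API_KEY rather than
# OPENAI_API_KEY, so environment-variable users don't collide with OpenAILLM.
```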

@plaguss merged commit 27b0c6b into main on Jan 8, 2024
4 checks passed
@plaguss deleted the feat/anyscale branch on January 8, 2024 17:04
Linked issue: [FEATURE] Add support for AnyScale inference endpoints (#121)