Add AnyscaleLLM
#447
Conversation
Add `tests`
Thanks for the PR @davidberenstein1957!
Co-authored-by: Alvaro Bartolome <alvaro@argilla.io>
LGTM, great work! Thanks for also removing the model defaults. Feel free to merge `core-refactor` into your branch, delete the `load` method from `AnyscaleLLM`, and then we can merge. Thanks again!
Anyscale LLM implementation that uses the async OpenAI API client, since the Anyscale API duplicates OpenAI's API behavior.

Attributes:
    model: the model name to use for the LLM, e.g., `google/gemma-7b-it`. [Supported models](https://docs.endpoints.anyscale.com/text-generation/supported-models/google-gemma-7b-it).
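Since the docstring above notes that Anyscale duplicates OpenAI's API behavior, the only client-side differences are the base URL and the API key. A minimal sketch of the key-resolution precedence (an explicit argument wins, then an environment variable); note that `ANYSCALE_BASE_URL`, `ANYSCALE_API_KEY`, and `resolve_api_key` are illustrative names and not part of this PR:

```python
import os
from typing import Optional

# Assumed base URL for Anyscale's OpenAI-compatible endpoint; check
# Anyscale's own docs for the current value.
ANYSCALE_BASE_URL = "https://api.endpoints.anyscale.com/v1"


def resolve_api_key(
    explicit: Optional[str] = None, env_var: str = "ANYSCALE_API_KEY"
) -> str:
    """Resolve the API key: an explicit argument wins, then the env var."""
    key = explicit or os.environ.get(env_var)
    if key is None:
        raise ValueError(f"No API key provided and `{env_var}` is not set.")
    return key
```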
Suggested change:
- model: the model name to use for the LLM, e.g., `google/gemma-7b-it`. [Supported models](https://docs.endpoints.anyscale.com/text-generation/supported-models/google-gemma-7b-it).
+ model: the model name to use for the LLM, e.g., `google/gemma-7b-it`. [Supported models](https://docs.endpoints.anyscale.com/text-generation/supported-models).
Sadly this is not defined haha
src/distilabel/llm/anyscale.py
Outdated
def load(
    self, api_key: Optional[str] = None, base_url: Optional[str] = None
) -> None:
    """Loads the `AsyncOpenAI` client to benefit from async requests."""
    try:
        from openai import AsyncOpenAI
    except ImportError as ie:
        raise ImportError(
            "OpenAI Python client is not installed which is required for `AnyscaleLLM`. Please install it using"
            " `pip install openai`."
        ) from ie

    self.api_key = self._handle_api_key_value(
        self_value=self.api_key, load_value=api_key, env_var="OPENAI_API_KEY"
    )
    self.base_url = base_url or self.base_url

    self._aclient = AsyncOpenAI(
        api_key=self.api_key.get_secret_value(),
        base_url=self.base_url,
        max_retries=6,
    )
No need to define this function, as there's already an update on `core-refactor`, so feel free to merge the changes from that branch and remove this function instead 👍🏻
Closes #446