
Feature containerized #14

Merged
merged 21 commits into master on Mar 22, 2024
Conversation

dockhardman
Owner

No description provided.

- Add additional ModelDeploy instances for Llama2-70B and Mixtral-8x7B models
- Allow specifying custom model deployments in GroqOpenaiAction constructor
- Set _model_deploys attribute based on provided or default model deployments
- Import Sequence type hint from typing module
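The constructor change described in the bullets above can be sketched as follows. This is a minimal sketch, not the actual languru source: the `ModelDeploy` field names and the default deployment entries are assumptions, illustrating only the pattern of accepting an optional `model_deploys` sequence and storing it on `_model_deploys`.

```python
from typing import NamedTuple, Optional, Sequence, Tuple


class ModelDeploy(NamedTuple):
    """Maps a deployment name to a provider model name (field names hypothetical)."""

    model_deploy_name: str
    model_name: str


class GroqOpenaiAction:
    # Class-level defaults, now including Llama2-70B and Mixtral-8x7B entries
    # (identifiers illustrative).
    model_deploys: Tuple[ModelDeploy, ...] = (
        ModelDeploy("llama2-70b-4096", "llama2-70b-4096"),
        ModelDeploy("mixtral-8x7b-32768", "mixtral-8x7b-32768"),
    )

    def __init__(
        self, *, model_deploys: Optional[Sequence[ModelDeploy]] = None
    ) -> None:
        # Use the caller-supplied deployments when provided,
        # otherwise fall back to the class defaults.
        self._model_deploys = (
            tuple(model_deploys) if model_deploys is not None else self.model_deploys
        )
```

With this shape, `GroqOpenaiAction(model_deploys=[...])` overrides the defaults, while `GroqOpenaiAction()` keeps them.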
Remove the "frequency_penalty" parameter from kwargs when its value is set to 0

The git diff shows the following changes:

1. Added several new OpenAI models to the ModelDeploy list, including various versions of GPT-3.5, GPT-4, text embeddings, moderation models, and text-to-speech models.

2. In the __init__ method, added a check to remove the "frequency_penalty" parameter from kwargs if its value is 0.0.

3. In the chat_stream method, moved the removal of the "stream" parameter from kwargs before validating the model. Also added the same check for "frequency_penalty" as in the __init__ method.

The commit comment summarizes these changes as adding support for more OpenAI models and removing the "frequency_penalty" parameter when its value is set to 0.
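The "frequency_penalty" check applied in both `__init__` and `chat_stream` can be expressed as a small helper. This is a sketch of the described behavior only; the helper name is hypothetical and does not appear in the PR.

```python
def drop_zero_frequency_penalty(kwargs: dict) -> dict:
    """Remove "frequency_penalty" from a kwargs dict when its value is 0.0,
    mirroring the check described in points 2 and 3 above. Any other value
    (including a missing key) is left untouched."""
    if kwargs.get("frequency_penalty") == 0.0:
        kwargs.pop("frequency_penalty")
    return kwargs
```

Dropping the parameter at 0.0 lets the backend apply its own default instead of passing an explicit zero.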
1. Extend the default `model_deploys` tuple, including models like codellama, llama-2, mistral, mixtral, pplx, and sonar variants.

2. Modify the `__init__` method to accept an optional `model_deploys` parameter, allowing the default model list to be overridden when initializing the `PerplexityAction` class.

3. Update the `__init__` method to use the provided `model_deploys` sequence if available, otherwise fall back to the default `self.model_deploys` tuple.

4. Minor import change to include `Sequence` from the `typing` module.
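Steps 2 and 3 above amount to the same optional-override pattern as in the Groq action. A minimal sketch, with illustrative deployment names standing in for the actual codellama/llama-2/mistral/mixtral/pplx/sonar entries:

```python
from typing import Optional, Sequence, Tuple


class PerplexityAction:
    # Default deployment names (illustrative only).
    model_deploys: Tuple[str, ...] = ("mixtral-8x7b-instruct", "sonar-medium-chat")

    def __init__(self, *, model_deploys: Optional[Sequence[str]] = None) -> None:
        # Step 3: use the provided sequence if available,
        # otherwise fall back to the class default tuple.
        self._model_deploys = (
            tuple(model_deploys) if model_deploys is not None else self.model_deploys
        )
```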
- Add languru-llm-pplx service for Perplexity (pplx) integration
- Configure environment variables, ports, volumes, and dependencies
- Associate with the all and pplx profiles

- Add languru-llm-groq service for Groq integration
- Configure environment variables, ports, volumes, and dependencies
- Associate with the all and groq profiles
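A compose service of the kind described above might look like the fragment below. This is a hedged sketch, not the PR's actual docker-compose.yml: the image name, port, volume path, and environment variable are assumptions; only the service name and the profiles follow the commit messages.

```yaml
services:
  languru-llm-groq:
    image: languru:latest          # image name assumed
    environment:
      GROQ_API_KEY: ${GROQ_API_KEY}  # variable name assumed
    ports:
      - "8682:8682"                # port assumed
    volumes:
      - ./data:/app/data           # path assumed
    profiles: ["all", "groq"]      # per the commit message
```

With Compose profiles, `docker compose --profile groq up` starts only the services tagged with that profile.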
Owner Author

@dockhardman dockhardman left a comment


ok

@dockhardman dockhardman merged commit 741d45f into master Mar 22, 2024
1 check failed