
Add support for Mistral models via Bedrock #400

Merged: 3 commits into aws-samples:main on Mar 7, 2024

Conversation

massi-ang (Collaborator) commented:
Issue #, if available:

Resolves: #390

Description of changes:
Since the prompt format of Mistral and Mixtral is identical to Llama2's, we reused the Llama2 adapter, registering it for the Mistral models as well (see the sketch below).

The input and output adaptations are different and are handled by the base Bedrock LLM class.

Finally, we revised the Llama2 prompts.
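
For context, a minimal sketch of the adapter reuse described above, assuming a registry pattern. The decorator, class name, and model ID prefixes are illustrative assumptions, not the repository's actual API:

```python
# Hypothetical adapter registry; prefixes and names are assumptions.
ADAPTER_REGISTRY: dict[str, type] = {}

def register_adapter(*model_id_prefixes: str):
    """Map several Bedrock model ID prefixes to one adapter class."""
    def decorator(cls: type) -> type:
        for prefix in model_id_prefixes:
            ADAPTER_REGISTRY[prefix] = cls
        return cls
    return decorator

# Llama2, Mistral 7B, and Mixtral 8x7B share the [INST] prompt format,
# so a single adapter can serve all three model families.
@register_adapter("meta.llama2", "mistral.mistral-7b", "mistral.mixtral-8x7b")
class Llama2Adapter:
    def format_prompt(self, user_message: str) -> str:
        return f"<s>[INST] {user_message} [/INST]"
```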

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

@@ -33,7 +29,9 @@ def prepare_input(
if provider == "anthropic":
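
For readers without the full diff, a hedged sketch of what the provider branching around this hunk might look like. Only the "anthropic" comparison is visible above; the remaining branches and the body field names are assumptions based on Bedrock's invoke-model request shapes, not the repository's actual code:

```python
import json

def prepare_input(provider: str, prompt: str, model_kwargs: dict) -> str:
    # Each Bedrock provider expects a differently shaped request body.
    if provider == "anthropic":
        body = {"prompt": prompt,
                "max_tokens_to_sample": model_kwargs.get("max_tokens", 512)}
    elif provider == "meta":
        body = {"prompt": prompt,
                "max_gen_len": model_kwargs.get("max_tokens", 512)}
    elif provider == "mistral":
        # Same [INST] prompt text as Llama2, but a different body field.
        body = {"prompt": prompt,
                "max_tokens": model_kwargs.get("max_tokens", 512)}
    else:
        body = {"inputText": prompt}
    return json.dumps(body)
```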
Collaborator commented:

should we move model providers to enum?
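
A sketch of the refactor the reviewer is suggesting: a str-valued Enum centralizes the provider names while keeping existing string comparisons working. The member list here is an assumption for illustration:

```python
from enum import Enum

class Provider(str, Enum):
    # Values match the provider segment of Bedrock model IDs,
    # e.g. "anthropic" in "anthropic.claude-v2".
    ANTHROPIC = "anthropic"
    META = "meta"
    MISTRAL = "mistral"
    AMAZON = "amazon"

# Because members are also str instances, existing checks such as
# `provider == "anthropic"` keep working during an incremental migration:
assert Provider.ANTHROPIC == "anthropic"
```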

@bigadsoleiman merged commit 0545d45 into aws-samples:main on Mar 7, 2024. 1 check passed.
Labels: None yet
Projects: Status: Done
Development: Successfully merging this pull request may close this issue: Add Mistral/Mixtral in Bedrock adapter
2 participants