
Error : Stop sequence key name for {meta or mistral or any other mode} is not supported with AWS Bedrock #20053

Open
david7joy opened this issue Apr 5, 2024 · 8 comments
Labels
🔌: aws Primarily related to Amazon Web Services (AWS) integrations 🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature

Comments

@david7joy

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

I am trying to use AWS Bedrock models such as Llama/Mistral with LangChain components such as SQLDatabaseToolkit.

from langchain_community.llms import Bedrock
from langchain_community.utilities import SQLDatabase
from langchain_community.agent_toolkits import SQLDatabaseToolkit, create_sql_agent
from langchain_core.callbacks import StreamingStdOutCallbackHandler

model = Bedrock(credentials_profile_name="my-profile",
                model_id="meta.llama2-70b-chat-v1",
                model_kwargs={"temperature": 0.5},
                streaming=True,
                callbacks=[StreamingStdOutCallbackHandler()])

db = SQLDatabase.from_uri('database url')
toolkit = SQLDatabaseToolkit(llm=model, db=db)
agent_executor = create_sql_agent(llm=model, toolkit=toolkit, verbose=True, handle_parsing_errors=True)

# Query module
result = agent_executor.invoke(prompt)

Error Message and Stack Trace (if applicable)

This errors out with the following.

  File "/opt/homebrew/lib/python3.11/site-packages/langchain_community/llms/bedrock.py", line 833, in _call
    for chunk in self._stream(
  File "/opt/homebrew/lib/python3.11/site-packages/langchain_community/llms/bedrock.py", line 613, in _prepare_input_and_invoke_stream
    raise ValueError(
ValueError: Stop sequence key name for meta is not supported.

Description

I have tried the same code with OpenAI, Ollama (Mistral/Llama), and Google GenAI models, and none of them show this error. This appears to be specific to the way the Bedrock integration works in LangChain, or to the Bedrock service itself.

Is there a workaround I can use to get this to work?
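For context, the ValueError above is raised when the wrapper looks up a stop-sequence request key by the provider prefix of `model_id` and finds no entry. The following is a simplified, hypothetical sketch of that lookup based on the key map discussed later in this thread (`PROVIDER_STOP_KEY` and `stop_key_for` are illustrative names, not the actual langchain_community code):

```python
# Simplified sketch of how the Bedrock wrapper resolves the stop-sequence
# request key from the provider prefix of model_id. Illustration only,
# based on this thread; not the actual langchain_community source.
PROVIDER_STOP_KEY = {
    "anthropic": "stop_sequences",
    "amazon": "stopSequences",
    "ai21": "stop_sequences",
    "cohere": "stop_sequences",
    "mistral": "stop_sequences",  # mis-mapped at the time; later fixed to "stop"
}

def stop_key_for(model_id: str) -> str:
    # e.g. "meta" from "meta.llama2-70b-chat-v1"
    provider = model_id.split(".")[0]
    if provider not in PROVIDER_STOP_KEY:
        raise ValueError(f"Stop sequence key name for {provider} is not supported.")
    return PROVIDER_STOP_KEY[provider]
```

With this lookup, any `meta.*` model raises the exact error in the traceback above, because "meta" has no entry in the map.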

System Info

System Information

OS: Darwin
OS Version: Darwin Kernel Version 23.3.0: Wed Dec 20 21:30:44 PST 2023
Python Version: 3.11.7 (main, Dec 4 2023, 18:10:11) [Clang 15.0.0 (clang-1500.1.0.2.5)]

Package Information

langchain_core: 0.1.40
langchain: 0.1.14
langchain_community: 0.0.31
langsmith: 0.1.40
langchain_experimental: 0.0.56
langchain_openai: 0.0.5
langchain_text_splitters: 0.0.1

Packages not installed (Not Necessarily a Problem)

The following packages were not found:

langgraph
langserve

@dosubot dosubot bot added 🔌: aws Primarily related to Amazon Web Services (AWS) integrations 🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature labels Apr 5, 2024
@jonathancaevans
Contributor

Bedrock Meta models don't currently support stop sequences.

Mistral models have a parameter-mapping issue addressed in the above PR.

If you want to use any of the three Mistral models for an agent, you can use the model_kwargs settings to define the stop sequences at the client level, and then modify the agent_executor object to remove any stop sequences it would otherwise pass to the LLM class, until the PR is merged.

Like so, roughly:

from langchain_community.llms import Bedrock

llm = Bedrock(
    model_id="mistral.mixtral-8x7b-instruct-v0:1",
    model_kwargs={'stop': ['Stop!']}
)

llm('<s>[INST]Can you say `Stop!`?[/INST]')

@t-mac81

t-mac81 commented Apr 11, 2024

Thanks for the PR. I'm surprised this wasn't tested before the Bedrock wrapper was released.

@david7joy
Author

@jonathancaevans Thanks. I did try this before, but it doesn't work either, especially when I am using an agent_executor.

prompt = "How much money do I have in my account. Final Result should show amount. Think step by step."

model = Bedrock(credentials_profile_name="crl-revenue",
                model_id="mistral.mistral-7b-instruct-v0:2",
                model_kwargs={'stop' : ['Stop!']},
                streaming=True,
                callbacks=[StreamingStdOutCallbackHandler()])

db = SQLDatabase.from_uri('database url')
toolkit = SQLDatabaseToolkit(llm=model,db=db)
agent_executor = create_sql_agent(llm=model, toolkit=toolkit, verbose=True, handle_parsing_errors=True)

# Query module
result = agent_executor.invoke(prompt)
File "/opt/homebrew/lib/python3.11/site-packages/langchain_community/llms/bedrock.py", line 654, in _prepare_input_and_invoke_stream
  raise ValueError(f"Error raised by bedrock service: {e}")
ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModelWithResponseStream operation: Malformed input request: #: extraneous key [stop_sequences] is not permitted, please reformat your input and try again.

I tried other services, such as OpenAI models and Gemini models via Vertex, and none of them have this issue. Hopefully that PR merge will help.

@t-mac81

t-mac81 commented Apr 15, 2024

Fairly simple workaround for now until the PR is merged. After you declare your llm:

llm.provider_stop_sequence_key_name_map = {
    'anthropic': 'stop_sequences',
    'amazon': 'stopSequences',
    'ai21': 'stop_sequences',
    'cohere': 'stop_sequences',
    'mistral': 'stop',
}

baskaryan pushed a commit that referenced this issue Apr 30, 2024
- **Description:** Change Bedrock's Mistral stop-sequence key mapping to "stop", which is the correct key per the [Bedrock docs](https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-mistral.html), rather than "stop_sequences":

  {
      "prompt": string,
      "max_tokens": int,
      "stop": [string],
      "temperature": float,
      "top_p": float,
      "top_k": int
  }
- **Issue:** #20053 
- **Dependencies:** N/A
- **Twitter handle:** N/a
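To illustrate the schema from the commit above, a well-formed Mistral-on-Bedrock request body uses the "stop" key; the values below are placeholders, not output from the library:

```python
import json

# Build a Mistral-on-Bedrock request body using the "stop" key from the
# schema quoted in the commit message. Values are illustrative placeholders.
body = json.dumps({
    "prompt": "<s>[INST]Can you say `Stop!`?[/INST]",
    "max_tokens": 64,
    "stop": ["Stop!"],
    "temperature": 0.5,
    "top_p": 0.9,
    "top_k": 50,
})

# A body carrying "stop_sequences" instead would be rejected by Bedrock with
# "extraneous key [stop_sequences] is not permitted", as seen earlier in this thread.
```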
@wild-thomas

Fairly simple workaround for now until the PR is merged. After you declare your llm:

llm.provider_stop_sequence_key_name_map = {'anthropic': 'stop_sequences', 'amazon': 'stopSequences',
                                                   'ai21': 'stop_sequences', 'cohere': 'stop_sequences',
                                                   'mistral': 'stop'}

What do we have to change for 'meta' here? I'm getting the same error for 'meta.llama3-8b-instruct-v1:0'.

@t-mac81

t-mac81 commented May 4, 2024

Fairly simple workaround for now until the PR is merged. After you declare your llm:

llm.provider_stop_sequence_key_name_map = {'anthropic': 'stop_sequences', 'amazon': 'stopSequences',
                                                   'ai21': 'stop_sequences', 'cohere': 'stop_sequences',
                                                   'mistral': 'stop'}

What do we have to change for 'meta' here? I'm getting the same error for 'meta.llama3-8b-instruct-v1:0'.

Meta is not in the key map. Maybe you can add stop sequences as kwargs; you'll have to check what boto3 supports with Meta models.

@bqmackay

bqmackay commented May 6, 2024

I added "meta": "" to the map. It doesn't crash, but I doubt that's the right way to do it. The fact that Llama 3 doesn't have a stop sequence makes me want to believe that leaving it blank is OK.
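Rather than mapping "meta" to an empty key, another option is to drop the stop parameter entirely before it reaches the request body, since the Llama request schemas on Bedrock have no stop-sequence field. A hypothetical helper (the function name and structure are illustrative, not part of the library):

```python
# Hypothetical helper: strip stop-sequence parameters for providers whose
# Bedrock request schema has no stop field (e.g. "meta"). Key names taken
# from the provider map discussed in this thread.
SUPPORTED_STOP_KEYS = {
    "anthropic": "stop_sequences",
    "amazon": "stopSequences",
    "ai21": "stop_sequences",
    "cohere": "stop_sequences",
    "mistral": "stop",
}

def sanitize_body(provider: str, body: dict) -> dict:
    cleaned = dict(body)
    if provider not in SUPPORTED_STOP_KEYS:
        # Provider does not accept stop sequences; remove any variant of the key.
        for key in ("stop", "stop_sequences", "stopSequences"):
            cleaned.pop(key, None)
    return cleaned
```

This sidesteps the crash the same way the empty-key mapping does, but without sending an unknown key to the service.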

@wild-thomas

Setting llm.provider_stop_sequence_key_name_map = {...} is not the solution for the bug originally reported.
PR #19220 seems to address the problem in the create_react_agent() function.
