
create_react_agent incompatible with AWS Bedrock input validation due to hard coded ['\nObservation:'] stop sequence #16840

Closed
manwithaplandy opened this issue Jan 31, 2024 · 9 comments
Labels
Ɑ: agent Related to agents module 🔌: aws Primarily related to Amazon Web Services (AWS) integrations 🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature

Comments

@manwithaplandy

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.

Example Code

The following code:

from langchain_community.agent_toolkits import create_sql_agent
from langchain_community.llms import Bedrock
from langchain_community.utilities import SQLDatabase

db = SQLDatabase.from_uri("sqlite:///example.db")  # placeholder; any SQLDatabase connection works

llm = Bedrock(
    credentials_profile_name="Bedrock",
    model_id="amazon.titan-text-express-v1",
    model_kwargs={
        "temperature": 0.9,
    },
    verbose=True,
)
agent_executor = create_sql_agent(
    llm,
    db=db,
    verbose=True,
)

agent_executor.invoke("Retrieve all table data from the last 3 months.")

Error Message and Stack Trace (if applicable)

Entering new AgentExecutor chain...
Traceback (most recent call last):
File "/Users/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/langchain_community/llms/bedrock.py", line 533, in _prepare_input_and_invoke_stream
response = self.client.invoke_model_with_response_stream(**request_options)
File "/Users/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/botocore/client.py", line 553, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/Users/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/botocore/client.py", line 1009, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.errorfactory.ValidationException: An error occurred (ValidationException) when calling the InvokeModelWithResponseStream operation: Malformed input request: string [
Observation] does not match pattern ^(\|+|User:)$, please reformat your input and try again.

Description

The function create_react_agent in langchain/agents/react/agent.py binds the stop sequence ["\nObservation"] to the runnable, making it incompatible with Bedrock's validation regex ^(\|+|User:)$.

When line 103 is changed from

llm_with_stop = llm.bind(stop=["\nObservation"])

to

llm_with_stop = llm.bind(stop=["User:"])

the call to invoke the model succeeds as part of the agent executor chain, because "User:" is one of the stop sequences AWS allows. I think these limitations on the allowed stop sequences are a bit nonsensical, and this may be a bug with AWS itself. However, hard-coding the stop sequence into the ReAct agent constructor prevents this from being fixed without modifying the LangChain code.
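
Until the stop sequence is configurable, one workaround is to skip create_react_agent and assemble the same runnable by hand with a Bedrock-permitted stop sequence. A minimal sketch mirroring the constructor's body (llm, tools, and prompt as in the usual ReAct setup; untested against Bedrock):

from langchain.agents.format_scratchpad import format_log_to_str
from langchain.agents.output_parsers import ReActSingleInputOutputParser
from langchain.tools.render import render_text_description
from langchain_core.runnables import RunnablePassthrough

# Same pipeline create_react_agent builds, but with a stop sequence
# that satisfies Titan's validation regex.
prompt = prompt.partial(
    tools=render_text_description(list(tools)),
    tool_names=", ".join(t.name for t in tools),
)
llm_with_stop = llm.bind(stop=["User:"])  # "User:" is on Titan's allow list
agent = (
    RunnablePassthrough.assign(
        agent_scratchpad=lambda x: format_log_to_str(x["intermediate_steps"]),
    )
    | prompt
    | llm_with_stop
    | ReActSingleInputOutputParser()
)

The resulting agent can be handed to AgentExecutor exactly like the one create_react_agent returns.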

System Info

System Information

OS: Darwin
OS Version: Darwin Kernel Version 23.2.0: Wed Nov 15 21:53:18 PST 2023; root:xnu-10002.61.3~2/RELEASE_ARM64_T6000
Python Version: 3.10.13 (main, Jan 24 2024, 14:54:55) [Clang 15.0.0 (clang-1500.1.0.2.5)]

Package Information

langchain_core: 0.1.17
langchain: 0.1.4
langchain_community: 0.0.16
langchain_openai: 0.0.5
langchainhub: 0.1.14
langserve: 0.0.41

@dosubot dosubot bot added Ɑ: agent Related to agents module 🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature 🔌: aws Primarily related to Amazon Web Services (AWS) integrations labels Jan 31, 2024

dosubot bot commented Jan 31, 2024

🤖

It's great to see that you've identified a potential issue and even provided a suggested fix for the problem. Your understanding of the code and the specific changes needed is very helpful. If you're interested, you could consider creating a pull request with your proposed solution. This would allow the maintainers to review and potentially merge your changes into the repository. Thank you for your contribution!



@sudranga
Contributor

I don't have access to Bedrock, but I think if you modify line 499 of langchain-community/langchain_community/llms/bedrock.py in the function _prepare_input_and_invoke_stream, it should work.

if stop:
    if provider not in self.provider_stop_sequence_key_name_map:
        raise ValueError(
            f"Stop sequence key name for {provider} is not supported."
        )

    if provider == "amazon":
        _model_kwargs["textGenerationConfig"]["stopSequences"] = stop
    else:
        # stop sequence from _generate() overrides
        # stop sequences in the class attribute
        _model_kwargs[self.provider_stop_sequence_key_name_map.get(provider)] = stop
Can you try and let me know?

@manwithaplandy
Author

@sudranga just to confirm, here's what I think you were trying to suggest:

if stop:
        if provider not in self.provider_stop_sequence_key_name_map:
            raise ValueError(
                f"Stop sequence key name for {provider} is not supported."
            )

        if provider == "amazon": 
            _model_kwargs["textGenerationConfig"]["stopSequences"] = stop
        else:
            # stop sequence from _generate() overrides
            # stop sequences in the class attribute
            _model_kwargs[self.provider_stop_sequence_key_name_map.get(provider)] = stop

Here is the error when I tried that:

_model_kwargs["textGenerationConfig"]["stopSequences"] = stop
KeyError: 'textGenerationConfig'

@sudranga
Contributor

Sorry, it should be _model_kwargs["textGenerationConfig"] = {"stopSequences": stop}
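
Putting the two snippets together, the proposed patch to _prepare_input_and_invoke_stream would read roughly as follows (a sketch against langchain_community 0.0.x; exact line numbers may have drifted):

if stop:
    if provider not in self.provider_stop_sequence_key_name_map:
        raise ValueError(
            f"Stop sequence key name for {provider} is not supported."
        )

    if provider == "amazon":
        # Titan expects stop sequences nested under textGenerationConfig
        _model_kwargs["textGenerationConfig"] = {"stopSequences": stop}
    else:
        # stop sequence from _generate() overrides
        # stop sequences in the class attribute
        _model_kwargs[self.provider_stop_sequence_key_name_map.get(provider)] = stop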

@sudranga
Contributor

sudranga commented Feb 8, 2024

@manwithaplandy Were you able to give it a try?

@manwithaplandy
Author

This has been confirmed as a bug on the AWS side per issue boto/boto3#3993. AWS documentation has been clarified and hopefully a bug fix will be implemented to allow more stop sequences.

@sudranga
Contributor

sudranga commented Feb 8, 2024

OK, I believe you still need the LangChain changes I proposed for it to work. The exact format of stopSequences seems to be in question.

@manwithaplandy
Author

If AWS allows arbitrary strings as stop sequences, then the LangChain code shouldn't need to be changed at all. The code was passing the stop sequence to AWS correctly, but it was being rejected by AWS due to their validation bug.

@aqiao

aqiao commented Mar 5, 2024

Hi all,
after adding 'textGenerationConfig': {"stopSequences": "Observation"} to model_kwargs, the original error was fixed. However, it raised another exception, shown below:

Traceback (most recent call last):
  File "/Users/aqiao/Learning/bedrock/langchain-agent/demo2.py", line 58, in <module>
    result = agent_executor.invoke({"input": "User: call say_hi function and return the result\nBot:"})
  File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 163, in invoke
    raise e
  File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 153, in invoke
    self._call(inputs, run_manager=run_manager)
  File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 1391, in _call
    next_step_output = self._take_next_step(
  File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 1097, in _take_next_step
    [
  File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 1097, in <listcomp>
    [
  File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 1125, in _iter_next_step
    output = self.agent.plan(
  File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 387, in plan
    for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
  File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2446, in stream
    yield from self.transform(iter([input]), config, **kwargs)
  File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2433, in transform
    yield from self._transform_stream_with_config(
  File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1513, in _transform_stream_with_config
    chunk: Output = context.run(next, iterator)  # type: ignore
  File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2397, in _transform
    for output in final_pipeline:
  File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1051, in transform
    for chunk in input:
  File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 4173, in transform
    yield from self.bound.transform(
  File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1061, in transform
    yield from self.stream(final, config, **kwargs)
  File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 452, in stream
    raise e
  File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 436, in stream
    for chunk in self._stream(
  File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain_community/llms/bedrock.py", line 546, in _prepare_input_and_invoke_stream
    raise ValueError(f"Error raised by bedrock service: {e}")
ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModelWithResponseStream operation: Malformed input request: 2 schema violations found, please reformat your input and try again.

Here is my prompt, following the LangChain official docs:

react_prompt_template="""
Answer the following questions as best you can. You have access to the following tools:

{tools}

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Begin!

Question: {input}
Thought:{agent_scratchpad}
"""

I passed the input value as below:

result = agent_executor.invoke({"input": "User: call say_hi function and return the result\nBot:"})

Here is my complete code:

from langchain.agents import AgentExecutor, create_react_agent
from langchain.tools import tool
from langchain.llms.bedrock import Bedrock
import boto3
from langchain_core.prompts import PromptTemplate
from langchain import hub

react_prompt_template="""
Answer the following questions as best you can. You have access to the following tools:

{tools}

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Begin!

Question: {input}
Thought:{agent_scratchpad}
"""
# prompt = hub.pull("hwchase17/react")
prompt = PromptTemplate(
    # the template also references tools, tool_names, and agent_scratchpad
    input_variables=["input", "tools", "tool_names", "agent_scratchpad"],
    template=react_prompt_template
)

@tool
def say_hi(name: str) -> str:
    """Say hi to the world"""
    return f"hi {name}"


def specify_bedrock_titan_llm():
    bedrock_client = boto3.client(
        service_name="bedrock-runtime",
        region_name="us-east-1",
    )
    # https://github.com/langchain-ai/langchain/issues/16840
    bedrock_llm = Bedrock(
        model_id="amazon.titan-text-express-v1",
        client=bedrock_client,
        model_kwargs={'temperature': 0, 'textGenerationConfig': {"stopSequences": "Observation"}}
    )
    return bedrock_llm


if __name__ == '__main__':
    llm = specify_bedrock_titan_llm()
    agent = create_react_agent(llm, [say_hi], prompt)
    agent_executor = AgentExecutor(agent=agent, tools=[say_hi], verbose=True, handle_parsing_errors=True)
    result = agent_executor.invoke({"input": "User: call say_hi function and return the result\nBot:"})
    print(result)

Any suggestions on this?
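
One possible explanation for the two schema violations, assuming the community wrapper still nests the amazon provider's model_kwargs under textGenerationConfig when it builds the request body: passing textGenerationConfig inside model_kwargs nests it twice, and Titan's schema expects stopSequences to be a JSON array rather than a bare string. Under those assumptions, the kwargs would be flattened like this (unverified sketch):

# Unverified sketch: the wrapper is assumed to wrap these kwargs in
# textGenerationConfig itself, so pass Titan parameters at the top level
# and make stopSequences a list.
bedrock_llm = Bedrock(
    model_id="amazon.titan-text-express-v1",
    client=bedrock_client,
    model_kwargs={"temperature": 0, "stopSequences": ["Observation"]},
)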
