
Issue: Amazon Bedrock Cohere Command - Malformed input request: 2 schema violations found, please reformat your input and try again. #12620

Closed
nishanth-k-10 opened this issue Oct 31, 2023 · 7 comments
Labels
🔌: aws Primarily related to Amazon Web Services (AWS) integrations 🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature Ɑ: models Related to LLMs or chat model modules

Comments

nishanth-k-10 commented Oct 31, 2023

Issue you'd like to raise.

I have been trying to use the AWS Bedrock Cohere Command LLM with LangChain, using https://github.com/aws-samples/rag-using-langchain-amazon-bedrock-and-opensearch/blob/main/ask-bedrock-with-rag.py as a reference.
Below are a few snippets of the code I'm working with:

def create_bedrock_llm(bedrock_client):
    bedrock_llm = Bedrock(
        model_id="cohere.command-text-v14",
        client=bedrock_client,
        verbose=True,
        model_kwargs={'temperature': 0}
    )
    return bedrock_llm

bedrock_client = boto3.client(service_name="bedrock-runtime", region_name="us-east-1")
llm = create_bedrock_llm(bedrock_client)
template = """You are an expert financial reporter who analyzes the performance of a company. The relevant information for answering the question is given below. Try to give a detailed answer using the context available. If you don't know the answer, just say that you don't know; don't try to make up an answer.

{context}

Question: {question}

Answer:"""
prompt = PromptTemplate(template=template,
                        input_variables=['context', 'question'])

embedding = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")
db = Chroma(persist_directory=DB_CHROMA_PATH, embedding_function=embedding)
retriever = db.as_retriever(search_kwargs={'k': 3})

qa = RetrievalQA.from_chain_type(llm=llm,
                                 chain_type="stuff",
                                 retriever=retriever,
                                 return_source_documents=True,
                                 chain_type_kwargs={"prompt": prompt, "verbose": True},
                                 verbose=True)

while True:
    query = input("\nEnter the query\n")
    if query.lower() == 'exit':
        break
    res = qa(query)
    print(res)

Error:
ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModel operation: Malformed input request: 2 schema violations found, please reformat your input and try again.

Suggestion:

I tried a direct API request using the invoke_model() function with the same prompt schema, which worked perfectly fine; a response was received.
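For comparison, here is a minimal sketch (not the library's code; the parameter names follow the Cohere-on-Bedrock request schema used in this thread) of how the raw request body for invoke_model can be built:

```python
import json

def build_cohere_body(prompt, temperature=0.0, max_tokens=100):
    # Cohere's Bedrock schema puts the text under "prompt" and the
    # sampling parameters at the top level of the JSON body.
    return json.dumps({
        "prompt": prompt,
        "temperature": temperature,
        "max_tokens": max_tokens,
    })

# The serialized body is what gets passed to the runtime client, e.g.:
# bedrock_client.invoke_model(modelId="cohere.command-text-v14",
#                             body=build_cohere_body("Summarize the results."),
#                             contentType="application/json",
#                             accept="application/json")
```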

@dosubot dosubot bot added Ɑ: models Related to LLMs or chat model modules 🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature 🔌: aws Primarily related to Amazon Web Services (AWS) integrations labels Oct 31, 2023
dosubot bot commented Oct 31, 2023

🤖

Hello @nishanth-k-10,

I'm here to assist you with your issue. I'm a bot designed to help solve bugs, answer questions, and guide you in becoming a contributor. While we wait for a human maintainer, let's see if we can resolve the problem you're facing.

The error message you're receiving indicates that the input request to the AWS Bedrock service is not properly formatted. This could be due to a variety of reasons, such as incorrect or missing parameters in the request.

Looking at your code, it seems like the issue might be related to the way the input is prepared for the AWS Bedrock service. Here are a few things you can check:

  1. Format of your input: Ensure that the prompt and model_kwargs you're passing to the Bedrock class are in the correct format. The prompt should be a string and model_kwargs should be a dictionary.

  2. Model_id: Make sure that the model_id you're using is valid and corresponds to a model that exists in the AWS Bedrock service. In your code, the model_id is "cohere.command-text-v14", make sure this model exists in your AWS Bedrock service.

  3. AWS credentials: Ensure that your AWS credentials are correctly set up and have the necessary permissions to access the AWS Bedrock service. In your code, you're using the boto3.client to create the bedrock_client, make sure this client is correctly set up and has the necessary permissions.

  4. Provider: The prepare_input method in the LLMInputOutputAdapter class prepares the input differently based on the provider. Make sure that the provider is correctly identified and supported.

  5. Endpoint_url: If you're providing an endpoint_url when initializing the Bedrock class, make sure that this URL is correct and accessible.
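To illustrate point 4, here is a hypothetical, heavily simplified version of what a provider-aware prepare_input step does (this is not the actual LLMInputOutputAdapter code, just a sketch of the idea):

```python
def prepare_input(provider, prompt, model_kwargs):
    # Each Bedrock provider expects the prompt under a different key;
    # sending the wrong shape triggers a "schema violations" error.
    body = dict(model_kwargs)
    if provider == "anthropic":
        body["prompt"] = f"\n\nHuman: {prompt}\n\nAssistant:"
    elif provider == "amazon":
        body["inputText"] = prompt
    else:  # cohere and other prompt-keyed providers
        body["prompt"] = prompt
    return body
```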

In addition, based on similar issues in the LangChain repository, you might want to check the following:

  • Ensure that the variables used in the prompt_template string match exactly with the input_variables defined in the PromptTemplate.

  • The prompt format expected by the Bedrock model: some providers (Anthropic, for example) require the prompt to start with "Human:" and end with "Assistant:". The user needs to provide a prompt in the correct format.

  • The combined length of the input documents does not exceed the 20k character limit imposed by Bedrock across all models.
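For the last point, a minimal sketch of a pre-flight length check before stuffing retrieved documents into the prompt (the 20k figure is taken from the comment above; verify the actual limit for your model):

```python
def stuff_within_limit(doc_texts, max_chars=20_000):
    # Keep whole documents, in order, until adding the next one
    # would push the combined context past max_chars.
    kept, total = [], 0
    for text in doc_texts:
        if total + len(text) > max_chars:
            break
        kept.append(text)
        total += len(text)
    return kept
```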

I hope this helps! If you have any more questions or need further clarification, feel free to ask.

Best,
Dosu

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.


@nishanth-k-10
Author

Still didn't solve the issue though.
Below is the API request where I passed the same prompt and it worked.
{
    "modelId": "cohere.command-text-v14",
    "contentType": "application/json",
    "accept": "*/*",
    "body": {
        "prompt": prompt,
        "max_tokens": 100,
        "temperature": 0.8,
        "return_likelihood": "GENERATION"
    }
}

But I want to do it using LangChain's Bedrock integration.

@nishanth-k-10 nishanth-k-10 changed the title Issue: Malformed input request: 2 schema violations found, please reformat your input and try again. Issue: Amazon Bedrock Cohere Command - Malformed input request: 2 schema violations found, please reformat your input and try again. Oct 31, 2023
@nishanth-k-10
Author

I found the problem: the langchain library I had installed was outdated.
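One way to confirm what is installed is a small runtime version gate (the minimum version below is a placeholder, not the exact release containing the fix; check the langchain changelog):

```python
import re
from importlib import metadata

def version_tuple(v):
    # "0.0.330" -> (0, 0, 330); pre-release/local suffixes are ignored.
    return tuple(int(n) for n in re.findall(r"\d+", v)[:3])

def is_at_least(installed, minimum):
    return version_tuple(installed) >= version_tuple(minimum)

# Example (package name is real, minimum version is illustrative):
# if not is_at_least(metadata.version("langchain"), "0.0.330"):
#     print("langchain is outdated; run: pip install --upgrade langchain")
```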

@kishoreiitd

@nishanth-k-10 which version of langchain solved this issue? I am also facing the same issue while making an API call to Amazon Bedrock (Claude v2 model).

@Abubakarjutt

I am also getting this error when I use llama v1.

Dinuda commented Jan 2, 2024

Switching to anthropic.claude-v2 resolved my issue; I had been using amazon.titan-text-express-v1.

@mdgulshan

ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModel operation: Malformed input request: 2 schema violations found, please reformat your input and try again.
