Issue: Amazon Bedrock Cohere Command - Malformed input request: 2 schema violations found, please reformat your input and try again. #12620
Comments
🤖 Hello @nishanth-k-10, I'm here to assist you with your issue. I'm a bot designed to help solve bugs, answer questions, and guide you in becoming a contributor. While we wait for a human maintainer, let's see if we can resolve the problem you're facing.

The error message you're receiving indicates that the input request to the AWS Bedrock service is not properly formatted. This could be due to a variety of reasons, such as incorrect or missing parameters in the request. Looking at your code, it seems like the issue might be related to the way the input is prepared for the AWS Bedrock service. Here are a few things you can check:
In addition, based on similar issues in the LangChain repository, you might want to check the following:
I hope this helps! If you have any more questions or need further clarification, feel free to ask.
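For example, one thing to try along these lines is passing Cohere Command's own request fields explicitly through model_kwargs. This is only a sketch, not a confirmed fix; the parameter names used here (max_tokens, p, k) are assumptions based on the Cohere text-generation schema on Bedrock and should be checked against the model documentation for cohere.command-text-v14:

import boto3
from langchain.llms.bedrock import Bedrock

bedrock_client = boto3.client(service_name="bedrock-runtime", region_name="us-east-1")

# Assumed Cohere Command parameters; verify the names against the Bedrock
# docs for cohere.command-text-v14 before relying on them.
llm = Bedrock(
    model_id="cohere.command-text-v14",
    client=bedrock_client,
    model_kwargs={
        "temperature": 0,
        "max_tokens": 512,  # Cohere uses max_tokens, not max_tokens_to_sample
        "p": 0.9,
        "k": 0,
    },
)
print(llm("Briefly describe what Amazon Bedrock is."))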
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
Still didn't solve the issue though. But I want to do it using LangChain's Bedrock integration.
I found the problem: the langchain library was outdated.
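A quick way to check the installed version (the thread does not record which release fixed it, so upgrading langchain and boto3 to the latest versions is the usual advice):

import langchain
import boto3

# Print installed versions; upgrade with `pip install -U langchain boto3` if they are old.
print("langchain:", langchain.__version__)
print("boto3:", boto3.__version__)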
@nishanth-k-10 which version of langchain solved this issue? I am also facing the same issue while making an API call to Amazon Bedrock (Claude v2 model).
I am also getting this error when I use llama v1.
ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModel operation: Malformed input request: 2 schema violations found, please reformat your input and try again.
Issue you'd like to raise.
I have been trying to work with the AWS Bedrock Cohere Command LLM through LangChain, and I'm referring to https://github.com/aws-samples/rag-using-langchain-amazon-bedrock-and-opensearch/blob/main/ask-bedrock-with-rag.py as the source.
Below are a few snippets of the code I'm working with:
import boto3
from langchain.llms.bedrock import Bedrock
from langchain.prompts import PromptTemplate
from langchain.embeddings import SentenceTransformerEmbeddings
from langchain.vectorstores import Chroma
from langchain.chains import RetrievalQA

def create_bedrock_llm(bedrock_client):
    # Wrap the Bedrock runtime client in LangChain's Bedrock LLM for Cohere Command.
    bedrock_llm = Bedrock(
        model_id="cohere.command-text-v14",
        client=bedrock_client,
        verbose=True,
        model_kwargs={"temperature": 0},
    )
    return bedrock_llm

bedrock_client = boto3.client(service_name="bedrock-runtime", region_name="us-east-1")
llm = create_bedrock_llm(bedrock_client)

template = """You are an expert financial reporter who analyzes the performance of the company. The relevant information for answering the question is given below. Try to give a detailed answer using the context available. If you don't know the answer, just say that you don't know, don't try to make up an answer.
{context}
Question: {question}
Answer:"""
prompt = PromptTemplate(template=template,
                        input_variables=["context", "question"])

# Build the retriever over the persisted Chroma store (DB_CHROMA_PATH is defined elsewhere).
embedding = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")
db = Chroma(persist_directory=DB_CHROMA_PATH, embedding_function=embedding)
retriever = db.as_retriever(search_kwargs={"k": 3})

qa = RetrievalQA.from_chain_type(llm=llm,
                                 chain_type="stuff",
                                 retriever=retriever,
                                 return_source_documents=True,
                                 chain_type_kwargs={"prompt": prompt, "verbose": True},
                                 verbose=True)

while True:
    query = input("\nEnter the query\n")
    if query.lower() == "exit":
        break
    res = qa(query)
    print(res)
Error:
ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModel operation: Malformed input request: 2 schema violations found, please reformat your input and try again.
Suggestion:
I have tried making the API request directly with the invoke_model() function using the same prompt schema, which worked perfectly fine, and a response was received.
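For reference, a minimal sketch of that direct invoke_model() call, assuming the standard Cohere Command request body ("prompt", "temperature", "max_tokens") and response shape; adjust the field names against the Bedrock documentation if they differ:

import json
import boto3

bedrock_runtime = boto3.client(service_name="bedrock-runtime", region_name="us-east-1")

# Direct Bedrock call using the Cohere Command request schema.
body = json.dumps({
    "prompt": "You are an expert financial reporter. Summarize the company's performance.",
    "temperature": 0,
    "max_tokens": 400,
})
response = bedrock_runtime.invoke_model(
    modelId="cohere.command-text-v14",
    contentType="application/json",
    accept="application/json",
    body=body,
)
result = json.loads(response["body"].read())
print(result["generations"][0]["text"])  # assumed response shape for Cohere on Bedrock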