Invalid Output Parser Format for "Router Chain" #5163
Comments
I have the same problem.

Although

Does anyone have any idea what's going wrong?

+1

I can confirm that this is still an issue. However, if you use

gpt-3.5-turbo is optimized for chat rather than schema generation, so it is less reliable. Recommend trying the OpenAI functions agent or using a different model.
To get through the tutorial, I had to create a new class:

```python
import json
import langchain
from typing import Any, Dict, Type
from langchain.schema import OutputParserException  # import was missing in the original snippet


class RouterOutputParser_simple(langchain.schema.BaseOutputParser[Dict[str, str]]):
    """Parser for the output of the router chain in the multi-prompt chain."""

    default_destination: str = "DEFAULT"
    next_inputs_type: Type = str
    next_inputs_inner_key: str = "input"

    def parse(self, text: str) -> Dict[str, Any]:
        try:
            parsed = json.loads(text)  ### this line is changed
            if not isinstance(parsed["destination"], str):
                raise ValueError("Expected 'destination' to be a string.")
            if not isinstance(parsed["next_inputs"], self.next_inputs_type):
                raise ValueError(
                    f"Expected 'next_inputs' to be {self.next_inputs_type}."
                )
            parsed["next_inputs"] = {self.next_inputs_inner_key: parsed["next_inputs"]}
            if (
                parsed["destination"].strip().lower()
                == self.default_destination.lower()
            ):
                parsed["destination"] = None
            else:
                parsed["destination"] = parsed["destination"].strip()
            return parsed
        except Exception as e:
            raise OutputParserException(
                f"Parsing text\n{text}\n raised following error:\n{e}"
            )
```

Then use this class as the output_parser:

```python
router_template = MULTI_PROMPT_ROUTER_TEMPLATE.format(
    destinations=destinations_str
)
router_prompt = PromptTemplate(
    template=router_template,
    input_variables=["input"],
    output_parser=RouterOutputParser_simple(),  ### replaced RouterOutputParser()
)
router_chain = LLMRouterChain.from_llm(llm, router_prompt)
```
I get an `OutputParserException` not defined error when running: `chain.run("what is 2 + 2")`

I met the same problem.

I resolved the problem by adding an example at the end of the MULTI_PROMPT_ROUTER_TEMPLATE, e.g.: << INPUT >>

@LyndonZhao, Thanks!

@aaalexlit Welcome!
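The example quoted in that comment is truncated above. As a minimal sketch of the workaround, this is roughly what appending a worked example to the router template could look like; the destination name "physics" and the exact wording are illustrative assumptions, not the commenter's actual text. Note the doubled braces, which survive the later `.format(destinations=...)` call as literal braces:

```python
fence = "`" * 3  # builds a literal ``` without nesting markdown fences in this snippet

# Hypothetical example block to append to the router template, so the model
# sees the exact fenced-JSON format it is expected to produce.
example = (
    "\n<< EXAMPLE INPUT >>\n"
    '"What is black body radiation?"\n\n'
    "<< EXAMPLE OUTPUT >>\n"
    f"{fence}json\n"
    "{{\n"
    '    "destination": "physics",\n'
    '    "next_inputs": "What is black body radiation?"\n'
    "}}\n"
    f"{fence}\n"
)

# router_template = MULTI_PROMPT_ROUTER_TEMPLATE + example  # then .format(...) as usual
```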
I am getting the below-mentioned error when I added the above example.

In addition to the suggested solution, I found that the following worked just as well.
…rompt output in JSON markdown code block, resolving JSON parsing error. (#8709)

Resolves occasional JSON parsing error when some predictions are passed through a `MultiPromptChain`. Makes [this modification](#5163 (comment)) to `multi_prompt_prompt.py`, which is much cleaner than appending an entire example object, which is another community-reported solution. @hwchase17, @baskaryan

cc: @SimasJan
This helps for me.
@hueiyuan |
@mpduarte |
This solution worked for me:

Then I used UPDATED_MULTI_ROUTER_TEMPLATE instead.

Combining the above answer with the answer from @lucarp helped in my case. Updating the << OUTPUT >> in the MULTI_PROMPT_ROUTER_TEMPLATE as below worked: << OUTPUT (must include ```json at the start of the response and must end with ```) >>
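As a minimal sketch of that combined fix, the << OUTPUT >> marker in the router template can be rewritten so the prompt demands a fenced reply. The template string below is a stand-in; real code would import `MULTI_PROMPT_ROUTER_TEMPLATE` from `langchain.chains.router.multi_prompt_prompt` instead of defining it:

```python
# Stand-in for the real template (assumption: actual code imports
# MULTI_PROMPT_ROUTER_TEMPLATE from langchain.chains.router.multi_prompt_prompt).
MULTI_PROMPT_ROUTER_TEMPLATE = (
    "Return a markdown code snippet with a JSON object.\n"
    "<< INPUT >>\n{input}\n\n"
    "<< OUTPUT >>\n"
)

fence = "`" * 3  # literal ``` without nesting markdown fences in this snippet

# Replace the bare marker with one that spells out the required ```json wrapping.
UPDATED_MULTI_ROUTER_TEMPLATE = MULTI_PROMPT_ROUTER_TEMPLATE.replace(
    "<< OUTPUT >>",
    f"<< OUTPUT (must include {fence}json at the start of the response "
    f"and must end with {fence}) >>",
)
```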
Hi, @anapple00, I'm helping the LangChain team manage their backlog and am marking this issue as stale.

From what I understand, the issue involves an error in the output parser format for the "Router Chain" in the langchain system. It seems that the issue has been resolved in version langchain 0.0.222, and users are advised to upgrade their langchain version to resolve the problem.

Could you please confirm if this issue is still relevant to the latest version of the LangChain repository? If it is, please let the LangChain team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or the issue will be automatically closed in 7 days. Thank you!
System Info
langchain version: 0.0.170
python: 3.8
Reproduction
Here I came across an issue related to the output of the router chain.
When I ran the "router chain" tutorial on the LangChain website with the input query "What is black body radiation?", the output of the LLM was:
Using the class RouterOutputParser to parse the output, I got the error:
When I debugged step by step, I found the error is raised in this function: parse_json_markdown
You can see there is no "```json" string in the output of the LLM, so it steps into the "if" in the first row of this function and raises the error.
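To illustrate the failure mode described above, here is a loose, illustrative stand-in for a parser that insists on a ```json fenced block (this is not LangChain's actual `parse_json_markdown` implementation): bare JSON from the model raises, while the same payload wrapped in a fence parses fine.

```python
import json
import re


def parse_json_markdown_sketch(text: str) -> dict:
    # Illustrative stand-in: accept only JSON wrapped in a ```json fence.
    match = re.search(r"```json\s*(.*?)```", text, re.DOTALL)
    if match is None:
        raise ValueError("Got invalid JSON object: no ```json block found")
    return json.loads(match.group(1))


bare = '{"destination": "physics", "next_inputs": "What is black body radiation?"}'

# A bare JSON reply (what gpt-3.5-turbo often emits) fails to parse:
try:
    parse_json_markdown_sketch(bare)
    bare_parse_failed = False
except ValueError:
    bare_parse_failed = True

# The identical payload wrapped in a ```json fence parses fine:
fenced = "```json\n" + bare + "\n```"
parsed = parse_json_markdown_sketch(fenced)
```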
Expected behavior
Can anyone give me some solutions? Thanks.