DOC: RetryWithErrorOutputParser usage examples #21376

Open
2 tasks done
austinmw opened this issue May 7, 2024 · 3 comments
Labels
  • 🔌: anthropic — Primarily related to Anthropic integrations
  • 🤖:docs — Changes to documentation and examples, like .md, .rst, .ipynb files. Changes to the docs/ folder
  • Ɑ: parsing — Related to output parser module

Comments


austinmw commented May 7, 2024

Checklist

  • I added a very descriptive title to this issue.
  • I included a link to the documentation page I am referring to (if applicable).

Issue with current documentation:

I don't see any usage examples for RetryWithErrorOutputParser. I'd like to update the following code, which uses chain.with_retry, to retry with errors instead, but it's not clear from the documentation how to do that:

# Imports assumed for this snippet:
from langchain_aws import ChatBedrock
from langchain_core.output_parsers import PydanticOutputParser
from langchain_core.prompts import PromptTemplate

# GeneratePlanStructAPI is a Pydantic model (defined elsewhere) describing the API arguments
sonnet = ChatBedrock(model_id="anthropic.claude-3-sonnet-20240229-v1:0")

parser = PydanticOutputParser(pydantic_object=GeneratePlanStructAPI)

generate_args_for_gen_plan_struct_prompt_template = """
You are an expert at reading natural language feedback and generating the most relevant API arguments.

Look at the previous arguments and feedback below and generate the most appropriate arguments for the API endpoint.
Only modify arguments that are relevant to the feedback; leave the rest as they are.

Previous arguments:
{previous_generate_args}

User feedback: {feedback}

Important: Only output valid parsable JSON without any descriptions or comments. Follow the formatting instructions below:
{format_instructions}
"""

prompt = PromptTemplate(
    template=generate_args_for_gen_plan_struct_prompt_template,
    input_variables=["previous_generate_args", "feedback"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)

chain = prompt | sonnet | parser

retryable_gen_plan_chain = chain.with_retry(
    retry_if_exception_type=(ValueError,), # Retry only on ValueError
    wait_exponential_jitter=False,
    stop_after_attempt=5,
)

feedback = "I want to workout more days per week and do more cardio"

previous_generate_args = {
    "workout_prefs": ('strength', 'strength', 'intro'),
    "num_days": 3,
}

create_args = retryable_gen_plan_chain.invoke({"previous_generate_args": previous_generate_args, "feedback": feedback})

Idea or request for content:

No response

dosubot bot added the Ɑ: parsing, 🔌: anthropic, and 🤖:docs labels on May 7, 2024

austinmw commented May 9, 2024

Any help @dosubot? 😅


dosubot bot commented May 9, 2024

To use RetryWithErrorOutputParser effectively in your scenario, you'll need to integrate it with your existing code structure. This involves setting up the parser to work with your PromptTemplate and handling retries based on parsing errors. Here's how you can adjust your code:

from langchain.output_parsers import RetryWithErrorOutputParser, PydanticOutputParser
from langchain.prompts import PromptTemplate
from langchain_openai import OpenAI

# Assuming GeneratePlanStructAPI is your Pydantic model
parser = PydanticOutputParser(pydantic_object=GeneratePlanStructAPI)

# Your existing prompt setup remains unchanged
generate_args_for_gen_plan_struct_prompt_template = """
...Your existing template...
"""

prompt = PromptTemplate(
    template=generate_args_for_gen_plan_struct_prompt_template,
    input_variables=["previous_generate_args", "feedback"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)

# Initialize RetryWithErrorOutputParser with your parser and an LLM
# (an OpenAI instance is shown here; you can pass your ChatBedrock model instead)
retry_parser = RetryWithErrorOutputParser.from_llm(llm=OpenAI(temperature=0), parser=parser)

# Example usage with a malformed response and a formatted prompt
# (previous_generate_args and feedback come from your original snippet)
bad_response = '{"action": "search"}'  # placeholder output that fails Pydantic validation
prompt_value = prompt.format_prompt(previous_generate_args=previous_generate_args, feedback=feedback)
parsed_output = retry_parser.parse_with_prompt(bad_response, prompt_value)
print(parsed_output)

This setup demonstrates how to incorporate RetryWithErrorOutputParser to handle retries based on parsing errors. The parser is designed to retry the parsing process by incorporating error messages into the retry prompt, potentially leading to better outcomes.

For more detailed examples and documentation, consider checking the official documentation of RetryWithErrorOutputParser. Remember, the effectiveness of this approach relies on the accurate identification and reporting of errors by the initial parser. Each retry involves an additional call to the language model, which may increase operation latency, especially if multiple retries are necessary.
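For completeness, here is a minimal sketch of wiring the retry parser into a runnable chain, following the RunnableParallel pattern from the output-parser retry docs. It assumes `prompt`, `sonnet`, `parser`, `previous_generate_args`, and `feedback` are the names from the snippets above, and using the same ChatBedrock model for both generation and retries is an assumption, not a requirement:

from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableLambda, RunnableParallel

# The completion chain returns the raw model text; parsing (and retrying) happens in the retry parser.
completion_chain = prompt | sonnet | StrOutputParser()

retry_parser = RetryWithErrorOutputParser.from_llm(llm=sonnet, parser=parser, max_retries=3)

# Run the completion and the prompt in parallel, then parse with the prompt available for retries.
main_chain = RunnableParallel(
    completion=completion_chain, prompt_value=prompt
) | RunnableLambda(lambda x: retry_parser.parse_with_prompt(x["completion"], x["prompt_value"]))

create_args = main_chain.invoke(
    {"previous_generate_args": previous_generate_args, "feedback": feedback}
)

Each retry re-sends the original prompt, the failed completion, and the parsing error back to the LLM, so latency and cost grow with max_retries.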


keenborder786 (Contributor) commented:
Why don't you create a custom corrector chain, and then use a try/except block to invoke the corrector chain when the original chain fails? Roughly (see the sketch below):

try:
    ...  # your original chain invoke
except Exception as e:
    ...  # your corrector chain invoke, passing along the exception's error message
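If you go that route, a minimal sketch of what such a corrector chain could look like is below. The prompt wording and the single-retry fallback are illustrative assumptions; `parser`, `sonnet`, `chain`, `previous_generate_args`, and `feedback` are from the original snippet:

from langchain_core.prompts import PromptTemplate

# Hypothetical corrector prompt: feed the parsing error back to the model along with the original inputs.
corrector_prompt = PromptTemplate(
    template=(
        "The previous attempt to generate API arguments failed to parse.\n"
        "Parsing error: {error}\n\n"
        "Previous arguments:\n{previous_generate_args}\n\n"
        "User feedback: {feedback}\n\n"
        "Regenerate the arguments as valid, parsable JSON only.\n"
        "{format_instructions}"
    ),
    input_variables=["error", "previous_generate_args", "feedback"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)

corrector_chain = corrector_prompt | sonnet | parser

inputs = {"previous_generate_args": previous_generate_args, "feedback": feedback}
try:
    create_args = chain.invoke(inputs)
except Exception as e:
    # Retry once through the corrector chain, passing the error text along with the original inputs.
    create_args = corrector_chain.invoke({"error": str(e), **inputs})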
