
fix: fix parser needs the actual completion #413

Conversation

@johnnagro (Contributor) commented on Dec 7, 2023

Description

The README examples of the OutputFixingParser don't work when the initial response is invalid JSON and needs fixing:

fix_parser = Langchain::OutputParsers::OutputFixingParser.from_llm(
  llm: llm,
  parser: parser
)
fix_parser.parse(llm_response)

results in:

ruby-3.2.2/gems/langchainrb-0.7.5/lib/langchain/output_parsers/structured_output_parser.rb:78:in `rescue in parse': Failed to parse. Text: "#<Langchain::LLM::OpenAIResponse:0x000000010a2c1630>". Error: undefined method `include?' for #<Langchain::LLM::OpenAIResponse:0x000000010a2c1630...

whereas the previous example includes a .completion:

llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])
llm_response = llm.chat(prompt: prompt_text).completion
parser.parse(llm_response)

which is the reference for this PR. I'm not sure how to fix the tests yet - I could use some pointers or help.
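
For reference, the shape of the change is roughly the following (illustrative only - build_fix_prompt is a stand-in name, not the actual diff). When the wrapped parser raises on the first attempt, the retry has to hand it the completion text rather than the raw LLM response object:

## Illustrative sketch of OutputFixingParser#parse after the fix (not the actual diff);
## build_fix_prompt stands in for however the retry prompt is assembled
def parse(completion)
  parser.parse(completion)
rescue Langchain::OutputParsers::OutputParserException => e
  new_completion = llm.chat(prompt: build_fix_prompt(completion, e)).completion # <- the fix: .completion
  parser.parse(new_completion)
end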

Extended Example

## taken from the README
json_schema = {
  type: "object",
  properties: {
    name: {
      type: "string",
      description: "Persons name"
    },
    age: {
      type: "number",
      description: "Persons age"
    },
    interests: {
      type: "array",
      items: {
        type: "object",
        properties: {
          interest: {
            type: "string",
            description: "A topic of interest"
          },
          levelOfInterest: {
            type: "number",
            description: "A value between 0 and 100 of how interested the person is in this interest"
          }
        },
        required: ["interest", "levelOfInterest"],
        additionalProperties: false
      },
      minItems: 1,
      maxItems: 3,
      description: "A list of the person's interests"
    }
  },
  required: ["name", "age", "interests"],
  additionalProperties: false
}
llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])
parser = Langchain::OutputParsers::StructuredOutputParser.from_json_schema(json_schema)
fix_parser = Langchain::OutputParsers::OutputFixingParser.from_llm(
  llm: llm,
  parser: parser
)

## This response is poorly formed: the first entry has a "color" key instead of "interest",
## and its trailing comma makes the JSON invalid
bad_response = <<EOF
{
  "name": "Ji-hyun Kim",
  "age": 21,
  "interests": [
    {
      "color": "Red",
    },
    {
      "interest": "Physical Chemistry",
      "levelOfInterest": 70
    },
    {
      "interest": "Analytical Chemistry",
      "levelOfInterest": 60
    }
  ]
}
EOF

## Rather than prompting the LLM to fix the JSON, this blows up with the error above
fix_parser.parse(bad_response)
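
With this PR's change applied, that last call should instead re-prompt the LLM with the failed completion and the parse error, and return a Hash that conforms to the schema. The exact repaired values depend on the model, but roughly:

fix_parser.parse(bad_response)
## => {"name" => "Ji-hyun Kim", "age" => 21,
##     "interests" => [{"interest" => "...", "levelOfInterest" => ...}, ...]}
## (illustrative output - the repaired first entry is whatever the model fills in)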

@andreibondarev merged commit 7b2a745 into patterns-ai-core:main on Dec 7, 2023
2 of 5 checks passed
@andreibondarev (Collaborator) commented:

@johnnagro I couldn't push directly to your branch so I pushed the spec fix to main directly. Thank you for the fix!

@johnnagro (Contributor, Author) commented:

And for posterity, it looks like these are the test fixes: dd0c02b
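
For anyone chasing the same thing in their own specs, the gist of that kind of fix is that the stubbed chat reply has to be a response-like object exposing #completion rather than a bare String. A rough sketch, not the actual contents of dd0c02b:

## Rough sketch only (not the dd0c02b diff); fixed_json is a hypothetical let/helper
allow(llm).to receive(:chat).and_return(
  double("Langchain::LLM::OpenAIResponse", completion: fixed_json)
)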
