
How to evaluate LLM response that is structured as a pydantic object? #25

Open

CKAbundant opened this issue Mar 27, 2024 · 0 comments
I've structured my query_engine to output a pydantic object:

from typing import List

from pydantic import BaseModel, Field


class TechOffer(BaseModel):
    """Title of a technical offer that is relevant to the query, including reasons why the technical offer is relevant to the query."""

    title: str = Field(
        description="Title of the technical offer that is relevant to the query."
    )
    reason: str = Field(
        description="Detailed step-by-step reasons why the technical offer is relevant to the query."
    )
    unique_value_proposition: str = Field(
        description="Unique benefits or advantages offered by the technical offer with reference to the query."
    )


class TechOfferTitles(BaseModel):
    """List of titles of technical offers that are relevant to the query."""

    title_list: List[TechOffer]

When I run eval_result = evaluator.evaluate_response(response=response), I get this error message:

AttributeError: 'TechOfferTitles' object has no attribute 'query_str'

How can I perform faithfulness evaluation when my LLM output is a pydantic object? Thanks!
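One possible workaround (a sketch, not a confirmed fix): serialize the pydantic response to a plain string and pass it to the evaluator's lower-level evaluate() method, which in llama_index accepts query, response, and contexts as strings rather than a Response object. The evaluator call at the bottom is commented out and assumes that signature; the helper name to_response_str and all sample field values are hypothetical.

```python
from typing import List

from pydantic import BaseModel, Field


class TechOffer(BaseModel):
    title: str = Field(description="Title of the technical offer.")
    reason: str = Field(description="Why the offer is relevant to the query.")
    unique_value_proposition: str = Field(description="Unique benefits of the offer.")


class TechOfferTitles(BaseModel):
    title_list: List[TechOffer]


def to_response_str(obj: BaseModel) -> str:
    """Serialize a pydantic object to a JSON string (pydantic v1 or v2)."""
    if hasattr(obj, "model_dump_json"):  # pydantic v2
        return obj.model_dump_json(indent=2)
    return obj.json(indent=2)  # pydantic v1


# Hypothetical structured output, standing in for the query engine's response:
offers = TechOfferTitles(
    title_list=[
        TechOffer(
            title="Example offer",
            reason="Matches the query topic.",
            unique_value_proposition="Lower cost.",
        )
    ]
)
response_str = to_response_str(offers)
print(response_str)

# Hypothetical evaluator call (llama_index), passing strings instead of the
# pydantic object; `query` and `response.source_nodes` come from your own
# query/retrieval step:
# eval_result = evaluator.evaluate(
#     query=query,
#     response=response_str,
#     contexts=[node.get_content() for node in response.source_nodes],
# )
```

This sidesteps the AttributeError because the evaluator never has to pull query_str off the pydantic object; whether faithfulness scores on serialized JSON are meaningful for your use case is a separate question.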
