bug: Enhance LLM response evaluation in generate_value to handle invalid syntax #938

@milk333445

Description

Did you check docs and existing issues?

  • I have read all the NeMo-Guardrails docs
  • I have updated the package to the latest version before submitting this issue
  • (optional) I have used the develop branch
  • I have searched the existing issues of NeMo-Guardrails

Python version (python --version)

Python 3.10.16

Operating system/version

Windows 11

NeMo-Guardrails version (if you must use a specific version and not the latest)

0.11.0 (latest pip install)

Code Location

  • File: NeMo-Guardrails/nemoguardrails/actions/llm/generation.py
  • Function: generate_value

Describe the bug

In the current implementation of generate_value, located at NeMo-Guardrails/nemoguardrails/actions/llm/generation.py, the method passes LLM-generated values directly to literal_eval. This can raise runtime errors, such as SyntaxError, when the generated value contains unescaped quotes or otherwise invalid Python-literal syntax.

Steps To Reproduce

  1. Use the generate_value method in generation.py with an LLM-generated response containing unescaped single quotes, e.g., "It's a sunny day", '"It is a sunny day"', or "'It is a sunny day'".
  2. Observe the exception raised by literal_eval (a standalone reproduction is shown below).
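
For context, a minimal standalone reproduction of the failure using only ast.literal_eval (not NeMo-Guardrails code; the inputs are representative of the strings an LLM can return):

    from ast import literal_eval

    for raw in ("'It's a sunny day'", "It is a sunny day"):
        try:
            literal_eval(raw)
        except (ValueError, SyntaxError) as exc:
            # Both inputs fail with SyntaxError: the unescaped apostrophe
            # terminates the first string early, and the second value is
            # not a quoted literal at all.
            print(f"{raw!r} -> {type(exc).__name__}: {exc}")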

Expected Behavior

The evaluation of the generated value should:

  1. Escape special characters (e.g., single and double quotes) gracefully.
  2. Avoid raising runtime exceptions by sanitizing the input string.

Actual Behavior

The original literal_eval approach raises SyntaxError when the generated value contains unescaped quotes or invalid syntax.

Proposed Solution

To address these issues, I implemented a new safe_eval function and updated the generate_value method as follows:

Modified Code

  1. Updated generate_value (lines 1040-1042):

    log.info(f"Generated value for ${var_name}: {value}")

    try:
        # Route the raw LLM output through the sanitizing helper instead of
        # calling literal_eval on it directly.
        return safe_eval(value)
    except Exception:
        raise Exception(f"Invalid LLM response: `{value}`")
  2. New safe_eval Function:

    from ast import literal_eval

    def safe_eval(input_value: str) -> str:
        # If the value already looks like a quoted literal, try parsing it as-is.
        if input_value.startswith(("'", '"')) and input_value.endswith(("'", '"')):
            try:
                return literal_eval(input_value)
            except (ValueError, SyntaxError):
                # Unescaped inner quotes surface as SyntaxError, not ValueError,
                # so both must be caught for the fallback below to apply.
                pass
        # Fallback: escape all quotes and re-wrap the value as a single-quoted
        # string so that literal_eval can parse it. Note that backslashes in
        # the input are not escaped, so pathological values could still fail.
        escaped_value = input_value.replace("'", "\\'").replace('"', '\\"')
        return literal_eval(f"'{escaped_value}'")

Functionality of safe_eval:

  • Escaping Special Characters: Automatically escapes single (') and double (") quotes to avoid syntax errors.
  • Graceful Fallback: Catches both ValueError and SyntaxError from the initial literal_eval attempt and retries with the sanitized input.
  • Integration: Works seamlessly with the generate_value method.
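
As a quick sanity check of the sketch above (assuming the safe_eval definition shown earlier), the example values from the reproduction steps round-trip as follows:

    print(safe_eval('"It is a sunny day"'))  # valid literal, parsed directly -> It is a sunny day
    print(safe_eval("'It is a sunny day'"))  # valid literal, parsed directly -> It is a sunny day
    print(safe_eval("It's a sunny day"))     # not quote-wrapped, escaped fallback -> It's a sunny day
    print(safe_eval("'It's a sunny day'"))   # SyntaxError caught; fallback keeps the outer quotes -> 'It's a sunny day'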

Metadata

Labels

bug (Something isn't working), good first issue (Good for newcomers), status: help wanted (Issues where external contributions are encouraged)
