
Conversation

@milk333445 (Contributor) commented on Jan 21, 2025

Description

This PR improves the generate_value function in NeMo-Guardrails/nemoguardrails/actions/llm/generation.py to handle strings with unescaped single and double quotes more robustly. The updated implementation ensures that (see the sketch after this list):

  • Strings wrapped in quotes (single or double) are correctly evaluated using literal_eval.
  • Strings without quotes are escaped and wrapped in valid Python string syntax before evaluation.
  • The function gracefully handles errors caused by invalid input syntax.

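As a rough illustration of the behavior described above, here is a minimal sketch. The helper name safe_eval matches the discussion later in this thread, but the exact escaping rules, logging, and error fallback are assumptions rather than the code merged in this PR:

```python
# Minimal sketch only: structure and details are assumptions, not the merged code.
import logging
from ast import literal_eval

log = logging.getLogger(__name__)


def safe_eval(value: str):
    """Evaluate an LLM-produced value, tolerating unescaped quotes."""
    stripped = value.strip()

    # Strings already wrapped in matching quotes are valid Python literals
    # and can be passed to literal_eval directly.
    if len(stripped) >= 2 and stripped[0] == stripped[-1] and stripped[0] in "'\"":
        try:
            return literal_eval(stripped)
        except (ValueError, SyntaxError):
            log.warning("Could not evaluate quoted value %r; falling back.", value)

    # Otherwise (or if the quoted form was invalid), escape backslashes and
    # double quotes and wrap the content so that literal_eval always sees
    # syntactically valid Python string syntax.
    escaped = stripped.replace("\\", "\\\\").replace('"', '\\"')
    try:
        return literal_eval(f'"{escaped}"')
    except (ValueError, SyntaxError):
        # Graceful handling of invalid syntax; returning the raw value here
        # is an assumption about the fallback behavior.
        log.error("Could not safely evaluate value %r; returning it as-is.", value)
        return value
```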
Additionally, new test cases have been added in NeMo-Guardrails/tests/test_generate_value_safe_eval.py to verify the correctness of safe_eval across different scenarios, including:

  • Unquoted strings.
  • Strings with nested single and double quotes.
  • Empty strings.

All existing and new tests were successfully executed locally using pytest to ensure the changes do not introduce any regressions.

Related Issue(s)

Checklist

  • I've read the CONTRIBUTING guidelines.
  • I've updated the documentation if applicable.
  • I've added tests if applicable.
  • @mentions of the person or team responsible for reviewing proposed changes.

@Pouyanpi (Collaborator) left a comment

Thank you @milk333445 for your contribution 👍🏻 Please make the minor changes requested, then we are good to merge.

@milk333445 force-pushed the fix/literal-eval-bug branch from 48d8baf to f305ca9 on January 21, 2025 08:49
@milk333445 (Contributor, Author) commented:

The requested changes have been implemented as follows:

  1. Moved the safe_eval function to nemoguardrails/utils.py as suggested.
  2. Added logging to capture errors when exceptions are raised.
  3. Refactored the test cases:
    • Moved the tests for safe_eval to test_utils.py as it is now part of the utilities.
    • Consolidated all test cases for safe_eval into a single function using pytest.mark.parametrize (a sketch of this shape follows below).
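
For illustration, a consolidated, parametrized test along these lines might look like the sketch below. The import path follows the move to nemoguardrails/utils.py mentioned above, but the specific cases and expected values are assumptions, not the contents of tests/test_utils.py:

```python
# Hypothetical sketch of a consolidated, parametrized test; the real cases
# in tests/test_utils.py may differ.
import pytest

from nemoguardrails.utils import safe_eval


@pytest.mark.parametrize(
    "value, expected",
    [
        ("hello", "hello"),                 # unquoted string
        ('"hello"', "hello"),               # double-quoted string
        ("'hello'", "hello"),               # single-quoted string
        ('"it\'s a test"', "it's a test"),  # nested single quote
        ("'say \"hi\"'", 'say "hi"'),       # nested double quotes
        ("", ""),                           # empty string
    ],
)
def test_safe_eval(value, expected):
    assert safe_eval(value) == expected
```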

@Pouyanpi self-requested a review on January 21, 2025 09:32
@Pouyanpi (Collaborator) commented:

@milk333445 thanks a lot, that was fast 👍🏻 Just make sure that you sign your commits: you can do an interactive rebase to sign them, then force-push.

@Pouyanpi added this to the v0.12.0 milestone on Jan 21, 2025
@Pouyanpi added the enhancement label on Jan 21, 2025
@milk333445 force-pushed the fix/literal-eval-bug branch from 556268d to cccc62f on January 21, 2025 12:02
@milk333445 (Contributor, Author) commented:

I have signed all the commits, and they are now verified.
Please review the changes again and let me know if there's anything else needed. Thank you for your guidance!

@Pouyanpi self-requested and then removed a review request on January 21, 2025 13:36
@Pouyanpi (Collaborator) left a comment

LGTM! 👍🏻

@Pouyanpi merged commit 485bfd2 into NVIDIA-NeMo:develop on Jan 21, 2025 (10 checks passed)
This pull request may close the following issue: bug: Enhance LLM response evaluation in generate_value to handle invalid syntax