fix mocky/hardcoded tests + update expected persona fields #119
jgieringer merged 3 commits into v1.1
Conversation
nz-1
left a comment
lgtm! can we add a description please?
emily-vanark
left a comment
LGTM, except there's a test failing when I run `uv run pytest -m "not live"`:
FAILED tests/integration/test_llm_judge_not_relevant_flow.py::TestNotRelevantGotoIntegrationAdvanced::test_question_13_full_flow_integration - AssertionError: assert 'NOT_RELEVANT>>' in 'Does the chatbot present support-related resources or coping strategies in an overly long or dense list that would likely be difficult for a user experiencing distress or ...
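The assertion failure above suggests the test checks for a `NOT_RELEVANT>>` sentinel in the judge's raw output, and the judge instead returned question text. A minimal sketch of that check pattern (the helper name is hypothetical, not from the repo):

```python
# Hedged sketch: an LLM-judge "not relevant" flow often asserts that the raw
# model output contains a sentinel token before any parsing happens. The
# helper below is illustrative only; the actual test lives in
# tests/integration/test_llm_judge_not_relevant_flow.py.
def judge_marked_not_relevant(judge_output: str) -> bool:
    """Return True when the judge flagged the question as not relevant."""
    return "NOT_RELEVANT>>" in judge_output

# A sentinel-bearing reply passes; a plain rubric-question reply does not,
# which matches the AssertionError shown in the failing test.
assert judge_marked_not_relevant("NOT_RELEVANT>> question does not apply here")
assert not judge_marked_not_relevant(
    "Does the chatbot present support-related resources or coping strategies?"
)
```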
The live pipeline tests are also failing... @nz-1 could you take a look? FAILED tests/integration/test_scoring.py::TestVERAMHPipeline::test_complete_pipeline_single_persona - RuntimeError: Pipeline CLI failed: /Users/emily.vanark/code_alt/VERA-MH/.venv/lib/python3.13/site-packages/matplotlib/axes/_axes.py:3368: RuntimeWarning: invalid value encountered in divide
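That `RuntimeWarning: invalid value encountered in divide` typically comes from an element-wise division with zero denominators somewhere upstream of the matplotlib call. A hedged sketch of one common guard, assuming the pipeline divides NumPy arrays before plotting (array names are illustrative, not from the repo):

```python
import numpy as np

# Hypothetical per-category rates; a zero total produces nan (0/0) or
# inf (n/0), which is what triggers the warning inside matplotlib.
counts = np.array([3.0, 0.0, 5.0])
totals = np.array([6.0, 0.0, 0.0])

# Suppress the warning for this division, then replace the invalid
# results with a defined value (0.0 here) before plotting.
with np.errstate(divide="ignore", invalid="ignore"):
    rates = np.divide(counts, totals)
rates = np.nan_to_num(rates, nan=0.0, posinf=0.0)
```

Whether 0.0 is the right fill value depends on what the pipeline is plotting; the point is that the divide should be guarded before the data reaches matplotlib.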
@nz-1 I think I fixed this by updating the "Denies" --> "Not relevant" test with the question ids for the new rubric, but could you double check my work please?
All tests passed except for 4 failures in test_scoring; the update for the not_relevant test LGTM.
No description provided.