fix: issues with tests (alora example, rag intrinsics, mistral tool use, vllm auto-skip) #570
jakelorocco merged 3 commits into main
Conversation
Force-pushed 8795893 to 8ed36c8
Force-pushed 8ed36c8 to f9b18ce
Force-pushed f9b18ce to 42e7e80
Two of the test_rag tests failed for me:
The other three all passed (or were skipped).
@psschwei, can you please clarify: did these tests fail when running against this branch and with packages updated?
Yes, against this branch with a fresh venv (checkout branch as new worktree and |
though I think your force push came after I checked out; let me retry
|
more failures now (though all seem to be related to modules not found after the granite-common merge) |
Force-pushed 42e7e80 to f919af4
Updated the commit to fix the pyproject packages; tests pass for me locally on Mac and on Linux with a clean environment.
psschwei left a comment:
tests all pass for me now too
fix: issues with tests (alora example, rag intrinsics, mistral tool use, vllm auto-skip) (generative-computing#570)
* fix: issues with tests (alora example, rag intrinsics, mistral tool use)
* fix: uv lock update after pyproject changes
Type of PR: Misc

Description
A few issues with tests that I stumbled on. These were errors in our test code, not in the mellea code the tests were exercising.
- test/backends/test_huggingface_tools.py: we use a Mistral model that requires the sentencepiece package -> fixed by adding it in pyproject.toml
- test/stdlib/components/intrinsic/test_rag.py: changes to the adapters for citations / hallucination detection produced slightly different values -> updated the expected data
- docs/examples/aLora/102_example.py: expected input -> fixed by skipping this example and unskipping 101_example.py, which tests the same functionality
- test/backends/test_openai_vllm.py: exceptions raised during vLLM setup caused the test to error out instead of being skipped -> setup failures now auto-skip
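For reference, the sentencepiece fix might look like the fragment below in pyproject.toml. The dependency-group name and version bound are assumptions, not the repo's actual values; Mistral tokenizers in transformers need sentencepiece installed to load.

```toml
# Illustrative pyproject.toml fragment (group name and version are assumed):
[project.optional-dependencies]
test = [
    "sentencepiece>=0.2.0",
]
```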
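The vLLM auto-skip fix can be sketched as wrapping backend setup and converting any failure into a test skip rather than an error. This is a minimal stdlib illustration, not the repo's actual code: the helper names are invented, and the real tests likely use pytest, where the equivalent is pytest.skip(...).

```python
import unittest

def _make_vllm_backend():
    # Stand-in for the real vLLM backend construction (hypothetical name);
    # in the actual tests this can raise when no server or GPU is available.
    raise ConnectionError("vLLM server not reachable")

def _vllm_backend_or_skip():
    try:
        return _make_vllm_backend()
    except Exception as exc:
        # Turn a setup failure into a skip instead of a test error.
        # unittest.SkipTest is honored by both unittest and pytest;
        # pytest's own spelling is pytest.skip(...).
        raise unittest.SkipTest(f"vLLM backend unavailable: {exc}")

class TestVLLMGenerate(unittest.TestCase):
    def test_generate(self):
        backend = _vllm_backend_or_skip()
        self.assertIsNotNone(backend)
```

Running this test case records a skip, not an error, which is the behavior the fix restores.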
Testing

Tests pass.