Conversation
📝 Walkthrough

A sweeping standardization refactors gemini.LLM() usage across examples and tests by removing explicit model parameters.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~12 minutes
🚥 Pre-merge checks | ✅ 2 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@plugins/moondream/tests/test_moondream_local_vlm.py`:
- Around lines 64-67: The class decorator list on TestMoondreamLocalVLM includes
an unconditional `@pytest.mark.skip`, which prevents any tests from running and
makes the `@pytest.mark.skipif` on HF_TOKEN ineffective. Remove the bare
`@pytest.mark.skip` decorator from the TestMoondreamLocalVLM class so the
conditional `@pytest.mark.skipif(not os.getenv("HF_TOKEN"), reason=...)` controls
skipping (or replace the unconditional skip with `skip(reason="...")` if you
truly want a documented temporary disable).
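A minimal sketch of that decorator change, assuming the class currently carries both marks; the class name matches the test in question, but the body below is a placeholder:

```python
import os

import pytest


# Before (everything is skipped unconditionally, so the skipif never applies):
#   @pytest.mark.skip
#   @pytest.mark.skipif(not os.getenv("HF_TOKEN"), reason="HF_TOKEN not set")
#   class TestMoondreamLocalVLM: ...

# After: only the conditional mark remains, so tests run whenever HF_TOKEN is set.
@pytest.mark.skipif(not os.getenv("HF_TOKEN"), reason="HF_TOKEN not set")
class TestMoondreamLocalVLM:
    def test_placeholder(self) -> None:
        assert True
```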
In `@plugins/moondream/tests/test_moondream.py`:
- Around line 86-97: The tests are violating the "never mock in tests" guideline
by monkey-patching processor._run_detection_sync in tests like test_moondream.py
(affecting test_run_inference, test_process_and_add_frame, etc.); refactor tests
to avoid monkey-patching by either invoking the real detection path with
deterministic inputs (use a controlled/sample image so
processor._run_detection_sync runs normally) or refactor the processor to accept
an injectable detection dependency (e.g., a DetectionInterface passed to the
constructor) and update tests to construct a test-specific implementation
instead of replacing processor._run_detection_sync at runtime; ensure tests such
as test_run_inference and test_process_and_add_frame create the processor with
the injectable dependency or use known test images and remove any assignments to
processor._run_detection_sync (see the injection sketch after this list).
- Around line 240-244: The test uses hasattr(processor, "_last_results") which
violates the guideline; replace the hasattr check with direct attribute access
on processor._last_results and assert its contents (e.g., assert
processor._last_results is not None and "detections" in processor._last_results)
so an AttributeError will surface if the attribute is missing and the test still
verifies "detections" exists; update the lines referencing hasattr to directly
access processor._last_results in the test before calling await
processor.close() (see the assertion sketch after this list).
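For the first item above (avoiding the monkey-patch), a rough sketch of the injectable-dependency option; DetectionInterface, MoondreamProcessor, and the method signatures here are illustrative, not the plugin's actual API:

```python
from typing import Any, Protocol


class DetectionInterface(Protocol):
    """Anything that can run detection on a frame synchronously."""

    def run_detection(self, frame: Any) -> dict: ...


class MoondreamProcessor:
    """Hypothetical processor that receives its detector instead of hard-wiring it."""

    def __init__(self, detector: DetectionInterface) -> None:
        self._detector = detector
        self._last_results: dict | None = None

    def _run_detection_sync(self, frame: Any) -> dict:
        self._last_results = self._detector.run_detection(frame)
        return self._last_results


class StaticDetector:
    """Test-only detector that returns a fixed, deterministic result."""

    def run_detection(self, frame: Any) -> dict:
        return {"detections": [{"label": "person", "bbox": [0, 0, 10, 10]}]}


def test_run_inference_with_injected_detector() -> None:
    # No monkey-patching: the test constructs the processor with its own detector.
    processor = MoondreamProcessor(detector=StaticDetector())
    result = processor._run_detection_sync(frame=None)
    assert result["detections"]
```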
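For the second item (the hasattr check), a small sketch of the direct-access assertion; the stub processor stands in for whatever the real test constructs, and the asyncio mark assumes pytest-asyncio is installed:

```python
import pytest


class _StubProcessor:
    """Stand-in carrying the attribute the real test inspects."""

    def __init__(self) -> None:
        self._last_results = {"detections": []}

    async def close(self) -> None: ...


@pytest.mark.asyncio
async def test_last_results_direct_access() -> None:
    processor = _StubProcessor()
    # Direct attribute access: an AttributeError surfaces immediately if
    # _last_results disappears, unlike a hasattr() check, and the contents
    # are still verified before the processor is closed.
    assert processor._last_results is not None
    assert "detections" in processor._last_results
    await processor.close()
```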
🧹 Nitpick comments (6)
plugins/deepgram/tests/test_deepgram_stt.py (1)
29-36: Consider extracting the duplicate `Participant` into a local variable.

Two identical `Participant({}, user_id="hi", id="hi")` constructions in the same test — a small thing, like two mirrors facing each other in an empty room. A local variable or the shared `participant` fixture from `conftest.py` would reduce duplication.

♻️ Suggested simplification
```diff
+    participant = Participant({}, user_id="hi", id="hi")
     # Process the mia audio
     await stt.process_audio(
-        mia_audio_48khz, participant=Participant({}, user_id="hi", id="hi")
+        mia_audio_48khz, participant=participant
     )
     # Send 2 seconds of silence to trigger end of turn
     await stt.process_audio(
-        silence_2s_48khz, participant=Participant({}, user_id="hi", id="hi")
+        silence_2s_48khz, participant=participant
     )
```

tests/test_function_calling.py (1)
5-12: The mocking in this file violates coding guidelines — but that's pre-existing, not introduced here.

The newly added `OpenAILLM` import and `pytest` reorder look fine. Just noting: the extensive `Mock`/`@patch` usage throughout this file conflicts with the project guideline "Never mock in tests." This predates the current PR, but it may be worth a follow-up cleanup. As per coding guidelines, `**/*test*.py`: "Never mock in tests; use pytest for testing."

plugins/moondream/tests/test_moondream_local_vlm.py (2)
68-86: Redundant `warmup()` call — the fixture already warms up the VLM.

The `local_vlm_vqa` fixture (line 43) already calls `await vlm.warmup()` before yielding. Calling it again on line 73 is unnecessary overhead (and the same pattern repeats in `test_local_caption_mode` on line 93).

Proposed fix
```diff
     async def test_local_vqa_mode(
         self, golf_frame: av.VideoFrame, local_vlm_vqa: LocalVLM
     ):
         """Test LocalVLM VQA mode with a question about the image."""
-
-        await local_vlm_vqa.warmup()
         assert local_vlm_vqa.model is not None, "Model must be loaded before test"
```
34-46: Async generator fixtures should use `AsyncIterator` in the return type hint, not a bare type.

These fixtures `yield`, making them generators, but the type hint on each says `-> LocalVLM`. This won't cause runtime failures but misrepresents the fixture's nature to type checkers and readers.

Also applies to: 49-61
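A minimal sketch of the suggested annotation, assuming pytest-asyncio; FakeVLM stands in for LocalVLM purely to keep the example self-contained:

```python
from collections.abc import AsyncIterator

import pytest_asyncio


class FakeVLM:
    """Stand-in for LocalVLM, only here to illustrate the annotation."""

    async def warmup(self) -> None: ...
    async def close(self) -> None: ...


# The fixture yields, so it is an async generator: annotate it as
# AsyncIterator[FakeVLM] rather than a bare FakeVLM.
@pytest_asyncio.fixture
async def local_vlm_vqa() -> AsyncIterator[FakeVLM]:
    vlm = FakeVLM()
    await vlm.warmup()
    yield vlm
    await vlm.close()
```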
plugins/moondream/tests/test_moondream.py (2)
43-56: Image opened in `_thread` is never closed — potential resource leak.

`Image.open(image_path)` on line 49 is not wrapped in a `with` statement or explicitly closed. The image handle stays open until GC collects it. A `with` block would be safer.

Proposed fix
```diff
     def _thread():
         if image_path.exists():
-            image = Image.open(image_path)
-            frame_array = np.array(image)
-            return frame_array
+            with Image.open(image_path) as image:
+                frame_array = np.array(image)
+                return frame_array
         else:
             return None
```
332-334: Use `logging` for diagnostics or remove the `print()` calls.

Multiple integration tests use `print()` for diagnostic output (lines 332-334, 360-363, 392-396). In a pytest-based suite, captured stdout is typically hidden unless the test fails. Consider using `logging` or pytest's `caplog`/`capsys` if the output is genuinely valuable, or removing it if it was debug scaffolding.

Also applies to: 360-363, 392-396
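A small sketch of the logging option, assuming pytest's built-in caplog fixture; the results dict is a stand-in for the real detection output:

```python
import logging

logger = logging.getLogger(__name__)


def test_detection_diagnostics(caplog):
    caplog.set_level(logging.DEBUG)

    results = {"detections": [{"label": "person"}]}  # stand-in for real output
    # Instead of print(results): log records are shown for failing tests and
    # can be asserted on via caplog, so the diagnostics stay useful without noise.
    logger.debug("detection results: %s", results)

    assert "detection results" in caplog.text
```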
"nvidia/cosmos-reason2-8b" model doesn't seem to be available for completions api
This PR fixes some integration tests; see the commits for more details.
Summary by CodeRabbit
Release Notes
- Documentation
- New Features: `id` field in addition to existing properties.
- Tests
- Updates