Conversation

@chenghao-mou (Member)

This should close #4459 (AssemblyAI STT Model Name Clarification: LiveKit Inference vs Direct Plugin).

Inference just merged a change to support language detection, so this should work there soon as well.
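
For anyone trying this out, a minimal usage sketch follows. This is not the confirmed API from this PR: the `language_detection` keyword is an assumed name inferred from the feat/assembly-stt-language branch, so check the merged plugin code for the real signature.

```python
# Hedged sketch, not confirmed API: `language_detection` is an ASSUMED
# parameter name inferred from the feat/assembly-stt-language branch.
from livekit.plugins import assemblyai

# assemblyai.STT follows the usual LiveKit plugin constructor pattern.
stt = assemblyai.STT(
    language_detection=True,  # assumed flag for automatic language detection
)
```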

@chenghao-mou requested a review from a team on January 15, 2026 at 16:30
@chenghao-mou (Member, Author)

/test-stt

@github-actions (Contributor)

STT Test Results

Status: ✗ Some tests failed

| Metric | Count |
| --- | --- |
| ✓ Passed | 23 |
| ✗ Failed | 0 |
| × Errors | 1 |
| → Skipped | 15 |
| ▣ Total | 39 |
| ⏱ Duration | 179.0s |
Failed Tests
  • tests.test_stt::test_stream[livekit.plugins.aws]
    def finalizer() -> None:
            """Yield again, to finalize."""
      
            async def async_finalizer() -> None:
                try:
                    await gen_obj.__anext__()  # type: ignore[union-attr]
                except StopAsyncIteration:
                    pass
                else:
                    msg = "Async generator fixture didn't stop."
                    msg += "Yield only once."
                    raise ValueError(msg)
      
            task = _create_task_in_context(event_loop, async_finalizer(), context)
    >       event_loop.run_until_complete(task)
    
    .venv/lib/python3.12/site-packages/pytest_asyncio/plugin.py:347: 
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
    
    self = <_UnixSelectorEventLoop running=False closed=True debug=False>
    future = <Task finished name='Task-113' coro=<_wrap_asyncgen_fixture.<locals>._asyncgen_fixture_wrapper.<locals>.finalizer.<loc... File "/home/runner/work/agents/agents/.venv/lib/python3.12/site-packages/smithy_http/aio/crt.py", line 104, in chunks>
    
        def run_until_complete(self, future):
            """Run until the Future is done.
      
            If the argument is a coroutine, it is wrapped in a Task.
      
            WARNING: It would be disastrous to call run_until_complete()
            with the same coroutine twice -- it would wrap it in two
            different Tasks and that can't be good.
      
            Return the Future's result, or raise its exception.
            """
            self._check_closed()
            self._check_running()
      
            new_task = not futures.isfuture(future)
            future = tasks.ensure_future(future, loop=self)
            if new_task:
                # An exception is raised if the future didn't complete, so there
                # is no need to log the "destroy pending task" message
                future._log_destroy_pending = False
      
            future.add_done_callback(_run_until_complete_cb)
            try:
                self.run_forever()
            except:
            if new_task and future.done() and not future.cancelled():
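
The error above comes from pytest-asyncio's fixture finalizer: after the test, it resumes the async generator fixture one more time on the captured event loop (already `closed=True` here) and expects it to stop. An async generator fixture must yield exactly once; below is a minimal sketch of the expected shape, with a stand-in stream class rather than anything from this repo.

```python
import pytest_asyncio


class FakeStream:
    """Stand-in for a real STT stream; illustrative only."""

    async def aclose(self) -> None:
        pass


@pytest_asyncio.fixture
async def stt_stream():
    stream = FakeStream()  # setup runs before the test
    # Exactly one yield: the finalizer resumes the generator once more and
    # expects StopAsyncIteration; a second yield raises the
    # "Async generator fixture didn't stop." ValueError seen above.
    yield stream
    await stream.aclose()  # teardown, run on the captured event loop
```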
    
Skipped Tests
| Test | Reason |
| --- | --- |
| tests.test_stt::test_recognize[livekit.plugins.assemblyai] | universal-streaming-english@AssemblyAI does not support batch recognition |
| tests.test_stt::test_recognize[livekit.plugins.fireworksai] | unknown@FireworksAI does not support batch recognition |
| tests.test_stt::test_recognize[livekit.plugins.speechmatics] | unknown@Speechmatics does not support batch recognition |
| tests.test_stt::test_recognize[livekit.plugins.nvidia] | unknown@unknown does not support batch recognition |
| tests.test_stt::test_recognize[livekit.plugins.aws] | unknown@Amazon Transcribe does not support batch recognition |
| tests.test_stt::test_recognize[livekit.plugins.cartesia] | ink-whisper@Cartesia does not support batch recognition |
| tests.test_stt::test_recognize[livekit.plugins.soniox] | stt-rt-v3@Soniox does not support batch recognition |
| tests.test_stt::test_recognize[livekit.plugins.deepgram.STTv2] | flux-general-en@Deepgram does not support batch recognition |
| tests.test_stt::test_recognize[livekit.plugins.gradium.STT] | unknown@Gradium does not support batch recognition |
| tests.test_stt::test_recognize[livekit.agents.inference] | unknown@livekit does not support batch recognition |
| tests.test_stt::test_recognize[livekit.plugins.azure] | unknown@Azure STT does not support batch recognition |
| tests.test_stt::test_stream[livekit.plugins.elevenlabs] | Scribe@ElevenLabs does not support streaming |
| tests.test_stt::test_stream[livekit.plugins.mistralai] | voxtral-mini-latest@MistralAI does not support streaming |
| tests.test_stt::test_stream[livekit.plugins.openai] | gpt-4o-mini-transcribe@api.openai.com does not support streaming |
| tests.test_stt::test_stream[livekit.plugins.fal] | Wizper@Fal does not support streaming |
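
These skips are capability checks rather than failures: each plugin declares whether it supports batch recognition and streaming, and the test for the unsupported mode is skipped. A hedged sketch of that pattern follows; the helper is illustrative, not the repo's actual test code, and it assumes the `capabilities.streaming` flag and `label` attribute that livekit-agents STT classes expose.

```python
import pytest

from livekit.agents import stt


def skip_if_no_streaming(model: stt.STT) -> None:
    # Illustrative helper (an assumption, not the repo's test code): skip a
    # streaming test when the plugin declares no streaming support.
    if not model.capabilities.streaming:
        pytest.skip(f"{model.label} does not support streaming")
```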

Triggered by workflow run #194

@davidzhao merged commit 18d2519 into main on January 15, 2026
17 of 19 checks passed
@davidzhao deleted the feat/assembly-stt-language branch on January 15, 2026 at 18:37