
chore(deps): update github workflows (major)#5424

Merged
davidzhao merged 1 commit into main from renovate/major-github-workflows
Apr 12, 2026

Conversation

@renovate
Contributor

@renovate renovate bot commented Apr 11, 2026

This PR contains the following updates:

| Package | Type | Update | Change | Pending |
|---|---|---|---|---|
| actions/checkout | action | major | `v4` → `v6` | |
| actions/download-artifact | action | major | `v4` → `v8` | |
| actions/github-script | action | major | `v7` → `v8` | `v9` |
| actions/setup-python | action | major | `v5` → `v6` | |
| actions/setup-python | action | major | `v4` → `v6` | |
| actions/upload-artifact | action | major | `v4` → `v6` | `v7` |
| astral-sh/setup-uv | action | major | `v5` → `v7` | |

Warning

Some dependencies could not be looked up. Check the Dependency Dashboard for more information.


Release Notes

actions/checkout (actions/checkout)

v6

Compare Source

v5

Compare Source

actions/download-artifact (actions/download-artifact)

v8

Compare Source

v7

Compare Source

v6

Compare Source

v5

Compare Source

actions/github-script (actions/github-script)

v8.0.0

Compare Source

What's Changed
⚠️ Minimum Compatible Runner Version

v2.327.1
Release Notes

Make sure your runner is updated to this version or newer to use this release.

New Contributors

Full Changelog: actions/github-script@v7.1.0...v8.0.0

actions/setup-python (actions/setup-python)

v6

Compare Source

actions/upload-artifact (actions/upload-artifact)

v6

Compare Source

v5

Compare Source

astral-sh/setup-uv (astral-sh/setup-uv)

v7

Compare Source

v6

Compare Source


Configuration

📅 Schedule: (UTC)

  • Branch creation
    • At any time (no schedule defined)
  • Automerge
    • At any time (no schedule defined)

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.

Generated by renovateBot
@chenghao-mou chenghao-mou requested a review from a team April 11, 2026 23:16
Contributor

@devin-ai-integration devin-ai-integration bot left a comment


✅ Devin Review: No Issues Found

Devin Review analyzed this PR and found no bugs or issues to report.

Open in Devin Review

@github-actions
Contributor

STT Test Results

Status: ✗ Some tests failed

| Metric | Count |
|---|---|
| ✓ Passed | 19 |
| ✗ Failed | 4 |
| × Errors | 1 |
| → Skipped | 15 |
| ▣ Total | 39 |
| ⏱ Duration | 208.9s |
Failed Tests
  • tests.test_stt::test_recognize[livekit.plugins.google]
    self = <google.api_core.grpc_helpers_async._WrappedUnaryUnaryCall object at 0x7f43f8b482f0>
    
        def __await__(self) -> Iterator[P]:
            try:
    >           response = yield from self._call.__await__()
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    
    .venv/lib/python3.12/site-packages/google/api_core/grpc_helpers_async.py:86: 
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
    
    self = <grpc.aio._interceptor.InterceptedUnaryUnaryCall object at 0x7f43f8b789b0>
    
        def __await__(self):
            call = yield from self._interceptors_task.__await__()
    >       response = yield from call.__await__()
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^
    
    .venv/lib/python3.12/site-packages/grpc/aio/_interceptor.py:474: 
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
    
    self = <_AioCall of RPC that terminated with:
    	status = Request had invalid authentication credentials. Expected OAuth 2 acce...hentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project.", grpc_status:16}"
    >
    
        def __await__(self) -> Generator[Any, None, ResponseType]:
            """Wait till the ongoing RPC request finishes."""
            try:
                response = yield from self._call_response
            except asyncio.CancelledError:
                # Even if we caught all other CancelledError, there is still
                # this corner case. If the application cancels immediately after
                # the Call object is created, we will observe this
                # `CancelledError`.
                if not self.cancelled():
                    self.cancel()
                raise
      
            # NOTE(lidiz) If we raise RpcError in the task, and users doesn't
            # 'await' on it. AsyncIO will log 'Task exception was never retrieved'.
            # Instead, if we move the exception raising here, the spam stops.
            # Unfortunately, there can only be one 'yield from' in '__await__'. So,
            # we need to access the private instanc
    
  • tests.test_stt::test_stream[livekit.plugins.speechmatics]
    def finalizer() -> None:
            """Yield again, to finalize."""
      
            async def async_finalizer() -> None:
                try:
                    await gen_obj.__anext__()
                except StopAsyncIteration:
                    pass
                else:
                    msg = "Async generator fixture didn't stop."
                    msg += "Yield only once."
                    raise ValueError(msg)
      
    >       runner.run(async_finalizer(), context=context)
    
    .venv/lib/python3.12/site-packages/pytest_asyncio/plugin.py:330: 
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
    
    self = <asyncio.runners.Runner object at 0x7f43f8b71730>
    coro = <coroutine object _wrap_asyncgen_fixture.<locals>._asyncgen_fixture_wrapper.<locals>.finalizer.<locals>.async_finalizer at 0x7f43f4650110>
    
        def run(self, coro, *, context=None):
            """Run a coroutine inside the embedded event loop."""
            if not coroutines.iscoroutine(coro):
                raise ValueError("a coroutine was expected, got {!r}".format(coro))
      
            if events._get_running_loop() is not None:
                # fail fast with short traceback
                raise RuntimeError(
                    "Runner.run() cannot be called from a running event loop")
      
            self._lazy_init()
      
            if context is None:
                context = self._context
            task = self._loop.create_task(coro, context=context)
      
            if (threading.current_thread() is threading.main_thread()
                and signal.getsignal(signal.SIGINT) is signal.default_int_handler
            ):
                sigint_handler = functools.partial(self._on_sigint, main_task=task)
                try:
                    signal.signal(signal.SIGINT, sigint_handler)
                except ValueError:
                    # `signal.signal` may throw if `threading.main_thread` does
                    # not support signals (e.g. embedded interpreter with signals
                    # not registered - see gh-91880)
                    sigint_handler = None
    
  • tests.test_stt::test_stream[livekit.plugins.nvidia]
    stt_factory = <function parameter_factory.<locals>.<lambda> at 0x7f0985425760>
    request = <FixtureRequest for <Coroutine test_stream[livekit.plugins.nvidia]>>
    
        @pytest.mark.usefixtures("job_process")
        @pytest.mark.parametrize("stt_factory", STTs)
        async def test_stream(stt_factory: Callable[[], STT], request):
            sample_rate = SAMPLE_RATE
            plugin_id = request.node.callspec.id.split("-")[0]
            frames, transcript, _ = await make_test_speech(chunk_duration_ms=10, sample_rate=sample_rate)
      
            # TODO: differentiate missing key vs other errors
            try:
                stt_instance: STT = stt_factory()
            except ValueError as e:
                pytest.skip(f"{plugin_id}: {e}")
      
            async with stt_instance as stt:
                label = f"{stt.model}@{stt.provider}"
                if not stt.capabilities.streaming:
                    pytest.skip(f"{label} does not support streaming")
      
                for attempt in range(MAX_RETRIES):
                    try:
                        state = {"closing": False}
      
                        async def _stream_input(
                            frames: list[rtc.AudioFrame], stream: RecognizeStream, state: dict = state
                        ):
                            for frame in frames:
                                stream.push_frame(frame)
                                await asyncio.sleep(0.005)
      
                            stream.end_input()
                            state["closing"] = True
      
                        async def _stream_output(stream: RecognizeStream, state: dict = state):
                            text = ""
                            # make sure the events are sent in the right order
                            recv_start, recv_end = False, True
                            start_time = time.time()
                            got_final_transcript = False
      
                            async for event in stream:
                                if event.type == agents.stt.SpeechEventType.START_OF_SPEECH:
    
  • tests.test_stt::test_stream[livekit.agents.inference]
    stt_factory = <function parameter_factory.<locals>.<lambda> at 0x7f0985425a80>
    request = <FixtureRequest for <Coroutine test_stream[livekit.agents.inference]>>
    
        @pytest.mark.usefixtures("job_process")
        @pytest.mark.parametrize("stt_factory", STTs)
        async def test_stream(stt_factory: Callable[[], STT], request):
            sample_rate = SAMPLE_RATE
            plugin_id = request.node.callspec.id.split("-")[0]
            frames, transcript, _ = await make_test_speech(chunk_duration_ms=10, sample_rate=sample_rate)
      
            # TODO: differentiate missing key vs other errors
            try:
                stt_instance: STT = stt_factory()
            except ValueError as e:
                pytest.skip(f"{plugin_id}: {e}")
      
            async with stt_instance as stt:
                label = f"{stt.model}@{stt.provider}"
                if not stt.capabilities.streaming:
                    pytest.skip(f"{label} does not support streaming")
      
                for attempt in range(MAX_RETRIES):
                    try:
                        state = {"closing": False}
      
                        async def _stream_input(
                            frames: list[rtc.AudioFrame], stream: RecognizeStream, state: dict = state
                        ):
                            for frame in frames:
                                stream.push_frame(frame)
                                await asyncio.sleep(0.005)
      
                            stream.end_input()
                            state["closing"] = True
      
                        async def _stream_output(stream: RecognizeStream, state: dict = state):
                            text = ""
                            # make sure the events are sent in the right order
                            recv_start, recv_end = False, True
                            start_time = time.time()
                            got_final_transcript = False
      
                            async for event in stream:
                                if event.type == agents.stt.SpeechEventType.START_OF_SPEECH:
    
  • tests.test_stt::test_stream[livekit.plugins.google]
    stt_factory = <function parameter_factory.<locals>.<lambda> at 0x7f43f842dc60>
    request = <FixtureRequest for <Coroutine test_stream[livekit.plugins.google]>>
    
        @pytest.mark.usefixtures("job_process")
        @pytest.mark.parametrize("stt_factory", STTs)
        async def test_stream(stt_factory: Callable[[], STT], request):
            sample_rate = SAMPLE_RATE
            plugin_id = request.node.callspec.id.split("-")[0]
            frames, transcript, _ = await make_test_speech(chunk_duration_ms=10, sample_rate=sample_rate)
      
            # TODO: differentiate missing key vs other errors
            try:
                stt_instance: STT = stt_factory()
            except ValueError as e:
                pytest.skip(f"{plugin_id}: {e}")
      
            async with stt_instance as stt:
                label = f"{stt.model}@{stt.provider}"
                if not stt.capabilities.streaming:
                    pytest.skip(f"{label} does not support streaming")
      
                for attempt in range(MAX_RETRIES):
                    try:
                        state = {"closing": False}
      
                        async def _stream_input(
                            frames: list[rtc.AudioFrame], stream: RecognizeStream, state: dict = state
                        ):
                            for frame in frames:
                                stream.push_frame(frame)
                                await asyncio.sleep(0.005)
      
                            stream.end_input()
                            state["closing"] = True
      
                        async def _stream_output(stream: RecognizeStream, state: dict = state):
                            text = ""
                            # make sure the events are sent in the right order
                            recv_start, recv_end = False, True
                            start_time = time.time()
                            got_final_transcript = False
      
                            async for event in stream:
                                if event.type == agents.stt.SpeechEventType.START_OF_SPEECH:
    
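The speechmatics failure above comes from pytest-asyncio's finalizer, which enforces that an async generator fixture yields exactly once ("Async generator fixture didn't stop. Yield only once."). A minimal sketch of that lifecycle contract using only the standard library (the fixture name and resource are hypothetical):

```python
import asyncio

async def stt_session():
    """A well-formed async generator fixture body: set up, yield once, tear down."""
    resource = {"open": True}   # setup
    yield resource              # exactly one yield hands the value to the test
    resource["open"] = False    # teardown runs when the finalizer resumes us

async def run_fixture_lifecycle():
    gen = stt_session()
    value = await gen.__anext__()        # pytest-asyncio: advance to the yield
    assert value["open"]
    try:
        await gen.__anext__()            # finalizer: resume past the yield
    except StopAsyncIteration:
        pass                             # expected: the generator stopped cleanly
    else:
        raise ValueError("Async generator fixture didn't stop. Yield only once.")
    return value

result = asyncio.run(run_fixture_lifecycle())
print(result)  # → {'open': False}
```

If the fixture body yields a second time (or a background task keeps it from finishing), the second `__anext__` returns instead of raising `StopAsyncIteration`, which is exactly the `ValueError` path seen in the traceback.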
Skipped Tests

| Test | Reason |
|---|---|
| tests.test_stt::test_recognize[livekit.plugins.assemblyai] | universal-streaming-english@AssemblyAI does not support batch recognition |
| tests.test_stt::test_recognize[livekit.plugins.speechmatics] | enhanced@Speechmatics does not support batch recognition |
| tests.test_stt::test_recognize[livekit.plugins.fireworksai] | unknown@FireworksAI does not support batch recognition |
| tests.test_stt::test_recognize[livekit.plugins.cartesia] | ink-whisper@Cartesia does not support batch recognition |
| tests.test_stt::test_recognize[livekit.plugins.soniox] | stt-rt-v4@Soniox does not support batch recognition |
| tests.test_stt::test_recognize[livekit.plugins.aws] | unknown@Amazon Transcribe does not support batch recognition |
| tests.test_stt::test_recognize[livekit.plugins.deepgram.STTv2] | flux-general-en@Deepgram does not support batch recognition |
| tests.test_stt::test_recognize[livekit.plugins.gradium.STT] | unknown@Gradium does not support batch recognition |
| tests.test_stt::test_recognize[livekit.agents.inference] | unknown@livekit does not support batch recognition |
| tests.test_stt::test_recognize[livekit.plugins.azure] | unknown@Azure STT does not support batch recognition |
| tests.test_stt::test_stream[livekit.plugins.elevenlabs] | scribe_v1@ElevenLabs does not support streaming |
| tests.test_stt::test_recognize[livekit.plugins.nvidia] | unknown@unknown does not support batch recognition |
| tests.test_stt::test_stream[livekit.plugins.mistralai] | voxtral-mini-latest@MistralAI does not support streaming |
| tests.test_stt::test_stream[livekit.plugins.fal] | Wizper@Fal does not support streaming |
| tests.test_stt::test_stream[livekit.plugins.openai] | gpt-4o-mini-transcribe@api.openai.com does not support streaming |

Triggered by workflow run #1657

@davidzhao davidzhao merged commit f10dcb2 into main Apr 12, 2026
30 checks passed
@davidzhao davidzhao deleted the renovate/major-github-workflows branch April 12, 2026 20:06
