
Docs CI is broken #203

@scotts


🐛 Describe the bug

Our CI for docs is broken. See example: https://github.com/pytorch/torchcodec/actions/runs/10744233389/job/29800698755?pr=202. The failure is:

generating gallery for generated_examples... [100%] basic_example.py
Warning, treated as error:

../../examples/basic_example.py unexpectedly failed to execute correctly:

    Traceback (most recent call last):
      File "/home/runner/work/torchcodec/torchcodec/examples/basic_example.py", line 66, in <module>
        decoder = SimpleVideoDecoder(raw_video_bytes)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/runner/work/torchcodec/torchcodec/src/torchcodec/decoders/_simple_video_decoder.py", line 1[31](https://github.com/pytorch/torchcodec/actions/runs/10743607733/job/29798699894?pr=201#step:8:32), in __init__
        core.scan_all_streams_to_update_metadata(self._decoder)
      File "/usr/share/miniconda3/envs/test/lib/python3.12/site-packages/torch/_ops.py", line 667, in __call__
        return self_._op(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
    NotImplementedError: Could not run 'torchcodec_ns::scan_all_streams_to_update_metadata' with arguments from the 'CPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'torchcodec_ns::scan_all_streams_to_update_metadata' is only available for these backends: [CUDA, Meta, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradMPS, AutogradXPU, AutogradHPU, AutogradLazy, AutogradMeta, Tracer, AutocastCPU, AutocastXPU, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].

The PR that kicked off this job only added a single space to the README: #202. The code referenced in the traceback has not changed recently.
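
For context, this is the generic dispatcher error you get when an op has kernels registered for some backends but not for the backend the input tensors are on. A minimal, hypothetical sketch (the names demo_ns/my_op are made up, not torchcodec's) that reproduces the same class of failure:

    import torch

    # Define a custom op and register a kernel only for CUDA, mirroring what the
    # CI hit: the op exists, but has no kernel for the CPU backend.
    lib = torch.library.Library("demo_ns", "DEF")
    lib.define("my_op(Tensor x) -> Tensor")
    lib.impl("my_op", lambda x: x.clone(), "CUDA")

    # Calling it with a CPU tensor raises:
    # NotImplementedError: Could not run 'demo_ns::my_op' with arguments from the 'CPU' backend.
    torch.ops.demo_ns.my_op(torch.zeros(1))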

The only idea I have is that we're pulling the PyTorch nightly: https://github.com/pytorch/torchcodec/blob/2b4f8094e1bba2ff30225c70209b25d60fc2bca8/.github/workflows/docs.yaml#L31
Could something have changed upstream in how these operators get registered? However, other jobs that pull the nightly the same way still succeed. For example: https://github.com/pytorch/torchcodec/blob/2b4f8094e1bba2ff30225c70209b25d60fc2bca8/.github/workflows/wheel.yaml#L81
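
If it is a registration change in the nightly, one way to narrow it down is to ask the dispatcher which keys actually have a kernel for the op. A rough sketch using private torch._C dispatcher helpers (so the exact names may shift between nightlies):

    import torch
    import torchcodec  # assuming importing the package loads the custom-op library

    op = "torchcodec_ns::scan_all_streams_to_update_metadata"

    # Dump the full dispatch table; a healthy build should show a CPU kernel entry.
    print(torch._C._dispatch_dump(op))

    # Or probe individual backends.
    print(torch._C._dispatch_has_kernel_for_dispatch_key(op, "CPU"))
    print(torch._C._dispatch_has_kernel_for_dispatch_key(op, "CUDA"))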

Versions

Internal CI.
