
Conversation

@larryliu0820 (Contributor) commented Oct 10, 2025

This pull request updates the CUDA workflow, model artifact handling, and multimodal runner logic. It restructures the GitHub Actions workflow to separate model export, benchmarking, and end-to-end testing for the Voxtral CUDA pipeline, improving artifact management and reproducibility. The multimodal runner now automatically converts audio tensors to bfloat16, ensuring compatibility with the encoder's expected input type. There are also enhancements to caching and symbol registration in the CUDA backend, and build-system updates to support linking the CUDA backend.

Workflow and Artifact Management Improvements:

  • Refactored .github/workflows/cuda.yml to split the Voxtral CUDA pipeline into three jobs: export-voxtral-cuda-artifact (exports and stores model artifacts), benchmark-voxtral-cuda (benchmarks using the exported artifacts), and test-voxtral-cuda-e2e (runs a full end-to-end test with artifact download and audio input). Improved artifact handling and reproducibility, and added explicit checks for required files.

Multimodal Runner Logic:

  • Added automatic conversion of audio tensors to bfloat16 in MultimodalPrefiller::prefill and implemented a helper function convert_to_bfloat16 in util.h to support this. This ensures that audio inputs match the dtype the encoder expects, improving robustness for multimodal inference; a rough sketch of the conversion idea follows below.
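
As a minimal standalone sketch of the conversion idea: round-to-nearest-even truncation of a float32 bit pattern to its top 16 bits. The actual convert_to_bfloat16 in util.h operates on ExecuTorch tensors; the names, buffer-based interface, and rounding mode here are illustrative assumptions, not the PR's implementation.

  #include <cstddef>
  #include <cstdint>
  #include <cstring>
  #include <vector>

  // Convert one float32 value to the bfloat16 bit pattern: keep the top 16
  // bits, rounding to nearest even on the dropped mantissa bits.
  static uint16_t float_to_bf16_bits(float f) {
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof(bits));
    uint32_t rounding_bias = 0x7FFF + ((bits >> 16) & 1);
    return static_cast<uint16_t>((bits + rounding_bias) >> 16);
  }

  // Convert a contiguous float32 buffer (e.g., audio samples) to bfloat16.
  std::vector<uint16_t> convert_buffer_to_bf16(const float* src, std::size_t n) {
    std::vector<uint16_t> out(n);
    for (std::size_t i = 0; i < n; ++i) {
      out[i] = float_to_bf16_bits(src[i]);
    }
    return out;
  }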

CUDA Backend and Caching Enhancements:

  • Improved caching logic in common_shims.cpp for tensor strides and sizes by validating cached values and updating them when they are stale. This prevents stale-cache issues and ensures correct tensor metadata; see the sketch after this list.
  • Added dynamic symbol re-registration in CudaBackend to handle multiple shared objects in the same process, ensuring correct execution when switching between models.
  • Removed redundant logging statements in the CUDA backend for cleaner output.
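
A simplified sketch of the validate-then-update pattern from the first bullet; the container name, tensor type, and accessors are illustrative stand-ins for the common_shims.cpp internals:

  #include <cstdint>
  #include <unordered_map>
  #include <vector>

  struct ToyTensor {
    std::vector<int64_t> strides;  // current strides; may change across runs
  };

  // Cache keyed by tensor pointer, mirroring the stride cache described above.
  static std::unordered_map<ToyTensor*, std::vector<int64_t>> strides_cache;

  const int64_t* get_strides_cached(ToyTensor* t) {
    auto it = strides_cache.find(t);
    // A cache hit alone is not enough: the same tensor pointer can be reused
    // with a different shape across executions, so validate the cached values
    // against the tensor's current metadata and refresh when they are stale.
    if (it == strides_cache.end() || it->second != t->strides) {
      it = strides_cache.insert_or_assign(t, t->strides).first;
    }
    return it->second.data();
  }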

Build System Updates:

  • Updated CMakeLists.txt and executorch-config.cmake to include and link the CUDA backend (aoti_cuda) when building Voxtral and other components, improving build flexibility and CUDA support.

Debugging and Tuning Options:

  • Added support for enabling debug compilation in cuda_backend.py via the DEBUG environment variable, allowing easier troubleshooting and development.

pytorch-bot (bot) commented Oct 10, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/14980

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

❌ 6 New Failures, 4 Pending

As of commit afc2159 with merge base 66c3dea:

NEW FAILURES - The following jobs have failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla bot added the CLA Signed label Oct 10, 2025
@larryliu0820 added the release notes: multimodal and release notes: desktop labels Oct 10, 2025
@larryliu0820 marked this pull request as ready for review October 10, 2025 04:44
@Gasoonjia (Contributor) left a comment

Thanks for your great work!
The size/stride change is pretty strange to me: I can't imagine a case where the tensor pointer stays the same while its size/stride changes.

@larryliu0820 (Contributor, Author)

@swolchok take another look?

@swolchok (Contributor) left a comment

Objections withdrawn. I think with some work you can further simplify the sizes()/strides() update logic; up to you how much of it you want to do right now.

@mergennachin (Contributor) left a comment

See inline

AOTITorchError aoti_torch_get_strides(Tensor* tensor, int64_t** ret_strides) {
  auto it = internal::tensor_to_strides.find(tensor);
  bool needs_update = false;

@mergennachin (Contributor)

Can you make the docblock something like this?

// CRITICAL: Multimodal models reuse tensors with different shapes across
// executions (e.g., variable-length audio). We MUST validate cached metadata
// matches current tensor state, or CUDA kernels will receive incorrect shapes
// leading to memory corruption and segfaults.

Comment on lines +168 to +175
// Need to re-register all the symbols from the so_handle hosted by this
// CudaBackend instance. The reason is that these symbols are
// static/singleton across the whole process. When we share multiple methods
// (meaning multiple so_handle) in the same process, we need to re-register
// the symbols from the so_handle that is being used in this execution.
ET_CHECK_OK_OR_RETURN_ERROR(
    register_shared_library_functions(handle->so_handle));

@mergennachin (Contributor)

If we're loading the model once and doing execute/inference multiple times, it will register multiple times, no?

Can you do something like this?

  // Persists across calls (static here; a member variable would also work).
  static void* last_registered_handle = nullptr;

  if (handle->so_handle != last_registered_handle) {
      ET_CHECK_OK_OR_RETURN_ERROR(
          register_shared_library_functions(handle->so_handle));
      last_registered_handle = handle->so_handle;
  }

@larryliu0820 (Contributor, Author)

So the so_handle won't change. It's just that we are mapping the symbols differently, especially AOTInductorModelContainerRun. Let's say we do the following (a toy sketch of the collision follows the list):

  1. load(token_embedding)
  2. load(audio_encoder)
  3. load(text_decoder)
  4. run(audio_encoder) <-- here AOTInductorModelContainerRun maps to the symbol in text_decoder.so, so we need to remap the symbol to audio_encoder.so
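
A toy illustration of the collision described above, with assumed names: a single process-wide function pointer is rebound to whichever shared object registered last, so running one model can call into another model's AOTInductorModelContainerRun unless the symbol is re-registered first (or kept per handle, as suggested below).

  #include <dlfcn.h>

  using RunFunc = int (*)();
  static RunFunc g_container_run = nullptr;  // process-wide singleton pointer

  // Rebinds the global pointer to this library's copy of the symbol; the
  // library registered last "wins" for every subsequent call.
  int register_from(void* so_handle) {
    g_container_run = reinterpret_cast<RunFunc>(
        dlsym(so_handle, "AOTInductorModelContainerRun"));
    return g_container_run != nullptr ? 0 : -1;
  }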

@mergennachin (Contributor) commented Oct 10, 2025

@larryliu0820

Can you store the AOTInductorModelContainerRunFunc inside AOTIDelegateHandle?

  struct AOTIDelegateHandle {
      void* so_handle;
      std::string so_path;
      AOTInductorModelContainerHandle container_handle;
      void* cuda_stream;

      AOTInductorModelContainerRunFunc run_func;
      // ... etc for all symbols
  };

  Result<DelegateHandle*> init(...) const override {
      AOTIDelegateHandle* handle = new AOTIDelegateHandle();
      handle->so_handle = so_handle;

      // Load symbols into THIS handle's struct (not global)
      handle->run_func = reinterpret_cast<AOTInductorModelContainerRunFunc>(
          dlsym(so_handle, "AOTInductorModelContainerRun"));
      // ... etc

      ET_CHECK_OR_RETURN_ERROR(
          handle->run_func != nullptr,
          AccessFailed,
          "Failed to load AOTInductorModelContainerRun");

      return (DelegateHandle*)handle;
  }

  Error execute(..., DelegateHandle* handle_, ...) const override {
      AOTIDelegateHandle* handle = (AOTIDelegateHandle*)handle_;

      // NO re-registration, use the handle's local symbols
      AOTIRuntimeError error = handle->run_func(
          ...)

      // ... rest of execution ...
  }

@mergennachin (Contributor) commented Oct 10, 2025

Also can you update the https://github.com/pytorch/executorch/blob/main/examples/models/voxtral/README.md to include additional CUDA instructions too?

@larryliu0820 requested a review from lucylq as a code owner October 10, 2025 20:26
@larryliu0820 merged commit 09eac16 into main Oct 11, 2025; 138 of 148 checks passed
@larryliu0820 deleted the voxtral_e2e branch October 11, 2025 02:01