
[Feature][QDP] F32 support for angle/basis encoders, fidelity metrics and pipeline improvements (#1275)

Merged
ryankert01 merged 8 commits into apache:main from rich7420:pipeline-improvement on May 11, 2026

Conversation

@rich7420 (Contributor) commented Apr 19, 2026

Related Issues

Changes

  • Bug fix
  • New feature
  • Refactoring
  • Documentation
  • Test
  • CI/CD pipeline
  • Other

Why

Angle and basis encoding always dispatched to F64 CUDA kernels, even though F32
is sufficient for typical ML workloads. The hard-coded prefetch depth of 16
was also unsafe at high qubit counts (20-qubit amplitude encoding = 512 MB/batch
× 16 = 8 GB of buffered data). Finally, the PipelineIterator shutdown sequence
had no ordered teardown, risking thread leaks on drop.
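The arithmetic behind the 8 GB figure can be checked directly. This is an illustrative sketch, not code from the repo; only the numbers (20 qubits, batch 64, f64, depth 16) come from the description above:

```rust
// Buffered-memory math behind the old hard-coded prefetch depth.
// Names here are illustrative; only the numbers come from the PR text.
fn batch_bytes(num_qubits: u32, batch_size: usize, bytes_per_elem: usize) -> usize {
    let sample_len = 1usize << num_qubits; // 2^n amplitudes per input vector
    batch_size * sample_len * bytes_per_elem
}

fn main() {
    let per_batch = batch_bytes(20, 64, 8); // 20 qubits, batch 64, f64
    let buffered = per_batch * 16;          // old fixed prefetch depth
    // 512 MiB per batch, 8 GiB of buffered host data
    println!("{} MiB/batch, {} GiB buffered", per_batch >> 20, buffered >> 30);
}
```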

How

F32 batch kernels for angle and basis

  • Added angle_encode_batch_kernel_f32 (grid-stride) and
    launch_angle_encode_batch_f32 in angle.cu
  • Added basis_encode_kernel_f32, basis_encode_batch_kernel_f32, and their
    launchers in basis.cu
  • Extended Encoding::supports_f32() to return true for
    Amplitude | Angle | Basis

F32 zero-copy CUDA tensor paths

  • New encode_{,angle_,basis_}{,batch_}from_gpu_ptr_f32{,_with_stream} methods
    on QdpEngine cover the six (encoding, ndim) combinations
    (Amplitude | Angle | Basis × 1D | 2D)
  • _encode_from_cuda_tensor (qdp-python/src/engine.rs) was refactored from
    six separately-wrapped branches into a single match (encoding, ndim) table,
    hoisting the duplicated validate_shape + get_torch_cuda_stream_ptr calls
    shared by both the f32 and f64 paths (-45 LoC, behavior unchanged, clippy clean)
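The collapsed dispatch can be pictured as a single match over (encoding, ndim). A standalone sketch under assumptions: the `Encoding` enum and `dispatch` function are simplified stand-ins for the real engine.rs code, though the returned method names follow the `encode_{,angle_,basis_}{,batch_}from_gpu_ptr_f32` expansion listed above:

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum Encoding {
    Amplitude,
    Angle,
    Basis,
}

/// Pick the zero-copy entry point for a CUDA tensor.
/// Returns a method name for illustration rather than calling into QdpEngine.
fn dispatch(encoding: Encoding, ndim: usize, f32_input: bool) -> Option<&'static str> {
    use Encoding::*;
    if !f32_input {
        return Some("f64 path (unchanged)");
    }
    match (encoding, ndim) {
        (Amplitude, 1) => Some("encode_from_gpu_ptr_f32"),
        (Amplitude, 2) => Some("encode_batch_from_gpu_ptr_f32"),
        (Angle, 1) => Some("encode_angle_from_gpu_ptr_f32"),
        (Angle, 2) => Some("encode_angle_batch_from_gpu_ptr_f32"),
        (Basis, 1) => Some("encode_basis_from_gpu_ptr_f32"),
        (Basis, 2) => Some("encode_basis_batch_from_gpu_ptr_f32"),
        _ => None, // validate_shape has already rejected other ndims
    }
}

fn main() {
    assert_eq!(
        dispatch(Encoding::Angle, 2, true),
        Some("encode_angle_batch_from_gpu_ptr_f32")
    );
}
```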

GPU input validation (new gpu/validation.rs + kernels/validation.cu)

  • Basis f32 zero-copy needs to reject inputs that violate the basis-index
    contract before they touch the state-vector write. Float32 inputs can be
    NaN/Inf, negative, non-integer, or ≥ 2^num_qubits — any of which
    would silently corrupt the encoded state if not caught
  • check_basis_indices_kernel_{f32,f64} uses an atomic error-flag bitmask
    (NON_FINITE | NEGATIVE | NON_INTEGER | OUT_OF_RANGE) so a single device
    pass catches all four failure modes
  • assert_all_finite_f32 is reused by the angle f32 path; integration test
    test_angle_batch_f32_rejects_nan / _rejects_infinity verifies behavior
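A CPU reference for what the device kernel checks in one pass. Hedged sketch: the four category names come from the PR, but the concrete bit values and the function name are assumptions for illustration:

```rust
// Error-flag bits mirroring check_basis_indices_kernel_f32.
// The exact bit values here are assumptions for this sketch.
const NON_FINITE: u32 = 1;
const NEGATIVE: u32 = 2;
const NON_INTEGER: u32 = 4;
const OUT_OF_RANGE: u32 = 8;

/// Accumulate all violations in a single pass, like the atomic-OR bitmask
/// used on device: one traversal catches all four failure modes at once.
fn check_basis_indices_f32(values: &[f32], num_qubits: u32) -> u32 {
    let limit = (1u64 << num_qubits) as f32; // indices must be < 2^num_qubits
    let mut flags = 0u32;
    for &v in values {
        if !v.is_finite() {
            flags |= NON_FINITE;
        } else if v < 0.0 {
            flags |= NEGATIVE;
        } else if v.fract() != 0.0 {
            flags |= NON_INTEGER;
        } else if v >= limit {
            flags |= OUT_OF_RANGE;
        }
    }
    flags // 0 means every index satisfies the basis contract
}

fn main() {
    assert_eq!(check_basis_indices_f32(&[3.0, 7.0], 4), 0);
    // One offender per category: all four bits set in a single pass.
    assert_eq!(check_basis_indices_f32(&[f32::NAN, -1.0, 2.5, 16.0], 4), 15);
}
```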

Benchmark results

Original (different host), 16 qubits, batch_size = 64, 200 batches:

Encoding    main (F64)      this PR (F32)    Speedup
amplitude   1,170 vec/s     1,184 vec/s      ~1×
angle       7,466 vec/s     29,790 vec/s     ~4.0×
basis       59,883 vec/s    314,031 vec/s    ~5.2×

Re-verified locally post-rebase (RTX 2080 Ti, 16 qubits, batch_size = 64,
200 batches, prefetch = 16):

Encoding    this PR (F32) — sustained pipeline
amplitude   13,583 vec/s
angle       177,391 vec/s
basis       622,483 vec/s

Amplitude tracks the pre-rebase baseline (≤2% delta on this hardware);
angle and basis show the expected 4–5× F32 speedup relative to F64.

Auto prefetch depth

  • compute_optimal_prefetch_depth(num_qubits, batch_size, encoding, dtype)
    targets a 256 MB CPU buffer, clamped to [1, 32]; checked arithmetic
    guards overflow at very large qubit counts. Default prefetch_depth is
    now 0 (resolved in normalize() at iterator start)
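The heuristic can be sketched as follows. The constants (256 MB target, clamp to [1, 32]) and the per-encoding sample lengths come from the PR; the `Enc` enum and function name are simplified stand-ins for the real `Encoding`/`Precision` types:

```rust
#[derive(Clone, Copy)]
enum Enc {
    Amplitude,
    Angle,
    Basis,
}

/// Pick a prefetch depth that keeps roughly 256 MB of CPU-side buffers,
/// clamped to [1, 32]. Sketch of compute_optimal_prefetch_depth.
fn optimal_prefetch_depth(num_qubits: u32, batch_size: usize, enc: Enc, use_f32: bool) -> usize {
    const TARGET_BYTES: usize = 256 * 1024 * 1024;
    const MIN_DEPTH: usize = 1;
    const MAX_DEPTH: usize = 32;

    let sample_len = match enc {
        Enc::Angle => num_qubits as usize,       // one rotation angle per qubit
        Enc::Basis => 1,                         // a single basis index per sample
        Enc::Amplitude => 1usize << num_qubits,  // 2^n amplitudes per vector
    };
    let bytes_per_elem: usize = if use_f32 { 4 } else { 8 };
    // Checked arithmetic guards overflow at very large qubit counts.
    let bytes_per_batch = batch_size
        .checked_mul(sample_len)
        .and_then(|n| n.checked_mul(bytes_per_elem))
        .unwrap_or(usize::MAX);
    if bytes_per_batch == 0 {
        return MAX_DEPTH;
    }
    (TARGET_BYTES / bytes_per_batch).clamp(MIN_DEPTH, MAX_DEPTH)
}

fn main() {
    // 16-qubit amplitude f32, batch 64: 16 MiB/batch -> depth 16
    assert_eq!(optimal_prefetch_depth(16, 64, Enc::Amplitude, true), 16);
    // 20-qubit amplitude f64, batch 64: 512 MiB/batch -> clamped to 1
    assert_eq!(optimal_prefetch_depth(20, 64, Enc::Amplitude, false), 1);
    // Tiny basis batches hit the upper clamp of 32
    assert_eq!(optimal_prefetch_depth(16, 64, Enc::Basis, true), 32);
}
```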

PipelineIterator cleanup

  • rx wrapped in Mutex<Receiver> (required for PyO3 #[pyclass] Sync bound)
  • recycle_tx and producer_handle changed to Option<_> for owned teardown
  • Drop follows correct shutdown order: drop sender → drain channel → join thread
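The three-step shutdown order can be sketched with plain std channels. This is a minimal, self-contained model under assumptions — the real iterator carries GPU batches, a recycle channel, and PyO3 wrappers; here the field shapes and the producer loop are illustrative:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

/// Simplified stand-in for PipelineIterator.
struct PipelineIter {
    rx: Mutex<mpsc::Receiver<u64>>,       // Mutex satisfies a Sync bound
    recycle_tx: Option<mpsc::Sender<()>>, // Option<_> allows owned teardown
    producer_handle: Option<thread::JoinHandle<()>>,
}

impl Drop for PipelineIter {
    fn drop(&mut self) {
        // 1. Drop the sender so the producer observes a disconnect.
        drop(self.recycle_tx.take());
        // 2. Drain remaining items so a blocked send() unblocks.
        while self.rx.lock().unwrap().try_recv().is_ok() {}
        // 3. Join the producer thread: no leak on drop.
        if let Some(h) = self.producer_handle.take() {
            let _ = h.join();
        }
    }
}

fn spawn_pipeline(exited: Arc<AtomicBool>) -> PipelineIter {
    let (data_tx, data_rx) = mpsc::sync_channel(2); // bounded, like prefetch
    let (recycle_tx, recycle_rx) = mpsc::channel::<()>();
    let handle = thread::spawn(move || {
        let mut i = 0u64;
        loop {
            // Stop when the consumer drops its recycle sender...
            if recycle_rx.recv().is_err() {
                break;
            }
            // ...or when the data channel is gone.
            if data_tx.send(i).is_err() {
                break;
            }
            i += 1;
        }
        exited.store(true, Ordering::SeqCst);
    });
    PipelineIter {
        rx: Mutex::new(data_rx),
        recycle_tx: Some(recycle_tx),
        producer_handle: Some(handle),
    }
}

/// Drop the iterator mid-stream and report whether the producer exited.
fn producer_exits_cleanly() -> bool {
    let exited = Arc::new(AtomicBool::new(false));
    let it = spawn_pipeline(exited.clone());
    it.recycle_tx.as_ref().unwrap().send(()).unwrap(); // request one batch
    let _ = it.rx.lock().unwrap().recv();              // consume it
    drop(it); // ordered teardown runs here
    exited.load(Ordering::SeqCst)
}

fn main() {
    assert!(producer_exits_cleanly());
}
```

Because `drop` joins the handle before returning, the producer's exit flag is guaranteed to be observable right after the drop.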

Fidelity metrics (gpu/metrics.rs)

  • CPU-side |⟨ψ|φ⟩|² fidelity and trace distance, including cross-precision
    (F32 vs F64) helpers
  • Module + re-exports are #[doc(hidden)] and gated as test-only — not part
    of the supported runtime API. Integration tests tests/gpu_fidelity.rs
    verify ≥ 0.9999996 fidelity at 8 / 12 / 16 / 20 qubits

Python / benchmark

  • backend("auto") falls back to PyTorch with a RuntimeWarning
  • as_torch_dataset() wraps the loader as a torch.utils.data.IterableDataset
  • --warmup N flag added to benchmark_throughput.py / benchmark_latency.py
    (default 5; same as the previously-hardcoded WARMUP_BATCHES)
  • Fixed misleading profile label: GPU::H2D_Indices_f32 → GPU::H2D_BasisIndices
  • Restored _validate_loader_args() call on the synthetic path

CI / docs

  • python-testing.yml adds a fast rust-check gate (with and without the
    QDP_NO_CUDA stub-only build) so duplicate stub definitions or cfg
    mismatches fail in ~30 s instead of during the slower maturin build
  • DEVELOPMENT.md documents the pre-push QDP_NO_CUDA=1 cargo build
    sanity check

Test results (local, RTX 2080 Ti)

  • Rust unit + integration: 284 / 284 passed (CPU)
  • GPU-required tests: 93 / 93 passed
    • tests/gpu_fidelity.rs: 17 (cross-precision F32 vs F64 @ 8/12/16/20 q)
    • tests/gpu_ptr_encoding.rs: 64 (f32 zero-copy single + batch, all encodings)
    • tests/gpu_angle_encoding.rs: 12 (angle f32 batch + async pipeline + validation rejects)
  • Python pytest (qdp/qdp-python/tests): 7 / 7 passed (22 ROCm skips on CI without AMD GPU)
  • Clippy / fmt / license-header: clean
  • Baseline benchmark: run_pipeline_baseline.py reports 13,583 vec/s median
    (within hardware noise of the original 14,301 baseline; previous host)

Known scope limitations (intentional, follow-up tracked)

This PR delivers F32 on the hot paths — CUDA-tensor zero-copy and the
synthetic / file / streaming loader. The following entry points still go
through the F64 path; they are not regressions, they remain at parity with
upstream main before this PR:

  • engine.encode(np.array(..., dtype=np.float32)) — numpy extraction only
    accepts PyReadonlyArray<f64> (engine.rs:143, 162)
  • engine.encode(cpu_tensor.to(torch.float32)) — same numpy view path
  • engine.encode("data.parquet") with a float32 column — DataReader::read_batch
    returns Vec<f64> (this is the pre-existing reader.rs:106 structural
    debt, now annotated in-source as a comment)
  • loader.backend("pytorch") reference fallback — always uses torch.float64
  • Three loader factories hardcode Dtype::Float32; there is no user-facing
    precision argument yet (e.g. for forcing F64 to debug)

Resolving the above is the proper scope of follow-up work on generic
readers + InputAdapter + a user-facing .dtype() builder method, not
of this PR.

Migration / compatibility

  • Encoding::supports_f32() widened from Amplitude only to
    Amplitude | Angle | Basis. Callers gating on this method will now route
    angle / basis batches through the new f32 path. If a downstream caller
    was depending on the previous F64-only behavior for angle / basis batch
    pipelines, it must explicitly request Dtype::Float64 in PipelineConfig
  • PipelineConfig::default() now uses prefetch_depth = 0 (auto). Callers
    reading the field directly should call config.normalize() before use
  • compute_optimal_prefetch_depth signature changed:
    (num_qubits, batch_size, &str, bool) → (num_qubits, batch_size, Encoding, Precision).
    This was a pub(crate) helper, but the change is flagged here for visibility

Checklist

  • Added or updated unit tests for all changes
  • Added or updated documentation for all changes (DEVELOPMENT.md,
    qumat_qdp/README.md, in-source structural-debt notes)

Copilot AI left a comment

Pull request overview

This PR improves QDP GPU encoding and pipeline ergonomics by adding float32 (F32) support to angle/basis encoders, introducing CPU-side fidelity/trace-distance metrics for validation, and making the prefetch/pipeline shutdown behavior safer and more configurable across Rust and Python entrypoints.

Changes:

  • Add CUDA F32 batch kernels + Rust encoder plumbing for angle and basis encodings, and expand F32 support gating.
  • Introduce auto-computed prefetch depth (targeting ~256MB CPU buffer) and rework PipelineIterator teardown logic.
  • Add GPU metrics helpers (fidelity/trace distance + GPU readback) and Python usability improvements (backend “auto” fallback, torch dataset wrapper, benchmark warmup flag).

Reviewed changes

Copilot reviewed 13 out of 13 changed files in this pull request and generated 6 comments.

File Description
testing/qdp_python/test_fallback.py Updates invalid-backend test expectations for the loader backend selection.
qdp/qdp-python/qumat_qdp/loader.py Adds 'auto' backend option with warning-based fallback + earlier file/extension validation + as_torch_dataset().
qdp/qdp-python/benchmark/benchmark_throughput.py Adds --warmup flag and forwards warmup batches to benchmark runner.
qdp/qdp-python/benchmark/benchmark_latency.py Adds --warmup flag and forwards warmup batches to benchmark runner.
qdp/qdp-kernels/src/lib.rs Extends kernel FFI surface with new basis/angle F32 entrypoints and batch angle launcher.
qdp/qdp-kernels/src/basis.cu Implements basis encode single/batch F32 CUDA kernels and launchers.
qdp/qdp-kernels/src/angle.cu Implements angle batch F32 CUDA kernel and launcher.
qdp/qdp-core/tests/gpu_fidelity.rs Adds unit + GPU cross-precision fidelity validation tests.
qdp/qdp-core/src/pipeline_runner.rs Adds auto prefetch computation, expands F32 encoding support, and reworks iterator teardown/storage.
qdp/qdp-core/src/gpu/mod.rs Exposes new metrics module and re-exports fidelity/trace distance helpers.
qdp/qdp-core/src/gpu/metrics.rs Adds fidelity/trace-distance implementations plus Linux-only GPU download helpers.
qdp/qdp-core/src/gpu/encodings/basis.rs Adds encode_batch_f32 for basis encoding and wires it to the new F32 kernel.
qdp/qdp-core/src/gpu/encodings/angle.rs Adds encode_batch_f32 for angle encoding and wires it to the new F32 kernel.


Comment on lines +33 to +68
    /// Compute the state fidelity |⟨ψ|φ⟩|² between two complex state vectors
    /// given as interleaved (re, im) f64 slices of equal length.
    ///
    /// Both slices must have length `2 * state_dim` (re0, im0, re1, im1, ...).
    /// Returns a value in [0, 1]. Fidelity == 1 means identical states (up to
    /// global phase).
    pub fn fidelity_f64(state_a: &[f64], state_b: &[f64]) -> Result<f64> {
        if state_a.len() != state_b.len() {
            return Err(MahoutError::InvalidInput(format!(
                "fidelity: length mismatch ({} vs {})",
                state_a.len(),
                state_b.len()
            )));
        }
        if !state_a.len().is_multiple_of(2) {
            return Err(MahoutError::InvalidInput(
                "fidelity: length must be even (interleaved re/im pairs)".to_string(),
            ));
        }

        // ⟨ψ|φ⟩ = Σ_i conj(a_i) * b_i
        let mut re_acc = 0.0_f64;
        let mut im_acc = 0.0_f64;
        for i in (0..state_a.len()).step_by(2) {
            let a_re = state_a[i];
            let a_im = state_a[i + 1];
            let b_re = state_b[i];
            let b_im = state_b[i + 1];
            // conj(a) * b = (a_re - i*a_im)(b_re + i*b_im)
            //             = (a_re*b_re + a_im*b_im) + i*(a_re*b_im - a_im*b_re)
            re_acc += a_re * b_re + a_im * b_im;
            im_acc += a_re * b_im - a_im * b_re;
        }

        Ok(re_acc * re_acc + im_acc * im_acc)
    }
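A quick sanity check of `fidelity_f64` with known states. The accumulation loop is restated below in a self-contained form (error handling omitted) so the expected values can be verified independently of the crate:

```rust
/// |⟨ψ|φ⟩|² over interleaved (re, im) f64 slices; assumes equal, even lengths.
/// Condensed restatement of fidelity_f64 for demonstration only.
fn fidelity(a: &[f64], b: &[f64]) -> f64 {
    let (mut re, mut im) = (0.0_f64, 0.0_f64);
    for i in (0..a.len()).step_by(2) {
        re += a[i] * b[i] + a[i + 1] * b[i + 1];
        im += a[i] * b[i + 1] - a[i + 1] * b[i];
    }
    re * re + im * im
}

fn main() {
    let zero = [1.0, 0.0, 0.0, 0.0]; // |0⟩ in a 2-dim state space
    let one = [0.0, 0.0, 1.0, 0.0];  // |1⟩
    let h = std::f64::consts::FRAC_1_SQRT_2;
    let plus = [h, 0.0, h, 0.0];     // (|0⟩ + |1⟩)/√2

    assert!((fidelity(&zero, &zero) - 1.0).abs() < 1e-15); // identical states
    assert!(fidelity(&zero, &one).abs() < 1e-15);          // orthogonal states
    assert!((fidelity(&plus, &zero) - 0.5).abs() < 1e-15); // equal superposition
}
```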
Comment thread qdp/qdp-python/qumat_qdp/loader.py Outdated
Comment on lines +247 to +250
``'auto'`` (default-like): tries the Rust backend first and silently
falls back to the PyTorch reference backend if the Rust extension is
unavailable. ``'rust'`` raises if the extension is missing.
``'pytorch'`` always uses the pure-PyTorch path.
Comment thread qdp/qdp-core/src/pipeline_runner.rs Outdated
Comment on lines +142 to +153
    let sample_len = match encoding_method.to_lowercase().as_str() {
        "angle" => num_qubits,
        "basis" => 1,
        _ => 1usize << num_qubits, // amplitude / iqp
    };
    let bytes_per_element = if float32 { 4usize } else { 8usize };
    let bytes_per_batch = batch_size * sample_len * bytes_per_element;

    if bytes_per_batch == 0 {
        return MAX_DEPTH;
    }
    (TARGET_BYTES / bytes_per_batch).clamp(MIN_DEPTH, MAX_DEPTH)
Comment thread qdp/qdp-core/src/pipeline_runner.rs Outdated
Comment on lines +433 to +434
    // Drain any remaining items so the producer's send() unblocks.
    while self.rx.lock().unwrap().try_recv().is_ok() {}
@ryankert01 (Member) left a comment

I read this PR at a high level; looks good so far

@400Ping (Member) left a comment

Overall LGTM

@viiccwen (Contributor)

F32 support for angle encoding duplicates #1268

@rich7420 (Contributor, author)

@viiccwen yes! I think #1268 should be merged first. Then I'll rebase this PR

Copilot AI left a comment

Pull request overview

This PR extends QDP’s GPU encoding pipeline to support float32 (F32) for angle and basis encoders (including batched CUDA tensor paths), adds CPU-side fidelity/trace-distance metrics for validation, and improves pipeline robustness via auto prefetch sizing and safer iterator teardown.

Changes:

  • Added/extended F32 CUDA support for angle/basis encodings (including zero-copy batched paths) and updated Python/Rust dispatch + validation.
  • Introduced GPU validation helpers (finite checks, basis-index validate+cast) and CPU-side fidelity/trace-distance utilities with new GPU precision comparison tests.
  • Improved pipeline ergonomics: auto-computed prefetch depth, safer PipelineIterator drop order, Python loader “auto” backend fallback + iterable dataset wrapper, and benchmark warmup flag.

Reviewed changes

Copilot reviewed 27 out of 27 changed files in this pull request and generated 2 comments.

File Description
testing/qdp_python/test_fallback.py Updates backend validation test for new backend options.
testing/qdp_python/test_dlpack_validation.py Adjusts CUDA F32 angle validation to accept 2D batch path.
testing/qdp/test_bindings.py Updates bindings tests to assert F32 CUDA angle batch support.
qdp/qdp-python/src/pytorch.rs Updates CUDA tensor validation rules for angle/basis F32 acceptance.
qdp/qdp-python/src/engine.rs Centralizes CUDA tensor dispatch to route F32 angle/basis to correct zero-copy paths.
qdp/qdp-python/src/dlpack.rs Marks DLPack helper as currently unused but preserved for planned refactor.
qdp/qdp-python/qumat_qdp/loader.py Adds backend('auto') fallback with warnings, earlier file/streaming validation, and as_torch_dataset().
qdp/qdp-python/benchmark/benchmark_throughput.py Adds --warmup support and passes it to benchmark runner.
qdp/qdp-python/benchmark/benchmark_latency.py Adds --warmup support and passes it to benchmark runner.
qdp/qdp-kernels/src/validation.cu Adds f64 finite-check and basis-index validation/cast kernels + launchers.
qdp/qdp-kernels/src/lib.rs Extends FFI surface for new kernels and adds no-CUDA stubs.
qdp/qdp-kernels/src/basis.cu Adds F32 basis kernels + launchers (single + batch).
qdp/qdp-core/tests/gpu_ptr_encoding.rs Adds tests for basis F32 GPU-pointer encode APIs (incl. rejection cases).
qdp/qdp-core/tests/gpu_fidelity.rs Adds fidelity/trace-distance unit tests + GPU F32 vs F64 comparisons.
qdp/qdp-core/tests/gpu_angle_encoding.rs Adds large-batch async pipeline test for F32 angle batch encoding.
qdp/qdp-core/src/reader.rs Notes structural limitation: file readers still materialize f64 before f32 casting.
qdp/qdp-core/src/pipeline_runner.rs Implements auto prefetch depth, extends F32 encoding support set, and hardens iterator teardown.
qdp/qdp-core/src/lib.rs Documents that F32 GPU-pointer APIs don’t dispatch by encoding method; adds basis F32 GPU-pointer APIs.
qdp/qdp-core/src/gpu/validation.rs Adds reusable GPU-side validation helpers for finite checks and basis indices.
qdp/qdp-core/src/gpu/pipeline.rs Generalizes async pipeline to typed buffers and updates copy API semantics.
qdp/qdp-core/src/gpu/mod.rs Exposes new metrics and validation modules/exports.
qdp/qdp-core/src/gpu/metrics.rs Adds fidelity/trace-distance implementations and GPU download helpers.
qdp/qdp-core/src/gpu/memory.rs Makes pinned host buffers generic over element type (e.g., f32/f64).
qdp/qdp-core/src/gpu/encodings/mod.rs Adds notes about dispatcher allocations and future refactor direction.
qdp/qdp-core/src/gpu/encodings/basis.rs Adds basis index bounds checks and implements F32 basis GPU-pointer path via validate+cast kernel.
qdp/qdp-core/src/gpu/encodings/angle.rs Adds stricter input validation, finite checks, and F32 async pipeline path for large batches.
qdp/qdp-core/src/gpu/buffer_pool.rs Generalizes pinned buffer pool/handle over element type (f32/f64).


Comment on lines 98 to 116
    /// Async H2D copy on the copy stream.
    ///
    /// # Safety
    /// `src` must be valid for `len_bytes` bytes and properly aligned.
    /// `dst` must point to device memory for `len_bytes` bytes on the same device.
    /// Both pointers must remain valid until the copy completes on `stream_copy`.
    pub unsafe fn async_copy_to_device(
        &self,
        src: *const c_void,
        dst: *mut c_void,
        len_bytes: usize,
    ) -> Result<()> {
        crate::profile_scope!("GPU::H2D_Copy");
        unsafe {
            let ret = cudaMemcpyAsync(
                dst,
                src,
                len_bytes,
                CUDA_MEMCPY_HOST_TO_DEVICE,
@@ -546,6 +636,42 @@ pub extern "C" fn launch_check_finite_batch_f32(
999
}
@rich7420 (Contributor, author)

@viiccwen , @400Ping , @ryankert01 and @guan404ming do you want to take another look?

@ryankert01 (Member)

@rich7420 The main branch has diverged a lot because of a previous change. Could you help verify whether this change still works?

@rich7420 (Contributor, author)

@ryankert01 no problem

rich7420 added 8 commits May 11, 2026 13:37
Cleanup pass on review feedback for the f32 angle/basis PR:

- gpu/mod.rs: mark `metrics` module and its re-exports `#[doc(hidden)]`
  to signal that fidelity / trace-distance helpers are test-only and
  not part of the supported runtime API.
- loader.py: lift inline `import os/sys/warnings` to module scope; add
  named constants for backend literals (`_BACKEND_RUST/PYTORCH/AUTO`)
  and supported file extensions (`_STREAMING_FILE_EXTS`,
  `_SUPPORTED_FILE_EXTS`); extract `_path_extension` and
  `_platform_hint` helpers to remove duplicated string parsing and
  platform-message construction; cache the IterableDataset subclass
  at module scope via `_build_torch_dataset` so `as_torch_dataset()`
  no longer redefines the class on every call.
…ture

The tests were still passing arguments using the pre-apache#1276 string/bool signature.
Update them to the post-rebase (Encoding, Precision) signature.
Three small cleanups in _encode_from_cuda_tensor:

- Hoist validate_shape(ndim, ...) to the top so the redundant per-branch
  ndim error arms (one in the f32 path, one in the f64 path with the
  same message) both collapse into a single unreachable! guard.
- Hoist the duplicate get_torch_cuda_stream_ptr(data)? call shared by
  both paths.
- Merge the six-arm f32 match-of-tuples into a single match block with
  shared num_samples/sample_size/input_len bindings, dropping ~45 lines
  of repeated unsafe { ... }.map_err(...)? scaffolding.

Net: -45 LoC. No behavior change; same error messages; clippy clean.
@rich7420 rich7420 force-pushed the pipeline-improvement branch from e80c1ce to 7a93c65 Compare May 11, 2026 14:21

rich7420 commented May 11, 2026

@ryankert01 I've updated the pr description

@ryankert01 ryankert01 merged commit ffae662 into apache:main May 11, 2026
5 checks passed
@rich7420 (Contributor, author)

@ryankert01 thanks!
