
Align Iris amplitude encoding benchmark between PennyLane and QDP#1088

Merged
ryankert01 merged 1 commit into apache:main from rich7420:iris-amplitude
Feb 24, 2026

Conversation

@rich7420
Contributor

Related Issues

Changes

  • Bug fix
  • New feature
  • Refactoring
  • Documentation
  • Test
  • CI/CD pipeline
  • Other

Why

Align the Iris amplitude encoding benchmark between the pure PennyLane baseline and the QDP pipeline so we can directly compare full training behavior with only the encoding step changed.

How

Updated pennylane_baseline/iris_amplitude.py and qdp_pipeline/iris_amplitude.py to share the same data loading, CLI options, and training loop, differing only in encoding (get_angles vs QDP QuantumDataLoader + StatePrep).
Added --data-file, optimizer/early-stop/trials flags, and consistent logging to both scripts to support reproducible comparisons.
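For readers comparing the two scripts, the shared preprocessing both encodings start from can be sketched roughly like this (a minimal sketch in plain NumPy; `to_amplitudes` is an illustrative name, not a helper from this PR): features are padded to a power-of-two length and L2-normalized so they can serve as state amplitudes.

```python
import numpy as np

def to_amplitudes(features, pad_value=0.0):
    """Pad a feature vector to the next power of two and L2-normalize it,
    so it is a valid amplitude vector for state preparation."""
    n = len(features)
    dim = 1 << (n - 1).bit_length()  # next power of two >= n
    padded = np.concatenate([np.asarray(features, dtype=float),
                             np.full(dim - n, pad_value)])
    return padded / np.linalg.norm(padded)

amps = to_amplitudes([0.8, 0.6, 0.1])  # 3 features -> 4 amplitudes (2 qubits)
print(len(amps), float(np.sum(amps * amps)))
```

From here the baseline converts the normalized vector to rotation angles (`get_angles`), while the QDP path loads it directly as amplitudes.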

Please try it and compare the two runs.

Checklist

  • Added or updated unit tests for all changes
  • Added or updated documentation for all changes

@rich7420
Copy link
Contributor Author

Results in my local environment:
QDP throughput is better.

uv run python benchmark/encoding_benchmarks/pennylane_baseline/iris_amplitude.py --data-file benchmark/encoding_benchmarks/pennylane_baseline/data/iris_classes1and2_scaled.txt --optimizer nesterov --lr 0.01 --layers 6 --trials 3 --iters 80 --early-stop 0 2>&1
Iris amplitude baseline (PennyLane) — 2-class variational classifier
  Data: official file (2 features): benchmark/encoding_benchmarks/pennylane_baseline/data/iris_classes1and2_scaled.txt → L2 norm → get_angles  (n=100; 2-class Iris = 100 samples)
  Iters: 80, batch_size: 5, layers: 6, lr: 0.01, optimizer: nesterov

  Trial 1:
    Compile:   0.0121 s
    Train:     1.4786 s
    Train acc: 1.0000  (n=75)
    Test acc:  1.0000  (n=25)
    Throughput: 270.5 samples/s

  Trial 2:
    Compile:   0.0101 s
    Train:     1.5911 s
    Train acc: 1.0000  (n=75)
    Test acc:  1.0000  (n=25)
    Throughput: 251.4 samples/s

  Trial 3:
    Compile:   0.0102 s
    Train:     1.8057 s
    Train acc: 1.0000  (n=75)
    Test acc:  1.0000  (n=25)
    Throughput: 221.5 samples/s

  Best test accuracy:  1.0000  (median: 1.0000, min: 1.0000, max: 1.0000)
  → Target ≥0.9 achieved.

uv run python benchmark/encoding_benchmarks/qdp_pipeline/iris_amplitude.py --data-file benchmark/encoding_benchmarks/pennylane_baseline/data/iris_classes1and2_scaled.txt --optimizer nesterov --lr 0.01 --layers 6 --trials 3 --iters 80 --early-stop 0 2>&1
Iris amplitude (QDP encoding) — 2-class variational classifier
  Data: official file (2 features): benchmark/encoding_benchmarks/pennylane_baseline/data/iris_classes1and2_scaled.txt → QDP amplitude  (n=100; 2-class Iris = 100 samples)
  Iters: 80, batch_size: 5, layers: 6, lr: 0.01, optimizer: nesterov

  Trial 1:
    QML device: cpu
    Compile:   0.0117 s
    Train:     1.2235 s
    Train acc: 0.9867  (n=75)
    Test acc:  1.0000  (n=25)
    Throughput: 326.9 samples/s

  Trial 2:
    QML device: cpu
    Compile:   0.0081 s
    Train:     1.2546 s
    Train acc: 1.0000  (n=75)
    Test acc:  1.0000  (n=25)
    Throughput: 318.8 samples/s

  Trial 3:
    QML device: cpu
    Compile:   0.0081 s
    Train:     1.3195 s
    Train acc: 0.9867  (n=75)
    Test acc:  1.0000  (n=25)
    Throughput: 303.1 samples/s

  Best test accuracy:  1.0000  (median: 1.0000, min: 1.0000, max: 1.0000)
  → Target ≥0.9 achieved.

@ryankert01 (Member) left a comment

Nice

import torch


NUM_QUBITS = 2
Member


Why did you set qubits = 2?

@ryankert01 (Member), Feb 24, 2026:

Oh, it might be that we want to keep the qubit count small for a clear speedup result.
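To make that reasoning concrete (my own illustration, not from the diff): a simulated statevector holds 2**n amplitudes, so cost doubles with every qubit; keeping NUM_QUBITS = 2 keeps per-sample state preparation tiny and the encoding-speed difference easy to read.

```python
# Statevector size doubles per qubit, so simulation cost grows quickly;
# a 2-qubit circuit keeps the benchmark focused on the encoding step itself.
for n in (2, 4, 8, 16):
    print(f"{n} qubits -> {2 ** n} amplitudes")
```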

@ryankert01 (Member) left a comment

LGTM. If we are going to add multiple datasets, we can abstract the common pieces as a follow-up (not necessary), plus a utility to store results in a CSV or something.
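A minimal sketch of what that results utility could look like (all names here are hypothetical, assuming each script appends one row per trial):

```python
import csv
from pathlib import Path

def append_result(path, row):
    """Append one benchmark row to a CSV, writing the header on first use."""
    fieldnames = ["script", "trial", "train_s", "train_acc", "test_acc", "throughput"]
    file = Path(path)
    write_header = not file.exists()
    with file.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        if write_header:
            writer.writeheader()
        writer.writerow(row)

# Example row taken from the QDP trial 1 output above.
append_result("results.csv", {
    "script": "qdp_pipeline/iris_amplitude.py", "trial": 1,
    "train_s": 1.2235, "train_acc": 0.9867, "test_acc": 1.0, "throughput": 326.9,
})
```

Both scripts could call this after each trial so baseline and QDP runs land in one comparable table.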

@ryankert01 ryankert01 merged commit d7cf2c3 into apache:main Feb 24, 2026
6 checks passed
