Improve performance of QubitMapper.mode_based_mapping #644

Closed
mrossinek opened this issue Apr 21, 2022 · 1 comment
mrossinek commented Apr 21, 2022

What is the expected enhancement?

When looking towards larger systems and an increased number of auxiliary operators (several hundred to thousands), the performance of QubitMapper.mode_based_mapping becomes a serious bottleneck in the pre-processing step. Waiting for all auxiliary operators to be mapped can take on the order of minutes.

I believe performance can be significantly improved if we:

  • cache the processed pauli_list object within QubitMapper (this requires changing mode_based_mapping from a static method to an instance method)
  • refactor the iteration to deal with sparse labels rather than dense ones (this requires sparse labels to be the default, a conversion from dense to sparse, and updating the VibrationalOp to be in line with the FermionicOp again; if this gets tackled before said update, we can also handle the mapping of FermionicOps separately for the time being)
  • potentially do something similar to Improve TwoBodyElectronicIntegrals.to_second_q_op performance #638 by constructing a single SparsePauliOp from a list of terms rather than performing the full computation each time
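To illustrate the first bullet: a minimal sketch of memoizing the per-register-length Pauli table. This is not Qiskit's actual implementation; the table entries here are plain Pauli strings and the Jordan-Wigner layout is simplified for illustration only.

```python
from functools import lru_cache

# Hypothetical sketch: memoize the Pauli table keyed on register length, so
# that mapping hundreds of operators of the same size builds it only once.
@lru_cache(maxsize=None)
def pauli_table(register_length: int) -> tuple:
    # Jordan-Wigner-style table: for mode j, a (Z...ZX I...I, Z...ZY I...I) pair
    table = []
    for j in range(register_length):
        prefix = "Z" * j
        suffix = "I" * (register_length - j - 1)
        table.append((prefix + "X" + suffix, prefix + "Y" + suffix))
    return tuple(table)

# Repeated calls with the same length hit the cache instead of rebuilding:
assert pauli_table(4) is pauli_table(4)
```

Making mode_based_mapping an instance method would let the cache live on the mapper object instead of a module-level function like this one.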

I put together a script which benchmarks the performance of the mapping.

import time

from qiskit_nature.second_q.mappers import JordanWignerMapper
from qiskit_nature.second_q.operators import FermionicOp

timings = {}

with open("timings.csv", "w") as file:
    try:
        for size in range(10, 21, 5):
            timings[size] = {}
            for num in range(100, 1001, 100):
                print(f"Mapping {num} operators of length {size}", flush=True)
                start = time.time()
                mapper = JordanWignerMapper()
                # map `num` identical operators; the loop variable is unused,
                # and `i` inside the generator indexes the modes of the label
                for _ in range(num):
                    q_ops = mapper.map(FermionicOp(" ".join(f"N_{i}" for i in range(size)),
                                                   display_format="sparse"))
                stop = time.time()
                timings[size][num] = stop - start

                file.write(f"{size},{num},{stop - start}\n")
                file.flush()
    except KeyboardInterrupt:
        pass

EDIT: updated the script above to reflect the new code locations.
Below is the timing report (columns: size, number of operators, seconds). Right now, the runtime scales essentially 1-to-1 with the number of operators, but I believe we can significantly reduce the overhead.

10,100,1.338921308517456
10,200,2.6084437370300293
10,300,3.912541151046753
10,400,5.215270042419434
10,500,6.510892629623413
10,600,7.804781436920166
10,700,9.110262393951416
10,800,10.41125202178955
10,900,11.708135843276978
10,1000,13.017327785491943
15,100,10.335224628448486
15,200,20.665453672409058
15,300,30.995742797851562
15,400,41.318920612335205
15,500,51.67417097091675
15,600,61.98542332649231
15,700,72.29619765281677
15,800,82.63712310791016
15,900,92.9549400806427
15,1000,103.26946425437927
20,100,301.58223485946655
20,200,602.6008727550507
20,300,903.9176499843597
20,400,1205.038892030716
20,500,1506.27596616745
20,600,1807.5807330608368
20,700,2108.723217725754
20,800,2410.188409090042
20,900,2711.2628829479218
20,1000,3012.600988149643
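The 1-to-1 scaling claim can be checked directly from a few rows of the report above: for a fixed operator size, the time per mapped operator stays essentially constant.

```python
# Rows copied from the timing report above: (size, num_operators, seconds).
rows = [
    (10, 100, 1.338921308517456),
    (10, 1000, 13.017327785491943),
    (20, 100, 301.58223485946655),
    (20, 1000, 3012.600988149643),
]

# For each size, seconds / num_operators is nearly identical across runs,
# i.e. total time grows linearly with the number of operators.
for size, num, secs in rows:
    print(f"size={size}, num={num}: {secs / num * 1000:.2f} ms per operator")
```

This also shows the steep growth with operator size (roughly 13 ms per operator at size 10 versus roughly 3 s at size 20), which is where the per-operator overhead reduction would pay off most.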
mrossinek (Member, Author) commented:
The simple improvements (caching and sparse-operator iteration) have been implemented. Further improvements can be tracked in #771.
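The benefit of the sparse-operator iteration can be sketched as follows. This is not Qiskit code; the labels and helper functions below are illustrative only.

```python
# Sketch: iterating a sparse label touches only the modes the operator acts
# on, while a dense label forces a scan over every mode in the register.
register_length = 20

dense_label = "I" * 5 + "N" + "I" * 14   # one number operator, at mode 5
sparse_label = "N_5"                     # the same operator, sparse form

def modes_from_dense(label):
    # O(register_length) scan, regardless of how many modes act non-trivially
    return [i for i, c in enumerate(label) if c != "I"]

def modes_from_sparse(label):
    # O(k) for k acting modes: parse "<op>_<index>" tokens directly
    return [int(term.split("_")[1]) for term in label.split()]

assert modes_from_dense(dense_label) == modes_from_sparse(sparse_label) == [5]
```

For operators with few acting modes in a large register, the sparse path skips the bulk of the per-mode work, which is where the caching plus sparse iteration improvements came from.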
