Generating invalid expected distribution #81

Open

necaisej opened this issue May 20, 2022 · 0 comments
necaisej commented May 20, 2022

In this PR, it was found that the maxcut benchmark can produce expected distributions with norm = 0. At line 139 of get_expectation, there is this step:

# scale to number of shots
for k, v in counts.items():
    counts[k] = round(v * num_shots)

Correct me if I'm wrong, but this is used to compare the results against a discrete approximation to the theoretical distribution, possibly so that a result list isn't penalized for containing no counts for bitstrings with very small probability mass (where you'd expect 0 appearances at the given shot count), and also to peg an integer number of results for each bitstring as ideal. This means that, for the case you describe, the only way to run the problem instance that throws the error is to increase the number of shots until the discretized expected distribution has at least one nonzero element.

I worry this has its own issues, because a significant distortion between the original distribution and the discretized distribution distorts the actual fidelity calculation. You could conceivably be comparing the results to a discrete distribution with wacky finite-size effects that make it look very different from the distribution an ideal quantum computer is sampling from. This should only happen when the theoretical distribution is very wide and mostly, but not perfectly, flat (I think), but it's worth considering.
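To make the failure mode concrete, here is a minimal sketch of how the rounding step can zero out the entire expected distribution; the numbers are hypothetical, not taken from the benchmark:

# A wide, nearly flat theoretical distribution discretized at a low shot count
# (hypothetical sizes, chosen only to illustrate the rounding behavior).
num_shots = 100
num_bitstrings = 500

# each bitstring carries ~1/500 = 0.002 probability mass
counts = {format(i, "010b"): 1.0 / num_bitstrings for i in range(num_bitstrings)}

# the rounding step from get_expectation: round(0.002 * 100) == round(0.2) == 0
discretized = {k: round(v * num_shots) for k, v in counts.items()}

print(sum(discretized.values()))  # 0 -> the discretized expected distribution has norm 0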

Just wanted to share these thoughts... I don't think we use this kind of step in other benchmarks? It seems odd to calculate fidelity against discretized distributions for some benchmarks and against continuous exact distributions for others. Maybe we should add a check in maxcut_benchmark.py that something like this doesn't happen, or pass the exact distribution even if fidelity moderately underperforms at low shot counts?
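For reference, one possible shape of that check, a rough sketch only (discretize_expected and exact_dist are hypothetical names here, not the benchmark's API):

def discretize_expected(exact_dist, num_shots):
    """Scale the exact distribution to integer counts, but fall back to the
    exact (continuous) distribution if rounding wipes out all probability mass."""
    discretized = {k: round(v * num_shots) for k, v in exact_dist.items()}
    if sum(discretized.values()) == 0:
        # discretization destroyed the distribution; compare against the
        # exact probabilities instead of raising a norm=0 error
        return dict(exact_dist)
    return discretized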
