
Added q_sum check #77

Merged: 3 commits merged into SRI-International:master on May 21, 2022

Conversation

@japanavi (Contributor)

When running some benchmarks, if the expected distribution is too small, q_sum will be zero and cause a ZeroDivisionError when calculating hellinger_fidelity_with_expected.

I wasn't sure what the printed error message should say. Please let me know if you have any suggestions.
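For reference, a minimal sketch of the shape of such a check, written as a throwaway helper; the name, structure, and error wording here are illustrative assumptions, not the repository's actual code:

def safe_normalize(q):
    # Hypothetical helper: normalize an expected distribution, guarding against
    # an all-zero q_sum that would otherwise raise ZeroDivisionError.
    q_sum = sum(q.values())
    if q_sum == 0:
        print("ERROR: expected distribution is invalid, all counts equal to 0")
        return None
    return {key: val / q_sum for key, val in q.items()}

print(safe_normalize({'00': 0, '01': 0}))   # prints the error, returns None
print(safe_normalize({'00': 3, '01': 1}))   # {'00': 0.75, '01': 0.25}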

@necaisej commented May 19, 2022

The expected_dist that gets passed is generated in each benchmark, usually by different means.

Phase Estimation:

# Convert theta to a bitstring distribution
def theta_to_bitstring(theta, num_counting_qubits):
    counts = {format( int(theta * (2**num_counting_qubits)), "0"+str(num_counting_qubits)+"b"): 1.0}
    return counts
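For instance (a made-up input), theta = 0.25 with 3 counting qubits maps to a single bitstring with probability 1:

# int(0.25 * 2**3) == 2, and format(2, "03b") == '010'
print(theta_to_bitstring(0.25, 3))   # {'010': 1.0}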

QFT:

# Define expected distribution calculated from applying the iqft to the prepared secret_int state
def expected_dist(num_qubits, secret_int, counts):
    dist = {}
    s = num_qubits - secret_int
    for key in counts.keys():
        if key[(num_qubits-secret_int):] == ''.zfill(secret_int):
            dist[key] = 1/(2**s)
    return dist

The QPE distribution will never have q_sum=0, but if the QFT benchmark analyzes a counts distribution with no overlap with the correct distribution, the generated expected_dist will be empty and the error you mention will be thrown. This scheme is only used when the theoretical distribution has prohibitively large support (in this case, exponentially large support w.r.t. the parameter s).
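To illustrate with made-up numbers: if none of the measured keys end in the required run of zeros, expected_dist returns an empty dict and its normalization is zero:

# Hypothetical QFT result with no overlap with the correct distribution.
# For num_qubits=3 and secret_int=2, valid keys must end in '00'.
counts = {'001': 40, '011': 35, '111': 25}
dist = expected_dist(3, 2, counts)
print(dist)                   # {} -- no key ends in '00'
q_sum = sum(dist.values())    # 0, so dividing by q_sum raises ZeroDivisionError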

I think that always renormalizing by q_sum is actually just a mistake, given we use this scheme in certain benchmarks. The point of omitting probabilities from expected_dist for states that do not appear in the results distribution is that, when passed, the fidelity contributions for the missing states will implicitly be 0. If expected_dist is empty (norm = 0), that must mean there was no overlap between the expected distribution and the results, so the fidelity should simply be zero.

Unfortunately this was done to support both pre-normalized and "number of counts" expected_dist formats. I think the correct solution is checking whether the normalization is greater than one (i.e. the expected distribution is not using the scheme above and just needs to be normalized):

# Renormalize only when the expected distribution looks like raw counts
if q_sum > 1:
    q_normed = {}
    for key, val in q.items():
        q_normed[key] = val / q_sum
else:
    q_normed = q
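As a quick illustration of how that check behaves (hypothetical inputs, with the logic wrapped in a throwaway helper):

def normalize_if_counts(q):
    # Renormalize only when q looks like raw counts (sum > 1); leave
    # pre-normalized, possibly partial, distributions untouched.
    q_sum = sum(q.values())
    if q_sum > 1:
        return {key: val / q_sum for key, val in q.items()}
    return q

print(normalize_if_counts({'00': 60, '11': 40}))          # {'00': 0.6, '11': 0.4}
print(normalize_if_counts({'0000': 0.25, '0100': 0.25}))  # unchanged; missing states contribute 0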

@rtvuser1 (Collaborator)

Josh ran into this specific case:

In the maxcut benchmark, when num_shots is 100 and num_qubits is >= 10, the routine that computes the expected distribution produces an array of counts that are all zero ... 2**10 is 1024 possible measurements, and with only 100 shots each expected bar is just 100/1024, which is less than 0.5 and rounds down to 0. This results in the divide-by-zero error. This fix simply catches and reports that error, and permits the benchmark program to continue without crashing.
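A quick sketch of that arithmetic (assuming the expected counts are discretized with round(), as described above):

# With 100 shots over 2**10 = 1024 outcomes, each expected bar is ~0.098,
# which rounds down to 0, so every entry of the expected distribution is 0.
num_shots, num_qubits = 100, 10
num_states = 2 ** num_qubits              # 1024
expected_bar = num_shots / num_states     # 0.09765625
print(round(expected_bar))                # 0 -> q_sum == 0 -> divide by zero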
Josh, the error message should probably just be:

print("ERROR: polarization_fidelity(), expected distribution is invalid, all counts equal to 0")

Jason, your comments are relevant, but it is a different case than what Josh ran into. What you describe should be discussed in a separate thread.

@necaisej

Thanks for the clarification, Tom! This does make me somewhat rethink the discretization step, i.e. rounding to the number of expected results at a given shot count instead of keeping exact probabilities. Basically, should we ever allow this error to occur at all? I agree the error message looks great.
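To make that alternative concrete (a hypothetical comparison assuming a roughly uniform expected distribution, as in the maxcut case above): keeping exact probabilities never produces an all-zero distribution, while rounding to integer counts can:

num_shots, num_states = 100, 2 ** 10

# Exact probabilities: every state keeps a small nonzero weight, sum is 1.
exact = {format(i, '010b'): 1 / num_states for i in range(num_states)}

# Rounded expected counts: every entry rounds to 0 and the sum collapses to 0.
rounded = {key: round(val * num_shots) for key, val in exact.items()}

print(sum(exact.values()), sum(rounded.values()))   # 1.0 0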

@rtvuser1 merged commit c870d9d into SRI-International:master on May 21, 2022