Enhancement: Maximum confidence state distinguishability #34

Open
vprusso opened this issue Jan 28, 2021 · 31 comments
Labels: enhancement (New feature or request), feature request, good first issue (Good for newcomers)

Comments

@vprusso
Owner

vprusso commented Jan 28, 2021

Presently, the state_distinguishability function in state_distinguishability.py supports minimum-error and unambiguous quantum state distinguishability via the arguments dist_method="min-error" and dist_method="unambiguous", respectively.

This task should enhance the state_distinguishability function with the ability to compute maximum-confidence discrimination. Refer to Section 2.5 of arXiv:1707.02571, specifically the SDP below equation (31) in that section.

The formulation of this SDP is similar to that of the other distinguishability methods, which should serve as examples of how to implement this feature.
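
For reference, the two-state minimum-error case has a closed form (the Helstrom bound): the optimal success probability is $\frac{1}{2}\left(1 + \|p_0\rho_0 - p_1\rho_1\|_1\right)$. A minimal numpy sketch of that bound (the helper name `helstrom_success` is hypothetical, not part of the library):

```python
import numpy as np

def helstrom_success(rho0, rho1, p0=0.5, p1=0.5):
    """Optimal success probability for minimum-error discrimination of two states."""
    delta = p0 * rho0 - p1 * rho1
    # Trace norm of a Hermitian matrix = sum of absolute eigenvalues.
    trace_norm = np.abs(np.linalg.eigvalsh(delta)).sum()
    return 0.5 * (1.0 + trace_norm)

# Orthogonal states |0><0| and |1><1| are perfectly distinguishable.
rho0 = np.array([[1.0, 0.0], [0.0, 0.0]])
rho1 = np.array([[0.0, 0.0], [0.0, 1.0]])
print(helstrom_success(rho0, rho1))  # 1.0
```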

@vprusso vprusso added enhancement New feature or request good first issue Good for newcomers labels Jan 28, 2021
@harshvardhan-pandey
Contributor

@vprusso can I work on this?

@vprusso
Owner Author

vprusso commented Mar 17, 2025

That would be great; go right ahead, @harshvardhan-pandey!

@harshvardhan-pandey
Contributor

@vprusso I have a small question regarding the already implemented strategies. Is it always guaranteed that the optimal measurement operators are Hermitian? From what I understand, the measurement just needs to be a POVM.

@vprusso
Owner Author

vprusso commented Mar 18, 2025

Correct, the optimal measurements are POVMs (i.e., positive semidefinite operators that sum to the identity). But PSD operators are by definition Hermitian, so if the operators that form the POVM are PSD (which they need to be), they are, by definition, also Hermitian.
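
As a quick numeric illustration (a hand-picked two-outcome qubit POVM, purely illustrative): each element is PSD, hence Hermitian, and the elements sum to the identity.

```python
import numpy as np

# A simple two-outcome POVM on a qubit.
M0 = np.array([[0.75, 0.25], [0.25, 0.25]])
M1 = np.eye(2) - M0

for M in (M0, M1):
    # PSD check: Hermitian with nonnegative eigenvalues.
    assert np.allclose(M, M.conj().T)            # PSD operators are Hermitian
    assert np.linalg.eigvalsh(M).min() >= -1e-12 # nonnegative spectrum

assert np.allclose(M0 + M1, np.eye(2))           # completeness: elements sum to I
print("valid POVM")
```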

Let me know if that clears things up, @harshvardhan-pandey !

@harshvardhan-pandey
Contributor

Thanks! That clears it up. One question regarding the SDP:

[Image: the SDP for maximum-confidence discrimination from the paper]

This has the simultaneous maximization of N functions p(rho_k|M_k) for k = 1, ..., N. It is possible that each of these optimization problems has a different solution, right?

@vprusso
Owner Author

vprusso commented Mar 18, 2025

To be clear, this is one maximization that operates over a choice of $N$ quantum states. The result of this optimization problem should have only one solution (namely, the maximization over all such $p(\rho_k|M_k)$ terms). Does that make sense?

@harshvardhan-pandey
Contributor

I see. So it is essentially maximizing the maximum of p(rho_k|M_k)?

@vprusso
Owner Author

vprusso commented Mar 18, 2025

> I see. So it is essentially maximize the maximum of p(rho_k|M_k)?

Yes, this is what I take away from it, that's right.

@harshvardhan-pandey
Contributor

Interesting. That doesn't seem like a concave objective, though: even if p(rho_k|M_k) is a concave function for each k, their maximum is not guaranteed to be concave.
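
For intuition, here is a one-dimensional sketch of that point (hypothetical functions, purely illustrative): the pointwise maximum of two concave parabolas violates the midpoint concavity inequality.

```python
f = lambda x: -(x - 1) ** 2    # concave
g = lambda x: -(x + 1) ** 2    # concave
h = lambda x: max(f(x), g(x))  # pointwise maximum of the two

# Concavity would require h((a+b)/2) >= (h(a) + h(b)) / 2.
a, b = -1.0, 1.0
mid = h((a + b) / 2)           # h(0) = max(-1, -1) = -1
avg = (h(a) + h(b)) / 2        # h(-1) = h(1) = 0, so avg = 0
print(mid, avg)                # -1.0 < 0.0: concavity fails
```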

@vprusso
Owner Author

vprusso commented Mar 18, 2025

> Interesting. That doesn't seem like a concave objective though. Because even if for each k, p(rho_k|M_k) is a concave function then their maximum is not guaranteed to be.

Hmm, that is a good point. I would have to delve into their paper to get a handle on things. Maybe I am misinterpreting the optimization problem?

@harshvardhan-pandey
Contributor

Yeah, I haven't been able to look into the paper properly yet either. I'll do that.

@harshvardhan-pandey
Contributor

From what it looks like, based on Section 2.5.2, for a given k we can maximize p(rho_k|M_k) by tuning only M_k and then adjusting M_{N+1} at the end accordingly.

@vprusso
Owner Author

vprusso commented Mar 19, 2025

> From what it looks like based on section 2.5.2, for a given k, we can maximize p(rho_k|M_k) by just tuning M_k and then adjust M_{N+1} at end accordingly.

Ah, okay, interesting. Yeah, I think that sounds reasonable (at least from a quick glance and from your comment here). It might be good to see if the results from their paper using that approach could be replicated numerically as a sanity check.

@harshvardhan-pandey
Contributor

Hi @vprusso! Apologies for not getting back sooner, I was a little busy. I did try the solution mentioned in the paper. It works for the example they give (three equiprobable symmetric qubit states that lie on the same latitude of the Bloch sphere). However, the result involves rho^(-1), whereas rho isn't always invertible, for example when the vectors are [bell(0), bell(1), bell(2)]. I am not sure how this can be handled. I will look into it further.
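
That singularity is easy to confirm numerically (a sketch with the Bell vectors written out by hand, rather than taken from the library):

```python
import numpy as np

# First three Bell states, written out explicitly.
b0 = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
b1 = np.array([1, 0, 0, -1]) / np.sqrt(2)  # (|00> - |11>)/sqrt(2)
b2 = np.array([0, 1, 1, 0]) / np.sqrt(2)   # (|01> + |10>)/sqrt(2)

# Equiprobable mixture of the three projectors.
rho = sum(np.outer(b, b) for b in (b0, b1, b2)) / 3

# rho has rank 3 on a 4-dimensional space, so it is singular.
print(np.linalg.matrix_rank(rho))  # 3
print(np.linalg.det(rho))          # ~0
```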

@vprusso
Owner Author

vprusso commented Mar 23, 2025

@harshvardhan-pandey No worries! Hmm, I'm not entirely sure how that can be handled either, although I'm definitely interested in hearing about any progress or insights you come up with as you look into it! Happy to stay in the loop and try to help if I am able to. Thanks again for keeping me posted!

@harshvardhan-pandey
Contributor

I was considering that we could abandon the solution framework presented in the paper. In the end, we have N independent optimization problems for M_1, ..., M_N, and those can be converted to SDPs using the Charnes–Cooper transformation.

@vprusso
Owner Author

vprusso commented Mar 23, 2025

@harshvardhan-pandey That's true, and it might be worth going down that road. If you decide to put some cycles into that, feel free to share any of it here, and I'll do my best to provide guidance and input as I'm able!

@purva-thakre
Collaborator

purva-thakre commented Mar 24, 2025

> Charnes–Cooper transformation

@harshvardhan-pandey I do not know what this is. Can you briefly explain this?

@harshvardhan-pandey
Contributor

@purva-thakre It is just a simple rearrangement. For example, in this case we want to maximize trace(M_k * rho_k)/trace(M_k * rho). I can instead maximize trace(M_k * rho_k) with the constraint that trace(M_k * rho) = 1, because any positive semidefinite matrix that maximizes the original objective can be scaled so that it maximizes the new objective and satisfies the new constraint.
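
A quick numeric sanity check of that scaling argument (random PSD matrices standing in for the measurement operator and states; illustrative only): dividing M by trace(M rho) enforces the new constraint while the transformed objective equals the original ratio.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_psd(d):
    """A random positive semidefinite matrix: A A^dagger is always PSD."""
    A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    return A @ A.conj().T

d = 3
M = random_psd(d)
rho_k = random_psd(d)
rho = random_psd(d)
rho /= np.trace(rho).real  # normalize to a valid density matrix

ratio = np.trace(M @ rho_k).real / np.trace(M @ rho).real

# Charnes-Cooper: rescale M so the denominator becomes the constraint tr(M rho) = 1.
M_scaled = M / np.trace(M @ rho).real
print(np.trace(M_scaled @ rho).real)    # 1.0: the new constraint holds
print(np.trace(M_scaled @ rho_k).real)  # equals the original ratio
```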

@harshvardhan-pandey
Contributor

harshvardhan-pandey commented Mar 29, 2025

@vprusso I am stuck on a bug that I just can't seem to figure out. I am working on the example provided in the paper, where the optimal value of trace(M_k * rho_k)/trace(M_k * rho) is 2. For some reason, whatever I do, cvxopt says 1 is the optimal answer.

```python
n = len(vectors)
probs = np.array(probs)
density_matrices = np.array([to_density_matrix(vector) for vector in vectors])
rho = np.sum(probs[:, np.newaxis, np.newaxis] * density_matrices, axis=0)  # rho = sum of probs[i] * density_matrices[i]
unscaled_measurement_operators = []

for rho_k in density_matrices:
    problem = picos.Problem()
    M_k = picos.HermitianVariable("M_k", (dim, dim))
    problem.set_objective("max", picos.trace(M_k @ rho_k).real)
    problem.add_constraint(M_k >> 0)
    problem.add_constraint(picos.trace(M_k @ rho) == 1)
    solution = problem.solve(solver=solver, verbosity=True, **kwargs)
    print(solution.value)
    unscaled_measurement_operators.append(M_k.value)
unscaled_measurement_operators = np.array(unscaled_measurement_operators)

states_max_confidence = [
    # Symmetric states on the same latitude of the Bloch sphere
    ([np.cos(np.pi/6)*e_0 + np.sin(np.pi/6)*e_1,
      np.cos(np.pi/6)*e_0 + np.exp(2*np.pi*1j/3)*np.sin(np.pi/6)*e_1,
      np.cos(np.pi/6)*e_0 + np.exp(-2*np.pi*1j/3)*np.sin(np.pi/6)*e_1],
     [2/3, 2/3, 2/3]),
]
```

Is there anything obvious that I seem to be missing?

@vprusso
Owner Author

vprusso commented Mar 30, 2025

Hi @harshvardhan-pandey,

From what I can tell, each iteration of the loop is computing the optimal value for discriminating two states (rho and rho_k). Since these states are orthogonal, it should always be possible to distinguish them. Or am I maybe missing something?

@harshvardhan-pandey
Contributor

rho is the density matrix of the mixture. So if the states are rho_k, then rho = sum of p_k * rho_k.

@vprusso
Owner Author

vprusso commented Mar 30, 2025

@harshvardhan-pandey Yes, but these are still two states, one is the average state of all of the states in the ensemble, and the other state is just one of the states from the set. At the end of the day, each iteration in the loop is still computing the distinguishability between two states; rho and rho_k, right?

@harshvardhan-pandey
Contributor

harshvardhan-pandey commented Mar 30, 2025

@vprusso I mean, yes, but it is not doing optimal distinguishing. If we use the solution in the paper, this likelihood ratio comes out to be 2; however, this optimization setup computes 1. I was thinking maybe I have made some mistake in the convex optimization formulation. Should I open a draft PR so that you can see the entire thing?

@vprusso
Owner Author

vprusso commented Mar 30, 2025

@harshvardhan-pandey, yes, it would be helpful to see the whole thing, so please do open a draft PR!

@harshvardhan-pandey
Contributor

@vprusso I realized that even this kind of optimization setup requires rho to be invertible. So I am not sure what to do.

@vprusso
Owner Author

vprusso commented Apr 4, 2025

Hmm, okay. This might be a silly and obvious question, but do any of the other state discrimination methods require rho to be invertible?

@harshvardhan-pandey
Contributor

I'm not sure. I'll look into it and get back.

@vprusso
Owner Author

vprusso commented Apr 4, 2025

> I'm not sure. I'll look into it and get back.

Sounds good, and thank you!

@harshvardhan-pandey
Contributor

I think the rest are fine because no explicit division-by-zero issues exist. Also, I am applying for GSoC this year. Is it okay if I mention this issue even though it is not completely resolved by then? I have some ideas that may work, but I am a little under time constraints for the next few weeks.

@vprusso
Owner Author

vprusso commented Apr 5, 2025

> I think the rest are fine because no explicit division by 0 issues exist. Also, I am applying for GSOC this year. Is it ok if I mention this issue even though it is not completely resolved by then? I have some ideas that may work, but I am a little under time constraints for the next few weeks.

Yes, feel free to mention this issue in your application!

Development

No branches or pull requests

3 participants