
[docs] add Transitioning from MATLAB tutorial #3698

Merged
merged 17 commits into from Mar 6, 2024

Conversation


codecov bot commented Mar 3, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 98.35%. Comparing base (ecaf80c) to head (319e9ff).
Report is 1 commit behind head on master.

Additional details and impacted files
@@           Coverage Diff           @@
##           master    #3698   +/-   ##
=======================================
  Coverage   98.35%   98.35%           
=======================================
  Files          43       43           
  Lines        5707     5707           
=======================================
  Hits         5613     5613           
  Misses         94       94           


@odow odow mentioned this pull request Mar 3, 2024
@araujoms
Contributor

araujoms commented Mar 3, 2024

I don't have write access anymore since this PR is now in your branch.

Contributor

@araujoms araujoms left a comment

I've added the commands querying the solution status to the JuMP, YALMIP, and CVX codes. Also, note that in MATLAB the main function needs to be the first one, so I changed back the order.

@odow
Member Author

odow commented Mar 3, 2024

I don't have write access anymore since this PR is now in your branch.

Yeah. I changed it here so that we can get a preview: https://jump.dev/JuMP.jl/previews/PR3698/tutorials/getting_started/transitioning_from_matlab/

@araujoms
Contributor

araujoms commented Mar 3, 2024

Maybe it is worth adding a paragraph explaining Symmetric and Hermitian?

@blegat
Member

blegat commented Mar 4, 2024

It's also worth mentioning the difference between YALMIP and JuMP regarding dualization. YALMIP always uses the image form / geometric conic form, while in JuMP it depends on the solver. See section 2.1 of the MOI paper, for instance. You can refer to https://jump.dev/JuMP.jl/stable/tutorials/conic/dualization/
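
For context, the pattern from that tutorial looks roughly like this (a sketch, assuming the Dualization.jl package and, hypothetically, SCS as the conic solver):

using JuMP, Dualization, SCS

# JuMP hands the problem to the solver as modeled; whether that matches the
# solver's native (standard or geometric) conic form depends on the solver:
model = Model(SCS.Optimizer)

# To hand the solver the dualized problem instead, wrap the optimizer with
# Dualization.jl's dual_optimizer:
dual_model = Model(dual_optimizer(SCS.Optimizer))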

@odow
Member Author

odow commented Mar 5, 2024

Maybe it is worth adding a paragraph explaining Symmetric and Hermitian?

I'd like to keep this tutorial pretty tightly scoped to focus on JuMP/YALMIP/CVX.

It is not the place for this tutorial to be a cheatsheet of every difference between Julia and MATLAB. Symmetric and Hermitian should be reasonably self-evident. If they aren't, users can always look up the Julia documentation.

@odow
Member Author

odow commented Mar 5, 2024

It's also worth mentioning the difference between YALMIP and JuMP regarding dualization.

Is this necessary for users transitioning from MATLAB? It seems in the weeds. I'd prefer we left it out and got feedback first.

@araujoms
Contributor

araujoms commented Mar 5, 2024

It is not the place for this tutorial to be a cheatsheet of every difference between Julia and MATLAB. Symmetric and Hermitian should be reasonably self-evident. If they aren't, users can always look up the Julia documentation.

The difference is that with YALMIP and CVX one needs to do a = 0.5*(a+a') over and over again to make sure the matrices are exactly Hermitian, whereas with JuMP this doesn't matter: you just have to wrap them in Hermitian.
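
A minimal sketch of the contrast, in plain Julia (the matrix a is hypothetical):

using LinearAlgebra

a = rand(ComplexF64, 3, 3)

# MATLAB-style pattern: re-symmetrize after every operation that may leave
# the matrix only approximately Hermitian:
a = 0.5 * (a + a')

# Julia pattern: wrap once. Hermitian reads only one triangle of the data,
# so the wrapped matrix is exactly Hermitian by construction.
H = Hermitian(a)
ishermitian(H)  # true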

@blegat
Member

blegat commented Mar 5, 2024

Is this necessary for users transitioning from MATLAB? It seems in the weeds. I'd prefer we left it out and got feedback first.

This is the n°1 comment people have when coming from YALMIP (see for instance https://discourse.julialang.org/t/yalmip-vs-jump/30776/10 and https://discourse.julialang.org/t/jump-sumofsquares-generate-sdps-with-18x-more-constraints-than-yalmip/65377/15). It can just be a small note referring to the relevant part of the docs. (Sorry I closed this; misclick.)

@blegat blegat closed this Mar 5, 2024
@blegat blegat reopened this Mar 5, 2024
@araujoms
Contributor

araujoms commented Mar 5, 2024

I also think such a remark would be worthwhile; it has a large impact on performance.

optimize!(model)
if is_solved_and_feasible(model)
    WT = dual(PPT)
    return value(λ), real(LinearAlgebra.dot(WT, rhoT))
Contributor

Why did you add real here? It's redundant; dot already returns a real number from Julia 1.11 onwards.

Member Author

Because the docs currently build with Julia 1.10 and we support Julia 1.6. Also, v1.11 isn't released yet.

Contributor

What's the problem? It doesn't cause any error with older versions of Julia; the output is just slightly less neat.

Member Author

dot currently returns a Complex{Float64} instead of a Float64. Your MATLAB examples return a Float64.

julia> import LinearAlgebra

julia> A = LinearAlgebra.Hermitian(rand(ComplexF64, (3, 3)))
3×3 LinearAlgebra.Hermitian{ComplexF64, Matrix{ComplexF64}}:
 0.468242+0.0im       0.296799+0.49972im   0.621408+0.148878im
 0.296799-0.49972im   0.413853+0.0im       0.677598+0.875476im
 0.621408-0.148878im  0.677598-0.875476im   0.59341+0.0im

julia> LinearAlgebra.dot(A, A)
4.6860981128420285 + 0.0im

julia> real(LinearAlgebra.dot(A, A))
4.6860981128420285

Contributor

I know. This is not an error, and will change anyway in Julia 1.11.

Member Author

I know

I know because I saw your PR 😄

My point is that I personally would argue that changing from ComplexF64 to Float64 is a breaking change, and the fact it will change in Julia v1.11 is not compelling because we still have many JuMP users on a range of old versions. I want something that works across all versions, not just some upcoming one. We cannot assume that new users from MATLAB will use a recent version of Julia. See, for example, a discourse user on Julia v1.5.

Contributor

I really don't understand the problem. It works. It's just slightly uglier. And moreover it's just example code in a tutorial. Nothing depends on whether its return value is Float64 or ComplexF64.

Member Author

It doesn't work. One returns a ComplexF64, and the other returns a Float64.

Contributor

Why is this a problem?


# The same applies for Hermitian matrices, using `LinearAlgebra.Hermitian` and
# `LinearAlgebra.ishermitian`.

Contributor

What I wanted to point out here is that with MATLAB you have to be paranoid about making your matrices exactly Hermitian, in order to not get subtle errors, whereas with JuMP you just need to wrap them and you're good to go.

But now I've tested it, and it turns out that it's not true. If you try to constrain a non-Hermitian matrix to be in HermitianPSDCone(), you get an error message, as I expected, but if you try to constrain a non-symmetric matrix to be in PSDCone(), JuMP will let you (and it fails afterwards, of course). Is that intentional?
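
For instance, a hypothetical sketch of the two cases:

using JuMP

model = Model()
@variable(model, t)
A = [1 t; 2t 1]  # not symmetric/Hermitian: the off-diagonal entries differ

# Constraining it to HermitianPSDCone() throws an error:
# @constraint(model, A in HermitianPSDCone())

# Constraining it to PSDCone() is accepted at model-build time:
@constraint(model, A in PSDCone())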

Member Author

For non-symmetric matrices, JuMP adds equality constraints for the off-diagonal elements. It lets you write

using JuMP

model = Model()
@variable(model, X[1:2, 1:2])
@constraint(model, X in PSDCone())

We don't have the same for Hermitian because there is no HermitianPositiveSemidefiniteConeSquare (I'm sure we've discussed this, but I can't find the link. Edit: found it: #3369 (comment)).

Contributor

Why? Is there any case where you would need that? As far as I can see this can either generate less efficient code or give you an error.

Member Author

@odow odow Mar 6, 2024

Regardless of the need, it exists in JuMP 1.0 so we are not changing it.

Someone could also write [x y; 1 z] in PSDCone() and we would add a constraint that y == 1. It may have been sensible to support only the triangular form of PSD, but we didn't do that, so it's no longer up for discussion.
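
As a sketch of that constant-entry case (with hypothetical scalar variables x, y, and z):

using JuMP

model = Model()
@variable(model, x)
@variable(model, y)
@variable(model, z)
# The square form matches off-diagonal pairs, so this also imposes y == 1:
@constraint(model, [x y; 1 z] in PSDCone())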

We added Hermitian support after 1.0, which is why it supports only the triangular form.

Contributor

I was just curious; I wasn't suggesting changing this, as it would clearly be a breaking change.

Although now that you've brought it up, I will definitely suggest removing it when the time comes for JuMP 2.0.

Member Author

I will definitely suggest removing it when the time comes for JuMP 2.0

My goal is not to release JuMP 2.0 if at all possible. If we did, it would be for a minimal set of breaking changes, not just to remove some things we don't like. The only candidate was the nonlinear changes, but we seem to have managed to sneak them in with minimal disruption. The JuMP developers have no plans for JuMP 2.0.

@odow
Member Author

odow commented Mar 6, 2024

Merging because this is good enough for now. If needed, we can make tweaks in smaller PRs, instead of hashing everything out here. (There are already 56 comments!)

@odow odow merged commit a39e241 into master Mar 6, 2024
11 checks passed
@odow odow deleted the od/matlab branch March 6, 2024 22:26