ArgumentError: row indices I[k] must satisfy 1 <= I[k] <= m #130

Closed · Labels: bug
angeris opened this issue Feb 4, 2021 · 3 comments

@angeris commented Feb 4, 2021

I'm currently using COSMO v0.8.0 to solve some new chordally sparse SDPs arising from some computational physics bounds.

Here's a basic example reproducing the bug(?) in question:

using COSMO, SparseArrays

n = 10

m = COSMO.Model()

# Column i of A_all is the vectorization of E_i = e_i * e_i',
# so A_all * x is the column-stacked matrix Diag(x).
A_all = spzeros(n^2, n)

for i = 1:n
    E_i = spzeros(n, n)
    E_i[i, i] = 1
    A_all[:, i] .= reshape(E_i, :)
end

# Constrain A_all * x + 0 to lie in the (square) PSD cone.
cons = COSMO.Constraint(A_all, spzeros(n^2), COSMO.PsdCone)

COSMO.assemble!(m, spzeros(n, n), spzeros(n), cons)
result = COSMO.optimize!(m)

which yields the following error:

julia> include("mwe.jl")
ERROR: LoadError: ArgumentError: row indices I[k] must satisfy 1 <= I[k] <= m
Stacktrace:
 [1] sparse!(::Array{Int64,1}, ::Array{Int64,1}, ::Array{Float64,1}, ::Int64, ::Int64, ::typeof(+), ::Array{Int64,1}, ::Array{Int64,1}, ::Array{Int64,1}, ::Array{Float64,1}, ::Array{Int64,1}, ::Array{Int64,1}, ::Array{Float64,1}) at /Users/julia/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.5/SparseArrays/src/sparsematrix.jl:775
 [2] allocate_sparse_matrix(::Array{Int64,1}, ::Array{Int64,1}, ::Array{Float64,1}, ::Int64, ::Int64) at /Users/guille/.julia/packages/COSMO/83lVi/src/transformations.jl:211
 [3] augment_clique_based!(::COSMO.Workspace{Float64}) at /Users/guille/.julia/packages/COSMO/83lVi/src/transformations.jl:192
 [4] chordal_decomposition!(::COSMO.Workspace{Float64}) at /Users/guille/.julia/packages/COSMO/83lVi/src/chordal_decomposition.jl:24
 [5] macro expansion at ./timing.jl:233 [inlined]
 [6] optimize!(::COSMO.Workspace{Float64}) at /Users/guille/.julia/packages/COSMO/83lVi/src/solver.jl:90
 [7] top-level scope at /Users/guille/Documents/code/efficient-power-dual/mwe.jl:18
 [8] include(::String) at ./client.jl:457
 [9] top-level scope at REPL[49]:1
in expression starting at /Users/guille/Documents/code/efficient-power-dual/mwe.jl:18

It is very possible I'm doing something quite silly here. Usually I would go the JuMP -> MOI route, but the SDPs I'm working with have rather unwieldy dimensions, even though they are relatively sparse, so building them in JuMP takes a very long time: calling optimize! spends a long time passing the model through the MOI bridge down to COSMO.jl. I can also attempt to provide a minimal working example of that, but it would be a separate issue :)

Thank you so much for all of your work, COSMO team! :)

EDIT: To be a little more specific, in this case the problem should look something like

minimize     0
subject to   diag(x) ≥ 0.

Is the problem construction correct? I think it might be, but if so I'm not quite sure how to deal with this error.

angeris added the bug label on Feb 4, 2021
@migarstka (Member) commented Feb 4, 2021

I suspect this is because you are using the square PSD constraint COSMO.PsdCone rather than the upper-triangular COSMO.PsdConeTriangle (plus the appropriate scaling of the off-diagonals). JuMP/MOI always transforms PSD constraints into upper-triangular form when used with COSMO.
The reason I dropped support for chordal decomposition + PsdCone is that, for people who care about performance, it doesn't make much sense to carry along twice the amount of data.
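
Concretely, a minimal sketch of the MWE rewritten against the triangular cone might look as follows. The column-wise upper-triangle (svec) indexing and the sqrt(2) off-diagonal convention are assumptions about COSMO's storage layout rather than something taken from the docs; this particular constraint only touches diagonal entries, so no scaling is actually applied:

# Hedged sketch: the MWE using COSMO.PsdConeTriangle instead of COSMO.PsdCone.
# Assumes the triangular cone stores the upper triangle column-wise (svec
# order) with off-diagonal entries scaled by sqrt(2).
using COSMO, SparseArrays

n = 10
d = n * (n + 1) ÷ 2            # length of the stacked upper triangle

A_tri = spzeros(d, n)
for i = 1:n
    k = i * (i + 1) ÷ 2        # column-wise upper-triangle index of entry (i, i)
    A_tri[k, i] = 1.0          # diagonal entry, so no sqrt(2) factor
end

cons = COSMO.Constraint(A_tri, spzeros(d), COSMO.PsdConeTriangle)

model = COSMO.Model()
COSMO.assemble!(model, spzeros(n, n), spzeros(n), cons)
result = COSMO.optimize!(model)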

That being said, I still consider this a bug: we should either remove support for the square PSD constraint or do an internal transformation to upper-triangular form.

If passing the model from JuMP to COSMO is slow, please open an issue and I can take a look at it / profile it.

@angeris (Author) commented Feb 4, 2021

> I suspect this is because you are using the square PSD constraint COSMO.PsdCone rather than the upper-triangular COSMO.PsdConeTriangle (plus the appropriate scaling of the off-diagonals). JuMP/MOI always transforms PSD constraints into upper-triangular form when used with COSMO.
> The reason I dropped support for chordal decomposition + PsdCone is that, for people who care about performance, it doesn't make much sense to carry along twice the amount of data.

Got it, thanks. I was following the example code from the documentation. Agreed re: upper-triangular, it is generally more efficient (though in my particular problem, the runtime hit from constructing the full matrix rather than just the upper-triangular part is negligible).

> That being said, I still consider this a bug: we should either remove support for the square PSD constraint or do an internal transformation to upper-triangular form.

This would be great if possible. The scaling is generally canonical, but is sometimes implementation-dependent, so it would be nice to have a slightly more generic interface :)
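
For concreteness, the scaling I have in mind is the standard svec convention; assuming this matches COSMO's, off-diagonals are multiplied by sqrt(2) so that dot(svec(X), svec(Y)) == tr(X * Y):

# Hedged sketch of the canonical svec scaling: stack the upper triangle
# column-wise, scaling off-diagonal entries by sqrt(2) so that inner
# products of symmetric matrices are preserved.
function svec(X::AbstractMatrix{<:Real})
    n = size(X, 1)
    v = zeros(n * (n + 1) ÷ 2)
    k = 1
    for j = 1:n, i = 1:j
        v[k] = (i == j) ? X[i, j] : sqrt(2) * X[i, j]
        k += 1
    end
    return v
end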

> If passing the model from JuMP to COSMO is slow, please open an issue and I can take a look at it / profile it.

Great! Will come up with a minimal working example and link it.

@angeris (Author) commented Feb 8, 2021

OK, from the unit tests I see that there is a function extract_upper_triangle! defined in convexset.jl; it seems to do the right thing in place, which is neat.
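
A rough, hypothetical sketch of what such an in-place extraction plausibly does (the name and signature below are illustrative only, not the actual code in convexset.jl):

# Hypothetical illustration, not COSMO's implementation: extract the
# scaled upper triangle of a column-stacked n×n matrix x into a
# preallocated vector v. The real extract_upper_triangle! may differ
# in name, signature, and details.
function extract_upper_triangle_demo!(v::AbstractVector, x::AbstractVector, n::Int)
    k = 1
    for j = 1:n, i = 1:j
        xij = x[(j - 1) * n + i]               # entry (i, j) of the stacked matrix
        v[k] = (i == j) ? xij : sqrt(2) * xij  # scale off-diagonals by sqrt(2)
        k += 1
    end
    return v
end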

I will try this and maybe do a PR if I manage to figure it all out :)
