
Remove StrideOneVector alias #174

Merged
merged 22 commits into master from fp/stridedonevector on Jul 2, 2022
Conversation

@frapac frapac commented Jun 6, 2022

StrideOneVector led to type-inference issues in MadNLP. This PR replaces StrideOneVector with explicit typing in the affected structures (SparseKKTSystem and SparseUnreducedKKTSystem). This PR might improve the time to first solve in MadNLP.
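For context, a toy sketch of the inference pattern at stake (the `UnionVec`, `LooseKKT`, and `TightKKT` names are hypothetical, not MadNLP code): a struct field typed with a Union alias leaves the field type non-concrete, so accesses can require runtime checks, whereas parameterizing the struct on the vector type keeps the field fully concrete.

```julia
# Toy illustration; StrideOneVector in MadNLP may have been a similar Union alias.
const UnionVec{T} = Union{Vector{T}, SubArray{T, 1, Vector{T}, Tuple{UnitRange{Int}}, true}}

struct LooseKKT{T}
    pr_diag::UnionVec{T}     # Union field type: not concrete
end

struct TightKKT{T, VT <: AbstractVector{T}}
    pr_diag::VT              # concrete once the struct is parameterized
end

loose = LooseKKT{Float64}([1.0, 2.0])
tight = TightKKT{Float64, Vector{Float64}}([1.0, 2.0])

fieldtype(typeof(loose), :pr_diag)   # a Union type, non-concrete
fieldtype(typeof(tight), :pr_diag)   # Vector{Float64}, fully concrete
```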

- StrideOneVector led to type inference issues
@frapac frapac requested a review from sshin23 June 6, 2022 21:43
@@ -48,7 +48,7 @@ mutable struct Solver <: AbstractLinearSolver
logger::Logger
end

-ma27ad!(n::Cint,nz::Cint,I::StrideOneVector{Cint},J::StrideOneVector{Cint},
+ma27ad!(n::Cint,nz::Cint,I::AbstractVector{Cint},J::AbstractVector{Cint},
sshin23 (Member) commented:
I agree that StrideOneVector is not ideal, but the motivation for using it was to make sure that, when calling linear solvers, we don't accidentally pass AbstractVectors with non-unit memory stride (in particular, we want the stride to always be one element). My concern with this change is that we don't have anything preventing ma27ad! from accepting a pointer to a non-stride-one array. Of course, we're only using these internally within MadNLP, so it shouldn't be a problem if we write the code carefully in the future, but it raises a bit of concern for me. Any thoughts on this @frapac?

frapac (Collaborator, Author) replied:

To be honest, I have exactly the same concern as you. I have been thinking about it over the last few days, and I may have a potential workaround: how about adding a safety check at the beginning of the function?

@assert strides(I) == (1, )
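A sketch of how such a guard could look (`_check_strideone` is a hypothetical name, not an existing MadNLP function):

```julia
# Hypothetical guard: reject any vector whose memory layout is not
# contiguous (unit element stride) before passing its pointer to Fortran.
function _check_strideone(v::AbstractVector)
    @assert strides(v) == (1,) "linear solver inputs must have stride one"
    return v
end

x = collect(1.0:10.0)
_check_strideone(x)                # a Vector always has stride (1,)
_check_strideone(view(x, 2:6))     # a contiguous view also passes
# _check_strideone(view(x, 1:2:9)) # would throw: element stride is 2
```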

sshin23 commented Jul 1, 2022

@frapac can you take a look?

@frapac frapac left a comment

It overall looks good to me!
I think we might want to implement our own function on top of unsafe_wrap. For instance, something like:

function _madview(vec::Vector{T}, n::Int, shift::Int=0) where {T}
    # pointer(vec, i) points at the i-th element; passing an element offset
    # avoids the byte-arithmetic pitfall of pointer(vec) + shift.
    return unsafe_wrap(Vector{T}, pointer(vec, shift + 1), n)
end

function eval_f_wrapper(ips::InteriorPointSolver, x::Vector{Float64})
nlp = ips.nlp
cnt = ips.cnt
@trace(ips.logger,"Evaluating objective.")
-cnt.eval_function_time += @elapsed obj_val = (get_minimize(nlp) ? 1. : -1.) * obj(nlp,view(x,1:get_nvar(nlp)))
+cnt.eval_function_time += @elapsed obj_val = (get_minimize(nlp) ? 1. : -1.) * obj(nlp,unsafe_wrap(Vector{Float64},pointer(x),get_nvar(nlp)))
frapac (Collaborator, Author) commented:

To improve readability, I would move the unsafe_wrap to its own line before the function call, something like:

x_primal = unsafe_wrap(Vector{Float64},pointer(x),get_nvar(nlp))
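As a standalone sketch of that refactor (`nvar` and the `sum` call are stand-ins for `get_nvar(nlp)` and `obj(nlp, ...)`); note that unsafe_wrap aliases x's buffer, so x must be kept alive while the wrapper is in use:

```julia
x = collect(1.0:6.0)
nvar = 4                     # stand-in for get_nvar(nlp)
# Name the wrapped primal slice first, then use it in the objective call;
# GC.@preserve keeps x rooted while its raw pointer is in use.
obj_val = GC.@preserve x begin
    x_primal = unsafe_wrap(Vector{Float64}, pointer(x), nvar)
    sum(x_primal)            # stand-in for obj(nlp, x_primal)
end
# obj_val == 10.0
```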

-jac::StrideOneVector{T}
-pr_diag::StrideOneVector{T}
-du_diag::StrideOneVector{T}
+struct SparseKKTSystem{T, MT, VT} <: AbstractReducedKKTSystem{T, MT}
frapac (Collaborator, Author) commented:
Good catch

# Augmented system
-aug_raw::SparseMatrixCOO{T,Int32}
+aug_raw::SparseMatrixCOO{T,Int32,Vector{T}}
frapac (Collaborator, Author) commented:
Vector{T} or VT?
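The question is whether the COO value storage should be pinned to Vector{T} or follow the system's vector type VT. A hypothetical, simplified sketch of the fully parameterized form (`MyCOO` is illustrative; the real SparseMatrixCOO in MadNLP may differ):

```julia
# Hypothetical COO container: parameterizing the value storage on VT lets
# the same struct hold Vector{T} on CPU or a device array type on GPU.
struct MyCOO{T, I, VT <: AbstractVector{T}}
    m::Int
    n::Int
    is::Vector{I}   # row indices
    js::Vector{I}   # column indices
    vs::VT          # nonzero values in the system's vector type
end

A = MyCOO{Float64, Int32, Vector{Float64}}(2, 2, Int32[1, 2], Int32[1, 2], [1.0, 2.0])
```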

codecov-commenter commented Jul 1, 2022

Codecov Report

Merging #174 (d206889) into master (0974881) will increase coverage by 0.04%.
The diff coverage is 96.25%.

@@            Coverage Diff             @@
##           master     #174      +/-   ##
==========================================
+ Coverage   72.73%   72.78%   +0.04%     
==========================================
  Files          38       38              
  Lines        3418     3424       +6     
==========================================
+ Hits         2486     2492       +6     
  Misses        932      932              
Impacted Files Coverage Δ
lib/MadNLPGraph/src/schur.jl 0.00% <ø> (ø)
lib/MadNLPGraph/src/schwarz.jl 0.00% <ø> (ø)
lib/MadNLPHSL/src/MadNLPHSL.jl 80.00% <ø> (ø)
lib/MadNLPKrylov/src/MadNLPKrylov.jl 0.00% <0.00%> (ø)
lib/MadNLPPardiso/src/MadNLPPardiso.jl 100.00% <ø> (ø)
lib/MadNLPPardiso/src/pardiso.jl 0.00% <0.00%> (ø)
src/IPM/utils.jl 91.80% <ø> (ø)
src/KKT/dense.jl 84.21% <ø> (ø)
src/matrixtools.jl 80.88% <ø> (ø)
lib/MadNLPHSL/src/ma27.jl 85.93% <100.00%> (ø)
... and 16 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@sshin23 sshin23 merged commit 03d728e into master Jul 2, 2022
@sshin23 sshin23 deleted the fp/stridedonevector branch July 5, 2022 15:06