
WIP: eliminate global cutest_lib #97

Closed

Conversation

timholy
Contributor

@timholy timholy commented Nov 10, 2016

I was writing code to test different models in a loop, and got the error "CUTEst: call cutest_finalize on current model first". I followed that advice, and encountered a nasty problem:

  • call CUTEst.cutest_finalize(nlp)
  • load a new problem and begin optimization
  • at some random point, Julia's gc runs and triggers the finalizer on the original problem. This sets the global variable cutest_lib to NULL right in the middle of an optimization. Boom.
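The race can be sketched in a few lines of Julia. This is a minimal model of the pattern with hypothetical names (`FakeModel`, `global_lib`, `load!`), not the actual CUTEst.jl code:

```julia
# Minimal sketch of the failure mode (hypothetical names, not CUTEst.jl itself).
# All models share one global library handle, and each model's finalizer
# resets that global. A finalizer for a *stale* model, run by gc at an
# arbitrary time, therefore nulls the handle the *current* model is using.

mutable struct FakeModel
    name::String
end

const global_lib = Ref{Ptr{Cvoid}}(C_NULL)   # stands in for the global cutest_lib

function load!(m::FakeModel)
    global_lib[] = Ptr{Cvoid}(1)             # pretend to dlopen the library
    finalizer(m) do _
        global_lib[] = C_NULL                # finalizer clobbers the shared global
    end
    return m
end

old = load!(FakeModel("old"))
cur = load!(FakeModel("current"))   # `cur` now relies on global_lib
finalize(old)                       # in the real bug, gc triggers this mid-optimization
global_lib[] == C_NULL              # true: the handle `cur` was using is gone
```

Here `finalize(old)` is called explicitly to make the clobbering deterministic; in the scenario above it happens at an arbitrary gc pause.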

I didn't think to just try finalize, which would have worked much better. Instead, I reworked this package so that

  • Each nlp object stores its own lib pointer
  • low-level calls pass this pointer to wrappers of the library routines

This eliminates the need to manually call finalize. As a side benefit, it also lets you have multiple problems loaded at once. It's a fairly big change, so I'll understand if you don't want it.
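A minimal sketch of that per-model design, again with hypothetical names (the actual PR threads the pointer through the generated wrappers of the library routines):

```julia
# Sketch of the per-model design (hypothetical names, not the actual PR code).
# Each model owns its own handle, and its finalizer closes only that handle,
# so finalizing (or gc'ing) one model cannot invalidate another.

mutable struct OwningModel
    name::String
    lib::Ref{Ptr{Cvoid}}     # per-model handle instead of a shared global
end

function load(name::String)
    m = OwningModel(name, Ref(Ptr{Cvoid}(1)))   # pretend to dlopen the library
    finalizer(m) do obj
        obj.lib[] = C_NULL                      # close only this model's handle
    end
    return m
end

# Low-level wrappers take the handle from the model rather than reading a global.
objective(m::OwningModel) = (m.lib[] == C_NULL && error("model finalized"); 0.0)

old = load("old")
cur = load("current")
finalize(old)             # gc can finalize `old` whenever it likes ...
objective(cur)            # ... and `cur` still works
```

This also shows why multiple problems can be loaded at once: each handle is independent.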

The WIP is because the documentation needs to be updated, too.

Each nlp stores its own lib pointer, and calls pass this pointer to wrappers of the library routines
Fixes some classes of bugs and removes the need to call finalize.
@coveralls

coveralls commented Nov 10, 2016


Coverage decreased (-0.3%) to 62.08% when pulling a6d7420 on timholy:pull-request/a6d74202 into fd7d708 on JuliaSmoothOptimizers:develop.

@dpo
Member

dpo commented Nov 10, 2016

Thanks @timholy! The initial version of this package used to do just that: store the lib pointer inside the NLP. We ran into troubles that I can't recall, and it was changed to a global variable. It would be great if we could solve the multiple problem conundrum. Perhaps @abelsiqueira remembers better than I do.

@gragusa

gragusa commented Nov 10, 2016

Wherever I go I see @timholy gets there first. I was also running CUTEst in a loop and ran into the same issue. I support this PR or any other change that allows this.

@timholy
Contributor Author

timholy commented Nov 10, 2016

If it turns out there is some problem with this strategy (e.g., if there's some library that stores global state for the most recently-loaded problem), then a simpler approach will be to change that error message to direct users to call finalize rather than cutest_finalize.

@abelsiqueira
Member

Hello @timholy. Thanks for this. I didn't realize gc could break it; thanks for catching that.
The second solution, directing users to finalize, was the intended use; I did make a mistake there.

I don't really remember if there was a main reason not to store cutest_lib inside nlp. One reason was the additional trouble of passing cutest_lib to the specialized interface. However, we now believe that the specialized interface will be mostly ignored in favor of the NLPModels interface.

On the other hand, the global cutest_instances is also used because of the "multiple problem conundrum" (#51). Could you test it?

@timholy
Contributor Author

timholy commented Nov 10, 2016

Closing due to #51 (comment)

@timholy timholy closed this Nov 10, 2016
@timholy
Contributor Author

timholy commented Nov 10, 2016

I'll leave the branch around just in case someone comes up with a workaround, but feel free to delete it whenever you feel like it.

5 participants