
Add OptimizationFrameworks/clnlbeam.jmd #836

Merged — 1 commit merged into SciML:master from od/clnlbeam on Jan 22, 2024

Conversation

@odow (Contributor) commented on Jan 21, 2024:

This is a model similar to the OPF example, except that it is much simpler and has no type stability issues. This makes it a simpler candidate for benchmarking during development.

It is taken from the JuMP docs: https://jump.dev/JuMP.jl/stable/tutorials/nonlinear/simple_examples/#The-clnlbeam-problem
As a goal, we're looking to solve the N = 1000 case, although the jmd file only goes up to N = 200.
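For reference, the formulation from the linked JuMP tutorial looks roughly like this (a sketch based on the docs page, not the exact contents of the new jmd file):

```julia
using JuMP, Ipopt

N = 1000   # target size; the jmd file currently stops at N = 200
h = 1 / N
alpha = 350

model = Model(Ipopt.Optimizer)
@variable(model, -1 <= t[1:(N+1)] <= 1)       # control angle
@variable(model, -0.05 <= x[1:(N+1)] <= 0.05) # beam displacement
@variable(model, u[1:(N+1)])                  # design variable
@objective(
    model,
    Min,
    sum(
        0.5 * h * (u[i+1]^2 + u[i]^2) +
        0.5 * alpha * h * (cos(t[i+1]) + cos(t[i])) for i in 1:N
    ),
)
@constraint(model, [i = 1:N], x[i+1] - x[i] - 0.5 * h * (sin(t[i+1]) + sin(t[i])) == 0)
@constraint(model, [i = 1:N], t[i+1] - t[i] - 0.5 * h * (u[i+1] + u[i]) == 0)
optimize!(model)
```

The benchmark ports this same model across the optimization frameworks being compared.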

Am I meant to update the Manifest.toml? How does that work?

I get a graph like this, but I was doing stuff on my machine while it executed (and it's really hot here, so the fan was going full bore), so results may differ. At the very least, it's a good candidate model for testing the structural simplification of MTK, and it teases apart different differentiation implementations.

[benchmark plot image]

Checklist

  • Appropriate tests were added
  • Any code changes were done in a way that does not break public API
  • All documentation related to code changes was updated
  • The new code follows the
    contributor guidelines, in particular the SciML Style Guide and
    COLPRAC.
  • Any new documentation only uses public API

@odow (Contributor, Author) commented on Jan 22, 2024:

That seemed to work. Here's the CI plot:

@ChrisRackauckas (Member) commented:

Awesome, thanks! While you've got this running, can you also share a quick StatProfilerHTML report for the AutoSparseReverseDiff(true) case at one of the bigger sizes, to see if it confirms that the overhead is dominated by sparsity detection?
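The requested profile can be produced with Julia's built-in sampling profiler plus StatProfilerHTML.jl. A minimal sketch, assuming a hypothetical `build_and_solve(N)` helper that stands in for the benchmark's AutoSparseReverseDiff(true) setup-and-solve:

```julia
using Profile, StatProfilerHTML

# `build_and_solve` is a hypothetical helper standing in for the
# benchmark's AutoSparseReverseDiff(true) setup-and-solve at size N.
build_and_solve(200)           # warm-up run, so compilation is not profiled
Profile.clear()
@profile build_and_solve(200)  # sample the second, compiled run
statprofilehtml()              # writes an interactive report under statprof/
```

Opening the generated statprof/index.html gives the flamegraph-style breakdown of where time is spent.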

@ChrisRackauckas merged commit f6c4583 into SciML:master on Jan 22, 2024 (1 of 2 checks passed).
@odow deleted the od/clnlbeam branch on January 22, 2024 at 08:54.
@odow (Contributor, Author) commented on Jan 22, 2024:

Seems like almost all the time is spent in hessian_sparsity, so we're on the right track:

[flamegraph image]
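The function dominating the flamegraph is (assuming the symbolic pipeline here) Symbolics.jl's hessian_sparsity, which computes the structural sparsity pattern of the Hessian. A tiny standalone illustration of the call:

```julia
using Symbolics

@variables y[1:4]
ys = collect(y)
# A separable objective like the clnlbeam one, so the Hessian
# has structural nonzeros only on the diagonal.
obj = sum(0.5 * (ys[i+1]^2 + ys[i]^2) for i in 1:3)
pattern = Symbolics.hessian_sparsity(obj, ys)  # sparse Bool matrix of structural nonzeros
```

At N = 1000 the expression is much larger, which is why this step dominates the runtime in the profile above.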

@ChrisRackauckas (Member) commented:

Awesome, yup that's the benchmark we've been looking for.

@odow (Contributor, Author) commented on Jan 22, 2024 via email.
