New, simpler Pareto front optimization #133
Conversation
This seems wrong. Locally, the model is recovered. The docs do not recover the model, and the tests recover a third one.
Interesting. What are the sources of randomness here? If you set a random seed, does it always give the same result, or is it something deeper?
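For reference, a minimal sketch of the reproducibility check being suggested here; `recover` is a hypothetical stand-in for the actual model fit, not code from this PR:

```julia
using Random

# Hypothetical stand-in for the model-recovery run being debugged.
recover() = sum(randn(100))

Random.seed!(1234)
a = recover()

Random.seed!(1234)
b = recover()

# With a fixed global RNG seed, a purely RNG-driven computation must
# reproduce bit-for-bit. If the real fit still differs under a fixed
# seed, the nondeterminism is elsewhere (threads, BLAS, floating-point
# reduction order, ...).
@assert a == b
```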
AFAIK there is none. That's what's bugging me the most, but I'll try that on Sunday or Monday. Did you try to run it on your machine by any chance? I will give it a shot on my old laptop as well, just in case, even though I doubt the hardware is the issue.
I don't know if this counts as a win, but after switching to Ubuntu 20.04 and Julia 1.5 the test is failing for me as well :).
Looks like it's good now?
Nope. The Michaelis-Menten test is still off by a small factor (and again, not on my laptop, just on Travis); see the test log for details. But in general, yes: this works and is a simple yet flexible approach.
Works! Woohoo!
What was the trick?
Honestly, I do not know yet; I am working towards that. I added the max convergence limit to the optimizer. This does not explain why it worked locally for me and not on Travis, though...
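Roughly what such an iteration-capped convergence loop looks like; `iterate_capped`, `maxiter`, and `abstol` are illustrative names, not the actual DataDrivenDiffEq API:

```julia
# Fixed-point iteration with a hard iteration cap, so a run that never
# reaches the tolerance still terminates deterministically instead of
# spinning forever on one platform but not another.
function iterate_capped(step, x0; maxiter = 100, abstol = 1e-8)
    x = x0
    for _ in 1:maxiter
        xnew = step(x)
        # Stop early once successive iterates are closer than abstol.
        abs(xnew - x) < abstol && return xnew
        x = xnew
    end
    return x  # best estimate after maxiter iterations
end

# Example: converges to the fixed point of cos, x ≈ 0.739.
iterate_capped(cos, 1.0)
```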
src/sindy/isindy.jl (Outdated)

```diff
- @inbounds for i in 1:size(Ẋ, 1)
+ @simd for i in 1:size(Ẋ, 1)
```
This isn't a SIMD-able loop. The macro will just be a no-op.
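A sketch of the distinction being made; `solve_row` is a hypothetical stand-in for the per-row work, not the code under review:

```julia
# SIMD-able: straight-line arithmetic, no branches, and the only
# loop-carried dependency is a reduction; @simd lets the compiler
# vectorize by relaxing floating-point evaluation order.
function dot_simd(a::Vector{Float64}, b::Vector{Float64})
    s = 0.0
    @inbounds @simd for i in eachindex(a, b)
        s += a[i] * b[i]
    end
    return s
end

# Not SIMD-able: each iteration performs an allocating, branching call,
# so there is nothing for the compiler to vectorize and @simd is a no-op.
solve_row(i) = sum(sort(randn(10)) .+ i)  # hypothetical per-row solve
function per_row(n)
    out = zeros(n)
    @simd for i in 1:n
        out[i] = solve_row(float(i))
    end
    return out
end
```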
Alright. I will change that.
#130