
Fixing goal programming issues. #49

Merged
merged 18 commits into from
Dec 12, 2019

Conversation

1ozturkbe
Contributor

No description provided.

@1ozturkbe
Contributor Author

This is almost ready to merge, but unfortunately the diffs do not match in test_methods.
I saved solution files on my personal machine and uploaded them to git, but they do not match the solutions produced on the remote machines as I would like. I cannot recreate and upload the solution files required to get test_methods to pass. @bqpd how does one get around this?

@1ozturkbe
Contributor Author

@bqpd bump!

@bqpd
Contributor

bqpd commented Nov 25, 2019

You have to figure out a way to either make those tests cross-machine deterministic (which is what gpkit does with sorts in various places), or set a seed, set thresholds for closeness, and accept that they occasionally fail when the random number generator implementation changes (which is what gpfit does).
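The seed-plus-threshold approach can be sketched in plain Python (the fitting routine and numbers here are illustrative stand-ins, not gpfit's actual code):

```python
import random

def fit_mean(seed, n=1000):
    """Stand-in for a stochastic fitting routine: seed an RNG, then estimate."""
    rng = random.Random(seed)  # a local RNG makes the seed explicit and contained
    samples = [rng.gauss(1.0, 0.01) for _ in range(n)]
    return sum(samples) / n

# Same seed -> same result on one RNG implementation. Across implementations
# the value may drift, so tests compare with a closeness threshold instead of
# exact equality -- and can still occasionally fail if the RNG itself changes.
a = fit_mean(seed=0)
b = fit_mean(seed=0)
assert a == b                # deterministic given the seed
assert abs(a - 1.0) < 5e-3   # threshold for closeness, not exact equality
```

The alternative mentioned first (sorting wherever iteration order is unspecified) avoids randomness entirely and is the more robust of the two when it is available.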

@bqpd
Contributor

bqpd commented Nov 25, 2019

For this one, does just increasing reltol not work?

@1ozturkbe
Contributor Author

I think the issue is with the printouts, not with the actual solutions.

@bqpd
Contributor

bqpd commented Nov 25, 2019

According to the test results it was failing at an assertAlmostEqual...

@1ozturkbe
Contributor Author

1ozturkbe commented Nov 25, 2019

The test is failing at sol.almost_equal. I am digging in right now, since I would like to avoid machine-specific workarounds if possible.

@bqpd
Contributor

bqpd commented Nov 25, 2019

oh derp, egg on my face. Try setting sens_abstol a bit higher too? (or maybe just to 10-ish so sensitivity closeness isn't tested at all??)

Maybe the first thing to do is print out the sol.diff, so you can tell just what the differences are; if they make sense, set the tolerances accordingly, but if they don't, then something else is up.

@1ozturkbe
Contributor Author

I figured the tolerances were high enough; there shouldn't be a significant difference between machines. But I will keep turning the knobs until it freaking works.

@1ozturkbe
Contributor Author

(and start actually printing the solutions on the machines if it doesn't...)

@bqpd
Contributor

bqpd commented Nov 25, 2019

yeah I wonder if there's some configuration difference (different solver? different library version?) that's causing non-trivial changes to the result.

@bqpd
Contributor

bqpd commented Nov 25, 2019

OH, it might also be a reltol problem, if there are variables that you or Jenkins solve as "exactly zero"...
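The "exactly zero" failure mode is easy to reproduce in plain Python: a relative tolerance is meaningless when the reference value is zero, which is why `math.isclose` also takes an absolute tolerance (this is stdlib Python, not gpkit's almost_equal, but the arithmetic is the same):

```python
import math

# Relative tolerance alone: 1e-9 vs 0.0 fails, because the allowed error is
# rel_tol * max(|a|, |b|) = 1e-3 * 1e-9, and any nonzero value exceeds it.
assert not math.isclose(1e-9, 0.0, rel_tol=1e-3)

# Adding an absolute tolerance handles values that one machine solves as
# "exactly zero" while another returns a tiny nonzero number.
assert math.isclose(1e-9, 0.0, rel_tol=1e-3, abs_tol=1e-6)
```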

@bqpd
Contributor

bqpd commented Nov 25, 2019

but the diff will show all that, if you print it. Probably good for debugging purposes too.

@1ozturkbe
Contributor Author

1ozturkbe commented Nov 25, 2019 via email

@bqpd
Contributor

bqpd commented Nov 25, 2019 via email

@1ozturkbe
Contributor Author

Alright, I've detected two issues through the diffs.
One, on the other machines the Boyd model is infeasible...
Two, the auto-generated variable names do not match between the two models.

@bqpd
Contributor

bqpd commented Nov 27, 2019

Huh, for Two I wonder if you should name those variables?

@bqpd
Contributor

bqpd commented Nov 27, 2019

Also possible to make the variable numbering resettable
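A resettable counter for auto-generated names could look like the following sketch (the class and method names are hypothetical, not gpkit internals):

```python
import itertools

class VarNamer:
    """Hypothetical sketch of resettable auto-numbering for unnamed variables."""

    def __init__(self):
        self._counter = itertools.count(1)

    def fresh(self, prefix="x"):
        """Return the next auto-generated name, e.g. 'x1', 'x2', ..."""
        return f"{prefix}{next(self._counter)}"

    def reset(self):
        # Resetting before building each model keeps generated names
        # identical across machines and across test runs.
        self._counter = itertools.count(1)

namer = VarNamer()
first = [namer.fresh() for _ in range(3)]
namer.reset()
second = [namer.fresh() for _ in range(3)]
assert first == second  # names match after a reset
```

The global-counter behavior being worked around here is exactly why the names diverged: any extra unnamed variable created earlier in one environment shifts every subsequent number.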

@bqpd
Contributor

bqpd commented Nov 27, 2019

Having it be specific to a particular model environment might also work, but I'm leery of adding complexity to what is basically an old grandfathered hack (unnamed variables) anyway.

@1ozturkbe
Contributor Author

The issues with using diffs were too annoying, so I decided to go with assertAlmostEqual over all original variables instead.
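Checking closeness variable-by-variable instead of diffing whole solution printouts might look like this sketch (the variable names and values are made up, not from this repo's solution files):

```python
import unittest

class CompareSolutions(unittest.TestCase):
    """Illustrative sketch: check each original variable against a saved value."""

    def test_variables_close(self):
        saved = {"A": 12.0, "c_d": 0.021, "W": 4940.0}               # stored solution
        fresh = {"A": 12.0000004, "c_d": 0.0209999, "W": 4940.0002}  # fresh re-solve
        for name, value in saved.items():
            # places=3 is a stand-in tolerance; in practice it is tuned per test
            self.assertAlmostEqual(fresh[name], value, places=3)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(CompareSolutions)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```

This sidesteps printout formatting entirely: only the numerical values of the original (named) variables are compared, which is what the tests actually care about.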

@1ozturkbe
Contributor Author

Yaaas, merging!!

@1ozturkbe 1ozturkbe merged commit 611bcee into master Dec 12, 2019
@1ozturkbe 1ozturkbe deleted the robust_experiment branch December 12, 2019 20:38