Pymbar 4.0 adaptive logic #446
Conversation
mbar_solve was not passing the current free energies on failures.
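The fix described above can be illustrated with a minimal sketch (the helper name and solver signature here are hypothetical, not pymbar's actual internals): a solver protocol is a sequence of solvers tried in order, and the key point is carrying the current free-energy estimate forward when a solver fails, instead of discarding its partial progress.

```python
# Hypothetical sketch of the failure-handling logic; not pymbar's actual code.
# A "protocol" is a sequence of solver specs, each a dict with a "method" name
# and optional "options". Each solver returns (updated_f_k, converged).

def solve_with_protocol(f_k, protocol, solvers):
    """Try each solver spec in order, carrying f_k forward on failure."""
    for spec in protocol:
        solver = solvers[spec["method"]]
        f_k, converged = solver(f_k, **spec.get("options", {}))
        if converged:
            return f_k, True
    # Even if nothing converged, return the best estimate found so far
    # rather than the original starting point.
    return f_k, False
```

With this shape, a later solver in the protocol starts from the partial progress of the one before it rather than restarting from the initial guess, which is what the bug prevented.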
OK, this appears to have failed with functions returning error messages that aren't expected. I will check this. Interesting that it passed when running pytest locally, but not here.
Shouldn't the robust approach become the default? Other, faster approaches could be selected by the user.
devtools/conda-recipe/README.md
Outdated
@@ -6,7 +6,7 @@ it, running the tests, and then if successful pushing the package to binstar
 (and the docs to AWS S3). The binstar auth token is an encrypted environment
 variable generated using:

-    binstar auth -n repex-travis -o omnia --max-age 22896000 -c --scopes api:write
+    binstar auth -n repex-travis -o conda-forge --max-age 22896000 -c --scopes api:write
I don't think we're using binstar anymore.
Does this repo need to be updated to the current MolSSI cookiecutter?
@Lnaden any thoughts on binstar or updating the cookiecutter?
@@ -224,7 +224,7 @@ def main():
     # Initialize MBAR.
     print("Running MBAR...")
     # TODO: change to u_kn inputs
-    mbar = pymbar.MBAR(u_kln, N_k, verbose=True, relative_tolerance=1.0e-10)
+    mbar = pymbar.MBAR(u_kln, N_k, verbose=True, relative_tolerance=1.0e-10, solver_protocol = 'robust')
Shouldn't robust be the default?
That's a good question. I think this PR is about getting the framework right for now; the precise solvers used will need a bit more experimentation to pin down what "robust" and "default" actually mean. I'm totally fine with using what we think is the most robust solver as the default, and then having a "faster" option. I will set those solvers in a later PR closer to release, after some testing.
@@ -1,50 +1,50 @@
 494.896 3701.003313 58.541052
What's the point of including "expected output" targets if this changes even if we are only changing the efficiency and robustness, rather than the expected result? Shouldn't we just adjust the comparison tolerance and keep a single, well-converged set of results here?
Reproducibility, basically. They should get the same answer as in the code.
I don't know how to have git compare "up to a number".
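Git itself can't diff "up to a number", but the test that reads these files could: parse both into arrays and compare with a tolerance instead of requiring byte-identical text. A sketch, assuming whitespace-separated numeric columns like the line in the diff above (the helper name is illustrative, not part of the test suite):

```python
import numpy as np

# Illustrative helper, not part of pymbar's test suite: compare two
# whitespace-separated numeric tables up to relative/absolute tolerances,
# so re-converged reference output need not match character for character.
def outputs_match(expected_text, actual_text, rtol=1e-6, atol=1e-8):
    expected = np.loadtxt(expected_text.splitlines())
    actual = np.loadtxt(actual_text.splitlines())
    return expected.shape == actual.shape and np.allclose(
        expected, actual, rtol=rtol, atol=atol
    )
```

With a check like this, the committed "expected output" files could stay fixed at one well-converged result, and solver changes that only affect the last few digits would still pass.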
     dict(method="adaptive", options=dict(min_sc_iter=0)),
 )

 ROBUST_SOLVER_PROTOCOL = (
Shouldn't "robust" be default, with faster methods available if desired?
Good question. default is essentially what was done before. I can play around with this a little.
(and see above for when to release it)
@@ -253,12 +284,11 @@ def adaptive(u_kn, N_k, f_k, tol=1.0e-12, options=None):

     options: dictionary of options
         gamma (float between 0 and 1) - incrementor for NR iterations (default 1.0). Usually not changed now, since adaptively switch.
-        maximum_iterations (int) - maximum number of Newton-Raphson iterations (default 250: either NR converges or doesn't, pretty quickly)
+        maxiter (int) - maximum number of Newton-Raphson iterations (default 10000: either NR converges or doesn't, pretty quickly)
Why change the name from something clear and comprehensible (maximum_iterations) to something shorter (maxiter)? Why change the name at all?
These are options that go to either scipy.optimize.minimize or to adaptive, and the scipy.optimize.minimize minimizers use maxiter. The idea is to make all of the calls consistent, so you don't have to use maximum_iterations for one solution approach and maxiter for another.
We could just take "maximum_iterations" and, under the hood, change it to "maxiter" for the scipy calls, but it would probably be better to just document up front which options it takes.
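The "under the hood" translation mentioned above could be as simple as a small key-renaming step before the options dictionary is handed to scipy (a sketch of the alternative being discussed, not what the PR actually does):

```python
# Illustrative sketch: accept the friendlier spelling but normalize option
# keys to the names scipy.optimize.minimize expects. Not the PR's approach,
# which standardizes on "maxiter" directly.
SCIPY_NAME_MAP = {"maximum_iterations": "maxiter"}

def normalize_options(options):
    """Rename user-facing option keys to scipy's spellings; pass others through."""
    return {SCIPY_NAME_MAP.get(key, key): value for key, value in options.items()}
```

The trade-off is exactly the one raised in the comment: the mapping keeps the readable name for users, at the cost of documentation and error messages having to explain two spellings of the same option.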
…ymbar4_adaptive_logic
Pass at adding adaptive logic to better converge MBAR estimates. Now passes the pytest tests. Could use a bit more cleanup, but the basic logic is working.