Allow for long-range dispersion correction to be (efficiently?) computed when parameter offsets are used? #3054
Comments
I just finished tracking down the problem, then found you'd already come to the same conclusion. It's best to add a new comment rather than editing the previous one: editing a comment doesn't trigger email notifications, so no one realizes you've done it. Anyway, the difference does in fact come from the long-range dispersion correction. The magnitude of that correction is just a constant divided by the box volume, so it adds negligible cost at each time step, but computing the constant at the start of the simulation can be expensive. That's why it gets computed using the base parameters for each particle, not taking offsets into account. If you disable the correction, you'll see the difference goes away: `force.setUseDispersionCorrection(False)`
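The cost structure described above can be sketched in plain Python. This is a simplified standalone model of an LJ tail correction, not OpenMM's actual implementation; the Lorentz-Berthelot mixing rule and the particle parameters are illustrative assumptions. The double loop over particle pairs is the expensive setup step, while the per-step cost is a single division by the volume:

```python
import math

def dispersion_constant(params, cutoff):
    """O(N^2) part: sum the standard LJ tail integral over every ordered
    pair of particles. This double loop is what makes the setup expensive.
    params is a list of (epsilon, sigma) tuples."""
    total = 0.0
    for eps_i, sig_i in params:
        for eps_j, sig_j in params:
            # Lorentz-Berthelot mixing, assumed for illustration only.
            eps = math.sqrt(eps_i * eps_j)
            sig = 0.5 * (sig_i + sig_j)
            sr3 = (sig / cutoff) ** 3
            # Tail of the LJ potential integrated beyond the cutoff.
            total += (8.0 * math.pi / 3.0) * eps * sig ** 3 * (sr3 ** 3 / 3.0 - sr3)
    return total

def dispersion_energy(constant, volume):
    # Cheap per-step part: just the precomputed constant over the volume.
    return constant / volume

params = [(0.5, 0.3)] * 4        # four identical LJ particles
c = dispersion_constant(params, cutoff=1.0)
e_small = dispersion_energy(c, volume=8.0)
e_large = dispersion_energy(c, volume=16.0)   # doubling V halves the correction
```

Because the constant depends only on the parameters and the cutoff, a barostat that changes the box volume still only pays for the division each step.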
@peastman Many thanks for the reply. Even if it is expensive, is there a way to force recalculation of the constant from Python (ideally without having to reinitialise the context)?
Reinitializing the context is the only way. And even then, it will recompute the constant based on the default value of the parameter, not the current value.
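A toy model in plain Python can illustrate that caveat (this is a caricature of the mechanism, not OpenMM code; the energy functions and class are invented for illustration). The correction is cached when the context is built, a parameter change updates only the direct term, and even a rebuild of the cache reads the default parameter value:

```python
def direct_energy(eps):
    # Stand-in for the explicitly summed short-range interactions.
    return -2.0 * eps

def dispersion_correction(eps, volume):
    # Stand-in for the long-range correction: a constant over the volume.
    return -8.0 * eps / volume

class ToyContext:
    """Caricature of a context that caches the correction at creation."""
    def __init__(self, default_eps, volume):
        self.default_eps = default_eps
        self.eps = default_eps
        self.volume = volume
        self._cached_corr = dispersion_correction(default_eps, volume)

    def set_parameter(self, eps):
        # Updates the direct term only; the cached correction goes stale.
        self.eps = eps

    def reinitialize(self):
        # Rebuilds the cache, but from the *default* parameter value,
        # mirroring the behaviour described in the comment above.
        self._cached_corr = dispersion_correction(self.default_eps, self.volume)

    def energy(self):
        return direct_energy(self.eps) + self._cached_corr

ctx = ToyContext(default_eps=1.0, volume=10.0)
ctx.set_parameter(2.0)
stale = ctx.energy()          # correction still reflects eps = 1.0
ctx.reinitialize()
after_reinit = ctx.energy()   # unchanged: the cache uses the default
fresh = ToyContext(default_eps=2.0, volume=10.0).energy()
```

In the toy, `stale` and `after_reinit` are equal, and both differ from `fresh`, which is what a context created at the current parameter value would report.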
We've run into this issue before as well. Would it be possible to write a GPU kernel to recompute the dispersion correction on the fly when the parameters change?
Of course it's possible, but it would be very expensive.
I think in many practical situations only a few parameters are changed using offsets, so maybe the update could recalculate only the contributions involving the perturbed parameters? I am not sure how straightforward it would be to implement this, though.
Good point! We could cluster the interactions of the non-offset particles into specific classes of (count, epsilon, sigma), and then only integrate the (offset particle, non-offset particle class) combinations whenever the offset parameters are changed. Could this be done in parallel for each combination, in a kernel that runs independently of the others?
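A rough standalone sketch of that clustering idea (the class name, pair coefficient, and mixing rule here are invented for illustration and are not OpenMM code): the mutual contribution of the fixed-parameter classes is summed once, and each update re-sums only the terms that involve offset particles.

```python
import math
from collections import Counter

def pair_c6(p, q):
    # Illustrative attractive C6 coefficient with Lorentz-Berthelot mixing;
    # the real tail integral and combination rules are more involved.
    eps = math.sqrt(p[0] * q[0])
    sigma = 0.5 * (p[1] + q[1])
    return eps * sigma ** 6

class IncrementalDispersion:
    """The (epsilon, sigma) classes of the non-offset particles never
    change, so their mutual contribution is summed once; only terms
    involving offset particles are re-summed on each update."""
    def __init__(self, fixed_params, offset_params):
        self.classes = Counter(fixed_params)     # (eps, sigma) -> count
        self.offset_params = list(offset_params)
        self.fixed_part = sum(
            ni * nj * pair_c6(pi, pj)
            for pi, ni in self.classes.items()
            for pj, nj in self.classes.items()
        )

    def constant(self):
        off = self.offset_params
        # Offset-fixed terms (both orderings) plus offset-offset terms.
        var = sum(2.0 * n * pair_c6(p, q)
                  for p in off for q, n in self.classes.items())
        var += sum(pair_c6(p, q) for p in off for q in off)
        return self.fixed_part + var

def brute_force(params):
    # Reference O(N^2) sum over every ordered pair, self-pairs included.
    return sum(pair_c6(p, q) for p in params for q in params)

fixed = [(0.5, 0.30)] * 3 + [(0.2, 0.25)] * 2
inc = IncrementalDispersion(fixed, offset_params=[(0.7, 0.31)])
total = inc.constant()

# Changing the offset parameters re-sums only the cheap offset terms.
inc.offset_params = [(0.9, 0.33)]
updated = inc.constant()
```

With N fixed particles in K classes and M offset particles, each update costs O(MK + M²) instead of O((N + M)²), which is a large saving when M is small.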
Hello,
Consider this basic code, which creates a simple system of two Lennard-Jones particles with a periodic `NonbondedForce` in OpenMM 7.4.1/7.4.2. When we perturb the epsilon parameter back to the initial value of the unperturbed system, we don't recover the original energy.
However, had we initialised the `switch` parameter to 1 before defining the parameter offset, we would have got the expected result. This also seems to happen only when periodic boundary conditions are used, because when we swap `CutoffPeriodic` for `CutoffNonPeriodic` we get the expected behaviour.
It therefore seems that there is a bug specifically related to parameter offsets in periodic systems? Any advice would be appreciated.
EDIT: It seems that this problem is related to the dispersion correction, which I assume doesn't get updated with the parameter. Is there any way to trigger this update manually without calling the integrator?