
Release thread for OpenMM 8.0 #3610

Closed
peastman opened this issue May 19, 2022 · 128 comments


@peastman
Member

In the latest dev build, the errors on Mac described in #3495 seem to have disappeared. We don't know why, just as we don't know why they appeared in the first place. But given that, I suggest we move forward with building a beta of the next release. It contains changes needed for ML.

Are there any final features we want to get in before building the beta? Don't worry about bug fixes and documentation changes. Those can go in after the beta. I'm just asking about new features.

Do we want to call it 7.8 or 8.0?

@peastman
Member Author

Does the deafening silence mean everyone agrees and I should start working on building a beta?

@raimis
Contributor

raimis commented May 25, 2022

Yes, let's make a beta, so it is easier to check the NNP functionality. For now, let's call it 7.8.

@peastman
Member Author

@jchodera @giadefa @tmarkland any comments? Which version number do you prefer?

@tmarkland
Member

I think we decided on 8.0 for this one. Proceeding with the beta sounds great to me.

@jchodera
Member

Apologies for the delay in responding---I've been traveling.

The most important functionality to test is to make sure we can make the whole openmm-ml ecosystem conda-installable in a single line to deploy accelerated QML. We haven't been able to test that without a dev release. Perhaps a dev release of that would be much more useful since we are typically reluctant to build more than one beta?

From the "OpenMM 8 status and priorities" document, here are the desiderata I laid out Feb 22:

From this list, I still have to finish up the automated benchmarks PR which I can do quickly so it will be easier for folks to run benchmarks and share them with us.

Thoughts on the others?

@egallicc
Contributor

Thank you @jchodera and all. I have a draft conda-forge recipe for the ATMMetaForce plugin awaiting review. Yes, the plugin is rather specific to traditional bonded/non-bonded MM force fields and could likely benefit from some level of generalization. Also documentation of the API, unit testing, etc.

@peastman
Member Author

We didn’t address fast long-range correction computation (critical for alchemical free energy calculations) for CustomNonbondedForce in 7.7

There are several changes that improve it: #3520, #3552, and #3606.

including some auto-tuning tools that we recommend by default

This would be a nice feature for someone to work on. Right now I think it's still at the research stage. Someone needs to experiment with various ways of tuning and see what works well.

Finish up automated benchmarking to make it easier for us to produce and use benchmark data

If you want to finish it up, that would be great. Since it only affects the benchmark script, I think it's fine if that doesn't happen until after the beta goes out.

Experimental AMD HIP/ROCm platform plugin

It's still very experimental.

Clean up and add structure to examples directory:
Update makefiles for examples:

This would be nice, and can also be done post-beta.

Can we add support for OpenFF units natively / pint?

I'm not convinced that's a good idea. It's certainly something that would need a lot of design and discussion, so not happening in this release.

@peastman
Member Author

peastman commented Jun 2, 2022

@jchodera given the above comments, are you ok with releasing a beta now?

It would be nice to have coordinated versions of OpenMM, OpenMM-Torch, and OpenMM-ML so users can test out the full workflow. @raimis do you think the other two packages are also in a suitable state for beta testers to use them, or are there changes we should get in first?

@tmarkland has a strong preference for calling this release 8.0. Is that ok with everyone?

@raimis
Contributor

raimis commented Jun 3, 2022

It would be nice to have coordinated versions of OpenMM, OpenMM-Torch, and OpenMM-ML so users can test out the full workflow. @raimis do you think the other two packages are also in a suitable state for beta testers to use them, or are there changes we should get in first?

Regarding OpenMM-Torch, it is in a working state after the latest release (https://github.com/openmm/openmm-torch/releases/tag/v0.7). I'm not sure what the status of OpenMM-ML is.

@tmarkland has a strong preference for calling this release 8.0. Is that ok with everyone?
I'm fine with that.

@peastman
Member Author

peastman commented Jun 7, 2022

@jchodera given the above comments, are you ok with moving forward? Everyone else has given their ok. If I don't hear from you by the end of the week, I'll start building the beta.

I think OpenMM-ML is in reasonable shape. I just made a PR filling in the README. Once that's merged, I think we can create a 1.0beta release and then build conda packages for it. It would be nice if we could finish up one of the two PRs that adds NNPOps integration first, but since that only affects speed, not features, I don't think it's essential.

@zhang-ivy
Contributor

@peastman : What's the status of the beta release? Has it been built / Is it available on conda-forge? If not, when do you expect it to be?

@peastman
Member Author

Not yet. We first need to finish up openmm/openmm-torch#80 so we can build a beta package for OpenMM-Torch, and we need to make changes to OpenMM-ML so it will use the optimized NNPOps kernels. @dominicrufa I think you were going to do the latter?

@dominicrufa

@peastman, I can update the openmm-ml test and push the nnpops bits tomorrow.

@peastman
Member Author

Excellent, thanks.

@jchodera
Member

jchodera commented Jul 5, 2022

I managed to finish up this PR, which would be great to include in the beta:
#3386

Otherwise, I think we're ready to go once openmm/openmm-torch#80 is finished?

@peastman
Member Author

peastman commented Jul 7, 2022

I opened a PR to build the beta packages: conda-forge/openmm-feedstock#83. Once it's done, we can build corresponding packages for the plugins.

@jchodera
Member

Would be great to get #3386 in before we cut the beta so that we can have people send us their benchmarks in a standard JSON format!

@peastman
Member Author

It looks like #3386 is going to take a while yet, so I don't want to hold up the beta for it.

@jchodera
Member

jchodera commented Jul 11, 2022 via email

@peastman
Member Author

The packages are now online! Next we need to build corresponding packages for openmm-torch and openmm-ml (and possibly nnpops as well). They should generally follow the same procedure used for OpenMM beta and rc builds. The critical things are that they should be pushed to the openmm_rc label instead of main, and they should list the OpenMM 8.0 beta as a dependency. They won't work correctly with 7.7.

@peastman
Member Author

I created a PR at conda-forge/openmm-torch-feedstock#23 to build the beta package for OpenMM-Torch. But it looks like we've never set it up to build on any platform other than Linux x86 and Mac x86. I'm not sure exactly what needs to be done to make it build packages for other platforms. And then we need to set up a feedstock to build packages for OpenMM-ML.

@mikemhenry and @raimis can you help? This is pushing the limits of my experience building conda packages.

@mikemhenry
Contributor

@peastman what platforms do we want to target? Parity with openmm or a subset?

@peastman
Member Author

As many as possible, but of course it's limited by what PyTorch supports. The conda-forge package includes ARM Mac as well. The official packages on the pytorch channel also include Windows, but I don't think there's any way we can build against that on conda-forge. :(

@egallicc
Contributor

They won't work correctly with 7.7.

Are there guidelines and suggestions regarding porting plugins? Is there a specific change to the OpenMM API from 7.7 to 8.0 that is likely to break a plugin such as the ATM Meta Force plugin? Thanks.

@peastman
Member Author

There are no API changes. We had to make a number of internal changes related to context handling so it would interoperate correctly with PyTorch. Without those changes, you get crashes in various situations.

@egallicc
Contributor

A personal note about HIP if you are still considering it. We have been playing around with the StreamHPC implementation https://github.com/StreamHPC/openmm and https://github.com/StreamHPC/openmm-hip. We were able to build and test it on an AMD card with minor effort. We were also able to port the ATM Meta Force plugin to HIP via the Common platform (great invention btw), again, with very minor effort (https://github.com/Gallicchio-Lab/openmm-atmmetaforce-plugin/tree/hip).

We have not done detailed benchmarking, but the performance appears unexpectedly good, probably ~3x better than with OpenCL. We are getting more throughput on an inexpensive and power-moderate AMD RX 6750 XT card than on a CUDA/V100 accelerator on XSEDE's Expanse.

It would be great to have HIP support in a stable OpenMM release soon.

@peastman
Member Author

That's good to know! Thanks for the datapoint.

@peastman
Member Author

Two months later, I think we finally have the problems with building the PyTorch plugin on conda-forge resolved! Which means we can move forward with the beta release. I believe these are the next steps.

  1. I'll do a new beta build of OpenMM, since there have been a lot of code changes since I built the first one.
  2. We need suitable conda packages for NNPOps and OpenMM-ML that work correctly with the OpenMM beta (either existing packages or beta packages coordinated with it). @mikemhenry and @raimis what needs to be done for this?
  3. @mikemhenry suggested creating a hosted environment file to make it easy for users to install appropriate beta versions of all the packages. Can you describe exactly how that would be done? (Consider not just how it will work for this release, but also future releases.)

@peastman
Member Author

@mikemhenry @raimis what's the status of the things listed above? The OpenMM and OpenMM-Torch packages have been built, so those are all that's blocking the beta.

@mikemhenry
Contributor

I'll make a PR that details how to create the hosted environment so it will be easier for people to try the beta.

Last I checked NNPOps needed an OSX build.

I can have a status report put together tomorrow morning that triages any blockers.

@peastman
Member Author

New packages are built with the fix from #3912. @mikemhenry we're just waiting for you to fix the problem in the environment file. Then we can announce it.

@peastman
Member Author

Thanks @mikemhenry! I confirmed that conda env create mmh/openmm-8-rc1-linux installs the right versions of everything and that TestMLPotential.py runs correctly. That means we're ready to call this an official release candidate. I'll post an announcement on the forum. Feel free to notify your followers on Twitter (does it still exist?) or wherever else you think will be good for reaching people. We want to encourage people to run their actual workflows with it and make sure everything is ok. The more testing we can get the better.

If no problems are found, we'll plan to make it an official release in 1-2 weeks.
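For anyone trying the release candidate, the steps above can be sketched as a few commands. This is a sketch, not an official recipe: the environment name passed to `conda activate` is an assumption (conda derives it from the hosted spec), so check `conda env list` after creating it.

```shell
# Create the env from the hosted spec mentioned above, then run OpenMM's
# built-in installation check. The env name used with activate is an
# assumption; use whatever `conda env list` reports.
conda env create mmh/openmm-8-rc1-linux
conda activate openmm-8-rc1-linux
python -m openmm.testInstallation   # compares forces across the available platforms
```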

@mikemhenry
Contributor

I updated the PR and pushed the envs to the cloud.

@peastman
Member Author

I'm building an updated release candidate to get the fixes from #3923. They're very minor, localized changes that shouldn't impact any of the downstream packages. They just fix the Python wrappers for two fairly obscure methods. They're mainly important because they impact ForceBalance.

@mikemhenry
Contributor

@peastman I will update the hosted envs. Is openmm the only package with a new build? I'll need to make new builds of the downstream packages so they pull in 8.0.0rc2 instead of the old one.

@peastman
Member Author

I don't think that's necessary. As noted above, the only change was unrelated to ML. Lee-Ping already confirmed it fixed the problem. For everything else, it's fine if people are testing rc1.

@peastman
Member Author

There have been no further reports of problems. That means it's time to start building the release! I'll create a PR for the OpenMM package, then we can move on to the other packages. Hopefully we can release everything tomorrow.

@jchodera
Member

Can we hold off until at least Monday? We are still tracking down an issue with running perses / openmmtools with the latest rc.

@peastman
Member Author

What issue?

@jchodera
Member

jchodera commented Jan 27, 2023

I'll ask @ijpulidos and @mikemhenry to summarize their findings, but we're still trying to complete a successful set of local GPU tests of our tools. We have not yet managed to complete this.

The other outstanding issues I know of are

  1. This issue with Python subclasses of OpenMM classes. I think this is an issue with a change in behavior affecting openmmtools but we haven't fully tracked this down yet.
  2. The difference in computed results we see, which we suspect to be an issue with the OpenEye toolkit import and are working with them to resolve.

There may be others I don't have visibility into right now, but in any case, waiting until Monday seems prudent.

@ijpulidos @mikemhenry : Can you make sure to summarize your findings?

@peastman
Member Author

It's Monday and you haven't posted anything further, so I'm assuming it's ok to move forward with the release. If I don't hear from you in the next hour or so, I'll merge the PR to build the packages.

@ijpulidos
Contributor

ijpulidos commented Jan 30, 2023

As far as I can tell, the findings are correctly summarized in the last comment from @jchodera. The handling of global parameters changed in commit 9d9ffb0, and we have CustomNonbondedForce objects that define different default values for the same global parameter, which no longer seems to be allowed in OpenMM 8.

I still have to dig deeper into what the consequences of this behavior are for openmmtools. It is only a single specific test that is failing, and maybe we just need to refactor how RestorableOpenMMObject works.

@peastman
Member Author

That isn't really a change in behavior. It was always wrong and produced undefined results. It's just that it now tells you you're doing something wrong, where before you would silently get different results than you expected.
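As a sketch of the pattern in question (not code from the thread; the parameter name is hypothetical), this is what two CustomNonbondedForce objects with conflicting defaults for the same global parameter look like. Earlier versions silently picked one default; 8.0 reports the conflict instead.

```python
import openmm

# Two forces declare the same global parameter with different default
# values. Pre-8.0 this silently produced undefined results; OpenMM 8
# flags it as an error instead. "lambda_sterics" is a hypothetical name.
system = openmm.System()
f1 = openmm.CustomNonbondedForce("lambda_sterics*r")
f1.addGlobalParameter("lambda_sterics", 0.0)  # default 0.0
f2 = openmm.CustomNonbondedForce("lambda_sterics*r")
f2.addGlobalParameter("lambda_sterics", 1.0)  # conflicting default 1.0
system.addForce(f1)
system.addForce(f2)
# The conflict surfaces when a Context is created from this System.
```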

@jchodera
Member

@peastman I think we are safe to move forward.
Can you point me to the complete release notes to tweet once things have posted?

@peastman
Member Author

The packages are building now. Once they're done and I've tested a few of them, I'll move on to openmm-torch and openmm-ml. Once everything's done, we'll be ready to announce it.

@peastman
Member Author

All packages are built. If I install OpenMM and OpenMM-Torch with

mamba install -c conda-forge openmm-torch

it installs the correct packages and all tests pass.

If instead I tell it to install OpenMM-ML with

mamba install -c conda-forge openmm-ml

then it incorrectly installs a CPU build of PyTorch:

  pytorch                   1.12.1  cpu_py310h75c9ab6_0      conda-forge/linux-64      62 MB

This causes all tests involving PyTorch to fail with the error

E ImportError: libtorch_cuda.so: cannot open shared object file: No such file or directory

Any idea what causes that?
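A quick way to see which build the solver actually resolved (a diagnostic sketch; `torch.version.cuda` reports None on CPU-only builds):

```shell
# Show the resolved PyTorch package; CPU-only builds carry "cpu" in the
# build string and report no CUDA version from torch itself.
conda list pytorch
python -c "import torch; print(torch.__version__, torch.version.cuda)"
```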

@sef43
Member

sef43 commented Jan 31, 2023

I don't get this error. If I do mamba install -c conda-forge openmm-ml in a new conda environment, it correctly installs the latest versions of openmm, openmm-torch, openmm-ml, and pytorch 1.13.1 (CUDA).

@peastman
Member Author

The behavior seems to be really erratic depending on the python version, whether you use conda or mamba, whether you install openmm-ml when you create the environment, and probably some other things. Here are some examples.

mamba create --name test -c conda-forge python=3.11 openmm-ml

fails with the error

  • package openmm-ml-1.0-pyhd8ed1ab_0 requires torchani >=2.2.2, but none of the providers can be installed

If I downgrade from python 3.11 to 3.10

mamba create --name test -c conda-forge python=3.10 openmm-ml

then it works and wants to install

   pytorch                   1.13.1  cuda112py310he33e0d6_200  conda-forge/linux-64     Cached

If I first create the environment with

mamba create --name test -c conda-forge python=3.10

then activate it and install openmm-ml with

mamba install -c conda-forge openmm-ml

then it installs a different, older pytorch version:

  pytorch                   1.12.1  cuda112py310he33e0d6_201  conda-forge/linux-64     488 MB

But if I try to use conda,

conda install -c conda-forge openmm-ml

then it fails with

ResolvePackageNotFound: 
  - python==3.10

@peastman
Member Author

I figured out how to reproduce the original problem: don't list conda-forge when creating the environment.

mamba create --name test python=3.10
conda activate test
mamba install -c conda-forge openmm-ml

That leads it to install

  pytorch                   1.12.1  cpu_py310h75c9ab6_0      conda-forge/linux-64     Cached

@sef43
Member

sef43 commented Feb 1, 2023

I can reproduce the problem using conda:

conda create --name test python=3.10
conda activate test
conda install -c conda-forge openmm-ml

mamba always works for me: either it gives a working env, or it gives an error (torchani is not built for python 3.11).

I think once again the issue is conda struggling to deal with pytorch. The newer versions of pytorch seem to need a version of python that comes from the conda-forge channel rather than the main channel. By doing conda create --name test python=3.10 it installs python from the pkgs/main channel rather than the conda-forge channel. If I use mamba, it reinstalls python from conda-forge. Here are the examples.

1. use conda and install python from main when creating the env:

conda create -n test python=3.10
conda activate test
conda install -c conda-forge pytorch --dry-run
Collecting package metadata (current_repodata.json): done
Solving environment: done

## Package Plan ##

  environment location: /home/sfarr/miniconda3/envs/test

  added / updated specs:
    - pytorch


The following packages will be downloaded:

    package                    |            build
    ---------------------------|-----------------
    cffi-1.15.0                |  py310h0fdd8cc_0         433 KB  conda-forge
    libblas-3.9.0              |   16_linux64_mkl          13 KB  conda-forge
    libcblas-3.9.0             |   16_linux64_mkl          12 KB  conda-forge
    liblapack-3.9.0            |   16_linux64_mkl          12 KB  conda-forge
    numpy-1.22.3               |  py310h4ef5377_2         6.8 MB  conda-forge
    pytorch-1.11.0             |cpu_py310h75c9ab6_1        47.2 MB  conda-forge
    ------------------------------------------------------------
                                           Total:        54.5 MB

The following NEW packages will be INSTALLED:

  cffi               conda-forge/linux-64::cffi-1.15.0-py310h0fdd8cc_0 
  intel-openmp       pkgs/main/linux-64::intel-openmp-2022.1.0-h9e868ea_3769 
  libblas            conda-forge/linux-64::libblas-3.9.0-16_linux64_mkl 
  libcblas           conda-forge/linux-64::libcblas-3.9.0-16_linux64_mkl 
  liblapack          conda-forge/linux-64::liblapack-3.9.0-16_linux64_mkl 
  libprotobuf        pkgs/main/linux-64::libprotobuf-3.20.1-h4ff587b_0 
  mkl                pkgs/main/linux-64::mkl-2022.1.0-hc2b9512_224 
  ninja              conda-forge/linux-64::ninja-1.11.0-h924138e_0 
  numpy              conda-forge/linux-64::numpy-1.22.3-py310h4ef5377_2 
  pycparser          conda-forge/noarch::pycparser-2.21-pyhd8ed1ab_0 
  python_abi         conda-forge/linux-64::python_abi-3.10-2_cp310 
  pytorch            conda-forge/linux-64::pytorch-1.11.0-cpu_py310h75c9ab6_1 
  sleef              conda-forge/linux-64::sleef-3.5.1-h9b69904_2 
  typing_extensions  conda-forge/noarch::typing_extensions-4.4.0-pyha770c72_0 

The following packages will be SUPERSEDED by a higher-priority channel:

  ca-certificates    pkgs/main::ca-certificates-2023.01.10~ --> conda-forge::ca-certificates-2022.12.7-ha878542_0 
  certifi            pkgs/main/linux-64::certifi-2022.12.7~ --> conda-forge/noarch::certifi-2022.12.7-pyhd8ed1ab_0 



DryRunExit: Dry run. Exiting.

conda installs pytorch conda-forge/linux-64::pytorch-1.11.0-cpu_py310h75c9ab6_1

2. use mamba and install python from main when creating the env:

mamba create -n test python=3.10
mamba activate test
mamba install -c conda-forge pytorch --dry-run


Looking for: ['pytorch']

pkgs/main/linux-64                                            No change
pkgs/main/noarch                                              No change
pkgs/r/linux-64                                               No change
pkgs/r/noarch                                                 No change
conda-forge/noarch                                  11.1MB @   3.3MB/s  3.6s
conda-forge/linux-64                                29.4MB @   3.3MB/s  9.8s

Pinned packages:
  - python 3.10.*


Transaction

  Prefix: /home/sfarr/miniconda3/envs/test

  Updating specs:

   - pytorch
   - ca-certificates
   - certifi
   - openssl


  Package               Version  Build                     Channel                    Size
────────────────────────────────────────────────────────────────────────────────────────────
  Install:
────────────────────────────────────────────────────────────────────────────────────────────

  + cffi                 1.15.1  py310h255011f_3           conda-forge/linux-64     Cached
  + cudatoolkit          11.8.0  h37601d7_11               conda-forge/linux-64     Cached
  + cudnn              8.4.1.50  hed8a83a_0                conda-forge/linux-64     Cached
  + icu                    70.1  h27087fc_0                conda-forge/linux-64     Cached
  + libblas               3.9.0  16_linux64_openblas       conda-forge/linux-64     Cached
  + libcblas              3.9.0  16_linux64_openblas       conda-forge/linux-64     Cached
  + libgfortran-ng       12.2.0  h69a702a_19               conda-forge/linux-64     Cached
  + libgfortran5         12.2.0  h337968e_19               conda-forge/linux-64     Cached
  + libhwloc              2.8.0  h32351e8_1                conda-forge/linux-64     Cached
  + libiconv               1.17  h166bdaf_0                conda-forge/linux-64     Cached
  + liblapack             3.9.0  16_linux64_openblas       conda-forge/linux-64     Cached
  + libnsl                2.0.0  h7f98852_0                conda-forge/linux-64     Cached
  + libopenblas          0.3.21  pthreads_h78a6416_3       conda-forge/linux-64     Cached
  + libprotobuf         3.21.12  h3eb15da_0                conda-forge/linux-64     Cached
  + libsqlite            3.40.0  h753d276_0                conda-forge/linux-64     Cached
  + libxml2              2.10.3  h7463322_0                conda-forge/linux-64     Cached
  + libzlib              1.2.13  h166bdaf_4                conda-forge/linux-64     Cached
  + llvm-openmp          15.0.7  h0cdce71_0                conda-forge/linux-64     Cached
  + magma                 2.6.2  hc72dce7_0                conda-forge/linux-64     Cached
  + mkl                2022.2.1  h84fe81f_16997            conda-forge/linux-64     Cached
  + nccl               2.14.3.1  h0800d71_0                conda-forge/linux-64     Cached
  + ninja                1.11.0  h924138e_0                conda-forge/linux-64     Cached
  + numpy                1.24.1  py310h8deb116_0           conda-forge/linux-64     Cached
  + pycparser              2.21  pyhd8ed1ab_0              conda-forge/noarch       Cached
  + python_abi             3.10  3_cp310                   conda-forge/linux-64     Cached
  + pytorch              1.13.1  cuda112py310he33e0d6_200  conda-forge/linux-64     Cached
  + sleef                 3.5.1  h9b69904_2                conda-forge/linux-64     Cached
  + tbb                2021.7.0  h924138e_1                conda-forge/linux-64     Cached
  + typing_extensions     4.4.0  pyha770c72_0              conda-forge/noarch       Cached

  Change:
────────────────────────────────────────────────────────────────────────────────────────────

  - _libgcc_mutex           0.1  main                      pkgs/main                      
  + _libgcc_mutex           0.1  conda_forge               conda-forge/linux-64     Cached
  - zlib                 1.2.13  h5eee18b_0                pkgs/main                      
  + zlib                 1.2.13  h166bdaf_4                conda-forge/linux-64     Cached

  Upgrade:
────────────────────────────────────────────────────────────────────────────────────────────

  - libgcc-ng            11.2.0  h1234567_1                pkgs/main                      
  + libgcc-ng            12.2.0  h65d4601_19               conda-forge/linux-64     Cached
  - libgomp              11.2.0  h1234567_1                pkgs/main                      
  + libgomp              12.2.0  h65d4601_19               conda-forge/linux-64     Cached
  - libstdcxx-ng         11.2.0  h1234567_1                pkgs/main                      
  + libstdcxx-ng         12.2.0  h46fd767_19               conda-forge/linux-64     Cached
  - libuuid              1.41.5  h5eee18b_0                pkgs/main                      
  + libuuid              2.32.1  h7f98852_1000             conda-forge/linux-64     Cached
  - openssl              1.1.1s  h7f8727e_0                pkgs/main                      
  + openssl               3.0.7  h0b41bf4_2                conda-forge/linux-64     Cached

  Downgrade:
────────────────────────────────────────────────────────────────────────────────────────────

  - _openmp_mutex           5.1  1_gnu                     pkgs/main                      
  + _openmp_mutex           4.5  2_kmp_llvm                conda-forge/linux-64     Cached
  - python               3.10.9  h7a1cb2a_0                pkgs/main                      
  + python               3.10.8  h4a9ceb5_0_cpython        conda-forge/linux-64     Cached

  Summary:

  Install: 29 packages
  Change: 2 packages
  Upgrade: 5 packages
  Downgrade: 2 packages

  Total download: 0 B

────────────────────────────────────────────────────────────────────────────────────────────


Dry run. Exiting.

DryRunExit: Dry run. Exiting.

mamba installs pytorch 1.13.1 cuda112py310he33e0d6_200 conda-forge/linux-64
and changes python (and other packages) to conda-forge builds

Solutions
The solution, as you have found, is to make sure the python is a conda-forge build; there are several working ways to do this.

  1. Specify conda-forge when you create the environment with a specific python version:
conda create -n openmm -c conda-forge python=3.10
conda activate openmm
conda install -c conda-forge openmm-ml
  2. Or make a completely empty environment first:
conda create -n openmm
conda activate openmm
conda install -c conda-forge openmm-ml

the newest compatible python will be installed from conda-forge

  3. Install everything when you create the environment:
conda create -n openmm -c conda-forge openmm-ml
  4. Or faster with mamba:
mamba create -n openmm -c conda-forge openmm-ml

will install the latest version of everything correctly.

The order you do things in with conda matters: by giving it all requirements at the same time, in a completely clean environment, it can work them out correctly (although slowly).
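One more option, not tried in this thread (a hedged suggestion using standard conda CLI flags): pin conda-forge as the top-priority channel for the environment itself, so later installs into it keep resolving against conda-forge builds.

```shell
# Write channel settings into the active environment's own .condarc so
# pkgs/main can no longer shadow conda-forge builds on later installs.
conda activate openmm
conda config --env --add channels conda-forge
conda config --env --set channel_priority strict
conda install openmm-ml
```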

@peastman
Member Author

peastman commented Feb 1, 2023

Your option 4 doesn't work for me. Installing everything with mamba still produces the error.

If someone is creating a completely new environment from scratch, there are ways to make this work. But what if someone wants to install it into an existing environment that already includes other software they want to use along with OpenMM? Is there a way to force it to install the correct pytorch?

@sef43
Member

sef43 commented Feb 1, 2023

Your option 4 doesn't work for me. Installing everything with mamba still produces the error.

What mamba version do you have? I have version 1.2

If someone is creating a completely new environment from scratch, there are ways to make this work. But what if someone wants to install it into an existing environment that already includes other software they want to use along with OpenMM? Is there a way to force it to install the correct pytorch?

On my machine with mamba 1.2 it does correctly install pytorch in an existing environment. The only time it fails to install things is if you try to use python 3.11, where it complains that it can't install torchani.

@sef43
Member

sef43 commented Feb 1, 2023

Using just conda if I use the flag --override-channels then it also installs correctly in an existing environment:

conda create -n test python=3.10
conda activate test
conda install -c conda-forge openmm-ml --override-channels

@peastman
Member Author

peastman commented Feb 1, 2023

I posted the announcement on the forum. I also created a PR to update the installation instructions in the OpenMM-ML README.

Hopefully the conda issues won't affect too many people. If we start getting reports from people running into them, we may need to investigate the causes and see if there's a way to prevent them.

@peastman
Member Author

peastman commented Feb 2, 2023

I was using mamba 0.10.0. I just upgraded to 0.27.0, which is the newest version available on conda-forge. It now installs a GPU pytorch rather than a CPU one.
