Migrate MPCD component to v4 #775
@mphoward While I still don't have a hard timeframe for a 3.0.0 final release, the to-do list on the project board is getting smaller. I would guess that 3.0.0 final will be completed this summer. Do you plan to update MPCD for that release, or should we delay MPCD until a future 3.x feature release?
I think I can do the migration and breaking changes by this summer, as we would like to be able to use MPCD with other v3 features. Could we have a short discussion to walk through what needs to be done to migrate a component? Feel free to email me.
@joaander I am still interested in migrating this component, but I don't really know where to start given the large number of changes that have been made on the Python side. I am guessing this will not get done in time for a 3.0.0 release, and I would rather wait to implement all of the proposed changes above in one 3.x release.
@mphoward Understood. If you need guidance, feel free to set up a real-time meeting with @b-butler and me, and we can give you an overview of the new Python API internals and how the data model is laid out in v3. We have yet to write overview documentation for it (#979). To land all this work in one 3.x release, we can target the independently reviewed PRs to a staging branch and then merge the staging branch to master when all work is complete.
Sounds good! I'll send you both an email to set something up.
I am still planning to do this, but given my bandwidth and the scope of work, I will likely not hit a 3.0 release in March. Confirming that I will target a 3.1 release for the updated MPCD component.
Thanks for the confirmation.
FYI: #1351 updates the use of specific
OK, thank you! I will make sure to apply those changes once the user API is fully updated. I started on the data structures and had to take a break, but I will come back to it soon.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.
A student will be working on this during the summer.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.
@joaander I've run short acceptance tests on the new MPCD code for some common applications, and I was able to reproduce my results from v2. I'm currently getting feedback on the new MPCD API from some folks I know who use it, and then I think we will be ready to open a final PR to merge MPCD in. Before we do that, I was thinking it could be good to have my acceptance tests included in HOOMD's longer-running validation tests, but I wanted to check whether you have guidelines on how long those tests are allowed to run and on what hardware. I was able to run them in a couple of minutes on a GPU, but I can probably change the statistics I'm measuring so that they run shorter if necessary.
I implement long-running acceptance tests in a signac-flow workflow: https://github.com/glotzerlab/hoomd-validation/. I run this workflow on both CPUs and GPUs on HPC resources for each minor and major release. Individual tests in the workflow should preferably run in less than 2 hours (the walltime limit for small jobs on Frontier), though most run in 10-15 minutes. If a longer time is needed, the test should be restartable (see e.g. the NVE energy conservation tests in the repository). The number of replicates is configurable, allowing one to test the workflow without spending too many CPU/GPU hours.

You will find that most tests in the repository run on both the CPU (with MPI to keep the run time reasonable) and the GPU for completeness. That is not strictly necessary if you would prefer to validate only one of the code paths. Note that the GPU tests are opt-in: the GPU operations are only added when the user sets the corresponding configuration option.

There are plotting utility functions in the repository to compare the measured values against the expected ones (or against the average of all measurements when there is no known expected value). See the pull requests on that repository for example outputs, or run one of the shorter subprojects.
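The restartable-test pattern mentioned above can be sketched in plain Python. This is only an illustrative pattern, not code from hoomd-validation; the checkpoint file name, block count, and helper names are made up, and the "simulation" is a placeholder:

```python
import json
import os

CHECKPOINT = "progress.json"  # hypothetical checkpoint file name
TOTAL_BLOCKS = 8              # total work, split into restartable blocks


def run_block(index):
    """Stand-in for one block of simulation work (e.g. 10-15 minutes)."""
    return index * index  # placeholder "measurement"


def run_restartable():
    """Run the work in blocks, writing a checkpoint after each block so a
    walltime kill loses at most one block of work."""
    state = {"next_block": 0, "results": []}
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            state = json.load(f)  # resume from the last checkpoint
    while state["next_block"] < TOTAL_BLOCKS:
        state["results"].append(run_block(state["next_block"]))
        state["next_block"] += 1
        with open(CHECKPOINT, "w") as f:
            json.dump(state, f)  # safe restart point
    return state["results"]
```

Resubmitting the same job after a walltime kill then simply resumes from the last completed block instead of starting over.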
OK, thanks! We can plan to add our tests there. To make sure I understand correctly: your acceptance of the tests is based on a visual check of the plots that come out at the end of the workflow, right? So the plots should present quantities in a way that makes it easy to see quickly whether they match.
Yes, acceptance of the validation tests is visual. It is really hard to get good estimates of the actual error on the expected value, which in turn makes it difficult to implement automated tests with a known confidence interval. Even with a known, large confidence interval, tests would statistically fail often enough to make automated testing challenging. Visual inspection allows one to identify systematic bias and outliers, and to judge whether something slightly outside the confidence interval should be investigated as a bug or considered a pass. It also allows one to accept known problematic methods (like the MTTK thermostat) when the inaccuracy is due to the numerical method itself and not HOOMD's implementation.
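A quick back-of-the-envelope calculation illustrates why automated statistical pass/fail checks are troublesome. Assuming (for illustration only) that each measured quantity is checked against an independent 95% confidence interval, the chance that a fully correct code trips at least one check grows quickly with the number of checks:

```python
def p_any_failure(n_checks, confidence=0.95):
    """Probability that at least one of n independent checks falls outside
    its confidence interval purely by chance."""
    return 1.0 - confidence ** n_checks


for n in (1, 10, 50):
    print(f"{n:3d} checks -> {p_any_failure(n):.0%} chance of a spurious failure")
# → 5%, 40%, and 92% respectively
```

Widening the tolerances until spurious failures become rare also makes the tests too loose to catch subtle bugs, which is why visual inspection of the plots works better here.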
@joaander I'm going to write a short migration guide for MPCD from HOOMD 2 to 4, but
OK, I will test on AMD when I get a chance. If you prefer, you are welcome to maintain your validation tests separately.
Description
The MPCD component needs to be migrated to the v4 API. This is more involved than some other components because MPCD has its own particle data structures. Tasks and related issues: