new stress/mop and stress/mop/profile computes for USER-MISC #1092
This pull request provides the stress/mop and stress/mop/profile compute styles. These are LAMMPS compute styles that calculate components of the local stress tensor using the method of planes, as described in the paper by Todd et al. (B. D. Todd, Denis J. Evans, and Peter J. Daivis: "Pressure tensor for inhomogeneous fluids", Phys. Rev. E 52, 1627 (1995)).
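For readers unfamiliar with the method of planes, here is a minimal Python sketch of the configurational contribution to the stress across a single plane. This is only an illustration of the idea, not the PR's C++ implementation; the function names, the pairwise-force callback, and the simplified sign convention are assumptions.

```python
import itertools

def mop_configurational(positions, pair_force, z_plane, area):
    """Configurational part of the (P_xz, P_yz, P_zz) stress components
    across the plane z = z_plane, in the spirit of the method of planes
    (Todd, Evans, Daivis, Phys. Rev. E 52, 1627 (1995)).

    positions  : list of (x, y, z) tuples
    pair_force : pair_force(ri, rj) -> (Fx, Fy, Fz), force on i due to j
    z_plane    : position of the measurement plane along z
    area       : cross-sectional area of the plane
    """
    p = [0.0, 0.0, 0.0]
    for ri, rj in itertools.combinations(positions, 2):
        # only pairs whose members sit on opposite sides of the plane contribute
        if (ri[2] - z_plane) * (rj[2] - z_plane) >= 0.0:
            continue
        f = pair_force(ri, rj)  # force on i due to j
        # count the force transmitted through the plane, signed by which
        # side atom i sits on (simplified sign convention)
        sign = 1.0 if ri[2] > z_plane else -1.0
        for a in range(3):
            p[a] += sign * f[a]
    return tuple(c / area for c in p)
```

The kinetic contribution (momentum carried by atoms crossing the plane during a timestep) is omitted here; the PR's compute styles account for both terms.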
These files were created by Laurent Joly (University of Lyon 1) and Romain Vermorel (University of Pau and Pays de l'Adour).
No changes in the pull request break backward compatibility.
We verified the computation of the local stress tensor by comparing our results against simple configurations for which abundant data is available in the literature (a bulk Lennard-Jones fluid, a Lennard-Jones fluid confined in a slit pore, etc.). We benchmarked the code in serial and in parallel, for various boundary conditions and spatial decompositions between processors.
Post Submission Checklist
Please check the fields below as they are completed
Further Information, Files, and Links
@RomainVermorel thanks for your contribution. I've made a bunch of cosmetic changes to follow the LAMMPS style conventions and moved files from the source folder into the examples and doc tree. I've also merged the two mostly identical doc pages into one describing both commands. Please see my review message for an issue that needs fixing or explaining, and a suggestion for more clarity.
Before merging these compute styles into LAMMPS, I'd like to suggest renaming the commands from
When running in parallel (with 4 MPI ranks), the output in the profile.z file differs: the parallel output has multiple rows of zeros where the serial output has values. This hints at a parallel processing issue when merging and outputting the data. If this is instead the correct behavior, there should be an explanation in the documentation.
Following your suggestions, I have changed the names of the computes to 'stress/mop' and 'stress/mop/profile' and modified the documentation and example files accordingly. I have also completed the comment sections of the .h files to list and describe all possible error and warning messages.
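To illustrate the renamed commands, a hypothetical input fragment might look like the following. This is a sketch, not taken verbatim from the PR's example script; the compute/fix IDs, argument values, and output file names are assumptions (the argument order here is direction, plane position, then contribution keywords).

```
# hypothetical usage of the renamed compute styles
compute  mopz  all stress/mop z center total            # stress across one plane at the box center
compute  mopp  all stress/mop/profile z lower 0.1 total # profile of equally spaced planes
fix      avt   all ave/time 10 100 1000 c_mopz[*] file mopz0.time
```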
Laurent Joly and I were not able to reproduce the difference you mention between the profile.z files obtained when running LAMMPS in serial and in parallel. Moreover, the profile.z file only gathers results from the existing compute stress/atom; it was included in the example script as a comparison with the outputs of compute mop and compute mop/profile (files mopz0.time and moppz.time). Therefore, if there is indeed a problem, it appears to be independent of our new computes.
If you think it would be preferable, we can remove this part of the example file and leave only the compute stress/mop and compute stress/mop/profile calculations. In the meantime, I have removed the reference output and log files.
Moreover, we are considering changing the name of this package to 'USER-STRESS', as it would leave open the possibility of adding other stress computation methods in the future. Would that be suitable?
@RomainVermorel please note that you didn't "rename" the files but just added new ones. I had to do a significant amount of cleaning up, removing obsoleted files and changing doc entries. I have done this since this is your first contribution. In the future, please make a better effort and, in particular, follow the GitHub tutorial in the LAMMPS manual more closely. Start by not doing development in the "master" branch but using a feature branch (after your code is merged, it is probably best to start over cleanly: drop your LAMMPS fork and make a new one, but don't do that yet).
As for the renaming of the package, this is a question to be answered by our mastermind @sjplimp
As for the differences between single- and multi-processor runs, I see them for all generated files. Please check out the files uploaded to the pull request on GitHub in the
Yes, I like the names compute stress/mop/... better.
@akohlmey sorry for the inconvenience caused, I am new to GitHub... I must have overlooked the part of the tutorial on the feature branch. Next time I will pay more attention.
I suppose the differences between serial and parallel runs resulted from the initialization of the system in the example input script. We indeed used some random procedures for deleting some atoms and creating initial velocities, which led to different results depending on the number of processors.
We have prepared a new example input script and a data file to make sure the initial conditions are the same irrespective of the number of processors. This simulation produces the same output data in single- and multi-processor runs. Please check out the new input scripts in the attached file and let me know if this solves the issue.
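A sketch of the kind of deterministic setup described above (the data file name, temperature, and seed are hypothetical): the initial configuration is read from a fixed data file rather than generated with random deletions, and velocities are assigned with the `loop geom` option so they depend on atom coordinates rather than on the processor decomposition.

```
# sketch: processor-count-independent initialization (assumed names/values)
read_data  data.lj_slit                                    # fixed initial configuration
velocity   all create 1.0 87287 dist gaussian loop geom    # decomposition-independent seeding
```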