Merge branch 'master' of github.com:ur-whitelab/hoomd-tf
RainierBarrett committed Aug 25, 2021
2 parents c671e4c + f928c5c commit a893532
Showing 10 changed files with 686 additions and 27 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/test.yml
@@ -71,5 +71,5 @@ jobs:
if: matrix.md-analysis == 'false'
run: |
cd examples
jupyter nbconvert --ExecutePreprocessor.timeout=-1 --execute "01. Quickstart.ipynb" "02. Preparing Coarse-grained Mapped Simulation.ipynb" "03. Experiment Directed Simulations.ipynb" "04. Particle Simulations.ipynb" "07. Molecules CG Mapping.ipynb" --to notebook --clear-output --output-dir='temp' --Application.log_level='ERROR'
jupyter nbconvert --ExecutePreprocessor.timeout=-1 --execute "01. Quickstart.ipynb" "02. Preparing Coarse-grained Mapped Simulation.ipynb" "03. Experiment Directed Simulations.ipynb" "04. Particle Simulations.ipynb" "11. Using External Models.ipynb" --to notebook --clear-output --output-dir='temp' --Application.log_level='ERROR'
rm -rf temp
File renamed without changes.
636 changes: 636 additions & 0 deletions examples/11. Using External Models.ipynb

Large diffs are not rendered by default.

3 changes: 2 additions & 1 deletion htf/layers.py
@@ -29,7 +29,8 @@ def __init__(self, low, high, count):
super(RBFExpansion, self).__init__(name='rbf-layer')
self.low = low
self.high = high
self.centers = tf.cast(tf.linspace(low, high, count), dtype=tf.float32)
self.centers = tf.cast(tf.linspace(
float(low), float(high), count), dtype=tf.float32)
self.gap = self.centers[1] - self.centers[0]

def get_config(self):
2 changes: 1 addition & 1 deletion htf/tensorflowcompute.py
@@ -251,7 +251,7 @@ def enable_mapped_nlist(self, system, mapping_fxn):
self.model._map_i = AAN
self._map_typeid_start = map_typeid_start
# these are inclusive semantics
map_group = hoomd.group.tags(M, M + AAN - 1)
map_group = hoomd.group.tags(AAN, M + AAN - 1)
aa_group = hoomd.group.tags(0, AAN - 1)

if self._nlist is not None:
6 changes: 5 additions & 1 deletion htf/test-py/test_tensorflow.py
@@ -595,11 +595,15 @@ def test_mapped_nlist(self):
self.assertEqual(len(system.particles), N)
aa_group, mapped_group = tfcompute.enable_mapped_nlist(
system, build_examples.MappedNlist.my_map)

assert len(aa_group) == N
assert len(mapped_group) == 2
# 2 CG sites
self.assertEqual(len(system.particles), N + CGN)
nlist = hoomd.md.nlist.cell()
hoomd.md.integrate.mode_standard(dt=0.001)
hoomd.md.integrate.nve(group=aa_group).randomize_velocities(seed=1, kT=0.8)
hoomd.md.integrate.nve(
group=aa_group).randomize_velocities(seed=1, kT=0.8)
tfcompute.attach(nlist, r_cut=rcut, save_output_period=2)
hoomd.run(8)
positions = tfcompute.outputs[0].reshape(-1, N + CGN, 4)
44 changes: 26 additions & 18 deletions sphinx-docs/source/building_a_model.rst
@@ -3,7 +3,7 @@
Building a Model
==================

To modify a simulation, you create a Keras model that will be executed at
To modify a simulation, you create a Keras :obj:`tf.keras.Model` that will be executed at
each step (or some multiple of steps) during the simulation. See the :ref:`running`
to see how to train your model instead, though these instructions still apply.

@@ -24,15 +24,15 @@ where ``NN`` is the maximum number of nearest neighbors to consider
are unsure, you can guess and add ``check_nlist = True`` to your
constructor. This will cause the program to halt if you choose too low.
``output_forces`` indicates if the model will output forces to use in
the simulation. In the :py:meth:`.SimModel.compute` function you will have three
tensors that can be used:``nlist``, ``positions``, ``box``:
the simulation. In the :py:meth:`SimModel.compute(nlist, positions, box)<.SimModel.compute>` function you will have three
tensors that can be used:

* ``nlist`` is an ``N`` x ``NN`` x 4 tensor containing the nearest
neighbors. An entry of all zeros indicates that fewer than ``NN`` nearest
neighbors were present for a particular particle. The 4 right-most
dimensions are ``x,y,z`` and ``w``, which is the particle type. Particle
type is an integer starting at 0. Note that the ``x,y,z`` values are a
vector originating at the particle and ending at its neighbor.
* ``nlist`` is an ``N`` x ``NN`` x 4 tensor containing the nearest
neighbors. An entry of all zeros indicates that fewer than ``NN`` nearest
neighbors were present for a particular particle. The 4 right-most
dimensions are ``x,y,z`` and ``w``, which is the particle type. Particle
type is an integer starting at 0. Note that the ``x,y,z`` values are a
vector originating at the particle and ending at its neighbor.

* ``positions`` is an ``N`` x 4 tensor of particle positions (x,y,z) and type.

@@ -47,7 +47,7 @@ desired.
Keras Model
-----------

Your models are `Keras Models <https://keras.io/api/models/model/>`_ so that all
Your models are Keras :obj:`tf.keras.Model`s so that all
the usual process of building layers, saving, and computing metrics apply. For example,
here is a two hidden layer neural network force-field that uses the 8 nearest neighbors to compute
forces.
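
The example model itself sits in an unchanged part of the file and is not shown in this hunk. A minimal sketch of such a model, assuming layers are built in a ``setup`` hook and that ``htf.nlist_rinv`` and ``htf.compute_nlist_forces`` are the module-level helpers referenced elsewhere on this page, could be:

.. code:: python

    import tensorflow as tf
    import hoomd.htf as htf

    class NeighborNN(htf.SimModel):
        def setup(self):
            # illustrative layer sizes, not the values from the docs example
            self.dense1 = tf.keras.layers.Dense(16, activation='tanh')
            self.dense2 = tf.keras.layers.Dense(16, activation='tanh')
            self.energy_out = tf.keras.layers.Dense(1)

        def compute(self, nlist, positions, box):
            # 1 / r for every neighbor slot, shape N x NN (0 where padded)
            rinv = htf.nlist_rinv(nlist)
            # keep only the 8 nearest neighbors (largest 1 / r values)
            top8 = tf.sort(rinv, axis=1, direction='DESCENDING')[:, :8]
            hidden = self.dense2(self.dense1(top8))
            energy = self.energy_out(hidden)
            forces = htf.compute_nlist_forces(nlist, energy)
            return forces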
@@ -109,7 +109,7 @@ that the molecules can be different size and atoms can exist in multiple
molecules.


`mol_compute` has the following additional arguments:
:obj:`MolSimModel.mol_compute(self, nlist, positions, mol_nlist, mol_pos)<.mol_compute>` has the following additional arguments:
``mol_positions`` and ``mol_nlist``. These new attributes are dimension
``M x MN x ...`` where ``M`` is the number of molecules and ``MN`` is
the atom index within the molecule. If your molecule has fewer than
@@ -166,11 +166,11 @@ the neighbor list. For example, to compute a :math:`1 / r` potential:
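The example body is in an unchanged part of the file and is not rendered in this hunk; a minimal sketch consistent with the note that follows (``htf.compute_nlist_forces`` assumed to be the module-level helper) could be:

.. code:: python

    class InversePotential(htf.SimModel):
        def compute(self, nlist, positions, box):
            # pairwise neighbor distances, shape N x NN; padded slots give r = 0
            r = tf.norm(nlist[:, :, :3], axis=2)
            # 1 / r, with the 1 / 0 entries from padding mapped to 0
            rinv = tf.math.divide_no_nan(1.0, r)
            # 0.5 accounts for double counting in the full neighbor list
            energy = 0.5 * tf.reduce_sum(rinv, axis=1)
            forces = htf.compute_nlist_forces(nlist, energy)
            return forces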
Notice that in the above example we have used the
``tf.math.divide_no_nan`` method, which allows
:obj:`tf.math.divide_no_nan` method, which allows
us to safely treat a :math:`1 / 0`, which can arise because ``nlist``
contains 0s for when fewer than ``NN`` nearest neighbors are found.

There is also a method :py:func:`.compute_positions_forces` which
There is also a method :py:func:`compute_positions_forces(positions, energy)<.compute_positions_forces>` which
can be used to compute position dependent forces.

**Note:** because ``nlist`` is a *full*
Expand All @@ -184,20 +184,20 @@ Neighbor lists

The ``nlist`` is an ``N x NN x 4``
neighbor list tensor. You can compute a masked version of this with
:py:func:`.masked_nlist` via ``masked_nlist(nlist, type_tensor, type_i, type_j)``
:py:func:`masked_nlist(nlist, type_tensor, type_i, type_j)<.masked_nlist>`
where ``type_i`` and ``type_j`` are optional integers that specify the type of
the origin (``type_i``) or neighbor (``type_j``). ``type_tensor`` is
``positions[:,3]`` or your own types can be chosen. You can also use :py:func:`.nlist_rinv` which gives a
``positions[:,3]`` or your own types can be chosen. You can also use :py:func:`nlist_rinv(nlist)<.nlist_rinv>` which gives a
pre-computed ``1 / r`` (dimension ``N x NN``).

.. _virial:

Virial
------

A virial term can be added by doing the following extra steps:
A virial term can be added by doing *both* of the following extra steps:

1. Compute virial with your forces :py:func:`.compute_nlist_forces` by adding the ``virial=True`` arg.
1. Compute virial with your forces :py:func:`compute_nlist_forces(nlist, energy,virial=True)<.compute_nlist_forces>` by adding the ``virial=True`` arg.
2. Add the ``modify_virial=True`` argument to your model constructor
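
Both steps in one sketch (the class name, layer-free energy expression, and ``NN`` value are illustrative placeholders):

.. code:: python

    NN = 64  # maximum number of neighbors, as above

    class VirialModel(htf.SimModel):
        def compute(self, nlist, positions, box):
            rinv = htf.nlist_rinv(nlist)
            energy = 0.5 * tf.reduce_sum(rinv, axis=1)
            # step 1: also compute the virial from the forces
            return htf.compute_nlist_forces(nlist, energy, virial=True)

    # step 2: pass modify_virial=True to the model constructor
    model = VirialModel(NN, modify_virial=True)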

.. _model_saving_and_loading:
@@ -207,18 +207,26 @@ Mapped quantities
------------------

If mapped quantities are desired for coarse-graining while running a simulation, you can call
:py:meth:`.tfcompute.enable_mapped_nlist` to utilize hoomd to compute fast neigbhor lists.
:py:meth:`tfcompute.enable_mapped_nlist(system, mapping_fxn)<.tfcompute.enable_mapped_nlist>` to utilize hoomd to compute fast neighbor lists.
The model code can then use :py:meth:`.SimModel.mapped_nlist` and
:py:meth:`.SimModel.mapped_positions` to access mapped nlist and positions. An example:

.. code:: python

    import hoomd.htf as htf

    def mapping_fxn(AA):
        return M @ AA

    class MyModel(htf.SimModel):
        def compute(self, nlist, positions, forces):
            aa_nlist, mapped_nlist = self.mapped_nlist(nlist)
            aa_pos, mapped_pos = self.mapped_positions(positions)
            ...

    tfcompute.enable_mapped_nlist(system, mapping_fxn)
Call :py:meth:`.tfcompute.enable_mapped_nlist` prior to running
the simulation.

14 changes: 12 additions & 2 deletions sphinx-docs/source/conf.py
@@ -29,7 +29,7 @@
# -- Project information -----------------------------------------------------

project = 'HOOMD-TF'
copyright = '2020 HOOMD-TF Developers'
copyright = '2021 HOOMD-TF Developers'
author = 'Andrew D White, Rainier Barrett, Heta A Gandhi,\
Geemi Wellawatte, Maghesree Chakraborty,\
Mehrad Ansari, Dilnoza B Amirkulova, \
@@ -44,7 +44,8 @@
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.autosummary',
'sphinx.ext.mathjax'
'sphinx.ext.mathjax',
'sphinx.ext.intersphinx'
]

if sphinx_ver < (1, 8, 0):
@@ -80,3 +81,12 @@

# define master doc for newer versions of sphinx
master_doc = 'index'

# make sure we see broken links
nitpicky = True

intersphinx_mapping = {
'hoomd': ('https://hoomd-blue.readthedocs.io/en/stable/', None),
'tf': ('https://www.tensorflow.org/api_docs/python',
'https://github.com/mr-ubik/tensorflow-intersphinx/raw/master/tf2_py_objects.inv')
}
2 changes: 1 addition & 1 deletion sphinx-docs/source/model_layers.rst
@@ -58,6 +58,6 @@ biasing to a system, use an EDS Layer (:py:class:`.EDSLayer`):
return forces, alpha
Here,
:py:class:`.EDSLayer.update_state`
:obj:`EDSModel.update_state<tf.keras.metrics.Mean>`
returns the Lagrange multiplier/EDS coupling that
is used to bias the simulation.
4 changes: 2 additions & 2 deletions sphinx-docs/source/running.rst
@@ -120,8 +120,8 @@ Here is a complete example:
Changing Model Object
----------------------

If your :py:class: `.MolSimModel.compute` method depends on attributes of ``self``,
you have to call an :py:class: `.MolSimModel.retrace_compute` if you updated these attributes.
If your :py:meth:`SimModel.compute(nlist, positions, box)<.SimModel.compute>` method depends on attributes of ``self``,
you have to call :py:meth:`.SimModel.retrace_compute` if you update these attributes.
See an example

.. code:: python
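
    # The example from the docs sits below this point in the file and is not
    # rendered in this diff; the following is a hypothetical sketch of the
    # retrace pattern described above (model class, attribute name, and run
    # lengths are placeholders).
    model = MyModel(NN)
    tfcompute.attach(nlist, r_cut=rcut)
    hoomd.run(100)

    model.some_parameter = 2.0     # change an attribute that compute() reads
    model.retrace_compute()        # rebuild the traced compute graph
    hoomd.run(100)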
