4 changes: 2 additions & 2 deletions .github/workflows/docs.yml
@@ -33,10 +33,10 @@ jobs:

- name: Install dependencies
run: |
uv pip install .[test,docs] --system
uv pip install ".[test,docs]" --system

- name: Install extras for tutorial generation
run: uv pip install ".[graphpes,mace]" --system
run: uv pip install ".[graphpes,mace,metatensor]" --system

- name: Copy tutorials
run: |
5 changes: 3 additions & 2 deletions .github/workflows/test.yml
@@ -43,7 +43,8 @@ jobs:
--ignore=tests/models/test_mace.py \
--ignore=tests/models/test_fairchem.py \
--ignore=tests/models/test_orb.py \
--ignore=tests/models/test_sevennet.py
--ignore=tests/models/test_sevennet.py \
--ignore=tests/models/test_metatensor.py

- name: Upload coverage to Codecov
uses: codecov/codecov-action@v5
@@ -64,6 +65,7 @@ jobs:
- { name: mace, test_path: "tests/models/test_mace.py" }
- { name: mace, test_path: "tests/test_elastic.py" }
- { name: mattersim, test_path: "tests/models/test_mattersim.py" }
- { name: metatensor, test_path: "tests/models/test_metatensor.py" }
- { name: orb, test_path: "tests/models/test_orb.py" }
- { name: sevenn, test_path: "tests/models/test_sevennet.py" }
- { name: graphpes, test_path: "tests/models/test_graphpes.py" }
@@ -112,7 +114,6 @@ jobs:
if: ${{ matrix.model.name != 'fairchem' }}
run: uv pip install -e .[test,${{ matrix.model.name }}] --resolution=${{ matrix.version.resolution }} --system


- name: Run Tests with Coverage
run: |
pytest --cov=torch_sim --cov-report=xml ${{ matrix.model.test_path }}
2 changes: 1 addition & 1 deletion README.md
@@ -14,7 +14,7 @@ era. By rewriting the core primitives of atomistic simulation in Pytorch, it all
orders of magnitude acceleration of popular machine learning potentials.

* Automatic batching and GPU memory management allowing significant simulation speedup
* Support for MACE, Fairchem, and SevenNet MLIP models with more in progress
* Support for MACE, Fairchem, SevenNet, ORB, MatterSim, and metatensor MLIP models
* Support for classical Lennard-Jones, Morse, and soft-sphere potentials
* Molecular dynamics integration schemes like NVE, NVT Langevin, and NPT Langevin
* Relaxation of atomic positions and cell with gradient descent and FIRE
10 changes: 9 additions & 1 deletion docs/conf.py
@@ -62,7 +62,15 @@
"html_image",
]

autodoc_mock_imports = ["fairchem", "mace", "mattersim", "orb", "sevennet", "graphpes"]
autodoc_mock_imports = [
    "fairchem",
    "mace",
    "mattersim",
    "metatensor",
    "orb",
    "sevennet",
    "graphpes",
]

# use type hints
autodoc_typehints = "description"
6 changes: 3 additions & 3 deletions docs/index.md
@@ -30,7 +30,7 @@ about/contributing
about/license
```

# torch_sim documentation
# TorchSim documentation

**Date**: {sub-ref}`today`

@@ -50,7 +50,7 @@ TorchSim is a next-generation open-source atomistic simulation engine for the ML
:class-header: bg-light
**User Guide** 🚀
^^^
The user guide provides in-depth information and tutorials for using *torch_sim*.
The user guide provides in-depth information and tutorials for using *TorchSim*.
:::

:::{grid-item-card}
@@ -59,7 +59,7 @@ The user guide provides in-depth information and tutorials for using *torch_sim*
:class-header: bg-light
**API reference** 📖
^^^
The reference guide contains a detailed description of the *torch_sim* API. It
The reference guide contains a detailed description of the *TorchSim* API. It
assumes that you have an understanding of the key concepts.
:::

1 change: 1 addition & 0 deletions docs/tutorials/index.rst
@@ -19,3 +19,4 @@ versions of the tutorials can also be found in the `torch-sim /examples/tutorial
low_level_tutorial
hybrid_swap_tutorial
using_graphpes_tutorial
metatensor_tutorial
73 changes: 73 additions & 0 deletions examples/tutorials/metatensor_tutorial.py
@@ -0,0 +1,73 @@
# %% [markdown]
# <details>
# <summary>Dependencies</summary>
# /// script
# dependencies = [
# "metatrain[pet] >=2025.4",
# "metatensor-torch >=0.7,<0.8"
# ]
# ///
# </details>


# %% [markdown]
"""
# Using the PET-MAD model with metatensor

This tutorial explains how to use the PET-MAD model (https://arxiv.org/abs/2503.14118)
via TorchSim's metatensor interface.

## Loading the model

Loading the model is simple: you only need to specify the model name (in this case
"pet-mad"), as shown below. All other arguments are optional; for example, you can
specify the device explicitly (a short sketch of this follows the next cell). If the
device is not specified, as here, the optimal device is chosen automatically.
"""

# %%
from torch_sim.models import MetatensorModel

model = MetatensorModel("pet-mad")
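
# %% [markdown]
"""
As a quick sketch of explicit device placement (the `device` argument accepts a
`torch.device`, as used in TorchSim's metatensor tests; adjust the choice below to
your hardware):
"""

# %%
import torch

# Pick a device explicitly, falling back to CPU when no GPU is available.
explicit_device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_on_device = MetatensorModel("pet-mad", device=explicit_device)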

# %% [markdown]
"""
## Using the model to run a molecular dynamics simulation

Once the model is loaded, you can use it just like any other TorchSim model to run
simulations. Here, we show how to run a simple MD simulation consisting of an initial
NVT equilibration run followed by an NVE run.
"""
# %%
from ase.build import bulk
import torch_sim as ts

atoms = bulk("Si", "diamond", a=5.43, cubic=True)

equilibrated_state = ts.integrate(
    system=atoms,
    model=model,
    integrator=ts.nvt_langevin,
    n_steps=100,
    temperature=300,  # K
    timestep=0.001,  # ps
)

final_state = ts.integrate(
    system=equilibrated_state,
    model=model,
    integrator=ts.nve,
    n_steps=100,
    temperature=300,  # K
    timestep=0.001,  # ps
)
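
# %% [markdown]
"""
The object returned by `ts.integrate` is a TorchSim state, so the final configuration
is available directly as tensors. As a quick sanity check (attribute names here follow
TorchSim's `SimState` conventions; see the API reference if your version differs):
"""

# %%
print(final_state.positions.shape)  # per-atom positions, shape (n_atoms, 3)
print(final_state.cell)  # simulation cell tensor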

# %% [markdown]
"""
## Further steps

In practice, you would want to run the simulation for much longer, save trajectories,
and analyze the results; a minimal sketch of such a longer run is shown below. Still,
this is all you need to get started with metatensor and PET-MAD. For more details on
how to use TorchSim, refer to the other tutorials in this section.
"""
1 change: 1 addition & 0 deletions pyproject.toml
@@ -46,6 +46,7 @@ test = [
]
mace = ["mace-torch>=0.3.11"]
mattersim = ["mattersim>=0.1.2"]
metatensor = ["metatensor-torch >=0.7,<0.8", "metatrain[pet] >=2025.4"]
orb = [
"orb-models@git+https://github.com/orbital-materials/orb-models#egg=637a98d49cfb494e2491a457d9bbd28311fecf21",
]
65 changes: 65 additions & 0 deletions tests/models/test_metatensor.py
@@ -0,0 +1,65 @@
import pytest
import torch

from tests.models.conftest import (
    consistency_test_simstate_fixtures,
    make_model_calculator_consistency_test,
    make_validate_model_outputs_test,
)


try:
    from metatensor.torch.atomistic import ase_calculator
    from metatrain.utils.io import load_model

    from torch_sim.models.metatensor import MetatensorModel
except ImportError:
    pytest.skip("Metatensor not installed", allow_module_level=True)


@pytest.fixture
def dtype() -> torch.dtype:
"""Fixture to provide the default dtype for testing."""
return torch.float32


@pytest.fixture
def metatensor_calculator(device: torch.device):
"""Load a pretrained metatensor model for testing."""
return ase_calculator.MetatensorCalculator(
model=load_model(
"https://huggingface.co/lab-cosmo/pet-mad/resolve/main/models/pet-mad-latest.ckpt"
).export(),
device=device,
)


@pytest.fixture
def metatensor_model(device: torch.device) -> MetatensorModel:
"""Create an MetatensorModel wrapper for the pretrained model."""
return MetatensorModel(
model="pet-mad",
device=device,
)


def test_metatensor_initialization(device: torch.device) -> None:
"""Test that the metatensor model initializes correctly."""
model = MetatensorModel(
model="pet-mad",
device=device,
)
assert model.device == device
assert model.dtype == torch.float32


test_metatensor_consistency = make_model_calculator_consistency_test(
    test_name="metatensor",
    model_fixture_name="metatensor_model",
    calculator_fixture_name="metatensor_calculator",
    sim_state_names=consistency_test_simstate_fixtures,
)

test_metatensor_model_outputs = make_validate_model_outputs_test(
    model_fixture_name="metatensor_model",
)
5 changes: 5 additions & 0 deletions torch_sim/models/__init__.py
@@ -36,3 +36,8 @@
    from torch_sim.models.graphpes import GraphPESWrapper
except ImportError:
    pass

try:
    from torch_sim.models.metatensor import MetatensorModel
except ImportError:
    pass