QUESTS provides model-free uncertainty and entropy estimation methods for interatomic potentials. Among these methods, we propose a structural descriptor based on k-nearest neighbors that:
- Is fast to compute, as it uses only distances between atoms within an environment. Because the computation of descriptors is efficiently parallelized, generating descriptors for 1.5M environments takes about 3 seconds on 56 threads (tested on Intel Xeon CLX-8276L CPUs).
- Can be used to analyze datasets for atomistic machine learning, providing quantities such as dataset entropy, diversity, information gap, and others.
- Is shown to recover many useful properties of information theory and can be used to inform dataset compression.
This package also contains tools to interface with other representations and packages.
To install QUESTS from PyPI, use pip:

pip install quests
To install the quests package directly from the repository, clone it from GitHub and use pip to install it into your virtual environment:
git clone https://github.com/dskoda/quests.git
cd quests
pip install .
Once installed, you can use the quests command to perform different analyses. For example, to compute the entropy of any dataset (the input can be anything that ASE reads, including xyz files), you can use the quests entropy command:
quests entropy dump.lammpstrj --bandwidth 0.015
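For context, the entropy reported here is a kernel-based estimate computed over the descriptors of all atomic environments in the input. A standard kernel density estimator of the dataset entropy, with Gaussian kernel $K_h$ and bandwidth $h$, takes the form

$$
H(X) \approx -\frac{1}{N} \sum_{i=1}^{N} \log\left[ \frac{1}{N} \sum_{j=1}^{N} K_h(x_i, x_j) \right],
$$

where the $x_i$ are the $N$ environment descriptors. This is a sketch of the general idea; the exact estimator used by QUESTS is defined in the paper cited below.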
To estimate the entropy by subsampling instead of using the entire dataset, use the entropy_sampler command:
quests entropy_sampler dataset.xyz --batch_size 20000 -s 100000 -n 3
Here, -s specifies the number of sampled environments and -n specifies how many runs will be computed (for statistics).
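If you prefer the Python API, the sketch below approximates what entropy_sampler does. It is a minimal illustration only; the uniform random sampling without replacement is an assumption, not the CLI's documented behavior:

```python
import numpy as np
from ase.io import read
from quests.descriptor import get_descriptors
from quests.entropy import perfect_entropy

dset = read("dataset.xyz", index=":")
x = get_descriptors(dset, k=32, cutoff=5.0)

n_runs, n_samples = 3, 100_000
rng = np.random.default_rng(42)

entropies = []
for _ in range(n_runs):
    # draw a random subset of environments (assumed sampling strategy)
    idx = rng.choice(len(x), size=min(n_samples, len(x)), replace=False)
    entropies.append(perfect_entropy(x[idx], h=0.015, batch_size=20000))

print(f"H = {np.mean(entropies):.3f} +/- {np.std(entropies):.3f}")
```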
For additional help with these commands, please use quests --help, quests entropy --help, and others.
To use the QUESTS package to create descriptors and compute entropies, you can use the descriptor and entropy submodules:
from ase.io import read
from quests.descriptor import get_descriptors
from quests.entropy import perfect_entropy, diversity
dset = read("dataset.xyz", index=":")
x = get_descriptors(dset, k=32, cutoff=5.0)
h = 0.015
batch_size = 10000
H = perfect_entropy(x, h=h, batch_size=batch_size)
D = diversity(x, h=h, batch_size=batch_size)
In this example, descriptors are created using 32 nearest neighbors and a 5.0 Å cutoff. The entropy and diversity are computed using a Gaussian kernel (the default) with a bandwidth of 0.015 1/Å and a batch size of 10,000.
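To quantify how much new information a test dataset Y carries with respect to a reference dataset X, you can compute the delta entropy dH(Y | X):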
from ase.io import read
from quests.descriptor import get_descriptors
from quests.entropy import delta_entropy
dset_x = read("reference.xyz", index=":")
dset_y = read("test.xyz", index=":")
k, cutoff = 32, 5.0
x = get_descriptors(dset_x, k=k, cutoff=cutoff)
y = get_descriptors(dset_y, k=k, cutoff=cutoff)
# computes dH (Y | X)
dH = delta_entropy(y, x, h=0.015)
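Computing the exact dH requires comparing every environment in Y against all of X, which can be slow for large reference sets. The approx_delta_entropy function replaces this exact comparison with an approximate nearest-neighbor search: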
from ase.io import read
from quests.descriptor import get_descriptors
from quests.entropy import approx_delta_entropy
dset_x = read("reference.xyz", index=":")
dset_y = read("test.xyz", index=":")
k, cutoff = 32, 5.0
x = get_descriptors(dset_x, k=k, cutoff=cutoff)
y = get_descriptors(dset_y, k=k, cutoff=cutoff)
# approximates dH (Y | X)
# n = 5 and graph_neighbors = 10 are arguments for
# pynndescent, which performs an approximate nearest
# neighbor search for dH
dH = approx_delta_entropy(y, x, h=0.015, n=5, graph_neighbors=10)
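The n and graph_neighbors arguments are forwarded to pynndescent; as is typical for approximate nearest-neighbor searches, larger values tend to improve the accuracy of the approximation at the cost of speed.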
To accelerate the computation of dataset entropies, you can use PyTorch on a GPU. To do so, first install the optional dependencies for this repository:
pip install quests[gpu]
The syntax for computing the entropy with PyTorch is identical to the one above. Instead of importing the functions from quests.entropy, however, you should import them from quests.gpu.entropy. The descriptors remain the same - as of now, creating descriptors on GPUs is not supported. This means the descriptors have to be generated through the traditional route and later converted into a torch.tensor:
import torch
from ase.io import read
from quests.descriptor import get_descriptors
from quests.gpu.entropy import perfect_entropy
dset = read("dataset.xyz", index=":")
x = get_descriptors(dset, k=32, cutoff=5.0)
x = torch.tensor(x, device="cuda")
h = 0.015
batch_size = 10000
H = perfect_entropy(x, h=h, batch_size=batch_size)
To compute the overlap between two datasets, you can use the overlap command-line interface or the API:
quests overlap test.xyz ref.xyz -o results.json
This command will compute the overlap between the environments in test.xyz and ref.xyz, and save the results to results.json.
Using the API:
from ase.io import read
from quests.descriptor import get_descriptors
from quests.entropy import delta_entropy
test = read("test.xyz", index=":")
ref = read("ref.xyz", index=":")
k, cutoff = 32, 5.0
x1 = get_descriptors(test, k=k, cutoff=cutoff)
x2 = get_descriptors(ref, k=k, cutoff=cutoff)
h = 0.015 # bandwidth
eps = 1e-5 # threshold for overlap
delta = delta_entropy(x1, x2, h=h)
overlap = (delta < eps).mean()
print(f"Overlap value: {overlap:.4f}")
This example computes the overlap between two datasets using a bandwidth of 0.015 1/Å and an overlap threshold of 1e-5 (the eps value in the code above). The overlap is defined as the fraction of test environments whose delta entropy with respect to the reference is below the threshold, i.e., environments that are already well represented in the reference set.
To generate a learning curve using the command-line interface, you can use the learning_curve command. This command computes the entropy at different dataset fractions, allowing you to see how the entropy changes as you include more data:
quests learning_curve dataset.xyz -o learning_curve_results.json
This command will:
- Use the default fractions (0.1 to 0.9 in steps of 0.1)
- Compute the entropy for each fraction
- Run the computation 3 times for each fraction (default value)
- Save the results in a JSON file named learning_curve_results.json
You can customize the command with various options:
- -f or --fractions: Specify custom fractions (e.g., -f 0.2,0.4,0.6,0.8)
- -n or --num_runs: Set the number of runs for each fraction (e.g., -n 5)
- -b or --bandwidth: Set the bandwidth for the entropy calculation (e.g., -b 0.015)
A more customized command might look like this:
quests learning_curve dataset.xyz -f 0.2,0.4,0.6,0.8 -n 5 -c 5.0 -k 32 -b 0.015 -o custom_learning_curve.json
This will compute the learning curve for fractions 0.2, 0.4, 0.6, and 0.8, running each fraction 5 times, with a cutoff of 5.0 Å, 32 neighbors, and a bandwidth of 0.015 1/Å.
The resulting JSON file will contain detailed information about the learning curve, including the entropy values for each fraction and run, as well as the mean and standard deviation of the entropy for each fraction.
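For reference, the sketch below reproduces the spirit of the learning curve with the Python API. Drawing uniformly random subsets at each fraction is an assumption about how the CLI subsamples:

```python
import numpy as np
from ase.io import read
from quests.descriptor import get_descriptors
from quests.entropy import perfect_entropy

dset = read("dataset.xyz", index=":")
x = get_descriptors(dset, k=32, cutoff=5.0)

rng = np.random.default_rng(0)
for frac in [0.2, 0.4, 0.6, 0.8]:
    runs = []
    for _ in range(5):
        # random subset of environments at this fraction (assumed strategy)
        idx = rng.choice(len(x), size=int(frac * len(x)), replace=False)
        runs.append(perfect_entropy(x[idx], h=0.015, batch_size=10000))
    print(f"fraction {frac:.1f}: H = {np.mean(runs):.3f} +/- {np.std(runs):.3f}")
```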
If you use QUESTS in a publication, please cite the following paper:
@article{schwalbekoda2024information,
title = {Model-free quantification of completeness, uncertainties, and outliers in atomistic machine learning using information theory},
author = {Schwalbe-Koda, Daniel and Hamel, Sebastien and Sadigh, Babak and Zhou, Fei and Lordi, Vincenzo},
year = {2024},
journal = {arXiv:2404.12367},
url = {https://arxiv.org/abs/2404.12367},
}
The QUESTS software is distributed under the BSD-3-Clause license (SPDX: BSD-3-Clause). All new contributions must be made under this license.
This work was initially produced under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344, with support from LLNL's LDRD program under tracking codes 22-ERD-055 and 23-SI-006.
Code released as LLNL-CODE-858914.