# TorchRef

A PyTorch-based crystallographic refinement library.

TorchRef is a crystallographic refinement package built entirely on PyTorch. By leveraging PyTorch's automatic differentiation and GPU acceleration, TorchRef enables seamless integration with machine learning workflows and provides a flexible, extensible framework for crystallographic structure refinement.
## Features

- **Native PyTorch Integration**: Built on PyTorch's `nn.Module` architecture, TorchRef integrates naturally with the PyTorch ecosystem, including machine learning models, optimizers, and GPU acceleration.
- **Automatic Differentiation**: Dynamic computational graphs eliminate the need for manually implemented gradient calculations. Define new refinement targets directly; PyTorch handles the derivatives automatically.
- **Modular Architecture**: Following PyTorch's module pattern, components are easily composable and extensible. Add custom targets, restraints, or optimizers without modifying core code.
- **GPU Acceleration**: Leverage CUDA for structure factor calculations, scaling, and optimization, achieving significant speedups for large structures.
- **FFT-based Structure Factors**: Efficient structure factor calculation using Fast Fourier Transform (FFT) methods, enabling rapid F_calc computation even for large unit cells.
- **State Management**: Full `state_dict` support enables saving and loading complete refinement states, including model parameters, scaler settings, and restraints.
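The `nn.Module` and autodiff points above can be illustrated with plain PyTorch. This is a hypothetical sketch, not TorchRef's actual API (the class and function names below are invented for illustration): a refinement target is just a module whose loss is differentiable with respect to the refinable parameters, so no hand-written gradients are needed.

```python
import torch
import torch.nn as nn

class LeastSquaresTarget(nn.Module):
    """Toy refinement target: squared residual between |F_obs| and |F_calc|.
    (Hypothetical example; TorchRef's real target classes may differ.)"""
    def forward(self, f_calc: torch.Tensor, f_obs: torch.Tensor) -> torch.Tensor:
        return ((f_calc.abs() - f_obs.abs()) ** 2).sum()

def toy_f_calc(coords: torch.Tensor) -> torch.Tensor:
    # Stand-in for a real structure factor calculation: any differentiable
    # function of the coordinates works with autograd.
    return torch.fft.fft(coords.sum(dim=1)).abs()

# Refinable parameters (e.g. atomic coordinates) as a leaf tensor.
xyz = torch.randn(10, 3, requires_grad=True)
f_obs = torch.rand(10)

target = LeastSquaresTarget()
opt = torch.optim.Adam([xyz], lr=0.01)

loss = target(toy_f_calc(xyz), f_obs)
loss.backward()   # gradients w.r.t. xyz computed automatically
opt.step()        # one refinement step
```

Because the target is a regular `nn.Module`, it also composes with `state_dict()` saving/loading and with any optimizer from `torch.optim`.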
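The FFT route to structure factors can also be sketched in a few lines. This is a simplified illustration of the general technique, not TorchRef's implementation: sample a model electron density on a grid, and a single FFT yields F_calc for every Miller index at once, instead of a per-reflection direct summation.

```python
import torch

n = 32  # grid points per unit-cell edge
grid = torch.stack(torch.meshgrid(
    *([torch.arange(n, dtype=torch.float32) / n] * 3), indexing="ij"), dim=-1)

# Two toy "atoms" as isotropic Gaussians at fractional coordinates
# (positions and width are arbitrary illustration values).
positions = torch.tensor([[0.25, 0.25, 0.25], [0.75, 0.50, 0.50]])
width = 0.05
density = torch.zeros(n, n, n)
for pos in positions:
    r2 = ((grid - pos) ** 2).sum(dim=-1)
    density += torch.exp(-r2 / (2 * width ** 2))

f_calc = torch.fft.fftn(density)  # complex structure factors F(hkl)
print(f_calc.shape)               # torch.Size([32, 32, 32])
```

Since `torch.fft.fftn` runs on CUDA tensors as well, the same code accelerates on GPU by moving `density` to a CUDA device.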
## Installation

```bash
pip install torchref
```

To install from source:

```bash
git clone https://github.com/HatPdotS/TorchRef.git
cd torchref
pip install -e .
```

For development (including test dependencies):

```bash
pip install -e ".[dev]"
```
## Requirements

- Python ≥ 3.8
- PyTorch ≥ 1.9
- NumPy ≥ 1.20
- Gemmi ≥ 0.5
- reciprocalspaceship ≥ 0.9
- SciPy ≥ 1.7
## Testing

```bash
# Run all tests
pytest tests/

# Run with coverage
pytest tests/ --cov=torchref

# Run specific test categories
pytest tests/unit/        # Fast unit tests
pytest tests/integration/ # Integration tests
pytest tests/functional/  # Full workflow tests
```

## Contributing

Contributions are welcome! Please follow these guidelines:
- Follow the NumPy docstring style
- Add tests for new functionality
- Ensure all tests pass before submitting
## License

This project is licensed under the MIT License; see the LICENSE file for details.