Note: This is a fork of the original implementation (https://github.com/masashitsubaki/molecularGNN_smiles)

Quantum deep field for molecule

Installation


  1. Clone the repository: git clone https://github.com/raulorteg/QuantumDeepField_molecule
  2. Create the Python virtual environment (Python 3.9.14 is used here): virtualenv --python=python3.9 qdf
  3. Activate the virtual environment: source qdf/bin/activate
  4. Install the requirements: python -m pip install -r requirements.txt. Note: your system might need a different torch installation (https://pytorch.org/get-started/locally/)

Requirements


See the requirements.txt file.

Usage


The /scripts directory contains scripts prepared to run with the default values; the only required input is the dataset to use, given through the --dataset argument.

NOTE: Configure the paths to the datasets by editing the file qdf/settings.py:

DATASET_PATH = "/home/raul/git/QuantumDeepField_molecule/dataset"
SAVE_PATH = "/home/raul/git/QuantumDeepField_molecule/output"

1. Preprocessing (for training):

 python preprocess_train.py --dataset=$dataset_trained

e.g. python preprocess_train.py --dataset=QM9under7atoms_homolumo_eV

Options:

  • dataset: [string] dataset to preprocess for training. Of those that can be installed directly from the cloned repository, the options are:
    • "QM9under14atoms_atomizationenergy_eV"
    • "QM9full_atomizationenergy_eV"
    • "QM9full_homolumo_eV" Note: two properties (homo and lumo)
    • "QM9under7atoms_homolumo_eV"

2. Training:

 python train.py --dataset=$dataset_trained --num_workers=$num_workers --seed=$seed --device=$device

e.g. python train.py --dataset=QM9under7atoms_homolumo_eV

Options:

  • dataset [required]: [string] dataset to be used in training. Of those that can be installed directly from the cloned repository, the options are:
    • "QM9under14atoms_atomizationenergy_eV"
    • "QM9full_atomizationenergy_eV"
    • "QM9full_homolumo_eV" Note: two properties (homo and lumo)
    • "QM9under7atoms_homolumo_eV"
  • num_workers: [int] number of workers to use for the dataloader. Defaults to 1.
  • seed: [int] seed used for the model initialization. Defaults to 1729.
  • device: [string] device to use for training and inference; options are ["cuda", "cpu"]. If not specified, "cuda" is used when available on your system, otherwise "cpu" (slower).
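The device fallback described for the --device option can be sketched as follows. The helper name resolve_device is illustrative and not part of the repository; in the actual scripts the availability check would come from torch.cuda.is_available().

```python
def resolve_device(requested=None, cuda_available=False):
    """Pick a device string following the fallback described above.

    An explicit --device choice wins; otherwise use "cuda" when it is
    available on the system, else fall back to the slower "cpu".
    """
    if requested is not None:
        return requested
    return "cuda" if cuda_available else "cpu"

# In the scripts, cuda_available would come from torch.cuda.is_available().
print(resolve_device("cpu", cuda_available=True))   # explicit choice wins
print(resolve_device(None, cuda_available=False))   # falls back to "cpu"
```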

3. Preprocessing inference (predict):

 python preprocess_predict.py --dataset_train=$dataset_trained --dataset_predict=$dataset_predict

e.g. python preprocess_predict.py --dataset_train=QM9under7atoms_homolumo_eV --dataset_predict=QM9full_homolumo_eV

Options:

  • dataset_train [required]: [string] dataset that was used in training. It is used to look up and load the appropriate orbital dictionaries, so that the preprocessing applied to the prediction dataset is consistent with the preprocessing applied to the dataset originally trained on.
  • dataset_predict [required]: [string] dataset to be used in prediction.

4. Prediction (Inference):

 python predict.py --dataset_train=$dataset_trained --dataset_predict=$dataset_predict --model_path=$model_path --num_workers=$num_workers --seed=$seed --device=$device

e.g. python predict.py --dataset_train=QM9under7atoms_homolumo_eV --dataset_predict=QM9full_homolumo_eV --model_path="../pretrained/model"

Options:

  • dataset_train [required]: [string] dataset that was used in training. It is used to look up and load the appropriate orbital dictionaries, so that the preprocessing applied to the prediction dataset is consistent with the preprocessing applied to the dataset originally trained on.
  • dataset_predict [required]: [string] dataset to be used in prediction.
  • model_path [required]: [string] path to file where the pre-trained model is saved.
  • num_workers: [int] number of workers to use for the dataloader. Defaults to 1.
  • seed: [int] seed used for the model initialization. Defaults to 1729.
  • device: [string] device to use for training and inference; options are ["cuda", "cpu"]. If not specified, "cuda" is used when available on your system, otherwise "cpu" (slower).

Datasets


The QM9full dataset provided in this repository contains 130832 samples; the original QM9 dataset contains 133885 samples but we removed 3053 samples that failed the consistency check (i.e., 130832 = 133885 - 3053).

We note that, as described in the README of the QM9 dataset, the original QM9 dataset provides U0, the internal energy at 0 K, in units of Hartree. We transformed this internal energy into the atomization energy E in units of eV using the Atomref values of the QM9 dataset, that is, E = U0 - sum(Atomrefs of the atoms in the molecule), with 1 Hartree = 27.2114 eV.
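As a minimal sketch, the conversion above can be written as follows. The function and variable names are illustrative; the actual Atomref values come with the QM9 distribution.

```python
HARTREE_TO_EV = 27.2114  # conversion factor used in the text

def atomization_energy_ev(u0_hartree, atomrefs_hartree):
    """E = U0 - sum(Atomrefs of the atoms in the molecule), in eV.

    u0_hartree: internal energy at 0 K (Hartree), as provided by QM9.
    atomrefs_hartree: Atomref value (Hartree) for each atom in the molecule.
    """
    e_hartree = u0_hartree - sum(atomrefs_hartree)
    return e_hartree * HARTREE_TO_EV

# Toy numbers only (not real QM9 values): U0 = -1.0 Ha and two atoms with
# Atomref = -0.25 Ha each give E = -0.5 Ha = -13.6057 eV.
print(atomization_energy_ev(-1.0, [-0.25, -0.25]))
```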

In this way, we created the atomization energy dataset, extracted the QM9under14atoms and QM9over15atoms datasets from it, and provided them in the dataset directory (note that QM9over15atoms contains only test.txt, for extrapolation evaluation). The homolumo dataset, on the other hand, does not require such preprocessing, and we only transformed its units from Hartree into eV. The final format of the preprocessed QM9 datasets can be inspected in the files under the dataset directory.

Note that our QDF model can learn multiple properties simultaneously (i.e., the model output has multiple dimensions) when the training dataset is formatted in the same way as QM9full_homolumo_eV.
