ReLeaSE (Reinforcement Learning for Structural Evolution)

Deep Reinforcement Learning for de-novo Drug Design

Currently works only under Linux

This is an official PyTorch implementation of the Deep Reinforcement Learning for de-novo Drug Design method, also known as ReLeaSE.

REQUIREMENTS:

In order to get started you will need Python 3.6, PyTorch 0.4.1, torchvision 0.2.1, and RDKit. The remaining dependencies are listed in conda_requirements.txt and pip_requirements.txt.

Installation with Anaconda

If you installed your Python with Anaconda, you can run the following commands to get started:

# Clone the repository to your desired directory
git clone https://github.com/isayev/ReLeaSE.git
cd ReLeaSE
# Create a new conda environment with Python 3.6
conda create --name release python=3.6
# Activate the environment
conda activate release
# Install conda dependencies
conda install --yes --file conda_requirements.txt
conda install -c rdkit rdkit nox cairo
conda install pytorch=0.4.1 torchvision=0.2.1 -c pytorch
# Install pip dependencies
pip install -r pip_requirements.txt
# Add new kernel to the list of jupyter notebook kernels
python -m ipykernel install --user --name release --display-name ReLeaSE
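As a quick sanity check (a minimal sketch, assuming the release environment is active; the script name is arbitrary and not part of the repository), the following Python snippet should print the pinned PyTorch version and whether CUDA is visible:

# check_env.py -- arbitrary helper script, not shipped with the repository
import torch
import rdkit

print("PyTorch:", torch.__version__)               # expected 0.4.1
print("RDKit:", rdkit.__version__)
print("CUDA available:", torch.cuda.is_available())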

Demos

We have uploaded several demos in the form of Jupyter (IPython) notebooks:

  • JAK2_min_max_demo.ipynb -- JAK2 pIC50 minimization and maximization
  • LogP_optimization_demo.ipynb -- optimization of logP into the drug-like region from 0 to 5 according to Lipinski's rule of five (a simplified reward sketch follows this list).
  • RecurrentQSAR-example-logp.ipynb -- training a Recurrent Neural Network to predict logP from SMILES using OpenChem toolkit.
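
For orientation, here is a simplified sketch of the kind of logP objective the second demo targets: RDKit's Crippen logP is computed for a generated SMILES string and rewarded only inside the drug-like 0-5 window. The helper name logp_reward, the binary reward shape, and the example molecules are illustrative assumptions, not the notebook's exact reward.

# Simplified, assumed logP reward (illustration only; not the notebook's exact reward)
from rdkit import Chem
from rdkit.Chem import Crippen

def logp_reward(smiles: str) -> float:
    """Return 1.0 if the molecule's Crippen logP falls in the drug-like 0-5 range, else 0.0."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:                        # the generator can emit invalid SMILES
        return 0.0
    logp = Crippen.MolLogP(mol)
    return 1.0 if 0.0 <= logp <= 5.0 else 0.0

print(logp_reward("CC(=O)Oc1ccccc1C(=O)O"))   # aspirin: logP inside the window -> 1.0
print(logp_reward("CCCCCCCCCCCCCCCC"))        # hexadecane: logP above 5 -> 0.0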

Disclaimer: The JAK2 demo uses a Random Forest predictor instead of a Recurrent Neural Network, since the dataset with JAK2 activity data used in the "Deep Reinforcement Learning for de novo Drug Design" paper is restricted under a license agreement. Instead, we use JAK2 activity data downloaded from ChEMBL (CHEMBL2971) and curated. This dataset contains ~2000 data points, which is not enough to build a reliable deep neural network. If you want to see a demo with an RNN predictor, please check out the logP optimization demo.
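
For reference, a Random Forest activity predictor of this kind can be sketched with scikit-learn on RDKit Morgan fingerprints. This is a generic illustration, not the exact model in JAK2_min_max_demo.ipynb; the fingerprint settings and forest size are assumptions, and the curated ChEMBL data itself is not reproduced here.

# Generic Random Forest pIC50 predictor on Morgan fingerprints (illustrative settings)
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestRegressor

def featurize(smiles_list, radius=2, n_bits=2048):
    """Turn SMILES strings into Morgan fingerprint bit arrays."""
    rows = []
    for smi in smiles_list:
        mol = Chem.MolFromSmiles(smi)
        fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
        rows.append([int(b) for b in fp.ToBitString()])
    return np.array(rows, dtype=np.int8)

print(featurize(["CCO", "c1ccccc1O"]).shape)   # (2, 2048)

model = RandomForestRegressor(n_estimators=500, n_jobs=-1, random_state=0)
# With the curated JAK2 set (smiles_list / pic50_values are placeholders, not shipped here):
# model.fit(featurize(smiles_list), np.array(pic50_values))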

Citation

If you use this code or data, please cite:

ReLeaSE method paper:

Mariya Popova, Olexandr Isayev, Alexander Tropsha. Deep Reinforcement Learning for de-novo Drug Design. Science Advances, 2018, Vol. 4, no. 7, eaap7885. DOI: 10.1126/sciadv.aap7885
