Squashed commit of the following:
commit 02fce35
Merge: 31c9b6b a01e688
Author: Bharath Ramsundar <rbharath@stanford.edu>
Date:   Tue Oct 20 21:27:44 2020 -0700

    Merge pull request deepchem#2235 from deepchem/smiles2vec

    Adding in some more tests for save/reload

commit a01e688
Author: Bharath Ramsundar <bharath@Bharaths-MBP.zyxel.com>
Date:   Tue Oct 20 19:39:38 2020 -0700

    Cleaning up

commit 8a01506
Author: Bharath Ramsundar <bharath@Bharaths-MBP.zyxel.com>
Date:   Tue Oct 20 19:27:30 2020 -0700

    Getting some more tests in

commit 55e3df9
Author: Bharath Ramsundar <bharath@Bharaths-MBP.zyxel.com>
Date:   Thu Oct 15 23:50:11 2020 -0700

    First steps to reload test

commit 31c9b6b
Merge: 19eeac1 9e6155f
Author: peastman <peastman@stanford.edu>
Date:   Tue Oct 20 13:49:47 2020 -0700

    Merge pull request deepchem#2213 from peastman/molnet

    [WIP] Updated API for MoleculeNet loader functions

commit 9e6155f
Author: peastman <peastman@stanford.edu>
Date:   Mon Oct 19 16:01:27 2020 -0700

    Attempt at fixing travis failures

commit 19eeac1
Merge: 29d01b5 8a06870
Author: Bharath Ramsundar <rbharath@stanford.edu>
Date:   Sun Oct 18 23:19:40 2020 -0700

    Merge pull request deepchem#2232 from hsjang001205/WEAVE_reload

    Fix weave bug

commit 8a06870
Author: hsjang001205 <71421490+hsjang001205@users.noreply.github.com>
Date:   Mon Oct 19 13:30:17 2020 +0900

    Update test_reload.py

commit 3b74cde
Author: hsjang001205 <71421490+hsjang001205@users.noreply.github.com>
Date:   Mon Oct 19 13:29:42 2020 +0900

    Update test_reload.py

commit 4123f02
Author: hsjang001205 <71421490+hsjang001205@users.noreply.github.com>
Date:   Mon Oct 19 13:27:03 2020 +0900

    Update test_reload.py

commit 2374713
Author: hsjang001205 <71421490+hsjang001205@users.noreply.github.com>
Date:   Mon Oct 19 13:22:52 2020 +0900

    Update layers.py

commit 669a311
Merge: 98ad20e 29d01b5
Author: hsjang001205 <71421490+hsjang001205@users.noreply.github.com>
Date:   Mon Oct 19 12:58:09 2020 +0900

    Merge branch 'master' into WEAVE_reload

commit 29d01b5
Merge: 2b792b4 e94b9db
Author: Bharath Ramsundar <rbharath@stanford.edu>
Date:   Sun Oct 18 20:53:45 2020 -0700

    Merge pull request deepchem#2228 from hsjang001205/DAG_reload

    Fix directed acyclic graph network bug

commit e94b9db
Author: hsjang001205 <hsjang1205@naver.com>
Date:   Sat Oct 17 17:12:40 2020 +0900

    DAG_reload

commit 98ad20e
Author: hsjang001205 <hsjang1205@naver.com>
Date:   Sat Oct 17 13:11:31 2020 +0900

    WEAVE_reload

commit 01a2b5f
Author: hsjang001205 <hsjang1205@naver.com>
Date:   Sat Oct 17 13:01:40 2020 +0900

    WEAVE_reload

commit 438b956
Author: hsjang001205 <hsjang1205@naver.com>
Date:   Sat Oct 17 10:25:48 2020 +0900

    DAG_reload

commit a1480a3
Author: hsjang001205 <hsjang1205@naver.com>
Date:   Sat Oct 17 09:39:17 2020 +0900

    DAG_reload

commit 1894509
Author: peastman <peastman@stanford.edu>
Date:   Fri Oct 16 16:41:39 2020 -0700

    Changed how molnet handles Transformers

commit 2b792b4
Merge: 51a76d9 c7f0cec
Author: Bharath Ramsundar <rbharath@stanford.edu>
Date:   Fri Oct 16 10:46:37 2020 -0700

    Merge pull request deepchem#2223 from deepchem/chembl

    Fixing chembl example

commit d792c5b
Author: hsjang001205 <hsjang1205@naver.com>
Date:   Fri Oct 16 21:31:59 2020 +0900

    DAG_reload

commit 032ac8b
Author: hsjang001205 <hsjang1205@naver.com>
Date:   Fri Oct 16 21:05:14 2020 +0900

    DAG_reload

commit 95d32df
Author: hsjang001205 <hsjang1205@naver.com>
Date:   Fri Oct 16 20:58:43 2020 +0900

    DAG_reload

commit e5827cc
Author: hsjang001205 <hsjang1205@naver.com>
Date:   Fri Oct 16 18:22:54 2020 +0900

    DAG_reload

commit ba3455b
Author: hsjang001205 <hsjang1205@naver.com>
Date:   Fri Oct 16 17:43:02 2020 +0900

    DAG_reload_fix

commit 51a76d9
Merge: 6eb5f18 64c3fbf
Author: Bharath Ramsundar <rbharath@stanford.edu>
Date:   Thu Oct 15 23:40:06 2020 -0700

    Merge pull request deepchem#2221 from vincentweisser/patch-1

    Wrong link corrected

commit 6eb5f18
Merge: cc7e2ec c3d9ef1
Author: Bharath Ramsundar <rbharath@stanford.edu>
Date:   Thu Oct 15 23:39:10 2020 -0700

    Merge pull request deepchem#2222 from vincentweisser/patch-2

    Wrong Link Corrected

commit cc7e2ec
Merge: cd7d2c1 55bdc19
Author: Bharath Ramsundar <rbharath@stanford.edu>
Date:   Thu Oct 15 23:38:07 2020 -0700

    Merge pull request deepchem#2224 from deepchem/chemception

    Adding chemception save/reload tests

commit c7f0cec
Author: Bharath Ramsundar <bharath@Bharaths-MBP.zyxel.com>
Date:   Thu Oct 15 22:49:20 2020 -0700

    Fixing example

commit 55bdc19
Author: Bharath Ramsundar <bharath@Bharaths-MBP.zyxel.com>
Date:   Thu Oct 15 22:17:52 2020 -0700

    Fixing chemception tests

commit 76ed80e
Author: Bharath Ramsundar <bharath@Bharaths-MBP.zyxel.com>
Date:   Thu Oct 15 16:29:22 2020 -0700

    Fixing chembl example

commit c3d9ef1
Author: Vincent Weisser <32839303+vincentweisser@users.noreply.github.com>
Date:   Thu Oct 15 22:32:07 2020 +0200

    Wrong Link Corrected

commit 64c3fbf
Author: Vincent Weisser <32839303+vincentweisser@users.noreply.github.com>
Date:   Thu Oct 15 22:29:16 2020 +0200

    Wrong link corrected

commit 5c55f23
Author: peastman <peastman@stanford.edu>
Date:   Wed Oct 14 14:33:05 2020 -0700

    Bug fix

commit 407db0e
Author: peastman <peastman@stanford.edu>
Date:   Wed Oct 14 14:29:32 2020 -0700

    Refactored molnet loader

commit 47006c5
Author: peastman <peastman@stanford.edu>
Date:   Wed Oct 14 10:59:46 2020 -0700

    Minor improvements to molnet loader functions

commit 413c6a4
Merge: bd52b89 ae12a7e
Author: peastman <peastman@stanford.edu>
Date:   Wed Oct 14 10:11:26 2020 -0700

    Merge branch 'master' into molnet

commit bd52b89
Author: peastman <peastman@stanford.edu>
Date:   Tue Oct 13 14:28:58 2020 -0700

    Updated API for load_delaney()
nissy-dev committed Oct 21, 2020
1 parent 31efa85 commit f0da848
Showing 13 changed files with 1,092 additions and 780 deletions.
137 changes: 111 additions & 26 deletions deepchem/models/layers.py
@@ -2344,7 +2344,13 @@ def build(self, input_shape):
    input_shape: tuple
      Ignored since we don't need the input shape to create internal weights.
    """
    init = initializers.get(self.init)

    def init(input_shape):
      return self.add_weight(
          name='kernel',
          shape=(input_shape[0], input_shape[1]),
          initializer=self.init,
          trainable=True)

    self.W_AA = init([self.n_atom_input_feat, self.n_hidden_AA])
    self.b_AA = backend.zeros(shape=[
@@ -2566,7 +2572,14 @@ def get_config(self):

  def build(self, input_shape):
    if self.compress_post_gaussian_expansion:
      init = initializers.get(self.init)

      def init(input_shape):
        return self.add_weight(
            name='kernel',
            shape=(input_shape[0], input_shape[1]),
            initializer=self.init,
            trainable=True)

      self.W = init([self.n_input * 11, self.n_input])
      self.b = backend.zeros(shape=[self.n_input])
    self.built = True
@@ -2673,7 +2686,14 @@ def get_config(self):
    return config

  def build(self, input_shape):
    init = initializers.get(self.init)

    def init(input_shape):
      return self.add_weight(
          name='kernel',
          shape=(input_shape[0], input_shape[1]),
          initializer=self.init,
          trainable=True)

    self.embedding_list = init([self.periodic_table_length, self.n_embedding])
    self.built = True

@@ -2726,7 +2746,14 @@ def get_config(self):
    return config

  def build(self, input_shape):
    init = initializers.get(self.init)

    def init(input_shape):
      return self.add_weight(
          name='kernel',
          shape=(input_shape[0], input_shape[1]),
          initializer=self.init,
          trainable=True)

    self.W_cf = init([self.n_embedding, self.n_hidden])
    self.W_df = init([self.n_distance, self.n_hidden])
    self.W_fc = init([self.n_hidden, self.n_embedding])
@@ -2811,7 +2838,14 @@ def get_config(self):
  def build(self, input_shape):
    self.W_list = []
    self.b_list = []
    init = initializers.get(self.init)

    def init(input_shape):
      return self.add_weight(
          name='kernel',
          shape=(input_shape[0], input_shape[1]),
          initializer=self.init,
          trainable=True)

    prev_layer_size = self.n_embedding
    for i, layer_size in enumerate(self.layer_sizes):
      self.W_list.append(init([prev_layer_size, layer_size]))
@@ -2935,22 +2969,37 @@ def build(self, input_shape):
    self.W_list = []
    self.b_list = []
    self.dropouts = []
    init = initializers.get(self.init)
    prev_layer_size = self.n_inputs
    for layer_size in self.layer_sizes:
      self.W_list.append(init([prev_layer_size, layer_size]))
      self.b_list.append(backend.zeros(shape=[
          layer_size,
      ]))
      self.W_list.append(
          self.add_weight(
              name='kernel',
              shape=(prev_layer_size, layer_size),
              initializer=self.init,
              trainable=True))
      self.b_list.append(
          self.add_weight(
              name='bias',
              shape=(layer_size,),
              initializer='zeros',
              trainable=True))
      if self.dropout is not None and self.dropout > 0.0:
        self.dropouts.append(Dropout(rate=self.dropout))
      else:
        self.dropouts.append(None)
      prev_layer_size = layer_size
    self.W_list.append(init([prev_layer_size, self.n_outputs]))
    self.b_list.append(backend.zeros(shape=[
        self.n_outputs,
    ]))
    self.W_list.append(
        self.add_weight(
            name='kernel',
            shape=(prev_layer_size, self.n_outputs),
            initializer=self.init,
            trainable=True))
    self.b_list.append(
        self.add_weight(
            name='bias',
            shape=(self.n_outputs,),
            initializer='zeros',
            trainable=True))
    if self.dropout is not None and self.dropout > 0.0:
      self.dropouts.append(Dropout(rate=self.dropout))
    else:
@@ -3068,22 +3117,37 @@ def build(self, input_shape):
    self.W_list = []
    self.b_list = []
    self.dropouts = []
    init = initializers.get(self.init)
    prev_layer_size = self.n_graph_feat
    for layer_size in self.layer_sizes:
      self.W_list.append(init([prev_layer_size, layer_size]))
      self.b_list.append(backend.zeros(shape=[
          layer_size,
      ]))
      self.W_list.append(
          self.add_weight(
              name='kernel',
              shape=(prev_layer_size, layer_size),
              initializer=self.init,
              trainable=True))
      self.b_list.append(
          self.add_weight(
              name='bias',
              shape=(layer_size,),
              initializer='zeros',
              trainable=True))
      if self.dropout is not None and self.dropout > 0.0:
        self.dropouts.append(Dropout(rate=self.dropout))
      else:
        self.dropouts.append(None)
      prev_layer_size = layer_size
    self.W_list.append(init([prev_layer_size, self.n_outputs]))
    self.b_list.append(backend.zeros(shape=[
        self.n_outputs,
    ]))
    self.W_list.append(
        self.add_weight(
            name='kernel',
            shape=(prev_layer_size, self.n_outputs),
            initializer=self.init,
            trainable=True))
    self.b_list.append(
        self.add_weight(
            name='bias',
            shape=(self.n_outputs,),
            initializer='zeros',
            trainable=True))
    if self.dropout is not None and self.dropout > 0.0:
      self.dropouts.append(Dropout(rate=self.dropout))
    else:
@@ -3187,9 +3251,16 @@ def get_config(self):
    return config

  def build(self, input_shape):

    def init(input_shape):
      return self.add_weight(
          name='kernel',
          shape=(input_shape[0], input_shape[1]),
          initializer=self.init,
          trainable=True)

    n_pair_features = self.n_pair_features
    n_hidden = self.n_hidden
    init = initializers.get(self.init)
    self.W = init([n_pair_features, n_hidden * n_hidden])
    self.b = backend.zeros(shape=(n_hidden * n_hidden,))
    self.built = True
@@ -3219,7 +3290,14 @@ def get_config(self):

  def build(self, input_shape):
    n_hidden = self.n_hidden
    init = initializers.get(self.init)

    def init(input_shape):
      return self.add_weight(
          name='kernel',
          shape=(input_shape[0], input_shape[1]),
          initializer=self.init,
          trainable=True)

    self.Wz = init([n_hidden, n_hidden])
    self.Wr = init([n_hidden, n_hidden])
    self.Wh = init([n_hidden, n_hidden])
@@ -3274,7 +3352,14 @@ def get_config(self):
    return config

  def build(self, input_shape):
    init = initializers.get(self.init)

    def init(input_shape):
      return self.add_weight(
          name='kernel',
          shape=(input_shape[0], input_shape[1]),
          initializer=self.init,
          trainable=True)

    self.U = init((2 * self.n_hidden, 4 * self.n_hidden))
    self.b = tf.Variable(
        np.concatenate((np.zeros(self.n_hidden), np.ones(self.n_hidden),
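A note on the recurring pattern in this diff: the old code looked up a Keras initializer with initializers.get(self.init) and called it directly, which returns plain tensors that the layer never tracks, so those weights did not appear in the layer's variables and were lost across a save/reload cycle, which is the failure mode the WEAVE_reload and DAG_reload commits above, and the new save/reload tests, appear to target. The replacement routes every weight through self.add_weight, which registers it with the layer. What follows is a minimal sketch of that pattern under TensorFlow 2.x Keras; DenseBlock, its sizes, and the checkpoint path are illustrative stand-ins, not DeepChem code.

import tensorflow as tf


class DenseBlock(tf.keras.layers.Layer):
  """Illustrative layer (not a DeepChem class) using the add_weight pattern."""

  def __init__(self, n_hidden, init='glorot_uniform', **kwargs):
    super(DenseBlock, self).__init__(**kwargs)
    self.n_hidden = n_hidden
    self.init = init  # name of a Keras initializer, as in the diff above

  def build(self, input_shape):
    # add_weight registers each variable with the layer, so it appears in
    # self.trainable_weights and is written and restored by checkpointing.
    self.W = self.add_weight(
        name='kernel',
        shape=(int(input_shape[-1]), self.n_hidden),
        initializer=self.init,
        trainable=True)
    self.b = self.add_weight(
        name='bias',
        shape=(self.n_hidden,),
        initializer='zeros',
        trainable=True)
    self.built = True

  def call(self, inputs):
    return tf.nn.relu(tf.matmul(inputs, self.W) + self.b)


# Quick round-trip check (path illustrative): weights created through
# add_weight survive save_weights/load_weights, unlike plain tensors
# returned by calling an initializer directly.
model = tf.keras.Sequential([DenseBlock(8)])
model.build((None, 4))
model.save_weights('/tmp/dense_block.ckpt')
model.load_weights('/tmp/dense_block.ckpt')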
