
Conversation


DEKHTIARJonathan (Member) commented Apr 19, 2018

This PR works toward several objectives:

  • Implementing Flake8 coding-style practices
  • Following PEP8 coding-style practices
  • Fixing small errors in the documentation to remove Sphinx warnings
  • Adding modules/db to the documentation (otherwise Sphinx raises an error)

Max column length: changed from 160 to 120 (160 was double the conventional 80-column value); see the configuration sketch below.

@lgarithm, as you pointed out yesterday, we should follow Google's coding practices. That was a good idea, and this PR goes in that direction.
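
For reference, a 120-column limit can be enforced through the Flake8 configuration; a minimal sketch (the exact configuration file and settings used by this PR are not shown here):

    [flake8]
    max-line-length = 120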


luomai (Member) commented Apr 19, 2018

@zsdonghao are you happy with changing the max column length to 120?

net = DenseLayer(net, n_units=384, act=tf.nn.relu, W_init=W_init2, b_init=b_init2, name='d1relu') # output: (batch_size, 384)
net = DenseLayer(net, n_units=192, act=tf.nn.relu, W_init=W_init2, b_init=b_init2, name='d2relu') # output: (batch_size, 192)
net = DenseLayer(net, n_units=10, act=tf.identity, W_init=tf.truncated_normal_initializer(stddev=1 / 192.0), name='output') # output: (batch_size, 10)
net = DenseLayer(
    net, n_units=384, act=tf.nn.relu, W_init=W_init2, b_init=b_init2,
    name='d1relu') # output: (batch_size, 384)

Needs rearranging.

net = DenseLayer(
    net, n_units=192, act=tf.nn.relu, W_init=W_init2, b_init=b_init2,
    name='d2relu') # output: (batch_size, 192)
net = DenseLayer(

Needs rearranging.
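
For illustration, one PEP8-style arrangement of these DenseLayer calls that fits within 120 columns (a sketch, not necessarily the exact layout adopted by the PR):

    net = DenseLayer(
        net, n_units=384, act=tf.nn.relu,
        W_init=W_init2, b_init=b_init2, name='d1relu'
    )  # output: (batch_size, 384)
    net = DenseLayer(
        net, n_units=192, act=tf.nn.relu,
        W_init=W_init2, b_init=b_init2, name='d2relu'
    )  # output: (batch_size, 192)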

net = DenseLayer(net, n_units=384, act=tf.nn.relu, W_init=W_init2, b_init=b_init2, name='d1relu') # output: (batch_size, 384)
net = DenseLayer(net, n_units=192, act=tf.nn.relu, W_init=W_init2, b_init=b_init2, name='d2relu') # output: (batch_size, 192)
net = DenseLayer(net, n_units=10, act=tf.identity, W_init=tf.truncated_normal_initializer(stddev=1 / 192.0), name='output') # output: (batch_size, 10)
net = DenseLayer(
    net, n_units=384, act=tf.nn.relu, W_init=W_init2, b_init=b_init2,
    name='d1relu') # output: (batch_size, 384)

Needs rearranging.


train_params = network.all_params
train_op = tf.train.AdamOptimizer(learning_rate, beta1=0.9, beta2=0.999, epsilon=1e-08, use_locking=False).minimize(cost, var_list=train_params)
train_op = tf.train.AdamOptimizer(
    learning_rate, beta1=0.9, beta2=0.999, epsilon=1e-08,
    use_locking=False).minimize(cost, var_list=train_params)

Needs rearranging.

net = DenseLayer(net, n_units=384, act=tf.nn.relu, W_init=W_init2, b_init=b_init2, name='d1relu') # output: (batch_size, 384)
net = DenseLayer(net, n_units=192, act=tf.nn.relu, W_init=W_init2, b_init=b_init2, name='d2relu') # output: (batch_size, 192)
net = DenseLayer(net, n_units=10, act=tf.identity, W_init=W_init2, name='output') # output: (batch_size, 10)
net = DenseLayer(
    net, n_units=384, act=tf.nn.relu, W_init=W_init2, b_init=b_init2,
    name='d1relu') # output: (batch_size, 384)

Needs rearranging.

DEKHTIARJonathan added this to the 1.8.5 milestone Apr 19, 2018
net = DenseLayer(net, n_units=384, act=tf.nn.relu, W_init=W_init2, b_init=b_init2, name='d1relu') # output: (batch_size, 384)
net = DenseLayer(net, n_units=192, act=tf.nn.relu, W_init=W_init2, b_init=b_init2, name='d2relu') # output: (batch_size, 192)
net = DenseLayer(net, n_units=10, act=tf.identity, W_init=W_init2, name='output') # output: (batch_size, 10)
net = DenseLayer(
    net, n_units=384, act=tf.nn.relu, W_init=W_init2, b_init=b_init2,
    name='d1relu') # output: (batch_size, 384)

Needs rearranging.

net = tl.layers.DenseLayer(net, n_units=384, act=tf.nn.relu, W_init=W_init2, b_init=b_init2, name='d1relu') # output: (batch_size, 384)
net = tl.layers.DenseLayer(net, n_units=192, act=tf.nn.relu, W_init=W_init2, b_init=b_init2, name='d2relu') # output: (batch_size, 192)
net = tl.layers.DenseLayer(net, n_units=10, act=tf.identity, W_init=W_init2, name='output') # output: (batch_size, 10)
net = tl.layers.DenseLayer(
    net, n_units=384, act=tf.nn.relu, W_init=W_init2, b_init=b_init2,
    name='d1relu') # output: (batch_size, 384)

Needs rearranging.

predict = tf.argmax(y, 1) # choose action greedily with reward. In Q-Learning, the policy is greedy, so we use "max" to select the next action.
predict = tf.argmax(
    y,
    1) # choose action greedily with reward. In Q-Learning, the policy is greedy, so we use "max" to select the next action.

Needs rearranging.
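
Since the call itself is short, one possible arrangement (a sketch) is to move the long comment above the statement instead of wrapping the arguments:

    # Choose the action greedily by reward: in Q-Learning the policy is greedy,
    # so we use "max" to select the next action.
    predict = tf.argmax(y, 1)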

Merge commit message noted conflicts in tensorlayer/files.py.
DEKHTIARJonathan changed the title from "Documentation Fix to allow unittest to properly run" to "[WIP] - Documentation Fix to allow unittest to properly run" Apr 20, 2018
if reward != 0:
    print(('episode %d: game %d took %.5fs, reward: %f' % (episode_number, game_number, time.time() - start_time, reward)),
          ('' if reward == -1 else ' !!!!!!!!'))
    print(
        ('episode %d: game %d took %.5fs, reward: %f' % (episode_number, game_number, time.time() - start_time, reward)),
        ('' if reward == -1 else ' !!!!!!!!'))

Needs shortening.
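
One way to shorten it (a sketch; variable names are taken from the snippet above) is to build the message first:

    if reward != 0:
        msg = 'episode %d: game %d took %.5fs, reward: %f' % (
            episode_number, game_number, time.time() - start_time, reward)
        print(msg + ('' if reward == -1 else ' !!!!!!!!'))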

"label": tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
'img_raw': tf.train.Feature(bytes_list=tf.train.BytesList(value=[img_raw])),
}))
}

Needs shortening.
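
For context, these features belong inside a tf.train.Example; a self-contained sketch of the surrounding construction (the writer variable and the label/img_raw values are assumptions based on the snippet):

    example = tf.train.Example(
        features=tf.train.Features(
            feature={
                'label': tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
                'img_raw': tf.train.Feature(bytes_list=tf.train.BytesList(value=[img_raw])),
            }
        )
    )
    writer.write(example.SerializeToString())  # writer: an open tf.python_io.TFRecordWriter (assumed)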

x_train_batch, y_train_batch = tf.train.shuffle_batch(
    [x_train_, y_train_], batch_size=batch_size, capacity=2000, min_after_dequeue=1000, num_threads=32) # set the number of threads here
x_train_batch, y_train_batch = tf.train.shuffle_batch(
    [x_train_, y_train_], batch_size=batch_size, capacity=2000, min_after_dequeue=1000, num_threads=32
) # set the number of threads here

Needs shortening.

) # set the number of threads here
# for testing, use batch instead of shuffle_batch
x_test_batch, y_test_batch = tf.train.batch([x_test_, y_test_], batch_size=batch_size, capacity=50000, num_threads=32)
x_test_batch, y_test_batch = tf.train.batch(
    [x_test_, y_test_], batch_size=batch_size, capacity=50000, num_threads=32)

Needs shortening.

net = tl.layers.LocalResponseNormLayer(
    net, depth_radius=4, bias=1.0, alpha=0.001 / 9.0, beta=0.75, name='norm1'
)
net = tl.layers.BinaryConv2d(

Needs shortening.

DEKHTIARJonathan (Member, Author) commented

This PR has become too clumsy. I am closing it and will open a new one.

DEKHTIARJonathan changed the title from "[WIP] - Documentation Fix to allow unittest to properly run" to "[Archive - Stale] - Documentation Fix to allow unittest to properly run" Apr 20, 2018