[Archive - Stale] - Documentation Fix to allow unittest to properly run #509

Conversation
…nge are used to prevent updating requirements all the time.
Force-pushed from 9048d25 to 0174a27
Force-pushed from c16ea57 to aea99c0
Force-pushed from aea99c0 to dc46dfc
@zsdonghao are you happy with changing the max column length to 120?
net = DenseLayer(net, n_units=384, act=tf.nn.relu, W_init=W_init2, b_init=b_init2, name='d1relu')  # output: (batch_size, 384)
net = DenseLayer(net, n_units=192, act=tf.nn.relu, W_init=W_init2, b_init=b_init2, name='d2relu')  # output: (batch_size, 192)
net = DenseLayer(net, n_units=10, act=tf.identity, W_init=tf.truncated_normal_initializer(stddev=1 / 192.0), name='output')  # output: (batch_size, 10)
net = DenseLayer(
needs rearranging
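To illustrate the requested rearrangement (a sketch, not the committed code; the same pattern applies to the similar hunks below): only the call that overflows 120 columns needs to wrap, with indented continuation lines. `W_init2` and `b_init2` are defined earlier in the tutorial; the TensorLayer 1.x / TensorFlow 1.x API is assumed.

```python
import tensorflow as tf
from tensorlayer.layers import DenseLayer

def dense_head(net, W_init2, b_init2):
    # These two calls fit within 120 columns, so they stay on one line each.
    net = DenseLayer(net, n_units=384, act=tf.nn.relu, W_init=W_init2, b_init=b_init2, name='d1relu')
    net = DenseLayer(net, n_units=192, act=tf.nn.relu, W_init=W_init2, b_init=b_init2, name='d2relu')
    # This call overflows, so it wraps with indented continuation lines.
    net = DenseLayer(
        net, n_units=10, act=tf.identity,
        W_init=tf.truncated_normal_initializer(stddev=1 / 192.0), name='output'
    )  # output: (batch_size, 10)
    return net
```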
net = DenseLayer(
    net, n_units=192, act=tf.nn.relu, W_init=W_init2, b_init=b_init2,
    name='d2relu')  # output: (batch_size, 192)
net = DenseLayer(
needs rearranging
net = DenseLayer(net, n_units=384, act=tf.nn.relu, W_init=W_init2, b_init=b_init2, name='d1relu')  # output: (batch_size, 384)
net = DenseLayer(net, n_units=192, act=tf.nn.relu, W_init=W_init2, b_init=b_init2, name='d2relu')  # output: (batch_size, 192)
net = DenseLayer(net, n_units=10, act=tf.identity, W_init=tf.truncated_normal_initializer(stddev=1 / 192.0), name='output')  # output: (batch_size, 10)
net = DenseLayer(
needs rearranging
example/tutorial_cifar10.py
Outdated
train_params = network.all_params
train_op = tf.train.AdamOptimizer(learning_rate, beta1=0.9, beta2=0.999, epsilon=1e-08, use_locking=False).minimize(cost, var_list=train_params)
train_op = tf.train.AdamOptimizer(
needs rearranging
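A sketch of one possible arrangement for the optimizer line (TF 1.x API; `network` and `cost` come from the tutorial): split the chained call so the argument list sits on its own wrapped line.

```python
import tensorflow as tf

def build_train_op(network, cost, learning_rate):
    train_params = network.all_params  # TensorLayer 1.x keeps the trainable variables here
    # Wrap the constructor arguments; the .minimize(...) chain follows the closing parenthesis.
    train_op = tf.train.AdamOptimizer(
        learning_rate, beta1=0.9, beta2=0.999, epsilon=1e-08, use_locking=False
    ).minimize(cost, var_list=train_params)
    return train_op
```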
net = DenseLayer(net, n_units=384, act=tf.nn.relu, W_init=W_init2, b_init=b_init2, name='d1relu')  # output: (batch_size, 384)
net = DenseLayer(net, n_units=192, act=tf.nn.relu, W_init=W_init2, b_init=b_init2, name='d2relu')  # output: (batch_size, 192)
net = DenseLayer(net, n_units=10, act=tf.identity, W_init=W_init2, name='output')  # output: (batch_size, 10)
net = DenseLayer(
needs rearranging
net = tl.layers.DenseLayer(net, n_units=384, act=tf.nn.relu, W_init=W_init2, b_init=b_init2, name='d1relu')  # output: (batch_size, 384)
net = tl.layers.DenseLayer(net, n_units=192, act=tf.nn.relu, W_init=W_init2, b_init=b_init2, name='d2relu')  # output: (batch_size, 192)
net = tl.layers.DenseLayer(net, n_units=10, act=tf.identity, W_init=W_init2, name='output')  # output: (batch_size, 10)
net = tl.layers.DenseLayer(
needs rearranging
example/tutorial_frozenlake_dqn.py
Outdated
predict = tf.argmax(y, 1)  # chose action greedily with reward. in Q-Learning, policy is greedy, so we use "max" to select the next action.
predict = tf.argmax(
    y,
    1)  # chose action greedily with reward. in Q-Learning, policy is greedy, so we use "max" to select the next action.
needs rearranging
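One way to arrange this without the awkward three-line wrap (a sketch): hoist the long comment above the call, which is then short enough to stay on one line. The placeholder shape for `y` here is hypothetical; in the tutorial it is the Q-network output.

```python
import tensorflow as tf

y = tf.placeholder(tf.float32, shape=[None, 4])  # hypothetical stand-in for the tutorial's Q-values
# Choose the action greedily by reward: in Q-Learning the policy is greedy, so "max" selects the next action.
predict = tf.argmax(y, 1)
```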
# Conflicts:
#   tensorlayer/files.py
if reward != 0:
    print(('episode %d: game %d took %.5fs, reward: %f' % (episode_number, game_number, time.time() - start_time, reward)),
          ('' if reward == -1 else ' !!!!!!!!'))
    print(
needs shortening~
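A sketch of one way to shorten it: compute the exclamation suffix first, so a single `print` with one format string fits comfortably. `episode_number`, `game_number`, and `start_time` come from the tutorial's training loop.

```python
import time

def report(episode_number, game_number, start_time, reward):
    if reward != 0:
        # Building the suffix first keeps the print call short.
        mark = '' if reward == -1 else ' !!!!!!!!'
        print('episode %d: game %d took %.5fs, reward: %f%s' %
              (episode_number, game_number, time.time() - start_time, reward, mark))
```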
| "label": tf.train.Feature(int64_list=tf.train.Int64List(value=[label])), | ||
| 'img_raw': tf.train.Feature(bytes_list=tf.train.BytesList(value=[img_raw])), | ||
| })) | ||
| } |
needs shortening~
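For context, a sketch of the full `tf.train.Example` this hunk belongs to (TF 1.x TFRecord API; `label` is assumed to be an int and `img_raw` serialized image bytes, as in the tutorial). Consistent quoting and one feature per line keep it within the limit.

```python
import tensorflow as tf

def make_example(label, img_raw):
    # One feature per line, quoted consistently.
    return tf.train.Example(features=tf.train.Features(feature={
        'label': tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
        'img_raw': tf.train.Feature(bytes_list=tf.train.BytesList(value=[img_raw])),
    }))
```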
x_train_batch, y_train_batch = tf.train.shuffle_batch(
    [x_train_, y_train_], batch_size=batch_size, capacity=2000, min_after_dequeue=1000, num_threads=32)  # set the number of threads here
    [x_train_, y_train_], batch_size=batch_size, capacity=2000, min_after_dequeue=1000, num_threads=32
)  # set the number of threads here
needs shortening~
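A sketch of the shortened training queue (TF 1.x queue-runner API): the argument list goes on a single wrapped line and the trailing comment moves to the closing parenthesis.

```python
import tensorflow as tf

def train_batches(x_train_, y_train_, batch_size):
    return tf.train.shuffle_batch(
        [x_train_, y_train_], batch_size=batch_size, capacity=2000, min_after_dequeue=1000, num_threads=32
    )  # set the number of threads here
```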
)  # set the number of threads here
# for testing, uses batch instead of shuffle_batch
x_test_batch, y_test_batch = tf.train.batch([x_test_, y_test_], batch_size=batch_size, capacity=50000, num_threads=32)
x_test_batch, y_test_batch = tf.train.batch(
needs shortening~
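And the matching test-side call (a sketch): plain `tf.train.batch` is enough at test time, since no shuffling is needed for evaluation.

```python
import tensorflow as tf

def test_batches(x_test_, y_test_, batch_size):
    # Plain batching; no shuffling needed for evaluation.
    return tf.train.batch(
        [x_test_, y_test_], batch_size=batch_size, capacity=50000, num_threads=32
    )
```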
net = tl.layers.LocalResponseNormLayer(
    net, depth_radius=4, bias=1.0, alpha=0.001 / 9.0, beta=0.75, name='norm1'
)
net = tl.layers.BinaryConv2d(
needs shortening~
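A possible shortening here (a sketch): at 120 columns the whole call fits on a single line, so the three-line wrap can be dropped entirely.

```python
import tensorlayer as tl

def norm1(net):
    # The full call is about 115 characters, under the new 120-column limit.
    return tl.layers.LocalResponseNormLayer(net, depth_radius=4, bias=1.0, alpha=0.001 / 9.0, beta=0.75, name='norm1')
```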
This PR has become too clumsy. I will close it and re-open a new one.
This PR pursues several objectives:

- `modules/db` is added to the documentation (if not, it raises an error).
- Max column length: changed from 160 to 120 (it was double the standard value).
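Assuming the limit is enforced by yapf (an assumption; the PR only names the column length, not the tool), the change would amount to something like this in a `.style.yapf` file:

```ini
[style]
based_on_style = google
column_limit = 120
```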
@lgarithm, as you pointed out yesterday, we should follow Google's coding practices.
That was a good idea; this PR moves in that direction.