tl.layers API Refactoring and various modifications (#667)
* Decorators API Refactored

* extras_require `all`, `all_cpu` and `all_gpu` added

* Error fix

* YAPF Formatting Correction

* Test for private method decorator added

* Test Logging Verbosity Fixed to DEBUG when run individually

* YAPF corrections applied

* Changelog Added

* Changelog updated

* PR number changed

* First Refactoring Pass done

* cleaning second pass

* Refactoring 3rd pass

* Refactoring 4th Pass

* Code Error fix

* YAPF Formatting Fix

* Arguments now using self

* YAPF error correction

* Bug Fix in Decorator

* act name bug fix

* Error Correction

* YAPF formatting fix

* Useless tf.identity removed

* Error Fix

* Changelog Updated

* Error fix in tl.activation

* Documentation error fix

* Lazy Import added

* Import Refactoring with LazyImport when necessary

* Changelog Updated

* Gitter Removed

* Fixes proposed by @zsdonghao applied

* Documentation updated

* Missing requirements added

* Update to TensorLayer 1.8.6rc1

* Requirements error fix

* Docker Files updated
Jonathan DEKHTIAR authored and zsdonghao committed Jun 2, 2018
1 parent cc39503 commit c5b6cee
Showing 69 changed files with 1,231 additions and 1,073 deletions.
38 changes: 28 additions & 10 deletions CHANGELOG.md
@@ -71,20 +71,21 @@ To release a new version, please update the changelog as followed:
### Added
- API:
- `tl.alphas` and `tl.alphas_like` added, following tf.ones/zeros and tf.zeros_like/ones_like (by @DEKHTIARJonathan in #580)
- `tl.lazy_imports.LazyImport` to import heavy libraries only when necessary (by @DEKHTIARJonathan in #667); a sketch of the idea follows this section
- CI Tool:
- [Stale Probot](https://github.com/probot/stale) added to clean stale issues (by @DEKHTIARJonathan in #573)
- [Changelog Probot](https://github.com/mikz/probot-changelog) Configuration added (by @DEKHTIARJonathan in #637)
- Travis builds now handle a matrix of TF versions from TF==1.6.0 to TF==1.8.0 (by @DEKHTIARJonathan in #644)
- CircleCI added to build and upload Docker containers for each merged PR and tagged release (by @DEKHTIARJonathan in #648)
- Decorator:
- `tl.decorators` API created including `deprecated_alias` and `private_method` (by @DEKHTIARJonathan in #660)
- Docker:
- Containers built for each release and for each PR merged on master (by @DEKHTIARJonathan in #648)
- Containers built in the following configurations (by @DEKHTIARJonathan in #648):
- py2 + cpu
- py2 + gpu
- py3 + cpu
- py3 + gpu
- Documentation:
- Release semantic version added on index page (by @DEKHTIARJonathan in #633)
- Optimizers page added (by @DEKHTIARJonathan in #636)
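
A minimal sketch of the `LazyImport` idea noted above, assuming plain `importlib` semantics rather than the exact `tl.lazy_imports` implementation:

```python
import importlib


class LazyImport(object):
    """Defer importing `name` until one of its attributes is first accessed."""

    def __init__(self, name):
        self._name = name
        self._module = None

    def _load(self):
        if self._module is None:
            self._module = importlib.import_module(self._name)
        return self._module

    def __getattr__(self, attr):
        # Triggers the real import on first attribute access only.
        return getattr(self._load(), attr)


# Nothing is imported until an attribute is touched:
json = LazyImport("json")
print(json.dumps({"lazy": True}))
```
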
@@ -121,10 +122,15 @@ To release a new version, please update the changelog as followed:
- Ternary Convolution Layer added in unittest (by @DEKHTIARJonathan in #658)
- Convolution Layers unittests have been cleaned & refactored (by @DEKHTIARJonathan in #658)
- All the tests now use a DEBUG level verbosity when run individually (by @DEKHTIARJonathan in #660)
- `tf.identity` as activation is **ignored**, thus reducing the size of the graph by removing useless operations (by @DEKHTIARJonathan in #667)
- Argument dictionaries are now checked and saved within the `Layer` base class (by @DEKHTIARJonathan in #667)

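For illustration, a minimal sketch of the activation handling described above; `apply_activation` is a hypothetical helper with assumed semantics, not the actual `Layer` method:

```python
import tensorflow as tf


def apply_activation(outputs, act=None):
    """Hypothetical helper: skip `None`/`tf.identity` instead of adding an op."""
    if act is None or act is tf.identity:
        return outputs  # no useless node is added to the graph
    return act(outputs)


x = tf.constant([1.0, -2.0])
logits = apply_activation(x, act=None)       # returned unchanged
probs = apply_activation(x, act=tf.sigmoid)  # sigmoid applied
```
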
### Deprecated
- `tl.layers.TimeDistributedLayer` argument `args` is deprecated in favor of `layer_args` (by @DEKHTIARJonathan in #667)

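The `args` to `layer_args` rename is the kind of change the `deprecated_alias` decorator from #660 is designed for. A minimal sketch of such a decorator, with assumed semantics (the real `tl.decorators.deprecated_alias` may differ in detail; `build_time_distributed` is purely illustrative):

```python
import functools
import warnings


def deprecated_alias(**aliases):
    """Map deprecated keyword names to their replacements, warning on use."""

    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for old_name, new_name in aliases.items():
                if old_name in kwargs:
                    warnings.warn(
                        "`%s` is deprecated, use `%s` instead" % (old_name, new_name),
                        DeprecationWarning,
                    )
                    kwargs[new_name] = kwargs.pop(old_name)
            return func(*args, **kwargs)

        return wrapper

    return decorator


@deprecated_alias(args='layer_args')
def build_time_distributed(layer_class, layer_args=None):
    return layer_args


# The old keyword still works, but emits a DeprecationWarning:
print(build_time_distributed(None, args={'n_units': 10}))
```
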
### Removed
- `assert()` calls removed and replaced by `raise AssertionError()` (by @DEKHTIARJonathan in #667)
- `tl.identity` removed; it was no longer used and had been deprecated for a long time (by @DEKHTIARJonathan in #667)

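The motivation for dropping `assert()` is that `assert` statements are stripped when Python runs with optimizations (`python -O`), so the checks silently disappear, while an explicit `raise` is always enforced. A small sketch with a hypothetical parameter check:

```python
def check_n_units(n_units):
    # `assert n_units > 0` would vanish under `python -O`;
    # raising explicitly keeps the check in optimized runs too.
    if n_units <= 0:
        raise AssertionError("n_units must be positive, got %r" % n_units)


check_n_units(10)  # passes silently; check_n_units(0) would raise
```
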
### Fixed
- Issue #498 - Deprecation Warning Fix in `tl.layers.RNNLayer` with `inspect` (by @DEKHTIARJonathan in #574)
@@ -135,10 +141,11 @@ To release a new version, please update the changelog as followed:
- Error in `tl.layers.TernaryConv2d` fixed - self.inputs not defined (by @DEKHTIARJonathan in #658)
- Deprecation warning fixed in `tl.layers.binary._compute_threshold()` (by @DEKHTIARJonathan in #658)
- All references to `tf.logging` replaced by `tl.logging` (by @DEKHTIARJonathan in #661)
- Duplicated code removed when bias was used (by @DEKHTIARJonathan in #667)
- Tutorial:
- `tutorial_word2vec_basic.py` saving issue #476 fixed (by @DEKHTIARJonathan in #635)
- All tutorials tested and errors have been fixed (by @DEKHTIARJonathan in #635)

### Security

### Dependencies Update
@@ -149,25 +156,26 @@ To release a new version, please update the changelog as followed:
### Contributors
@lgarithm @DEKHTIARJonathan @2wins @One-sixth @zsdonghao @luomai

## [1.8.6] - 2018-05-30
## [1.8.6] - 2018-06-02

### Added
- API:
- `tl.alphas` and `tl.alphas_like` added, following tf.ones/zeros and tf.zeros_like/ones_like (by @DEKHTIARJonathan in #580)
- `tl.lazy_imports.LazyImport` to import heavy libraries only when necessary (by @DEKHTIARJonathan in #667)
- CI Tool:
- [Stale Probot](https://github.com/probot/stale) added to clean stale issues (by @DEKHTIARJonathan in #573)
- [Changelog Probot](https://github.com/mikz/probot-changelog) Configuration added (by @DEKHTIARJonathan in #637)
- Travis builds now handle a matrix of TF versions from TF==1.6.0 to TF==1.8.0 (by @DEKHTIARJonathan in #644)
- CircleCI added to build and upload Docker containers for each merged PR and tagged release (by @DEKHTIARJonathan in #648)
- Decorator:
- `tl.decorators` API created including `deprecated_alias` and `private_method` (by @DEKHTIARJonathan in #660)
- Docker:
- Containers built for each release and for each PR merged on master (by @DEKHTIARJonathan in #648)
- Containers built in the following configurations (by @DEKHTIARJonathan in #648):
- py2 + cpu
- py2 + gpu
- py3 + cpu
- py3 + gpu
- Documentation:
- Release semantic version added on index page (by @DEKHTIARJonathan in #633)
- Optimizers page added (by @DEKHTIARJonathan in #636)
@@ -204,6 +212,15 @@ To release a new version, please update the changelog as followed:
- Ternary Convolution Layer added in unittest (by @DEKHTIARJonathan in #658)
- Convolution Layers unittests have been cleaned & refactored (by @DEKHTIARJonathan in #658)
- All the tests now use a DEBUG level verbosity when run individually (by @DEKHTIARJonathan in #660)
- `tf.identity` as activation is **ignored**, thus reducing the size of the graph by removing useless operations (by @DEKHTIARJonathan in #667)
- Argument dictionaries are now checked and saved within the `Layer` base class (by @DEKHTIARJonathan in #667)

### Deprecated
- `tl.layers.TimeDistributedLayer` argument `args` is deprecated in favor of `layer_args` (by @DEKHTIARJonathan in #667)

### Removed
- `assert()` calls removed and replaced by `raise AssertionError()` (by @DEKHTIARJonathan in #667)
- `tl.identity` removed; it was no longer used and had been deprecated for a long time (by @DEKHTIARJonathan in #667)

### Fixed
- Issue #498 - Deprecation Warning Fix in `tl.layers.RNNLayer` with `inspect` (by @DEKHTIARJonathan in #574)
@@ -214,6 +231,7 @@ To release a new version, please update the changelog as followed:
- Error in `tl.layers.TernaryConv2d` fixed - self.inputs not defined (by @DEKHTIARJonathan in #658)
- Deprecation warning fixed in `tl.layers.binary._compute_threshold()` (by @DEKHTIARJonathan in #658)
- All references to `tf.logging` replaced by `tl.logging` (by @DEKHTIARJonathan in #661)
- Duplicated code removed when bias was used (by @DEKHTIARJonathan in #667)
- Tutorial:
- `tutorial_word2vec_basic.py` saving issue #476 fixed (by @DEKHTIARJonathan in #635)
- All tutorials tested and errors have been fixed (by @DEKHTIARJonathan in #635)
@@ -265,6 +283,6 @@ To release a new version, please update the changelog as followed:
### Contributors
@zsdonghao @luomai @DEKHTIARJonathan

[Unreleased]: https://github.com/tensorlayer/tensorlayer/compare/1.8.6rc0...master
[1.8.6]: https://github.com/tensorlayer/tensorlayer/compare/1.8.6rc0...1.8.5
[Unreleased]: https://github.com/tensorlayer/tensorlayer/compare/1.8.6rc1...master
[1.8.6]: https://github.com/tensorlayer/tensorlayer/compare/1.8.6rc1...1.8.5
[1.8.5]: https://github.com/tensorlayer/tensorlayer/compare/1.8.4...1.8.5
3 changes: 1 addition & 2 deletions README.md
@@ -6,7 +6,7 @@

[![Build Status](https://img.shields.io/travis/tensorlayer/tensorlayer.svg?label=Travis&branch=master)](https://travis-ci.org/tensorlayer/tensorlayer)
[![PyPI version](https://badge.fury.io/py/tensorlayer.svg)](https://pypi.org/project/tensorlayer/)
[![Github commits (since latest release)](https://img.shields.io/github/commits-since/tensorlayer/tensorlayer/latest.svg)](https://github.com/tensorlayer/tensorlayer/compare/1.8.6rc0...master)
[![Github commits (since latest release)](https://img.shields.io/github/commits-since/tensorlayer/tensorlayer/latest.svg)](https://github.com/tensorlayer/tensorlayer/compare/1.8.6rc1...master)
[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/tensorlayer.svg)](https://pypi.org/project/tensorlayer/)
[![Supported TF Version](https://img.shields.io/badge/tensorflow-1.6.0+-blue.svg)](https://github.com/tensorflow/tensorflow/releases)
[![Codacy Badge](https://api.codacy.com/project/badge/Grade/ca2a29ddcf7445588beff50bee5406d9)](https://app.codacy.com/app/tensorlayer/tensorlayer)
@@ -16,7 +16,6 @@
[![Documentation Status](https://img.shields.io/readthedocs/tensorlayer/latest.svg?label=ReadTheDocs-EN)](https://tensorlayer.readthedocs.io/)
[![Documentation Status](https://img.shields.io/readthedocs/tensorlayercn/latest.svg?label=ReadTheDocs-CN)](https://tensorlayercn.readthedocs.io/)
[![PyUP Updates](https://pyup.io/repos/github/tensorlayer/tensorlayer/shield.svg)](https://pyup.io/repos/github/tensorlayer/tensorlayer/)
[![Gitter](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/tensorlayer/Lobby)

<br/>

5 changes: 1 addition & 4 deletions README.rst
@@ -13,7 +13,7 @@
:target: https://pypi.org/project/tensorlayer/

.. image:: https://img.shields.io/github/commits-since/tensorlayer/tensorlayer/latest.svg
:target: https://github.com/tensorlayer/tensorlayer/compare/1.8.6rc0...master
:target: https://github.com/tensorlayer/tensorlayer/compare/1.8.6rc1...master

.. image:: https://img.shields.io/pypi/pyversions/tensorlayer.svg
:target: https://pypi.org/project/tensorlayer/
@@ -45,9 +45,6 @@
.. image:: https://pyup.io/repos/github/tensorlayer/tensorlayer/shield.svg
:target: https://pyup.io/repos/github/tensorlayer/tensorlayer/

.. image:: https://badges.gitter.im/Join%20Chat.svg
:target: https://gitter.im/tensorlayer/Lobby

.. raw:: html

<br/><br/>
2 changes: 1 addition & 1 deletion docker/python2/cpu/Dockerfile
@@ -12,7 +12,7 @@ RUN if [ -z "$TL_VERSION" ]; then \
&& cd /tensorlayer_dist/ \
&& git clone https://github.com/tensorlayer/tensorlayer.git \
&& cd tensorlayer \
&& pip install -e .[db,dev,doc,extra,test] \
&& pip install -e .[all] \
&& rm -rf /var/lib/apt/lists/* ; \
else \
echo "Building Tag Release:" "$TL_VERSION" \
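The Dockerfiles now install the aggregate `all` extra instead of listing every group. A hedged sketch of how such aggregates (including the `all_cpu` and `all_gpu` variants from the commit log) can be declared in `setup.py`; the group names come from the old install line, but their contents here are purely illustrative:

```python
from setuptools import setup

extras = {
    'db': ['pymongo'],  # contents illustrative only
    'dev': ['yapf', 'pylint'],
    'doc': ['sphinx'],
    'extra': ['matplotlib'],
    'test': ['pytest'],
}

# Aggregate extras: everything above, plus the matching TensorFlow build.
base = sorted(set(sum(extras.values(), [])))
extras['all'] = base
extras['all_cpu'] = base + ['tensorflow']
extras['all_gpu'] = base + ['tensorflow-gpu']

setup(name='example-package', version='0.0.0', extras_require=extras)
```
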
2 changes: 1 addition & 1 deletion docker/python2/gpu/Dockerfile
@@ -12,7 +12,7 @@ RUN if [ -z "$TL_VERSION" ]; then \
&& cd /tensorlayer_dist/ \
&& git clone https://github.com/tensorlayer/tensorlayer.git \
&& cd tensorlayer \
&& pip install -e .[db,dev,doc,extra,test] \
&& pip install -e .[all] \
&& rm -rf /var/lib/apt/lists/* ; \
else \
echo "Building Tag Release:" "$TL_VERSION" \
2 changes: 1 addition & 1 deletion docker/python3/cpu/Dockerfile
@@ -12,7 +12,7 @@ RUN if [ -z "$TL_VERSION" ]; then \
&& cd /tensorlayer_dist/ \
&& git clone https://github.com/tensorlayer/tensorlayer.git \
&& cd tensorlayer \
&& pip install -e .[db,dev,doc,extra,test] \
&& pip install -e .[all] \
&& rm -rf /var/lib/apt/lists/* ; \
else \
echo "Building Tag Release:" "$TL_VERSION" \
2 changes: 1 addition & 1 deletion docker/python3/gpu/Dockerfile
@@ -12,7 +12,7 @@ RUN if [ -z "$TL_VERSION" ]; then \
&& cd /tensorlayer_dist/ \
&& git clone https://github.com/tensorlayer/tensorlayer.git \
&& cd tensorlayer \
&& pip install -e .[db,dev,doc,extra,test] \
&& pip install -e .[all] \
&& rm -rf /var/lib/apt/lists/* ; \
else \
echo "Building Tag Release:" "$TL_VERSION" \
2 changes: 1 addition & 1 deletion docs/conf.py
@@ -145,7 +145,7 @@
# The name for this set of Sphinx documents.
# "<project> v<release> documentation" by default.
#
# html_title = 'TensorLayer v1.8.5'
# html_title = 'TensorLayer'

# A shorter title for the navigation bar. Default is the same as html_title.
#
5 changes: 0 additions & 5 deletions docs/modules/activation.rst
@@ -26,18 +26,13 @@ For more complex activation, TensorFlow API will be required.

.. autosummary::

identity
ramp
leaky_relu
swish
sign
hard_tanh
pixel_wise_softmax

Identity
-------------
.. autofunction:: identity

Ramp
------
.. autofunction:: ramp
2 changes: 1 addition & 1 deletion example/tutorial_binarynet_cifar10_tfrecord.py
@@ -173,7 +173,7 @@ def model(x_crop, y_, reuse):
net = tl.layers.BinaryDenseLayer(net, 384, act=tf.nn.relu, name='d1relu')
net = tl.layers.SignLayer(net)
net = tl.layers.BinaryDenseLayer(net, 192, act=tf.nn.relu, name='d2relu')
net = tl.layers.DenseLayer(net, 10, act=tf.identity, name='output')
net = tl.layers.DenseLayer(net, 10, act=None, name='output')

y = net.outputs

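Across the tutorials, `act=tf.identity` on the output layer becomes `act=None`: the layer emits raw logits, and `tl.cost.cross_entropy` applies the softmax internally via `tf.nn.sparse_softmax_cross_entropy_with_logits`, so an identity op added nothing. A minimal standalone illustration:

```python
import tensorflow as tf

# Raw logits straight from a linear output layer (act=None):
logits = tf.constant([[2.0, 0.5, -1.0]])
labels = tf.constant([0])

# The softmax lives inside the loss op, so no activation is needed upstream.
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits)
```
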
2 changes: 1 addition & 1 deletion example/tutorial_cartpole_ac.py
@@ -127,7 +127,7 @@ def __init__(self, sess, n_features, lr=0.01):
n = InputLayer(self.s, name='in')
n = DenseLayer(n, n_units=30, act=tf.nn.relu6, W_init=tf.random_uniform_initializer(0, 0.01), name='hidden')
# n = DenseLayer(n, n_units=5, act=tf.nn.relu, W_init=tf.random_uniform_initializer(0, 0.01), name='hidden2')
n = DenseLayer(n, n_units=1, act=tf.identity, name='V')
n = DenseLayer(n, n_units=1, act=None, name='V')
self.v = n.outputs

with tf.variable_scope('squared_TD_error'):
4 changes: 2 additions & 2 deletions example/tutorial_cifar10.py
@@ -29,7 +29,7 @@ def model(x, y_, reuse):
net = FlattenLayer(net, name='flatten')
net = DenseLayer(net, 384, act=tf.nn.relu, W_init=W_init2, b_init=b_init2, name='d1relu')
net = DenseLayer(net, 192, act=tf.nn.relu, W_init=W_init2, b_init=b_init2, name='d2relu')
net = DenseLayer(net, 10, act=tf.identity, W_init=W_init2, name='output')
net = DenseLayer(net, 10, act=None, W_init=W_init2, name='output')
y = net.outputs

ce = tl.cost.cross_entropy(y, y_, name='cost')
@@ -63,7 +63,7 @@ def model_batch_norm(x, y_, reuse, is_train):
net = FlattenLayer(net, name='flatten') # output: (batch_size, 2304)
net = DenseLayer(net, 384, act=tf.nn.relu, W_init=W_init2, b_init=b_init2, name='d1relu')
net = DenseLayer(net, 192, act=tf.nn.relu, W_init=W_init2, b_init=b_init2, name='d2relu')
net = DenseLayer(net, 10, act=tf.identity, W_init=W_init2, name='output')
net = DenseLayer(net, 10, act=None, W_init=W_init2, name='output')
y = net.outputs

ce = tl.cost.cross_entropy(y, y_, name='cost')
4 changes: 2 additions & 2 deletions example/tutorial_cifar10_tfrecord.py
@@ -203,7 +203,7 @@ def model(x_crop, y_, reuse):
net = FlattenLayer(net, name='flatten')
net = DenseLayer(net, 384, act=tf.nn.relu, W_init=W_init2, b_init=b_init2, name='d1relu')
net = DenseLayer(net, 192, act=tf.nn.relu, W_init=W_init2, b_init=b_init2, name='d2relu')
net = DenseLayer(net, n_units=10, act=tf.identity, W_init=W_init2, name='output')
net = DenseLayer(net, n_units=10, act=None, W_init=W_init2, name='output')
y = net.outputs

ce = tl.cost.cross_entropy(y, y_, name='cost')
@@ -237,7 +237,7 @@ def model_batch_norm(x_crop, y_, reuse, is_train):
net = FlattenLayer(net, name='flatten')
net = DenseLayer(net, 384, act=tf.nn.relu, W_init=W_init2, b_init=b_init2, name='d1relu')
net = DenseLayer(net, 192, act=tf.nn.relu, W_init=W_init2, b_init=b_init2, name='d2relu')
net = DenseLayer(net, n_units=10, act=tf.identity, W_init=W_init2, name='output')
net = DenseLayer(net, n_units=10, act=None, W_init=W_init2, name='output')
y = net.outputs

ce = tl.cost.cross_entropy(y, y_, name='cost')
2 changes: 1 addition & 1 deletion example/tutorial_dorefanet_cifar10_tfrecord.py
@@ -170,7 +170,7 @@ def model(x_crop, y_, reuse):
net = tl.layers.FlattenLayer(net, name='flatten')
net = tl.layers.DorefaDenseLayer(net, 1, 3, 384, act=tf.nn.relu, name='d1relu')
net = tl.layers.DorefaDenseLayer(net, 1, 3, 192, act=tf.nn.relu, name='d2relu')
net = tl.layers.DenseLayer(net, 10, act=tf.identity, name='output')
net = tl.layers.DenseLayer(net, 10, act=None, name='output')
y = net.outputs

ce = tl.cost.cross_entropy(y, y_, name='cost')
2 changes: 1 addition & 1 deletion example/tutorial_frozenlake_dqn.py
@@ -51,7 +51,7 @@ def to_one_hot(i, n_classes=None):
# 4x4 grid can be represented by one-hot vector with 16 integers.
inputs = tf.placeholder(shape=[1, 16], dtype=tf.float32)
net = InputLayer(inputs, name='observation')
net = DenseLayer(net, 4, act=tf.identity, W_init=tf.random_uniform_initializer(0, 0.01), b_init=None, name='q_a_s')
net = DenseLayer(net, 4, act=None, W_init=tf.random_uniform_initializer(0, 0.01), b_init=None, name='q_a_s')
y = net.outputs # action-value / rewards of 4 actions
# chose action greedily with reward. in Q-Learning, policy is greedy, so we use "max" to select the next action.
predict = tf.argmax(y, 1)
2 changes: 1 addition & 1 deletion example/tutorial_generate_text.py
@@ -238,7 +238,7 @@ def inference(x, is_train, sequence_length, reuse=None):
return_seq_2d=True, name='lstm1'
)
lstm1 = network
network = DenseLayer(network, vocab_size, W_init=rnn_init, b_init=rnn_init, act=tf.identity, name='output')
network = DenseLayer(network, vocab_size, W_init=rnn_init, b_init=rnn_init, act=None, name='output')
return network, lstm1

# Inference for Training
2 changes: 1 addition & 1 deletion example/tutorial_mlp_dropout1.py
@@ -20,7 +20,7 @@
# the softmax is implemented internally in tl.cost.cross_entropy(y, y_) to
# speed up computation, so we use identity here.
# see tf.nn.sparse_softmax_cross_entropy_with_logits()
network = tl.layers.DenseLayer(network, n_units=10, act=tf.identity, name='output')
network = tl.layers.DenseLayer(network, n_units=10, act=None, name='output')

# define cost function and metric.
y = network.outputs
2 changes: 1 addition & 1 deletion example/tutorial_mlp_dropout2.py
@@ -20,7 +20,7 @@ def mlp(x, is_train=True, reuse=False):
network = tl.layers.DropoutLayer(network, keep=0.5, is_fix=True, is_train=is_train, name='drop2')
network = tl.layers.DenseLayer(network, n_units=800, act=tf.nn.relu, name='relu2')
network = tl.layers.DropoutLayer(network, keep=0.5, is_fix=True, is_train=is_train, name='drop3')
network = tl.layers.DenseLayer(network, n_units=10, act=tf.identity, name='output')
network = tl.layers.DenseLayer(network, n_units=10, act=None, name='output')
return network


8 changes: 4 additions & 4 deletions example/tutorial_mnist.py
@@ -44,12 +44,12 @@ def main_test_layers(model='relu'):
net = tl.layers.DropoutLayer(net, keep=0.5, name='drop2')
net = tl.layers.DenseLayer(net, n_units=800, act=tf.nn.relu, name='relu2')
net = tl.layers.DropoutLayer(net, keep=0.5, name='drop3')
net = tl.layers.DenseLayer(net, n_units=10, act=tf.identity, name='output')
net = tl.layers.DenseLayer(net, n_units=10, act=None, name='output')
elif model == 'dropconnect':
net = tl.layers.InputLayer(x, name='input')
net = tl.layers.DropconnectDenseLayer(net, keep=0.8, n_units=800, act=tf.nn.relu, name='dropconnect1')
net = tl.layers.DropconnectDenseLayer(net, keep=0.5, n_units=800, act=tf.nn.relu, name='dropconnect2')
net = tl.layers.DropconnectDenseLayer(net, keep=0.5, n_units=10, act=tf.identity, name='output')
net = tl.layers.DropconnectDenseLayer(net, keep=0.5, n_units=10, act=None, name='output')

# To print all attributes of a Layer.
# attrs = vars(net)
@@ -234,7 +234,7 @@ def main_test_stacked_denoise_AE(model='relu'):
recon_layer2 = tl.layers.ReconLayer(net, x_recon=x_recon1, n_units=800, act=act_recon, name='recon_layer2')
# 3rd layer
net = tl.layers.DropoutLayer(net, keep=0.5, name='drop3')
net = tl.layers.DenseLayer(net, 10, act=tf.identity, name='output')
net = tl.layers.DenseLayer(net, 10, act=None, name='output')

# Define fine-tune process
y = net.outputs
@@ -398,7 +398,7 @@ def main_test_cnn_layer():
net = tl.layers.DropoutLayer(net, keep=0.5, name='drop1')
net = tl.layers.DenseLayer(net, 256, act=tf.nn.relu, name='relu1')
net = tl.layers.DropoutLayer(net, keep=0.5, name='drop2')
net = tl.layers.DenseLayer(net, 10, act=tf.identity, name='output')
net = tl.layers.DenseLayer(net, 10, act=None, name='output')

y = net.outputs

2 changes: 1 addition & 1 deletion example/tutorial_mnist_distributed.py
@@ -40,7 +40,7 @@
# the softmax is implemented internally in tl.cost.cross_entropy(y, y_) to
# speed up computation, so we use identity here.
# see tf.nn.sparse_softmax_cross_entropy_with_logits()
network = tl.layers.DenseLayer(network, n_units=10, act=tf.identity, name='output')
network = tl.layers.DenseLayer(network, n_units=10, act=None, name='output')

# define cost function and metric.
y = network.outputs
2 changes: 1 addition & 1 deletion example/tutorial_mnist_float16.py
@@ -33,7 +33,7 @@ def model(x, is_train=True, reuse=False):
n = DropoutLayer(n, 0.5, True, is_train, name='drop1')
n = DenseLayer(n, 256, act=tf.nn.relu, name='relu1')
n = DropoutLayer(n, 0.5, True, is_train, name='drop2')
n = DenseLayer(n, 10, act=tf.identity, name='output')
n = DenseLayer(n, 10, act=None, name='output')
return n

