Merged
4 changes: 0 additions & 4 deletions docs/index.rst
@@ -51,13 +51,9 @@ method, this part of the documentation is for you.
modules/rein
modules/files
modules/visualize
modules/ops
modules/activation
modules/distributed

..
modules/db


Command-line Reference
----------------------
Empty file modified docs/modules/activation.rst
100755 → 100644
Empty file.
Empty file modified docs/modules/cost.rst
100755 → 100644
Empty file.
Empty file modified docs/modules/distributed.rst
100755 → 100644
Empty file.
Empty file modified docs/modules/files.rst
100755 → 100644
Empty file.
Empty file modified docs/modules/iterate.rst
100755 → 100644
Empty file.
Empty file modified docs/modules/layers.rst
100755 → 100644
Empty file.
Empty file modified docs/modules/nlp.rst
100755 → 100644
Empty file.
43 changes: 0 additions & 43 deletions docs/modules/ops.rst

This file was deleted.

Empty file modified docs/modules/prepro.rst
100755 → 100644
Empty file.
Empty file modified docs/modules/rein.rst
100755 → 100644
Empty file.
16 changes: 16 additions & 0 deletions docs/modules/utils.rst
100755 → 100644
@@ -56,3 +56,19 @@ Convert list of string to dictionary
Flatten a list
^^^^^^^^^^^^^^^^^^^
.. autofunction:: flatten_list

Close TF session and associated processes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. autofunction:: exit_tensorflow

Open TensorBoard
^^^^^^^^^^^^^^^^^^^
.. autofunction:: open_tensorboard

Clear TensorFlow placeholder
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. autofunction:: clear_all_placeholder_variables

Set GPU functions
---------------------------
.. autofunction:: set_gpu_fraction
Empty file modified docs/modules/visualize.rst
100755 → 100644
Empty file.
Empty file modified docs/user/development.rst
100755 → 100644
Empty file.
Empty file modified docs/user/example.rst
100755 → 100644
Empty file.
Empty file modified docs/user/installation.rst
100755 → 100644
Empty file.
9 changes: 9 additions & 0 deletions docs/user/more.rst
100755 → 100644
@@ -69,6 +69,15 @@ After you get the variable list, you can define your optimizer like that so as to t
train_op = tf.train.AdamOptimizer(0.001).minimize(cost, var_list= train_params)


Logging
-------

TensorLayer adopts the `Python logging module <https://docs.python.org/3/library/logging.html>`__
to log runtime information.
By default, the logging module prints logs to the console.
If you want to configure the logging module,
please refer to its `manual <https://docs.python.org/3/library/logging.html>`__.
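For instance, a minimal configuration that raises the log level and adds timestamps can look like the following (a sketch using only the standard library; the logger name ``tensorlayer`` is illustrative):

```python
import logging

# Configure the root logger once, near program start:
# INFO level, with a timestamp and logger name on every record.
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s [%(levelname)s] %(name)s: %(message)s",
)

log = logging.getLogger("tensorlayer")
log.info("training started")
```

Because ``basicConfig`` only takes effect if the root logger has no handlers yet, call it before any library emits its first log record.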

Visualization
--------------

Empty file modified docs/user/tutorial.rst
100755 → 100644
Empty file.
3 changes: 0 additions & 3 deletions example/tutorial_mnist_distributed.py
@@ -14,9 +14,6 @@
import tensorflow as tf
import tensorlayer as tl

# set buffer mode to _IOLBF for stdout
tl.ops.setlinebuf()

# load environment for distributed training
task_spec = tl.distributed.TaskSpec()
task_spec.create_server()
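The deleted `tl.ops.setlinebuf()` call switched stdout to line buffering (`_IOLBF`), so each log line flushed immediately in distributed runs. The same effect can be sketched with the standard library alone (a hypothetical equivalent, not the removed implementation):

```python
import io


def set_line_buffering(stream):
    """Re-wrap a binary-backed text stream so it flushes on every newline.

    Mirrors what an _IOLBF setting does for C stdio. `stream` must expose
    `.buffer` and `.encoding`, as `sys.stdout` normally does.
    """
    return io.TextIOWrapper(
        stream.buffer, encoding=stream.encoding, line_buffering=True
    )

# Typical use, mirroring the deleted call:
#   import sys
#   sys.stdout = set_line_buffering(sys.stdout)
```

With `line_buffering=True`, every write containing a newline triggers an implicit flush, so output is not lost when a worker process dies.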
1 change: 0 additions & 1 deletion tensorlayer/__init__.py
@@ -12,7 +12,6 @@
from . import files
from . import iterate
from . import layers
from . import ops
from . import utils
from . import visualize
from . import prepro
16 changes: 7 additions & 9 deletions tensorlayer/layers/time_distribution.py
@@ -1,6 +1,5 @@
# -*- coding: utf-8 -*-

from .. import ops
from .core import *


@@ -62,14 +61,13 @@ def __init__(
timestep = input_shape[1]
x = tf.unstack(self.inputs, axis=1)

with ops.suppress_stdout():
for i in range(0, timestep):
with tf.variable_scope(name, reuse=(set_keep['name_reuse'] if i == 0 else True)) as vs:
set_name_reuse((set_keep['name_reuse'] if i == 0 else True))
net = layer_class(InputLayer(x[i], name=args['name'] + str(i)), **args)
# net = layer_class(InputLayer(x[i], name="input_"+args['name']), **args)
x[i] = net.outputs
variables = tf.get_collection(TF_GRAPHKEYS_VARIABLES, scope=vs.name)
for i in range(0, timestep):
with tf.variable_scope(name, reuse=(set_keep['name_reuse'] if i == 0 else True)) as vs:
set_name_reuse((set_keep['name_reuse'] if i == 0 else True))
net = layer_class(InputLayer(x[i], name=args['name'] + str(i)), **args)
# net = layer_class(InputLayer(x[i], name="input_"+args['name']), **args)
x[i] = net.outputs
variables = tf.get_collection(TF_GRAPHKEYS_VARIABLES, scope=vs.name)

self.outputs = tf.stack(x, axis=1, name=name)
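For context, the `ops.suppress_stdout()` wrapper removed above silenced console output while the per-timestep layers were constructed. A context manager of that kind can be sketched as follows (a hypothetical reimplementation for illustration, not TensorLayer's actual code):

```python
import contextlib
import os
import sys


@contextlib.contextmanager
def suppress_stdout():
    """Silence anything printed to stdout inside the with-block."""
    saved = sys.stdout
    with open(os.devnull, "w") as devnull:
        sys.stdout = devnull
        try:
            yield
        finally:
            # Always restore stdout, even if the body raises.
            sys.stdout = saved
```

Dropping the wrapper means each timestep's layer-construction logs are now printed, which keeps the distributed examples' output visible.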
