Commit

readme
zsdonghao committed Aug 22, 2016
1 parent 59bedb3 commit b93d386
Showing 4 changed files with 10 additions and 5 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -370,7 +370,7 @@ pip install . -e

# Ways to Contribute

-TensorLayer begins as an internal repository at Imperial College Lodnon, helping researchers to test their new methods. It now encourage researches from all over the world to publish their new methods so as to promote the development of machine learning.
+TensorLayer begins as an internal repository at Imperial College London, helping researchers to test their new methods. It now encourage researches from all over the world to publish their new methods so as to promote the development of machine learning.

Your method can be merged into TensorLayer, if you can prove it is better than the existing methods. Test script with detailed descriptions is required.

4 changes: 2 additions & 2 deletions docs/user/development.rst
@@ -5,8 +5,8 @@ The TensorLayer project was started by Hao Dong, Imperial College London in Jun
2016. It is developed by a core team (in alphabetical order:
`Akara Supratak <https://akaraspt.github.io>`_,
`Hao Dong <https://zsdonghao.github.io>`_,
-`Simiao Yu <https://github.com/zsdonghao>`_,)
-and numerous additional contributors on `GitHub`_,
+`Simiao Yu <https://github.com/zsdonghao>`_)
+and numerous additional contributors on `GitHub`_.

As an open-source project by Researchers for Researchers and Engineers,
we highly welcome contributions!
Binary file removed tensorlayer/__pycache__/__init__.cpython-34.pyc
9 changes: 7 additions & 2 deletions tensorlayer/files.py
@@ -31,6 +31,7 @@ def load_mnist_dataset(shape=(-1,784)):
Examples
--------
>>> X_train, y_train, X_val, y_val, X_test, y_test = tl.files.load_mnist_dataset(shape=(-1,784))
+>>> X_train, y_train, X_val, y_val, X_test, y_test = tl.files.load_mnist_dataset(shape=(-1, 28, 28, 1))
"""
# We first define a download function, supporting both Python 2 and 3.
if sys.version_info[0] == 2:
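The added docstring line above illustrates the two `shape` values the loader accepts. A minimal standalone NumPy sketch (hypothetical placeholder data, not TensorLayer's actual loader) of what each shape implies for the MNIST images:

```python
import numpy as np

# Hypothetical stand-in for MNIST training images: 100 flat 784-vectors.
flat = np.zeros((100, 784), dtype=np.float32)

# shape=(-1, 784): each image stays a flat 784-vector (for dense layers).
X_dense = flat.reshape(-1, 784)

# shape=(-1, 28, 28, 1): each image becomes a 28x28 single-channel map
# (for convolutional layers).
X_conv = flat.reshape(-1, 28, 28, 1)

print(X_dense.shape)  # (100, 784)
print(X_conv.shape)   # (100, 28, 28, 1)
```

The `-1` lets NumPy (and the loader) infer the batch dimension from the number of elements.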
@@ -296,7 +297,11 @@ def load_ptb_dataset():
after 14 epochs they start to reduce the learning rate by a factor of 1.15
after each epoch. They clip the norm of the gradients (normalized by
minibatch size) at 10.
+Returns
+--------
+train_data, valid_data, test_data, vocabulary size
Examples
--------
>>> train_data, valid_data, test_data, vocab_size = tl.files.load_ptb_dataset()
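The training recipe quoted in the docstring above (a constant learning rate for 14 epochs, then decay by a factor of 1.15 per epoch, with gradient norms clipped at 10) can be sketched as follows. This is a hedged illustration, not code from this commit; the text clips norms normalized by minibatch size, which is omitted here for simplicity:

```python
import math

def ptb_learning_rate(epoch, base_lr=1.0, decay=1.15, constant_epochs=14):
    """Learning rate for a 1-indexed epoch: constant for the first
    `constant_epochs` epochs, then divided by `decay` each epoch after."""
    return base_lr / (decay ** max(0, epoch - constant_epochs))

def clip_gradient_norm(grads, max_norm=10.0):
    """Rescale a flat list of gradient values so the global L2 norm
    is at most `max_norm` (unchanged if already within the bound)."""
    total = math.sqrt(sum(g * g for g in grads))
    if total <= max_norm:
        return grads
    scale = max_norm / total
    return [g * scale for g in grads]

print(ptb_learning_rate(14))  # 1.0 (still in the constant phase)
print(ptb_learning_rate(15))  # 1.0 / 1.15 (first decayed epoch)
```

The names `ptb_learning_rate` and `clip_gradient_norm` are hypothetical helpers, not part of `tensorlayer/files.py`.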
@@ -491,7 +496,7 @@ def download_imbd(filename):
def load_nietzsche_dataset():
"""Load Nietzsche dataset.
Returns a string.
Examples
--------
>>> see tutorial_generate_text.py
