Commit

Merge branch 'master' of https://github.com/explosion/thinc
honnibal committed May 13, 2017
2 parents 2e9490c + e4c8cb6 commit a805530
Showing 1 changed file, README.rst, with 15 additions and 12 deletions.
architecture. It's designed to be easy to install, efficient for CPU usage and
optimised for NLP and deep learning with text – in particular, hierarchically
structured input and variable-length sequences.

🔮 **Version 6.6 out now!** `Read the release notes here. <https://github.com/explosion/thinc/releases/>`_

.. image:: https://img.shields.io/travis/explosion/thinc/master.svg?style=flat-square
    :target: https://twitter.com/explosion_ai
    :alt: Follow us on Twitter

Development status
==================

Thinc's deep learning functionality is still under active development: APIs are
unstable, and we're not yet ready to provide usage support. However, if you're
already quite familiar with neural networks, there's a lot here you might find
interesting. Thinc's conceptual model is quite different from TensorFlow's.
Thinc also implements some novel features, such as a small DSL for concisely
wiring up models, embedding tables that support pre-computation and the
hashing trick, dynamic batch sizes, a concatenation-based approach to
variable-length sequences, and support for model averaging for the
Adam solver (which performs very well).
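
Two of the features above, embedding tables combined with the hashing trick, can be illustrated with a minimal NumPy sketch. This is an illustration of the general technique, not Thinc's actual API: the function name ``hashed_embed``, the hash constants, and the table sizes are all hypothetical. The idea is that an unbounded ID space is mapped into a fixed-size table by hashing each ID several ways and summing the rows, so collisions are tolerated rather than avoided.

```python
import numpy as np

def hashed_embed(ids, table, num_hashes=2):
    # Sketch of the "hashing trick" for embeddings: hash each ID into the
    # fixed-size table several times and sum the rows. Multiple hashes
    # reduce the impact of collisions. (Hypothetical helper, not Thinc's API;
    # real implementations use a proper hash such as MurmurHash.)
    rows = table.shape[0]
    vectors = np.zeros((len(ids), table.shape[1]))
    for seed in range(num_hashes):
        # A different cheap multiplicative hash per seed.
        idx = (ids * 2654435761 + seed * 40503) % rows
        vectors += table[idx]
    return vectors

table = np.random.uniform(-0.1, 0.1, (1024, 64))  # fixed-size table
ids = np.array([3, 700000, 123456789])  # IDs may exceed the table size
vecs = hashed_embed(ids, table)
print(vecs.shape)  # (3, 64)
```

Because the table size is fixed in advance, no vocabulary needs to be built before training, which is what makes pre-computation over a batch of inputs practical.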

No computational graph – just higher order functions
======================================================

The central problem for a neural network implementation is this: during the
[...]
because we put the state from the forward pass into callbacks.

All nodes in the network have a simple signature:

.. code::

    f(inputs) -> {outputs, f(d_outputs)->d_inputs}
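
The shape of this interface can be sketched in plain NumPy. This is an illustration of the callback idea, not Thinc's actual code: the names ``linear_layer``, ``forward``, and ``backward`` are hypothetical.

```python
import numpy as np

def linear_layer(W, b):
    # A node matching the signature above: calling it on inputs returns
    # the outputs together with a callback mapping d_outputs to d_inputs.
    def forward(X):
        Y = X @ W + b
        def backward(dY):
            # The state from the forward pass (here X and W) lives in this
            # closure, so no separate computational graph is required.
            return dY @ W.T
        return Y, backward
    return forward

layer = linear_layer(np.random.randn(4, 3), np.zeros(3))
X = np.random.randn(2, 4)
Y, backprop = layer(X)        # forward pass returns outputs + callback
dX = backprop(np.ones_like(Y))  # backward pass is just calling the callback
print(Y.shape, dX.shape)  # (2, 3) (2, 4)
```

Composing two such nodes is then just function composition: the outer node calls the inner node's callback inside its own.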
