docs/lib: add architecture information to docs
MichaelHirn committed Dec 2, 2015
1 parent a7f8a69 commit 8eae87f
Showing 2 changed files with 57 additions and 16 deletions.
8 changes: 5 additions & 3 deletions README.md
@@ -54,7 +54,7 @@ For more information,
If you're using Cargo, just add Leaf to your Cargo.toml:

[dependencies]
leaf = "0.1.0"
leaf = "0.1.1"

If you're using [Cargo Edit][cargo-edit], you can
call:
@@ -75,8 +75,10 @@ We design Leaf and all other crates for machine learning to be completely modular and
as extensible as possible. More helpful crates you can use with Leaf:

- [**Cuticula**][cuticula]: Preprocessing Framework for Machine Learning
- [**Phloem**][phloem]: Universal Data Blob for Machine Learning on CUDA, OpenCL
or common CPU
- [**Collenchyma**][collen]: Portable, High Performance Computation on CUDA,
OpenCL and common CPU

[cuticula]: https://github.com/autumnai/cuticula
[phloem]: https://github.com/autumnai/phloem
65 changes: 52 additions & 13 deletions src/lib.rs
@@ -1,20 +1,58 @@
//! Leaf is an open, modular and clearly designed Machine Intelligence Framework providing
//! state-of-the-art performance for distributed (Deep|Machine) Learning, sharing concepts from
//! TensorFlow and Caffe.
//!
//! An important module in Leaf is the backend-agnostic, high-performance computation Framework
//! [Collenchyma][collenchyma], which combines performance and usability for Leaf Networks.
//! It allows you to run and deploy Leaf Networks on servers, desktops or even mobile devices,
//! using the full available computation power of GPUs or other CUDA/OpenCL-supported
//! devices for training your Networks. If your machine does not have a GPU, or you do
//! not want to install CUDA/OpenCL on your local machine, Leaf will gracefully fall back to
//! your native host CPU.
//!
//! ## Architecture
//!
//! Leaf's [Network][network] is a compositional model: a collection of connected
//! [layers][layers] that perform operations over numerical data.
//!
//! The Network defines the entire model by describing the hierarchical structure of layers
//! from bottom to top. At execution time, the Network passes the data flowing through it
//! from one layer to the next; the output of one layer is the input of the layer above. On a
//! backward pass, the Network propagates the derivatives through the layers in reverse order.
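//!
//! The forward/backward pattern can be made concrete with a small sketch. The `Layer`
//! and `Network` below are illustrative stand-ins, not Leaf's actual types:
//!
//! ```
//! /// Illustrative stand-in for a layer: maps an input to an output and
//! /// routes derivatives back down on the backward pass.
//! trait Layer {
//!     fn forward(&self, input: &[f32]) -> Vec<f32>;
//!     fn backward(&self, gradient: &[f32]) -> Vec<f32>;
//! }
//!
//! /// Illustrative stand-in for a Network: an ordered stack of layers.
//! struct Network {
//!     layers: Vec<Box<Layer>>,
//! }
//!
//! impl Network {
//!     /// Data flows bottom to top: each layer's output feeds the next.
//!     fn forward(&self, input: Vec<f32>) -> Vec<f32> {
//!         self.layers.iter().fold(input, |data, layer| layer.forward(&data))
//!     }
//!
//!     /// Derivatives flow top to bottom, visiting the layers in reverse.
//!     fn backward(&self, gradient: Vec<f32>) -> Vec<f32> {
//!         self.layers.iter().rev().fold(gradient, |grad, layer| layer.backward(&grad))
//!     }
//! }
//! ```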
//!
//! Layers, the building blocks of a Leaf Network, are small units describing computation over
//! numerical input data. Generally speaking, Layers take an input and produce an output, but
//! essentially a Layer can describe any functionality, e.g. logging, as long as it obeys the
//! general behaviour specification of a Layer. Every Layer belongs to one of four
//! Layer types, which are defined in more detail on the [Layers page][layers]. Each
//! layer serves a specific purpose and can occur zero, one or many times inside a Network.
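//!
//! Reusing the illustrative trait from the sketch above, even a side-effect-only unit
//! such as a logging layer fits that contract (again hypothetical, not a Leaf type):
//!
//! ```
//! trait Layer {
//!     fn forward(&self, input: &[f32]) -> Vec<f32>;
//!     fn backward(&self, gradient: &[f32]) -> Vec<f32>;
//! }
//!
//! /// A pass-through layer that only logs how much data flows by.
//! struct LogLayer;
//!
//! impl Layer for LogLayer {
//!     fn forward(&self, input: &[f32]) -> Vec<f32> {
//!         println!("forward: {} values", input.len());
//!         input.to_vec()
//!     }
//!
//!     fn backward(&self, gradient: &[f32]) -> Vec<f32> {
//!         println!("backward: {} values", gradient.len());
//!         gradient.to_vec()
//!     }
//! }
//! ```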
//!
//! Leaf uses a Blob, provided by the [Phloem][phloem] module: an N-dimensional array
//! providing a unified memory interface over the actual data, with automatic synchronization
//! between different devices (CUDA, OpenCL, host CPU). A Blob stores the actual data as well
//! as the derivatives, and is used both for the data flowing through the system and for the
//! state representation of Layers, which is important for portability and performance.
//! A Blob can be swapped from backend to backend and can be used for computations on CUDA,
//! OpenCL and the native host CPU. It provides performance optimizations and automatically
//! takes care of memory management and synchronization.
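//!
//! As a rough mental model (not Phloem's actual API), a Blob pairs the values with a
//! same-shaped buffer of derivatives:
//!
//! ```
//! /// Illustrative stand-in for a Phloem Blob.
//! struct Blob {
//!     shape: Vec<usize>, // e.g. vec![batch, channels, height, width]
//!     data: Vec<f32>,    // the values flowing forward
//!     diff: Vec<f32>,    // the derivatives flowing backward
//! }
//!
//! impl Blob {
//!     /// Allocates a zeroed Blob with one value per coordinate.
//!     fn of_shape(shape: Vec<usize>) -> Blob {
//!         let len = shape.iter().fold(1, |acc, &dim| acc * dim);
//!         Blob { data: vec![0.0; len], diff: vec![0.0; len], shape: shape }
//!     }
//! }
//!
//! let blob = Blob::of_shape(vec![64, 3, 28, 28]);
//! assert_eq!(blob.shape.len(), 4);
//! assert_eq!(blob.data.len(), 64 * 3 * 28 * 28);
//! assert_eq!(blob.diff.len(), blob.data.len());
//! ```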
//!
//! The learning and optimization of the Network happens in the [Solver][solver] and is
//! decoupled from the Network, which keeps the setup clean and flexible. One of the four layer
//! types is the Loss Layer, which is used for the interaction between Network and Solver. The
//! Network produces the loss and the gradients, which the Solver uses to optimize the Network
//! through parameter updates. Besides that, the Solver provides housekeeping and other
//! evaluations of the Network. All operations on the Solver happen through Collenchyma, and
//! can therefore be executed on CUDA, OpenCL or the native host CPU as well.
//!
//! Leaf provides a robust and modular design, which allows you to express almost any numerical
//! computation, including SVMs, RNNs and other popular learning algorithms. We hope that Leaf
//! can help future research and production development alike, as it combines expressiveness,
//! performance and usability.
//!
//! [network]: ./network/index.html
//! [layers]: ./layers/index.html
//! [phloem]: https://github.com/autumnai/phloem
//! [solver]: ./solvers/index.html
//!
//! ## Philosophy
//!
@@ -56,6 +94,7 @@
//! - [Issue #19 for Activation Layers][issue-activation]
//! - [Issue #20 for Common Layers][issue-common]
//!
//! [collenchyma]: https://github.com/autumnai/collenchyma
//! [network]: ./network/index.html
//! [layers]: ./layers/index.html
//! [activation]: ./layers/activation/index.html
