⚠️ Notice

Tensor Studio - a more practical continuation of the ideas presented in Moniel.


Moniel: Notation for Computational Graphs

Human-friendly declarative dataflow notation for computational graphs. See video.

Demo


Pre-built packages

macOS

Moniel.dmg (77MB)


Setup for other platforms

$ git clone https://github.com/mlajtos/moniel.git
$ cd moniel
$ npm install
$ npm start

Quick Introduction

Moniel is one of many attempts at creating a notation for deep learning models that leverages graph thinking. Instead of defining computation as a list of formulae, we define the model as a declarative dataflow graph. It is not a programming language, just a convenient notation. (Which will be executable. Wanna help?)
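
For a quick taste, here is a tiny model expressed in the notation (a sketch; every construct used here is explained below):

Tensor -> Convolution -> ReLU -> Softmax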

Note: Proper syntax highlighting is not available here on GitHub. Use the application for the best experience.

Let's start with nothing, i.e. comments:

// This is line comment.

/*
	This is block
	comment.
*/

A node is created by stating its type:

Sigmoid

You don't have to write the full name of a type. Use an acronym that fits you! These are all equivalent:

LocalResponseNormalization // canonical, but too long
LocRespNorm // weird, but why not?
LRN // cryptic for beginners, enough for others

Nodes connect to other nodes with an arrow:

Sigmoid -> MaxPooling

Chains can be of any length:

LRN -> Sigm -> BatchNorm -> ReLU -> Tanh -> MP -> Conv -> BN -> ELU

Also, there can be multiple chains:

ReLU -> BN
LRN -> Conv -> MP
Sigm -> Tanh

Nodes can have identifiers:

conv:Convolution

Identifiers let you refer to nodes that are used more than once:

// inefficient declaration of matrix-matrix multiplication
matrix1:Tensor
matrix2:Tensor
mm:MatrixMultiplication

matrix1 -> mm
matrix2 -> mm

However, this can be rewritten without identifiers using a list:

[Tensor,Tensor] -> MatMul
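
Identifiers and lists can also be mixed. For instance, a sketch that multiplies a matrix by itself (assuming the same node may appear in a list twice):

m:Tensor
[m,m] -> MatMul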

Lists let you easily declare multiple connections:

// Maximum of 3 random numbers
[Random,Random,Random] -> Maximum

List-to-list connections are sometimes really handy:

// Range of 3 random numbers
[Rand,Rand,Rand] -> [Max,Min] -> Sub -> Abs

Nodes can take named attributes that modify their behavior:

Fill(shape = 10x10x10, value = 1.0)

Attribute names can also be shortened:

Ones(s=10x10x10)
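
Attributes combine freely with connections. As a small sketch, a constant tensor flowing into another node:

// 10x10 tensor of ones, squashed by Softmax
Ones(shape = 10x10) -> Softmax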

Defining large graphs without proper structuring is unmanageable. Metanodes can help:

layer:{
    RandomNormal(shape=784x1000) -> weights:Variable
    weights -> dp:DotProduct -> act:ReLU
}

Tensor -> layer/dp // feed input into the DotProduct of the "layer" metanode
layer/act -> Softmax // feed output of the "layer" metanode into another node

Metanodes are more powerful when they define a proper Input-Output boundary:

layer1:{
    RandomNormal(shape=784x1000) -> weights:Variable
    [in:Input,weights] -> DotProduct -> ReLU -> out:Output
}

layer2:{
    RandomNormal(shape=1000x10) -> weights:Variable
    [in:Input,weights] -> DotProduct -> ReLU -> out:Output
}

// connect metanodes directly
layer1 -> layer2
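
Because both metanodes define the Input-Output boundary, they can sit anywhere in a chain. For example (a sketch reusing only constructs shown above):

Tensor -> layer1 -> layer2 -> Softmax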

Alternatively, you can use inline metanodes:

In -> layer:{[In,Tensor] -> Conv -> Out} -> Out

Or you can leave it unnamed:

In -> {[In,Tensor] -> Conv -> Out} -> Out

If metanodes share an identical structure, we can define a reusable metanode and use it like a normal node. Note that the acronym rule from above applies, so ReusableLayer can be invoked as RL:

+ReusableLayer(shape = 1x1){
    RandN(shape = shape) -> w:Var
    [in:In,w] -> DP -> RLU -> out:Out
}

RL(s = 784x1000) -> RL(s = 1000x10)
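
Reusable metanodes expose the same In/Out boundary, so they chain like any other node. As a sketch, a small two-layer classifier built from the definition above:

Tensor -> RL(s = 784x1000) -> RL(s = 1000x10) -> Softmax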

Similar Projects and Inspiration

  • Lobe (video) – "Build, train, and ship custom deep learning models using a simple visual interface."
  • Serrano – "A graph computation framework with Accelerate and Metal support."
  • Subgraphs – "Subgraphs is a visual IDE for developing computational graphs."
  • 💀Machine – "Machine is a machine learning IDE."
  • PyTorch – "Tensors and Dynamic neural networks in Python with strong GPU acceleration."
  • Sonnet – "Sonnet is a library built on top of TensorFlow for building complex neural networks."
  • TensorGraph – "TensorGraph is a framework for building any imaginable models based on TensorFlow"
  • nngraph – "graphical computation for nn library in Torch"
  • DNNGraph – "a deep neural network model generation DSL in Haskell"
  • NNVM – "Intermediate Computational Graph Representation for Deep Learning Systems"
  • DeepRosetta – "a universal deep learning model converter"
  • TensorBuilder – "a functional fluent immutable API based on the Builder Pattern"
  • Keras – "minimalist, highly modular neural networks library"
  • PrettyTensor – "a high level builder API"
  • TF-Slim – "a lightweight library for defining, training and evaluating models"
  • TFLearn – "modular and transparent deep learning library"
  • Caffe – "deep learning framework made with expression, speed, and modularity in mind"