Merge branch 'master' into refactor_import_graphdef
abhigyan7 committed Dec 10, 2018
2 parents c3c7386 + 0c41f7a commit 99abc64
Showing 13 changed files with 394 additions and 18 deletions.
Binary file added docs/graphvis_research/CaffeVis.png
Binary file added docs/graphvis_research/KerasVis.png
Binary file added docs/graphvis_research/PureVis.png
51 changes: 51 additions & 0 deletions docs/graphvis_research/README.md
@@ -0,0 +1,51 @@
# Research about adding support for exporting model graphs from Fabrik
The attached code requires the [common dependencies](../../requirements/common.txt), plus the `networkx` and `pydot` Python packages.
## Problem
Currently there is no tool for drawing a Fabrik neural network diagram directly; it has to be done by hand. This research surveys some ways to implement such a feature.
## Observations
During the research I found several approaches, which can be divided into two groups.
### Based on deep learning frameworks
These methods share a common weakness: they cannot draw layers the framework does not support. For example, Keras cannot draw an LRN layer. They can also only be implemented in the backend.

Note that all of these tools can be driven by converting the Fabrik net to a framework model directly in memory, without creating model files.
#### Keras
Keras has its own visualization utilities, described in its [documentation](https://keras.io/visualization/). They are based on the [Pydot](https://github.com/pydot/pydot) library, a Python interface to [Graphviz](http://graphviz.org/). One of the utilities is used in `print_keras_model.py`. Below is the VQI model as drawn by Keras.

![](KerasVis.png)
To produce a similar image for this or another model, run:
```
python print_keras_model.py ../../example/keras/<desired_json_model> <desired_image_name>
```
#### Caffe
Caffe has its own script for visualisation, which also uses pydot under the hood. Type `python ~/caffe/caffe/python/draw_net.py --help` to see usage help. Below is AlexNet as visualised by this script.

![](CaffeVis.png)
```
python ~/caffe/caffe/python/draw_net.py ../../example/caffe/<desired_prototxt_model> <desired_image_name>
```
#### Tensorflow
TensorFlow has TensorBoard for graph visualisation, but I have not found a way to use it to create a static image rather than an interactive page.

Also, the TensorFlow method cannot be used for recurrent layers due to their awkward representation in `.pbtxt`.
### Based on Fabrik's frontend
These approaches are mostly for frontend representation, and they depend only on Fabrik's own representation of the network.
#### Creating an extension
If we create an extension for Fabrik, we can take the DOM of the graph that is already rendered and convert it to an image. There is a [JS library](https://github.com/tsayen/dom-to-image) for doing exactly this. The resulting image looks like a large screenshot of the Fabrik net.
#### Implementing using JSON representation
Digging a little deeper into Fabrik, we find that it stores the neural network in its state as a JS object. A sample net representation obtained this way is in `state_net.json`; it is LeNet MNIST with some layers deleted.

The only remaining step is to draw a graph from this data. There are many ways to do it, including [NN-SVG](https://github.com/zfrenchee/NN-SVG), as well as various [JS libraries](https://stackoverflow.com/questions/7034/graph-visualization-library-in-javascript) and [other tools](https://www.quora.com/What-tools-are-good-for-drawing-neural-network-architecture-diagrams). To keep things simple, I created `draw_graph.py`, which outputs a picture of the neural network with layer types and shapes. It uses [networkx](https://networkx.github.io/) to store the graph and pydot for visualisation, so the result looks like Caffe's and Keras's network diagrams.
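The mapping from the state object to a drawable graph is small. Here is a toy sketch of the same idea on a hypothetical two-layer net (the layer entries below are made up for illustration, not taken from `state_net.json`):

```python
import networkx as nx

# A hypothetical, minimal version of Fabrik's state object.
state = {
    'l0': {'info': {'type': 'Data'}, 'shape': {'output': [1, 28, 28]},
           'connection': {'output': ['l1']}},
    'l1': {'info': {'type': 'Convolution'}, 'shape': {'output': [20, 24, 24]},
           'connection': {'output': []}},
}

# Label each node with its id, layer type, and output shape.
labels = {node: '%s %s\n%s' % (node, p['info']['type'],
                               tuple(p['shape']['output']))
          for node, p in state.items()}

# Add one directed edge per listed connection.
graph = nx.DiGraph()
for node, p in state.items():
    for out in p['connection']['output']:
        graph.add_edge(labels[node], labels[out])

print(graph.number_of_edges())  # Data -> Convolution
```

From here, `nx.nx_pydot.to_pydot(graph)` hands the graph to Graphviz for rendering, exactly as `draw_graph.py` does on the full state dump.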

![](PureVis.png)
## Conclusion
Framework-based methods are easy to implement but have many disadvantages, and they cannot be customized (though Caffe's output looks prettier thanks to color). The DOM-based method is slow, non-customizable, and a workaround rather than a real solution. The JSON-representation-based method, however, can be fast and can produce any output form we want, depending on the library we choose.

## References
- [Keras](https://keras.io/)
- [Caffe](http://caffe.berkeleyvision.org/)
- [Tensorflow](https://www.tensorflow.org/) and [Tensorboard](https://www.tensorflow.org/guide/graph_viz)
- [Pydot](https://pypi.org/project/pydot/) and [Graphviz](https://www.graphviz.org/)
- [DOM-to-image](https://github.com/tsayen/dom-to-image)
- [NN-SVG](https://github.com/zfrenchee/NN-SVG)
- [Graph library list 1](https://stackoverflow.com/questions/7034/graph-visualization-library-in-javascript), [Graph library list 2](https://www.quora.com/What-tools-are-good-for-drawing-neural-network-architecture-diagrams)
- [Networkx](https://networkx.github.io/)
22 changes: 22 additions & 0 deletions docs/graphvis_research/draw_graph.py
@@ -0,0 +1,22 @@
import json

import networkx as nx

# Load the Fabrik state dump (see state_net.json in this directory).
with open('state_net.json', 'r') as f:
    network = json.loads(f.read())

# Build display labels: "<id> <type>\n<output shape>".
network_map = {}
for node, params in network.items():
    new_name = (node + ' ' + params['info']['type'] + "\n" +
                str(tuple(params["shape"]["output"])))
    network_map[node] = new_name

# Add one directed edge per connection listed in the state.
graph = nx.DiGraph()
for node, params in network.items():
    output_nodes = params['connection']['output']
    for o_node in output_nodes:
        graph.add_edge(network_map[node], network_map[o_node])

# Render left-to-right at 300 dpi via Graphviz.
dotgraph = nx.nx_pydot.to_pydot(graph)
dotgraph.set('rankdir', 'LR')
dotgraph.set('dpi', 300)
dotgraph.write('PureVis.png', format='png')
18 changes: 18 additions & 0 deletions docs/graphvis_research/print_keras_model.py
@@ -0,0 +1,18 @@
import sys

from keras.models import model_from_json
from keras.utils import plot_model

# A missing argv entry raises IndexError, not KeyError; exit after usage.
try:
    json_file = sys.argv[1]
    output_file = sys.argv[2]
except IndexError:
    print("Usage: python print_keras_model.py <json file name> <image name>")
    sys.exit(1)

with open(json_file, 'r') as f:
    loaded_model = model_from_json(f.read())

# Write the diagram to the requested output file.
plot_model(loaded_model,
           to_file=output_file,
           rankdir='LR',
           show_shapes=True,
           show_layer_names=False)
1 change: 1 addition & 0 deletions docs/graphvis_research/state_net.json
@@ -0,0 +1 @@
{"l6":{"info":{"phase":null,"type":"InnerProduct","parameters":10500,"class":""},"state":{"top":"566px","class":"","left":"358px"},"shape":{"input":[20,0,0],"output":[500]},"connection":{"input":["l3"],"output":["l7"]},"params":{"bias_filler":["constant",false],"bias_regularizer":["None",false],"kernel_constraint":["None",false],"bias_constraint":["None",false],"activity_regularizer":["None",false],"num_output":[500,false],"weight_filler":["xavier",false],"kernel_regularizer":["None",false],"caffe":[true,false],"use_bias":[true,false]},"props":{"name":"l6"}},"l7":{"info":{"phase":null,"type":"ReLU","parameters":0},"state":{"top":"607px","class":"","left":"358px"},"shape":{"input":[500],"output":[500]},"connection":{"input":["l6"],"output":[]},"params":{"negative_slope":[0,false],"caffe":[true,false],"inplace":[true,false]},"props":{"name":"l7"}},"l2":{"info":{"phase":null,"type":"Convolution","parameters":null},"state":{"top":"242px","class":"","left":"358px"},"shape":{"input":[],"output":[20,0,0]},"connection":{"input":["l0","l1"],"output":["l3"]},"params":{"layer_type":["2D",false],"stride_d":[1,true],"pad_h":[0,false],"kernel_constraint":["None",false],"activity_regularizer":["None",false],"stride_h":[1,false],"pad_d":[0,true],"weight_filler":["xavier",false],"stride_w":[1,false],"dilation_d":[1,true],"use_bias":[true,false],"pad_w":[0,false],"kernel_w":[5,false],"bias_filler":["constant",false],"bias_regularizer":["None",false],"bias_constraint":["None",false],"dilation_w":[1,false],"num_output":[20,false],"kernel_d":["",true],"caffe":[true,false],"dilation_h":[1,false],"kernel_regularizer":["None",false],"kernel_h":[5,false]},"props":{"name":"l2"}},"l3":{"info":{"phase":null,"type":"Pooling","parameters":0},"state":{"top":"323px","class":"","left":"358px"},"shape":{"input":[20,0,0],"output":[20,0,0]},"connection":{"input":["l2"],"output":["l6"]},"params":{"layer_type":["2D",false],"kernel_w":[2,false],"stride_d":[1,true],"pad_h":[0,false],"stride_h":[2,false],"pad_d":[0,true],"padding":["SAME",false],"stride_w":[2,false],"kernel_d":["",true],"caffe":[true,false],"kernel_h":[2,false],"pad_w":[0,false],"pool":["MAX",false]},"props":{"name":"l3"}},"l0":{"info":{"phase":0,"type":"Data","parameters":0,"class":""},"state":{"top":"161px","class":"","left":"358px"},"shape":{"input":[],"output":[]},"connection":{"input":[],"output":["l2"]},"params":{"scale":[0.00390625,false],"mean_value":["",false],"mean_file":["",false],"batch_size":[64,false],"source":["examples/mnist/mnist_train_lmdb",false],"force_color":[false,false],"force_gray":[false,false],"rand_skip":[0,false],"prefetch":[4,false],"mirror":[false,false],"caffe":[true,false],"backend":["LMDB",false],"crop_size":[0,false]},"props":{"name":"l0"}},"l1":{"info":{"phase":1,"type":"Data","parameters":0},"state":{"top":"81px","class":"","left":"358px"},"shape":{"input":[],"output":[]},"connection":{"input":[],"output":["l2"]},"params":{"scale":[0.00390625,false],"mean_value":["",false],"mean_file":["",false],"batch_size":[100,false],"source":["examples/mnist/mnist_test_lmdb",false],"force_color":[false,false],"force_gray":[false,false],"rand_skip":[0,false],"prefetch":[4,false],"mirror":[false,false],"caffe":[true,false],"backend":["LMDB",false],"crop_size":[0,false]},"props":{"name":"l1"}},"l9":{"info":{"phase":1,"type":"Accuracy","parameters":0},"state":{"top":"769px","class":"","left":"458px"},"shape":{"input":[10],"output":[10]},"connection":{"input":[],"output":[]},"params":{"top_k":[1,false],"caffe":[true,false],"axis":[1,false]},"props":{"name":"l9"}}}
8 changes: 6 additions & 2 deletions docs/source/addng_new_model.md
@@ -10,7 +10,9 @@
```
4. After making these changes, test if loading the model and exporting it to both or at least one framework is working fine and document it accordingly in your pull request.
-5. Create a pull request for the same and get reviewed by the mentors.
+5. Add a thumbnail image for displaying a preview of the new model.
+6. Add the new model to [Tested Models](https://github.com/Cloud-CV/Fabrik/blob/master/tutorials/tested_models.md).
+7. Create a pull request for the same and get reviewed by the mentors.
Cheers!

### Adding New Model - Keras
@@ -22,5 +24,7 @@ Cheers!
<li><ModelElement importNet={this.props.importNet} framework="keras" id="Sample">sample</ModelElement></li>
```
4. After making these changes, test if loading the model and exporting it to both or at least one framework is working fine and document it accordingly in your pull request.
-5. Create a pull request for the same and get reviewed by the mentors.
+5. Add a thumbnail image for displaying a preview of the new model.
+6. Add the new model to [Tested Models](https://github.com/Cloud-CV/Fabrik/blob/master/tutorials/tested_models.md).
+7. Create a pull request for the same and get reviewed by the mentors.
Cheers!
274 changes: 274 additions & 0 deletions example/caffe/colornet.prototxt
@@ -0,0 +1,274 @@
name: "Colornet"
layer {
name: "img_lab"
top: "img_lab" # Lab color space
type: "Input"
input_param { shape { dim: 1 dim: 3 dim: 227 dim: 227 } }
}
# **************************
# ***** PROCESS COLORS *****
# **************************
# layer { # Convert to lab
# name: "img_lab"
# type: "ColorConv"
# bottom: "data"
# top: "img_lab"
# propagate_down: false
# color_conv_param {
# input: 0 # BGR
# output: 3 # Lab
# }
# }
layer {
name: "img_slice"
type: "Slice"
bottom: "img_lab"
top: "img_l" # [0,100]
top: "data_ab" # [-110,110]
propagate_down: false
slice_param {
axis: 1
slice_point: 1
}
}
layer {
name: "silence_ab"
type: "Silence"
bottom: "data_ab"
}
layer { # 0-center lightness channel
name: "data_l"
type: "Convolution"
bottom: "img_l"
top: "data_l" # scaled and centered lightness value
propagate_down: false
param {lr_mult: 0 decay_mult: 0}
param {lr_mult: 0 decay_mult: 0}
convolution_param {
kernel_size: 1
num_output: 1
}
}
layer {
name: "conv1"
type: "Convolution"
bottom: "data_l"
top: "conv1"
param { lr_mult: 1 decay_mult: 1 }
param { lr_mult: 2 decay_mult: 0 }
convolution_param {
num_output: 96
kernel_size: 11
stride: 4
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
value: 0
}
}
}
layer {
name: "relu1"
type: "ReLU"
bottom: "conv1"
top: "conv1"
}
layer {
name: "pool1"
type: "Pooling"
bottom: "conv1"
top: "pool1"
pooling_param {
pool: MAX
kernel_size: 3
stride: 2
}
}
layer {
name: "conv2"
type: "Convolution"
bottom: "pool1"
top: "conv2"
param { lr_mult: 1 decay_mult: 1 }
param { lr_mult: 2 decay_mult: 0 }
convolution_param {
num_output: 256
pad: 2
kernel_size: 5
group: 2
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
value: 1
}
}
}
layer {
name: "relu2"
type: "ReLU"
bottom: "conv2"
top: "conv2"
}
layer {
name: "pool2"
type: "Pooling"
bottom: "conv2"
top: "pool2"
pooling_param {
pool: MAX
kernel_size: 3
stride: 2
}
}
layer {
name: "conv3"
type: "Convolution"
bottom: "pool2"
top: "conv3"
param { lr_mult: 1 decay_mult: 1 }
param { lr_mult: 2 decay_mult: 0 }
convolution_param {
num_output: 384
pad: 1
kernel_size: 3
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
value: 0
}
}
}
layer {
name: "relu3"
type: "ReLU"
bottom: "conv3"
top: "conv3"
}
layer {
name: "conv4"
type: "Convolution"
bottom: "conv3"
top: "conv4"
param { lr_mult: 1 decay_mult: 1 }
param { lr_mult: 2 decay_mult: 0 }
convolution_param {
num_output: 384
pad: 1
kernel_size: 3
group: 2
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
value: 1
}
}
}
layer {
name: "relu4"
type: "ReLU"
bottom: "conv4"
top: "conv4"
}
layer {
name: "conv5"
type: "Convolution"
bottom: "conv4"
top: "conv5"
param { lr_mult: 1 decay_mult: 1 }
param { lr_mult: 2 decay_mult: 0 }
convolution_param {
num_output: 256
pad: 1
kernel_size: 3
group: 2
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
value: 1
}
}
}
layer {
name: "relu5"
type: "ReLU"
bottom: "conv5"
top: "conv5"
}
layer {
name: "pool5"
type: "Pooling"
bottom: "conv5"
top: "pool5"
pooling_param {
pool: MAX
kernel_size: 3
stride: 2
}
}
layer {
name: "fc6"
type: "InnerProduct"
bottom: "pool5"
top: "fc6"
param { lr_mult: 1 decay_mult: 1 }
param { lr_mult: 2 decay_mult: 0 }
inner_product_param {
num_output: 4096
}
}
layer {
name: "relu6"
type: "ReLU"
bottom: "fc6"
top: "fc6"
}
layer {
name: "drop6"
type: "Dropout"
bottom: "fc6"
top: "fc6"
dropout_param {
dropout_ratio: 0.5
}
}
layer {
name: "fc7"
type: "InnerProduct"
bottom: "fc6"
top: "fc7"
param { lr_mult: 1 decay_mult: 1 }
param { lr_mult: 2 decay_mult: 0 }
inner_product_param {
num_output: 4096
}
}
layer {
name: "relu7"
type: "ReLU"
bottom: "fc7"
top: "fc7"
}
layer {
name: "drop7"
type: "Dropout"
bottom: "fc7"
top: "fc7"
dropout_param {
dropout_ratio: 0.5
}
}
Binary file added ide/static/img/zoo/colornet.png