diff --git a/docs/graphvis_research/CaffeVis.png b/docs/graphvis_research/CaffeVis.png
new file mode 100644
index 000000000..791a744b5
Binary files /dev/null and b/docs/graphvis_research/CaffeVis.png differ
diff --git a/docs/graphvis_research/KerasVis.png b/docs/graphvis_research/KerasVis.png
new file mode 100644
index 000000000..36f419de8
Binary files /dev/null and b/docs/graphvis_research/KerasVis.png differ
diff --git a/docs/graphvis_research/PureVis.png b/docs/graphvis_research/PureVis.png
new file mode 100644
index 000000000..a19530fd9
Binary files /dev/null and b/docs/graphvis_research/PureVis.png differ
diff --git a/docs/graphvis_research/README.md b/docs/graphvis_research/README.md
new file mode 100644
index 000000000..f80448195
--- /dev/null
+++ b/docs/graphvis_research/README.md
@@ -0,0 +1,51 @@
+# Research on adding support for exporting model graphs from Fabrik
+The attached code requires the [common dependencies](../../requirements/common.txt), plus the `networkx` and `pydot` Python packages.
+## Problem
+Currently there is no tool for drawing a Fabrik neural network diagram directly, without doing it by hand. This research surveys some ways to implement such a feature.
+## Observations
+During the research I found several approaches, which fall into two groups.
+### Based on deep learning frameworks
+These methods share a common weakness: they cannot draw unsupported layers. For example, Keras cannot draw an LRN layer. They can also only be implemented in the backend.
+
+Note that all of these tools could be driven by converting the Fabrik net to a framework model directly in memory, without creating model files.
+#### Keras
+Keras has its own utilities, described in its [documentation](https://keras.io/visualization/). All of them are based on the [Pydot](https://github.com/pydot/pydot) library, a Python interface to [Graphviz](http://graphviz.org/). One of these utilities is used in `print_keras_model.py`. Below is the VQA model representation drawn by Keras.
+
+![](KerasVis.png)
+To produce a similar image for this or another model, run:
+```
+python print_keras_model.py ../../example/keras/
+```
+#### Caffe
+Caffe ships its own visualisation script, which also uses pydot. Run `python ~/caffe/caffe/python/draw_net.py --help` to see its usage. Below is a visualised AlexNet.
+
+![](CaffeVis.png)
+```
+python ~/caffe/caffe/python/draw_net.py ../../example/caffe/
+```
+#### Tensorflow
+Tensorflow has Tensorboard for graph visualisation, but I see no way to use it to create a static image rather than an interactive page.
+
+The Tensorflow method also cannot handle recurrent layers, due to their awkward representation in `.pbtxt`.
+### Based on Fabrik's frontend
+These methods target the frontend representation and depend only on Fabrik's own data, not on any framework.
+#### Creating an extension
+If we decided to create an extension for Fabrik, we could take the DOM of the graph that is already rendered and convert it to an image. There is a [JS library](https://github.com/tsayen/dom-to-image) for doing exactly that. The resulting image would look like a large screenshot of the Fabrik net.
+#### Implementing using JSON representation
+Digging a little deeper inside Fabrik, we find that it stores the neural network in its state as a JS object. A sample net representation is provided in `state_net.json`; it is LeNet MNIST with some layers deleted.
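+
+As a quick illustration of the format, here is a minimal sketch that prints each layer with its connections. It is an illustration only, not one of the proposed tools, and assumes it is run next to `state_net.json`:
+```
+import json
+
+# Load the sample Fabrik state (LeNet MNIST with some layers deleted).
+with open('state_net.json', 'r') as f:
+    network = json.load(f)
+
+# Each key is a layer id; 'info', 'shape' and 'connection' describe it.
+for node, params in sorted(network.items()):
+    print('{} ({}): in={} out={}'.format(
+        node, params['info']['type'],
+        params['connection']['input'],
+        params['connection']['output']))
+```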
+The only remaining step is to draw a graph based on this data. There are lots of ways to do that, including [NN-SVG](https://github.com/zfrenchee/NN-SVG), many different [JS libraries](https://stackoverflow.com/questions/7034/graph-visualization-library-in-javascript) and [other tools](https://www.quora.com/What-tools-are-good-for-drawing-neural-network-architecture-diagrams). To keep it simple, I created `draw_graph.py`, which outputs a picture of the neural network with layer types and shapes. It uses [networkx](https://networkx.github.io/) to store the graph and pydot for visualisation, so the result resembles the Caffe and Keras network diagrams.
+
+![](PureVis.png)
+## Conclusion
+Framework-based methods are easy to implement but have many disadvantages, and they cannot be customised (though Caffe's output looks prettier thanks to colour). The DOM-based method is also slow and non-customisable, and is a workaround rather than a real solution. The JSON-representation-based approach, however, can be fast and can produce any output form we want, depending on the library we choose.
+
+## References
+- [Keras](https://keras.io/)
+- [Caffe](http://caffe.berkeleyvision.org/)
+- [Tensorflow](https://www.tensorflow.org/) and [Tensorboard](https://www.tensorflow.org/guide/graph_viz)
+- [Pydot](https://pypi.org/project/pydot/) and [Graphviz](https://www.graphviz.org/)
+- [DOM-to-image](https://github.com/tsayen/dom-to-image)
+- [NN-SVG](https://github.com/zfrenchee/NN-SVG)
+- [Graph library list 1](https://stackoverflow.com/questions/7034/graph-visualization-library-in-javascript), [Graph library list 2](https://www.quora.com/What-tools-are-good-for-drawing-neural-network-architecture-diagrams)
+- [Networkx](https://networkx.github.io/)
diff --git a/docs/graphvis_research/draw_graph.py b/docs/graphvis_research/draw_graph.py
new file mode 100644
index 000000000..a050cdcdc
--- /dev/null
+++ b/docs/graphvis_research/draw_graph.py
@@ -0,0 +1,22 @@
+import json
+
+import networkx as nx
+
+# Load the sample Fabrik state object (see README).
+with open('state_net.json', 'r') as f:
+    network = json.load(f)
+
+# Build display labels: "<id> <type>" plus the layer's output shape.
+network_map = {}
+for node, params in network.items():
+    new_name = (node + ' ' + params['info']['type'] + "\n"
+                + str(tuple(params["shape"]["output"])))
+    network_map[node] = new_name
+
+# Add an edge for every connection recorded in the state.
+graph = nx.DiGraph()
+for node, params in network.items():
+    output_nodes = params['connection']['output']
+    for o_node in output_nodes:
+        graph.add_edge(network_map[node], network_map[o_node])
+
+# Render through pydot/Graphviz, left to right like the Caffe diagrams.
+dotgraph = nx.nx_pydot.to_pydot(graph)
+dotgraph.set('rankdir', 'LR')
+dotgraph.set('dpi', 300)
+dotgraph.write('PureVis.png', format='png')
diff --git a/docs/graphvis_research/print_keras_model.py b/docs/graphvis_research/print_keras_model.py
new file mode 100644
index 000000000..3ea4884c8
--- /dev/null
+++ b/docs/graphvis_research/print_keras_model.py
@@ -0,0 +1,18 @@
+import sys
+
+from keras.models import model_from_json
+from keras.utils import plot_model
+
+# Read the model JSON path and the output image path from the command line.
+try:
+    json_file = sys.argv[1]
+    output_file = sys.argv[2]
+except IndexError:
+    print("Usage: python print_keras_model.py <model_json> <output_png>")
+    sys.exit(1)
+
+with open(json_file, 'r') as f:
+    loaded_model = model_from_json(f.read())
+
+# Draw the model left to right with layer shapes, without layer names.
+plot_model(loaded_model,
+           to_file=output_file,
+           rankdir='LR',
+           show_shapes=True,
+           show_layer_names=False)
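
Aside on the Tensorboard route mentioned in the README above: the closest thing to an export is writing the graph definition out for Tensorboard to display. Below is a minimal sketch, assuming TensorFlow 1.x and an arbitrary `logs/` directory; it still yields an interactive page, not a static image.

```
import tensorflow as tf

# Build a trivial graph just so there is something to export.
x = tf.placeholder(tf.float32, shape=[None, 784], name='input')
y = tf.layers.dense(x, 10, name='fc')

with tf.Session() as sess:
    # Writes an events file that 'tensorboard --logdir logs' can display.
    writer = tf.summary.FileWriter('logs', sess.graph)
    writer.close()
```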
+{"l6":{"info":{"phase":null,"type":"InnerProduct","parameters":10500,"class":""},"state":{"top":"566px","class":"","left":"358px"},"shape":{"input":[20,0,0],"output":[500]},"connection":{"input":["l3"],"output":["l7"]},"params":{"bias_filler":["constant",false],"bias_regularizer":["None",false],"kernel_constraint":["None",false],"bias_constraint":["None",false],"activity_regularizer":["None",false],"num_output":[500,false],"weight_filler":["xavier",false],"kernel_regularizer":["None",false],"caffe":[true,false],"use_bias":[true,false]},"props":{"name":"l6"}},"l7":{"info":{"phase":null,"type":"ReLU","parameters":0},"state":{"top":"607px","class":"","left":"358px"},"shape":{"input":[500],"output":[500]},"connection":{"input":["l6"],"output":[]},"params":{"negative_slope":[0,false],"caffe":[true,false],"inplace":[true,false]},"props":{"name":"l7"}},"l2":{"info":{"phase":null,"type":"Convolution","parameters":null},"state":{"top":"242px","class":"","left":"358px"},"shape":{"input":[],"output":[20,0,0]},"connection":{"input":["l0","l1"],"output":["l3"]},"params":{"layer_type":["2D",false],"stride_d":[1,true],"pad_h":[0,false],"kernel_constraint":["None",false],"activity_regularizer":["None",false],"stride_h":[1,false],"pad_d":[0,true],"weight_filler":["xavier",false],"stride_w":[1,false],"dilation_d":[1,true],"use_bias":[true,false],"pad_w":[0,false],"kernel_w":[5,false],"bias_filler":["constant",false],"bias_regularizer":["None",false],"bias_constraint":["None",false],"dilation_w":[1,false],"num_output":[20,false],"kernel_d":["",true],"caffe":[true,false],"dilation_h":[1,false],"kernel_regularizer":["None",false],"kernel_h":[5,false]},"props":{"name":"l2"}},"l3":{"info":{"phase":null,"type":"Pooling","parameters":0},"state":{"top":"323px","class":"","left":"358px"},"shape":{"input":[20,0,0],"output":[20,0,0]},"connection":{"input":["l2"],"output":["l6"]},"params":{"layer_type":["2D",false],"kernel_w":[2,false],"stride_d":[1,true],"pad_h":[0,false],"stride_h":[2,false],"pad_d":[0,true],"padding":["SAME",false],"stride_w":[2,false],"kernel_d":["",true],"caffe":[true,false],"kernel_h":[2,false],"pad_w":[0,false],"pool":["MAX",false]},"props":{"name":"l3"}},"l0":{"info":{"phase":0,"type":"Data","parameters":0,"class":""},"state":{"top":"161px","class":"","left":"358px"},"shape":{"input":[],"output":[]},"connection":{"input":[],"output":["l2"]},"params":{"scale":[0.00390625,false],"mean_value":["",false],"mean_file":["",false],"batch_size":[64,false],"source":["examples/mnist/mnist_train_lmdb",false],"force_color":[false,false],"force_gray":[false,false],"rand_skip":[0,false],"prefetch":[4,false],"mirror":[false,false],"caffe":[true,false],"backend":["LMDB",false],"crop_size":[0,false]},"props":{"name":"l0"}},"l1":{"info":{"phase":1,"type":"Data","parameters":0},"state":{"top":"81px","class":"","left":"358px"},"shape":{"input":[],"output":[]},"connection":{"input":[],"output":["l2"]},"params":{"scale":[0.00390625,false],"mean_value":["",false],"mean_file":["",false],"batch_size":[100,false],"source":["examples/mnist/mnist_test_lmdb",false],"force_color":[false,false],"force_gray":[false,false],"rand_skip":[0,false],"prefetch":[4,false],"mirror":[false,false],"caffe":[true,false],"backend":["LMDB",false],"crop_size":[0,false]},"props":{"name":"l1"}},"l9":{"info":{"phase":1,"type":"Accuracy","parameters":0},"state":{"top":"769px","class":"","left":"458px"},"shape":{"input":[10],"output":[10]},"connection":{"input":[],"output":[]},"params":{"top_k":[1,false],"caffe":[true,false],"axis":[1,false]},"pro
ps":{"name":"l9"}}} diff --git a/docs/source/addng_new_model.md b/docs/source/addng_new_model.md index 005433bae..050a34101 100644 --- a/docs/source/addng_new_model.md +++ b/docs/source/addng_new_model.md @@ -10,7 +10,9 @@ ``` 4. After making these changes, test if loading the model and exporting it to both or at least one framework is working fine and document it accordingly in your pull request. -5. Create a pull request for the same and get reviewed by the mentors. +5. Add a thumbnail image for displaying a preview of the new model. +6. Add the new model to [Tested Models](https://github.com/Cloud-CV/Fabrik/blob/master/tutorials/tested_models.md). +7. Create a pull request for the same and get reviewed by the mentors. Cheers! ### Adding New Model - Keras @@ -22,5 +24,7 @@ Cheers!
    sample
    ```
 4. After making these changes, test if loading the model and exporting it to both or at least one framework is working fine and document it accordingly in your pull request.
-5. Create a pull request for the same and get reviewed by the mentors.
+5. Add a thumbnail image for displaying a preview of the new model.
+6. Add the new model to [Tested Models](https://github.com/Cloud-CV/Fabrik/blob/master/tutorials/tested_models.md).
+7. Create a pull request for the same and get reviewed by the mentors.
 
 Cheers!
diff --git a/example/caffe/colornet.prototxt b/example/caffe/colornet.prototxt
new file mode 100644
index 000000000..718d77e66
--- /dev/null
+++ b/example/caffe/colornet.prototxt
@@ -0,0 +1,274 @@
+name: "Colornet"
+layer {
+  name: "img_lab"
+  top: "img_lab" # Lab color space
+  type: "Input"
+  input_param { shape { dim: 1 dim: 3 dim: 227 dim: 227 } }
+}
+# **************************
+# ***** PROCESS COLORS *****
+# **************************
+# layer { # Convert to lab
+#   name: "img_lab"
+#   type: "ColorConv"
+#   bottom: "data"
+#   top: "img_lab"
+#   propagate_down: false
+#   color_conv_param {
+#     input: 0 # BGR
+#     output: 3 # Lab
+#   }
+# }
+layer {
+  name: "img_slice"
+  type: "Slice"
+  bottom: "img_lab"
+  top: "img_l" # [0,100]
+  top: "data_ab" # [-110,110]
+  propagate_down: false
+  slice_param {
+    axis: 1
+    slice_point: 1
+  }
+}
+layer {
+  name: "silence_ab"
+  type: "Silence"
+  bottom: "data_ab"
+}
+layer { # 0-center lightness channel
+  name: "data_l"
+  type: "Convolution"
+  bottom: "img_l"
+  top: "data_l" # scaled and centered lightness value
+  propagate_down: false
+  param {lr_mult: 0 decay_mult: 0}
+  param {lr_mult: 0 decay_mult: 0}
+  convolution_param {
+    kernel_size: 1
+    num_output: 1
+  }
+}
+layer {
+  name: "conv1"
+  type: "Convolution"
+  bottom: "data_l"
+  top: "conv1"
+  param { lr_mult: 1 decay_mult: 1 }
+  param { lr_mult: 2 decay_mult: 0 }
+  convolution_param {
+    num_output: 96
+    kernel_size: 11
+    stride: 4
+    weight_filler {
+      type: "gaussian"
+      std: 0.01
+    }
+    bias_filler {
+      type: "constant"
+      value: 0
+    }
+  }
+}
+layer {
+  name: "relu1"
+  type: "ReLU"
+  bottom: "conv1"
+  top: "conv1"
+}
+layer {
+  name: "pool1"
+  type: "Pooling"
+  bottom: "conv1"
+  top: "pool1"
+  pooling_param {
+    pool: MAX
+    kernel_size: 3
+    stride: 2
+  }
+}
+layer {
+  name: "conv2"
+  type: "Convolution"
+  bottom: "pool1"
+  top: "conv2"
+  param { lr_mult: 1 decay_mult: 1 }
+  param { lr_mult: 2 decay_mult: 0 }
+  convolution_param {
+    num_output: 256
+    pad: 2
+    kernel_size: 5
+    group: 2
+    weight_filler {
+      type: "gaussian"
+      std: 0.01
+    }
+    bias_filler {
+      type: "constant"
+      value: 1
+    }
+  }
+}
+layer {
+  name: "relu2"
+  type: "ReLU"
+  bottom: "conv2"
+  top: "conv2"
+}
+layer {
+  name: "pool2"
+  type: "Pooling"
+  bottom: "conv2"
+  top: "pool2"
+  pooling_param {
+    pool: MAX
+    kernel_size: 3
+    stride: 2
+  }
+}
+layer {
+  name: "conv3"
+  type: "Convolution"
+  bottom: "pool2"
+  top: "conv3"
+  param { lr_mult: 1 decay_mult: 1 }
+  param { lr_mult: 2 decay_mult: 0 }
+  convolution_param {
+    num_output: 384
+    pad: 1
+    kernel_size: 3
+    weight_filler {
+      type: "gaussian"
+      std: 0.01
+    }
+    bias_filler {
+      type: "constant"
+      value: 0
+    }
+  }
+}
+layer {
+  name: "relu3"
+  type: "ReLU"
+  bottom: "conv3"
+  top: "conv3"
+}
+layer {
+  name: "conv4"
+  type: "Convolution"
+  bottom: "conv3"
+  top: "conv4"
+  param { lr_mult: 1 decay_mult: 1 }
+  param { lr_mult: 2 decay_mult: 0 }
+  convolution_param {
+    num_output: 384
+    pad: 1
+    kernel_size: 3
+    group: 2
+    weight_filler {
+      type: "gaussian"
+      std: 0.01
+    }
+    bias_filler {
+      type: "constant"
+      value: 1
+    }
+  }
+}
+layer {
+  name: "relu4"
+  type: "ReLU"
+  bottom: "conv4"
+  top: "conv4"
+}
+layer {
+  name: "conv5"
+  type: "Convolution"
+  bottom: "conv4"
+  top: "conv5"
+  param { lr_mult: 1 decay_mult: 1 }
+  param { lr_mult: 2 decay_mult: 0 }
+  convolution_param {
+    num_output: 256
+    pad: 1
+    kernel_size: 3
+    group: 2
+    weight_filler {
+      type: "gaussian"
+      std: 0.01
+    }
+    bias_filler {
+      type: "constant"
+      value: 1
+    }
+  }
+}
+layer {
+  name: "relu5"
+  type: "ReLU"
+  bottom: "conv5"
+  top: "conv5"
+}
+layer {
+  name: "pool5"
+  type: "Pooling"
+  bottom: "conv5"
+  top: "pool5"
+  pooling_param {
+    pool: MAX
+    kernel_size: 3
+    stride: 2
+  }
+}
+layer {
+  name: "fc6"
+  type: "InnerProduct"
+  bottom: "pool5"
+  top: "fc6"
+  param { lr_mult: 1 decay_mult: 1 }
+  param { lr_mult: 2 decay_mult: 0 }
+  inner_product_param {
+    num_output: 4096
+  }
+}
+layer {
+  name: "relu6"
+  type: "ReLU"
+  bottom: "fc6"
+  top: "fc6"
+}
+layer {
+  name: "drop6"
+  type: "Dropout"
+  bottom: "fc6"
+  top: "fc6"
+  dropout_param {
+    dropout_ratio: 0.5
+  }
+}
+layer {
+  name: "fc7"
+  type: "InnerProduct"
+  bottom: "fc6"
+  top: "fc7"
+  param { lr_mult: 1 decay_mult: 1 }
+  param { lr_mult: 2 decay_mult: 0 }
+  inner_product_param {
+    num_output: 4096
+  }
+}
+layer {
+  name: "relu7"
+  type: "ReLU"
+  bottom: "fc7"
+  top: "fc7"
+}
+layer {
+  name: "drop7"
+  type: "Dropout"
+  bottom: "fc7"
+  top: "fc7"
+  dropout_param {
+    dropout_ratio: 0.5
+  }
+}
diff --git a/ide/static/img/zoo/colornet.png b/ide/static/img/zoo/colornet.png
new file mode 100644
index 000000000..97e539962
Binary files /dev/null and b/ide/static/img/zoo/colornet.png differ
diff --git a/ide/static/js/modelZoo.js b/ide/static/js/modelZoo.js
index 7865579b4..0538c708b 100644
--- a/ide/static/js/modelZoo.js
+++ b/ide/static/js/modelZoo.js
@@ -24,37 +24,37 @@ class ModelZoo extends React.Component {
       this.refs.caption.className = " ";
       this.refs.segmentation.className = " ";
       this.refs.vqa.className = " ";
-    } 
+    }
     else if (id == "recognition")
     {
       this.refs.recognition.className = " ";
-    } 
+    }
     else if (id == "detection")
-    { 
+    {
      this.refs.detection.className = " ";
-    } 
+    }
     else if (id == "retrieval")
     {
       this.refs.retrieval.className = " ";
-    } 
+    }
     else if (id == "seq2seq")
     {
       this.refs.seq2seq.className = " ";
-    } 
+    }
     else if (id == "caption")
    {
       this.refs.caption.className = " ";
-    } 
+    }
     else if (id == "segmentation")
     {
       this.refs.segmentation.className = " ";
-    } 
+    }
     else if (id == "vqa")
     {
       this.refs.vqa.className = " ";
-    } 
+    }
   }
-  
+
   componentDidMount() {
     let filter = (pattern) => {
       let layerCompability = (searchQuery, layerName) => {
@@ -75,7 +75,7 @@ class ModelZoo extends React.Component {
       }
       return {
         match: seq,
-        full_match: full_match 
+        full_match: full_match
       };
     }
     for (let elem of $('.col-sm-6')) {
@@ -90,12 +90,12 @@ class ModelZoo extends React.Component {
       }
     }
     $('#model-search-input').keyup((e) => {
-      filter(e.target.value); 
+      filter(e.target.value);
    });
  }
-  
+
  render() {
-    
+
    return (
    @@ -160,6 +160,7 @@ class ModelZoo extends React.Component {
    +
diff --git a/tutorials/adding_new_model.md b/tutorials/adding_new_model.md
index 005433bae..050a34101 100644
--- a/tutorials/adding_new_model.md
+++ b/tutorials/adding_new_model.md
@@ -10,7 +10,9 @@
    ```
 4. After making these changes, test if loading the model and exporting it to both or at least one framework is working fine and document it accordingly in your pull request.
-5. Create a pull request for the same and get reviewed by the mentors.
+5. Add a thumbnail image for displaying a preview of the new model.
+6. Add the new model to [Tested Models](https://github.com/Cloud-CV/Fabrik/blob/master/tutorials/tested_models.md).
+7. Create a pull request for the same and get reviewed by the mentors.
 
 Cheers!
 
 ### Adding New Model - Keras
@@ -22,5 +24,7 @@ Cheers!
    sample
    ```
 4. After making these changes, test if loading the model and exporting it to both or at least one framework is working fine and document it accordingly in your pull request.
-5. Create a pull request for the same and get reviewed by the mentors.
+5. Add a thumbnail image for displaying a preview of the new model.
+6. Add the new model to [Tested Models](https://github.com/Cloud-CV/Fabrik/blob/master/tutorials/tested_models.md).
+7. Create a pull request for the same and get reviewed by the mentors.
 
 Cheers!
diff --git a/tutorials/tested_models.md b/tutorials/tested_models.md
index 5ae94a376..f7e1e87a3 100644
--- a/tutorials/tested_models.md
+++ b/tutorials/tested_models.md
@@ -34,6 +34,7 @@
 
 ### Retrieval
 * MNIST Siamese [\[Source\]](https://github.com/BVLC/caffe/tree/master/examples/siamese)[\[Visualise\]](http://fabrik.cloudcv.org/caffe/load?id=20171208113503xgnfd)
+* Colornet [\[Source\]](https://github.com/richzhang/colorization/blob/master/models/alexnet_deploy_lab.prototxt)[\[Visualise\]](http://fabrik.cloudcv.org/caffe/load?id=20181208162637cezkh)
 
 ### Seq2Seq
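
Aside (not part of the diff): before adding a Caffe model such as `example/caffe/colornet.prototxt` to the tested-models list, a quick parse check can catch prototxt typos. This is a minimal sketch assuming a local Caffe install with its Python bindings on the path.

```
from google.protobuf import text_format

from caffe.proto import caffe_pb2

# Parse the prototxt against the Caffe protobuf schema; a syntax error raises.
net = caffe_pb2.NetParameter()
with open('example/caffe/colornet.prototxt') as f:
    text_format.Merge(f.read(), net)
print('Parsed net "{}" with {} layers'.format(net.name, len(net.layer)))
```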