Commit beb311a

Merge branch 'master' into feature/20180401-file-format-converter

YukioOobuchi committed Jun 18, 2018
2 parents 934354f + 57c4cc7
Showing 37 changed files with 1,411 additions and 208 deletions.
9 changes: 3 additions & 6 deletions build-tools/code_generator/code_generator_utils.py
@@ -12,16 +12,13 @@
# See the License for the specific language governing permissions and
# limitations under the License.


from __future__ import print_function

from collections import OrderedDict
from os.path import abspath, join, dirname, exists

from utils.common import check_update, get_version
from utils.type_conv import type_from_proto

import time
import yaml
from collections import OrderedDict


def represent_odict(dumper, instance):
@@ -213,7 +210,7 @@ def generate_version(template=None, rootdir=None, suffix=None):
if suffix is not None:
version = version + suffix
generated = render_with_template(filename=template, template_kwargs=dict(
version=version, short_version=short_version))
version=version, short_version=short_version, build_number=time.strftime('%y%m%d%H%M%S', time.gmtime())))
path_o = template.replace('.tmpl', '')
check_update(path_o, generated, force=True)
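The `build_number` added to the template arguments above is a UTC timestamp. A minimal sketch of what that expression produces (variable name illustrative):

```python
import re
import time

# As in the generator above: UTC time formatted as YYMMDDHHMMSS.
build_number = time.strftime('%y%m%d%H%M%S', time.gmtime())

# Each field is zero-padded to two digits, so the result is always
# exactly 12 digits and sorts chronologically as a plain string.
assert re.fullmatch(r'\d{12}', build_number)
print(build_number)
```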

2 changes: 1 addition & 1 deletion build-tools/msvc/tools/build_protobuf.bat
@@ -20,7 +20,7 @@ SET protobuf_include_dir=%protobuf_folder%\src
SET protobuf_bin_folder=%protobuf_folder%\build-folder\%build_type%
SET protobuf_lib_suffix=.lib
IF [%build_type%] == [Debug] (
SET protobuf_lib_suffix=.dlib
SET protobuf_lib_suffix=d.lib
)
SET protobuf_library=%protobuf_bin_folder%\libprotobuf%protobuf_lib_suffix%
SET protobuf_lite_library=%protobuf_bin_folder%\libprotobuf-lite%protobuf_lib_suffix%
17 changes: 12 additions & 5 deletions doc/build/build_cpp_utils_windows.md
@@ -11,10 +11,11 @@
## Build

First, clone [nnabla](https://github.com/sony/nnabla) and go into the root folder.
There are some additional dependency libraries that are installed automatically by running the following batch script.

Then, the following batch script does everything, including setting up the remaining dependencies and compiling the libraries.

```bat
build-tools\msvc\setup_cpp_utils_deps.bat
build-tools\msvc\build_cpplib.bat
```

This will setup the following dependency libraries of the NNabla C++ utility
@@ -23,18 +24,24 @@ This will setup the following dependency libraries of the NNabla C++ utility
* ZLib
* Protobuf

into the `third_party` folder (HDF5 is not supported on Windows so far), and generate a batch script `cmake_cpp_utils.bat`. You can build the NNabla core library and the C++ utility library (`nnabla.dll`, `nnabla_utils.dll` and their `.lib` and `.exp` files) by running:
into the `third_party` folder; these are used when compiling and running the NNabla utility library.
Note that HDF5 is not supported on Windows so far, which means you cannot use a `.h5` parameter file in C++ inference/training.
(TODO: Write how to create `.protobuf` file from `.nnp` or `.h5`).

It also builds the NNabla core library and the C++ utility library (`nnabla.dll`, `nnabla_utils.dll` and their `.lib` and `.exp` files).

If you want to build in Debug mode, you have to set the environment variable `build_type` as follows before running the batch script above.

```bat
build-tools\msvc\cmake_cpp_utils.bat
set build_type=Debug
```

## Use the library in your C++ application

To build your C++ binary with NNabla C++ utilities, you need:

* Set `<nnabla root>\include` folder as include path
* Set `nnabla.lib` and `nnabla_utils.lib` as libraries
* Set `nnabla.lib` and `nnabla_utils.lib` as libraries (use the `.dlib` files when building in Debug mode)

At runtime, you will need the following dynamic link libraries placed in a directory on your executable search path.

1 change: 1 addition & 0 deletions examples/cpp/CMakeLists.txt
@@ -1,2 +1,3 @@
add_subdirectory(cpp_graph)
add_subdirectory(mnist_runtime)
add_subdirectory(mnist_training)
4 changes: 4 additions & 0 deletions examples/cpp/README.md
@@ -6,5 +6,9 @@

A walkthrough example of developing an algorithm in Python and running it in C++ on the MNIST handwritten digit classification task.

### [mnist_training](mnist_training)

A walkthrough example of training a model in C++ with an NNP file of an initialized model on MNIST handwritten digit classification.

### [cpp_graph](cpp_graph)
A demonstration of graph construction using C++ low-level API. (Not well documented so far.)
12 changes: 7 additions & 5 deletions examples/cpp/mnist_runtime/README.md
@@ -50,6 +50,10 @@ In the above code, the network structure containing parameters and the execution
The named variables in the network are referenced by the `executors` config. The executor config is used in C++ for executing a network in a simpler way. The executor `runtime` is added where the network `runtime` is executed. The input and output variables are specified by names that are registered in the `networks` field.

## Build MNIST inference example C++ code
You can find an executable file `mnist_runtime` under the build directory located at `nnabla/build/bin`.
If you want to build it yourself using a Makefile, you can follow the process below in Linux environments.
You can also build an executable file `mnist_runtime_cuda` (not produced in the build directory) by the same process.


```shell
make
@@ -78,14 +82,14 @@ Output:
```
Usage: ./mnist_runtime nnp_file input_pgm
Positional arguments:
nnp_file : .nnp file created by examples/vision/mnist/save_nnp_classification.py.
input_pgm : PGM (P5) file of a 28 x 28 image where pixel values < 256.
```
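An input file in the format described above can also be generated without an image editor. A minimal sketch of a binary PGM (P5) writer matching the stated constraints (28 x 28, pixel values below 256), with illustrative names:

```python
def write_pgm_p5(path, pixels, width=28, height=28):
    """Write a binary PGM (P5) image; maxval 255 gives one byte per pixel."""
    assert len(pixels) == width * height
    with open(path, 'wb') as f:
        # P5 header: magic, dimensions, maximum gray value.
        f.write(b'P5\n%d %d\n255\n' % (width, height))
        f.write(bytes(pixels))

# A blank 28 x 28 image of the kind mnist_runtime expects as input_pgm.
write_pgm_p5('blank.pgm', [0] * (28 * 28))
```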

Sample images created using the GIMP editor are located in this folder.

0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:
![0](./original_images/0.png "0")|![1](./original_images/1.png "1")|![2](./original_images/2.png "2")|![3](./original_images/3.png "3")|![4](./original_images/4.png "4")|![5](./original_images/5.png "5")|![6](./original_images/6.png "6")|![7](./original_images/7.png "7")|![8](./original_images/8.png "8")|![9](./original_images/9.png "9")

@@ -159,7 +163,7 @@ nbla::CgVariablePtr y = executor->get_output_variables().at(0).variable;
const float *y_data = y->variable()->get_data_pointer<float>(ctx);
```
10. Show prediction scores and the most likely predicted number of the input image.
```c++
int prediction = 0;
float max_score = -1e10;
@@ -174,5 +178,3 @@ for (int i = 0; i < 10; i++) {
std::cout << std::endl;
std::cout << "Prediction: " << prediction << std::endl;
```


5 changes: 5 additions & 0 deletions examples/cpp/mnist_training/CMakeLists.txt
@@ -0,0 +1,5 @@
add_executable(mnist_training main.cpp)
find_package(ZLIB REQUIRED)
include_directories(${ZLIB_INCLUDE_DIRS})
target_link_libraries(mnist_training nnabla nnabla_utils ${ZLIB_LIBRARIES})
set_property(TARGET mnist_training PROPERTY CXX_STANDARD 11)
5 changes: 5 additions & 0 deletions examples/cpp/mnist_training/GNUmakefile
@@ -0,0 +1,5 @@
all: main.cpp
$(CXX) -std=c++11 -O -o mnist_training main.cpp -lnnabla -lnnabla_utils -lz

clean:
rm -f mnist_training
193 changes: 193 additions & 0 deletions examples/cpp/mnist_training/README.md
@@ -0,0 +1,193 @@
# C++ training with MNIST classification model.

## Introduction

This example demonstrates the workflow for training a classification model in C++.
Although this example has only been tested on Ubuntu 16.04 so far,
a similar build procedure should work on other operating systems with little effort.
We will add more useful examples in the near future.

## Install C++ libraries

Please follow [the installation manual](https://github.com/sony/nnabla/blob/master/doc/build/build_cpp_utils.md).

Note: this example requires zlib and the NNabla Python package to be installed.

The MNIST dataset is also required in the same directory.
It can be downloaded from the following URLs:
* Training images : http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz
* Training labels : http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz
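The downloaded files use the simple big-endian idx binary format. A sketch of a standard-library reader (the magic number 2051 is part of the MNIST images-idx3-ubyte format; function name illustrative):

```python
import gzip
import struct

def read_idx_images(path):
    """Read an MNIST images-idx3-ubyte file, gzip-compressed or not."""
    opener = gzip.open if path.endswith('.gz') else open
    with opener(path, 'rb') as f:
        # Header: magic (2051 for images), image count, rows, cols,
        # all stored as big-endian unsigned 32-bit integers.
        magic, n, rows, cols = struct.unpack('>IIII', f.read(16))
        assert magic == 2051
        data = f.read(n * rows * cols)  # one unsigned byte per pixel
    return n, rows, cols, data
```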

## Create NNP file of an initialized model for MNIST classification.
This sample requires initialized model parameters and a network definition saved as an NNP file.
We provide an example script which creates the NNP file from a classification example in the mnist-example collection.

```shell
python create_initialized_model.py
```

This script imports the definition of the network, and saves the initialized parameters and the network architecture definition into an NNP file.
The following code specifies the information necessary for the network definition.

```python
runtime_contents = {
'networks': [
{'name': 'training',
'batch_size': args.batch_size,
'outputs': {'loss': loss},
'names': {'x': x, 'y': y}}]}
nn.utils.save.save(nnp_file, runtime_contents)
```

In the above code, the network structure and initialized parameters are saved into the NNP file `lenet_initialized.nnp`.
You can see the contents by unzipping the file.
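An NNP file is a plain zip archive, so the standard library is enough to inspect it without extracting anything. A small sketch (function name illustrative):

```python
import zipfile

def list_nnp(path):
    """List the members of an NNP file, which is a plain zip archive."""
    with zipfile.ZipFile(path) as z:
        return sorted(z.namelist())
```

For `lenet_initialized.nnp`, the listing would be expected to show entries such as `nnp_version.txt`, `network.nntxt` and `parameter.protobuf` (the member names used later in this document).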

The network structure contents are described in a JSON-like format.
In the `networks` field, a network is given a name `training`. It has a default batch size.
The computation graph can be set by the output variable `loss` in the `outputs` field.
At the same time, the input variables `x` and `y` of the computation graph are registered in the `names` field.
To query an input or intermediate variable in the computation graph via the C++ interface, you should set a field `names` in the format `{<name>: <Variable>}`.

## Build MNIST training example in C++ code
You can find an executable file `mnist_training` under the build directory located at `nnabla/build/bin`.
If you want to build it yourself using a Makefile, you can follow the commands below in Linux environments.

```shell
make
```

The above command generates an executable `mnist_training` in the current directory.

The build file `GNUmakefile` is simple.
It links `libnnabla.so`, `libnnabla_utils.so` and `libz.so` with the executable generated from `main.cpp`, and compiles with the C++11 option `-std=c++11`.

You can also compile an executable `mnist_training_cuda` that runs computation on your CUDA device.
Please download and refer to `nnabla-ext-cuda` repository for details.

## Handwritten digit training
By running the generated example with no argument, you can see the usage documentation.

```shell
./mnist_training
```

Output:
```
Usage: ./mnist_training model.nnp
model.nnp : model file with initialized parameters.
```

The following command executes the training of the initialized model `lenet_initialized.nnp` on MNIST dataset.

```shell
./mnist_training lenet_initialized.nnp
```

The output file named `parameter.protobuf` contains the learned parameters.

The following process is a temporary workaround; at a later date, we will provide a save function for NNP files.

```shell
cp lenet_initialized.nnp lenet_learned.nnp
unzip lenet_learned.nnp
zip lenet_learned.nnp nnp_version.txt network.nntxt parameter.protobuf
```

You will be asked "replace parameter.protobuf?" when unzipping; please answer "n" so that the learned parameter file is kept.
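Because an NNP file is a zip archive, the unzip/zip round trip above (and its overwrite prompt) can also be avoided with the standard library. A sketch under that assumption, with the file names taken from the steps above (function name illustrative):

```python
import zipfile

def replace_parameters(src_nnp, dst_nnp, param_file='parameter.protobuf'):
    """Copy an NNP archive, swapping in the learned parameter file from disk."""
    with zipfile.ZipFile(src_nnp) as src, \
         zipfile.ZipFile(dst_nnp, 'w') as dst:
        # Copy every member except the old parameters.
        for name in src.namelist():
            if name != param_file:
                dst.writestr(name, src.read(name))
        # Add the parameter.protobuf produced by ./mnist_training.
        dst.write(param_file)
```

Usage would be, e.g., `replace_parameters('lenet_initialized.nnp', 'lenet_learned.nnp')` run in the directory containing the trained `parameter.protobuf`.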

After creating `lenet_learned.nnp`, you can use it as a model file for `mnist_runtime`.


## Walk through the example code

[main.cpp][1]
[1]:main.cpp
1. Add NNabla headers.
```c++
#include <nbla/context.hpp>
```

2. Create an execution engine context.
```c++
nbla::Context ctx{{"cpu:float"}, "CpuCachedArray", "0"};
```
3. Execute training.
```c++
mnist_training(ctx, argv[1]);
```

[mnist_training.hpp][2]
[2]:mnist_training.hpp
1. Add NNabla headers.
```c++
#include <nbla_utils/nnp.hpp>
#include <nbla/computation_graph/variable.hpp>
#include <nbla/computation_graph/function.hpp>
#include <nbla/solver/adam.hpp>
```

3. Create `Nnp` object and set nnp file.
```c++
nbla::utils::nnp::Nnp nnp(ctx);
nnp.add(nnp_file);
```
4. Get a network instance and set the batch size.
```c++
auto net = nnp.get_network("training");
net->set_batch_size(batch_size);
```

5. Load the dataset into a data iterator; modify this part depending on your purpose.
```c++
MnistDataIterator data_iterator;  // no parentheses: `data_iterator()` would declare a function
```
This sample works only with the MNIST training dataset downloaded to this directory.

6. Create solver and set parameters.
```c++
auto adam = create_AdamSolver(ctx, 0.001, 0.9, 0.999, 1.0e-6);
auto parameters = nnp.get_parameters();
adam->set_parameters(parameters);
```
7. Get input data as a CPU array.
```c++
nbla::CgVariablePtr x = net->get_variable("x");
nbla::CgVariablePtr y = net->get_variable("y");
nbla::CgVariablePtr loss = net->get_variable("loss");
float *x_d = x->variable()->cast_data_and_get_pointer<float>(ctx);
int *y_d = y->variable()->cast_data_and_get_pointer<int>(ctx);
```
8. Provide a minibatch in the training loop.
```c++
float *x_d = x->variable()->cast_data_and_get_pointer<float>(ctx);
int *y_d = y->variable()->cast_data_and_get_pointer<int>(ctx);
```
To keep the data in sync with GPU memory, the cast processing should be inside the iteration loop.
9. Execute training loop with forward, backward and update.
```c++
adam->zero_grad();
loss->forward(/*clear_buffer=*/false, /*clear_no_need_grad=*/false);
loss->variable()->grad()->fill(1);
loss->backward(/*NdArrayPtr grad =*/nullptr, /*bool clear_buffer = */false);
adam->update();
```

10. Show mean loss.
```c++
float *loss_d = loss->variable()->cast_data_and_get_pointer<float>(ctx);
mean_loss += loss_d[0];
if ((iter + 1) % n_val_iter == 0) {
mean_loss /= n_val_iter;
std::cout << "iter: " << iter + 1 << ", loss: " << mean_loss << std::endl;
mean_loss = 0;
}
```
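The bookkeeping in step 10 — accumulate the per-iteration loss, average over each window of `n_val_iter` iterations, then reset the accumulator for the next window — can be mirrored in a few lines. A hypothetical sketch of just that arithmetic, not part of the example code:

```python
def mean_losses(losses, n_val_iter):
    """Average the per-iteration losses over each window of n_val_iter steps."""
    means = []
    mean_loss = 0.0
    for it, loss in enumerate(losses):
        mean_loss += loss
        if (it + 1) % n_val_iter == 0:
            means.append(mean_loss / n_val_iter)
            mean_loss = 0.0  # reset for the next window
    return means

# Two windows of two iterations each: means are 2.0 and 3.0.
assert mean_losses([1.0, 3.0, 2.0, 4.0], 2) == [2.0, 3.0]
```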
