C++ training CLI for an MNIST classification model.

Introduction

This example demonstrates the workflow for training a classification model with the C++ training CLI.
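
In outline, the example consists of three steps, each described in the sections below:

python create_mnist_cache.py             # 1. download MNIST and create cache files
python create_initialized_model.py       # 2. save an initialized model as an NNP file
nbla train lenet_initialized.nnp result  # 3. train the model with the C++ CLI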

Install C++ libraries

Please follow the installation manual.

Note: this example requires zlib and the NNabla Python package to be installed.
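
If the NNabla Python package is not yet installed, it is available from PyPI (the installation manual covers environment-specific details):

pip install nnabla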

Download the MNIST dataset and create cached files.

This example requires cached MNIST dataset files. We provide an example script which creates them with utilities from the mnist-example collection.

python create_mnist_cache.py

This command creates a 'cache' directory in the current directory.
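
As a hedged sketch of what such a script might do (the actual code in create_mnist_cache.py may differ), the cache files could be written by iterating over a data source with file caching enabled; MnistDataSource is assumed to come from the mnist-example collection's mnist_data.py:

    # Sketch only: assumes MnistDataSource from the mnist-example collection
    # and the with_file_cache/cache_dir options of nnabla's data iterator.
    from nnabla.utils.data_iterator import data_iterator
    from mnist_data import MnistDataSource

    ds = MnistDataSource(train=True, shuffle=False)
    # Iterating once over the data source with file caching enabled
    # writes the cache files under cache_dir.
    di = data_iterator(ds, batch_size=64, with_file_cache=True,
                       cache_dir='cache/mnist_training.cache')
    for _ in range(ds.size // 64):
        di.next()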

Create an NNP file of an initialized model for MNIST classification.

This example requires initialized model parameters and a network definition saved as an NNP file. We provide an example script which creates the NNP file from a classification example in the mnist-example collection.

python create_initialized_model.py

This script creates an NNP file including the initialized parameters and the information of configurations, networks, optimizers, monitors, and datasets. The following code specifies the information necessary for the network definition.

    nnp_file = '{}_initialized.nnp'.format(args.net)
    training_contents = {
        'global_config': {'default_context': ctx},
        'training_config':
            {'max_epoch': args_added.max_epoch,
             'iter_per_epoch': args_added.iter_per_epoch,
             'save_best': True},
        # Two computation graphs: one for training, one for validation.
        'networks': [
            {'name': 'training',
             'batch_size': args.batch_size,
             'outputs': {'loss': loss_t},
             'names': {'x': x, 'y': t, 'loss': loss_t}},
            {'name': 'validation',
             'batch_size': args.batch_size,
             'outputs': {'loss': loss_v},
             'names': {'x': x, 'y': t, 'loss': loss_v}}],
        'optimizers': [
            {'name': 'optimizer',
             'solver': solver,
             'network': 'training',
             'dataset': 'mnist_training',
             'weight_decay': 0,
             'lr_decay': 1,
             'lr_decay_interval': 1,
             'update_interval': 1}],
        # Datasets refer to the cache directories created earlier.
        'datasets': [
            {'name': 'mnist_training',
             'uri': 'MNIST_TRAINING',
             'cache_dir': args_added.cache_dir + '/mnist_training.cache/',
             'variables': {'x': x, 'y': t},
             'shuffle': True,
             'batch_size': args.batch_size,
             'no_image_normalization': True},
            {'name': 'mnist_validation',
             'uri': 'MNIST_VALIDATION',
             'cache_dir': args_added.cache_dir + '/mnist_test.cache/',
             'variables': {'x': x, 'y': t},
             'shuffle': False,
             'batch_size': args.batch_size,
             'no_image_normalization': True}],
        # Monitors report the losses over the two datasets during training.
        'monitors': [
            {'name': 'training_loss',
             'network': 'validation',
             'dataset': 'mnist_training'},
            {'name': 'validation_loss',
             'network': 'validation',
             'dataset': 'mnist_validation'}],
    }

    nn.utils.save.save(nnp_file, training_contents)

In the above code, the initialized parameters and the other configurations are saved into the NNP file lenet_initialized.nnp. You can see the contents by unzipping the file.
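
For example, the archive contents can be listed with the standard unzip tool:

unzip -l lenet_initialized.nnp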

The network structure contents are described in a JSON-like format. In the networks field, a network is given the name training. It has a default batch size. The computation graph can be set by the output variable loss in the outputs field. At the same time, the input variables x and y of the computation graph are registered in the names field. To query an input or intermediate variable in the computation graph via the C++ interface, you should set the names field in the format {<name>: <Variable>}.
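
Although the query itself happens through the C++ interface in this example, the same name-to-variable lookup can be sketched in Python with nnabla's NnpLoader (shown purely for illustration; batch_size=64 is an arbitrary choice):

    # Illustrative only: load the saved NNP and look up variables by the
    # names registered above.
    from nnabla.utils import nnp_graph

    nnp = nnp_graph.NnpLoader('lenet_initialized.nnp')
    net = nnp.get_network('training', batch_size=64)
    x = net.inputs['x']          # input variable registered in 'names'
    loss = net.outputs['loss']   # output variable registered in 'outputs'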

Execute training with the C++ training CLI

After building the NNabla C++ libraries, you can find the command line interface 'nbla' under the build directory located at nnabla/build/bin. Training is executed with the following command in a Linux environment.

nbla train lenet_initialized.nnp result

The above command creates the result directory, where the logs of the training operation are written.