Commit

Merge branch 'master' into feature/20180401-file-format-converter
YukioOobuchi committed Jun 7, 2018
2 parents 563925b + 97669f9 commit 491332b
Showing 35 changed files with 1,515 additions and 40 deletions.
6 changes: 6 additions & 0 deletions build-tools/code_generator/function_types.yaml
@@ -368,6 +368,12 @@ DepthwiseDeconvolution:
Round:
float: [float]
half: [Half]
Ceil:
float: [float]
half: [Half]
Floor:
float: [float]
half: [Half]
Sin:
float: [float]
half: [Half]
54 changes: 48 additions & 6 deletions build-tools/code_generator/functions.yaml
@@ -700,14 +700,10 @@ Normalization:
.. math::
\begin{eqnarray}
\mu &=& \frac{1}{M} \sum x_i \\
rm &=& ({\rm decay\_rate}) rm + (1 - {\rm decay\_rate}) \mu \\
y_i &=& x_i - rm
y_i &=& x_i - \mu
\end{eqnarray}
At validation time, it is defined as
.. math::
y_i = x_i - rm
At testing time, the mean values used are those that were computed during training by moving average.
Note:
The backward performs an approximated differentiation that takes into account only the latest mini-batch.
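The hunk above changes the training-time output of MeanSubtraction from `x_i - rm` to `x_i - \mu`, while the moving average `rm` is still maintained for use at test time. A minimal NumPy sketch of the behavior described by the updated docstring (the function and argument names here are illustrative, not nnabla's API):

```python
import numpy as np

def mean_subtraction(x, running_mean, decay_rate=0.999, training=True):
    """Sketch of the MeanSubtraction forward pass described above.

    x            : (M, ...) batch of inputs
    running_mean : moving average of batch means; updated in place,
                   but only during training
    """
    if training:
        mu = x.mean(axis=0)                       # mu = 1/M * sum x_i
        running_mean *= decay_rate                # rm = decay*rm + (1-decay)*mu
        running_mean += (1.0 - decay_rate) * mu
        return x - mu                             # y_i = x_i - mu
    # At test time, subtract the moving average computed during training.
    return x - running_mean
```

Note that the batch mean `mu`, not `rm`, is subtracted during training; `rm` only enters the computation at test time.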
@@ -1779,6 +1775,52 @@ Math:
.. math::
\frac{\partial y_i}{\partial x_i} = 1.
inputs:
x:
doc: Input variable
arguments: {}
outputs:
y:
doc: N-D array with the same shape as x
Ceil:
snake_name: ceil
doc: |2+
Element-wise ceil function.
In the forward pass, this function simply returns the smallest integer which is not less than the input.
.. math::
y_i = ceil(x_i).
In the backward pass, the simple Straight-Through Estimator (STE) is applied,
.. math::
\frac{\partial y_i}{\partial x_i} = 1.
inputs:
x:
doc: Input variable
arguments: {}
outputs:
y:
doc: N-D array with the same shape as x
Floor:
snake_name: floor
doc: |2+
Element-wise floor function.
In the forward pass, this function simply returns the largest integer which is not greater than the input.
.. math::
y_i = floor(x_i).
In the backward pass, the simple Straight-Through Estimator (STE) is applied,
.. math::
\frac{\partial y_i}{\partial x_i} = 1.
inputs:
x:
doc: Input variable
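The new `Ceil` and `Floor` entries use the same straight-through estimator (STE) as the existing `Round`: the forward pass quantizes, while the backward pass treats the derivative as 1 so the upstream gradient flows through unchanged. A minimal NumPy sketch of that forward/backward pair (not nnabla's implementation):

```python
import numpy as np

def ceil_forward(x):
    # y_i = ceil(x_i): the smallest integer not less than x_i
    return np.ceil(x)

def floor_forward(x):
    # y_i = floor(x_i): the largest integer not greater than x_i
    return np.floor(x)

def ste_backward(dy):
    # Straight-through estimator: dy/dx is approximated as 1,
    # so the upstream gradient passes through unchanged.
    return dy
```

Without the STE these functions would have zero gradient almost everywhere, which would stop learning; the identity approximation is what makes them usable inside a trained graph.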
18 changes: 18 additions & 0 deletions build-tools/make/build-with-docker.mk
@@ -31,6 +31,7 @@ DOCKER_IMAGE_NAME_BASE ?= nnabla-build
DOCKER_IMAGE_AUTO_FORMAT ?= $(DOCKER_IMAGE_NAME_BASE)-auto-format
DOCKER_IMAGE_DOC ?= $(DOCKER_IMAGE_NAME_BASE)-doc
DOCKER_IMAGE_BUILD ?= $(DOCKER_IMAGE_NAME_BASE)-build
DOCKER_IMAGE_NNABLA ?= $(DOCKER_IMAGE_NAME_BASE)-nnabla

DOCKER_RUN_OPTS +=--rm
DOCKER_RUN_OPTS += -v $$(pwd):$$(pwd)
@@ -96,3 +97,20 @@ bwd-nnabla-wheel: docker_image_build
bwd-nnabla-test: docker_image_build
cd $(NNABLA_DIRECTORY) \
&& docker run $(DOCKER_RUN_OPTS) $(DOCKER_IMAGE_BUILD) make -f build-tools/make/build.mk nnabla-test-local

.PHONY: bwd-nnabla-shell
bwd-nnabla-shell: docker_image_build
cd $(NNABLA_DIRECTORY) \
&& docker run $(DOCKER_RUN_OPTS) -it --rm ${DOCKER_IMAGE_BUILD} make nnabla-shell

########################################################################################################################
# Docker image with current nnabla
.PHONY: docker_image_nnabla
docker_image_nnabla: bwd-nnabla-cpplib bwd-nnabla-wheel
docker pull ubuntu:16.04
cd $(NNABLA_DIRECTORY) \
&& cp docker/development/Dockerfile.build.py$(PYTHON_VERSION_MAJOR)$(PYTHON_VERSION_MINOR) Dockerfile \
&& echo ADD $(shell echo build_wheel_py$(PYTHON_VERSION_MAJOR)$(PYTHON_VERSION_MINOR)/dist/*.whl) /tmp/ >>Dockerfile \
&& echo RUN pip install /tmp/$(shell basename build_wheel_py$(PYTHON_VERSION_MAJOR)$(PYTHON_VERSION_MINOR)/dist/*.whl) >>Dockerfile \
&& docker build $(DOCKER_BUILD_ARGS) -t $(DOCKER_IMAGE_NNABLA) . \
&& rm -f Dockerfile
8 changes: 8 additions & 0 deletions build-tools/make/build.mk
@@ -92,6 +92,14 @@ nnabla-install:
-pip uninstall -y nnabla
pip install $(BUILD_DIRECTORY_WHEEL)/dist/*.whl

########################################################################################################################
# Shell (for rapid development)
.PHONY: nnabla-shell
nnabla-shell:
PS1="nnabla-build: " bash --norc -i

########################################################################################################################
# test
.PHONY: nnabla-test-cpplib
nnabla-test-cpplib: nnabla-cpplib
@$(MAKE) -C $(BUILD_DIRECTORY_CPPLIB) cpplibtest
1 change: 1 addition & 0 deletions doc/python/api.rst
@@ -16,4 +16,5 @@ Python API Reference
api/communicator
api/monitor
api/data_iterator
api/utils
api/ext
4 changes: 4 additions & 0 deletions doc/python/api/function.rst
@@ -36,7 +36,9 @@ Neural Network Layers

.. autofunction:: affine
.. autofunction:: convolution
.. autofunction:: depthwise_convolution
.. autofunction:: deconvolution
.. autofunction:: depthwise_deconvolution
.. autofunction:: max_pooling
.. autofunction:: average_pooling
.. autofunction:: sum_pooling
@@ -135,6 +137,8 @@ Math
.. autofunction:: exp
.. autofunction:: log
.. autofunction:: round
.. autofunction:: ceil
.. autofunction:: floor
.. autofunction:: identity
.. autofunction:: matrix_diag
.. autofunction:: matrix_diag_part
7 changes: 7 additions & 0 deletions doc/python/api/parametric_function.rst
@@ -52,12 +52,18 @@ Here is the list of parametric functions.

.. autofunction:: affine
.. autofunction:: convolution
.. autofunction:: depthwise_convolution
.. autofunction:: deconvolution
.. autofunction:: depthwise_deconvolution
.. autofunction:: batch_normalization
.. autofunction:: mean_subtraction

.. autofunction:: embed
.. autofunction:: prelu

.. autofunction:: svd_affine
.. autofunction:: svd_convolution
.. autofunction:: cpd3_convolution
.. autofunction:: binary_connect_affine
.. autofunction:: binary_connect_convolution
.. autofunction:: binary_weight_affine
@@ -69,6 +75,7 @@ Here is the list of parametric functions.
.. autofunction:: fixed_point_quantized_convolution
.. autofunction:: pow2_quantized_affine
.. autofunction:: pow2_quantized_convolution

.. autofunction:: lstm

.. autoclass:: LSTMCell
9 changes: 9 additions & 0 deletions doc/python/api/utils.rst
@@ -0,0 +1,9 @@
Utils
==============

.. toctree::
:maxdepth: 1

utils/data_iterator.rst
utils/profiling.rst

File renamed without changes.
14 changes: 14 additions & 0 deletions doc/python/api/utils/profiling.rst
@@ -0,0 +1,14 @@
Profiling
==============

.. automodule:: nnabla.utils.profiler

Profiler
----------

.. autoclass:: GraphProfiler
:members:

.. autoclass:: GraphProfilerCsvWriter
:members:

2 changes: 1 addition & 1 deletion doc/python/install_on_windows.rst
@@ -104,7 +104,7 @@ Check for running (CUDA/cuDNN).
> ipython
In [1]: import nnabla_ext.cuda.cudnn
In [1]: import nnabla_ext.cudnn
2017-06-16 18:42:18,881 [nnabla][Level 99]: Initializing CPU extension...
2017-06-16 18:42:19,923 [nnabla][Level 99]: Initializing CUDA extension...
2017-06-16 18:42:20,243 [nnabla][Level 99]: Initializing cuDNN extension...
8 changes: 4 additions & 4 deletions doc/python/tutorial/by_examples.rst
@@ -576,8 +576,8 @@ CUDA by specifying a context before building a graph. We strongly
recommend using a CUDNN context, which is fast. Although the context class
can be instantiated by ``nn.Context()``, specifying a context descriptor
might be a bit complicated for users. Therefore, we recommend creating a
context by using a helper function ``extension_context()`` found in the
``nnabla.contrib.context`` module. NNabla officially supports ``cpu``
context by using a helper function ``get_extension_context()`` found in the
``nnabla.ext_utils`` module. NNabla officially supports ``cpu``
and ``cudnn`` as a context specifier passed to the first argument
(extension name). NOTE: By setting the cudnn context as a global default
context, Functions and solvers created are instantiated with CUDNN
@@ -589,9 +589,9 @@ for details.
.. code-block:: python2
# Run on CUDA
from nnabla.contrib.context import extension_context
from nnabla.ext_utils import get_extension_context
cuda_device_id = 0
ctx = extension_context('cudnn', device_id=cuda_device_id)
ctx = get_extension_context('cudnn', device_id=cuda_device_id)
print "Context:", ctx
nn.set_default_context(ctx) # Set CUDA as a default context.
y, hs = cnn(x)
4 changes: 2 additions & 2 deletions doc/python/tutorial/dynamic_and_static_nn.rst
@@ -106,10 +106,10 @@ neural network does not change during training.

.. code-block:: python2
from nnabla.contrib.context import extension_context
from nnabla.ext_utils import get_extension_context
# setup cuda extension
ctx_cuda = extension_context('cudnn', device_id=GPU) # replace 'cuda.cudnn' by 'cpu' if you want to run the example on the CPU
ctx_cuda = get_extension_context('cudnn', device_id=GPU) # replace 'cudnn' by 'cpu' if you want to run the example on the CPU
nn.set_default_context(ctx_cuda)
# create variables for network input and label
6 changes: 3 additions & 3 deletions doc/python/tutorial/multi_device_training.rst
@@ -88,7 +88,7 @@ Prepare the dependencies
import nnabla as nn
import nnabla.communicators as C
from nnabla.contrib.context import extension_context
from nnabla.ext_utils import get_extension_context
import nnabla.functions as F
from nnabla.initializer import (
calc_uniform_lim_glorot,
@@ -104,13 +104,13 @@ Define the communicator for gradients exchange.
%%px
extension_module = "cudnn"
ctx = extension_context(extension_module)
ctx = get_extension_context(extension_module)
comm = C.MultiProcessDataParalellCommunicator(ctx)
comm.init()
n_devices = comm.size
mpi_rank = comm.rank
device_id = mpi_rank
ctx = extension_context(extension_module, device_id=device_id)
ctx = get_extension_context(extension_module, device_id=device_id)
Check that different ranks are assigned to different devices

File renamed without changes.
File renamed without changes.
45 changes: 45 additions & 0 deletions include/nbla/function/ceil.hpp
@@ -0,0 +1,45 @@
// Copyright (c) 2017 Sony Corporation. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

/** Ceil
*/

#ifndef NBLA_FUNCTION_CEIL_HPP
#define NBLA_FUNCTION_CEIL_HPP

#include <nbla/function/utils/base_transform_unary.hpp>

#include <cmath>

namespace nbla {

/** @class Ceil
@brief Ceiling value, defined as
@f[
y_i = ceil(x_i),
@f]
Inputs:
- N-D array.
Outputs:
- N-D array.
@tparam T Data type for computation.
\ingroup FunctionImplGrp
*/

NBLA_DEFINE_TRANSFORM_UNARY(Ceil, std::ceil(x), dy, false);
}
#endif
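`NBLA_DEFINE_TRANSFORM_UNARY` generates a complete element-wise function class from a forward expression (`std::ceil(x)`), a backward expression (`dy`, i.e. the STE pass-through), and a flag (`false`) indicating the backward does not need the input. A rough Python analogue of the pattern, to show how one macro invocation yields a full forward/backward pair (the names here are illustrative, not nnabla's):

```python
import numpy as np

def define_transform_unary(forward_expr, backward_expr):
    """Build an element-wise function object from a forward and a
    backward expression, mirroring the C++ macro's structure."""
    class TransformUnary:
        def forward(self, x):
            return forward_expr(x)

        def backward(self, dy, x=None):
            # For Ceil/Floor the backward expression is just `dy` (STE),
            # so x is unused -- the macro's `false` flag records exactly
            # that, letting the input buffer be released early.
            return backward_expr(dy, x)
    return TransformUnary()

# Analogue of NBLA_DEFINE_TRANSFORM_UNARY(Ceil, std::ceil(x), dy, false);
ceil_fn = define_transform_unary(np.ceil, lambda dy, x: dy)
```

This is why the two new headers are so short: the shared `base_transform_unary.hpp` machinery supplies everything except the two expressions.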
43 changes: 43 additions & 0 deletions include/nbla/function/floor.hpp
@@ -0,0 +1,43 @@
// Copyright (c) 2017 Sony Corporation. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#ifndef NBLA_FUNCTION_FLOOR_HPP
#define NBLA_FUNCTION_FLOOR_HPP

#include <nbla/function/utils/base_transform_unary.hpp>

#include <cmath>

namespace nbla {

/** @class Floor
@brief Flooring value, defined as
@f[
y_i = floor(x_i),
@f]
Inputs:
- N-D array.
Outputs:
- N-D array.
@tparam T Data type for computation.
\ingroup FunctionImplGrp
*/

NBLA_DEFINE_TRANSFORM_UNARY(Floor, std::floor(x), dy, false);
}
#endif
1 change: 1 addition & 0 deletions python/setup.py
@@ -228,6 +228,7 @@ def extopts(library_name, library_dir):
'nnabla.utils.converter.nnabla',
'nnabla.utils.converter.nnablart',
'nnabla.utils.converter.onnx',
'nnabla.utils.factorization',
'nnabla_ext',
'nnabla_ext.cpu', ]

