sync to latest code #28

Merged · 5 commits · Apr 23, 2019
3 changes: 1 addition & 2 deletions ci/jenkins/Jenkinsfile_unix_gpu
@@ -50,8 +50,7 @@ core_logic: {
custom_steps.test_unix_python2_mkldnn_gpu(),
custom_steps.test_unix_python3_mkldnn_gpu(),
custom_steps.test_unix_python3_mkldnn_nocudnn_gpu(),
// Disabled temporarily for https://github.com/apache/incubator-mxnet/issues/14626
// custom_steps.test_unix_python3_tensorrt_gpu(),
custom_steps.test_unix_python3_tensorrt_gpu(),
custom_steps.test_unix_perl_gpu(),
custom_steps.test_unix_r_gpu(),
custom_steps.test_unix_cpp_gpu(),
35 changes: 22 additions & 13 deletions docs/install/index.md
@@ -1233,9 +1231,7 @@ You can do a dockerized cross compilation build on your local machine or a nativ
The complete MXNet library and its requirements can take almost 200MB of RAM, and loading large models with the library can take over 1GB of RAM. Because of this, we recommend running MXNet on the Raspberry Pi 3 or an equivalent device that has more than 1 GB of RAM and a Secure Digital (SD) card that has at least 4 GB of free memory.

## Quick installation
You can use this [pre-built Python wheel](wget https://mxnet-public.s3.amazonaws.com/install/raspbian/mxnet-1.5.0-py2.py3-none-any.whl) on a Raspberry Pi 3B with Stretch. You will likely need to install several dependencies to get MXNet to work. Refer to the following **Build** section for details.

**Cross compilation build (Experimental)**
You can use this [pre-built Python wheel](https://mxnet-public.s3.amazonaws.com/install/raspbian/mxnet-1.5.0-py2.py3-none-any.whl) on a Raspberry Pi 3B with Stretch. You will likely need to install several dependencies to get MXNet to work. Refer to the following **Build** section for details.

## Docker installation
**Step 1** Install Docker on your machine by following the [docker installation instructions](https://docs.docker.com/engine/installation/linux/ubuntu/#install-using-the-repository).
@@ -1248,18 +1246,22 @@ Follow the four steps in this [docker documentation](https://docs.docker.com/eng

## Build

**Please use a Native build with gcc 4 as explained below, higher compiler versions currently cause test
failures on ARM**
**This cross compilation build is experimental.**

**Please use a Native build with gcc 4 as explained below; higher compiler versions currently cause test failures on ARM.**

The following command will build a container with dependencies and tools and then compile MXNet for
ARMv7. The resulting artifact will be located in `build/mxnet-x.x.x-py2.py3-none-any.whl`, copy this
file to your Raspberry Pi.
The following command will build a container with dependencies and tools,
and then compile MXNet for ARMv7.
You will want to run this on a fast cloud instance or locally on a fast PC to save time.
The resulting artifact will be located in `build/mxnet-x.x.x-py2.py3-none-any.whl`.
Copy this file to your Raspberry Pi.
The previously mentioned pre-built wheel was created using this method.

```
ci/build.py -p armv7
```

## Install
## Install using a pip wheel

Your Pi will need several dependencies.

@@ -1282,6 +1284,7 @@ sudo apt-get install -y \
libzmq3-dev \
ninja-build \
python-dev \
python-pip \
software-properties-common \
sudo \
unzip \
@@ -1298,18 +1301,24 @@ virtualenv -p `which python` mxnet_py27
```
You may use Python 3; however, the [wine bottle detection example](https://mxnet.incubator.apache.org/versions/master/tutorials/embedded/wine_detector.html) for the Pi with a camera requires Python 2.7.

Create a virtualenv and install the wheel we created previously, or the wheel that you downloaded.
Activate the environment, then install the wheel we created previously, or install this [prebuilt wheel](https://mxnet-public.s3.amazonaws.com/install/raspbian/mxnet-1.5.0-py2.py3-none-any.whl).

```
source mxnet_py27/bin/activate
pip install mxnet-x.x.x-py2.py3-none-any.whl
```

Test MXNet with the Python interpreter:
```
$ python

>>> import mxnet
```
If there are no errors then you're ready to start using MXNet on your Pi!
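
As a further sanity check, the sketch below (values are made up for illustration) runs a small NDArray computation once the import succeeds:

```
import mxnet as mx

# build a small array and run one elementwise op on the CPU
a = mx.nd.ones((2, 3))
b = (a * 2 + 1).asnumpy()
print(b)                 # expect a 2x3 array of 3.0
print(mx.__version__)    # should report the wheel's version, e.g. 1.5.0
```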

**Native Build**
## Native Build

Installing MXNet is a two-step process:
Installing MXNet from source is a two-step process:

1. Build the shared library from the MXNet C++ source code.
2. Install the supported language-specific packages for MXNet.
3 changes: 2 additions & 1 deletion python/mxnet/gluon/nn/activations.py
@@ -153,12 +153,13 @@ class ELU(HybridBlock):
Outputs:
- **out**: output tensor with the same shape as `data`.
"""

def __init__(self, alpha=1.0, **kwargs):
super(ELU, self).__init__(**kwargs)
self._alpha = alpha

def hybrid_forward(self, F, x):
return F.where(x > 0, x, self._alpha * (F.exp(x) - 1.0))
return F.LeakyReLU(x, act_type='elu', slope=self._alpha)


class SELU(HybridBlock):
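The new `hybrid_forward` delegates to the built-in `LeakyReLU` operator with `act_type='elu'` rather than composing `where`/`exp`. A minimal sketch (sample values are made up) that checks the fused operator against the elementwise definition:

```
import mxnet as mx

alpha = 1.0
x = mx.nd.array([-2.0, -0.5, 0.0, 0.5, 2.0])

# fused form used by the new implementation
fused = mx.nd.LeakyReLU(x, act_type='elu', slope=alpha)

# elementwise reference: x for x > 0, alpha * (exp(x) - 1) otherwise
reference = mx.nd.where(x > 0, x, alpha * (mx.nd.exp(x) - 1.0))

print(fused.asnumpy())
print(reference.asnumpy())   # both should agree elementwise
```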
@@ -74,7 +74,7 @@ void MKLDNNQuantizedFullyConnectedForward(const nnvm::NodeAttrs &attrs,
int32_t *quantized_bias_ptr = quantized_bias.data().dptr<int32_t>();
size_t bias_size = bias.shape().Size();
#pragma omp parallel for num_threads(engine::OpenMP::Get()->GetRecommendedOMPThreadCount())
for (size_t i = 0; i < bias_size; ++i) {
for (index_t i = 0; i < static_cast<index_t>(bias_size); ++i) {
quantized_bias_ptr[i] = bias_ptr[i] * bias_int32_rescale;
}
}
2 changes: 1 addition & 1 deletion src/operator/subgraph/mkldnn/mkldnn_fc.cc
@@ -156,7 +156,7 @@ void SgMKLDNNFCOp::Forward(const OpContext &ctx,
int32_t *quantized_bias_ptr = cached_bias_.data().dptr<int32_t>();
size_t bias_size = bias.shape().Size();
#pragma omp parallel for num_threads(engine::OpenMP::Get()->GetRecommendedOMPThreadCount())
for (size_t i = 0; i < bias_size; ++i) {
for (index_t i = 0; i < static_cast<index_t>(bias_size); ++i) {
quantized_bias_ptr[i] = bias_ptr[i] * bias_int32_rescale;
}
}
1 change: 1 addition & 0 deletions tests/python/gpu/test_operator_gpu.py
@@ -48,6 +48,7 @@
set_default_context(mx.gpu(0))
del test_support_vector_machine_l1_svm # noqa
del test_support_vector_machine_l2_svm # noqa
del test_custom_op_fork #noqa


def check_countsketch(in_dim,out_dim,n):
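For context, a minimal, self-contained sketch (hypothetical test names) of the idiom this file uses: the GPU suite star-imports the CPU test module and then deletes any test that must not run in that context, so the test collector never sees it:

```
def test_runs_everywhere():
    assert 1 + 1 == 2

def test_cpu_only_fork_behaviour():
    assert True

# removed from this module's namespace, mirroring `del test_custom_op_fork`;
# a test runner collecting this module will only find test_runs_everywhere
del test_cpu_only_fork_behaviour  # noqa
assert 'test_cpu_only_fork_behaviour' not in globals()
```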
2 changes: 1 addition & 1 deletion tests/python/unittest/test_gluon.py
@@ -1180,7 +1180,7 @@ def swish_test(x):
elu = mx.gluon.nn.ELU()
def elu_test(x):
def elu(x):
return 1.0 * (mx.nd.exp(x) - 1) if x < 0 else x
return mx.nd.expm1(x) if x <= 0.0 else x
return [elu(x_i) for x_i in x]

for test_point, ref_point in zip(elu_test(point_to_validate), elu(point_to_validate)):
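The test's reference ELU now uses `expm1` instead of `exp(x) - 1`, presumably for numerical accuracy near zero. A minimal sketch (made-up sample value) of the difference at default float32 precision:

```
import mxnet as mx

x = mx.nd.array([1e-7])               # float32 by default
print((mx.nd.exp(x) - 1).asnumpy())   # ~1.19e-07: cancellation error after the subtraction
print(mx.nd.expm1(x).asnumpy())       # ~1.00e-07: computed directly, accurate
```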
5 changes: 4 additions & 1 deletion tests/python/unittest/test_operator.py
@@ -5393,6 +5393,9 @@ def create_operator(self, ctx, shapes, dtypes):
x = mx.nd.Custom(length=10, depth=10, op_type="no_input_op")
assert_almost_equal(x.asnumpy(), np.ones(shape=(10, 10), dtype=np.float32))


@with_seed()
def test_custom_op_fork():
# test custom operator fork
# see https://github.com/apache/incubator-mxnet/issues/14396
class AdditionOP(mx.operator.CustomOp):
@@ -5430,7 +5433,7 @@ def custom_add():
p.daemon = True
p.start()
p.join(5)
assert not p.is_alive(), "deadlock may exist in custom operator"
assert not p.is_alive() and p.exitcode == 0


def _build_dot_custom(fun_forward, name):
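The tightened assertion requires the child process both to finish within the timeout and to exit cleanly. A minimal, self-contained sketch (hypothetical worker, no MXNet custom op) of that pattern:

```
import multiprocessing as mp

def worker():
    # the real test builds and runs a custom operator here
    assert sum(range(10)) == 45

if __name__ == '__main__':
    p = mp.Process(target=worker)
    p.daemon = True
    p.start()
    p.join(5)                 # bounded wait guards against a deadlock
    assert not p.is_alive()   # the child did not hang
    assert p.exitcode == 0    # and it exited without errors
```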