# v7.0.0a1
Pre-release
This is the release note of v7.0.0a1. See here for the complete list of solved issues and merged PRs.
## Highlights
- Many examples, including ImageNet, DCGAN, and VAE, now support ChainerX arrays
## New Features
- Support orthogonal embedding initialization (#6031)
- Add an option in `links.loss.CRF1d` to automatically sort the input sequence (#6351)
- Add AdaBound (and AMSBound) (#6388, thanks @hitsgub!)
- Add `squared_difference` to ChainerX (#6501, thanks @aksub99!)
- Implement array vs array functionality in `chainerx.minimum` (#6541, thanks @aksub99!)
- Add FP16 support to send/recv (#6552)
- Implement array vs array functionality in `chainerx.maximum` (#6570, thanks @aksub99!)
- Add mean/var Python bindings to ChainerX (#6640, thanks @kshitij12345!)
- Add `chainerx.ceil` (#6705, thanks @kshitij12345!)
- Add `chainerx.floor` (#6707, thanks @kshitij12345!)
- Add `chainerx.absolute` (#6715, thanks @dido1998!)
- Add `chainerx.argmin` and `chainerx.ndarray.argmin` (#6740, thanks @Harshan01!)
- Add `chainerx.amin` and `chainerx.min` (#6752, thanks @Harshan01!)
- Add `chainerx.a/sinh` and `chainerx.a/cosh` (#6776, thanks @kshitij12345!)
- Add `chainerx.fabs` and `chainerx.sign` (#6777, thanks @kshitij12345!)
- Add `chainerx.logical_and` and `chainerx.logical_or` (#6779, thanks @kshitij12345!)
- Add `chainerx.all` and `chainerx.any` (#6781, thanks @kshitij12345!)
- Add `chainerx::Softmax` and `chainerx.softmax` (#6814, thanks @tohmae!)
- Add zero-fill mode in allreduce of ChainerMN (#6817)
- Make `BatchNorm` states public (#6847)
- Introduce Native/CUDA macros for registering standard elementwise ops (#6870, thanks @kshitij12345!)
- Make Adam variants more accessible (#6874, thanks @crcrpar!)
- Add `chainerx::Swapaxes` and `chainerx.swapaxes` (#6897, thanks @kshitij12345!)
- Add `chainerx.logical_xor` (#7014, thanks @ishanrai05!)
- Add `chainerx.log10` (#7015, thanks @ishanrai05!)
- Add `chainerx.isfinite` (#7016, thanks @kshitij12345!)
- Add bitwise ops to ChainerX (#7017, thanks @kshitij12345!)
- Add `chainerx.arctan2` (#7028, thanks @kshitij12345!)
- Add `chainerx.expand_dims` (#7029, thanks @kshitij12345!)
- Add `chainerx.flip`, `chainerx.fliplr`, and `chainerx.flipud` (#7065, thanks @kshitij12345!)
- Add `chainerx.where` (#7067, thanks @kshitij12345!)
- Add `F.arctanh` (#7095)
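Many of the routines above are designed to follow NumPy's semantics. As a rough illustration of what the new element-wise selection and flipping routines compute, here is a sketch using NumPy as a stand-in (an assumption based on ChainerX's NumPy-compatible API; this is not Chainer's own code):

```python
import numpy as np

a = np.array([[1.0, -2.0], [3.0, -4.0]])
b = np.array([[0.5, -1.0], [4.0, -5.0]])

# chainerx.minimum / chainerx.maximum now accept array-vs-array inputs,
# computing the element-wise min/max of the two arrays:
elementwise_min = np.minimum(a, b)   # [[0.5, -2.0], [3.0, -5.0]]
elementwise_max = np.maximum(a, b)   # [[1.0, -1.0], [4.0, -4.0]]

# chainerx.where selects from two arrays based on a condition:
selected = np.where(a > 0, a, b)     # [[1.0, -1.0], [3.0, -5.0]]

# chainerx.fliplr reverses the order of columns (axis 1):
flipped = np.fliplr(a)               # [[-2.0, 1.0], [-4.0, 3.0]]
```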
## Enhancements
- Improve error message of `gradient_check.check_double_backward` (#6427)
- Improve `link_hooks.SpectralNormalization` (#6655, thanks @crcrpar!)
- ChainerX Op registration: normalization (#6719)
- ChainerX Op registration: arithmetic (#6723)
- Implement ReLU in ChainerX (#6731, thanks @dido1998!)
- Make device functions public (#6744)
- ChainerX Op registration: creation (#6745)
- ChainerX Op registration: linalg (#6746)
- Allow `snapshot_object` to have `condition` and `writer` options (#6762)
- Support fallback of ChainerX `GetItem` when indices contain a `chainerx.ndarray` (#6769)
- Fix `Evaluator` for `chainer.dataset.converter` (#6768)
- Rename `patients` argument to `patience` in `EarlyStoppingTrigger` (#6784)
- Remove `Backend` ctor and use `CreateBackend` (#6785)
- ChainerX Op registration: pooling (#6800)
- Define `__str__` for `Device` classes (#6816, thanks @nishnik!)
- Simplify `numeric.h` (#6832)
- ChainerX Op registration: connection (#6833)
- ChainerX Op registration: array members (#6834)
- ChainerX Op registration: math (#6842)
- Mixed dtypes: `chainerx::Minimum` (#6858)
- Update `distributions.independent` (#6860, thanks @ganow!)
- Add `chainerx.ndarray.all` and `chainerx.ndarray.any` (#6926)
- Fix `HuberLoss.forward` to avoid loss of significance (#6940)
- Support Tensor Core in `chainerx::Dot` (#6960)
- Fix `F.get_item` backward for ChainerX (#6991)
- Support NumPy scalars in ChainerX arithmetics (#7004)
- Implement NumPy-like pairwise reduction for stability (#7043, thanks @grafi-tt!)
- Support mixed dtypes in `Stack` (#7058)
- ChainerX scalar/array divisions (#7075)
- Fix `Reshape` copy condition (#7080)
- Fix trigger constructors to raise errors instead of assertion failures (#7101)
- Support Tensor Core in `chainerx::Conv` (#7112)
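The pairwise reduction mentioned above (#7043) is a standard technique for keeping floating-point summation accurate: instead of accumulating left to right (error growing linearly with length), the array is split in half recursively so rounding error grows only logarithmically. A minimal sketch of the idea (illustrative names, not Chainer's internals):

```python
def pairwise_sum(xs, lo=0, hi=None):
    """Sum xs[lo:hi] by recursive halving (pairwise summation)."""
    if hi is None:
        hi = len(xs)
    n = hi - lo
    if n <= 8:
        # Small base case: plain left-to-right accumulation is fine here.
        total = 0.0
        for i in range(lo, hi):
            total += xs[i]
        return total
    # Split the range in half and combine the two partial sums;
    # this bounds rounding error at O(log n) instead of O(n).
    mid = lo + n // 2
    return pairwise_sum(xs, lo, mid) + pairwise_sum(xs, mid, hi)
```

NumPy's `numpy.sum` uses the same strategy, which is why the PR title calls it "NumPy-like".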
## Performance Improvements
- Optimize ChainerX-to-CuPy `ndarray` conversion (#6204)
- Use cuDNN in ReLU (#6993)
- Fast integer scale unpooling (#7114, thanks @tkerola!)
## Bug Fixes
- Avoid throwing in destructors (#6725)
- Fix TypeError during BN deserialization on Win64 (#6765, thanks @hyabe!)
- Fix `chainerx.astype` casting from `float16` to `bool` in CUDA (#6780, thanks @kshitij12345!)
- Fix `ArgMax` of CUDA when all values are negative (#6783)
- Fix unchain gradient pull (#6804, thanks @Rishav1!)
- Remove `chainerx.square` fallback since it is implemented in C++ (#6823)
- Fix stack overflow caused when `to_gpu`/`to_cpu`/`to_intel64` were overridden (#6824)
- Fix `filename` arg of `PlotReport` (#6866)
- Make `InvalidType` picklable (#6884, thanks @zaltoprofen!)
- Rename the macro name for `AMinOp` (#6922)
- Fix terminal column width retrieval in backprop traceback in Python 2 (#6949)
- Avoid using ImportError during `import cupy` (#6954)
- Fix cuDNN descriptor double destroy (#6972)
- Fix `ConcatWithAsyncTransfer` (#6992)
- Set `allow_pickle=True` (#7036)
- Fix subview of zero-sized arrays (#7037)
- Fix `At` output offset (#7046)
- Fix handling of ndarray offsets (#7047)
- Fix construction of `std::shared_ptr` with custom deleter in `chainer_interop.cc` (#7107)
- Fix build with clang (#7119)
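The `astype` fix (#6780) concerns casting `float16` arrays to `bool` on CUDA. The correct semantics match NumPy: zero maps to `False` and any nonzero value, positive or negative, maps to `True`. A minimal illustration of the expected behavior, using NumPy as a stand-in for `chainerx.ndarray.astype`:

```python
import numpy as np

# float16 values including zero, a fraction, a negative, and the fp16 max.
x = np.array([0.0, 0.5, -2.0, 65504.0], dtype=np.float16)

# Casting to bool: only the zero entry becomes False.
flags = x.astype(bool)
```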
## Code Fixes
- Check headers with clang-tidy (#6441)
- Refactor CUDA batch norm tensor descriptor (#6724)
- Fix comments and add TODO to indexing routines (#6789)
- Add `cuda_internal::DeviceInternals` to wrap handle etc. (#6820)
- Clean up `DeviceInternals` (#6827)
- Rename `CHAINERX_REGISTER_OP_{NATIVE,CUDA}` to `CHAINERX_{NATIVE,CUDA}_REGISTER_OP` (#6865)
- Add comments on `del` (#6933)
- Unify variable names in `gradient_check` (#6935)
- Align macro parameter name (#6941)
- Introduce `chainerx/kernels/` and rename existing device "op"s to "kernel"s (#6944)
- Remove obsolete "Op" files (#6959)
- Prefix macro with CHAINERX as per convention (#7022)
- Use macro in `exp_log.{cc/cu}` (#7068)
- Pass arguments by value in `native::Float16` and `cuda::Float16` (#7069)
- Avoid importing object (#7110)
## Documentation
- Fix to clarify the description about initializer argument (#6317)
- Add docs for two loss functions (#6349, thanks @hsezhiyan!)
- Improve docs of square, maximum and squared_difference (#6451, thanks @aksub99!)
- Append to v6 upgrade guide about Python 3.4 support drop (#6493)
- Add reference and warning to `F.swish` document (#6509, thanks @fiarabbit!)
- Document fix in default initializer (#6519)
- Convert utilities docs to one page (#6595, thanks @trancenoid!)
- Add `chainer.get_device` to doc (#6735)
- Use search index (#6881)
- Add `chainerx.sigmoid` docs (#6889, thanks @crcrpar!)
- Fix typo in `F.convolution_2d` (#6890, thanks @crcrpar!)
- Document `chainer.testing.LinkTestCase` (#6895, thanks @crcrpar!)
- Update README.txt for a link to the tutorial (#6896)
- Fix broken link in `chainerx.md` (#6899, thanks @tkat0!)
- Document passive attributes in `FunctionTestCase` (#6931)
- Fix documentation of renamed arguments (#6932)
- Fix typo in `pickle_dataset.py` (#6942)
- Update ChainerX contribution guide (#6951)
- Support Sphinx 2.0 and use absolute path to support the latest RTD (#7027)
- Fix link to ChainerMN docs in performance guide (#7044)
- Update supported MPI list (#7086)
- Document `CHAINERX_ENABLE_BLAS` environment variable (#7098, thanks @durswd!)
- Move backend docs to a separate page (#7099)
- Document backend and device objects (#7102)
- Remove extra spaces in docstrings (#7125)
- Fix `AdamW` docstring (#7137, thanks @crcrpar!)
- Fix spelling of `AMSGrad` (#7138, thanks @crcrpar!)
## Installation
- CMake for Windows (clang-cl) (#7039, thanks @durswd!)
- Exclude protobuf 3.8.0rc1 from dependencies (#7083)
## Examples
- Improve Chainer examples (#6399, thanks @crcrpar!)
- Fix reinforcement_learning example to work with default dtype (#6624)
- Support default dtype in VAE example (#6717)
- Support ChainerX in reinforcement learning example (#6733)
- Support ChainerX in WaveNet example (#6736)
- Trivial fixes to WaveNet example (#6737)
- Support ChainerX in VAE example (#6739)
- Support ChainerX in text classification example (#6769)
- Support ChainerX in DCGAN example (#6773)
- Support ChainerX in word2vec example (#6774)
- Show download progress bar in image-captioning example (#6775)
- Support ChainerX in memnn example (#6854)
- Use `filename` in `PlotReport` example (#6880, thanks @crcrpar!)
- Support ChainerX in CIFAR example (#6936)
- Support ChainerX in POS-tagging example (#7081)
- Support ChainerX in sentiment example (#7087)
- Add progress bar to sentiment analysis example (#7103)
- Support ChainerX in Model Zoo example (#7129)
## Tests
- Simplify `F.mean_absolute_error` test (#6253, thanks @aksub99!)
- Simplify `F.bilinear` test (#6488, thanks @ishanrai05!)
- Simplify `F.deconvolution_2d` test (#6498, thanks @ishanrai05!)
- Display `pytest` summary (#6625, thanks @kshitij12345!)
- Travis test against v6 branch (#6749)
- Fix Travis with macOS (#6754)
- Dodge nondifferentiable inputs in `chainerx.max` test (#6761)
- Make too slow initializers' tests faster (#6792)
- Fix test failures in math test (#6798)
- Simplify `F.flip` test (#6801, thanks @ishanrai05!)
- Simplify `F.where` test (#6802, thanks @ishanrai05!)
- Simplify `F.repeat` test (#6803, thanks @ishanrai05!)
- Fix `F.elu` test numeric error (#6841)
- Relax tolerance for float16 in `unary_math_function_unittest` (#6845)
- Relax tolerances and avoid non-differentiable points for FP16 in triplet loss tests (#6855)
- Simplify `F.unpooling_nd` test (#6861, thanks @ishanrai05!)
- Simplify `F.local_response_normalization` test (#6867, thanks @ishanrai05!)
- Simplify `F.reshape` test (#6868, thanks @ishanrai05!)
- Simplify `F.layer_normalization` test (#6871, thanks @ishanrai05!)
- Fix test failure in `test_spatial_transformer_sampler.py` (#6883)
- Simplify `F.prelu` test (#6887, thanks @ishanrai05!)
- Simplify `F.flatten` test (#6888, thanks @ishanrai05!)
- Simplify `F.dstack` test (#6891, thanks @ishanrai05!)
- Simplify `F.sign` test (#6898, thanks @hikjik!)
- Simplify `F.ceil` test (#6900, thanks @hikjik!)
- Simplify `F.floor` test (#6901, thanks @hikjik!)
- Fix `F.rrelu` test instability (#6915)
- Fix `F.max_pooling_nd` test instability (#6917)
- Fix flaky Huber loss test (#6924)
- Simplify `F.fmod` test (#6937, thanks @hikjik!)
- Simplify `F.fix` test (#6938, thanks @hikjik!)
- Fix test parameters in ChainerX math tests (#6946)
- Increase the default columns in Travis CI (#6948)
- Fold Travis test outputs (#6961)
- Simplify `F.min`, `F.max` test (#6962, thanks @hikjik!)
- Simplify `F.exp`, `F.log` test (#6963, thanks @hikjik!)
- Simplify `F.expm1` test (#6965, thanks @hikjik!)
- Fix flaky ChainerX `max_pool` test (#6975)
- Simplify `F.bias` test (#6976, thanks @hikjik!)
- Simplify `F.cumsum` test (#6977, thanks @hikjik!)
- Refactor `Variable.addgrad` test (#6979)
- Simplify `F.cosh`, `F.sinh` test (#6980, thanks @hikjik!)
- Simplify `F.log1p` test (#6981, thanks @hikjik!)
- Simplify `F.linear_interpolate` test (#6984, thanks @hikjik!)
- Simplify `F.fft`, `F.ifft` test (#6985, thanks @hikjik!)
- Simplify `F.matmul` test (#6987, thanks @ishanrai05!)
- Fix flaky `TestLogSumExp` (#6988)
- Fix flaky `TestMin` (#6989)
- Simplify `F.get_item` test (#6990)
- Simplify `F.inv`, `F.batch_inv` test (#6994, thanks @hikjik!)
- Simplify `F.batch_l2_norm_squared` test (#6996, thanks @hikjik!)
- Simplify `F.accuracy` test (#7006, thanks @hikjik!)
- Simplify `F.binary_accuracy` test (#7007, thanks @hikjik!)
- Simplify `F.r2_score` test (#7008, thanks @hikjik!)
- Simplify `F.permutate` test (#7010, thanks @hikjik!)
- Simplify `F.scatter_add` test (#7012, thanks @hikjik!)
- Simplify `F.separate` test (#7013, thanks @hikjik!)
- Simplify `F.logsumexp` test (#7018, thanks @hikjik!)
- Skip tests that fail with NumPy 1.16.3 (#7021)
- Add broadcast test in `test_math.py` (#7023)
- Fix flaky `chainerx.abs` test (#7024)
- Remove ChainerX acceptance tests (#7026)
- Fix flaky `chainerx.tan` test (#7033)
- Display `pytest` summary (cont.) (#7089)