# v7.0.0rc1

Pre-release

This is the release note of v7.0.0rc1. See here for the complete list of solved issues and merged PRs.
## Announcements

This time, we will keep the current branches for active development (`master` for v7.x, `v6` for v6.x) after the RC. We will maintain the v6.x series until the Python 2 EOL, so we are not cutting a new development branch for now, to avoid increasing the number of branches to maintain. New features will be included directly in v7 for a while, and maintenance changes will be backported to v6.
## Highlights

### ONNX-Chainer Integration

ONNX-Chainer, which used to be a separate project, has been integrated into the Chainer repository and made more accessible to existing Chainer users (#8229). You can easily export a Chainer model in the ONNX format like this:

```python
import onnx_chainer

onnx_chainer.export(chainer_model, pseudo_input, filename='model.onnx')
```

For a more detailed description of how to get started, please refer to the ONNX-Chainer section in the official documentation.

### ChainerMN

ChainerMN now works with ChainerX. In this release, the MNIST example has also been updated to demonstrate the usage. (#7844)
## New Features

- Add `UpsamplingDeconvFilter` and `DownsamplingConvFilter` initializers (#5290, thanks @knorth55!)
- Add `chainerx.meshgrid` (#6668, thanks @kshitij12345!)
- Add `chainerx.hsplit` (#7030, thanks @ishanrai05!)
- Add `linalg.cholesky` to ChainerX (#7329, thanks @IvanYashchuk!)
- Add `linalg.eigh` and `linalg.eigvalsh` to ChainerX (#7503, thanks @IvanYashchuk!)
- ChainerX + ChainerMN integration on MNIST (#7844)
- New configuration system of communicator inspired by links (#7885)
- More efficient multi-node snapshot (#8003)
- A new multi-node evaluator for `force_equal_length=False` (#8071)
- Allow weight initializers to have their own `RandomState` instance (#8081, thanks @mr4msm!)
- Add `chainerx.hinge` (#8168)
- Integrate ONNX-Chainer into the Chainer repository (#8229)
- Implement `chainerx::SoftmaxCrossEntropy` and `chainerx.softmax_cross_entropy` (#8250)
- Add `chainermn.testing.to_device` function (#8279)
- Add `chainerx.copyto` (#8314, thanks @kshitij12345!)
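ChainerX routines generally follow the NumPy API. As a rough illustration of what the newly added `chainerx.meshgrid`, `chainerx.hsplit`, and `chainerx.copyto` compute, here is the equivalent NumPy usage (a sketch only; the ChainerX signatures may differ in detail):

```python
import numpy as np

# meshgrid: build coordinate matrices from coordinate vectors
x = np.array([0, 1, 2])
y = np.array([10, 20])
xx, yy = np.meshgrid(x, y)     # both have shape (2, 3)

# hsplit: split an array into equal chunks along the second axis
a = np.arange(12).reshape(3, 4)
left, right = np.hsplit(a, 2)  # each has shape (3, 2)

# copyto: copy the values of one array into another, casting as needed
dst = np.zeros((3, 4))
np.copyto(dst, a)
```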
## Enhancements

- Rename `TabularDataset.as_tuple`/`as_dict` to `TabularDataset.astuple`/`asdict` (#7788)
- Deprecate `DeviceResident.to_gpu`/`to_cpu`/`to_intel64` (#8058)
- Support zero-sized matrices in `generate_matrix` (#8167)
- Add `mode` argument to `chainerx.take` (#8197)
- Delete move and copy of virtual `*GradState` classes (#8224)
- Fix directional gradient stability in `gradient_check` (#8236)
- Fix some typos (#8243, thanks @garanews!)
- Fix CuPy installation detection error message (#8264)
- Fix intel64 support of `F.batch_normalization` (#8266)
- Fix dim clearing on output (#8270)
- Remove `device` argument from `chainerx.diag` and `chainerx.diagflat` (#8275)
- Fix algorithm to avoid small directions in `gradient_check` (#8290)
- Show import error with guide message on ONNX (#8293)
- Partially support `output_grad` on `fake_as_funcnode` (#8298)
- Compute `F.negative_sampling` in fp32 for fp16 inputs (#8300)
- Make some arguments keyword-only. Note that some of these changes may break code based on v7 beta versions, but none of them breaks compatibility with v6.
  - Make `mode` and `align_corners` arguments in `F.resize_images` keyword-only (#8009)
  - Make `weights` and `keepdims` arguments in `Variable.mean` keyword-only (#8010)
  - Make arguments of `WeightStandardization` keyword-only (#8011)
  - Make `call_before_training` argument of `Trainer.extend` keyword-only (#8064); the argument was introduced in v7.0.0b3, so this is not counted as a compatibility break of v7.
  - Make arguments in `ObservationAggregator` and `MultiNodeEarlyStoppingTrigger` keyword-only (#8065)
  - Make `force_equal_length` argument in `scatter_dataset` and `scatter_index` keyword-only (#8066)
  - Make `size` argument of `tabular.from_data` keyword-only (#8067)
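The keyword-only changes above mean that passing the affected arguments positionally now raises a `TypeError`. Here is a minimal sketch of the pattern in plain Python, using a hypothetical `resize` function rather than Chainer's actual signature:

```python
# Hypothetical function illustrating the keyword-only pattern:
# parameters after the bare `*` can only be passed by keyword.
def resize(image, size, *, mode='bilinear', align_corners=False):
    return image, size, mode, align_corners

resize('img', (32, 32), mode='nearest')   # OK: keyword call
try:
    resize('img', (32, 32), 'nearest')    # positional call is rejected
except TypeError as e:
    print('rejected:', e)
```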
## Performance Improvements

- Make the contiguous case for `chainerx::Take` faster (#8295)
## Bug Fixes

- Fix subgraph construction for ChainerX backward (#8049)
- Fix a bug in `F.batch_normalization` with mixed dtype (#8149)
- Fix `__str__` of parameterized class (#8169)
- Fix bugs when `x` and `gamma`/`beta` have different dtypes in `F.batch_normalization` (#8175)
- Change `copy` to `__deepcopy__` in ChainerMN `batch_normalization` and replace `to_gpu` (#8185)
- Fix possible data race in CUDA memory keeper (#8213)
- Add virtual destructor to CUDA `Allocator` (#8215)
- Inherit input ndarray device in `chainerx.ascontiguousarray` (#8262)
- Do not expose `global_kernel_registry` (#8265)
- Fix SCE with ChainerX and normalize (#8301)
- Fix inability to use `gpu_id=0` in ChainerMN testing `get_device` (#8304)
## Code Fixes

- Update variable names for a consistent naming convention (#8074)
- Fix style of `setup.cfg` (#8180)
- Remove unused forward declaration of `AveragePoolPadMode` enum (#8214)
- Write Read the Docs related comments in `setup.py` (#8218)
- Remove unused classes `{Max,Average}PoolForwardBackward` (#8223)
- Conform to `readability-avoid-const-params-in-decls` (#8225)
- Simplify direction vector sampling in `gradient_check` (#8238)
- Use type hints for method declarations (#8248)
- Remove obsolete comment in `F.softmax_cross_entropy` (#8253)
- Fix import order and grouping (#8257)
- Simplify `CreateSubgraph` (#8310)
## Documentation

- Change citation to the new KDD paper (#7994)
- Fix a typo in the Cauchy distribution page (#8208, thanks @nzw0301!)
- Fix `resize_images` documentation to reflect recent code changes (#8221, thanks @zu3st!)
- Set up documentation for loss functions in ChainerX (#8231)
- Add documentation for `chainerx.ravel` (#8233)
- Add documentation for `chainerx.sigmoid_cross_entropy` (#8249)
- Put a link to the CuPy installation guide in README instead of a command instruction (#8287)
## Installation

- Add ability to build with the Ninja generator (#8194, thanks @cloudhan!)
- Suppress warnings-as-errors from external libraries (#8227)
- Write CMake generator when building (#8239)
- Add `libchainerx_base.a` to link ChainerX statically (#8247)
## Examples

- Fix WaveNet example not working (#8157, thanks @dhgrs!)
- Fix `generate.py` in `examples/wavenet` (#8172, thanks @dhgrs!)
## Tests

- Simplify `F.scale` test (#6969, thanks @ishanrai05!)
- Improve example tests (#7475)
- Add fp16 test to `test_n_step_rnn` (#7483)
- Fix protobuf dependency (#7529)
- Fix `TestAccuracy`: randomly reduce testing parameters (#7820)
- Support ChainerMN testing in pfnci (#7821)
- Fix flaky tests of `chx.linalg.solve` (#7997)
- Fix overflow warning in div backward test (#8109)
- Fix flaky `TestQR` (#8114)
- Disable flaky test retry in flexCI (#8143)
- Pairwise testing (#8164)
- Allow `pytest.skip()` in combination with `testing.repeat`/`retry` (#8174)
- Remove `DummySerializer` and `DummyDeserializer` from `iterators_tests` (#8176)
- Fix comparison with casting in hdf5 serializer test (#8182)
- Relax `BatchNormalization` backward test tolerances (#8189)
- Fix caffe test with `protobuf>=3.8` (#8190)
- Add `CHAINER_TEST_PAIRWISE_PARAMETERIZATION` and enable it only in Travis CI (#8211)
- Fix `attrs` package version (#8219)
- Fix `HDF5Serializer` test for h5py<2.9 (#8220)
- Fix flaky `TestBatchNormalization` (#8230)
- Relax tolerances in ChainerX unary math tests (#8234)
- Add `"jenkins"` extras (#8241)
- Use `clang-format-6.0` if possible and track the version of `clang-format` (#8242)
- Remove legacy `DeprecationWarning` filter from `test_multi_node_chain_list` (#8246)
- Fix `chainerx_tests/unit_tests/routines_tests/test_linalg.py::Inverse` (#8255)
- Fix flaky `TestHuberLoss` (#8271)
- Stop setting too small tolerances in backprop tests (#8283)
- Make `ImportWarning` just a warning in tests (#8291)
- Fix `gtest` linkage (#8292, thanks @cloudhan!)
- Fix slow `test_average` in FlexCI (#8303)
- Add ChainerX to `test_mnist` in `chainermn_tests` (#8305)
- Implement `communicator_test` for ChainerX+ChainerMN (#8313)