v7.0.0b1
Pre-release
These are the release notes for v7.0.0b1. See here for the complete list of solved issues and merged PRs.
## Highlights
- `TabularDataset` is added. This is a new dataset interface that supports rich manipulation of data in tabular form (like `pandas.DataFrame`), e.g. loading only a specified subset of keys (columns), efficient slicing (with less transposition/concatenation), and batch-wise preprocessing. The API is still under development; we are adding more functionality and widening its support in existing features that involve datasets.
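The core idea of a tabular dataset can be sketched in plain Python. Note that the class and method names here are hypothetical, illustrative only, and not Chainer's actual API: data is stored per column, so fetching a subset of keys or a row slice never has to materialize and transpose whole rows.

```python
# Illustrative sketch (hypothetical names, NOT Chainer's actual API) of the
# columnar storage idea behind a tabular dataset.

class MiniTabular:
    def __init__(self, **columns):
        # column name -> list of values; all columns share the same length
        self.columns = columns

    @property
    def keys(self):
        return tuple(self.columns)

    def slice(self, rows=None, keys=None):
        # Fetch only the requested columns and rows; untouched columns
        # are never read, which is the point of the tabular layout.
        keys = keys or self.keys
        rows = rows if rows is not None else slice(None)
        return {k: self.columns[k][rows] for k in keys}

data = MiniTabular(x=[1, 2, 3, 4], y=[10, 20, 30, 40])
subset = data.slice(rows=slice(0, 2), keys=('y',))  # only column 'y' is touched
```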
## New Features
- Add interface to backprop from multiple variables (#5952)
- Option to show progress bar during evaluation (#6474, thanks @wkentaro!)
- Elementwise `Power` for ChainerX (#6496, thanks @dido1998!)
- Add `chainerx.hstack`, `chainerx.vstack` and `chainerx.atleast_2d` (#6886, thanks @kshitij12345!)
- Add `TabularDataset` (#7115)
- Add `TabularDataset.concat/join` (#7116)
- Add `chainerx.expm1` and `chainerx.exp2` (#7126, thanks @aksub99!)
- Add `chainerx.log2` (#7139)
- Add `TabularDataset.{transform/transform_batch}` (#7150)
- Add `chainerx.log1p` (#7161, thanks @sky58!)
- Expose `chainerx::AsContiguous` as a public C++ API (#7166)
- Emit a warning on `chainerx` import in debug mode (#7178)
- Add `chainer.as_array` for consistency with `chainer.as_variable` (#7252, thanks @tkerola!)
- Add `chainerx.moveaxis` (#7265, thanks @kshitij12345!)
- Add `chainerx.leaky_relu` (#7351, thanks @aksub99!)
- Add `chainerx.dstack` and `chainerx.atleast_3d` (#7353, thanks @kshitij12345!)
- Add Python operator `__abs__` to `chainerx.ndarray` (#7364)
- Allow turning off the static subgraph optimizations using a config (#7369)
- Add NumPy constants to ChainerX (#7384)
- Add `chainerx.erf` (#7404, thanks @aksub99!)
- Add `align_corners` option to `resize_images` (#7429)
- Add nearest mode to `resize_images` (#7443)
- Add `input_device` to `StandardUpdater` (#7472)
- Add `is_array_supported` method on `backend.Device` (#7487)
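ChainerX generally mirrors NumPy's naming and semantics for array routines, so the behavior of several of the newly added functions can be previewed with their NumPy counterparts (shown here with NumPy, as a sketch of the expected semantics rather than a ChainerX test):

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])

# Stacking: hstack joins 1-D inputs along axis 0, vstack adds a new
# leading axis, dstack stacks along a third axis.
assert np.hstack([a, b]).shape == (6,)
assert np.vstack([a, b]).shape == (2, 3)
assert np.dstack([a, b]).shape == (1, 3, 2)

# atleast_2d / atleast_3d promote the rank of 1-D arrays.
assert np.atleast_2d(a).shape == (1, 3)
assert np.atleast_3d(a).shape == (1, 3, 1)

# moveaxis relocates one axis while preserving the order of the others.
assert np.moveaxis(np.zeros((2, 3, 4)), 0, -1).shape == (3, 4, 2)

# expm1/log1p are the numerically stable forms of exp(x)-1 and log(1+x):
# for tiny x the naive expressions round to zero, the stable ones do not.
x = 1e-20
assert np.exp(x) - 1.0 == 0.0 and np.expm1(x) == x
assert np.log(1.0 + x) == 0.0 and np.log1p(x) == x
```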
## Enhancements
- Refactor `roi_max_align_2d` and `roi_average_align_2d` (#6405, thanks @knorth55!)
- Support tagged communication with `MPI_Status` (#6696, thanks @y1r!)
- Support ChainerX in `F.copy` (#6982)
- Avoid unnecessary updates in `F.batch_renormalization`, and related fixes (#7104)
- Support ChainerX in `Variable.addgrad` (#7132)
- Fix `cuda.DummyDevice` inheritance (#7147)
- Add `Device.name` property (#7149)
- `Link.serialize` to support ChainerX (#7175)
- Fix typo in `Variable.backward` (#7196)
- Call `require_grad()` on ChainerX `Variable.grad` setter (#7198)
- Clear outputs in `FunctionNode.unchain` and raise an error in ChainerX fallback mode (#7216)
- Support ChainerX in `Variable.copydata` (#7226)
- Support ChainerX in MNIST data parallel example (#7227)
- `MultiprocessParallelUpdater` to support new devices (#7245)
- Alias `StackVector<int64_t, kMaxNdim>` to `Dims` (#7258)
- Support bool dtypes in `chainerx::{Max,Min}imum` (#7261)
- Fix integral negative powers (#7262)
- Make `chx.backward` not raise an error even if backprop is not required (#7287)
- Support `None` arguments in `chainerx.clip` and `chainerx.ndarray.clip` (#7296)
- Support scalar in `chainerx::Where` (#7325)
- `F.clip` function with `None` parameter for `min`/`max` (#7333)
- Support cuDNN deterministic max pooling (#7390, thanks @anaruse!)
- Avoid transferring from one native device to another in `Array::ToNative()` (#7394)
- Add type hints to `Variable` (#7400)
- Improve the `get_device` error message when ChainerX is not available (#7401)
- `get_device` to raise more correct error types (#7421)
- Make `EXPECT_ARRAY_*` macros usable outside ChainerX (#7434)
- Add sequence support for ChainerX shape arguments (#7446)
- Check positive dilation in `F.convolution_2d` (#7448)
- Check positive dilation in `F.deconvolution_2d` (#7449)
- Explicitly check for ChainerX arrays in fallback functions (#7452)
- Support `F.copy` between non-ChainerX and ChainerX devices only if backprop is not required (#7473)
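The `None`-bound support added to the clip functions (#7296, #7333) follows the NumPy convention: omitting one bound clips on the other side only. With NumPy, whose semantics these routines mirror:

```python
import numpy as np

a = np.array([-5.0, 0.0, 5.0, 10.0])

# None for the lower bound: clip only from above.
assert np.clip(a, None, 6.0).tolist() == [-5.0, 0.0, 5.0, 6.0]

# None for the upper bound: clip only from below.
assert np.clip(a, 0.0, None).tolist() == [0.0, 0.0, 5.0, 10.0]
```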
## Performance Improvements
- In `FunctionNode` ChainerX fallback, reuse the `ChainerxDevice` taken from inputs to create outputs (#7397)
## Bug Fixes
- Fix type check of `F.where` (#6872)
- Fix a bug in `Bernoulli.log_prob` (#7064, thanks @seiyab!)
- Fix uncopyable `MultiNodeBatchNormalization` (#7106)
- Bugfix: `MultiNodeChainList` should not assume float32 (#7165)
- Fix initialization of `L.Linear` when called with `n_batch_axes` (#7167)
- Fix float16 and Tensor Core related issues in ChainerX (#7189, thanks @anaruse!)
- Fix recomputation of `L.BatchRenormalization` (#7256)
- Fix `F.absolute_error` for ChainerX (#7281, thanks @crcrpar!)
- Fix a bug where root is ignored in `scatter_dataset` and `bcast` (#7289)
- Fix condition to invoke cuDNN dropout (#7293, thanks @crcrpar!)
- Improve type check in `_values_to_dicts` so it also works with Unicode in Python 2 (#7316)
- Fix DtypeError in `chainerx.square` (#7321)
- Fix mypy errors (#7423)
- Make `WeightDecay` aware of loss scale (#7491)
- Fix `GradientMethod` ChainerX fallback for uninitialized parameters (#7492)
- Bugfix for pytest 2x2 (#7509)
- Fix AdamW update rule regression on CPU (#7512)
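The pitfall behind the `WeightDecay` loss-scale fix (#7491) can be shown with plain arithmetic. Under loss scaling, backprop produces gradients multiplied by a scale `s`, which the optimizer divides out at update time; a decay term added to the scaled gradient without the factor `s` is therefore effectively shrunk by `s`. The numbers below are illustrative only, not Chainer's implementation:

```python
scale = 128.0  # loss scale used during FP16 backprop (illustrative)
lr = 0.1       # learning rate
rate = 0.01    # weight decay rate
w = 2.0        # a parameter value
g = 0.5        # true gradient dL/dw

scaled_g = g * scale  # what backprop actually produces under loss scaling

# Buggy: decay added to the scaled gradient without the scale factor,
# so the effective decay is rate / scale instead of rate.
buggy_update = lr * (scaled_g + rate * w) / scale

# Correct: the decay term carries the same scale as the gradient
# (equivalently, it is added after unscaling).
correct_update = lr * (scaled_g + rate * w * scale) / scale

# The correct update matches plain SGD with weight decay and no scaling.
assert abs(correct_update - lr * (g + rate * w)) < 1e-12
assert buggy_update < correct_update
```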
## Code Fixes
- Split binary functions from `math.cc` (#7128)
- Avoid using `cuda.DummyDevice` and `cuda.get_device_from_array` (#7148)
- Fix pointless-comparison compiler warning in ChainerX (#7160)
- Remove backslashes to continue lines of link targets (#7170)
- Split trigonometric/hyperbolic routines from `math.cc` (#7171)
- Remove duplicated code in `logic.cc` (#7176)
- Consistent cases for "inplace" (#7181)
- Improve code in `testing.backend.BackendConfig` (#7212)
- Split ChainerX statistics routines from `math.cc` (#7222)
- Fix code style for long expressions (#7231)
- Check device instance using `xp` when possible (#7234)
- Move declarations of `AMax` and `AMin` to statistics routines (#7269)
- Split reduction routines from `math.cc` (#7270)
- Use `_` for private classes under `chainer.dataset.tabular` (#7275)
- Remove unused using-declaration (#7284)
- Split misc routines from `math.cc` (#7298)
- Fix wrong comment in ChainerX backward implementation (#7311)
- Split explog routines from `math.cc` (#7317)
- Fix style on imports (#7338)
- Split rounding routines (#7407)
- Split arithmetic ops from `routines/math.h` (#7415)
- Put comments in `FindCuDNN.cmake` (#7419)
- DRY optimizer test parameterizations (#7437)
- Split logic routines from math (#7444)
- Qualify some arguments of pool kernels `const&` (#7453)
- Include `cuda_fp16.h` instead of `cuda_fp16.hpp` (#7480)
- Use `py::arg` literal in ChainerX Python binding (#7490)
- Remove rounding kernels from math (#7497)
- Rename and move activation routines from `math.h` (#7501)
- Remove ChainerX `AsTypeKernel` (#7522, thanks @kshitij12345!)
- Split Python binding math routines (#7527)
- Use absolute namespace in macros (#7536)
## Documentation
- Improve contribution guide (#6140)
- Fix dead Sphinx links (#6450)
- Fix `F.normalize` documentation (#7062, thanks @crcrpar!)
- Document `F.copy` view behavior (#7135)
- Improve device documentation (#7162)
- Document `backend.get_device_from_array` (#7163)
- Remove `chainerx.md` (#7179)
- Add `optimizers.MSVAG` to documentation (#7183)
- Fix grammatical errors in documentation (#7186)
- Fix capitalization of `F.relu` in docs (#7188)
- Add missing doc entry for `CommunicatorBase.allgather` (#7192)
- Fix invalid escape sequences in ChainerX routine docstrings (#7214)
- Fix typos in `chainer.utils.type_check` (#7249, thanks @ktns!)
- Document `observe_value` and `observe_lr` trigger interval (#7266)
- Fix `robots.txt` to allow indexing root (#7306)
- Avoid installing ChainerX when building docs of other projects on Read the Docs (#7363, thanks @knorth55!)
- Improve `F.normalize` documentation (#7371, thanks @crcrpar!)
- Fix format of `static_graph.rst` (#7389)
- Change Deformable Convolution 2D docs to match arguments (#7402, thanks @higumachan!)
- Avoid setting `test_iter.epoch` manually in the tutorial of the training loop (#7405)
- Remove "Comparison with other frameworks" from docs (#7417)
- Fix documentation for `shape` in `generate_array` (#7450)
- Remove test coverage from ChainerX contribution guide (#7462)
- Correct `EmbedID` documentation (#7484)
- Fix typo in `tabular_dataset.py` (#7495, thanks @nai62!)
## Installation
- Fix ChainerX compilation with MSVC (#7108, thanks @durswd!)
- Allow `CUDNN_LIBNAME` to be specified by an environment variable (#7243)
- Use external `$MAKEFLAGS` instead if set in Travis CI script (#7331)
- In `FindCuDNN.cmake`, prioritize explicit variables over environment variables (#7441)
- Add ChainerX build option to use cuDNN from the CuPy installation (#7442)
- Pin `typing == 3.6.6` (#7562)
- Fix `typing` requirements (#7564)
## Examples
- Add CIFAR example to ChainerMN (#6839, thanks @ai-kase!)
- Support device specifiers in MNIST data parallel example (#6857)
- Support device specifiers in PTB example (#7055)
- Support device specifiers in pix2pix example (#7076)
- Support device specifiers in static graph example (#7153)
- Support device specifiers in ImageNet data parallel example (#7164)
- Support ChainerX in MNIST inference example (#7169)
- Support device specifier in image captioning example (#7204)
- Support device specifier in image captioning example (`predict.py`) (#7206)
- Remove `PlotReport.available()` check in glance example (#7209)
- Minor fix in DCGAN example README (#7210)
- Fix sentiment example test (#7215)
- Support device specifiers in MNIST model parallel example (#7225)
- Use Agg backend in examples with plot functionality (#7247)
- Support ChainerX in PTB gentxt example (#7314)
- Support ChainerX in MNIST model parallel example (#7330)
- Warn NaN in FP16 mode in dcgan example (#7344)
- Warn NaN in FP16 mode in memnn example (#7345)
- Warn NaN in FP16 mode in pix2pix example (#7346)
- Warn NaN in FP16 mode in pos example (#7354)
- Warn NaN in FP16 mode in reinforcement learning examples (#7355)
- Warn NaN in FP16 mode in sentiment example (#7356)
- Warn NaN in FP16 mode in static_graph_optimizations/cifar example (#7357)
- Warn NaN in FP16 mode in static_graph_optimizations/mnist example (#7358)
- Warn NaN in FP16 mode in vae example (#7362)
- Warn NaN in FP16 mode in word2vec example (#7366)
- Fix typo in wavenet example requirements (#7367)
- Warn NaN in FP16 mode in wavenet example (#7372)
- Support ChainerX in static subgraph optimization examples (#7431)
- Implement `reset` method in the PTB example (#7533)
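Many of the example changes above add a warning when NaN appears in FP16 mode. The reason NaN shows up so easily in FP16 can be seen with NumPy: float16 overflows past roughly 65504, and arithmetic on the resulting infinities yields NaN.

```python
import numpy as np

a = np.float16(60000)  # near the float16 maximum (~65504)

with np.errstate(over='ignore', invalid='ignore'):
    b = a + a       # 120000 overflows float16 and becomes inf
    diff = b - b    # inf - inf is NaN

assert np.isinf(b)
assert np.isnan(diff)
```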
## Tests
- Add FP16 test to multi_node_chain_list (#6575)
- [chainerx] Fix skipped_backward tests to return as PASS (#6815, thanks @kshitij12345!)
- Add configuration of new CI system (#6843)
- Simplify `F.tensordot` test (#6968, thanks @ishanrai05!)
- Simplify `F.cumprod` test (#6978, thanks @hikjik!)
- Simplify `F.average` test (#6995, thanks @hikjik!)
- Move `test_cuda.py` to `backends_tests` (#7144)
- Fix missing CUDA in `chainerx.swapaxes` test (#7184, thanks @kshitij12345!)
- Split `Variable.grad` and `Variable.grad_var` tests (#7191)
- Refactor `Variable.zerograd` test (#7199)
- Add Tensor Core test for `chainerx.conv` and `chainerx.conv_transpose` (#7203)
- Move `TestTanh` from `test_math.py` to `test_trigonometric_hyperbolic.py` (#7207)
- Refactor `Variable.copydata` test (#7224)
- Add a test to reproduce the bcast deadlock problem (#7257)
- Add float16 comparison test (#7260)
- Use `CUDA_VISIBLE_DEVICES` in ChainerX tests (#7290)
- Add `chainer.as_array` test (#7318)
- Rewrite `StandardUpdater` tests with pytest-style assertions (#7326)
- Change `0` to `0.0` for Python 2 (#7373)
- Add missing parameter `dstack` to `invalid_shape` test (#7457, thanks @kshitij12345!)
- Use `pytest.mark.xfail` instead of `unittest.expectedFailure` (#7488)
## Others
- Remove "Research projects using Chainer" from README (#7416)