v7.0.0b1
Pre-release
These are the release notes for v7.0.0b1. See here for the complete list of resolved issues and merged PRs.
Highlights
- `TabularDataset` has been added. This is a new dataset interface that supports rich manipulation in a tabular form (like `pandas.DataFrame`), e.g. loading only a specified subset of keys (columns), efficient slicing (with less transposition/concatenation), and batch-wise preprocessing. The API is still under development; we are adding more functionality and widening its support in existing features that involve datasets.
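The column-oriented access pattern described above can be sketched in plain Python. This is a conceptual illustration only: the class and method names below (`ColumnarData`, `slice_keys`, `slice_rows`, `transform_batch`) are hypothetical stand-ins, not the actual `TabularDataset` API.

```python
# Conceptual sketch of column-oriented dataset access (hypothetical names,
# not Chainer's API): data is stored per column, so reading a subset of
# keys or slicing rows never touches the other columns.

class ColumnarData:
    def __init__(self, **columns):
        self.columns = columns

    def slice_keys(self, *keys):
        # Load only the requested columns.
        return ColumnarData(**{k: self.columns[k] for k in keys})

    def slice_rows(self, s):
        # Efficient slicing: one slice per column, no row-wise transposition.
        return ColumnarData(**{k: v[s] for k, v in self.columns.items()})

    def transform_batch(self, fn):
        # Batch-wise preprocessing applied to whole columns at once.
        return ColumnarData(**fn(self.columns))


data = ColumnarData(x=[1, 2, 3, 4], y=[10, 20, 30, 40])
sub = data.slice_keys('x').slice_rows(slice(0, 2))
print(sub.columns)  # {'x': [1, 2]}
```

The point of the design is that `slice_keys('x')` never materializes column `y`, and a row slice is a single per-column operation rather than a per-example loop.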
New Features
- Add interface to backprop from multiple variables (#5952)
- Option to show progress bar during evaluation (#6474, thanks @wkentaro!)
- Elementwise `Power` for ChainerX (#6496, thanks @dido1998!)
- Add `chainerx.hstack`, `chainerx.vstack` and `chainerx.atleast_2d` (#6886, thanks @kshitij12345!)
- Add `TabularDataset` (#7115)
- Add `TabularDataset.concat`/`join` (#7116)
- Add `chainerx.expm1` and `chainerx.exp2` (#7126, thanks @aksub99!)
- Add `chainerx.log2` (#7139)
- Add `TabularDataset.{transform/transform_batch}` (#7150)
- Add `chainerx.log1p` (#7161, thanks @sky58!)
- Expose `chainerx::AsContiguous` as a public C++ API (#7166)
- Emit warning on `chainerx` import in debug mode (#7178)
- Add `chainer.as_array` for consistency with `chainer.as_variable` (#7252, thanks @tkerola!)
- Add `chainerx.moveaxis` (#7265, thanks @kshitij12345!)
- Add `chainerx.leaky_relu` (#7351, thanks @aksub99!)
- Add `chainerx.dstack` and `chainerx.atleast_3d` (#7353, thanks @kshitij12345!)
- Add Python operator `__abs__` to `chainerx.ndarray` (#7364)
- Allow turning off the static subgraph optimizations using a config (#7369)
- Add NumPy constants to ChainerX (#7384)
- Add `chainerx.erf` (#7404, thanks @aksub99!)
- Add `align_corners` option to `resize_images` (#7429)
- Add nearest mode to `resize_images` (#7443)
- Add `input_device` to `StandardUpdater` (#7472)
- Add `is_array_supported` method on `backend.Device` (#7487)
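Several of the new elementwise routines (`chainerx.expm1`, `chainerx.log1p`, `chainerx.log2`, `chainerx.exp2`, `chainerx.erf`) follow the standard libm/NumPy definitions. As a rough illustration of those definitions only — shown with Python's stdlib `math` module, not ChainerX itself:

```python
import math

# Why expm1/log1p exist as separate routines: for tiny x, the naive
# exp(x) - 1 and log(1 + x) lose all precision, while expm1/log1p
# remain accurate.
x = 1e-17
naive = math.exp(x) - 1.0      # rounds to exactly 0.0 in double precision
accurate = math.expm1(x)       # stays ~1e-17

print(naive, accurate)
print(math.log1p(x))           # log(1 + x), accurate near zero
print(math.log2(8.0))          # 3.0
print(2.0 ** 10)               # exp2(10) = 1024.0
print(math.erf(0.0))           # 0.0; erf is the Gauss error function
```

The ChainerX versions operate elementwise on `chainerx.ndarray` rather than on scalars, but compute the same functions.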
Enhancements
- Refactor `roi_max_align_2d` and `roi_average_align_2d` (#6405, thanks @knorth55!)
- Support tagged communication with `MPI_Status` (#6696, thanks @y1r!)
- Support ChainerX in `F.copy` (#6982)
- Avoid unnecessary updates in `F.batch_renormalization`, and related fixes (#7104)
- Support ChainerX in `Variable.addgrad` (#7132)
- Fix `cuda.DummyDevice` inheritance (#7147)
- Add `Device.name` property (#7149)
- `Link.serialize` to support ChainerX (#7175)
- Fix typo in `Variable.backward` (#7196)
- Call `require_grad()` in the ChainerX `Variable.grad` setter (#7198)
- Clear outputs in `FunctionNode.unchain` and raise an error in ChainerX fallback mode (#7216)
- Support ChainerX in `Variable.copydata` (#7226)
- Support ChainerX in MNIST data parallel example (#7227)
- `MultiprocessParallelUpdater` to support new devices (#7245)
- Alias `StackVector<int64_t, kMaxNdim>` to `Dims` (#7258)
- Support bool dtypes in `chainerx::{Max,Min}imum` (#7261)
- Fix integral negative powers (#7262)
- Make `chx.backward` not raise an error even if backprop is not required (#7287)
- Support `None` arguments in `chainerx.clip` and `chainerx.ndarray.clip` (#7296)
- Support scalar in `chainerx::Where` (#7325)
- `F.clip` function with `None` parameter for `min`/`max` (#7333)
- Support cuDNN deterministic max pooling (#7390, thanks @anaruse!)
- Avoid transferring from one native device to another in `Array::ToNative()` (#7394)
- Add type hints to `Variable` (#7400)
- Improve `get_device` error message when ChainerX is not available (#7401)
- `get_device` to raise more correct error types (#7421)
- Make `EXPECT_ARRAY_*` macros usable outside ChainerX (#7434)
- Add sequence support for ChainerX shape arguments (#7446)
- Check positive dilation in `F.convolution_2d` (#7448)
- Check positive dilation in `F.deconvolution_2d` (#7449)
- Explicitly check for ChainerX arrays in fallback functions (#7452)
- Support `F.copy` between non-ChainerX and ChainerX devices only if backprop is not required (#7473)
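Among the enhancements above, `chainerx.clip` and `F.clip` now accept `None` for `min`/`max`, meaning "unbounded on that side". A minimal sketch of that semantic using a hypothetical scalar helper (not the Chainer implementation, which operates elementwise on arrays):

```python
# Hypothetical scalar clip illustrating the None-bound semantics:
# None for lo or hi means no bound on that side.

def clip(x, lo=None, hi=None):
    if lo is not None and x < lo:
        return lo
    if hi is not None and x > hi:
        return hi
    return x

print(clip(7, None, 5))   # 5: only an upper bound is enforced
print(clip(-3, 0, None))  # 0: only a lower bound is enforced
print(clip(2, 0, 5))      # 2: already within both bounds
```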
Performance Improvements
- In the `FunctionNode` ChainerX fallback, reuse the `ChainerxDevice` taken from inputs to create outputs (#7397)
Bug Fixes
- Fix type check of `F.where` (#6872)
- Fix a bug in `Bernoulli.log_prob` (#7064, thanks @seiyab!)
- Fix uncopyable `MultiNodeBatchNormalization` (#7106)
- Bugfix: `MultiNodeChainList` should not assume float32 (#7165)
- Fix initialization of `L.Linear` when called with `n_batch_axes` (#7167)
- Fix float16 and Tensor Core related issues in ChainerX (#7189, thanks @anaruse!)
- Fix recomputation of `L.BatchRenormalization` (#7256)
- Fix `F.absolute_error` for ChainerX (#7281, thanks @crcrpar!)
- Fix a bug where the root is ignored in `scatter_dataset` and `bcast` (#7289)
- Fix condition to invoke cuDNN dropout (#7293, thanks @crcrpar!)
- Improve type check in `_values_to_dicts` so it also works with Python 2 unicode (#7316)
- Fix DtypeError in `chainerx.square` (#7321)
- Fix mypy errors (#7423)
- Make `WeightDecay` aware of loss scale (#7491)
- Fix `GradientMethod` ChainerX fallback for uninitialized parameters (#7492)
- Bugfix for pytest 2x2 (#7509)
- Fix AdamW update rule regression on CPU (#7512)
Code Fixes
- Split binary functions from `math.cc` (#7128)
- Avoid using `cuda.DummyDevice` and `cuda.get_device_from_array` (#7148)
- Fix pointless-comparison compiler warning in ChainerX (#7160)
- Remove backslashes used to continue lines of link targets (#7170)
- Split trigonometric/hyperbolic routines from `math.cc` (#7171)
- Remove duplicated code in `logic.cc` (#7176)
- Consistent cases for "Inplace" (#7181)
- Improve code in `testing.backend.BackendConfig` (#7212)
- Split ChainerX statistics routines from `math.cc` (#7222)
- Fix code style for long expressions (#7231)
- Check device instance using `xp` when possible (#7234)
- Move declarations of `AMax` and `AMin` to statistics routines (#7269)
- Split reduction routines from `math.cc` (#7270)
- Use `_` for private classes under `chainer.dataset.tabular` (#7275)
- Remove unused using-declaration (#7284)
- Split misc routines from `math.cc` (#7298)
- Fix wrong comment in ChainerX backward implementation (#7311)
- Split explog routines from `math.cc` (#7317)
- Fix style on imports (#7338)
- Split rounding routines (#7407)
- Split arithmetic ops from `routines/math.h` (#7415)
- Put comments in `FindCuDNN.cmake` (#7419)
- DRY optimizer test parameterizations (#7437)
- Split logic routines from math (#7444)
- Qualify some arguments of pool kernels as `const&` (#7453)
- Include `cuda_fp16.h` instead of `cuda_fp16.hpp` (#7480)
- Use `py::arg` literals in ChainerX Python bindings (#7490)
- Remove rounding kernels from math (#7497)
- Rename and move activation routines from `math.h` (#7501)
- Remove ChainerX `AsTypeKernel` (#7522, thanks @kshitij12345!)
- Split Python-binding math routines (#7527)
- Use absolute namespaces in macros (#7536)
Documentation
- Improve contribution guide (#6140)
- Fix dead sphinx links (#6450)
- Fix `F.normalize` documentation (#7062, thanks @crcrpar!)
- Document `F.copy` view behavior (#7135)
- Improve device documentation (#7162)
- Document `backend.get_device_from_array` (#7163)
- Remove `chainerx.md` (#7179)
- Add `optimizers.MSVAG` to documentation (#7183)
- Fix grammatical errors in documentation (#7186)
- Fix capitalization of `F.relu` in docs (#7188)
- Add missing doc entry for `CommunicatorBase.allgather` (#7192)
- Fix invalid escape sequences in ChainerX routine docstrings (#7214)
- Fix typos in `chainer.utils.type_check` (#7249, thanks @ktns!)
- Document the `observe_value` and `observe_lr` trigger interval (#7266)
- Fix `robots.txt` to allow indexing of the root (#7306)
- Avoid installing ChainerX when building docs of other projects on ReadTheDocs (#7363, thanks @knorth55!)
- Improve `F.normalize` documentation (#7371, thanks @crcrpar!)
- Fix format of `static_graph.rst` (#7389)
- Change Deformable Convolution 2D docs to match arguments (#7402, thanks @higumachan!)
- Avoid setting `test_iter.epoch` manually in the training-loop tutorial (#7405)
- Remove "Comparison with other frameworks" from docs (#7417)
- Fix documentation for `shape` in `generate_array` (#7450)
- Remove test coverage from ChainerX contribution guide (#7462)
- Correct Embed ID documentation (#7484)
- Fix typo in `tabular_dataset.py` (#7495, thanks @nai62!)
Installation
- Fix ChainerX compilation with MSVC (#7108, thanks @durswd!)
- Allow `CUDNN_LIBNAME` to be specified by environment variable (#7243)
- Use external `$MAKEFLAGS` instead if set in Travis CI script (#7331)
- In `FindCuDNN.cmake`, prioritize explicit variables over environment variables (#7441)
- Add ChainerX build option to use cuDNN from the CuPy installation (#7442)
- Pin `typing == 3.6.6` (#7562)
- Fix `typing` requirements (#7564)
Examples
- Add CIFAR example to ChainerMN (#6839, thanks @ai-kase!)
- Support device specifiers in MNIST data parallel example (#6857)
- Support device specifiers in PTB example (#7055)
- Support device specifiers in pix2pix example (#7076)
- Support device specifiers in static graph example (#7153)
- Support device specifiers in ImageNet data parallel example (#7164)
- Support ChainerX in MNIST inference example (#7169)
- Support device specifier in image captioning example (#7204)
- Support device specifier in image captioning example (`predict.py`) (#7206)
- Remove `PlotReport.available()` check in glance example (#7209)
- Minor fix in DCGAN example README (#7210)
- Fix sentiment example test (#7215)
- Support device specifiers in MNIST model parallel example (#7225)
- Use Agg backend in examples with plot functionality (#7247)
- Support ChainerX in PTB gentxt example (#7314)
- Support ChainerX in MNIST model parallel example (#7330)
- Warn NaN in FP16 mode in dcgan example (#7344)
- Warn NaN in FP16 mode in memnn example (#7345)
- Warn NaN in FP16 mode in pix2pix example (#7346)
- Warn NaN in FP16 mode in pos example (#7354)
- Warn NaN in FP16 mode in reinforcement learning examples (#7355)
- Warn NaN in FP16 mode in sentiment example (#7356)
- Warn NaN in FP16 mode in static_graph_optimizations/cifar example (#7357)
- Warn NaN in FP16 mode in static_graph_optimizations/mnist example (#7358)
- Warn NaN in FP16 mode in vae example (#7362)
- Warn NaN in FP16 mode in word2vec example (#7366)
- Fix typo in wavenet example requirements (#7367)
- Warn NaN in FP16 mode in wavenet example (#7372)
- Support ChainerX in static subgraph optimization examples (#7431)
- Implement the `reset` method in the PTB example (#7533)
Tests
- Add FP16 test to multi_node_chain_list (#6575)
- [chainerx] Fix skipped_backward tests to return as PASS (#6815, thanks @kshitij12345!)
- Add configuration of new CI system (#6843)
- Simplify `F.tensordot` test (#6968, thanks @ishanrai05!)
- Simplify `F.cumprod` test (#6978, thanks @hikjik!)
- Simplify `F.average` test (#6995, thanks @hikjik!)
- Move `test_cuda.py` to `backends_tests` (#7144)
- Fix missing cuda in `chainerx.swapaxes` test (#7184, thanks @kshitij12345!)
- Split `Variable.grad` and `Variable.grad_var` tests (#7191)
- Refactor `Variable.zerograd` test (#7199)
- Add Tensor Core test for `chainerx.conv` and `chainerx.conv_transpose` (#7203)
- Move `TestTanh` from `test_math.py` to `test_trigonometric_hyperbolic.py` (#7207)
- Refactor `Variable.copydata` test (#7224)
- Add a test to reproduce the bcast deadlock problem (#7257)
- Add float16 comparison test (#7260)
- Use `CUDA_VISIBLE_DEVICES` in ChainerX tests (#7290)
- Add `chainer.as_array` test (#7318)
- Rewrite `StandardUpdater` tests with pytest-style assertions (#7326)
- Change `0` to `0.0` for Python 2 (#7373)
- Add missing parameter `dstack` to `invalid_shape` test (#7457, thanks @kshitij12345!)
- Use `pytest.mark.xfail` instead of `unittest.expectedFailure` (#7488)
Others
- Remove "Research projects using Chainer" from README (#7416)