Releases: chainer/chainer
v6.6.0
This is the release note of v6.6.0. See here for the complete list of solved issues and merged PRs.
Bug Fixes
- Fix SCE with ChainerX and normalize (#8311)
- Fix kernel of double backward of `max_pooling_2d` (#8329)
- Fix ChainerX fallback condition in batch normalization (#8368)
- Fix `optimizer_hooks.GradientHardClipping` for scalar array (#8372)
- Fix pickling of optimizers (#8417)
- Register uninitialized persistents (#8446)
Enhancements
- Compute `F.negative_sampling` in fp32 for fp16 inputs (#8309)
- Fix `optimizer_hooks.GradientHardClipping` for ChainerX (#8377, thanks @kshitij12345!)
Documentation
- Fix documentation of NStepGRUBase (#8337, thanks @msakai!)
- Fix n-step RNN docs (#8402)
- Fix typo in `examples/seq2seq/README.md` (#8404, thanks @tanaken0515!)
- Change citation to new KDD paper (#8418)
- Link to examples directory for the current branch (#8423)
- Update expected messages of `type_check` errors (#8456)
- Update requirements (#8502)
Tests
- Fix Decorrelated Batch Normalization tests (#8340)
- Add missing FlexCI configurations (#8352)
- Use `LinkTestCase` for `L.GroupNormalization` (#8355)
- Show pytest summary in FlexCI (#8369)
- Set `CHAINER_CI` in Travis CI (#8373)
- Set `CHAINER_CI` in ChainerX tests in Jenkins (#8375)
- Set `CHAINER_CI` in Chainer tests in FlexCI (#8381)
- Print installed packages in pytest (#8386)
- Print actual array values in `FunctionTest` modified input error (#8388)
- Avoid non-differentiable point in `TestTriplet` (#8396)
- Use different docker image for each base development branch (#8401)
- Disable ChainerMN FlexCI tests on v6 (#8411)
- Use `fix_random` in xfail backward tests (#8457)
- Avoid ChainerX slow tests in Jenkins (#8474)
- Use CuPy v6 in ChainerX test in Jenkins (#8477)
- Skip some `Convolution2D` tests for older NumPy versions (#8478)
- Fix Travis OpenSSL error in OSX (#8480)
- Fix flaky test of `_modified_xlogx` (#8486)
- Add error message for invalid base branch in pfnCI (#8500)
- Adjust timeout and build memory usage in FlexCI (#8503)
v7.0.0rc1
This is the release note of v7.0.0rc1. See here for the complete list of solved issues and merged PRs.
Announcements
This time, we will keep the current branches for active development (`master` for v7.x, `v6` for v6.x) after the RC. We will maintain the v6.x series until the Python 2 EOL, so we will not cut a new development version for now, to avoid increasing the number of branches to maintain. New features will be included directly in v7 for a while, and maintenance changes will be backported to v6.
Highlights
ONNX-Chainer Integration
ONNX-Chainer, which used to be a separate project, has now been integrated into the Chainer repository and made more accessible to existing Chainer users (#8229). You can easily export a Chainer model to the ONNX format like this:

```python
import onnx_chainer
onnx_chainer.export(chainer_model, pseudo_input, filename='model.onnx')
```
For a more detailed description on how to get started, please refer to the ONNX-Chainer section in the official documentation.
ChainerMN
ChainerMN now works with ChainerX. In this release, the MNIST example has also been updated to demonstrate the usage. (#7844)
New Features
- Add `UpsamplingDeconvFilter` and `DownsamplingConvFilter` initializers (#5290, thanks @knorth55!)
- Add `chainerx.meshgrid` (#6668, thanks @kshitij12345!)
- Add `chainerx.hsplit` (#7030, thanks @ishanrai05!)
- Add `linalg.cholesky` to ChainerX (#7329, thanks @IvanYashchuk!)
- Add `linalg.eigh` and `linalg.eigvalsh` to ChainerX (#7503, thanks @IvanYashchuk!)
- ChainerX + ChainerMN integration on MNIST (#7844)
- New configuration system of communicator inspired by links (#7885)
- More efficient multi-node snapshot (#8003)
- A new multi-node evaluator for `force_equal_length=False` (#8071)
- Allow weight initializers to have their own `RandomState` instance (#8081, thanks @mr4msm!)
- Add `chainerx.hinge` (#8168)
- Integrate ONNX-Chainer into the Chainer repository (#8229)
- Implement `chainerx::SoftmaxCrossEntropy` and `chainerx.softmax_cross_entropy` (#8250)
- Add `chainermn.testing.to_device` function (#8279)
- Add `chainerx.copyto` (#8314, thanks @kshitij12345!)
Enhancements
- Rename `TabularDataset.as_tuple`/`as_dict` to `TabularDataset.astuple`/`asdict` (#7788)
- Deprecate `DeviceResident.to_gpu`/`to_cpu`/`to_intel64` (#8058)
- Support zero-sized matrices in `generate_matrix` (#8167)
- Add `mode` argument to `chainerx.take` (#8197)
- Delete move and copy of virtual `*GradState` classes (#8224)
- Fix directional gradient stability in `gradient_check` (#8236)
- Fix some typos (#8243, thanks @garanews!)
- Fix CuPy installation detection error message (#8264)
- Fix intel64 support of `F.batch_normalization` (#8266)
- Fix dim clearing on output (#8270)
- Remove `device` argument from `chainerx.diag` and `chainerx.diagflat` (#8275)
- Fix algorithm to avoid small directions in `gradient_check` (#8290)
- Show import error with guide message on ONNX (#8293)
- Partially support `output_grad` on `fake_as_funcnode` (#8298)
- Compute `F.negative_sampling` in fp32 for fp16 inputs (#8300)
- Make some arguments keyword-only. Note that some of these changes may break code based on v7 beta versions, but none of them breaks compatibility with v6.
  - Make `mode` and `align_corners` arguments in `F.resize_images` keyword-only (#8009)
  - Make `weights` and `keepdims` arguments in `Variable.mean` keyword-only (#8010)
  - Make arguments of `WeightStandardization` keyword-only (#8011)
  - Make `call_before_training` argument of `Trainer.extend` keyword-only (#8064). The argument was introduced in v7.0.0b3, so it is not counted as a compatibility break of v7.
  - Make arguments in `ObservationAggregator` and `MultiNodeEarlyStoppingTrigger` keyword-only (#8065)
  - Make `force_equal_length` argument in `scatter_dataset` and `scatter_index` keyword-only (#8066)
  - Make `size` argument of `tabular.from_data` keyword-only (#8067)
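For callers, the keyword-only migration means that arguments after a bare `*` in a signature must be passed by name. A minimal sketch of the pattern (the signature below is illustrative only, not Chainer's actual one):

```python
# Illustrative signature mirroring the keyword-only pattern adopted in v7:
# parameters after the bare `*` can no longer be passed positionally.
def resize_images(x, output_shape, *, mode='bilinear', align_corners=True):
    """Stand-in for a v7-style API; returns its arguments for inspection."""
    return {'x': x, 'shape': output_shape, 'mode': mode,
            'align_corners': align_corners}

# Keyword form is accepted; passing `mode` positionally raises TypeError.
result = resize_images('img', (8, 8), mode='nearest')
```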
Performance Improvements
- Make contiguous case for `chainerx::Take` faster (#8295)
Bug Fixes
- Fix subgraph construction for ChainerX backward (#8049)
- Fix a bug in `F.batch_normalization` with mixed dtypes (#8149)
- Fix `__str__` of parameterized classes (#8169)
- Fix bugs when `x` and `gamma`/`beta` have different dtypes in `F.batch_normalization` (#8175)
- Change `copy` to `__deepcopy__` in ChainerMN `batch_normalization` and replace `to_gpu` (#8185)
- Fix possible data race in CUDA memory keeper (#8213)
- Add virtual destructor to CUDA `Allocator` (#8215)
- Inherit input ndarray device in `chainerx.ascontiguousarray` (#8262)
- Do not expose `global_kernel_registry` (#8265)
- Fix SCE with ChainerX and normalize (#8301)
- Fix inability to use `gpu_id=0` in ChainerMN testing `get_device` (#8304)
Code Fixes
- Update variable names for consistent naming convention (#8074)
- Fix style of `setup.cfg` (#8180)
- Remove unused forward declaration of `AveragePoolPadMode` enum (#8214)
- Write Read the Docs related comments in `setup.py` (#8218)
- Remove unused classes `{Max,Average}PoolForwardBackward` (#8223)
- Conform to `readability-avoid-const-params-in-decls` (#8225)
- Simplify direction vector sampling in `gradient_check` (#8238)
- Use type hints for method declarations (#8248)
- Remove obsolete comment in `F.softmax_cross_entropy` (#8253)
- Fix import order and grouping (#8257)
- Simplify `CreateSubgraph` (#8310)
Documentation
- Change citation to new KDD paper (#7994)
- Fix a typo in the Cauchy distribution page (#8208, thanks @nzw0301!)
- Fix `resize_images` documentation to reflect recent code changes (#8221, thanks @zu3st!)
- Set up documentation for loss functions in ChainerX (#8231)
- Add documentation for `chainerx.ravel` (#8233)
- Add documentation for `chainerx.sigmoid_cross_entropy` (#8249)
- Put a link to the CuPy installation guide in README instead of a command instruction (#8287)
Installation
- Add ability to build with ninja generator. (#8194, thanks @cloudhan!)
- Suppress warnings-as-errors from external libraries (#8227)
- Write CMake generator when building (#8239)
- Add `libchainerx_base.a` to link ChainerX statically (#8247)
Examples
- Fix WaveNet example not working (#8157, thanks @dhgrs!)
- Fix `generate.py` in `examples/wavenet` (#8172, thanks @dhgrs!)
Tests
- Simplify `F.scale` test (#6969, thanks @ishanrai05!)
- Improve example tests (#7475)
- Add fp16 test to `test_n_step_rnn` (#7483)
- Fix protobuf dependency (#7529)
- Fix `TestAccuracy`: randomly reduce testing parameters (#7820)
- Support ChainerMN testing in pfnci (#7821)
- Fix flaky tests of `chx.linalg.solve` (#7997)
- Fix overflow warning in div backward test (#8109)
- Fix flaky `TestQR` (#8114)
- Disable flaky test retry in FlexCI (#8143)
- Pairwise testing (#8164)
- Allow `pytest.skip()` in combination with `testing.repeat`/`retry` (#8174)
- Remove `DummySerializer` and `DummyDeserializer` from `iterators_tests` (#8176)
- Fix comparison with casting in hdf5 serializer test (#8182)
- Relax `BatchNormalization` backward test tolerances (#8189)
- Fix caffe test with `protobuf>=3.8` (#8190)
- Add `CHAINER_TEST_PAIRWISE_PARAMETERIZATION` and enable it only in Travis CI (#8211)
- Fix `attrs` package version (#8219)
- Fix `HDF5Serializer` test for h5py<2.9 (#8220)
- Fix flaky `TestBatchNormalization` (#8230)
- Relax tolerances in ChainerX unary math tests (#8234)
- Add `"jenkins"` extras (#8241)
- Use `clang-format-6.0` if possible and track the version of `clang-format` (#8242)
- Remove legacy `DeprecationWarning` filter from `test_multi_node_chain_list` (#8246)
- Fix `chainerx_tests/unit_tests/routines_tests/test_linalg.py::Inverse` (#8255)
- Fix flaky `TestHuberLoss` (#8271)
- Stop setting too small tolerances in backprop tests (#8283)
- Make `ImportWarning` just a warning in tests (#8291)
- Fix `gtest` linkage (#8292, thanks @cloudhan!)
- `test_average` is slow in FlexCI (#8303)
- Add ChainerX to `test_mnist` in `chainermn_tests` (#8305)
- Implement `communicator_test` for ChainerX+ChainerMN (#8313)
Others
v6.5.0
This is the release note of v6.5.0. See here for the complete list of solved issues and merged PRs.
Enhancements
- Display ChainerX availability in `print_runtime_info` (#7860)
- Fix CuPy installation detection error message (#8278)
Bug Fixes
- Fix `__str__` of parameterized classes (#8184)
Code Fixes
- Update variable names for consistent naming convention (#8307)
Documentation
- Add document print runtime info (#8165)
- Fix RNN documentation (#8203)
- Fix a typo in the Cauchy distribution page (#8209, thanks @nzw0301!)
Tests
- Increase CPU memory for test instance in PFN CI (#7955)
- Fix overflow warning in div backward test (#8188)
- Disable flaky test retry in FlexCI (#8191)
- Relax `BatchNormalization` backward test tolerances (#8196)
- Fix comparison with casting in hdf5 serializer test (#8198)
- Fix tests of `L.BatchRenormalization` and adjust tolerances (#8200)
- Adjust `TestConvolution2DFunction::test_double_backward` fp16 tolerance (#8201)
- Fix `attrs` version (#8222)
- Fix caffe test with protobuf>=3.8 (#8232)
- Relax tolerances in ChainerX unary math tests (#8235)
- Add Jenkins extras (#8252)
- Fix `HDF5Serializer` test for h5py<2.9 (#8256)
Others
- Replace Slack invitation links (#8284)
v7.0.0b4
This is the release note of v7.0.0b4. See here for the complete list of solved issues and merged PRs.
Highlights
Many updates to ChainerX including new routines and support for loss scaling.
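Loss scaling itself follows a simple pattern, which the plain-NumPy sketch below illustrates (this is the general idea, not the ChainerX API): the loss is multiplied by a scale factor before backprop so fp16 gradients do not underflow, and gradients are divided by the same factor afterwards.

```python
import numpy as np

# Plain-NumPy illustration of loss scaling (not the ChainerX API).
# Gradients of (scale * loss) are scale * (dloss/dw), so dividing by
# `scale` afterwards recovers the true gradient, while the scaled value
# stays representable in float16.
scale = 1024.0
true_grad = 1e-8                          # underflows to zero in fp16
scaled = np.float16(true_grad * scale)    # representable after scaling
recovered = float(scaled) / scale         # close to true_grad again
```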
New Features
- Support all float dtypes in `F.n_step_rnn` and `F.n_step_birnn` (#5808)
- Add `chainerx.vsplit` to ChainerX (#7032, thanks @ishanrai05!)
- Add `chainerx.linalg.qr` to ChainerX (#7379, thanks @IvanYashchuk!)
- Add `chainerx.accuracy` (#7526, thanks @aksub99!)
- Add `chainerx.{remainder/mod}` (#7675, thanks @sky58!)
- Add Tree-LSTM to ChainerX (#7720, thanks @dido1998!)
- Add S-LSTM to ChainerX (#7783, thanks @dido1998!)
- Loss scale support for ChainerX (#7979)
- Add `F.zeta` (#8059, thanks @UmashankarTriforce!)
- Add `testing.generate_matrix` to get matrices of given singular values (#8077)
- Add `chainerx.fmod` (#8110)
- Add `chainerx.nonzero` (#8124)
Enhancements
- Abbreviate output of `chainerx::ArrayRepr` for large inputs (#7708)
- Make parameterized test names deterministic (#7945)
- Raise `FutureWarning` on GPU-to-GPU transfer in `StandardUpdater` (#7952)
- Always get `typeid` of kernels in `libchainerx` (#7970)
- Fix support of 0-sized arrays for linalg routines in ChainerX (#7980, thanks @IvanYashchuk!)
- Support CuPy/ChainerX arrays when initializing `variable.Parameter` objects (#8022)
- Add CUDA `ScanKernel` (#8103)
Performance Improvements
Bug Fixes
- Fix deadlock on `MultiprocessIterator` and `MultiprocessParallelUpdater` (#7511)
- Support `mixed16`/`float16` `GroupNormalization` (#7965)
- Change return policy for `chx::Device` object on `ndarray` pickling (#7988)
- Fix deepcopy for chain parameters (#7996)
- Fix floating point exception in ChainerX inferred reshape (#8018)
- Fix `chainerx::Dot` edge cases with empty arrays (#8020)
- Fix LSTM for omitted upstream gradients (#8037)
- Fix native `AddAt` implementation for float16 arrays (#8055)
- Correctly cast `fill_value` in constant initializer (#8089)
Code Fixes
- Simplify `ArrayReprImpl` (#7699)
- Remove unnecessary file (#8000)
- Refactor `F.batch_normalization` and ChainerMN backend implementations (#8039)
- Fix `-Wabsolute-value` for clang (#8045)
- Generalize and simplify `NativeCumsumKernel` (#8053)
- Fix coding style of some imports in ChainerMN (#8060)
- Fix `-Wbraced-scalar-init` for clang (#8076)
- Use standard constructor (#8088, thanks @cloudhan!)
- Remove unused headers in `arithmetic.{h,cc}` (#8128)
Documentation
- Fix doc of `backend.copyto` (#7832)
- Document `chainerx.to_numpy` (#7984)
- Fix RNN docs for ChainerX (#7985, thanks @dido1998!)
- Remove obsolete note about `chainerx.take` indices dtype (#7998)
- Add undocumented arguments to snapshot extension signature (#8004)
- Fix grammatical errors in documentation (#8029)
- Fix heading anchor in ChainerX docs (#8091)
- Improve documentation of `CHAINERX_ENABLE_{BLAS,LAPACK}` (#8099)
- Add document print runtime info (#8125)
- Fix RNN documentation (#8144)
- Add documentation for `chainerx.minimum` (#8146)
- Remove obsolete note in `chainerx.maximum` doc (#8147)
- Fix typo (#8160)
Installation
- Fix NumPy version in Dockerfile (#8027)
- Add `cblas.h` and modify `CMakeLists.txt` (#8052, thanks @okdshin!)
- Fix error caused by the environment variable `CHAINERX_ENABLE_LAPACK=0` (#8086, thanks @cloudhan!)
- Update abseil to new release (#8120)
Examples
- Use some latest features for the WaveNet example (#6285)
- Separate training script into main part and data submodule to avoid an error related to NVIDIA DALI. (#8127, thanks @lazykyama!)
Tests
- Treat warnings as errors in tests (#6653)
- Filter `DeprecationWarning` in `test_manipulation.py` (#7824)
- Avoid unnecessary test condition in `F.max_pooling_2d` test (#7924)
- Add test for optimizers test coverage (#7927)
- Fix flaky `negative_sampling` (#7975)
- Avoid testing full combinations in `F.lstm` test parameterization (#7987)
- Relax tolerances in `gradient_check` test (#7989)
- Drop Python 2 Travis CI configuration (#8013)
- Drop Python 2 AppVeyor configuration (#8014)
- Drop Python 2 PFN CI configuration (#8017)
- Suppress number of combinations of in_out_dtype (#8023)
- Avoid non-differentiable point in min/max tests (#8044)
- Adjust `TrueDiv` tolerances (#8047)
- Add scripts for Docker base images for Chainer CI (#8075)
- Fix tests of `L.BatchRenormalization` and adjust tolerances (#8080)
- Add timestamp to Travis CI log (#8085)
- Explicit `h5py.File` mode (#8090)
- Fix flaky tests with `np.empty` (#8096)
- Revive clang-tidy test in Travis CI (#8098)
- Fix matrix generation in linear algebra `PseudoInverse` test (#8102)
- Remove duplicated parameter in `test_normal.py` (#8111)
- Register pytest markers (#8112, #8132)
- Fix macOS Travis error caused by Homebrew (#8115)
- Add `ignore::ImportWarning` to `setup.cfg` (#8131)
- Relax tolerance of im2col test (#8133)
- Allow `fix_random` decorator to be used with `OpTest` (#8136)
- Fix missing dtype checks in ChainerX loss test (#8141)
- Fix flaky `NStepRNN` and `NStepBiRNN` (#8142)
- Avoid `empty` in `F.cast` test that can cause overflow warning (#8152)
- Make xdist usable in ChainerX tests (#8155)
- Adjust `TestConvolution2DFunction::test_double_backward` fp16 tolerance (#8163)
Others
- Convert tabs to spaces in `setup.cfg` (#8154)
v6.4.0
This is the release note of v6.4.0. See here for the complete list of solved issues and merged PRs.
Enhancements
- Insert missing spaces between concatenated string literals (#7935)
- Make parameterized test names deterministic (#8134)
Bug Fixes
- Fix decorrelated batch normalization when groups ≠ 1 (#7825)
- Support mixed16/float16 `GroupNormalization` (#8113)
- Fix deadlock on `MultiprocessIterator` and `MultiprocessParallelUpdater` (#8126)
- Fix `deepcopy` for chain parameters (#8150)
Code Fixes
- Remove unused argument from decorrelated batch norm (#8097)
Documentation
- Add undocumented arguments to snapshot extension signature (#8016)
- Add a note about incompatibility with NumPy 1.17 + Python2 (#8028)
- Fix grammatical errors in documentation (#8036)
- Fix doc of `backend.copyto` (#8056)
- Fix typo (#8161)
Installation
- Fix NumPy version in Dockerfile (#8068)
Tests
- Refactor `DecorrelatedBatchNormalizationTest` and add stable input (#7940)
- Relax float16 tolerances in `F.batch_inv` test (#7981)
- Relax tolerances in old cuDNN convolution tests (#7982)
- Fix numerical gradient precision in `F.squared_error` test (#8012)
- Fix flaky `negative_sampling` (#8019)
- Relax tolerances in `gradient_check` test (#8021)
- Explicit `h5py.File` mode (#8107)
- Fix eps in `Contrastive.backward` (#8108)
- Remove duplicated parameter in `test_normal.py` (#8117)
- Fix macOS Travis error caused by Homebrew (#8118)
- Add timestamp to Travis CI log (#8119)
- Relax tolerance of `im2col` test (#8135)
v7.0.0b3
This is the release note of v7.0.0b3. See here for the complete list of solved issues and merged PRs.
Dropping Support of Python 2
Due to the end-of-life (EOL) of Python 2 in January 2020, Python 2 support has been dropped in this release. Chainer v6.x continues to support Python 2. See the blog post for details.
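Code that must keep running on both release lines can gate on the interpreter version; the sketch below is illustrative only (the helper name and message are assumptions, not Chainer's actual code):

```python
import sys

# Illustrative version gate; the helper name and message are assumptions
# and are not taken from Chainer's source.
def check_python_version(version_info=sys.version_info):
    if version_info[0] < 3:
        raise RuntimeError(
            'Chainer v7 no longer supports Python 2; '
            'use the v6.x series or upgrade to Python 3.')
    return True
```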
Note on F.max_pooling_2d refactoring
The implementation of `F.max_pooling_2d` has been merged into `F.max_pooling_nd`. The behavior is unchanged, so ordinary users should not be affected by this change. However, the `FunctionNode` class recorded in the computational graph corresponding to `F.max_pooling_2d` has changed from `MaxPooling2D` to `MaxPoolingND`. Code that explicitly depends on this class will need a fix.
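Code that dispatches on the recorded node class can tolerate the rename by accepting both names during the migration. The sketch below uses stand-in classes (no Chainer import); only the two class names come from the note above:

```python
# Stand-ins for the FunctionNode classes named in the note above;
# the real classes live inside Chainer's pooling function modules.
class MaxPooling2D:      # recorded before v7.0.0b3
    pass

class MaxPoolingND:      # recorded from v7.0.0b3 on
    pass

def is_max_pooling(node):
    # Accept both names so the check survives the refactoring.
    return type(node).__name__ in {'MaxPooling2D', 'MaxPoolingND'}
```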
New Features
- Add an option to invoke extensions before training (#3511, thanks @wkentaro!)
- Add automatic management of snapshots (deletion and load) (#6856)
- Add `chainerx.repeat` (#7223, thanks @durswd!)
- Support mixed indices in `TabularDataset.slice` (#7251)
- Add `chainer.dataset.tabular.DelegateDataset` (#7276)
- Add `ObservationAggregator` extension to ChainerMN (#7302)
- Add strict mode to `scatter_dataset` as well as `scatter_index` (#7327)
- Add `chainer.dataset.tabular.from_data` (#7361)
- Add `linalg.svd` and `linalg.pinv` to ChainerX (#7411, thanks @IvanYashchuk!)
- Add `TabularDataset.convert`/`with_converter` (#7428)
- Add `linalg.solve` and `linalg.inv` to ChainerX (#7474, thanks @IvanYashchuk!)
- Add base `Converter` class (#7489)
- Add `chainerx.sigmoid_cross_entropy` (#7524, thanks @aksub99!)
- Add `chainerx.cumsum` (#7558, thanks @aksub99!)
- Add `chainerx.nansum` (#7719, thanks @aksub99!)
- Add `chainerx.nanargmax` and `chainerx.nanargmin` (#7755, thanks @aksub99!)
- LSTM, GRU and RNN implementation for ChainerX (#7764, thanks @dido1998!)
- Add `tri*` routines to ChainerX (#7791, thanks @IvanYashchuk!)
- Add finalize method to ChainerMN `CommunicatorBase` class (#7814)
- Add `numerical_grad_dtype` to `FunctionTestCase` and `LinkTestCase` (#7817)
- Support callables in `tabular.from_data` (#7847)
- Add `chainerx.count_nonzero` (#7852, thanks @aksub99!)
- Implement hooks for memory pool in ChainerX (#7898)
- Add `chainerx.flatten` (#7901, thanks @aksub99!)
- Add `chainerx.ravel` (#7904, thanks @aksub99!)
Enhancements
- Use numbers for input check in `roi_{average|max}_{pooling|align}_2d.py` (#5636, thanks @knorth55!)
- Warn `Link.to_gpu` unless compatible with `to_device` (#5762)
- Change `F.dropout` to use cuDNN by default (#7185, thanks @crcrpar!)
- Fix Adam FP16 overflow on GPU kernels (#7694)
- Improve ChainerX import check (#7738)
- Make `F.average` as accurate as backend (#7758)
- Improve NCCL availability error in `PureNcclCommunicator` (#7793)
- Fix `type_check` error message on evaluating bool expressions (#7795)
- Fix module in msg of `type_check` (#7803)
- Use scalar arrays in `chx.leaky_relu`/`elu` (#7816)
- Allow `None` inputs to gradient check and generating `None` gradients in `FunctionTestCase` (#7831)
- Display ChainerX availability in `print_runtime_info` (#7833)
- Add support for inputs with different dtypes for `linalg.solve` in ChainerX (#7840, thanks @IvanYashchuk!)
- Fix `F.clip` for NumPy 1.17 (#7843)
- Include `rtol * abs(b)` in `allclose` output (#7848)
- Fix SLSTM for omitted upstream gradients (#7891)
- Fix LSTM for omitted upstream gradients (#7896)
- Insert missing spaces between concatenated string literals (#7930)
- Fix a typo in a kernel name (#7962)
Bug Fixes
- Fix `TypeError` in `max_pooling_2d` (#6835, thanks @ishanrai05!)
- Fix multi-device loss scaling (#7594)
- Avoid unload module call in `PureNcclCommunicator` (#7600)
- Fix decorrelated batch normalization when groups ≠ 1 (#7707)
- Fix `create_mnbn_model()` bug (#7718)
- Fix `optimizer_hooks.GradientHardClipping` for scalar array (#7760)
- Fix "zero division" in resize image (#7769, thanks @meokz!)
- Fix ChainerX non-native deserialization (#7830)
- Fix `backends.copyto` from chainerx to non-chainerx (#7835)
- Fix backward of `split_axis` for intel64 when `grad_outputs` contains `None` (#7836)
- Support CUDA async in batched copy (#7877)
- Add scatter interface to `CommunicatorBase` (#7888)
- Add `DeprecationWarning` to initializer of `BuildingBlock` (#7909)
- Fix in-place update of arrays in `Link.serialize` and `optimizers.Adam` (#7918)
- Fix precision in `F.max_pooling_2d` (#7922)
Code Fixes
- Avoid using `_fallback_workarounds` in `SpectralNormalization` (#7539)
- Create `links.rnn` and `functions.rnn` (#7725)
- Add `batched_copy` to all `Communicator`s (#7761)
- Remove unused lambda capture of `axis` (#7799)
- Remove unused argument from decorrelated batch norm (#7828)
- Fix copies for `linalg.svd` Python bindings layer in ChainerX (#7866, thanks @IvanYashchuk!)
- Replace `n_layer` with `n_layers` for consistency (#7871)
- Rename a variable in CUDA SVD kernel (#7921, thanks @IvanYashchuk!)
- Refactor `pooling_nd` functions (#7938)
- Merge implementation of `F.max_pooling_2d` into `F.max_pooling_nd` (#7939)
- Fix typo in comment: unique -> deterministic (#7775)
Documentation
- Fix `static_graph` docs code examples (#7875)
- Add 1.17 to supported NumPy versions (#7883)
- Add `scatter` to doc (#7897)
- Update stable version in README (#7948)
Installation
- Relax typing version requirement in Python 3 (#7811)
- Remove mypy from requirements (#7812)
- Add OpenMP option for cuSOLVER (#7839)
- Fix Windows build of ChainerX (#7967, thanks @cloudhan!)
Examples
- Improve VAE example (#7250)
- Show prompt in text classification example (#7858, thanks @UmashankarTriforce!)
Tests
- Add test to ensure no mutable default arguments (#4413)
- Simplify `F.max_pooling_2d` test (#6836, thanks @ishanrai05!)
- Simplify `F.lstm` test (#7808, thanks @dido1998!)
- Simplify `F.slstm` test (#7805, thanks @dido1998!)
- Simplify `F.n_step_rnn` test (#7804, thanks @dido1998!)
- Simplify `F.n_step_lstm` test (#7807, thanks @dido1998!)
- Simplify `F.n_step_gru` test (#7806, thanks @dido1998!)
- Simplify `F.embed_id` test (#7903, thanks @dido1998!)
- Add ChainerCV's tests to pfnCI (#7060)
- Add mixed16 tests to multi-node chain list (#7630)
- Add mixed16 tests to collective functions (#7633)
- Add mixed16 tests to `point_to_point` communications (#7637)
- Add mixed16 tests to `pseudo_connect` (#7638)
- Skip flaky `TestConv*TensorCore` (#7710)
- Fix test of `chx.reshape` (#7762)
- Revert tentative workaround related to OpenSSL (#7790)
- Switch current directory in Jenkins tests (#7834)
- Fix flaky `TestHuberLoss` (#7837)
- Configure tolerances of `F.average_pooling_2d` test (#7841)
- Fix `F.clipped_relu` test for NumPy 1.17 (#7842)
- Add `test_accuracy.py` to the list of slow test files (#7851)
- Fix flaky `BatchNorm` test of ChainerX (#7857)
- Refactor convolution functions tests (#7863)
- Relax tolerances in convolution function tests when using old cuDNN (#7864)
- Fix `test_TrilTriu` (#7865)
- Fix `chainerx.logsumexp` test tolerance (#7867)
- Relax tolerances in convolution link tests when using old cuDNN (#7868)
- Relax float16 tolerances in ChainerX binary math tests (#7874)
- `F.tree_lstm` test for ChainerX (#7881, thanks @dido1998!)
- Avoid `ndarray.data` access and fix wrong test (#7890)
- Sample stable inputs in tests of group normalization (#7894)
- Avoid unstable inputs in tests of decorrelated batch normalization (#7900)
- Relax fp16 tolerance in `TrueDiv` test (#7917)
- Avoid testing `F.cast` from negative floating-point to unsigned (#7920)
- Fix tolerance in `L.CRF1d` test (#7926)
- Refactor `DecorrelatedBatchNormalizationTest` and add stable input (#7932)
- Relax tolerances in old cuDNN convolution tests (#7942)
- Fix flaky `chainerx.power` test (#7950)
- Increase CPU memory for test instance in PFN CI (#7951)
- Relax fp16 tolerances in `TestContrastive` (#7953)
- Relax float16 tolerances in `F.batch_inv` test (#7971)
Others
- Drop support for Python 2.7 (#7826)
v6.3.0
This is the release note of v6.3.0. See here for the complete list of solved issues and merged PRs.
Highlights
- NumPy 1.17 is now officially supported.
New Features
- Add automatic management of snapshots (deletion and load) (#7862)
Enhancements
- Fix Adam FP16 overflow on GPU kernels (#7780)
- Make `F.average` as accurate as backend (#7782)
- Fix `type_check` error message on evaluating bool expressions (#7801)
- Fix module in msg of `type_check` (#7810)
- Fix `F.clip` for NumPy 1.17 (#7855)
Bug Fixes
- Fix `Parameter.dtype` for uninitialized parameters (#7749)
- Fix `UpdateRule.use_fp32_update` for uninitialized parameters (#7751)
- Avoid unload module call in `PureNcclCommunicator` (#7787)
- Fix `TypeError` in `max_pooling_2d` (#7789, thanks @ishanrai05!)
- Fix `create_mnbn_model()` bug (#7846)
- Fix backward of `split_axis` for intel64 when `grad_outputs` contains `None` (#7931)
- Fix precision in `F.max_pooling_2d` (#7933)
- Fix `backends.copyto` from/to chainerx (#7934)
- Fix in-place update of arrays in `Link.serialize` and `optimizers.Adam` (#7941)
- Fix ChainerX non-native deserialization (#7954)
- Fix multi-device loss scaling (#7968)
Documentation
Tests
- Fix test of `chx.reshape` (#7792)
- Revert #6754 (Fix Travis with macOS) (#7800)
- Fix a typo in `test_communicator` (#7822)
- Fix `F.clipped_relu` test for NumPy 1.17 (#7854)
- Switch current directory in Jenkins tests (#7856)
- Fix flaky `TestHuberLoss` (#7869)
- Configure tolerances of `F.average_pooling_2d` test (#7870)
- Refactor convolution functions tests (#7873)
- Relax tolerances in convolution link tests when using old cuDNN (#7878)
- Fix `chainerx.logsumexp` test tolerance (#7889)
- Relax tolerances in convolution function tests when using old cuDNN (#7895)
- Sample stable inputs in tests of group normalization (#7899)
- Relax float16 tolerances in ChainerX binary math tests (#7908)
- Avoid `ndarray.data` access and fix wrong test (#7913)
- Avoid unstable inputs in tests of decorrelated batch normalization (#7915)
- Avoid testing `F.cast` from negative floating-point to unsigned (#7944)
- Relax fp16 tolerances in `TestContrastive` (#7959)
- Relax fp16 tolerance in `TrueDiv` test (#7972)
- Fix tolerance in `L.CRF1d` test (#7977)
v7.0.0b2
This is the release note of v7.0.0b2. See here for the complete list of solved issues and merged PRs.
Highlights
ChainerX has several new backproppable ops such as ELU and softplus activation functions, and loss functions including absolute error, squared error, Huber loss and Gaussian KL divergence. ChainerX is also supported in all `OptimizerHook`s when used through Chainer. `TabularDataset` has also been improved with new features.
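As an illustration of one of the newly supported loss functions, the Huber loss can be written in a few lines of plain NumPy; this sketch only shows the standard definition of the math, not the ChainerX routine itself:

```python
import numpy as np

# Huber loss: quadratic for |x - t| < delta, linear beyond. This is the
# textbook definition, shown in NumPy rather than through ChainerX.
def huber_loss(x, t, delta=1.0):
    diff = np.abs(x - t)
    return np.where(diff < delta,
                    0.5 * diff ** 2,                # quadratic near zero
                    delta * (diff - 0.5 * delta))   # linear in the tails
```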
Changes without compatibility
- `Variable.grad` getter now raises an error when it is called before calling `cleargrad`, `zerograd`, or setting the gradient directly. (#7146)
- Moving average statistics of `BatchRenormalization` (usage of epsilon) is fixed. This affects the inference behavior. (#7202)
- Deprecated communicators in ChainerMN have been removed. These include `HierarchicalCommunicator`, `SingleNodeCommunicator` and `TwoDimensionalCommunicator`; they are no longer necessary as NCCL now supports inter-node communication. (#7697)
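The `Variable.grad` change can be pictured with a minimal stand-in class; this is not Chainer's implementation, it only mimics the documented behavior:

```python
# Minimal stand-in mimicking the documented v7 behavior: reading .grad
# before it has ever been cleared or assigned raises an error instead of
# silently returning None.
class SketchVariable:
    def __init__(self):
        self._grad = None
        self._grad_initialized = False

    def cleargrad(self):
        self._grad = None
        self._grad_initialized = True

    @property
    def grad(self):
        if not self._grad_initialized:
            raise RuntimeError('grad is accessed before being initialized')
        return self._grad

    @grad.setter
    def grad(self, g):
        self._grad = g
        self._grad_initialized = True
```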
New Features
- Add `WeightStandardization` link hook (#6678, thanks @hitsgub!)
- Add `chainerx.dsplit` (#7031, thanks @ishanrai05!)
- Add basic loss functions (#7063, thanks @kshitij12345!)
- Add basic activation functions (#7118, thanks @aksub99!)
- Add `chainerx.left_shift` and `chainerx.right_shift` (#7339, thanks @sky58!)
- Add `chainerx.elu` (#7439, thanks @aksub99!)
- Add unary mode to `TabularDataset` (#7493)
- Add `TabularDataset.__iter__` (#7601)
- Add `Variable.mean` (#7670)
- Add `chainerx.softplus` (#7679, thanks @aksub99!)
Enhancements
- Avoid mutable default arguments (#4822)
- Set initial `top_data` as `-np.inf` and `argmax_data` as `-1` in `F.roi_max_pooling_2d` (#6237, thanks @knorth55!)
- Add a flag to detect access to grad before calling `cleargrad` (#7146)
- Add fp16 support to collective functions (#7456)
- Call `chainerx.grad` from `chainer.grad` (#7464)
- Use abseil to print stacktrace when signal is raised in ChainerX (#7502)
- Emit build info of ChainerX and stop hiding `ImportError` (#7518)
- Avoid ChainerX implicit type conversions (#7520)
- Make `device` argument a keyword-only argument (#7537, thanks @kshitij12345!)
- Support ellipsis in `Array::At` and `__getitem__` (#7561)
- Introduce `chainerx.ndarray._is_chained` (#7565)
- Remove `squared_difference` and fix docs (#7582)
- Avoid code duplication in optimizer hook implementation (#7592)
- Refactor `allreduce_grad()` and functions related to it (#7604)
- Raise Python `IndexError` if the index `__getitem__` takes is out of bounds (#7614)
- Use `six.integer_types` for axis check in `F.concat` (#7632, thanks @knorth55!)
- Fix `optimizer_hooks.GradientClipping` for ChainerX (#7641)
- Replace optional-lite with abseil (#7646)
- Make devices hashable (#7648)
- Fix `optimizer_hooks.GradientHardClipping` for ChainerX (#7656, thanks @kshitij12345!)
- Implement `IntervalTrigger.__str__` (#7664, thanks @ktns!)
- Make `GradientLARS` optimizer hook work with ChainerX (#7669)
- Use `absl::Span` and related helpers instead of `gsl::span` (#7671)
- Add ChainerX support to initializers (#7687)
- Delete deprecated communicators (#7697)
- Use `six.integer_types` for axis checks (#7713)
- Require CUDA if `CHAINERX_BUILD_CUDA` is set (#7752)
Bug Fixes
- Skip `None` array in `FunctionNode` NaN check (#6283)
- Fix unit selection of `CupyMemoryProfiler` (#7003)
- Exclude eps from `running_var` of `F.batch_renormalization` (#7202)
- Fix pickling issues on `MultiprocessIterator` (#7486)
- Fix `initializers.Identity` for iDeep backend (#7548)
- Fix a bug of `chainermn.links.create_mnbn_model` (#7603)
- Fix `PickleDataset` crash when using multiprocessing (#7625, thanks @zaltoprofen!)
- Fix `AMSGrad` with intel64 backend (#7661)
- Fix an error on `chainer.grad` for multiple devices (#7692)
- Fix spectral normalization ChainerX conversion (#7698)
- Fix offset in `chainerx::Flip` (#7727)
- Fix reporter for multi-thread use (#7731)
- Fix `Parameter.dtype` for uninitialized parameters (#7735)
- Fix `UpdateRule.use_fp32_update` for uninitialized parameters (#7736)
Code Fixes
- Use `backend.get_array_module`, not `cuda.get_array_module` (#7514, thanks @crcrpar!)
- Make `squared_difference` an alias of `squared_error` (#7547)
- Avoid code duplication and access violation between `Optimizer` and `GradientMethod` (#7585)
- Use `chainerx.clipped_relu` in `F.clipped_relu` (#7588)
- Use old syntax to suppress warning in ChainerX (#7615)
- Rename split functions in pybind implementation (#7617)
- Clean up `CMakeLists.txt` (#7647)
- Fix flake8 errors (#7663)
- Avoid else after return (#7666)
- Use curly braces for constructors (#7667)
Documentation
- Improve contribution docs (#6492)
- Explain corresponding `Link`s (#6512)
- Fix inconsistent document for extension finalizer (#7557)
- Document `CHAINERX_CUDNN_USE_CUPY` (#7574)
- Fix typos in `ResNet` prepare method (#7577)
- Tiny fix of `BackwardContext` comment (#7595, thanks @crcrpar!)
- Fix typos in `expand_dims.py` (#7602)
- Remove moved comment (#7607)
- Correct missing parentheses in documents (#7611, thanks @tinunkai!)
- Minor grammar improvements to broadcast documentation (#7621)
- Edit `FunctionNode` docs (#7622)
- Fix a typo in `chainer/functions/math/average.py` (#7653, thanks @ktns!)
- Fix a grammar error (#7658)
- Fix typo in `F.squeeze` documentation (#7682)
Examples
- Support default dtype in sentiment example's recursive minibatch version (#7438)
- Warn NaN in FP16 mode in sentiment example's recursive minibatch version (#7447)
- Fix typo in `examples/vae/train_vae.py` (#7578, thanks @m4saka!)
- Example fix: stateful triggers cannot be reused (#7665)
Tests
- Simplify `F.polygamma` test (#6970, thanks @ishanrai05!)
- Simplify `F.cast` test (#7034)
- Refactor optimizer test for multi-backend (#7590)
- Fix `y_shape` not used in tests (#7610)
- Test `optimizer_hooks.Lasso` for ChainerX (#7657, thanks @kshitij12345!)
- Fix `GroupNormalization` tests (#7684)
- Test `optimizer_hooks.GradientNoise` for ChainerX (#7709, thanks @kshitij12345!)
- Fix warning filter for `protobuf` (#7715)
- Test `optimizer_hooks.WeightDecay` for ChainerX (#7716, thanks @kshitij12345!)
- Relax `atol`/`rtol` of `chainerx.erf` float16 test (#7721)
- Fix flaky `TestHuberLoss` (#7723)
- Reverse input array for non-contiguous tests (#7728)
- Fix eps in `Contrastive.backward` (#7745)
- Fix flaky `TestContrastive` (#7747)
Others
v6.2.0
This is the release note of v6.2.0. See here for the complete list of solved issues and merged PRs.
Enhancements
- Avoid code duplication in optimizer hook implementation (#7674)
- Use `six.integer_types` for axis check in `F.concat` (#7712, thanks @knorth55!)
- Use `six.integer_types` for axis checks (#7770)
Bug Fixes
- Fix a bug of `chainermn.links.create_mnbn_model` (#7618)
- Fix unit selection of `CupyMemoryProfiler` (#7639)
- Skip `None` array in `FunctionNode` NaN check (#7642)
- Fix `AMSGrad` with intel64 backend (#7689)
- Fix spectral normalization chainerx conversion (#7705)
- Fix `PickleDataset` crash when using multiprocessing (#7729, thanks @zaltoprofen!)
- Fix pickling issues on `MultiprocessIterator` (#7742)
- Fix an error on `chainer.grad` for multiple devices (#7746)
Code Fixes
- Remove backslashes to continue lines of link targets (#7182)
- Use `backend.get_array_module`, not `cuda.get_array_module` (#7619, thanks @crcrpar!)
- Avoid code duplication and access violation between `Optimizer` and `GradientMethod` (#7644)
Documentation
- Add `chainer.get_device` to doc (#6831)
- Correct Embed ID documentation (#7575)
- Fix documentation for `shape` in `generate_array` (#7576)
- Fix typos in ResNet prepare method (#7579)
- Fix inconsistent document for extension finalizer (#7581)
- Fix typos in `expand_dims.py` (#7608)
- Minor grammar improvements to broadcast documentation (#7623)
- Explain corresponding `Link`s (#7628)
- Correct missing parenthesis in documents (#7635, thanks @tinunkai!)
- Tiny fix of `BackwardContext` comment (#7636, thanks @crcrpar!)
- Edit `FunctionNode` docs (#7659)
- Improve contribution docs (#7680)
- Fix typo in `F.squeeze` documentation (#7688)
- Fix a grammar error (#7711)
Examples
- Fix typo in `examples/vae/train_vae.py` (#7580, thanks @m4saka!)
- Support default dtype in sentiment example's recursive minibatch version (#7596)
- Warn NaN in FP16 mode in sentiment example's recursive minibatch version (#7598)
- Example fix: stateful triggers cannot be reused (#7683)
Tests
v7.0.0b1
This is the release note of v7.0.0b1. See here for the complete list of solved issues and merged PRs.
Highlights
- TabularDataset is added. This is a new dataset interface that supports rich manipulation in a tabular form (like pandas.DataFrame), e.g. loading only a specified subset of keys (columns), efficient slicing (with less transposition/concatenation), batch-wise preprocessing, etc. The API is still under development; we are adding more functionalities and widening its support in existing features where datasets are involved.
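The key-wise access and efficient slicing described above can be sketched in plain Python. This is a conceptual toy, not the actual `chainer.dataset.TabularDataset` API; the class and method names below are hypothetical and chosen only to illustrate the idea of a column-addressed dataset:

```python
# Hypothetical sketch of a tabular (column-addressed) dataset.
# Not the real TabularDataset API -- for illustration only.

class ToyTabular:
    """A minimal dict-of-columns dataset."""

    def __init__(self, columns):
        self.columns = columns  # {key: list of values}, all the same length

    @property
    def keys(self):
        return tuple(self.columns)

    def slice_keys(self, *keys):
        # Load only a subset of keys (columns).
        return ToyTabular({k: self.columns[k] for k in keys})

    def slice_rows(self, start, stop):
        # Slice examples column-wise, avoiding per-example
        # transposition/concatenation.
        return ToyTabular({k: v[start:stop] for k, v in self.columns.items()})

    def fetch(self):
        # Materialize as a batch: one sequence per key.
        return {k: list(v) for k, v in self.columns.items()}


data = ToyTabular({'x': [0, 1, 2, 3], 'y': [0, 1, 4, 9]})
sub = data.slice_keys('y').slice_rows(1, 3)
print(sub.fetch())  # {'y': [1, 4]}
```

Because slicing returns another column-addressed view, key selection and row slicing compose cheaply, which is the property the real interface is built around.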
New Features
- Add interface to backprop from multiple variables (#5952)
- Option to show progress bar during evaluation (#6474, thanks @wkentaro!)
- Elementwise `Power` for ChainerX (#6496, thanks @dido1998!)
- Add `chainerx.hstack`, `chainerx.vstack` and `chainerx.atleast_2d` (#6886, thanks @kshitij12345!)
- Add `TabularDataset` (#7115)
- Add `TabularDataset.concat/join` (#7116)
- Add `chainerx.expm1` and `chainerx.exp2` (#7126, thanks @aksub99!)
- Add `chainerx.log2` (#7139)
- Add `TabularDataset.{transform/transform_batch}` (#7150)
- Add `chainerx.log1p` (#7161, thanks @sky58!)
- Expose `chainerx::AsContiguous` as a public C++ API (#7166)
- Emit warning on `chainerx` import in debug mode (#7178)
- Add `chainer.as_array` for consistency with `chainer.as_variable` (#7252, thanks @tkerola!)
- Add `chainerx.moveaxis` (#7265, thanks @kshitij12345!)
- Add `chainerx.leaky_relu` (#7351, thanks @aksub99!)
- Add `chainerx.dstack` and `chainerx.atleast_3d` (#7353, thanks @kshitij12345!)
- Add Python operator `__abs__` with `chainerx.ndarray` (#7364)
- Allow turning off the static subgraph optimizations using a config (#7369)
- Add NumPy constants to ChainerX (#7384)
- Add `chainerx.erf` (#7404, thanks @aksub99!)
- Add `align_corners` option to `resize_images` (#7429)
- Add nearest mode to `resize_images` (#7443)
- Add `input_device` to `StandardUpdater` (#7472)
- Add `is_array_supported` method on `backend.Device` (#7487)
Enhancements
- Refactor `roi_max_align_2d` and `roi_average_align_2d` (#6405, thanks @knorth55!)
- Support tagged communication with `MPI_Status` (#6696, thanks @y1r!)
- Support ChainerX in `F.copy` (#6982)
- Avoid unnecessary updates in `F.batch_renormalization`, and related fixes (#7104)
- Support ChainerX in `Variable.addgrad` (#7132)
- Fix `cuda.DummyDevice` inheritance (#7147)
- Add `Device.name` property (#7149)
- `Link.serialize` to support ChainerX (#7175)
- Fix typo in `Variable.backward` (#7196)
- Call `require_grad()` on ChainerX `Variable.grad` setter (#7198)
- Clear outputs in `FunctionNode.unchain` and raise error in ChainerX fallback mode (#7216)
- Support ChainerX in `Variable.copydata` (#7226)
- Support ChainerX in MNIST data parallel example (#7227)
- `MultiprocessParallelUpdater` to support new devices (#7245)
- Alias `StackVector<int64_t, kMaxNdim>` to `Dims` (#7258)
- Support bool dtypes in `chainerx::{Max,Min}imum` (#7261)
- Fix integral negative powers (#7262)
- Make `chx.backward` not cause an error even if backprop is not required (#7287)
- Support `None` arguments in `chainerx.clip` and `chainerx.ndarray.clip` (#7296)
- Support scalar in `chainerx::Where` (#7325)
- `F.clip` function with `None` parameter for `min`/`max` (#7333)
- Support cuDNN deterministic max pooling (#7390, thanks @anaruse!)
- Avoid transferring from a native device to another in `Array::ToNative()` (#7394)
- Add type hints to `Variable` (#7400)
- Improve `get_device` error message when ChainerX is not available (#7401)
- `get_device` to raise more correct error types (#7421)
- Make `EXPECT_ARRAY_*` macros usable outside ChainerX (#7434)
- Add sequence support for ChainerX shape arguments (#7446)
- Check positive dilation in `F.convolution_2d` (#7448)
- Check positive dilation in `F.deconvolution_2d` (#7449)
- Explicitly check ChainerX arrays in fallback functions (#7452)
- Support `F.copy` between non-ChainerX and ChainerX devices only if backprop is not required (#7473)
Performance Improvements
- In `FunctionNode` ChainerX fallback, reuse `ChainerxDevice` taken from inputs to create outputs (#7397)
Bug Fixes
- Fix type check of `F.where` (#6872)
- Fix a bug in `Bernoulli.log_prob` (#7064, thanks @seiyab!)
- Fix uncopyable `MultiNodeBatchNormalization` (#7106)
- Bugfix: `MultiNodeChainList` should not assume float32 (#7165)
- Fix initialization of `L.Linear` when called with `n_batch_axes` (#7167)
- Fix float16 and Tensor Core related issue in ChainerX (#7189, thanks @anaruse!)
- Fix recomputation of `L.BatchRenormalization` (#7256)
- Fix `F.absolute_error` for ChainerX (#7281, thanks @crcrpar!)
- Fix a bug that root is ignored in scatter_dataset and bcast (#7289)
- Fix condition to invoke cuDNN dropout (#7293, thanks @crcrpar!)
- Improve type check in `_values_to_dicts` so it works with unicode of Python 2 too (#7316)
- Fix DtypeError in `chainerx.square` (#7321)
- Fix mypy errors (#7423)
- Make `WeightDecay` aware of loss scale (#7491)
- Fix `GradientMethod` ChainerX fallback for uninitialized parameters (#7492)
- Bugfix for pytest 2x2 (#7509)
- Fix AdamW update rule regression on CPU (#7512)
Code Fixes
- Split binary functions from math.cc (#7128)
- Avoid using `cuda.DummyDevice` and `cuda.get_device_from_array` (#7148)
- Fix pointless comparison compiler warning in ChainerX (#7160)
- Remove backslashes to continue lines of link targets (#7170)
- Split trigonometric/hyperbolic routines from `math.cc` (#7171)
- Remove duplicated code in `logic.cc` (#7176)
- Consistent cases for Inplace (#7181)
- Improve code in `testing.backend.BackendConfig` (#7212)
- Split ChainerX statistics routines from `math.cc` (#7222)
- Fix code style for long expressions (#7231)
- Check device instance using `xp` when possible (#7234)
- Move declaration of `AMax` and `AMin` to statistics routines (#7269)
- Split reduction routines from `math.cc` (#7270)
- Use `_` for private classes under `chainer.dataset.tabular` (#7275)
- Remove unused using declaration (#7284)
- Split misc routines from `math.cc` (#7298)
- Fix wrong comment in ChainerX backward implementation (#7311)
- Split explog routines from `math.cc` (#7317)
- Fix style on imports (#7338)
- Split rounding routines (#7407)
- Split arithmetic ops from routines/math.h (#7415)
- Put comments in `FindCuDNN.cmake` (#7419)
- DRY optimizer test parameterizations (#7437)
- Split logic routines from math (#7444)
- Qualify some arguments of pool kernels `const&` (#7453)
- Include `cuda_fp16.h` instead of `cuda_fp16.hpp` (#7480)
- Use `py::arg` literal in ChainerX Python binding (#7490)
- Remove rounding kernels from math (#7497)
- Rename and move activation routines from `math.h` (#7501)
- Remove ChainerX `AsTypeKernel` (#7522, thanks @kshitij12345!)
- Split Python binding math routines (#7527)
- Use absolute namespace in macros (#7536)
Documentation
- Improve contribution guide (#6140)
- Fix dead sphinx links (#6450)
- Fix `F.normalize` documentation (#7062, thanks @crcrpar!)
- Document `F.copy` view behavior (#7135)
- Improve device documentation (#7162)
- Document `backend.get_device_from_array` (#7163)
- Remove `chainerx.md` (#7179)
- Add `optimizers.MSVAG` to documentation (#7183)
- Fix grammatical errors in documentation (#7186)
- Fix capitalization of `F.relu` in doc (#7188)
- Add missing doc entry for `CommunicatorBase.allgather` (#7192)
- Fix invalid escape sequences in ChainerX routine docstrings (#7214)
- Fix typos in `chainer.utils.type_check` (#7249, thanks @ktns!)
- Document `observe_value` and `observe_lr` trigger interval (#7266)
- Fix `robots.txt` to allow indexing root (#7306)
- Avoid installing ChainerX when building docs of other projects on ReadTheDocs (#7363, thanks @knorth55!)
- Improve `F.normalize` documentation (#7371, thanks @crcrpar!)
- Fix format of `static_graph.rst` (#7389)
- Change Deformable Convolution 2D docs to match arguments (#7402, thanks @higumachan!)
- Avoid setting `test_iter.epoch` manually in the tutorial of training loop (#7405)
- Remove "Comparison with other frameworks" from docs (#7417)
- Fix documentation for `shape` in `generate_array` (#7450)
- Remove test coverage from ChainerX contribution guide (#7462)
- Correct Embed ID documentation (#7484)
- Fix typo in `tabular_dataset.py` (#7495, thanks @nai62!)
Installation
- Fix ChainerX compilation with MSVC (#7108, thanks @durswd!)
- Allow `CUDNN_LIBNAME` to be specified by environment variable (#7243)
- Use external `$MAKEFLAGS` instead if set in Travis CI script (#7331)
- In `FindCuDNN.cmake`, prioritize explicit variables over environment variables (#7441)
- Add ChainerX build option to use cuDNN from CuPy installation (#7442)
- Pin `typing == 3.6.6` (#7562)
- Fix `typing` requirements (#7564)
Examples
- Add CIFAR example to ChainerMN (#6839, thanks @ai-kase!)
- Support device specifiers in MNIST data parallel example (#6857)
- Support device specifiers in PTB example (#7055)
- Support device specifiers in pix2pix example (#7076)
- Support device specifiers in static graph example (#7153)
- Support device specifiers in ImageNet data parallel example (#7164)
- Support ChainerX in MNIST inference example (#7169)
- Support device specifier in image captioning example (#7204)
- Support device specifier in image captioning example (`predict.py`) (#7206)
- Remove `PlotReport.available()` check in glance example (#7209)
- Minor fix in DCGAN example README (#7210)
- Fix sentiment example test (#7215)
- Support device specifiers in MNIST model parallel example (#7225)
- Use Agg backend in examples with plot functionality (#7247)
- Support ChainerX in PTB gentxt example (#7314)
- Support ChainerX in MNIST model...