
@hvy released this on Dec 3, 2018


This is the release note of v6.0.0b1. See here for the complete list of solved issues and merged PRs.

Highlights

ChainerX

ChainerX is an ndarray implementation with Define-by-Run automatic differentiation capability. It roughly corresponds to "NumPy/CuPy + Chainer Variable", with the following additional features:

  • Speed: The whole ndarray and autograd implementation is written in C++, with a thin Python binding. This reduces the overhead present in the pure Python implementation of Chainer.
  • Extensibility: The backend is pluggable, so it is much easier to add support for new devices.

The best speed is achieved by using the ChainerX APIs directly, but ChainerX also provides a compatibility layer through the conventional Variable interface for easier adoption in existing projects. See the ChainerX Tutorial for more details and concrete examples.
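A minimal usage sketch of ChainerX follows; it is based on the ChainerX Tutorial, and details such as the 'native' device name may change while ChainerX is in beta.

    import chainerx as chx

    x = chx.array([1.0, 2.0, 3.0], dtype=chx.float32, device='native')
    x.require_grad()           # record gradients for this array
    y = (x * x + 2.0).sum()    # ordinary NumPy-like operations build the graph
    y.backward()               # define-by-run autograd, implemented in C++
    print(x.grad)              # gradient of sum(x**2 + 2) is 2*x -> [2. 4. 6.]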

New Features

  • Implement double backward of SLSTM function (#4824, thanks @tohmae!)
  • Add F.roi_max_align_2d (#5198, thanks @knorth55!)
  • Add F.roi_average_pooling_2d (#5285, thanks @knorth55!)
  • Add F.roi_max_pooling_2d (#5304, thanks @knorth55!)
  • Support all float dtypes in F.negative_sampling (#5336)
  • Add D.Chisquare (#5338)
  • Add D.Gumbel (#5352)
  • Add D.Poisson (#5364)
  • Add D.OneHotCategorical (#5372)
  • Serialize BestValueTrigger (#5402, thanks @ktns!)
  • Add return_samples argument to F.negative_sampling and L.NegativeSampling (#5597)
  • Support all float dtypes in F.embed_id (#5624)
  • Support default dtype in L.BlackOut (#5638)
  • Support default dtype in L.BinaryHierarchicalSoftmax (#5648)
  • Support all float dtypes in F.connectionist_temporal_classification (#5680)
  • ChainerX (#5725)

Enhancements

  • Add type compatibility check in npz deserializer (#5483)
  • Use cupy.linalg.det in F.det (#5525)
  • Avoid unnecessary copy in ndarray.astype (#5547)
  • Avoid cuDNN handle around DropoutStates (#5563)
  • Simplify softmax with cuDNN (#5566)
  • Simplify pooling with cuDNN (#5567)
  • Add KL divergence test for D.OneHotCategorical (#5587)
  • Add compute_stream argument in ConcatWithAsyncTransfer to allow more overlap between computation and transfer in CUDA (#5606, thanks @anaruse!)
  • Use chainer.utils.size_of_shape in ChainerMN (#5610)
  • Import testing/backend.py definitions in testing/__init__.py (#5633)
  • Avoid using dtype char codes (#5646)
  • More consistent use of Variable.array in codes under links (#5657, thanks @crcrpar!)
  • Use automatic broadcasting instead of F.repeat (#5662)
  • Refactor the state machine of iterators that iterate over indices (#5669, thanks @grafi-tt!)
  • Refactor train_mnist_dual_parallel.py (#5678)
  • Change Link.add_hook to return self (#5736, thanks @crcrpar!); see the sketch after this list
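The add_hook change above enables attaching hooks inline; a minimal sketch follows (the NamePrinter hook is made up purely for illustration).

    import chainer
    import chainer.links as L

    class NamePrinter(chainer.LinkHook):
        # a made-up LinkHook used only to illustrate the chained call
        def forward_preprocess(self, args):
            print(args.link.name)

    # Link.add_hook now returns the link itself, so hooks can be attached inline.
    layer = L.Linear(None, 10).add_hook(NamePrinter())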

Bug Fixes

  • Fix reporter.Summary float value deserialization (#5482)
  • Fix text_classification example failing on Python 3 (#5591, thanks @koreyou!)
  • Improve iDeep version checking (#5600)
  • Fix D.OneHotCategorical (#5604)
  • Fix Python 3.7 test failures in F.roi_average_pooling_2d (#5611)
  • Fix F.negative_sampling output dtype in CPU mode (#5613)
  • Fix args check in F.roi_average_align_2d and F.roi_average_pooling_2d (#5627, thanks @knorth55!)
  • Fix L.BatchNormalization with lazy initialization failing on GPU (#5683, thanks @koreyou!)

Documentation

  • Simplify array type information fields in function documentation (#4887)
  • Update installation guide of numpy with openblas on macOS (#5021)
  • Add links to ChainerCV documentation (#5434)
  • Add ChainerMN paper to references (#5570)
  • Fix docstring of F.forget (#5586, thanks @fiarabbit!)
  • Fix typo in updaters (#5589, thanks @okayu9!)
  • Fix extensions guide error regarding method to implement (#5602, thanks @lehy!)
  • Update F.roi_average_align_2d doc to refer to the wrapper function (#5609, thanks @knorth55!)
  • Fix a typo in Chain example code (#5653)
  • Fix typo in F.max_pooling_nd docstring (#5654)
  • Fix a typo in chainer.distributions documentation (#5658)
  • Add documentation of ndarray (#5660)
  • Fix typo in L.ResNetLayers (#5665, thanks @takaaki82!)
  • Minor typo correction (in docs/variables). (#5670, thanks @grigorisg9gr!)
  • Fix typo in docstrings (#5676)
  • Fix docs for backprop_step (#5692)
  • Make docs in chainer.distributions refer to ndarray (#5717)
  • Fix image URL in README (#5720, thanks @levelfour!)
  • Add warning in ChainerX documentation (#5752)

Installation

  • Require setuptools and add docs (#5532)

Examples

  • Add WaveNet example (#4922, thanks @dhgrs!)
  • Rewrite the example of VAE using Chainer distributions (#5356, thanks @ganow!)

Tests

  • Fix test warnings in NumPy 1.15 (#5596)
  • Fix test of F.rrelu (#5618)
  • Fix regex of protobuf modules warned by Python 3.7 (#5642)
  • Ignore h5py warning in Python 3.7 (#5691)
  • Add gradient consistency checks in numerical_grad (#5698)

Other

  • Update style check tools to the versions compatible with pycodestyle 2.4 (#5643)

@niboshi released this on Dec 3, 2018


This is the release note of v5.1.0. See here for the complete list of solved issues and merged PRs.

New Features

  • Added support for float dtypes in some functions:
    • F.negative_sampling (#5593)
    • F.scatter_add and F.get_item (#5594)

Enhancements

  • Avoid unnecessary copy in ndarray.astype (#5623)
  • Add compute_stream argument in ConcatWithAsyncTransfer to allow more overlap between computation and transfer in CUDA (#5684, thanks @anaruse!)
  • Add gradient consistency checks in numerical_grad (#5705)
  • Code enhancements
    • Avoid cuDNN handle around DropoutStates (#5644)
    • Import testing/backend.py definitions in testing/__init__.py (#5639)
    • Simplify pooling and softmax with cuDNN (#5637, #5672)
    • More consistent use of Variable.array in codes under links (#5689, thanks @crcrpar!)
    • Use automatic broadcasting instead of F.repeat (#5708)

Bug Fixes

  • Fix D.Uniform.log_prob to avoid returning -inf at boundary (#5550)
  • Fix reporter.Summary float value deserialization (#5584)
  • Fix F.negative_sampling output dtype in CPU mode (#5625)

Documentation

  • Add ChainerMN paper to references (#5583)
  • Fix docstring of F.forget (#5588, thanks @fiarabbit!)
  • Fix typo in updaters (#5598, thanks @okayu9!)
  • Update documentation for iDeep constraints (#5601)
  • Fix the method name in the extension guide (#5605, thanks @lehy!)
  • Update F.roi_average_align_2d doc to refer to the wrapper function (#5617, thanks @knorth55!)
  • Update installation guide of numpy with openblas on macOS (#5630)
  • Fix a typo in Chain example code (#5655)
  • Fix a typo in chainer.distributions documentation (#5661)
  • Fix typo in L.ResNetLayers (#5667, thanks @takaaki82!)
  • Minor typo correction (in docs/variables). (#5671, thanks @grigorisg9gr!)
  • Add links to ChainerCV documentation (#5677)
  • Fix typo in docstrings (#5679)
  • Add documentation of ndarray (#5704)
  • Fix docs for backprop_step (#5710)
  • Make docs in chainer.distributions refer to ndarray (#5719)

Examples

  • Use SerialIterator in train_mnist_custom_loop.py (#5544)

Tests

  • Fix test warnings in NumPy 1.15 (#5599)
  • Fix test of F.rrelu (#5673)
  • Ignore h5py warning in Python 3.7 (#5694)
  • Fix regex of protobuf modules warned by Python 3.7 (#5711)

Others

  • Update style check tools to the versions compatible with pycodestyle 2.4 (#5715)

@kmaehashi released this on Oct 25, 2018


This is the release note of v6.0.0a1. See here for the complete list of solved issues and merged PRs.

New Features

  • Add error handler interface to trainer extensions (#4630)
  • Add discriminative margin based clustering loss (#5313, thanks @dBeker!)
  • Support all float dtypes in F.det and F.inv (#5323)
  • Support all float dtypes in F.scatter_add and F.get_item (#5335)
  • Add probability distribution functions

Enhancements

  • Add maxtasksperchild parameter for MultiprocessIterator (#4972, thanks @jnishi!)
  • In-place update in F.batch_renormalization (#5014)
  • Introduce utils._fp16_mixed_precision_helper decorator (#5306)
  • Remove unnecessary version checking in ChainerMN (#5312)
  • Dynamically import matplotlib (#5320)
  • Use automatic broadcast and force_array (#5409)
  • Refactor gradient_check.check_backward (#5411)
  • Rename Adam.lr to Adam.alpha_t (#5420)
  • Grouped convolutions using matmul (#5459)
  • Validate shape of weight in F.convolution_2d (#5460)
  • Avoid Iterable in CaffeFunction (#5477)
  • Support negative axis for F.softmax (#5497)
  • Use arr.item() instead of numpy.asscalar(arr) to support NumPy 1.16 (#5510)
  • ChainerMN: Forward-port recent enhancements and bug-fixes (ChainerMN v1.3.1 release note) (#5535)
  • Make type_check.argname private (#5552)
  • Un-deprecate Link.add_param and Link.add_link (#5553)
  • ChainerMN: add an error message when mpi4py is missing (#5559)
  • Fix code for python 3.7 (#5577)
  • Improve iDeep 2.0 support
  • Code enhancements
    • Fix some E241 style errors (#5431)
    • Fix style of imports (#5433)
    • Simplify scalar handling in basic_math (#5428, #5439)
    • Dedup assertion in MpiCommunicatorBase.allreduce (#5473)
    • Remove debug print (#5430)
    • Implement no-double-backprop version of F.softmax_cross_entropy using FunctionNode (#5478, #5508)
    • Consistently use Variable.array instead of .data (#5417, #5495, thanks @crcrpar!)

Bug Fixes

  • For proper resuming, don't raise KeyError at UpdateRule deserialization (#5353, thanks @grafi-tt!)
  • Support 0-size shape in D.Beta (#5382)
  • Fix re-creation of retained output variable nodes in backward (#5424)
  • CaffeFunction ignores pad_w (#5463, thanks @koreyou!)
  • Fix train_imagenet_data_parallel.py example so that it can be run (#5469, thanks @Lynkzhang!)
  • Fix backward of HuberLoss for ndim >= 3 (#5493)
  • Fix F.softmax and F.log_softmax with axis=-1 on gpu (#5496)
  • Fix D.Uniform.log_prob to avoid returning -inf at boundary (#5548)

Documentation

  • Merge ChainerMN docs from master branch (#5300)
  • Update ChainerMN documents (#5302)
  • Replace Variable.data with Variable.array in examples and functions (#5386, thanks @crcrpar!)
  • Improve code sample appearance in docs (#5388)
  • Fix typos in doc of chainer.report (#5410)
  • Fix a ReST escape (#5415)
  • Add document for D.Beta (#5419)
  • Fix docstring of discriminative loss (#5423)
  • Fix docstrings to follow OpenStack Style Guidelines (#5427)
  • Fix docstring of chainer.Sequential (#5438)
  • Add Google Colaboratory installation steps and link to community examples (#5446)
  • Use "documentation" instead of "document" in our documentation (#5450)
  • Fix typo in static graph optimization (#5453, thanks @crcrpar!)
  • Add support for NumPy 1.15 in docs (#5500)
  • Fix dead fragments to CuPy docs (#5504)
  • Fix a typo in Extension.on_error (#5523)
  • Improve FunctionNode upgrade guide (#5527)
  • Chainer v5 requires CuPy v5 (#5531)
  • Add upgrade guide for get_device_from_array (#5558)
  • Add Python 3.7 support to installation docs (#5573)

Installation

Examples

  • Use SerialIterator in train_mnist_custom_loop.py (#5519)

Tests

  • Fix occasional test failure of l2normalize with float16 (#5380)
  • Add missing test in Variable test (#5385)
  • Travis test against v5 branch (#5394)
  • Ignore warnings raised by scipy<1.0 using a deprecated feature of numpy>=1.15 (#5471)
  • Relax tolerance of check_double_backward test (#5486)
  • Ignore protobuf warnings in Python 3.7 (#5514)
  • Fix slow tests with maxtasksperchild=1 or 10 (#5516)
  • Fix test for python 3.7 (#5572)

@beam2d released this on Oct 25, 2018


This is the release note of v5.0.0. See here for the complete list of solved issues and merged PRs.

This is the fifth major release of Chainer. This release note only covers the differences from v5.0.0rc1; for all highlights and changes, please refer to the blog post and the release notes of the pre-releases.

See the Upgrade Guide if you are upgrading from previous versions.

Highlights

  • Chainer now supports Python 3.7.
  • iDeep 2.0 is now supported. Existing iDeep 1.x users must update iDeep using pip install -U ideep4py.
  • Initialization of link parameters and child links via __init__(...), add_param, and add_link has been un-deprecated. These APIs are useful when building a link as a container of parameters and links, so we decided to keep them alongside init_scope; a minimal sketch follows this list.
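A minimal sketch of the container-style APIs mentioned above (the layer size and parameter shape are arbitrary examples):

    import chainer
    import chainer.links as L

    class Model(chainer.Chain):
        def __init__(self):
            super(Model, self).__init__()
            self.add_link('fc', L.Linear(None, 10))  # register a child link by name
            self.add_param('scale', shape=(10,))     # register a parameter by name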

New Features

  • Add discriminative margin based clustering loss (#5505, thanks @dBeker!)

Enhancements

  • Update Adam for iDeep 2.0 (#5407, thanks @mingxiaoh!)
  • Fix some E241 style errors (#5437)
  • Fix style of imports (#5458)
  • Validate shape of weight in F.convolution_2d (#5466)
  • Dedup assertion in MpiCommunicatorBase.allreduce (#5475)
  • Replace variable.data with variable.array in variable.py (#5488, thanks @crcrpar!)
  • Update packaging for iDeep 2.0 (#5513)
  • Consistently use Variable.array instead of .data (#5517)
  • Use arr.item() instead of np.asscalar(arr) to support NumPy 1.16 (#5529)
  • Support negative axis for F.softmax (#5543)
  • In-place update in F.batch_renormalization (#5546)
  • Grouped convolutions using matmul (#5549)
  • ChainerMN: Forward-port recent enhancements and bug-fixes (ChainerMN v1.3.1 release note) (#5554)
  • Make type_check.argname private (#5556)
  • ChainerMN: add an error message when mpi4py is missing (#5562)
  • Un-deprecate Link.add_param and Link.add_link (#5569)
  • Fix code for Python 3.7 (#5578)
  • Remove unnecessary version checking in ChainerMN (#5400)

Bug Fixes

  • Fix beta distribution (#5455)
  • CaffeFunction ignores pad_w (#5468, thanks @koreyou!)
  • Fix FunctionNode.retained_outputs (#5476)
  • Fix train_imagenet_data_parallel.py example so that it can be run (#5499, thanks @Lynkzhang!)
  • Fix F.softmax and F.log_softmax with axis=-1 on gpu (#5502)
  • For proper resuming, don't raise KeyError at UpdateRule deserialization (#5506, thanks @grafi-tt!)
  • Fix backward of HuberLoss (#5520)

Documentation

  • Merge ChainerMN docs from master branch (#5399)
  • Add document for D.Beta (#5426)
  • Fix typos in doc of chainer.report (#5447)
  • Fix a ReST escape (#5449)
  • Fix typo in static graph optimization (#5456, thanks @crcrpar!)
  • Fix docstring of chainer.Sequential (#5461)
  • Add Google Colaboratory installation steps and link to community examples (#5464)
  • Fix docstrings to follow OpenStack Style Guidelines (#5465)
  • Fix dead fragments to CuPy docs (#5515)
  • Improve code sample appearance in docs (#5522)
  • Use "documentation" instead of "document" in our documentation (#5533)
  • Fix docstring of discriminative loss (#5537)
  • Chainer v5 requires CuPy v5 (#5540)
  • Improve FunctionNode upgrade guide (#5541)
  • Add support for NumPy 1.15 in docs (#5545)
  • Add upgrade guide for get_device_from_array (#5560)
  • Update ChainerMN documents (#5564)
  • Add Python 3.7 support to installation docs (#5574)

Installation

  • Fix typo in setup.py (#5398)
  • Update base docker image (#5571)

Tests

  • Travis test against v5 branch (#5395)
  • Add missing test in Variable test (#5406)
  • Fix occasional test failure of l2normalize with float16 (#5448)
  • Relax tolerance of check_double_backward test (#5490)
  • Ignore warnings raised by scipy<1.0 using a deprecated feature of numpy>=1.15 (#5491)
  • Ignore protobuf warnings in Python 3.7 (#5518)
  • Fix test for python 3.7 (#5576)

@hvy released this on Sep 27, 2018


This is the release note of v5.0.0rc1. See here for the complete list of solved issues and merged PRs.

Highlights

Static subgraph optimization

The static subgraph optimization feature has been introduced. It removes the CPU (Python) overhead of graph construction and traversal in backward.

By applying the @static_graph decorator to functions or methods (typically the forward method of a chain), you can let Chainer cache the computational graph collected at the first call and reuse it in subsequent calls. To use this feature safely, your define-by-run code must perform the same computations in every iteration.
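A minimal sketch is shown below; it assumes the decorator is importable as chainer.static_graph, and the network itself is an arbitrary example.

    import chainer
    import chainer.functions as F
    import chainer.links as L

    class MLP(chainer.Chain):
        def __init__(self, n_units, n_out):
            super(MLP, self).__init__()
            with self.init_scope():
                self.l1 = L.Linear(None, n_units)
                self.l2 = L.Linear(None, n_out)

        @chainer.static_graph  # cache the graph built on the first call and replay it afterwards
        def forward(self, x):
            h = F.relu(self.l1(x))
            return self.l2(h)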

Advanced graph optimizations/transformations are not implemented yet, so currently it only reduces the CPU overhead. We will consider adding more sophisticated graph-level optimizations to improve the GPU utilization as well as further reduce CPU overhead.

This feature is experimental. We may change the interface in future releases.

ChainerMN integration

ChainerMN has been integrated into Chainer. The ChainerMN module (chainermn) is now available just by installing Chainer (note that MPI still needs to be installed separately). If you already have ChainerMN installed, please uninstall it (pip uninstall chainermn) before updating to this version of Chainer.
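As a minimal sketch of the integrated module (running it still requires an MPI installation and mpi4py; the optimizer choice is an arbitrary example):

    import chainer
    import chainermn  # now bundled with Chainer

    comm = chainermn.create_communicator()  # MPI-based communicator
    optimizer = chainermn.create_multi_node_optimizer(chainer.optimizers.Adam(), comm)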

iDeep 2.0

iDeep 2.0 is now supported. iDeep 2.0 provides acceleration on Intel architectures for more functions than iDeep 1.x. Be aware that iDeep 1.x is incompatible with this version of Chainer; please update to iDeep 2.x if you already have iDeep 1.x installed.

NVIDIA DALI support

NVIDIA DALI is now supported. DALI is a library for building data preprocessing pipelines. The new DaliIterator converts a DALI pipeline into an iterator that can be used with any updater. Currently, you need to write a custom converter function to use it with Trainer. See the imagenet example and its dali_util.py for how to use it.

This feature is experimental. We may change the interface in future releases.

New Features

  • Add LinkHook (#4730)
  • Implement static subgraph optimizations (#4811)
  • Improve performance of sparse_matmul (#4831, thanks @anaruse!)
  • Allow weighted scalar reporting by providing a tuple to the reporter (#4844, thanks @hknerdgn!)
  • Add 1d/3d aliases for convolution/pooling functions and links (#4851)
  • Add params and xp to Distribution (#4925)
  • Support default dtype in F.spatial_transformer_grid (#5114)
  • Add Dirichlet distribution (#5115)
  • Include platform information to chainer.print_runtime_info() (#5163, thanks @himkt!)
  • Support all float dtypes in F.sigmoid_cross_entropy (#5211)
  • Add L.VGG19Layers (#5213, thanks @crcrpar!)
  • Integrate ChainerMN into Chainer repository (#5226)
  • Support all float dtypes in F.normalize (#5256)
  • Add contains_nan (#5270)
  • Support all float dtypes in F.roi_pooling_2d (#5281)
  • Support all float dtypes in F.gaussian (#5284)
  • Refine F.roi_average_align_2d interface (#5305, thanks @knorth55!)
  • Automatic broadcast: minimum, maximum, where (#5330)
  • Support NVIDIA DALI (#5387, thanks @anaruse!)

Enhancements

  • More checks in function_node (#3983)
  • Support None for the in_channels argument in ConvolutionND/DeconvolutionND (#4587)
  • Update for iDeep4py 2.0 (#4933, thanks @mingxiaoh!)
  • Refactor NaN check in chainer.grad in debug mode (#5228)
  • Improve type check messages for some functions (#5251)
  • Use log_softmax in D.Categorical (#5255)
  • Avoid mutating in_params of coo_matmul (#5258)
  • Use with cuda_device (#5269)
  • Avoid using pkg_resources to retrieve Chainer version (#5298)
  • Move get_array_module to backends from cuda (#5327)
  • Disallow summing over ellipsis in F.einsum (#5328)
  • Fix input variable of FunctionNode of where (#5340)
  • Move copyto to chainer.backend (#5344)
  • Fix input variable of FunctionNode of permutate (#5349)
  • Add binary_check option to D.Bernoulli (#5363)
  • Support negative axis for F.log_softmax (#5381)

Bug Fixes

  • Fix performance regression in #4772 (#5267)
  • Fix import order in _runtime_info.py (#5271)
  • Make Link.to_*pu return self (#5322)
  • Fix PrintHook fails if grad is None (#5333)
  • Fix a shape of W in L.ConvolutionND (#5370)
  • Make deserializers overwrite intel64.mdarray (#5373)
  • Fix serialize in link to support iDeep2 (#5374)

Documentation

  • Add DCGAN tutorial (#4544)
  • Add a comment that explains why the Classifier chain clears Variable attributes (#5069, thanks @grafi-tt!)
  • Fix FunctionHook documentation (#5188)
  • Add explanation about switching the three modes (#5283, thanks @fiarabbit!)
  • Amend chainer.Chain document (#5294, thanks @fiarabbit!)
  • Fix location of chainermn docs in sidebar (#5296)
  • Update Chainer at a Glance (#5316)
  • Fix invalid escape sequence warnings in Python 3.6 (#5317)
  • Add quotes to stylecheck install (#5318)
  • Add long_description for PyPI (#5345)
  • Fix typo (#5346)
  • Fix dead link (#5358)
  • Fix sphinx version to 1.7.9 (#5359)
  • Mention chainer.backend in upgrade guide (#5384)

Installation

  • Delete a commented out line (#5354)

Examples

  • Fix seq2seq example to support --resume option and improve docs (#4977)
  • Fix snapshot trigger in CIFAR example (#5325, thanks @akitotakeki!)

Tests

  • Avoid using deprecated is_linear argument in test (#5307, thanks @knorth55!)
  • Stabilize backward tests in TestTriangularInv (#5329)
  • Avoid printing in test_kldivergence (#5366)
  • Fix tolerance of matmul tests (#5369)
  • Minor fixes to TestKLDivergence (#5379)

Others

  • Update issue-template to encourage users to use chainer.print_runtime_info (#5272, thanks @himkt!)

@mitmul released this on Sep 27, 2018


This is the release note of v4.5.0. See here for the complete list of solved issues and merged PRs.

New Features

  • Include platform information to chainer.print_runtime_info() (#5268, thanks @himkt!)

Enhancements

  • Support None for the in_channels argument in ConvolutionND/DeconvolutionND (#5279)

Bug Fixes

  • Fix snapshot trigger in CIFAR example (#5331, thanks @akitotakeki!)
  • Fix PrintHook fails if grad is None (#5361)
  • Make Link.to_*pu return self (#5362)
  • Make deserializers overwrite intel64.mdarray (#5377)

Documentation

  • Amend chainer.Chain document (#5299)
  • Fix invalid escape sequence warnings in Python 3.6 (#5334)
  • Fix FunctionHook documentation (#5339)
  • Fix typo (#5348)

Examples

  • Fix seq2seq example to support --resume option and improve docs (#5275)
  • Fix snapshot trigger in CIFAR example (#5331, thanks @akitotakeki!)

Others

  • Fix Dockerfile in v4 branch to use CuPy minor versions (#5291)

@niboshi released this on Aug 23, 2018


This is the release note of v5.0.0b4. See here for the complete list of solved issues and merged PRs.

Highlights

Changes without compatibility

  • Change the initial avg_var of L.BatchNormalization to 1 (#4742)
  • Fix backward computation in F.forget (#5179). In this fix, the double backprop capability of F.forget is removed, since it did not work correctly in some cases.

New Features

  • Add new functions:
    • Add F.rrelu, Randomized Leaky ReLU (RReLU) activation function (#3059, thanks @raven38!)
    • Add F.erfcx, scaled complementary error function (#5195)
    • Add F.erfcinv, inverse complementary error function (#5202)
    • Add F.ndtr, normal cumulative distribution function (#5237)
    • Add F.log_ndtr (#5239)
    • Add F.ndtri, the inverse of ndtr (#5247)
    • Add F.roi_average_align_2d (#5070, thanks @wkentaro!, #5259)
    • Add F.cumprod (#5074)
  • Add new distributions:
  • Support default dtype in some links:
  • Support all float dtypes in some functions:
  • Implemented dataset using pickle (#4581)
  • Add WarmupShift and MultistepShift extensions (#4935, thanks @mingxiaoh!)
  • Improve initializer support in L.Maxout (#5068)
  • Support n_batch_axis in L.Linear (#5103)
  • Add raw kernel function (#5106)
  • Support axis argument for F.log_softmax (#5215)

Enhancements

  • Improve type check messages for some functions (#5189, #5200, #5224, #5248)
  • Detect stalled datasets in MultiprocessIterator (#4607)
  • New backward_accumulate (#4772)
  • Avoid numpy.ascontiguousarray when iDeep is used (#5063)
  • Use automatic broadcasting in distributions (#5086)
  • Use F.cumprod in backward of F.prod (#5094)
  • Return params and children as ordered by name in Link and Chain (#5119)
  • Use cuda.get_array_module in fused function (#5120)
  • Fix L.Convolution2D error message (#5138, thanks @fiarabbit!)
  • Remove cuda_fusion.py (#5144)
  • Implement eps_inside_sqrt option to RMSprop (#5150)
  • Normalize chainer.config.dtype in chainer.get_dtype() (#5167)
  • Use collections.abc to avoid DeprecationWarning in Python 3.7 (#5172)
  • Minor fixes to D.MultivariateNormal (#5173)
  • Avoid collections.Iterable (#5180)
  • Make imports in alphabetical order (#5181)
  • Avoid keyword arguments in FunctionHook callbacks (#5191)
  • Retain outputs in F.erfinv (#5199)
  • Use xp.einsum in F.bilinear (#5207)
  • Minor fixes to D.Beta (#5219)
  • Minor fixes to D.Uniform (#5225)
  • Check eps < CUDNN_BN_MIN_EPSILON in FixedBatchNormalization (#5232, thanks @cycentum!)
  • Use ndtr and log_ndtr in normal distribution (#5240)
  • Use erfcinv for Normal.icdf (#5242)
  • Use ndtri in normal distribution (#5254)
  • Use normcdfinv in F.ndtri (#5260)

Bug Fixes

  • Move backends.cuda.copyto to backends.copyto and make it work with iDeep (#5095)
  • Fix the condition for the switching of cuDNN in F.deconvolution_nd (#5129, thanks @fiarabbit!)
  • Fix test failure of TestResNetLayers (#5133)
  • Fix backward compatibility of Link.__call__ MRO (#5141)
  • Flush the output stream after PrintReport reports (#5146)
  • Support old numpy in F.split_axis (#5157)
  • Avoid cancellation in D.Normal (#5185)
  • Support 0-dim input in F.logsumexp (#5190, thanks @cadenacchi!)
  • Fix cpu codes of indexing in F.softmax_cross_entropy (#5238)
  • Comment out extreme test of D.Categorical (#5261)

Documentation

  • Add iDeep to backend docs (#5121)
  • Improve iterator description in Chainer at a glance documentation (#5132, thanks @fiarabbit!)
  • Avoid use of ideep in doctest (#5148)
  • Fix grammar in PR template (#5178)
  • Fix docstring of F.erfinv (#5201)
  • Fix Sphinx issues in the reference of probability distributions (#5203)
  • Change iDeep in tips.rst to Chainer Backend for Intel Architecture (#5208, thanks @mingxiaoh!)
  • Fix toc level in iterator documentation (#5257)

Examples

  • Rename VAE hyperparameter C to beta in the example (#5135, thanks @Evanc123!)
  • Override Link.forward in MNIST model parallel example (#5159)

Tests

  • Simplify and stabilize softmax_cross_entropy test (#3409)
  • Ignore float warnings if testing extreme value (#5122)
  • Trivial fix for parameterized test case of F.contrastive (#5147)
  • Fix a class name in test_erfinv (#5165)
  • Fix doctests of open_pickle_dataset (#5182)
  • Fix test failure on Windows (#5186)
  • Add sphinx to doctest requirements (#5187)
  • Fix occasional test failure of l2normalize (#5210)
  • Fix occasional test failure of contrastive (#5218)
  • Ignore Theano warnings in Python 3.7 (#5223)
  • Ignore DeprecationWarnings at importing Theano (#5230)
  • Adjust tolerance of TestMatMul (#5236)
  • Add .pytest_cache/ to .gitignore (#5193)

@kmaehashi released this on Aug 23, 2018


This is the release note of v4.4.0. See here for the complete list of solved issues and merged PRs.

Enhancements

  • Fix L.Convolution2D error message (#5140, thanks @fiarabbit!)
  • Use collections.abc to avoid DeprecationWarning in Python 3.7 (#5177)
  • Avoid collections.Iterable (#5220)

Bug Fixes

  • Flush the output stream after PrintReport reports (#5149)
  • Fix backward compatibility of Link.__call__ MRO (#5151)
  • Support old numpy in F.split_axis (#5164)
  • Support 0-dim input in F.logsumexp (#5196, thanks @cadenacchi!)
  • Fix cpu codes of indexing in F.softmax_cross_entropy (#5241)

Documentation

  • Add docs for Extension.name (#5110)
  • Add iDeep to backend docs (#5162)
  • Avoid use of ideep in doctest (#5168)
  • Fix cross-link and format of Chainer at a glance documentation (#5170)
  • Fix grammar in PR template (#5204)
  • Improve Iterator description in Chainer at a glance documentation (#5250, thanks @fiarabbit!)

Tests

  • Simplify and stabilize softmax_cross_entropy test (#5216)
  • Fix occasional test failure of l2normalize (#5222)
  • Ignore Theano warnings in Python 3.7 (#5227)
  • Ignore DeprecationWarnings at importing Theano (#5243)
  • Trivial fix for parameterized test case of F.contrastive (#5252)
  • Fix occasional test failure of contrastive (#5253)
  • Add .pytest_cache/ to .gitignore (#5194)

@kmaehashi released this on Jul 27, 2018


This is the release note of v4.3.1. See here for the complete list of solved issues and merged PRs.

This is a hot-fix release for v4.3.0 to address the backward incompatibility issue reported in #5078 (thanks @grafi-tt and @tkanmae for reporting this!). Users implementing the __call__ method of their own Link using mix-ins (multiple inheritance) may have been affected by this issue.

Bug Fixes

  • Fix backward compatibility of Link.__call__ MRO (#5154)

@niboshi released this on Jul 19, 2018


This is the release note of v5.0.0b3. See here for the complete list of solved issues and merged PRs.

Highlights

  • New functions have been added: F.einsum, F.lgamma, F.digamma, F.polygamma
  • More built-in Links support the chainer.config.dtype configuration introduced in v5.0.0b2; a minimal sketch follows this list.
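A minimal sketch of the dtype configuration with one of the newly supported links follows (the float16 choice and the link size are arbitrary examples):

    import numpy
    import chainer
    import chainer.links as L

    chainer.global_config.dtype = numpy.float16  # default dtype for newly created links
    bn = L.BatchNormalization(8)                 # honors chainer.config.dtype in this release
    print(bn.gamma.dtype)                        # expected: float16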

Changes without compatibility

Please refer to the Upgrade Guide for details.

  • Link.copyparams has been changed to copy persistent values in addition to parameters (#4997). You can use the newly introduced copy_persistent=False option to emulate the previous behavior; see the sketch after this list.
  • FunctionNode classes exposed under chainer.functions namespace have been removed (#4421). Please use wrapper functions under chainer.functions instead of directly using classes.
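A minimal sketch of the copyparams change (L.BatchNormalization, which has persistent statistics such as avg_mean and avg_var, is just an arbitrary example):

    import chainer.links as L

    src = L.BatchNormalization(3)
    dst = L.BatchNormalization(3)

    dst.copyparams(src)                          # now also copies persistent values
    dst.copyparams(src, copy_persistent=False)   # previous behavior: copy parameters only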

New Features

  • Add F.einsum (#4644)
  • Add logarithmic gamma and related functions: F.lgamma, F.digamma, and F.polygamma (#4720)
  • Improve performance of batch normalization (#4798, thanks @anaruse!)
  • Add StepShift extension (#4894, thanks @jinjiren!)
  • Add Laplace distribution (#4932)
  • Add LabeledZippedImageDataset (#4961, thanks @d0i!)
  • Add Bernoulli distribution (#5025)
  • Support default dtype: L.BatchNormalization and L.BatchRenormalization (#5034), L.Maxout (#5058), L.InceptionBN (#5062), L.StatefulMGU (#5084)
  • Support all float dtypes in F.mean_absolute_error (#5053)

Enhancements

  • Fix cuda.elementwise to improve performance (#3787)
  • Support cuDNN in F.dropout (#3369, thanks @bonprosoft!)
  • Hide FunctionNode classes from chainer.functions namespace (#4421)
  • Infer input size in batchnorm using aggregate axes (#4673, thanks @tkanmae!)
  • Avoid zero division in F.normalize (#4769)
  • Fix for NumPy 1.15.0rc1 (#4832)
  • Create function to identify fashion-MNIST labels (#4860)
  • Rename all __call__ methods in Links to forward (#4912)
  • Refactor distribution (#4923)
  • Cleanup F.batch_normalization (#4964)
  • Run gradient clipping on GPU, if possible (#4982, thanks @shinh!)
  • Add log_scale option to Normal distribution (#4987)
  • Copy persistent values in Link.copyparams (#4997)
  • Remove obsolete code from batch (re)normalization (#5013)
  • Avoid hasattr in L.BatchNormalization (#5017)
  • Let F.depthwise_convolution_2d use F.convolution_2d internally (#5046)
  • Initialize gradient of uninitialized parameter with default dtype when initializer is callable (#5064)
  • Support 0-dim params in distributions (#5077)
  • Fix F.einsum to support NumPy 1.15rc1 (#5079)
  • Fix dataset path to use os.path.join (#5100)

Bug Fixes

  • Fix GetItem.backward for 0-dim boolean index (#4958)
  • Fix exception not raised when unsupported format is specified when dumping computational graph (#4971)
  • Fix iDeep call in MultiAdd (#5056)
  • Fix kernels not memorized (#5061)

Documentation

  • Add Chainer at a Glance documentation (#3127)
  • Add upgrade guide for auto_new_epoch (#4956)
  • Fix cross-reference links in StandardUpdater (#4968)
  • Add docs for Extension.name (#4980)
  • Fix docs of chainer.config.dtype (#4981)
  • Clarify how arguments are handled in L.Linear docs (#4983)
  • Fix docstrings in computational_graph (#4984)
  • Add support for NumPy 1.14 in docs (#4990)
  • Fix docs of L.NStepBiRNNTanh, L.NStepLSTMBase, L.NStepLSTM and L.NStepBiLSTM (#4991, thanks @mori97!)
  • Update docs in F.upsampling_2d according to new F.max_pooling_2d (#4992)
  • Fix verb error in chainer.functions.fft docstring (#5002, thanks @butsugiri!)
  • Fix typo in n_step_gru docs (#5006)
  • Add notes about relationship between F.dilated_convolution_2d and F.convolution_2d (#5010)
  • Fix broken notations in F.linear docs (#5011)
  • Add rules regarding use of pytest module (#5012)
  • Fix typo in README of seq2seq (#5018, thanks @MannyKayy!)
  • Improve Variable guide (#5030)
  • Fix typo in docs template (#5035)
  • Fix attribute name collisions in docstring (#5037)
  • Fix cross-link and format of Chainer at a glance documentation (#5044)
  • Fix dead link to numpy.dtype.kind in Tips (#5051)
  • Clarify distinction between chainer.dataset and chainer.datasets (#5057)
  • Fix broken docs in PolynomialShift (#5089)
  • Update caffe.rst docs (#5090)
  • Add upgrade guide for Link.copyparams changes (#5093)
  • Fix typo in the docstring of ChainList (#5098)

Examples

  • Fix invalid keyword arguments to L.Linear in ImageNet example (#4975)

Tests

  • Update style check tools (#4864)
  • Eliminate no_grads and squares in double backward tests (#4978)
  • Fix test_default_backward (#5001)
  • Remove unused parameter from TestBatchRenormalization (#5016)
  • Remove test_get_dummy_device_for_empty_array (#5071)