Pre-release

@niboshi niboshi released this Jan 24, 2019 · 3949 commits to master since this release

This is the release note of v6.0.0b2. See here for the complete list of solved issues and merged PRs.

New Features

  • Asynchronous snapshot writers (#4472, thanks @tyohei!)
  • Add D.Cauchy (#5337)
  • Add D.Geometric (#5343)
  • Add cached_property decorator (#5416)
  • Make build_computational_graph accept single output (#5445)
  • Add trigger to be fired only once (#5565, thanks @hitsgub!)
  • Use default dtype in L.NegativeSampling (#5664)
  • Add optional property finished to trigger object (#5681, thanks @hitsgub!)
  • Support all float dtypes in F.spatial_transformer_sampler (#5751)
  • Add a naive TimerHook link hook. (#5842, thanks @crcrpar!)
  • Add F.as_strided (#5902, thanks @fiarabbit!)
  • Add 'mean' value as an option for VAE loss reduce (#5966, thanks @23pointsNorth!)
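Among these additions, F.as_strided (#5902) exposes strided views of an array. Its semantics follow NumPy's numpy.lib.stride_tricks.as_strided; the sketch below shows the underlying NumPy striding behavior only (the Chainer call itself is not shown, and the sliding-window use case is just an illustration):

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

# Build a sliding-window view of a 1-D array without copying data.
x = np.arange(6)                      # [0, 1, 2, 3, 4, 5]
itemsize = x.strides[0]
# Each row starts one element later than the previous one, because the
# row stride equals the element stride.
windows = as_strided(x, shape=(4, 3), strides=(itemsize, itemsize))
print(windows)
# [[0 1 2]
#  [1 2 3]
#  [2 3 4]
#  [3 4 5]]
```

As with NumPy's version, such views alias the original buffer, so out-of-bounds shapes or strides read arbitrary memory; the shape and strides must be chosen to stay within the array.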

Enhancements

  • Support inputs with ndim!=2 for F.huber_loss (#5534)
  • Show forward stacktrace in backward (#5603)
  • Add type check for r arg of F.rrelu (#5619)
  • Support unretained Variables in _check_grad_type (#5640)
  • FunctionNode automatic fallback of array attributes in forward (#5745)
  • Switch device during gradient_check (#5777)
  • Raise CuPy not available error early in cuda.GpuDevice initialization (#5780)
  • Add hasattr check for user-specified flush calls on file-like objects (#5794, thanks @grafi-tt!)
  • Support custom initializer in links.CRF1d (#5807, thanks @himkt!)
  • Remove F.clip type restriction (#5813)
  • Batched pack/unpack params before/after allreduce (#5829, thanks @anaruse!)
  • Remove unnecessary cast in F.huber_loss (#5835)
  • Reimplement F.LocalResponseNormalization as FunctionNode (#5851)
  • Stop managing memory in max pooling specific manner (#5861)
  • Do not retain input on iDeep F.relu (#5871, thanks @grafi-tt!)
  • Set grad of F.clip to 1 at x_min and x_max (#5876, thanks @grafi-tt!)
  • Warn if reset method is not implemented in an iterator (#5882)
  • Cache attributes of distributions (#5892)
  • Use FunctionNode on ROIPooling2D (#5957)
  • Use more precise timer in function_hooks/timer.py (#5971, thanks @crcrpar!)
  • Improve F.elu memory consumption by retaining output (#5972, thanks @grafi-tt!)
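The F.clip gradient change (#5876) concerns the subgradient convention at the clip boundaries: the gradient is 1 at exactly x_min and x_max, not only strictly inside the interval. A NumPy sketch of that convention (my reading of the change, not the actual Chainer kernel):

```python
import numpy as np

def clip_grad(x, x_min, x_max):
    # Gradient of clip(x, x_min, x_max) w.r.t. x, using the convention
    # adopted in #5876: boundary points themselves get gradient 1.
    return ((x >= x_min) & (x <= x_max)).astype(x.dtype)

x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
print(clip_grad(x, -1.0, 1.0))   # [0. 1. 1. 1. 0.]
```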

Bug Fixes

  • Fix dump_graph not to leak memory (#5538, thanks @hitsgub!)
  • Fix F.batch_normalization + F.forget combination (#5557)
  • Bugfix of MultiNodeOptimizer with loss scaling (#5659)
  • Fix usage of downsample_fb in resnet (#5737, thanks @milhidaka!)
  • Fix device argument passed to MultiprocessParallelUpdater being modified (#5739, thanks @Guriido!)
  • Fix bug when CuPy not installed and cuda.fuse decorator used without parentheses (#5809, thanks @grafi-tt!)
  • Fix F.cast gradient for casts between the same dtypes (#5811)
  • Accept splitting at the tail of dataset in split_dataset (#5895)
  • Fix broken F.leaky_relu grad when slope = 0 (#5898, thanks @grafi-tt!)
  • Add copyparams method to Sequential (#5914)
  • Override _to_device for consistency (#5948)
  • Allow import chainer.testing without pytest (#5973)
  • Raise an appropriate error on cuDNN RNN backward in testing mode (#5981)
  • Fix stochastic failure in WalkerAlias (#6057)
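The F.leaky_relu fix (#5898) is about the degenerate slope = 0 case, where leaky ReLU must reduce to plain ReLU and the negative-side gradient must be exactly 0. A NumPy sketch of the forward and gradient formulas (an illustration of the definition, not the Chainer implementation):

```python
import numpy as np

def leaky_relu(x, slope):
    # f(x) = x for x >= 0, slope * x otherwise.
    return np.where(x >= 0, x, slope * x)

def leaky_relu_grad(x, slope):
    # df/dx: 1 on the non-negative side, `slope` on the negative side.
    return np.where(x >= 0, 1.0, slope)

x = np.array([-2.0, -0.5, 0.5, 2.0])
# With slope = 0, the negative-side gradient is exactly 0,
# matching plain ReLU -- the case addressed by #5898.
print(leaky_relu_grad(x, 0.0))   # [0. 0. 1. 1.]
```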

Documentation

  • Remove deprecation notices for v1 and v2 in documentation (#5081)
  • Add description for initializer dtype (#5246)
  • Add Code of Conduct (#5629)
  • Improve installation guide of ChainerMN (#5656)
  • Add explanations for LeNet5 (#5686)
  • Make docs of activation functions refer to ndarray (#5718)
  • Add robots.txt to hide older versions from search results (#5768)
  • Fix typo in v2 Upgrade Guide (#5771)
  • Fix a couple of broken links from markdown files (#5789)
  • Model Parallel Documentation (#5791, thanks @levelfour!)
  • Fix wording in documentation (#5795)
  • Write "Wx + b" in the documentation of Linear (#5852)
  • Make docs of array functions refer to ndarray (#5863)
  • Some small fixes to grammar and spelling (#5869)
  • Make docs of connection functions refer to ndarray (#5875)
  • Fix static_graph module path in documentation (#5883)
  • Correct the stable version in master branch (#5891, thanks @jinjiren!)
  • Change .data to .array in Guides and Examples docs (#5907, thanks @jinjiren!)
  • Fix typo (#5915, thanks @MannyKayy!)
  • Transform dataset documentation fix (#5938, thanks @23pointsNorth!)
  • Fix typo (#5942)
  • Update the note in DCGAN example to be compatible with the code (#5951, thanks @jinjiren!)
  • Fix doc of F.softmax_cross_entropy on output shape with reduce=no (#5965)
  • Make some docs of functions refer to ndarray (#5975)
  • Fix document in NStepLSTM/NStepRNN (#5979)
  • Make docs of math functions refer to ndarray (#6032)
  • Fix wrong MNIST MLP anchor (#6046)

Installation

  • Check integrity of CuPy wheel for CUDA 10 (#5955)

Examples

  • Add inference code to MNIST example (#4741)
  • Use iter.reset() in PTB example (#5834)
  • Some small improvements to the Mushrooms example (#5982)

Tests

  • FunctionTestCase for function tests (#3499)
  • Test statistics of initializers (#5511)
  • Add test mode to text classification example (#5666)
  • Fix test of F.connectionist_temporal_classification (#5727)
  • Refactor tests of F.split_axis and F.concat (#5733)
  • Return exitcode of make html to Travis (#5769)
  • Fix testing.BackendConfig context for repeated use (#5779)
  • Encode parameters in parameterized class name (#5782)
  • Add test for converter device argument in Evaluator (#5806)
  • Fix error message of testing.assert_allclose (#5814)
  • Refactor CI scripts (#5858)
  • Refactor Travis script (#5859)
  • Remove some CI requirements (#5865)
  • Allow multiple application of testing.parameterize (#5893)
  • Allow mixing testing.inject_backend_tests and testing.parameterize (#5904)
  • Adjust testing tolerance of numerical gradient (#5923)
  • Adjust testing tolerance of F.connectionist_temporal_classification (#5928)
  • Do not ignore FutureWarning other than experimental features (#5949)
  • Move mypy to static checks (#5987)
  • Skip test on Theano<=1.0.3 and NumPy>=1.16.0 (#6001)
  • Fix travis script to continue on failure in each step (#6002)
  • Fix inject_backend_tests multi_gpu test mark (#6028)
  • Allow doctest to run in single-GPU environment (#6029)
  • Test if the default CUDA device keeps being 0 after each test (#6044)
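Several of these tolerance adjustments concern numerical gradient checks, which compare an analytic gradient against a central-difference estimate. A self-contained sketch of that comparison (the eps and tolerance values here are illustrative, not the ones chosen in #5923):

```python
import numpy as np

def numerical_grad(f, x, eps=1e-3):
    # Central-difference estimate of the gradient of a scalar-valued f.
    grad = np.empty_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d.flat[i] = eps
        grad.flat[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return grad

f = lambda v: np.sin(v).sum()            # gradient is cos(v), elementwise
x = np.array([0.5, -1.0, 2.0])
numeric = numerical_grad(f, x)
analytic = np.cos(x)
assert np.allclose(numeric, analytic, atol=1e-4, rtol=1e-4)
```

The central-difference error scales with eps squared, so the test tolerance has to be chosen together with eps and the dtype; that trade-off is what tolerance adjustments like these tune.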

ChainerX

  • Add ChainerX native float16 (#5761)
  • CuPy/ChainerX memory pool sharing (#5821)
  • Automatic ChainerX fallback of array attributes in Function (#5828)
  • ChainerX backward w.r.t. inputs (C++ chainerx.grad) (#5747)
  • Improve gradient mismatch error (#5748)
  • Forbid fallback get/setitem for arrays with backprop required (#5754)
  • Implement BFC algorithm in ChainerX CUDA memory pool (#5760)
  • Resolve _as_noncontiguous_array workaround for ChainerX (#5781)
  • L.NegativeSampling ChainerX support (#5816)
  • Stop using Unified Memory by default (#5912)
  • Avoid cudaMemcpyAsync for pinned memory for faster host-to-device transfer (#5940)
  • Remove chainerx.asscalar (#6007)
  • Fix scalar handling of indices_and_sections in chainerx.split (#5788)
  • Fix ChainerX Python docstring allocation issue (#5815)
  • Fix chainerx.maximum to restore CUDA device (#6043)
  • Build ChainerX on ReadTheDocs (#5766)
  • Add chainerx.ndarray to the ndarray doc (#5864)
  • Document CuPy memory pool sharing (#6017)
  • Do not overwrite user-specified CMAKE_CXX_FLAGS (#5770)
  • Patch files for macOS (#5776, thanks @ktnyt!)
  • Update pybind dependency to v2.2.4 (#5798)
  • Update gsl-lite to v0.32.0 (#5849)
  • Enable ChainerX in docker image (#5879)
  • Update third-party.cmake to follow the recent way (#5911)
  • Make ChainerX set up and compile on Windows (#5932, thanks @durswd!)
  • Fix visibility for pybind exception registration for macOS (#5936)
  • Fix manifest typos (#6065)
  • ChainerX MNIST C++ example (#5746)
  • Remove some TODOs of the chainerx resnet example (#5775)
  • Fix jenkins script to allow explicit repo root (#5774)
  • Fix to test against new chainerx.GradientError (#5787)
  • Add Travis matrix for macOS ChainerX tests (#5846)
  • Remove .circleci (#5860)
  • Add C++ linter checks in Travis CI (#5867)
  • Fix FixedCapacityDummyAllocator in CUDA memory pool test (#5993)
  • Fix CUDA specific Python binding (#6037)
  • Add chainerx-generated reference docs to .gitignore (#5805, thanks @knorth55!)
  • Disable clang-tidy modernize-use-auto (#5839)
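The chainerx.split fix (#5788) concerns the two forms of indices_and_sections, which mirror numpy.split: a scalar means "this many equal sections", while a sequence gives split points along the axis. The NumPy behavior being matched:

```python
import numpy as np

x = np.arange(6)

# Scalar form: split into that many equal-sized sections.
equal = [a.tolist() for a in np.split(x, 3)]
print(equal)    # [[0, 1], [2, 3], [4, 5]]

# Sequence form: entries are split points along the axis.
points = [a.tolist() for a in np.split(x, [2, 3])]
print(points)   # [[0, 1], [2], [3, 4, 5]]
```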

Code Fixes

  • Simplify batch normalization with cuDNN (#5568)
  • Add type hints for Link, LinkHook, Initializer and ChainerX (#5675)
  • Refactor gradient setter in gradient_check (#5699)
  • Use new RNN implementation (#5726)
  • Backprop from multiple variables (#5741)
  • Fixes for clang (#5744)
  • Improve coding style (#5763)
  • Fix style of setup.py (#5764)
  • Code enhancements: avoid array copies (#5800)
  • Random code enhancements (#5801)
  • Add comment to MultiprocessIterator.__copy__ (#5833)
  • Move workaround utils._getitem/_setitem to chainerx (#5840)
  • Fix clang-tidy error (#5870)
  • Fix typo on internal attribute (#5894)
  • Fix clang-tidy warnings on clang-tidy 6 (#5901)
  • Fix for clang-tidy 7 (#5933)
  • Fix code formatting (#5941)
  • Remove @overload annotations outside the stub files (#5960)
  • Avoid deprecated numpy.asscalar (#5994)
  • Post macro comment for consistency (#6014)
  • Remove chainerx.asscalar from mypy stub file (#6024)
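The numpy.asscalar removals (#5994, #6024) track NumPy's deprecation of that helper; ndarray.item() is NumPy's documented replacement for extracting a Python scalar. For illustration:

```python
import numpy as np

x = np.array([3.5])
# numpy.asscalar(x) is deprecated; x.item() extracts the same Python scalar.
value = x.item()
print(type(value).__name__, value)   # float 3.5
```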

Others

  • Fix .gitignore to avoid ignoring some necessary files (#5836)
  • Allow skipping linkcode in docs with environment variable (#5868)