v1.11.0
This is a minor release that contains the following updates:
feature #914 #1142 #1168 #1204 #1217 #1285 #1286 #1289 #1329 #1330 #1331 #1358 #1359 #1373
other #1357
enhancement #886 #1112 #1302 #1311 #1312 #1342
test #1302
document #1328 #1341 #1354 #1355 #1371
bug #1297 #1336 #1341 #1346 #1352 #1355 #1364
Summary:
- Chainer now supports dataset and training loop abstraction (#1285).
- It contains the following components:
  - Dataset and Iterator to extract mini-batches by iterating over datasets
  - Trainer, Updater, and Extension to customize the training loop at low cost
  - Reporter to collect statistics from inside the models
- It also contains off-the-shelf support for data-parallel learning with multiple GPUs (#1358, thanks @amitibo!!).
- The MNIST, PTB, and ImageNet examples have been updated to use Trainer.
- The tutorial has also been updated to match the new examples.
- This release also contains the following updates:
  - New links and Functions:
  - Enhancement of links and Functions:
    - Asynchronous data transfer using a CUDA stream is now supported (#1112).
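To illustrate how the pieces of the new training-loop abstraction fit together, here is a minimal, framework-free sketch of the pattern: an Iterator yields mini-batches from a dataset, an Updater performs one step per batch, and a Trainer drives the loop while extensions observe it. This is a toy model of the design, not Chainer's actual classes or signatures; all names below are illustrative.

```python
class SerialIterator:
    """Cycle over a dataset, yielding fixed-size mini-batches."""
    def __init__(self, dataset, batch_size):
        self.dataset = list(dataset)
        self.batch_size = batch_size
        self.position = 0
        self.epoch = 0  # number of completed passes over the dataset

    def next(self):
        batch = []
        for _ in range(self.batch_size):
            batch.append(self.dataset[self.position])
            self.position += 1
            if self.position >= len(self.dataset):
                self.position = 0
                self.epoch += 1
        return batch


class Updater:
    """One 'training step': here it just sums the batch into a running total."""
    def __init__(self, iterator):
        self.iterator = iterator
        self.total = 0

    def update(self):
        self.total += sum(self.iterator.next())


class Trainer:
    """Run the updater until `stop_epoch` passes are complete,
    invoking registered extension callbacks after every update."""
    def __init__(self, updater, stop_epoch):
        self.updater = updater
        self.stop_epoch = stop_epoch
        self.extensions = []

    def extend(self, extension):
        self.extensions.append(extension)

    def run(self):
        while self.updater.iterator.epoch < self.stop_epoch:
            self.updater.update()
            for extension in self.extensions:
                extension(self)


# A tiny "Reporter"-like extension records the running total after each step.
log = []
iterator = SerialIterator(range(10), batch_size=5)
updater = Updater(iterator)
trainer = Trainer(updater, stop_epoch=2)
trainer.extend(lambda t: log.append(t.updater.total))
trainer.run()
print(updater.total)  # 90: each of the 10 items (summing to 45) is seen twice
```

The real API separates these concerns the same way, which is what makes the training loop customizable "at low cost": swapping the iterator, the update rule, or the set of extensions does not require rewriting the loop itself.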
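The idea behind asynchronous data transfer (#1112) is to overlap moving the next mini-batch with processing the current one. Chainer does this with a CUDA stream; the following framework-free sketch shows the same overlap pattern using a background thread and a bounded queue, with a plain function standing in for the host-to-device copy. All names here are illustrative, not Chainer's API.

```python
import queue
import threading


def prefetching_batches(batches, transfer, capacity=2):
    """Yield transfer(batch) for each batch, staging up to `capacity`
    transferred batches ahead of the consumer so that transfer and
    computation can overlap."""
    staged = queue.Queue(maxsize=capacity)
    sentinel = object()  # marks the end of the batch stream

    def worker():
        for batch in batches:
            # Runs concurrently with the consumer's processing of
            # earlier batches; blocks once `capacity` batches are staged.
            staged.put(transfer(batch))
        staged.put(sentinel)

    threading.Thread(target=worker, daemon=True).start()
    while True:
        item = staged.get()
        if item is sentinel:
            return
        yield item


# Usage: `sum` stands in for a host-to-device copy plus preprocessing.
result = list(prefetching_batches([[1, 2], [3, 4]], transfer=sum))
print(result)  # [3, 7]
```

With a GPU, the worker's `transfer` would enqueue copies on a separate CUDA stream so the device can execute the current batch's kernels while the next batch is in flight.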