DyNet v. 2.1 incorporates the following changes:
- Parameters are now implicitly cast to Expressions in Python. This changes the API slightly, as there is no longer a need to call `dy.parameter` on a Parameter before using it in an expression.
- Python 3.7 support (pre-built binaries on PyPI) #1450 (thanks @danielhers)
- Advanced Numpy-like slicing #1363 (thanks @msperber)
- Argmax and straight-through estimators #1208
- Updated API doc #1312 (thanks @zhechenyan)
- Fix segmentation fault in RNNs #1371
- Many other small fixes and QoL improvements (see the full list of merged PRs since the last release for more details)
Link to the 2.1 documentation: https://dynet.readthedocs.io/en/2.1/
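The advanced slicing mentioned above follows NumPy conventions. As a rough sketch, the kinds of indexing that DyNet expressions can now mirror look like this in plain NumPy (this illustrates the semantics only, not the DyNet API itself):

```python
import numpy as np

# A 3x4 matrix standing in for the value of a DyNet expression.
t = np.arange(12).reshape(3, 4)

row = t[1]           # second row -> shape (4,)
col = t[:, -1]       # last column via a negative index -> shape (3,)
block = t[0:2, 1:3]  # contiguous sub-block -> shape (2, 2)

print(row.tolist())    # [4, 5, 6, 7]
print(col.tolist())    # [3, 7, 11]
print(block.tolist())  # [[1, 2], [5, 6]]
```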
DyNet v. 2.0.3 incorporates the following changes:
- On-GPU random number generation (#1059 #1094 #1154)
- Memory savings through in-place operations (#1103)
- More efficient inputTensor that doesn't switch memory layout (#1143)
- More stable sigmoid (#1200)
- Fix bug in weight decay (#1201)
- Many other fixes, etc.
Link to the documentation: DyNet v2.0.3
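The "more stable sigmoid" refers to avoiding floating-point overflow in the exponential for large-magnitude inputs. A sketch of the standard trick (not DyNet's exact kernel) is to pick the formulation that only ever exponentiates a non-positive number:

```python
import math

def stable_sigmoid(x):
    # For x >= 0, exp(-x) <= 1, so 1 / (1 + exp(-x)) cannot overflow.
    # For x < 0, rewrite as exp(x) / (1 + exp(x)), so the argument of
    # exp is again non-positive.
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    z = math.exp(x)
    return z / (1.0 + z)

print(stable_sigmoid(0.0))      # 0.5
print(stable_sigmoid(1000.0))   # 1.0, no overflow
print(stable_sigmoid(-1000.0))  # 0.0, no overflow
```

A naive `1 / (1 + math.exp(-x))` raises `OverflowError` around `x = -750` in double precision; the branched form is safe for any finite input.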
v2.0.2 of DyNet includes the following improvements. Thanks to everyone who made them happen!
- Better organized examples: #191
- Full multi-device support: #952
- Broadcasting standard elementwise operations: #776
- Some refactoring: #522
- Better profiling: #1088
- Fix performance regression on autobatching: #974
- Pre-compiled pip binaries
- A bunch of other small functionality additions and bug fixes
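Broadcasting for elementwise operations follows the usual NumPy-style rules, where size-1 dimensions are stretched to match the other operand. A minimal NumPy sketch of the semantics (illustrative only, not DyNet code):

```python
import numpy as np

m = np.ones((3, 4))              # e.g. a batch of 3 row vectors
b = np.array([0., 1., 2., 3.])   # shape (4,), broadcast across rows

out = m + b                      # b is added to every row of m
print(out.shape)                 # (3, 4)
print(out[0].tolist())           # [1.0, 2.0, 3.0, 4.0]
```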
DyNet v2.0.1 made the following major improvements:
- Simplified training interface: #695
- Support for multi-device computation (thanks @xunzhang!): #704
- A memory efficient version of LSTMBuilder (thanks @msperber): #729
- Scratch memory for better memory efficiency (thanks @zhisbug @Abasyoni!): #692
- Work towards pre-compiled pip files (thanks @danielhers!)
DyNet v2.0 includes a number of new features that are breaking changes with respect to v1.1.
- DyNet no longer requires Boost (thanks @xunzhang)! This means that models are no longer saved in Boost serialization format, but in a format supported natively by DyNet.
- The reading/writing interface has also changed in a number of ways, including the ability to read/write only parts of a model; examples of how to use it can be found in the examples directory. (#84)
- Renaming of "Model" to "ParameterCollection"
- Removing the dynet::expr namespace in C++ (now expressions are in the dynet:: namespace)
- Making VanillaLSTMBuilder the default LSTM interface #474
Other new features include:
- Autobatching (by @yoavgo and @neubig): https://github.com/clab/dynet/blob/master/examples/python/tutorials/Autobatching.ipynb
- Scala bindings (thanks @joelgrus!) #357
- Dynamically increasing memory pools (thanks @yoavgo) #364
- Convolutions and cuDNN (thanks @zhisbug!): #236
- Better error handling: #365
- Better documentation (thanks @pmichel31415!)
- Gal dropout (thanks @yoavgo and @pmichel31415!): #261
- Integration into pip (thanks @danielhers!)
- A cool new logo! (http://dynet.readthedocs.io/en/latest/citing.html)
- A huge number of other changes by other contributors. Thank you everyone!
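Gal dropout (also known as variational dropout) samples one dropout mask per sequence and reuses it at every time step, rather than resampling a fresh mask per step. A minimal NumPy sketch of the idea (a hypothetical helper, not DyNet's implementation):

```python
import numpy as np

def gal_dropout_mask(hidden_dim, p, rng):
    # One mask, sampled once per sequence; scaled for inverted dropout
    # so the expected value of the masked activations is unchanged.
    return rng.binomial(1, 1.0 - p, size=hidden_dim) / (1.0 - p)

rng = np.random.default_rng(0)
mask = gal_dropout_mask(5, 0.5, rng)

# The SAME mask is applied to the hidden state at every time step,
# so the same units stay dropped for the whole sequence:
h = np.ones(5)
for t in range(3):
    h = np.tanh(h) * mask
```

With per-step dropout, a different set of units would be zeroed at each iteration; here the zero pattern of `mask` is fixed for the sequence.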
This is the first release candidate for DyNet version 1.0.
Compared to its predecessor, cnn, it supports a number of new features:
- Full GPU support
- Simple support of mini-batching
- Better integration with Python bindings
- Better efficiency
- Correct implementation of L2 regularization
- More supported functions
- And much more!
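Correct L2 regularization amounts to adding the penalty's gradient, λ·w, to each parameter gradient, which decays the weights toward zero at every step. A minimal SGD-with-weight-decay sketch (illustrative only, not DyNet's trainer code):

```python
import numpy as np

def sgd_step(w, grad, lr=0.1, l2=0.01):
    # The L2 penalty 0.5 * l2 * ||w||^2 contributes l2 * w to the
    # gradient, so even with a zero data gradient the weights shrink.
    return w - lr * (grad + l2 * w)

w = np.array([1.0, -2.0])
w = sgd_step(w, grad=np.zeros(2))
print(w.tolist())  # approximately [0.999, -1.998]
```

Each step with a zero data gradient multiplies the weights by (1 - lr * l2); a subtle but common bug is to apply this decay at the wrong rate or only to some parameters.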