v6.10.0: CPU efficiency improvements, refactoring
✨ Major features and improvements
- Provisional CUDA 9 support. CUDA 9 removes a compilation flag we require for CUDA 8. As a temporary workaround, you can build on CUDA 9 by setting the environment variable `CUDA9=1`. For example: `CUDA9=1 pip install thinc==6.10.0`
- Improve efficiency of `NumpyOps.scatter_add` when the indices have only a single dimension. This function was previously a bottleneck for spaCy.
- Remove redundant copies in the backpropagation of the maxout non-linearity.
- Call the floating-point versions of the `sqrt`, `exp` and `tanh` functions.
- Remove calls to `tensordot`, instead reshaping to make 2d `dot` calls.
- Improve efficiency of the Adam optimizer on CPU.
- Eliminate redundant code in `thinc.optimizers`. There's now a single `Optimizer` class. For backwards compatibility, the `SGD` and `Adam` functions are kept, creating optimizers with the vanilla SGD recipe or the Adam recipe, respectively.
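To illustrate the `scatter_add` item above, here is a minimal NumPy sketch of what a scatter-add with one-dimensional indices computes (this is an illustration of the operation, not thinc's actual implementation):

```python
import numpy as np

def scatter_add(out, indices, src):
    # Sum each row of `src` into `out` at the position given by `indices`.
    # np.add.at accumulates correctly even when an index repeats, unlike
    # plain fancy-index assignment, which would keep only the last write.
    np.add.at(out, indices, src)
    return out

out = np.zeros((3, 2))
indices = np.array([0, 2, 0])  # index 0 appears twice
src = np.array([[1., 1.], [2., 2.], [3., 3.]])
scatter_add(out, indices, src)
# out[0] accumulates both the first and third rows of src
```

The single-dimension case is cheap to specialize because each source row maps to exactly one output row, so the whole operation is one pass over `src`.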
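The `tensordot` rewrite mentioned above rests on a simple identity, sketched here in NumPy with made-up shapes: contracting the last axis of a 3d array against a matrix is the same as flattening the leading axes, doing one 2d matrix multiply, and reshaping back.

```python
import numpy as np

x = np.random.rand(4, 5, 6)  # e.g. (batch, positions, features)
w = np.random.rand(6, 7)     # weight matrix

# General tensor contraction over the shared axis of length 6...
via_tensordot = np.tensordot(x, w, axes=[[2], [0]])

# ...is equivalent to a single 2d dot after collapsing the leading axes.
via_dot = x.reshape(4 * 5, 6).dot(w).reshape(4, 5, 7)

assert np.allclose(via_tensordot, via_dot)
```

The 2d form dispatches directly to a BLAS matrix multiply, which avoids the bookkeeping overhead of the general contraction path.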
👥 Contributors
Thanks to @RaananHadar for the pull request!