
Releases: nengo/nengo-dl

Release 3.6.0

27 Jan 00:37

Compatible with Nengo 3.0 - 3.2

Compatible with TensorFlow 2.3 - 2.11

Added

  • Included tensorflow-macos in the alternative tensorflow package names checked during installation. (#228)
  • Added support for the groups parameter to ConvertConv. (#223)
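
A minimal sketch of the kind of model this enables (layer shapes are arbitrary; the groups argument requires TensorFlow >= 2.3):

```python
import tensorflow as tf
import nengo_dl

inp = tf.keras.Input(shape=(28, 28, 8))
# groups=2 splits the 8 input channels into two independent convolution groups
conv = tf.keras.layers.Conv2D(filters=16, kernel_size=3, groups=2)(inp)
model = tf.keras.Model(inputs=inp, outputs=conv)

converter = nengo_dl.Converter(model)  # grouped convolutions now convert natively
```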

Changed

  • Pinned TensorFlow version to <2.11 on Windows. As of 2.11 the TensorFlow package for Windows is maintained by a third party (Intel), and there are currently bugs in that package affecting functionality that is required by NengoDL. (#229)

Removed

  • Removed support for "graph mode" (i.e., running with tf.compat.v1.disable_eager_execution()). TensorFlow is no longer supporting this functionality, and it is increasingly buggy. Graph mode may still be faster for some models; if you need this functionality, try using a previous version of NengoDL. (#229)
  • Dropped support for TensorFlow 2.2. The minimum supported version is now 2.3.4 (earlier 2.3.x versions should work as well, but TensorFlow may install an incompatible protobuf version that the user will need to manually correct). (#228)

Release 3.5.0

18 May 15:35

Compatible with Nengo 3.0 - 3.2

Compatible with TensorFlow 2.2 - 2.9

Changed

  • Dropped support for Python 3.6 and added support for 3.9 and 3.10. (#224)

Release 3.4.4

10 Feb 23:11

Compatible with Nengo 3.0 - 3.2

Compatible with TensorFlow 2.2 - 2.8

Added

  • Added support for nengo.transforms.ConvolutionTranspose. (#183)
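
A minimal sketch of a connection using this transform (shapes are arbitrary, and the exact ConvolutionTranspose signature is assumed from Nengo 3.2):

```python
import numpy as np
import nengo
import nengo_dl

# transposed ("deconvolutional") transform upsampling a 4x4x8 input
transform = nengo.transforms.ConvolutionTranspose(
    n_filters=4,
    input_shape=(4, 4, 8),
    kernel_size=(3, 3),
    strides=(2, 2),
)

with nengo.Network() as net:
    inp = nengo.Node(np.zeros(transform.size_in))
    out = nengo.Node(size_in=transform.size_out)
    nengo.Connection(inp, out, transform=transform, synapse=None)

with nengo_dl.Simulator(net) as sim:
    sim.step()
```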

Release 3.4.3

10 Nov 13:52

Compatible with Nengo 3.0.0 - 3.1.0

Compatible with TensorFlow 2.2.0 - 2.7.0

Added

  • Added support for TensorFlow 2.7.0. (#218)

Changed

  • Increased minimum keras-spiking version to 0.3.0. (#219)

Release 3.4.2

13 Aug 00:56

Compatible with Nengo 3.0.0 - 3.1.0

Compatible with TensorFlow 2.2.0 - 2.6.0

Added

  • Added support for TensorFlow 2.6.0. (#216)

Release 3.4.1

28 May 22:44

Compatible with Nengo 3.0.0 - 3.1.0

Compatible with TensorFlow 2.2.0 - 2.5.0

Added

  • Added support for TensorFlow 2.5.0. (#212)

Fixed

  • A more informative error message will be raised if a custom neuron build function returns the wrong number of values. (#199)

Removed

  • Dropped support for Python 3.5 (which reached its end of life in September 2020). (#184)

Release 3.4.0

26 Nov 23:43

Compatible with Nengo 3.0.0 - 3.1.0

Compatible with TensorFlow 2.2.0 - 2.4.0

Added

  • Added support for KerasSpiking layers in the Converter. (#182)
  • Added support for tf.keras.layers.TimeDistributed in the Converter (see the sketch below this list). (#182)
  • Added support for TensorFlow 2.4. (#185)
  • Added support for Nengo 3.1. (#187)
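
A hedged sketch of converting a temporal model that uses both of the new Converter features above (layer sizes are arbitrary; assumes the keras-spiking package is installed):

```python
import tensorflow as tf
import keras_spiking
import nengo_dl

# input with a temporal axis: (batch, n_steps, features)
inp = tf.keras.Input(shape=(None, 16))
dense = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(8))(inp)
spikes = keras_spiking.SpikingActivation("relu")(dense)
model = tf.keras.Model(inp, spikes)

converter = nengo_dl.Converter(model)
net = converter.net  # the converted Nengo network
```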

Changed

  • Minor improvements to build speed by building constants outside of TensorFlow. (#173)
  • Support for PES implementation changes in Nengo core (see #1627 and #1640). (#181)

Fixed

  • Global default Keras dtype will now be reset correctly when an exception occurs in a Simulator method called outside of a with Simulator block. (#173)
  • Support new LinearFilter step type introduced in Nengo core (see #1629). (#173)
  • Fixed a bug when slicing multi-dimensional Signals (e.g. Ensemble encoders). (#181)
  • Fixed a bug when loading weights saved in a different Python version. (#187)

Release 3.3.0

14 Aug 20:03

Compatible with Nengo 3.0.0

Compatible with TensorFlow 2.2.0 - 2.3.0

Added

  • Added support for new Nengo core NeuronType state implementation. (#159)
  • Compatible with TensorFlow 2.3.0. (#159)
  • Added support for nengo.Tanh, nengo.RegularSpiking, nengo.StochasticSpiking, and nengo.PoissonSpiking neuron types. (#159)
  • Added nengo_dl.configure_settings(learning_phase=True/False) configuration option. This mimics the previous behaviour of tf.keras.backend.learning_phase_scope (which was deprecated by TensorFlow). That is, if you would like to override the default behaviour so that, e.g., sim.predict runs in training mode, set nengo_dl.configure_settings(learning_phase=True). (#163)
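
A minimal sketch of the learning_phase option (the network itself is just a placeholder):

```python
import nengo
import nengo_dl

with nengo.Network() as net:
    # run all Simulator methods (including predict/evaluate) in training mode
    nengo_dl.configure_settings(learning_phase=True)
    inp = nengo.Node([0.0])
    nengo.Probe(inp)

with nengo_dl.Simulator(net) as sim:
    sim.predict(n_steps=10)  # now runs with training=True
```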

Changed

  • Simulator.evaluate no longer prints any information to stdout in TensorFlow 2.2 in graph mode (due to a TensorFlow issue, see tensorflow/tensorflow#39456). Loss/metric values will still be returned from the function as normal. (#153)
  • A warning will now be raised if activation types are passed to Converter.swap_activations that aren't actually in the model. (#168)
  • Updated the TensorFlow installation instructions in the documentation. (#170)
  • NengoDL will now use TensorFlow's eager mode by default. The previous graph-mode behaviour can be restored by calling tf.compat.v1.disable_eager_execution(), but we cannot guarantee that that behaviour will be supported in the future. (#163)
  • NengoDL will now use TensorFlow's "control flow v2" by default. The previous behaviour can be restored by calling tf.compat.v1.disable_control_flow_v2(), but we cannot guarantee that that behaviour will be supported in the future. (#163)
  • NengoDL will now default to allowing TensorFlow's "soft placement" logic, meaning that even if you specify an explicit device like "/gpu:0", TensorFlow may not allocate an op to that device if there isn't a compatible implementation available. The previous behaviour can be restored by calling tf.config.set_soft_device_placement(False). (#163)
  • Internal NengoDL OpBuilder classes now separate the "pre build" stage from OpBuilder.__init__ (so that the same OpBuilder can be re-used across multiple calls, rather than instantiating a new one each time). Note that this has no impact on front-end users; it is only relevant to anyone who has implemented a custom build class. The logic that would previously have gone in OpBuilder.__init__ should now go in OpBuilder.build_pre. In addition, the ops argument has been removed from OpBuilder.build_pre; it is now passed to OpBuilder.__init__ (and is available in build_pre as self.ops). Similarly, the ops and config arguments have been removed from build_post, and can instead be accessed through self.ops and self.config. A structural sketch follows this list. (#163)
  • Minimum TensorFlow version is now 2.2.0. (#163)
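
A structural sketch of the new OpBuilder layout (the class is a hypothetical skeleton; signatures follow the description above):

```python
from nengo_dl.builder import OpBuilder

class MyOpBuilder(OpBuilder):  # registered via Builder.register, as before
    def build_pre(self, signals, config):
        super().build_pre(signals, config)
        # logic that previously lived in __init__ goes here; the grouped
        # operators are now accessed as self.ops rather than an ops argument
        ...

    def build_step(self, signals):
        # per-timestep computation (unchanged by this release)
        ...
```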

Fixed

  • Support Sparse transforms in Simulator.get_nengo_params. (#149)
  • Fixed bug in TensorGraph log message when logging was enabled. (#151)
  • Updated the KerasWrapper class in the tensorflow-models example to fix a compatibility issue in TensorFlow 2.2. (#153)
  • Handle Nodes that are not connected to anything else, but are probed (this only occurs in Nengo>=3.1.0). (#159)
  • More robust support for converting nested Keras models in TensorFlow 2.3. (#161)
  • Fix bug when probing slices of certain probeable attributes (those that are directly targeting a Signal in the model). (#164)

Removed

  • Removed nengo_dl.utils.print_op (use tf.print instead). (#163)

Release 3.2.0

02 Apr 15:40

Compatible with Nengo 3.0.0

Compatible with TensorFlow 2.0.0 - 2.2.0

Added

  • Added nengo_dl.LeakyReLU and nengo_dl.SpikingLeakyReLU neuron models. (#126)
  • Added support for leaky ReLU Keras layers to nengo_dl.Converter. (#126)
  • Added a new remove_reset_incs graph simplification step. (#129)
  • Added support for UpSampling layers to nengo_dl.Converter. (#130)
  • Added tolerance parameters to nengo_dl.Converter.verify. (#130)
  • Added scale_firing_rates option to nengo_dl.Converter. (#134)
  • Added a Converter.layers attribute which maps Keras layers/tensors to the converted Nengo objects, making it easier to access converted components (see the sketch below this list). (#134)
  • Compatible with TensorFlow 2.2.0. (#140)
  • Added a new synapse argument to the Converter, which can be used to automatically add synaptic filters on the output of neural layers during the conversion process. (#141)
  • Added a new example demonstrating how to use the NengoDL Converter to convert a Keras model to a spiking Nengo network. (#141)
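
A hedged sketch combining several of the new Converter options above (the model and parameter values are arbitrary):

```python
import tensorflow as tf
import nengo
import nengo_dl

inp = tf.keras.Input(shape=(784,))
dense = tf.keras.layers.Dense(128, activation="relu")(inp)
out = tf.keras.layers.Dense(10)(dense)
model = tf.keras.Model(inp, out)

converter = nengo_dl.Converter(
    model,
    swap_activations={tf.nn.relu: nengo.SpikingRectifiedLinear()},
    scale_firing_rates=100,  # trade off simulation accuracy vs. firing rates
    synapse=0.01,  # low-pass filter on the output of neural layers
)

# Converter.layers maps Keras layers/tensors to the converted Nengo objects
converted_dense = converter.layers[dense]
```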

Changed

  • Re-enabled the remove_constant_copies graph simplification by default. (#129)
  • Reduced the amount of state that needs to be stored in the simulation. (#129)
  • Added more information to the error message when loading saved parameters that don't match the current model. (#129)
  • More efficient implementation of convolutional biases in the Converter. (#130)
  • Saved simulator state will no longer be included in Simulator.keras_model.weights. This means that Simulator.keras_model.save/load_weights will not include the saved simulator state, making it easier to reuse weights between models (the models need to have the same weights, but not the same state variables). Simulator.save/load_params(..., include_state=True) can be used to explicitly save the simulator state, if desired (see the sketch below this list). (#140)
  • Model parameters (e.g., connection weights) that are not trainable (because they have been marked non-trainable by the user or are targeted by an online learning rule) will now be treated separately from simulator state. For example, Simulator.save_params(..., include_state=False) will still include those parameters, and the results of any online learning will persist between calls even with stateful=False. (#140)
  • Added include_probes, include_trainable, and include_processes arguments to Simulator.reset to provide more fine-grained control over Simulator resetting. This replicates the previous functionality in Simulator.soft_reset. (#139)
  • More informative error messages when accessing invalid Simulator functionality after the Simulator has been closed. (#139)
  • A warning is now raised when the number of input data items passed to the simulator does not match the number of input nodes, to help avoid unintentionally passing data to the wrong input node. This warning can be avoided by passing data for all nodes, or using the dictionary input style if you want to only pass data for a specific node. (#139)
  • Dictionaries returned by sim.predict/evaluate will now be ordered. (#141)
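
A minimal sketch of the new state/parameter separation (the network is a placeholder):

```python
import nengo
import nengo_dl

with nengo.Network() as net:
    inp = nengo.Node([0.5])
    ens = nengo.Ensemble(10, 1)
    nengo.Connection(inp, ens)
    nengo.Probe(ens)

with nengo_dl.Simulator(net) as sim:
    sim.run_steps(5)
    # saves trainable and non-trainable parameters, but not simulator state
    sim.save_params("./my-params", include_state=False)

with nengo_dl.Simulator(net) as sim2:
    # reload into a fresh simulator built from the same network
    sim2.load_params("./my-params", include_state=False)
```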

Fixed

  • Fixed bug in error message when passing data with batch size less than Simulator minibatch size. (#139)
  • More informative error message when validation_split does not result in batch sizes evenly divisible by minibatch size. (#139)
  • Added tensorflow-cpu distributions to installation checks (so Nengo DL will not attempt to reinstall TensorFlow if tensorflow-cpu is already installed). (#142)
  • Fixed bug when applying the Converter to Keras models that re-use intermediate layers as output layers. (#137)
  • Fixed bug in conversion of Keras Dense layers with non-native activation functions. (#144)

Deprecated

  • Renamed Simulator.save/load_params include_non_trainable parameter to include_state. (#140)
  • Simulator.soft_reset has been deprecated. Use Simulator.reset(include_probes=False, include_trainable=False, include_processes=False) instead. (#139)

Release 3.1.0

05 Mar 13:56

Compatible with Nengo 3.0.0

Compatible with TensorFlow 2.0.0 - 2.1.0

Added

  • Added inference_only=True option to the Converter, which will allow some Layers/parameters that cannot be fully converted to native Nengo objects to be converted in a way that only matches the inference behaviour of the source Keras model (not the training behaviour). (#119)
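
A minimal sketch of the inference_only option (the model is a placeholder):

```python
import tensorflow as tf
import nengo_dl

model = tf.keras.Sequential(
    [
        tf.keras.Input(shape=(32,)),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.BatchNormalization(),
    ]
)

# freeze layers like BatchNormalization that cannot be fully converted,
# matching the inference (but not training) behaviour of the Keras model
converter = nengo_dl.Converter(model, inference_only=True)
```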

Changed

  • Improved build time of networks containing lots of TensorNodes. (#119)
  • Improved memory usage of build process. (#119)
  • Saved simulation state may now be placed on GPU (this should improve the speed of state updates, but may slightly increase GPU memory usage). (#119)
  • Changed the Converter freeze_batchnorm=True option to inference_only=True (the effect on BatchNormalization layers is the same, but the new parameter has broader effects). (#119)
  • The precision of the Nengo core build process will now be set based on the nengo_dl.configure_settings(dtype=...) config option (see the sketch below this list). Note that this will override the default precision set in nengo.rc. (#119)
  • Minimum Numpy version is now 1.16.0 (required by TensorFlow). (#119)
  • Added support for the new transform=None default in Nengo connections (see Nengo#1591). Note that this may change the number of trainable parameters in a network as the scalar default transform=1 weights on non-Ensemble connections will no longer be present. (#128)
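
A minimal sketch of the dtype config option (assuming the string form of the dtype argument):

```python
import nengo
import nengo_dl

with nengo.Network() as net:
    # build and simulate this network in double precision; this overrides
    # the default precision set in nengo.rc for the core build process
    nengo_dl.configure_settings(dtype="float64")
```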

Fixed

  • Provide a more informative error message if Layer shape_in/shape_out contains undefined (None) elements. (#119)
  • Fixed bug in Converter when source model contains duplicate nodes. (#119)
  • Fixed bug in Converter for Concatenate layers with axis != 1. (#119)
  • Fixed bug in Converter for models containing passthrough Input layers inside submodels. (#119)
  • Keras Layers inside TensorNodes will be called with the training argument set correctly (previously it was always set to the default value). (#119)
  • Fixed compatibility with progressbar2 version 3.50.0. (#136)