Added
- Added progress bar support for Jupyter Lab >=0.32. (#1428, #1087)
- We now warn that the progress bar is not supported in Jupyter Notebook <5. (#1428, #1426)
Changed
- Replaced the `dt` argument to `Simulator.trange` with `sample_every` because `dt` would return values that the simulator had not simulated. `dt` is now an alias for `sample_every` and will be removed in the future. (#1368, #1384)
Added
- Added a warning when setting `gain` and `bias` along with either of `max_rates` or `intercepts`, as the latter two parameters are ignored. (#1431, #1433)
Changed
- Learning rules can now be sliced when providing error input. (#1365, #1385)
- The order of parameters in learning rules has changed such that `learning_rate` always comes first. (#1095)
- Learning rules take `pre_synapse`, `post_synapse`, and `theta_synapse` instead of `pre_tau`, `post_tau`, and `theta_tau` respectively. This allows arbitrary `Synapse` objects to be used as filters on learning signals. (#1095)
Deprecated
- The `nengo.ipynb` IPython extension and the `IPython2ProgressBar` have been deprecated and replaced by the `IPython5ProgressBar`. This progress bar will be automatically activated in IPython and Jupyter notebooks from IPython version 5.0 onwards. (#1087, #1375)
- The `pre_tau`, `post_tau`, and `theta_tau` parameters for learning rules are deprecated. Instead, use `pre_synapse`, `post_synapse`, and `theta_synapse` respectively. (#1095)
Removed
Added
- Added `amplitude` parameter to `LIF`, `LIFRate`, and `RectifiedLinear`, which scales the output amplitude. (#1325, #1391)
- Added the `SpikingRectifiedLinear` neuron model. (#1391)
Changed
- Default values can no longer be set for `Ensemble.n_neurons` or `Ensemble.dimensions`. (#1372)
- If the simulator seed is not specified, it will now be set from the network seed if a network seed is specified. (#980, #1386)
Fixed
- Fixed an issue in which signals could not be pickled, making it impossible to pickle `Model` instances. (#1135)
- Better error message for invalid return values in `nengo.Node` functions. (#1317)
- Fixed an issue in which accepting and passing `(*args, **kwargs)` could not be used in custom solvers. (#1358, #1359)
- Fixed an issue in which the cache would not release its index lock on abnormal termination of the Nengo process. (#1364)
- Fixed validation checks that prevented the default from being set on certain parameters. (#1372)
- Fixed an issue with repeated elements in slices in which a positive and negative index referred to the same dimension. (#1395)
- The `Simulator.n_steps` and `Simulator.time` properties now return scalars, as was stated in the documentation. (#1406)
- Fixed the `--seed-offset` option of the test suite. (#1409)
Added
- Added a `NoSolver` solver that can be used to manually pass in a predefined set of decoders or weights to a connection. (#1352)
- Added a `Piecewise` process, which replaces the now deprecated `piecewise` function. (#1036, #1100, #1355, #1362)
Changed
- The minimum required version of NumPy has been raised to 1.8. (#947)
- Learning rules can now have a learning rate of 0. (#1356)
- Running the simulator for zero timesteps will now issue a warning, and running for negative time will error. (#1354, #1357)
Fixed
- Fixed an issue in which the PES learning rule could not be used on connections to an `ObjView` when using a weight solver. (#1317)
- The progress bar that can appear when building a large model will now appear earlier in the build process. (#1340)
- Fixed an issue in which `ShapeParam` would always store `None`. (#1342)
- Fixed an issue in which multiple identical indices in a slice were ignored. (#947, #1361)
Deprecated
- The `piecewise` function in `nengo.utils.functions` has been deprecated. Please use the `Piecewise` process instead. (#1100)
Added
- Added an `n_neurons` property to `Network`, which gives the number of neurons in the network, including all subnetworks. (#435, #1186)
- Added a new example showing how adjusting ensemble tuning curves can improve function approximation. (#1129)
- Added a minimum magnitude option to `UniformHypersphere`. (#799)
- Added documentation on RC settings. (#1130)
- Added documentation on improving performance. (#1119, #1130)
- Added `LinearFilter.combine` method to combine two `LinearFilter` instances. (#1312)
- Added a method to all neuron types to compute ensemble `max_rates` and `intercepts` given `gain` and `bias`. (#1334)
Changed
- Learning rules now have a `size_in` parameter and attribute, allowing both integers and strings to define the dimensionality of the learning rule. This replaces the `error_type` attribute. (#1307, #1310)
- `EnsembleArray.n_neurons` now gives the total number of neurons in all ensembles, including those in subnetworks. To get the number of neurons in each ensemble, use `EnsembleArray.n_neurons_per_ensemble`. (#1186)
- The Nengo modelling API document now has summaries to help navigate the page. (#1304)
- The error raised when a `Connection` function returns `None` is now more clear. (#1319)
- We now raise an error when a `Connection` transform is set to `None`. (#1326)
Fixed
- Probe cache is now cleared on simulator reset. (#1324)
- Neural gains are now always applied after the synapse model. Previously, this was the case for decoded connections but not neuron-to-neuron connections. (#1330)
- Fixed a crash when a lock cannot be acquired while shrinking the cache. (#1335, #1336)
Added
- Added an optimizer that reduces simulation time for common types of models. The optimizer can be turned off by passing `optimize=False` to `Simulator`. (#1035)
- Added the option to not normalize encoders by setting `Ensemble.normalize_encoders` to `False`. (#1191, #1267)
- Added the `Samples` distribution to allow raw NumPy arrays to be passed in situations where a distribution is required. (#1233)
Changed
- We now raise an error when an ensemble is assigned a negative gain. This can occur when solving for gains with intercepts greater than 1. (#1212, #1231, #1248)
- We now raise an error when a `Node` or `Direct` ensemble produces a non-finite value. (#1178, #1280, #1286)
- We now enforce that the `label` of a network must be a string or `None`, and that the `seed` of a network must be an int or `None`. This helps avoid situations where the seed would mistakenly be passed as the label. (#1277, #1275)
- It is now possible to pass NumPy arrays in the `ens_kwargs` argument of `EnsembleArray`. Arrays are wrapped in a `Samples` distribution internally. (#691, #766, #1233)
- The default refractory period (`tau_ref`) for the `Sigmoid` neuron type has changed to 2.5 ms (from 2 ms) for better compatibility with the default maximum firing rates of 200-400 Hz. (#1248)
- Inputs to the `Product` and `CircularConvolution` networks have been renamed from `A` and `B` to `input_a` and `input_b` for consistency. The old names are still available, but should be considered deprecated. (#887, #1296)
Fixed
Deprecated
- The `net` argument to networks has been deprecated. This argument existed so that network components could be added to an existing network instead of constructing a new network. However, this feature is rarely used, and makes the code more complicated for complex networks. (#1296)
Added
- Added documentation on config system quirks. (#1224)
- Added `nengo.utils.network.activate_direct_mode` function to make it easier to activate direct mode in networks where some parts require neurons. (#1111, #1168)
Fixed
- The matrix multiplication example will now work with matrices of any size and uses the product network for clarity. (#1159)
- Fixed instances in which passing a callable class as a function could fail. (#1245)
- Fixed an issue in which probing some attributes would be one timestep faster than other attributes. (#1234, #1245)
- Fixed an issue in which SPA models could not be copied. (#1266, #1271)
- Fixed an issue in which Nengo would crash if other programs had locks on Nengo cache files in Windows. (#1200, #1235)
Changed
- Integer indexing of Nengo objects out of range now raises an `IndexError`, to be consistent with standard Python behaviour. (#1176, #1183)
- Documentation that applies to all Nengo projects has been moved to https://www.nengo.ai/. (#1251)
Added
- It is now possible to probe `scaled_encoders` on ensembles. (#1167, #1117)
- Added `copy` method to Nengo objects. Nengo objects can now be pickled. (#977, #984)
- A progress bar now tracks the build process in the terminal and Jupyter notebook. (#937, #1151)
- Added `nengo.dists.get_samples` function for convenience when working with distributions or samples. (#1181, docs)
Changed
- Access to probe data via `nengo.Simulator.data` is now cached, making repeated access much faster. (#1076, #1175)
Deprecated
- Access to `nengo.Simulator.model` is deprecated. To access static data generated during the build, use `nengo.Simulator.data`. It provides access to everything that `nengo.Simulator.model.params` used to provide access to and is the canonical way to access this data across different backends. (#1145, #1173)
API changes
- It is now possible to pass a NumPy array to the `function` argument of `nengo.Connection`. The values in the array are taken to be the targets in the decoder solving process, which means that `eval_points` must also be set on the connection. (#1010)
- `nengo.utils.connection.target_function` is now deprecated, and will be removed in Nengo 3.0. Instead, pass the targets directly to the connection through the `function` argument. (#1010)
Behavioural changes
- Dropped support for NumPy 1.6. Oldest supported NumPy version is now 1.7. (#1147)
Improvements
- Added a `nengo.backends` entry point to make the reference simulator discoverable for other Python packages. In the future all backends should declare an entry point accordingly. (#1127)
- Added `ShapeParam` to store array shapes. (#1045)
- Added `ThresholdingPreset` to configure ensembles for thresholding. (#1058, #1077, #1148)
- Tweaked `rasterplot` so that spikes from different neurons don't overlap. (#1121)
Documentation
- Added a page explaining the config system and preset configs. (#1150)
Bug fixes
- Fixed some situations where the cache index could become corrupted, by writing the updated cache index atomically (in most cases). (#1097, #1107)
- The synapse methods `filt` and `filtfilt` now support lists as input. (#1123)
- Added a registry system so that only stable objects are cached. (#1054, #1068)
- Nodes now support array views as input. (#1156, #1157)
Bug fixes
- The DecoderCache is now more robust when used improperly, and no longer requires changes to backends in order to use properly. (#1112)
Improvements
- Improved the default `LIF` neuron model to spike at the same rate as the `LIFRate` neuron model for constant inputs. The older model has been moved to nengo_extras under the name `FastLIF`. (#975)
- Added `y0` attribute to `WhiteSignal`, which adjusts the phase of each dimension to begin with absolute value closest to `y0`. (#1064)
- Allow the `AssociativeMemory` to accept Semantic Pointer expressions as `input_keys` and `output_keys`. (#982)
Bug fixes
- The DecoderCache is now used as a context manager instead of relying on the `__del__` method for cleanup. This should solve problems with the cache's file lock not being removed. It might be necessary to manually remove the `index.lock` file in the cache directory after upgrading from an older Nengo version. (#1053, #1041, #1048)
- If the cache index is corrupted, we now fail gracefully by invalidating the cache and continuing rather than raising an exception. (#1110, #1097)
- The `Nnls` solver now works for weights. The `NnlsL2` solver is improved since we clip values to be non-negative before forming the Gram system. (#1027, #1019)
- Eliminated a memory leak in the parameter system. (#1089, #1090)
- Allow recurrence of the form `a=b, b=a` in basal ganglia SPA actions. (#1098, #1099)
- Support a greater range of Jupyter notebook and ipywidgets versions with the `ipynb` extensions. (#1088, #1085)
API changes
- A new class for representing stateful functions called `Process` has been added. `Node` objects are now process-aware, meaning that a process can be used as a node's `output`. Unlike non-process callables, processes are properly reset when a simulator is reset. See the `processes.ipynb` example notebook, or the API documentation for more details. (#590, #652, #945, #955)
- Spiking `LIF` neuron models now accept an additional argument, `min_voltage`. Voltages are clipped such that they do not drop below this value (previously, this was fixed at 0). (#666)
- The `PES` learning rule no longer accepts a connection as an argument. Instead, error information is transmitted by making a connection to the learning rule object (e.g., `nengo.Connection(error_ensemble, connection.learning_rule)`). (#344, #642)
- The `modulatory` attribute has been removed from `nengo.Connection`. This was only used for learning rules to this point, and has been removed in favor of connecting directly to the learning rule. (#642)
- Connection weights can now be probed with `nengo.Probe(conn, 'weights')`, and these are always the weights that will change with learning regardless of the type of connection. Previously, either `decoders` or `transform` may have changed depending on the type of connection; it is now no longer possible to probe `decoders` or `transform`. (#729)
- A version of the AssociativeMemory SPA module is now available as a stand-alone network in `nengo.networks`. The AssociativeMemory SPA module also has an updated argument list. (#702)
- The `Product` and `InputGatedMemory` networks no longer accept a `config` argument. (#814)
- The `EnsembleArray` network's `neuron_nodes` argument is deprecated. Instead, call the new `add_neuron_input` or `add_neuron_output` methods. (#868)
- The `nengo.log` utility function now takes a string `level` parameter to specify any logging level, instead of the old binary `debug` parameter. Cache messages are logged at DEBUG instead of INFO level. (#883)
- Reorganised the Associative Memory code, including removing many extra parameters from `nengo.networks.assoc_mem.AssociativeMemory` and modifying the defaults of others. (#797)
- Added a `close` method to `Simulator`. `Simulator` can now be used as a context manager. (#857, #739, #859)
- Most exceptions that Nengo can raise are now custom exception classes that can be found in the `nengo.exceptions` module. (#781)
- All Nengo objects (`Connection`, `Ensemble`, `Node`, and `Probe`) now accept a `label` and `seed` argument if they didn't previously. (#958)
- In `nengo.synapses`, `filt` and `filtfilt` are deprecated. Every synapse type now has `filt` and `filtfilt` methods that filter using the synapse. (#945)
- `Connection` objects can now accept a `Distribution` for the transform argument; the transform matrix will be sampled from that distribution when the model is built. (#979)
Behavioural changes
- The sign on the `PES` learning rule's error has been flipped to conform with most learning rules, in which error is minimized. The error should be `actual - target`. (#642)
- The `PES` rule's learning rate is invariant to the number of neurons in the presynaptic population. The effective speed of learning should now be unaffected by changes in the size of the presynaptic population. Existing learning networks may need to be updated; to achieve identical behavior, scale the learning rate by `pre.n_neurons / 100`. (#643)
- The `probeable` attribute of all Nengo objects is now implemented as a property, rather than a configurable parameter. (#671)
- Node functions receive `x` as a copied NumPy array (instead of a readonly view). (#716, #722)
- The SPA Compare module produces a scalar output (instead of a specific vector). (#775, #782)
- Bias nodes in `spa.Cortical`, and gate ensembles and connections in `spa.Thalamus`, are now stored in the target modules. (#894, #906)
- The `filt` and `filtfilt` functions on `Synapse` now use the initial value of the input signal to initialize the filter output by default. This provides more accurate filtering at the beginning of the signal, for signals that do not start at zero. (#945)
Improvements
- Added `Ensemble.noise` attribute, which injects noise directly into neurons according to a stochastic `Process`. (#590)
- Added a `randomized_svd` subsolver for the L2 solvers. This can be much quicker for large numbers of neurons or evaluation points. (#803)
- Added `PES.pre_tau` attribute, which sets the time constant on a lowpass filter of the presynaptic activity. (#643)
- `EnsembleArray.add_output` now accepts a list of functions to be computed by each ensemble. (#562, #580)
- `LinearFilter` now has an `analog` argument which can be set through its constructor. Linear filters with digital coefficients can be specified by setting `analog` to `False`. (#819)
- Added `SqrtBeta` distribution, which describes the distribution of semantic pointer elements. (#414, #430)
- Added `Triangle` synapse, which filters with a triangular FIR filter. (#660)
- Added `utils.connection.eval_point_decoding` function, which provides a connection's static decoding of a list of evaluation points. (#700)
- Resetting the Simulator now resets all Processes, meaning the injected random signals and noise are identical between runs, unless the seed is changed (which can be done through `Simulator.reset`). (#582, #616, #652)
- An exception is raised if SPA modules are not properly assigned to an SPA attribute. (#730, #791)
- The `Product` network is now more accurate. (#651)
- NumPy arrays can now be used as indices for slicing objects. (#754)
- `Config.configures` now accepts multiple classes rather than just one. (#842)
- Added `add` method to `spa.Actions`, which allows actions to be added after the module has been initialized. (#861, #862)
- Added SPA wrapper for circular convolution networks, `spa.Bind`. (#849)
- Added the `Voja` (Vector Oja) learning rule type, which updates an ensemble's encoders to fire selectively for its inputs (see `examples/learning/learn_associations.ipynb`). (#727)
- Added a clipped exponential distribution useful for thresholding, in particular in the AssociativeMemory. (#779)
- Added a cosine similarity distribution, which is the distribution of the cosine of the angle between two random vectors. It is useful for setting intercepts, in particular when using the `Voja` learning rule. (#768)
- `nengo.synapses.LinearFilter` now has an `evaluate` method to evaluate the filter response to sine waves of given frequencies. This can be used to create Bode plots, for example. (#945)
- `nengo.spa.Vocabulary` objects now have a `readonly` attribute that can be used to disallow adding new semantic pointers. Vocabulary subsets are read-only by default. (#699)
- Improved performance of the decoder cache by writing all decoders of a network into a single file. (#946)
Bug fixes
- Fixed issue where setting `Connection.seed` through the constructor had no effect. (#724)
- Fixed issue in which learning connections could not be sliced. (#632)
- Fixed issue when probing scalar transforms. (#667, #671)
- Fix for SPA actions that route to a module with multiple inputs. (#714)
- Corrected the `rmses` values in `BuiltConnection.solver_info` when using `NNls` and `Nnl2sL2` solvers, and the `reg` argument for `Nnl2sL2`. (#839)
- `spa.Vocabulary.create_pointer` now respects the specified number of creation attempts, and returns the most dissimilar pointer if none can be found below the similarity threshold. (#817)
- Probing a Connection's output now returns the output of that individual Connection, rather than the input to the Connection's post Ensemble. (#973, #974)
- Fixed thread-safety of using networks and config in `with` statements. (#989)
- The decoder cache will only be used when a seed is specified. (#946)
Bug fixes
- Cache now fails gracefully if the `legacy.txt` file cannot be read. This can occur if a later version of Nengo is used.
API changes
- The `spa.State` object replaces the old `spa.Memory` and `spa.Buffer`. These old modules are deprecated and will be removed in 2.2. (#796)
2.0.2 is a bug fix release to ensure that Nengo continues to work with more recent versions of Jupyter (formerly known as the IPython notebook).
Behavioural changes
- The IPython notebook progress bar has to be activated with `%load_ext nengo.ipynb`. (#693)
Improvements
- Added `[progress]` section to `nengorc`, which allows setting `progress_bar` and `updater`. (#693)
Bug fixes
- Fix compatibility issues with newer versions of IPython and Jupyter. (#693)
Behavioural changes
- Node functions receive `t` as a float (instead of a NumPy scalar) and `x` as a readonly NumPy array (instead of a writeable array). (#626, #628)
Improvements
- `rasterplot` works with 0 neurons, and generates much smaller PDFs. (#601)
Bug fixes
- Fix compatibility with NumPy 1.6. (#627)
Initial release of Nengo 2.0! Supports Python 2.6+ and 3.3+. Thanks to all of the contributors for making this possible!