Releases: norse/norse

v1.1 Support for NIR and torch.compile

18 Mar 22:27

This release features a lot of quality-of-life improvements. Most notably, we started testing for torch.compile support, which gives us a significant speedup. The improvements gained by moving to torch.compile meant that we could safely remove the C++ code, so that Norse is now a Python-only package. That means installation should be significantly faster.
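
As a minimal sketch of what this enables (module names follow Norse's public API; actual speedups depend on your model and hardware), a Norse module can be compiled like any other torch.nn.Module:

import torch
import norse.torch as norse

model = norse.LIFCell()
compiled = torch.compile(model)  # compile the spiking cell like any torch module
spikes, state = compiled(torch.randn(8, 10))  # returns output spikes and neuron state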

We also added initialization methods for spatial and temporal receptive fields, added support for NIR, cleaned up the docs, restructured the imports, removed unnecessary (and slow) try-except clauses, and cleaned up dependencies.
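
As a rough sketch of what the NIR support enables (the to_nir helper and its import path are assumptions here, not confirmed API; please check the Norse documentation for the canonical export function):

import torch
import norse.torch as norse
# Hypothetical import path for the NIR export helper; verify against the docs
from norse.torch.utils.export_nir import to_nir

model = norse.SequentialState(torch.nn.Linear(10, 2), norse.LIFBoxCell())
nir_graph = to_nir(model, torch.randn(1, 10))  # sample input to trace shapes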

New state tuples

We also added tentative support for a new StateTuple implementation based on PyTorch's pytrees, which makes it easier to operate on parameters. This allows us to cast parameters to devices or other data types:

from norse.torch import LIFParameters

p = LIFParameters()  # Create a parameter tuple (default values used for brevity)
p = p.to("cuda:0")   # Cast the parameters to a device
p = p.float()        # Cast the parameters to floats

Note that this is currently only implemented for LIFParameters and LIFBoxParameters. Let us know how it works!

Full Changelog: v1.0.0...v1.1.0

v1.0.0 - First Stable Release

20 Jan 00:04

This is the first stable release of Norse. We feel that after almost 4 years of development it is time to take this step. The API has stabilised somewhat, and while we anticipate some changes in the future, we will try to make them in ways that are easy for users to accommodate. Since the last release we mostly focussed on bugs and worked on performance. We also got some nice additions:

  • Feature spikes-to-times decoder (#321): A differentiable way of decoding spikes to spike times (see the sketch after this list)
  • Feature no delay (#326): We changed the integration order of most models to remove a one-step delay between input and output spikes
  • Feature AdEx refractory (#291): An adaptive exponential integrate-and-fire neuron with refractory state
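
Purely as an illustration of the idea behind the spikes-to-times decoder (this is not Norse's exact API, and it ignores the custom gradient needed to make the operation differentiable in practice), first-spike times can be read out like this:

import torch

def spikes_to_times(spikes: torch.Tensor) -> torch.Tensor:
    # spikes: binary tensor of shape (time, batch, neurons)
    time_steps = spikes.shape[0]
    times = torch.arange(time_steps, dtype=spikes.dtype).view(-1, 1, 1)
    # Mask non-spiking entries with the maximum time so that min() picks the
    # first spike; neurons that never spike decode to time_steps
    masked = torch.where(spikes > 0, times, torch.full_like(spikes, time_steps))
    return masked.min(dim=0).values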

We also worked on improvements to our documentation, continuous integration, and build tooling.

v0.0.7 Neuron models and stability improvements

06 Oct 09:48
7d04609

This release includes prototypical sparse and adjoint equations, neuron models, utilities, and various stability fixes.

Specifically, we included

  • Izhikevich and IAF neuron models (thanks to @adelpierre)
  • Discrete adjoint sensitivity and sparse equations for LIF dynamics
  • Efficient convolutions in time (LConv2d)
  • Triangular surrogate gradients (thanks to @Huizerd)
  • Preliminary plotting primitives
  • Usage improvements for tensor datatypes (e.g. from Tonic) and parameter settings for JIT-optimized code
  • Increased code coverage
  • Documentation improvements
  • Stability improvements around C++ code, sequential state models, and tasks
  • Numerous bug fixes, e.g. around recurrent autapses
  • Nix support

Sparse and adjoint code

20 Sep 21:41
ac33bfc
Pre-release
Fixes related to release/publishing (#259)

* Fixed conda, docker, and pypi workflow
* Updated docker publishing
* Updated installation docs

Sparse and adjoint code

20 Sep 15:49
9724b33
Pre-release

RC3 for the sparse and adjoint code. Aims to resolve builds for Windows and Linux.

Sparse and adjoint code

23 Aug 20:37
7242806
Pre-release

RC2 for the sparse and adjoint code. Aims to resolve builds for Windows and Linux.

Sparse and adjoint code

21 Aug 12:07
0a47127
Pre-release

This release candidate drafts code for sparse activations and adjoint-based optimizations as described in https://arxiv.org/abs/2009.08378

Streamlined Module API

31 Jan 22:13
69b85fe

This release features our shiny new module API, which unifies all spiking neuron modules
under one common base class, thereby eliminating redundant code.

From a user perspective, it also means that the API is now consistent across all neuron
types.
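
For example (a small sketch assuming the LIF and LSNN cells exported from norse.torch), every neuron module is called the same way and returns its output spikes together with a state object:

import torch
import norse.torch as norse

# The call convention is identical for every neuron type
for cell in (norse.LIFCell(), norse.LSNNCell()):
    spikes, state = cell(torch.randn(8, 10))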

Optimizations, deep-learning models, and plasticity

21 Dec 13:42
3756b42

This release brings numerous improvements in terms of speed, usability, specializations, documentation, and more. In general, we tried to make Norse more user-friendly and applicable for both die-hard deep-learning experts and neuroscience enthusiasts new to Python. Specifically, this release includes:

  • Compatibility with the PyTorch Lightning library, which means that Norse now scales to multiple GPUs and even supercomputing clusters with SLURM. As an example, see our MNIST task.
  • The SequentialState module, which works similarly to PyTorch's Sequential layers in that it allows for seamless composition of PyTorch and Norse modules. Together with the Lift module, this is an important step towards powerful and simple tools for developing spiking neural networks (see the sketch after this list).
  • As Norse becomes faster to work with, it is also easier to implement more complex models. Norse now features spiking convolutions, MobileNet and VGG networks which can be used out-of-the box. See the norse.torch.models package for more information.
  • Improved performance. We implemented the LIF neuron equations and the SuperSpike synthetic gradient in C++. All in all, Norse is roughly twice as fast as it was before.
  • Improved documentation. The main pages and the introductory pages were edited and cleaned up. This is an area we will be improving much more in the future.
  • Various bugfixes. Norse is now more stable and usable than before.
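
As a small sketch of the composition this enables (following the patterns in the Norse documentation; exact module names may have evolved since this release):

import torch
import norse.torch as norse

# Lift applies a stateless PyTorch layer to every time step, while
# SequentialState threads the neuron state through the spiking layers
model = norse.SequentialState(
    norse.Lift(torch.nn.Linear(10, 20)),
    norse.LIF(),
)
data = torch.randn(100, 8, 10)  # (time, batch, features)
output, state = model(data)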

As always, we welcome feedback and are looking forward to hearing how you are using Norse! Happy hacking 🥳

STDP, neuron models, and PyTorch compatibility

10 Sep 17:11
a694d75

This release contains a number of functionality and model additions, as well as improved PyTorch compatibility through the Lift module. Most notably, we:

  • Added spike-timing-dependent plasticity (STDP)
  • Added regularization for spiking cells/layers
  • Added a Lift layer that lets regular PyTorch layers work with temporal data
  • Improved usability by cleaning up neuron model parameters, inferring the initial neuron state, and inferring the device parameter (see the sketch below)
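
As a small sketch of the inferred initial state (module names as in current Norse; the API at the time of this release may have differed slightly):

import torch
from norse.torch import LIFCell

cell = LIFCell()
x = torch.randn(8, 10)
spikes, state = cell(x)         # the initial state is inferred automatically
spikes, state = cell(x, state)  # later steps thread the state explicitly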