Commit: Netx (#30)
* Updated readme and tutorial links.

Signed-off-by: bamsumit <bam_sumit@hotmail.com>

* PilotNet verified in Netx

* PilotNet verified in NetX. Minor linting fixes.

Signed-off-by: bamsumit <bam_sumit@hotmail.com>

* Changes to accompany the io read and reset API changes.

Signed-off-by: bamsumit <bam_sumit@hotmail.com>

* PilotNet SDNN tested

Signed-off-by: bamsumit <bam_sumit@hotmail.com>

* PilotNet SDNN tutorial

Signed-off-by: bamsumit <bam_sumit@hotmail.com>

* PilotNet SDNN notebook

Signed-off-by: bamsumit <bam_sumit@hotmail.com>

* Removed dataset gt logging

Signed-off-by: bamsumit <bam_sumit@hotmail.com>

* Fixed SDNN notebook plot label

Signed-off-by: bamsumit <bam_sumit@hotmail.com>

* Integrated virtual port API to switch from conv to dense layer

Signed-off-by: bamsumit <bam_sumit@hotmail.com>

* Dense graded input support added. Cleaned up SDNN notebook

Signed-off-by: bamsumit <bam_sumit@hotmail.com>

* Fixed RefPort inconsistency in SNN notebook

Signed-off-by: bamsumit <bam_sumit@hotmail.com>

* Linting fixes. Updated build script

Signed-off-by: bamsumit <bam_sumit@hotmail.com>

* Changed dates in copyright header

Signed-off-by: bamsumit <bam_sumit@hotmail.com>

* Readme updates and review comment fixes

Signed-off-by: bamsumit <bam_sumit@hotmail.com>

* Fixed linting issue

Signed-off-by: bamsumit <bam_sumit@hotmail.com>

* Cleaned up snn notebook. Fixed issue with lambda on Windows.

Signed-off-by: bamsumit <bam_sumit@hotmail.com>

* Fixed test_hdf5 tests to reflect in_layer transform changes

Signed-off-by: bamsumit <bam_sumit@hotmail.com>

* Final tidbits

Signed-off-by: bamsumit <bam_sumit@hotmail.com>

Co-authored-by: Marcus G K Williams <marcus.williams@intel.com>
bamsumit and mgkwill committed Mar 1, 2022
1 parent 839a875 commit 56099b9
Showing 136 changed files with 7,550 additions and 102 deletions.
32 changes: 19 additions & 13 deletions README.md
@@ -16,9 +16,7 @@ The library presently consists of

1. `lava.lib.dl.slayer` for natively training Deep Event-Based Networks.
2. `lava.lib.dl.bootstrap` for training rate coded SNNs.
-
-Coming soon to the library
-1. `lava.lib.dl.netx` for training and deployment of event-based deep neural networks on traditional as well as neuromorphic backends.
+3. `lava.lib.dl.netx` for training and deployment of event-based deep neural networks on traditional as well as neuromorphic backends.

More tools will be added in the future.
@@ -112,15 +110,20 @@ $ pip install lava-nc-0.1.0.tar.gz

## Getting Started

-**End to end tutorials**
+**End to end training tutorials**
* [Oxford spike train regression](https://github.com/lava-nc/lava-dl/blob/main/tutorials/lava/lib/dl/slayer/oxford/train.ipynb)
* [MNIST digit classification](https://github.com/lava-nc/lava-dl/blob/main/tutorials/lava/lib/dl/bootstrap/mnist/train.ipynb)
* [NMNIST digit classification](https://github.com/lava-nc/lava-dl/blob/main/tutorials/lava/lib/dl/slayer/nmnist/train.ipynb)
* [PilotNet steering angle prediction](https://github.com/lava-nc/lava-dl/blob/main/tutorials/lava/lib/dl/slayer/pilotnet/train.ipynb)

-**Deep dive tutorials**
+**Deep dive training tutorials**
* [Dynamics and Neurons](https://github.com/lava-nc/lava-dl/blob/main/tutorials/lava/lib/dl/slayer/neuron_dynamics/dynamics.ipynb)

+**Inference tutorials**
+* [Oxford Inference](https://github.com/lava-nc/lava-dl/blob/main/tutorials/lava/lib/dl/netx/oxford/run.ipynb)
+* [PilotNet SNN Inference](https://github.com/lava-nc/lava-dl/blob/main/tutorials/lava/lib/dl/netx/pilotnet_snn/run.ipynb)
+* [PilotNet SDNN Inference](https://github.com/lava-nc/lava-dl/blob/main/tutorials/lava/lib/dl/netx/pilotnet_sdnn/run.ipynb)
+
## __`lava.lib.dl.slayer`__

`lava.lib.dl.slayer` is an enhanced version of [SLAYER](https://github.com/bamsumit/slayerPytorch). Most noteworthy enhancements are: support for _recurrent network structures_, a wider variety of _neuron models_ and _synaptic connections_ (a complete list of features is [here](https://github.com/lava-nc/lava-dl/blob/main/src/lava/lib/dl/slayer/README.md)). This version of SLAYER is built on top of the [PyTorch](https://pytorch.org/) deep learning framework, similar to its predecessor. For smooth integration with Lava, `lava.lib.dl.slayer` supports exporting trained models using the platform independent __hdf5 network exchange__ format.
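The hdf5 export this paragraph refers to is a single call on the trained network; a minimal sketch, assuming a trained slayer network object `net` that provides the `export_hdf5` helper used in the tutorials:

```python
# Sketch: save the trained slayer model in the platform-independent
# hdf5 network exchange format, for later loading via lava.lib.dl.netx.
net.export_hdf5('network.net')
```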
@@ -263,27 +266,30 @@ __Load the trained network__
# Import the model as a Lava Process
net = hdf5.Network(net_config='network.net')
```
-__Attach Processes for Input Injection and Output Readout__
+__Attach Processes for Input-Output interaction__
```python
-from lava.proc.io import InputLoader, BiasWriter, OutputReader
+from lava.proc import io

# Instantiate the processes
-input_loader = InputLoader(dataset=testing_set)
-bias_writer = BiasWriter(shape=input_shape)
-output = OutputReader()
+dataloader = io.dataloader.SpikeDataloader(dataset=test_set)
+output_logger = io.sink.RingBuffer(shape=net.out_layer.shape, buffer=num_steps)
+gt_logger = io.sink.RingBuffer(shape=(1,), buffer=num_steps)

# Connect the input to the network:
-input_loader.data_out.connect(bias_writer.bias_in)
-bias_writer.bias_out.connect(net.in_layer.bias)
+dataloader.ground_truth.connect(gt_logger.a_in)
+dataloader.s_out.connect(net.in_layer.neuron.a_in)

# Connect network-output to the output process
-net.out_layer.neuron.s_out.connect(output.net_output_in)
+net.out_layer.out.connect(output_logger.a_in)
```
__Run the network__
```python
from lava.magma import run_configs as rcfg
from lava.magma import run_conditions as rcnd

net.run(condition=rcnd.RunSteps(total_run_time), run_cfg=rcfg.Loihi1SimCfg())
+output = output_logger.data.get()
+gts = gt_logger.data.get()
net.stop()
```
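Read end to end, the new (`+`) lines of this README example assemble into the following inference flow; a minimal sketch, assuming a `test_set` dataset, a `num_steps`/`total_run_time` run length, and the `network.net` file produced by training:

```python
from lava.lib.dl.netx import hdf5
from lava.proc import io
from lava.magma import run_configs as rcfg
from lava.magma import run_conditions as rcnd

num_steps = total_run_time = 200  # assumed run length

# Load the trained network as a Lava process
net = hdf5.Network(net_config='network.net')

# Attach dataloader and logging processes
dataloader = io.dataloader.SpikeDataloader(dataset=test_set)
output_logger = io.sink.RingBuffer(shape=net.out_layer.shape, buffer=num_steps)
gt_logger = io.sink.RingBuffer(shape=(1,), buffer=num_steps)

dataloader.ground_truth.connect(gt_logger.a_in)
dataloader.s_out.connect(net.in_layer.neuron.a_in)
net.out_layer.out.connect(output_logger.a_in)

# Run, read back the logged output and ground truth, then stop
net.run(condition=rcnd.RunSteps(total_run_time), run_cfg=rcfg.Loihi1SimCfg())
output = output_logger.data.get()
gts = gt_logger.data.get()
net.stop()
```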

3 changes: 2 additions & 1 deletion build.py
@@ -67,6 +67,7 @@ def set_properties_unit(project):
    project.set_property("dir_source_unittest_python", "tests/lava")
    project.set_property("dir_source_main_scripts", "scripts")
    project.set_property("dir_docs", "docs")
+   project.build_depends_on("lava", url="git+https://github.com/lava-nc/lava.git")

    project.set_property("sphinx_config_path", "docs")
    project.set_property("sphinx_source_dir", "docs")
@@ -82,7 +83,7 @@ def set_properties_unit(project):
    project.plugin_depends_on("sphinx_tabs")

    project.set_property("verbose", True)
-
+   project.set_property("coverage_threshold_warn", 0)
    project.set_property("coverage_break_build", False)
1 change: 1 addition & 0 deletions requirements.txt
@@ -5,3 +5,4 @@ scipy
matplotlib
ninja
h5py >= 3.1.0
+lava-nc@git+https://github.com/lava-nc/lava.git
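This is a PEP 508 direct-URL requirement: pip resolves it by cloning the Git repository, so the same specifier also works on the command line (for example, `pip install "lava-nc@git+https://github.com/lava-nc/lava.git"`), mirroring the `build_depends_on` line added to `build.py` above.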
2 changes: 1 addition & 1 deletion src/lava/lib/dl/bootstrap/__init__.py
@@ -1,4 +1,4 @@
-# Copyright (C) 2021 Intel Corporation
+# Copyright (C) 2022 Intel Corporation
# SPDX-License-Identifier: BSD-3-Clause


2 changes: 1 addition & 1 deletion src/lava/lib/dl/bootstrap/ann_sampler.py
@@ -1,4 +1,4 @@
-# Copyright (C) 2021 Intel Corporation
+# Copyright (C) 2022 Intel Corporation
# SPDX-License-Identifier: BSD-3-Clause

"""ANN sampler module."""
2 changes: 1 addition & 1 deletion src/lava/lib/dl/bootstrap/block/__init__.py
@@ -1,4 +1,4 @@
-# Copyright (C) 2021 Intel Corporation
+# Copyright (C) 2022 Intel Corporation
# SPDX-License-Identifier: BSD-3-Clause

from . import base, cuba
2 changes: 1 addition & 1 deletion src/lava/lib/dl/bootstrap/block/base.py
@@ -1,4 +1,4 @@
-# Copyright (C) 2021 Intel Corporation
+# Copyright (C) 2022 Intel Corporation
# SPDX-License-Identifier: BSD-3-Clause

"""Abstract bootstrap layer blocks."""
2 changes: 1 addition & 1 deletion src/lava/lib/dl/bootstrap/block/cuba.py
@@ -1,4 +1,4 @@
-# Copyright (C) 2021 Intel Corporation
+# Copyright (C) 2022 Intel Corporation
# SPDX-License-Identifier: BSD-3-Clause

"""Bootstrap CUBA layer blocks."""
2 changes: 1 addition & 1 deletion src/lava/lib/dl/bootstrap/routine.py
@@ -1,4 +1,4 @@
-# Copyright (C) 2021 Intel Corporation
+# Copyright (C) 2022 Intel Corporation
# SPDX-License-Identifier: BSD-3-Clause

"""ANN-SNN mode switching routine helper."""
7 changes: 7 additions & 0 deletions src/lava/lib/dl/netx/__init__.py
@@ -0,0 +1,7 @@
# Copyright (C) 2022 Intel Corporation
# SPDX-License-Identifier: BSD-3-Clause

from . import hdf5
from . import blocks

__all__ = ['hdf5', 'blocks']
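These exports are exactly what the updated README exercises; a minimal usage sketch, assuming a `network.net` file exported from training:

```python
from lava.lib.dl import netx

# Load a trained network description as a hierarchical Lava process
# built from the Input/Dense/Conv blocks defined in netx.blocks.
net = netx.hdf5.Network(net_config='network.net')
```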
51 changes: 51 additions & 0 deletions src/lava/lib/dl/netx/blocks/models.py
@@ -0,0 +1,51 @@
# Copyright (C) 2022 Intel Corporation
# SPDX-License-Identifier: BSD-3-Clause

import numpy as np

from lava.magma.core.process.process import AbstractProcess
from lava.magma.core.model.py.ports import PyInPort, PyOutPort
from lava.magma.core.model.py.type import LavaPyType
from lava.magma.core.decorator import implements, requires, tag
from lava.magma.core.model.sub.model import AbstractSubProcessModel
from lava.magma.core.sync.protocols.loihi_protocol import LoihiProtocol
from lava.magma.core.resources import CPU

from lava.lib.dl.netx.blocks.process import Input, Dense, Conv


@requires(CPU)
@tag('fixed_pt')
class AbstractPyBlockModel(AbstractSubProcessModel):
    """Abstract Block model. A block typically encapsulates at least a
    synapse and a neuron in a layer. It could also include recurrent
    connections as well as residual connections. A minimal example of a
    block is a feedforward layer."""
    def __init__(self, proc: AbstractProcess) -> None:
        if proc.has_graded_input:
            self.inp: PyInPort = LavaPyType(np.ndarray, np.int32, precision=32)
        else:
            self.inp: PyInPort = LavaPyType(np.ndarray, np.int8, precision=1)

        if proc.has_graded_output:
            self.out: PyOutPort = LavaPyType(np.ndarray, np.int32, precision=32)
        else:
            self.out: PyOutPort = LavaPyType(np.ndarray, np.int8, precision=1)


@implements(proc=Input, protocol=LoihiProtocol)
class PyInputModel(AbstractPyBlockModel):
    def __init__(self, proc: AbstractProcess) -> None:
        super().__init__(proc)


@implements(proc=Dense, protocol=LoihiProtocol)
class PyDenseModel(AbstractPyBlockModel):
    def __init__(self, proc: AbstractProcess) -> None:
        super().__init__(proc)


@implements(proc=Conv, protocol=LoihiProtocol)
class PyConvModel(AbstractPyBlockModel):
    def __init__(self, proc: AbstractProcess) -> None:
        super().__init__(proc)
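Because each model above is tagged `'fixed_pt'` and implements the `LoihiProtocol`, a run configuration that selects sub-process models picks them up automatically. A minimal sketch, assuming a network `net` loaded via `netx.hdf5.Network` and Lava's standard run-config and run-condition classes (not part of this diff):

```python
from lava.magma.core.run_configs import Loihi1SimCfg
from lava.magma.core.run_conditions import RunSteps

# Select the fixed-point ProcessModels registered above
run_cfg = Loihi1SimCfg(select_tag='fixed_pt', select_sub_proc_model=True)
net.run(condition=RunSteps(num_steps=100), run_cfg=run_cfg)
net.stop()
```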
