Netx #30

Merged — 23 commits, merged on Mar 1, 2022
Commits (23 total; changes shown are from 4 commits)
349bed5
Updated readme and tutorial links.
bamsumit Nov 13, 2021
a2ef51b
Merge branch 'main' into main
mgkwill Nov 13, 2021
55c9250
Merge branch 'lava-nc:main' into main
bamsumit Nov 19, 2021
622a53a
Merge branch 'lava-nc:main' into main
bamsumit Dec 17, 2021
6512bcb
PilotNet verified in Netx
bamsumit Jan 10, 2022
2464643
PilotNet verified in NetX. Minor linting fixes.
bamsumit Jan 10, 2022
9e2fd9d
Changes to accompany io read and reset api changes.
bamsumit Jan 12, 2022
f40893c
Pilotnet SDNN tested
bamsumit Feb 8, 2022
bb36cee
pilotnet sdnn tutorial
bamsumit Feb 11, 2022
7271391
PilotNet SDNN notebook
bamsumit Feb 14, 2022
35dd630
Removed dataset gt logging
bamsumit Feb 14, 2022
b0029bd
fixed sdnn notebook plot label
bamsumit Feb 14, 2022
c0e409e
Integrated virtual port API to switch from conv to dense layer
bamsumit Feb 22, 2022
fbd3a48
dense graded input support added. Cleaned up sdnn notebook
bamsumit Feb 25, 2022
4d96e40
Refport inconsistency in snn notebook
bamsumit Feb 25, 2022
01e5254
Linting fixes. Updated build script
bamsumit Feb 25, 2022
9800bc7
Changed dates in copyright header
bamsumit Feb 25, 2022
de169c1
Merge branch 'main' into netx
bamsumit Feb 25, 2022
1cab00b
Readme Updates and review comments fixes
bamsumit Feb 25, 2022
64a9262
Fixed linting issue
bamsumit Feb 25, 2022
aac5ab9
Cleaned up snn notebook. Fixed issue with lambda on Windows.
bamsumit Feb 26, 2022
ccb4449
Fixed test_hdf5 tests to reflect in_layer transform changes
bamsumit Feb 28, 2022
ed0f2ff
final tidbits
bamsumit Mar 1, 2022
32 changes: 19 additions & 13 deletions README.md
@@ -16,9 +16,7 @@ The library presently consists of

1. `lava.lib.dl.slayer` for natively training Deep Event-Based Networks.
2. `lava.lib.dl.bootstrap` for training rate coded SNNs.

Coming soon to the library
1. `lava.lib.dl.netx` for training and deployment of event-based deep neural networks on traditional as well as neuromorphic backends.
3. `lava.lib.dl.netx` for training and deployment of event-based deep neural networks on traditional as well as neuromorphic backends.

More tools will be added in the future.

@@ -112,15 +110,20 @@ $ pip install lava-nc-0.1.0.tar.gz

## Getting Started

**End to end tutorials**
**End to end training tutorials**
* [Oxford spike train regression](https://github.com/lava-nc/lava-dl/blob/main/tutorials/lava/lib/dl/slayer/oxford/train.ipynb)
* [MNIST digit classification](https://github.com/lava-nc/lava-dl/blob/main/tutorials/lava/lib/dl/bootstrap/mnist/train.ipynb)
* [NMNIST digit classification](https://github.com/lava-nc/lava-dl/blob/main/tutorials/lava/lib/dl/slayer/nmnist/train.ipynb)
* [PilotNet steering angle prediction](https://github.com/lava-nc/lava-dl/blob/main/tutorials/lava/lib/dl/slayer/pilotnet/train.ipynb)

**Deep dive tutorials**
**Deep dive training tutorials**
* [Dynamics and Neurons](https://github.com/lava-nc/lava-dl/blob/main/tutorials/lava/lib/dl/slayer/neuron_dynamics/dynamics.ipynb)

**Inference tutorials**
* [Oxford Inference](https://github.com/lava-nc/lava-dl/blob/main/tutorials/lava/lib/dl/netx/oxford/run.ipynb)
* [PilotNet SNN Inference](https://github.com/lava-nc/lava-dl/blob/main/tutorials/lava/lib/dl/netx/pilotnet_snn/run.ipynb)
* [PilotNet SDNN Inference](https://github.com/lava-nc/lava-dl/blob/main/tutorials/lava/lib/dl/netx/pilotnet_sdnn/run.ipynb)

## __`lava.lib.dl.slayer`__

`lava.lib.dl.slayer` is an enhanced version of [SLAYER](https://github.com/bamsumit/slayerPytorch). Most noteworthy enhancements are: support for _recurrent network structures_, a wider variety of _neuron models_ and _synaptic connections_ (a complete list of features is [here](https://github.com/lava-nc/lava-dl/blob/main/src/lava/lib/dl/slayer/README.md)). This version of SLAYER is built on top of the [PyTorch](https://pytorch.org/) deep learning framework, similar to its predecessor. For smooth integration with Lava, `lava.lib.dl.slayer` supports exporting trained models using the platform independent __hdf5 network exchange__ format.
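For reference, the export step mentioned above is typically a short routine in the training tutorials. The following is a minimal sketch, not the exact tutorial code: it assumes a `torch.nn.Module` whose slayer blocks are collected in a `blocks` list and expose a per-block `export_hdf5` method.

```python
import h5py
import torch

def export_hdf5(net: torch.nn.Module, filename: str) -> None:
    # Write each slayer block into the 'layer' group of the hdf5
    # network exchange file that netx.hdf5.Network reads back below.
    with h5py.File(filename, 'w') as handle:
        layer = handle.create_group('layer')
        for i, block in enumerate(net.blocks):  # net.blocks is assumed
            block.export_hdf5(layer.create_group(f'{i}'))

# export_hdf5(net, 'network.net')  # produces the file loaded below
```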
@@ -263,27 +266,30 @@ __Load the trained network__
# Import the model as a Lava Process
net = hdf5.Network(net_config='network.net')
```
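Once loaded, the network is an ordinary Lava process whose layers can be inspected before wiring it up. A quick sanity check might look like this (a sketch; `net.layers` is an assumed attribute, while `net.in_layer` and `net.out_layer` are used in the snippets below):

```python
# Inspect the generated network (attribute names assumed, not verified)
print(len(net.layers))       # number of blocks created from the file
print(net.in_layer.shape)    # shape of the input layer
print(net.out_layer.shape)   # shape of the output layer
```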
__Attach Processes for Input Injection and Output Readout__
__Attach Processes for Input-Output interaction__
```python
from lava.proc.io import InputLoader, BiasWriter, OutputReader
from lava.proc import io

# Instantiate the processes
input_loader = InputLoader(dataset=testing_set)
bias_writer = BiasWriter(shape=input_shape)
output = OutputReader()
dataloader = io.dataloader.SpikeDataloader(dataset=test_set)
output_logger = io.sink.RingBuffer(shape=net.out_layer.shape, buffer=num_steps)
gt_logger = io.sink.RingBuffer(shape=(1,), buffer=num_steps)

# Connect the input to the network:
input_loader.data_out.connect(bias_writer.bias_in)
bias_writer.bias_out.connect(net.in_layer.bias)
dataloader.ground_truth.connect(gt_logger.a_in)
dataloader.s_out.connect(net.in_layer.neuron.a_in)

# Connect network-output to the output process
net.out_layer.neuron.s_out.connect(output.net_output_in)
net.out_layer.out.connect(output_logger.a_in)
```
__Run the network__
```python
from lava.magma import run_configs as rcfg
from lava.magma import run_conditions as rcnd

net.run(condition=rcnd.RunSteps(num_steps), run_cfg=rcfg.Loihi1SimCfg())
output = output_logger.data.get()
gts = gt_logger.data.get()
net.stop()
```
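After the run, the logged output can be compared against the logged ground truth, for example to reproduce the steering-angle plot from the PilotNet SDNN notebook. This is only a sketch; the scaling from raw network output to an angle is illustrative and depends on how the trained network encodes its output:

```python
import matplotlib.pyplot as plt

# output: (num_output_neurons, num_steps), gts: (1, num_steps)
plt.plot(gts.flatten(), label='ground truth')
plt.plot(output.flatten(), label='network prediction')  # rescale as needed
plt.xlabel('time step')
plt.ylabel('steering angle (radians)')
plt.legend()
plt.show()
```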

5 changes: 4 additions & 1 deletion src/lava/lib/dl/netx/blocks/models.py
@@ -11,13 +11,16 @@
from lava.magma.core.sync.protocols.loihi_protocol import LoihiProtocol
from lava.magma.core.resources import CPU

# from .process import Conv
from lava.lib.dl.netx.blocks.process import Input, Dense, Conv


@requires(CPU)
@tag('fixed_pt')
class AbstractPyBlockModel(AbstractSubProcessModel):
"""Abstract Block model. A block typically encapsulates at least a
synapse and a neuron in a layer. It could also include recurrent
connection as well as residual connection. A minimal example of a
block is a feedforward layer."""
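# The port data types below depend on whether the wrapped block receives
# graded (multi-bit) spikes or plain binary spikes; graded input is carried
# as 32-bit signed integer payloads (see the branch below).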
def __init__(self, proc: AbstractProcess) -> None:
if proc.has_graded_input:
self.inp: PyInPort = LavaPyType(np.ndarray, np.int32, precision=32)
20 changes: 8 additions & 12 deletions src/lava/lib/dl/netx/blocks/process.py
@@ -58,8 +58,6 @@ def _neuron(self, bias: np.ndarray) -> Type[AbstractProcess]:
f'Bias of shape {bias.shape} could not be broadcast to '
f'neuron of shape {self.shape}.'
)
# print(f'{bias=}')
# print(f'{bias_mant=}')
neuron_params = {'shape': self.shape, **self.neuron_params}
neuron_params['bias'] = bias_mant.astype(np.int32)
return self.neuron_process(**neuron_params)
@@ -118,6 +116,13 @@ class Dense(AbstractBlock):
bias of neuron. None means no bias. Defaults to None.
has_graded_input : dict
flag for graded spikes at input. Defaults to False.
num_weight_bits : int
number of weight bits. Defaults to 8.
weight_exponent : int
weight exponent value. Defaults to 0.
sign_mode : int
sign mode of the synapse. Refer to lava.proc.dense documentation for
details. Default is 1 meaning mixed mode.
"""
def __init__(self, **kwargs: Union[dict, tuple, list, int, bool]) -> None:
super().__init__(**kwargs)
@@ -128,10 +133,7 @@ def __init__(self, **kwargs: Union[dict, tuple, list, int, bool]) -> None:
sign_mode = kwargs.pop('sign_mode', 1)

graded_spikes_params = {'use_graded_spike': self.has_graded_input}
# TODO: additional graded spike params
# if self.has_graded_input is True:
# graded_spikes_parms['spike_payload_bytes'] = 2 # 16 bits
# graded_spikes_parms['spike_payload_sign'] = 1 # signed

self.synapse = DenseSynapse(
shape=weight.shape,
weights=weight,
@@ -147,7 +149,6 @@ def __init__(self, **kwargs: Union[dict, tuple, list, int, bool]) -> None:
f'found {self.synapse.a_out.shape}.'
)

# TODO: add support for axonal delay
self.neuron = self._neuron(kwargs.pop('bias', None))

self.inp = InPort(shape=self.synapse.s_in.shape)
@@ -196,10 +197,6 @@ def __init__(self, **kwargs: Union[dict, tuple, list, int, bool]) -> None:
# sign_mode = kwargs.pop('sign_mode', 1)

graded_spikes_params = {'use_graded_spike': self.has_graded_input}
# TODO: additional graded spike params
# if self.has_graded_input is True:
# graded_spikes_parms['spike_payload_bytes'] = 2 # 16 bits
# graded_spikes_parms['spike_payload_sign'] = 1 # signed

self.synapse = ConvSynapse(
input_shape=kwargs.pop('input_shape'),
@@ -208,7 +205,6 @@ def __init__(self, **kwargs: Union[dict, tuple, list, int, bool]) -> None:
padding=kwargs.pop('padding', 0),
dilation=kwargs.pop('dilation', 1),
groups=kwargs.pop('groups', 1),
# TODO: implement these features
# num_wgt_bits=num_weight_bits,
# # num_dly_bits=self.num_weight_bits(delays),
# wgt_exp=weight_exponent,
89 changes: 43 additions & 46 deletions src/lava/lib/dl/netx/hdf5.py
@@ -25,7 +25,7 @@ class Network(AbstractProcess):
Parameters
----------
net_config : str
name of the hdf5 config file.
name of the hdf5 config filename.
num_layers : int, optional
number of blocks to generate. An integer value will only generate the
first ``num_layers`` blocks in the description. The actual number of
@@ -96,14 +96,14 @@ def get_neuron_params(
'neuron_proc': neuron_process,
'vth': neuron_config['vThMant'],
'du': neuron_config['iDecay'] - 1,
# weird: it seems the LIF process is written this way!
'dv': neuron_config['vDecay'],
'bias_exp': 6,
'use_graded_spikes': False,
}
return neuron_params
elif neuron_type in ['SDNN']:
if input is True: # delta process if it is an input layer
if input is True:
# If it is an input layer (input is True), use the Delta process.
neuron_process = Delta
neuron_params = {
'neuron_proc': neuron_process,
@@ -143,8 +143,8 @@ def _table_str(
delay: bool = False,
header: bool = False,
) -> str:
# A private helper function to print the mapping output configuration
# for a layer/block.
"""Helper function to print mapping output configuration
for a layer/block."""
if header is True:
return '| Type | W | H | C '\
'| ker | str | pad | dil | grp |delay|'
@@ -196,31 +196,33 @@ def create_input(layer_config: h5py.Group) -> Tuple[Input, str]:
if 'weight' in layer_config.keys():
weight = int(layer_config['weight'])
else:
# TODO: for sigma delta neurons, default weight is 64 # CHECK
# weight = 1
weight = 64

if 'bias' in layer_config.keys():
bias = int(layer_config['bias'])
else:
bias = 0

# affine transform of the input
def transform(x: Union[int, np.ndarray]) -> Union[int, np.ndarray]:
result = 2 * weight * x - weight + bias

if hasattr(result, 'shape'):
if len(result.shape) == 2:
# Lava format for image is (X, Y) i.e. WH
# whereas standard images are in (height, width)
# i.e. HW format
return result.astype(np.int32).transpose([1, 0])
elif len(result.shape) == 3:
return result.astype(np.int32).transpose([1, 0, 2])
else:
return result.astype(np.int32)
else:
return int(result)
# # affine transform of the input
# def transform(x: Union[int, np.ndarray]) -> Union[int, np.ndarray]:
# """Affine transform of the input and reordering of the input
# dimension. Lava represents dimensions in (X, Y) format
# whereas standard image format is (height, width) i.e. (Y, X)"""
# result = 2 * weight * x - weight + bias

# if hasattr(result, 'shape'):
# if len(result.shape) == 2:
# # Lava format for image is (X, Y) i.e. WH
# # whereas standard images are in (height, width)
# # i.e. HW format
# return result.astype(np.int32).transpose([1, 0])
# elif len(result.shape) == 3:
# return result.astype(np.int32).transpose([1, 0, 2])
# else:
# return result.astype(np.int32)
# else:
# return int(result)
transform = {'weight': weight, 'bias': bias}
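# Note: the affine map y = 2 * weight * x - weight + bias (and the HWC to
# WHC reordering expected by Lava) is now applied by the consumer of this
# dict, e.g. the PilotNet SDNN dataset updated in this PR, rather than here.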

params = { # arguments for Input block
'shape': shape,
@@ -323,9 +325,6 @@ def create_conv(
dilation = layer_config['dilation'][::-1]
groups = layer_config['groups']

# opt_weights = optimize_weight_bits(weight)
# weight, num_weight_bits, weight_exponent, sign_mode = opt_weights

params = { # arguments for conv block
'input_shape': input_shape,
'shape': shape,
@@ -335,19 +334,17 @@
'padding': padding,
'dilation': dilation,
'groups': groups,
# 'num_weight_bits': num_weight_bits,
# 'weight_exponent': weight_exponent,
# 'sign_mode': sign_mode,
'has_graded_input': has_graded_input,
}

# optional arguments
# Optional arguments
if 'bias' in layer_config.keys():
params['bias'] = layer_config['bias']

if 'delay' not in layer_config.keys():
params['neuron_params']['delay_bits'] = 1
else:
pass # TODO: set appropriate delay bits for synaptic delay
pass

table_entry = Network._table_str(
type_str='Conv',
Expand All @@ -359,25 +356,25 @@ def create_conv(

return Conv(**params), table_entry

# @staticmethod
# def create_pool(layer_config):
# pass
@staticmethod
def create_pool(layer_config: h5py.Group) -> None:
raise NotImplementedError

# @staticmethod
# def create_convT(layer_config):
# pass
@staticmethod
def create_convT(layer_config: h5py.Group) -> None:
raise NotImplementedError

# @staticmethod
# def create_unpool(layer_config):
# pass
@staticmethod
def create_unpool(layer_config: h5py.Group) -> None:
raise NotImplementedError

# @staticmethod
# def create_average(layer_config):
# pass
@staticmethod
def create_average(layer_config: h5py.Group) -> None:
raise NotImplementedError

# @staticmethod
# def create_concat(layer_config):
# pass
@staticmethod
def create_concat(layer_config: h5py.Group) -> None:
raise NotImplementedError
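# The layer types above (pool, convT, unpool, average, concat) are not
# supported yet; raising NotImplementedError makes an unsupported network
# description fail loudly instead of being silently skipped.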

def _create(self) -> List[AbstractProcess]:
has_graded_input_next = self.has_graded_input
1 change: 0 additions & 1 deletion src/lava/lib/dl/netx/utils.py
@@ -55,7 +55,6 @@ def __len__(self) -> int:
return len(self.f)

def __getitem__(self, key: str) -> h5py.Dataset:
# print(key)
if key in self.str_keys:
value = self.f[key]
if len(value.shape) > 0:
6 changes: 2 additions & 4 deletions tests/lava/lib/dl/netx/test_hdf5.py
@@ -51,10 +51,8 @@ def test_num_layers(self) -> None:
def test_input_transform(self) -> None:
"""Tests the input tansform of known hdf5 net."""
net = netx.hdf5.Network(net_config=root + '/tiny.net', num_layers=1)
# transformation should be y = 2*weight*x - weight + bias
bias = net.in_layer.transform(0)
weight = (net.in_layer.transform(1) - bias) / 2
bias += weight
bias = net.in_layer.transform['bias']
weight = net.in_layer.transform['weight']
self.assertEqual(
bias, 34,
f'Expected transformation bias to be 34. Found {bias}.'
10 changes: 6 additions & 4 deletions tutorials/lava/lib/dl/netx/pilotnet_sdnn/dataset.py
@@ -30,7 +30,7 @@ class PilotNetDataset():
If true, the train/test split is ignored and the temporal sequence
of the data is preserved. Defaults to False.
sample_offset : int, optional
sample offset. Default is 0.
sample offset. Default is 0.

Usage
-----
@@ -101,12 +101,14 @@ def __getitem__(self, index: int) -> Tuple[np.ndarray, float]:
).resize(self.size, resample=Image.BILINEAR)
image = np.array(image) / 255
if self.transform is not None:
image = self.transform(image)
image = 2 * self.transform['weight'] * image \
- self.transform['weight'] + self.transform['bias']
image = image.astype(np.int32).transpose([1, 0, 2])
ground_truth = float(self.samples[index][1])
if ground_truth == 0:
ground_truth = (
float(self.samples[index - 1][1]) +
float(self.samples[index + 1][1])
float(self.samples[index - 1][1])
+ float(self.samples[index + 1][1])
) / 2
gt_val = ground_truth * np.pi / 180
return image.reshape(image.shape + (1,)), gt_val
2 changes: 1 addition & 1 deletion tutorials/lava/lib/dl/netx/pilotnet_sdnn/run.ipynb
@@ -244,7 +244,7 @@
{
"data": {
"text/plain": [
"<matplotlib.legend.Legend at 0x7f204c456940>"
"<matplotlib.legend.Legend at 0x7fe8cb330280>"
]
},
"execution_count": 10,