Lava va #740

Merged: 74 commits merged into main from lava_va on Jun 25, 2024
Commits (74)
b0fe8ff
prod neuron
epaxon Jul 12, 2023
006864a
trying to get prod neuron to work...
epaxon Jul 13, 2023
98821d3
trying to get prod neuron cpu to work...
epaxon Jul 13, 2023
91f82ad
prod neuron process cpu backend working with unit test
epaxon Jul 13, 2023
a71e2d3
remove init file from prod_neuron
epaxon Jul 13, 2023
324a089
gradedvec process and test
epaxon Jul 13, 2023
20d9100
working on norm vec
epaxon Jul 13, 2023
789d389
fixed prod neuron license headers
epaxon Jul 13, 2023
f4884f7
invsqrt model and tests, reconfigured to process and models
epaxon Jul 13, 2023
09db1ec
normvecdelay and tests, timing weirdness with normvecdelay
epaxon Jul 14, 2023
76fafb8
test for second channel of norm vec
epaxon Jul 14, 2023
30f1a61
Merge branch 'prod_neuron' into graded
epaxon Jul 14, 2023
302e0fe
Merge branch 'main' into graded
epaxon Jul 17, 2023
886a07d
renamed to prodneuron.
epaxon Jul 18, 2023
ffd0fde
fixing some linting errors
epaxon Jul 18, 2023
f47d681
Merge branch 'main' into graded
epaxon Jul 18, 2023
2b16cea
cleanup
epaxon Jul 18, 2023
5f8471e
Merge branch 'main' into graded
epaxon Jul 18, 2023
317682b
frameworks and networks added to lava-nc
epaxon Jul 18, 2023
81b5505
Merge branch 'main' into lava_va
epaxon Jul 18, 2023
691e3fb
Merge branch 'main' into lava_va
epaxon Jul 19, 2023
5998dae
adding some docstring, fixing unused imports
epaxon Jul 18, 2023
89907f3
Fix partition parse bug for internal vLab (#741)
tim-shea Jul 20, 2023
617332f
Add linting tutorials folder (#742)
PhilippPlank Jul 21, 2023
e76d3f6
Iterator callback fx signature fix (#743)
bamsumit Jul 21, 2023
f7692ba
Bugfix to pass the args by keyword (#744)
joyeshmishra Jul 22, 2023
48ebc37
CLP Tutorial 01 Only (#746)
drager-intel Jul 25, 2023
11555d7
Update release job, add pypi upload, github release creation (#737)
mgkwill Jul 25, 2023
a2ab7c9
Update release job, pypi auth
Jul 25, 2023
b03da54
Use github pypi auth in release job (#747)
mgkwill Jul 25, 2023
c293464
Release 0.8.0
Jul 25, 2023
db74c7f
Fix conv python model to send() before recv() (#751)
Gavinator98 Jul 28, 2023
feeff0e
Adds support for Monitor a Port to observe if it is blocked (#755)
joyeshmishra Jul 31, 2023
5a370b0
Set version to dev0 in pyproject.toml
mgkwill Aug 1, 2023
0e2f5a6
Update README.md
PhilippPlank Aug 3, 2023
a322ef5
Update README.md (#758)
ahenkes1 Aug 3, 2023
23088cf
Fix DelayDense buffer issue (#767)
bamsumit Aug 7, 2023
b79062f
Allow np.array as input weights for Sparse (#772)
SveaMeyer13 Aug 22, 2023
88575d2
Bump tornado from 6.3.2 to 6.3.3 (#778)
dependabot[bot] Aug 23, 2023
cf62850
Bump cryptography from 41.0.2 to 41.0.3 (#779)
dependabot[bot] Aug 23, 2023
a5d351a
small docstring, typing and other formatting changes
epaxon Aug 31, 2023
5675a6c
Update README.md (#758)
ahenkes1 Aug 3, 2023
0471a70
small docstring, typing and other formatting changes
epaxon Sep 1, 2023
34a4ffe
doc strings for graded vec
epaxon Sep 1, 2023
e39fc0f
Bump gitpython from 3.1.32 to 3.1.35 (#785)
dependabot[bot] Sep 11, 2023
71c5a06
fixing merge conflicts on prodneuron
epaxon Oct 24, 2023
09b7a6f
Merge Spike IO (#786)
joyeshmishra Sep 16, 2023
efe4650
CLP tutorial 1 small patch (#773)
elvinhajizada Sep 21, 2023
ac29454
CLP Tutorial 02: COIL-100 (#721)
elvinhajizada Sep 21, 2023
9048c98
Bump cryptography from 41.0.3 to 41.0.4 (#790)
dependabot[bot] Sep 25, 2023
98e16c7
Generalize int shape check in injector and extractor to take numpy in…
bamsumit Sep 27, 2023
9b7eb1e
Resfire (#787)
epaxon Oct 6, 2023
fca1900
Bump pillow from 10.0.0 to 10.0.1 (#794)
dependabot[bot] Oct 8, 2023
5a469c5
Bump urllib3 from 1.26.16 to 1.26.17 (#793)
dependabot[bot] Oct 8, 2023
df89783
mulitply for threshvec, fixes to frameworks imports, fixes for resfir…
epaxon Oct 9, 2023
894a06a
rename ThreshVec to GradedVec and fixes.
epaxon Oct 24, 2023
3d7d8c3
lava VA tutorials.
epaxon Oct 24, 2023
705efdf
merge with main
epaxon Nov 20, 2023
9f5d63e
Merge branch 'main' into lava_va
epaxon Nov 20, 2023
2d0c90e
slight updates to lava_va tutorials and removed csr_matrix cast
epaxon Dec 13, 2023
8c81719
Merge branch 'main' into lava_va
epaxon Dec 13, 2023
838b928
Merge branch 'main' into lava_va
epaxon Jan 8, 2024
c90ef46
Merge branch 'main' into lava_va
epaxon Mar 12, 2024
825d4cf
super needed, formatting, fixed test_network
epaxon May 15, 2024
0fd4ca0
Merge branch 'main' into lava_va
epaxon May 16, 2024
79dd5ca
Automatically create identity connections when using lva to connect v…
epaxon May 29, 2024
7387e15
NetworkList to keep track of + Networks. More flexibility in algebra …
epaxon May 29, 2024
512569f
Updated tutorial 1 to demo automatic vec2vec connections and better +…
epaxon May 29, 2024
758744a
Updates to Tutorial01 that show automatic identity connections when c…
epaxon May 30, 2024
77b4926
Merge branch 'main' into lava_va
epaxon Jun 20, 2024
2f07dd1
Comments, docstrings, typing clean-up.
epaxon Jun 20, 2024
b6e0b5c
changing embedded io import location, in case theres no lava-loihi.
epaxon Jun 20, 2024
7ea0df8
small codacy fixes. Test lava va tutorials.
epaxon Jun 20, 2024
012bdbb
Cleanup comments on test_graded.py and test_tutorials-lva.py
epaxon Jun 24, 2024
14 changes: 14 additions & 0 deletions src/lava/frameworks/loihi2.py
@@ -0,0 +1,14 @@
# Copyright (C) 2022-23 Intel Corporation
# SPDX-License-Identifier: BSD-3-Clause
# See: https://spdx.org/licenses/

from lava.networks.gradedvecnetwork import (InputVec, OutputVec, GradedVec,
                                            GradedDense, GradedSparse,
                                            ProductVec,
                                            LIFVec,
                                            NormalizeNet)

from lava.networks.resfire import ResFireVec

from lava.magma.core.run_conditions import RunSteps, RunContinuous
from lava.magma.core.run_configs import Loihi2SimCfg, Loihi2HwCfg

[Codacy Static Code Analysis annotations on this file: F401 "imported but unused" for InputVec (line 5), ResFireVec (line 11), RunSteps (line 13), and Loihi2SimCfg (line 14). The module appears to exist purely to re-export these names under the lava.frameworks.loihi2 namespace.]
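For orientation, a hypothetical usage sketch of this namespace (not part of the diff). It assumes the classes behave as defined in gradedvecnetwork.py below, that the Network base classes provide the << and @ wiring operators used throughout this PR, and that the graded-vector process models carry the 'fixed_pt' tag; all names and shapes here are illustrative only.

import numpy as np
import lava.frameworks.loihi2 as lv

vec = np.zeros((10, 1))
vec[4] = 100  # one graded value, repeated every timestep

inp = lv.InputVec(vec)                        # source ring buffer
layer = lv.GradedVec(shape=(10,), vth=1)      # graded threshold layer
weights = lv.GradedDense(weights=np.eye(10))  # identity connection
out = lv.OutputVec(shape=(10,), buffer=5)

layer << weights @ inp   # wire input -> weights -> layer
out << layer             # record the layer's output

# Run via the wrapped Process; RunSteps and Loihi2SimCfg are the
# re-exports flagged above as "unused".
layer.main.run(condition=lv.RunSteps(num_steps=5),
               run_cfg=lv.Loihi2SimCfg(select_tag='fixed_pt'))
data = out.get_data()
layer.main.stop()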
324 changes: 324 additions & 0 deletions src/lava/networks/gradedvecnetwork.py
@@ -0,0 +1,324 @@
# Copyright (C) 2022-23 Intel Corporation
# SPDX-License-Identifier: BSD-3-Clause
# See: https://spdx.org/licenses/

import numpy as np
import typing as ty

from lava.proc.graded.process import InvSqrt
from lava.proc.graded.process import NormVecDelay
from lava.proc.sparse.process import Sparse
from lava.proc.dense.process import Dense
from lava.proc.prodneuron.process import ProdNeuron
from lava.proc.graded.process import GradedVec as GradedVecProc
from lava.proc.lif.process import LIF
from lava.proc.io import sink, source

from .network import Network, AlgebraicVector, AlgebraicMatrix


class InputVec(AlgebraicVector):
    """InputVec
    Simple input vector. Adds algebraic syntax to RingBuffer.

    Parameters
    ----------
    vec : np.ndarray
        NxM array of input values. Input will repeat every M steps.
    exp : int, optional
        Set the fixed point base value.
    loihi2 : bool, optional
        Flag to create the adapters for Loihi 2.
    """

    def __init__(self,
                 vec: np.ndarray,
                 loihi2: ty.Optional[bool] = False,
                 exp: ty.Optional[int] = 0,
                 **kwargs) -> None:
        self.loihi2 = loihi2
        self.shape = np.atleast_2d(vec).shape
        self.exp = exp

        # Convert it to the fixed point base
        vec *= 2 ** self.exp

        self.inport_plug = source.RingBuffer(data=np.atleast_2d(vec))

        if self.loihi2:
            from lava.proc import embedded_io as eio
            self.inport_adapter = eio.spike.PyToNxAdapter(
                shape=(self.shape[0],),
                num_message_bits=24)
            self.inport_plug.s_out.connect(self.inport_adapter.inp)
            self.out_port = self.inport_adapter.out
        else:
            self.out_port = self.inport_plug.s_out

    def __lshift__(self, other):
        # Maybe this could be done with a numpy array and call set_data?
        return NotImplemented


class OutputVec(Network):
    """OutputVec
    Records spike output. Adds algebraic syntax to RingBuffer.

    Parameters
    ----------
    shape : tuple(int)
        Shape of the output to record.
    buffer : int, optional
        Length of the recording
        (the buffer is overwritten if shorter than the sim time).
    loihi2 : bool, optional
        Flag to create the adapters for Loihi 2.
    num_message_bits : int, optional
        Size of the output message ("0" is for unary spike events).
    """

    def __init__(self,
                 shape: ty.Tuple[int, ...],
                 buffer: int = 1,
                 loihi2: ty.Optional[bool] = False,
                 num_message_bits: ty.Optional[int] = 24,
                 **kwargs) -> None:
        self.shape = shape
        self.buffer = buffer
        self.loihi2 = loihi2
        self.num_message_bits = num_message_bits

        self.outport_plug = sink.RingBuffer(
            shape=self.shape, buffer=self.buffer, **kwargs)

        if self.loihi2:
            from lava.proc import embedded_io as eio
            self.outport_adapter = eio.spike.NxToPyAdapter(
                shape=self.shape, num_message_bits=self.num_message_bits)
            self.outport_adapter.out.connect(self.outport_plug.a_in)
            self.in_port = self.outport_adapter.inp
        else:
            self.in_port = self.outport_plug.a_in

    def get_data(self):
        # Sign-extend the 24-bit payloads stored in 32-bit words.
        return (self.outport_plug.data.get().astype(np.int32) << 8) >> 8
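The shift pair in get_data() sign-extends the 24-bit message payload stored in 32-bit words. A minimal numpy illustration of the trick (my example values, not from the PR):

import numpy as np

raw = np.array([0x00FFFFFF, 5], dtype=np.int32)  # 24-bit two's complement: -1 and 5
print((raw << 8) >> 8)  # [-1  5]: << 8 moves bit 23 into the sign bit,
                        # the arithmetic >> 8 then replicates it downward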


class LIFVec(AlgebraicVector):
    """LIFVec
    Network wrapper for the LIF neuron.

    Parameters
    ----------
    See lava.proc.lif.process.LIF
    """

    def __init__(self, **kwargs):
        self.main = LIF(**kwargs)

        self.in_port = self.main.a_in
        self.out_port = self.main.s_out


class GradedVec(AlgebraicVector):
    """GradedVec
    Simple graded threshold vector with no dynamics.

    Parameters
    ----------
    shape : tuple(int)
        Number and topology of neurons.
    vth : int, optional
        Threshold for spiking.
    exp : int, optional
        Fixed point base of the vector.
    """

    def __init__(self,
                 shape: ty.Tuple[int, ...],
                 vth: int = 10,
                 exp: int = 0,
                 **kwargs):
        self.shape = shape
        self.vth = vth
        self.exp = exp

        self.main = GradedVecProc(shape=self.shape, vth=self.vth, exp=self.exp)
        self.in_port = self.main.a_in
        self.out_port = self.main.s_out

        super().__init__()

    def __mul__(self, other):
        if isinstance(other, GradedVec):
            # Create the product network
            prod_layer = ProductVec(shape=self.shape, vth=1, exp=self.exp)

            weightsI = np.eye(self.shape[0])

            weights_A = GradedSparse(weights=weightsI)
            weights_B = GradedSparse(weights=weightsI)
            weights_out = GradedSparse(weights=weightsI)

            # Codacy flagged the next two statements as having no effect
            # (file lines 167-168); the overloaded << and @ operators
            # connect the processes as a side effect.
            prod_layer << (weights_A @ self, weights_B @ other)
            weights_out @ prod_layer
            return weights_out
        else:
            return NotImplemented
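A short sketch of the algebra this overload enables, continuing in this module's namespace (hypothetical shapes; assumes the operator conventions shown elsewhere in this file):

x = GradedVec(shape=(10,), vth=1)
y = GradedVec(shape=(10,), vth=1)
z = GradedVec(shape=(10,), vth=1)

# x * y builds a ProductVec fed by identity GradedSparse connections and
# returns the output connection, so the product can be wired onward:
z << (x * y)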


class ProductVec(AlgebraicVector):
    """ProductVec
    Neuron that will multiply values on two input channels.

    Parameters
    ----------
    shape : tuple(int)
        Number and topology of neurons.
    vth : int, optional
        Threshold for spiking.
    exp : int, optional
        Fixed point base of the vector.
    """

    def __init__(self,
                 shape: ty.Tuple[int, ...],
                 vth: ty.Optional[int] = 10,
                 exp: ty.Optional[int] = 0,
                 **kwargs):
        self.shape = shape
        self.vth = vth
        self.exp = exp

        self.main = ProdNeuron(shape=self.shape, vth=self.vth, exp=self.exp)

        self.in_port = self.main.a_in1
        self.in_port2 = self.main.a_in2

        self.out_port = self.main.s_out

    def __lshift__(self, other):
        # Override the default behavior: because there are two input ports,
        # the API idea is prod_layer << (conn1, conn2).
        if isinstance(other, (list, tuple)):
            # It should be only length 2, and each item a Network object.
            # TODO: add checks
            other[0].out_port.connect(self.in_port)
            other[1].out_port.connect(self.in_port2)
        else:
            return NotImplemented
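A usage sketch of the two-channel connection API described in the comment above, continuing from the GradedVec sketch (w1, w2, and the shape are illustrative assumptions):

import numpy as np

prod = ProductVec(shape=(10,), vth=1)
w1 = GradedSparse(weights=np.eye(10))
w2 = GradedSparse(weights=np.eye(10))

# The tuple form routes the first connection to a_in1, the second to a_in2:
prod << (w1 @ x, w2 @ y)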


class GradedDense(AlgebraicMatrix):
    """GradedDense
    Network wrapper for Dense. Adds algebraic syntax to Dense.

    Parameters
    ----------
    See lava.proc.dense.process.Dense

    weights : numpy.ndarray
        Weight matrix expressed as floating point. Weights will be
        automatically reconfigured to fixed point (may lead to changes
        due to rounding).
    exp : int, optional
        Fixed point base of the weight (reconfigures weights/weight_exp).
    """

    def __init__(self,
                 weights: np.ndarray,
                 exp: int = 7,
                 **kwargs):
        self.exp = exp

        # Adjust the weights to the fixed point
        w = weights * 2 ** self.exp

        self.main = Dense(weights=w,
                          num_message_bits=24,
                          num_weight_bits=8,
                          weight_exp=-self.exp)

        self.in_port = self.main.s_in
        self.out_port = self.main.a_out
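A worked example of the fixed-point reconfiguration with the default exp = 7 (example values are mine; it assumes round-to-nearest quantization inside Dense's fixed-point model):

import numpy as np

w_float = np.array([[0.25, 0.3]])
exp = 7
w_stored = w_float * 2 ** exp                # [[32. , 38.4]] passed to Dense
w_effective = np.round(w_stored) / 2 ** exp  # weight_exp = -7 undoes the scale
print(w_effective)  # [[0.25, 0.296875]]: 0.3 rounds to 38/128, the
                    # rounding change mentioned in the docstring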


class GradedSparse(AlgebraicMatrix):
    """GradedSparse
    Network wrapper for Sparse. Adds algebraic syntax to Sparse.

    Parameters
    ----------
    See lava.proc.sparse.process.Sparse

    weights : numpy.ndarray
        Weight matrix expressed as floating point. Weights will be
        automatically reconfigured to fixed point (may lead to changes
        due to rounding).
    exp : int, optional
        Fixed point base of the weight (reconfigures weights/weight_exp).
    """

    def __init__(self,
                 weights: np.ndarray,
                 exp: int = 7,
                 **kwargs):
        self.exp = exp

        # Adjust the weights to the fixed point
        w = weights * 2 ** self.exp

        self.main = Sparse(weights=w,
                           num_message_bits=24,
                           num_weight_bits=8,
                           weight_exp=-self.exp)

        self.in_port = self.main.s_in
        self.out_port = self.main.a_out


class NormalizeNet(AlgebraicVector):
    """NormalizeNet
    Creates a layer for normalizing vector inputs.

    Parameters
    ----------
    shape : tuple(int)
        Number and topology of neurons.
    exp : int, optional
        Fixed point base of the vector.
    """

    def __init__(self,
                 shape: ty.Tuple[int, ...],
                 exp: ty.Optional[int] = 12,
                 **kwargs):
        self.shape = shape
        self.fpb = exp

        vec_to_fpinv_w = np.ones((1, self.shape[0]))
        fpinv_to_vec_w = np.ones((self.shape[0], 1))
        weight_exp = 0

        self.vfp_dense = Dense(weights=vec_to_fpinv_w,
                               num_message_bits=24,
                               weight_exp=-weight_exp)
        self.fpv_dense = Dense(weights=fpinv_to_vec_w,
                               num_message_bits=24,
                               weight_exp=-weight_exp)

        self.main = NormVecDelay(shape=self.shape, vth=1,
                                 exp=self.fpb)
        self.fp_inv_neuron = InvSqrt(shape=(1,), fp_base=self.fpb)

        self.main.s2_out.connect(self.vfp_dense.s_in)
        self.vfp_dense.a_out.connect(self.fp_inv_neuron.a_in)
        self.fp_inv_neuron.s_out.connect(self.fpv_dense.s_in)
        self.fpv_dense.a_out.connect(self.main.a_in2)

        self.in_port = self.main.a_in1
        self.out_port = self.main.s_out
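For reference, a floating-point sketch of what this wiring appears to compute, assuming NormVecDelay's s2_out carries squared activations (so the (1 x N) ones-weights Dense sums them, InvSqrt returns 1/sqrt of the sum, and the (N x 1) Dense broadcasts that factor back to a_in2):

import numpy as np

def normalize_reference(v: np.ndarray) -> np.ndarray:
    # Equivalent of the on-chip loop: v * (1 / sqrt(sum(v**2))) = v / ||v||
    return v / np.sqrt(np.sum(v ** 2))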