Questions about converting networks from lava-dl to lava #245
-
```python
import h5py
import torch

from lava.magma.core.run_configs import Loihi1SimCfg
from lava.magma.core.run_conditions import RunSteps
from lava.proc.io.sink import RingBuffer as ReceiveProcess
from lava.proc.io.source import RingBuffer as SendProcess
from lava.lib.dl import netx, slayer

filename = './lava_net.net'
tau = 2.
scale = 1 << 6
in_features = 8
out_features = 1
N = 1
T = 8

neuron_param = {
    'threshold': 1.,
    'current_decay': 1.,
    'voltage_decay': 1. / tau,
    'tau_grad': 1, 'scale_grad': 1, 'scale': scale,
    'norm': None, 'dropout': None,
    'shared_param': True, 'persistent_state': False, 'requires_grad': False,
    'graded_spike': False
}

with torch.no_grad():
    net = slayer.block.cuba.Dense(neuron_param, in_features, out_features)
    torch.nn.init.constant_(net.synapse.weight.data, 0.25)
    x = torch.rand([N, in_features, T])
    x = (x > 0.5).float()
    y = net(x)
print('lava-dl output:\n', y.numpy()[0])

# Export the slayer block to HDF5 and rebuild it as a lava (netx) network.
h5_h = h5py.File(filename, 'w')
h5_layer = h5_h.create_group('layer')
net.export_hdf5(h5_layer.create_group('0'))
h5_h.close()  # close before netx reads the file

net_x = netx.hdf5.Network(net_config=filename)
source = SendProcess(x[0])
sink = ReceiveProcess(shape=net_x.out.shape, buffer=T * 2)
source.s_out.connect(net_x.inp)
net_x.out.connect(sink.a_in)

run_condition = RunSteps(num_steps=T)
run_config = Loihi1SimCfg(select_tag='fixed_pt')
net_x.run(condition=run_condition, run_cfg=run_config)
output = sink.data.get()
net_x.stop()
print('lava output:\n', output)
```

The outputs are:
I find that the two networks have different outputs.
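As an aside, when checking whether two spiking outputs "match", it can help to compare them both directly and with a one-timestep shift, since converted networks sometimes introduce a per-layer delay. The helper below is a sketch for that comparison (the function name and the one-step-delay assumption are mine, not from lava):

```python
import numpy as np

def compare_outputs(y_slayer, y_lava, atol=1e-5):
    """Compare two (features, time) spike trains.

    Returns (exact_match, match_after_one_step_shift). The shifted check
    assumes the second network may lag the first by one timestep.
    """
    exact = np.allclose(y_slayer, y_lava, atol=atol)
    # Drop the last step of one signal and the first step of the other,
    # then compare the overlapping window.
    shifted = np.allclose(y_slayer[..., :-1], y_lava[..., 1:], atol=atol)
    return exact, shifted

# Toy example: b is a delayed copy of a, so only the shifted check passes.
a = np.array([[0., 1., 0., 1.]])
b = np.array([[0., 0., 1., 0.]])
print(compare_outputs(a, b))
```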
-
Hi @fangwei123456, thanks for reaching out to us. The neurons in slayer and lava should match except for some esoteric cases. What you have encountered is a specific discrepancy regarding the spike condition. A fix for this issue has been pushed in lava-dl PR #78. Once it is merged, the difference should be resolved and the outputs should match 1:1. The PR also includes a test inspired by your code snippet. Thanks for uncovering such issues. We highly appreciate it.
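For context on what `select_tag='fixed_pt'` implies: it selects lava's fixed-point (integer) process models, whose spike condition and state updates differ from a naive floating-point implementation. The sketch below illustrates integer CUBA dynamics with Loihi-style 12-bit decay constants; it is a simplified illustration of the idea, not lava's actual process model:

```python
def cuba_step(u, v, weighted_input, du, dv, vth):
    """One fixed-point CUBA neuron update (simplified sketch).

    u, v            : integer current and voltage state
    weighted_input  : integer synaptic input for this timestep
    du, dv          : 12-bit decay constants in [0, 4096]
    vth             : integer spike threshold
    """
    u = (u * (4096 - du)) // 4096 + weighted_input  # leaky current
    v = (v * (4096 - dv)) // 4096 + u               # leaky voltage
    spike = v >= vth                                # spike condition
    v = 0 if spike else v                           # reset on spike
    return u, v, spike

# Constant drive with no decay: the voltage integrates until threshold.
u, v = 0, 0
spikes = []
for _ in range(4):
    u, v, s = cuba_step(u, v, weighted_input=100, du=0, dv=0, vth=300)
    spikes.append(s)
print(spikes)
```

Small rounding choices in exactly this kind of update (e.g. whether the spike check is `>=` or `>`, and when the reset happens) are where slayer and the fixed-point lava model can drift apart.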
-
Hi @bamsumit, I have another question about it. If a comment
Does it mean the spikes are also 8-bit, the same as