
About Poisson encoding #81

Closed
201528014227051 opened this issue Jun 21, 2021 · 2 comments
Labels: API design (How to design an API or why design such an API), good first issue (Good for newcomers), question (Further information is requested)

Comments

@201528014227051

Thanks to the author for open-sourcing this framework.
For Poisson encoding, the code given in the framework is a simple comparison:
out_spike = torch.rand_like(x).le(x)
This differs from how bindsnet implements Poisson encoding. The bindsnet code is as follows:
Code from: https://gitee.com/CtrlPlayer/notebook/blob/master/torch_notebook/distribution_poisson.ipynb
import torch

def poisson(datum: torch.Tensor, time: int, dt: float = 1.0, **kwargs) -> torch.Tensor:
    # language=rst
    """
    Generates Poisson-distributed spike trains based on input intensity. Inputs must be
    non-negative and give the firing rate in Hz. Inter-spike intervals (ISIs) for
    non-negative data are incremented by one to avoid zero intervals while maintaining
    ISI distributions.

    :param datum: Tensor of shape [n_1, ..., n_k].
    :param time: Length of Poisson spike train per input variable.
    :param dt: Simulation time step.
    :return: Tensor of shape [time, n_1, ..., n_k] of Poisson-distributed spikes.
    """
    assert (datum >= 0).all(), "Inputs must be non-negative"

    # Get shape and size of data.
    shape, size = datum.shape, datum.numel()
    datum = datum.flatten()
    time = int(time / dt)

    # Compute firing rates in seconds as function of data intensity,
    # accounting for simulation time step.
    rate = torch.zeros(size)
    rate[datum != 0] = 1 / datum[datum != 0] * (1000 / dt)

    # Create Poisson distribution and sample inter-spike intervals
    # (incrementing by 1 to avoid zero intervals).
    dist = torch.distributions.Poisson(rate=rate)
    intervals = dist.sample(sample_shape=torch.Size([time + 1]))
    intervals[:, datum != 0] += (intervals[:, datum != 0] == 0).float()

    # ------------------------------------------------------------------
    # Equivalent in effect to the loop in the earlier version of this code.
    # Calculate spike times by cumulatively summing over time dimension.
    times = torch.cumsum(intervals, dim=0).long()
    times[times >= time + 1] = 0

    # Create tensor of spikes.
    spikes = torch.zeros(time + 1, size).byte()
    spikes[times, torch.arange(size)] = 1
    spikes = spikes[1:]
    # ------------------------------------------------------------------

    return spikes.view(time, *shape)

What is the difference between the two? Looking forward to your reply.

@Yanqi-Chen
Collaborator

This framework simulates Poisson encoding by approximating the Poisson distribution with a binomial distribution (in the sense of firing rate); see the discussion in #68. This approach is faster but less accurate (compared with true Poisson sampling).
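As a rough illustration of the binomial approximation (a sketch with made-up intensities, not SpikingJelly's actual API): each step draws an independent Bernoulli spike with probability equal to the normalized intensity, so the spike count over T steps is Binomial(T, x), and the empirical firing rate converges to x.

```python
import torch

torch.manual_seed(0)

x = torch.tensor([0.1, 0.5, 0.9])  # normalized intensities in [0, 1]
T = 10_000                         # number of simulation steps

# One step of the framework's encoder is torch.rand_like(x).le(x);
# stacking T independent steps gives a Binomial(T, x) spike count.
spikes = torch.rand(T, x.numel()).le(x)

# The empirical firing rate per neuron converges to the intensity x.
rate = spikes.float().mean(dim=0)
print(rate)  # close to [0.1, 0.5, 0.9]
```

Because each step is an independent comparison, this encoder is a single vectorized operation per time step, which is where the speed advantage over explicit ISI sampling comes from.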


The bindsnet code quoted above samples inter-spike intervals from a true Poisson distribution and truncates the spike times to the simulation window to obtain the spike train. It is therefore more accurate than the binomial approximation, but also slower.
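To see the ISI-based scheme end to end, here is a hedged, self-contained sketch of the same idea (the rates, step count, and variable names are example values of my own, not bindsnet's API): sample inter-spike intervals from a Poisson distribution, cumulatively sum them into absolute spike times, and drop everything outside the simulation window.

```python
import torch

torch.manual_seed(0)

rate_hz = torch.tensor([20.0, 100.0])  # target firing rates in Hz
time, dt = 5000, 1.0                   # 5000 ms simulated in 1 ms steps
steps = int(time / dt)

# Mean inter-spike interval in simulation steps (1/rate converts Hz to
# seconds; 1000/dt converts seconds to steps), as in the code above.
mean_isi = (1.0 / rate_hz) * (1000.0 / dt)

# Sample ISIs from a Poisson distribution; bump zero intervals to 1 step.
dist = torch.distributions.Poisson(rate=mean_isi)
intervals = dist.sample(sample_shape=torch.Size([steps + 1]))
intervals += (intervals == 0).float()

# Cumulative sum turns intervals into absolute spike times; times that
# overflow the simulation window are redirected to the discarded row 0.
times = torch.cumsum(intervals, dim=0).long()
times[times >= steps + 1] = 0

spikes = torch.zeros(steps + 1, rate_hz.numel(), dtype=torch.uint8)
spikes[times, torch.arange(rate_hz.numel())] = 1
spikes = spikes[1:]

# The empirical rate in Hz should be close to the target rate.
emp_hz = spikes.float().mean(dim=0) * (1000.0 / dt)
print(emp_hz)
```

Note that the extra row 0 acts as a scratch slot: both the unused initial offsets and the overflowing spike times land there and are sliced away, which is why the function samples `steps + 1` intervals.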

@fangwei123456 added the good first issue and question labels Jun 21, 2021
@201528014227051
Author

Understood, thanks for the reply @Yanqi-Chen

@fangwei123456 added the API design label Dec 8, 2021