All datasets are subclasses of torch.utils.data.Dataset, i.e., they have __getitem__ and __len__ methods implemented. Hence, they can all be passed to a torch.utils.data.DataLoader, which can load multiple samples in parallel using torch.multiprocessing workers. For example:
from torch.utils.data import DataLoader
from snntorch.spikevision import spikedata

nmnist_data = spikedata.NMNIST('path/to/nmnist_root/')
data_loader = DataLoader(nmnist_data,
                         batch_size=4,
                         shuffle=True,
                         num_workers=args.nThreads)
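Because every dataset implements the same Dataset protocol, any class with __getitem__ and __len__ plugs into DataLoader the same way. The sketch below uses a toy RandomSpikes class, a hypothetical stand-in (not part of snntorch) with the same shape as the datasets above, to show how batches come out of the loader:

```python
import torch
from torch.utils.data import Dataset, DataLoader

class RandomSpikes(Dataset):
    """Toy dataset (hypothetical): 8 samples, each a 10-step random trace."""
    def __init__(self, n_samples=8, n_steps=10):
        self.data = torch.rand(n_samples, n_steps)
        self.targets = torch.randint(0, 2, (n_samples,))

    def __len__(self):
        # Number of samples in the dataset
        return len(self.data)

    def __getitem__(self, idx):
        # Return one (sample, target) pair
        return self.data[idx], self.targets[idx]

loader = DataLoader(RandomSpikes(), batch_size=4, shuffle=True)
for x, y in loader:
    print(x.shape)  # torch.Size([4, 10]) -- batch of 4 samples
```

The snntorch datasets work identically, except that __getitem__ returns event-based samples loaded from disk rather than random tensors.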
For further usage examples of each dataset, please refer to the examples section of the documentation.
- snntorch.spikevision.spikedata.nmnist.NMNIST
- snntorch.spikevision.spikedata.dvs_gesture.DVSGesture
- snntorch.spikevision.spikedata.shd.SHD