Tonic example #65

Merged · 2 commits · Sep 7, 2021
4 changes: 3 additions & 1 deletion .gitignore
@@ -1,3 +1,5 @@
examples/data/

# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
@@ -123,4 +125,4 @@ test_nmnist.ipynb
test_shd.ipynb
temp.py
batch_flag.ipynb
examples_2/
examples_2/
169 changes: 169 additions & 0 deletions examples/tutorial_5_neuromorphic_datasets.ipynb
@@ -0,0 +1,169 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "47d5313e-c29d-4581-a9c7-a45122337069",
"metadata": {},
"source": [
"# snnTorch - Tutorial 3\n",
"### By Jason K. Eshraghian and Gregor Lenz"
]
},
{
"cell_type": "markdown",
"id": "e93694d9-0f0a-46a0-b17f-c04ac9b73a63",
"metadata": {},
"source": [
"# Neuromorphic datasets\n",
"Now we're going to look at how we can use datasets that were recorded with a neuromorphic camera. For that we make use of [Tonic](https://github.com/neuromorphs/tonic), which works much like PyTorch vision. We can simply install the package from pypi."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "86102f99-5955-43d6-ba80-db6ed1561733",
"metadata": {},
"outputs": [],
"source": [
"!pip install tonic"
]
},
{
"cell_type": "markdown",
"id": "404c438f-82b2-4041-a7ea-88ecdd17f4cc",
"metadata": {},
"source": [
"Let's start by loading the neuromorphic version of the MNIST dataset which is called [N-MNIST](https://tonic.readthedocs.io/en/latest/datasets.html#n-mnist). We can have a look at some raw events to get a feeling what we're working with. The raw output is an NxD array of events. Every row is one event and the columns are for x and y coordinates, timestamp and polarity."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7d286ef9-5fe6-4578-a686-91559a1f81d2",
"metadata": {},
"outputs": [],
"source": [
"import tonic\n",
"\n",
"dataset = tonic.datasets.NMNIST(save_to='./data', train=False, download=True,)\n",
"events, target = dataset[0]\n",
"print(events)"
]
},
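{
"cell_type": "markdown",
"id": "b7f3e2a1-1111-4aaa-9bbb-000000000001",
"metadata": {},
"source": [
"The order of those columns is stored in the dataset's `ordering` attribute. As a quick sanity check, we can print it alongside the number of events in this sample; this sketch assumes `events` is a plain NumPy array of shape (N, 4)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b7f3e2a1-1111-4aaa-9bbb-000000000002",
"metadata": {},
"outputs": [],
"source": [
"# the ordering string tells us which column is which, e.g. 'xytp'\n",
"print(dataset.ordering)\n",
"# every row is one event, so the row count is the number of events\n",
"print(len(events), 'events with', events.shape[1], 'columns each')"
]
},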
{
"cell_type": "markdown",
"id": "71dfd2fb-7ea7-483f-bef0-ff3d444df44a",
"metadata": {},
"source": [
"If we were to accumulate those events over time and plot the bins as images, it looks like this:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e6d6a0b0-2a73-4dbe-9576-d06c251f0fa4",
"metadata": {},
"outputs": [],
"source": [
"tonic.utils.plot_event_grid(events, dataset.ordering)"
]
},
{
"cell_type": "markdown",
"id": "f6bcc031-d11a-4471-b3aa-335eec76d7ad",
"metadata": {},
"source": [
"Our neural network doesn't just take as an input an array of raw events. We want to convert the raw data to a representation that is suitable, such as a tensor. We can choose a set of transforms to apply to our data before feeding it to our network. The neuromorphic camera sensor has a temporal resolution of microseconds, which when converted into a dense representation ends up in a very large tensor. That is why we specify a [Downsample](https://tonic.readthedocs.io/en/latest/transformations.html#downsample-timestamps-and-or-spatial-coordinates) transform to reduce temporal resolution to milliseconds, which is sufficient for our task. Then we convert the raw array of events into a sparse tensor from which the dense version can easily be generated."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "30f249be-8a65-4c1c-a21c-d561e904b4bf",
"metadata": {},
"outputs": [],
"source": [
"import tonic.transforms as transforms\n",
"\n",
"transform = transforms.Compose([transforms.Downsample(time_factor=1e-3),\n",
" transforms.ToSparseTensor(merge_polarities=False),]) # set this to True if you want a single channel\n",
"\n",
"download = True\n",
"trainset = tonic.datasets.NMNIST(save_to='./data', download=download, transform=transform, train=True)\n",
"testset = tonic.datasets.NMNIST(save_to='./data', download=download, transform=transform, train=False)"
]
},
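{
"cell_type": "markdown",
"id": "b7f3e2a1-1111-4aaa-9bbb-000000000003",
"metadata": {},
"source": [
"Before wiring up a dataloader, it is worth checking what a single transformed sample looks like. This quick sketch assumes the sparse tensor reports its dense dimensions via `.shape`, i.e. (Time, Channels, Width, Height)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b7f3e2a1-1111-4aaa-9bbb-000000000004",
"metadata": {},
"outputs": [],
"source": [
"# fetch one transformed sample and inspect its shape and label\n",
"sample, label = trainset[0]\n",
"print(sample.shape, label)"
]
},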
{
"cell_type": "markdown",
"id": "3831428d-0511-4fde-84d9-11d08fa45df7",
"metadata": {},
"source": [
"Now we can start loading data much like we would load from any other Pytorch dataloader. Because SNNTorch uses time as its first dimension, we have to permute the tensor dimensions from (Batch, Time, Channels, Width, Height) to (Time, Batch, Channels, Width, Height). "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5b35c7cd-d292-47cd-9203-7f31aa7f7207",
"metadata": {},
"outputs": [],
"source": [
"from torch.utils.data import DataLoader\n",
"trainloader = DataLoader(trainset, shuffle=True)\n",
"\n",
"event_tensor, target = next(iter(trainloader))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "03aa6a71-5f6f-4551-8a67-87c875cc91f7",
"metadata": {},
"outputs": [],
"source": [
"event_tensor"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7d29041f-c284-4ab3-9081-7c1aa23437ad",
"metadata": {},
"outputs": [],
"source": [
"print(event_tensor.to_dense().permute([1,0,2,3,4]).shape)\n",
"# feed this to the network\n",
"event_tensor.to_dense().permute([1,0,2,3,4])"
]
},
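{
"cell_type": "markdown",
"id": "b7f3e2a1-1111-4aaa-9bbb-000000000005",
"metadata": {},
"source": [
"To make the `# feed this to the network` comment concrete, here is a minimal sketch of stepping a spiking neuron through the time dimension. It assumes snnTorch's `snn.Leaky` neuron; the layer sizes and the `beta` value are illustrative placeholders rather than a tuned model."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b7f3e2a1-1111-4aaa-9bbb-000000000006",
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"import torch.nn as nn\n",
"import snntorch as snn\n",
"\n",
"# N-MNIST frames are 2 x 34 x 34; 10 output classes (illustrative sizes)\n",
"fc = nn.Linear(2 * 34 * 34, 10)\n",
"lif = snn.Leaky(beta=0.9)  # beta chosen arbitrarily for this sketch\n",
"\n",
"data = event_tensor.to_dense().permute([1, 0, 2, 3, 4]).float()\n",
"\n",
"mem = lif.init_leaky()  # initialise the membrane potential\n",
"spk_rec = []\n",
"for step in range(data.shape[0]):  # loop over the time dimension\n",
"    cur = fc(data[step].flatten(1))  # (Batch, 2*34*34) -> (Batch, 10)\n",
"    spk, mem = lif(cur, mem)\n",
"    spk_rec.append(spk)\n",
"\n",
"print(torch.stack(spk_rec).shape)  # (Time, Batch, 10)"
]
},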
{
"cell_type": "code",
"execution_count": null,
"id": "164b9c10-565d-4b4b-9511-00b9d6c2bec8",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.10"
}
},
"nbformat": 4,
"nbformat_minor": 5
}