Merge pull request #33 from Vivswan/master
v1.0.6
Vivswan committed Jun 6, 2023
2 parents 63d80a7 + a235713 commit 3ff13cc
Showing 23 changed files with 104 additions and 227 deletions.
35 changes: 27 additions & 8 deletions .flake8
@@ -2,17 +2,36 @@
max-line-length = 120

extend-ignore =
C101, # Coding magic comment
D100, # Missing docstring in public module
D104, # Missing docstring in public package
D202, # No blank lines allowed after function docstring
D210, # No whitespaces allowed surrounding docstring text
D401, # First line should be in imperative mood
R504, # unnecessary variable assignment before return statement
R505, # unnecessary else after return statement
# No explicit stacklevel argument found
B028,

# Coding magic comment
C101,

# Missing docstring in public module
D100,

# Missing docstring in public package
D104,

# No blank lines allowed after function docstring
D202,

# No whitespaces allowed surrounding docstring text
D210,

# First line should be in imperative mood
D401,

# unnecessary variable assignment before return statement
R504,

# unnecessary else after return statement
R505,

per-file-ignores =
sample_code.py: D100, D101, D102, D103, D104
sample_code_non_analog.py: D100, D101, D102, D103, D104
sample_code_with_logs.py: D100, D101, D102, D103, D104

exclude =
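For context on the newly ignored B028 rule ("No explicit stacklevel argument found", from flake8-bugbear): it flags `warnings.warn` calls that rely on the default `stacklevel=1`. A minimal illustration of the pattern the rule targets, not taken from the AnalogVNN codebase:

```python
import warnings


def old_api():
    # flake8-bugbear B028 flags this call because it omits stacklevel;
    # passing stacklevel=2 would point the warning at old_api()'s caller
    # rather than at this line.
    warnings.warn('old_api() is deprecated, use new_api() instead', DeprecationWarning)


old_api()
```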
1 change: 1 addition & 0 deletions .gitignore
@@ -226,3 +226,4 @@ fabric.properties

# Android studio 3.1+ serialized cache file
.idea/caches/build_file_checksums.ser
.pdm-python
12 changes: 9 additions & 3 deletions CHANGELOG.md
@@ -1,15 +1,21 @@
# Changelog

## 1.0.6

* `Model` is now a subclass of `BackwardModule` for additional functionality.
* Using `inspect.isclass` to check if `backward_class` is a class in `Linear.set_backward_function`.
* Repr using `self.__class__.__name__` in all classes.

## 1.0.5 (Patches for Pytorch 2.0.1)

* Removed unnecessary `PseudoParameter.grad` property.
* Patch for Pytorch 2.0.1, add filtering inputs in `BackwardGraph._calculate_gradients`.

## 1.0.4

* Combined `PseudoParameter` and `PseudoParameterModule` for better visibility
* BugFix: fixed save and load of state_dict of `PseudoParameter` and transformation module
* Removed redundant class `analogvnn.parameter.Parameter`
* Combined `PseudoParameter` and `PseudoParameterModule` for better visibility.
* BugFix: fixed save and load of state_dict of `PseudoParameter` and transformation module.
* Removed redundant class `analogvnn.parameter.Parameter`.

## 1.0.3

24 changes: 19 additions & 5 deletions README.md
@@ -1,11 +1,11 @@
# AnalogVNN

[![arXiv](https://img.shields.io/badge/arXiv-2210.10048-orange.svg)](https://arxiv.org/abs/2210.10048)
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Vivswan/AnalogVNN/blob/v1.0.0/docs/_static/AnalogVNN_Demo.ipynb)
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Vivswan/AnalogVNN/blob/release/docs/_static/AnalogVNN_Demo.ipynb)

[![PyPI version](https://badge.fury.io/py/analogvnn.svg)](https://badge.fury.io/py/analogvnn)
[![Documentation Status](https://readthedocs.org/projects/analogvnn/badge/?version=stable)](https://analogvnn.readthedocs.io/en/stable/?badge=stable)
[![Python](https://img.shields.io/badge/python-3.7--3.10-blue)](https://badge.fury.io/py/analogvnn)
[![Python](https://img.shields.io/badge/python-3.7--3.11-blue)](https://badge.fury.io/py/analogvnn)
[![License: MPL 2.0](https://img.shields.io/badge/License-MPL_2.0-blue.svg)](https://opensource.org/licenses/MPL-2.0)

Documentation: [https://analogvnn.readthedocs.io/](https://analogvnn.readthedocs.io/)
@@ -16,15 +16,29 @@ Documentation: [https://analogvnn.readthedocs.io/](https://analogvnn.readthedocs
- Install AnalogVNN using [pip](https://pypi.org/project/analogvnn/)

```bash
pip install analogvnn
# Current stable release for CPU and GPU
pip install analogvnn

# For additional optional features
pip install analogvnn[full]
```

![3 Layered Linear Photonic Analog Neural Network](docs/_static/analogvnn_model.png)
## Usage:

[//]: # (![3 Layered Linear Photonic Analog Neural Network](https://github.com/Vivswan/AnalogVNN/raw/release/docs/_static/analogvnn_model.png))
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Vivswan/AnalogVNN/blob/release/docs/_static/AnalogVNN_Demo.ipynb)

- Sample code with AnalogVNN: [sample_code.py](https://github.com/Vivswan/AnalogVNN/blob/release/sample_code.py)
- Sample code without
AnalogVNN: [sample_code_non_analog.py](https://github.com/Vivswan/AnalogVNN/blob/release/sample_code_non_analog.py)
- Sample code with AnalogVNN and
Logs: [sample_code_with_logs.py](https://github.com/Vivswan/AnalogVNN/blob/release/sample_code_with_logs.py)
- Jupyter
Notebook: [AnalogVNN_Demo.ipynb](https://github.com/Vivswan/AnalogVNN/blob/release/docs/_static/AnalogVNN_Demo.ipynb)

## Abstract

![3 Layered Linear Photonic Analog Neural Network](https://github.com/Vivswan/AnalogVNN/raw/release/docs/_static/analogvnn_model.png)

**AnalogVNN** is a simulation framework built on PyTorch which can simulate the effects of
optoelectronic noise, limited precision, and signal normalization present in photonic
neural network accelerators. We use this framework to train and optimize linear and
2 changes: 1 addition & 1 deletion analogvnn/graph/AccumulateGrad.py
@@ -41,7 +41,7 @@ def __repr__(self):
str: String representation of the module.
"""

return f'AccumulateGrad({self.module})'
return f'{self.__class__.__name__}({self.module})'

def __call__( # noqa: C901
self,
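The `__repr__` change above (the same pattern is applied in `ArgsKwargs` and `PseudoParameter` below) replaces a hard-coded class name with `self.__class__.__name__`. A small self-contained sketch of why that helps subclasses; the classes here are illustrative, not the real AnalogVNN implementations:

```python
class AccumulateGrad:
    def __init__(self, module):
        self.module = module

    def __repr__(self):
        # self.__class__.__name__ resolves to the runtime type, so any
        # subclass reports its own name without overriding __repr__.
        return f'{self.__class__.__name__}({self.module})'


class ClippedAccumulateGrad(AccumulateGrad):
    pass


print(repr(AccumulateGrad('fc1')))         # AccumulateGrad(fc1)
print(repr(ClippedAccumulateGrad('fc1')))  # ClippedAccumulateGrad(fc1)
```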
4 changes: 2 additions & 2 deletions analogvnn/graph/AcyclicDirectedGraph.py
@@ -131,8 +131,8 @@ def add_edge(
self.graph.nodes[v_of_edge]['fillcolor'] = 'lightblue'
return self

@staticmethod # noqa: C901
def check_edge_parameters(
@staticmethod
def check_edge_parameters( # noqa: C901
in_arg: Union[None, int, bool],
in_kwarg: Union[None, str, bool],
out_arg: Union[None, int, bool],
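The `# noqa: C901` marker moves from the decorator line to the `def` line because flake8 only honors a `noqa` comment on the physical line where the check is reported, and the mccabe plugin reports C901 (function too complex) on the function signature. An illustrative placement, with the signature shortened:

```python
class GraphSketch:
    """Illustrative stand-in, not the real AcyclicDirectedGraph."""

    @staticmethod
    def check_edge_parameters(  # noqa: C901  (complexity is reported on this line)
            in_arg, in_kwarg, out_arg, out_kwarg):
        ...
```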
2 changes: 1 addition & 1 deletion analogvnn/graph/ArgsKwargs.py
@@ -62,7 +62,7 @@ def is_empty(self):
def __repr__(self):
"""Returns a string representation of the parameter."""

return f'ArgsKwargs(args={self.args}, kwargs={self.kwargs})'
return f'{self.__class__.__name__}(args={self.args}, kwargs={self.kwargs})'

@classmethod
def to_args_kwargs_object(cls, outputs: ArgsKwargsInput) -> ArgsKwargs:
3 changes: 2 additions & 1 deletion analogvnn/nn/module/Layer.py
@@ -1,6 +1,7 @@
from __future__ import annotations

import functools
import inspect
from typing import Union, Type, Callable, Sequence, Optional, Set, Iterator, Tuple

from torch import nn, Tensor
@@ -178,7 +179,7 @@ def set_backward_function(self, backward_class: Union[Callable, BackwardModule,
if backward_class == self:
return self

if issubclass(backward_class, BackwardModule):
if inspect.isclass(backward_class) and issubclass(backward_class, BackwardModule):
self._backward_module = backward_class(self)
elif isinstance(backward_class, BackwardModule):
backward_class.set_layer(self)
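The added `inspect.isclass` guard matters because `issubclass()` raises `TypeError` when its first argument is an instance rather than a class, so an already-constructed `BackwardModule` passed to `set_backward_function` would fail before reaching the `isinstance` branch. A simplified, self-contained sketch of the control flow; this is not the actual AnalogVNN implementation:

```python
import inspect


class BackwardModule:
    def set_layer(self, layer):
        self.layer = layer


def set_backward_function(layer, backward_class):
    # Check inspect.isclass() first: issubclass(instance, ...) raises
    # TypeError, so instances must fall through to the isinstance branch.
    if inspect.isclass(backward_class) and issubclass(backward_class, BackwardModule):
        return backward_class()          # a class: construct a fresh instance
    elif isinstance(backward_class, BackwardModule):
        backward_class.set_layer(layer)  # an instance: attach and reuse it
        return backward_class
    raise TypeError(f'unsupported backward function: {backward_class!r}')


set_backward_function('some_layer', BackwardModule)    # works: it is a class
set_backward_function('some_layer', BackwardModule())  # works: no TypeError raised
```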
3 changes: 2 additions & 1 deletion analogvnn/nn/module/Model.py
@@ -7,6 +7,7 @@
from torch import optim, Tensor, nn
from torch.utils.data import DataLoader

from analogvnn.backward.BackwardModule import BackwardModule
from analogvnn.fn.test import test
from analogvnn.fn.train import train
from analogvnn.graph.BackwardGraph import BackwardGraph
@@ -22,7 +23,7 @@
__all__ = ['Model']


class Model(Layer):
class Model(Layer, BackwardModule):
"""Base class for analog neural network models.
Attributes:
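This is the `Model` / `BackwardModule` change listed in the 1.0.6 changelog above. As a rough, hypothetical illustration of what the extra base class provides (the real classes carry much more machinery), a model can now define its own backward pass alongside its forward pass:

```python
class Layer:
    """Hypothetical stand-in for the forward-pass plumbing."""

    def __call__(self, *args, **kwargs):
        return self.forward(*args, **kwargs)


class BackwardModule:
    """Hypothetical stand-in for the custom backward-pass hooks."""

    def backward(self, grad_output):
        raise NotImplementedError


class Model(Layer, BackwardModule):
    # Inheriting from both lets the model behave like a layer in the forward
    # graph while also supplying a backward() for the backward graph.
    def forward(self, x):
        return 2 * x

    def backward(self, grad_output):
        return 2 * grad_output


m = Model()
print(m(3), m.backward(1.0))  # 6 2.0
```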
4 changes: 2 additions & 2 deletions analogvnn/parameter/PseudoParameter.py
@@ -66,7 +66,7 @@ def __init__(self, data=None, requires_grad=True, transformation=None):
self._transformed.original = self
self._transformation = self.identity
self.set_transformation(transformation)
self.substitute_member(self.original, self._transformed, "grad")
self.substitute_member(self.original, self._transformed, 'grad')

def __call__(self, *args, **kwargs):
"""Transforms the parameter.
@@ -117,7 +117,7 @@ def __repr__(self):
str: the string representation.
"""

return f'{PseudoParameter.__name__}(' \
return f'{self.__class__.__name__}(' \
f'transform={self.transformation}' \
f', original={self.original}' \
f')'
File renamed without changes.
2 changes: 1 addition & 1 deletion analogvnn/utils/TensorboardModelLog.py
@@ -179,7 +179,7 @@ def add_summary(
model=model,
input_size=input_size,
train_loader=train_loader,
*args,
*args, # noqa: B026
**kwargs
)

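The `# noqa: B026` comment silences flake8-bugbear's "star-arg unpacking after a keyword argument" warning instead of reordering the call. A tiny standalone demonstration of why the pattern is flagged; the function and names here are made up:

```python
def demo(a, b=None, **kwargs):
    return a, b, kwargs


extra = (10,)
# B026 flags calls like demo(b=2, *extra): the unpacked value still binds to
# the positional parameter `a` even though it appears after the keyword,
# which is easy to misread at the call site.
print(demo(b=2, *extra))  # (10, 2, {})
```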
2 changes: 1 addition & 1 deletion analogvnn/utils/get_model_summaries.py
@@ -6,7 +6,7 @@
from analogvnn.nn.module.Layer import Layer


def get_model_summaries(
def get_model_summaries( # noqa: C901
model: Optional[nn.Module],
input_size: Optional[Sequence[int]] = None,
train_loader: DataLoader = None,
28 changes: 14 additions & 14 deletions docs/_static/AnalogVNN_Demo.ipynb
@@ -18,31 +18,31 @@
"\n",
"<table class=\"tfo-notebook-buttons\" align=\"left\">\n",
" <td>\n",
" <a target=\"_blank\" href=\"https://analogvnn.readthedocs.io/en/v1.0.0/tutorial.html\">\n",
" <a target=\"_blank\" href=\"https://analogvnn.readthedocs.io/en/release/tutorial.html\">\n",
" <center>\n",
" <img src=\"https://analogvnn.readthedocs.io/en/in_progess/_static/analogvnn-logo-square-black.svg\" height=\"32px\" />\n",
" </center>\n",
" View on AnalogVNN\n",
" </a>\n",
" </td>\n",
" <td>\n",
" <a target=\"_blank\" href=\"https://colab.research.google.com/github/Vivswan/AnalogVNN/blob/v1.0.0/docs/_static/AnalogVNN_Demo.ipynb\">\n",
" <a target=\"_blank\" href=\"https://colab.research.google.com/github/Vivswan/AnalogVNN/blob/release/docs/_static/AnalogVNN_Demo.ipynb\">\n",
" <center>\n",
" <img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />\n",
" </center>\n",
" Run in Google Colab\n",
" </a>\n",
" </td>\n",
" <td>\n",
" <a target=\"_blank\" href=\"https://github.com/Vivswan/AnalogVNN/blob/v1.0.0/docs/_static/AnalogVNN_Demo.ipynb\">\n",
" <a target=\"_blank\" href=\"https://github.com/Vivswan/AnalogVNN/blob/release/docs/_static/AnalogVNN_Demo.ipynb\">\n",
" <center>\n",
" <img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />\n",
" </center>\n",
" View source on GitHub\n",
" </a>\n",
" </td>\n",
" <td>\n",
" <a href=\"https://github.com/Vivswan/AnalogVNN/raw/v1.0.0/docs/_static/AnalogVNN_Demo.ipynb\">\n",
" <a href=\"https://github.com/Vivswan/AnalogVNN/raw/release/docs/_static/AnalogVNN_Demo.ipynb\">\n",
" <center>\n",
" <img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />\n",
" </center>\n",
@@ -55,14 +55,14 @@
{
"cell_type": "markdown",
"source": [
"#### To create 3 layered linear photonic analog neural network with 4-bit [precision](https://analogvnn.readthedocs.io/en/v1.0.0/extra_classes.html#reduceprecision), 0.5 [leakage](https://analogvnn.readthedocs.io/en/v1.0.0/extra_classes.html#leakage-or-error-probability) and [clamp](https://analogvnn.readthedocs.io/en/v1.0.0/extra_classes.html#clamp) normalization:\n",
"#### To create 3 layered linear photonic analog neural network with 4-bit [precision](https://analogvnn.readthedocs.io/en/release/extra_classes.html#reduceprecision), 0.5 [leakage](https://analogvnn.readthedocs.io/en/release/extra_classes.html#leakage-or-error-probability) and [clamp](https://analogvnn.readthedocs.io/en/release/extra_classes.html#clamp) normalization:\n",
"\n",
"![3 Layered Linear Photonic Analog Neural Network](analogvnn_model.png)\n",
"\n",
"Python file:\n",
"[Sample code](https://github.com/Vivswan/AnalogVNN/blob/v1.0.0/sample_code.py)\n",
"[Sample code](https://github.com/Vivswan/AnalogVNN/blob/release/sample_code.py)\n",
"and\n",
"[Sample code with logs](https://github.com/Vivswan/AnalogVNN/blob/v1.0.0/sample_code_with_logs.py)"
"[Sample code with logs](https://github.com/Vivswan/AnalogVNN/blob/release/sample_code_with_logs.py)"
],
"metadata": {
"collapsed": false
@@ -192,11 +192,11 @@
"source": [
"## Build a 3 layered linear photonic analog neural network\n",
"\n",
"[`FullSequential`](https://analogvnn.readthedocs.io/en/v1.0.0/autoapi/analogvnn/nn/module/FullSequential/index.html#analogvnn.nn.module.FullSequential.FullSequential) is sequential model where backward graph is the reverse of forward graph.\n",
"[`FullSequential`](https://analogvnn.readthedocs.io/en/release/autoapi/analogvnn/nn/module/FullSequential/index.html#analogvnn.nn.module.FullSequential.FullSequential) is sequential model where backward graph is the reverse of forward graph.\n",
"\n",
"To add the [Reduce Precision](https://analogvnn.readthedocs.io/en/v1.0.0/extra_classes.html#reduce-precision), [Normalization](https://analogvnn.readthedocs.io/en/v1.0.0/extra_classes.html#normalization), and [Noise](https://analogvnn.readthedocs.io/en/v1.0.0/extra_classes.html#noise) before and after the main Linear layer, `add_layer` function is used.\n",
"To add the [Reduce Precision](https://analogvnn.readthedocs.io/en/release/extra_classes.html#reduce-precision), [Normalization](https://analogvnn.readthedocs.io/en/release/extra_classes.html#normalization), and [Noise](https://analogvnn.readthedocs.io/en/release/extra_classes.html#noise) before and after the main Linear layer, `add_layer` function is used.\n",
"\n",
"Leakage definition: [https://analogvnn.readthedocs.io/en/v1.0.0/extra_classes.html#leakage-or-error-probability](https://analogvnn.readthedocs.io/en/v1.0.0/extra_classes.html#leakage-or-error-probability)"
"Leakage definition: [https://analogvnn.readthedocs.io/en/release/extra_classes.html#leakage-or-error-probability](https://analogvnn.readthedocs.io/en/release/extra_classes.html#leakage-or-error-probability)"
]
},
{
@@ -244,7 +244,7 @@
"id": "iOkIKXWoZbmn"
},
"source": [
"Note: [`analogvnn.nn.module.Sequential.Sequential.add_sequence()`](https://analogvnn.readthedocs.io/en/v1.0.0/autoapi/analogvnn/nn/module/Sequential/index.html#analogvnn.nn.module.Sequential.Sequential.add_sequence) is used to create and set forward and backward graphs in AnalogVNN, more information in Inner Workings"
"Note: [`analogvnn.nn.module.Sequential.Sequential.add_sequence()`](https://analogvnn.readthedocs.io/en/release/autoapi/analogvnn/nn/module/Sequential/index.html#analogvnn.nn.module.Sequential.Sequential.add_sequence) is used to create and set forward and backward graphs in AnalogVNN, more information in Inner Workings"
]
},
{
@@ -276,7 +276,7 @@
"\n",
"WeightModel is used to parametrize the parameter of LinearModel to simulate photonic weights\n",
"\n",
"[`FullSequential`](https://analogvnn.readthedocs.io/en/v1.0.0/autoapi/analogvnn/nn/module/FullSequential/index.html#analogvnn.nn.module.FullSequential.FullSequential) is sequential model where backward graph is the reverse of forward graph."
"[`FullSequential`](https://analogvnn.readthedocs.io/en/release/autoapi/analogvnn/nn/module/FullSequential/index.html#analogvnn.nn.module.FullSequential.FullSequential) is sequential model where backward graph is the reverse of forward graph."
]
},
{
@@ -333,7 +333,7 @@
"id": "Dtg27Y80WwR0"
},
"source": [
"Using [`PseudoParameter`](https://analogvnn.readthedocs.io/en/v1.0.0/inner_workings.html#pseudoparameters) to parametrize the parameter"
"Using [`PseudoParameter`](https://analogvnn.readthedocs.io/en/release/inner_workings.html#pseudoparameters) to parametrize the parameter"
]
},
{
@@ -443,7 +443,7 @@
"source": [
"## Conclusion\n",
"\n",
"Congratulations! You have trained a 3 layered linear photonic analog neural network with 4-bit [precision](https://analogvnn.readthedocs.io/en/v1.0.0/extra_classes.html#reduceprecision), 0.5 [leakage](https://analogvnn.readthedocs.io/en/v1.0.0/extra_classes.html#leakage-or-error-probability) and [clamp](https://analogvnn.readthedocs.io/en/v1.0.0/extra_classes.html#clamp) normalization"
"Congratulations! You have trained a 3 layered linear photonic analog neural network with 4-bit [precision](https://analogvnn.readthedocs.io/en/release/extra_classes.html#reduceprecision), 0.5 [leakage](https://analogvnn.readthedocs.io/en/release/extra_classes.html#leakage-or-error-probability) and [clamp](https://analogvnn.readthedocs.io/en/release/extra_classes.html#clamp) normalization"
]
},
{
3 changes: 1 addition & 2 deletions docs/conf.py
@@ -107,8 +107,7 @@
'light_logo': 'analogvnn-logo-wide-white.svg',
'dark_logo': 'analogvnn-logo-wide-black.svg',
'source_repository': 'https://github.com/Vivswan/AnalogVNN',
# 'source_branch': 'master',
'source_branch': 'v1.0.0',
'source_branch': 'release',
'source_directory': 'docs/',
}
# html_logo = '_static/analogvnn-logo-wide-black.svg'
2 changes: 1 addition & 1 deletion docs/install.md
@@ -2,7 +2,7 @@

AnalogVNN is tested and supported on the following 64-bit systems:

- Python 3.7, 3.8, 3.9, 3.10
- Python 3.7, 3.8, 3.9, 3.10, 3.11
- Windows 7 and later
- Ubuntu 16.04 and later, including WSL
- Red Hat Enterprise Linux 7 and later
6 changes: 3 additions & 3 deletions docs/sample_code.md
@@ -1,15 +1,15 @@
# Sample code

<a href="https://colab.research.google.com/github/Vivswan/AnalogVNN/blob/v1.0.0/docs/_static/AnalogVNN_Demo.ipynb" style="font-size:24px;">
<a href="https://colab.research.google.com/github/Vivswan/AnalogVNN/blob/release/docs/_static/AnalogVNN_Demo.ipynb" style="font-size:24px;">
Run in Google Colab:
<img alt="Google Colab" src="https://www.tensorflow.org/images/colab_logo_32px.png" style="vertical-align: bottom;">
</a>

![3 Layered Linear Photonic Analog Neural Network](_static/analogvnn_model.png)

[Sample code](https://github.com/Vivswan/AnalogVNN/blob/v1.0.0/sample_code.py)
[Sample code](https://github.com/Vivswan/AnalogVNN/blob/release/sample_code.py)
and
[Sample code with logs](https://github.com/Vivswan/AnalogVNN/blob/v1.0.0/sample_code_with_logs.py)
[Sample code with logs](https://github.com/Vivswan/AnalogVNN/blob/release/sample_code_with_logs.py)
for 3 layered linear photonic analog neural network with 4-bit precision,
0.5 {ref}`extra_classes:leakage` and {ref}`extra_classes:clamp`
normalization:
2 changes: 1 addition & 1 deletion docs/tutorial.md
@@ -1,6 +1,6 @@
# Tutorial

<a href="https://colab.research.google.com/github/Vivswan/AnalogVNN/blob/v1.0.0/docs/_static/AnalogVNN_Demo.ipynb" style="font-size:24px;">
<a href="https://colab.research.google.com/github/Vivswan/AnalogVNN/blob/release/docs/_static/AnalogVNN_Demo.ipynb" style="font-size:24px;">
Run in Google Colab:
<img alt="Google Colab" src="https://www.tensorflow.org/images/colab_logo_32px.png" style="vertical-align: bottom;">
</a>
