
Commit

Merge pull request #59 from Vivswan/develop
v1.0.8
Vivswan committed May 2, 2024
2 parents 508d8a2 + 4be24fd commit 62f09e1
Showing 7 changed files with 35 additions and 25 deletions.
11 changes: 4 additions & 7 deletions CHANGELOG.md
@@ -1,41 +1,38 @@
# Changelog

## 1.0.8
* Removed redundant code from `reduce_precision`.
* Added `types` argument to `PseudoParameter.parametrize_module` for finer selection of which parameters to parametrize.

## 1.0.7
* Fixed `GeLU` backward function equation.
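
As a reference point for this fix (a generic check against PyTorch's autograd, not the library's own backward implementation), the exact erf-based GeLU is GeLU(x) = x · Φ(x), so its derivative is Φ(x) + x · φ(x):

```python
import math

import torch

# Exact (erf-based) GeLU derivative, checked against autograd:
# d/dx GeLU(x) = Phi(x) + x * phi(x), with Phi/phi the standard normal CDF/PDF.
x = torch.linspace(-2.0, 2.0, 9, requires_grad=True)
torch.nn.functional.gelu(x).sum().backward()

xd = x.detach()
phi = torch.exp(-xd ** 2 / 2) / math.sqrt(2 * math.pi)
Phi = 0.5 * (1 + torch.erf(xd / math.sqrt(2)))
print(torch.allclose(x.grad, Phi + xd * phi, atol=1e-6))  # expected: True
```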

## 1.0.6

* `Model` is now a subclass of `BackwardModule` for additional functionality.
* Using `inspect.isclass` to check if `backward_class` is a class in `Linear.set_backward_function`.
* Repr using `self.__class__.__name__` in all classes.

## 1.0.5 (Patches for PyTorch 2.0.1)

* Removed unnecessary `PseudoParameter.grad` property.
* Patch for PyTorch 2.0.1: added input filtering in `BackwardGraph._calculate_gradients`.

## 1.0.4

* Combined `PseudoParameter` and `PseudoParameterModule` for better visibility.
* Bugfix: fixed save and load of `state_dict` for `PseudoParameter` and the transformation module.
* Removed redundant class `analogvnn.parameter.Parameter`.

## 1.0.3

* Added support for no loss function in `Model` class.
* If no loss function is provided, the `Model` object will use outputs for gradient computation.
* Added support for multiple loss outputs from loss function.

## 1.0.2

* Bugfix: removed `graph` from `Layer` class.
* `graph` was causing issues with nested `Model` objects.
* Now `_use_autograd_graph` is directly set while compiling the `Model` object.

## 1.0.1 (Patches for PyTorch 2.0.0)

* Added `grad.setter` to the `PseudoParameterModule` class.

## 1.0.0

* Public release.
8 changes: 4 additions & 4 deletions CITATION.cff
@@ -11,10 +11,10 @@ preferred-citation:
- family-names: Youngblood
given-names: Nathan
affiliation: University of Pittsburgh
doi: "10.48550/arXiv.2210.10048"
journal: "arXiv preprint arXiv:2210.10048"
doi: "10.1063/5.0134156"
journal: "APL Machine Learning"
title: 'AnalogVNN: A fully modular framework for modeling and optimizing photonic neural networks'
year: 2022
year: 2023
authors:
- given-names: Vivswan
family-names: Shah
@@ -25,7 +25,7 @@ authors:
affiliation: University of Pittsburgh
identifiers:
- type: doi
value: 10.48550/arXiv.2210.10048
value: 10.1063/5.0134156
description: >-
The concept DOI for the collection containing
all versions of the Citation File Format.
15 changes: 10 additions & 5 deletions README.md
@@ -1,6 +1,7 @@
# AnalogVNN

[![arXiv](https://img.shields.io/badge/arXiv-2210.10048-orange.svg)](https://arxiv.org/abs/2210.10048)
[![AML](https://img.shields.io/badge/AML-10.1063/5.0134156-orange.svg)](https://doi.org/10.1063/5.0134156)
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Vivswan/AnalogVNN/blob/release/docs/_static/AnalogVNN_Demo.ipynb)

[![PyPI version](https://badge.fury.io/py/analogvnn.svg)](https://badge.fury.io/py/analogvnn)
@@ -52,24 +53,28 @@ digital neural network models to their analog counterparts with just a few lines
taking full advantage of the open-source optimization, deep learning, and GPU acceleration
libraries available through PyTorch.

AnalogVNN Paper: [https://arxiv.org/abs/2210.10048](https://arxiv.org/abs/2210.10048)
AnalogVNN Paper: [https://doi.org/10.1063/5.0134156](https://doi.org/10.1063/5.0134156)

## Citing AnalogVNN

We would appreciate it if you cite the following paper in any publication for which you used AnalogVNN:

```bibtex
@article{shah2022analogvnn,
@article{shah2023analogvnn,
title={AnalogVNN: A fully modular framework for modeling and optimizing photonic neural networks},
author={Shah, Vivswan and Youngblood, Nathan},
journal={arXiv preprint arXiv:2210.10048},
year={2022}
journal={APL Machine Learning},
volume={1},
number={2},
year={2023},
publisher={AIP Publishing}
}
```

Or in textual form:

```text
Vivswan Shah, and Nathan Youngblood. "AnalogVNN: A fully modular framework for modeling
and optimizing photonic neural networks." arXiv preprint arXiv:2210.10048 (2022).
and optimizing photonic neural networks." APL Machine Learning 1.2 (2023).
DOI: 10.1063/5.0134156
```
9 changes: 3 additions & 6 deletions analogvnn/fn/reduce_precision.py
@@ -12,19 +12,16 @@ def reduce_precision(x: TENSOR_OPERABLE, precision: TENSOR_OPERABLE, divide: TEN
Args:
x (TENSOR_OPERABLE): Tensor
precision (TENSOR_OPERABLE): the precision of the quantization.
divide (TENSOR_OPERABLE): the number of bits to be reduced
divide (TENSOR_OPERABLE): the rounding threshold; with a divide of 0.5, 0.6 is rounded to 1.0 and 0.4 is rounded to 0.0.
Returns:
TENSOR_OPERABLE: TENSOR_OPERABLE with the same shape as x, but with values rounded to the nearest
multiple of 1/precision.
"""

x = x if isinstance(x, Tensor) else torch.tensor(x, requires_grad=False)
g: Tensor = x * precision
f = torch.sign(g) * torch.maximum(
torch.floor(torch.abs(g)),
torch.ceil(torch.abs(g) - divide)
) * (1 / precision)
f = torch.sign(x) * torch.ceil(torch.abs(x * precision) - divide) * (1 / precision)
return f


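A quick numerical sketch of the simplified `reduce_precision` above, importing it from the module path shown in this diff; the sample values follow the docstring's rounding example (precision of 1 with a divide of 0.5), plus one call with a finer precision:

```python
import torch

from analogvnn.fn.reduce_precision import reduce_precision

x = torch.tensor([0.6, 0.4, -0.6, 0.26])

# With precision=1 and divide=0.5, values round at the halfway point:
print(reduce_precision(x, precision=1, divide=0.5))  # tensor([ 1.,  0., -1.,  0.])

# With precision=4, outputs are snapped to multiples of 1/4:
print(reduce_precision(x, precision=4, divide=0.5))  # tensor([ 0.5000,  0.5000, -0.5000,  0.2500])
```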
14 changes: 12 additions & 2 deletions analogvnn/parameter/PseudoParameter.py
@@ -1,6 +1,6 @@
from __future__ import annotations

from typing import Callable, Any
from typing import Callable, Any, Optional, Union, Tuple

import torch
import torch.nn as nn
@@ -219,13 +219,20 @@ def parameterize(cls, module: nn.Module, param_name: str, transformation: Callab
return new_param

@classmethod
def parametrize_module(cls, module: nn.Module, transformation: Callable, requires_grad: bool = True):
def parametrize_module(
cls,
module: nn.Module,
transformation: Callable,
requires_grad: bool = True,
types: Optional[Union[type, Tuple[type]]] = None,
):
"""Parametrize all parameters of a module.
Args:
module (nn.Module): the module parameters to parametrize.
transformation (Callable): the transformation.
requires_grad (bool): if True, only parametrize parameters that require gradients.
types (Union[type, Tuple[type]]): the parameter type or tuple of types to parametrize; if None, all parameters are parametrized.
"""

with torch.no_grad():
@@ -236,6 +243,9 @@ def parametrize_module(cls, module: nn.Module, transformation: Callable, require
if requires_grad and not parameter.requires_grad:
continue

if types is not None and not isinstance(parameter, types):
continue

cls.parameterize(module=module, param_name=name, transformation=transformation)

for sub_module in module.children():
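A hypothetical usage sketch of the new `types` filter; the model and the clamping transformation below are illustrative assumptions, not part of the library:

```python
import torch
import torch.nn as nn

from analogvnn.parameter.PseudoParameter import PseudoParameter

model = nn.Sequential(nn.Linear(4, 4), nn.ReLU())

# Only parameters that are instances of `types` are parametrized; passing
# nn.Parameter keeps every plain parameter, while a custom Parameter subclass
# could be used to restrict parametrization to specific parameters.
PseudoParameter.parametrize_module(
    model,
    transformation=lambda w: torch.clamp(w, -1.0, 1.0),  # illustrative transformation
    requires_grad=True,
    types=nn.Parameter,
)
```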
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -19,7 +19,7 @@ where = ["analogvnn"]
[project]
# $ pip install analogvnn
name = "analogvnn"
version = "1.0.7"
version = "1.0.8"
description = "A fully modular framework for modeling and optimizing analog/photonic neural networks"
readme = "README.md"
requires-python = ">=3.7"
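A small sanity check of the installed release after this version bump (assumes Python 3.8+ for `importlib.metadata` and that the package was installed normally):

```python
from importlib.metadata import version

print(version("analogvnn"))  # expected: 1.0.8 once this release is installed
```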
1 change: 1 addition & 0 deletions requirements.txt
@@ -2,6 +2,7 @@
torch
torchvision
torchaudio
dataclasses
numpy>=1.22.2
scipy
networkx

