
Added Patching Support for torch.nn.Sequential Containers #88

Merged: 3 commits merged from sequential into master on Sep 10, 2021

Conversation

coreylammie
Owner

Added patching support for torch.nn.Sequential containers to memtorch.mn.Module.patch_model and memtorch.bh.nonideality.apply_nonidealities.
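
In rough terms, the change means the patching routines now descend into nn.Sequential containers and replace supported child layers in place. Below is a minimal sketch of that idea only; the helper names (patch_module, supported_types, patch_fn) are placeholders for illustration, not the actual diff:

import torch.nn as nn

def patch_module(module, supported_types, patch_fn):
    # Walk the immediate children; recurse into nn.Sequential so that layers
    # nested inside containers are also considered for patching.
    for name, child in module.named_children():
        if isinstance(child, nn.Sequential):
            patch_module(child, supported_types, patch_fn)
        elif isinstance(child, tuple(supported_types)):
            # Replace the supported layer (e.g. nn.Conv2d, nn.Linear) with its
            # patched equivalent; setattr also replaces children of nn.Sequential.
            setattr(module, name, patch_fn(child))
    return module

Called with supported_types=[nn.Conv2d, nn.Linear], a routine of this shape would rewrite both Sequential containers in the test network shown further down.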

@codecov

codecov bot commented Sep 10, 2021

Codecov Report

Merging #88 (dcc85f3) into master (bb8836e) will increase coverage by 0.02%.
The diff coverage is 60.71%.


@@            Coverage Diff             @@
##           master      #88      +/-   ##
==========================================
+ Coverage   90.74%   90.76%   +0.02%     
==========================================
  Files          54       54              
  Lines        2063     2058       -5     
==========================================
- Hits         1872     1868       -4     
+ Misses        191      190       -1     
Flag        Coverage Δ
unittests   90.76% <60.71%> (+0.02%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files                           Coverage Δ
memtorch/mn/Module.py                    87.03% <44.44%> (-5.28%) ⬇️
memtorch/bh/nonideality/NonIdeality.py   87.50% <66.66%> (+4.96%) ⬆️
memtorch/version.py                      100.00% <100.00%> (ø)

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update bb8836e...dcc85f3.

@coreylammie
Owner Author

This functionality was verified using the following code snippet:

import torch
from torch.autograd import Variable
import memtorch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import numpy as np
import copy
from memtorch.mn.Module import patch_model
from memtorch.map.Input import naive_scale
from memtorch.map.Parameter import naive_map
from memtorch.bh.nonideality.NonIdeality import apply_nonidealities
from collections import OrderedDict


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # Convolutional feature extractor wrapped in a positionally-indexed nn.Sequential container
        self.convs = nn.Sequential(
            nn.Conv2d(1, 20, 5, 1),
            nn.MaxPool2d(2, 2),
            nn.Conv2d(20, 50, 5, 1),
            nn.MaxPool2d(2, 2),
            nn.ReLU(),
        )
        # Fully connected layer wrapped in an OrderedDict-keyed nn.Sequential container
        self.fc1 = nn.Sequential(OrderedDict([('fc1', nn.Linear(4 * 4 * 50, 500))]))
        self.fc2 = nn.Linear(500, 10)

    def forward(self, x):
        x = self.convs(x)
        x = x.view(-1, 4 * 4 * 50)
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return x

device = torch.device('cpu' if 'cpu' in memtorch.__version__ else 'cuda')  # CPU-only memtorch builds include 'cpu' in the version string
model = Net().to(device)
reference_memristor = memtorch.bh.memristor.VTEAM
reference_memristor_params = {'time_series_resolution': 1e-10}
# Patch all Conv2d and Linear layers, including those nested inside nn.Sequential containers
patched_model = patch_model(copy.deepcopy(model),
                            memristor_model=reference_memristor,
                            memristor_model_params=reference_memristor_params,
                            module_parameters_to_patch=[torch.nn.Conv2d, torch.nn.Linear],
                            mapping_routine=naive_map,
                            transistor=True,
                            programming_routine=None,
                            tile_shape=(128, 128),
                            max_input_voltage=0.3,
                            scaling_routine=naive_scale,
                            ADC_resolution=8,
                            ADC_overflow_rate=0.,
                            quant_method='linear')
patched_model.tune_()
# Apply device-fault non-idealities to the converted model
patched_model_ = apply_nonidealities(copy.deepcopy(patched_model),
                                     non_idealities=[memtorch.bh.nonideality.NonIdeality.DeviceFaults],
                                     lrs_proportion=0.5,
                                     hrs_proportion=0.5,
                                     electroform_proportion=0)
patched_model_.tune_()
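
The snippet exercises both a positionally-indexed nn.Sequential (self.convs) and an OrderedDict-keyed one (self.fc1), confirming that the nested Conv2d and Linear layers are patched, that non-idealities are applied to them, and that tune_() runs on the resulting models.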

Co-authored-by: coreylammie <coreylammie@users.noreply.github.com>
@coreylammie coreylammie merged commit e6825bb into master Sep 10, 2021
@coreylammie coreylammie deleted the sequential branch September 10, 2021 03:50