
Abstractions #38

Merged
merged 117 commits into from
Aug 13, 2020
Changes from 106 commits
117 commits
670683f
starts mad_competition
billbrod Jun 20, 2019
f5cb748
Merge branch 'master' of github.com:LabForComputationalVision/plenopt…
billbrod Jul 29, 2019
c25bf1c
adds load_images function
billbrod Dec 6, 2019
4953e10
adds folder necessary for load_images tests
billbrod Dec 6, 2019
e899974
removes accidental print statement
billbrod Dec 13, 2019
236dbed
starts abstraction for synthesis methods
billbrod Dec 13, 2019
5145d18
adds scikit image dependency
billbrod Dec 13, 2019
282c5a4
Merge branch 'master' of github.com:LabForComputationalVision/plenopt…
billbrod Jan 10, 2020
d35d316
adds MSE class, corrects linewrap for MAD
billbrod Jan 10, 2020
2e3741b
Merge branch 'MAD' of github.com:LabForComputationalVision/plenoptic …
billbrod Jan 10, 2020
3daa103
Adds initial MAD competition implementation
billbrod Jan 24, 2020
53cfc1d
adds comment explaining the proposed_img
billbrod Jan 27, 2020
997ec90
removes manual line search for finding nu
billbrod Jan 31, 2020
541366b
adds ability to set fix_step_n_iter
billbrod Jan 31, 2020
33f85d8
bugfix: remove the third arg to objective_function
billbrod Jan 31, 2020
5f04dcc
adds _init_dict() helper function
billbrod Jan 31, 2020
e9b0bc9
all targets supported
billbrod Feb 7, 2020
e74a045
moves normalized_mse
billbrod Feb 7, 2020
9930280
moves plotting and animate to Synthesis
billbrod Feb 14, 2020
c268f27
plotting and animating now work with mad!
billbrod Feb 21, 2020
8a513fd
adds norm_loss option
billbrod Feb 28, 2020
9c4f13c
display.plot_representation now uses vrange='indep0' for images
billbrod Feb 28, 2020
27595ab
fixes linewrap in mssim
billbrod Feb 28, 2020
d28df98
mad competition now supports metrics as well as models
billbrod Feb 28, 2020
18b282e
removes unnecessary print statement
billbrod Feb 28, 2020
d48ec6b
bugfixes
billbrod Feb 28, 2020
eee23aa
moves shared initialization code to Synthesis.py
billbrod Mar 13, 2020
35bf51c
moves metric support to Synthesis.py
billbrod Mar 13, 2020
0a2f19e
breaks synthesize into functions and moves some to Synthesis
billbrod Mar 20, 2020
3f2294f
updates tests for checking whether matched_rep is a parameter
billbrod Mar 20, 2020
3def646
bugfix: typo
billbrod Mar 20, 2020
3abdc15
bugfix: mad_competition was messing up saved_representation
billbrod Mar 27, 2020
525e020
adds coarse-to-fine to Synthesis, standardizes synthesize() call
billbrod Apr 3, 2020
4196c37
fixes synthesize_all
billbrod Apr 3, 2020
0e760d5
changes device and dtype to DEVICE and DTYPE
billbrod Apr 3, 2020
5ad962a
send metric dummy-model to target_image device
billbrod Apr 3, 2020
241f743
scope doesn't need to be modules for this
billbrod Apr 3, 2020
e4b4af4
corrects load so it will work with MADCompetition
billbrod Apr 3, 2020
70e7ac1
bugfix: load needs to return the object
billbrod Apr 3, 2020
5f9b4ee
adds tests file
billbrod Apr 3, 2020
cf6c340
fixes broken tests
billbrod Apr 10, 2020
d8c97a7
Merge branch 'master' into abstractions
billbrod Apr 10, 2020
27264e8
adds new capabilities to mad_competition, updates docstrings, tests
billbrod Apr 10, 2020
510750c
test_resume_exceptions bugfix
billbrod Apr 10, 2020
ada09b7
more bugfixes for test_mad
billbrod Apr 10, 2020
d577805
bugfix for synthesize_all
billbrod Apr 10, 2020
1a82a13
changes synthesize default args
billbrod Apr 10, 2020
28f82a9
adds metric/classes/NLP
billbrod Apr 10, 2020
b88c48a
adds if_existing arg to MADCompetition.synthesize_all()
billbrod Apr 10, 2020
9be2950
starts MADCompetition example notebook
billbrod Apr 10, 2020
966ae37
some quality of life changes
billbrod Apr 17, 2020
6dfa24e
adds Metamer tutorial
billbrod Apr 17, 2020
d29c29b
adds simple example
billbrod May 4, 2020
0c69693
fixes FrontEnd issue
billbrod May 8, 2020
764f7bb
removes unnecessary input, adds extra documentation
billbrod May 8, 2020
3999599
adds link to MAD Competition notebook
billbrod May 8, 2020
18219cf
adds more info on MAD-specific args/attributes
billbrod May 8, 2020
2cfca2d
starts Synthesis notebook
billbrod May 8, 2020
0246833
updates metamer continue test for new way of continuing
billbrod May 15, 2020
0adc73a
adds tools/optim.py
billbrod May 22, 2020
00cf4e3
overhauls loss_function
billbrod May 22, 2020
b00627b
bugfix: removes extra arg
billbrod May 22, 2020
a211da2
corrects save() and load() for metrics and loss function
billbrod May 23, 2020
12b8049
adds verbose flag
billbrod May 25, 2020
bf669a8
adds small notes
billbrod May 25, 2020
052fe4c
adds pytest-timeout
billbrod May 25, 2020
9dca101
updates pytest-timeout
billbrod May 25, 2020
0b04cbc
pytest no longer captures output
billbrod May 26, 2020
ab6692e
adds more print statements?
billbrod May 26, 2020
30dd960
adds more text
billbrod May 26, 2020
c4e31b7
removes plotting from the malfunctioning test
billbrod May 26, 2020
aef41d6
fixes problem
billbrod May 26, 2020
29e85f7
try another way to fix the problem
billbrod May 26, 2020
fe3bf00
last fix worked, so removing print and -v -s
billbrod May 26, 2020
a5f2005
adds notebook showing simple MAD example
billbrod Jun 12, 2020
8d65038
adds choices for how to do coarse-to-fine optimization
billbrod Jun 15, 2020
321775d
adds store/save_progress attributes to save
billbrod Jun 15, 2020
8402a9b
moves generate_norm_stats and zscore_stats to po.optim
billbrod Jun 26, 2020
6eed195
Merge branch 'abstractions' of github.com:LabForComputationalVision/p…
billbrod Jun 26, 2020
863c41c
bugfix: moves import statements
billbrod Jun 27, 2020
e7b5d40
adds code to view external MAD results
billbrod Jul 11, 2020
fbaf91e
removes pandas import
billbrod Jul 13, 2020
1b354e7
updates plot_MAD_results for compatibility with new MATLAB code
billbrod Jul 31, 2020
663f455
updates to SSIM
billbrod Aug 4, 2020
df92b37
changes for SSIM and MAD
billbrod Aug 4, 2020
aa86d8b
Merge branch 'master' into abstractions
billbrod Aug 4, 2020
ca564a3
bugfix: corrects names/locations of functions
billbrod Aug 5, 2020
9b7e2ea
renames min/max to best/worst
billbrod Aug 5, 2020
b5d56cb
corrects add_noise
billbrod Aug 5, 2020
d8ff0d0
removes msssim, SSIM, MSSSIM, restructures ssim
billbrod Aug 5, 2020
13adf0f
adds SSIM tests
billbrod Aug 6, 2020
d3dd4e2
removes TODO from metamer.py
billbrod Aug 6, 2020
d7f5933
bugfix for mad competition saving
billbrod Aug 6, 2020
85f8c72
corrects failing tests
billbrod Aug 6, 2020
64c294d
moves optimizer and scheduler to hidden attributes
billbrod Aug 6, 2020
58d4b00
Merge branch 'master' into abstractions
billbrod Aug 7, 2020
77d2c72
Merge branch 'master' of github.com:LabForComputationalVision/plenopt…
billbrod Aug 7, 2020
9196906
removes normalized_mse
billbrod Aug 7, 2020
40bf306
adds tests for new MAD saving changes
billbrod Aug 7, 2020
599314a
renames some Synthesis attributes
billbrod Aug 7, 2020
b7b7633
moves some attributes to hidden
billbrod Aug 7, 2020
3fc54e6
Synthesis no longer a torch.nn.Module
billbrod Aug 7, 2020
bf5f5e0
saves coarse_to_fine
billbrod Aug 7, 2020
f6d06ae
starts cleaning up docstring
billbrod Aug 7, 2020
aaba29d
Big update on documentation, reverts MAD init noise
billbrod Aug 11, 2020
8eb2d3c
Update perceptual_distance.py
lyndond Aug 11, 2020
9ab9513
Adds some additional intro to Simple_MAD
billbrod Aug 11, 2020
66b7414
Reruns with better noise initialization
billbrod Aug 11, 2020
f41f728
changes to docstrings that Lyndon pointed out
billbrod Aug 11, 2020
b45fbce
Merge branch 'abstractions' of github.com:LabForComputationalVision/p…
billbrod Aug 11, 2020
ff4c32a
Merge branch 'master' of github.com:LabForComputationalVision/plenopt…
billbrod Aug 11, 2020
f2ec3c1
updates docs
billbrod Aug 11, 2020
3fe5f3d
adds info on adding new tutorial to docs
billbrod Aug 11, 2020
b6352b0
Merge branch 'master' of github.com:LabForComputationalVision/plenopt…
billbrod Aug 13, 2020
bd6792c
cleans up tests
billbrod Aug 13, 2020
953fa10
deletes to_delete/ files
billbrod Aug 13, 2020
c5698bf
renames notebooks, adds nblinks
billbrod Aug 13, 2020
2 changes: 2 additions & 0 deletions .gitignore
@@ -11,6 +11,8 @@ __pycache__/

examples/data/cifar*
data/plenoptic-test-files
data/ssim_images
data/ssim_analysis.mat
data/cat7*
data/elep*
docs/_build
2 changes: 2 additions & 0 deletions .travis.yml
@@ -1,5 +1,7 @@
language: python
dist: xenial
services:
- xvfb
env:
- TEST_SCRIPT=metamers
- TEST_SCRIPT=models
Binary file added data/256x256/checkerboard.pgm
Binary file not shown.
88 changes: 88 additions & 0 deletions data/256x256/curie.pgm

Large diffs are not rendered by default.

Binary file added data/256x256/einstein.png
778 changes: 778 additions & 0 deletions examples/MAD_Competition.ipynb

Large diffs are not rendered by default.

10,528 changes: 10,528 additions & 0 deletions examples/Metamer.ipynb

Large diffs are not rendered by default.

357 changes: 357 additions & 0 deletions examples/Original_MAD.ipynb

Large diffs are not rendered by default.

341 changes: 341 additions & 0 deletions examples/Simple_MAD.ipynb

Large diffs are not rendered by default.

90 changes: 90 additions & 0 deletions examples/Synthesis.ipynb
@@ -0,0 +1,90 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Implementing New Synthesis Methods\n",
"\n",
"This notebook assumes you are familiar with synthesis methods and have interacted with them in this package, and would like to either understand what happens under the hood a little better or implement your own using the existing API.\n",
Member commented:
Should we have an executable example for this ipynb (even if really dumb/simple)?

Member Author replied:
Yeah, I couldn't come up with what an example would look like here. Any thoughts?

Otherwise, this should just probably not be a notebook, and make it a rst in the docs folder instead.

"\n",
"`Synthesis` is an abstract class that should be inherited by any synthesis method (e.g., `Metamer`, `MADCompetition`). It provides many helper functions which, depending on your method, can be used directly or modified slightly. For the two extremes on this, see the source code for `Metamer` and `MADCompetition`: `Metamer` uses almost everything exactly as written, whereas `MADCompetition`, because it works with two models, requires extensive modification. Even when you're modifying the methods, however, you should try to:\n",
"\n",
"1. Maintain the names and, as much as possible, the call signatures. We want it to be easy to use the different synthesis methods so the way users interact with them should be as similar as possible.\n",
" - The most common reason you'd modify the call signature would be for adding arguments. If an argument is a modification / tweak of an existing one, place it next to that existing argument. If it's completely novel (and important), place it near the beginning. \n",
"    - For example, `MADCompetition` requires two models, instead of one, during initialization, and a new required argument, `synthesis_target` for `synthesize()`. The standard initialization call signature is `(target_image, model, loss_function, **model_kwargs)`, so `MADCompetition`'s is `(target_image, model_1, model_2, loss_function, model_1_kwargs, model_2_kwargs)`. The new argument for `synthesize()` goes at the beginning.\n",
"2. Reuse existing methods. The basic idea of many synthesis methods is pretty similar: update the input image based on the gradient (or a function of the gradient) of the model. Therefore, the code for much of what you'll want to do already exists and you will just need to e.g., call it with a different argument, specify what model to use, modify the gradient before updating the image.\n",
"3. Make sure all the existing public-facing methods either work or raise a `NotImplementedError`. We want people either to be able to use the methods they're used to from other synthesis methods for better understanding synthesis (for example, plotting the synthesis status or creating an animation of progress), or to know why they cannot. For example, because `MADCompetition` has two models, we want to plot both losses in `plot_loss`. We can make use of the existing `Synthesis.plot_loss()` method, just modifying where it grabs the data from, and call it twice. To the user, there's no difference in how it creates the plot. However, there's no need to do this for the private methods (e.g., `_set_seed()`).\n",
"4. Add any natural generalizations. `MADCompetition` stimuli come in sets of 4, and so it makes sense to provide a function that generalizes `plot_synthesized_image()` to show all 4 of them: `plot_synthesized_image_all()`.\n",
"\n",
"## Structure\n",
"\n",
"Now, let's walk through the structure of a `Synthesis` class.\n",
"\n",
"The two most important functions are `__init__()`, which initializes the class, and `synthesize()` which synthesizes an image. \n",
"\n",
"`Synthesis.__init__()` provides a lot of code that you can use (as well as a basis for the docstring), and should be called unless you have a *really* good reason not to. It will automatically support the use of models and metrics, allow modifying the loss function, and initialize many of the class's attributes. You may want to call it and then do additional setup, e.g., set up a second model or initialize new attributes.\n",
"\n",
"`Synthesis.synthesize()` cannot be called, but provides a skeleton of what `synthesize()` should look like (as well as a basis for the docstring). It shows how the various hidden helper methods are used to set up the synthesis call and core loop. You'll probably want to copy this into your new synthesis method's `synthesize()` and then modify it. You'll certainly need to change the initialization of the matched image, which varies from method to method (for instance, `Metamer` uses random noise or a new image, whereas `MADCompetition` uses the reference image plus some noise). You may otherwise be able to use the method as it's written, just modifying the helper functions.\n",
"\n",
"`Synthesis` also contains a variety of plotting and animating functions. You will probably need to think about what to plot, but should hopefully be able to adapt the existing display code to your needs:\n",
"- `Synthesis.plot_representation_error()` calls `po.tools.display.plot_representation` on `Synthesis.representation_error()`, which takes the difference between `Synthesis.saved_representation` and `Synthesis.base_representation`.\n",
"- `Synthesis.plot_loss()` plots `Synthesis.loss` as a function of iterations.\n",
"- `Synthesis.plot_synthesized_image()` calls `pt.imshow` on `Synthesis.synthesized_signal`.\n",
"- `Synthesis.plot_synthesis_status()` combines the three above plots into one figure.\n",
"- `Synthesis.animate()` animates the above figure over iterations.\n",
"\n",
"## Important Attributes\n",
"\n",
"In order to mesh with `Synthesis`, you'll need to adopt its naming conventions for its attributes:\n",
"- At initialization, you should take something like the following arguments, which will get stored as attributes:\n",
" - `base_signal`: the signal you're basing your synthesis off of.\n",
" - `model`: the model (`torch.nn.Module`) or metric (callable) that you're basing your synthesis off of\n",
" - `loss_function`: the callable to use for computing distance, must return a scalar. Can be `None`, in which case we use the l2-norm of the difference in representation space.\n",
"- The model's representation of `base_signal` should be `base_representation`.\n",
"- During iterative synthesis: \n",
" - The synthesis-in-progress is `synthesized_signal` and the model's representation of it is `synthesized_representation`.\n",
" - Loss is `loss`, norm of the gradient is `gradient`, learning rate is `learning_rate`\n",
" - If user wants to store progress, then `store_progress` is either a boolean or an integer specifying how often to update the following attributes, which store the corresponding other attributes:\n",
" - `saved_signal` contains `synthesized_signal`\n",
" - `saved_representation` contains `synthesized_representation`\n",
" - `saved_signal_gradient` contains `synthesized_signal.grad`\n",
" - `saved_representation_gradient` contains `synthesized_representation.grad`\n",
"  - If you want to make use of coarse-to-fine optimization, `_init_ctf_and_randomizer` will take care of initializing the following attributes, which `_optimizer_step` and `_closure` then use:\n",
" - `scales` is a copy of `model.scales` and will be edited over the course of optimization to specify which scale we're working on at the moment\n",
" - `scales_loss`: scale-specific loss at each iteration (`loss` contains the loss computed with the whole model)\n",
" - `scales_timing`: dictionary containing the iterations where we started and stopped synthesizing each scale\n",
" - `scales_finished`: list of scales that we've finished optimizing\n",
" - For saving during synthesis (in case of failure or something), `save_progress` acts like `store_progress` and `save_path` specifies the path to the `.pt` file for saving. \n",
" - The other arguments to `synthesize()`, as documented there, are also set as attributes and made use of by `_optimizer_step` and `_closure`, but are not necessary for the other functionality.\n",
" \n",
"## Required methods\n",
"\n",
"The only methods you need to implement are `__init__()`, `save()`, and `load()`:\n",
"- For save, you just need to tell `super().save()` which attributes you wish to save. It's recommended you include the `save_model_reduced` argument as well (see `Metamer` tutorial notebook for an explanation of that).\n",
"- For load, you need to tell `super().load()` what the name of the attribute that contains the model is (e.g., `model`)."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python [conda env:plenoptic]",
"language": "python",
"name": "conda-env-plenoptic-py"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.0"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
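The subclassing pattern the notebook above describes can be sketched in miniature. The following is a hypothetical, self-contained illustration of the naming conventions (`base_signal`, `synthesized_signal`, `store_progress`, `saved_signal`) and the division of labor between the base class and a subclass — it is not the actual plenoptic API: a toy scalar "signal", a hand-written analytic gradient, and a plain-Python update step stand in for tensors, autograd, and a torch optimizer.

```python
import json


class Synthesis:
    """Toy stand-in for the abstract base class described above (hypothetical API)."""

    def __init__(self, base_signal, model, loss_function=None):
        self.base_signal = base_signal
        self.model = model
        # default loss: squared difference of representations (stands in
        # for the l2-norm of the representation difference)
        self.loss_function = loss_function or (lambda x, y: (x - y) ** 2)
        self.base_representation = model(base_signal)
        self.loss = []
        self.saved_signal = []

    def synthesize(self):
        # the real base class similarly provides only a skeleton
        raise NotImplementedError("subclasses implement the synthesis loop")

    def save(self, path, attrs):
        # subclasses tell the base class which attributes to serialize
        with open(path, "w") as f:
            json.dump({a: getattr(self, a) for a in attrs}, f)


class ToySynthesis(Synthesis):
    """Minimal subclass: gradient descent on a scalar 'signal'."""

    def synthesize(self, n_iter=100, lr=0.1, store_progress=False):
        self.synthesized_signal = 0.0  # initialization varies by method
        for _ in range(n_iter):
            rep = self.model(self.synthesized_signal)
            self.loss.append(self.loss_function(rep, self.base_representation))
            # analytic gradient of (m*s - m*b)**2 wrt s, for model m(s) = 2*s
            grad = 2 * (rep - self.base_representation) * 2
            self.synthesized_signal -= lr * grad
            if store_progress:
                self.saved_signal.append(self.synthesized_signal)
        return self.synthesized_signal


synth = ToySynthesis(base_signal=3.0, model=lambda s: 2 * s)
result = synth.synthesize(store_progress=True)
print(round(result, 3))
```

The point is the shape, not the math: `__init__` stores the shared attributes, `synthesize()` owns the loop and the method-specific initialization, and `store_progress` snapshots `synthesized_signal` into `saved_signal`, exactly as the conventions above prescribe.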
7,907 changes: 7,907 additions & 0 deletions examples/simple_example.ipynb

Large diffs are not rendered by default.

2 changes: 2 additions & 0 deletions plenoptic/__init__.py
@@ -10,6 +10,8 @@
# from .tools.linalg import *
from .tools.display import *
from .tools.data import *
from .tools import optim
from .tools import external


from .version import version as __version__
6 changes: 4 additions & 2 deletions plenoptic/metric/__init__.py
@@ -1,2 +1,4 @@
from .perceptual_distance import ssim, msssim, nlpd, nspd
from .model_metric import model_metric
from .perceptual_distance import ssim, nlpd, nspd, ssim_map
from .model_metric import model_metric
from .naive import mse
from .classes import NLP
47 changes: 47 additions & 0 deletions plenoptic/metric/classes.py
@@ -0,0 +1,47 @@
import torch
from .perceptual_distance import normalized_laplacian_pyramid


class NLP(torch.nn.Module):
r"""simple class for implementing normalized laplacian pyramid

This class just calls
``plenoptic.metric.normalized_laplacian_pyramid`` on the image and
returns a 3d tensor with the flattened activations.

NOTE: synthesis using this class will not be the exact same as
synthesis using the ``plenoptic.metric.nlpd`` function (by default),
because the synthesis methods use ``torch.norm(x - y, p=2)`` as the
distance metric between representations, whereas ``nlpd`` uses the
root-mean square of the distance (i.e.,
``torch.sqrt(torch.mean((x - y) ** 2))``)

"""
def __init__(self):
super().__init__()

def forward(self, image):
"""returns flattened NLP activations

WARNING: For now this only supports images with batch and
channel size 1

Parameters
----------
image : torch.tensor
image to pass to normalized_laplacian_pyramid

Returns
-------
representation : torch.tensor
3d tensor with flattened NLP activations

"""
if image.shape[0] > 1 or image.shape[1] > 1:
raise Exception("For now, this only supports batch and channel size 1")
activations = normalized_laplacian_pyramid(image)
# activations is a list of tensors, each at a different scale
# (down-sampled by factors of 2). To combine these into one
# vector, we need to flatten each of them and then unsqueeze so
# it is 3d
return torch.cat([i.flatten() for i in activations]).unsqueeze(0).unsqueeze(0)
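The docstring's NOTE — that the synthesis methods' ``torch.norm(x - y, p=2)`` and ``nlpd``'s root-mean-square differ — comes down to a constant factor of sqrt(n). A quick stdlib check of that relationship, with plain Python floats standing in for tensor entries (the values are arbitrary):

```python
import math

x = [0.5, 0.1, 0.9, 0.3]
y = [0.4, 0.3, 0.6, 0.7]
diff = [a - b for a, b in zip(x, y)]

l2_norm = math.sqrt(sum(d ** 2 for d in diff))          # what the synthesis loop uses
rms = math.sqrt(sum(d ** 2 for d in diff) / len(diff))  # what nlpd uses

# the two distances differ exactly by sqrt(n), so they rank image pairs
# identically but scale gradients (and hence step sizes) differently
assert math.isclose(l2_norm, rms * math.sqrt(len(diff)))
```

So synthesis with the ``NLP`` class explores the same loss landscape as ``nlpd``, just with a rescaled gradient.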
33 changes: 33 additions & 0 deletions plenoptic/metric/naive.py
Original file line number Diff line number Diff line change
@@ -0,0 +1,33 @@
import torch


def mse(img1, img2):
r"""return the MSE between img1 and img2

Our baseline metric to compare two images is often mean-squared
error, MSE. This is not a good approximation of the human visual
system, but is handy to compare against.

For two images, :math:`x` and :math:`y`, with :math:`n` pixels
each:

.. math::

MSE = \frac{1}{n}\sum_{i=1}^n (x_i - y_i)^2

The two images must have a float dtype

Parameters
----------
img1 : torch.tensor
The first image to compare
img2 : torch.tensor
The second image to compare, must be same size as ``img1``

Returns
-------
mse : torch.float
the mean-squared error between ``img1`` and ``img2``

"""
return torch.pow(img1 - img2, 2).mean((-1, -2))
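As a sanity check on the formula above, here is a plain-Python evaluation of MSE on two tiny 2x2 "images" (nested lists standing in for tensors, averaging over the last two dimensions as the torch version does); the values are hypothetical, chosen so the result is easy to verify by hand:

```python
def mse(img1, img2):
    """Mean-squared error over all pixels of two equal-sized 2D images."""
    n = len(img1) * len(img1[0])
    return sum(
        (p1 - p2) ** 2
        for row1, row2 in zip(img1, img2)
        for p1, p2 in zip(row1, row2)
    ) / n


img1 = [[0.0, 0.5], [1.0, 0.5]]
img2 = [[0.0, 0.0], [0.0, 0.0]]
# squared errors: 0, 0.25, 1, 0.25 -> mean = 1.5 / 4 = 0.375
print(mse(img1, img2))
```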