Abstractions #38
Commits on Jun 20, 2019
- Commit 670683f
Commits on Jul 29, 2019
- Commit f5cb748
Commits on Dec 6, 2019
- helper function for loading in images and making sure they're the right shape, with tests (commit c25bf1c)
- Commit 4953e10
Commits on Dec 13, 2019
- Commit e899974
- starts abstraction for synthesis methods (commit 236dbed)
  Moves a large amount of code from Metamer to Synthesis. The methods themselves are mostly unchanged for now, but will change as we attempt to use the abstraction for geodesics, eigendistortions, and MAD.
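A minimal sketch of the inheritance pattern this commit starts (Synthesis and Metamer are the real class names, but the method bodies here are illustrative stand-ins, not plenoptic's implementation):

```python
# Hypothetical sketch: shared machinery lives in Synthesis, and each
# synthesis method (Metamer, and later MADCompetition, etc.) subclasses it.
class Synthesis:
    def __init__(self, model):
        self.model = model
        self.loss = []  # loss history, shared by all subclasses

    def objective_function(self, synth_rep, ref_rep):
        # shared loss: squared L2 distance between representations
        return sum((a - b) ** 2 for a, b in zip(synth_rep, ref_rep))

    def synthesize(self):
        raise NotImplementedError("each synthesis method implements its own loop")


class Metamer(Synthesis):
    def synthesize(self, synth_rep, ref_rep):
        # a real loop would iterate; one step suffices to show the reuse
        loss = self.objective_function(synth_rep, ref_rep)
        self.loss.append(loss)
        return loss


m = Metamer(model=None)
print(m.synthesize([1.0, 2.0], [1.0, 0.0]))  # 4.0
```

The point of the refactor is that the optimization machinery (here, objective_function and the loss history) is written once and reused by every synthesis method.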
- Commit 5145d18
Commits on Jan 10, 2020
- Commit 282c5a4
- Commit d35d316
- Merge branch 'MAD' of github.com:LabForComputationalVision/plenoptic into abstractions (commit 2e3741b)
Commits on Jan 24, 2020
- Adds initial MAD competition implementation (commit 3daa103)
  With this commit, we add an initial implementation of MAD competition. It currently only minimizes model 1 while holding model 2 constant, and only works with models (not metrics), but the basic framework is there. A fair amount of cleanup is still required. MADCompetition (like Metamer) inherits from the Synthesis class and so makes use of much that is there; more consolidation of functionality is needed, as there was a lot of copy-and-paste. Other changes:
  - Synthesis now uses the self.names dictionary to determine what to target/match, etc. This doesn't change how Metamer works (or, probably, any of the other synthesis methods), but is necessary for MAD.
  - adds a basic gradient descent optimizer choice, GD
  - the scheduler is now optional
  - optimizer_step's pbar arg is now optional; if None, no information is updated. Additional information can also be added to the pbar as kwargs.
Commits on Jan 27, 2020
- Commit 53cfc1d
Commits on Jan 31, 2020
- Commit 997ec90
- Commit 541366b
- bugfix: remove the third arg to objective_function (commit 33f85d8)
  Originally, I was planning to add an arg to objective_function to turn normalizing on/off, but now we always do it when possible.
- Commit 5f04dcc
Commits on Feb 7, 2020
- all synthesis targets are now supported (commit e9b0bc9)
  This required an overhaul of how we get and set the relevant attributes. Synthesis goes back to knowing nothing about the attribute structure of its child classes, assuming it can just directly grab self.matched_representation (instead of going through self.names). It is the responsibility of the child class to make sure this is the correct attribute to modify, which we do here by modifying the getter and setter (and the continued use of a self._names attribute). Other changes:
  - adds much more documentation to MAD competition
  - moves representation_error to Synthesis; MAD competition has its own representation_error method, to handle the multiple synthesis targets/models
  - normalizing the loss is now an option, not always done
  - many MADCompetition attributes now have "all" variants, which store values across targets
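The getter/setter arrangement described above can be sketched as follows (the attribute and target names here are hypothetical; only the pattern of a property redirecting through a _names mapping comes from the commit message):

```python
class MADCompetition:
    """Hypothetical sketch: matched_representation is a property that the
    child class redirects, via a _names mapping, to whichever per-model
    attribute corresponds to the current synthesis target."""

    def __init__(self):
        self._rep_model_1 = None
        self._rep_model_2 = None
        self.synthesis_target = 'model_1_min'
        self._names = {'model_1_min': '_rep_model_1',
                       'model_2_max': '_rep_model_2'}

    @property
    def matched_representation(self):
        # Synthesis just grabs self.matched_representation directly; the
        # child class ensures that resolves to the right attribute
        return getattr(self, self._names[self.synthesis_target])

    @matched_representation.setter
    def matched_representation(self, value):
        setattr(self, self._names[self.synthesis_target], value)


mad = MADCompetition()
mad.matched_representation = [1.0, 2.0]
print(mad._rep_model_1)  # [1.0, 2.0]
```

This keeps the superclass generic while letting MADCompetition track separate state per synthesis target.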
- it's now in Synthesis; mad_competition has its own version (like representation_error) (commit e74a045)
Commits on Feb 14, 2020
- moves plotting and animate to Synthesis (commit 9930280)
  This moves the following functions from the Metamer class to Synthesis, so they can be used by other synthesis methods: plot_representation_error, plot_synthesis_status (was plot_metamer_status), and animate. Other changes, to MADCompetition:
  - corrects the name saved_representation_gradient_{num} to saved_representation_{num}_gradient
  - now correctly handles the _all attributes and copies information back and forth between them and the local versions, using helper functions
  - nu, learning_rate, and gradient are now lists, with _all versions that are dicts (like the other attributes)
  - adds a _check_state() helper function, which updates the synthesis target if necessary and returns the current state (used by the various plotting functions)
Commits on Feb 21, 2020
- plotting and animating now work with MAD! (commit c268f27)
  A variety of updates were necessary to make this happen:
  - MAD competition has wrappers around many of the functions, which just make sure that the synthesis target is appropriately set. These have their args in the same order as the Synthesis class's versions (with the new args tacked on the end).
  - mad_competition.representation_error's model arg can also be 'both', which returns a dictionary containing both models' representation errors
  - correctly handles copying attributes in *and out* of _all for tensors and lists
  - adds methods MADCompetition.plot_synthesized_image, plot_synthesized_image_all (shows all synthesized images), plot_loss, plot_loss_all, plot_synthesis_status, and animate
  - in Synthesis, splits plot_synthesized_image and plot_loss out of plot_synthesis_status
  - Synthesis.animate now takes two more args: plot_data_attr, a list which specifies the attributes to plot on the second subplot (it can be more than one, so we grab all artists), and rep_error_kwargs, which specifies additional args to pass to the self.representation_error() call
  - display.rescale_ylim works with an array-like or a dict; if a dictionary, we take the max over all of its values
Commits on Feb 28, 2020
- for MAD competition, you can now set whether to normalize the two models' losses or keep them as they are; unsure which is best right now (commit 8a513fd)
- display.plot_representation now uses vrange='indep0' for images (commit 9c4f13c)
  because they are likely to have both positive and negative values
- Commit 27595ab
- MAD competition now supports metrics as well as models (commit d28df98)
  The MAD competition paper, as originally written, compared SSIM and MSE, which are metrics, not models (they take two images and return a scalar distance between them). As our code was written, it only worked with models (which take an image and return a representation; we use the L2-norm of the difference between representations as the distance between images). With this commit, metrics can now be used for MAD competition. To do that, we construct a 'dummy model' that just returns a copy of the image, and we modify the function called for our objective function. Other changes:
  - adds an Identity() class in simulate/models/naive.py
  - adds MSE class to mse() function, since metrics should now be functions
  - we no longer randomize pixels before computing the loss in Synthesis._closure(). This really messes up those metrics that extract something meaningful from the image (i.e., all the ones we're interested in), so it is turned off by default and only turned on when needed; right now, that is just when Metamer has fraction_removed > 0 or loss_change_fraction < 1. Should try to generalize those tools.
  - updates docstrings
  - adds a loss_function attr, which is how we switch things around for the two models; objective_function calls this on x and y (we still use objective_function because it handles the normalizing)
  - adds _get_model_name, which first tries to grab the model's name attribute and, if it doesn't have one, returns the class name (useful for the Identity class)
  - raises a warning whenever the representation error is computed with a metric, since it will be meaningless
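The model-vs-metric distinction can be sketched like this (Identity mirrors the class added in simulate/models/naive.py; mse and model_loss here are illustrative stand-ins, not plenoptic's actual functions):

```python
import numpy as np

class Identity:
    """Dummy model: returns a copy of the image, so that a two-image
    metric can be applied directly to the 'representations'."""
    name = "Identity"

    def forward(self, img):
        return img.copy()

def mse(x, y):
    # a metric: takes two images, returns a scalar distance between them
    return float(np.mean((x - y) ** 2))

def model_loss(model, synth_img, ref_img):
    # the model path: L2-norm of the difference between representations
    diff = model.forward(synth_img) - model.forward(ref_img)
    return float(np.linalg.norm(diff))

img1, img2 = np.zeros((4, 4)), np.ones((4, 4))
print(mse(img1, img2))                     # 1.0
print(model_loss(Identity(), img1, img2))  # 4.0
```

With the dummy model, the metric machinery collapses onto the model machinery: the "representation" is just the image itself.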
- Commit 18b282e
- renames MSE to mse in import statement; replaces plot_metamer_status with plot_synthesis_status (commit d48ec6b)
Commits on Mar 13, 2020
- moves shared initialization code to Synthesis.py (commit eee23aa)
  realized there was more stuff that can be done by the superclass
- moves metric support to Synthesis.py (commit 35bf51c)
  Big changes, for Synthesis, MADCompetition, and Metamer:
  - metric support added; this just moves the bits that handle it from mad_competition.py to Synthesis.py
  - loss_function can now be set to a custom function; by default, it is the L2-norm of the difference
  - updates everyone's documentation and call signatures
Commits on Mar 20, 2020
- breaks synthesize into functions and moves some to Synthesis (commit 0a2f19e)
  This commit breaks the components of synthesize() into functions and moves them to the Synthesis abstract superclass where possible, which makes things more readable. As part of doing this, realized why matched_representation was sometimes a Parameter after saving: because I was making it one when a NaN was hit during loss. Removed that, and everything seems to still work. Still need to:
  - figure out how I want to handle coarse-to-fine and the randomizing stuff
  - standardize the synthesize() call signature
- Commit 3f2294f
- Commit 3def646
Commits on Mar 27, 2020
- bugfix: mad_competition was messing up saved_representation (commit 3abdc15)
  I had accidentally hard-coded (in _init_store_progress) that model_2 was the stable model, by setting self.saved_representation = list(self.saved_representation_2). This fixes that, correctly sets synthesis_target now, and adds some warnings to the docstring.
Commits on Apr 3, 2020
- adds coarse-to-fine to Synthesis, standardizes synthesize() call (commit 525e020)
  Several related changes, all aimed at finishing the standardization of synthesize():
  - adds _init_ctf_and_randomizer() to the Synthesis class (moving over the relevant part from the beginning of Metamer.synthesis()). This function sets up the coarse-to-fine optimization and possible randomizations (using some subset of the representation when calculating gradients).
  - Synthesis._clamp_and_store() now returns a bool specifying whether we stored this iteration
  - Synthesis has a new function, _check_for_stabilization(), which checks whether the loss has stabilized (and does so appropriately for coarse-to-fine optimization)
  - Synthesis.synthesize() now has args and documentation. It still raises an exception if you call it, but the idea is that it provides a framework you can copy; keep the arguments in the order they are, putting new args in the beginning.
  - MAD competition now works with coarse-to-fine. coarse_to_fine is one of those attributes that has values for each model; it is always False for the stable model, though it may be True for the target one. MADCompetition has its own _init_ctf_and_randomizer, which calls the super and sets the stable model's coarse_to_fine to False.
  - bugfix: moves MAD competition's first update_target within the synthesis loop
  - bugfix: MAD competition correctly updates the other matched_representation when _check_nan_loss is True
  - standardizes the synthesize() call signature for both
- Commit 4196c37
- changes device and dtype to DEVICE and DTYPE (commit 0e760d5)
  to make it clearer that they're global variables
- send metric dummy-model to target_image device (commit 5ad962a)
  This should enable it to handle GPUs successfully.
- scope doesn't need to be modules for this (commit 241f743)
  actually, the test files are not needed for test_mad
- corrects load so it will work with MADCompetition (commit e4b4af4)
  This requires allowing for the possibility of multiple models.
- Commit 70e7ac1
- includes file that tests all the important things (I think) (commit 5f9b4ee)
Commits on Apr 10, 2020
- Commit cf6c340
- Commit d8c97a7
- adds new capabilities to mad_competition, updates docstrings and tests (commit 27264e8)
  Several changes here, all related to the merging of master into abstractions in the last commit:
  - updates docstrings of Synthesis and Metamer to more accurately reflect current capabilities
  - changes Synthesis.synthesize()'s call signature and some of its function for the new abilities
  - MADCompetition now handles the new capabilities/changes correctly: better ability to resume synthesis, new synthesize() args clamp_each_iter and clip_grad_norm, default clamper now RangeClamper (instead of None)
  - adds tests for resuming synthesis (and the errors it should raise) for MADCompetition
- Commit 510750c
- resume can only work if store_progress was not False (commit ada09b7)
- Commit d577805
- Commit 1a82a13
- Commit 28f82a9
- adds if_existing arg to MADCompetition.synthesize_all() (commit b88c48a)
  This new arg controls what to do if one of the synthesis targets has been run before, with three choices; also adds tests.
- Commit 9be2950
Commits on Apr 17, 2020
- Commit 966ae37:
  - seed can now be None, in which case we don't set it; the intended use case is to avoid resetting the seed when resuming synthesis
  - store_progress can now be None, in order to maintain the setting from the previous call (when resuming), and an Exception will be raised if it changes between synthesis calls
  - updates documentation for optimizer to reflect the full range of options
  - animate() now handles multiple calls to synthesize()
  - adds po.tools.data.convert_float_to_im, to convert a float that lies between 0 and 1 to another dtype (intended: np.uint8 or np.uint16)
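A sketch of what convert_float_to_im plausibly does (the function name and intended dtypes are from the commit message; the exact scaling and validation behavior shown here is an assumption):

```python
import numpy as np

def convert_float_to_im(im, dtype=np.uint8):
    # hypothetical sketch: scale a float image lying in [0, 1] up to the
    # full range of an integer dtype (np.uint8 -> [0, 255])
    if im.min() < 0 or im.max() > 1:
        raise ValueError("image values must lie between 0 and 1")
    return (im * np.iinfo(dtype).max).astype(dtype)

print(convert_float_to_im(np.array([0.0, 0.5, 1.0])))  # [  0 127 255]
```

Note that astype truncates rather than rounds, so 0.5 maps to 127, not 128; a real implementation might round first.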
- Commit 6dfa24e
Commits on May 4, 2020
- Commit d29c29b: this might serve as our quick-start page? Demonstrates the basics of how to interact with Synthesis objects and the bare minimum of what a model requires.
Commits on May 8, 2020
- Commit 0c69693: Front_End had a problem with torch 1.5. Previously, torch would convert np.float64 to torch.float32; now, it converts it to torch.float64, so we need to explicitly set these to float32. Also cleans up a bit.
- Commit 764f7bb
- Commit 3999599
- adds more info on MAD-specific args/attributes (commit 18219cf)
  and those that are more relevant to it
- Commit 2cfca2d: attempts to be a tutorial for implementing a new synthesis method, but having trouble with it
Commits on May 15, 2020
- Commit 0246833
Commits on May 22, 2020
- which will contain objective functions and the like (commit 0adc73a)
- loss functions now work differently: they must all take at least the keyword arguments synth_rep, ref_rep, synth_img, and ref_img, and return a scalar (commit 00cf4e3). This allows us to compute the loss with respect to things in the image, such as its range or moments. This required a fair amount of work. Also:
  - adds tests
  - expands the section of the MADCompetition notebook explaining loss functions to include more details
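The required loss-function signature can be sketched like this; l2_loss reflects the default described elsewhere in this PR (L2-norm of the representation difference), while range_penalty_loss is a hypothetical example of why the image arguments are useful:

```python
import numpy as np

def l2_loss(synth_rep, ref_rep, synth_img=None, ref_img=None):
    # default loss: L2-norm of the representation difference
    # (the image args are accepted but unused)
    return float(np.linalg.norm(synth_rep - ref_rep))

def range_penalty_loss(synth_rep, ref_rep, synth_img, ref_img, beta=0.5):
    # hypothetical: also penalize synthesized pixels outside [0, 1], which
    # is only possible because the loss receives the images too
    rep_term = np.linalg.norm(synth_rep - ref_rep)
    # amount above 1 plus amount below 0, per pixel
    excess = np.clip(synth_img - 1, 0, None) - np.clip(synth_img, None, 0)
    return float(rep_term + beta * excess.sum())

rep1, rep2 = np.array([3.0, 0.0]), np.array([0.0, 4.0])
print(l2_loss(rep1, rep2))  # 5.0
```

Any callable with (at least) these four keyword arguments can be dropped in, which is the point of the standardized signature.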
- Commit b00627b
Commits on May 23, 2020
- corrects save() and load() for metrics and loss function (commit a211da2)
  Previously, we weren't saving the loss function, and we were unable to save at all if we used a metric instead of a model, because saving functions is not supported by pickle by default. We now use dill, which does support this (added to setup.py as well). Additionally:
  - loss_kwargs renamed to loss_function_kwargs, to be more consistent
  - save now has a model_attr_names arg just like load, so it will properly save multiple models
  - copies load() to Metamer, makes its call signature simpler
  - load()'s call signature simplified for MADCompetition; updates its docstring to be more MAD-specific
  - adds better save/load tests for Metamer and MADCompetition
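Why dill: the standard pickle module serializes functions by reference (module plus qualified name) and therefore fails on anonymous or locally defined ones, which is exactly what a user-supplied loss function tends to be. A quick demonstration (dill itself is not shown; the lambda is an illustrative loss function):

```python
import pickle

# a custom loss function defined as a lambda, as a user might pass in
loss_fn = lambda synth_rep, ref_rep: abs(synth_rep - ref_rep)

try:
    pickle.dumps(loss_fn)
    picklable = True
except Exception:
    picklable = False

print(picklable)  # False: pickle can't resolve '<lambda>' by name,
                  # which is why dill (which serializes the code object
                  # itself) is used instead
```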
Commits on May 25, 2020
- Commit 12b8049
- Commit bf669a8
- to try and figure out why the one test is hanging (commit 052fe4c)
- Commit 9dca101
Commits on May 26, 2020
- Commit 0b04cbc
- Commit ab6692e
- Commit 30dd960
- Commit c4e31b7
- Commit aef41d6:
  - the plotting in test_loss_func would cause a stall; it's not that important to test, so got rid of it
  - also goes back to the regular pytest command
  - removes pytest-timeout, because that wasn't actually helping
- try another way to fix the problem (commit 29e85f7)
  https://stackoverflow.com/a/35403128 suggests that I should enable xvfb and close the open images, so let's try that.
- Commit fe3bf00
Commits on Jun 12, 2020
- adds notebook showing simple MAD example (commit a5f2005)
  with L1 and L2 norms, showing that it does the right thing, but that it might not be easy -- in this case, because those two are hard to separate
Commits on Jun 15, 2020
- adds choices for how to do coarse-to-fine optimization (commit 8d65038)
  The standard way of doing coarse-to-fine (ctf) optimization is *not* what I had been doing before. Before this commit, ctf computed the gradient with respect to the target scale individually, ignoring the others (not doing anything to hold them fixed). The standard way is to compute the gradient with respect to the target scale *and all coarser scales*. This commit allows you to toggle between those two choices by setting coarse_to_fine to either 'together' or 'separate'. Also updates MADCompetition.save to appropriately save the related attributes, and makes the tests for Metamer and MAD more complete.
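The two modes can be sketched as a scale-selection rule (the function and its arguments are illustrative, not plenoptic's API; 'together' and 'separate' are the actual option names):

```python
def scales_for_gradient(all_scales, target_idx, coarse_to_fine):
    """Hypothetical sketch: which scales contribute to the gradient.

    'separate' is the old behavior (target scale only); 'together' is the
    standard approach (target scale plus all coarser scales). all_scales
    is assumed to be ordered from coarsest to finest.
    """
    if coarse_to_fine == 'separate':
        return [all_scales[target_idx]]
    if coarse_to_fine == 'together':
        return all_scales[:target_idx + 1]
    raise ValueError("coarse_to_fine must be 'together' or 'separate'")

scales = ['coarsest', 'mid', 'finest']
print(scales_for_gradient(scales, 1, 'separate'))  # ['mid']
print(scales_for_gradient(scales, 1, 'together'))  # ['coarsest', 'mid']
```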
- Commit 321775d
Commits on Jun 26, 2020
- Commit 8402a9b
- Merge branch 'abstractions' of github.com:LabForComputationalVision/plenoptic into abstractions (commit 6eed195)
Commits on Jun 27, 2020
- Commit 863c41c
Commits on Jul 11, 2020
- adds code to view external MAD results (commit e7b5d40)
  and a notebook showing how it's done
Commits on Jul 13, 2020
- since it's not a requirement and this is the only place we use it; in the dataframe's place, we return a well-structured dictionary (commit fbaf91e)
Commits on Jul 31, 2020
- Commit 1b354e7
Commits on Aug 4, 2020
- Commit 663f455: adds docstrings for _gaussian (renamed from gaussian), create_windows, and ssim. Updates SSIM to allow for weighting that matches the output from the MAD competition code.
- Commit df92b37:
  - MSE now only averages across the last two dimensions (height and width), as metrics should
  - SSIM:
    - removes unnecessary arguments (window and window_size no longer allowed; size_average removed, because we always just average across the last two dimensions)
    - Gaussian window is normalized in 2d, not 1d
    - removes the cs output
    - val_range renamed to dynamic_range, and it must always be explicitly set (None no longer allowed)
    - checks n_batches and handles them as you'd like
  - adds add_noise function, which adds noise such that the user specifies the target MSE (approximately correct, not exact); we can handle multiple noise values and images correctly (always with independent seeds and broadcasting as appropriate)
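The MSE change (averaging only over height and width, so batch and channel dimensions survive) can be sketched as:

```python
import numpy as np

def mse(img1, img2):
    # average only over the last two dimensions (height, width), so a
    # (batch, channel, height, width) input yields a (batch, channel) output
    return ((img1 - img2) ** 2).mean(axis=(-2, -1))

a = np.zeros((2, 1, 4, 4))
b = np.ones((2, 1, 4, 4))
print(mse(a, b).shape)  # (2, 1)
```

This is what "as metrics should" means above: one distance per image in the batch, rather than one scalar for the whole batch.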
- Commit aa86d8b
Commits on Aug 5, 2020
- Commit ca564a3
- Commit 9b7e2ea: because the mapping between min/max and best/worst is not the same for MSE and SSIM, this makes it clearer and sets the order for both to be Best, Worst, matching the paper
- need to average noise over the last two dimensions in order to get the variances matched exactly (commit b5d56cb)
- removes msssim, SSIM, MSSSIM; restructures ssim (commit d8ff0d0)
  With this commit, we remove msssim (because there are some tricky things that I don't have the time to figure out -- an issue is opened for it if people want to check it out) and SSIM (caching the window no longer makes sense), and thus MSSSIM. Also restructures ssim so that we have a helper function which computes the ssim map, contrast map, and weight; ssim just returns the mean over the image. Adds a new function, ssim_map, which just returns the map. The contrast map will be used by msssim if we re-implement it.
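The structure of the refactor (a private helper computes the map; ssim is its mean; ssim_map exposes it) might look like this; the helper's toy per-pixel similarity is only a stand-in for the real windowed SSIM computation:

```python
import numpy as np

def _ssim_parts(img1, img2):
    # stand-in for the real helper, which computes the windowed SSIM map
    # (plus contrast map and weight); a toy per-pixel similarity is used
    # here purely to illustrate the structure of the refactor
    return 1.0 - np.abs(img1 - img2)

def ssim_map(img1, img2):
    # new function: just returns the map
    return _ssim_parts(img1, img2)

def ssim(img1, img2):
    # the scalar score is the mean of the map over the image
    return float(ssim_map(img1, img2).mean())

a = np.zeros((4, 4))
print(ssim(a, a))            # 1.0
print(ssim_map(a, a).shape)  # (4, 4)
```

Factoring the map out of the scalar score is what would let a future msssim reuse the per-scale maps.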
Commits on Aug 6, 2020
- Commit 13adf0f: adds tests for the new SSIM functionality, both to make sure it works as we'd like and that it matches MATLAB outputs. This involves downloading some new files off the OSF, which means the downloading function is updated to be a bit more general. Also:
  - updates the MAD notebooks
  - bugfix for add_noise to make sure the sizing works out as we'd like
- Commit d3dd4e2
- bugfix for MAD competition saving (commit d7f5933)
  makes sure to save all the _all attributes
- Commit 85f8c72
- Commit 64c294d
Commits on Aug 7, 2020
- 58d4b00
- 77d2c72: Merge branch 'master' of github.com:LabForComputationalVision/plenoptic into abstractions
- 9196906
- 40bf306
- 599314a: renames some Synthesis attributes
  Renames:
  - matched_ -> synthesized_
  - _image -> _signal (except for plot_synthesized_image, which really is plotting a single image)
  - target_ -> base_
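To make the rename rules above concrete, here is a hypothetical helper (not part of plenoptic) that maps an old attribute name to its new form; the prefix and suffix rules come straight from the commit message, and the plotting-method exception is handled explicitly:

```python
def modernize(name):
    """Map a pre-rename Synthesis attribute name to the new naming scheme.

    Hypothetical illustration of the commit's rename rules:
    matched_ -> synthesized_, target_ -> base_, _image -> _signal,
    except the plotting method, which keeps its _image suffix.
    """
    if name == "plot_matched_image":
        return "plot_synthesized_image"  # really plots a single image
    for old, new in (("matched_", "synthesized_"), ("target_", "base_")):
        if name.startswith(old):
            name = new + name[len(old):]
    if name.endswith("_image"):
        name = name[: -len("_image")] + "_signal"
    return name
```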
- b7b7633: moves some attributes to hidden
  Hides step, rep_warning, use_subset_for_gradient, and loss_sign, none of which should ever be set by the user, nor do they need to be seen.
- 3fc54e6: Synthesis no longer a torch.nn.Module
  Inheriting from torch.nn.Module wasn't getting us anything (the only time we called super() was during init, which just sets some attributes we weren't using), it set up a bunch of attributes and methods we weren't using, and it made some things confusing (like inheriting the parameters of its attributes, e.g., the model).
- bf5f5e0
- f6d06ae: starts removing some of the information from the docstrings of the Synthesis methods, since they're overwhelming. Relevant material is getting moved into a notebook, but much of it can just be deleted.
Commits on Aug 11, 2020
- aaba29d: Big update on documentation, reverts MAD init noise
  Moves a lot of information out of the docstrings of Synthesis, MADCompetition, and Metamer and into their tutorials. Also:
  - makes sure the notebooks match the new attribute names
  - goes back to the old way of adding noise when initializing MAD (couldn't get the new way working with Simple_MAD; created an issue)
  - adds info to the Synthesis notebook on what the different methods and attributes are for
  - simplifies MADCompetition.synthesize_all to just use kwargs to pass everything to synthesize
- 8eb2d3c
- 9ab9513
- 66b7414
- f41f728: changes to docstrings that Lyndon pointed out
  Largely, this changes the type annotation from torch.tensor to torch.Tensor, which is the appropriate one (torch.Tensor is the class; torch.tensor is a factory function).
- b45fbce: Merge branch 'abstractions' of github.com:LabForComputationalVision/plenoptic into abstractions
- ff4c32a: Merge branch 'master' of github.com:LabForComputationalVision/plenoptic into abstractions
- f2ec3c1:
  - bugfix in non_linearities.py: need to use the :meth: tag in the See Also section
  - adds some tutorials and example notebooks
  - updates the auto-generated rst files
  - changes some of the markdown headings of the Eigendistortions and Original_MAD notebooks
- 3fe5f3d
Commits on Aug 13, 2020
-
Merge branch 'master' of github.com:LabForComputationalVision/plenopt…
…ic into abstractions
Configuration menu - View commit details
-
Copy full SHA for b6352b0 - Browse repository at this point
Copy the full SHA b6352b0View commit details -
- bd6792c:
  - all tests now import DEVICE, DTYPE, and DATA_DIR from test_plenoptic
  - makes sure the constants are in caps
  - DTYPE is now torch.float32
  - removes unnecessary imports
- 953fa10
- c5698bf