Conversation

@ProGamerGov (Contributor) commented Dec 24, 2020

No description provided.

@ProGamerGov (Contributor Author):

The other optim PRs should probably be reviewed before this one.

@ProGamerGov (Contributor Author):

All the tests should pass once #656 is merged.

@NarineK self-assigned this Jul 25, 2021
@NarineK (Contributor) left a comment:

Thank you very much for the documentation PR. Overall looks great!
I did a quick pass and left some comments.
I also see that we have Losses that need to be documented a bit more thoroughly:
https://github.com/pytorch/captum/blob/optim-wip/captum/optim/_core/loss.py

In general I think that before open sourcing we can go over docs again, make them a bit more thorough and clean up the formatting, but we can do that in a separate PR as well.

every iteration and returns a bool that determines whether
to stop the optimization.
See captum.optim.typing.StopCriteria for details.
optimizer (Optimizer, optional): A torch.optim.Optimizer used to
@NarineK (Contributor):

For the optimize method, loss_summarize_fn and lr aren't documented. Do you mind adding documentation for them too?

@ProGamerGov (Contributor Author):

Yeah, I'll add the documentation for those variables!
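For context, a StopCriteria-style callable along the lines the docstring describes might look like the sketch below. This assumes a simplified (step, loss) signature; the real one is defined in captum.optim.typing and may differ.

```python
# A minimal sketch of a StopCriteria-style callable; the (step, loss)
# signature is a simplifying assumption, not the captum.optim.typing one.
import torch

def stop_after(n: int):
    """Build a criteria callable that signals a stop after `n` iterations."""

    def stop_criteria(step: int, loss: torch.Tensor) -> bool:
        # Called every iteration; the returned bool determines whether
        # to stop the optimization (True here means "stop now").
        return step >= n

    return stop_criteria
```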

@NarineK (Contributor):

@ProGamerGov, the documentation for losses we will add in a separate PR? https://github.com/pytorch/captum/blob/optim-wip/captum/optim/_core/loss.py

@ProGamerGov (Contributor Author):

@NarineK Yes, we'll add the loss documentation in a separate PR!

Args:
model (nn.Module): The reference to PyTorch model instance.
targets (nn.module or list of nn.module): The target layers to
@NarineK (Contributor):

nit: nn.Module

@ProGamerGov (Contributor Author):

Fixed!

collect activations from.
"""

def __init__(self, model: nn.Module, targets: Iterable[nn.Module]) -> None:
@NarineK (Contributor):

nit: documentation for init arguments.

@ProGamerGov (Contributor Author):

I'll move all the init arguments from class descriptions to the class' init functions.
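For reference, the moved-in docstring might end up looking roughly like this sketch, reusing the arg descriptions already in the diff:

```python
from typing import Iterable

import torch.nn as nn

class ActivationFetcher:
    """
    Simple module for collecting activations from model targets.
    """

    def __init__(self, model: nn.Module, targets: Iterable[nn.Module]) -> None:
        """
        Args:

            model (nn.Module): The reference to PyTorch model instance.
            targets (nn.Module or list of nn.Module): The target layers to
                        collect activations from.
        """
        ...
```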

class ActivationFetcher:
"""
Simple module for collecting activations from model targets.
@NarineK (Contributor):

Do you mind, please, also adding some docs for ModuleOutputsHook? Also, is ModuleReuseException used anywhere?

@ProGamerGov (Contributor Author):

I don't see ModuleReuseException being called anywhere, so I think that it's safe to remove it.
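For readers following along, the pattern that ModuleOutputsHook wraps is ordinary PyTorch forward hooks. A minimal stand-alone sketch (the function name and dict-of-outputs shape are illustrative, not the Captum API):

```python
# A minimal sketch of hook-based activation collection; not the actual
# ModuleOutputsHook implementation.
from typing import Dict, Iterable
import torch
import torch.nn as nn

def collect_activations(
    model: nn.Module, targets: Iterable[nn.Module], x: torch.Tensor
) -> Dict[nn.Module, torch.Tensor]:
    outputs: Dict[nn.Module, torch.Tensor] = {}
    hooks = []

    def make_hook(module: nn.Module):
        def hook(mod, inp, out):
            outputs[module] = out  # record this target's forward output
        return hook

    try:
        for target in targets:
            hooks.append(target.register_forward_hook(make_hook(target)))
        model(x)  # the forward pass triggers the hooks
    finally:
        for h in hooks:
            h.remove()  # always detach hooks so the model is left clean
    return outputs
```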


class FFTImage(ImageParameterization):
"""Parameterize an image using inverse real 2D FFT"""
"""
@NarineK (Contributor):

Do you mind adding some docs for ImageTensor as well?
Also, are InputParameterization and ImageParameterization used anywhere?
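For context, the inverse real 2D FFT parameterization that FFTImage implements can be sketched as follows. The class name, the init scale, and the absence of Captum's frequency-dependent spectrum scaling are simplifying assumptions:

```python
# A minimal sketch of FFT-domain image parameterization; not the actual
# FFTImage implementation.
import torch
import torch.nn as nn

class TinyFFTImage(nn.Module):  # hypothetical name, for illustration only
    def __init__(self, h: int = 224, w: int = 224, channels: int = 3) -> None:
        super().__init__()
        # Trainable parameters live in the frequency domain: the rfft2 of an
        # (h, w) image has shape (h, w // 2 + 1) and is complex-valued,
        # stored here with a trailing dim of size 2 (real, imag).
        self.size = (h, w)
        self.fourier = nn.Parameter(torch.randn(channels, h, w // 2 + 1, 2) * 0.01)

    def forward(self) -> torch.Tensor:
        spectrum = torch.view_as_complex(self.fourier)
        # Inverse real 2D FFT maps the spectrum back to pixel space.
        return torch.fft.irfftn(spectrum, s=self.size, dim=(-2, -1))
```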

return self.image.refine_names("B", "C", "H", "W")


class LaplacianImage(ImageParameterization):
@NarineK (Contributor):

Usually we document all non-underscored methods in the public classes. If we don't want those methods to be public, we can underscore them. I see that we have quite a few in images.py.

@ProGamerGov (Contributor Author):

I've added some more documentation and added underscores to a bunch of the public class methods!

You can specify a fixed background, or a random one will be used by default.
Args:
@NarineK (Contributor):

This can go under the __init__.

@ProGamerGov (Contributor Author):

I've moved it!

def forward(self, x: torch.Tensor) -> torch.Tensor:
"""
Ignore the alpha channel.
Arguments:
@NarineK (Contributor):

I think that there is some spacing that we need to add here and for Returns.
See: https://github.com/pytorch/captum/blob/master/captum/attr/_core/integrated_gradients.py#L132

@ProGamerGov (Contributor Author):

I've tried to format all the docs to match the formatting used in the integrated_gradients file you linked to.
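For reference, the integrated_gradients-style formatting being matched looks roughly like this when applied, as a sketch, to the alpha-channel forward from this diff (the x description and the one-line body are placeholders, not the real implementation):

```python
import torch

def forward(x: torch.Tensor) -> torch.Tensor:
    """
    Ignore the alpha channel.

    Args:

        x (torch.Tensor): An NCHW image tensor that includes an alpha
                    channel (placeholder description for illustration).

    Returns:
        rgb (torch.Tensor): RGB image tensor without the alpha channel.
    """
    assert x.dim() == 4
    return x[:, :3]  # keep only the first three (RGB) channels
```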

Returns:
rgb (torch.Tensor): RGB image tensor without the alpha channel.
"""
assert x.dim() == 4
@NarineK (Contributor):

Since klt_transform and i1i2i3_transform are public, it would be good to document them.
__init__ for ToRGB needs docs as well.

@ProGamerGov (Contributor Author):

Okay, I've added some documentation and moved the init args from the main description to the init function.
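For context, a klt_transform-style color recorrelation is just a 3x3 matrix applied to every pixel's channel vector. A minimal sketch (the identity matrix below is a placeholder, not Captum's actual KLT matrix):

```python
import torch

def recorrelate_colors(chw: torch.Tensor, klt: torch.Tensor) -> torch.Tensor:
    """Apply a 3x3 color transform matrix to every pixel of a CHW tensor."""
    c, h, w = chw.shape
    flat = chw.reshape(c, -1)  # (3, H*W): one column per pixel
    return (klt @ flat).reshape(c, h, w)

# Placeholder matrix; a real KLT matrix is derived from the color
# statistics of natural images (e.g. via an SVD of channel covariances).
identity_klt = torch.eye(3)
```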

chw (torch.tensor): A tensor with its colors recorrelated or
decorrelated.
"""

@NarineK (Contributor):

The __init__ for CenterCrop + RandomScale (public methods), RandomSpatialJitter (public methods) need docs for args as well.
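For context, a RandomSpatialJitter-style transform randomly translates its input on every call. A minimal torch.roll-based sketch (the class name is hypothetical, and the actual Captum transform may pad and crop rather than roll):

```python
import torch
import torch.nn as nn

class TinyJitter(nn.Module):  # hypothetical name, for illustration only
    def __init__(self, translate: int) -> None:
        super().__init__()
        self.translate = translate  # max shift per axis, in pixels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Draw a random (dy, dx) shift in [-translate, translate] each call.
        dy, dx = torch.randint(-self.translate, self.translate + 1, (2,)).tolist()
        return torch.roll(x, shifts=(dy, dx), dims=(-2, -1))
```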

loss_summarize_fn (Callable, optional): The function to use for summarizing
tensor outputs from loss functions.
Default: default_loss_summarize
lr: (float): If no optimizer is given, then lr is used as the learning rate
@NarineK (Contributor):

nit: lr: (float, optional)

@ProGamerGov (Contributor Author):

Fixed!

@NarineK (Contributor) left a comment:

Thank you for addressing the comments. For a couple of inputs "optional" was missing; otherwise looks great.

path (str): A URL or filepath to an image.
scale (float): The image scale to use.
Default: 255.0
mode (str:) The image loading mode to use.
@NarineK (Contributor):

scale (float, optional):
mode (str, optional):

@ProGamerGov (Contributor Author):

Fixed!
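For context, the path/scale/mode arguments being documented suggest a loader along these lines. A sketch only; the function name and exact behavior are assumptions, not the Captum implementation:

```python
from io import BytesIO
from urllib.request import urlopen

import numpy as np
import torch
from PIL import Image

def load_image(path: str, scale: float = 255.0, mode: str = "RGB") -> torch.Tensor:
    """Load an image from a URL or filepath into a CHW float tensor."""
    if path.startswith(("http://", "https://")):
        img = Image.open(BytesIO(urlopen(path).read()))
    else:
        img = Image.open(path)
    # Convert with the requested mode, then map pixel values into [0, 1]
    # by dividing by `scale` (assumes a multi-channel mode like "RGB").
    arr = np.asarray(img.convert(mode), dtype=np.float32) / scale
    return torch.from_numpy(arr).permute(2, 0, 1)
```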

figsize (Tuple[int, int], optional): height & width to use
for displaying the `ImageTensor` figure.
scale (float): Value to multiply the `ImageTensor` by so that
@NarineK (Contributor):

, optional

@ProGamerGov (Contributor Author):

Fixed!

filename (str): The filename to use when saving the `ImageTensor` as an
image file.
scale (float): Value to multiply the `ImageTensor` by so that
@NarineK (Contributor):

(float, optional)

@ProGamerGov (Contributor Author):

Fixed!

"""
Args:
multiplier (float): A float value used to scale the input.
@NarineK (Contributor):

(float, optional)

@ProGamerGov (Contributor Author):

Fixed!

@NarineK merged commit e7835de into meta-pytorch:optim-wip on Jul 31, 2021