Optim-wip: Add a ton of missing docs #571
Conversation
The other optim PRs should probably be reviewed before this one.
All the tests should pass once #656 is merged.
* Some minor changes that I forgot when I pulled from the optim-wip master branch.
Thank you very much for the documentation PR. Overall looks great!
I did a quick pass and left some comments.
I also see that we have Losses that need to be documented a bit more thoroughly:
https://github.com/pytorch/captum/blob/optim-wip/captum/optim/_core/loss.py
In general, I think that before open sourcing we can go over the docs again, make them a bit more thorough, and clean up the formatting, but we can do that in a separate PR as well.
        every iteration and returns a bool that determines whether
        to stop the optimization.
        See captum.optim.typing.StopCriteria for details.
    optimizer (Optimizer, optional): A torch.optim.Optimizer used to
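For context, a minimal sketch of what such a stop criteria callable and optimizer might look like; the exact callable signature is defined by captum.optim.typing.StopCriteria, so the (iteration, loss) arguments here are an assumption:

```python
import torch

# Hypothetical stop criteria; the real signature is defined by
# captum.optim.typing.StopCriteria, so the arguments here are assumed.
def stop_after_budget(iteration: int, loss: torch.Tensor) -> bool:
    # Stop at the iteration budget or once the loss has effectively converged.
    return iteration >= 512 or loss.abs().item() < 1e-6

# A standard torch.optim.Optimizer can be passed in; otherwise one is
# presumably constructed internally from the `lr` argument.
param = torch.nn.Parameter(torch.randn(1, 3, 224, 224))
optimizer = torch.optim.Adam([param], lr=0.025)
```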
For the optimize method, loss_summarize_fn and lr aren't documented. Do you mind adding documentation for them too?
Yeah, I'll add the documentation for those variables!
@ProGamerGov, will we add the documentation for losses in a separate PR? https://github.com/pytorch/captum/blob/optim-wip/captum/optim/_core/loss.py
@NarineK Yes, we'll add the loss documentation in a separate PR!
captum/optim/_core/output_hook.py
    Args:
        model (nn.Module): A reference to the PyTorch model instance.
        targets (nn.module or list of nn.module): The target layers to
nit: nn.Module
Fixed!
            collect activations from.
    """

    def __init__(self, model: nn.Module, targets: Iterable[nn.Module]) -> None:
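As a rough illustration of the pattern this class follows (not the captum implementation), collecting activations from target layers usually means registering forward hooks:

```python
from typing import Dict, Iterable, List

import torch
import torch.nn as nn

class OutputsHookSketch:
    """Sketch: capture the forward outputs of the given target modules."""

    def __init__(self, targets: Iterable[nn.Module]) -> None:
        self.outputs: Dict[nn.Module, torch.Tensor] = {}
        # One forward hook per target; each hook records that module's output.
        self.handles: List[torch.utils.hooks.RemovableHandle] = [
            t.register_forward_hook(self._save_output) for t in targets
        ]

    def _save_output(self, module: nn.Module, inp: tuple, out: torch.Tensor) -> None:
        self.outputs[module] = out

    def remove_hooks(self) -> None:
        for handle in self.handles:
            handle.remove()
```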
nit: documentation for init arguments.
I'll move all the init argument docs from the class descriptions to the classes' __init__ functions.
captum/optim/_core/output_hook.py
class ActivationFetcher:
    """
    Simple module for collecting activations from model targets.
Do you mind also adding some docs for ModuleOutputsHook, please? Also, is ModuleReuseException used anywhere?
I don't see ModuleReuseException being called anywhere, so I think it's safe to remove it.
class FFTImage(ImageParameterization):
    """Parameterize an image using inverse real 2D FFT"""
Do you mind adding some docs for ImageTensor as well? Also, are InputParameterization and ImageParameterization used anywhere?
        return self.image.refine_names("B", "C", "H", "W")


class LaplacianImage(ImageParameterization):
Usually we document all non-underscored methods in public classes. If we don't want those methods to be public, we can underscore them. I see that we have quite a few in images.py.
I've added some more documentation and added underscores to a bunch of the public class methods!
    You can specify a fixed background, or a random one will be used by default.

    Args:
This can go under the __init__.
I've moved it!
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """
        Ignore the alpha channel.
        Arguments:
I think that there is some spacing that we need to do here and for Return.
See: https://github.com/pytorch/captum/blob/master/captum/attr/_core/integrated_gradients.py#L132
I've tried to format all the docs to match the formatting used in the integrated_gradients file you linked to.
        Returns:
            rgb (torch.Tensor): RGB image tensor without the alpha channel.
        """
        assert x.dim() == 4
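The behavior under discussion is small enough to restate as a plain function, which may help when writing the docstring:

```python
import torch

def drop_alpha(x: torch.Tensor) -> torch.Tensor:
    # Expects an NCHW tensor with 4 channels (RGB + alpha); keeps only RGB.
    assert x.dim() == 4 and x.size(1) == 4
    return x[:, :3]
```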
Since klt_transform and i1i2i3_transform are public, it would be good to document them. The __init__ for ToRGB needs docs as well.
Okay, I've added some documentation and moved the init args from the main description to the __init__ function.
        chw (torch.tensor): A tensor with its colors recorrelated or
            decorrelated.
    """
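For reference while documenting these: I1I2I3 is the classic Ohta et al. color space, sketched standalone below. Captum's klt_transform uses a matrix derived from natural image statistics instead, so this only covers the i1i2i3 variant.

```python
import torch

def i1i2i3_sketch(rgb: torch.Tensor) -> torch.Tensor:
    # rgb: a CHW tensor with C == 3. Ohta et al.'s I1I2I3 transform:
    #   I1 = (R + G + B) / 3, I2 = (R - B) / 2, I3 = (2G - R - B) / 4.
    r, g, b = rgb[0], rgb[1], rgb[2]
    return torch.stack([(r + g + b) / 3.0, (r - b) / 2.0, (2.0 * g - r - b) / 4.0])
```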
The __init__ for CenterCrop, RandomScale (public methods), and RandomSpatialJitter (public methods) needs docs for args as well.
captum/optim/_core/optimization.py
    loss_summarize_fn (Callable, optional): The function to use for summarizing
        tensor outputs from loss functions.
        Default: default_loss_summarize
    lr: (float): If no optimizer is given, then lr is used as the learning rate
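A plausible stand-in for what a loss summarize function does; default_loss_summarize's exact behavior is defined in captum, so the reduction here is an assumption:

```python
import torch

def mean_loss_summarize(loss: torch.Tensor) -> torch.Tensor:
    # Reduce a possibly multi-element loss tensor to the scalar
    # that .backward() is ultimately called on.
    return loss.mean()
```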
nit: lr: (float, optional)
Fixed!
Thank you for addressing the comments. For a couple of inputs, optional was missing; otherwise it looks great.
captum/optim/_param/image/images.py
    path (str): A URL or filepath to an image.
    scale (float): The image scale to use.
        Default: 255.0
    mode (str:) The image loading mode to use.
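A rough sketch of the loading behavior these args describe, assuming PIL-based loading and ignoring the URL case:

```python
import numpy as np
import torch
from PIL import Image

def load_image_sketch(path: str, scale: float = 255.0, mode: str = "RGB") -> torch.Tensor:
    # Load from a local filepath, convert to the requested mode, and divide
    # by `scale` so the default 255.0 maps pixel values into [0, 1].
    img = Image.open(path).convert(mode)
    arr = np.asarray(img, dtype=np.float32) / scale
    return torch.from_numpy(arr).permute(2, 0, 1)  # HWC -> CHW (3-channel modes)
```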
scale (float, optional):
mode (str, optional):
Fixed!
captum/optim/_param/image/images.py
    figsize (Tuple[int, int], optional): height & width to use
        for displaying the `ImageTensor` figure.
    scale (float): Value to multiply the `ImageTensor` by so that
, optional
Fixed!
captum/optim/_param/image/images.py
    filename (str): The filename to use when saving the `ImageTensor` as an
        image file.
    scale (float): Value to multiply the `ImageTensor` by so that
(float, optional)
Fixed!
""" | ||
Args: | ||
multiplier (float): A float value used to scale the input. |
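The multiplier's role is simple enough to sketch in full, assuming it belongs to a transform module:

```python
import torch
import torch.nn as nn

class ScaleInputSketch(nn.Module):
    """Sketch: a transform that scales its input by a fixed multiplier."""

    def __init__(self, multiplier: float = 1.0) -> None:
        super().__init__()
        self.multiplier = multiplier

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.multiplier
```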
(float, optional)
Fixed!