Add type hints to layer_integrated_gradients.py #252
Conversation
Thanks for adding this, this looks great! Just a few small changes, and if you could add hints to the remaining helper methods in test_layer_integrated_gradients.py, that would be great!
n_steps: int = 50,
method: str = "gausslegendre",
internal_batch_size: Optional[int] = None,
return_convergence_delta: Optional[bool] = False,
These two arguments (return_convergence_delta and attribute_to_layer_input) should just have a bool type, since they default to False, not None.
Agreed, done.
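A minimal sketch of the corrected hints (parameter names taken from the diff above; the surrounding class and other parameters are elided): arguments that default to False take a plain bool, while internal_batch_size can genuinely be absent and stays Optional.

```python
from typing import Optional, get_type_hints

def attribute_sketch(
    n_steps: int = 50,
    method: str = "gausslegendre",
    internal_batch_size: Optional[int] = None,
    # These default to False, not None, so plain bool is the right hint.
    return_convergence_delta: bool = False,
    attribute_to_layer_input: bool = False,
) -> None:
    pass

hints = get_type_hints(attribute_sketch)
```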
model: Module,
target_layer: Module,
test_input: Union[Tensor, Tuple[Tensor, ...]],
expected_ig: Tuple[List[List[float]], List[List[float]]],
nit: This is correct for the existing usage, but to make this more generic, could change this to Tuple[List[List[float]], ...], so we can also pass arbitrary length tuples in the future.
Agreed, done.
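The variadic form suggested above can be sketched as follows (ExpectedIG is a hypothetical alias name, not one from the PR): Tuple[X, ...] accepts any number of elements of type X, so both two- and three-output layers type-check.

```python
from typing import List, Tuple, get_args

# Variadic tuple: any number of nested-float-list elements, not exactly two.
ExpectedIG = Tuple[List[List[float]], ...]

two_outputs: ExpectedIG = ([[1.0, 2.0]], [[3.0, 4.0]])
three_outputs: ExpectedIG = ([[1.0]], [[2.0]], [[3.0]])
```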
Hi @quinton-hoole, I have made some updates to the class hierarchy for Layer Integrated Gradients in #244, so please merge with these changes when you update this PR, thanks!
@vivekmig Awesome! I was actually a bit stuck on some type hinting apparently related to attribute(). Hopefully this will help to resolve those issues. Otherwise I will contact you shortly for some help.
@vivekmig Since merging your updates in #244 I'm getting the following error (I had zero errors before the merge). Does not seem to be related to any of the changes I've made. Any advice? I'll push the current state of my branch here for your reference.
(Branch force-pushed from 5c1a65d to 2ca4cef.)
This error is due to torch 1.3 missing some type hints for tensors, which was fixed here pytorch/pytorch#28578. If you upgrade to torch 1.4, that should fix the issue!
Awesome! Thanks.
@vivekmig I've pushed what will hopefully be the final commit. mypy tests pass:
Thanks for updating this, looks good, just a few small comments! Also, there seem to be some formatting issues causing CircleCI failures, can you run black and flake8?
def __init__(
    self,
    forward_func: Callable,
    layer: torch.nn.Module,
nit: Can you add "from torch.nn import Module" above and just put Module here? It's consistent with our usage in other files.
Agreed, done.
forward_fn: Callable,
inputs: Union[Tensor, Tuple[Tensor, ...]],
target_ind: int = None,
additional_forward_args: Any = None
Would be great to also type hint the return of this function, it should be Tuple[Tensor, ...]
Agreed, done.
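The return-hint pattern discussed above, illustrated here with int standing in for torch.Tensor so the sketch runs without torch installed (the helper name is hypothetical, not captum's actual function):

```python
from typing import Tuple, get_type_hints

def format_outputs(x: int) -> Tuple[int, ...]:
    # Always normalize to a tuple, mirroring helpers in the real code
    # that return Tuple[Tensor, ...] regardless of the input shape.
    return (x, x + 1)
```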
):
scattered_inputs: Union[Tuple[Union[Tensor, Tuple[Tensor, ...]]], Union[Tuple[Tensor, ...], List[Tuple[Any, ...]]]]
I think we can avoid having this extra declaration, and instead just add "# type:ignore" to line 283 below with a short comment above on why this is necessary. This seems to be caused by the scatter method not having a precise enough return type in its stub, so let's just suppress the type warning there.
Agreed, done.
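A targeted suppression of this kind looks roughly like the following (a hypothetical helper standing in for the scatter call, not captum's actual code): the ignore comment is scoped to the single line whose stub is imprecise, rather than widening our own annotations.

```python
from typing import List

def imprecise_stub() -> object:
    # Stand-in for a library call (like scatter) whose stub declares a
    # broader return type than the value it actually produces at runtime.
    return [1, 2, 3]

# The stub says `object`, but the runtime value is known to be a list,
# so suppress the warning on just this assignment.
values: List[int] = imprecise_stub()  # type: ignore
```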
Thanks @vivekmig. I've addressed all the comments and rerun black and flake8.
It seems like run_mypy is throwing an error now, the "type: ignore" needs to be moved to line 285, since the line was reformatted. Also, happened to notice that the targets hint also should be updated, if you can fix that as well and then rerun black, flake8, and run_mypy to double check everything that would be great! Thanks so much for your work on this, these should be the last changes!
):
forward_fn: Callable,
inputs: Union[Tensor, Tuple[Tensor, ...]],
target_ind: int = None,
This should actually be the same type as target above, (Optional[Union[int, Tuple[int, ...], Tensor, List[Tuple[int, ...]]]])
Good catch! Thanks, fixed.
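The suggested target hint can be sketched as a reusable alias (TargetType is a hypothetical name, and the Tensor class below is a stand-in for torch.Tensor so the sketch runs without torch):

```python
from typing import List, Optional, Tuple, Union, get_args

class Tensor:
    """Stand-in for torch.Tensor; the real alias would import it from torch."""

# The hint from the review: a single index, a tuple of indices, a tensor
# of indices, a list of index tuples, or None.
TargetType = Optional[Union[int, Tuple[int, ...], Tensor, List[Tuple[int, ...]]]]
```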
Whoops! My apologies for that oversight. Fixed now.
@vivekmig I think this one is ready to merge now.
Looks great, thanks so much for adding this!
@quinton-hoole has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
method: str = "gausslegendre",
internal_batch_size: Optional[int] = None,
return_convergence_delta: bool = False,
attribute_to_layer_input: bool = False,
Shouldn't this be Optional too?
cc: @vivekmig
I think this is fine the way it is, Optional[X] is equivalent to Union of X and None, in this case attribute_to_layer_input will always take a bool value, even in the default case, and should never be None.
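This equivalence can be checked directly with the typing module: Optional[X] is pure shorthand for Union[X, None] and says nothing about default values, so a parameter that always holds a bool should be hinted as plain bool.

```python
from typing import Optional, Union

# Optional[bool] means "bool or None", not "may be omitted". A parameter
# defaulting to False is always a bool at runtime, so `bool` is correct.
assert Optional[bool] == Union[bool, None]
assert Optional[bool] != bool
```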
@quinton-hoole merged this pull request in 6686d67.
Merge commit metadata: Pull Request resolved: pytorch#252 · Reviewed By: vivekmig · Differential Revision: D19669052 · Pulled By: quinton-hoole · fbshipit-source-id: e74830a6c4528a8f60b5358dc9c0cda03edc85ae
See also: #184
Tests:
captum] black .
reformatted /data/users/quinton/captum/captum/attr/_core/layer/layer_integrated_gradients.py
All done! ✨ 🍰 ✨
1 file reformatted, 105 files left unchanged.
captum] flake8 .
captum] ./scripts/run_mypy.sh
Success: no issues found in 43 source files
Success: no issues found in 48 source files
This is my first contribution to PyTorch, so I want to confirm that my dev process and initial attempt are sound. More methods can be type-hinted in this or follow-up PRs (similar to #184 above).