
Add type hints to layer_integrated_gradients.py #252

Conversation

quinton-hoole (Contributor):

See also: #184

Tests:

captum] black .
reformatted /data/users/quinton/captum/captum/attr/_core/layer/layer_integrated_gradients.py
All done! ✨ 🍰 ✨
1 file reformatted, 105 files left unchanged.
captum] flake8 .
captum] ./scripts/run_mypy.sh
Success: no issues found in 43 source files
Success: no issues found in 48 source files

This is my first contribution to PyTorch, so I want to confirm that my dev process and initial attempt are sound. More methods can be type-hinted in this or follow-up PRs (similar to #184 above).

vivekmig (Contributor) left a comment:

Thanks for adding this; it looks great! Just a few small changes, and if you could add hints to the remaining helper methods in test_layer_integrated_gradients.py, that would be great!

n_steps: int = 50,
method: str = "gausslegendre",
internal_batch_size: Optional[int] = None,
return_convergence_delta: Optional[bool] = False,
vivekmig (Contributor):

These two arguments (return_convergence_delta and attribute_to_layer_input) should just have a bool type, since they default to False, not None.

quinton-hoole (Contributor, Author):

Agreed, done.
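
For reference, a minimal sketch of the agreed change (the function name and trimmed signature are illustrative abbreviations, not the PR's exact code):

from typing import Optional

# Parameters defaulting to False get a plain bool hint; Optional is
# reserved for parameters whose default really is None.
def attribute_sketch(
    n_steps: int = 50,
    method: str = "gausslegendre",
    internal_batch_size: Optional[int] = None,  # may genuinely be None
    return_convergence_delta: bool = False,     # always a bool, never None
    attribute_to_layer_input: bool = False,
) -> None:
    ...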

model: Module,
target_layer: Module,
test_input: Union[Tensor, Tuple[Tensor, ...]],
expected_ig: Tuple[List[List[float]], List[List[float]]],
vivekmig (Contributor):

nit: This is correct for the existing usage, but to make this more generic, could change this to Tuple[List[List[float]], ...], so we can also pass arbitrary length tuples in the future.

quinton-hoole (Contributor, Author):

Agreed, done.
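
For reference, a minimal sketch of the suggested generic hint (the ExpectedIG alias name is hypothetical, used here only for illustration):

from typing import List, Tuple

# The trailing ellipsis accepts a homogeneous tuple of any length,
# not only the two-element tuples used so far.
ExpectedIG = Tuple[List[List[float]], ...]

as_pair: ExpectedIG = ([[0.0, 1.0]], [[2.0, 3.0]])   # the existing usage
as_triple: ExpectedIG = ([[0.0]], [[1.0]], [[2.0]])  # also valid now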

vivekmig (Contributor):

Hi @quinton-hoole, I have made some updates to the class hierarchy for Layer Integrated Gradients in #244, so please merge with those changes when you update this PR. Thanks!

quinton-hoole (Contributor, Author):

@vivekmig Awesome! I was actually a bit stuck on some type hinting apparently related to attributes(). Hopefully this will help resolve those issues. Otherwise I will contact you shortly for some help.

quinton-hoole (Contributor, Author) commented Jan 28, 2020:

@vivekmig Since merging your updates in #244 I'm getting the following error (I had zero errors before the merge). Does not seem to be related to any of the changes I've made. Any advice? I'll push the current state of my branch here for your reference.

./scripts/run_mypy.sh
Success: no issues found in 44 source files
tests/attr/neuron/test_neuron_gradient_shap.py:66: error: "Tensor" has no attribute "__iter__"; maybe "__ior__", "__int__", or "__invert__"? (not iterable)
Found 1 error in 1 file (checked 48 source files)

quinton-hoole force-pushed the 2020-01-23-type-hint-layer_integrated_gradients branch from 5c1a65d to 2ca4cef on January 28, 2020, 21:10.
vivekmig (Contributor):

> @vivekmig Since merging your updates in #244 I'm getting the following error (I had zero errors before the merge). Does not seem to be related to any of the changes I've made. Any advice? I'll push the current state of my branch here for your reference.
>
> ./scripts/run_mypy.sh
> Success: no issues found in 44 source files
> tests/attr/neuron/test_neuron_gradient_shap.py:66: error: "Tensor" has no attribute "__iter__"; maybe "__ior__", "__int__", or "__invert__"? (not iterable)
> Found 1 error in 1 file (checked 48 source files)

This error is due to torch 1.3 missing some type hints for tensors, which was fixed in pytorch/pytorch#28578. If you upgrade to torch 1.4, that should fix the issue!
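
A hedged illustration of the failure mode (not the actual test code): iterating over a Tensor is exactly what torch 1.3's incomplete stubs rejected, and the same code type-checks under torch >= 1.4.

import torch
from torch import Tensor

t: Tensor = torch.tensor([1.0, 2.0, 3.0])
# Under torch 1.3 stubs, mypy reported: "Tensor" has no attribute "__iter__".
# pytorch/pytorch#28578 (shipped in torch 1.4) added the missing hints.
values = [float(x) for x in t]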

quinton-hoole (Contributor, Author):

> @vivekmig Since merging your updates in #244 I'm getting the following error (I had zero errors before the merge). Does not seem to be related to any of the changes I've made. Any advice? I'll push the current state of my branch here for your reference.
>
> ./scripts/run_mypy.sh
> Success: no issues found in 44 source files
> tests/attr/neuron/test_neuron_gradient_shap.py:66: error: "Tensor" has no attribute "__iter__"; maybe "__ior__", "__int__", or "__invert__"? (not iterable)
> Found 1 error in 1 file (checked 48 source files)
>
> This error is due to torch 1.3 missing some type hints for tensors, which was fixed in pytorch/pytorch#28578. If you upgrade to torch 1.4, that should fix the issue!

Awesome! Thanks.

quinton-hoole (Contributor, Author):

@vivekmig I've pushed what will hopefully be the final commit. mypy tests pass:

  ./scripts/run_mypy.sh
Success: no issues found in 44 source files
Success: no issues found in 48 source files

vivekmig (Contributor) left a comment:

Thanks for updating this; it looks good, just a few small comments! Also, there seem to be some formatting issues causing CircleCI failures; can you run black and flake8?

def __init__(
self,
forward_func: Callable,
layer: torch.nn.Module,
vivekmig (Contributor):

nit: Can you add "from torch.nn import Module" above and just put Module here? It's consistent with our usage in other files.

quinton-hoole (Contributor, Author):

Agreed, done.
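
For reference, a minimal sketch of the agreed import style (the class here is an abbreviated stand-in, not the real LayerIntegratedGradients):

from typing import Callable
from torch.nn import Module  # imported once, so hints read as plain "Module"

class LayerIntegratedGradientsSketch:
    def __init__(self, forward_func: Callable, layer: Module) -> None:
        self.forward_func = forward_func
        self.layer = layer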

forward_fn: Callable,
inputs: Union[Tensor, Tuple[Tensor,...]],
target_ind: int = None,
additional_forward_args: Any = None
vivekmig (Contributor):

Would be great to also type hint the return of this function; it should be Tuple[Tensor, ...].

quinton-hoole (Contributor, Author):

Agreed, done.
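
For reference, a minimal sketch of the hinted helper with the suggested return type (body elided; Optional[int] is used here for target_ind, which a later review comment widens further):

from typing import Any, Callable, Optional, Tuple, Union
from torch import Tensor

def gradient_func_sketch(
    forward_fn: Callable,
    inputs: Union[Tensor, Tuple[Tensor, ...]],
    target_ind: Optional[int] = None,
    additional_forward_args: Any = None,
) -> Tuple[Tensor, ...]:  # the suggested return hint
    ...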

):
scattered_inputs: Union[Tuple[Union[Tensor, Tuple[Tensor, ...]]], Union[Tuple[Tensor, ...], List[Tuple[Any, ...]]]]
vivekmig (Contributor):

I think we can avoid having this extra declaration, and instead just add "# type:ignore" to line 283 below with a short comment above on why this is necessary. This seems to be caused by the scatter method not having a precise enough return type in its stub, so let's just suppress the type warning there.

quinton-hoole (Contributor, Author):

Agreed, done.
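
For reference, a hedged sketch of the suppression pattern (the inputs and device list are hypothetical; scatter here is torch.nn.parallel.scatter_gather.scatter, which the file uses):

import torch
from torch.nn.parallel.scatter_gather import scatter

inputs = (torch.rand(4, 3),)  # hypothetical inputs, for illustration only
device_ids = [0]              # hypothetical GPU ids

# scatter's stub returns a less precise type than its runtime value, so the
# single warning is suppressed at the call site rather than worked around
# with a broad extra variable annotation.
scattered_inputs = scatter(inputs, target_gpus=device_ids)  # type: ignore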

quinton-hoole (Contributor, Author):

Thanks @vivekmig. I've addressed all the comments and rerun black and flake8.

vivekmig (Contributor) left a comment:

It seems like run_mypy is throwing an error now; the "type: ignore" needs to be moved to line 285, since the line was reformatted. Also, I happened to notice that the targets hint should be updated as well. If you can fix that and then rerun black, flake8, and run_mypy to double-check everything, that would be great! Thanks so much for your work on this; these should be the last changes!

):
forward_fn: Callable,
inputs: Union[Tensor, Tuple[Tensor, ...]],
target_ind: int = None,
vivekmig (Contributor):

This should actually be the same type as target above: Optional[Union[int, Tuple[int, ...], Tensor, List[Tuple[int, ...]]]].

quinton-hoole (Contributor, Author):

Good catch! Thanks, fixed.
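
For reference, a minimal sketch of the widened hint (the TargetType alias and function name are illustrative only):

from typing import List, Optional, Tuple, Union
from torch import Tensor

TargetType = Optional[Union[int, Tuple[int, ...], Tensor, List[Tuple[int, ...]]]]

def gradient_func_sketch(target_ind: TargetType = None) -> None:  # abbreviated
    ...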

quinton-hoole (Contributor, Author):

> It seems like run_mypy is throwing an error now; the "type: ignore" needs to be moved to line 285, since the line was reformatted. Also, I happened to notice that the targets hint should be updated as well. If you can fix that and then rerun black, flake8, and run_mypy to double-check everything, that would be great! Thanks so much for your work on this; these should be the last changes!

Whoops! My apologies for that oversight. Fixed now.

quinton-hoole (Contributor, Author):

@vivekmig I think this one is ready to merge now.

vivekmig (Contributor) left a comment:

Looks great, thanks so much for adding this!

facebook-github-bot (Contributor) left a comment:

@quinton-hoole has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

method: str = "gausslegendre",
internal_batch_size: Optional[int] = None,
return_convergence_delta: bool = False,
attribute_to_layer_input: bool = False,
Contributor:

Shouldn't this be optional too? cc: @vivekmig

Contributor:

I think this is fine the way it is: Optional[X] is equivalent to a Union of X and None. In this case, attribute_to_layer_input will always take a bool value, even in the default case, and should never be None.
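
A short illustration of the equivalence:

from typing import Optional, Union

# Optional[X] is shorthand for Union[X, None]; the two spellings compare equal.
assert Optional[bool] == Union[bool, None]

# attribute_to_layer_input always carries a concrete bool (its default is
# False, never None), so the plain bool hint is correct.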

facebook-github-bot (Contributor):

@quinton-hoole merged this pull request in 6686d67.

NarineK pushed a commit to NarineK/captum-1 that referenced this pull request Nov 19, 2020
Summary:
See also: pytorch#184

Tests:

captum] black .
reformatted /data/users/quinton/captum/captum/attr/_core/layer/layer_integrated_gradients.py
All done! ✨ 🍰 ✨
1 file reformatted, 105 files left unchanged.
captum] flake8 .
captum] ./scripts/run_mypy.sh
Success: no issues found in 43 source files
Success: no issues found in 48 source files

This is my first contribution to PyTorch, so I want to confirm that my dev process and initial attempt are sound. More methods can be type-hinted in this or follow-up PRs (similar to pytorch#184 above).
Pull Request resolved: pytorch#252

Reviewed By: vivekmig

Differential Revision: D19669052

Pulled By: quinton-hoole

fbshipit-source-id: e74830a6c4528a8f60b5358dc9c0cda03edc85ae