
Conversation

@cyrjano (Contributor) commented Nov 18, 2025

Summary: This diff changes the `_should_skip_inputs_and_warn` function in `captum/attr/_core/feature_ablation.py`: it is now a free function instead of a method of a class. The function checks two conditions that cause a feature group to be skipped during attribution computation:

1. `min_examples_per_batch_grouped` is specified and any input tensor in the feature group has a batch size (0th dimension) smaller than this threshold.
2. All input tensors in the feature group are empty.

Differential Revision: D87300652
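
For reference, a minimal sketch of the skip logic described above, assuming the free function receives the feature group's input tensors and an optional `min_examples_per_batch_grouped` threshold; the signature and warning messages here are illustrative, not the exact Captum implementation:

```python
import warnings
from typing import Optional, Tuple

from torch import Tensor


def _should_skip_inputs_and_warn(
    inputs: Tuple[Tensor, ...],
    min_examples_per_batch_grouped: Optional[int] = None,
) -> bool:
    """Return True if this feature group should be skipped during attribution."""
    # Condition 1: a batch-size threshold is set and some tensor in the
    # feature group has fewer examples (0th dimension) than the threshold.
    if min_examples_per_batch_grouped is not None and any(
        inp.shape[0] < min_examples_per_batch_grouped for inp in inputs
    ):
        warnings.warn(
            "Skipping feature group: batch size is below "
            f"min_examples_per_batch_grouped={min_examples_per_batch_grouped}."
        )
        return True
    # Condition 2: every tensor in the feature group is empty.
    if all(inp.numel() == 0 for inp in inputs):
        warnings.warn("Skipping feature group: all input tensors are empty.")
        return True
    return False
```

Returning a bool lets the caller drop the group from the ablation loop while still surfacing a warning that explains why it was skipped.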

@meta-cla meta-cla bot added the cla signed label Nov 18, 2025
meta-codesync bot commented Nov 18, 2025

@cyrjano has exported this pull request. If you are a Meta employee, you can view the originating Diff in D87300652.

cyrjano added a commit to cyrjano/captum that referenced this pull request Nov 19, 2025
Summary:

This diff changes the `_should_skip_inputs_and_warn` function in `captum/attr/_core/feature_ablation.py`: it is now a free function instead of a method of a class. The function checks two conditions that cause a feature group to be skipped during attribution computation:

1. `min_examples_per_batch_grouped` is specified and any input tensor in the feature group has a batch size (0th dimension) smaller than this threshold.
2. All input tensors in the feature group are empty.

Differential Revision: D87300652
meta-codesync bot commented Nov 20, 2025

This pull request has been merged in 4236431.
