
Implement Robust DPatch attack (#149) #155

Merged: 33 commits merged into master from 149-implement-robust-dpatch-attack on Jun 11, 2024

Conversation

@treubig26 (Collaborator) commented May 21, 2024:

Initial implementation of Robust DPatch using a Lightning module. This PR does not yet offer attack optimization as part of the armory-library API, but rather implements it entirely in user/example code.

Aside from editorial fixes and blatant errors, most comments on this PR will probably be addressed in a follow-on PR, when the attack optimization is made generic and moved into armory-library.
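
For readers unfamiliar with the approach, here is a minimal sketch of how a patch optimization can be structured as a Lightning module. This is not the PR's code: detector.compute_loss is a hypothetical stand-in for the model's loss API, and the fixed top-left placement is a simplification.

import lightning.pytorch as pl
import torch


class PatchAttackModule(pl.LightningModule):
    """Sketch: optimize an adversarial patch against a frozen detector."""

    def __init__(self, detector, patch_shape=(3, 50, 50), lr=0.01):
        super().__init__()
        self.detector = detector
        for p in self.detector.parameters():
            p.requires_grad_(False)  # only the patch is optimized
        self.patch = torch.nn.Parameter(torch.rand(patch_shape))
        self.lr = lr

    def training_step(self, batch, batch_idx):
        inputs, targets = batch
        patched = inputs.clone()
        _, h, w = self.patch.shape
        patched[:, :, :h, :w] = self.patch  # fixed top-left placement
        loss = self.detector.compute_loss(patched, targets)  # hypothetical API
        return -loss  # gradient ascent: degrade the detections

    def configure_optimizers(self):
        return torch.optim.SGD([self.patch], lr=self.lr)

    def on_train_batch_end(self, *args):
        self.patch.data.clamp_(0.0, 1.0)  # keep pixels in the valid range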

@treubig26 treubig26 linked an issue May 21, 2024 that may be closed by this pull request
@treubig26 treubig26 marked this pull request as ready for review June 3, 2024 20:18
@deprit (Collaborator) left a comment:


The Robust DPatch example is somewhat specialized to a particular YOLOv5 model. It's not clear how much effort should be expended to generalize the code.

# TODO non-zero min value
self.patch_min = 0
self.patch_max = self.model.inputs_spec.scale.max
self.patch = torch.randint(0, 255, self.patch_shape) / 255 * self.patch_max
A collaborator commented:

Let's define a static method for patch initialization to allow easy subclassing, something like the following.

@staticmethod
def _patch_init(shape: Sequence[int], ...) -> torch.Tensor:
    ...
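
A possible implementation under those assumptions (note that torch.randint's upper bound is exclusive, so covering the full pixel range requires 256, not 255; the container class here is only illustrative):

from typing import Sequence

import torch


class RobustDPatch:  # illustrative container class
    @staticmethod
    def _patch_init(shape: Sequence[int], patch_max: float = 1.0) -> torch.Tensor:
        """Draw integer pixel values in [0, 255] and rescale to [0, patch_max]."""
        return torch.randint(0, 256, tuple(shape)) / 255 * patch_max


patch = RobustDPatch._patch_init((3, 50, 50))  # subclasses can override _patch_init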

# x_1, y_1 = self.patch_location
x_2 = x_1 + self.patch_shape[1]
y_2 = y_1 + self.patch_shape[2]
inputs_with_patch = inputs.clone()
A collaborator commented:

Is this clone operation necessary? Does the batch loader already create tensor copies of the batch?
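
For reference, a small illustration of the aliasing hazard the clone guards against: if the loader hands back a tensor that aliases its internal storage, writing the patch without a clone mutates the original batch in place.

import torch

batch = torch.zeros(1, 3, 8, 8)    # stands in for a loader-provided batch
alias = batch                      # no clone: same underlying storage
alias[:, :, 0:2, 0:2] = 1.0        # writing the patch...
assert batch.max() == 1.0          # ...also mutates the original batch

safe = batch.clone()               # with clone: an independent copy
safe[:, :, 2:4, 2:4] = 2.0
assert batch.max() == 1.0          # original is unchanged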

Comment on lines +63 to +73
# Apply patch to image
x_1 = random.randint(0, inputs.shape[3] - self.patch_shape[2])
y_1 = random.randint(0, inputs.shape[2] - self.patch_shape[1])
# x_1, y_1 = self.patch_location
x_2 = x_1 + self.patch_shape[1]
y_2 = y_1 + self.patch_shape[2]
inputs_with_patch = inputs.clone()
inputs_with_patch[:, :, x_1:x_2, y_1:y_2] = self.patch

# Apply random augmentations to images
inputs_with_augmentations = self.augmentation(inputs_with_patch)
A collaborator commented:

Let's encapsulate these lines into a method that applies the patch to a batch of input images with augmentations.

def _apply_patch(
    self,
    inputs: torch.Tensor,
    patch: torch.Tensor,
    location: torch.Tensor,
    augmentations: kornia.augmentation.container.ImageSequential,
) -> torch.Tensor:
    ...
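
A possible body for that method, carrying over the placement logic from the quoted lines (the optional location tuple and the NCHW layout are assumptions; the original's axis bookkeeping is tidied so height and width are indexed consistently):

import random
from typing import Optional, Tuple

import torch
from kornia.augmentation.container import ImageSequential


def _apply_patch(
    self,
    inputs: torch.Tensor,                 # NCHW batch of images
    patch: torch.Tensor,                  # CHW patch
    location: Optional[Tuple[int, int]],  # (x_1, y_1); None means random
    augmentations: ImageSequential,
) -> torch.Tensor:
    """Paste the patch into every image, then apply random augmentations."""
    _, patch_h, patch_w = patch.shape
    if location is None:
        x_1 = random.randint(0, inputs.shape[2] - patch_h)
        y_1 = random.randint(0, inputs.shape[3] - patch_w)
    else:
        x_1, y_1 = location
    patched = inputs.clone()
    patched[:, :, x_1 : x_1 + patch_h, y_1 : y_1 + patch_w] = patch
    return augmentations(patched)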

resize = A.Compose(
    [
        A.LongestMaxSize(max_size=max_size),
        A.PadIfNeeded(
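
The quoted snippet is cut off by the diff view; a typical complete form of this albumentations pipeline (parameter values here are illustrative, not the PR's) would be:

import albumentations as A
import cv2

max_size = 640  # illustrative target size

resize = A.Compose(
    [
        A.LongestMaxSize(max_size=max_size),
        A.PadIfNeeded(
            min_height=max_size,
            min_width=max_size,
            border_mode=cv2.BORDER_CONSTANT,
        ),
    ]
)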
A collaborator commented:

It may not be necessary to pad all input images to the max size; some models (e.g., Faster R-CNN) take a list of variably sized images as input.
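
For example, torchvision's Faster R-CNN accepts a list of differently sized images and resizes them internally, so no external padding is needed:

import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

# The model takes a list of 3xHxW tensors of differing sizes and
# handles resizing/batching internally.
images = [torch.rand(3, 480, 640), torch.rand(3, 512, 512)]
with torch.no_grad():
    outputs = model(images)  # one dict of boxes/labels/scores per image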

@treubig26 treubig26 merged commit e2ab57f into master Jun 11, 2024
30 checks passed
@treubig26 treubig26 deleted the 149-implement-robust-dpatch-attack branch June 11, 2024 13:56
Development
Successfully merging this pull request may close the issue: Implement Robust DPatch Attack.
3 participants