Update base.py #333

Open

wants to merge 2 commits into master
18 changes: 14 additions & 4 deletions piq/functional/base.py
@@ -49,17 +49,27 @@ def similarity_map(map_x: torch.Tensor, map_y: torch.Tensor, constant: float, al

def gradient_map(x: torch.Tensor, kernels: torch.Tensor) -> torch.Tensor:
r""" Compute gradient map for a given tensor and stack of kernels.

Args:
x: Tensor with shape (N, C, H, W).
kernels: Stack of tensors for gradient computation with shape (k_N, k_H, k_W)
kernels: Stack of tensors for gradient computation with shape (k_N, k_H, k_W) or (k_N, 1, k_H, k_W)

Suggested change
kernels: Stack of tensors for gradient computation with shape (k_N, k_H, k_W) or (k_N, 1, k_H, k_W)
kernels: Stack of tensors for gradient computation with shape (k_C_out, k_C_in, k_H, k_W). k_C_in equals 1.

Returns:
Gradients of x per-channel with shape (N, C, H, W)
"""
padding = kernels.size(-1) // 2
grads = torch.nn.functional.conv2d(x, kernels, padding=padding)
N, C, H, W = x.shape

# Expand kernel if this is not done already and repeat to match number of groups
if kernels.dim() != 4:
kernels = kernels.unsqueeze(1)

Comment on lines 58 to +64

Suggested change
padding = kernels.size(-1) // 2
grads = torch.nn.functional.conv2d(x, kernels, padding=padding)
N, C, H, W = x.shape
# Expand kernel if this is not done already and repeat to match number of groups
if kernels.dim() != 4:
kernels = kernels.unsqueeze(1)
assert kernels.dim() == 4, f'Expected 4D kernel, got {kernels.dim()}D tensor'
assert kernels.size(1) == 1, f'Expected kernel input-channel dimension to equal one, got kernel of size {kernels.size()}'
assert kernels.size(-1) == kernels.size(-2), f'Expected square kernel along the last two dimensions, got {kernels.size()}'
padding = kernels.size(-1) // 2
N, C, H, W = x.shape

if C > 1:
kernels = kernels.repeat(C, 1, 1, 1)

# Process each channel separately using group convolution.
grads = torch.nn.functional.conv2d(x, kernels.to(x), groups=C, padding=padding)
@denproc denproc Feb 7, 2023

The implicit transfer between devices is not needed, as the current implementation in master already handles the device and dtype properties.

Suggested change
grads = torch.nn.functional.conv2d(x, kernels.to(x), groups=C, padding=padding)
grads = torch.nn.functional.conv2d(x, kernels, groups=C, padding=padding)
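For context on this thread: the disagreement is about where the cast happens, not whether one is needed. `conv2d` does require input and weight to share dtype and device, so a cast has to occur somewhere; the reviewer's point is that master already performs it before the call. A minimal sketch of the failure mode (hypothetical tensors, assuming PyTorch):

```python
import torch

x = torch.rand(1, 1, 4, 4)                       # float32 input
w = torch.ones(1, 1, 3, 3, dtype=torch.float64)  # mismatched float64 kernel

# conv2d raises RuntimeError when input and weight dtypes differ.
try:
    torch.nn.functional.conv2d(x, w, padding=1)
    mismatch_raised = False
except RuntimeError:
    mismatch_raised = True

# Casting the kernel to the input's dtype/device resolves it;
# the question in this thread is only which layer should do the cast.
out = torch.nn.functional.conv2d(x, w.to(x), padding=1)
```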


return torch.sqrt(torch.sum(grads ** 2, dim=-3, keepdim=True))
# Create a per-channel view, compute square of grads and return
return torch.sqrt(torch.sum(grads.view(N, C, -1, H, W) ** 2, dim=-3))
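Putting the diff together, the patched function can be sketched end-to-end as follows (a minimal sketch, assuming PyTorch; the Sobel kernel pair here is illustrative and not necessarily the kernels piq ships):

```python
import torch


def gradient_map(x: torch.Tensor, kernels: torch.Tensor) -> torch.Tensor:
    """Compute per-channel gradient magnitude of x via grouped conv2d."""
    # Expand a (k_N, k_H, k_W) stack to (k_N, 1, k_H, k_W) if needed.
    if kernels.dim() != 4:
        kernels = kernels.unsqueeze(1)
    padding = kernels.size(-1) // 2
    N, C, H, W = x.shape

    # Repeat the kernel stack so each input channel gets its own group.
    if C > 1:
        kernels = kernels.repeat(C, 1, 1, 1)

    # Process each channel separately using group convolution.
    grads = torch.nn.functional.conv2d(x, kernels.to(x), groups=C, padding=padding)

    # (N, C * k_N, H, W) -> (N, C, k_N, H, W), then L2 norm over the kernel axis.
    return torch.sqrt(torch.sum(grads.view(N, C, -1, H, W) ** 2, dim=-3))


# Usage: a Sobel pair (horizontal + vertical) applied to a 3-channel batch.
sobel = torch.tensor([[[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]])
kernels = torch.cat([sobel, sobel.transpose(-1, -2)])  # shape (2, 3, 3)
x = torch.rand(4, 3, 16, 16)
out = gradient_map(x, kernels)  # shape (4, 3, 16, 16): one magnitude per channel
```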


def pow_for_complex(base: torch.Tensor, exp: Union[int, float]) -> torch.Tensor: