
Sparsity constraint in channel exchanging #17

Closed
hljeong opened this issue Sep 1, 2022 · 5 comments

hljeong commented Sep 1, 2022

Hello,
Thank you for your very interesting work! I was planning on experimenting with CEN, but I couldn't find the implementation of the sparsity constraint in channel exchanging mentioned in Section 3.3, i.e. that channel exchanging is only performed in different (disjoint) sub-parts for different modalities. Could you point me to where in the model this is implemented?

Thanks.

yikaiw (Owner) commented Sep 1, 2022

Hi, for semantic segmentation, the sparsity constraint is implemented in two steps:
(1) selecting different (disjoint) sub-parts,

if param.requires_grad and name.endswith('weight') and 'bn2' in name:
    if len(slim_params) % 2 == 0:
        slim_params.append(param[:len(param) // 2])
    else:
        slim_params.append(param[len(param) // 2:])

(2) adding the loss of the sparsity constraint,
L1_norm = sum([L1_penalty(m).cuda() for m in slim_params])
loss += lamda * L1_norm  # this term is actually counted len(outputs) times

For image-to-image translation, the sparsity constraint is implemented in two steps:
(1) selecting different (disjoint) sub-parts,

slim_params, insnorm_params = [], []
for name, param in G.named_parameters():
    if param.requires_grad and name.endswith('weight') and 'insnorm_conv' in name:
        insnorm_params.append(param)
        if len(slim_params) % 2 == 0:
            slim_params.append(param[:len(param) // 2])
        else:
            slim_params.append(param[len(param) // 2:])

(2) adding the loss of the sparsity constraint,
# L1 loss
l1_loss = params.gama * sum([L1_loss(gen_image, y_) for gen_image in gen_images])
if params.lamda > 0:
    slim_loss = params.lamda * sum([slim_penalty(m).cuda() for m in slim_params])
else:
    slim_loss = 0
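
To make the pattern above concrete, here is a minimal, self-contained sketch of the two steps for the semantic segmentation case, assuming two modalities; model, task_loss, and the value of lamda are hypothetical placeholders, and L1_penalty stands in for the L1 penalty used in main.py:

import torch

lamda = 1e-4  # sparsity weight (hypothetical value)
L1_penalty = lambda p: torch.norm(p, 1)  # L1 norm of a slice of BN scaling factors

# (1) collect disjoint halves of the BN scaling factors ('bn2' weights):
# even-indexed appends keep the first half of the channels, odd-indexed appends
# keep the second half, so the two modalities are penalized on disjoint sub-parts.
slim_params = []
for name, param in model.named_parameters():  # model: the two-stream network (placeholder)
    if param.requires_grad and name.endswith('weight') and 'bn2' in name:
        if len(slim_params) % 2 == 0:
            slim_params.append(param[:len(param) // 2])
        else:
            slim_params.append(param[len(param) // 2:])

# (2) add the sparsity loss on those sub-parts only
L1_norm = sum(L1_penalty(p) for p in slim_params)
loss = task_loss + lamda * L1_norm  # task_loss: the usual segmentation loss (placeholder)

Only the slices stored in slim_params receive gradient from the L1 term, so the other half of each modality's channels is left unconstrained.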

hljeong (Author) commented Sep 3, 2022

Thank you for your reply! However, I am still confused about where the sparsity constraint is enforced in the channel exchanging itself, since the sections of code you referenced seem to apply the sparsity constraint only to the loss calculation.

I am mainly confused about

x1[:, bn1 >= bn_threshold] = x[0][:, bn1 >= bn_threshold]
x1[:, bn1 < bn_threshold] = x[1][:, bn1 < bn_threshold]
x2[:, bn2 >= bn_threshold] = x[1][:, bn2 >= bn_threshold]
x2[:, bn2 < bn_threshold] = x[0][:, bn2 < bn_threshold]

which seems to exchange channels across all of x[0] and x[1], instead of only within disjoint sub-parts of them.

yikaiw (Owner) commented Sep 5, 2022

Hi, take semantic segmentation as an example:
We apply the sparsity constraint to disjoint sub-parts of the BN scaling factors in,

if param.requires_grad and name.endswith('weight') and 'bn2' in name:
    if len(slim_params) % 2 == 0:
        slim_params.append(param[:len(param) // 2])
    else:
        slim_params.append(param[len(param) // 2:])

In the case of two modalities, we divide the channels into two disjoint sub-parts, implemented by appending param[:len(param) // 2] and param[len(param) // 2:] to slim_params. This is followed by the sparsity loss on slim_params, which means only the sub-parts in slim_params are constrained by L1.

We find that if a channel is not under the sparsity constraint (L1), its BN scaling factor hardly ever falls below the small threshold during training. Therefore we check the criterion for channel exchanging directly on all channels,

x1[:, bn1 >= bn_threshold] = x[0][:, bn1 >= bn_threshold]
x1[:, bn1 < bn_threshold] = x[1][:, bn1 < bn_threshold]
x2[:, bn2 >= bn_threshold] = x[1][:, bn2 >= bn_threshold]
x2[:, bn2 < bn_threshold] = x[0][:, bn2 < bn_threshold]

Since constraining half of the channels (the disjoint sub-parts) is already implemented in main.py, checking the exchanging criterion on all channels is almost equivalent to checking it only on the disjoint sub-parts.
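
For completeness, a minimal sketch of the exchange step written as a standalone function, where bn1 and bn2 are the per-channel BN scaling factors of the two streams and bn_threshold is the small threshold, as in the snippet above (this only illustrates the criterion; it is not the exact module in the repo):

import torch

def channel_exchange(x, bn1, bn2, bn_threshold):
    # x: list of the two modality feature maps, each of shape (N, C, H, W)
    # bn1, bn2: BN scaling factors of the two streams, shape (C,)
    x1 = torch.zeros_like(x[0])
    x2 = torch.zeros_like(x[1])
    # each stream keeps its own channels whose scaling factor is above the threshold
    # and takes the other stream's channels where the factor falls below it
    x1[:, bn1 >= bn_threshold] = x[0][:, bn1 >= bn_threshold]
    x1[:, bn1 < bn_threshold] = x[1][:, bn1 < bn_threshold]
    x2[:, bn2 >= bn_threshold] = x[1][:, bn2 >= bn_threshold]
    x2[:, bn2 < bn_threshold] = x[0][:, bn2 < bn_threshold]
    return [x1, x2]

Because only the halves of the channels stored in slim_params are driven toward zero by the L1 term, the below-threshold condition fires almost exclusively inside those disjoint sub-parts, which is why checking all channels behaves like checking only the sub-parts.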

hljeong (Author) commented Sep 6, 2022

That makes sense. Thank you for your detailed explanation!

hljeong closed this as completed Sep 6, 2022
leyuan-sun commented Nov 4, 2022

@yikaiw Hi, (1) I don't quite understand the point of penalizing the scaling factors with an L1 norm in the loss function. Simply put, doesn't this term just push the scaling factors smaller and smaller? Could you explain it a bit? Thanks 🙏
(2) Does the "certain portion" here refer to the disjoint sub-part?
