The GAIN framework guides the model to focus on specific areas of an object by adjusting its attention maps (Grad-CAM).
The flow of the framework is summarized as follows:
- First, we register forward and backward hooks on the last convolutional layer (or block).
- The model generates the attention maps (Grad-CAM) from the forward features and the backward gradients, then normalizes them using a threshold and a sigmoid function.
- Now the attention maps cover most of the important information of the object. We want to tell the model that those areas are important for the task. This can be done by applying the attention maps to the original image. Imagine that the resulting `masked_image` now contains only useless information. When we feed `masked_image` into the model again, we expect the prediction score to be as low as possible. That is the idea of *attention mining* in the paper.
- The losses are computed in `GAINCriterionCallback`.
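The steps above can be sketched in PyTorch. This is a minimal illustration, not the repo's implementation: it uses a tiny stand-in CNN instead of resnet50, and the sigmoid sharpness `omega` and the per-sample mean threshold `sigma` are assumptions (the paper treats them as hyperparameters).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Tiny stand-in backbone; in the repo this is resnet50 and the hooked
# layer is its last convolutional block.
class TinyBackbone(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.fc(self.pool(self.features(x)).flatten(1))

model = TinyBackbone()
acts, grads = {}, {}

# Step 1: register forward/backward hooks on the last conv layer
layer = model.features[-2]  # last Conv2d
layer.register_forward_hook(lambda m, i, o: acts.update(feat=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(feat=go[0]))

x = torch.randn(2, 3, 32, 32)
logits = model(x)
target = logits.argmax(1, keepdim=True)
score = logits.gather(1, target).sum()
model.zero_grad()
# retain_graph so a later mining loss could still backprop through the CAM
score.backward(retain_graph=True)

# Step 2: Grad-CAM — weight forward features by channel-averaged gradients
w = grads["feat"].mean(dim=(2, 3), keepdim=True)        # (N, C, 1, 1)
cam = F.relu((w * acts["feat"]).sum(1, keepdim=True))   # (N, 1, H, W)
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear",
                    align_corners=False)

# Step 3: normalize with a sigmoid around a threshold (assumed values)
omega = 10.0
sigma = cam.flatten(1).mean(1).view(-1, 1, 1, 1)
mask = torch.sigmoid(omega * (cam - sigma))             # soft mask in (0, 1)

# Step 4: attention mining — erase the highlighted region and re-classify;
# the mining loss pushes the masked image's score for the same class to zero
masked_image = x - x * mask
masked_logits = model(masked_image)
loss_am = masked_logits.gather(1, target).mean()
```

In the full framework this mining loss is combined with the ordinary classification loss (here, in `GAINCriterionCallback`), so the model learns to put all task-relevant evidence inside the attended region.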
In this implementation, I select resnet50 as the base model for the GAIN framework. You can change the backbone and its gradient layer as you want.
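Swapping the backbone mostly means picking which layer to hook. A hypothetical helper might look like this; the attribute names follow torchvision conventions (`layer4` for ResNets) and are assumptions, not part of this repo:

```python
import torch.nn as nn

# Hypothetical mapping from backbone name to the attribute holding the
# block whose forward/backward features feed Grad-CAM (assumed names).
GRADIENT_LAYERS = {
    "resnet18": "layer4",
    "resnet50": "layer4",
    "vgg16": "features",
}

def get_gradient_layer(model: nn.Module, backbone: str) -> nn.Module:
    """Return the module to register forward/backward hooks on."""
    return getattr(model, GRADIENT_LAYERS[backbone])
```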
To run training:

```bash
bash bin/train_gain.sh
```