Commit

update the README
YangYuqi317 authored and YangYuqi317 committed Jan 14, 2023
1 parent e77085f commit 4ddc75b
Showing 12 changed files with 49 additions and 3 deletions.
13 changes: 13 additions & 0 deletions docs/en/configs/CAL/README.md
@@ -0,0 +1,13 @@
# CAL

[CAL](https://ieeexplore.ieee.org/document/9710619)

## Abstract

Attention mechanism has demonstrated great potential in fine-grained visual recognition tasks. In this paper, we present a counterfactual attention learning method to learn more effective attention based on causal inference. Unlike most existing methods that learn visual attention based on conventional likelihood, we propose to learn the attention with counterfactual causality, which provides a tool to measure the attention quality and a powerful supervisory signal to guide the learning process. Specifically, we analyze the effect of the learned visual attention on network prediction through counterfactual intervention and maximize the effect to encourage the network to learn more useful attention for fine-grained image recognition. Empirically, we evaluate our method on a wide range of fine-grained recognition tasks where attention plays a crucial role, including fine-grained image categorization, person re-identification, and vehicle re-identification. The consistent improvement on all benchmarks demonstrates the effectiveness of our method.
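
The key step can be sketched in a few lines of PyTorch: predict once with the learned attention, once with a random "counterfactual" attention, and supervise the difference between the two predictions so that the attention has a measurable causal effect on the output. The tensor shapes, the attention-pooling step, and the use of random attention as the counterfactual baseline are assumptions for illustration, not FGVCLib's actual API.

```python
import torch
import torch.nn.functional as F

def counterfactual_attention_loss(features, attention, classifier, target):
    # Assumed shapes: features (B, C, H, W), attention (B, M, H, W);
    # classifier maps a flattened (M * C)-dim vector to class logits.
    b, c, h, w = features.shape

    def predict(att):
        # Attention pooling: average each attended feature over space.
        pooled = torch.einsum('bmhw,bchw->bmc', att, features) / (h * w)
        return classifier(pooled.flatten(1))

    logits = predict(attention)                       # factual prediction Y(A)
    logits_cf = predict(torch.rand_like(attention))   # counterfactual prediction with random attention
    effect = logits - logits_cf                       # effect of the learned attention on the prediction

    # Supervising both the factual prediction and its effect rewards attention
    # that is genuinely useful, not merely plausible.
    return F.cross_entropy(logits, target) + F.cross_entropy(effect, target)
```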

## Framework

<div align=center>
<img src="https://github.com/YangYuqi317/FGVCLib_docs/blob/main/src/mcl_method.jpg?raw=true"/>
</div>
21 changes: 21 additions & 0 deletions docs/en/configs/PIM/README.md
@@ -0,0 +1,21 @@
# PIM

[PIM](https://arxiv.org/abs/2202.03822)

## Abstract

Visual classification can be divided into coarse-grained and fine-grained classification. Coarse-grained classification represents categories with a large degree of dissimilarity, such as the classification of cats and dogs, while fine-grained classification represents classifications with a large degree of similarity, such as cat species, bird species, and the makes or models of vehicles. Unlike coarse-grained visual classification, fine-grained visual classification often requires professional experts to label data, which makes data more expensive. To meet this challenge, many approaches propose to automatically find the most discriminative regions and use local features to provide more precise features. These approaches only require image-level annotations, thereby reducing the cost of annotation. However, most of these methods require two- or multi-stage architectures and cannot be trained end-to-end. Therefore, we propose a novel plug-in module that can be integrated into many common backbones, including CNN-based or Transformer-based networks, to provide strongly discriminative regions. The plug-in module can output pixel-level feature maps and fuse filtered features to enhance fine-grained visual classification. Experimental results show that the proposed plug-in module outperforms state-of-the-art approaches and significantly improves the accuracy to 92.77% and 92.83% on CUB200-2011 and NABirds, respectively.

## Framework

<div align=center>
<img src="https://github.com/YangYuqi317/FGVCLib_docs/blob/main/src/mcl_loss.jpg?raw=true"/>
</div>

Schematic flow of the proposed plug-in module. Backbone Block<sub>k</sub> represents the k-th block in the backbone network. When the image is input to the network, the feature map output by each block is fed into the Weakly Supervised Selector to screen out areas with strong discrimination or areas that are less related to classification. Finally, a Combiner is used to fuse the features of the selected results to obtain the prediction results. L<sub>final</sub> represents the loss function.
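
The selector-and-combiner flow above can be sketched in PyTorch as follows. This is only an illustration under stated assumptions: each spatial cell is scored by a plain linear classifier, the top-k most confident cells are kept per block, and the combiner is a mean-pooled linear head. The class names, the scoring rule, and the shared channel width across blocks are not FGVCLib's actual implementation; the paper fuses the per-block selections with a graph-based combiner.

```python
import torch
import torch.nn as nn

class WeaklySupervisedSelector(nn.Module):
    """Scores every spatial cell of a block's feature map and keeps the
    top-k most confident cells (a simplified sketch of the selector)."""
    def __init__(self, in_channels: int, num_classes: int, num_select: int):
        super().__init__()
        self.scorer = nn.Linear(in_channels, num_classes)
        self.num_select = num_select

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)                 # (B, H*W, C)
        conf = self.scorer(tokens).softmax(-1).max(-1).values    # peak class confidence per cell
        idx = conf.topk(self.num_select, dim=1).indices          # most discriminative cells
        return torch.gather(tokens, 1, idx.unsqueeze(-1).expand(-1, -1, c))

class Combiner(nn.Module):
    """Fuses the selected cells from every block into one prediction
    (here a simple average followed by a linear head)."""
    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.head = nn.Linear(in_channels, num_classes)

    def forward(self, selected_per_block):
        fused = torch.cat(selected_per_block, dim=1).mean(dim=1)  # (B, C)
        return self.head(fused)
```
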
13 changes: 13 additions & 0 deletions docs/en/configs/TransFG/README.md
@@ -0,0 +1,13 @@
# TransFG

[TransFG](https://arxiv.org/abs/2103.07976)

## Abstract

Fine-grained visual classification (FGVC), which aims at recognizing objects from subcategories, is a challenging task due to inherently subtle inter-class differences. Most existing works tackle this problem by reusing the backbone network to extract features of detected discriminative regions, which complicates the pipeline and pushes the proposed regions to cover most parts of the objects. Recently, the vision transformer (ViT) has shown strong performance on conventional classification tasks, and its self-attention mechanism links every patch token to the classification token. TransFG first evaluates the ViT framework in the fine-grained recognition setting and then introduces a Part Selection Module that can be applied to most transformer architectures: all raw attention weights are integrated into a single attention map that guides the network to select discriminative image patches and compute their relations. A contrastive loss further enlarges the distance between feature representations of confusing classes. The augmented transformer, named TransFG, achieves state-of-the-art performance on five popular fine-grained benchmarks.
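
A minimal sketch of the part-selection idea: multiply the per-layer attention matrices together, read off how strongly the class token attends to each patch, and keep only the most-attended patch tokens for the final transformer layer. The tensor shapes, the head averaging, and the top-k rule are simplifying assumptions; the paper's module selects one token per attention head rather than a global top-k.

```python
import torch

def select_discriminative_tokens(attn_weights, tokens, num_select: int = 12):
    # attn_weights: list of per-layer attention tensors, each (B, heads, N, N)
    # tokens:       patch embeddings (B, N, D) with the class token at index 0
    # Integrate raw attention across layers by matrix multiplication so the
    # result reflects how information flows into the class token.
    joint = attn_weights[0].mean(dim=1)                        # average over heads -> (B, N, N)
    for layer_attn in attn_weights[1:]:
        joint = torch.bmm(layer_attn.mean(dim=1), joint)

    cls_to_patches = joint[:, 0, 1:]                           # class-token attention over patches
    idx = cls_to_patches.topk(num_select, dim=1).indices + 1   # shift past the class token
    picked = torch.gather(tokens, 1, idx.unsqueeze(-1).expand(-1, -1, tokens.size(-1)))
    return torch.cat([tokens[:, :1], picked], dim=1)           # class token + selected parts
```
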

## Framework

<div align=center>
<img src="https://github.com/YangYuqi317/FGVCLib_docs/blob/main/src/mcl_method.jpg?raw=true"/>
</div>
2 changes: 1 addition & 1 deletion docs/en/configs/api-net/README.md
@@ -8,6 +8,6 @@ In order to effectively identify contrastive clues among highly-confused categor

## Framework
<div align=center>
<img src="https://github.com/YangYuqi317/FGVCLib_docs/blob/main/src/mcl_loss.jpg?raw=true"/>
<img src="https://github.com/YangYuqi317/PRIS-CV_FGVCLib/blob/main/docs/en/configs/framework/API-Net%20Framework.png?raw=true"/>
</div>

1 change: 0 additions & 1 deletion docs/en/configs/baseline_resnet50/README.md

This file was deleted.

Binary file added docs/en/configs/framework/CAL framework.png
Binary file added docs/en/configs/framework/MCL_1.png
Binary file added docs/en/configs/framework/MCL_2.png
Binary file added docs/en/configs/framework/PIM framework.png
Binary file added docs/en/configs/framework/PMG_1.png
Binary file added docs/en/configs/framework/PMG_2.png
2 changes: 1 addition & 1 deletion docs/en/configs/mcl_vgg16/README.md
@@ -4,7 +4,7 @@

## Abstract

In this paper, wes show that it is possible to cultivate subtle details without the need for overly complicated network designs or training mechanisms – a single loss is all it takes. The main trick lies with how we delve into individual feature channels early on, as opposed to the convention of starting from a consolidated feature map. The proposed loss function, termed as mutual-channel loss (MC-Loss), consists of two channel-specific components: a discriminality component and a diversity component. The discriminality component forces all feature channels belonging to the same class to be discriminative, through a novel channel-wise attention mechanism. The diversity component additionally constraints channels so that they become mutually exclusive across the spatial dimension. The end result is therefore a set of feature channels, each of which reflects different locally discriminative regions for a specific class. The MC-Loss can be trained end-to-end, without the need for any bounding-box/part annotations, and yields highly discriminative regions during inference.
In this paper, we show that it is possible to cultivate subtle details without the need for overly complicated network designs or training mechanisms – a single loss is all it takes. The main trick lies with how we delve into individual feature channels early on, as opposed to the convention of starting from a consolidated feature map. The proposed loss function, termed as mutual-channel loss (MC-Loss), consists of two channel-specific components: a discriminality component and a diversity component. The discriminality component forces all feature channels belonging to the same class to be discriminative, through a novel channel-wise attention mechanism. The diversity component additionally constraints channels so that they become mutually exclusive across the spatial dimension. The end result is therefore a set of feature channels, each of which reflects different locally discriminative regions for a specific class. The MC-Loss can be trained end-to-end, without the need for any bounding-box/part annotations, and yields highly discriminative regions during inference.
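
A minimal sketch of the loss under stated assumptions: the backbone's last feature map is assumed to carry `xi` channels per class, the channel-wise attention is approximated by random channel masking, and the diversity term is normalized by `xi`. The class name, hyper-parameter names, and weighting are illustrative, not FGVCLib's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MutualChannelLossSketch(nn.Module):
    def __init__(self, num_classes: int, xi: int = 2, div_weight: float = 10.0):
        super().__init__()
        self.num_classes = num_classes
        self.xi = xi                    # channels allocated to each class (assumption)
        self.div_weight = div_weight    # weight of the diversity component (assumption)

    def forward(self, feat: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feat.shape         # expects c == num_classes * xi
        groups = feat.reshape(b, self.num_classes, self.xi, h * w)

        # Discriminality: randomly mask channels within each class group, take
        # the cross-channel max, pool spatially, and classify the result.
        if self.training:
            mask = (torch.rand(b, self.num_classes, self.xi, 1, device=feat.device) > 0.5).float()
            groups_dis = groups * mask
        else:
            groups_dis = groups
        dis_logits = groups_dis.max(dim=2).values.mean(dim=2)     # (B, num_classes)
        l_dis = F.cross_entropy(dis_logits, target)

        # Diversity: push channels of the same class to peak at different
        # locations, i.e. maximize the spatial area they jointly cover.
        spatial = F.softmax(groups, dim=3)                         # per-channel spatial softmax
        covered = spatial.max(dim=2).values.sum(dim=2)             # covered area per class, in [1, xi]
        l_div = 1.0 - covered.mean() / self.xi

        return l_dis + self.div_weight * l_div
```

The sketch only shows how the discriminality and diversity components act on per-class channel groups; the paper's exact channel-wise attention and its weighting against the ordinary cross-entropy on the fused feature are omitted.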

<div align=center>
<img src="https://github.com/YangYuqi317/FGVCLib_docs/blob/main/src/mcl_method.jpg?raw=true"/>
