Update the README
YangYuqi317 authored and YangYuqi317 committed Jan 14, 2023
1 parent 59399ff commit e77085f
Showing 3 changed files with 57 additions and 23 deletions.
67 changes: 44 additions & 23 deletions README.md
@@ -1,6 +1,6 @@
## Introduction

FGVCLib is an open-source and well documented library for Fine-grained Visual Analysis. It is based on Pytorch with performance and friendly API. Our code is pythonic, and the design is consistent with torchvision. You can easily develop new algorithms, or readily apply existing algorithms.
FGVCLib is an open-source, well-documented library for Fine-grained Visual Classification. It is built on PyTorch, with an emphasis on performance and a friendly API. Our code is pythonic, and the design is consistent with torchvision. You can easily develop new algorithms or readily apply existing ones.
The branch works with **torch 1.12.1**, **torchvision 0.13.1**.

<details open>
@@ -11,27 +11,27 @@ The branch works with **torch 1.12.1**, **torchvision 0.13.1**.
We decompose the classification framework into different components, so one can easily construct a customized fine-grained classification pipeline by combining different modules (a minimal sketch follows this feature list).

- **State of the art**
We implement state-of-the-art methods by the FGVCLib, [PMG](https://arxiv.org/abs/2003.03836v3), [MCL](https://arxiv.org/abs/2002.04264).
We implement state-of-the-art methods in FGVCLib: [PMG](https://arxiv.org/abs/2003.03836v3), [MCL](https://arxiv.org/abs/2002.04264), [API-Net](https://arxiv.org/abs/2002.10191), [CAL](https://ieeexplore.ieee.org/document/9710619), [TransFG](https://arxiv.org/abs/2103.07976), [PIM](https://arxiv.org/abs/2202.03822).
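
To make the modular-design claim concrete, here is a minimal, hypothetical sketch in plain PyTorch/torchvision of how a fine-grained classifier can be assembled from interchangeable parts. The `FGVCModel` class and its `backbone`/`encoder`/`head` slots are illustrative assumptions for this README, not FGVCLib's actual interfaces.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

# Hypothetical sketch of the modular-design idea: a fine-grained classifier
# assembled from interchangeable parts. Class and attribute names here are
# illustrative assumptions, not FGVCLib's actual interfaces.
class FGVCModel(nn.Module):
    def __init__(self, num_classes: int = 200):
        super().__init__()
        backbone = resnet50(weights=None)
        # Keep only the convolutional feature extractor (drop avgpool + fc).
        self.backbone = nn.Sequential(*list(backbone.children())[:-2])
        self.encoder = nn.AdaptiveAvgPool2d(1)    # swappable pooling/encoder
        self.head = nn.Linear(2048, num_classes)  # swappable classifier head

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(x)                  # (B, 2048, H, W)
        pooled = self.encoder(feats).flatten(1)   # (B, 2048)
        return self.head(pooled)                  # (B, num_classes)

model = FGVCModel(num_classes=200)                # e.g. CUB-200-2011
logits = model(torch.randn(2, 3, 224, 224))
print(logits.shape)                               # torch.Size([2, 200])
```

Swapping the backbone, encoder, or head is then a one-line change, which is the kind of recombination the config files listed below select between.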


## Installation

Please refer to [Installation](docs/get_started.md/#installation) for installation instructions.
Please refer to [Installation](https://pris-cv-fgvclib.readthedocs.io/en/latest/get_started.html) for installation instructions.

## Getting started

Please see [get_started.md](docs/get_started.md) for the basic usage of FGVCLib. We provide the tutorials for:
Please see [get_started.md](https://pris-cv-fgvclib.readthedocs.io/en/latest/get_started.html) for the basic usage of FGVCLib. We provide tutorials for:

- [with existing data existing model](docs/1_exist_data_model.md)
- [with existing data new model](docs/2_exist_data_new_model.md)
- [learn about apis](docs/tutorials/tutorial1_apis.md)
- [learn about configs](docs/tutorials/tutorial2_configs.md)
- [learn about criterions](docs/tutorials/tutorial3_criterions.md)
- [learn about datasets](docs/tutorials/tutorial4_datasets.md)
- [learn about metrics](docs/tutorials/tutorial5_metrics.md)
- [learn about model](docs/tutorials/tutorial6_model.md)
- [learn about transforms](docs/tutorials/tutorial7_transform.md)
- [learn about the tools](docs/useful_tools.md)
- [with existing data and an existing model](https://pris-cv-fgvclib.readthedocs.io/en/latest/1_exist_data_model.html)
- [with existing data and a new model](https://pris-cv-fgvclib.readthedocs.io/en/latest/2_exist_data_new_model.html)
- [learn about apis](https://pris-cv-fgvclib.readthedocs.io/en/latest/tutorials/tutorial1_apis.html)
- [learn about configs](https://pris-cv-fgvclib.readthedocs.io/en/latest/tutorials/tutorial2_configs.html)
- [learn about criterions](https://pris-cv-fgvclib.readthedocs.io/en/latest/tutorials/tutorial3_criterions.html)
- [learn about datasets](https://pris-cv-fgvclib.readthedocs.io/en/latest/tutorials/tutorial4_datasets.html)
- [learn about metrics](https://pris-cv-fgvclib.readthedocs.io/en/latest/tutorials/tutorial5_metrics.html)
- [learn about model](https://pris-cv-fgvclib.readthedocs.io/en/latest/tutorials/tutorial6_model.html)
- [learn about transforms](https://pris-cv-fgvclib.readthedocs.io/en/latest/tutorials/tutorial7_transform.html)
- [learn about the tools](https://pris-cv-fgvclib.readthedocs.io/en/latest/useful_tools.html)


</details>
@@ -54,10 +54,14 @@ Please see [get_started.md](docs/get_started.md) for the basic usage of FGVCLib.
<tr valign="top">
<td>
<ul>
<li><a href="configs/baseline_resnet50">Baseline_ResNet50</a></li>
<li><a href="configs/mcl_vgg16">MCL_VGG16</a></li>
<li><a href="configs/pmg_resnet50">PMG_ResNet50</a></li>
<li><a href="configs/pmg_v2_resnet50">PMG_V2_ResNet50</a></li>
<li><a href="configs/resnet">Baseline_ResNet50</a></li>
<li><a href="configs/mutual_channel_loss">Mutual-Channel-Loss</a></li>
<li><a href="configs/progressive_multi_granularity_learning">PMG-ResNet50</a></li>
<li><a href="configs/progressive_multi_granularity_learning">PMG_V2_ResNet50</a></li>
<li><a href="configs/">API-Net</a></li>
<li><a href="configs/">CAL</a></li>
<li><a href="configs/">TransFG</a></li>
<li><a href="configs/">PIM</a></li>
</ul>
</td>
<td>
@@ -118,12 +122,29 @@ Please see [get_started.md](docs/get_started.md) for the basic usage of FGVCLib.
</td>
<td>
<ul>
<li>Baseline</li>
<li><a href="configs/mcl_vgg16/README.md">MCL</a></li>
<li><a href="configs/pmg_resnet50/README.md">PMG</li>
<li>PMG_v2</li>
<li>Baseline_ResNet50</li>
<li><a href="configs/mutual_channel_loss/README.md">Mutual-Channel-Loss</a></li>
<li><a href="configs/progressive_multi_granularity_learning/README.md">PMG-ResNet50</a></li>
<li>PMG_V2_ResNet50</li>
<li><a href="configs/">API-Net</a></li>
<li><a href="configs/">CAL</a></li>
<li><a href="configs/">TransFG</a></li>
<li><a href="configs/">PIM</a></li>
</ul>
</td>
</tr>
</tbody>
</table>

## Results of the SOTA Methods
We used FGVCLib to reproduce the state-of-the-art methods; the table below shows the results of our experiments.

| SOTA    | Result of the paper | Result of the official code | Result of FGVCLib |
| ------- | ------------------- | --------------------------- | ----------------- |
| MCL     |                     |                             |                   |
| PMG     |                     |                             |                   |
| PMG-v2  |                     |                             |                   |
| API-Net | 88.1                | 87.2                        | 86.8              |
| CAL     | 90.6                | 89.6                        | 89.5              |
| TransFG | 91.7                | 91.1                        | 89.3              |
| PIM     | 92.8                | 91.9                        | 91.4              |
13 changes: 13 additions & 0 deletions docs/en/configs/api-net/README.md
@@ -0,0 +1,13 @@
# API-Net

[API-Net](https://arxiv.org/abs/2002.10191)

## Introduction

To effectively identify contrastive clues among highly confused categories, the authors propose a simple but effective Attentive Pairwise Interaction Network (API-Net), which progressively recognizes a pair of fine-grained images through interaction. The network first learns a mutual vector that captures the semantic differences within the input pair, then compares this mutual vector with each individual vector to highlight their respective discriminative cues. The authors also introduce a score-ranking regularization to promote the priorities of these features. For more details, please refer to the [paper](https://arxiv.org/abs/2002.10191).
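
The following is a rough PyTorch sketch of that pairwise interaction as described above. It assumes pooled 2048-d backbone features; the layer sizes, names, and margin value are illustrative assumptions, not the official implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Rough sketch of API-Net's attentive pairwise interaction, following the
# paper's description. Dimensions and names are illustrative assumptions.
class PairwiseInteraction(nn.Module):
    def __init__(self, feat_dim: int = 2048, num_classes: int = 200):
        super().__init__()
        # Maps the concatenated pair of features to a mutual vector x_m.
        self.mutual = nn.Sequential(
            nn.Linear(feat_dim * 2, feat_dim),
            nn.ReLU(inplace=True),
        )
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, x1: torch.Tensor, x2: torch.Tensor):
        x_m = self.mutual(torch.cat([x1, x2], dim=1))  # mutual vector
        g1 = torch.sigmoid(x_m * x1)                   # gate for image 1
        g2 = torch.sigmoid(x_m * x2)                   # gate for image 2
        # Four attentive features: each image highlighted by its own gate
        # ("self") and by its partner's gate ("other").
        x1_self, x1_other = x1 + x1 * g1, x1 + x1 * g2
        x2_self, x2_other = x2 + x2 * g2, x2 + x2 * g1
        return [self.classifier(f) for f in (x1_self, x1_other, x2_self, x2_other)]

def score_ranking_loss(logits_self, logits_other, labels, margin: float = 0.05):
    """Hinge-style regularizer: the true-class score from the self-attended
    feature should exceed the score from the other-attended feature."""
    p_self = F.softmax(logits_self, dim=1).gather(1, labels[:, None]).squeeze(1)
    p_other = F.softmax(logits_other, dim=1).gather(1, labels[:, None]).squeeze(1)
    return F.relu(p_other - p_self + margin).mean()
```

In training, each of the four logits would receive a cross-entropy loss against its image's label, with `score_ranking_loss` applied per image on top.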

## Framework
<div align=center>
<img src="https://github.com/YangYuqi317/FGVCLib_docs/blob/main/src/mcl_loss.jpg?raw=true"/>
</div>

Binary file added docs/en/configs/framework/API-Net Framework.png
