Uni-Evaluator: A unified interactive model evaluation for classification, object detection, and instance segmentation in computer vision

Uni-Evaluator is a visual analysis tool that supports unified model evaluation across different computer vision tasks, including classification, object detection, and instance segmentation. More information, including a video and a live demo, can be found here.


Following the case study shown in our paper, we further cleaned the annotations of the COCO validation dataset. The cleaned COCO-format annotation file can be downloaded via this link.
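To sanity-check the downloaded file, a minimal sketch (the filename `annotations.json` is a placeholder; substitute the actual name of the downloaded file):

```shell
# Sanity-check a COCO-format annotation file.
# "annotations.json" is a placeholder for the downloaded file's name.
FILE=annotations.json
if [ -f "$FILE" ]; then
    python3 - "$FILE" <<'EOF'
import json, sys

coco = json.load(open(sys.argv[1]))
# COCO-format files carry three top-level lists: images, annotations, categories
for key in ("images", "annotations", "categories"):
    print(key, len(coco[key]))
EOF
else
    echo "$FILE not found"
fi
```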

Quick start

1. Download repo

git clone https://github.com/thu-vis/Uni-Evaluator.git
cd Uni-Evaluator

2. Setup environment

(This repo has been tested with Node v16 and Python 3.8 on Ubuntu.)
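Before installing, it may help to confirm the local toolchain roughly matches the tested versions (a generic check, nothing Uni-Evaluator-specific):

```shell
# Confirm the toolchain roughly matches the tested versions (Node v16, Python 3.8)
node --version 2>/dev/null || echo "node not found; install Node v16"
python3 --version || echo "python3 not found; install Python 3.8"
```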

a. Set up the frontend environment

npm install -g yarn
cd frontend
yarn

b. Set up the backend environment

# install the packages from requirements.txt one at a time, in order
cd ../backend
cat requirements.txt | xargs -n 1 pip install

# install RangeTree
cd ./data/RangeQuery
python setup.py build_ext --inplace
cd ../../

# install fastlapjv
git clone git@github.com:thu-vis/fast-lapjv.git
cd fast-lapjv/
python setup.py install --user
cd ../

# install faiss (you can also follow https://github.com/facebookresearch/faiss/blob/main/INSTALL.md)
conda install -c pytorch faiss-gpu
# or
conda install -c pytorch faiss-cpu

3. Download data

For demonstration, we provide the COCO test data used in our case study, which can be downloaded here. It corresponds to the --dataPath argument below.
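The repository does not spell out the exact directory layout; a plausible sketch, assuming the archive is extracted under a local `data/` directory (the names `data` and `COCO` here are illustrative, not prescribed by the repo):

```shell
# Illustrative layout: --dataPath points at the parent directory and
# --dataName selects the sub-directory holding one dataset.
mkdir -p data/COCO
# extract the downloaded archive into data/COCO, then start the backend with:
#   python server.py --dataPath ./data --dataName COCO ...
```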

4. Run

# start backend
cd backend
# --dataPath <path>   directory containing sub-directories with training or test data
# --seg               add this flag for the segmentation task
# --dataName <name>   the name of the dataset, e.g., "COCO", "iSAID"
python server.py \
    --dataPath <path> \
    --seg \
    --dataName <name> \
    --host <host> \
    --port <port>
cd ../

# start frontend
cd frontend
yarn start
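To run both pieces from a single shell, one option is to background the server first; a sketch using the same placeholders as above (`backend.log` is a hypothetical log file name, not something the repo defines):

```shell
# start the backend in the background, logging to backend.log (hypothetical name)
cd backend
nohup python server.py --dataPath <path> --dataName <name> \
    --host <host> --port <port> > backend.log 2>&1 &
cd ../frontend
# run the frontend dev server in the foreground
yarn start
```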

Citation

If you use this code or the cleaned annotations of COCO validation data for your research, please consider citing:

@article{chen2023unified,
  title={A unified interactive model evaluation for classification, object detection, and instance segmentation in computer vision},
  author={Chen, Changjian and Guo, Yukai and Tian, Fengyuan and Liu, Shilong and Yang, Weikai and Wang, Zhaowei and Wu, Jing and Su, Hang and Pfister, Hanspeter and Liu, Shixia},
  journal={IEEE Transactions on Visualization and Computer Graphics},
  year={2023},
  publisher={IEEE}
}
