## Introduction

vedacls is an open source classification toolbox based on PyTorch.

## License

This project is released under the Apache 2.0 license.

## Requirements
- Linux
- Python 3.6+
- PyTorch 1.4.0 or higher
- CUDA 9.0 or higher
We have tested the following versions of OS and software:
- OS: Ubuntu 16.04.6 LTS
- CUDA: 10.2
- PyTorch: 1.4.0
- Python: 3.6.9
## Installation

- Create a conda virtual environment and activate it.
conda create -n vedacls python=3.6.9 -y
conda activate vedacls
- Install PyTorch and torchvision following the official instructions, e.g.,
conda install pytorch torchvision -c pytorch
- Clone the vedacls repository.
git clone https://github.com/Media-Smart/vedacls.git
cd vedacls
vedacls_root=${PWD}
- Install dependencies.
pip install -r requirements.txt
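With dependencies installed, a quick sanity check (our suggestion, not part of the official steps) confirms that PyTorch, torchvision, and CUDA are wired up correctly:

```python
# Run inside the activated vedacls environment.
import torch
import torchvision

print(torch.__version__)           # expect 1.4.0 or higher
print(torchvision.__version__)
print(torch.cuda.is_available())   # should print True on a CUDA machine
```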
## Prepare data

vedacls expects datasets organized in the following directory structure:
data/
├── train
│   ├── 0
│   │   ├── XXX.jpg
│   │   └── ...
│   ├── 1
│   ├── 2
│   └── ...
├── val
│   ├── 0
│   ├── 1
│   ├── 2
│   └── ...
└── test
    ├── 0
    ├── 1
    ├── 2
    └── ...
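Each numbered sub-folder (0, 1, 2, ...) holds the images of one class. This is the same class-per-subfolder convention used by torchvision's ImageFolder, so a prepared dataset can be sanity-checked with a short script (a sketch; assumes the tree above sits in the current directory):

```python
from torchvision import datasets

# ImageFolder treats each immediate sub-folder of data/train as one class.
train_set = datasets.ImageFolder('data/train')
print(train_set.classes)                  # e.g. ['0', '1', '2']
print(len(train_set), 'training images')
```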
## Train

- Config

Modify the configuration as needed in the config file, e.g. configs/resnet18.py (a sketch of the kind of options involved appears at the end of this section).

- Run

python tools/train.py configs/resnet18.py

Snapshots and logs will be generated at ${vedacls_root}/workdir/resnet18.
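We do not reproduce configs/resnet18.py here; the excerpt below is purely hypothetical and only illustrates the kind of settings one typically adjusts before training (the names and values are assumptions, not the real keys):

```python
# Hypothetical config excerpt -- the actual keys in configs/resnet18.py
# may be named and structured differently.
data_root = 'data/'            # dataset laid out as in "Prepare data"
num_classes = 3                # one per class sub-folder (0, 1, 2, ...)
batch_size = 32
max_epochs = 100
lr = 0.001
workdir = 'workdir/resnet18'   # where snapshots and logs are written
```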
## Test

- Config

Modify the configuration as needed in the config file, e.g. configs/resnet18.py.

- Run

python tools/test.py configs/resnet18.py checkpoint_path

where checkpoint_path points to a snapshot saved during training, e.g. a .pth file under ${vedacls_root}/workdir/resnet18.
## Inference

- Config

Modify the configuration as needed in the config file, e.g. configs/resnet18.py.

- Run

python tools/inference.py configs/resnet18.py checkpoint_path image_path

where image_path is the image to classify (a plain-PyTorch sketch of what this step amounts to follows below).
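For intuition, the snippet below sketches what classification inference boils down to in plain PyTorch. It is not vedacls code: torchvision's resnet18 stands in for the trained model, and the 224x224 input size and ImageNet normalization are assumptions; tools/inference.py drives all of this from the config file instead.

```python
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models import resnet18

# Stand-in for the trained model; in vedacls the model is built from the
# config and its weights are loaded from checkpoint_path.
model = resnet18(num_classes=3)
model.eval()

# Assumed preprocessing (224x224 input, ImageNet statistics).
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open('some_image.jpg').convert('RGB')).unsqueeze(0)
with torch.no_grad():
    pred = model(img).argmax(dim=1)
print(pred.item())  # index of the predicted class folder (0, 1, 2, ...)
```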
## Deploy

- Install volksdep following the official instructions.

- Benchmark (optional)
python tools/deploy/benchmark.py configs/resnet18.py checkpoint_path image_path
More available arguments are detailed in tools/deploy/benchmark.py
- Export the model to ONNX or TensorRT engine format (a sketch of loading the exported ONNX model follows after this list)
python tools/deploy/export.py configs/resnet18.py checkpoint_path image_path out_model_path
More available arguments are detailed in tools/deploy/export.py
- Inference SDK
You can refer to FlexInfer for details.
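Once a model has been exported to ONNX it can be exercised without vedacls at all, for example with onnxruntime. The sketch below assumes the export wrote out_model.onnx and that the model takes a single 1x3x224x224 float input; both are assumptions about your export settings:

```python
import numpy as np
import onnxruntime as ort

# out_model.onnx is the hypothetical out_model_path used during export.
sess = ort.InferenceSession('out_model.onnx',
                            providers=['CPUExecutionProvider'])

inp = sess.get_inputs()[0]
print(inp.name, inp.shape)  # inspect the expected input

dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)  # assumed shape
logits = sess.run(None, {inp.name: dummy})[0]
print(logits.shape)  # (1, num_classes)
```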
## Contact

This repository is currently maintained by Chenhao Wang (@C-H-Wong), Hongxiang Cai (@hxcai), and Yichao Xiong (@mileistone).

## Credits

We got a lot of code from mmcv and mmdetection, thanks to open-mmlab.