This repository is cloned from FPT-AI.
Updates:
- src/: contains all utility files for EDA.
- eda/: contains all files for EDA (NOTE: don't commit notebook files).
- download_data.sh: bash script for downloading the dataset.
Installation:
conda create -n fptai python=3.7
conda activate fptai
git clone https://github.com/DatacollectorVN/fpt-ai-data-competition.git
pip install -r requirements.txt
Run the following to download the raw dataset:
bash download_data.sh
We use Streamlit to display images and check their annotations.
streamlit run eda/streamlit_annotations.py
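For reference, below is a minimal sketch of what such a viewer can look like. This is not the actual eda/streamlit_annotations.py; it assumes YOLO-format labels (class x_center y_center width height, normalized to [0, 1]) and hypothetical dataset/images and dataset/labels folders.

```python
# Minimal Streamlit annotation viewer sketch (NOT the actual eda/streamlit_annotations.py).
# Assumes one "class x_center y_center width height" line per box, normalized to [0, 1].
import os
import cv2
import streamlit as st

IMG_DIR = "dataset/images/train"    # hypothetical paths, adjust to your layout
LABEL_DIR = "dataset/labels/train"

name = st.sidebar.selectbox("Image", sorted(os.listdir(IMG_DIR)))
img = cv2.cvtColor(cv2.imread(os.path.join(IMG_DIR, name)), cv2.COLOR_BGR2RGB)
h, w = img.shape[:2]

label_path = os.path.join(LABEL_DIR, os.path.splitext(name)[0] + ".txt")
if os.path.exists(label_path):
    for line in open(label_path):
        cls, xc, yc, bw, bh = line.split()
        xc, yc, bw, bh = float(xc) * w, float(yc) * h, float(bw) * w, float(bh) * h
        x1, y1 = int(xc - bw / 2), int(yc - bh / 2)
        x2, y2 = int(xc + bw / 2), int(yc + bh / 2)
        cv2.rectangle(img, (x1, y1), (x2, y2), (255, 0, 0), 2)
        cv2.putText(img, cls, (x1, max(y1 - 5, 0)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 0, 0), 2)

st.image(img, caption=name)
```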
- Increase brightness
python eda/increase_brightness.py
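One common way to increase brightness (not necessarily what eda/increase_brightness.py does) is to add a constant to the V channel in HSV space; the path and the `value` knob below are assumptions.

```python
# Hedged sketch of a brightness increase: shift the V channel in HSV space.
import cv2
import numpy as np

def increase_brightness(img_bgr, value=40):
    """Return a brightened copy of a BGR image; `value` is a hypothetical knob."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    v = np.clip(v.astype(np.int16) + value, 0, 255).astype(np.uint8)
    return cv2.cvtColor(cv2.merge((h, s, v)), cv2.COLOR_HSV2BGR)

img = cv2.imread("dataset/images/train/example.jpg")  # hypothetical path
cv2.imwrite("example_bright.jpg", increase_brightness(img, value=40))
```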
- Enhance people's faces
python eda/enhence_face.py
NOTE: Remember to set the config correctly.
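One simple way to enhance faces is to detect them and apply unsharp masking to the face region; this is an assumption for illustration, not necessarily what eda/enhence_face.py does, and the image path is hypothetical.

```python
# Sketch: detect faces with a Haar cascade, then sharpen each face via unsharp masking.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

img = cv2.imread("dataset/images/train/example.jpg")  # hypothetical path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
    face = img[y:y + h, x:x + w]
    blurred = cv2.GaussianBlur(face, (0, 0), sigmaX=3)
    # Unsharp masking: 1.5 * original - 0.5 * blurred
    img[y:y + h, x:x + w] = cv2.addWeighted(face, 1.5, blurred, -0.5, 0)

cv2.imwrite("example_enhanced.jpg", img)
```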
- Mosaic | Flip | Rotate | Mixup
python src/{augmentation_name}_augmentation.py
NOTE: Change the dataset path and the number of images to generate.
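As a concrete illustration of one of these transforms, below is a sketch of a horizontal flip that also updates YOLO-format labels. It is not the code in src/flip_augmentation.py; the path and example box are made up.

```python
# Sketch of a horizontal-flip augmentation with YOLO-format labels
# (normalized x_center y_center width height).
import cv2

def flip_horizontal(img, labels):
    """labels: list of (cls, xc, yc, w, h) with coordinates normalized to [0, 1]."""
    flipped = cv2.flip(img, 1)  # flip around the vertical axis
    # Only x_center changes; y_center, width, and height are unaffected.
    new_labels = [(c, 1.0 - xc, yc, w, h) for (c, xc, yc, w, h) in labels]
    return flipped, new_labels

img = cv2.imread("dataset/images/train/example.jpg")  # hypothetical path
labels = [(0, 0.30, 0.40, 0.20, 0.25)]                # toy example box
aug_img, aug_labels = flip_horizontal(img, labels)
cv2.imwrite("example_flip.jpg", aug_img)
```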
- Auto augmentation based on the YOLOv5 source code
python auto_augmentation.py
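For context, YOLOv5-style training applies random HSV jitter among its built-in augmentations. The sketch below follows that idea (in the spirit of YOLOv5's augment_hsv); it is not the repo's auto_augmentation.py, and the gains and path are assumptions.

```python
# Sketch of random HSV jitter in the spirit of YOLOv5's augment_hsv.
import cv2
import numpy as np

def augment_hsv(img_bgr, hgain=0.015, sgain=0.7, vgain=0.4):
    """Randomly scale hue, saturation, and value (YOLOv5-like default gains)."""
    r = np.random.uniform(-1, 1, 3) * [hgain, sgain, vgain] + 1  # random gains
    hue, sat, val = cv2.split(cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV))
    x = np.arange(0, 256, dtype=r.dtype)
    lut_hue = ((x * r[0]) % 180).astype(np.uint8)
    lut_sat = np.clip(x * r[1], 0, 255).astype(np.uint8)
    lut_val = np.clip(x * r[2], 0, 255).astype(np.uint8)
    hsv = cv2.merge((cv2.LUT(hue, lut_hue), cv2.LUT(sat, lut_sat), cv2.LUT(val, lut_val)))
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)

aug = augment_hsv(cv2.imread("dataset/images/train/example.jpg"))  # hypothetical path
```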
Val:
Public_test:
For more details, see DRIVE-CHUNG.
Training:
- On Google Colab: (Note: make a copy in your Drive)
- On server:
python train.py --batch-size 32 --device 0 --name <version_name>
Note: Change the number of epochs to 70 in config/train_cfg.yaml
Evaluation:
python val.py --weights results/train/<version_name>/weights/best.pt --task test --name <version_name> --batch-size 64 --device 0
- <task>: test, val, or train (the value passed to --task).
- Results are saved at results/evaluate/<task>/<version_name>.
Detection:
python detect.py --weights results/train/<version_name>/weights/best.pt --source <path_to_folder> --dir <save_dir> --device 0
- Results are saved at <save_dir>.
Note:
- <path_to_folder>: folder containing the images to predict (usually ./dataset/public_test).
- <save_dir>: path where the predicted images are saved.
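As an optional sanity check of the trained weights outside detect.py, a YOLOv5-compatible checkpoint (this project is derived from YOLOv5) can usually be loaded through torch.hub. The image path below is hypothetical and <version_name> is the usual placeholder.

```python
# Quick check of the trained weights via torch.hub (assumes a YOLOv5-compatible checkpoint).
import torch

model = torch.hub.load("ultralytics/yolov5", "custom",
                       path="results/train/<version_name>/weights/best.pt")
results = model("dataset/public_test/example.jpg")  # hypothetical image path
results.print()   # summary of detections
results.save()    # saves the annotated image to runs/detect/exp*
```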
In the final standings, our team finished 15th out of 394 participating teams. We are very happy with this result and will try to do better in upcoming competitions.