Compare Model

Overview

Model Compare is a tool that runs a trained model on a given test dataset and calculates its mean average precision (mAP), in a simple and efficient way.
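For reference, the mAP reported by tools like this one follows the standard definition (a general formula, not something specific to this repository): the mean over classes of each class's average precision,

$$\mathrm{mAP} = \frac{1}{N}\sum_{c=1}^{N}\mathrm{AP}_c,$$

where $N$ is the number of classes and $\mathrm{AP}_c$ is the area under the precision-recall curve for class $c$.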

Getting Started

The instructions below will help you get, run, and improve the tool on your local machine for development and testing right after training your computer vision models.

Prerequisites

Make sure you have the following installed before proceeding:
  • Note: The supervision library only supports images in .jpg, .png, and .jpeg formats, and the extensions must be lowercase (e.g., example.png, not example.PNG). Adjust your images accordingly; a helper sketch for this follows below.
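Since mixed-case extensions are easy to miss, here is a minimal sketch (a hypothetical helper, not part of the repository) that lowercases image extensions in a folder and flags files in unsupported formats:

```python
# normalize_images.py -- hypothetical helper, not part of this repository.
# Renames image files so their extensions are lowercase, matching the
# formats supervision accepts (.jpg, .jpeg, .png), and warns about the rest.
from pathlib import Path

ALLOWED = {".jpg", ".jpeg", ".png"}

def normalize_extensions(image_dir: str) -> None:
    for path in Path(image_dir).iterdir():
        if not path.is_file():
            continue
        suffix = path.suffix.lower()
        if suffix in ALLOWED and path.suffix != suffix:
            # e.g. photo.PNG -> photo.png
            path.rename(path.with_suffix(suffix))
        elif suffix not in ALLOWED:
            print(f"unsupported format, convert manually: {path.name}")

if __name__ == "__main__":
    normalize_extensions("images")  # adjust to your own images folder
```

Run it once over your test images folder before launching the tool, adjusting the folder name to match your `image_path`.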

Installation

```bash
# Clone the repository
git clone https://github.com/mapilio/compare-model.git

# Jump into the project directory
cd compare-model

# Create a virtual environment (if you don't have the virtualenv package,
# install it with `pip install virtualenv`; alternatively, use the built-in
# `python -m venv compare-model-venv`)
virtualenv compare-model-venv

# Activate the virtual environment (on Windows use `compare-model-venv\Scripts\activate`)
source compare-model-venv/bin/activate

# Install dependencies
pip install -r requirements.txt
```

Usage

  • Pass a config file holding all of your settings to the model compare tool:

```bash
python main.py --config config.yaml
```
  • The config file exposes the following parameters:

```yaml
model_name: yolov5 # trained model name (currently only 'yolov5' and 'yolov8' are supported)
model_path: "example_model.pt" # trained model weights
image_path: "/images" # path to ground-truth images used to validate the trained model
project_name: "example-model-v-x" # trained project name
project_folder_name: "example-model" # project folder name
conf_thresh: 0.5 # confidence threshold for model predictions
write_results: False # whether to save prediction results
calculate_map: True # whether to calculate mean average precision
image_size: 1280 # image size matching your model
annotation_path: "/ground_truth/labels" # path to ground-truth labels used to validate the trained model
yaml_path: "/cfg/example.yaml" # trained model's yaml file
act_mask: False # set True if your model produces masks
task_mode: "detection" # whether to run the model in detection or segmentation mode
verbose: True # whether to print prediction logs
```
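Before running main.py, it can save a debugging round-trip to sanity-check the config. The sketch below is a hypothetical helper (not part of the repository); it assumes PyYAML is available, which the tool itself needs in order to read config.yaml:

```python
# check_config.py -- hypothetical helper, not part of this repository.
# Loads config.yaml and verifies that required keys are present, values
# are in range, and referenced paths exist before launching main.py.
import sys
from pathlib import Path

import yaml  # PyYAML; assumed installed since the tool reads YAML configs

REQUIRED_KEYS = {
    "model_name", "model_path", "image_path", "conf_thresh",
    "calculate_map", "image_size", "annotation_path", "task_mode",
}

def check(config_path: str = "config.yaml") -> None:
    cfg = yaml.safe_load(Path(config_path).read_text())
    missing = REQUIRED_KEYS - cfg.keys()
    if missing:
        sys.exit(f"config is missing keys: {sorted(missing)}")
    if cfg["model_name"] not in ("yolov5", "yolov8"):
        sys.exit("model_name must be 'yolov5' or 'yolov8'")
    if not 0.0 <= cfg["conf_thresh"] <= 1.0:
        sys.exit("conf_thresh must be between 0 and 1")
    for key in ("model_path", "image_path", "annotation_path"):
        if not Path(cfg[key]).exists():
            print(f"warning: {key} does not exist on disk: {cfg[key]}")
    print("config looks OK")

if __name__ == "__main__":
    check(sys.argv[1] if len(sys.argv) > 1 else "config.yaml")
```

The key names above are taken directly from the sample config; if your config differs, adjust REQUIRED_KEYS accordingly.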

LICENSE

This project is licensed under the MIT License - see the LICENSE.md file for details.

Contribution

To make a contribution, feel free to fork the repository, improve the project, and then open a pull request.

Contact

For bug reports and feature requests for the model compare tool, please visit GitHub Issues, and join our Discord community for questions and discussions!
