YoloProcessing

This project benchmarks the inference speed of YOLOv5 using parallel and linear processing.

Read, watch or cite our paper

Installation

  1. Install the dependencies with pip install -r requirements.txt

Primary Data Generation

The script's command-line options can be listed with python src/generate_primary_data.py -h. The script follows these steps (a sketch of the overall flow is shown after the list):

  1. Run YOLOv5 inference with the specified batch sizes (numbers of images)
  2. Record the total inference time and the inference time for each image
  3. Save the results in a CSV file
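
For reference, here is a minimal sketch of what such a measurement loop can look like, assuming YOLOv5 is loaded through torch.hub. The batch sizes, image path, and CSV column names are illustrative assumptions, not the repository script's actual interface.

```python
# Illustrative sketch only: batch sizes, image path, and CSV columns are assumptions.
import csv
import time

import torch

# Load a pretrained YOLOv5 model through torch.hub (the standard Ultralytics entry point).
model = torch.hub.load("ultralytics/yolov5", "yolov5s")

batch_sizes = [1, 2, 4, 8, 16]           # hypothetical batch sizes
image_path = "data/images/example.jpg"   # hypothetical input image

with open("primary_data.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["batch_size", "total_inference_time_s", "time_per_image_s"])

    for batch_size in batch_sizes:
        batch = [image_path] * batch_size  # duplicate one image to build a batch

        start = time.perf_counter()
        model(batch)                       # run inference on the whole batch
        total = time.perf_counter() - start

        # Record the total time and the average time per image for this batch size.
        writer.writerow([batch_size, total, total / batch_size])
```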

Inference Time Visualization

The script src/visualize_total_inference_time.py plots the total inference time against the batch size. Its command-line options can be listed with python src/visualize_total_inference_time.py -h.
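
A rough sketch of such a plot, assuming a CSV with batch_size and total_inference_time_s columns like the one in the sketch above (the file and column names are assumptions, not the script's actual interface):

```python
# Illustrative sketch: CSV name and column names are assumptions.
import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv("primary_data.csv")

# Plot total inference time as a function of batch size.
plt.plot(df["batch_size"], df["total_inference_time_s"], marker="o")
plt.xlabel("Batch size (number of images)")
plt.ylabel("Total inference time (s)")
plt.title("YOLOv5 total inference time vs. batch size")
plt.grid(True)
plt.savefig("total_inference_time.png")
```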

The script src/table_average_inference_time.py produces a table of the average inference time per batch size. Its command-line options can be listed with python src/table_average_inference_time.py -h.
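
Similarly, a sketch of how such a table could be built with pandas, again assuming the illustrative CSV and column names used above:

```python
# Illustrative sketch: CSV name and column names are assumptions.
import pandas as pd

df = pd.read_csv("primary_data.csv")

# Average the per-image inference time for each batch size and print a table.
table = (
    df.groupby("batch_size")["time_per_image_s"]
    .mean()
    .rename("avg_inference_time_s")
    .reset_index()
)
print(table.to_string(index=False))
```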

What could be improved

  • Add error logging and handling
  • Add automated tests
