How to Compile and Run on Windows

This tutorial applies to any model in this repo; you only need to adapt a couple of lines.

Environments

Compile and Run

1. Modify CMakeLists.txt

cmake_minimum_required(VERSION 2.6)

project(yolov5) # 1
set(OpenCV_DIR "D:\\opencv\\opencv346\\build")  #2
set(TRT_DIR "D:\\TensorRT-7.0.0.11.Windows10.x86_64.cuda-10.2.cudnn7.6\\TensorRT-7.0.0.11")  #3

add_definitions(-std=c++11)
option(CUDA_USE_STATIC_CUDA_RUNTIME OFF)
set(CMAKE_CXX_STANDARD 11)
set(CMAKE_BUILD_TYPE Debug)

set(THREADS_PREFER_PTHREAD_FLAG ON)
find_package(Threads)

# setup CUDA
find_package(CUDA REQUIRED)
message(STATUS "    libraries: ${CUDA_LIBRARIES}")
message(STATUS "    include path: ${CUDA_INCLUDE_DIRS}")

include_directories(${CUDA_INCLUDE_DIRS})

set(CUDA_NVCC_FLAGS ${CUDA_NVCC_FLAGS};-std=c++11;-g;-G;-gencode;arch=compute_75,code=sm_75)
####
enable_language(CUDA)  # add this line, then there is no need to set up the CUDA path in VS
####
include_directories(${PROJECT_SOURCE_DIR}/include)
include_directories(${TRT_DIR}\\include)

# -D_MWAITXINTRIN_H_INCLUDED for solving error: identifier "__builtin_ia32_mwaitx" is undefined
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11 -Wall -Ofast -D_MWAITXINTRIN_H_INCLUDED")

# setup opencv
find_package(OpenCV QUIET
    NO_MODULE
    NO_DEFAULT_PATH
    NO_CMAKE_PATH
    NO_CMAKE_ENVIRONMENT_PATH
    NO_SYSTEM_ENVIRONMENT_PATH
    NO_CMAKE_PACKAGE_REGISTRY
    NO_CMAKE_BUILDS_PATH
    NO_CMAKE_SYSTEM_PATH
    NO_CMAKE_SYSTEM_PACKAGE_REGISTRY
)

message(STATUS "OpenCV library status:")
message(STATUS "    version: ${OpenCV_VERSION}")
message(STATUS "    libraries: ${OpenCV_LIBS}")
message(STATUS "    include path: ${OpenCV_INCLUDE_DIRS}")

include_directories(${OpenCV_INCLUDE_DIRS})
link_directories(${TRT_DIR}\\lib)

add_executable(yolov5 ${PROJECT_SOURCE_DIR}/yolov5.cpp ${PROJECT_SOURCE_DIR}/yololayer.cu ${PROJECT_SOURCE_DIR}/yololayer.h 
                ${PROJECT_SOURCE_DIR}/hardswish.cu ${PROJECT_SOURCE_DIR}/hardswish.h)   #4

target_link_libraries(yolov5  "nvinfer" "nvinfer_plugin")   #5
target_link_libraries(yolov5 ${OpenCV_LIBS})          #6
target_link_libraries(yolov5 ${CUDA_LIBRARIES})   #7
target_link_libraries(yolov5 Threads::Threads)       #8

Notice: there are 8 places to adapt in CMakeLists.txt, marked with #1-#8 (an adapted example is shown after the list below)

  • #1 the project name; set it to your own project's name
  • #2 your OpenCV path
  • #3 your TensorRT path
  • #4 the source files needed, including the .cpp, .cu and .h files
  • #5-#8 the libraries to link
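
For example, to build another model from this repo, such as yolov3-spp, only the marked lines change. This is just an illustrative sketch: the source file names below are assumptions, so use the .cpp/.cu/.h files that the model's folder actually contains, and keep your own OpenCV/TensorRT paths.

project(yolov3-spp)   # 1
set(OpenCV_DIR "D:\\opencv\\opencv346\\build")  # 2
set(TRT_DIR "D:\\TensorRT-7.0.0.11.Windows10.x86_64.cuda-10.2.cudnn7.6\\TensorRT-7.0.0.11")  # 3

add_executable(yolov3-spp ${PROJECT_SOURCE_DIR}/yolov3-spp.cpp ${PROJECT_SOURCE_DIR}/yololayer.cu
                ${PROJECT_SOURCE_DIR}/yololayer.h)   # 4

target_link_libraries(yolov3-spp "nvinfer" "nvinfer_plugin")   # 5
target_link_libraries(yolov3-spp ${OpenCV_LIBS})               # 6
target_link_libraries(yolov3-spp ${CUDA_LIBRARIES})            # 7
target_link_libraries(yolov3-spp Threads::Threads)             # 8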

2. Run cmake-gui to configure the project

2.1 Open cmake-gui and set the source and build paths


2.2 Click Configure and set the build environment (generator and platform)


2.3 Click Finish and wait until "Configuring done" appears


2.4 Click Generate


2.5 Click Open Project


2.6 In Visual Studio, click Build -> Build Solution


3. Run from the command line

cd to the directory containing the exe (e.g. E:\LearningCodes\GithubRepo\tensorrtx\yolov5\build\Debug)

yolov5.exe -s             // serialize the model to a plan file, i.e. 'yolov5s.engine'
yolov5.exe -d ../samples  // deserialize the plan file and run inference; the images in ../samples will be processed

Notice: when serializing the model, the .wts file should be put in the parent directory of xxx.vcxproj, or just modify the .wts path in yolov5.cpp (a CMake sketch for automating the copy is shown below).
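
If you prefer not to copy the weight file by hand, one option is to let CMake place it for you. This is only a sketch, not part of the original project: it assumes the generated yolov5s.wts already sits in the project source directory, so adjust the source path (and the destination, if your yolov5.cpp expects the file elsewhere).

# optional: copy the weight file into the build directory at configure time
# so that `yolov5.exe -s` can find it; the source path is an assumption
file(COPY "${PROJECT_SOURCE_DIR}/yolov5s.wts" DESTINATION "${CMAKE_BINARY_DIR}")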


4. Run in Visual Studio

In Visual Studio, first right-click the project and choose Set As Startup Project, then set Project ==> Properties ==> Configuration Properties ==> Debugging ==> Command Arguments to -s or -d ../yolov3-spp/samples. After that you can run or debug.


Notice: the TensorRT and OpenCV .dll files should be placed in the same directory as the exe file, or their directories added to the Windows environment variables (not recommended). A post-build copy sketch is shown below.
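
One way to avoid copying the DLLs by hand is a post-build step in CMakeLists.txt. This is a minimal sketch, not part of the original project: the DLL names and folder layout below (TensorRT DLLs under the lib folder of TRT_DIR, the OpenCV world DLL under x64\vc14\bin of OpenCV_DIR) are assumptions based on typical TensorRT 7 / OpenCV 3.4.6 installations, so check your install, add any other DLLs the loader reports as missing, and note that it needs a CMake recent enough to accept generator expressions and multiple sources for copy_if_different.

# sketch: copy the runtime DLLs next to yolov5.exe after every build;
# DLL names and paths are assumptions, adjust them to your installation
add_custom_command(TARGET yolov5 POST_BUILD
    COMMAND ${CMAKE_COMMAND} -E copy_if_different
            "${TRT_DIR}/lib/nvinfer.dll"
            "${TRT_DIR}/lib/nvinfer_plugin.dll"
            "${OpenCV_DIR}/x64/vc14/bin/opencv_world346.dll"
            $<TARGET_FILE_DIR:yolov5>)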