# A Python Wrapper for Yolo
This application is tested on a 64-bit Ubuntu environment. Typing support is also included, so you will get function suggestions when using this library with an IDE.
You will need `make` in order to build this application:

```shell
sudo apt-get update
sudo apt-get install build-essential
sudo apt-get install git-core
```
Then you will need the `opencv-python` package as a third-party Python package. You can build OpenCV from source for C++ and enable OpenCV while building darknet (the recommended way), or you can install OpenCV directly from pip; in that case you won't be able to build darknet with OpenCV support:

```shell
pip install opencv-python
```
## Building and Installing
First you need to download pyyolo and build darknet:

```shell
git clone https://github.com/Ramesh-X/pyyolo.git
cd pyyolo
python setup.py build_ext
```
You can pass the following additional options while building the darknet sources:

```shell
# To provide a custom darknet location.
# If you do not provide this, darknet will be downloaded to the current location.
DARKNET_HOME=/home/user/darknet python setup.py build_ext

# To force rebuilding sources
REBUILD=1 python setup.py build_ext

# To enable OpenCV
OPENCV=1 python setup.py build_ext

# To enable GPU
GPU=1 python setup.py build_ext

# To enable OpenMP
OPENMP=1 python setup.py build_ext

# To enable cuDNN
CUDNN=1 python setup.py build_ext

# You can combine more than one option for building
CUDNN=1 GPU=1 python setup.py build_ext
```
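For reference, flags like `GPU=1` are ordinary environment variables. The sketch below shows the usual pattern a build script uses to read them; it is an illustration of the mechanism only, not pyyolo's actual `setup.py`:

```python
# Illustration only: how flags such as GPU=1 or OPENCV=1 are typically
# read by a build script. This is NOT pyyolo's actual setup.py.
import os

def flag(name: str) -> bool:
    """Treat any value other than empty or "0" as enabled."""
    return os.environ.get(name, "0") not in ("", "0")

# DARKNET_HOME has no default: None means "download darknet here".
darknet_home = os.environ.get("DARKNET_HOME")
use_gpu = flag("GPU")
use_cudnn = flag("CUDNN")
```

This is why `CUDNN=1 GPU=1 python setup.py build_ext` works: each prefix just sets one more variable for the build process to pick up.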
Then you can install `pyyolo` to the system with:

```shell
pip install -U .
```
The `detect` function in darknet can be used to run YOLO models; a similar function is also defined in the `pyyolo` module.
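As a rough sketch of what that looks like in code (the loader and parameter names below mirror darknet's own Python API and are assumptions, not a verified pyyolo signature — use the library's typing hints in your IDE for the exact API):

```python
# Hypothetical usage sketch. The function names below mirror darknet's
# Python API (load_net / load_meta / detect) and are assumptions.
try:
    import pyyolo
except ImportError:  # library not built/installed yet
    pyyolo = None

def run_detection(cfg_path: str, weights_path: str,
                  meta_path: str, image_path: str):
    """Load a YOLO model and return detections for a single image."""
    if pyyolo is None:
        raise RuntimeError("pyyolo is not installed; see the build steps above")
    net = pyyolo.load_net(cfg_path, weights_path, 0)  # assumed loader
    meta = pyyolo.load_meta(meta_path)                # assumed loader
    # darknet-style detect typically returns (label, confidence, bbox) tuples.
    return pyyolo.detect(net, meta, image_path)
```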
I have included example code showing how to use this with YOLO. First you need to download the weights, cfg, and meta files. YOLO weights can be downloaded from their website; the other files come with darknet. You will need to install OpenCV to run this example.
You can download the example code and change the `weights_filepath`. Then take the image in which you want to detect objects and set its path in the script. You can run the code with:

```shell
python example.py
```

to visualize the output.
`example2.py` is provided to test this on videos.
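For the video case, the frame loop itself is plain OpenCV. A minimal sketch of the general pattern (assuming `opencv-python` is installed; this is not the actual contents of `example2.py`) looks like:

```python
# Minimal frame-reading loop for video detection, using opencv-python.
# A sketch of the general pattern, not example2.py itself.
try:
    import cv2
except ImportError:  # opencv-python not installed
    cv2 = None

def iter_frames(video_path: str):
    """Yield frames from a video file one at a time."""
    if cv2 is None:
        raise RuntimeError("opencv-python is required: pip install opencv-python")
    cap = cv2.VideoCapture(video_path)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:  # end of stream or read error
                break
            yield frame  # run detection on each frame here
    finally:
        cap.release()
```

Each yielded frame is a NumPy image array, so it can be passed to the same detection call used for single images.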