LightNet: Bringing pjreddie's DarkNet out of the shadows

LightNet provides a simple and efficient Python interface to DarkNet, a neural network library written by Joseph Redmon that's well known for its state-of-the-art object detection models, YOLO and YOLOv2. LightNet's main purpose for now is to power Prodigy's upcoming object detection and image segmentation features. However, it may be useful to anyone interested in the DarkNet library.


LightNet's features include:

  • State-of-the-art object detection: YOLOv2 offers unmatched speed/accuracy trade-offs.
  • Easy to use via Python: Pass in byte strings, get back numpy arrays with bounding boxes.
  • Lightweight and self-contained: No dependency on large frameworks like TensorFlow or PyTorch. The DarkNet source is provided in the package.
  • Easy to install: Just pip install lightnet and python -m lightnet download yolo.
  • Cross-platform: Works on macOS and Linux, on Python 2.7, 3.5 and 3.6.
  • 10x faster on CPU: Uses BLAS for its matrix multiplication routines.
  • Not named DarkNet: Avoids some potentially awkward misunderstandings.

LightNet "logo"

🌓 Installation

Operating system macOS / OS X, Linux (Windows coming soon)
Python version CPython 2.7, 3.5, 3.6. Only 64 bit.
Package managers pip (source packages only)

LightNet requires an installation of OpenBLAS:

sudo apt-get install libopenblas-dev

LightNet can be installed via pip:

pip install lightnet

Once you've installed LightNet, you can download a model using the lightnet download command. This will save the models in the lightnet/data directory. If you've installed LightNet system-wide, make sure to run the command as administrator.

python -m lightnet download tiny-yolo
python -m lightnet download yolo

The following models are currently available via the download command:

yolo.weights       258 MB    Direct download
tiny-yolo.weights  44.9 MB   Direct download

🌓 Usage

An object detection system predicts labelled bounding boxes on an image. The label scheme comes from the training data, so different models will have different label sets. YOLOv2 can detect objects in images of any resolution. Smaller images will be faster to predict, while high resolution images will give you better object detection accuracy.

Images can be loaded by file-path, by JPEG-encoded byte-string, or by numpy array. If passing in a numpy array, it should be of dtype float32, and shape (width, height, colors).

import lightnet

model = lightnet.load('tiny-yolo')
image = lightnet.Image.from_bytes(open('eagle.jpg', 'rb').read())
boxes = model(image)
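
Each entry in boxes pairs a class with a detection score and a box geometry, in the (class_id, class_name, prob, xywh) format documented for Network.__call__ below. A minimal sketch of reading the results:

for class_id, class_name, prob, xywh in boxes:
    # xywh holds the x, y, width, height of the box; x and y are the
    # pixel coordinates of the box centre.
    print(class_name, prob, xywh)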

METHOD lightnet.load

Load a pre-trained model. If a path is provided, it should be a directory containing two files, named {name}.weights and {name}.cfg. If a path is not provided, the built-in data directory is used, which is located within the LightNet package.

model = lightnet.load('tiny-yolo')
model = lightnet.load(path='/path/to/yolo')
Argument Type Description
name unicode Name of the model located in the data directory, e.g. tiny-yolo.
path unicode Optional path to a model data directory.
RETURNS Network The loaded model.

🌓 Network

The neural network object. Wraps DarkNet's network struct.

CLASSMETHOD Network.load

Load a pre-trained model. Identical to lightnet.load().
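
A quick sketch of the classmethod form, which takes the same name/path arguments as lightnet.load() (assuming Network is exposed at the package top level):

import lightnet

model = lightnet.Network.load('tiny-yolo')  # assumption: Network is exposed as lightnet.Network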

METHOD Network.__call__

Detect bounding boxes given an Image object. The bounding boxes are provided as a list, with each entry (class_id, class_name, prob, [(x, y, width, height)]), where x and y are the pixel coordinates of the centre of the box, and width and height describe its dimensions. class_id is the integer index of the object type, class_name is a string with the object type, and prob is a float indicating the detection score. The thresh parameter controls the prediction threshold: objects with a detection probability above thresh are returned. We don't know what hier_thresh or nms do.

boxes = model(image, thresh=0.5, hier_thresh=0.5, nms=0.45)
Argument Type Description
image Image The image to process.
thresh float Prediction threshold.
hier_thresh float See note above.
nms float See note above.
RETURNS list The bounding boxes, as (class_id, class_name, prob, xywh) tuples.

METHOD Network.update

Update the model on a batch of examples. The images should be provided as a list of Image objects. The box_labels should be a list of BoxLabels objects. Returns a float indicating how much the model's prediction differed from the provided true labels.

loss = model.update([image1, image2], [box_labels1, box_labels2])
Argument Type Description
images list List of Image objects.
box_labels list List of BoxLabels objects.
RETURNS float The loss indicating how much the prediction differed from the provided labels.

🌓 Image

Data container for a single image. Wraps DarkNet's image struct.

METHOD Image.__init__

Create an image. data should be a numpy array of dtype float32, and shape (width, height, colors).

image = Image(data)
Argument Type Description
data numpy array The image data.
RETURNS Image The newly constructed object.
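
As an illustration of the shape requirement above, here's a minimal sketch that decodes a JPEG into a suitable numpy array. Using Pillow for decoding and scaling the values to [0, 1] are assumptions; only the (width, height, colors) float32 layout comes from the description above.

import numpy as np
from PIL import Image as PILImage  # assumption: Pillow is used here only to decode the JPEG
import lightnet

pixels = np.asarray(PILImage.open('eagle.jpg'), dtype=np.float32)  # (height, width, colors)
pixels /= 255.0                                                    # assumption: values in [0, 1]
data = np.ascontiguousarray(pixels.transpose(1, 0, 2))             # (width, height, colors)
image = lightnet.Image(data)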

CLASSMETHOD Image.blank

Create a blank image of the specified dimensions.

image = Image.blank(width, height, colors)
Argument Type Description
width int The image width, in pixels.
height int The image height, in pixels.
colors int The number of color channels (usually 3).
RETURNS Image The newly constructed object.

CLASSMETHOD Image.load

Load an image from a path to a jpeg file, at the specified dimensions.

image = Image.load(path, width, height, colors)
Argument Type Description
path unicode The path to the image file.
width int The image width, in pixels.
height int The image height, in pixels.
colors int The number of color channels (usually 3).
RETURNS Image The newly constructed object.

CLASSMETHOD Image.from_bytes

Read an image from a byte string, which should be the contents of a jpeg file.

image = Image.from_bytes(bytes_data)
Argument Type Description
bytes_data bytes The image contents.
RETURNS Image The newly constructed object.

🌓 BoxLabels

Data container for labelled bounding boxes for a single image. Wraps an array of DarkNet's box_label struct.

METHOD BoxLabels.__init__

Labelled box annotations for a single image, used to update the model. ids should be a 1d numpy array of dtype int32, indicating the correct class IDs of the objects. boxes should be a 2d array of dtype float32, and shape (len(ids), 4). The 4 columns should provide the relative x, y, width, height of the bounding box, where x and y are the coordinates of the centre, relative to the image size, and width and height are the relative dimensions of the box.

box_labels = BoxLabels(ids, boxes)
Argument Type Description
ids numpy array The class IDs of the objects.
boxes numpy array The boxes, as relative x, y, width, height.
RETURNS BoxLabels The newly constructed object.

CLASSMETHOD BoxLabels.load

Load annotations for a single image from a text file. Each box should be described on a single line, in the format class_id x y width height.

box_labels = BoxLabels.load(path)
Argument Type Description
path unicode The path to load from.
RETURNS BoxLabels The newly constructed object.
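
Putting the pieces together, a minimal training-step sketch combining Image, BoxLabels and Network.update as documented above; the file names and the single box label are illustrative only.

import numpy as np
import lightnet
from lightnet import Image, BoxLabels

model = lightnet.load('tiny-yolo')

# Two training images, loaded from JPEG bytes (illustrative file names).
images = [Image.from_bytes(open(path, 'rb').read())
          for path in ('img1.jpg', 'img2.jpg')]

# One labelled box per image: class 0, centred, covering half of the image,
# given as relative x, y, width, height (see BoxLabels.__init__ above).
labels = [BoxLabels(np.array([0], dtype=np.int32),
                    np.array([[0.5, 0.5, 0.5, 0.5]], dtype=np.float32))
          for _ in images]

loss = model.update(images, labels)
print(loss)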