
gvaduha/multicam-objectdetection-nn


Object detection from multiple video sources


Online multicam object detection mini-framework. It currently has no object tracker; tracking is assumed to be an external component that aggregates detection results from a group of detection servers. The project started as a tool to test and measure different object detection neural networks by capturing a set of cameras in a "process all images at one time point" fashion. The tool has a pluggable design: capture is done with OpenCV, and the classes for the NN under test and the result processor can be substituted via config.json.
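The "module"/"class" pairs in config.json suggest plugins are resolved by dynamic import. A minimal sketch of how such a loader might look (the helper name `load_plugin` and the exact lookup logic are assumptions, not the framework's actual code):

```python
import importlib


def load_plugin(config, section):
    """Resolve a pluggable class from a config fragment such as
    {"nn": {"module": "fakes", "class": "FakeNn"}}.

    Hypothetical helper -- the framework's real loader may differ.
    """
    entry = config[section]
    module = importlib.import_module(entry["module"])
    return getattr(module, entry["class"])
```

The returned class can then be instantiated and driven through the plugin interface described below.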

How to run

The main service entry point is main.py. It runs without arguments.

The GUI test application is test-cli-main.py. You have to specify the URI of a camera (or other video source) and the URI where the service provides detection data.

Pluggable classes

Neural network class interface

  • def init(self, config, logger):
  • def detectObjects(self, img) -> List[e.DetectedObject]:
  • def stop(self):
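A detector plugin only needs to implement these three methods. As a sketch, here is a trivial detector honouring the interface; the fields of `DetectedObject` (label, confidence, box) are assumptions, since the framework's `e.DetectedObject` definition is not shown here:

```python
from typing import List, NamedTuple


class DetectedObject(NamedTuple):
    """Stand-in for the framework's e.DetectedObject (assumed fields)."""
    label: str
    confidence: float
    box: tuple  # (x, y, w, h)


class ConstantDetector:
    """Hypothetical detector: reports one fixed detection per frame.
    Useful for wiring tests, similar in spirit to fakes::FakeNn."""

    def init(self, config, logger):
        # Plugins receive their config subtree and a logger on init.
        self.threshold = config.get("threshold", 0.5)

    def detectObjects(self, img) -> List[DetectedObject]:
        det = DetectedObject("person", 0.9, (0, 0, 10, 10))
        return [det] if det.confidence >= self.threshold else []

    def stop(self):
        pass
```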

Available implementations:

  • fakes::FakeNn
  "nn": {
    "module": "fakes",
    "class": "FakeNn"
  },
  • tfdetector::TensorFlowDetector
  "nn": {
    "module": "tfdetector",
    "class": "TensorFlowDetector"
  },
  • torchdetector::TorchDetector
  "nn": {
    "module": "torchdetector",
    "class": "TorchDetector"
  },

Event result processor

  • def init(self, config, logger):
  • def pushDetectedObjectsFrame(self, frame: e.DetectedObjectsFrame):
  • def stop(self):
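A result sink follows the same three-method shape, except it consumes detection frames instead of producing them. A minimal in-memory sketch (the class name and frame contents are illustrative assumptions):

```python
class ListResultSink:
    """Hypothetical sink implementing the result-processor interface:
    collects frames in memory instead of writing them to a file or
    serving them over HTTP."""

    def init(self, config, logger):
        self.frames = []

    def pushDetectedObjectsFrame(self, frame):
        # frame would be an e.DetectedObjectsFrame in the real framework.
        self.frames.append(frame)

    def stop(self):
        self.frames.clear()
```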

Available implementations:

  • fileresultsink::WriteJsonResultSink
  "resultsink": {
    "module": "fileresultsink",
    "class": "WriteJsonResultSink"
  },
  • webservice::FlaskResultSink
  "resultsink": {
    "module": "webservice",
    "class": "FlaskResultSink"
  },

Config

Full example

Main

  "runintervalsec": 1.0

Modules config

Each pluggable module receives its config subtree from "modules"/"modulename" upon init.

"modules": {
  "WriteJsonResultSink": {
    "file": "./results.log"
  },
  "FlaskResultSink": {
    "server": "0.0.0.0:5555",
    "resultep": "/currentresult"
  },
  "TensorFlowDetector": {
    "model": "models/tf.model",
    "threshold": 0.1
  },
  "TorchDetector": {
    "model": "models/torch.model",
    "threshold": 0.1,
    "device": "gpu"
  }
},

Cams config

"cams": [
  {
    "vsid": 1,
    "uri": "rtsp://admin:admin@cam1/h264"
  },
  {
    "vsid": 2,
    "uri": "rtsp://admin:admin@cam2"
  }
]
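Putting the fragments above together, a complete config.json might look like the following. The pieces are taken from the sections above; the exact top-level layout is an assumption:

```json
{
  "runintervalsec": 1.0,
  "nn": {
    "module": "fakes",
    "class": "FakeNn"
  },
  "resultsink": {
    "module": "fileresultsink",
    "class": "WriteJsonResultSink"
  },
  "modules": {
    "WriteJsonResultSink": {
      "file": "./results.log"
    }
  },
  "cams": [
    {
      "vsid": 1,
      "uri": "rtsp://admin:admin@cam1/h264"
    }
  ]
}
```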