
Images Best Practices #3511

@malfonso0

Description


Hi, I am developing an object detection project on a Jetson Xavier NX, where I need to monitor multiple cameras and run object detection on each frame from each camera.
:( Sorry for the long question; I'm trying to introduce the problem/objective.

At first I started with only one camera, so all my code went in the same script, something like:

model = loadModel()
while True:
    frame = getFrame()
    detections = model.detect(frame)
    ...

With this simple script I got almost 70 FPS (~15 ms per frame; the model is a tinyyolo4-416).
Then I started on the multi-camera approach. I tried threading/multiprocessing, and it worked, but I think it lacks separation of concerns, and my code was a bit messy.

To get separation of concerns, I have created two scripts/Docker containers:

  1. An API using FastAPI, which loads the model and exposes a predict endpoint.
  2. CameraProcessor, which, as the name suggests, is a Python script that grabs images from a camera and then calls the API to detect things in each frame.

When I started testing this, again with only one camera, my FPS went down from 70 FPS to 30 FPS (~30 ms per frame).

Here are my questions:

  • What is the recommended/best practice for sending images to the API?
    Right now I'm encoding the numpy image (cv2.imencode, then base64), sending it, and decoding on the other side. I have done some profiling, and the encoding/decoding takes approx. 4 ms each, which alone would reduce my FPS to ~50.

  • Is there any "configuration" I can apply to reduce this gap, or is this the expected latency?
    Thanks
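The encode/send/decode round trip described in the first question can be sketched as below, with a stand-in byte string in place of real cv2.imencode output so it runs without OpenCV; the point is that base64 inflates the payload by 4/3 and adds an encode/decode step on each side:

```python
import base64

# Stand-in for the JPEG bytes that cv2.imencode(".jpg", frame) would return
# (hypothetical data; a real frame would be encoded from the camera image).
jpeg_bytes = b"\xff\xd8\xff\xe0" + b"\x00" * 1000

# Sender side (CameraProcessor): base64-encode so the bytes fit in a JSON body.
payload = base64.b64encode(jpeg_bytes).decode("ascii")

# Receiver side (API): decode back to the original JPEG bytes before imdecode.
recovered = base64.b64decode(payload)

# base64 output is ~33% larger than the raw JPEG, so both CPU time and
# transfer size grow; posting the JPEG bytes directly in the request body
# (e.g. as application/octet-stream) would skip this step entirely.
print(len(jpeg_bytes), len(payload))
```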
