This repository has been archived by the owner on Jul 10, 2023. It is now read-only.

Some public models from Open Model Zoo do not produce inference results #89

Closed
nikparmar opened this issue Sep 12, 2021 · 18 comments
Labels
bug Something isn't working question Further information is requested

Comments

@nikparmar

nikparmar commented Sep 12, 2021

Here is the pipeline I am testing with.
The pipeline runs without any errors, but no output is logged for the following command:

command

sudo ./vaclient/vaclient.sh run object_classification/face_mask_classification https://github.com/intel/video-analytics-serving/blob/master/samples/classroom.mp4?raw=true

pipeline.json

{
    "type": "GStreamer",
    "template": [
      "uridecodebin name=source",
      " ! gvadetect model=/home/video-analytics-serving/models/ultra-lightweight-face-detection-slim-320/1/FP32/ultra-lightweight-face-detection-slim-320.xml name=detection",
      " ! gvaclassify model=/home/video-analytics-serving/models/face_classification/1/FP32/face_mask.xml name=classification",
      " ! gvapython name=face-classification class=FaceClassification module=/home/video-analytics-serving/extensions/spatial_analytics/face_classification.py",
      " ! gvametaconvert name=metaconvert",
      " ! gvapython module=/home/video-analytics-serving/extensions/gva_event_meta/gva_event_convert.py",
      " ! gvametapublish name=destination",
      " ! appsink name=appsink"
    ],
    "description": "Bank Face Classification",
    "parameters": {
      "type": "object",
      "properties": {
        "detection-device": {
          "element": {
            "name": "detection",
            "property": "device"
          },
          "type": "string"
        },
        "classification-device": {
          "element": {
            "name": "classification",
            "property": "device"
          },
          "type": "string"
        },
        "inference-interval": {
          "element": [
            {
              "name": "detection",
              "property": "inference-interval"
            },
            {
              "name": "classification",
              "property": "inference-interval"
            }
          ],
          "type": "integer"
        },
        "detection-model-instance-id": {
          "element": {
            "name": "detection",
            "property": "model-instance-id"
          },
          "type": "string"
        },
        "classification-model-instance-id": {
          "element": {
            "name": "classification",
            "property": "model-instance-id"
          },
          "type": "string"
        },
        "object-class": {
          "element": [
            {
              "name": "detection",
              "property": "object-class"
            },
            {
              "name": "classification",
              "property": "object-class"
            }
          ],
          "type": "string"
        },
        "reclassify-interval": {
          "element": "classification",
          "type": "integer"
        },
        "detection-threshold": {
          "element": {
            "name": "detection",
            "property": "threshold"
          },
          "type": "number"
        },
        "classification-threshold": {
          "element": {
            "name": "classification",
            "property": "threshold"
          },
          "type": "number"
        }
      }
    }
  }

Could this be because I have not added model-proc files for the models?

@akwrobel akwrobel added the bug Something isn't working label Sep 15, 2021
@akwrobel
Contributor

Hi @nikparmar,
We are currently looking into this issue.

@nikparmar
Author

Hi @akwrobel Any update on this?

@nnshah1

nnshah1 commented Sep 20, 2021

@nikparmar Can you check the contents of /tmp/results.jsonl? We have seen a similar issue in scenarios where the output file takes longer than usual to be created and vaclient does not print output from the pipeline. In that case the output is still generated, but vaclient doesn't report anything.

@nikparmar
Author

Hi @nnshah1 Let me try that and check.

@nikparmar
Author

nikparmar commented Sep 21, 2021

Hi @nnshah1 I tried checking the results.jsonl file, but it's empty.
As you can see in the screenshot below, the pipeline runs without any errors, and FPS values are reported when I stop it.
Can you check the same at your end, with this or any other model from the public repository?
[screenshot: pipeline log showing no errors and FPS output]

@tthakkal
Contributor

tthakkal commented Sep 21, 2021

Hi @nikparmar, could you please share the exact models you are using? I will test and get back to you.

@nikparmar
Author

Hi @tthakkal here are the details of the models:

  1. ultra-lightweight-face-detection-slim-320
  2. face-mask

@tthakkal
Contributor

Thanks, I tried face detection with ultra-lightweight-face-detection-slim-320 and see no results. I will verify whether this model requires special model-proc settings and let you know.

@nikparmar
Author

Also, @tthakkal, I found this to be the case with a lot of the public models available here, not just ultra-lightweight-face-detection-slim-320. You can try any other face detection model and see the same behavior.

@tthakkal
Contributor

Hi @nikparmar, I checked, and I'm sorry to say we don't support those models. However, you can still use them by running inference with gvainference and creating detections with gvapython.

Get tensors using gvainference and process them with gvapython to create and add regions before sending frames on for face mask classification.

" ! gvainference model=ultra-lightweight-face-detection-slim-320.xml name=detection",
" ! gvapython name=face-detection class=FaceDetection module=face_detection.py",
" ! gvaclassify model=face_mask.xml name=classification",
" ! gvapython name=face-classification class=FaceClassification module=face_classification.py",

@nikparmar
Author

Hi @tthakkal I tried using the script below but am unable to find any tensors on this video.

"""
* Copyright (C) 2021 Intel Corporation.
*
* SPDX-License-Identifier: BSD-3-Clause
"""

import traceback

from vaserving.common.utils import logging


def print_message(message):
    print("", flush=True)
    print(message, flush=True)


logger = logging.get_logger("face_detection", is_static=True)


def process_frame(frame):
    try:
        width = frame.video_info().width
        height = frame.video_info().height

        for tensor in frame.tensors():
            dims = tensor.dims()
            data = tensor.data()
            # Assumes each detection occupies 7 consecutive values
            # (image_id, label_id, confidence, x_min, y_min, x_max, y_max),
            # i.e. an SSD-style output layout
            object_size = dims[-1]
            for i in range(dims[-2]):
                image_id = data[i * object_size + 0]
                label_id = data[i * object_size + 1]
                confidence = data[i * object_size + 2]
                x_min = int(data[i * object_size + 3] * width + 0.5)
                y_min = int(data[i * object_size + 4] * height + 0.5)
                x_max = int(data[i * object_size + 5] * width + 0.5)
                y_max = int(data[i * object_size + 6] * height + 0.5)

                if image_id != 0:
                    break
                if confidence < 0.5:
                    continue

                frame.add_region(x_min, y_min, x_max - x_min, y_max - y_min,
                                 str(label_id), confidence)

    except Exception:
        print_message("Error processing frame: {}".format(traceback.format_exc()))
    return True

@whbruce

whbruce commented Sep 27, 2021

A few points on compatible models

  1. A list of compatible models is defined by those that have an associated model-proc. This gist has a listing by model name. Some public models are supported, so the title of this issue is not entirely accurate.
  2. If your model is not on this list, it may still work with appropriate model preparation, which requires a good understanding of deep learning model formats.
  3. If model preparation is not successful a final option is custom processing which requires GStreamer knowledge.
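For context, a model-proc is a small JSON file that tells the gva* elements how to pre-process input and how to convert output tensors into detections. A minimal sketch for an SSD-style detector is shown below; the schema version, converter name, and labels here are illustrative assumptions, not an actual model-proc for the models in this issue, which do not use the SSD output layout.

```json
{
  "json_schema_version": "2.0.0",
  "input_preproc": [],
  "output_postproc": [
    {
      "converter": "tensor_to_bbox_ssd",
      "labels": ["background", "face"]
    }
  ]
}
```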

@tthakkal
Contributor

@nikparmar I tried with the same video and I see tensor data:

uri=https://github.com/intel-iot-devkit/sample-videos/raw/master/classroom.mp4 ! gvainference model=/home/video-analytics-serving/models/face_detection/ultra_lightweight/FP32/ultra-lightweight-face-detection-slim-320.xml ! gvapython module=face_detect.py ! fakesink

<<snip>>
[1, 4420, 4]
[9.1916602e-04 5.3762719e-03 2.2127293e-02 ... 4.1273904e-01 1.2037479e+00
 1.2570462e+00]
[1, 4420, 2]
[0.89466214 0.10533787 0.8947022  ... 0.03718159 0.9683689  0.03163114]
New clock: GstSystemClock
[1, 4420, 4]
[0.00148836 0.00500072 0.0228582  ... 0.4127335  1.203156   1.2570627 ]
[1, 4420, 2]
[0.89466614 0.10533387 0.89470595 ... 0.03728213 0.9683611  0.03163888]
[1, 4420, 4]
[0.00135019 0.00493123 0.0226246  ... 0.41368762 1.2044996  1.2576498 ]
<<snip>>

def process_frame(frame):
    for tensor in frame.tensors():
        dims = tensor.dims()
        data = tensor.data()
        print(dims)
        print(data)
    return True
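The dump above shows the model emitting two tensors per frame: scores with dims [1, 4420, 2] and boxes with dims [1, 4420, 4], rather than the single tensor of 7-value detections the earlier script expects, which would explain why that script adds no regions. A minimal sketch of pairing the two tensors follows; `parse_detections` is a hypothetical helper, and the score order ([background, face]) and normalized [x1, y1, x2, y2] box format are assumptions to verify against the model's documentation.

```python
import numpy as np

def parse_detections(scores, boxes, threshold=0.5):
    """Pair per-anchor scores with boxes and keep likely faces.

    scores: shape (N, 2) -- assumed [background, face] probabilities
    boxes:  shape (N, 4) -- assumed normalized [x1, y1, x2, y2]
    """
    face_conf = scores[:, 1]
    keep = face_conf >= threshold
    return boxes[keep], face_conf[keep]

# Toy arrays with the same per-anchor layout as the dump above
scores = np.array([[0.9, 0.1], [0.2, 0.8], [0.97, 0.03]])
boxes = np.array([[0.1, 0.1, 0.3, 0.3],
                  [0.4, 0.2, 0.6, 0.5],
                  [0.0, 0.0, 1.0, 1.0]])
kept_boxes, kept_conf = parse_detections(scores, boxes)
# Only the anchor with face confidence 0.8 survives the threshold
```

If this layout holds, the kept boxes can then be scaled to pixel coordinates and passed to frame.add_region as in the earlier script.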

@tthakkal
Contributor

@nikparmar Were you able to get it working?

@nikparmar
Author

@tthakkal I will update you soon; I am engaged with something else at the moment.

@whbruce whbruce added the question Further information is requested label Oct 4, 2021
@nikparmar
Author

Hi @tthakkal, apologies for the late reply. This isn't working at my end; I'm not getting any output for some reason.

@akwrobel
Contributor

@nikparmar Could you please provide more information on what changes you tried? Were you able to see results after replacing your process_frame implementation with the one @tthakkal provided?

@akwrobel
Contributor

@nikparmar can you provide any updates on what changes you tried? Is this no longer an issue?

@whbruce whbruce changed the title Unable to get inference results when using the public models from the Model Zoo Some public models from Open Model Zoo do not produce inference results Nov 30, 2021