Outputs

Result of human.detect() is a single object that includes data for all enabled modules and all detected objects:

result = {
  version:         // <string> version string of the human library
  face:            // <array of detected objects>
  [
    {
      confidence,  // <number>
      box,         // <array [x, y, width, height]>
      rawBox,      // normalized values for box, only set if returnRawData is set
      mesh,        // <array of 3D points [x, y, z]> 468 base points & 10 iris points
      rawMesh,     // normalized values for mesh, only set if returnRawData is set
      annotations, // <list of objects { landmark: array of points }> 32 base annotated landmarks & 2 iris annotations
      iris,        // <number> relative distance of iris to camera, multiply by focal length to get actual distance
      age,         // <number> estimated age
      gender,      // <string> 'male', 'female'
      embedding,   // <array of floats> vector of 192 values used for face similarity comparison
      emotion:         // <array of emotions>
      [
        {
          score,       // <number> probability of emotion
          emotion,     // <string> 'angry', 'disgust', 'fear', 'happy', 'sad', 'surprise', 'neutral'
        }
      ],
    }
  ],
  body:            // <array of detected objects>
  [
    {
      score,       // <number>,
      keypoints,   // <array of 2D landmarks [ score, landmark, position [x, y] ]> 17 annotated landmarks
    }
  ],
  hand:            // <array of detected objects>
  [
    {
      confidence,  // <number>,
      box,         // <array [x, y, width, height]>,
      landmarks,   // <array of 3D points [x, y, z]> 21 points
      annotations, // <array of 3D landmarks [ landmark: <array of points> ]> 5 annotated landmarks
    }
  ],
  gesture:         // <array of objects>
  [
    {
      <gesture-type>: <person-number>,
      gesture:        <gesture-string>
    }
  ],
  performance: {   // performance data of last execution for each module, measured in milliseconds
                   // note that per-model performance data is not available in async execution mode
    backend,       // time to initialize tf backend, keeps longest value measured
    load,          // time to load models, keeps longest value measured
    image,         // time for image processing
    gesture,       // gesture analysis time
    body,          // model time
    hand,          // model time
    face,          // model time
    agegender,     // model time
    emotion,       // model time
    total,         // end to end time
  }
}
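
Below is a minimal usage sketch. It assumes human is an already-configured instance of the library and input is an image, canvas or video element; both names are placeholders:

const result = await human.detect(input);
for (const face of result.face) {
  const [x, y, width, height] = face.box;  // bounding box in pixels
  // pick the highest-scoring emotion; the array may be empty if the emotion module is disabled
  const top = (face.emotion.length > 0) ? face.emotion.reduce((a, b) => (a.score > b.score ? a : b)) : null;
  console.log(`face at ${x},${y} size ${width}x${height} age: ${face.age} gender: ${face.gender} emotion: ${top ? top.emotion : 'n/a'}`);
}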
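
Since the iris value is relative, converting it to an actual distance requires the focal length of the camera. A short sketch, where focalLength is a placeholder constant that must be calibrated for your specific camera:

const focalLength = 4.0;  // placeholder value, not provided by the library; calibrate per camera
for (const face of result.face) {
  if (face.iris) console.log(`estimated distance: ${(face.iris * focalLength).toFixed(2)}`);
}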
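
One common way to compare two embedding vectors is cosine similarity; the sketch below is an illustration only, not the library's built-in comparison method:

// cosine similarity between two embedding vectors; closer to 1 means more similar faces
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
const score = cosineSimilarity(result.face[0].embedding, known.embedding);  // known is a placeholder for a previously stored result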
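
Each gesture entry pairs the index of the detected object it refers to with a descriptive string, and the performance object can be read directly for simple profiling; the printed gesture value below is illustrative:

for (const g of result.gesture) {
  console.log(g);  // e.g. { face: 0, gesture: 'facing camera' } -- example value, actual strings vary
}
console.log(`total: ${result.performance.total}ms, face model: ${result.performance.face}ms`);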