diff --git a/docs/docs/CPP-Streaming-Component-API.md b/docs/docs/CPP-Streaming-Component-API.md
index e4c7349e6890..142be9578a5e 100644
--- a/docs/docs/CPP-Streaming-Component-API.md
+++ b/docs/docs/CPP-Streaming-Component-API.md
@@ -177,7 +177,7 @@ Process a single video frame for the current segment.
Must return true when the component begins generating the first track for the current segment. After it returns true, the Component Executable will ignore the return value until the component begins processing the next segment.
-If the `job_properties` map contained in the `MPFStreamingVideoJob` struct passed to the component constructor contains a `QUALITY_SELECTION_THRESHOLD` entry, then this function should only return true for a detection with a quality value that meets or exceeds that threshold. Refer to the [Quality Selection Guide](Quality-Selection-Guide/index.html). After the Component Executable invokes `EndSegment()` to retrieve the segment tracks, it will discard detections that are below the threshold. If all the detections in a track are below the threshold, then the entire track will be discarded.
+If the `job_properties` map contained in the `MPFStreamingVideoJob` struct passed to the component constructor contains a `CONFIDENCE_THRESHOLD` entry, then this function should only return true for a detection with a confidence value that meets or exceeds that threshold. After the Component Executable invokes `EndSegment()` to retrieve the segment tracks, it will discard detections whose confidence is below the threshold. If all of the detections in a track fall below the threshold, then the entire track will be discarded. [NOTE: In the future, the C++ Streaming Component API may be updated to support `QUALITY_SELECTION_THRESHOLD` instead of `CONFIDENCE_THRESHOLD`.]
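+
+The sketch below illustrates one way a component might honor the threshold. It is illustrative only: `SampleComponent`, `confidence_threshold_`, `DetectObjects()`, and `AddDetectionToTracks()` are hypothetical names, not part of the API.
+
+```c++
+SampleComponent::SampleComponent(const MPFStreamingVideoJob &job)
+        : MPFStreamingDetectionComponent(job)
+        , confidence_threshold_(-1) {  // default: accept any detection
+    auto iter = job.job_properties.find("CONFIDENCE_THRESHOLD");
+    if (iter != job.job_properties.end()) {
+        confidence_threshold_ = std::stod(iter->second);
+    }
+}
+
+bool SampleComponent::ProcessFrame(const cv::Mat &frame, int frame_number) {
+    for (const MPFImageLocation &detection : DetectObjects(frame)) {  // hypothetical detector helper
+        AddDetectionToTracks(detection, frame_number);  // hypothetical track bookkeeping
+        if (detection.confidence >= confidence_threshold_) {
+            return true;  // the first detection that meets the threshold starts a track
+        }
+    }
+    return false;
+}
+```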
Note that this function may not be invoked for every frame in the current segment. For example, if `FRAME_INTERVAL = 2`, then this function will only be invoked for every other frame since those are the only ones that need to be processed.
diff --git a/docs/docs/Quality-Selection-Guide.md b/docs/docs/Quality-Selection-Guide.md
index 6e399dd712a3..fa9618020b1c 100644
--- a/docs/docs/Quality-Selection-Guide.md
+++ b/docs/docs/Quality-Selection-Guide.md
@@ -18,10 +18,8 @@ example, a face detection component may generate detections with a `DESCRIPTOR_M
quality of the face embedding and how useful it is for reidentification. The Workflow Manager will search the
`detection_properties` map in each detection and track for that key and use the corresponding value as the detection
quality. The value associated with this property must be an integer or floating point value, where higher values
-indicate higher quality.
-
-One exception is when this property is set to `CONFIDENCE` and no `CONFIDENCE` property exists in the
-`detection_properties` map. Then the `confidence` member of each detection and track is used instead.
+indicate higher quality. The one exception is that if this property is set to `CONFIDENCE`, then the `confidence` member
+of each detection and track is used to determine quality.
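+
+Conceptually, the quality lookup behaves like the following sketch. It is illustrative only: the actual selection logic
+resides in the Workflow Manager, and the function shown here is hypothetical.
+
+```c++
+#include <string>
+
+double GetDetectionQuality(const MPFImageLocation &detection,
+                           const std::string &quality_selection_property) {
+    if (quality_selection_property == "CONFIDENCE") {
+        // Special case: use the confidence member rather than the map.
+        return detection.confidence;
+    }
+    // Otherwise, read the named property from the detection_properties map.
+    // The same lookup applies to the track-level detection_properties map.
+    return std::stod(detection.detection_properties.at(quality_selection_property));
+}
+```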
The primary way in which OpenMPF uses detection quality is to determine the track "exemplar", which is the highest
quality detection in the track. For components that do not compute a quality value, or where all detections have
@@ -48,23 +46,18 @@ the `QUALITY_SELECTION_THRESHOLD`, then only that many artifacts will be extract
# Hybrid Quality Selection
In some cases, there may be a detection property that a component would like to use as a measure of quality but it
-doesn't lend itself to simple thresholding. For example, a face detector might be able to calculate the face pose, and
-would like to select faces that are in the most frontal pose as the highest quality detections. The yaw of the face pose
-may be used to indicate this, but if it's values are between, say, -90 degrees and +90 degrees, then the highest quality
-detection would be the one with a value of yaw closest to 0. This violates the need for the quality selection property
-to take on a range of values where the highest value indicates the highest quality.
-
-Another use case might be where the component would like to choose detections based on a set of quality values, or
-properties. Continuing with the face pose example, the component might like to designate the detection with pose closest
-to frontal as the highest quality, but would also like to assign high quality to detections where the pose is closest to
-profile, meaning values of yaw closest to -90 or +90 degrees.
-
-In both of these cases, the component can create a custom detection property that is used to rank these detections as it
-sees fit. It could use a detection property called `RANK`, and assign values to that property to rank the detections
-from lowest to highest quality. In the example of the face detector wanting to use the yaw of the face pose, the
-detection with a value of yaw closest to 0 would be assigned a `RANK` property with the highest value, then the
-detections with values of yaw closest to +/-90 degrees would be assigned the second and third highest values of `RANK`.
-Detections without the `RANK` property would be treated as having the lowest possible quality value. Thus, the track
-exemplar would be the face with the frontal pose, and the `ARTIFACT_EXTRACTION_POLICY_TOP_QUALITY_COUNT` property could
-be set to 3, so that the frontal and two profile pose detections would be kept as track artifacts.
+doesn't lend itself to simple thresholding, perhaps because its value does not increase monotonically with quality, or
+because it is not numeric. In this case, the component can create a custom property that represents detection quality
+as a numerical value whose ordering corresponds to the ordering of the detections from lowest to highest quality.
+
+As a simple example, a face detector might be able to calculate the face pose and want to select for artifact
+extraction the face closest to frontal pose and the two faces closest to left and right profile pose. If the face
+detector computes the yaw with values between -90 degrees and +90 degrees, then the numerical order of those values
+would not produce the desired result. In this case, the component could create a custom detection property called
+`RANK` and assign values to that property that order the detections from highest to lowest quality. The face detection
+component would assign the highest value of `RANK` to the detection with a yaw closest to 0, and the detections with
+yaw values closest to +/-90 degrees would be assigned the second and third highest values of `RANK`. Detections
+without the `RANK` property would be treated as having the lowest possible quality value. Thus, the track exemplar
+would be the face with the frontal pose, and the `ARTIFACT_EXTRACTION_POLICY_TOP_QUALITY_COUNT` property could be set
+to 3 so that the frontal and two profile pose detections would be kept as track artifacts in addition to the exemplar.
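+
+The following sketch shows one way a component could assign such `RANK` values. It is illustrative only: the
+`AssignPoseRanks` helper and the `YAW` detection property are hypothetical, and `MPFImageLocation` is the detection
+structure from the C++ Component API.
+
+```c++
+#include <algorithm>
+#include <cmath>
+#include <string>
+#include <vector>
+
+// Assign a RANK detection property so that higher values indicate higher
+// quality. YAW is assumed to be a numeric property in degrees, where 0 is
+// frontal, -90 is left profile, and +90 is right profile.
+void AssignPoseRanks(std::vector<MPFImageLocation> &detections) {
+    if (detections.empty()) {
+        return;
+    }
+    auto yaw_of = [](const MPFImageLocation &loc) {
+        return std::stod(loc.detection_properties.at("YAW"));
+    };
+    // Return an iterator to the detection whose yaw is closest to the target angle.
+    auto closest_to = [&](double target) {
+        return std::min_element(
+                detections.begin(), detections.end(),
+                [&](const MPFImageLocation &a, const MPFImageLocation &b) {
+                    return std::abs(yaw_of(a) - target) < std::abs(yaw_of(b) - target);
+                });
+    };
+    closest_to(0)->detection_properties["RANK"] = "3";    // most frontal: highest quality
+    closest_to(90)->detection_properties["RANK"] = "2";   // closest to right profile
+    closest_to(-90)->detection_properties["RANK"] = "1";  // closest to left profile
+    // Detections never assigned a RANK are treated as lowest quality.
+}
+```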
diff --git a/docs/site/CPP-Streaming-Component-API/index.html b/docs/site/CPP-Streaming-Component-API/index.html
index 727868355a04..f5911c747c37 100644
--- a/docs/site/CPP-Streaming-Component-API/index.html
+++ b/docs/site/CPP-Streaming-Component-API/index.html
@@ -449,7 +449,7 @@
BeginSegment(VideoSegmentInfo)
ProcessFrame(Mat ...)
Process a single video frame for the current segment.
Must return true when the component begins generating the first track for the current segment. After it returns true, the Component Executable will ignore the return value until the component begins processing the next segment.
-If the job_properties map contained in the MPFStreamingVideoJob struct passed to the component constructor contains a QUALITY_SELECTION_THRESHOLD entry, then this function should only return true for a detection with a quality value that meets or exceeds that threshold. Refer to the Quality Selection Guide. After the Component Executable invokes EndSegment() to retrieve the segment tracks, it will discard detections that are below the threshold. If all the detections in a track are below the threshold, then the entire track will be discarded.
+If the job_properties map contained in the MPFStreamingVideoJob struct passed to the component constructor contains a CONFIDENCE_THRESHOLD entry, then this function should only return true for a detection with a confidence value that meets or exceeds that threshold. After the Component Executable invokes EndSegment() to retrieve the segment tracks, it will discard detections whose confidence is below the threshold. If all of the detections in a track fall below the threshold, then the entire track will be discarded. [NOTE: In the future, the C++ Streaming Component API may be updated to support QUALITY_SELECTION_THRESHOLD instead of CONFIDENCE_THRESHOLD.]
Note that this function may not be invoked for every frame in the current segment. For example, if FRAME_INTERVAL = 2, then this function will only be invoked for every other frame since those are the only ones that need to be processed.
Also, it may not be invoked for the first nor last frame in the segment. For example, if FRAME_INTERVAL = 3 and the segment size is 10, then it will be invoked for frames {0, 3, 6, 9} for the first segment, and frames {12, 15, 18} for the second segment.
diff --git a/docs/site/Quality-Selection-Guide/index.html b/docs/site/Quality-Selection-Guide/index.html
index 2f59e9e5abbe..97cb8a9738a9 100644
--- a/docs/site/Quality-Selection-Guide/index.html
+++ b/docs/site/Quality-Selection-Guide/index.html
@@ -259,9 +259,8 @@ Quality Selection Properties
quality of the face embedding and how useful it is for reidentification. The Workflow Manager will search the
detection_properties map in each detection and track for that key and use the corresponding value as the detection
quality. The value associated with this property must be an integer or floating point value, where higher values
-indicate higher quality.
-One exception is when this property is set to CONFIDENCE and no CONFIDENCE property exists in the
-detection_properties map. Then the confidence member of each detection and track is used instead.
+indicate higher quality. The one exception is that if this property is set to CONFIDENCE, then the confidence member
+of each detection and track is used to determine quality.
The primary way in which OpenMPF uses detection quality is to determine the track "exemplar", which is the highest
quality detection in the track. For components that do not compute a quality value, or where all detections have
identical quality, the Workflow Manager will choose the first detection in the track as the exemplar.
@@ -281,23 +280,19 @@ Quality Selection Properties
the QUALITY_SELECTION_THRESHOLD, then only that many artifacts will be extracted.
Hybrid Quality Selection
In some cases, there may be a detection property that a component would like to use as a measure of quality but it
-doesn't lend itself to simple thresholding. For example, a face detector might be able to calculate the face pose, and
-would like to select faces that are in the most frontal pose as the highest quality detections. The yaw of the face pose
-may be used to indicate this, but if it's values are between, say, -90 degrees and +90 degrees, then the highest quality
-detection would be the one with a value of yaw closest to 0. This violates the need for the quality selection property
-to take on a range of values where the highest value indicates the highest quality.
-Another use case might be where the component would like to choose detections based on a set of quality values, or
-properties. Continuing with the face pose example, the component might like to designate the detection with pose closest
-to frontal as the highest quality, but would also like to assign high quality to detections where the pose is closest to
-profile, meaning values of yaw closest to -90 or +90 degrees.
-In both of these cases, the component can create a custom detection property that is used to rank these detections as it
-sees fit. It could use a detection property called RANK, and assign values to that property to rank the detections
-from lowest to highest quality. In the example of the face detector wanting to use the yaw of the face pose, the
-detection with a value of yaw closest to 0 would be assigned a RANK property with the highest value, then the
-detections with values of yaw closest to +/-90 degrees would be assigned the second and third highest values of RANK.
-Detections without the RANK property would be treated as having the lowest possible quality value. Thus, the track
-exemplar would be the face with the frontal pose, and the ARTIFACT_EXTRACTION_POLICY_TOP_QUALITY_COUNT property could
-be set to 3, so that the frontal and two profile pose detections would be kept as track artifacts.
+doesn't lend itself to simple thresholding, perhaps because its value does not increase monotonically with quality, or
+because it is not numeric. In this case, the component can create a custom property that represents detection quality
+as a numerical value whose ordering corresponds to the ordering of the detections from lowest to highest quality.
+As a simple example, a face detector might be able to calculate the face pose and want to select for artifact
+extraction the face closest to frontal pose and the two faces closest to left and right profile pose. If the face
+detector computes the yaw with values between -90 degrees and +90 degrees, then the numerical order of those values
+would not produce the desired result. In this case, the component could create a custom detection property called
+RANK and assign values to that property that order the detections from highest to lowest quality. The face detection
+component would assign the highest value of RANK to the detection with a yaw closest to 0, and the detections with
+yaw values closest to +/-90 degrees would be assigned the second and third highest values of RANK. Detections
+without the RANK property would be treated as having the lowest possible quality value. Thus, the track exemplar
+would be the face with the frontal pose, and the ARTIFACT_EXTRACTION_POLICY_TOP_QUALITY_COUNT property could be set
+to 3 so that the frontal and two profile pose detections would be kept as track artifacts in addition to the exemplar.
diff --git a/docs/site/index.html b/docs/site/index.html
index ff86b9980311..b9d3bf028d6e 100644
--- a/docs/site/index.html
+++ b/docs/site/index.html
@@ -400,5 +400,5 @@ Overview
diff --git a/docs/site/search/search_index.json b/docs/site/search/search_index.json
index 6688c105358c..8842dc5e6fdf 100644
--- a/docs/site/search/search_index.json
+++ b/docs/site/search/search_index.json
@@ -607,7 +607,7 @@
},
{
"location": "/Quality-Selection-Guide/index.html",
- "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.\n\n\nIntroduction\n\n\nThere are a few places in OpenMPF where the quality of a detection comes into play. Here, \"detection quality\" is defined\nto be a measurement of how \"good\" the detection is that can be used to rank the detections in a track from highest to\nlowest quality. In many cases, components use \"confidence\" as an indicator of quality; however, there are some\ncomponents that do not compute a confidence value for its detections, and there are others that compute a different\nvalue that is a better measure of quality for that detection algorithm. As discussed in the next section, OpenMPF uses\ndetection quality for a variety of purposes.\n\n\nQuality Selection Properties\n\n\nQUALITY_SELECTION_PROPERTY\n is a string that defines the name of the property to use for quality selection. For\nexample, a face detection component may generate detections with a \nDESCRIPTOR_MAGNITUDE\n property that represents the\nquality of the face embedding and how useful it is for reidentification. The Workflow Manager will search the\n\ndetection_properties\n map in each detection and track for that key and use the corresponding value as the detection\nquality. The value associated with this property must be an integer or floating point value, where higher values\nindicate higher quality.\n\n\nOne exception is when this property is set to \nCONFIDENCE\n and no \nCONFIDENCE\n property exists in the\n\ndetection_properties\n map. Then the \nconfidence\n member of each detection and track is used instead.\n\n\nThe primary way in which OpenMPF uses detection quality is to determine the track \"exemplar\", which is the highest\nquality detection in the track. For components that do not compute a quality value, or where all detections have\nidentical quality, the Workflow Manager will choose the first detection in the track as the exemplar.\n\n\nQUALITY_SELECTION_THRESHOLD\n is a numerical value used for filtering out low quality detections and tracks. All\ndetections below this threshold are discarded, and if all the detections in a track are discarded, then the track itself\nis also discarded. Note that components may do this filtering themselves, while others leave it to the Workflow Manager\nto do the filtering. The thresholding process can be circumvented by setting this threshold to a value less than the\nlowest possible value. For example, if the detection quality value computed by a component has values in the range 0 to\n1, then setting the threshold property to -1 will result in all detections and all tracks being retained.\n\n\nFEED_FORWARD_TOP_QUALITY_COUNT\n can be used to select the number of detections to include in a feed-forward track. For\nexample, if set to 10, only the top 10 highest quality detections are fed forward to the downstream component for that\ntrack. If less then 10 detections meet the \nQUALITY_SELECTION_THRESHOLD\n, then only that many detections are fed\nforward. Refer to the \nFeed Forward Guide\n for more information.\n\n\nARTIFACT_EXTRACTION_POLICY_TOP_QUALITY_COUNT\n can be used to select the number of detections that will be used to\nextract artifacts. 
For example, if set to 10, the detections in a track will be sorted by their detection quality value,\nand then the artifacts for the 10 detections with the highest quality will be extracted. If less then 10 detections meet\nthe \nQUALITY_SELECTION_THRESHOLD\n, then only that many artifacts will be extracted.\n\n\nHybrid Quality Selection\n\n\nIn some cases, there may be a detection property that a component would like to use as a measure of quality but it\ndoesn't lend itself to simple thresholding. For example, a face detector might be able to calculate the face pose, and\nwould like to select faces that are in the most frontal pose as the highest quality detections. The yaw of the face pose\nmay be used to indicate this, but if it's values are between, say, -90 degrees and +90 degrees, then the highest quality\ndetection would be the one with a value of yaw closest to 0. This violates the need for the quality selection property\nto take on a range of values where the highest value indicates the highest quality.\n\n\nAnother use case might be where the component would like to choose detections based on a set of quality values, or\nproperties. Continuing with the face pose example, the component might like to designate the detection with pose closest\nto frontal as the highest quality, but would also like to assign high quality to detections where the pose is closest to\nprofile, meaning values of yaw closest to -90 or +90 degrees.\n\n\nIn both of these cases, the component can create a custom detection property that is used to rank these detections as it\nsees fit. It could use a detection property called \nRANK\n, and assign values to that property to rank the detections\nfrom lowest to highest quality. In the example of the face detector wanting to use the yaw of the face pose, the\ndetection with a value of yaw closest to 0 would be assigned a \nRANK\n property with the highest value, then the\ndetections with values of yaw closest to +/-90 degrees would be assigned the second and third highest values of \nRANK\n.\nDetections without the \nRANK\n property would be treated as having the lowest possible quality value. Thus, the track\nexemplar would be the face with the frontal pose, and the \nARTIFACT_EXTRACTION_POLICY_TOP_QUALITY_COUNT\n property could\nbe set to 3, so that the frontal and two profile pose detections would be kept as track artifacts.",
+ "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.\n\n\nIntroduction\n\n\nThere are a few places in OpenMPF where the quality of a detection comes into play. Here, \"detection quality\" is defined\nto be a measurement of how \"good\" the detection is that can be used to rank the detections in a track from highest to\nlowest quality. In many cases, components use \"confidence\" as an indicator of quality; however, there are some\ncomponents that do not compute a confidence value for its detections, and there are others that compute a different\nvalue that is a better measure of quality for that detection algorithm. As discussed in the next section, OpenMPF uses\ndetection quality for a variety of purposes.\n\n\nQuality Selection Properties\n\n\nQUALITY_SELECTION_PROPERTY\n is a string that defines the name of the property to use for quality selection. For\nexample, a face detection component may generate detections with a \nDESCRIPTOR_MAGNITUDE\n property that represents the\nquality of the face embedding and how useful it is for reidentification. The Workflow Manager will search the\n\ndetection_properties\n map in each detection and track for that key and use the corresponding value as the detection\nquality. The value associated with this property must be an integer or floating point value, where higher values\nindicate higher quality. The one exception is that if this property is set to \nCONFIDENCE\n, then the \nconfidence\n member\nof each detection and track is used to determine quality.\n\n\nThe primary way in which OpenMPF uses detection quality is to determine the track \"exemplar\", which is the highest\nquality detection in the track. For components that do not compute a quality value, or where all detections have\nidentical quality, the Workflow Manager will choose the first detection in the track as the exemplar.\n\n\nQUALITY_SELECTION_THRESHOLD\n is a numerical value used for filtering out low quality detections and tracks. All\ndetections below this threshold are discarded, and if all the detections in a track are discarded, then the track itself\nis also discarded. Note that components may do this filtering themselves, while others leave it to the Workflow Manager\nto do the filtering. The thresholding process can be circumvented by setting this threshold to a value less than the\nlowest possible value. For example, if the detection quality value computed by a component has values in the range 0 to\n1, then setting the threshold property to -1 will result in all detections and all tracks being retained.\n\n\nFEED_FORWARD_TOP_QUALITY_COUNT\n can be used to select the number of detections to include in a feed-forward track. For\nexample, if set to 10, only the top 10 highest quality detections are fed forward to the downstream component for that\ntrack. If less then 10 detections meet the \nQUALITY_SELECTION_THRESHOLD\n, then only that many detections are fed\nforward. Refer to the \nFeed Forward Guide\n for more information.\n\n\nARTIFACT_EXTRACTION_POLICY_TOP_QUALITY_COUNT\n can be used to select the number of detections that will be used to\nextract artifacts. For example, if set to 10, the detections in a track will be sorted by their detection quality value,\nand then the artifacts for the 10 detections with the highest quality will be extracted. 
If less then 10 detections meet\nthe \nQUALITY_SELECTION_THRESHOLD\n, then only that many artifacts will be extracted.\n\n\nHybrid Quality Selection\n\n\nIn some cases, there may be a detection property that a component would like to use as a measure of quality but it\ndoesn't lend itself to simple thresholding, perhaps because its value does not increase monotonically with quality, or\nbecause it is not numeric. In this case, the component can create a custom property that represents detection quality\nas a numerical value whose ordering corresponds to the ordering of the detections from lowest to highest quality.\n\n\nAs a simple example, a face detector might be able to calculate the face pose and want to select for artifact\nextraction the face closest to frontal pose and the two faces closest to left and right profile pose. If the face\ndetector computes the yaw with values between -90 degrees and +90 degrees, then the numerical order of those values\nwould not produce the desired result. In this case, the component could create a custom detection property called \nRANK\n and\nassign values to that property that order the detections from highest to lowest quality. The face detection\ncomponent would assign the highest value of \nRANK\n to the detection with a yaw closest to 0, and the detections with\nyaw values closest to +/-90 degrees would be assigned the second and third highest values of \nRANK\n. Detections\nwithout the \nRANK\n property would be treated as having the lowest possible quality value. Thus, the track exemplar\nwould be the face with the frontal pose, and the \nARTIFACT_EXTRACTION_POLICY_TOP_QUALITY_COUNT\n property could be set\nto 3 so that the frontal and two profile pose detections would be kept as track artifacts in addition to the exemplar.",
"title": "Quality Selection Guide"
},
{
@@ -617,12 +617,12 @@
},
{
"location": "/Quality-Selection-Guide/index.html#quality-selection-properties",
- "text": "QUALITY_SELECTION_PROPERTY is a string that defines the name of the property to use for quality selection. For\nexample, a face detection component may generate detections with a DESCRIPTOR_MAGNITUDE property that represents the\nquality of the face embedding and how useful it is for reidentification. The Workflow Manager will search the detection_properties map in each detection and track for that key and use the corresponding value as the detection\nquality. The value associated with this property must be an integer or floating point value, where higher values\nindicate higher quality. One exception is when this property is set to CONFIDENCE and no CONFIDENCE property exists in the detection_properties map. Then the confidence member of each detection and track is used instead. The primary way in which OpenMPF uses detection quality is to determine the track \"exemplar\", which is the highest\nquality detection in the track. For components that do not compute a quality value, or where all detections have\nidentical quality, the Workflow Manager will choose the first detection in the track as the exemplar. QUALITY_SELECTION_THRESHOLD is a numerical value used for filtering out low quality detections and tracks. All\ndetections below this threshold are discarded, and if all the detections in a track are discarded, then the track itself\nis also discarded. Note that components may do this filtering themselves, while others leave it to the Workflow Manager\nto do the filtering. The thresholding process can be circumvented by setting this threshold to a value less than the\nlowest possible value. For example, if the detection quality value computed by a component has values in the range 0 to\n1, then setting the threshold property to -1 will result in all detections and all tracks being retained. FEED_FORWARD_TOP_QUALITY_COUNT can be used to select the number of detections to include in a feed-forward track. For\nexample, if set to 10, only the top 10 highest quality detections are fed forward to the downstream component for that\ntrack. If less then 10 detections meet the QUALITY_SELECTION_THRESHOLD , then only that many detections are fed\nforward. Refer to the Feed Forward Guide for more information. ARTIFACT_EXTRACTION_POLICY_TOP_QUALITY_COUNT can be used to select the number of detections that will be used to\nextract artifacts. For example, if set to 10, the detections in a track will be sorted by their detection quality value,\nand then the artifacts for the 10 detections with the highest quality will be extracted. If less then 10 detections meet\nthe QUALITY_SELECTION_THRESHOLD , then only that many artifacts will be extracted.",
+ "text": "QUALITY_SELECTION_PROPERTY is a string that defines the name of the property to use for quality selection. For\nexample, a face detection component may generate detections with a DESCRIPTOR_MAGNITUDE property that represents the\nquality of the face embedding and how useful it is for reidentification. The Workflow Manager will search the detection_properties map in each detection and track for that key and use the corresponding value as the detection\nquality. The value associated with this property must be an integer or floating point value, where higher values\nindicate higher quality. The one exception is that if this property is set to CONFIDENCE , then the confidence member\nof each detection and track is used to determine quality. The primary way in which OpenMPF uses detection quality is to determine the track \"exemplar\", which is the highest\nquality detection in the track. For components that do not compute a quality value, or where all detections have\nidentical quality, the Workflow Manager will choose the first detection in the track as the exemplar. QUALITY_SELECTION_THRESHOLD is a numerical value used for filtering out low quality detections and tracks. All\ndetections below this threshold are discarded, and if all the detections in a track are discarded, then the track itself\nis also discarded. Note that components may do this filtering themselves, while others leave it to the Workflow Manager\nto do the filtering. The thresholding process can be circumvented by setting this threshold to a value less than the\nlowest possible value. For example, if the detection quality value computed by a component has values in the range 0 to\n1, then setting the threshold property to -1 will result in all detections and all tracks being retained. FEED_FORWARD_TOP_QUALITY_COUNT can be used to select the number of detections to include in a feed-forward track. For\nexample, if set to 10, only the top 10 highest quality detections are fed forward to the downstream component for that\ntrack. If less then 10 detections meet the QUALITY_SELECTION_THRESHOLD , then only that many detections are fed\nforward. Refer to the Feed Forward Guide for more information. ARTIFACT_EXTRACTION_POLICY_TOP_QUALITY_COUNT can be used to select the number of detections that will be used to\nextract artifacts. For example, if set to 10, the detections in a track will be sorted by their detection quality value,\nand then the artifacts for the 10 detections with the highest quality will be extracted. If less then 10 detections meet\nthe QUALITY_SELECTION_THRESHOLD , then only that many artifacts will be extracted.",
"title": "Quality Selection Properties"
},
{
"location": "/Quality-Selection-Guide/index.html#hybrid-quality-selection",
- "text": "In some cases, there may be a detection property that a component would like to use as a measure of quality but it\ndoesn't lend itself to simple thresholding. For example, a face detector might be able to calculate the face pose, and\nwould like to select faces that are in the most frontal pose as the highest quality detections. The yaw of the face pose\nmay be used to indicate this, but if it's values are between, say, -90 degrees and +90 degrees, then the highest quality\ndetection would be the one with a value of yaw closest to 0. This violates the need for the quality selection property\nto take on a range of values where the highest value indicates the highest quality. Another use case might be where the component would like to choose detections based on a set of quality values, or\nproperties. Continuing with the face pose example, the component might like to designate the detection with pose closest\nto frontal as the highest quality, but would also like to assign high quality to detections where the pose is closest to\nprofile, meaning values of yaw closest to -90 or +90 degrees. In both of these cases, the component can create a custom detection property that is used to rank these detections as it\nsees fit. It could use a detection property called RANK , and assign values to that property to rank the detections\nfrom lowest to highest quality. In the example of the face detector wanting to use the yaw of the face pose, the\ndetection with a value of yaw closest to 0 would be assigned a RANK property with the highest value, then the\ndetections with values of yaw closest to +/-90 degrees would be assigned the second and third highest values of RANK .\nDetections without the RANK property would be treated as having the lowest possible quality value. Thus, the track\nexemplar would be the face with the frontal pose, and the ARTIFACT_EXTRACTION_POLICY_TOP_QUALITY_COUNT property could\nbe set to 3, so that the frontal and two profile pose detections would be kept as track artifacts.",
+    "text": "In some cases, there may be a detection property that a component would like to use as a measure of quality but it\ndoesn't lend itself to simple thresholding, perhaps because its value does not increase monotonically with quality, or\nbecause it is not numeric. In this case, the component can create a custom property that represents detection quality\nas a numerical value whose ordering corresponds to the ordering of the detections from lowest to highest quality. As a simple example, a face detector might be able to calculate the face pose and want to select for artifact\nextraction the face closest to frontal pose and the two faces closest to left and right profile pose. If the face\ndetector computes the yaw with values between -90 degrees and +90 degrees, then the numerical order of those values\nwould not produce the desired result. In this case, the component could create a custom detection property called RANK and\nassign values to that property that order the detections from highest to lowest quality. The face detection component would\nassign the highest value of RANK to the detection with a yaw closest to 0, and the detections with yaw values\nclosest to +/-90 degrees would be assigned the second and third highest values of RANK . Detections without the RANK \nproperty would be treated as having the lowest possible quality value. Thus, the track exemplar would be the face with the\nfrontal pose, and the ARTIFACT_EXTRACTION_POLICY_TOP_QUALITY_COUNT property could be set to 3 so that the frontal and\ntwo profile pose detections would be kept as track artifacts in addition to the exemplar.",
"title": "Hybrid Quality Selection"
},
{
@@ -1517,7 +1517,7 @@
},
{
"location": "/CPP-Streaming-Component-API/index.html",
- "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nWARNING:\n The C++ Streaming API is not complete, and there are no future development plans. Use at your own risk. The only way to make use of the functionality is through the REST API. It requires the Node Manager and does not work in a Docker deployment.\n\n\n\nAPI Overview\n\n\nIn OpenMPF, a \ncomponent\n is a plugin that receives jobs (containing media), processes that media, and returns results.\n\n\nThe OpenMPF Streaming Component API currently supports the development of \ndetection components\n, which are used detect objects in live RTSP or HTTP video streams.\n\n\nUsing this API, detection components can be built to provide:\n\n\n\n\nDetection (Localizing an object)\n\n\nTracking (Localizing an object across multiple frames)\n\n\nClassification (Detecting the type of object and optionally localizing that object)\n\n\n\n\nEach frame of the video is processed as it is read from the stream. After processing enough frames to form a segment (for example, 100 frames), the component starts processing the next segment. Like with batch processing, each segment read from the stream is processed independently of the rest. No detection or track information is carried over between segments. Tracks are not merged across segments.\n\n\nHow Components Integrate into OpenMPF\n\n\nComponents are integrated into OpenMPF through the use of OpenMPF's \nComponent Executable\n. Developers create component libraries that encapsulate the component detection logic. Each instance of the Component Executable loads one of these libraries and uses it to service job requests sent by the OpenMPF Workflow Manager (WFM).\n\n\nThe Component Executable:\n\n\n\n\nReceives and parses job requests from the WFM\n\n\nInvokes functions on the component library to obtain detection results\n\n\nPopulates and sends the respective responses to the WFM\n\n\n\n\nThe basic pseudocode for the Component Executable is as follows:\n\n\nwhile (has_next_frame) {\n if (is_new_segment) {\n component->BeginSegment(video_segment_info)\n }\n activity_found = component->ProcessFrame(frame, frame_number) // Component logic does the work here\n if (activity_found && !already_sent_new_activity_alert_for_this_segment) {\n SendActivityAlert(frame_number)\n }\n if (is_end_of_segment) {\n streaming_video_tracks = component->EndSegment()\n SendSummaryReport(frame_number, streaming_video_tracks)\n }\n}\n\n\n\nEach instance of a Component Executable runs as a separate process. Generally, each process will execute a different detection algorithm that corresponds to a single stage in a detection pipeline. Each instance is started by the Node Manager as needed in order to execute a streaming video job. The Node Manager will monitor the process status and eventually stop it.\n\n\nThe Component Executable invokes functions on the Component Logic to get detection objects, and subsequently generates new track alerts and segment summary reports based on the output. 
These alerts and reports are sent to the WFM.\n\n\nA component developer implements a detection component by extending \nMPFStreamingDetectionComponent\n.\n\n\nGetting Started\n\n\nThe quickest way to get started with the C++ Streaming Component API is to first read the \nOpenMPF Component API Overview\n and then \nreview the source\n of an example OpenMPF C++ detection component that supports stream processing.\n\n\nDetection components are implemented by:\n\n\n\n\nExtending \nMPFStreamingDetectionComponent\n.\n\n\nBuilding the component into a shared object library. (See \nHelloWorldComponent CMakeLists.txt\n).\n\n\nPackaging the component into an OpenMPF-compliant .tar.gz file. (See \nComponent Packaging\n).\n\n\nRegistering the component with OpenMPF. (See \nComponent Registration\n).\n\n\n\n\nAPI Specification\n\n\nThe figure below presents a high-level component diagram of the C++ Streaming Component API:\n\n\n\n\nThe API consists of a \nDetection Component Interface\n and related input and output structures.\n\n\nDetection Component Interface\n\n\n\n\nMPFStreamingDetectionComponent\n - Abstract class that should be extended by all OpenMPF C++ detection components that perform stream processing.\n\n\n\n\nInputs\n\n\nThe following data structures contain details about a specific job, and a video segment (work unit) associated with that job:\n\n\n\n\nMPFStreamingVideoJob\n\n\nVideoSegmentInfo\n\n\n\n\nOutputs\n\n\nThe following data structures define detection results:\n\n\n\n\nMPFImageLocation\n\n\nMPFVideoTrack\n\n\n\n\nComponent Factory Functions\n\n\nEvery detection component must include the following macro in its implementation:\n\n\nEXPORT_MPF_STREAMING_COMPONENT(TYPENAME);\n\n\n\nThis creator macro takes the \nTYPENAME\n of the detection component (for example, \u201cStreamingHelloWorld\u201d). This macro creates the factory function that the OpenMPF Component Executable will call in order to instantiate the detection component. The creation function is called once, to obtain an instance of the component, after the component library has been loaded into memory.\n\n\nThis macro also creates the factory function that the Component Executable will use to delete that instance of the detection component.\n\n\nThis macro must be used outside of a class declaration, preferably at the bottom or top of a component source (.cpp) file.\n\n\nExample:\n\n\n// Note: Do not put the TypeName/Class Name in quotes\nEXPORT_MPF_STREAMING_COMPONENT(StreamingHelloWorld);\n\n\n\nDetection Component Interface\n\n\nThe \nMPFStreamingDetectionComponent\n class is the abstract class utilized by all OpenMPF C++ detection components that perform stream processing. This class provides functions for developers to integrate detection logic into OpenMPF.\n\n\nSee the latest source here.\n\n\nConstructor\n\n\nSuperclass constructor that must be invoked by the constructor of the component subclass.\n\n\n\n\nFunction Definition:\n\n\n\n\nMPFStreamingDetectionComponent(const MPFStreamingVideoJob &job)\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njob\n\n\nconst MPFStreamingVideoJob &\n\n\nStructure containing details about the work to be performed. 
See \nMPFStreamingVideoJob\n\n\n\n\n\n\n\n\n\n\n\n\nReturns: none\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nSampleComponent::SampleComponent(const MPFStreamingVideoJob &job)\n : MPFStreamingDetectionComponent(job)\n , hw_logger_(log4cxx::Logger::getLogger(\"SampleComponent\"))\n , job_name_(job.job_name) {\n\n LOG4CXX_INFO(hw_logger_, \"[\" << job_name_ << \"] Initialized SampleComponent component.\")\n}\n\n\n\nBeginSegment(VideoSegmentInfo)\n\n\nIndicate the beginning of a new video segment. The next call to \nProcessFrame()\n will be the first frame of the new segment. \nProcessFrame()\n will never be called before this function.\n\n\n\n\nFunction Definition:\n\n\n\n\nvoid BeginSegment(const VideoSegmentInfo &segment_info)\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nsegment_info\n\n\nconst VideoSegmentInfo &\n\n\nStructure containing details about next video segment to process. See \nVideoSegmentInfo\n\n\n\n\n\n\n\n\n\n\n\n\nReturns: none\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nvoid SampleComponent::BeginSegment(const VideoSegmentInfo &segment_info) {\n // Prepare for next segment\n}\n\n\n\nProcessFrame(Mat ...)\n\n\nProcess a single video frame for the current segment.\n\n\nMust return true when the component begins generating the first track for the current segment. After it returns true, the Component Executable will ignore the return value until the component begins processing the next segment.\n\n\nIf the \njob_properties\n map contained in the \nMPFStreamingVideoJob\n struct passed to the component constructor contains a \nQUALITY_SELECTION_THRESHOLD\n entry, then this function should only return true for a detection with a quality value that meets or exceeds that threshold. Refer to the \nQuality Selection Guide\n. After the Component Executable invokes \nEndSegment()\n to retrieve the segment tracks, it will discard detections that are below the threshold. If all the detections in a track are below the threshold, then the entire track will be discarded.\n\n\nNote that this function may not be invoked for every frame in the current segment. For example, if \nFRAME_INTERVAL = 2\n, then this function will only be invoked for every other frame since those are the only ones that need to be processed.\n\n\nAlso, it may not be invoked for the first nor last frame in the segment. For example, if \nFRAME_INTERVAL = 3\n and the segment size is 10, then it will be invoked for frames {0, 3, 6, 9} for the first segment, and frames {12, 15, 18} for the second segment.\n\n\n\n\nFunction Definition:\n\n\n\n\nbool ProcessFrame(const cv::Mat &frame, int frame_number)\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nframe\n\n\nconst cv::Mat &\n\n\nOpenCV class containing frame data. See \ncv::Mat\n\n\n\n\n\n\nframe_number\n\n\nint\n\n\nA unique frame number (0-based index). Guaranteed to be greater than the frame number passed to the last invocation of this function.\n\n\n\n\n\n\n\n\n\n\n\n\nReturns: (\nbool\n) True when the component begins generating the first track for the current segment; false otherwise.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nbool SampleComponent::ProcessFrame(const cv::Mat &frame, int frame_number) {\n // Look for detections. Generate tracks and store them until the end of the segment.\n if (started_first_track_in_current_segment) {\n return true;\n } else {\n return false;\n }\n}\n\n\n\nEndSegment()\n\n\nIndicate the end of the current video segment. 
This will always be called after \nBeginSegment()\n. Generally, \nProcessFrame()\n will be called one or more times before this function, depending on the number of frames in the segment and the number of frames actually read from the stream.\n\n\nNote that the next time \nBeginSegment()\n is called, this component should start generating new tracks. Each time \nEndSegment()\n is called, it should return only the most recent track data for that segment. Tracks should not be carried over between segments. Do not append new detections to a preexisting track from the previous segment and return that cumulative track when this function is called.\n\n\n\n\nFunction Definition:\n\n\n\n\nvector EndSegment()\n\n\n\n\n\n\n\nParameters: none\n\n\n\n\n\n\nReturns: (\nvector\n) The \nMPFVideoTrack\n data for each detected object.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nvector SampleComponent::EndSegment() {\n // Perform any necessary cleanup before processing the next segment.\n // Return the collection of tracks generated for this segment only.\n}\n\n\n\nDetection Job Data Structures\n\n\nThe following data structures contain details about a specific job, and a video segment (work unit) associated with that job:\n\n\n\n\nMPFStreamingVideoJob\n\n\nVideoSegmentInfo\n\n\n\n\nThe following data structures define detection results:\n\n\n\n\nMPFImageLocation\n\n\nMPFVideoTrack\n\n\n\n\nMPFStreamingVideoJob\n\n\nStructure containing information about a job to be performed on a video stream.\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFStreamingVideoJob(\n const string &job_name,\n const string &run_directory,\n const Properties &job_properties,\n const Properties &media_properties)\n}\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njob_name\n\n\nconst string &\n\n\nA specific name given to the job by the OpenMPF framework. This value may be used, for example, for logging and debugging purposes.\n\n\n\n\n\n\nrun_directory \n\n\nconst string &\n\n\nContains the full path of the parent folder above where the component is installed. This parent folder is also known as the plugin folder.\n\n\n\n\n\n\njob_properties \n\n\nconst Properties &\n\n\nContains a map of \n\n which represents the property name and the property value. The key corresponds to the property name specified in the component descriptor file described in the \nComponent Descriptor Reference\n. Values are determined when creating a pipeline or when submitting a job. \n Note: The job_properties map may not contain the full set of job properties. For properties not contained in the map, the component must use a default value.\n\n\n\n\n\n\nmedia_properties \n\n\nconst Properties &\n\n\nContains a map of \n\n of metadata about the media associated with the job. The entries in the map vary depending on the type of media. Refer to the type-specific job structures below.\n\n\n\n\n\n\n\n\nVideoSegmentInfo\n\n\nStructure containing information about a segment of a video stream to be processed. 
A segment is a subset of contiguous video frames.\n\n\n\n\nConstructor(s):\n\n\n\n\nVideoSegmentInfo(\n int segment_number,\n int start_frame,\n int end_frame,\n int frame_width,\n int frame_height\n}\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nsegment_number\n\n\nint\n\n\nA unique segment number (0-based index).\n\n\n\n\n\n\nstart_frame\n\n\nint\n\n\nThe frame number (0-based index) corresponding to the first frame in this segment.\n\n\n\n\n\n\nend_frame\n\n\nint\n\n\nThe frame number (0-based index) corresponding to the last frame in this segment.\n\n\n\n\n\n\nframe_width\n\n\nint\n\n\nThe height of each frame in this segment.\n\n\n\n\n\n\nframe_height\n\n\nint\n\n\nThe width of each frame in this segment.\n\n\n\n\n\n\n\n\nDetection Job Result Classes\n\n\nMPFImageLocation\n\n\nStructure used to store the location of detected objects in a single video frame (image).\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFImageLocation()\nMPFImageLocation(\n int x_left_upper,\n int y_left_upper,\n int width,\n int height,\n float confidence = -1,\n const Properties &detection_properties = {})\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nx_left_upper\n\n\nint\n\n\nUpper left X coordinate of the detected object.\n\n\n\n\n\n\ny_left_upper\n\n\nint\n\n\nUpper left Y coordinate of the detected object.\n\n\n\n\n\n\nwidth\n\n\nint\n\n\nThe width of the detected object.\n\n\n\n\n\n\nheight\n\n\nint\n\n\nThe height of the detected object.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetection_properties\n\n\nProperties &\n\n\nOptional additional information about the detected object. There is no restriction on the keys or the number of entries that can be added to the detection_properties map. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\n\n\nExample:\n\n\n\n\nA component that performs generic object classification can add an entry to \ndetection_properties\n where the key is \nCLASSIFICATION\n and the value is the type of object detected.\n\n\nMPFImageLocation detection;\ndetection.x_left_upper = 0;\ndetection.y_left_upper = 0;\ndetection.width = 100;\ndetection.height = 100;\ndetection.confidence = 1.0;\ndetection.detection_properties[\"CLASSIFICATION\"] = \"backpack\";\n\n\n\nMPFVideoTrack\n\n\nStructure used to store the location of detected objects in a video file.\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFVideoTrack()\nMPFVideoTrack(\n int start_frame,\n int stop_frame,\n float confidence = -1,\n map frame_locations,\n const Properties &detection_properties = {})\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nstart_frame\n\n\nint\n\n\nThe first frame number (0-based index) that contained the detected object.\n\n\n\n\n\n\nstop_frame\n\n\nint\n\n\nThe last frame number (0-based index) that contained the detected object.\n\n\n\n\n\n\nframe_locations\n\n\nmap\n\n\nA map of individual detections. The key for each map entry is the frame number where the detection was generated, and the value is a \nMPFImageLocation\n calculated as if that frame was a still image. 
Note that a key-value pair is \nnot\n required for every frame between the track start frame and track stop frame.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetection_properties\n\n\nProperties &\n\n\nOptional additional information about the detected object. There is no restriction on the keys or the number of entries that can be added to the detection_properties map. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nNOTE:\n Currently, \nMPFVideoTrack.detection_properties\n do not show up in the JSON output object or are used by the WFM in any way.\n\n\n\n\nA component that detects text can add an entry to \ndetection_properties\n where the key is \nTRANSCRIPT\n and the value is a string representing the text found in the video segment.\n\n\nMPFVideoTrack track;\ntrack.start_frame = 0;\ntrack.stop_frame = 5;\ntrack.confidence = 1.0;\ntrack.frame_locations = frame_locations;\ntrack.detection_properties[\"TRANSCRIPT\"] = \"RE5ULTS FR0M A TEXT DETECTER\";\n\n\n\nC++ Component Build Environment\n\n\nA C++ component library must be built for the same C++ compiler and Linux\nversion that is used by the OpenMPF Component Executable. This is to ensure\ncompatibility between the executable and the library functions at the\nApplication Binary Interface (ABI) level. At this writing, the OpenMPF runs on\nUbuntu 20.04 (kernel version 5.13.0-30), and the OpenMPF C++ Component\nExecutable is built with g++ (GCC) 9.3.0-17.\n\n\nComponents should be supplied as a tar file, which includes not only the component library, but any other libraries or files needed for execution. This includes all other non-standard libraries used by the component (aside from the standard Linux and C++ libraries), and any configuration or data files.\n\n\nComponent Development Best Practices\n\n\nThrow Exceptions\n\n\nUnlike the \nC++ Batch Component API\n, none of the the C++ Streaming Component API functions return an \nMPFDetectionError\n. Instead, streaming components should throw an exception when a non-recoverable error occurs. The exception should be an instantiation or subclass of \nstd::exception\n and provide a descriptive error message that can be retrieved using \nwhat()\n. For example:\n\n\nbool SampleComponent::ProcessFrame(const cv::Mat &frame, int frame_number) {\n // Something bad happened\n throw std::exception(\"Error: Cannot do X with value Y.\");\n}\n\n\n\nThe exception will be handled by the Component Executable. It will immediately invoke \nEndSegment()\n to retrieve the current tracks. Then the component process and streaming job will be terminated.\n\n\nSingle-threaded Operation\n\n\nImplementations are encouraged to operate in single-threaded mode. OpenMPF will parallelize components through multiple instantiations of the component, each running as a separate service.\n\n\nStateless Behavior\n\n\nOpenMPF components should be stateless in operation and give identical output for a provided input (i.e. when processing a segment with the same \nVideoSegmentInfo\n).\n\n\nGPU Support\n\n\nFor components that want to take advantage of NVIDA GPU processors, please read the \nGPU Support Guide\n. 
Also ensure that your build environment has the NVIDIA CUDA Toolkit installed, as described in the \nBuild Environment Setup Guide\n.\n\n\nComponent Structure\n\n\nIt is recommended that C++ components are organized according to the following directory structure:\n\n\ncomponentName\n\u251c\u2500\u2500 config - Component-specific configuration files\n\u251c\u2500\u2500 descriptor\n\u2502 \u2514\u2500\u2500 descriptor.json\n\u2514\u2500\u2500 lib\n \u2514\u2500\u2500libComponentName.so - Compiled component library\n\n\n\nOnce built, components should be packaged into a .tar.gz containing the contents of the directory shown above.\n\n\nLogging\n\n\nIt is recommended to use \nApache log4cxx\n for\nOpenMPF Component logging. Components using log4cxx should not configure logging themselves.\nThe Component Executor will configure log4cxx globally. Components should call\n\nlog4cxx::Logger::getLogger(\"\")\n to a get a reference to the logger. If you\nare using a different logging framework, you should make sure its behavior is similar to how\nthe Component Executor configures log4cxx as described below.\n\n\nThe following log LEVELs are supported: \nFATAL, ERROR, WARN, INFO, DEBUG, TRACE\n.\nThe \nLOG_LEVEL\n environment variable can be set to one of the log levels to change the logging\nverbosity. When \nLOG_LEVEL\n is absent, \nINFO\n is used.\n\n\nNote that multiple instances of the same component can log to the same file.\nAlso, logging content can span multiple lines.\n\n\nThe logger will write to both standard error and\n\n${MPF_LOG_PATH}/${THIS_MPF_NODE}/log/.log\n.\n\n\nEach log statement will take the form:\n\nDATE TIME LEVEL CONTENT\n\n\nFor example:\n\n2016-02-09 13:42:42,341 INFO - Starting sample-component: [ OK ]",
+ "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nWARNING:\n The C++ Streaming API is not complete, and there are no future development plans. Use at your own risk. The only way to make use of the functionality is through the REST API. It requires the Node Manager and does not work in a Docker deployment.\n\n\n\nAPI Overview\n\n\nIn OpenMPF, a \ncomponent\n is a plugin that receives jobs (containing media), processes that media, and returns results.\n\n\nThe OpenMPF Streaming Component API currently supports the development of \ndetection components\n, which are used detect objects in live RTSP or HTTP video streams.\n\n\nUsing this API, detection components can be built to provide:\n\n\n\n\nDetection (Localizing an object)\n\n\nTracking (Localizing an object across multiple frames)\n\n\nClassification (Detecting the type of object and optionally localizing that object)\n\n\n\n\nEach frame of the video is processed as it is read from the stream. After processing enough frames to form a segment (for example, 100 frames), the component starts processing the next segment. Like with batch processing, each segment read from the stream is processed independently of the rest. No detection or track information is carried over between segments. Tracks are not merged across segments.\n\n\nHow Components Integrate into OpenMPF\n\n\nComponents are integrated into OpenMPF through the use of OpenMPF's \nComponent Executable\n. Developers create component libraries that encapsulate the component detection logic. Each instance of the Component Executable loads one of these libraries and uses it to service job requests sent by the OpenMPF Workflow Manager (WFM).\n\n\nThe Component Executable:\n\n\n\n\nReceives and parses job requests from the WFM\n\n\nInvokes functions on the component library to obtain detection results\n\n\nPopulates and sends the respective responses to the WFM\n\n\n\n\nThe basic pseudocode for the Component Executable is as follows:\n\n\nwhile (has_next_frame) {\n if (is_new_segment) {\n component->BeginSegment(video_segment_info)\n }\n activity_found = component->ProcessFrame(frame, frame_number) // Component logic does the work here\n if (activity_found && !already_sent_new_activity_alert_for_this_segment) {\n SendActivityAlert(frame_number)\n }\n if (is_end_of_segment) {\n streaming_video_tracks = component->EndSegment()\n SendSummaryReport(frame_number, streaming_video_tracks)\n }\n}\n\n\n\nEach instance of a Component Executable runs as a separate process. Generally, each process will execute a different detection algorithm that corresponds to a single stage in a detection pipeline. Each instance is started by the Node Manager as needed in order to execute a streaming video job. The Node Manager will monitor the process status and eventually stop it.\n\n\nThe Component Executable invokes functions on the Component Logic to get detection objects, and subsequently generates new track alerts and segment summary reports based on the output. 
These alerts and reports are sent to the WFM.\n\n\nA component developer implements a detection component by extending \nMPFStreamingDetectionComponent\n.\n\n\nGetting Started\n\n\nThe quickest way to get started with the C++ Streaming Component API is to first read the \nOpenMPF Component API Overview\n and then \nreview the source\n of an example OpenMPF C++ detection component that supports stream processing.\n\n\nDetection components are implemented by:\n\n\n\n\nExtending \nMPFStreamingDetectionComponent\n.\n\n\nBuilding the component into a shared object library. (See \nHelloWorldComponent CMakeLists.txt\n).\n\n\nPackaging the component into an OpenMPF-compliant .tar.gz file. (See \nComponent Packaging\n).\n\n\nRegistering the component with OpenMPF. (See \nComponent Registration\n).\n\n\n\n\nAPI Specification\n\n\nThe figure below presents a high-level component diagram of the C++ Streaming Component API:\n\n\n\n\nThe API consists of a \nDetection Component Interface\n and related input and output structures.\n\n\nDetection Component Interface\n\n\n\n\nMPFStreamingDetectionComponent\n - Abstract class that should be extended by all OpenMPF C++ detection components that perform stream processing.\n\n\n\n\nInputs\n\n\nThe following data structures contain details about a specific job, and a video segment (work unit) associated with that job:\n\n\n\n\nMPFStreamingVideoJob\n\n\nVideoSegmentInfo\n\n\n\n\nOutputs\n\n\nThe following data structures define detection results:\n\n\n\n\nMPFImageLocation\n\n\nMPFVideoTrack\n\n\n\n\nComponent Factory Functions\n\n\nEvery detection component must include the following macro in its implementation:\n\n\nEXPORT_MPF_STREAMING_COMPONENT(TYPENAME);\n\n\n\nThis creator macro takes the \nTYPENAME\n of the detection component (for example, \u201cStreamingHelloWorld\u201d). This macro creates the factory function that the OpenMPF Component Executable will call in order to instantiate the detection component. The creation function is called once, to obtain an instance of the component, after the component library has been loaded into memory.\n\n\nThis macro also creates the factory function that the Component Executable will use to delete that instance of the detection component.\n\n\nThis macro must be used outside of a class declaration, preferably at the bottom or top of a component source (.cpp) file.\n\n\nExample:\n\n\n// Note: Do not put the TypeName/Class Name in quotes\nEXPORT_MPF_STREAMING_COMPONENT(StreamingHelloWorld);\n\n\n\nDetection Component Interface\n\n\nThe \nMPFStreamingDetectionComponent\n class is the abstract class utilized by all OpenMPF C++ detection components that perform stream processing. This class provides functions for developers to integrate detection logic into OpenMPF.\n\n\nSee the latest source here.\n\n\nConstructor\n\n\nSuperclass constructor that must be invoked by the constructor of the component subclass.\n\n\n\n\nFunction Definition:\n\n\n\n\nMPFStreamingDetectionComponent(const MPFStreamingVideoJob &job)\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njob\n\n\nconst MPFStreamingVideoJob &\n\n\nStructure containing details about the work to be performed. 
See \nMPFStreamingVideoJob\n\n\n\n\n\n\n\n\n\n\n\n\nReturns: none\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nSampleComponent::SampleComponent(const MPFStreamingVideoJob &job)\n : MPFStreamingDetectionComponent(job)\n , hw_logger_(log4cxx::Logger::getLogger(\"SampleComponent\"))\n , job_name_(job.job_name) {\n\n LOG4CXX_INFO(hw_logger_, \"[\" << job_name_ << \"] Initialized SampleComponent component.\")\n}\n\n\n\nBeginSegment(VideoSegmentInfo)\n\n\nIndicate the beginning of a new video segment. The next call to \nProcessFrame()\n will be for the first frame of the new segment. \nProcessFrame()\n will never be called before this function.\n\n\n\n\nFunction Definition:\n\n\n\n\nvoid BeginSegment(const VideoSegmentInfo &segment_info)\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nsegment_info\n\n\nconst VideoSegmentInfo &\n\n\nStructure containing details about the next video segment to process. See \nVideoSegmentInfo\n\n\n\n\n\n\n\n\n\n\n\n\nReturns: none\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nvoid SampleComponent::BeginSegment(const VideoSegmentInfo &segment_info) {\n // Prepare for next segment\n}\n\n\n\nProcessFrame(Mat ...)\n\n\nProcess a single video frame for the current segment.\n\n\nMust return true when the component begins generating the first track for the current segment. After it returns true, the Component Executable will ignore the return value until the component begins processing the next segment.\n\n\nIf the \njob_properties\n map contained in the \nMPFStreamingVideoJob\n struct passed to the component constructor contains a \nCONFIDENCE_THRESHOLD\n entry, then this function should only return true for a detection with a quality value that meets or exceeds that threshold. After the Component Executable invokes \nEndSegment()\n to retrieve the segment tracks, it will discard detections that are below the threshold. If all the detections in a track are below the threshold, then the entire track will be discarded. [NOTE: In the future the C++ Streaming Component API may be updated to support \nQUALITY_SELECTION_THRESHOLD\n instead of \nCONFIDENCE_THRESHOLD\n.]\n\n\nNote that this function may not be invoked for every frame in the current segment. For example, if \nFRAME_INTERVAL = 2\n, then this function will only be invoked for every other frame since those are the only ones that need to be processed.\n\n\nAlso, it may not be invoked for the first or last frame in the segment. For example, if \nFRAME_INTERVAL = 3\n and the segment size is 10, then it will be invoked for frames {0, 3, 6, 9} for the first segment, and frames {12, 15, 18} for the second segment.\n\n\n\n\nFunction Definition:\n\n\n\n\nbool ProcessFrame(const cv::Mat &frame, int frame_number)\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nframe\n\n\nconst cv::Mat &\n\n\nOpenCV class containing frame data. See \ncv::Mat\n\n\n\n\n\n\nframe_number\n\n\nint\n\n\nA unique frame number (0-based index). Guaranteed to be greater than the frame number passed to the last invocation of this function.\n\n\n\n\n\n\n\n\n\n\n\n\nReturns: (\nbool\n) True when the component begins generating the first track for the current segment; false otherwise.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nbool SampleComponent::ProcessFrame(const cv::Mat &frame, int frame_number) {\n // Look for detections. 
Generate tracks and store them until the end of the segment.\n if (started_first_track_in_current_segment) {\n return true;\n } else {\n return false;\n }\n}\n\n\n\nEndSegment()\n\n\nIndicate the end of the current video segment. This will always be called after \nBeginSegment()\n. Generally, \nProcessFrame()\n will be called one or more times before this function, depending on the number of frames in the segment and the number of frames actually read from the stream.\n\n\nNote that the next time \nBeginSegment()\n is called, this component should start generating new tracks. Each time \nEndSegment()\n is called, it should return only the most recent track data for that segment. Tracks should not be carried over between segments. Do not append new detections to a preexisting track from the previous segment and return that cumulative track when this function is called.\n\n\n\n\nFunction Definition:\n\n\n\n\nvector<MPFVideoTrack> EndSegment()\n\n\n\n\n\n\n\nParameters: none\n\n\n\n\n\n\nReturns: (\nvector<MPFVideoTrack>\n) The \nMPFVideoTrack\n data for each detected object.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nvector<MPFVideoTrack> SampleComponent::EndSegment() {\n // Perform any necessary cleanup before processing the next segment.\n // Return the collection of tracks generated for this segment only.\n return tracks_;\n}\n\n\n\nDetection Job Data Structures\n\n\nThe following data structures contain details about a specific job, and a video segment (work unit) associated with that job:\n\n\n\n\nMPFStreamingVideoJob\n\n\nVideoSegmentInfo\n\n\n\n\nThe following data structures define detection results:\n\n\n\n\nMPFImageLocation\n\n\nMPFVideoTrack\n\n\n\n\nMPFStreamingVideoJob\n\n\nStructure containing information about a job to be performed on a video stream.\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFStreamingVideoJob(\n const string &job_name,\n const string &run_directory,\n const Properties &job_properties,\n const Properties &media_properties)\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njob_name\n\n\nconst string &\n\n\nA specific name given to the job by the OpenMPF framework. This value may be used, for example, for logging and debugging purposes.\n\n\n\n\n\n\nrun_directory \n\n\nconst string &\n\n\nContains the full path of the parent folder above where the component is installed. This parent folder is also known as the plugin folder.\n\n\n\n\n\n\njob_properties \n\n\nconst Properties &\n\n\nContains a map of \n<string, string>\n which represents the property name and the property value. The key corresponds to the property name specified in the component descriptor file described in the \nComponent Descriptor Reference\n. Values are determined when creating a pipeline or when submitting a job. \n Note: The job_properties map may not contain the full set of job properties. For properties not contained in the map, the component must use a default value.\n\n\n\n\n\n\nmedia_properties \n\n\nconst Properties &\n\n\nContains a map of \n<string, string>\n of metadata about the media associated with the job. The entries in the map vary depending on the type of media. Refer to the type-specific job structures below.\n\n\n\n\n\n\n\n\nVideoSegmentInfo\n\n\nStructure containing information about a segment of a video stream to be processed. 
A segment is a subset of contiguous video frames.\n\n\n\n\nConstructor(s):\n\n\n\n\nVideoSegmentInfo(\n int segment_number,\n int start_frame,\n int end_frame,\n int frame_width,\n int frame_height)\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nsegment_number\n\n\nint\n\n\nA unique segment number (0-based index).\n\n\n\n\n\n\nstart_frame\n\n\nint\n\n\nThe frame number (0-based index) corresponding to the first frame in this segment.\n\n\n\n\n\n\nend_frame\n\n\nint\n\n\nThe frame number (0-based index) corresponding to the last frame in this segment.\n\n\n\n\n\n\nframe_width\n\n\nint\n\n\nThe width of each frame in this segment.\n\n\n\n\n\n\nframe_height\n\n\nint\n\n\nThe height of each frame in this segment.\n\n\n\n\n\n\n\n\nDetection Job Result Classes\n\n\nMPFImageLocation\n\n\nStructure used to store the location of detected objects in a single video frame (image).\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFImageLocation()\nMPFImageLocation(\n int x_left_upper,\n int y_left_upper,\n int width,\n int height,\n float confidence = -1,\n const Properties &detection_properties = {})\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nx_left_upper\n\n\nint\n\n\nUpper left X coordinate of the detected object.\n\n\n\n\n\n\ny_left_upper\n\n\nint\n\n\nUpper left Y coordinate of the detected object.\n\n\n\n\n\n\nwidth\n\n\nint\n\n\nThe width of the detected object.\n\n\n\n\n\n\nheight\n\n\nint\n\n\nThe height of the detected object.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetection_properties\n\n\nProperties &\n\n\nOptional additional information about the detected object. There is no restriction on the keys or the number of entries that can be added to the detection_properties map. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\n\n\nExample:\n\n\n\n\nA component that performs generic object classification can add an entry to \ndetection_properties\n where the key is \nCLASSIFICATION\n and the value is the type of object detected.\n\n\nMPFImageLocation detection;\ndetection.x_left_upper = 0;\ndetection.y_left_upper = 0;\ndetection.width = 100;\ndetection.height = 100;\ndetection.confidence = 1.0;\ndetection.detection_properties[\"CLASSIFICATION\"] = \"backpack\";\n\n\n\nMPFVideoTrack\n\n\nStructure used to store the location of detected objects in a video file.\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFVideoTrack()\nMPFVideoTrack(\n int start_frame,\n int stop_frame,\n float confidence = -1,\n map<int, MPFImageLocation> frame_locations = {},\n const Properties &detection_properties = {})\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nstart_frame\n\n\nint\n\n\nThe first frame number (0-based index) that contained the detected object.\n\n\n\n\n\n\nstop_frame\n\n\nint\n\n\nThe last frame number (0-based index) that contained the detected object.\n\n\n\n\n\n\nframe_locations\n\n\nmap<int, MPFImageLocation>\n\n\nA map of individual detections. The key for each map entry is the frame number where the detection was generated, and the value is an \nMPFImageLocation\n calculated as if that frame were a still image. 
Note that a key-value pair is \nnot\n required for every frame between the track start frame and track stop frame.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetection_properties\n\n\nProperties &\n\n\nOptional additional information about the detected object. There is no restriction on the keys or the number of entries that can be added to the detection_properties map. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nNOTE:\n Currently, \nMPFVideoTrack.detection_properties\n are not included in the JSON output object, nor are they used by the WFM in any way.\n\n\n\n\nA component that detects text can add an entry to \ndetection_properties\n where the key is \nTRANSCRIPT\n and the value is a string representing the text found in the video segment.\n\n\nMPFVideoTrack track;\ntrack.start_frame = 0;\ntrack.stop_frame = 5;\ntrack.confidence = 1.0;\ntrack.frame_locations = frame_locations;\ntrack.detection_properties[\"TRANSCRIPT\"] = \"RE5ULTS FR0M A TEXT DETECTER\";\n\n\n\nC++ Component Build Environment\n\n\nA C++ component library must be built for the same C++ compiler and Linux\nversion that is used by the OpenMPF Component Executable. This is to ensure\ncompatibility between the executable and the library functions at the\nApplication Binary Interface (ABI) level. At this writing, OpenMPF runs on\nUbuntu 20.04 (kernel version 5.13.0-30), and the OpenMPF C++ Component\nExecutable is built with g++ (GCC) 9.3.0-17.\n\n\nComponents should be supplied as a tar file, which includes not only the component library, but any other libraries or files needed for execution. This includes all other non-standard libraries used by the component (aside from the standard Linux and C++ libraries), and any configuration or data files.\n\n\nComponent Development Best Practices\n\n\nThrow Exceptions\n\n\nUnlike the \nC++ Batch Component API\n, none of the C++ Streaming Component API functions return an \nMPFDetectionError\n. Instead, streaming components should throw an exception when a non-recoverable error occurs. The exception should be a subclass of \nstd::exception\n, such as \nstd::runtime_error\n, that provides a descriptive error message that can be retrieved using \nwhat()\n. For example:\n\n\nbool SampleComponent::ProcessFrame(const cv::Mat &frame, int frame_number) {\n // Something bad happened\n throw std::runtime_error(\"Error: Cannot do X with value Y.\");\n}\n\n\n\nThe exception will be handled by the Component Executable. It will immediately invoke \nEndSegment()\n to retrieve the current tracks. Then the component process and streaming job will be terminated.\n\n\nSingle-threaded Operation\n\n\nImplementations are encouraged to operate in single-threaded mode. OpenMPF will parallelize components through multiple instantiations of the component, each running as a separate service.\n\n\nStateless Behavior\n\n\nOpenMPF components should be stateless in operation and give identical output for a provided input (i.e., when processing a segment with the same \nVideoSegmentInfo\n).\n\n\nGPU Support\n\n\nFor components that want to take advantage of NVIDIA GPU processors, please read the \nGPU Support Guide\n. 
Also ensure that your build environment has the NVIDIA CUDA Toolkit installed, as described in the \nBuild Environment Setup Guide\n.\n\n\nComponent Structure\n\n\nIt is recommended that C++ components be organized according to the following directory structure:\n\n\ncomponentName\n\u251c\u2500\u2500 config - Component-specific configuration files\n\u251c\u2500\u2500 descriptor\n\u2502 \u2514\u2500\u2500 descriptor.json\n\u2514\u2500\u2500 lib\n \u2514\u2500\u2500 libComponentName.so - Compiled component library\n\n\n\nOnce built, components should be packaged into a .tar.gz containing the contents of the directory shown above.\n\n\nLogging\n\n\nIt is recommended to use \nApache log4cxx\n for\nOpenMPF Component logging. Components using log4cxx should not configure logging themselves.\nThe Component Executor will configure log4cxx globally. Components should call\n\nlog4cxx::Logger::getLogger(\"<component-name>\")\n to get a reference to the logger. If you\nare using a different logging framework, you should make sure its behavior is similar to how\nthe Component Executor configures log4cxx as described below.\n\n\nThe following log LEVELs are supported: \nFATAL, ERROR, WARN, INFO, DEBUG, TRACE\n.\nThe \nLOG_LEVEL\n environment variable can be set to one of the log levels to change the logging\nverbosity. When \nLOG_LEVEL\n is absent, \nINFO\n is used.\n\n\nNote that multiple instances of the same component can log to the same file.\nAlso, logging content can span multiple lines.\n\n\nThe logger will write to both standard error and\n\n${MPF_LOG_PATH}/${THIS_MPF_NODE}/log/<component-name>.log\n.\n\n\nEach log statement will take the form:\n\nDATE TIME LEVEL CONTENT\n\n\nFor example:\n\n2016-02-09 13:42:42,341 INFO - Starting sample-component: [ OK ]",
"title": "C++ Streaming Component API"
},
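The lifecycle contract documented in the entry above (construct once per job, then BeginSegment() → ProcessFrame() repeatedly → EndSegment() for each segment) is easiest to see end to end in code. The following is a minimal sketch, not a definitive implementation: it assumes only the functions, structures, and factory macro documented on that page; the header name, the MPF::COMPONENT namespace, and the whole-frame "detection" logic are illustrative assumptions.

```cpp
#include <utility>
#include <vector>
#include <opencv2/core.hpp>
#include "MPFStreamingDetectionComponent.h"  // assumed header name

using namespace MPF::COMPONENT;  // assumed namespace

class SampleComponent : public MPFStreamingDetectionComponent {
public:
    explicit SampleComponent(const MPFStreamingVideoJob &job)
            : MPFStreamingDetectionComponent(job) {
    }

    void BeginSegment(const VideoSegmentInfo &segment_info) override {
        // Reset all per-segment state. Tracks must never carry over
        // between segments.
        tracks_.clear();
        reported_new_track_ = false;
    }

    bool ProcessFrame(const cv::Mat &frame, int frame_number) override {
        // Hypothetical detection logic: treat every non-empty frame as one
        // detection covering the whole frame. A real component would run
        // its detector here.
        if (frame.empty()) {
            return false;
        }
        if (tracks_.empty()) {
            tracks_.emplace_back();
            tracks_.back().start_frame = frame_number;
            tracks_.back().confidence = 0.9;
        }
        MPFVideoTrack &track = tracks_.back();
        track.stop_frame = frame_number;
        track.frame_locations[frame_number] =
                MPFImageLocation(0, 0, frame.cols, frame.rows, 0.9);

        // Return true only when the first track of the segment begins;
        // afterwards the Component Executable ignores the return value.
        if (!reported_new_track_) {
            reported_new_track_ = true;
            return true;
        }
        return false;
    }

    std::vector<MPFVideoTrack> EndSegment() override {
        // Hand back only this segment's tracks, leaving the member empty.
        return std::move(tracks_);
    }

private:
    std::vector<MPFVideoTrack> tracks_;
    bool reported_new_track_ = false;
};

// Creates the factory functions the Component Executable uses to
// instantiate and delete the component.
EXPORT_MPF_STREAMING_COMPONENT(SampleComponent);
```

Note how EndSegment() moves the current segment's tracks out and BeginSegment() resets all per-segment state; together they satisfy the rule that tracks are never carried over or merged across segments.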
{
@@ -1562,7 +1562,7 @@
},
{
"location": "/CPP-Streaming-Component-API/index.html#processframemat",
- "text": "Process a single video frame for the current segment. Must return true when the component begins generating the first track for the current segment. After it returns true, the Component Executable will ignore the return value until the component begins processing the next segment. If the job_properties map contained in the MPFStreamingVideoJob struct passed to the component constructor contains a QUALITY_SELECTION_THRESHOLD entry, then this function should only return true for a detection with a quality value that meets or exceeds that threshold. Refer to the Quality Selection Guide . After the Component Executable invokes EndSegment() to retrieve the segment tracks, it will discard detections that are below the threshold. If all the detections in a track are below the threshold, then the entire track will be discarded. Note that this function may not be invoked for every frame in the current segment. For example, if FRAME_INTERVAL = 2 , then this function will only be invoked for every other frame since those are the only ones that need to be processed. Also, it may not be invoked for the first nor last frame in the segment. For example, if FRAME_INTERVAL = 3 and the segment size is 10, then it will be invoked for frames {0, 3, 6, 9} for the first segment, and frames {12, 15, 18} for the second segment. Function Definition: bool ProcessFrame(const cv::Mat &frame, int frame_number) Parameters: Parameter Data Type Description frame const cv::Mat & OpenCV class containing frame data. See cv::Mat frame_number int A unique frame number (0-based index). Guaranteed to be greater than the frame number passed to the last invocation of this function. Returns: ( bool ) True when the component begins generating the first track for the current segment; false otherwise. Example: bool SampleComponent::ProcessFrame(const cv::Mat &frame, int frame_number) {\n // Look for detections. Generate tracks and store them until the end of the segment.\n if (started_first_track_in_current_segment) {\n return true;\n } else {\n return false;\n }\n}",
+ "text": "Process a single video frame for the current segment. Must return true when the component begins generating the first track for the current segment. After it returns true, the Component Executable will ignore the return value until the component begins processing the next segment. If the job_properties map contained in the MPFStreamingVideoJob struct passed to the component constructor contains a CONFIDENCE_THRESHOLD entry, then this function should only return true for a detection with a quality value that meets or exceeds that threshold. After the Component Executable invokes EndSegment() to retrieve the segment tracks, it will discard detections that are below the threshold. If all the detections in a track are below the threshold, then the entire track will be discarded. [NOTE: In the future the C++ Streaming Component API may be updated to support QUALITY_SELECTION_THRESHOLD instead of CONFIDENCE_THRESHOLD .] Note that this function may not be invoked for every frame in the current segment. For example, if FRAME_INTERVAL = 2 , then this function will only be invoked for every other frame since those are the only ones that need to be processed. Also, it may not be invoked for the first nor last frame in the segment. For example, if FRAME_INTERVAL = 3 and the segment size is 10, then it will be invoked for frames {0, 3, 6, 9} for the first segment, and frames {12, 15, 18} for the second segment. Function Definition: bool ProcessFrame(const cv::Mat &frame, int frame_number) Parameters: Parameter Data Type Description frame const cv::Mat & OpenCV class containing frame data. See cv::Mat frame_number int A unique frame number (0-based index). Guaranteed to be greater than the frame number passed to the last invocation of this function. Returns: ( bool ) True when the component begins generating the first track for the current segment; false otherwise. Example: bool SampleComponent::ProcessFrame(const cv::Mat &frame, int frame_number) {\n // Look for detections. Generate tracks and store them until the end of the segment.\n if (started_first_track_in_current_segment) {\n return true;\n } else {\n return false;\n }\n}",
"title": "ProcessFrame(Mat ...)"
},
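Because the job_properties map may not contain a CONFIDENCE_THRESHOLD entry, a component needs a sensible default when the property is absent. Below is a minimal sketch of that lookup, assuming only that Properties is the documented map of <string, string>; the helper name and the fallback value are hypothetical, not part of the API.

```cpp
#include <map>
#include <string>

using Properties = std::map<std::string, std::string>;  // as documented

// Hypothetical helper: read CONFIDENCE_THRESHOLD from job_properties,
// falling back to a caller-supplied default when the entry is absent.
// A malformed value makes std::stod throw, which streaming components
// should treat like any other non-recoverable error.
double GetConfidenceThreshold(const Properties &job_properties,
                              double default_threshold) {
    auto it = job_properties.find("CONFIDENCE_THRESHOLD");
    return it == job_properties.end() ? default_threshold
                                      : std::stod(it->second);
}
```

A component would typically compute this once in its constructor, then have ProcessFrame() return true only for a detection whose quality meets or exceeds the stored threshold.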
{
diff --git a/docs/site/sitemap.xml b/docs/site/sitemap.xml
index 1cc95f98739c..62e07b60d361 100644
--- a/docs/site/sitemap.xml
+++ b/docs/site/sitemap.xml
@@ -2,152 +2,152 @@
/index.html
- 2024-03-25
+ 2024-03-29
daily
/Release-Notes/index.html
- 2024-03-25
+ 2024-03-29
daily
/License-And-Distribution/index.html
- 2024-03-25
+ 2024-03-29
daily
/Acknowledgements/index.html
- 2024-03-25
+ 2024-03-29
daily
/Install-Guide/index.html
- 2024-03-25
+ 2024-03-29
daily
/Admin-Guide/index.html
- 2024-03-25
+ 2024-03-29
daily
/User-Guide/index.html
- 2024-03-25
+ 2024-03-29
daily
/OpenID-Connect-Guide/index.html
- 2024-03-25
+ 2024-03-29
daily
/Media-Segmentation-Guide/index.html
- 2024-03-25
+ 2024-03-29
daily
/Feed-Forward-Guide/index.html
- 2024-03-25
+ 2024-03-29
daily
/Derivative-Media-Guide/index.html
- 2024-03-25
+ 2024-03-29
daily
/Object-Storage-Guide/index.html
- 2024-03-25
+ 2024-03-29
daily
/Markup-Guide/index.html
- 2024-03-25
+ 2024-03-29
daily
/TiesDb-Guide/index.html
- 2024-03-25
+ 2024-03-29
daily
/Trigger-Guide/index.html
- 2024-03-25
+ 2024-03-29
daily
/Roll-Up-Guide/index.html
- 2024-03-25
+ 2024-03-29
daily
/Health-Check-Guide/index.html
- 2024-03-25
+ 2024-03-29
daily
/Quality-Selection-Guide/index.html
- 2024-03-25
+ 2024-03-29
daily
/REST-API/index.html
- 2024-03-25
+ 2024-03-29
daily
/Component-API-Overview/index.html
- 2024-03-25
+ 2024-03-29
daily
/Component-Descriptor-Reference/index.html
- 2024-03-25
+ 2024-03-29
daily
/CPP-Batch-Component-API/index.html
- 2024-03-25
+ 2024-03-29
daily
/Python-Batch-Component-API/index.html
- 2024-03-25
+ 2024-03-29
daily
/Java-Batch-Component-API/index.html
- 2024-03-25
+ 2024-03-29
daily
/GPU-Support-Guide/index.html
- 2024-03-25
+ 2024-03-29
daily
/Contributor-Guide/index.html
- 2024-03-25
+ 2024-03-29
daily
/Development-Environment-Guide/index.html
- 2024-03-25
+ 2024-03-29
daily
/Node-Guide/index.html
- 2024-03-25
+ 2024-03-29
daily
/Workflow-Manager-Architecture/index.html
- 2024-03-25
+ 2024-03-29
daily
/CPP-Streaming-Component-API/index.html
- 2024-03-25
+ 2024-03-29
daily
\ No newline at end of file