diff --git a/docs/docs/Admin-Guide.md b/docs/docs/Admin-Guide.md
index 0cd794bbfc29..87474e0d2a99 100644
--- a/docs/docs/Admin-Guide.md
+++ b/docs/docs/Admin-Guide.md
@@ -32,9 +32,9 @@ An admin user can click inside of the "Value" field for any of the properties an
Note that if the admin user types in the original value of the property, or clicks the "Reset" button, then it will return back to the normal coloration.
-WARNING: Changing the value of these properties can prevent the workflow manager from running after the web server is restarted. Also, no validation checks are performed on the user-provided values. Proceed with caution!
+WARNING: Changing the value of these properties can prevent the Workflow Manager from running after the web server is restarted. Also, no validation checks are performed on the user-provided values. Proceed with caution!
-At the bottom of the properties table is the "Save Properties" button. The number of modified properties is shown in parentheses. Clicking the button will make the necessary changes to the properties file on the file system, but the changes will not take effect until the workflow manager is restarted. The saved properties will be colored blue and a blue icon will be displayed to the right of the property name. Additionally, a notification will appear at the top of the page alerting all system users that a restart is required:
+At the bottom of the properties table is the "Save Properties" button. The number of modified properties is shown in parentheses. Clicking the button will make the necessary changes to the properties file on the file system, but the changes will not take effect until the Workflow Manager is restarted. The saved properties will be colored blue and a blue icon will be displayed to the right of the property name. Additionally, a notification will appear at the top of the page alerting all system users that a restart is required:

diff --git a/docs/docs/CPP-Streaming-Component-API.md b/docs/docs/CPP-Streaming-Component-API.md
index b2a036be83fd..e4c7349e6890 100644
--- a/docs/docs/CPP-Streaming-Component-API.md
+++ b/docs/docs/CPP-Streaming-Component-API.md
@@ -177,11 +177,11 @@ Process a single video frame for the current segment.
Must return true when the component begins generating the first track for the current segment. After it returns true, the Component Executable will ignore the return value until the component begins processing the next segment.
-If the `job_properties` map contained in the `MPFStreamingVideoJob` struct passed to the component constructor contains a CONFIDENCE_THRESHOLD entry, then this function should only return true for a detection with a confidence value that meets or exceeds that threshold. After the Component Executable invokes `EndSegment()` to retrieve the segment tracks, it will discard detections that are below the threshold. If all the detections in a track are below the threshold, then the entire track will be discarded.
+If the `job_properties` map contained in the `MPFStreamingVideoJob` struct passed to the component constructor contains a `QUALITY_SELECTION_THRESHOLD` entry, then this function should only return true for a detection with a quality value that meets or exceeds that threshold. Refer to the [Quality Selection Guide](Quality-Selection-Guide/index.html). After the Component Executable invokes `EndSegment()` to retrieve the segment tracks, it will discard detections that are below the threshold. If all the detections in a track are below the threshold, then the entire track will be discarded.
-Note that this function may not be invoked for every frame in the current segment. For example, if FRAME_INTERVAL = 2, then this function will only be invoked for every other frame since those are the only ones that need to be processed.
+Note that this function may not be invoked for every frame in the current segment. For example, if `FRAME_INTERVAL = 2`, then this function will only be invoked for every other frame since those are the only ones that need to be processed.
-Also, it may not be invoked for the first nor last frame in the segment. For example, if FRAME_INTERVAL = 3 and the segment size is 10, then it will be invoked for frames {0, 3, 6, 9} for the first segment, and frames {12, 15, 18} for the second segment.
+Also, it may not be invoked for the first nor last frame in the segment. For example, if `FRAME_INTERVAL = 3` and the segment size is 10, then it will be invoked for frames {0, 3, 6, 9} for the first segment, and frames {12, 15, 18} for the second segment.
* Function Definition:
```c++
diff --git a/docs/docs/Development-Environment-Guide.md b/docs/docs/Development-Environment-Guide.md
index fcb17f0ff5f4..534a70ab85f3 100644
--- a/docs/docs/Development-Environment-Guide.md
+++ b/docs/docs/Development-Environment-Guide.md
@@ -436,7 +436,7 @@ drag and drop the file onto the "Upload a new component" dropzone area or click
the dropzone area to open a file browser and select the file that way.
In either case, the component will begin to be uploaded to the system. If the
admin user dragged and dropped the file onto the dropzone area then the upload
-progress will be shown in that area. Once uploaded, the workflow manager will
+progress will be shown in that area. Once uploaded, the Workflow Manager will
automatically attempt to register the component. Notification messages will
appear in the upper right side of the screen to indicate success or failure if
an error occurs. The "Current Components" table will display the component
@@ -447,7 +447,7 @@ status.
If for some reason the component package upload succeeded but the component
registration failed then the admin user will be able to click the "Register"
button again to make another registration attempt. For example, the admin
-user may do this after reviewing the workflow manager logs and resolving any
+user may do this after reviewing the Workflow Manager logs and resolving any
issues that prevented the component from successfully registering the first
time. One reason may be that a component with the same name already exists on
the system. Note that an error will also occur if the top-level directory of
diff --git a/docs/docs/Feed-Forward-Guide.md b/docs/docs/Feed-Forward-Guide.md
index 96ba0fbf60da..c1d651c5ab2e 100644
--- a/docs/docs/Feed-Forward-Guide.md
+++ b/docs/docs/Feed-Forward-Guide.md
@@ -3,76 +3,133 @@ Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The
# Introduction
-Feed forward is an optional behavior of OpenMPF that allows tracks from one detection stage of the pipeline to be directly “fed into” the next stage. It differs from the default segmenting behavior in the following major ways:
+Feed forward is an optional behavior of OpenMPF that allows tracks from one detection stage of the pipeline to be
+directly “fed into” the next stage. It differs from the default segmenting behavior in the following major ways:
-1. The next stage will only look at the frames that had detections in the previous stage. The default segmenting behavior results in “filling the gaps” so that the next stage looks at all the frames between the start and end frames of the feed forward track, regardless of whether a detection was actually found in those frames.
+1. The next stage will only look at the frames that had detections in the previous stage. The default segmenting
+ behavior results in “filling the gaps” so that the next stage looks at all the frames between the start and end
+ frames of the feed forward track, regardless of whether a detection was actually found in those frames.
-2. The next stage can be configured to only look at the detection regions for the frames in the feed forward track. The default segmenting behavior does not pass the detection region information to the next stage, so the next stage looks at the whole frame region for every frame in the segment.
+2. The next stage can be configured to only look at the detection regions for the frames in the feed forward track. The
+ default segmenting behavior does not pass the detection region information to the next stage, so the next stage looks
+ at the whole frame region for every frame in the segment.
-3. The next stage will process one sub-job per track generated in the previous stage. If the previous stage generated more than one track in a frame, say 3 tracks, then the next stage will process that frame a total of 3 times. Feed forward can be configured such that only the detection regions for those tracks are processed. If they are non-overlapping then there is no duplication of work. The default segmenting behavior will result in one sub-job that captures the frame associated with all 3 tracks.
+3. The next stage will process one sub-job per track generated in the previous stage. If the previous stage generated
+ more than one track in a frame, say 3 tracks, then the next stage will process that frame a total of 3 times. Feed
+ forward can be configured such that only the detection regions for those tracks are processed. If they are
+ non-overlapping then there is no duplication of work. The default segmenting behavior will result in one sub-job that
+ captures the frame associated with all 3 tracks.
# Motivation
Consider using feed forward for the following reasons:
-1. You have an algorithm that isn’t capable of breaking down a frame into regions of interest. For example, face detection can take a whole frame and generate a separate detection region for each face in the frame. On the other hand, performing classification with the OpenCV Deep Neural Network (DNN) component will take that whole frame and generate a single detection that’s the size of the frame’s width and height. The OpenCV DNN component will produce better results if it operates on smaller regions that only capture the desired object to be classified. Using feed forward, you can create a pipeline so that OpenCV DNN component only processes regions with motion in them.
+1. You have an algorithm that isn’t capable of breaking down a frame into regions of interest. For example, face
+ detection can take a whole frame and generate a separate detection region for each face in the frame. On the other
+ hand, performing classification with the OpenCV Deep Neural Network (DNN) component will take that whole frame and
+ generate a single detection that’s the size of the frame’s width and height. The OpenCV DNN component will produce
+ better results if it operates on smaller regions that only capture the desired object to be classified. Using feed
+ forward, you can create a pipeline so that the OpenCV DNN component only processes regions with motion in them.
-2. You wish to reduce processing time by creating a pipeline in which algorithms are chained from fastest to slowest. For example, a pipeline that starts with motion detection will only feed regions with motion to the next stage, which may be a compute-intensive face detection algorithm. Reducing the amount of data that algorithm needs to process will speed up run times.
+2. You wish to reduce processing time by creating a pipeline in which algorithms are chained from fastest to slowest.
+ For example, a pipeline that starts with motion detection will only feed regions with motion to the next stage, which
+ may be a compute-intensive face detection algorithm. Reducing the amount of data that algorithm needs to process will
+ speed up run times.
-> **NOTE:** Enabling feed forward results in more sub-jobs and more message passing between the workflow manager and components than the default segmenting behavior. Generally speaking, the more feed forward tracks, the greater the overhead cost. The cost may be outweighed by how feed forward can “filter out” pixel data that doesn’t need to be processed. Often, the greater the media resolution, the more pixel data is filtered out, and the greater the benefit.
+> **NOTE:** Enabling feed forward results in more sub-jobs and more message passing between the Workflow Manager and
+> components than the default segmenting behavior. Generally speaking, the more feed forward tracks, the greater the
+> overhead cost. The cost may be outweighed by how feed forward can “filter out” pixel data that doesn’t need to be
+> processed. Often, the greater the media resolution, the more pixel data is filtered out, and the greater the benefit.
-The output of a feed forward pipeline is the intersection of each stage's output. For example, running a feed forward pipeline that contains a motion detector and a face detector will ultimately output detections where motion was detected in the first stage and a face was detected in the second stage.
+The output of a feed forward pipeline is the intersection of each stage's output. For example, running a feed forward
+pipeline that contains a motion detector and a face detector will ultimately output detections where motion was detected
+in the first stage and a face was detected in the second stage.
# First Stage and Combining Properties
-When feed forward is enabled on a job, there is no change in behavior for the first stage of the pipeline because there is no track to feed in. In other words, the first stage will process the media file as though feed forward was not enabled. The tracks generated by the first stage will be passed to the second stage which will then be able to take advantage of the feed forward behavior.
+When feed forward is enabled on a job, there is no change in behavior for the first stage of the pipeline because there
+is no track to feed in. In other words, the first stage will process the media file as though feed forward was not
+enabled. The tracks generated by the first stage will be passed to the second stage which will then be able to take
+advantage of the feed forward behavior.
-> **NOTE:** When `FEED_FORWARD_TYPE` is set to anything other than `NONE`, the following properties will be ignored: `FRAME_INTERVAL`, `USE_KEY_FRAMES`, `SEARCH_REGION_*`.
+> **NOTE:** When `FEED_FORWARD_TYPE` is set to anything other than `NONE`, the following properties will be ignored:
+> `FRAME_INTERVAL`, `USE_KEY_FRAMES`, `SEARCH_REGION_*`.
-If you wish to use the above properties, then you can configure them for the first stage of the pipeline, making sure that `FEED_FORWARD_TYPE` is set to `NONE`, or not specified, for the first stage. You can then configure each subsequent stage to use feed forward. Because only the frames with detections, and those detection regions, are passed forward from the first stage, the subsequent stages will inherit the effects of those properties set on the first stage.
+If you wish to use the above properties, then you can configure them for the first stage of the pipeline, making sure
+that `FEED_FORWARD_TYPE` is set to `NONE`, or not specified, for the first stage. You can then configure each subsequent
+stage to use feed forward. Because only the frames with detections, and those detection regions, are passed forward from
+the first stage, the subsequent stages will inherit the effects of those properties set on the first stage.
# Feed Forward Properties
-Components that support feed forward have two algorithm properties that control the feed forward behavior: `FEED_FORWARD_TYPE` and `FEED_FORWARD_TOP_CONFIDENCE_COUNT`.
+Components that support feed forward have two algorithm properties that control the feed forward behavior:
+`FEED_FORWARD_TYPE` and `FEED_FORWARD_TOP_QUALITY_COUNT`.
`FEED_FORWARD_TYPE` can be set to the following values:
- `NONE`: Feed forward is disabled (default setting).
-- `FRAME`: For each detection in the feed forward track, search the entire frame associated with that detection. The track's detection regions are ignored.
-- `SUPERSET_REGION`: Using the feed forward track, generate a superset region (minimum area rectangle) that captures all of the detection regions in that track across all of the frames in that track. Refer to the [Superset Region](#superset-region) section for more details. For each detection in the feed forward track, search the superset region.
+- `FRAME`: For each detection in the feed forward track, search the entire frame associated with that detection. The
+ track's detection regions are ignored.
+- `SUPERSET_REGION`: Using the feed forward track, generate a superset region (minimum area rectangle) that captures all
+ of the detection regions in that track across all of the frames in that track. Refer to the [Superset
+ Region](#superset-region) section for more details. For each detection in the feed forward track, search the superset
+ region.
- `REGION`: For each detection in the feed forward track, search the exact detection region.
-> **NOTE:** When using `REGION`, the location of the region within the frame, and the size of the region, may be different for each detection in the feed forward track. Thus, `REGION` should not be used by algorithms that perform region tracking and require a consistent coordinate space from detection to detection. For those algorithms, use `SUPERSET_REGION` instead. That will ensure that each detection region is relative to the upper right corner of the superset region for that track.
+> **NOTE:** When using `REGION`, the location of the region within the frame, and the size of the region, may be
+> different for each detection in the feed forward track. Thus, `REGION` should not be used by algorithms that perform
+> region tracking and require a consistent coordinate space from detection to detection. For those algorithms, use
+> `SUPERSET_REGION` instead. That will ensure that each detection region is relative to the upper left corner of the
+> superset region for that track.
-`FEED_FORWARD_TOP_CONFIDENCE_COUNT` allows you to drop low confidence detections from feed forward tracks. Setting the property to a value less than or equal to 0 has no effect. In that case all detections in the feed forward track will be processed.
+`FEED_FORWARD_TOP_QUALITY_COUNT` allows you to drop low quality detections from feed forward tracks. Setting the
+property to a value less than or equal to 0 has no effect. In that case all detections in the feed forward track will be
+processed.
-When `FEED_FORWARD_TOP_CONFIDENCE_COUNT` is set to a number greater than 0, say 5, then the top 5 detections in the feed forward track (based on highest confidence) will be processed. If the track contains less than 5 detections then all of the detections in the track will be processed. If one or more detections have the same confidence value, then the detection(s) with the lower frame index take precedence.
+When `FEED_FORWARD_TOP_QUALITY_COUNT` is set to a number greater than 0, say 5, then the top 5 highest quality
+detections in the feed forward track will be processed. Determination of quality is based on the job property
+`QUALITY_SELECTION_PROPERTY`, which defaults to `CONFIDENCE`, but may be set to a different detection property. Refer to
+the [Quality Selection Guide](Quality-Selection-Guide/index.html). If the track contains fewer than 5 detections then all
+of the detections in the track will be processed. If two or more detections have the same quality value, then the
+detection(s) with the lower frame index take precedence.
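+
+As an illustration only, the selection rule can be sketched in a few lines of C++. The `Detection` struct here is a
+stand-in with just a frame index and a quality value; it is not an OpenMPF type, and this is not the Workflow Manager's
+actual implementation:
+
+```c++
+#include <algorithm>
+#include <vector>
+
+// Illustrative stand-in for a detection: only the fields needed for the selection rule.
+struct Detection {
+    int frame;       // frame index within the track
+    double quality;  // value of the QUALITY_SELECTION_PROPERTY (confidence by default)
+};
+
+// Keep the top `count` detections by quality; ties go to the lower frame index.
+std::vector<Detection> TopQuality(std::vector<Detection> detections, int count) {
+    if (count <= 0 || count >= static_cast<int>(detections.size())) {
+        return detections;  // a count <= 0 disables the filtering; a large count keeps everything
+    }
+    std::sort(detections.begin(), detections.end(),
+              [](const Detection &a, const Detection &b) {
+                  if (a.quality != b.quality) {
+                      return a.quality > b.quality;  // higher quality first
+                  }
+                  return a.frame < b.frame;          // tie-break: lower frame index wins
+              });
+    detections.resize(count);
+    return detections;
+}
+```
+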
# Superset Region
-A “superset region” is the smallest region of interest that contains all of the detections for all of the frames in a track. This is also known as a “union” or [“minimum bounding rectangle"](https://en.wikipedia.org/wiki/Minimum_bounding_rectangle).
+A “superset region” is the smallest region of interest that contains all of the detections for all of the frames in a
+track. This is also known as a “union” or [“minimum bounding
+rectangle"](https://en.wikipedia.org/wiki/Minimum_bounding_rectangle).

-For example, consider a track representing a person moving from the upper left to the lower right. The track consists of 3 frames that have the following detection regions:
+For example, consider a track representing a person moving from the upper left to the lower right. The track consists of
+3 frames that have the following detection regions:
- Frame 0: `(x = 10, y = 10, width = 10, height = 10)`
- Frame 1: `(x = 15, y = 15, width = 10, height = 10)`
- Frame 2: `(x = 20, y = 20, width = 10, height = 10)`
-Each detection region is drawn with a solid green line in the above diagram. The blue line represents the full frame region. The superset region for the track is `(x = 10, y = 10, width = 20, height = 20)`, and is drawn with a dotted red line.
+Each detection region is drawn with a solid green line in the above diagram. The blue line represents the full frame
+region. The superset region for the track is `(x = 10, y = 10, width = 20, height = 20)`, and is drawn with a dotted red
+line.
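+
+As a concrete illustration, the superset region for this example can be computed with a simple rectangle union. The
+sketch below is not part of OpenMPF; it only uses OpenCV's `cv::Rect` and reproduces the coordinates listed above:
+
+```c++
+#include <opencv2/core.hpp>
+
+#include <iostream>
+#include <vector>
+
+int main() {
+    // The three detection regions from the example track (x, y, width, height).
+    std::vector<cv::Rect> detections = {
+        {10, 10, 10, 10},  // Frame 0
+        {15, 15, 10, 10},  // Frame 1
+        {20, 20, 10, 10}   // Frame 2
+    };
+
+    cv::Rect superset = detections.front();
+    for (const cv::Rect &region : detections) {
+        superset |= region;  // grow to the smallest rectangle containing both
+    }
+
+    // Prints the superset region: 20 x 20 starting at (10, 10).
+    std::cout << superset << std::endl;
+}
+```
+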
-The major advantage of using a superset region is constant size. Some algorithms require the search space in each frame to be a constant size in order to successfully track objects.
+The major advantage of using a superset region is constant size. Some algorithms require the search space in each frame
+to be a constant size in order to successfully track objects.
-A disadvantage is that the superset region will often be larger than any specific detection region, so the search space is not restricted to the smallest possible size in each frame; however, in many cases the search space will be significantly smaller than the whole frame.
+A disadvantage is that the superset region will often be larger than any specific detection region, so the search space
+is not restricted to the smallest possible size in each frame; however, in many cases the search space will be
+significantly smaller than the whole frame.
-In the worst case, a feed forward track might, for example, capture a person moving from the upper left corner of a video to the lower right corner. In that case the superset region will be the entire width and height of the frame, so `SUPERSET_REGION` devolves into `FRAME`.
+In the worst case, a feed forward track might, for example, capture a person moving from the upper left corner of a
+video to the lower right corner. In that case the superset region will be the entire width and height of the frame, so
+`SUPERSET_REGION` devolves into `FRAME`.
-In a more typical case, a feed forward track might capture a person moving in the upper left quadrant of a video. In that case `SUPERSET_REGION` is able to filter out 75% of the rest of the frame data. In the example shown in the above diagram, `SUPERSET_REGION` is able to filter out 83% of the rest of the frame data.
+In a more typical case, a feed forward track might capture a person moving in the upper left quadrant of a video. In
+that case `SUPERSET_REGION` is able to filter out 75% of the rest of the frame data. In the example shown in the above
+diagram, `SUPERSET_REGION` is able to filter out 83% of the rest of the frame data.
-The above video shows three faces. For each face there is an inner bounding box that moves and an outer bounding box that does not. The inner bounding box represents the face detection in that frame, while the outer bounding box represents the superset region for the track associated with that face. Note that the bounding box for each face uses a different color. The colors are not related to those used in the above diagram.
+The above video shows three faces. For each face there is an inner bounding box that moves and an outer bounding box
+that does not. The inner bounding box represents the face detection in that frame, while the outer bounding box
+represents the superset region for the track associated with that face. Note that the bounding box for each face uses a
+different color. The colors are not related to those used in the above diagram.
# MPFVideoCapture and MPFImageReader Tools
-When developing a component, the [C++ Batch Component API](CPP-Batch-Component-API/index.html) and [Python Batch Component API](Python-Batch-Component-API/index.html) include utilities that make it easier to support feed forward in your components. They work similarly, but only the C++ tools will be discussed here. The `MPFVideoCapture` class is a wrapper around OpenCV's `cv::VideoCapture` class. `MPFVideoCapture` works very similarly to `cv::VideoCapture`, except that it might modify the video frames based on job properties. From the point of view of someone using `MPFVideoCapture`, these modifications are mostly transparent. `MPFVideoCapture` makes it look like you are reading the original video file.
+When developing a component, the [C++ Batch Component API](CPP-Batch-Component-API/index.html) and [Python Batch
+Component API](Python-Batch-Component-API/index.html) include utilities that make it easier to support feed forward in
+your components. They work similarly, but only the C++ tools will be discussed here. The `MPFVideoCapture` class is a
+wrapper around OpenCV's `cv::VideoCapture` class. `MPFVideoCapture` works very similarly to `cv::VideoCapture`, except
+that it might modify the video frames based on job properties. From the point of view of someone using
+`MPFVideoCapture`, these modifications are mostly transparent. `MPFVideoCapture` makes it look like you are reading the
+original video file.
-Conceptually, consider generating a new video from a feed forward track. The new video would have fewer frames (unless there was a detection in every frame) and possibly a smaller frame size.
+Conceptually, consider generating a new video from a feed forward track. The new video would have fewer frames (unless
+there was a detection in every frame) and possibly a smaller frame size.
-For example, the original video file might be 30 frames long with 640x480 resolution. If the feed forward track found detections in frames 4, 7, and 10, then `MPFVideoCapture` will make it look like the video only has those 3 frames. If the feed forward type is `SUPERSET_REGION` or `REGION,` and each detection is 30x50 pixels, then `MPFVideoCapture` will make it look like the video's original resolution was 30x50 pixels.
+For example, the original video file might be 30 frames long with 640x480 resolution. If the feed forward track found
+detections in frames 4, 7, and 10, then `MPFVideoCapture` will make it look like the video only has those 3 frames. If
+the feed forward type is `SUPERSET_REGION` or `REGION`, and each detection is 30x50 pixels, then `MPFVideoCapture` will
+make it look like the video's original resolution was 30x50 pixels.
-One issue with this approach is that the detection frame numbers and bounding box will be relative to the modified video, not the original. To make the detections relative to the original video the `MPFVideoCapture::ReverseTransform(MPFVideoTrack &videoTrack)` function must be used.
+One issue with this approach is that the detection frame numbers and bounding box will be relative to the modified
+video, not the original. To make the detections relative to the original video the
+`MPFVideoCapture::ReverseTransform(MPFVideoTrack &videoTrack)` function must be used.
The general pattern for using `MPFVideoCapture` is as follows:
@@ -115,18 +187,36 @@ std::vector<MPFVideoTrack> tracks;
}
```
-`MPFVideoCapture` makes it look like the user is processing the original video, when in reality they are processing a modified version. To avoid confusion, this means that `MPFVideoCapture` should always be returning frames that are the same size because most users expect each frame of a video to be the same size.
+`MPFVideoCapture` makes it look like the user is processing the original video, when in reality they are processing a
+modified version. To avoid confusion, this means that `MPFVideoCapture` should always return frames that are the
+same size because most users expect each frame of a video to be the same size.
-When using `SUPERSET_REGION` this is not an issue, since one bounding box is used for the entire track. However, when using `REGION`, each detection can be a different size, so it is not possible for `MPFVideoCapture` to return frames that are always the same size. Since this is a deviation from the expected behavior, and breaks the transparency of `MPFVideoCapture`, `SUPERSET_REGION` should usually be preferred over `REGION`. The `REGION` setting should only be used with components that explicitly state they support it (e.g. OcvDnnDetection). Those components may not perform region tracking, so processing frames of various sizes is not a problem.
+When using `SUPERSET_REGION` this is not an issue, since one bounding box is used for the entire track. However, when
+using `REGION`, each detection can be a different size, so it is not possible for `MPFVideoCapture` to return frames
+that are always the same size. Since this is a deviation from the expected behavior, and breaks the transparency of
+`MPFVideoCapture`, `SUPERSET_REGION` should usually be preferred over `REGION`. The `REGION` setting should only be used
+with components that explicitly state they support it (e.g. OcvDnnDetection). Those components may not perform region
+tracking, so processing frames of various sizes is not a problem.
-The `MPFImageReader` class is similar to `MPFVideoCapture`, but it works on images instead of videos. `MPFImageReader` makes it look like the user is processing an original image, when in reality they are processing a modified version where the frame region is generated based on a detection (`MPFImageLocation`) fed forward from the previous stage of a pipeline. Note that `SUPERSET_REGION` and `REGION` have the same effect when working with images. `MPFImageReader` also has a reverse transform function.
+The `MPFImageReader` class is similar to `MPFVideoCapture`, but it works on images instead of videos. `MPFImageReader`
+makes it look like the user is processing an original image, when in reality they are processing a modified version
+where the frame region is generated based on a detection (`MPFImageLocation`) fed forward from the previous stage of a
+pipeline. Note that `SUPERSET_REGION` and `REGION` have the same effect when working with images. `MPFImageReader` also
+has a reverse transform function.
# OpenCV DNN Component Tracking
-The OpenCV DNN component does not generate detection regions of its own when performing classification. Its tracking behavior depends on whether feed forward is enabled or not. When feed forward is disabled, the component will process the entire region of each frame of a video. If one or more consecutive frames has the same highest confidence classification, then a new track is generated that contains those frames.
+The OpenCV DNN component does not generate detection regions of its own when performing classification. Its tracking
+behavior depends on whether feed forward is enabled or not. When feed forward is disabled, the component will process
+the entire region of each frame of a video. If one or more consecutive frames have the same highest confidence
+classification, then a new track is generated that contains those frames.
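+
+The default tracking behavior described above amounts to grouping consecutive frames that share the same top
+classification. The following sketch is illustrative only, not the component's actual code:
+
+```c++
+#include <string>
+#include <vector>
+
+// One run of consecutive frames with the same top classification, i.e. one track.
+struct Run {
+    std::string classification;
+    int start_frame;
+    int end_frame;
+};
+
+std::vector<Run> GroupIntoTracks(const std::vector<std::string> &top_classification_per_frame) {
+    std::vector<Run> runs;
+    for (int frame = 0; frame < static_cast<int>(top_classification_per_frame.size()); ++frame) {
+        const std::string &classification = top_classification_per_frame[frame];
+        if (runs.empty() || runs.back().classification != classification) {
+            runs.push_back({classification, frame, frame});  // start a new track
+        } else {
+            runs.back().end_frame = frame;                   // extend the current track
+        }
+    }
+    return runs;
+}
+```
+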
-When feed forward is enabled, the OpenCV DNN component will process the region of each frame of feed forward track according to the `FEED_FORWARD_TYPE`. It will generate one track that contains the same frames as the feed forward track. If `FEED_FORWARD_TYPE` is set to `REGION` then the OpenCV DNN track will contain (inherit) the same detection regions as the feed forward track. In any case, the `detectionProperties` map for the detections in the OpenCV DNN track will include the `CLASSIFICATION` entries and possibly other OpenCV DNN component properties.
+When feed forward is enabled, the OpenCV DNN component will process the region of each frame of the feed forward track
+according to the `FEED_FORWARD_TYPE`. It will generate one track that contains the same frames as the feed forward
+track. If `FEED_FORWARD_TYPE` is set to `REGION` then the OpenCV DNN track will contain (inherit) the same detection
+regions as the feed forward track. In any case, the `detectionProperties` map for the detections in the OpenCV DNN track
+will include the `CLASSIFICATION` entries and possibly other OpenCV DNN component properties.
# Feed Forward Pipeline Examples
@@ -160,13 +250,25 @@ CAFFE GOOGLENET DETECTION (WITH MOG MOTION TRACKING AND FEED FORWARD REGION) PIP
+ CAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) TASK
```
-Running this pipeline will result in OpenCV DNN tracks that contain detections where there was MOG motion. Each detection in each track will have an OpenCV DNN `CLASSIFICATION` entry. Each track has a 1-to-1 correspondence with a MOG motion track.
+Running this pipeline will result in OpenCV DNN tracks that contain detections where there was MOG motion. Each
+detection in each track will have an OpenCV DNN `CLASSIFICATION` entry. Each track has a 1-to-1 correspondence with a
+MOG motion track.
-Refer to `runMogThenCaffeFeedForwardExactRegionTest()` in the [`TestSystemOnDiff`](https://github.com/openmpf/openmpf/blob/master/trunk/mpf-system-tests/src/test/java/org/mitre/mpf/mst/TestSystemOnDiff.java) class for a system test that demonstrates this behavior. Refer to `runMogThenCaffeFeedForwardSupersetRegionTest()` in that class for a system test that uses `SUPERSET_REGION` instead. Refer to `runMogThenCaffeFeedForwardFullFrameTest()` for a system test that uses `FRAME` instead.
+Refer to `runMogThenCaffeFeedForwardExactRegionTest()` in the
+[`TestSystemOnDiff`](https://github.com/openmpf/openmpf/blob/master/trunk/mpf-system-tests/src/test/java/org/mitre/mpf/mst/TestSystemOnDiff.java)
+class for a system test that demonstrates this behavior. Refer to `runMogThenCaffeFeedForwardSupersetRegionTest()` in
+that class for a system test that uses `SUPERSET_REGION` instead. Refer to `runMogThenCaffeFeedForwardFullFrameTest()`
+for a system test that uses `FRAME` instead.
-> **NOTE:** Short and/or spurious MOG motion tracks will result in more overhead work when performing feed forward. To mitigate this, consider setting the `MERGE_TRACKS`, `MIN_GAP_BETWEEN_TRACKS`, and `MIN_TRACK_LENGTH` properties to generate longer motion tracks and discard short and/or spurious motion tracks.
+> **NOTE:** Short and/or spurious MOG motion tracks will result in more overhead work when performing feed forward. To
+> mitigate this, consider setting the `MERGE_TRACKS`, `MIN_GAP_BETWEEN_TRACKS`, and `MIN_TRACK_LENGTH` properties to
+> generate longer motion tracks and discard short and/or spurious motion tracks.
-> **NOTE:** It doesn’t make sense to use `FEED_FORWARD_TOP_CONFIDENCE_COUNT` on a pipeline stage that follows a MOG or SuBSENSE motion detection stage. That’s because those motion detectors don’t generate tracks with confidence values. Instead, `FEED_FORWARD_TOP_CONFIDENCE_COUNT` could potentially be used when feeding person tracks into a face detector, for example, if those person tracks have confidence values.
+> **NOTE:** It doesn’t make sense to use `FEED_FORWARD_TOP_QUALITY_COUNT` on a pipeline stage that follows a MOG or
+> SuBSENSE motion detection stage. That’s because those motion detectors don’t generate tracks with confidence values
+> (`CONFIDENCE` being the default value for the `QUALITY_SELECTION_PROPERTY` job property). Instead,
+> `FEED_FORWARD_TOP_QUALITY_COUNT` could potentially be used when feeding person tracks into a face detector, for
+> example, if the detections in those person tracks have the requested `QUALITY_SELECTION_PROPERTY` set.
OCV Face Detection with MOG Motion Detection and Feed Forward Superset Region
@@ -194,6 +296,9 @@ OCV FACE DETECTION (WITH MOG MOTION TRACKING AND FEED FORWARD SUPERSET REGION) P
+ OCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) TASK
```
-Running this pipeline will result in OCV face tracks that contain detections where there was MOG motion. Each track has a 1-to-1 correspondence with a MOG motion track.
+Running this pipeline will result in OCV face tracks that contain detections where there was MOG motion. Each track has
+a 1-to-1 correspondence with a MOG motion track.
-Refer to `runMogThenOcvFaceFeedForwardRegionTest()` in the [`TestSystemOnDiff`](https://github.com/openmpf/openmpf/blob/master/trunk/mpf-system-tests/src/test/java/org/mitre/mpf/mst/TestSystemOnDiff.java) class for a system test that demonstrates this behavior.
+Refer to `runMogThenOcvFaceFeedForwardRegionTest()` in the
+[`TestSystemOnDiff`](https://github.com/openmpf/openmpf/blob/master/trunk/mpf-system-tests/src/test/java/org/mitre/mpf/mst/TestSystemOnDiff.java)
+class for a system test that demonstrates this behavior.
diff --git a/docs/docs/Quality-Selection-Guide.md b/docs/docs/Quality-Selection-Guide.md
new file mode 100644
index 000000000000..6e399dd712a3
--- /dev/null
+++ b/docs/docs/Quality-Selection-Guide.md
@@ -0,0 +1,70 @@
+**NOTICE:** This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
+Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.
+
+# Introduction
+
+There are a few places in OpenMPF where the quality of a detection comes into play. Here, "detection quality" is defined
+as a measurement of how "good" a detection is, one that can be used to rank the detections in a track from highest to
+lowest quality. In many cases, components use "confidence" as an indicator of quality; however, there are some
+components that do not compute a confidence value for their detections, and there are others that compute a different
+value that is a better measure of quality for that detection algorithm. As discussed in the next section, OpenMPF uses
+detection quality for a variety of purposes.
+
+
+# Quality Selection Properties
+
+`QUALITY_SELECTION_PROPERTY` is a string that defines the name of the property to use for quality selection. For
+example, a face detection component may generate detections with a `DESCRIPTOR_MAGNITUDE` property that represents the
+quality of the face embedding and how useful it is for reidentification. The Workflow Manager will search the
+`detection_properties` map in each detection and track for that key and use the corresponding value as the detection
+quality. The value associated with this property must be an integer or floating point value, where higher values
+indicate higher quality.
+
+One exception is when this property is set to `CONFIDENCE` and no `CONFIDENCE` property exists in the
+`detection_properties` map. Then the `confidence` member of each detection and track is used instead.
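+
+The lookup described above, together with the lowest-possible-quality fallback discussed under Hybrid Quality Selection
+below, can be sketched as follows. This is illustrative only, not the Workflow Manager's code; the `Detection` struct is
+a stand-in for a detection's `confidence` member and `detection_properties` map:
+
+```c++
+#include <limits>
+#include <map>
+#include <string>
+
+// Illustrative stand-in for a detection.
+struct Detection {
+    float confidence;
+    std::map<std::string, std::string> detection_properties;
+};
+
+double GetQuality(const Detection &detection, const std::string &quality_property) {
+    auto it = detection.detection_properties.find(quality_property);
+    if (it != detection.detection_properties.end()) {
+        return std::stod(it->second);  // use the named property when the detection has it
+    }
+    if (quality_property == "CONFIDENCE") {
+        return detection.confidence;   // fall back to the confidence member
+    }
+    // No usable value: treat the detection as having the lowest possible quality.
+    return std::numeric_limits<double>::lowest();
+}
+```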
+
+The primary way in which OpenMPF uses detection quality is to determine the track "exemplar", which is the highest
+quality detection in the track. For components that do not compute a quality value, or where all detections have
+identical quality, the Workflow Manager will choose the first detection in the track as the exemplar.
+
+`QUALITY_SELECTION_THRESHOLD` is a numerical value used for filtering out low quality detections and tracks. All
+detections below this threshold are discarded, and if all the detections in a track are discarded, then the track itself
+is also discarded. Note that some components may do this filtering themselves, while others leave it to the Workflow Manager
+to do the filtering. The thresholding process can be circumvented by setting this threshold to a value less than the
+lowest possible value. For example, if the detection quality value computed by a component has values in the range 0 to
+1, then setting the threshold property to -1 will result in all detections and all tracks being retained.
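+
+A minimal sketch of this filtering rule, with each track reduced to just the quality values of its detections
+(illustrative only, not the Workflow Manager's implementation):
+
+```c++
+#include <algorithm>
+#include <vector>
+
+using Track = std::vector<double>;  // a track as a list of detection quality values
+
+std::vector<Track> ApplyThreshold(std::vector<Track> tracks, double threshold) {
+    for (Track &track : tracks) {
+        // Discard detections whose quality is below the threshold.
+        track.erase(std::remove_if(track.begin(), track.end(),
+                                   [&](double quality) { return quality < threshold; }),
+                    track.end());
+    }
+    // Discard tracks whose detections were all below the threshold.
+    tracks.erase(std::remove_if(tracks.begin(), tracks.end(),
+                                [](const Track &track) { return track.empty(); }),
+                 tracks.end());
+    return tracks;
+}
+```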
+
+`FEED_FORWARD_TOP_QUALITY_COUNT` can be used to select the number of detections to include in a feed forward track. For
+example, if set to 10, only the top 10 highest quality detections are fed forward to the downstream component for that
+track. If fewer than 10 detections meet the `QUALITY_SELECTION_THRESHOLD`, then only that many detections are fed
+forward. Refer to the [Feed Forward Guide](Feed-Forward-Guide/index.html) for more information.
+
+`ARTIFACT_EXTRACTION_POLICY_TOP_QUALITY_COUNT` can be used to select the number of detections that will be used to
+extract artifacts. For example, if set to 10, the detections in a track will be sorted by their detection quality value,
+and then the artifacts for the 10 detections with the highest quality will be extracted. If fewer than 10 detections meet
+the `QUALITY_SELECTION_THRESHOLD`, then only that many artifacts will be extracted.
+
+
+# Hybrid Quality Selection
+
+In some cases, there may be a detection property that a component would like to use as a measure of quality but it
+doesn't lend itself to simple thresholding. For example, a face detector might be able to calculate the face pose, and
+would like to select faces that are in the most frontal pose as the highest quality detections. The yaw of the face pose
+may be used to indicate this, but if its values are between, say, -90 and +90 degrees, then the highest quality
+detection would be the one with a value of yaw closest to 0. This conflicts with the requirement that the quality
+selection property take on a range of values where the highest value indicates the highest quality.
+
+Another use case might be where the component would like to choose detections based on a set of quality values, or
+properties. Continuing with the face pose example, the component might like to designate the detection with pose closest
+to frontal as the highest quality, but would also like to assign high quality to detections where the pose is closest to
+profile, meaning values of yaw closest to -90 or +90 degrees.
+
+In both of these cases, the component can create a custom detection property that is used to rank these detections as it
+sees fit. It could use a detection property called `RANK`, and assign values to that property to rank the detections
+from lowest to highest quality. In the example of the face detector wanting to use the yaw of the face pose, the
+detection with a value of yaw closest to 0 would be assigned a `RANK` property with the highest value, then the
+detections with values of yaw closest to +/-90 degrees would be assigned the second and third highest values of `RANK`.
+Detections without the `RANK` property would be treated as having the lowest possible quality value. Thus, the track
+exemplar would be the face with the frontal pose, and the `ARTIFACT_EXTRACTION_POLICY_TOP_QUALITY_COUNT` property could
+be set to 3, so that the frontal and two profile pose detections would be kept as track artifacts.
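+
+For the first case above, a component could convert the yaw into a `RANK` value directly. The helper below is a
+hypothetical illustration, assuming yaw values in the range -90 to +90 degrees; a more elaborate scheme, such as also
+favoring profile poses, could be encoded the same way by assigning larger `RANK` values to the poses the component wants
+to keep:
+
+```c++
+#include <cmath>
+#include <string>
+
+// Map yaw (0 = frontal) to a RANK value where higher means higher quality.
+std::string RankFromYaw(double yaw_degrees) {
+    double rank = 90.0 - std::abs(yaw_degrees);  // frontal poses score highest
+    return std::to_string(rank);                 // detection_properties values are strings
+}
+```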
+
diff --git a/docs/docs/Release-Notes.md b/docs/docs/Release-Notes.md
index c9ec271017d1..8dbb9406adce 100644
--- a/docs/docs/Release-Notes.md
+++ b/docs/docs/Release-Notes.md
@@ -2892,7 +2892,7 @@ for optional dependencies.
Feed Forward Behavior
-- Updated the workflow manager (WFM) and all video components to optionally perform feed forward processing for batch
+- Updated the Workflow Manager (WFM) and all video components to optionally perform feed forward processing for batch
jobs. This allows tracks to be passed forward from one pipeline stage to the next. Components in the next stage will
only process the frames associated with the detections in those tracks. This differs from the default segmenting
behavior, which does not preserve detection regions or track information between stages.
diff --git a/docs/docs/User-Guide.md b/docs/docs/User-Guide.md
index 23c15ae2488e..e2aefe3e74af 100644
--- a/docs/docs/User-Guide.md
+++ b/docs/docs/User-Guide.md
@@ -256,7 +256,7 @@ This page allows a user to view the various log files that are generated by syst
In general, all services of the same component type running on the same node write log messages to the same file. For example, all OCV face detection services on somehost-7-mpfd2 write log messages to the same "ocv-face-detection" log file. All OCV face detection services on somehost-7-mpfd3 write log messages to a different "ocv-face-detection" log file.
-Note that only the master node will have the "workflow-manager" log. This is because the workflow manager only runs on the master node.
+Note that only the master node will have the "workflow-manager" log. This is because the Workflow Manager only runs on the master node.
The "node-manager-startup" and "node-manager" logs will appear for every node in a non-Docker OpenMPF cluster. The "node-manager-startup" log captures information about the nodemanager startup process, such as if any errors occurred. The "node-manager" log captures information about node manager execution, such as starting and stopping services.
@@ -270,19 +270,19 @@ This page allows a user to view the various OpenMPF properties configured automa
## Statistics
-The "Jobs" tab on this page allows a user to view a bar graph representing the time it took to execute the longest running job for a given pipeline. Pipelines that do not have bars have not been used to run any jobs yet. Job statistics are preserved when the workflow manager is restarted.
+The "Jobs" tab on this page allows a user to view a bar graph representing the time it took to execute the longest running job for a given pipeline. Pipelines that do not have bars have not been used to run any jobs yet. Job statistics are preserved when the Workflow Manager is restarted.

For example, the DLIB FACE DETECTION PIPELINE was run twice. Note that the Y-axis in the bar graph has a logarithmic scale. Hovering the mouse over any bar in the graph will show more information. Information about each pipeline is listed below the graph.
-The "Processes" tab on this page allows a user to view a table with information about the runtime of various internal workflow manager operations. The "Count" field represents the number of times each operation was run. The min, max, and mean are calculated over the set of times each operation was performed. Runtime information is reset when the workflow manager is restarted.
+The "Processes" tab on this page allows a user to view a table with information about the runtime of various internal Workflow Manager operations. The "Count" field represents the number of times each operation was run. The min, max, and mean are calculated over the set of times each operation was performed. Runtime information is reset when the Workflow Manager is restarted.

## REST API
-This page allows a user to try out the [various REST API endpoints](REST-API) provided by the workflow manager. It is intended to serve as a learning tool for technical users who wish to design and build systems that interact with the OpenMPF.
+This page allows a user to try out the [various REST API endpoints](REST-API) provided by the Workflow Manager. It is intended to serve as a learning tool for technical users who wish to design and build systems that interact with the OpenMPF.
After selecting a functional category, such as "meta", "jobs", "statistics", "nodes", "pipelines", or "system-message", each REST endpoint for that category is shown in a list. Selecting one of them will cause it to expand and reveal more information about the request and response structures. If the request takes any parameters then a section will appear that allows the user to manually specify them.
diff --git a/docs/mkdocs.yml b/docs/mkdocs.yml
index f7e56eb04d90..9c3a88730ee1 100644
--- a/docs/mkdocs.yml
+++ b/docs/mkdocs.yml
@@ -29,6 +29,7 @@ pages:
- Trigger Guide: Trigger-Guide.md
- Roll Up Guide: Roll-Up-Guide.md
- Health Check Guide: Health-Check-Guide.md
+ - Quality Selection Guide: Quality-Selection-Guide.md
- REST API: REST-API.md
- Component Development:
- Component API Overview: Component-API-Overview.md
diff --git a/docs/site/404.html b/docs/site/404.html
index 2b5fcc28400c..8fd80512eec5 100644
--- a/docs/site/404.html
+++ b/docs/site/404.html
@@ -122,6 +122,10 @@
An admin user can click inside of the "Value" field for any of the properties and type a new value. Doing so will change the color of the property to orange and display an orange icon to the right of the property name.
Note that if the admin user types in the original value of the property, or clicks the "Reset" button, then it will return back to the normal coloration.
-WARNING: Changing the value of these properties can prevent the workflow manager from running after the web server is restarted. Also, no validation checks are performed on the user-provided values. Proceed with caution!
+WARNING: Changing the value of these properties can prevent the Workflow Manager from running after the web server is restarted. Also, no validation checks are performed on the user-provided values. Proceed with caution!
-At the bottom of the properties table is the "Save Properties" button. The number of modified properties is shown in parentheses. Clicking the button will make the necessary changes to the properties file on the file system, but the changes will not take effect until the workflow manager is restarted. The saved properties will be colored blue and a blue icon will be displayed to the right of the property name. Additionally, a notification will appear at the top of the page alerting all system users that a restart is required:
+At the bottom of the properties table is the "Save Properties" button. The number of modified properties is shown in parentheses. Clicking the button will make the necessary changes to the properties file on the file system, but the changes will not take effect until the Workflow Manager is restarted. The saved properties will be colored blue and a blue icon will be displayed to the right of the property name. Additionally, a notification will appear at the top of the page alerting all system users that a restart is required:
Hawtio
The Hawtio web console can be accessed by selecting "Hawtio" from the
diff --git a/docs/site/CPP-Batch-Component-API/index.html b/docs/site/CPP-Batch-Component-API/index.html
index 88e6fafcc1f3..8de5259d83d9 100644
--- a/docs/site/CPP-Batch-Component-API/index.html
+++ b/docs/site/CPP-Batch-Component-API/index.html
@@ -129,6 +129,10 @@
Process a single video frame for the current segment.
Must return true when the component begins generating the first track for the current segment. After it returns true, the Component Executable will ignore the return value until the component begins processing the next segment.
-If the job_properties map contained in the MPFStreamingVideoJob struct passed to the component constructor contains a CONFIDENCE_THRESHOLD entry, then this function should only return true for a detection with a confidence value that meets or exceeds that threshold. After the Component Executable invokes EndSegment() to retrieve the segment tracks, it will discard detections that are below the threshold. If all the detections in a track are below the threshold, then the entire track will be discarded.
-Note that this function may not be invoked for every frame in the current segment. For example, if FRAME_INTERVAL = 2, then this function will only be invoked for every other frame since those are the only ones that need to be processed.
-Also, it may not be invoked for the first nor last frame in the segment. For example, if FRAME_INTERVAL = 3 and the segment size is 10, then it will be invoked for frames {0, 3, 6, 9} for the first segment, and frames {12, 15, 18} for the second segment.
+If the job_properties map contained in the MPFStreamingVideoJob struct passed to the component constructor contains a QUALITY_SELECTION_THRESHOLD entry, then this function should only return true for a detection with a quality value that meets or exceeds that threshold. Refer to the Quality Selection Guide. After the Component Executable invokes EndSegment() to retrieve the segment tracks, it will discard detections that are below the threshold. If all the detections in a track are below the threshold, then the entire track will be discarded.
+Note that this function may not be invoked for every frame in the current segment. For example, if FRAME_INTERVAL = 2, then this function will only be invoked for every other frame since those are the only ones that need to be processed.
+Also, it may not be invoked for the first nor last frame in the segment. For example, if FRAME_INTERVAL = 3 and the segment size is 10, then it will be invoked for frames {0, 3, 6, 9} for the first segment, and frames {12, 15, 18} for the second segment.
the dropzone area to open a file browser and select the file that way.
In either case, the component will begin to be uploaded to the system. If the
admin user dragged and dropped the file onto the dropzone area then the upload
-progress will be shown in that area. Once uploaded, the workflow manager will
+progress will be shown in that area. Once uploaded, the Workflow Manager will
automatically attempt to register the component. Notification messages will
appear in the upper right side of the screen to indicate success or failure if
an error occurs. The "Current Components" table will display the component
@@ -702,7 +706,7 @@
Component Registration
If for some reason the component package upload succeeded but the component
registration failed then the admin user will be able to click the "Register"
button again to make another registration attempt. For example, the admin
-user may do this after reviewing the workflow manager logs and resolving any
+user may do this after reviewing the Workflow Manager logs and resolving any
issues that prevented the component from successfully registering the first
time. One reason may be that a component with the same name already exists on
the system. Note that an error will also occur if the top-level directory of
diff --git a/docs/site/Feed-Forward-Guide/index.html b/docs/site/Feed-Forward-Guide/index.html
index 25c75ac68e9d..ccc86e3dfb97 100644
--- a/docs/site/Feed-Forward-Guide/index.html
+++ b/docs/site/Feed-Forward-Guide/index.html
@@ -156,6 +156,10 @@
NOTICE: This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.
Introduction
-Feed forward is an optional behavior of OpenMPF that allows tracks from one detection stage of the pipeline to be directly “fed into” the next stage. It differs from the default segmenting behavior in the following major ways:
+Feed forward is an optional behavior of OpenMPF that allows tracks from one detection stage of the pipeline to be
+directly “fed into” the next stage. It differs from the default segmenting behavior in the following major ways:
-The next stage will only look at the frames that had detections in the previous stage. The default segmenting behavior results in “filling the gaps” so that the next stage looks at all the frames between the start and end frames of the feed forward track, regardless of whether a detection was actually found in those frames.
+The next stage will only look at the frames that had detections in the previous stage. The default segmenting
+ behavior results in “filling the gaps” so that the next stage looks at all the frames between the start and end
+ frames of the feed forward track, regardless of whether a detection was actually found in those frames.
-The next stage can be configured to only look at the detection regions for the frames in the feed forward track. The default segmenting behavior does not pass the detection region information to the next stage, so the next stage looks at the whole frame region for every frame in the segment.
+The next stage can be configured to only look at the detection regions for the frames in the feed forward track. The
+ default segmenting behavior does not pass the detection region information to the next stage, so the next stage looks
+ at the whole frame region for every frame in the segment.
-The next stage will process one sub-job per track generated in the previous stage. If the previous stage generated more than one track in a frame, say 3 tracks, then the next stage will process that frame a total of 3 times. Feed forward can be configured such that only the detection regions for those tracks are processed. If they are non-overlapping then there is no duplication of work. The default segmenting behavior will result in one sub-job that captures the frame associated with all 3 tracks.
+The next stage will process one sub-job per track generated in the previous stage. If the previous stage generated
+ more than one track in a frame, say 3 tracks, then the next stage will process that frame a total of 3 times. Feed
+ forward can be configured such that only the detection regions for those tracks are processed. If they are
+ non-overlapping then there is no duplication of work. The default segmenting behavior will result in one sub-job that
+ captures the frame associated with all 3 tracks.
Motivation
Consider using feed forward for the following reasons:
-
You have an algorithm that isn’t capable of breaking down a frame into regions of interest. For example, face detection can take a whole frame and generate a separate detection region for each face in the frame. On the other hand, performing classification with the OpenCV Deep Neural Network (DNN) component will take that whole frame and generate a single detection that’s the size of the frame’s width and height. The OpenCV DNN component will produce better results if it operates on smaller regions that only capture the desired object to be classified. Using feed forward, you can create a pipeline so that OpenCV DNN component only processes regions with motion in them.
+
You have an algorithm that isn’t capable of breaking down a frame into regions of interest. For example, face
+ detection can take a whole frame and generate a separate detection region for each face in the frame. On the other
+ hand, performing classification with the OpenCV Deep Neural Network (DNN) component will take that whole frame and
+ generate a single detection that’s the size of the frame’s width and height. The OpenCV DNN component will produce
+ better results if it operates on smaller regions that only capture the desired object to be classified. Using feed
+  forward, you can create a pipeline so that the OpenCV DNN component only processes regions with motion in them.
-
You wish to reduce processing time by creating a pipeline in which algorithms are chained from fastest to slowest. For example, a pipeline that starts with motion detection will only feed regions with motion to the next stage, which may be a compute-intensive face detection algorithm. Reducing the amount of data that algorithm needs to process will speed up run times.
+
You wish to reduce processing time by creating a pipeline in which algorithms are chained from fastest to slowest.
+ For example, a pipeline that starts with motion detection will only feed regions with motion to the next stage, which
+ may be a compute-intensive face detection algorithm. Reducing the amount of data that algorithm needs to process will
+ speed up run times.
-
NOTE: Enabling feed forward results in more sub-jobs and more message passing between the workflow manager and components than the default segmenting behavior. Generally speaking, the more feed forward tracks, the greater the overhead cost. The cost may be outweighed by how feed forward can “filter out” pixel data that doesn’t need to be processed. Often, the greater the media resolution, the more pixel data is filtered out, and the greater the benefit.
+
NOTE: Enabling feed forward results in more sub-jobs and more message passing between the Workflow Manager and
+components than the default segmenting behavior. Generally speaking, the more feed forward tracks, the greater the
+overhead cost. The cost may be outweighed by how feed forward can “filter out” pixel data that doesn’t need to be
+processed. Often, the greater the media resolution, the more pixel data is filtered out, and the greater the benefit.
-
The output of a feed forward pipeline is the intersection of each stage's output. For example, running a feed forward pipeline that contains a motion detector and a face detector will ultimately output detections where motion was detected in the first stage and a face was detected in the second stage.
+
The output of a feed forward pipeline is the intersection of each stage's output. For example, running a feed forward
+pipeline that contains a motion detector and a face detector will ultimately output detections where motion was detected
+in the first stage and a face was detected in the second stage.
First Stage and Combining Properties
-
When feed forward is enabled on a job, there is no change in behavior for the first stage of the pipeline because there is no track to feed in. In other words, the first stage will process the media file as though feed forward was not enabled. The tracks generated by the first stage will be passed to the second stage which will then be able to take advantage of the feed forward behavior.
+
When feed forward is enabled on a job, there is no change in behavior for the first stage of the pipeline because there
+is no track to feed in. In other words, the first stage will process the media file as though feed forward was not
+enabled. The tracks generated by the first stage will be passed to the second stage which will then be able to take
+advantage of the feed forward behavior.
-
NOTE: When FEED_FORWARD_TYPE is set to anything other than NONE, the following properties will be ignored: FRAME_INTERVAL, USE_KEY_FRAMES, SEARCH_REGION_*.
+
NOTE: When FEED_FORWARD_TYPE is set to anything other than NONE, the following properties will be ignored:
+FRAME_INTERVAL, USE_KEY_FRAMES, SEARCH_REGION_*.
-
If you wish to use the above properties, then you can configure them for the first stage of the pipeline, making sure that FEED_FORWARD_TYPE is set to NONE, or not specified, for the first stage. You can then configure each subsequent stage to use feed forward. Because only the frames with detections, and those detection regions, are passed forward from the first stage, the subsequent stages will inherit the effects of those properties set on the first stage.
+
If you wish to use the above properties, then you can configure them for the first stage of the pipeline, making sure
+that FEED_FORWARD_TYPE is set to NONE, or not specified, for the first stage. You can then configure each subsequent
+stage to use feed forward. Because only the frames with detections, and those detection regions, are passed forward from
+the first stage, the subsequent stages will inherit the effects of those properties set on the first stage.
Feed Forward Properties
-
Components that support feed forward have two algorithm properties that control the feed forward behavior: FEED_FORWARD_TYPE and FEED_FORWARD_TOP_CONFIDENCE_COUNT.
+
Components that support feed forward have two algorithm properties that control the feed forward behavior:
+FEED_FORWARD_TYPE and FEED_FORWARD_TOP_QUALITY_COUNT.
FEED_FORWARD_TYPE can be set to the following values:
NONE: Feed forward is disabled (default setting).
-
FRAME: For each detection in the feed forward track, search the entire frame associated with that detection. The track's detection regions are ignored.
-
SUPERSET_REGION: Using the feed forward track, generate a superset region (minimum area rectangle) that captures all of the detection regions in that track across all of the frames in that track. Refer to the Superset Region section for more details. For each detection in the feed forward track, search the superset region.
+
FRAME: For each detection in the feed forward track, search the entire frame associated with that detection. The
+ track's detection regions are ignored.
+
SUPERSET_REGION: Using the feed forward track, generate a superset region (minimum area rectangle) that captures all
+ of the detection regions in that track across all of the frames in that track. Refer to the Superset
+ Region section for more details. For each detection in the feed forward track, search the superset
+ region.
REGION: For each detection in the feed forward track, search the exact detection region.
-
NOTE: When using REGION, the location of the region within the frame, and the size of the region, may be different for each detection in the feed forward track. Thus, REGION should not be used by algorithms that perform region tracking and require a consistent coordinate space from detection to detection. For those algorithms, use SUPERSET_REGION instead. That will ensure that each detection region is relative to the upper right corner of the superset region for that track.
+
NOTE: When using REGION, the location of the region within the frame, and the size of the region, may be
+different for each detection in the feed forward track. Thus, REGION should not be used by algorithms that perform
+region tracking and require a consistent coordinate space from detection to detection. For those algorithms, use
+SUPERSET_REGION instead. That will ensure that each detection region is relative to the upper right corner of the
+superset region for that track.
-
FEED_FORWARD_TOP_CONFIDENCE_COUNT allows you to drop low confidence detections from feed forward tracks. Setting the property to a value less than or equal to 0 has no effect. In that case all detections in the feed forward track will be processed.
-
When FEED_FORWARD_TOP_CONFIDENCE_COUNT is set to a number greater than 0, say 5, then the top 5 detections in the feed forward track (based on highest confidence) will be processed. If the track contains less than 5 detections then all of the detections in the track will be processed. If one or more detections have the same confidence value, then the detection(s) with the lower frame index take precedence.
+
FEED_FORWARD_TOP_QUALITY_COUNT allows you to drop low quality detections from feed forward tracks. Setting the
+property to a value less than or equal to 0 has no effect. In that case all detections in the feed forward track will be
+processed.
+
When FEED_FORWARD_TOP_QUALITY_COUNT is set to a number greater than 0, say 5, then the top 5 highest quality
+detections in the feed forward track will be processed. Determination of quality is based on the job property
+QUALITY_SELECTION_PROPERTY, which defaults to CONFIDENCE, but may be set to a different detection property. Refer to
+the Quality Selection Guide. If the track contains fewer than 5 detections, then all
+of the detections in the track will be processed. If one or more detections have the same quality value, then the
+detection(s) with the lower frame index take precedence.
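To make the selection behavior concrete, here is a minimal C++ sketch of that ranking. It is illustrative only and not part of the OpenMPF API; for simplicity it uses each detection's confidence as the quality value, whereas the actual value is determined by QUALITY_SELECTION_PROPERTY.

```c++
#include <algorithm>
#include <utility>
#include <vector>

#include "MPFDetectionComponent.h"

using namespace MPF::COMPONENT;

// Illustrative sketch: return the top "count" detections from a feed forward
// track, ranked by quality (confidence here), with ties broken by the lower
// frame index.
std::vector<std::pair<int, MPFImageLocation>> TopQualityDetections(
        const MPFVideoTrack &track, size_t count) {
    std::vector<std::pair<int, MPFImageLocation>> detections(
            track.frame_locations.begin(), track.frame_locations.end());
    std::sort(detections.begin(), detections.end(),
              [](const auto &a, const auto &b) {
                  if (a.second.confidence != b.second.confidence) {
                      return a.second.confidence > b.second.confidence;
                  }
                  return a.first < b.first;  // lower frame index wins ties
              });
    if (count > 0 && detections.size() > count) {
        detections.resize(count);
    }
    return detections;
}
```

A count of 0 leaves the track untouched, mirroring the "no effect" behavior described above.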
Superset Region
-
A “superset region” is the smallest region of interest that contains all of the detections for all of the frames in a track. This is also known as a “union” or “minimum bounding rectangle".
+
A “superset region” is the smallest region of interest that contains all of the detections for all of the frames in a
+track. This is also known as a “union” or “minimum bounding
+rectangle".
-
For example, consider a track representing a person moving from the upper left to the lower right. The track consists of 3 frames that have the following detection regions:
+
For example, consider a track representing a person moving from the upper left to the lower right. The track consists of
+3 frames that have the following detection regions:
Each detection region is drawn with a solid green line in the above diagram. The blue line represents the full frame region. The superset region for the track is (x = 10, y = 10, width = 20, height = 20), and is drawn with a dotted red line.
-
The major advantage of using a superset region is constant size. Some algorithms require the search space in each frame to be a constant size in order to successfully track objects.
-
A disadvantage is that the superset region will often be larger than any specific detection region, so the search space is not restricted to the smallest possible size in each frame; however, in many cases the search space will be significantly smaller than the whole frame.
-
In the worst case, a feed forward track might, for example, capture a person moving from the upper left corner of a video to the lower right corner. In that case the superset region will be the entire width and height of the frame, so SUPERSET_REGION devolves into FRAME.
-
In a more typical case, a feed forward track might capture a person moving in the upper left quadrant of a video. In that case SUPERSET_REGION is able to filter out 75% of the rest of the frame data. In the example shown in the above diagram, SUPERSET_REGION is able to filter out 83% of the rest of the frame data.
+
Each detection region is drawn with a solid green line in the above diagram. The blue line represents the full frame
+region. The superset region for the track is (x = 10, y = 10, width = 20, height = 20), and is drawn with a dotted red
+line.
+
The major advantage of using a superset region is constant size. Some algorithms require the search space in each frame
+to be a constant size in order to successfully track objects.
+
A disadvantage is that the superset region will often be larger than any specific detection region, so the search space
+is not restricted to the smallest possible size in each frame; however, in many cases the search space will be
+significantly smaller than the whole frame.
+
In the worst case, a feed forward track might, for example, capture a person moving from the upper left corner of a
+video to the lower right corner. In that case the superset region will be the entire width and height of the frame, so
+SUPERSET_REGION devolves into FRAME.
+
In a more typical case, a feed forward track might capture a person moving in the upper left quadrant of a video. In
+that case SUPERSET_REGION is able to filter out 75% of the rest of the frame data. In the example shown in the above
+diagram, SUPERSET_REGION is able to filter out 83% of the rest of the frame data.
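To illustrate, a superset region can be computed with OpenCV's rectangle union operator. OpenMPF performs this computation internally; the function below is a sketch for explanation only and is not part of its API.

```c++
#include <opencv2/core.hpp>

#include "MPFDetectionComponent.h"

using namespace MPF::COMPONENT;

// Illustrative sketch: compute the minimum area rectangle that encloses every
// detection region in a track. cv::Rect's | operator returns the smallest
// rectangle containing both operands.
cv::Rect GetSupersetRegion(const MPFVideoTrack &track) {
    cv::Rect superset;
    for (const auto &entry : track.frame_locations) {
        const MPFImageLocation &loc = entry.second;
        cv::Rect region(loc.x_left_upper, loc.y_left_upper, loc.width, loc.height);
        superset = superset.empty() ? region : (superset | region);
    }
    return superset;
}
```

Applied to the three example detections above, this union works out to the (x = 10, y = 10, width = 20, height = 20) region drawn with the dotted red line.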
-
The above video shows three faces. For each face there is an inner bounding box that moves and an outer bounding box that does not. The inner bounding box represents the face detection in that frame, while the outer bounding box represents the superset region for the track associated with that face. Note that the bounding box for each face uses a different color. The colors are not related to those used in the above diagram.
+
The above video shows three faces. For each face there is an inner bounding box that moves and an outer bounding box
+that does not. The inner bounding box represents the face detection in that frame, while the outer bounding box
+represents the superset region for the track associated with that face. Note that the bounding box for each face uses a
+different color. The colors are not related to those used in the above diagram.
MPFVideoCapture and MPFImageReader Tools
-
When developing a component, the C++ Batch Component API and Python Batch Component API include utilities that make it easier to support feed forward in your components. They work similarly, but only the C++ tools will be discussed here. The MPFVideoCapture class is a wrapper around OpenCV's cv::VideoCapture class. MPFVideoCapture works very similarly to cv::VideoCapture, except that it might modify the video frames based on job properties. From the point of view of someone using MPFVideoCapture, these modifications are mostly transparent. MPFVideoCapture makes it look like you are reading the original video file.
-
Conceptually, consider generating a new video from a feed forward track. The new video would have fewer frames (unless there was a detection in every frame) and possibly a smaller frame size.
-
For example, the original video file might be 30 frames long with 640x480 resolution. If the feed forward track found detections in frames 4, 7, and 10, then MPFVideoCapture will make it look like the video only has those 3 frames. If the feed forward type is SUPERSET_REGION or REGION, and each detection is 30x50 pixels, then MPFVideoCapture will make it look like the video's original resolution was 30x50 pixels.
-
One issue with this approach is that the detection frame numbers and bounding box will be relative to the modified video, not the original. To make the detections relative to the original video the MPFVideoCapture::ReverseTransform(MPFVideoTrack &videoTrack) function must be used.
+
When developing a component, the C++ Batch Component API and Python Batch
+Component API include utilities that make it easier to support feed forward in
+your components. They work similarly, but only the C++ tools will be discussed here. The MPFVideoCapture class is a
+wrapper around OpenCV's cv::VideoCapture class. MPFVideoCapture works very similarly to cv::VideoCapture, except
+that it might modify the video frames based on job properties. From the point of view of someone using
+MPFVideoCapture, these modifications are mostly transparent. MPFVideoCapture makes it look like you are reading the
+original video file.
+
Conceptually, consider generating a new video from a feed forward track. The new video would have fewer frames (unless
+there was a detection in every frame) and possibly a smaller frame size.
+
For example, the original video file might be 30 frames long with 640x480 resolution. If the feed forward track found
+detections in frames 4, 7, and 10, then MPFVideoCapture will make it look like the video only has those 3 frames. If
+the feed forward type is SUPERSET_REGION or REGION, and each detection is 30x50 pixels, then MPFVideoCapture will
+make it look like the video's original resolution was 30x50 pixels.
+
+One issue with this approach is that the detection frame numbers and bounding boxes will be relative to the modified
+video, not the original. To make the detections relative to the original video, the
+MPFVideoCapture::ReverseTransform(MPFVideoTrack &videoTrack) function must be used.
The general pattern for using MPFVideoCapture is as follows:
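Below is a minimal sketch of that pattern, assuming the batch component API style in which GetDetections receives an MPFVideoJob and returns the tracks; consult the C++ Batch Component API for the exact class and method signatures.

```c++
#include <vector>

#include <opencv2/core.hpp>

#include "MPFDetectionComponent.h"
#include "MPFVideoCapture.h"

using namespace MPF::COMPONENT;

// Sketch of the general MPFVideoCapture pattern: read the (possibly
// transformed) frames, build tracks in that transformed coordinate space, and
// reverse-transform each track back to the original video before returning.
std::vector<MPFVideoTrack> GetDetections(const MPFVideoJob &job) {
    MPFVideoCapture video_capture(job);
    std::vector<MPFVideoTrack> tracks;

    cv::Mat frame;
    int frame_index = 0;
    while (video_capture.Read(frame)) {
        // Run the detection algorithm on "frame" and append detections to
        // "tracks", using frame_index as the frame number.
        ++frame_index;
    }

    // Map frame indices and bounding boxes back to the original video.
    for (MPFVideoTrack &track : tracks) {
        video_capture.ReverseTransform(track);
    }
    return tracks;
}
```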
MPFVideoCapture makes it look like the user is processing the original video, when in reality they are processing a modified version. To avoid confusion, this means that MPFVideoCapture should always be returning frames that are the same size because most users expect each frame of a video to be the same size.
-
When using SUPERSET_REGION this is not an issue, since one bounding box is used for the entire track. However, when using REGION, each detection can be a different size, so it is not possible for MPFVideoCapture to return frames that are always the same size. Since this is a deviation from the expected behavior, and breaks the transparency of MPFVideoCapture, SUPERSET_REGION should usually be preferred over REGION. The REGION setting should only be used with components that explicitly state they support it (e.g. OcvDnnDetection). Those components may not perform region tracking, so processing frames of various sizes is not a problem.
-
The MPFImageReader class is similar to MPFVideoCapture, but it works on images instead of videos. MPFImageReader makes it look like the user is processing an original image, when in reality they are processing a modified version where the frame region is generated based on a detection (MPFImageLocation) fed forward from the previous stage of a pipeline. Note that SUPERSET_REGION and REGION have the same effect when working with images. MPFImageReader also has a reverse transform function.
+
MPFVideoCapture makes it look like the user is processing the original video, when in reality they are processing a
+modified version. To avoid confusion, this means that MPFVideoCapture should always be returning frames that are the
+same size because most users expect each frame of a video to be the same size.
+
When using SUPERSET_REGION this is not an issue, since one bounding box is used for the entire track. However, when
+using REGION, each detection can be a different size, so it is not possible for MPFVideoCapture to return frames
+that are always the same size. Since this is a deviation from the expected behavior, and breaks the transparency of
+MPFVideoCapture, SUPERSET_REGION should usually be preferred over REGION. The REGION setting should only be used
+with components that explicitly state they support it (e.g. OcvDnnDetection). Those components may not perform region
+tracking, so processing frames of various sizes is not a problem.
+
The MPFImageReader class is similar to MPFVideoCapture, but it works on images instead of videos. MPFImageReader
+makes it look like the user is processing an original image, when in reality they are processing a modified version
+where the frame region is generated based on a detection (MPFImageLocation) fed forward from the previous stage of a
+pipeline. Note that SUPERSET_REGION and REGION have the same effect when working with images. MPFImageReader also
+has a reverse transform function.
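The image pattern is analogous. This sketch assumes MPFImageReader exposes a GetImage() accessor for the (possibly cropped) frame data; check the C++ Batch Component API for the exact method names.

```c++
#include <vector>

#include <opencv2/core.hpp>

#include "MPFDetectionComponent.h"
#include "MPFImageReader.h"

using namespace MPF::COMPONENT;

// Sketch of the MPFImageReader pattern: detect on the transformed image, then
// reverse-transform each detection back to the original image's coordinates.
std::vector<MPFImageLocation> GetDetections(const MPFImageJob &job) {
    MPFImageReader image_reader(job);
    cv::Mat image = image_reader.GetImage();

    std::vector<MPFImageLocation> locations;
    // Run the detection algorithm on "image" and append results to "locations".

    for (MPFImageLocation &location : locations) {
        image_reader.ReverseTransform(location);
    }
    return locations;
}
```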
OpenCV DNN Component Tracking
-
The OpenCV DNN component does not generate detection regions of its own when performing classification. Its tracking behavior depends on whether feed forward is enabled or not. When feed forward is disabled, the component will process the entire region of each frame of a video. If one or more consecutive frames has the same highest confidence classification, then a new track is generated that contains those frames.
-
When feed forward is enabled, the OpenCV DNN component will process the region of each frame of feed forward track according to the FEED_FORWARD_TYPE. It will generate one track that contains the same frames as the feed forward track. If FEED_FORWARD_TYPE is set to REGION then the OpenCV DNN track will contain (inherit) the same detection regions as the feed forward track. In any case, the detectionProperties map for the detections in the OpenCV DNN track will include the CLASSIFICATION entries and possibly other OpenCV DNN component properties.
+
The OpenCV DNN component does not generate detection regions of its own when performing classification. Its tracking
+behavior depends on whether feed forward is enabled or not. When feed forward is disabled, the component will process
+the entire region of each frame of a video. If one or more consecutive frames have the same highest confidence
+classification, then a new track is generated that contains those frames.
+
+When feed forward is enabled, the OpenCV DNN component will process the region of each frame of the feed forward track
+according to the FEED_FORWARD_TYPE. It will generate one track that contains the same frames as the feed forward
+track. If FEED_FORWARD_TYPE is set to REGION then the OpenCV DNN track will contain (inherit) the same detection
+regions as the feed forward track. In any case, the detectionProperties map for the detections in the OpenCV DNN track
+will include the CLASSIFICATION entries and possibly other OpenCV DNN component properties.
Feed Forward Pipeline Examples
GoogLeNet Classification with MOG Motion Detection and Feed Forward Region
Running this pipeline will result in OpenCV DNN tracks that contain detections where there was MOG motion. Each detection in each track will have an OpenCV DNN CLASSIFICATION entry. Each track has a 1-to-1 correspondence with a MOG motion track.
-
Refer to runMogThenCaffeFeedForwardExactRegionTest() in the TestSystemOnDiff class for a system test that demonstrates this behavior. Refer to runMogThenCaffeFeedForwardSupersetRegionTest() in that class for a system test that uses SUPERSET_REGION instead. Refer to runMogThenCaffeFeedForwardFullFrameTest() for a system test that uses FRAME instead.
+
Running this pipeline will result in OpenCV DNN tracks that contain detections where there was MOG motion. Each
+detection in each track will have an OpenCV DNN CLASSIFICATION entry. Each track has a 1-to-1 correspondence with a
+MOG motion track.
+
Refer to runMogThenCaffeFeedForwardExactRegionTest() in the
+TestSystemOnDiff
+class for a system test that demonstrates this behavior. Refer to runMogThenCaffeFeedForwardSupersetRegionTest() in
+that class for a system test that uses SUPERSET_REGION instead. Refer to runMogThenCaffeFeedForwardFullFrameTest()
+for a system test that uses FRAME instead.
-
NOTE: Short and/or spurious MOG motion tracks will result in more overhead work when performing feed forward. To mitigate this, consider setting the MERGE_TRACKS, MIN_GAP_BETWEEN_TRACKS, and MIN_TRACK_LENGTH properties to generate longer motion tracks and discard short and/or spurious motion tracks.
-
NOTE: It doesn’t make sense to use FEED_FORWARD_TOP_CONFIDENCE_COUNT on a pipeline stage that follows a MOG or SuBSENSE motion detection stage. That’s because those motion detectors don’t generate tracks with confidence values. Instead, FEED_FORWARD_TOP_CONFIDENCE_COUNT could potentially be used when feeding person tracks into a face detector, for example, if those person tracks have confidence values.
+
NOTE: Short and/or spurious MOG motion tracks will result in more overhead work when performing feed forward. To
+mitigate this, consider setting the MERGE_TRACKS, MIN_GAP_BETWEEN_TRACKS, and MIN_TRACK_LENGTH properties to
+generate longer motion tracks and discard short and/or spurious motion tracks.
+
NOTE: It doesn’t make sense to use FEED_FORWARD_TOP_QUALITY_COUNT on a pipeline stage that follows a MOG or
+SuBSENSE motion detection stage. That’s because those motion detectors don’t generate tracks with confidence values
+(CONFIDENCE being the default value for the QUALITY_SELECTION_PROPERTY job property). Instead,
+FEED_FORWARD_TOP_QUALITY_COUNT could potentially be used when feeding person tracks into a face detector, for
+example, if the detections in those person tracks have the requested QUALITY_SELECTION_PROPERTY set.
OCV Face Detection with MOG Motion Detection and Feed Forward Superset Region
Running this pipeline will result in OCV face tracks that contain detections where there was MOG motion. Each track has a 1-to-1 correspondence with a MOG motion track.
-
Refer to runMogThenOcvFaceFeedForwardRegionTest() in the TestSystemOnDiff class for a system test that demonstrates this behavior.
+
Running this pipeline will result in OCV face tracks that contain detections where there was MOG motion. Each track has
+a 1-to-1 correspondence with a MOG motion track.
+
Refer to runMogThenOcvFaceFeedForwardRegionTest() in the
+TestSystemOnDiff
+class for a system test that demonstrates this behavior.
NOTICE: This software (or technical data) was produced for the U.S. Government under contract, and is subject to the
+Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.
+
Introduction
+
There are a few places in OpenMPF where the quality of a detection comes into play. Here, "detection quality" is defined
+as a measure of how "good" a detection is, which can be used to rank the detections in a track from highest to
+lowest quality. In many cases, components use "confidence" as an indicator of quality; however, some
+components do not compute a confidence value for their detections, and others compute a different
+value that is a better measure of quality for that detection algorithm. As discussed in the next section, OpenMPF uses
+detection quality for a variety of purposes.
+
Quality Selection Properties
+
QUALITY_SELECTION_PROPERTY is a string that defines the name of the property to use for quality selection. For
+example, a face detection component may generate detections with a DESCRIPTOR_MAGNITUDE property that represents the
+quality of the face embedding and how useful it is for reidentification. The Workflow Manager will search the
+detection_properties map in each detection and track for that key and use the corresponding value as the detection
+quality. The value associated with this property must be an integer or floating point value, where higher values
+indicate higher quality.
+
One exception is when this property is set to CONFIDENCE and no CONFIDENCE property exists in the
+detection_properties map. In that case, the confidence member of each detection and track is used instead.
+
The primary way in which OpenMPF uses detection quality is to determine the track "exemplar", which is the highest
+quality detection in the track. For components that do not compute a quality value, or where all detections have
+identical quality, the Workflow Manager will choose the first detection in the track as the exemplar.
+
QUALITY_SELECTION_THRESHOLD is a numerical value used for filtering out low quality detections and tracks. All
+detections below this threshold are discarded, and if all the detections in a track are discarded, then the track itself
+is also discarded. Note that some components do this filtering themselves, while others leave it to the Workflow Manager
+to do the filtering. Thresholding can be effectively disabled by setting this threshold to a value less than the
+lowest possible value. For example, if the detection quality value computed by a component has values in the range 0 to
+1, then setting the threshold property to -1 will result in all detections and all tracks being retained.
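The following C++ sketch shows the kind of filtering described above. It is illustrative only; the GetQuality helper is a stand-in for looking up the value named by QUALITY_SELECTION_PROPERTY, falling back to the detection's confidence member when no such entry exists.

```c++
#include <algorithm>
#include <string>
#include <vector>

#include "MPFDetectionComponent.h"

using namespace MPF::COMPONENT;

// Stand-in quality lookup: use the named detection property if present,
// otherwise fall back to the confidence member.
double GetQuality(const MPFImageLocation &loc, const std::string &quality_prop) {
    auto it = loc.detection_properties.find(quality_prop);
    return it != loc.detection_properties.end() ? std::stod(it->second) : loc.confidence;
}

// Drop detections below the threshold, then drop tracks left with no detections.
void DropLowQuality(std::vector<MPFVideoTrack> &tracks,
                    const std::string &quality_prop, double threshold) {
    for (MPFVideoTrack &track : tracks) {
        for (auto it = track.frame_locations.begin(); it != track.frame_locations.end(); ) {
            if (GetQuality(it->second, quality_prop) < threshold) {
                it = track.frame_locations.erase(it);
            } else {
                ++it;
            }
        }
    }
    tracks.erase(std::remove_if(tracks.begin(), tracks.end(),
                                [](const MPFVideoTrack &t) { return t.frame_locations.empty(); }),
                 tracks.end());
}
```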
+
FEED_FORWARD_TOP_QUALITY_COUNT can be used to select the number of detections to include in a feed-forward track. For
+example, if set to 10, only the top 10 highest quality detections are fed forward to the downstream component for that
+track. If fewer than 10 detections meet the QUALITY_SELECTION_THRESHOLD, then only that many detections are fed
+forward. Refer to the Feed Forward Guide for more information.
+
ARTIFACT_EXTRACTION_POLICY_TOP_QUALITY_COUNT can be used to select the number of detections that will be used to
+extract artifacts. For example, if set to 10, the detections in a track will be sorted by their detection quality value,
+and then the artifacts for the 10 detections with the highest quality will be extracted. If fewer than 10 detections meet
+the QUALITY_SELECTION_THRESHOLD, then only that many artifacts will be extracted.
+
Hybrid Quality Selection
+
In some cases, there may be a detection property that a component would like to use as a measure of quality but it
+doesn't lend itself to simple thresholding. For example, a face detector might be able to calculate the face pose, and
+would like to select faces that are in the most frontal pose as the highest quality detections. The yaw of the face pose
+may be used to indicate this, but if its values range from, say, -90 degrees to +90 degrees, then the highest quality
+detection would be the one with a yaw value closest to 0. This conflicts with the requirement that the quality selection
+property take on a range of values where the highest value indicates the highest quality.
+
Another use case might be where the component would like to choose detections based on a set of quality values, or
+properties. Continuing with the face pose example, the component might like to designate the detection with pose closest
+to frontal as the highest quality, but would also like to assign high quality to detections where the pose is closest to
+profile, meaning values of yaw closest to -90 or +90 degrees.
+
In both of these cases, the component can create a custom detection property that is used to rank these detections as it
+sees fit. It could use a detection property called RANK, and assign values to that property to rank the detections
+from lowest to highest quality. In the example of the face detector wanting to use the yaw of the face pose, the
+detection with a value of yaw closest to 0 would be assigned a RANK property with the highest value, and the
+detections with values of yaw closest to +/-90 degrees would be assigned the second and third highest values of RANK.
+Detections without the RANK property would be treated as having the lowest possible quality value. Thus, the track
+exemplar would be the face with the frontal pose, and the ARTIFACT_EXTRACTION_POLICY_TOP_QUALITY_COUNT property could
+be set to 3, so that the frontal and two profile pose detections would be kept as track artifacts.
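As a sketch of the frontal-pose case, a component could populate a hypothetical RANK detection property as follows. The property name, the 180-degree scale, and the helper function are illustrative choices, not part of any existing component; a job would then set QUALITY_SELECTION_PROPERTY to RANK.

```c++
#include <cmath>
#include <string>

#include "MPFDetectionComponent.h"

using namespace MPF::COMPONENT;

// Illustrative sketch: map yaw so that a frontal face (yaw near 0 degrees)
// receives the highest RANK value and profile faces receive lower values.
void AssignFrontalRank(MPFImageLocation &detection, double yaw_degrees) {
    double rank = 180.0 - std::fabs(yaw_degrees);  // 0 deg -> 180, +/-90 deg -> 90
    detection.detection_properties["RANK"] = std::to_string(rank);
}
```

Note that quality values are compared numerically, so whatever string is stored in the property must parse as an integer or floating point number.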
Updated the workflow manager (WFM) and all video components to optionally perform feed forward processing for batch
+
Updated the Workflow Manager (WFM) and all video components to optionally perform feed forward processing for batch
jobs. This allows tracks to be passed forward from one pipeline stage to the next. Components in the next stage will
only process the frames associated with the detections in those tracks. This differs from the default segmenting
behavior, which does not preserve detection regions or track information between stages.
This page allows a user to view the various log files that are generated by system processes running on the various nodes in the OpenMPF cluster. A log file can be selected by first selecting a host from the "Available Hosts" drop-down and then selecting a log file from the "Available Logs" drop-down. The information in the log can be filtered for display based on the following log levels: ALL, TRACE, DEBUG, INFO, WARN, ERROR, or FATAL. Choosing a successive log level displays all information at that level and levels below (e.g., choosing WARN will cause all WARN, INFO, DEBUG, and TRACE information to be displayed, but will filter out ERROR and FATAL information).
In general, all services of the same component type running on the same node write log messages to the same file. For example, all OCV face detection services on somehost-7-mpfd2 write log messages to the same "ocv-face-detection" log file. All OCV face detection services on somehost-7-mpfd3 write log messages to a different "ocv-face-detection" log file.
-
Note that only the master node will have the "workflow-manager" log. This is because the workflow manager only runs on the master node.
+
Note that only the master node will have the "workflow-manager" log. This is because the Workflow Manager only runs on the master node.
The "node-manager-startup" and "node-manager" logs will appear for every node in a non-Docker OpenMPF cluster. The "node-manager-startup" log captures information about the nodemanager startup process, such as if any errors occurred. The "node-manager" log captures information about node manager execution, such as starting and stopping services.
The "detection" log captures information about initializing C++ detection components and how they handle job request and response messages.
Properties Settings
This page allows a user to view the various OpenMPF properties configured automatically or by an admin user:
Statistics
-
The "Jobs" tab on this page allows a user to view a bar graph representing the time it took to execute the longest running job for a given pipeline. Pipelines that do not have bars have not been used to run any jobs yet. Job statistics are preserved when the workflow manager is restarted.
+
The "Jobs" tab on this page allows a user to view a bar graph representing the time it took to execute the longest running job for a given pipeline. Pipelines that do not have bars have not been used to run any jobs yet. Job statistics are preserved when the Workflow Manager is restarted.
For example, the DLIB FACE DETECTION PIPELINE was run twice. Note that the Y-axis in the bar graph has a logarithmic scale. Hovering the mouse over any bar in the graph will show more information. Information about each pipeline is listed below the graph.
-
The "Processes" tab on this page allows a user to view a table with information about the runtime of various internal workflow manager operations. The "Count" field represents the number of times each operation was run. The min, max, and mean are calculated over the set of times each operation was performed. Runtime information is reset when the workflow manager is restarted.
+
The "Processes" tab on this page allows a user to view a table with information about the runtime of various internal Workflow Manager operations. The "Count" field represents the number of times each operation was run. The min, max, and mean are calculated over the set of times each operation was performed. Runtime information is reset when the Workflow Manager is restarted.
REST API
-
This page allows a user to try out the various REST API endpoints provided by the workflow manager. It is intended to serve as a learning tool for technical users who wish to design and build systems that interact with the OpenMPF.
+
This page allows a user to try out the various REST API endpoints provided by the Workflow Manager. It is intended to serve as a learning tool for technical users who wish to design and build systems that interact with the OpenMPF.
After selecting a functional category, such as "meta", "jobs", "statistics", "nodes", "pipelines", or "system-message", each REST endpoint for that category is shown in a list. Selecting one of them will cause it to expand and reveal more information about the request and response structures. If the request takes any parameters then a section will appear that allows the user to manually specify them.
In the example above, the "/rest/jobs/{id}" endpoint was selected. It takes a required "id" parameter that corresponds to a previously run job and returns a JSON representation of that job's information. The screenshot below shows the result of specifying an "id" of "1", providing the "mpf" user credentials when prompted, and then clicking the "Try it out!" button:
diff --git a/docs/site/search/search_index.json b/docs/site/search/search_index.json
index 7580b76465e7..6688c105358c 100644
--- a/docs/site/search/search_index.json
+++ b/docs/site/search/search_index.json
@@ -12,7 +12,7 @@
},
{
"location": "/Release-Notes/index.html",
- "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nOpenMPF 8.0.x\n\n\n8.0.0: December 2023\n\n\n\nDocumentation\n\n\n\n\n\nCreated a new \nOpenID Connect Guide\n.\n\n\nUpdated the \nAdmin Guide\n and \nUser Guide\n to remove\n \n/workflow-manager\n from the Workflow Manager base URL. The Admin Guide includes a section for the new Hawtio web\n console.\n\n\nUpdated the \nREST API\n to use path parameters for pipelines, tasks, actions, and algorithms\n endpoints.\n\n\nUpdated the \nComponent Descriptor Reference\n with \nalgorithm.trackType\n.\n\n\nUpdated the \nC++ Batch Component API\n, \nPython Batch Component\n API\n, and \nJava Batch Component API\n to\n remove the ability to get the detection type since track type is now specified in \ndescriptor.json\n.\n\n\nCreated a new \nTrigger Guide\n.\n\n\nCreated a new \nRoll Up Guide\n.\n\n\n\n\nOpenID-Connect (OIDC) Authentication\n\n\n\n\n\nThe Workflow Manager can now optionally use an OpenID Connect (OIDC) provider to handle authentication for users of\n the web UI and clients of the REST API. The URI for the OIDC provider is specified using the \nOIDC_ISSUER_URI\n\n environment variable.\n\n\nWhen enabled, OIDC is used to authenticate components when they register with the Workflow Manager.\n\n\nWhen \nCALLBACK_USE_OIDC\n is set to \ntrue\n, the Workflow Manager will send a token in job request callbacks.\n\n\nWhen \nTIES_DB_USE_OIDC\n is set to \ntrue\n, the Workflow Manager will send a token when posting to a TiesDb server.\n\n\nWhen OIDC is not enabled, the Workflow Manager uses basic authentication with usernames and passwords, as in previous\n versions of OpenMPF.\n\n\nRefer to the \nOpenID Connect Guide\n for more information on the various OIDC\n environment variables and a Keycloak example.\n\n\n\n\nEmbedded ActiveMQ Broker and Hawtio\n\n\n\n\n\nActiveMQ is now part of the Workflow Manager Spring Boot web application and is no longer run as a separate Docker\n service. This enables ActiveMQ to integrate with Spring Security so it can be protected by the Workflow Manager's OIDC\n support.\n\n\nThe Workflow Manager is the sender or recipient of all ActiveMQ messages, so embedding ActiveMQ in the Workflow\n Manager prevents a network hop on all messages.\n\n\nThe ActiveMQ management page has been replaced by \nHawtio\n, which is more feature rich and can be\n used to monitor the state of the ActiveMQ queues used for communication between the Workflow Manager and the\n components. The Hawtio web console can be accessed by selecting \"Hawtio\" from the \"Configuration\" dropdown menu in the\n top menu bar of the web UI.\n\n\nImportantly, the base URL for the Workflow Manager is now http://localhost:8080 instead of\n http://localhost:8080/workflow-manager. \n/workflow-manager\n is no longer part of the path. This change was made to\n enable Hawtio integration.\n\n\n\n\nREST API Updates\n\n\n\n\n\nThe following changes have been made to the REST endpoints to address a limitation with Swagger (OpenAPI). 
These\n changes enable the REST endpoints to properly show up in the Swagger page, which is accessed by selecting \"REST API\"\n from the \"Configuration\" dropdown menu in the top menu bar of the web UI.\n\n\n\n\n\n\n\n\n\n\nOld REST Endpoint\n\n\nNew REST Endpoint\n\n\n\n\n\n\n\n\n\n\n[GET] /rest/pipelines?name={name}\n\n\n[GET] /rest/pipelines/{name}\n\n\n\n\n\n\n[GET] /rest/tasks?name={name}\n\n\n[GET] /rest/tasks/{name}\n\n\n\n\n\n\n[GET] /rest/actions?name={name}\n\n\n[GET] /rest/actions/{name}\n\n\n\n\n\n\n[GET] /rest/algorithms?name={name}\n\n\n[GET] /rest/algorithms/{name}\n\n\n\n\n\n\n[DELETE] /rest/pipelines?name={name}\n\n\n[DELETE] /rest/pipelines/{name}\n\n\n\n\n\n\n[DELETE] /rest/tasks?name={name}\n\n\n[DELETE] /rest/tasks/{name}\n\n\n\n\n\n\n[DELETE] /rest/actions?name={name}\n\n\n[DELETE] /rest/actions/{name}\n\n\n\n\n\n\n\n\n\n\nIn general, the name is now specified as part of the URL path instead of as a URL parameter.\n\n\n/\n and \n;\n characters are no longer allowed in these names.\n\n\n\n\nPackaging and Deployment\n\n\n\n\n\nEach component's \ndescriptor.json\n now requires an \nalgorithm.trackType\n field. This is used by the Workflow Manager\n to determine the kind of tracks that may be generated by the component (e.g. \nFACE\n, \nTEXT\n, \nCLASS\n, etc.). This is\n now used in place of the component API calls that were used to get the detection type. \n\n\n\n\nComponent API Updates\n\n\n\n\n\nThe following changes were made since the track type is now part of each component's \ndescriptor.json\n:\n\n\nRemoved \nGetDetectionType()\n from the CPP Component API.\n\n\nRemoved \ndetection_type\n from the Python Component API.\n\n\nRemoved \ngetDetectionType()\n from the Java Component API.\n\n\n\n\n\n\n\n\nChanges to JSON Output Object\n\n\n\n\n\nNew JSON output objects use \naction\n instead of \nsource\n in the track type group. Also, \nsource\n is removed from each track.\n\n\nConsider this example of the old JSON output:\n\n\n\n\n\"output\": {\n \"FACE\": [\n {\n \"source\": \"+#MOG MOTION DETECTION (WITH AUTO-ORIENTATION) PREPROCESSOR ACTION#OCV FACE DETECTION (WITH AUTO-ORIENTATION) ACTION\",\n \"algorithm\": \"FACECV\",\n \"tracks\": [\n {\n \"id\": \"4bcba9b95b92a5115b7da1097fcffa962480d0b4424a656772bef12161d775c1\",\n \"startOffsetFrame\": 0,\n \"stopOffsetFrame\": 0,\n \"startOffsetTime\": 0,\n \"stopOffsetTime\": 0,\n \"type\": \"FACE\",\n \"source\": \"+#MOG MOTION DETECTION (WITH AUTO-ORIENTATION) PREPROCESSOR ACTION#OCV FACE DETECTION (WITH AUTO-ORIENTATION) ACTION\",\n \"confidence\": 8.799637,\n ...\n\n\n\n\n\nThe corresponding new JSON output is:\n\n\n\n\n\"output\": {\n \"FACE\": [\n {\n \"action\": \"OCV FACE DETECTION (WITH AUTO-ORIENTATION) ACTION\",\n \"algorithm\": \"FACECV\",\n \"tracks\": [\n {\n \"id\": \"4bcba9b95b92a5115b7da1097fcffa962480d0b4424a656772bef12161d775c1\",\n \"startOffsetFrame\": 0,\n \"stopOffsetFrame\": 0,\n \"startOffsetTime\": 0,\n \"stopOffsetTime\": 0,\n \"type\": \"FACE\",\n \"confidence\": 8.799637,\n ...\n\n\n\nTrigger Support\n\n\n\n\n\nA \nTRIGGER\n property can now be added to any action in a pipeline. It will only be used if \nFEED_FORWARD_TYPE\n is\n provided and set to something other than \nNONE\n. The \nTRIGGER\n property is used to conditionally control if the\n Workflow Manager executes that action. Each feed-forward track that is not executed is passed to the next stage of the\n pipeline. This results in skipping untriggered actions.\n\n\nThe value of \nTRIGGER\n takes the form \n=[;...]\n. 
For example, if the value is\n \nCLASSIFICATION=car\n then the Workflow Manager would only execute the associated action using feed-forward tracks from\n the previous stage in the pipeline if those tracks have the \nCLASSIFICATION\n track property with a value of \ncar\n.\n This could be useful to skip a license plate detection action. To enable the action to trigger on more than just \ncar\n\n tracks you can provide a list of valid values. For example, \nCLASSIFICATION=car;truck;bus\n.\n\n\nThe \nTrigger Guide\n goes into more detail and provides an example of a pipeline with\n multiple speech-to-text stages. \nTRIGGER\n is used to select which speech-to-text algorithm is executed based on the\n detected language in the media.\n\n\n\n\nRoll Up Support\n\n\n\n\n\nThe Workflow Manager can be configured to replace the values of track and detection properties\n after receiving tracks and detections from a component. For example, the \nCLASSIFICATION\n property\n may be set to \"car\", \"bus\", and \"truck\". Those can be rolled up into \"vehicle\".\n\n\nTo use this feature, set the \nROLL_UP_FILE\n property to the path of a JSON file that matches\n the format of this example:\n\n\n\n\n[\n {\n \"propertyToProcess\": \"CLASSIFICATION\",\n \"originalPropertyCopy\": \"ORIGINAL CLASSIFICATION\",\n \"groups\": [\n {\n \"rollUp\": \"vehicle\",\n \"members\": [\n \"truck\",\n \"car\",\n \"bus\"\n ]\n }\n ]\n }\n]\n\n\n\n\n\nRefer to the \nRoll Up Guide\n for an explanation and more details.\n\n\n\n\nChanged All \"whitelist\" References to \"allow list\"\n\n\n\n\n\nIn an effort to be more culturally sensitive, all references to \"whitelist\" have been removed or renamed to \"allow\n list\".\n\n\nThe \nwhitelist.\n prefix has been removed from the entries in the \nmediaType.properties\n file. For example,\n \nwhitelist.image/gif=VIDEO\n is now \nimage/gif=VIDEO\n.\n\n\nThe OcvDnnDetection component \nFEED_FORWARD_WHITELIST_FILE\n property has been renamed to\n \nFEED_FORWARD_ALLOW_LIST_FILE\n.\n\n\nThe OcvYoloDetection component \nCLASS_WHITELIST_FILE\n property has been renamed to \nCLASS_ALLOW_LIST_FILE\n.\n\n\n\n\nArgos Translation Component\n\n\n\n\n\nThis new component utilizes \nArgos Translate\n to translate input\n text from a given source language to English. It can be used in a feed-forward pipeline to process tracks with\n language and/or script identifiers from an upstream stage.\n\n\nRefer to the \nREADME\n for\n details.\n\n\n\n\nWhisper Speech-to-Text and Translation Component\n\n\n\n\n\nThis new component utilizes \nOpenAI Whisper\n to perform language detection,\n speech-to-text transcription, or speech translation.\n\n\nIf multiple languages are spoken in a single piece of media, language detection will detect only one of them.\n\n\nNote that Whisper is not designed to return a transcription in the source language when performing translation, so we\n implemented the component to perform an additional transcribe call when configured to perform translation.\n\n\nRefer to the \nREADME\n\n for details.\n\n\n\n\nContrastive Language\u2013Image Pre-training (CLIP) Component\n\n\n\n\n\nThis new component utilizes \nCLIP\n to classify images using the 80 COCO classes, 1000\n ImageNet classes, or a list of user-provided classes. It can run on a CPU or GPU, and can make calls to an NVIDIA\n Triton inference server.\n\n\nClassification is performed by taking the class names and filling in one or more text prompts. For example, \"a photo\n of {}\", where \"{}\" can be \"dog\" or \"cat\". 
An embedding is generated using the text prompt(s) for each class and\n compared against the image embedding to get a match score. Optionally, users can provide a list of their own text\n prompts.\n\n\nOpenAI trained the CLIP model using a wide variety of images and their respective captions from the Internet. This may\n make it suitable for a wide variety of classification tasks without further training (known as zero-shot\n classification). For example, a user could make up a list of classes for arbitrary objects like \"walrus\", \"paperclip\",\n \"pizza\", etc., and use the default text prompts.\n\n\nIt is also possible to use CLIP to classify concepts like scenes and sentiment. For example, using a text prompt of \"a\n {} scene\" where the classes are \"safe\", \"violent\", and \"dangerous\".\n\n\nOptionally, the CLIP component can return the image embedding as the track \nFEATURE\n. For example, this can be used\n for search and retrieval tasks by comparing it to other embeddings enrolled in a database.\n\n\nRefer to the \nREADME\n for\n details.\n\n\n\n\nFeatures\n\n\n\n\n\n[\n#1547\n] Create Argos translation component\n\n\n[\n#1574\n] Update the WFM to support an optional \nTRIGGER\n property on any action\n\n\n[\n#1598\n] Create a Whisper component for speech-to-text and and translation\n\n\n[\n#1644\n] Create CLIP component for processing images\n\n\n[\n#1704\n] Update Workflow Manager to authenticate users and REST clients using OIDC\n\n\n[\n#1730\n] Update Workflow Manager to optionally use OIDC when sending callbacks and posting to TiesDb\n\n\n[\n#1733\n] Update Workflow Manager to use an embedded ActiveMQ broker\n\n\n[\n#1793\n] Add Roll Up support to Workflow Manager\n\n\n\n\nUpdates\n\n\n\n\n\n[\n#799\n] Avoid unnecessary serialization between Camel routes\n\n\n[\n#949\n] Change \n/pipelines?name=MYPIPELINE\n REST endpoint to \n/pipelines/MYPIPELINE\n\n\n[\n#1643\n] Remove \nLONG_SPEAKER_ID\n and instead only use \nSPEAKER_ID\n\n\n[\n#1645\n] Refactor camel code\n\n\n[\n#1705\n] Change all references to \"whitelist\" to \"allow list\" and \"blacklist\" to \"block list\"\n\n\n[\n#1759\n] Disable markup animation by default\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1642\n] \nInProgressBatchJobsService.setProcessedAction\n is now called when a previous task produces no tracks\n\n\n[\n#1755\n] The Workflow Manager logs page does not properly handle multi-byte characters\n\n\n\n\nOpenMPF 7.2.x\n\n\n7.2.6: January 2024\n\n\n\nDocumentation\n\n\n\n\n\nCreated a new \nHealth Check Guide\n.\n\n\n\n\nHealth Check Support\n\n\n\n\n\nThe C++ and Python component executors can be configured to run health checks on components prior to running jobs.\n Health checks are configured using environment variables:\n\n\nHEALTH_CHECK\n: When set to \"ENABLED\", the component executor will run health checks.\n\n\nHEALTH_CHECK_TIMEOUT\n: When set to a positive integer, specifies the minimum number of seconds between health\n checks. When absent or set to 0, a health check will run before every job.\n\n\nHEALTH_CHECK_RETRY_MAX_ATTEMPTS\n: When set to a positive integer, specifies the number of consecutive health\n check failures that will cause the component service to exit. When absent or set to 0, the component service will\n never exit because of a failed health check.\n\n\n\n\n\n\nAlso, an INI file must be provided at \n$MPF_HOME/plugins//health/health-check.ini\n. 
For example:\n\n\n\n\nmedia=$MPF_HOME/plugins/OcvFaceDetection/health/meds_faces_image.png\nmin_num_tracks=2\nmedia_type=IMAGE\n\n[job_properties]\nJOB PROP1=VALUE1\nJOB PROP2=VALUE2\n\n[media_properties]\nMEDIA PROP=MEDIA VALUE\n\n\n\n\n\nRefer to the \nHealth Check Guide\n for an explanation and more details.\n\n\n\n\nFeatures\n\n\n\n\n\n[\n#1731\n] Implement health checks for C++ and Python components\n\n\n\n\nUpdates\n\n\n\n\n\n[\n#1727\n] Update ffmpeg to 6.1\n\n\n\n\n7.2.5: November 2023\n\n\n\nUpdates\n\n\n\n\n\n[\n#1715\n] Upgrade ActiveMQ to 5.17.6\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1711\n] When selecting detections with the highest confidence,\n Workflow Manager should consistently handle detections with equal confidence\n\n\n\n\n7.2.4: September 2023\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1707\n] Fix bug where TiesDB check status reports\n \nNO_TIES_DB_URL_IN_JOB\n instead of \nMEDIA_MIME_TYPES_ABSENT\n\n\n\n\n7.2.3: June 2023\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1697\n] Prevent OcvYoloDetection component from deadlocking on\n strange frame sizes when using Triton\n\n\n\n\n7.2.2: June 2023\n\n\n\nUpdates\n\n\n\n\n\n[\n#1693\n] Add property to enable/disable SAS in AzureSpeech\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1695\n] Fix memory leak in KeywordTagging component\n\n\n\n\n7.2.1: June 2023\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1678\n] Fix bug where ffmpeg hangs when processing some kinds of\n unsupported/corrupted media\n\n\n\n\n7.2.0: May 2023\n\n\n\nDocumentation\n\n\n\n\n\nCreated a new \nTiesDb Guide\n.\n\n\nUpdated the \nComponent Descriptor Reference\n with \noutputChangedCounter\n.\n\n\nUpdated the \nREST API\n with a new \n[POST] /rest/jobs/tiesdbrepost\n endpoint.\n\n\nUpdated the REST API \n[POST] /rest/jobs\n response with \ntiesDbCheckStatus\n and \noutputObjectUri\n.\n\n\n\n\nTiesDb Re-Post\n\n\n\n\n\nAdded a new \n[POST] /rest/jobs/tiesdbrepost\n endpoint that accepts an array of job ids as an input and will attempt to\n re-post the job assertions (records) to TiesDb for each one. \n\n\nAdded a \"TiesDb\" column to the Job Status page. If there is a problem posting a record to the TiesDb server the column\n will contain an \"ERROR\" button. Clicking on it will provide a description of the error and a button that can be used\n to re-post the associated job records.\n\n\n\n\nTiesDb Checking\n\n\n\n\n\nIf the \nTIES_DB_URL\n job property or \nties.db.url\n system property is set when submitting a job creation request, \n then the Workflow Manager will attempt to check TiesDb for existing job results before running the job again.\n\n\nThe Workflow Manager will attempt to use the most-recently-created job results, preferring jobs that completed without\n errors or warnings, and preferring jobs that completed with warnings over completed with errors.\n\n\nTo prevent this check, set \nSKIP_TIES_DB_CHECK=true\n. That will force the job to run and attempt to post the new\n job results to TiesDb.\n\n\nWhen using TiesDb, we strongly recommend providing both the \nMEDIA_HASH\n and \nMIME_TYPE\n in the \nmedia.metadata\n map\n in the job request. This will enable the Workflow Manager to skip media inspection. When using S3 object storage, this\n means that the Workflow Manager will not need to download the media before checking TiesDb for existing job records.\n\n\nThe \n[POST] /rest/jobs\n response now contains a \ntiesDbCheckStatus\n and \noutputObjectUri\n field. 
\ntiesDbCheckStatus\n\n will be set to one of the following values:\n\n\nNOT_REQUESTED\n\n\nNO_TIES_DB_URL_IN_JOB\n\n\nMEDIA_HASHES_ABSENT\n\n\nMEDIA_MIME_TYPES_ABSENT\n\n\nNO_MATCH\n\n\nFOUND_MATCH\n\n\n\n\n\n\nWhen there is a \nFOUND_MATCH\n, the \noutputObjectUri\n will be set to the URI of the old TiesDb record if S3 copy is\n not enabled.\n\n\nBy default, the \nties.db.s3.copy.enabled\n system property is set to \ntrue\n. This means that the Workflow Manager will\n attempt to copy all of the artifacts, markup, and derivative media associated with the job in TiesDb from the S3\n locations associated with the old job to the new S3 location specified in the new job. A new JSON output object will\n be generated. To disable this behavior set the system property, or \nTIES_DB_S3_COPY_ENABLED\n, to \nfalse\n. Then the\n Workflow Manager will simply provide a link to the old JSON as the result of the new job.\n\n\nIf there is a problem copying between S3 locations, the \"TiesDb\" column to the Job Status page will show a\n \"COPY ERROR\" button. Clicking on it will provide a description of the error.\n\n\n\n\nTiesDb Linked Media\n\n\n\n\n\nAdded support for \nLINKED_MEDIA_HASH\n in the \nmedia.properties\n section of the job creation request. When specified,\n the value of \nLINKED_MEDIA_HASH\n will be used instead of the actual media hash when creating a record in TiesDb,\n and also when looking for existing records in TiesDb.\n\n\nThis feature can be used to submit a transcoded (or thumbnail) version of an image to process instead of the source\n image. For example, the source image may be in a format not supported by OpenMPF. In this case, the value of\n \nLINKED_MEDIA_HASH\n can be set to the source image, but the rest of the job creation request would specify\n the \nmedia.mediaUri\n and \nmedia.metadata\n for the transcoded version of that image.\n\n\n\n\nOutput Changed Counter\n\n\n\n\n\nAdded the \noutput.changed.counter\n system property to the Workflow Manager and \noutputChangedCounter\n field to each\n component's \ndescriptor.json\n. These values are used when calculating the hash for a job when its record is posted to\n TiesDb, and also when checking TiesDb for existing records when a new job is submitted.\n\n\nIf the Workflow Manager is updated for any reason that should invalidate pre-existing job results, such as a\n change to the fields in the JSON output object, or significant improvements to track merging, for example, then the\n value of \noutput.changed.counter\n should be incremented by one. This will ensure that records in TiesDb will not be\n used so that all future jobs will need to be (re)run at least once until the counter is incremented again.\n\n\nThe same is true for each component. If a component is updated for any reason that should invalidate\n pre-existing job results, such as changes to input or output properties, or substantial improvements to the algorithm,\n then the value of \noutputChangedCounter\n should be incremented by one.\n\n\n\n\nChanges to JSON Output Object\n\n\n\n\n\nNew JSON output objects will include \ntiesDbSourceJobId\n and \ntiesDbSourceMediaPath\n when the Workflow Manager can use\n previous job results stored in TiesDB. 
Note that the Workflow Manager will not generate a new JSON output object unless `S3_RESULTS_BUCKET` is set to a valid value, S3 access and secret keys are provided, and `TIES_DB_S3_COPY_ENABLED=true`.

#### ffprobe for Media Inspection

- The Workflow Manager media inspection behavior now uses `ffprobe` with `-print_format json` to return more precise `FPS` values for the `media.mediaMetadata` in the JSON output object. For example, the previous version of the Workflow Manager would return `29.97`, whereas the new version will return `29.97002997002997`. In multi-hour-long videos this can prevent cases where the last few frames were being ignored.
- The previous version of the Workflow Manager was using both `ffmpeg` and OpenCV to determine the number of frames in a video. We removed the OpenCV frame counter in this version because the `ffprobe` approach is more accurate. The `ffprobe` command replaces the old `ffmpeg` command.

#### Web User Interface

- Updated the Job Status page to be more efficient. Searching a database of hundreds of thousands of jobs takes a long time. By limiting the search to one page of results at a time, the UI is more responsive.
- Removed timeout and bootout. The user session will no longer automatically end due to a timeout, or due to the same user logging in from a different host or browser. These behaviors were deemed too disruptive by end users.
- Updated the Job Status page to include a "TiesDb" column that reports TiesDb status, such as when posting records to TiesDb and when retrieving existing records.

#### Features

- [#1438] Create a REST endpoint that will attempt to re-post to TiesDb
- [#1613] Check TiesDb before running a job
- [#1650] Create TiesDb records for thumbnail jobs under the parent media

#### Updates

- [#1342] Use ffprobe to get FPS during media inspection
- [#1564] Use ffprobe's JSON output instead of regexes during media inspection
- [#1601] Update the Workflow Manager jobs table to be more efficient
- [#1611] Remove Workflow Manager timeout and bootout behavior

## OpenMPF 7.1.x

### 7.1.12: March 2023

#### Bug Fixes

- [#1667] Handle Webp files with extra data at the end that cause components to crash

### 7.1.10: March 2023

#### Updates

- [#1662] Monitor StorageBackend

### 7.1.9: February 2023

#### Bug Fixes

- [#1675] Prevent upgrade of cudnn in yolo server dockerfile

### 7.1.8: February 2023

#### Bug Fixes

- [#1649] Install specific version of libcudnn8 in Docker build

### 7.1.7: February 2023

#### Updates

- [#1674] Update `SPEAKER_ID` logic, set `LONG_SPEAKER_ID=0`

### 7.1.5: January 2023

#### Features

- [#1542] Update Azure Speech Detection component to select transcription language based on feed-forward track
- [#1543] Update audio transcoder to accept subsegments
- [#1605] Update Azure Translation to use detected language from upstream

### 7.1.1: December 2022

#### Bug Fixes

- [#1634] Update version numbers to 7.1

### 7.1.0: December 2022

#### Documentation

- Updated the Object Storage Guide with `S3_UPLOAD_OBJECT_KEY_PREFIX`.
- Updated the Markup Guide with `MARKUP_TEXT_LABEL_MAX_LENGTH`.

#### Exemplar Selection Policy

- The policy for selecting the exemplar detection for each track can now be set using the `EXEMPLAR_POLICY` job property with the following values:
    - `CONFIDENCE`: Select the detection with the maximum confidence. If some confidences are the same, select the detection with the lower frame number. This is the default setting.
    - `FIRST`: Select the detection with the lowest frame number
    - `LAST`: Select the detection with the highest frame number
    - `MIDDLE`: Select the detection with the frame number closest to the middle frame of the track, preferring the detection with the lower frame number if there is an even number of frames

#### Automatic Rotation and Horizontal Flip Enabled by Default

- It is no longer necessary to explicitly set `AUTO_ROTATE` and `AUTO_FLIP` to true since that is now the default value.
- These properties affect all video and image components that use the MPFImageReader and MPFVideoCapture tools. When true, if the image has EXIF data, or there is metadata associated with a video that ffmpeg understands, the tools will use that information to properly orient the frames before returning the frames to the component for processing.

#### Support S3 Object Storage Key Prefix

- Set the `S3_UPLOAD_OBJECT_KEY_PREFIX` job property or `s3.upload.object.key.prefix` system property to add a prefix to object keys when the Workflow Manager uploads objects to the S3 object store. This affects the JSON output object, artifacts, markup files, and derivative media.
- Specifically, the Workflow Manager will upload objects to `///`.
- For example, if you wish to add "work/" to the object key, then set `S3_UPLOAD_OBJECT_KEY_PREFIX=work/`.

#### Features

- [#1526] Allow markup to display more than 10 characters in the text part of the label
- [#1527] Enable the Workflow Manager to select the middle detection as the exemplar
- [#1566] Make `AUTO_ROTATE` and `AUTO_FLIP` true by default
- [#1569] Modify C++ and Python component executor to automatically add the job name to log messages
- [#1621] Make S3 object keys used for upload configurable

#### Updates

- [#1602] Update Workflow Manager to use Spring Boot
- [#1631] Update byte-buddy, Mockito, and Hibernate versions to resolve build issue.
Most notably, update Hibernate to 5.6.14.\n\n\n[\n#1632\n] Update ActiveMQ to 5.17.3\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1581\n] Don't change track start and end frame when\n \nFEED_FORWARD_TOP_CONFIDENCE_COUNT\n is disabled\n\n\n[\n#1595\n] Work around how Ubuntu only recognizes certificate files\n that end in .crt\n\n\n[\n#1610\n] Prevent premature pipeline creation when using web UI\n\n\n[\n#1612\n] At startup, prevent Workflow Manager from consuming from\n queues before purging them\n\n\n\n\nOpenMPF 7.0.x\n\n\n7.0.3: September 2022\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1561\n] Fix logging for Python components when running through CLI\n runner\n\n\n[\n#1583\n] Can now properly view media while job is in progress\n\n\n[\n#1587\n] Fix bugs in amq_detection_component's use of select\n\n\n\n\n7.0.2: August 2022\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1562\n] Fix bug where an ffmpeg change prevented detecting video\n rotation\n\n\n\n\n7.0.0: July 2022\n\n\n\nDocumentation\n\n\n\n\n\nUpdated the Development Environment Guide by replacing steps for CentOS 7 with Ubuntu 20.04.\n\n\nAdded the Derivative Media Guide.\n\n\nUpdated the Batch Component APIs with revised error codes.\n\n\nUpdated the Python Batch Component API and Python base Docker image README with instructions for\n using \npyproject.toml\n and \nsetup.cfg\n.\n\n\nUpdated the Admin Guide and User Guide with images that show the new TiesDb and Callback columns in the job status UI.\n\n\nUpdated the REST API with the \npipelineDefinition\n, \nframeRanges\n, and \ntimeRanges\n fields now supported by the\n \n[POST] /rest/jobs\n endpoint.\n\n\nUpdated the OcvYoloDetection component README with information on using the NVIDIA Triton inference server.\n\n\nUpdated the Markup Guide with \nMARKUP_ANIMATION_ENABLED\n and \nMARKUP_LABELS_TRACK_INDEX_ENABLED\n.\n\n\nUpdated the Contributor Guide with new steps for generating documentation.\n\n\n\n\nTransition from CentOS 7 to Ubuntu 20.04\n\n\n\n\n\nAll the Docker images that previously used CentOS 7 as a base now use Ubuntu 20.04.\n\n\nWe decided not to use CentOS 8, which is a version of CentOS Stream, due to concerns about stability.\n\n\nAlso, Ubuntu is a very common OS within the AI and ML space, and has significant community support.\n\n\n\n\nUse Job Id that Enables Load Balancing\n\n\n\n\n\nThe Workflow Manager can now optionally accept job ids of the form \n-\n through\n the REST endpoints, where \n\n is the same as the shorter id used in previous releases. The\n \n-\n prefix enables better tracking and separation of jobs run across multiple\n Workflow Manager instances in a cluster.\n\n\nThe prefix can be set in the \ndocker-compose.yml\n file by assigning \n{{.Node.Hostname}}\n to the \nNODE_HOSTNAME\n\n environment variable for the Workflow Manager service, or hard-coding \nNODE_HOSTNAME\n to the desired hostname.\n\n\nThe shorter version of the id can still be used in REST requests, but the longer id will always be returned by the\n Workflow Manager when responding to those requests.\n\n\nThe shorter id will always be used internally by the Workflow Manager, meaning the job status web UI and log messages\n will all use the shorter job id. \n\n\n\n\nSupport for Derivative Media\n\n\n\n\n\nThe TikaImageDetection component now returns \nMEDIA\n tracks instead of \nIMAGE\n tracks when extracting images from\n documents, such as PDFs, Word documents, and PowerPoint slides. 
The document is considered the \"source\", or \"parent\",\n media, and the images are considered the \"derivative\", or \"child\", media.\n\n\nActions can now be configured with \nSOURCE_MEDIA_ONLY=true\n or \nDERIVATIVE_MEDIA_ONLY=true\n, which will result in only\n performing the action on that kind of media. Feed forward can still be used to pass track information from one stage\n to another. The tracks will skip the stages (actions) that don't apply.\n\n\nThis enables complex pipelines like one that extracts text from a PDF using TikaTextDetection, OCRs embedded images\n using EastTextDetection and TesseractOCRTextDetection, and runs all of the \nTEXT\n tracks through KeywordTagging.\n\n\nAdded the following pipelines to the TikaImageDetection component:\n\n\nTIKA IMAGE DETECTION WITH DERIVATIVE MEDIA TESSERACT OCR PIPELINE\n\n\nTIKA IMAGE DETECTION WITH DERIVATIVE MEDIA TESSERACT OCR AND KEYWORD TAGGING PIPELINE\n\n\nTIKA IMAGE DETECTION WITH DERIVATIVE MEDIA TESSERACT OCR (WITH EAST REGIONS) AND KEYWORD TAGGING PIPELINE\n\n\nTIKA IMAGE DETECTION WITH DERIVATIVE MEDIA TESSERACT OCR (WITH EAST REGIONS) AND KEYWORD TAGGING AND MARKUP PIPELINE\n\n\nTIKA IMAGE DETECTION WITH DERIVATIVE MEDIA OCV FACE PIPELINE\n\n\nTIKA IMAGE DETECTION WITH DERIVATIVE MEDIA OCV FACE AND MARKUP PIPELINE\n\n\n\n\n\n\n\n\nReport when Job Callbacks and TiesDb POSTs Fail\n\n\n\n\n\nThe job status UI displays two new columns, one that indicates the status of posting to TiesDB, and one that indicates\n the status of posting the job callback to the job producer.\n\n\nAdditionally, the \n[GET] /rest/jobs/{id}\n endpoint now includes a \ntiesDbStatus\n and \ncallbackStatus\n field.\n\n\nNote that, by design, the JSON output itself does not contain these statuses.\n\n\n\n\nAllow Pipelines to be Specified in a Job Request\n\n\n\n\n\nOptionally, the \npipelineDefinition\n field can be provided instead of the \npipelineName\n field when using the\n \n[POST] /rest/jobs\n endpoint in order to specify a pipeline on the fly for that specific job run. It will not be saved\n for later reuse.\n\n\nThe format of the pipeline definition is similar to that in a \ndescriptor.json\n file, with separate sections for\n defining \ntasks\n and \nactions\n. Pre-existing tasks and actions known to the Workflow Manager can be specified in the\n definition. They do not need to be defined again.\n\n\nThis feature is a convenient alternative to creating persistent definitions using the \n[POST] /rest/pipelines\n,\n \n[POST] /rest/tasks\n, and \n[POST] /rest/actions\n endpoints. For example, this feature could be used to quickly add or\n remove a motion preprocessing stage from a pipeline.\n\n\n\n\nAllow User-Specified Segment Boundaries\n\n\n\n\n\nOptionally, multiple \nframeRanges\n and/or \ntimeRanges\n fields can be provided when using the \n[POST] /rest/jobs\n\n endpoint in order to manually specify segment boundaries. These values will override the normal segmenting behavior of\n the Workflow Manager.\n\n\nNote that overlapping ranges will be combined and large ranges may still be split up according to the value of\n \nTARGET_SEGMENT_LENGTH\n and \nVFR_TARGET_SEGMENT_LENGTH\n.\n\n\nNote that \nframeRanges\n is specified using the frame number and \ntimeRanges\n is specified in milliseconds.\n\n\n\n\nAdd Triton Inference Server support to YOLO component\n\n\n\n\n\nThe OcvYoloDetection component now supports the ability to send requests to an NVIDIA Triton Inference Server by\n setting \nENABLE_TRITON=true\n. 
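For reference, routing OcvYoloDetection inference to Triton is controlled through job properties such as those below; this is only a sketch, with the server address matching the default described next (it should correspond to the service name in your `docker-compose.yml`):

```json
{
  "jobProperties": {
    "ENABLE_TRITON": "true",
    "TRITON_SERVER": "ocv-yolo-detection-server:8001",
    "TRITON_USE_SHM": "false"
  }
}
```

Whether Triton is used at all is governed by `ENABLE_TRITON`; the other properties are described below.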
If set to false, the component will process jobs using OpenCV DNN on the local host\n running the Docker service, as per normal.\n\n\nBy default \nTRITON_SERVER=ocv-yolo-detection-server:8001\n, which\n corresponds to the \nocv-yolo-detection-server\n entry in your \ndocker-compose.yml\n file. Refer to the example entry\n within \ndocker-compose.components.yml\n\n . That entry uses a pre-built and pre-configured version of the Triton server.\n\n\nThe Triton server runs the YOLOv4 model within the TensorRT framework, which performs a warmup operation when the\n server starts up to determine which optimizations to enable for the available GPU hardware. \n*.engine\n files are\n generated within the \nyolo_engine_file\n Docker volume for later reuse.\n\n\nTo further improve inferencing speed, shared memory can be configured between the \nocv-yolo-detection\n client service and the\n \nocv-yolo-detection-server\n service if they are running on the same host. Set \nTRITON_USE_SHM=true\n and configure the\n server with a \n/dev/shm:/dev/shm\n Docker volume.\n\n\nDepending on the available GPU hardware, the Triton server can achieve speeds that are 5x faster than OpenCV DNN with\n tracking enabled, no shared memory, and nearly 9x faster with tracking disabled, with shared memory. Our tests used a\n single RTX 2080 GPU.\n\n\n\n\nRemoved Unused and Redundant Error Codes\n\n\n\n\n\nThe error codes shown on the left were redundant and replaced with the corresponding error codes on the right:\n\n\n\n\n\n\n\n\n\n\nOld Error Code\n\n\nNew Error Code\n\n\n\n\n\n\n\n\n\n\nMPF_IMAGE_READ_ERROR\n\n\nMPF_COULD_NOT_READ_MEDIA\n\n\n\n\n\n\nMPF_BOUNDING_BOX_SIZE_ERROR\n\n\nMPF_BAD_FRAME_SIZE\n\n\n\n\n\n\nMPF_JOB_PROPERTY_IS_NOT_INT\n\n\nMPF_INVALID_PROPERTY\n\n\n\n\n\n\nMPF_JOB_PROPERTY_IS_NOT_FLOAT\n\n\nMPF_INVALID_PROPERTY\n\n\n\n\n\n\nMPF_INVALID_FRAME_INTERVAL\n\n\nMPF_INVALID_PROPERTY\n\n\n\n\n\n\nMPF_DETECTION_TRACKING_FAILED\n\n\nMPF_OTHER_DETECTION_ERROR_TYPE\n\n\n\n\n\n\n\n\nAlso, the following error codes are no longer being used and have been removed:\n\n\n\n\nMPF_UNRECOGNIZED_DATA_TYPE\n\n\nAll media types can now be processed since we support the \nUNKNOWN\n (a.k.a. \"generic\")\n media type\n\n\n\n\n\n\nMPF_INVALID_DATAFILE_URI\n\n\nThe Workflow Manager will reject a job with an invalid media URI before it gets to a\n component\n\n\n\n\n\n\nMPF_INVALID_START_FRAME\n\n\nMPF_INVALID_STOP_FRAME\n\n\nMPF_INVALID_ROTATION\n\n\n\n\nMarkup Improvements\n\n\n\n\n\nBy default, the Markup component draws bounding boxes to fill in the gaps between detections in each track by\n interpolating the box size and position. This can now be disabled by setting the job property\n \nMARKUP_ANIMATION_ENABLED=false\n, or the system property \nmarkup.video.animation.enabled=false\n.\n Disabling this feature can be useful to prevent floating boxes from cluttering the marked-up frames.\n\n\nThe Markup component will now start each bounding box label with a track index like \n[0]\n that can be used to\n correlate the box with the track in the JSON output object. The JSON output now contains an \nindex\n field for every\n track, relative to each piece of media, that is simply an integer that starts at 0 and counts upward. 
This can be\n disabled by setting the job property \nMARKUP_LABELS_TRACK_INDEX_ENABLED=false\n, or the system property\n \nmarkup.labels.track.index.enabled=false\n.\n\n\n\n\nChanges to JSON Output Object\n\n\n\n\n\nComponents that generate \nMEDIA\n tracks will result in new derivative \nmedia\n entries in the JSON output file. This\n means it's possible to provide a single piece of media as an input and have more than one \nmedia\n entry in the JSON\n output. The output will always include the original media.\n\n\nEach \nmedia\n entry in the JSON output now contains a \nparentMediaId\n in addition to the \nmediaId\n. The \nparentMediaId\n\n for original source media will always be set to -1; otherwise, for derivative media, the \nparentMediaId\n is set the\n \nmediaId\n of the source media from which the child media was derived.\n\n\nEach \nmedia\n entry also contains a new \nframeRanges\n and \ntimeRanges\n collection.\n\n\nThe JSON output file also contains a new \nindex\n field for every track, relative to each piece of media.\n\n\n\n\nFeatures\n\n\n\n\n\n[\n#792\n] Perform detection on images extracted from PDFs\n\n\n[\n#1283\n] Add user-specified segment boundaries\n\n\n[\n#1374\n] Transition from CentOS 7 to Ubuntu 20.04\n\n\n[\n#1396\n] Report when job callbacks and TiesDb POSTs fail\n\n\n[\n#1398\n] Add Triton Inference Server support to YOLO component\n\n\n[\n#1428\n] Allow pipelines to be specified in a job request\n\n\n[\n#1454\n] Transition from Clair scans to Trivy scans\n\n\n[\n#1485\n] Use \npyproject.toml\n and \nsetup.cfg\n instead of \nsetup.py\n\n\n\n\nUpdates\n\n\n\n\n\n[\n#803\n] Update Tika Image Detection to generate one track per piece of extracted media\n\n\n[\n#808\n] Update Tika Text Detection component to not use leading zeros for \nPAGE_NUM\n\n\n[\n#1105\n] Remove dependency on QT from C++ SDK\n\n\n[\n#1282\n] Use job id that enables load balancing\n\n\n[\n#1303\n] Update Tika Image Detection to return \nMEDIA\n tracks\n\n\n[\n#1319\n] Review existing error codes and remove unused or redundant error codes\n\n\n[\n#1384\n] Update Apache Tika to 2.4.1 for TikaImageDetection and TikaTextDetection Components\n\n\n[\n#1436\n] CLI Runner should initialize a component once when handling multiple jobs\n\n\n[\n#1465\n] Remove YoloV3 support from OcvYoloDetection component\n\n\n[\n#1513\n] Update to Spring 5.3.18\n\n\n[\n#1528\n] CLI runner should also sort by startOffsetTime\n\n\n[\n#1540\n] Upgrade to Java 17\n\n\n[\n#1549\n] Allow markup animation to be disabled\n\n\n[\n#1550\n] Add track index to markup\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1372\n] Tika Image Detection no longer misses images in PowerPoint and Word documents\n\n\n[\n#1449\n] Simon data is now refreshed when clicking the Processes tab\n\n\n[\n#1495\n] Fix bug where invalid CSRF token found for \n/workflow-manager/login\n\n\n\n\nOpenMPF 6.3.x\n\n\n6.3.14: May 2022\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1530\n] Fix S3 code memory leak\n\n\n\n\n6.3.12: April 2022\n\n\n\nUpdates\n\n\n\n\n\n[\n#1519\n] Upgrade to OpenCV 4.5.5\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1520\n] S3 code now retries on most 400 errors\n\n\n\n\n6.3.11: April 2022\n\n\n\nDocumentation\n\n\n\n\n\nUpdated the Object Storage Guide with \nS3_SESSION_TOKEN\n, \nS3_USE_VIRTUAL_HOST\n, \nS3_HOST\n, and \nS3_REGION\n.\n\n\n\n\nUpdates\n\n\n\n\n\n[\n#1496\n] Update S3 client code\n\n\n[\n#1514\n] Update Tomcat to 8.5.78\n\n\n\n\n6.3.10: March 2022\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1486\n] Fix bug where \nMOVING\n was being added to immutable map 
twice\n\n\n[\n#1498\n] Can now provide media metadata when frameTimeInfo is missing\n\n\n[\n#1501\n] MPFVideoCapture now properly reads frames from videos with rotation metadata\n\n\n[\n#1502\n] Detections with \nHORIZONTAL_FLIP\n will no longer result in illformed detections and incorrectly padded regions\n\n\n[\n#1503\n] Videos with rotation metadata will no longer result in corrupt markup\n\n\n\n\n6.3.8: January 2022\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1469\n] \nTENSORFLOW VEHICLE COLOR DETECTION\n pipelines no longer refer to YOLO tasks that no longer exist\n\n\n\n\n6.3.7: January 2022\n\n\n\nUpdates\n\n\n\n\n\n[\n#1466\n] Upgrade log4j to 2.17.1\n\n\n\n\n6.3.6: December 2021\n\n\n\nUpdates\n\n\n\n\n\n[\n#1457\n] Upgrade log4j to 2.16.0\n\n\n\n\n6.3.5: November 2021\n\n\n\nUpdates\n\n\n\n\n\n[\n#1451\n] Make concurrent callbacks configurable\n\n\n\n\n6.3.4: November 2021\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1441\n] Modify AdminStatisticsController so that it doesn't hold all jobs in memory at once\n\n\n\n\n6.3.3: October 2021\n\n\n\nFeatures\n\n\n\n\n\n[\n#1425\n] Make protobuf size limit configurable\n\n\n\n\n6.3.2: October 2021\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1420\n] Sphinx component no longer omits audio at end of video files\n\n\n[\n#1422\n] Media inspection now correctly calculates milliseconds from ffmpeg duration\n\n\n\n\n6.3.1: September 2021\n\n\n\nFeatures\n\n\n\n\n\n[\n#1404\n] Improve OcvDnnDetection vehicle color detection\n\n\n\n\nUpdates\n\n\n\n\n\n[\n#1251\n] Add version to JSON output object\n\n\n[\n#1272\n] Update Keyword Tagging to work on multiple inputs\n\n\n[\n#1350\n] Retire old components to the graveyard: DlibFaceDetection, DarknetDetection, and OcvPersonDetection\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1010\n] \nmpf.output.objects.enabled\n now behaves as expected\n\n\n[\n#1271\n] Azure speech component no longer omits audio at end of video files\n\n\n[\n#1389\n] NLP text correction component now properly reads the value of \nFULL_TEXT_CORRECTION_OUTPUT\n\n\n[\n#1403\n] Corrected README to state that the Azure Speech Component doesn't support v2 of the API\n\n\n[\n#1406\n] Speech detections in videos are no longer dropped if using keyword tagging\n\n\n[\n#1411\n] Exception no longer occurs when adding \nSHRUNK_TO_NOTHING=TRUE\n to an immutable map in multiple pipeline stages\n\n\n[\n#1413\n] Speech detections in videos are no longer dropped if using translation\n\n\n\n\n6.3.0: September 2021\n\n\n\nDocumentation\n\n\n\n\n\nUpdated the API documents, Development Environment Guide, Node Guide, Install Guide, User Guide, Admin Guide, and\n others to clarify the difference between Docker and non-Docker behaviors.\n\n\nTransformed Packaging and Registering a Component document into Component Descriptor Reference.\n\n\nSplit Media Segmentation Guide from User Guide.\n\n\nUpdated and renamed the Workflow Manager document to Workflow Manager Architecture.\n\n\nUpdated the various Docker guides to clarify the difference between building Docker images from scratch versus\n building them using pre-built base images on Docker Hub, emphasizing the latter.\n\n\nUpdated the Contributor Guide to document the hotfix pull request process.\n\n\n\n\nTiesDb Integration\n\n\n\n\n\nTiesDb is a PostgreSQL DB with a RESTful API that stores media metadata. The metadata entries are queried using the\n hash (sha256, md5) of the media file. TIES stands\n for \nTriage Import Export Schema\n. TiesDb is deployed and managed externally to\n OpenMPF. 
For more information please contact us.\n\n\nWhen a job completes, OpenMPF can post assertions to media entries that exist in TiesDb. In general, one assertion is\n generated for each algorithm run on a piece of media. It contains the job status, algorithm name, detection\n type (\nFACE\n, \nTEXT\n, \nMOTION\n, etc.), and number of tracks generated, as well as a link to the full JSON output\n object.\n\n\nEach assertion serves as a lasting record so that job producers may first check TiesDb to see if an algorithm was run\n on a piece of media before submitting the same job to OpenMPF again.\n\n\nTo enable TiesDb support, set the \nTIES_DB_URL\n job property or \nties.db.url\n system property to\n the \n://:\n part of the URL. The Workflow Manager will append\n the \n/api/db/supplementals?sha256Hash=\n part. Here is an example of a TiesDb POST:\n\n\n\n\n{\n \"dataObject\": {\n \"sha256OutputHash\": \"1f8f2a8b2f5178765dd4a2e952f97f5037c290ee8d011cd7e92fb8f57bc75f17\",\n \"outputType\": \"FACE\",\n \"algorithm\": \"FACECV\",\n \"processDate\": \"2021-09-09T21:37:30.516-04:00\",\n \"pipeline\": \"OCV FACE DETECTION PIPELINE\",\n \"outputUri\": \"file:///home/mpf/git/openmpf-projects/openmpf/trunk/install/share/output-objects/1284/detection.json\",\n \"jobStatus\": \"COMPLETE\",\n \"jobId\": 1284,\n \"systemVersion\": \"6.3\",\n \"trackCount\": 1,\n \"systemHostname\": \"openmpf-master\"\n },\n \"system\": \"OpenMPF\",\n \"securityTag\": \"UNCLASSIFIED\",\n \"informationType\": \"OpenMPF FACE\",\n \"assertionId\": \"4874829f666d79881f7803207c7359dc781b97d2c68b471136bf7235a397c5cd\"\n}\n\n\n\nNatural Language Processing (NLP) Text Correction Component\n\n\n\n\n\nThis component utilizes the \nCyHunspell\n library, which is a Python\n port of the \nHunspell\n spell-checking library, to perform post-processing\n correction of OCR text. In general, it's intended to be used in a pipeline after a component like\n TesseractOCRTextDetection that generates \nTEXT\n tracks. These tracks are then fed-forward into NlpTextCorrection,\n which will add a \nCORRECTED TEXT\n property to the existing tracks.\n The \nTESSERACT OCR TEXT DETECTION WITH NLP TEXT CORRECTION PIPELINE\n performs this behavior. The component can also\n run on its own to process plain text files. Refer to\n the \nREADME\n for details.\n\n\n\n\nAzure Cognitive Services (ACS) Read Component\n\n\n\n\n\nThis component utilizes\n the \nAzure Cognitive Services Read Detection REST endpoint\n\n to extract formatted text from documents (PDFs), images, and videos. 
Refer to\n the \nREADME\n for\n details.\n\n\n\n\nUpdates\n\n\n\n\n\n[\n#1151\n] Now supports \nIN_PROGRESS_WITH_WARNINGS\n status\n\n\n[\n#1234\n] Now sorts JSON output object media by media id\n\n\n[\n#1341\n] Added job id to all batch-job-specific Workflow Manager log\n messages\n\n\n[\n#1349\n] Improved reporting and recording job status\n\n\n[\n#1353\n] Updated the Workflow Manager to remove and warn about\n zero-size detections\n\n\n[\n#1382\n] Updated Tika version to 1.27 for TikaImageDetection and\n TikaTextDetection components\n\n\n[\n#1387\n] Markup can now be configured in a\n component's \ndescriptor.json\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1080\n] Batch jobs no longer prematurely set to 100% completion\n during artifact extraction\n\n\n[\n#1106\n] When a job ends in \nERROR\n or \nCANCELLED_BY_SHUTDOWN\n the\n job status UI now shows an End Date\n\n\n[\n#1158\n] JSON output object URI no longer changes when callback fails\n\n\n[\n#1317\n] TikaTextDetection no longer generates first PDF track\n at \nPAGE_NUM\n 2\n\n\n[\n#1337\n] Now using \nMPF_BAD_FRAME_SIZE\n instead\n of \nMPF_DETECTION_FAILED\n for OpenCV empty/resize exception\n\n\n[\n#1359\n] Image detection tracks no longer\n have \nendOffsetFrameInclusive\n set to 1\n\n\n[\n#1373\n] When uploading large files through the Workflow Manager web\n UI, now more than the first 865032704 bytes get written\n\n\n[\n#1379\n] TikaImageDetection component now avoids conflicts by no\n longer using the same path when extracting images for jobs with multiple pieces of media\n\n\n[\n#1386\n] FeedForwardFrameCropper in the Python SDK now handles\n negative coordinates properly\n\n\n[\n#1391\n] If a job is configured to upload markup and markup fails,\n the job no longer gets stuck\n\n\n\n\nKnown Issues\n\n\n\n\n\n[\n#1372\n] TikaImageDetection misses images in PowerPoint and Word\n documents\n\n\n[\n#1389\n] NlpTextCorrection does not properly read the value\n of \nFULL_TEXT_CORRECTION_OUTPUT\n\n\n\n\nOpenMPF 6.2.x\n\n\n6.2.5: July 2021\n\n\n\nUpdates\n\n\n\n\n\n[\n#1367\n] Enable cross-origin resource sharing on Workflow Manager\n\n\n\n\n6.2.4: June 2021\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1356\n] AzureSpeech now properly reports when media is missing audio stream\n\n\n[\n#1357\n] AzureSpeech now handles case where speaker id is not present\n\n\n\n\n6.2.2: June 2021\n\n\n\nUpdates\n\n\n\n\n\n[\n#1333\n] Combine media name and job id into one WFM log line\n\n\n[\n#1336\n] Remove duplicate \"Setting status of job to COMPLETE\" Workflow Manager log line and other improvements\n\n\n[\n#1338\n] Update OpenCV DNN Detection component to optionally use feed-forward confidence values\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1237\n] Fixed jQuery DataTables bug: \"int parameter 'draw' is present but cannot be translated into a null value\"\n\n\n[\n#1254\n] Jobs table no longer flickers when polling is enabled and the search box is used\n\n\n[\n#1308\n] Prevent OCV YOLO Tracking from generating zero-sized detections\n\n\n[\n#1313\n] Fix JSON output object timestamps for variable frame rate videos\n\n\n\n\n6.2.1: May 2021\n\n\n\nUpdates\n\n\n\n\n\n[\n#1330\n] Return error codes for \nmodels_ini_parser.py\n exceptions\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1331\n] Decoding certain heic images no longer causes Workflow Manager to segfault\n\n\n\n\n6.2.0: May 2021\n\n\n\nTesseract OCR Text Detection Component Support for Videos\n\n\n\n\n\nThe component can now process videos in addition to images and PDFs. 
Each video frame is processed sequentially.\n The \nMAX_PARALLEL_SCRIPT_THREADS\n property determines how many threads to use to process each frame, one thread per\n language or script.\n\n\nNote that for videos without much text, it may be faster to disable threading by\n setting \nMAX_PARALLEL_SCRIPT_THREADS=1\n. This will allow the component to reuse TessAPI instances instead of creating\n new ones for every frame. Please refer to the Known Issues section.\n\n\nResolved issues: \n#1285\n\n\n\n\nUpdates\n\n\n\n\n\n[\n#1086\n] Added support for \nCOULD_NOT_OPEN_MEDIA\n\n and \nCOULD_NOT_READ_MEDIA\n error types\n\n\n[\n#1159\n] Split \nIssueCodes.REMOTE_STORAGE\n\n into \nREMOTE_STORAGE_DOWNLOAD\n and \nREMOTE_STORAGE_UPLOAD\n\n\n[\n#1250\n] Modified \n/rest/jobs/{id}\n to include the job's media\n\n\n[\n#1312\n] Created \nNETWORK_ERROR\n error code for when a component\n can't connect to an external server. Updated Python HTTP retry code to return \nNETWORK_ERROR\n. This affects the Azure\n components.\n\n\n\n\nKnown Issues\n\n\n\n\n\n[\n#1008\n] Use global TessAPI instances with parallel processing\n\n\n\n\nOpenMPF 6.1.x\n\n\n6.1.6: May 2021\n\n\n\nHandle Variable Frame Rate Videos\n\n\n\n\n\nThe Workflow Manager will attempt to detect if a video is constant frame rate (CFR) or variable frame rate (VFR)\n during media inspection. If no determination can be made, it will default to VFR behavior. If CFR, the JSON output\n object will have a \nHAS_CONSTANT_FRAME_RATE=true\n property in the \nmediaMetadata\n field.\n\n\nWhen \nMPFVideoCapture\n handles a CFR video it will use OpenCV to set the frame position, unless the position is within\n 16 frames of the current position, in which case it will iteratively use OpenCV \ngrab()\n to advance to the desired\n frame.\n\n\nWhen \nMPFVideoCapture\n handles a VFR video it will always iteratively use OpenCV \ngrab()\n to advance to the desired\n frame because setting the frame position directly has been shown to not work correctly on VFR videos.\n\n\nWhen a video is split into multiple segments, \nMPFVideoCapture\n must iteratively use \ngrab()\n to advance from frame 0\n to the start of the segment. This introduces performance overhead. To mitigate this we recommend using larger video\n segments than those used for CFR videos.\n\n\nIn addition to the existing \nTARGET_SEGMENT_LENGTH\n and \nMIN_SEGMENT_LENGTH\n job\n properties (\ndetection.segment.target.length\n and \ndetection.segment.minimum.length\n system properties) for CFR\n videos, the Workflow Manager now supports the \nVFR_TARGET_SEGMENT_LENGTH\n and \nVFR_MIN_SEGMENT_LENGTH\n job\n properties (\ndetection.vfr.segment.target.length\n and \ndetection.vfr.segment.minimum.length\n system properties) for\n VFR videos.\n\n\nNote that the timestamps associated with tracks and detections in a VFR video may be wrong. Please refer to the Known\n Issues section.\n\n\nResolved issues: \n#1307\n\n\n\n\nUpdates\n\n\n\n\n\n[\n#1287\n] Updated Tika Text Detection Component to break up large\n chunks of text. The component now generates tracks with both a \nPAGE_NUM\n property and \nSECTION_NUM\n property. Please\n refer to\n the \nREADME\n.\n\n\n\n\nKnown Issues\n\n\n\n\n\n[\n#1313\n] Incorrect JSON output object timestamps for variable frame\n rate videos\n\n\n[\n#1317\n] Tika Text Detection component generates first PDF track\n at \nPAGE_NUM\n 2\n\n\n\n\n6.1.5: April 2021\n\n\n\nUpdates\n\n\n\n\n\n[\n#1300\n] Parallelized S3 artifact upload. 
Use\n the \ndetection.artifact.extraction.parallel.upload.count\n system property to configure the number of parallel uploads.\n\n\n\n\n6.1.4: April 2021\n\n\n\nUpdates\n\n\n\n\n\n[\n#1299\n] Improved artifact extraction performance when there is no\n rotation or flip\n\n\n\n\n6.1.3: April 2021\n\n\n\nUpdates\n\n\n\n\n\n[\n#1295\n] Improved artifact extraction and markup JNI memory\n utilization\n\n\n[\n#1297\n] Limited Workflow Manager IO threads to a reasonable number\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1296\n] Fixed ActiveMQ job priorities\n\n\n\n\n6.1.2: April 2021\n\n\n\nUpdates\n\n\n\n\n\n[\n#1294\n] Limited ffmpeg threads to a reasonable number\n\n\n\n\n6.1.1: April 2021\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1292\n] Don't skip artifact extraction for failed media\n\n\n\n\n6.1.0: April 2021\n\n\n\nOpenMPF Command Line Runner\n\n\n\n\n\nThe Command Line Runner allows users to run jobs with a single component without the Workflow Manager.\n\n\nIt outputs results in a JSON structure that is a subset of the regular OpenMPF output.\n\n\nIt only supports C++ and Python components.\n\n\nSee the\n \nREADME\n\n for more information.\n\n\n\n\nC++ Batch Component API\n\n\n\n\n\nComponent code should no longer configure Log4CXX. The component executor now handles configuring Log4CXX. Component\n code should call \nlog4cxx::Logger::getLogger(\"\")\n\n to get access to the logger. Calls to \nlog4cxx::xml::DOMConfigurator::configure(logconfig_file);\n\n should be removed.\n\n\n\n\nPython Batch Component API \n\n\n\n\n\nComponent code should no longer configure logging. The component executor now handles configuring logging. Calls\n to \nmpf.configure_logging\n should be replaced with\n \nlogging.getLogger('')\n.\n\n\n\n\nDocker Component Base Images\n\n\n\n\n\n\n\nIn order to support running a component through the CLI runner, C++ component developers should set\n the \nLD_LIBRARY_PATH\n environment variable in the final stage of their Dockerfiles. It should generally be set\n like: \nENV LD_LIBRARY_PATH $PLUGINS_DIR//lib\n.\n\n\n\n\n\n\nBecause of the logging changes mentioned above, components no longer need to set the\n \nCOMPONENT_LOG_NAME\n environment variable in their Dockerfiles.\n\n\n\n\n\n\nAdded the\n \nopenmpf_python_executor_ssb\n base image\n\n . It can be used instead of \nopenmpf_python_component_build\n and \nopenmpf_python_executor\n to simplify Dockerfiles for\n Python components that are pure Python and have no build time dependencies.\n\n\n\n\n\n\nLabel Moving vs. Non-Moving Tracks\n\n\n\n\n\nThe Workflow Manager can now identify whether a track is moving or non-moving. This is determined by calculating the\n average bounding box for a track by averaging the size and position of all the detections in the track. Then, for each\n detection in the track, the intersection over union (IoU) is calculated between that detection and the average\n detection. If the IoU for at least \nMOVING_TRACK_MIN_DETECTIONS\n number of detections is less than or equal to\n \nMOVING_TRACK_MAX_IOU\n, then the track is considered a moving track.\n\n\nAdded the following Workflow Manager job properties. 
These can be set for any video job:\n\n\nMOVING_TRACK_LABELS_ENABLED\n: When set to true, attempt to label tracks as either moving or non-moving objects.\n Each track will have a \nMOVING\n property set to \nTRUE\n or \nFALSE\n.\n\n\nMOVING_TRACKS_ONLY\n: When set to true, remove any tracks that were marked as not moving.\n\n\nMOVING_TRACK_MAX_IOU\n: The maximum IoU overlap between detection bounding boxes and the average per-track\n bounding box for objects to be considered moving. Value is expected to be between 0 and 1. Note that the lower\n IoU, the more likely the object is moving.\n\n\nMOVING_TRACK_MIN_DETECTIONS\n: The minimum number of moving detections for a track to be labeled as moving.\n\n\n\n\n\n\n\n\nMarkup Improvements\n\n\n\n\n\nUsers can now watch videos directly in the OpenMPF web UI within the media pop-up dialog for each job. Most modern web\n browsers support videos encoded in VP9 and H.264. If a video cannot be played, users have the option to download it\n and play it using a stand-alone media player.\n\n\nTo set the markup encoder use \nMARKUP_VIDEO_ENCODER\n. The default encoder has changed from \nmjpeg\n to \nvp9\n. As a\n result, it will take longer to generate marked up videos, but they will be higher quality and can be viewed in the web\n UI.\n\n\nEach bounding box in the marked up media is now labeled. By default, the label shows the track-level \nCLASSIFICATION\n\n and associated confidence value. The information shown in the label can be changed by\n setting \nMARKUP_LABELS_TEXT_PROP_TO_SHOW\n and \nMARKUP_LABELS_NUMERIC_PROP_TO_SHOW\n. To show information for each\n individual detection, rather than the entire track, set \nMARKUP_LABELS_FROM_DETECTIONS=TRUE\n.\n\n\nExemplar detections in video tracks include a star icon in their label.\n\n\nOptionally, set \nMARKUP_VIDEO_MOVING_OBJECT_ICONS_ENABLED=TRUE\n to show icons that represent if the track is moving or\n non-moving.\n\n\nOptionally, set \nMARKUP_VIDEO_BOX_SOURCE_ICONS_ENABLED=TRUE\n to show icons that represent the source of the detection.\n For example, if the box is the result of an algorithm detection, tracking performing gap fill, or Workflow Manager\n animation.\n\n\nEach frame of a marked-up video now has a frame number in the upper right corner.\n\n\nPlease refer to the \nMarkup Guide\n for the complete set of markup properties, icon definitions, and\n encoder considerations.\n\n\n\n\nUpdates\n\n\n\n\n\n[\n#1181\n] Updated the Tesseract OCR Text Detection component from\n Tesseract version 4.0.0 to 4.1.1\n\n\n[\n#1232\n] Updated the Azure Speech Detection component from Azure\n Batch Transcription version 2.0 to 3.0\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1187\n] EXIF orientation is now preserved during markup and artifact\n extraction\n\n\n[\n#1257\n] Updated \nOUTPUT_LAST_TASK_ONLY\n to work on all media types\n\n\n\n\nOpenMPF 6.0.x\n\n\n6.0.11: March 2021\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1284\n] Updated the Azure Translation component to count emoji as 2\n characters\n\n\n\n\n6.0.10: March 2021\n\n\n\nUpdates\n\n\n\n\n\n[\n#1270\n] The Azure Cognitive Services components now retry HTTP\n requests\n\n\n\n\n6.0.9: March 2021\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1273\n] Setting \nTRANSLATION\n to the empty string no longer prevents\n Keyword Tagging\n\n\n\n\n6.0.6: March 2021\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1265\n] Updated the Tika Text Detection component to handle\n spreadsheets\n\n\n[\n#1268\n] Updated the Tika Text Detection component to remove metadata\n\n\n\n\n6.0.5: February 2021\n\n\n\nBug 
Fixes\n\n\n\n\n\n[\n#1266\n] The Azure Translation component now handles the final\n segment correctly when guessing sentence breaks\n\n\n\n\n6.0.4: February 2021\n\n\n\nUpdates\n\n\n\n\n\n[\n#1264\n] Updated the Azure Translation component to handle large\n amounts of text\n\n\n[\n#1269\n] AzureTranslation no longer tries to translate text that is\n already in the \nTO_LANGUAGE\n\n\n\n\n6.0.3: February 2021\n\n\n\nOpenCV YOLO Detection Component\n\n\n\n\n\nThis new component utilizes the OpenCV Deep Neural Networks (DNN) framework to detect and classify objects in images\n and videos using Darknet YOLOv4 models trained on the COCO dataset. It supports both CPU and GPU modes of operation.\n Tracking is performed using a combination of intersection over union, pixel difference after Fast Fourier transform (\n FFT) phase correlation, Kalman filtering, and OpenCV MOSSE tracking. Refer to\n the \nREADME\n for details.\n\n\n\n\n6.0.2: January 2021\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1249\n] FFmpeg no longer reports different frame counts for the same\n piece of media\n\n\n\n\n6.0.1: December 2020\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1238\n] The JSON output object is now generated when remote media\n cannot be downloaded.\n\n\n\n\n6.0.0: December 2020\n\n\n\nUpgrade to OpenCV 4.5.0\n\n\n\n\n\nUpdated core framework and components from OpenCV 3.4.7 to OpenCV 4.5.0.\n\n\nOpenCV is now built with CUDA support, including cuDNN (CUDA Deep Neural Network library) and cuBLAS (CUDA Basic\n Linear Algebra Subroutines library). All C++ components that use the base C++ builder and executor Docker images have\n CUDA support built in, giving developers the option to make use of it.\n\n\nAdded GPU support to the OcvDnnDetection component.\n\n\n\n\nAzure Cognitive Services (ACS) Translation Component\n\n\n\n\n\nThis new component utilizes\n the \nAzure Cognitive Services Translator REST endpoint\n\n to translate text from one language (locale) to another. Generally, it's intended to operate on feed-forward tracks\n that contain detections with \nTEXT\n and \nTRANSCRIPT\n properties. It can also operate on plain text file inputs. Refer\n to the \nREADME\n for\n details.\n\n\n\n\nInteroperability Package\n\n\n\n\n\nAdded \nalgorithm\n field to the element that describes a collection of tracks generated by an action in the JSON output\n object. For example:\n\n\n\n\n\"output\": {\n \"FACE\": [{\n \"source\": \"+#MOG MOTION DETECTION PREPROCESSOR ACTION#OCV FACE DETECTION ACTION\",\n \"algorithm\": \"FACECV\",\n \"tracks\": [{ ... }],\n ...\n },\n\n\n\nMerge Tasks in JSON Output Object\n\n\n\n\n\nThe output of two tasks in the JSON output object can be merged by setting the \nOUTPUT_MERGE_WITH_PREVIOUS_TASK\n\n property to true. This is a Workflow Manager property and can be set on any task in any pipeline, although it has no\n effect when set on the first task or the Markup task.\n\n\nWhen the output of two tasks are merged, the tracks for the previous task will not be shown in the JSON output object,\n and no artifacts are generated for it. The task will be listed under \nTRACKS MERGED\n, if it's not already listed\n under \nTRACKS SUPPRESSED\n due to the \nmpf.output.objects.last.task.only\n system property setting,\n or \nOUTPUT_LAST_TASK_ONLY\n property. 
The tracks associated with the second task will inherit the detection type and\n algorithm of the previous task.\n\n\nFor example, the \nTESSERACT OCR TEXT DETECTION WITH KEYWORD TAGGING PIPELINE\n is defined as\n the \nTESSERACT OCR TEXT DETECTION TASK\n followed by the \nKEYWORD TAGGING (WITH FF REGION) TASK\n. The second task\n sets \nOUTPUT_MERGE_WITH_PREVIOUS_TASK\n to true. The resulting JSON output object contains one set of keyword-tagged\n OCR tracks that have the \nTEXT\n detection type and \nTESSERACTOCR\n algorithm (both inherited from\n the \nTESSERACT OCR TEXT DETECTION TASK\n):\n\n\n\n\n\"output\": {\n \"TRACKS MERGED\": [{\n \"source\": \"+#TESSERACT OCR TEXT DETECTION ACTION\",\n \"algorithm\": \"TESSERACTOCR\"\n }],\n \"TEXT\": [{\n \"source\": \"+#TESSERACT OCR TEXT DETECTION ACTION#KEYWORD TAGGING (WITH FF REGION) ACTION\",\n \"algorithm\": \"TESSERACTOCR\",\n \"tracks\": [{\n \"type\": \"TEXT\",\n \"trackProperties\": {\n \"TAGS\": \"ANIMAL\",\n \"TEXT\": \"The quick brown fox\",\n \"TEXT_LANGUAGE\": \"script/Latin\",\n \"TRIGGER_WORDS\": \"fox\",\n \"TRIGGER_WORDS_OFFSET\": \"16-18\"\n ...\n\n\n\n\n\nNote that you can use the \nOUTPUT_MERGE_WITH_PREVIOUS_TASK\n setting on multiple tasks. For example, if you set it as a\n job property it will be applied to all tasks (with the exception of Markup - in which case the task before Markup is\n used), so you will only get the output of the last task in the pipeline. The last task will inherit the detection type\n and algorithm of the first task in the pipeline.\n\n\n\n\nTesseract Custom Dictionaries\n\n\n\n\n\nThe Tesseract component Docker image now contains an \n/opt/mpf/tessdata_model_updater\n binary that you can use to\n update \n*.traineddata\n models with a custom dictionary, as well as extract files from existing models. Refer to\n the \nDICTIONARIES\n\n guide to learn how to use the tool.\n\n\nIn general, legacy \n*.traineddata\n models are more influenced by words in their dictionary than more modern\n LSTM \n*.traineddata\n models. Also, refer to the known issue below.\n\n\n\n\nKnown Issues\n\n\n\n\n\n[\n#1243\n] Unpacking a \n*.traineddata\n model, for example, in order to\n modify its dictionary, and then repacking it may result in dropping some of the words present in the original\n dictionary file. This may be due to some kind of compression or filtering. It's unknown what effect this has on OCR\n results.\n\n\n\n\nOpenMPF 5.1.x\n\n\n5.1.3: December 2020\n\n\n\nSetting Properties as Docker Environment Variables\n\n\n\n\n\nAny property that can be set as a job property can now be set as a Docker environment variable by prefixing it\n with \nMPF_PROP_\n. For example, setting the \nMPF_PROP_TRTIS_SERVER\n environment variable in the \ntrtis-detection\n\n service in your \ndocker-compose.yml\n file will have the same effect as setting the \nTRTIS_SERVER\n job property.\n\n\nProperties set in this way will take precedence over all other property types (job, algorithm, media, etc). It is not\n possible to change the value of properties set via environment variables at runtime and therefore they should only be\n used to specify properties that will not change throughout the entire lifetime of the service.\n\n\n\n\nUpdates\n\n\n\n\n\nThe \nmpf.output.objects.censored.properties\n system property can be used to prevent properties from being shown in\n JSON output objects. 
The value for these properties will appear as \n\n.\n\n\nThe Azure Speech Detection component now retries without diarization when diarization is not supported by the selected\n locale.\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1230\n] The Azure Speech Detection component now uses a UUID for the\n recording id associated with a piece of media in order to prevent deleting a piece of media while it's in use.\n\n\n\n\n5.1.1: December 2020\n\n\n\nUpdates\n\n\n\n\n\nOnly generate \nFRAME_COUNT\n warning when the frame difference is > 1. This can be configured using\n the \nwarn.frame.count.diff\n system property.\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1209\n] The Keyword Tagging component now generates video tracks in\n the JSON output object.\n\n\n[\n#1212\n] The Keyword Tagging component now preserves the detection\n bounding box and confidence.\n\n\n\n\n5.1.0: November 2020\n\n\n\nMedia Inspection Improvements\n\n\n\n\n\nThe Workflow Manager will now handle video files that don't have a video stream as an \nAUDIO\n type, and handle video\n files that don't have a video or audio stream as an \nUNKNOWN\n type. The JSON output object contains a\n new \nmedia.mediaType\n field that will be set to \nVIDEO\n, \nAUDIO\n, \nIMAGE\n, or \nUNKNOWN\n.\n\n\nThe Workflow Manager now configures Tika\n with \ncustom MIME type support\n\n . Currently, this enables the detection of \nvideo/vnd.dlna.mpeg-tts\n and \nimage/jxr\n MIME types.\n\n\nIf the Workflow Manager cannot use Tika to determine the media MIME type then it will fall back to using the\n Linux \nfile\n command with\n a \ncustom magicfile\n\n .\n\n\nOpenMPF now supports Apple-optimized PNGs and HEIC images. Refer to the Bug Fixes section below.\n\n\n\n\nEAST Text Region Detection Component Improvements\n\n\n\n\n\nThe \nTEMPORARY_PADDING\n property has been separated into \nTEMPORARY_PADDING_X\n and \nTEMPORARY_PADDING_Y\n so that X and\n Y padding can be configured independently.\n\n\nThe \nMERGE_MIN_OVERLAP\n property has been renamed to \nMERGE_OVERLAP_THRESHOLD\n so that setting it to a value of 0 will\n merge all regions that touch, regardless of how small the amount of overlap.\n\n\nRefer to\n the \nREADME\n\n for details.\n\n\n\n\nMPFVideoCapture and MPFImageReader Tool Improvements\n\n\n\n\n\nThese tools now support a \nROTATION_FILL_COLOR\n property for setting the fill color for pixels near the corners and\n edges of frames when performing non-orthogonal rotations. Previously, the color was hardcoded to \nBLACK\n. That is\n still the default setting for most components. Now the color can be set to \nWHITE\n, which is the default setting for\n the Tesseract component.\n\n\nThese tools now support a \nROTATION_THRESHOLD\n property for adjusting the threshold at which the frame transformer\n performs rotation. Previously, the value was hardcoded to 0.1 degrees. That is still the default value. Rotation is\n not performed on any \nROTATION\n value less than that threshold. The motivation is that some algorithms detect small\n rotations (for example, on structured text) when there is no rotation. In such cases rotating the frame results in\n fewer detections.\n\n\nOpenMPF now uses FFmpeg when counting video frames. Refer to the Bug Fixes section below.\n\n\n\n\nAzure Cognitive Services (ACS) Form Detection Component\n\n\n\n\n\nThis new component utilizes\n the \nAzure Cognitive Services Form Detection REST endpoint\n\n to extract formatted text from documents (PDFs) and images. 
Refer to\n the \nREADME\n for\n details.\n\n\nThis component is capable of performing detections using a specified ACS endpoint URL. For example, different\n endpoints support receipt detection, business card detection, layout analysis, and support for custom models trained\n with or without labeled data.\n\n\nThis component may output the following detection properties depending on the endpoint, model, and media being\n processed: \nTEXT\n, \nTABLE_CSV_OUTPUT\n, \nKEY_VALUE_PAIRS_JSON\n, and \nDOCUMENT_JSON_FIELDS\n.\n\n\n\n\nKeyword Tagging Component\n\n\n\n\n\nThis new component performs the same keyword tagging behavior that was previously part of the Tesseract component, but\n does so on feed-forward tracks that generate detections with \nTEXT\n and \nTRANSCRIPT\n properties. Refer to\n the \nREADME\n for details.\n\n\nIn addition to the Tesseract component, keyword tagging behavior has been removed from the Tika Text component and ACS\n OCR component.\n\n\nExample pipelines have been added to the following components which make use of a final Keyword Tagging component\n stage:\n\n\nTesseract\n\n\nTika Text\n\n\nACS OCR\n\n\nSphinx\n\n\nACS Speech\n\n\n\n\n\n\n\n\nOptionally Skip Media Inspection\n\n\n\n\n\nThe Workflow Manager will skip media inspection if all of the required media metadata is provided in the job request.\n The \nMEDIA_HASH\n and \nMIME_TYPE\n fields are always required. Depending on the media data type, other fields may be\n required or optional:\n\n\nImages\n\n\nRequired: \nFRAME_WIDTH\n, \nFRAME_HEIGHT\n\n\nOptional: \nHORIZONTAL_FLIP\n, \nROTATION\n\n\n\n\n\n\nVideos\n\n\nRequired: \nFRAME_WIDTH\n, \nFRAME_HEIGHT\n, \nFRAME_COUNT\n, \nFPS\n, \nDURATION\n\n\nOptional: \nHORIZONTAL_FLIP\n, \nROTATION\n\n\n\n\n\n\nAudio files\n\n\nRequired: \nDURATION\n\n\n\n\n\n\n\n\n\n\n\n\nUpdates\n\n\n\n\n\nUpdate OpenMPF Python SDK exception handling for Python 3. Now instead of raising an \nEnvironmentError\n, which has\n been deprecated in Python 3, the SDK will raise an \nmpf.DetectionError\n or allow the underlying exception to be\n thrown.\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1028\n] OpenMPF can now properly handle Apple-optimized PNGs, which\n have a non-standard data chunk named CgBI before the IHDR chunk. The Workflow Manager\n uses \npngdefry\n to convert the image into a standard PNG for processing. Before\n this fix, Tika would throw an error when trying to determine the MIME type of the Apple-optimized PNG.\n\n\n[\n#1130\n] OpenMPF can now properly handle HEIC images. The Workflow\n Manager uses \nlibheif\n to convert the image into a standard PNG for processing.\n Before this fix, the HEIC image was sometimes falsely identified as a video and the Workflow Manager would fail to\n count the number of frames.\n\n\n[\n#1171\n] The MIME type in the JSON output object is no longer null\n when there is a frame counting exception.\n\n\n[\n#1192\n] When processing videos, the frame count is now obtained from\n both OpenCV and FFmpeg. The lower of the two is used. If they don't match, a \nFRAME_COUNT\n warning is generated.\n Before this fix, on some videos OpenCV would return frame counts that were magnitudes higher than the frames that\n could actually be read. 
This resulted in failing to process many video segments with a `BAD_FRAME_SIZE` error.

## OpenMPF 5.0.x

### 5.0.9: October 2020

#### Bug Fixes

- [#1200] The MPFVideoCapture and MPFImageReader tools now properly handle cropping to frame regions when the region coordinates fall outside of the frame boundary. There was a bug that would result in an OpenCV error. Note that the bug only occurred when cropping was not performed with rotation or flipping.

### 5.0.8: October 2020

#### Updates

- The Tesseract component now supports a `TESSDATA_MODELS_SUBDIRECTORY` property. The component will look for tessdata files in `/`. This allows users to easily switch between `tessdata`, `tessdata_best`, and `tessdata_fast` subdirectories.

#### Bug Fixes

- [#1199] Added missing synchronized to InProgressBatchJobsService, which was resulting in some jobs staying `IN_PROGRESS` indefinitely.

### 5.0.7: September 2020

#### TensorRT Inference Server (TRTIS) Object Detection Component

- This new component detects objects in images and videos by making use of an NVIDIA TensorRT Inference Server (TRTIS), and calculates features that can later be used by other systems to recognize the same object in other media. We provide support for running the server as a separate service during a Docker deployment, but an external server instance can be used instead.
- By default, the ip_irv2_coco model is supported and will optionally classify detected objects using COCO labels. Additionally, features can be generated for whole frames, automatically-detected object regions, and user-specified regions. Refer to the README.

### 5.0.6: August 2020

#### Enable OcvDnnDetection to Annotate Feed-forward Detections

- The OcvDnnDetection component can now be configured to operate only on certain feed-forward detections and annotate them with supplementary information. For example, the following pipeline can be configured to generate detections that have both `CLASSIFICATION` and `COLOR` detection properties:

```
DarknetDetection (person + vehicle) --> OcvDnnDetection (vehicle color)
```

For example:

```json
"detectionProperties": {
    "CLASSIFICATION": "car",
    "CLASSIFICATION CONFIDENCE LIST": "0.397336",
    "CLASSIFICATION LIST": "car",
    "COLOR": "blue",
    "COLOR CONFIDENCE LIST": "0.93507; 0.055744",
    "COLOR LIST": "blue; gray"
}
```

- The OcvDnnDetection component now supports the following properties:
    - `CLASSIFICATION_TYPE`: Set this value to change the `CLASSIFICATION*` part of each output property name to something else. For example, setting it to `COLOR` will generate `COLOR`, `COLOR LIST`, and `COLOR CONFIDENCE LIST`. When handling feed-forward detections, the pre-existing `CLASSIFICATION*` properties will be carried over and the `COLOR*` properties will be added to the detection.
    - `FEED_FORWARD_WHITELIST_FILE`: When `FEED_FORWARD_TYPE` is provided and not set to `NONE`, only feed-forward detections with class names contained in the specified file will be processed. For example, a file with only "car" in it will result in performing the exclude behavior (below) for all feed-forward detections that do not have a `CLASSIFICATION` of "car".
    - `FEED_FORWARD_EXCLUDE_BEHAVIOR`: Specifies what to do when excluding detections not specified in the `FEED_FORWARD_WHITELIST_FILE`. Acceptable values are:
        - `PASS_THROUGH`: Return the excluded detections, without modification, along with any annotated detections.
        - `DROP`: Don't return the excluded detections. Only return annotated detections.

#### Updates

- Make interop package work with Java 8 to better support external job producers and consumers.

### 5.0.5: August 2020

#### Updates

- Configure Camel not to auto-acknowledge messages. Users can now see the number of pending messages in the ActiveMQ management console for queues consumed by the Workflow Manager.
- Improve Tesseract OSD fallback behavior. This prevents selecting the OSD rotation from the fallback pass without the OSD script from the fallback pass.

### 5.0.4: August 2020

#### Updates

- Retry job callbacks when they fail. The Workflow Manager now supports the `http.callback.timeout.ms` and `http.callback.retries` system properties.
- Drop "duplicate paged in from cursor" DLQ messages.

### 5.0.3: July 2020

#### Updates

- Update ActiveMQ to 5.16.0.

### 5.0.2: July 2020

#### Updates

- Disable video segmentation for ACS Speech Detection to prevent issues when generating speaker ids.

### 5.0.1: July 2020

#### Updates

- Updated Tesseract component with `MAX_PIXELS` setting to prevent processing large images.

### 5.0.0: June 2020

#### Documentation

- Updated the openmpf-docker repo README and SWARM guides to describe the new build process, which now includes automatically copying the openmpf repo source code into the openmpf-build image instead of using various bind mounts, and building all of the component base builder and executor images.
- Updated the openmpf-docker repo README with the following sections:
    - How to Use Kibana for Log Viewing and Aggregation
    - How to Restrict Media Types that a Component Can Process
    - How to Import Root Certificates for Additional Certificate Authorities
- Updated the CONTRIBUTING guide for Docker deployment with information on the new build process and component base builder and executor images.
- Updated the Install Guide with a pointer to the "Quick Start" section on DockerHub.
- Updated the REST API with the new endpoints for getting, deleting, and creating actions, tasks, and pipelines, as well as a change to the `[GET] /rest/info` endpoint.
- Updated the C++ Batch Component API to describe changes to the `GetDetection()` calls, which now return a collection of detections or tracks instead of an error code, and to describe improvements to exception handling.
- Updated the C++ Batch Component API, Python Batch Component API, and Java Batch Component API with `MIME_TYPE`, `FRAME_WIDTH`, and `FRAME_HEIGHT` media properties.
- Updated the Python Batch Component API with information on Python3 and the simplification of using a `dict` for some of the data members.

#### JSON Output Object

- Renamed `stages` to `tasks` for clarity and consistency with the rest of the code.
- The `media` element no longer contains a `message` field.
- Each `detectionProcessingError` element now contains a `code` field.
- Errors and warnings are now grouped by `mediaId` and summarized using a `details` element that contains a `source`, `code`, and `message` field (a rough sketch follows).
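Below is a rough sketch of a single summarized `details` entry under the new structure; only the `source`, `code`, and `message` field names come from this change, and the values shown are purely illustrative:

```json
{
  "source": "COMPONENT",
  "code": "FRAME_COUNT",
  "message": "OpenCV and ffmpeg reported different frame counts for this video."
}
```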
- Errors and warnings are now grouped by `mediaId` and summarized using a `details` element that contains a `source`, `code`, and `message` field. Refer to this comment for an example of the JSON structure. Note that errors and warnings generated by the Workflow Manager do not have a `mediaId`.
- When an error or warning occurs in multiple frames of a video for a single piece of media it will be represented in one `details` element and the `message` will list the frame ranges.

#### Interoperability Package

- Renamed `JsonStage.java` to `JsonTask.java`.
- Removed `JsonJobRequest.java`.
- Modified `JsonDetectionProcessingError.java` by removing the `startOffset` and `stopOffset` fields and adding the following new fields: `startOffsetFrame`, `stopOffsetFrame`, `startOffsetTime`, `stopOffsetTime`, and `code`.
- Updated `JsonMediaOutputObject.java` by removing the `message` field.
- Added `JsonMediaIssue.java` and `JsonIssueDetails.java`.

#### Persistent Database

- The `input_object` column in the `job_request` table has been renamed to `job` and the content now contains a serialized form of `BatchJob.java` instead of `JsonJobRequest.java`.

#### C++ Batch Component API

- The `GetDetections()` calls now return a collection instead of an error code:
    - `std::vector<MPFImageLocation> GetDetections(const MPFImageJob &job)`
    - `std::vector<MPFVideoTrack> GetDetections(const MPFVideoJob &job)`
    - `std::vector<MPFAudioTrack> GetDetections(const MPFAudioJob &job)`
    - `std::vector<MPFGenericTrack> GetDetections(const MPFGenericJob &job)`
- `MPFDetectionException` can now be constructed with a `what` parameter representing a descriptive error message:
    - `MPFDetectionException(MPFDetectionError error_code, const std::string &what = "")`
    - `MPFDetectionException(const std::string &what)`

#### Python Batch Component API

- Simplified the `detection_properties` and `frame_locations` data members to use a Python `dict` instead of a custom data type. A minimal sketch follows this list.
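As a rough illustration of the dict-based data members, here is a minimal sketch of an image component returning a detection. It assumes the `mpf_component_api.ImageLocation` constructor accepts position, size, confidence, and a `detection_properties` dict, and that the batch entry point is `get_detections_from_image`; consult the Python Batch Component API for the authoritative signatures.

```python
# Minimal sketch: detection_properties is now a plain dict, not a custom type.
# Assumes ImageLocation(x_left_upper, y_top_left, width, height, confidence,
# detection_properties) as described in the Python Batch Component API docs.
import mpf_component_api as mpf_api

class MyImageDetector:
    detection_type = 'CLASS'

    def get_detections_from_image(self, image_job):
        properties = {'CLASSIFICATION': 'car'}   # plain dict, no wrapper class
        yield mpf_api.ImageLocation(0, 0, 100, 50, 0.9, properties)
```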
#### Full Docker Conversion

- Each component is now encapsulated in its own Docker image which self-registers with the Workflow Manager at runtime. This deconflicts component dependencies, and allows for greater flexibility when deciding which components to deploy at runtime.
- The Node Manager image has been removed. For Docker deployments, component services should be managed using Docker tools external to OpenMPF.
- In Docker deployments, streaming job REST endpoints are disabled, the Nodes web page is no longer available, component tar.gz packages cannot be registered through the Component Registration web page, and the `mpf` command line script can now only be run on the Workflow Manager container to modify user settings. The preexisting features are now reserved for non-Docker deployments and development environments.
- The OpenMPF Docker stack can optionally be deployed with Kibana (which depends on Elasticsearch and Filebeat) for viewing log files. Refer to the openmpf-docker README.

#### Docker Component Base Images

- A base builder image and executor image are provided for C++ (README), Python (README), and Java (README) component development. Component developers can also refer to the Dockerfile in the source code for each component as reference for how to make use of the base images.

#### Restrict Media Types that a Component Can Process

- Each component service now supports an optional `RESTRICT_MEDIA_TYPES` Docker environment variable that specifies the types of media that service will process. For example, `RESTRICT_MEDIA_TYPES: VIDEO,IMAGE` will process both videos and images, while `RESTRICT_MEDIA_TYPES: IMAGE` will only process images. If not specified, the service will process all of the media types it natively supports. For example, this feature can be used to ensure that some services are always available to process images while others are processing long videos.

#### Import Additional Root Certificates into the Workflow Manager

- Additional root certificates can be imported into the Workflow Manager at runtime by adding an entry for `MPF_CA_CERTS` to the workflow-manager service's environment variables in `docker-compose.core.yml`. `MPF_CA_CERTS` must contain a colon-delimited list of absolute file paths. Of note, a root certificate may be used to trust the identity of a remote object storage server.

#### DockerHub

- Pushed prebuilt OpenMPF Docker images to DockerHub. Refer to the "Quick Start" section of the OpenMPF Workflow Manager image documentation.

#### Version Updates

- Updated from Oracle Java 8 to OpenJDK 11, which required updating to Tomcat 8.5.41. We now use Cargo to run integration tests.
- Updated OpenCV from 3.0.0 to 3.4.7 to update Deep Neural Networks (DNN) support.
- Updated Python from 2.7 to 3.8.2.

#### FFmpeg

- We are no longer building separate audio and video encoders and decoders for FFmpeg. Instead, we are using the built-in decoders that come with FFmpeg by default. This simplifies the build process and redistribution via Docker images.

#### Artifact Extraction

- The `ARTIFACT_EXTRACTION_POLICY` property can now be assigned a value of `NONE`, `VISUAL_TYPES_ONLY`, `ALL_TYPES`, or `ALL_DETECTIONS`.
    - With the `VISUAL_TYPES_ONLY` or `ALL_TYPES` policy, artifacts will be extracted according to the `ARTIFACT_EXTRACTION_POLICY*` properties. With the `NONE` and `ALL_DETECTIONS` policies, those settings are ignored.
    - Note that previously `NONE`, `VISUAL_EXEMPLARS_ONLY`, `EXEMPLARS_ONLY`, `ALL_VISUAL_DETECTIONS`, and `ALL_DETECTIONS` were supported.
- The following `ARTIFACT_EXTRACTION_POLICY*` properties are now supported (see the frame-selection sketch after this section):
    - `ARTIFACT_EXTRACTION_POLICY_EXEMPLAR_FRAME_PLUS`: Extract the exemplar frame from the track, plus this many frames before and after the exemplar.
    - `ARTIFACT_EXTRACTION_POLICY_FIRST_FRAME`: If true, extract the first frame from the track.
    - `ARTIFACT_EXTRACTION_POLICY_MIDDLE_FRAME`: If true, extract the frame with a detection that is closest to the middle frame from the track.
    - `ARTIFACT_EXTRACTION_POLICY_LAST_FRAME`: If true, extract the last frame from the track.
    - `ARTIFACT_EXTRACTION_POLICY_TOP_CONFIDENCE_COUNT`: Sort the detections in a track by confidence and then extract this many detections, starting with those which have the highest confidence.
    - `ARTIFACT_EXTRACTION_POLICY_CROPPING`: If true, an artifact will be extracted for each detection in each frame that is selected according to the other `ARTIFACT_EXTRACTION_POLICY*` properties. The extracted artifact will be cropped to the width and height of the detection bounding box, and the artifact will be rotated according to the detection `ROTATION` property. If false, the artifact extraction behavior is unchanged from the previous release: the entire frame will be extracted without any rotation.
- For clarity, `OUTPUT_EXEMPLARS_ONLY` has been renamed to `OUTPUT_ARTIFACTS_AND_EXEMPLARS_ONLY`. Extracted artifacts will always be reported in the JSON output object.
- The `mpf.output.objects.exemplars.only` system property has been renamed to `mpf.output.objects.artifacts.and.exemplars.only`. It works the same as before with the exception that if an artifact is extracted for a detection then that detection will always be represented in the JSON output object, whether it's an exemplar or not.
- The `mpf.output.objects.last.stage.only` system property has been renamed to `mpf.output.objects.last.task.only`. It works the same as before with the exception that when set to true artifact extraction is skipped for all tasks but the last task.
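The sketch below shows one way the frame-selection policies above could combine. It is purely illustrative, using a simplified track model (a dict of frame number to confidence); it is not the Workflow Manager's implementation.

```python
# Illustrative combination of the ARTIFACT_EXTRACTION_POLICY* properties described above.
# A track is modeled as {frame_number: confidence}. This is NOT the Workflow Manager code.

def select_artifact_frames(track, exemplar_frame, props):
    frames = sorted(track)
    selected = set()

    if 'ARTIFACT_EXTRACTION_POLICY_EXEMPLAR_FRAME_PLUS' in props:
        plus = int(props['ARTIFACT_EXTRACTION_POLICY_EXEMPLAR_FRAME_PLUS'])
        # exemplar frame plus `plus` frames before and after it, clipped to the track
        selected.update(f for f in frames if abs(f - exemplar_frame) <= plus)

    if props.get('ARTIFACT_EXTRACTION_POLICY_FIRST_FRAME') == 'true':
        selected.add(frames[0])
    if props.get('ARTIFACT_EXTRACTION_POLICY_MIDDLE_FRAME') == 'true':
        middle = (frames[0] + frames[-1]) // 2
        selected.add(min(frames, key=lambda f: abs(f - middle)))  # detection closest to the middle
    if props.get('ARTIFACT_EXTRACTION_POLICY_LAST_FRAME') == 'true':
        selected.add(frames[-1])

    top_n = int(props.get('ARTIFACT_EXTRACTION_POLICY_TOP_CONFIDENCE_COUNT', 0))
    if top_n > 0:
        by_confidence = sorted(frames, key=lambda f: track[f], reverse=True)
        selected.update(by_confidence[:top_n])

    return sorted(selected)
```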
#### REST Endpoints

- Modified `[GET] /rest/info`. Now returns output like `{"version": "4.1.0", "dockerEnabled": true}`.
- Added the following REST endpoints for getting, removing, and creating actions, tasks, and pipelines. Refer to the REST API for more information:
    - `[GET] /rest/actions`, `[GET] /rest/tasks`, `[GET] /rest/pipelines`
    - `[DELETE] /rest/actions`, `[DELETE] /rest/tasks`, `[DELETE] /rest/pipelines`
    - `[POST] /rest/actions`, `[POST] /rest/tasks`, `[POST] /rest/pipelines`
- All of the endpoints above are new with the exception of `[GET] /rest/pipelines`. That endpoint has changed since the last version of OpenMPF. Some fields in the response JSON have been removed and renamed. Also, it now returns a collection of tasks for each pipeline. Refer to the REST API.
- `[GET] /rest/algorithms` can be used to get information about algorithms. Note that algorithms are tied to registered components, so to remove an algorithm you must unregister the associated component. To add an algorithm, start the associated component's Docker container so it self-registers with the Workflow Manager.

#### Incomplete Actions, Tasks, and Pipelines

- The previous version of OpenMPF would generate an error when attempting to register a component that included actions, tasks, or pipelines that depend on algorithms, actions, or tasks that are not yet registered with the Workflow Manager. This required components to be registered in a specific order. Also, when unregistering a component, it required the components which depend on it to be unregistered. These dependency checks are no longer enforced.
- In general, the Workflow Manager now appropriately handles incomplete actions, tasks, and pipelines by checking if all of the elements are defined before executing a job, and then preserving that information in memory until the job is complete. This allows components to be registered and removed in an arbitrary order without affecting the state of other components, actions, tasks, or pipelines. This also allows actions and tasks to be removed using the new REST endpoints and then re-added at a later time while still preserving the elements that depend on them.
- Note that unregistering a component while a job is running will cause it to stall. Please ensure that no jobs are using a component before unregistering it.

#### Python Arbitrary Rotation

- The Python MPFVideoCapture and MPFImageReader tools now support `ROTATION` values other than 0, 90, 180, and 270 degrees. Users can now specify a clockwise `ROTATION` job property in the range [0, 360). Values outside that range will be normalized to that range. Floating point values are accepted. This is similar to the existing support for C++ arbitrary rotation.
#### OpenCV Deep Neural Networks (DNN) Detection Component

- This new component replaces the old CaffeDetection component. It supports the same GoogLeNet and Yahoo Not Suitable For Work (NSFW) models as the old component, but removes support for the Rezafuad vehicle color detection model in favor of a custom TensorFlow vehicle color detection model. In our tests, the new model has proven to be more generalizable and provide more accurate results on never-before-seen test data. Refer to the README.

#### Azure Cognitive Services (ACS) Speech Detection Component

- This new component utilizes the Azure Cognitive Services Batch Transcription REST endpoint to transcribe speech from audio and video files. Refer to the README.

#### Tesseract OCR Text Detection Component

- Text tagging has been simplified to only support regular expression searches. Whole keyword searches are a subset of regular expression searches, and are therefore still supported. Also, the `text-tags.json` file format has been updated to allow for specifying case-sensitive regular expression searches.
- Additionally, the `TRIGGER_WORDS` and `TRIGGER_WORDS_OFFSET` detection properties are now supported, which list the OCR'd words that resulted in adding a `TAG` to the detection, and the character offset of those words within the OCR'd `TEXT`, respectively.
- Key changes to tagging output and the `text-tags.json` format are outlined below (a parsing sketch of the output format follows this section). Refer to the README for more information:
    - Regex patterns should now be entered in the format `{"pattern": "regex_pattern"}`. Users can add and toggle the `"caseSensitive"` regex flag for each pattern.
        - For example: `{"pattern": "(\\b)bus(\\b)", "caseSensitive": true}` enables case-sensitive regex pattern matching.
        - By default, each regex pattern, including those in the legacy format, will be case-insensitive.
    - As part of the text tagging update, the `TAGS` outputs are now separated by semicolons `;` rather than commas `,` to be consistent with the delimiters for `TRIGGER_WORDS` and `TRIGGER_WORDS_OFFSET` output patterns.
        - Because semicolons can be part of the trigger word itself, those semicolons will be encapsulated in brackets. For example, `detected trigger with a ;` in the OCR'd `TEXT` is reported as `TRIGGER_WORDS=detected trigger with a [;]; some other trigger`.
    - Commas are now used to group each set of `TRIGGER_WORDS_OFFSET` with its respective `TRIGGER_WORDS` output. Both `TAGS` and `TRIGGER_WORDS` are separated by semicolons only.
        - For example: `TRIGGER_WORDS=trigger1; trigger2`, `TRIGGER_WORDS_OFFSET=0-5, 6-10; 12-15` means that `trigger1` occurs twice in the text at the index ranges 0-5 and 6-10, and `trigger2` occurs at index range 12-15.
    - Regex tagging now follows the C++ ECMAScript format (see examples here) after resolving JSON string conversion for regex tags.
        - As a result, the regex patterns `\b` and `\p` in the text tagging file must now be written as `\\b` and `\\p`, respectively, to match the format of other regex character patterns (ex. `\\d`, `\\w`, `\\s`, etc.).
- The `MAX_PARALLEL_SCRIPT_THREADS` and `MAX_PARALLEL_PAGE_THREADS` properties are now supported. When processing images, the first property is used to determine how many threads to run in parallel. Each thread performs OCR using a different language or script model. When processing PDFs, the second property is used to determine how many threads to run in parallel. Each thread performs OCR on a different page of the PDF.
- The `ENABLE_OSD_FALLBACK` property is now supported. If enabled, an additional round of OSD is performed when the first round fails to generate script predictions that are above the OSD score and confidence thresholds. In the second pass, the component will run OSD on multiple copies of the input text image to get an improved prediction score, and the `OSD_FALLBACK_OCCURRED` detection property will be set to true.
- If any OSD-detected models are missing, the new `MISSING_LANGUAGE_MODELS` detection property will list the missing models.
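To make the delimiter rules concrete, here is a small, illustrative parser for the `TRIGGER_WORDS` and `TRIGGER_WORDS_OFFSET` values described above. It is a sketch of the documented output format, not code taken from the component.

```python
# Illustrative parser for the TRIGGER_WORDS / TRIGGER_WORDS_OFFSET format described above.
# Semicolons separate triggers (and their offset groups); a literal semicolon inside a
# trigger word is escaped as "[;]"; commas separate the offset ranges for one trigger.
import re

def parse_trigger_words(value):
    # Split on ";" that is not the escaped form "[;]", then restore the escape.
    parts = re.split(r'(?<!\[);(?!\])', value)
    return [part.strip().replace('[;]', ';') for part in parts if part.strip()]

def parse_trigger_offsets(value):
    groups = []
    for group in value.split(';'):
        ranges = []
        for item in group.split(','):
            start, _, end = item.strip().partition('-')
            ranges.append((int(start), int(end or start)))
        groups.append(ranges)
    return groups

words = parse_trigger_words('trigger1; trigger2')
offsets = parse_trigger_offsets('0-5, 6-10; 12-15')
print(list(zip(words, offsets)))
# [('trigger1', [(0, 5), (6, 10)]), ('trigger2', [(12, 15)])]
```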
#### Tika Text Detection Component

- The Tika text detection component now supports text tagging in the same way as the Tesseract component. Refer to the README.

#### Other Improvements

- Simplified component `descriptor.json` files by moving the specification of common properties, such as `CONFIDENCE_THRESHOLD`, `FRAME_INTERVAL`, `MIN_SEGMENT_LENGTH`, etc., to a single `workflow-properties.json` file. Now when the Workflow Manager is updated to support new features, the component `descriptor.json` file will not need to be updated.
- Updated the Sphinx component to return `TRANSCRIPT` instead of `TRANSCRIPTION`, which is grammatically correct.
- Whitespace is now trimmed from property names when jobs are submitted via the REST API.
- The Darknet Docker image now includes the YOLOv3 model weights.
- The C++ and Python ModelsIniParser now allows users to specify optional fields.
- When a job completion callback fails, but otherwise the job is successful, the final state of the job will be `COMPLETE_WITH_WARNINGS`.

#### Bug Fixes

- [#772] Can now create a custom pipeline with long action names using the Pipelines 2 UI.
- [#812] Now properly setting the start and stop index for elements in the `detectionProcessingErrors` collection in the JSON output object. Errors reported for each job segment will now appear in the collection.
- [#941] Tesseract component no longer segfaults when handling corrupt media.
- [#1005] Fixed a bug that caused a NullPointerException when attempting to get output object JSON via REST before a job completes.
- [#1035] The search bar in the Job Status UI can once again be used to search for job ids.
- [#1104] Fixed C++/Python component executor memory leaks.
- [#1108] Fixed a bug when handling frames and detections that are horizontally flipped.
This affected both markup and feed-forward behaviors.\n\n\n[\n#1119\n] Fixed Tesseract component memory leaks and uninitialized\n read issues.\n\n\n\n\nKnown Issues\n\n\n\n\n\n[\n#1028\n] Media inspection fails to handle Apple-optimized PNGs with\n the CgBI data chunk before the IHDR chunk.\n\n\n[\n#1109\n] We made the search bar in the Job Status UI more efficient\n by shifting it to a database query, but in doing so introduced a bug where the search operates on UTC time instead of\n local system time.\n\n\n[\n#1010\n] \nmpf.output.objects.enabled\n does not behave as expected for\n batch jobs. A user would expect it to control whether the JSON output object is generated, but it's generated\n regardless of that setting.\n\n\n[\n#1032\n] Jobs fail on corrupt QuickTime videos. For these videos, the\n OpenCV-reported frame count is more than twice the actual frame count.\n\n\n[\n#1106\n] When a job ends in ERROR the job status UI does not show an\n End Date.\n\n\n\n\nOpenMPF 4.1.x\n\n\n4.1.14: June 2020\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1120\n] The node-manager Docker image now correctly installs CUDA\n libraries so that GPU-enabled components on that image can run on the GPU.\n\n\n[\n#1064\n] Fixed memory leaks in the Darknet component for various\n network types, and when using GPU resources. This bug covers everything not addressed\n by \n#1062\n.\n\n\n\n\n4.1.13: June 2020\n\n\n\nUpdates\n\n\n\n\n\nUpdated the OpenCV build and media inspection process to properly handle webp images.\n\n\n\n\n4.1.12: May 2020\n\n\n\nUpdates\n\n\n\n\n\nUpdated JDK from \njdk-8u181-linux-x64.rpm\n to \njdk-8u251-linux-x64.rpm\n.\n\n\n\n\n4.1.11: May 2020\n\n\n\nTesseract OCR Text Detection Component\n\n\n\n\n\nAdded \nINVALID_MIN_IMAGE_SIZE\n job property to filter out images with extremely low width or height.\n\n\nUpdated image rescaling behavior to account for image dimension limits.\n\n\nFixed handling of \nnullptr\n returns from Tesseract API OCR calls.\n\n\n\n\n4.1.8: May 2020\n\n\n\nAzure Cognitive Services (ACS) OCR Component\n\n\n\n\n\nThis new component utilizes\n the \nACS OCR REST endpoint\n\n to extract text from images and videos. Refer to\n the \nREADME\n.\n\n\n\n\n4.1.6: April 2020\n\n\n\nUpdates\n\n\n\n\n\nNow silently discarding ActiveMQ DLQ \"Suppressing duplicate delivery on connection\" messages in addition to \"duplicate\n from store\" messages.\n\n\n\n\n4.1.5: March 2020\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1062\n] Fixed a memory leak in the Darknet component that occurred\n when running jobs on CPU resources with the Tiny YOLO model.\n\n\n\n\nKnown Issues\n\n\n\n\n\n[\n#1064\n] The Darknet component has memory leaks for various network\n types, and potentially when using GPU resources. This bug covers everything not addressed\n by \n#1062\n.\n\n\n\n\n4.1.4: March 2020\n\n\n\nUpdates\n\n\n\n\n\nUpdated from Hibernate 5.0.8 to 5.4.12 to support schema-based multitenancy. This allows multiple instances of OpenMPF\n to use the same PostgreSQL database as long as each instance connects to the database as a separate user, and the\n database is configured appropriately. This also required updating Tomcat from 7.0.72 to 7.0.76.\n\n\n\n\nJSON Output Object\n\n\n\n\n\nUpdated the Workflow Manager to include an \noutputobjecturi\n in GET callbacks, and \noutputObjectUri\n in POST\n callbacks, when jobs complete. 
This URI specifies a file path, or path on the object storage server, depending on\n where the JSON output object is located.\n\n\n\n\nInteroperability Package\n\n\n\n\n\nUpdated \nJsonCallbackBody.java\n to contain an \noutputObjectUri\n field.\n\n\n\n\n4.1.3: February 2020\n\n\n\nFeatures\n\n\n\n\n\nAdded support for \nDETECTION_PADDING_X\n and \nDETECTION_PADDING_Y\n optional job properties. The value can be a\n percentage or whole-number pixel value. When positive, each detection region in each track will be expanded. When\n negative, the region will shrink. If the detection region is shrunk to nothing, the shrunk dimension(s) will be set to\n a value of 1 pixel and the \nSHRUNK_TO_NOTHING\n detection property will be set to true.\n\n\nAdded support for \nDISTANCE_CONFIDENCE_WEIGHT_FACTOR\n and \nSIZE_CONFIDENCE_WEIGHT_FACTOR\n SuBSENSE algorithm\n properties. Increasing the value of the first property will generate detection confidence values that favor being\n closer to the center frame of a track. Increasing the value of the second property will generate detection confidence\n values that favor large detection regions.\n\n\n\n\n4.1.1: January 2020\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1016\n] Fixed a bug that caused a deadlock situation when the media\n inspection process failed quickly when processing many jobs using a pipeline with more than one stage.\n\n\n\n\n4.1.0: July 2019\n\n\n\nDocumentation\n\n\n\n\n\nUpdated the \nC++ Batch Component API\n to describe the \nROTATION\n\n detection property. See the \nC++ Arbitrary Rotation\n section below.\n\n\nUpdated the \nREST API\n with new component registration REST endpoints. See\n the \nComponent Registration REST Endpoints\n section below.\n\n\nAdded a \nREADME\n for\n the EAST text region detection component. See\n the \nEAST Text Region Detection Component\n section below.\n\n\nUpdated the Tesseract OCR text detection\n component \nREADME\n\n . See the \nTesseract OCR Text Detection Component\n section below.\n\n\nUpdated the openmpf-docker repo \nREADME\n\n and \nSWARM\n guide to describe the new streamlined\n approach to using \ndocker-compose config\n. See the \nDocker Deployment\n section below.\n\n\nFixed the description of \nMIN_SEGMENT_LENGTH\n and associated examples in\n the \nUser Guide\n for\n issue \n#891\n.\n\n\nUpdated the \nJava Batch Component API\n with information on how to use Log4j2.\n Related to resolving issue \n#855\n.\n\n\nUpdated the \nInstall Guide\n to point to the\n Docker \nREADME\n.\n\n\nTransformed the Build Guide into a \nDevelopment Environment Guide\n.\n\n\n\n\n\n\nC++ Arbitrary Rotation\n\n\n\n\n\nThe C++ MPFVideoCapture and MPFImageReader tools now support \nROTATION\n values other than 0, 90, 180, and 270 degrees.\n Users can now specify a clockwise \nROTATION\n job property in the range [0, 360). Values outside that range will be\n normalized to that range. Floating point values are accepted.\n\n\nWhen using those tools to read frame data, they will automatically correct for rotation so that the returned frame is\n horizontally oriented toward the normal 3 o'clock position.\n\n\nWhen \nFEED_FORWARD_TYPE=REGION\n, these tools will look for a \nROTATION\n detection property in the feed-forward\n detections and automatically correct for rotation. 
For example, a detection property of \nROTATION=90\n represents\n that the region is rotated 90 degrees counter clockwise, and therefore must be rotated 90 degrees clockwise to\n correct for it.\n\n\nWhen \nFEED_FORWARD_TYPE=SUPERSET_REGION\n, these tools will properly account for the \nROTATION\n detection property\n associated with each feed-forward detection when calculating the bounding box that encapsulates all of those\n regions.\n\n\nWhen \nFEED_FORWARD_TYPE=FRAME\n, these tools will rotate the frame according to the \nROTATION\n job property. It's\n important to note that for rotations other than 0, 90, 180, and 270 degrees the rotated frame dimensions will be\n larger than the original frame dimensions. This is because the frame needs to be expanded to encapsulate the\n entirety of the original rotated frame region. Black pixels are used to fill the empty space near the edges of the\n original frame.\n\n\n\n\n\n\nThe Markup component now places a colored dot at the upper-left corner of each detection region so that users can\n determine the rotation of the region relative to the entire frame.\n\n\n\n\n\n\nComponent Registration REST Endpoints\n\n\n\n\n\nAdded a \n[POST] /rest/components/registerUnmanaged\n endpoint so that components running as separate Docker containers\n can self-register with the Workflow Manager.\n\n\nSince these components are not managed by the Node Manager, they are considered unmanaged OpenMPF components.\n These components are not displayed in Nodes web UI and are tagged as unmanaged in the Component Registration web\n UI where they can only be removed.\n\n\nNote that components uploaded to the Component Registration web UI as .tar.gz files are considered managed\n components.\n\n\n\n\n\n\nAdded a \n[DELETE] /rest/components/{componentName}\n endpoint that can be used to remove managed and unmanaged\n components.\n\n\n\n\nPython Component Executor Docker Image\n\n\n\n\n\nComponent developers can now use a Python component executor Docker image to write a Python component for OpenMPF that\n can be encapsulated within a Docker container. This isolates the build and execution environment from the rest of\n OpenMPF. For more information, see\n the \nREADME\n.\n\n\nComponents developed with this image are not managed by the Node Manager; rather, they self-register with the Workflow\n Manager and their lifetime is determined by their own Docker container.\n\n\n\n\n\n\nDocker Deployment\n\n\n\n\n\nStreamlined single-host \ndocker-compose up\n deployments and multi-host \ndocker stack deploy\n swarm deployments. Now\n users are instructed to create a single \ndocker-compose.yml\n file for both types of deployments.\n\n\nRemoved the \ndocker-generate-compose-files.sh\n script in favor of allowing users the flexibility of combining\n multiple \ndocker-compose.*.yml\n files together using \ndocker-compose config\n. See\n the \nGenerate docker-compose.yml\n\n section of the README.\n\n\nComponents based on the Python component executor Docker image can now be defined and configured directly\n in \ndocker-compose.yml\n.\n\n\nOpenMPF Docker images now make use of Docker labels.\n\n\n\n\n\n\nEAST Text Region Detection Component\n\n\n\n\n\nThis new component uses the Efficient and Accurate Scene Text (EAST) detection model to detect text regions in images\n and videos. It reports their location, angle of rotation, and text type (\nSTRUCTURED\n or \nUNSTRUCTURED\n), and supports\n a variety of settings to control the behavior of merging text regions into larger regions. 
It does not perform OCR on\n the text or track detections across video frames. Thus, each video track is at most one detection long. For more\n information, see\n the \nREADME\n.\n\n\nOptionally, this component can be built as a Docker image using the Python component executor Docker image, allowing\n it to exist apart from the Node Manager image.\n\n\n\n\n\n\nTesseract OCR Text Detection Component\n\n\n\n\n\nUpdated to support reading tessdata \n*.traineddata\n files at a specified \nMODELS_DIR_PATH\n. This allows users to\n install new \n*.traineddata\n files post deployment.\n\n\nUpdated to optionally perform Tesseract Orientation and Script Detection (OSD). When enabled, the component will\n attempt to use the orientation results of OSD to automatically rotate the image, as well as perform OCR using the\n scripts detected by OSD.\n\n\nUpdated to optionally rotate a feed-forward text region 180 degrees to account for upside-down text.\n\n\nNow supports the following preprocessing properties for both structured and unstructured text:\n\n\nText sharpening\n\n\nText rescaling\n\n\nOtsu image thresholding\n\n\nAdaptive thresholding\n\n\nHistogram equalization\n\n\nAdaptive histogram equalization (also known as Contrast Limited Adaptive Histogram Equalization (CLAHE))\n\n\n\n\n\n\nWill use the \nTEXT_TYPE\n detection property in feed-forward regions provided by the EAST component to determine which\n preprocessing steps to perform.\n\n\nFor more information on these new features, see\n the \nREADME\n.\n\n\nRemoved gibberish and string filters since they only worked on English text.\n\n\n\n\nActiveMQ Profiles\n\n\n\n\n\nThe ActiveMQ Docker image now supports custom profiles. The container selects an \nactivemq.xml\n and \nenv\n file to use\n at runtime based on the value of the \nACTIVE_MQ_PROFILE\n environment variable. Among others, these files contain\n configuration settings for Java heap space and component queue memory limits.\n\n\nThis release only supports a \ndefault\n profile setting, as defined by \nactivemq-default.xml\n and \nenv.default\n;\n however, developers are free to add other \nactivemq-.xml\n and \nenv.\n files to the ActiveMQ Docker\n image to suit their needs.\n\n\n\n\nDisabled ActiveMQ Prefetch\n\n\n\n\n\nDisabled ActiveMQ prefetching on all component queues. 
Previously, a prefetch value of one was resulting in situations\n where one component service could be dispatched two sub-jobs, thereby starving other available component services\n which could process one of those sub-jobs in parallel.\n\n\n\n\nSearch Region Percentages\n\n\n\n\n\nIn addition to using exact pixel values, users can now use percentages for the following properties when specifying\n search regions for C++ and Python components:\n\n\nSEARCH_REGION_TOP_LEFT_X_DETECTION\n\n\nSEARCH_REGION_TOP_LEFT_Y_DETECTION\n\n\nSEARCH_REGION_BOTTOM_RIGHT_X_DETECTION\n\n\nSEARCH_REGION_BOTTOM_RIGHT_Y_DETECTION\n\n\n\n\n\n\nFor example, setting \nSEARCH_REGION_TOP_LEFT_X_DETECTION=50%\n will result in components only processing the right half\n of an image or video.\n\n\nOptionally, users can specify exact pixel values of some of these properties and percentages for others.\n\n\n\n\nOther Improvements\n\n\n\n\n\nIncreased the number of ActiveMQ maxConcurrentConsumers for the \nMPF.COMPLETED_DETECTIONS\n queue from 30 to 60.\n\n\nThe Create Job web UI now only displays the content of the \n$MPF_HOME/share/remote-media\n directory instead of all\n of \n$MPF_HOME/share\n, which prevents the Workflow Manager from indexing generated JSON output files, artifacts, and\n markup. Indexing the latter resulted in Java heap space issues for large scale production systems. This is a\n mitigation for issue \n#897\n.\n\n\nThe Job Status web UI now makes proper use of pagination in SQL/Hibernate through the Workflow Manager to avoid\n retrieving the entire jobs table, which was inefficient.\n\n\nThe Workflow Manager will now silently discard all duplicate messages in the ActiveMQ Dead Letter Queue (DLQ),\n regardless of destination. Previously, only messages destined for component sub-job request queues were discarded.\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#891\n] Fixed a bug where the Workflow Manager media segmenter\n generated short segments that were minimally \nMIN_SEGMENT_LENGTH+1\n in size instead of \nMIN_SEGMENT_LENGTH\n.\n\n\n[\n#745\n] In environments where thousands of jobs are processed, users\n have observed that, on occasion, pending sub-job messages in ActiveMQ queues are not processed until a new job is\n created. This seems to have been resolved by disabling ActiveMQ prefetch behavior on component queues.\n\n\n[\n#855\n] A logback circular reference suppressed exception no longer\n throws a StackOverflowError. This was resolved by transitioning the Workflow Manager and Java components from the\n Logback framework to Log4j2.\n\n\n\n\nKnown Issues\n\n\n\n\n\n[\n#897\n] OpenMPF will attempt to index files located\n in \n$MPF_HOME/share\n as soon as the webapp is started by Tomcat. This is so that those files can be listed in a\n directory tree in the Create Job web UI. The main problem is that once a file gets indexed it's never removed from the\n cache, even if the file is manually deleted, resulting in a memory leak.\n\n\n\n\nLate Additions: November 2019\n\n\n\n\n\nUser names, roles, and passwords can now be set by using an optional \nuser.properties\n file. This allows\n administrators to override the default OpenMPF users that come preconfigured, which may be a security risk. 
Refer to\n the \"Configure Users\" section of the\n openmpf-docker \nREADME\n for\n more information.\n\n\n\n\nLate Additions: December 2019\n\n\n\n\n\nTransitioned from using a mySQL persistent database to PostgreSQL to support users that use an external PostgreSQL\n database in the cloud.\n\n\nUpdated the EAST component to support a \nTEMPORARY_PADDING\n and \nFINAL_PADDING\n property. The first property\n determines how much padding is added to detections during the non-maximum suppression or merging step. This padding is\n effectively removed from the final detections. The second property is used to control the final amount of padding on\n the output regions. Refer to\n the \nREADME\n.\n\n\n\n\nOpenMPF 4.0.x\n\n\n4.0.0: February 2019\n\n\n\nDocumentation\n\n\n\n\n\nAdded an \nObject Storage Guide\n with information on how to configure OpenMPF to work\n with a custom NGINX object storage server, and how to run jobs that use an S3 object storage server. Note that the\n system properties for the custom NGINX object storage server have changed since the last release.\n\n\n\n\nUpgrade to Tesseract 4.0\n\n\n\n\n\nBoth the Tesseract OCR Text Detection Component and OpenALPR License Plate Detection Components have been updated to\n use the new version of Tesseract.\n\n\nAdditionally, Leptonica has been upgraded from 1.72 to 1.75.\n\n\n\n\nDocker Deployment\n\n\n\n\n\nThe Docker images now use the yum package manager to install ImageMagick6 from a public RPM repository instead of\n downloading the RPMs directly from imagemagick.org. This resolves an issue with the OpenMPF Docker build where RPMs\n on \nimagemagick.org\n were no longer available.\n\n\n\n\nTesseract OCR Text Detection Component\n\n\n\n\n\nUpdated to allow the user to set a \nTESSERACT_OEM\n property in order to select an OCR engine mode (OEM).\n\n\n\"script/Latin\" can now be specified as the \nTESSERACT_LANGUAGE\n. When selected, Tesseract will select all Latin\n characters, which can be from different Latin languages.\n\n\n\n\nCeph S3 Object Storage\n\n\n\n\n\nAdded support for downloading files from, and uploading files to, an S3 object storage server. The following job\n properties can be provided: \nS3_ACCESS_KEY\n, \nS3_SECRET_KEY\n, \nS3_RESULTS_BUCKET\n, \nS3_UPLOAD_ONLY\n.\n\n\nAt this time, only support for Ceph object storage has been tested. However, the Workflow Manager uses the AWS SDK for\n Java to communicate with the object store, so it is possible that other S3-compatible storage solutions may work as\n well.\n\n\n\n\nISO-8601 Timestamps\n\n\n\n\n\nAll timestamps in the JSON output object, and streaming video callbacks, are now in the ISO-8601 format (e.g. \"\n 2018-12-19T12:12:59.995-05:00\"). This new format includes the time zone, which makes it possible to compare timestamps\n generated between systems in different time zones.\n\n\nThis change does not affect the track and detection start and stop offset times, which are still reported in\n milliseconds since the start of the video.\n\n\n\n\nReduced Redis Usage\n\n\n\n\n\nThe Workflow Manager has been refactored to reduce usage of the Redis in-memory database. In general, Redis is not\n necessary for storing job information and only resulted in introducing potential delays in accessing that data over\n the network stack.\n\n\nNow, only track and detection data is stored in Redis for batch jobs. This reduces the amount of memory the Workflow\n Manager requires of the Java Virtual Machine. 
Compared to the other job information, track and detection data can\n potentially be relatively much larger. In the future, we plan to store frame data in Redis for streaming jobs as well.\n\n\n\n\nCaffe Vehicle Color Estimation\n\n\n\n\n\nThe Caffe\n Component \nmodels.ini\n\n file has been updated with a \"vehicle_color\" section with links for downloading\n the \nReza Fuad Rachmadi's Vehicle Color Recognition Using Convolutional Neural Network\n\n model files.\n\n\nThe following pipelines have been added. These require the above model files to be placed\n in \n$MPF_HOME/share/models/CaffeDetection\n:\n\n\nCAFFE REZAFUAD VEHICLE COLOR DETECTION PIPELINE\n\n\nCAFFE REZAFUAD VEHICLE COLOR DETECTION (WITH FF REGION FROM TINY YOLO VEHICLE DETECTOR) PIPELINE\n\n\nCAFFE REZAFUAD VEHICLE COLOR DETECTION (WITH FF REGION FROM YOLO VEHICLE DETECTOR) PIPELINE\n\n\n\n\n\n\n\n\nTrack Merging and Minimum Track Length\n\n\n\n\n\nThe following system properties now have \"video\" in their names:\n\n\ndetection.video.track.merging.enabled\n\n\ndetection.video.track.min.gap\n\n\ndetection.video.track.min.length\n\n\ndetection.video.track.overlap.threshold\n\n\n\n\n\n\nThe above properties can be overridden by the following job properties, respectively. These have not been renamed\n since the last release:\n\n\nMERGE_TRACKS\n\n\nMIN_GAP_BETWEEN_TRACKS\n\n\nMIN_TRACK_LENGTH\n\n\nMIN_OVERLAP\n\n\n\n\n\n\nThese system and job properties now only apply to video media. This resolves an issue where users had\n set \ndetection.track.min.length=5\n, which resulted in dropping all image media tracks. By design, each image track can\n only contain a single detection.\n\n\n\n\nBug Fixes\n\n\n\n\n\nFixed a bug where the Docker entrypoint scripts appended properties to the end\n of \n$MPF_HOME/share/config/mpf-custom.properties\n every time the Docker deployment was restarted, resulting in entries\n like \ndetection.segment.target.length=5000,5000,5000\n.\n\n\nUpgrading to Tesseract 4 fixes a bug where, when specifying \nTESSERACT_LANGUAGE\n, if one of the languages is Arabic,\n then Arabic must be specified last. Arabic can now be specified first, for example: \nara+eng\n.\n\n\nFixed a bug where the minimum track length property was being applied to image tracks. Now it's only applied to video\n tracks.\n\n\nFixed a bug where ImageMagick6 installation failed while building Docker images.\n\n\n\n\nOpenMPF 3.0.x\n\n\n3.0.0: December 2018\n\n\n\n\n\nNOTE:\n The \nBuild Guide\n and \nInstall Guide\n are outdated. The old process for manually configuring a Build VM, using it to build an OpenMPF package, and installing that package, is deprecated in favor of Docker containers. Please refer to the openmpf-docker \nREADME\n.\n\n\nNOTE:\n Do not attempt to register or unregister a component through the Nodes UI in a Docker deployment. It may appear to succeed, but the changes will not affect the child Node Manager containers, only the Workflow Manager container. Also, do not attempt to use the \nmpf\n command line tools in a Docker deployment.\n\n\n\n\nDocumentation\n\n\n\n\n\nAdded a \nREADME\n\n , \nSWARM\n guide,\n and \nCONTRIBUTING\n guide for Docker deployment.\n\n\nUpdated the \nUser Guide\n with information on how track\n properties and track confidence are handled when merging tracks.\n\n\nAdded README files for new components. 
Refer to the component sections below.\n\n\n\n\nDocker Support\n\n\n\n\n\nOpenMPF can now be built and distributed as 5 Docker images: openmpf_workflow_manager, openmpf_node_manager,\n openmpf_active_mq, mysql_database, and redis.\n\n\nThese images can be deployed on a single host using \ndocker-compose up\n.\n\n\nThey can also be deployed across multiple hosts in a Docker swarm cluster using \ndocker stack deploy\n.\n\n\nGPU support is enabled through the NVIDIA Docker runtime.\n\n\nBoth HTTP and HTTPS deployments are supported.\n\n\n\n\n\n\nJSON Output Object\n\n\n\n\n\nAdded a \ntrackProperties\n field at the track level that works in much the same way as the \ndetectionProperties\n field\n at the detection level. Both are maps that contain zero or more key-value pairs. The component APIs have always\n supported the ability to return track-level properties, but they were never represented in the JSON output object,\n until now.\n\n\nSimilarly, added a track \nconfidence\n field. The component APIs always supported setting it, but the value was never\n used in the JSON output object, until now.\n\n\nAdded \njobErrors\n and\njobWarnings\n fields. The \njobErrors\n field will mention that there are items\n in \ndetectionProcessingErrors\n fields.\n\n\nThe \noffset\n, \nstartOffset\n, and \nstopOffset\n fields have been removed in favor of the existing \noffsetFrame\n\n , \nstartOffsetFrame\n, and \nstopOffsetFrame\n fields, respectively. They were redundant and deprecated.\n\n\nAdded a \nmpf.output.objects.exemplars.only\n system property, and \nOUTPUT_EXEMPLARS_ONLY\n job property, that can be set\n to reduce the size of the JSON output object by only recording the track exemplars instead of all of the detections in\n each track.\n\n\nAdded a \nmpf.output.objects.last.stage.only\n system property, and \nOUTPUT_LAST_STAGE_ONLY\n job property, that can be\n set to reduce the size of the JSON output object by only recording the detections for the last non-markup stage of a\n pipeline.\n\n\n\n\nDarknet Component\n\n\n\n\n\nThe Darknet component can now support processing streaming video.\n\n\nIn batch mode, video frames are prefetched, decoded, and stored in a buffer using a separate thread from the one that\n performs the detection. The size of the prefetch buffer can be configured by setting \nFRAME_QUEUE_CAPACITY\n.\n\n\nThe Darknet component can now perform basic tracking and generate video tracks with multiple detections. Both the\n default detection mode and preprocessor detection mode are supported.\n\n\nThe Darknet component has been updated to support the full and tiny YOLOv3 models. The YOLOv2 models are no longer\n supported.\n\n\n\n\nTesseract OCR Text Detection Component\n\n\n\n\n\nThis new component extracts text found in an image and reports it as a single-detection track.\n\n\nPDF documents can also be processed with one track detection per page.\n\n\nUsers may set the language of each track using the \nTESSERACT_LANGUAGE\n property as well as adjust other image\n preprocessing properties for text extraction.\n\n\nRefer to\n the \nREADME\n.\n\n\n\n\nOpenCV Scene Change Detection Component\n\n\n\n\n\nThis new component detects and segments a given video by scenes. 
Each scene change is detected using histogram\n comparison, edge comparison, brightness (fade outs), and overall hue/saturation/value differences between adjacent\n frames.\n\n\nUsers can toggle each type of of scene change detection technique as well as threshold properties for each detection\n method.\n\n\nRefer to the \nREADME\n.\n\n\n\n\nTika Text Detection Component\n\n\n\n\n\nThis new component extracts text contained in documents and performs language detection. 71 languages and most\n document formats (.txt, .pptx, .docx, .doc, .pdf, etc.) are supported.\n\n\nRefer to the \nREADME\n.\n\n\n\n\nTika Image Detection Component\n\n\n\n\n\nThis new component extracts images embedded in document formats (.pdf, .ppt, .doc) and stores them on disk in a\n specified directory.\n\n\nRefer to the \nREADME\n.\n\n\n\n\nTrack-Level Properties and Confidence\n\n\n\n\n\nRefer to the addition of track-level properties and confidence in the \nJSON Output Object\n\n section.\n\n\nComponents have been updated to return meaningful track-level properties. Caffe and Darknet include \nCLASSIFICATION\n,\n OALPR includes the exemplar \nTEXT\n, and Sphinx includes the \nTRANSCRIPTION\n.\n\n\nThe Workflow Manager will now populate the track-level confidence. It is the same as the exemplar confidence, which is\n the max of all of the track detections.\n\n\n\n\nCustom NGINX HTTP Object Storage\n\n\n\n\n\nAdded \nhttp.object.storage.*\n system properties for configuring an optional custom NGINX object storage server on\n which to store generated detection artifacts, JSON output objects, and markup files.\n\n\nWhen a file cannot be uploaded to the server, the Workflow Manager will fall back to storing it in \n$MPF_HOME/share\n,\n which is the default behavior when an object storage server is not specified.\n\n\nIf and when a failure occurs, the JSON output object will contain a descriptive message in the \njobWarnings\n field,\n and, if appropriate, the \nmarkupResult.message\n field. If the job completes without other issues, the final status\n will be \nCOMPLETE_WITH_WARNINGS\n.\n\n\nThe NGINX storage server runs custom server-side code which we can make available upon request. In the future, we plan\n to support more common storage server solutions, such as Amazon S3.\n\n\n\n\n\n\nActiveMQ\n\n\n\n\n\nThe \nMPF_OUTPUT\n queue is no longer supported and has been removed. Job producers can specify a callback URL when\n creating a job so that they are alerted when the job is complete. Users observed heap space issues with ActiveMQ after\n running thousands of jobs without consuming messages from the \nMPF_OUTPUT\n queue.\n\n\nThe Workflow Manager will now silently discard duplicate sub-job request messages in the ActiveMQ Dead Letter Queue (\n DLQ). This fixes a bug where the Workflow Manager would prematurely terminate jobs corresponding to the duplicate\n messages. It's assumed that ActiveMQ will only place a duplicate message in the DLQ if the original message, or\n another duplicate, can be delivered.\n\n\n\n\nNode Auto-Configuration\n\n\n\n\n\nAdded the \nnode.auto.config.enabled\n, \nnode.auto.unconfig.enabled\n, and \nnode.auto.config.num.services.per.component\n\n system properties for automatically managing the configuration of services when nodes join and leave the OpenMPF\n cluster.\n\n\nDocker will assign a a hostname with a randomly-generated id to containers in a swarm deployment. 
The above properties\n allow the Workflow Manager to automatically discover and configure services on child Node Manager components, which is\n convenient since the hostname of those containers cannot be known in advance, and new containers with new hostnames\n are created when the swarm is restarted.\n\n\n\n\nJob Status Web UI\n\n\n\n\n\nAdded the \nweb.broadcast.job.status.enabled\n and \nweb.job.polling.interval\n system properties that can be used to\n configure if the Workflow Manager automatically broadcasts updates to the Job Status web UI. By default, the\n broadcasts are enabled.\n\n\nIn a production environment that processes hundreds of jobs or more at the same time, this behavior can result in\n overloading the web UI, causing it to slow down and freeze up. To prevent this, set \nweb.broadcast.job.status.enabled\n\n to \nfalse\n. If \nweb.job.polling.interval\n is set to a non-zero value, the web UI will poll for updates at that\n interval (specified in milliseconds).\n\n\nTo disable broadcasts and polling, set \nweb.broadcast.job.status.enabled\n to \nfalse\n and \nweb.job.polling.interval\n to\n a zero or negative value. Users will then need to manually refresh the Job Status web page using their web browser.\n\n\n\n\nOther Improvements\n\n\n\n\n\nNow using variable-length text fields in the mySQL database for string data that may exceed 255 characters.\n\n\nUpdated the MPFImageReader tool to use OpenCV video capture behind the scenes to support reading data from HTTP URLs.\n\n\nPython components can now include pre-built wheel files in the plugin package.\n\n\nWe now use a \nJenkinsfile\n Groovy script for our\n Jenkins build process. This allows us to use revision control for our continuous integration process and share that\n process with the open source community.\n\n\nAdded \nremote.media.download.retries\n and \nremote.media.download.sleep\n system properties that can be used to\n configure how the Workflow Manager will attempt to retry downloading remote media if it encounters a problem.\n\n\nArtifact extraction now uses MPFVideoCapture, which employs various fallback strategies for extracting frames in cases\n where a video is not well-formed or corrupted. For components that use MPFVideoCapture, this enables better\n consistency between the frames they process and the artifacts that are later extracted.\n\n\n\n\nBug Fixes\n\n\n\n\n\nJobs now properly end in \nERROR\n if an invalid media URL is provided or there is a problem accessing remote media.\n\n\nJobs now end in \nCOMPLETE_WITH_ERRORS\n when a detection splitter error occurs due to missing system properties.\n\n\nComponents can now include their own version of the Google Protobuf library. It will not conflict with the version\n used by the rest of OpenMPF.\n\n\nThe Java component executor now sets the proper job id in the job name instead of using the ActiveMQ message request\n id.\n\n\nThe Java component executor now sets the run directory using \nsetRunDirectory()\n.\n\n\nActions can now be properly added using an \"extras\" component. 
An extras component only includes a \ndescriptor.json\n\n file and declares Actions, Tasks, and Pipelines using other component algorithms.\n\n\nRefer to the items listed in the \nActiveMQ\n section.\n\n\nRefer to the addition of track-level properties and confidence in the \nJSON Output Object\n\n section.\n\n\n\n\nKnown Issues\n\n\n\n\n\n[\n#745\n] In environments where thousands of jobs are processed, users\n have observed that, on occasion, pending sub-job messages in ActiveMQ queues are not processed until a new job is\n created. The reason is currently unknown.\n\n\n[\n#544\n] Image artifacts retain some permissions from source files\n available on the local host. This can result in some of the image artifacts having executable permissions.\n\n\n[\n#604\n] The Sphinx component cannot be unregistered\n because \n$MPF_HOME/plugins/SphinxSpeechDetection/lib\n is owned by root on a deployment machine.\n\n\n[\n#623\n] The Nodes UI does not work correctly\n when \n[POST] /rest/nodes/config\n is used at the same time. This is because the UI's state is not automatically updated\n to reflect changes made through the REST endpoint.\n\n\n[\n#783\n] The Tesseract OCR Text Detection Component has\n a \nknown issue\n because it uses Tesseract 3. If a combination\n of languages is specified using \nTESSERACT_LANGUAGE\n, and one of the languages is Arabic, then Arabic must be\n specified last. For example, for English and Arabic, \neng+ara\n will work, but \nara+eng\n will not.\n\n\n[\n#784\n] Sometimes services do not start on OpenMPF nodes, and those\n services cannot be started through the Nodes web UI. This is not a Docker-specific problem, but it has been observed\n in a Docker swarm deployment when auto-configuration is enabled. The workaround is to restart the Docker swarm\n deployment, or remove the entire node in the Nodes UI and add it again.\n\n\n\n\nOpenMPF 2.1.x\n\n\n2.1.0: June 2018\n\n\n\n\n\nNOTE:\n If building this release on a machine used to build a previous version of OpenMPF, then please run \nsudo pip install --upgrade pip\n to update to at least pip 10.0.1. If not, the OpenMPF build script will fail to properly download .whl files for Python modules.\n\n\n\n\nDocumentation\n\n\n\n\n\nAdded the \nPython Batch Component API\n.\n\n\nAdded the \nNode Guide\n.\n\n\nAdded the \nGPU Support Guide\n.\n\n\nUpdated the \nInstall Guide\n with an \"(Optional) Install the NVIDIA CUDA Toolkit\" section.\n\n\nRenamed Admin Manual to Admin Guide for consistency.\n\n\n\n\nPython Batch Component API\n\n\n\n\n\nDevelopers can now write batch components in Python using the mpf_component_api module.\n\n\nDependencies can be specified in a setup.py file. OpenMPF will automatically download the .whl files using pip at\n build time.\n\n\nWhen deployed, a virtualenv is created for the Python component so that it runs in a sandbox isolated from the rest of\n the system.\n\n\nOpenMPF ImageReader and VideoCapture tools are provided in the mpf_component_util module.\n\n\nExample Python components are provided for reference.\n\n\n\n\nSpare Nodes\n\n\n\n\n\nSpare nodes can join and leave an OpenMPF cluster while the Workflow Manager is running. You can create a spare node\n by cloning an existing OpenMPF child node. Refer to the \nNode Guide\n.\n\n\nNote that changes made using the Component Registration web page only affect core nodes, not spare nodes. 
Core nodes\n are those configured during the OpenMPF installation process.\n\n\nAdded \nmpf list-nodes\n command to list the core nodes and available spare nodes.\n\n\nOpenMPF now uses the JGroups FILE_PING protocol for peer discovery instead of TCPPING. This means that the list of\n OpenMPF nodes no longer needs to be fully specified when the Workflow Manager starts. Instead, the Workflow Manager,\n and Node Manager process on each node, use the files in \n$MPF_HOME/share/nodes\n to determine which nodes are currently\n available.\n\n\nUpdated JGroups from 3.6.4. to 4.0.11.\n\n\nThe environment variables specified in \n/etc/profile.d/mpf.sh\n have been simplified. Of note, \nALL_MPF_NODES\n has been\n replaced by \nCORE_MPF_NODES\n.\n\n\n\n\nDefault Detection System Properties\n\n\n\n\n\nThe detection properties that specify the default values when creating new jobs can now be updated at runtime without\n restarting the Workflow Manager. Changing these properties will only have an effect on new jobs, not jobs that are\n currently running.\n\n\nThese default detection system properties are separated from the general system properties in the Properties web page.\n The latter still require the Workflow Manager to be restarted for changes to take effect.\n\n\nThe Apache Commons Configuration library is now used to read and write properties files. When defining a property\n value using an environment variable in the Properties web page, or \n$MPF_HOME/config/mpf-custom.properties\n, be sure\n to prepend the variable name with \nenv:\n. For example:\n\n\n\n\ndetection.models.dir.path=${env:MPF_HOME}/models/\n\n\n\n\n\nAlternatively, you can define system properties using other system properties:\n\n\n\n\ndetection.models.dir.path=${mpf.share.path}/models/\n\n\n\nAdaptive Frame Interval\n\n\n\n\n\nThe \nFRAME_RATE_CAP\n property can be used to set a threshold on the maximum number of frames to process within one\n second of the native video time. This property takes precedence over the user-provided / pipeline-provided value\n for \nFRAME_INTERVAL\n. When the \nFRAME_RATE_CAP\n property is specified, an internal frame interval value is calculated\n as follows:\n\n\n\n\ncalcFrameInterval = max(1, floor(mediaNativeFPS / frameRateCapProp));\n\n\n\n\n\nFRAME_RATE_CAP\n may be disabled by setting it <= 0. \nFRAME_INTERVAL\n can be disabled in the same way.\n\n\nIf \nFRAME_RATE_CAP\n is disabled, then \nFRAME_INTERVAL\n will be used instead.\n\n\nIf both \nFRAME_RATE_CAP\n and \nFRAME_INTERVAL\n are disabled, then a value of 1 will be used for \nFRAME_INTERVAL\n.\n\n\n\n\nDarknet Component\n\n\n\n\n\nThis release includes a component that uses the \nDarknet neural network framework\n to\n perform detection and classification of objects using trained models.\n\n\nPipelines for the Tiny YOLO and YOLOv2 models are provided. Due to its large size, the YOLOv2 weights file must be\n downloaded separately and placed in \n$MPF_HOME/share/models/DarknetDetection\n in order to use the YOLOv2 pipelines.\n Refer to \nDarknetDetection/plugin-files/models/models.ini\n for more information.\n\n\nThis component supports a preprocessor mode and default mode of operation. 
If preprocessor mode is enabled, and\n multiple Darknet detections in a frame share the same classification, then those are merged into a single detection\n where the region corresponds to the superset region that encapsulates all of the original detections, and the\n confidence value is the probability that at least one of the original detections is a true positive. If disabled,\n multiple Darknet detections in a frame are not merged together.\n\n\nDetections are not tracked across frames. One track is generated per detection.\n\n\nThis component supports an optional \nCLASS_WHITELIST_FILE\n property. When provided, only detections with class names\n listed in the file will be generated.\n\n\nThis component can be compiled with GPU support if the NVIDIA CUDA Toolkit is installed on the build machine. Refer to\n the \nGPU Support Guide\n. If the toolkit is not found, then the component will compile with CPU\n support only.\n\n\nTo run on a GPU, set the \nCUDA_DEVICE_ID\n job property, or set the detection.cuda.device.id system property, >= 0.\n\n\nWhen \nCUDA_DEVICE_ID\n >= 0, you can set the \nFALLBACK_TO_CPU_WHEN_GPU_PROBLEM\n job property, or the\n detection.use.cpu.when.gpu.problem system property, to \nTRUE\n if you want to run the component logic on the CPU\n instead of the GPU when a GPU problem is detected.\n\n\n\n\nModels Directory\n\n\n\n\n\nThe\n$MPF_HOME/share/models\n directory is now used by the Darknet and Caffe components to store model files and\n associated files, such as classification names files, weights files, etc. This allows users to more easily add model\n files post-deployment. Instead of copying the model files to \n$MPF_HOME/plugins//models\n directory on\n each node in the OpenMPF cluster, they only need to copy them to the shared directory once.\n\n\nTo add new models to the Darknet and Caffe component, add an entry to the\n respective \n/plugin-files/models/models.ini\n file.\n\n\n\n\nPackaging and Deployment\n\n\n\n\n\nPython components are packaged with their respective dependencies as .whl files. This can be automated by providing a\n setup.py file. An example OpenCV Python component is provided that demonstrates how the component is packaged and\n deployed with the opencv-python module. When deployed, a virtualenv is created for the component with the .whl files\n installed in it.\n\n\nWhen deploying OpenMPF, \nLD_LIBRARY_PATH\n is no longer set system-wide. Refer to Known Issues.\n\n\n\n\nWeb User Interface\n\n\n\n\n\nUpdated the Nodes page to distinguish between core nodes and spare nodes, and to show when a node is online or\n offline.\n\n\nUpdated the Component Registration page to list the core nodes as a reminder that changes will not affect spare nodes.\n\n\nUpdated the Properties page to separate the default detection properties from the general system properties.\n\n\n\n\nBug Fixes\n\n\n\n\n\nCustom Action, task, and pipeline names can now contain \"(\" and \")\" characters again.\n\n\nDetection location elements for audio tracks and generic tracks in a JSON output object will now have a y value of \n0\n\n instead of \n1\n.\n\n\nStreaming health report and summary report timestamps have been corrected to represent hours in the 0-23 range instead\n of 1-24.\n\n\nSingle-frame .gif files are now segmented properly and no longer result in a NullPointerException.\n\n\nLD_LIBRARY_PATH\n is now set at the process level for Tomcat, the Node Manager, and component services, instead of at\n the system level in \n/etc/profile.d/mpf.sh\n. 
Also, deployments no longer create \n/etc/ld.so.conf.d/mpf.conf\n. This\n better isolates OpenMPF from the rest of the system and prevents issues, such as being unable to use SSH, when system\n libraries are not compatible with OpenMPF libraries. The latter situation may occur when running \nyum update\n on the\n system, which can make OpenMPF unusable until a new deployment package with compatible libraries is installed.\n\n\nThe Workflow Manager will no longer generate an \"Error retrieving the SingleJobInfo model\" line in the log if someone\n is viewing the Job Status page when a job submitted through the REST API is in progress.\n\n\n\n\nKnown Issues\n\n\n\n\n\nWhen multiple component services of the same type on the same node log to the same file at the same time, sometimes\n log lines will not be captured in the log file. The logging frameworks (log4j and log4cxx) do not support that usage.\n This problem happens more frequently on systems running many component services at the same time.\n\n\nThe following exception was observed:\n\n\n\n\ncom.google.protobuf.InvalidProtocolBufferException: Message missing required fields: data_uri\n\n\n\n\n\n\nFurther debugging is necessary to determine the reason why that message was missing that field. The situation is not easily reproducible. It may occur when ActiveMQ and / or the system is under heavy load and sends duplicate messages in attempt to ensure message delivery. Some of those messages seem to end up in the dead letter queue (DLQ). For now, we've improved the way we handle messages in the DLQ. If OpenMPF can process a message successfully, the job is marked as \nCOMPLETED_WITH_ERRORS\n, and the message is moved from \nActiveMQ.DLQ\n to \nMPF.DLQ_PROCESSED_MESSAGES\n. If OpenMPF cannot process a message successfully, it is moved from \nActiveMQ.DLQ to MPF.DLQ_INVALID_MESSAGES\n.\n\n\n\n\n\n\nThe \nmpf stop\n command will stop the Workflow Manager, which will in turn send commands to all of the available nodes\n to stop all running component services. If a service is processing a sub-job when the quit command is received, that\n service process will not terminate until that sub-job is completely processed. Thus, the service may put a sub-job\n response on the ActiveMQ response queue after the Workflow Manager has terminated. That will not cause a problem\n because the queues are flushed the next time the Workflow Manager starts; however, there will be a problem if the\n service finishes processing the sub-job after the Workflow Manager is restarted. At that time, the Workflow Manager\n will have no knowledge of the old job and will in turn generate warnings in the log about how the job id is \"not known\n to the system\" and/or \"not found as a batch or a streaming job\". These can be safely ignored. Often, if these messages\n appear in the log, then C++ services were running after stopping the Workflow Manager. To address this, you may wish\n to run \nsudo killall amq_detection_component\n after running \nmpf stop\n.\n\n\n\n\nOpenMPF 2.0.x\n\n\n2.0.0: February 2018\n\n\n\n\n\nNOTE:\n Components built for previous releases of OpenMPF are not compatible with OpenMPF 2.0.0 due to Batch Component API changes to support generic detections, and changes made to the format of the \ndescriptor.json\n file to support stream processing.\n\n\nNOTE:\n This release contains basic support for processing video streams. Currently, the only way to make use of that functionality is through the REST API. 
Streaming jobs and services cannot be created or monitored through the web UI. Only the SuBSENSE component has been updated to support streaming. Only single-stage pipelines are supported at this time.\n\n\n\n\nDocumentation\n\n\n\n\n\nUpdated documents to distinguish the batch component APIs from the streaming component API.\n\n\nAdded the \nC++ Streaming Component API\n.\n\n\nUpdated the \nC++ Batch Component API\n to describe support for generic detections.\n\n\nUpdated the \nREST API\n with endpoints for streaming jobs.\n\n\n\n\nSupport for Generic Detections\n\n\n\n\n\nC++ and Java components can now declare support for the \nUNKNOWN\n data type. The respective batch APIs have been\n updated with a function that will enable a component to process an \nMPFGenericJob\n, which represents a piece of media\n that is not a video, image, or audio file.\n\n\nNote that these API changes make OpenMPF R2.0.0 incompatible with components built for previous releases of OpenMPF.\n Specifically, the new component executor will not be able to load the component logic library.\n\n\n\n\nC++ Batch Component API\n\n\n\n\n\nAdded the following function to support generic detections:\n\n\nMPFDetectionError GetDetections(const MPFGenericJob &job, vector &tracks)\n\n\n\n\n\n\n\n\nJava Batch Component API\n\n\n\n\n\nAdded the following method to support generic detections:\n\n\nList getDetections(MPFGenericJob job)\n\n\n\n\n\n\n\n\nStreaming REST API\n\n\n\n\n\nAdded the following REST endpoints for streaming jobs:\n\n\n[GET] /rest/streaming/jobs\n: Returns a list of streaming job ids.\n\n\n[POST] /rest/streaming/jobs\n: Creates and submits a streaming job. Users can register for health report and\n summary report callbacks.\n\n\n[GET] /rest/streaming/jobs/{id}\n: Gets information about a streaming job.\n\n\n[POST] /rest/streaming/jobs/{id}/cancel\n: Cancels a streaming job.\n\n\n\n\n\n\n\n\nWorkflow Manager\n\n\n\n\n\nUpdated to support generic detections.\n\n\nUpdated Redis to store information about streaming jobs.\n\n\nAdded controllers for streaming job REST endpoints.\n\n\nAdded ability to generate health reports and segment summary reports for streaming jobs.\n\n\nImproved code flow between the Workflow Manager and master Node Manager to support streaming jobs.\n\n\nAdded ActiveMQ queues to enable the C++ Streaming Component Executor to send reports and job status to the Workflow\n Manager.\n\n\n\n\nNode Manager\n\n\n\n\n\nUpdated the master Node Manager and child Node Managers to spawn component services on demand to handle streaming\n jobs, cancel those jobs, and to monitor the status of those processes.\n\n\nUsing .ini files to represent streaming job properties and enable better communication between a child Node Manager\n and C++ Streaming Component Executor.\n\n\n\n\nC++ Streaming Component API\n\n\n\n\n\nDeveloped the C++ Streaming Component API with the following functions:\n\n\nMPFStreamingDetectionComponent(const MPFStreamingVideoJob &job)\n: Constructor that takes a streaming video job.\n\n\nstring GetDetectionType()\n: Returns the type of detection (i.e. 
\"FACE\").\n\n\nvoid BeginSegment(const VideoSegmentInfo &segment_info)\n: Indicates the beginning of a new video segment.\n\n\nbool ProcessFrame(const cv::Mat &frame, int frame_number)\n: Processes a single frame for the current video\n segment.\n\n\nvector EndSegment()\n: Indicates the end of the current video segment.\n\n\n\n\n\n\nUpdated the C++ Hello World component to support streaming jobs.\n\n\n\n\nC++ Streaming Component Executor\n\n\n\n\n\nDeveloped the C++ Streaming Component Executor to load a streaming component logic library, read frames from a video\n stream, and exercise the component logic through the C++ Streaming Component API.\n\n\nWhen the C++ Streaming Component Executor cannot read a frame from the stream, it will sleep for at least 1\n millisecond, doubling the amount of sleep time per attempt until it reaches the \nstallTimeout\n value specified when\n the job was created. While stalled, the job status will be \nSTALLED\n. After the timeout is exceeded, the job will\n be \nTERMINATED\n.\n\n\nThe C++ Streaming Component Executor supports \nFRAME_INTERVAL\n, as well as rotation, horizontal flipping, and\n cropping (region of interest) properties. Does not support \nUSE_KEY_FRAMES\n.\n\n\n\n\nInteroperability Package\n\n\n\n\n\nAdded the following Java classes to the interoperability package to simplify third party integration:\n\n\nJsonHealthReportCollection\n: Represents the JSON content of a health report callback. Contains one or\n more \nJsonHealthReport\n objects.\n\n\nJsonSegmentSummaryReport\n: Represents the JSON content of a summary report callback. Content is similar to the\n JSON output object used for batch processing.\n\n\n\n\n\n\n\n\nSuBSENSE Component\n\n\n\n\n\nThe SuBSENSE component now supports both batch processing and stream processing.\n\n\nEach video segment will be processed independently of the rest. In other words, tracks will be generated on a\n segment-by-segment basis and tracks will not carry over between segments.\n\n\nNote that the last frame in the previous segment will be used to determine if there is motion in the first frame of\n the next segment.\n\n\n\n\nPackaging and Deployment\n\n\n\n\n\nUpdated \ndescriptor.json\n fields to allow components to support batch and/or streaming jobs. Components that use the\n old \ndescriptor.json\n file format cannot be registered through the web UI.\n\n\nBatch component logic and streaming component logic are compiled into separate libraries.\n\n\nThe mySQL \nstreaming_job_request\n table has been updated with the following fields, which are used to populate the\n JSON health reports:\n\n\nstatus_detail\n: (Optional) A user-friendly description of the current job status.\n\n\nactivity_frame_id\n: The frame id associated with the last job activity. Activity is defined as the start of a new\n track for the current segment.\n\n\nactivity_timestamp\n: The timestamp associated with the last job activity.\n\n\n\n\n\n\n\n\nWeb User Interface\n\n\n\n\n\nAdded column names to the table that appears when the user clicks in the Media button associated with a job on the Job\n Status page. Now descriptive comments are provided when table cells are empty.\n\n\n\n\nBug Fixes\n\n\n\n\n\nUpgraded Tika to 1.17 to resolve an issue with improper indentation in a Python file (rotation.py) that resulted in\n generating at least one error message per image processed. 
When processing a large number of images, this would\n generate many error messages, causing the Automatic Bug Reporting Tool daemon (abrtd) process to run at 100% CPU. Once\n in that state, that process would stay there, essentially wasting one CPU core. This resulted in some of the Jenkins\n virtual machines we used for testing becoming unresponsive.\n\n\n\n\nKnown Issues\n\n\n\n\n\n\n\nOpenCV 3.3.0 \ncv::imread()\n does not properly decode some TIFF images that have EXIF orientation metadata. It can\n handle images that are flipped horizontally, but not vertically. It also has issues with rotated images. Since most\n components rely on that function to read image data, those components may silently fail to generate detections for\n those kinds of images.\n\n\n\n\n\n\nUsing single quotes, apostrophes, or double quotes in the name of an algorithm, action, task, or pipeline configured\n on an existing OpenMPF system will result in a failure to perform an OpenMPF upgrade on that system. Specifically, the\n step where pre-existing custom actions, tasks, and pipelines are carried over to the upgraded version of OpenMPF will\n fail. Please do not use those special characters while naming those elements. If this has been done already, then\n those elements should be manually renamed in the XML files prior to an upgrade attempt.\n\n\n\n\n\n\nOpenMPF uses OpenCV, which uses FFmpeg, to connect to video streams. If a proxy and/or firewall prevents the network\n connection from succeeding, then OpenCV, or the underlying FFmpeg library, will segfault. This causes the C++\n Streaming Component Executor process to fail. In turn, the job status will be set to \nERROR\n with a status detail\n message of \"Unexpected error. See logs for details\". In this case, the logs will not contain any useful information.\n You can identify a segfault by the following line in the node-manager log:\n\n\n\n\n\n\n2018-02-15 16:01:21,814 INFO [pool-3-thread-4] o.m.m.nms.streaming.StreamingProcess - Process: Component exited with exit code 139\n\n\n\n\n\nTo determine if FFmpeg can connect to the stream or not, run \nffmpeg -i \n in a terminal window. Here's an example when it's successful:\n\n\n\n\n[mpf@localhost bin]$ ffmpeg -i rtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov\nffmpeg version n3.3.3-1-ge51e07c Copyright (c) 2000-2017 the FFmpeg developers\n built with gcc 4.8.5 (GCC) 20150623 (Red Hat 4.8.5-4)\n configuration: --prefix=/apps/install --extra-cflags=-I/apps/install/include --extra-ldflags=-L/apps/install/lib --bindir=/apps/install/bin --enable-gpl --enable-nonfree --enable-libtheora --enable-libfreetype --enable-libmp3lame --enable-libvorbis --enable-libx264 --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-version3 --enable-shared --disable-libsoxr --enable-avresample\n libavutil 55. 58.100 / 55. 58.100\n libavcodec 57. 89.100 / 57. 89.100\n libavformat 57. 71.100 / 57. 71.100\n libavdevice 57. 6.100 / 57. 6.100\n libavfilter 6. 82.100 / 6. 82.100\n libavresample 3. 5. 0 / 3. 5. 0\n libswscale 4. 6.100 / 4. 6.100\n libswresample 2. 7.100 / 2. 7.100\n libpostproc 54. 5.100 / 54. 
5.100\n[rtsp @ 0x1924240] UDP timeout, retrying with TCP\nInput #0, rtsp, from 'rtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov':\n Metadata:\n title : BigBuckBunny_115k.mov\n Duration: 00:09:56.48, start: 0.000000, bitrate: N/A\n Stream #0:0: Audio: aac (LC), 12000 Hz, stereo, fltp\n Stream #0:1: Video: h264 (Constrained Baseline), yuv420p(progressive), 240x160, 24 fps, 24 tbr, 90k tbn, 48 tbc\nAt least one output file must be specified\n\n\n\n\n\nHere's an example when it's not successful, so there may be network issues:\n\n\n\n\n[mpf@localhost bin]$ ffmpeg -i rtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov\nffmpeg version n3.3.3-1-ge51e07c Copyright (c) 2000-2017 the FFmpeg developers\n built with gcc 4.8.5 (GCC) 20150623 (Red Hat 4.8.5-4)\n configuration: --prefix=/apps/install --extra-cflags=-I/apps/install/include --extra-ldflags=-L/apps/install/lib --bindir=/apps/install/bin --enable-gpl --enable-nonfree --enable-libtheora --enable-libfreetype --enable-libmp3lame --enable-libvorbis --enable-libx264 --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-version3 --enable-shared --disable-libsoxr --enable-avresample\n libavutil 55. 58.100 / 55. 58.100\n libavcodec 57. 89.100 / 57. 89.100\n libavformat 57. 71.100 / 57. 71.100\n libavdevice 57. 6.100 / 57. 6.100\n libavfilter 6. 82.100 / 6. 82.100\n libavresample 3. 5. 0 / 3. 5. 0\n libswscale 4. 6.100 / 4. 6.100\n libswresample 2. 7.100 / 2. 7.100\n libpostproc 54. 5.100 / 54. 5.100\n[tcp @ 0x171c300] Connection to tcp://184.72.239.149:554?timeout=0 failed: Invalid argument\nrtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov: Invalid argument\n\n\n\n\n\nTika 1.17 does not come pre-packaged with support for some embedded image formats in PDF files, possibly to avoid\n patent issues. OpenMPF does not handle embedded images in PDFs, so that's not a problem. Tika will print out the\n following warnings, which can be safely ignored:\n\n\n\n\nJan 22, 2018 11:02:15 AM org.apache.tika.config.InitializableProblemHandler$3 handleInitializableProblem\nWARNING: JBIG2ImageReader not loaded. jbig2 files will be ignored\nSee https://pdfbox.apache.org/2.0/dependencies.html#jai-image-io\nfor optional dependencies.\nTIFFImageWriter not loaded. tiff files will not be processed\nSee https://pdfbox.apache.org/2.0/dependencies.html#jai-image-io\nfor optional dependencies.\nJ2KImageReader not loaded. JPEG2000 files will not be processed.\nSee https://pdfbox.apache.org/2.0/dependencies.html#jai-image-io\nfor optional dependencies.\n\n\n\n\nOpenMPF 1.0.x\n\n\n1.0.0: October 2017\n\n\n\nDocumentation\n\n\n\n\n\nUpdated the \nBuild Guide\n with instructions for installing the latest JDK,\n latest JRE, FFmpeg 3.3.3, new codecs, and OpenCV 3.3.\n\n\nAdded an \nAcknowledgements\n section that provides information on third party dependencies\n leveraged by the OpenMPF.\n\n\nAdded a \nFeed Forward Guide\n that explains feed forward processing and how to use it.\n\n\nAdded missing requirements checklist content to\n the \nInstall Guide\n.\n\n\nUpdated the README at the top level of each of the primary repositories to help with user navigation and provide\n general information.\n\n\n\n\nUpgrade to FFmpeg 3.3.3 and OpenCV 3.3\n\n\n\n\n\nUpdated core framework from FFmpeg 2.6.3 to FFmpeg 3.3.3.\n\n\nAdded the following FFmpeg codecs: x256, VP9, AAC, Opus, Speex.\n\n\nUpdated core framework and components from OpenCV 3.2 to OpenCV 3.3. 
No longer building with opencv_contrib.\n\n\n\n\nFeed Forward Behavior\n\n\n\n\n\nUpdated the workflow manager (WFM) and all video components to optionally perform feed forward processing for batch\n jobs. This allows tracks to be passed forward from one pipeline stage to the next. Components in the next stage will\n only process the frames associated with the detections in those tracks. This differs from the default segmenting\n behavior, which does not preserve detection regions or track information between stages.\n\n\nTo enable this behavior, the optional \nFEED_FORWARD_TYPE\n property must be set to \nFRAME\n, \nSUPERSET_REGION\n,\n or \nREGION\n. If set to \nFRAME\n then the components in the next stage will process the whole frame region associated\n with each detection in the track passed forward. If set to \nSUPERSET_REGION\n then the components in the next stage\n will determine the bounding box that encapsulates all of the detection regions in the track, and only process the\n pixel data within that superset region. If set to \nREGION\n then the components in the next stage will process the\n region associated with each detection in the track passed forward, which may vary in size and position from frame to\n frame.\n\n\nThe optional \nFEED_FORWARD_TOP_CONFIDENCE_COUNT\n property can be set to a number to limit the number of detections\n passed forward in a track. For example, if set to \"5\", then only the top 5 detections in the track will be passed\n forward and processed by the next stage. The top detections are defined as those with the highest confidence values,\n or if the confidence values are the same, those with the lowest frame index.\n\n\nNote that setting the feed forward properties has no effect on the first pipeline stage because there is no prior\n stage that can pass tracks to it.\n\n\n\n\nCaffe Component\n\n\n\n\n\nUpdated the Caffe component to process images in the BGR color space instead of the RGB color space. This addresses a\n bug found in OpenCV. Refer to the Bug Fixes section below.\n\n\nAdded support for processing videos.\n\n\nAdded support for an optional \nACTIVATION_LAYER_LIST\n property. For each network layer specified in the list,\n the \ndetectionProperties\n map in the JSON output object will contain one entry. The value is an encoded string of the\n JSON representation of an OpenCV matrix of the activation values for that layer. The activation values are obtained\n after the Caffe network has processed the frame data.\n\n\nAdded support for an optional \nSPECTRAL_HASH_FILE_LIST\n property. For each JSON file specified in the list,\n the \ndetectionProperties\n map in the JSON output object will contain one entry. The value is a string of 0's and 1's\n representing the spectral hash calculated using the information in the spectral hash JSON file. The spectral hash is\n calculated using activation values after the Caffe network has processed the frame data.\n\n\nAdded a pipeline to showcase the above two features for the GoogLeNet Caffe model.\n\n\nRemoved the \nTRANSPOSE\n property from the Caffe component since it was not necessary.\n\n\nAdded red, green, and blue mean subtraction values to the GoogLeNet pipeline.\n\n\n\n\nUse Key Frames\n\n\n\n\n\nAdded support for an optional \nUSE_KEY_FRAMES\n property to each video component. When true the component will only\n look at key frames (I-frames) from the input video. Can be used in conjunction with \nFRAME_INTERVAL\n. 
For example,\n when \nUSE_KEY_FRAMES\n is true, and \nFRAME_INTERVAL\n is set to \"2\", then every other key frame will be processed.\n\n\n\n\nMPFVideoCapture and MPFImageReader Tools\n\n\n\n\n\nUpdated the MPFVideoCapture and MPFImageReader tools to handle feed forward properties.\n\n\nUpdated the MPFVideoCapture tool to handle \nFRAME_INTERVAL\n and \nUSE_KEY_FRAMES\n properties.\n\n\nUpdated all existing components to leverage these tools as much as possible.\n\n\nWe encourage component developers to use these tools to automatically take care of common frame grabbing and frame\n manipulation behaviors, and not to reinvent the wheel.\n\n\n\n\nDead Letter Queue\n\n\n\n\n\nIf for some reason a sub-job request that should have gone to a component ends up on the ActiveMQ Dead Letter Queue (\n DLQ), then the WFM will now process that failed request so that the job can complete. The ActiveMQ management page\n will now show that \nActiveMQ.DLQ\n has 1 consumer. It will also show unconsumed messages\n in \nMPF.PROCESSED_DLQ_MESSAGES\n. Those are left for auditing purposes. The \"Message Detail\" for these shows the string\n representation of the original job request protobuf message.\n\n\n\n\nUpgrade Path\n\n\n\n\n\nRemoved the Release 0.8 to Release 0.9 upgrade path in the deployment scripts.\n\n\nAdded support for a Release 0.9 to Release 1.0.0 upgrade path, and a Release 0.10.0 to Release 1.0.0 upgrade path.\n\n\n\n\nMarkup\n\n\n\n\n\nBounding boxes are now drawn along the interpolated path between detection regions whenever there are one or more\n frames in a track which do not have detections associated with them.\n\n\nFor each track, the color of the bounding box is now a randomly selected hue in the HSV color space. The colors are\n evenly distributed using the golden ratio.\n\n\n\n\nBug Fixes\n\n\n\n\n\nFixed a \nbug in OpenCV\n where the Caffe example code was processing\n images in the RGB color space instead of the BGR color space. Updated the OpenMPF Caffe component accordingly.\n\n\nFixed a bug in the OpenCV person detection component that caused bounding boxes to be too large for detections near\n the edge of a frame.\n\n\nResubmitting jobs now properly carries over configured job properties.\n\n\nFixed a bug in the build order of the OpenMPF project so that test modules that the WFM depends on are built before\n the WFM itself.\n\n\nThe Markup component draws bounding boxes between detections when a \nFRAME_INTERVAL\n is specified. This is so that the\n bounding box in the marked-up video appears in every frame. Fixed a bug where the bounding boxes drawn on\n non-detection frames appeared to stand still rather than move along the interpolated path between detection regions.\n\n\nFixed a bug on the OALPR license plate detection component where it was not properly handling the \nSEARCH_REGION_*\n\n properties.\n\n\nSupport for the \nMIN_GAP_BETWEEN_SEGMENTS\n property was not implemented properly. When the gap between two segments is\n less than this property value then the segments should be merged; otherwise, the segments should remain separate. In\n some cases, the exact opposite was happening. This bug has been fixed.\n\n\n\n\nKnown Issues\n\n\n\n\n\nBecause of the number of additional ActiveMQ messages involved, enabling feed forward for low resolution video may\n take longer than the non-feed-forward behavior.\n\n\n\n\nOpenMPF 0.x.x\n\n\n0.10.0: July 2017\n\n\n\n\n\nWARNING:\n There is no longer a \nDEFAULT CAFFE ACTION\n, \nDEFAULT CAFFE TASK\n, or \nDEFAULT CAFFE PIPELINE\n. 
There is now a \nCAFFE GOOGLENET DETECTION PIPELINE\n and \nCAFFE YAHOO NSFW DETECTION PIPELINE\n, which each have a respective action and task.\n\n\nNOTE:\n MPFImageReader has been re-enabled in this version of OpenMPF since we upgraded to OpenCV 3.2, which addressed the known issues with \nimread()\n, auto-orientation, and jpeg files in OpenCV 3.1.\n\n\n\n\nDocumentation\n\n\n\n\n\nAdded a \nContributor Guide\n that provides guidelines for contributing to the OpenMPF\n codebase.\n\n\nUpdated the \nJava Batch Component API\n with links to the example Java components.\n\n\nUpdated the \nBuild Guide\n with instructions for OpenCV 3.2.\n\n\n\n\nUpgrade to OpenCV 3.2\n\n\n\n\n\nUpdated core framework and components from OpenCV 3.1 to OpenCV 3.2.\n\n\n\n\nSupport for Animated gifs\n\n\n\n\n\nAll gifs are now treated as videos. Each gif will be handled as an MPFVideoJob.\n\n\nUnanimated gifs are treated as 1-frame videos.\n\n\nThe WFM Media Inspector now populates the \nmedia_properties\n map with a \nFRAME_COUNT\n entry (in addition to\n the \nDURATION\n and \nFPS\n entries).\n\n\n\n\nCaffe Component\n\n\n\n\n\nAdded support for the Yahoo Not Suitable for Work (NSFW) Caffe model for explicit material detection.\n\n\nUpdated the Caffe component to support the OpenCV 3.2 Deep Neural Network (DNN) module.\n\n\n\n\nFuture Support for Streaming Video\n\n\n\n\n\nNOTE:\n At this time, OpenMPF does not support streaming video. This section details what's being / has been done so far to prepare for that feature.\n\n\n\n\n\n\nThe codebase is being updated / refactored to support both the current \"batch\" job functionality and new \"streaming\"\n job functionality.\n\n\nbatch job: complete video files are written to disk before they are processed\n\n\nstreaming job: video frames are read from a streaming endpoint (such as RTSP) and processed in near real time\n\n\n\n\n\n\nThe REST API is being updated with endpoints for streaming jobs:\n\n\n[POST] /rest/streaming/jobs\n: Creates and submits a streaming job\n\n\n[POST] /rest/streaming/jobs/{id}/cancel\n: Cancels a streaming job\n\n\n[GET] /rest/streaming/jobs/{id}\n: Gets information about a streaming job\n\n\n\n\n\n\nThe Redis and mySQL databases are being updated to support streaming video jobs.\n\n\nA batch job will never have the same id as a streaming job. The integer ids will always be unique.\n\n\n\n\n\n\n\n\nBug Fixes\n\n\n\n\n\nThe MOG and SuBSENSE component services could segfault and terminate if the \nUSE_MOTION_TRACKING\n property was set to\n \"1\" and a detection was found close to the edge of the frame. Specifically, this would only happen if the video had a\n width and/or height dimension that was not an exact power of two.\n\n\nThe reason is that the code downsamples each frame by a power of two and rounds the value of the width and\n height up to the nearest integer. Later on, when upscaling detection rectangles back to a size that's relative to\n the original image, the resized rectangle sometimes extended beyond the bounds of the original frame.\n\n\n\n\n\n\n\n\nKnown Issues\n\n\n\n\n\nIf a job is submitted through the REST API, and a user is logged into the web UI and looking at the job status page,\n the WFM may generate \"Error retrieving the SingleJobInfo model for the job with id\" messages.\n\n\nThis is because the job status is only added to the HTTP session object if the job is submitted through the web\n UI. 
When the UI queries the job status it inspects this object.\n\n\nThis message does not appear if job status is obtained using the \n[GET] /rest/jobs/{id}\n endpoint.\n\n\n\n\n\n\nThe \n[GET] /rest/jobs/stats\n endpoint aggregates information about all of the jobs ever run on the system. If\n thousands of jobs have been run, this call could take minutes to complete. The code should be improved to execute a\n direct mySQL query.\n\n\n\n\n0.9.0: April 2017\n\n\n\n\n\nWARNING:\n MPFImageReader has been disabled in this version of OpenMPF. Component developers should use MPFVideoCapture instead. This affects components developed against previous versions of OpenMPF and components developed against this version of OpenMPF. Please refer to the Known Issues section for more information.\n\n\nWARNING:\n The OALPR Text Detection Component has been renamed to OALPR \nLicense Plate\n Text Detection Component. This affects the name of the component package and the name of the actions, tasks, and pipelines. When upgrading from R0.8 to R0.9, if the old OALPR Text Detection Component is installed in R0.8 then you will be prompted to install it again at the end of the upgrade path script. We recommend declining this prompt because the old component will conflict with the new component.\n\n\nWARNING:\n Action, task, and pipeline names that started with \nMOTION DETECTION PREPROCESSOR\n have been renamed \nMOG MOTION DETECTION PREPROCESSOR\n. Similarly, \nWITH MOTION PREPROCESSOR\n has changed to \nWITH MOG MOTION PREPROCESSOR\n.\n\n\n\n\nDocumentation\n\n\n\n\n\nUpdated the \nREST API\n to reflect job properties, algorithm-specific properties, and\n media-specific properties.\n\n\nStreamlined the \nC++ Batch Component API\n document for clarity and simplicity.\n\n\nCompleted the \nJava Batch Component API\n document.\n\n\nUpdated the \nAdmin Guide\n and \nUser Guide\n to reflect web UI changes.\n\n\nUpdated the \nBuild Guide\n with instructions for GitHub repositories.\n\n\n\n\nWorkflow Manager\n\n\n\n\n\nAdded support for job properties, which will override pre-defined pipeline properties.\n\n\nAdded support for algorithm-specific properties, which will apply to a single stage of the pipeline and will override\n job properties and pre-defined pipeline properties.\n\n\nAdded support for media-specific properties, which will apply to a single piece of media and will override job\n properties, algorithm-specific properties, and pre-defined pipeline properties.\n\n\nComponents can now be automatically registered and installed when the web application starts in Tomcat.\n\n\n\n\nWeb User Interface\n\n\n\n\n\nThe \"Close All\" button on pop-up notifications now dismisses all notifications from the queue, not just the visible\n ones.\n\n\nJob completion notifications now only appear for jobs created during the current login session instead of all jobs.\n\n\nThe \nROTATION\n, \nHORIZONTAL_FLIP\n, and \nSEARCH_REGION_*\n properties can be set using the web interface when creating a\n job. 
Once files are selected for a job, these properties can be set individually or by groups of files.\n\n\nThe Node and Process Status page has been merged into the Node Configuration page for simplicity and ease of use.\n\n\nThe Media Markup results page has been merged into the Job Status page for simplicity and ease of use.\n\n\nThe File Manager UI has been improved to handle large numbers of files and symbolic links.\n\n\nThe side navigation menu has been replaced by a top navigation bar.\n\n\n\n\nREST API\n\n\n\n\n\nAdded an optional jobProperties object to the \n/rest/jobs/\n request which contains String key-value pairs which\n override the pipeline's pre-configured job properties.\n\n\nAdded an optional algorithmProperties object to the \n/rest/jobs/\n request which can be used to configure properties\n for specific algorithms in the pipeline. These properties override the pipeline's pre-configured job properties. They\n also override the values in the jobProperties object.\n\n\nUpdated the \n/rest/jobs/\n request to add more detail to media, replacing a list of mediaUri Strings with a list of\n media objects, each of which contains a mediaUri and an optional mediaProperties map. The mediaProperties map can be\n used to configure properties for the specific piece of media. These properties override the pipeline's pre-configured\n job properties, values in the jobProperties object, and values in the algorithmProperties object.\n\n\nStreamlined the actions, tasks, and pipelines endpoints that are used by the web UI.\n\n\n\n\nFlipping, Rotation, and Region of Interest\n\n\n\n\n\nThe \nROTATION\n, \nHORIZONTAL_FLIP\n, and \nSEARCH_REGION_*\n properties will no longer appear in the detectionProperties\n map in the JSON detection output object. When applied to an algorithm these properties now appear in the\n pipeline.stages.actions.properties element. When applied to a piece of media these properties will now appear in\n the media.mediaProperties element.\n\n\nThe OpenMPF now supports multiple regions of interest in a single media file. Each region will produce tracks\n separately, and the tracks for each region will be listed in the JSON output as if from a separate media file.\n\n\n\n\nComponent API\n\n\n\n\n\nJava Batch Component API is functionally complete for third-party development, with the exception of Component Adapter\n and frame transformation utility classes.\n\n\nRe-architected the Java Batch Component API to use a more traditional Java method structure of returning track lists\n and throwing exceptions (rather than modifying input track lists and returning statuses), and encapsulating job\n properties into MPFJob objects:\n\n\nList getDetections(MPFVideoJob job) throws MPFComponentDetectionError\n\n\nList getDetections(MPFAudioJob job) throws MPFComponentDetectionError\n\n\nList getDetections(MPFImageJob job) throws MPFComponentDetectionError\n\n\n\n\n\n\nCreated examples for the Java Batch Component API.\n\n\nReorganized the Java and C++ component source code to enable component development without the OpenMPF core, which\n will simplify component development and streamline the code base.\n\n\n\n\nJSON Output Objects\n\n\n\n\n\nThe JSON output object for the job now contains a jobProperties map which contains all properties defined for the job\n in the job request. 
For example, if the job request specifies a \nCONFIDENCE_THRESHOLD\n of 5 then the jobProperties map\n in the output will also list a \nCONFIDENCE_THRESHOLD\n of 5.\n\n\nThe JSON output object for the job now contains an algorithmProperties element which contains all algorithm-specific\n properties defined for the job in the job request. For example, if the job request specifies a \nFRAME_INTERVAL\n of 2\n for FACECV then the algorithmProperties element in the output will contain an entry for \"FACECV\" and that entry will\n list a \nFRAME_INTERVAL\n of 2.\n\n\nEach JSON media output object now contains a mediaProperties map which contains all media-specific properties defined\n by the job request. For example, if the job request specifies a \nROTATION\n of 90 degrees for a single piece of media\n then the mediaProperties map for that piece of media will list a \nROTATION\n of 90.\n\n\nThe content of JSON output objects is now organized by detection type (e.g. MOTION, FACE, PERSON, TEXT, etc.) rather\n than action type.\n\n\n\n\nCaffe Component\n\n\n\n\n\nAdded support for flip, rotation, and cropping to regions of interest.\n\n\nAdded support for returning multiple classifications per detection based on user-defined settings. The classification\n list is in order of decreasing confidence value.\n\n\n\n\nNew Pipelines\n\n\n\n\n\nNew SuBSENSE motion preprocessor pipelines have been added to components that perform detection on video.\n\n\n\n\nPackaging and Deployment\n\n\n\n\n\nActions.xml\n, \nAlgorithms.xml\n, \nnodeManagerConfig.xml\n, \nnodeServicesPalette.json\n, \nPipelines.xml\n, and \nTasks.xml\n\n are no longer stored within the Workflow Manager WAR file. They are now stored under \n$MPF_HOME/data\n. This makes it\n easier to upgrade the Workflow Manager and makes it easier for users to access these files.\n\n\nEach component can now be optionally installed and registered during deployment. Components not registered are set to\n the \nUPLOADED\n state. They can then be removed or registered through the Component Registration page.\n\n\nJava components are now packaged as tar.gz files instead of RPMs, bringing them into alignment with C++ components.\n\n\nOpenMPF R0.9 can be installed over OpenMPF R0.8. The deployment scripts will determine that an upgrade should take\n place.\n\n\nAfter the upgrade, user-defined actions, tasks, and pipelines will have \"CUSTOM\" prepended to their name.\n\n\nThe job_request table in the mySQL database will have a new \"output_object_version\" column. This column will\n have \"1.0\" for jobs created using OpenMPF R0.8 and \"2.0\" for jobs created using OpenMPF R0.9. The JSON output\n object schema has changed between these versions.\n\n\n\n\n\n\nReorganized source code repositories so that component SDKs can be downloaded separately from the OpenMPF core and so\n that components are grouped by license and maturity. Build scripts have been created to streamline and simplify the\n build process across the various repositories.\n\n\n\n\nUpgrade to OpenCV 3.1\n\n\n\n\n\nThe OpenMPF software has been ported to use OpenCV 3.1, including all of the C++ detection components and the markup\n component. For the OpenALPR license plate detection component, the versions of the openalpr, tesseract, and leptonica\n libraries were also upgraded to openalpr-2.3.0, tesseract-3.0.4, and leptonica-1.7.2. 
For the SuBSENSE motion\n component, the version of the SuBSENSE library was upgraded to use the code found at this\n location: \nhttps://bitbucket.org/pierre_luc_st_charles/subsense/src\n.\n\n\n\n\nBug Fixes\n\n\n\n\n\nMOG motion detection always detected motion in frame 0 of a video. Because motion can only be detected between two\n adjacent frames, frame 1 is now the first frame in which motion can be detected.\n\n\nMOG motion detection never detected motion in the first frame of a video segment (other than the first video segment\n because of the frame 0 bug described above). Now, motion is detected using the first frame before the start of a\n segment, rather than the first frame of the segment.\n\n\nThe above bugs were also present in SuBSENSE motion detection and have been fixed.\n\n\nSuBSENSE motion detection generated tracks where the frame numbers were off by one. Corrected the frame index logic.\n\n\nVery large video files caused an out of memory error in the system during Workflow Manager media inspection.\n\n\nA job would fail when processing images with an invalid metadata tag for the camera flash setting.\n\n\nUsers were permitted to select invalid file types using the File Manager UI.\n\n\n\n\nKnown Issues\n\n\n\n\n\nMPFImageReader does not work reliably with the current release version of OpenCV 3.1\n: In OpenCV 3.1, new\n functionality was introduced to interpret EXIF information when reading jpeg files.\n\n\nThere are two issues with this new functionality that impact our ability to use the OpenCV \nimread()\n function with\n MPFImageReader:\n\n\nFirst, because of a bug in the OpenCV code, reading a jpeg file that contains exif information could cause it to\n hang. (See \nhttps://github.com/opencv/opencv/issues/6665\n.)\n\n\nSecond, it is not possible to tell the \nimread()\nfunction to ignore the EXIF data, so the image it returns is\n automatically rotated. (See \nhttps://github.com/opencv/opencv/issues/6348\n.) This results in the MPFImageReader\n applying a second rotation to the image due to the EXIF information.\n\n\n\n\n\n\nTo address these issues, we developed the following workarounds:\n\n\nCreated a version of the MPFVideoCapture that works with an MPFImageJob. The new MPFVideoCapture can pull frames\n from both video files and images. MPFVideoCapture leverages cv::VideoCapture, which does not have the two issues\n described above.\n\n\nDisabled the use of MPFImageReader to prevent new users from trying to develop code leveraging this previous\n functionality.",
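As context for the MPFVideoCapture workaround described above, the following minimal sketch (plain OpenCV in Python, not OpenMPF SDK code, with a hypothetical image path) illustrates the difference the workaround relies on: `cv::VideoCapture` returns the raw pixel data without applying EXIF orientation, whereas `imread()` in OpenCV 3.1 auto-rotates JPEGs based on their EXIF data.

```python
import cv2

# Hypothetical path; any JPEG with an EXIF Orientation tag will demonstrate the difference.
image_path = "example_with_exif_rotation.jpg"

# imread() in OpenCV 3.1 applies the EXIF orientation automatically, which is what
# caused MPFImageReader to rotate the image a second time.
auto_rotated = cv2.imread(image_path)

# VideoCapture treats the file as a one-frame image sequence and returns the raw,
# un-rotated pixel data, which is why the MPFVideoCapture-based workaround avoids the issue.
capture = cv2.VideoCapture(image_path)
ok, raw = capture.read()
if ok and auto_rotated is not None:
    print("imread shape:", auto_rotated.shape, "VideoCapture shape:", raw.shape)
```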
+ "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nOpenMPF 8.0.x\n\n\n8.0.0: December 2023\n\n\n\nDocumentation\n\n\n\n\n\nCreated a new \nOpenID Connect Guide\n.\n\n\nUpdated the \nAdmin Guide\n and \nUser Guide\n to remove\n \n/workflow-manager\n from the Workflow Manager base URL. The Admin Guide includes a section for the new Hawtio web\n console.\n\n\nUpdated the \nREST API\n to use path parameters for pipelines, tasks, actions, and algorithms\n endpoints.\n\n\nUpdated the \nComponent Descriptor Reference\n with \nalgorithm.trackType\n.\n\n\nUpdated the \nC++ Batch Component API\n, \nPython Batch Component\n API\n, and \nJava Batch Component API\n to\n remove the ability to get the detection type since track type is now specified in \ndescriptor.json\n.\n\n\nCreated a new \nTrigger Guide\n.\n\n\nCreated a new \nRoll Up Guide\n.\n\n\n\n\nOpenID-Connect (OIDC) Authentication\n\n\n\n\n\nThe Workflow Manager can now optionally use an OpenID Connect (OIDC) provider to handle authentication for users of\n the web UI and clients of the REST API. The URI for the OIDC provider is specified using the \nOIDC_ISSUER_URI\n\n environment variable.\n\n\nWhen enabled, OIDC is used to authenticate components when they register with the Workflow Manager.\n\n\nWhen \nCALLBACK_USE_OIDC\n is set to \ntrue\n, the Workflow Manager will send a token in job request callbacks.\n\n\nWhen \nTIES_DB_USE_OIDC\n is set to \ntrue\n, the Workflow Manager will send a token when posting to a TiesDb server.\n\n\nWhen OIDC is not enabled, the Workflow Manager uses basic authentication with usernames and passwords, as in previous\n versions of OpenMPF.\n\n\nRefer to the \nOpenID Connect Guide\n for more information on the various OIDC\n environment variables and a Keycloak example.\n\n\n\n\nEmbedded ActiveMQ Broker and Hawtio\n\n\n\n\n\nActiveMQ is now part of the Workflow Manager Spring Boot web application and is no longer run as a separate Docker\n service. This enables ActiveMQ to integrate with Spring Security so it can be protected by the Workflow Manager's OIDC\n support.\n\n\nThe Workflow Manager is the sender or recipient of all ActiveMQ messages, so embedding ActiveMQ in the Workflow\n Manager prevents a network hop on all messages.\n\n\nThe ActiveMQ management page has been replaced by \nHawtio\n, which is more feature rich and can be\n used to monitor the state of the ActiveMQ queues used for communication between the Workflow Manager and the\n components. The Hawtio web console can be accessed by selecting \"Hawtio\" from the \"Configuration\" dropdown menu in the\n top menu bar of the web UI.\n\n\nImportantly, the base URL for the Workflow Manager is now http://localhost:8080 instead of\n http://localhost:8080/workflow-manager. \n/workflow-manager\n is no longer part of the path. This change was made to\n enable Hawtio integration.\n\n\n\n\nREST API Updates\n\n\n\n\n\nThe following changes have been made to the REST endpoints to address a limitation with Swagger (OpenAPI). 
These\n changes enable the REST endpoints to properly show up in the Swagger page, which is accessed by selecting \"REST API\"\n from the \"Configuration\" dropdown menu in the top menu bar of the web UI.\n\n\n\n\n\n\n\n\n\n\nOld REST Endpoint\n\n\nNew REST Endpoint\n\n\n\n\n\n\n\n\n\n\n[GET] /rest/pipelines?name={name}\n\n\n[GET] /rest/pipelines/{name}\n\n\n\n\n\n\n[GET] /rest/tasks?name={name}\n\n\n[GET] /rest/tasks/{name}\n\n\n\n\n\n\n[GET] /rest/actions?name={name}\n\n\n[GET] /rest/actions/{name}\n\n\n\n\n\n\n[GET] /rest/algorithms?name={name}\n\n\n[GET] /rest/algorithms/{name}\n\n\n\n\n\n\n[DELETE] /rest/pipelines?name={name}\n\n\n[DELETE] /rest/pipelines/{name}\n\n\n\n\n\n\n[DELETE] /rest/tasks?name={name}\n\n\n[DELETE] /rest/tasks/{name}\n\n\n\n\n\n\n[DELETE] /rest/actions?name={name}\n\n\n[DELETE] /rest/actions/{name}\n\n\n\n\n\n\n\n\n\n\nIn general, the name is now specified as part of the URL path instead of as a URL parameter.\n\n\n/\n and \n;\n characters are no longer allowed in these names.\n\n\n\n\nPackaging and Deployment\n\n\n\n\n\nEach component's \ndescriptor.json\n now requires an \nalgorithm.trackType\n field. This is used by the Workflow Manager\n to determine the kind of tracks that may be generated by the component (e.g. \nFACE\n, \nTEXT\n, \nCLASS\n, etc.). This is\n now used in place of the component API calls that were used to get the detection type. \n\n\n\n\nComponent API Updates\n\n\n\n\n\nThe following changes were made since the track type is now part of each component's \ndescriptor.json\n:\n\n\nRemoved \nGetDetectionType()\n from the CPP Component API.\n\n\nRemoved \ndetection_type\n from the Python Component API.\n\n\nRemoved \ngetDetectionType()\n from the Java Component API.\n\n\n\n\n\n\n\n\nChanges to JSON Output Object\n\n\n\n\n\nNew JSON output objects use \naction\n instead of \nsource\n in the track type group. Also, \nsource\n is removed from each track.\n\n\nConsider this example of the old JSON output:\n\n\n\n\n\"output\": {\n \"FACE\": [\n {\n \"source\": \"+#MOG MOTION DETECTION (WITH AUTO-ORIENTATION) PREPROCESSOR ACTION#OCV FACE DETECTION (WITH AUTO-ORIENTATION) ACTION\",\n \"algorithm\": \"FACECV\",\n \"tracks\": [\n {\n \"id\": \"4bcba9b95b92a5115b7da1097fcffa962480d0b4424a656772bef12161d775c1\",\n \"startOffsetFrame\": 0,\n \"stopOffsetFrame\": 0,\n \"startOffsetTime\": 0,\n \"stopOffsetTime\": 0,\n \"type\": \"FACE\",\n \"source\": \"+#MOG MOTION DETECTION (WITH AUTO-ORIENTATION) PREPROCESSOR ACTION#OCV FACE DETECTION (WITH AUTO-ORIENTATION) ACTION\",\n \"confidence\": 8.799637,\n ...\n\n\n\n\n\nThe corresponding new JSON output is:\n\n\n\n\n\"output\": {\n \"FACE\": [\n {\n \"action\": \"OCV FACE DETECTION (WITH AUTO-ORIENTATION) ACTION\",\n \"algorithm\": \"FACECV\",\n \"tracks\": [\n {\n \"id\": \"4bcba9b95b92a5115b7da1097fcffa962480d0b4424a656772bef12161d775c1\",\n \"startOffsetFrame\": 0,\n \"stopOffsetFrame\": 0,\n \"startOffsetTime\": 0,\n \"stopOffsetTime\": 0,\n \"type\": \"FACE\",\n \"confidence\": 8.799637,\n ...\n\n\n\nTrigger Support\n\n\n\n\n\nA \nTRIGGER\n property can now be added to any action in a pipeline. It will only be used if \nFEED_FORWARD_TYPE\n is\n provided and set to something other than \nNONE\n. The \nTRIGGER\n property is used to conditionally control if the\n Workflow Manager executes that action. Each feed-forward track that is not executed is passed to the next stage of the\n pipeline. This results in skipping untriggered actions.\n\n\nThe value of \nTRIGGER\n takes the form \n=[;...]\n. 
For example, if the value is\n \nCLASSIFICATION=car\n then the Workflow Manager would only execute the associated action using feed-forward tracks from\n the previous stage in the pipeline if those tracks have the \nCLASSIFICATION\n track property with a value of \ncar\n.\n This could be useful to skip a license plate detection action. To enable the action to trigger on more than just \ncar\n\n tracks you can provide a list of valid values. For example, \nCLASSIFICATION=car;truck;bus\n.\n\n\nThe \nTrigger Guide\n goes into more detail and provides an example of a pipeline with\n multiple speech-to-text stages. \nTRIGGER\n is used to select which speech-to-text algorithm is executed based on the\n detected language in the media.\n\n\n\n\nRoll Up Support\n\n\n\n\n\nThe Workflow Manager can be configured to replace the values of track and detection properties\n after receiving tracks and detections from a component. For example, the \nCLASSIFICATION\n property\n may be set to \"car\", \"bus\", and \"truck\". Those can be rolled up into \"vehicle\".\n\n\nTo use this feature, set the \nROLL_UP_FILE\n property to the path of a JSON file that matches\n the format of this example:\n\n\n\n\n[\n {\n \"propertyToProcess\": \"CLASSIFICATION\",\n \"originalPropertyCopy\": \"ORIGINAL CLASSIFICATION\",\n \"groups\": [\n {\n \"rollUp\": \"vehicle\",\n \"members\": [\n \"truck\",\n \"car\",\n \"bus\"\n ]\n }\n ]\n }\n]\n\n\n\n\n\nRefer to the \nRoll Up Guide\n for an explanation and more details.\n\n\n\n\nChanged All \"whitelist\" References to \"allow list\"\n\n\n\n\n\nIn an effort to be more culturally sensitive, all references to \"whitelist\" have been removed or renamed to \"allow\n list\".\n\n\nThe \nwhitelist.\n prefix has been removed from the entries in the \nmediaType.properties\n file. For example,\n \nwhitelist.image/gif=VIDEO\n is now \nimage/gif=VIDEO\n.\n\n\nThe OcvDnnDetection component \nFEED_FORWARD_WHITELIST_FILE\n property has been renamed to\n \nFEED_FORWARD_ALLOW_LIST_FILE\n.\n\n\nThe OcvYoloDetection component \nCLASS_WHITELIST_FILE\n property has been renamed to \nCLASS_ALLOW_LIST_FILE\n.\n\n\n\n\nArgos Translation Component\n\n\n\n\n\nThis new component utilizes \nArgos Translate\n to translate input\n text from a given source language to English. It can be used in a feed-forward pipeline to process tracks with\n language and/or script identifiers from an upstream stage.\n\n\nRefer to the \nREADME\n for\n details.\n\n\n\n\nWhisper Speech-to-Text and Translation Component\n\n\n\n\n\nThis new component utilizes \nOpenAI Whisper\n to perform language detection,\n speech-to-text transcription, or speech translation.\n\n\nIf multiple languages are spoken in a single piece of media, language detection will detect only one of them.\n\n\nNote that Whisper is not designed to return a transcription in the source language when performing translation, so we\n implemented the component to perform an additional transcribe call when configured to perform translation.\n\n\nRefer to the \nREADME\n\n for details.\n\n\n\n\nContrastive Language\u2013Image Pre-training (CLIP) Component\n\n\n\n\n\nThis new component utilizes \nCLIP\n to classify images using the 80 COCO classes, 1000\n ImageNet classes, or a list of user-provided classes. It can run on a CPU or GPU, and can make calls to an NVIDIA\n Triton inference server.\n\n\nClassification is performed by taking the class names and filling in one or more text prompts. For example, \"a photo\n of {}\", where \"{}\" can be \"dog\" or \"cat\". 
An embedding is generated using the text prompt(s) for each class and\n compared against the image embedding to get a match score. Optionally, users can provide a list of their own text\n prompts.\n\n\nOpenAI trained the CLIP model using a wide variety of images and their respective captions from the Internet. This may\n make it suitable for a wide variety of classification tasks without further training (known as zero-shot\n classification). For example, a user could make up a list of classes for arbitrary objects like \"walrus\", \"paperclip\",\n \"pizza\", etc., and use the default text prompts.\n\n\nIt is also possible to use CLIP to classify concepts like scenes and sentiment. For example, using a text prompt of \"a\n {} scene\" where the classes are \"safe\", \"violent\", and \"dangerous\".\n\n\nOptionally, the CLIP component can return the image embedding as the track \nFEATURE\n. For example, this can be used\n for search and retrieval tasks by comparing it to other embeddings enrolled in a database.\n\n\nRefer to the \nREADME\n for\n details.\n\n\n\n\nFeatures\n\n\n\n\n\n[\n#1547\n] Create Argos translation component\n\n\n[\n#1574\n] Update the WFM to support an optional \nTRIGGER\n property on any action\n\n\n[\n#1598\n] Create a Whisper component for speech-to-text and and translation\n\n\n[\n#1644\n] Create CLIP component for processing images\n\n\n[\n#1704\n] Update Workflow Manager to authenticate users and REST clients using OIDC\n\n\n[\n#1730\n] Update Workflow Manager to optionally use OIDC when sending callbacks and posting to TiesDb\n\n\n[\n#1733\n] Update Workflow Manager to use an embedded ActiveMQ broker\n\n\n[\n#1793\n] Add Roll Up support to Workflow Manager\n\n\n\n\nUpdates\n\n\n\n\n\n[\n#799\n] Avoid unnecessary serialization between Camel routes\n\n\n[\n#949\n] Change \n/pipelines?name=MYPIPELINE\n REST endpoint to \n/pipelines/MYPIPELINE\n\n\n[\n#1643\n] Remove \nLONG_SPEAKER_ID\n and instead only use \nSPEAKER_ID\n\n\n[\n#1645\n] Refactor camel code\n\n\n[\n#1705\n] Change all references to \"whitelist\" to \"allow list\" and \"blacklist\" to \"block list\"\n\n\n[\n#1759\n] Disable markup animation by default\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1642\n] \nInProgressBatchJobsService.setProcessedAction\n is now called when a previous task produces no tracks\n\n\n[\n#1755\n] The Workflow Manager logs page does not properly handle multi-byte characters\n\n\n\n\nOpenMPF 7.2.x\n\n\n7.2.6: January 2024\n\n\n\nDocumentation\n\n\n\n\n\nCreated a new \nHealth Check Guide\n.\n\n\n\n\nHealth Check Support\n\n\n\n\n\nThe C++ and Python component executors can be configured to run health checks on components prior to running jobs.\n Health checks are configured using environment variables:\n\n\nHEALTH_CHECK\n: When set to \"ENABLED\", the component executor will run health checks.\n\n\nHEALTH_CHECK_TIMEOUT\n: When set to a positive integer, specifies the minimum number of seconds between health\n checks. When absent or set to 0, a health check will run before every job.\n\n\nHEALTH_CHECK_RETRY_MAX_ATTEMPTS\n: When set to a positive integer, specifies the number of consecutive health\n check failures that will cause the component service to exit. When absent or set to 0, the component service will\n never exit because of a failed health check.\n\n\n\n\n\n\nAlso, an INI file must be provided at \n$MPF_HOME/plugins//health/health-check.ini\n. 
For example:\n\n\n\n\nmedia=$MPF_HOME/plugins/OcvFaceDetection/health/meds_faces_image.png\nmin_num_tracks=2\nmedia_type=IMAGE\n\n[job_properties]\nJOB PROP1=VALUE1\nJOB PROP2=VALUE2\n\n[media_properties]\nMEDIA PROP=MEDIA VALUE\n\n\n\n\n\nRefer to the \nHealth Check Guide\n for an explanation and more details.\n\n\n\n\nFeatures\n\n\n\n\n\n[\n#1731\n] Implement health checks for C++ and Python components\n\n\n\n\nUpdates\n\n\n\n\n\n[\n#1727\n] Update ffmpeg to 6.1\n\n\n\n\n7.2.5: November 2023\n\n\n\nUpdates\n\n\n\n\n\n[\n#1715\n] Upgrade ActiveMQ to 5.17.6\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1711\n] When selecting detections with the highest confidence,\n Workflow Manager should consistently handle detections with equal confidence\n\n\n\n\n7.2.4: September 2023\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1707\n] Fix bug where TiesDB check status reports\n \nNO_TIES_DB_URL_IN_JOB\n instead of \nMEDIA_MIME_TYPES_ABSENT\n\n\n\n\n7.2.3: June 2023\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1697\n] Prevent OcvYoloDetection component from deadlocking on\n strange frame sizes when using Triton\n\n\n\n\n7.2.2: June 2023\n\n\n\nUpdates\n\n\n\n\n\n[\n#1693\n] Add property to enable/disable SAS in AzureSpeech\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1695\n] Fix memory leak in KeywordTagging component\n\n\n\n\n7.2.1: June 2023\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1678\n] Fix bug where ffmpeg hangs when processing some kinds of\n unsupported/corrupted media\n\n\n\n\n7.2.0: May 2023\n\n\n\nDocumentation\n\n\n\n\n\nCreated a new \nTiesDb Guide\n.\n\n\nUpdated the \nComponent Descriptor Reference\n with \noutputChangedCounter\n.\n\n\nUpdated the \nREST API\n with a new \n[POST] /rest/jobs/tiesdbrepost\n endpoint.\n\n\nUpdated the REST API \n[POST] /rest/jobs\n response with \ntiesDbCheckStatus\n and \noutputObjectUri\n.\n\n\n\n\nTiesDb Re-Post\n\n\n\n\n\nAdded a new \n[POST] /rest/jobs/tiesdbrepost\n endpoint that accepts an array of job ids as an input and will attempt to\n re-post the job assertions (records) to TiesDb for each one. \n\n\nAdded a \"TiesDb\" column to the Job Status page. If there is a problem posting a record to the TiesDb server the column\n will contain an \"ERROR\" button. Clicking on it will provide a description of the error and a button that can be used\n to re-post the associated job records.\n\n\n\n\nTiesDb Checking\n\n\n\n\n\nIf the \nTIES_DB_URL\n job property or \nties.db.url\n system property is set when submitting a job creation request, \n then the Workflow Manager will attempt to check TiesDb for existing job results before running the job again.\n\n\nThe Workflow Manager will attempt to use the most-recently-created job results, preferring jobs that completed without\n errors or warnings, and preferring jobs that completed with warnings over completed with errors.\n\n\nTo prevent this check, set \nSKIP_TIES_DB_CHECK=true\n. That will force the job to run and attempt to post the new\n job results to TiesDb.\n\n\nWhen using TiesDb, we strongly recommend providing both the \nMEDIA_HASH\n and \nMIME_TYPE\n in the \nmedia.metadata\n map\n in the job request. This will enable the Workflow Manager to skip media inspection. When using S3 object storage, this\n means that the Workflow Manager will not need to download the media before checking TiesDb for existing job records.\n\n\nThe \n[POST] /rest/jobs\n response now contains a \ntiesDbCheckStatus\n and \noutputObjectUri\n field. 
\ntiesDbCheckStatus\n\n will be set to one of the following values:\n\n\nNOT_REQUESTED\n\n\nNO_TIES_DB_URL_IN_JOB\n\n\nMEDIA_HASHES_ABSENT\n\n\nMEDIA_MIME_TYPES_ABSENT\n\n\nNO_MATCH\n\n\nFOUND_MATCH\n\n\n\n\n\n\nWhen there is a \nFOUND_MATCH\n, the \noutputObjectUri\n will be set to the URI of the old TiesDb record if S3 copy is\n not enabled.\n\n\nBy default, the \nties.db.s3.copy.enabled\n system property is set to \ntrue\n. This means that the Workflow Manager will\n attempt to copy all of the artifacts, markup, and derivative media associated with the job in TiesDb from the S3\n locations associated with the old job to the new S3 location specified in the new job. A new JSON output object will\n be generated. To disable this behavior set the system property, or \nTIES_DB_S3_COPY_ENABLED\n, to \nfalse\n. Then the\n Workflow Manager will simply provide a link to the old JSON as the result of the new job.\n\n\nIf there is a problem copying between S3 locations, the \"TiesDb\" column to the Job Status page will show a\n \"COPY ERROR\" button. Clicking on it will provide a description of the error.\n\n\n\n\nTiesDb Linked Media\n\n\n\n\n\nAdded support for \nLINKED_MEDIA_HASH\n in the \nmedia.properties\n section of the job creation request. When specified,\n the value of \nLINKED_MEDIA_HASH\n will be used instead of the actual media hash when creating a record in TiesDb,\n and also when looking for existing records in TiesDb.\n\n\nThis feature can be used to submit a transcoded (or thumbnail) version of an image to process instead of the source\n image. For example, the source image may be in a format not supported by OpenMPF. In this case, the value of\n \nLINKED_MEDIA_HASH\n can be set to the source image, but the rest of the job creation request would specify\n the \nmedia.mediaUri\n and \nmedia.metadata\n for the transcoded version of that image.\n\n\n\n\nOutput Changed Counter\n\n\n\n\n\nAdded the \noutput.changed.counter\n system property to the Workflow Manager and \noutputChangedCounter\n field to each\n component's \ndescriptor.json\n. These values are used when calculating the hash for a job when its record is posted to\n TiesDb, and also when checking TiesDb for existing records when a new job is submitted.\n\n\nIf the Workflow Manager is updated for any reason that should invalidate pre-existing job results, such as a\n change to the fields in the JSON output object, or significant improvements to track merging, for example, then the\n value of \noutput.changed.counter\n should be incremented by one. This will ensure that records in TiesDb will not be\n used so that all future jobs will need to be (re)run at least once until the counter is incremented again.\n\n\nThe same is true for each component. If a component is updated for any reason that should invalidate\n pre-existing job results, such as changes to input or output properties, or substantial improvements to the algorithm,\n then the value of \noutputChangedCounter\n should be incremented by one.\n\n\n\n\nChanges to JSON Output Object\n\n\n\n\n\nNew JSON output objects will include \ntiesDbSourceJobId\n and \ntiesDbSourceMediaPath\n when the Workflow Manager can use\n previous job results stored in TiesDB. 
Note that the Workflow Manager will not generate a new JSON output object unless `S3_RESULTS_BUCKET` is set to a valid value, S3 access and secret keys are provided, and `TIES_DB_S3_COPY_ENABLED=true`.

### ffprobe for Media Inspection

- The Workflow Manager media inspection behavior now uses `ffprobe` with `-print_format json` to return more precise `FPS` values for the `media.mediaMetadata` in the JSON output object. For example, the previous version of the Workflow Manager would return `29.97`, whereas the new version will return `29.97002997002997`. In multi-hour-long videos this can prevent cases where the last few frames were being ignored.
- The previous version of the Workflow Manager was using both `ffmpeg` and OpenCV to determine the number of frames in a video. We removed the OpenCV frame counter in this version because the `ffprobe` approach is more accurate. The `ffprobe` command replaces the old `ffmpeg` command.

### Web User Interface

- Updated the Job Status page to be more efficient. Searching a database of hundreds of thousands of jobs takes a long time. By limiting the search to one page of results at a time the UI is more responsive.
- Removed timeout and bootout. The user session will no longer automatically end due to a timeout, or due to the same user logging in from a different host or browser. These behaviors were deemed too disruptive by end users.
- Updated the Job Status page to include a "TiesDb" column that reports TiesDb status, such as when posting records to TiesDb and when retrieving existing records.

### Features

- [#1438] Create a REST endpoint that will attempt to re-post to TiesDb
- [#1613] Check TiesDb before running a job
- [#1650] Create TiesDb records for thumbnail jobs under the parent media

### Updates

- [#1342] Use ffprobe to get FPS during media inspection
- [#1564] Use ffprobe's JSON output instead of regexes during media inspection
- [#1601] Update the Workflow Manager jobs table to be more efficient
- [#1611] Remove Workflow Manager timeout and bootout behavior

# OpenMPF 7.1.x

## 7.1.12: March 2023

### Bug Fixes

- [#1667] Handle Webp files with extra data at the end that cause components to crash

## 7.1.10: March 2023

### Updates

- [#1662] Monitor StorageBackend

## 7.1.9: February 2023

### Bug Fixes

- [#1675] Prevent upgrade of cudnn in yolo server dockerfile

## 7.1.8: February 2023

### Bug Fixes

- [#1649] Install specific version of libcudnn8 in Docker build

## 7.1.7: February 2023

### Updates

- [#1674] Update `SPEAKER_ID` logic, set `LONG_SPEAKER_ID=0`

## 7.1.5: January 2023

### Features

- [#1542] Update Azure Speech Detection component to select transcription language based on feed-forward track
- [#1543] Update audio transcoder to accept subsegments
- [#1605] Update Azure Translation to use detected language from upstream

## 7.1.1: December 2022

### Bug Fixes

- [#1634] Update version numbers to 7.1

## 7.1.0: December 2022

### Documentation

- Updated the Object Storage Guide with `S3_UPLOAD_OBJECT_KEY_PREFIX`.
- Updated the Markup Guide with `MARKUP_TEXT_LABEL_MAX_LENGTH`.

### Exemplar Selection Policy

- The policy for selecting the exemplar detection for each track can now be set
using the \nEXEMPLAR_POLICY\n job property\n with following values:\n\n\nCONFIDENCE\n: Select the detection with the maximum confidence. If some confidences are the same, select the\n detection with the lower frame number. This is the default setting.\n\n\nFIRST\n: Select the detection with the lowest frame number\n\n\nLAST\n: Select the detection with the highest frame number\n\n\nMIDDLE\n: Select the detection with the frame number closest to the middle frame of the track, preferring the\n detection with the lower frame number if there is an even number of frames\n\n\n\n\n\n\n\n\nAutomatic Rotation and Horizontal Flip Enabled by Default\n\n\n\n\n\nIt is no longer necessary to explicitly set \nAUTO_ROTATE\n and \nAUTO_FLIP\n to true since that is now the default value.\n\n\nThese properties affect all video and image components that use the MPFImageReader and MPFVideoCapture tools. When\n true, if the image has EXIF data, or there is metadata associated with a video that ffmpeg understands, the tools will\n use that information to properly orient the frames before returning the frames to the component for processing.\n\n\n\n\nSupport S3 Object Storage Key Prefix\n\n\n\n\n\nSet the \nS3_UPLOAD_OBJECT_KEY_PREFIX\n job property or \ns3.upload.object.key.prefix\n system property to add a prefix to\n object keys when the Workflow Manager uploads objects to the S3 object store. This affects the JSON output object,\n artifacts, markup files, and derivative media.\n\n\nSpecifically, the Workflow Manager will upload objects to\n \n///\n.\n\n\nFor example, if you wish to add \"work/\" to the object key, then set \nS3_UPLOAD_OBJECT_KEY_PREFIX=work/\n.\n\n\n\n\nFeatures\n\n\n\n\n\n[\n#1526\n] Allow markup to display more than 10 characters in the text\n part of the label\n\n\n[\n#1527\n] Enable the Workflow Manager to select the middle detection\n as the exemplar\n\n\n[\n#1566\n] Make \nAUTO_ROTATE\n and \nAUTO_FLIP\n true by default\n\n\n[\n#1569\n] Modify C++ and Python component executor to automatically\n add the job name to log messages\n\n\n[\n#1621\n] Make S3 object keys used for upload configurable\n\n\n\n\nUpdates\n\n\n\n\n\n[\n#1602\n] Update Workflow Manager to use Spring Boot\n\n\n[\n#1631\n] Update byte-buddy, Mockito, and Hibernate versions to\n resolve build issue. 
Most notably, update Hibernate to 5.6.14.\n\n\n[\n#1632\n] Update ActiveMQ to 5.17.3\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1581\n] Don't change track start and end frame when\n \nFEED_FORWARD_TOP_CONFIDENCE_COUNT\n is disabled\n\n\n[\n#1595\n] Work around how Ubuntu only recognizes certificate files\n that end in .crt\n\n\n[\n#1610\n] Prevent premature pipeline creation when using web UI\n\n\n[\n#1612\n] At startup, prevent Workflow Manager from consuming from\n queues before purging them\n\n\n\n\nOpenMPF 7.0.x\n\n\n7.0.3: September 2022\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1561\n] Fix logging for Python components when running through CLI\n runner\n\n\n[\n#1583\n] Can now properly view media while job is in progress\n\n\n[\n#1587\n] Fix bugs in amq_detection_component's use of select\n\n\n\n\n7.0.2: August 2022\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1562\n] Fix bug where an ffmpeg change prevented detecting video\n rotation\n\n\n\n\n7.0.0: July 2022\n\n\n\nDocumentation\n\n\n\n\n\nUpdated the Development Environment Guide by replacing steps for CentOS 7 with Ubuntu 20.04.\n\n\nAdded the Derivative Media Guide.\n\n\nUpdated the Batch Component APIs with revised error codes.\n\n\nUpdated the Python Batch Component API and Python base Docker image README with instructions for\n using \npyproject.toml\n and \nsetup.cfg\n.\n\n\nUpdated the Admin Guide and User Guide with images that show the new TiesDb and Callback columns in the job status UI.\n\n\nUpdated the REST API with the \npipelineDefinition\n, \nframeRanges\n, and \ntimeRanges\n fields now supported by the\n \n[POST] /rest/jobs\n endpoint.\n\n\nUpdated the OcvYoloDetection component README with information on using the NVIDIA Triton inference server.\n\n\nUpdated the Markup Guide with \nMARKUP_ANIMATION_ENABLED\n and \nMARKUP_LABELS_TRACK_INDEX_ENABLED\n.\n\n\nUpdated the Contributor Guide with new steps for generating documentation.\n\n\n\n\nTransition from CentOS 7 to Ubuntu 20.04\n\n\n\n\n\nAll the Docker images that previously used CentOS 7 as a base now use Ubuntu 20.04.\n\n\nWe decided not to use CentOS 8, which is a version of CentOS Stream, due to concerns about stability.\n\n\nAlso, Ubuntu is a very common OS within the AI and ML space, and has significant community support.\n\n\n\n\nUse Job Id that Enables Load Balancing\n\n\n\n\n\nThe Workflow Manager can now optionally accept job ids of the form \n-\n through\n the REST endpoints, where \n\n is the same as the shorter id used in previous releases. The\n \n-\n prefix enables better tracking and separation of jobs run across multiple\n Workflow Manager instances in a cluster.\n\n\nThe prefix can be set in the \ndocker-compose.yml\n file by assigning \n{{.Node.Hostname}}\n to the \nNODE_HOSTNAME\n\n environment variable for the Workflow Manager service, or hard-coding \nNODE_HOSTNAME\n to the desired hostname.\n\n\nThe shorter version of the id can still be used in REST requests, but the longer id will always be returned by the\n Workflow Manager when responding to those requests.\n\n\nThe shorter id will always be used internally by the Workflow Manager, meaning the job status web UI and log messages\n will all use the shorter job id. \n\n\n\n\nSupport for Derivative Media\n\n\n\n\n\nThe TikaImageDetection component now returns \nMEDIA\n tracks instead of \nIMAGE\n tracks when extracting images from\n documents, such as PDFs, Word documents, and PowerPoint slides. 
The document is considered the \"source\", or \"parent\",\n media, and the images are considered the \"derivative\", or \"child\", media.\n\n\nActions can now be configured with \nSOURCE_MEDIA_ONLY=true\n or \nDERIVATIVE_MEDIA_ONLY=true\n, which will result in only\n performing the action on that kind of media. Feed forward can still be used to pass track information from one stage\n to another. The tracks will skip the stages (actions) that don't apply.\n\n\nThis enables complex pipelines like one that extracts text from a PDF using TikaTextDetection, OCRs embedded images\n using EastTextDetection and TesseractOCRTextDetection, and runs all of the \nTEXT\n tracks through KeywordTagging.\n\n\nAdded the following pipelines to the TikaImageDetection component:\n\n\nTIKA IMAGE DETECTION WITH DERIVATIVE MEDIA TESSERACT OCR PIPELINE\n\n\nTIKA IMAGE DETECTION WITH DERIVATIVE MEDIA TESSERACT OCR AND KEYWORD TAGGING PIPELINE\n\n\nTIKA IMAGE DETECTION WITH DERIVATIVE MEDIA TESSERACT OCR (WITH EAST REGIONS) AND KEYWORD TAGGING PIPELINE\n\n\nTIKA IMAGE DETECTION WITH DERIVATIVE MEDIA TESSERACT OCR (WITH EAST REGIONS) AND KEYWORD TAGGING AND MARKUP PIPELINE\n\n\nTIKA IMAGE DETECTION WITH DERIVATIVE MEDIA OCV FACE PIPELINE\n\n\nTIKA IMAGE DETECTION WITH DERIVATIVE MEDIA OCV FACE AND MARKUP PIPELINE\n\n\n\n\n\n\n\n\nReport when Job Callbacks and TiesDb POSTs Fail\n\n\n\n\n\nThe job status UI displays two new columns, one that indicates the status of posting to TiesDB, and one that indicates\n the status of posting the job callback to the job producer.\n\n\nAdditionally, the \n[GET] /rest/jobs/{id}\n endpoint now includes a \ntiesDbStatus\n and \ncallbackStatus\n field.\n\n\nNote that, by design, the JSON output itself does not contain these statuses.\n\n\n\n\nAllow Pipelines to be Specified in a Job Request\n\n\n\n\n\nOptionally, the \npipelineDefinition\n field can be provided instead of the \npipelineName\n field when using the\n \n[POST] /rest/jobs\n endpoint in order to specify a pipeline on the fly for that specific job run. It will not be saved\n for later reuse.\n\n\nThe format of the pipeline definition is similar to that in a \ndescriptor.json\n file, with separate sections for\n defining \ntasks\n and \nactions\n. Pre-existing tasks and actions known to the Workflow Manager can be specified in the\n definition. They do not need to be defined again.\n\n\nThis feature is a convenient alternative to creating persistent definitions using the \n[POST] /rest/pipelines\n,\n \n[POST] /rest/tasks\n, and \n[POST] /rest/actions\n endpoints. For example, this feature could be used to quickly add or\n remove a motion preprocessing stage from a pipeline.\n\n\n\n\nAllow User-Specified Segment Boundaries\n\n\n\n\n\nOptionally, multiple \nframeRanges\n and/or \ntimeRanges\n fields can be provided when using the \n[POST] /rest/jobs\n\n endpoint in order to manually specify segment boundaries. These values will override the normal segmenting behavior of\n the Workflow Manager.\n\n\nNote that overlapping ranges will be combined and large ranges may still be split up according to the value of\n \nTARGET_SEGMENT_LENGTH\n and \nVFR_TARGET_SEGMENT_LENGTH\n.\n\n\nNote that \nframeRanges\n is specified using the frame number and \ntimeRanges\n is specified in milliseconds.\n\n\n\n\nAdd Triton Inference Server support to YOLO component\n\n\n\n\n\nThe OcvYoloDetection component now supports the ability to send requests to an NVIDIA Triton Inference Server by\n setting \nENABLE_TRITON=true\n. 
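As a minimal sketch, assuming these are passed in the job properties map of a `[POST] /rest/jobs` request (the server address shown is the default mentioned below):

```json
"jobProperties": {
    "ENABLE_TRITON": "true",
    "TRITON_SERVER": "ocv-yolo-detection-server:8001"
}
```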
- When `ENABLE_TRITON` is set to false, the component will process jobs using OpenCV DNN on the local host running the Docker service, as usual.
- By default, `TRITON_SERVER=ocv-yolo-detection-server:8001`, which corresponds to the `ocv-yolo-detection-server` entry in your `docker-compose.yml` file. Refer to the example entry within `docker-compose.components.yml`. That entry uses a pre-built and pre-configured version of the Triton server.
- The Triton server runs the YOLOv4 model within the TensorRT framework, which performs a warmup operation when the server starts up to determine which optimizations to enable for the available GPU hardware. `*.engine` files are generated within the `yolo_engine_file` Docker volume for later reuse.
- To further improve inferencing speed, shared memory can be configured between the `ocv-yolo-detection` client service and the `ocv-yolo-detection-server` service if they are running on the same host. Set `TRITON_USE_SHM=true` and configure the server with a `/dev/shm:/dev/shm` Docker volume.
- Depending on the available GPU hardware, the Triton server can achieve speeds that are 5x faster than OpenCV DNN with tracking enabled and no shared memory, and nearly 9x faster with tracking disabled and shared memory. Our tests used a single RTX 2080 GPU.

### Removed Unused and Redundant Error Codes

The error codes shown on the left were redundant and have been replaced with the corresponding error codes on the right:

| Old Error Code | New Error Code |
|----------------|----------------|
| `MPF_IMAGE_READ_ERROR` | `MPF_COULD_NOT_READ_MEDIA` |
| `MPF_BOUNDING_BOX_SIZE_ERROR` | `MPF_BAD_FRAME_SIZE` |
| `MPF_JOB_PROPERTY_IS_NOT_INT` | `MPF_INVALID_PROPERTY` |
| `MPF_JOB_PROPERTY_IS_NOT_FLOAT` | `MPF_INVALID_PROPERTY` |
| `MPF_INVALID_FRAME_INTERVAL` | `MPF_INVALID_PROPERTY` |
| `MPF_DETECTION_TRACKING_FAILED` | `MPF_OTHER_DETECTION_ERROR_TYPE` |

Also, the following error codes are no longer being used and have been removed:

- `MPF_UNRECOGNIZED_DATA_TYPE`
    - All media types can now be processed since we support the `UNKNOWN` (a.k.a. "generic") media type
- `MPF_INVALID_DATAFILE_URI`
    - The Workflow Manager will reject a job with an invalid media URI before it gets to a component
- `MPF_INVALID_START_FRAME`
- `MPF_INVALID_STOP_FRAME`
- `MPF_INVALID_ROTATION`

### Markup Improvements

- By default, the Markup component draws bounding boxes to fill in the gaps between detections in each track by interpolating the box size and position. This can now be disabled by setting the job property `MARKUP_ANIMATION_ENABLED=false`, or the system property `markup.video.animation.enabled=false`. Disabling this feature can be useful to prevent floating boxes from cluttering the marked-up frames.
- The Markup component will now start each bounding box label with a track index like `[0]` that can be used to correlate the box with the track in the JSON output object. The JSON output now contains an `index` field for every track, relative to each piece of media, that is simply an integer that starts at 0 and counts upward.
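For example, a track entry in the JSON output might look like the hedged sketch below (other track fields are omitted and the property values are placeholders); the corresponding bounding box label in the marked-up video would then begin with `[0]`:

```json
"tracks": [
    {
        "index": 0,
        "type": "FACE",
        "trackProperties": { "CLASSIFICATION": "face" }
    }
]
```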
This can be\n disabled by setting the job property \nMARKUP_LABELS_TRACK_INDEX_ENABLED=false\n, or the system property\n \nmarkup.labels.track.index.enabled=false\n.\n\n\n\n\nChanges to JSON Output Object\n\n\n\n\n\nComponents that generate \nMEDIA\n tracks will result in new derivative \nmedia\n entries in the JSON output file. This\n means it's possible to provide a single piece of media as an input and have more than one \nmedia\n entry in the JSON\n output. The output will always include the original media.\n\n\nEach \nmedia\n entry in the JSON output now contains a \nparentMediaId\n in addition to the \nmediaId\n. The \nparentMediaId\n\n for original source media will always be set to -1; otherwise, for derivative media, the \nparentMediaId\n is set the\n \nmediaId\n of the source media from which the child media was derived.\n\n\nEach \nmedia\n entry also contains a new \nframeRanges\n and \ntimeRanges\n collection.\n\n\nThe JSON output file also contains a new \nindex\n field for every track, relative to each piece of media.\n\n\n\n\nFeatures\n\n\n\n\n\n[\n#792\n] Perform detection on images extracted from PDFs\n\n\n[\n#1283\n] Add user-specified segment boundaries\n\n\n[\n#1374\n] Transition from CentOS 7 to Ubuntu 20.04\n\n\n[\n#1396\n] Report when job callbacks and TiesDb POSTs fail\n\n\n[\n#1398\n] Add Triton Inference Server support to YOLO component\n\n\n[\n#1428\n] Allow pipelines to be specified in a job request\n\n\n[\n#1454\n] Transition from Clair scans to Trivy scans\n\n\n[\n#1485\n] Use \npyproject.toml\n and \nsetup.cfg\n instead of \nsetup.py\n\n\n\n\nUpdates\n\n\n\n\n\n[\n#803\n] Update Tika Image Detection to generate one track per piece of extracted media\n\n\n[\n#808\n] Update Tika Text Detection component to not use leading zeros for \nPAGE_NUM\n\n\n[\n#1105\n] Remove dependency on QT from C++ SDK\n\n\n[\n#1282\n] Use job id that enables load balancing\n\n\n[\n#1303\n] Update Tika Image Detection to return \nMEDIA\n tracks\n\n\n[\n#1319\n] Review existing error codes and remove unused or redundant error codes\n\n\n[\n#1384\n] Update Apache Tika to 2.4.1 for TikaImageDetection and TikaTextDetection Components\n\n\n[\n#1436\n] CLI Runner should initialize a component once when handling multiple jobs\n\n\n[\n#1465\n] Remove YoloV3 support from OcvYoloDetection component\n\n\n[\n#1513\n] Update to Spring 5.3.18\n\n\n[\n#1528\n] CLI runner should also sort by startOffsetTime\n\n\n[\n#1540\n] Upgrade to Java 17\n\n\n[\n#1549\n] Allow markup animation to be disabled\n\n\n[\n#1550\n] Add track index to markup\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1372\n] Tika Image Detection no longer misses images in PowerPoint and Word documents\n\n\n[\n#1449\n] Simon data is now refreshed when clicking the Processes tab\n\n\n[\n#1495\n] Fix bug where invalid CSRF token found for \n/workflow-manager/login\n\n\n\n\nOpenMPF 6.3.x\n\n\n6.3.14: May 2022\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1530\n] Fix S3 code memory leak\n\n\n\n\n6.3.12: April 2022\n\n\n\nUpdates\n\n\n\n\n\n[\n#1519\n] Upgrade to OpenCV 4.5.5\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1520\n] S3 code now retries on most 400 errors\n\n\n\n\n6.3.11: April 2022\n\n\n\nDocumentation\n\n\n\n\n\nUpdated the Object Storage Guide with \nS3_SESSION_TOKEN\n, \nS3_USE_VIRTUAL_HOST\n, \nS3_HOST\n, and \nS3_REGION\n.\n\n\n\n\nUpdates\n\n\n\n\n\n[\n#1496\n] Update S3 client code\n\n\n[\n#1514\n] Update Tomcat to 8.5.78\n\n\n\n\n6.3.10: March 2022\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1486\n] Fix bug where \nMOVING\n was being added to immutable map 
twice\n\n\n[\n#1498\n] Can now provide media metadata when frameTimeInfo is missing\n\n\n[\n#1501\n] MPFVideoCapture now properly reads frames from videos with rotation metadata\n\n\n[\n#1502\n] Detections with \nHORIZONTAL_FLIP\n will no longer result in illformed detections and incorrectly padded regions\n\n\n[\n#1503\n] Videos with rotation metadata will no longer result in corrupt markup\n\n\n\n\n6.3.8: January 2022\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1469\n] \nTENSORFLOW VEHICLE COLOR DETECTION\n pipelines no longer refer to YOLO tasks that no longer exist\n\n\n\n\n6.3.7: January 2022\n\n\n\nUpdates\n\n\n\n\n\n[\n#1466\n] Upgrade log4j to 2.17.1\n\n\n\n\n6.3.6: December 2021\n\n\n\nUpdates\n\n\n\n\n\n[\n#1457\n] Upgrade log4j to 2.16.0\n\n\n\n\n6.3.5: November 2021\n\n\n\nUpdates\n\n\n\n\n\n[\n#1451\n] Make concurrent callbacks configurable\n\n\n\n\n6.3.4: November 2021\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1441\n] Modify AdminStatisticsController so that it doesn't hold all jobs in memory at once\n\n\n\n\n6.3.3: October 2021\n\n\n\nFeatures\n\n\n\n\n\n[\n#1425\n] Make protobuf size limit configurable\n\n\n\n\n6.3.2: October 2021\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1420\n] Sphinx component no longer omits audio at end of video files\n\n\n[\n#1422\n] Media inspection now correctly calculates milliseconds from ffmpeg duration\n\n\n\n\n6.3.1: September 2021\n\n\n\nFeatures\n\n\n\n\n\n[\n#1404\n] Improve OcvDnnDetection vehicle color detection\n\n\n\n\nUpdates\n\n\n\n\n\n[\n#1251\n] Add version to JSON output object\n\n\n[\n#1272\n] Update Keyword Tagging to work on multiple inputs\n\n\n[\n#1350\n] Retire old components to the graveyard: DlibFaceDetection, DarknetDetection, and OcvPersonDetection\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1010\n] \nmpf.output.objects.enabled\n now behaves as expected\n\n\n[\n#1271\n] Azure speech component no longer omits audio at end of video files\n\n\n[\n#1389\n] NLP text correction component now properly reads the value of \nFULL_TEXT_CORRECTION_OUTPUT\n\n\n[\n#1403\n] Corrected README to state that the Azure Speech Component doesn't support v2 of the API\n\n\n[\n#1406\n] Speech detections in videos are no longer dropped if using keyword tagging\n\n\n[\n#1411\n] Exception no longer occurs when adding \nSHRUNK_TO_NOTHING=TRUE\n to an immutable map in multiple pipeline stages\n\n\n[\n#1413\n] Speech detections in videos are no longer dropped if using translation\n\n\n\n\n6.3.0: September 2021\n\n\n\nDocumentation\n\n\n\n\n\nUpdated the API documents, Development Environment Guide, Node Guide, Install Guide, User Guide, Admin Guide, and\n others to clarify the difference between Docker and non-Docker behaviors.\n\n\nTransformed Packaging and Registering a Component document into Component Descriptor Reference.\n\n\nSplit Media Segmentation Guide from User Guide.\n\n\nUpdated and renamed the Workflow Manager document to Workflow Manager Architecture.\n\n\nUpdated the various Docker guides to clarify the difference between building Docker images from scratch versus\n building them using pre-built base images on Docker Hub, emphasizing the latter.\n\n\nUpdated the Contributor Guide to document the hotfix pull request process.\n\n\n\n\nTiesDb Integration\n\n\n\n\n\nTiesDb is a PostgreSQL DB with a RESTful API that stores media metadata. The metadata entries are queried using the\n hash (sha256, md5) of the media file. TIES stands\n for \nTriage Import Export Schema\n. TiesDb is deployed and managed externally to\n OpenMPF. 
For more information please contact us.\n\n\nWhen a job completes, OpenMPF can post assertions to media entries that exist in TiesDb. In general, one assertion is\n generated for each algorithm run on a piece of media. It contains the job status, algorithm name, detection\n type (\nFACE\n, \nTEXT\n, \nMOTION\n, etc.), and number of tracks generated, as well as a link to the full JSON output\n object.\n\n\nEach assertion serves as a lasting record so that job producers may first check TiesDb to see if an algorithm was run\n on a piece of media before submitting the same job to OpenMPF again.\n\n\nTo enable TiesDb support, set the \nTIES_DB_URL\n job property or \nties.db.url\n system property to\n the \n://:\n part of the URL. The Workflow Manager will append\n the \n/api/db/supplementals?sha256Hash=\n part. Here is an example of a TiesDb POST:\n\n\n\n\n{\n \"dataObject\": {\n \"sha256OutputHash\": \"1f8f2a8b2f5178765dd4a2e952f97f5037c290ee8d011cd7e92fb8f57bc75f17\",\n \"outputType\": \"FACE\",\n \"algorithm\": \"FACECV\",\n \"processDate\": \"2021-09-09T21:37:30.516-04:00\",\n \"pipeline\": \"OCV FACE DETECTION PIPELINE\",\n \"outputUri\": \"file:///home/mpf/git/openmpf-projects/openmpf/trunk/install/share/output-objects/1284/detection.json\",\n \"jobStatus\": \"COMPLETE\",\n \"jobId\": 1284,\n \"systemVersion\": \"6.3\",\n \"trackCount\": 1,\n \"systemHostname\": \"openmpf-master\"\n },\n \"system\": \"OpenMPF\",\n \"securityTag\": \"UNCLASSIFIED\",\n \"informationType\": \"OpenMPF FACE\",\n \"assertionId\": \"4874829f666d79881f7803207c7359dc781b97d2c68b471136bf7235a397c5cd\"\n}\n\n\n\nNatural Language Processing (NLP) Text Correction Component\n\n\n\n\n\nThis component utilizes the \nCyHunspell\n library, which is a Python\n port of the \nHunspell\n spell-checking library, to perform post-processing\n correction of OCR text. In general, it's intended to be used in a pipeline after a component like\n TesseractOCRTextDetection that generates \nTEXT\n tracks. These tracks are then fed-forward into NlpTextCorrection,\n which will add a \nCORRECTED TEXT\n property to the existing tracks.\n The \nTESSERACT OCR TEXT DETECTION WITH NLP TEXT CORRECTION PIPELINE\n performs this behavior. The component can also\n run on its own to process plain text files. Refer to\n the \nREADME\n for details.\n\n\n\n\nAzure Cognitive Services (ACS) Read Component\n\n\n\n\n\nThis component utilizes\n the \nAzure Cognitive Services Read Detection REST endpoint\n\n to extract formatted text from documents (PDFs), images, and videos. 
Refer to\n the \nREADME\n for\n details.\n\n\n\n\nUpdates\n\n\n\n\n\n[\n#1151\n] Now supports \nIN_PROGRESS_WITH_WARNINGS\n status\n\n\n[\n#1234\n] Now sorts JSON output object media by media id\n\n\n[\n#1341\n] Added job id to all batch-job-specific Workflow Manager log\n messages\n\n\n[\n#1349\n] Improved reporting and recording job status\n\n\n[\n#1353\n] Updated the Workflow Manager to remove and warn about\n zero-size detections\n\n\n[\n#1382\n] Updated Tika version to 1.27 for TikaImageDetection and\n TikaTextDetection components\n\n\n[\n#1387\n] Markup can now be configured in a\n component's \ndescriptor.json\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1080\n] Batch jobs no longer prematurely set to 100% completion\n during artifact extraction\n\n\n[\n#1106\n] When a job ends in \nERROR\n or \nCANCELLED_BY_SHUTDOWN\n the\n job status UI now shows an End Date\n\n\n[\n#1158\n] JSON output object URI no longer changes when callback fails\n\n\n[\n#1317\n] TikaTextDetection no longer generates first PDF track\n at \nPAGE_NUM\n 2\n\n\n[\n#1337\n] Now using \nMPF_BAD_FRAME_SIZE\n instead\n of \nMPF_DETECTION_FAILED\n for OpenCV empty/resize exception\n\n\n[\n#1359\n] Image detection tracks no longer\n have \nendOffsetFrameInclusive\n set to 1\n\n\n[\n#1373\n] When uploading large files through the Workflow Manager web\n UI, now more than the first 865032704 bytes get written\n\n\n[\n#1379\n] TikaImageDetection component now avoids conflicts by no\n longer using the same path when extracting images for jobs with multiple pieces of media\n\n\n[\n#1386\n] FeedForwardFrameCropper in the Python SDK now handles\n negative coordinates properly\n\n\n[\n#1391\n] If a job is configured to upload markup and markup fails,\n the job no longer gets stuck\n\n\n\n\nKnown Issues\n\n\n\n\n\n[\n#1372\n] TikaImageDetection misses images in PowerPoint and Word\n documents\n\n\n[\n#1389\n] NlpTextCorrection does not properly read the value\n of \nFULL_TEXT_CORRECTION_OUTPUT\n\n\n\n\nOpenMPF 6.2.x\n\n\n6.2.5: July 2021\n\n\n\nUpdates\n\n\n\n\n\n[\n#1367\n] Enable cross-origin resource sharing on Workflow Manager\n\n\n\n\n6.2.4: June 2021\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1356\n] AzureSpeech now properly reports when media is missing audio stream\n\n\n[\n#1357\n] AzureSpeech now handles case where speaker id is not present\n\n\n\n\n6.2.2: June 2021\n\n\n\nUpdates\n\n\n\n\n\n[\n#1333\n] Combine media name and job id into one WFM log line\n\n\n[\n#1336\n] Remove duplicate \"Setting status of job to COMPLETE\" Workflow Manager log line and other improvements\n\n\n[\n#1338\n] Update OpenCV DNN Detection component to optionally use feed-forward confidence values\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1237\n] Fixed jQuery DataTables bug: \"int parameter 'draw' is present but cannot be translated into a null value\"\n\n\n[\n#1254\n] Jobs table no longer flickers when polling is enabled and the search box is used\n\n\n[\n#1308\n] Prevent OCV YOLO Tracking from generating zero-sized detections\n\n\n[\n#1313\n] Fix JSON output object timestamps for variable frame rate videos\n\n\n\n\n6.2.1: May 2021\n\n\n\nUpdates\n\n\n\n\n\n[\n#1330\n] Return error codes for \nmodels_ini_parser.py\n exceptions\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1331\n] Decoding certain heic images no longer causes Workflow Manager to segfault\n\n\n\n\n6.2.0: May 2021\n\n\n\nTesseract OCR Text Detection Component Support for Videos\n\n\n\n\n\nThe component can now process videos in addition to images and PDFs. 
Each video frame is processed sequentially.\n The \nMAX_PARALLEL_SCRIPT_THREADS\n property determines how many threads to use to process each frame, one thread per\n language or script.\n\n\nNote that for videos without much text, it may be faster to disable threading by\n setting \nMAX_PARALLEL_SCRIPT_THREADS=1\n. This will allow the component to reuse TessAPI instances instead of creating\n new ones for every frame. Please refer to the Known Issues section.\n\n\nResolved issues: \n#1285\n\n\n\n\nUpdates\n\n\n\n\n\n[\n#1086\n] Added support for \nCOULD_NOT_OPEN_MEDIA\n\n and \nCOULD_NOT_READ_MEDIA\n error types\n\n\n[\n#1159\n] Split \nIssueCodes.REMOTE_STORAGE\n\n into \nREMOTE_STORAGE_DOWNLOAD\n and \nREMOTE_STORAGE_UPLOAD\n\n\n[\n#1250\n] Modified \n/rest/jobs/{id}\n to include the job's media\n\n\n[\n#1312\n] Created \nNETWORK_ERROR\n error code for when a component\n can't connect to an external server. Updated Python HTTP retry code to return \nNETWORK_ERROR\n. This affects the Azure\n components.\n\n\n\n\nKnown Issues\n\n\n\n\n\n[\n#1008\n] Use global TessAPI instances with parallel processing\n\n\n\n\nOpenMPF 6.1.x\n\n\n6.1.6: May 2021\n\n\n\nHandle Variable Frame Rate Videos\n\n\n\n\n\nThe Workflow Manager will attempt to detect if a video is constant frame rate (CFR) or variable frame rate (VFR)\n during media inspection. If no determination can be made, it will default to VFR behavior. If CFR, the JSON output\n object will have a \nHAS_CONSTANT_FRAME_RATE=true\n property in the \nmediaMetadata\n field.\n\n\nWhen \nMPFVideoCapture\n handles a CFR video it will use OpenCV to set the frame position, unless the position is within\n 16 frames of the current position, in which case it will iteratively use OpenCV \ngrab()\n to advance to the desired\n frame.\n\n\nWhen \nMPFVideoCapture\n handles a VFR video it will always iteratively use OpenCV \ngrab()\n to advance to the desired\n frame because setting the frame position directly has been shown to not work correctly on VFR videos.\n\n\nWhen a video is split into multiple segments, \nMPFVideoCapture\n must iteratively use \ngrab()\n to advance from frame 0\n to the start of the segment. This introduces performance overhead. To mitigate this we recommend using larger video\n segments than those used for CFR videos.\n\n\nIn addition to the existing \nTARGET_SEGMENT_LENGTH\n and \nMIN_SEGMENT_LENGTH\n job\n properties (\ndetection.segment.target.length\n and \ndetection.segment.minimum.length\n system properties) for CFR\n videos, the Workflow Manager now supports the \nVFR_TARGET_SEGMENT_LENGTH\n and \nVFR_MIN_SEGMENT_LENGTH\n job\n properties (\ndetection.vfr.segment.target.length\n and \ndetection.vfr.segment.minimum.length\n system properties) for\n VFR videos.\n\n\nNote that the timestamps associated with tracks and detections in a VFR video may be wrong. Please refer to the Known\n Issues section.\n\n\nResolved issues: \n#1307\n\n\n\n\nUpdates\n\n\n\n\n\n[\n#1287\n] Updated Tika Text Detection Component to break up large\n chunks of text. The component now generates tracks with both a \nPAGE_NUM\n property and \nSECTION_NUM\n property. Please\n refer to\n the \nREADME\n.\n\n\n\n\nKnown Issues\n\n\n\n\n\n[\n#1313\n] Incorrect JSON output object timestamps for variable frame\n rate videos\n\n\n[\n#1317\n] Tika Text Detection component generates first PDF track\n at \nPAGE_NUM\n 2\n\n\n\n\n6.1.5: April 2021\n\n\n\nUpdates\n\n\n\n\n\n[\n#1300\n] Parallelized S3 artifact upload. 
Use\n the \ndetection.artifact.extraction.parallel.upload.count\n system property to configure the number of parallel uploads.\n\n\n\n\n6.1.4: April 2021\n\n\n\nUpdates\n\n\n\n\n\n[\n#1299\n] Improved artifact extraction performance when there is no\n rotation or flip\n\n\n\n\n6.1.3: April 2021\n\n\n\nUpdates\n\n\n\n\n\n[\n#1295\n] Improved artifact extraction and markup JNI memory\n utilization\n\n\n[\n#1297\n] Limited Workflow Manager IO threads to a reasonable number\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1296\n] Fixed ActiveMQ job priorities\n\n\n\n\n6.1.2: April 2021\n\n\n\nUpdates\n\n\n\n\n\n[\n#1294\n] Limited ffmpeg threads to a reasonable number\n\n\n\n\n6.1.1: April 2021\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1292\n] Don't skip artifact extraction for failed media\n\n\n\n\n6.1.0: April 2021\n\n\n\nOpenMPF Command Line Runner\n\n\n\n\n\nThe Command Line Runner allows users to run jobs with a single component without the Workflow Manager.\n\n\nIt outputs results in a JSON structure that is a subset of the regular OpenMPF output.\n\n\nIt only supports C++ and Python components.\n\n\nSee the\n \nREADME\n\n for more information.\n\n\n\n\nC++ Batch Component API\n\n\n\n\n\nComponent code should no longer configure Log4CXX. The component executor now handles configuring Log4CXX. Component\n code should call \nlog4cxx::Logger::getLogger(\"\")\n\n to get access to the logger. Calls to \nlog4cxx::xml::DOMConfigurator::configure(logconfig_file);\n\n should be removed.\n\n\n\n\nPython Batch Component API \n\n\n\n\n\nComponent code should no longer configure logging. The component executor now handles configuring logging. Calls\n to \nmpf.configure_logging\n should be replaced with\n \nlogging.getLogger('')\n.\n\n\n\n\nDocker Component Base Images\n\n\n\n\n\n\n\nIn order to support running a component through the CLI runner, C++ component developers should set\n the \nLD_LIBRARY_PATH\n environment variable in the final stage of their Dockerfiles. It should generally be set\n like: \nENV LD_LIBRARY_PATH $PLUGINS_DIR//lib\n.\n\n\n\n\n\n\nBecause of the logging changes mentioned above, components no longer need to set the\n \nCOMPONENT_LOG_NAME\n environment variable in their Dockerfiles.\n\n\n\n\n\n\nAdded the\n \nopenmpf_python_executor_ssb\n base image\n\n . It can be used instead of \nopenmpf_python_component_build\n and \nopenmpf_python_executor\n to simplify Dockerfiles for\n Python components that are pure Python and have no build time dependencies.\n\n\n\n\n\n\nLabel Moving vs. Non-Moving Tracks\n\n\n\n\n\nThe Workflow Manager can now identify whether a track is moving or non-moving. This is determined by calculating the\n average bounding box for a track by averaging the size and position of all the detections in the track. Then, for each\n detection in the track, the intersection over union (IoU) is calculated between that detection and the average\n detection. If the IoU for at least \nMOVING_TRACK_MIN_DETECTIONS\n number of detections is less than or equal to\n \nMOVING_TRACK_MAX_IOU\n, then the track is considered a moving track.\n\n\nAdded the following Workflow Manager job properties. 
These can be set for any video job:\n\n\nMOVING_TRACK_LABELS_ENABLED\n: When set to true, attempt to label tracks as either moving or non-moving objects.\n Each track will have a \nMOVING\n property set to \nTRUE\n or \nFALSE\n.\n\n\nMOVING_TRACKS_ONLY\n: When set to true, remove any tracks that were marked as not moving.\n\n\nMOVING_TRACK_MAX_IOU\n: The maximum IoU overlap between detection bounding boxes and the average per-track\n bounding box for objects to be considered moving. Value is expected to be between 0 and 1. Note that the lower\n IoU, the more likely the object is moving.\n\n\nMOVING_TRACK_MIN_DETECTIONS\n: The minimum number of moving detections for a track to be labeled as moving.\n\n\n\n\n\n\n\n\nMarkup Improvements\n\n\n\n\n\nUsers can now watch videos directly in the OpenMPF web UI within the media pop-up dialog for each job. Most modern web\n browsers support videos encoded in VP9 and H.264. If a video cannot be played, users have the option to download it\n and play it using a stand-alone media player.\n\n\nTo set the markup encoder use \nMARKUP_VIDEO_ENCODER\n. The default encoder has changed from \nmjpeg\n to \nvp9\n. As a\n result, it will take longer to generate marked up videos, but they will be higher quality and can be viewed in the web\n UI.\n\n\nEach bounding box in the marked up media is now labeled. By default, the label shows the track-level \nCLASSIFICATION\n\n and associated confidence value. The information shown in the label can be changed by\n setting \nMARKUP_LABELS_TEXT_PROP_TO_SHOW\n and \nMARKUP_LABELS_NUMERIC_PROP_TO_SHOW\n. To show information for each\n individual detection, rather than the entire track, set \nMARKUP_LABELS_FROM_DETECTIONS=TRUE\n.\n\n\nExemplar detections in video tracks include a star icon in their label.\n\n\nOptionally, set \nMARKUP_VIDEO_MOVING_OBJECT_ICONS_ENABLED=TRUE\n to show icons that represent if the track is moving or\n non-moving.\n\n\nOptionally, set \nMARKUP_VIDEO_BOX_SOURCE_ICONS_ENABLED=TRUE\n to show icons that represent the source of the detection.\n For example, if the box is the result of an algorithm detection, tracking performing gap fill, or Workflow Manager\n animation.\n\n\nEach frame of a marked-up video now has a frame number in the upper right corner.\n\n\nPlease refer to the \nMarkup Guide\n for the complete set of markup properties, icon definitions, and\n encoder considerations.\n\n\n\n\nUpdates\n\n\n\n\n\n[\n#1181\n] Updated the Tesseract OCR Text Detection component from\n Tesseract version 4.0.0 to 4.1.1\n\n\n[\n#1232\n] Updated the Azure Speech Detection component from Azure\n Batch Transcription version 2.0 to 3.0\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1187\n] EXIF orientation is now preserved during markup and artifact\n extraction\n\n\n[\n#1257\n] Updated \nOUTPUT_LAST_TASK_ONLY\n to work on all media types\n\n\n\n\nOpenMPF 6.0.x\n\n\n6.0.11: March 2021\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1284\n] Updated the Azure Translation component to count emoji as 2\n characters\n\n\n\n\n6.0.10: March 2021\n\n\n\nUpdates\n\n\n\n\n\n[\n#1270\n] The Azure Cognitive Services components now retry HTTP\n requests\n\n\n\n\n6.0.9: March 2021\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1273\n] Setting \nTRANSLATION\n to the empty string no longer prevents\n Keyword Tagging\n\n\n\n\n6.0.6: March 2021\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1265\n] Updated the Tika Text Detection component to handle\n spreadsheets\n\n\n[\n#1268\n] Updated the Tika Text Detection component to remove metadata\n\n\n\n\n6.0.5: February 2021\n\n\n\nBug 
Fixes\n\n\n\n\n\n[\n#1266\n] The Azure Translation component now handles the final\n segment correctly when guessing sentence breaks\n\n\n\n\n6.0.4: February 2021\n\n\n\nUpdates\n\n\n\n\n\n[\n#1264\n] Updated the Azure Translation component to handle large\n amounts of text\n\n\n[\n#1269\n] AzureTranslation no longer tries to translate text that is\n already in the \nTO_LANGUAGE\n\n\n\n\n6.0.3: February 2021\n\n\n\nOpenCV YOLO Detection Component\n\n\n\n\n\nThis new component utilizes the OpenCV Deep Neural Networks (DNN) framework to detect and classify objects in images\n and videos using Darknet YOLOv4 models trained on the COCO dataset. It supports both CPU and GPU modes of operation.\n Tracking is performed using a combination of intersection over union, pixel difference after Fast Fourier transform (\n FFT) phase correlation, Kalman filtering, and OpenCV MOSSE tracking. Refer to\n the \nREADME\n for details.\n\n\n\n\n6.0.2: January 2021\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1249\n] FFmpeg no longer reports different frame counts for the same\n piece of media\n\n\n\n\n6.0.1: December 2020\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1238\n] The JSON output object is now generated when remote media\n cannot be downloaded.\n\n\n\n\n6.0.0: December 2020\n\n\n\nUpgrade to OpenCV 4.5.0\n\n\n\n\n\nUpdated core framework and components from OpenCV 3.4.7 to OpenCV 4.5.0.\n\n\nOpenCV is now built with CUDA support, including cuDNN (CUDA Deep Neural Network library) and cuBLAS (CUDA Basic\n Linear Algebra Subroutines library). All C++ components that use the base C++ builder and executor Docker images have\n CUDA support built in, giving developers the option to make use of it.\n\n\nAdded GPU support to the OcvDnnDetection component.\n\n\n\n\nAzure Cognitive Services (ACS) Translation Component\n\n\n\n\n\nThis new component utilizes\n the \nAzure Cognitive Services Translator REST endpoint\n\n to translate text from one language (locale) to another. Generally, it's intended to operate on feed-forward tracks\n that contain detections with \nTEXT\n and \nTRANSCRIPT\n properties. It can also operate on plain text file inputs. Refer\n to the \nREADME\n for\n details.\n\n\n\n\nInteroperability Package\n\n\n\n\n\nAdded \nalgorithm\n field to the element that describes a collection of tracks generated by an action in the JSON output\n object. For example:\n\n\n\n\n\"output\": {\n \"FACE\": [{\n \"source\": \"+#MOG MOTION DETECTION PREPROCESSOR ACTION#OCV FACE DETECTION ACTION\",\n \"algorithm\": \"FACECV\",\n \"tracks\": [{ ... }],\n ...\n },\n\n\n\nMerge Tasks in JSON Output Object\n\n\n\n\n\nThe output of two tasks in the JSON output object can be merged by setting the \nOUTPUT_MERGE_WITH_PREVIOUS_TASK\n\n property to true. This is a Workflow Manager property and can be set on any task in any pipeline, although it has no\n effect when set on the first task or the Markup task.\n\n\nWhen the output of two tasks are merged, the tracks for the previous task will not be shown in the JSON output object,\n and no artifacts are generated for it. The task will be listed under \nTRACKS MERGED\n, if it's not already listed\n under \nTRACKS SUPPRESSED\n due to the \nmpf.output.objects.last.task.only\n system property setting,\n or \nOUTPUT_LAST_TASK_ONLY\n property. 
The tracks associated with the second task will inherit the detection type and\n algorithm of the previous task.\n\n\nFor example, the \nTESSERACT OCR TEXT DETECTION WITH KEYWORD TAGGING PIPELINE\n is defined as\n the \nTESSERACT OCR TEXT DETECTION TASK\n followed by the \nKEYWORD TAGGING (WITH FF REGION) TASK\n. The second task\n sets \nOUTPUT_MERGE_WITH_PREVIOUS_TASK\n to true. The resulting JSON output object contains one set of keyword-tagged\n OCR tracks that have the \nTEXT\n detection type and \nTESSERACTOCR\n algorithm (both inherited from\n the \nTESSERACT OCR TEXT DETECTION TASK\n):\n\n\n\n\n\"output\": {\n \"TRACKS MERGED\": [{\n \"source\": \"+#TESSERACT OCR TEXT DETECTION ACTION\",\n \"algorithm\": \"TESSERACTOCR\"\n }],\n \"TEXT\": [{\n \"source\": \"+#TESSERACT OCR TEXT DETECTION ACTION#KEYWORD TAGGING (WITH FF REGION) ACTION\",\n \"algorithm\": \"TESSERACTOCR\",\n \"tracks\": [{\n \"type\": \"TEXT\",\n \"trackProperties\": {\n \"TAGS\": \"ANIMAL\",\n \"TEXT\": \"The quick brown fox\",\n \"TEXT_LANGUAGE\": \"script/Latin\",\n \"TRIGGER_WORDS\": \"fox\",\n \"TRIGGER_WORDS_OFFSET\": \"16-18\"\n ...\n\n\n\n\n\nNote that you can use the \nOUTPUT_MERGE_WITH_PREVIOUS_TASK\n setting on multiple tasks. For example, if you set it as a\n job property it will be applied to all tasks (with the exception of Markup - in which case the task before Markup is\n used), so you will only get the output of the last task in the pipeline. The last task will inherit the detection type\n and algorithm of the first task in the pipeline.\n\n\n\n\nTesseract Custom Dictionaries\n\n\n\n\n\nThe Tesseract component Docker image now contains an \n/opt/mpf/tessdata_model_updater\n binary that you can use to\n update \n*.traineddata\n models with a custom dictionary, as well as extract files from existing models. Refer to\n the \nDICTIONARIES\n\n guide to learn how to use the tool.\n\n\nIn general, legacy \n*.traineddata\n models are more influenced by words in their dictionary than more modern\n LSTM \n*.traineddata\n models. Also, refer to the known issue below.\n\n\n\n\nKnown Issues\n\n\n\n\n\n[\n#1243\n] Unpacking a \n*.traineddata\n model, for example, in order to\n modify its dictionary, and then repacking it may result in dropping some of the words present in the original\n dictionary file. This may be due to some kind of compression or filtering. It's unknown what effect this has on OCR\n results.\n\n\n\n\nOpenMPF 5.1.x\n\n\n5.1.3: December 2020\n\n\n\nSetting Properties as Docker Environment Variables\n\n\n\n\n\nAny property that can be set as a job property can now be set as a Docker environment variable by prefixing it\n with \nMPF_PROP_\n. For example, setting the \nMPF_PROP_TRTIS_SERVER\n environment variable in the \ntrtis-detection\n\n service in your \ndocker-compose.yml\n file will have the same effect as setting the \nTRTIS_SERVER\n job property.\n\n\nProperties set in this way will take precedence over all other property types (job, algorithm, media, etc). It is not\n possible to change the value of properties set via environment variables at runtime and therefore they should only be\n used to specify properties that will not change throughout the entire lifetime of the service.\n\n\n\n\nUpdates\n\n\n\n\n\nThe \nmpf.output.objects.censored.properties\n system property can be used to prevent properties from being shown in\n JSON output objects. 
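A hedged sketch of how this might be configured (the file name and the censored property names are only examples, and the comma-separated format is an assumption):

```properties
# Example only: hide these properties in JSON output objects.
mpf.output.objects.censored.properties=S3_ACCESS_KEY,S3_SECRET_KEY
```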
The value for these properties will appear as \n\n.\n\n\nThe Azure Speech Detection component now retries without diarization when diarization is not supported by the selected\n locale.\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1230\n] The Azure Speech Detection component now uses a UUID for the\n recording id associated with a piece of media in order to prevent deleting a piece of media while it's in use.\n\n\n\n\n5.1.1: December 2020\n\n\n\nUpdates\n\n\n\n\n\nOnly generate \nFRAME_COUNT\n warning when the frame difference is > 1. This can be configured using\n the \nwarn.frame.count.diff\n system property.\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1209\n] The Keyword Tagging component now generates video tracks in\n the JSON output object.\n\n\n[\n#1212\n] The Keyword Tagging component now preserves the detection\n bounding box and confidence.\n\n\n\n\n5.1.0: November 2020\n\n\n\nMedia Inspection Improvements\n\n\n\n\n\nThe Workflow Manager will now handle video files that don't have a video stream as an \nAUDIO\n type, and handle video\n files that don't have a video or audio stream as an \nUNKNOWN\n type. The JSON output object contains a\n new \nmedia.mediaType\n field that will be set to \nVIDEO\n, \nAUDIO\n, \nIMAGE\n, or \nUNKNOWN\n.\n\n\nThe Workflow Manager now configures Tika\n with \ncustom MIME type support\n\n . Currently, this enables the detection of \nvideo/vnd.dlna.mpeg-tts\n and \nimage/jxr\n MIME types.\n\n\nIf the Workflow Manager cannot use Tika to determine the media MIME type then it will fall back to using the\n Linux \nfile\n command with\n a \ncustom magicfile\n\n .\n\n\nOpenMPF now supports Apple-optimized PNGs and HEIC images. Refer to the Bug Fixes section below.\n\n\n\n\nEAST Text Region Detection Component Improvements\n\n\n\n\n\nThe \nTEMPORARY_PADDING\n property has been separated into \nTEMPORARY_PADDING_X\n and \nTEMPORARY_PADDING_Y\n so that X and\n Y padding can be configured independently.\n\n\nThe \nMERGE_MIN_OVERLAP\n property has been renamed to \nMERGE_OVERLAP_THRESHOLD\n so that setting it to a value of 0 will\n merge all regions that touch, regardless of how small the amount of overlap.\n\n\nRefer to\n the \nREADME\n\n for details.\n\n\n\n\nMPFVideoCapture and MPFImageReader Tool Improvements\n\n\n\n\n\nThese tools now support a \nROTATION_FILL_COLOR\n property for setting the fill color for pixels near the corners and\n edges of frames when performing non-orthogonal rotations. Previously, the color was hardcoded to \nBLACK\n. That is\n still the default setting for most components. Now the color can be set to \nWHITE\n, which is the default setting for\n the Tesseract component.\n\n\nThese tools now support a \nROTATION_THRESHOLD\n property for adjusting the threshold at which the frame transformer\n performs rotation. Previously, the value was hardcoded to 0.1 degrees. That is still the default value. Rotation is\n not performed on any \nROTATION\n value less than that threshold. The motivation is that some algorithms detect small\n rotations (for example, on structured text) when there is no rotation. In such cases rotating the frame results in\n fewer detections.\n\n\nOpenMPF now uses FFmpeg when counting video frames. Refer to the Bug Fixes section below.\n\n\n\n\nAzure Cognitive Services (ACS) Form Detection Component\n\n\n\n\n\nThis new component utilizes\n the \nAzure Cognitive Services Form Detection REST endpoint\n\n to extract formatted text from documents (PDFs) and images. 
Refer to\n the \nREADME\n for\n details.\n\n\nThis component is capable of performing detections using a specified ACS endpoint URL. For example, different\n endpoints support receipt detection, business card detection, layout analysis, and support for custom models trained\n with or without labeled data.\n\n\nThis component may output the following detection properties depending on the endpoint, model, and media being\n processed: \nTEXT\n, \nTABLE_CSV_OUTPUT\n, \nKEY_VALUE_PAIRS_JSON\n, and \nDOCUMENT_JSON_FIELDS\n.\n\n\n\n\nKeyword Tagging Component\n\n\n\n\n\nThis new component performs the same keyword tagging behavior that was previously part of the Tesseract component, but\n does so on feed-forward tracks that generate detections with \nTEXT\n and \nTRANSCRIPT\n properties. Refer to\n the \nREADME\n for details.\n\n\nIn addition to the Tesseract component, keyword tagging behavior has been removed from the Tika Text component and ACS\n OCR component.\n\n\nExample pipelines have been added to the following components which make use of a final Keyword Tagging component\n stage:\n\n\nTesseract\n\n\nTika Text\n\n\nACS OCR\n\n\nSphinx\n\n\nACS Speech\n\n\n\n\n\n\n\n\nOptionally Skip Media Inspection\n\n\n\n\n\nThe Workflow Manager will skip media inspection if all of the required media metadata is provided in the job request.\n The \nMEDIA_HASH\n and \nMIME_TYPE\n fields are always required. Depending on the media data type, other fields may be\n required or optional:\n\n\nImages\n\n\nRequired: \nFRAME_WIDTH\n, \nFRAME_HEIGHT\n\n\nOptional: \nHORIZONTAL_FLIP\n, \nROTATION\n\n\n\n\n\n\nVideos\n\n\nRequired: \nFRAME_WIDTH\n, \nFRAME_HEIGHT\n, \nFRAME_COUNT\n, \nFPS\n, \nDURATION\n\n\nOptional: \nHORIZONTAL_FLIP\n, \nROTATION\n\n\n\n\n\n\nAudio files\n\n\nRequired: \nDURATION\n\n\n\n\n\n\n\n\n\n\n\n\nUpdates\n\n\n\n\n\nUpdate OpenMPF Python SDK exception handling for Python 3. Now instead of raising an \nEnvironmentError\n, which has\n been deprecated in Python 3, the SDK will raise an \nmpf.DetectionError\n or allow the underlying exception to be\n thrown.\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#1028\n] OpenMPF can now properly handle Apple-optimized PNGs, which\n have a non-standard data chunk named CgBI before the IHDR chunk. The Workflow Manager\n uses \npngdefry\n to convert the image into a standard PNG for processing. Before\n this fix, Tika would throw an error when trying to determine the MIME type of the Apple-optimized PNG.\n\n\n[\n#1130\n] OpenMPF can now properly handle HEIC images. The Workflow\n Manager uses \nlibheif\n to convert the image into a standard PNG for processing.\n Before this fix, the HEIC image was sometimes falsely identified as a video and the Workflow Manager would fail to\n count the number of frames.\n\n\n[\n#1171\n] The MIME type in the JSON output object is no longer null\n when there is a frame counting exception.\n\n\n[\n#1192\n] When processing videos, the frame count is now obtained from\n both OpenCV and FFmpeg. The lower of the two is used. If they don't match, a \nFRAME_COUNT\n warning is generated.\n Before this fix, on some videos OpenCV would return frame counts that were magnitudes higher than the frames that\n could actually be read. 
This resulted in failing to process many video segments with a `BAD_FRAME_SIZE` error.

# OpenMPF 5.0.x

## 5.0.9: October 2020

### Bug Fixes

- [#1200] The MPFVideoCapture and MPFImageReader tools now properly handle cropping to frame regions when the region coordinates fall outside of the frame boundary. There was a bug that would result in an OpenCV error. Note that the bug only occurred when cropping was not performed with rotation or flipping.

## 5.0.8: October 2020

### Updates

- The Tesseract component now supports a `TESSDATA_MODELS_SUBDIRECTORY` property. The component will look for tessdata files in /. This allows users to easily switch between `tessdata`, `tessdata_best`, and `tessdata_fast` subdirectories.

### Bug Fixes

- [#1199] Added missing synchronized to InProgressBatchJobsService, which was resulting in some jobs staying `IN_PROGRESS` indefinitely.

## 5.0.7: September 2020

### TensorRT Inference Server (TRTIS) Object Detection Component

- This new component detects objects in images and videos by making use of an NVIDIA TensorRT Inference Server (TRTIS), and calculates features that can later be used by other systems to recognize the same object in other media. We provide support for running the server as a separate service during a Docker deployment, but an external server instance can be used instead.
- By default, the ip_irv2_coco model is supported and will optionally classify detected objects using COCO labels. Additionally, features can be generated for whole frames, automatically-detected object regions, and user-specified regions. Refer to the README.

## 5.0.6: August 2020

### Enable OcvDnnDetection to Annotate Feed-forward Detections

- The OcvDnnDetection component can now be configured to operate only on certain feed-forward detections and annotate them with supplementary information. For example, the following pipeline can be configured to generate detections that have both `CLASSIFICATION` and `COLOR` detection properties:

```
DarknetDetection (person + vehicle) --> OcvDnnDetection (vehicle color)
```

For example:

```json
"detectionProperties": {
    "CLASSIFICATION": "car",
    "CLASSIFICATION CONFIDENCE LIST": "0.397336",
    "CLASSIFICATION LIST": "car",
    "COLOR": "blue",
    "COLOR CONFIDENCE LIST": "0.93507; 0.055744",
    "COLOR LIST": "blue; gray"
}
```

- The OcvDnnDetection component now supports the following properties:
    - `CLASSIFICATION_TYPE`: Set this value to change the `CLASSIFICATION*` part of each output property name to something else. For example, setting it to `COLOR` will generate `COLOR`, `COLOR LIST`, and `COLOR CONFIDENCE LIST`. When handling feed-forward detections, the pre-existing `CLASSIFICATION*` properties will be carried over and the `COLOR*` properties will be added to the detection.
    - `FEED_FORWARD_WHITELIST_FILE`: When `FEED_FORWARD_TYPE` is provided and not set to `NONE`, only feed-forward detections with class names contained in the specified file will be processed. For example, a file with only "car" in it will result in performing the exclude behavior (below) for all feed-forward detections that do not have a `CLASSIFICATION` of "car".
    - `FEED_FORWARD_EXCLUDE_BEHAVIOR`: Specifies what to do when excluding detections not specified in the `FEED_FORWARD_WHITELIST_FILE`. Acceptable values are:
        - `PASS_THROUGH`: Return the excluded detections, without modification, along with any annotated detections.
        - `DROP`: Don't return the excluded detections. Only return annotated detections.

### Updates

- Make interop package work with Java 8 to better support external job producers and consumers.

## 5.0.5: August 2020

### Updates

- Configure Camel not to auto-acknowledge messages. Users can now see the number of pending messages in the ActiveMQ management console for queues consumed by the Workflow Manager.
- Improve Tesseract OSD fallback behavior. This prevents selecting the OSD rotation from the fallback pass without the OSD script from the fallback pass.

## 5.0.4: August 2020

### Updates

- Retry job callbacks when they fail. The Workflow Manager now supports the `http.callback.timeout.ms` and `http.callback.retries` system properties.
- Drop "duplicate paged in from cursor" DLQ messages.

## 5.0.3: July 2020

### Updates

- Update ActiveMQ to 5.16.0.

## 5.0.2: July 2020

### Updates

- Disable video segmentation for ACS Speech Detection to prevent issues when generating speaker ids.

## 5.0.1: July 2020

### Updates

- Updated Tesseract component with `MAX_PIXELS` setting to prevent processing large images.

## 5.0.0: June 2020

### Documentation

- Updated the openmpf-docker repo README and SWARM guides to describe the new build process, which now includes automatically copying the openmpf repo source code into the openmpf-build image instead of using various bind mounts, and building all of the component base builder and executor images.
- Updated the openmpf-docker repo README with the following sections:
    - How to Use Kibana for Log Viewing and Aggregation
    - How to Restrict Media Types that a Component Can Process
    - How to Import Root Certificates for Additional Certificate Authorities
- Updated the CONTRIBUTING guide for Docker deployment with information on the new build process and component base builder and executor images.
- Updated the Install Guide with a pointer to the "Quick Start" section on DockerHub.
- Updated the REST API with the new endpoints for getting, deleting, and creating actions, tasks, and pipelines, as well as a change to the `[GET] /rest/info` endpoint.
- Updated the C++ Batch Component API to describe changes to the `GetDetection()` calls, which now return a collection of detections or tracks instead of an error code, and to describe improvements to exception handling.
- Updated the C++ Batch Component API, Python Batch Component API, and Java Batch Component API with `MIME_TYPE`, `FRAME_WIDTH`, and `FRAME_HEIGHT` media properties.
- Updated the Python Batch Component API with information on Python3 and the simplification of using a `dict` for some of the data members.

### JSON Output Object

- Renamed `stages` to `tasks` for clarity and consistency with the rest of the code.
- The `media` element no longer contains a `message` field.
- Each `detectionProcessingError` element now contains a `code` field.
- Errors and warnings are now grouped by `mediaId` and summarized using a `details` element that contains a `source`, `code`, and `message` field.
  Refer to this comment for an example of the JSON structure. Note that errors and warnings generated by the Workflow Manager do not have a `mediaId`.
- When an error or warning occurs in multiple frames of a video for a single piece of media, it will be represented in one `details` element and the `message` will list the frame ranges.

### Interoperability Package

- Renamed `JsonStage.java` to `JsonTask.java`.
- Removed `JsonJobRequest.java`.
- Modified `JsonDetectionProcessingError.java` by removing the `startOffset` and `stopOffset` fields and adding the following new fields: `startOffsetFrame`, `stopOffsetFrame`, `startOffsetTime`, `stopOffsetTime`, and `code`.
- Updated `JsonMediaOutputObject.java` by removing the `message` field.
- Added `JsonMediaIssue.java` and `JsonIssueDetails.java`.

### Persistent Database

- The `input_object` column in the `job_request` table has been renamed to `job` and the content now contains a serialized form of `BatchJob.java` instead of `JsonJobRequest.java`.

### C++ Batch Component API

- The `GetDetections()` calls now return a collection instead of an error code:
    - `std::vector<MPFImageLocation> GetDetections(const MPFImageJob &job)`
    - `std::vector<MPFVideoTrack> GetDetections(const MPFVideoJob &job)`
    - `std::vector<MPFAudioTrack> GetDetections(const MPFAudioJob &job)`
    - `std::vector<MPFGenericTrack> GetDetections(const MPFGenericJob &job)`
- `MPFDetectionException` can now be constructed with a `what` parameter representing a descriptive error message:
    - `MPFDetectionException(MPFDetectionError error_code, const std::string &what = "")`
    - `MPFDetectionException(const std::string &what)`

### Python Batch Component API

- Simplified the `detection_properties` and `frame_locations` data members to use a Python `dict` instead of a custom data type.

### Full Docker Conversion

- Each component is now encapsulated in its own Docker image which self-registers with the Workflow Manager at runtime. This deconflicts component dependencies, and allows for greater flexibility when deciding which components to deploy at runtime.
- The Node Manager image has been removed. For Docker deployments, component services should be managed using Docker tools external to OpenMPF.
- In Docker deployments, streaming job REST endpoints are disabled, the Nodes web page is no longer available, component tar.gz packages cannot be registered through the Component Registration web page, and the `mpf` command line script can now only be run on the Workflow Manager container to modify user settings. The preexisting features are now reserved for non-Docker deployments and development environments.
- The OpenMPF Docker stack can optionally be deployed with Kibana (which depends on Elasticsearch and Filebeat) for viewing log files. Refer to the openmpf-docker README.

### Docker Component Base Images

- A base builder image and executor image are provided for C++, Python, and Java component development (refer to the README for each language). Component developers can also refer to the Dockerfile in the source code for each component as a reference for how to make use of the base images.

### Restrict Media Types that a Component Can Process

- Each component service now supports an optional `RESTRICT_MEDIA_TYPES` Docker environment variable that specifies the types of media that service will process.
  For example, `RESTRICT_MEDIA_TYPES: VIDEO,IMAGE` will process both videos and images, while `RESTRICT_MEDIA_TYPES: IMAGE` will only process images. If not specified, the service will process all of the media types it natively supports. For example, this feature can be used to ensure that some services are always available to process images while others are processing long videos.

### Import Additional Root Certificates into the Workflow Manager

- Additional root certificates can be imported into the Workflow Manager at runtime by adding an entry for `MPF_CA_CERTS` to the workflow-manager service's environment variables in `docker-compose.core.yml`. `MPF_CA_CERTS` must contain a colon-delimited list of absolute file paths. Of note, a root certificate may be used to trust the identity of a remote object storage server.

### DockerHub

- Pushed prebuilt OpenMPF Docker images to DockerHub. Refer to the "Quick Start" section of the OpenMPF Workflow Manager image documentation.

### Version Updates

- Updated from Oracle Java 8 to OpenJDK 11, which required updating to Tomcat 8.5.41. We now use Cargo to run integration tests.
- Updated OpenCV from 3.0.0 to 3.4.7 to update Deep Neural Networks (DNN) support.
- Updated Python from 2.7 to 3.8.2.

### FFmpeg

- We are no longer building separate audio and video encoders and decoders for FFmpeg. Instead, we are using the built-in decoders that come with FFmpeg by default. This simplifies the build process and redistribution via Docker images.

### Artifact Extraction

- The `ARTIFACT_EXTRACTION_POLICY` property can now be assigned a value of `NONE`, `VISUAL_TYPES_ONLY`, `ALL_TYPES`, or `ALL_DETECTIONS`.
    - With the `VISUAL_TYPES_ONLY` or `ALL_TYPES` policy, artifacts will be extracted according to the `ARTIFACT_EXTRACTION_POLICY*` properties. With the `NONE` and `ALL_DETECTIONS` policies, those settings are ignored.
    - Note that previously `NONE`, `VISUAL_EXEMPLARS_ONLY`, `EXEMPLARS_ONLY`, `ALL_VISUAL_DETECTIONS`, and `ALL_DETECTIONS` were supported.
- The following `ARTIFACT_EXTRACTION_POLICY*` properties are now supported (see the sketch at the end of this section):
    - `ARTIFACT_EXTRACTION_POLICY_EXEMPLAR_FRAME_PLUS`: Extract the exemplar frame from the track, plus this many frames before and after the exemplar.
    - `ARTIFACT_EXTRACTION_POLICY_FIRST_FRAME`: If true, extract the first frame from the track.
    - `ARTIFACT_EXTRACTION_POLICY_MIDDLE_FRAME`: If true, extract the frame with a detection that is closest to the middle frame of the track.
    - `ARTIFACT_EXTRACTION_POLICY_LAST_FRAME`: If true, extract the last frame from the track.
    - `ARTIFACT_EXTRACTION_POLICY_TOP_CONFIDENCE_COUNT`: Sort the detections in a track by confidence and then extract this many detections, starting with those which have the highest confidence.
    - `ARTIFACT_EXTRACTION_POLICY_CROPPING`: If true, an artifact will be extracted for each detection in each frame that is selected according to the other `ARTIFACT_EXTRACTION_POLICY*` properties. The extracted artifact will be cropped to the width and height of the detection bounding box, and the artifact will be rotated according to the detection `ROTATION` property. If false, the artifact extraction behavior is unchanged from the previous release: the entire frame will be extracted without any rotation.
- For clarity, `OUTPUT_EXEMPLARS_ONLY` has been renamed to `OUTPUT_ARTIFACTS_AND_EXEMPLARS_ONLY`. Extracted artifacts will always be reported in the JSON output object.
- The `mpf.output.objects.exemplars.only` system property has been renamed to `mpf.output.objects.artifacts.and.exemplars.only`. It works the same as before with the exception that if an artifact is extracted for a detection then that detection will always be represented in the JSON output object, whether it's an exemplar or not.
- The `mpf.output.objects.last.stage.only` system property has been renamed to `mpf.output.objects.last.task.only`. It works the same as before with the exception that when set to true, artifact extraction is skipped for all tasks but the last task.
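
As a rough illustration of how the frame-selection properties above interact, the following Python sketch selects frame indices for a single track under the `VISUAL_TYPES_ONLY`/`ALL_TYPES` policies. It is an assumption-laden approximation, not the Workflow Manager's actual implementation, and the track representation is hypothetical:

```python
def select_artifact_frames(track_frames, exemplar_frame, confidences,
                           exemplar_plus=0, first=False, middle=False,
                           last=False, top_confidence_count=0):
    """track_frames: sorted frame indices with detections; confidences: frame -> confidence."""
    selected = set()
    # ARTIFACT_EXTRACTION_POLICY_EXEMPLAR_FRAME_PLUS: exemplar plus N frames on each side
    for offset in range(-exemplar_plus, exemplar_plus + 1):
        selected.add(exemplar_frame + offset)
    if first:                                    # ..._FIRST_FRAME
        selected.add(track_frames[0])
    if middle:                                   # ..._MIDDLE_FRAME (detection closest to track middle)
        mid = (track_frames[0] + track_frames[-1]) // 2
        selected.add(min(track_frames, key=lambda f: abs(f - mid)))
    if last:                                     # ..._LAST_FRAME
        selected.add(track_frames[-1])
    if top_confidence_count > 0:                 # ..._TOP_CONFIDENCE_COUNT
        by_conf = sorted(track_frames, key=lambda f: confidences[f], reverse=True)
        selected.update(by_conf[:top_confidence_count])
    return sorted(selected)

# Example: a track covering frames 10-20 with its exemplar at frame 15
frames = list(range(10, 21))
conf = {f: 0.5 + 0.01 * f for f in frames}
print(select_artifact_frames(frames, 15, conf, exemplar_plus=1, first=True,
                             top_confidence_count=2))
```
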
### REST Endpoints

- Modified `[GET] /rest/info`. It now returns output like `{"version": "4.1.0", "dockerEnabled": true}`.
- Added the following REST endpoints for getting, removing, and creating actions, tasks, and pipelines. Refer to the REST API for more information:
    - `[GET] /rest/actions`, `[GET] /rest/tasks`, `[GET] /rest/pipelines`
    - `[DELETE] /rest/actions`, `[DELETE] /rest/tasks`, `[DELETE] /rest/pipelines`
    - `[POST] /rest/actions`, `[POST] /rest/tasks`, `[POST] /rest/pipelines`
- All of the endpoints above are new with the exception of `[GET] /rest/pipelines`. That endpoint has changed since the last version of OpenMPF. Some fields in the response JSON have been removed and renamed. Also, it now returns a collection of tasks for each pipeline. Refer to the REST API.
- `[GET] /rest/algorithms` can be used to get information about algorithms. Note that algorithms are tied to registered components, so to remove an algorithm you must unregister the associated component. To add an algorithm, start the associated component's Docker container so it self-registers with the Workflow Manager.

### Incomplete Actions, Tasks, and Pipelines

- The previous version of OpenMPF would generate an error when attempting to register a component that included actions, tasks, or pipelines that depend on algorithms, actions, or tasks that are not yet registered with the Workflow Manager. This required components to be registered in a specific order. Also, when unregistering a component, it required the components which depend on it to be unregistered. These dependency checks are no longer enforced.
- In general, the Workflow Manager now appropriately handles incomplete actions, tasks, and pipelines by checking if all of the elements are defined before executing a job, and then preserving that information in memory until the job is complete. This allows components to be registered and removed in an arbitrary order without affecting the state of other components, actions, tasks, or pipelines. This also allows actions and tasks to be removed using the new REST endpoints and then re-added at a later time while still preserving the elements that depend on them.
- Note that unregistering a component while a job is running will cause it to stall. Please ensure that no jobs are using a component before unregistering it.

### Python Arbitrary Rotation

- The Python MPFVideoCapture and MPFImageReader tools now support `ROTATION` values other than 0, 90, 180, and 270 degrees. Users can now specify a clockwise `ROTATION` job property in the range [0, 360). Values outside that range will be normalized to that range. Floating point values are accepted. This is similar to the existing support for C++ arbitrary rotation.
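
Normalizing to the [0, 360) range presumably amounts to a modulo operation; a minimal Python sketch of that assumption:

```python
def normalize_rotation(rotation_degrees):
    # Map any clockwise rotation (including negative or >= 360 values)
    # into the [0, 360) range; floating point values are accepted.
    return rotation_degrees % 360.0

print(normalize_rotation(450.5))   # 90.5
print(normalize_rotation(-90.0))   # 270.0
```
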
### OpenCV Deep Neural Networks (DNN) Detection Component

- This new component replaces the old CaffeDetection component. It supports the same GoogLeNet and Yahoo Not Suitable For Work (NSFW) models as the old component, but removes support for the Rezafuad vehicle color detection model in favor of a custom TensorFlow vehicle color detection model. In our tests, the new model has proven to be more generalizable and provide more accurate results on never-before-seen test data. Refer to the README.

### Azure Cognitive Services (ACS) Speech Detection Component

- This new component utilizes the Azure Cognitive Services Batch Transcription REST endpoint to transcribe speech from audio and video files. Refer to the README.

### Tesseract OCR Text Detection Component

- Text tagging has been simplified to only support regular expression searches. Whole keyword searches are a subset of regular expression searches, and are therefore still supported. Also, the `text-tags.json` file format has been updated to allow for specifying case-sensitive regular expression searches.
- Additionally, the `TRIGGER_WORDS` and `TRIGGER_WORDS_OFFSET` detection properties are now supported, which list the OCR'd words that resulted in adding a `TAG` to the detection, and the character offset of those words within the OCR'd `TEXT`, respectively.
- Key changes to the tagging output and `text-tags.json` format are outlined below. Refer to the README for more information:
    - Regex patterns should now be entered in the format `{"pattern": "regex_pattern"}`. Users can add and toggle the `"caseSensitive"` regex flag for each pattern.
        - For example, `{"pattern": "(\\b)bus(\\b)", "caseSensitive": true}` enables case-sensitive regex pattern matching.
        - By default, each regex pattern, including those in the legacy format, will be case-insensitive.
- As part of the text tagging update, the `TAGS` outputs are now separated by semicolons `;` rather than commas `,` to be consistent with the delimiters for the `TRIGGER_WORDS` and `TRIGGER_WORDS_OFFSET` output patterns.
    - Because semicolons can be part of the trigger word itself, those semicolons will be encapsulated in brackets. For example, `detected trigger with a ;` in the OCR'd `TEXT` is reported as `TRIGGER_WORDS=detected trigger with a [;]; some other trigger`.
- Commas are now used to group each set of `TRIGGER_WORDS_OFFSET` with its respective `TRIGGER_WORDS` output. Both `TAGS` and `TRIGGER_WORDS` are separated by semicolons only. (A parsing sketch follows at the end of this section.)
    - For example, `TRIGGER_WORDS=trigger1; trigger2` with `TRIGGER_WORDS_OFFSET=0-5, 6-10; 12-15` means that `trigger1` occurs twice in the text at the index ranges 0-5 and 6-10, and `trigger2` occurs at index range 12-15.
- Regex tagging now follows the C++ ECMAScript format (see the C++ regex documentation for examples) after resolving JSON string conversion for regex tags. As a result, the regex patterns `\b` and `\p` in the text tagging file must now be written as `\\b` and `\\p`, respectively, to match the format of other regex character patterns
\n\\\\d\n, \n\\\\w\n, \n\\\\s\n, etc.).\n\n\n\n\n\n\nThe \nMAX_PARALLEL_SCRIPT_THREADS\n and \nMAX_PARALLEL_PAGE_THREADS\n properties are now supported. When processing\n images, the first property is used to determine how many threads to run in parallel. Each thread performs OCR using a\n different language or script model. When processing PDFs, the second property is used to determine how many threads to\n run in parallel. Each thread performs OCR on a different page of the PDF.\n\n\nThe \nENABLE_OSD_FALLBACK\n property is now supported. If enabled, an additional round of OSD is performed when the\n first round fails to generate script predictions that are above the OSD score and confidence thresholds. In the second\n pass, the component will run OSD on multiple copies of the input text image to get an improved prediction score\n and \nOSD_FALLBACK_OCCURRED\n detection property will be set to true.\n\n\nIf any OSD-detected models are missing, the new \nMISSING_LANGUAGE_MODELS\n detection property will list the missing\n models.\n\n\n\n\nTika Text Detection Component\n\n\n\n\n\nThe Tika text detection component now supports text tagging in the same way as the Tesseract component. Refer to\n the \nREADME\n.\n\n\n\n\nOther Improvements\n\n\n\n\n\nSimplified component \ndescriptor.json\n files by moving the specification of common properties, such\n as \nCONFIDENCE_THRESHOLD\n, \nFRAME_INTERVAL\n, \nMIN_SEGMENT_LENGTH\n, etc., to a single \nworkflow-properties.json\n file.\n Now when the Workflow Manager is updated to support new features, the component \ndescriptor.json\n file will not need\n to be updated.\n\n\nUpdated the Sphinx component to return \nTRANSCRIPT\n instead of \nTRANSCRIPTION\n, which is grammatically correct.\n\n\nWhitespace is now trimmed from property names when jobs are submitted via the REST API.\n\n\nThe Darknet Docker image now includes the YOLOv3 model weights.\n\n\nThe C++ and Python ModelsIniParser now allows users to specify optional fields.\n\n\nWhen a job completion callback fails, but otherwise the job is successful, the final state of the job will\n be \nCOMPLETE_WITH_WARNINGS\n.\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#772\n] Can now create a custom pipeline with long action names using\n the Pipelines 2 UI.\n\n\n[\n#812\n] Now properly setting the start and stop index for elements in\n the \ndetectionProcessingErrors\n collection in the JSON output object. Errors reported for each job segment will now\n appear in the collection.\n\n\n[\n#941\n] Tesseract component no longer segfaults when handling corrupt\n media.\n\n\n[\n#1005\n] Fixed a bug that caused a NullPointerException when\n attempting to get output object JSON via REST before a job completes.\n\n\n[\n#1035\n] The search bar in the Job Status UI can once again for used\n to search for job id.\n\n\n[\n#1104\n] Fixed C++/Python component executor memory leaks.\n\n\n[\n#1108\n] Fixed a bug when handling frames and detections that are\n horizontally flipped. 
### Tika Text Detection Component

- The Tika text detection component now supports text tagging in the same way as the Tesseract component. Refer to the README.

### Other Improvements

- Simplified component `descriptor.json` files by moving the specification of common properties, such as `CONFIDENCE_THRESHOLD`, `FRAME_INTERVAL`, `MIN_SEGMENT_LENGTH`, etc., to a single `workflow-properties.json` file. Now when the Workflow Manager is updated to support new features, the component `descriptor.json` file will not need to be updated.
- Updated the Sphinx component to return `TRANSCRIPT` instead of `TRANSCRIPTION`, which is grammatically correct.
- Whitespace is now trimmed from property names when jobs are submitted via the REST API.
- The Darknet Docker image now includes the YOLOv3 model weights.
- The C++ and Python ModelsIniParser now allows users to specify optional fields.
- When a job completion callback fails, but the job is otherwise successful, the final state of the job will be `COMPLETE_WITH_WARNINGS`.

### Bug Fixes

- [#772] Users can now create a custom pipeline with long action names using the Pipelines 2 UI.
- [#812] Now properly setting the start and stop index for elements in the `detectionProcessingErrors` collection in the JSON output object. Errors reported for each job segment will now appear in the collection.
- [#941] The Tesseract component no longer segfaults when handling corrupt media.
- [#1005] Fixed a bug that caused a NullPointerException when attempting to get output object JSON via REST before a job completes.
- [#1035] The search bar in the Job Status UI can once again be used to search for job ids.
- [#1104] Fixed C++/Python component executor memory leaks.
- [#1108] Fixed a bug when handling frames and detections that are horizontally flipped. This affected both markup and feed-forward behaviors.
- [#1119] Fixed Tesseract component memory leaks and uninitialized read issues.

### Known Issues

- [#1028] Media inspection fails to handle Apple-optimized PNGs with the CgBI data chunk before the IHDR chunk.
- [#1109] We made the search bar in the Job Status UI more efficient by shifting it to a database query, but in doing so introduced a bug where the search operates on UTC time instead of local system time.
- [#1010] `mpf.output.objects.enabled` does not behave as expected for batch jobs. A user would expect it to control whether the JSON output object is generated, but it's generated regardless of that setting.
- [#1032] Jobs fail on corrupt QuickTime videos. For these videos, the OpenCV-reported frame count is more than twice the actual frame count.
- [#1106] When a job ends in ERROR, the Job Status UI does not show an End Date.

# OpenMPF 4.1.x

## 4.1.14: June 2020

### Bug Fixes

- [#1120] The node-manager Docker image now correctly installs CUDA libraries so that GPU-enabled components on that image can run on the GPU.
- [#1064] Fixed memory leaks in the Darknet component for various network types, and when using GPU resources. This bug covers everything not addressed by #1062.

## 4.1.13: June 2020

### Updates

- Updated the OpenCV build and media inspection process to properly handle webp images.

## 4.1.12: May 2020

### Updates

- Updated the JDK from `jdk-8u181-linux-x64.rpm` to `jdk-8u251-linux-x64.rpm`.

## 4.1.11: May 2020

### Tesseract OCR Text Detection Component

- Added an `INVALID_MIN_IMAGE_SIZE` job property to filter out images with extremely low width or height.
- Updated image rescaling behavior to account for image dimension limits.
- Fixed handling of `nullptr` returns from Tesseract API OCR calls.

## 4.1.8: May 2020

### Azure Cognitive Services (ACS) OCR Component

- This new component utilizes the ACS OCR REST endpoint to extract text from images and videos. Refer to the README.

## 4.1.6: April 2020

### Updates

- Now silently discarding ActiveMQ DLQ "Suppressing duplicate delivery on connection" messages in addition to "duplicate from store" messages.

## 4.1.5: March 2020

### Bug Fixes

- [#1062] Fixed a memory leak in the Darknet component that occurred when running jobs on CPU resources with the Tiny YOLO model.

### Known Issues

- [#1064] The Darknet component has memory leaks for various network types, and potentially when using GPU resources. This bug covers everything not addressed by #1062.

## 4.1.4: March 2020

### Updates

- Updated from Hibernate 5.0.8 to 5.4.12 to support schema-based multitenancy. This allows multiple instances of OpenMPF to use the same PostgreSQL database as long as each instance connects to the database as a separate user, and the database is configured appropriately. This also required updating Tomcat from 7.0.72 to 7.0.76.

### JSON Output Object

- Updated the Workflow Manager to include an `outputobjecturi` in GET callbacks, and an `outputObjectUri` in POST callbacks, when jobs complete.
  This URI specifies a file path, or a path on the object storage server, depending on where the JSON output object is located.

### Interoperability Package

- Updated `JsonCallbackBody.java` to contain an `outputObjectUri` field.

## 4.1.3: February 2020

### Features

- Added support for the `DETECTION_PADDING_X` and `DETECTION_PADDING_Y` optional job properties. The value can be a percentage or a whole-number pixel value. When positive, each detection region in each track will be expanded. When negative, the region will shrink. If the detection region is shrunk to nothing, the shrunk dimension(s) will be set to a value of 1 pixel and the `SHRUNK_TO_NOTHING` detection property will be set to true. (A sketch follows after this list.)
- Added support for the `DISTANCE_CONFIDENCE_WEIGHT_FACTOR` and `SIZE_CONFIDENCE_WEIGHT_FACTOR` SuBSENSE algorithm properties. Increasing the value of the first property will generate detection confidence values that favor being closer to the center frame of a track. Increasing the value of the second property will generate detection confidence values that favor large detection regions.
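
The padding rules for a single dimension can be illustrated with a short Python sketch. This is only an interpretation of the description above; the exact rounding, clamping, and centering details are assumptions, not the Workflow Manager's actual code:

```python
def pad_dimension(position, length, padding, frame_length):
    """Apply DETECTION_PADDING_X/Y-style padding to one axis of a bounding box.

    padding may be a whole-number pixel value (e.g. 10 or -10) or a percentage
    of the region's length (e.g. "25%"). Returns (position, length, shrunk_to_nothing).
    """
    if isinstance(padding, str) and padding.endswith("%"):
        pad = int(length * float(padding[:-1]) / 100.0)
    else:
        pad = int(padding)
    new_pos = position - pad
    new_len = length + 2 * pad
    shrunk_to_nothing = new_len < 1
    if shrunk_to_nothing:
        new_len = 1                      # shrunk dimension is clamped to 1 pixel
    # Keep the padded region inside the frame.
    new_pos = max(0, min(new_pos, frame_length - new_len))
    return new_pos, new_len, shrunk_to_nothing

print(pad_dimension(100, 50, "25%", frame_length=640))   # expands by 12 px per side
print(pad_dimension(100, 50, -30, frame_length=640))     # shrinks to nothing -> length 1
```
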
## 4.1.1: January 2020

### Bug Fixes

- [#1016] Fixed a bug that caused a deadlock situation when the media inspection process failed quickly when processing many jobs using a pipeline with more than one stage.

## 4.1.0: July 2019

### Documentation

- Updated the C++ Batch Component API to describe the `ROTATION` detection property. See the C++ Arbitrary Rotation section below.
- Updated the REST API with new component registration REST endpoints. See the Component Registration REST Endpoints section below.
- Added a README for the EAST text region detection component. See the EAST Text Region Detection Component section below.
- Updated the Tesseract OCR text detection component README. See the Tesseract OCR Text Detection Component section below.
- Updated the openmpf-docker repo README and SWARM guide to describe the new streamlined approach to using `docker-compose config`. See the Docker Deployment section below.
- Fixed the description of `MIN_SEGMENT_LENGTH` and associated examples in the User Guide for issue #891.
- Updated the Java Batch Component API with information on how to use Log4j2. Related to resolving issue #855.
- Updated the Install Guide to point to the Docker README.
- Transformed the Build Guide into a Development Environment Guide.

### C++ Arbitrary Rotation

- The C++ MPFVideoCapture and MPFImageReader tools now support `ROTATION` values other than 0, 90, 180, and 270 degrees. Users can now specify a clockwise `ROTATION` job property in the range [0, 360). Values outside that range will be normalized to that range. Floating point values are accepted.
- When using those tools to read frame data, they will automatically correct for rotation so that the returned frame is horizontally oriented toward the normal 3 o'clock position.
- When `FEED_FORWARD_TYPE=REGION`, these tools will look for a `ROTATION` detection property in the feed-forward detections and automatically correct for rotation. For example, a detection property of `ROTATION=90` represents that the region is rotated 90 degrees counterclockwise, and therefore must be rotated 90 degrees clockwise to correct for it.
- When `FEED_FORWARD_TYPE=SUPERSET_REGION`, these tools will properly account for the `ROTATION` detection property associated with each feed-forward detection when calculating the bounding box that encapsulates all of those regions.
- When `FEED_FORWARD_TYPE=FRAME`, these tools will rotate the frame according to the `ROTATION` job property. It's important to note that for rotations other than 0, 90, 180, and 270 degrees the rotated frame dimensions will be larger than the original frame dimensions. This is because the frame needs to be expanded to encapsulate the entirety of the original rotated frame region. Black pixels are used to fill the empty space near the edges of the original frame.
- The Markup component now places a colored dot at the upper-left corner of each detection region so that users can determine the rotation of the region relative to the entire frame.

### Component Registration REST Endpoints

- Added a `[POST] /rest/components/registerUnmanaged` endpoint so that components running as separate Docker containers can self-register with the Workflow Manager.
    - Since these components are not managed by the Node Manager, they are considered unmanaged OpenMPF components. These components are not displayed in the Nodes web UI and are tagged as unmanaged in the Component Registration web UI, where they can only be removed.
    - Note that components uploaded to the Component Registration web UI as .tar.gz files are considered managed components.
- Added a `[DELETE] /rest/components/{componentName}` endpoint that can be used to remove managed and unmanaged components.

### Python Component Executor Docker Image

- Component developers can now use a Python component executor Docker image to write a Python component for OpenMPF that can be encapsulated within a Docker container. This isolates the build and execution environment from the rest of OpenMPF. For more information, see the README.
- Components developed with this image are not managed by the Node Manager; rather, they self-register with the Workflow Manager and their lifetime is determined by their own Docker container.

### Docker Deployment

- Streamlined single-host `docker-compose up` deployments and multi-host `docker stack deploy` swarm deployments. Now users are instructed to create a single `docker-compose.yml` file for both types of deployments.
- Removed the `docker-generate-compose-files.sh` script in favor of allowing users the flexibility of combining multiple `docker-compose.*.yml` files together using `docker-compose config`. See the "Generate docker-compose.yml" section of the README.
- Components based on the Python component executor Docker image can now be defined and configured directly in `docker-compose.yml`.
- OpenMPF Docker images now make use of Docker labels.

### EAST Text Region Detection Component

- This new component uses the Efficient and Accurate Scene Text (EAST) detection model to detect text regions in images and videos. It reports their location, angle of rotation, and text type (`STRUCTURED` or `UNSTRUCTURED`), and supports a variety of settings to control the behavior of merging text regions into larger regions.
  It does not perform OCR on the text or track detections across video frames. Thus, each video track is at most one detection long. For more information, see the README.
- Optionally, this component can be built as a Docker image using the Python component executor Docker image, allowing it to exist apart from the Node Manager image.

### Tesseract OCR Text Detection Component

- Updated to support reading tessdata `*.traineddata` files at a specified `MODELS_DIR_PATH`. This allows users to install new `*.traineddata` files post deployment.
- Updated to optionally perform Tesseract Orientation and Script Detection (OSD). When enabled, the component will attempt to use the orientation results of OSD to automatically rotate the image, as well as perform OCR using the scripts detected by OSD.
- Updated to optionally rotate a feed-forward text region 180 degrees to account for upside-down text.
- Now supports the following preprocessing properties for both structured and unstructured text:
    - Text sharpening
    - Text rescaling
    - Otsu image thresholding
    - Adaptive thresholding
    - Histogram equalization
    - Adaptive histogram equalization (also known as Contrast Limited Adaptive Histogram Equalization (CLAHE))
- Will use the `TEXT_TYPE` detection property in feed-forward regions provided by the EAST component to determine which preprocessing steps to perform.
- For more information on these new features, see the README.
- Removed gibberish and string filters since they only worked on English text.

### ActiveMQ Profiles

- The ActiveMQ Docker image now supports custom profiles. The container selects an `activemq.xml` and `env` file to use at runtime based on the value of the `ACTIVE_MQ_PROFILE` environment variable. Among others, these files contain configuration settings for Java heap space and component queue memory limits.
- This release only supports a `default` profile setting, as defined by `activemq-default.xml` and `env.default`; however, developers are free to add other `activemq-<profile>.xml` and `env.<profile>` files to the ActiveMQ Docker image to suit their needs.

### Disabled ActiveMQ Prefetch

- Disabled ActiveMQ prefetching on all component queues.
  Previously, a prefetch value of one was resulting in situations where one component service could be dispatched two sub-jobs, thereby starving other available component services which could process one of those sub-jobs in parallel.

### Search Region Percentages

- In addition to using exact pixel values, users can now use percentages for the following properties when specifying search regions for C++ and Python components:
    - `SEARCH_REGION_TOP_LEFT_X_DETECTION`
    - `SEARCH_REGION_TOP_LEFT_Y_DETECTION`
    - `SEARCH_REGION_BOTTOM_RIGHT_X_DETECTION`
    - `SEARCH_REGION_BOTTOM_RIGHT_Y_DETECTION`
- For example, setting `SEARCH_REGION_TOP_LEFT_X_DETECTION=50%` will result in components only processing the right half of an image or video (see the sketch below).
- Optionally, users can specify exact pixel values for some of these properties and percentages for others.
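
A hedged Python sketch of how a percentage or pixel value for these properties might be resolved against the frame size (illustrative only; the function name and truncation behavior are assumptions):

```python
def resolve_search_coordinate(value, frame_dimension):
    """Convert a SEARCH_REGION_* value such as "50%" or "320" to a pixel coordinate."""
    text = str(value).strip()
    if text.endswith("%"):
        return int(frame_dimension * float(text[:-1]) / 100.0)
    return int(text)

frame_width = 1280
# SEARCH_REGION_TOP_LEFT_X_DETECTION=50% -> only the right half of the frame is processed.
print(resolve_search_coordinate("50%", frame_width))   # 640
print(resolve_search_coordinate("320", frame_width))   # 320
```
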
### Other Improvements

- Increased the number of ActiveMQ maxConcurrentConsumers for the `MPF.COMPLETED_DETECTIONS` queue from 30 to 60.
- The Create Job web UI now only displays the content of the `$MPF_HOME/share/remote-media` directory instead of all of `$MPF_HOME/share`, which prevents the Workflow Manager from indexing generated JSON output files, artifacts, and markup. Indexing the latter resulted in Java heap space issues for large scale production systems. This is a mitigation for issue #897.
- The Job Status web UI now makes proper use of pagination in SQL/Hibernate through the Workflow Manager to avoid retrieving the entire jobs table, which was inefficient.
- The Workflow Manager will now silently discard all duplicate messages in the ActiveMQ Dead Letter Queue (DLQ), regardless of destination. Previously, only messages destined for component sub-job request queues were discarded.

### Bug Fixes

- [#891] Fixed a bug where the Workflow Manager media segmenter generated short segments that were minimally `MIN_SEGMENT_LENGTH+1` in size instead of `MIN_SEGMENT_LENGTH`.
- [#745] In environments where thousands of jobs are processed, users have observed that, on occasion, pending sub-job messages in ActiveMQ queues are not processed until a new job is created. This seems to have been resolved by disabling ActiveMQ prefetch behavior on component queues.
- [#855] A logback circular reference suppressed exception no longer throws a StackOverflowError. This was resolved by transitioning the Workflow Manager and Java components from the Logback framework to Log4j2.

### Known Issues

- [#897] OpenMPF will attempt to index files located in `$MPF_HOME/share` as soon as the webapp is started by Tomcat. This is so that those files can be listed in a directory tree in the Create Job web UI. The main problem is that once a file gets indexed it's never removed from the cache, even if the file is manually deleted, resulting in a memory leak.

## Late Additions: November 2019

- User names, roles, and passwords can now be set by using an optional `user.properties` file. This allows administrators to override the default OpenMPF users that come preconfigured, which may be a security risk. Refer to the "Configure Users" section of the openmpf-docker README for more information.

## Late Additions: December 2019

- Transitioned from using a MySQL persistent database to PostgreSQL to support users that use an external PostgreSQL database in the cloud.
- Updated the EAST component to support a `TEMPORARY_PADDING` and a `FINAL_PADDING` property. The first property determines how much padding is added to detections during the non-maximum suppression or merging step. This padding is effectively removed from the final detections. The second property is used to control the final amount of padding on the output regions. Refer to the README.

# OpenMPF 4.0.x

## 4.0.0: February 2019

### Documentation

- Added an Object Storage Guide with information on how to configure OpenMPF to work with a custom NGINX object storage server, and how to run jobs that use an S3 object storage server. Note that the system properties for the custom NGINX object storage server have changed since the last release.

### Upgrade to Tesseract 4.0

- Both the Tesseract OCR Text Detection component and the OpenALPR License Plate Detection component have been updated to use the new version of Tesseract.
- Additionally, Leptonica has been upgraded from 1.72 to 1.75.

### Docker Deployment

- The Docker images now use the yum package manager to install ImageMagick6 from a public RPM repository instead of downloading the RPMs directly from imagemagick.org. This resolves an issue with the OpenMPF Docker build where RPMs on imagemagick.org were no longer available.

### Tesseract OCR Text Detection Component

- Updated to allow the user to set a `TESSERACT_OEM` property in order to select an OCR engine mode (OEM).
- "script/Latin" can now be specified as the `TESSERACT_LANGUAGE`. When selected, Tesseract will select all Latin characters, which can be from different Latin languages.

### Ceph S3 Object Storage

- Added support for downloading files from, and uploading files to, an S3 object storage server. The following job properties can be provided: `S3_ACCESS_KEY`, `S3_SECRET_KEY`, `S3_RESULTS_BUCKET`, `S3_UPLOAD_ONLY`.
- At this time, only support for Ceph object storage has been tested. However, the Workflow Manager uses the AWS SDK for Java to communicate with the object store, so it is possible that other S3-compatible storage solutions may work as well.

### ISO-8601 Timestamps

- All timestamps in the JSON output object, and streaming video callbacks, are now in the ISO-8601 format (e.g. "2018-12-19T12:12:59.995-05:00"). This new format includes the time zone, which makes it possible to compare timestamps generated between systems in different time zones.
- This change does not affect the track and detection start and stop offset times, which are still reported in milliseconds since the start of the video.

### Reduced Redis Usage

- The Workflow Manager has been refactored to reduce usage of the Redis in-memory database. In general, Redis is not necessary for storing job information and only resulted in introducing potential delays in accessing that data over the network stack.
- Now, only track and detection data is stored in Redis for batch jobs. This reduces the amount of memory the Workflow Manager requires of the Java Virtual Machine.
  Compared to the other job information, track and detection data can potentially be relatively much larger. In the future, we plan to store frame data in Redis for streaming jobs as well.

### Caffe Vehicle Color Estimation

- The Caffe component `models.ini` file has been updated with a "vehicle_color" section with links for downloading Reza Fuad Rachmadi's "Vehicle Color Recognition Using Convolutional Neural Network" model files.
- The following pipelines have been added. These require the above model files to be placed in `$MPF_HOME/share/models/CaffeDetection`:
    - CAFFE REZAFUAD VEHICLE COLOR DETECTION PIPELINE
    - CAFFE REZAFUAD VEHICLE COLOR DETECTION (WITH FF REGION FROM TINY YOLO VEHICLE DETECTOR) PIPELINE
    - CAFFE REZAFUAD VEHICLE COLOR DETECTION (WITH FF REGION FROM YOLO VEHICLE DETECTOR) PIPELINE

### Track Merging and Minimum Track Length

- The following system properties now have "video" in their names:
    - `detection.video.track.merging.enabled`
    - `detection.video.track.min.gap`
    - `detection.video.track.min.length`
    - `detection.video.track.overlap.threshold`
- The above properties can be overridden by the following job properties, respectively. These have not been renamed since the last release:
    - `MERGE_TRACKS`
    - `MIN_GAP_BETWEEN_TRACKS`
    - `MIN_TRACK_LENGTH`
    - `MIN_OVERLAP`
- These system and job properties now only apply to video media. This resolves an issue where users had set `detection.track.min.length=5`, which resulted in dropping all image media tracks. By design, each image track can only contain a single detection.

### Bug Fixes

- Fixed a bug where the Docker entrypoint scripts appended properties to the end of `$MPF_HOME/share/config/mpf-custom.properties` every time the Docker deployment was restarted, resulting in entries like `detection.segment.target.length=5000,5000,5000`.
- Upgrading to Tesseract 4 fixes a bug where, when specifying `TESSERACT_LANGUAGE`, if one of the languages is Arabic, then Arabic must be specified last. Arabic can now be specified first, for example: `ara+eng`.
- Fixed a bug where the minimum track length property was being applied to image tracks. Now it's only applied to video tracks.
- Fixed a bug where ImageMagick6 installation failed while building Docker images.

# OpenMPF 3.0.x

## 3.0.0: December 2018

NOTE: The Build Guide and Install Guide are outdated. The old process for manually configuring a Build VM, using it to build an OpenMPF package, and installing that package, is deprecated in favor of Docker containers. Please refer to the openmpf-docker README.

NOTE: Do not attempt to register or unregister a component through the Nodes UI in a Docker deployment. It may appear to succeed, but the changes will not affect the child Node Manager containers, only the Workflow Manager container. Also, do not attempt to use the `mpf` command line tools in a Docker deployment.

### Documentation

- Added a README, SWARM guide, and CONTRIBUTING guide for Docker deployment.
- Updated the User Guide with information on how track properties and track confidence are handled when merging tracks.
- Added README files for new components.
Refer to the component sections below.\n\n\n\n\nDocker Support\n\n\n\n\n\nOpenMPF can now be built and distributed as 5 Docker images: openmpf_workflow_manager, openmpf_node_manager,\n openmpf_active_mq, mysql_database, and redis.\n\n\nThese images can be deployed on a single host using \ndocker-compose up\n.\n\n\nThey can also be deployed across multiple hosts in a Docker swarm cluster using \ndocker stack deploy\n.\n\n\nGPU support is enabled through the NVIDIA Docker runtime.\n\n\nBoth HTTP and HTTPS deployments are supported.\n\n\n\n\n\n\nJSON Output Object\n\n\n\n\n\nAdded a \ntrackProperties\n field at the track level that works in much the same way as the \ndetectionProperties\n field\n at the detection level. Both are maps that contain zero or more key-value pairs. The component APIs have always\n supported the ability to return track-level properties, but they were never represented in the JSON output object,\n until now.\n\n\nSimilarly, added a track \nconfidence\n field. The component APIs always supported setting it, but the value was never\n used in the JSON output object, until now.\n\n\nAdded \njobErrors\n and\njobWarnings\n fields. The \njobErrors\n field will mention that there are items\n in \ndetectionProcessingErrors\n fields.\n\n\nThe \noffset\n, \nstartOffset\n, and \nstopOffset\n fields have been removed in favor of the existing \noffsetFrame\n\n , \nstartOffsetFrame\n, and \nstopOffsetFrame\n fields, respectively. They were redundant and deprecated.\n\n\nAdded a \nmpf.output.objects.exemplars.only\n system property, and \nOUTPUT_EXEMPLARS_ONLY\n job property, that can be set\n to reduce the size of the JSON output object by only recording the track exemplars instead of all of the detections in\n each track.\n\n\nAdded a \nmpf.output.objects.last.stage.only\n system property, and \nOUTPUT_LAST_STAGE_ONLY\n job property, that can be\n set to reduce the size of the JSON output object by only recording the detections for the last non-markup stage of a\n pipeline.\n\n\n\n\nDarknet Component\n\n\n\n\n\nThe Darknet component can now support processing streaming video.\n\n\nIn batch mode, video frames are prefetched, decoded, and stored in a buffer using a separate thread from the one that\n performs the detection. The size of the prefetch buffer can be configured by setting \nFRAME_QUEUE_CAPACITY\n.\n\n\nThe Darknet component can now perform basic tracking and generate video tracks with multiple detections. Both the\n default detection mode and preprocessor detection mode are supported.\n\n\nThe Darknet component has been updated to support the full and tiny YOLOv3 models. The YOLOv2 models are no longer\n supported.\n\n\n\n\nTesseract OCR Text Detection Component\n\n\n\n\n\nThis new component extracts text found in an image and reports it as a single-detection track.\n\n\nPDF documents can also be processed with one track detection per page.\n\n\nUsers may set the language of each track using the \nTESSERACT_LANGUAGE\n property as well as adjust other image\n preprocessing properties for text extraction.\n\n\nRefer to\n the \nREADME\n.\n\n\n\n\nOpenCV Scene Change Detection Component\n\n\n\n\n\nThis new component detects and segments a given video by scenes. 
  Each scene change is detected using histogram comparison, edge comparison, brightness (fade outs), and overall hue/saturation/value differences between adjacent frames.
- Users can toggle each type of scene change detection technique, as well as adjust the threshold properties for each detection method.
- Refer to the README.

### Tika Text Detection Component

- This new component extracts text contained in documents and performs language detection. 71 languages and most document formats (.txt, .pptx, .docx, .doc, .pdf, etc.) are supported.
- Refer to the README.

### Tika Image Detection Component

- This new component extracts images embedded in document formats (.pdf, .ppt, .doc) and stores them on disk in a specified directory.
- Refer to the README.

### Track-Level Properties and Confidence

- Refer to the addition of track-level properties and confidence in the JSON Output Object section.
- Components have been updated to return meaningful track-level properties. Caffe and Darknet include `CLASSIFICATION`, OALPR includes the exemplar `TEXT`, and Sphinx includes the `TRANSCRIPTION`.
- The Workflow Manager will now populate the track-level confidence. It is the same as the exemplar confidence, which is the max of all of the track detections.

### Custom NGINX HTTP Object Storage

- Added `http.object.storage.*` system properties for configuring an optional custom NGINX object storage server on which to store generated detection artifacts, JSON output objects, and markup files.
- When a file cannot be uploaded to the server, the Workflow Manager will fall back to storing it in `$MPF_HOME/share`, which is the default behavior when an object storage server is not specified.
- If and when a failure occurs, the JSON output object will contain a descriptive message in the `jobWarnings` field, and, if appropriate, the `markupResult.message` field. If the job completes without other issues, the final status will be `COMPLETE_WITH_WARNINGS`.
- The NGINX storage server runs custom server-side code which we can make available upon request. In the future, we plan to support more common storage server solutions, such as Amazon S3.

### ActiveMQ

- The `MPF_OUTPUT` queue is no longer supported and has been removed. Job producers can specify a callback URL when creating a job so that they are alerted when the job is complete. Users observed heap space issues with ActiveMQ after running thousands of jobs without consuming messages from the `MPF_OUTPUT` queue.
- The Workflow Manager will now silently discard duplicate sub-job request messages in the ActiveMQ Dead Letter Queue (DLQ). This fixes a bug where the Workflow Manager would prematurely terminate jobs corresponding to the duplicate messages. It's assumed that ActiveMQ will only place a duplicate message in the DLQ if the original message, or another duplicate, can be delivered.

### Node Auto-Configuration

- Added the `node.auto.config.enabled`, `node.auto.unconfig.enabled`, and `node.auto.config.num.services.per.component` system properties for automatically managing the configuration of services when nodes join and leave the OpenMPF cluster.
- Docker will assign a hostname with a randomly-generated id to containers in a swarm deployment.
  The above properties allow the Workflow Manager to automatically discover and configure services on child Node Manager components, which is convenient since the hostname of those containers cannot be known in advance, and new containers with new hostnames are created when the swarm is restarted.

### Job Status Web UI

- Added the `web.broadcast.job.status.enabled` and `web.job.polling.interval` system properties that can be used to configure whether the Workflow Manager automatically broadcasts updates to the Job Status web UI. By default, the broadcasts are enabled.
- In a production environment that processes hundreds of jobs or more at the same time, this behavior can result in overloading the web UI, causing it to slow down and freeze up. To prevent this, set `web.broadcast.job.status.enabled` to `false`. If `web.job.polling.interval` is set to a non-zero value, the web UI will poll for updates at that interval (specified in milliseconds).
- To disable broadcasts and polling, set `web.broadcast.job.status.enabled` to `false` and `web.job.polling.interval` to a zero or negative value. Users will then need to manually refresh the Job Status web page using their web browser.

### Other Improvements

- Now using variable-length text fields in the MySQL database for string data that may exceed 255 characters.
- Updated the MPFImageReader tool to use OpenCV video capture behind the scenes to support reading data from HTTP URLs.
- Python components can now include pre-built wheel files in the plugin package.
- We now use a `Jenkinsfile` Groovy script for our Jenkins build process. This allows us to use revision control for our continuous integration process and share that process with the open source community.
- Added `remote.media.download.retries` and `remote.media.download.sleep` system properties that can be used to configure how the Workflow Manager will attempt to retry downloading remote media if it encounters a problem.
- Artifact extraction now uses MPFVideoCapture, which employs various fallback strategies for extracting frames in cases where a video is not well-formed or corrupted. For components that use MPFVideoCapture, this enables better consistency between the frames they process and the artifacts that are later extracted.

### Bug Fixes

- Jobs now properly end in `ERROR` if an invalid media URL is provided or there is a problem accessing remote media.
- Jobs now end in `COMPLETE_WITH_ERRORS` when a detection splitter error occurs due to missing system properties.
- Components can now include their own version of the Google Protobuf library. It will not conflict with the version used by the rest of OpenMPF.
- The Java component executor now sets the proper job id in the job name instead of using the ActiveMQ message request id.
- The Java component executor now sets the run directory using `setRunDirectory()`.
- Actions can now be properly added using an "extras" component.
  An extras component only includes a `descriptor.json` file and declares Actions, Tasks, and Pipelines using other component algorithms.
- Refer to the items listed in the ActiveMQ section.
- Refer to the addition of track-level properties and confidence in the JSON Output Object section.

### Known Issues

- [#745] In environments where thousands of jobs are processed, users have observed that, on occasion, pending sub-job messages in ActiveMQ queues are not processed until a new job is created. The reason is currently unknown.
- [#544] Image artifacts retain some permissions from source files available on the local host. This can result in some of the image artifacts having executable permissions.
- [#604] The Sphinx component cannot be unregistered because `$MPF_HOME/plugins/SphinxSpeechDetection/lib` is owned by root on a deployment machine.
- [#623] The Nodes UI does not work correctly when `[POST] /rest/nodes/config` is used at the same time. This is because the UI's state is not automatically updated to reflect changes made through the REST endpoint.
- [#783] The Tesseract OCR Text Detection Component has a known issue because it uses Tesseract 3. If a combination of languages is specified using `TESSERACT_LANGUAGE`, and one of the languages is Arabic, then Arabic must be specified last. For example, for English and Arabic, `eng+ara` will work, but `ara+eng` will not.
- [#784] Sometimes services do not start on OpenMPF nodes, and those services cannot be started through the Nodes web UI. This is not a Docker-specific problem, but it has been observed in a Docker swarm deployment when auto-configuration is enabled. The workaround is to restart the Docker swarm deployment, or remove the entire node in the Nodes UI and add it again.

# OpenMPF 2.1.x

## 2.1.0: June 2018

NOTE: If building this release on a machine used to build a previous version of OpenMPF, then please run `sudo pip install --upgrade pip` to update to at least pip 10.0.1. If not, the OpenMPF build script will fail to properly download .whl files for Python modules.

### Documentation

- Added the Python Batch Component API.
- Added the Node Guide.
- Added the GPU Support Guide.
- Updated the Install Guide with an "(Optional) Install the NVIDIA CUDA Toolkit" section.
- Renamed the Admin Manual to the Admin Guide for consistency.

### Python Batch Component API

- Developers can now write batch components in Python using the mpf_component_api module.
- Dependencies can be specified in a setup.py file. OpenMPF will automatically download the .whl files using pip at build time.
- When deployed, a virtualenv is created for the Python component so that it runs in a sandbox isolated from the rest of the system.
- OpenMPF ImageReader and VideoCapture tools are provided in the mpf_component_util module.
- Example Python components are provided for reference.

### Spare Nodes

- Spare nodes can join and leave an OpenMPF cluster while the Workflow Manager is running. You can create a spare node by cloning an existing OpenMPF child node. Refer to the Node Guide.
- Note that changes made using the Component Registration web page only affect core nodes, not spare nodes.
  Core nodes are those configured during the OpenMPF installation process.
- Added the `mpf list-nodes` command to list the core nodes and available spare nodes.
- OpenMPF now uses the JGroups FILE_PING protocol for peer discovery instead of TCPPING. This means that the list of OpenMPF nodes no longer needs to be fully specified when the Workflow Manager starts. Instead, the Workflow Manager, and the Node Manager process on each node, use the files in `$MPF_HOME/share/nodes` to determine which nodes are currently available.
- Updated JGroups from 3.6.4 to 4.0.11.
- The environment variables specified in `/etc/profile.d/mpf.sh` have been simplified. Of note, `ALL_MPF_NODES` has been replaced by `CORE_MPF_NODES`.

### Default Detection System Properties

- The detection properties that specify the default values when creating new jobs can now be updated at runtime without restarting the Workflow Manager. Changing these properties will only have an effect on new jobs, not jobs that are currently running.
- These default detection system properties are separated from the general system properties in the Properties web page. The latter still require the Workflow Manager to be restarted for changes to take effect.
- The Apache Commons Configuration library is now used to read and write properties files. When defining a property value using an environment variable in the Properties web page, or in `$MPF_HOME/config/mpf-custom.properties`, be sure to prepend the variable name with `env:`. For example:

```
detection.models.dir.path=${env:MPF_HOME}/models/
```

- Alternatively, you can define system properties using other system properties:

```
detection.models.dir.path=${mpf.share.path}/models/
```

### Adaptive Frame Interval

- The `FRAME_RATE_CAP` property can be used to set a threshold on the maximum number of frames to process within one second of the native video time. This property takes precedence over the user-provided / pipeline-provided value for `FRAME_INTERVAL`. When the `FRAME_RATE_CAP` property is specified, an internal frame interval value is calculated as follows (see the sketch after this list):

```
calcFrameInterval = max(1, floor(mediaNativeFPS / frameRateCapProp));
```

- `FRAME_RATE_CAP` may be disabled by setting it <= 0. `FRAME_INTERVAL` can be disabled in the same way.
- If `FRAME_RATE_CAP` is disabled, then `FRAME_INTERVAL` will be used instead.
- If both `FRAME_RATE_CAP` and `FRAME_INTERVAL` are disabled, then a value of 1 will be used for `FRAME_INTERVAL`.
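
The precedence rules above can be expressed as a small Python sketch (illustrative only; the function name and parameter handling are simplified assumptions):

```python
import math

def effective_frame_interval(media_native_fps, frame_rate_cap=-1, frame_interval=-1):
    """FRAME_RATE_CAP takes precedence over FRAME_INTERVAL; values <= 0 mean 'disabled'."""
    if frame_rate_cap > 0:
        return max(1, math.floor(media_native_fps / frame_rate_cap))
    if frame_interval > 0:
        return frame_interval
    return 1  # both disabled

print(effective_frame_interval(29.97, frame_rate_cap=10))   # 2
print(effective_frame_interval(29.97, frame_interval=5))    # 5
print(effective_frame_interval(29.97))                      # 1
```
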
### Darknet Component

- This release includes a component that uses the Darknet neural network framework to perform detection and classification of objects using trained models.
- Pipelines for the Tiny YOLO and YOLOv2 models are provided. Due to its large size, the YOLOv2 weights file must be downloaded separately and placed in `$MPF_HOME/share/models/DarknetDetection` in order to use the YOLOv2 pipelines. Refer to `DarknetDetection/plugin-files/models/models.ini` for more information.
- This component supports a preprocessor mode and a default mode of operation. If preprocessor mode is enabled, and multiple Darknet detections in a frame share the same classification, then those are merged into a single detection where the region corresponds to the superset region that encapsulates all of the original detections, and the confidence value is the probability that at least one of the original detections is a true positive. If disabled, multiple Darknet detections in a frame are not merged together. (See the sketch after this list.)
- Detections are not tracked across frames. One track is generated per detection.
- This component supports an optional `CLASS_WHITELIST_FILE` property. When provided, only detections with class names listed in the file will be generated.
- This component can be compiled with GPU support if the NVIDIA CUDA Toolkit is installed on the build machine. Refer to the GPU Support Guide. If the toolkit is not found, then the component will compile with CPU support only.
- To run on a GPU, set the `CUDA_DEVICE_ID` job property, or the `detection.cuda.device.id` system property, to a value >= 0.
- When `CUDA_DEVICE_ID` >= 0, you can set the `FALLBACK_TO_CPU_WHEN_GPU_PROBLEM` job property, or the `detection.use.cpu.when.gpu.problem` system property, to `TRUE` if you want to run the component logic on the CPU instead of the GPU when a GPU problem is detected.
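
The preprocessor-mode merge described above combines same-class detections into one superset region whose confidence is the probability that at least one detection is a true positive, i.e. 1 - prod(1 - c_i). A minimal Python sketch of that idea (the detection tuple layout is a hypothetical simplification):

```python
def merge_same_class_detections(detections):
    """detections: list of (x, y, width, height, confidence) for one class in one frame."""
    x1 = min(d[0] for d in detections)
    y1 = min(d[1] for d in detections)
    x2 = max(d[0] + d[2] for d in detections)
    y2 = max(d[1] + d[3] for d in detections)
    # Probability that at least one of the original detections is a true positive.
    prob_none = 1.0
    for _, _, _, _, confidence in detections:
        prob_none *= 1.0 - confidence
    return (x1, y1, x2 - x1, y2 - y1, 1.0 - prob_none)

print(merge_same_class_detections([(10, 10, 40, 40, 0.6), (30, 20, 50, 50, 0.5)]))
# (10, 10, 70, 60, 0.8)
```
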
Also, deployments no longer create \n/etc/ld.so.conf.d/mpf.conf\n. This\n better isolates OpenMPF from the rest of the system and prevents issues, such as being unable to use SSH, when system\n libraries are not compatible with OpenMPF libraries. The latter situation may occur when running \nyum update\n on the\n system, which can make OpenMPF unusable until a new deployment package with compatible libraries is installed.\n\n\nThe Workflow Manager will no longer generate an \"Error retrieving the SingleJobInfo model\" line in the log if someone\n is viewing the Job Status page when a job submitted through the REST API is in progress.\n\n\n\n\nKnown Issues\n\n\n\n\n\nWhen multiple component services of the same type on the same node log to the same file at the same time, sometimes\n log lines will not be captured in the log file. The logging frameworks (log4j and log4cxx) do not support that usage.\n This problem happens more frequently on systems running many component services at the same time.\n\n\nThe following exception was observed:\n\n\n\n\ncom.google.protobuf.InvalidProtocolBufferException: Message missing required fields: data_uri\n\n\n\n\n\n\nFurther debugging is necessary to determine the reason why that message was missing that field. The situation is not easily reproducible. It may occur when ActiveMQ and / or the system is under heavy load and sends duplicate messages in attempt to ensure message delivery. Some of those messages seem to end up in the dead letter queue (DLQ). For now, we've improved the way we handle messages in the DLQ. If OpenMPF can process a message successfully, the job is marked as \nCOMPLETED_WITH_ERRORS\n, and the message is moved from \nActiveMQ.DLQ\n to \nMPF.DLQ_PROCESSED_MESSAGES\n. If OpenMPF cannot process a message successfully, it is moved from \nActiveMQ.DLQ to MPF.DLQ_INVALID_MESSAGES\n.\n\n\n\n\n\n\nThe \nmpf stop\n command will stop the Workflow Manager, which will in turn send commands to all of the available nodes\n to stop all running component services. If a service is processing a sub-job when the quit command is received, that\n service process will not terminate until that sub-job is completely processed. Thus, the service may put a sub-job\n response on the ActiveMQ response queue after the Workflow Manager has terminated. That will not cause a problem\n because the queues are flushed the next time the Workflow Manager starts; however, there will be a problem if the\n service finishes processing the sub-job after the Workflow Manager is restarted. At that time, the Workflow Manager\n will have no knowledge of the old job and will in turn generate warnings in the log about how the job id is \"not known\n to the system\" and/or \"not found as a batch or a streaming job\". These can be safely ignored. Often, if these messages\n appear in the log, then C++ services were running after stopping the Workflow Manager. To address this, you may wish\n to run \nsudo killall amq_detection_component\n after running \nmpf stop\n.\n\n\n\n\nOpenMPF 2.0.x\n\n\n2.0.0: February 2018\n\n\n\n\n\nNOTE:\n Components built for previous releases of OpenMPF are not compatible with OpenMPF 2.0.0 due to Batch Component API changes to support generic detections, and changes made to the format of the \ndescriptor.json\n file to support stream processing.\n\n\nNOTE:\n This release contains basic support for processing video streams. Currently, the only way to make use of that functionality is through the REST API. 
Streaming jobs and services cannot be created or monitored through the web UI. Only the SuBSENSE component has been updated to support streaming. Only single-stage pipelines are supported at this time.\n\n\n\n\nDocumentation\n\n\n\n\n\nUpdated documents to distinguish the batch component APIs from the streaming component API.\n\n\nAdded the \nC++ Streaming Component API\n.\n\n\nUpdated the \nC++ Batch Component API\n to describe support for generic detections.\n\n\nUpdated the \nREST API\n with endpoints for streaming jobs.\n\n\n\n\nSupport for Generic Detections\n\n\n\n\n\nC++ and Java components can now declare support for the \nUNKNOWN\n data type. The respective batch APIs have been\n updated with a function that will enable a component to process an \nMPFGenericJob\n, which represents a piece of media\n that is not a video, image, or audio file.\n\n\nNote that these API changes make OpenMPF R2.0.0 incompatible with components built for previous releases of OpenMPF.\n Specifically, the new component executor will not be able to load the component logic library.\n\n\n\n\nC++ Batch Component API\n\n\n\n\n\nAdded the following function to support generic detections:\n\n\nMPFDetectionError GetDetections(const MPFGenericJob &job, vector &tracks)\n\n\n\n\n\n\n\n\nJava Batch Component API\n\n\n\n\n\nAdded the following method to support generic detections:\n\n\nList getDetections(MPFGenericJob job)\n\n\n\n\n\n\n\n\nStreaming REST API\n\n\n\n\n\nAdded the following REST endpoints for streaming jobs:\n\n\n[GET] /rest/streaming/jobs\n: Returns a list of streaming job ids.\n\n\n[POST] /rest/streaming/jobs\n: Creates and submits a streaming job. Users can register for health report and\n summary report callbacks.\n\n\n[GET] /rest/streaming/jobs/{id}\n: Gets information about a streaming job.\n\n\n[POST] /rest/streaming/jobs/{id}/cancel\n: Cancels a streaming job.\n\n\n\n\n\n\n\n\nWorkflow Manager\n\n\n\n\n\nUpdated to support generic detections.\n\n\nUpdated Redis to store information about streaming jobs.\n\n\nAdded controllers for streaming job REST endpoints.\n\n\nAdded ability to generate health reports and segment summary reports for streaming jobs.\n\n\nImproved code flow between the Workflow Manager and master Node Manager to support streaming jobs.\n\n\nAdded ActiveMQ queues to enable the C++ Streaming Component Executor to send reports and job status to the Workflow\n Manager.\n\n\n\n\nNode Manager\n\n\n\n\n\nUpdated the master Node Manager and child Node Managers to spawn component services on demand to handle streaming\n jobs, cancel those jobs, and to monitor the status of those processes.\n\n\nUsing .ini files to represent streaming job properties and enable better communication between a child Node Manager\n and C++ Streaming Component Executor.\n\n\n\n\nC++ Streaming Component API\n\n\n\n\n\nDeveloped the C++ Streaming Component API with the following functions:\n\n\nMPFStreamingDetectionComponent(const MPFStreamingVideoJob &job)\n: Constructor that takes a streaming video job.\n\n\nstring GetDetectionType()\n: Returns the type of detection (i.e. 
\"FACE\").\n\n\nvoid BeginSegment(const VideoSegmentInfo &segment_info)\n: Indicates the beginning of a new video segment.\n\n\nbool ProcessFrame(const cv::Mat &frame, int frame_number)\n: Processes a single frame for the current video\n segment.\n\n\nvector EndSegment()\n: Indicates the end of the current video segment.\n\n\n\n\n\n\nUpdated the C++ Hello World component to support streaming jobs.\n\n\n\n\nC++ Streaming Component Executor\n\n\n\n\n\nDeveloped the C++ Streaming Component Executor to load a streaming component logic library, read frames from a video\n stream, and exercise the component logic through the C++ Streaming Component API.\n\n\nWhen the C++ Streaming Component Executor cannot read a frame from the stream, it will sleep for at least 1\n millisecond, doubling the amount of sleep time per attempt until it reaches the \nstallTimeout\n value specified when\n the job was created. While stalled, the job status will be \nSTALLED\n. After the timeout is exceeded, the job will\n be \nTERMINATED\n.\n\n\nThe C++ Streaming Component Executor supports \nFRAME_INTERVAL\n, as well as rotation, horizontal flipping, and\n cropping (region of interest) properties. Does not support \nUSE_KEY_FRAMES\n.\n\n\n\n\nInteroperability Package\n\n\n\n\n\nAdded the following Java classes to the interoperability package to simplify third party integration:\n\n\nJsonHealthReportCollection\n: Represents the JSON content of a health report callback. Contains one or\n more \nJsonHealthReport\n objects.\n\n\nJsonSegmentSummaryReport\n: Represents the JSON content of a summary report callback. Content is similar to the\n JSON output object used for batch processing.\n\n\n\n\n\n\n\n\nSuBSENSE Component\n\n\n\n\n\nThe SuBSENSE component now supports both batch processing and stream processing.\n\n\nEach video segment will be processed independently of the rest. In other words, tracks will be generated on a\n segment-by-segment basis and tracks will not carry over between segments.\n\n\nNote that the last frame in the previous segment will be used to determine if there is motion in the first frame of\n the next segment.\n\n\n\n\nPackaging and Deployment\n\n\n\n\n\nUpdated \ndescriptor.json\n fields to allow components to support batch and/or streaming jobs. Components that use the\n old \ndescriptor.json\n file format cannot be registered through the web UI.\n\n\nBatch component logic and streaming component logic are compiled into separate libraries.\n\n\nThe mySQL \nstreaming_job_request\n table has been updated with the following fields, which are used to populate the\n JSON health reports:\n\n\nstatus_detail\n: (Optional) A user-friendly description of the current job status.\n\n\nactivity_frame_id\n: The frame id associated with the last job activity. Activity is defined as the start of a new\n track for the current segment.\n\n\nactivity_timestamp\n: The timestamp associated with the last job activity.\n\n\n\n\n\n\n\n\nWeb User Interface\n\n\n\n\n\nAdded column names to the table that appears when the user clicks in the Media button associated with a job on the Job\n Status page. Now descriptive comments are provided when table cells are empty.\n\n\n\n\nBug Fixes\n\n\n\n\n\nUpgraded Tika to 1.17 to resolve an issue with improper indentation in a Python file (rotation.py) that resulted in\n generating at least one error message per image processed. 
When processing a large number of images, this would\n generate many error messages, causing the Automatic Bug Reporting Tool daemon (abrtd) process to run at 100% CPU. Once\n in that state, that process would stay there, essentially wasting one CPU core. This caused some of the Jenkins\n virtual machines we used for testing to become unresponsive.\n\n\n\n\nKnown Issues\n\n\n\n\n\n\n\nOpenCV 3.3.0 \ncv::imread()\n does not properly decode some TIFF images that have EXIF orientation metadata. It can\n handle images that are flipped horizontally, but not vertically. It also has issues with rotated images. Since most\n components rely on that function to read image data, those components may silently fail to generate detections for\n those kinds of images.\n\n\n\n\n\n\nUsing single quotes, apostrophes, or double quotes in the name of an algorithm, action, task, or pipeline configured\n on an existing OpenMPF system will result in a failure to perform an OpenMPF upgrade on that system. Specifically, the\n step where pre-existing custom actions, tasks, and pipelines are carried over to the upgraded version of OpenMPF will\n fail. Please do not use those special characters while naming those elements. If this has been done already, then\n those elements should be manually renamed in the XML files prior to an upgrade attempt.\n\n\n\n\n\n\nOpenMPF uses OpenCV, which uses FFmpeg, to connect to video streams. If a proxy and/or firewall prevents the network\n connection from succeeding, then OpenCV, or the underlying FFmpeg library, will segfault. This causes the C++\n Streaming Component Executor process to fail. In turn, the job status will be set to \nERROR\n with a status detail\n message of \"Unexpected error. See logs for details\". In this case, the logs will not contain any useful information.\n You can identify a segfault by the following line in the node-manager log:\n\n\n\n\n\n\n2018-02-15 16:01:21,814 INFO [pool-3-thread-4] o.m.m.nms.streaming.StreamingProcess - Process: Component exited with exit code 139\u00a0\n\n\n\n\n\nTo determine if FFmpeg can connect to the stream or not, run \nffmpeg -i \n in a terminal window. Here's an example when it's successful:\n\n\n\n\n[mpf@localhost bin]$ ffmpeg -i rtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov\nffmpeg version n3.3.3-1-ge51e07c Copyright (c) 2000-2017 the FFmpeg developers\n built with gcc 4.8.5 (GCC) 20150623 (Red Hat 4.8.5-4)\n configuration: --prefix=/apps/install --extra-cflags=-I/apps/install/include --extra-ldflags=-L/apps/install/lib --bindir=/apps/install/bin --enable-gpl --enable-nonfree --enable-libtheora --enable-libfreetype --enable-libmp3lame --enable-libvorbis --enable-libx264 --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-version3 --enable-shared --disable-libsoxr --enable-avresample\n libavutil 55. 58.100 / 55. 58.100\n libavcodec 57. 89.100 / 57. 89.100\n libavformat 57. 71.100 / 57. 71.100\n libavdevice 57. 6.100 / 57. 6.100\n libavfilter 6. 82.100 / 6. 82.100\n libavresample 3. 5. 0 / 3. 5. 0\n libswscale 4. 6.100 / 4. 6.100\n libswresample 2. 7.100 / 2. 7.100\n libpostproc 54. 5.100 / 54. 
5.100\n[rtsp @ 0x1924240] UDP timeout, retrying with TCP\nInput #0, rtsp, from 'rtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov':\n Metadata:\n title : BigBuckBunny_115k.mov\n Duration: 00:09:56.48, start: 0.000000, bitrate: N/A\n Stream #0:0: Audio: aac (LC), 12000 Hz, stereo, fltp\n Stream #0:1: Video: h264 (Constrained Baseline), yuv420p(progressive), 240x160, 24 fps, 24 tbr, 90k tbn, 48 tbc\nAt least one output file must be specified\n\n\n\n\n\nHere's an example when it's not successful, so there may be network issues:\n\n\n\n\n[mpf@localhost bin]$ ffmpeg -i rtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov\nffmpeg version n3.3.3-1-ge51e07c Copyright (c) 2000-2017 the FFmpeg developers\n built with gcc 4.8.5 (GCC) 20150623 (Red Hat 4.8.5-4)\n configuration: --prefix=/apps/install --extra-cflags=-I/apps/install/include --extra-ldflags=-L/apps/install/lib --bindir=/apps/install/bin --enable-gpl --enable-nonfree --enable-libtheora --enable-libfreetype --enable-libmp3lame --enable-libvorbis --enable-libx264 --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-version3 --enable-shared --disable-libsoxr --enable-avresample\n libavutil 55. 58.100 / 55. 58.100\n libavcodec 57. 89.100 / 57. 89.100\n libavformat 57. 71.100 / 57. 71.100\n libavdevice 57. 6.100 / 57. 6.100\n libavfilter 6. 82.100 / 6. 82.100\n libavresample 3. 5. 0 / 3. 5. 0\n libswscale 4. 6.100 / 4. 6.100\n libswresample 2. 7.100 / 2. 7.100\n libpostproc 54. 5.100 / 54. 5.100\n[tcp @ 0x171c300] Connection to tcp://184.72.239.149:554?timeout=0 failed: Invalid argument\nrtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov: Invalid argument\n\n\n\n\n\nTika 1.17 does not come pre-packaged with support for some embedded image formats in PDF files, possibly to avoid\n patent issues. OpenMPF does not handle embedded images in PDFs, so that's not a problem. Tika will print out the\n following warnings, which can be safely ignored:\n\n\n\n\nJan 22, 2018 11:02:15 AM org.apache.tika.config.InitializableProblemHandler$3 handleInitializableProblem\nWARNING: JBIG2ImageReader not loaded. jbig2 files will be ignored\nSee https://pdfbox.apache.org/2.0/dependencies.html#jai-image-io\nfor optional dependencies.\nTIFFImageWriter not loaded. tiff files will not be processed\nSee https://pdfbox.apache.org/2.0/dependencies.html#jai-image-io\nfor optional dependencies.\nJ2KImageReader not loaded. JPEG2000 files will not be processed.\nSee https://pdfbox.apache.org/2.0/dependencies.html#jai-image-io\nfor optional dependencies.\n\n\n\n\nOpenMPF 1.0.x\n\n\n1.0.0: October 2017\n\n\n\nDocumentation\n\n\n\n\n\nUpdated the \nBuild Guide\n with instructions for installing the latest JDK,\n latest JRE, FFmpeg 3.3.3, new codecs, and OpenCV 3.3.\n\n\nAdded an \nAcknowledgements\n section that provides information on third party dependencies\n leveraged by the OpenMPF.\n\n\nAdded a \nFeed Forward Guide\n that explains feed forward processing and how to use it.\n\n\nAdded missing requirements checklist content to\n the \nInstall Guide\n.\n\n\nUpdated the README at the top level of each of the primary repositories to help with user navigation and provide\n general information.\n\n\n\n\nUpgrade to FFmpeg 3.3.3 and OpenCV 3.3\n\n\n\n\n\nUpdated core framework from FFmpeg 2.6.3 to FFmpeg 3.3.3.\n\n\nAdded the following FFmpeg codecs: x256, VP9, AAC, Opus, Speex.\n\n\nUpdated core framework and components from OpenCV 3.2 to OpenCV 3.3. 
No longer building with opencv_contrib.\n\n\n\n\nFeed Forward Behavior\n\n\n\n\n\nUpdated the Workflow Manager (WFM) and all video components to optionally perform feed forward processing for batch\n jobs. This allows tracks to be passed forward from one pipeline stage to the next. Components in the next stage will\n only process the frames associated with the detections in those tracks. This differs from the default segmenting\n behavior, which does not preserve detection regions or track information between stages.\n\n\nTo enable this behavior, the optional \nFEED_FORWARD_TYPE\n property must be set to \nFRAME\n, \nSUPERSET_REGION\n,\n or \nREGION\n. If set to \nFRAME\n then the components in the next stage will process the whole frame region associated\n with each detection in the track passed forward. If set to \nSUPERSET_REGION\n then the components in the next stage\n will determine the bounding box that encapsulates all of the detection regions in the track, and only process the\n pixel data within that superset region. If set to \nREGION\n then the components in the next stage will process the\n region associated with each detection in the track passed forward, which may vary in size and position from frame to\n frame.\n\n\nThe optional \nFEED_FORWARD_TOP_CONFIDENCE_COUNT\n property can be set to a number to limit the number of detections\n passed forward in a track. For example, if set to \"5\", then only the top 5 detections in the track will be passed\n forward and processed by the next stage. The top detections are defined as those with the highest confidence values,\n or if the confidence values are the same, those with the lowest frame index.\n\n\nNote that setting the feed forward properties has no effect on the first pipeline stage because there is no prior\n stage that can pass tracks to it.\n\n\n\n\nCaffe Component\n\n\n\n\n\nUpdated the Caffe component to process images in the BGR color space instead of the RGB color space. This addresses a\n bug found in OpenCV. Refer to the Bug Fixes section below.\n\n\nAdded support for processing videos.\n\n\nAdded support for an optional \nACTIVATION_LAYER_LIST\n property. For each network layer specified in the list,\n the \ndetectionProperties\n map in the JSON output object will contain one entry. The value is an encoded string of the\n JSON representation of an OpenCV matrix of the activation values for that layer. The activation values are obtained\n after the Caffe network has processed the frame data.\n\n\nAdded support for an optional \nSPECTRAL_HASH_FILE_LIST\n property. For each JSON file specified in the list,\n the \ndetectionProperties\n map in the JSON output object will contain one entry. The value is a string of 0's and 1's\n representing the spectral hash calculated using the information in the spectral hash JSON file. The spectral hash is\n calculated using activation values after the Caffe network has processed the frame data.\n\n\nAdded a pipeline to showcase the above two features for the GoogLeNet Caffe model.\n\n\nRemoved the \nTRANSPOSE\n property from the Caffe component since it was not necessary.\n\n\nAdded red, green, and blue mean subtraction values to the GoogLeNet pipeline.\n\n\n\n\nUse Key Frames\n\n\n\n\n\nAdded support for an optional \nUSE_KEY_FRAMES\n property to each video component. When true the component will only\n look at key frames (I-frames) from the input video. Can be used in conjunction with \nFRAME_INTERVAL\n. 
For example,\n when \nUSE_KEY_FRAMES\n is true, and \nFRAME_INTERVAL\n is set to \"2\", then every other key frame will be processed.\n\n\n\n\nMPFVideoCapture and MPFImageReader Tools\n\n\n\n\n\nUpdated the MPFVideoCapture and MPFImageReader tools to handle feed forward properties.\n\n\nUpdated the MPFVideoCapture tool to handle \nFRAME_INTERVAL\n and \nUSE_KEY_FRAMES\n properties.\n\n\nUpdated all existing components to leverage these tools as much as possible.\n\n\nWe encourage component developers to use these tools to automatically take care of common frame grabbing and frame\n manipulation behaviors, and not to reinvent the wheel.\n\n\n\n\nDead Letter Queue\n\n\n\n\n\nIf for some reason a sub-job request that should have gone to a component ends up on the ActiveMQ Dead Letter Queue (\n DLQ), then the WFM will now process that failed request so that the job can complete. The ActiveMQ management page\n will now show that \nActiveMQ.DLQ\n has 1 consumer. It will also show unconsumed messages\n in \nMPF.PROCESSED_DLQ_MESSAGES\n. Those are left for auditing purposes. The \"Message Detail\" for these shows the string\n representation of the original job request protobuf message.\n\n\n\n\nUpgrade Path\n\n\n\n\n\nRemoved the Release 0.8 to Release 0.9 upgrade path in the deployment scripts.\n\n\nAdded support for a Release 0.9 to Release 1.0.0 upgrade path, and a Release 0.10.0 to Release 1.0.0 upgrade path.\n\n\n\n\nMarkup\n\n\n\n\n\nBounding boxes are now drawn along the interpolated path between detection regions whenever there are one or more\n frames in a track which do not have detections associated with them.\n\n\nFor each track, the color of the bounding box is now a randomly selected hue in the HSV color space. The colors are\n evenly distributed using the golden ratio.\n\n\n\n\nBug Fixes\n\n\n\n\n\nFixed a \nbug in OpenCV\n where the Caffe example code was processing\n images in the RGB color space instead of the BGR color space. Updated the OpenMPF Caffe component accordingly.\n\n\nFixed a bug in the OpenCV person detection component that caused bounding boxes to be too large for detections near\n the edge of a frame.\n\n\nResubmitting jobs now properly carries over configured job properties.\n\n\nFixed a bug in the build order of the OpenMPF project so that test modules that the WFM depends on are built before\n the WFM itself.\n\n\nThe Markup component draws bounding boxes between detections when a \nFRAME_INTERVAL\n is specified. This is so that the\n bounding box in the marked-up video appears in every frame. Fixed a bug where the bounding boxes drawn on\n non-detection frames appeared to stand still rather than move along the interpolated path between detection regions.\n\n\nFixed a bug on the OALPR license plate detection component where it was not properly handling the \nSEARCH_REGION_*\n\n properties.\n\n\nSupport for the \nMIN_GAP_BETWEEN_SEGMENTS\n property was not implemented properly. When the gap between two segments is\n less than this property value then the segments should be merged; otherwise, the segments should remain separate. In\n some cases, the exact opposite was happening. This bug has been fixed.\n\n\n\n\nKnown Issues\n\n\n\n\n\nBecause of the number of additional ActiveMQ messages involved, enabling feed forward for low resolution video may\n take longer than the non-feed-forward behavior.\n\n\n\n\nOpenMPF 0.x.x\n\n\n0.10.0: July 2017\n\n\n\n\n\nWARNING:\n There is no longer a \nDEFAULT CAFFE ACTION\n, \nDEFAULT CAFFE TASK\n, or \nDEFAULT CAFFE PIPELINE\n. 
There is now a \nCAFFE GOOGLENET DETECTION PIPELINE\n and \nCAFFE YAHOO NSFW DETECTION PIPELINE\n, which each have a respective action and task.\n\n\nNOTE:\n MPFImageReader has been re-enabled in this version of OpenMPF since we upgraded to OpenCV 3.2, which addressed the known issues with \nimread()\n, auto-orientation, and jpeg files in OpenCV 3.1.\n\n\n\n\nDocumentation\n\n\n\n\n\nAdded a \nContributor Guide\n that provides guidelines for contributing to the OpenMPF\n codebase.\n\n\nUpdated the \nJava Batch Component API\n with links to the example Java components.\n\n\nUpdated the \nBuild Guide\n with instructions for OpenCV 3.2.\n\n\n\n\nUpgrade to OpenCV 3.2\n\n\n\n\n\nUpdated core framework and components from OpenCV 3.1 to OpenCV 3.2.\n\n\n\n\nSupport for Animated gifs\n\n\n\n\n\nAll gifs are now treated as videos. Each gif will be handled as an MPFVideoJob.\n\n\nUnanimated gifs are treated as 1-frame videos.\n\n\nThe WFM Media Inspector now populates the \nmedia_properties\n map with a \nFRAME_COUNT\n entry (in addition to\n the \nDURATION\n and \nFPS\n entries).\n\n\n\n\nCaffe Component\n\n\n\n\n\nAdded support for the Yahoo Not Suitable for Work (NSFW) Caffe model for explicit material detection.\n\n\nUpdated the Caffe component to support the OpenCV 3.2 Deep Neural Network (DNN) module.\n\n\n\n\nFuture Support for Streaming Video\n\n\n\n\n\nNOTE:\n At this time, OpenMPF does not support streaming video. This section details what's being / has been done so far to prepare for that feature.\n\n\n\n\n\n\nThe codebase is being updated / refactored to support both the current \"batch\" job functionality and new \"streaming\"\n job functionality.\n\n\nbatch job: complete video files are written to disk before they are processed\n\n\nstreaming job: video frames are read from a streaming endpoint (such as RTSP) and processed in near real time\n\n\n\n\n\n\nThe REST API is being updated with endpoints for streaming jobs:\n\n\n[POST] /rest/streaming/jobs\n: Creates and submits a streaming job\n\n\n[POST] /rest/streaming/jobs/{id}/cancel\n: Cancels a streaming job\n\n\n[GET] /rest/streaming/jobs/{id}\n: Gets information about a streaming job\n\n\n\n\n\n\nThe Redis and mySQL databases are being updated to support streaming video jobs.\n\n\nA batch job will never have the same id as a streaming job. The integer ids will always be unique.\n\n\n\n\n\n\n\n\nBug Fixes\n\n\n\n\n\nThe MOG and SuBSENSE component services could segfault and terminate if the \nUSE_MOTION_TRACKING\n property was set to\n \u201c1\u201d and a detection was found close to the edge of the frame. Specifically, this would only happen if the video had a\n width and/or height dimension that was not an exact power of two.\n\n\nThe reason was that the code downsamples each frame by a power of two and rounds the value of the width and\n height up to the nearest integer. Later on when upscaling detection rectangles back to a size that\u2019s relative to\n the original image, the resized rectangle sometimes extended beyond the bounds of the original frame.\n\n\n\n\n\n\n\n\nKnown Issues\n\n\n\n\n\nIf a job is submitted through the REST API, and a user is logged into the web UI and looking at the job status page,\n the WFM may generate \"Error retrieving the SingleJobInfo model for the job with id\" messages.\n\n\nThis is because the job status is only added to the HTTP session object if the job is submitted through the web\n UI. When the UI queries the job status it inspects this object.\n\n\nThis message does not appear if job status is obtained using the \n[GET] /rest/jobs/{id}\n endpoint.\n\n\n\n\n\n\nThe \n[GET] /rest/jobs/stats\n endpoint aggregates information about all of the jobs ever run on the system. If\n thousands of jobs have been run, this call could take minutes to complete. The code should be improved to execute a\n direct mySQL query.\n\n\n\n\n0.9.0: April 2017\n\n\n\n\n\nWARNING:\n MPFImageReader has been disabled in this version of OpenMPF. Component developers should use MPFVideoCapture instead. This affects components developed against previous versions of OpenMPF and components developed against this version of OpenMPF. Please refer to the Known Issues section for more information.\n\n\nWARNING:\n The OALPR Text Detection Component has been renamed to OALPR \nLicense Plate\n Text Detection Component. This affects the name of the component package and the name of the actions, tasks, and pipelines. When upgrading from R0.8 to R0.9, if the old OALPR Text Detection Component is installed in R0.8 then you will be prompted to install it again at the end of the upgrade path script. We recommend declining this prompt because the old component will conflict with the new component.\n\n\nWARNING:\n Action, task, and pipeline names that started with \nMOTION DETECTION PREPROCESSOR\n have been renamed \nMOG MOTION DETECTION PREPROCESSOR\n. Similarly, \nWITH MOTION PREPROCESSOR\n has changed to \nWITH MOG MOTION PREPROCESSOR\n.\n\n\n\n\nDocumentation\n\n\n\n\n\nUpdated the \nREST API\n to reflect job properties, algorithm-specific properties, and\n media-specific properties.\n\n\nStreamlined the \nC++ Batch Component API\n document for clarity and simplicity.\n\n\nCompleted the \nJava Batch Component API\n document.\n\n\nUpdated the \nAdmin Guide\n and \nUser Guide\n to reflect web UI changes.\n\n\nUpdated the \nBuild Guide\n with instructions for GitHub repositories.\n\n\n\n\nWorkflow Manager\n\n\n\n\n\nAdded support for job properties, which will override pre-defined pipeline properties.\n\n\nAdded support for algorithm-specific properties, which will apply to a single stage of the pipeline and will override\n job properties and pre-defined pipeline properties.\n\n\nAdded support for media-specific properties, which will apply to a single piece of media and will override job\n properties, algorithm-specific properties, and pre-defined pipeline properties.\n\n\nComponents can now be automatically registered and installed when the web application starts in Tomcat.\n\n\n\n\nWeb User Interface\n\n\n\n\n\nThe \"Close All\" button on pop-up notifications now dismisses all notifications from the queue, not just the visible\n ones.\n\n\nJob completion notifications now only appear for jobs created during the current login session instead of all jobs.\n\n\nThe \nROTATION\n, \nHORIZONTAL_FLIP\n, and \nSEARCH_REGION_*\n properties can be set using the web interface when creating a\n job. Once files are selected for a job, these properties can be set individually or by groups of files.\n\n\nThe Node and Process Status page has been merged into the Node Configuration page for simplicity and ease of use.\n\n\nThe Media Markup results page has been merged into the Job Status page for simplicity and ease of use.\n\n\nThe File Manager UI has been improved to handle large numbers of files and symbolic links.\n\n\nThe side navigation menu is now replaced by a top navigation bar.\n\n\n\n\nREST API\n\n\n\n\n\nAdded an optional jobProperties object to the \n/rest/jobs/\n request which contains String key-value pairs which\n override the pipeline's pre-configured job properties.\n\n\nAdded an optional algorithmProperties object to the \n/rest/jobs/\n request which can be used to configure properties\n for specific algorithms in the pipeline. These properties override the pipeline's pre-configured job properties. They\n also override the values in the jobProperties object.\n\n\nUpdated the \n/rest/jobs/\n request to add more detail to media, replacing a list of mediaUri Strings with a list of\n media objects, each of which contains a mediaUri and an optional mediaProperties map. The mediaProperties map can be\n used to configure properties for the specific piece of media. These properties override the pipeline's pre-configured\n job properties, values in the jobProperties object, and values in the algorithmProperties object.\n\n\nStreamlined the actions, tasks, and pipelines endpoints that are used by the web UI.\n\n\n\n\nFlipping, Rotation, and Region of Interest\n\n\n\n\n\nThe \nROTATION\n, \nHORIZONTAL_FLIP\n, and \nSEARCH_REGION_*\n properties will no longer appear in the detectionProperties\n map in the JSON detection output object. When applied to an algorithm these properties now appear in the\n pipeline.stages.actions.properties element. When applied to a piece of media these properties will now appear in\n the media.mediaProperties element.\n\n\nThe OpenMPF now supports multiple regions of interest in a single media file. Each region will produce tracks\n separately, and the tracks for each region will be listed in the JSON output as if from a separate media file.\n\n\n\n\nComponent API\n\n\n\n\n\nJava Batch Component API is functionally complete for third-party development, with the exception of Component Adapter\n and frame transformation utilities classes.\n\n\nRe-architected the Java Batch Component API to use a more traditional Java method structure of returning track lists\n and throwing exceptions (rather than modifying input track lists and returning statuses), and encapsulating job\n properties into MPFJob objects:\n\n\nList getDetections(MPFVideoJob job) throws MPFComponentDetectionError\n\n\nList getDetections(MPFAudioJob job) throws MPFComponentDetectionError\n\n\nList getDetections(MPFImageJob job) throws MPFComponentDetectionError\n\n\n\n\n\n\nCreated examples for the Java Batch Component API.\n\n\nReorganized the Java and C++ component source code to enable component development without the OpenMPF core, which\n will simplify component development and streamline the code base.\n\n\n\n\nJSON Output Objects\n\n\n\n\n\nThe JSON output object for the job now contains a jobProperties map which contains all properties defined for the job\n in the job request. For example, if the job request specifies a \nCONFIDENCE_THRESHOLD\n of 5, then the jobProperties map\n in the output will also list a \nCONFIDENCE_THRESHOLD\n of 5.\n\n\nThe JSON output object for the job now contains an algorithmProperties element which contains all algorithm-specific\n properties defined for the job in the job request. For example, if the job request specifies a \nFRAME_INTERVAL\n of 2\n for FACECV then the algorithmProperties element in the output will contain an entry for \"FACECV\" and that entry will\n list a \nFRAME_INTERVAL\n of 2.\n\n\nEach JSON media output object now contains a mediaProperties map which contains all media-specific properties defined\n by the job request. For example, if the job request specifies a \nROTATION\n of 90 degrees for a single piece of media\n then the mediaProperties map for that piece of media will list a \nROTATION\n of 90.\n\n\nThe content of JSON output objects is now organized by detection type (e.g. MOTION, FACE, PERSON, TEXT, etc.) rather\n than action type.\n\n\n\n\nCaffe Component\n\n\n\n\n\nAdded support for flip, rotation, and cropping to regions of interest.\n\n\nAdded support for returning multiple classifications per detection based on user-defined settings. The classification\n list is in order of decreasing confidence value.\n\n\n\n\nNew Pipelines\n\n\n\n\n\nNew SuBSENSE motion preprocessor pipelines have been added to components that perform detection on video.\n\n\n\n\nPackaging and Deployment\n\n\n\n\n\nActions.xml\n, \nAlgorithms.xml\n, \nnodeManagerConfig.xml\n, \nnodeServicesPalette.json\n, \nPipelines.xml\n, and \nTasks.xml\n\n are no longer stored within the Workflow Manager WAR file. They are now stored under \n$MPF_HOME/data\n. This makes it\n easier to upgrade the Workflow Manager and makes it easier for users to access these files.\n\n\nEach component can now be optionally installed and registered during deployment. Components not registered are set to\n the \nUPLOADED\n state. They can then be removed or registered through the Component Registration page.\n\n\nJava components are now packaged as tar.gz files instead of RPMs, bringing them into alignment with C++ components.\n\n\nOpenMPF R0.9 can be installed over OpenMPF R0.8. The deployment scripts will determine that an upgrade should take\n place.\n\n\nAfter the upgrade, user-defined actions, tasks, and pipelines will have \"CUSTOM\" prepended to their name.\n\n\nThe job_request table in the mySQL database will have a new \"output_object_version\" column. This column will\n have \"1.0\" for jobs created using OpenMPF R0.8 and \"2.0\" for jobs created using OpenMPF R0.9. The JSON output\n object schema has changed between these versions.\n\n\n\n\n\n\nReorganized source code repositories so that component SDKs can be downloaded separately from the OpenMPF core and so\n that components are grouped by license and maturity. Build scripts have been created to streamline and simplify the\n build process across the various repositories.\n\n\n\n\nUpgrade to OpenCV 3.1\n\n\n\n\n\nThe OpenMPF software has been ported to use OpenCV 3.1, including all of the C++ detection components and the markup\n component. For the OpenALPR license plate detection component, the versions of the openalpr, tesseract, and leptonica\n libraries were also upgraded to openalpr-2.3.0, tesseract-3.0.4, and leptonica-1.7.2. 
For the SuBSENSE motion\n component, the version of the SuBSENSE library was upgraded to use the code found at this\n location: \nhttps://bitbucket.org/pierre_luc_st_charles/subsense/src\n.\n\n\n\n\nBug Fixes\n\n\n\n\n\nMOG motion detection always detected motion in frame 0 of a video. Because motion can only be detected between two\n adjacent frames, frame 1 is now the first frame in which motion can be detected.\n\n\nMOG motion detection never detected motion in the first frame of a video segment (other than the first video segment\n because of the frame 0 bug described above). Now, motion is detected using the first frame before the start of a\n segment, rather than the first frame of the segment.\n\n\nThe above bugs were also present in SuBSENSE motion detection and have been fixed.\n\n\nSuBSENSE motion detection generated tracks where the frame numbers were off by one. Corrected the frame index logic.\n\n\nVery large video files caused an out of memory error in the system during Workflow Manager media inspection.\n\n\nA job would fail when processing images with an invalid metadata tag for the camera flash setting.\n\n\nUsers were permitted to select invalid file types using the File Manager UI.\n\n\n\n\nKnown Issues\n\n\n\n\n\nMPFImageReader does not work reliably with the current release version of OpenCV 3.1\n: In OpenCV 3.1, new\n functionality was introduced to interpret EXIF information when reading jpeg files.\n\n\nThere are two issues with this new functionality that impact our ability to use the OpenCV \nimread()\n function with\n MPFImageReader:\n\n\nFirst, because of a bug in the OpenCV code, reading a jpeg file that contains exif information could cause it to\n hang. (See \nhttps://github.com/opencv/opencv/issues/6665\n.)\n\n\nSecond, it is not possible to tell the \nimread()\nfunction to ignore the EXIF data, so the image it returns is\n automatically rotated. (See \nhttps://github.com/opencv/opencv/issues/6348\n.) This results in the MPFImageReader\n applying a second rotation to the image due to the EXIF information.\n\n\n\n\n\n\nTo address these issues, we developed the following workarounds:\n\n\nCreated a version of the MPFVideoCapture that works with an MPFImageJob. The new MPFVideoCapture can pull frames\n from both video files and images. MPFVideoCapture leverages cv::VideoCapture, which does not have the two issues\n described above.\n\n\nDisabled the use of MPFImageReader to prevent new users from trying to develop code leveraging this previous\n functionality.",
"title": "Release Notes"
},
{
@@ -157,7 +157,7 @@
},
{
"location": "/Admin-Guide/index.html",
- "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nWARNING:\n Please refer to the \nUser Configuration\n section for changing the default user passwords.\n\n\n\nINFO:\n This document refers to components and pipelines that are no longer supported by OpenMPF; however, the images and general content still reflect the appearance and usage of the OpenMPF web UI and its features.\n\n\n\nWeb UI\n\n\nThe login procedure, as well as all of the pages accessible through the Workflow Manager sidebar, are the same for admin and non-admin users. Refer to the \nUser Guide\n for more information. The default account for an admin user has the username \"admin\" and password \"mpfadm\".\n\n\nWe highly recommend changing the default username and password settings for any environment which is exposed on a network, especially production environments. The default settings are public knowledge, which could be a security risk. Please refer to the \nUser Configuration\n section below.\n\n\nThis document will cover the additional functionality permitted to admin users through the Admin Console pages.\n\n\nDashboard\n\n\nThe landing page for an admin user is the Job Status page:\n\n\n\n\nThe Job Status page displays a summary of the status for all jobs run by any user in the past. The current status and progress of any running job can be monitored from this view, which is updated automatically.\n\n\nProperties Settings\n\n\nThis page allows an admin user to view and edit various OpenMPF properties:\n\n\n\n\nAn admin user can click inside of the \"Value\" field for any of the properties and type a new value. Doing so will change the color of the property to orange and display an orange icon to the right of the property name.\n\n\nNote that if the admin user types in the original value of the property, or clicks the \"Reset\" button, then it will return back to the normal coloration.\n\n\nWARNING:\n Changing the value of these properties can prevent the workflow manager from running after the web server is restarted. Also, no validation checks are performed on the user-provided values. Proceed with caution!\n\n\n\nAt the bottom of the properties table is the \"Save Properties\" button. The number of modified properties is shown in parentheses. Clicking the button will make the necessary changes to the properties file on the file system, but the changes will not take effect until the workflow manager is restarted. The saved properties will be colored blue and a blue icon will be displayed to the right of the property name. Additionally, a notification will appear at the top of the page alerting all system users that a restart is required:\n\n\n\n\nHawtio\n\n\nThe \nHawtio\n web console can be accessed by selecting \"Hawtio\" from the\n\"Configuration\" dropdown menu in the top menu bar. Hawtio exposes various management information\nand settings. It can be used to monitor the state of the ActiveMQ queues used for communication\nbetween the Workflow Manager and the components.\n\n\nUser Configuration\n\n\nEvery time the Workflow Manager starts it will attempt to create accounts for the users listed in the \nuser.properties\n file. At runtime this file is extracted to \n$MPF_HOME/config\n on the machine running the Workflow Manager. 
For every user listed in that file, the Workflow Manager will create that user account if a user with the same name doesn't already exists in the SQL database. By default, that file contains two entries, one for the \"admin\" user with the \"mpfadm\" password, and one for a non-admin \"mpf\" user with the \"mpf123\" password.\n\n\nWe highly recommend modifying the \nuser.properties\n file with your own user entries before attempting to start the Workflow Manager for the first time. This will ensure that the default user accounts are not created.\n\n\nThe official way to deploy OpenMPF is to use the Docker container platform. If you are using Docker, please follow the instructions in the openmpf-docker \nREADME\n that explain how to use a \ndocker secret\n for your custom \nuser.properties\n file.\n\n\n(Optional) Configure HTTPS\n\n\nThe official way to deploy OpenMPF is to use the Docker container platform.\nIf you are using Docker, please follow the instructions in the openmpf-docker\n\nREADME\n\nthat explain how to configure HTTPS.",
+ "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nWARNING:\n Please refer to the \nUser Configuration\n section for changing the default user passwords.\n\n\n\nINFO:\n This document refers to components and pipelines that are no longer supported by OpenMPF; however, the images and general content still reflect the appearance and usage of the OpenMPF web UI and its features.\n\n\n\nWeb UI\n\n\nThe login procedure, as well as all of the pages accessible through the Workflow Manager sidebar, are the same for admin and non-admin users. Refer to the \nUser Guide\n for more information. The default account for an admin user has the username \"admin\" and password \"mpfadm\".\n\n\nWe highly recommend changing the default username and password settings for any environment which is exposed on a network, especially production environments. The default settings are public knowledge, which could be a security risk. Please refer to the \nUser Configuration\n section below.\n\n\nThis document will cover the additional functionality permitted to admin users through the Admin Console pages.\n\n\nDashboard\n\n\nThe landing page for an admin user is the Job Status page:\n\n\n\n\nThe Job Status page displays a summary of the status for all jobs run by any user in the past. The current status and progress of any running job can be monitored from this view, which is updated automatically.\n\n\nProperties Settings\n\n\nThis page allows an admin user to view and edit various OpenMPF properties:\n\n\n\n\nAn admin user can click inside of the \"Value\" field for any of the properties and type a new value. Doing so will change the color of the property to orange and display an orange icon to the right of the property name.\n\n\nNote that if the admin user types in the original value of the property, or clicks the \"Reset\" button, then it will return back to the normal coloration.\n\n\nWARNING:\n Changing the value of these properties can prevent the Workflow Manager from running after the web server is restarted. Also, no validation checks are performed on the user-provided values. Proceed with caution!\n\n\n\nAt the bottom of the properties table is the \"Save Properties\" button. The number of modified properties is shown in parentheses. Clicking the button will make the necessary changes to the properties file on the file system, but the changes will not take effect until the Workflow Manager is restarted. The saved properties will be colored blue and a blue icon will be displayed to the right of the property name. Additionally, a notification will appear at the top of the page alerting all system users that a restart is required:\n\n\n\n\nHawtio\n\n\nThe \nHawtio\n web console can be accessed by selecting \"Hawtio\" from the\n\"Configuration\" dropdown menu in the top menu bar. Hawtio exposes various management information\nand settings. It can be used to monitor the state of the ActiveMQ queues used for communication\nbetween the Workflow Manager and the components.\n\n\nUser Configuration\n\n\nEvery time the Workflow Manager starts it will attempt to create accounts for the users listed in the \nuser.properties\n file. At runtime this file is extracted to \n$MPF_HOME/config\n on the machine running the Workflow Manager. 
For every user listed in that file, the Workflow Manager will create that user account if a user with the same name doesn't already exist in the SQL database. By default, that file contains two entries, one for the \"admin\" user with the \"mpfadm\" password, and one for a non-admin \"mpf\" user with the \"mpf123\" password.\n\n\nWe highly recommend modifying the \nuser.properties\n file with your own user entries before attempting to start the Workflow Manager for the first time. This will ensure that the default user accounts are not created.\n\n\nThe official way to deploy OpenMPF is to use the Docker container platform. If you are using Docker, please follow the instructions in the openmpf-docker \nREADME\n that explain how to use a \ndocker secret\n for your custom \nuser.properties\n file.\n\n\n(Optional) Configure HTTPS\n\n\nThe official way to deploy OpenMPF is to use the Docker container platform.\nIf you are using Docker, please follow the instructions in the openmpf-docker\n\nREADME\n\nthat explain how to configure HTTPS.",
"title": "Admin Guide"
},
{
@@ -172,7 +172,7 @@
},
{
"location": "/Admin-Guide/index.html#properties-settings",
- "text": "This page allows an admin user to view and edit various OpenMPF properties: An admin user can click inside of the \"Value\" field for any of the properties and type a new value. Doing so will change the color of the property to orange and display an orange icon to the right of the property name. Note that if the admin user types in the original value of the property, or clicks the \"Reset\" button, then it will return back to the normal coloration. WARNING: Changing the value of these properties can prevent the workflow manager from running after the web server is restarted. Also, no validation checks are performed on the user-provided values. Proceed with caution! At the bottom of the properties table is the \"Save Properties\" button. The number of modified properties is shown in parentheses. Clicking the button will make the necessary changes to the properties file on the file system, but the changes will not take effect until the workflow manager is restarted. The saved properties will be colored blue and a blue icon will be displayed to the right of the property name. Additionally, a notification will appear at the top of the page alerting all system users that a restart is required:",
+ "text": "This page allows an admin user to view and edit various OpenMPF properties: An admin user can click inside of the \"Value\" field for any of the properties and type a new value. Doing so will change the color of the property to orange and display an orange icon to the right of the property name. Note that if the admin user types in the original value of the property, or clicks the \"Reset\" button, then it will return back to the normal coloration. WARNING: Changing the value of these properties can prevent the Workflow Manager from running after the web server is restarted. Also, no validation checks are performed on the user-provided values. Proceed with caution! At the bottom of the properties table is the \"Save Properties\" button. The number of modified properties is shown in parentheses. Clicking the button will make the necessary changes to the properties file on the file system, but the changes will not take effect until the Workflow Manager is restarted. The saved properties will be colored blue and a blue icon will be displayed to the right of the property name. Additionally, a notification will appear at the top of the page alerting all system users that a restart is required:",
"title": "Properties Settings"
},
{
@@ -192,7 +192,7 @@
},
{
"location": "/User-Guide/index.html",
- "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nINFO:\n This document refers to components and pipelines that are no longer supported by OpenMPF; however, the images and general content still reflect the appearance and usage of the OpenMPF web UI and its features.\n\n\n\nGeneral\n\n\nThe Open Media Processing Framework (OpenMPF) can be used in three ways:\n\n\n\n\nThrough the OpenMPF Web user interface (UI)\n\n\nThrough the \nREST API endpoints\n\n\nThrough the \nCLI Runner\n\n\n\n\nAccessing the Web UI\n\n\nOn the server hosting the Open Media Processing Framework, the Web UI is accessible at http://localhost:8080. To access it from other machines, substitute the hostname or IP address of the master node server in place of \"localhost\".\n\n\nThe OpenMPF user interface was designed and tested for use with Chrome and Firefox. It has not been tested with other browsers. Attempting to use an unsupported browser will result in a warning.\n\n\nLogging In\n\n\nThe OpenMPF Web UI requires user authentication and provides two default accounts: \"mpf\" and \"admin\". The password for the \"mpf\" user is \"mpf123\". These accounts are used to assign user or admin roles for OpenMPF cluster management. Note that an administrator can remove these accounts and/or add new ones using a command line tool. Refer to the \nAdmin Guide\n for features available to an admin user.\n\n\n\n\nThe landing page for a user is the Job Status page:\n\n\n\n\nLogging out\n\n\nTo log out a user can click the down arrow associated with the user icon at the top right hand corner of the page and then select \"Logout\":\n\n\n\n\nUser (Non-Admin) Features\n\n\nThe remainder of this document will describe the features available to a non-admin user.\n\n\nCreating Workflow Manager Jobs\n\n\nA \"job\" consists of a set of image, video, or audio files and a set of exploitation algorithms that will operate on those files. A job is created by assigning input media file(s) to a pipeline. A pipeline specifies the order in which processing steps are performed. Each step consists of a single task and each task consists of one or more actions which may be performed in parallel. The following sections describe the UI views associated with the different aspects of job creation and job execution.\n\n\nCreate Job\n\n\nThis is the primary page for creating jobs. Creating a job consists of uploading and selecting files as well as a pipeline and job priority.\n\n\n\n\nUploading Files\n\n\nSelecting a directory in the File Manager will display all files in that directory. The user can use previously uploaded files, or to choose from the icon bar at the bottom of the panel:\n\n\n Create New Folder\n\n Add Local Files\n\n Upload from URL\n\n Refresh\n\n\nNote that the first three options are only available if the \"remote-media\" directory or one of its subdirectories is selected. That directory resides in the OpenMPF share directory. The full path is shown in the footer of the File Manager section.\n\n\nClicking the \"Add Local Files\" icon will display a file browser dialog so that the user can select and upload one or more files from their local machine. The files will be uploaded to the selected directory. 
The upload progress dialog will display a preview of each file (if possible) and whether or not each file is uploaded successfully.\n\n\nClicking the \"Create New Folder\" icon will allow the user to create a new directory within the one currently selected. If the user has selected \"remote-media\", then adding a directory called \"Test Data\" will place it within \"remote-media\". \"Test Data\" will appear as a subdirectory in the directory tree shown in the web UI. If the user then clicks on \"Test Data\" and then the \"Add Local Files\" button the user can upload files to that specific directory. In the screenshot below, \"lena.png\" has been uploaded to the parent \"remote-media\" directory.\n\n\n\n\nClicking the \"Upload from URL\" icon enables the user to specify URLs pointing to remote media. Each URL must appear on a new line. Note that if a URL to a video is submitted then it must be a direct link to the video file. Specifying a URL to a YouTube HTML page, for example, will not work.\n\n\n\n\nClicking the \"Refresh\" icon updates the displayed file tree from the file system. Use this if an external process has added or removed files to or from the underlying file system.\n\n\nCreating Jobs\n\n\nCreating a job consists of selecting files as well as a pipeline and job priority.\n\n\n\n\nFiles are selected by first clicking the name of a directory to populate the files table in the center of the UI and then clicking the checkbox next to the file. Multiple files can be selected, including files from different directories. Also, the contents of an entire directory, and its subdirectories, can be selected by clicking the checkbox next to the parent directory name. To review which files have been selected, click the \"View\" button shown to the right of the \"# Files\" indicator. If there are many files in a directory, you may need to page through the directory using the page number buttons at the bottom of the center pane.\n\n\nYou can remove a file from the selected files by clicking on the red \"X\" for the individual file. You can also remove multiple files by first selecting the files using the checkboxes and then clicking on the \"Remove Checked\" button.\n\n\n\n\nThe media properties can be adjusted for individual files by clicking on the \"Set Properties\" button for that file. You can modify the properties of a group of files by clicking on the \"Set properties for Checked\" after selecting multiple files.\n\n\n\n\nAfter files have been selected it's time to assign a pipeline and job priority. The \"Select a pipeline and job priority\" section is located on the right side of the screen. Clicking on the down-arrow on the far right of the \"Select a pipeline\" area displays a drop-down menu containing the available pipelines. Click on the desired pipeline to select it. Existing pipelines provided with the system are listed in the Default Pipelines section of this document.\n\n\n\"Select job priority\" is immediately below \"Select a pipeline\" and has a similar drop-down menu. Clicking on the down-arrow on the right hand side of the \"Select job priority\" area displays the drop-down menu of available priorities. Clicking on the desired priority selects it. Priority 4 is the default value used if no priority is selected by the user. Priority 0 is the lowest priority, and priority 9 is the highest priority. When a job is executed it's divided into tasks that are each executed by a component service running on one of the nodes in the OpenMPF cluster. 
Each service executes tasks with the highest priority first. Note that a service will first complete the task it's currently processing before moving on to the next task. Thus, a long-running low-priority task may delay the execution of a high-priority task.\n\n\nAfter files have been selected and a pipeline and priority are assigned, clicking on the \"Create Job\" icon will start the job. When the job starts, the user will be shown the \"Job Status\" view.\n\n\nJob Status\n\n\nThe Job Status page displays a summary of the status for all jobs run by any user in the past. The current status and progress of any running job can be monitored from this view, which is updated automatically.\n\n\n\n\nWhen a job is COMPLETE a user can view the generated JSON output object data by clicking the \"Output Objects\" button for that job. A new tab/window will open with the detection output. The detection object output displays a formatted JSON representation of the detection results.\n\n\n{\n \"jobId\": \"localhost-11\",\n \"errors\": [],\n \"warnings\": [],\n \"objectId\": \"ef027349-8e6a-4472-a459-eba9463787f3\",\n \"pipeline\": {\n \"name\": \"OCV FACE DETECTION PIPELINE\",\n \"description\": \"Performs OpenCV face detection.\",\n \"tasks\": [\n {\n \"actionType\": \"DETECTION\",\n \"name\": \"OCV FACE DETECTION TASK\",\n \"description\": \"Performs OpenCV face detection.\",\n \"actions\": [\n {\n \"algorithm\": \"FACECV\",\n \"name\": \"OCV FACE DETECTION ACTION\",\n \"description\": \"Executes the OpenCV face detection algorithm using the default parameters.\",\n \"properties\": {}\n }\n ]\n }\n ]\n },\n \"priority\": 4,\n \"siteId\": \"mpf1\",\n \"externalJobId\": null,\n \"timeStart\": \"2021-09-07T20:57:01.073Z\",\n \"timeStop\": \"2021-09-07T20:57:02.946Z\",\n \"status\": \"COMPLETE\",\n \"algorithmProperties\": {},\n \"jobProperties\": {},\n \"environmentVariableProperties\": {},\n \"media\": [\n {\n \"mediaId\": 3,\n \"path\": \"file:///opt/mpf/share/remote-media/faces.jpg\",\n \"sha256\": \"184e9b04369248ae8a97ec2a20b1409a016e2895686f90a2a1910a0bef763d56\",\n \"mimeType\": \"image/jpeg\",\n \"mediaType\": \"IMAGE\",\n \"length\": 1,\n \"mediaMetadata\": {\n \"FRAME_HEIGHT\": \"1275\",\n \"FRAME_WIDTH\": \"1920\",\n \"MIME_TYPE\": \"image/jpeg\"\n },\n \"mediaProperties\": {},\n \"status\": \"COMPLETE\",\n \"detectionProcessingErrors\": {},\n \"markupResult\": null,\n \"output\": {\n \"FACE\": [\n {\n \"source\": \"+#OCV FACE DETECTION ACTION\",\n \"algorithm\": \"FACECV\",\n \"tracks\": [\n {\n \"id\": \"d4b4a6e870c1378a3bc85a234b6f4c881f81a14edcf858d6d256d04ad40bc175\",\n \"startOffsetFrame\": 0,\n \"stopOffsetFrame\": 0,\n \"startOffsetTime\": 0,\n \"stopOffsetTime\": 0,\n \"type\": \"FACE\",\n \"source\": \"+#OCV FACE DETECTION ACTION\",\n \"confidence\": 5,\n \"trackProperties\": {},\n \"exemplar\": {\n \"offsetFrame\": 0,\n \"offsetTime\": 0,\n \"x\": 652,\n \"y\": 212,\n \"width\": 277,\n \"height\": 277,\n \"confidence\": 5,\n \"detectionProperties\": {},\n \"artifactExtractionStatus\": \"NOT_ATTEMPTED\",\n \"artifactPath\": null\n },\n \"detections\": [\n {\n \"offsetFrame\": 0,\n \"offsetTime\": 0,\n \"x\": 652,\n \"y\": 212,\n \"width\": 277,\n \"height\": 277,\n \"confidence\": 5,\n \"detectionProperties\": {},\n \"artifactExtractionStatus\": \"NOT_ATTEMPTED\",\n \"artifactPath\": null\n }\n ]\n }\n ]\n }\n ]\n }\n }\n ]\n}\n\n\n\nA user can click the \"Cancel\" button to attempt to cancel the execution of a job before it completes. 
Note that if a service is currently processing part of a job, for example, a video segment that's part of a larger video file, then it will continue to process that part of the job until it completes or there is an error. The act of cancelling a job will prevent other parts of that job from being processed. Thus, if the \"Cancel\" button is clicked late into the job execution, or if each part of the job is already being processed by services executing in parallel, it may have no effect. Also, if the video segment size is set to a very large number, and the detection being performed is slow, then cancelling a job could take awhile.\n\n\nA user can click the \"Resubmit\" button to execute a job again. The new job execution will retain the same job id and all generated artifacts, marked up media, and detection objects will be replaced with the new results. The results of the previous job execution will no longer be available. Note that the user has the option to change the job priority when resubmitting a job.\n\n\nYou can view the results of any Media Markup by clicking on the \"Media\" button for that job. This view will display the path of the source medium and the marked up output path of any media processed using a pipeline that contains a markup action. Clicking an image will display a popup with the marked up image. You cannot view a preview for marked up videos. In any case, the marked up data can be downloaded to the machine running the web browser by clicking the \"Download\" button.\n\n\n\n\nCreate Custom Pipelines\n\n\nA pipeline consists of a series of tasks executed sequentially. A task consists of a single action or a set of two or more actions performed in parallel. An action is the execution of an algorithm. The ability to arrange tasks and actions in various ways provides a great deal of flexibility when creating pipelines. Users may combine pre-existing tasks in different ways, or create new tasks based on the pre-existing actions.\n\n\nSelecting \"Pipelines\" from the \"Configuration\" dropdown menu in the top menu bar brings up the Pipeline Creation View, which enables users to create new pipelines. To create a new action, the user can scroll to the \"Create A New Action\" section of the page and select the desired algorithm from the \"Select an Algorithm\" dropdown menu:\n\n\n\n\nSelecting an algorithm will bring up a scrollable table of properties associated with the algorithm, including each property's name, description, data type, and an editable field allowing the user to set a custom value. The user may enter values for only those properties that they wish to change; any property value fields left blank will result in default values being used for those properties. For example, a custom action may be created based on the OpenCV face detection component to scan for faces equal to or exceeding a size of 100x100 pixels.\n\n\nWhen done editing the property values, the user can click the \"Create Action\" button, enter a name and description for the action (both are required), and then click the \"Create\" button. The action will then be listed in the \"Available Actions\" table and also in the \"Select an Action\" dropdown menu used for task creation.\n\n\n\n\nTo create a new task, the user can scroll to the \"Create A New Task\" section of the page:\n\n\n\n\nThe user can use the \"Select an Action\" dropdown menu to select the desired action and then click \"Add Action to Task\". The user can follow this procedure to add additional actions to the task, if desired. 
Clicking on the \"Remove\" button next to an added action will remove it from the task. When the user is finished adding actions the user can click \"Create Task\", enter a name and description for the task (both are required), and then click the \"Create\" button. The task will be listed in the \"Available Tasks\" table as well as in the \"Select a Task\" dropdown menu used for pipeline creation.\n\n\n\n\nTo build a new pipeline, the user can scroll down to the \"Create A New Pipeline\" section of the page:\n\n\n\n\nThe user can use the \"Select a Task\" dropdown menu to select the first task and then click \"Add Task to Pipeline\". The user can follow this procedure to add additional tasks to the pipeline, if desired. Clicking on the \"Remove\" button next to an added task will remove it from the pipeline. When the user is finished adding tasks the user can click \"Create Pipeline\", enter a name and description for the pipeline (both are required), and then click the \"Create\" button. The pipeline will be listed in the \"Available Pipelines\" table.\n\n\n\n\nAll pipelines successfully created in this view will also appear in the pipeline drop down selection menus on any job creation page:\n\n\n\n\n\n\nNOTE: Pipeline, task, and action names are case-insensitive. All letters will be converted to uppercase.\n\n\n\n\nLogs\n\n\nThis page allows a user to view the various log files that are generated by system processes running on the various nodes in the OpenMPF cluster. A log file can be selected by first selecting a host from the \"Available Hosts\" drop-down and then selecting a log file from the \"Available Logs\" drop-down. The information in the log can be filtered for display based on the following log levels: ALL, TRACE, DEBUG, INFO, WARN, ERROR, or FATAL. Choosing a successive log level displays all information at that level and levels below (e.g., choosing WARN will cause all WARN, INFO, DEBUG, and TRACE information to be displayed, but will filter out ERROR and FATAL information).\n\n\n\n\nIn general, all services of the same component type running on the same node write log messages to the same file. For example, all OCV face detection services on somehost-7-mpfd2 write log messages to the same \"ocv-face-detection\" log file. All OCV face detection services on somehost-7-mpfd3 write log messages to a different \"ocv-face-detection\" log file.\n\n\nNote that only the master node will have the \"workflow-manager\" log. This is because the workflow manager only runs on the master node.\n\n\nThe \"node-manager-startup\" and \"node-manager\" logs will appear for every node in a non-Docker OpenMPF cluster. The \"node-manager-startup\" log captures information about the nodemanager startup process, such as if any errors occurred. The \"node-manager\" log captures information about node manager execution, such as starting and stopping services.\n\n\nThe \"detection\" log captures information about initializing C++ detection components and how they handle job request and response messages.\n\n\nProperties Settings\n\n\nThis page allows a user to view the various OpenMPF properties configured automatically or by an admin user:\n\n\n\n\nStatistics\n\n\nThe \"Jobs\" tab on this page allows a user to view a bar graph representing the time it took to execute the longest running job for a given pipeline. Pipelines that do not have bars have not been used to run any jobs yet. 
Job statistics are preserved when the workflow manager is restarted.\n\n\n\n\nFor example, the DLIB FACE DETECTION PIPELINE was run twice. Note that the Y-axis in the bar graph has a logarithmic scale. Hovering the mouse over any bar in the graph will show more information. Information about each pipeline is listed below the graph.\n\n\nThe \"Processes\" tab on this page allows a user to view a table with information about the runtime of various internal workflow manager operations. The \"Count\" field represents the number of times each operation was run. The min, max, and mean are calculated over the set of times each operation was performed. Runtime information is reset when the workflow manager is restarted.\n\n\n\n\nREST API\n\n\nThis page allows a user to try out the \nvarious REST API endpoints\n provided by the workflow manager. It is intended to serve as a learning tool for technical users who wish to design and build systems that interact with the OpenMPF.\n\n\nAfter selecting a functional category, such as \"meta\", \"jobs\", \"statistics\", \"nodes\", \"pipelines\", or \"system-message\", each REST endpoint for that category is shown in a list. Selecting one of them will cause it to expand and reveal more information about the request and response structures. If the request takes any parameters then a section will appear that allows the user to manually specify them.\n\n\n\n\nIn the example above, the \"/rest/jobs/{id}\" endpoint was selected. It takes a required \"id\" parameter that corresponds to a previously run job and returns a JSON representation of that job's information. The screenshot below shows the result of specifying an \"id\" of \"1\", providing the \"mpf\" user credentials when prompted, and then clicking the \"Try it out!\" button:\n\n\n\n\nThe HTTP response information is shown below the \"Try it out!\" button. Note that the structure of the \"Response Body\" is the same as the response model shown in the \"Response Class\" directly underneath the \"/rest/jobs/{id}\" label.",
+ "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nINFO:\n This document refers to components and pipelines that are no longer supported by OpenMPF; however, the images and general content still reflect the appearance and usage of the OpenMPF web UI and its features.\n\n\n\nGeneral\n\n\nThe Open Media Processing Framework (OpenMPF) can be used in three ways:\n\n\n\n\nThrough the OpenMPF Web user interface (UI)\n\n\nThrough the \nREST API endpoints\n\n\nThrough the \nCLI Runner\n\n\n\n\nAccessing the Web UI\n\n\nOn the server hosting the Open Media Processing Framework, the Web UI is accessible at http://localhost:8080. To access it from other machines, substitute the hostname or IP address of the master node server in place of \"localhost\".\n\n\nThe OpenMPF user interface was designed and tested for use with Chrome and Firefox. It has not been tested with other browsers. Attempting to use an unsupported browser will result in a warning.\n\n\nLogging In\n\n\nThe OpenMPF Web UI requires user authentication and provides two default accounts: \"mpf\" and \"admin\". The password for the \"mpf\" user is \"mpf123\". These accounts are used to assign user or admin roles for OpenMPF cluster management. Note that an administrator can remove these accounts and/or add new ones using a command line tool. Refer to the \nAdmin Guide\n for features available to an admin user.\n\n\n\n\nThe landing page for a user is the Job Status page:\n\n\n\n\nLogging out\n\n\nTo log out a user can click the down arrow associated with the user icon at the top right hand corner of the page and then select \"Logout\":\n\n\n\n\nUser (Non-Admin) Features\n\n\nThe remainder of this document will describe the features available to a non-admin user.\n\n\nCreating Workflow Manager Jobs\n\n\nA \"job\" consists of a set of image, video, or audio files and a set of exploitation algorithms that will operate on those files. A job is created by assigning input media file(s) to a pipeline. A pipeline specifies the order in which processing steps are performed. Each step consists of a single task and each task consists of one or more actions which may be performed in parallel. The following sections describe the UI views associated with the different aspects of job creation and job execution.\n\n\nCreate Job\n\n\nThis is the primary page for creating jobs. Creating a job consists of uploading and selecting files as well as a pipeline and job priority.\n\n\n\n\nUploading Files\n\n\nSelecting a directory in the File Manager will display all files in that directory. The user can use previously uploaded files, or to choose from the icon bar at the bottom of the panel:\n\n\n Create New Folder\n\n Add Local Files\n\n Upload from URL\n\n Refresh\n\n\nNote that the first three options are only available if the \"remote-media\" directory or one of its subdirectories is selected. That directory resides in the OpenMPF share directory. The full path is shown in the footer of the File Manager section.\n\n\nClicking the \"Add Local Files\" icon will display a file browser dialog so that the user can select and upload one or more files from their local machine. The files will be uploaded to the selected directory. 
The upload progress dialog will display a preview of each file (if possible) and whether or not each file is uploaded successfully.\n\n\nClicking the \"Create New Folder\" icon will allow the user to create a new directory within the one currently selected. If the user has selected \"remote-media\", then adding a directory called \"Test Data\" will place it within \"remote-media\". \"Test Data\" will appear as a subdirectory in the directory tree shown in the web UI. If the user then clicks on \"Test Data\" and then the \"Add Local Files\" button, the user can upload files to that specific directory. In the screenshot below, \"lena.png\" has been uploaded to the parent \"remote-media\" directory.\n\n\n\n\nClicking the \"Upload from URL\" icon enables the user to specify URLs pointing to remote media. Each URL must appear on a new line. Note that if a URL to a video is submitted then it must be a direct link to the video file. Specifying a URL to a YouTube HTML page, for example, will not work.\n\n\n\n\nClicking the \"Refresh\" icon updates the displayed file tree from the file system. Use this if an external process has added or removed files to or from the underlying file system.\n\n\nCreating Jobs\n\n\nCreating a job consists of selecting files as well as a pipeline and job priority.\n\n\n\n\nFiles are selected by first clicking the name of a directory to populate the files table in the center of the UI and then clicking the checkbox next to the file. Multiple files can be selected, including files from different directories. Also, the contents of an entire directory, and its subdirectories, can be selected by clicking the checkbox next to the parent directory name. To review which files have been selected, click the \"View\" button shown to the right of the \"# Files\" indicator. If there are many files in a directory, you may need to page through the directory using the page number buttons at the bottom of the center pane.\n\n\nYou can remove a file from the selected files by clicking on the red \"X\" for the individual file. You can also remove multiple files by first selecting the files using the checkboxes and then clicking on the \"Remove Checked\" button.\n\n\n\n\nThe media properties can be adjusted for individual files by clicking on the \"Set Properties\" button for that file. You can modify the properties of a group of files by clicking on the \"Set properties for Checked\" button after selecting multiple files.\n\n\n\n\nAfter files have been selected it's time to assign a pipeline and job priority. The \"Select a pipeline and job priority\" section is located on the right side of the screen. Clicking on the down-arrow on the far right of the \"Select a pipeline\" area displays a drop-down menu containing the available pipelines. Click on the desired pipeline to select it. Existing pipelines provided with the system are listed in the Default Pipelines section of this document.\n\n\n\"Select job priority\" is immediately below \"Select a pipeline\" and has a similar drop-down menu. Clicking on the down-arrow on the right hand side of the \"Select job priority\" area displays the drop-down menu of available priorities. Clicking on the desired priority selects it. Priority 4 is the default value used if no priority is selected by the user. Priority 0 is the lowest priority, and priority 9 is the highest priority. When a job is executed it's divided into tasks that are each executed by a component service running on one of the nodes in the OpenMPF cluster. 
Each service executes tasks with the highest priority first. Note that a service will first complete the task it's currently processing before moving on to the next task. Thus, a long-running low-priority task may delay the execution of a high-priority task.\n\n\nAfter files have been selected and a pipeline and priority are assigned, clicking on the \"Create Job\" icon will start the job. When the job starts, the user will be shown the \"Job Status\" view.\n\n\nJob Status\n\n\nThe Job Status page displays a summary of the status for all jobs run by any user in the past. The current status and progress of any running job can be monitored from this view, which is updated automatically.\n\n\n\n\nWhen a job is COMPLETE a user can view the generated JSON output object data by clicking the \"Output Objects\" button for that job. A new tab/window will open with the detection output. The detection object output displays a formatted JSON representation of the detection results.\n\n\n{\n \"jobId\": \"localhost-11\",\n \"errors\": [],\n \"warnings\": [],\n \"objectId\": \"ef027349-8e6a-4472-a459-eba9463787f3\",\n \"pipeline\": {\n \"name\": \"OCV FACE DETECTION PIPELINE\",\n \"description\": \"Performs OpenCV face detection.\",\n \"tasks\": [\n {\n \"actionType\": \"DETECTION\",\n \"name\": \"OCV FACE DETECTION TASK\",\n \"description\": \"Performs OpenCV face detection.\",\n \"actions\": [\n {\n \"algorithm\": \"FACECV\",\n \"name\": \"OCV FACE DETECTION ACTION\",\n \"description\": \"Executes the OpenCV face detection algorithm using the default parameters.\",\n \"properties\": {}\n }\n ]\n }\n ]\n },\n \"priority\": 4,\n \"siteId\": \"mpf1\",\n \"externalJobId\": null,\n \"timeStart\": \"2021-09-07T20:57:01.073Z\",\n \"timeStop\": \"2021-09-07T20:57:02.946Z\",\n \"status\": \"COMPLETE\",\n \"algorithmProperties\": {},\n \"jobProperties\": {},\n \"environmentVariableProperties\": {},\n \"media\": [\n {\n \"mediaId\": 3,\n \"path\": \"file:///opt/mpf/share/remote-media/faces.jpg\",\n \"sha256\": \"184e9b04369248ae8a97ec2a20b1409a016e2895686f90a2a1910a0bef763d56\",\n \"mimeType\": \"image/jpeg\",\n \"mediaType\": \"IMAGE\",\n \"length\": 1,\n \"mediaMetadata\": {\n \"FRAME_HEIGHT\": \"1275\",\n \"FRAME_WIDTH\": \"1920\",\n \"MIME_TYPE\": \"image/jpeg\"\n },\n \"mediaProperties\": {},\n \"status\": \"COMPLETE\",\n \"detectionProcessingErrors\": {},\n \"markupResult\": null,\n \"output\": {\n \"FACE\": [\n {\n \"source\": \"+#OCV FACE DETECTION ACTION\",\n \"algorithm\": \"FACECV\",\n \"tracks\": [\n {\n \"id\": \"d4b4a6e870c1378a3bc85a234b6f4c881f81a14edcf858d6d256d04ad40bc175\",\n \"startOffsetFrame\": 0,\n \"stopOffsetFrame\": 0,\n \"startOffsetTime\": 0,\n \"stopOffsetTime\": 0,\n \"type\": \"FACE\",\n \"source\": \"+#OCV FACE DETECTION ACTION\",\n \"confidence\": 5,\n \"trackProperties\": {},\n \"exemplar\": {\n \"offsetFrame\": 0,\n \"offsetTime\": 0,\n \"x\": 652,\n \"y\": 212,\n \"width\": 277,\n \"height\": 277,\n \"confidence\": 5,\n \"detectionProperties\": {},\n \"artifactExtractionStatus\": \"NOT_ATTEMPTED\",\n \"artifactPath\": null\n },\n \"detections\": [\n {\n \"offsetFrame\": 0,\n \"offsetTime\": 0,\n \"x\": 652,\n \"y\": 212,\n \"width\": 277,\n \"height\": 277,\n \"confidence\": 5,\n \"detectionProperties\": {},\n \"artifactExtractionStatus\": \"NOT_ATTEMPTED\",\n \"artifactPath\": null\n }\n ]\n }\n ]\n }\n ]\n }\n }\n ]\n}\n\n\n\nA user can click the \"Cancel\" button to attempt to cancel the execution of a job before it completes. 
Note that if a service is currently processing part of a job, for example, a video segment that's part of a larger video file, then it will continue to process that part of the job until it completes or there is an error. The act of cancelling a job will prevent other parts of that job from being processed. Thus, if the \"Cancel\" button is clicked late into the job execution, or if each part of the job is already being processed by services executing in parallel, it may have no effect. Also, if the video segment size is set to a very large number, and the detection being performed is slow, then cancelling a job could take a while.\n\n\nA user can click the \"Resubmit\" button to execute a job again. The new job execution will retain the same job id, and all generated artifacts, marked up media, and detection objects will be replaced with the new results. The results of the previous job execution will no longer be available. Note that the user has the option to change the job priority when resubmitting a job.\n\n\nYou can view the results of any Media Markup by clicking on the \"Media\" button for that job. This view will display the path of the source medium and the marked up output path of any media processed using a pipeline that contains a markup action. Clicking an image will display a popup with the marked up image. You cannot view a preview for marked up videos. In any case, the marked up data can be downloaded to the machine running the web browser by clicking the \"Download\" button.\n\n\n\n\nCreate Custom Pipelines\n\n\nA pipeline consists of a series of tasks executed sequentially. A task consists of a single action or a set of two or more actions performed in parallel. An action is the execution of an algorithm. The ability to arrange tasks and actions in various ways provides a great deal of flexibility when creating pipelines. Users may combine pre-existing tasks in different ways, or create new tasks based on the pre-existing actions.\n\n\nSelecting \"Pipelines\" from the \"Configuration\" dropdown menu in the top menu bar brings up the Pipeline Creation View, which enables users to create new pipelines. To create a new action, the user can scroll to the \"Create A New Action\" section of the page and select the desired algorithm from the \"Select an Algorithm\" dropdown menu:\n\n\n\n\nSelecting an algorithm will bring up a scrollable table of properties associated with the algorithm, including each property's name, description, data type, and an editable field allowing the user to set a custom value. The user may enter values for only those properties that they wish to change; any property value fields left blank will result in default values being used for those properties. For example, a custom action may be created based on the OpenCV face detection component to scan for faces equal to or exceeding a size of 100x100 pixels.\n\n\nWhen done editing the property values, the user can click the \"Create Action\" button, enter a name and description for the action (both are required), and then click the \"Create\" button. The action will then be listed in the \"Available Actions\" table and also in the \"Select an Action\" dropdown menu used for task creation.\n\n\n\n\nTo create a new task, the user can scroll to the \"Create A New Task\" section of the page:\n\n\n\n\nThe user can use the \"Select an Action\" dropdown menu to select the desired action and then click \"Add Action to Task\". The user can follow this procedure to add additional actions to the task, if desired. 
Clicking on the \"Remove\" button next to an added action will remove it from the task. When the user is finished adding actions the user can click \"Create Task\", enter a name and description for the task (both are required), and then click the \"Create\" button. The task will be listed in the \"Available Tasks\" table as well as in the \"Select a Task\" dropdown menu used for pipeline creation.\n\n\n\n\nTo build a new pipeline, the user can scroll down to the \"Create A New Pipeline\" section of the page:\n\n\n\n\nThe user can use the \"Select a Task\" dropdown menu to select the first task and then click \"Add Task to Pipeline\". The user can follow this procedure to add additional tasks to the pipeline, if desired. Clicking on the \"Remove\" button next to an added task will remove it from the pipeline. When the user is finished adding tasks the user can click \"Create Pipeline\", enter a name and description for the pipeline (both are required), and then click the \"Create\" button. The pipeline will be listed in the \"Available Pipelines\" table.\n\n\n\n\nAll pipelines successfully created in this view will also appear in the pipeline drop down selection menus on any job creation page:\n\n\n\n\n\n\nNOTE: Pipeline, task, and action names are case-insensitive. All letters will be converted to uppercase.\n\n\n\n\nLogs\n\n\nThis page allows a user to view the various log files that are generated by system processes running on the various nodes in the OpenMPF cluster. A log file can be selected by first selecting a host from the \"Available Hosts\" drop-down and then selecting a log file from the \"Available Logs\" drop-down. The information in the log can be filtered for display based on the following log levels: ALL, TRACE, DEBUG, INFO, WARN, ERROR, or FATAL. Choosing a successive log level displays all information at that level and levels below (e.g., choosing WARN will cause all WARN, INFO, DEBUG, and TRACE information to be displayed, but will filter out ERROR and FATAL information).\n\n\n\n\nIn general, all services of the same component type running on the same node write log messages to the same file. For example, all OCV face detection services on somehost-7-mpfd2 write log messages to the same \"ocv-face-detection\" log file. All OCV face detection services on somehost-7-mpfd3 write log messages to a different \"ocv-face-detection\" log file.\n\n\nNote that only the master node will have the \"workflow-manager\" log. This is because the Workflow Manager only runs on the master node.\n\n\nThe \"node-manager-startup\" and \"node-manager\" logs will appear for every node in a non-Docker OpenMPF cluster. The \"node-manager-startup\" log captures information about the nodemanager startup process, such as if any errors occurred. The \"node-manager\" log captures information about node manager execution, such as starting and stopping services.\n\n\nThe \"detection\" log captures information about initializing C++ detection components and how they handle job request and response messages.\n\n\nProperties Settings\n\n\nThis page allows a user to view the various OpenMPF properties configured automatically or by an admin user:\n\n\n\n\nStatistics\n\n\nThe \"Jobs\" tab on this page allows a user to view a bar graph representing the time it took to execute the longest running job for a given pipeline. Pipelines that do not have bars have not been used to run any jobs yet. 
Job statistics are preserved when the Workflow Manager is restarted.\n\n\n\n\nFor example, the DLIB FACE DETECTION PIPELINE was run twice. Note that the Y-axis in the bar graph has a logarithmic scale. Hovering the mouse over any bar in the graph will show more information. Information about each pipeline is listed below the graph.\n\n\nThe \"Processes\" tab on this page allows a user to view a table with information about the runtime of various internal Workflow Manager operations. The \"Count\" field represents the number of times each operation was run. The min, max, and mean are calculated over the set of times each operation was performed. Runtime information is reset when the Workflow Manager is restarted.\n\n\n\n\nREST API\n\n\nThis page allows a user to try out the \nvarious REST API endpoints\n provided by the Workflow Manager. It is intended to serve as a learning tool for technical users who wish to design and build systems that interact with the OpenMPF.\n\n\nAfter selecting a functional category, such as \"meta\", \"jobs\", \"statistics\", \"nodes\", \"pipelines\", or \"system-message\", each REST endpoint for that category is shown in a list. Selecting one of them will cause it to expand and reveal more information about the request and response structures. If the request takes any parameters then a section will appear that allows the user to manually specify them.\n\n\n\n\nIn the example above, the \"/rest/jobs/{id}\" endpoint was selected. It takes a required \"id\" parameter that corresponds to a previously run job and returns a JSON representation of that job's information. The screenshot below shows the result of specifying an \"id\" of \"1\", providing the \"mpf\" user credentials when prompted, and then clicking the \"Try it out!\" button:\n\n\n\n\nThe HTTP response information is shown below the \"Try it out!\" button. Note that the structure of the \"Response Body\" is the same as the response model shown in the \"Response Class\" directly underneath the \"/rest/jobs/{id}\" label.",
"title": "User Guide"
},
{
@@ -252,7 +252,7 @@
},
{
"location": "/User-Guide/index.html#logs",
- "text": "This page allows a user to view the various log files that are generated by system processes running on the various nodes in the OpenMPF cluster. A log file can be selected by first selecting a host from the \"Available Hosts\" drop-down and then selecting a log file from the \"Available Logs\" drop-down. The information in the log can be filtered for display based on the following log levels: ALL, TRACE, DEBUG, INFO, WARN, ERROR, or FATAL. Choosing a successive log level displays all information at that level and levels below (e.g., choosing WARN will cause all WARN, INFO, DEBUG, and TRACE information to be displayed, but will filter out ERROR and FATAL information). In general, all services of the same component type running on the same node write log messages to the same file. For example, all OCV face detection services on somehost-7-mpfd2 write log messages to the same \"ocv-face-detection\" log file. All OCV face detection services on somehost-7-mpfd3 write log messages to a different \"ocv-face-detection\" log file. Note that only the master node will have the \"workflow-manager\" log. This is because the workflow manager only runs on the master node. The \"node-manager-startup\" and \"node-manager\" logs will appear for every node in a non-Docker OpenMPF cluster. The \"node-manager-startup\" log captures information about the nodemanager startup process, such as if any errors occurred. The \"node-manager\" log captures information about node manager execution, such as starting and stopping services. The \"detection\" log captures information about initializing C++ detection components and how they handle job request and response messages.",
+ "text": "This page allows a user to view the various log files that are generated by system processes running on the various nodes in the OpenMPF cluster. A log file can be selected by first selecting a host from the \"Available Hosts\" drop-down and then selecting a log file from the \"Available Logs\" drop-down. The information in the log can be filtered for display based on the following log levels: ALL, TRACE, DEBUG, INFO, WARN, ERROR, or FATAL. Choosing a successive log level displays all information at that level and levels below (e.g., choosing WARN will cause all WARN, INFO, DEBUG, and TRACE information to be displayed, but will filter out ERROR and FATAL information). In general, all services of the same component type running on the same node write log messages to the same file. For example, all OCV face detection services on somehost-7-mpfd2 write log messages to the same \"ocv-face-detection\" log file. All OCV face detection services on somehost-7-mpfd3 write log messages to a different \"ocv-face-detection\" log file. Note that only the master node will have the \"workflow-manager\" log. This is because the Workflow Manager only runs on the master node. The \"node-manager-startup\" and \"node-manager\" logs will appear for every node in a non-Docker OpenMPF cluster. The \"node-manager-startup\" log captures information about the nodemanager startup process, such as if any errors occurred. The \"node-manager\" log captures information about node manager execution, such as starting and stopping services. The \"detection\" log captures information about initializing C++ detection components and how they handle job request and response messages.",
"title": "Logs"
},
{
@@ -262,12 +262,12 @@
},
{
"location": "/User-Guide/index.html#statistics",
- "text": "The \"Jobs\" tab on this page allows a user to view a bar graph representing the time it took to execute the longest running job for a given pipeline. Pipelines that do not have bars have not been used to run any jobs yet. Job statistics are preserved when the workflow manager is restarted. For example, the DLIB FACE DETECTION PIPELINE was run twice. Note that the Y-axis in the bar graph has a logarithmic scale. Hovering the mouse over any bar in the graph will show more information. Information about each pipeline is listed below the graph. The \"Processes\" tab on this page allows a user to view a table with information about the runtime of various internal workflow manager operations. The \"Count\" field represents the number of times each operation was run. The min, max, and mean are calculated over the set of times each operation was performed. Runtime information is reset when the workflow manager is restarted.",
+ "text": "The \"Jobs\" tab on this page allows a user to view a bar graph representing the time it took to execute the longest running job for a given pipeline. Pipelines that do not have bars have not been used to run any jobs yet. Job statistics are preserved when the Workflow Manager is restarted. For example, the DLIB FACE DETECTION PIPELINE was run twice. Note that the Y-axis in the bar graph has a logarithmic scale. Hovering the mouse over any bar in the graph will show more information. Information about each pipeline is listed below the graph. The \"Processes\" tab on this page allows a user to view a table with information about the runtime of various internal Workflow Manager operations. The \"Count\" field represents the number of times each operation was run. The min, max, and mean are calculated over the set of times each operation was performed. Runtime information is reset when the Workflow Manager is restarted.",
"title": "Statistics"
},
{
"location": "/User-Guide/index.html#rest-api",
- "text": "This page allows a user to try out the various REST API endpoints provided by the workflow manager. It is intended to serve as a learning tool for technical users who wish to design and build systems that interact with the OpenMPF. After selecting a functional category, such as \"meta\", \"jobs\", \"statistics\", \"nodes\", \"pipelines\", or \"system-message\", each REST endpoint for that category is shown in a list. Selecting one of them will cause it to expand and reveal more information about the request and response structures. If the request takes any parameters then a section will appear that allows the user to manually specify them. In the example above, the \"/rest/jobs/{id}\" endpoint was selected. It takes a required \"id\" parameter that corresponds to a previously run job and returns a JSON representation of that job's information. The screenshot below shows the result of specifying an \"id\" of \"1\", providing the \"mpf\" user credentials when prompted, and then clicking the \"Try it out!\" button: The HTTP response information is shown below the \"Try it out!\" button. Note that the structure of the \"Response Body\" is the same as the response model shown in the \"Response Class\" directly underneath the \"/rest/jobs/{id}\" label.",
+ "text": "This page allows a user to try out the various REST API endpoints provided by the Workflow Manager. It is intended to serve as a learning tool for technical users who wish to design and build systems that interact with the OpenMPF. After selecting a functional category, such as \"meta\", \"jobs\", \"statistics\", \"nodes\", \"pipelines\", or \"system-message\", each REST endpoint for that category is shown in a list. Selecting one of them will cause it to expand and reveal more information about the request and response structures. If the request takes any parameters then a section will appear that allows the user to manually specify them. In the example above, the \"/rest/jobs/{id}\" endpoint was selected. It takes a required \"id\" parameter that corresponds to a previously run job and returns a JSON representation of that job's information. The screenshot below shows the result of specifying an \"id\" of \"1\", providing the \"mpf\" user credentials when prompted, and then clicking the \"Try it out!\" button: The HTTP response information is shown below the \"Try it out!\" button. Note that the structure of the \"Response Body\" is the same as the response model shown in the \"Response Class\" directly underneath the \"/rest/jobs/{id}\" label.",
"title": "REST API"
},
{
@@ -362,47 +362,47 @@
},
{
"location": "/Feed-Forward-Guide/index.html",
- "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nIntroduction\n\n\nFeed forward is an optional behavior of OpenMPF that allows tracks from one detection stage of the pipeline to be directly \u201cfed into\u201d the next stage. It differs from the default segmenting behavior in the following major ways:\n\n\n\n\n\n\nThe next stage will only look at the frames that had detections in the previous stage. The default segmenting behavior results in \u201cfilling the gaps\u201d so that the next stage looks at all the frames between the start and end frames of the feed forward track, regardless of whether a detection was actually found in those frames.\n\n\n\n\n\n\nThe next stage can be configured to only look at the detection regions for the frames in the feed forward track. The default segmenting behavior does not pass the detection region information to the next stage, so the next stage looks at the whole frame region for every frame in the segment.\n\n\n\n\n\n\nThe next stage will process one sub-job per track generated in the previous stage. If the previous stage generated more than one track in a frame, say 3 tracks, then the next stage will process that frame a total of 3 times. Feed forward can be configured such that only the detection regions for those tracks are processed. If they are non-overlapping then there is no duplication of work. The default segmenting behavior will result in one sub-job that captures the frame associated with all 3 tracks.\n\n\n\n\n\n\nMotivation\n\n\nConsider using feed forward for the following reasons:\n\n\n\n\n\n\nYou have an algorithm that isn\u2019t capable of breaking down a frame into regions of interest. For example, face detection can take a whole frame and generate a separate detection region for each face in the frame. On the other hand, performing classification with the OpenCV Deep Neural Network (DNN) component will take that whole frame and generate a single detection that\u2019s the size of the frame\u2019s width and height. The OpenCV DNN component will produce better results if it operates on smaller regions that only capture the desired object to be classified. Using feed forward, you can create a pipeline so that OpenCV DNN component only processes regions with motion in them.\n\n\n\n\n\n\nYou wish to reduce processing time by creating a pipeline in which algorithms are chained from fastest to slowest. For example, a pipeline that starts with motion detection will only feed regions with motion to the next stage, which may be a compute-intensive face detection algorithm. Reducing the amount of data that algorithm needs to process will speed up run times.\n\n\n\n\n\n\n\n\nNOTE:\n Enabling feed forward results in more sub-jobs and more message passing between the workflow manager and components than the default segmenting behavior. Generally speaking, the more feed forward tracks, the greater the overhead cost. The cost may be outweighed by how feed forward can \u201cfilter out\u201d pixel data that doesn\u2019t need to be processed. Often, the greater the media resolution, the more pixel data is filtered out, and the greater the benefit.\n\n\n\n\nThe output of a feed forward pipeline is the intersection of each stage's output. 
For example, running a feed forward pipeline that contains a motion detector and a face detector will ultimately output detections where motion was detected in the first stage and a face was detected in the second stage.\n\n\nFirst Stage and Combining Properties\n\n\nWhen feed forward is enabled on a job, there is no change in behavior for the first stage of the pipeline because there is no track to feed in. In other words, the first stage will process the media file as though feed forward was not enabled. The tracks generated by the first stage will be passed to the second stage which will then be able to take advantage of the feed forward behavior.\n\n\n\n\nNOTE:\n When \nFEED_FORWARD_TYPE\n is set to anything other than \nNONE\n, the following properties will be ignored: \nFRAME_INTERVAL\n, \nUSE_KEY_FRAMES\n, \nSEARCH_REGION_*\n.\n\n\n\n\nIf you wish to use the above properties, then you can configure them for the first stage of the pipeline, making sure that \nFEED_FORWARD_TYPE\n is set to \nNONE\n, or not specified, for the first stage. You can then configure each subsequent stage to use feed forward. Because only the frames with detections, and those detection regions, are passed forward from the first stage, the subsequent stages will inherit the effects of those properties set on the first stage. \n\n\nFeed Forward Properties\n\n\nComponents that support feed forward have two algorithm properties that control the feed forward behavior: \nFEED_FORWARD_TYPE\n and \nFEED_FORWARD_TOP_CONFIDENCE_COUNT\n.\n\n\nFEED_FORWARD_TYPE\n can be set to the following values:\n\n\n\n\nNONE\n: Feed forward is disabled (default setting).\n\n\nFRAME\n: For each detection in the feed forward track, search the entire frame associated with that detection. The track's detection regions are ignored.\n\n\nSUPERSET_REGION\n: Using the feed forward track, generate a superset region (minimum area rectangle) that captures all of the detection regions in that track across all of the frames in that track. Refer to the \nSuperset Region\n section for more details. For each detection in the feed forward track, search the superset region.\n\n\nREGION\n: For each detection in the feed forward track, search the exact detection region.\n\n\n\n\n\n\nNOTE:\n When using \nREGION\n, the location of the region within the frame, and the size of the region, may be different for each detection in the feed forward track. Thus, \nREGION\n should not be used by algorithms that perform region tracking and require a consistent coordinate space from detection to detection. For those algorithms, use \nSUPERSET_REGION\n instead. That will ensure that each detection region is relative to the upper right corner of the superset region for that track.\n\n\n\n\nFEED_FORWARD_TOP_CONFIDENCE_COUNT\n allows you to drop low confidence detections from feed forward tracks. Setting the property to a value less than or equal to 0 has no effect. In that case all detections in the feed forward track will be processed.\n\n\nWhen \nFEED_FORWARD_TOP_CONFIDENCE_COUNT\n is set to a number greater than 0, say 5, then the top 5 detections in the feed forward track (based on highest confidence) will be processed. If the track contains less than 5 detections then all of the detections in the track will be processed. 
If one or more detections have the same confidence value, then the detection(s) with the lower frame index take precedence.\n\n\nSuperset Region\n\n\nA \u201csuperset region\u201d is the smallest region of interest that contains all of the detections for all of the frames in a track. This is also known as a \u201cunion\u201d or \n\u201cminimum bounding rectangle\"\n.\n\n\n\n\nFor example, consider a track representing a person moving from the upper left to the lower right. The track consists of 3 frames that have the following detection regions:\n\n\n\n\nFrame 0: \n(x = 10, y = 10, width = 10, height = 10)\n\n\nFrame 1: \n(x = 15, y = 15, width = 10, height = 10)\n\n\nFrame 2: \n(x = 20, y = 20, width = 10, height = 10)\n\n\n\n\nEach detection region is drawn with a solid green line in the above diagram. The blue line represents the full frame region. The superset region for the track is \n(x = 10, y = 10, width = 20, height = 20)\n, and is drawn with a dotted red line.\n\n\nThe major advantage of using a superset region is constant size. Some algorithms require the search space in each frame to be a constant size in order to successfully track objects.\n\n\nA disadvantage is that the superset region will often be larger than any specific detection region, so the search space is not restricted to the smallest possible size in each frame; however, in many cases the search space will be significantly smaller than the whole frame.\n\n\nIn the worst case, a feed forward track might, for example, capture a person moving from the upper left corner of a video to the lower right corner. In that case the superset region will be the entire width and height of the frame, so \nSUPERSET_REGION\n devolves into \nFRAME\n.\n\n\nIn a more typical case, a feed forward track might capture a person moving in the upper left quadrant of a video. In that case \nSUPERSET_REGION\n is able to filter out 75% of the rest of the frame data. In the example shown in the above diagram, \nSUPERSET_REGION\n is able to filter out 83% of the rest of the frame data.\n\n\n\n \n\n \n\n \nYour browser does not support the embedded video tag.\n\n \nClick here to download the video.\n\n \n\n\n\n\n\nThe above video shows three faces. For each face there is an inner bounding box that moves and an outer bounding box that does not. The inner bounding box represents the face detection in that frame, while the outer bounding box represents the superset region for the track associated with that face. Note that the bounding box for each face uses a different color. The colors are not related to those used in the above diagram.\n\n\nMPFVideoCapture and MPFImageReader Tools\n\n\nWhen developing a component, the \nC++ Batch Component API\n and \nPython Batch Component API\n include utilities that make it easier to support feed forward in your components. They work similarly, but only the C++ tools will be discussed here. The \nMPFVideoCapture\n class is a wrapper around OpenCV's \ncv::VideoCapture\n class. \nMPFVideoCapture\n works very similarly to \ncv::VideoCapture\n, except that it might modify the video frames based on job properties. From the point of view of someone using \nMPFVideoCapture\n, these modifications are mostly transparent. \nMPFVideoCapture\n makes it look like you are reading the original video file.\n\n\nConceptually, consider generating a new video from a feed forward track. 
The new video would have fewer frames (unless there was a detection in every frame) and possibly a smaller frame size.\n\n\nFor example, the original video file might be 30 frames long with 640x480 resolution. If the feed forward track found detections in frames 4, 7, and 10, then \nMPFVideoCapture\n will make it look like the video only has those 3 frames. If the feed forward type is \nSUPERSET_REGION\n or \nREGION,\n and each detection is 30x50 pixels, then \nMPFVideoCapture\n will make it look like the video's original resolution was 30x50 pixels.\n\n\nOne issue with this approach is that the detection frame numbers and bounding box will be relative to the modified video, not the original. To make the detections relative to the original video the \nMPFVideoCapture::ReverseTransform(MPFVideoTrack &videoTrack)\n function must be used.\n\n\nThe general pattern for using \nMPFVideoCapture\n is as follows:\n\n\nstd::vector OcvDnnDetection::GetDetections(const MPFVideoJob &job) {\n\nstd::vector tracks;\n MPFVideoCapture video_cap(job);\n\n cv::Mat frame;\n while (video_cap.Read(frame)) {\n // Process frames and detections to tracks vector\n }\n\n for (MPFVideoTrack &track : tracks) {\n video_cap.ReverseTransform(track);\n }\n\n return tracks;\n}\n\n\n\nMPFVideoCapture\n makes it look like the user is processing the original video, when in reality they are processing a modified version. To avoid confusion, this means that \nMPFVideoCapture\n should always be returning frames that are the same size because most users expect each frame of a video to be the same size.\n\n\nWhen using \nSUPERSET_REGION\n this is not an issue, since one bounding box is used for the entire track. However, when using \nREGION\n, each detection can be a different size, so it is not possible for \nMPFVideoCapture\n to return frames that are always the same size. Since this is a deviation from the expected behavior, and breaks the transparency of \nMPFVideoCapture\n, \nSUPERSET_REGION\n should usually be preferred over \nREGION\n. The \nREGION\n setting should only be used with components that explicitly state they support it (e.g. OcvDnnDetection). Those components may not perform region tracking, so processing frames of various sizes is not a problem.\n\n\nThe \nMPFImageReader\n class is similar to \nMPFVideoCapture\n, but it works on images instead of videos. \nMPFImageReader\n makes it look like the user is processing an original image, when in reality they are processing a modified version where the frame region is generated based on a detection (\nMPFImageLocation\n) fed forward from the previous stage of a pipeline. Note that \nSUPERSET_REGION\n and \nREGION\n have the same effect when working with images. \nMPFImageReader\n also has a reverse transform function.\n\n\nOpenCV DNN Component Tracking\n\n\nThe OpenCV DNN component does not generate detection regions of its own when performing classification. Its tracking behavior depends on whether feed forward is enabled or not. When feed forward is disabled, the component will process the entire region of each frame of a video. If one or more consecutive frames has the same highest confidence classification, then a new track is generated that contains those frames.\n\n\nWhen feed forward is enabled, the OpenCV DNN component will process the region of each frame of feed forward track according to the \nFEED_FORWARD_TYPE\n. It will generate one track that contains the same frames as the feed forward track. 
If \nFEED_FORWARD_TYPE\n is set to \nREGION\n then the OpenCV DNN track will contain (inherit) the same detection regions as the feed forward track. In any case, the \ndetectionProperties\n map for the detections in the OpenCV DNN track will include the \nCLASSIFICATION\n entries and possibly other OpenCV DNN component properties.\n\n\nFeed Forward Pipeline Examples\n\n\nGoogLeNet Classification with MOG Motion Detection and Feed Forward Region\n\n\nFirst, create the following action:\n\n\nCAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) ACTION\n+ Algorithm: DNNCV\n+ MODEL_NAME: googlenet\n+ SUBTRACT_BLUE_VALUE: 104.0\n+ SUBTRACT_GREEN_VALUE: 117.0\n+ SUBTRACT_RED_VALUE: 123.0\n+ FEED_FORWARD_TYPE: REGION\n\n\n\nThen create the following task:\n\n\nCAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) TASK\n+ CAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) ACTION\n\n\n\nThen create the following pipeline:\n\n\nCAFFE GOOGLENET DETECTION (WITH MOG MOTION TRACKING AND FEED FORWARD REGION) PIPELINE\n+ MOG MOTION DETECTION (WITH TRACKING) TASK\n+ CAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) TASK\n\n\n\nRunning this pipeline will result in OpenCV DNN tracks that contain detections where there was MOG motion. Each detection in each track will have an OpenCV DNN \nCLASSIFICATION\n entry. Each track has a 1-to-1 correspondence with a MOG motion track.\n\n\nRefer to \nrunMogThenCaffeFeedForwardExactRegionTest()\n in the \nTestSystemOnDiff\n class for a system test that demonstrates this behavior. Refer to \nrunMogThenCaffeFeedForwardSupersetRegionTest()\n in that class for a system test that uses \nSUPERSET_REGION\n instead. Refer to \nrunMogThenCaffeFeedForwardFullFrameTest()\n for a system test that uses \nFRAME\n instead.\n\n\n\n\nNOTE:\n Short and/or spurious MOG motion tracks will result in more overhead work when performing feed forward. To mitigate this, consider setting the \nMERGE_TRACKS\n, \nMIN_GAP_BETWEEN_TRACKS\n, and \nMIN_TRACK_LENGTH\n properties to generate longer motion tracks and discard short and/or spurious motion tracks.\n\n\nNOTE:\n It doesn\u2019t make sense to use \nFEED_FORWARD_TOP_CONFIDENCE_COUNT\n on a pipeline stage that follows a MOG or SuBSENSE motion detection stage. That\u2019s because those motion detectors don\u2019t generate tracks with confidence values. Instead, \nFEED_FORWARD_TOP_CONFIDENCE_COUNT\n could potentially be used when feeding person tracks into a face detector, for example, if those person tracks have confidence values.\n\n\n\n\nOCV Face Detection with MOG Motion Detection and Feed Forward Superset Region\n\n\nFirst, create the following action:\n\n\nOCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) ACTION\n+ Algorithm: FACECV\n+ FEED_FORWARD_TYPE: SUPERSET_REGION\n\n\n\nThen create the following task:\n\n\nOCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) TASK\n+ OCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) ACTION\n\n\n\nThen create the following pipeline:\n\n\nOCV FACE DETECTION (WITH MOG MOTION TRACKING AND FEED FORWARD SUPERSET REGION) PIPELINE\n+ MOG MOTION DETECTION (WITH TRACKING) TASK\n+ OCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) TASK\n\n\n\nRunning this pipeline will result in OCV face tracks that contain detections where there was MOG motion. Each track has a 1-to-1 correspondence with a MOG motion track.\n\n\nRefer to \nrunMogThenOcvFaceFeedForwardRegionTest()\n in the \nTestSystemOnDiff\n class for a system test that demonstrates this behavior.",
+ "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nIntroduction\n\n\nFeed forward is an optional behavior of OpenMPF that allows tracks from one detection stage of the pipeline to be\ndirectly \u201cfed into\u201d the next stage. It differs from the default segmenting behavior in the following major ways:\n\n\n\n\n\n\nThe next stage will only look at the frames that had detections in the previous stage. The default segmenting\n behavior results in \u201cfilling the gaps\u201d so that the next stage looks at all the frames between the start and end\n frames of the feed forward track, regardless of whether a detection was actually found in those frames.\n\n\n\n\n\n\nThe next stage can be configured to only look at the detection regions for the frames in the feed forward track. The\n default segmenting behavior does not pass the detection region information to the next stage, so the next stage looks\n at the whole frame region for every frame in the segment.\n\n\n\n\n\n\nThe next stage will process one sub-job per track generated in the previous stage. If the previous stage generated\n more than one track in a frame, say 3 tracks, then the next stage will process that frame a total of 3 times. Feed\n forward can be configured such that only the detection regions for those tracks are processed. If they are\n non-overlapping then there is no duplication of work. The default segmenting behavior will result in one sub-job that\n captures the frame associated with all 3 tracks.\n\n\n\n\n\n\nMotivation\n\n\nConsider using feed forward for the following reasons:\n\n\n\n\n\n\nYou have an algorithm that isn\u2019t capable of breaking down a frame into regions of interest. For example, face\n detection can take a whole frame and generate a separate detection region for each face in the frame. On the other\n hand, performing classification with the OpenCV Deep Neural Network (DNN) component will take that whole frame and\n generate a single detection that\u2019s the size of the frame\u2019s width and height. The OpenCV DNN component will produce\n better results if it operates on smaller regions that only capture the desired object to be classified. Using feed\n forward, you can create a pipeline so that OpenCV DNN component only processes regions with motion in them.\n\n\n\n\n\n\nYou wish to reduce processing time by creating a pipeline in which algorithms are chained from fastest to slowest.\n For example, a pipeline that starts with motion detection will only feed regions with motion to the next stage, which\n may be a compute-intensive face detection algorithm. Reducing the amount of data that algorithm needs to process will\n speed up run times.\n\n\n\n\n\n\n\n\nNOTE:\n Enabling feed forward results in more sub-jobs and more message passing between the Workflow Manager and\ncomponents than the default segmenting behavior. Generally speaking, the more feed forward tracks, the greater the\noverhead cost. The cost may be outweighed by how feed forward can \u201cfilter out\u201d pixel data that doesn\u2019t need to be\nprocessed. Often, the greater the media resolution, the more pixel data is filtered out, and the greater the benefit.\n\n\n\n\nThe output of a feed forward pipeline is the intersection of each stage's output. 
For example, running a feed forward\npipeline that contains a motion detector and a face detector will ultimately output detections where motion was detected\nin the first stage and a face was detected in the second stage.\n\n\nFirst Stage and Combining Properties\n\n\nWhen feed forward is enabled on a job, there is no change in behavior for the first stage of the pipeline because there\nis no track to feed in. In other words, the first stage will process the media file as though feed forward was not\nenabled. The tracks generated by the first stage will be passed to the second stage which will then be able to take\nadvantage of the feed forward behavior.\n\n\n\n\nNOTE:\n When \nFEED_FORWARD_TYPE\n is set to anything other than \nNONE\n, the following properties will be ignored:\n\nFRAME_INTERVAL\n, \nUSE_KEY_FRAMES\n, \nSEARCH_REGION_*\n.\n\n\n\n\nIf you wish to use the above properties, then you can configure them for the first stage of the pipeline, making sure\nthat \nFEED_FORWARD_TYPE\n is set to \nNONE\n, or not specified, for the first stage. You can then configure each subsequent\nstage to use feed forward. Because only the frames with detections, and those detection regions, are passed forward from\nthe first stage, the subsequent stages will inherit the effects of those properties set on the first stage. \n\n\nFeed Forward Properties\n\n\nComponents that support feed forward have two algorithm properties that control the feed forward behavior:\n\nFEED_FORWARD_TYPE\n and \nFEED_FORWARD_TOP_QUALITY_COUNT\n.\n\n\nFEED_FORWARD_TYPE\n can be set to the following values:\n\n\n\n\nNONE\n: Feed forward is disabled (default setting).\n\n\nFRAME\n: For each detection in the feed forward track, search the entire frame associated with that detection. The\n track's detection regions are ignored.\n\n\nSUPERSET_REGION\n: Using the feed forward track, generate a superset region (minimum area rectangle) that captures all\n of the detection regions in that track across all of the frames in that track. Refer to the \nSuperset\n Region\n section for more details. For each detection in the feed forward track, search the superset\n region.\n\n\nREGION\n: For each detection in the feed forward track, search the exact detection region.\n\n\n\n\n\n\nNOTE:\n When using \nREGION\n, the location of the region within the frame, and the size of the region, may be\ndifferent for each detection in the feed forward track. Thus, \nREGION\n should not be used by algorithms that perform\nregion tracking and require a consistent coordinate space from detection to detection. For those algorithms, use\n\nSUPERSET_REGION\n instead. That will ensure that each detection region is relative to the upper right corner of the\nsuperset region for that track.\n\n\n\n\nFEED_FORWARD_TOP_QUALITY_COUNT\n allows you to drop low quality detections from feed forward tracks. Setting the\nproperty to a value less than or equal to 0 has no effect. In that case all detections in the feed forward track will be\nprocessed.\n\n\nWhen \nFEED_FORWARD_TOP_QUALITY_COUNT\n is set to a number greater than 0, say 5, then the top 5 highest quality\ndetections in the feed forward track will be processed. Determination of quality is based on the job property\n\nQUALITY_SELECTION_PROPERTY\n, which defaults to \nCONFIDENCE\n, but may be set to a different detection property. Refer to\nthe \nQuality Selection Guide\n. If the track contains less than 5 detections then all\nof the detections in the track will be processed. 
If one or more detections have the same quality value, then the\ndetection(s) with the lower frame index take precedence.\n\n\nSuperset Region\n\n\nA \u201csuperset region\u201d is the smallest region of interest that contains all of the detections for all of the frames in a\ntrack. This is also known as a \u201cunion\u201d or \n\u201cminimum bounding\nrectangle\"\n.\n\n\n\n\nFor example, consider a track representing a person moving from the upper left to the lower right. The track consists of\n3 frames that have the following detection regions:\n\n\n\n\nFrame 0: \n(x = 10, y = 10, width = 10, height = 10)\n\n\nFrame 1: \n(x = 15, y = 15, width = 10, height = 10)\n\n\nFrame 2: \n(x = 20, y = 20, width = 10, height = 10)\n\n\n\n\nEach detection region is drawn with a solid green line in the above diagram. The blue line represents the full frame\nregion. The superset region for the track is \n(x = 10, y = 10, width = 20, height = 20)\n, and is drawn with a dotted red\nline.\n\n\nThe major advantage of using a superset region is constant size. Some algorithms require the search space in each frame\nto be a constant size in order to successfully track objects.\n\n\nA disadvantage is that the superset region will often be larger than any specific detection region, so the search space\nis not restricted to the smallest possible size in each frame; however, in many cases the search space will be\nsignificantly smaller than the whole frame.\n\n\nIn the worst case, a feed forward track might, for example, capture a person moving from the upper left corner of a\nvideo to the lower right corner. In that case the superset region will be the entire width and height of the frame, so\n\nSUPERSET_REGION\n devolves into \nFRAME\n.\n\n\nIn a more typical case, a feed forward track might capture a person moving in the upper left quadrant of a video. In\nthat case \nSUPERSET_REGION\n is able to filter out 75% of the rest of the frame data. In the example shown in the above\ndiagram, \nSUPERSET_REGION\n is able to filter out 83% of the rest of the frame data.\n\n\n\n \n\n \n\n \nYour browser does not support the embedded video tag.\n\n \nClick here to download the video.\n\n \n\n\n\n\n\nThe above video shows three faces. For each face there is an inner bounding box that moves and an outer bounding box\nthat does not. The inner bounding box represents the face detection in that frame, while the outer bounding box\nrepresents the superset region for the track associated with that face. Note that the bounding box for each face uses a\ndifferent color. The colors are not related to those used in the above diagram.\n\n\nMPFVideoCapture and MPFImageReader Tools\n\n\nWhen developing a component, the \nC++ Batch Component API\n and \nPython Batch\nComponent API\n include utilities that make it easier to support feed forward in\nyour components. They work similarly, but only the C++ tools will be discussed here. The \nMPFVideoCapture\n class is a\nwrapper around OpenCV's \ncv::VideoCapture\n class. \nMPFVideoCapture\n works very similarly to \ncv::VideoCapture\n, except\nthat it might modify the video frames based on job properties. From the point of view of someone using\n\nMPFVideoCapture\n, these modifications are mostly transparent. \nMPFVideoCapture\n makes it look like you are reading the\noriginal video file.\n\n\nConceptually, consider generating a new video from a feed forward track. 
The new video would have fewer frames (unless\nthere was a detection in every frame) and possibly a smaller frame size.\n\n\nFor example, the original video file might be 30 frames long with 640x480 resolution. If the feed forward track found\ndetections in frames 4, 7, and 10, then \nMPFVideoCapture\n will make it look like the video only has those 3 frames. If\nthe feed forward type is \nSUPERSET_REGION\n or \nREGION,\n and each detection is 30x50 pixels, then \nMPFVideoCapture\n will\nmake it look like the video's original resolution was 30x50 pixels.\n\n\nOne issue with this approach is that the detection frame numbers and bounding box will be relative to the modified\nvideo, not the original. To make the detections relative to the original video the\n\nMPFVideoCapture::ReverseTransform(MPFVideoTrack &videoTrack)\n function must be used.\n\n\nThe general pattern for using \nMPFVideoCapture\n is as follows:\n\n\nstd::vector<MPFVideoTrack> OcvDnnDetection::GetDetections(const MPFVideoJob &job) {\n std::vector<MPFVideoTrack> tracks;\n MPFVideoCapture video_cap(job);\n\n cv::Mat frame;\n while (video_cap.Read(frame)) {\n // Process frames and add detections to the tracks vector\n }\n\n for (MPFVideoTrack &track : tracks) {\n video_cap.ReverseTransform(track);\n }\n\n return tracks;\n}\n\n\n\nMPFVideoCapture\n makes it look like the user is processing the original video, when in reality they are processing a\nmodified version. To avoid confusion, this means that \nMPFVideoCapture\n should always be returning frames that are the\nsame size because most users expect each frame of a video to be the same size.\n\n\nWhen using \nSUPERSET_REGION\n this is not an issue, since one bounding box is used for the entire track. However, when\nusing \nREGION\n, each detection can be a different size, so it is not possible for \nMPFVideoCapture\n to return frames\nthat are always the same size. Since this is a deviation from the expected behavior, and breaks the transparency of\n\nMPFVideoCapture\n, \nSUPERSET_REGION\n should usually be preferred over \nREGION\n. The \nREGION\n setting should only be used\nwith components that explicitly state they support it (e.g. OcvDnnDetection). Those components may not perform region\ntracking, so processing frames of various sizes is not a problem.\n\n\nThe \nMPFImageReader\n class is similar to \nMPFVideoCapture\n, but it works on images instead of videos. \nMPFImageReader\n\nmakes it look like the user is processing an original image, when in reality they are processing a modified version\nwhere the frame region is generated based on a detection (\nMPFImageLocation\n) fed forward from the previous stage of a\npipeline. Note that \nSUPERSET_REGION\n and \nREGION\n have the same effect when working with images. \nMPFImageReader\n also\nhas a reverse transform function.\n\n\nOpenCV DNN Component Tracking\n\n\nThe OpenCV DNN component does not generate detection regions of its own when performing classification. Its tracking\nbehavior depends on whether feed forward is enabled or not. When feed forward is disabled, the component will process\nthe entire region of each frame of a video. If one or more consecutive frames have the same highest confidence\nclassification, then a new track is generated that contains those frames.\n\n\nWhen feed forward is enabled, the OpenCV DNN component will process the region of each frame of the feed forward track\naccording to the \nFEED_FORWARD_TYPE\n. It will generate one track that contains the same frames as the feed forward\ntrack. 
If \nFEED_FORWARD_TYPE\n is set to \nREGION\n then the OpenCV DNN track will contain (inherit) the same detection\nregions as the feed forward track. In any case, the \ndetectionProperties\n map for the detections in the OpenCV DNN track\nwill include the \nCLASSIFICATION\n entries and possibly other OpenCV DNN component properties.\n\n\nFeed Forward Pipeline Examples\n\n\nGoogLeNet Classification with MOG Motion Detection and Feed Forward Region\n\n\nFirst, create the following action:\n\n\nCAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) ACTION\n+ Algorithm: DNNCV\n+ MODEL_NAME: googlenet\n+ SUBTRACT_BLUE_VALUE: 104.0\n+ SUBTRACT_GREEN_VALUE: 117.0\n+ SUBTRACT_RED_VALUE: 123.0\n+ FEED_FORWARD_TYPE: REGION\n\n\n\nThen create the following task:\n\n\nCAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) TASK\n+ CAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) ACTION\n\n\n\nThen create the following pipeline:\n\n\nCAFFE GOOGLENET DETECTION (WITH MOG MOTION TRACKING AND FEED FORWARD REGION) PIPELINE\n+ MOG MOTION DETECTION (WITH TRACKING) TASK\n+ CAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) TASK\n\n\n\nRunning this pipeline will result in OpenCV DNN tracks that contain detections where there was MOG motion. Each\ndetection in each track will have an OpenCV DNN \nCLASSIFICATION\n entry. Each track has a 1-to-1 correspondence with a\nMOG motion track.\n\n\nRefer to \nrunMogThenCaffeFeedForwardExactRegionTest()\n in the\n\nTestSystemOnDiff\n\nclass for a system test that demonstrates this behavior. Refer to \nrunMogThenCaffeFeedForwardSupersetRegionTest()\n in\nthat class for a system test that uses \nSUPERSET_REGION\n instead. Refer to \nrunMogThenCaffeFeedForwardFullFrameTest()\n\nfor a system test that uses \nFRAME\n instead.\n\n\n\n\nNOTE:\n Short and/or spurious MOG motion tracks will result in more overhead work when performing feed forward. To\nmitigate this, consider setting the \nMERGE_TRACKS\n, \nMIN_GAP_BETWEEN_TRACKS\n, and \nMIN_TRACK_LENGTH\n properties to\ngenerate longer motion tracks and discard short and/or spurious motion tracks.\n\n\nNOTE:\n It doesn\u2019t make sense to use \nFEED_FORWARD_TOP_QUALITY_COUNT\n on a pipeline stage that follows a MOG or\nSuBSENSE motion detection stage. That\u2019s because those motion detectors don\u2019t generate tracks with confidence values\n(\nCONFIDENCE\n being the default value for the \nQUALITY_SELECTION_PROPERTY\n job property). Instead,\n\nFEED_FORWARD_TOP_QUALITY_COUNT\n could potentially be used when feeding person tracks into a face detector, for\nexample, if the detections in those person tracks have the requested \nQUALITY_SELECTION_PROPERTY\n set.\n\n\n\n\nOCV Face Detection with MOG Motion Detection and Feed Forward Superset Region\n\n\nFirst, create the following action:\n\n\nOCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) ACTION\n+ Algorithm: FACECV\n+ FEED_FORWARD_TYPE: SUPERSET_REGION\n\n\n\nThen create the following task:\n\n\nOCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) TASK\n+ OCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) ACTION\n\n\n\nThen create the following pipeline:\n\n\nOCV FACE DETECTION (WITH MOG MOTION TRACKING AND FEED FORWARD SUPERSET REGION) PIPELINE\n+ MOG MOTION DETECTION (WITH TRACKING) TASK\n+ OCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) TASK\n\n\n\nRunning this pipeline will result in OCV face tracks that contain detections where there was MOG motion. 
Each track has\na 1-to-1 correspondence with a MOG motion track.\n\n\nRefer to \nrunMogThenOcvFaceFeedForwardRegionTest()\n in the\n\nTestSystemOnDiff\n\nclass for a system test that demonstrates this behavior.",
"title": "Feed Forward Guide"
},
{
"location": "/Feed-Forward-Guide/index.html#introduction",
- "text": "Feed forward is an optional behavior of OpenMPF that allows tracks from one detection stage of the pipeline to be directly \u201cfed into\u201d the next stage. It differs from the default segmenting behavior in the following major ways: The next stage will only look at the frames that had detections in the previous stage. The default segmenting behavior results in \u201cfilling the gaps\u201d so that the next stage looks at all the frames between the start and end frames of the feed forward track, regardless of whether a detection was actually found in those frames. The next stage can be configured to only look at the detection regions for the frames in the feed forward track. The default segmenting behavior does not pass the detection region information to the next stage, so the next stage looks at the whole frame region for every frame in the segment. The next stage will process one sub-job per track generated in the previous stage. If the previous stage generated more than one track in a frame, say 3 tracks, then the next stage will process that frame a total of 3 times. Feed forward can be configured such that only the detection regions for those tracks are processed. If they are non-overlapping then there is no duplication of work. The default segmenting behavior will result in one sub-job that captures the frame associated with all 3 tracks.",
+ "text": "Feed forward is an optional behavior of OpenMPF that allows tracks from one detection stage of the pipeline to be\ndirectly \u201cfed into\u201d the next stage. It differs from the default segmenting behavior in the following major ways: The next stage will only look at the frames that had detections in the previous stage. The default segmenting\n behavior results in \u201cfilling the gaps\u201d so that the next stage looks at all the frames between the start and end\n frames of the feed forward track, regardless of whether a detection was actually found in those frames. The next stage can be configured to only look at the detection regions for the frames in the feed forward track. The\n default segmenting behavior does not pass the detection region information to the next stage, so the next stage looks\n at the whole frame region for every frame in the segment. The next stage will process one sub-job per track generated in the previous stage. If the previous stage generated\n more than one track in a frame, say 3 tracks, then the next stage will process that frame a total of 3 times. Feed\n forward can be configured such that only the detection regions for those tracks are processed. If they are\n non-overlapping then there is no duplication of work. The default segmenting behavior will result in one sub-job that\n captures the frame associated with all 3 tracks.",
"title": "Introduction"
},
{
"location": "/Feed-Forward-Guide/index.html#motivation",
- "text": "Consider using feed forward for the following reasons: You have an algorithm that isn\u2019t capable of breaking down a frame into regions of interest. For example, face detection can take a whole frame and generate a separate detection region for each face in the frame. On the other hand, performing classification with the OpenCV Deep Neural Network (DNN) component will take that whole frame and generate a single detection that\u2019s the size of the frame\u2019s width and height. The OpenCV DNN component will produce better results if it operates on smaller regions that only capture the desired object to be classified. Using feed forward, you can create a pipeline so that OpenCV DNN component only processes regions with motion in them. You wish to reduce processing time by creating a pipeline in which algorithms are chained from fastest to slowest. For example, a pipeline that starts with motion detection will only feed regions with motion to the next stage, which may be a compute-intensive face detection algorithm. Reducing the amount of data that algorithm needs to process will speed up run times. NOTE: Enabling feed forward results in more sub-jobs and more message passing between the workflow manager and components than the default segmenting behavior. Generally speaking, the more feed forward tracks, the greater the overhead cost. The cost may be outweighed by how feed forward can \u201cfilter out\u201d pixel data that doesn\u2019t need to be processed. Often, the greater the media resolution, the more pixel data is filtered out, and the greater the benefit. The output of a feed forward pipeline is the intersection of each stage's output. For example, running a feed forward pipeline that contains a motion detector and a face detector will ultimately output detections where motion was detected in the first stage and a face was detected in the second stage.",
+ "text": "Consider using feed forward for the following reasons: You have an algorithm that isn\u2019t capable of breaking down a frame into regions of interest. For example, face\n detection can take a whole frame and generate a separate detection region for each face in the frame. On the other\n hand, performing classification with the OpenCV Deep Neural Network (DNN) component will take that whole frame and\n generate a single detection that\u2019s the size of the frame\u2019s width and height. The OpenCV DNN component will produce\n better results if it operates on smaller regions that only capture the desired object to be classified. Using feed\n forward, you can create a pipeline so that OpenCV DNN component only processes regions with motion in them. You wish to reduce processing time by creating a pipeline in which algorithms are chained from fastest to slowest.\n For example, a pipeline that starts with motion detection will only feed regions with motion to the next stage, which\n may be a compute-intensive face detection algorithm. Reducing the amount of data that algorithm needs to process will\n speed up run times. NOTE: Enabling feed forward results in more sub-jobs and more message passing between the Workflow Manager and\ncomponents than the default segmenting behavior. Generally speaking, the more feed forward tracks, the greater the\noverhead cost. The cost may be outweighed by how feed forward can \u201cfilter out\u201d pixel data that doesn\u2019t need to be\nprocessed. Often, the greater the media resolution, the more pixel data is filtered out, and the greater the benefit. The output of a feed forward pipeline is the intersection of each stage's output. For example, running a feed forward\npipeline that contains a motion detector and a face detector will ultimately output detections where motion was detected\nin the first stage and a face was detected in the second stage.",
"title": "Motivation"
},
{
"location": "/Feed-Forward-Guide/index.html#first-stage-and-combining-properties",
- "text": "When feed forward is enabled on a job, there is no change in behavior for the first stage of the pipeline because there is no track to feed in. In other words, the first stage will process the media file as though feed forward was not enabled. The tracks generated by the first stage will be passed to the second stage which will then be able to take advantage of the feed forward behavior. NOTE: When FEED_FORWARD_TYPE is set to anything other than NONE , the following properties will be ignored: FRAME_INTERVAL , USE_KEY_FRAMES , SEARCH_REGION_* . If you wish to use the above properties, then you can configure them for the first stage of the pipeline, making sure that FEED_FORWARD_TYPE is set to NONE , or not specified, for the first stage. You can then configure each subsequent stage to use feed forward. Because only the frames with detections, and those detection regions, are passed forward from the first stage, the subsequent stages will inherit the effects of those properties set on the first stage.",
+ "text": "When feed forward is enabled on a job, there is no change in behavior for the first stage of the pipeline because there\nis no track to feed in. In other words, the first stage will process the media file as though feed forward was not\nenabled. The tracks generated by the first stage will be passed to the second stage which will then be able to take\nadvantage of the feed forward behavior. NOTE: When FEED_FORWARD_TYPE is set to anything other than NONE , the following properties will be ignored: FRAME_INTERVAL , USE_KEY_FRAMES , SEARCH_REGION_* . If you wish to use the above properties, then you can configure them for the first stage of the pipeline, making sure\nthat FEED_FORWARD_TYPE is set to NONE , or not specified, for the first stage. You can then configure each subsequent\nstage to use feed forward. Because only the frames with detections, and those detection regions, are passed forward from\nthe first stage, the subsequent stages will inherit the effects of those properties set on the first stage.",
"title": "First Stage and Combining Properties"
},
{
"location": "/Feed-Forward-Guide/index.html#feed-forward-properties",
- "text": "Components that support feed forward have two algorithm properties that control the feed forward behavior: FEED_FORWARD_TYPE and FEED_FORWARD_TOP_CONFIDENCE_COUNT . FEED_FORWARD_TYPE can be set to the following values: NONE : Feed forward is disabled (default setting). FRAME : For each detection in the feed forward track, search the entire frame associated with that detection. The track's detection regions are ignored. SUPERSET_REGION : Using the feed forward track, generate a superset region (minimum area rectangle) that captures all of the detection regions in that track across all of the frames in that track. Refer to the Superset Region section for more details. For each detection in the feed forward track, search the superset region. REGION : For each detection in the feed forward track, search the exact detection region. NOTE: When using REGION , the location of the region within the frame, and the size of the region, may be different for each detection in the feed forward track. Thus, REGION should not be used by algorithms that perform region tracking and require a consistent coordinate space from detection to detection. For those algorithms, use SUPERSET_REGION instead. That will ensure that each detection region is relative to the upper right corner of the superset region for that track. FEED_FORWARD_TOP_CONFIDENCE_COUNT allows you to drop low confidence detections from feed forward tracks. Setting the property to a value less than or equal to 0 has no effect. In that case all detections in the feed forward track will be processed. When FEED_FORWARD_TOP_CONFIDENCE_COUNT is set to a number greater than 0, say 5, then the top 5 detections in the feed forward track (based on highest confidence) will be processed. If the track contains less than 5 detections then all of the detections in the track will be processed. If one or more detections have the same confidence value, then the detection(s) with the lower frame index take precedence.",
+ "text": "Components that support feed forward have two algorithm properties that control the feed forward behavior: FEED_FORWARD_TYPE and FEED_FORWARD_TOP_QUALITY_COUNT . FEED_FORWARD_TYPE can be set to the following values: NONE : Feed forward is disabled (default setting). FRAME : For each detection in the feed forward track, search the entire frame associated with that detection. The\n track's detection regions are ignored. SUPERSET_REGION : Using the feed forward track, generate a superset region (minimum area rectangle) that captures all\n of the detection regions in that track across all of the frames in that track. Refer to the Superset\n Region section for more details. For each detection in the feed forward track, search the superset\n region. REGION : For each detection in the feed forward track, search the exact detection region. NOTE: When using REGION , the location of the region within the frame, and the size of the region, may be\ndifferent for each detection in the feed forward track. Thus, REGION should not be used by algorithms that perform\nregion tracking and require a consistent coordinate space from detection to detection. For those algorithms, use SUPERSET_REGION instead. That will ensure that each detection region is relative to the upper right corner of the\nsuperset region for that track. FEED_FORWARD_TOP_QUALITY_COUNT allows you to drop low quality detections from feed forward tracks. Setting the\nproperty to a value less than or equal to 0 has no effect. In that case all detections in the feed forward track will be\nprocessed. When FEED_FORWARD_TOP_QUALITY_COUNT is set to a number greater than 0, say 5, then the top 5 highest quality\ndetections in the feed forward track will be processed. Determination of quality is based on the job property QUALITY_SELECTION_PROPERTY , which defaults to CONFIDENCE , but may be set to a different detection property. Refer to\nthe Quality Selection Guide . If the track contains less than 5 detections then all\nof the detections in the track will be processed. If one or more detections have the same quality value, then the\ndetection(s) with the lower frame index take precedence.",
"title": "Feed Forward Properties"
},
{
"location": "/Feed-Forward-Guide/index.html#superset-region",
- "text": "A \u201csuperset region\u201d is the smallest region of interest that contains all of the detections for all of the frames in a track. This is also known as a \u201cunion\u201d or \u201cminimum bounding rectangle\" . For example, consider a track representing a person moving from the upper left to the lower right. The track consists of 3 frames that have the following detection regions: Frame 0: (x = 10, y = 10, width = 10, height = 10) Frame 1: (x = 15, y = 15, width = 10, height = 10) Frame 2: (x = 20, y = 20, width = 10, height = 10) Each detection region is drawn with a solid green line in the above diagram. The blue line represents the full frame region. The superset region for the track is (x = 10, y = 10, width = 20, height = 20) , and is drawn with a dotted red line. The major advantage of using a superset region is constant size. Some algorithms require the search space in each frame to be a constant size in order to successfully track objects. A disadvantage is that the superset region will often be larger than any specific detection region, so the search space is not restricted to the smallest possible size in each frame; however, in many cases the search space will be significantly smaller than the whole frame. In the worst case, a feed forward track might, for example, capture a person moving from the upper left corner of a video to the lower right corner. In that case the superset region will be the entire width and height of the frame, so SUPERSET_REGION devolves into FRAME . In a more typical case, a feed forward track might capture a person moving in the upper left quadrant of a video. In that case SUPERSET_REGION is able to filter out 75% of the rest of the frame data. In the example shown in the above diagram, SUPERSET_REGION is able to filter out 83% of the rest of the frame data. \n \n \n Your browser does not support the embedded video tag. \n Click here to download the video. \n The above video shows three faces. For each face there is an inner bounding box that moves and an outer bounding box that does not. The inner bounding box represents the face detection in that frame, while the outer bounding box represents the superset region for the track associated with that face. Note that the bounding box for each face uses a different color. The colors are not related to those used in the above diagram.",
+ "text": "A \u201csuperset region\u201d is the smallest region of interest that contains all of the detections for all of the frames in a\ntrack. This is also known as a \u201cunion\u201d or \u201cminimum bounding\nrectangle\" . For example, consider a track representing a person moving from the upper left to the lower right. The track consists of\n3 frames that have the following detection regions: Frame 0: (x = 10, y = 10, width = 10, height = 10) Frame 1: (x = 15, y = 15, width = 10, height = 10) Frame 2: (x = 20, y = 20, width = 10, height = 10) Each detection region is drawn with a solid green line in the above diagram. The blue line represents the full frame\nregion. The superset region for the track is (x = 10, y = 10, width = 20, height = 20) , and is drawn with a dotted red\nline. The major advantage of using a superset region is constant size. Some algorithms require the search space in each frame\nto be a constant size in order to successfully track objects. A disadvantage is that the superset region will often be larger than any specific detection region, so the search space\nis not restricted to the smallest possible size in each frame; however, in many cases the search space will be\nsignificantly smaller than the whole frame. In the worst case, a feed forward track might, for example, capture a person moving from the upper left corner of a\nvideo to the lower right corner. In that case the superset region will be the entire width and height of the frame, so SUPERSET_REGION devolves into FRAME . In a more typical case, a feed forward track might capture a person moving in the upper left quadrant of a video. In\nthat case SUPERSET_REGION is able to filter out 75% of the rest of the frame data. In the example shown in the above\ndiagram, SUPERSET_REGION is able to filter out 83% of the rest of the frame data. \n \n \n Your browser does not support the embedded video tag. \n Click here to download the video. \n The above video shows three faces. For each face there is an inner bounding box that moves and an outer bounding box\nthat does not. The inner bounding box represents the face detection in that frame, while the outer bounding box\nrepresents the superset region for the track associated with that face. Note that the bounding box for each face uses a\ndifferent color. The colors are not related to those used in the above diagram.",
"title": "Superset Region"
},
{
"location": "/Feed-Forward-Guide/index.html#mpfvideocapture-and-mpfimagereader-tools",
- "text": "When developing a component, the C++ Batch Component API and Python Batch Component API include utilities that make it easier to support feed forward in your components. They work similarly, but only the C++ tools will be discussed here. The MPFVideoCapture class is a wrapper around OpenCV's cv::VideoCapture class. MPFVideoCapture works very similarly to cv::VideoCapture , except that it might modify the video frames based on job properties. From the point of view of someone using MPFVideoCapture , these modifications are mostly transparent. MPFVideoCapture makes it look like you are reading the original video file. Conceptually, consider generating a new video from a feed forward track. The new video would have fewer frames (unless there was a detection in every frame) and possibly a smaller frame size. For example, the original video file might be 30 frames long with 640x480 resolution. If the feed forward track found detections in frames 4, 7, and 10, then MPFVideoCapture will make it look like the video only has those 3 frames. If the feed forward type is SUPERSET_REGION or REGION, and each detection is 30x50 pixels, then MPFVideoCapture will make it look like the video's original resolution was 30x50 pixels. One issue with this approach is that the detection frame numbers and bounding box will be relative to the modified video, not the original. To make the detections relative to the original video the MPFVideoCapture::ReverseTransform(MPFVideoTrack &videoTrack) function must be used. The general pattern for using MPFVideoCapture is as follows: std::vector OcvDnnDetection::GetDetections(const MPFVideoJob &job) {\n\nstd::vector tracks;\n MPFVideoCapture video_cap(job);\n\n cv::Mat frame;\n while (video_cap.Read(frame)) {\n // Process frames and detections to tracks vector\n }\n\n for (MPFVideoTrack &track : tracks) {\n video_cap.ReverseTransform(track);\n }\n\n return tracks;\n} MPFVideoCapture makes it look like the user is processing the original video, when in reality they are processing a modified version. To avoid confusion, this means that MPFVideoCapture should always be returning frames that are the same size because most users expect each frame of a video to be the same size. When using SUPERSET_REGION this is not an issue, since one bounding box is used for the entire track. However, when using REGION , each detection can be a different size, so it is not possible for MPFVideoCapture to return frames that are always the same size. Since this is a deviation from the expected behavior, and breaks the transparency of MPFVideoCapture , SUPERSET_REGION should usually be preferred over REGION . The REGION setting should only be used with components that explicitly state they support it (e.g. OcvDnnDetection). Those components may not perform region tracking, so processing frames of various sizes is not a problem. The MPFImageReader class is similar to MPFVideoCapture , but it works on images instead of videos. MPFImageReader makes it look like the user is processing an original image, when in reality they are processing a modified version where the frame region is generated based on a detection ( MPFImageLocation ) fed forward from the previous stage of a pipeline. Note that SUPERSET_REGION and REGION have the same effect when working with images. MPFImageReader also has a reverse transform function.",
+        "text": "When developing a component, the C++ Batch Component API and Python Batch\nComponent API include utilities that make it easier to support feed forward in\nyour components. They work similarly, but only the C++ tools will be discussed here. The MPFVideoCapture class is a\nwrapper around OpenCV's cv::VideoCapture class. MPFVideoCapture works very similarly to cv::VideoCapture , except\nthat it might modify the video frames based on job properties. From the point of view of someone using MPFVideoCapture , these modifications are mostly transparent. MPFVideoCapture makes it look like you are reading the\noriginal video file. Conceptually, consider generating a new video from a feed forward track. The new video would have fewer frames (unless\nthere was a detection in every frame) and possibly a smaller frame size. For example, the original video file might be 30 frames long with 640x480 resolution. If the feed forward track found\ndetections in frames 4, 7, and 10, then MPFVideoCapture will make it look like the video only has those 3 frames. If\nthe feed forward type is SUPERSET_REGION or REGION, and each detection is 30x50 pixels, then MPFVideoCapture will\nmake it look like the video's original resolution was 30x50 pixels. One issue with this approach is that the detection frame numbers and bounding box will be relative to the modified\nvideo, not the original. To make the detections relative to the original video the MPFVideoCapture::ReverseTransform(MPFVideoTrack &videoTrack) function must be used. The general pattern for using MPFVideoCapture is as follows: std::vector<MPFVideoTrack> OcvDnnDetection::GetDetections(const MPFVideoJob &job) {\n std::vector<MPFVideoTrack> tracks;\n MPFVideoCapture video_cap(job);\n\n cv::Mat frame;\n while (video_cap.Read(frame)) {\n // Process frames and add detections to the tracks vector\n }\n\n for (MPFVideoTrack &track : tracks) {\n video_cap.ReverseTransform(track);\n }\n\n return tracks;\n} MPFVideoCapture makes it look like the user is processing the original video, when in reality they are processing a\nmodified version. To avoid confusion, this means that MPFVideoCapture should always be returning frames that are the\nsame size because most users expect each frame of a video to be the same size. When using SUPERSET_REGION this is not an issue, since one bounding box is used for the entire track. However, when\nusing REGION , each detection can be a different size, so it is not possible for MPFVideoCapture to return frames\nthat are always the same size. Since this is a deviation from the expected behavior, and breaks the transparency of MPFVideoCapture , SUPERSET_REGION should usually be preferred over REGION . The REGION setting should only be used\nwith components that explicitly state they support it (e.g. OcvDnnDetection). Those components may not perform region\ntracking, so processing frames of various sizes is not a problem. The MPFImageReader class is similar to MPFVideoCapture , but it works on images instead of videos. MPFImageReader \nmakes it look like the user is processing an original image, when in reality they are processing a modified version\nwhere the frame region is generated based on a detection ( MPFImageLocation ) fed forward from the previous stage of a\npipeline. Note that SUPERSET_REGION and REGION have the same effect when working with images. MPFImageReader also\nhas a reverse transform function.",
"title": "MPFVideoCapture and MPFImageReader Tools"
},
{
"location": "/Feed-Forward-Guide/index.html#opencv-dnn-component-tracking",
- "text": "The OpenCV DNN component does not generate detection regions of its own when performing classification. Its tracking behavior depends on whether feed forward is enabled or not. When feed forward is disabled, the component will process the entire region of each frame of a video. If one or more consecutive frames has the same highest confidence classification, then a new track is generated that contains those frames. When feed forward is enabled, the OpenCV DNN component will process the region of each frame of feed forward track according to the FEED_FORWARD_TYPE . It will generate one track that contains the same frames as the feed forward track. If FEED_FORWARD_TYPE is set to REGION then the OpenCV DNN track will contain (inherit) the same detection regions as the feed forward track. In any case, the detectionProperties map for the detections in the OpenCV DNN track will include the CLASSIFICATION entries and possibly other OpenCV DNN component properties.",
+        "text": "The OpenCV DNN component does not generate detection regions of its own when performing classification. Its tracking\nbehavior depends on whether feed forward is enabled or not. When feed forward is disabled, the component will process\nthe entire region of each frame of a video. If one or more consecutive frames have the same highest confidence\nclassification, then a new track is generated that contains those frames. When feed forward is enabled, the OpenCV DNN component will process the region of each frame of the feed forward track\naccording to the FEED_FORWARD_TYPE . It will generate one track that contains the same frames as the feed forward\ntrack. If FEED_FORWARD_TYPE is set to REGION then the OpenCV DNN track will contain (inherit) the same detection\nregions as the feed forward track. In any case, the detectionProperties map for the detections in the OpenCV DNN track\nwill include the CLASSIFICATION entries and possibly other OpenCV DNN component properties.",
"title": "OpenCV DNN Component Tracking"
},
{
"location": "/Feed-Forward-Guide/index.html#feed-forward-pipeline-examples",
- "text": "GoogLeNet Classification with MOG Motion Detection and Feed Forward Region First, create the following action: CAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) ACTION\n+ Algorithm: DNNCV\n+ MODEL_NAME: googlenet\n+ SUBTRACT_BLUE_VALUE: 104.0\n+ SUBTRACT_GREEN_VALUE: 117.0\n+ SUBTRACT_RED_VALUE: 123.0\n+ FEED_FORWARD_TYPE: REGION Then create the following task: CAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) TASK\n+ CAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) ACTION Then create the following pipeline: CAFFE GOOGLENET DETECTION (WITH MOG MOTION TRACKING AND FEED FORWARD REGION) PIPELINE\n+ MOG MOTION DETECTION (WITH TRACKING) TASK\n+ CAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) TASK Running this pipeline will result in OpenCV DNN tracks that contain detections where there was MOG motion. Each detection in each track will have an OpenCV DNN CLASSIFICATION entry. Each track has a 1-to-1 correspondence with a MOG motion track. Refer to runMogThenCaffeFeedForwardExactRegionTest() in the TestSystemOnDiff class for a system test that demonstrates this behavior. Refer to runMogThenCaffeFeedForwardSupersetRegionTest() in that class for a system test that uses SUPERSET_REGION instead. Refer to runMogThenCaffeFeedForwardFullFrameTest() for a system test that uses FRAME instead. NOTE: Short and/or spurious MOG motion tracks will result in more overhead work when performing feed forward. To mitigate this, consider setting the MERGE_TRACKS , MIN_GAP_BETWEEN_TRACKS , and MIN_TRACK_LENGTH properties to generate longer motion tracks and discard short and/or spurious motion tracks. NOTE: It doesn\u2019t make sense to use FEED_FORWARD_TOP_CONFIDENCE_COUNT on a pipeline stage that follows a MOG or SuBSENSE motion detection stage. That\u2019s because those motion detectors don\u2019t generate tracks with confidence values. Instead, FEED_FORWARD_TOP_CONFIDENCE_COUNT could potentially be used when feeding person tracks into a face detector, for example, if those person tracks have confidence values. OCV Face Detection with MOG Motion Detection and Feed Forward Superset Region First, create the following action: OCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) ACTION\n+ Algorithm: FACECV\n+ FEED_FORWARD_TYPE: SUPERSET_REGION Then create the following task: OCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) TASK\n+ OCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) ACTION Then create the following pipeline: OCV FACE DETECTION (WITH MOG MOTION TRACKING AND FEED FORWARD SUPERSET REGION) PIPELINE\n+ MOG MOTION DETECTION (WITH TRACKING) TASK\n+ OCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) TASK Running this pipeline will result in OCV face tracks that contain detections where there was MOG motion. Each track has a 1-to-1 correspondence with a MOG motion track. Refer to runMogThenOcvFaceFeedForwardRegionTest() in the TestSystemOnDiff class for a system test that demonstrates this behavior.",
+ "text": "GoogLeNet Classification with MOG Motion Detection and Feed Forward Region First, create the following action: CAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) ACTION\n+ Algorithm: DNNCV\n+ MODEL_NAME: googlenet\n+ SUBTRACT_BLUE_VALUE: 104.0\n+ SUBTRACT_GREEN_VALUE: 117.0\n+ SUBTRACT_RED_VALUE: 123.0\n+ FEED_FORWARD_TYPE: REGION Then create the following task: CAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) TASK\n+ CAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) ACTION Then create the following pipeline: CAFFE GOOGLENET DETECTION (WITH MOG MOTION TRACKING AND FEED FORWARD REGION) PIPELINE\n+ MOG MOTION DETECTION (WITH TRACKING) TASK\n+ CAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) TASK Running this pipeline will result in OpenCV DNN tracks that contain detections where there was MOG motion. Each\ndetection in each track will have an OpenCV DNN CLASSIFICATION entry. Each track has a 1-to-1 correspondence with a\nMOG motion track. Refer to runMogThenCaffeFeedForwardExactRegionTest() in the TestSystemOnDiff \nclass for a system test that demonstrates this behavior. Refer to runMogThenCaffeFeedForwardSupersetRegionTest() in\nthat class for a system test that uses SUPERSET_REGION instead. Refer to runMogThenCaffeFeedForwardFullFrameTest() \nfor a system test that uses FRAME instead. NOTE: Short and/or spurious MOG motion tracks will result in more overhead work when performing feed forward. To\nmitigate this, consider setting the MERGE_TRACKS , MIN_GAP_BETWEEN_TRACKS , and MIN_TRACK_LENGTH properties to\ngenerate longer motion tracks and discard short and/or spurious motion tracks. NOTE: It doesn\u2019t make sense to use FEED_FORWARD_TOP_QUALITY_COUNT on a pipeline stage that follows a MOG or\nSuBSENSE motion detection stage. That\u2019s because those motion detectors don\u2019t generate tracks with confidence values\n( CONFIDENCE being the default value for the QUALITY_SELECTION_PROPERTY job property). Instead, FEED_FORWARD_TOP_QUALITY_COUNT could potentially be used when feeding person tracks into a face detector, for\nexample, if the detections in those person tracks have the requested QUALITY_SELECTION_PROPERTY set. OCV Face Detection with MOG Motion Detection and Feed Forward Superset Region First, create the following action: OCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) ACTION\n+ Algorithm: FACECV\n+ FEED_FORWARD_TYPE: SUPERSET_REGION Then create the following task: OCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) TASK\n+ OCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) ACTION Then create the following pipeline: OCV FACE DETECTION (WITH MOG MOTION TRACKING AND FEED FORWARD SUPERSET REGION) PIPELINE\n+ MOG MOTION DETECTION (WITH TRACKING) TASK\n+ OCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) TASK Running this pipeline will result in OCV face tracks that contain detections where there was MOG motion. Each track has\na 1-to-1 correspondence with a MOG motion track. Refer to runMogThenOcvFaceFeedForwardRegionTest() in the TestSystemOnDiff \nclass for a system test that demonstrates this behavior.",
"title": "Feed Forward Pipeline Examples"
},
{
@@ -605,6 +605,26 @@
"text": "When health checks are enabled, the component executor will look for an INI file at $MPF_HOME/plugins//health/health-check.ini . Below is an example of the expected\nINI file. media=$MPF_HOME/plugins/OcvFaceDetection/health/meds_faces_image.png\nmin_num_tracks=2\nmedia_type=IMAGE\n\n[job_properties]\nJOB PROP1=VALUE1\nJOB PROP2=VALUE2\n\n[media_properties]\nMEDIA PROP=MEDIA VALUE The supported keys are: media : (Required) Path to the media file that will be used in the health check. min_num_tracks : (Required) The minimum number of tracks the component must find for the health\n check to pass. media_type : (Required) The type of media referenced in the media key. It must be one of\n \"IMAGE\", \"VIDEO\", \"AUDIO\", or \"GENERIC\". job_properties : (Optional) Job properties that will set on the health check job. media_properties : (Optional) Media properties that will set on the health check job.",
"title": "The INI File"
},
+ {
+ "location": "/Quality-Selection-Guide/index.html",
+        "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.\n\n\nIntroduction\n\n\nThere are a few places in OpenMPF where the quality of a detection comes into play. Here, \"detection quality\" is defined\nto be a measurement of how \"good\" the detection is that can be used to rank the detections in a track from highest to\nlowest quality. In many cases, components use \"confidence\" as an indicator of quality; however, there are some\ncomponents that do not compute a confidence value for their detections, and there are others that compute a different\nvalue that is a better measure of quality for that detection algorithm. As discussed in the next section, OpenMPF uses\ndetection quality for a variety of purposes.\n\n\nQuality Selection Properties\n\n\nQUALITY_SELECTION_PROPERTY\n is a string that defines the name of the property to use for quality selection. For\nexample, a face detection component may generate detections with a \nDESCRIPTOR_MAGNITUDE\n property that represents the\nquality of the face embedding and how useful it is for reidentification. The Workflow Manager will search the\n\ndetection_properties\n map in each detection and track for that key and use the corresponding value as the detection\nquality. The value associated with this property must be an integer or floating point value, where higher values\nindicate higher quality.\n\n\nOne exception is when this property is set to \nCONFIDENCE\n and no \nCONFIDENCE\n property exists in the\n\ndetection_properties\n map. Then the \nconfidence\n member of each detection and track is used instead.\n\n\nThe primary way in which OpenMPF uses detection quality is to determine the track \"exemplar\", which is the highest\nquality detection in the track. For components that do not compute a quality value, or where all detections have\nidentical quality, the Workflow Manager will choose the first detection in the track as the exemplar.\n\n\nQUALITY_SELECTION_THRESHOLD\n is a numerical value used for filtering out low quality detections and tracks. All\ndetections below this threshold are discarded, and if all the detections in a track are discarded, then the track itself\nis also discarded. Note that some components may do this filtering themselves, while others leave it to the Workflow Manager\nto do the filtering. The thresholding process can be circumvented by setting this threshold to a value less than the\nlowest possible value. For example, if the detection quality value computed by a component has values in the range 0 to\n1, then setting the threshold property to -1 will result in all detections and all tracks being retained.\n\n\nFEED_FORWARD_TOP_QUALITY_COUNT\n can be used to select the number of detections to include in a feed-forward track. For\nexample, if set to 10, only the top 10 highest quality detections are fed forward to the downstream component for that\ntrack. If fewer than 10 detections meet the \nQUALITY_SELECTION_THRESHOLD\n, then only that many detections are fed\nforward. Refer to the \nFeed Forward Guide\n for more information.\n\n\nARTIFACT_EXTRACTION_POLICY_TOP_QUALITY_COUNT\n can be used to select the number of detections that will be used to\nextract artifacts. 
For example, if set to 10, the detections in a track will be sorted by their detection quality value,\nand then the artifacts for the 10 detections with the highest quality will be extracted. If fewer than 10 detections meet\nthe \nQUALITY_SELECTION_THRESHOLD\n, then only that many artifacts will be extracted.\n\n\nHybrid Quality Selection\n\n\nIn some cases, there may be a detection property that a component would like to use as a measure of quality but it\ndoesn't lend itself to simple thresholding. For example, a face detector might be able to calculate the face pose, and\nwould like to select faces that are in the most frontal pose as the highest quality detections. The yaw of the face pose\nmay be used to indicate this, but if its values are between, say, -90 degrees and +90 degrees, then the highest quality\ndetection would be the one with a value of yaw closest to 0. This violates the need for the quality selection property\nto take on a range of values where the highest value indicates the highest quality.\n\n\nAnother use case might be where the component would like to choose detections based on a set of quality values, or\nproperties. Continuing with the face pose example, the component might like to designate the detection with pose closest\nto frontal as the highest quality, but would also like to assign high quality to detections where the pose is closest to\nprofile, meaning values of yaw closest to -90 or +90 degrees.\n\n\nIn both of these cases, the component can create a custom detection property that is used to rank these detections as it\nsees fit. It could use a detection property called \nRANK\n, and assign values to that property to rank the detections\nfrom lowest to highest quality. In the example of the face detector wanting to use the yaw of the face pose, the\ndetection with a value of yaw closest to 0 would be assigned a \nRANK\n property with the highest value, then the\ndetections with values of yaw closest to +/-90 degrees would be assigned the second and third highest values of \nRANK\n.\nDetections without the \nRANK\n property would be treated as having the lowest possible quality value. Thus, the track\nexemplar would be the face with the frontal pose, and the \nARTIFACT_EXTRACTION_POLICY_TOP_QUALITY_COUNT\n property could\nbe set to 3, so that the frontal and two profile pose detections would be kept as track artifacts.",
+ "title": "Quality Selection Guide"
+ },
+ {
+ "location": "/Quality-Selection-Guide/index.html#introduction",
+ "text": "There are a few places in OpenMPF where the quality of a detection comes into play. Here, \"detection quality\" is defined\nto be a measurement of how \"good\" the detection is that can be used to rank the detections in a track from highest to\nlowest quality. In many cases, components use \"confidence\" as an indicator of quality; however, there are some\ncomponents that do not compute a confidence value for its detections, and there are others that compute a different\nvalue that is a better measure of quality for that detection algorithm. As discussed in the next section, OpenMPF uses\ndetection quality for a variety of purposes.",
+ "title": "Introduction"
+ },
+ {
+ "location": "/Quality-Selection-Guide/index.html#quality-selection-properties",
+ "text": "QUALITY_SELECTION_PROPERTY is a string that defines the name of the property to use for quality selection. For\nexample, a face detection component may generate detections with a DESCRIPTOR_MAGNITUDE property that represents the\nquality of the face embedding and how useful it is for reidentification. The Workflow Manager will search the detection_properties map in each detection and track for that key and use the corresponding value as the detection\nquality. The value associated with this property must be an integer or floating point value, where higher values\nindicate higher quality. One exception is when this property is set to CONFIDENCE and no CONFIDENCE property exists in the detection_properties map. Then the confidence member of each detection and track is used instead. The primary way in which OpenMPF uses detection quality is to determine the track \"exemplar\", which is the highest\nquality detection in the track. For components that do not compute a quality value, or where all detections have\nidentical quality, the Workflow Manager will choose the first detection in the track as the exemplar. QUALITY_SELECTION_THRESHOLD is a numerical value used for filtering out low quality detections and tracks. All\ndetections below this threshold are discarded, and if all the detections in a track are discarded, then the track itself\nis also discarded. Note that components may do this filtering themselves, while others leave it to the Workflow Manager\nto do the filtering. The thresholding process can be circumvented by setting this threshold to a value less than the\nlowest possible value. For example, if the detection quality value computed by a component has values in the range 0 to\n1, then setting the threshold property to -1 will result in all detections and all tracks being retained. FEED_FORWARD_TOP_QUALITY_COUNT can be used to select the number of detections to include in a feed-forward track. For\nexample, if set to 10, only the top 10 highest quality detections are fed forward to the downstream component for that\ntrack. If less then 10 detections meet the QUALITY_SELECTION_THRESHOLD , then only that many detections are fed\nforward. Refer to the Feed Forward Guide for more information. ARTIFACT_EXTRACTION_POLICY_TOP_QUALITY_COUNT can be used to select the number of detections that will be used to\nextract artifacts. For example, if set to 10, the detections in a track will be sorted by their detection quality value,\nand then the artifacts for the 10 detections with the highest quality will be extracted. If less then 10 detections meet\nthe QUALITY_SELECTION_THRESHOLD , then only that many artifacts will be extracted.",
+ "title": "Quality Selection Properties"
+ },
+ {
+ "location": "/Quality-Selection-Guide/index.html#hybrid-quality-selection",
+ "text": "In some cases, there may be a detection property that a component would like to use as a measure of quality but it\ndoesn't lend itself to simple thresholding. For example, a face detector might be able to calculate the face pose, and\nwould like to select faces that are in the most frontal pose as the highest quality detections. The yaw of the face pose\nmay be used to indicate this, but if it's values are between, say, -90 degrees and +90 degrees, then the highest quality\ndetection would be the one with a value of yaw closest to 0. This violates the need for the quality selection property\nto take on a range of values where the highest value indicates the highest quality. Another use case might be where the component would like to choose detections based on a set of quality values, or\nproperties. Continuing with the face pose example, the component might like to designate the detection with pose closest\nto frontal as the highest quality, but would also like to assign high quality to detections where the pose is closest to\nprofile, meaning values of yaw closest to -90 or +90 degrees. In both of these cases, the component can create a custom detection property that is used to rank these detections as it\nsees fit. It could use a detection property called RANK , and assign values to that property to rank the detections\nfrom lowest to highest quality. In the example of the face detector wanting to use the yaw of the face pose, the\ndetection with a value of yaw closest to 0 would be assigned a RANK property with the highest value, then the\ndetections with values of yaw closest to +/-90 degrees would be assigned the second and third highest values of RANK .\nDetections without the RANK property would be treated as having the lowest possible quality value. Thus, the track\nexemplar would be the face with the frontal pose, and the ARTIFACT_EXTRACTION_POLICY_TOP_QUALITY_COUNT property could\nbe set to 3, so that the frontal and two profile pose detections would be kept as track artifacts.",
+ "title": "Hybrid Quality Selection"
+ },
{
"location": "/REST-API/index.html",
"text": "The OpenMPF REST API is provided by Swagger and is available within the OpenMPF Workflow Manager web application. Swagger enables users to test the endpoints using the running instance of OpenMPF.\n\n\nClick \nhere\n for a generated version of the content.\n\n\nNote that in a Docker deployment the \n/rest/nodes\n and \n/rest/streaming\n endpoints are disabled.",
@@ -1332,7 +1352,7 @@
},
{
"location": "/Development-Environment-Guide/index.html",
- "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\n\n \nWARNING:\n\n For most component developers, these steps are not necessary. Instead,\n refer to the\n \nC++\n,\n \nPython\n, or\n \nJava\n\n README for developing a Docker component in your desired language.\n\n\n\n\n\n \nWARNING:\n This guide is a work in progress and may not be completely\n accurate or comprehensive.\n\n\n\n\nOverview\n\n\nThe following instructions are for setting up an environment for building and\nrunning OpenMPF outside of Docker. They serve as a reference for developers who\nwant to develop the Workflow Manager web application itself and perform end-to-\nend integration testing.\n\n\nSetup VM\n\n\n\n\n\n\nDownload the ISO for the desktop version of Ubuntu 20.04 from\n \nhttps://releases.ubuntu.com/20.04\n.\n\n\n\n\n\n\nCreate an Ubuntu VM using the downloaded iso. This part is different based on\n what VM software you are using.\n\n\n\n\nUse mpf as your username.\n\n\nDuring the initial install, the VM window was small and didn't stretch to\n fill up the screen, but this may be fixed automatically after the installation\n finishes, or there may be additional steps necessary to install tools or\n configure settings based on your VM software.\n\n\n\n\n\n\n\n\nAfter completing the installation, you will likely be prompted to update\n software. You should install the updates.\n\n\n\n\n\n\nOptionally, shutdown the VM and take a snapshot. This will enable you to revert back\n to a clean Ubuntu install in case anything goes wrong.\n\n\n\n\n\n\nOpen a terminal and run \nsudo apt update\n\n\n\n\n\n\nRun \nsudo apt install gnupg2 unzip xz-utils cmake make g++ libgtest-dev mediainfo libssl-dev liblog4cxx-dev libboost-dev file openjdk-17-jdk libprotobuf-dev protobuf-compiler libprotobuf-java python3.8-dev python3-pip python3.8-venv libde265-dev libopenblas-dev liblapacke-dev libavcodec-dev libavcodec-extra libavformat-dev libavutil-dev libswscale-dev libavresample-dev libharfbuzz-dev libfreetype-dev ffmpeg git git-lfs redis postgresql-12 curl ansible\n\n\n\n\n\n\nRun \nsudo ln --symbolic /usr/include/x86_64-linux-gnu/openblas-pthread/cblas.h /usr/include/cblas.h\n\n\n\n\n\n\nRun \nsudo ln --symbolic /usr/bin/cmake /usr/bin/cmake3\n\n\n\n\n\n\nRun \nsudo ln --symbolic /usr/bin/protoc /usr/local/bin/protoc\n\n\n\n\n\n\nFollow instructions to install Docker:\n \nhttps://docs.docker.com/engine/install/ubuntu/#install-using-the-repository\n\n\n\n\n\n\nOptionally, configure Docker to use socket activation. 
The advantage of socket activation is\n that systemd will automatically start the Docker daemon when you use \ndocker\n commands:\n\n\n\n\n\n\nsudo systemctl disable docker.service;\nsudo systemctl stop docker.service;\nsudo systemctl enable docker.socket;\n\n\n\n\n\n\n\nFollow instructions so that you can run Docker without sudo:\n \nhttps://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user\n\n\n\n\n\n\nInstall Docker Compose:\n\n\n\n\n\n\nsudo apt update\nsudo apt install docker-compose-plugin\n\n\n\n\n\n\n\nOptionally, stop redis from starting automatically:\n \nsudo systemctl disable redis\n\n\n\n\n\n\nOptionally, stop postgresql from starting automatically:\n \nsudo systemctl disable postgresql\n\n\n\n\n\n\nInitialize Postgres (use \"password\" when prompted for a password):\n\n\n\n\n\n\nsudo -i -u postgres createuser -P mpf\nsudo -i -u postgres createdb -O mpf mpf\n\n\n\n\n\nBuild and install OpenCV:\n\n\n\n\nmkdir /tmp/opencv-contrib;\nwget -O- 'https://github.com/opencv/opencv_contrib/archive/4.5.5.tar.gz' \\\n | tar --extract --gzip --directory /tmp/opencv-contrib;\nmkdir /tmp/opencv;\ncd /tmp/opencv;\nwget -O- 'https://github.com/opencv/opencv/archive/4.5.5.tar.gz' \\\n | tar --extract --gzip;\ncd opencv-4.5.5;\nmkdir build;\ncd build;\nexport OpenBLAS_HOME=/usr/lib/x86_64-linux-gnu/openblas-pthread; \\\ncmake -DCMAKE_INSTALL_PREFIX:PATH='/opt/opencv-4.5.5' \\\n -DWITH_IPP=false \\\n -DBUILD_EXAMPLES=false \\\n -DBUILD_TESTS=false \\\n -DBUILD_PERF_TESTS=false \\\n -DWITH_CUBLAS=true \\\n -DOPENCV_EXTRA_MODULES_PATH=/tmp/opencv-contrib/opencv_contrib-4.5.5/modules \\\n ..;\nsudo make --jobs \"$(nproc)\" install;\nsudo ln --symbolic '/opt/opencv-4.5.5/include/opencv4/opencv2' /usr/local/include/opencv2;\nsudo sh -c 'echo /opt/opencv-4.5.5/lib > /etc/ld.so.conf.d/mpf.conf'\nsudo ldconfig;\nsudo rm -rf /tmp/opencv-contrib /tmp/opencv;\n\n\n\n\n\nBuild and install the ActiveMQ C++ library:\n\n\n\n\nmkdir /tmp/activemq-cpp;\ncd /tmp/activemq-cpp;\nwget -O- https://dlcdn.apache.org/activemq/activemq-cpp/3.9.5/activemq-cpp-library-3.9.5-src.tar.gz \\\n | tar --extract --gzip;\ncd activemq-cpp-library-3.9.5;\n./configure;\nsudo make --jobs \"$(nproc)\" install;\nsudo rm -rf /tmp/activemq-cpp;\n\n\n\n\n\nInstall NotoEmoji font for markup:\n\n\n\n\nmkdir /tmp/noto;\ncd /tmp/noto;\nwget https://noto-website-2.storage.googleapis.com/pkgs/NotoEmoji-unhinted.zip;\nunzip NotoEmoji-unhinted.zip;\nsudo mkdir --parents /usr/share/fonts/google-noto-emoji;\nsudo cp NotoEmoji-Regular.ttf /usr/share/fonts/google-noto-emoji/;\nsudo chmod a+r /usr/share/fonts/google-noto-emoji/NotoEmoji-Regular.ttf;\nrm -rf /tmp/noto;\n\n\n\n\n\nBuild and install PNG Defry:\n\n\n\n\nmkdir /tmp/pngdefry;\ncd /tmp/pngdefry;\nwget -O- 'https://github.com/openmpf/pngdefry/archive/v1.2.tar.gz' \\\n | tar --extract --gzip;\ncd pngdefry-1.2;\nsudo gcc pngdefry.c -o /usr/local/bin/pngdefry;\nrm -rf /tmp/pngdefry;\n\n\n\n\n\nInstall Maven:\n\n\n\n\nwget -O- 'https://archive.apache.org/dist/maven/maven-3/3.3.3/binaries/apache-maven-3.3.3-bin.tar.gz' \\\n | sudo tar --extract --gzip --directory /opt;\nsudo ln --symbolic /opt/apache-maven-3.3.3/bin/mvn /usr/local/bin;\n\n\n\n\n\nBuild and install libheif:\n\n\n\n\nmkdir /tmp/libheif;\ncd /tmp/libheif;\nwget -O- https://github.com/strukturag/libheif/archive/refs/tags/v1.12.0.tar.gz \\\n | tar --extract --gzip;\ncd libheif-1.12.0;\nmkdir build;\ncd build;\ncmake3 -DCMAKE_INSTALL_PREFIX=/usr -DWITH_EXAMPLES=false ..;\nsudo make --jobs \"$(nproc)\" 
install;\ncd;\nsudo rm -rf /tmp/libheif;\n\n\n\n\n\nFrom your home directory run:\n\n\n\n\ngit clone https://github.com/openmpf/openmpf-projects.git --recursive;\ncd openmpf-projects;\ngit checkout develop;\ngit submodule foreach git checkout develop;\n\n\n\n\n\n\n\nRun: \npip install openmpf-projects/openmpf/trunk/bin/mpf-scripts\n\n\n\n\n\n\nAdd \nPATH=\"$HOME/.local/bin:$PATH\"\n to \n~/.bashrc\n\n\n\n\n\n\nRun \nmkdir -p openmpf-projects/openmpf/trunk/install/share/logs\n\n\n\n\n\n\nRun \nsudo cp openmpf-projects/openmpf/trunk/mpf-install/src/main/scripts/mpf-profile.sh /etc/profile.d/mpf.sh\n\n\n\n\n\n\nRun \nsudo sh -c 'echo /home/mpf/mpf-sdk-install/lib >> /etc/ld.so.conf.d/mpf.conf'\n\n\n\n\n\n\nRun \nsudo cp openmpf-projects/openmpf/trunk/node-manager/src/scripts/node-manager.service /etc/systemd/system/node-manager.service\n\n\n\n\n\n\nRun \ncd ~/openmpf-projects/openmpf/trunk/workflow-manager/src/main/resources/properties/; cp mpf-private-example.properties mpf-private.properties\n\n\n\n\n\n\nRun \nsudo sh -c 'echo \"[mpf-child]\\nlocalhost\" >> /etc/ansible/hosts'\n\n\n\n\n\n\nRun \nmkdir -p ~/.m2/repository/; tar -f /home/mpf/openmpf-projects/openmpf-build-tools/mpf-maven-deps.tar.gz --extract --gzip --directory ~/.m2/repository/\n\n\n\n\n\n\nReboot the VM.\n\n\n\n\n\n\nAt this point you may wish to install additional dependencies so that you can\nbuild specific OpenMPF components. Refer to the commands in the \nDockerfile\n\nfor each component you're interested in.\n\n\nConfigure Users\n\n\nTo change the default user password settings, modify\n\nopenmpf-projects/openmpf/trunk/workflow-manager/src/main/resources/properties/user.properties\n.\nNote that the default settings are public knowledge, which could be a security\nrisk.\n\n\nNote that \nmpf remove-user\n and \nmpf add-user\n commands explained in the\n\nCommand Line Tools\n section do not modify the\n\nuser.properties\n file. If you remove a user using the \nmpf remove-user\n\ncommand, the changes will take effect at runtime, but an entry may still exist\nfor that user in the \nuser.properties\n file. If so, then the user account will\nbe recreated the next time the Workflow Manager is restarted.\n\n\nBuild and Run the OpenMPF Workflow Manager Web Application\n\n\n\n\nBuild OpenMPF:\n\n\n\n\ncd ~/openmpf-projects/openmpf;\nmvn clean install \\\n -DskipTests -Dmaven.test.skip=true \\\n -DskipITs \\\n -Dcomponents.build.components=openmpf-components/cpp/OcvFaceDetection \\\n -Dstartup.auto.registration.skip=false;\n\n\n\n\n\nStart OpenMPF with \nmpf start\n.\n\n\n\n\nLook for this log message in the terminal with a time value indicating the Workflow Manager has\nfinished starting:\n\n\n2022-10-11 12:21:16,447 INFO [main] o.m.m.Application - Started Application in 22.843 seconds (JVM running for 24.661)\n\n\n\nAfter startup, the Workflow Manager will be available at \nhttp://localhost:8080\n.\nBrowse to this URL using Firefox or Chrome.\n\n\nIf you want to test regular user capabilities, log in as the \"mpf\" user with\nthe \"mpf123\" password. Please see the\n\nOpenMPF User Guide\n for more information.\nAlternatively, if you want to test admin capabilities then log in as \"admin\"\nuser with the \"mpfadm\" password. 
Please see the\n\nOpenMPF Admin Guide\n for more information.\nWhen finished using OpenMPF, stop Workflow Manager with \nctrl-c\n and then run \nmpf stop\n to stop\nthe other system dependencies.\n\n\nThe preferred method to start and stop services for OpenMPF is with the\n\nmpf start\n and \nmpf stop\n commands. For additional information on these\ncommands, please see the\n\nCommand Line Tools\n section.\nThese will start and stop the PostgreSQL, Redis, Node Manager, and Workflow Manager processes.\n\n\nKnown Issues\n\n\no.m.m.m.c.JobController - Failure creating job. supplier.get()\n\n\nIf you see an error message similar to:\n\n\n2022-02-07 17:17:30,538 ERROR [http-nio-8080-exec-1] o.m.m.m.c.JobController - Failure creating job. supplier.get()\njava.lang.NullPointerException: supplier.get()\n at java.util.Objects.requireNonNull(Objects.java:246) ~[?:?]\n at java.util.Objects.requireNonNullElseGet(Objects.java:321) ~[?:?]\n at org.mitre.mpf.wfm.util.PropertiesUtil.getHostName(PropertiesUtil.java:267) ~[classes/:?]\n at org.mitre.mpf.wfm.util.PropertiesUtil.getExportedJobId(PropertiesUtil.java:285) ~[classes/:?]\n\n\n\nOpen \n/etc/profile.d/mpf.sh\n and change \nexport HOSTNAME\n to\n\nexport HOSTNAME=$(hostname)\n. Then, restart the VM.\n\n\nAppendices\n\n\nCommand Line Tools\n\n\nOpenMPF installs command line tools that can be accessed through a terminal\non the development machine. All of the tools take the form of actions:\n\nmpf [options ...]\n.\n\n\nExecute \nmpf --help\n for general documentation and \nmpf --help\n for\ndocumentation about a specific action.\n\n\n\n\nStart / Stop Actions\n: Actions for starting and stopping the OpenMPF\n system dependencies, including PostgreSQL, Redis, Workflow Manager, and the\n node managers on the various nodes in the OpenMPF cluster.\n\n\nmpf status\n: displays a message indicating whether each of the system\n dependencies is running or not\n\n\nmpf start\n: starts all of the system dependencies\n\n\nmpf stop\n: stops all of the system dependencies\n\n\nmpf restart\n : stops and then starts all of the system dependencies\n\n\n\n\n\n\nUser Actions\n: Actions for managing Workflow Manager user accounts. If\n changes are made to an existing user then that user will need to log off or\n the Workflow Manager will need to be restarted for the changes to take effect.\n\n\nmpf list-users\n : lists all of the existing user accounts and their role\n (non-admin or admin)\n\n\nmpf add-user \n: adds a new user account; will be\n prompted to enter the account password\n\n\nmpf remove-user \n : removes an existing user account\n\n\nmpf change-role \n : change the role (non-admin to admin\n or vice versa) for an existing user\n\n\nmpf change-password \n: change the password for an existing\n user; will be prompted to enter the new account password\n\n\n\n\n\n\nClean Actions\n: Actions to remove old data and revert the system to a\n new install state. 
User accounts, registered components, as well as custom\n actions, tasks, and pipelines, are preserved.\n\n\nmpf clean\n: cleans out old job information and results, pending job requests, and marked up\n media files, but preserves log files and uploaded media.\n\n\nmpf clean --delete-logs --delete-uploaded-media\n: the same as \nmpf clean\n\n but also deletes log files and uploaded media\n\n\n\n\n\n\nNode Action\n: Actions for managing node membership in the OpenMPF cluster.\n\n\nmpf list-nodes\n: If the Workflow Manager is running, get the current\n JGroups view; otherwise, list the core nodes\n\n\n\n\n\n\n\n\nPackaging a Component\n\n\nIn a non-Docker deployment, admin users can register component packages through\nthe web UI. Refer to \nComponent Registration\n.\n\n\nOnce the descriptor file is complete, as described in\n\nComponent Descriptor Reference\n,\nthe next step is to compile your component source code, and finally, create a\n.tar.gz package containing the descriptor file, component library, and all\nother necessary files.\n\n\nThe package should contain a top-level directory with a unique name that will\nnot conflict with existing component packages that have already been developed.\nThe top-level directory name should be the same as the \ncomponentName\n.\n\n\nWithin the top-level directory there must be a directory named \u201cdescriptor\u201d\nwith the descriptor JSON file in it. The name of the file must be\n\u201cdescriptor.json\u201d.\n\n\nExample:\n\n\n//sample-component-1.0.0-tar.gz contents\nSampleComponent/\n config/\n descriptor/\n descriptor.json\n lib/\n\n\n\nInstalling and registering a component\n\n\nThe Component Registration web page, located in the Admin section of the\nOpenMPF web user interface, can be used to upload and register the component.\n\n\nDrag and drop the .tar.gz file containing the component onto the dropzone area\nof that page. The component will automatically be uploaded and registered.\n\n\nUpon successful registration, the component will be available for deployment\nonto OpenMPF nodes via the Node Configuration web page and\n\n/rest/nodes/config\n end point.\n\n\nIf the descriptor contains custom actions, tasks, or pipelines, then they will\nbe automatically added to the system upon registration.\n\n\n\n\nNOTE:\n If the descriptor does not contain custom actions, tasks,\nor pipelines, then a default action, task, and pipeline will be generated\nand added to the system.\n\n\nThe default action will use the component\u2019s algorithm with its default\nproperty value settings.\nThe default task will use the default action.\nThe default pipeline will use the default task. This will only be generated\nif the algorithm does not specify any \nrequiresCollection\n states.\n\n\n\n\nUnregistering a component\n\n\nA component can be unregistered by using the remove button on the Component\nRegistration page.\n\n\nDuring unregistration, all services, algorithms, actions, tasks, and pipelines\nassociated with the component are deleted. 
Additionally, all actions, tasks,\nand pipelines that depend on these elements are removed.\n\n\nWeb UI\n\n\nThe following sections will cover some additional functionality permitted to\nadmin users in a non-Docker deployment.\n\n\nNode Configuration and Status\n\n\nThis page provides a list of all of the services that are configured to run on\nthe OpenMPF cluster:\n\n\n\n\nEach node shows information about the current status of each service, if it is\nunlaunchable due to an underlying error, and how many services are running for\neach node. If a service is unlaunchable, it will be indicated using a red\nstatus icon (not shown). Note that services are grouped by component type.\nClick the chevron \">\" to expand a service group to view the individual services.\n\n\nAn admin user can start, stop, or restart them on an individual basis. If a\nnon-admin user views this page, the \"Action(s)\" column is not displayed. This\npage also enables an admin user to edit the configuration for all nodes in the\nOpenMPF cluster. A non-admin user can only view the existing configuration.\n\n\nAn admin user can add a node by using the \"Add Node\" button and selecting a\nnode in the OpenMPF cluster from the drop-down list. You can also select to add\nall services at this time. A node and all if its configured services can be\nremoved by clicking the trash can to the right of the node's hostname.\n\n\nAn admin user can add services individually by selecting the node edit button\nat the bottom of the node. The number of service instances can be increased or\ndecreased by using the drop-down. Click the \"Submit\" button to save the changes.\n\n\nWhen making changes, please be aware of the following:\n\n\n\n\nIt may take a minute for the configuration to take effect on the server.\n\n\nIf you remove an existing service from a node, any job that service is\n processing will be stopped, and you will need to resubmit that job.\n\n\nIf you create a new node, its configuration will not take effect until the\n OpenMPF software is properly installed and started on the associated host.\n\n\nIf you delete a node, you will need to manually turn off the hardware running\n that node (deleting a node does not shut down the machine).\n\n\n\n\nComponent Registration\n\n\nThis page allows an admin user to add and remove non-default components to and\nfrom the system:\n\n\n\n\nA component package takes the form of a tar.gz file. An admin user can either\ndrag and drop the file onto the \"Upload a new component\" dropzone area or click\nthe dropzone area to open a file browser and select the file that way.\nIn either case, the component will begin to be uploaded to the system. If the\nadmin user dragged and dropped the file onto the dropzone area then the upload\nprogress will be shown in that area. Once uploaded, the workflow manager will\nautomatically attempt to register the component. Notification messages will\nappear in the upper right side of the screen to indicate success or failure if\nan error occurs. The \"Current Components\" table will display the component\nstatus.\n\n\n\n\nIf for some reason the component package upload succeeded but the component\nregistration failed then the admin user will be able to click the \"Register\"\nbutton again to try to another registration attempt. For example, the admin\nuser may do this after reviewing the workflow manager logs and resolving any\nissues that prevented the component from successfully registering the first\ntime. 
One reason may be that a component with the same name already exists on\nthe system. Note that an error will also occur if the top-level directory of\nthe component package, once extracted, already exists in the \n/opt/mpf/plugins\n\ndirectory on the system.\n\n\nOnce registered, an admin user has the option to remove the component. This\nwill unregister it and completely remove any configured services, as well as\nthe uploaded file and its extracted contents, from the system. Also, the\ncomponent algorithm as well as any actions, tasks, and pipelines specified in\nthe component's descriptor file will be removed when the component is removed.",
+ "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\n\n \nWARNING:\n\n For most component developers, these steps are not necessary. Instead,\n refer to the\n \nC++\n,\n \nPython\n, or\n \nJava\n\n README for developing a Docker component in your desired language.\n\n\n\n\n\n \nWARNING:\n This guide is a work in progress and may not be completely\n accurate or comprehensive.\n\n\n\n\nOverview\n\n\nThe following instructions are for setting up an environment for building and\nrunning OpenMPF outside of Docker. They serve as a reference for developers who\nwant to develop the Workflow Manager web application itself and perform end-to-\nend integration testing.\n\n\nSetup VM\n\n\n\n\n\n\nDownload the ISO for the desktop version of Ubuntu 20.04 from\n \nhttps://releases.ubuntu.com/20.04\n.\n\n\n\n\n\n\nCreate an Ubuntu VM using the downloaded iso. This part is different based on\n what VM software you are using.\n\n\n\n\nUse mpf as your username.\n\n\nDuring the initial install, the VM window was small and didn't stretch to\n fill up the screen, but this may be fixed automatically after the installation\n finishes, or there may be additional steps necessary to install tools or\n configure settings based on your VM software.\n\n\n\n\n\n\n\n\nAfter completing the installation, you will likely be prompted to update\n software. You should install the updates.\n\n\n\n\n\n\nOptionally, shutdown the VM and take a snapshot. This will enable you to revert back\n to a clean Ubuntu install in case anything goes wrong.\n\n\n\n\n\n\nOpen a terminal and run \nsudo apt update\n\n\n\n\n\n\nRun \nsudo apt install gnupg2 unzip xz-utils cmake make g++ libgtest-dev mediainfo libssl-dev liblog4cxx-dev libboost-dev file openjdk-17-jdk libprotobuf-dev protobuf-compiler libprotobuf-java python3.8-dev python3-pip python3.8-venv libde265-dev libopenblas-dev liblapacke-dev libavcodec-dev libavcodec-extra libavformat-dev libavutil-dev libswscale-dev libavresample-dev libharfbuzz-dev libfreetype-dev ffmpeg git git-lfs redis postgresql-12 curl ansible\n\n\n\n\n\n\nRun \nsudo ln --symbolic /usr/include/x86_64-linux-gnu/openblas-pthread/cblas.h /usr/include/cblas.h\n\n\n\n\n\n\nRun \nsudo ln --symbolic /usr/bin/cmake /usr/bin/cmake3\n\n\n\n\n\n\nRun \nsudo ln --symbolic /usr/bin/protoc /usr/local/bin/protoc\n\n\n\n\n\n\nFollow instructions to install Docker:\n \nhttps://docs.docker.com/engine/install/ubuntu/#install-using-the-repository\n\n\n\n\n\n\nOptionally, configure Docker to use socket activation. 
The advantage of socket activation is\n that systemd will automatically start the Docker daemon when you use \ndocker\n commands:\n\n\n\n\n\n\nsudo systemctl disable docker.service;\nsudo systemctl stop docker.service;\nsudo systemctl enable docker.socket;\n\n\n\n\n\n\n\nFollow instructions so that you can run Docker without sudo:\n \nhttps://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user\n\n\n\n\n\n\nInstall Docker Compose:\n\n\n\n\n\n\nsudo apt update\nsudo apt install docker-compose-plugin\n\n\n\n\n\n\n\nOptionally, stop redis from starting automatically:\n \nsudo systemctl disable redis\n\n\n\n\n\n\nOptionally, stop postgresql from starting automatically:\n \nsudo systemctl disable postgresql\n\n\n\n\n\n\nInitialize Postgres (use \"password\" when prompted for a password):\n\n\n\n\n\n\nsudo -i -u postgres createuser -P mpf\nsudo -i -u postgres createdb -O mpf mpf\n\n\n\n\n\nBuild and install OpenCV:\n\n\n\n\nmkdir /tmp/opencv-contrib;\nwget -O- 'https://github.com/opencv/opencv_contrib/archive/4.5.5.tar.gz' \\\n | tar --extract --gzip --directory /tmp/opencv-contrib;\nmkdir /tmp/opencv;\ncd /tmp/opencv;\nwget -O- 'https://github.com/opencv/opencv/archive/4.5.5.tar.gz' \\\n | tar --extract --gzip;\ncd opencv-4.5.5;\nmkdir build;\ncd build;\nexport OpenBLAS_HOME=/usr/lib/x86_64-linux-gnu/openblas-pthread; \\\ncmake -DCMAKE_INSTALL_PREFIX:PATH='/opt/opencv-4.5.5' \\\n -DWITH_IPP=false \\\n -DBUILD_EXAMPLES=false \\\n -DBUILD_TESTS=false \\\n -DBUILD_PERF_TESTS=false \\\n -DWITH_CUBLAS=true \\\n -DOPENCV_EXTRA_MODULES_PATH=/tmp/opencv-contrib/opencv_contrib-4.5.5/modules \\\n ..;\nsudo make --jobs \"$(nproc)\" install;\nsudo ln --symbolic '/opt/opencv-4.5.5/include/opencv4/opencv2' /usr/local/include/opencv2;\nsudo sh -c 'echo /opt/opencv-4.5.5/lib > /etc/ld.so.conf.d/mpf.conf'\nsudo ldconfig;\nsudo rm -rf /tmp/opencv-contrib /tmp/opencv;\n\n\n\n\n\nBuild and install the ActiveMQ C++ library:\n\n\n\n\nmkdir /tmp/activemq-cpp;\ncd /tmp/activemq-cpp;\nwget -O- https://dlcdn.apache.org/activemq/activemq-cpp/3.9.5/activemq-cpp-library-3.9.5-src.tar.gz \\\n | tar --extract --gzip;\ncd activemq-cpp-library-3.9.5;\n./configure;\nsudo make --jobs \"$(nproc)\" install;\nsudo rm -rf /tmp/activemq-cpp;\n\n\n\n\n\nInstall NotoEmoji font for markup:\n\n\n\n\nmkdir /tmp/noto;\ncd /tmp/noto;\nwget https://noto-website-2.storage.googleapis.com/pkgs/NotoEmoji-unhinted.zip;\nunzip NotoEmoji-unhinted.zip;\nsudo mkdir --parents /usr/share/fonts/google-noto-emoji;\nsudo cp NotoEmoji-Regular.ttf /usr/share/fonts/google-noto-emoji/;\nsudo chmod a+r /usr/share/fonts/google-noto-emoji/NotoEmoji-Regular.ttf;\nrm -rf /tmp/noto;\n\n\n\n\n\nBuild and install PNG Defry:\n\n\n\n\nmkdir /tmp/pngdefry;\ncd /tmp/pngdefry;\nwget -O- 'https://github.com/openmpf/pngdefry/archive/v1.2.tar.gz' \\\n | tar --extract --gzip;\ncd pngdefry-1.2;\nsudo gcc pngdefry.c -o /usr/local/bin/pngdefry;\nrm -rf /tmp/pngdefry;\n\n\n\n\n\nInstall Maven:\n\n\n\n\nwget -O- 'https://archive.apache.org/dist/maven/maven-3/3.3.3/binaries/apache-maven-3.3.3-bin.tar.gz' \\\n | sudo tar --extract --gzip --directory /opt;\nsudo ln --symbolic /opt/apache-maven-3.3.3/bin/mvn /usr/local/bin;\n\n\n\n\n\nBuild and install libheif:\n\n\n\n\nmkdir /tmp/libheif;\ncd /tmp/libheif;\nwget -O- https://github.com/strukturag/libheif/archive/refs/tags/v1.12.0.tar.gz \\\n | tar --extract --gzip;\ncd libheif-1.12.0;\nmkdir build;\ncd build;\ncmake3 -DCMAKE_INSTALL_PREFIX=/usr -DWITH_EXAMPLES=false ..;\nsudo make --jobs \"$(nproc)\" 
install;\ncd;\nsudo rm -rf /tmp/libheif;\n\n\n\n\n\nFrom your home directory run:\n\n\n\n\ngit clone https://github.com/openmpf/openmpf-projects.git --recursive;\ncd openmpf-projects;\ngit checkout develop;\ngit submodule foreach git checkout develop;\n\n\n\n\n\n\n\nRun: \npip install openmpf-projects/openmpf/trunk/bin/mpf-scripts\n\n\n\n\n\n\nAdd \nPATH=\"$HOME/.local/bin:$PATH\"\n to \n~/.bashrc\n\n\n\n\n\n\nRun \nmkdir -p openmpf-projects/openmpf/trunk/install/share/logs\n\n\n\n\n\n\nRun \nsudo cp openmpf-projects/openmpf/trunk/mpf-install/src/main/scripts/mpf-profile.sh /etc/profile.d/mpf.sh\n\n\n\n\n\n\nRun \nsudo sh -c 'echo /home/mpf/mpf-sdk-install/lib >> /etc/ld.so.conf.d/mpf.conf'\n\n\n\n\n\n\nRun \nsudo cp openmpf-projects/openmpf/trunk/node-manager/src/scripts/node-manager.service /etc/systemd/system/node-manager.service\n\n\n\n\n\n\nRun \ncd ~/openmpf-projects/openmpf/trunk/workflow-manager/src/main/resources/properties/; cp mpf-private-example.properties mpf-private.properties\n\n\n\n\n\n\nRun \nsudo sh -c 'echo \"[mpf-child]\\nlocalhost\" >> /etc/ansible/hosts'\n\n\n\n\n\n\nRun \nmkdir -p ~/.m2/repository/; tar -f /home/mpf/openmpf-projects/openmpf-build-tools/mpf-maven-deps.tar.gz --extract --gzip --directory ~/.m2/repository/\n\n\n\n\n\n\nReboot the VM.\n\n\n\n\n\n\nAt this point you may wish to install additional dependencies so that you can\nbuild specific OpenMPF components. Refer to the commands in the \nDockerfile\n\nfor each component you're interested in.\n\n\nConfigure Users\n\n\nTo change the default user password settings, modify\n\nopenmpf-projects/openmpf/trunk/workflow-manager/src/main/resources/properties/user.properties\n.\nNote that the default settings are public knowledge, which could be a security\nrisk.\n\n\nNote that \nmpf remove-user\n and \nmpf add-user\n commands explained in the\n\nCommand Line Tools\n section do not modify the\n\nuser.properties\n file. If you remove a user using the \nmpf remove-user\n\ncommand, the changes will take effect at runtime, but an entry may still exist\nfor that user in the \nuser.properties\n file. If so, then the user account will\nbe recreated the next time the Workflow Manager is restarted.\n\n\nBuild and Run the OpenMPF Workflow Manager Web Application\n\n\n\n\nBuild OpenMPF:\n\n\n\n\ncd ~/openmpf-projects/openmpf;\nmvn clean install \\\n -DskipTests -Dmaven.test.skip=true \\\n -DskipITs \\\n -Dcomponents.build.components=openmpf-components/cpp/OcvFaceDetection \\\n -Dstartup.auto.registration.skip=false;\n\n\n\n\n\nStart OpenMPF with \nmpf start\n.\n\n\n\n\nLook for this log message in the terminal with a time value indicating the Workflow Manager has\nfinished starting:\n\n\n2022-10-11 12:21:16,447 INFO [main] o.m.m.Application - Started Application in 22.843 seconds (JVM running for 24.661)\n\n\n\nAfter startup, the Workflow Manager will be available at \nhttp://localhost:8080\n.\nBrowse to this URL using Firefox or Chrome.\n\n\nIf you want to test regular user capabilities, log in as the \"mpf\" user with\nthe \"mpf123\" password. Please see the\n\nOpenMPF User Guide\n for more information.\nAlternatively, if you want to test admin capabilities then log in as \"admin\"\nuser with the \"mpfadm\" password. 
Please see the\n\nOpenMPF Admin Guide\n for more information.\nWhen finished using OpenMPF, stop Workflow Manager with \nctrl-c\n and then run \nmpf stop\n to stop\nthe other system dependencies.\n\n\nThe preferred method to start and stop services for OpenMPF is with the\n\nmpf start\n and \nmpf stop\n commands. For additional information on these\ncommands, please see the\n\nCommand Line Tools\n section.\nThese will start and stop the PostgreSQL, Redis, Node Manager, and Workflow Manager processes.\n\n\nKnown Issues\n\n\no.m.m.m.c.JobController - Failure creating job. supplier.get()\n\n\nIf you see an error message similar to:\n\n\n2022-02-07 17:17:30,538 ERROR [http-nio-8080-exec-1] o.m.m.m.c.JobController - Failure creating job. supplier.get()\njava.lang.NullPointerException: supplier.get()\n at java.util.Objects.requireNonNull(Objects.java:246) ~[?:?]\n at java.util.Objects.requireNonNullElseGet(Objects.java:321) ~[?:?]\n at org.mitre.mpf.wfm.util.PropertiesUtil.getHostName(PropertiesUtil.java:267) ~[classes/:?]\n at org.mitre.mpf.wfm.util.PropertiesUtil.getExportedJobId(PropertiesUtil.java:285) ~[classes/:?]\n\n\n\nOpen \n/etc/profile.d/mpf.sh\n and change \nexport HOSTNAME\n to\n\nexport HOSTNAME=$(hostname)\n. Then, restart the VM.\n\n\nAppendices\n\n\nCommand Line Tools\n\n\nOpenMPF installs command line tools that can be accessed through a terminal\non the development machine. All of the tools take the form of actions:\n\nmpf [options ...]\n.\n\n\nExecute \nmpf --help\n for general documentation and \nmpf --help\n for\ndocumentation about a specific action.\n\n\n\n\nStart / Stop Actions\n: Actions for starting and stopping the OpenMPF\n system dependencies, including PostgreSQL, Redis, Workflow Manager, and the\n node managers on the various nodes in the OpenMPF cluster.\n\n\nmpf status\n: displays a message indicating whether each of the system\n dependencies is running or not\n\n\nmpf start\n: starts all of the system dependencies\n\n\nmpf stop\n: stops all of the system dependencies\n\n\nmpf restart\n : stops and then starts all of the system dependencies\n\n\n\n\n\n\nUser Actions\n: Actions for managing Workflow Manager user accounts. If\n changes are made to an existing user then that user will need to log off or\n the Workflow Manager will need to be restarted for the changes to take effect.\n\n\nmpf list-users\n : lists all of the existing user accounts and their role\n (non-admin or admin)\n\n\nmpf add-user \n: adds a new user account; will be\n prompted to enter the account password\n\n\nmpf remove-user \n : removes an existing user account\n\n\nmpf change-role \n : change the role (non-admin to admin\n or vice versa) for an existing user\n\n\nmpf change-password \n: change the password for an existing\n user; will be prompted to enter the new account password\n\n\n\n\n\n\nClean Actions\n: Actions to remove old data and revert the system to a\n new install state. 
User accounts, registered components, as well as custom\n actions, tasks, and pipelines, are preserved.\n\n\nmpf clean\n: cleans out old job information and results, pending job requests, and marked up\n media files, but preserves log files and uploaded media.\n\n\nmpf clean --delete-logs --delete-uploaded-media\n: the same as \nmpf clean\n\n but also deletes log files and uploaded media\n\n\n\n\n\n\nNode Action\n: Actions for managing node membership in the OpenMPF cluster.\n\n\nmpf list-nodes\n: If the Workflow Manager is running, get the current\n JGroups view; otherwise, list the core nodes\n\n\n\n\n\n\n\n\nPackaging a Component\n\n\nIn a non-Docker deployment, admin users can register component packages through\nthe web UI. Refer to \nComponent Registration\n.\n\n\nOnce the descriptor file is complete, as described in\n\nComponent Descriptor Reference\n,\nthe next step is to compile your component source code, and finally, create a\n.tar.gz package containing the descriptor file, component library, and all\nother necessary files.\n\n\nThe package should contain a top-level directory with a unique name that will\nnot conflict with existing component packages that have already been developed.\nThe top-level directory name should be the same as the \ncomponentName\n.\n\n\nWithin the top-level directory there must be a directory named \u201cdescriptor\u201d\nwith the descriptor JSON file in it. The name of the file must be\n\u201cdescriptor.json\u201d.\n\n\nExample:\n\n\n//sample-component-1.0.0-tar.gz contents\nSampleComponent/\n config/\n descriptor/\n descriptor.json\n lib/\n\n\n\nInstalling and registering a component\n\n\nThe Component Registration web page, located in the Admin section of the\nOpenMPF web user interface, can be used to upload and register the component.\n\n\nDrag and drop the .tar.gz file containing the component onto the dropzone area\nof that page. The component will automatically be uploaded and registered.\n\n\nUpon successful registration, the component will be available for deployment\nonto OpenMPF nodes via the Node Configuration web page and\n\n/rest/nodes/config\n end point.\n\n\nIf the descriptor contains custom actions, tasks, or pipelines, then they will\nbe automatically added to the system upon registration.\n\n\n\n\nNOTE:\n If the descriptor does not contain custom actions, tasks,\nor pipelines, then a default action, task, and pipeline will be generated\nand added to the system.\n\n\nThe default action will use the component\u2019s algorithm with its default\nproperty value settings.\nThe default task will use the default action.\nThe default pipeline will use the default task. This will only be generated\nif the algorithm does not specify any \nrequiresCollection\n states.\n\n\n\n\nUnregistering a component\n\n\nA component can be unregistered by using the remove button on the Component\nRegistration page.\n\n\nDuring unregistration, all services, algorithms, actions, tasks, and pipelines\nassociated with the component are deleted. 
Additionally, all actions, tasks,\nand pipelines that depend on these elements are removed.\n\n\nWeb UI\n\n\nThe following sections will cover some additional functionality permitted to\nadmin users in a non-Docker deployment.\n\n\nNode Configuration and Status\n\n\nThis page provides a list of all of the services that are configured to run on\nthe OpenMPF cluster:\n\n\n\n\nEach node shows information about the current status of each service, if it is\nunlaunchable due to an underlying error, and how many services are running for\neach node. If a service is unlaunchable, it will be indicated using a red\nstatus icon (not shown). Note that services are grouped by component type.\nClick the chevron \">\" to expand a service group to view the individual services.\n\n\nAn admin user can start, stop, or restart them on an individual basis. If a\nnon-admin user views this page, the \"Action(s)\" column is not displayed. This\npage also enables an admin user to edit the configuration for all nodes in the\nOpenMPF cluster. A non-admin user can only view the existing configuration.\n\n\nAn admin user can add a node by using the \"Add Node\" button and selecting a\nnode in the OpenMPF cluster from the drop-down list. You can also select to add\nall services at this time. A node and all of its configured services can be\nremoved by clicking the trash can to the right of the node's hostname.\n\n\nAn admin user can add services individually by selecting the node edit button\nat the bottom of the node. The number of service instances can be increased or\ndecreased by using the drop-down. Click the \"Submit\" button to save the changes.\n\n\nWhen making changes, please be aware of the following:\n\n\n\n\nIt may take a minute for the configuration to take effect on the server.\n\n\nIf you remove an existing service from a node, any job that service is\n processing will be stopped, and you will need to resubmit that job.\n\n\nIf you create a new node, its configuration will not take effect until the\n OpenMPF software is properly installed and started on the associated host.\n\n\nIf you delete a node, you will need to manually turn off the hardware running\n that node (deleting a node does not shut down the machine).\n\n\n\n\nComponent Registration\n\n\nThis page allows an admin user to add and remove non-default components to and\nfrom the system:\n\n\n\n\nA component package takes the form of a tar.gz file. An admin user can either\ndrag and drop the file onto the \"Upload a new component\" dropzone area or click\nthe dropzone area to open a file browser and select the file that way.\nIn either case, the component will begin to be uploaded to the system. If the\nadmin user dragged and dropped the file onto the dropzone area then the upload\nprogress will be shown in that area. Once uploaded, the Workflow Manager will\nautomatically attempt to register the component. Notification messages will\nappear in the upper right side of the screen to indicate success or failure if\nan error occurs. The \"Current Components\" table will display the component\nstatus.\n\n\n\n\nIf for some reason the component package upload succeeded but the component\nregistration failed, then the admin user will be able to click the \"Register\"\nbutton again to try another registration attempt. For example, the admin\nuser may do this after reviewing the Workflow Manager logs and resolving any\nissues that prevented the component from successfully registering the first\ntime. 
One reason may be that a component with the same name already exists on\nthe system. Note that an error will also occur if the top-level directory of\nthe component package, once extracted, already exists in the \n/opt/mpf/plugins\n\ndirectory on the system.\n\n\nOnce registered, an admin user has the option to remove the component. This\nwill unregister it and completely remove any configured services, as well as\nthe uploaded file and its extracted contents, from the system. Also, the\ncomponent algorithm as well as any actions, tasks, and pipelines specified in\nthe component's descriptor file will be removed when the component is removed.",
"title": "Development Environment Guide"
},
{
@@ -1397,7 +1417,7 @@
},
{
"location": "/Development-Environment-Guide/index.html#component-registration",
- "text": "This page allows an admin user to add and remove non-default components to and\nfrom the system: A component package takes the form of a tar.gz file. An admin user can either\ndrag and drop the file onto the \"Upload a new component\" dropzone area or click\nthe dropzone area to open a file browser and select the file that way.\nIn either case, the component will begin to be uploaded to the system. If the\nadmin user dragged and dropped the file onto the dropzone area then the upload\nprogress will be shown in that area. Once uploaded, the workflow manager will\nautomatically attempt to register the component. Notification messages will\nappear in the upper right side of the screen to indicate success or failure if\nan error occurs. The \"Current Components\" table will display the component\nstatus. If for some reason the component package upload succeeded but the component\nregistration failed then the admin user will be able to click the \"Register\"\nbutton again to try to another registration attempt. For example, the admin\nuser may do this after reviewing the workflow manager logs and resolving any\nissues that prevented the component from successfully registering the first\ntime. One reason may be that a component with the same name already exists on\nthe system. Note that an error will also occur if the top-level directory of\nthe component package, once extracted, already exists in the /opt/mpf/plugins \ndirectory on the system. Once registered, an admin user has the option to remove the component. This\nwill unregister it and completely remove any configured services, as well as\nthe uploaded file and its extracted contents, from the system. Also, the\ncomponent algorithm as well as any actions, tasks, and pipelines specified in\nthe component's descriptor file will be removed when the component is removed.",
+ "text": "This page allows an admin user to add and remove non-default components to and\nfrom the system: A component package takes the form of a tar.gz file. An admin user can either\ndrag and drop the file onto the \"Upload a new component\" dropzone area or click\nthe dropzone area to open a file browser and select the file that way.\nIn either case, the component will begin to be uploaded to the system. If the\nadmin user dragged and dropped the file onto the dropzone area then the upload\nprogress will be shown in that area. Once uploaded, the Workflow Manager will\nautomatically attempt to register the component. Notification messages will\nappear in the upper right side of the screen to indicate success or failure if\nan error occurs. The \"Current Components\" table will display the component\nstatus. If for some reason the component package upload succeeded but the component\nregistration failed then the admin user will be able to click the \"Register\"\nbutton again to try to another registration attempt. For example, the admin\nuser may do this after reviewing the Workflow Manager logs and resolving any\nissues that prevented the component from successfully registering the first\ntime. One reason may be that a component with the same name already exists on\nthe system. Note that an error will also occur if the top-level directory of\nthe component package, once extracted, already exists in the /opt/mpf/plugins \ndirectory on the system. Once registered, an admin user has the option to remove the component. This\nwill unregister it and completely remove any configured services, as well as\nthe uploaded file and its extracted contents, from the system. Also, the\ncomponent algorithm as well as any actions, tasks, and pipelines specified in\nthe component's descriptor file will be removed when the component is removed.",
"title": "Component Registration"
},
{
@@ -1497,7 +1517,7 @@
},
{
"location": "/CPP-Streaming-Component-API/index.html",
- "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nWARNING:\n The C++ Streaming API is not complete, and there are no future development plans. Use at your own risk. The only way to make use of the functionality is through the REST API. It requires the Node Manager and does not work in a Docker deployment.\n\n\n\nAPI Overview\n\n\nIn OpenMPF, a \ncomponent\n is a plugin that receives jobs (containing media), processes that media, and returns results.\n\n\nThe OpenMPF Streaming Component API currently supports the development of \ndetection components\n, which are used detect objects in live RTSP or HTTP video streams.\n\n\nUsing this API, detection components can be built to provide:\n\n\n\n\nDetection (Localizing an object)\n\n\nTracking (Localizing an object across multiple frames)\n\n\nClassification (Detecting the type of object and optionally localizing that object)\n\n\n\n\nEach frame of the video is processed as it is read from the stream. After processing enough frames to form a segment (for example, 100 frames), the component starts processing the next segment. Like with batch processing, each segment read from the stream is processed independently of the rest. No detection or track information is carried over between segments. Tracks are not merged across segments.\n\n\nHow Components Integrate into OpenMPF\n\n\nComponents are integrated into OpenMPF through the use of OpenMPF's \nComponent Executable\n. Developers create component libraries that encapsulate the component detection logic. Each instance of the Component Executable loads one of these libraries and uses it to service job requests sent by the OpenMPF Workflow Manager (WFM).\n\n\nThe Component Executable:\n\n\n\n\nReceives and parses job requests from the WFM\n\n\nInvokes functions on the component library to obtain detection results\n\n\nPopulates and sends the respective responses to the WFM\n\n\n\n\nThe basic pseudocode for the Component Executable is as follows:\n\n\nwhile (has_next_frame) {\n if (is_new_segment) {\n component->BeginSegment(video_segment_info)\n }\n activity_found = component->ProcessFrame(frame, frame_number) // Component logic does the work here\n if (activity_found && !already_sent_new_activity_alert_for_this_segment) {\n SendActivityAlert(frame_number)\n }\n if (is_end_of_segment) {\n streaming_video_tracks = component->EndSegment()\n SendSummaryReport(frame_number, streaming_video_tracks)\n }\n}\n\n\n\nEach instance of a Component Executable runs as a separate process. Generally, each process will execute a different detection algorithm that corresponds to a single stage in a detection pipeline. Each instance is started by the Node Manager as needed in order to execute a streaming video job. The Node Manager will monitor the process status and eventually stop it.\n\n\nThe Component Executable invokes functions on the Component Logic to get detection objects, and subsequently generates new track alerts and segment summary reports based on the output. 
These alerts and reports are sent to the WFM.\n\n\nA component developer implements a detection component by extending \nMPFStreamingDetectionComponent\n.\n\n\nGetting Started\n\n\nThe quickest way to get started with the C++ Streaming Component API is to first read the \nOpenMPF Component API Overview\n and then \nreview the source\n of an example OpenMPF C++ detection component that supports stream processing.\n\n\nDetection components are implemented by:\n\n\n\n\nExtending \nMPFStreamingDetectionComponent\n.\n\n\nBuilding the component into a shared object library. (See \nHelloWorldComponent CMakeLists.txt\n).\n\n\nPackaging the component into an OpenMPF-compliant .tar.gz file. (See \nComponent Packaging\n).\n\n\nRegistering the component with OpenMPF. (See \nComponent Registration\n).\n\n\n\n\nAPI Specification\n\n\nThe figure below presents a high-level component diagram of the C++ Streaming Component API:\n\n\n\n\nThe API consists of a \nDetection Component Interface\n and related input and output structures.\n\n\nDetection Component Interface\n\n\n\n\nMPFStreamingDetectionComponent\n - Abstract class that should be extended by all OpenMPF C++ detection components that perform stream processing.\n\n\n\n\nInputs\n\n\nThe following data structures contain details about a specific job, and a video segment (work unit) associated with that job:\n\n\n\n\nMPFStreamingVideoJob\n\n\nVideoSegmentInfo\n\n\n\n\nOutputs\n\n\nThe following data structures define detection results:\n\n\n\n\nMPFImageLocation\n\n\nMPFVideoTrack\n\n\n\n\nComponent Factory Functions\n\n\nEvery detection component must include the following macro in its implementation:\n\n\nEXPORT_MPF_STREAMING_COMPONENT(TYPENAME);\n\n\n\nThis creator macro takes the \nTYPENAME\n of the detection component (for example, \u201cStreamingHelloWorld\u201d). This macro creates the factory function that the OpenMPF Component Executable will call in order to instantiate the detection component. The creation function is called once, to obtain an instance of the component, after the component library has been loaded into memory.\n\n\nThis macro also creates the factory function that the Component Executable will use to delete that instance of the detection component.\n\n\nThis macro must be used outside of a class declaration, preferably at the bottom or top of a component source (.cpp) file.\n\n\nExample:\n\n\n// Note: Do not put the TypeName/Class Name in quotes\nEXPORT_MPF_STREAMING_COMPONENT(StreamingHelloWorld);\n\n\n\nDetection Component Interface\n\n\nThe \nMPFStreamingDetectionComponent\n class is the abstract class utilized by all OpenMPF C++ detection components that perform stream processing. This class provides functions for developers to integrate detection logic into OpenMPF.\n\n\nSee the latest source here.\n\n\nConstructor\n\n\nSuperclass constructor that must be invoked by the constructor of the component subclass.\n\n\n\n\nFunction Definition:\n\n\n\n\nMPFStreamingDetectionComponent(const MPFStreamingVideoJob &job)\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njob\n\n\nconst MPFStreamingVideoJob &\n\n\nStructure containing details about the work to be performed. 
See \nMPFStreamingVideoJob\n\n\n\n\n\n\n\n\n\n\n\n\nReturns: none\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nSampleComponent::SampleComponent(const MPFStreamingVideoJob &job)\n : MPFStreamingDetectionComponent(job)\n , hw_logger_(log4cxx::Logger::getLogger(\"SampleComponent\"))\n , job_name_(job.job_name) {\n\n LOG4CXX_INFO(hw_logger_, \"[\" << job_name_ << \"] Initialized SampleComponent component.\")\n}\n\n\n\nBeginSegment(VideoSegmentInfo)\n\n\nIndicate the beginning of a new video segment. The next call to \nProcessFrame()\n will be the first frame of the new segment. \nProcessFrame()\n will never be called before this function.\n\n\n\n\nFunction Definition:\n\n\n\n\nvoid BeginSegment(const VideoSegmentInfo &segment_info)\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nsegment_info\n\n\nconst VideoSegmentInfo &\n\n\nStructure containing details about next video segment to process. See \nVideoSegmentInfo\n\n\n\n\n\n\n\n\n\n\n\n\nReturns: none\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nvoid SampleComponent::BeginSegment(const VideoSegmentInfo &segment_info) {\n // Prepare for next segment\n}\n\n\n\nProcessFrame(Mat ...)\n\n\nProcess a single video frame for the current segment.\n\n\nMust return true when the component begins generating the first track for the current segment. After it returns true, the Component Executable will ignore the return value until the component begins processing the next segment.\n\n\nIf the \njob_properties\n map contained in the \nMPFStreamingVideoJob\n struct passed to the component constructor contains a CONFIDENCE_THRESHOLD entry, then this function should only return true for a detection with a confidence value that meets or exceeds that threshold. After the Component Executable invokes \nEndSegment()\n to retrieve the segment tracks, it will discard detections that are below the threshold. If all the detections in a track are below the threshold, then the entire track will be discarded.\n\n\nNote that this function may not be invoked for every frame in the current segment. For example, if FRAME_INTERVAL = 2, then this function will only be invoked for every other frame since those are the only ones that need to be processed.\n\n\nAlso, it may not be invoked for the first nor last frame in the segment. For example, if FRAME_INTERVAL = 3 and the segment size is 10, then it will be invoked for frames {0, 3, 6, 9} for the first segment, and frames {12, 15, 18} for the second segment.\n\n\n\n\nFunction Definition:\n\n\n\n\nbool ProcessFrame(const cv::Mat &frame, int frame_number)\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nframe\n\n\nconst cv::Mat &\n\n\nOpenCV class containing frame data. See \ncv::Mat\n\n\n\n\n\n\nframe_number\n\n\nint\n\n\nA unique frame number (0-based index). Guaranteed to be greater than the frame number passed to the last invocation of this function.\n\n\n\n\n\n\n\n\n\n\n\n\nReturns: (\nbool\n) True when the component begins generating the first track for the current segment; false otherwise.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nbool SampleComponent::ProcessFrame(const cv::Mat &frame, int frame_number) {\n // Look for detections. Generate tracks and store them until the end of the segment.\n if (started_first_track_in_current_segment) {\n return true;\n } else {\n return false;\n }\n}\n\n\n\nEndSegment()\n\n\nIndicate the end of the current video segment. This will always be called after \nBeginSegment()\n. 
Generally, \nProcessFrame()\n will be called one or more times before this function, depending on the number of frames in the segment and the number of frames actually read from the stream.\n\n\nNote that the next time \nBeginSegment()\n is called, this component should start generating new tracks. Each time \nEndSegment()\n is called, it should return only the most recent track data for that segment. Tracks should not be carried over between segments. Do not append new detections to a preexisting track from the previous segment and return that cumulative track when this function is called.\n\n\n\n\nFunction Definition:\n\n\n\n\nvector EndSegment()\n\n\n\n\n\n\n\nParameters: none\n\n\n\n\n\n\nReturns: (\nvector\n) The \nMPFVideoTrack\n data for each detected object.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nvector SampleComponent::EndSegment() {\n // Perform any necessary cleanup before processing the next segment.\n // Return the collection of tracks generated for this segment only.\n}\n\n\n\nDetection Job Data Structures\n\n\nThe following data structures contain details about a specific job, and a video segment (work unit) associated with that job:\n\n\n\n\nMPFStreamingVideoJob\n\n\nVideoSegmentInfo\n\n\n\n\nThe following data structures define detection results:\n\n\n\n\nMPFImageLocation\n\n\nMPFVideoTrack\n\n\n\n\nMPFStreamingVideoJob\n\n\nStructure containing information about a job to be performed on a video stream.\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFStreamingVideoJob(\n const string &job_name,\n const string &run_directory,\n const Properties &job_properties,\n const Properties &media_properties)\n}\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njob_name\n\n\nconst string &\n\n\nA specific name given to the job by the OpenMPF framework. This value may be used, for example, for logging and debugging purposes.\n\n\n\n\n\n\nrun_directory \n\n\nconst string &\n\n\nContains the full path of the parent folder above where the component is installed. This parent folder is also known as the plugin folder.\n\n\n\n\n\n\njob_properties \n\n\nconst Properties &\n\n\nContains a map of \n\n which represents the property name and the property value. The key corresponds to the property name specified in the component descriptor file described in the \nComponent Descriptor Reference\n. Values are determined when creating a pipeline or when submitting a job. \n Note: The job_properties map may not contain the full set of job properties. For properties not contained in the map, the component must use a default value.\n\n\n\n\n\n\nmedia_properties \n\n\nconst Properties &\n\n\nContains a map of \n\n of metadata about the media associated with the job. The entries in the map vary depending on the type of media. Refer to the type-specific job structures below.\n\n\n\n\n\n\n\n\nVideoSegmentInfo\n\n\nStructure containing information about a segment of a video stream to be processed. 
A segment is a subset of contiguous video frames.\n\n\n\n\nConstructor(s):\n\n\n\n\nVideoSegmentInfo(\n int segment_number,\n int start_frame,\n int end_frame,\n int frame_width,\n int frame_height\n}\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nsegment_number\n\n\nint\n\n\nA unique segment number (0-based index).\n\n\n\n\n\n\nstart_frame\n\n\nint\n\n\nThe frame number (0-based index) corresponding to the first frame in this segment.\n\n\n\n\n\n\nend_frame\n\n\nint\n\n\nThe frame number (0-based index) corresponding to the last frame in this segment.\n\n\n\n\n\n\nframe_width\n\n\nint\n\n\nThe height of each frame in this segment.\n\n\n\n\n\n\nframe_height\n\n\nint\n\n\nThe width of each frame in this segment.\n\n\n\n\n\n\n\n\nDetection Job Result Classes\n\n\nMPFImageLocation\n\n\nStructure used to store the location of detected objects in a single video frame (image).\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFImageLocation()\nMPFImageLocation(\n int x_left_upper,\n int y_left_upper,\n int width,\n int height,\n float confidence = -1,\n const Properties &detection_properties = {})\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nx_left_upper\n\n\nint\n\n\nUpper left X coordinate of the detected object.\n\n\n\n\n\n\ny_left_upper\n\n\nint\n\n\nUpper left Y coordinate of the detected object.\n\n\n\n\n\n\nwidth\n\n\nint\n\n\nThe width of the detected object.\n\n\n\n\n\n\nheight\n\n\nint\n\n\nThe height of the detected object.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetection_properties\n\n\nProperties &\n\n\nOptional additional information about the detected object. There is no restriction on the keys or the number of entries that can be added to the detection_properties map. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\n\n\nExample:\n\n\n\n\nA component that performs generic object classification can add an entry to \ndetection_properties\n where the key is \nCLASSIFICATION\n and the value is the type of object detected.\n\n\nMPFImageLocation detection;\ndetection.x_left_upper = 0;\ndetection.y_left_upper = 0;\ndetection.width = 100;\ndetection.height = 100;\ndetection.confidence = 1.0;\ndetection.detection_properties[\"CLASSIFICATION\"] = \"backpack\";\n\n\n\nMPFVideoTrack\n\n\nStructure used to store the location of detected objects in a video file.\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFVideoTrack()\nMPFVideoTrack(\n int start_frame,\n int stop_frame,\n float confidence = -1,\n map frame_locations,\n const Properties &detection_properties = {})\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nstart_frame\n\n\nint\n\n\nThe first frame number (0-based index) that contained the detected object.\n\n\n\n\n\n\nstop_frame\n\n\nint\n\n\nThe last frame number (0-based index) that contained the detected object.\n\n\n\n\n\n\nframe_locations\n\n\nmap\n\n\nA map of individual detections. The key for each map entry is the frame number where the detection was generated, and the value is a \nMPFImageLocation\n calculated as if that frame was a still image. 
Note that a key-value pair is \nnot\n required for every frame between the track start frame and track stop frame.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetection_properties\n\n\nProperties &\n\n\nOptional additional information about the detected object. There is no restriction on the keys or the number of entries that can be added to the detection_properties map. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nNOTE:\n Currently, \nMPFVideoTrack.detection_properties\n do not show up in the JSON output object or are used by the WFM in any way.\n\n\n\n\nA component that detects text can add an entry to \ndetection_properties\n where the key is \nTRANSCRIPT\n and the value is a string representing the text found in the video segment.\n\n\nMPFVideoTrack track;\ntrack.start_frame = 0;\ntrack.stop_frame = 5;\ntrack.confidence = 1.0;\ntrack.frame_locations = frame_locations;\ntrack.detection_properties[\"TRANSCRIPT\"] = \"RE5ULTS FR0M A TEXT DETECTER\";\n\n\n\nC++ Component Build Environment\n\n\nA C++ component library must be built for the same C++ compiler and Linux\nversion that is used by the OpenMPF Component Executable. This is to ensure\ncompatibility between the executable and the library functions at the\nApplication Binary Interface (ABI) level. At this writing, the OpenMPF runs on\nUbuntu 20.04 (kernel version 5.13.0-30), and the OpenMPF C++ Component\nExecutable is built with g++ (GCC) 9.3.0-17.\n\n\nComponents should be supplied as a tar file, which includes not only the component library, but any other libraries or files needed for execution. This includes all other non-standard libraries used by the component (aside from the standard Linux and C++ libraries), and any configuration or data files.\n\n\nComponent Development Best Practices\n\n\nThrow Exceptions\n\n\nUnlike the \nC++ Batch Component API\n, none of the the C++ Streaming Component API functions return an \nMPFDetectionError\n. Instead, streaming components should throw an exception when a non-recoverable error occurs. The exception should be an instantiation or subclass of \nstd::exception\n and provide a descriptive error message that can be retrieved using \nwhat()\n. For example:\n\n\nbool SampleComponent::ProcessFrame(const cv::Mat &frame, int frame_number) {\n // Something bad happened\n throw std::exception(\"Error: Cannot do X with value Y.\");\n}\n\n\n\nThe exception will be handled by the Component Executable. It will immediately invoke \nEndSegment()\n to retrieve the current tracks. Then the component process and streaming job will be terminated.\n\n\nSingle-threaded Operation\n\n\nImplementations are encouraged to operate in single-threaded mode. OpenMPF will parallelize components through multiple instantiations of the component, each running as a separate service.\n\n\nStateless Behavior\n\n\nOpenMPF components should be stateless in operation and give identical output for a provided input (i.e. when processing a segment with the same \nVideoSegmentInfo\n).\n\n\nGPU Support\n\n\nFor components that want to take advantage of NVIDA GPU processors, please read the \nGPU Support Guide\n. 
Also ensure that your build environment has the NVIDIA CUDA Toolkit installed, as described in the \nBuild Environment Setup Guide\n.\n\n\nComponent Structure\n\n\nIt is recommended that C++ components are organized according to the following directory structure:\n\n\ncomponentName\n\u251c\u2500\u2500 config - Component-specific configuration files\n\u251c\u2500\u2500 descriptor\n\u2502 \u2514\u2500\u2500 descriptor.json\n\u2514\u2500\u2500 lib\n \u2514\u2500\u2500libComponentName.so - Compiled component library\n\n\n\nOnce built, components should be packaged into a .tar.gz containing the contents of the directory shown above.\n\n\nLogging\n\n\nIt is recommended to use \nApache log4cxx\n for\nOpenMPF Component logging. Components using log4cxx should not configure logging themselves.\nThe Component Executor will configure log4cxx globally. Components should call\n\nlog4cxx::Logger::getLogger(\"\")\n to a get a reference to the logger. If you\nare using a different logging framework, you should make sure its behavior is similar to how\nthe Component Executor configures log4cxx as described below.\n\n\nThe following log LEVELs are supported: \nFATAL, ERROR, WARN, INFO, DEBUG, TRACE\n.\nThe \nLOG_LEVEL\n environment variable can be set to one of the log levels to change the logging\nverbosity. When \nLOG_LEVEL\n is absent, \nINFO\n is used.\n\n\nNote that multiple instances of the same component can log to the same file.\nAlso, logging content can span multiple lines.\n\n\nThe logger will write to both standard error and\n\n${MPF_LOG_PATH}/${THIS_MPF_NODE}/log/.log\n.\n\n\nEach log statement will take the form:\n\nDATE TIME LEVEL CONTENT\n\n\nFor example:\n\n2016-02-09 13:42:42,341 INFO - Starting sample-component: [ OK ]",
+ "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nWARNING:\n The C++ Streaming API is not complete, and there are no future development plans. Use at your own risk. The only way to make use of the functionality is through the REST API. It requires the Node Manager and does not work in a Docker deployment.\n\n\n\nAPI Overview\n\n\nIn OpenMPF, a \ncomponent\n is a plugin that receives jobs (containing media), processes that media, and returns results.\n\n\nThe OpenMPF Streaming Component API currently supports the development of \ndetection components\n, which are used detect objects in live RTSP or HTTP video streams.\n\n\nUsing this API, detection components can be built to provide:\n\n\n\n\nDetection (Localizing an object)\n\n\nTracking (Localizing an object across multiple frames)\n\n\nClassification (Detecting the type of object and optionally localizing that object)\n\n\n\n\nEach frame of the video is processed as it is read from the stream. After processing enough frames to form a segment (for example, 100 frames), the component starts processing the next segment. Like with batch processing, each segment read from the stream is processed independently of the rest. No detection or track information is carried over between segments. Tracks are not merged across segments.\n\n\nHow Components Integrate into OpenMPF\n\n\nComponents are integrated into OpenMPF through the use of OpenMPF's \nComponent Executable\n. Developers create component libraries that encapsulate the component detection logic. Each instance of the Component Executable loads one of these libraries and uses it to service job requests sent by the OpenMPF Workflow Manager (WFM).\n\n\nThe Component Executable:\n\n\n\n\nReceives and parses job requests from the WFM\n\n\nInvokes functions on the component library to obtain detection results\n\n\nPopulates and sends the respective responses to the WFM\n\n\n\n\nThe basic pseudocode for the Component Executable is as follows:\n\n\nwhile (has_next_frame) {\n if (is_new_segment) {\n component->BeginSegment(video_segment_info)\n }\n activity_found = component->ProcessFrame(frame, frame_number) // Component logic does the work here\n if (activity_found && !already_sent_new_activity_alert_for_this_segment) {\n SendActivityAlert(frame_number)\n }\n if (is_end_of_segment) {\n streaming_video_tracks = component->EndSegment()\n SendSummaryReport(frame_number, streaming_video_tracks)\n }\n}\n\n\n\nEach instance of a Component Executable runs as a separate process. Generally, each process will execute a different detection algorithm that corresponds to a single stage in a detection pipeline. Each instance is started by the Node Manager as needed in order to execute a streaming video job. The Node Manager will monitor the process status and eventually stop it.\n\n\nThe Component Executable invokes functions on the Component Logic to get detection objects, and subsequently generates new track alerts and segment summary reports based on the output. 
These alerts and reports are sent to the WFM.\n\n\nA component developer implements a detection component by extending \nMPFStreamingDetectionComponent\n.\n\n\nGetting Started\n\n\nThe quickest way to get started with the C++ Streaming Component API is to first read the \nOpenMPF Component API Overview\n and then \nreview the source\n of an example OpenMPF C++ detection component that supports stream processing.\n\n\nDetection components are implemented by:\n\n\n\n\nExtending \nMPFStreamingDetectionComponent\n.\n\n\nBuilding the component into a shared object library. (See \nHelloWorldComponent CMakeLists.txt\n).\n\n\nPackaging the component into an OpenMPF-compliant .tar.gz file. (See \nComponent Packaging\n).\n\n\nRegistering the component with OpenMPF. (See \nComponent Registration\n).\n\n\n\n\nAPI Specification\n\n\nThe figure below presents a high-level component diagram of the C++ Streaming Component API:\n\n\n\n\nThe API consists of a \nDetection Component Interface\n and related input and output structures.\n\n\nDetection Component Interface\n\n\n\n\nMPFStreamingDetectionComponent\n - Abstract class that should be extended by all OpenMPF C++ detection components that perform stream processing.\n\n\n\n\nInputs\n\n\nThe following data structures contain details about a specific job, and a video segment (work unit) associated with that job:\n\n\n\n\nMPFStreamingVideoJob\n\n\nVideoSegmentInfo\n\n\n\n\nOutputs\n\n\nThe following data structures define detection results:\n\n\n\n\nMPFImageLocation\n\n\nMPFVideoTrack\n\n\n\n\nComponent Factory Functions\n\n\nEvery detection component must include the following macro in its implementation:\n\n\nEXPORT_MPF_STREAMING_COMPONENT(TYPENAME);\n\n\n\nThis creator macro takes the \nTYPENAME\n of the detection component (for example, \u201cStreamingHelloWorld\u201d). This macro creates the factory function that the OpenMPF Component Executable will call in order to instantiate the detection component. The creation function is called once, to obtain an instance of the component, after the component library has been loaded into memory.\n\n\nThis macro also creates the factory function that the Component Executable will use to delete that instance of the detection component.\n\n\nThis macro must be used outside of a class declaration, preferably at the bottom or top of a component source (.cpp) file.\n\n\nExample:\n\n\n// Note: Do not put the TypeName/Class Name in quotes\nEXPORT_MPF_STREAMING_COMPONENT(StreamingHelloWorld);\n\n\n\nDetection Component Interface\n\n\nThe \nMPFStreamingDetectionComponent\n class is the abstract class utilized by all OpenMPF C++ detection components that perform stream processing. This class provides functions for developers to integrate detection logic into OpenMPF.\n\n\nSee the latest source here.\n\n\nConstructor\n\n\nSuperclass constructor that must be invoked by the constructor of the component subclass.\n\n\n\n\nFunction Definition:\n\n\n\n\nMPFStreamingDetectionComponent(const MPFStreamingVideoJob &job)\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njob\n\n\nconst MPFStreamingVideoJob &\n\n\nStructure containing details about the work to be performed. 
See \nMPFStreamingVideoJob\n\n\n\n\n\n\n\n\n\n\n\n\nReturns: none\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nSampleComponent::SampleComponent(const MPFStreamingVideoJob &job)\n : MPFStreamingDetectionComponent(job)\n , hw_logger_(log4cxx::Logger::getLogger(\"SampleComponent\"))\n , job_name_(job.job_name) {\n\n LOG4CXX_INFO(hw_logger_, \"[\" << job_name_ << \"] Initialized SampleComponent component.\")\n}\n\n\n\nBeginSegment(VideoSegmentInfo)\n\n\nIndicate the beginning of a new video segment. The next call to \nProcessFrame()\n will be the first frame of the new segment. \nProcessFrame()\n will never be called before this function.\n\n\n\n\nFunction Definition:\n\n\n\n\nvoid BeginSegment(const VideoSegmentInfo &segment_info)\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nsegment_info\n\n\nconst VideoSegmentInfo &\n\n\nStructure containing details about next video segment to process. See \nVideoSegmentInfo\n\n\n\n\n\n\n\n\n\n\n\n\nReturns: none\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nvoid SampleComponent::BeginSegment(const VideoSegmentInfo &segment_info) {\n // Prepare for next segment\n}\n\n\n\nProcessFrame(Mat ...)\n\n\nProcess a single video frame for the current segment.\n\n\nMust return true when the component begins generating the first track for the current segment. After it returns true, the Component Executable will ignore the return value until the component begins processing the next segment.\n\n\nIf the \njob_properties\n map contained in the \nMPFStreamingVideoJob\n struct passed to the component constructor contains a \nQUALITY_SELECTION_THRESHOLD\n entry, then this function should only return true for a detection with a quality value that meets or exceeds that threshold. Refer to the \nQuality Selection Guide\n. After the Component Executable invokes \nEndSegment()\n to retrieve the segment tracks, it will discard detections that are below the threshold. If all the detections in a track are below the threshold, then the entire track will be discarded.\n\n\nNote that this function may not be invoked for every frame in the current segment. For example, if \nFRAME_INTERVAL = 2\n, then this function will only be invoked for every other frame since those are the only ones that need to be processed.\n\n\nAlso, it may not be invoked for the first nor last frame in the segment. For example, if \nFRAME_INTERVAL = 3\n and the segment size is 10, then it will be invoked for frames {0, 3, 6, 9} for the first segment, and frames {12, 15, 18} for the second segment.\n\n\n\n\nFunction Definition:\n\n\n\n\nbool ProcessFrame(const cv::Mat &frame, int frame_number)\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nframe\n\n\nconst cv::Mat &\n\n\nOpenCV class containing frame data. See \ncv::Mat\n\n\n\n\n\n\nframe_number\n\n\nint\n\n\nA unique frame number (0-based index). Guaranteed to be greater than the frame number passed to the last invocation of this function.\n\n\n\n\n\n\n\n\n\n\n\n\nReturns: (\nbool\n) True when the component begins generating the first track for the current segment; false otherwise.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nbool SampleComponent::ProcessFrame(const cv::Mat &frame, int frame_number) {\n // Look for detections. Generate tracks and store them until the end of the segment.\n if (started_first_track_in_current_segment) {\n return true;\n } else {\n return false;\n }\n}\n\n\n\nEndSegment()\n\n\nIndicate the end of the current video segment. 
This will always be called after \nBeginSegment()\n. Generally, \nProcessFrame()\n will be called one or more times before this function, depending on the number of frames in the segment and the number of frames actually read from the stream.\n\n\nNote that the next time \nBeginSegment()\n is called, this component should start generating new tracks. Each time \nEndSegment()\n is called, it should return only the most recent track data for that segment. Tracks should not be carried over between segments. Do not append new detections to a preexisting track from the previous segment and return that cumulative track when this function is called.\n\n\n\n\nFunction Definition:\n\n\n\n\nvector EndSegment()\n\n\n\n\n\n\n\nParameters: none\n\n\n\n\n\n\nReturns: (\nvector\n) The \nMPFVideoTrack\n data for each detected object.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nvector SampleComponent::EndSegment() {\n // Perform any necessary cleanup before processing the next segment.\n // Return the collection of tracks generated for this segment only.\n}\n\n\n\nDetection Job Data Structures\n\n\nThe following data structures contain details about a specific job, and a video segment (work unit) associated with that job:\n\n\n\n\nMPFStreamingVideoJob\n\n\nVideoSegmentInfo\n\n\n\n\nThe following data structures define detection results:\n\n\n\n\nMPFImageLocation\n\n\nMPFVideoTrack\n\n\n\n\nMPFStreamingVideoJob\n\n\nStructure containing information about a job to be performed on a video stream.\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFStreamingVideoJob(\n const string &job_name,\n const string &run_directory,\n const Properties &job_properties,\n const Properties &media_properties)\n}\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njob_name\n\n\nconst string &\n\n\nA specific name given to the job by the OpenMPF framework. This value may be used, for example, for logging and debugging purposes.\n\n\n\n\n\n\nrun_directory \n\n\nconst string &\n\n\nContains the full path of the parent folder above where the component is installed. This parent folder is also known as the plugin folder.\n\n\n\n\n\n\njob_properties \n\n\nconst Properties &\n\n\nContains a map of \n\n which represents the property name and the property value. The key corresponds to the property name specified in the component descriptor file described in the \nComponent Descriptor Reference\n. Values are determined when creating a pipeline or when submitting a job. \n Note: The job_properties map may not contain the full set of job properties. For properties not contained in the map, the component must use a default value.\n\n\n\n\n\n\nmedia_properties \n\n\nconst Properties &\n\n\nContains a map of \n\n of metadata about the media associated with the job. The entries in the map vary depending on the type of media. Refer to the type-specific job structures below.\n\n\n\n\n\n\n\n\nVideoSegmentInfo\n\n\nStructure containing information about a segment of a video stream to be processed. 
A segment is a subset of contiguous video frames.\n\n\n\n\nConstructor(s):\n\n\n\n\nVideoSegmentInfo(\n int segment_number,\n int start_frame,\n int end_frame,\n int frame_width,\n int frame_height\n}\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nsegment_number\n\n\nint\n\n\nA unique segment number (0-based index).\n\n\n\n\n\n\nstart_frame\n\n\nint\n\n\nThe frame number (0-based index) corresponding to the first frame in this segment.\n\n\n\n\n\n\nend_frame\n\n\nint\n\n\nThe frame number (0-based index) corresponding to the last frame in this segment.\n\n\n\n\n\n\nframe_width\n\n\nint\n\n\nThe width of each frame in this segment.\n\n\n\n\n\n\nframe_height\n\n\nint\n\n\nThe height of each frame in this segment.\n\n\n\n\n\n\n\n\nDetection Job Result Classes\n\n\nMPFImageLocation\n\n\nStructure used to store the location of detected objects in a single video frame (image).\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFImageLocation()\nMPFImageLocation(\n int x_left_upper,\n int y_left_upper,\n int width,\n int height,\n float confidence = -1,\n const Properties &detection_properties = {})\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nx_left_upper\n\n\nint\n\n\nUpper left X coordinate of the detected object.\n\n\n\n\n\n\ny_left_upper\n\n\nint\n\n\nUpper left Y coordinate of the detected object.\n\n\n\n\n\n\nwidth\n\n\nint\n\n\nThe width of the detected object.\n\n\n\n\n\n\nheight\n\n\nint\n\n\nThe height of the detected object.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetection_properties\n\n\nProperties &\n\n\nOptional additional information about the detected object. There is no restriction on the keys or the number of entries that can be added to the detection_properties map. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\n\n\nExample:\n\n\n\n\nA component that performs generic object classification can add an entry to \ndetection_properties\n where the key is \nCLASSIFICATION\n and the value is the type of object detected.\n\n\nMPFImageLocation detection;\ndetection.x_left_upper = 0;\ndetection.y_left_upper = 0;\ndetection.width = 100;\ndetection.height = 100;\ndetection.confidence = 1.0;\ndetection.detection_properties[\"CLASSIFICATION\"] = \"backpack\";\n\n\n\nMPFVideoTrack\n\n\nStructure used to store the location of detected objects in a video file.\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFVideoTrack()\nMPFVideoTrack(\n int start_frame,\n int stop_frame,\n float confidence = -1,\n map frame_locations,\n const Properties &detection_properties = {})\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nstart_frame\n\n\nint\n\n\nThe first frame number (0-based index) that contained the detected object.\n\n\n\n\n\n\nstop_frame\n\n\nint\n\n\nThe last frame number (0-based index) that contained the detected object.\n\n\n\n\n\n\nframe_locations\n\n\nmap\n\n\nA map of individual detections. The key for each map entry is the frame number where the detection was generated, and the value is a \nMPFImageLocation\n calculated as if that frame was a still image. 
Note that a key-value pair is \nnot\n required for every frame between the track start frame and track stop frame.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetection_properties\n\n\nProperties &\n\n\nOptional additional information about the detected object. There is no restriction on the keys or the number of entries that can be added to the detection_properties map. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nNOTE:\n Currently, \nMPFVideoTrack.detection_properties\n do not show up in the JSON output object, nor are they used by the WFM in any way.\n\n\n\n\nA component that detects text can add an entry to \ndetection_properties\n where the key is \nTRANSCRIPT\n and the value is a string representing the text found in the video segment.\n\n\nMPFVideoTrack track;\ntrack.start_frame = 0;\ntrack.stop_frame = 5;\ntrack.confidence = 1.0;\ntrack.frame_locations = frame_locations;\ntrack.detection_properties[\"TRANSCRIPT\"] = \"RE5ULTS FR0M A TEXT DETECTER\";\n\n\n\nC++ Component Build Environment\n\n\nA C++ component library must be built for the same C++ compiler and Linux\nversion that is used by the OpenMPF Component Executable. This is to ensure\ncompatibility between the executable and the library functions at the\nApplication Binary Interface (ABI) level. At this writing, OpenMPF runs on\nUbuntu 20.04 (kernel version 5.13.0-30), and the OpenMPF C++ Component\nExecutable is built with g++ (GCC) 9.3.0-17.\n\n\nComponents should be supplied as a tar file, which includes not only the component library, but any other libraries or files needed for execution. This includes all other non-standard libraries used by the component (aside from the standard Linux and C++ libraries), and any configuration or data files.\n\n\nComponent Development Best Practices\n\n\nThrow Exceptions\n\n\nUnlike the \nC++ Batch Component API\n, none of the C++ Streaming Component API functions return an \nMPFDetectionError\n. Instead, streaming components should throw an exception when a non-recoverable error occurs. The exception should be an instantiation or subclass of \nstd::exception\n and provide a descriptive error message that can be retrieved using \nwhat()\n. For example:\n\n\nbool SampleComponent::ProcessFrame(const cv::Mat &frame, int frame_number) {\n // Something bad happened\n throw std::runtime_error(\"Error: Cannot do X with value Y.\");\n}\n\n\n\nThe exception will be handled by the Component Executable. It will immediately invoke \nEndSegment()\n to retrieve the current tracks. Then the component process and streaming job will be terminated.\n\n\nSingle-threaded Operation\n\n\nImplementations are encouraged to operate in single-threaded mode. OpenMPF will parallelize components through multiple instantiations of the component, each running as a separate service.\n\n\nStateless Behavior\n\n\nOpenMPF components should be stateless in operation and give identical output for a provided input (i.e. when processing a segment with the same \nVideoSegmentInfo\n).\n\n\nGPU Support\n\n\nFor components that want to take advantage of NVIDIA GPU processors, please read the \nGPU Support Guide\n. 
Also ensure that your build environment has the NVIDIA CUDA Toolkit installed, as described in the \nBuild Environment Setup Guide\n.\n\n\nComponent Structure\n\n\nIt is recommended that C++ components are organized according to the following directory structure:\n\n\ncomponentName\n\u251c\u2500\u2500 config - Component-specific configuration files\n\u251c\u2500\u2500 descriptor\n\u2502 \u2514\u2500\u2500 descriptor.json\n\u2514\u2500\u2500 lib\n \u2514\u2500\u2500libComponentName.so - Compiled component library\n\n\n\nOnce built, components should be packaged into a .tar.gz containing the contents of the directory shown above.\n\n\nLogging\n\n\nIt is recommended to use \nApache log4cxx\n for\nOpenMPF Component logging. Components using log4cxx should not configure logging themselves.\nThe Component Executor will configure log4cxx globally. Components should call\n\nlog4cxx::Logger::getLogger(\"\")\n to get a reference to the logger. If you\nare using a different logging framework, you should make sure its behavior is similar to how\nthe Component Executor configures log4cxx as described below.\n\n\nThe following log LEVELs are supported: \nFATAL, ERROR, WARN, INFO, DEBUG, TRACE\n.\nThe \nLOG_LEVEL\n environment variable can be set to one of the log levels to change the logging\nverbosity. When \nLOG_LEVEL\n is absent, \nINFO\n is used.\n\n\nNote that multiple instances of the same component can log to the same file.\nAlso, logging content can span multiple lines.\n\n\nThe logger will write to both standard error and\n\n${MPF_LOG_PATH}/${THIS_MPF_NODE}/log/.log\n.\n\n\nEach log statement will take the form:\n\nDATE TIME LEVEL CONTENT\n\n\nFor example:\n\n2016-02-09 13:42:42,341 INFO - Starting sample-component: [ OK ]",
"title": "C++ Streaming Component API"
},
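The search-index entry above covers the whole streaming API surface: the factory macro, the segment lifecycle (BeginSegment, ProcessFrame, EndSegment), the MPFVideoTrack and MPFImageLocation structures, quality-threshold filtering, exception handling, and log4cxx logging. The sketch below pulls those pieces together into one minimal component skeleton. It is an illustrative sketch only, not the project's StreamingHelloWorld sample: the header name, the MPF::COMPONENT namespace, the assumption that Properties is a string-to-string map, the use of confidence as the quality value, and the DetectSomething() helper with its placeholder confidence are all assumptions made for the example; the type names, method signatures, struct members, and the EXPORT_MPF_STREAMING_COMPONENT macro come from the API text above.

// Illustrative sketch only. The header name, the MPF::COMPONENT namespace, and the
// DetectSomething() helper are assumptions; signatures follow the API text above.
#include <algorithm>
#include <stdexcept>
#include <string>
#include <utility>
#include <vector>

#include <log4cxx/logger.h>
#include <opencv2/core.hpp>

#include "MPFStreamingDetectionComponent.h"   // assumed header providing the abstract base class

using namespace MPF::COMPONENT;               // assumed namespace, mirroring the batch API examples

class SampleStreamingComponent : public MPFStreamingDetectionComponent {
public:
    explicit SampleStreamingComponent(const MPFStreamingVideoJob &job)
            : MPFStreamingDetectionComponent(job)
            , logger_(log4cxx::Logger::getLogger("SampleStreamingComponent"))
            , job_name_(job.job_name) {
        // job_properties may not contain every property, so fall back to a default of 0.
        auto prop = job.job_properties.find("QUALITY_SELECTION_THRESHOLD");
        if (prop != job.job_properties.end()) {
            quality_threshold_ = std::stof(prop->second);
        }
        LOG4CXX_INFO(logger_, "[" << job_name_ << "] Initialized SampleStreamingComponent.");
    }

    void BeginSegment(const VideoSegmentInfo &segment_info) override {
        tracks_.clear();   // tracks are never carried over between segments
    }

    bool ProcessFrame(const cv::Mat &frame, int frame_number) override {
        if (frame.empty()) {
            // Non-recoverable errors are reported by throwing a std::exception subclass.
            throw std::runtime_error("Error: received an empty frame.");
        }
        float confidence = DetectSomething(frame);   // hypothetical detector
        if (confidence < quality_threshold_) {
            return false;   // below-threshold detections would be discarded by the executable anyway
        }

        bool starting_first_track = tracks_.empty();
        if (starting_first_track) {
            tracks_.emplace_back(frame_number, frame_number, confidence);
        }
        MPFVideoTrack &track = tracks_.back();
        track.stop_frame = frame_number;
        track.confidence = std::max(track.confidence, confidence);
        track.frame_locations[frame_number] =
                MPFImageLocation(0, 0, frame.cols, frame.rows, confidence);

        // True only when the first track of this segment begins; the Component
        // Executable ignores the return value for the rest of the segment.
        return starting_first_track;
    }

    std::vector<MPFVideoTrack> EndSegment() override {
        // Return only this segment's tracks and reset local state.
        std::vector<MPFVideoTrack> finished = std::move(tracks_);
        tracks_.clear();
        return finished;
    }

private:
    float DetectSomething(const cv::Mat &frame) {
        return 0.75;   // placeholder confidence standing in for real detection logic
    }

    log4cxx::LoggerPtr logger_;
    std::string job_name_;
    float quality_threshold_ = 0;
    std::vector<MPFVideoTrack> tracks_;
};

// Note: do not put the class name in quotes.
EXPORT_MPF_STREAMING_COMPONENT(SampleStreamingComponent);

A real component would replace DetectSomething() with its own detection logic and likely manage multiple concurrent tracks; the point of the sketch is only the shape of the segment lifecycle, the threshold handling, and the per-segment track bookkeeping described above.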
{
@@ -1542,7 +1562,7 @@
},
{
"location": "/CPP-Streaming-Component-API/index.html#processframemat",
- "text": "Process a single video frame for the current segment. Must return true when the component begins generating the first track for the current segment. After it returns true, the Component Executable will ignore the return value until the component begins processing the next segment. If the job_properties map contained in the MPFStreamingVideoJob struct passed to the component constructor contains a CONFIDENCE_THRESHOLD entry, then this function should only return true for a detection with a confidence value that meets or exceeds that threshold. After the Component Executable invokes EndSegment() to retrieve the segment tracks, it will discard detections that are below the threshold. If all the detections in a track are below the threshold, then the entire track will be discarded. Note that this function may not be invoked for every frame in the current segment. For example, if FRAME_INTERVAL = 2, then this function will only be invoked for every other frame since those are the only ones that need to be processed. Also, it may not be invoked for the first nor last frame in the segment. For example, if FRAME_INTERVAL = 3 and the segment size is 10, then it will be invoked for frames {0, 3, 6, 9} for the first segment, and frames {12, 15, 18} for the second segment. Function Definition: bool ProcessFrame(const cv::Mat &frame, int frame_number) Parameters: Parameter Data Type Description frame const cv::Mat & OpenCV class containing frame data. See cv::Mat frame_number int A unique frame number (0-based index). Guaranteed to be greater than the frame number passed to the last invocation of this function. Returns: ( bool ) True when the component begins generating the first track for the current segment; false otherwise. Example: bool SampleComponent::ProcessFrame(const cv::Mat &frame, int frame_number) {\n // Look for detections. Generate tracks and store them until the end of the segment.\n if (started_first_track_in_current_segment) {\n return true;\n } else {\n return false;\n }\n}",
+ "text": "Process a single video frame for the current segment. Must return true when the component begins generating the first track for the current segment. After it returns true, the Component Executable will ignore the return value until the component begins processing the next segment. If the job_properties map contained in the MPFStreamingVideoJob struct passed to the component constructor contains a QUALITY_SELECTION_THRESHOLD entry, then this function should only return true for a detection with a quality value that meets or exceeds that threshold. Refer to the Quality Selection Guide . After the Component Executable invokes EndSegment() to retrieve the segment tracks, it will discard detections that are below the threshold. If all the detections in a track are below the threshold, then the entire track will be discarded. Note that this function may not be invoked for every frame in the current segment. For example, if FRAME_INTERVAL = 2 , then this function will only be invoked for every other frame since those are the only ones that need to be processed. Also, it may not be invoked for the first nor last frame in the segment. For example, if FRAME_INTERVAL = 3 and the segment size is 10, then it will be invoked for frames {0, 3, 6, 9} for the first segment, and frames {12, 15, 18} for the second segment. Function Definition: bool ProcessFrame(const cv::Mat &frame, int frame_number) Parameters: Parameter Data Type Description frame const cv::Mat & OpenCV class containing frame data. See cv::Mat frame_number int A unique frame number (0-based index). Guaranteed to be greater than the frame number passed to the last invocation of this function. Returns: ( bool ) True when the component begins generating the first track for the current segment; false otherwise. Example: bool SampleComponent::ProcessFrame(const cv::Mat &frame, int frame_number) {\n // Look for detections. Generate tracks and store them until the end of the segment.\n if (started_first_track_in_current_segment) {\n return true;\n } else {\n return false;\n }\n}",
"title": "ProcessFrame(Mat ...)"
},
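The FRAME_INTERVAL example in the ProcessFrame entry above (interval 3, segment size 10, frames {0, 3, 6, 9} then {12, 15, 18}) follows from sampling frames globally and then slicing the sampled frames into segments. The few lines below simply reproduce that arithmetic; they are a standalone illustration of the documented example, not Component Executable code.

// Reproduces the documented sampling example: FRAME_INTERVAL = 3, segment size 10.
#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    const int frame_interval = 3;
    const int segment_size = 10;
    const int total_frames = 20;   // two segments' worth, enough for the example

    std::vector<std::vector<int>> segments(total_frames / segment_size);
    for (int frame = 0; frame < total_frames; frame += frame_interval) {
        segments[frame / segment_size].push_back(frame);
    }
    for (std::size_t i = 0; i < segments.size(); ++i) {
        std::cout << "segment " << i << ":";
        for (int frame : segments[i]) {
            std::cout << ' ' << frame;
        }
        std::cout << '\n';
    }
    // Prints:
    //   segment 0: 0 3 6 9
    //   segment 1: 12 15 18
    // Neither the first frame of segment 1 (frame 10) nor its last frame (frame 19) is processed.
}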
{
diff --git a/docs/site/sitemap.xml b/docs/site/sitemap.xml
index 440043f913f1..1cc95f98739c 100644
--- a/docs/site/sitemap.xml
+++ b/docs/site/sitemap.xml
@@ -2,147 +2,152 @@
 [sitemap.xml hunk: each existing <url> entry, from /index.html through /CPP-Streaming-Component-API/index.html, has its <lastmod> updated from 2024-03-19 to 2024-03-25 with an unchanged daily <changefreq>; a new <url> entry for /Quality-Selection-Guide/index.html is added with lastmod 2024-03-25 and changefreq daily.]
\ No newline at end of file