diff --git a/_config.yml b/_config.yml index c01e6c36add8..5ef65db9b686 100644 --- a/_config.yml +++ b/_config.yml @@ -27,6 +27,7 @@ navbar-links: Home: "" Docs: - Overview: "docs/site/index.html" + - Release Notes: "docs/site/Release-Notes/index.html" - Install: "docs/site/Install-Guide/index.html" - API: "docs/site/Component-API-Overview/index.html" Open Source: diff --git a/docs/docs/Release-Notes.md b/docs/docs/Release-Notes.md index faf870c50054..9630fa6918a0 100644 --- a/docs/docs/Release-Notes.md +++ b/docs/docs/Release-Notes.md @@ -1,12 +1,149 @@ > **NOTICE:** This software (or technical data) was produced for the U.S. Government under contract, and is subject to the Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2020 The MITRE Corporation. All Rights Reserved. - OcvDnnDetection (vehicle color) +``` + +- For example: + +``` + "detectionProperties": { + "CLASSIFICATION": "car", + "CLASSIFICATION CONFIDENCE LIST": "0.397336", + "CLASSIFICATION LIST": "car", + "COLOR": "blue", + "COLOR CONFIDENCE LIST": "0.93507; 0.055744", + "COLOR LIST": "blue; gray" + } +``` + +- The OcvDnnDetection component now supports the following properties: + - `CLASSIFICATION_TYPE`: Set this value to change the `CLASSIFICATION*` part of each output property name to something else. For example, setting it to `COLOR` will generate `COLOR`, `COLOR LIST`, and `COLOR CONFIDENCE LIST`. When handling feed-forward detections, the pre-existing `CLASSIFICATION*` properties will be carried over and the `COLOR*` properties will be added to the detection. + - `FEED_FORWARD_WHITELIST_FILE`: When `FEED_FORWARD_TYPE` is provided and not set to `NONE`, only feed-forward detections with class names contained in the specified file will be processed. For example, a file with only "car" in it will result in performing the exclude behavior (below) for all feed-forward detections that do not have a `CLASSIFICATION` of "car". + - `FEED_FORWARD_EXCLUDE_BEHAVIOR`: Specifies what to do when excluding detections not specified in the `FEED_FORWARD_WHITELIST_FILE`. Acceptable values are: + - `PASS_THROUGH`: Return the excluded detections, without modification, along with any annotated detections. + - `DROP`: Don't return the excluded detections. Only return annotated detections. + + +
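A rough sketch of how these properties might be combined in a job request is shown below. It uses the `algorithmProperties` job-request overload described in the REST API; the algorithm name `DNNCV` and the whitelist file path are illustrative placeholders, not values taken from this release.

```
{
  "algorithmProperties": {
    "DNNCV": {
      "CLASSIFICATION_TYPE": "COLOR",
      "FEED_FORWARD_TYPE": "REGION",
      "FEED_FORWARD_WHITELIST_FILE": "/path/to/vehicle-whitelist.txt",
      "FEED_FORWARD_EXCLUDE_BEHAVIOR": "PASS_THROUGH"
    }
  }
}
```

With a whitelist file containing only the line "car", feed-forward detections whose `CLASSIFICATION` is not "car" would be passed through unmodified instead of being annotated with `COLOR*` properties.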

Updates

+ +- Make the interop package work with Java 8 to better support external job producers and consumers. + +# OpenMPF 5.0.5: August 2020 + +

Updates

+ +- Configure Camel not to auto-acknowledge messages. Users can now see the number of pending messages in the ActiveMQ management console for queues consumed by the Workflow Manager. +- Improve Tesseract OSD fallback behavior. This prevents selecting the OSD rotation from the fallback pass without also selecting the OSD script from that pass. + +# OpenMPF 5.0.4: August 2020 + +

Updates

+ +- Retry job callbacks when they fail. The Workflow Manager now supports the `http.callback.timeout.ms` and `http.callback.retries` system properties. +- Drop "duplicate paged in from cursor" DLQ messages. + +# OpenMPF 5.0.3: July 2020 + +
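As a point of reference for the 5.0.4 callback change above, the two new system properties are shown here as a key-value map with illustrative values (not defaults taken from this document); in a deployment they would be set like any other Workflow Manager system property, for example in `mpf-custom.properties`.

```
{
  "http.callback.timeout.ms": "60000",
  "http.callback.retries": "10"
}
```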

Updates

+ +- Update ActiveMQ to 5.16.0. + +# OpenMPF 5.0.2: July 2020 + +

Updates

-- TODO: This new component detects objects in images and videos by making use of an [NVIDIA TensorRT Inference Server](https://docs.nvidia.com/deeplearning/sdk/tensorrt-inference-server-guide/docs/) (TRTIS), and calculates features that can later be used by other systems to recognize the same object in other media. We provide support for running the server as a separate service during a Docker deployment, but an external server instance can be used instead. By default, the ip_irv2_coco model is supported and will optionally classify detected objects using [COCO labels](https://github.com/openmpf/openmpf-components/blob/master/cpp/trtisdetection/plugin-files/models/ip_irv2_coco.labels). Additionally, features can be generated for whole frames, automatically-detected object regions, and user-specified regions. Refer to the [README](https://github.com/openmpf/openmpf-components/blob/master/cpp/trtisdetection/README.md). ---> +- Disable video segmentation for ACS Speech Detection to prevent issues when generating speaker IDs. # OpenMPF 5.0.1: July 2020 @@ -121,7 +258,7 @@ - `[GET] /rest/actions`, `[GET] /rest/tasks`, `[GET] /rest/pipelines` - `[DELETE] /rest/actions`, `[DELETE] /rest/tasks`, `[DELETE] /rest/pipelines` - `[POST] /rest/actions` , `[POST] /rest/tasks`, `[POST] /rest/pipelines` -- All of the endpoints above are new with the exception of `[GET] /rest/pipelines`. The endpoint has changed since the last version of OpenMPF. Some fields in the response JSON have been removed and renamed. Also, it now returns a collection of tasks for each pipeline. Refer to the REST API. +- All of the endpoints above are new with the exception of `[GET] /rest/pipelines`. The endpoint has changed since the last version of OpenMPF. Some fields in the response JSON have been removed and renamed. Also, it now returns a collection of tasks for each pipeline. Refer to the REST API. - `[GET] /rest/algorithms` can be used to get information about algorithms. Note that algorithms are tied to registered components, so to remove an algorithm you must unregister the associated component. To add an algorithm, start the associated component's Docker container so it self-registers with the Workflow Manager.
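To make the `[GET] /rest/pipelines` change concrete, a sketch of the shape implied for a single pipeline element in the response is shown below; the field names and task names are illustrative assumptions, so refer to the REST API documentation for the actual schema.

```
{
  "name": "OCV FACE DETECTION (WITH MOG MOTION PREPROCESSOR) PIPELINE",
  "description": "Runs motion preprocessing followed by OpenCV face detection.",
  "tasks": [
    "MOG MOTION DETECTION PREPROCESSOR TASK",
    "OCV FACE DETECTION TASK"
  ]
}
```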

Incomplete Actions, Tasks, and Pipelines

@@ -230,7 +367,7 @@

Updates

-- Now silently discarding ActiveMQ DLQ "Suppressing duplicate delivery on connection" messages in addition to "duplicate from store" messages. +- Now silently discarding ActiveMQ DLQ "Suppressing duplicate delivery on connection" messages in addition to "duplicate from store" messages. # OpenMPF 4.1.5: March 2020 @@ -380,7 +517,7 @@ within a Docker container. This isolates the build and execution environment fro

Late Additions: December 2019

-- Transitioned from using a mySQL persistent database to PostgreSQL to support users that use an external PostgreSQL database in the cloud. +- Transitioned from using a mySQL persistent database to PostgreSQL to support users that use an external PostgreSQL database in the cloud. - Updated the EAST component to support a `TEMPORARY_PADDING` and `FINAL_PADDING` property. The first property determines how much padding is added to detections during the non-maximum suppression or merging step. This padding is effectively removed from the final detections. The second property is used to control the final amount of padding on the output regions. Refer to the [README](https://github.com/openmpf/openmpf-components/blob/master/python/EastTextDetection/README.md#properties). # OpenMPF 4.0.0: February 2019 diff --git a/docs/docs/html/REST-API.html b/docs/docs/html/REST-API.html index a35f15a9a8ad..a0cf578e0b5b 100644 --- a/docs/docs/html/REST-API.html +++ b/docs/docs/html/REST-API.html @@ -145,7 +145,7 @@

OpenMPF Workflow Manager REST API

- Version 5.0.0 (build 20200616)
+ Version 5.1.0 (build 20200804)


NOTICE

@@ -158,7 +158,7 @@

Version 5.0.0 (build 20200616)


- +

Introduction

@@ -652,6 +652,18 @@

1.4. JobCreationRequest

A valid (properly encoded) URI to a single media source for the job. Yes + + metadata + object + A map of string key-value pairs that may be used to set the metadata for this + media. When sufficient metadata is provided, media inspection will be skipped. To skip inspection, the + `MEDIA_HASH` and `MIME_TYPE` keys must be set to the media's respective hash and media type. Audio files + will also require the metadata value for `DURATION`. Image files will also require the metadata values + for `FRAME_WIDTH` and `FRAME_HEIGHT`. Video files will also require the metadata values for + `FRAME_WIDTH`, `FRAME_HEIGHT`, `FRAME_COUNT`, `FPS`, and `DURATION`. + + No + properties object @@ -1891,6 +1903,12 @@

2. Paths

URIs must be properly encoded.

+ Within media, an optional metadata object containing String key-value pairs can override media + inspection once the required metadata information is provided for audio, image, generic, and + video jobs. For media metadata, note that optional parameters like `ROTATION` and + `HORIZONTAL_FLIP` can also be provided. +
+
The body of a POST job creation request will be similar to this example which uses an OCV FACE DETECTION (WITH MOG MOTION PREPROCESSOR) PIPELINE. Some optional parameters are omitted. Note that this example makes use of overloading by jobProperties, algorithmProperties, and mediaProperties:
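The full example body is not reproduced here, but a minimal sketch of a media entry that supplies the new metadata object for a video follows. The URI, hash, and numeric values are placeholders, and the sketch assumes the standard JobCreationRequest media entry layout; the metadata keys are those required by the table above.

```
{
  "media": [
    {
      "mediaUri": "http://example.com/videos/example.mp4",
      "metadata": {
        "MEDIA_HASH": "<sha256-hash-of-the-file>",
        "MIME_TYPE": "video/mp4",
        "FRAME_WIDTH": "1280",
        "FRAME_HEIGHT": "720",
        "FRAME_COUNT": "900",
        "FPS": "30",
        "DURATION": "30000"
      }
    }
  ]
}
```

When all of the required keys are present, media inspection is skipped; optional keys such as `ROTATION` and `HORIZONTAL_FLIP` may be provided as well.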
diff --git a/docs/docs/index.md b/docs/docs/index.md
index 95c8a631dc0c..fdebdca2a244 100644
--- a/docs/docs/index.md
+++ b/docs/docs/index.md
@@ -23,10 +23,13 @@ A list of algorithms currently integrated into the OpenMPF as distributed proces
 | Detection| Scene | OpenCV
 | Detection| Classification | OpenCV DNN
 | Detection/Tracking | Classification | Darknet
+| Detection/Tracking | Classification/Features | TensorRT
 | Detection| Text Region | EAST
 | Detection| Text (OCR) | Apache Tika
 | Detection| Text (OCR) | Tesseract OCR
 | Detection| Text (OCR) | Azure Cognitive Services
+| Detection| Form Structure (with OCR) | Azure Cognitive Services
+| Detection| Keywords | Boost Regular Expressions
 | Detection| Image (from document) | Apache Tika
 
 The OpenMPF exposes data processing and job management web services via a User Interface (UI). These services allow users to upload media, create media processing jobs, determine the status of jobs, and retrieve the artifacts associated with completed jobs. The web services give application developers flexibility to use the OpenMPF in their preferred environment and programming language.
diff --git a/docs/site/Release-Notes/index.html b/docs/site/Release-Notes/index.html
index fa929da414a6..1833ef184eaf 100644
--- a/docs/site/Release-Notes/index.html
+++ b/docs/site/Release-Notes/index.html
@@ -64,6 +64,33 @@
         
             
 
- • All of the endpoints above are new with the exception of [GET] /rest/pipelines. The endpoint has changed since the last version of OpenMPF. Some fields in the response JSON have been removed and renamed. Also, it now returns a collection of tasks for each pipeline. Refer to the REST API.
+ • All of the endpoints above are new with the exception of [GET] /rest/pipelines. The endpoint has changed since the last version of OpenMPF. Some fields in the response JSON have been removed and renamed. Also, it now returns a collection of tasks for each pipeline. Refer to the REST API.
  • [GET] /rest/algorithms can be used to get information about algorithms. Note that algorithms are tied to registered components, so to remove an algorithm you must unregister the associated component. To add an algorithm, start the associated component's Docker container so it self-registers with the Workflow Manager.
  • Incomplete Actions, Tasks, and Pipelines

    @@ -590,7 +776,7 @@

    OpenMPF 4.1.6: April 2020

    Updates

    OpenMPF 4.1.5: March 2020

    Bug Fixes

    @@ -760,7 +946,7 @@

    Late Additions: November 2019

    Late Additions: December 2019

    OpenMPF 4.0.0: February 2019

    diff --git a/docs/site/html/REST-API.html b/docs/site/html/REST-API.html index a35f15a9a8ad..a0cf578e0b5b 100644 --- a/docs/site/html/REST-API.html +++ b/docs/site/html/REST-API.html @@ -145,7 +145,7 @@

    OpenMPF Workflow Manager REST API

- Version 5.0.0 (build 20200616)
+ Version 5.1.0 (build 20200804)


    NOTICE

    @@ -158,7 +158,7 @@

    Version 5.0.0 (build 20200616)


    - +

    Introduction

    @@ -652,6 +652,18 @@

    1.4. JobCreationRequest

A valid (properly encoded) URI to a single media source for the job. Yes + + metadata + object + A map of string key-value pairs that may be used to set the metadata for this + media. When sufficient metadata is provided, media inspection will be skipped. To skip inspection, the + `MEDIA_HASH` and `MIME_TYPE` keys must be set to the media's respective hash and media type. Audio files + will also require the metadata value for `DURATION`. Image files will also require the metadata values + for `FRAME_WIDTH` and `FRAME_HEIGHT`. Video files will also require the metadata values for + `FRAME_WIDTH`, `FRAME_HEIGHT`, `FRAME_COUNT`, `FPS`, and `DURATION`. + + No + properties object @@ -1891,6 +1903,12 @@

    2. Paths

    URIs must be properly encoded.

    + Within media, an optional metadata object containing String key-value pairs can override media + inspection once the required metadata information is provided for audio, image, generic, and + video jobs. For media metadata, note that optional parameters like `ROTATION` and + `HORIZONTAL_FLIP` can also be provided. +
    +
The body of a POST job creation request will be similar to this example which uses an OCV FACE DETECTION (WITH MOG MOTION PREPROCESSOR) PIPELINE. Some optional parameters are omitted. Note that this example makes use of overloading by jobProperties, algorithmProperties, and mediaProperties:
    diff --git a/docs/site/index.html b/docs/site/index.html
    index 53c277fcb242..8198537435c5 100644
    --- a/docs/site/index.html
    +++ b/docs/site/index.html
    @@ -322,6 +322,11 @@ 

    Overview

    Darknet +Detection/Tracking +Classification/Features +TensorRT + + Detection Text Region EAST @@ -343,6 +348,16 @@

    Overview

    Detection +Form Structure (with OCR) +Azure Cognitive Services + + +Detection +Keywords +Boost Regular Expressions + + +Detection Image (from document) Apache Tika @@ -398,5 +413,5 @@

    Overview

    diff --git a/docs/site/mkdocs/search_index.json b/docs/site/mkdocs/search_index.json index a51166bafafb..47cdd63d1545 100644 --- a/docs/site/mkdocs/search_index.json +++ b/docs/site/mkdocs/search_index.json @@ -2,19 +2,64 @@ "docs": [ { "location": "/index.html", - "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2020 The MITRE Corporation. All Rights Reserved.\n\n\n\n\nOverview\n\n\nThere are numerous video and image exploitation capabilities available today. The Open Media Processing Framework (OpenMPF) provides a framework for chaining, combining, or replacing individual components for the purpose of experimentation and comparison.\n\n\nOpenMPF is a non-proprietary, scalable framework that permits practitioners and researchers to construct video, imagery, and audio exploitation capabilities using the available third-party components. Using OpenMPF, one can extract targeted entities in large-scale data environments, such as face and object detection.\n\n\nFor those developing new exploitation capabilities, OpenMPF exposes a set of Application Program Interfaces (APIs) for extending media analytics functionality. The APIs allow integrators to introduce new algorithms capable of detecting new targeted entity types. For example, a backpack detection algorithm could be integrated into an OpenMPF instance. OpenMPF does not restrict the number of algorithms that can operate on a given media file, permitting researchers, practitioners, and developers to explore arbitrarily complex composites of exploitation algorithms.\n\n\nA list of algorithms currently integrated into the OpenMPF as distributed processing components is shown here:\n\n\n\n\n\n\n\n\nOperation\n\n\nObject Type\n\n\nFramework\n\n\n\n\n\n\n\n\n\n\nDetection/Tracking\n\n\nFace\n\n\nLBP-Based OpenCV\n\n\n\n\n\n\nDetection/Tracking\n\n\nFace\n\n\nDlib\n\n\n\n\n\n\nDetection/Tracking\n\n\nPerson\n\n\nHOG-Based OpenCV\n\n\n\n\n\n\nDetection/Tracking\n\n\nMotion\n\n\nMOG w/ STRUCK\n\n\n\n\n\n\nDetection/Tracking\n\n\nMotion\n\n\nSuBSENSE w/ STRUCK\n\n\n\n\n\n\nDetection/Tracking\n\n\nLicense Plate\n\n\nOpenALPR\n\n\n\n\n\n\nDetection\n\n\nSpeech\n\n\nSphinx\n\n\n\n\n\n\nDetection\n\n\nSpeech\n\n\nAzure Cognitive Services\n\n\n\n\n\n\nDetection\n\n\nScene\n\n\nOpenCV\n\n\n\n\n\n\nDetection\n\n\nClassification\n\n\nOpenCV DNN\n\n\n\n\n\n\nDetection/Tracking\n\n\nClassification\n\n\nDarknet\n\n\n\n\n\n\nDetection\n\n\nText Region\n\n\nEAST\n\n\n\n\n\n\nDetection\n\n\nText (OCR)\n\n\nApache Tika\n\n\n\n\n\n\nDetection\n\n\nText (OCR)\n\n\nTesseract OCR\n\n\n\n\n\n\nDetection\n\n\nText (OCR)\n\n\nAzure Cognitive Services\n\n\n\n\n\n\nDetection\n\n\nImage (from document)\n\n\nApache Tika\n\n\n\n\n\n\n\n\nThe OpenMPF exposes data processing and job management web services via a User Interface (UI). These services allow users to upload media, create media processing jobs, determine the status of jobs, and retrieve the artifacts associated with completed jobs. The web services give application developers flexibility to use the OpenMPF in their preferred environment and programming language.", + "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2020 The MITRE Corporation. 
All Rights Reserved.\n\n\n\n\nOverview\n\n\nThere are numerous video and image exploitation capabilities available today. The Open Media Processing Framework (OpenMPF) provides a framework for chaining, combining, or replacing individual components for the purpose of experimentation and comparison.\n\n\nOpenMPF is a non-proprietary, scalable framework that permits practitioners and researchers to construct video, imagery, and audio exploitation capabilities using the available third-party components. Using OpenMPF, one can extract targeted entities in large-scale data environments, such as face and object detection.\n\n\nFor those developing new exploitation capabilities, OpenMPF exposes a set of Application Program Interfaces (APIs) for extending media analytics functionality. The APIs allow integrators to introduce new algorithms capable of detecting new targeted entity types. For example, a backpack detection algorithm could be integrated into an OpenMPF instance. OpenMPF does not restrict the number of algorithms that can operate on a given media file, permitting researchers, practitioners, and developers to explore arbitrarily complex composites of exploitation algorithms.\n\n\nA list of algorithms currently integrated into the OpenMPF as distributed processing components is shown here:\n\n\n\n\n\n\n\n\nOperation\n\n\nObject Type\n\n\nFramework\n\n\n\n\n\n\n\n\n\n\nDetection/Tracking\n\n\nFace\n\n\nLBP-Based OpenCV\n\n\n\n\n\n\nDetection/Tracking\n\n\nFace\n\n\nDlib\n\n\n\n\n\n\nDetection/Tracking\n\n\nPerson\n\n\nHOG-Based OpenCV\n\n\n\n\n\n\nDetection/Tracking\n\n\nMotion\n\n\nMOG w/ STRUCK\n\n\n\n\n\n\nDetection/Tracking\n\n\nMotion\n\n\nSuBSENSE w/ STRUCK\n\n\n\n\n\n\nDetection/Tracking\n\n\nLicense Plate\n\n\nOpenALPR\n\n\n\n\n\n\nDetection\n\n\nSpeech\n\n\nSphinx\n\n\n\n\n\n\nDetection\n\n\nSpeech\n\n\nAzure Cognitive Services\n\n\n\n\n\n\nDetection\n\n\nScene\n\n\nOpenCV\n\n\n\n\n\n\nDetection\n\n\nClassification\n\n\nOpenCV DNN\n\n\n\n\n\n\nDetection/Tracking\n\n\nClassification\n\n\nDarknet\n\n\n\n\n\n\nDetection/Tracking\n\n\nClassification/Features\n\n\nTensorRT\n\n\n\n\n\n\nDetection\n\n\nText Region\n\n\nEAST\n\n\n\n\n\n\nDetection\n\n\nText (OCR)\n\n\nApache Tika\n\n\n\n\n\n\nDetection\n\n\nText (OCR)\n\n\nTesseract OCR\n\n\n\n\n\n\nDetection\n\n\nText (OCR)\n\n\nAzure Cognitive Services\n\n\n\n\n\n\nDetection\n\n\nForm Structure (with OCR)\n\n\nAzure Cognitive Services\n\n\n\n\n\n\nDetection\n\n\nKeywords\n\n\nBoost Regular Expressions\n\n\n\n\n\n\nDetection\n\n\nImage (from document)\n\n\nApache Tika\n\n\n\n\n\n\n\n\nThe OpenMPF exposes data processing and job management web services via a User Interface (UI). These services allow users to upload media, create media processing jobs, determine the status of jobs, and retrieve the artifacts associated with completed jobs. The web services give application developers flexibility to use the OpenMPF in their preferred environment and programming language.", "title": "Home" }, { "location": "/index.html#overview", - "text": "There are numerous video and image exploitation capabilities available today. The Open Media Processing Framework (OpenMPF) provides a framework for chaining, combining, or replacing individual components for the purpose of experimentation and comparison. OpenMPF is a non-proprietary, scalable framework that permits practitioners and researchers to construct video, imagery, and audio exploitation capabilities using the available third-party components. 
Using OpenMPF, one can extract targeted entities in large-scale data environments, such as face and object detection. For those developing new exploitation capabilities, OpenMPF exposes a set of Application Program Interfaces (APIs) for extending media analytics functionality. The APIs allow integrators to introduce new algorithms capable of detecting new targeted entity types. For example, a backpack detection algorithm could be integrated into an OpenMPF instance. OpenMPF does not restrict the number of algorithms that can operate on a given media file, permitting researchers, practitioners, and developers to explore arbitrarily complex composites of exploitation algorithms. A list of algorithms currently integrated into the OpenMPF as distributed processing components is shown here: Operation Object Type Framework Detection/Tracking Face LBP-Based OpenCV Detection/Tracking Face Dlib Detection/Tracking Person HOG-Based OpenCV Detection/Tracking Motion MOG w/ STRUCK Detection/Tracking Motion SuBSENSE w/ STRUCK Detection/Tracking License Plate OpenALPR Detection Speech Sphinx Detection Speech Azure Cognitive Services Detection Scene OpenCV Detection Classification OpenCV DNN Detection/Tracking Classification Darknet Detection Text Region EAST Detection Text (OCR) Apache Tika Detection Text (OCR) Tesseract OCR Detection Text (OCR) Azure Cognitive Services Detection Image (from document) Apache Tika The OpenMPF exposes data processing and job management web services via a User Interface (UI). These services allow users to upload media, create media processing jobs, determine the status of jobs, and retrieve the artifacts associated with completed jobs. The web services give application developers flexibility to use the OpenMPF in their preferred environment and programming language.", + "text": "There are numerous video and image exploitation capabilities available today. The Open Media Processing Framework (OpenMPF) provides a framework for chaining, combining, or replacing individual components for the purpose of experimentation and comparison. OpenMPF is a non-proprietary, scalable framework that permits practitioners and researchers to construct video, imagery, and audio exploitation capabilities using the available third-party components. Using OpenMPF, one can extract targeted entities in large-scale data environments, such as face and object detection. For those developing new exploitation capabilities, OpenMPF exposes a set of Application Program Interfaces (APIs) for extending media analytics functionality. The APIs allow integrators to introduce new algorithms capable of detecting new targeted entity types. For example, a backpack detection algorithm could be integrated into an OpenMPF instance. OpenMPF does not restrict the number of algorithms that can operate on a given media file, permitting researchers, practitioners, and developers to explore arbitrarily complex composites of exploitation algorithms. 
A list of algorithms currently integrated into the OpenMPF as distributed processing components is shown here: Operation Object Type Framework Detection/Tracking Face LBP-Based OpenCV Detection/Tracking Face Dlib Detection/Tracking Person HOG-Based OpenCV Detection/Tracking Motion MOG w/ STRUCK Detection/Tracking Motion SuBSENSE w/ STRUCK Detection/Tracking License Plate OpenALPR Detection Speech Sphinx Detection Speech Azure Cognitive Services Detection Scene OpenCV Detection Classification OpenCV DNN Detection/Tracking Classification Darknet Detection/Tracking Classification/Features TensorRT Detection Text Region EAST Detection Text (OCR) Apache Tika Detection Text (OCR) Tesseract OCR Detection Text (OCR) Azure Cognitive Services Detection Form Structure (with OCR) Azure Cognitive Services Detection Keywords Boost Regular Expressions Detection Image (from document) Apache Tika The OpenMPF exposes data processing and job management web services via a User Interface (UI). These services allow users to upload media, create media processing jobs, determine the status of jobs, and retrieve the artifacts associated with completed jobs. The web services give application developers flexibility to use the OpenMPF in their preferred environment and programming language.", "title": "Overview" }, { "location": "/Release-Notes/index.html", - "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2020 The MITRE Corporation. All Rights Reserved.\n\n\n\n\n\n\n\nOpenMPF 5.0.1: July 2020\n\n\nUpdates\n\n\n\n\n\nUpdated Tessseract component with \nMAX_PIXELS\n setting to prevent processing large images.\n\n\n\n\nOpenMPF 5.0.0: June 2020\n\n\nDocumentation\n\n\n\n\n\nUpdated the openmpf-docker repo \nREADME\n and \nSWARM\n guides to describe the new build process, which now includes automatically copying the openmpf repo source code into the openmpf-build image instead of using various bind mounts, and building all of the component base builder and executor images.\n\n\nUpdated the openmpf-docker repo \nREADME\n with the following sections:\n\n\nHow to \nUse Kibana for Log Viewing and Aggregation\n\n\nHow to \nRestrict Media Types that a Component Can Process\n\n\nHow to \nImport Root Certificates for Additional Certificate Authorities\n\n\n\n\n\n\nUpdated the \nCONTRIBUTING\n guide for Docker deployment with information on the new build process and component base builder and executor images.\n\n\nUpdated the \nInstall Guide\n with a pointer to the \"Quick Start\" section on DockerHub.\n\n\nUpdated the \nREST API\n with the new endpoints for getting, deleting, and creating actions, tasks, and pipelines, as well as a change to the \n[GET] /rest/info\n endpoint.\n\n\nUpdated the \nC++ Batch Component API\n to describe changes to the \nGetDetection()\n calls, which now return a collection of detections or tracks instead of an error code, and to describe improvements to exception handling.\n\n\nUpdated the \nC++ Batch Component API\n, \nPython Batch Component API\n, and \nJava Batch Component API\n with \nMIME_TYPE\n, \nFRAME_WIDTH\n, and \nFRAME_HEIGHT\n media properties.\n\n\nUpdated the \nPython Batch Component API\n with information on Python3 and the simplification of using a \ndict\n for some of the data members.\n\n\n\n\nJSON Output Object\n\n\n\n\n\nRenamed \nstages\n to \ntasks\n for clarity and consistency with the rest of the code.\n\n\nThe \nmedia\n element 
no longer contains a \nmessage\n field.\n\n\nEach \ndetectionProcessingError\n element now contains a \ncode\n field.\n\n\nErrors and warnings are now grouped by \nmediaId\n and summarized using a \ndetails\n element that contains a \nsource\n, \ncode\n, and \nmessage\n field. Refer to \nthis comment\n for an example of the JSON structure. Note that errors and warnings generated by the Workflow Manager do not have a \nmediaId\n.\n\n\nWhen an error or warning occurs in multiple frames of a video for a single piece of media it will be represented in one \ndetails\n element and the \nmessage\n will list the frame ranges.\n\n\n\n\n\n\n\n\nInteroperability Package\n\n\n\n\n\nRenamed \nJsonStage.java\n to \nJsonTask.java\n.\n\n\nRemoved \nJsonJobRequest.java\n.\n\n\nModified \nJsonDetectionProcessingError.java\n by removing the \nstartOffset\n and \nstopOffset\n fields and adding the following new fields: \nstartOffsetFrame\n, \nstopOffsetFrame\n, \nstartOffsetTime\n, \nstopOffsetTime\n, and \ncode\n.\n\n\nUpdated \nJsonMediaOutputObject.java\n by removing \nmessage\n field.\n\n\nAdded \nJsonMediaIssue.java\n and \nJsonIssueDetails.java\n.\n\n\n\n\nPersistent Database\n\n\n\n\n\nThe \ninput_object\n column in the \njob_request\n table has been renamed to \njob\n and the content now contains a serialized form of \nBatchJob.java\n instead of \nJsonJobRequest.java\n.\n\n\n\n\nC++ Batch Component API\n\n\n\n\n\nThe \nGetDetection()\n calls now return a collection instead of an error code:\n\n\nstd::vector\nMPFImageLocation\n GetDetections(const MPFImageJob \njob)\n\n\nstd::vector\nMPFVideoTrack\n GetDetections(const MPFVideoJob \njob)\n\n\nstd::vector\nMPFAudioTrack\n GetDetections(const MPFAudioJob \njob)\n\n\nstd::vector\nMPFGenericTrack\n GetDetections(const MPFGenericJob \njob)\n\n\n\n\n\n\nMPFDetectionException\n can now be constructed with a \nwhat\n parameter representing a descriptive error message:\n\n\nMPFDetectionException(MPFDetectionError error_code, const std::string \nwhat = \"\")\n\n\nMPFDetectionException(const std::string \nwhat)\n\n\n\n\n\n\n\n\nPython Batch Component API\n\n\n\n\n\nSimplified the \ndetection_properties\n and \nframe_locations\n data members to use a Python \ndict\n instead of a custom data type.\n\n\n\n\nFull Docker Conversion\n\n\n\n\n\nEach component is now encapsulated in its own Docker image which self-registers with the Workflow Manager at runtime. This deconflicts component dependencies, and allows for greater flexibility when deciding which components to deploy at runtime.\n\n\nThe Node Manager image has been removed. For Docker deployments, component services should be managed using Docker tools external to OpenMPF.\n\n\nIn Docker deployments, streaming job REST endpoints are disabled, the Nodes web page is no longer available, component tar.gz packages cannot be registered through the Component Registration web page, and the \nmpf\n command line script can now only be run on the Workflow Manager container to modify user settings. The preexisting features are now reserved for non-Docker deployments and development environments.\n\n\nThe OpenMPF Docker stack can optionally be deployed with \nKibana\n (which depends on Elasticsearch and Filebeat) for viewing log files. Refer to the openmpf-docker \nREADME\n.\n\n\n\n\nDocker Component Base Images\n\n\n\n\n\nA base builder image and executor image are provided for C++ (\nREADME\n), Python (\nREADME\n), and Java (\nREADME\n) component development. 
Component developers can also refer to the Dockerfile in the source code for each component as reference for how to make use of the base images.\n\n\n\n\nRestrict Media Types that a Component Can Process\n\n\n\n\n\nEach component service now supports an optional \nRESTRICT_MEDIA_TYPES\n Docker environment variable that specifies the types of media that service will process. For example, \nRESTRICT_MEDIA_TYPES: VIDEO,IMAGE\n will process both videos and images, while \nRESTRICT_MEDIA_TYPES: IMAGE\n will only process images. If not specified, the service will process all of the media types it natively supports. For example, this feature can be used to ensure that some services are always available to process images while others are processing long videos.\n\n\n\n\nImport Additional Root Certificates into the Workflow Manager\n\n\n\n\n\nAdditional root certificates can be imported into the Workflow Manager at runtime by adding an entry for \nMPF_CA_CERTS\n to the workflow-manager service's environment variables in \ndocker-compose.core.yml\n. \nMPF_CA_CERTS\n must contain a colon-delimited list of absolute file paths. Of note, a root certificate may be used to trust the identity of a remote object storage server.\n\n\n\n\nDockerHub\n\n\n\n\n\nPushed prebuilt OpenMPF Docker images to \nDockerHub\n. Refer to the \"Quick Start\" section of the OpenMPF Workflow Manager image \ndocumentation\n.\n\n\n\n\nVersion Updates\n\n\n\n\n\nUpdated from Oracle Java 8 to OpenJDK 11, which required updating to Tomcat 8.5.41. We now use \nCargo\n to run integration tests.\n\n\nUpdated OpenCV from 3.0.0 to 3.4.7 to update Deep Neural Networks (DNN) support.\n\n\nUpdated Python from 2.7 to 3.8.2.\n\n\n\n\nFFmpeg\n\n\n\n\n\nWe are no longer building separate audio and video encoders and decoders for FFmpeg. Instead, we are using the built-in decoders that come with FFmpeg by default. This simplifies the build process and redistribution via Docker images.\n\n\n\n\nArtifact Extraction\n\n\n\n\n\nThe \nARTIFACT_EXTRACTION_POLICY\n property can now be assigned a value of \nNONE\n, \nVISUAL_TYPES_ONLY\n, \nALL_TYPES\n, or \nALL_DETECTIONS\n.\n\n\nWith the \nVISUAL_TYPES_ONLY\n or \nALL_TYPES\n policy, artifacts will be extracted according to the \nARTIFACT_EXTRACTION_POLICY*\n properties. With the \nNONE\n and \nALL_DETECTIONS\n policies, those settings are ignored.\n\n\nNote that previously \nNONE\n, \nVISUAL_EXEMPLARS_ONLY\n, \nEXEMPLARS_ONLY\n, \nALL_VISUAL_DETECTIONS\n, and \nALL_DETECTIONS\n were supported.\n\n\n\n\n\n\nThe following \nARTIFACT_EXTRACTION_POLICY*\n properties are now supported:\n\n\nARTIFACT_EXTRACTION_POLICY_EXEMPLAR_FRAME_PLUS\n: Extract the exemplar frame from the track, plus this many frames before and after the exemplar.\n\n\nARTIFACT_EXTRACTION_POLICY_FIRST_FRAME\n: If true, extract the first frame from the track.\n\n\nARTIFACT_EXTRACTION_POLICY_MIDDLE_FRAME\n: If true, extract the frame with a detection that is closest to the middle frame from the track.\n\n\nARTIFACT_EXTRACTION_POLICY_LAST_FRAME\n: If true, extract the last frame from the track.\n\n\nARTIFACT_EXTRACTION_POLICY_TOP_CONFIDENCE_COUNT\n: Sort the detections in a track by confidence and then extract this many detections, starting with those which have the highest confidence.\n\n\nARTIFACT_EXTRACTION_POLICY_CROPPING\n: If true, an artifact will be extracted for each detection in each frame that is selected according to the other \nARTIFACT_EXTRACTION_POLICY*\n properties. 
The extracted artifact will be cropped to the width and height of the detection bounding box, and the artifact will be rotated according to the detection \nROTATION\n property. If false, the artifact extraction behavior is unchanged from the previous release: the entire frame will be extracted without any rotation.\n\n\n\n\n\n\nFor clarity, \nOUTPUT_EXEMPLARS_ONLY\n has been renamed to \nOUTPUT_ARTIFACTS_AND_EXEMPLARS_ONLY\n. Extracted artifacts will always be reported in the JSON output object.\n\n\nThe \nmpf.output.objects.exemplars.only\n system property has been renamed to \nmpf.output.objects.artifacts.and.exemplars.only\n. It works the same as before with the exception that if an artifact is extracted for a detection then that detection will always be represented in the JSON output object, whether it's an exemplar or not.\n\n\nThe \nmpf.output.objects.last.stage.only\n system property has been renamed to \nmpf.output.objects.last.task.only\n. It works the same as before with the exception that when set to true artifact extraction is skipped for all tasks but the last task.\n\n\n\n\nREST Endpoints\n\n\n\n\n\nModified \n[GET] /rest/info\n. Now returns output like \n{\"version\": \"4.1.0\", \"dockerEnabled\": true}\n.\n\n\nAdded the following REST endpoints for getting, removing, and creating actions, tasks, and pipelines. Refer to the \nREST API\n for more information:\n\n\n[GET] /rest/actions\n, \n[GET] /rest/tasks\n, \n[GET] /rest/pipelines\n\n\n[DELETE] /rest/actions\n, \n[DELETE] /rest/tasks\n, \n[DELETE] /rest/pipelines\n\n\n[POST] /rest/actions\n , \n[POST] /rest/tasks\n, \n[POST] /rest/pipelines\n\n\n\n\n\n\nAll of the endpoints above are new with the exception of \n[GET] /rest/pipelines\n. The endpoint has changed since the last version of OpenMPF. Some fields in the response JSON have been removed and renamed. Also, it now returns a collection of tasks for each pipelines. Refer to the REST API. \n\n\n[GET] /rest/algorithms\n can be used to get information about algorithms. Note that algorithms are tied to registered components, so to remove an algorithm you must unregister the associated component. To add an algorithm, start the associated component's Docker container so it self-registers with the Workflow Manager.\n\n\n\n\nIncomplete Actions, Tasks, and Pipelines\n\n\n\n\n\nThe previous version of OpenMPF would generate an error when attempting to register a component that included actions, tasks, or pipelines that depend on algorithms, actions, or tasks that are not yet registered with the Workflow Manager. This required components to be registered in a specific order. Also, when unregistering a component, it required the components which depend on it to be unregistered. These dependency checks are no longer enforced.\n\n\nIn general, the Workflow Manager now appropriately handles incomplete actions, tasks, and pipelines by checking if all of the elements are defined before executing a job, and then preserving that information in memory until the job is complete. This allows components to be registered and removed in an arbitrary order without affecting the state of other components, actions, tasks, or pipelines. This also allows actions and tasks to be removed using the new REST endpoints and then re-added at a later time while still preserving the elements that depend on them.\n\n\nNote that unregistering a component while a job is running will cause it to stall. 
Please ensure that no jobs are using a component before unregistering it.\n\n\n\n\nPython Arbitrary Rotation\n\n\n\n\n\nThe Python MPFVideoCapture and MPFImageReader tools now support \nROTATION\n values other than 0, 90, 180, and 270 degrees. Users can now specify a clockwise \nROTATION\n job property in the range [0, 360). Values outside that range will be normalized to that range. Floating point values are accepted. This is similar to the existing support for \nC++ arbitrary rotation\n.\n\n\n\n\nOpenCV Deep Neural Networks (DNN) Detection Component\n\n\n\n\n\nThis new component replaces the old CaffeDetection component. It supports the same GoogLeNet and Yahoo Not Suitable For Work (NSFW) models as the old component, but removes support for the Rezafuad vehicle color detection model in favor of a custom TensorFlow vehicle color detection model. In our tests, the new model has proven to be more generalizable and provide more accurate results on never-before-seen test data. Refer to the \nREADME\n.\n\n\n\n\nAzure Cognitive Services (ACS) Speech Detection Component\n\n\n\n\n\nThis new component utilizes the \nAzure Cognitive Services Batch Transcription REST endpoint\n to transcribe speech from audio and video files. Refer to the \nREADME\n.\n\n\n\n\nTesseract OCR Text Detection Component\n\n\n\n\n\nText tagging has been simplified to only support regular expression searches. Whole keyword searches are a subset of regular expression searches, and are therefore still supported. Also, the \ntext-tags.json\n file format has been updated to allow for specifying case-sensitive regular expression searches.\n\n\nAdditionally, the \nTRIGGER_WORDS\n and \nTRIGGER_WORDS_OFFSET\n detection properties are now supported, which list the OCR'd words that resulted in adding a \nTAG\n to the detection, and the character offset of those words within the OCR'd \nTEXT\n, respectively.\n\n\nKey changes to tagging output and \ntext-tags.json\n format are outlined below. Refer to the \nREADME\n for more information:\n\n\nRegex patterns should now be entered in the format \n{\"pattern\": \"regex_pattern\"}\n. Users can add and toggle the \n\"caseSensitive\"\n regex flag for each pattern.\n\n\nFor example: \n{\"pattern\": \"(\\\\b)bus(\\\\b)\", \"caseSensitive\": true}\n enables case-sensitive regex pattern matching.\n\n\nBy default, each regex pattern, including those in the legacy format, will be case-insensitive.\n\n\n\n\n\n\nAs part of the text tagging update, the \nTAGS\n outputs are now separated by semicolons \n;\n rather than commas \n,\n to be consistent with the delimiters for \nTRIGGER_WORDS\n and \nTRIGGER_WORDS_OFFSET\n output patterns.\n\n\nBecause semicolons can be part of the trigger word itself, those semicolons will be encapsulated in brackets.\n\n\nFor example, \ndetected trigger with a ;\n in the OCR'd \nTEXT\n is reported as \nTRIGGER_WORDS=detected trigger with a [;]; some other trigger\n.\n\n\n\n\n\n\nCommas are now used to group each set of \nTRIGGER_WORDS_OFFSET\n with its respective \nTRIGGER_WORDS\n output. 
Both \nTAGS\n and \nTRIGGER_WORDS\n are separated by semicolons only.\n\n\nFor example: \nTRIGGER_WORDS=trigger1; trigger2\n, \nTRIGGER_WORDS_OFFSET=0-5, 6-10; 12-15\n, means that \ntrigger1\n occurs twice in the text at the index ranges 0-5 and 6-10, and \ntrigger2\n occurs at index range 12-15.\n\n\n\n\n\n\n\n\n\n\nRegex tagging now follows the C++ ECMAS format (see \nexamples here\n) after resolving JSON string conversion for regex tags.\n\n\nAs a result the regex patterns \n\\b\n and \n\\p\n in the text tagging file must now be written as \n\\\\b\n and \n\\\\p\n, respectively, to match the format of other regex character patterns (ex. \n\\\\d\n, \n\\\\w\n, \n\\\\s\n, etc.).\n\n\n\n\n\n\nThe \nMAX_PARALLEL_SCRIPT_THREADS\n and \nMAX_PARALLEL_PAGE_THREADS\n properties are now supported. When processing images, the first property is used to determine how many threads to run in parallel. Each thread performs OCR using a different language or script model. When processing PDFs, the second property is used to determine how many threads to run in parallel. Each thread performs OCR on a different page of the PDF.\n\n\nThe \nENABLE_OSD_FALLBACK\n property is now supported. If enabled, an additional round of OSD is performed when the first round fails to generate script predictions that are above the OSD score and confidence thresholds. In the second pass, the component will run OSD on multiple copies of the input text image to get an improved prediction score and \nOSD_FALLBACK_OCCURRED\n detection property will be set to true.\n\n\nIf any OSD-detected models are missing, the new \nMISSING_LANGUAGE_MODELS\n detection property will list the missing models.\n\n\n\n\nTika Text Detection Component\n\n\n\n\n\nThe Tika text detection component now supports text tagging in the same way as the Tesseract component. Refer to the \nREADME\n.\n\n\n\n\nOther Improvements\n\n\n\n\n\nSimplified component \ndescriptor.json\n files by moving the specification of common properties, such as \nCONFIDENCE_THRESHOLD\n, \nFRAME_INTERVAL\n, \nMIN_SEGMENT_LENGTH\n, etc., to a single \nworkflow-properties.json\n file. Now when the Workflow Manager is updated to support new features, the component \ndescriptor.json\n file will not need to be updated.\n\n\nUpdated the Sphinx component to return \nTRANSCRIPT\n instead of \nTRANSCRIPTION\n, which is grammatically correct.\n\n\nWhitespace is now trimmed from property names when jobs are submitted via the REST API.\n\n\nThe Darknet Docker image now includes the YOLOv3 model weights.\n\n\nThe C++ and Python ModelsIniParser now allows users to specify optional fields.\n\n\nWhen a job completion callback fails, but otherwise the job is successful, the final state of the job will be \nCOMPLETE_WITH_WARNINGS\n.\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#772\n] Can now create a custom pipeline with long action names using the Pipelines 2 UI.\n\n\n[\n#812\n] Now properly setting the start and stop index for elements in the \ndetectionProcessingErrors\n collection in the JSON output object. 
Errors reported for each job segment will now appear in the collection.\n\n\n[\n#941\n] Tesseract component no longer segfaults when handling corrupt media.\n\n\n[\n#1005\n] Fixed a bug that caused a NullPointerException when attempting to get output object JSON via REST before a job completes.\n\n\n[\n#1035\n] The search bar in the Job Status UI can once again for used to search for job id.\n\n\n[\n#1104\n] Fixed C++/Python component executor memory leaks.\n\n\n[\n#1108\n] Fixed a bug when handling frames and detections that are horizontally flipped. This affected both markup and feed-forward behaviors.\n\n\n[\n#1119\n] Fixed Tesseract component memory leaks and uninitialized read issues.\n\n\n\n\nKnown Issues\n\n\n\n\n\n[\n#1028\n] Media inspection fails to handle Apple-optimized PNGs with the CgBI data chunk before the IHDR chunk.\n\n\n[\n#1109\n] We made the search bar in the Job Status UI more efficient by shifting it to a database query, but in doing so introduced a bug where the search operates on UTC time instead of local system time.\n\n\n[\n#1010\n] \nmpf.output.objects.enabled\n does not behave as expected for batch jobs. A user would expect it to control whether the JSON output object is generated, but it's generated regardless of that setting.\n\n\n[\n#1032\n] Jobs fail on corrupt QuickTime videos. For these videos, the OpenCV-reported frame count is more than twice the actual frame count.\n\n\n[\n#1106\n] When a job ends in ERROR the job status UI does not show an End Date.\n\n\n\n\nOpenMPF 4.1.14: June 2020\n\n\nBug Fixes\n\n\n\n\n\n[\n#1120\n] The node-manager Docker image now correctly installs CUDA libraries so that GPU-enabled components on that image can run on the GPU.\n\n\n[\n#1064\n] Fixed memory leaks in the Darknet component for various network types, and when using GPU resources. This bug covers everything not addressed by \n#1062\n.\n\n\n\n\nOpenMPF 4.1.13: June 2020\n\n\nUpdates\n\n\n\n\n\nUpdated the OpenCV build and media inspection process to properly handle webp images.\n\n\n\n\nOpenMPF 4.1.12: May 2020\n\n\nUpdates\n\n\n\n\n\nUpdated JDK from \njdk-8u181-linux-x64.rpm\n to \njdk-8u251-linux-x64.rpm\n.\n\n\n\n\nOpenMPF 4.1.11: May 2020\n\n\nTesseract OCR Text Detection Component\n\n\n\n\n\nAdded \nINVALID_MIN_IMAGE_SIZE\n job property to filter out images with extremely low width or height.\n\n\nUpdated image rescaling behavior to account for image dimension limits.\n\n\nFixed handling of \nnullptr\n returns from Tesseract API OCR calls.\n\n\n\n\nOpenMPF 4.1.8: May 2020\n\n\nAzure Cognitive Services (ACS) OCR Component\n\n\n\n\n\nThis new component utilizes the \nACS OCR REST endpoint\n to extract text from images and videos. Refer to the \nREADME\n.\n\n\n\n\nOpenMPF 4.1.6: April 2020\n\n\nUpdates\n\n\n\n\n\nNow silently discarding ActiveMQ DLQ \"Suppressing duplicate delivery on connection\" messages in addition to \"duplicate from store\" messages. \n\n\n\n\nOpenMPF 4.1.5: March 2020\n\n\nBug Fixes\n\n\n\n\n\n[\n#1062\n] Fixed a memory leak in the Darknet component that occurred when running jobs on CPU resources with the Tiny YOLO model.\n\n\n\n\nKnown Issues\n\n\n\n\n\n[\n#1064\n] The Darknet component has memory leaks for various network types, and potentially when using GPU resources. This bug covers everything not addressed by \n#1062\n.\n\n\n\n\nOpenMPF 4.1.4: March 2020\n\n\nUpdates\n\n\n\n\n\nUpdated from Hibernate 5.0.8 to 5.4.12 to support schema-based multitenancy. 
This allows multiple instances of OpenMPF to use the same PostgreSQL database as long as each instance connects to the database as a separate user, and the database is configured appropriately. This also required updating Tomcat from 7.0.72 to 7.0.76.\n\n\n\n\nJSON Output Object\n\n\n\n\n\nUpdated the Workflow Manager to include an \noutputobjecturi\n in GET callbacks, and \noutputObjectUri\n in POST callbacks, when jobs complete. This URI specifies a file path, or path on the object storage server, depending on where the JSON output object is located.\n\n\n\n\nInteroperability Package\n\n\n\n\n\nUpdated \nJsonCallbackBody.java\n to contain an \noutputObjectUri\n field.\n\n\n\n\nOpenMPF 4.1.3: February 2020\n\n\nFeatures\n\n\n\n\n\nAdded support for \nDETECTION_PADDING_X\n and \nDETECTION_PADDING_Y\n optional job properties. The value can be a percentage or whole-number pixel value. When positive, each detection region in each track will be expanded. When negative, the region will shrink. If the detection region is shrunk to nothing, the shrunk dimension(s) will be set to a value of 1 pixel and the \nSHRUNK_TO_NOTHING\n detection property will be set to true.\n\n\nAdded support for \nDISTANCE_CONFIDENCE_WEIGHT_FACTOR\n and \nSIZE_CONFIDENCE_WEIGHT_FACTOR\n SuBSENSE algorithm properties. Increasing the value of the first property will generate detection confidence values that favor being closer to the center frame of a track. Increasing the value of the second property will generate detection confidence values that favor large detection regions.\n\n\n\n\nOpenMPF 4.1.1: January 2020\n\n\nBug Fixes\n\n\n\n\n\n[\n#1016\n] Fixed a bug that caused a deadlock situation when the media inspection process failed quickly when processing many jobs using a pipeline with more than one stage.\n\n\n\n\nOpenMPF 4.1.0: July 2019\n\n\nDocumentation\n\n\n\n\n\nUpdated the \nC++ Batch Component API\n to describe the \nROTATION\n detection property. See the \nC++ Arbitrary Rotation\n section below.\n\n\nUpdated the \nREST API\n with new component registration REST endpoints. See the \nComponent Registration REST Endpoints\n section below.\n\n\nAdded a \nREADME\n for the EAST text region detection component. See the \nEAST Text Region Detection Component\n section below.\n\n\nUpdated the Tesseract OCR text detection component \nREADME\n. See the \nTesseract OCR Text Detection Component\n section below.\n\n\nUpdated the openmpf-docker repo \nREADME\n and \nSWARM\n guide to describe the new streamlined approach to using \ndocker-compose config\n. See the \nDocker Deployment\n section below.\n\n\nFixed the description of \nMIN_SEGMENT_LENGTH\n and associated examples in the \nUser Guide\n for issue \n#891\n.\n\n\nUpdated the \nJava Batch Component API\n with information on how to use Log4j2. Related to resolving issue \n#855\n.\n\n\nUpdated the \nInstall Guide\n to point to the Docker \nREADME\n.\n\n\nTransformed the Build Guide into a \nDevelopment Environment Guide\n.\n\n\n\n\n\n\nC++ Arbitrary Rotation\n\n\n\n\nThe C++ MPFVideoCapture and MPFImageReader tools now support \nROTATION\n values other than 0, 90, 180, and 270 degrees. Users can now specify a clockwise \nROTATION\n job property in the range [0, 360). Values outside that range will be normalized to that range. 
Floating point values are accepted.\n\n\nWhen using those tools to read frame data, they will automatically correct for rotation so that the returned frame is horizontally oriented toward the normal 3 o'clock position.\n\n\nWhen \nFEED_FORWARD_TYPE=REGION\n, these tools will look for a \nROTATION\n detection property in the feed-forward detections and automatically correct for rotation. For example, a detection property of \nROTATION=90\n represents that the region is rotated 90 degrees counter clockwise, and therefore must be rotated 90 degrees clockwise to correct for it.\n\n\nWhen \nFEED_FORWARD_TYPE=SUPERSET_REGION\n, these tools will properly account for the \nROTATION\n detection property associated with each feed-forward detection when calculating the bounding box that encapsulates all of those regions.\n\n\nWhen \nFEED_FORWARD_TYPE=FRAME\n, these tools will rotate the frame according to the \nROTATION\n job property. It's important to note that for rotations other than 0, 90, 180, and 270 degrees the rotated frame dimensions will be larger than the original frame dimensions. This is because the frame needs to be expanded to encapsulate the entirety of the original rotated frame region. Black pixels are used to fill the empty space near the edges of the original frame.\n\n\n\n\n\n\nThe Markup component now places a colored dot at the upper-left corner of each detection region so that users can determine the rotation of the region relative to the entire frame.\n\n\n\n\n\n\nComponent Registration REST Endpoints\n\n\n\n\nAdded a \n[POST] /rest/components/registerUnmanaged\n endpoint so that components running as separate Docker containers can self-register with the Workflow Manager.\n\n\nSince these components are not managed by the Node Manager, they are considered unmanaged OpenMPF components. These components are not displayed in Nodes web UI and are tagged as unmanaged in the Component Registration web UI where they can only be removed.\n\n\nNote that components uploaded to the Component Registration web UI as .tar.gz files are considered managed components.\n\n\n\n\n\n\nAdded a \n[DELETE] /rest/components/{componentName}\n endpoint that can be used to remove managed and unmanaged components.\n\n\n\n\nPython Component Executor Docker Image\n\n\n\n\n\nComponent developers can now use a Python component executor Docker image to write a Python component for OpenMPF that can be encapsulated\nwithin a Docker container. This isolates the build and execution environment from the rest of OpenMPF. For more information, see the \nREADME\n.\n\n\nComponents developed with this image are not managed by the Node Manager; rather, they self-register with the Workflow Manager and their lifetime is determined by their own Docker container.\n\n\n\n\n\n\nDocker Deployment\n\n\n\n\nStreamlined single-host \ndocker-compose up\n deployments and multi-host \ndocker stack deploy\n swarm deployments. Now users are instructed to create a single \ndocker-compose.yml\n file for both types of deployments.\n\n\nRemoved the \ndocker-generate-compose-files.sh\n script in favor of allowing users the flexibility of combining multiple \ndocker-compose.*.yml\n files together using \ndocker-compose config\n. 
See the \nGenerate docker-compose.yml\n section of the README.\n\n\nComponents based on the Python component executor Docker image can now be defined and configured directly in \ndocker-compose.yml\n.\n\n\nOpenMPF Docker images now make use of Docker labels.\n\n\n\n\n\n\nEAST Text Region Detection Component\n\n\n\n\nThis new component uses the Efficient and Accurate Scene Text (EAST) detection model to detect text regions in images and videos. It reports their location, angle of rotation, and text type (\nSTRUCTURED\n or \nUNSTRUCTURED\n), and supports a variety of settings to control the behavior of merging text regions into larger regions. It does not perform OCR on the text or track detections across video frames. Thus, each video track is at most one detection long. For more information, see the \nREADME\n.\n\n\nOptionally, this component can be built as a Docker image using the Python component executor Docker image, allowing it to exist apart from the Node Manager image.\n\n\n\n\n\n\nTesseract OCR Text Detection Component\n\n\n\n\nUpdated to support reading tessdata \n*.traineddata\n files at a specified \nMODELS_DIR_PATH\n. This allows users to install new \n*.traineddata\n files post deployment.\n\n\nUpdated to optionally perform Tesseract Orientation and Script Detection (OSD). When enabled, the component will attempt to use the orientation results of OSD to automatically rotate the image, as well as perform OCR using the scripts detected by OSD.\n\n\nUpdated to optionally rotate a feed-forward text region 180 degrees to account for upside-down text.\n\n\nNow supports the following preprocessing properties for both structured and unstructured text:\n\n\nText sharpening\n\n\nText rescaling\n\n\nOtsu image thresholding\n\n\nAdaptive thresholding\n\n\nHistogram equalization\n\n\nAdaptive histogram equalization (also known as Contrast Limited Adaptive Histogram Equalization (CLAHE))\n\n\n\n\n\n\nWill use the \nTEXT_TYPE\n detection property in feed-forward regions provided by the EAST component to determine which preprocessing steps to perform.\n\n\nFor more information on these new features, see the \nREADME\n.\n\n\nRemoved gibberish and string filters since they only worked on English text.\n\n\n\n\nActiveMQ Profiles\n\n\n\n\n\nThe ActiveMQ Docker image now supports custom profiles. The container selects an \nactivemq.xml\n and \nenv\n file to use at runtime based on the value of the \nACTIVE_MQ_PROFILE\n environment variable. Among others, these files contain configuration settings for Java heap space and component queue memory limits.\n\n\nThis release only supports a \ndefault\n profile setting, as defined by \nactivemq-default.xml\n and \nenv.default\n; however, developers are free to add other \nactivemq-\nprofile\n.xml\n and \nenv.\nprofile\n files to the ActiveMQ Docker image to suit their needs.\n\n\n\n\nDisabled ActiveMQ Prefetch\n\n\n\n\n\nDisabled ActiveMQ prefetching on all component queues. 
Previously, a prefetch value of one was resulting in situations where one component service could be dispatched two sub-jobs, thereby starving other available component services which could process one of those sub-jobs in parallel.\n\n\n\n\nSearch Region Percentages\n\n\n\n\n\nIn addition to using exact pixel values, users can now use percentages for the following properties when specifying search regions for C++ and Python components:\n\n\nSEARCH_REGION_TOP_LEFT_X_DETECTION\n\n\nSEARCH_REGION_TOP_LEFT_Y_DETECTION\n\n\nSEARCH_REGION_BOTTOM_RIGHT_X_DETECTION\n\n\nSEARCH_REGION_BOTTOM_RIGHT_Y_DETECTION\n\n\n\n\n\n\nFor example, setting \nSEARCH_REGION_TOP_LEFT_X_DETECTION=50%\n will result in components only processing the right half of an image or video.\n\n\nOptionally, users can specify exact pixel values of some of these properties and percentages for others.\n\n\n\n\nOther Improvements\n\n\n\n\n\nIncreased the number of ActiveMQ maxConcurrentConsumers for the \nMPF.COMPLETED_DETECTIONS\n queue from 30 to 60.\n\n\nThe Create Job web UI now only displays the content of the \n$MPF_HOME/share/remote-media\n directory instead of all of \n$MPF_HOME/share\n, which prevents the Workflow Manager from indexing generated JSON output files, artifacts, and markup. Indexing the latter resulted in Java heap space issues for large scale production systems. This is a mitigation for issue \n#897\n.\n\n\nThe Job Status web UI now makes proper use of pagination in SQL/Hibernate through the Workflow Manager to avoid retrieving the entire jobs table, which was inefficient.\n\n\nThe Workflow Manager will now silently discard all duplicate messages in the ActiveMQ Dead Letter Queue (DLQ), regardless of destination. Previously, only messages destined for component sub-job request queues were discarded.\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#891\n] Fixed a bug where the Workflow Manager media segmenter generated short segments that were minimally \nMIN_SEGMENT_LENGTH+1\n in size instead of \nMIN_SEGMENT_LENGTH\n.\n\n\n[\n#745\n] In environments where thousands of jobs are processed, users have observed that, on occasion, pending sub-job messages in ActiveMQ queues are not processed until a new job is created. This seems to have been resolved by disabling ActiveMQ prefetch behavior on component queues.\n\n\n[\n#855\n] A logback circular reference suppressed exception no longer throws a StackOverflowError. This was resolved by transitioning the Workflow Manager and Java components from the Logback framework to Log4j2.\n\n\n\n\nKnown Issues\n\n\n\n\n\n[\n#897\n] OpenMPF will attempt to index files located in \n$MPF_HOME/share\n as soon as the webapp is started by Tomcat. This is so that those files can be listed in a directory tree in the Create Job web UI. The main problem is that once a file gets indexed it's never removed from the cache, even if the file is manually deleted, resulting in a memory leak.\n\n\n\n\nLate Additions: November 2019\n\n\n\n\n\nUser names, roles, and passwords can now be set by using an optional \nuser.properties\n file. This allows administrators to override the default OpenMPF users that come preconfigured, which may be a security risk. Refer to the \"Configure Users\" section of the openmpf-docker \nREADME\n for more information.\n\n\n\n\nLate Additions: December 2019\n\n\n\n\n\nTransitioned from using a mySQL persistent database to PostgreSQL to support users that use an external PostgreSQL database in the cloud. 
\n\n\nUpdated the EAST component to support a \nTEMPORARY_PADDING\n and \nFINAL_PADDING\n property. The first property determines how much padding is added to detections during the non-maximum suppression or merging step. This padding is effectively removed from the final detections. The second property is used to control the final amount of padding on the output regions. Refer to the \nREADME\n.\n\n\n\n\nOpenMPF 4.0.0: February 2019\n\n\nDocumentation\n\n\n\n\n\nAdded an \nObject Storage Guide\n with information on how to configure OpenMPF to work with a custom NGINX object storage server, and how to run jobs that use an S3 object storage server. Note that the system properties for the custom NGINX object storage server have changed since the last release.\n\n\n\n\nUpgrade to Tesseract 4.0\n\n\n\n\n\nBoth the Tesseract OCR Text Detection Component and OpenALPR License Plate Detection Components have been updated to use the new version of Tesseract.\n\n\nAdditionally, Leptonica has been upgraded from 1.72 to 1.75.\n\n\n\n\nDocker Deployment\n\n\n\n\n\nThe Docker images now use the yum package manager to install ImageMagick6 from a public RPM repository instead of downloading the RPMs directly from imagemagick.org. This resolves an issue with the OpenMPF Docker build where RPMs on \nimagemagick.org\n were no longer available.\n\n\n\n\nTesseract OCR Text Detection Component\n\n\n\n\n\nUpdated to allow the user to set a \nTESSERACT_OEM\n property in order to select an OCR engine mode (OEM).\n\n\n\"script/Latin\" can now be specified as the \nTESSERACT_LANGUAGE\n. When selected, Tesseract will select all Latin characters, which can be from different Latin languages.\n\n\n\n\nCeph S3 Object Storage\n\n\n\n\n\nAdded support for downloading files from, and uploading files to, an S3 object storage server. The following job properties can be provided: \nS3_ACCESS_KEY\n, \nS3_SECRET_KEY\n, \nS3_RESULTS_BUCKET\n, \nS3_UPLOAD_ONLY\n.\n\n\nAt this time, only support for Ceph object storage has been tested. However, the Workflow Manager uses the AWS SDK for Java to communicate with the object store, so it is possible that other S3-compatible storage solutions may work as well.\n\n\n\n\nISO-8601 Timestamps\n\n\n\n\n\nAll timestamps in the JSON output object, and streaming video callbacks, are now in the ISO-8601 format (e.g. \"2018-12-19T12:12:59.995-05:00\"). This new format includes the time zone, which makes it possible to compare timestamps generated between systems in different time zones.\n\n\nThis change does not affect the track and detection start and stop offset times, which are still reported in milliseconds since the start of the video.\n\n\n\n\nReduced Redis Usage\n\n\n\n\n\nThe Workflow Manager has been refactored to reduce usage of the Redis in-memory database. In general, Redis is not necessary for storing job information and only resulted in introducing potential delays in accessing that data over the network stack.\n\n\nNow, only track and detection data is stored in Redis for batch jobs. This reduces the amount of memory the Workflow Manager requires of the Java Virtual Machine. Compared to the other job information, track and detection data can potentially be relatively much larger. 
In the future, we plan to store frame data in Redis for streaming jobs as well.\n\n\n\n\nCaffe Vehicle Color Estimation\n\n\n\n\n\nThe Caffe Component \nmodels.ini\n file has been updated with a \"vehicle_color\" section with links for downloading the \nReza Fuad Rachmadi's Vehicle Color Recognition Using Convolutional Neural Network\n model files.\n\n\nThe following pipelines have been added. These require the above model files to be placed in \n$MPF_HOME/share/models/CaffeDetection\n:\n\n\nCAFFE REZAFUAD VEHICLE COLOR DETECTION PIPELINE\n\n\nCAFFE REZAFUAD VEHICLE COLOR DETECTION (WITH FF REGION FROM TINY YOLO VEHICLE DETECTOR) PIPELINE\n\n\nCAFFE REZAFUAD VEHICLE COLOR DETECTION (WITH FF REGION FROM YOLO VEHICLE DETECTOR) PIPELINE\n\n\n\n\n\n\n\n\nTrack Merging and Minimum Track Length\n\n\n\n\n\nThe following system properties now have \"video\" in their names:\n\n\ndetection.video.track.merging.enabled\n\n\ndetection.video.track.min.gap\n\n\ndetection.video.track.min.length\n\n\ndetection.video.track.overlap.threshold\n\n\n\n\n\n\nThe above properties can be overridden by the following job properties, respectively. These have not been renamed since the last release:\n\n\nMERGE_TRACKS\n\n\nMIN_GAP_BETWEEN_TRACKS\n\n\nMIN_TRACK_LENGTH\n\n\nMIN_OVERLAP\n\n\n\n\n\n\nThese system and job properties now only apply to video media. This resolves an issue where users had set \ndetection.track.min.length=5\n, which resulted in dropping all image media tracks. By design, each image track can only contain a single detection.\n\n\n\n\nBug Fixes\n\n\n\n\n\nFixed a bug where the Docker entrypoint scripts appended properties to the end of \n$MPF_HOME/share/config/mpf-custom.properties\n every time the Docker deployment was restarted, resulting in entries like \ndetection.segment.target.length=5000,5000,5000\n.\n\n\nUpgrading to Tesseract 4 fixes a bug where, when specifying \nTESSERACT_LANGUAGE\n, if one of the languages is Arabic, then Arabic must be specified last. Arabic can now be specified first, for example: \nara+eng\n.\n\n\nFixed a bug where the minimum track length property was being applied to image tracks. Now it's only applied to video tracks.\n\n\nFixed a bug where ImageMagick6 installation failed while building Docker images.\n\n\n\n\nOpenMPF 3.0.0: December 2018\n\n\n\n\nNOTE:\n The \nBuild Guide\n and \nInstall Guide\n are outdated. The old process for manually configuring a Build VM, using it to build an OpenMPF package, and installing that package, is deprecated in favor of Docker containers. Please refer to the openmpf-docker \nREADME\n.\n\n\nNOTE:\n Do not attempt to register or unregister a component through the Nodes UI in a Docker deployment. It may appear to succeed, but the changes will not affect the child Node Manager containers, only the Workflow Manager container. Also, do not attempt to use the \nmpf\n command line tools in a Docker deployment.\n\n\n\n\nDocumentation\n\n\n\n\n\nAdded a \nREADME\n, \nSWARM\n guide, and \nCONTRIBUTING\n guide for Docker deployment.\n\n\nUpdated the \nUser Guide\n with information on how track properties and track confidence are handled when merging tracks.\n\n\nAdded README files for new components. 
Refer to the component sections below.\n\n\n\n\nDocker Support\n\n\n\n\n\nOpenMPF can now be built and distributed as 5 Docker images: openmpf_workflow_manager, openmpf_node_manager, openmpf_active_mq, mysql_database, and redis.\n\n\nThese images can be deployed on a single host using \ndocker-compose up\n.\n\n\nThey can also be deployed across multiple hosts in a Docker swarm cluster using \ndocker stack deploy\n.\n\n\nGPU support is enabled through the NVIDIA Docker runtime.\n\n\nBoth HTTP and HTTPS deployments are supported.\n\n\n\n\n\n\nJSON Output Object\n\n\n\n\nAdded a \ntrackProperties\n field at the track level that works in much the same way as the \ndetectionProperties\n field at the detection level. Both are maps that contain zero or more key-value pairs. The component APIs have always supported the ability to return track-level properties, but they were never represented in the JSON output object, until now.\n\n\nSimilarly, added a track \nconfidence\n field. The component APIs always supported setting it, but the value was never used in the JSON output object, until now.\n\n\nAdded \njobErrors\n and\njobWarnings\n fields. The \njobErrors\n field will mention that there are items in \ndetectionProcessingErrors\n fields.\n\n\nThe \noffset\n, \nstartOffset\n, and \nstopOffset\n fields have been removed in favor of the existing \noffsetFrame\n, \nstartOffsetFrame\n, and \nstopOffsetFrame\n fields, respectively. They were redundant and deprecated.\n\n\nAdded a \nmpf.output.objects.exemplars.only\n system property, and \nOUTPUT_EXEMPLARS_ONLY\n job property, that can be set to reduce the size of the JSON output object by only recording the track exemplars instead of all of the detections in each track.\n\n\nAdded a \nmpf.output.objects.last.stage.only\n system property, and \nOUTPUT_LAST_STAGE_ONLY\n job property, that can be set to reduce the size of the JSON output object by only recording the detections for the last non-markup stage of a pipeline.\n\n\n\n\nDarknet Component\n\n\n\n\n\nThe Darknet component can now support processing streaming video.\n\n\nIn batch mode, video frames are prefetched, decoded, and stored in a buffer using a separate thread from the one that performs the detection. The size of the prefetch buffer can be configured by setting \nFRAME_QUEUE_CAPACITY\n.\n\n\nThe Darknet component can now perform basic tracking and generate video tracks with multiple detections. Both the default detection mode and preprocessor detection mode are supported.\n\n\nThe Darknet component has been updated to support the full and tiny YOLOv3 models. The YOLOv2 models are no longer supported.\n\n\n\n\nTesseract OCR Text Detection Component\n\n\n\n\n\nThis new component extracts text found in an image and reports it as a single-detection track.\n\n\nPDF documents can also be processed with one track detection per page.\n\n\nUsers may set the language of each track using the \nTESSERACT_LANGUAGE\n property as well as adjust other image preprocessing properties for text extraction.\n\n\nRefer to the \nREADME\n.\n\n\n\n\nOpenCV Scene Change Detection Component\n\n\n\n\n\nThis new component detects and segments a given video by scenes. 
Each scene change is detected using histogram comparison, edge comparison, brightness (fade outs), and overall hue/saturation/value differences between adjacent frames.\n\n\nUsers can toggle each type of scene change detection technique as well as threshold properties for each detection method.\n\n\nRefer to the \nREADME\n.\n\n\n\n\nTika Text Detection Component\n\n\n\n\n\nThis new component extracts text contained in documents and performs language detection. 71 languages and most document formats (.txt, .pptx, .docx, .doc, .pdf, etc.) are supported.\n\n\nRefer to the \nREADME\n.\n\n\n\n\nTika Image Detection Component\n\n\n\n\n\nThis new component extracts images embedded in document formats (.pdf, .ppt, .doc) and stores them on disk in a specified directory.\n\n\nRefer to the \nREADME\n.\n\n\n\n\nTrack-Level Properties and Confidence\n\n\n\n\n\nRefer to the addition of track-level properties and confidence in the \nJSON Output Object\n section.\n\n\nComponents have been updated to return meaningful track-level properties. Caffe and Darknet include \nCLASSIFICATION\n, OALPR includes the exemplar \nTEXT\n, and Sphinx includes the \nTRANSCRIPTION\n.\n\n\nThe Workflow Manager will now populate the track-level confidence. It is the same as the exemplar confidence, which is the maximum confidence of all of the detections in the track.\n\n\n\n\nCustom NGINX HTTP Object Storage\n\n\n\n\n\nAdded \nhttp.object.storage.*\n system properties for configuring an optional custom NGINX object storage server on which to store generated detection artifacts, JSON output objects, and markup files.\n\n\nWhen a file cannot be uploaded to the server, the Workflow Manager will fall back to storing it in \n$MPF_HOME/share\n, which is the default behavior when an object storage server is not specified.\n\n\nIf and when a failure occurs, the JSON output object will contain a descriptive message in the \njobWarnings\n field, and, if appropriate, the \nmarkupResult.message\n field. If the job completes without other issues, the final status will be \nCOMPLETE_WITH_WARNINGS\n.\n\n\nThe NGINX storage server runs custom server-side code which we can make available upon request. In the future, we plan to support more common storage server solutions, such as Amazon S3.\n\n\n\n\n\n\nActiveMQ\n\n\n\n\nThe \nMPF_OUTPUT\n queue is no longer supported and has been removed. Job producers can specify a callback URL when creating a job so that they are alerted when the job is complete. Users observed heap space issues with ActiveMQ after running thousands of jobs without consuming messages from the \nMPF_OUTPUT\n queue.\n\n\nThe Workflow Manager will now silently discard duplicate sub-job request messages in the ActiveMQ Dead Letter Queue (DLQ). This fixes a bug where the Workflow Manager would prematurely terminate jobs corresponding to the duplicate messages. It's assumed that ActiveMQ will only place a duplicate message in the DLQ if the original message, or another duplicate, can be delivered.\n\n\n\n\nNode Auto-Configuration\n\n\n\n\n\nAdded the \nnode.auto.config.enabled\n, \nnode.auto.unconfig.enabled\n, and \nnode.auto.config.num.services.per.component\n system properties for automatically managing the configuration of services when nodes join and leave the OpenMPF cluster.\n\n\nDocker will assign a hostname with a randomly-generated id to containers in a swarm deployment. 
The above properties allow the Workflow Manager to automatically discover and configure services on child Node Manager components, which is convenient since the hostname of those containers cannot be known in advance, and new containers with new hostnames are created when the swarm is restarted.\n\n\n\n\nJob Status Web UI\n\n\n\n\n\nAdded the \nweb.broadcast.job.status.enabled\n and \nweb.job.polling.interval\n system properties that can be used to configure if the Workflow Manager automatically broadcasts updates to the Job Status web UI. By default, the broadcasts are enabled.\n\n\nIn a production environment that processes hundreds of jobs or more at the same time, this behavior can result in overloading the web UI, causing it to slow down and freeze up. To prevent this, set \nweb.broadcast.job.status.enabled\n to \nfalse\n. If \nweb.job.polling.interval\n is set to a non-zero value, the web UI will poll for updates at that interval (specified in milliseconds).\n\n\nTo disable broadcasts and polling, set \nweb.broadcast.job.status.enabled\n to \nfalse\n and \nweb.job.polling.interval\n to a zero or negative value. Users will then need to manually refresh the Job Status web page using their web browser.\n\n\n\n\nOther Improvements\n\n\n\n\n\nNow using variable-length text fields in the mySQL database for string data that may exceed 255 characters.\n\n\nUpdated the MPFImageReader tool to use OpenCV video capture behind the scenes to support reading data from HTTP URLs.\n\n\nPython components can now include pre-built wheel files in the plugin package.\n\n\nWe now use a \nJenkinsfile\n Groovy script for our Jenkins build process. This allows us to use revision control for our continuous integration process and share that process with the open source community.\n\n\nAdded \nremote.media.download.retries\n and \nremote.media.download.sleep\n system properties that can be used to configure how the Workflow Manager will attempt to retry downloading remote media if it encounters a problem.\n\n\nArtifact extraction now uses MPFVideoCapture, which employs various fallback strategies for extracting frames in cases where a video is not well-formed or corrupted. For components that use MPFVideoCapture, this enables better consistency between the frames they process and the artifacts that are later extracted.\n\n\n\n\nBug Fixes\n\n\n\n\n\nJobs now properly end in \nERROR\n if an invalid media URL is provided or there is a problem accessing remote media.\n\n\nJobs now end in \nCOMPLETE_WITH_ERRORS\n when a detection splitter error occurs due to missing system properties.\n\n\nComponents can now include their own version of the Google Protobuf library. It will not conflict with the version used by the rest of OpenMPF.\n\n\nThe Java component executor now sets the proper job id in the job name instead of using the ActiveMQ message request id.\n\n\nThe Java component executor now sets the run directory using \nsetRunDirectory()\n.\n\n\nActions can now be properly added using an \"extras\" component. 
An extras component only includes a \ndescriptor.json\n file and declares Actions, Tasks, and Pipelines using other component algorithms.\n\n\nRefer to the items listed in the \nActiveMQ\n section.\n\n\nRefer to the addition of track-level properties and confidence in the \nJSON Output Object\n section.\n\n\n\n\nKnown Issues\n\n\n\n\n\n[\n#745\n] In environments where thousands of jobs are processed, users have observed that, on occasion, pending sub-job messages in ActiveMQ queues are not processed until a new job is created. The reason is currently unknown.\n\n\n[\n#544\n] Image artifacts retain some permissions from source files available on the local host. This can result in some of the image artifacts having executable permissions.\n\n\n[\n#604\n] The Sphinx component cannot be unregistered because \n$MPF_HOME/plugins/SphinxSpeechDetection/lib\n is owned by root on a deployment machine.\n\n\n[\n#623\n] The Nodes UI does not work correctly when \n[POST] /rest/nodes/config\n is used at the same time. This is because the UI's state is not automatically updated to reflect changes made through the REST endpoint.\n\n\n[\n#783\n] The Tesseract OCR Text Detection Component has a \nknown issue\n because it uses Tesseract 3. If a combination of languages is specified using \nTESSERACT_LANGUAGE\n, and one of the languages is Arabic, then Arabic must be specified last. For example, for English and Arabic, \neng+ara\n will work, but \nara+eng\n will not.\n\n\n[\n#784\n] Sometimes services do not start on OpenMPF nodes, and those services cannot be started through the Nodes web UI. This is not a Docker-specific problem, but it has been observed in a Docker swarm deployment when auto-configuration is enabled. The workaround is to restart the Docker swarm deployment, or remove the entire node in the Nodes UI and add it again.\n\n\n\n\nOpenMPF 2.1.0: June 2018\n\n\n\n\nNOTE:\n If building this release on a machine used to build a previous version of OpenMPF, then please run \nsudo pip install --upgrade pip\n to update to at least pip 10.0.1. If not, the OpenMPF build script will fail to properly download .whl files for Python modules.\n\n\n\n\nDocumentation\n\n\n\n\n\nAdded the \nPython Batch Component API\n.\n\n\nAdded the \nNode Guide\n.\n\n\nAdded the \nGPU Support Guide\n.\n\n\nUpdated the \nInstall Guide\n with an \"(Optional) Install the NVIDIA CUDA Toolkit\" section.\n\n\nRenamed Admin Manual to Admin Guide for consistency.\n\n\n\n\nPython Batch Component API\n\n\n\n\n\nDevelopers can now write batch components in Python using the mpf_component_api module.\n\n\nDependencies can be specified in a setup.py file. OpenMPF will automatically download the .whl files using pip at build time.\n\n\nWhen deployed, a virtualenv is created for the Python component so that it runs in a sandbox isolated from the rest of the system.\n\n\nOpenMPF ImageReader and VideoCapture tools are provided in the mpf_component_util module.\n\n\nExample Python components are provided for reference.\n\n\n\n\nSpare Nodes\n\n\n\n\n\nSpare nodes can join and leave an OpenMPF cluster while the Workflow Manager is running. You can create a spare node by cloning an existing OpenMPF child node. Refer to the \nNode Guide\n.\n\n\nNote that changes made using the Component Registration web page only affect core nodes, not spare nodes. 
Core nodes are those configured during the OpenMPF installation process.\n\n\nAdded \nmpf list-nodes\n command to list the core nodes and available spare nodes.\n\n\nOpenMPF now uses the JGroups FILE_PING protocol for peer discovery instead of TCPPING. This means that the list of OpenMPF nodes no longer needs to be fully specified when the Workflow Manager starts. Instead, the Workflow Manager, and Node Manager process on each node, use the files in \n$MPF_HOME/share/nodes\n to determine which nodes are currently available.\n\n\nUpdated JGroups from 3.6.4 to 4.0.11.\n\n\nThe environment variables specified in \n/etc/profile.d/mpf.sh\n have been simplified. Of note, \nALL_MPF_NODES\n has been replaced by \nCORE_MPF_NODES\n.\n\n\n\n\nDefault Detection System Properties\n\n\n\n\n\nThe detection properties that specify the default values when creating new jobs can now be updated at runtime without restarting the Workflow Manager. Changing these properties will only have an effect on new jobs, not jobs that are currently running.\n\n\nThese default detection system properties are separated from the general system properties in the Properties web page. The latter still require the Workflow Manager to be restarted for changes to take effect.\n\n\nThe Apache Commons Configuration library is now used to read and write properties files. When defining a property value using an environment variable in the Properties web page, or \n$MPF_HOME/config/mpf-custom.properties\n, be sure to prepend the variable name with \nenv:\n. For example:\n\n\n\n\ndetection.models.dir.path=${env:MPF_HOME}/models/\n\n\n\n\n\n\nAlternatively, you can define system properties using other system properties:\n\n\n\n\ndetection.models.dir.path=${mpf.share.path}/models/\n\n\n\n\nAdaptive Frame Interval\n\n\n\n\n\nThe \nFRAME_RATE_CAP\n property can be used to set a threshold on the maximum number of frames to process within one second of the native video time. This property takes precedence over the user-provided / pipeline-provided value for \nFRAME_INTERVAL\n. When the \nFRAME_RATE_CAP\n property is specified, an internal frame interval value is calculated as follows:\n\n\n\n\ncalcFrameInterval = max(1, floor(mediaNativeFPS / frameRateCapProp));\n\n\n\n\n\n\nFRAME_RATE_CAP\n may be disabled by setting it <= 0. \nFRAME_INTERVAL\n can be disabled in the same way.\n\n\nIf \nFRAME_RATE_CAP\n is disabled, then \nFRAME_INTERVAL\n will be used instead.\n\n\nIf both \nFRAME_RATE_CAP\n and \nFRAME_INTERVAL\n are disabled, then a value of 1 will be used for \nFRAME_INTERVAL\n.
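To make the precedence rules above concrete, here is a minimal sketch of how the effective frame interval could be derived from these two properties. It is not the Workflow Manager's actual implementation; the function name and the use of `None` for an unset property are assumptions made for this example.

```python
import math

def effective_frame_interval(media_native_fps, frame_rate_cap=None, frame_interval=None):
    """Hypothetical helper mirroring the FRAME_RATE_CAP / FRAME_INTERVAL rules above."""
    # A property that is unset, zero, or negative is treated as disabled.
    if frame_rate_cap is not None and frame_rate_cap > 0:
        # FRAME_RATE_CAP takes precedence over FRAME_INTERVAL.
        return max(1, math.floor(media_native_fps / frame_rate_cap))
    if frame_interval is not None and frame_interval > 0:
        return frame_interval
    # Both properties disabled: fall back to a frame interval of 1.
    return 1

# Example: a 30 fps video capped at 10 frames per second of native video time.
assert effective_frame_interval(30, frame_rate_cap=10) == 3
```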
Darknet Component\n\n\n\n\n\nThis release includes a component that uses the \nDarknet neural network framework\n to perform detection and classification of objects using trained models.\n\n\nPipelines for the Tiny YOLO and YOLOv2 models are provided. Due to its large size, the YOLOv2 weights file must be downloaded separately and placed in \n$MPF_HOME/share/models/DarknetDetection\n in order to use the YOLOv2 pipelines. Refer to \nDarknetDetection/plugin-files/models/models.ini\n for more information.\n\n\nThis component supports a preprocessor mode and default mode of operation. If preprocessor mode is enabled, and multiple Darknet detections in a frame share the same classification, then those are merged into a single detection where the region corresponds to the superset region that encapsulates all of the original detections, and the confidence value is the probability that at least one of the original detections is a true positive. If disabled, multiple Darknet detections in a frame are not merged together.\n\n\nDetections are not tracked across frames. One track is generated per detection.\n\n\nThis component supports an optional \nCLASS_WHITELIST_FILE\n property. When provided, only detections with class names listed in the file will be generated.\n\n\nThis component can be compiled with GPU support if the NVIDIA CUDA Toolkit is installed on the build machine. Refer to the \nGPU Support Guide\n. If the toolkit is not found, then the component will compile with CPU support only.\n\n\nTo run on a GPU, set the \nCUDA_DEVICE_ID\n job property, or set the detection.cuda.device.id system property, to a value >= 0.\n\n\nWhen \nCUDA_DEVICE_ID\n is >= 0, you can set the \nFALLBACK_TO_CPU_WHEN_GPU_PROBLEM\n job property, or the detection.use.cpu.when.gpu.problem system property, to \nTRUE\n if you want to run the component logic on the CPU instead of the GPU when a GPU problem is detected.\n\n\n\n\nModels Directory\n\n\n\n\n\nThe \n$MPF_HOME/share/models\n directory is now used by the Darknet and Caffe components to store model files and associated files, such as classification names files, weights files, etc. This allows users to more easily add model files post-deployment. Instead of copying the model files to the \n$MPF_HOME/plugins/\ncomponent-name\n/models\n directory on each node in the OpenMPF cluster, they only need to copy them to the shared directory once.\n\n\nTo add new models to the Darknet and Caffe components, add an entry to the respective \ncomponent-name\n/plugin-files/models/models.ini\n file.\n\n\n\n\nPackaging and Deployment\n\n\n\n\n\nPython components are packaged with their respective dependencies as .whl files. This can be automated by providing a setup.py file. An example OpenCV Python component is provided that demonstrates how the component is packaged and deployed with the opencv-python module. When deployed, a virtualenv is created for the component with the .whl files installed in it.\n\n\nWhen deploying OpenMPF, \nLD_LIBRARY_PATH\n is no longer set system-wide. Refer to Known Issues.\n\n\n\n\nWeb User Interface\n\n\n\n\n\nUpdated the Nodes page to distinguish between core nodes and spare nodes, and to show when a node is online or offline.\n\n\nUpdated the Component Registration page to list the core nodes as a reminder that changes will not affect spare nodes.\n\n\nUpdated the Properties page to separate the default detection properties from the general system properties.\n\n\n\n\nBug Fixes\n\n\n\n\n\nCustom Action, task, and pipeline names can now contain \"(\" and \")\" characters again.\n\n\nDetection location elements for audio tracks and generic tracks in a JSON output object will now have a y value of \n0\n instead of \n1\n.\n\n\nStreaming health report and summary report timestamps have been corrected to represent hours in the 0-23 range instead of 1-24.\n\n\nSingle-frame .gif files are now segmented properly and no longer result in a NullPointerException.\n\n\nLD_LIBRARY_PATH\n is now set at the process level for Tomcat, the Node Manager, and component services, instead of at the system level in \n/etc/profile.d/mpf.sh\n. Also, deployments no longer create \n/etc/ld.so.conf.d/mpf.conf\n. This better isolates OpenMPF from the rest of the system and prevents issues, such as being unable to use SSH, when system libraries are not compatible with OpenMPF libraries. 
The latter situation may occur when running \nyum update\n on the system, which can make OpenMPF unusable until a new deployment package with compatible libraries is installed.\n\n\nThe Workflow Manager will no longer generate an \"Error retrieving the SingleJobInfo model\" line in the log if someone is viewing the Job Status page when a job submitted through the REST API is in progress.\n\n\n\n\nKnown Issues\n\n\n\n\n\nWhen multiple component services of the same type on the same node log to the same file at the same time, sometimes log lines will not be captured in the log file. The logging frameworks (log4j and log4cxx) do not support that usage. This problem happens more frequently on systems running many component services at the same time.\n\n\nThe following exception was observed:\n\n\n\n\ncom.google.protobuf.InvalidProtocolBufferException: Message missing required fields: data_uri\n\n\n\n\n\n\n\nFurther debugging is necessary to determine the reason why that message was missing that field. The situation is not easily reproducible. It may occur when ActiveMQ and / or the system is under heavy load and sends duplicate messages in attempt to ensure message delivery. Some of those messages seem to end up in the dead letter queue (DLQ). For now, we've improved the way we handle messages in the DLQ. If OpenMPF can process a message successfully, the job is marked as \nCOMPLETED_WITH_ERRORS\n, and the message is moved from \nActiveMQ.DLQ\n to \nMPF.DLQ_PROCESSED_MESSAGES\n. If OpenMPF cannot process a message successfully, it is moved from \nActiveMQ.DLQ to MPF.DLQ_INVALID_MESSAGES\n.\n\n\n\n\n\n\nThe \nmpf stop\n command will stop the Workflow Manager, which will in turn send commands to all of the available nodes to stop all running component services. If a service is processing a sub-job when the quit command is received, that service process will not terminate until that sub-job is completely processed. Thus, the service may put a sub-job response on the ActiveMQ response queue after the Workflow Manager has terminated. That will not cause a problem because the queues are flushed the next time the Workflow Manager starts; however, there will be a problem if the service finishes processing the sub-job after the Workflow Manager is restarted. At that time, the Workflow Manager will have no knowledge of the old job and will in turn generate warnings in the log about how the job id is \"not known to the system\" and/or \"not found as a batch or a streaming job\". These can be safely ignored. Often, if these messages appear in the log, then C++ services were running after stopping the Workflow Manager. To address this, you may wish to run \nsudo killall amq_detection_component\n after running \nmpf stop\n.\n\n\n\n\nOpenMPF 2.0.0: February 2018\n\n\n\n\nNOTE:\n Components built for previous releases of OpenMPF are not compatible with OpenMPF 2.0.0 due to Batch Component API changes to support generic detections, and changes made to the format of the \ndescriptor.json\n file to support stream processing.\n\n\nNOTE:\n This release contains basic support for processing video streams. Currently, the only way to make use of that functionality is through the REST API. Streaming jobs and services cannot be created or monitored through the web UI. Only the SuBSENSE component has been updated to support streaming. 
Only single-stage pipelines are supported at this time.\n\n\n\n\nDocumentation\n\n\n\n\n\nUpdated documents to distinguish the batch component APIs from the streaming component API.\n\n\nAdded the \nC++ Streaming Component API\n.\n\n\nUpdated the \nC++ Batch Component API\n to describe support for generic detections.\n\n\nUpdated the \nREST API\n with endpoints for streaming jobs.\n\n\n\n\nSupport for Generic Detections\n\n\n\n\n\nC++ and Java components can now declare support for the \nUNKNOWN\n data type. The respective batch APIs have been updated with a function that will enable a component to process an \nMPFGenericJob\n, which represents a piece of media that is not a video, image, or audio file.\n\n\nNote that these API changes make OpenMPF R2.0.0 incompatible with components built for previous releases of OpenMPF. Specifically, the new component executor will not be able to load the component logic library.\n\n\n\n\nC++ Batch Component API\n\n\n\n\n\nAdded the following function to support generic detections:\n\n\nMPFDetectionError GetDetections(const MPFGenericJob \njob, vector\nMPFGenericTrack\n \ntracks)\n\n\n\n\n\n\n\n\nJava Batch Component API\n\n\n\n\n\nAdded the following method to support generic detections:\n\n\nList\nMPFGenericTrack\n getDetections(MPFGenericJob job)\n\n\n\n\n\n\n\n\nStreaming REST API\n\n\n\n\n\nAdded the following REST endpoints for streaming jobs:\n\n\n[GET] /rest/streaming/jobs\n: Returns a list of streaming job ids.\n\n\n[POST] /rest/streaming/jobs\n: Creates and submits a streaming job. Users can register for health report and summary report callbacks.\n\n\n[GET] /rest/streaming/jobs/{id}\n: Gets information about a streaming job.\n\n\n[POST] /rest/streaming/jobs/{id}/cancel\n: Cancels a streaming job.\n\n\n\n\n\n\n\n\nWorkflow Manager\n\n\n\n\n\nUpdated to support generic detections.\n\n\nUpdated Redis to store information about streaming jobs.\n\n\nAdded controllers for streaming job REST endpoints.\n\n\nAdded ability to generate health reports and segment summary reports for streaming jobs.\n\n\nImproved code flow between the Workflow Manager and master Node Manager to support streaming jobs.\n\n\nAdded ActiveMQ queues to enable the C++ Streaming Component Executor to send reports and job status to the Workflow Manager.\n\n\n\n\nNode Manager\n\n\n\n\n\nUpdated the master Node Manager and child Node Managers to spawn component services on demand to handle streaming jobs, cancel those jobs, and to monitor the status of those processes.\n\n\nUsing .ini files to represent streaming job properties and enable better communication between a child Node Manager and C++ Streaming Component Executor.\n\n\n\n\nC++ Streaming Component API\n\n\n\n\n\nDeveloped the C++ Streaming Component API with the following functions:\n\n\nMPFStreamingDetectionComponent(const MPFStreamingVideoJob \njob)\n: Constructor that takes a streaming video job.\n\n\nstring GetDetectionType()\n: Returns the type of detection (i.e. 
\"FACE\").\n\n\nvoid BeginSegment(const VideoSegmentInfo \nsegment_info)\n: Indicates the beginning of a new video segment.\n\n\nbool ProcessFrame(const cv::Mat \nframe, int frame_number)\n: Processes a single frame for the current video segment.\n\n\nvector\nMPFVideoTrack\n EndSegment()\n: Indicates the end of the current video segment.\n\n\n\n\n\n\nUpdated the C++ Hello World component to support streaming jobs.\n\n\n\n\nC++ Streaming Component Executor\n\n\n\n\n\nDeveloped the C++ Streaming Component Executor to load a streaming component logic library, read frames from a video stream, and exercise the component logic through the C++ Streaming Component API.\n\n\nWhen the C++ Streaming Component Executor cannot read a frame from the stream, it will sleep for at least 1 millisecond, doubling the amount of sleep time per attempt until it reaches the \nstallTimeout\n value specified when the job was created. While stalled, the job status will be \nSTALLED\n. After the timeout is exceeded, the job will be \nTERMINATED\n.\n\n\nThe C++ Streaming Component Executor supports \nFRAME_INTERVAL\n, as well as rotation, horizontal flipping, and cropping (region of interest) properties. Does not support \nUSE_KEY_FRAMES\n.\n\n\n\n\nInteroperability Package\n\n\n\n\n\nAdded the following Java classes to the interoperability package to simplify third party integration:\n\n\nJsonHealthReportCollection\n: Represents the JSON content of a health report callback. Contains one or more \nJsonHealthReport\n objects.\n\n\nJsonSegmentSummaryReport\n: Represents the JSON content of a summary report callback. Content is similar to the JSON output object used for batch processing.\n\n\n\n\n\n\n\n\nSuBSENSE Component\n\n\n\n\n\nThe SuBSENSE component now supports both batch processing and stream processing.\n\n\nEach video segment will be processed independently of the rest. In other words, tracks will be generated on a segment-by-segment basis and tracks will not carry over between segments.\n\n\nNote that the last frame in the previous segment will be used to determine if there is motion in the first frame of the next segment.\n\n\n\n\nPackaging and Deployment\n\n\n\n\n\nUpdated \ndescriptor.json\n fields to allow components to support batch and/or streaming jobs. Components that use the old \ndescriptor.json\n file format cannot be registered through the web UI. \n\n\nBatch component logic and streaming component logic are compiled into separate libraries.\n\n\nThe mySQL \nstreaming_job_request\n table has been updated with the following fields, which are used to populate the JSON health reports:\n\n\nstatus_detail\n: (Optional) A user-friendly description of the current job status.\n\n\nactivity_frame_id\n: The frame id associated with the last job activity. Activity is defined as the start of a new track for the current segment.\n\n\nactivity_timestamp\n: The timestamp associated with the last job activity.\n\n\n\n\n\n\n\n\nWeb User Interface\n\n\n\n\n\nAdded column names to the table that appears when the user clicks in the Media button associated with a job on the Job Status page. Now descriptive comments are provided when table cells are empty.\n\n\n\n\nBug Fixes\n\n\n\n\n\nUpgraded Tika to 1.17 to resolve an issue with improper indentation in a Python file (rotation.py) that resulted in generating at least one error message per image processed. 
When processing a large number of images, this would generate may error messages, causing the Automatic Bug Reporting Tool daemon (abrtd) process to run at 100% CPU. Once in that state, that process would stay there, essentially wasting on CPU core. This resulted in some of the Jenkins virtual machines we used for testing to become unresponsive.\n\n\n\n\nKnown Issues\n\n\n\n\n\n\n\nOpenCV 3.3.0 \ncv::imread()\n does not properly decode some TIFF images that have EXIF orientation metadata. It can handle images that are flipped horizontally, but not vertically. It also has issues with rotated images. Since most components rely on that function to read image data, those components may silently fail to generate detections for those kinds of images.\n\n\n\n\n\n\nUsing single quotes, apsotrophes, or double quotes in the name of an algorithm, action, task, or pipeline configured on an existing OpenMPF system will result in a failure to perform an OpenMPF upgrade on that system. Specifically, the step where pre-existing custom actions, tasks, and pipelines are carried over to the upgraded version of OpenMPF will fail. Please do not use those special characters while naming those elements. If this has been done already, then those elements should be manually renamed in the XML files prior to an upgrade attempt.\n\n\n\n\n\n\nOpenMPF uses OpenCV, which uses FFmpeg, to connect to video streams. If a proxy and/or firewall prevents the network connection from succeeding, then OpenCV, or the underlying FFmpeg library, will segfault. This causes the C++ Streaming Component Executor process to fail. In turn, the job status will be set to \nERROR\n with a status detail message of \"Unexpected error. See logs for details\". In this case, the logs will not contain any useful information. You can identify a segfault by the following line in the node-manager log:\n\n\n\n\n\n\n2018-02-15 16:01:21,814 INFO [pool-3-thread-4] o.m.m.nms.streaming.StreamingProcess - Process: Component exited with exit code 139\u00a0\n\n\n\n\n\n\nTo determine if FFmpeg can connect to the stream or not, run \nffmpeg -i \nstream-uri\n in a terminal window. Here's an example when it's successful:\n\n\n\n\n[mpf@localhost bin]$ ffmpeg -i rtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov\nffmpeg version n3.3.3-1-ge51e07c Copyright (c) 2000-2017 the FFmpeg developers\n built with gcc 4.8.5 (GCC) 20150623 (Red Hat 4.8.5-4)\n configuration: --prefix=/apps/install --extra-cflags=-I/apps/install/include --extra-ldflags=-L/apps/install/lib --bindir=/apps/install/bin --enable-gpl --enable-nonfree --enable-libtheora --enable-libfreetype --enable-libmp3lame --enable-libvorbis --enable-libx264 --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-version3 --enable-shared --disable-libsoxr --enable-avresample\n libavutil 55. 58.100 / 55. 58.100\n libavcodec 57. 89.100 / 57. 89.100\n libavformat 57. 71.100 / 57. 71.100\n libavdevice 57. 6.100 / 57. 6.100\n libavfilter 6. 82.100 / 6. 82.100\n libavresample 3. 5. 0 / 3. 5. 0\n libswscale 4. 6.100 / 4. 6.100\n libswresample 2. 7.100 / 2. 7.100\n libpostproc 54. 5.100 / 54. 
5.100\n[rtsp @ 0x1924240] UDP timeout, retrying with TCP\nInput #0, rtsp, from 'rtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov':\n Metadata:\n title : BigBuckBunny_115k.mov\n Duration: 00:09:56.48, start: 0.000000, bitrate: N/A\n Stream #0:0: Audio: aac (LC), 12000 Hz, stereo, fltp\n Stream #0:1: Video: h264 (Constrained Baseline), yuv420p(progressive), 240x160, 24 fps, 24 tbr, 90k tbn, 48 tbc\nAt least one output file must be specified\n\n\n\n\n\n\nHere's an example when it's not successful, so there may be network issues:\n\n\n\n\n[mpf@localhost bin]$ ffmpeg -i rtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov\nffmpeg version n3.3.3-1-ge51e07c Copyright (c) 2000-2017 the FFmpeg developers\n built with gcc 4.8.5 (GCC) 20150623 (Red Hat 4.8.5-4)\n configuration: --prefix=/apps/install --extra-cflags=-I/apps/install/include --extra-ldflags=-L/apps/install/lib --bindir=/apps/install/bin --enable-gpl --enable-nonfree --enable-libtheora --enable-libfreetype --enable-libmp3lame --enable-libvorbis --enable-libx264 --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-version3 --enable-shared --disable-libsoxr --enable-avresample\n libavutil 55. 58.100 / 55. 58.100\n libavcodec 57. 89.100 / 57. 89.100\n libavformat 57. 71.100 / 57. 71.100\n libavdevice 57. 6.100 / 57. 6.100\n libavfilter 6. 82.100 / 6. 82.100\n libavresample 3. 5. 0 / 3. 5. 0\n libswscale 4. 6.100 / 4. 6.100\n libswresample 2. 7.100 / 2. 7.100\n libpostproc 54. 5.100 / 54. 5.100\n[tcp @ 0x171c300] Connection to tcp://184.72.239.149:554?timeout=0 failed: Invalid argument\nrtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov: Invalid argument\n\n\n\n\n\n\nTika 1.17 does not come pre-packaged with support for some embedded image formats in PDF files, possibly to avoid patent issues. OpenMPF does not handle embedded images in PDFs, so that's not a problem. Tika will print out the following warnings, which can be safely ignored:\n\n\n\n\nJan 22, 2018 11:02:15 AM org.apache.tika.config.InitializableProblemHandler$3 handleInitializableProblem\nWARNING: JBIG2ImageReader not loaded. jbig2 files will be ignored\nSee https://pdfbox.apache.org/2.0/dependencies.html#jai-image-io\nfor optional dependencies.\nTIFFImageWriter not loaded. tiff files will not be processed\nSee https://pdfbox.apache.org/2.0/dependencies.html#jai-image-io\nfor optional dependencies.\nJ2KImageReader not loaded. JPEG2000 files will not be processed.\nSee https://pdfbox.apache.org/2.0/dependencies.html#jai-image-io\nfor optional dependencies.\n\n\n\n\n\nOpenMPF 1.0.0: October 2017\n\n\nDocumentation\n\n\n\n\n\nUpdated the \nBuild Guide\n with instructions for installing the latest JDK, latest JRE, FFmpeg 3.3.3, new codecs, and OpenCV 3.3.\n\n\nAdded an \nAcknowledgements\n section that provides information on third party dependencies leveraged by the OpenMPF.\n\n\nAdded a \nFeed Forward Guide\n that explains feed forward processing and how to use it.\n\n\nAdded missing requirements checklist content to the \nInstall Guide\n.\n\n\nUpdated the README at the top level of each of the primary repositories to help with user navigation and provide general information.\n\n\n\n\nUpgrade to FFmpeg 3.3.3 and OpenCV 3.3\n\n\n\n\n\nUpdated core framework from FFmpeg 2.6.3 to FFmpeg 3.3.3.\n\n\nAdded the following FFmpeg codecs: x256, VP9, AAC, Opus, Speex.\n\n\nUpdated core framework and components from OpenCV 3.2 to OpenCV 3.3. 
No longer building with opencv_contrib.\n\n\n\n\nFeed Forward Behavior\n\n\n\n\n\nUpdated the workflow manager (WFM) and all video components to optionally perform feed forward processing for batch jobs. This allows tracks to be passed forward from one pipeline stage to the next. Components in the next stage will only process the frames associated with the detections in those tracks. This differs from the default segmenting behavior, which does not preserve detection regions or track information between stages.\n\n\nTo enable this behavior, the optional \nFEED_FORWARD_TYPE\n property must be set to \nFRAME\n, \nSUPERSET_REGION\n, or \nREGION\n. If set to \nFRAME\n then the components in the next stage will process the whole frame region associated with each detection in the track passed forward. If set to \nSUPERSET_REGION\n then the components in the next stage will determine the bounding box that encapsulates all of the detection regions in the track, and only process the pixel data within that superset region. If set to \nREGION\n then the components in the next stage will process the region associated with each detection in the track passed forward, which may vary in size and position from frame to frame.\n\n\nThe optional \nFEED_FORWARD_TOP_CONFIDENCE_COUNT\n property can be set to a number to limit the number of detections passed forward in a track. For example, if set to \"5\", then only the top 5 detections in the track will be passed forward and processed by the next stage. The top detections are defined as those with the highest confidence values, or if the confidence values are the same, those with the lowest frame index.\n\n\nNote that setting the feed forward properties has no effect on the first pipeline stage because there is no prior stage that can pass tracks to it.\n\n\n\n\nCaffe Component\n\n\n\n\n\nUpdated the Caffe component to process images in the BGR color space instead of the RGB color space. This addresses a bug found in OpenCV. Refer to the Bug Fixes section below.\n\n\nAdded support for processing videos.\n\n\nAdded support for an optional \nACTIVATION_LAYER_LIST\n property. For each network layer specified in the list, the \ndetectionProperties\n map in the JSON output object will contain one entry. The value is an encoded string of the JSON representation of an OpenCV matrix of the activation values for that layer. The activation values are obtained after the Caffe network has processed the frame data.\n\n\nAdded support for an optional \nSPECTRAL_HASH_FILE_LIST\n property. For each JSON file specified in the list, the \ndetectionProperties\n map in the JSON output object will contain one entry. The value is a string of 0's and 1's representing the spectral hash calculated using the information in the spectral hash JSON file. The spectral hash is calculated using activation values after the Caffe network has processed the frame data.\n\n\nAdded a pipeline to showcase the above two features for the GoogLeNet Caffe model.\n\n\nRemoved the \nTRANSPOSE\n property from the Caffe component since it was not necessary.\n\n\nAdded red, green, and blue mean subtraction values to the GoogLeNet pipeline.\n\n\n\n\nUse Key Frames\n\n\n\n\n\nAdded support for an optional \nUSE_KEY_FRAMES\n property to each video component. When true the component will only look at key frames (I-frames) from the input video. Can be used in conjunction with \nFRAME_INTERVAL\n. 
For example, when \nUSE_KEY_FRAMES\n is true, and \nFRAME_INTERVAL\n is set to \"2\", then every other key frame will be processed.\n\n\n\n\nMPFVideoCapture and MPFImageReader Tools\n\n\n\n\n\nUpdated the MPFVideoCapture and MPFImageReader tools to handle feed forward properties.\n\n\nUpdated the MPFVideoCapture tool to handle \nFRAME_INTERVAL\n and \nUSE_KEY_FRAMES\n properties.\n\n\nUpdated all existing components to leverage these tools as much as possible.\n\n\nWe encourage component developers to use these tools to automatically take care of common frame grabbing and frame manipulation behaviors, and not to reinvent the wheel.\n\n\n\n\nDead Letter Queue\n\n\n\n\n\nIf for some reason a sub-job request that should have gone to a component ends up on the ActiveMQ Dead Letter Queue (DLQ), then the WFM will now process that failed request so that the job can complete. The ActiveMQ management page will now show that \nActiveMQ.DLQ\n has 1 consumer. It will also show unconsumed messages in \nMPF.PROCESSED_DLQ_MESSAGES\n. Those are left for auditing purposes. The \"Message Detail\" for these shows the string representation of the original job request protobuf message.\n\n\n\n\nUpgrade Path\n\n\n\n\n\nRemoved the Release 0.8 to Release 0.9 upgrade path in the deployment scripts.\n\n\nAdded support for a Release 0.9 to Release 1.0.0 upgrade path, and a Release 0.10.0 to Release 1.0.0 upgrade path.\n\n\n\n\nMarkup\n\n\n\n\n\nBounding boxes are now drawn along the interpolated path between detection regions whenever there are one or more frames in a track which do not have detections associated with them.\n\n\nFor each track, the color of the bounding box is now a randomly selected hue in the HSV color space. The colors are evenly distributed using the golden ratio.\n\n\n\n\nBug Fixes\n\n\n\n\n\nFixed a \nbug in OpenCV\n where the Caffe example code was processing images in the RGB color space instead of the BGR color space. Updated the OpenMPF Caffe component accordingly.\n\n\nFixed a bug in the OpenCV person detection component that caused bounding boxes to be too large for detections near the edge of a frame.\n\n\nResubmitting jobs now properly carries over configured job properties.\n\n\nFixed a bug in the build order of the OpenMPF project so that test modules that the WFM depends on are built before the WFM itself.\n\n\nThe Markup component draws bounding boxes between detections when a \nFRAME_INTERVAL\n is specified. This is so that the bounding box in the marked-up video appears in every frame. Fixed a bug where the bounding boxes drawn on non-detection frames appeared to stand still rather than move along the interpolated path between detection regions.\n\n\nFixed a bug on the OALPR license plate detection component where it was not properly handling the \nSEARCH_REGION_*\n properties.\n\n\nSupport for the \nMIN_GAP_BETWEEN_SEGMENTS\n property was not implemented properly. When the gap between two segments is less than this property value then the segments should be merged; otherwise, the segments should remain separate. In some cases, the exact opposite was happening. This bug has been fixed.\n\n\n\n\nKnown Issues\n\n\n\n\n\nBecause of the number of additional ActiveMQ messages involved, enabling feed forward for low resolution video may take longer than the non-feed-forward behavior.\n\n\n\n\nOpenMPF 0.10.0: July 2017\n\n\n\n\nWARNING:\n There is no longer a \nDEFAULT CAFFE ACTION\n, \nDEFAULT CAFFE TASK\n, or \nDEFAULT CAFFE PIPELINE\n. 
There is now a \nCAFFE GOOGLENET DETECTION PIPELINE\n and \nCAFFE YAHOO NSFW DETECTION PIPELINE\n, which each have a respective action and task.\n\n\nNOTE:\n MPFImageReader has been re-enabled in this version of OpenMPF since we upgraded to OpenCV 3.2, which addressed the known issues with \nimread()\n, auto-orientation, and jpeg files in OpenCV 3.1.\n\n\n\n\nDocumentation\n\n\n\n\n\nAdded a \nContributor Guide\n that provides guidelines for contributing to the OpenMPF codebase.\n\n\nUpdated the \nJava Batch Component API\n with links to the example Java components.\n\n\nUpdated the \nBuild Guide\n with instructions for OpenCV 3.2.\n\n\n\n\nUpgrade to OpenCV 3.2\n\n\n\n\n\nUpdated core framework and components from OpenCV 3.1 to OpenCV 3.2.\n\n\n\n\nSupport for Animated gifs\n\n\n\n\n\nAll gifs are now treated as videos. Each gif will be handled as an MPFVideoJob.\n\n\nUnanimated gifs are treated as 1-frame videos.\n\n\nThe WFM Media Inspector now populates the \nmedia_properties\n map with a \nFRAME_COUNT\n entry (in addition to the \nDURATION\n and \nFPS\n entries).\n\n\n\n\nCaffe Component\n\n\n\n\n\nAdded support for the Yahoo Not Suitable for Work (NSFW) Caffe model for explicit material detection.\n\n\nUpdated the Caffe component to support the OpenCV 3.2 Deep Neural Network (DNN) module.\n\n\n\n\nFuture Support for Streaming Video\n\n\n\n\n\nNOTE:\n At this time, OpenMPF does not support streaming video. This section details what's being / has been done so far to prepare for that feature.\n\n\n\n\n\n\nThe codebase is being updated / refactored to support both the current \"batch\" job functionality and new \"streaming\" job functionality.\n\n\nbatch job: complete video files are written to disk before they are processed\n\n\nstreaming job: video frames are read from a streaming endpoint (such as RTSP) and processed in near real time\n\n\n\n\n\n\nThe REST API is being updated with endpoints for streaming jobs:\n\n\n[POST] /rest/streaming/jobs\n: Creates and submits a streaming job\n\n\n[POST] /rest/streaming/jobs/{id}/cancel\n: Cancels a streaming job\n\n\n[GET] /rest/streaming/jobs/{id}\n: Gets information about a streaming job\n\n\n\n\n\n\nThe Redis and mySQL databases are being updated to support streaming video jobs.\n\n\nA batch job will never have the same id as a streaming job. The integer ids will always be unique.\n\n\n\n\n\n\n\n\nBug Fixes\n\n\n\n\n\nThe MOG and SuBSENSE component services could segfault and terminate if the \nUSE_MOTION_TRACKING\n property was set to \u201c1\u201d and a detection was found close to the edge of the frame. Specifically, this would only happen if the video had a width and/or height dimension that was not an exact power of two.\n\n\nThe reason was because the code downsamples each frame by a power of two and rounds the value of the width and height up to the nearest integer. Later on when upscaling detection rectangles back to a size that\u2019s relative to the original image, the resized rectangle sometimes extended beyond the bounds of the original frame.\n\n\n\n\n\n\n\n\nKnown Issues\n\n\n\n\n\nIf a job is submitted through the REST API, and a user to logged into the web UI and looking at the job status page, the WFM may generate \"Error retrieving the SingleJobInfo model for the job with id\" messages.\n\n\nThis is because the job status is only added to the HTTP session object if the job is submitted through the web UI. 
When the UI queries the job status it inspects this object.\n\n\nThis message does not appear if job status is obtained using the \n[GET] /rest/jobs/{id}\n endpoint.\n\n\n\n\n\n\nThe \n[GET] /rest/jobs/stats\n endpoint aggregates information about all of the jobs ever run on the system. If thousands of jobs have been run, this call could take minutes to complete. The code should be improved to execute a direct mySQL query.\n\n\n\n\nOpenMPF 0.9.0: April 2017\n\n\n\n\nWARNING:\n MPFImageReader has been disabled in this version of OpenMPF. Component developers should use MPFVideoCapture instead. This affects components developed against previous versions of OpenMPF and components developed against this version of OpenMPF. Please refer to the Known Issues section for more information.\n\n\nWARNING:\n The OALPR Text Detection Component has been renamed to OALPR \nLicense Plate\n Text Detection Component. This affects the name of the component package and the name of the actions, tasks, and pipelines. When upgrading from R0.8 to R0.9, if the old OALPR Text Detection Component is installed in R0.8 then you will be prompted to install it again at the end of the upgrade path script. We recommend declining this prompt because the old component will conflict with the new component.\n\n\nWARNING:\n Action, task, and pipeline names that started with \nMOTION DETECTION PREPROCESSOR\n have been renamed \nMOG MOTION DETECTION PREPROCESSOR\n. Similarly, \nWITH MOTION PREPROCESSOR\n has changed to \nWITH MOG MOTION PREPROCESSOR\n.\n\n\n\n\nDocumentation\n\n\n\n\n\nUpdated the \nREST API\n to reflect job properties, algorithm-specific properties, and media-specific properties.\n\n\nStreamlined the \nC++ Batch Component API\n document for clarity and simplicity.\n\n\nCompleted the \nJava Batch Component API\n document.\n\n\nUpdated the \nAdmin Guide\n and \nUser Guide\n to reflect web UI changes.\n\n\nUpdated the \nBuild Guide\n with instructions for GitHub repositories.\n\n\n\n\nWorkflow Manager\n\n\n\n\n\nAdded support for job properties, which will override pre-defined pipeline properties.\n\n\nAdded support for algorithm-specific properties, which will apply to a single stage of the pipeline and will override job properties and pre-defined pipeline properties.\n\n\nAdded support for media-specific properties, which will apply to a single piece of media and will override job properties, algorithm-specific properties, and pre-defined pipeline properties.\n\n\nComponents can now be automatically registered and installed when the web application starts in Tomcat.
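To show how these three property layers can be combined, below is a rough sketch of a job request that sets a job property, an algorithm-specific property, and a media-specific property, using the request fields described in the REST API section below. The pipeline name, media URI, host, and credentials are placeholders for this example, not values defined by this release.

```python
import requests

job_request = {
    # Placeholder pipeline name; any registered pipeline could be used here.
    "pipelineName": "OCV FACE DETECTION PIPELINE",
    # Job properties apply to every stage of the pipeline unless overridden below.
    "jobProperties": {"FRAME_INTERVAL": "1"},
    # Algorithm-specific properties apply to a single stage (FACECV in this example).
    "algorithmProperties": {"FACECV": {"FRAME_INTERVAL": "2"}},
    # Media-specific properties apply to a single piece of media.
    "media": [
        {
            "mediaUri": "file:///opt/mpf/share/remote-media/example.mp4",
            "mediaProperties": {"ROTATION": "90"},
        }
    ],
}

# Placeholder Workflow Manager host and credentials.
response = requests.post(
    "http://localhost:8080/workflow-manager/rest/jobs",
    json=job_request,
    auth=("mpf", "mpf123"),
)
print(response.json())
```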
Web User Interface\n\n\n\n\n\nThe \"Close All\" button on pop-up notifications now dismisses all notifications from the queue, not just the visible ones.\n\n\nJob completion notifications now only appear for jobs created during the current login session instead of all jobs.\n\n\nThe \nROTATION\n, \nHORIZONTAL_FLIP\n, and \nSEARCH_REGION_*\n properties can be set using the web interface when creating a job. Once files are selected for a job, these properties can be set individually or by groups of files.\n\n\nThe Node and Process Status page has been merged into the Node Configuration page for simplicity and ease of use.\n\n\nThe Media Markup results page has been merged into the Job Status page for simplicity and ease of use.\n\n\nThe File Manager UI has been improved to handle large numbers of files and symbolic links.\n\n\nThe side navigation menu is now replaced by a top navigation bar.\n\n\n\n\nREST API\n\n\n\n\n\nAdded an optional jobProperties object to the \n/rest/jobs/\n request which contains String key-value pairs which override the pipeline's pre-configured job properties.\n\n\nAdded an optional algorithmProperties object to the \n/rest/jobs/\n request which can be used to configure properties for specific algorithms in the pipeline. These properties override the pipeline's pre-configured job properties. They also override the values in the jobProperties object.\n\n\nUpdated the \n/rest/jobs/\n request to add more detail to media, replacing a list of mediaUri Strings with a list of media objects, each of which contains a mediaUri and an optional mediaProperties map. The mediaProperties map can be used to configure properties for the specific piece of media. These properties override the pipeline's pre-configured job properties, values in the jobProperties object, and values in the algorithmProperties object.\n\n\nStreamlined the actions, tasks, and pipelines endpoints that are used by the web UI.\n\n\n\n\nFlipping, Rotation, and Region of Interest\n\n\n\n\n\nThe \nROTATION\n, \nHORIZONTAL_FLIP\n, and \nSEARCH_REGION_*\n properties will no longer appear in the detectionProperties map in the JSON detection output object. When applied to an algorithm these properties now appear in the pipeline.stages.actions.properties element. When applied to a piece of media these properties will now appear in the media.mediaProperties element.\n\n\nThe OpenMPF now supports multiple regions of interest in a single media file. Each region will produce tracks separately, and the tracks for each region will be listed in the JSON output as if from a separate media file.\n\n\n\n\nComponent API\n\n\n\n\n\nJava Batch Component API is functionally complete for third-party development, with the exception of Component Adapter and frame transformation utilities classes.\n\n\nRe-architected the Java Batch Component API to use a more traditional Java method structure of returning track lists and throwing exceptions (rather than modifying input track lists and returning statuses), and encapsulating job properties into MPFJob objects:\n\n\nList<MPFVideoTrack> getDetections(MPFVideoJob job) throws MPFComponentDetectionError\n\n\nList<MPFAudioTrack> getDetections(MPFAudioJob job) throws MPFComponentDetectionError\n\n\nList<MPFImageLocation> getDetections(MPFImageJob job) throws MPFComponentDetectionError\n\n\n\n\n\n\nCreated examples for the Java Batch Component API.\n\n\nReorganized the Java and C++ component source code to enable component development without the OpenMPF core, which will simplify component development and streamline the code base.\n\n\n\n\nJSON Output Objects\n\n\n\n\n\nThe JSON output object for the job now contains a jobProperties map which contains all properties defined for the job in the job request. 
<h2>Flipping, Rotation, and Region of Interest</h2>

- The `ROTATION`, `HORIZONTAL_FLIP`, and `SEARCH_REGION_*` properties no longer appear in the `detectionProperties` map in the JSON detection output object. When applied to an algorithm, these properties now appear in the `pipeline.stages.actions.properties` element. When applied to a piece of media, these properties now appear in the `media.mediaProperties` element.
- OpenMPF now supports multiple regions of interest in a single media file. Each region produces tracks separately, and the tracks for each region are listed in the JSON output as if from a separate media file.

<h2>Component API</h2>

- The Java Batch Component API is functionally complete for third-party development, with the exception of the Component Adapter and frame transformation utility classes.
- Re-architected the Java Batch Component API to use a more traditional Java method structure of returning track lists and throwing exceptions (rather than modifying input track lists and returning statuses), and encapsulating job properties into MPFJob objects:
    - `List<MPFVideoTrack> getDetections(MPFVideoJob job) throws MPFComponentDetectionError`
    - `List<MPFAudioTrack> getDetections(MPFAudioJob job) throws MPFComponentDetectionError`
    - `List<MPFImageLocation> getDetections(MPFImageJob job) throws MPFComponentDetectionError`
- Created examples for the Java Batch Component API.
- Reorganized the Java and C++ component source code to enable component development without the OpenMPF core, which will simplify component development and streamline the code base.

<h2>JSON Output Objects</h2>

- The JSON output object for the job now contains a `jobProperties` map which contains all properties defined for the job in the job request. For example, if the job request specifies a `CONFIDENCE_THRESHOLD` of 5, then the `jobProperties` map in the output will also list a `CONFIDENCE_THRESHOLD` of 5.
- The JSON output object for the job now contains an `algorithmProperties` element which contains all algorithm-specific properties defined for the job in the job request. For example, if the job request specifies a `FRAME_INTERVAL` of 2 for FACECV, then the `algorithmProperties` element in the output will contain an entry for "FACECV" and that entry will list a `FRAME_INTERVAL` of 2.
- Each JSON media output object now contains a `mediaProperties` map which contains all media-specific properties defined by the job request. For example, if the job request specifies a `ROTATION` of 90 degrees for a single piece of media, then the `mediaProperties` map for that piece of media will list a `ROTATION` of 90.
- The content of JSON output objects is now organized by detection type (e.g. MOTION, FACE, PERSON, TEXT, etc.) rather than action type.
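Taken together, the corresponding portion of the JSON output object would look roughly like the following sketch. Other output fields are omitted, and the values are carried over from the examples above:

```
{
  "jobProperties": { "CONFIDENCE_THRESHOLD": "5" },
  "algorithmProperties": {
    "FACECV": { "FRAME_INTERVAL": "2" }
  },
  "media": [
    {
      "mediaProperties": { "ROTATION": "90" }
    }
  ]
}
```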
<h2>Caffe Component</h2>

- Added support for flip, rotation, and cropping to regions of interest.
- Added support for returning multiple classifications per detection based on user-defined settings. The classification list is in order of decreasing confidence value.

<h2>New Pipelines</h2>

- New SuBSENSE motion preprocessor pipelines have been added to components that perform detection on video.

<h2>Packaging and Deployment</h2>

- `Actions.xml`, `Algorithms.xml`, `nodeManagerConfig.xml`, `nodeServicesPalette.json`, `Pipelines.xml`, and `Tasks.xml` are no longer stored within the Workflow Manager WAR file. They are now stored under `$MPF_HOME/data`. This makes it easier to upgrade the Workflow Manager and easier for users to access these files.
- Each component can now optionally be installed and registered during deployment. Components not registered are set to the `UPLOADED` state. They can then be removed or registered through the Component Registration page.
- Java components are now packaged as tar.gz files instead of RPMs, bringing them into alignment with C++ components.
- OpenMPF R0.9 can be installed over OpenMPF R0.8. The deployment scripts will determine that an upgrade should take place.
    - After the upgrade, user-defined actions, tasks, and pipelines will have "CUSTOM" prepended to their names.
    - The job_request table in the mySQL database will have a new "output_object_version" column. This column will contain "1.0" for jobs created using OpenMPF R0.8 and "2.0" for jobs created using OpenMPF R0.9. The JSON output object schema has changed between these versions.
- Reorganized source code repositories so that component SDKs can be downloaded separately from the OpenMPF core and so that components are grouped by license and maturity. Build scripts have been created to streamline and simplify the build process across the various repositories.

<h2>Upgrade to OpenCV 3.1</h2>

- The OpenMPF software has been ported to use OpenCV 3.1, including all of the C++ detection components and the markup component. For the OpenALPR license plate detection component, the versions of the openalpr, tesseract, and leptonica libraries were also upgraded to openalpr-2.3.0, tesseract-3.0.4, and leptonica-1.7.2. For the SuBSENSE motion component, the version of the SuBSENSE library was upgraded to use the code found at this location: https://bitbucket.org/pierre_luc_st_charles/subsense/src.

<h2>Bug Fixes</h2>

- MOG motion detection always detected motion in frame 0 of a video. Because motion can only be detected between two adjacent frames, frame 1 is now the first frame in which motion can be detected.
- MOG motion detection never detected motion in the first frame of a video segment (other than the first video segment, because of the frame 0 bug described above). Now, motion is detected using the first frame before the start of a segment, rather than the first frame of the segment.
- The above bugs were also present in SuBSENSE motion detection and have been fixed.
- SuBSENSE motion detection generated tracks where the frame numbers were off by one. Corrected the frame index logic.
- Very large video files caused an out of memory error in the system during Workflow Manager media inspection.
- A job would fail when processing images with an invalid metadata tag for the camera flash setting.
- Users were permitted to select invalid file types using the File Manager UI.

<h2>Known Issues</h2>

- **MPFImageReader does not work reliably with the current release version of OpenCV 3.1**: In OpenCV 3.1, new functionality was introduced to interpret EXIF information when reading jpeg files. There are two issues with this new functionality that impact our ability to use the OpenCV `imread()` function with MPFImageReader:
    - First, because of a bug in the OpenCV code, reading a jpeg file that contains EXIF information could cause it to hang. (See https://github.com/opencv/opencv/issues/6665.)
    - Second, it is not possible to tell the `imread()` function to ignore the EXIF data, so the image it returns is automatically rotated. (See https://github.com/opencv/opencv/issues/6348.) This results in the MPFImageReader applying a second rotation to the image due to the EXIF information.
- To address these issues, we developed the following workarounds:
    - Created a version of MPFVideoCapture that works with an MPFImageJob. The new MPFVideoCapture can pull frames from both video files and images. MPFVideoCapture leverages cv::VideoCapture, which does not have the two issues described above.
    - Disabled the use of MPFImageReader to prevent new users from trying to develop code leveraging this previous functionality.

# OpenMPF 5.1.0: November 2020

<h2>Media Inspection Improvements</h2>

- The Workflow Manager will now handle video files that don't have a video stream as an `AUDIO` type, and handle video files that don't have a video or audio stream as an `UNKNOWN` type. The JSON output object contains a new `media.mediaType` field that will be set to `VIDEO`, `AUDIO`, `IMAGE`, or `UNKNOWN`.
- The Workflow Manager now configures Tika with custom MIME type support. Currently, this enables the detection of `video/vnd.dlna.mpeg-tts` and `image/jxr` MIME types.
- If the Workflow Manager cannot use Tika to determine the media MIME type, then it will fall back to using the Linux `file` command with a custom magicfile.
- OpenMPF now supports Apple-optimized PNGs and HEIC images. Refer to the Bug Fixes section below.

<h2>EAST Text Region Detection Component Improvements</h2>

- The `TEMPORARY_PADDING` property has been separated into `TEMPORARY_PADDING_X` and `TEMPORARY_PADDING_Y` so that X and Y padding can be configured independently.
- The `MERGE_MIN_OVERLAP` property has been renamed to `MERGE_OVERLAP_THRESHOLD` so that setting it to a value of 0 will merge all regions that touch, regardless of how small the overlap is.
- Refer to the README for details.

<h2>MPFVideoCapture and MPFImageReader Tool Improvements</h2>

- These tools now support a `ROTATION_FILL_COLOR` property for setting the fill color for pixels near the corners and edges of frames when performing non-orthogonal rotations. Previously, the color was hardcoded to `BLACK`. That is still the default setting for most components. Now the color can be set to `WHITE`, which is the default setting for the Tesseract component.
- These tools now support a `ROTATION_THRESHOLD` property for adjusting the threshold at which the frame transformer performs rotation. Previously, the value was hardcoded to 0.1 degrees. That is still the default value. Rotation is not performed on any `ROTATION` value less than that threshold. The motivation is that some algorithms detect small rotations (for example, on structured text) when there is no rotation. In such cases, rotating the frame results in fewer detections.
- OpenMPF now uses FFmpeg when counting video frames. Refer to the Bug Fixes section below.
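As a sketch, both of the properties above could be overridden for a given job through its `jobProperties`; the values shown here are illustrative, not the defaults:

```
"jobProperties": {
  "ROTATION_FILL_COLOR": "WHITE",
  "ROTATION_THRESHOLD": "1.0"
}
```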
<h2>Azure Cognitive Services (ACS) Form Detection Component</h2>

- This new component utilizes the Azure Cognitive Services Form Detection REST endpoint to extract formatted text from documents (PDFs) and images. Refer to the README for details.
- This component is capable of performing detections using a specified ACS endpoint URL. For example, different endpoints support receipt detection, business card detection, layout analysis, and support for custom models trained with or without labeled data.
- This component may output the following detection properties depending on the endpoint, model, and media being processed: `TEXT`, `TABLE_CSV_OUTPUT`, `KEY_VALUE_PAIRS_JSON`, and `DOCUMENT_JSON_FIELDS`.

<h2>Keyword Tagging Component</h2>

- This new component performs the same keyword tagging behavior that was previously part of the Tesseract component, but does so on feed-forward tracks that generate detections with `TEXT` and `TRANSCRIPT` properties. Refer to the README for details.
- In addition to the Tesseract component, keyword tagging behavior has been removed from the Tika Text component and ACS OCR component.
- Example pipelines have been added to the following components which make use of a final Keyword Tagging component stage:
    - Tesseract
    - Tika Text
    - ACS OCR
    - Sphinx
    - ACS Speech

<h2>Optionally Skip Media Inspection</h2>

- The Workflow Manager will skip media inspection if all of the required media metadata is provided in the job request. The `MEDIA_HASH` and `MIME_TYPE` fields are always required. Depending on the media data type, other fields may be required or optional:
    - Images
        - Required: `FRAME_WIDTH`, `FRAME_HEIGHT`
        - Optional: `HORIZONTAL_FLIP`, `ROTATION`
    - Videos
        - Required: `FRAME_WIDTH`, `FRAME_HEIGHT`, `FRAME_COUNT`, `FPS`, `DURATION`
        - Optional: `HORIZONTAL_FLIP`, `ROTATION`
    - Audio files
        - Required: `DURATION`

<h2>Updates</h2>

- Update OpenMPF Python SDK exception handling for Python 3. Now, instead of raising an `EnvironmentError`, which has been deprecated in Python 3, the SDK will raise an `mpf.DetectionError` or allow the underlying exception to be thrown.

<h2>Bug Fixes</h2>

- [#1028] OpenMPF can now properly handle Apple-optimized PNGs, which have a non-standard data chunk named CgBI before the IHDR chunk. The Workflow Manager uses pngdefry to convert the image into a standard PNG for processing. Before this fix, Tika would throw an error when trying to determine the MIME type of the Apple-optimized PNG.
- [#1130] OpenMPF can now properly handle HEIC images. The Workflow Manager uses libheif to convert the image into a standard PNG for processing. Before this fix, the HEIC image was sometimes falsely identified as a video and the Workflow Manager would fail to count the number of frames.
- [#1171] The MIME type in the JSON output object is no longer null when there is a frame counting exception.
- [#1192] When processing videos, the frame count is now obtained from both OpenCV and FFmpeg. The lower of the two is used. If they don't match, a `FRAME_COUNT` warning is generated. Before this fix, on some videos OpenCV would return frame counts that were magnitudes higher than the number of frames that could actually be read. This resulted in failing to process many video segments with a `BAD_FRAME_SIZE` error.

# OpenMPF 5.0.9: October 2020

<h2>Bug Fixes</h2>

- [#1200] The MPFVideoCapture and MPFImageReader tools now properly handle cropping to frame regions when the region coordinates fall outside of the frame boundary. There was a bug that would result in an OpenCV error. Note that the bug only occurred when cropping was not performed with rotation or flipping.

# OpenMPF 5.0.8: October 2020

<h2>Updates</h2>

- The Tesseract component now supports a `TESSDATA_MODELS_SUBDIRECTORY` property. The component will look for tessdata files in `MODELS_DIR_PATH/TESSDATA_MODELS_SUBDIRECTORY`. This allows users to easily switch between `tessdata`, `tessdata_best`, and `tessdata_fast` subdirectories.
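For instance, a sketch of how the two values combine; the directory shown is illustrative only:

```
MODELS_DIR_PATH=/opt/models
TESSDATA_MODELS_SUBDIRECTORY=tessdata_best
# the component would then look for *.traineddata files in /opt/models/tessdata_best
```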
<h2>Bug Fixes</h2>

- [#1199] Added missing `synchronized` to InProgressBatchJobsService, which was resulting in some jobs staying `IN_PROGRESS` indefinitely.

# OpenMPF 5.0.7: September 2020

<h2>TensorRT Inference Server (TRTIS) Object Detection Component</h2>

- This new component detects objects in images and videos by making use of an NVIDIA TensorRT Inference Server (TRTIS), and calculates features that can later be used by other systems to recognize the same object in other media. We provide support for running the server as a separate service during a Docker deployment, but an external server instance can be used instead.
- By default, the ip_irv2_coco model is supported and will optionally classify detected objects using COCO labels. Additionally, features can be generated for whole frames, automatically-detected object regions, and user-specified regions.
- Refer to the README.

# OpenMPF 5.0.6: August 2020

<h2>Enable OcvDnnDetection to Annotate Feed-forward Detections</h2>

- The OcvDnnDetection component can now be configured to operate only on certain feed-forward detections and annotate them with supplementary information. For example, the following pipeline can be configured to generate detections that have both `CLASSIFICATION` and `COLOR` detection properties:

```
DarknetDetection (person + vehicle) --> OcvDnnDetection (vehicle color)
```

- For example:

```
  "detectionProperties": {
    "CLASSIFICATION": "car",
    "CLASSIFICATION CONFIDENCE LIST": "0.397336",
    "CLASSIFICATION LIST": "car",
    "COLOR": "blue",
    "COLOR CONFIDENCE LIST": "0.93507; 0.055744",
    "COLOR LIST": "blue; gray"
  }
```

- The OcvDnnDetection component now supports the following properties:
    - `CLASSIFICATION_TYPE`: Set this value to change the `CLASSIFICATION*` part of each output property name to something else. For example, setting it to `COLOR` will generate `COLOR`, `COLOR LIST`, and `COLOR CONFIDENCE LIST`. When handling feed-forward detections, the pre-existing `CLASSIFICATION*` properties will be carried over and the `COLOR*` properties will be added to the detection.
    - `FEED_FORWARD_WHITELIST_FILE`: When `FEED_FORWARD_TYPE` is provided and not set to `NONE`, only feed-forward detections with class names contained in the specified file will be processed. For example, a file with only "car" in it will result in performing the exclude behavior (below) for all feed-forward detections that do not have a `CLASSIFICATION` of "car".
    - `FEED_FORWARD_EXCLUDE_BEHAVIOR`: Specifies what to do when excluding detections not specified in the `FEED_FORWARD_WHITELIST_FILE`. Acceptable values are:
        - `PASS_THROUGH`: Return the excluded detections, without modification, along with any annotated detections.
        - `DROP`: Don't return the excluded detections. Only return annotated detections.
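For instance, a minimal sketch of the whitelist behavior described above; the file contents, path, and property values are illustrative, and a one-class-name-per-line file format is assumed:

```
# vehicle-whitelist.txt (hypothetical)
car
truck
```

The annotating action could then be configured with properties along these lines:

```
  "CLASSIFICATION_TYPE": "COLOR",
  "FEED_FORWARD_TYPE": "REGION",
  "FEED_FORWARD_WHITELIST_FILE": "/path/to/vehicle-whitelist.txt",
  "FEED_FORWARD_EXCLUDE_BEHAVIOR": "PASS_THROUGH"
```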
<h2>Updates</h2>

- Make interop package work with Java 8 to better support external job producers and consumers.

# OpenMPF 5.0.5: August 2020

<h2>Updates</h2>

- Configure Camel not to auto-acknowledge messages. Users can now see the number of pending messages in the ActiveMQ management console for queues consumed by the Workflow Manager.
- Improve Tesseract OSD fallback behavior. This prevents selecting the OSD rotation from the fallback pass without also selecting the OSD script from the fallback pass.

# OpenMPF 5.0.4: August 2020

<h2>Updates</h2>

- Retry job callbacks when they fail. The Workflow Manager now supports the `http.callback.timeout.ms` and `http.callback.retries` system properties.
- Drop "duplicate paged in from cursor" DLQ messages.

# OpenMPF 5.0.3: July 2020

<h2>Updates</h2>

- Update ActiveMQ to 5.16.0.

# OpenMPF 5.0.2: July 2020

<h2>Updates</h2>

- Disable video segmentation for ACS Speech Detection to prevent issues when generating speaker IDs.

# OpenMPF 5.0.1: July 2020

<h2>Updates</h2>

- Updated the Tesseract component with a `MAX_PIXELS` setting to prevent processing large images.

# OpenMPF 5.0.0: June 2020

<h2>Documentation</h2>

- Updated the openmpf-docker repo README and SWARM guides to describe the new build process, which now includes automatically copying the openmpf repo source code into the openmpf-build image instead of using various bind mounts, and building all of the component base builder and executor images.
- Updated the openmpf-docker repo README with the following sections:
    - How to Use Kibana for Log Viewing and Aggregation
    - How to Restrict Media Types that a Component Can Process
    - How to Import Root Certificates for Additional Certificate Authorities
- Updated the CONTRIBUTING guide for Docker deployment with information on the new build process and component base builder and executor images.
- Updated the Install Guide with a pointer to the "Quick Start" section on DockerHub.
- Updated the REST API with the new endpoints for getting, deleting, and creating actions, tasks, and pipelines, as well as a change to the `[GET] /rest/info` endpoint.
- Updated the C++ Batch Component API to describe changes to the `GetDetections()` calls, which now return a collection of detections or tracks instead of an error code, and to describe improvements to exception handling.
- Updated the C++ Batch Component API, Python Batch Component API, and Java Batch Component API with `MIME_TYPE`, `FRAME_WIDTH`, and `FRAME_HEIGHT` media properties.
- Updated the Python Batch Component API with information on Python 3 and the simplification of using a `dict` for some of the data members.

<h2>JSON Output Object</h2>

- Renamed `stages` to `tasks` for clarity and consistency with the rest of the code.
- The `media` element no longer contains a `message` field.
- Each `detectionProcessingError` element now contains a `code` field.
- Errors and warnings are now grouped by `mediaId` and summarized using a `details` element that contains a `source`, `code`, and `message` field. Refer to this comment for an example of the JSON structure. Note that errors and warnings generated by the Workflow Manager do not have a `mediaId`.
- When an error or warning occurs in multiple frames of a video for a single piece of media, it will be represented in one `details` element and the `message` will list the frame ranges.

<h2>Interoperability Package</h2>

- Renamed `JsonStage.java` to `JsonTask.java`.
- Removed `JsonJobRequest.java`.
- Modified `JsonDetectionProcessingError.java` by removing the `startOffset` and `stopOffset` fields and adding the following new fields: `startOffsetFrame`, `stopOffsetFrame`, `startOffsetTime`, `stopOffsetTime`, and `code`.
- Updated `JsonMediaOutputObject.java` by removing the `message` field.
- Added `JsonMediaIssue.java` and `JsonIssueDetails.java`.

<h2>Persistent Database</h2>

- The `input_object` column in the `job_request` table has been renamed to `job` and the content now contains a serialized form of `BatchJob.java` instead of `JsonJobRequest.java`.

<h2>C++ Batch Component API</h2>

- The `GetDetections()` calls now return a collection instead of an error code:
    - `std::vector<MPFImageLocation> GetDetections(const MPFImageJob &job)`
    - `std::vector<MPFVideoTrack> GetDetections(const MPFVideoJob &job)`
    - `std::vector<MPFAudioTrack> GetDetections(const MPFAudioJob &job)`
    - `std::vector<MPFGenericTrack> GetDetections(const MPFGenericJob &job)`
- `MPFDetectionException` can now be constructed with a `what` parameter representing a descriptive error message:
    - `MPFDetectionException(MPFDetectionError error_code, const std::string &what = "")`
    - `MPFDetectionException(const std::string &what)`

<h2>Python Batch Component API</h2>

- Simplified the `detection_properties` and `frame_locations` data members to use a Python `dict` instead of a custom data type.

<h2>Full Docker Conversion</h2>

- Each component is now encapsulated in its own Docker image which self-registers with the Workflow Manager at runtime. This deconflicts component dependencies, and allows for greater flexibility when deciding which components to deploy at runtime.
- The Node Manager image has been removed. For Docker deployments, component services should be managed using Docker tools external to OpenMPF.
- In Docker deployments, streaming job REST endpoints are disabled, the Nodes web page is no longer available, component tar.gz packages cannot be registered through the Component Registration web page, and the `mpf` command line script can now only be run on the Workflow Manager container to modify user settings. The preexisting features are now reserved for non-Docker deployments and development environments.
- The OpenMPF Docker stack can optionally be deployed with Kibana (which depends on Elasticsearch and Filebeat) for viewing log files. Refer to the openmpf-docker README.

<h2>Docker Component Base Images</h2>

- A base builder image and executor image are provided for C++ (README), Python (README), and Java (README) component development. Component developers can also refer to the Dockerfile in the source code for each component as a reference for how to make use of the base images.

<h2>Restrict Media Types that a Component Can Process</h2>

- Each component service now supports an optional `RESTRICT_MEDIA_TYPES` Docker environment variable that specifies the types of media that service will process.
For example, \nRESTRICT_MEDIA_TYPES: VIDEO,IMAGE\n will process both videos and images, while \nRESTRICT_MEDIA_TYPES: IMAGE\n will only process images. If not specified, the service will process all of the media types it natively supports. For example, this feature can be used to ensure that some services are always available to process images while others are processing long videos.\n\n\n\n\nImport Additional Root Certificates into the Workflow Manager\n\n\n\n\n\nAdditional root certificates can be imported into the Workflow Manager at runtime by adding an entry for \nMPF_CA_CERTS\n to the workflow-manager service's environment variables in \ndocker-compose.core.yml\n. \nMPF_CA_CERTS\n must contain a colon-delimited list of absolute file paths. Of note, a root certificate may be used to trust the identity of a remote object storage server.\n\n\n\n\nDockerHub\n\n\n\n\n\nPushed prebuilt OpenMPF Docker images to \nDockerHub\n. Refer to the \"Quick Start\" section of the OpenMPF Workflow Manager image \ndocumentation\n.\n\n\n\n\nVersion Updates\n\n\n\n\n\nUpdated from Oracle Java 8 to OpenJDK 11, which required updating to Tomcat 8.5.41. We now use \nCargo\n to run integration tests.\n\n\nUpdated OpenCV from 3.0.0 to 3.4.7 to update Deep Neural Networks (DNN) support.\n\n\nUpdated Python from 2.7 to 3.8.2.\n\n\n\n\nFFmpeg\n\n\n\n\n\nWe are no longer building separate audio and video encoders and decoders for FFmpeg. Instead, we are using the built-in decoders that come with FFmpeg by default. This simplifies the build process and redistribution via Docker images.\n\n\n\n\nArtifact Extraction\n\n\n\n\n\nThe \nARTIFACT_EXTRACTION_POLICY\n property can now be assigned a value of \nNONE\n, \nVISUAL_TYPES_ONLY\n, \nALL_TYPES\n, or \nALL_DETECTIONS\n.\n\n\nWith the \nVISUAL_TYPES_ONLY\n or \nALL_TYPES\n policy, artifacts will be extracted according to the \nARTIFACT_EXTRACTION_POLICY*\n properties. With the \nNONE\n and \nALL_DETECTIONS\n policies, those settings are ignored.\n\n\nNote that previously \nNONE\n, \nVISUAL_EXEMPLARS_ONLY\n, \nEXEMPLARS_ONLY\n, \nALL_VISUAL_DETECTIONS\n, and \nALL_DETECTIONS\n were supported.\n\n\n\n\n\n\nThe following \nARTIFACT_EXTRACTION_POLICY*\n properties are now supported:\n\n\nARTIFACT_EXTRACTION_POLICY_EXEMPLAR_FRAME_PLUS\n: Extract the exemplar frame from the track, plus this many frames before and after the exemplar.\n\n\nARTIFACT_EXTRACTION_POLICY_FIRST_FRAME\n: If true, extract the first frame from the track.\n\n\nARTIFACT_EXTRACTION_POLICY_MIDDLE_FRAME\n: If true, extract the frame with a detection that is closest to the middle frame from the track.\n\n\nARTIFACT_EXTRACTION_POLICY_LAST_FRAME\n: If true, extract the last frame from the track.\n\n\nARTIFACT_EXTRACTION_POLICY_TOP_CONFIDENCE_COUNT\n: Sort the detections in a track by confidence and then extract this many detections, starting with those which have the highest confidence.\n\n\nARTIFACT_EXTRACTION_POLICY_CROPPING\n: If true, an artifact will be extracted for each detection in each frame that is selected according to the other \nARTIFACT_EXTRACTION_POLICY*\n properties. The extracted artifact will be cropped to the width and height of the detection bounding box, and the artifact will be rotated according to the detection \nROTATION\n property. If false, the artifact extraction behavior is unchanged from the previous release: the entire frame will be extracted without any rotation.\n\n\n\n\n\n\nFor clarity, \nOUTPUT_EXEMPLARS_ONLY\n has been renamed to \nOUTPUT_ARTIFACTS_AND_EXEMPLARS_ONLY\n. 
Extracted artifacts will always be reported in the JSON output object.\n\n\nThe \nmpf.output.objects.exemplars.only\n system property has been renamed to \nmpf.output.objects.artifacts.and.exemplars.only\n. It works the same as before with the exception that if an artifact is extracted for a detection then that detection will always be represented in the JSON output object, whether it's an exemplar or not.\n\n\nThe \nmpf.output.objects.last.stage.only\n system property has been renamed to \nmpf.output.objects.last.task.only\n. It works the same as before with the exception that when set to true artifact extraction is skipped for all tasks but the last task.\n\n\n\n\nREST Endpoints\n\n\n\n\n\nModified \n[GET] /rest/info\n. Now returns output like \n{\"version\": \"4.1.0\", \"dockerEnabled\": true}\n.\n\n\nAdded the following REST endpoints for getting, removing, and creating actions, tasks, and pipelines. Refer to the \nREST API\n for more information:\n\n\n[GET] /rest/actions\n, \n[GET] /rest/tasks\n, \n[GET] /rest/pipelines\n\n\n[DELETE] /rest/actions\n, \n[DELETE] /rest/tasks\n, \n[DELETE] /rest/pipelines\n\n\n[POST] /rest/actions\n , \n[POST] /rest/tasks\n, \n[POST] /rest/pipelines\n\n\n\n\n\n\nAll of the endpoints above are new with the exception of \n[GET] /rest/pipelines\n. The endpoint has changed since the last version of OpenMPF. Some fields in the response JSON have been removed and renamed. Also, it now returns a collection of tasks for each pipelines. Refer to the REST API.\n\n\n[GET] /rest/algorithms\n can be used to get information about algorithms. Note that algorithms are tied to registered components, so to remove an algorithm you must unregister the associated component. To add an algorithm, start the associated component's Docker container so it self-registers with the Workflow Manager.\n\n\n\n\nIncomplete Actions, Tasks, and Pipelines\n\n\n\n\n\nThe previous version of OpenMPF would generate an error when attempting to register a component that included actions, tasks, or pipelines that depend on algorithms, actions, or tasks that are not yet registered with the Workflow Manager. This required components to be registered in a specific order. Also, when unregistering a component, it required the components which depend on it to be unregistered. These dependency checks are no longer enforced.\n\n\nIn general, the Workflow Manager now appropriately handles incomplete actions, tasks, and pipelines by checking if all of the elements are defined before executing a job, and then preserving that information in memory until the job is complete. This allows components to be registered and removed in an arbitrary order without affecting the state of other components, actions, tasks, or pipelines. This also allows actions and tasks to be removed using the new REST endpoints and then re-added at a later time while still preserving the elements that depend on them.\n\n\nNote that unregistering a component while a job is running will cause it to stall. Please ensure that no jobs are using a component before unregistering it.\n\n\n\n\nPython Arbitrary Rotation\n\n\n\n\n\nThe Python MPFVideoCapture and MPFImageReader tools now support \nROTATION\n values other than 0, 90, 180, and 270 degrees. Users can now specify a clockwise \nROTATION\n job property in the range [0, 360). Values outside that range will be normalized to that range. Floating point values are accepted. 
This is similar to the existing support for \nC++ arbitrary rotation\n.\n\n\n\n\nOpenCV Deep Neural Networks (DNN) Detection Component\n\n\n\n\n\nThis new component replaces the old CaffeDetection component. It supports the same GoogLeNet and Yahoo Not Suitable For Work (NSFW) models as the old component, but removes support for the Rezafuad vehicle color detection model in favor of a custom TensorFlow vehicle color detection model. In our tests, the new model has proven to be more generalizable and provide more accurate results on never-before-seen test data. Refer to the \nREADME\n.\n\n\n\n\nAzure Cognitive Services (ACS) Speech Detection Component\n\n\n\n\n\nThis new component utilizes the \nAzure Cognitive Services Batch Transcription REST endpoint\n to transcribe speech from audio and video files. Refer to the \nREADME\n.\n\n\n\n\nTesseract OCR Text Detection Component\n\n\n\n\n\nText tagging has been simplified to only support regular expression searches. Whole keyword searches are a subset of regular expression searches, and are therefore still supported. Also, the \ntext-tags.json\n file format has been updated to allow for specifying case-sensitive regular expression searches.\n\n\nAdditionally, the \nTRIGGER_WORDS\n and \nTRIGGER_WORDS_OFFSET\n detection properties are now supported, which list the OCR'd words that resulted in adding a \nTAG\n to the detection, and the character offset of those words within the OCR'd \nTEXT\n, respectively.\n\n\nKey changes to tagging output and \ntext-tags.json\n format are outlined below. Refer to the \nREADME\n for more information:\n\n\nRegex patterns should now be entered in the format \n{\"pattern\": \"regex_pattern\"}\n. Users can add and toggle the \n\"caseSensitive\"\n regex flag for each pattern.\n\n\nFor example: \n{\"pattern\": \"(\\\\b)bus(\\\\b)\", \"caseSensitive\": true}\n enables case-sensitive regex pattern matching.\n\n\nBy default, each regex pattern, including those in the legacy format, will be case-insensitive.\n\n\n\n\n\n\nAs part of the text tagging update, the \nTAGS\n outputs are now separated by semicolons \n;\n rather than commas \n,\n to be consistent with the delimiters for \nTRIGGER_WORDS\n and \nTRIGGER_WORDS_OFFSET\n output patterns.\n\n\nBecause semicolons can be part of the trigger word itself, those semicolons will be encapsulated in brackets.\n\n\nFor example, \ndetected trigger with a ;\n in the OCR'd \nTEXT\n is reported as \nTRIGGER_WORDS=detected trigger with a [;]; some other trigger\n.\n\n\n\n\n\n\nCommas are now used to group each set of \nTRIGGER_WORDS_OFFSET\n with its respective \nTRIGGER_WORDS\n output. Both \nTAGS\n and \nTRIGGER_WORDS\n are separated by semicolons only.\n\n\nFor example: \nTRIGGER_WORDS=trigger1; trigger2\n, \nTRIGGER_WORDS_OFFSET=0-5, 6-10; 12-15\n, means that \ntrigger1\n occurs twice in the text at the index ranges 0-5 and 6-10, and \ntrigger2\n occurs at index range 12-15.\n\n\n\n\n\n\n\n\n\n\nRegex tagging now follows the C++ ECMAS format (see \nexamples here\n) after resolving JSON string conversion for regex tags.\n\n\nAs a result the regex patterns \n\\b\n and \n\\p\n in the text tagging file must now be written as \n\\\\b\n and \n\\\\p\n, respectively, to match the format of other regex character patterns (ex. \n\\\\d\n, \n\\\\w\n, \n\\\\s\n, etc.).\n\n\n\n\n\n\nThe \nMAX_PARALLEL_SCRIPT_THREADS\n and \nMAX_PARALLEL_PAGE_THREADS\n properties are now supported. When processing images, the first property is used to determine how many threads to run in parallel. 
Each thread performs OCR using a different language or script model. When processing PDFs, the second property is used to determine how many threads to run in parallel. Each thread performs OCR on a different page of the PDF.\n\n\nThe \nENABLE_OSD_FALLBACK\n property is now supported. If enabled, an additional round of OSD is performed when the first round fails to generate script predictions that are above the OSD score and confidence thresholds. In the second pass, the component will run OSD on multiple copies of the input text image to get an improved prediction score and \nOSD_FALLBACK_OCCURRED\n detection property will be set to true.\n\n\nIf any OSD-detected models are missing, the new \nMISSING_LANGUAGE_MODELS\n detection property will list the missing models.\n\n\n\n\nTika Text Detection Component\n\n\n\n\n\nThe Tika text detection component now supports text tagging in the same way as the Tesseract component. Refer to the \nREADME\n.\n\n\n\n\nOther Improvements\n\n\n\n\n\nSimplified component \ndescriptor.json\n files by moving the specification of common properties, such as \nCONFIDENCE_THRESHOLD\n, \nFRAME_INTERVAL\n, \nMIN_SEGMENT_LENGTH\n, etc., to a single \nworkflow-properties.json\n file. Now when the Workflow Manager is updated to support new features, the component \ndescriptor.json\n file will not need to be updated.\n\n\nUpdated the Sphinx component to return \nTRANSCRIPT\n instead of \nTRANSCRIPTION\n, which is grammatically correct.\n\n\nWhitespace is now trimmed from property names when jobs are submitted via the REST API.\n\n\nThe Darknet Docker image now includes the YOLOv3 model weights.\n\n\nThe C++ and Python ModelsIniParser now allows users to specify optional fields.\n\n\nWhen a job completion callback fails, but otherwise the job is successful, the final state of the job will be \nCOMPLETE_WITH_WARNINGS\n.\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#772\n] Can now create a custom pipeline with long action names using the Pipelines 2 UI.\n\n\n[\n#812\n] Now properly setting the start and stop index for elements in the \ndetectionProcessingErrors\n collection in the JSON output object. Errors reported for each job segment will now appear in the collection.\n\n\n[\n#941\n] Tesseract component no longer segfaults when handling corrupt media.\n\n\n[\n#1005\n] Fixed a bug that caused a NullPointerException when attempting to get output object JSON via REST before a job completes.\n\n\n[\n#1035\n] The search bar in the Job Status UI can once again for used to search for job id.\n\n\n[\n#1104\n] Fixed C++/Python component executor memory leaks.\n\n\n[\n#1108\n] Fixed a bug when handling frames and detections that are horizontally flipped. This affected both markup and feed-forward behaviors.\n\n\n[\n#1119\n] Fixed Tesseract component memory leaks and uninitialized read issues.\n\n\n\n\nKnown Issues\n\n\n\n\n\n[\n#1028\n] Media inspection fails to handle Apple-optimized PNGs with the CgBI data chunk before the IHDR chunk.\n\n\n[\n#1109\n] We made the search bar in the Job Status UI more efficient by shifting it to a database query, but in doing so introduced a bug where the search operates on UTC time instead of local system time.\n\n\n[\n#1010\n] \nmpf.output.objects.enabled\n does not behave as expected for batch jobs. A user would expect it to control whether the JSON output object is generated, but it's generated regardless of that setting.\n\n\n[\n#1032\n] Jobs fail on corrupt QuickTime videos. 
For these videos, the OpenCV-reported frame count is more than twice the actual frame count.\n\n\n[\n#1106\n] When a job ends in ERROR the job status UI does not show an End Date.\n\n\n\n\nOpenMPF 4.1.14: June 2020\n\n\nBug Fixes\n\n\n\n\n\n[\n#1120\n] The node-manager Docker image now correctly installs CUDA libraries so that GPU-enabled components on that image can run on the GPU.\n\n\n[\n#1064\n] Fixed memory leaks in the Darknet component for various network types, and when using GPU resources. This bug covers everything not addressed by \n#1062\n.\n\n\n\n\nOpenMPF 4.1.13: June 2020\n\n\nUpdates\n\n\n\n\n\nUpdated the OpenCV build and media inspection process to properly handle webp images.\n\n\n\n\nOpenMPF 4.1.12: May 2020\n\n\nUpdates\n\n\n\n\n\nUpdated JDK from \njdk-8u181-linux-x64.rpm\n to \njdk-8u251-linux-x64.rpm\n.\n\n\n\n\nOpenMPF 4.1.11: May 2020\n\n\nTesseract OCR Text Detection Component\n\n\n\n\n\nAdded \nINVALID_MIN_IMAGE_SIZE\n job property to filter out images with extremely low width or height.\n\n\nUpdated image rescaling behavior to account for image dimension limits.\n\n\nFixed handling of \nnullptr\n returns from Tesseract API OCR calls.\n\n\n\n\nOpenMPF 4.1.8: May 2020\n\n\nAzure Cognitive Services (ACS) OCR Component\n\n\n\n\n\nThis new component utilizes the \nACS OCR REST endpoint\n to extract text from images and videos. Refer to the \nREADME\n.\n\n\n\n\nOpenMPF 4.1.6: April 2020\n\n\nUpdates\n\n\n\n\n\nNow silently discarding ActiveMQ DLQ \"Suppressing duplicate delivery on connection\" messages in addition to \"duplicate from store\" messages.\n\n\n\n\nOpenMPF 4.1.5: March 2020\n\n\nBug Fixes\n\n\n\n\n\n[\n#1062\n] Fixed a memory leak in the Darknet component that occurred when running jobs on CPU resources with the Tiny YOLO model.\n\n\n\n\nKnown Issues\n\n\n\n\n\n[\n#1064\n] The Darknet component has memory leaks for various network types, and potentially when using GPU resources. This bug covers everything not addressed by \n#1062\n.\n\n\n\n\nOpenMPF 4.1.4: March 2020\n\n\nUpdates\n\n\n\n\n\nUpdated from Hibernate 5.0.8 to 5.4.12 to support schema-based multitenancy. This allows multiple instances of OpenMPF to use the same PostgreSQL database as long as each instance connects to the database as a separate user, and the database is configured appropriately. This also required updating Tomcat from 7.0.72 to 7.0.76.\n\n\n\n\nJSON Output Object\n\n\n\n\n\nUpdated the Workflow Manager to include an \noutputobjecturi\n in GET callbacks, and \noutputObjectUri\n in POST callbacks, when jobs complete. This URI specifies a file path, or path on the object storage server, depending on where the JSON output object is located.\n\n\n\n\nInteroperability Package\n\n\n\n\n\nUpdated \nJsonCallbackBody.java\n to contain an \noutputObjectUri\n field.\n\n\n\n\nOpenMPF 4.1.3: February 2020\n\n\nFeatures\n\n\n\n\n\nAdded support for \nDETECTION_PADDING_X\n and \nDETECTION_PADDING_Y\n optional job properties. The value can be a percentage or whole-number pixel value. When positive, each detection region in each track will be expanded. When negative, the region will shrink. If the detection region is shrunk to nothing, the shrunk dimension(s) will be set to a value of 1 pixel and the \nSHRUNK_TO_NOTHING\n detection property will be set to true.\n\n\nAdded support for \nDISTANCE_CONFIDENCE_WEIGHT_FACTOR\n and \nSIZE_CONFIDENCE_WEIGHT_FACTOR\n SuBSENSE algorithm properties. 
Increasing the value of the first property will generate detection confidence values that favor being closer to the center frame of a track. Increasing the value of the second property will generate detection confidence values that favor large detection regions.\n\n\n\n\nOpenMPF 4.1.1: January 2020\n\n\nBug Fixes\n\n\n\n\n\n[\n#1016\n] Fixed a bug that caused a deadlock situation when the media inspection process failed quickly when processing many jobs using a pipeline with more than one stage.\n\n\n\n\nOpenMPF 4.1.0: July 2019\n\n\nDocumentation\n\n\n\n\n\nUpdated the \nC++ Batch Component API\n to describe the \nROTATION\n detection property. See the \nC++ Arbitrary Rotation\n section below.\n\n\nUpdated the \nREST API\n with new component registration REST endpoints. See the \nComponent Registration REST Endpoints\n section below.\n\n\nAdded a \nREADME\n for the EAST text region detection component. See the \nEAST Text Region Detection Component\n section below.\n\n\nUpdated the Tesseract OCR text detection component \nREADME\n. See the \nTesseract OCR Text Detection Component\n section below.\n\n\nUpdated the openmpf-docker repo \nREADME\n and \nSWARM\n guide to describe the new streamlined approach to using \ndocker-compose config\n. See the \nDocker Deployment\n section below.\n\n\nFixed the description of \nMIN_SEGMENT_LENGTH\n and associated examples in the \nUser Guide\n for issue \n#891\n.\n\n\nUpdated the \nJava Batch Component API\n with information on how to use Log4j2. Related to resolving issue \n#855\n.\n\n\nUpdated the \nInstall Guide\n to point to the Docker \nREADME\n.\n\n\nTransformed the Build Guide into a \nDevelopment Environment Guide\n.\n\n\n\n\n\n\nC++ Arbitrary Rotation\n\n\n\n\nThe C++ MPFVideoCapture and MPFImageReader tools now support \nROTATION\n values other than 0, 90, 180, and 270 degrees. Users can now specify a clockwise \nROTATION\n job property in the range [0, 360). Values outside that range will be normalized to that range. Floating point values are accepted.\n\n\nWhen using those tools to read frame data, they will automatically correct for rotation so that the returned frame is horizontally oriented toward the normal 3 o'clock position.\n\n\nWhen \nFEED_FORWARD_TYPE=REGION\n, these tools will look for a \nROTATION\n detection property in the feed-forward detections and automatically correct for rotation. For example, a detection property of \nROTATION=90\n represents that the region is rotated 90 degrees counter clockwise, and therefore must be rotated 90 degrees clockwise to correct for it.\n\n\nWhen \nFEED_FORWARD_TYPE=SUPERSET_REGION\n, these tools will properly account for the \nROTATION\n detection property associated with each feed-forward detection when calculating the bounding box that encapsulates all of those regions.\n\n\nWhen \nFEED_FORWARD_TYPE=FRAME\n, these tools will rotate the frame according to the \nROTATION\n job property. It's important to note that for rotations other than 0, 90, 180, and 270 degrees the rotated frame dimensions will be larger than the original frame dimensions. This is because the frame needs to be expanded to encapsulate the entirety of the original rotated frame region. 
Black pixels are used to fill the empty space near the edges of the original frame.\n\n\n\n\n\n\nThe Markup component now places a colored dot at the upper-left corner of each detection region so that users can determine the rotation of the region relative to the entire frame.\n\n\n\n\n\n\nComponent Registration REST Endpoints\n\n\n\n\nAdded a \n[POST] /rest/components/registerUnmanaged\n endpoint so that components running as separate Docker containers can self-register with the Workflow Manager.\n\n\nSince these components are not managed by the Node Manager, they are considered unmanaged OpenMPF components. These components are not displayed in Nodes web UI and are tagged as unmanaged in the Component Registration web UI where they can only be removed.\n\n\nNote that components uploaded to the Component Registration web UI as .tar.gz files are considered managed components.\n\n\n\n\n\n\nAdded a \n[DELETE] /rest/components/{componentName}\n endpoint that can be used to remove managed and unmanaged components.\n\n\n\n\nPython Component Executor Docker Image\n\n\n\n\n\nComponent developers can now use a Python component executor Docker image to write a Python component for OpenMPF that can be encapsulated\nwithin a Docker container. This isolates the build and execution environment from the rest of OpenMPF. For more information, see the \nREADME\n.\n\n\nComponents developed with this image are not managed by the Node Manager; rather, they self-register with the Workflow Manager and their lifetime is determined by their own Docker container.\n\n\n\n\n\n\nDocker Deployment\n\n\n\n\nStreamlined single-host \ndocker-compose up\n deployments and multi-host \ndocker stack deploy\n swarm deployments. Now users are instructed to create a single \ndocker-compose.yml\n file for both types of deployments.\n\n\nRemoved the \ndocker-generate-compose-files.sh\n script in favor of allowing users the flexibility of combining multiple \ndocker-compose.*.yml\n files together using \ndocker-compose config\n. See the \nGenerate docker-compose.yml\n section of the README.\n\n\nComponents based on the Python component executor Docker image can now be defined and configured directly in \ndocker-compose.yml\n.\n\n\nOpenMPF Docker images now make use of Docker labels.\n\n\n\n\n\n\nEAST Text Region Detection Component\n\n\n\n\nThis new component uses the Efficient and Accurate Scene Text (EAST) detection model to detect text regions in images and videos. It reports their location, angle of rotation, and text type (\nSTRUCTURED\n or \nUNSTRUCTURED\n), and supports a variety of settings to control the behavior of merging text regions into larger regions. It does not perform OCR on the text or track detections across video frames. Thus, each video track is at most one detection long. For more information, see the \nREADME\n.\n\n\nOptionally, this component can be built as a Docker image using the Python component executor Docker image, allowing it to exist apart from the Node Manager image.\n\n\n\n\n\n\nTesseract OCR Text Detection Component\n\n\n\n\nUpdated to support reading tessdata \n*.traineddata\n files at a specified \nMODELS_DIR_PATH\n. This allows users to install new \n*.traineddata\n files post deployment.\n\n\nUpdated to optionally perform Tesseract Orientation and Script Detection (OSD). 
When enabled, the component will attempt to use the orientation results of OSD to automatically rotate the image, as well as perform OCR using the scripts detected by OSD.\n\n\nUpdated to optionally rotate a feed-forward text region 180 degrees to account for upside-down text.\n\n\nNow supports the following preprocessing properties for both structured and unstructured text:\n\n\nText sharpening\n\n\nText rescaling\n\n\nOtsu image thresholding\n\n\nAdaptive thresholding\n\n\nHistogram equalization\n\n\nAdaptive histogram equalization (also known as Contrast Limited Adaptive Histogram Equalization (CLAHE))\n\n\n\n\n\n\nWill use the \nTEXT_TYPE\n detection property in feed-forward regions provided by the EAST component to determine which preprocessing steps to perform.\n\n\nFor more information on these new features, see the \nREADME\n.\n\n\nRemoved gibberish and string filters since they only worked on English text.\n\n\n\n\nActiveMQ Profiles\n\n\n\n\n\nThe ActiveMQ Docker image now supports custom profiles. The container selects an \nactivemq.xml\n and \nenv\n file to use at runtime based on the value of the \nACTIVE_MQ_PROFILE\n environment variable. Among others, these files contain configuration settings for Java heap space and component queue memory limits.\n\n\nThis release only supports a \ndefault\n profile setting, as defined by \nactivemq-default.xml\n and \nenv.default\n; however, developers are free to add other \nactivemq-\nprofile\n.xml\n and \nenv.\nprofile\n files to the ActiveMQ Docker image to suit their needs.\n\n\n\n\nDisabled ActiveMQ Prefetch\n\n\n\n\n\nDisabled ActiveMQ prefetching on all component queues. Previously, a prefetch value of one was resulting in situations where one component service could be dispatched two sub-jobs, thereby starving other available component services which could process one of those sub-jobs in parallel.\n\n\n\n\nSearch Region Percentages\n\n\n\n\n\nIn addition to using exact pixel values, users can now use percentages for the following properties when specifying search regions for C++ and Python components:\n\n\nSEARCH_REGION_TOP_LEFT_X_DETECTION\n\n\nSEARCH_REGION_TOP_LEFT_Y_DETECTION\n\n\nSEARCH_REGION_BOTTOM_RIGHT_X_DETECTION\n\n\nSEARCH_REGION_BOTTOM_RIGHT_Y_DETECTION\n\n\n\n\n\n\nFor example, setting \nSEARCH_REGION_TOP_LEFT_X_DETECTION=50%\n will result in components only processing the right half of an image or video.\n\n\nOptionally, users can specify exact pixel values of some of these properties and percentages for others.\n\n\n\n\nOther Improvements\n\n\n\n\n\nIncreased the number of ActiveMQ maxConcurrentConsumers for the \nMPF.COMPLETED_DETECTIONS\n queue from 30 to 60.\n\n\nThe Create Job web UI now only displays the content of the \n$MPF_HOME/share/remote-media\n directory instead of all of \n$MPF_HOME/share\n, which prevents the Workflow Manager from indexing generated JSON output files, artifacts, and markup. Indexing the latter resulted in Java heap space issues for large scale production systems. This is a mitigation for issue \n#897\n.\n\n\nThe Job Status web UI now makes proper use of pagination in SQL/Hibernate through the Workflow Manager to avoid retrieving the entire jobs table, which was inefficient.\n\n\nThe Workflow Manager will now silently discard all duplicate messages in the ActiveMQ Dead Letter Queue (DLQ), regardless of destination. 
Previously, only messages destined for component sub-job request queues were discarded.\n\n\n\n\nBug Fixes\n\n\n\n\n\n[\n#891\n] Fixed a bug where the Workflow Manager media segmenter generated short segments that were minimally \nMIN_SEGMENT_LENGTH+1\n in size instead of \nMIN_SEGMENT_LENGTH\n.\n\n\n[\n#745\n] In environments where thousands of jobs are processed, users have observed that, on occasion, pending sub-job messages in ActiveMQ queues are not processed until a new job is created. This seems to have been resolved by disabling ActiveMQ prefetch behavior on component queues.\n\n\n[\n#855\n] A logback circular reference suppressed exception no longer throws a StackOverflowError. This was resolved by transitioning the Workflow Manager and Java components from the Logback framework to Log4j2.\n\n\n\n\nKnown Issues\n\n\n\n\n\n[\n#897\n] OpenMPF will attempt to index files located in \n$MPF_HOME/share\n as soon as the webapp is started by Tomcat. This is so that those files can be listed in a directory tree in the Create Job web UI. The main problem is that once a file gets indexed it's never removed from the cache, even if the file is manually deleted, resulting in a memory leak.\n\n\n\n\nLate Additions: November 2019\n\n\n\n\n\nUser names, roles, and passwords can now be set by using an optional \nuser.properties\n file. This allows administrators to override the default OpenMPF users that come preconfigured, which may be a security risk. Refer to the \"Configure Users\" section of the openmpf-docker \nREADME\n for more information.\n\n\n\n\nLate Additions: December 2019\n\n\n\n\n\nTransitioned from using a mySQL persistent database to PostgreSQL to support users that use an external PostgreSQL database in the cloud.\n\n\nUpdated the EAST component to support a \nTEMPORARY_PADDING\n and \nFINAL_PADDING\n property. The first property determines how much padding is added to detections during the non-maximum suppression or merging step. This padding is effectively removed from the final detections. The second property is used to control the final amount of padding on the output regions. Refer to the \nREADME\n.\n\n\n\n\nOpenMPF 4.0.0: February 2019\n\n\nDocumentation\n\n\n\n\n\nAdded an \nObject Storage Guide\n with information on how to configure OpenMPF to work with a custom NGINX object storage server, and how to run jobs that use an S3 object storage server. Note that the system properties for the custom NGINX object storage server have changed since the last release.\n\n\n\n\nUpgrade to Tesseract 4.0\n\n\n\n\n\nBoth the Tesseract OCR Text Detection Component and OpenALPR License Plate Detection Components have been updated to use the new version of Tesseract.\n\n\nAdditionally, Leptonica has been upgraded from 1.72 to 1.75.\n\n\n\n\nDocker Deployment\n\n\n\n\n\nThe Docker images now use the yum package manager to install ImageMagick6 from a public RPM repository instead of downloading the RPMs directly from imagemagick.org. This resolves an issue with the OpenMPF Docker build where RPMs on \nimagemagick.org\n were no longer available.\n\n\n\n\nTesseract OCR Text Detection Component\n\n\n\n\n\nUpdated to allow the user to set a \nTESSERACT_OEM\n property in order to select an OCR engine mode (OEM).\n\n\n\"script/Latin\" can now be specified as the \nTESSERACT_LANGUAGE\n. 
When selected, Tesseract will select all Latin characters, which can be from different Latin languages.\n\n\n\n\nCeph S3 Object Storage\n\n\n\n\n\nAdded support for downloading files from, and uploading files to, an S3 object storage server. The following job properties can be provided: \nS3_ACCESS_KEY\n, \nS3_SECRET_KEY\n, \nS3_RESULTS_BUCKET\n, \nS3_UPLOAD_ONLY\n.\n\n\nAt this time, only support for Ceph object storage has been tested. However, the Workflow Manager uses the AWS SDK for Java to communicate with the object store, so it is possible that other S3-compatible storage solutions may work as well.\n\n\n\n\nISO-8601 Timestamps\n\n\n\n\n\nAll timestamps in the JSON output object, and streaming video callbacks, are now in the ISO-8601 format (e.g. \"2018-12-19T12:12:59.995-05:00\"). This new format includes the time zone, which makes it possible to compare timestamps generated between systems in different time zones.\n\n\nThis change does not affect the track and detection start and stop offset times, which are still reported in milliseconds since the start of the video.\n\n\n\n\nReduced Redis Usage\n\n\n\n\n\nThe Workflow Manager has been refactored to reduce usage of the Redis in-memory database. In general, Redis is not necessary for storing job information and only resulted in introducing potential delays in accessing that data over the network stack.\n\n\nNow, only track and detection data is stored in Redis for batch jobs. This reduces the amount of memory the Workflow Manager requires of the Java Virtual Machine. Compared to the other job information, track and detection data can potentially be relatively much larger. In the future, we plan to store frame data in Redis for streaming jobs as well.\n\n\n\n\nCaffe Vehicle Color Estimation\n\n\n\n\n\nThe Caffe Component \nmodels.ini\n file has been updated with a \"vehicle_color\" section with links for downloading the \nReza Fuad Rachmadi's Vehicle Color Recognition Using Convolutional Neural Network\n model files.\n\n\nThe following pipelines have been added. These require the above model files to be placed in \n$MPF_HOME/share/models/CaffeDetection\n:\n\n\nCAFFE REZAFUAD VEHICLE COLOR DETECTION PIPELINE\n\n\nCAFFE REZAFUAD VEHICLE COLOR DETECTION (WITH FF REGION FROM TINY YOLO VEHICLE DETECTOR) PIPELINE\n\n\nCAFFE REZAFUAD VEHICLE COLOR DETECTION (WITH FF REGION FROM YOLO VEHICLE DETECTOR) PIPELINE\n\n\n\n\n\n\n\n\nTrack Merging and Minimum Track Length\n\n\n\n\n\nThe following system properties now have \"video\" in their names:\n\n\ndetection.video.track.merging.enabled\n\n\ndetection.video.track.min.gap\n\n\ndetection.video.track.min.length\n\n\ndetection.video.track.overlap.threshold\n\n\n\n\n\n\nThe above properties can be overridden by the following job properties, respectively. These have not been renamed since the last release:\n\n\nMERGE_TRACKS\n\n\nMIN_GAP_BETWEEN_TRACKS\n\n\nMIN_TRACK_LENGTH\n\n\nMIN_OVERLAP\n\n\n\n\n\n\nThese system and job properties now only apply to video media. This resolves an issue where users had set \ndetection.track.min.length=5\n, which resulted in dropping all image media tracks. 
By design, each image track can only contain a single detection.\n\n\n\n\nBug Fixes\n\n\n\n\n\nFixed a bug where the Docker entrypoint scripts appended properties to the end of \n$MPF_HOME/share/config/mpf-custom.properties\n every time the Docker deployment was restarted, resulting in entries like \ndetection.segment.target.length=5000,5000,5000\n.\n\n\nUpgrading to Tesseract 4 fixes a bug where, when specifying \nTESSERACT_LANGUAGE\n, if one of the languages is Arabic, then Arabic must be specified last. Arabic can now be specified first, for example: \nara+eng\n.\n\n\nFixed a bug where the minimum track length property was being applied to image tracks. Now it's only applied to video tracks.\n\n\nFixed a bug where ImageMagick6 installation failed while building Docker images.\n\n\n\n\nOpenMPF 3.0.0: December 2018\n\n\n\n\nNOTE:\n The \nBuild Guide\n and \nInstall Guide\n are outdated. The old process for manually configuring a Build VM, using it to build an OpenMPF package, and installing that package, is deprecated in favor of Docker containers. Please refer to the openmpf-docker \nREADME\n.\n\n\nNOTE:\n Do not attempt to register or unregister a component through the Nodes UI in a Docker deployment. It may appear to succeed, but the changes will not affect the child Node Manager containers, only the Workflow Manager container. Also, do not attempt to use the \nmpf\n command line tools in a Docker deployment.\n\n\n\n\nDocumentation\n\n\n\n\n\nAdded a \nREADME\n, \nSWARM\n guide, and \nCONTRIBUTING\n guide for Docker deployment.\n\n\nUpdated the \nUser Guide\n with information on how track properties and track confidence are handled when merging tracks.\n\n\nAdded README files for new components. Refer to the component sections below.\n\n\n\n\nDocker Support\n\n\n\n\n\nOpenMPF can now be built and distributed as 5 Docker images: openmpf_workflow_manager, openmpf_node_manager, openmpf_active_mq, mysql_database, and redis.\n\n\nThese images can be deployed on a single host using \ndocker-compose up\n.\n\n\nThey can also be deployed across multiple hosts in a Docker swarm cluster using \ndocker stack deploy\n.\n\n\nGPU support is enabled through the NVIDIA Docker runtime.\n\n\nBoth HTTP and HTTPS deployments are supported.\n\n\n\n\n\n\nJSON Output Object\n\n\n\n\nAdded a \ntrackProperties\n field at the track level that works in much the same way as the \ndetectionProperties\n field at the detection level. Both are maps that contain zero or more key-value pairs. The component APIs have always supported the ability to return track-level properties, but they were never represented in the JSON output object, until now.\n\n\nSimilarly, added a track \nconfidence\n field. The component APIs always supported setting it, but the value was never used in the JSON output object, until now.\n\n\nAdded \njobErrors\n and\njobWarnings\n fields. The \njobErrors\n field will mention that there are items in \ndetectionProcessingErrors\n fields.\n\n\nThe \noffset\n, \nstartOffset\n, and \nstopOffset\n fields have been removed in favor of the existing \noffsetFrame\n, \nstartOffsetFrame\n, and \nstopOffsetFrame\n fields, respectively. 
They were redundant and deprecated.\n\n\nAdded a \nmpf.output.objects.exemplars.only\n system property, and \nOUTPUT_EXEMPLARS_ONLY\n job property, that can be set to reduce the size of the JSON output object by only recording the track exemplars instead of all of the detections in each track.\n\n\nAdded a \nmpf.output.objects.last.stage.only\n system property, and \nOUTPUT_LAST_STAGE_ONLY\n job property, that can be set to reduce the size of the JSON output object by only recording the detections for the last non-markup stage of a pipeline.\n\n\n\n\nDarknet Component\n\n\n\n\n\nThe Darknet component now supports processing streaming video.\n\n\nIn batch mode, video frames are prefetched, decoded, and stored in a buffer using a separate thread from the one that performs the detection. The size of the prefetch buffer can be configured by setting \nFRAME_QUEUE_CAPACITY\n.\n\n\nThe Darknet component can now perform basic tracking and generate video tracks with multiple detections. Both the default detection mode and preprocessor detection mode are supported.\n\n\nThe Darknet component has been updated to support the full and tiny YOLOv3 models. The YOLOv2 models are no longer supported.\n\n\n\n\nTesseract OCR Text Detection Component\n\n\n\n\n\nThis new component extracts text found in an image and reports it as a single-detection track.\n\n\nPDF documents can also be processed with one track detection per page.\n\n\nUsers may set the language of each track using the \nTESSERACT_LANGUAGE\n property as well as adjust other image preprocessing properties for text extraction.\n\n\nRefer to the \nREADME\n.\n\n\n\n\nOpenCV Scene Change Detection Component\n\n\n\n\n\nThis new component detects and segments a given video by scenes. Each scene change is detected using histogram comparison, edge comparison, brightness (fade outs), and overall hue/saturation/value differences between adjacent frames.\n\n\nUsers can toggle each type of scene change detection technique, as well as adjust the threshold properties for each detection method.\n\n\nRefer to the \nREADME\n.\n\n\n\n\nTika Text Detection Component\n\n\n\n\n\nThis new component extracts text contained in documents and performs language detection. 71 languages and most document formats (.txt, .pptx, .docx, .doc, .pdf, etc.) are supported.\n\n\nRefer to the \nREADME\n.\n\n\n\n\nTika Image Detection Component\n\n\n\n\n\nThis new component extracts images embedded in document formats (.pdf, .ppt, .doc) and stores them on disk in a specified directory.\n\n\nRefer to the \nREADME\n.\n\n\n\n\nTrack-Level Properties and Confidence\n\n\n\n\n\nRefer to the addition of track-level properties and confidence in the \nJSON Output Object\n section.\n\n\nComponents have been updated to return meaningful track-level properties. Caffe and Darknet include \nCLASSIFICATION\n, OALPR includes the exemplar \nTEXT\n, and Sphinx includes the \nTRANSCRIPTION\n.\n\n\nThe Workflow Manager will now populate the track-level confidence. 
It is the same as the exemplar confidence, which is the max of all of the track detections.\n\n\n\n\nCustom NGINX HTTP Object Storage\n\n\n\n\n\nAdded \nhttp.object.storage.*\n system properties for configuring an optional custom NGINX object storage server on which to store generated detection artifacts, JSON output objects, and markup files.\n\n\nWhen a file cannot be uploaded to the server, the Workflow Manager will fall back to storing it in \n$MPF_HOME/share\n, which is the default behavior when an object storage server is not specified.\n\n\nIf and when a failure occurs, the JSON output object will contain a descriptive message in the \njobWarnings\n field, and, if appropriate, the \nmarkupResult.message\n field. If the job completes without other issues, the final status will be \nCOMPLETE_WITH_WARNINGS\n.\n\n\nThe NGINX storage server runs custom server-side code which we can make available upon request. In the future, we plan to support more common storage server solutions, such as Amazon S3.\n\n\n\n\n\n\nActiveMQ\n\n\n\n\nThe \nMPF_OUTPUT\n queue is no longer supported and has been removed. Job producers can specify a callback URL when creating a job so that they are alerted when the job is complete. Users observed heap space issues with ActiveMQ after running thousands of jobs without consuming messages from the \nMPF_OUTPUT\n queue.\n\n\nThe Workflow Manager will now silently discard duplicate sub-job request messages in the ActiveMQ Dead Letter Queue (DLQ). This fixes a bug where the Workflow Manager would prematurely terminate jobs corresponding to the duplicate messages. It's assumed that ActiveMQ will only place a duplicate message in the DLQ if the original message, or another duplicate, can be delivered.\n\n\n\n\nNode Auto-Configuration\n\n\n\n\n\nAdded the \nnode.auto.config.enabled\n, \nnode.auto.unconfig.enabled\n, and \nnode.auto.config.num.services.per.component\n system properties for automatically managing the configuration of services when nodes join and leave the OpenMPF cluster.\n\n\nDocker will assign a hostname with a randomly-generated id to containers in a swarm deployment. The above properties allow the Workflow Manager to automatically discover and configure services on child Node Manager components, which is convenient since the hostname of those containers cannot be known in advance, and new containers with new hostnames are created when the swarm is restarted.\n\n\n\n\nJob Status Web UI\n\n\n\n\n\nAdded the \nweb.broadcast.job.status.enabled\n and \nweb.job.polling.interval\n system properties that can be used to configure whether the Workflow Manager automatically broadcasts updates to the Job Status web UI. By default, the broadcasts are enabled.\n\n\nIn a production environment that processes hundreds of jobs or more at the same time, this behavior can result in overloading the web UI, causing it to slow down and freeze up. To prevent this, set \nweb.broadcast.job.status.enabled\n to \nfalse\n. If \nweb.job.polling.interval\n is set to a non-zero value, the web UI will poll for updates at that interval (specified in milliseconds).\n\n\nTo disable broadcasts and polling, set \nweb.broadcast.job.status.enabled\n to \nfalse\n and \nweb.job.polling.interval\n to a zero or negative value. 
Users will then need to manually refresh the Job Status web page using their web browser.\n\n\n\n\nOther Improvements\n\n\n\n\n\nNow using variable-length text fields in the mySQL database for string data that may exceed 255 characters.\n\n\nUpdated the MPFImageReader tool to use OpenCV video capture behind the scenes to support reading data from HTTP URLs.\n\n\nPython components can now include pre-built wheel files in the plugin package.\n\n\nWe now use a \nJenkinsfile\n Groovy script for our Jenkins build process. This allows us to use revision control for our continuous integration process and share that process with the open source community.\n\n\nAdded \nremote.media.download.retries\n and \nremote.media.download.sleep\n system properties that can be used to configure how the Workflow Manager will attempt to retry downloading remote media if it encounters a problem.\n\n\nArtifact extraction now uses MPFVideoCapture, which employs various fallback strategies for extracting frames in cases where a video is not well-formed or corrupted. For components that use MPFVideoCapture, this enables better consistency between the frames they process and the artifacts that are later extracted.\n\n\n\n\nBug Fixes\n\n\n\n\n\nJobs now properly end in \nERROR\n if an invalid media URL is provided or there is a problem accessing remote media.\n\n\nJobs now end in \nCOMPLETE_WITH_ERRORS\n when a detection splitter error occurs due to missing system properties.\n\n\nComponents can now include their own version of the Google Protobuf library. It will not conflict with the version used by the rest of OpenMPF.\n\n\nThe Java component executor now sets the proper job id in the job name instead of using the ActiveMQ message request id.\n\n\nThe Java component executor now sets the run directory using \nsetRunDirectory()\n.\n\n\nActions can now be properly added using an \"extras\" component. An extras component only includes a \ndescriptor.json\n file and declares Actions, Tasks, and Pipelines using other component algorithms.\n\n\nRefer to the items listed in the \nActiveMQ\n section.\n\n\nRefer to the addition of track-level properties and confidence in the \nJSON Output Object\n section.\n\n\n\n\nKnown Issues\n\n\n\n\n\n[\n#745\n] In environments where thousands of jobs are processed, users have observed that, on occasion, pending sub-job messages in ActiveMQ queues are not processed until a new job is created. The reason is currently unknown.\n\n\n[\n#544\n] Image artifacts retain some permissions from source files available on the local host. This can result in some of the image artifacts having executable permissions.\n\n\n[\n#604\n] The Sphinx component cannot be unregistered because \n$MPF_HOME/plugins/SphinxSpeechDetection/lib\n is owned by root on a deployment machine.\n\n\n[\n#623\n] The Nodes UI does not work correctly when \n[POST] /rest/nodes/config\n is used at the same time. This is because the UI's state is not automatically updated to reflect changes made through the REST endpoint.\n\n\n[\n#783\n] The Tesseract OCR Text Detection Component has a \nknown issue\n because it uses Tesseract 3. If a combination of languages is specified using \nTESSERACT_LANGUAGE\n, and one of the languages is Arabic, then Arabic must be specified last. For example, for English and Arabic, \neng+ara\n will work, but \nara+eng\n will not.\n\n\n[\n#784\n] Sometimes services do not start on OpenMPF nodes, and those services cannot be started through the Nodes web UI. 
This is not a Docker-specific problem, but it has been observed in a Docker swarm deployment when auto-configuration is enabled. The workaround is to restart the Docker swarm deployment, or remove the entire node in the Nodes UI and add it again.\n\n\n\n\nOpenMPF 2.1.0: June 2018\n\n\n\n\nNOTE:\n If building this release on a machine used to build a previous version of OpenMPF, then please run \nsudo pip install --upgrade pip\n to update to at least pip 10.0.1. If not, the OpenMPF build script will fail to properly download .whl files for Python modules.\n\n\n\n\nDocumentation\n\n\n\n\n\nAdded the \nPython Batch Component API\n.\n\n\nAdded the \nNode Guide\n.\n\n\nAdded the \nGPU Support Guide\n.\n\n\nUpdated the \nInstall Guide\n with an \"(Optional) Install the NVIDIA CUDA Toolkit\" section.\n\n\nRenamed Admin Manual to Admin Guide for consistency.\n\n\n\n\nPython Batch Component API\n\n\n\n\n\nDevelopers can now write batch components in Python using the mpf_component_api module.\n\n\nDependencies can be specified in a setup.py file. OpenMPF will automatically download the .whl files using pip at build time.\n\n\nWhen deployed, a virtualenv is created for the Python component so that it runs in a sandbox isolated from the rest of the system.\n\n\nOpenMPF ImageReader and VideoCapture tools are provided in the mpf_component_util module.\n\n\nExample Python components are provided for reference.\n\n\n\n\nSpare Nodes\n\n\n\n\n\nSpare nodes can join and leave an OpenMPF cluster while the Workflow Manager is running. You can create a spare node by cloning an existing OpenMPF child node. Refer to the \nNode Guide\n.\n\n\nNote that changes made using the Component Registration web page only affect core nodes, not spare nodes. Core nodes are those configured during the OpenMPF installation process.\n\n\nAdded \nmpf list-nodes\n command to list the core nodes and available spare nodes.\n\n\nOpenMPF now uses the JGroups FILE_PING protocol for peer discovery instead of TCPPING. This means that the list of OpenMPF nodes no longer needs to be fully specified when the Workflow Manager starts. Instead, the Workflow Manager, and Node Manager process on each node, use the files in \n$MPF_HOME/share/nodes\n to determine which nodes are currently available.\n\n\nUpdated JGroups from 3.6.4. to 4.0.11.\n\n\nThe environment variables specified in \n/etc/profile.d/mpf.sh\n have been simplified. Of note, \nALL_MPF_NODES\n has been replaced by \nCORE_MPF_NODES\n.\n\n\n\n\nDefault Detection System Properties\n\n\n\n\n\nThe detection properties that specify the default values when creating new jobs can now be updated at runtime without restarting the Workflow Manager. Changing these properties will only have an effect on new jobs, not jobs that are currently running.\n\n\nThese default detection system properties are separated from the general system properties in the Properties web page. The latter still require the Workflow Manager to be restarted for changes to take effect.\n\n\nThe Apache Commons Configuration library is now used to read and write properties files. When defining a property value using an environment variable in the Properties web page, or \n$MPF_HOME/config/mpf-custom.properties\n, be sure to prepend the variable name with \nenv:\n. 
For example:\n\n\n\n\ndetection.models.dir.path=${env:MPF_HOME}/models/\n\n\n\n\n\n\nAlternatively, you can define system properties using other system properties:\n\n\n\n\ndetection.models.dir.path=${mpf.share.path}/models/\n\n\n\n\nAdaptive Frame Interval\n\n\n\n\n\nThe \nFRAME_RATE_CAP\n property can be used to set a threshold on the maximum number of frames to process within one second of the native video time. This property takes precedence over the user-provided / pipeline-provided value for \nFRAME_INTERVAL\n. When the \nFRAME_RATE_CAP\n property is specified, an internal frame interval value is calculated as follows:\n\n\n\n\ncalcFrameInterval = max(1, floor(mediaNativeFPS / frameRateCapProp));\n\n\n\n\n\n\nFRAME_RATE_CAP\n may be disabled by setting it to a value \n<= 0\n. \nFRAME_INTERVAL\n can be disabled in the same way.\n\n\nIf \nFRAME_RATE_CAP\n is disabled, then \nFRAME_INTERVAL\n will be used instead.\n\n\nIf both \nFRAME_RATE_CAP\n and \nFRAME_INTERVAL\n are disabled, then a value of 1 will be used for \nFRAME_INTERVAL\n.\n\n\n\n\nDarknet Component\n\n\n\n\n\nThis release includes a component that uses the \nDarknet neural network framework\n to perform detection and classification of objects using trained models.\n\n\nPipelines for the Tiny YOLO and YOLOv2 models are provided. Due to its large size, the YOLOv2 weights file must be downloaded separately and placed in \n$MPF_HOME/share/models/DarknetDetection\n in order to use the YOLOv2 pipelines. Refer to \nDarknetDetection/plugin-files/models/models.ini\n for more information.\n\n\nThis component supports a preprocessor mode and default mode of operation. If preprocessor mode is enabled, and multiple Darknet detections in a frame share the same classification, then those are merged into a single detection where the region corresponds to the superset region that encapsulates all of the original detections, and the confidence value is the probability that at least one of the original detections is a true positive. If disabled, multiple Darknet detections in a frame are not merged together.\n\n\nDetections are not tracked across frames. One track is generated per detection.\n\n\nThis component supports an optional \nCLASS_WHITELIST_FILE\n property. When provided, only detections with class names listed in the file will be generated.\n\n\nThis component can be compiled with GPU support if the NVIDIA CUDA Toolkit is installed on the build machine. Refer to the \nGPU Support Guide\n. If the toolkit is not found, then the component will compile with CPU support only.\n\n\nTo run on a GPU, set the \nCUDA_DEVICE_ID\n job property, or set the detection.cuda.device.id system property, to a value \n>= 0\n.\n\n\nWhen \nCUDA_DEVICE_ID\n is \n>= 0\n, you can set the \nFALLBACK_TO_CPU_WHEN_GPU_PROBLEM\n job property, or the detection.use.cpu.when.gpu.problem system property, to \nTRUE\n if you want to run the component logic on the CPU instead of the GPU when a GPU problem is detected.\n\n\n\n\nModels Directory\n\n\n\n\n\nThe\n$MPF_HOME/share/models\n directory is now used by the Darknet and Caffe components to store model files and associated files, such as classification names files, weights files, etc. This allows users to more easily add model files post-deployment. 
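To make the Adaptive Frame Interval rules above concrete, here is a minimal sketch (Python, for illustration only; the function and variable names are assumptions rather than actual Workflow Manager code) of how FRAME_RATE_CAP, FRAME_INTERVAL, and the fallback value of 1 interact:

```python
import math

def effective_frame_interval(media_native_fps, frame_rate_cap=-1, frame_interval=-1):
    """Illustrative sketch of the calcFrameInterval behavior described above.

    A property value <= 0 is treated as disabled.
    """
    if frame_rate_cap > 0:
        # FRAME_RATE_CAP takes precedence over FRAME_INTERVAL.
        return max(1, math.floor(media_native_fps / frame_rate_cap))
    if frame_interval > 0:
        return frame_interval
    return 1  # Both disabled: process every frame.

# A 30 fps video capped at 10 frames per second of native video time -> interval of 3.
assert effective_frame_interval(30, frame_rate_cap=10) == 3
assert effective_frame_interval(30, frame_interval=5) == 5
assert effective_frame_interval(30) == 1
```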
Instead of copying the model files to \n$MPF_HOME/plugins/\ncomponent-name\n/models\n directory on each node in the OpenMPF cluster, they only need to copy them to the shared directory once.\n\n\nTo add new models to the Darknet and Caffe component, add an entry to the respective \ncomponent-name\n/plugin-files/models/models.ini\n file.\n\n\n\n\nPackaging and Deployment\n\n\n\n\n\nPython components are packaged with their respective dependencies as .whl files. This can be automated by providing a setup.py file. An example OpenCV Python component is provided that demonstrates how the component is packaged and deployed with the opencv-python module. When deployed, a virtualenv is created for the component with the .whl files installed in it.\n\n\nWhen deploying OpenMPF, \nLD_LIBRARY_PATH\n is no longer set system-wide. Refer to Known Issues.\n\n\n\n\nWeb User Interface\n\n\n\n\n\nUpdated the Nodes page to distinguish between core nodes and spare nodes, and to show when a node is online or offline.\n\n\nUpdated the Component Registration page to list the core nodes as a reminder that changes will not affect spare nodes.\n\n\nUpdated the Properties page to separate the default detection properties from the general system properties.\n\n\n\n\nBug Fixes\n\n\n\n\n\nCustom Action, task, and pipeline names can now contain \"(\" and \")\" characters again.\n\n\nDetection location elements for audio tracks and generic tracks in a JSON output object will now have a y value of \n0\n instead of \n1\n.\n\n\nStreaming health report and summary report timestamps have been corrected to represent hours in the 0-23 range instead of 1-24.\n\n\nSingle-frame .gif files are now segmented properly and no longer result in a NullPointerException.\n\n\nLD_LIBRARY_PATH\n is now set at the process level for Tomcat, the Node Manager, and component services, instead of at the system level in \n/etc/profile.d/mpf.sh\n. Also, deployments no longer create \n/etc/ld.so.conf.d/mpf.conf\n. This better isolates OpenMPF from the rest of the system and prevents issues, such as being unable to use SSH, when system libraries are not compatible with OpenMPF libraries. The latter situation may occur when running \nyum update\n on the system, which can make OpenMPF unusable until a new deployment package with compatible libraries is installed.\n\n\nThe Workflow Manager will no longer generate an \"Error retrieving the SingleJobInfo model\" line in the log if someone is viewing the Job Status page when a job submitted through the REST API is in progress.\n\n\n\n\nKnown Issues\n\n\n\n\n\nWhen multiple component services of the same type on the same node log to the same file at the same time, sometimes log lines will not be captured in the log file. The logging frameworks (log4j and log4cxx) do not support that usage. This problem happens more frequently on systems running many component services at the same time.\n\n\nThe following exception was observed:\n\n\n\n\ncom.google.protobuf.InvalidProtocolBufferException: Message missing required fields: data_uri\n\n\n\n\n\n\n\nFurther debugging is necessary to determine the reason why that message was missing that field. The situation is not easily reproducible. It may occur when ActiveMQ and / or the system is under heavy load and sends duplicate messages in attempt to ensure message delivery. Some of those messages seem to end up in the dead letter queue (DLQ). For now, we've improved the way we handle messages in the DLQ. 
If OpenMPF can process a message successfully, the job is marked as \nCOMPLETED_WITH_ERRORS\n, and the message is moved from \nActiveMQ.DLQ\n to \nMPF.DLQ_PROCESSED_MESSAGES\n. If OpenMPF cannot process a message successfully, it is moved from \nActiveMQ.DLQ to MPF.DLQ_INVALID_MESSAGES\n.\n\n\n\n\n\n\nThe \nmpf stop\n command will stop the Workflow Manager, which will in turn send commands to all of the available nodes to stop all running component services. If a service is processing a sub-job when the quit command is received, that service process will not terminate until that sub-job is completely processed. Thus, the service may put a sub-job response on the ActiveMQ response queue after the Workflow Manager has terminated. That will not cause a problem because the queues are flushed the next time the Workflow Manager starts; however, there will be a problem if the service finishes processing the sub-job after the Workflow Manager is restarted. At that time, the Workflow Manager will have no knowledge of the old job and will in turn generate warnings in the log about how the job id is \"not known to the system\" and/or \"not found as a batch or a streaming job\". These can be safely ignored. Often, if these messages appear in the log, then C++ services were running after stopping the Workflow Manager. To address this, you may wish to run \nsudo killall amq_detection_component\n after running \nmpf stop\n.\n\n\n\n\nOpenMPF 2.0.0: February 2018\n\n\n\n\nNOTE:\n Components built for previous releases of OpenMPF are not compatible with OpenMPF 2.0.0 due to Batch Component API changes to support generic detections, and changes made to the format of the \ndescriptor.json\n file to support stream processing.\n\n\nNOTE:\n This release contains basic support for processing video streams. Currently, the only way to make use of that functionality is through the REST API. Streaming jobs and services cannot be created or monitored through the web UI. Only the SuBSENSE component has been updated to support streaming. Only single-stage pipelines are supported at this time.\n\n\n\n\nDocumentation\n\n\n\n\n\nUpdated documents to distinguish the batch component APIs from the streaming component API.\n\n\nAdded the \nC++ Streaming Component API\n.\n\n\nUpdated the \nC++ Batch Component API\n to describe support for generic detections.\n\n\nUpdated the \nREST API\n with endpoints for streaming jobs.\n\n\n\n\nSupport for Generic Detections\n\n\n\n\n\nC++ and Java components can now declare support for the \nUNKNOWN\n data type. The respective batch APIs have been updated with a function that will enable a component to process an \nMPFGenericJob\n, which represents a piece of media that is not a video, image, or audio file.\n\n\nNote that these API changes make OpenMPF R2.0.0 incompatible with components built for previous releases of OpenMPF. 
Specifically, the new component executor will not be able to load the component logic library.\n\n\n\n\nC++ Batch Component API\n\n\n\n\n\nAdded the following function to support generic detections:\n\n\nMPFDetectionError GetDetections(const MPFGenericJob &job, std::vector<MPFGenericTrack> &tracks)\n\n\n\n\n\n\n\n\nJava Batch Component API\n\n\n\n\n\nAdded the following method to support generic detections:\n\n\nList<MPFGenericTrack> getDetections(MPFGenericJob job)\n\n\n\n\n\n\n\n\nStreaming REST API\n\n\n\n\n\nAdded the following REST endpoints for streaming jobs:\n\n\n[GET] /rest/streaming/jobs\n: Returns a list of streaming job ids.\n\n\n[POST] /rest/streaming/jobs\n: Creates and submits a streaming job. Users can register for health report and summary report callbacks.\n\n\n[GET] /rest/streaming/jobs/{id}\n: Gets information about a streaming job.\n\n\n[POST] /rest/streaming/jobs/{id}/cancel\n: Cancels a streaming job.\n\n\n\n\n\n\n\n\nWorkflow Manager\n\n\n\n\n\nUpdated to support generic detections.\n\n\nUpdated Redis to store information about streaming jobs.\n\n\nAdded controllers for streaming job REST endpoints.\n\n\nAdded ability to generate health reports and segment summary reports for streaming jobs.\n\n\nImproved code flow between the Workflow Manager and master Node Manager to support streaming jobs.\n\n\nAdded ActiveMQ queues to enable the C++ Streaming Component Executor to send reports and job status to the Workflow Manager.\n\n\n\n\nNode Manager\n\n\n\n\n\nUpdated the master Node Manager and child Node Managers to spawn component services on demand to handle streaming jobs, cancel those jobs, and to monitor the status of those processes.\n\n\nUsing .ini files to represent streaming job properties and enable better communication between a child Node Manager and C++ Streaming Component Executor.\n\n\n\n\nC++ Streaming Component API\n\n\n\n\n\nDeveloped the C++ Streaming Component API with the following functions:\n\n\nMPFStreamingDetectionComponent(const MPFStreamingVideoJob &job)\n: Constructor that takes a streaming video job.\n\n\nstring GetDetectionType()\n: Returns the type of detection (e.g. \"FACE\").\n\n\nvoid BeginSegment(const VideoSegmentInfo &segment_info)\n: Indicates the beginning of a new video segment.\n\n\nbool ProcessFrame(const cv::Mat &frame, int frame_number)\n: Processes a single frame for the current video segment.\n\n\nstd::vector<MPFVideoTrack> EndSegment()\n: Indicates the end of the current video segment.\n\n\n\n\n\n\nUpdated the C++ Hello World component to support streaming jobs.\n\n\n\n\nC++ Streaming Component Executor\n\n\n\n\n\nDeveloped the C++ Streaming Component Executor to load a streaming component logic library, read frames from a video stream, and exercise the component logic through the C++ Streaming Component API.\n\n\nWhen the C++ Streaming Component Executor cannot read a frame from the stream, it will sleep for at least 1 millisecond, doubling the amount of sleep time per attempt until it reaches the \nstallTimeout\n value specified when the job was created. While stalled, the job status will be \nSTALLED\n. After the timeout is exceeded, the job will be \nTERMINATED\n.\n\n\nThe C++ Streaming Component Executor supports \nFRAME_INTERVAL\n, as well as rotation, horizontal flipping, and cropping (region of interest) properties. 
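The stall handling described above for the C++ Streaming Component Executor (sleep at least 1 millisecond, doubling per failed read, until the stallTimeout supplied with the job is reached) can be sketched as follows. This is a hedged illustration of the described behavior, not the executor's actual code; the names, the read callable, and the assumption that the timeout bounds the total stalled time are mine:

```python
import time

def read_frame_with_stall_handling(read_frame, stall_timeout_ms):
    """Illustrative sketch only. `read_frame` is a hypothetical callable that
    returns a frame, or None when the stream cannot be read."""
    sleep_ms = 1      # Start by sleeping at least 1 millisecond.
    stalled_ms = 0    # Total time spent stalled so far.
    while True:
        frame = read_frame()
        if frame is not None:
            return frame               # Stream recovered; leave the STALLED state.
        if stalled_ms >= stall_timeout_ms:
            return None                # Timeout exceeded; the job would be TERMINATED.
        time.sleep(sleep_ms / 1000.0)
        stalled_ms += sleep_ms
        sleep_ms *= 2                  # Double the sleep time on each failed attempt.
```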
The executor does not support \nUSE_KEY_FRAMES\n.\n\n\n\n\nInteroperability Package\n\n\n\n\n\nAdded the following Java classes to the interoperability package to simplify third party integration:\n\n\nJsonHealthReportCollection\n: Represents the JSON content of a health report callback. Contains one or more \nJsonHealthReport\n objects.\n\n\nJsonSegmentSummaryReport\n: Represents the JSON content of a summary report callback. Content is similar to the JSON output object used for batch processing.\n\n\n\n\n\n\n\n\nSuBSENSE Component\n\n\n\n\n\nThe SuBSENSE component now supports both batch processing and stream processing.\n\n\nEach video segment will be processed independently of the rest. In other words, tracks will be generated on a segment-by-segment basis and tracks will not carry over between segments.\n\n\nNote that the last frame in the previous segment will be used to determine if there is motion in the first frame of the next segment.\n\n\n\n\nPackaging and Deployment\n\n\n\n\n\nUpdated \ndescriptor.json\n fields to allow components to support batch and/or streaming jobs. Components that use the old \ndescriptor.json\n file format cannot be registered through the web UI. \n\n\nBatch component logic and streaming component logic are compiled into separate libraries.\n\n\nThe mySQL \nstreaming_job_request\n table has been updated with the following fields, which are used to populate the JSON health reports:\n\n\nstatus_detail\n: (Optional) A user-friendly description of the current job status.\n\n\nactivity_frame_id\n: The frame id associated with the last job activity. Activity is defined as the start of a new track for the current segment.\n\n\nactivity_timestamp\n: The timestamp associated with the last job activity.\n\n\n\n\n\n\n\n\nWeb User Interface\n\n\n\n\n\nAdded column names to the table that appears when the user clicks on the Media button associated with a job on the Job Status page. Now descriptive comments are provided when table cells are empty.\n\n\n\n\nBug Fixes\n\n\n\n\n\nUpgraded Tika to 1.17 to resolve an issue with improper indentation in a Python file (rotation.py) that resulted in generating at least one error message per image processed. When processing a large number of images, this would generate many error messages, causing the Automatic Bug Reporting Tool daemon (abrtd) process to run at 100% CPU. Once in that state, that process would stay there, essentially wasting one CPU core. This resulted in some of the Jenkins virtual machines we used for testing becoming unresponsive.\n\n\n\n\nKnown Issues\n\n\n\n\n\n\n\nOpenCV 3.3.0 \ncv::imread()\n does not properly decode some TIFF images that have EXIF orientation metadata. It can handle images that are flipped horizontally, but not vertically. It also has issues with rotated images. Since most components rely on that function to read image data, those components may silently fail to generate detections for those kinds of images.\n\n\n\n\n\n\nUsing single quotes, apostrophes, or double quotes in the name of an algorithm, action, task, or pipeline configured on an existing OpenMPF system will result in a failure to perform an OpenMPF upgrade on that system. Specifically, the step where pre-existing custom actions, tasks, and pipelines are carried over to the upgraded version of OpenMPF will fail. Please do not use those special characters while naming those elements. 
If this has been done already, then those elements should be manually renamed in the XML files prior to an upgrade attempt.\n\n\n\n\n\n\nOpenMPF uses OpenCV, which uses FFmpeg, to connect to video streams. If a proxy and/or firewall prevents the network connection from succeeding, then OpenCV, or the underlying FFmpeg library, will segfault. This causes the C++ Streaming Component Executor process to fail. In turn, the job status will be set to \nERROR\n with a status detail message of \"Unexpected error. See logs for details\". In this case, the logs will not contain any useful information. You can identify a segfault by the following line in the node-manager log:\n\n\n\n\n\n\n2018-02-15 16:01:21,814 INFO [pool-3-thread-4] o.m.m.nms.streaming.StreamingProcess - Process: Component exited with exit code 139\u00a0\n\n\n\n\n\n\nTo determine if FFmpeg can connect to the stream or not, run \nffmpeg -i \nstream-uri\n in a terminal window. Here's an example when it's successful:\n\n\n\n\n[mpf@localhost bin]$ ffmpeg -i rtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov\nffmpeg version n3.3.3-1-ge51e07c Copyright (c) 2000-2017 the FFmpeg developers\n built with gcc 4.8.5 (GCC) 20150623 (Red Hat 4.8.5-4)\n configuration: --prefix=/apps/install --extra-cflags=-I/apps/install/include --extra-ldflags=-L/apps/install/lib --bindir=/apps/install/bin --enable-gpl --enable-nonfree --enable-libtheora --enable-libfreetype --enable-libmp3lame --enable-libvorbis --enable-libx264 --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-version3 --enable-shared --disable-libsoxr --enable-avresample\n libavutil 55. 58.100 / 55. 58.100\n libavcodec 57. 89.100 / 57. 89.100\n libavformat 57. 71.100 / 57. 71.100\n libavdevice 57. 6.100 / 57. 6.100\n libavfilter 6. 82.100 / 6. 82.100\n libavresample 3. 5. 0 / 3. 5. 0\n libswscale 4. 6.100 / 4. 6.100\n libswresample 2. 7.100 / 2. 7.100\n libpostproc 54. 5.100 / 54. 5.100\n[rtsp @ 0x1924240] UDP timeout, retrying with TCP\nInput #0, rtsp, from 'rtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov':\n Metadata:\n title : BigBuckBunny_115k.mov\n Duration: 00:09:56.48, start: 0.000000, bitrate: N/A\n Stream #0:0: Audio: aac (LC), 12000 Hz, stereo, fltp\n Stream #0:1: Video: h264 (Constrained Baseline), yuv420p(progressive), 240x160, 24 fps, 24 tbr, 90k tbn, 48 tbc\nAt least one output file must be specified\n\n\n\n\n\n\nHere's an example when it's not successful, so there may be network issues:\n\n\n\n\n[mpf@localhost bin]$ ffmpeg -i rtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov\nffmpeg version n3.3.3-1-ge51e07c Copyright (c) 2000-2017 the FFmpeg developers\n built with gcc 4.8.5 (GCC) 20150623 (Red Hat 4.8.5-4)\n configuration: --prefix=/apps/install --extra-cflags=-I/apps/install/include --extra-ldflags=-L/apps/install/lib --bindir=/apps/install/bin --enable-gpl --enable-nonfree --enable-libtheora --enable-libfreetype --enable-libmp3lame --enable-libvorbis --enable-libx264 --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-version3 --enable-shared --disable-libsoxr --enable-avresample\n libavutil 55. 58.100 / 55. 58.100\n libavcodec 57. 89.100 / 57. 89.100\n libavformat 57. 71.100 / 57. 71.100\n libavdevice 57. 6.100 / 57. 6.100\n libavfilter 6. 82.100 / 6. 82.100\n libavresample 3. 5. 0 / 3. 5. 0\n libswscale 4. 6.100 / 4. 6.100\n libswresample 2. 7.100 / 2. 7.100\n libpostproc 54. 5.100 / 54. 
5.100\n[tcp @ 0x171c300] Connection to tcp://184.72.239.149:554?timeout=0 failed: Invalid argument\nrtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov: Invalid argument\n\n\n\n\n\n\nTika 1.17 does not come pre-packaged with support for some embedded image formats in PDF files, possibly to avoid patent issues. OpenMPF does not handle embedded images in PDFs, so that's not a problem. Tika will print out the following warnings, which can be safely ignored:\n\n\n\n\nJan 22, 2018 11:02:15 AM org.apache.tika.config.InitializableProblemHandler$3 handleInitializableProblem\nWARNING: JBIG2ImageReader not loaded. jbig2 files will be ignored\nSee https://pdfbox.apache.org/2.0/dependencies.html#jai-image-io\nfor optional dependencies.\nTIFFImageWriter not loaded. tiff files will not be processed\nSee https://pdfbox.apache.org/2.0/dependencies.html#jai-image-io\nfor optional dependencies.\nJ2KImageReader not loaded. JPEG2000 files will not be processed.\nSee https://pdfbox.apache.org/2.0/dependencies.html#jai-image-io\nfor optional dependencies.\n\n\n\n\n\nOpenMPF 1.0.0: October 2017\n\n\nDocumentation\n\n\n\n\n\nUpdated the \nBuild Guide\n with instructions for installing the latest JDK, latest JRE, FFmpeg 3.3.3, new codecs, and OpenCV 3.3.\n\n\nAdded an \nAcknowledgements\n section that provides information on third party dependencies leveraged by the OpenMPF.\n\n\nAdded a \nFeed Forward Guide\n that explains feed forward processing and how to use it.\n\n\nAdded missing requirements checklist content to the \nInstall Guide\n.\n\n\nUpdated the README at the top level of each of the primary repositories to help with user navigation and provide general information.\n\n\n\n\nUpgrade to FFmpeg 3.3.3 and OpenCV 3.3\n\n\n\n\n\nUpdated core framework from FFmpeg 2.6.3 to FFmpeg 3.3.3.\n\n\nAdded the following FFmpeg codecs: x256, VP9, AAC, Opus, Speex.\n\n\nUpdated core framework and components from OpenCV 3.2 to OpenCV 3.3. No longer building with opencv_contrib.\n\n\n\n\nFeed Forward Behavior\n\n\n\n\n\nUpdated the workflow manager (WFM) and all video components to optionally perform feed forward processing for batch jobs. This allows tracks to be passed forward from one pipeline stage to the next. Components in the next stage will only process the frames associated with the detections in those tracks. This differs from the default segmenting behavior, which does not preserve detection regions or track information between stages.\n\n\nTo enable this behavior, the optional \nFEED_FORWARD_TYPE\n property must be set to \nFRAME\n, \nSUPERSET_REGION\n, or \nREGION\n. If set to \nFRAME\n then the components in the next stage will process the whole frame region associated with each detection in the track passed forward. If set to \nSUPERSET_REGION\n then the components in the next stage will determine the bounding box that encapsulates all of the detection regions in the track, and only process the pixel data within that superset region. If set to \nREGION\n then the components in the next stage will process the region associated with each detection in the track passed forward, which may vary in size and position from frame to frame.\n\n\nThe optional \nFEED_FORWARD_TOP_CONFIDENCE_COUNT\n property can be set to a number to limit the number of detections passed forward in a track. For example, if set to \"5\", then only the top 5 detections in the track will be passed forward and processed by the next stage. 
The top detections are defined as those with the highest confidence values, or if the confidence values are the same, those with the lowest frame index.\n\n\nNote that setting the feed forward properties has no effect on the first pipeline stage because there is no prior stage that can pass tracks to it.\n\n\n\n\nCaffe Component\n\n\n\n\n\nUpdated the Caffe component to process images in the BGR color space instead of the RGB color space. This addresses a bug found in OpenCV. Refer to the Bug Fixes section below.\n\n\nAdded support for processing videos.\n\n\nAdded support for an optional \nACTIVATION_LAYER_LIST\n property. For each network layer specified in the list, the \ndetectionProperties\n map in the JSON output object will contain one entry. The value is an encoded string of the JSON representation of an OpenCV matrix of the activation values for that layer. The activation values are obtained after the Caffe network has processed the frame data.\n\n\nAdded support for an optional \nSPECTRAL_HASH_FILE_LIST\n property. For each JSON file specified in the list, the \ndetectionProperties\n map in the JSON output object will contain one entry. The value is a string of 0's and 1's representing the spectral hash calculated using the information in the spectral hash JSON file. The spectral hash is calculated using activation values after the Caffe network has processed the frame data.\n\n\nAdded a pipeline to showcase the above two features for the GoogLeNet Caffe model.\n\n\nRemoved the \nTRANSPOSE\n property from the Caffe component since it was not necessary.\n\n\nAdded red, green, and blue mean subtraction values to the GoogLeNet pipeline.\n\n\n\n\nUse Key Frames\n\n\n\n\n\nAdded support for an optional \nUSE_KEY_FRAMES\n property to each video component. When true the component will only look at key frames (I-frames) from the input video. Can be used in conjunction with \nFRAME_INTERVAL\n. For example, when \nUSE_KEY_FRAMES\n is true, and \nFRAME_INTERVAL\n is set to \"2\", then every other key frame will be processed.\n\n\n\n\nMPFVideoCapture and MPFImageReader Tools\n\n\n\n\n\nUpdated the MPFVideoCapture and MPFImageReader tools to handle feed forward properties.\n\n\nUpdated the MPFVideoCapture tool to handle \nFRAME_INTERVAL\n and \nUSE_KEY_FRAMES\n properties.\n\n\nUpdated all existing components to leverage these tools as much as possible.\n\n\nWe encourage component developers to use these tools to automatically take care of common frame grabbing and frame manipulation behaviors, and not to reinvent the wheel.\n\n\n\n\nDead Letter Queue\n\n\n\n\n\nIf for some reason a sub-job request that should have gone to a component ends up on the ActiveMQ Dead Letter Queue (DLQ), then the WFM will now process that failed request so that the job can complete. The ActiveMQ management page will now show that \nActiveMQ.DLQ\n has 1 consumer. It will also show unconsumed messages in \nMPF.PROCESSED_DLQ_MESSAGES\n. Those are left for auditing purposes. 
The \"Message Detail\" for these shows the string representation of the original job request protobuf message.\n\n\n\n\nUpgrade Path\n\n\n\n\n\nRemoved the Release 0.8 to Release 0.9 upgrade path in the deployment scripts.\n\n\nAdded support for a Release 0.9 to Release 1.0.0 upgrade path, and a Release 0.10.0 to Release 1.0.0 upgrade path.\n\n\n\n\nMarkup\n\n\n\n\n\nBounding boxes are now drawn along the interpolated path between detection regions whenever there are one or more frames in a track which do not have detections associated with them.\n\n\nFor each track, the color of the bounding box is now a randomly selected hue in the HSV color space. The colors are evenly distributed using the golden ratio.\n\n\n\n\nBug Fixes\n\n\n\n\n\nFixed a \nbug in OpenCV\n where the Caffe example code was processing images in the RGB color space instead of the BGR color space. Updated the OpenMPF Caffe component accordingly.\n\n\nFixed a bug in the OpenCV person detection component that caused bounding boxes to be too large for detections near the edge of a frame.\n\n\nResubmitting jobs now properly carries over configured job properties.\n\n\nFixed a bug in the build order of the OpenMPF project so that test modules that the WFM depends on are built before the WFM itself.\n\n\nThe Markup component draws bounding boxes between detections when a \nFRAME_INTERVAL\n is specified. This is so that the bounding box in the marked-up video appears in every frame. Fixed a bug where the bounding boxes drawn on non-detection frames appeared to stand still rather than move along the interpolated path between detection regions.\n\n\nFixed a bug on the OALPR license plate detection component where it was not properly handling the \nSEARCH_REGION_*\n properties.\n\n\nSupport for the \nMIN_GAP_BETWEEN_SEGMENTS\n property was not implemented properly. When the gap between two segments is less than this property value then the segments should be merged; otherwise, the segments should remain separate. In some cases, the exact opposite was happening. This bug has been fixed.\n\n\n\n\nKnown Issues\n\n\n\n\n\nBecause of the number of additional ActiveMQ messages involved, enabling feed forward for low resolution video may take longer than the non-feed-forward behavior.\n\n\n\n\nOpenMPF 0.10.0: July 2017\n\n\n\n\nWARNING:\n There is no longer a \nDEFAULT CAFFE ACTION\n, \nDEFAULT CAFFE TASK\n, or \nDEFAULT CAFFE PIPELINE\n. There is now a \nCAFFE GOOGLENET DETECTION PIPELINE\n and \nCAFFE YAHOO NSFW DETECTION PIPELINE\n, which each have a respective action and task.\n\n\nNOTE:\n MPFImageReader has been re-enabled in this version of OpenMPF since we upgraded to OpenCV 3.2, which addressed the known issues with \nimread()\n, auto-orientation, and jpeg files in OpenCV 3.1.\n\n\n\n\nDocumentation\n\n\n\n\n\nAdded a \nContributor Guide\n that provides guidelines for contributing to the OpenMPF codebase.\n\n\nUpdated the \nJava Batch Component API\n with links to the example Java components.\n\n\nUpdated the \nBuild Guide\n with instructions for OpenCV 3.2.\n\n\n\n\nUpgrade to OpenCV 3.2\n\n\n\n\n\nUpdated core framework and components from OpenCV 3.1 to OpenCV 3.2.\n\n\n\n\nSupport for Animated gifs\n\n\n\n\n\nAll gifs are now treated as videos. 
Each gif will be handled as an MPFVideoJob.\n\n\nUnanimated gifs are treated as 1-frame videos.\n\n\nThe WFM Media Inspector now populates the \nmedia_properties\n map with a \nFRAME_COUNT\n entry (in addition to the \nDURATION\n and \nFPS\n entries).\n\n\n\n\nCaffe Component\n\n\n\n\n\nAdded support for the Yahoo Not Suitable for Work (NSFW) Caffe model for explicit material detection.\n\n\nUpdated the Caffe component to support the OpenCV 3.2 Deep Neural Network (DNN) module.\n\n\n\n\nFuture Support for Streaming Video\n\n\n\n\n\nNOTE:\n At this time, OpenMPF does not support streaming video. This section details what's being / has been done so far to prepare for that feature.\n\n\n\n\n\n\nThe codebase is being updated / refactored to support both the current \"batch\" job functionality and new \"streaming\" job functionality.\n\n\nbatch job: complete video files are written to disk before they are processed\n\n\nstreaming job: video frames are read from a streaming endpoint (such as RTSP) and processed in near real time\n\n\n\n\n\n\nThe REST API is being updated with endpoints for streaming jobs:\n\n\n[POST] /rest/streaming/jobs\n: Creates and submits a streaming job\n\n\n[POST] /rest/streaming/jobs/{id}/cancel\n: Cancels a streaming job\n\n\n[GET] /rest/streaming/jobs/{id}\n: Gets information about a streaming job\n\n\n\n\n\n\nThe Redis and mySQL databases are being updated to support streaming video jobs.\n\n\nA batch job will never have the same id as a streaming job. The integer ids will always be unique.\n\n\n\n\n\n\n\n\nBug Fixes\n\n\n\n\n\nThe MOG and SuBSENSE component services could segfault and terminate if the \nUSE_MOTION_TRACKING\n property was set to \u201c1\u201d and a detection was found close to the edge of the frame. Specifically, this would only happen if the video had a width and/or height dimension that was not an exact power of two.\n\n\nThe reason was that the code downsamples each frame by a power of two and rounds the value of the width and height up to the nearest integer. Later on when upscaling detection rectangles back to a size that\u2019s relative to the original image, the resized rectangle sometimes extended beyond the bounds of the original frame.\n\n\n\n\n\n\n\n\nKnown Issues\n\n\n\n\n\nIf a job is submitted through the REST API, and a user is logged into the web UI and looking at the job status page, the WFM may generate \"Error retrieving the SingleJobInfo model for the job with id\" messages.\n\n\nThis is because the job status is only added to the HTTP session object if the job is submitted through the web UI. When the UI queries the job status it inspects this object.\n\n\nThis message does not appear if job status is obtained using the \n[GET] /rest/jobs/{id}\n endpoint.\n\n\n\n\n\n\nThe \n[GET] /rest/jobs/stats\n endpoint aggregates information about all of the jobs ever run on the system. If thousands of jobs have been run, this call could take minutes to complete. The code should be improved to execute a direct mySQL query.\n\n\n\n\nOpenMPF 0.9.0: April 2017\n\n\n\n\nWARNING:\n MPFImageReader has been disabled in this version of OpenMPF. Component developers should use MPFVideoCapture instead. This affects components developed against previous versions of OpenMPF and components developed against this version of OpenMPF. Please refer to the Known Issues section for more information.\n\n\nWARNING:\n The OALPR Text Detection Component has been renamed to OALPR \nLicense Plate\n Text Detection Component. 
This affects the name of the component package and the name of the actions, tasks, and pipelines. When upgrading from R0.8 to R0.9, if the old OALPR Text Detection Component is installed in R0.8 then you will be prompted to install it again at the end of the upgrade path script. We recommend declining this prompt because the old component will conflict with the new component.\n\n\nWARNING:\n Action, task, and pipeline names that started with \nMOTION DETECTION PREPROCESSOR\n have been renamed \nMOG MOTION DETECTION PREPROCESSOR\n. Similarly, \nWITH MOTION PREPROCESSOR\n has changed to \nWITH MOG MOTION PREPROCESSOR\n.\n\n\n\n\nDocumentation\n\n\n\n\n\nUpdated the \nREST API\n to reflect job properties, algorithm-specific properties, and media-specific properties.\n\n\nStreamlined the \nC++ Batch Component API\n document for clarity and simplicity.\n\n\nCompleted the \nJava Batch Component API\n document.\n\n\nUpdated the \nAdmin Guide\n and \nUser Guide\n to reflect web UI changes.\n\n\nUpdated the \nBuild Guide\n with instructions for GitHub repositories.\n\n\n\n\nWorkflow Manager\n\n\n\n\n\nAdded support for job properties, which will override pre-defined pipeline properties.\n\n\nAdded support for algorithm-specific properties, which will apply to a single stage of the pipeline and will override job properties and pre-defined pipeline properties.\n\n\nAdded support for media-specific properties, which will apply to a single piece of media and will override job properties, algorithm-specific properties, and pre-defined pipeline properties.\n\n\nComponents can now be automatically registered and installed when the web application starts in Tomcat.\n\n\n\n\nWeb User Interface\n\n\n\n\n\nThe \"Close All\" button on pop-up notifications now dismisses all notifications from the queue, not just the visible ones.\n\n\nJob completion notifications now only appear for jobs created during the current login session instead of all jobs.\n\n\nThe \nROTATION\n, \nHORIZONTAL_FLIP\n, and \nSEARCH_REGION_*\n properties can be set using the web interface when creating a job. Once files are selected for a job, these properties can be set individually or by groups of files.\n\n\nThe Node and Process Status page has been merged into the Node Configuration page for simplicity and ease of use.\n\n\nThe Media Markup results page has been merged into the Job Status page for simplicity and ease of use.\n\n\nThe File Manager UI has been improved to handle large numbers of files and symbolic links.\n\n\nThe side navigation menu is now replaced by a top navigation bar.\n\n\n\n\nREST API\n\n\n\n\n\nAdded an optional jobProperties object to the \n/rest/jobs/\n request which contains String key-value pairs which override the pipeline's pre-configured job properties.\n\n\nAdded an optional algorithmProperties object to the \n/rest/jobs/\n request which can be used to configure properties for specific algorithms in the pipeline. These properties override the pipeline's pre-configured job properties. They also override the values in the jobProperties object.\n\n\nUpdated the \n/rest/jobs/\n request to add more detail to media, replacing a list of mediaUri Strings with a list of media objects, each of which contains a mediaUri and an optional mediaProperties map. The mediaProperties map can be used to configure properties for the specific piece of media. 
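As a rough illustration of the request shape described above, the sketch below submits a batch job that uses all three property scopes. Only fields named in these notes (jobProperties, algorithmProperties, and media objects with a mediaUri and mediaProperties) are used; the host, credentials, pipeline name, and media URI are placeholders, and the top-level pipelineName field is an assumption rather than a documented schema:

```python
import requests

# All names and values below are placeholders for illustration only.
job_request = {
    "pipelineName": "EXAMPLE FACE DETECTION PIPELINE",   # assumed field name
    # Job properties override the pipeline's pre-configured properties.
    "jobProperties": {"CONFIDENCE_THRESHOLD": "5"},
    # Algorithm-specific properties apply to one stage and override job properties.
    "algorithmProperties": {"FACECV": {"FRAME_INTERVAL": "2"}},
    # Media objects replace the old list of plain mediaUri strings.
    "media": [
        {
            "mediaUri": "file:///opt/mpf/share/remote-media/example.jpg",
            # Media-specific properties apply only to this piece of media.
            "mediaProperties": {"ROTATION": "90"}
        }
    ]
}

response = requests.post("http://localhost:8080/rest/jobs",
                         json=job_request, auth=("mpf-user", "mpf-password"))
print(response.status_code, response.json())
```

If a request like this is accepted, the JSON output object should echo these values back in its jobProperties, algorithmProperties, and mediaProperties fields, as described in the JSON Output Objects notes that follow.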
These media-specific properties override the pipeline's pre-configured job properties, values in the jobProperties object, and values in the algorithmProperties object.\n\n\nStreamlined the actions, tasks, and pipelines endpoints that are used by the web UI.\n\n\n\n\nFlipping, Rotation, and Region of Interest\n\n\n\n\n\nThe \nROTATION\n, \nHORIZONTAL_FLIP\n, and \nSEARCH_REGION_*\n properties will no longer appear in the detectionProperties map in the JSON detection output object. When applied to an algorithm these properties now appear in the pipeline.stages.actions.properties element. When applied to a piece of media these properties will now appear in the media.mediaProperties element.\n\n\nThe OpenMPF now supports multiple regions of interest in a single media file. Each region will produce tracks separately, and the tracks for each region will be listed in the JSON output as if from a separate media file.\n\n\n\n\nComponent API\n\n\n\n\n\nJava Batch Component API is functionally complete for third-party development, with the exception of Component Adapter and frame transformation utilities classes.\n\n\nRe-architected the Java Batch Component API to use a more traditional Java method structure of returning track lists and throwing exceptions (rather than modifying input track lists and returning statuses), and encapsulating job properties into MPFJob objects:\n\n\nList<MPFVideoTrack> getDetections(MPFVideoJob job) throws MPFComponentDetectionError\n\n\nList<MPFAudioTrack> getDetections(MPFAudioJob job) throws MPFComponentDetectionError\n\n\nList<MPFImageLocation> getDetections(MPFImageJob job) throws MPFComponentDetectionError\n\n\n\n\n\n\nCreated examples for the Java Batch Component API.\n\n\nReorganized the Java and C++ component source code to enable component development without the OpenMPF core, which will simplify component development and streamline the code base.\n\n\n\n\nJSON Output Objects\n\n\n\n\n\nThe JSON output object for the job now contains a jobProperties map which contains all properties defined for the job in the job request. For example, if the job request specifies a \nCONFIDENCE_THRESHOLD\n of 5, then the jobProperties map in the output will also list a \nCONFIDENCE_THRESHOLD\n of 5.\n\n\nThe JSON output object for the job now contains an algorithmProperties element which contains all algorithm-specific properties defined for the job in the job request. For example, if the job request specifies a \nFRAME_INTERVAL\n of 2 for FACECV then the algorithmProperties element in the output will contain an entry for \"FACECV\" and that entry will list a \nFRAME_INTERVAL\n of 2.\n\n\nEach JSON media output object now contains a mediaProperties map which contains all media-specific properties defined by the job request. For example, if the job request specifies a \nROTATION\n of 90 degrees for a single piece of media then the mediaProperties map for that piece of media will list a \nROTATION\n of 90.\n\n\nThe content of JSON output objects is now organized by detection type (e.g. MOTION, FACE, PERSON, TEXT, etc.) rather than action type.\n\n\n\n\nCaffe Component\n\n\n\n\n\nAdded support for flip, rotation, and cropping to regions of interest.\n\n\nAdded support for returning multiple classifications per detection based on user-defined settings. 
The classification list is in order of decreasing confidence value.\n\n\n\n\nNew Pipelines\n\n\n\n\n\nNew SuBSENSE motion preprocessor pipelines have been added to components that perform detection on video.\n\n\n\n\nPackaging and Deployment\n\n\n\n\n\nActions.xml\n, \nAlgorithms.xml\n, \nnodeManagerConfig.xml\n, \nnodeServicesPalette.json\n, \nPipelines.xml\n, and \nTasks.xml\n are no longer stored within the Workflow Manager WAR file. They are now stored under \n$MPF_HOME/data\n. This makes it easier to upgrade the Workflow Manager and makes it easier for users to access these files.\n\n\nEach component can now be optionally installed and registered during deployment. Components not registered are set to the \nUPLOADED\n state. They can then be removed or registered through the Component Registration page.\n\n\nJava components are now packaged as tar.gz files instead of RPMs, bringing them into alignment with C++ components.\n\n\nOpenMPF R0.9 can be installed over OpenMPF R0.8. The deployment scripts will determine that an upgrade should take place.\n\n\nAfter the upgrade, user-defined actions, tasks, and pipelines will have \"CUSTOM\" prepended to their name.\n\n\nThe job_request table in the mySQL database will have a new \"output_object_version\" column. This column will have \"1.0\" for jobs created using OpenMPF R0.8 and \"2.0\" for jobs created using OpenMPF R0.9. The JSON output object schema has changed between these versions.\n\n\n\n\n\n\nReorganized source code repositories so that component SDKs can be downloaded separately from the OpenMPF core and so that components are grouped by license and maturity. Build scripts have been created to streamline and simplify the build process across the various repositories.\n\n\n\n\nUpgrade to OpenCV 3.1\n\n\n\n\n\nThe OpenMPF software has been ported to use OpenCV 3.1, including all of the C++ detection components and the markup component. For the OpenALPR license plate detection component, the versions of the openalpr, tesseract, and leptonica libraries were also upgraded to openalpr-2.3.0, tesseract-3.0.4, and leptonica-1.7.2. For the SuBSENSE motion component, the version of the SuBSENSE library was upgraded to use the code found at this location: \nhttps://bitbucket.org/pierre_luc_st_charles/subsense/src\n.\n\n\n\n\nBug Fixes\n\n\n\n\n\nMOG motion detection always detected motion in frame 0 of a video. Because motion can only be detected between two adjacent frames, frame 1 is now the first frame in which motion can be detected.\n\n\nMOG motion detection never detected motion in the first frame of a video segment (other than the first video segment because of the frame 0 bug described above). Now, motion is detected using the first frame before the start of a segment, rather than the first frame of the segment.\n\n\nThe above bugs were also present in SuBSENSE motion detection and have been fixed.\n\n\nSuBSENSE motion detection generated tracks where the frame numbers were off by one. 
Corrected the frame index logic.\n\n\nVery large video files caused an out of memory error in the system during Workflow Manager media inspection.\n\n\nA job would fail when processing images with an invalid metadata tag for the camera flash setting.\n\n\nUsers were permitted to select invalid file types using the File Manager UI.\n\n\n\n\nKnown Issues\n\n\n\n\n\nMPFImageReader does not work reliably with the current release version of OpenCV 3.1\n: In OpenCV 3.1, new functionality was introduced to interpret EXIF information when reading jpeg files.\n\n\nThere are two issues with this new functionality that impact our ability to use the OpenCV \nimread()\n function with MPFImageReader:\n\n\nFirst, because of a bug in the OpenCV code, reading a jpeg file that contains exif information could cause it to hang. (See \nhttps://github.com/opencv/opencv/issues/6665\n.)\n\n\nSecond, it is not possible to tell the \nimread()\nfunction to ignore the EXIF data, so the image it returns is automatically rotated. (See \nhttps://github.com/opencv/opencv/issues/6348\n.) This results in the MPFImageReader applying a second rotation to the image due to the EXIF information.\n\n\n\n\n\n\nTo address these issues, we developed the following workarounds:\n\n\nCreated a version of the MPFVideoCapture that works with an MPFImageJob. The new MPFVideoCapture can pull frames from both video files and images. MPFVideoCapture leverages cv::VideoCapture, which does not have the two issues described above.\n\n\nDisabled the use of MPFImageReader to prevent new users from trying to develop code leveraging this previous functionality.", "title": "Release Notes" }, + { + "location": "/Release-Notes/index.html#openmpf-510-november-2020", + "text": "", + "title": "OpenMPF 5.1.0: November 2020" + }, + { + "location": "/Release-Notes/index.html#openmpf-509-october-2020", + "text": "", + "title": "OpenMPF 5.0.9: October 2020" + }, + { + "location": "/Release-Notes/index.html#openmpf-508-october-2020", + "text": "", + "title": "OpenMPF 5.0.8: October 2020" + }, + { + "location": "/Release-Notes/index.html#openmpf-507-september-2020", + "text": "", + "title": "OpenMPF 5.0.7: September 2020" + }, + { + "location": "/Release-Notes/index.html#openmpf-506-august-2020", + "text": "", + "title": "OpenMPF 5.0.6: August 2020" + }, + { + "location": "/Release-Notes/index.html#openmpf-505-august-2020", + "text": "", + "title": "OpenMPF 5.0.5: August 2020" + }, + { + "location": "/Release-Notes/index.html#openmpf-504-august-2020", + "text": "", + "title": "OpenMPF 5.0.4: August 2020" + }, + { + "location": "/Release-Notes/index.html#openmpf-503-july-2020", + "text": "", + "title": "OpenMPF 5.0.3: July 2020" + }, + { + "location": "/Release-Notes/index.html#openmpf-502-july-2020", + "text": "", + "title": "OpenMPF 5.0.2: July 2020" + }, { "location": "/Release-Notes/index.html#openmpf-501-july-2020", "text": "", diff --git a/docs/site/sitemap.xml b/docs/site/sitemap.xml index 07ba218f9b69..d3f95e41ad1d 100644 --- a/docs/site/sitemap.xml +++ b/docs/site/sitemap.xml @@ -4,7 +4,7 @@ /index.html - 2020-07-29 + 2020-11-16 daily @@ -13,55 +13,55 @@ /Release-Notes/index.html - 2020-07-29 + 2020-11-16 daily /Install-Guide/index.html - 2020-07-29 + 2020-11-16 daily /User-Guide/index.html - 2020-07-29 + 2020-11-16 daily /Admin-Guide/index.html - 2020-07-29 + 2020-11-16 daily /Node-Guide/index.html - 2020-07-29 + 2020-11-16 daily /Object-Storage-Guide/index.html - 2020-07-29 + 2020-11-16 daily /Contributor-Guide/index.html - 2020-07-29 + 2020-11-16 
daily /Development-Environment-Guide/index.html - 2020-07-29 + 2020-11-16 daily /Acknowledgements/index.html - 2020-07-29 + 2020-11-16 daily @@ -71,49 +71,49 @@ /Component-API-Overview/index.html - 2020-07-29 + 2020-11-16 daily /CPP-Batch-Component-API/index.html - 2020-07-29 + 2020-11-16 daily /CPP-Streaming-Component-API/index.html - 2020-07-29 + 2020-11-16 daily /Java-Batch-Component-API/index.html - 2020-07-29 + 2020-11-16 daily /Python-Batch-Component-API/index.html - 2020-07-29 + 2020-11-16 daily /GPU-Support-Guide/index.html - 2020-07-29 + 2020-11-16 daily /Packaging-and-Registering-a-Component/index.html - 2020-07-29 + 2020-11-16 daily /Feed-Forward-Guide/index.html - 2020-07-29 + 2020-11-16 daily @@ -123,13 +123,13 @@ /Workflow-Manager/index.html - 2020-07-29 + 2020-11-16 daily /REST-API/index.html - 2020-07-29 + 2020-11-16 daily diff --git a/index.html b/index.html index d995922eb4ae..2b047867ea31 100644 --- a/index.html +++ b/index.html @@ -71,9 +71,12 @@

    Open Plugin API

  • License Plates - OpenALPR
  • Speech - Sphinx, Azure Cognitive Services
  • Scenes - OpenCV
-  • Objects (Classification) - OpenCV DNN, Darknet
+  • Objects (Classification) - OpenCV DNN, Darknet, TensorRT
+  • Features - TensorRT
  • Text Regions - EAST
  • Text (OCR) - Tesseract, Apache Tika, Azure Cognitive Services
+  • Form Structure (with OCR) - Azure Cognitive Services
+  • Keywords - Boost Regular Expressions
  • Document Images - Apache Tika