diff --git a/.gitignore b/.gitignore
index 77b1f2df7ed2..faa987faa3df 100644
--- a/.gitignore
+++ b/.gitignore
@@ -4,8 +4,13 @@ _site
.vagrant
*.iml
*.idea*
+.vscode
# general
.DS_Store
Thumbs.db
-ehthumbs.db
\ No newline at end of file
+ehthumbs.db
+
+# temp
+*.bkp
+*.dtmp
\ No newline at end of file
diff --git a/_workfiles/triggers-dynamic-speech-full.drawio b/_workfiles/triggers-dynamic-speech-full.drawio
new file mode 100644
index 000000000000..7ba5a4b6f006
--- /dev/null
+++ b/_workfiles/triggers-dynamic-speech-full.drawio
@@ -0,0 +1,121 @@
diff --git a/_workfiles/triggers-dynamic-speech-simple.drawio b/_workfiles/triggers-dynamic-speech-simple.drawio
new file mode 100644
index 000000000000..01398d511f0c
--- /dev/null
+++ b/_workfiles/triggers-dynamic-speech-simple.drawio
@@ -0,0 +1,98 @@
diff --git a/_workfiles/triggers-yolo-full.drawio b/_workfiles/triggers-yolo-full.drawio
new file mode 100644
index 000000000000..f9cf43c4d82a
--- /dev/null
+++ b/_workfiles/triggers-yolo-full.drawio
@@ -0,0 +1,109 @@
diff --git a/_workfiles/triggers-yolo-simple.drawio b/_workfiles/triggers-yolo-simple.drawio
new file mode 100644
index 000000000000..4432eeb67ea7
--- /dev/null
+++ b/_workfiles/triggers-yolo-simple.drawio
@@ -0,0 +1,67 @@
diff --git a/docs/docs/CPP-Batch-Component-API.md b/docs/docs/CPP-Batch-Component-API.md
index 9ff34de2e845..84b3395455a1 100644
--- a/docs/docs/CPP-Batch-Component-API.md
+++ b/docs/docs/CPP-Batch-Component-API.md
@@ -304,33 +304,13 @@ bool SampleComponent::Supports(MPFDetectionDataType data_type) {
}
```
-#### GetDetectionType()
-
-Returns the type of object detected by the component.
-
-* Function Definition:
-```c++
-string GetDetectionType()
-```
-
-* Parameters: none
-
-* Returns: (`string`) The type of object detected by the component. Should be in all CAPS. Examples include: `FACE`, `MOTION`, `PERSON`, `SPEECH`, `CLASS` (for object classification), or `TEXT`.
-
-* Example:
-
-```c++
-string SampleComponent::GetDetectionType() {
- return "FACE";
-}
-```
#### GetDetections(MPFImageJob …)
-Used to detect objects in an image file. The MPFImageJob structure contains
+Used to detect objects in an image file. The MPFImageJob structure contains
the data_uri specifying the location of the image file.
-Currently, the data_uri is always a local file path. For example, "/opt/mpf/share/remote-media/test-file.jpg".
+Currently, the data_uri is always a local file path. For example, "/opt/mpf/share/remote-media/test-file.jpg".
This is because all media is copied to the OpenMPF server before the job is executed.
* Function Definition:
@@ -349,9 +329,9 @@ std::vector<MPFImageLocation> GetDetections(const MPFImageJob &job);
#### GetDetections(MPFVideoJob …)
-Used to detect objects in a video file. Prior to being sent to the component, videos are split into logical "segments"
-of video data and each segment (containing a range of frames) is assigned to a different job. Components are not
-guaranteed to receive requests in any order. For example, the first request processed by a component might receive
+Used to detect objects in a video file. Prior to being sent to the component, videos are split into logical "segments"
+of video data and each segment (containing a range of frames) is assigned to a different job. Components are not
+guaranteed to receive requests in any order. For example, the first request processed by a component might receive
a request for frames 300-399 of a Video A, while the next request may cover frames 900-999 of a Video B.
* Function Definition:
@@ -365,12 +345,12 @@ std::vector<MPFVideoTrack> GetDetections(const MPFVideoJob &job);
|---|---|---|
| job | `const MPFVideoJob&` | Structure containing details about the work to be performed. See [`MPFVideoJob`](#mpfvideojob) |
-* Returns: (`std::vector<MPFVideoTrack>`) The [`MPFVideoTrack`](#mpfvideotrack) data for each detected object.
+* Returns: (`std::vector<MPFVideoTrack>`) The [`MPFVideoTrack`](#mpfvideotrack) data for each detected object.
#### GetDetections(MPFAudioJob …)
-Used to detect objects in an audio file. Currently, audio files are not logically segmented, so a job will contain
+Used to detect objects in an audio file. Currently, audio files are not logically segmented, so a job will contain
the entirety of the audio file.
* Function Definition:
@@ -389,7 +369,7 @@ std::vector<MPFAudioTrack> GetDetections(const MPFAudioJob &job);
#### GetDetections(MPFGenericJob …)
-Used to detect objects in files that aren't video, image, or audio files. Such files are of the UNKNOWN type and
+Used to detect objects in files that aren't video, image, or audio files. Such files are of the UNKNOWN type and
handled generically. These files are not logically segmented, so a job will contain the entirety of the file.
* Function Definition:
@@ -445,12 +425,12 @@ MPFJob(
| media_properties | `const Properties &` | Contains a map of `<std::string, std::string>` of metadata about the media associated with the job. The entries in the map vary depending on the type of media. Refer to the type-specific job structures below. |
-Job properties can also be set through environment variables prefixed with `MPF_PROP_`. This allows
-users to set job properties in their
-[docker-compose files.](https://github.com/openmpf/openmpf-docker/blob/32d072c9578441f2a07ec2da3bc3765aa1ff9cce/docker-compose.components.yml#L96)
-These will take precedence over all other property types (job, algorithm, media, etc). It is not
-possible to change the value of properties set via environment variables at runtime and therefore
-they should only be used to specify properties that will not change throughout the entire lifetime
+Job properties can also be set through environment variables prefixed with `MPF_PROP_`. This allows
+users to set job properties in their
+[docker-compose files.](https://github.com/openmpf/openmpf-docker/blob/32d072c9578441f2a07ec2da3bc3765aa1ff9cce/docker-compose.components.yml#L96)
+These will take precedence over all other property types (job, algorithm, media, etc). It is not
+possible to change the value of properties set via environment variables at runtime and therefore
+they should only be used to specify properties that will not change throughout the entire lifetime
of the service (e.g. Docker container).
@@ -586,7 +566,7 @@ MPFVideoJob(
stop_frame
const int
The last frame number (0-based index) of the video that should be processed to look for detections.
-
+
track
const MPFVideoTrack &
@@ -645,7 +625,7 @@ MPFAudioJob(
const string &data_uri,
int start_time,
int stop_time,
- const MPFAudioTrack &track,
+ const MPFAudioTrack &track,
const Properties &job_properties,
const Properties &media_properties)
```
@@ -810,9 +790,9 @@ MPFImageLocation(
A component that performs generic object classification can add an entry to `detection_properties` where the key is `CLASSIFICATION` and the value is the type of object detected.
@@ -831,11 +811,11 @@ The Workflow Manager performs the following algorithm to draw the bounding box w
- Draw the rectangle ignoring rotation and flip.
+ Draw the rectangle ignoring rotation and flip.
- Rotate the rectangle counter-clockwise the given number of degrees around its top left corner.
+ Rotate the rectangle counter-clockwise the given number of degrees around its top left corner.
@@ -850,15 +830,15 @@ Step 1 is drawn in red. Step 2 is drawn in blue. Step 3 and the final result is
The detection for the image above is:
Note that the `x_left_upper`, `y_left_upper`, `width`, and `height` values describe the red rectangle. The addition
of the `ROTATION` property results in the blue rectangle, and the addition of the `HORIZONTAL_FLIP` property results
-in the green rectangle.
+in the green rectangle.
One way to think about the process is "draw the unrotated and unflipped rectangle, stick a pin in the upper left corner,
and then rotate and flip around the pin".
@@ -871,22 +851,22 @@ The Workflow Manager generated the above image by performing markup on the origi
detection:
The markup process followed steps 1 and 2 in the previous section, skipping step 3 because there is no
-`HORIZONTAL_FLIP`.
+`HORIZONTAL_FLIP`.
In order to properly extract the detection region from the original image, such as when generating an artifact, you
would need to rotate the region in the above image 90 degrees clockwise around the cyan dot currently shown in the
-bottom-left corner so that the face is in the proper upright position.
+bottom-left corner so that the face is in the proper upright position.
When the rotation is properly corrected in this way, the cyan dot will appear in the top-left corner of the bounding
box. That is why its position is described using the `x_left_upper`, and `y_left_upper` variables. They refer to the
-top-left corner of the correctly oriented region.
+top-left corner of the correctly oriented region.
#### MPFVideoTrack
@@ -977,7 +957,7 @@ MPFGenericTrack(
#### MPFDetectionException
-Exception that should be thrown by the `GetDetections()` methods when an error occurs.
+Exception that should be thrown by the `GetDetections()` methods when an error occurs.
The content of the `error_code` and `what()` members will appear in the JSON output object.
* Constructors:
@@ -996,7 +976,7 @@ MPFDetectionException(const std::string &what)
#### MPFDetectionError
-Enum used to indicate the type of error that occurred in a `GetDetections()` method. It is used as a parameter to
+Enum used to indicate the type of error that occurred in a `GetDetections()` method. It is used as a parameter to
the `MPFDetectionException` constructor. A component is not required to support all error types.
| ENUM | Description |
@@ -1025,11 +1005,11 @@ For convenience, the OpenMPF provides the `MPFImageReader` ([source](https://git
# C++ Component Build Environment
-A C++ component library must be built for the same C++ compiler and Linux
-version that is used by the OpenMPF Component Executable. This is to ensure
-compatibility between the executable and the library functions at the
-Application Binary Interface (ABI) level. At this writing, the OpenMPF runs on
-Ubuntu 20.04 (kernel version 5.13.0-30), and the OpenMPF C++ Component
+A C++ component library must be built for the same C++ compiler and Linux
+version that is used by the OpenMPF Component Executable. This is to ensure
+compatibility between the executable and the library functions at the
+Application Binary Interface (ABI) level. At this writing, the OpenMPF runs on
+Ubuntu 20.04 (kernel version 5.13.0-30), and the OpenMPF C++ Component
Executable is built with g++ (GCC) 9.3.0-17.
Components should be supplied as a tar file, which includes not only the component library, but any other libraries or files needed for execution. This includes all other non-standard libraries used by the component (aside from the standard Linux and C++ libraries), and any configuration or data files.
@@ -1057,27 +1037,27 @@ componentName
│ └── descriptor.json
└── lib
└──libComponentName.so - Compiled component library
-```
+```
Once built, components should be packaged into a .tar.gz containing the contents of the directory shown above.
## Logging
-It is recommended to use [Apache log4cxx](https://logging.apache.org/log4cxx/index.html) for
-OpenMPF Component logging. Components using log4cxx should not configure logging themselves.
-The Component Executor will configure log4cxx globally. Components should call
-`log4cxx::Logger::getLogger("<componentName>")` to a get a reference to the logger. If you
+It is recommended to use [Apache log4cxx](https://logging.apache.org/log4cxx/index.html) for
+OpenMPF Component logging. Components using log4cxx should not configure logging themselves.
+The Component Executor will configure log4cxx globally. Components should call
+`log4cxx::Logger::getLogger("<componentName>")` to get a reference to the logger. If you
are using a different logging framework, you should make sure its behavior is similar to how
-the Component Executor configures log4cxx as described below.
+the Component Executor configures log4cxx as described below.
The following log LEVELs are supported: `FATAL, ERROR, WARN, INFO, DEBUG, TRACE`.
-The `LOG_LEVEL` environment variable can be set to one of the log levels to change the logging
+The `LOG_LEVEL` environment variable can be set to one of the log levels to change the logging
verbosity. When `LOG_LEVEL` is absent, `INFO` is used.
-Note that multiple instances of the same component can log to the same file.
+Note that multiple instances of the same component can log to the same file.
Also, logging content can span multiple lines.
-The logger will write to both standard error and
+The logger will write to both standard error and
`${MPF_LOG_PATH}/${THIS_MPF_NODE}/log/<componentName>.log`.
Each log statement will take the form:
diff --git a/docs/docs/CPP-Streaming-Component-API.md b/docs/docs/CPP-Streaming-Component-API.md
index 0a2f45edbb47..b2a036be83fd 100644
--- a/docs/docs/CPP-Streaming-Component-API.md
+++ b/docs/docs/CPP-Streaming-Component-API.md
@@ -40,7 +40,7 @@ while (has_next_frame) {
}
if (is_end_of_segment) {
streaming_video_tracks = component->EndSegment()
- SendSummaryReport(frame_number, component->getDetectionType(), streaming_video_tracks)
+ SendSummaryReport(frame_number, streaming_video_tracks)
}
}
```
@@ -146,26 +146,6 @@ SampleComponent::SampleComponent(const MPFStreamingVideoJob &job)
}
```
-### GetDetectionType()
-
-Returns the type of object detected by the component.
-
-* Function Definition:
-```c++
-string GetDetectionType()
-```
-
-* Parameters: none
-
-* Returns: (`string`) The type of object detected by the component. Should be in all CAPS. Examples include: `FACE`, `MOTION`, `PERSON`, `CLASS` (for object classification), or `TEXT`.
-
-* Example:
-
-```c++
-string SampleComponent::GetDetectionType() {
- return "FACE";
-}
-```
### BeginSegment(VideoSegmentInfo)
@@ -189,7 +169,7 @@ void BeginSegment(const VideoSegmentInfo &segment_info)
void SampleComponent::BeginSegment(const VideoSegmentInfo &segment_info) {
// Prepare for next segment
}
-```
+```
### ProcessFrame(Mat ...)
@@ -203,7 +183,7 @@ Note that this function may not be invoked for every frame in the current segmen
Also, it may not be invoked for the first nor last frame in the segment. For example, if FRAME_INTERVAL = 3 and the segment size is 10, then it will be invoked for frames {0, 3, 6, 9} for the first segment, and frames {12, 15, 18} for the second segment.
-* Function Definition:
+* Function Definition:
```c++
bool ProcessFrame(const cv::Mat &frame, int frame_number)
```
@@ -222,12 +202,12 @@ bool ProcessFrame(const cv::Mat &frame, int frame_number)
bool SampleComponent::ProcessFrame(const cv::Mat &frame, int frame_number) {
// Look for detections. Generate tracks and store them until the end of the segment.
if (started_first_track_in_current_segment) {
- return true;
+ return true;
} else {
return false;
}
}
-```
+```
### EndSegment()
@@ -442,27 +422,27 @@ componentName
│ └── descriptor.json
└── lib
└──libComponentName.so - Compiled component library
-```
+```
Once built, components should be packaged into a .tar.gz containing the contents of the directory shown above.
## Logging
-It is recommended to use [Apache log4cxx](https://logging.apache.org/log4cxx/index.html) for
-OpenMPF Component logging. Components using log4cxx should not configure logging themselves.
-The Component Executor will configure log4cxx globally. Components should call
-`log4cxx::Logger::getLogger("<componentName>")` to a get a reference to the logger. If you
+It is recommended to use [Apache log4cxx](https://logging.apache.org/log4cxx/index.html) for
+OpenMPF Component logging. Components using log4cxx should not configure logging themselves.
+The Component Executor will configure log4cxx globally. Components should call
+`log4cxx::Logger::getLogger("<componentName>")` to get a reference to the logger. If you
are using a different logging framework, you should make sure its behavior is similar to how
-the Component Executor configures log4cxx as described below.
+the Component Executor configures log4cxx as described below.
The following log LEVELs are supported: `FATAL, ERROR, WARN, INFO, DEBUG, TRACE`.
-The `LOG_LEVEL` environment variable can be set to one of the log levels to change the logging
+The `LOG_LEVEL` environment variable can be set to one of the log levels to change the logging
verbosity. When `LOG_LEVEL` is absent, `INFO` is used.
-Note that multiple instances of the same component can log to the same file.
+Note that multiple instances of the same component can log to the same file.
Also, logging content can span multiple lines.
-The logger will write to both standard error and
+The logger will write to both standard error and
`${MPF_LOG_PATH}/${THIS_MPF_NODE}/log/<componentName>.log`.
Each log statement will take the form:
diff --git a/docs/docs/Component-Descriptor-Reference.md b/docs/docs/Component-Descriptor-Reference.md
index f3b531e8b831..4fee2ead745d 100644
--- a/docs/docs/Component-Descriptor-Reference.md
+++ b/docs/docs/Component-Descriptor-Reference.md
@@ -127,6 +127,11 @@ Contains the following sub-fields:
Required. Defines the type of processing that the algorithm performs. Must be set to `DETECTION`.
+* **trackType:**
+ Required. The type of object detected by the component. Should be in all CAPS. Examples
+ include: `FACE`, `MOTION`, `PERSON`, `SPEECH`, `CLASS` (for object classification), or `TEXT`.
+
+
* **outputChangedCounter:**
Optional. An integer that should be incremented when the component is changed in a way that
would cause it to produce different output.
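+
+For illustration, a minimal sketch of how these fields might appear together inside a descriptor's
+`algorithm` object is shown below. The values are placeholders and other required descriptor fields
+are omitted:
+
+```json
+"algorithm": {
+    "name": "SAMPLECOMPONENT",
+    "actionType": "DETECTION",
+    "trackType": "FACE",
+    "outputChangedCounter": 1
+}
+```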
diff --git a/docs/docs/Java-Batch-Component-API.md b/docs/docs/Java-Batch-Component-API.md
index 7f913c0788d9..e5623a45fa69 100644
--- a/docs/docs/Java-Batch-Component-API.md
+++ b/docs/docs/Java-Batch-Component-API.md
@@ -255,25 +255,6 @@ public boolean supports(MPFDataType dataType) {
}
```
-#### getDetectionType()
-
-Returns the type of object detected by the component.
-
-* Method Definition:
-```java
-public String getDetectionType()
-```
-
-* Parameters: none
-
-* Returns: (`String`) The type of object detected by the component. Should be in all CAPS. Examples include: `FACE`, `MOTION`, `PERSON`, `SPEECH`, `CLASS` (for object classification), or `TEXT`.
-
-* Example:
-```java
-public String getDetectionType() {
- return "FACE";
-}
-```
### getDetections(MPFImageJob)
@@ -585,7 +566,7 @@ public MPFVideoJob(
stopFrame
int
The last frame number (0-based index) of the video that should be processed to look for detections.
-
+
jobProperties
Map<String, String>
@@ -680,7 +661,7 @@ public MPFAudioJob(
stopTime
int
The time (0-based index, in ms) associated with the end of the segment of the audio file that should be processed to look for detections.
-
+
jobProperties
Map<String, String>
@@ -759,7 +740,7 @@ public MPFGenericJob(
stopTime
int
The time (0-based index, in ms) associated with the end of the segment of the audio file that should be processed to look for detections.
-
+
jobProperties
Map<String, String>
diff --git a/docs/docs/Python-Batch-Component-API.md b/docs/docs/Python-Batch-Component-API.md
index a1c529d658d9..c500686e7d95 100644
--- a/docs/docs/Python-Batch-Component-API.md
+++ b/docs/docs/Python-Batch-Component-API.md
@@ -32,7 +32,6 @@ The basic pseudocode for the Component Executable is as follows:
```python
component_cls = locate_component_class()
component = component_cls()
-detection_type = component.detection_type
while True:
job = receive_job()
@@ -60,8 +59,7 @@ The Component Executable receives and parses requests from the WFM, invokes meth
detection objects, and subsequently populates responses with the component output and sends them to the WFM.
A component developer implements a detection component by creating a class that defines one or more of the
-get_detections_from_* methods and has a [`detection_type`](#componentdetection_type) field.
-See the [API Specification](#api-specification) for more information.
+get_detections_from_* methods. See the [API Specification](#api-specification) for more information.
The figures below present high-level component diagrams of the Python Batch Component API.
This figure shows the basic structure:
@@ -250,7 +248,6 @@ import mpf_component_util as mpf_util
logger = logging.getLogger('MyComponent')
class MyComponent(mpf_util.VideoCaptureMixin):
- detection_type = 'FACE'
@staticmethod
def get_detections_from_video_capture(video_job, video_capture):
@@ -347,7 +344,6 @@ import logging
logger = logging.getLogger('MyComponent')
class MyComponent:
- detection_type = 'FACE'
@staticmethod
def get_detections_from_video(video_job):
@@ -368,7 +364,7 @@ ComponentName
├── dependency.py
└── descriptor
└── descriptor.json
-```
+```
To create the plugin packages you can run the build script as follows:
```
~/openmpf-projects/openmpf-build-tools/build-openmpf-components/build_components.py -c MyComponent
@@ -386,8 +382,7 @@ See the [README](https://github.com/openmpf/openmpf-docker/tree/master/component
# API Specification
-An OpenMPF Python component is a class that defines one or more of the get_detections_from_\* methods and has a
-`detection_type` field.
+An OpenMPF Python component is a class that defines one or more of the get_detections_from_\* methods.
#### component.get_detections_from_\* methods
@@ -426,17 +421,6 @@ All get_detections_from_\* methods must return an iterable of the appropriate de
but any iterable can be used.
-#### component.detection_type
-* `str` field describing the type of object that is detected by the component. Should be in all CAPS.
-Examples include: `FACE`, `MOTION`, `PERSON`, `SPEECH`, `CLASS` (for object classification), or `TEXT`.
-* Example:
-```python
-class MyComponent:
- detection_type = 'FACE'
-
-```
-
-
## Image API
#### component.get_detections_from_image(image_job)
@@ -689,12 +673,12 @@ Class containing data used for detection of objects in a video file.
start_frame
int
The first frame number (0-based index) of the video that should be processed to look for detections.
-
+
stop_frame
int
The last frame number (0-based index) of the video that should be processed to look for detections.
-
+
job_properties
dict[str, str]
@@ -930,7 +914,7 @@ Currently, audio files are not logically segmented, so a job will contain the en
stop_time
int
The time (0-based index, in milliseconds) associated with the end of the segment of the audio file that should be processed to look for detections.
-
+
job_properties
dict[str, str]
@@ -1121,20 +1105,20 @@ generating an exception, choose the type that best describes your error.
# Python Component Build Environment
-All Python components must work with CPython 3.8.10. Also, Python components
-must work with the Linux version that is used by the OpenMPF Component
-Executable. At this writing, OpenMPF runs on
-Ubuntu 20.04 (kernel version 5.13.0-30). Pure Python code should work on any
-OS, but incompatibility issues can arise when using Python libraries that
-include compiled extension modules. Python libraries are typically distributed
-as wheel files. The wheel format requires that the file name follows the pattern
-of `<dist_name>-<version>-<python_tag>-<abi_tag>-<platform_tag>.whl`.
-`<python_tag>-<abi_tag>-<platform_tag>` are called
-[compatibility tags](https://www.python.org/dev/peps/pep-0425). For example,
-`mpf_component_api` is pure Python, so the name of its wheel file is
-`mpf_component_api-0.1-py3-none-any.whl`. `py3` means it will work with any
-Python 3 implementation because it does not use any implementation-specific
-features. `none` means that it does not use the Python ABI. `any` means it will
+All Python components must work with CPython 3.8.10. Also, Python components
+must work with the Linux version that is used by the OpenMPF Component
+Executable. At this writing, OpenMPF runs on
+Ubuntu 20.04 (kernel version 5.13.0-30). Pure Python code should work on any
+OS, but incompatibility issues can arise when using Python libraries that
+include compiled extension modules. Python libraries are typically distributed
+as wheel files. The wheel format requires that the file name follows the pattern
+of `<dist_name>-<version>-<python_tag>-<abi_tag>-<platform_tag>.whl`.
+`<python_tag>-<abi_tag>-<platform_tag>` are called
+[compatibility tags](https://www.python.org/dev/peps/pep-0425). For example,
+`mpf_component_api` is pure Python, so the name of its wheel file is
+`mpf_component_api-0.1-py3-none-any.whl`. `py3` means it will work with any
+Python 3 implementation because it does not use any implementation-specific
+features. `none` means that it does not use the Python ABI. `any` means it will
work on any platform.
The following combinations of compatibility tags are supported:
@@ -1227,7 +1211,7 @@ The following combinations of compatibility tags are supported:
* `py31-none-any`
* `py30-none-any`
-The list above was generated with the following command:
+The list above was generated with the following command:
`python3 -c 'import pip._internal.pep425tags as tags; print("\n".join(str(t) for t in tags.get_supported()))'`
Components should be supplied as a tar file, which includes not only the component library, but any other libraries or
@@ -1248,16 +1232,16 @@ OpenMPF components should be stateless in operation and give identical output fo
## Logging
-It recommended that components use Python's built-in
-[`logging` module.](https://docs.python.org/3/library/logging.html) The component should
-`import logging` and call `logging.getLogger('<component_name>')` to get a logger instance.
-The component should not configure logging itself. The Component Executor will configure the
-`logging` module for the component. The logger will write log messages to standard error and
-`${MPF_LOG_PATH}/${THIS_MPF_NODE}/log/<component_name>.log`. Note that multiple instances of the
-same component can log to the same file. Also, logging content can span multiple lines.
-
-The following log levels are supported: `FATAL, ERROR, WARN, INFO, DEBUG`.
-The `LOG_LEVEL` environment variable can be set to one of the log levels to change the logging
+It is recommended that components use Python's built-in
+[`logging` module.](https://docs.python.org/3/library/logging.html) The component should
+`import logging` and call `logging.getLogger('<component_name>')` to get a logger instance.
+The component should not configure logging itself. The Component Executor will configure the
+`logging` module for the component. The logger will write log messages to standard error and
+`${MPF_LOG_PATH}/${THIS_MPF_NODE}/log/<component_name>.log`. Note that multiple instances of the
+same component can log to the same file. Also, logging content can span multiple lines.
+
+The following log levels are supported: `FATAL, ERROR, WARN, INFO, DEBUG`.
+The `LOG_LEVEL` environment variable can be set to one of the log levels to change the logging
verbosity. When `LOG_LEVEL` is absent, `INFO` is used.
The format of the log messages is:
diff --git a/docs/docs/Trigger-Guide.md b/docs/docs/Trigger-Guide.md
new file mode 100644
index 000000000000..8ca15bba68cb
--- /dev/null
+++ b/docs/docs/Trigger-Guide.md
@@ -0,0 +1,235 @@
+**NOTICE:** This software (or technical data) was produced for the U.S. Government under contract,
+and is subject to the Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023
+The MITRE Corporation. All Rights Reserved.
+
+
+# Trigger Overview
+
+The `TRIGGER` property enables pipelines that use [feed forward](Feed-Forward-Guide) to have
+pipeline stages that only process certain tracks based on their track properties. It can be used
+to select the best algorithm when there are multiple similar algorithms that each perform better
+under certain circumstances. It can also be used to iteratively filter down tracks at each stage of
+a pipeline.
+
+
+# Syntax
+
+The syntax for the `TRIGGER` property is: `<track property>=<value>[;<another value>...]`.
+The left hand side of the equals sign is the name of the track property that will be used to determine
+if a track matches the trigger. The right hand side specifies the required value for that track
+property. More than one value can be specified by separating the values with a semicolon. When
+multiple values are specified, the track property must match any one of them.
+If a value needs to match track property content that contains a semicolon or backslash, those
+characters must be escaped with a leading backslash. For example, `CLASSIFICATION=dog;cat` will match
+"dog" or "cat". `CLASSIFICATION=dog\;cat` will match "dog;cat". `CLASSIFICATION=dog\\cat` will
+match "dog\cat". When specifying a trigger in JSON it will need to be [doubly escaped](#json-escaping).
+
+
+# Algorithm Selection Using Triggers
+
+The example pipeline below will be used to describe the way that the Workflow Manager uses the
+`TRIGGER` property. Each task in the pipeline is composed of one action, so only the actions are
+shown. Note that this is a hypothetical pipeline and not intended for use in a real deployment.
+
+1. WHISPER SPEECH LANGUAGE DETECTION ACTION
+ - (No TRIGGER)
+2. SPHINX SPEECH DETECTION ACTION
+ - TRIGGER: `ISO_LANGUAGE=eng`
+ - FEED_FORWARD_TYPE: `REGION`
+3. WHISPER SPEECH DETECTION ACTION
+ - TRIGGER: `ISO_LANGUAGE=spa`
+ - FEED_FORWARD_TYPE: `REGION`
+4. ARGOS TRANSLATION ACTION
+ - TRIGGER: `ISO_LANGUAGE=spa`
+ - FEED_FORWARD_TYPE: `REGION`
+5. KEYWORD TAGGING ACTION
+ - (No TRIGGER)
+ - FEED_FORWARD_TYPE: `REGION`
+
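+Triggers and feed-forward behavior are configured as properties on the actions listed above. As a
+rough sketch, the stage 2 settings could also be supplied at job time through `algorithmProperties`
+(the `SPHINX` algorithm name here is a placeholder and may not match the name used in a real
+deployment):
+
+```json
+{ "algorithmProperties": { "SPHINX": { "TRIGGER": "ISO_LANGUAGE=eng", "FEED_FORWARD_TYPE": "REGION" } } }
+```
+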
+The pipeline can be represented as a flow chart:
+
+![Triggers Dynamic Speech Full Diagram](img/triggers-dynamic-speech-full.png "Triggers Dynamic Speech Full Diagram")
+
+The goal of this pipeline is to determine if someone in an audio file, or the audio of a video file,
+says a keyword that the user is interested in. The complication is that the input file could either
+be in English, Spanish, or another language the user is not interested in. Spanish audio must be
+translated to English before looking for keywords.
+
+We are going to pretend that Whisper language detection can return multiple tracks, one per language
+detected in the audio, although in reality it is limited to detecting one language for the entire
+piece of media. Also, the user wants to use Sphinx for transcribing English audio, because we are
+pretending that Sphinx performs better than Whisper on English audio, and the user wants to use
+Whisper for transcribing Spanish audio.
+
+The first stage should not have a trigger condition. If one is set, it will be ignored. The
+Workflow Manager will take all of the tracks generated by stage 1 and determine if the trigger
+condition for stage 2 is met. This trigger condition is shown by the topmost orange diamond. In this
+case, if stage 1 detected the language as English and set `ISO_LANGUAGE` to `eng`, then those
+tracks are fed into the second stage. This is shown by the green arrow pointing to the stage 2 box.
+
+If any of the Whisper tracks do not meet the condition for stage 2, they are later considered
+as possible inputs to stage 3. This is shown by the red arrow coming out of the stage 2 trigger
+diamond pointing down to the stage 3 trigger diamond.
+
+The Workflow Manager will take all of the tracks generated by stage 2, the
+`SPHINX SPEECH DETECTION ACTION`, as well as the tracks that didn't satisfy the stage 2 trigger, and
+determine if the trigger condition for stage 3 is met.
+
+Note that the Sphinx component does not generate tracks with the `ISO_LANGUAGE` property, so
+it's not possible for tracks coming out of stage 2 to satisfy the stage 3 trigger. They will later
+flow down to the stage 4 trigger, and because it has the same condition as the stage 3 trigger, the
+Sphinx tracks cannot satisfy that trigger either.
+
+Even if the Sphinx component did generate tracks with the `ISO_LANGUAGE` property, it would be set
+to `eng` and would not satisfy the `spa` condition (they are mutually exclusive). Either way,
+eventually the tracks from stage 2 will flow into stage 5.
+
+The Workflow Manager will take all of the tracks generated by stage 3, the
+`WHISPER SPEECH DETECTION ACTION`, as well as the tracks that did not satisfy the stage 2 and 3
+triggers, and determine if the trigger condition for stage 4 is met. All of the tracks produced by
+stage 3 will have the `ISO_LANGUAGE` property set to `spa`, because the stage 3 trigger only
+matched Spanish tracks and when Whisper performs transcription, it sets the `ISO_LANGUAGE` property.
+Since the stage 4 trigger, like the stage 3 trigger, is `ISO_LANGUAGE=spa`, all of the tracks
+produced by stage 3 will be fed into stage 4.
+
+The Workflow Manager will take all of the tracks generated by stage 4, the
+`ARGOS TRANSLATION (WITH FF REGION) ACTION`, as well as the tracks that did not satisfy the stage 2,
+3, or 4 triggers, and determine if the trigger condition for stage 5 is met. Stage 5 has no trigger
+condition, so all of those tracks flow into stage 5 by default.
+
+The above diagram can be simplified as follows:
+
+![Triggers Dynamic Speech Simple Diagram](img/triggers-dynamic-speech-simple.png "Triggers Dynamic Speech Simple Diagram")
+
+In this diagram the trigger diamonds have been replaced with the orange boxes at the top of each
+stage. Also, all of the arrows for flows that are not logically possible have been removed,
+leaving only arrows that flow from one stage to another.
+
+What remains shows that this pipeline has three main flows of execution:
+
+1. English audio is transcribed by the Sphinx component and then processed by keyword tagging.
+2. Spanish audio is transcribed by the Whisper component, translated by the Argos component, and
+ then processed by keyword tagging.
+3. All other languages are not transcribed and those tracks pass directly to keyword tagging. Since
+ there is no transcript to look at, keyword tagging essentially ignores them.
+
+
+## Further Understanding
+
+In general, triggers work as a mechanism to decide which tracks are passed forward to later stages
+of a pipeline. It is important to note that not only are the tracks from the previous stage
+considered, but also tracks from stages that were not fed into any previous stage.
+
+For example, if only the Sphinx tracks from stage 2 were passed to Whisper stage 3, then stage 3
+would never be triggered. This is because Sphinx tracks don't have an `ISO_LANGUAGE` property. Even
+if they did have that property, it would be set to `eng`, not `spa`, which would not satisfy the
+stage 3 trigger. This mutual exclusion is by design. Both stages perform speech-to-text. Tracks
+from stage 1 should only be processed by one speech-to-text algorithm (i.e. one `SPEECH DETECTION`
+stage). Both algorithms should be considered, but only one should be selected based on the language.
+To accomplish this, tracks from stage 1 that don't trigger stage 2 are considered as possible inputs
+to stage 3.
+
+Additionally, it's important to note that when a stage is triggered, the tracks passed into that
+stage are no longer considered for later stages. Instead, the tracks generated by that stage can be
+passed to later stages.
+
+For example, the Argos algorithm in stage 4 should only accept tracks with Spanish transcripts. If
+all of the tracks generated in prior stages could be passed to stage 4, then the `spa` tracks
+generated in stage 1 would trigger stage 4. Since those have not passed through the Whisper
+speech-to-text stage 3, they would not have a transcript to translate.
+
+
+# Filtering Using Triggers
+
+The pipeline in the previous section shows an example of how triggers can be used to conditionally
+execute or skip stages in a pipeline. Triggers can also be useful when all stages get triggered. In
+cases like that, the individual triggers are logically `AND`ed together. This allows you to produce
+pipelines that search for very specific things.
+
+Consider the example pipeline defined below. Again, each task in the pipeline is composed of one
+action, so only the actions are shown. Also, note that this is a hypothetical pipeline and not
+intended for use in a real deployment:
+
+1. OCV YOLO OBJECT DETECTION ACTION
+ - (No TRIGGER)
+2. CAFFE GOOGLENET DETECTION ACTION
+ - TRIGGER: `CLASSIFICATION=truck`
+ - FEED_FORWARD_TYPE: `REGION`
+3. TENSORFLOW VEHICLE COLOR DETECTION ACTION
+ - TRIGGER: `CLASSIFICATION=ice cream, icecream;ice lolly, lolly, lollipop, popsicle`
+ - FEED_FORWARD_TYPE: `REGION`
+4. OALPR LICENSE PLATE TEXT DETECTION ACTION
+ - TRIGGER: `CLASSIFICATION=blue`
+ - FEED_FORWARD_TYPE: `REGION`
+
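+Note that the stage 3 trigger contains two values separated by a semicolon:
+`ice cream, icecream` and `ice lolly, lolly, lollipop, popsicle`. Commas and spaces are matched
+literally, so no escaping is needed. As a rough sketch, supplying that trigger in a job request
+(using a hypothetical `GOOGLENET` algorithm name) would look like:
+
+```json
+{ "algorithmProperties": { "GOOGLENET": { "TRIGGER": "CLASSIFICATION=ice cream, icecream;ice lolly, lolly, lollipop, popsicle" } } }
+```
+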
+The pipeline can be represented as a flow chart:
+
+![Triggers YOLO Full Diagram](img/triggers-yolo-full.png "Triggers YOLO Full Diagram")
+
+The goal of this pipeline is to extract the license plate numbers for all blue trucks that have
+photos of ice cream or popsicles on their exterior.
+
+Stage 2 and 3 do not generate new detection regions. Instead, they generate tracks using the same
+detection regions in the feed-forward tracks. Specifically, if YOLO generates `truck` tracks in
+stage 1, then those tracks will be fed into stage 2. In that stage, GoogLeNet will process the
+truck region to determine the ImageNet class with the highest confidence. If that class corresponds
+to ice cream or popsicle, those tracks will be fed into stage 3, which will operate on the same
+truck region to determine the vehicle color. Tracks corresponding to `blue` trucks will be fed
+into stage 4, which will try to detect the license plate region and text. OALPR will operate on
+the same truck region passed forward all of the way from YOLO in stage 1.
+
+Tracks generated by any stage in the pipeline that don't meet the three trigger criteria do not
+flow into the final license plate detection stage, and are therefore unused.
+
+It's important to note that the possible `CLASSIFICATION` values generated by stages 1, 2, and 3 are
+mutually exclusive. This means, for example, that YOLO will not generate a `blue` track in stage 1
+that will later satisfy the trigger for stage 4.
+
+Also, note that stages 1, 2, and 3 can all accept an optional `WHITELIST_FILE` property that can be
+used to discard tracks with a `CLASSIFICATION` not listed in that file. It is possible to recreate
+the behavior of the above pipeline without using triggers and instead only using whitelist files to
+ensure each of those stages can only generate the track types the user is interested in. The
+disadvantage of the whitelist approach is that the final JSON output object will not contain all of
+the YOLO tracks, only `truck` tracks. Using triggers is better when a user wants to know about those
+other track types. Using triggers also enables a user to create a version of this pipeline where
+`person` tracks from YOLO are fed into OpenCV face detection. `person` is just an example of one other type of
+YOLO track a user might be interested in.
+
+
+The above diagram can be simplified as follows:
+
+![Triggers YOLO Simple Diagram](img/triggers-yolo-simple.png "Triggers YOLO Simple Diagram")
+
+Removing all of the flows that aren't logically possible, or that result in unused tracks, leaves
+only one flow that passes through all of the stages. Again, this flow essentially `AND`s the
+trigger conditions together.
+
+
+# JSON Escaping
+
+Job properties are often defined using JSON, and track properties appear in the JSON output
+object. Since the `TRIGGER` property and JSON both use backslash as their escape character, a
+trigger specified in JSON must be doubly escaped.
+
+If the job request contains this JSON fragment:
+```json
+{ "algorithmProperties": { "DNNCV": {"TRIGGER": "CLASS=dog;cat"} } }
+```
+it will match either "dog" or "cat", but not "dog;cat".
+
+
+This JSON fragment:
+```json
+{ "algorithmProperties": { "DNNCV": {"TRIGGER": "CLASS=dog\\;cat"} } }
+```
+would only match "dog;cat".
+
+This JSON fragment:
+```json
+{ "algorithmProperties": { "DNNCV": {"TRIGGER": "CLASS=dog\\\\cat"} } }
+```
+would only match "dog\cat". The track property in the JSON output object would appear as:
+```json
+{ "trackProperties": { "CLASSIFICATION": "dog\\cat" } }
+```
diff --git a/docs/docs/img/triggers-dynamic-speech-full.png b/docs/docs/img/triggers-dynamic-speech-full.png
new file mode 100644
index 000000000000..645dd694e662
Binary files /dev/null and b/docs/docs/img/triggers-dynamic-speech-full.png differ
diff --git a/docs/docs/img/triggers-dynamic-speech-simple.png b/docs/docs/img/triggers-dynamic-speech-simple.png
new file mode 100644
index 000000000000..b41697516f9c
Binary files /dev/null and b/docs/docs/img/triggers-dynamic-speech-simple.png differ
diff --git a/docs/docs/img/triggers-yolo-full.png b/docs/docs/img/triggers-yolo-full.png
new file mode 100644
index 000000000000..7a98848f3cdd
Binary files /dev/null and b/docs/docs/img/triggers-yolo-full.png differ
diff --git a/docs/docs/img/triggers-yolo-simple.png b/docs/docs/img/triggers-yolo-simple.png
new file mode 100644
index 000000000000..6d742a4312b9
Binary files /dev/null and b/docs/docs/img/triggers-yolo-simple.png differ
diff --git a/docs/mkdocs.yml b/docs/mkdocs.yml
index 0eff10635daa..92ddac343bd8 100644
--- a/docs/mkdocs.yml
+++ b/docs/mkdocs.yml
@@ -25,6 +25,7 @@ pages:
- Object Storage Guide: Object-Storage-Guide.md
- Markup Guide: Markup-Guide.md
- TiesDb Guide: TiesDb-Guide.md
+ - Trigger Guide: Trigger-Guide.md
- REST API: REST-API.md
- Component Development:
- Component API Overview: Component-API-Overview.md
diff --git a/docs/site/404.html b/docs/site/404.html
index 29732762964d..21f6faa456e9 100644
--- a/docs/site/404.html
+++ b/docs/site/404.html
@@ -106,6 +106,10 @@
Returns the type of object detected by the component.
-
-
Function Definition:
-
-
string GetDetectionType()
-
-
-
-
Parameters: none
-
-
-
Returns: (string) The type of object detected by the component. Should be in all CAPS. Examples include: FACE, MOTION, PERSON, SPEECH, CLASS (for object classification), or TEXT.
Used to detect objects in an image file. The MPFImageJob structure contains
+
Used to detect objects in an image file. The MPFImageJob structure contains
the data_uri specifying the location of the image file.
-
Currently, the data_uri is always a local file path. For example, "/opt/mpf/share/remote-media/test-file.jpg".
+
Currently, the data_uri is always a local file path. For example, "/opt/mpf/share/remote-media/test-file.jpg".
This is because all media is copied to the OpenMPF server before the job is executed.
Function Definition:
@@ -601,9 +583,9 @@
GetDetections(MPFImageJob …)
Returns: (std::vector<MPFImageLocation>) The MPFImageLocation data for each detected object.
GetDetections(MPFVideoJob …)
-
Used to detect objects in a video file. Prior to being sent to the component, videos are split into logical "segments"
-of video data and each segment (containing a range of frames) is assigned to a different job. Components are not
-guaranteed to receive requests in any order. For example, the first request processed by a component might receive
+
Used to detect objects in a video file. Prior to being sent to the component, videos are split into logical "segments"
+of video data and each segment (containing a range of frames) is assigned to a different job. Components are not
+guaranteed to receive requests in any order. For example, the first request processed by a component might receive
a request for frames 300-399 of a Video A, while the next request may cover frames 900-999 of a Video B.
Function Definition:
@@ -630,10 +612,10 @@
GetDetections(MPFVideoJob …)
-
Returns: (std::vector<MPFVideoTrack>) The MPFVideoTrack data for each detected object.
+
Returns: (std::vector<MPFVideoTrack>) The MPFVideoTrack data for each detected object.
GetDetections(MPFAudioJob …)
-
Used to detect objects in an audio file. Currently, audio files are not logically segmented, so a job will contain
+
Used to detect objects in an audio file. Currently, audio files are not logically segmented, so a job will contain
the entirety of the audio file.
Function Definition:
@@ -663,7 +645,7 @@
GetDetections(MPFAudioJob …)
Returns: (std::vector<MPFAudioTrack>) The MPFAudioTrack data for each detected object.
GetDetections(MPFGenericJob …)
-
Used to detect objects in files that aren't video, image, or audio files. Such files are of the UNKNOWN type and
+
Used to detect objects in files that aren't video, image, or audio files. Such files are of the UNKNOWN type and
handled generically. These files are not logically segmented, so a job will contain the entirety of the file.
Function Definition:
@@ -752,12 +734,12 @@
MPFJob
-
Job properties can also be set through environment variables prefixed with MPF_PROP_. This allows
-users to set job properties in their
-docker-compose files.
-These will take precedence over all other property types (job, algorithm, media, etc). It is not
-possible to change the value of properties set via environment variables at runtime and therefore
-they should only be used to specify properties that will not change throughout the entire lifetime
+
Job properties can also be set through environment variables prefixed with MPF_PROP_. This allows
+users to set job properties in their
+docker-compose files.
+These will take precedence over all other property types (job, algorithm, media, etc). It is not
+possible to change the value of properties set via environment variables at runtime and therefore
+they should only be used to specify properties that will not change throughout the entire lifetime
of the service (e.g. Docker container).
A component that performs generic object classification can add an entry to detection_properties where the key is CLASSIFICATION and the value is the type of object detected.
The Workflow Manager performs the following algorithm to draw the bounding box when generating markup:
- Draw the rectangle ignoring rotation and flip.
+ Draw the rectangle ignoring rotation and flip.
- Rotate the rectangle counter-clockwise the given number of degrees around its top left corner.
+ Rotate the rectangle counter-clockwise the given number of degrees around its top left corner.
@@ -1172,15 +1154,15 @@
Rotation and Horizontal Flip
Step 1 is drawn in red. Step 2 is drawn in blue. Step 3 and the final result is drawn in green.
The detection for the image above is:
Note that the x_left_upper, y_left_upper, width, and height values describe the red rectangle. The addition
of the ROTATION property results in the blue rectangle, and the addition of the HORIZONTAL_FLIP property results
-in the green rectangle.
+in the green rectangle.
One way to think about the process is "draw the unrotated and unflipped rectangle, stick a pin in the upper left corner,
and then rotate and flip around the pin".
Rotation-Only Example
@@ -1188,20 +1170,20 @@
Rotation-Only Example
The Workflow Manager generated the above image by performing markup on the original image with the following
detection:
The markup process followed steps 1 and 2 in the previous section, skipping step 3 because there is no
-HORIZONTAL_FLIP.
+HORIZONTAL_FLIP.
In order to properly extract the detection region from the original image, such as when generating an artifact, you
would need to rotate the region in the above image 90 degrees clockwise around the cyan dot currently shown in the
-bottom-left corner so that the face is in the proper upright position.
+bottom-left corner so that the face is in the proper upright position.
When the rotation is properly corrected in this way, the cyan dot will appear in the top-left corner of the bounding
box. That is why its position is described using the x_left_upper, and y_left_upper variables. They refer to the
-top-left corner of the correctly oriented region.
+top-left corner of the correctly oriented region.
MPFVideoTrack
Structure used to store the location of detected objects in a video file.
@@ -1353,7 +1335,7 @@
MPFGenericTrack
Exception Types
MPFDetectionException
-
Exception that should be thrown by the GetDetections() methods when an error occurs.
+
Exception that should be thrown by the GetDetections() methods when an error occurs.
The content of the error_code and what() members will appear in the JSON output object.
Constructors:
@@ -1384,7 +1366,7 @@
MPFDetectionException
Enumeration Types
MPFDetectionError
-
Enum used to indicate the type of error that occurred in a GetDetections() method. It is used as a parameter to
+
Enum used to indicate the type of error that occurred in a GetDetections() method. It is used as a parameter to
the MPFDetectionException constructor. A component is not required to support all error types.
@@ -1463,11 +1445,11 @@
MPFDetectionError
Utility Classes
For convenience, the OpenMPF provides the MPFImageReader (source) and MPFVideoCapture (source) utility classes to perform horizontal flipping, rotation, and cropping to a region of interest. Note, that when using these classes, the component will also need to utilize the class to perform a reverse transform to convert the transformed pixel coordinates back to the original (e.g. pre-flipped, pre-rotated, and pre-cropped) coordinate space.
C++ Component Build Environment
-
A C++ component library must be built for the same C++ compiler and Linux
-version that is used by the OpenMPF Component Executable. This is to ensure
-compatibility between the executable and the library functions at the
-Application Binary Interface (ABI) level. At this writing, the OpenMPF runs on
-Ubuntu 20.04 (kernel version 5.13.0-30), and the OpenMPF C++ Component
+
A C++ component library must be built for the same C++ compiler and Linux
+version that is used by the OpenMPF Component Executable. This is to ensure
+compatibility between the executable and the library functions at the
+Application Binary Interface (ABI) level. At this writing, the OpenMPF runs on
+Ubuntu 20.04 (kernel version 5.13.0-30), and the OpenMPF C++ Component
Executable is built with g++ (GCC) 9.3.0-17.
Components should be supplied as a tar file, which includes not only the component library, but any other libraries or files needed for execution. This includes all other non-standard libraries used by the component (aside from the standard Linux and C++ libraries), and any configuration or data files.
Component Development Best Practices
@@ -1488,18 +1470,18 @@
Component Structure for
Once built, components should be packaged into a .tar.gz containing the contents of the directory shown above.
Logging
-
It is recommended to use Apache log4cxx for
-OpenMPF Component logging. Components using log4cxx should not configure logging themselves.
-The Component Executor will configure log4cxx globally. Components should call
-log4cxx::Logger::getLogger("<componentName>") to a get a reference to the logger. If you
+
It is recommended to use Apache log4cxx for
+OpenMPF Component logging. Components using log4cxx should not configure logging themselves.
+The Component Executor will configure log4cxx globally. Components should call
+log4cxx::Logger::getLogger("<componentName>") to a get a reference to the logger. If you
are using a different logging framework, you should make sure its behavior is similar to how
-the Component Executor configures log4cxx as described below.
+the Component Executor configures log4cxx as described below.
The following log LEVELs are supported: FATAL, ERROR, WARN, INFO, DEBUG, TRACE.
-The LOG_LEVEL environment variable can be set to one of the log levels to change the logging
+The LOG_LEVEL environment variable can be set to one of the log levels to change the logging
verbosity. When LOG_LEVEL is absent, INFO is used.
-
Note that multiple instances of the same component can log to the same file.
+
Note that multiple instances of the same component can log to the same file.
Also, logging content can span multiple lines.
-
The logger will write to both standard error and
+
The logger will write to both standard error and
${MPF_LOG_PATH}/${THIS_MPF_NODE}/log/<componentName>.log.
Each log statement will take the form:
DATE TIME LEVEL CONTENT
Returns the type of object detected by the component.
-
-
Function Definition:
-
-
string GetDetectionType()
-
-
-
-
Parameters: none
-
-
-
Returns: (string) The type of object detected by the component. Should be in all CAPS. Examples include: FACE, MOTION, PERSON, CLASS (for object classification), or TEXT.
Indicate the beginning of a new video segment. The next call to ProcessFrame() will be the first frame of the new segment. ProcessFrame() will never be called before this function.
@@ -455,7 +437,7 @@
ProcessFrame(Mat ...)
Note that this function may not be invoked for every frame in the current segment. For example, if FRAME_INTERVAL = 2, then this function will only be invoked for every other frame since those are the only ones that need to be processed.
Also, it may not be invoked for the first nor last frame in the segment. For example, if FRAME_INTERVAL = 3 and the segment size is 10, then it will be invoked for frames {0, 3, 6, 9} for the first segment, and frames {12, 15, 18} for the second segment.
-
Function Definition:
+
Function Definition:
bool ProcessFrame(const cv::Mat &frame, int frame_number)
@@ -494,7 +476,7 @@
ProcessFrame(Mat ...)
bool SampleComponent::ProcessFrame(const cv::Mat &frame, int frame_number) {
// Look for detections. Generate tracks and store them until the end of the segment.
if (started_first_track_in_current_segment) {
- return true;
+ return true;
} else {
return false;
}
@@ -804,18 +786,18 @@
Component Structure
Once built, components should be packaged into a .tar.gz containing the contents of the directory shown above.
Logging
-
It is recommended to use Apache log4cxx for
-OpenMPF Component logging. Components using log4cxx should not configure logging themselves.
-The Component Executor will configure log4cxx globally. Components should call
-log4cxx::Logger::getLogger("<componentName>") to a get a reference to the logger. If you
+
It is recommended to use Apache log4cxx for
+OpenMPF Component logging. Components using log4cxx should not configure logging themselves.
+The Component Executor will configure log4cxx globally. Components should call
+log4cxx::Logger::getLogger("<componentName>") to a get a reference to the logger. If you
are using a different logging framework, you should make sure its behavior is similar to how
-the Component Executor configures log4cxx as described below.
+the Component Executor configures log4cxx as described below.
The following log LEVELs are supported: FATAL, ERROR, WARN, INFO, DEBUG, TRACE.
-The LOG_LEVEL environment variable can be set to one of the log levels to change the logging
+The LOG_LEVEL environment variable can be set to one of the log levels to change the logging
verbosity. When LOG_LEVEL is absent, INFO is used.
-
Note that multiple instances of the same component can log to the same file.
+
Note that multiple instances of the same component can log to the same file.
Also, logging content can span multiple lines.
-
The logger will write to both standard error and
+
The logger will write to both standard error and
${MPF_LOG_PATH}/${THIS_MPF_NODE}/log/<componentName>.log.
Each log statement will take the form:
DATE TIME LEVEL CONTENT
Required. Defines the type of processing that the algorithm performs. Must be set to DETECTION.
+
trackType:
+ Required. The type of object detected by the component. Should be in all CAPS. Examples
+ include: FACE, MOTION, PERSON, SPEECH, CLASS (for object classification), or TEXT.
+
+
outputChangedCounter:
Optional. An integer that should be incremented when the component is changed in a way that
would cause it to produce different output.
Returns the type of object detected by the component.
-
-
Method Definition:
-
-
public String getDetectionType()
-
-
-
-
Parameters: none
-
-
-
Returns: (String) The type of object detected by the component. Should be in all CAPS. Examples include: FACE, MOTION, PERSON, SPEECH, CLASS (for object classification), or TEXT.
-
-
-
Example:
-
-
-
public String getDetectionType() {
- return "FACE";
-}
-
getDetections(MPFImageJob)
Used to detect objects in image files. The MPFImageJob class contains the URI specifying the location of the image file.
Currently, the dataUri is always a local file path. For example, "/opt/mpf/share/remote-media/test-file.jpg". This is because all media is copied to the OpenMPF server before the job is executed.
@@ -933,7 +915,7 @@
MPFVideoJob
stopFrame
int
The last frame number (0-based index) of the video that should be processed to look for detections.
-
+
jobProperties
Map<String, String>
@@ -1026,7 +1008,7 @@
MPFAudioJob
stopTime
int
The time (0-based index, in ms) associated with the end of the segment of the audio file that should be processed to look for detections.
-
+
jobProperties
Map<String, String>
@@ -1103,7 +1085,7 @@
MPFGenericJob
stopTime
int
The time (0-based index, in ms) associated with the end of the segment of the audio file that should be processed to look for detections.
The Component Executable receives and parses requests from the WFM, invokes methods on the Component Logic to get
detection objects, and subsequently populates responses with the component output and sends them to the WFM.
A component developer implements a detection component by creating a class that defines one or more of the
-get_detections_from_* methods and has a detection_type field.
-See the API Specification for more information.
+get_detections_from_* methods. See the API Specification for more information.
The figures below present high-level component diagrams of the Python Batch Component API.
This figure shows the basic structure:
@@ -476,7 +476,6 @@
How to Create a Setup
logger = logging.getLogger('MyComponent')
class MyComponent(mpf_util.VideoCaptureMixin):
- detection_type = 'FACE'
@staticmethod
def get_detections_from_video_capture(video_job, video_capture):
@@ -548,7 +547,6 @@
How to Create a Basic Python Com
logger = logging.getLogger('MyComponent')
class MyComponent:
- detection_type = 'FACE'
@staticmethod
def get_detections_from_video(video_job):
@@ -576,8 +574,7 @@
An OpenMPF Python component is a class that defines one or more of the get_detections_from_* methods and has a
-detection_type field.
+
An OpenMPF Python component is a class that defines one or more of the get_detections_from_* methods.
component.get_detections_from_* methods
All get_detections_from_* methods are invoked through an instance of the component class. The only parameter passed
in is an appropriate job object (e.g. mpf_component_api.ImageJob, mpf_component_api.VideoJob). Since the methods
@@ -605,16 +602,6 @@
component.get_detections_from_* m
All get_detections_from_* methods must return an iterable of the appropriate detection type
(e.g. mpf_component_api.ImageLocation, mpf_component_api.VideoTrack). The return value is normally a list or generator,
but any iterable can be used.
-
component.detection_type
-
-
str field describing the type of object that is detected by the component. Should be in all CAPS.
-Examples include: FACE, MOTION, PERSON, SPEECH, CLASS (for object classification), or TEXT.
-
Example:
-
-
class MyComponent:
- detection_type = 'FACE'
-
-
Image API
component.get_detections_from_image(image_job)
Used to detect objects in an image file.
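To make the contract above concrete, here is a minimal sketch of a hypothetical component, assuming the standard mpf_component_api job and detection classes described in this API; the bounding box, confidence, and property values are placeholders, not real detection logic.

```python
import mpf_component_api as mpf


class MyComponent:

    @staticmethod
    def get_detections_from_image(image_job):
        # image_job.data_uri is the local path to the image file and
        # image_job.job_properties is a dict of the job's properties.

        # Return an iterable of mpf_component_api.ImageLocation objects.
        # Arguments: x_left_upper, y_left_upper, width, height, confidence,
        # detection_properties. The values below are placeholders only.
        return [mpf.ImageLocation(0, 0, 100, 50, 0.75,
                                  {'CLASSIFICATION': 'FACE'})]
```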
@@ -897,12 +884,12 @@
mpf_component_api.VideoJob
start_frame
int
The first frame number (0-based index) of the video that should be processed to look for detections.
-
+
stop_frame
int
The last frame number (0-based index) of the video that should be processed to look for detections.
-
+
job_properties
dict[str, str]
@@ -1157,7 +1144,7 @@
mpf_component_api.AudioJob
stop_time
int
The time (0-based index, in milliseconds) associated with the end of the segment of the audio file that should be processed to look for detections.
-
+
job_properties
dict[str, str]
@@ -1382,20 +1369,20 @@
How to Report Errors
the MPF_ prefix. You can replace the MISSING_PROPERTY part in the above code with any other error type. When
generating an exception, choose the type that best describes your error.
Python Component Build Environment
-
All Python components must work with CPython 3.8.10. Also, Python components
-must work with the Linux version that is used by the OpenMPF Component
-Executable. At this writing, OpenMPF runs on
-Ubuntu 20.04 (kernel version 5.13.0-30). Pure Python code should work on any
-OS, but incompatibility issues can arise when using Python libraries that
-include compiled extension modules. Python libraries are typically distributed
-as wheel files. The wheel format requires that the file name follows the pattern
-of <dist_name>-<version>-<python_tag>-<abi_tag>-<platform_tag>.whl.
-<python_tag>-<abi_tag>-<platform_tag> are called
-compatibility tags. For example,
-mpf_component_api is pure Python, so the name of its wheel file is
-mpf_component_api-0.1-py3-none-any.whl. py3 means it will work with any
-Python 3 implementation because it does not use any implementation-specific
-features. none means that it does not use the Python ABI. any means it will
+
All Python components must work with CPython 3.8.10. Also, Python components
+must work with the Linux version that is used by the OpenMPF Component
+Executable. At this writing, OpenMPF runs on
+Ubuntu 20.04 (kernel version 5.13.0-30). Pure Python code should work on any
+OS, but incompatibility issues can arise when using Python libraries that
+include compiled extension modules. Python libraries are typically distributed
+as wheel files. The wheel format requires that the file name follows the pattern
+of <dist_name>-<version>-<python_tag>-<abi_tag>-<platform_tag>.whl.
+<python_tag>-<abi_tag>-<platform_tag> are called
+compatibility tags. For example,
+mpf_component_api is pure Python, so the name of its wheel file is
+mpf_component_api-0.1-py3-none-any.whl. py3 means it will work with any
+Python 3 implementation because it does not use any implementation-specific
+features. none means that it does not use the Python ABI. any means it will
work on any platform.
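As a purely illustrative sketch (not part of OpenMPF tooling), the compatibility tags can be read off a wheel file name by splitting on the final three dashes, since the python, ABI, and platform tags are always the last three fields:

```python
def wheel_compatibility_tags(wheel_filename):
    """Return (python_tag, abi_tag, platform_tag) for a wheel file name.

    Assumes the standard <dist>-<version>[-<build>]-<py>-<abi>-<platform>.whl
    pattern described above; this is a simple illustration, not a full parser.
    """
    stem = wheel_filename[:-len('.whl')]
    # The last three dash-separated fields are always the compatibility tags.
    python_tag, abi_tag, platform_tag = stem.rsplit('-', 3)[1:]
    return python_tag, abi_tag, platform_tag


print(wheel_compatibility_tags('mpf_component_api-0.1-py3-none-any.whl'))
# ('py3', 'none', 'any')
```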
The following combinations of compatibility tags are supported:
@@ -1487,7 +1474,7 @@
Python Component Build Environment
py31-none-any
py30-none-any
-
The list above was generated with the following command:
+
The list above was generated with the following command:
python3 -c 'import pip._internal.pep425tags as tags; print("\n".join(str(t) for t in tags.get_supported()))'
Components should be supplied as a tar file, which includes not only the component library, but any other libraries or
files needed for execution. This includes all other non-standard libraries used by the component
@@ -1500,15 +1487,15 @@
Stateless Behavior
OpenMPF components should be stateless in operation and give identical output for a provided input
(i.e. when processing the same job).
Logging
-
It recommended that components use Python's built-in
-logging module. The component should
-import logging and call logging.getLogger('<componentName>') to get a logger instance.
-The component should not configure logging itself. The Component Executor will configure the
-logging module for the component. The logger will write log messages to standard error and
-${MPF_LOG_PATH}/${THIS_MPF_NODE}/log/<componentName>.log. Note that multiple instances of the
-same component can log to the same file. Also, logging content can span multiple lines.
-
The following log levels are supported: FATAL, ERROR, WARN, INFO, DEBUG.
-The LOG_LEVEL environment variable can be set to one of the log levels to change the logging
+
It is recommended that components use Python's built-in
+logging module. The component should
+import logging and call logging.getLogger('<componentName>') to get a logger instance.
+The component should not configure logging itself. The Component Executor will configure the
+logging module for the component. The logger will write log messages to standard error and
+${MPF_LOG_PATH}/${THIS_MPF_NODE}/log/<componentName>.log. Note that multiple instances of the
+same component can log to the same file. Also, logging content can span multiple lines.
+
The following log levels are supported: FATAL, ERROR, WARN, INFO, DEBUG.
+The LOG_LEVEL environment variable can be set to one of the log levels to change the logging
verbosity. When LOG_LEVEL is absent, INFO is used.
The format of the log messages is:
DATE TIME LEVEL [SOURCE_FILE:LINE_NUMBER] - MESSAGE
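For example, a component following this guidance might obtain and use its logger as shown below ('MyComponent' is just a placeholder name):

```python
import logging

# Do not call logging.basicConfig() or otherwise configure logging here;
# the Component Executor configures the logging module for the component.
logger = logging.getLogger('MyComponent')


class MyComponent:

    @staticmethod
    def get_detections_from_image(image_job):
        logger.info('Received image job: %s', image_job.job_name)
        logger.debug('Job properties: %s', image_job.job_properties)
        return []
```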
diff --git a/docs/site/REST-API/index.html b/docs/site/REST-API/index.html
index 8712d367ba00..c5b55bf08d32 100644
--- a/docs/site/REST-API/index.html
+++ b/docs/site/REST-API/index.html
@@ -110,6 +110,10 @@
NOTICE: This software (or technical data) was produced for the U.S. Government under contract,
+and is subject to the Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023
+The MITRE Corporation. All Rights Reserved.
+
Trigger Overview
+
The TRIGGER property enables pipelines that use feed forward to have
+pipeline stages that only process certain tracks based on their track properties. It can be used
+to select the best algorithm when there are multiple similar algorithms that each perform better
+under certain circumstances. It can also be used to iteratively filter down tracks at each stage of
+a pipeline.
+
Syntax
+
The syntax for the TRIGGER property is: <prop_name>=<prop_value1>[;<prop_value2>...].
+The left hand side of the equals sign is the name of the track property that will be used to determine
+if a track matches the trigger. The right hand side specifies the required value for that
+track property. More than one value can be specified by separating them with a semicolon. When
+multiple values are specified, the track property must match any one of them.
+If the value needs to match a track property that contains a semicolon or backslash,
+those characters must be escaped with a leading backslash. For example, CLASSIFICATION=dog;cat will match
+"dog" or "cat". CLASSIFICATION=dog\;cat will match "dog;cat". CLASSIFICATION=dog\\cat will
+match "dog\cat". When specifying a trigger in JSON, it will need to be doubly escaped.
+
Algorithm Selection Using Triggers
+
The example pipeline below will be used to describe the way that the Workflow Manager uses the
+TRIGGER property. Each task in the pipeline is composed of one action, so only the actions are
+shown. Note that this is a hypothetical pipeline and not intended for use in a real deployment.
+
+
WHISPER SPEECH LANGUAGE DETECTION ACTION
+
(No TRIGGER)
+
+
+
SPHINX SPEECH DETECTION ACTION
+
TRIGGER: ISO_LANGUAGE=eng
+
FEED_FORWARD_TYPE: REGION
+
+
+
WHISPER SPEECH DETECTION ACTION
+
TRIGGER: ISO_LANGUAGE=spa
+
FEED_FORWARD_TYPE: REGION
+
+
+
ARGOS TRANSLATION ACTION
+
TRIGGER: ISO_LANGUAGE=spa
+
FEED_FORWARD_TYPE: REGION
+
+
+
KEYWORD TAGGING ACTION
+
(No TRIGGER)
+
FEED_FORWARD_TYPE: REGION
+
+
+
+
The pipeline can be represented as a flow chart:
+
+
The goal of this pipeline is to determine if someone in an audio file, or the audio of a video file,
+says a keyword that the user is interested in. The complication is that the input file could be
+in English, Spanish, or another language the user is not interested in. Spanish audio must be
+translated to English before looking for keywords.
+
We are going to pretend that Whisper language detection can return multiple tracks, one per language
+detected in the audio, although in reality it is limited to detecting one language for the entire
+piece of media. Also, the user wants to use Sphinx for transcribing English audio, because we are
+pretending that Sphinx performs better than Whisper on English audio, and the user wants to use
+Whisper for transcribing Spanish audio.
+
The first stage should not have a trigger condition. If one is set, it will be ignored. The
+Workflow Manager will take all of the tracks generated by stage 1 and determine if the trigger
+condition for stage 2 is met. This trigger condition is shown by the topmost orange diamond. In this
+case, if stage 1 detected the language as English and set ISO_LANGUAGE to eng, then those
+tracks are fed into the second stage. This is shown by the green arrow pointing to the stage 2 box.
+
If any of the Whisper tracks do not meet the condition for stage 2, they are later considered
+as possible inputs to stage 3. This is shown by the red arrow coming out of the stage 2 trigger
+diamond pointing down to the stage 3 trigger diamond.
+
The Workflow Manager will take all of the tracks generated by stage 2, the
+SPHINX SPEECH DETECTION ACTION, as well as the tracks that didn't satisfy the stage 2 trigger, and
+determine if the trigger condition for stage 3 is met.
+
Note that the Sphinx component does not generate tracks with the ISO_LANGUAGE property, so
+it's not possible for tracks coming out of stage 2 to satisfy the stage 3 trigger. They will later
+flow down to the stage 4 trigger, and because it has the same condition as the stage 3 trigger, the
+Sphinx tracks cannot satisfy that trigger either.
+
Even if the Sphinx component did generate tracks with the ISO_LANGUAGE property, it would be set
+to eng and would not satisfy the spa condition (they are mutually exclusive). Either way,
+eventually the tracks from stage 2 will flow into stage 5.
+
The Workflow Manager will take all of the tracks generated by stage 3, the
+WHISPER SPEECH DETECTION ACTION, as well as the tracks that did not satisfy the stage 2 and 3
+triggers, and determine if the trigger condition for stage 4 is met. All of the tracks produced by
+stage 3 will have the ISO_LANGUAGE property set to spa, because the stage 3 trigger only
+matched Spanish tracks and when Whisper performs transcription, it sets the ISO_LANGUAGE property.
+Since the stage 4 trigger, like the stage 3 trigger, is ISO_LANGUAGE=spa, all of the tracks
+produced by stage 3 will be fed into stage 4.
+
The Workflow Manager will take all of the tracks generated by stage 4, the
+ARGOS TRANSLATION (WITH FF REGION) ACTION, as well as the tracks that did not satisfy the stage 2,
+3, or 4 triggers, and determine if the trigger condition for stage 5 is met. Stage 5 has no trigger
+condition, so all of those tracks flow into stage 5 by default.
+
The above diagram can be simplified as follows:
+
+
In this diagram the trigger diamonds have been replaced with the orange boxes at the top of each
+stage. Also, all of the arrows for flows that are not logically possible have been removed,
+leaving only arrows that flow from one stage to another.
+
What remains shows that this pipeline has three main flows of execution:
+
+
English audio is transcribed by the Sphinx component and then processed by keyword tagging.
+
Spanish audio is transcribed by the Whisper component, translated by the Argos component, and
+ then processed by keyword tagging.
+
All other languages are not transcribed and those tracks pass directly to keyword tagging. Since
+ there is no transcript to look at, keyword tagging essentially ignores them.
+
+
Further Understanding
+
In general, triggers work as a mechanism to decide which tracks are passed forward to later stages
+of a pipeline. It is important to note that not only are the tracks from the previous stage
+considered, but also tracks from stages that were not fed into any previous stage.
+
For example, if only the Sphinx tracks from stage 2 were passed to Whisper stage 3, then stage 3
+would never be triggered. This is because Sphinx tracks don't have an ISO_LANGUAGE property. Even
+if they did have that property, it would be set to eng, not spa, which would not satisfy the
+stage 3 trigger. This mutual exclusion is by design. Both stages perform speech-to-text. Tracks
+from stage 1 should only be processed by one speech-to-text algorithm (i.e. one SPEECH DETECTION
+stage). Both algorithms should be considered, but only one should be selected based on the language.
+To accomplish this, tracks from stage 1 that don't trigger stage 2 are considered as possible inputs
+to stage 3.
+
Additionally, it's important to note that when a stage is triggered, the tracks passed into that
+stage are no longer considered for later stages. Instead, the tracks generated by that stage can be
+passed to later stages.
+
For example, the Argos algorithm in stage 4 should only accept tracks with Spanish transcripts. If
+all of the tracks generated in prior stages could be passed to stage 4, then the spa tracks
+generated in stage 1 would trigger stage 4. Since those have not passed through the Whisper
+speech-to-text stage 3, they would not have a transcript to translate.
+
Filtering Using Triggers
+
The pipeline in the previous section shows an example of how triggers can be used to conditionally
+execute or skip stages in a pipeline. Triggers can also be useful when all stages get triggered. In
+cases like that, the individual triggers are logically ANDed together. This allows you to produce
+pipelines that search for very specific things.
+
Consider the example pipeline defined below. Again, each task in the pipeline is composed of one
+action, so only the actions are shown. Also, note that this is a hypothetical pipeline and not
+intended for use in a real deployment:
+
+
OCV YOLO OBJECT DETECTION ACTION
+
(No TRIGGER)
+
+
CAFFE GOOGLENET DETECTION ACTION
+
TRIGGER: CLASSIFICATION=truck
+
FEED_FORWARD_TYPE: REGION
+
+
TENSORFLOW VEHICLE COLOR DETECTION ACTION
+
TRIGGER: CLASSIFICATION=ice cream, icecream;ice lolly, lolly, lollipop, popsicle
+
FEED_FORWARD_TYPE: REGION
+
+
OALPR LICENSE PLATE TEXT DETECTION ACTION
+
TRIGGER: CLASSIFICATION=blue
+
FEED_FORWARD_TYPE: REGION
+
+
The pipeline can be represented as a flow chart:
+
The goal of this pipeline is to extract the license plate numbers for all blue trucks that have
+photos of ice cream or popsicles on their exterior.
+
Stages 2 and 3 do not generate new detection regions. Instead, they generate tracks using the same
+detection regions in the feed-forward tracks. Specifically, if YOLO generates truck tracks in
+stage 1, then those tracks will be fed into stage 2. In that stage, GoogLeNet will process the
+truck region to determine the ImageNet class with the highest confidence. If that class corresponds
+to ice cream or popsicle, those tracks will be fed into stage 3, which will operate on the same
+truck region to determine the vehicle color. Tracks corresponding to blue trucks will be fed
+into stage 4, which will try to detect the license plate region and text. OALPR will operate on
+the same truck region passed forward all of the way from YOLO in stage 1.
+
Tracks generated by any stage in the pipeline that don't meet the three trigger criteria do not
+flow into the final license plate detection stage, and are therefore unused.
+
It's important to note that the possible CLASSIFICATION values generated by stages 1, 2, and 3 are
+mutually exclusive. This means, for example, that YOLO will not generate a blue track in stage 1
+that will later satisfy the trigger for stage 4.
+
Also, note that stages 1, 2, and 3 can all accept an optional WHITELIST_FILE property that can be
+used to discard tracks with a CLASSIFICATION not listed in that file. It is possible to recreate
+the behavior of the above pipeline without using triggers and instead only using whitelist files to
+ensure each of those stages can only generate the track types the user is interested in. The
+disadvantage of the whitelist approach is that the final JSON output object will not contain all of
+the YOLO tracks, only truck tracks. Using triggers is better when a user wants to know about those
+other track types. Using triggers also enables a user to create a version of this pipeline where
+person tracks from YOLO are fed into OpenCV face. person is just an example of one other type of
+YOLO track a user might be interested in.
+
The above diagram can be simplified as follows:
+
+
Removing all of the flows that aren't logically possible, or that result in unused tracks, leaves
+only one flow that passes through all of the stages. Again, this flow essentially ANDs the
+trigger conditions together.
+
JSON escaping
+
Job properties are often defined using JSON, and track properties appear in the JSON output
+object. JSON also uses backslash as its escape character. Since the TRIGGER property and JSON both
+use backslash as the escape character, when specifying the TRIGGER property in JSON, the string
+must be doubly escaped.
+
If the job request contains this JSON fragment:
+
{ "algorithmProperties": { "DNNCV": {"TRIGGER": "CLASS=dog;cat"} } }
+
it will match either "dog" or "cat", but not "dog;cat".
+
This JSON fragment:
+
{ "algorithmProperties": { "DNNCV": {"TRIGGER": "CLASS=dog\\;cat"} } }
+
would only match "dog;cat".
+
This JSON fragment:
+
{ "algorithmProperties": { "DNNCV": {"TRIGGER": "CLASS=dog\\\\cat"} } }
+
would only match "dog\cat". The track property in the JSON output object would appear as:
+
{ "trackProperties": { "CLASSIFICATION": "dog\\cat" } }
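One practical way to get the double escaping right is to build the trigger value as an ordinary string and let a JSON library perform the second level of escaping. The snippet below is only an illustration using Python's standard json module; the DNNCV algorithm name is taken from the fragments above.

```python
import json

# The trigger value as the Workflow Manager should receive it: one backslash
# escaping the semicolon, so it matches the single value "dog;cat".
trigger = r'CLASS=dog\;cat'

fragment = {'algorithmProperties': {'DNNCV': {'TRIGGER': trigger}}}
print(json.dumps(fragment))
# {"algorithmProperties": {"DNNCV": {"TRIGGER": "CLASS=dog\\;cat"}}}
```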
diff --git a/docs/site/search/search_index.json b/docs/site/search/search_index.json
index f09acf1c7034..23b7759fdd04 100644
--- a/docs/site/search/search_index.json
+++ b/docs/site/search/search_index.json
@@ -480,6 +480,41 @@
"text": "When the TIES_DB_S3_COPY_ENABLED job property or ties.db.s3.copy.enabled system property is\ntrue and a matching job is found in TiesDb, Workflow Manager will copy the artifacts, markup,\nand derivative media to the bucket specified in the current job's S3_RESULTS_BUCKET job property\nor s3.results.bucket system property. Since the job's artifacts, markup, and derivative media\nare in a new location, the output object must be updated before it is uploaded to the new S3 bucket.\nIn the updated output object, the tiesDbSourceJobId property will be set to the previous job's ID\nand tiesDbSourceMediaPath will be set to the path of the previous job's media. When the S3 copy\nis enabled and the results bucket is the same as the previous job, a new output object is created,\nbut copies of the artifacts, markup, and derivative media are not created. If the S3 copy is\ndisabled, tiesDbSourceJobId and tiesDbSourceMediaPath are not added because the original job's\noutput object is used without changes. If the copy fails, a link to the old JSON output object will\nbe provided. When performing the S3 copy, the S3 job properties like S3_ACCESS_KEY and S3_SECRET_KEY use the values from the current job and apply to the\ndestination of the copy. If the values for the S3 properties should be different for the source of\nthe copy, the properties prefixed with TIES_DB_COPY_SRC_ can be set. If for a given property the TIES_DB_COPY_SRC_ prefixed version is not set, the non-prefixed version will be used. For example, if a job is received with the following properties set: S3_SECRET_KEY = new-secret-key S3_ACCESS_KEY = access-key TIES_DB_COPY_SRC_S3_SECRET_KEY = old-secret-key then, when accessing the previous job's results access-key will be used for the access key and old-secret-key will be used for the secret key. When uploading the results to the new bucket, access-key will be used for the access key and new-secret-key will be used for the secret key.",
"title": "S3 Copy"
},
+ {
+ "location": "/Trigger-Guide/index.html",
+ "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract,\nand is subject to the Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023\nThe MITRE Corporation. All Rights Reserved.\n\n\nTrigger Overview\n\n\nThe \nTRIGGER\n property enables pipelines that use \nfeed forward\n to have\npipeline stages that only process certain tracks based on their track properties. It can be used\nto select the best algorithm when there are multiple similar algorithms that each perform better\nunder certain circumstances. It can also be used to iteratively filter down tracks at each stage of\na pipeline.\n\n\nSyntax\n\n\nThe syntax for the \nTRIGGER\n property is: \n=[;...]\n.\nThe left hand side of the equals sign is the name of track property that will be used to determine\nif a track matches the trigger. The right hand side specifies the required value for the specified\ntrack property. More than one value can be specified by separating them with a semicolon. When\nmultiple properties are specified the track property must match any one of the specified values.\nIf the value should match a track property that contains a semicolon or backslash,\nthey must be escaped with a leading backslash. For example, \nCLASSIFICATION=dog;cat\n will match\n\"dog\" or \"cat\". \nCLASSIFICATION=dog\\;cat\n will match \"dog;cat\". \nCLASSIFICATION=dog\\\\cat\n will\nmatch \"dog\\cat\". When specifying a trigger in JSON it will need to \ndoubly escaped\n.\n\n\nAlgorithm Selection Using Triggers\n\n\nThe example pipeline below will be used to describe the way that the Workflow Manager uses the\n\nTRIGGER\n property. Each task in the pipeline is composed of one action, so only the actions are\nshown. Note that this is a hypothetical pipeline and not intended for use in a real deployment.\n\n\n\n\nWHISPER SPEECH LANGUAGE DETECTION ACTION\n\n\n(No TRIGGER)\n\n\n\n\n\n\nSPHINX SPEECH DETECTION ACTION\n\n\nTRIGGER: \nISO_LANGUAGE=eng\n\n\nFEED_FORWARD_TYPE: \nREGION\n\n\n\n\n\n\nWHISPER SPEECH DETECTION ACTION\n\n\nTRIGGER: \nISO_LANGUAGE=spa\n\n\nFEED_FORWARD_TYPE: \nREGION\n\n\n\n\n\n\nARGOS TRANSLATION ACTION\n\n\nTRIGGER: \nISO_LANGUAGE=spa\n\n\nFEED_FORWARD_TYPE: \nREGION\n\n\n\n\n\n\nKEYWORD TAGGING ACTION\n\n\n(No TRIGGER)\n\n\nFEED_FORWARD_TYPE: \nREGION\n\n\n\n\n\n\n\n\nThe pipeline can be represented as a flow chart:\n\n\n\n\nThe goal of this pipeline is to determine if someone in an audio file, or the audio of a video file,\nsays a keyword that the user is interested in. The complication is that the input file could either\nbe in English, Spanish, or another language the user is not interested in. Spanish audio must be\ntranslated to English before looking for keywords.\n\n\nWe are going to pretend that Whisper language detection can return multiple tracks, one per language\ndetected in the audio, although in reality it is limited to detecting one language for the entire\npiece of media. Also, the user wants to use Sphinx for transcribing English audio, because we are\npretending that Sphinx performs better than Whisper on English audio, and the user wants to use\nWhisper for transcribing Spanish audio.\n\n\nThe first stage should not have a trigger condition. If one is set, it will be ignored. The\nWorkflow Manager will take all of the tracks generated by stage 1 and determine if the trigger\ncondition for stage 2 is met. This trigger condition is shown by the topmost orange diamond. 
In this\ncase, if stage 1 detected the language as English and set \nISO_LANGUAGE\n to \neng\n, then those\ntracks are fed into the second stage. This is shown by the green arrow pointing to the stage 2 box.\n\n\nIf any of the Whisper tracks do not meet the condition for the stage 2, they are later considered\nas possible inputs to stage 3. This is shown by the red arrow coming out of the stage 2 trigger\ndiamond pointing down to the stage 3 trigger diamond.\n\n\nThe Workflow Manager will take all of the tracks generated by stage 2, the\n\nSPHINX SPEECH DETECTION ACTION\n, as well as the tracks that didn't satisfy the stage 2 trigger, and\ndetermine if the trigger condition for stage 3 is met.\n\n\nNote that the Sphinx component does not generate tracks with the \nISO_LANGUAGE\n property, so\nit's not possible for tracks coming out of stage 2 to satisfy the stage 3 trigger. They will later\nflow down to the stage 4 trigger, and because it has the same condition as the stage 3 trigger, the\nSphinx tracks cannot satisfy that trigger either.\n\n\nEven if the Sphinx component did generate tracks with the \nISO_LANGUAGE\n property, it would be set\nto \neng\n and would not satisfy the \nspa\n condition (they are mutually exclusive). Either way,\neventually the tracks from stage 2 will flow into stage 5.\n\n\nThe Workflow Manager will take all of the tracks generated by stage 3, the\n\nWHISPER SPEECH DETECTION ACTION\n, as well as the tracks that did not satisfy the stage 2 and 3\ntriggers, and determine if the trigger condition for stage 4 is met. All of the tracks produced by\nstage 3 will have the \nISO_LANGUAGE\n property set to \nspa\n, because the stage 3 trigger only\nmatched Spanish tracks and when Whisper performs transcription, it sets the \nISO_LANGUAGE\n property.\nSince the stage 4 trigger, like the stage 3 trigger, is \nISO_LANGUAGE=spa\n, all of the tracks\nproduced by stage 3 will be fed in to stage 4.\n\n\nThe Workflow Manager will take all of the tracks generated by stage 4, the\n\nARGOS TRANSLATION (WITH FF REGION) ACTION\n, as well as the tracks that did not satisfy the stage 2,\n3, or 4 triggers, and determine if the trigger condition for stage 5 is met. Stage 5 has no trigger\ncondition, so all of those tracks flow into stage 5 by default.\n\n\nThe above diagram can be simplified as follows:\n\n\n\n\nIn this diagram the trigger diamonds have been replaced with the orange boxes at the top of each\nstage. Also, all of the arrows for flows that are not logically possible have been removed,\nleaving only arrows that flow from one stage to another.\n\n\nWhat remains shows that this pipeline has three main flows of execution:\n\n\n\n\nEnglish audio is transcribed by the Sphinx component and then processed by keyword tagging.\n\n\nSpanish audio is transcribed by the Whisper component, translated by the Argos component, and\n then processed by keyword tagging.\n\n\nAll other languages are not transcribed and those tracks pass directly to keyword tagging. Since\n there is no transcript to look at, keyword tagging essentially ignores them.\n\n\n\n\nFurther Understanding\n\n\nIn general, triggers work as a mechanism to decide which tracks are passed forward to later stages\nof a pipeline. It is important to note that not only are the tracks from the previous stage\nconsidered, but also tracks from stages that were not fed into any previous stage.\n\n\nFor example, if only the Sphinx tracks from stage 2 were passed to Whisper stage 3, then stage 3\nwould never be triggered. 
This is because Sphinx tracks don't have an \nISO_LANGUAGE\n property. Even\nif they did have that property, it would be set to \neng\n, not \nspa\n, which would not satisfy the\nstage 3 trigger. This is mutual exclusion is by design. Both stages perform speech-to-text. Tracks\nfrom stage 1 should only be processed by one speech-to-text algorithm (i.e. one \nSPEECH DETECTION\n\nstage). Both algorithms should be considered, but only one should be selected based on the language.\nTo accomplish this, tracks from stage 1 that don't trigger stage 2 are considered as possible inputs\nto stage 3.\n\n\nAdditionally, it's important to note that when a stage is triggered, the tracks passed into that\nstage are no longer considered for later stages. Instead, the tracks generated by that stage can be\npassed to later stages.\n\n\nFor example, the Argos algorithm in stage 4 should only accept tracks with Spanish transcripts. If\nall of the tracks generated in prior stages could be passed to stage 4, then the \nspa\n tracks\ngenerated in stage 1 would trigger stage 4. Since those have not passed through the Whisper\nspeech-to-text stage 3 they would not have a transcript to translate.\n\n\nFiltering Using Triggers\n\n\nThe pipeline in the previous section shows an example of how triggers can be used to conditionally\nexecute or skip stages in a pipeline. Triggers can also be useful when all stages get triggered. In\ncases like that, the individual triggers are logically \nAND\ned together. This allows you to produce\npipelines that search for very specific things.\n\n\nConsider the example pipeline defined below. Again, each task in the pipeline is composed of one\naction, so only the actions are shown. Also, note that this is a hypothetical pipeline and not\nintended for use in a real real deployment:\n\n\n\n\nOCV YOLO OBJECT DETECTION ACTION\n\n\n(No TRIGGER)\n\n\n\n\n\n\nCAFFE GOOGLENET DETECTION ACTION\n\n\nTRIGGER: \nCLASSIFICATION=truck\n\n\nFEED_FORWARD_TYPE: \nREGION\n\n\n\n\n\n\nTENSORFLOW VEHICLE COLOR DETECTION ACTION\n\n\nTRIGGER: \nCLASSIFICATION=ice cream, icecream;ice lolly, lolly, lollipop, popsicle\n\n\nFEED_FORWARD_TYPE: \nREGION\n\n\n\n\n\n\nOALPR LICENSE PLATE TEXT DETECTION ACTION\n\n\nTRIGGER: \nCLASSIFICATION=blue\n\n\nFEED_FORWARD_TYPE: \nREGION\n\n\n\n\n\n\n\n\nThe pipeline can be represented as a flow chart:\n\n\n\n\nThe goal of this pipeline is to extract the license plate numbers for all blue trucks that have\nphotos of ice cream or popsicles on their exterior.\n\n\nStage 2 and 3 do not generate new detection regions. Instead, they generate tracks using the same\ndetection regions in the feed-forward tracks. Specifically, if YOLO generates \ntruck\n tracks in\nstage 1, then those tracks will be fed into stage 2. In that stage, GoogLeNet will process the\ntruck region to determine the ImageNet class with the highest confidence. If that class corresponds\nto ice cream or popsicle, those tracks will be fed into stage 3, which will operate on the same\ntruck region to determine the vehicle color. Tracks corresponding to \nblue\n trucks will be fed\ninto stage 4, which will try to detect the license plate region and text. 
OALPR will operate on\nthe same truck region passed forward all of the way from YOLO in stage 1.\n\n\nTracks generated by any stage in the pipeline that don't meet the three trigger criteria do not\nflow into the final license plate detection stage, and are therefore unused.\n\n\nIt's important to note that the possible \nCLASSIFICATION\n values generated by stages 1, 2, and 3 are\nmutually exclusive. This means, for example, that YOLO will not generate a \nblue\n track in stage 1\nthat will later satisfy the trigger for stage 4.\n\n\nAlso, note that stages 1, 2, and 3 can all accept an optional \nWHITELIST_FILE\n property that can be\nused to discard tracks with a \nCLASSIFICATION\n not listed in that file. It is possible to recreate\nthe behavior of the above pipeline without using triggers and instead only using whitelist files to\nensure each of those stages can only generate the track types the user is interested in. The\ndisadvantage of the whitelist approach is that the final JSON output object will not contain all of\nthe YOLO tracks, only \ntruck\n tracks. Using triggers is better when a user wants to know about those\nother track types. Using triggers also enables a user to create a version of this pipeline where\n\nperson\n tracks from YOLO are fed into OpenCV face. \nperson\n is just an example of one other type of\nYOLO track a user might be interested in.\n\n\nThe above diagram can be simplified as follows:\n\n\n\n\nRemoving all of the flows that aren't logically possible, or result in unused tracks, only\nleaves one flow that passes through all of the stages. Again, this flow essentially \nAND\ns the\ntrigger conditions together.\n\n\nJSON escaping\n\n\nMany times job properties are defined using JSON and track properties appear in the JSON output\nobject. JSON also uses backslash as its escape character. Since the \nTRIGGER\n property and JSON both\nuse backslash as the escape character, when specifying the \nTRIGGER\n property in JSON, the string\nmust be doubly escaped.\n\n\nIf the job request contains this JSON fragment:\n\n\n{ \"algorithmProperties\": { \"DNNCV\": {\"TRIGGER\": \"CLASS=dog;cat\"} } }\n\n\n\nit will match either \"dog\" or \"cat\", but not \"dog;cat\".\n\n\nThis JSON fragment:\n\n\n{ \"algorithmProperties\": { \"DNNCV\": {\"TRIGGER\": \"CLASS=dog\\\\;cat\"} } }\n\n\n\nwould only match \"dog;cat\".\n\n\nThis JSON fragment:\n\n\n{ \"algorithmProperties\": { \"DNNCV\": {\"TRIGGER\": \"CLASS=dog\\\\\\\\cat\"} } }\n\n\n\nwould only match \"dog\\cat\". The track property in the JSON output object would appear as:\n\n\n{ \"trackProperties\": { \"CLASSIFICATION\": \"dog\\\\cat\" } }",
+ "title": "Trigger Guide"
+ },
+ {
+ "location": "/Trigger-Guide/index.html#trigger-overview",
+ "text": "The TRIGGER property enables pipelines that use feed forward to have\npipeline stages that only process certain tracks based on their track properties. It can be used\nto select the best algorithm when there are multiple similar algorithms that each perform better\nunder certain circumstances. It can also be used to iteratively filter down tracks at each stage of\na pipeline.",
+ "title": "Trigger Overview"
+ },
+ {
+ "location": "/Trigger-Guide/index.html#syntax",
+ "text": "The syntax for the TRIGGER property is: =[;...] .\nThe left hand side of the equals sign is the name of track property that will be used to determine\nif a track matches the trigger. The right hand side specifies the required value for the specified\ntrack property. More than one value can be specified by separating them with a semicolon. When\nmultiple properties are specified the track property must match any one of the specified values.\nIf the value should match a track property that contains a semicolon or backslash,\nthey must be escaped with a leading backslash. For example, CLASSIFICATION=dog;cat will match\n\"dog\" or \"cat\". CLASSIFICATION=dog\\;cat will match \"dog;cat\". CLASSIFICATION=dog\\\\cat will\nmatch \"dog\\cat\". When specifying a trigger in JSON it will need to doubly escaped .",
+ "title": "Syntax"
+ },
+ {
+ "location": "/Trigger-Guide/index.html#algorithm-selection-using-triggers",
+ "text": "The example pipeline below will be used to describe the way that the Workflow Manager uses the TRIGGER property. Each task in the pipeline is composed of one action, so only the actions are\nshown. Note that this is a hypothetical pipeline and not intended for use in a real deployment. WHISPER SPEECH LANGUAGE DETECTION ACTION (No TRIGGER) SPHINX SPEECH DETECTION ACTION TRIGGER: ISO_LANGUAGE=eng FEED_FORWARD_TYPE: REGION WHISPER SPEECH DETECTION ACTION TRIGGER: ISO_LANGUAGE=spa FEED_FORWARD_TYPE: REGION ARGOS TRANSLATION ACTION TRIGGER: ISO_LANGUAGE=spa FEED_FORWARD_TYPE: REGION KEYWORD TAGGING ACTION (No TRIGGER) FEED_FORWARD_TYPE: REGION The pipeline can be represented as a flow chart: The goal of this pipeline is to determine if someone in an audio file, or the audio of a video file,\nsays a keyword that the user is interested in. The complication is that the input file could either\nbe in English, Spanish, or another language the user is not interested in. Spanish audio must be\ntranslated to English before looking for keywords. We are going to pretend that Whisper language detection can return multiple tracks, one per language\ndetected in the audio, although in reality it is limited to detecting one language for the entire\npiece of media. Also, the user wants to use Sphinx for transcribing English audio, because we are\npretending that Sphinx performs better than Whisper on English audio, and the user wants to use\nWhisper for transcribing Spanish audio. The first stage should not have a trigger condition. If one is set, it will be ignored. The\nWorkflow Manager will take all of the tracks generated by stage 1 and determine if the trigger\ncondition for stage 2 is met. This trigger condition is shown by the topmost orange diamond. In this\ncase, if stage 1 detected the language as English and set ISO_LANGUAGE to eng , then those\ntracks are fed into the second stage. This is shown by the green arrow pointing to the stage 2 box. If any of the Whisper tracks do not meet the condition for the stage 2, they are later considered\nas possible inputs to stage 3. This is shown by the red arrow coming out of the stage 2 trigger\ndiamond pointing down to the stage 3 trigger diamond. The Workflow Manager will take all of the tracks generated by stage 2, the SPHINX SPEECH DETECTION ACTION , as well as the tracks that didn't satisfy the stage 2 trigger, and\ndetermine if the trigger condition for stage 3 is met. Note that the Sphinx component does not generate tracks with the ISO_LANGUAGE property, so\nit's not possible for tracks coming out of stage 2 to satisfy the stage 3 trigger. They will later\nflow down to the stage 4 trigger, and because it has the same condition as the stage 3 trigger, the\nSphinx tracks cannot satisfy that trigger either. Even if the Sphinx component did generate tracks with the ISO_LANGUAGE property, it would be set\nto eng and would not satisfy the spa condition (they are mutually exclusive). Either way,\neventually the tracks from stage 2 will flow into stage 5. The Workflow Manager will take all of the tracks generated by stage 3, the WHISPER SPEECH DETECTION ACTION , as well as the tracks that did not satisfy the stage 2 and 3\ntriggers, and determine if the trigger condition for stage 4 is met. 
All of the tracks produced by\nstage 3 will have the ISO_LANGUAGE property set to spa , because the stage 3 trigger only\nmatched Spanish tracks and when Whisper performs transcription, it sets the ISO_LANGUAGE property.\nSince the stage 4 trigger, like the stage 3 trigger, is ISO_LANGUAGE=spa , all of the tracks\nproduced by stage 3 will be fed in to stage 4. The Workflow Manager will take all of the tracks generated by stage 4, the ARGOS TRANSLATION (WITH FF REGION) ACTION , as well as the tracks that did not satisfy the stage 2,\n3, or 4 triggers, and determine if the trigger condition for stage 5 is met. Stage 5 has no trigger\ncondition, so all of those tracks flow into stage 5 by default. The above diagram can be simplified as follows: In this diagram the trigger diamonds have been replaced with the orange boxes at the top of each\nstage. Also, all of the arrows for flows that are not logically possible have been removed,\nleaving only arrows that flow from one stage to another. What remains shows that this pipeline has three main flows of execution: English audio is transcribed by the Sphinx component and then processed by keyword tagging. Spanish audio is transcribed by the Whisper component, translated by the Argos component, and\n then processed by keyword tagging. All other languages are not transcribed and those tracks pass directly to keyword tagging. Since\n there is no transcript to look at, keyword tagging essentially ignores them.",
+ "title": "Algorithm Selection Using Triggers"
+ },
+ {
+ "location": "/Trigger-Guide/index.html#further-understanding",
+ "text": "In general, triggers work as a mechanism to decide which tracks are passed forward to later stages\nof a pipeline. It is important to note that not only are the tracks from the previous stage\nconsidered, but also tracks from stages that were not fed into any previous stage. For example, if only the Sphinx tracks from stage 2 were passed to Whisper stage 3, then stage 3\nwould never be triggered. This is because Sphinx tracks don't have an ISO_LANGUAGE property. Even\nif they did have that property, it would be set to eng , not spa , which would not satisfy the\nstage 3 trigger. This is mutual exclusion is by design. Both stages perform speech-to-text. Tracks\nfrom stage 1 should only be processed by one speech-to-text algorithm (i.e. one SPEECH DETECTION \nstage). Both algorithms should be considered, but only one should be selected based on the language.\nTo accomplish this, tracks from stage 1 that don't trigger stage 2 are considered as possible inputs\nto stage 3. Additionally, it's important to note that when a stage is triggered, the tracks passed into that\nstage are no longer considered for later stages. Instead, the tracks generated by that stage can be\npassed to later stages. For example, the Argos algorithm in stage 4 should only accept tracks with Spanish transcripts. If\nall of the tracks generated in prior stages could be passed to stage 4, then the spa tracks\ngenerated in stage 1 would trigger stage 4. Since those have not passed through the Whisper\nspeech-to-text stage 3 they would not have a transcript to translate.",
+ "title": "Further Understanding"
+ },
+ {
+ "location": "/Trigger-Guide/index.html#filtering-using-triggers",
+ "text": "The pipeline in the previous section shows an example of how triggers can be used to conditionally\nexecute or skip stages in a pipeline. Triggers can also be useful when all stages get triggered. In\ncases like that, the individual triggers are logically AND ed together. This allows you to produce\npipelines that search for very specific things. Consider the example pipeline defined below. Again, each task in the pipeline is composed of one\naction, so only the actions are shown. Also, note that this is a hypothetical pipeline and not\nintended for use in a real real deployment: OCV YOLO OBJECT DETECTION ACTION (No TRIGGER) CAFFE GOOGLENET DETECTION ACTION TRIGGER: CLASSIFICATION=truck FEED_FORWARD_TYPE: REGION TENSORFLOW VEHICLE COLOR DETECTION ACTION TRIGGER: CLASSIFICATION=ice cream, icecream;ice lolly, lolly, lollipop, popsicle FEED_FORWARD_TYPE: REGION OALPR LICENSE PLATE TEXT DETECTION ACTION TRIGGER: CLASSIFICATION=blue FEED_FORWARD_TYPE: REGION The pipeline can be represented as a flow chart: The goal of this pipeline is to extract the license plate numbers for all blue trucks that have\nphotos of ice cream or popsicles on their exterior. Stage 2 and 3 do not generate new detection regions. Instead, they generate tracks using the same\ndetection regions in the feed-forward tracks. Specifically, if YOLO generates truck tracks in\nstage 1, then those tracks will be fed into stage 2. In that stage, GoogLeNet will process the\ntruck region to determine the ImageNet class with the highest confidence. If that class corresponds\nto ice cream or popsicle, those tracks will be fed into stage 3, which will operate on the same\ntruck region to determine the vehicle color. Tracks corresponding to blue trucks will be fed\ninto stage 4, which will try to detect the license plate region and text. OALPR will operate on\nthe same truck region passed forward all of the way from YOLO in stage 1. Tracks generated by any stage in the pipeline that don't meet the three trigger criteria do not\nflow into the final license plate detection stage, and are therefore unused. It's important to note that the possible CLASSIFICATION values generated by stages 1, 2, and 3 are\nmutually exclusive. This means, for example, that YOLO will not generate a blue track in stage 1\nthat will later satisfy the trigger for stage 4. Also, note that stages 1, 2, and 3 can all accept an optional WHITELIST_FILE property that can be\nused to discard tracks with a CLASSIFICATION not listed in that file. It is possible to recreate\nthe behavior of the above pipeline without using triggers and instead only using whitelist files to\nensure each of those stages can only generate the track types the user is interested in. The\ndisadvantage of the whitelist approach is that the final JSON output object will not contain all of\nthe YOLO tracks, only truck tracks. Using triggers is better when a user wants to know about those\nother track types. Using triggers also enables a user to create a version of this pipeline where person tracks from YOLO are fed into OpenCV face. person is just an example of one other type of\nYOLO track a user might be interested in. The above diagram can be simplified as follows: Removing all of the flows that aren't logically possible, or result in unused tracks, only\nleaves one flow that passes through all of the stages. Again, this flow essentially AND s the\ntrigger conditions together.",
+ "title": "Filtering Using Triggers"
+ },
+ {
+ "location": "/Trigger-Guide/index.html#json-escaping",
+ "text": "Many times job properties are defined using JSON and track properties appear in the JSON output\nobject. JSON also uses backslash as its escape character. Since the TRIGGER property and JSON both\nuse backslash as the escape character, when specifying the TRIGGER property in JSON, the string\nmust be doubly escaped. If the job request contains this JSON fragment: { \"algorithmProperties\": { \"DNNCV\": {\"TRIGGER\": \"CLASS=dog;cat\"} } } it will match either \"dog\" or \"cat\", but not \"dog;cat\". This JSON fragment: { \"algorithmProperties\": { \"DNNCV\": {\"TRIGGER\": \"CLASS=dog\\\\;cat\"} } } would only match \"dog;cat\". This JSON fragment: { \"algorithmProperties\": { \"DNNCV\": {\"TRIGGER\": \"CLASS=dog\\\\\\\\cat\"} } } would only match \"dog\\cat\". The track property in the JSON output object would appear as: { \"trackProperties\": { \"CLASSIFICATION\": \"dog\\\\cat\" } }",
+ "title": "JSON escaping"
+ },
{
"location": "/REST-API/index.html",
"text": "The OpenMPF REST API is provided by Swagger and is available within the OpenMPF Workflow Manager web application. Swagger enables users to test the endpoints using the running instance of OpenMPF.\n\n\nClick \nhere\n for a generated version of the content.\n\n\nNote that in a Docker deployment the \n/rest/nodes\n and \n/rest/streaming\n endpoints are disabled.",
@@ -517,7 +552,7 @@
},
{
"location": "/Component-Descriptor-Reference/index.html",
- "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nOverview\n\n\nIn order to be registered within OpenMPF, each component must provide a JavaScript Object Notation (JSON) descriptor file which provides contextual information about the component.\n\n\nThis file must be named \"descriptor.json\".\n\n\nFor an example, please see: \nHello World JSON Descriptor\n\n\nData Elements\n\n\nContained within the JSON file should be the following elements:\n\n\ncomponentName\n\n\nRequired.\n\n\nContains the component\u2019s name. Should follow CamelCaseFormat.\n\n\nExample:\n\n\"componentName\" : \"SampleComponent\"\n\n\ncomponentVersion\n\n\nRequired.\n\n\nContains the component\u2019s version. Does not need to match the \ncomponentAPIVersion\n.\n\n\nExample:\n\n\"componentVersion\" : \"2.0.1\"\n\n\nmiddlewareVersion\n\n\nRequired.\n\n\nContains the version of the OpenMPF Component API that the component was built with.\n\n\nExample:\n\n\"middlewareVersion\" : \"2.0.0\"\n\n\nsourceLanguage\n\n\nRequired.\n\n\nContains the language the component is coded in. Should be \"c++\", \"python\", or \"java\".\n\n\nExample:\n\n\"sourceLanguage\" : \"c++\"\n\n\nbatchLibrary\n\n\nOptional. At least one of \nbatchLibrary\n or \nstreamLibrary\n must be provided.\n\n\nFor C++ components, this contains the full path to the Component Logic shared object library used for batch processing once the component is deployed.\n\n\nFor Java components, this contains the name of the jar which contains the component implementation used for batch processing.\n\n\nFor setuptools-based Python components, this contains the component's distribution name, which is declared in the\ncomponent's \nsetup.py\n file. The distribution name is usually the same name as the component.\n\n\nFor basic Python components, this contains the full path to the Python file containing the component class.\n\n\nExample (C++):\n\n\"batchLibrary\" : \"${MPF_HOME}/plugins/SampleComponent/lib/libbatchSampleComponent.so\n\n\nExample (Java):\n\n\"batchLibrary\" : \"batch-sample-component-2.0.1.jar\"\n\n\nExample (setuptools-based Python):\n\n\"batchLibrary\" : \"SampleComponent\"\n\n\nExample (basic Python):\n\n\"batchLibrary\" : \"${MPF_HOME}/plugins/SampleComponent/sample_component.py\"\n\n\nstreamLibrary\n\n\nOptional. At least one of \nbatchLibrary\n or \nstreamLibrary\n must be provided.\n\n\nFor C++ components, this contains the full path to the Component Logic shared object library used for stream processing once the component is deployed.\n\n\nNote that Python and Java components currently do not support stream processing, so this field should be omitted from Python and Java component descriptor files.\n\n\nExample (C++):\n\n\"streamLibrary\" : \"${MPF_HOME}/plugins/SampleComponent/lib/libstreamSampleComponent.so\n\n\nenvironmentVariables\n\n\nRequired; can be empty.\n\n\nDefines a collection of environment variables that will be set when executing the OpenMPF Component Executable.\n\n\nContains the following sub-fields:\n\n\n\n\n\n\nname:\n\n Name of the environment variable.\n\n\n\n\n\n\nvalue:\n\n Value of the environment variable.\n Note that value can be a list of values separated by \u201c:\u201d.\n\n\n\n\n\n\nsep:\n\n The \nsep\n field (short for \u201cseparator\u201d) should be set to \u201cnull\u201d or \u201c:\u201d. 
When set to \u201cnull,\u201d the content of the environment variable specified by \nname\n is the content of \nvalue\n; for an existing variable, its former value will be replaced, otherwise, a new variable will be created and assigned this value. When set to \u201c:\u201d any prior value of the environment variable is retained and the content of \nvalue\n is simply appended to the end after a \u201c:\u201d character.\n\n\n\n\n\n\n\n\nIMPORTANT\n: For C++ components, the LD_LIBRARY_PATH needs to be set in order for the Component Executable to load the component\u2019s shared object library as well as any dependent libraries installed with the component. The usual form of the LD_LIBRARY_PATH variable should be \n${MPF_HOME}/plugins//lib/\n. Additional directories can be appended after a \u201c:\u201d delimiter.\n\n\n\n\nExample:\n\n\n\"environmentVariables\": [\n {\n \"name\": \"LD_LIBRARY_PATH\",\n \"value\": \"${MPF_HOME}/plugins/SampleComponent/lib\",\n \"sep\": \":\"\n }\n ]\n\n\n\nalgorithm\n\n\nRequired.\n\n\nSpecifies information about the component\u2019s algorithm.\n\n\nContains the following sub-fields:\n\n\n\n\n\n\nname:\n\n Required. Contains the algorithm\u2019s name. Should be unique and all CAPS.\n\n\n\n\n\n\ndescription:\n\n Required. Contains a brief description of the algorithm.\n\n\n\n\n\n\nactionType:\n\n Required. Defines the type of processing that the algorithm performs. Must be set to \nDETECTION\n.\n\n\n\n\n\n\noutputChangedCounter:\n\n Optional. An integer that should be incremented when the component is changed in a way that\n would cause it to produce different output.\n\n\n\n\n\n\nrequiresCollection:\n\n Required, can be empty. Contains the state(s) that must be produced by previous algorithms in the pipeline.\n \nThis value should be empty \nunless\n the component depends on the results of another algorithm.\n\n\n\n\n\n\nprovidesCollection:\n\n Contains the following sub-fields:\n\n\n\n\nstates:\n Required. Contains the state(s) that the algorithm provides.\n Should contain the following values:\n\n\nDETECTION\n\n\nDETECTION_TYPE\n, where \nTYPE\n is the \nalgorithm.detectionType\n\n\nDETECTION_TYPE_ALGORITHM\n, where \nTYPE\n is the value of \nalgorithm.detectionType\n and \nALGORITHM\n is the value of \nalgorithm.name\n\nExample:\n\n\n\"states\": [\n \"DETECTION\",\n \"DETECTION_FACE\",\n \"DETECTION_FACE_SAMPLECOMPONENT\"]\n\n\n\n\n\n\n\n\nproperties:\n\nRequired; can be empty. Declares a list of the configurable properties that the algorithm exposes.\nContains the following sub-fields:\n\n\nname:\n\n Required.\n\n\ntype:\n\n Required.\n \nBOOLEAN\n, \nFLOAT\n, \nDOUBLE\n, \nINT\n, \nLONG\n, or \nSTRING\n.\n\n\ndefaultValue:\n\n Required.\n Must be provided in order to create a default action associated with the algorithm, where an action is a specific instance of an algorithm configured with a set of property values.\n\n\ndescription:\n\n Required.\n Description of the property. By convention, the default value for a property should be described in its description text.\n\n\n\n\n\n\n\n\n\n\n\n\nactions\n\n\nOptional.\n\n\nActions are used in the development of pipelines. Provides a list of custom actions that will be added during component registration.\n\n\n\n\nNOTE:\n For convenience, a default action will be created upon component registration if this element is not provided in the descriptor file.\n\n\n\n\nContains the following sub-fields:\n\n\n\n\n\n\nname:\n\n Required. Contains the action\u2019s name. 
Must be unique among all actions, including those that already exist on the system and those specified in this descriptor.\n\n\n\n\n\n\ndescription:\n\n Required. Contains a brief description of the action.\n\n\n\n\n\n\nalgorithm:\n\n Required. Contains the name of the algorithm for this action. The algorithm must either already exist on the system or be defined in this descriptor.\n\n\n\n\n\n\nproperties:\n\n Optional. List of properties that will be passed to the algorithm. Each property has an associated name and value sub-field, which are both required. Name must be one of the properties specified in the algorithm definition for this action.\n\n\n\n\n\n\nExample:\n\n\n\"actions\": [\n {\n \"name\": \"SAMPLE COMPONENT FACE DETECTION ACTION\",\n \"description\": \"Executes the sample component face detection algorithm using the default parameters.\",\n \"algorithm\": \"SAMPLECOMPONENT\",\n \"properties\": []\n }\n]\n\n\n\ntasks\n\n\nOptional.\n\n\nA list of custom tasks that will be added during component registration.\n\n\n\n\nNOTE:\n For convenience, a default task will be created upon component registration if this element is not provided in the descriptor file.\n\n\n\n\nContains the following sub-fields:\n\n\n\n\n\n\nname:\n\n Required. Contains the task's name. Must be unique among all tasks, including those that already exist on the system and those specified in this descriptor.\n\n\n\n\n\n\ndescription:\n\n Required. Contains a brief description of the task.\n\n\n\n\n\n\nactions:\n\n Required. Minimum length is 1. Contains the names of the actions that this task uses. Actions must either already exist on the system or be defined in this descriptor.\n\n\n\n\n\n\nExample:\n\n\n\"tasks\": [\n {\n \"name\": \"SAMPLE COMPONENT FACE DETECTION TASK\",\n \"description\": \"Performs sample component face detection.\",\n \"actions\": [\n \"SAMPLE COMPONENT FACE DETECTION ACTION\"\n ]\n }\n]\n\n\n\npipelines\n\n\nOptional.\n\n\nA list of custom pipelines that will be added during component registration.\n\n\n\n\nNOTE:\n For convenience, a default pipeline will be created upon component registration if this element is not provided in the descriptor file.\n\n\n\n\nContains the following sub-fields:\n\n\n\n\n\n\nname:\n\n Required. Contains the pipeline's name. Must be unique among all pipelines, including those that already exist on the system and those specified in this descriptor.\n\n\n\n\n\n\ndescription:\n\n Required. Contains a brief description of the pipeline.\n\n\n\n\n\n\ntasks:\n\n Required. Minimum length is 1. Contains the names of the tasks that this pipeline uses. Tasks must either already exist on the system or be defined in this descriptor.\n\n\n\n\n\n\nExample:\n\n\n\"pipelines\": [\n {\n \"name\": \"SAMPLE COMPONENT FACE DETECTION PIPELINE\",\n \"description\": \"Performs sample component face detection.\",\n \"tasks\": [\n \"SAMPLE COMPONENT FACE DETECTION TASK\"\n ]\n }\n]",
+ "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nOverview\n\n\nIn order to be registered within OpenMPF, each component must provide a JavaScript Object Notation (JSON) descriptor file which provides contextual information about the component.\n\n\nThis file must be named \"descriptor.json\".\n\n\nFor an example, please see: \nHello World JSON Descriptor\n\n\nData Elements\n\n\nContained within the JSON file should be the following elements:\n\n\ncomponentName\n\n\nRequired.\n\n\nContains the component\u2019s name. Should follow CamelCaseFormat.\n\n\nExample:\n\n\"componentName\" : \"SampleComponent\"\n\n\ncomponentVersion\n\n\nRequired.\n\n\nContains the component\u2019s version. Does not need to match the \ncomponentAPIVersion\n.\n\n\nExample:\n\n\"componentVersion\" : \"2.0.1\"\n\n\nmiddlewareVersion\n\n\nRequired.\n\n\nContains the version of the OpenMPF Component API that the component was built with.\n\n\nExample:\n\n\"middlewareVersion\" : \"2.0.0\"\n\n\nsourceLanguage\n\n\nRequired.\n\n\nContains the language the component is coded in. Should be \"c++\", \"python\", or \"java\".\n\n\nExample:\n\n\"sourceLanguage\" : \"c++\"\n\n\nbatchLibrary\n\n\nOptional. At least one of \nbatchLibrary\n or \nstreamLibrary\n must be provided.\n\n\nFor C++ components, this contains the full path to the Component Logic shared object library used for batch processing once the component is deployed.\n\n\nFor Java components, this contains the name of the jar which contains the component implementation used for batch processing.\n\n\nFor setuptools-based Python components, this contains the component's distribution name, which is declared in the\ncomponent's \nsetup.py\n file. The distribution name is usually the same name as the component.\n\n\nFor basic Python components, this contains the full path to the Python file containing the component class.\n\n\nExample (C++):\n\n\"batchLibrary\" : \"${MPF_HOME}/plugins/SampleComponent/lib/libbatchSampleComponent.so\n\n\nExample (Java):\n\n\"batchLibrary\" : \"batch-sample-component-2.0.1.jar\"\n\n\nExample (setuptools-based Python):\n\n\"batchLibrary\" : \"SampleComponent\"\n\n\nExample (basic Python):\n\n\"batchLibrary\" : \"${MPF_HOME}/plugins/SampleComponent/sample_component.py\"\n\n\nstreamLibrary\n\n\nOptional. At least one of \nbatchLibrary\n or \nstreamLibrary\n must be provided.\n\n\nFor C++ components, this contains the full path to the Component Logic shared object library used for stream processing once the component is deployed.\n\n\nNote that Python and Java components currently do not support stream processing, so this field should be omitted from Python and Java component descriptor files.\n\n\nExample (C++):\n\n\"streamLibrary\" : \"${MPF_HOME}/plugins/SampleComponent/lib/libstreamSampleComponent.so\n\n\nenvironmentVariables\n\n\nRequired; can be empty.\n\n\nDefines a collection of environment variables that will be set when executing the OpenMPF Component Executable.\n\n\nContains the following sub-fields:\n\n\n\n\n\n\nname:\n\n Name of the environment variable.\n\n\n\n\n\n\nvalue:\n\n Value of the environment variable.\n Note that value can be a list of values separated by \u201c:\u201d.\n\n\n\n\n\n\nsep:\n\n The \nsep\n field (short for \u201cseparator\u201d) should be set to \u201cnull\u201d or \u201c:\u201d. 
When set to \u201cnull,\u201d the content of the environment variable specified by \nname\n is the content of \nvalue\n; for an existing variable, its former value will be replaced; otherwise, a new variable will be created and assigned this value. When set to \u201c:\u201d, any prior value of the environment variable is retained and the content of \nvalue\n is simply appended to the end after a \u201c:\u201d character.\n\n\n\n\n\n\n\n\nIMPORTANT\n: For C++ components, the LD_LIBRARY_PATH needs to be set in order for the Component Executable to load the component\u2019s shared object library as well as any dependent libraries installed with the component. The usual form of the LD_LIBRARY_PATH variable should be \n${MPF_HOME}/plugins//lib/\n. Additional directories can be appended after a \u201c:\u201d delimiter.\n\n\n\n\nExample:\n\n\n\"environmentVariables\": [\n {\n \"name\": \"LD_LIBRARY_PATH\",\n \"value\": \"${MPF_HOME}/plugins/SampleComponent/lib\",\n \"sep\": \":\"\n }\n ]\n\n\n\nalgorithm\n\n\nRequired.\n\n\nSpecifies information about the component\u2019s algorithm.\n\n\nContains the following sub-fields:\n\n\n\n\n\n\nname:\n\n Required. Contains the algorithm\u2019s name. Should be unique and all CAPS.\n\n\n\n\n\n\ndescription:\n\n Required. Contains a brief description of the algorithm.\n\n\n\n\n\n\nactionType:\n\n Required. Defines the type of processing that the algorithm performs. Must be set to \nDETECTION\n.\n\n\n\n\n\n\ntrackType:\n\n Required. The type of object detected by the component. Should be in all CAPS. Examples\n include: \nFACE\n, \nMOTION\n, \nPERSON\n, \nSPEECH\n, \nCLASS\n (for object classification), or \nTEXT\n.\n\n\n\n\n\n\noutputChangedCounter:\n\n Optional. An integer that should be incremented when the component is changed in a way that\n would cause it to produce different output.\n\n\n\n\n\n\nrequiresCollection:\n\n Required, can be empty. Contains the state(s) that must be produced by previous algorithms in the pipeline.\n \nThis value should be empty \nunless\n the component depends on the results of another algorithm.\n\n\n\n\n\n\nprovidesCollection:\n\n Contains the following sub-fields:\n\n\n\n\nstates:\n Required. Contains the state(s) that the algorithm provides.\n Should contain the following values:\n\n\nDETECTION\n\n\nDETECTION_TYPE\n, where \nTYPE\n is the \nalgorithm.trackType\n\n\nDETECTION_TYPE_ALGORITHM\n, where \nTYPE\n is the value of \nalgorithm.trackType\n and \nALGORITHM\n is the value of \nalgorithm.name\n\nExample:\n\n\n\"states\": [\n \"DETECTION\",\n \"DETECTION_FACE\",\n \"DETECTION_FACE_SAMPLECOMPONENT\"]\n\n\n\n\n\n\n\n\nproperties:\n\nRequired; can be empty. Declares a list of the configurable properties that the algorithm exposes.\nContains the following sub-fields:\n\n\nname:\n\n Required.\n\n\ntype:\n\n Required.\n \nBOOLEAN\n, \nFLOAT\n, \nDOUBLE\n, \nINT\n, \nLONG\n, or \nSTRING\n.\n\n\ndefaultValue:\n\n Required.\n Must be provided in order to create a default action associated with the algorithm, where an action is a specific instance of an algorithm configured with a set of property values.\n\n\ndescription:\n\n Required.\n Description of the property. By convention, the default value for a property should be described in its description text.\n\n\n\n\n\n\n\n\n\n\n\n\nactions\n\n\nOptional.\n\n\nActions are used in the development of pipelines. 
Provides a list of custom actions that will be added during component registration.\n\n\n\n\nNOTE:\n For convenience, a default action will be created upon component registration if this element is not provided in the descriptor file.\n\n\n\n\nContains the following sub-fields:\n\n\n\n\n\n\nname:\n\n Required. Contains the action\u2019s name. Must be unique among all actions, including those that already exist on the system and those specified in this descriptor.\n\n\n\n\n\n\ndescription:\n\n Required. Contains a brief description of the action.\n\n\n\n\n\n\nalgorithm:\n\n Required. Contains the name of the algorithm for this action. The algorithm must either already exist on the system or be defined in this descriptor.\n\n\n\n\n\n\nproperties:\n\n Optional. List of properties that will be passed to the algorithm. Each property has an associated name and value sub-field, which are both required. Name must be one of the properties specified in the algorithm definition for this action.\n\n\n\n\n\n\nExample:\n\n\n\"actions\": [\n {\n \"name\": \"SAMPLE COMPONENT FACE DETECTION ACTION\",\n \"description\": \"Executes the sample component face detection algorithm using the default parameters.\",\n \"algorithm\": \"SAMPLECOMPONENT\",\n \"properties\": []\n }\n]\n\n\n\ntasks\n\n\nOptional.\n\n\nA list of custom tasks that will be added during component registration.\n\n\n\n\nNOTE:\n For convenience, a default task will be created upon component registration if this element is not provided in the descriptor file.\n\n\n\n\nContains the following sub-fields:\n\n\n\n\n\n\nname:\n\n Required. Contains the task's name. Must be unique among all tasks, including those that already exist on the system and those specified in this descriptor.\n\n\n\n\n\n\ndescription:\n\n Required. Contains a brief description of the task.\n\n\n\n\n\n\nactions:\n\n Required. Minimum length is 1. Contains the names of the actions that this task uses. Actions must either already exist on the system or be defined in this descriptor.\n\n\n\n\n\n\nExample:\n\n\n\"tasks\": [\n {\n \"name\": \"SAMPLE COMPONENT FACE DETECTION TASK\",\n \"description\": \"Performs sample component face detection.\",\n \"actions\": [\n \"SAMPLE COMPONENT FACE DETECTION ACTION\"\n ]\n }\n]\n\n\n\npipelines\n\n\nOptional.\n\n\nA list of custom pipelines that will be added during component registration.\n\n\n\n\nNOTE:\n For convenience, a default pipeline will be created upon component registration if this element is not provided in the descriptor file.\n\n\n\n\nContains the following sub-fields:\n\n\n\n\n\n\nname:\n\n Required. Contains the pipeline's name. Must be unique among all pipelines, including those that already exist on the system and those specified in this descriptor.\n\n\n\n\n\n\ndescription:\n\n Required. Contains a brief description of the pipeline.\n\n\n\n\n\n\ntasks:\n\n Required. Minimum length is 1. Contains the names of the tasks that this pipeline uses. Tasks must either already exist on the system or be defined in this descriptor.\n\n\n\n\n\n\nExample:\n\n\n\"pipelines\": [\n {\n \"name\": \"SAMPLE COMPONENT FACE DETECTION PIPELINE\",\n \"description\": \"Performs sample component face detection.\",\n \"tasks\": [\n \"SAMPLE COMPONENT FACE DETECTION TASK\"\n ]\n }\n]",
"title": "Component Descriptor Reference"
},
{
@@ -532,7 +567,7 @@
},
{
"location": "/CPP-Batch-Component-API/index.html",
- "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nAPI Overview\n\n\nIn OpenMPF, a \ncomponent\n is a plugin that receives jobs (containing media), processes that media, and returns results.\n\n\nThe OpenMPF Batch Component API currently supports the development of \ndetection components\n, which are used to detect objects in image, video, audio, or other (generic) files that reside on disk.\n\n\nUsing this API, detection components can be built to provide:\n\n\n\n\nDetection (Localizing an object)\n\n\nTracking (Localizing an object across multiple frames)\n\n\nClassification (Detecting the type of object and optionally localizing that object)\n\n\nTranscription (Detecting speech and transcribing it into text)\n\n\n\n\nHow Components Integrate into OpenMPF\n\n\nComponents are integrated into OpenMPF through the use of OpenMPF's \nComponent Executable\n. Developers create component libraries that encapsulate the component detection logic. Each instance of the Component Executable loads one of these libraries and uses it to service job requests sent by the OpenMPF Workflow Manager (WFM).\n\n\nThe Component Executable:\n\n\n\n\nReceives and parses job requests from the WFM\n\n\nInvokes functions on the component library to obtain detection results\n\n\nPopulates and sends the respective responses to the WFM\n\n\n\n\nThe basic pseudocode for the Component Executable is as follows:\n\n\ncomponent->SetRunDirectory(...)\ncomponent->Init()\nwhile (true) {\n job = ReceiveJob()\n if (component->Supports(job.data_type))\n component->GetDetections(...) // Component logic does the work here\n SendJobResponse()\n}\ncomponent->Close()\n\n\n\nEach instance of a Component Executable runs as a separate process.\n\n\nThe Component Executable receives and parses requests from the WFM, invokes functions on the Component Logic to get detection objects, and subsequently populates responses with the component output and sends them to the WFM.\n\n\nA component developer implements a detection component by extending \nMPFDetectionComponent\n.\n\n\nAs an alternative to extending \nMPFDetectionComponent\n directly, a developer may extend one of several convenience adapter classes provided by OpenMPF. See \nConvenience Adapters\n for more information.\n\n\nGetting Started\n\n\nThe quickest way to get started with the C++ Batch Component API is to first read the \nOpenMPF Component API Overview\n and then \nreview the source\n for example OpenMPF C++ detection components.\n\n\nDetection components are implemented by:\n\n\n\n\nExtending \nMPFDetectionComponent\n.\n\n\nBuilding the component into a shared object library. (See \nHelloWorldComponent CMakeLists.txt\n).\n\n\nCreating a component Docker image. (See the \nREADME\n).\n\n\n\n\nAPI Specification\n\n\nThe figure below presents a high-level component diagram of the C++ Batch Component API:\n\n\n\n\nThe Node Manager is only used in a non-Docker deployment. 
In a Docker deployment the Component Executor is started by the Docker container itself.\n\n\nThe API consists of \nComponent Interfaces\n, which provide interfaces and abstract classes for developing components; \nJob Definitions\n, which define the work to be performed by a component; \nJob Results\n, which define the results generated by the component; \nComponent Adapters\n, which provide default implementations of several of the \nMPFDetectionComponent\n interface functions; and \nComponent Utilities\n, which perform actions such as image rotation, and cropping.\n\n\nComponent Interface\n\n\n\n\nMPFComponent\n - Abstract base class for components.\n\n\n\n\nDetection Component Interface\n\n\n\n\nMPFDetectionComponent\n extends \nMPFComponent\n - Abstract class that should be extended by all OpenMPF C++ detection components that perform batch processing.\n\n\n\n\nJob Definitions\n\n\nThe following data structures contain details about a specific job (work unit):\n\n\n\n\nMPFImageJob\n extends \nMPFJob\n\n\nMPFVideoJob\n extends \nMPFJob\n\n\nMPFAudioJob\n extends \nMPFJob\n\n\nMPFGenericJob\n extends \nMPFJob\n\n\n\n\nJob Results\n\n\nThe following data structures define detection results:\n\n\n\n\nMPFImageLocation\n\n\nMPFVideoTrack\n\n\nMPFAudioTrack\n\n\nMPFGenericTrack\n\n\n\n\nComponents must also include two \nComponent Factory Functions\n.\n\n\nComponent Interface\n\n\nThe \nMPFComponent\n class is the abstract base class utilized by all OpenMPF C++ components that perform batch processing.\n\n\nSee the latest source here.\n\n\n\n\nIMPORTANT:\n This interface should not be directly implemented, because no mechanism exists for launching components based off of it. Currently, the only supported type of component is detection, and all batch detection components should instead extend \nMPFDetectionComponent\n.\n\n\n\n\nSetRunDirectory(string)\n\n\nSets the value of the private \nrun_directory\n data member which contains the full path of the parent folder above where the component is installed.\n\n\n\n\nFunction Definition:\n\n\n\n\nvoid SetRunDirectory(const string &run_dir)\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nrun_dir\n\n\nconst string &\n\n\nFull path of the parent folder above where the component is installed.\n\n\n\n\n\n\n\n\n\n\nReturns: none\n\n\n\n\n\n\nIMPORTANT:\n \nSetRunDirectory\n is called by the Component Executable to set the correct path. This function should not be called within your implementation.\n\n\n\n\nGetRunDirectory()\n\n\nReturns the value of the private \nrun_directory\n data member which contains the full path of the parent folder above where the component is installed. 
This parent folder is also known as the plugin folder.\n\n\n\n\nFunction Definition:\n\n\n\n\nstring GetRunDirectory()\n\n\n\n\n\n\n\nParameters: none\n\n\n\n\n\n\nReturns: (\nstring\n) Full path of the parent folder above where the component is installed.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nstring run_dir = GetRunDirectory();\nstring plugin_path = run_dir + \"/SampleComponent\";\nstring config_path = plugin_path + \"/config\";\n\n\n\nInit()\n\n\nThe component should perform all initialization operations in the \nInit\n member function.\nThis will be executed once by the Component Executable, on component startup, before the first job, after \nSetRunDirectory\n.\n\n\n\n\nFunction Definition:\n\n\n\n\nbool Init()\n\n\n\n\n\n\n\nParameters: none\n\n\n\n\n\n\nReturns: (\nbool\n) Return true if initialization is successful, otherwise return false.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nbool SampleComponent::Init() {\n // Get component paths\n string run_dir = GetRunDirectory();\n string plugin_path = run_dir + \"/SampleComponent\";\n string config_path = plugin_path + \"/config\";\n\n // Setup logger, load data models, etc.\n\n return true;\n}\n\n\n\nClose()\n\n\nThe component should perform all shutdown operations in the \nClose\n member function.\nThis will be executed once by the Component Executable, on component shutdown, usually after the last job.\n\n\nThis function is called before the component instance is deleted (see \nComponent Factory Functions\n).\n\n\n\n\nFunction Definition:\n\n\n\n\nbool Close()\n\n\n\n\n\n\n\nParameters: none\n\n\n\n\n\n\nReturns: (\nbool\n) Return true if successful, otherwise return false.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nbool SampleComponent::Close() {\n // Free memory, etc.\n return true;\n}\n\n\n\nGetComponentType()\n\n\nThe GetComponentType() member function allows the C++ Batch Component API to determine the component \"type.\" Currently \nMPF_DETECTION_COMPONENT\n is the only supported component type. APIs for other component types may be developed in the future.\n\n\n\n\nFunction Definition:\n\n\n\n\nMPFComponentType GetComponentType()\n\n\n\n\n\n\n\nParameters: none\n\n\n\n\n\n\nReturns: (\nMPFComponentType\n) Currently, \nMPF_DETECTION_COMPONENT\n is the only supported return value.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nMPFComponentType SampleComponent::GetComponentType() {\n return MPF_DETECTION_COMPONENT;\n};\n\n\n\nComponent Factory Functions\n\n\nEvery detection component must include the following macros in its implementation:\n\n\nMPF_COMPONENT_CREATOR(TYPENAME);\n\n\n\nMPF_COMPONENT_DELETER();\n\n\n\nThe creator macro takes the \nTYPENAME\n of the detection component (for example, \u201cHelloWorld\u201d). This macro creates the factory function that the OpenMPF Component Executable will call in order to instantiate the detection component. 
The creation function is called once, to obtain an instance of the component, after the component library has been loaded into memory.\n\n\nThe deleter macro creates the factory function that the Component Executable will use to delete that instance of the detection component.\n\n\nThese macros must be used outside of a class declaration, preferably at the bottom or top of a component source (.cpp) file.\n\n\nExample:\n\n\n// Note: Do not put the TypeName/Class Name in quotes\nMPF_COMPONENT_CREATOR(HelloWorld);\nMPF_COMPONENT_DELETER();\n\n\n\nDetection Component Interface\n\n\nThe \nMPFDetectionComponent\n class is the abstract class utilized by all OpenMPF C++ detection components that perform batch processing. This class provides functions for developers to integrate detection logic into OpenMPF.\n\n\nSee the latest source here.\n\n\n\n\nIMPORTANT:\n Each batch detection component must implement all of the \nGetDetections()\n functions or extend from a superclass which provides implementations for them (see \nconvenience adapters\n).\n\n\nIf your component does not support a particular data type, it should simply:\n\nreturn MPF_UNSUPPORTED_DATA_TYPE;\n\n\n\n\nConvenience Adapters\n\n\nAs an alternative to extending \nMPFDetectionComponent\n directly, developers may extend one of several convenience adapter classes provided by OpenMPF.\n\n\nThese adapters provide default implementations of several functions in \nMPFDetectionComponent\n and ensure that the component's logic properly extends from the Component API. This enables developers to concentrate on implementation of the detection algorithm.\n\n\nThe following adapters are provided:\n\n\n\n\nImage Detection (\nsource\n)\n\n\nVideo Detection (\nsource\n)\n\n\nImage and Video Detection (\nsource\n)\n\n\nAudio Detection (\nsource\n)\n\n\nAudio and Video Detection (\nsource\n)\n\n\nGeneric Detection (\nsource\n)\n\n\n\n\n\n\nExample: Creating Adaptors to Perform Naive Tracking:\n\nA simple detector that operates on videos may simply go through the video frame-by-frame, extract each frame\u2019s data, and perform detections on that data as though it were processing a new unrelated image each time. As each frame is processed, one or more \nMPFImageLocations\n are generated.\n\n\nGenerally, it is preferred that a detection component that supports \nVIDEO\n data is able to perform tracking across video frames to appropriately correlate \nMPFImageLocation\n detections across frames.\n\n\nAn adapter could be developed to perform simple tracking. 
This would correlate \nMPFImageLocation\n detections across frames by na\u00efvely looking for bounding box regions in each contiguous frame that overlap by a given threshold such as 50%.\n\n\n\n\nSupports(MPFDetectionDataType)\n\n\nReturns true or false depending on the data type is supported or not.\n\n\n\n\nFunction Definition:\n\n\n\n\nbool Supports(MPFDetectionDataType data_type)\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\ndata_type\n\n\nMPFDetectionDataType\n\n\nReturn true if the component supports IMAGE, VIDEO, AUDIO, and/or UNKNOWN (generic) processing.\n\n\n\n\n\n\n\n\n\n\n\n\nReturns: (\nbool\n) True if the component supports the data type, otherwise false.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\n// Sample component that supports only image and video files\nbool SampleComponent::Supports(MPFDetectionDataType data_type) {\n return data_type == MPFDetectionDataType::IMAGE || data_type == MPFDetectionDataType::VIDEO;\n}\n\n\n\nGetDetectionType()\n\n\nReturns the type of object detected by the component.\n\n\n\n\nFunction Definition:\n\n\n\n\nstring GetDetectionType()\n\n\n\n\n\n\n\nParameters: none\n\n\n\n\n\n\nReturns: (\nstring\n) The type of object detected by the component. Should be in all CAPS. Examples include: \nFACE\n, \nMOTION\n, \nPERSON\n, \nSPEECH\n, \nCLASS\n (for object classification), or \nTEXT\n.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nstring SampleComponent::GetDetectionType() {\n return \"FACE\";\n}\n\n\n\nGetDetections(MPFImageJob \u2026)\n\n\nUsed to detect objects in an image file. The MPFImageJob structure contains \nthe data_uri specifying the location of the image file.\n\n\nCurrently, the data_uri is always a local file path. For example, \"/opt/mpf/share/remote-media/test-file.jpg\". \nThis is because all media is copied to the OpenMPF server before the job is executed.\n\n\n\n\nFunction Definition:\n\n\n\n\nstd::vector GetDetections(const MPFImageJob &job);\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njob\n\n\nconst MPFImageJob&\n\n\nStructure containing details about the work to be performed. See \nMPFImageJob\n\n\n\n\n\n\n\n\n\n\nReturns: (\nstd::vector\n) The \nMPFImageLocation\n data for each detected object.\n\n\n\n\nGetDetections(MPFVideoJob \u2026)\n\n\nUsed to detect objects in a video file. Prior to being sent to the component, videos are split into logical \"segments\" \nof video data and each segment (containing a range of frames) is assigned to a different job. Components are not \nguaranteed to receive requests in any order. For example, the first request processed by a component might receive \na request for frames 300-399 of a Video A, while the next request may cover frames 900-999 of a Video B.\n\n\n\n\nFunction Definition:\n\n\n\n\nstd::vector GetDetections(const MPFVideoJob &job);\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njob\n\n\nconst MPFVideoJob&\n\n\nStructure containing details about the work to be performed. See \nMPFVideoJob\n\n\n\n\n\n\n\n\n\n\nReturns: (\nstd::vector\n) The \nMPFVideoTrack\n data for each detected object. \n\n\n\n\nGetDetections(MPFAudioJob \u2026)\n\n\nUsed to detect objects in an audio file. 
Currently, audio files are not logically segmented, so a job will contain \nthe entirety of the audio file.\n\n\n\n\nFunction Definition:\n\n\n\n\nstd::vector GetDetections(const MPFAudioJob &job);\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njob\n\n\nconst MPFAudioJob &\n\n\nStructure containing details about the work to be performed. See \nMPFAudioJob\n\n\n\n\n\n\n\n\n\n\nReturns: (\nstd::vector\n) The \nMPFAudioTrack\n data for each detected object.\n\n\n\n\nGetDetections(MPFGenericJob \u2026)\n\n\nUsed to detect objects in files that aren't video, image, or audio files. Such files are of the UNKNOWN type and \nhandled generically. These files are not logically segmented, so a job will contain the entirety of the file.\n\n\n\n\nFunction Definition:\n\n\n\n\nstd::vector GetDetections(const MPFGenericJob &job);\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njob\n\n\nconst MPFGenericJob &\n\n\nStructure containing details about the work to be performed. See \nMPFGenericJob\n\n\n\n\n\n\n\n\n\n\nReturns: (\nstd::vector\n) The \nMPFGenericTrack\n data for each detected object.\n\n\n\n\nDetection Job Data Structures\n\n\nThe following data structures contain details about a specific job (work unit):\n\n\n\n\nMPFImageJob\n extends \nMPFJob\n\n\nMPFVideoJob\n extends \nMPFJob\n\n\nMPFAudioJob\n extends \nMPFJob\n\n\nMPFGenericJob\n extends \nMPFJob\n\n\n\n\nThe following data structures define detection results:\n\n\n\n\nMPFImageLocation\n\n\nMPFVideoTrack\n\n\nMPFAudioTrack\n\n\nMPFGenericTrack\n\n\n\n\nMPFJob\n\n\nStructure containing information about a job to be performed on a piece of media.\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFJob(\n const string &job_name,\n const string &data_uri,\n const Properties &job_properties,\n const Properties &media_properties)\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njob_name \n\n\nconst string &\n\n\nA specific name given to the job by the OpenMPF framework. This value may be used, for example, for logging and debugging purposes.\n\n\n\n\n\n\ndata_uri \n\n\nconst string &\n\n\nThe URI of the input media file to be processed. Currently, this is a file path. For example, \"/opt/mpf/share/remote-media/test-file.avi\".\n\n\n\n\n\n\njob_properties \n\n\nconst Properties &\n\n\nContains a map of \n\n which represents the property name and the property value. The key corresponds to the property name specified in the component descriptor file described in the \nComponent Descriptor Reference\n. Values are determined when creating a pipeline or when submitting a job. \n Note: The job_properties map may not contain the full set of job properties. For properties not contained in the map, the component must use a default value.\n\n\n\n\n\n\nmedia_properties \n\n\nconst Properties &\n\n\nContains a map of \n\n of metadata about the media associated with the job. The entries in the map vary depending on the type of media. Refer to the type-specific job structures below.\n\n\n\n\n\n\n\n\nJob properties can also be set through environment variables prefixed with \nMPF_PROP_\n. This allows \nusers to set job properties in their \n\ndocker-compose files.\n \nThese will take precedence over all other property types (job, algorithm, media, etc). 
It is not \npossible to change the value of properties set via environment variables at runtime and therefore \nthey should only be used to specify properties that will not change throughout the entire lifetime \nof the service (e.g. Docker container).\n\n\nMPFImageJob\n\n\nExtends \nMPFJob\n\n\nStructure containing data used for detection of objects in an image file.\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFImageJob(\n const string &job_name,\n const string &data_uri,\n const Properties &job_properties,\n const Properties &media_properties)\n\n\n\nMPFImageJob(\n const string &job_name,\n const string &data_uri,\n const MPFImageLocation &location,\n const Properties &job_properties,\n const Properties &media_properties)\n\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njob_name\n\n \nconst string &\n\n \nSee \nMPFJob.job_name\n for description.\n\n \n\n \n\n \ndata_uri\n\n \nconst string &\n\n \nSee \nMPFJob.data_uri\n for description.\n\n \n\n \n\n \nlocation\n\n \nconst MPFImageLocation &\n\n \nAn \nMPFImageLocation\n from the previous pipeline stage. Provided when feed forward is enabled. See \nFeed Forward Guide\n.\n\n \n\n \n\n \njob_properties\n\n \nconst Properties &\n\n \nSee \nMPFJob.job_properties\n for description.\n\n \n\n \n\n \nmedia_properties\n\n \nconst Properties &\n\n \n\n See \nMPFJob.media_properties\n for description.\n \n\n Includes the following key-value pairs:\n \n\n \nMIME_TYPE\n : the MIME type of the media\n\n \nFRAME_WIDTH\n : the width of the image in pixels\n\n \nFRAME_HEIGHT\n : the height of the image in pixels\n\n \n\n May include the following key-value pairs:\n \n\n \nROTATION\n : A floating point value in the interval \n[0.0, 360.0)\n indicating the orientation of the media in degrees in the counter-clockwise direction. In order to view the media in the upright orientation, it must be rotated the given number of degrees in the clockwise direction.\n\n \nHORIZONTAL_FLIP\n : true if the image is mirrored across the Y-axis, otherwise false\n\n \nEXIF_ORIENTATION\n : the standard EXIF orientation tag; a value between 1 and 8\n\n \n\n \n\n \n\n \n\n\n\n\n\nMPFVideoJob\n\n\nExtends \nMPFJob\n\n\nStructure containing data used for detection of objects in a video file.\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFVideoJob(\n const string &job_name,\n const string &data_uri,\n int start_frame,\n int stop_frame,\n const Properties &job_properties,\n const Properties &media_properties)\n\n\n\nMPFVideoJob(\n const string &job_name,\n const string &data_uri,\n int start_frame,\n int stop_frame,\n const MPFVideoTrack &track,\n const Properties &job_properties,\n const Properties &media_properties)\n\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njob_name\n\n \nconst string &\n\n \nSee \nMPFJob.job_name\n for description.\n\n \n\n \n\n \ndata_uri\n\n \nconst string &\n\n \nSee \nMPFJob.data_uri\n for description.\n\n \n\n \n\n \nstart_frame\n\n \nconst int\n\n \nThe first frame number (0-based index) of the video that should be processed to look for detections.\n\n \n\n \n\n \nstop_frame\n\n \nconst int\n\n \nThe last frame number (0-based index) of the video that should be processed to look for detections.\n\n \n \n \n\n \ntrack\n\n \nconst MPFVideoTrack &\n\n \nAn \nMPFVideoTrack\n from the previous pipeline stage. Provided when feed forward is enabled. 
See \nFeed Forward Guide\n.\n\n \n\n \n\n \njob_properties\n\n \nconst Properties &\n\n \nSee \nMPFJob.job_properties\n for description.\n\n \n\n \n\n \nmedia_properties\n\n \nconst Properties &\n\n \n\n See \nMPFJob.media_properties\n for description.\n \n\n Includes the following key-value pairs:\n \n\n \nDURATION\n : length of video in milliseconds\n\n \nFPS\n : frames per second (averaged for variable frame rate video)\n\n \nFRAME_COUNT\n : the number of frames in the video\n\n \nMIME_TYPE\n : the MIME type of the media\n\n \nFRAME_WIDTH\n : the width of a frame in pixels\n\n \nFRAME_HEIGHT\n : the height of a frame in pixels\n\n \nHAS_CONSTANT_FRAME_RATE\n : set to true if the video has a constant frame rate; otherwise, omitted or set to false if the video has variable frame rate or the type of frame rate cannot be determined\n\n \n\n May include the following key-value pair:\n \n\n \nROTATION\n : A floating point value in the interval \n[0.0, 360.0)\n indicating the orientation of the media in degrees in the counter-clockwise direction. In order to view the media in the upright orientation, it must be rotated the given number of degrees in the clockwise direction.\n\n \n\n \n\n \n\n \n\n\n\n\n\n\n\nIMPORTANT:\n \nFRAME_INTERVAL\n is a common job property that many components support. For frame intervals greater than 1, the component must look for detections starting with the first frame, and then skip frames as specified by the frame interval, until or before it reaches the stop frame. For example, given a start frame of 0, a stop frame of 99, and a frame interval of 2, then the detection component must look for objects in frames numbered 0, 2, 4, 6, ..., 98.\n\n\n\n\nMPFAudioJob\n\n\nExtends \nMPFJob\n\n\nStructure containing data used for detection of objects in an audio file. Currently, audio files are not logically segmented, so a job will contain the entirety of the audio file.\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFAudioJob(\n const string &job_name,\n const string &data_uri,\n int start_time,\n int stop_time,\n const Properties &job_properties,\n const Properties &media_properties)\n\n\n\nMPFAudioJob(\n const string &job_name,\n const string &data_uri,\n int start_time,\n int stop_time,\n const MPFAudioTrack &track, \n const Properties &job_properties,\n const Properties &media_properties)\n\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njob_name\n\n \nconst string &\n\n \nSee \nMPFJob.job_name\n for description.\n\n \n\n \n\n \ndata_uri\n\n \nconst string &\n\n \nSee \nMPFJob.data_uri\n for description.\n\n \n\n \n\n \nstart_time\n\n \nconst int\n\n \nThe time (0-based index, in milliseconds) associated with the beginning of the segment of the audio file that should be processed to look for detections.\n\n \n\n \n\n \nstop_time\n\n \nconst int\n\n \nThe time (0-based index, in milliseconds) associated with the end of the segment of the audio file that should be processed to look for detections.\n\n \n\n \n\n \ntrack\n\n \nconst MPFAudioTrack &\n\n \nAn \nMPFAudioTrack\n from the previous pipeline stage. Provided when feed forward is enabled. 
See \nFeed Forward Guide\n.\n\n \n\n \n\n \njob_properties\n\n \nconst Properties &\n\n \nSee \nMPFJob.job_properties\n for description.\n\n \n\n \n\n \nmedia_properties\n\n \nconst Properties &\n\n \n\n See \nMPFJob.media_properties\n for description.\n \n\n Includes the following key-value pairs:\n \n\n \nDURATION\n : length of audio file in milliseconds\n\n \nMIME_TYPE\n : the MIME type of the media\n\n \n\n \n\n \n\n \n\n\n\n\n\nMPFGenericJob\n\n\nExtends \nMPFJob\n\n\nStructure containing data used for detection of objects in a file that isn't a video, image, or audio file. The file is of the UNKNOWN type and handled generically. The file is not logically segmented, so a job will contain the entirety of the file.\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFGenericJob(\n const string &job_name,\n const string &data_uri,\n const Properties &job_properties,\n const Properties &media_properties)\n\n\n\nMPFGenericJob(\n const string &job_name,\n const string &data_uri,\n const MPFGenericTrack &track,\n const Properties &job_properties,\n const Properties &media_properties)\n}\n\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njob_name\n\n \nconst string &\n\n \nSee \nMPFJob.job_name\n for description.\n\n \n\n \n\n \ndata_uri\n\n \nconst string &\n\n \nSee \nMPFJob.data_uri\n for description.\n\n \n\n \n\n \ntrack\n\n \nconst MPFGenericTrack &\n\n \nAn \nMPFGenericTrack\n from the previous pipeline stage. Provided when feed forward is enabled. See \nFeed Forward Guide\n.\n\n \n\n \n\n \njob_properties\n\n \nconst Properties &\n\n \nSee \nMPFJob.job_properties\n for description.\n\n \n\n \n\n \nmedia_properties\n\n \nconst Properties &\n\n \n\n See \nMPFJob.media_properties\n for description.\n \n\n Includes the following key-value pair:\n \n\n \nMIME_TYPE\n : the MIME type of the media\n\n \n\n \n\n \n\n \n\n\n\n\n\nDetection Job Result Classes\n\n\nMPFImageLocation\n\n\nStructure used to store the location of detected objects in a image file.\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFImageLocation()\nMPFImageLocation(\n int x_left_upper,\n int y_left_upper,\n int width,\n int height,\n float confidence = -1,\n const Properties &detection_properties = {})\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nx_left_upper\n\n\nint\n\n\nUpper left X coordinate of the detected object.\n\n\n\n\n\n\ny_left_upper\n\n\nint\n\n\nUpper left Y coordinate of the detected object.\n\n\n\n\n\n\nwidth\n\n\nint\n\n\nThe width of the detected object.\n\n\n\n\n\n\nheight\n\n\nint\n\n\nThe height of the detected object.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetection_properties\n\n\nProperties &\n\n\nOptional additional information about the detected object. There is no restriction on the keys or the number of entries that can be added to the detection_properties map. For best practice, keys should be in all CAPS. 
See the \nsection\n for \nROTATION\n and \nHORIZONTAL_FLIP\n below,\n\n\n\n\n\n\n\n\n\n\nExample:\n\n\n\n\nA component that performs generic object classification can add an entry to \ndetection_properties\n where the key is \nCLASSIFICATION\n and the value is the type of object detected.\n\n\n\nMPFImageLocation { \n x_left_upper = 0, y_left_upper = 0, width = 100, height = 50, confidence = 1.0,\n { {\"CLASSIFICATION\", \"backpack\"} } \n}\n\n\n\n\n\n\nRotation and Horizontal Flip\n\n\nWhen the \ndetection_properties\n map contains a \nROTATION\n key, it should be a floating point value in the interval\n\n[0.0, 360.0)\n indicating the orientation of the detection in degrees in the counter-clockwise direction.\nIn order to view the detection in the upright orientation, it must be rotated the given number of degrees in the\nclockwise direction.\n\n\nThe \ndetection_properties\n map can also contain a \nHORIZONTAL_FLIP\n property that will either be \n\"true\"\n or \n\"false\"\n.\nThe \ndetection_properties\n map may have both \nHORIZONTAL_FLIP\n and \nROTATION\n keys.\n\n\nThe Workflow Manager performs the following algorithm to draw the bounding box when generating markup:\n\n\n\n\n\n Draw the rectangle ignoring rotation and flip. \n\n\n\n\n\n Rotate the rectangle counter-clockwise the given number of degrees around its top left corner. \n\n\n\n\n\n If the rectangle is flipped, flip horizontally around the top left corner.\n\n\n\n\n\n\n\n\nIn the image above you can see the three steps required to properly draw a bounding box.\nStep 1 is drawn in red. Step 2 is drawn in blue. Step 3 and the final result is drawn in green.\nThe detection for the image above is:\n\n\n\nMPFImageLocation { \n x_left_upper = 210, y_left_upper = 189, width = 177, height = 41, confidence = 1.0,\n { {\"ROTATION\", \"15\"}, { \"HORIZONTAL_FLIP\", \"true\" } } \n}\n\n\n\n\nNote that the \nx_left_upper\n, \ny_left_upper\n, \nwidth\n, and \nheight\n values describe the red rectangle. The addition\nof the \nROTATION\n property results in the blue rectangle, and the addition of the \nHORIZONTAL_FLIP\n property results\nin the green rectangle. \n\n\nOne way to think about the process is \"draw the unrotated and unflipped rectangle, stick a pin in the upper left corner,\nand then rotate and flip around the pin\".\n\n\nRotation-Only Example\n\n\n\n\nThe Workflow Manager generated the above image by performing markup on the original image with the following\ndetection:\n\n\n\nMPFImageLocation { \n x_left_upper = 156, y_left_upper = 339, width = 194, height = 243, confidence = 1.0,\n { {\"ROTATION\", \"90.0\"} } \n}\n\n\n\n\nThe markup process followed steps 1 and 2 in the previous section, skipping step 3 because there is no\n\nHORIZONTAL_FLIP\n. \n\n\nIn order to properly extract the detection region from the original image, such as when generating an artifact, you\nwould need to rotate the region in the above image 90 degrees clockwise around the cyan dot currently shown in the\nbottom-left corner so that the face is in the proper upright position. \n\n\nWhen the rotation is properly corrected in this way, the cyan dot will appear in the top-left corner of the bounding\nbox. That is why its position is described using the \nx_left_upper\n, and \ny_left_upper\n variables. They refer to the\ntop-left corner of the correctly oriented region. 
\n\n\nMPFVideoTrack\n\n\nStructure used to store the location of detected objects in a video file.\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFVideoTrack()\nMPFVideoTrack(\n int start_frame,\n int stop_frame,\n float confidence = -1,\n map frame_locations,\n const Properties &detection_properties = {})\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nstart_frame\n\n\nint\n\n\nThe first frame number (0-based index) that contained the detected object.\n\n\n\n\n\n\nstop_frame\n\n\nint\n\n\nThe last frame number (0-based index) that contained the detected object.\n\n\n\n\n\n\nframe_locations\n\n\nmap\n\n\nA map of individual detections. The key for each map entry is the frame number where the detection was generated, and the value is a \nMPFImageLocation\n calculated as if that frame was a still image. Note that a key-value pair is \nnot\n required for every frame between the track start frame and track stop frame.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetection_properties\n\n\nProperties &\n\n\nOptional additional information about the detected object. There is no restriction on the keys or the number of entries that can be added to the detection_properties map. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nNOTE:\n Currently, \nMPFVideoTrack.detection_properties\n do not show up in the JSON output object or are used by the WFM in any way.\n\n\n\n\nA component that detects text can add an entry to \ndetection_properties\n where the key is \nTRANSCRIPT\n and the value is a string representing the text found in the video segment.\n\n\nMPFVideoTrack track;\ntrack.start_frame = 0;\ntrack.stop_frame = 5;\ntrack.confidence = 1.0;\ntrack.frame_locations = frame_locations;\ntrack.detection_properties[\"TRANSCRIPT\"] = \"RE5ULTS FR0M A TEXT DETECTER\";\n\n\n\nMPFAudioTrack\n\n\nStructure used to store the location of detected objects in an audio file.\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFAudioTrack()\nMPFAudioTrack(\n int start_time,\n int stop_time,\n float confidence = -1,\n const Properties &detection_properties = {})\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nstart_time\n\n\nint\n\n\nThe time (0-based index, in ms) when the audio detection event started.\n\n\n\n\n\n\nstop_time\n\n\nint\n\n\nThe time (0-based index, in ms) when the audio detection event stopped.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetection_properties\n\n\nProperties &\n\n\nOptional additional information about the detection. There is no restriction on the keys or the number of entries that can be added to the detection_properties map. 
For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\n\n\nNOTE:\n Currently, \nMPFAudioTrack.detection_properties\n do not show up in the JSON output object or are used by the WFM in any way.\n\n\n\n\nMPFGenericTrack\n\n\nStructure used to store the location of detected objects in a file that is not a video, image, or audio file. The file is of the UNKNOWN type and handled generically.\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFGenericTrack()\nMPFGenericTrack(\n float confidence = -1,\n const Properties &detection_properties = {})\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetection_properties\n\n\nProperties &\n\n\nOptional additional information about the detection. There is no restriction on the keys or the number of entries that can be added to the detection_properties map. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\nException Types\n\n\nMPFDetectionException\n\n\nException that should be thrown by the \nGetDetections()\n methods when an error occurs. \nThe content of the \nerror_code\n and \nwhat()\n members will appear in the JSON output object.\n\n\n\n\nConstructors:\n\n\n\n\nMPFDetectionException(MPFDetectionError error_code, const std::string &what = \"\")\nMPFDetectionException(const std::string &what)\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nerror_code\n\n\nMPFDetectionError\n\n\nSpecifies the error type. See \nMPFDetectionError\n.\n\n\n\n\n\n\nwhat()\n\n\nconst char*\n\n\nTextual description of the specific error. (Inherited from \nstd::exception\n)\n\n\n\n\n\n\n\n\nEnumeration Types\n\n\nMPFDetectionError\n\n\nEnum used to indicate the type of error that occurred in a \nGetDetections()\n method. It is used as a parameter to \nthe \nMPFDetectionException\n constructor. A component is not required to support all error types.\n\n\n\n\n\n\n\n\nENUM\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nMPF_DETECTION_SUCCESS\n\n\nThe component function completed successfully.\n\n\n\n\n\n\nMPF_OTHER_DETECTION_ERROR_TYPE\n\n\nThe component function has failed for a reason that is not captured by any of the other error codes.\n\n\n\n\n\n\nMPF_DETECTION_NOT_INITIALIZED\n\n\nThe initialization of the component, or the initialization of any of its dependencies, has failed for any reason.\n\n\n\n\n\n\nMPF_UNSUPPORTED_DATA_TYPE\n\n\nThe job passed to a component requests processing of a job of an unsupported type. For instance, a component that is only capable of processing audio files should return this error code if a video or image job request is received.\n\n\n\n\n\n\nMPF_COULD_NOT_OPEN_DATAFILE\n\n\nThe data file to be processed could not be opened for any reason, such as a permissions failure, or an unreachable URI. \nUse MPF_COULD_NOT_OPEN_MEDIA for media files.\n\n\n\n\n\n\nMPF_COULD_NOT_READ_DATAFILE\n\n\nThere is a failure reading data from a successfully opened input data file. \nUse MPF_COULD_NOT_READ_MEDIA for media files.\n\n\n\n\n\n\nMPF_FILE_WRITE_ERROR\n\n\nThe component received a failure for any reason when attempting to write to a file.\n\n\n\n\n\n\nMPF_BAD_FRAME_SIZE\n\n\nThe frame data retrieved has an incorrect or invalid frame size. 
For example, if a call to \ncv::imread()\n returns a frame of data with either the number of rows or columns less than or equal to 0.\n\n\n\n\n\n\nMPF_DETECTION_FAILED\n\n\nGeneral failure of a detection algorithm. This does not indicate a lack of detections found in the media, but rather a break down in the algorithm that makes it impossible to continue to try to detect objects.\n\n\n\n\n\n\nMPF_INVALID_PROPERTY\n\n\nThe component received a property that is unrecognized or has an invalid/out-of-bounds value.\n\n\n\n\n\n\nMPF_MISSING_PROPERTY\n\n\nThe component received a job that is missing a required property.\n\n\n\n\n\n\nMPF_MEMORY_ALLOCATION_FAILED\n\n\nThe component failed to allocate memory for any reason.\n\n\n\n\n\n\nMPF_GPU_ERROR\n\n\nThe job was configured to execute on a GPU, but there was an issue with the GPU or no GPU was detected.\n\n\n\n\n\n\nMPF_NETWORK_ERROR\n\n\nThe component failed to communicate with an external system over the network. The system may not be available or there may have been a timeout.\n\n\n\n\n\n\nMPF_COULD_NOT_OPEN_MEDIA\n\n\nThe media file to be processed could not be opened for any reason, such as a permissions failure, or an unreachable URI.\n\n\n\n\n\n\nMPF_COULD_NOT_READ_MEDIA\n\n\nThere is a failure reading data from a successfully opened media file.\n\n\n\n\n\n\n\n\nUtility Classes\n\n\nFor convenience, the OpenMPF provides the \nMPFImageReader\n (\nsource\n) and \nMPFVideoCapture\n (\nsource\n) utility classes to perform horizontal flipping, rotation, and cropping to a region of interest. Note, that when using these classes, the component will also need to utilize the class to perform a reverse transform to convert the transformed pixel coordinates back to the original (e.g. pre-flipped, pre-rotated, and pre-cropped) coordinate space.\n\n\nC++ Component Build Environment\n\n\nA C++ component library must be built for the same C++ compiler and Linux \nversion that is used by the OpenMPF Component Executable. This is to ensure \ncompatibility between the executable and the library functions at the \nApplication Binary Interface (ABI) level. At this writing, the OpenMPF runs on \nUbuntu 20.04 (kernel version 5.13.0-30), and the OpenMPF C++ Component \nExecutable is built with g++ (GCC) 9.3.0-17.\n\n\nComponents should be supplied as a tar file, which includes not only the component library, but any other libraries or files needed for execution. This includes all other non-standard libraries used by the component (aside from the standard Linux and C++ libraries), and any configuration or data files.\n\n\nComponent Development Best Practices\n\n\nSingle-threaded Operation\n\n\nImplementations are encouraged to operate in single-threaded mode. OpenMPF will parallelize components through multiple instantiations of the component, each running as a separate service.\n\n\nStateless Behavior\n\n\nOpenMPF components should be stateless in operation and give identical output for a provided input (i.e. when processing the same \nMPFJob\n).\n\n\nGPU Support\n\n\nFor components that want to take advantage of NVIDA GPU processors, please read the \nGPU Support Guide\n. 
Also ensure that your build environment has the NVIDIA CUDA Toolkit installed, as described in the \nBuild Environment Setup Guide\n.\n\n\nComponent Structure for non-Docker Deployments\n\n\nIt is recommended that C++ components are organized according to the following directory structure:\n\n\ncomponentName\n\u251c\u2500\u2500 config - Optional component-specific configuration files\n\u251c\u2500\u2500 descriptor\n\u2502 \u2514\u2500\u2500 descriptor.json\n\u2514\u2500\u2500 lib\n \u2514\u2500\u2500libComponentName.so - Compiled component library\n\n\n\nOnce built, components should be packaged into a .tar.gz containing the contents of the directory shown above.\n\n\nLogging\n\n\nIt is recommended to use \nApache log4cxx\n for \nOpenMPF Component logging. Components using log4cxx should not configure logging themselves. \nThe Component Executor will configure log4cxx globally. Components should call \n\nlog4cxx::Logger::getLogger(\"\")\n to a get a reference to the logger. If you \nare using a different logging framework, you should make sure its behavior is similar to how\nthe Component Executor configures log4cxx as described below. \n\n\nThe following log LEVELs are supported: \nFATAL, ERROR, WARN, INFO, DEBUG, TRACE\n.\nThe \nLOG_LEVEL\n environment variable can be set to one of the log levels to change the logging \nverbosity. When \nLOG_LEVEL\n is absent, \nINFO\n is used.\n\n\nNote that multiple instances of the same component can log to the same file. \nAlso, logging content can span multiple lines.\n\n\nThe logger will write to both standard error and \n\n${MPF_LOG_PATH}/${THIS_MPF_NODE}/log/.log\n.\n\n\nEach log statement will take the form:\n\nDATE TIME LEVEL CONTENT\n\n\nFor example:\n\n2016-02-09 13:42:42,341 INFO - Starting sample-component: [ OK ]",
+ "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nAPI Overview\n\n\nIn OpenMPF, a \ncomponent\n is a plugin that receives jobs (containing media), processes that media, and returns results.\n\n\nThe OpenMPF Batch Component API currently supports the development of \ndetection components\n, which are used to detect objects in image, video, audio, or other (generic) files that reside on disk.\n\n\nUsing this API, detection components can be built to provide:\n\n\n\n\nDetection (Localizing an object)\n\n\nTracking (Localizing an object across multiple frames)\n\n\nClassification (Detecting the type of object and optionally localizing that object)\n\n\nTranscription (Detecting speech and transcribing it into text)\n\n\n\n\nHow Components Integrate into OpenMPF\n\n\nComponents are integrated into OpenMPF through the use of OpenMPF's \nComponent Executable\n. Developers create component libraries that encapsulate the component detection logic. Each instance of the Component Executable loads one of these libraries and uses it to service job requests sent by the OpenMPF Workflow Manager (WFM).\n\n\nThe Component Executable:\n\n\n\n\nReceives and parses job requests from the WFM\n\n\nInvokes functions on the component library to obtain detection results\n\n\nPopulates and sends the respective responses to the WFM\n\n\n\n\nThe basic pseudocode for the Component Executable is as follows:\n\n\ncomponent->SetRunDirectory(...)\ncomponent->Init()\nwhile (true) {\n job = ReceiveJob()\n if (component->Supports(job.data_type))\n component->GetDetections(...) // Component logic does the work here\n SendJobResponse()\n}\ncomponent->Close()\n\n\n\nEach instance of a Component Executable runs as a separate process.\n\n\nThe Component Executable receives and parses requests from the WFM, invokes functions on the Component Logic to get detection objects, and subsequently populates responses with the component output and sends them to the WFM.\n\n\nA component developer implements a detection component by extending \nMPFDetectionComponent\n.\n\n\nAs an alternative to extending \nMPFDetectionComponent\n directly, a developer may extend one of several convenience adapter classes provided by OpenMPF. See \nConvenience Adapters\n for more information.\n\n\nGetting Started\n\n\nThe quickest way to get started with the C++ Batch Component API is to first read the \nOpenMPF Component API Overview\n and then \nreview the source\n for example OpenMPF C++ detection components.\n\n\nDetection components are implemented by:\n\n\n\n\nExtending \nMPFDetectionComponent\n.\n\n\nBuilding the component into a shared object library. (See \nHelloWorldComponent CMakeLists.txt\n).\n\n\nCreating a component Docker image. (See the \nREADME\n).\n\n\n\n\nAPI Specification\n\n\nThe figure below presents a high-level component diagram of the C++ Batch Component API:\n\n\n\n\nThe Node Manager is only used in a non-Docker deployment. 
In a Docker deployment the Component Executor is started by the Docker container itself.\n\n\nThe API consists of \nComponent Interfaces\n, which provide interfaces and abstract classes for developing components; \nJob Definitions\n, which define the work to be performed by a component; \nJob Results\n, which define the results generated by the component; \nComponent Adapters\n, which provide default implementations of several of the \nMPFDetectionComponent\n interface functions; and \nComponent Utilities\n, which perform actions such as image rotation, and cropping.\n\n\nComponent Interface\n\n\n\n\nMPFComponent\n - Abstract base class for components.\n\n\n\n\nDetection Component Interface\n\n\n\n\nMPFDetectionComponent\n extends \nMPFComponent\n - Abstract class that should be extended by all OpenMPF C++ detection components that perform batch processing.\n\n\n\n\nJob Definitions\n\n\nThe following data structures contain details about a specific job (work unit):\n\n\n\n\nMPFImageJob\n extends \nMPFJob\n\n\nMPFVideoJob\n extends \nMPFJob\n\n\nMPFAudioJob\n extends \nMPFJob\n\n\nMPFGenericJob\n extends \nMPFJob\n\n\n\n\nJob Results\n\n\nThe following data structures define detection results:\n\n\n\n\nMPFImageLocation\n\n\nMPFVideoTrack\n\n\nMPFAudioTrack\n\n\nMPFGenericTrack\n\n\n\n\nComponents must also include two \nComponent Factory Functions\n.\n\n\nComponent Interface\n\n\nThe \nMPFComponent\n class is the abstract base class utilized by all OpenMPF C++ components that perform batch processing.\n\n\nSee the latest source here.\n\n\n\n\nIMPORTANT:\n This interface should not be directly implemented, because no mechanism exists for launching components based off of it. Currently, the only supported type of component is detection, and all batch detection components should instead extend \nMPFDetectionComponent\n.\n\n\n\n\nSetRunDirectory(string)\n\n\nSets the value of the private \nrun_directory\n data member which contains the full path of the parent folder above where the component is installed.\n\n\n\n\nFunction Definition:\n\n\n\n\nvoid SetRunDirectory(const string &run_dir)\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nrun_dir\n\n\nconst string &\n\n\nFull path of the parent folder above where the component is installed.\n\n\n\n\n\n\n\n\n\n\nReturns: none\n\n\n\n\n\n\nIMPORTANT:\n \nSetRunDirectory\n is called by the Component Executable to set the correct path. This function should not be called within your implementation.\n\n\n\n\nGetRunDirectory()\n\n\nReturns the value of the private \nrun_directory\n data member which contains the full path of the parent folder above where the component is installed. 
This parent folder is also known as the plugin folder.\n\n\n\n\nFunction Definition:\n\n\n\n\nstring GetRunDirectory()\n\n\n\n\n\n\n\nParameters: none\n\n\n\n\n\n\nReturns: (\nstring\n) Full path of the parent folder above where the component is installed.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nstring run_dir = GetRunDirectory();\nstring plugin_path = run_dir + \"/SampleComponent\";\nstring config_path = plugin_path + \"/config\";\n\n\n\nInit()\n\n\nThe component should perform all initialization operations in the \nInit\n member function.\nThis will be executed once by the Component Executable, on component startup, before the first job, after \nSetRunDirectory\n.\n\n\n\n\nFunction Definition:\n\n\n\n\nbool Init()\n\n\n\n\n\n\n\nParameters: none\n\n\n\n\n\n\nReturns: (\nbool\n) Return true if initialization is successful, otherwise return false.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nbool SampleComponent::Init() {\n // Get component paths\n string run_dir = GetRunDirectory();\n string plugin_path = run_dir + \"/SampleComponent\";\n string config_path = plugin_path + \"/config\";\n\n // Setup logger, load data models, etc.\n\n return true;\n}\n\n\n\nClose()\n\n\nThe component should perform all shutdown operations in the \nClose\n member function.\nThis will be executed once by the Component Executable, on component shutdown, usually after the last job.\n\n\nThis function is called before the component instance is deleted (see \nComponent Factory Functions\n).\n\n\n\n\nFunction Definition:\n\n\n\n\nbool Close()\n\n\n\n\n\n\n\nParameters: none\n\n\n\n\n\n\nReturns: (\nbool\n) Return true if successful, otherwise return false.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nbool SampleComponent::Close() {\n // Free memory, etc.\n return true;\n}\n\n\n\nGetComponentType()\n\n\nThe GetComponentType() member function allows the C++ Batch Component API to determine the component \"type.\" Currently \nMPF_DETECTION_COMPONENT\n is the only supported component type. APIs for other component types may be developed in the future.\n\n\n\n\nFunction Definition:\n\n\n\n\nMPFComponentType GetComponentType()\n\n\n\n\n\n\n\nParameters: none\n\n\n\n\n\n\nReturns: (\nMPFComponentType\n) Currently, \nMPF_DETECTION_COMPONENT\n is the only supported return value.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nMPFComponentType SampleComponent::GetComponentType() {\n return MPF_DETECTION_COMPONENT;\n};\n\n\n\nComponent Factory Functions\n\n\nEvery detection component must include the following macros in its implementation:\n\n\nMPF_COMPONENT_CREATOR(TYPENAME);\n\n\n\nMPF_COMPONENT_DELETER();\n\n\n\nThe creator macro takes the \nTYPENAME\n of the detection component (for example, \u201cHelloWorld\u201d). This macro creates the factory function that the OpenMPF Component Executable will call in order to instantiate the detection component. 
The creation function is called once, to obtain an instance of the component, after the component library has been loaded into memory.\n\n\nThe deleter macro creates the factory function that the Component Executable will use to delete that instance of the detection component.\n\n\nThese macros must be used outside of a class declaration, preferably at the bottom or top of a component source (.cpp) file.\n\n\nExample:\n\n\n// Note: Do not put the TypeName/Class Name in quotes\nMPF_COMPONENT_CREATOR(HelloWorld);\nMPF_COMPONENT_DELETER();\n\n\n\nDetection Component Interface\n\n\nThe \nMPFDetectionComponent\n class is the abstract class utilized by all OpenMPF C++ detection components that perform batch processing. This class provides functions for developers to integrate detection logic into OpenMPF.\n\n\nSee the latest source here.\n\n\n\n\nIMPORTANT:\n Each batch detection component must implement all of the \nGetDetections()\n functions or extend from a superclass which provides implementations for them (see \nconvenience adapters\n).\n\n\nIf your component does not support a particular data type, it should simply:\n\nreturn MPF_UNSUPPORTED_DATA_TYPE;\n\n\n\n\nConvenience Adapters\n\n\nAs an alternative to extending \nMPFDetectionComponent\n directly, developers may extend one of several convenience adapter classes provided by OpenMPF.\n\n\nThese adapters provide default implementations of several functions in \nMPFDetectionComponent\n and ensure that the component's logic properly extends from the Component API. This enables developers to concentrate on implementation of the detection algorithm.\n\n\nThe following adapters are provided:\n\n\n\n\nImage Detection (\nsource\n)\n\n\nVideo Detection (\nsource\n)\n\n\nImage and Video Detection (\nsource\n)\n\n\nAudio Detection (\nsource\n)\n\n\nAudio and Video Detection (\nsource\n)\n\n\nGeneric Detection (\nsource\n)\n\n\n\n\n\n\nExample: Creating Adaptors to Perform Naive Tracking:\n\nA simple detector that operates on videos may simply go through the video frame-by-frame, extract each frame\u2019s data, and perform detections on that data as though it were processing a new unrelated image each time. As each frame is processed, one or more \nMPFImageLocations\n are generated.\n\n\nGenerally, it is preferred that a detection component that supports \nVIDEO\n data is able to perform tracking across video frames to appropriately correlate \nMPFImageLocation\n detections across frames.\n\n\nAn adapter could be developed to perform simple tracking. 
This would correlate \nMPFImageLocation\n detections across frames by na\u00efvely looking for bounding box regions in each contiguous frame that overlap by a given threshold such as 50%.\n\n\n\n\nSupports(MPFDetectionDataType)\n\n\nReturns true or false depending on the data type is supported or not.\n\n\n\n\nFunction Definition:\n\n\n\n\nbool Supports(MPFDetectionDataType data_type)\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\ndata_type\n\n\nMPFDetectionDataType\n\n\nReturn true if the component supports IMAGE, VIDEO, AUDIO, and/or UNKNOWN (generic) processing.\n\n\n\n\n\n\n\n\n\n\n\n\nReturns: (\nbool\n) True if the component supports the data type, otherwise false.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\n// Sample component that supports only image and video files\nbool SampleComponent::Supports(MPFDetectionDataType data_type) {\n return data_type == MPFDetectionDataType::IMAGE || data_type == MPFDetectionDataType::VIDEO;\n}\n\n\n\nGetDetections(MPFImageJob \u2026)\n\n\nUsed to detect objects in an image file. The MPFImageJob structure contains\nthe data_uri specifying the location of the image file.\n\n\nCurrently, the data_uri is always a local file path. For example, \"/opt/mpf/share/remote-media/test-file.jpg\".\nThis is because all media is copied to the OpenMPF server before the job is executed.\n\n\n\n\nFunction Definition:\n\n\n\n\nstd::vector GetDetections(const MPFImageJob &job);\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njob\n\n\nconst MPFImageJob&\n\n\nStructure containing details about the work to be performed. See \nMPFImageJob\n\n\n\n\n\n\n\n\n\n\nReturns: (\nstd::vector\n) The \nMPFImageLocation\n data for each detected object.\n\n\n\n\nGetDetections(MPFVideoJob \u2026)\n\n\nUsed to detect objects in a video file. Prior to being sent to the component, videos are split into logical \"segments\"\nof video data and each segment (containing a range of frames) is assigned to a different job. Components are not\nguaranteed to receive requests in any order. For example, the first request processed by a component might receive\na request for frames 300-399 of a Video A, while the next request may cover frames 900-999 of a Video B.\n\n\n\n\nFunction Definition:\n\n\n\n\nstd::vector GetDetections(const MPFVideoJob &job);\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njob\n\n\nconst MPFVideoJob&\n\n\nStructure containing details about the work to be performed. See \nMPFVideoJob\n\n\n\n\n\n\n\n\n\n\nReturns: (\nstd::vector\n) The \nMPFVideoTrack\n data for each detected object.\n\n\n\n\nGetDetections(MPFAudioJob \u2026)\n\n\nUsed to detect objects in an audio file. Currently, audio files are not logically segmented, so a job will contain\nthe entirety of the audio file.\n\n\n\n\nFunction Definition:\n\n\n\n\nstd::vector GetDetections(const MPFAudioJob &job);\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njob\n\n\nconst MPFAudioJob &\n\n\nStructure containing details about the work to be performed. See \nMPFAudioJob\n\n\n\n\n\n\n\n\n\n\nReturns: (\nstd::vector\n) The \nMPFAudioTrack\n data for each detected object.\n\n\n\n\nGetDetections(MPFGenericJob \u2026)\n\n\nUsed to detect objects in files that aren't video, image, or audio files. Such files are of the UNKNOWN type and\nhandled generically. 
These files are not logically segmented, so a job will contain the entirety of the file.\n\n\n\n\nFunction Definition:\n\n\n\n\nstd::vector GetDetections(const MPFGenericJob &job);\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njob\n\n\nconst MPFGenericJob &\n\n\nStructure containing details about the work to be performed. See \nMPFGenericJob\n\n\n\n\n\n\n\n\n\n\nReturns: (\nstd::vector\n) The \nMPFGenericTrack\n data for each detected object.\n\n\n\n\nDetection Job Data Structures\n\n\nThe following data structures contain details about a specific job (work unit):\n\n\n\n\nMPFImageJob\n extends \nMPFJob\n\n\nMPFVideoJob\n extends \nMPFJob\n\n\nMPFAudioJob\n extends \nMPFJob\n\n\nMPFGenericJob\n extends \nMPFJob\n\n\n\n\nThe following data structures define detection results:\n\n\n\n\nMPFImageLocation\n\n\nMPFVideoTrack\n\n\nMPFAudioTrack\n\n\nMPFGenericTrack\n\n\n\n\nMPFJob\n\n\nStructure containing information about a job to be performed on a piece of media.\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFJob(\n const string &job_name,\n const string &data_uri,\n const Properties &job_properties,\n const Properties &media_properties)\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njob_name \n\n\nconst string &\n\n\nA specific name given to the job by the OpenMPF framework. This value may be used, for example, for logging and debugging purposes.\n\n\n\n\n\n\ndata_uri \n\n\nconst string &\n\n\nThe URI of the input media file to be processed. Currently, this is a file path. For example, \"/opt/mpf/share/remote-media/test-file.avi\".\n\n\n\n\n\n\njob_properties \n\n\nconst Properties &\n\n\nContains a map of \n\n which represents the property name and the property value. The key corresponds to the property name specified in the component descriptor file described in the \nComponent Descriptor Reference\n. Values are determined when creating a pipeline or when submitting a job. \n Note: The job_properties map may not contain the full set of job properties. For properties not contained in the map, the component must use a default value.\n\n\n\n\n\n\nmedia_properties \n\n\nconst Properties &\n\n\nContains a map of \n\n of metadata about the media associated with the job. The entries in the map vary depending on the type of media. Refer to the type-specific job structures below.\n\n\n\n\n\n\n\n\nJob properties can also be set through environment variables prefixed with \nMPF_PROP_\n. This allows\nusers to set job properties in their\n\ndocker-compose files.\n\nThese will take precedence over all other property types (job, algorithm, media, etc). It is not\npossible to change the value of properties set via environment variables at runtime and therefore\nthey should only be used to specify properties that will not change throughout the entire lifetime\nof the service (e.g. 
Docker container).\n\n\nMPFImageJob\n\n\nExtends \nMPFJob\n\n\nStructure containing data used for detection of objects in an image file.\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFImageJob(\n const string &job_name,\n const string &data_uri,\n const Properties &job_properties,\n const Properties &media_properties)\n\n\n\nMPFImageJob(\n const string &job_name,\n const string &data_uri,\n const MPFImageLocation &location,\n const Properties &job_properties,\n const Properties &media_properties)\n\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njob_name\n\n \nconst string &\n\n \nSee \nMPFJob.job_name\n for description.\n\n \n\n \n\n \ndata_uri\n\n \nconst string &\n\n \nSee \nMPFJob.data_uri\n for description.\n\n \n\n \n\n \nlocation\n\n \nconst MPFImageLocation &\n\n \nAn \nMPFImageLocation\n from the previous pipeline stage. Provided when feed forward is enabled. See \nFeed Forward Guide\n.\n\n \n\n \n\n \njob_properties\n\n \nconst Properties &\n\n \nSee \nMPFJob.job_properties\n for description.\n\n \n\n \n\n \nmedia_properties\n\n \nconst Properties &\n\n \n\n See \nMPFJob.media_properties\n for description.\n \n\n Includes the following key-value pairs:\n \n\n \nMIME_TYPE\n : the MIME type of the media\n\n \nFRAME_WIDTH\n : the width of the image in pixels\n\n \nFRAME_HEIGHT\n : the height of the image in pixels\n\n \n\n May include the following key-value pairs:\n \n\n \nROTATION\n : A floating point value in the interval \n[0.0, 360.0)\n indicating the orientation of the media in degrees in the counter-clockwise direction. In order to view the media in the upright orientation, it must be rotated the given number of degrees in the clockwise direction.\n\n \nHORIZONTAL_FLIP\n : true if the image is mirrored across the Y-axis, otherwise false\n\n \nEXIF_ORIENTATION\n : the standard EXIF orientation tag; a value between 1 and 8\n\n \n\n \n\n \n\n \n\n\n\n\n\nMPFVideoJob\n\n\nExtends \nMPFJob\n\n\nStructure containing data used for detection of objects in a video file.\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFVideoJob(\n const string &job_name,\n const string &data_uri,\n int start_frame,\n int stop_frame,\n const Properties &job_properties,\n const Properties &media_properties)\n\n\n\nMPFVideoJob(\n const string &job_name,\n const string &data_uri,\n int start_frame,\n int stop_frame,\n const MPFVideoTrack &track,\n const Properties &job_properties,\n const Properties &media_properties)\n\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njob_name\n\n \nconst string &\n\n \nSee \nMPFJob.job_name\n for description.\n\n \n\n \n\n \ndata_uri\n\n \nconst string &\n\n \nSee \nMPFJob.data_uri\n for description.\n\n \n\n \n\n \nstart_frame\n\n \nconst int\n\n \nThe first frame number (0-based index) of the video that should be processed to look for detections.\n\n \n\n \n\n \nstop_frame\n\n \nconst int\n\n \nThe last frame number (0-based index) of the video that should be processed to look for detections.\n\n \n\n \n\n \ntrack\n\n \nconst MPFVideoTrack &\n\n \nAn \nMPFVideoTrack\n from the previous pipeline stage. Provided when feed forward is enabled. 
See \nFeed Forward Guide\n.\n\n \n\n \n\n \njob_properties\n\n \nconst Properties &\n\n \nSee \nMPFJob.job_properties\n for description.\n\n \n\n \n\n \nmedia_properties\n\n \nconst Properties &\n\n \n\n See \nMPFJob.media_properties\n for description.\n \n\n Includes the following key-value pairs:\n \n\n \nDURATION\n : length of video in milliseconds\n\n \nFPS\n : frames per second (averaged for variable frame rate video)\n\n \nFRAME_COUNT\n : the number of frames in the video\n\n \nMIME_TYPE\n : the MIME type of the media\n\n \nFRAME_WIDTH\n : the width of a frame in pixels\n\n \nFRAME_HEIGHT\n : the height of a frame in pixels\n\n \nHAS_CONSTANT_FRAME_RATE\n : set to true if the video has a constant frame rate; otherwise, omitted or set to false if the video has variable frame rate or the type of frame rate cannot be determined\n\n \n\n May include the following key-value pair:\n \n\n \nROTATION\n : A floating point value in the interval \n[0.0, 360.0)\n indicating the orientation of the media in degrees in the counter-clockwise direction. In order to view the media in the upright orientation, it must be rotated the given number of degrees in the clockwise direction.\n\n \n\n \n\n \n\n \n\n\n\n\n\n\n\nIMPORTANT:\n \nFRAME_INTERVAL\n is a common job property that many components support. For frame intervals greater than 1, the component must look for detections starting with the first frame, and then skip frames as specified by the frame interval, until or before it reaches the stop frame. For example, given a start frame of 0, a stop frame of 99, and a frame interval of 2, then the detection component must look for objects in frames numbered 0, 2, 4, 6, ..., 98.\n\n\n\n\nMPFAudioJob\n\n\nExtends \nMPFJob\n\n\nStructure containing data used for detection of objects in an audio file. Currently, audio files are not logically segmented, so a job will contain the entirety of the audio file.\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFAudioJob(\n const string &job_name,\n const string &data_uri,\n int start_time,\n int stop_time,\n const Properties &job_properties,\n const Properties &media_properties)\n\n\n\nMPFAudioJob(\n const string &job_name,\n const string &data_uri,\n int start_time,\n int stop_time,\n const MPFAudioTrack &track,\n const Properties &job_properties,\n const Properties &media_properties)\n\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njob_name\n\n \nconst string &\n\n \nSee \nMPFJob.job_name\n for description.\n\n \n\n \n\n \ndata_uri\n\n \nconst string &\n\n \nSee \nMPFJob.data_uri\n for description.\n\n \n\n \n\n \nstart_time\n\n \nconst int\n\n \nThe time (0-based index, in milliseconds) associated with the beginning of the segment of the audio file that should be processed to look for detections.\n\n \n\n \n\n \nstop_time\n\n \nconst int\n\n \nThe time (0-based index, in milliseconds) associated with the end of the segment of the audio file that should be processed to look for detections.\n\n \n\n \n\n \ntrack\n\n \nconst MPFAudioTrack &\n\n \nAn \nMPFAudioTrack\n from the previous pipeline stage. Provided when feed forward is enabled. 
See \nFeed Forward Guide\n.\n\n \n\n \n\n \njob_properties\n\n \nconst Properties &\n\n \nSee \nMPFJob.job_properties\n for description.\n\n \n\n \n\n \nmedia_properties\n\n \nconst Properties &\n\n \n\n See \nMPFJob.media_properties\n for description.\n \n\n Includes the following key-value pairs:\n \n\n \nDURATION\n : length of audio file in milliseconds\n\n \nMIME_TYPE\n : the MIME type of the media\n\n \n\n \n\n \n\n \n\n\n\n\n\nMPFGenericJob\n\n\nExtends \nMPFJob\n\n\nStructure containing data used for detection of objects in a file that isn't a video, image, or audio file. The file is of the UNKNOWN type and handled generically. The file is not logically segmented, so a job will contain the entirety of the file.\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFGenericJob(\n const string &job_name,\n const string &data_uri,\n const Properties &job_properties,\n const Properties &media_properties)\n\n\n\nMPFGenericJob(\n const string &job_name,\n const string &data_uri,\n const MPFGenericTrack &track,\n const Properties &job_properties,\n const Properties &media_properties)\n}\n\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njob_name\n\n \nconst string &\n\n \nSee \nMPFJob.job_name\n for description.\n\n \n\n \n\n \ndata_uri\n\n \nconst string &\n\n \nSee \nMPFJob.data_uri\n for description.\n\n \n\n \n\n \ntrack\n\n \nconst MPFGenericTrack &\n\n \nAn \nMPFGenericTrack\n from the previous pipeline stage. Provided when feed forward is enabled. See \nFeed Forward Guide\n.\n\n \n\n \n\n \njob_properties\n\n \nconst Properties &\n\n \nSee \nMPFJob.job_properties\n for description.\n\n \n\n \n\n \nmedia_properties\n\n \nconst Properties &\n\n \n\n See \nMPFJob.media_properties\n for description.\n \n\n Includes the following key-value pair:\n \n\n \nMIME_TYPE\n : the MIME type of the media\n\n \n\n \n\n \n\n \n\n\n\n\n\nDetection Job Result Classes\n\n\nMPFImageLocation\n\n\nStructure used to store the location of detected objects in a image file.\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFImageLocation()\nMPFImageLocation(\n int x_left_upper,\n int y_left_upper,\n int width,\n int height,\n float confidence = -1,\n const Properties &detection_properties = {})\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nx_left_upper\n\n\nint\n\n\nUpper left X coordinate of the detected object.\n\n\n\n\n\n\ny_left_upper\n\n\nint\n\n\nUpper left Y coordinate of the detected object.\n\n\n\n\n\n\nwidth\n\n\nint\n\n\nThe width of the detected object.\n\n\n\n\n\n\nheight\n\n\nint\n\n\nThe height of the detected object.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetection_properties\n\n\nProperties &\n\n\nOptional additional information about the detected object. There is no restriction on the keys or the number of entries that can be added to the detection_properties map. For best practice, keys should be in all CAPS. 
See the \nsection\n for \nROTATION\n and \nHORIZONTAL_FLIP\n below,\n\n\n\n\n\n\n\n\n\n\nExample:\n\n\n\n\nA component that performs generic object classification can add an entry to \ndetection_properties\n where the key is \nCLASSIFICATION\n and the value is the type of object detected.\n\n\n\nMPFImageLocation {\n x_left_upper = 0, y_left_upper = 0, width = 100, height = 50, confidence = 1.0,\n { {\"CLASSIFICATION\", \"backpack\"} }\n}\n\n\n\n\n\n\nRotation and Horizontal Flip\n\n\nWhen the \ndetection_properties\n map contains a \nROTATION\n key, it should be a floating point value in the interval\n\n[0.0, 360.0)\n indicating the orientation of the detection in degrees in the counter-clockwise direction.\nIn order to view the detection in the upright orientation, it must be rotated the given number of degrees in the\nclockwise direction.\n\n\nThe \ndetection_properties\n map can also contain a \nHORIZONTAL_FLIP\n property that will either be \n\"true\"\n or \n\"false\"\n.\nThe \ndetection_properties\n map may have both \nHORIZONTAL_FLIP\n and \nROTATION\n keys.\n\n\nThe Workflow Manager performs the following algorithm to draw the bounding box when generating markup:\n\n\n\n\n\n Draw the rectangle ignoring rotation and flip.\n\n\n\n\n\n Rotate the rectangle counter-clockwise the given number of degrees around its top left corner.\n\n\n\n\n\n If the rectangle is flipped, flip horizontally around the top left corner.\n\n\n\n\n\n\n\n\nIn the image above you can see the three steps required to properly draw a bounding box.\nStep 1 is drawn in red. Step 2 is drawn in blue. Step 3 and the final result is drawn in green.\nThe detection for the image above is:\n\n\n\nMPFImageLocation {\n x_left_upper = 210, y_left_upper = 189, width = 177, height = 41, confidence = 1.0,\n { {\"ROTATION\", \"15\"}, { \"HORIZONTAL_FLIP\", \"true\" } }\n}\n\n\n\n\nNote that the \nx_left_upper\n, \ny_left_upper\n, \nwidth\n, and \nheight\n values describe the red rectangle. The addition\nof the \nROTATION\n property results in the blue rectangle, and the addition of the \nHORIZONTAL_FLIP\n property results\nin the green rectangle.\n\n\nOne way to think about the process is \"draw the unrotated and unflipped rectangle, stick a pin in the upper left corner,\nand then rotate and flip around the pin\".\n\n\nRotation-Only Example\n\n\n\n\nThe Workflow Manager generated the above image by performing markup on the original image with the following\ndetection:\n\n\n\nMPFImageLocation {\n x_left_upper = 156, y_left_upper = 339, width = 194, height = 243, confidence = 1.0,\n { {\"ROTATION\", \"90.0\"} }\n}\n\n\n\n\nThe markup process followed steps 1 and 2 in the previous section, skipping step 3 because there is no\n\nHORIZONTAL_FLIP\n.\n\n\nIn order to properly extract the detection region from the original image, such as when generating an artifact, you\nwould need to rotate the region in the above image 90 degrees clockwise around the cyan dot currently shown in the\nbottom-left corner so that the face is in the proper upright position.\n\n\nWhen the rotation is properly corrected in this way, the cyan dot will appear in the top-left corner of the bounding\nbox. That is why its position is described using the \nx_left_upper\n, and \ny_left_upper\n variables. 
They refer to the\ntop-left corner of the correctly oriented region.\n\n\nMPFVideoTrack\n\n\nStructure used to store the location of detected objects in a video file.\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFVideoTrack()\nMPFVideoTrack(\n int start_frame,\n int stop_frame,\n float confidence = -1,\n map frame_locations,\n const Properties &detection_properties = {})\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nstart_frame\n\n\nint\n\n\nThe first frame number (0-based index) that contained the detected object.\n\n\n\n\n\n\nstop_frame\n\n\nint\n\n\nThe last frame number (0-based index) that contained the detected object.\n\n\n\n\n\n\nframe_locations\n\n\nmap\n\n\nA map of individual detections. The key for each map entry is the frame number where the detection was generated, and the value is a \nMPFImageLocation\n calculated as if that frame was a still image. Note that a key-value pair is \nnot\n required for every frame between the track start frame and track stop frame.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetection_properties\n\n\nProperties &\n\n\nOptional additional information about the detected object. There is no restriction on the keys or the number of entries that can be added to the detection_properties map. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nNOTE:\n Currently, \nMPFVideoTrack.detection_properties\n do not show up in the JSON output object or are used by the WFM in any way.\n\n\n\n\nA component that detects text can add an entry to \ndetection_properties\n where the key is \nTRANSCRIPT\n and the value is a string representing the text found in the video segment.\n\n\nMPFVideoTrack track;\ntrack.start_frame = 0;\ntrack.stop_frame = 5;\ntrack.confidence = 1.0;\ntrack.frame_locations = frame_locations;\ntrack.detection_properties[\"TRANSCRIPT\"] = \"RE5ULTS FR0M A TEXT DETECTER\";\n\n\n\nMPFAudioTrack\n\n\nStructure used to store the location of detected objects in an audio file.\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFAudioTrack()\nMPFAudioTrack(\n int start_time,\n int stop_time,\n float confidence = -1,\n const Properties &detection_properties = {})\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nstart_time\n\n\nint\n\n\nThe time (0-based index, in ms) when the audio detection event started.\n\n\n\n\n\n\nstop_time\n\n\nint\n\n\nThe time (0-based index, in ms) when the audio detection event stopped.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetection_properties\n\n\nProperties &\n\n\nOptional additional information about the detection. There is no restriction on the keys or the number of entries that can be added to the detection_properties map. 
For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\n\n\nNOTE:\n Currently, \nMPFAudioTrack.detection_properties\n do not show up in the JSON output object or are used by the WFM in any way.\n\n\n\n\nMPFGenericTrack\n\n\nStructure used to store the location of detected objects in a file that is not a video, image, or audio file. The file is of the UNKNOWN type and handled generically.\n\n\n\n\nConstructor(s):\n\n\n\n\nMPFGenericTrack()\nMPFGenericTrack(\n float confidence = -1,\n const Properties &detection_properties = {})\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetection_properties\n\n\nProperties &\n\n\nOptional additional information about the detection. There is no restriction on the keys or the number of entries that can be added to the detection_properties map. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\nException Types\n\n\nMPFDetectionException\n\n\nException that should be thrown by the \nGetDetections()\n methods when an error occurs.\nThe content of the \nerror_code\n and \nwhat()\n members will appear in the JSON output object.\n\n\n\n\nConstructors:\n\n\n\n\nMPFDetectionException(MPFDetectionError error_code, const std::string &what = \"\")\nMPFDetectionException(const std::string &what)\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nerror_code\n\n\nMPFDetectionError\n\n\nSpecifies the error type. See \nMPFDetectionError\n.\n\n\n\n\n\n\nwhat()\n\n\nconst char*\n\n\nTextual description of the specific error. (Inherited from \nstd::exception\n)\n\n\n\n\n\n\n\n\nEnumeration Types\n\n\nMPFDetectionError\n\n\nEnum used to indicate the type of error that occurred in a \nGetDetections()\n method. It is used as a parameter to\nthe \nMPFDetectionException\n constructor. A component is not required to support all error types.\n\n\n\n\n\n\n\n\nENUM\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nMPF_DETECTION_SUCCESS\n\n\nThe component function completed successfully.\n\n\n\n\n\n\nMPF_OTHER_DETECTION_ERROR_TYPE\n\n\nThe component function has failed for a reason that is not captured by any of the other error codes.\n\n\n\n\n\n\nMPF_DETECTION_NOT_INITIALIZED\n\n\nThe initialization of the component, or the initialization of any of its dependencies, has failed for any reason.\n\n\n\n\n\n\nMPF_UNSUPPORTED_DATA_TYPE\n\n\nThe job passed to a component requests processing of a job of an unsupported type. For instance, a component that is only capable of processing audio files should return this error code if a video or image job request is received.\n\n\n\n\n\n\nMPF_COULD_NOT_OPEN_DATAFILE\n\n\nThe data file to be processed could not be opened for any reason, such as a permissions failure, or an unreachable URI. \nUse MPF_COULD_NOT_OPEN_MEDIA for media files.\n\n\n\n\n\n\nMPF_COULD_NOT_READ_DATAFILE\n\n\nThere is a failure reading data from a successfully opened input data file. \nUse MPF_COULD_NOT_READ_MEDIA for media files.\n\n\n\n\n\n\nMPF_FILE_WRITE_ERROR\n\n\nThe component received a failure for any reason when attempting to write to a file.\n\n\n\n\n\n\nMPF_BAD_FRAME_SIZE\n\n\nThe frame data retrieved has an incorrect or invalid frame size. 
For example, if a call to \ncv::imread()\n returns a frame of data with either the number of rows or columns less than or equal to 0.\n\n\n\n\n\n\nMPF_DETECTION_FAILED\n\n\nGeneral failure of a detection algorithm. This does not indicate a lack of detections found in the media, but rather a break down in the algorithm that makes it impossible to continue to try to detect objects.\n\n\n\n\n\n\nMPF_INVALID_PROPERTY\n\n\nThe component received a property that is unrecognized or has an invalid/out-of-bounds value.\n\n\n\n\n\n\nMPF_MISSING_PROPERTY\n\n\nThe component received a job that is missing a required property.\n\n\n\n\n\n\nMPF_MEMORY_ALLOCATION_FAILED\n\n\nThe component failed to allocate memory for any reason.\n\n\n\n\n\n\nMPF_GPU_ERROR\n\n\nThe job was configured to execute on a GPU, but there was an issue with the GPU or no GPU was detected.\n\n\n\n\n\n\nMPF_NETWORK_ERROR\n\n\nThe component failed to communicate with an external system over the network. The system may not be available or there may have been a timeout.\n\n\n\n\n\n\nMPF_COULD_NOT_OPEN_MEDIA\n\n\nThe media file to be processed could not be opened for any reason, such as a permissions failure, or an unreachable URI.\n\n\n\n\n\n\nMPF_COULD_NOT_READ_MEDIA\n\n\nThere is a failure reading data from a successfully opened media file.\n\n\n\n\n\n\n\n\nUtility Classes\n\n\nFor convenience, the OpenMPF provides the \nMPFImageReader\n (\nsource\n) and \nMPFVideoCapture\n (\nsource\n) utility classes to perform horizontal flipping, rotation, and cropping to a region of interest. Note, that when using these classes, the component will also need to utilize the class to perform a reverse transform to convert the transformed pixel coordinates back to the original (e.g. pre-flipped, pre-rotated, and pre-cropped) coordinate space.\n\n\nC++ Component Build Environment\n\n\nA C++ component library must be built for the same C++ compiler and Linux\nversion that is used by the OpenMPF Component Executable. This is to ensure\ncompatibility between the executable and the library functions at the\nApplication Binary Interface (ABI) level. At this writing, the OpenMPF runs on\nUbuntu 20.04 (kernel version 5.13.0-30), and the OpenMPF C++ Component\nExecutable is built with g++ (GCC) 9.3.0-17.\n\n\nComponents should be supplied as a tar file, which includes not only the component library, but any other libraries or files needed for execution. This includes all other non-standard libraries used by the component (aside from the standard Linux and C++ libraries), and any configuration or data files.\n\n\nComponent Development Best Practices\n\n\nSingle-threaded Operation\n\n\nImplementations are encouraged to operate in single-threaded mode. OpenMPF will parallelize components through multiple instantiations of the component, each running as a separate service.\n\n\nStateless Behavior\n\n\nOpenMPF components should be stateless in operation and give identical output for a provided input (i.e. when processing the same \nMPFJob\n).\n\n\nGPU Support\n\n\nFor components that want to take advantage of NVIDA GPU processors, please read the \nGPU Support Guide\n. 
Also ensure that your build environment has the NVIDIA CUDA Toolkit installed, as described in the \nBuild Environment Setup Guide\n.\n\n\nComponent Structure for non-Docker Deployments\n\n\nIt is recommended that C++ components are organized according to the following directory structure:\n\n\ncomponentName\n\u251c\u2500\u2500 config - Optional component-specific configuration files\n\u251c\u2500\u2500 descriptor\n\u2502 \u2514\u2500\u2500 descriptor.json\n\u2514\u2500\u2500 lib\n \u2514\u2500\u2500libComponentName.so - Compiled component library\n\n\n\nOnce built, components should be packaged into a .tar.gz containing the contents of the directory shown above.\n\n\nLogging\n\n\nIt is recommended to use \nApache log4cxx\n for\nOpenMPF Component logging. Components using log4cxx should not configure logging themselves.\nThe Component Executor will configure log4cxx globally. Components should call\n\nlog4cxx::Logger::getLogger(\"\")\n to a get a reference to the logger. If you\nare using a different logging framework, you should make sure its behavior is similar to how\nthe Component Executor configures log4cxx as described below.\n\n\nThe following log LEVELs are supported: \nFATAL, ERROR, WARN, INFO, DEBUG, TRACE\n.\nThe \nLOG_LEVEL\n environment variable can be set to one of the log levels to change the logging\nverbosity. When \nLOG_LEVEL\n is absent, \nINFO\n is used.\n\n\nNote that multiple instances of the same component can log to the same file.\nAlso, logging content can span multiple lines.\n\n\nThe logger will write to both standard error and\n\n${MPF_LOG_PATH}/${THIS_MPF_NODE}/log/.log\n.\n\n\nEach log statement will take the form:\n\nDATE TIME LEVEL CONTENT\n\n\nFor example:\n\n2016-02-09 13:42:42,341 INFO - Starting sample-component: [ OK ]",
"title": "C++ Batch Component API"
},
{
@@ -605,29 +640,24 @@
"text": "Returns true or false depending on the data type is supported or not. Function Definition: bool Supports(MPFDetectionDataType data_type) Parameters: Parameter Data Type Description data_type MPFDetectionDataType Return true if the component supports IMAGE, VIDEO, AUDIO, and/or UNKNOWN (generic) processing. Returns: ( bool ) True if the component supports the data type, otherwise false. Example: // Sample component that supports only image and video files\nbool SampleComponent::Supports(MPFDetectionDataType data_type) {\n return data_type == MPFDetectionDataType::IMAGE || data_type == MPFDetectionDataType::VIDEO;\n}",
"title": "Supports(MPFDetectionDataType)"
},
- {
- "location": "/CPP-Batch-Component-API/index.html#getdetectiontype",
- "text": "Returns the type of object detected by the component. Function Definition: string GetDetectionType() Parameters: none Returns: ( string ) The type of object detected by the component. Should be in all CAPS. Examples include: FACE , MOTION , PERSON , SPEECH , CLASS (for object classification), or TEXT . Example: string SampleComponent::GetDetectionType() {\n return \"FACE\";\n}",
- "title": "GetDetectionType()"
- },
{
"location": "/CPP-Batch-Component-API/index.html#getdetectionsmpfimagejob",
- "text": "Used to detect objects in an image file. The MPFImageJob structure contains \nthe data_uri specifying the location of the image file. Currently, the data_uri is always a local file path. For example, \"/opt/mpf/share/remote-media/test-file.jpg\". \nThis is because all media is copied to the OpenMPF server before the job is executed. Function Definition: std::vector GetDetections(const MPFImageJob &job); Parameters: Parameter Data Type Description job const MPFImageJob& Structure containing details about the work to be performed. See MPFImageJob Returns: ( std::vector ) The MPFImageLocation data for each detected object.",
+ "text": "Used to detect objects in an image file. The MPFImageJob structure contains\nthe data_uri specifying the location of the image file. Currently, the data_uri is always a local file path. For example, \"/opt/mpf/share/remote-media/test-file.jpg\".\nThis is because all media is copied to the OpenMPF server before the job is executed. Function Definition: std::vector GetDetections(const MPFImageJob &job); Parameters: Parameter Data Type Description job const MPFImageJob& Structure containing details about the work to be performed. See MPFImageJob Returns: ( std::vector ) The MPFImageLocation data for each detected object.",
"title": "GetDetections(MPFImageJob \u2026)"
},
{
"location": "/CPP-Batch-Component-API/index.html#getdetectionsmpfvideojob",
- "text": "Used to detect objects in a video file. Prior to being sent to the component, videos are split into logical \"segments\" \nof video data and each segment (containing a range of frames) is assigned to a different job. Components are not \nguaranteed to receive requests in any order. For example, the first request processed by a component might receive \na request for frames 300-399 of a Video A, while the next request may cover frames 900-999 of a Video B. Function Definition: std::vector GetDetections(const MPFVideoJob &job); Parameters: Parameter Data Type Description job const MPFVideoJob& Structure containing details about the work to be performed. See MPFVideoJob Returns: ( std::vector ) The MPFVideoTrack data for each detected object.",
+ "text": "Used to detect objects in a video file. Prior to being sent to the component, videos are split into logical \"segments\"\nof video data and each segment (containing a range of frames) is assigned to a different job. Components are not\nguaranteed to receive requests in any order. For example, the first request processed by a component might receive\na request for frames 300-399 of a Video A, while the next request may cover frames 900-999 of a Video B. Function Definition: std::vector GetDetections(const MPFVideoJob &job); Parameters: Parameter Data Type Description job const MPFVideoJob& Structure containing details about the work to be performed. See MPFVideoJob Returns: ( std::vector ) The MPFVideoTrack data for each detected object.",
"title": "GetDetections(MPFVideoJob \u2026)"
},
{
"location": "/CPP-Batch-Component-API/index.html#getdetectionsmpfaudiojob",
- "text": "Used to detect objects in an audio file. Currently, audio files are not logically segmented, so a job will contain \nthe entirety of the audio file. Function Definition: std::vector GetDetections(const MPFAudioJob &job); Parameters: Parameter Data Type Description job const MPFAudioJob & Structure containing details about the work to be performed. See MPFAudioJob Returns: ( std::vector ) The MPFAudioTrack data for each detected object.",
+ "text": "Used to detect objects in an audio file. Currently, audio files are not logically segmented, so a job will contain\nthe entirety of the audio file. Function Definition: std::vector GetDetections(const MPFAudioJob &job); Parameters: Parameter Data Type Description job const MPFAudioJob & Structure containing details about the work to be performed. See MPFAudioJob Returns: ( std::vector ) The MPFAudioTrack data for each detected object.",
"title": "GetDetections(MPFAudioJob \u2026)"
},
{
"location": "/CPP-Batch-Component-API/index.html#getdetectionsmpfgenericjob",
- "text": "Used to detect objects in files that aren't video, image, or audio files. Such files are of the UNKNOWN type and \nhandled generically. These files are not logically segmented, so a job will contain the entirety of the file. Function Definition: std::vector GetDetections(const MPFGenericJob &job); Parameters: Parameter Data Type Description job const MPFGenericJob & Structure containing details about the work to be performed. See MPFGenericJob Returns: ( std::vector ) The MPFGenericTrack data for each detected object.",
+ "text": "Used to detect objects in files that aren't video, image, or audio files. Such files are of the UNKNOWN type and\nhandled generically. These files are not logically segmented, so a job will contain the entirety of the file. Function Definition: std::vector GetDetections(const MPFGenericJob &job); Parameters: Parameter Data Type Description job const MPFGenericJob & Structure containing details about the work to be performed. See MPFGenericJob Returns: ( std::vector ) The MPFGenericTrack data for each detected object.",
"title": "GetDetections(MPFGenericJob \u2026)"
},
{
@@ -637,7 +667,7 @@
},
{
"location": "/CPP-Batch-Component-API/index.html#mpfjob",
- "text": "Structure containing information about a job to be performed on a piece of media. Constructor(s): MPFJob(\n const string &job_name,\n const string &data_uri,\n const Properties &job_properties,\n const Properties &media_properties) Members: Member Data Type Description job_name const string & A specific name given to the job by the OpenMPF framework. This value may be used, for example, for logging and debugging purposes. data_uri const string & The URI of the input media file to be processed. Currently, this is a file path. For example, \"/opt/mpf/share/remote-media/test-file.avi\". job_properties const Properties & Contains a map of which represents the property name and the property value. The key corresponds to the property name specified in the component descriptor file described in the Component Descriptor Reference . Values are determined when creating a pipeline or when submitting a job. Note: The job_properties map may not contain the full set of job properties. For properties not contained in the map, the component must use a default value. media_properties const Properties & Contains a map of of metadata about the media associated with the job. The entries in the map vary depending on the type of media. Refer to the type-specific job structures below. Job properties can also be set through environment variables prefixed with MPF_PROP_ . This allows \nusers to set job properties in their docker-compose files. \nThese will take precedence over all other property types (job, algorithm, media, etc). It is not \npossible to change the value of properties set via environment variables at runtime and therefore \nthey should only be used to specify properties that will not change throughout the entire lifetime \nof the service (e.g. Docker container).",
+ "text": "Structure containing information about a job to be performed on a piece of media. Constructor(s): MPFJob(\n const string &job_name,\n const string &data_uri,\n const Properties &job_properties,\n const Properties &media_properties) Members: Member Data Type Description job_name const string & A specific name given to the job by the OpenMPF framework. This value may be used, for example, for logging and debugging purposes. data_uri const string & The URI of the input media file to be processed. Currently, this is a file path. For example, \"/opt/mpf/share/remote-media/test-file.avi\". job_properties const Properties & Contains a map of which represents the property name and the property value. The key corresponds to the property name specified in the component descriptor file described in the Component Descriptor Reference . Values are determined when creating a pipeline or when submitting a job. Note: The job_properties map may not contain the full set of job properties. For properties not contained in the map, the component must use a default value. media_properties const Properties & Contains a map of of metadata about the media associated with the job. The entries in the map vary depending on the type of media. Refer to the type-specific job structures below. Job properties can also be set through environment variables prefixed with MPF_PROP_ . This allows\nusers to set job properties in their docker-compose files. \nThese will take precedence over all other property types (job, algorithm, media, etc). It is not\npossible to change the value of properties set via environment variables at runtime and therefore\nthey should only be used to specify properties that will not change throughout the entire lifetime\nof the service (e.g. Docker container).",
"title": "MPFJob"
},
{
@@ -647,12 +677,12 @@
},
{
"location": "/CPP-Batch-Component-API/index.html#mpfvideojob",
- "text": "Extends MPFJob Structure containing data used for detection of objects in a video file. Constructor(s): MPFVideoJob(\n const string &job_name,\n const string &data_uri,\n int start_frame,\n int stop_frame,\n const Properties &job_properties,\n const Properties &media_properties) MPFVideoJob(\n const string &job_name,\n const string &data_uri,\n int start_frame,\n int stop_frame,\n const MPFVideoTrack &track,\n const Properties &job_properties,\n const Properties &media_properties) Members: \n \n \n Member \n Data Type \n Description \n \n \n \n \n job_name \n const string & \n See MPFJob.job_name for description. \n \n \n data_uri \n const string & \n See MPFJob.data_uri for description. \n \n \n start_frame \n const int \n The first frame number (0-based index) of the video that should be processed to look for detections. \n \n \n stop_frame \n const int \n The last frame number (0-based index) of the video that should be processed to look for detections. \n \n \n track \n const MPFVideoTrack & \n An MPFVideoTrack from the previous pipeline stage. Provided when feed forward is enabled. See Feed Forward Guide . \n \n \n job_properties \n const Properties & \n See MPFJob.job_properties for description. \n \n \n media_properties \n const Properties & \n \n See MPFJob.media_properties for description.\n \n Includes the following key-value pairs:\n \n DURATION : length of video in milliseconds \n FPS : frames per second (averaged for variable frame rate video) \n FRAME_COUNT : the number of frames in the video \n MIME_TYPE : the MIME type of the media \n FRAME_WIDTH : the width of a frame in pixels \n FRAME_HEIGHT : the height of a frame in pixels \n HAS_CONSTANT_FRAME_RATE : set to true if the video has a constant frame rate; otherwise, omitted or set to false if the video has variable frame rate or the type of frame rate cannot be determined \n \n May include the following key-value pair:\n \n ROTATION : A floating point value in the interval [0.0, 360.0) indicating the orientation of the media in degrees in the counter-clockwise direction. In order to view the media in the upright orientation, it must be rotated the given number of degrees in the clockwise direction. \n \n \n \n IMPORTANT: FRAME_INTERVAL is a common job property that many components support. For frame intervals greater than 1, the component must look for detections starting with the first frame, and then skip frames as specified by the frame interval, until or before it reaches the stop frame. For example, given a start frame of 0, a stop frame of 99, and a frame interval of 2, then the detection component must look for objects in frames numbered 0, 2, 4, 6, ..., 98.",
+ "text": "Extends MPFJob Structure containing data used for detection of objects in a video file. Constructor(s): MPFVideoJob(\n const string &job_name,\n const string &data_uri,\n int start_frame,\n int stop_frame,\n const Properties &job_properties,\n const Properties &media_properties) MPFVideoJob(\n const string &job_name,\n const string &data_uri,\n int start_frame,\n int stop_frame,\n const MPFVideoTrack &track,\n const Properties &job_properties,\n const Properties &media_properties) Members: \n \n \n Member \n Data Type \n Description \n \n \n \n \n job_name \n const string & \n See MPFJob.job_name for description. \n \n \n data_uri \n const string & \n See MPFJob.data_uri for description. \n \n \n start_frame \n const int \n The first frame number (0-based index) of the video that should be processed to look for detections. \n \n \n stop_frame \n const int \n The last frame number (0-based index) of the video that should be processed to look for detections. \n \n \n track \n const MPFVideoTrack & \n An MPFVideoTrack from the previous pipeline stage. Provided when feed forward is enabled. See Feed Forward Guide . \n \n \n job_properties \n const Properties & \n See MPFJob.job_properties for description. \n \n \n media_properties \n const Properties & \n \n See MPFJob.media_properties for description.\n \n Includes the following key-value pairs:\n \n DURATION : length of video in milliseconds \n FPS : frames per second (averaged for variable frame rate video) \n FRAME_COUNT : the number of frames in the video \n MIME_TYPE : the MIME type of the media \n FRAME_WIDTH : the width of a frame in pixels \n FRAME_HEIGHT : the height of a frame in pixels \n HAS_CONSTANT_FRAME_RATE : set to true if the video has a constant frame rate; otherwise, omitted or set to false if the video has variable frame rate or the type of frame rate cannot be determined \n \n May include the following key-value pair:\n \n ROTATION : A floating point value in the interval [0.0, 360.0) indicating the orientation of the media in degrees in the counter-clockwise direction. In order to view the media in the upright orientation, it must be rotated the given number of degrees in the clockwise direction. \n \n \n \n IMPORTANT: FRAME_INTERVAL is a common job property that many components support. For frame intervals greater than 1, the component must look for detections starting with the first frame, and then skip frames as specified by the frame interval, until or before it reaches the stop frame. For example, given a start frame of 0, a stop frame of 99, and a frame interval of 2, then the detection component must look for objects in frames numbered 0, 2, 4, 6, ..., 98.",
"title": "MPFVideoJob"
},
{
"location": "/CPP-Batch-Component-API/index.html#mpfaudiojob",
- "text": "Extends MPFJob Structure containing data used for detection of objects in an audio file. Currently, audio files are not logically segmented, so a job will contain the entirety of the audio file. Constructor(s): MPFAudioJob(\n const string &job_name,\n const string &data_uri,\n int start_time,\n int stop_time,\n const Properties &job_properties,\n const Properties &media_properties) MPFAudioJob(\n const string &job_name,\n const string &data_uri,\n int start_time,\n int stop_time,\n const MPFAudioTrack &track, \n const Properties &job_properties,\n const Properties &media_properties) Members: \n \n \n Member \n Data Type \n Description \n \n \n \n \n job_name \n const string & \n See MPFJob.job_name for description. \n \n \n data_uri \n const string & \n See MPFJob.data_uri for description. \n \n \n start_time \n const int \n The time (0-based index, in milliseconds) associated with the beginning of the segment of the audio file that should be processed to look for detections. \n \n \n stop_time \n const int \n The time (0-based index, in milliseconds) associated with the end of the segment of the audio file that should be processed to look for detections. \n \n \n track \n const MPFAudioTrack & \n An MPFAudioTrack from the previous pipeline stage. Provided when feed forward is enabled. See Feed Forward Guide . \n \n \n job_properties \n const Properties & \n See MPFJob.job_properties for description. \n \n \n media_properties \n const Properties & \n \n See MPFJob.media_properties for description.\n \n Includes the following key-value pairs:\n \n DURATION : length of audio file in milliseconds \n MIME_TYPE : the MIME type of the media",
+ "text": "Extends MPFJob Structure containing data used for detection of objects in an audio file. Currently, audio files are not logically segmented, so a job will contain the entirety of the audio file. Constructor(s): MPFAudioJob(\n const string &job_name,\n const string &data_uri,\n int start_time,\n int stop_time,\n const Properties &job_properties,\n const Properties &media_properties) MPFAudioJob(\n const string &job_name,\n const string &data_uri,\n int start_time,\n int stop_time,\n const MPFAudioTrack &track,\n const Properties &job_properties,\n const Properties &media_properties) Members: \n \n \n Member \n Data Type \n Description \n \n \n \n \n job_name \n const string & \n See MPFJob.job_name for description. \n \n \n data_uri \n const string & \n See MPFJob.data_uri for description. \n \n \n start_time \n const int \n The time (0-based index, in milliseconds) associated with the beginning of the segment of the audio file that should be processed to look for detections. \n \n \n stop_time \n const int \n The time (0-based index, in milliseconds) associated with the end of the segment of the audio file that should be processed to look for detections. \n \n \n track \n const MPFAudioTrack & \n An MPFAudioTrack from the previous pipeline stage. Provided when feed forward is enabled. See Feed Forward Guide . \n \n \n job_properties \n const Properties & \n See MPFJob.job_properties for description. \n \n \n media_properties \n const Properties & \n \n See MPFJob.media_properties for description.\n \n Includes the following key-value pairs:\n \n DURATION : length of audio file in milliseconds \n MIME_TYPE : the MIME type of the media",
"title": "MPFAudioJob"
},
{
@@ -667,17 +697,17 @@
},
{
"location": "/CPP-Batch-Component-API/index.html#mpfimagelocation",
- "text": "Structure used to store the location of detected objects in a image file. Constructor(s): MPFImageLocation()\nMPFImageLocation(\n int x_left_upper,\n int y_left_upper,\n int width,\n int height,\n float confidence = -1,\n const Properties &detection_properties = {}) Members: Member Data Type Description x_left_upper int Upper left X coordinate of the detected object. y_left_upper int Upper left Y coordinate of the detected object. width int The width of the detected object. height int The height of the detected object. confidence float Represents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0. detection_properties Properties & Optional additional information about the detected object. There is no restriction on the keys or the number of entries that can be added to the detection_properties map. For best practice, keys should be in all CAPS. See the section for ROTATION and HORIZONTAL_FLIP below, Example: A component that performs generic object classification can add an entry to detection_properties where the key is CLASSIFICATION and the value is the type of object detected. \nMPFImageLocation { \n x_left_upper = 0, y_left_upper = 0, width = 100, height = 50, confidence = 1.0,\n { {\"CLASSIFICATION\", \"backpack\"} } \n}",
+ "text": "Structure used to store the location of detected objects in a image file. Constructor(s): MPFImageLocation()\nMPFImageLocation(\n int x_left_upper,\n int y_left_upper,\n int width,\n int height,\n float confidence = -1,\n const Properties &detection_properties = {}) Members: Member Data Type Description x_left_upper int Upper left X coordinate of the detected object. y_left_upper int Upper left Y coordinate of the detected object. width int The width of the detected object. height int The height of the detected object. confidence float Represents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0. detection_properties Properties & Optional additional information about the detected object. There is no restriction on the keys or the number of entries that can be added to the detection_properties map. For best practice, keys should be in all CAPS. See the section for ROTATION and HORIZONTAL_FLIP below, Example: A component that performs generic object classification can add an entry to detection_properties where the key is CLASSIFICATION and the value is the type of object detected. \nMPFImageLocation {\n x_left_upper = 0, y_left_upper = 0, width = 100, height = 50, confidence = 1.0,\n { {\"CLASSIFICATION\", \"backpack\"} }\n}",
"title": "MPFImageLocation"
},
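The brace notation in the example above is shorthand; written as an actual constructor call it could look like the following sketch, assuming the SDK's MPFDetectionComponent.h header, the MPF::COMPONENT namespace, and Properties being a string-to-string map.

```c++
#include "MPFDetectionComponent.h"  // assumed SDK header
using namespace MPF::COMPONENT;     // assumed SDK namespace

// A 100x50 "backpack" detection anchored at the image origin with confidence 1.0,
// equivalent to the shorthand example above.
MPFImageLocation location(0, 0, 100, 50, 1.0f, { {"CLASSIFICATION", "backpack"} });
```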
{
"location": "/CPP-Batch-Component-API/index.html#rotation-and-horizontal-flip",
- "text": "When the detection_properties map contains a ROTATION key, it should be a floating point value in the interval [0.0, 360.0) indicating the orientation of the detection in degrees in the counter-clockwise direction.\nIn order to view the detection in the upright orientation, it must be rotated the given number of degrees in the\nclockwise direction. The detection_properties map can also contain a HORIZONTAL_FLIP property that will either be \"true\" or \"false\" .\nThe detection_properties map may have both HORIZONTAL_FLIP and ROTATION keys. The Workflow Manager performs the following algorithm to draw the bounding box when generating markup: \n Draw the rectangle ignoring rotation and flip. \n Rotate the rectangle counter-clockwise the given number of degrees around its top left corner. \n If the rectangle is flipped, flip horizontally around the top left corner. In the image above you can see the three steps required to properly draw a bounding box.\nStep 1 is drawn in red. Step 2 is drawn in blue. Step 3 and the final result is drawn in green.\nThe detection for the image above is: \nMPFImageLocation { \n x_left_upper = 210, y_left_upper = 189, width = 177, height = 41, confidence = 1.0,\n { {\"ROTATION\", \"15\"}, { \"HORIZONTAL_FLIP\", \"true\" } } \n} Note that the x_left_upper , y_left_upper , width , and height values describe the red rectangle. The addition\nof the ROTATION property results in the blue rectangle, and the addition of the HORIZONTAL_FLIP property results\nin the green rectangle. One way to think about the process is \"draw the unrotated and unflipped rectangle, stick a pin in the upper left corner,\nand then rotate and flip around the pin\".",
+ "text": "When the detection_properties map contains a ROTATION key, it should be a floating point value in the interval [0.0, 360.0) indicating the orientation of the detection in degrees in the counter-clockwise direction.\nIn order to view the detection in the upright orientation, it must be rotated the given number of degrees in the\nclockwise direction. The detection_properties map can also contain a HORIZONTAL_FLIP property that will either be \"true\" or \"false\" .\nThe detection_properties map may have both HORIZONTAL_FLIP and ROTATION keys. The Workflow Manager performs the following algorithm to draw the bounding box when generating markup: \n Draw the rectangle ignoring rotation and flip. \n Rotate the rectangle counter-clockwise the given number of degrees around its top left corner. \n If the rectangle is flipped, flip horizontally around the top left corner. In the image above you can see the three steps required to properly draw a bounding box.\nStep 1 is drawn in red. Step 2 is drawn in blue. Step 3 and the final result is drawn in green.\nThe detection for the image above is: \nMPFImageLocation {\n x_left_upper = 210, y_left_upper = 189, width = 177, height = 41, confidence = 1.0,\n { {\"ROTATION\", \"15\"}, { \"HORIZONTAL_FLIP\", \"true\" } }\n} Note that the x_left_upper , y_left_upper , width , and height values describe the red rectangle. The addition\nof the ROTATION property results in the blue rectangle, and the addition of the HORIZONTAL_FLIP property results\nin the green rectangle. One way to think about the process is \"draw the unrotated and unflipped rectangle, stick a pin in the upper left corner,\nand then rotate and flip around the pin\".",
"title": "Rotation and Horizontal Flip"
},
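Components that estimate an angle often need to fold it into the [0.0, 360.0) range before writing the ROTATION property. The helper below is a hypothetical sketch of one way to do that; it is not part of the SDK.

```c++
#include <cmath>
#include <string>

// Hypothetical helper: map any angle in degrees onto the [0.0, 360.0)
// counter-clockwise range expected by the ROTATION detection property.
double NormalizeRotation(double degrees) {
    double normalized = std::fmod(degrees, 360.0);
    if (normalized < 0.0) {
        normalized += 360.0;
    }
    return normalized;
}

// Usage sketch: NormalizeRotation(-345.0) yields 15.0, matching the 15 degree
// counter-clockwise rotation in the example above:
//     detection.detection_properties["ROTATION"] = std::to_string(NormalizeRotation(-345.0));
```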
{
"location": "/CPP-Batch-Component-API/index.html#rotation-only-example",
- "text": "The Workflow Manager generated the above image by performing markup on the original image with the following\ndetection: \nMPFImageLocation { \n x_left_upper = 156, y_left_upper = 339, width = 194, height = 243, confidence = 1.0,\n { {\"ROTATION\", \"90.0\"} } \n} The markup process followed steps 1 and 2 in the previous section, skipping step 3 because there is no HORIZONTAL_FLIP . In order to properly extract the detection region from the original image, such as when generating an artifact, you\nwould need to rotate the region in the above image 90 degrees clockwise around the cyan dot currently shown in the\nbottom-left corner so that the face is in the proper upright position. When the rotation is properly corrected in this way, the cyan dot will appear in the top-left corner of the bounding\nbox. That is why its position is described using the x_left_upper , and y_left_upper variables. They refer to the\ntop-left corner of the correctly oriented region.",
+ "text": "The Workflow Manager generated the above image by performing markup on the original image with the following\ndetection: \nMPFImageLocation {\n x_left_upper = 156, y_left_upper = 339, width = 194, height = 243, confidence = 1.0,\n { {\"ROTATION\", \"90.0\"} }\n} The markup process followed steps 1 and 2 in the previous section, skipping step 3 because there is no HORIZONTAL_FLIP . In order to properly extract the detection region from the original image, such as when generating an artifact, you\nwould need to rotate the region in the above image 90 degrees clockwise around the cyan dot currently shown in the\nbottom-left corner so that the face is in the proper upright position. When the rotation is properly corrected in this way, the cyan dot will appear in the top-left corner of the bounding\nbox. That is why its position is described using the x_left_upper , and y_left_upper variables. They refer to the\ntop-left corner of the correctly oriented region.",
"title": "Rotation-Only Example"
},
{
@@ -702,7 +732,7 @@
},
{
"location": "/CPP-Batch-Component-API/index.html#mpfdetectionexception",
- "text": "Exception that should be thrown by the GetDetections() methods when an error occurs. \nThe content of the error_code and what() members will appear in the JSON output object. Constructors: MPFDetectionException(MPFDetectionError error_code, const std::string &what = \"\")\nMPFDetectionException(const std::string &what) Member Data Type Description error_code MPFDetectionError Specifies the error type. See MPFDetectionError . what() const char* Textual description of the specific error. (Inherited from std::exception )",
+ "text": "Exception that should be thrown by the GetDetections() methods when an error occurs.\nThe content of the error_code and what() members will appear in the JSON output object. Constructors: MPFDetectionException(MPFDetectionError error_code, const std::string &what = \"\")\nMPFDetectionException(const std::string &what) Member Data Type Description error_code MPFDetectionError Specifies the error type. See MPFDetectionError . what() const char* Textual description of the specific error. (Inherited from std::exception )",
"title": "MPFDetectionException"
},
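As a brief illustration of the constructors above, a component might validate a required job property and throw the exception before running its algorithm. The header name, the MPF::COMPONENT namespace, and the MODEL_PATH property are assumptions in this sketch.

```c++
#include <string>
#include "MPFDetectionComponent.h"  // assumed SDK header declaring MPFDetectionException

using namespace MPF::COMPONENT;     // assumed SDK namespace

void RequireModelPath(const Properties &job_properties) {
    // MODEL_PATH is a hypothetical property used only for this example.
    if (job_properties.count("MODEL_PATH") == 0) {
        throw MPFDetectionException(
                MPF_MISSING_PROPERTY,
                "The MODEL_PATH property must be provided as a job property.");
    }
}
```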
{
@@ -712,7 +742,7 @@
},
{
"location": "/CPP-Batch-Component-API/index.html#mpfdetectionerror",
- "text": "Enum used to indicate the type of error that occurred in a GetDetections() method. It is used as a parameter to \nthe MPFDetectionException constructor. A component is not required to support all error types. ENUM Description MPF_DETECTION_SUCCESS The component function completed successfully. MPF_OTHER_DETECTION_ERROR_TYPE The component function has failed for a reason that is not captured by any of the other error codes. MPF_DETECTION_NOT_INITIALIZED The initialization of the component, or the initialization of any of its dependencies, has failed for any reason. MPF_UNSUPPORTED_DATA_TYPE The job passed to a component requests processing of a job of an unsupported type. For instance, a component that is only capable of processing audio files should return this error code if a video or image job request is received. MPF_COULD_NOT_OPEN_DATAFILE The data file to be processed could not be opened for any reason, such as a permissions failure, or an unreachable URI. Use MPF_COULD_NOT_OPEN_MEDIA for media files. MPF_COULD_NOT_READ_DATAFILE There is a failure reading data from a successfully opened input data file. Use MPF_COULD_NOT_READ_MEDIA for media files. MPF_FILE_WRITE_ERROR The component received a failure for any reason when attempting to write to a file. MPF_BAD_FRAME_SIZE The frame data retrieved has an incorrect or invalid frame size. For example, if a call to cv::imread() returns a frame of data with either the number of rows or columns less than or equal to 0. MPF_DETECTION_FAILED General failure of a detection algorithm. This does not indicate a lack of detections found in the media, but rather a break down in the algorithm that makes it impossible to continue to try to detect objects. MPF_INVALID_PROPERTY The component received a property that is unrecognized or has an invalid/out-of-bounds value. MPF_MISSING_PROPERTY The component received a job that is missing a required property. MPF_MEMORY_ALLOCATION_FAILED The component failed to allocate memory for any reason. MPF_GPU_ERROR The job was configured to execute on a GPU, but there was an issue with the GPU or no GPU was detected. MPF_NETWORK_ERROR The component failed to communicate with an external system over the network. The system may not be available or there may have been a timeout. MPF_COULD_NOT_OPEN_MEDIA The media file to be processed could not be opened for any reason, such as a permissions failure, or an unreachable URI. MPF_COULD_NOT_READ_MEDIA There is a failure reading data from a successfully opened media file.",
+ "text": "Enum used to indicate the type of error that occurred in a GetDetections() method. It is used as a parameter to\nthe MPFDetectionException constructor. A component is not required to support all error types. ENUM Description MPF_DETECTION_SUCCESS The component function completed successfully. MPF_OTHER_DETECTION_ERROR_TYPE The component function has failed for a reason that is not captured by any of the other error codes. MPF_DETECTION_NOT_INITIALIZED The initialization of the component, or the initialization of any of its dependencies, has failed for any reason. MPF_UNSUPPORTED_DATA_TYPE The job passed to a component requests processing of a job of an unsupported type. For instance, a component that is only capable of processing audio files should return this error code if a video or image job request is received. MPF_COULD_NOT_OPEN_DATAFILE The data file to be processed could not be opened for any reason, such as a permissions failure, or an unreachable URI. Use MPF_COULD_NOT_OPEN_MEDIA for media files. MPF_COULD_NOT_READ_DATAFILE There is a failure reading data from a successfully opened input data file. Use MPF_COULD_NOT_READ_MEDIA for media files. MPF_FILE_WRITE_ERROR The component received a failure for any reason when attempting to write to a file. MPF_BAD_FRAME_SIZE The frame data retrieved has an incorrect or invalid frame size. For example, if a call to cv::imread() returns a frame of data with either the number of rows or columns less than or equal to 0. MPF_DETECTION_FAILED General failure of a detection algorithm. This does not indicate a lack of detections found in the media, but rather a break down in the algorithm that makes it impossible to continue to try to detect objects. MPF_INVALID_PROPERTY The component received a property that is unrecognized or has an invalid/out-of-bounds value. MPF_MISSING_PROPERTY The component received a job that is missing a required property. MPF_MEMORY_ALLOCATION_FAILED The component failed to allocate memory for any reason. MPF_GPU_ERROR The job was configured to execute on a GPU, but there was an issue with the GPU or no GPU was detected. MPF_NETWORK_ERROR The component failed to communicate with an external system over the network. The system may not be available or there may have been a timeout. MPF_COULD_NOT_OPEN_MEDIA The media file to be processed could not be opened for any reason, such as a permissions failure, or an unreachable URI. MPF_COULD_NOT_READ_MEDIA There is a failure reading data from a successfully opened media file.",
"title": "MPFDetectionError"
},
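For example, a component that fails to decode its input media could surface one of the media error codes above rather than a generic failure. The LoadImageOrThrow helper, header names, and namespace are assumptions in this sketch; OpenCV's cv::imread appears only because the table above already references it.

```c++
#include <string>
#include <opencv2/imgcodecs.hpp>
#include "MPFDetectionComponent.h"  // assumed SDK header

using namespace MPF::COMPONENT;     // assumed SDK namespace

// Hypothetical helper: read an image and report a media error if decoding fails.
cv::Mat LoadImageOrThrow(const std::string &data_uri) {
    cv::Mat image = cv::imread(data_uri);
    if (image.empty()) {
        // Depending on the failure, MPF_COULD_NOT_READ_MEDIA may be more accurate.
        throw MPFDetectionException(
                MPF_COULD_NOT_OPEN_MEDIA,
                "Could not open or decode image: " + data_uri);
    }
    return image;
}
```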
{
@@ -722,7 +752,7 @@
},
{
"location": "/CPP-Batch-Component-API/index.html#c-component-build-environment",
- "text": "A C++ component library must be built for the same C++ compiler and Linux \nversion that is used by the OpenMPF Component Executable. This is to ensure \ncompatibility between the executable and the library functions at the \nApplication Binary Interface (ABI) level. At this writing, the OpenMPF runs on \nUbuntu 20.04 (kernel version 5.13.0-30), and the OpenMPF C++ Component \nExecutable is built with g++ (GCC) 9.3.0-17. Components should be supplied as a tar file, which includes not only the component library, but any other libraries or files needed for execution. This includes all other non-standard libraries used by the component (aside from the standard Linux and C++ libraries), and any configuration or data files.",
+ "text": "A C++ component library must be built for the same C++ compiler and Linux\nversion that is used by the OpenMPF Component Executable. This is to ensure\ncompatibility between the executable and the library functions at the\nApplication Binary Interface (ABI) level. At this writing, the OpenMPF runs on\nUbuntu 20.04 (kernel version 5.13.0-30), and the OpenMPF C++ Component\nExecutable is built with g++ (GCC) 9.3.0-17. Components should be supplied as a tar file, which includes not only the component library, but any other libraries or files needed for execution. This includes all other non-standard libraries used by the component (aside from the standard Linux and C++ libraries), and any configuration or data files.",
"title": "C++ Component Build Environment"
},
{
@@ -752,12 +782,12 @@
},
{
"location": "/CPP-Batch-Component-API/index.html#logging",
- "text": "It is recommended to use Apache log4cxx for \nOpenMPF Component logging. Components using log4cxx should not configure logging themselves. \nThe Component Executor will configure log4cxx globally. Components should call log4cxx::Logger::getLogger(\"\") to a get a reference to the logger. If you \nare using a different logging framework, you should make sure its behavior is similar to how\nthe Component Executor configures log4cxx as described below. The following log LEVELs are supported: FATAL, ERROR, WARN, INFO, DEBUG, TRACE .\nThe LOG_LEVEL environment variable can be set to one of the log levels to change the logging \nverbosity. When LOG_LEVEL is absent, INFO is used. Note that multiple instances of the same component can log to the same file. \nAlso, logging content can span multiple lines. The logger will write to both standard error and ${MPF_LOG_PATH}/${THIS_MPF_NODE}/log/.log . Each log statement will take the form: DATE TIME LEVEL CONTENT For example: 2016-02-09 13:42:42,341 INFO - Starting sample-component: [ OK ]",
+ "text": "It is recommended to use Apache log4cxx for\nOpenMPF Component logging. Components using log4cxx should not configure logging themselves.\nThe Component Executor will configure log4cxx globally. Components should call log4cxx::Logger::getLogger(\"\") to a get a reference to the logger. If you\nare using a different logging framework, you should make sure its behavior is similar to how\nthe Component Executor configures log4cxx as described below. The following log LEVELs are supported: FATAL, ERROR, WARN, INFO, DEBUG, TRACE .\nThe LOG_LEVEL environment variable can be set to one of the log levels to change the logging\nverbosity. When LOG_LEVEL is absent, INFO is used. Note that multiple instances of the same component can log to the same file.\nAlso, logging content can span multiple lines. The logger will write to both standard error and ${MPF_LOG_PATH}/${THIS_MPF_NODE}/log/.log . Each log statement will take the form: DATE TIME LEVEL CONTENT For example: 2016-02-09 13:42:42,341 INFO - Starting sample-component: [ OK ]",
"title": "Logging"
},
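A minimal logging sketch consistent with the description above: the Component Executor configures log4cxx, so the component only obtains a logger and writes to it. The logger name used here is an assumption; LOG4CXX_INFO is the standard log4cxx macro.

```c++
#include <log4cxx/logger.h>

void LogStartup() {
    // Logging is configured globally by the Component Executor; no component-side setup needed.
    log4cxx::LoggerPtr logger = log4cxx::Logger::getLogger("SampleComponent");
    LOG4CXX_INFO(logger, "Starting sample-component: [ OK ]");
}
```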
{
"location": "/Python-Batch-Component-API/index.html",
- "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nAPI Overview\n\n\nIn OpenMPF, a \ncomponent\n is a plugin that receives jobs (containing media), processes that media, and returns results.\n\n\nThe OpenMPF Batch Component API currently supports the development of \ndetection components\n, which are used detect\nobjects in image, video, audio, or other (generic) files that reside on disk.\n\n\nUsing this API, detection components can be built to provide:\n\n\n\n\nDetection (Localizing an object)\n\n\nTracking (Localizing an object across multiple frames)\n\n\nClassification (Detecting the type of object and optionally localizing that object)\n\n\nTranscription (Detecting speech and transcribing it into text)\n\n\n\n\nHow Components Integrate into OpenMPF\n\n\nComponents are integrated into OpenMPF through the use of OpenMPF's \nComponent Executable\n.\nDevelopers create component libraries that encapsulate the component detection logic.\nEach instance of the Component Executable loads one of these libraries and uses it to service job requests\nsent by the OpenMPF Workflow Manager (WFM).\n\n\nThe Component Executable:\n\n\n\n\nReceives and parses job requests from the WFM\n\n\nInvokes methods on the component library to obtain detection results\n\n\nPopulates and sends the respective responses to the WFM\n\n\n\n\nThe basic pseudocode for the Component Executable is as follows:\n\n\ncomponent_cls = locate_component_class()\ncomponent = component_cls()\ndetection_type = component.detection_type\n\nwhile True:\n job = receive_job()\n\n if is_image_job(job) and hasattr(component, 'get_detections_from_image'):\n detections = component.get_detections_from_image(job)\n send_job_response(detections)\n\n elif is_video_job(job) and hasattr(component, 'get_detections_from_video'):\n detections = component.get_detections_from_video(job)\n send_job_response(detections)\n\n elif is_audio_job(job) and hasattr(component, 'get_detections_from_audio'):\n detections = component.get_detections_from_audio(job)\n send_job_response(detections)\n\n elif is_generic_job(job) and hasattr(component, 'get_detections_from_generic'):\n detections = component.get_detections_from_generic(job)\n send_job_response(detections)\n\n\n\nEach instance of a Component Executable runs as a separate process.\n\n\nThe Component Executable receives and parses requests from the WFM, invokes methods on the Component Logic to get\ndetection objects, and subsequently populates responses with the component output and sends them to the WFM.\n\n\nA component developer implements a detection component by creating a class that defines one or more of the\nget_detections_from_* methods and has a \ndetection_type\n field.\nSee the \nAPI Specification\n for more information.\n\n\nThe figures below present high-level component diagrams of the Python Batch Component API.\nThis figure shows the basic structure:\n\n\n\n\nThe Node Manager is only used in a non-Docker deployment. In a Docker deployment the Component Executor is started by the Docker container itself.\n\n\nThe Component Executor determines that it is running a Python component so it creates an instance of the\n\nPythonComponentHandle\n\nclass. 
The \nPythonComponentHandle\n class creates an instance of the component class and calls one of the\n\nget_detections_from_*\n methods on the component instance. The example\nabove is an image component, so \nPythonComponentHandle\n calls \nExampleImageFaceDetection.get_detections_from_image\n\non the component instance. The component instance creates an instance of\n\nmpf_component_util.ImageReader\n to access the image. Components that support video\nwould implement \nget_detections_from_video\n and use\n\nmpf_component_util.VideoCapture\n instead.\n\n\nThis figure show the structure when the mixin classes are used:\n\n\n\n\nThe figure above shows a video component, \nExampleVideoFaceDetection\n, that extends the\n\nmpf_component_util.VideoCaptureMixin\n class. \nPythonComponentHandle\n will\ncall \nget_detections_from_video\n on an instance of \nExampleVideoFaceDetection\n. \nExampleVideoFaceDetection\n does not\nimplement \nget_detections_from_video\n, so the implementation inherited from \nmpf_component_util.VideoCaptureMixin\n\ngets called. \nmpf_component_util.VideoCaptureMixin.get_detections_from_video\n creates an instance of\n\nmpf_component_util.VideoCapture\n and calls\n\nExampleVideoFaceDetection.get_detections_from_video_capture\n, passing in the \nmpf_component_util.VideoCapture\n it\njust created. \nExampleVideoFaceDetection.get_detections_from_video_capture\n is where the component reads the video\nusing the passed-in \nmpf_component_util.VideoCapture\n and attempts to find detections. Components that support images\nwould extend \nmpf_component_util.ImageReaderMixin\n, implement\n\nget_detections_from_image_reader\n, and access the image using the passed-in\n\nmpf_component_util.ImageReader\n.\n\n\nDuring component registration a \nvirtualenv\n is created for each component.\nThe virtualenv has access to the built-in Python libraries, but does not have access to any third party packages\nthat might be installed on the system. When creating the virtualenv for a setuptools-based component the only packages\nthat get installed are the component itself and any dependencies specified in the setup.cfg\nfile (including their transitive dependencies). When creating the virtualenv for a basic Python component the only\npackage that gets installed is \nmpf_component_api\n. \nmpf_component_api\n is the package containing the job classes\n(e.g. \nmpf_component_api.ImageJob\n,\n\nmpf_component_api.VideoJob\n) and detection result classes\n(e.g. \nmpf_component_api.ImageLocation\n,\n\nmpf_component_api.VideoTrack\n).\n\n\nHow to Create a Python Component\n\n\nThere are two types of Python components that are supported, setuptools-based components and basic Python components.\nBasic Python components are quicker to set up, but have no built-in support for dependency management.\nAll dependencies must be handled by the developer. Setuptools-based components are recommended since they use\nsetuptools and pip for dependency management.\n\n\nEither way, the end goal is to create a Docker image. This document describes the steps for developing a component\noutside of Docker. Many developers prefer to do that first and then focus on building and running their component\nwithin Docker after they are confident it works in a local environment. Alternatively, some developers feel confident\ndeveloping their component entirely within Docker. 
When you're ready for the Docker steps, refer to the\n\nREADME\n.\n\n\nGet openmpf-python-component-sdk\n\n\nIn order to create a Python component you will need to clone the\n\nopenmpf-python-component-sdk repository\n if you don't\nalready have it. While not technically required, it is recommended to also clone the\n\nopenmpf-build-tools repository\n.\nThe rest of the steps assume you cloned openmpf-python-component-sdk to\n\n~/openmpf-projects/openmpf-python-component-sdk\n. The rest of the steps also assume that if you cloned the\nopenmpf-build-tools repository, you cloned it to \n~/openmpf-projects/openmpf-build-tools\n.\n\n\nSetup Python Component Libraries\n\n\nThe component packaging steps require that wheel files for \nmpf_component_api\n, \nmpf_component_util\n, and\ntheir dependencies are available in the \n~/mpf-sdk-install/python/wheelhouse\n directory.\n\n\nIf you have openmpf-build-tools, then you can run:\n\n\n~/openmpf-projects/openmpf-build-tools/build-openmpf-components/build_components.py -psdk ~/openmpf-projects/openmpf-python-component-sdk\n\n\n\nTo setup the libraries manually you can run:\n\n\npip3 wheel -w ~/mpf-sdk-install/python/wheelhouse ~/openmpf-projects/openmpf-python-component-sdk/detection/api\npip3 wheel -w ~/mpf-sdk-install/python/wheelhouse ~/openmpf-projects/openmpf-python-component-sdk/detection/component_util\n\n\n\nHow to Create a Setuptools-based Python Component\n\n\nIn this example we create a setuptools-based video component named \"MyComponent\". An example of a setuptools-based\nPython component can be found\n\nhere\n.\n\n\nThis is the recommended project structure:\n\n\nComponentName\n\u251c\u2500\u2500 pyproject.toml\n\u251c\u2500\u2500 setup.cfg\n\u251c\u2500\u2500 component_name\n\u2502 \u251c\u2500\u2500 __init__.py\n\u2502 \u2514\u2500\u2500 component_name.py\n\u2514\u2500\u2500 plugin-files\n \u251c\u2500\u2500 descriptor\n \u2502 \u2514\u2500\u2500 descriptor.json\n \u2514\u2500\u2500 wheelhouse # optional\n \u2514\u2500\u2500 my_prebuilt_lib-0.1-py3-none-any.whl\n\n\n\n1. Create directory structure:\n\n\nmkdir MyComponent\nmkdir MyComponent/my_component\nmkdir -p MyComponent/plugin-files/descriptor\ntouch MyComponent/pyproject.toml\ntouch MyComponent/setup.cfg\ntouch MyComponent/my_component/__init__.py\ntouch MyComponent/my_component/my_component.py\ntouch MyComponent/plugin-files/descriptor/descriptor.json\n\n\n\n2. Create pyproject.toml file in project's top-level directory:\n\n\npyproject.toml\n should contain the following content:\n\n\n[build-system]\nrequires = [\"setuptools\"]\nbuild-backend = \"setuptools.build_meta\"\n\n\n\n3. Create setup.cfg file in project's top-level directory:\n\n\nExample of a minimal setup.cfg file:\n\n\n[metadata]\nname = MyComponent\nversion = 0.1\n\n[options]\npackages = my_component\ninstall_requires =\n mpf_component_api>=0.1\n mpf_component_util>=0.1\n\n[options.entry_points]\nmpf.exported_component =\n component = my_component.my_component:MyComponent\n\n[options.package_data]\nmy_component=models/*\n\n\n\nThe \nname\n parameter defines the distribution name. Typically the distribution name matches the component name.\n\n\nAny dependencies that component requires should be listed in the \ninstall_requires\n field.\n\n\nThe Component Executor looks in the \nentry_points\n element and uses the \nmpf.exported_component\n field to determine\nthe component class. The right hand side of \ncomponent =\n should be the dotted module name, followed by a \n:\n,\nfollowed by the name of the class. 
The general pattern is\n\n'mpf.exported_component': 'component = .:'\n. In the above example,\n\nMyComponent\n is the class name. The module is listed as \nmy_component.my_component\n because the \nmy_component\n\npackage contains the \nmy_component.py\n file and the \nmy_component.py\n file contains the \nMyComponent\n class.\n\n\nThe \n[options.package_data]\n section is optional. It should be used when there are non-Python files\nin a package directory that should be included when the component is installed.\n\n\n4. Create descriptor.json file in MyComponent/plugin-files/descriptor:\n\n\nThe \nbatchLibrary\n field should match the distribution name from the setup.cfg file. In this example the\nfield should be: \n\"batchLibrary\" : \"MyComponent\"\n.\nSee the \nComponent Descriptor Reference\n for details about\nthe descriptor format.\n\n\n5. Implement your component class:\n\n\nBelow is an example of the structure of a simple component. This component extends\n\nmpf_component_util.VideoCaptureMixin\n to simplify the use of\n\nmpf_component_util.VideoCapture\n. You would replace the call to\n\nrun_detection_algorithm_on_frame\n with your component-specific logic.\n\n\nimport logging\n\nimport mpf_component_api as mpf\nimport mpf_component_util as mpf_util\n\nlogger = logging.getLogger('MyComponent')\n\nclass MyComponent(mpf_util.VideoCaptureMixin):\n detection_type = 'FACE'\n\n @staticmethod\n def get_detections_from_video_capture(video_job, video_capture):\n logger.info('[%s] Received video job: %s', video_job.job_name, video_job)\n # If frame index is not required, you can just loop over video_capture directly\n for frame_index, frame in enumerate(video_capture):\n for result_track in run_detection_algorithm_on_frame(frame_index, frame):\n # Alternatively, while iterating through the video, add tracks to a list. When done, return that list.\n yield result_track\n\n\n\n6. Optional: Add prebuilt wheel files if not available on PyPi:\n\n\nIf your component depends on Python libraries that are not available on PyPi, the libraries can be manually added to\nyour project. The prebuilt libraries must be placed in your project's \nplugin-files/wheelhouse\n directory.\nThe prebuilt library names must be listed in your \nsetup.cfg\n file's \ninstall_requires\n field.\nIf any of the prebuilt libraries have transitive dependencies that are not available on PyPi, then those libraries\nmust also be added to your project's \nplugin-files/wheelhouse\n directory.\n\n\n7. 
Optional: Create the plugin package for non-Docker deployments:\n\n\nThe directory structure of the .tar.gz file will be:\n\n\nMyComponent\n\u251c\u2500\u2500 descriptor\n\u2502 \u2514\u2500\u2500 descriptor.json\n\u2514\u2500\u2500 wheelhouse\n \u251c\u2500\u2500 MyComponent-0.1-py3-none-any.whl\n \u251c\u2500\u2500 mpf_component_api-0.1-py3-none-any.whl\n \u251c\u2500\u2500 mpf_component_util-0.1-py3-none-any.whl\n \u251c\u2500\u2500 numpy-1.18.4-cp38-cp38-manylinux1_x86_64.whl\n \u2514\u2500\u2500 opencv_python-4.2.0.34-cp38-cp38-manylinux1_x86_64.whl\n\n\n\nTo create the plugin packages you can run the build script as follows:\n\n\n~/openmpf-projects/openmpf-build-tools/build-openmpf-components/build_components.py -psdk ~/openmpf-projects/openmpf-python-component-sdk -c MyComponent\n\n\n\nThe plugin package can also be built manually using the following commands:\n\n\nmkdir -p plugin-packages/MyComponent/wheelhouse\ncp -r MyComponent/plugin-files/* plugin-packages/MyComponent/\npip3 wheel -w plugin-packages/MyComponent/wheelhouse -f ~/mpf-sdk-install/python/wheelhouse -f plugin-packages/MyComponent/wheelhouse ./MyComponent/\ncd plugin-packages\ntar -zcf MyComponent.tar.gz MyComponent\n\n\n\n8. Create the component Docker image:\n\n\nSee the \nREADME\n.\n\n\nHow to Create a Basic Python Component\n\n\nIn this example we create a basic Python component that supports video. An example of a basic Python component can be\nfound\n\nhere\n.\n\n\nThis is the recommended project structure:\n\n\nComponentName\n\u251c\u2500\u2500 component_name.py\n\u251c\u2500\u2500 dependency.py\n\u2514\u2500\u2500 descriptor\n \u2514\u2500\u2500 descriptor.json\n\n\n\n1. Create directory structure:\n\n\nmkdir MyComponent\nmkdir MyComponent/descriptor\ntouch MyComponent/descriptor/descriptor.json\ntouch MyComponent/my_component.py\n\n\n\n2. Create descriptor.json file in MyComponent/descriptor:\n\n\nThe \nbatchLibrary\n field should be the full path to the Python file containing your component class.\nIn this example the field should be: \n\"batchLibrary\" : \"${MPF_HOME}/plugins/MyComponent/my_component.py\"\n.\nSee the \nComponent Descriptor Reference\n for details about\nthe descriptor format.\n\n\n3. Implement your component class:\n\n\nBelow is an example of the structure of a simple component that does not use\n\nmpf_component_util.VideoCaptureMixin\n. You would replace the call to\n\nrun_detection_algorithm\n with your component-specific logic.\n\n\nimport logging\n\nlogger = logging.getLogger('MyComponent')\n\nclass MyComponent:\n detection_type = 'FACE'\n\n @staticmethod\n def get_detections_from_video(video_job):\n logger.info('[%s] Received video job: %s', video_job.job_name, video_job)\n return run_detection_algorithm(video_job)\n\nEXPORT_MPF_COMPONENT = MyComponent\n\n\n\nThe Component Executor looks for a module-level variable named \nEXPORT_MPF_COMPONENT\n to specify which class\nis the component.\n\n\n4. Optional: Create the plugin package for non-Docker deployments:\n\n\nThe directory structure of the .tar.gz file will be:\n\n\nComponentName\n\u251c\u2500\u2500 component_name.py\n\u251c\u2500\u2500 dependency.py\n\u2514\u2500\u2500 descriptor\n \u2514\u2500\u2500 descriptor.json\n\n\n\nTo create the plugin packages you can run the build script as follows:\n\n\n~/openmpf-projects/openmpf-build-tools/build-openmpf-components/build_components.py -c MyComponent\n\n\n\nThe plugin package can also be built manually using the following command:\n\n\ntar -zcf MyComponent.tar.gz MyComponent\n\n\n\n5. 
Create the component Docker image:\n\n\nSee the \nREADME\n.\n\n\nAPI Specification\n\n\nAn OpenMPF Python component is a class that defines one or more of the get_detections_from_* methods and has a\n\ndetection_type\n field.\n\n\ncomponent.get_detections_from_* methods\n\n\nAll get_detections_from_* methods are invoked through an instance of the component class. The only parameter passed\nin is an appropriate job object (e.g. \nmpf_component_api.ImageJob\n, \nmpf_component_api.VideoJob\n). Since the methods\nare invoked through an instance, instance methods and class methods end up with two arguments, the first is either the\ninstance or the class, respectively. All get_detections_from_* methods can be implemented either as an instance method,\na static method, or a class method.\nFor example:\n\n\ninstance method:\n\n\nclass MyComponent:\n def get_detections_from_image(self, image_job):\n return [mpf_component_api.ImageLocation(...), ...]\n\n\n\nstatic method:\n\n\nclass MyComponent:\n @staticmethod\n def get_detections_from_image(image_job):\n return [mpf_component_api.ImageLocation(...), ...]\n\n\n\nclass method:\n\n\nclass MyComponent:\n @classmethod\n def get_detections_from_image(cls, image_job):\n return [mpf_component_api.ImageLocation(...), ...]\n\n\n\nAll get_detections_from_* methods must return an iterable of the appropriate detection type\n(e.g. \nmpf_component_api.ImageLocation\n, \nmpf_component_api.VideoTrack\n). The return value is normally a list or generator,\nbut any iterable can be used.\n\n\ncomponent.detection_type\n\n\n\n\nstr\n field describing the type of object that is detected by the component. Should be in all CAPS.\nExamples include: \nFACE\n, \nMOTION\n, \nPERSON\n, \nSPEECH\n, \nCLASS\n (for object classification), or \nTEXT\n.\n\n\nExample:\n\n\n\n\nclass MyComponent:\n detection_type = 'FACE'\n\n\n\n\nImage API\n\n\ncomponent.get_detections_from_image(image_job)\n\n\nUsed to detect objects in an image file.\n\n\n\n\nMethod Definition:\n\n\n\n\nclass MyComponent:\n def get_detections_from_image(self, image_job):\n return [mpf_component_api.ImageLocation(...), ...]\n\n\n\nget_detections_from_image\n, like all get_detections_from_* methods, can be implemented either as an instance method,\na static method, or a class method.\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nimage_job\n\n\nmpf_component_api.ImageJob\n\n\nObject containing details about the work to be performed.\n\n\n\n\n\n\n\n\n\n\nReturns: An iterable of \nmpf_component_api.ImageLocation\n\n\n\n\nmpf_component_api.ImageJob\n\n\nClass containing data used for detection of objects in an image file.\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njob_name\n\n \nstr\n\n \nA specific name given to the job by the OpenMPF framework. This value may be used, for example, for logging and debugging purposes.\n\n \n\n \n\n \ndata_uri\n\n \nstr\n\n \nThe URI of the input media file to be processed. Currently, this is a file path. For example, \"/opt/mpf/share/remote-media/test-file.jpg\".\n\n \n\n \n\n \njob_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n which represent the property name and the property value. The key corresponds to the property name specified in the component descriptor file described in the \nComponent Descriptor Reference\n. 
Values are determined when creating a pipeline or when submitting a job.\n \n\n Note: The job_properties dict may not contain the full set of job properties. For properties not contained in the dict, the component must use a default value.\n \n\n \n\n \n\n \nmedia_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n of metadata about the media associated with the job.\n \n\n Includes the following key-value pairs:\n \n\n \nMIME_TYPE\n : the MIME type of the media\n\n \nFRAME_WIDTH\n : the width of the image in pixels\n\n \nFRAME_HEIGHT\n : the height of the image in pixels\n\n \n\n May include the following key-value pairs:\n \n\n \nROTATION\n : A floating point value in the interval \n[0.0, 360.0)\n indicating the orientation of the media in degrees in the counter-clockwise direction. In order to view the media in the upright orientation, it must be rotated the given number of degrees in the clockwise direction.\n\n \nHORIZONTAL_FLIP\n : true if the image is mirrored across the Y-axis, otherwise false\n\n \nEXIF_ORIENTATION\n : the standard EXIF orientation tag; a value between 1 and 8\n\n \n\n \n\n \n\n \n\n \nfeed_forward_location\n\n \nNone\n or \nmpf_component_api.ImageLocation\n\n \nAn \nmpf_component_api.ImageLocation\n from the previous pipeline stage. Provided when feed forward is enabled. See \nFeed Forward Guide\n.\n\n \n\n \n\n\n\n\n\nJob properties can also be set through environment variables prefixed with \nMPF_PROP_\n. This allows\nusers to set job properties in their\n\ndocker-compose files.\n\nThese will take precedence over all other property types (job, algorithm, media, etc). It is not\npossible to change the value of properties set via environment variables at runtime and therefore\nthey should only be used to specify properties that will not change throughout the entire lifetime\nof the service (e.g. Docker container).\n\n\nmpf_component_api.ImageLocation\n\n\nClass used to store the location of detected objects in a image file.\n\n\n\n\nConstructor:\n\n\n\n\ndef __init__(self, x_left_upper, y_left_upper, width, height, confidence=-1.0, detection_properties=None):\n ...\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nx_left_upper\n\n\nint\n\n\nUpper left X coordinate of the detected object.\n\n\n\n\n\n\ny_left_upper\n\n\nint\n\n\nUpper left Y coordinate of the detected object.\n\n\n\n\n\n\nwidth\n\n\nint\n\n\nThe width of the detected object.\n\n\n\n\n\n\nheight\n\n\nint\n\n\nThe height of the detected object.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetection_properties\n\n\ndict[str, str]\n\n\nA dict with keys and values of type \nstr\n containing optional additional information about the detected object. 
For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\nSee here for information about rotation and horizontal flipping.\n\n\n\n\nExample:\n\n\n\n\nA component that performs generic object classification can add an entry to \ndetection_properties\n where the key is\n\nCLASSIFICATION\n and the value is the type of object detected.\n\n\nmpf_component_api.ImageLocation(0, 0, 100, 100, 1.0, {'CLASSIFICATION': 'backpack'})\n\n\n\nmpf_component_util.ImageReader\n\n\nmpf_component_util.ImageReader\n is a utility class for accessing images. It is the image equivalent to\n\nmpf_component_util.VideoCapture\n. Like \nmpf_component_util.VideoCapture\n,\nit may modify the read-in frame data based on job_properties. From the point of view of someone using\n\nmpf_component_util.ImageReader\n, these modifications are mostly transparent. \nmpf_component_util.ImageReader\n makes\nit look like you are reading the original image file as though it has already been rotated, flipped, cropped, etc.\n\n\nOne issue with this approach is that the detection bounding boxes will be relative to the\nmodified frame data, not the original. To make the detections relative to the original image\nthe \nmpf_component_util.ImageReader.reverse_transform(image_location)\n method must be called on each\n\nmpf_component_api.ImageLocation\n. Since the use of \nmpf_component_util.ImageReader\n is optional, the framework\ncannot automatically perform the reverse transform for the developer.\n\n\nThe general pattern for using \nmpf_component_util.ImageReader\n is as follows:\n\n\nclass MyComponent:\n\n @staticmethod\n def get_detections_from_image(image_job):\n image_reader = mpf_component_util.ImageReader(image_job)\n image = image_reader.get_image()\n # run_component_specific_algorithm is a placeholder for this example.\n # Replace run_component_specific_algorithm with your component's detection logic\n result_image_locations = run_component_specific_algorithm(image)\n for result in result_image_locations:\n image_reader.reverse_transform(result)\n yield result\n\n\n\nAlternatively, see the documentation for \nmpf_component_util.ImageReaderMixin\n for a more concise way to use\n\nmpf_component_util.ImageReader\n below.\n\n\nmpf_component_util.ImageReaderMixin\n\n\nA mixin class that can be used to simplify the usage of \nmpf_component_util.ImageReader\n.\n\nmpf_component_util.ImageReaderMixin\n takes care of initializing a \nmpf_component_util.ImageReader\n and\nperforming the reverse transform.\n\n\nThere are some requirements to properly use \nmpf_component_util.ImageReaderMixin\n:\n\n\n\n\nThe component must extend \nmpf_component_util.ImageReaderMixin\n.\n\n\nThe component must implement \nget_detections_from_image_reader(image_job, image_reader)\n.\n\n\nThe component must read the image using the \nmpf_component_util.ImageReader\n\n that is passed in to \nget_detections_from_image_reader(image_job, image_reader)\n.\n\n\nThe component must NOT implement \nget_detections_from_image(image_job)\n.\n\n\nThe component must NOT call \nmpf_component_util.ImageReader.reverse_transform\n.\n\n\n\n\nThe general pattern for using \nmpf_component_util.ImageReaderMixin\n is as follows:\n\n\nclass MyComponent(mpf_component_util.ImageReaderMixin):\n\n @staticmethod # Can also be a regular instance method or a class method\n def get_detections_from_image_reader(image_job, image_reader):\n image = image_reader.get_image()\n\n # run_component_specific_algorithm is a placeholder for this example.\n # Replace run_component_specific_algorithm 
with your component's detection logic\n return run_component_specific_algorithm(image)\n\n\n\nmpf_component_util.ImageReaderMixin\n is a mixin class so it is designed in a way that does not prevent the subclass\nfrom extending other classes. If a component supports both videos and images, and it uses\n\nmpf_component_util.VideoCaptureMixin\n, it should also use\n\nmpf_component_util.ImageReaderMixin\n.\n\n\nVideo API\n\n\ncomponent.get_detections_from_video(video_job)\n\n\nUsed to detect objects in a video file. Prior to being sent to the component, videos are split into logical \"segments\"\nof video data and each segment (containing a range of frames) is assigned to a different job. Components are not\nguaranteed to receive requests in any order. For example, the first request processed by a component might receive a\nrequest for frames 300-399 of a Video A, while the next request may cover frames 900-999 of a Video B.\n\n\n\n\nMethod Definition:\n\n\n\n\nclass MyComponent:\n def get_detections_from_video(self, video_job):\n return [mpf_component_api.VideoTrack(...), ...]\n\n\n\nget_detections_from_video\n, like all get_detections_from_* methods, can be implemented either as an instance method,\na static method, or a class method.\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nvideo_job\n\n\nmpf_component_api.VideoJob\n\n\nObject containing details about the work to be performed.\n\n\n\n\n\n\n\n\n\n\nReturns: An iterable of \nmpf_component_api.VideoTrack\n\n\n\n\nmpf_component_api.VideoJob\n\n\nClass containing data used for detection of objects in a video file.\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njob_name\n\n \nstr\n\n \nA specific name given to the job by the OpenMPF framework. This value may be used, for example, for logging and debugging purposes.\n\n \n\n \n\n \ndata_uri\n\n \nstr\n\n \nThe URI of the input media file to be processed. Currently, this is a file path. For example, \"/opt/mpf/share/remote-media/test-file.avi\".\n\n \n\n \n\n \nstart_frame\n\n \nint\n\n \nThe first frame number (0-based index) of the video that should be processed to look for detections.\n\n \n \n \n\n \nstop_frame\n\n \nint\n\n \nThe last frame number (0-based index) of the video that should be processed to look for detections.\n\n \n \n \n\n \njob_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n which represent the property name and the property value. The key corresponds to the property name specified in the component descriptor file described in the \nComponent Descriptor Reference\n. Values are determined when creating a pipeline or when submitting a job.\n \n\n Note: The job_properties dict may not contain the full set of job properties. 
For properties not contained in the dict, the component must use a default value.\n \n\n \n\n \n\n \nmedia_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n of metadata about the media associated with the job.\n \n\n Includes the following key-value pairs:\n \n\n \nDURATION\n : length of video in milliseconds\n\n \nFPS\n : frames per second (averaged for variable frame rate video)\n\n \nFRAME_COUNT\n : the number of frames in the video\n\n \nMIME_TYPE\n : the MIME type of the media\n\n \nFRAME_WIDTH\n : the width of a frame in pixels\n\n \nFRAME_HEIGHT\n : the height of a frame in pixels\n\n \nHAS_CONSTANT_FRAME_RATE\n : set to true if the video has a constant frame rate; otherwise, omitted or set to false if the video has variable frame rate or the type of frame rate cannot be determined\n\n \n\n May include the following key-value pair:\n \n\n \nROTATION\n : A floating point value in the interval \n[0.0, 360.0)\n indicating the orientation of the media in degrees in the counter-clockwise direction. In order to view the media in the upright orientation, it must be rotated the given number of degrees in the clockwise direction.\n\n \n\n \n\n \n\n \n\n \nfeed_forward_track\n\n \nNone\n or \nmpf_component_api.VideoTrack\n\n \nAn \nmpf_component_api.VideoTrack\n from the previous pipeline stage. Provided when feed forward is enabled. See \nFeed Forward Guide\n.\n\n \n\n \n\n\n\n\n\n\n\nIMPORTANT:\n \nFRAME_INTERVAL\n is a common job property that many components support.\nFor frame intervals greater than 1, the component must look for detections starting with the first\nframe, and then skip frames as specified by the frame interval, until or before it reaches the stop frame.\nFor example, given a start frame of 0, a stop frame of 99, and a frame interval of 2, then the detection component\nmust look for objects in frames numbered 0, 2, 4, 6, ..., 98.\n\n\n\n\nJob properties can also be set through environment variables prefixed with \nMPF_PROP_\n. This allows\nusers to set job properties in their\n\ndocker-compose files.\n\nThese will take precedence over all other property types (job, algorithm, media, etc). It is not\npossible to change the value of properties set via environment variables at runtime and therefore\nthey should only be used to specify properties that will not change throughout the entire lifetime\nof the service (e.g. Docker container).\n\n\nmpf_component_api.VideoTrack\n\n\nClass used to store the location of detected objects in a video file.\n\n\n\n\nConstructor:\n\n\n\n\ndef __init__(self, start_frame, stop_frame, confidence=-1.0, frame_locations=None, detection_properties=None):\n ...\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nstart_frame\n\n\nint\n\n\nThe first frame number (0-based index) that contained the detected object.\n\n\n\n\n\n\nstop_frame\n\n\nint\n\n\nThe last frame number (0-based index) that contained the detected object.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\nframe_locations\n\n\ndict[int, mpf_component_api.ImageLocation]\n\n\nA dict of individual detections. 
The key for each entry is the frame number where the detection was generated, and the value is a \nmpf_component_api.ImageLocation\n calculated as if that frame was a still image. Note that a key-value pair is \nnot\n required for every frame between the track start frame and track stop frame.\n\n\n\n\n\n\ndetection_properties\n\n\ndict[str, str]\n\n\nA dict with keys and values of type \nstr\n containing optional additional information about the detected object. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\n\n\nNOTE:\n Currently, \nmpf_component_api.VideoTrack.detection_properties\n do not show up in the JSON output object or\nare used by the WFM in any way.\n\n\n\n\n\n\nExample:\n\n\n\n\nA component that performs generic object classification can add an entry to \ndetection_properties\n where the key is\n\nCLASSIFICATION\n and the value is the type of object detected.\n\n\ntrack = mpf_component_api.VideoTrack(0, 1)\ntrack.frame_locations[0] = mpf_component_api.ImageLocation(0, 0, 100, 100, 0.75, {'CLASSIFICATION': 'backpack'})\ntrack.frame_locations[1] = mpf_component_api.ImageLocation(10, 10, 110, 110, 0.95, {'CLASSIFICATION': 'backpack'})\ntrack.confidence = max(il.confidence for il in track.frame_locations.itervalues())\n\n\n\nmpf_component_util.VideoCapture\n\n\nmpf_component_util.VideoCapture\n is a utility class for reading videos. \nmpf_component_util.VideoCapture\n works very\nsimilarly to \ncv2.VideoCapture\n, except that it might modify the video frames based on job properties. From the point\nof view of someone using \nmpf_component_util.VideoCapture\n, these modifications are mostly transparent.\n\nmpf_component_util.VideoCapture\n makes it look like you are reading the original video file as though it has already\nbeen rotated, flipped, cropped, etc. Also, if frame skipping is enabled, such as by setting the value of the\n\nFRAME_INTERVAL\n job property, it makes it look like you are reading the video as though it never contained the\nskipped frames.\n\n\nOne issue with this approach is that the detection frame numbers and bounding box will be relative to the\nmodified video, not the original. To make the detections relative to the original video\nthe \nmpf_component_util.VideoCapture.reverse_transform(video_track)\n method must be called on each\n\nmpf_component_api.VideoTrack\n. 
Since the use of \nmpf_component_util.VideoCapture\n is optional, the framework\ncannot automatically perform the reverse transform for the developer.\n\n\nThe general pattern for using \nmpf_component_util.VideoCapture\n is as follows:\n\n\nclass MyComponent:\n\n @staticmethod\n def get_detections_from_video(video_job):\n video_capture = mpf_component_util.VideoCapture(video_job)\n # If frame index is not required, you can just loop over video_capture directly\n for frame_index, frame in enumerate(video_capture):\n # run_component_specific_algorithm is a placeholder for this example.\n # Replace run_component_specific_algorithm with your component's detection logic\n result_tracks = run_component_specific_algorithm(frame_index, frame)\n for track in result_tracks:\n video_capture.reverse_transform(track)\n yield track\n\n\n\nAlternatively, see the documentation for \nmpf_component_util.VideoCaptureMixin\n for a more concise way to use\n\nmpf_component_util.VideoCapture\n below.\n\n\nmpf_component_util.VideoCaptureMixin\n\n\nA mixin class that can be used to simplify the usage of \nmpf_component_util.VideoCapture\n.\n\nmpf_component_util.VideoCaptureMixin\n takes care of initializing a \nmpf_component_util.VideoCapture\n and\nperforming the reverse transform.\n\n\nThere are some requirements to properly use \nmpf_component_util.VideoCaptureMixin\n:\n\n\n\n\nThe component must extend \nmpf_component_util.VideoCaptureMixin\n.\n\n\nThe component must implement \nget_detections_from_video_capture(video_job, video_capture)\n.\n\n\nThe component must read the video using the \nmpf_component_util.VideoCapture\n\n that is passed in to \nget_detections_from_video_capture(video_job, video_capture)\n.\n\n\nThe component must NOT implement \nget_detections_from_video(video_job)\n.\n\n\nThe component must NOT call \nmpf_component_util.VideoCapture.reverse_transform\n.\n\n\n\n\nThe general pattern for using \nmpf_component_util.VideoCaptureMixin\n is as follows:\n\n\nclass MyComponent(mpf_component_util.VideoCaptureMixin):\n\n @staticmethod # Can also be a regular instance method or a class method\n def get_detections_from_video_capture(video_job, video_capture):\n # If frame index is not required, you can just loop over video_capture directly\n for frame_index, frame in enumerate(video_capture):\n # run_component_specific_algorithm is a placeholder for this example.\n # Replace run_component_specific_algorithm with your component's detection logic\n result_tracks = run_component_specific_algorithm(frame_index, frame)\n for track in result_tracks:\n # Alternatively, while iterating through the video, add tracks to a list. When done, return that list.\n yield track\n\n\n\nmpf_component_util.VideoCaptureMixin\n is a mixin class so it is designed in a way that does not prevent the subclass\nfrom extending other classes. 
If a component supports both videos and images, and it uses\n\nmpf_component_util.VideoCaptureMixin\n, it should also use\n\nmpf_component_util.ImageReaderMixin\n.\nFor example:\n\n\nclass MyComponent(mpf_component_util.VideoCaptureMixin, mpf_component_util.ImageReaderMixin):\n\n @staticmethod\n def get_detections_from_video_capture(video_job, video_capture):\n ...\n\n @staticmethod\n def get_detections_from_image_reader(image_job, image_reader):\n ...\n\n\n\nAudio API\n\n\ncomponent.get_detections_from_audio(audio_job)\n\n\nUsed to detect objects in an audio file.\n\n\n\n\nMethod Definition:\n\n\n\n\nclass MyComponent:\n def get_detections_from_audio(self, audio_job):\n return [mpf_component_api.AudioTrack(...), ...]\n\n\n\nget_detections_from_audio\n, like all get_detections_from_* methods, can be implemented either as an instance method,\na static method, or a class method.\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\naudio_job\n\n\nmpf_component_api.AudioJob\n\n\nObject containing details about the work to be performed.\n\n\n\n\n\n\n\n\n\n\nReturns: An iterable of \nmpf_component_api.AudioTrack\n\n\n\n\nmpf_component_api.AudioJob\n\n\nClass containing data used for detection of objects in an audio file.\nCurrently, audio files are not logically segmented, so a job will contain the entirety of the audio file.\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njob_name\n\n \nstr\n\n \nA specific name given to the job by the OpenMPF framework. This value may be used, for example, for logging and debugging purposes.\n\n \n\n \n\n \ndata_uri\n\n \nstr\n\n \nThe URI of the input media file to be processed. Currently, this is a file path. For example, \"/opt/mpf/share/remote-media/test-file.mp3\".\n\n \n\n \n\n \nstart_time\n\n \nint\n\n \nThe time (0-based index, in milliseconds) associated with the beginning of the segment of the audio file that should be processed to look for detections.\n\n \n\n \n\n \nstop_time\n\n \nint\n\n \nThe time (0-based index, in milliseconds) associated with the end of the segment of the audio file that should be processed to look for detections.\n\n \n \n \n\n \njob_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n which represent the property name and the property value. The key corresponds to the property name specified in the component descriptor file described in the \nComponent Descriptor Reference\n. Values are determined when creating a pipeline or when submitting a job.\n \n\n Note: The job_properties dict may not contain the full set of job properties. For properties not contained in the dict, the component must use a default value.\n \n\n \n\n \n\n \nmedia_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n of metadata about the media associated with the job.\n \n\n Includes the following key-value pairs:\n \n\n \nDURATION\n : length of audio file in milliseconds\n\n \nMIME_TYPE\n : the MIME type of the media\n\n \n\n \n\n \n\n \n\n \nfeed_forward_track\n\n \nNone\n or \nmpf_component_api.AudioTrack\n\n \nAn \nmpf_component_api.AudioTrack\n from the previous pipeline stage. Provided when feed forward is enabled. See \nFeed Forward Guide\n.\n\n \n\n \n\n\n\n\n\nJob properties can also be set through environment variables prefixed with \nMPF_PROP_\n. 
This allows\nusers to set job properties in their\n\ndocker-compose files.\n\nThese will take precedence over all other property types (job, algorithm, media, etc). It is not\npossible to change the value of properties set via environment variables at runtime and therefore\nthey should only be used to specify properties that will not change throughout the entire lifetime\nof the service (e.g. Docker container).\n\n\nmpf_component_api.AudioTrack\n\n\nClass used to store the location of detected objects in an audio file.\n\n\n\n\nConstructor:\n\n\n\n\ndef __init__(self, start_time, stop_time, confidence, detection_properties=None):\n ...\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nstart_time\n\n\nint\n\n\nThe time (0-based index, in ms) when the audio detection event started.\n\n\n\n\n\n\nstop_time\n\n\nint\n\n\nThe time (0-based index, in ms) when the audio detection event stopped.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetection_properties\n\n\ndict[str, str]\n\n\nA dict with keys and values of type \nstr\n containing optional additional information about the detected object. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\n\n\nNOTE:\n Currently, \nmpf_component_api.AudioTrack.detection_properties\n do not show up in the JSON output object or\nare used by the WFM in any way.\n\n\n\n\nGeneric API\n\n\ncomponent.get_detections_from_generic(generic_job)\n\n\nUsed to detect objects in files that are not video, image, or audio files. Such files are of the UNKNOWN type and\nhandled generically.\n\n\n\n\nMethod Definition:\n\n\n\n\nclass MyComponent:\n def get_detections_from_generic(self, generic_job):\n return [mpf_component_api.GenericTrack(...), ...]\n\n\n\nget_detections_from_generic\n, like all get_detections_from_* methods, can be implemented either as an instance method,\na static method, or a class method.\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\ngeneric_job\n\n\nmpf_component_api.GenericJob\n\n\nObject containing details about the work to be performed.\n\n\n\n\n\n\n\n\n\n\nReturns: An iterable of \nmpf_component_api.GenericTrack\n\n\n\n\nmpf_component_api.GenericJob\n\n\nClass containing data used for detection of objects in a file that isn't a video, image, or audio file. The file is not\nlogically segmented, so a job will contain the entirety of the file.\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njob_name\n\n \nstr\n\n \nA specific name given to the job by the OpenMPF framework. This value may be used, for example, for logging and debugging purposes.\n\n \n\n \n\n \ndata_uri\n\n \nstr\n\n \nThe URI of the input media file to be processed. Currently, this is a file path. For example, \"/opt/mpf/share/remote-media/test-file.txt\".\n\n \n\n \n\n \njob_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n which represent the property name and the property value. The key corresponds to the property name specified in the component descriptor file described in the \nComponent Descriptor Reference\n. 
Values are determined when creating a pipeline or when submitting a job.\n \n\n Note: The job_properties dict may not contain the full set of job properties. For properties not contained in the dict, the component must use a default value.\n \n\n \n\n \n\n \nmedia_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n of metadata about the media associated with the job.\n \n\n Includes the following key-value pair:\n \n\n \nMIME_TYPE\n : the MIME type of the media\n\n \n\n \n\n \n\n \n\n \nfeed_forward_track\n\n \nNone\n or \nmpf_component_api.GenericTrack\n\n \nAn \nmpf_component_api.GenericTrack\n from the previous pipeline stage. Provided when feed forward is enabled. See \nFeed Forward Guide\n.\n\n \n\n \n\n\n\n\n\nJob properties can also be set through environment variables prefixed with \nMPF_PROP_\n. This allows\nusers to set job properties in their\n\ndocker-compose files.\n\nThese will take precedence over all other property types (job, algorithm, media, etc). It is not\npossible to change the value of properties set via environment variables at runtime and therefore\nthey should only be used to specify properties that will not change throughout the entire lifetime\nof the service (e.g. Docker container).\n\n\nmpf_component_api.GenericTrack\n\n\nClass used to store the location of detected objects in a file that is not a video, image, or audio file.\n\n\n\n\nConstructor:\n\n\n\n\ndef __init__(self, confidence=-1.0, detection_properties=None):\n ...\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetection_properties\n\n\ndict[str, str]\n\n\nA dict with keys and values of type \nstr\n containing optional additional information about the detected object. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\nHow to Report Errors\n\n\nThe following is an example of how to throw an exception:\n\n\nimport mpf_component_api as mpf\n\n...\nraise mpf.DetectionError.MISSING_PROPERTY.exception(\n 'The REALLY_IMPORTANT property must be provided as a job property.')\n\n\n\nThe Python Batch Component API supports all of the same error types\nlisted \nhere\n for the C++ Batch Component API. Be sure to omit\nthe \nMPF_\n prefix. You can replace the \nMISSING_PROPERTY\n part in the above code with any other error type. When\ngenerating an exception, choose the type that best describes your error.\n\n\nPython Component Build Environment\n\n\nAll Python components must work with CPython 3.8.10. Also, Python components \nmust work with the Linux version that is used by the OpenMPF Component \nExecutable. At this writing, OpenMPF runs on \nUbuntu 20.04 (kernel version 5.13.0-30). Pure Python code should work on any \nOS, but incompatibility issues can arise when using Python libraries that \ninclude compiled extension modules. Python libraries are typically distributed \nas wheel files. The wheel format requires that the file name follows the pattern \nof \n----.whl\n. \n\n--\n are called \n\ncompatibility tags\n. For example, \n\nmpf_component_api\n is pure Python, so the name of its wheel file is \n\nmpf_component_api-0.1-py3-none-any.whl\n. 
\npy3\n means it will work with any \nPython 3 implementation because it does not use any implementation-specific \nfeatures. \nnone\n means that it does not use the Python ABI. \nany\n means it will \nwork on any platform.\n\n\nThe following combinations of compatibility tags are supported:\n\n\n\n\ncp38-cp38-manylinux2014_x86_64\n\n\ncp38-cp38-manylinux2010_x86_64\n\n\ncp38-cp38-manylinux1_x86_64\n\n\ncp38-cp38-linux_x86_64\n\n\ncp38-abi3-manylinux2014_x86_64\n\n\ncp38-abi3-manylinux2010_x86_64\n\n\ncp38-abi3-manylinux1_x86_64\n\n\ncp38-abi3-linux_x86_64\n\n\ncp38-none-manylinux2014_x86_64\n\n\ncp38-none-manylinux2010_x86_64\n\n\ncp38-none-manylinux1_x86_64\n\n\ncp38-none-linux_x86_64\n\n\ncp37-abi3-manylinux2014_x86_64\n\n\ncp37-abi3-manylinux2010_x86_64\n\n\ncp37-abi3-manylinux1_x86_64\n\n\ncp37-abi3-linux_x86_64\n\n\ncp36-abi3-manylinux2014_x86_64\n\n\ncp36-abi3-manylinux2010_x86_64\n\n\ncp36-abi3-manylinux1_x86_64\n\n\ncp36-abi3-linux_x86_64\n\n\ncp35-abi3-manylinux2014_x86_64\n\n\ncp35-abi3-manylinux2010_x86_64\n\n\ncp35-abi3-manylinux1_x86_64\n\n\ncp35-abi3-linux_x86_64\n\n\ncp34-abi3-manylinux2014_x86_64\n\n\ncp34-abi3-manylinux2010_x86_64\n\n\ncp34-abi3-manylinux1_x86_64\n\n\ncp34-abi3-linux_x86_64\n\n\ncp33-abi3-manylinux2014_x86_64\n\n\ncp33-abi3-manylinux2010_x86_64\n\n\ncp33-abi3-manylinux1_x86_64\n\n\ncp33-abi3-linux_x86_64\n\n\ncp32-abi3-manylinux2014_x86_64\n\n\ncp32-abi3-manylinux2010_x86_64\n\n\ncp32-abi3-manylinux1_x86_64\n\n\ncp32-abi3-linux_x86_64\n\n\npy38-none-manylinux2014_x86_64\n\n\npy38-none-manylinux2010_x86_64\n\n\npy38-none-manylinux1_x86_64\n\n\npy38-none-linux_x86_64\n\n\npy3-none-manylinux2014_x86_64\n\n\npy3-none-manylinux2010_x86_64\n\n\npy3-none-manylinux1_x86_64\n\n\npy3-none-linux_x86_64\n\n\npy37-none-manylinux2014_x86_64\n\n\npy37-none-manylinux2010_x86_64\n\n\npy37-none-manylinux1_x86_64\n\n\npy37-none-linux_x86_64\n\n\npy36-none-manylinux2014_x86_64\n\n\npy36-none-manylinux2010_x86_64\n\n\npy36-none-manylinux1_x86_64\n\n\npy36-none-linux_x86_64\n\n\npy35-none-manylinux2014_x86_64\n\n\npy35-none-manylinux2010_x86_64\n\n\npy35-none-manylinux1_x86_64\n\n\npy35-none-linux_x86_64\n\n\npy34-none-manylinux2014_x86_64\n\n\npy34-none-manylinux2010_x86_64\n\n\npy34-none-manylinux1_x86_64\n\n\npy34-none-linux_x86_64\n\n\npy33-none-manylinux2014_x86_64\n\n\npy33-none-manylinux2010_x86_64\n\n\npy33-none-manylinux1_x86_64\n\n\npy33-none-linux_x86_64\n\n\npy32-none-manylinux2014_x86_64\n\n\npy32-none-manylinux2010_x86_64\n\n\npy32-none-manylinux1_x86_64\n\n\npy32-none-linux_x86_64\n\n\npy31-none-manylinux2014_x86_64\n\n\npy31-none-manylinux2010_x86_64\n\n\npy31-none-manylinux1_x86_64\n\n\npy31-none-linux_x86_64\n\n\npy30-none-manylinux2014_x86_64\n\n\npy30-none-manylinux2010_x86_64\n\n\npy30-none-manylinux1_x86_64\n\n\npy30-none-linux_x86_64\n\n\ncp38-none-any\n\n\npy38-none-any\n\n\npy3-none-any\n\n\npy37-none-any\n\n\npy36-none-any\n\n\npy35-none-any\n\n\npy34-none-any\n\n\npy33-none-any\n\n\npy32-none-any\n\n\npy31-none-any\n\n\npy30-none-any\n\n\n\n\nThe list above was generated with the following command: \n\npython3 -c 'import pip._internal.pep425tags as tags; print(\"\\n\".join(str(t) for t in tags.get_supported()))'\n\n\nComponents should be supplied as a tar file, which includes not only the component library, but any other libraries or\nfiles needed for execution. 
This includes all other non-standard libraries used by the component\n(aside from the standard Python libraries), and any configuration or data files.\n\n\nComponent Development Best Practices\n\n\nSingle-threaded Operation\n\n\nImplementations are encouraged to operate in single-threaded mode. OpenMPF will parallelize components through\nmultiple instantiations of the component, each running as a separate service.\n\n\nStateless Behavior\n\n\nOpenMPF components should be stateless in operation and give identical output for a provided input\n(i.e. when processing the same job).\n\n\nLogging\n\n\nIt is recommended that components use Python's built-in \n\nlogging\n module.\n The component should \n\nimport logging\n and call \nlogging.getLogger\n with a name for the component (e.g. \nlogging.getLogger('MyComponent')\n) to get a logger instance. \nThe component should not configure logging itself. The Component Executor will configure the \n\nlogging\n module for the component. The logger will write log messages to standard error and \n\n${MPF_LOG_PATH}/${THIS_MPF_NODE}/log/.log\n. Note that multiple instances of the \nsame component can log to the same file. Also, logging content can span multiple lines. \n\n\nThe following log levels are supported: \nFATAL, ERROR, WARN, INFO, DEBUG\n. \nThe \nLOG_LEVEL\n environment variable can be set to one of the log levels to change the logging \nverbosity. When \nLOG_LEVEL\n is absent, \nINFO\n is used.\n\n\nThe format of the log messages is:\n\n\nDATE TIME LEVEL [SOURCE_FILE:LINE_NUMBER] - MESSAGE\n\n\n\nFor example:\n\n\n2018-05-03 14:41:11,703 INFO [test_component.py:44] - Logged message",
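The example below is a minimal sketch of how a component might use the logger configured by the Component Executor. The logger name `MyComponent` and the `run_detection_algorithm` helper are placeholders in the style of the earlier examples, not part of the API.

```python
import logging

# The Component Executor configures the logging module; the component only asks for a named logger.
logger = logging.getLogger('MyComponent')


class MyComponent:

    @staticmethod
    def get_detections_from_image(image_job):
        # Messages go to standard error and the component log file, subject to LOG_LEVEL (INFO by default).
        logger.info('[%s] Received image job.', image_job.job_name)
        logger.debug('[%s] Job properties: %s', image_job.job_name, image_job.job_properties)
        try:
            # run_detection_algorithm is a placeholder for the component-specific detection logic.
            return run_detection_algorithm(image_job)
        except Exception:
            # logging.Logger.exception records the message and stack trace at ERROR level.
            logger.exception('[%s] Detection failed.', image_job.job_name)
            raise
```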
+ "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nAPI Overview\n\n\nIn OpenMPF, a \ncomponent\n is a plugin that receives jobs (containing media), processes that media, and returns results.\n\n\nThe OpenMPF Batch Component API currently supports the development of \ndetection components\n, which are used detect\nobjects in image, video, audio, or other (generic) files that reside on disk.\n\n\nUsing this API, detection components can be built to provide:\n\n\n\n\nDetection (Localizing an object)\n\n\nTracking (Localizing an object across multiple frames)\n\n\nClassification (Detecting the type of object and optionally localizing that object)\n\n\nTranscription (Detecting speech and transcribing it into text)\n\n\n\n\nHow Components Integrate into OpenMPF\n\n\nComponents are integrated into OpenMPF through the use of OpenMPF's \nComponent Executable\n.\nDevelopers create component libraries that encapsulate the component detection logic.\nEach instance of the Component Executable loads one of these libraries and uses it to service job requests\nsent by the OpenMPF Workflow Manager (WFM).\n\n\nThe Component Executable:\n\n\n\n\nReceives and parses job requests from the WFM\n\n\nInvokes methods on the component library to obtain detection results\n\n\nPopulates and sends the respective responses to the WFM\n\n\n\n\nThe basic pseudocode for the Component Executable is as follows:\n\n\ncomponent_cls = locate_component_class()\ncomponent = component_cls()\n\nwhile True:\n job = receive_job()\n\n if is_image_job(job) and hasattr(component, 'get_detections_from_image'):\n detections = component.get_detections_from_image(job)\n send_job_response(detections)\n\n elif is_video_job(job) and hasattr(component, 'get_detections_from_video'):\n detections = component.get_detections_from_video(job)\n send_job_response(detections)\n\n elif is_audio_job(job) and hasattr(component, 'get_detections_from_audio'):\n detections = component.get_detections_from_audio(job)\n send_job_response(detections)\n\n elif is_generic_job(job) and hasattr(component, 'get_detections_from_generic'):\n detections = component.get_detections_from_generic(job)\n send_job_response(detections)\n\n\n\nEach instance of a Component Executable runs as a separate process.\n\n\nThe Component Executable receives and parses requests from the WFM, invokes methods on the Component Logic to get\ndetection objects, and subsequently populates responses with the component output and sends them to the WFM.\n\n\nA component developer implements a detection component by creating a class that defines one or more of the\nget_detections_from_* methods. See the \nAPI Specification\n for more information.\n\n\nThe figures below present high-level component diagrams of the Python Batch Component API.\nThis figure shows the basic structure:\n\n\n\n\nThe Node Manager is only used in a non-Docker deployment. In a Docker deployment the Component Executor is started by the Docker container itself.\n\n\nThe Component Executor determines that it is running a Python component so it creates an instance of the\n\nPythonComponentHandle\n\nclass. The \nPythonComponentHandle\n class creates an instance of the component class and calls one of the\n\nget_detections_from_*\n methods on the component instance. 
The example\nabove is an image component, so \nPythonComponentHandle\n calls \nExampleImageFaceDetection.get_detections_from_image\n\non the component instance. The component instance creates an instance of\n\nmpf_component_util.ImageReader\n to access the image. Components that support video\nwould implement \nget_detections_from_video\n and use\n\nmpf_component_util.VideoCapture\n instead.\n\n\nThis figure shows the structure when the mixin classes are used:\n\n\n\n\nThe figure above shows a video component, \nExampleVideoFaceDetection\n, that extends the\n\nmpf_component_util.VideoCaptureMixin\n class. \nPythonComponentHandle\n will\ncall \nget_detections_from_video\n on an instance of \nExampleVideoFaceDetection\n. \nExampleVideoFaceDetection\n does not\nimplement \nget_detections_from_video\n, so the implementation inherited from \nmpf_component_util.VideoCaptureMixin\n\ngets called. \nmpf_component_util.VideoCaptureMixin.get_detections_from_video\n creates an instance of\n\nmpf_component_util.VideoCapture\n and calls\n\nExampleVideoFaceDetection.get_detections_from_video_capture\n, passing in the \nmpf_component_util.VideoCapture\n it\njust created. \nExampleVideoFaceDetection.get_detections_from_video_capture\n is where the component reads the video\nusing the passed-in \nmpf_component_util.VideoCapture\n and attempts to find detections. Components that support images\nwould extend \nmpf_component_util.ImageReaderMixin\n, implement\n\nget_detections_from_image_reader\n, and access the image using the passed-in\n\nmpf_component_util.ImageReader\n.\n\n\nDuring component registration, a \nvirtualenv\n is created for each component.\nThe virtualenv has access to the built-in Python libraries, but does not have access to any third party packages\nthat might be installed on the system. When creating the virtualenv for a setuptools-based component the only packages\nthat get installed are the component itself and any dependencies specified in the setup.cfg\nfile (including their transitive dependencies). When creating the virtualenv for a basic Python component the only\npackage that gets installed is \nmpf_component_api\n. \nmpf_component_api\n is the package containing the job classes\n(e.g. \nmpf_component_api.ImageJob\n,\n\nmpf_component_api.VideoJob\n) and detection result classes\n(e.g. \nmpf_component_api.ImageLocation\n,\n\nmpf_component_api.VideoTrack\n).\n\n\nHow to Create a Python Component\n\n\nThere are two supported types of Python components: setuptools-based components and basic Python components.\nBasic Python components are quicker to set up, but have no built-in support for dependency management.\nAll dependencies must be handled by the developer. Setuptools-based components are recommended since they use\nsetuptools and pip for dependency management.\n\n\nEither way, the end goal is to create a Docker image. This document describes the steps for developing a component\noutside of Docker. Many developers prefer to do that first and then focus on building and running their component\nwithin Docker after they are confident it works in a local environment. Alternatively, some developers feel confident\ndeveloping their component entirely within Docker. When you're ready for the Docker steps, refer to the\n\nREADME\n.\n\n\nGet openmpf-python-component-sdk\n\n\nIn order to create a Python component you will need to clone the\n\nopenmpf-python-component-sdk repository\n if you don't\nalready have it. 
While not technically required, it is recommended to also clone the\n\nopenmpf-build-tools repository\n.\nThe rest of the steps assume you cloned openmpf-python-component-sdk to\n\n~/openmpf-projects/openmpf-python-component-sdk\n. The rest of the steps also assume that if you cloned the\nopenmpf-build-tools repository, you cloned it to \n~/openmpf-projects/openmpf-build-tools\n.\n\n\nSetup Python Component Libraries\n\n\nThe component packaging steps require that wheel files for \nmpf_component_api\n, \nmpf_component_util\n, and\ntheir dependencies are available in the \n~/mpf-sdk-install/python/wheelhouse\n directory.\n\n\nIf you have openmpf-build-tools, then you can run:\n\n\n~/openmpf-projects/openmpf-build-tools/build-openmpf-components/build_components.py -psdk ~/openmpf-projects/openmpf-python-component-sdk\n\n\n\nTo setup the libraries manually you can run:\n\n\npip3 wheel -w ~/mpf-sdk-install/python/wheelhouse ~/openmpf-projects/openmpf-python-component-sdk/detection/api\npip3 wheel -w ~/mpf-sdk-install/python/wheelhouse ~/openmpf-projects/openmpf-python-component-sdk/detection/component_util\n\n\n\nHow to Create a Setuptools-based Python Component\n\n\nIn this example we create a setuptools-based video component named \"MyComponent\". An example of a setuptools-based\nPython component can be found\n\nhere\n.\n\n\nThis is the recommended project structure:\n\n\nComponentName\n\u251c\u2500\u2500 pyproject.toml\n\u251c\u2500\u2500 setup.cfg\n\u251c\u2500\u2500 component_name\n\u2502 \u251c\u2500\u2500 __init__.py\n\u2502 \u2514\u2500\u2500 component_name.py\n\u2514\u2500\u2500 plugin-files\n \u251c\u2500\u2500 descriptor\n \u2502 \u2514\u2500\u2500 descriptor.json\n \u2514\u2500\u2500 wheelhouse # optional\n \u2514\u2500\u2500 my_prebuilt_lib-0.1-py3-none-any.whl\n\n\n\n1. Create directory structure:\n\n\nmkdir MyComponent\nmkdir MyComponent/my_component\nmkdir -p MyComponent/plugin-files/descriptor\ntouch MyComponent/pyproject.toml\ntouch MyComponent/setup.cfg\ntouch MyComponent/my_component/__init__.py\ntouch MyComponent/my_component/my_component.py\ntouch MyComponent/plugin-files/descriptor/descriptor.json\n\n\n\n2. Create pyproject.toml file in project's top-level directory:\n\n\npyproject.toml\n should contain the following content:\n\n\n[build-system]\nrequires = [\"setuptools\"]\nbuild-backend = \"setuptools.build_meta\"\n\n\n\n3. Create setup.cfg file in project's top-level directory:\n\n\nExample of a minimal setup.cfg file:\n\n\n[metadata]\nname = MyComponent\nversion = 0.1\n\n[options]\npackages = my_component\ninstall_requires =\n mpf_component_api>=0.1\n mpf_component_util>=0.1\n\n[options.entry_points]\nmpf.exported_component =\n component = my_component.my_component:MyComponent\n\n[options.package_data]\nmy_component=models/*\n\n\n\nThe \nname\n parameter defines the distribution name. Typically the distribution name matches the component name.\n\n\nAny dependencies that component requires should be listed in the \ninstall_requires\n field.\n\n\nThe Component Executor looks in the \nentry_points\n element and uses the \nmpf.exported_component\n field to determine\nthe component class. The right hand side of \ncomponent =\n should be the dotted module name, followed by a \n:\n,\nfollowed by the name of the class. The general pattern is\n\n'mpf.exported_component': 'component = .:'\n. In the above example,\n\nMyComponent\n is the class name. 
The module is listed as \nmy_component.my_component\n because the \nmy_component\n\npackage contains the \nmy_component.py\n file and the \nmy_component.py\n file contains the \nMyComponent\n class.\n\n\nThe \n[options.package_data]\n section is optional. It should be used when there are non-Python files\nin a package directory that should be included when the component is installed.\n\n\n4. Create descriptor.json file in MyComponent/plugin-files/descriptor:\n\n\nThe \nbatchLibrary\n field should match the distribution name from the setup.cfg file. In this example the\nfield should be: \n\"batchLibrary\" : \"MyComponent\"\n.\nSee the \nComponent Descriptor Reference\n for details about\nthe descriptor format.\n\n\n5. Implement your component class:\n\n\nBelow is an example of the structure of a simple component. This component extends\n\nmpf_component_util.VideoCaptureMixin\n to simplify the use of\n\nmpf_component_util.VideoCapture\n. You would replace the call to\n\nrun_detection_algorithm_on_frame\n with your component-specific logic.\n\n\nimport logging\n\nimport mpf_component_api as mpf\nimport mpf_component_util as mpf_util\n\nlogger = logging.getLogger('MyComponent')\n\nclass MyComponent(mpf_util.VideoCaptureMixin):\n\n @staticmethod\n def get_detections_from_video_capture(video_job, video_capture):\n logger.info('[%s] Received video job: %s', video_job.job_name, video_job)\n # If frame index is not required, you can just loop over video_capture directly\n for frame_index, frame in enumerate(video_capture):\n for result_track in run_detection_algorithm_on_frame(frame_index, frame):\n # Alternatively, while iterating through the video, add tracks to a list. When done, return that list.\n yield result_track\n\n\n\n6. Optional: Add prebuilt wheel files if not available on PyPi:\n\n\nIf your component depends on Python libraries that are not available on PyPi, the libraries can be manually added to\nyour project. The prebuilt libraries must be placed in your project's \nplugin-files/wheelhouse\n directory.\nThe prebuilt library names must be listed in your \nsetup.cfg\n file's \ninstall_requires\n field.\nIf any of the prebuilt libraries have transitive dependencies that are not available on PyPi, then those libraries\nmust also be added to your project's \nplugin-files/wheelhouse\n directory.\n\n\n7. 
Optional: Create the plugin package for non-Docker deployments:\n\n\nThe directory structure of the .tar.gz file will be:\n\n\nMyComponent\n\u251c\u2500\u2500 descriptor\n\u2502 \u2514\u2500\u2500 descriptor.json\n\u2514\u2500\u2500 wheelhouse\n \u251c\u2500\u2500 MyComponent-0.1-py3-none-any.whl\n \u251c\u2500\u2500 mpf_component_api-0.1-py3-none-any.whl\n \u251c\u2500\u2500 mpf_component_util-0.1-py3-none-any.whl\n \u251c\u2500\u2500 numpy-1.18.4-cp38-cp38-manylinux1_x86_64.whl\n \u2514\u2500\u2500 opencv_python-4.2.0.34-cp38-cp38-manylinux1_x86_64.whl\n\n\n\nTo create the plugin packages you can run the build script as follows:\n\n\n~/openmpf-projects/openmpf-build-tools/build-openmpf-components/build_components.py -psdk ~/openmpf-projects/openmpf-python-component-sdk -c MyComponent\n\n\n\nThe plugin package can also be built manually using the following commands:\n\n\nmkdir -p plugin-packages/MyComponent/wheelhouse\ncp -r MyComponent/plugin-files/* plugin-packages/MyComponent/\npip3 wheel -w plugin-packages/MyComponent/wheelhouse -f ~/mpf-sdk-install/python/wheelhouse -f plugin-packages/MyComponent/wheelhouse ./MyComponent/\ncd plugin-packages\ntar -zcf MyComponent.tar.gz MyComponent\n\n\n\n8. Create the component Docker image:\n\n\nSee the \nREADME\n.\n\n\nHow to Create a Basic Python Component\n\n\nIn this example we create a basic Python component that supports video. An example of a basic Python component can be\nfound\n\nhere\n.\n\n\nThis is the recommended project structure:\n\n\nComponentName\n\u251c\u2500\u2500 component_name.py\n\u251c\u2500\u2500 dependency.py\n\u2514\u2500\u2500 descriptor\n \u2514\u2500\u2500 descriptor.json\n\n\n\n1. Create directory structure:\n\n\nmkdir MyComponent\nmkdir MyComponent/descriptor\ntouch MyComponent/descriptor/descriptor.json\ntouch MyComponent/my_component.py\n\n\n\n2. Create descriptor.json file in MyComponent/descriptor:\n\n\nThe \nbatchLibrary\n field should be the full path to the Python file containing your component class.\nIn this example the field should be: \n\"batchLibrary\" : \"${MPF_HOME}/plugins/MyComponent/my_component.py\"\n.\nSee the \nComponent Descriptor Reference\n for details about\nthe descriptor format.\n\n\n3. Implement your component class:\n\n\nBelow is an example of the structure of a simple component that does not use\n\nmpf_component_util.VideoCaptureMixin\n. You would replace the call to\n\nrun_detection_algorithm\n with your component-specific logic.\n\n\nimport logging\n\nlogger = logging.getLogger('MyComponent')\n\nclass MyComponent:\n\n @staticmethod\n def get_detections_from_video(video_job):\n logger.info('[%s] Received video job: %s', video_job.job_name, video_job)\n return run_detection_algorithm(video_job)\n\nEXPORT_MPF_COMPONENT = MyComponent\n\n\n\nThe Component Executor looks for a module-level variable named \nEXPORT_MPF_COMPONENT\n to specify which class\nis the component.\n\n\n4. Optional: Create the plugin package for non-Docker deployments:\n\n\nThe directory structure of the .tar.gz file will be:\n\n\nComponentName\n\u251c\u2500\u2500 component_name.py\n\u251c\u2500\u2500 dependency.py\n\u2514\u2500\u2500 descriptor\n \u2514\u2500\u2500 descriptor.json\n\n\n\nTo create the plugin packages you can run the build script as follows:\n\n\n~/openmpf-projects/openmpf-build-tools/build-openmpf-components/build_components.py -c MyComponent\n\n\n\nThe plugin package can also be built manually using the following command:\n\n\ntar -zcf MyComponent.tar.gz MyComponent\n\n\n\n5. 
Create the component Docker image:\n\n\nSee the \nREADME\n.\n\n\nAPI Specification\n\n\nAn OpenMPF Python component is a class that defines one or more of the get_detections_from_* methods.\n\n\ncomponent.get_detections_from_* methods\n\n\nAll get_detections_from_* methods are invoked through an instance of the component class. The only parameter passed\nin is an appropriate job object (e.g. \nmpf_component_api.ImageJob\n, \nmpf_component_api.VideoJob\n). Since the methods\nare invoked through an instance, instance methods and class methods end up with two arguments, the first is either the\ninstance or the class, respectively. All get_detections_from_* methods can be implemented either as an instance method,\na static method, or a class method.\nFor example:\n\n\ninstance method:\n\n\nclass MyComponent:\n def get_detections_from_image(self, image_job):\n return [mpf_component_api.ImageLocation(...), ...]\n\n\n\nstatic method:\n\n\nclass MyComponent:\n @staticmethod\n def get_detections_from_image(image_job):\n return [mpf_component_api.ImageLocation(...), ...]\n\n\n\nclass method:\n\n\nclass MyComponent:\n @classmethod\n def get_detections_from_image(cls, image_job):\n return [mpf_component_api.ImageLocation(...), ...]\n\n\n\nAll get_detections_from_* methods must return an iterable of the appropriate detection type\n(e.g. \nmpf_component_api.ImageLocation\n, \nmpf_component_api.VideoTrack\n). The return value is normally a list or generator,\nbut any iterable can be used.\n\n\nImage API\n\n\ncomponent.get_detections_from_image(image_job)\n\n\nUsed to detect objects in an image file.\n\n\n\n\nMethod Definition:\n\n\n\n\nclass MyComponent:\n def get_detections_from_image(self, image_job):\n return [mpf_component_api.ImageLocation(...), ...]\n\n\n\nget_detections_from_image\n, like all get_detections_from_* methods, can be implemented either as an instance method,\na static method, or a class method.\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nimage_job\n\n\nmpf_component_api.ImageJob\n\n\nObject containing details about the work to be performed.\n\n\n\n\n\n\n\n\n\n\nReturns: An iterable of \nmpf_component_api.ImageLocation\n\n\n\n\nmpf_component_api.ImageJob\n\n\nClass containing data used for detection of objects in an image file.\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njob_name\n\n \nstr\n\n \nA specific name given to the job by the OpenMPF framework. This value may be used, for example, for logging and debugging purposes.\n\n \n\n \n\n \ndata_uri\n\n \nstr\n\n \nThe URI of the input media file to be processed. Currently, this is a file path. For example, \"/opt/mpf/share/remote-media/test-file.jpg\".\n\n \n\n \n\n \njob_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n which represent the property name and the property value. The key corresponds to the property name specified in the component descriptor file described in the \nComponent Descriptor Reference\n. Values are determined when creating a pipeline or when submitting a job.\n \n\n Note: The job_properties dict may not contain the full set of job properties. 
For properties not contained in the dict, the component must use a default value.\n \n\n \n\n \n\n \nmedia_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n of metadata about the media associated with the job.\n \n\n Includes the following key-value pairs:\n \n\n \nMIME_TYPE\n : the MIME type of the media\n\n \nFRAME_WIDTH\n : the width of the image in pixels\n\n \nFRAME_HEIGHT\n : the height of the image in pixels\n\n \n\n May include the following key-value pairs:\n \n\n \nROTATION\n : A floating point value in the interval \n[0.0, 360.0)\n indicating the orientation of the media in degrees in the counter-clockwise direction. In order to view the media in the upright orientation, it must be rotated the given number of degrees in the clockwise direction.\n\n \nHORIZONTAL_FLIP\n : true if the image is mirrored across the Y-axis, otherwise false\n\n \nEXIF_ORIENTATION\n : the standard EXIF orientation tag; a value between 1 and 8\n\n \n\n \n\n \n\n \n\n \nfeed_forward_location\n\n \nNone\n or \nmpf_component_api.ImageLocation\n\n \nAn \nmpf_component_api.ImageLocation\n from the previous pipeline stage. Provided when feed forward is enabled. See \nFeed Forward Guide\n.\n\n \n\n \n\n\n\n\n\nJob properties can also be set through environment variables prefixed with \nMPF_PROP_\n. This allows\nusers to set job properties in their\n\ndocker-compose files.\n\nThese will take precedence over all other property types (job, algorithm, media, etc). It is not\npossible to change the value of properties set via environment variables at runtime and therefore\nthey should only be used to specify properties that will not change throughout the entire lifetime\nof the service (e.g. Docker container).\n\n\nmpf_component_api.ImageLocation\n\n\nClass used to store the location of detected objects in a image file.\n\n\n\n\nConstructor:\n\n\n\n\ndef __init__(self, x_left_upper, y_left_upper, width, height, confidence=-1.0, detection_properties=None):\n ...\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nx_left_upper\n\n\nint\n\n\nUpper left X coordinate of the detected object.\n\n\n\n\n\n\ny_left_upper\n\n\nint\n\n\nUpper left Y coordinate of the detected object.\n\n\n\n\n\n\nwidth\n\n\nint\n\n\nThe width of the detected object.\n\n\n\n\n\n\nheight\n\n\nint\n\n\nThe height of the detected object.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetection_properties\n\n\ndict[str, str]\n\n\nA dict with keys and values of type \nstr\n containing optional additional information about the detected object. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\nSee here for information about rotation and horizontal flipping.\n\n\n\n\nExample:\n\n\n\n\nA component that performs generic object classification can add an entry to \ndetection_properties\n where the key is\n\nCLASSIFICATION\n and the value is the type of object detected.\n\n\nmpf_component_api.ImageLocation(0, 0, 100, 100, 1.0, {'CLASSIFICATION': 'backpack'})\n\n\n\nmpf_component_util.ImageReader\n\n\nmpf_component_util.ImageReader\n is a utility class for accessing images. It is the image equivalent to\n\nmpf_component_util.VideoCapture\n. 
Like \nmpf_component_util.VideoCapture\n,\nit may modify the read-in frame data based on job_properties. From the point of view of someone using\n\nmpf_component_util.ImageReader\n, these modifications are mostly transparent. \nmpf_component_util.ImageReader\n makes\nit look like you are reading the original image file as though it has already been rotated, flipped, cropped, etc.\n\n\nOne issue with this approach is that the detection bounding boxes will be relative to the\nmodified frame data, not the original. To make the detections relative to the original image\nthe \nmpf_component_util.ImageReader.reverse_transform(image_location)\n method must be called on each\n\nmpf_component_api.ImageLocation\n. Since the use of \nmpf_component_util.ImageReader\n is optional, the framework\ncannot automatically perform the reverse transform for the developer.\n\n\nThe general pattern for using \nmpf_component_util.ImageReader\n is as follows:\n\n\nclass MyComponent:\n\n @staticmethod\n def get_detections_from_image(image_job):\n image_reader = mpf_component_util.ImageReader(image_job)\n image = image_reader.get_image()\n # run_component_specific_algorithm is a placeholder for this example.\n # Replace run_component_specific_algorithm with your component's detection logic\n result_image_locations = run_component_specific_algorithm(image)\n for result in result_image_locations:\n image_reader.reverse_transform(result)\n yield result\n\n\n\nAlternatively, see the documentation for \nmpf_component_util.ImageReaderMixin\n for a more concise way to use\n\nmpf_component_util.ImageReader\n below.\n\n\nmpf_component_util.ImageReaderMixin\n\n\nA mixin class that can be used to simplify the usage of \nmpf_component_util.ImageReader\n.\n\nmpf_component_util.ImageReaderMixin\n takes care of initializing a \nmpf_component_util.ImageReader\n and\nperforming the reverse transform.\n\n\nThere are some requirements to properly use \nmpf_component_util.ImageReaderMixin\n:\n\n\n\n\nThe component must extend \nmpf_component_util.ImageReaderMixin\n.\n\n\nThe component must implement \nget_detections_from_image_reader(image_job, image_reader)\n.\n\n\nThe component must read the image using the \nmpf_component_util.ImageReader\n\n that is passed in to \nget_detections_from_image_reader(image_job, image_reader)\n.\n\n\nThe component must NOT implement \nget_detections_from_image(image_job)\n.\n\n\nThe component must NOT call \nmpf_component_util.ImageReader.reverse_transform\n.\n\n\n\n\nThe general pattern for using \nmpf_component_util.ImageReaderMixin\n is as follows:\n\n\nclass MyComponent(mpf_component_util.ImageReaderMixin):\n\n @staticmethod # Can also be a regular instance method or a class method\n def get_detections_from_image_reader(image_job, image_reader):\n image = image_reader.get_image()\n\n # run_component_specific_algorithm is a placeholder for this example.\n # Replace run_component_specific_algorithm with your component's detection logic\n return run_component_specific_algorithm(image)\n\n\n\nmpf_component_util.ImageReaderMixin\n is a mixin class so it is designed in a way that does not prevent the subclass\nfrom extending other classes. If a component supports both videos and images, and it uses\n\nmpf_component_util.VideoCaptureMixin\n, it should also use\n\nmpf_component_util.ImageReaderMixin\n.\n\n\nVideo API\n\n\ncomponent.get_detections_from_video(video_job)\n\n\nUsed to detect objects in a video file. 
Prior to being sent to the component, videos are split into logical \"segments\"\nof video data and each segment (containing a range of frames) is assigned to a different job. Components are not\nguaranteed to receive requests in any order. For example, the first request processed by a component might receive a\nrequest for frames 300-399 of a Video A, while the next request may cover frames 900-999 of a Video B.\n\n\n\n\nMethod Definition:\n\n\n\n\nclass MyComponent:\n def get_detections_from_video(self, video_job):\n return [mpf_component_api.VideoTrack(...), ...]\n\n\n\nget_detections_from_video\n, like all get_detections_from_* methods, can be implemented either as an instance method,\na static method, or a class method.\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nvideo_job\n\n\nmpf_component_api.VideoJob\n\n\nObject containing details about the work to be performed.\n\n\n\n\n\n\n\n\n\n\nReturns: An iterable of \nmpf_component_api.VideoTrack\n\n\n\n\nmpf_component_api.VideoJob\n\n\nClass containing data used for detection of objects in a video file.\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njob_name\n\n \nstr\n\n \nA specific name given to the job by the OpenMPF framework. This value may be used, for example, for logging and debugging purposes.\n\n \n\n \n\n \ndata_uri\n\n \nstr\n\n \nThe URI of the input media file to be processed. Currently, this is a file path. For example, \"/opt/mpf/share/remote-media/test-file.avi\".\n\n \n\n \n\n \nstart_frame\n\n \nint\n\n \nThe first frame number (0-based index) of the video that should be processed to look for detections.\n\n \n\n \n\n \nstop_frame\n\n \nint\n\n \nThe last frame number (0-based index) of the video that should be processed to look for detections.\n\n \n\n \n\n \njob_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n which represent the property name and the property value. The key corresponds to the property name specified in the component descriptor file described in the \nComponent Descriptor Reference\n. Values are determined when creating a pipeline or when submitting a job.\n \n\n Note: The job_properties dict may not contain the full set of job properties. For properties not contained in the dict, the component must use a default value.\n \n\n \n\n \n\n \nmedia_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n of metadata about the media associated with the job.\n \n\n Includes the following key-value pairs:\n \n\n \nDURATION\n : length of video in milliseconds\n\n \nFPS\n : frames per second (averaged for variable frame rate video)\n\n \nFRAME_COUNT\n : the number of frames in the video\n\n \nMIME_TYPE\n : the MIME type of the media\n\n \nFRAME_WIDTH\n : the width of a frame in pixels\n\n \nFRAME_HEIGHT\n : the height of a frame in pixels\n\n \nHAS_CONSTANT_FRAME_RATE\n : set to true if the video has a constant frame rate; otherwise, omitted or set to false if the video has variable frame rate or the type of frame rate cannot be determined\n\n \n\n May include the following key-value pair:\n \n\n \nROTATION\n : A floating point value in the interval \n[0.0, 360.0)\n indicating the orientation of the media in degrees in the counter-clockwise direction. 
In order to view the media in the upright orientation, it must be rotated the given number of degrees in the clockwise direction.\n\n \n\n \n\n \n\n \n\n \nfeed_forward_track\n\n \nNone\n or \nmpf_component_api.VideoTrack\n\n \nAn \nmpf_component_api.VideoTrack\n from the previous pipeline stage. Provided when feed forward is enabled. See \nFeed Forward Guide\n.\n\n \n\n \n\n\n\n\n\n\n\nIMPORTANT:\n \nFRAME_INTERVAL\n is a common job property that many components support.\nFor frame intervals greater than 1, the component must look for detections starting with the first\nframe, and then skip frames as specified by the frame interval, until or before it reaches the stop frame.\nFor example, given a start frame of 0, a stop frame of 99, and a frame interval of 2, then the detection component\nmust look for objects in frames numbered 0, 2, 4, 6, ..., 98.\n\n\n\n\nJob properties can also be set through environment variables prefixed with \nMPF_PROP_\n. This allows\nusers to set job properties in their\n\ndocker-compose files.\n\nThese will take precedence over all other property types (job, algorithm, media, etc). It is not\npossible to change the value of properties set via environment variables at runtime and therefore\nthey should only be used to specify properties that will not change throughout the entire lifetime\nof the service (e.g. Docker container).\n\n\nmpf_component_api.VideoTrack\n\n\nClass used to store the location of detected objects in a video file.\n\n\n\n\nConstructor:\n\n\n\n\ndef __init__(self, start_frame, stop_frame, confidence=-1.0, frame_locations=None, detection_properties=None):\n ...\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nstart_frame\n\n\nint\n\n\nThe first frame number (0-based index) that contained the detected object.\n\n\n\n\n\n\nstop_frame\n\n\nint\n\n\nThe last frame number (0-based index) that contained the detected object.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\nframe_locations\n\n\ndict[int, mpf_component_api.ImageLocation]\n\n\nA dict of individual detections. The key for each entry is the frame number where the detection was generated, and the value is a \nmpf_component_api.ImageLocation\n calculated as if that frame was a still image. Note that a key-value pair is \nnot\n required for every frame between the track start frame and track stop frame.\n\n\n\n\n\n\ndetection_properties\n\n\ndict[str, str]\n\n\nA dict with keys and values of type \nstr\n containing optional additional information about the detected object. 
For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\n\n\nNOTE:\n Currently, \nmpf_component_api.VideoTrack.detection_properties\n do not show up in the JSON output object,\nnor are they used by the WFM in any way.\n\n\n\n\n\n\nExample:\n\n\n\n\nA component that performs generic object classification can add an entry to \ndetection_properties\n where the key is\n\nCLASSIFICATION\n and the value is the type of object detected.\n\n\ntrack = mpf_component_api.VideoTrack(0, 1)\ntrack.frame_locations[0] = mpf_component_api.ImageLocation(0, 0, 100, 100, 0.75, {'CLASSIFICATION': 'backpack'})\ntrack.frame_locations[1] = mpf_component_api.ImageLocation(10, 10, 110, 110, 0.95, {'CLASSIFICATION': 'backpack'})\ntrack.confidence = max(il.confidence for il in track.frame_locations.values())\n\n\n\nmpf_component_util.VideoCapture\n\n\nmpf_component_util.VideoCapture\n is a utility class for reading videos. \nmpf_component_util.VideoCapture\n works very\nsimilarly to \ncv2.VideoCapture\n, except that it might modify the video frames based on job properties. From the point\nof view of someone using \nmpf_component_util.VideoCapture\n, these modifications are mostly transparent.\n\nmpf_component_util.VideoCapture\n makes it look like you are reading the original video file as though it has already\nbeen rotated, flipped, cropped, etc. Also, if frame skipping is enabled, such as by setting the value of the\n\nFRAME_INTERVAL\n job property, it makes it look like you are reading the video as though it never contained the\nskipped frames.\n\n\nOne issue with this approach is that the detection frame numbers and bounding boxes will be relative to the\nmodified video, not the original. To make the detections relative to the original video\nthe \nmpf_component_util.VideoCapture.reverse_transform(video_track)\n method must be called on each\n\nmpf_component_api.VideoTrack\n. 
Since the use of \nmpf_component_util.VideoCapture\n is optional, the framework\ncannot automatically perform the reverse transform for the developer.\n\n\nThe general pattern for using \nmpf_component_util.VideoCapture\n is as follows:\n\n\nclass MyComponent:\n\n @staticmethod\n def get_detections_from_video(video_job):\n video_capture = mpf_component_util.VideoCapture(video_job)\n # If frame index is not required, you can just loop over video_capture directly\n for frame_index, frame in enumerate(video_capture):\n # run_component_specific_algorithm is a placeholder for this example.\n # Replace run_component_specific_algorithm with your component's detection logic\n result_tracks = run_component_specific_algorithm(frame_index, frame)\n for track in result_tracks:\n video_capture.reverse_transform(track)\n yield track\n\n\n\nAlternatively, see the documentation for \nmpf_component_util.VideoCaptureMixin\n for a more concise way to use\n\nmpf_component_util.VideoCapture\n below.\n\n\nmpf_component_util.VideoCaptureMixin\n\n\nA mixin class that can be used to simplify the usage of \nmpf_component_util.VideoCapture\n.\n\nmpf_component_util.VideoCaptureMixin\n takes care of initializing a \nmpf_component_util.VideoCapture\n and\nperforming the reverse transform.\n\n\nThere are some requirements to properly use \nmpf_component_util.VideoCaptureMixin\n:\n\n\n\n\nThe component must extend \nmpf_component_util.VideoCaptureMixin\n.\n\n\nThe component must implement \nget_detections_from_video_capture(video_job, video_capture)\n.\n\n\nThe component must read the video using the \nmpf_component_util.VideoCapture\n\n that is passed in to \nget_detections_from_video_capture(video_job, video_capture)\n.\n\n\nThe component must NOT implement \nget_detections_from_video(video_job)\n.\n\n\nThe component must NOT call \nmpf_component_util.VideoCapture.reverse_transform\n.\n\n\n\n\nThe general pattern for using \nmpf_component_util.VideoCaptureMixin\n is as follows:\n\n\nclass MyComponent(mpf_component_util.VideoCaptureMixin):\n\n @staticmethod # Can also be a regular instance method or a class method\n def get_detections_from_video_capture(video_job, video_capture):\n # If frame index is not required, you can just loop over video_capture directly\n for frame_index, frame in enumerate(video_capture):\n # run_component_specific_algorithm is a placeholder for this example.\n # Replace run_component_specific_algorithm with your component's detection logic\n result_tracks = run_component_specific_algorithm(frame_index, frame)\n for track in result_tracks:\n # Alternatively, while iterating through the video, add tracks to a list. When done, return that list.\n yield track\n\n\n\nmpf_component_util.VideoCaptureMixin\n is a mixin class so it is designed in a way that does not prevent the subclass\nfrom extending other classes. 
If a component supports both videos and images, and it uses\n\nmpf_component_util.VideoCaptureMixin\n, it should also use\n\nmpf_component_util.ImageReaderMixin\n.\nFor example:\n\n\nclass MyComponent(mpf_component_util.VideoCaptureMixin, mpf_component_util.ImageReaderMixin):\n\n @staticmethod\n def get_detections_from_video_capture(video_job, video_capture):\n ...\n\n @staticmethod\n def get_detections_from_image_reader(image_job, image_reader):\n ...\n\n\n\nAudio API\n\n\ncomponent.get_detections_from_audio(audio_job)\n\n\nUsed to detect objects in an audio file.\n\n\n\n\nMethod Definition:\n\n\n\n\nclass MyComponent:\n def get_detections_from_audio(self, audio_job):\n return [mpf_component_api.AudioTrack(...), ...]\n\n\n\nget_detections_from_audio\n, like all get_detections_from_* methods, can be implemented either as an instance method,\na static method, or a class method.\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\naudio_job\n\n\nmpf_component_api.AudioJob\n\n\nObject containing details about the work to be performed.\n\n\n\n\n\n\n\n\n\n\nReturns: An iterable of \nmpf_component_api.AudioTrack\n\n\n\n\nmpf_component_api.AudioJob\n\n\nClass containing data used for detection of objects in an audio file.\nCurrently, audio files are not logically segmented, so a job will contain the entirety of the audio file.\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njob_name\n\n \nstr\n\n \nA specific name given to the job by the OpenMPF framework. This value may be used, for example, for logging and debugging purposes.\n\n \n\n \n\n \ndata_uri\n\n \nstr\n\n \nThe URI of the input media file to be processed. Currently, this is a file path. For example, \"/opt/mpf/share/remote-media/test-file.mp3\".\n\n \n\n \n\n \nstart_time\n\n \nint\n\n \nThe time (0-based index, in milliseconds) associated with the beginning of the segment of the audio file that should be processed to look for detections.\n\n \n\n \n\n \nstop_time\n\n \nint\n\n \nThe time (0-based index, in milliseconds) associated with the end of the segment of the audio file that should be processed to look for detections.\n\n \n\n \n\n \njob_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n which represent the property name and the property value. The key corresponds to the property name specified in the component descriptor file described in the \nComponent Descriptor Reference\n. Values are determined when creating a pipeline or when submitting a job.\n \n\n Note: The job_properties dict may not contain the full set of job properties. For properties not contained in the dict, the component must use a default value.\n \n\n \n\n \n\n \nmedia_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n of metadata about the media associated with the job.\n \n\n Includes the following key-value pairs:\n \n\n \nDURATION\n : length of audio file in milliseconds\n\n \nMIME_TYPE\n : the MIME type of the media\n\n \n\n \n\n \n\n \n\n \nfeed_forward_track\n\n \nNone\n or \nmpf_component_api.AudioTrack\n\n \nAn \nmpf_component_api.AudioTrack\n from the previous pipeline stage. Provided when feed forward is enabled. See \nFeed Forward Guide\n.\n\n \n\n \n\n\n\n\n\nJob properties can also be set through environment variables prefixed with \nMPF_PROP_\n. 
This allows\nusers to set job properties in their\n\ndocker-compose files.\n\nThese will take precedence over all other property types (job, algorithm, media, etc). It is not\npossible to change the value of properties set via environment variables at runtime and therefore\nthey should only be used to specify properties that will not change throughout the entire lifetime\nof the service (e.g. Docker container).\n\n\nmpf_component_api.AudioTrack\n\n\nClass used to store the location of detected objects in an audio file.\n\n\n\n\nConstructor:\n\n\n\n\ndef __init__(self, start_time, stop_time, confidence, detection_properties=None):\n ...\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nstart_time\n\n\nint\n\n\nThe time (0-based index, in ms) when the audio detection event started.\n\n\n\n\n\n\nstop_time\n\n\nint\n\n\nThe time (0-based index, in ms) when the audio detection event stopped.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetection_properties\n\n\ndict[str, str]\n\n\nA dict with keys and values of type \nstr\n containing optional additional information about the detected object. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\n\n\nNOTE:\n Currently, \nmpf_component_api.AudioTrack.detection_properties\n do not show up in the JSON output object or\nare used by the WFM in any way.\n\n\n\n\nGeneric API\n\n\ncomponent.get_detections_from_generic(generic_job)\n\n\nUsed to detect objects in files that are not video, image, or audio files. Such files are of the UNKNOWN type and\nhandled generically.\n\n\n\n\nMethod Definition:\n\n\n\n\nclass MyComponent:\n def get_detections_from_generic(self, generic_job):\n return [mpf_component_api.GenericTrack(...), ...]\n\n\n\nget_detections_from_generic\n, like all get_detections_from_* methods, can be implemented either as an instance method,\na static method, or a class method.\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\ngeneric_job\n\n\nmpf_component_api.GenericJob\n\n\nObject containing details about the work to be performed.\n\n\n\n\n\n\n\n\n\n\nReturns: An iterable of \nmpf_component_api.GenericTrack\n\n\n\n\nmpf_component_api.GenericJob\n\n\nClass containing data used for detection of objects in a file that isn't a video, image, or audio file. The file is not\nlogically segmented, so a job will contain the entirety of the file.\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njob_name\n\n \nstr\n\n \nA specific name given to the job by the OpenMPF framework. This value may be used, for example, for logging and debugging purposes.\n\n \n\n \n\n \ndata_uri\n\n \nstr\n\n \nThe URI of the input media file to be processed. Currently, this is a file path. For example, \"/opt/mpf/share/remote-media/test-file.txt\".\n\n \n\n \n\n \njob_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n which represent the property name and the property value. The key corresponds to the property name specified in the component descriptor file described in the \nComponent Descriptor Reference\n. 
Values are determined when creating a pipeline or when submitting a job.\n \n\n Note: The job_properties dict may not contain the full set of job properties. For properties not contained in the dict, the component must use a default value.\n \n\n \n\n \n\n \nmedia_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n of metadata about the media associated with the job.\n \n\n Includes the following key-value pair:\n \n\n \nMIME_TYPE\n : the MIME type of the media\n\n \n\n \n\n \n\n \n\n \nfeed_forward_track\n\n \nNone\n or \nmpf_component_api.GenericTrack\n\n \nAn \nmpf_component_api.GenericTrack\n from the previous pipeline stage. Provided when feed forward is enabled. See \nFeed Forward Guide\n.\n\n \n\n \n\n\n\n\n\nJob properties can also be set through environment variables prefixed with \nMPF_PROP_\n. This allows\nusers to set job properties in their\n\ndocker-compose files.\n\nThese will take precedence over all other property types (job, algorithm, media, etc). It is not\npossible to change the value of properties set via environment variables at runtime and therefore\nthey should only be used to specify properties that will not change throughout the entire lifetime\nof the service (e.g. Docker container).\n\n\nmpf_component_api.GenericTrack\n\n\nClass used to store the location of detected objects in a file that is not a video, image, or audio file.\n\n\n\n\nConstructor:\n\n\n\n\ndef __init__(self, confidence=-1.0, detection_properties=None):\n ...\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetection_properties\n\n\ndict[str, str]\n\n\nA dict with keys and values of type \nstr\n containing optional additional information about the detected object. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\nHow to Report Errors\n\n\nThe following is an example of how to throw an exception:\n\n\nimport mpf_component_api as mpf\n\n...\nraise mpf.DetectionError.MISSING_PROPERTY.exception(\n 'The REALLY_IMPORTANT property must be provided as a job property.')\n\n\n\nThe Python Batch Component API supports all of the same error types\nlisted \nhere\n for the C++ Batch Component API. Be sure to omit\nthe \nMPF_\n prefix. You can replace the \nMISSING_PROPERTY\n part in the above code with any other error type. When\ngenerating an exception, choose the type that best describes your error.\n\n\nPython Component Build Environment\n\n\nAll Python components must work with CPython 3.8.10. Also, Python components\nmust work with the Linux version that is used by the OpenMPF Component\nExecutable. At this writing, OpenMPF runs on\nUbuntu 20.04 (kernel version 5.13.0-30). Pure Python code should work on any\nOS, but incompatibility issues can arise when using Python libraries that\ninclude compiled extension modules. Python libraries are typically distributed\nas wheel files. The wheel format requires that the file name follows the pattern\nof \n----.whl\n.\n\n--\n are called\n\ncompatibility tags\n. For example,\n\nmpf_component_api\n is pure Python, so the name of its wheel file is\n\nmpf_component_api-0.1-py3-none-any.whl\n. 
\npy3\n means it will work with any\nPython 3 implementation because it does not use any implementation-specific\nfeatures. \nnone\n means that it does not use the Python ABI. \nany\n means it will\nwork on any platform.\n\n\nThe following combinations of compatibility tags are supported:\n\n\n\n\ncp38-cp38-manylinux2014_x86_64\n\n\ncp38-cp38-manylinux2010_x86_64\n\n\ncp38-cp38-manylinux1_x86_64\n\n\ncp38-cp38-linux_x86_64\n\n\ncp38-abi3-manylinux2014_x86_64\n\n\ncp38-abi3-manylinux2010_x86_64\n\n\ncp38-abi3-manylinux1_x86_64\n\n\ncp38-abi3-linux_x86_64\n\n\ncp38-none-manylinux2014_x86_64\n\n\ncp38-none-manylinux2010_x86_64\n\n\ncp38-none-manylinux1_x86_64\n\n\ncp38-none-linux_x86_64\n\n\ncp37-abi3-manylinux2014_x86_64\n\n\ncp37-abi3-manylinux2010_x86_64\n\n\ncp37-abi3-manylinux1_x86_64\n\n\ncp37-abi3-linux_x86_64\n\n\ncp36-abi3-manylinux2014_x86_64\n\n\ncp36-abi3-manylinux2010_x86_64\n\n\ncp36-abi3-manylinux1_x86_64\n\n\ncp36-abi3-linux_x86_64\n\n\ncp35-abi3-manylinux2014_x86_64\n\n\ncp35-abi3-manylinux2010_x86_64\n\n\ncp35-abi3-manylinux1_x86_64\n\n\ncp35-abi3-linux_x86_64\n\n\ncp34-abi3-manylinux2014_x86_64\n\n\ncp34-abi3-manylinux2010_x86_64\n\n\ncp34-abi3-manylinux1_x86_64\n\n\ncp34-abi3-linux_x86_64\n\n\ncp33-abi3-manylinux2014_x86_64\n\n\ncp33-abi3-manylinux2010_x86_64\n\n\ncp33-abi3-manylinux1_x86_64\n\n\ncp33-abi3-linux_x86_64\n\n\ncp32-abi3-manylinux2014_x86_64\n\n\ncp32-abi3-manylinux2010_x86_64\n\n\ncp32-abi3-manylinux1_x86_64\n\n\ncp32-abi3-linux_x86_64\n\n\npy38-none-manylinux2014_x86_64\n\n\npy38-none-manylinux2010_x86_64\n\n\npy38-none-manylinux1_x86_64\n\n\npy38-none-linux_x86_64\n\n\npy3-none-manylinux2014_x86_64\n\n\npy3-none-manylinux2010_x86_64\n\n\npy3-none-manylinux1_x86_64\n\n\npy3-none-linux_x86_64\n\n\npy37-none-manylinux2014_x86_64\n\n\npy37-none-manylinux2010_x86_64\n\n\npy37-none-manylinux1_x86_64\n\n\npy37-none-linux_x86_64\n\n\npy36-none-manylinux2014_x86_64\n\n\npy36-none-manylinux2010_x86_64\n\n\npy36-none-manylinux1_x86_64\n\n\npy36-none-linux_x86_64\n\n\npy35-none-manylinux2014_x86_64\n\n\npy35-none-manylinux2010_x86_64\n\n\npy35-none-manylinux1_x86_64\n\n\npy35-none-linux_x86_64\n\n\npy34-none-manylinux2014_x86_64\n\n\npy34-none-manylinux2010_x86_64\n\n\npy34-none-manylinux1_x86_64\n\n\npy34-none-linux_x86_64\n\n\npy33-none-manylinux2014_x86_64\n\n\npy33-none-manylinux2010_x86_64\n\n\npy33-none-manylinux1_x86_64\n\n\npy33-none-linux_x86_64\n\n\npy32-none-manylinux2014_x86_64\n\n\npy32-none-manylinux2010_x86_64\n\n\npy32-none-manylinux1_x86_64\n\n\npy32-none-linux_x86_64\n\n\npy31-none-manylinux2014_x86_64\n\n\npy31-none-manylinux2010_x86_64\n\n\npy31-none-manylinux1_x86_64\n\n\npy31-none-linux_x86_64\n\n\npy30-none-manylinux2014_x86_64\n\n\npy30-none-manylinux2010_x86_64\n\n\npy30-none-manylinux1_x86_64\n\n\npy30-none-linux_x86_64\n\n\ncp38-none-any\n\n\npy38-none-any\n\n\npy3-none-any\n\n\npy37-none-any\n\n\npy36-none-any\n\n\npy35-none-any\n\n\npy34-none-any\n\n\npy33-none-any\n\n\npy32-none-any\n\n\npy31-none-any\n\n\npy30-none-any\n\n\n\n\nThe list above was generated with the following command:\n\npython3 -c 'import pip._internal.pep425tags as tags; print(\"\\n\".join(str(t) for t in tags.get_supported()))'\n\n\nComponents should be supplied as a tar file, which includes not only the component library, but any other libraries or\nfiles needed for execution. 
This includes all other non-standard libraries used by the component\n(aside from the standard Python libraries), and any configuration or data files.\n\n\nComponent Development Best Practices\n\n\nSingle-threaded Operation\n\n\nImplementations are encouraged to operate in single-threaded mode. OpenMPF will parallelize components through\nmultiple instantiations of the component, each running as a separate service.\n\n\nStateless Behavior\n\n\nOpenMPF components should be stateless in operation and give identical output for a provided input\n(i.e. when processing the same job).\n\n\nLogging\n\n\nIt is recommended that components use Python's built-in\n\nlogging\n module.\n The component should\n\nimport logging\n and call \nlogging.getLogger('')\n to get a logger instance.\nThe component should not configure logging itself. The Component Executor will configure the\n\nlogging\n module for the component. The logger will write log messages to standard error and\n\n${MPF_LOG_PATH}/${THIS_MPF_NODE}/log/.log\n. Note that multiple instances of the\nsame component can log to the same file. Also, logging content can span multiple lines.\n\n\nThe following log levels are supported: \nFATAL, ERROR, WARN, INFO, DEBUG\n.\nThe \nLOG_LEVEL\n environment variable can be set to one of the log levels to change the logging\nverbosity. When \nLOG_LEVEL\n is absent, \nINFO\n is used.\n\n\nThe format of the log messages is:\n\n\nDATE TIME LEVEL [SOURCE_FILE:LINE_NUMBER] - MESSAGE\n\n\n\nFor example:\n\n\n2018-05-03 14:41:11,703 INFO [test_component.py:44] - Logged message",
"title": "Python Batch Component API"
},
{
@@ -767,7 +797,7 @@
},
{
"location": "/Python-Batch-Component-API/index.html#how-components-integrate-into-openmpf",
- "text": "Components are integrated into OpenMPF through the use of OpenMPF's Component Executable .\nDevelopers create component libraries that encapsulate the component detection logic.\nEach instance of the Component Executable loads one of these libraries and uses it to service job requests\nsent by the OpenMPF Workflow Manager (WFM). The Component Executable: Receives and parses job requests from the WFM Invokes methods on the component library to obtain detection results Populates and sends the respective responses to the WFM The basic pseudocode for the Component Executable is as follows: component_cls = locate_component_class()\ncomponent = component_cls()\ndetection_type = component.detection_type\n\nwhile True:\n job = receive_job()\n\n if is_image_job(job) and hasattr(component, 'get_detections_from_image'):\n detections = component.get_detections_from_image(job)\n send_job_response(detections)\n\n elif is_video_job(job) and hasattr(component, 'get_detections_from_video'):\n detections = component.get_detections_from_video(job)\n send_job_response(detections)\n\n elif is_audio_job(job) and hasattr(component, 'get_detections_from_audio'):\n detections = component.get_detections_from_audio(job)\n send_job_response(detections)\n\n elif is_generic_job(job) and hasattr(component, 'get_detections_from_generic'):\n detections = component.get_detections_from_generic(job)\n send_job_response(detections) Each instance of a Component Executable runs as a separate process. The Component Executable receives and parses requests from the WFM, invokes methods on the Component Logic to get\ndetection objects, and subsequently populates responses with the component output and sends them to the WFM. A component developer implements a detection component by creating a class that defines one or more of the\nget_detections_from_* methods and has a detection_type field.\nSee the API Specification for more information. The figures below present high-level component diagrams of the Python Batch Component API.\nThis figure shows the basic structure: The Node Manager is only used in a non-Docker deployment. In a Docker deployment the Component Executor is started by the Docker container itself. The Component Executor determines that it is running a Python component so it creates an instance of the PythonComponentHandle \nclass. The PythonComponentHandle class creates an instance of the component class and calls one of the get_detections_from_* methods on the component instance. The example\nabove is an image component, so PythonComponentHandle calls ExampleImageFaceDetection.get_detections_from_image \non the component instance. The component instance creates an instance of mpf_component_util.ImageReader to access the image. Components that support video\nwould implement get_detections_from_video and use mpf_component_util.VideoCapture instead. This figure show the structure when the mixin classes are used: The figure above shows a video component, ExampleVideoFaceDetection , that extends the mpf_component_util.VideoCaptureMixin class. PythonComponentHandle will\ncall get_detections_from_video on an instance of ExampleVideoFaceDetection . ExampleVideoFaceDetection does not\nimplement get_detections_from_video , so the implementation inherited from mpf_component_util.VideoCaptureMixin \ngets called. 
mpf_component_util.VideoCaptureMixin.get_detections_from_video creates an instance of mpf_component_util.VideoCapture and calls ExampleVideoFaceDetection.get_detections_from_video_capture , passing in the mpf_component_util.VideoCapture it\njust created. ExampleVideoFaceDetection.get_detections_from_video_capture is where the component reads the video\nusing the passed-in mpf_component_util.VideoCapture and attempts to find detections. Components that support images\nwould extend mpf_component_util.ImageReaderMixin , implement get_detections_from_image_reader , and access the image using the passed-in mpf_component_util.ImageReader . During component registration a virtualenv is created for each component.\nThe virtualenv has access to the built-in Python libraries, but does not have access to any third party packages\nthat might be installed on the system. When creating the virtualenv for a setuptools-based component the only packages\nthat get installed are the component itself and any dependencies specified in the setup.cfg\nfile (including their transitive dependencies). When creating the virtualenv for a basic Python component the only\npackage that gets installed is mpf_component_api . mpf_component_api is the package containing the job classes\n(e.g. mpf_component_api.ImageJob , mpf_component_api.VideoJob ) and detection result classes\n(e.g. mpf_component_api.ImageLocation , mpf_component_api.VideoTrack ).",
+ "text": "Components are integrated into OpenMPF through the use of OpenMPF's Component Executable .\nDevelopers create component libraries that encapsulate the component detection logic.\nEach instance of the Component Executable loads one of these libraries and uses it to service job requests\nsent by the OpenMPF Workflow Manager (WFM). The Component Executable: Receives and parses job requests from the WFM Invokes methods on the component library to obtain detection results Populates and sends the respective responses to the WFM The basic pseudocode for the Component Executable is as follows: component_cls = locate_component_class()\ncomponent = component_cls()\n\nwhile True:\n job = receive_job()\n\n if is_image_job(job) and hasattr(component, 'get_detections_from_image'):\n detections = component.get_detections_from_image(job)\n send_job_response(detections)\n\n elif is_video_job(job) and hasattr(component, 'get_detections_from_video'):\n detections = component.get_detections_from_video(job)\n send_job_response(detections)\n\n elif is_audio_job(job) and hasattr(component, 'get_detections_from_audio'):\n detections = component.get_detections_from_audio(job)\n send_job_response(detections)\n\n elif is_generic_job(job) and hasattr(component, 'get_detections_from_generic'):\n detections = component.get_detections_from_generic(job)\n send_job_response(detections) Each instance of a Component Executable runs as a separate process. The Component Executable receives and parses requests from the WFM, invokes methods on the Component Logic to get\ndetection objects, and subsequently populates responses with the component output and sends them to the WFM. A component developer implements a detection component by creating a class that defines one or more of the\nget_detections_from_* methods. See the API Specification for more information. The figures below present high-level component diagrams of the Python Batch Component API.\nThis figure shows the basic structure: The Node Manager is only used in a non-Docker deployment. In a Docker deployment the Component Executor is started by the Docker container itself. The Component Executor determines that it is running a Python component so it creates an instance of the PythonComponentHandle \nclass. The PythonComponentHandle class creates an instance of the component class and calls one of the get_detections_from_* methods on the component instance. The example\nabove is an image component, so PythonComponentHandle calls ExampleImageFaceDetection.get_detections_from_image \non the component instance. The component instance creates an instance of mpf_component_util.ImageReader to access the image. Components that support video\nwould implement get_detections_from_video and use mpf_component_util.VideoCapture instead. This figure show the structure when the mixin classes are used: The figure above shows a video component, ExampleVideoFaceDetection , that extends the mpf_component_util.VideoCaptureMixin class. PythonComponentHandle will\ncall get_detections_from_video on an instance of ExampleVideoFaceDetection . ExampleVideoFaceDetection does not\nimplement get_detections_from_video , so the implementation inherited from mpf_component_util.VideoCaptureMixin \ngets called. mpf_component_util.VideoCaptureMixin.get_detections_from_video creates an instance of mpf_component_util.VideoCapture and calls ExampleVideoFaceDetection.get_detections_from_video_capture , passing in the mpf_component_util.VideoCapture it\njust created. 
ExampleVideoFaceDetection.get_detections_from_video_capture is where the component reads the video\nusing the passed-in mpf_component_util.VideoCapture and attempts to find detections. Components that support images\nwould extend mpf_component_util.ImageReaderMixin , implement get_detections_from_image_reader , and access the image using the passed-in mpf_component_util.ImageReader . During component registration a virtualenv is created for each component.\nThe virtualenv has access to the built-in Python libraries, but does not have access to any third party packages\nthat might be installed on the system. When creating the virtualenv for a setuptools-based component the only packages\nthat get installed are the component itself and any dependencies specified in the setup.cfg\nfile (including their transitive dependencies). When creating the virtualenv for a basic Python component the only\npackage that gets installed is mpf_component_api . mpf_component_api is the package containing the job classes\n(e.g. mpf_component_api.ImageJob , mpf_component_api.VideoJob ) and detection result classes\n(e.g. mpf_component_api.ImageLocation , mpf_component_api.VideoTrack ).",
"title": "How Components Integrate into OpenMPF"
},
{
@@ -787,17 +817,17 @@
},
{
"location": "/Python-Batch-Component-API/index.html#how-to-create-a-setuptools-based-python-component",
- "text": "In this example we create a setuptools-based video component named \"MyComponent\". An example of a setuptools-based\nPython component can be found here . This is the recommended project structure: ComponentName\n\u251c\u2500\u2500 pyproject.toml\n\u251c\u2500\u2500 setup.cfg\n\u251c\u2500\u2500 component_name\n\u2502 \u251c\u2500\u2500 __init__.py\n\u2502 \u2514\u2500\u2500 component_name.py\n\u2514\u2500\u2500 plugin-files\n \u251c\u2500\u2500 descriptor\n \u2502 \u2514\u2500\u2500 descriptor.json\n \u2514\u2500\u2500 wheelhouse # optional\n \u2514\u2500\u2500 my_prebuilt_lib-0.1-py3-none-any.whl 1. Create directory structure: mkdir MyComponent\nmkdir MyComponent/my_component\nmkdir -p MyComponent/plugin-files/descriptor\ntouch MyComponent/pyproject.toml\ntouch MyComponent/setup.cfg\ntouch MyComponent/my_component/__init__.py\ntouch MyComponent/my_component/my_component.py\ntouch MyComponent/plugin-files/descriptor/descriptor.json 2. Create pyproject.toml file in project's top-level directory: pyproject.toml should contain the following content: [build-system]\nrequires = [\"setuptools\"]\nbuild-backend = \"setuptools.build_meta\" 3. Create setup.cfg file in project's top-level directory: Example of a minimal setup.cfg file: [metadata]\nname = MyComponent\nversion = 0.1\n\n[options]\npackages = my_component\ninstall_requires =\n mpf_component_api>=0.1\n mpf_component_util>=0.1\n\n[options.entry_points]\nmpf.exported_component =\n component = my_component.my_component:MyComponent\n\n[options.package_data]\nmy_component=models/* The name parameter defines the distribution name. Typically the distribution name matches the component name. Any dependencies that component requires should be listed in the install_requires field. The Component Executor looks in the entry_points element and uses the mpf.exported_component field to determine\nthe component class. The right hand side of component = should be the dotted module name, followed by a : ,\nfollowed by the name of the class. The general pattern is 'mpf.exported_component': 'component = .:' . In the above example, MyComponent is the class name. The module is listed as my_component.my_component because the my_component \npackage contains the my_component.py file and the my_component.py file contains the MyComponent class. The [options.package_data] section is optional. It should be used when there are non-Python files\nin a package directory that should be included when the component is installed. 4. Create descriptor.json file in MyComponent/plugin-files/descriptor: The batchLibrary field should match the distribution name from the setup.cfg file. In this example the\nfield should be: \"batchLibrary\" : \"MyComponent\" .\nSee the Component Descriptor Reference for details about\nthe descriptor format. 5. Implement your component class: Below is an example of the structure of a simple component. This component extends mpf_component_util.VideoCaptureMixin to simplify the use of mpf_component_util.VideoCapture . You would replace the call to run_detection_algorithm_on_frame with your component-specific logic. 
import logging\n\nimport mpf_component_api as mpf\nimport mpf_component_util as mpf_util\n\nlogger = logging.getLogger('MyComponent')\n\nclass MyComponent(mpf_util.VideoCaptureMixin):\n detection_type = 'FACE'\n\n @staticmethod\n def get_detections_from_video_capture(video_job, video_capture):\n logger.info('[%s] Received video job: %s', video_job.job_name, video_job)\n # If frame index is not required, you can just loop over video_capture directly\n for frame_index, frame in enumerate(video_capture):\n for result_track in run_detection_algorithm_on_frame(frame_index, frame):\n # Alternatively, while iterating through the video, add tracks to a list. When done, return that list.\n yield result_track 6. Optional: Add prebuilt wheel files if not available on PyPi: If your component depends on Python libraries that are not available on PyPi, the libraries can be manually added to\nyour project. The prebuilt libraries must be placed in your project's plugin-files/wheelhouse directory.\nThe prebuilt library names must be listed in your setup.cfg file's install_requires field.\nIf any of the prebuilt libraries have transitive dependencies that are not available on PyPi, then those libraries\nmust also be added to your project's plugin-files/wheelhouse directory. 7. Optional: Create the plugin package for non-Docker deployments: The directory structure of the .tar.gz file will be: MyComponent\n\u251c\u2500\u2500 descriptor\n\u2502 \u2514\u2500\u2500 descriptor.json\n\u2514\u2500\u2500 wheelhouse\n \u251c\u2500\u2500 MyComponent-0.1-py3-none-any.whl\n \u251c\u2500\u2500 mpf_component_api-0.1-py3-none-any.whl\n \u251c\u2500\u2500 mpf_component_util-0.1-py3-none-any.whl\n \u251c\u2500\u2500 numpy-1.18.4-cp38-cp38-manylinux1_x86_64.whl\n \u2514\u2500\u2500 opencv_python-4.2.0.34-cp38-cp38-manylinux1_x86_64.whl To create the plugin packages you can run the build script as follows: ~/openmpf-projects/openmpf-build-tools/build-openmpf-components/build_components.py -psdk ~/openmpf-projects/openmpf-python-component-sdk -c MyComponent The plugin package can also be built manually using the following commands: mkdir -p plugin-packages/MyComponent/wheelhouse\ncp -r MyComponent/plugin-files/* plugin-packages/MyComponent/\npip3 wheel -w plugin-packages/MyComponent/wheelhouse -f ~/mpf-sdk-install/python/wheelhouse -f plugin-packages/MyComponent/wheelhouse ./MyComponent/\ncd plugin-packages\ntar -zcf MyComponent.tar.gz MyComponent 8. Create the component Docker image: See the README .",
+ "text": "In this example we create a setuptools-based video component named \"MyComponent\". An example of a setuptools-based\nPython component can be found here . This is the recommended project structure: ComponentName\n\u251c\u2500\u2500 pyproject.toml\n\u251c\u2500\u2500 setup.cfg\n\u251c\u2500\u2500 component_name\n\u2502 \u251c\u2500\u2500 __init__.py\n\u2502 \u2514\u2500\u2500 component_name.py\n\u2514\u2500\u2500 plugin-files\n \u251c\u2500\u2500 descriptor\n \u2502 \u2514\u2500\u2500 descriptor.json\n \u2514\u2500\u2500 wheelhouse # optional\n \u2514\u2500\u2500 my_prebuilt_lib-0.1-py3-none-any.whl 1. Create directory structure: mkdir MyComponent\nmkdir MyComponent/my_component\nmkdir -p MyComponent/plugin-files/descriptor\ntouch MyComponent/pyproject.toml\ntouch MyComponent/setup.cfg\ntouch MyComponent/my_component/__init__.py\ntouch MyComponent/my_component/my_component.py\ntouch MyComponent/plugin-files/descriptor/descriptor.json 2. Create pyproject.toml file in project's top-level directory: pyproject.toml should contain the following content: [build-system]\nrequires = [\"setuptools\"]\nbuild-backend = \"setuptools.build_meta\" 3. Create setup.cfg file in project's top-level directory: Example of a minimal setup.cfg file: [metadata]\nname = MyComponent\nversion = 0.1\n\n[options]\npackages = my_component\ninstall_requires =\n mpf_component_api>=0.1\n mpf_component_util>=0.1\n\n[options.entry_points]\nmpf.exported_component =\n component = my_component.my_component:MyComponent\n\n[options.package_data]\nmy_component=models/* The name parameter defines the distribution name. Typically the distribution name matches the component name. Any dependencies that component requires should be listed in the install_requires field. The Component Executor looks in the entry_points element and uses the mpf.exported_component field to determine\nthe component class. The right hand side of component = should be the dotted module name, followed by a : ,\nfollowed by the name of the class. The general pattern is 'mpf.exported_component': 'component = .:' . In the above example, MyComponent is the class name. The module is listed as my_component.my_component because the my_component \npackage contains the my_component.py file and the my_component.py file contains the MyComponent class. The [options.package_data] section is optional. It should be used when there are non-Python files\nin a package directory that should be included when the component is installed. 4. Create descriptor.json file in MyComponent/plugin-files/descriptor: The batchLibrary field should match the distribution name from the setup.cfg file. In this example the\nfield should be: \"batchLibrary\" : \"MyComponent\" .\nSee the Component Descriptor Reference for details about\nthe descriptor format. 5. Implement your component class: Below is an example of the structure of a simple component. This component extends mpf_component_util.VideoCaptureMixin to simplify the use of mpf_component_util.VideoCapture . You would replace the call to run_detection_algorithm_on_frame with your component-specific logic. 
import logging\n\nimport mpf_component_api as mpf\nimport mpf_component_util as mpf_util\n\nlogger = logging.getLogger('MyComponent')\n\nclass MyComponent(mpf_util.VideoCaptureMixin):\n\n @staticmethod\n def get_detections_from_video_capture(video_job, video_capture):\n logger.info('[%s] Received video job: %s', video_job.job_name, video_job)\n # If frame index is not required, you can just loop over video_capture directly\n for frame_index, frame in enumerate(video_capture):\n for result_track in run_detection_algorithm_on_frame(frame_index, frame):\n # Alternatively, while iterating through the video, add tracks to a list. When done, return that list.\n yield result_track 6. Optional: Add prebuilt wheel files if not available on PyPi: If your component depends on Python libraries that are not available on PyPi, the libraries can be manually added to\nyour project. The prebuilt libraries must be placed in your project's plugin-files/wheelhouse directory.\nThe prebuilt library names must be listed in your setup.cfg file's install_requires field.\nIf any of the prebuilt libraries have transitive dependencies that are not available on PyPi, then those libraries\nmust also be added to your project's plugin-files/wheelhouse directory. 7. Optional: Create the plugin package for non-Docker deployments: The directory structure of the .tar.gz file will be: MyComponent\n\u251c\u2500\u2500 descriptor\n\u2502 \u2514\u2500\u2500 descriptor.json\n\u2514\u2500\u2500 wheelhouse\n \u251c\u2500\u2500 MyComponent-0.1-py3-none-any.whl\n \u251c\u2500\u2500 mpf_component_api-0.1-py3-none-any.whl\n \u251c\u2500\u2500 mpf_component_util-0.1-py3-none-any.whl\n \u251c\u2500\u2500 numpy-1.18.4-cp38-cp38-manylinux1_x86_64.whl\n \u2514\u2500\u2500 opencv_python-4.2.0.34-cp38-cp38-manylinux1_x86_64.whl To create the plugin packages you can run the build script as follows: ~/openmpf-projects/openmpf-build-tools/build-openmpf-components/build_components.py -psdk ~/openmpf-projects/openmpf-python-component-sdk -c MyComponent The plugin package can also be built manually using the following commands: mkdir -p plugin-packages/MyComponent/wheelhouse\ncp -r MyComponent/plugin-files/* plugin-packages/MyComponent/\npip3 wheel -w plugin-packages/MyComponent/wheelhouse -f ~/mpf-sdk-install/python/wheelhouse -f plugin-packages/MyComponent/wheelhouse ./MyComponent/\ncd plugin-packages\ntar -zcf MyComponent.tar.gz MyComponent 8. Create the component Docker image: See the README .",
"title": "How to Create a Setuptools-based Python Component"
},
{
"location": "/Python-Batch-Component-API/index.html#how-to-create-a-basic-python-component",
- "text": "In this example we create a basic Python component that supports video. An example of a basic Python component can be\nfound here . This is the recommended project structure: ComponentName\n\u251c\u2500\u2500 component_name.py\n\u251c\u2500\u2500 dependency.py\n\u2514\u2500\u2500 descriptor\n \u2514\u2500\u2500 descriptor.json 1. Create directory structure: mkdir MyComponent\nmkdir MyComponent/descriptor\ntouch MyComponent/descriptor/descriptor.json\ntouch MyComponent/my_component.py 2. Create descriptor.json file in MyComponent/descriptor: The batchLibrary field should be the full path to the Python file containing your component class.\nIn this example the field should be: \"batchLibrary\" : \"${MPF_HOME}/plugins/MyComponent/my_component.py\" .\nSee the Component Descriptor Reference for details about\nthe descriptor format. 3. Implement your component class: Below is an example of the structure of a simple component that does not use mpf_component_util.VideoCaptureMixin . You would replace the call to run_detection_algorithm with your component-specific logic. import logging\n\nlogger = logging.getLogger('MyComponent')\n\nclass MyComponent:\n detection_type = 'FACE'\n\n @staticmethod\n def get_detections_from_video(video_job):\n logger.info('[%s] Received video job: %s', video_job.job_name, video_job)\n return run_detection_algorithm(video_job)\n\nEXPORT_MPF_COMPONENT = MyComponent The Component Executor looks for a module-level variable named EXPORT_MPF_COMPONENT to specify which class\nis the component. 4. Optional: Create the plugin package for non-Docker deployments: The directory structure of the .tar.gz file will be: ComponentName\n\u251c\u2500\u2500 component_name.py\n\u251c\u2500\u2500 dependency.py\n\u2514\u2500\u2500 descriptor\n \u2514\u2500\u2500 descriptor.json To create the plugin packages you can run the build script as follows: ~/openmpf-projects/openmpf-build-tools/build-openmpf-components/build_components.py -c MyComponent The plugin package can also be built manually using the following command: tar -zcf MyComponent.tar.gz MyComponent 5. Create the component Docker image: See the README .",
+ "text": "In this example we create a basic Python component that supports video. An example of a basic Python component can be\nfound here . This is the recommended project structure: ComponentName\n\u251c\u2500\u2500 component_name.py\n\u251c\u2500\u2500 dependency.py\n\u2514\u2500\u2500 descriptor\n \u2514\u2500\u2500 descriptor.json 1. Create directory structure: mkdir MyComponent\nmkdir MyComponent/descriptor\ntouch MyComponent/descriptor/descriptor.json\ntouch MyComponent/my_component.py 2. Create descriptor.json file in MyComponent/descriptor: The batchLibrary field should be the full path to the Python file containing your component class.\nIn this example the field should be: \"batchLibrary\" : \"${MPF_HOME}/plugins/MyComponent/my_component.py\" .\nSee the Component Descriptor Reference for details about\nthe descriptor format. 3. Implement your component class: Below is an example of the structure of a simple component that does not use mpf_component_util.VideoCaptureMixin . You would replace the call to run_detection_algorithm with your component-specific logic. import logging\n\nlogger = logging.getLogger('MyComponent')\n\nclass MyComponent:\n\n @staticmethod\n def get_detections_from_video(video_job):\n logger.info('[%s] Received video job: %s', video_job.job_name, video_job)\n return run_detection_algorithm(video_job)\n\nEXPORT_MPF_COMPONENT = MyComponent The Component Executor looks for a module-level variable named EXPORT_MPF_COMPONENT to specify which class\nis the component. 4. Optional: Create the plugin package for non-Docker deployments: The directory structure of the .tar.gz file will be: ComponentName\n\u251c\u2500\u2500 component_name.py\n\u251c\u2500\u2500 dependency.py\n\u2514\u2500\u2500 descriptor\n \u2514\u2500\u2500 descriptor.json To create the plugin packages you can run the build script as follows: ~/openmpf-projects/openmpf-build-tools/build-openmpf-components/build_components.py -c MyComponent The plugin package can also be built manually using the following command: tar -zcf MyComponent.tar.gz MyComponent 5. Create the component Docker image: See the README .",
"title": "How to Create a Basic Python Component"
},
{
"location": "/Python-Batch-Component-API/index.html#api-specification",
- "text": "An OpenMPF Python component is a class that defines one or more of the get_detections_from_* methods and has a detection_type field.",
+ "text": "An OpenMPF Python component is a class that defines one or more of the get_detections_from_* methods.",
"title": "API Specification"
},
{
@@ -805,11 +835,6 @@
"text": "All get_detections_from_* methods are invoked through an instance of the component class. The only parameter passed\nin is an appropriate job object (e.g. mpf_component_api.ImageJob , mpf_component_api.VideoJob ). Since the methods\nare invoked through an instance, instance methods and class methods end up with two arguments, the first is either the\ninstance or the class, respectively. All get_detections_from_* methods can be implemented either as an instance method,\na static method, or a class method.\nFor example: instance method: class MyComponent:\n def get_detections_from_image(self, image_job):\n return [mpf_component_api.ImageLocation(...), ...] static method: class MyComponent:\n @staticmethod\n def get_detections_from_image(image_job):\n return [mpf_component_api.ImageLocation(...), ...] class method: class MyComponent:\n @classmethod\n def get_detections_from_image(cls, image_job):\n return [mpf_component_api.ImageLocation(...), ...] All get_detections_from_* methods must return an iterable of the appropriate detection type\n(e.g. mpf_component_api.ImageLocation , mpf_component_api.VideoTrack ). The return value is normally a list or generator,\nbut any iterable can be used.",
"title": "component.get_detections_from_* methods"
},
- {
- "location": "/Python-Batch-Component-API/index.html#componentdetection_type",
- "text": "str field describing the type of object that is detected by the component. Should be in all CAPS.\nExamples include: FACE , MOTION , PERSON , SPEECH , CLASS (for object classification), or TEXT . Example: class MyComponent:\n detection_type = 'FACE'",
- "title": "component.detection_type"
- },
{
"location": "/Python-Batch-Component-API/index.html#image-api",
"text": "",
@@ -852,7 +877,7 @@
},
{
"location": "/Python-Batch-Component-API/index.html#mpf_component_apivideojob",
- "text": "Class containing data used for detection of objects in a video file. Members: \n \n \n Member \n Data Type \n Description \n \n \n \n \n job_name \n str \n A specific name given to the job by the OpenMPF framework. This value may be used, for example, for logging and debugging purposes. \n \n \n data_uri \n str \n The URI of the input media file to be processed. Currently, this is a file path. For example, \"/opt/mpf/share/remote-media/test-file.avi\". \n \n \n start_frame \n int \n The first frame number (0-based index) of the video that should be processed to look for detections. \n \n \n stop_frame \n int \n The last frame number (0-based index) of the video that should be processed to look for detections. \n \n \n job_properties \n dict[str, str] \n \n Contains a dict with keys and values of type str which represent the property name and the property value. The key corresponds to the property name specified in the component descriptor file described in the Component Descriptor Reference . Values are determined when creating a pipeline or when submitting a job.\n \n Note: The job_properties dict may not contain the full set of job properties. For properties not contained in the dict, the component must use a default value.\n \n \n \n media_properties \n dict[str, str] \n \n Contains a dict with keys and values of type str of metadata about the media associated with the job.\n \n Includes the following key-value pairs:\n \n DURATION : length of video in milliseconds \n FPS : frames per second (averaged for variable frame rate video) \n FRAME_COUNT : the number of frames in the video \n MIME_TYPE : the MIME type of the media \n FRAME_WIDTH : the width of a frame in pixels \n FRAME_HEIGHT : the height of a frame in pixels \n HAS_CONSTANT_FRAME_RATE : set to true if the video has a constant frame rate; otherwise, omitted or set to false if the video has variable frame rate or the type of frame rate cannot be determined \n \n May include the following key-value pair:\n \n ROTATION : A floating point value in the interval [0.0, 360.0) indicating the orientation of the media in degrees in the counter-clockwise direction. In order to view the media in the upright orientation, it must be rotated the given number of degrees in the clockwise direction. \n \n \n \n \n feed_forward_track \n None or mpf_component_api.VideoTrack \n An mpf_component_api.VideoTrack from the previous pipeline stage. Provided when feed forward is enabled. See Feed Forward Guide . \n \n IMPORTANT: FRAME_INTERVAL is a common job property that many components support.\nFor frame intervals greater than 1, the component must look for detections starting with the first\nframe, and then skip frames as specified by the frame interval, until or before it reaches the stop frame.\nFor example, given a start frame of 0, a stop frame of 99, and a frame interval of 2, then the detection component\nmust look for objects in frames numbered 0, 2, 4, 6, ..., 98. Job properties can also be set through environment variables prefixed with MPF_PROP_ . This allows\nusers to set job properties in their docker-compose files. \nThese will take precedence over all other property types (job, algorithm, media, etc). It is not\npossible to change the value of properties set via environment variables at runtime and therefore\nthey should only be used to specify properties that will not change throughout the entire lifetime\nof the service (e.g. Docker container).",
+ "text": "Class containing data used for detection of objects in a video file. Members: \n \n \n Member \n Data Type \n Description \n \n \n \n \n job_name \n str \n A specific name given to the job by the OpenMPF framework. This value may be used, for example, for logging and debugging purposes. \n \n \n data_uri \n str \n The URI of the input media file to be processed. Currently, this is a file path. For example, \"/opt/mpf/share/remote-media/test-file.avi\". \n \n \n start_frame \n int \n The first frame number (0-based index) of the video that should be processed to look for detections. \n \n \n stop_frame \n int \n The last frame number (0-based index) of the video that should be processed to look for detections. \n \n \n job_properties \n dict[str, str] \n \n Contains a dict with keys and values of type str which represent the property name and the property value. The key corresponds to the property name specified in the component descriptor file described in the Component Descriptor Reference . Values are determined when creating a pipeline or when submitting a job.\n \n Note: The job_properties dict may not contain the full set of job properties. For properties not contained in the dict, the component must use a default value.\n \n \n \n media_properties \n dict[str, str] \n \n Contains a dict with keys and values of type str of metadata about the media associated with the job.\n \n Includes the following key-value pairs:\n \n DURATION : length of video in milliseconds \n FPS : frames per second (averaged for variable frame rate video) \n FRAME_COUNT : the number of frames in the video \n MIME_TYPE : the MIME type of the media \n FRAME_WIDTH : the width of a frame in pixels \n FRAME_HEIGHT : the height of a frame in pixels \n HAS_CONSTANT_FRAME_RATE : set to true if the video has a constant frame rate; otherwise, omitted or set to false if the video has variable frame rate or the type of frame rate cannot be determined \n \n May include the following key-value pair:\n \n ROTATION : A floating point value in the interval [0.0, 360.0) indicating the orientation of the media in degrees in the counter-clockwise direction. In order to view the media in the upright orientation, it must be rotated the given number of degrees in the clockwise direction. \n \n \n \n \n feed_forward_track \n None or mpf_component_api.VideoTrack \n An mpf_component_api.VideoTrack from the previous pipeline stage. Provided when feed forward is enabled. See Feed Forward Guide . \n \n IMPORTANT: FRAME_INTERVAL is a common job property that many components support.\nFor frame intervals greater than 1, the component must look for detections starting with the first\nframe, and then skip frames as specified by the frame interval, until or before it reaches the stop frame.\nFor example, given a start frame of 0, a stop frame of 99, and a frame interval of 2, then the detection component\nmust look for objects in frames numbered 0, 2, 4, 6, ..., 98. Job properties can also be set through environment variables prefixed with MPF_PROP_ . This allows\nusers to set job properties in their docker-compose files. \nThese will take precedence over all other property types (job, algorithm, media, etc). It is not\npossible to change the value of properties set via environment variables at runtime and therefore\nthey should only be used to specify properties that will not change throughout the entire lifetime\nof the service (e.g. Docker container).",
"title": "mpf_component_api.VideoJob"
},
{
@@ -882,7 +907,7 @@
},
{
"location": "/Python-Batch-Component-API/index.html#mpf_component_apiaudiojob",
- "text": "Class containing data used for detection of objects in an audio file.\nCurrently, audio files are not logically segmented, so a job will contain the entirety of the audio file. Members: \n \n \n Member \n Data Type \n Description \n \n \n \n \n job_name \n str \n A specific name given to the job by the OpenMPF framework. This value may be used, for example, for logging and debugging purposes. \n \n \n data_uri \n str \n The URI of the input media file to be processed. Currently, this is a file path. For example, \"/opt/mpf/share/remote-media/test-file.mp3\". \n \n \n start_time \n int \n The time (0-based index, in milliseconds) associated with the beginning of the segment of the audio file that should be processed to look for detections. \n \n \n stop_time \n int \n The time (0-based index, in milliseconds) associated with the end of the segment of the audio file that should be processed to look for detections. \n \n \n job_properties \n dict[str, str] \n \n Contains a dict with keys and values of type str which represent the property name and the property value. The key corresponds to the property name specified in the component descriptor file described in the Component Descriptor Reference . Values are determined when creating a pipeline or when submitting a job.\n \n Note: The job_properties dict may not contain the full set of job properties. For properties not contained in the dict, the component must use a default value.\n \n \n \n media_properties \n dict[str, str] \n \n Contains a dict with keys and values of type str of metadata about the media associated with the job.\n \n Includes the following key-value pairs:\n \n DURATION : length of audio file in milliseconds \n MIME_TYPE : the MIME type of the media \n \n \n \n \n feed_forward_track \n None or mpf_component_api.AudioTrack \n An mpf_component_api.AudioTrack from the previous pipeline stage. Provided when feed forward is enabled. See Feed Forward Guide . \n \n Job properties can also be set through environment variables prefixed with MPF_PROP_ . This allows\nusers to set job properties in their docker-compose files. \nThese will take precedence over all other property types (job, algorithm, media, etc). It is not\npossible to change the value of properties set via environment variables at runtime and therefore\nthey should only be used to specify properties that will not change throughout the entire lifetime\nof the service (e.g. Docker container).",
+ "text": "Class containing data used for detection of objects in an audio file.\nCurrently, audio files are not logically segmented, so a job will contain the entirety of the audio file. Members: \n \n \n Member \n Data Type \n Description \n \n \n \n \n job_name \n str \n A specific name given to the job by the OpenMPF framework. This value may be used, for example, for logging and debugging purposes. \n \n \n data_uri \n str \n The URI of the input media file to be processed. Currently, this is a file path. For example, \"/opt/mpf/share/remote-media/test-file.mp3\". \n \n \n start_time \n int \n The time (0-based index, in milliseconds) associated with the beginning of the segment of the audio file that should be processed to look for detections. \n \n \n stop_time \n int \n The time (0-based index, in milliseconds) associated with the end of the segment of the audio file that should be processed to look for detections. \n \n \n job_properties \n dict[str, str] \n \n Contains a dict with keys and values of type str which represent the property name and the property value. The key corresponds to the property name specified in the component descriptor file described in the Component Descriptor Reference . Values are determined when creating a pipeline or when submitting a job.\n \n Note: The job_properties dict may not contain the full set of job properties. For properties not contained in the dict, the component must use a default value.\n \n \n \n media_properties \n dict[str, str] \n \n Contains a dict with keys and values of type str of metadata about the media associated with the job.\n \n Includes the following key-value pairs:\n \n DURATION : length of audio file in milliseconds \n MIME_TYPE : the MIME type of the media \n \n \n \n \n feed_forward_track \n None or mpf_component_api.AudioTrack \n An mpf_component_api.AudioTrack from the previous pipeline stage. Provided when feed forward is enabled. See Feed Forward Guide . \n \n Job properties can also be set through environment variables prefixed with MPF_PROP_ . This allows\nusers to set job properties in their docker-compose files. \nThese will take precedence over all other property types (job, algorithm, media, etc). It is not\npossible to change the value of properties set via environment variables at runtime and therefore\nthey should only be used to specify properties that will not change throughout the entire lifetime\nof the service (e.g. Docker container).",
"title": "mpf_component_api.AudioJob"
},
{
@@ -917,7 +942,7 @@
},
{
"location": "/Python-Batch-Component-API/index.html#python-component-build-environment",
- "text": "All Python components must work with CPython 3.8.10. Also, Python components \nmust work with the Linux version that is used by the OpenMPF Component \nExecutable. At this writing, OpenMPF runs on \nUbuntu 20.04 (kernel version 5.13.0-30). Pure Python code should work on any \nOS, but incompatibility issues can arise when using Python libraries that \ninclude compiled extension modules. Python libraries are typically distributed \nas wheel files. The wheel format requires that the file name follows the pattern \nof ----.whl . -- are called compatibility tags . For example, mpf_component_api is pure Python, so the name of its wheel file is mpf_component_api-0.1-py3-none-any.whl . py3 means it will work with any \nPython 3 implementation because it does not use any implementation-specific \nfeatures. none means that it does not use the Python ABI. any means it will \nwork on any platform. The following combinations of compatibility tags are supported: cp38-cp38-manylinux2014_x86_64 cp38-cp38-manylinux2010_x86_64 cp38-cp38-manylinux1_x86_64 cp38-cp38-linux_x86_64 cp38-abi3-manylinux2014_x86_64 cp38-abi3-manylinux2010_x86_64 cp38-abi3-manylinux1_x86_64 cp38-abi3-linux_x86_64 cp38-none-manylinux2014_x86_64 cp38-none-manylinux2010_x86_64 cp38-none-manylinux1_x86_64 cp38-none-linux_x86_64 cp37-abi3-manylinux2014_x86_64 cp37-abi3-manylinux2010_x86_64 cp37-abi3-manylinux1_x86_64 cp37-abi3-linux_x86_64 cp36-abi3-manylinux2014_x86_64 cp36-abi3-manylinux2010_x86_64 cp36-abi3-manylinux1_x86_64 cp36-abi3-linux_x86_64 cp35-abi3-manylinux2014_x86_64 cp35-abi3-manylinux2010_x86_64 cp35-abi3-manylinux1_x86_64 cp35-abi3-linux_x86_64 cp34-abi3-manylinux2014_x86_64 cp34-abi3-manylinux2010_x86_64 cp34-abi3-manylinux1_x86_64 cp34-abi3-linux_x86_64 cp33-abi3-manylinux2014_x86_64 cp33-abi3-manylinux2010_x86_64 cp33-abi3-manylinux1_x86_64 cp33-abi3-linux_x86_64 cp32-abi3-manylinux2014_x86_64 cp32-abi3-manylinux2010_x86_64 cp32-abi3-manylinux1_x86_64 cp32-abi3-linux_x86_64 py38-none-manylinux2014_x86_64 py38-none-manylinux2010_x86_64 py38-none-manylinux1_x86_64 py38-none-linux_x86_64 py3-none-manylinux2014_x86_64 py3-none-manylinux2010_x86_64 py3-none-manylinux1_x86_64 py3-none-linux_x86_64 py37-none-manylinux2014_x86_64 py37-none-manylinux2010_x86_64 py37-none-manylinux1_x86_64 py37-none-linux_x86_64 py36-none-manylinux2014_x86_64 py36-none-manylinux2010_x86_64 py36-none-manylinux1_x86_64 py36-none-linux_x86_64 py35-none-manylinux2014_x86_64 py35-none-manylinux2010_x86_64 py35-none-manylinux1_x86_64 py35-none-linux_x86_64 py34-none-manylinux2014_x86_64 py34-none-manylinux2010_x86_64 py34-none-manylinux1_x86_64 py34-none-linux_x86_64 py33-none-manylinux2014_x86_64 py33-none-manylinux2010_x86_64 py33-none-manylinux1_x86_64 py33-none-linux_x86_64 py32-none-manylinux2014_x86_64 py32-none-manylinux2010_x86_64 py32-none-manylinux1_x86_64 py32-none-linux_x86_64 py31-none-manylinux2014_x86_64 py31-none-manylinux2010_x86_64 py31-none-manylinux1_x86_64 py31-none-linux_x86_64 py30-none-manylinux2014_x86_64 py30-none-manylinux2010_x86_64 py30-none-manylinux1_x86_64 py30-none-linux_x86_64 cp38-none-any py38-none-any py3-none-any py37-none-any py36-none-any py35-none-any py34-none-any py33-none-any py32-none-any py31-none-any py30-none-any The list above was generated with the following command: python3 -c 'import pip._internal.pep425tags as tags; print(\"\\n\".join(str(t) for t in tags.get_supported()))' Components should be supplied as a tar file, which includes not only the component library, but any other 
libraries or\nfiles needed for execution. This includes all other non-standard libraries used by the component\n(aside from the standard Python libraries), and any configuration or data files.",
+ "text": "All Python components must work with CPython 3.8.10. Also, Python components\nmust work with the Linux version that is used by the OpenMPF Component\nExecutable. At this writing, OpenMPF runs on\nUbuntu 20.04 (kernel version 5.13.0-30). Pure Python code should work on any\nOS, but incompatibility issues can arise when using Python libraries that\ninclude compiled extension modules. Python libraries are typically distributed\nas wheel files. The wheel format requires that the file name follows the pattern\nof ----.whl . -- are called compatibility tags . For example, mpf_component_api is pure Python, so the name of its wheel file is mpf_component_api-0.1-py3-none-any.whl . py3 means it will work with any\nPython 3 implementation because it does not use any implementation-specific\nfeatures. none means that it does not use the Python ABI. any means it will\nwork on any platform. The following combinations of compatibility tags are supported: cp38-cp38-manylinux2014_x86_64 cp38-cp38-manylinux2010_x86_64 cp38-cp38-manylinux1_x86_64 cp38-cp38-linux_x86_64 cp38-abi3-manylinux2014_x86_64 cp38-abi3-manylinux2010_x86_64 cp38-abi3-manylinux1_x86_64 cp38-abi3-linux_x86_64 cp38-none-manylinux2014_x86_64 cp38-none-manylinux2010_x86_64 cp38-none-manylinux1_x86_64 cp38-none-linux_x86_64 cp37-abi3-manylinux2014_x86_64 cp37-abi3-manylinux2010_x86_64 cp37-abi3-manylinux1_x86_64 cp37-abi3-linux_x86_64 cp36-abi3-manylinux2014_x86_64 cp36-abi3-manylinux2010_x86_64 cp36-abi3-manylinux1_x86_64 cp36-abi3-linux_x86_64 cp35-abi3-manylinux2014_x86_64 cp35-abi3-manylinux2010_x86_64 cp35-abi3-manylinux1_x86_64 cp35-abi3-linux_x86_64 cp34-abi3-manylinux2014_x86_64 cp34-abi3-manylinux2010_x86_64 cp34-abi3-manylinux1_x86_64 cp34-abi3-linux_x86_64 cp33-abi3-manylinux2014_x86_64 cp33-abi3-manylinux2010_x86_64 cp33-abi3-manylinux1_x86_64 cp33-abi3-linux_x86_64 cp32-abi3-manylinux2014_x86_64 cp32-abi3-manylinux2010_x86_64 cp32-abi3-manylinux1_x86_64 cp32-abi3-linux_x86_64 py38-none-manylinux2014_x86_64 py38-none-manylinux2010_x86_64 py38-none-manylinux1_x86_64 py38-none-linux_x86_64 py3-none-manylinux2014_x86_64 py3-none-manylinux2010_x86_64 py3-none-manylinux1_x86_64 py3-none-linux_x86_64 py37-none-manylinux2014_x86_64 py37-none-manylinux2010_x86_64 py37-none-manylinux1_x86_64 py37-none-linux_x86_64 py36-none-manylinux2014_x86_64 py36-none-manylinux2010_x86_64 py36-none-manylinux1_x86_64 py36-none-linux_x86_64 py35-none-manylinux2014_x86_64 py35-none-manylinux2010_x86_64 py35-none-manylinux1_x86_64 py35-none-linux_x86_64 py34-none-manylinux2014_x86_64 py34-none-manylinux2010_x86_64 py34-none-manylinux1_x86_64 py34-none-linux_x86_64 py33-none-manylinux2014_x86_64 py33-none-manylinux2010_x86_64 py33-none-manylinux1_x86_64 py33-none-linux_x86_64 py32-none-manylinux2014_x86_64 py32-none-manylinux2010_x86_64 py32-none-manylinux1_x86_64 py32-none-linux_x86_64 py31-none-manylinux2014_x86_64 py31-none-manylinux2010_x86_64 py31-none-manylinux1_x86_64 py31-none-linux_x86_64 py30-none-manylinux2014_x86_64 py30-none-manylinux2010_x86_64 py30-none-manylinux1_x86_64 py30-none-linux_x86_64 cp38-none-any py38-none-any py3-none-any py37-none-any py36-none-any py35-none-any py34-none-any py33-none-any py32-none-any py31-none-any py30-none-any The list above was generated with the following command: python3 -c 'import pip._internal.pep425tags as tags; print(\"\\n\".join(str(t) for t in tags.get_supported()))' Components should be supplied as a tar file, which includes not only the component library, but any other libraries 
or\nfiles needed for execution. This includes all other non-standard libraries used by the component\n(aside from the standard Python libraries), and any configuration or data files.",
"title": "Python Component Build Environment"
},
{
@@ -937,12 +962,12 @@
},
{
"location": "/Python-Batch-Component-API/index.html#logging",
- "text": "It recommended that components use Python's built-in logging module. The component should import logging and call logging.getLogger('') to get a logger instance. \nThe component should not configure logging itself. The Component Executor will configure the logging module for the component. The logger will write log messages to standard error and ${MPF_LOG_PATH}/${THIS_MPF_NODE}/log/.log . Note that multiple instances of the \nsame component can log to the same file. Also, logging content can span multiple lines. The following log levels are supported: FATAL, ERROR, WARN, INFO, DEBUG . \nThe LOG_LEVEL environment variable can be set to one of the log levels to change the logging \nverbosity. When LOG_LEVEL is absent, INFO is used. The format of the log messages is: DATE TIME LEVEL [SOURCE_FILE:LINE_NUMBER] - MESSAGE For example: 2018-05-03 14:41:11,703 INFO [test_component.py:44] - Logged message",
+ "text": "It recommended that components use Python's built-in logging module. The component should import logging and call logging.getLogger('') to get a logger instance.\nThe component should not configure logging itself. The Component Executor will configure the logging module for the component. The logger will write log messages to standard error and ${MPF_LOG_PATH}/${THIS_MPF_NODE}/log/.log . Note that multiple instances of the\nsame component can log to the same file. Also, logging content can span multiple lines. The following log levels are supported: FATAL, ERROR, WARN, INFO, DEBUG .\nThe LOG_LEVEL environment variable can be set to one of the log levels to change the logging\nverbosity. When LOG_LEVEL is absent, INFO is used. The format of the log messages is: DATE TIME LEVEL [SOURCE_FILE:LINE_NUMBER] - MESSAGE For example: 2018-05-03 14:41:11,703 INFO [test_component.py:44] - Logged message",
"title": "Logging"
},
{
"location": "/Java-Batch-Component-API/index.html",
- "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nAPI Overview\n\n\nIn OpenMPF, a \ncomponent\n is a plugin that receives jobs (containing media), processes that media, and returns results.\n\n\nThe OpenMPF Batch Component API currently supports the development of \ndetection components\n, which are used to detect objects in image, video, audio, or other (generic) files that reside on disk.\n\n\nUsing this API, detection components can be built to provide:\n\n\n\n\nDetection (Localizing an object)\n\n\nTracking (Localizing an object across multiple frames)\n\n\nClassification (Detecting the type of object and optionally localizing that object)\n\n\nTranscription (Detecting speech and transcribing it into text)\n\n\n\n\nHow Components Integrate into OpenMPF\n\n\nComponents are integrated into OpenMPF through the use of OpenMPF's \nComponent Executor\n. Developers create component libraries that encapsulate the component detection logic. Each instance of the Component Executor loads one of these libraries and uses it to service job requests sent by the OpenMPF Workflow Manager (WFM).\n\n\nThe Component Executor:\n\n\n\n\nReceives and parses job requests from the WFM\n\n\nInvokes methods on the component library to obtain detection results\n\n\nPopulates and sends the respective responses to the WFM\n\n\n\n\nThe basic pseudocode for the Component Executor is as follows:\n\n\ncomponent.setRunDirectory(...)\ncomponent.init()\nwhile (true) {\n job = ReceiveJob()\n if (component.supports(job.dataType))\n component.getDetections(...) // Component does the work here\n }\ncomponent.close()\n\n\n\nEach instance of a Component Executor runs as a separate process.\n\n\nThe Component Executor receives and parses requests from the WFM, invokes methods on the Component Logic to get detection objects, and subsequently populates responses with the component output and sends them to the WFM.\n\n\nA component developer implements a detection component by extending \nMPFDetectionComponentBase\n.\n\n\nAs an alternative to extending \nMPFDetectionComponentBase\n directly, a developer may extend one of several convenience adapter classes provided by OpenMPF. See \nConvenience Adapters\n for more information.\n\n\nGetting Started\n\n\nThe quickest way to get started with the Java Batch Component API is to first read the \nOpenMPF Component API Overview\n and then \nreview the source\n for example OpenMPF Java detection components.\n\n\nDetection components are implemented by:\n\n\n\n\nExtending \nMPFDetectionComponentBase\n.\n\n\nBuilding the component into a jar. (See \nHelloWorldComponent pom.xml\n).\n\n\nCreating a component Docker image. (See the \nREADME\n).\n\n\n\n\nAPI Specification\n\n\nThe figure below presents a high-level component diagram of the Java Batch Component API:\n\n\n\n\nThe Node Manager is only used in a non-Docker deployment. 
In a Docker deployment the Component Executor is started by the Docker container itself.\n\n\nThe API consists of \nComponent Interfaces\n, which provide interfaces and abstract classes for developing components; \nJob Definitions\n, which define the work to be performed by a component; \nJob Results\n, which define the results generated by the component; and \nComponent Adapters\n, which provide default implementations of several of the \nMPFDetectionComponentInterface\n methods (See the \nMPFAudioAndVideoDetectionComponentAdapter\n for an example; \nTODO: implement those shown in the diagram\n). In the future, the API will also include \nComponent Utilities\n, which perform actions such as image flipping, rotation, and cropping.\n\n\nComponent Interfaces\n\n\n\n\nMPFComponentInterface\n - Interface for all Java components that perform batch processing.\n\n\nMPFComponentBase\n - An abstract baseline for components. Provides default implementations for \nMPFComponentInterface\n.\n\n\n\n\nDetection Component Interfaces\n\n\n\n\nMPFDetectionComponentInterface\n - Baseline interface for detection components.\n\n\nMPFDetectionComponentBase\n - An abstract baseline for detection components. Provides default implementations for \nMPFDetectionComponentInterface\n.\n\n\n\n\nJob Definitions\n\n\nThe following classes define the details about a specific job (work unit):\n\n\n\n\nMPFImageJob\n extends \nMPFJob\n\n\nMPFVideoJob\n extends \nMPFJob\n\n\nMPFAudioJob\n extends \nMPFJob\n\n\nMPFGenericJob\n extends \nMPFJob\n\n\n\n\nJob Results\n\n\nThe following classes define detection results:\n\n\n\n\nMPFImageLocation\n\n\nMPFVideoTrack\n\n\nMPFAudioTrack\n\n\nMPFGenericTrack\n\n\n\n\nComponent Interface\n\n\nThe OpenMPF Component class structure consists of:\n\n\n\n\nMPFComponentInterface\n - Interface for all OpenMPF Java components that perform batch processing.\n\n\nMPFComponentBase\n - An abstract baseline for components. Provides default implementations for \nMPFComponentInterface\n.\n\n\n\n\n\n\nIMPORTANT:\n This interface and abstract class should not be directly implemented because no mechanism exists for launching components based off of it. Instead, it defines the contract that components must follow. Currently, the only supported type of batch component is \"DETECTION\". Those components should extend \nMPFDetectionComponentBase\n\n\n\n\nSee the latest source here.\n\n\nsetRunDirectory(String)\n\n\nSets the value to the full path of the parent folder above where the component is installed.\n\n\n\n\nMethod Definition:\n\n\n\n\npublic void setRunDirectory(String runDirectory);\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nrunDirectory\n\n\nString\n\n\nFull path of the parent folder above where the component is installed.\n\n\n\n\n\n\n\n\n\n\nReturns: none\n\n\n\n\n\n\nIMPORTANT:\n \nsetRunDirectory\n is called by the Component Executor to set the correct path. It is not necessary to call this method in your component implementation.\n\n\n\n\ngetRunDirectory()\n\n\nReturns the full path of the parent folder above where the component is installed.\n\n\n\n\nMethod Definition:\n\n\n\n\npublic String getRunDirectory()\n\n\n\n\n\n\n\nParameters: none\n\n\n\n\n\n\nReturns: (\nString\n) Full path of the parent folder above where the component is installed.\n\n\n\n\n\n\ninit()\n\n\nPerforms any necessary startup tasks for the component. 
This will be executed once by the Component Executor, on component startup, before the first job, after \nsetRunDirectory\n.\n\n\n\n\nMethod Definition:\n\n\n\n\npublic void init()\n\n\n\n\n\n\n\nParameters: none\n\n\n\n\n\n\nReturns: none\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\npublic void init() {\n // Setup logger, Load data models, etc.\n}\n\n\n\nclose()\n\n\nPerforms any necessary shutdown tasks for the component. This will be executed once by the Component Executor, on component shutdown, usually after the last job.\n\n\n\n\nMethod Definition:\n\n\n\n\npublic void close()\n\n\n\n\n\n\n\nParameters: none\n\n\n\n\n\n\nReturns: none\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\npublic void close() {\n // Close file handlers, etc.\n}\n\n\n\ngetComponentType()\n\n\nAllows the Component API to determine the component \"type.\" Currently \nDETECTION\n is the only supported component type.\n\n\n\n\nMethod Definition:\n\n\n\n\npublic MPFComponentType getComponentType()\n\n\n\n\n\n\n\nParameters: none\n\n\n\n\n\n\nReturns: (\nMPFComponentType\n) Currently, \nDETECTION\n is the only supported return value.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\npublic MPFComponentType getComponentType() {\n return MPFComponentType.DETECTION;\n}\n\n\n\nDetection Component Interface\n\n\nThe \nMPFDetectionComponentInterface\n must be utilized by all OpenMPF Java detection components that perform batch processing.\n\n\nEvery batch detection component must define a \ncomponent\n class which implements the MPFComponentInterface. This is typically performed by extending \nMPFDetectionComponentBase\n, which extends \nMPFComponentBase\n and implements \nMPFDetectionComponentInterface\n.\n\n\nTo designate the component class, every batch detection component should include an applicationContext.xml which defines the \ncomponent\n bean. The \ncomponent\n bean class must implement \nMPFDetectionComponentInterface\n.\n\n\n\n\nIMPORTANT:\n Each batch detection component must implement all of the \ngetDetections()\n methods or extend from a superclass which provides implementations for them (see \nconvenience adapters\n).\n\n\nIf your component does not support a particular data type, it should simply:\n\n\nthrow new MPFComponentDetectionError(MPFDetectionError.MPF_UNSUPPORTED_DATA_TYPE);\n\n\n\n\nConvenience Adapters\n\n\nAs an alternative to extending \nMPFDetectionComponentBase\n directly, developers may extend a convenience adapter classes provided by OpenMPF.\n\n\nThese adapters provide default implementations of several methods in \nMPFDetectionComponentInterface\n and ensure that the component's logic properly extends from the Component API. This enables developers to concentrate on implementation of the detection algorithm.\n\n\nThe following adapter is provided:\n\n\n\n\nAudio And Video Detection Component Adapter (\nsource\n)\n\n\n\n\n\n\nExample: Using Adaptors to Provide Simple AudioVisual Handling:\n\nMany components designed to work on audio files, such as speech detection, are relevant to video files as well. Some of the tools for these components, however, only function on audio files (such as .wav, .mp3) and not video files (.avi, .mov, etc).\n\n\nThe \nMPFAudioAndVideoDetectionComponentAdapter\n adapter class implements the \ngetDetections(MPFVideoJob)\n method by translating the video request into an audio request. It builds a temporary audio file by ripping the audio from the video media input, translates the \nMPFVideoJob\n into an \nMPFAudioJob\n, and invokes \ngetDetections(MPFAudioJob)\n on the generated file. 
Once processing is done, the adapter translates the \nMPFAudioTrack\n list into an \nMPFVideoTrack\n list.\n\n\nSince only audio and video files are relevant to this adapter, it provides a default implementation of the \ngetDetections(MPFImageJob)\n method which throws \nnew MPFComponentDetectionError(MPFDetectionError.MPF_UNSUPPORTED_DATA_TYPE)\n.\n\n\nThe Sphinx speech detection component uses this adapter to run Sphinx speech detection on video files. Other components that need to process video files as audio may also use the adapter.\n\n\n\n\nsupports(MPFDataType)\n\n\nReturns the supported data types of the component.\n\n\n\n\nMethod Definition:\n\n\n\n\npublic boolean supports(MPFDataType dataType)\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\ndataType\n\n\nMPFDataType\n\n\nReturn true if the component supports IMAGE, VIDEO, AUDIO, and/or UNKNOWN (generic) processing.\n\n\n\n\n\n\n\n\n\n\n\n\nReturns: (\nboolean\n) True if the component supports the data type, otherwise false.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\n// Sample Component that supports only image and video files\npublic boolean supports(MPFDataType dataType) {\n return dataType == MPFDataType.IMAGE || dataType == MPFDataType.VIDEO;\n}\n\n\n\ngetDetectionType()\n\n\nReturns the type of object detected by the component.\n\n\n\n\nMethod Definition:\n\n\n\n\npublic String getDetectionType()\n\n\n\n\n\n\n\nParameters: none\n\n\n\n\n\n\nReturns: (\nString\n) The type of object detected by the component. Should be in all CAPS. Examples include: \nFACE\n, \nMOTION\n, \nPERSON\n, \nSPEECH\n, \nCLASS\n (for object classification), or \nTEXT\n.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\npublic String getDetectionType() {\n return \"FACE\";\n}\n\n\n\ngetDetections(MPFImageJob)\n\n\nUsed to detect objects in image files. The MPFImageJob class contains the URI specifying the location of the image file.\n\n\nCurrently, the dataUri is always a local file path. For example, \"/opt/mpf/share/remote-media/test-file.jpg\". This is because all media is copied to the OpenMPF server before the job is executed.\n\n\n\n\nMethod Definition:\n\n\n\n\npublic List getDetections(MPFImageJob job)\n throws MPFComponentDetectionError;\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njob\n\n\nMPFImageJob\n\n\nClass containing details about the work to be performed. See \nMPFImageJob\n\n\n\n\n\n\n\n\n\n\n\n\nReturns: (\nList\n) The \nMPFImageLocation\n data for each detected object.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\npublic List getDetections(MPFImageJob job)\n throws MPFComponentDetectionError {\n // Component logic to generate image locations\n}\n\n\n\ngetDetections(MPFVideoJob)\n\n\nUsed to detect objects in a video.\n\n\nPrior to being sent to the component, videos are split into logical \"segments\" of video data and each segment (containing a range of frames) is assigned to a different job. Components are not guaranteed to receive requests in any order. For example, the first request processed by a component might receive a request for frames 300-399 of a Video A, while the next request may cover frames 900-999 of a Video B.\n\n\n\n\nMethod Definition:\n\n\n\n\npublic List getDetections(MPFVideoJob job)\n throws MPFComponentDetectionError;\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njob\n\n\nMPFVideoJob\n\n\nClass containing details about the work to be performed. 
See \nMPFVideoJob\n\n\n\n\n\n\n\n\n\n\n\n\nReturns: (\nList\n) The \nMPFVideoTrack\n data for each detected object.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\npublic List getDetections(MPFVideoJob job)\n throws MPFComponentDetectionError {\n // Component logic to generate video tracks\n}\n\n\n\ngetDetections(MPFAudioJob)\n\n\nUsed to detect objects in audio files. Currently, audio files are not logically segmented, so a job will contain the entirety of the audio file.\n\n\n\n\nMethod Definition:\n\n\n\n\npublic List getDetections(MPFAudioJob job)\n throws MPFComponentDetectionError;\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njob\n\n\nMPFAudioJob\n\n\nClass containing details about the work to be performed. See \nMPFAudioJob\n\n\n\n\n\n\n\n\n\n\n\n\nReturns: (\nList\n) The \nMPFAudioTrack\n data for each detected object.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\npublic List getDetections(MPFAudioJob job)\n throws MPFComponentDetectionError {\n // Component logic to generate audio tracks\n}\n\n\n\ngetDetections(MPFGenericJob)\n\n\nUsed to detect objects in files that aren't video, image, or audio files. Such files are of the UNKNOWN type and handled generically. These files are not logically segmented, so a job will contain the entirety of the file.\n\n\n\n\nMethod Definition:\n\n\n\n\npublic List getDetections(MPFGenericJob job)\n throws MPFComponentDetectionError;\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njob\n\n\nMPFGenericJob\n\n\nClass containing details about the work to be performed. See \nMPFGenericJob\n\n\n\n\n\n\n\n\n\n\n\n\nReturns: (\nList\n) The \nMPFGenericTrack\n data for each detected object.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\npublic List getDetections(MPFGenericJob job)\n throws MPFComponentDetectionError {\n // Component logic to generate generic tracks\n}\n\n\n\nMPFComponentDetectionError\n\n\nAn exception that occurs in a component. The exception must contain a reference to a valid \nMPFDetectionError\n.\n\n\n\n\nConstructor(s):\n\n\n\n\npublic MPFComponentDetectionError (\n MPFDetectionError error,\n String msg,\n Exception e\n)\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nerror\n\n\nMPFDetectionError\n\n\nThe type of error generated by the component. See \nMPFDetectionError\n.\n\n\n\n\n\n\nmsg\n\n\nString\n\n\nThe detail message (which is saved for later retrieval by the \nThrowable.getMessage()\n method).\n\n\n\n\n\n\ne\n\n\nException\n\n\nThe cause (which is saved for later retrieval by the \nThrowable.getCause()\n method). A null value is permitted.\n\n\n\n\n\n\n\n\nDetection Job Classes\n\n\nThe following classes contain details about a specific job (work unit):\n\n\n\n\nMPFImageJob\n extends \nMPFJob\n\n\nMPFVideoJob\n extends \nMPFJob\n\n\nMPFAudioJob\n extends \nMPFJob\n\n\nMPFGenericJob\n extends \nMPFJob\n\n\n\n\nThe following classes define detection results:\n\n\n\n\nMPFImageLocation\n\n\nMPFVideoTrack\n\n\nMPFAudioTrack\n\n\nMPFGenericTrack\n\n\n\n\nMPFJob\n\n\nClass containing data used for detection of objects.\n\n\n\n\nConstructor(s):\n\n\n\n\nprotected MPFJob(\n String jobName,\n String dataUri,\n final Map jobProperties,\n final Map mediaProperties\n)\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njobName \n\n\nString\n\n\nA specific name given to the job by the OpenMPF Framework. 
This value may be used, for example, for logging and debugging purposes.\n\n\n\n\n\n\ndataUri \n\n\nString\n\n\nThe URI of the input media file to be processed. Currently, this is a file path. For example, \"/opt/mpf/share/remote-media/test-file.avi\".\n\n\n\n\n\n\njobProperties \n\n\nMap\n\n\nThe key corresponds to the property name specified in the component descriptor file described in \"Installing and Registering a Component\". Values are determined by an end user when creating a pipeline. \n Note: Only those property values specified by the user will be in the jobProperties map; for properties not contained in the map, the component must use a default value.\n\n\n\n\n\n\nmediaProperties \n\n\nMap\n\n\nMetadata about the media associated with the job. The key is the property name and value is the property value. The entries in the map vary depend on the job type. They are defined in the specific Job's API description.\n\n\n\n\n\n\n\n\nJob properties can also be set through environment variables prefixed with \nMPF_PROP_\n. This allows\nusers to set job properties in their\n\ndocker-compose files.\n\nThese will take precedence over all other property types (job, algorithm, media, etc). It is not\npossible to change the value of properties set via environment variables at runtime and therefore\nthey should only be used to specify properties that will not change throughout the entire lifetime\nof the service (e.g. Docker container).\n\n\nMPFImageJob\n\n\nExtends \nMPFJob\n\n\nClass containing data used for detection of objects in image files.\n\n\n\n\nConstructor(s):\n\n\n\n\npublic MPFImageJob(\n String jobName,\n String dataUri,\n final Map jobProperties,\n final Map mediaProperties)\n\n\n\npublic MPFImageJob(\n String jobName,\n String dataUri,\n final Map jobProperties,\n final Map mediaProperties,\n MPFImageLocation location)\n\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njobName\n\n \nString\n\n \nSee \nMPFJob.jobName\n for description.\n\n \n\n \n\n \ndataUri\n\n \nString\n\n \nSee \nMPFJob.dataUri\n for description.\n\n \n\n \n\n \njobProperties\n\n \nMap\n\n \nSee \nMPFJob.jobProperties\n for description.\n\n \n\n \n\n \nmediaProperties\n\n \nMap\n\n \n\n See \nMPFJob.mediaProperties\n for description.\n \n\n Includes the following key-value pairs:\n \n\n \nMIME_TYPE\n : the MIME type of the media\n\n \nFRAME_WIDTH\n : the width of the image in pixels\n\n \nFRAME_HEIGHT\n : the height of the image in pixels\n\n \n\n May include the following key-value pairs:\n \n\n \nROTATION\n : A floating point value in the interval \n[0.0, 360.0)\n indicating the orientation of the media in degrees in the counter-clockwise direction. In order to view the media in the upright orientation, it must be rotated the given number of degrees in the clockwise direction.\n\n \nHORIZONTAL_FLIP\n : true if the image is mirrored across the Y-axis, otherwise false\n\n \nEXIF_ORIENTATION\n : the standard EXIF orientation tag; a value between 1 and 8\n\n \n\n \n\n \n\n \n\n \nlocation\n\n \nMPFImageLocation\n\n \nAn \nMPFImageLocation\n from the previous pipeline stage. Provided when feed forward is enabled. 
See \nFeed Forward Guide\n.\n\n \n\n \n\n\n\n\n\nMPFVideoJob\n\n\nExtends \nMPFJob\n\n\nClass containing data used for detection of objects in video files.\n\n\n\n\nConstructor(s):\n\n\n\n\npublic MPFVideoJob(\n String jobName,\n String dataUri,\n final Map jobProperties,\n final Map mediaProperties,\n int startFrame,\n int stopFrame)\n\n\n\npublic MPFVideoJob(\n String jobName,\n String dataUri,\n final Map jobProperties,\n final Map mediaProperties,\n int startFrame,\n int stopFrame,\n MPFVideoTrack track)\n\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njobName\n\n \nString\n\n \nSee \nMPFJob.jobName\n for description.\n\n \n\n \n\n \ndataUri\n\n \nString\n\n \nSee \nMPFJob.dataUri\n for description.\n\n \n\n \n\n \nstartFrame\n\n \nint\n\n \nThe first frame number (0-based index) of the video that should be processed to look for detections.\n\n \n\n \n\n \nstopFrame\n\n \nint\n\n \nThe last frame number (0-based index) of the video that should be processed to look for detections.\n\n \n \n \n\n \njobProperties\n\n \nMap\n\n \nSee \nMPFJob.jobProperties\n for description.\n\n \n\n \n\n \nmediaProperties\n\n \nMap\n\n \n\n See \nMPFJob.mediaProperties\n for description.\n \n\n Includes the following key-value pairs:\n \n\n \nDURATION\n : length of video in milliseconds\n\n \nFPS\n : frames per second (averaged for variable frame rate video)\n\n \nFRAME_COUNT\n : the number of frames in the video\n\n \nMIME_TYPE\n : the MIME type of the media\n\n \nFRAME_WIDTH\n : the width of a frame in pixels\n\n \nFRAME_HEIGHT\n : the height of a frame in pixels\n\n \nHAS_CONSTANT_FRAME_RATE\n : set to true if the video has a constant frame rate; otherwise, omitted or set to false if the video has variable frame rate or the type of frame rate cannot be determined\n\n \n\n May include the following key-value pair:\n \n\n \nROTATION\n : A floating point value in the interval \n[0.0, 360.0)\n indicating the orientation of the media in degrees in the counter-clockwise direction. In order to view the media in the upright orientation, it must be rotated the given number of degrees in the clockwise direction.\n\n \n\n \n\n \n\n \n\n \ntrack\n\n \nMPFVideoTrack\n\n \nAn \nMPFVideoTrack\n from the previous pipeline stage. Provided when feed forward is enabled. See \nFeed Forward Guide\n.\n\n \n\n \n\n\n\n\n\n\n\nIMPORTANT:\n \nFRAME_INTERVAL\n is a common job property that many components support. For frame intervals greater than 1, the component must look for detections starting with the first frame, and then skip frames as specified by the frame interval, until or before it reaches the stop frame. 
For example, given a start frame of 0, a stop frame of 99, and a frame interval of 2, then the detection component must look for objects in frames numbered 0, 2, 4, 6, ..., 98.\n\n\n\n\nMPFAudioJob\n\n\nExtends \nMPFJob\n\n\nClass containing data used for detection of objects in audio files.\n\n\n\n\nConstructor(s):\n\n\n\n\npublic MPFAudioJob(\n String jobName,\n String dataUri,\n final Map jobProperties,\n final Map mediaProperties,\n int startTime,\n int stopTime)\n\n\n\npublic MPFAudioJob(\n String jobName,\n String dataUri,\n final Map jobProperties,\n final Map mediaProperties,\n int startTime,\n int stopTime,\n MPFAudioTrack track)\n\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njobName\n\n \nString\n\n \nSee \nMPFJob.jobName\n for description.\n\n \n\n \n\n \ndataUri\n\n \nString\n\n \nSee \nMPFJob.dataUri\n for description.\n\n \n\n \n\n \nstartTime\n\n \nint\n\n \nThe time (0-based index, in ms) associated with the beginning of the segment of the audio file that should be processed to look for detections.\n\n \n\n \n\n \nstopTime\n\n \nint\n\n \nThe time (0-based index, in ms) associated with the end of the segment of the audio file that should be processed to look for detections.\n\n \n \n \n\n \njobProperties\n\n \nMap\n\n \nSee \nMPFJob.jobProperties\n for description.\n\n \n\n \n\n \nmediaProperties\n\n \nMap\n\n \n\n See \nMPFJob.mediaProperties\n for description.\n \n\n Includes the following key-value pairs:\n \n\n \nDURATION\n : length of audio file in milliseconds\n\n \nMIME_TYPE\n : the MIME type of the media\n\n \n\n \n\n \n\n \n\n \ntrack\n\n \nMPFAudioTrack\n\n \nAn \nMPFAudioTrack\n from the previous pipeline stage. Provided when feed forward is enabled. See \nFeed Forward Guide\n.\n\n \n\n \n\n\n\n\n\nMPFGenericJob\n\n\nExtends \nMPFJob\n\n\nClass containing data used for detection of objects in a file that isn't a video, image, or audio file. The file is of the UNKNOWN type and handled generically. The file is not logically segmented, so a job will contain the entirety of the file.\n\n\n\n\nConstructor(s):\n\n\n\n\npublic MPGenericJob(\n String jobName,\n String dataUri,\n final Map jobProperties,\n final Map mediaProperties)\n\n\n\npublic MPFGenericJob(\n String jobName,\n String dataUri,\n final Map jobProperties,\n final Map mediaProperties,\n MPFGenericTrack track) {\n\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njobName\n\n \nString\n\n \nSee \nMPFJob.jobName\n for description.\n\n \n\n \n\n \ndataUri\n\n \nString\n\n \nSee \nMPFJob.dataUri\n for description.\n\n \n\n \n\n \nstartTime\n\n \nint\n\n \nThe time (0-based index, in ms) associated with the beginning of the segment of the audio file that should be processed to look for detections.\n\n \n\n \n\n \nstopTime\n\n \nint\n\n \nThe time (0-based index, in ms) associated with the end of the segment of the audio file that should be processed to look for detections.\n\n \n \n \n\n \njobProperties\n\n \nMap\n\n \nSee \nMPFJob.jobProperties\n for description.\n\n \n\n \n\n \nmediaProperties\n\n \nMap\n\n \n\n See \nMPFJob.mediaProperties\n for description.\n \n\n Includes the following key-value pair:\n \n\n \nMIME_TYPE\n : the MIME type of the media\n\n \n\n \n\n \n\n \n\n \ntrack\n\n \nMPFGenericTrack\n\n \nAn \nMPFGenericTrack\n from the previous pipeline stage. Provided when feed forward is enabled. 
See \nFeed Forward Guide\n.\n\n \n\n \n\n\n\n\n\nDetection Job Result Classes\n\n\nMPFImageLocation\n\n\nClass used to store the location of detected objects in an image.\n\n\n\n\nConstructor(s):\n\n\n\n\npublic MPFImageLocation(\n int xLeftUpper,\n int yLeftUpper,\n int width,\n int height,\n float confidence,\n Map detectionProperties\n)\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nxLeftUpper\n\n\nint\n\n\nUpper left X coordinate of the detected object.\n\n\n\n\n\n\nyLeftUpper\n\n\nint\n\n\nUpper left Y coordinate of the detected object.\n\n\n\n\n\n\nwidth\n\n\nint\n\n\nThe width of the detected object.\n\n\n\n\n\n\nheight\n\n\nint\n\n\nThe height of the detected object.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetectionProperties\n\n\nMap\n\n\nOptional additional information about the detected object. There is no restriction on the keys or the number of entries that can be added to the properties map. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\n\n\nExample:\n\n\n\n\nA component that performs generic object classification can add an entry to \ndetection_properties\n where the key is \nCLASSIFICATION\n and the value is the type of object detected.\n\n\nMap detectionProperties = new HashMap();\ndetectionProperties.put(\"CLASSIFICATION\", \"backpack\");\nMPFImageLocation imageLocation = new MPFImageLocation(0, 0, 100, 100, 1.0, detectionProperties);\n\n\n\nMPFVideoTrack\n\n\nClass used to store the location of detected objects in an image.\n\n\n\n\nConstructor(s):\n\n\n\n\npublic MPFVideoTrack(\n int startFrame,\n int stopFrame,\n Map frameLocations,\n float confidence,\n Map detectionProperties\n)\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nstartFrame\n\n\nint\n\n\nThe first frame number (0-based index) that contained the detected object.\n\n\n\n\n\n\nstopFrame\n\n\nint\n\n\nThe last frame number (0-based index) that contained the detected object.\n\n\n\n\n\n\nframeLocations\n\n\nMap\n\n\nA map of individual detections. The key for each map entry is the frame number where the detection was generated, and the value is a \nMPFImageLocation\n calculated as if that frame was a still image. Note that a key-value pair is \nnot\n required for every frame between the track start frame and track stop frame. In some cases, frames are deliberately skipped, as when a FRAME_INTERVAL > 1 is specified\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetectionProperties\n\n\nMap\n\n\nOptional additional information about the detected object. There is no restriction on the keys or the number of entries that can be added to the properties map. 
For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nNOTE:\n Currently, \nMPFVideoTrack.detectionProperties\n do not show up in the JSON output object or are used by the WFM in any way.\n\n\n\n\nA component that detects text could add an entry to \ndetectionProperties\n where the key is \nTRANSCRIPT\n and the value is a string representing the text found in the video segment.\n\n\nMap detectionProperties = new HashMap();\ndetectionProperties.put(\"TRANSCRIPT\", \"RE5ULTS FR0M A TEXT DETECTER\");\nMPFVideoTrack videoTrack = new MPFVideoTrack(0, 5, frameLocations, 1.0, detectionProperties);\n\n\n\nMPFAudioTrack\n\n\nClass used to store the location of detected objects in an image.\n\n\n\n\nConstructor(s):\n\n\n\n\npublic MPFAudioTrack(\n int startTime,\n int stopTime,\n float confidence,\n Map detectionProperties\n)\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nstartTime\n\n\nint\n\n\nThe time (0-based index, in ms) when the audio detection event started.\n\n\n\n\n\n\nstopTime\n\n\nint\n\n\nThe time (0-based index, in ms) when the audio detection event stopped.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetectionProperties\n\n\nMap\n\n\nOptional additional information about the detection. There is no restriction on the keys or the number of entries that can be added to the properties map. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\n\n\nNOTE:\n Currently, \nMPFAudioTrack.detectionProperties\n do not show up in the JSON output object or are used by the WFM in any way.\n\n\n\n\nMPFGenericTrack\n\n\nClass used to store the location of detected objects in a file that is not a video, image, or audio file. The file is of the UNKNOWN type and handled generically.\n\n\n\n\nConstructor(s):\n\n\n\n\npublic MPFGenericTrack(\n float confidence,\n Map detectionProperties\n)\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetectionProperties\n\n\nMap\n\n\nOptional additional information about the detection. There is no restriction on the keys or the number of entries that can be added to the properties map. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\nEnumeration Types\n\n\nMPFDetectionError\n\n\nEnum used to indicate the status of \ngetDetections\n in a \nMPFComponentDetectionError\n. 
A component is not required to support all error types.\n\n\n\n\n\n\n\n\nENUM\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nMPF_DETECTION_SUCCESS\n\n\nThe component function completed successfully.\n\n\n\n\n\n\nMPF_OTHER_DETECTION_ERROR_TYPE\n\n\nThe component method has failed for a reason that is not captured by any of the other error codes.\n\n\n\n\n\n\nMPF_DETECTION_NOT_INITIALIZED\n\n\nThe initialization of the component, or the initialization of any of its dependencies, has failed for any reason.\n\n\n\n\n\n\nMPF_UNSUPPORTED_DATA_TYPE\n\n\nThe job passed to a component requests processing of a job of an unsupported type. For instance, a component that is only capable of processing audio files should return this error code if a video or image job request is received.\n\n\n\n\n\n\nMPF_COULD_NOT_OPEN_DATAFILE\n\n\nThe data file to be processed could not be opened for any reason, such as a permissions failure, or an unreachable URI. \nUse MPF_COULD_NOT_OPEN_MEDIA for media files.\n\n\n\n\n\n\nMPF_COULD_NOT_READ_DATAFILE\n\n\nThere is a failure reading data from a successfully opened input data file. \nUse MPF_COULD_NOT_READ_MEDIA for media files.\n\n\n\n\n\n\nMPF_FILE_WRITE_ERROR\n\n\nThe component received a failure for any reason when attempting to write to a file.\n\n\n\n\n\n\nMPF_BAD_FRAME_SIZE\n\n\nThe frame data retrieved has an incorrect or invalid frame size.\n\n\n\n\n\n\nMPF_DETECTION_FAILED\n\n\nGeneral failure of a detection algorithm. This does not indicate a lack of detections found in the media, but rather a break down in the algorithm that makes it impossible to continue to try to detect objects.\n\n\n\n\n\n\nMPF_INVALID_PROPERTY\n\n\nThe component received a property that is unrecognized or has an invalid/out-of-bounds value.\n\n\n\n\n\n\nMPF_MISSING_PROPERTY\n\n\nThe component received a job that is missing a required property.\n\n\n\n\n\n\nMPF_MEMORY_ALLOCATION_FAILED\n\n\nThe component failed to allocate memory for any reason.\n\n\n\n\n\n\nMPF_GPU_ERROR\n\n\nThe job was configured to execute on a GPU, but there was an issue with the GPU or no GPU was detected.\n\n\n\n\n\n\nMPF_NETWORK_ERROR\n\n\nThe component failed to communicate with an external system over the network. The system may not be available or there may have been a timeout.\n\n\n\n\n\n\nMPF_COULD_NOT_OPEN_MEDIA\n\n\nThe media file to be processed could not be opened for any reason, such as a permissions failure, or an unreachable URI.\n\n\n\n\n\n\nMPF_COULD_NOT_READ_MEDIA\n\n\nThere is a failure reading data from a successfully opened media file.\n\n\n\n\n\n\n\n\nUtility Classes\n\n\nTODO: Implement Java utility classes\n\n\nJava Component Build Environment\n\n\nA Java Component must be built using a version of the Java SDK that is compatible with the one used to build the Java Component Executor. The OpenMPF Java Component Executor is currently built using OpenJDK 11.0.11. In general, the Java SDK is backwards compatible.\n\n\nComponents should be supplied as a tar file, which includes not only the component library, but any other libraries or files needed for execution. This includes all other non-standard libraries used by the component (aside from the standard Linux and Java SDK libraries), and any configuration or data files.\n\n\nComponent Development Best Practices\n\n\nSingle-threaded Operation\n\n\nImplementations are encouraged to operate in single-threaded mode. 
OpenMPF will parallelize components through multiple instantiations of the component, each running as a separate service.\n\n\nStateless Behavior\n\n\nOpenMPF components should be stateless in operation and give identical output for a provided input (i.e. when processing the same \nMPFJob\n).\n\n\nComponent Structure for non-Docker Deployments\n\n\nIt is recommended that Java components are organized according to the following directory structure:\n\n\ncomponentName\n\u251c\u2500\u2500 config - Other component-specific configuration\n\u251c\u2500\u2500 descriptor\n\u2502 \u2514\u2500\u2500 descriptor.json\n\u2514\u2500\u2500 lib - All libraries required by the component\n\u2514\u2500\u2500 libComponentName.jar - Compiled component library\n\n\n\nOnce built, components should be packaged into a .tar.gz containing the contents of the directory shown above.\n\n\nLogging\n\n\nIt is recommended to use \nslf4j\n with \nlog4j2\n for OpenMPF Java Component logging. Multiple instances of the same component can log to the same file. Logging content can span multiple lines.\n\n\nLog files should be output to:\n\n${MPF_LOG_PATH}/${THIS_MPF_NODE}/log/.log\n\n\nEach log statement must take the form:\n\nDATE TIME LEVEL CONTENT\n\n\nThe following log LEVELs are supported:\n \nFATAL, ERROR, WARN, INFO, DEBUG, TRACE\n.\n\n\nFor example:\n\n2016-02-09 13:42:42,341 INFO - Starting sample-component: [ OK ]\n\n\nThe following log4j2 configuration can be used to match the format of other OpenMPF logs:\n\n\n \n\n \n ${env:MPF_LOG_PATH}/${env:THIS_MPF_NODE}/log/sample-component-detection.log\n %date %level [%thread] %logger{1.} - %msg%n\n \n\n \n \n \n \n\n \n \n \n \n \n \n \n \n\n \n\n \n \n \n\n \n \n \n \n \n",
+ "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nAPI Overview\n\n\nIn OpenMPF, a \ncomponent\n is a plugin that receives jobs (containing media), processes that media, and returns results.\n\n\nThe OpenMPF Batch Component API currently supports the development of \ndetection components\n, which are used to detect objects in image, video, audio, or other (generic) files that reside on disk.\n\n\nUsing this API, detection components can be built to provide:\n\n\n\n\nDetection (Localizing an object)\n\n\nTracking (Localizing an object across multiple frames)\n\n\nClassification (Detecting the type of object and optionally localizing that object)\n\n\nTranscription (Detecting speech and transcribing it into text)\n\n\n\n\nHow Components Integrate into OpenMPF\n\n\nComponents are integrated into OpenMPF through the use of OpenMPF's \nComponent Executor\n. Developers create component libraries that encapsulate the component detection logic. Each instance of the Component Executor loads one of these libraries and uses it to service job requests sent by the OpenMPF Workflow Manager (WFM).\n\n\nThe Component Executor:\n\n\n\n\nReceives and parses job requests from the WFM\n\n\nInvokes methods on the component library to obtain detection results\n\n\nPopulates and sends the respective responses to the WFM\n\n\n\n\nThe basic pseudocode for the Component Executor is as follows:\n\n\ncomponent.setRunDirectory(...)\ncomponent.init()\nwhile (true) {\n job = ReceiveJob()\n if (component.supports(job.dataType))\n component.getDetections(...) // Component does the work here\n }\ncomponent.close()\n\n\n\nEach instance of a Component Executor runs as a separate process.\n\n\nThe Component Executor receives and parses requests from the WFM, invokes methods on the Component Logic to get detection objects, and subsequently populates responses with the component output and sends them to the WFM.\n\n\nA component developer implements a detection component by extending \nMPFDetectionComponentBase\n.\n\n\nAs an alternative to extending \nMPFDetectionComponentBase\n directly, a developer may extend one of several convenience adapter classes provided by OpenMPF. See \nConvenience Adapters\n for more information.\n\n\nGetting Started\n\n\nThe quickest way to get started with the Java Batch Component API is to first read the \nOpenMPF Component API Overview\n and then \nreview the source\n for example OpenMPF Java detection components.\n\n\nDetection components are implemented by:\n\n\n\n\nExtending \nMPFDetectionComponentBase\n.\n\n\nBuilding the component into a jar. (See \nHelloWorldComponent pom.xml\n).\n\n\nCreating a component Docker image. (See the \nREADME\n).\n\n\n\n\nAPI Specification\n\n\nThe figure below presents a high-level component diagram of the Java Batch Component API:\n\n\n\n\nThe Node Manager is only used in a non-Docker deployment. 
In a Docker deployment the Component Executor is started by the Docker container itself.\n\n\nThe API consists of \nComponent Interfaces\n, which provide interfaces and abstract classes for developing components; \nJob Definitions\n, which define the work to be performed by a component; \nJob Results\n, which define the results generated by the component; and \nComponent Adapters\n, which provide default implementations of several of the \nMPFDetectionComponentInterface\n methods (See the \nMPFAudioAndVideoDetectionComponentAdapter\n for an example; \nTODO: implement those shown in the diagram\n). In the future, the API will also include \nComponent Utilities\n, which perform actions such as image flipping, rotation, and cropping.\n\n\nComponent Interfaces\n\n\n\n\nMPFComponentInterface\n - Interface for all Java components that perform batch processing.\n\n\nMPFComponentBase\n - An abstract baseline for components. Provides default implementations for \nMPFComponentInterface\n.\n\n\n\n\nDetection Component Interfaces\n\n\n\n\nMPFDetectionComponentInterface\n - Baseline interface for detection components.\n\n\nMPFDetectionComponentBase\n - An abstract baseline for detection components. Provides default implementations for \nMPFDetectionComponentInterface\n.\n\n\n\n\nJob Definitions\n\n\nThe following classes define the details about a specific job (work unit):\n\n\n\n\nMPFImageJob\n extends \nMPFJob\n\n\nMPFVideoJob\n extends \nMPFJob\n\n\nMPFAudioJob\n extends \nMPFJob\n\n\nMPFGenericJob\n extends \nMPFJob\n\n\n\n\nJob Results\n\n\nThe following classes define detection results:\n\n\n\n\nMPFImageLocation\n\n\nMPFVideoTrack\n\n\nMPFAudioTrack\n\n\nMPFGenericTrack\n\n\n\n\nComponent Interface\n\n\nThe OpenMPF Component class structure consists of:\n\n\n\n\nMPFComponentInterface\n - Interface for all OpenMPF Java components that perform batch processing.\n\n\nMPFComponentBase\n - An abstract baseline for components. Provides default implementations for \nMPFComponentInterface\n.\n\n\n\n\n\n\nIMPORTANT:\n This interface and abstract class should not be directly implemented because no mechanism exists for launching components based off of it. Instead, it defines the contract that components must follow. Currently, the only supported type of batch component is \"DETECTION\". Those components should extend \nMPFDetectionComponentBase\n\n\n\n\nSee the latest source here.\n\n\nsetRunDirectory(String)\n\n\nSets the value to the full path of the parent folder above where the component is installed.\n\n\n\n\nMethod Definition:\n\n\n\n\npublic void setRunDirectory(String runDirectory);\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nrunDirectory\n\n\nString\n\n\nFull path of the parent folder above where the component is installed.\n\n\n\n\n\n\n\n\n\n\nReturns: none\n\n\n\n\n\n\nIMPORTANT:\n \nsetRunDirectory\n is called by the Component Executor to set the correct path. It is not necessary to call this method in your component implementation.\n\n\n\n\ngetRunDirectory()\n\n\nReturns the full path of the parent folder above where the component is installed.\n\n\n\n\nMethod Definition:\n\n\n\n\npublic String getRunDirectory()\n\n\n\n\n\n\n\nParameters: none\n\n\n\n\n\n\nReturns: (\nString\n) Full path of the parent folder above where the component is installed.\n\n\n\n\n\n\ninit()\n\n\nPerforms any necessary startup tasks for the component. 
This will be executed once by the Component Executor, on component startup, before the first job, after \nsetRunDirectory\n.\n\n\n\n\nMethod Definition:\n\n\n\n\npublic void init()\n\n\n\n\n\n\n\nParameters: none\n\n\n\n\n\n\nReturns: none\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\npublic void init() {\n // Setup logger, Load data models, etc.\n}\n\n\n\nclose()\n\n\nPerforms any necessary shutdown tasks for the component. This will be executed once by the Component Executor, on component shutdown, usually after the last job.\n\n\n\n\nMethod Definition:\n\n\n\n\npublic void close()\n\n\n\n\n\n\n\nParameters: none\n\n\n\n\n\n\nReturns: none\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\npublic void close() {\n // Close file handlers, etc.\n}\n\n\n\ngetComponentType()\n\n\nAllows the Component API to determine the component \"type.\" Currently \nDETECTION\n is the only supported component type.\n\n\n\n\nMethod Definition:\n\n\n\n\npublic MPFComponentType getComponentType()\n\n\n\n\n\n\n\nParameters: none\n\n\n\n\n\n\nReturns: (\nMPFComponentType\n) Currently, \nDETECTION\n is the only supported return value.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\npublic MPFComponentType getComponentType() {\n return MPFComponentType.DETECTION;\n}\n\n\n\nDetection Component Interface\n\n\nThe \nMPFDetectionComponentInterface\n must be utilized by all OpenMPF Java detection components that perform batch processing.\n\n\nEvery batch detection component must define a \ncomponent\n class which implements the MPFComponentInterface. This is typically performed by extending \nMPFDetectionComponentBase\n, which extends \nMPFComponentBase\n and implements \nMPFDetectionComponentInterface\n.\n\n\nTo designate the component class, every batch detection component should include an applicationContext.xml which defines the \ncomponent\n bean. The \ncomponent\n bean class must implement \nMPFDetectionComponentInterface\n.\n\n\n\n\nIMPORTANT:\n Each batch detection component must implement all of the \ngetDetections()\n methods or extend from a superclass which provides implementations for them (see \nconvenience adapters\n).\n\n\nIf your component does not support a particular data type, it should simply:\n\n\nthrow new MPFComponentDetectionError(MPFDetectionError.MPF_UNSUPPORTED_DATA_TYPE);\n\n\n\n\nConvenience Adapters\n\n\nAs an alternative to extending \nMPFDetectionComponentBase\n directly, developers may extend one of the convenience adapter classes provided by OpenMPF.\n\n\nThese adapters provide default implementations of several methods in \nMPFDetectionComponentInterface\n and ensure that the component's logic properly extends from the Component API. This enables developers to concentrate on implementation of the detection algorithm.\n\n\nThe following adapter is provided:\n\n\n\n\nAudio And Video Detection Component Adapter (\nsource\n)\n\n\n\n\n\n\nExample: Using Adapters to Provide Simple AudioVisual Handling:\n\nMany components designed to work on audio files, such as speech detection, are relevant to video files as well. Some of the tools for these components, however, only function on audio files (such as .wav, .mp3) and not video files (.avi, .mov, etc).\n\n\nThe \nMPFAudioAndVideoDetectionComponentAdapter\n adapter class implements the \ngetDetections(MPFVideoJob)\n method by translating the video request into an audio request. It builds a temporary audio file by ripping the audio from the video media input, translates the \nMPFVideoJob\n into an \nMPFAudioJob\n, and invokes \ngetDetections(MPFAudioJob)\n on the generated file. 
Once processing is done, the adapter translates the \nMPFAudioTrack\n list into an \nMPFVideoTrack\n list.\n\n\nSince only audio and video files are relevant to this adapter, it provides a default implementation of the \ngetDetections(MPFImageJob)\n method which throws \nnew MPFComponentDetectionError(MPFDetectionError.MPF_UNSUPPORTED_DATA_TYPE)\n.\n\n\nThe Sphinx speech detection component uses this adapter to run Sphinx speech detection on video files. Other components that need to process video files as audio may also use the adapter.\n\n\n\n\nsupports(MPFDataType)\n\n\nReturns the supported data types of the component.\n\n\n\n\nMethod Definition:\n\n\n\n\npublic boolean supports(MPFDataType dataType)\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\ndataType\n\n\nMPFDataType\n\n\nReturn true if the component supports IMAGE, VIDEO, AUDIO, and/or UNKNOWN (generic) processing.\n\n\n\n\n\n\n\n\n\n\n\n\nReturns: (\nboolean\n) True if the component supports the data type, otherwise false.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\n// Sample Component that supports only image and video files\npublic boolean supports(MPFDataType dataType) {\n return dataType == MPFDataType.IMAGE || dataType == MPFDataType.VIDEO;\n}\n\n\n\ngetDetections(MPFImageJob)\n\n\nUsed to detect objects in image files. The MPFImageJob class contains the URI specifying the location of the image file.\n\n\nCurrently, the dataUri is always a local file path. For example, \"/opt/mpf/share/remote-media/test-file.jpg\". This is because all media is copied to the OpenMPF server before the job is executed.\n\n\n\n\nMethod Definition:\n\n\n\n\npublic List getDetections(MPFImageJob job)\n throws MPFComponentDetectionError;\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njob\n\n\nMPFImageJob\n\n\nClass containing details about the work to be performed. See \nMPFImageJob\n\n\n\n\n\n\n\n\n\n\n\n\nReturns: (\nList\n) The \nMPFImageLocation\n data for each detected object.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\npublic List getDetections(MPFImageJob job)\n throws MPFComponentDetectionError {\n // Component logic to generate image locations\n}\n\n\n\ngetDetections(MPFVideoJob)\n\n\nUsed to detect objects in a video.\n\n\nPrior to being sent to the component, videos are split into logical \"segments\" of video data and each segment (containing a range of frames) is assigned to a different job. Components are not guaranteed to receive requests in any order. For example, the first request processed by a component might receive a request for frames 300-399 of a Video A, while the next request may cover frames 900-999 of a Video B.\n\n\n\n\nMethod Definition:\n\n\n\n\npublic List getDetections(MPFVideoJob job)\n throws MPFComponentDetectionError;\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njob\n\n\nMPFVideoJob\n\n\nClass containing details about the work to be performed. See \nMPFVideoJob\n\n\n\n\n\n\n\n\n\n\n\n\nReturns: (\nList\n) The \nMPFVideoTrack\n data for each detected object.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\npublic List getDetections(MPFVideoJob job)\n throws MPFComponentDetectionError {\n // Component logic to generate video tracks\n}\n\n\n\ngetDetections(MPFAudioJob)\n\n\nUsed to detect objects in audio files. 
Currently, audio files are not logically segmented, so a job will contain the entirety of the audio file.\n\n\n\n\nMethod Definition:\n\n\n\n\npublic List getDetections(MPFAudioJob job)\n throws MPFComponentDetectionError;\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njob\n\n\nMPFAudioJob\n\n\nClass containing details about the work to be performed. See \nMPFAudioJob\n\n\n\n\n\n\n\n\n\n\n\n\nReturns: (\nList\n) The \nMPFAudioTrack\n data for each detected object.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\npublic List getDetections(MPFAudioJob job)\n throws MPFComponentDetectionError {\n // Component logic to generate audio tracks\n}\n\n\n\ngetDetections(MPFGenericJob)\n\n\nUsed to detect objects in files that aren't video, image, or audio files. Such files are of the UNKNOWN type and handled generically. These files are not logically segmented, so a job will contain the entirety of the file.\n\n\n\n\nMethod Definition:\n\n\n\n\npublic List getDetections(MPFGenericJob job)\n throws MPFComponentDetectionError;\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njob\n\n\nMPFGenericJob\n\n\nClass containing details about the work to be performed. See \nMPFGenericJob\n\n\n\n\n\n\n\n\n\n\n\n\nReturns: (\nList\n) The \nMPFGenericTrack\n data for each detected object.\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\npublic List getDetections(MPFGenericJob job)\n throws MPFComponentDetectionError {\n // Component logic to generate generic tracks\n}\n\n\n\nMPFComponentDetectionError\n\n\nAn exception that occurs in a component. The exception must contain a reference to a valid \nMPFDetectionError\n.\n\n\n\n\nConstructor(s):\n\n\n\n\npublic MPFComponentDetectionError (\n MPFDetectionError error,\n String msg,\n Exception e\n)\n\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nerror\n\n\nMPFDetectionError\n\n\nThe type of error generated by the component. See \nMPFDetectionError\n.\n\n\n\n\n\n\nmsg\n\n\nString\n\n\nThe detail message (which is saved for later retrieval by the \nThrowable.getMessage()\n method).\n\n\n\n\n\n\ne\n\n\nException\n\n\nThe cause (which is saved for later retrieval by the \nThrowable.getCause()\n method). A null value is permitted.\n\n\n\n\n\n\n\n\nDetection Job Classes\n\n\nThe following classes contain details about a specific job (work unit):\n\n\n\n\nMPFImageJob\n extends \nMPFJob\n\n\nMPFVideoJob\n extends \nMPFJob\n\n\nMPFAudioJob\n extends \nMPFJob\n\n\nMPFGenericJob\n extends \nMPFJob\n\n\n\n\nThe following classes define detection results:\n\n\n\n\nMPFImageLocation\n\n\nMPFVideoTrack\n\n\nMPFAudioTrack\n\n\nMPFGenericTrack\n\n\n\n\nMPFJob\n\n\nClass containing data used for detection of objects.\n\n\n\n\nConstructor(s):\n\n\n\n\nprotected MPFJob(\n String jobName,\n String dataUri,\n final Map jobProperties,\n final Map mediaProperties\n)\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\njobName \n\n\nString\n\n\nA specific name given to the job by the OpenMPF Framework. This value may be used, for example, for logging and debugging purposes.\n\n\n\n\n\n\ndataUri \n\n\nString\n\n\nThe URI of the input media file to be processed. Currently, this is a file path. 
For example, \"/opt/mpf/share/remote-media/test-file.avi\".\n\n\n\n\n\n\njobProperties \n\n\nMap\n\n\nThe key corresponds to the property name specified in the component descriptor file described in \"Installing and Registering a Component\". Values are determined by an end user when creating a pipeline. \n Note: Only those property values specified by the user will be in the jobProperties map; for properties not contained in the map, the component must use a default value.\n\n\n\n\n\n\nmediaProperties \n\n\nMap\n\n\nMetadata about the media associated with the job. The key is the property name and value is the property value. The entries in the map vary depend on the job type. They are defined in the specific Job's API description.\n\n\n\n\n\n\n\n\nJob properties can also be set through environment variables prefixed with \nMPF_PROP_\n. This allows\nusers to set job properties in their\n\ndocker-compose files.\n\nThese will take precedence over all other property types (job, algorithm, media, etc). It is not\npossible to change the value of properties set via environment variables at runtime and therefore\nthey should only be used to specify properties that will not change throughout the entire lifetime\nof the service (e.g. Docker container).\n\n\nMPFImageJob\n\n\nExtends \nMPFJob\n\n\nClass containing data used for detection of objects in image files.\n\n\n\n\nConstructor(s):\n\n\n\n\npublic MPFImageJob(\n String jobName,\n String dataUri,\n final Map jobProperties,\n final Map mediaProperties)\n\n\n\npublic MPFImageJob(\n String jobName,\n String dataUri,\n final Map jobProperties,\n final Map mediaProperties,\n MPFImageLocation location)\n\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njobName\n\n \nString\n\n \nSee \nMPFJob.jobName\n for description.\n\n \n\n \n\n \ndataUri\n\n \nString\n\n \nSee \nMPFJob.dataUri\n for description.\n\n \n\n \n\n \njobProperties\n\n \nMap\n\n \nSee \nMPFJob.jobProperties\n for description.\n\n \n\n \n\n \nmediaProperties\n\n \nMap\n\n \n\n See \nMPFJob.mediaProperties\n for description.\n \n\n Includes the following key-value pairs:\n \n\n \nMIME_TYPE\n : the MIME type of the media\n\n \nFRAME_WIDTH\n : the width of the image in pixels\n\n \nFRAME_HEIGHT\n : the height of the image in pixels\n\n \n\n May include the following key-value pairs:\n \n\n \nROTATION\n : A floating point value in the interval \n[0.0, 360.0)\n indicating the orientation of the media in degrees in the counter-clockwise direction. In order to view the media in the upright orientation, it must be rotated the given number of degrees in the clockwise direction.\n\n \nHORIZONTAL_FLIP\n : true if the image is mirrored across the Y-axis, otherwise false\n\n \nEXIF_ORIENTATION\n : the standard EXIF orientation tag; a value between 1 and 8\n\n \n\n \n\n \n\n \n\n \nlocation\n\n \nMPFImageLocation\n\n \nAn \nMPFImageLocation\n from the previous pipeline stage. Provided when feed forward is enabled. 
See \nFeed Forward Guide\n.\n\n \n\n \n\n\n\n\n\nMPFVideoJob\n\n\nExtends \nMPFJob\n\n\nClass containing data used for detection of objects in video files.\n\n\n\n\nConstructor(s):\n\n\n\n\npublic MPFVideoJob(\n String jobName,\n String dataUri,\n final Map jobProperties,\n final Map mediaProperties,\n int startFrame,\n int stopFrame)\n\n\n\npublic MPFVideoJob(\n String jobName,\n String dataUri,\n final Map jobProperties,\n final Map mediaProperties,\n int startFrame,\n int stopFrame,\n MPFVideoTrack track)\n\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njobName\n\n \nString\n\n \nSee \nMPFJob.jobName\n for description.\n\n \n\n \n\n \ndataUri\n\n \nString\n\n \nSee \nMPFJob.dataUri\n for description.\n\n \n\n \n\n \nstartFrame\n\n \nint\n\n \nThe first frame number (0-based index) of the video that should be processed to look for detections.\n\n \n\n \n\n \nstopFrame\n\n \nint\n\n \nThe last frame number (0-based index) of the video that should be processed to look for detections.\n\n \n\n \n\n \njobProperties\n\n \nMap\n\n \nSee \nMPFJob.jobProperties\n for description.\n\n \n\n \n\n \nmediaProperties\n\n \nMap\n\n \n\n See \nMPFJob.mediaProperties\n for description.\n \n\n Includes the following key-value pairs:\n \n\n \nDURATION\n : length of video in milliseconds\n\n \nFPS\n : frames per second (averaged for variable frame rate video)\n\n \nFRAME_COUNT\n : the number of frames in the video\n\n \nMIME_TYPE\n : the MIME type of the media\n\n \nFRAME_WIDTH\n : the width of a frame in pixels\n\n \nFRAME_HEIGHT\n : the height of a frame in pixels\n\n \nHAS_CONSTANT_FRAME_RATE\n : set to true if the video has a constant frame rate; otherwise, omitted or set to false if the video has variable frame rate or the type of frame rate cannot be determined\n\n \n\n May include the following key-value pair:\n \n\n \nROTATION\n : A floating point value in the interval \n[0.0, 360.0)\n indicating the orientation of the media in degrees in the counter-clockwise direction. In order to view the media in the upright orientation, it must be rotated the given number of degrees in the clockwise direction.\n\n \n\n \n\n \n\n \n\n \ntrack\n\n \nMPFVideoTrack\n\n \nAn \nMPFVideoTrack\n from the previous pipeline stage. Provided when feed forward is enabled. See \nFeed Forward Guide\n.\n\n \n\n \n\n\n\n\n\n\n\nIMPORTANT:\n \nFRAME_INTERVAL\n is a common job property that many components support. For frame intervals greater than 1, the component must look for detections starting with the first frame, and then skip frames as specified by the frame interval, until or before it reaches the stop frame. 
For example, given a start frame of 0, a stop frame of 99, and a frame interval of 2, then the detection component must look for objects in frames numbered 0, 2, 4, 6, ..., 98.\n\n\n\n\nMPFAudioJob\n\n\nExtends \nMPFJob\n\n\nClass containing data used for detection of objects in audio files.\n\n\n\n\nConstructor(s):\n\n\n\n\npublic MPFAudioJob(\n String jobName,\n String dataUri,\n final Map jobProperties,\n final Map mediaProperties,\n int startTime,\n int stopTime)\n\n\n\npublic MPFAudioJob(\n String jobName,\n String dataUri,\n final Map jobProperties,\n final Map mediaProperties,\n int startTime,\n int stopTime,\n MPFAudioTrack track)\n\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njobName\n\n \nString\n\n \nSee \nMPFJob.jobName\n for description.\n\n \n\n \n\n \ndataUri\n\n \nString\n\n \nSee \nMPFJob.dataUri\n for description.\n\n \n\n \n\n \nstartTime\n\n \nint\n\n \nThe time (0-based index, in ms) associated with the beginning of the segment of the audio file that should be processed to look for detections.\n\n \n\n \n\n \nstopTime\n\n \nint\n\n \nThe time (0-based index, in ms) associated with the end of the segment of the audio file that should be processed to look for detections.\n\n \n\n \n\n \njobProperties\n\n \nMap\n\n \nSee \nMPFJob.jobProperties\n for description.\n\n \n\n \n\n \nmediaProperties\n\n \nMap\n\n \n\n See \nMPFJob.mediaProperties\n for description.\n \n\n Includes the following key-value pairs:\n \n\n \nDURATION\n : length of audio file in milliseconds\n\n \nMIME_TYPE\n : the MIME type of the media\n\n \n\n \n\n \n\n \n\n \ntrack\n\n \nMPFAudioTrack\n\n \nAn \nMPFAudioTrack\n from the previous pipeline stage. Provided when feed forward is enabled. See \nFeed Forward Guide\n.\n\n \n\n \n\n\n\n\n\nMPFGenericJob\n\n\nExtends \nMPFJob\n\n\nClass containing data used for detection of objects in a file that isn't a video, image, or audio file. The file is of the UNKNOWN type and handled generically. The file is not logically segmented, so a job will contain the entirety of the file.\n\n\n\n\nConstructor(s):\n\n\n\n\npublic MPFGenericJob(\n String jobName,\n String dataUri,\n final Map jobProperties,\n final Map mediaProperties)\n\n\n\npublic MPFGenericJob(\n String jobName,\n String dataUri,\n final Map jobProperties,\n final Map mediaProperties,\n MPFGenericTrack track)\n\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njobName\n\n \nString\n\n \nSee \nMPFJob.jobName\n for description.\n\n \n\n \n\n \ndataUri\n\n \nString\n\n \nSee \nMPFJob.dataUri\n for description.\n\n \n\n \n\n \njobProperties\n\n \nMap\n\n \nSee \nMPFJob.jobProperties\n for description.\n\n \n\n \n\n \nmediaProperties\n\n \nMap\n\n \n\n See \nMPFJob.mediaProperties\n for description.\n \n\n Includes the following key-value pair:\n \n\n \nMIME_TYPE\n : the MIME type of the media\n\n \n\n \n\n \n\n \n\n \ntrack\n\n \nMPFGenericTrack\n\n \nAn \nMPFGenericTrack\n from the previous pipeline stage. Provided when feed forward is enabled. 
See \nFeed Forward Guide\n.\n\n \n\n \n\n\n\n\n\nDetection Job Result Classes\n\n\nMPFImageLocation\n\n\nClass used to store the location of detected objects in an image.\n\n\n\n\nConstructor(s):\n\n\n\n\npublic MPFImageLocation(\n int xLeftUpper,\n int yLeftUpper,\n int width,\n int height,\n float confidence,\n Map detectionProperties\n)\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nxLeftUpper\n\n\nint\n\n\nUpper left X coordinate of the detected object.\n\n\n\n\n\n\nyLeftUpper\n\n\nint\n\n\nUpper left Y coordinate of the detected object.\n\n\n\n\n\n\nwidth\n\n\nint\n\n\nThe width of the detected object.\n\n\n\n\n\n\nheight\n\n\nint\n\n\nThe height of the detected object.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetectionProperties\n\n\nMap\n\n\nOptional additional information about the detected object. There is no restriction on the keys or the number of entries that can be added to the properties map. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\n\n\nExample:\n\n\n\n\nA component that performs generic object classification can add an entry to \ndetectionProperties\n where the key is \nCLASSIFICATION\n and the value is the type of object detected.\n\n\nMap detectionProperties = new HashMap();\ndetectionProperties.put(\"CLASSIFICATION\", \"backpack\");\nMPFImageLocation imageLocation = new MPFImageLocation(0, 0, 100, 100, 1.0, detectionProperties);\n\n\n\nMPFVideoTrack\n\n\nClass used to store the location of detected objects in a video.\n\n\n\n\nConstructor(s):\n\n\n\n\npublic MPFVideoTrack(\n int startFrame,\n int stopFrame,\n Map frameLocations,\n float confidence,\n Map detectionProperties\n)\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nstartFrame\n\n\nint\n\n\nThe first frame number (0-based index) that contained the detected object.\n\n\n\n\n\n\nstopFrame\n\n\nint\n\n\nThe last frame number (0-based index) that contained the detected object.\n\n\n\n\n\n\nframeLocations\n\n\nMap\n\n\nA map of individual detections. The key for each map entry is the frame number where the detection was generated, and the value is an \nMPFImageLocation\n calculated as if that frame was a still image. Note that a key-value pair is \nnot\n required for every frame between the track start frame and track stop frame. In some cases, frames are deliberately skipped, as when a FRAME_INTERVAL > 1 is specified.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetectionProperties\n\n\nMap\n\n\nOptional additional information about the detected object. There is no restriction on the keys or the number of entries that can be added to the properties map. 
For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\n\n\nExample:\n\n\n\n\n\n\nNOTE:\n Currently, \nMPFVideoTrack.detectionProperties\n do not show up in the JSON output object, nor are they used by the WFM in any way.\n\n\n\n\nA component that detects text could add an entry to \ndetectionProperties\n where the key is \nTRANSCRIPT\n and the value is a string representing the text found in the video segment.\n\n\nMap detectionProperties = new HashMap();\ndetectionProperties.put(\"TRANSCRIPT\", \"RE5ULTS FR0M A TEXT DETECTER\");\nMPFVideoTrack videoTrack = new MPFVideoTrack(0, 5, frameLocations, 1.0, detectionProperties);\n\n\n\nMPFAudioTrack\n\n\nClass used to store the location of detected objects in an audio file.\n\n\n\n\nConstructor(s):\n\n\n\n\npublic MPFAudioTrack(\n int startTime,\n int stopTime,\n float confidence,\n Map detectionProperties\n)\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nstartTime\n\n\nint\n\n\nThe time (0-based index, in ms) when the audio detection event started.\n\n\n\n\n\n\nstopTime\n\n\nint\n\n\nThe time (0-based index, in ms) when the audio detection event stopped.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetectionProperties\n\n\nMap\n\n\nOptional additional information about the detection. There is no restriction on the keys or the number of entries that can be added to the properties map. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\n\n\nNOTE:\n Currently, \nMPFAudioTrack.detectionProperties\n do not show up in the JSON output object, nor are they used by the WFM in any way.\n\n\n\n\nMPFGenericTrack\n\n\nClass used to store the location of detected objects in a file that is not a video, image, or audio file. The file is of the UNKNOWN type and handled generically.\n\n\n\n\nConstructor(s):\n\n\n\n\npublic MPFGenericTrack(\n float confidence,\n Map detectionProperties\n)\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetectionProperties\n\n\nMap\n\n\nOptional additional information about the detection. There is no restriction on the keys or the number of entries that can be added to the properties map. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\nEnumeration Types\n\n\nMPFDetectionError\n\n\nEnum used to indicate the status of \ngetDetections\n in an \nMPFComponentDetectionError\n. 
A component is not required to support all error types.\n\n\n\n\n\n\n\n\nENUM\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nMPF_DETECTION_SUCCESS\n\n\nThe component function completed successfully.\n\n\n\n\n\n\nMPF_OTHER_DETECTION_ERROR_TYPE\n\n\nThe component method has failed for a reason that is not captured by any of the other error codes.\n\n\n\n\n\n\nMPF_DETECTION_NOT_INITIALIZED\n\n\nThe initialization of the component, or the initialization of any of its dependencies, has failed for any reason.\n\n\n\n\n\n\nMPF_UNSUPPORTED_DATA_TYPE\n\n\nThe job passed to a component requests processing of a job of an unsupported type. For instance, a component that is only capable of processing audio files should return this error code if a video or image job request is received.\n\n\n\n\n\n\nMPF_COULD_NOT_OPEN_DATAFILE\n\n\nThe data file to be processed could not be opened for any reason, such as a permissions failure, or an unreachable URI. \nUse MPF_COULD_NOT_OPEN_MEDIA for media files.\n\n\n\n\n\n\nMPF_COULD_NOT_READ_DATAFILE\n\n\nThere is a failure reading data from a successfully opened input data file. \nUse MPF_COULD_NOT_READ_MEDIA for media files.\n\n\n\n\n\n\nMPF_FILE_WRITE_ERROR\n\n\nThe component received a failure for any reason when attempting to write to a file.\n\n\n\n\n\n\nMPF_BAD_FRAME_SIZE\n\n\nThe frame data retrieved has an incorrect or invalid frame size.\n\n\n\n\n\n\nMPF_DETECTION_FAILED\n\n\nGeneral failure of a detection algorithm. This does not indicate a lack of detections found in the media, but rather a break down in the algorithm that makes it impossible to continue to try to detect objects.\n\n\n\n\n\n\nMPF_INVALID_PROPERTY\n\n\nThe component received a property that is unrecognized or has an invalid/out-of-bounds value.\n\n\n\n\n\n\nMPF_MISSING_PROPERTY\n\n\nThe component received a job that is missing a required property.\n\n\n\n\n\n\nMPF_MEMORY_ALLOCATION_FAILED\n\n\nThe component failed to allocate memory for any reason.\n\n\n\n\n\n\nMPF_GPU_ERROR\n\n\nThe job was configured to execute on a GPU, but there was an issue with the GPU or no GPU was detected.\n\n\n\n\n\n\nMPF_NETWORK_ERROR\n\n\nThe component failed to communicate with an external system over the network. The system may not be available or there may have been a timeout.\n\n\n\n\n\n\nMPF_COULD_NOT_OPEN_MEDIA\n\n\nThe media file to be processed could not be opened for any reason, such as a permissions failure, or an unreachable URI.\n\n\n\n\n\n\nMPF_COULD_NOT_READ_MEDIA\n\n\nThere is a failure reading data from a successfully opened media file.\n\n\n\n\n\n\n\n\nUtility Classes\n\n\nTODO: Implement Java utility classes\n\n\nJava Component Build Environment\n\n\nA Java Component must be built using a version of the Java SDK that is compatible with the one used to build the Java Component Executor. The OpenMPF Java Component Executor is currently built using OpenJDK 11.0.11. In general, the Java SDK is backwards compatible.\n\n\nComponents should be supplied as a tar file, which includes not only the component library, but any other libraries or files needed for execution. This includes all other non-standard libraries used by the component (aside from the standard Linux and Java SDK libraries), and any configuration or data files.\n\n\nComponent Development Best Practices\n\n\nSingle-threaded Operation\n\n\nImplementations are encouraged to operate in single-threaded mode. 
OpenMPF will parallelize components through multiple instantiations of the component, each running as a separate service.\n\n\nStateless Behavior\n\n\nOpenMPF components should be stateless in operation and give identical output for a provided input (i.e. when processing the same \nMPFJob\n).\n\n\nComponent Structure for non-Docker Deployments\n\n\nIt is recommended that Java components are organized according to the following directory structure:\n\n\ncomponentName\n\u251c\u2500\u2500 config - Other component-specific configuration\n\u251c\u2500\u2500 descriptor\n\u2502 \u2514\u2500\u2500 descriptor.json\n\u2514\u2500\u2500 lib - All libraries required by the component\n\u2514\u2500\u2500 libComponentName.jar - Compiled component library\n\n\n\nOnce built, components should be packaged into a .tar.gz containing the contents of the directory shown above.\n\n\nLogging\n\n\nIt is recommended to use \nslf4j\n with \nlog4j2\n for OpenMPF Java Component logging. Multiple instances of the same component can log to the same file. Logging content can span multiple lines.\n\n\nLog files should be output to:\n\n${MPF_LOG_PATH}/${THIS_MPF_NODE}/log/.log\n\n\nEach log statement must take the form:\n\nDATE TIME LEVEL CONTENT\n\n\nThe following log LEVELs are supported:\n \nFATAL, ERROR, WARN, INFO, DEBUG, TRACE\n.\n\n\nFor example:\n\n2016-02-09 13:42:42,341 INFO - Starting sample-component: [ OK ]\n\n\nThe following log4j2 configuration can be used to match the format of other OpenMPF logs:\n\n\n \n\n \n ${env:MPF_LOG_PATH}/${env:THIS_MPF_NODE}/log/sample-component-detection.log\n %date %level [%thread] %logger{1.} - %msg%n\n \n\n \n \n \n \n\n \n \n \n \n \n \n \n \n\n \n\n \n \n \n\n \n \n \n \n \n",
"title": "Java Batch Component API"
},
{
@@ -1010,11 +1035,6 @@
"text": "Returns the supported data types of the component. Method Definition: public boolean supports(MPFDataType dataType) Parameters: Parameter Data Type Description dataType MPFDataType Return true if the component supports IMAGE, VIDEO, AUDIO, and/or UNKNOWN (generic) processing. Returns: ( boolean ) True if the component supports the data type, otherwise false. Example: // Sample Component that supports only image and video files\npublic boolean supports(MPFDataType dataType) {\n return dataType == MPFDataType.IMAGE || dataType == MPFDataType.VIDEO;\n}",
"title": "supports(MPFDataType)"
},
- {
- "location": "/Java-Batch-Component-API/index.html#getdetectiontype",
- "text": "Returns the type of object detected by the component. Method Definition: public String getDetectionType() Parameters: none Returns: ( String ) The type of object detected by the component. Should be in all CAPS. Examples include: FACE , MOTION , PERSON , SPEECH , CLASS (for object classification), or TEXT . Example: public String getDetectionType() {\n return \"FACE\";\n}",
- "title": "getDetectionType()"
- },
{
"location": "/Java-Batch-Component-API/index.html#getdetectionsmpfimagejob",
"text": "Used to detect objects in image files. The MPFImageJob class contains the URI specifying the location of the image file. Currently, the dataUri is always a local file path. For example, \"/opt/mpf/share/remote-media/test-file.jpg\". This is because all media is copied to the OpenMPF server before the job is executed. Method Definition: public List getDetections(MPFImageJob job)\n throws MPFComponentDetectionError; Parameters: Parameter Data Type Description job MPFImageJob Class containing details about the work to be performed. See MPFImageJob Returns: ( List ) The MPFImageLocation data for each detected object. Example: public List getDetections(MPFImageJob job)\n throws MPFComponentDetectionError {\n // Component logic to generate image locations\n}",
@@ -1057,17 +1077,17 @@
},
{
"location": "/Java-Batch-Component-API/index.html#mpfvideojob",
- "text": "Extends MPFJob Class containing data used for detection of objects in video files. Constructor(s): public MPFVideoJob(\n String jobName,\n String dataUri,\n final Map jobProperties,\n final Map mediaProperties,\n int startFrame,\n int stopFrame) public MPFVideoJob(\n String jobName,\n String dataUri,\n final Map jobProperties,\n final Map mediaProperties,\n int startFrame,\n int stopFrame,\n MPFVideoTrack track) Members: \n \n \n Member \n Data Type \n Description \n \n \n \n \n jobName \n String \n See MPFJob.jobName for description. \n \n \n dataUri \n String \n See MPFJob.dataUri for description. \n \n \n startFrame \n int \n The first frame number (0-based index) of the video that should be processed to look for detections. \n \n \n stopFrame \n int \n The last frame number (0-based index) of the video that should be processed to look for detections. \n \n \n jobProperties \n Map \n See MPFJob.jobProperties for description. \n \n \n mediaProperties \n Map \n \n See MPFJob.mediaProperties for description.\n \n Includes the following key-value pairs:\n \n DURATION : length of video in milliseconds \n FPS : frames per second (averaged for variable frame rate video) \n FRAME_COUNT : the number of frames in the video \n MIME_TYPE : the MIME type of the media \n FRAME_WIDTH : the width of a frame in pixels \n FRAME_HEIGHT : the height of a frame in pixels \n HAS_CONSTANT_FRAME_RATE : set to true if the video has a constant frame rate; otherwise, omitted or set to false if the video has variable frame rate or the type of frame rate cannot be determined \n \n May include the following key-value pair:\n \n ROTATION : A floating point value in the interval [0.0, 360.0) indicating the orientation of the media in degrees in the counter-clockwise direction. In order to view the media in the upright orientation, it must be rotated the given number of degrees in the clockwise direction. \n \n \n \n \n track \n MPFVideoTrack \n An MPFVideoTrack from the previous pipeline stage. Provided when feed forward is enabled. See Feed Forward Guide . \n \n IMPORTANT: FRAME_INTERVAL is a common job property that many components support. For frame intervals greater than 1, the component must look for detections starting with the first frame, and then skip frames as specified by the frame interval, until or before it reaches the stop frame. For example, given a start frame of 0, a stop frame of 99, and a frame interval of 2, then the detection component must look for objects in frames numbered 0, 2, 4, 6, ..., 98.",
+ "text": "Extends MPFJob Class containing data used for detection of objects in video files. Constructor(s): public MPFVideoJob(\n String jobName,\n String dataUri,\n final Map jobProperties,\n final Map mediaProperties,\n int startFrame,\n int stopFrame) public MPFVideoJob(\n String jobName,\n String dataUri,\n final Map jobProperties,\n final Map mediaProperties,\n int startFrame,\n int stopFrame,\n MPFVideoTrack track) Members: \n \n \n Member \n Data Type \n Description \n \n \n \n \n jobName \n String \n See MPFJob.jobName for description. \n \n \n dataUri \n String \n See MPFJob.dataUri for description. \n \n \n startFrame \n int \n The first frame number (0-based index) of the video that should be processed to look for detections. \n \n \n stopFrame \n int \n The last frame number (0-based index) of the video that should be processed to look for detections. \n \n \n jobProperties \n Map \n See MPFJob.jobProperties for description. \n \n \n mediaProperties \n Map \n \n See MPFJob.mediaProperties for description.\n \n Includes the following key-value pairs:\n \n DURATION : length of video in milliseconds \n FPS : frames per second (averaged for variable frame rate video) \n FRAME_COUNT : the number of frames in the video \n MIME_TYPE : the MIME type of the media \n FRAME_WIDTH : the width of a frame in pixels \n FRAME_HEIGHT : the height of a frame in pixels \n HAS_CONSTANT_FRAME_RATE : set to true if the video has a constant frame rate; otherwise, omitted or set to false if the video has variable frame rate or the type of frame rate cannot be determined \n \n May include the following key-value pair:\n \n ROTATION : A floating point value in the interval [0.0, 360.0) indicating the orientation of the media in degrees in the counter-clockwise direction. In order to view the media in the upright orientation, it must be rotated the given number of degrees in the clockwise direction. \n \n \n \n \n track \n MPFVideoTrack \n An MPFVideoTrack from the previous pipeline stage. Provided when feed forward is enabled. See Feed Forward Guide . \n \n IMPORTANT: FRAME_INTERVAL is a common job property that many components support. For frame intervals greater than 1, the component must look for detections starting with the first frame, and then skip frames as specified by the frame interval, until or before it reaches the stop frame. For example, given a start frame of 0, a stop frame of 99, and a frame interval of 2, then the detection component must look for objects in frames numbered 0, 2, 4, 6, ..., 98.",
"title": "MPFVideoJob"
},
{
"location": "/Java-Batch-Component-API/index.html#mpfaudiojob",
- "text": "Extends MPFJob Class containing data used for detection of objects in audio files. Constructor(s): public MPFAudioJob(\n String jobName,\n String dataUri,\n final Map jobProperties,\n final Map