diff --git a/docs/site/index.html b/docs/site/index.html
index 2bcfabafef15..19fcf1e72557 100644
--- a/docs/site/index.html
+++ b/docs/site/index.html
@@ -388,5 +388,5 @@
Overview
diff --git a/docs/site/search/search_index.json b/docs/site/search/search_index.json
index 3d64222f4621..326f15147e7a 100644
--- a/docs/site/search/search_index.json
+++ b/docs/site/search/search_index.json
@@ -152,12 +152,12 @@
},
{
"location": "/Admin-Guide/index.html",
- "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nWARNING:\n Please refer to the \nUser Configuration\n section for changing the default user passwords.\n\n\n\nINFO:\n This document refers to components and pipelines that are no longer supported by OpenMPF; however, the images and general content still reflect the appearance and usage of the OpenMPF web UI and its features.\n\n\n\nWeb UI\n\n\nThe login procedure, as well as all of the pages accessible through the Workflow Manager sidebar, are the same for admin and non-admin users. Refer to the \nUser Guide\n for more information. The default account for an admin user has the username \"admin\" and password \"mpfadm\". \n\n\nWe highly recommend changing the default username and password settings for any environment which is exposed on a network, especially production environments. The default settings are public knowledge, which could be a security risk. Please refer to the \nUser Configuration\n section below.\n\n\nThis document will cover the additional functionality permitted to admin users through the Admin Console pages.\n\n\nDashboard\n\n\nThe landing page for an admin user is the Job Status page:\n\n\n\n\nThe Job Status page displays a summary of the status for all jobs run by any user in the past. The current status and progress of any running job can be monitored from this view, which is updated automatically.\n\n\nProperties Settings\n\n\nThis page allows an admin user to view and edit various OpenMPF properties:\n\n\n\n\nAn admin user can click inside of the \"Value\" field for any of the properties and type a new value. Doing so will change the color of the property to orange and display an orange icon to the right of the property name.\n\n\nNote that if the admin user types in the original value of the property, or clicks the \"Reset\" button, then it will return back to the normal coloration.\n\n\nWARNING:\n Changing the value of these properties can prevent the workflow manager from running after the web server is restarted. Also, no validation checks are performed on the user-provided values. Proceed with caution!\n\n\n\nAt the bottom of the properties table is the \"Save Properties\" button. The number of modified properties is shown in parentheses. Clicking the button will make the necessary changes to the properties file on the file system, but the changes will not take effect until the workflow manager is restarted. The saved properties will be colored blue and a blue icon will be displayed to the right of the property name. Additionally, a notification will appear at the top of the page alerting all system users that a restart is required:\n\n\n\n\nUser Configuration\n\n\nEvery time the Workflow Manager starts it will attempt to create accounts for the users listed in the \nuser.properties\n file. At runtime this file is extracted to \n$MPF_HOME/config\n on the machine running the Workflow Manager. For every user listed in that file, the Workflow Manager will create that user account if a user with the same name doesn't already exists in the SQL database. 
By default, that file contains two entries, one for the \"admin\" user with the \"mpfadm\" password, and one for a non-admin \"mpf\" user with the \"mpf123\" password.\n\n\nWe highly recommend modifying the \nuser.properties\n file with your own user entries before attempting to start the Workflow Manager for the first time. This will ensure that the default user accounts are not created. \n\n\nThe official way to deploy OpenMPF is to use the Docker container platform. If you are using Docker, please follow the instructions in the openmpf-docker \nREADME\n that explain how to use a \ndocker secret\n for your custom \nuser.properties\n file.\n\n\n(Optional) Configure HTTPS\n\n\nThe official way to deploy OpenMPF is to use the Docker container platform. \nIf you are using Docker, please follow the instructions in the openmpf-docker \n\nREADME\n \nthat explain how to configure HTTPS.",
+ "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nWARNING:\n Please refer to the \nUser Configuration\n section for changing the default user passwords.\n\n\n\nINFO:\n This document refers to components and pipelines that are no longer supported by OpenMPF; however, the images and general content still reflect the appearance and usage of the OpenMPF web UI and its features.\n\n\n\nWeb UI\n\n\nThe login procedure, as well as all of the pages accessible through the Workflow Manager sidebar, are the same for admin and non-admin users. Refer to the \nUser Guide\n for more information. The default account for an admin user has the username \"admin\" and password \"mpfadm\".\n\n\nWe highly recommend changing the default username and password settings for any environment which is exposed on a network, especially production environments. The default settings are public knowledge, which could be a security risk. Please refer to the \nUser Configuration\n section below.\n\n\nThis document will cover the additional functionality permitted to admin users through the Admin Console pages.\n\n\nDashboard\n\n\nThe landing page for an admin user is the Job Status page:\n\n\n\n\nThe Job Status page displays a summary of the status for all jobs run by any user in the past. The current status and progress of any running job can be monitored from this view, which is updated automatically.\n\n\nProperties Settings\n\n\nThis page allows an admin user to view and edit various OpenMPF properties:\n\n\n\n\nAn admin user can click inside of the \"Value\" field for any of the properties and type a new value. Doing so will change the color of the property to orange and display an orange icon to the right of the property name.\n\n\nNote that if the admin user types in the original value of the property, or clicks the \"Reset\" button, then it will return back to the normal coloration.\n\n\nWARNING:\n Changing the value of these properties can prevent the workflow manager from running after the web server is restarted. Also, no validation checks are performed on the user-provided values. Proceed with caution!\n\n\n\nAt the bottom of the properties table is the \"Save Properties\" button. The number of modified properties is shown in parentheses. Clicking the button will make the necessary changes to the properties file on the file system, but the changes will not take effect until the workflow manager is restarted. The saved properties will be colored blue and a blue icon will be displayed to the right of the property name. Additionally, a notification will appear at the top of the page alerting all system users that a restart is required:\n\n\n\n\nHawtio\n\n\nThe \nHawtio\n web console can be accessed by selecting \"Hawtio\" from the\n\"Configuration\" dropdown menu in the top menu bar. Hawtio exposes various management information\nand settings. It can be used to monitor the state of the ActiveMQ queues used for communication\nbetween the Workflow Manager and the components.\n\n\nUser Configuration\n\n\nEvery time the Workflow Manager starts it will attempt to create accounts for the users listed in the \nuser.properties\n file. At runtime this file is extracted to \n$MPF_HOME/config\n on the machine running the Workflow Manager. 
For every user listed in that file, the Workflow Manager will create that user account if a user with the same name doesn't already exist in the SQL database. By default, that file contains two entries, one for the \"admin\" user with the \"mpfadm\" password, and one for a non-admin \"mpf\" user with the \"mpf123\" password.\n\n\nWe highly recommend modifying the \nuser.properties\n file with your own user entries before attempting to start the Workflow Manager for the first time. This will ensure that the default user accounts are not created.\n\n\nThe official way to deploy OpenMPF is to use the Docker container platform. If you are using Docker, please follow the instructions in the openmpf-docker \nREADME\n that explain how to use a \ndocker secret\n for your custom \nuser.properties\n file.\n\n\n(Optional) Configure HTTPS\n\n\nThe official way to deploy OpenMPF is to use the Docker container platform.\nIf you are using Docker, please follow the instructions in the openmpf-docker\n\nREADME\n\nthat explain how to configure HTTPS.",
"title": "Admin Guide"
},
{
"location": "/Admin-Guide/index.html#web-ui",
- "text": "The login procedure, as well as all of the pages accessible through the Workflow Manager sidebar, are the same for admin and non-admin users. Refer to the User Guide for more information. The default account for an admin user has the username \"admin\" and password \"mpfadm\". We highly recommend changing the default username and password settings for any environment which is exposed on a network, especially production environments. The default settings are public knowledge, which could be a security risk. Please refer to the User Configuration section below. This document will cover the additional functionality permitted to admin users through the Admin Console pages.",
+ "text": "The login procedure, as well as all of the pages accessible through the Workflow Manager sidebar, are the same for admin and non-admin users. Refer to the User Guide for more information. The default account for an admin user has the username \"admin\" and password \"mpfadm\". We highly recommend changing the default username and password settings for any environment which is exposed on a network, especially production environments. The default settings are public knowledge, which could be a security risk. Please refer to the User Configuration section below. This document will cover the additional functionality permitted to admin users through the Admin Console pages.",
"title": "Web UI"
},
{
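The Admin Guide entry above recommends replacing the default "admin"/"mpfadm" and "mpf"/"mpf123" accounts by supplying a custom user.properties file as a docker secret. A minimal sketch of what that might look like, assuming an entry format of <username>=<role>,<password> and illustrative file, user, and secret names throughout (the openmpf-docker README is authoritative for both the file syntax and the secret wiring):

    # user.properties -- hypothetical entries replacing the defaults
    jane.admin=admin,a-strong-admin-password
    jane=user,a-strong-user-password

    # docker-compose.yml fragment mounting the file as a secret
    secrets:
      user_properties:
        file: ./user.properties
    services:
      workflow-manager:
        secrets:
          - user_properties
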
@@ -170,19 +170,24 @@
"text": "This page allows an admin user to view and edit various OpenMPF properties: An admin user can click inside of the \"Value\" field for any of the properties and type a new value. Doing so will change the color of the property to orange and display an orange icon to the right of the property name. Note that if the admin user types in the original value of the property, or clicks the \"Reset\" button, then it will return back to the normal coloration. WARNING: Changing the value of these properties can prevent the workflow manager from running after the web server is restarted. Also, no validation checks are performed on the user-provided values. Proceed with caution! At the bottom of the properties table is the \"Save Properties\" button. The number of modified properties is shown in parentheses. Clicking the button will make the necessary changes to the properties file on the file system, but the changes will not take effect until the workflow manager is restarted. The saved properties will be colored blue and a blue icon will be displayed to the right of the property name. Additionally, a notification will appear at the top of the page alerting all system users that a restart is required:",
"title": "Properties Settings"
},
+ {
+ "location": "/Admin-Guide/index.html#hawtio",
+ "text": "The Hawtio web console can be accessed by selecting \"Hawtio\" from the\n\"Configuration\" dropdown menu in the top menu bar. Hawtio exposes various management information\nand settings. It can be used to monitor the state of the ActiveMQ queues used for communication\nbetween the Workflow Manager and the components.",
+ "title": "Hawtio"
+ },
{
"location": "/Admin-Guide/index.html#user-configuration",
- "text": "Every time the Workflow Manager starts it will attempt to create accounts for the users listed in the user.properties file. At runtime this file is extracted to $MPF_HOME/config on the machine running the Workflow Manager. For every user listed in that file, the Workflow Manager will create that user account if a user with the same name doesn't already exists in the SQL database. By default, that file contains two entries, one for the \"admin\" user with the \"mpfadm\" password, and one for a non-admin \"mpf\" user with the \"mpf123\" password. We highly recommend modifying the user.properties file with your own user entries before attempting to start the Workflow Manager for the first time. This will ensure that the default user accounts are not created. The official way to deploy OpenMPF is to use the Docker container platform. If you are using Docker, please follow the instructions in the openmpf-docker README that explain how to use a docker secret for your custom user.properties file.",
+ "text": "Every time the Workflow Manager starts it will attempt to create accounts for the users listed in the user.properties file. At runtime this file is extracted to $MPF_HOME/config on the machine running the Workflow Manager. For every user listed in that file, the Workflow Manager will create that user account if a user with the same name doesn't already exists in the SQL database. By default, that file contains two entries, one for the \"admin\" user with the \"mpfadm\" password, and one for a non-admin \"mpf\" user with the \"mpf123\" password. We highly recommend modifying the user.properties file with your own user entries before attempting to start the Workflow Manager for the first time. This will ensure that the default user accounts are not created. The official way to deploy OpenMPF is to use the Docker container platform. If you are using Docker, please follow the instructions in the openmpf-docker README that explain how to use a docker secret for your custom user.properties file.",
"title": "User Configuration"
},
{
"location": "/Admin-Guide/index.html#optional-configure-https",
- "text": "The official way to deploy OpenMPF is to use the Docker container platform. \nIf you are using Docker, please follow the instructions in the openmpf-docker README \nthat explain how to configure HTTPS.",
+ "text": "The official way to deploy OpenMPF is to use the Docker container platform.\nIf you are using Docker, please follow the instructions in the openmpf-docker README \nthat explain how to configure HTTPS.",
"title": "(Optional) Configure HTTPS"
},
{
"location": "/User-Guide/index.html",
- "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nINFO:\n This document refers to components and pipelines that are no longer supported by OpenMPF; however, the images and general content still reflect the appearance and usage of the OpenMPF web UI and its features.\n\n\n\nGeneral\n\n\nThe Open Media Processing Framework (OpenMPF) can be used in three ways:\n\n\n\n\nThrough the OpenMPF Web user interface (UI)\n\n\nThrough the \nREST API endpoints\n\n\nThrough the \nCLI Runner\n\n\n\n\nAccessing the Web UI\n\n\nOn the server hosting the Open Media Processing Framework, the Web UI is accessible at http://localhost:8080/workflow-manager. To access it from other machines, substitute the hostname or IP address of the master node server in place of \"localhost\".\n\n\nThe OpenMPF user interface was designed and tested for use with Chrome and Firefox. It has not been tested with other browsers. Attempting to use an unsupported browser will result in a warning.\n\n\nLogging In\n\n\nThe OpenMPF Web UI requires user authentication and provides two default accounts: \"mpf\" and \"admin\". The password for the \"mpf\" user is \"mpf123\". These accounts are used to assign user or admin roles for OpenMPF cluster management. Note that an administrator can remove these accounts and/or add new ones using a command line tool. Refer to the \nAdmin Guide\n for features available to an admin user.\n\n\n\n\nThe landing page for a user is the Job Status page:\n\n\n\n\nLogging out\n\n\nTo log out a user can click the down arrow associated with the user icon at the top right hand corner of the page and then select \"Logout\":\n\n\n\n\nUser (Non-Admin) Features\n\n\nThe remainder of this document will describe the features available to a non-admin user.\n\n\nCreating Workflow Manager Jobs\n\n\nA \"job\" consists of a set of image, video, or audio files and a set of exploitation algorithms that will operate on those files. A job is created by assigning input media file(s) to a pipeline. A pipeline specifies the order in which processing steps are performed. Each step consists of a single task and each task consists of one or more actions which may be performed in parallel. The following sections describe the UI views associated with the different aspects of job creation and job execution.\n\n\nCreate Job\n\n\nThis is the primary page for creating jobs. Creating a job consists of uploading and selecting files as well as a pipeline and job priority.\n\n\n\n\nUploading Files\n\n\nSelecting a directory in the File Manager will display all files in that directory. The user can use previously uploaded files, or to choose from the icon bar at the bottom of the panel:\n\n\n Create New Folder\n\n Add Local Files\n\n Upload from URL\n\n Refresh\n\n\nNote that the first three options are only available if the \"remote-media\" directory or one of its subdirectories is selected. That directory resides in the OpenMPF share directory. The full path is shown in the footer of the File Manager section.\n\n\nClicking the \"Add Local Files\" icon will display a file browser dialog so that the user can select and upload one or more files from their local machine. The files will be uploaded to the selected directory. 
The upload progress dialog will display a preview of each file (if possible) and whether or not each file is uploaded successfully.\n\n\nClicking the \"Create New Folder\" icon will allow the user to create a new directory within the one currently selected. If the user has selected \"remote-media\", then adding a directory called \"Test Data\" will place it within \"remote-media\". \"Test Data\" will appear as a subdirectory in the directory tree shown in the web UI. If the user then clicks on \"Test Data\" and then the \"Add Local Files\" button the user can upload files to that specific directory. In the screenshot below, \"lena.png\" has been uploaded to the parent \"remote-media\" directory.\n\n\n\n\nClicking the \"Upload from URL\" icon enables the user to specify URLs pointing to remote media. Each URL must appear on a new line. Note that if a URL to a video is submitted then it must be a direct link to the video file. Specifying a URL to a YouTube HTML page, for example, will not work.\n\n\n\n\nClicking the \"Refresh\" icon updates the displayed file tree from the file system. Use this if an external process has added or removed files to or from the underlying file system.\n\n\nCreating Jobs\n\n\nCreating a job consists of selecting files as well as a pipeline and job priority.\n\n\n\n\nFiles are selected by first clicking the name of a directory to populate the files table in the center of the UI and then clicking the checkbox next to the file. Multiple files can be selected, including files from different directories. Also, the contents of an entire directory, and its subdirectories, can be selected by clicking the checkbox next to the parent directory name. To review which files have been selected, click the \"View\" button shown to the right of the \"# Files\" indicator. If there are many files in a directory, you may need to page through the directory using the page number buttons at the bottom of the center pane.\n\n\nYou can remove a file from the selected files by clicking on the red \"X\" for the individual file. You can also remove multiple files by first selecting the files using the checkboxes and then clicking on the \"Remove Checked\" button.\n\n\n\n\nThe media properties can be adjusted for individual files by clicking on the \"Set Properties\" button for that file. You can modify the properties of a group of files by clicking on the \"Set properties for Checked\" after selecting multiple files.\n\n\n\n\nAfter files have been selected it's time to assign a pipeline and job priority. The \"Select a pipeline and job priority\" section is located on the right side of the screen. Clicking on the down-arrow on the far right of the \"Select a pipeline\" area displays a drop-down menu containing the available pipelines. Click on the desired pipeline to select it. Existing pipelines provided with the system are listed in the Default Pipelines section of this document.\n\n\n\"Select job priority\" is immediately below \"Select a pipeline\" and has a similar drop-down menu. Clicking on the down-arrow on the right hand side of the \"Select job priority\" area displays the drop-down menu of available priorities. Clicking on the desired priority selects it. Priority 4 is the default value used if no priority is selected by the user. Priority 0 is the lowest priority, and priority 9 is the highest priority. When a job is executed it's divided into tasks that are each executed by a component service running on one of the nodes in the OpenMPF cluster. 
Each service executes tasks with the highest priority first. Note that a service will first complete the task it's currently processing before moving on to the next task. Thus, a long-running low-priority task may delay the execution of a high-priority task.\n\n\nAfter files have been selected and a pipeline and priority are assigned, clicking on the \"Create Job\" icon will start the job. When the job starts, the user will be shown the \"Job Status\" view.\n\n\nJob Status\n\n\nThe Job Status page displays a summary of the status for all jobs run by any user in the past. The current status and progress of any running job can be monitored from this view, which is updated automatically.\n\n\n\n\nWhen a job is COMPLETE a user can view the generated JSON output object data by clicking the \"Output Objects\" button for that job. A new tab/window will open with the detection output. The detection object output displays a formatted JSON representation of the detection results.\n\n\n{\n \"jobId\": \"localhost-11\",\n \"errors\": [],\n \"warnings\": [],\n \"objectId\": \"ef027349-8e6a-4472-a459-eba9463787f3\",\n \"pipeline\": {\n \"name\": \"OCV FACE DETECTION PIPELINE\",\n \"description\": \"Performs OpenCV face detection.\",\n \"tasks\": [\n {\n \"actionType\": \"DETECTION\",\n \"name\": \"OCV FACE DETECTION TASK\",\n \"description\": \"Performs OpenCV face detection.\",\n \"actions\": [\n {\n \"algorithm\": \"FACECV\",\n \"name\": \"OCV FACE DETECTION ACTION\",\n \"description\": \"Executes the OpenCV face detection algorithm using the default parameters.\",\n \"properties\": {}\n }\n ]\n }\n ]\n },\n \"priority\": 4,\n \"siteId\": \"mpf1\",\n \"externalJobId\": null,\n \"timeStart\": \"2021-09-07T20:57:01.073Z\",\n \"timeStop\": \"2021-09-07T20:57:02.946Z\",\n \"status\": \"COMPLETE\",\n \"algorithmProperties\": {},\n \"jobProperties\": {},\n \"environmentVariableProperties\": {},\n \"media\": [\n {\n \"mediaId\": 3,\n \"path\": \"file:///opt/mpf/share/remote-media/faces.jpg\",\n \"sha256\": \"184e9b04369248ae8a97ec2a20b1409a016e2895686f90a2a1910a0bef763d56\",\n \"mimeType\": \"image/jpeg\",\n \"mediaType\": \"IMAGE\",\n \"length\": 1,\n \"mediaMetadata\": {\n \"FRAME_HEIGHT\": \"1275\",\n \"FRAME_WIDTH\": \"1920\",\n \"MIME_TYPE\": \"image/jpeg\"\n },\n \"mediaProperties\": {},\n \"status\": \"COMPLETE\",\n \"detectionProcessingErrors\": {},\n \"markupResult\": null,\n \"output\": {\n \"FACE\": [\n {\n \"source\": \"+#OCV FACE DETECTION ACTION\",\n \"algorithm\": \"FACECV\",\n \"tracks\": [\n {\n \"id\": \"d4b4a6e870c1378a3bc85a234b6f4c881f81a14edcf858d6d256d04ad40bc175\",\n \"startOffsetFrame\": 0,\n \"stopOffsetFrame\": 0,\n \"startOffsetTime\": 0,\n \"stopOffsetTime\": 0,\n \"type\": \"FACE\",\n \"source\": \"+#OCV FACE DETECTION ACTION\",\n \"confidence\": 5,\n \"trackProperties\": {},\n \"exemplar\": {\n \"offsetFrame\": 0,\n \"offsetTime\": 0,\n \"x\": 652,\n \"y\": 212,\n \"width\": 277,\n \"height\": 277,\n \"confidence\": 5,\n \"detectionProperties\": {},\n \"artifactExtractionStatus\": \"NOT_ATTEMPTED\",\n \"artifactPath\": null\n },\n \"detections\": [\n {\n \"offsetFrame\": 0,\n \"offsetTime\": 0,\n \"x\": 652,\n \"y\": 212,\n \"width\": 277,\n \"height\": 277,\n \"confidence\": 5,\n \"detectionProperties\": {},\n \"artifactExtractionStatus\": \"NOT_ATTEMPTED\",\n \"artifactPath\": null\n }\n ]\n }\n ]\n }\n ]\n }\n }\n ]\n}\n\n\n\nA user can click the \"Cancel\" button to attempt to cancel the execution of a job before it completes. 
Note that if a service is currently processing part of a job, for example, a video segment that's part of a larger video file, then it will continue to process that part of the job until it completes or there is an error. The act of cancelling a job will prevent other parts of that job from being processed. Thus, if the \"Cancel\" button is clicked late into the job execution, or if each part of the job is already being processed by services executing in parallel, it may have no effect. Also, if the video segment size is set to a very large number, and the detection being performed is slow, then cancelling a job could take awhile.\n\n\nA user can click the \"Resubmit\" button to execute a job again. The new job execution will retain the same job id and all generated artifacts, marked up media, and detection objects will be replaced with the new results. The results of the previous job execution will no longer be available. Note that the user has the option to change the job priority when resubmitting a job.\n\n\nYou can view the results of any Media Markup by clicking on the \"Media\" button for that job. This view will display the path of the source medium and the marked up output path of any media processed using a pipeline that contains a markup action. Clicking an image will display a popup with the marked up image. You cannot view a preview for marked up videos. In any case, the marked up data can be downloaded to the machine running the web browser by clicking the \"Download\" button.\n\n\n\n\nCreate Custom Pipelines\n\n\nA pipeline consists of a series of tasks executed sequentially. A task consists of a single action or a set of two or more actions performed in parallel. An action is the execution of an algorithm. The ability to arrange tasks and actions in various ways provides a great deal of flexibility when creating pipelines. Users may combine pre-existing tasks in different ways, or create new tasks based on the pre-existing actions.\n\n\nSelecting \"Pipelines\" from the \"Configuration\" dropdown menu in the top menu bar brings up the Pipeline Creation View, which enables users to create new pipelines. To create a new action, the user can scroll to the \"Create A New Action\" section of the page and select the desired algorithm from the \"Select an Algorithm\" dropdown menu:\n\n\n\n\nSelecting an algorithm will bring up a scrollable table of properties associated with the algorithm, including each property's name, description, data type, and an editable field allowing the user to set a custom value. The user may enter values for only those properties that they wish to change; any property value fields left blank will result in default values being used for those properties. For example, a custom action may be created based on the OpenCV face detection component to scan for faces equal to or exceeding a size of 100x100 pixels.\n\n\nWhen done editing the property values, the user can click the \"Create Action\" button, enter a name and description for the action (both are required), and then click the \"Create\" button. The action will then be listed in the \"Available Actions\" table and also in the \"Select an Action\" dropdown menu used for task creation.\n\n\n\n\nTo create a new task, the user can scroll to the \"Create A New Task\" section of the page:\n\n\n\n\nThe user can use the \"Select an Action\" dropdown menu to select the desired action and then click \"Add Action to Task\". The user can follow this procedure to add additional actions to the task, if desired. 
Clicking on the \"Remove\" button next to an added action will remove it from the task. When the user is finished adding actions the user can click \"Create Task\", enter a name and description for the task (both are required), and then click the \"Create\" button. The task will be listed in the \"Available Tasks\" table as well as in the \"Select a Task\" dropdown menu used for pipeline creation.\n\n\n\n\nTo build a new pipeline, the user can scroll down to the \"Create A New Pipeline\" section of the page:\n\n\n\n\nThe user can use the \"Select a Task\" dropdown menu to select the first task and then click \"Add Task to Pipeline\". The user can follow this procedure to add additional tasks to the pipeline, if desired. Clicking on the \"Remove\" button next to an added task will remove it from the pipeline. When the user is finished adding tasks the user can click \"Create Pipeline\", enter a name and description for the pipeline (both are required), and then click the \"Create\" button. The pipeline will be listed in the \"Available Pipelines\" table.\n\n\n\n\nAll pipelines successfully created in this view will also appear in the pipeline drop down selection menus on any job creation page:\n\n\n\n\n\n\nNOTE: Pipeline, task, and action names are case-insensitive. All letters will be converted to uppercase.\n\n\n\n\nLogs\n\n\nThis page allows a user to view the various log files that are generated by system processes running on the various nodes in the OpenMPF cluster. A log file can be selected by first selecting a host from the \"Available Hosts\" drop-down and then selecting a log file from the \"Available Logs\" drop-down. The information in the log can be filtered for display based on the following log levels: ALL, TRACE, DEBUG, INFO, WARN, ERROR, or FATAL. Choosing a successive log level displays all information at that level and levels below (e.g., choosing WARN will cause all WARN, INFO, DEBUG, and TRACE information to be displayed, but will filter out ERROR and FATAL information).\n\n\n\n\nIn general, all services of the same component type running on the same node write log messages to the same file. For example, all OCV face detection services on somehost-7-mpfd2 write log messages to the same \"ocv-face-detection\" log file. All OCV face detection services on somehost-7-mpfd3 write log messages to a different \"ocv-face-detection\" log file.\n\n\nNote that only the master node will have the \"workflow-manager\" log. This is because the workflow manager only runs on the master node. The same is true for the \"activemq\" logs.\n\n\nThe \"node-manager-startup\" and \"node-manager\" logs will appear for every node in a non-Docker OpenMPF cluster. The \"node-manager-startup\" log captures information about the nodemanager startup process, such as if any errors occurred. The \"node-manager\" log captures information about node manager execution, such as starting and stopping services.\n\n\nThe \"detection\" log captures information about initializing C++ detection components and how they handle job request and response messages.\n\n\nProperties Settings\n\n\nThis page allows a user to view the various OpenMPF properties configured automatically or by an admin user:\n\n\n\n\nStatistics\n\n\nThe \"Jobs\" tab on this page allows a user to view a bar graph representing the time it took to execute the longest running job for a given pipeline. Pipelines that do not have bars have not been used to run any jobs yet. 
Job statistics are preserved when the workflow manager is restarted.\n\n\n\n\nFor example, the DLIB FACE DETECTION PIPELINE was run twice. Note that the Y-axis in the bar graph has a logarithmic scale. Hovering the mouse over any bar in the graph will show more information. Information about each pipeline is listed below the graph.\n\n\nThe \"Processes\" tab on this page allows a user to view a table with information about the runtime of various internal workflow manager operations. The \"Count\" field represents the number of times each operation was run. The min, max, and mean are calculated over the set of times each operation was performed. Runtime information is reset when the workflow manager is restarted.\n\n\n\n\nREST API\n\n\nThis page allows a user to try out the \nvarious REST API endpoints\n provided by the workflow manager. It is intended to serve as a learning tool for technical users who wish to design and build systems that interact with the OpenMPF.\n\n\nAfter selecting a functional category, such as \"meta\", \"jobs\", \"statistics\", \"nodes\", \"pipelines\", or \"system-message\", each REST endpoint for that category is shown in a list. Selecting one of them will cause it to expand and reveal more information about the request and response structures. If the request takes any parameters then a section will appear that allows the user to manually specify them.\n\n\n\n\nIn the example above, the \"/rest/jobs/{id}\" endpoint was selected. It takes a required \"id\" parameter that corresponds to a previously run job and returns a JSON representation of that job's information. The screenshot below shows the result of specifying an \"id\" of \"1\", providing the \"mpf\" user credentials when prompted, and then clicking the \"Try it out!\" button:\n\n\n\n\nThe HTTP response information is shown below the \"Try it out!\" button. Note that the structure of the \"Response Body\" is the same as the response model shown in the \"Response Class\" directly underneath the \"/rest/jobs/{id}\" label.",
+ "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\nINFO:\n This document refers to components and pipelines that are no longer supported by OpenMPF; however, the images and general content still reflect the appearance and usage of the OpenMPF web UI and its features.\n\n\n\nGeneral\n\n\nThe Open Media Processing Framework (OpenMPF) can be used in three ways:\n\n\n\n\nThrough the OpenMPF Web user interface (UI)\n\n\nThrough the \nREST API endpoints\n\n\nThrough the \nCLI Runner\n\n\n\n\nAccessing the Web UI\n\n\nOn the server hosting the Open Media Processing Framework, the Web UI is accessible at http://localhost:8080. To access it from other machines, substitute the hostname or IP address of the master node server in place of \"localhost\".\n\n\nThe OpenMPF user interface was designed and tested for use with Chrome and Firefox. It has not been tested with other browsers. Attempting to use an unsupported browser will result in a warning.\n\n\nLogging In\n\n\nThe OpenMPF Web UI requires user authentication and provides two default accounts: \"mpf\" and \"admin\". The password for the \"mpf\" user is \"mpf123\". These accounts are used to assign user or admin roles for OpenMPF cluster management. Note that an administrator can remove these accounts and/or add new ones using a command line tool. Refer to the \nAdmin Guide\n for features available to an admin user.\n\n\n\n\nThe landing page for a user is the Job Status page:\n\n\n\n\nLogging out\n\n\nTo log out a user can click the down arrow associated with the user icon at the top right hand corner of the page and then select \"Logout\":\n\n\n\n\nUser (Non-Admin) Features\n\n\nThe remainder of this document will describe the features available to a non-admin user.\n\n\nCreating Workflow Manager Jobs\n\n\nA \"job\" consists of a set of image, video, or audio files and a set of exploitation algorithms that will operate on those files. A job is created by assigning input media file(s) to a pipeline. A pipeline specifies the order in which processing steps are performed. Each step consists of a single task and each task consists of one or more actions which may be performed in parallel. The following sections describe the UI views associated with the different aspects of job creation and job execution.\n\n\nCreate Job\n\n\nThis is the primary page for creating jobs. Creating a job consists of uploading and selecting files as well as a pipeline and job priority.\n\n\n\n\nUploading Files\n\n\nSelecting a directory in the File Manager will display all files in that directory. The user can use previously uploaded files, or to choose from the icon bar at the bottom of the panel:\n\n\n Create New Folder\n\n Add Local Files\n\n Upload from URL\n\n Refresh\n\n\nNote that the first three options are only available if the \"remote-media\" directory or one of its subdirectories is selected. That directory resides in the OpenMPF share directory. The full path is shown in the footer of the File Manager section.\n\n\nClicking the \"Add Local Files\" icon will display a file browser dialog so that the user can select and upload one or more files from their local machine. The files will be uploaded to the selected directory. 
The upload progress dialog will display a preview of each file (if possible) and whether or not each file is uploaded successfully.\n\n\nClicking the \"Create New Folder\" icon will allow the user to create a new directory within the one currently selected. If the user has selected \"remote-media\", then adding a directory called \"Test Data\" will place it within \"remote-media\". \"Test Data\" will appear as a subdirectory in the directory tree shown in the web UI. If the user then clicks on \"Test Data\" and then the \"Add Local Files\" button the user can upload files to that specific directory. In the screenshot below, \"lena.png\" has been uploaded to the parent \"remote-media\" directory.\n\n\n\n\nClicking the \"Upload from URL\" icon enables the user to specify URLs pointing to remote media. Each URL must appear on a new line. Note that if a URL to a video is submitted then it must be a direct link to the video file. Specifying a URL to a YouTube HTML page, for example, will not work.\n\n\n\n\nClicking the \"Refresh\" icon updates the displayed file tree from the file system. Use this if an external process has added or removed files to or from the underlying file system.\n\n\nCreating Jobs\n\n\nCreating a job consists of selecting files as well as a pipeline and job priority.\n\n\n\n\nFiles are selected by first clicking the name of a directory to populate the files table in the center of the UI and then clicking the checkbox next to the file. Multiple files can be selected, including files from different directories. Also, the contents of an entire directory, and its subdirectories, can be selected by clicking the checkbox next to the parent directory name. To review which files have been selected, click the \"View\" button shown to the right of the \"# Files\" indicator. If there are many files in a directory, you may need to page through the directory using the page number buttons at the bottom of the center pane.\n\n\nYou can remove a file from the selected files by clicking on the red \"X\" for the individual file. You can also remove multiple files by first selecting the files using the checkboxes and then clicking on the \"Remove Checked\" button.\n\n\n\n\nThe media properties can be adjusted for individual files by clicking on the \"Set Properties\" button for that file. You can modify the properties of a group of files by clicking on the \"Set properties for Checked\" button after selecting multiple files.\n\n\n\n\nAfter files have been selected it's time to assign a pipeline and job priority. The \"Select a pipeline and job priority\" section is located on the right side of the screen. Clicking on the down-arrow on the far right of the \"Select a pipeline\" area displays a drop-down menu containing the available pipelines. Click on the desired pipeline to select it. Existing pipelines provided with the system are listed in the Default Pipelines section of this document.\n\n\n\"Select job priority\" is immediately below \"Select a pipeline\" and has a similar drop-down menu. Clicking on the down-arrow on the right hand side of the \"Select job priority\" area displays the drop-down menu of available priorities. Clicking on the desired priority selects it. Priority 4 is the default value used if no priority is selected by the user. Priority 0 is the lowest priority, and priority 9 is the highest priority. When a job is executed it's divided into tasks that are each executed by a component service running on one of the nodes in the OpenMPF cluster. 
Each service executes tasks with the highest priority first. Note that a service will first complete the task it's currently processing before moving on to the next task. Thus, a long-running low-priority task may delay the execution of a high-priority task.\n\n\nAfter files have been selected and a pipeline and priority are assigned, clicking on the \"Create Job\" icon will start the job. When the job starts, the user will be shown the \"Job Status\" view.\n\n\nJob Status\n\n\nThe Job Status page displays a summary of the status for all jobs run by any user in the past. The current status and progress of any running job can be monitored from this view, which is updated automatically.\n\n\n\n\nWhen a job is COMPLETE a user can view the generated JSON output object data by clicking the \"Output Objects\" button for that job. A new tab/window will open with the detection output. The detection object output displays a formatted JSON representation of the detection results.\n\n\n{\n \"jobId\": \"localhost-11\",\n \"errors\": [],\n \"warnings\": [],\n \"objectId\": \"ef027349-8e6a-4472-a459-eba9463787f3\",\n \"pipeline\": {\n \"name\": \"OCV FACE DETECTION PIPELINE\",\n \"description\": \"Performs OpenCV face detection.\",\n \"tasks\": [\n {\n \"actionType\": \"DETECTION\",\n \"name\": \"OCV FACE DETECTION TASK\",\n \"description\": \"Performs OpenCV face detection.\",\n \"actions\": [\n {\n \"algorithm\": \"FACECV\",\n \"name\": \"OCV FACE DETECTION ACTION\",\n \"description\": \"Executes the OpenCV face detection algorithm using the default parameters.\",\n \"properties\": {}\n }\n ]\n }\n ]\n },\n \"priority\": 4,\n \"siteId\": \"mpf1\",\n \"externalJobId\": null,\n \"timeStart\": \"2021-09-07T20:57:01.073Z\",\n \"timeStop\": \"2021-09-07T20:57:02.946Z\",\n \"status\": \"COMPLETE\",\n \"algorithmProperties\": {},\n \"jobProperties\": {},\n \"environmentVariableProperties\": {},\n \"media\": [\n {\n \"mediaId\": 3,\n \"path\": \"file:///opt/mpf/share/remote-media/faces.jpg\",\n \"sha256\": \"184e9b04369248ae8a97ec2a20b1409a016e2895686f90a2a1910a0bef763d56\",\n \"mimeType\": \"image/jpeg\",\n \"mediaType\": \"IMAGE\",\n \"length\": 1,\n \"mediaMetadata\": {\n \"FRAME_HEIGHT\": \"1275\",\n \"FRAME_WIDTH\": \"1920\",\n \"MIME_TYPE\": \"image/jpeg\"\n },\n \"mediaProperties\": {},\n \"status\": \"COMPLETE\",\n \"detectionProcessingErrors\": {},\n \"markupResult\": null,\n \"output\": {\n \"FACE\": [\n {\n \"source\": \"+#OCV FACE DETECTION ACTION\",\n \"algorithm\": \"FACECV\",\n \"tracks\": [\n {\n \"id\": \"d4b4a6e870c1378a3bc85a234b6f4c881f81a14edcf858d6d256d04ad40bc175\",\n \"startOffsetFrame\": 0,\n \"stopOffsetFrame\": 0,\n \"startOffsetTime\": 0,\n \"stopOffsetTime\": 0,\n \"type\": \"FACE\",\n \"source\": \"+#OCV FACE DETECTION ACTION\",\n \"confidence\": 5,\n \"trackProperties\": {},\n \"exemplar\": {\n \"offsetFrame\": 0,\n \"offsetTime\": 0,\n \"x\": 652,\n \"y\": 212,\n \"width\": 277,\n \"height\": 277,\n \"confidence\": 5,\n \"detectionProperties\": {},\n \"artifactExtractionStatus\": \"NOT_ATTEMPTED\",\n \"artifactPath\": null\n },\n \"detections\": [\n {\n \"offsetFrame\": 0,\n \"offsetTime\": 0,\n \"x\": 652,\n \"y\": 212,\n \"width\": 277,\n \"height\": 277,\n \"confidence\": 5,\n \"detectionProperties\": {},\n \"artifactExtractionStatus\": \"NOT_ATTEMPTED\",\n \"artifactPath\": null\n }\n ]\n }\n ]\n }\n ]\n }\n }\n ]\n}\n\n\n\nA user can click the \"Cancel\" button to attempt to cancel the execution of a job before it completes. 
Note that if a service is currently processing part of a job, for example, a video segment that's part of a larger video file, then it will continue to process that part of the job until it completes or there is an error. The act of cancelling a job will prevent other parts of that job from being processed. Thus, if the \"Cancel\" button is clicked late into the job execution, or if each part of the job is already being processed by services executing in parallel, it may have no effect. Also, if the video segment size is set to a very large number, and the detection being performed is slow, then cancelling a job could take a while.\n\n\nA user can click the \"Resubmit\" button to execute a job again. The new job execution will retain the same job id, and all generated artifacts, marked up media, and detection objects will be replaced with the new results. The results of the previous job execution will no longer be available. Note that the user has the option to change the job priority when resubmitting a job.\n\n\nYou can view the results of any Media Markup by clicking on the \"Media\" button for that job. This view will display the path of the source medium and the marked up output path of any media processed using a pipeline that contains a markup action. Clicking an image will display a popup with the marked up image. You cannot view a preview for marked up videos. In any case, the marked up data can be downloaded to the machine running the web browser by clicking the \"Download\" button.\n\n\n\n\nCreate Custom Pipelines\n\n\nA pipeline consists of a series of tasks executed sequentially. A task consists of a single action or a set of two or more actions performed in parallel. An action is the execution of an algorithm. The ability to arrange tasks and actions in various ways provides a great deal of flexibility when creating pipelines. Users may combine pre-existing tasks in different ways, or create new tasks based on the pre-existing actions.\n\n\nSelecting \"Pipelines\" from the \"Configuration\" dropdown menu in the top menu bar brings up the Pipeline Creation View, which enables users to create new pipelines. To create a new action, the user can scroll to the \"Create A New Action\" section of the page and select the desired algorithm from the \"Select an Algorithm\" dropdown menu:\n\n\n\n\nSelecting an algorithm will bring up a scrollable table of properties associated with the algorithm, including each property's name, description, data type, and an editable field allowing the user to set a custom value. The user may enter values for only those properties that they wish to change; any property value fields left blank will result in default values being used for those properties. For example, a custom action may be created based on the OpenCV face detection component to scan for faces equal to or exceeding a size of 100x100 pixels.\n\n\nWhen done editing the property values, the user can click the \"Create Action\" button, enter a name and description for the action (both are required), and then click the \"Create\" button. The action will then be listed in the \"Available Actions\" table and also in the \"Select an Action\" dropdown menu used for task creation.\n\n\n\n\nTo create a new task, the user can scroll to the \"Create A New Task\" section of the page:\n\n\n\n\nThe user can use the \"Select an Action\" dropdown menu to select the desired action and then click \"Add Action to Task\". The user can follow this procedure to add additional actions to the task, if desired. 
Clicking on the \"Remove\" button next to an added action will remove it from the task. When the user is finished adding actions the user can click \"Create Task\", enter a name and description for the task (both are required), and then click the \"Create\" button. The task will be listed in the \"Available Tasks\" table as well as in the \"Select a Task\" dropdown menu used for pipeline creation.\n\n\n\n\nTo build a new pipeline, the user can scroll down to the \"Create A New Pipeline\" section of the page:\n\n\n\n\nThe user can use the \"Select a Task\" dropdown menu to select the first task and then click \"Add Task to Pipeline\". The user can follow this procedure to add additional tasks to the pipeline, if desired. Clicking on the \"Remove\" button next to an added task will remove it from the pipeline. When the user is finished adding tasks the user can click \"Create Pipeline\", enter a name and description for the pipeline (both are required), and then click the \"Create\" button. The pipeline will be listed in the \"Available Pipelines\" table.\n\n\n\n\nAll pipelines successfully created in this view will also appear in the pipeline drop down selection menus on any job creation page:\n\n\n\n\n\n\nNOTE: Pipeline, task, and action names are case-insensitive. All letters will be converted to uppercase.\n\n\n\n\nLogs\n\n\nThis page allows a user to view the various log files that are generated by system processes running on the various nodes in the OpenMPF cluster. A log file can be selected by first selecting a host from the \"Available Hosts\" drop-down and then selecting a log file from the \"Available Logs\" drop-down. The information in the log can be filtered for display based on the following log levels: ALL, TRACE, DEBUG, INFO, WARN, ERROR, or FATAL. Choosing a successive log level displays all information at that level and levels below (e.g., choosing WARN will cause all WARN, INFO, DEBUG, and TRACE information to be displayed, but will filter out ERROR and FATAL information).\n\n\n\n\nIn general, all services of the same component type running on the same node write log messages to the same file. For example, all OCV face detection services on somehost-7-mpfd2 write log messages to the same \"ocv-face-detection\" log file. All OCV face detection services on somehost-7-mpfd3 write log messages to a different \"ocv-face-detection\" log file.\n\n\nNote that only the master node will have the \"workflow-manager\" log. This is because the workflow manager only runs on the master node.\n\n\nThe \"node-manager-startup\" and \"node-manager\" logs will appear for every node in a non-Docker OpenMPF cluster. The \"node-manager-startup\" log captures information about the nodemanager startup process, such as if any errors occurred. The \"node-manager\" log captures information about node manager execution, such as starting and stopping services.\n\n\nThe \"detection\" log captures information about initializing C++ detection components and how they handle job request and response messages.\n\n\nProperties Settings\n\n\nThis page allows a user to view the various OpenMPF properties configured automatically or by an admin user:\n\n\n\n\nStatistics\n\n\nThe \"Jobs\" tab on this page allows a user to view a bar graph representing the time it took to execute the longest running job for a given pipeline. Pipelines that do not have bars have not been used to run any jobs yet. 
Job statistics are preserved when the workflow manager is restarted.\n\n\n\n\nFor example, the DLIB FACE DETECTION PIPELINE was run twice. Note that the Y-axis in the bar graph has a logarithmic scale. Hovering the mouse over any bar in the graph will show more information. Information about each pipeline is listed below the graph.\n\n\nThe \"Processes\" tab on this page allows a user to view a table with information about the runtime of various internal workflow manager operations. The \"Count\" field represents the number of times each operation was run. The min, max, and mean are calculated over the set of times each operation was performed. Runtime information is reset when the workflow manager is restarted.\n\n\n\n\nREST API\n\n\nThis page allows a user to try out the \nvarious REST API endpoints\n provided by the workflow manager. It is intended to serve as a learning tool for technical users who wish to design and build systems that interact with the OpenMPF.\n\n\nAfter selecting a functional category, such as \"meta\", \"jobs\", \"statistics\", \"nodes\", \"pipelines\", or \"system-message\", each REST endpoint for that category is shown in a list. Selecting one of them will cause it to expand and reveal more information about the request and response structures. If the request takes any parameters then a section will appear that allows the user to manually specify them.\n\n\n\n\nIn the example above, the \"/rest/jobs/{id}\" endpoint was selected. It takes a required \"id\" parameter that corresponds to a previously run job and returns a JSON representation of that job's information. The screenshot below shows the result of specifying an \"id\" of \"1\", providing the \"mpf\" user credentials when prompted, and then clicking the \"Try it out!\" button:\n\n\n\n\nThe HTTP response information is shown below the \"Try it out!\" button. Note that the structure of the \"Response Body\" is the same as the response model shown in the \"Response Class\" directly underneath the \"/rest/jobs/{id}\" label.",
"title": "User Guide"
},
{
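The User Guide entry above demonstrates the \"/rest/jobs/{id}\" endpoint with an \"id\" of \"1\" and the \"mpf\" user credentials. A minimal command-line sketch of the same request, assuming the endpoint accepts HTTP basic authentication (the credential prompt described in the guide suggests it does) and the default base URL quoted above:

    # Fetch the JSON representation of job 1 as the default "mpf" user
    curl -u mpf:mpf123 http://localhost:8080/rest/jobs/1
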
@@ -192,7 +197,7 @@
},
{
"location": "/User-Guide/index.html#accessing-the-web-ui",
- "text": "On the server hosting the Open Media Processing Framework, the Web UI is accessible at http://localhost:8080/workflow-manager. To access it from other machines, substitute the hostname or IP address of the master node server in place of \"localhost\". The OpenMPF user interface was designed and tested for use with Chrome and Firefox. It has not been tested with other browsers. Attempting to use an unsupported browser will result in a warning.",
+ "text": "On the server hosting the Open Media Processing Framework, the Web UI is accessible at http://localhost:8080. To access it from other machines, substitute the hostname or IP address of the master node server in place of \"localhost\". The OpenMPF user interface was designed and tested for use with Chrome and Firefox. It has not been tested with other browsers. Attempting to use an unsupported browser will result in a warning.",
"title": "Accessing the Web UI"
},
{
@@ -242,7 +247,7 @@
},
{
"location": "/User-Guide/index.html#logs",
- "text": "This page allows a user to view the various log files that are generated by system processes running on the various nodes in the OpenMPF cluster. A log file can be selected by first selecting a host from the \"Available Hosts\" drop-down and then selecting a log file from the \"Available Logs\" drop-down. The information in the log can be filtered for display based on the following log levels: ALL, TRACE, DEBUG, INFO, WARN, ERROR, or FATAL. Choosing a successive log level displays all information at that level and levels below (e.g., choosing WARN will cause all WARN, INFO, DEBUG, and TRACE information to be displayed, but will filter out ERROR and FATAL information). In general, all services of the same component type running on the same node write log messages to the same file. For example, all OCV face detection services on somehost-7-mpfd2 write log messages to the same \"ocv-face-detection\" log file. All OCV face detection services on somehost-7-mpfd3 write log messages to a different \"ocv-face-detection\" log file. Note that only the master node will have the \"workflow-manager\" log. This is because the workflow manager only runs on the master node. The same is true for the \"activemq\" logs. The \"node-manager-startup\" and \"node-manager\" logs will appear for every node in a non-Docker OpenMPF cluster. The \"node-manager-startup\" log captures information about the nodemanager startup process, such as if any errors occurred. The \"node-manager\" log captures information about node manager execution, such as starting and stopping services. The \"detection\" log captures information about initializing C++ detection components and how they handle job request and response messages.",
+ "text": "This page allows a user to view the various log files that are generated by system processes running on the various nodes in the OpenMPF cluster. A log file can be selected by first selecting a host from the \"Available Hosts\" drop-down and then selecting a log file from the \"Available Logs\" drop-down. The information in the log can be filtered for display based on the following log levels: ALL, TRACE, DEBUG, INFO, WARN, ERROR, or FATAL. Choosing a successive log level displays all information at that level and levels below (e.g., choosing WARN will cause all WARN, INFO, DEBUG, and TRACE information to be displayed, but will filter out ERROR and FATAL information). In general, all services of the same component type running on the same node write log messages to the same file. For example, all OCV face detection services on somehost-7-mpfd2 write log messages to the same \"ocv-face-detection\" log file. All OCV face detection services on somehost-7-mpfd3 write log messages to a different \"ocv-face-detection\" log file. Note that only the master node will have the \"workflow-manager\" log. This is because the workflow manager only runs on the master node. The \"node-manager-startup\" and \"node-manager\" logs will appear for every node in a non-Docker OpenMPF cluster. The \"node-manager-startup\" log captures information about the nodemanager startup process, such as if any errors occurred. The \"node-manager\" log captures information about node manager execution, such as starting and stopping services. The \"detection\" log captures information about initializing C++ detection components and how they handle job request and response messages.",
"title": "Logs"
},
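The level filtering described in the entry above (a selected level shows that level and everything less severe below it) can be summarized in a short sketch; the ordering is taken from the description, and visible_levels is a hypothetical helper, not part of OpenMPF:

```python
# Sketch of the described filter: a selected level displays that level and
# all levels below it; "ALL" displays everything.
LEVELS = ["TRACE", "DEBUG", "INFO", "WARN", "ERROR", "FATAL"]

def visible_levels(selected: str) -> list[str]:
    if selected == "ALL":
        return LEVELS
    return LEVELS[: LEVELS.index(selected) + 1]

# Matches the WARN example: ERROR and FATAL are filtered out.
assert visible_levels("WARN") == ["TRACE", "DEBUG", "INFO", "WARN"]
```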
{
@@ -262,7 +267,7 @@
},
{
"location": "/OpenID-Connect-Guide/index.html",
- "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract,\nand is subject to the Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023\nThe MITRE Corporation. All Rights Reserved.\n\n\nOpenID Connect Overview\n\n\nWorkflow Manager can use an OpenID Connect (OIDC) provider to handle authentication for users of\nthe web UI and clients of the REST API.\n\n\nConfiguration\n\n\nIn order to use OIDC, Workflow Manager must first be registered with OIDC provider. The exact\nprocess for this varies by provider. As part of the registration process, a client ID and client\nsecret should be provided. Those values should be set in the \nOIDC_CLIENT_ID\n and\n\nOIDC_CLIENT_SECRET\n environment variables. During the registration process the provider will\nlikely request a redirect URI. The redirect URI should be set to the base URI for Workflow Manager\nwith \n/login/oauth2/code/provider\n appended.\n\n\nThe documentation for the OIDC provider should specify the base URI a client should use to\nauthenticate users. The URI should be set in the \nOIDC_ISSUER_URI\n environment variable. To verify\nthe URI is correct, check that the JSON discovery document is returned when sending an HTTP GET\nrequest to the URI with \n/.well-known/openid-configuration\n appended.\n\n\nAfter a user or REST client authenticates with the OIDC provider, Workflow Manager will check for a\nclaim with a specific value to determine if the user is authorized to access Workflow Manager and\nwith what role. The \nOIDC_USER_CLAIM_NAME\n and \nOIDC_ADMIN_CLAIM_NAME\n environment variables\nspecify the name of the claim that must be present. The \nOIDC_USER_CLAIM_VALUE\n and\n\nOIDC_ADMIN_CLAIM_VALUE\n environment variables specify the required value of the claim.\n\n\nIf Workflow Manager is configured to use OIDC, then the component services must also be configured\nto use OIDC. The component services will use OIDC if either the \nOIDC_JWT_ISSUER_URI\n or\n\nOIDC_ISSUER_URI\n environment variables are set on the component service. When a component service\nis configured to use OIDC, the \nOIDC_CLIENT_ID\n and \nOIDC_CLIENT_SECRET\n environment variables are\nused to specify the client ID and secret that will be used during component registration.\n\n\nWorkflow Manager Environment Variables\n\n\n\n\nOIDC_ISSUER_URI\n (Required): URI for the OIDC provider that will be used to authenticate users\n through the web UI. If \nOIDC_JWT_ISSUER_URI\n is not set, \nOIDC_ISSUER_URI\n will also be used to\n authenticate REST clients. The OIDC configuration endpoint must exist at the value of\n \nOIDC_ISSUER_URI\n with \n/.well-known/openid-configuration\n appended.\n\n\nOIDC_JWT_ISSUER_URI\n (Optional): Works the same way as \nOIDC_ISSUER_URI\n, except that the\n configuration will only be used to authenticate REST clients. When not provided,\n \nOIDC_ISSUER_URI\n will be used. 
This would be used when the authentication provider's endpoint\n for user authentication is different from the endpoint for authentication of REST clients.\n\n\nOIDC_CLIENT_ID\n (Required): The client ID that Workflow Manager will use to authenticate with\n the OIDC provider.\n\n\nOIDC_CLIENT_SECRET\n (Required): The client secret Workflow Manager will use to authenticate\n with the OIDC provider.\n\n\nOIDC_USER_CLAIM_NAME\n (Optional): Specifies the name of the claim from the authentication token\n that is required for a user or REST client to be granted access to Workflow Manager with the\n \nUSER\n role.\n\n\nOIDC_USER_CLAIM_VALUE\n (Optional): Specifies the required value of the claim specified in\n \nOIDC_USER_CLAIM_NAME\n. If the claim is a list, only one of the values in the list must match.\n\n\nOIDC_ADMIN_CLAIM_NAME\n (Optional): Specifies the name of the claim from the authentication token\n that is required for a user or REST client to be granted access to Workflow Manager with the\n \nADMIN\n role.\n\n\nOIDC_ADMIN_CLAIM_VALUE\n (Optional): Specifies the required value of the claim specified in\n \nOIDC_ADMIN_CLAIM_NAME\n. If the claim is a list, only one of the values in the list must match.\n\n\nOIDC_SCOPES\n (Optional): A comma-separated list of the scopes to be requested from the OIDC\n provider when authenticating a user through the web UI. The OIDC specification requires one of\n the scopes to be \nopenid\n, so if this environment variable is omitted or \nopenid\n is not in the\n list, it will be automatically added.\n\n\nOIDC_USER_NAME_ATTR\n (Optional): The name of the claim containing the user name. Defaults to\n \nsub\n.\n\n\nOIDC_REDIRECT_URI\n (Optional): Specifies the URL the user's browser will be redirected to after\n logging in to the OIDC provider. If provided, the URL must end in \n/login/oauth2/code/provider\n.\n This would generally be used when the host name that Workflow Manager uses to connect to the\n OIDC provider is different from the OIDC provider's public host name. The value can use the\n \ntemplate variables supported by Spring.\n\n\n\n\nComponent Environment Variables\n\n\n\n\nOIDC_JWT_ISSUER_URI\n or \nOIDC_ISSUER_URI\n (Required): URI for the OIDC provider that will be used\n to authenticate REST clients. The OIDC configuration endpoint must exist at the value of this\n environment variable with \n/.well-known/openid-configuration\n appended. If both environment\n variables are provided, \nOIDC_JWT_ISSUER_URI\n will be used. If \nOIDC_JWT_ISSUER_URI\n is set on\n Workflow Manager, it should be set to the same value on the component services. If\n \nOIDC_JWT_ISSUER_URI\n is not set on Workflow Manager, \nOIDC_ISSUER_URI\n should be set to the\n same value on Workflow Manager and the component services. When either environment variable is\n set, the \nWFM_USER\n and \nWFM_PASSWORD\n environment variables are ignored.\n\n\nOIDC_CLIENT_ID\n (Required): The client ID that the component service will use when registering\n the component with Workflow Manager.\n\n\nOIDC_CLIENT_SECRET\n (Required): The client secret that the component service will use when\n registering the component with Workflow Manager.\n\n\n\n\nExample with Keycloak\n\n\nThe following example explains how to test Workflow Manager with Keycloak as the OIDC provider.\nIt is just an example and should not be used in production.\n\n\n1. Get the Docker gateway IP address by running the command below. 
It will be used in later steps.\n\n\ndocker network inspect --format '{{(index .IPAM.Config 0).Gateway}}' bridge\n\n\n\n2. Start Keycloak in development mode using the command below. Do not start Workflow Manager yet.\n The values for the OIDC environment variables are dependent on how you set up Keycloak in the\n following steps.\n\n\ndocker run -p 9090:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin \\\n quay.io/keycloak/keycloak:21.1.1 start-dev\n\n\n\n3. Go to \nhttp://localhost:9090/admin\n in a browser and login with username \nadmin\n and\n password \nadmin\n.\n\n\n4. Create a new realm:\n\n\n\n\nCreate a new realm using the drop down box in upper left that says \"master\".\n\n\nUse the realm name you entered and the gateway IP address from step 1 to set Workflow\n Manager and the component services' \nOIDC_ISSUER_URI\n environment variable to:\n \nhttp://
:9090/realms/\n\n\n\n\n5. Create the client that Workflow Manager will use to authenticate users:\n\n\n\n\nUse the \"Clients\" link in the left menu to create a new client.\n\n\nGeneral Settings:\n\n\nThe \"Client type\" needs to be set to \"OpenID Connect\".\n\n\nEnter a \"Client ID\".\n\n\nSet Workflow Manager's \nOIDC_CLIENT_ID\n environment variable to the client ID you entered.\n\n\n\n\n\n\nCapability config:\n\n\n\"Client authentication\" must be enabled.\n\n\n\"Standard flow\" must be enabled.\n\n\n\"Service accounts roles\" must be enabled so that Workflow Manager can include an OAuth token\n in job completion callbacks and when communicating with TiesDb.\n\n\n\n\n\n\nLogin settings:\n\n\nSet \"Valid redirect URIs\" to\n \nhttp://localhost:8080/workflow-manager/login/oauth2/code/provider\n\n\nSet \"Valid post logout redirect URIs\" to \nhttp://localhost:8080/workflow-manager\n\n\n\n\n\n\nSet Workflow Manager's \nOIDC_CLIENT_SECRET\n environment variable to the \"Client secret\" in the\n \"Credentials\" tab.\n\n\n\n\n6. Create a Keycloak role that maps to a Workflow Manager role:\n\n\n\n\nUse the \"Realm roles\" link in the left menu to create a new role.\n\n\nIf the Keycloak role should make the user an \nADMIN\n in Workflow Manager, set Workflow\n Manager's \nOIDC_ADMIN_CLAIM_VALUE\n to the role name you just entered. If it should be a\n \nUSER\n, then set the \nOIDC_USER_CLAIM_VALUE\n environment variable.\n\n\nOnly one of \nOIDC_ADMIN_CLAIM_VALUE\n and \nOIDC_USER_CLAIM_VALUE\n need to be set. If you would\n like to set up both roles repeat this step.\n\n\n\n\n7. Include the Keycloak role(s) in the access token:\n\n\n\n\nIn the \"Client scopes\" menu add a mapper to the \"roles\" scope.\n\n\nUse the \"groups\" predefined mapper.\n\n\nThe default name \"Token Claim Name\" is \"groups\". This can be changed.\n\n\nIf you created an \nADMIN\n role in step 6 set \nOIDC_ADMIN_CLAIM_NAME\n to the value in\n \"Token Claim Name\". If you created a \nUSER\n role, do the same for \nOIDC_USER_CLAIM_NAME\n.\n\n\n\n\n8. Optionally, set Workflow Manager's \nOIDC_USER_NAME_ATTR\n to \npreferred_username\n to display the\n user name instead of the ID.\n\n\n9. Create Users:\n\n\n\n\nAfter creating a user, set a password in the \"Credentials\" tab.\n\n\nUse the \"Role mapping\" tab to add the user to one of roles created in step 6.\n\n\n\n\n10. Add Component Registration REST client:\n\n\n\n\nUse the \"Clients\" menu to create a new client.\n\n\nCapability config:\n\n\nThe client needs to have \"Client authentication\" and \"Service accounts roles\" enabled.\n\n\nUse the \"Service account roles\" tab to add the client to one of the roles created in step 6.\n\n\n\n\n\n\nSet the component services' \nWFM_USER\n environment variable to the client ID you entered.\n\n\nSet component services' \nWFM_PASSWORD\n environment variable to the \"Client secret\" in the\n \"Credentials\" tab.\n\n\n\n\n11. Add external REST clients:\n\n\n\n\nUse the \"Clients\" menu to create a new client.\n\n\nCapability config:\n\n\nThe client needs to have \"Client authentication\" and \"Service accounts roles\" enabled.\n\n\nUse the \"Service account roles\" tab to add the client to one of the roles created in step 6.\n\n\n\n\n\n\n\n\n12. Start Workflow Manager. When you initially navigate to Workflow Manager, you will be\n redirected to the Keycloak log in page. 
You can log in using the users created in step 9.\n\n\nTest REST authentication\n\n\nUsing the Docker gateway IP address from step 1, the client ID and secret from step 11, and the\nrealm name from step 4, run the following command:\n\n\ncurl -d grant_type=client_credentials -u ':' 'http://:9090/realms//protocol/openid-connect/token'\n\n\n\nThe response JSON will contain a token in the \n\"access_token\"\n property. That token needs to be\nincluded as a bearer token in REST requests to Workflow Manager. For example:\n\n\ncurl -H \"Authorization: Bearer \" http://localhost:8080/workflow-manager/rest/actions\n\n\n\nUse OAuth when sending job complete callbacks and when posting to TiesDb.\n\n\n1. Create a client for the callback receiver or TiesDb:\n\n\n\n\nUse the \"Clients\" menu to create a new client.\n\n\nCapability config:\n\n\nThe client needs to have \"Client authentication\" and \"Service accounts roles\" enabled.\n\n\n\n\n\n\nConfigure the callback receiver or TiesDb with the client ID and secret.\n\n\n\n\n2. Create a client role:\n\n\n\n\nUse the \"Roles\" tab to add a role to the client that was just created.\n\n\n\n\n3. Add the role to the Workflow Manager's client:\n\n\n\n\nGo to the client details page for the client created for Workflow Manager.\n\n\nGo to the \"Service accounts roles\" tab.\n\n\nClick \"Assign role\".\n\n\nChange \"Filter by realm roles\" to \"Filter by clients\".\n\n\nAssign the role created in step 2.\n\n\n\n\n4. Run jobs with the \nCALLBACK_USE_OIDC\n or \nTIES_DB_USE_OIDC\n job properties set to \nTRUE\n.\n\n\nTest callback authentication\n\n\nThe Python script below can be used to test callback authentication. Before running the script you\nmust run \npip install Flask-pyoidc==3.14.2\n. To run the script, you must set the \nOIDC_ISSUER_URI\n,\n\nOIDC_CLIENT_ID\n, and \nOIDC_CLIENT_SECRET\n environment variables. Note that the script configures\nthe \nFlask-pyoidc\n package to authenticate Web users, as required by the package, but we are only\ntesting the authentication of REST clients.\n\n\nOnce the script is running, a user can submit a job via the Workflow Manager Swagger page with the\nfollowing fields to test callbacks:\n\n\n{\n \"callbackMethod\": \"POST\",\n \"callbackURL\": \"http://localhost:5000/api\",\n \"jobProperties\": {\n \"CALLBACK_USE_OIDC\": \"TRUE\"\n }\n}\n\n\n\nimport json\nimport logging\nimport os\n\nfrom flask import Flask, jsonify\nfrom flask_pyoidc.provider_configuration import ProviderConfiguration, ClientMetadata\nfrom flask_pyoidc import OIDCAuthentication\n\nlogging.basicConfig(level=logging.INFO)\n\napp = Flask(__name__)\napp.config.update(\n OIDC_REDIRECT_URI='http://localhost:5000/redirect_uri',\n SECRET_KEY='secret',\n DEBUG=True\n)\n\nauth = OIDCAuthentication({\n 'default': ProviderConfiguration(\n os.getenv('OIDC_ISSUER_URI'),\n client_metadata=ClientMetadata(\n os.getenv('OIDC_CLIENT_ID'), os.getenv('OIDC_CLIENT_SECRET'))\n )\n}, app)\n\n@app.route('/api', methods = ('GET', 'POST'))\n@auth.token_auth('default')\ndef api():\n print(type(auth.current_token_identity))\n print(json.dumps(auth.current_token_identity, sort_keys=True, indent=4))\n return jsonify({'message': 'test message'})\n\nif __name__ == '__main__':\n app.run()",
+ "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract,\nand is subject to the Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023\nThe MITRE Corporation. All Rights Reserved.\n\n\nOpenID Connect Overview\n\n\nWorkflow Manager can use an OpenID Connect (OIDC) provider to handle authentication for users of\nthe web UI and clients of the REST API.\n\n\nConfiguration\n\n\nIn order to use OIDC, Workflow Manager must first be registered with OIDC provider. The exact\nprocess for this varies by provider. As part of the registration process, a client ID and client\nsecret should be provided. Those values should be set in the \nOIDC_CLIENT_ID\n and\n\nOIDC_CLIENT_SECRET\n environment variables. During the registration process the provider will\nlikely request a redirect URI. The redirect URI should be set to the base URI for Workflow Manager\nwith \n/login/oauth2/code/provider\n appended.\n\n\nThe documentation for the OIDC provider should specify the base URI a client should use to\nauthenticate users. The URI should be set in the \nOIDC_ISSUER_URI\n environment variable. To verify\nthe URI is correct, check that the JSON discovery document is returned when sending an HTTP GET\nrequest to the URI with \n/.well-known/openid-configuration\n appended.\n\n\nAfter a user or REST client authenticates with the OIDC provider, Workflow Manager will check for a\nclaim with a specific value to determine if the user is authorized to access Workflow Manager and\nwith what role. The \nOIDC_USER_CLAIM_NAME\n and \nOIDC_ADMIN_CLAIM_NAME\n environment variables\nspecify the name of the claim that must be present. The \nOIDC_USER_CLAIM_VALUE\n and\n\nOIDC_ADMIN_CLAIM_VALUE\n environment variables specify the required value of the claim.\n\n\nIf Workflow Manager is configured to use OIDC, then the component services must also be configured\nto use OIDC. The component services will use OIDC if either the \nOIDC_JWT_ISSUER_URI\n or\n\nOIDC_ISSUER_URI\n environment variables are set on the component service. When a component service\nis configured to use OIDC, the \nOIDC_CLIENT_ID\n and \nOIDC_CLIENT_SECRET\n environment variables are\nused to specify the client ID and secret that will be used during component registration.\n\n\nWorkflow Manager Environment Variables\n\n\n\n\nOIDC_ISSUER_URI\n (Required): URI for the OIDC provider that will be used to authenticate users\n through the web UI. If \nOIDC_JWT_ISSUER_URI\n is not set, \nOIDC_ISSUER_URI\n will also be used to\n authenticate REST clients. The OIDC configuration endpoint must exist at the value of\n \nOIDC_ISSUER_URI\n with \n/.well-known/openid-configuration\n appended.\n\n\nOIDC_JWT_ISSUER_URI\n (Optional): Works the same way as \nOIDC_ISSUER_URI\n, except that the\n configuration will only be used to authenticate REST clients. When not provided,\n \nOIDC_ISSUER_URI\n will be used. 
This would be used when the authentication provider's endpoint\n for user authentication is different from the endpoint for authentication of REST clients.\n\n\nOIDC_CLIENT_ID\n (Required): The client ID that Workflow Manager will use to authenticate with\n the OIDC provider.\n\n\nOIDC_CLIENT_SECRET\n (Required): The client secret Workflow Manager will use to authenticate\n with the OIDC provider.\n\n\nOIDC_USER_CLAIM_NAME\n (Optional): Specifies the name of the claim from the authentication token\n that is required for a user or REST client to be granted access to Workflow Manager with the\n \nUSER\n role.\n\n\nOIDC_USER_CLAIM_VALUE\n (Optional): Specifies the required value of the claim specified in\n \nOIDC_USER_CLAIM_NAME\n. If the claim is a list, only one of the values in the list must match.\n\n\nOIDC_ADMIN_CLAIM_NAME\n (Optional): Specifies the name of the claim from the authentication token\n that is required for a user or REST client to be granted access to Workflow Manager with the\n \nADMIN\n role.\n\n\nOIDC_ADMIN_CLAIM_VALUE\n (Optional): Specifies the required value of the claim specified in\n \nOIDC_ADMIN_CLAIM_NAME\n. If the claim is a list, only one of the values in the list must match.\n\n\nOIDC_SCOPES\n (Optional): A comma-separated list of the scopes to be requested from the OIDC\n provider when authenticating a user through the web UI. The OIDC specification requires one of\n the scopes to be \nopenid\n, so if this environment variable is omitted or \nopenid\n is not in the\n list, it will be automatically added.\n\n\nOIDC_USER_NAME_ATTR\n (Optional): The name of the claim containing the user name. Defaults to\n \nsub\n.\n\n\nOIDC_REDIRECT_URI\n (Optional): Specifies the URL the user's browser will be redirected to after\n logging in to the OIDC provider. If provided, the URL must end in \n/login/oauth2/code/provider\n.\n This would generally be used when the host name that Workflow Manager uses to connect to the\n OIDC provider is different from the OIDC provider's public host name. The value can use the\n \ntemplate variables supported by Spring.\n\n\n\n\nComponent Environment Variables\n\n\n\n\nOIDC_JWT_ISSUER_URI\n or \nOIDC_ISSUER_URI\n (Required): URI for the OIDC provider that will be used\n to authenticate REST clients. The OIDC configuration endpoint must exist at the value of this\n environment variable with \n/.well-known/openid-configuration\n appended. If both environment\n variables are provided, \nOIDC_JWT_ISSUER_URI\n will be used. If \nOIDC_JWT_ISSUER_URI\n is set on\n Workflow Manager, it should be set to the same value on the component services. If\n \nOIDC_JWT_ISSUER_URI\n is not set on Workflow Manager, \nOIDC_ISSUER_URI\n should be set to the\n same value on Workflow Manager and the component services. When either environment variable is\n set, the \nWFM_USER\n and \nWFM_PASSWORD\n environment variables are ignored.\n\n\nOIDC_CLIENT_ID\n (Required): The client ID that the component service will use when registering\n the component with Workflow Manager.\n\n\nOIDC_CLIENT_SECRET\n (Required): The client secret that the component service will use when\n registering the component with Workflow Manager.\n\n\n\n\nExample with Keycloak\n\n\nThe following example explains how to test Workflow Manager with Keycloak as the OIDC provider.\nIt is just an example and should not be used in production.\n\n\n1. Get the Docker gateway IP address by running the command below. 
It will be used in later steps.\n\n\ndocker network inspect --format '{{(index .IPAM.Config 0).Gateway}}' bridge\n\n\n\n2. Start Keycloak in development mode using the command below. Do not start Workflow Manager yet.\n The values for the OIDC environment variables are dependent on how you set up Keycloak in the\n following steps.\n\n\ndocker run -p 9090:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin \\\n quay.io/keycloak/keycloak:21.1.1 start-dev\n\n\n\n3. Go to \nhttp://localhost:9090/admin\n in a browser and log in with username \nadmin\n and\n password \nadmin\n.\n\n\n4. Create a new realm:\n\n\n\n\nCreate a new realm using the drop-down box in the upper left that says \"master\".\n\n\nUse the realm name you entered and the gateway IP address from step 1 to set Workflow\n Manager and the component services' \nOIDC_ISSUER_URI\n environment variable to:\n \nhttp://<gateway-ip>:9090/realms/<realm-name>\n\n\n\n\n5. Create the client that Workflow Manager will use to authenticate users:\n\n\n\n\nUse the \"Clients\" link in the left menu to create a new client.\n\n\nGeneral Settings:\n\n\nThe \"Client type\" needs to be set to \"OpenID Connect\".\n\n\nEnter a \"Client ID\".\n\n\nSet Workflow Manager's \nOIDC_CLIENT_ID\n environment variable to the client ID you entered.\n\n\n\n\n\n\nCapability config:\n\n\n\"Client authentication\" must be enabled.\n\n\n\"Standard flow\" must be enabled.\n\n\n\"Service accounts roles\" must be enabled so that Workflow Manager can include an OAuth token\n in job completion callbacks and when communicating with TiesDb.\n\n\n\n\n\n\nLogin settings:\n\n\nSet \"Valid redirect URIs\" to\n \nhttp://localhost:8080/login/oauth2/code/provider\n\n\nSet \"Valid post logout redirect URIs\" to \nhttp://localhost:8080\n\n\n\n\n\n\nSet Workflow Manager's \nOIDC_CLIENT_SECRET\n environment variable to the \"Client secret\" in the\n \"Credentials\" tab.\n\n\n\n\n6. Create a Keycloak role that maps to a Workflow Manager role:\n\n\n\n\nUse the \"Realm roles\" link in the left menu to create a new role.\n\n\nIf the Keycloak role should make the user an \nADMIN\n in Workflow Manager, set Workflow\n Manager's \nOIDC_ADMIN_CLAIM_VALUE\n to the role name you just entered. If it should be a\n \nUSER\n, then set the \nOIDC_USER_CLAIM_VALUE\n environment variable.\n\n\nOnly one of \nOIDC_ADMIN_CLAIM_VALUE\n and \nOIDC_USER_CLAIM_VALUE\n needs to be set. If you would\n like to set up both roles, repeat this step.\n\n\n\n\n7. Include the Keycloak role(s) in the access token:\n\n\n\n\nIn the \"Client scopes\" menu add a mapper to the \"roles\" scope.\n\n\nUse the \"groups\" predefined mapper.\n\n\nThe default \"Token Claim Name\" is \"groups\". This can be changed.\n\n\nIf you created an \nADMIN\n role in step 6, set \nOIDC_ADMIN_CLAIM_NAME\n to the value in\n \"Token Claim Name\". If you created a \nUSER\n role, do the same for \nOIDC_USER_CLAIM_NAME\n.\n\n\n\n\n8. Optionally, set Workflow Manager's \nOIDC_USER_NAME_ATTR\n to \npreferred_username\n to display the\n user name instead of the ID.\n\n\n9. Create Users:\n\n\n\n\nAfter creating a user, set a password in the \"Credentials\" tab.\n\n\nUse the \"Role mapping\" tab to add the user to one of the roles created in step 6.\n\n\n\n\n10. 
Add Component Registration REST client:\n\n\n\n\nUse the \"Clients\" menu to create a new client.\n\n\nCapability config:\n\n\nThe client needs to have \"Client authentication\" and \"Service accounts roles\" enabled.\n\n\nUse the \"Service account roles\" tab to add the client to one of the roles created in step 6.\n\n\n\n\n\n\nSet the component services' \nWFM_USER\n environment variable to the client ID you entered.\n\n\nSet the component services' \nWFM_PASSWORD\n environment variable to the \"Client secret\" in the\n \"Credentials\" tab.\n\n\n\n\n11. Add external REST clients:\n\n\n\n\nUse the \"Clients\" menu to create a new client.\n\n\nCapability config:\n\n\nThe client needs to have \"Client authentication\" and \"Service accounts roles\" enabled.\n\n\nUse the \"Service account roles\" tab to add the client to one of the roles created in step 6.\n\n\n\n\n\n\n\n\n12. Start Workflow Manager. When you initially navigate to Workflow Manager, you will be\n redirected to the Keycloak login page. You can log in using the users created in step 9.\n\n\nTest REST authentication\n\n\nUsing the Docker gateway IP address from step 1, the client ID and secret from step 11, and the\nrealm name from step 4, run the following command:\n\n\ncurl -d grant_type=client_credentials -u '<client-id>:<client-secret>' 'http://<gateway-ip>:9090/realms/<realm-name>/protocol/openid-connect/token'\n\n\n\nThe response JSON will contain a token in the \n\"access_token\"\n property. That token needs to be\nincluded as a bearer token in REST requests to Workflow Manager. For example:\n\n\ncurl -H \"Authorization: Bearer <access-token>\" http://localhost:8080/rest/actions\n\n\n\nUse OAuth when sending job completion callbacks and when posting to TiesDb.\n\n\n1. Create a client for the callback receiver or TiesDb:\n\n\n\n\nUse the \"Clients\" menu to create a new client.\n\n\nCapability config:\n\n\nThe client needs to have \"Client authentication\" and \"Service accounts roles\" enabled.\n\n\n\n\n\n\nConfigure the callback receiver or TiesDb with the client ID and secret.\n\n\n\n\n2. Create a client role:\n\n\n\n\nUse the \"Roles\" tab to add a role to the client that was just created.\n\n\n\n\n3. Add the role to the Workflow Manager's client:\n\n\n\n\nGo to the client details page for the client created for Workflow Manager.\n\n\nGo to the \"Service accounts roles\" tab.\n\n\nClick \"Assign role\".\n\n\nChange \"Filter by realm roles\" to \"Filter by clients\".\n\n\nAssign the role created in step 2.\n\n\n\n\n4. Run jobs with the \nCALLBACK_USE_OIDC\n or \nTIES_DB_USE_OIDC\n job properties set to \nTRUE\n.\n\n\nTest callback authentication\n\n\nThe Python script below can be used to test callback authentication. Before running the script you\nmust run \npip install Flask-pyoidc==3.14.2\n. To run the script, you must set the \nOIDC_ISSUER_URI\n,\n\nOIDC_CLIENT_ID\n, and \nOIDC_CLIENT_SECRET\n environment variables. 
Note that the script configures\nthe \nFlask-pyoidc\n package to authenticate Web users, as required by the package, but we are only\ntesting the authentication of REST clients.\n\n\nOnce the script is running, a user can submit a job via the Workflow Manager Swagger page with the\nfollowing fields to test callbacks:\n\n\n{\n \"callbackMethod\": \"POST\",\n \"callbackURL\": \"http://localhost:5000/api\",\n \"jobProperties\": {\n \"CALLBACK_USE_OIDC\": \"TRUE\"\n }\n}\n\n\n\nimport json\nimport logging\nimport os\n\nfrom flask import Flask, jsonify\nfrom flask_pyoidc.provider_configuration import ProviderConfiguration, ClientMetadata\nfrom flask_pyoidc import OIDCAuthentication\n\nlogging.basicConfig(level=logging.INFO)\n\napp = Flask(__name__)\napp.config.update(\n OIDC_REDIRECT_URI='http://localhost:5000/redirect_uri',\n SECRET_KEY='secret',\n DEBUG=True\n)\n\nauth = OIDCAuthentication({\n 'default': ProviderConfiguration(\n os.getenv('OIDC_ISSUER_URI'),\n client_metadata=ClientMetadata(\n os.getenv('OIDC_CLIENT_ID'), os.getenv('OIDC_CLIENT_SECRET'))\n )\n}, app)\n\n@app.route('/api', methods = ('GET', 'POST'))\n@auth.token_auth('default')\ndef api():\n print(type(auth.current_token_identity))\n print(json.dumps(auth.current_token_identity, sort_keys=True, indent=4))\n return jsonify({'message': 'test message'})\n\nif __name__ == '__main__':\n app.run()",
"title": "OpenID Connect Guide"
},
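The discovery-document check described in the configuration section above is easy to script. A minimal sketch, assuming the requests package; the issuer value is a placeholder for your OIDC_ISSUER_URI:

```python
# Sketch: verify OIDC_ISSUER_URI by fetching the JSON discovery document
# from the issuer with /.well-known/openid-configuration appended.
import requests

issuer = "http://<oidc-provider>/realms/<realm-name>"  # placeholder
response = requests.get(issuer + "/.well-known/openid-configuration")
response.raise_for_status()
metadata = response.json()
print(metadata["issuer"], metadata["token_endpoint"])  # standard OIDC metadata fields
```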
{
@@ -287,12 +292,12 @@
},
{
"location": "/OpenID-Connect-Guide/index.html#example-with-keycloak",
- "text": "The following example explains how to test Workflow Manager with Keycloak as the OIDC provider.\nIt is just an example and should not be used in production. 1. Get the Docker gateway IP address by running the command below. It will be used in later steps. docker network inspect --format '{{(index .IPAM.Config 0).Gateway}}' bridge 2. Start Keycloak in development mode using the command below. Do not start Workflow Manager yet.\n The values for the OIDC environment variables are dependent on how you set up Keycloak in the\n following steps. docker run -p 9090:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin \\\n quay.io/keycloak/keycloak:21.1.1 start-dev 3. Go to http://localhost:9090/admin in a browser and login with username admin and\n password admin . 4. Create a new realm: Create a new realm using the drop down box in upper left that says \"master\". Use the realm name you entered and the gateway IP address from step 1 to set Workflow\n Manager and the component services' OIDC_ISSUER_URI environment variable to:\n http://:9090/realms/ 5. Create the client that Workflow Manager will use to authenticate users: Use the \"Clients\" link in the left menu to create a new client. General Settings: The \"Client type\" needs to be set to \"OpenID Connect\". Enter a \"Client ID\". Set Workflow Manager's OIDC_CLIENT_ID environment variable to the client ID you entered. Capability config: \"Client authentication\" must be enabled. \"Standard flow\" must be enabled. \"Service accounts roles\" must be enabled so that Workflow Manager can include an OAuth token\n in job completion callbacks and when communicating with TiesDb. Login settings: Set \"Valid redirect URIs\" to\n http://localhost:8080/workflow-manager/login/oauth2/code/provider Set \"Valid post logout redirect URIs\" to http://localhost:8080/workflow-manager Set Workflow Manager's OIDC_CLIENT_SECRET environment variable to the \"Client secret\" in the\n \"Credentials\" tab. 6. Create a Keycloak role that maps to a Workflow Manager role: Use the \"Realm roles\" link in the left menu to create a new role. If the Keycloak role should make the user an ADMIN in Workflow Manager, set Workflow\n Manager's OIDC_ADMIN_CLAIM_VALUE to the role name you just entered. If it should be a\n USER , then set the OIDC_USER_CLAIM_VALUE environment variable. Only one of OIDC_ADMIN_CLAIM_VALUE and OIDC_USER_CLAIM_VALUE need to be set. If you would\n like to set up both roles repeat this step. 7. Include the Keycloak role(s) in the access token: In the \"Client scopes\" menu add a mapper to the \"roles\" scope. Use the \"groups\" predefined mapper. The default name \"Token Claim Name\" is \"groups\". This can be changed. If you created an ADMIN role in step 6 set OIDC_ADMIN_CLAIM_NAME to the value in\n \"Token Claim Name\". If you created a USER role, do the same for OIDC_USER_CLAIM_NAME . 8. Optionally, set Workflow Manager's OIDC_USER_NAME_ATTR to preferred_username to display the\n user name instead of the ID. 9. Create Users: After creating a user, set a password in the \"Credentials\" tab. Use the \"Role mapping\" tab to add the user to one of roles created in step 6. 10. Add Component Registration REST client: Use the \"Clients\" menu to create a new client. Capability config: The client needs to have \"Client authentication\" and \"Service accounts roles\" enabled. Use the \"Service account roles\" tab to add the client to one of the roles created in step 6. 
Set the component services' WFM_USER environment variable to the client ID you entered. Set component services' WFM_PASSWORD environment variable to the \"Client secret\" in the\n \"Credentials\" tab. 11. Add external REST clients: Use the \"Clients\" menu to create a new client. Capability config: The client needs to have \"Client authentication\" and \"Service accounts roles\" enabled. Use the \"Service account roles\" tab to add the client to one of the roles created in step 6. 12. Start Workflow Manager. When you initially navigate to Workflow Manager, you will be\n redirected to the Keycloak log in page. You can log in using the users created in step 9.",
+ "text": "The following example explains how to test Workflow Manager with Keycloak as the OIDC provider.\nIt is just an example and should not be used in production. 1. Get the Docker gateway IP address by running the command below. It will be used in later steps. docker network inspect --format '{{(index .IPAM.Config 0).Gateway}}' bridge 2. Start Keycloak in development mode using the command below. Do not start Workflow Manager yet.\n The values for the OIDC environment variables are dependent on how you set up Keycloak in the\n following steps. docker run -p 9090:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin \\\n quay.io/keycloak/keycloak:21.1.1 start-dev 3. Go to http://localhost:9090/admin in a browser and login with username admin and\n password admin . 4. Create a new realm: Create a new realm using the drop down box in upper left that says \"master\". Use the realm name you entered and the gateway IP address from step 1 to set Workflow\n Manager and the component services' OIDC_ISSUER_URI environment variable to:\n http://:9090/realms/ 5. Create the client that Workflow Manager will use to authenticate users: Use the \"Clients\" link in the left menu to create a new client. General Settings: The \"Client type\" needs to be set to \"OpenID Connect\". Enter a \"Client ID\". Set Workflow Manager's OIDC_CLIENT_ID environment variable to the client ID you entered. Capability config: \"Client authentication\" must be enabled. \"Standard flow\" must be enabled. \"Service accounts roles\" must be enabled so that Workflow Manager can include an OAuth token\n in job completion callbacks and when communicating with TiesDb. Login settings: Set \"Valid redirect URIs\" to\n http://localhost:8080/login/oauth2/code/provider Set \"Valid post logout redirect URIs\" to http://localhost:8080 Set Workflow Manager's OIDC_CLIENT_SECRET environment variable to the \"Client secret\" in the\n \"Credentials\" tab. 6. Create a Keycloak role that maps to a Workflow Manager role: Use the \"Realm roles\" link in the left menu to create a new role. If the Keycloak role should make the user an ADMIN in Workflow Manager, set Workflow\n Manager's OIDC_ADMIN_CLAIM_VALUE to the role name you just entered. If it should be a\n USER , then set the OIDC_USER_CLAIM_VALUE environment variable. Only one of OIDC_ADMIN_CLAIM_VALUE and OIDC_USER_CLAIM_VALUE need to be set. If you would\n like to set up both roles repeat this step. 7. Include the Keycloak role(s) in the access token: In the \"Client scopes\" menu add a mapper to the \"roles\" scope. Use the \"groups\" predefined mapper. The default name \"Token Claim Name\" is \"groups\". This can be changed. If you created an ADMIN role in step 6 set OIDC_ADMIN_CLAIM_NAME to the value in\n \"Token Claim Name\". If you created a USER role, do the same for OIDC_USER_CLAIM_NAME . 8. Optionally, set Workflow Manager's OIDC_USER_NAME_ATTR to preferred_username to display the\n user name instead of the ID. 9. Create Users: After creating a user, set a password in the \"Credentials\" tab. Use the \"Role mapping\" tab to add the user to one of roles created in step 6. 10. Add Component Registration REST client: Use the \"Clients\" menu to create a new client. Capability config: The client needs to have \"Client authentication\" and \"Service accounts roles\" enabled. Use the \"Service account roles\" tab to add the client to one of the roles created in step 6. Set the component services' WFM_USER environment variable to the client ID you entered. 
Set the component services' WFM_PASSWORD environment variable to the \"Client secret\" in the\n \"Credentials\" tab. 11. Add external REST clients: Use the \"Clients\" menu to create a new client. Capability config: The client needs to have \"Client authentication\" and \"Service accounts roles\" enabled. Use the \"Service account roles\" tab to add the client to one of the roles created in step 6. 12. Start Workflow Manager. When you initially navigate to Workflow Manager, you will be\n redirected to the Keycloak login page. You can log in using the users created in step 9.",
"title": "Example with Keycloak"
},
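To tie the Keycloak walkthrough above together, the environment values it asks for can be assembled in one place. A sketch only; every angle-bracket value is a placeholder from the numbered steps:

```python
# Sketch: assemble the Workflow Manager OIDC environment variables set
# across steps 1-7 of the Keycloak example. All values are placeholders.
gateway_ip = "<gateway-ip>"  # step 1: docker network inspect ... bridge
realm = "<realm-name>"       # step 4

env = {
    "OIDC_ISSUER_URI": f"http://{gateway_ip}:9090/realms/{realm}",
    "OIDC_CLIENT_ID": "<client-id>",           # step 5
    "OIDC_CLIENT_SECRET": "<client-secret>",   # step 5, "Credentials" tab
    "OIDC_ADMIN_CLAIM_NAME": "groups",         # step 7, "Token Claim Name"
    "OIDC_ADMIN_CLAIM_VALUE": "<admin-role>",  # step 6 realm role
}
for name, value in env.items():
    print(f"export {name}={value}")
```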
{
"location": "/OpenID-Connect-Guide/index.html#test-rest-authentication",
- "text": "Using the Docker gateway IP address from step 1, the client ID and secret from step 11, and the\nrealm name from step 4, run the following command: curl -d grant_type=client_credentials -u ':' 'http://:9090/realms//protocol/openid-connect/token' The response JSON will contain a token in the \"access_token\" property. That token needs to be\nincluded as a bearer token in REST requests to Workflow Manager. For example: curl -H \"Authorization: Bearer \" http://localhost:8080/workflow-manager/rest/actions",
+ "text": "Using the Docker gateway IP address from step 1, the client ID and secret from step 11, and the\nrealm name from step 4, run the following command: curl -d grant_type=client_credentials -u ':' 'http://:9090/realms//protocol/openid-connect/token' The response JSON will contain a token in the \"access_token\" property. That token needs to be\nincluded as a bearer token in REST requests to Workflow Manager. For example: curl -H \"Authorization: Bearer \" http://localhost:8080/rest/actions",
"title": "Test REST authentication"
},
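The two curl commands above translate directly to Python. A minimal sketch, assuming the requests package; the angle-bracket values are the placeholders from steps 1, 4, and 11:

```python
# Sketch of the REST authentication test: request a client_credentials
# token from Keycloak, then call Workflow Manager with it as a bearer token.
import requests

token_url = "http://<gateway-ip>:9090/realms/<realm-name>/protocol/openid-connect/token"
token_response = requests.post(
    token_url,
    data={"grant_type": "client_credentials"},
    auth=("<client-id>", "<client-secret>"),  # external REST client from step 11
)
token_response.raise_for_status()
access_token = token_response.json()["access_token"]

actions = requests.get(
    "http://localhost:8080/rest/actions",
    headers={"Authorization": f"Bearer {access_token}"},
)
actions.raise_for_status()
print(actions.json())
```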
{
@@ -1287,7 +1292,7 @@
},
{
"location": "/Development-Environment-Guide/index.html",
- "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\n\n \nWARNING:\n\n For most component developers, these steps are not necessary. Instead,\n refer to the\n \nC++\n,\n \nPython\n, or\n \nJava\n\n README for developing a Docker component in your desired language.\n\n\n\n\n\n \nWARNING:\n This guide is a work in progress and may not be completely\n accurate or comprehensive.\n\n\n\n\nOverview\n\n\nThe following instructions are for setting up an environment for building and\nrunning OpenMPF outside of Docker. They serve as a reference for developers who\nwant to develop the Workflow Manager web application itself and perform end-to-\nend integration testing.\n\n\nSetup VM\n\n\n\n\n\n\nDownload the ISO for the desktop version of Ubuntu 20.04 from\n \nhttps://releases.ubuntu.com/20.04\n.\n\n\n\n\n\n\nCreate an Ubuntu VM using the downloaded iso. This part is different based on\n what VM software you are using.\n\n\n\n\nUse mpf as your username.\n\n\nDuring the initial install, the VM window was small and didn't stretch to\n fill up the screen, but this may be fixed automatically after the installation\n finishes, or there may be additional steps necessary to install tools or\n configure settings based on your VM software.\n\n\n\n\n\n\n\n\nAfter completing the installation, you will likely be prompted to update\n software. You should install the updates.\n\n\n\n\n\n\nOptionally, shutdown the VM and take a snapshot. This will enable you to revert back\n to a clean Ubuntu install in case anything goes wrong.\n\n\n\n\n\n\nOpen a terminal and run \nsudo apt update\n\n\n\n\n\n\nRun \nsudo apt install gnupg2 unzip xz-utils cmake make g++ libgtest-dev mediainfo libssl-dev liblog4cxx-dev libboost-dev file openjdk-17-jdk libprotobuf-dev protobuf-compiler libprotobuf-java python3.8-dev python3-pip python3.8-venv libde265-dev libopenblas-dev liblapacke-dev libavcodec-dev libavcodec-extra libavformat-dev libavutil-dev libswscale-dev libavresample-dev libharfbuzz-dev libfreetype-dev ffmpeg git git-lfs redis postgresql-12 curl ansible\n\n\n\n\n\n\nRun \nsudo ln --symbolic /usr/include/x86_64-linux-gnu/openblas-pthread/cblas.h /usr/include/cblas.h\n\n\n\n\n\n\nRun \nsudo ln --symbolic /usr/bin/cmake /usr/bin/cmake3\n\n\n\n\n\n\nRun \nsudo ln --symbolic /usr/bin/protoc /usr/local/bin/protoc\n\n\n\n\n\n\nFollow instructions to install Docker:\n \nhttps://docs.docker.com/engine/install/ubuntu/#install-using-the-repository\n\n\n\n\n\n\nOptionally, configure Docker to use socket activation. 
The advantage of socket activation is\n that systemd will automatically start the Docker daemon when you use \ndocker\n commands:\n\n\n\n\n\n\nsudo systemctl disable docker.service;\nsudo systemctl stop docker.service;\nsudo systemctl enable docker.socket;\n\n\n\n\n\n\n\nFollow instructions so that you can run Docker without sudo:\n \nhttps://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user\n\n\n\n\n\n\nInstall Docker Compose:\n\n\n\n\n\n\nsudo apt update\nsudo apt install docker-compose-plugin\n\n\n\n\n\n\n\nOptionally, stop redis from starting automatically:\n \nsudo systemctl disable redis\n\n\n\n\n\n\nOptionally, stop postgresql from starting automatically:\n \nsudo systemctl disable postgresql\n\n\n\n\n\n\nInitialize Postgres (use \"password\" when prompted for a password):\n\n\n\n\n\n\nsudo -i -u postgres createuser -P mpf\nsudo -i -u postgres createdb -O mpf mpf\n\n\n\n\n\nBuild and install OpenCV:\n\n\n\n\nmkdir /tmp/opencv-contrib;\nwget -O- 'https://github.com/opencv/opencv_contrib/archive/4.5.5.tar.gz' \\\n | tar --extract --gzip --directory /tmp/opencv-contrib;\nmkdir /tmp/opencv;\ncd /tmp/opencv;\nwget -O- 'https://github.com/opencv/opencv/archive/4.5.5.tar.gz' \\\n | tar --extract --gzip;\ncd opencv-4.5.5;\nmkdir build;\ncd build;\nexport OpenBLAS_HOME=/usr/lib/x86_64-linux-gnu/openblas-pthread; \\\ncmake -DCMAKE_INSTALL_PREFIX:PATH='/opt/opencv-4.5.5' \\\n -DWITH_IPP=false \\\n -DBUILD_EXAMPLES=false \\\n -DBUILD_TESTS=false \\\n -DBUILD_PERF_TESTS=false \\\n -DWITH_CUBLAS=true \\\n -DOPENCV_EXTRA_MODULES_PATH=/tmp/opencv-contrib/opencv_contrib-4.5.5/modules \\\n ..;\nsudo make --jobs \"$(nproc)\" install;\nsudo ln --symbolic '/opt/opencv-4.5.5/include/opencv4/opencv2' /usr/local/include/opencv2;\nsudo sh -c 'echo /opt/opencv-4.5.5/lib > /etc/ld.so.conf.d/mpf.conf'\nsudo ldconfig;\nsudo rm -rf /tmp/opencv-contrib /tmp/opencv;\n\n\n\n\n\nBuild and install the ActiveMQ C++ library:\n\n\n\n\nmkdir /tmp/activemq-cpp;\ncd /tmp/activemq-cpp;\nwget -O- https://dlcdn.apache.org/activemq/activemq-cpp/3.9.5/activemq-cpp-library-3.9.5-src.tar.gz \\\n | tar --extract --gzip;\ncd activemq-cpp-library-3.9.5;\n./configure;\nsudo make --jobs \"$(nproc)\" install;\nsudo rm -rf /tmp/activemq-cpp;\n\n\n\n\n\nInstall NotoEmoji font for markup:\n\n\n\n\nmkdir /tmp/noto;\ncd /tmp/noto;\nwget https://noto-website-2.storage.googleapis.com/pkgs/NotoEmoji-unhinted.zip;\nunzip NotoEmoji-unhinted.zip;\nsudo mkdir --parents /usr/share/fonts/google-noto-emoji;\nsudo cp NotoEmoji-Regular.ttf /usr/share/fonts/google-noto-emoji/;\nsudo chmod a+r /usr/share/fonts/google-noto-emoji/NotoEmoji-Regular.ttf;\nrm -rf /tmp/noto;\n\n\n\n\n\nBuild and install PNG Defry:\n\n\n\n\nmkdir /tmp/pngdefry;\ncd /tmp/pngdefry;\nwget -O- 'https://github.com/openmpf/pngdefry/archive/v1.2.tar.gz' \\\n | tar --extract --gzip;\ncd pngdefry-1.2;\nsudo gcc pngdefry.c -o /usr/local/bin/pngdefry;\nrm -rf /tmp/pngdefry;\n\n\n\n\n\nInstall Maven:\n\n\n\n\nwget -O- 'https://archive.apache.org/dist/maven/maven-3/3.3.3/binaries/apache-maven-3.3.3-bin.tar.gz' \\\n | sudo tar --extract --gzip --directory /opt;\nsudo ln --symbolic /opt/apache-maven-3.3.3/bin/mvn /usr/local/bin;\n\n\n\n\n\nBuild and install libheif:\n\n\n\n\nmkdir /tmp/libheif;\ncd /tmp/libheif;\nwget -O- https://github.com/strukturag/libheif/archive/refs/tags/v1.12.0.tar.gz \\\n | tar --extract --gzip;\ncd libheif-1.12.0;\nmkdir build;\ncd build;\ncmake3 -DCMAKE_INSTALL_PREFIX=/usr -DWITH_EXAMPLES=false ..;\nsudo make --jobs \"$(nproc)\" 
install;\ncd;\nsudo rm -rf /tmp/libheif;\n\n\n\n\n\nInstall ActiveMQ:\n\n\n\n\nwget -O- https://archive.apache.org/dist/activemq/5.17.0/apache-activemq-5.17.0-bin.tar.gz \\\n | sudo tar --extract --gzip --directory /opt;\nsudo ln --symbolic /opt/apache-activemq-5.17.0 /opt/activemq;\nsudo chown -R mpf:mpf /opt/apache-activemq-5.17.0\n\n\n\n\n\n\n\nIn \n/opt/activemq/conf/activemq.xml\n change line 34 from \n\n \n\n\n \n to \n\n \n\n\n\n\n\n\n\nIn \n/opt/activemq/conf/activemq.xml\n (line 38) under the line:\n \n\n, add \n\n \n\" prioritizedMessages=\"true\" useCache=\"false\" expireMessagesPeriod=\"0\" queuePrefetch=\"1\" />\n\n\n\n\n\n\nIn \n/opt/activemq/conf/activemq.xml\n (line 66, after making the above addition),\n change the line: \n\n\n \n to \n\n \n\n.\n\n\n\n\n\n\nIn \n/opt/activemq/conf/log4j2.properties\n (line 69), change the line \n\n \nappender.logfile.layout.pattern=%d | %-5p | %m | %c | %t%n%throwable{full}\n\n \n to \n\n \nappender.logfile.layout.pattern=%d %p [%t] %c - %m%n\n\n\n\n\n\n\nFrom your home directory run:\n\n\n\n\n\n\ngit clone https://github.com/openmpf/openmpf-projects.git --recursive;\ncd openmpf-projects;\ngit checkout develop;\ngit submodule foreach git checkout develop;\n\n\n\n\n\n\n\nRun: \npip install openmpf-projects/openmpf/trunk/bin/mpf-scripts\n\n\n\n\n\n\nAdd \nPATH=\"$HOME/.local/bin:$PATH\"\n to \n~/.bashrc\n\n\n\n\n\n\nRun \nmkdir -p openmpf-projects/openmpf/trunk/install/share/logs\n\n\n\n\n\n\nRun \nsudo cp openmpf-projects/openmpf/trunk/mpf-install/src/main/scripts/mpf-profile.sh /etc/profile.d/mpf.sh\n\n\n\n\n\n\nRun \nsudo sh -c 'echo /home/mpf/mpf-sdk-install/lib >> /etc/ld.so.conf.d/mpf.conf'\n\n\n\n\n\n\nRun \nsudo cp openmpf-projects/openmpf/trunk/node-manager/src/scripts/node-manager.service /etc/systemd/system/node-manager.service\n\n\n\n\n\n\nRun \ncd ~/openmpf-projects/openmpf/trunk/workflow-manager/src/main/resources/properties/; cp mpf-private-example.properties mpf-private.properties\n\n\n\n\n\n\nRun \nsudo sh -c 'echo \"[mpf-child]\\nlocalhost\" >> /etc/ansible/hosts'\n\n\n\n\n\n\nRun \nmkdir -p ~/.m2/repository/; tar -f /home/mpf/openmpf-projects/openmpf-build-tools/mpf-maven-deps.tar.gz --extract --gzip --directory ~/.m2/repository/\n\n\n\n\n\n\nReboot the VM.\n\n\n\n\n\n\nAt this point you may wish to install additional dependencies so that you can\nbuild specific OpenMPF components. Refer to the commands in the \nDockerfile\n\nfor each component you're interested in.\n\n\nConfigure Users\n\n\nTo change the default user password settings, modify\n\nopenmpf-projects/openmpf/trunk/workflow-manager/src/main/resources/properties/user.properties\n.\nNote that the default settings are public knowledge, which could be a security\nrisk.\n\n\nNote that \nmpf remove-user\n and \nmpf add-user\n commands explained in the\n\nCommand Line Tools\n section do not modify the\n\nuser.properties\n file. If you remove a user using the \nmpf remove-user\n\ncommand, the changes will take effect at runtime, but an entry may still exist\nfor that user in the \nuser.properties\n file. 
If so, then the user account will\nbe recreated the next time the Workflow Manager is restarted.\n\n\nBuild and Run the OpenMPF Workflow Manager Web Application\n\n\n\n\nBuild OpenMPF:\n\n\n\n\ncd ~/openmpf-projects/openmpf;\nmvn clean install \\\n -DskipTests -Dmaven.test.skip=true \\\n -DskipITs \\\n -Dcomponents.build.components=openmpf-components/cpp/OcvFaceDetection \\\n -Dstartup.auto.registration.skip=false;\n\n\n\n\n\nStart OpenMPF with \nmpf start\n.\n\n\n\n\nLook for this log message in the terminal with a time value indicating the Workflow Manager has\nfinished starting:\n\n\n2022-10-11 12:21:16,447 INFO [main] o.m.m.Application - Started Application in 22.843 seconds (JVM running for 24.661)\n\n\n\nAfter startup, the Workflow Manager will be available at\n\nhttp://localhost:8080/workflow-manager\n.\nBrowse to this URL using Firefox or Chrome.\n\n\nIf you want to test regular user capabilities, log in as the \"mpf\" user with\nthe \"mpf123\" password. Please see the\n\nOpenMPF User Guide\n for more information.\nAlternatively, if you want to test admin capabilities then log in as \"admin\"\nuser with the \"mpfadm\" password. Please see the\n\nOpenMPF Admin Guide\n for more information.\nWhen finished using OpenMPF, stop Workflow Manager with \nctrl-c\n and then run \nmpf stop\n to stop\nthe other system dependencies.\n\n\nThe preferred method to start and stop services for OpenMPF is with the\n\nmpf start\n and \nmpf stop\n commands. For additional information on these\ncommands, please see the\n\nCommand Line Tools\n section.\nThese will start and stop the ActiveMQ, PostgreSQL, Redis, Node Manager,\nand Workflow Manager processes.\n\n\nKnown Issues\n\n\no.m.m.m.c.JobController - Failure creating job. supplier.get()\n\n\nIf you see an error message similar to:\n\n\n2022-02-07 17:17:30,538 ERROR [http-nio-8080-exec-1] o.m.m.m.c.JobController - Failure creating job. supplier.get()\njava.lang.NullPointerException: supplier.get()\n at java.util.Objects.requireNonNull(Objects.java:246) ~[?:?]\n at java.util.Objects.requireNonNullElseGet(Objects.java:321) ~[?:?]\n at org.mitre.mpf.wfm.util.PropertiesUtil.getHostName(PropertiesUtil.java:267) ~[classes/:?]\n at org.mitre.mpf.wfm.util.PropertiesUtil.getExportedJobId(PropertiesUtil.java:285) ~[classes/:?]\n\n\n\nOpen \n/etc/profile.d/mpf.sh\n and change \nexport HOSTNAME\n to\n\nexport HOSTNAME=$(hostname)\n. Then, restart the VM.\n\n\nAppendices\n\n\nCommand Line Tools\n\n\nOpenMPF installs command line tools that can be accessed through a terminal\non the development machine. All of the tools take the form of actions:\n\nmpf [options ...]\n.\n\n\nExecute \nmpf --help\n for general documentation and \nmpf --help\n for\ndocumentation about a specific action.\n\n\n\n\nStart / Stop Actions\n: Actions for starting and stopping the OpenMPF\n system dependencies, including PostgreSQL, ActiveMQ, Redis, Workflow Manager, and the\n node managers on the various nodes in the OpenMPF cluster.\n\n\nmpf status\n: displays a message indicating whether each of the system\n dependencies is running or not\n\n\nmpf start\n: starts all of the system dependencies\n\n\nmpf stop\n: stops all of the system dependencies\n\n\nmpf restart\n : stops and then starts all of the system dependencies\n\n\n\n\n\n\nUser Actions\n: Actions for managing Workflow Manager user accounts. 
If\n changes are made to an existing user then that user will need to log off or\n the Workflow Manager will need to be restarted for the changes to take effect.\n\n\nmpf list-users\n : lists all of the existing user accounts and their role\n (non-admin or admin)\n\n\nmpf add-user \n: adds a new user account; will be\n prompted to enter the account password\n\n\nmpf remove-user \n : removes an existing user account\n\n\nmpf change-role \n : change the role (non-admin to admin\n or vice versa) for an existing user\n\n\nmpf change-password \n: change the password for an existing\n user; will be prompted to enter the new account password\n\n\n\n\n\n\nClean Actions\n: Actions to remove old data and revert the system to a\n new install state. User accounts, registered components, as well as custom\n actions, tasks, and pipelines, are preserved.\n\n\nmpf clean\n: cleans out old job information and results, pending job\n requests, marked up media files, and ActiveMQ data, but preserves log\n files and uploaded media\n\n\nmpf clean --delete-logs --delete-uploaded-media\n: the same as \nmpf clean\n\n but also deletes log files and uploaded media\n\n\n\n\n\n\nNode Action\n: Actions for managing node membership in the OpenMPF cluster.\n\n\nmpf list-nodes\n: If the Workflow Manager is running, get the current\n JGroups view; otherwise, list the core nodes\n\n\n\n\n\n\n\n\nPackaging a Component\n\n\nIn a non-Docker deployment, admin users can register component packages through\nthe web UI. Refer to \nComponent Registration\n.\n\n\nOnce the descriptor file is complete, as described in\n\nComponent Descriptor Reference\n,\nthe next step is to compile your component source code, and finally, create a\n.tar.gz package containing the descriptor file, component library, and all\nother necessary files.\n\n\nThe package should contain a top-level directory with a unique name that will\nnot conflict with existing component packages that have already been developed.\nThe top-level directory name should be the same as the \ncomponentName\n.\n\n\nWithin the top-level directory there must be a directory named \u201cdescriptor\u201d\nwith the descriptor JSON file in it. The name of the file must be\n\u201cdescriptor.json\u201d.\n\n\nExample:\n\n\n//sample-component-1.0.0-tar.gz contents\nSampleComponent/\n config/\n descriptor/\n descriptor.json\n lib/\n\n\n\nInstalling and registering a component\n\n\nThe Component Registration web page, located in the Admin section of the\nOpenMPF web user interface, can be used to upload and register the component.\n\n\nDrag and drop the .tar.gz file containing the component onto the dropzone area\nof that page. The component will automatically be uploaded and registered.\n\n\nUpon successful registration, the component will be available for deployment\nonto OpenMPF nodes via the Node Configuration web page and\n\n/rest/nodes/config\n end point.\n\n\nIf the descriptor contains custom actions, tasks, or pipelines, then they will\nbe automatically added to the system upon registration.\n\n\n\n\nNOTE:\n If the descriptor does not contain custom actions, tasks,\nor pipelines, then a default action, task, and pipeline will be generated\nand added to the system.\n\n\nThe default action will use the component\u2019s algorithm with its default\nproperty value settings.\nThe default task will use the default action.\nThe default pipeline will use the default task. 
This will only be generated\nif the algorithm does not specify any \nrequiresCollection\n states.\n\n\n\n\nUnregistering a component\n\n\nA component can be unregistered by using the remove button on the Component\nRegistration page.\n\n\nDuring unregistration, all services, algorithms, actions, tasks, and pipelines\nassociated with the component are deleted. Additionally, all actions, tasks,\nand pipelines that depend on these elements are removed.\n\n\nWeb UI\n\n\nThe following sections will cover some additional functionality permitted to\nadmin users in a non-Docker deployment.\n\n\nNode Configuration and Status\n\n\nThis page provides a list of all of the services that are configured to run on\nthe OpenMPF cluster:\n\n\n\n\nEach node shows information about the current status of each service, if it is\nunlaunchable due to an underlying error, and how many services are running for\neach node. If a service is unlaunchable, it will be indicated using a red\nstatus icon (not shown). Note that services are grouped by component type.\nClick the chevron \">\" to expand a service group to view the individual services.\n\n\nAn admin user can start, stop, or restart them on an individual basis. If a\nnon-admin user views this page, the \"Action(s)\" column is not displayed. This\npage also enables an admin user to edit the configuration for all nodes in the\nOpenMPF cluster. A non-admin user can only view the existing configuration.\n\n\nAn admin user can add a node by using the \"Add Node\" button and selecting a\nnode in the OpenMPF cluster from the drop-down list. You can also select to add\nall services at this time. A node and all if its configured services can be\nremoved by clicking the trash can to the right of the node's hostname.\n\n\nAn admin user can add services individually by selecting the node edit button\nat the bottom of the node. The number of service instances can be increased or\ndecreased by using the drop-down. Click the \"Submit\" button to save the changes.\n\n\nWhen making changes, please be aware of the following:\n\n\n\n\nIt may take a minute for the configuration to take effect on the server.\n\n\nIf you remove an existing service from a node, any job that service is\n processing will be stopped, and you will need to resubmit that job.\n\n\nIf you create a new node, its configuration will not take effect until the\n OpenMPF software is properly installed and started on the associated host.\n\n\nIf you delete a node, you will need to manually turn off the hardware running\n that node (deleting a node does not shut down the machine).\n\n\n\n\nComponent Registration\n\n\nThis page allows an admin user to add and remove non-default components to and\nfrom the system:\n\n\n\n\nA component package takes the form of a tar.gz file. An admin user can either\ndrag and drop the file onto the \"Upload a new component\" dropzone area or click\nthe dropzone area to open a file browser and select the file that way.\nIn either case, the component will begin to be uploaded to the system. If the\nadmin user dragged and dropped the file onto the dropzone area then the upload\nprogress will be shown in that area. Once uploaded, the workflow manager will\nautomatically attempt to register the component. Notification messages will\nappear in the upper right side of the screen to indicate success or failure if\nan error occurs. 
The \"Current Components\" table will display the component\nstatus.\n\n\n\n\nIf for some reason the component package upload succeeded but the component\nregistration failed then the admin user will be able to click the \"Register\"\nbutton again to try to another registration attempt. For example, the admin\nuser may do this after reviewing the workflow manager logs and resolving any\nissues that prevented the component from successfully registering the first\ntime. One reason may be that a component with the same name already exists on\nthe system. Note that an error will also occur if the top-level directory of\nthe component package, once extracted, already exists in the \n/opt/mpf/plugins\n\ndirectory on the system.\n\n\nOnce registered, an admin user has the option to remove the component. This\nwill unregister it and completely remove any configured services, as well as\nthe uploaded file and its extracted contents, from the system. Also, the\ncomponent algorithm as well as any actions, tasks, and pipelines specified in\nthe component's descriptor file will be removed when the component is removed.",
+ "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023 The MITRE Corporation. All Rights Reserved.\n\n\n\n \nWARNING:\n\n For most component developers, these steps are not necessary. Instead,\n refer to the\n \nC++\n,\n \nPython\n, or\n \nJava\n\n README for developing a Docker component in your desired language.\n\n\n\n\n\n \nWARNING:\n This guide is a work in progress and may not be completely\n accurate or comprehensive.\n\n\n\n\nOverview\n\n\nThe following instructions are for setting up an environment for building and\nrunning OpenMPF outside of Docker. They serve as a reference for developers who\nwant to develop the Workflow Manager web application itself and perform end-to-\nend integration testing.\n\n\nSetup VM\n\n\n\n\n\n\nDownload the ISO for the desktop version of Ubuntu 20.04 from\n \nhttps://releases.ubuntu.com/20.04\n.\n\n\n\n\n\n\nCreate an Ubuntu VM using the downloaded iso. This part is different based on\n what VM software you are using.\n\n\n\n\nUse mpf as your username.\n\n\nDuring the initial install, the VM window was small and didn't stretch to\n fill up the screen, but this may be fixed automatically after the installation\n finishes, or there may be additional steps necessary to install tools or\n configure settings based on your VM software.\n\n\n\n\n\n\n\n\nAfter completing the installation, you will likely be prompted to update\n software. You should install the updates.\n\n\n\n\n\n\nOptionally, shutdown the VM and take a snapshot. This will enable you to revert back\n to a clean Ubuntu install in case anything goes wrong.\n\n\n\n\n\n\nOpen a terminal and run \nsudo apt update\n\n\n\n\n\n\nRun \nsudo apt install gnupg2 unzip xz-utils cmake make g++ libgtest-dev mediainfo libssl-dev liblog4cxx-dev libboost-dev file openjdk-17-jdk libprotobuf-dev protobuf-compiler libprotobuf-java python3.8-dev python3-pip python3.8-venv libde265-dev libopenblas-dev liblapacke-dev libavcodec-dev libavcodec-extra libavformat-dev libavutil-dev libswscale-dev libavresample-dev libharfbuzz-dev libfreetype-dev ffmpeg git git-lfs redis postgresql-12 curl ansible\n\n\n\n\n\n\nRun \nsudo ln --symbolic /usr/include/x86_64-linux-gnu/openblas-pthread/cblas.h /usr/include/cblas.h\n\n\n\n\n\n\nRun \nsudo ln --symbolic /usr/bin/cmake /usr/bin/cmake3\n\n\n\n\n\n\nRun \nsudo ln --symbolic /usr/bin/protoc /usr/local/bin/protoc\n\n\n\n\n\n\nFollow instructions to install Docker:\n \nhttps://docs.docker.com/engine/install/ubuntu/#install-using-the-repository\n\n\n\n\n\n\nOptionally, configure Docker to use socket activation. 
The advantage of socket activation is\n that systemd will automatically start the Docker daemon when you use \ndocker\n commands:\n\n\n\n\n\n\nsudo systemctl disable docker.service;\nsudo systemctl stop docker.service;\nsudo systemctl enable docker.socket;\n\n\n\n\n\n\n\nFollow instructions so that you can run Docker without sudo:\n \nhttps://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user\n\n\n\n\n\n\nInstall Docker Compose:\n\n\n\n\n\n\nsudo apt update\nsudo apt install docker-compose-plugin\n\n\n\n\n\n\n\nOptionally, stop redis from starting automatically:\n \nsudo systemctl disable redis\n\n\n\n\n\n\nOptionally, stop postgresql from starting automatically:\n \nsudo systemctl disable postgresql\n\n\n\n\n\n\nInitialize Postgres (use \"password\" when prompted for a password):\n\n\n\n\n\n\nsudo -i -u postgres createuser -P mpf\nsudo -i -u postgres createdb -O mpf mpf\n\n\n\n\n\nBuild and install OpenCV:\n\n\n\n\nmkdir /tmp/opencv-contrib;\nwget -O- 'https://github.com/opencv/opencv_contrib/archive/4.5.5.tar.gz' \\\n | tar --extract --gzip --directory /tmp/opencv-contrib;\nmkdir /tmp/opencv;\ncd /tmp/opencv;\nwget -O- 'https://github.com/opencv/opencv/archive/4.5.5.tar.gz' \\\n | tar --extract --gzip;\ncd opencv-4.5.5;\nmkdir build;\ncd build;\nexport OpenBLAS_HOME=/usr/lib/x86_64-linux-gnu/openblas-pthread; \\\ncmake -DCMAKE_INSTALL_PREFIX:PATH='/opt/opencv-4.5.5' \\\n -DWITH_IPP=false \\\n -DBUILD_EXAMPLES=false \\\n -DBUILD_TESTS=false \\\n -DBUILD_PERF_TESTS=false \\\n -DWITH_CUBLAS=true \\\n -DOPENCV_EXTRA_MODULES_PATH=/tmp/opencv-contrib/opencv_contrib-4.5.5/modules \\\n ..;\nsudo make --jobs \"$(nproc)\" install;\nsudo ln --symbolic '/opt/opencv-4.5.5/include/opencv4/opencv2' /usr/local/include/opencv2;\nsudo sh -c 'echo /opt/opencv-4.5.5/lib > /etc/ld.so.conf.d/mpf.conf'\nsudo ldconfig;\nsudo rm -rf /tmp/opencv-contrib /tmp/opencv;\n\n\n\n\n\nBuild and install the ActiveMQ C++ library:\n\n\n\n\nmkdir /tmp/activemq-cpp;\ncd /tmp/activemq-cpp;\nwget -O- https://dlcdn.apache.org/activemq/activemq-cpp/3.9.5/activemq-cpp-library-3.9.5-src.tar.gz \\\n | tar --extract --gzip;\ncd activemq-cpp-library-3.9.5;\n./configure;\nsudo make --jobs \"$(nproc)\" install;\nsudo rm -rf /tmp/activemq-cpp;\n\n\n\n\n\nInstall NotoEmoji font for markup:\n\n\n\n\nmkdir /tmp/noto;\ncd /tmp/noto;\nwget https://noto-website-2.storage.googleapis.com/pkgs/NotoEmoji-unhinted.zip;\nunzip NotoEmoji-unhinted.zip;\nsudo mkdir --parents /usr/share/fonts/google-noto-emoji;\nsudo cp NotoEmoji-Regular.ttf /usr/share/fonts/google-noto-emoji/;\nsudo chmod a+r /usr/share/fonts/google-noto-emoji/NotoEmoji-Regular.ttf;\nrm -rf /tmp/noto;\n\n\n\n\n\nBuild and install PNG Defry:\n\n\n\n\nmkdir /tmp/pngdefry;\ncd /tmp/pngdefry;\nwget -O- 'https://github.com/openmpf/pngdefry/archive/v1.2.tar.gz' \\\n | tar --extract --gzip;\ncd pngdefry-1.2;\nsudo gcc pngdefry.c -o /usr/local/bin/pngdefry;\nrm -rf /tmp/pngdefry;\n\n\n\n\n\nInstall Maven:\n\n\n\n\nwget -O- 'https://archive.apache.org/dist/maven/maven-3/3.3.3/binaries/apache-maven-3.3.3-bin.tar.gz' \\\n | sudo tar --extract --gzip --directory /opt;\nsudo ln --symbolic /opt/apache-maven-3.3.3/bin/mvn /usr/local/bin;\n\n\n\n\n\nBuild and install libheif:\n\n\n\n\nmkdir /tmp/libheif;\ncd /tmp/libheif;\nwget -O- https://github.com/strukturag/libheif/archive/refs/tags/v1.12.0.tar.gz \\\n | tar --extract --gzip;\ncd libheif-1.12.0;\nmkdir build;\ncd build;\ncmake3 -DCMAKE_INSTALL_PREFIX=/usr -DWITH_EXAMPLES=false ..;\nsudo make --jobs \"$(nproc)\" 
install;\ncd;\nsudo rm -rf /tmp/libheif;\n\n\n\n\n\nFrom your home directory run:\n\n\n\n\ngit clone https://github.com/openmpf/openmpf-projects.git --recursive;\ncd openmpf-projects;\ngit checkout develop;\ngit submodule foreach git checkout develop;\n\n\n\n\n\n\n\nRun: \npip install openmpf-projects/openmpf/trunk/bin/mpf-scripts\n\n\n\n\n\n\nAdd \nPATH=\"$HOME/.local/bin:$PATH\"\n to \n~/.bashrc\n\n\n\n\n\n\nRun \nmkdir -p openmpf-projects/openmpf/trunk/install/share/logs\n\n\n\n\n\n\nRun \nsudo cp openmpf-projects/openmpf/trunk/mpf-install/src/main/scripts/mpf-profile.sh /etc/profile.d/mpf.sh\n\n\n\n\n\n\nRun \nsudo sh -c 'echo /home/mpf/mpf-sdk-install/lib >> /etc/ld.so.conf.d/mpf.conf'\n\n\n\n\n\n\nRun \nsudo cp openmpf-projects/openmpf/trunk/node-manager/src/scripts/node-manager.service /etc/systemd/system/node-manager.service\n\n\n\n\n\n\nRun \ncd ~/openmpf-projects/openmpf/trunk/workflow-manager/src/main/resources/properties/; cp mpf-private-example.properties mpf-private.properties\n\n\n\n\n\n\nRun \nsudo sh -c 'echo \"[mpf-child]\\nlocalhost\" >> /etc/ansible/hosts'\n\n\n\n\n\n\nRun \nmkdir -p ~/.m2/repository/; tar -f /home/mpf/openmpf-projects/openmpf-build-tools/mpf-maven-deps.tar.gz --extract --gzip --directory ~/.m2/repository/\n\n\n\n\n\n\nReboot the VM.\n\n\n\n\n\n\nAt this point you may wish to install additional dependencies so that you can\nbuild specific OpenMPF components. Refer to the commands in the \nDockerfile\n\nfor each component you're interested in.\n\n\nConfigure Users\n\n\nTo change the default user password settings, modify\n\nopenmpf-projects/openmpf/trunk/workflow-manager/src/main/resources/properties/user.properties\n.\nNote that the default settings are public knowledge, which could be a security\nrisk.\n\n\nNote that the \nmpf remove-user\n and \nmpf add-user\n commands explained in the\n\nCommand Line Tools\n section do not modify the\n\nuser.properties\n file. If you remove a user using the \nmpf remove-user\n\ncommand, the changes will take effect at runtime, but an entry may still exist\nfor that user in the \nuser.properties\n file. If so, then the user account will\nbe recreated the next time the Workflow Manager is restarted.\n\n\nBuild and Run the OpenMPF Workflow Manager Web Application\n\n\n\n\nBuild OpenMPF:\n\n\n\n\ncd ~/openmpf-projects/openmpf;\nmvn clean install \\\n -DskipTests -Dmaven.test.skip=true \\\n -DskipITs \\\n -Dcomponents.build.components=openmpf-components/cpp/OcvFaceDetection \\\n -Dstartup.auto.registration.skip=false;\n\n\n\n\n\nStart OpenMPF with \nmpf start\n.\n\n\n\n\nLook for a log message like the following in the terminal, indicating that the Workflow Manager has\nfinished starting:\n\n\n2022-10-11 12:21:16,447 INFO [main] o.m.m.Application - Started Application in 22.843 seconds (JVM running for 24.661)\n\n\n\nAfter startup, the Workflow Manager will be available at \nhttp://localhost:8080\n.\nBrowse to this URL using Firefox or Chrome.\n\n\nIf you want to test regular user capabilities, log in as the \"mpf\" user with\nthe \"mpf123\" password. Please see the\n\nOpenMPF User Guide\n for more information.\nAlternatively, if you want to test admin capabilities, then log in as the \"admin\"\nuser with the \"mpfadm\" password. 
Please see the\n\nOpenMPF Admin Guide\n for more information.\nWhen finished using OpenMPF, stop the Workflow Manager with \nctrl-c\n and then run \nmpf stop\n to stop\nthe other system dependencies.\n\n\nThe preferred method to start and stop services for OpenMPF is with the\n\nmpf start\n and \nmpf stop\n commands. For additional information on these\ncommands, please see the\n\nCommand Line Tools\n section.\nThese commands will start and stop the PostgreSQL, Redis, Node Manager, and Workflow Manager processes.\n\n\nKnown Issues\n\n\no.m.m.m.c.JobController - Failure creating job. supplier.get()\n\n\nIf you see an error message similar to:\n\n\n2022-02-07 17:17:30,538 ERROR [http-nio-8080-exec-1] o.m.m.m.c.JobController - Failure creating job. supplier.get()\njava.lang.NullPointerException: supplier.get()\n at java.util.Objects.requireNonNull(Objects.java:246) ~[?:?]\n at java.util.Objects.requireNonNullElseGet(Objects.java:321) ~[?:?]\n at org.mitre.mpf.wfm.util.PropertiesUtil.getHostName(PropertiesUtil.java:267) ~[classes/:?]\n at org.mitre.mpf.wfm.util.PropertiesUtil.getExportedJobId(PropertiesUtil.java:285) ~[classes/:?]\n\n\n\nOpen \n/etc/profile.d/mpf.sh\n and change \nexport HOSTNAME\n to\n\nexport HOSTNAME=$(hostname)\n. Then, restart the VM.\n\n\nAppendices\n\n\nCommand Line Tools\n\n\nOpenMPF installs command line tools that can be accessed through a terminal\non the development machine. All of the tools take the form of actions:\n\nmpf <action> [options ...]\n.\n\n\nExecute \nmpf --help\n for general documentation and \nmpf <action> --help\n for\ndocumentation about a specific action.\n\n\n\n\nStart / Stop Actions\n: Actions for starting and stopping the OpenMPF\n system dependencies, including PostgreSQL, Redis, Workflow Manager, and the\n node managers on the various nodes in the OpenMPF cluster.\n\n\nmpf status\n: displays a message indicating whether each of the system\n dependencies is running or not\n\n\nmpf start\n: starts all of the system dependencies\n\n\nmpf stop\n: stops all of the system dependencies\n\n\nmpf restart\n : stops and then starts all of the system dependencies\n\n\n\n\n\n\nUser Actions\n: Actions for managing Workflow Manager user accounts. If\n changes are made to an existing user then that user will need to log off or\n the Workflow Manager will need to be restarted for the changes to take effect.\n\n\nmpf list-users\n : lists all of the existing user accounts and their role\n (non-admin or admin)\n\n\nmpf add-user \n: adds a new user account; will be\n prompted to enter the account password\n\n\nmpf remove-user \n : removes an existing user account\n\n\nmpf change-role \n : change the role (non-admin to admin\n or vice versa) for an existing user\n\n\nmpf change-password \n: change the password for an existing\n user; will be prompted to enter the new account password\n\n\n\n\n\n\nClean Actions\n: Actions to remove old data and revert the system to a\n new install state. 
User accounts, registered components, as well as custom\n actions, tasks, and pipelines, are preserved.\n\n\nmpf clean\n: cleans out old job information and results, pending job requests, and marked up\n media files, but preserves log files and uploaded media.\n\n\nmpf clean --delete-logs --delete-uploaded-media\n: the same as \nmpf clean\n\n but also deletes log files and uploaded media\n\n\n\n\n\n\nNode Action\n: Actions for managing node membership in the OpenMPF cluster.\n\n\nmpf list-nodes\n: If the Workflow Manager is running, get the current\n JGroups view; otherwise, list the core nodes\n\n\n\n\n\n\n\n\nPackaging a Component\n\n\nIn a non-Docker deployment, admin users can register component packages through\nthe web UI. Refer to \nComponent Registration\n.\n\n\nOnce the descriptor file is complete, as described in\n\nComponent Descriptor Reference\n,\nthe next step is to compile your component source code, and finally, create a\n.tar.gz package containing the descriptor file, component library, and all\nother necessary files.\n\n\nThe package should contain a top-level directory with a unique name that does\nnot conflict with existing component packages.\nThe top-level directory name should be the same as the \ncomponentName\n.\n\n\nWithin the top-level directory there must be a directory named \u201cdescriptor\u201d\nwith the descriptor JSON file in it. The name of the file must be\n\u201cdescriptor.json\u201d.\n\n\nExample:\n\n\n//sample-component-1.0.0.tar.gz contents\nSampleComponent/\n config/\n descriptor/\n descriptor.json\n lib/\n\n\n\nInstalling and registering a component\n\n\nThe Component Registration web page, located in the Admin section of the\nOpenMPF web user interface, can be used to upload and register the component.\n\n\nDrag and drop the .tar.gz file containing the component onto the dropzone area\nof that page. The component will automatically be uploaded and registered.\n\n\nUpon successful registration, the component will be available for deployment\nonto OpenMPF nodes via the Node Configuration web page and\n\n/rest/nodes/config\n endpoint.\n\n\nIf the descriptor contains custom actions, tasks, or pipelines, then they will\nbe automatically added to the system upon registration.\n\n\n\n\nNOTE:\n If the descriptor does not contain custom actions, tasks,\nor pipelines, then a default action, task, and pipeline will be generated\nand added to the system.\n\n\nThe default action will use the component\u2019s algorithm with its default\nproperty value settings.\nThe default task will use the default action.\nThe default pipeline will use the default task; it will only be generated\nif the algorithm does not specify any \nrequiresCollection\n states.\n\n\n\n\nUnregistering a component\n\n\nA component can be unregistered by using the remove button on the Component\nRegistration page.\n\n\nDuring unregistration, all services, algorithms, actions, tasks, and pipelines\nassociated with the component are deleted. 
Additionally, all actions, tasks,\nand pipelines that depend on these elements are removed.\n\n\nWeb UI\n\n\nThe following sections will cover some additional functionality permitted to\nadmin users in a non-Docker deployment.\n\n\nNode Configuration and Status\n\n\nThis page provides a list of all of the services that are configured to run on\nthe OpenMPF cluster:\n\n\n\n\nEach node shows information about the current status of each service, whether it is\nunlaunchable due to an underlying error, and how many services are running on\nthat node. If a service is unlaunchable, it will be indicated using a red\nstatus icon (not shown). Note that services are grouped by component type.\nClick the chevron \">\" to expand a service group to view the individual services.\n\n\nAn admin user can start, stop, or restart these services on an individual basis. If a\nnon-admin user views this page, the \"Action(s)\" column is not displayed. This\npage also enables an admin user to edit the configuration for all nodes in the\nOpenMPF cluster. A non-admin user can only view the existing configuration.\n\n\nAn admin user can add a node by using the \"Add Node\" button and selecting a\nnode in the OpenMPF cluster from the drop-down list. You can also choose to add\nall of the node's services at this time. A node and all of its configured services can be\nremoved by clicking the trash can to the right of the node's hostname.\n\n\nAn admin user can add services individually by selecting the node edit button\nat the bottom of the node. The number of service instances can be increased or\ndecreased by using the drop-down. Click the \"Submit\" button to save the changes.\n\n\nWhen making changes, please be aware of the following:\n\n\n\n\nIt may take a minute for the configuration to take effect on the server.\n\n\nIf you remove an existing service from a node, any job that service is\n processing will be stopped, and you will need to resubmit that job.\n\n\nIf you create a new node, its configuration will not take effect until the\n OpenMPF software is properly installed and started on the associated host.\n\n\nIf you delete a node, you will need to manually turn off the hardware running\n that node (deleting a node does not shut down the machine).\n\n\n\n\nComponent Registration\n\n\nThis page allows an admin user to add and remove non-default components to and\nfrom the system:\n\n\n\n\nA component package takes the form of a tar.gz file. An admin user can either\ndrag and drop the file onto the \"Upload a new component\" dropzone area or click\nthe dropzone area to open a file browser and select the file that way.\nIn either case, the component will begin uploading to the system. If the\nadmin user dragged and dropped the file onto the dropzone area, then the upload\nprogress will be shown in that area. Once uploaded, the workflow manager will\nautomatically attempt to register the component. Notification messages will\nappear in the upper right corner of the screen to indicate success, or failure if\nan error occurs. The \"Current Components\" table will display the component\nstatus.\n\n\n\n\nIf the component package upload succeeds but the component\nregistration fails, then the admin user can click the \"Register\"\nbutton again to make another registration attempt. For example, the admin\nuser may do this after reviewing the workflow manager logs and resolving any\nissues that prevented the component from successfully registering the first\ntime. 
One reason may be that a component with the same name already exists on\nthe system. Note that an error will also occur if the top-level directory of\nthe component package, once extracted, already exists in the \n/opt/mpf/plugins\n\ndirectory on the system.\n\n\nOnce registered, an admin user has the option to remove the component. This\nwill unregister it and completely remove any configured services, as well as\nthe uploaded file and its extracted contents, from the system. Also, the\ncomponent's algorithm, as well as any actions, tasks, and pipelines specified in\nits descriptor file, will be removed.",
"title": "Development Environment Guide"
},
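For reference, the "Packaging a Component" walkthrough in the entry above describes the required .tar.gz layout but not the commands that produce it. A minimal sketch of assembling the documented example package, assuming the compiled component library is named libSampleComponent.so (a placeholder; the SampleComponent name and the descriptor/descriptor.json location come from the text):

    # Assemble the layout described in the guide; the top-level directory
    # name must match the componentName from the descriptor.
    mkdir -p SampleComponent/config SampleComponent/descriptor SampleComponent/lib
    cp descriptor.json SampleComponent/descriptor/descriptor.json
    cp libSampleComponent.so SampleComponent/lib/   # placeholder library name
    tar --create --gzip --file sample-component-1.0.0.tar.gz SampleComponent/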
{
@@ -1297,7 +1302,7 @@
},
{
"location": "/Development-Environment-Guide/index.html#setup-vm",
- "text": "Download the ISO for the desktop version of Ubuntu 20.04 from\n https://releases.ubuntu.com/20.04 . Create an Ubuntu VM using the downloaded iso. This part is different based on\n what VM software you are using. Use mpf as your username. During the initial install, the VM window was small and didn't stretch to\n fill up the screen, but this may be fixed automatically after the installation\n finishes, or there may be additional steps necessary to install tools or\n configure settings based on your VM software. After completing the installation, you will likely be prompted to update\n software. You should install the updates. Optionally, shutdown the VM and take a snapshot. This will enable you to revert back\n to a clean Ubuntu install in case anything goes wrong. Open a terminal and run sudo apt update Run sudo apt install gnupg2 unzip xz-utils cmake make g++ libgtest-dev mediainfo libssl-dev liblog4cxx-dev libboost-dev file openjdk-17-jdk libprotobuf-dev protobuf-compiler libprotobuf-java python3.8-dev python3-pip python3.8-venv libde265-dev libopenblas-dev liblapacke-dev libavcodec-dev libavcodec-extra libavformat-dev libavutil-dev libswscale-dev libavresample-dev libharfbuzz-dev libfreetype-dev ffmpeg git git-lfs redis postgresql-12 curl ansible Run sudo ln --symbolic /usr/include/x86_64-linux-gnu/openblas-pthread/cblas.h /usr/include/cblas.h Run sudo ln --symbolic /usr/bin/cmake /usr/bin/cmake3 Run sudo ln --symbolic /usr/bin/protoc /usr/local/bin/protoc Follow instructions to install Docker:\n https://docs.docker.com/engine/install/ubuntu/#install-using-the-repository Optionally, configure Docker to use socket activation. The advantage of socket activation is\n that systemd will automatically start the Docker daemon when you use docker commands: sudo systemctl disable docker.service;\nsudo systemctl stop docker.service;\nsudo systemctl enable docker.socket; Follow instructions so that you can run Docker without sudo:\n https://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user Install Docker Compose: sudo apt update\nsudo apt install docker-compose-plugin Optionally, stop redis from starting automatically:\n sudo systemctl disable redis Optionally, stop postgresql from starting automatically:\n sudo systemctl disable postgresql Initialize Postgres (use \"password\" when prompted for a password): sudo -i -u postgres createuser -P mpf\nsudo -i -u postgres createdb -O mpf mpf Build and install OpenCV: mkdir /tmp/opencv-contrib;\nwget -O- 'https://github.com/opencv/opencv_contrib/archive/4.5.5.tar.gz' \\\n | tar --extract --gzip --directory /tmp/opencv-contrib;\nmkdir /tmp/opencv;\ncd /tmp/opencv;\nwget -O- 'https://github.com/opencv/opencv/archive/4.5.5.tar.gz' \\\n | tar --extract --gzip;\ncd opencv-4.5.5;\nmkdir build;\ncd build;\nexport OpenBLAS_HOME=/usr/lib/x86_64-linux-gnu/openblas-pthread; \\\ncmake -DCMAKE_INSTALL_PREFIX:PATH='/opt/opencv-4.5.5' \\\n -DWITH_IPP=false \\\n -DBUILD_EXAMPLES=false \\\n -DBUILD_TESTS=false \\\n -DBUILD_PERF_TESTS=false \\\n -DWITH_CUBLAS=true \\\n -DOPENCV_EXTRA_MODULES_PATH=/tmp/opencv-contrib/opencv_contrib-4.5.5/modules \\\n ..;\nsudo make --jobs \"$(nproc)\" install;\nsudo ln --symbolic '/opt/opencv-4.5.5/include/opencv4/opencv2' /usr/local/include/opencv2;\nsudo sh -c 'echo /opt/opencv-4.5.5/lib > /etc/ld.so.conf.d/mpf.conf'\nsudo ldconfig;\nsudo rm -rf /tmp/opencv-contrib /tmp/opencv; Build and install the ActiveMQ C++ library: mkdir /tmp/activemq-cpp;\ncd /tmp/activemq-cpp;\nwget -O- 
https://dlcdn.apache.org/activemq/activemq-cpp/3.9.5/activemq-cpp-library-3.9.5-src.tar.gz \\\n | tar --extract --gzip;\ncd activemq-cpp-library-3.9.5;\n./configure;\nsudo make --jobs \"$(nproc)\" install;\nsudo rm -rf /tmp/activemq-cpp; Install NotoEmoji font for markup: mkdir /tmp/noto;\ncd /tmp/noto;\nwget https://noto-website-2.storage.googleapis.com/pkgs/NotoEmoji-unhinted.zip;\nunzip NotoEmoji-unhinted.zip;\nsudo mkdir --parents /usr/share/fonts/google-noto-emoji;\nsudo cp NotoEmoji-Regular.ttf /usr/share/fonts/google-noto-emoji/;\nsudo chmod a+r /usr/share/fonts/google-noto-emoji/NotoEmoji-Regular.ttf;\nrm -rf /tmp/noto; Build and install PNG Defry: mkdir /tmp/pngdefry;\ncd /tmp/pngdefry;\nwget -O- 'https://github.com/openmpf/pngdefry/archive/v1.2.tar.gz' \\\n | tar --extract --gzip;\ncd pngdefry-1.2;\nsudo gcc pngdefry.c -o /usr/local/bin/pngdefry;\nrm -rf /tmp/pngdefry; Install Maven: wget -O- 'https://archive.apache.org/dist/maven/maven-3/3.3.3/binaries/apache-maven-3.3.3-bin.tar.gz' \\\n | sudo tar --extract --gzip --directory /opt;\nsudo ln --symbolic /opt/apache-maven-3.3.3/bin/mvn /usr/local/bin; Build and install libheif: mkdir /tmp/libheif;\ncd /tmp/libheif;\nwget -O- https://github.com/strukturag/libheif/archive/refs/tags/v1.12.0.tar.gz \\\n | tar --extract --gzip;\ncd libheif-1.12.0;\nmkdir build;\ncd build;\ncmake3 -DCMAKE_INSTALL_PREFIX=/usr -DWITH_EXAMPLES=false ..;\nsudo make --jobs \"$(nproc)\" install;\ncd;\nsudo rm -rf /tmp/libheif; Install ActiveMQ: wget -O- https://archive.apache.org/dist/activemq/5.17.0/apache-activemq-5.17.0-bin.tar.gz \\\n | sudo tar --extract --gzip --directory /opt;\nsudo ln --symbolic /opt/apache-activemq-5.17.0 /opt/activemq;\nsudo chown -R mpf:mpf /opt/apache-activemq-5.17.0 In /opt/activemq/conf/activemq.xml change line 34 from \n \n to \n In /opt/activemq/conf/activemq.xml (line 38) under the line:\n , add \n \" prioritizedMessages=\"true\" useCache=\"false\" expireMessagesPeriod=\"0\" queuePrefetch=\"1\" /> In /opt/activemq/conf/activemq.xml (line 66, after making the above addition),\n change the line: \n to \n . In /opt/activemq/conf/log4j2.properties (line 69), change the line \n appender.logfile.layout.pattern=%d | %-5p | %m | %c | %t%n%throwable{full} \n to \n appender.logfile.layout.pattern=%d %p [%t] %c - %m%n From your home directory run: git clone https://github.com/openmpf/openmpf-projects.git --recursive;\ncd openmpf-projects;\ngit checkout develop;\ngit submodule foreach git checkout develop; Run: pip install openmpf-projects/openmpf/trunk/bin/mpf-scripts Add PATH=\"$HOME/.local/bin:$PATH\" to ~/.bashrc Run mkdir -p openmpf-projects/openmpf/trunk/install/share/logs Run sudo cp openmpf-projects/openmpf/trunk/mpf-install/src/main/scripts/mpf-profile.sh /etc/profile.d/mpf.sh Run sudo sh -c 'echo /home/mpf/mpf-sdk-install/lib >> /etc/ld.so.conf.d/mpf.conf' Run sudo cp openmpf-projects/openmpf/trunk/node-manager/src/scripts/node-manager.service /etc/systemd/system/node-manager.service Run cd ~/openmpf-projects/openmpf/trunk/workflow-manager/src/main/resources/properties/; cp mpf-private-example.properties mpf-private.properties Run sudo sh -c 'echo \"[mpf-child]\\nlocalhost\" >> /etc/ansible/hosts' Run mkdir -p ~/.m2/repository/; tar -f /home/mpf/openmpf-projects/openmpf-build-tools/mpf-maven-deps.tar.gz --extract --gzip --directory ~/.m2/repository/ Reboot the VM. At this point you may wish to install additional dependencies so that you can\nbuild specific OpenMPF components. 
Refer to the commands in the Dockerfile \nfor each component you're interested in.",
+ "text": "Download the ISO for the desktop version of Ubuntu 20.04 from\n https://releases.ubuntu.com/20.04 . Create an Ubuntu VM using the downloaded iso. This part is different based on\n what VM software you are using. Use mpf as your username. During the initial install, the VM window was small and didn't stretch to\n fill up the screen, but this may be fixed automatically after the installation\n finishes, or there may be additional steps necessary to install tools or\n configure settings based on your VM software. After completing the installation, you will likely be prompted to update\n software. You should install the updates. Optionally, shutdown the VM and take a snapshot. This will enable you to revert back\n to a clean Ubuntu install in case anything goes wrong. Open a terminal and run sudo apt update Run sudo apt install gnupg2 unzip xz-utils cmake make g++ libgtest-dev mediainfo libssl-dev liblog4cxx-dev libboost-dev file openjdk-17-jdk libprotobuf-dev protobuf-compiler libprotobuf-java python3.8-dev python3-pip python3.8-venv libde265-dev libopenblas-dev liblapacke-dev libavcodec-dev libavcodec-extra libavformat-dev libavutil-dev libswscale-dev libavresample-dev libharfbuzz-dev libfreetype-dev ffmpeg git git-lfs redis postgresql-12 curl ansible Run sudo ln --symbolic /usr/include/x86_64-linux-gnu/openblas-pthread/cblas.h /usr/include/cblas.h Run sudo ln --symbolic /usr/bin/cmake /usr/bin/cmake3 Run sudo ln --symbolic /usr/bin/protoc /usr/local/bin/protoc Follow instructions to install Docker:\n https://docs.docker.com/engine/install/ubuntu/#install-using-the-repository Optionally, configure Docker to use socket activation. The advantage of socket activation is\n that systemd will automatically start the Docker daemon when you use docker commands: sudo systemctl disable docker.service;\nsudo systemctl stop docker.service;\nsudo systemctl enable docker.socket; Follow instructions so that you can run Docker without sudo:\n https://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user Install Docker Compose: sudo apt update\nsudo apt install docker-compose-plugin Optionally, stop redis from starting automatically:\n sudo systemctl disable redis Optionally, stop postgresql from starting automatically:\n sudo systemctl disable postgresql Initialize Postgres (use \"password\" when prompted for a password): sudo -i -u postgres createuser -P mpf\nsudo -i -u postgres createdb -O mpf mpf Build and install OpenCV: mkdir /tmp/opencv-contrib;\nwget -O- 'https://github.com/opencv/opencv_contrib/archive/4.5.5.tar.gz' \\\n | tar --extract --gzip --directory /tmp/opencv-contrib;\nmkdir /tmp/opencv;\ncd /tmp/opencv;\nwget -O- 'https://github.com/opencv/opencv/archive/4.5.5.tar.gz' \\\n | tar --extract --gzip;\ncd opencv-4.5.5;\nmkdir build;\ncd build;\nexport OpenBLAS_HOME=/usr/lib/x86_64-linux-gnu/openblas-pthread; \\\ncmake -DCMAKE_INSTALL_PREFIX:PATH='/opt/opencv-4.5.5' \\\n -DWITH_IPP=false \\\n -DBUILD_EXAMPLES=false \\\n -DBUILD_TESTS=false \\\n -DBUILD_PERF_TESTS=false \\\n -DWITH_CUBLAS=true \\\n -DOPENCV_EXTRA_MODULES_PATH=/tmp/opencv-contrib/opencv_contrib-4.5.5/modules \\\n ..;\nsudo make --jobs \"$(nproc)\" install;\nsudo ln --symbolic '/opt/opencv-4.5.5/include/opencv4/opencv2' /usr/local/include/opencv2;\nsudo sh -c 'echo /opt/opencv-4.5.5/lib > /etc/ld.so.conf.d/mpf.conf'\nsudo ldconfig;\nsudo rm -rf /tmp/opencv-contrib /tmp/opencv; Build and install the ActiveMQ C++ library: mkdir /tmp/activemq-cpp;\ncd /tmp/activemq-cpp;\nwget -O- 
https://dlcdn.apache.org/activemq/activemq-cpp/3.9.5/activemq-cpp-library-3.9.5-src.tar.gz \\\n | tar --extract --gzip;\ncd activemq-cpp-library-3.9.5;\n./configure;\nsudo make --jobs \"$(nproc)\" install;\nsudo rm -rf /tmp/activemq-cpp; Install NotoEmoji font for markup: mkdir /tmp/noto;\ncd /tmp/noto;\nwget https://noto-website-2.storage.googleapis.com/pkgs/NotoEmoji-unhinted.zip;\nunzip NotoEmoji-unhinted.zip;\nsudo mkdir --parents /usr/share/fonts/google-noto-emoji;\nsudo cp NotoEmoji-Regular.ttf /usr/share/fonts/google-noto-emoji/;\nsudo chmod a+r /usr/share/fonts/google-noto-emoji/NotoEmoji-Regular.ttf;\nrm -rf /tmp/noto; Build and install PNG Defry: mkdir /tmp/pngdefry;\ncd /tmp/pngdefry;\nwget -O- 'https://github.com/openmpf/pngdefry/archive/v1.2.tar.gz' \\\n | tar --extract --gzip;\ncd pngdefry-1.2;\nsudo gcc pngdefry.c -o /usr/local/bin/pngdefry;\nrm -rf /tmp/pngdefry; Install Maven: wget -O- 'https://archive.apache.org/dist/maven/maven-3/3.3.3/binaries/apache-maven-3.3.3-bin.tar.gz' \\\n | sudo tar --extract --gzip --directory /opt;\nsudo ln --symbolic /opt/apache-maven-3.3.3/bin/mvn /usr/local/bin; Build and install libheif: mkdir /tmp/libheif;\ncd /tmp/libheif;\nwget -O- https://github.com/strukturag/libheif/archive/refs/tags/v1.12.0.tar.gz \\\n | tar --extract --gzip;\ncd libheif-1.12.0;\nmkdir build;\ncd build;\ncmake3 -DCMAKE_INSTALL_PREFIX=/usr -DWITH_EXAMPLES=false ..;\nsudo make --jobs \"$(nproc)\" install;\ncd;\nsudo rm -rf /tmp/libheif; From your home directory run: git clone https://github.com/openmpf/openmpf-projects.git --recursive;\ncd openmpf-projects;\ngit checkout develop;\ngit submodule foreach git checkout develop; Run: pip install openmpf-projects/openmpf/trunk/bin/mpf-scripts Add PATH=\"$HOME/.local/bin:$PATH\" to ~/.bashrc Run mkdir -p openmpf-projects/openmpf/trunk/install/share/logs Run sudo cp openmpf-projects/openmpf/trunk/mpf-install/src/main/scripts/mpf-profile.sh /etc/profile.d/mpf.sh Run sudo sh -c 'echo /home/mpf/mpf-sdk-install/lib >> /etc/ld.so.conf.d/mpf.conf' Run sudo cp openmpf-projects/openmpf/trunk/node-manager/src/scripts/node-manager.service /etc/systemd/system/node-manager.service Run cd ~/openmpf-projects/openmpf/trunk/workflow-manager/src/main/resources/properties/; cp mpf-private-example.properties mpf-private.properties Run sudo sh -c 'echo \"[mpf-child]\\nlocalhost\" >> /etc/ansible/hosts' Run mkdir -p ~/.m2/repository/; tar -f /home/mpf/openmpf-projects/openmpf-build-tools/mpf-maven-deps.tar.gz --extract --gzip --directory ~/.m2/repository/ Reboot the VM. At this point you may wish to install additional dependencies so that you can\nbuild specific OpenMPF components. Refer to the commands in the Dockerfile \nfor each component you're interested in.",
"title": "Setup VM"
},
{
@@ -1307,7 +1312,7 @@
},
{
"location": "/Development-Environment-Guide/index.html#build-and-run-the-openmpf-workflow-manager-web-application",
- "text": "Build OpenMPF: cd ~/openmpf-projects/openmpf;\nmvn clean install \\\n -DskipTests -Dmaven.test.skip=true \\\n -DskipITs \\\n -Dcomponents.build.components=openmpf-components/cpp/OcvFaceDetection \\\n -Dstartup.auto.registration.skip=false; Start OpenMPF with mpf start . Look for this log message in the terminal with a time value indicating the Workflow Manager has\nfinished starting: 2022-10-11 12:21:16,447 INFO [main] o.m.m.Application - Started Application in 22.843 seconds (JVM running for 24.661) After startup, the Workflow Manager will be available at http://localhost:8080/workflow-manager .\nBrowse to this URL using Firefox or Chrome. If you want to test regular user capabilities, log in as the \"mpf\" user with\nthe \"mpf123\" password. Please see the OpenMPF User Guide for more information.\nAlternatively, if you want to test admin capabilities then log in as \"admin\"\nuser with the \"mpfadm\" password. Please see the OpenMPF Admin Guide for more information.\nWhen finished using OpenMPF, stop Workflow Manager with ctrl-c and then run mpf stop to stop\nthe other system dependencies. The preferred method to start and stop services for OpenMPF is with the mpf start and mpf stop commands. For additional information on these\ncommands, please see the Command Line Tools section.\nThese will start and stop the ActiveMQ, PostgreSQL, Redis, Node Manager,\nand Workflow Manager processes.",
+ "text": "Build OpenMPF: cd ~/openmpf-projects/openmpf;\nmvn clean install \\\n -DskipTests -Dmaven.test.skip=true \\\n -DskipITs \\\n -Dcomponents.build.components=openmpf-components/cpp/OcvFaceDetection \\\n -Dstartup.auto.registration.skip=false; Start OpenMPF with mpf start . Look for this log message in the terminal with a time value indicating the Workflow Manager has\nfinished starting: 2022-10-11 12:21:16,447 INFO [main] o.m.m.Application - Started Application in 22.843 seconds (JVM running for 24.661) After startup, the Workflow Manager will be available at http://localhost:8080 .\nBrowse to this URL using Firefox or Chrome. If you want to test regular user capabilities, log in as the \"mpf\" user with\nthe \"mpf123\" password. Please see the OpenMPF User Guide for more information.\nAlternatively, if you want to test admin capabilities then log in as \"admin\"\nuser with the \"mpfadm\" password. Please see the OpenMPF Admin Guide for more information.\nWhen finished using OpenMPF, stop Workflow Manager with ctrl-c and then run mpf stop to stop\nthe other system dependencies. The preferred method to start and stop services for OpenMPF is with the mpf start and mpf stop commands. For additional information on these\ncommands, please see the Command Line Tools section.\nThese will start and stop the PostgreSQL, Redis, Node Manager, and Workflow Manager processes.",
"title": "Build and Run the OpenMPF Workflow Manager Web Application"
},
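The entry above says to watch the terminal for the "Started Application" log line before browsing to the web UI. A minimal sketch of scripting that wait, assuming curl is installed and that polling http://localhost:8080 (the URL given in the text) is an acceptable readiness check; the five-second interval is arbitrary:

    # Poll until the Workflow Manager responds at the documented URL.
    until curl --silent --fail --output /dev/null http://localhost:8080; do
        echo "Waiting for the Workflow Manager to start..."
        sleep 5
    done
    echo "Workflow Manager is up at http://localhost:8080"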
{
@@ -1322,7 +1327,7 @@
},
{
"location": "/Development-Environment-Guide/index.html#command-line-tools",
- "text": "OpenMPF installs command line tools that can be accessed through a terminal\non the development machine. All of the tools take the form of actions: mpf [options ...] . Execute mpf --help for general documentation and mpf --help for\ndocumentation about a specific action. Start / Stop Actions : Actions for starting and stopping the OpenMPF\n system dependencies, including PostgreSQL, ActiveMQ, Redis, Workflow Manager, and the\n node managers on the various nodes in the OpenMPF cluster. mpf status : displays a message indicating whether each of the system\n dependencies is running or not mpf start : starts all of the system dependencies mpf stop : stops all of the system dependencies mpf restart : stops and then starts all of the system dependencies User Actions : Actions for managing Workflow Manager user accounts. If\n changes are made to an existing user then that user will need to log off or\n the Workflow Manager will need to be restarted for the changes to take effect. mpf list-users : lists all of the existing user accounts and their role\n (non-admin or admin) mpf add-user : adds a new user account; will be\n prompted to enter the account password mpf remove-user : removes an existing user account mpf change-role : change the role (non-admin to admin\n or vice versa) for an existing user mpf change-password : change the password for an existing\n user; will be prompted to enter the new account password Clean Actions : Actions to remove old data and revert the system to a\n new install state. User accounts, registered components, as well as custom\n actions, tasks, and pipelines, are preserved. mpf clean : cleans out old job information and results, pending job\n requests, marked up media files, and ActiveMQ data, but preserves log\n files and uploaded media mpf clean --delete-logs --delete-uploaded-media : the same as mpf clean \n but also deletes log files and uploaded media Node Action : Actions for managing node membership in the OpenMPF cluster. mpf list-nodes : If the Workflow Manager is running, get the current\n JGroups view; otherwise, list the core nodes",
+ "text": "OpenMPF installs command line tools that can be accessed through a terminal\non the development machine. All of the tools take the form of actions: mpf [options ...] . Execute mpf --help for general documentation and mpf --help for\ndocumentation about a specific action. Start / Stop Actions : Actions for starting and stopping the OpenMPF\n system dependencies, including PostgreSQL, Redis, Workflow Manager, and the\n node managers on the various nodes in the OpenMPF cluster. mpf status : displays a message indicating whether each of the system\n dependencies is running or not mpf start : starts all of the system dependencies mpf stop : stops all of the system dependencies mpf restart : stops and then starts all of the system dependencies User Actions : Actions for managing Workflow Manager user accounts. If\n changes are made to an existing user then that user will need to log off or\n the Workflow Manager will need to be restarted for the changes to take effect. mpf list-users : lists all of the existing user accounts and their role\n (non-admin or admin) mpf add-user : adds a new user account; will be\n prompted to enter the account password mpf remove-user : removes an existing user account mpf change-role : change the role (non-admin to admin\n or vice versa) for an existing user mpf change-password : change the password for an existing\n user; will be prompted to enter the new account password Clean Actions : Actions to remove old data and revert the system to a\n new install state. User accounts, registered components, as well as custom\n actions, tasks, and pipelines, are preserved. mpf clean : cleans out old job information and results, pending job requests, and marked up\n media files, but preserves log files and uploaded media. mpf clean --delete-logs --delete-uploaded-media : the same as mpf clean \n but also deletes log files and uploaded media Node Action : Actions for managing node membership in the OpenMPF cluster. mpf list-nodes : If the Workflow Manager is running, get the current\n JGroups view; otherwise, list the core nodes",
"title": "Command Line Tools"
},
{
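Taken together, the command line tools entry above implies a session like the following sketch; the commands and their described behaviors come from the text, and only the ordering is illustrative:

    mpf status       # report whether each system dependency is running
    mpf start        # start PostgreSQL, Redis, the node managers, and the Workflow Manager
    mpf list-nodes   # JGroups view when the Workflow Manager is running; core nodes otherwise
    mpf clean        # remove old job data; log files and uploaded media are preserved
    mpf stop         # stop all of the system dependencies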
diff --git a/docs/site/sitemap.xml b/docs/site/sitemap.xml
index adde9a7eb22f..b28c50eaa618 100644
--- a/docs/site/sitemap.xml
+++ b/docs/site/sitemap.xml
@@ -2,137 +2,137 @@
<url>
<loc>/index.html</loc>
- <lastmod>2023-12-06</lastmod>
+ <lastmod>2023-12-14</lastmod>
<changefreq>daily</changefreq>
</url>
<url>
<loc>/Release-Notes/index.html</loc>
- <lastmod>2023-12-06</lastmod>
+ <lastmod>2023-12-14</lastmod>
<changefreq>daily</changefreq>
</url>
<url>
<loc>/License-And-Distribution/index.html</loc>
- <lastmod>2023-12-06</lastmod>
+ <lastmod>2023-12-14</lastmod>
<changefreq>daily</changefreq>
</url>
<url>
<loc>/Acknowledgements/index.html</loc>
- <lastmod>2023-12-06</lastmod>
+ <lastmod>2023-12-14</lastmod>
<changefreq>daily</changefreq>
</url>
<url>
<loc>/Install-Guide/index.html</loc>
- <lastmod>2023-12-06</lastmod>
+ <lastmod>2023-12-14</lastmod>
<changefreq>daily</changefreq>
</url>
<url>
<loc>/Admin-Guide/index.html</loc>
- <lastmod>2023-12-06</lastmod>
+ <lastmod>2023-12-14</lastmod>
<changefreq>daily</changefreq>
</url>
<url>
<loc>/User-Guide/index.html</loc>
- <lastmod>2023-12-06</lastmod>
+ <lastmod>2023-12-14</lastmod>
<changefreq>daily</changefreq>
</url>
<url>
<loc>/OpenID-Connect-Guide/index.html</loc>
- <lastmod>2023-12-06</lastmod>
+ <lastmod>2023-12-14</lastmod>
<changefreq>daily</changefreq>
</url>
<url>
<loc>/Media-Segmentation-Guide/index.html</loc>
- <lastmod>2023-12-06</lastmod>
+ <lastmod>2023-12-14</lastmod>
<changefreq>daily</changefreq>
</url>
<url>
<loc>/Feed-Forward-Guide/index.html</loc>
- <lastmod>2023-12-06</lastmod>
+ <lastmod>2023-12-14</lastmod>
<changefreq>daily</changefreq>
</url>
<url>
<loc>/Derivative-Media-Guide/index.html</loc>
- <lastmod>2023-12-06</lastmod>
+ <lastmod>2023-12-14</lastmod>
<changefreq>daily</changefreq>
</url>
<url>
<loc>/Object-Storage-Guide/index.html</loc>
- <lastmod>2023-12-06</lastmod>
+ <lastmod>2023-12-14</lastmod>
<changefreq>daily</changefreq>
</url>
<url>
<loc>/Markup-Guide/index.html</loc>
- <lastmod>2023-12-06</lastmod>
+ <lastmod>2023-12-14</lastmod>
<changefreq>daily</changefreq>
</url>
<url>
<loc>/TiesDb-Guide/index.html</loc>
- <lastmod>2023-12-06</lastmod>
+ <lastmod>2023-12-14</lastmod>
<changefreq>daily</changefreq>
</url>
<url>
<loc>/Trigger-Guide/index.html</loc>
- <lastmod>2023-12-06</lastmod>
+ <lastmod>2023-12-14</lastmod>
<changefreq>daily</changefreq>
</url>
<url>
<loc>/REST-API/index.html</loc>
- <lastmod>2023-12-06</lastmod>
+ <lastmod>2023-12-14</lastmod>
<changefreq>daily</changefreq>
</url>
<url>
<loc>/Component-API-Overview/index.html</loc>
- <lastmod>2023-12-06</lastmod>
+ <lastmod>2023-12-14</lastmod>
<changefreq>daily</changefreq>
</url>
<url>
<loc>/Component-Descriptor-Reference/index.html</loc>
- <lastmod>2023-12-06</lastmod>
+ <lastmod>2023-12-14</lastmod>
<changefreq>daily</changefreq>
</url>
<url>
<loc>/CPP-Batch-Component-API/index.html</loc>
- <lastmod>2023-12-06</lastmod>
+ <lastmod>2023-12-14</lastmod>
<changefreq>daily</changefreq>
</url>
<url>
<loc>/Python-Batch-Component-API/index.html</loc>
- <lastmod>2023-12-06</lastmod>
+ <lastmod>2023-12-14</lastmod>
<changefreq>daily</changefreq>
</url>
<url>
<loc>/Java-Batch-Component-API/index.html</loc>
- <lastmod>2023-12-06</lastmod>
+ <lastmod>2023-12-14</lastmod>
<changefreq>daily</changefreq>
</url>
<url>
<loc>/GPU-Support-Guide/index.html</loc>
- <lastmod>2023-12-06</lastmod>
+ <lastmod>2023-12-14</lastmod>
<changefreq>daily</changefreq>
</url>
<url>
<loc>/Contributor-Guide/index.html</loc>
- <lastmod>2023-12-06</lastmod>
+ <lastmod>2023-12-14</lastmod>
<changefreq>daily</changefreq>
</url>
<url>
<loc>/Development-Environment-Guide/index.html</loc>
- <lastmod>2023-12-06</lastmod>
+ <lastmod>2023-12-14</lastmod>
<changefreq>daily</changefreq>
</url>
<url>
<loc>/Node-Guide/index.html</loc>
- <lastmod>2023-12-06</lastmod>
+ <lastmod>2023-12-14</lastmod>
<changefreq>daily</changefreq>
</url>
<url>
<loc>/Workflow-Manager-Architecture/index.html</loc>
- <lastmod>2023-12-06</lastmod>
+ <lastmod>2023-12-14</lastmod>
<changefreq>daily</changefreq>
</url>
<url>
<loc>/CPP-Streaming-Component-API/index.html</loc>
- <lastmod>2023-12-06</lastmod>
+ <lastmod>2023-12-14</lastmod>
<changefreq>daily</changefreq>
</url>
\ No newline at end of file
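Finally, the guide entries above note that a registered component becomes deployable through the Node Configuration page and the /rest/nodes/config endpoint. A hedged sketch of querying that endpoint; the path, port, and default admin credentials appear in the text, while HTTP basic auth and the GET method are assumptions:

    # Query the node configuration REST endpoint. The credentials are the
    # documented defaults and should be changed on any networked system.
    curl --user admin:mpfadm http://localhost:8080/rest/nodes/config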