From 8d420d60f7ed7696ecf1ac78c1177ac8de974546 Mon Sep 17 00:00:00 2001
From: Brian Rosenberg
Date: Mon, 20 Nov 2023 09:14:54 -0500
Subject: [PATCH] Change whitelist to allow list.

---
 docs/docs/Trigger-Guide.md         |  6 ++--
 docs/site/Trigger-Guide/index.html |  6 ++--
 docs/site/index.html               |  2 +-
 docs/site/search/search_index.json |  4 +--
 docs/site/sitemap.xml              | 52 +++++++++++++++---------------
 5 files changed, 35 insertions(+), 35 deletions(-)

diff --git a/docs/docs/Trigger-Guide.md b/docs/docs/Trigger-Guide.md
index 8ca15bba68cb..897e161a32c8 100644
--- a/docs/docs/Trigger-Guide.md
+++ b/docs/docs/Trigger-Guide.md
@@ -185,11 +185,11 @@ It's important to note that the possible `CLASSIFICATION` values generated by stages 1, 2, and 3 are
 mutually exclusive. This means, for example, that YOLO will not generate a `blue` track in stage 1
 that will later satisfy the trigger for stage 4.
 
-Also, note that stages 1, 2, and 3 can all accept an optional `WHITELIST_FILE` property that can be
+Also, note that stages 1, 2, and 3 can all accept an optional `ALLOW_LIST_FILE` property that can be
 used to discard tracks with a `CLASSIFICATION` not listed in that file. It is possible to recreate
-the behavior of the above pipeline without using triggers and instead only using whitelist files to
+the behavior of the above pipeline without using triggers and instead only using allow list files to
 ensure each of those stages can only generate the track types the user is interested in. The
-disadvantage of the whitelist approach is that the final JSON output object will not contain all of
+disadvantage of the allow list approach is that the final JSON output object will not contain all of
 the YOLO tracks, only `truck` tracks. Using triggers is better when a user wants to know about those
 other track types. Using triggers also enables a user to create a version of this pipeline where
 `person` tracks from YOLO are fed into OpenCV face. `person` is just an example of one other type of
diff --git a/docs/site/Trigger-Guide/index.html b/docs/site/Trigger-Guide/index.html
index 9b1468cce9de..f770fcb2b531 100644
--- a/docs/site/Trigger-Guide/index.html
+++ b/docs/site/Trigger-Guide/index.html
@@ -407,11 +407,11 @@ Filtering Using Triggers
 It's important to note that the possible CLASSIFICATION values generated by stages 1, 2, and 3 are
 mutually exclusive. This means, for example, that YOLO will not generate a blue track in stage 1
 that will later satisfy the trigger for stage 4.
-Also, note that stages 1, 2, and 3 can all accept an optional WHITELIST_FILE property that can be
+Also, note that stages 1, 2, and 3 can all accept an optional ALLOW_LIST_FILE property that can be
 used to discard tracks with a CLASSIFICATION not listed in that file. It is possible to recreate
-the behavior of the above pipeline without using triggers and instead only using whitelist files to
+the behavior of the above pipeline without using triggers and instead only using allow list files to
 ensure each of those stages can only generate the track types the user is interested in. The
-disadvantage of the whitelist approach is that the final JSON output object will not contain all of
+disadvantage of the allow list approach is that the final JSON output object will not contain all of
 the YOLO tracks, only truck tracks. Using triggers is better when a user wants to know about those
 other track types. Using triggers also enables a user to create a version of this pipeline where
 person tracks from YOLO are fed into OpenCV face. person is just an example of one other type of
 YOLO track a user might be interested in.
diff --git a/docs/site/index.html b/docs/site/index.html
index 16560c732a7a..b0e651bd56d6 100644
--- a/docs/site/index.html
+++ b/docs/site/index.html
@@ -384,5 +384,5 @@
 Overview
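As context for the rename: the guide text changed above says the property points at a file listing the `CLASSIFICATION` values to keep. Below is a minimal sketch of how the renamed property might be set on a job. The `YOLO` algorithm key, the file path, and the one-classification-per-line file format are illustrative assumptions, not details taken from this patch; only the `algorithmProperties` structure and the `ALLOW_LIST_FILE` name come from the guide itself.

```json
{
    "algorithmProperties": {
        "YOLO": {
            "ALLOW_LIST_FILE": "/opt/mpf/share/allow-lists/vehicle-allow-list.txt"
        }
    }
}
```

Here `vehicle-allow-list.txt` would presumably contain entries such as `truck`, one per line; tracks whose `CLASSIFICATION` does not appear in the file are discarded, matching the behavior described in the changed paragraph.
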
diff --git a/docs/site/search/search_index.json b/docs/site/search/search_index.json index 23b7759fdd04..a81633a4ebe1 100644 --- a/docs/site/search/search_index.json +++ b/docs/site/search/search_index.json @@ -482,7 +482,7 @@ }, { "location": "/Trigger-Guide/index.html", - "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract,\nand is subject to the Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023\nThe MITRE Corporation. All Rights Reserved.\n\n\nTrigger Overview\n\n\nThe \nTRIGGER\n property enables pipelines that use \nfeed forward\n to have\npipeline stages that only process certain tracks based on their track properties. It can be used\nto select the best algorithm when there are multiple similar algorithms that each perform better\nunder certain circumstances. It can also be used to iteratively filter down tracks at each stage of\na pipeline.\n\n\nSyntax\n\n\nThe syntax for the \nTRIGGER\n property is: \n=[;...]\n.\nThe left hand side of the equals sign is the name of track property that will be used to determine\nif a track matches the trigger. The right hand side specifies the required value for the specified\ntrack property. More than one value can be specified by separating them with a semicolon. When\nmultiple properties are specified the track property must match any one of the specified values.\nIf the value should match a track property that contains a semicolon or backslash,\nthey must be escaped with a leading backslash. For example, \nCLASSIFICATION=dog;cat\n will match\n\"dog\" or \"cat\". \nCLASSIFICATION=dog\\;cat\n will match \"dog;cat\". \nCLASSIFICATION=dog\\\\cat\n will\nmatch \"dog\\cat\". When specifying a trigger in JSON it will need to \ndoubly escaped\n.\n\n\nAlgorithm Selection Using Triggers\n\n\nThe example pipeline below will be used to describe the way that the Workflow Manager uses the\n\nTRIGGER\n property. Each task in the pipeline is composed of one action, so only the actions are\nshown. Note that this is a hypothetical pipeline and not intended for use in a real deployment.\n\n\n\n\nWHISPER SPEECH LANGUAGE DETECTION ACTION\n\n\n(No TRIGGER)\n\n\n\n\n\n\nSPHINX SPEECH DETECTION ACTION\n\n\nTRIGGER: \nISO_LANGUAGE=eng\n\n\nFEED_FORWARD_TYPE: \nREGION\n\n\n\n\n\n\nWHISPER SPEECH DETECTION ACTION\n\n\nTRIGGER: \nISO_LANGUAGE=spa\n\n\nFEED_FORWARD_TYPE: \nREGION\n\n\n\n\n\n\nARGOS TRANSLATION ACTION\n\n\nTRIGGER: \nISO_LANGUAGE=spa\n\n\nFEED_FORWARD_TYPE: \nREGION\n\n\n\n\n\n\nKEYWORD TAGGING ACTION\n\n\n(No TRIGGER)\n\n\nFEED_FORWARD_TYPE: \nREGION\n\n\n\n\n\n\n\n\nThe pipeline can be represented as a flow chart:\n\n\n\n\nThe goal of this pipeline is to determine if someone in an audio file, or the audio of a video file,\nsays a keyword that the user is interested in. The complication is that the input file could either\nbe in English, Spanish, or another language the user is not interested in. Spanish audio must be\ntranslated to English before looking for keywords.\n\n\nWe are going to pretend that Whisper language detection can return multiple tracks, one per language\ndetected in the audio, although in reality it is limited to detecting one language for the entire\npiece of media. Also, the user wants to use Sphinx for transcribing English audio, because we are\npretending that Sphinx performs better than Whisper on English audio, and the user wants to use\nWhisper for transcribing Spanish audio.\n\n\nThe first stage should not have a trigger condition. 
If one is set, it will be ignored. The\nWorkflow Manager will take all of the tracks generated by stage 1 and determine if the trigger\ncondition for stage 2 is met. This trigger condition is shown by the topmost orange diamond. In this\ncase, if stage 1 detected the language as English and set \nISO_LANGUAGE\n to \neng\n, then those\ntracks are fed into the second stage. This is shown by the green arrow pointing to the stage 2 box.\n\n\nIf any of the Whisper tracks do not meet the condition for the stage 2, they are later considered\nas possible inputs to stage 3. This is shown by the red arrow coming out of the stage 2 trigger\ndiamond pointing down to the stage 3 trigger diamond.\n\n\nThe Workflow Manager will take all of the tracks generated by stage 2, the\n\nSPHINX SPEECH DETECTION ACTION\n, as well as the tracks that didn't satisfy the stage 2 trigger, and\ndetermine if the trigger condition for stage 3 is met.\n\n\nNote that the Sphinx component does not generate tracks with the \nISO_LANGUAGE\n property, so\nit's not possible for tracks coming out of stage 2 to satisfy the stage 3 trigger. They will later\nflow down to the stage 4 trigger, and because it has the same condition as the stage 3 trigger, the\nSphinx tracks cannot satisfy that trigger either.\n\n\nEven if the Sphinx component did generate tracks with the \nISO_LANGUAGE\n property, it would be set\nto \neng\n and would not satisfy the \nspa\n condition (they are mutually exclusive). Either way,\neventually the tracks from stage 2 will flow into stage 5.\n\n\nThe Workflow Manager will take all of the tracks generated by stage 3, the\n\nWHISPER SPEECH DETECTION ACTION\n, as well as the tracks that did not satisfy the stage 2 and 3\ntriggers, and determine if the trigger condition for stage 4 is met. All of the tracks produced by\nstage 3 will have the \nISO_LANGUAGE\n property set to \nspa\n, because the stage 3 trigger only\nmatched Spanish tracks and when Whisper performs transcription, it sets the \nISO_LANGUAGE\n property.\nSince the stage 4 trigger, like the stage 3 trigger, is \nISO_LANGUAGE=spa\n, all of the tracks\nproduced by stage 3 will be fed in to stage 4.\n\n\nThe Workflow Manager will take all of the tracks generated by stage 4, the\n\nARGOS TRANSLATION (WITH FF REGION) ACTION\n, as well as the tracks that did not satisfy the stage 2,\n3, or 4 triggers, and determine if the trigger condition for stage 5 is met. Stage 5 has no trigger\ncondition, so all of those tracks flow into stage 5 by default.\n\n\nThe above diagram can be simplified as follows:\n\n\n\n\nIn this diagram the trigger diamonds have been replaced with the orange boxes at the top of each\nstage. Also, all of the arrows for flows that are not logically possible have been removed,\nleaving only arrows that flow from one stage to another.\n\n\nWhat remains shows that this pipeline has three main flows of execution:\n\n\n\n\nEnglish audio is transcribed by the Sphinx component and then processed by keyword tagging.\n\n\nSpanish audio is transcribed by the Whisper component, translated by the Argos component, and\n then processed by keyword tagging.\n\n\nAll other languages are not transcribed and those tracks pass directly to keyword tagging. Since\n there is no transcript to look at, keyword tagging essentially ignores them.\n\n\n\n\nFurther Understanding\n\n\nIn general, triggers work as a mechanism to decide which tracks are passed forward to later stages\nof a pipeline. 
It is important to note that not only are the tracks from the previous stage\nconsidered, but also tracks from stages that were not fed into any previous stage.\n\n\nFor example, if only the Sphinx tracks from stage 2 were passed to Whisper stage 3, then stage 3\nwould never be triggered. This is because Sphinx tracks don't have an \nISO_LANGUAGE\n property. Even\nif they did have that property, it would be set to \neng\n, not \nspa\n, which would not satisfy the\nstage 3 trigger. This is mutual exclusion is by design. Both stages perform speech-to-text. Tracks\nfrom stage 1 should only be processed by one speech-to-text algorithm (i.e. one \nSPEECH DETECTION\n\nstage). Both algorithms should be considered, but only one should be selected based on the language.\nTo accomplish this, tracks from stage 1 that don't trigger stage 2 are considered as possible inputs\nto stage 3.\n\n\nAdditionally, it's important to note that when a stage is triggered, the tracks passed into that\nstage are no longer considered for later stages. Instead, the tracks generated by that stage can be\npassed to later stages.\n\n\nFor example, the Argos algorithm in stage 4 should only accept tracks with Spanish transcripts. If\nall of the tracks generated in prior stages could be passed to stage 4, then the \nspa\n tracks\ngenerated in stage 1 would trigger stage 4. Since those have not passed through the Whisper\nspeech-to-text stage 3 they would not have a transcript to translate.\n\n\nFiltering Using Triggers\n\n\nThe pipeline in the previous section shows an example of how triggers can be used to conditionally\nexecute or skip stages in a pipeline. Triggers can also be useful when all stages get triggered. In\ncases like that, the individual triggers are logically \nAND\ned together. This allows you to produce\npipelines that search for very specific things.\n\n\nConsider the example pipeline defined below. Again, each task in the pipeline is composed of one\naction, so only the actions are shown. Also, note that this is a hypothetical pipeline and not\nintended for use in a real real deployment:\n\n\n\n\nOCV YOLO OBJECT DETECTION ACTION\n\n\n(No TRIGGER)\n\n\n\n\n\n\nCAFFE GOOGLENET DETECTION ACTION\n\n\nTRIGGER: \nCLASSIFICATION=truck\n\n\nFEED_FORWARD_TYPE: \nREGION\n\n\n\n\n\n\nTENSORFLOW VEHICLE COLOR DETECTION ACTION\n\n\nTRIGGER: \nCLASSIFICATION=ice cream, icecream;ice lolly, lolly, lollipop, popsicle\n\n\nFEED_FORWARD_TYPE: \nREGION\n\n\n\n\n\n\nOALPR LICENSE PLATE TEXT DETECTION ACTION\n\n\nTRIGGER: \nCLASSIFICATION=blue\n\n\nFEED_FORWARD_TYPE: \nREGION\n\n\n\n\n\n\n\n\nThe pipeline can be represented as a flow chart:\n\n\n\n\nThe goal of this pipeline is to extract the license plate numbers for all blue trucks that have\nphotos of ice cream or popsicles on their exterior.\n\n\nStage 2 and 3 do not generate new detection regions. Instead, they generate tracks using the same\ndetection regions in the feed-forward tracks. Specifically, if YOLO generates \ntruck\n tracks in\nstage 1, then those tracks will be fed into stage 2. In that stage, GoogLeNet will process the\ntruck region to determine the ImageNet class with the highest confidence. If that class corresponds\nto ice cream or popsicle, those tracks will be fed into stage 3, which will operate on the same\ntruck region to determine the vehicle color. Tracks corresponding to \nblue\n trucks will be fed\ninto stage 4, which will try to detect the license plate region and text. 
OALPR will operate on\nthe same truck region passed forward all of the way from YOLO in stage 1.\n\n\nTracks generated by any stage in the pipeline that don't meet the three trigger criteria do not\nflow into the final license plate detection stage, and are therefore unused.\n\n\nIt's important to note that the possible \nCLASSIFICATION\n values generated by stages 1, 2, and 3 are\nmutually exclusive. This means, for example, that YOLO will not generate a \nblue\n track in stage 1\nthat will later satisfy the trigger for stage 4.\n\n\nAlso, note that stages 1, 2, and 3 can all accept an optional \nWHITELIST_FILE\n property that can be\nused to discard tracks with a \nCLASSIFICATION\n not listed in that file. It is possible to recreate\nthe behavior of the above pipeline without using triggers and instead only using whitelist files to\nensure each of those stages can only generate the track types the user is interested in. The\ndisadvantage of the whitelist approach is that the final JSON output object will not contain all of\nthe YOLO tracks, only \ntruck\n tracks. Using triggers is better when a user wants to know about those\nother track types. Using triggers also enables a user to create a version of this pipeline where\n\nperson\n tracks from YOLO are fed into OpenCV face. \nperson\n is just an example of one other type of\nYOLO track a user might be interested in.\n\n\nThe above diagram can be simplified as follows:\n\n\n\n\nRemoving all of the flows that aren't logically possible, or result in unused tracks, only\nleaves one flow that passes through all of the stages. Again, this flow essentially \nAND\ns the\ntrigger conditions together.\n\n\nJSON escaping\n\n\nMany times job properties are defined using JSON and track properties appear in the JSON output\nobject. JSON also uses backslash as its escape character. Since the \nTRIGGER\n property and JSON both\nuse backslash as the escape character, when specifying the \nTRIGGER\n property in JSON, the string\nmust be doubly escaped.\n\n\nIf the job request contains this JSON fragment:\n\n\n{ \"algorithmProperties\": { \"DNNCV\": {\"TRIGGER\": \"CLASS=dog;cat\"} } }\n\n\n\nit will match either \"dog\" or \"cat\", but not \"dog;cat\".\n\n\nThis JSON fragment:\n\n\n{ \"algorithmProperties\": { \"DNNCV\": {\"TRIGGER\": \"CLASS=dog\\\\;cat\"} } }\n\n\n\nwould only match \"dog;cat\".\n\n\nThis JSON fragment:\n\n\n{ \"algorithmProperties\": { \"DNNCV\": {\"TRIGGER\": \"CLASS=dog\\\\\\\\cat\"} } }\n\n\n\nwould only match \"dog\\cat\". The track property in the JSON output object would appear as:\n\n\n{ \"trackProperties\": { \"CLASSIFICATION\": \"dog\\\\cat\" } }", + "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract,\nand is subject to the Rights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2023\nThe MITRE Corporation. All Rights Reserved.\n\n\nTrigger Overview\n\n\nThe \nTRIGGER\n property enables pipelines that use \nfeed forward\n to have\npipeline stages that only process certain tracks based on their track properties. It can be used\nto select the best algorithm when there are multiple similar algorithms that each perform better\nunder certain circumstances. It can also be used to iteratively filter down tracks at each stage of\na pipeline.\n\n\nSyntax\n\n\nThe syntax for the \nTRIGGER\n property is: \n=[;...]\n.\nThe left hand side of the equals sign is the name of track property that will be used to determine\nif a track matches the trigger. 
The right hand side specifies the required value for the specified\ntrack property. More than one value can be specified by separating them with a semicolon. When\nmultiple properties are specified the track property must match any one of the specified values.\nIf the value should match a track property that contains a semicolon or backslash,\nthey must be escaped with a leading backslash. For example, \nCLASSIFICATION=dog;cat\n will match\n\"dog\" or \"cat\". \nCLASSIFICATION=dog\\;cat\n will match \"dog;cat\". \nCLASSIFICATION=dog\\\\cat\n will\nmatch \"dog\\cat\". When specifying a trigger in JSON it will need to \ndoubly escaped\n.\n\n\nAlgorithm Selection Using Triggers\n\n\nThe example pipeline below will be used to describe the way that the Workflow Manager uses the\n\nTRIGGER\n property. Each task in the pipeline is composed of one action, so only the actions are\nshown. Note that this is a hypothetical pipeline and not intended for use in a real deployment.\n\n\n\n\nWHISPER SPEECH LANGUAGE DETECTION ACTION\n\n\n(No TRIGGER)\n\n\n\n\n\n\nSPHINX SPEECH DETECTION ACTION\n\n\nTRIGGER: \nISO_LANGUAGE=eng\n\n\nFEED_FORWARD_TYPE: \nREGION\n\n\n\n\n\n\nWHISPER SPEECH DETECTION ACTION\n\n\nTRIGGER: \nISO_LANGUAGE=spa\n\n\nFEED_FORWARD_TYPE: \nREGION\n\n\n\n\n\n\nARGOS TRANSLATION ACTION\n\n\nTRIGGER: \nISO_LANGUAGE=spa\n\n\nFEED_FORWARD_TYPE: \nREGION\n\n\n\n\n\n\nKEYWORD TAGGING ACTION\n\n\n(No TRIGGER)\n\n\nFEED_FORWARD_TYPE: \nREGION\n\n\n\n\n\n\n\n\nThe pipeline can be represented as a flow chart:\n\n\n\n\nThe goal of this pipeline is to determine if someone in an audio file, or the audio of a video file,\nsays a keyword that the user is interested in. The complication is that the input file could either\nbe in English, Spanish, or another language the user is not interested in. Spanish audio must be\ntranslated to English before looking for keywords.\n\n\nWe are going to pretend that Whisper language detection can return multiple tracks, one per language\ndetected in the audio, although in reality it is limited to detecting one language for the entire\npiece of media. Also, the user wants to use Sphinx for transcribing English audio, because we are\npretending that Sphinx performs better than Whisper on English audio, and the user wants to use\nWhisper for transcribing Spanish audio.\n\n\nThe first stage should not have a trigger condition. If one is set, it will be ignored. The\nWorkflow Manager will take all of the tracks generated by stage 1 and determine if the trigger\ncondition for stage 2 is met. This trigger condition is shown by the topmost orange diamond. In this\ncase, if stage 1 detected the language as English and set \nISO_LANGUAGE\n to \neng\n, then those\ntracks are fed into the second stage. This is shown by the green arrow pointing to the stage 2 box.\n\n\nIf any of the Whisper tracks do not meet the condition for the stage 2, they are later considered\nas possible inputs to stage 3. This is shown by the red arrow coming out of the stage 2 trigger\ndiamond pointing down to the stage 3 trigger diamond.\n\n\nThe Workflow Manager will take all of the tracks generated by stage 2, the\n\nSPHINX SPEECH DETECTION ACTION\n, as well as the tracks that didn't satisfy the stage 2 trigger, and\ndetermine if the trigger condition for stage 3 is met.\n\n\nNote that the Sphinx component does not generate tracks with the \nISO_LANGUAGE\n property, so\nit's not possible for tracks coming out of stage 2 to satisfy the stage 3 trigger. 
They will later\nflow down to the stage 4 trigger, and because it has the same condition as the stage 3 trigger, the\nSphinx tracks cannot satisfy that trigger either.\n\n\nEven if the Sphinx component did generate tracks with the \nISO_LANGUAGE\n property, it would be set\nto \neng\n and would not satisfy the \nspa\n condition (they are mutually exclusive). Either way,\neventually the tracks from stage 2 will flow into stage 5.\n\n\nThe Workflow Manager will take all of the tracks generated by stage 3, the\n\nWHISPER SPEECH DETECTION ACTION\n, as well as the tracks that did not satisfy the stage 2 and 3\ntriggers, and determine if the trigger condition for stage 4 is met. All of the tracks produced by\nstage 3 will have the \nISO_LANGUAGE\n property set to \nspa\n, because the stage 3 trigger only\nmatched Spanish tracks and when Whisper performs transcription, it sets the \nISO_LANGUAGE\n property.\nSince the stage 4 trigger, like the stage 3 trigger, is \nISO_LANGUAGE=spa\n, all of the tracks\nproduced by stage 3 will be fed in to stage 4.\n\n\nThe Workflow Manager will take all of the tracks generated by stage 4, the\n\nARGOS TRANSLATION (WITH FF REGION) ACTION\n, as well as the tracks that did not satisfy the stage 2,\n3, or 4 triggers, and determine if the trigger condition for stage 5 is met. Stage 5 has no trigger\ncondition, so all of those tracks flow into stage 5 by default.\n\n\nThe above diagram can be simplified as follows:\n\n\n\n\nIn this diagram the trigger diamonds have been replaced with the orange boxes at the top of each\nstage. Also, all of the arrows for flows that are not logically possible have been removed,\nleaving only arrows that flow from one stage to another.\n\n\nWhat remains shows that this pipeline has three main flows of execution:\n\n\n\n\nEnglish audio is transcribed by the Sphinx component and then processed by keyword tagging.\n\n\nSpanish audio is transcribed by the Whisper component, translated by the Argos component, and\n then processed by keyword tagging.\n\n\nAll other languages are not transcribed and those tracks pass directly to keyword tagging. Since\n there is no transcript to look at, keyword tagging essentially ignores them.\n\n\n\n\nFurther Understanding\n\n\nIn general, triggers work as a mechanism to decide which tracks are passed forward to later stages\nof a pipeline. It is important to note that not only are the tracks from the previous stage\nconsidered, but also tracks from stages that were not fed into any previous stage.\n\n\nFor example, if only the Sphinx tracks from stage 2 were passed to Whisper stage 3, then stage 3\nwould never be triggered. This is because Sphinx tracks don't have an \nISO_LANGUAGE\n property. Even\nif they did have that property, it would be set to \neng\n, not \nspa\n, which would not satisfy the\nstage 3 trigger. This is mutual exclusion is by design. Both stages perform speech-to-text. Tracks\nfrom stage 1 should only be processed by one speech-to-text algorithm (i.e. one \nSPEECH DETECTION\n\nstage). Both algorithms should be considered, but only one should be selected based on the language.\nTo accomplish this, tracks from stage 1 that don't trigger stage 2 are considered as possible inputs\nto stage 3.\n\n\nAdditionally, it's important to note that when a stage is triggered, the tracks passed into that\nstage are no longer considered for later stages. 
Instead, the tracks generated by that stage can be\npassed to later stages.\n\n\nFor example, the Argos algorithm in stage 4 should only accept tracks with Spanish transcripts. If\nall of the tracks generated in prior stages could be passed to stage 4, then the \nspa\n tracks\ngenerated in stage 1 would trigger stage 4. Since those have not passed through the Whisper\nspeech-to-text stage 3 they would not have a transcript to translate.\n\n\nFiltering Using Triggers\n\n\nThe pipeline in the previous section shows an example of how triggers can be used to conditionally\nexecute or skip stages in a pipeline. Triggers can also be useful when all stages get triggered. In\ncases like that, the individual triggers are logically \nAND\ned together. This allows you to produce\npipelines that search for very specific things.\n\n\nConsider the example pipeline defined below. Again, each task in the pipeline is composed of one\naction, so only the actions are shown. Also, note that this is a hypothetical pipeline and not\nintended for use in a real real deployment:\n\n\n\n\nOCV YOLO OBJECT DETECTION ACTION\n\n\n(No TRIGGER)\n\n\n\n\n\n\nCAFFE GOOGLENET DETECTION ACTION\n\n\nTRIGGER: \nCLASSIFICATION=truck\n\n\nFEED_FORWARD_TYPE: \nREGION\n\n\n\n\n\n\nTENSORFLOW VEHICLE COLOR DETECTION ACTION\n\n\nTRIGGER: \nCLASSIFICATION=ice cream, icecream;ice lolly, lolly, lollipop, popsicle\n\n\nFEED_FORWARD_TYPE: \nREGION\n\n\n\n\n\n\nOALPR LICENSE PLATE TEXT DETECTION ACTION\n\n\nTRIGGER: \nCLASSIFICATION=blue\n\n\nFEED_FORWARD_TYPE: \nREGION\n\n\n\n\n\n\n\n\nThe pipeline can be represented as a flow chart:\n\n\n\n\nThe goal of this pipeline is to extract the license plate numbers for all blue trucks that have\nphotos of ice cream or popsicles on their exterior.\n\n\nStage 2 and 3 do not generate new detection regions. Instead, they generate tracks using the same\ndetection regions in the feed-forward tracks. Specifically, if YOLO generates \ntruck\n tracks in\nstage 1, then those tracks will be fed into stage 2. In that stage, GoogLeNet will process the\ntruck region to determine the ImageNet class with the highest confidence. If that class corresponds\nto ice cream or popsicle, those tracks will be fed into stage 3, which will operate on the same\ntruck region to determine the vehicle color. Tracks corresponding to \nblue\n trucks will be fed\ninto stage 4, which will try to detect the license plate region and text. OALPR will operate on\nthe same truck region passed forward all of the way from YOLO in stage 1.\n\n\nTracks generated by any stage in the pipeline that don't meet the three trigger criteria do not\nflow into the final license plate detection stage, and are therefore unused.\n\n\nIt's important to note that the possible \nCLASSIFICATION\n values generated by stages 1, 2, and 3 are\nmutually exclusive. This means, for example, that YOLO will not generate a \nblue\n track in stage 1\nthat will later satisfy the trigger for stage 4.\n\n\nAlso, note that stages 1, 2, and 3 can all accept an optional \nALLOW_LIST_FILE\n property that can be\nused to discard tracks with a \nCLASSIFICATION\n not listed in that file. It is possible to recreate\nthe behavior of the above pipeline without using triggers and instead only using allow list files to\nensure each of those stages can only generate the track types the user is interested in. The\ndisadvantage of the allow list approach is that the final JSON output object will not contain all of\nthe YOLO tracks, only \ntruck\n tracks. 
Using triggers is better when a user wants to know about those\nother track types. Using triggers also enables a user to create a version of this pipeline where\n\nperson\n tracks from YOLO are fed into OpenCV face. \nperson\n is just an example of one other type of\nYOLO track a user might be interested in.\n\n\nThe above diagram can be simplified as follows:\n\n\n\n\nRemoving all of the flows that aren't logically possible, or result in unused tracks, only\nleaves one flow that passes through all of the stages. Again, this flow essentially \nAND\ns the\ntrigger conditions together.\n\n\nJSON escaping\n\n\nMany times job properties are defined using JSON and track properties appear in the JSON output\nobject. JSON also uses backslash as its escape character. Since the \nTRIGGER\n property and JSON both\nuse backslash as the escape character, when specifying the \nTRIGGER\n property in JSON, the string\nmust be doubly escaped.\n\n\nIf the job request contains this JSON fragment:\n\n\n{ \"algorithmProperties\": { \"DNNCV\": {\"TRIGGER\": \"CLASS=dog;cat\"} } }\n\n\n\nit will match either \"dog\" or \"cat\", but not \"dog;cat\".\n\n\nThis JSON fragment:\n\n\n{ \"algorithmProperties\": { \"DNNCV\": {\"TRIGGER\": \"CLASS=dog\\\\;cat\"} } }\n\n\n\nwould only match \"dog;cat\".\n\n\nThis JSON fragment:\n\n\n{ \"algorithmProperties\": { \"DNNCV\": {\"TRIGGER\": \"CLASS=dog\\\\\\\\cat\"} } }\n\n\n\nwould only match \"dog\\cat\". The track property in the JSON output object would appear as:\n\n\n{ \"trackProperties\": { \"CLASSIFICATION\": \"dog\\\\cat\" } }", "title": "Trigger Guide" }, { @@ -507,7 +507,7 @@ }, { "location": "/Trigger-Guide/index.html#filtering-using-triggers", - "text": "The pipeline in the previous section shows an example of how triggers can be used to conditionally\nexecute or skip stages in a pipeline. Triggers can also be useful when all stages get triggered. In\ncases like that, the individual triggers are logically AND ed together. This allows you to produce\npipelines that search for very specific things. Consider the example pipeline defined below. Again, each task in the pipeline is composed of one\naction, so only the actions are shown. Also, note that this is a hypothetical pipeline and not\nintended for use in a real real deployment: OCV YOLO OBJECT DETECTION ACTION (No TRIGGER) CAFFE GOOGLENET DETECTION ACTION TRIGGER: CLASSIFICATION=truck FEED_FORWARD_TYPE: REGION TENSORFLOW VEHICLE COLOR DETECTION ACTION TRIGGER: CLASSIFICATION=ice cream, icecream;ice lolly, lolly, lollipop, popsicle FEED_FORWARD_TYPE: REGION OALPR LICENSE PLATE TEXT DETECTION ACTION TRIGGER: CLASSIFICATION=blue FEED_FORWARD_TYPE: REGION The pipeline can be represented as a flow chart: The goal of this pipeline is to extract the license plate numbers for all blue trucks that have\nphotos of ice cream or popsicles on their exterior. Stage 2 and 3 do not generate new detection regions. Instead, they generate tracks using the same\ndetection regions in the feed-forward tracks. Specifically, if YOLO generates truck tracks in\nstage 1, then those tracks will be fed into stage 2. In that stage, GoogLeNet will process the\ntruck region to determine the ImageNet class with the highest confidence. If that class corresponds\nto ice cream or popsicle, those tracks will be fed into stage 3, which will operate on the same\ntruck region to determine the vehicle color. Tracks corresponding to blue trucks will be fed\ninto stage 4, which will try to detect the license plate region and text. 
OALPR will operate on\nthe same truck region passed forward all of the way from YOLO in stage 1. Tracks generated by any stage in the pipeline that don't meet the three trigger criteria do not\nflow into the final license plate detection stage, and are therefore unused. It's important to note that the possible CLASSIFICATION values generated by stages 1, 2, and 3 are\nmutually exclusive. This means, for example, that YOLO will not generate a blue track in stage 1\nthat will later satisfy the trigger for stage 4. Also, note that stages 1, 2, and 3 can all accept an optional WHITELIST_FILE property that can be\nused to discard tracks with a CLASSIFICATION not listed in that file. It is possible to recreate\nthe behavior of the above pipeline without using triggers and instead only using whitelist files to\nensure each of those stages can only generate the track types the user is interested in. The\ndisadvantage of the whitelist approach is that the final JSON output object will not contain all of\nthe YOLO tracks, only truck tracks. Using triggers is better when a user wants to know about those\nother track types. Using triggers also enables a user to create a version of this pipeline where person tracks from YOLO are fed into OpenCV face. person is just an example of one other type of\nYOLO track a user might be interested in. The above diagram can be simplified as follows: Removing all of the flows that aren't logically possible, or result in unused tracks, only\nleaves one flow that passes through all of the stages. Again, this flow essentially AND s the\ntrigger conditions together.", + "text": "The pipeline in the previous section shows an example of how triggers can be used to conditionally\nexecute or skip stages in a pipeline. Triggers can also be useful when all stages get triggered. In\ncases like that, the individual triggers are logically AND ed together. This allows you to produce\npipelines that search for very specific things. Consider the example pipeline defined below. Again, each task in the pipeline is composed of one\naction, so only the actions are shown. Also, note that this is a hypothetical pipeline and not\nintended for use in a real real deployment: OCV YOLO OBJECT DETECTION ACTION (No TRIGGER) CAFFE GOOGLENET DETECTION ACTION TRIGGER: CLASSIFICATION=truck FEED_FORWARD_TYPE: REGION TENSORFLOW VEHICLE COLOR DETECTION ACTION TRIGGER: CLASSIFICATION=ice cream, icecream;ice lolly, lolly, lollipop, popsicle FEED_FORWARD_TYPE: REGION OALPR LICENSE PLATE TEXT DETECTION ACTION TRIGGER: CLASSIFICATION=blue FEED_FORWARD_TYPE: REGION The pipeline can be represented as a flow chart: The goal of this pipeline is to extract the license plate numbers for all blue trucks that have\nphotos of ice cream or popsicles on their exterior. Stage 2 and 3 do not generate new detection regions. Instead, they generate tracks using the same\ndetection regions in the feed-forward tracks. Specifically, if YOLO generates truck tracks in\nstage 1, then those tracks will be fed into stage 2. In that stage, GoogLeNet will process the\ntruck region to determine the ImageNet class with the highest confidence. If that class corresponds\nto ice cream or popsicle, those tracks will be fed into stage 3, which will operate on the same\ntruck region to determine the vehicle color. Tracks corresponding to blue trucks will be fed\ninto stage 4, which will try to detect the license plate region and text. OALPR will operate on\nthe same truck region passed forward all of the way from YOLO in stage 1. 
Tracks generated by any stage in the pipeline that don't meet the three trigger criteria do not\nflow into the final license plate detection stage, and are therefore unused. It's important to note that the possible CLASSIFICATION values generated by stages 1, 2, and 3 are\nmutually exclusive. This means, for example, that YOLO will not generate a blue track in stage 1\nthat will later satisfy the trigger for stage 4. Also, note that stages 1, 2, and 3 can all accept an optional ALLOW_LIST_FILE property that can be\nused to discard tracks with a CLASSIFICATION not listed in that file. It is possible to recreate\nthe behavior of the above pipeline without using triggers and instead only using allow list files to\nensure each of those stages can only generate the track types the user is interested in. The\ndisadvantage of the allow list approach is that the final JSON output object will not contain all of\nthe YOLO tracks, only truck tracks. Using triggers is better when a user wants to know about those\nother track types. Using triggers also enables a user to create a version of this pipeline where person tracks from YOLO are fed into OpenCV face. person is just an example of one other type of\nYOLO track a user might be interested in. The above diagram can be simplified as follows: Removing all of the flows that aren't logically possible, or result in unused tracks, only\nleaves one flow that passes through all of the stages. Again, this flow essentially AND s the\ntrigger conditions together.", "title": "Filtering Using Triggers" }, {
diff --git a/docs/site/sitemap.xml b/docs/site/sitemap.xml
index 659eb2a7c5bd..8777bb04f3ba 100644
--- a/docs/site/sitemap.xml
+++ b/docs/site/sitemap.xml
@@ -2,132 +2,132 @@
     <url>
      <loc>/index.html</loc>
-     <lastmod>2023-11-03</lastmod>
+     <lastmod>2023-11-20</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>/Release-Notes/index.html</loc>
-     <lastmod>2023-11-03</lastmod>
+     <lastmod>2023-11-20</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>/License-And-Distribution/index.html</loc>
-     <lastmod>2023-11-03</lastmod>
+     <lastmod>2023-11-20</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>/Acknowledgements/index.html</loc>
-     <lastmod>2023-11-03</lastmod>
+     <lastmod>2023-11-20</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>/Install-Guide/index.html</loc>
-     <lastmod>2023-11-03</lastmod>
+     <lastmod>2023-11-20</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>/Admin-Guide/index.html</loc>
-     <lastmod>2023-11-03</lastmod>
+     <lastmod>2023-11-20</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>/User-Guide/index.html</loc>
-     <lastmod>2023-11-03</lastmod>
+     <lastmod>2023-11-20</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>/Media-Segmentation-Guide/index.html</loc>
-     <lastmod>2023-11-03</lastmod>
+     <lastmod>2023-11-20</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>/Feed-Forward-Guide/index.html</loc>
-     <lastmod>2023-11-03</lastmod>
+     <lastmod>2023-11-20</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>/Derivative-Media-Guide/index.html</loc>
-     <lastmod>2023-11-03</lastmod>
+     <lastmod>2023-11-20</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>/Object-Storage-Guide/index.html</loc>
-     <lastmod>2023-11-03</lastmod>
+     <lastmod>2023-11-20</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>/Markup-Guide/index.html</loc>
-     <lastmod>2023-11-03</lastmod>
+     <lastmod>2023-11-20</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>/TiesDb-Guide/index.html</loc>
-     <lastmod>2023-11-03</lastmod>
+     <lastmod>2023-11-20</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>/Trigger-Guide/index.html</loc>
-     <lastmod>2023-11-03</lastmod>
+     <lastmod>2023-11-20</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>/REST-API/index.html</loc>
-     <lastmod>2023-11-03</lastmod>
+     <lastmod>2023-11-20</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>/Component-API-Overview/index.html</loc>
-     <lastmod>2023-11-03</lastmod>
+     <lastmod>2023-11-20</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>/Component-Descriptor-Reference/index.html</loc>
-     <lastmod>2023-11-03</lastmod>
+     <lastmod>2023-11-20</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>/CPP-Batch-Component-API/index.html</loc>
-     <lastmod>2023-11-03</lastmod>
+     <lastmod>2023-11-20</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>/Python-Batch-Component-API/index.html</loc>
-     <lastmod>2023-11-03</lastmod>
+     <lastmod>2023-11-20</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>/Java-Batch-Component-API/index.html</loc>
-     <lastmod>2023-11-03</lastmod>
+     <lastmod>2023-11-20</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>/GPU-Support-Guide/index.html</loc>
-     <lastmod>2023-11-03</lastmod>
+     <lastmod>2023-11-20</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>/Contributor-Guide/index.html</loc>
-     <lastmod>2023-11-03</lastmod>
+     <lastmod>2023-11-20</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>/Development-Environment-Guide/index.html</loc>
-     <lastmod>2023-11-03</lastmod>
+     <lastmod>2023-11-20</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>/Node-Guide/index.html</loc>
-     <lastmod>2023-11-03</lastmod>
+     <lastmod>2023-11-20</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>/Workflow-Manager-Architecture/index.html</loc>
-     <lastmod>2023-11-03</lastmod>
+     <lastmod>2023-11-20</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>/CPP-Streaming-Component-API/index.html</loc>
-     <lastmod>2023-11-03</lastmod>
+     <lastmod>2023-11-20</lastmod>
      <changefreq>daily</changefreq>
     </url>
 </urlset>
\ No newline at end of file
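
Since the indexed guide text above walks through the `TRIGGER` escaping rules, one final sketch may help reviewers sanity-check the surrounding text in context. The `algorithmProperties` and `DNNCV` keys come from the guide's own JSON fragments; the property value is illustrative. A trigger intended to match "dog", "cat", or the literal value "dog;cat" combines the semicolon separator with a doubly escaped semicolon when written in JSON:

```json
{ "algorithmProperties": { "DNNCV": { "TRIGGER": "CLASS=dog;cat;dog\\;cat" } } }
```

After JSON unescaping, the property value is `CLASS=dog;cat;dog\;cat`, which the Workflow Manager splits into the three values `dog`, `cat`, and `dog;cat`, per the escaping rules quoted in the guide.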