Feat/wfm trigger #336
Conversation
Reviewed 4 of 4 files at r1, all commit messages.
Reviewable status: complete! all files reviewed, all discussions resolved (waiting on @brosenberg42)
Reviewed 50 of 50 files at r2, 2 of 2 files at r3, 2 of 2 files at r4, all commit messages.
Reviewable status: all files reviewed, 2 unresolved discussions (waiting on @brosenberg42)
a discussion (no related file):
In the README for Whisper we have:

`WHISPER_MODE`
: Determines whether Whisper will perform language detection, speech-to-text transcription, or speech translation. English-only models can only transcribe English audio. Set to `LANGUAGE_DETECTION` for spoken language detection, `TRANSCRIPTION` for speech-to-text transcription, and `SPEECH_TRANSLATION` for speech translation.

Update to:

`WHISPER_MODE`
: Determines whether Whisper will perform language detection, speech-to-text transcription, or speech translation. If multiple languages are spoken in a single piece of media, language detection will detect only one of them. English-only models can only transcribe English audio. Set to `LANGUAGE_DETECTION` for spoken language detection, `TRANSCRIPTION` for speech-to-text transcription, and `SPEECH_TRANSLATION` for speech translation.
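For anyone trying out the new text, here is a minimal sketch of selecting the mode per job. The endpoint, credentials, pipeline name, and payload shape are assumptions for illustration, not taken from this PR or the README:

```python
import requests  # assumes the `requests` package is installed

# Hypothetical job submission to a locally running Workflow Manager; the URL,
# credentials, pipeline name, and payload shape are assumptions, not taken
# from this PR.
job_request = {
    "pipelineName": "WHISPER SPEECH DETECTION PIPELINE",  # assumed name
    "media": [{"mediaUri": "file:///data/interview.wav"}],
    "jobProperties": {
        # One of: LANGUAGE_DETECTION, TRANSCRIPTION, SPEECH_TRANSLATION
        "WHISPER_MODE": "LANGUAGE_DETECTION"
    },
}

response = requests.post(
    "http://localhost:8080/workflow-manager/rest/jobs",
    json=job_request,
    auth=("admin", "mpfadm"),
)
response.raise_for_status()
print(response.json())
```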
python/AzureSpeechDetection/README.md
line 39 at r3 (raw file):
| Property Key | Description |
|--------------|-------------|
| `SPEAKER_ID` | A unique speaker identifier of the form `<start_offset>-<stop_offset>-<#>`, where `<start_offset>` and `<stop_offset>` are integers indicating the segment range (in frame counts for video jobs, milliseconds for audio jobs) for sub-jobs when a job has been segmented by the Workflow Manager. The final `#` portion of the ID is a 1-indexed counter for speaker identity within the indicated segment range. When jobs are not segmented, or are not submitted through the Workflow Manager at all, `stop_offset` may instead be `EOF`, indicating that the job extends to the end of the file. |
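Since the format packs three fields into one string, a short sketch of how a downstream consumer might parse it (the `parse_speaker_id` helper is hypothetical; the format follows the description above):

```python
from typing import NamedTuple, Optional

class SpeakerId(NamedTuple):
    start_offset: int
    stop_offset: Optional[int]  # None when the segment runs to end of file ("EOF")
    speaker_num: int            # 1-indexed within the segment range

def parse_speaker_id(speaker_id: str) -> SpeakerId:
    """Parse a SPEAKER_ID of the form '<start_offset>-<stop_offset>-<#>'."""
    start, stop, num = speaker_id.split("-")
    return SpeakerId(
        start_offset=int(start),
        stop_offset=None if stop == "EOF" else int(stop),
        speaker_num=int(num),
    )

assert parse_speaker_id("0-30000-2") == SpeakerId(0, 30000, 2)
assert parse_speaker_id("0-EOF-1") == SpeakerId(0, None, 1)
```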
Increment the `outputChangedCounter` due to this output format change. The OpenMPF major version number change will result in jobs being re-run anyway, but it's good to get into the habit.
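For anyone new to the convention: `outputChangedCounter` lives in the component's descriptor JSON, and bumping it signals that previously generated output is stale. A sketch of the bump, assuming the counter sits under the descriptor's algorithm section (the file path and key placement are assumptions):

```python
import json
from pathlib import Path

# Hypothetical descriptor location within the component source tree.
descriptor_path = Path(
    "python/AzureSpeechDetection/plugin-files/descriptor/descriptor.json"
)

descriptor = json.loads(descriptor_path.read_text())
# Assumption: the counter sits under the "algorithm" section of the descriptor.
algorithm = descriptor["algorithm"]
algorithm["outputChangedCounter"] = algorithm.get("outputChangedCounter", 0) + 1
descriptor_path.write_text(json.dumps(descriptor, indent=2) + "\n")
```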
Reviewable status: 54 of 56 files reviewed, 2 unresolved discussions (waiting on @brosenberg42 and @jrobble)
a discussion (no related file):
Previously, jrobble (Jeff Robble) wrote…

In the README for Whisper we have:

`WHISPER_MODE`
: Determines whether Whisper will perform language detection, speech-to-text transcription, or speech translation. English-only models can only transcribe English audio. Set to `LANGUAGE_DETECTION` for spoken language detection, `TRANSCRIPTION` for speech-to-text transcription, and `SPEECH_TRANSLATION` for speech translation.

Update to:

`WHISPER_MODE`
: Determines whether Whisper will perform language detection, speech-to-text transcription, or speech translation. If multiple languages are spoken in a single piece of media, language detection will detect only one of them. English-only models can only transcribe English audio. Set to `LANGUAGE_DETECTION` for spoken language detection, `TRANSCRIPTION` for speech-to-text transcription, and `SPEECH_TRANSLATION` for speech translation.
Done.
python/AzureSpeechDetection/README.md
line 39 at r3 (raw file):
Previously, jrobble (Jeff Robble) wrote…
Increment the `outputChangedCounter` due to this change. The OpenMPF major version number change will result in jobs being re-run anyway, but it's good to get into the habit.
Done.
Reviewed 2 of 2 files at r5, all commit messages.
Reviewable status: complete! all files reviewed, all discussions resolved (waiting on @brosenberg42)
Reviewed 2 of 2 files at r6, all commit messages.
Reviewable status: complete! all files reviewed, all discussions resolved (waiting on @brosenberg42)