[minor] release transcription vocabularies
eropple committed Jul 7, 2022
1 parent cee2c8b commit b7b42b2
Showing 47 changed files with 3,459 additions and 25 deletions.
24 changes: 24 additions & 0 deletions .openapi-generator/FILES
@@ -27,6 +27,7 @@ docs/CreateSimulcastTargetRequest.md
docs/CreateSpaceRequest.md
docs/CreateTrackRequest.md
docs/CreateTrackResponse.md
docs/CreateTranscriptionVocabularyRequest.md
docs/CreateUploadRequest.md
docs/DeliveryReport.md
docs/DeliveryUsageApi.md
@@ -87,11 +88,13 @@ docs/ListRealTimeMetricsResponse.md
docs/ListRelatedIncidentsResponse.md
docs/ListSigningKeysResponse.md
docs/ListSpacesResponse.md
docs/ListTranscriptionVocabulariesResponse.md
docs/ListUploadsResponse.md
docs/ListVideoViewExportsResponse.md
docs/ListVideoViewsResponse.md
docs/LiveStream.md
docs/LiveStreamEmbeddedSubtitleSettings.md
docs/LiveStreamGeneratedSubtitleSettings.md
docs/LiveStreamResponse.md
docs/LiveStreamStatus.md
docs/LiveStreamsApi.md
@@ -126,13 +129,18 @@ docs/SpacesApi.md
docs/StartSpaceBroadcastResponse.md
docs/StopSpaceBroadcastResponse.md
docs/Track.md
docs/TranscriptionVocabulariesApi.md
docs/TranscriptionVocabulary.md
docs/TranscriptionVocabularyResponse.md
docs/URLSigningKeysApi.md
docs/UpdateAssetMP4SupportRequest.md
docs/UpdateAssetMasterAccessRequest.md
docs/UpdateAssetRequest.md
docs/UpdateLiveStreamEmbeddedSubtitlesRequest.md
docs/UpdateLiveStreamGeneratedSubtitlesRequest.md
docs/UpdateLiveStreamRequest.md
docs/UpdateReferrerDomainRestrictionRequest.md
docs/UpdateTranscriptionVocabularyRequest.md
docs/Upload.md
docs/UploadError.md
docs/UploadResponse.md
@@ -156,6 +164,7 @@ mux_python/api/playback_id_api.py
mux_python/api/playback_restrictions_api.py
mux_python/api/real_time_api.py
mux_python/api/spaces_api.py
mux_python/api/transcription_vocabularies_api.py
mux_python/api/url_signing_keys_api.py
mux_python/api/video_views_api.py
mux_python/api_client.py
@@ -187,6 +196,7 @@ mux_python/models/create_simulcast_target_request.py
mux_python/models/create_space_request.py
mux_python/models/create_track_request.py
mux_python/models/create_track_response.py
mux_python/models/create_transcription_vocabulary_request.py
mux_python/models/create_upload_request.py
mux_python/models/delivery_report.py
mux_python/models/dimension_value.py
@@ -240,11 +250,13 @@ mux_python/models/list_real_time_metrics_response.py
mux_python/models/list_related_incidents_response.py
mux_python/models/list_signing_keys_response.py
mux_python/models/list_spaces_response.py
mux_python/models/list_transcription_vocabularies_response.py
mux_python/models/list_uploads_response.py
mux_python/models/list_video_view_exports_response.py
mux_python/models/list_video_views_response.py
mux_python/models/live_stream.py
mux_python/models/live_stream_embedded_subtitle_settings.py
mux_python/models/live_stream_generated_subtitle_settings.py
mux_python/models/live_stream_response.py
mux_python/models/live_stream_status.py
mux_python/models/metric.py
@@ -273,12 +285,16 @@ mux_python/models/space_type.py
mux_python/models/start_space_broadcast_response.py
mux_python/models/stop_space_broadcast_response.py
mux_python/models/track.py
mux_python/models/transcription_vocabulary.py
mux_python/models/transcription_vocabulary_response.py
mux_python/models/update_asset_master_access_request.py
mux_python/models/update_asset_mp4_support_request.py
mux_python/models/update_asset_request.py
mux_python/models/update_live_stream_embedded_subtitles_request.py
mux_python/models/update_live_stream_generated_subtitles_request.py
mux_python/models/update_live_stream_request.py
mux_python/models/update_referrer_domain_restriction_request.py
mux_python/models/update_transcription_vocabulary_request.py
mux_python/models/upload.py
mux_python/models/upload_error.py
mux_python/models/upload_response.py
@@ -291,4 +307,12 @@ setup.cfg
setup.py
test-requirements.txt
test/__init__.py
test/test_create_transcription_vocabulary_request.py
test/test_list_transcription_vocabularies_response.py
test/test_live_stream_generated_subtitle_settings.py
test/test_transcription_vocabularies_api.py
test/test_transcription_vocabulary.py
test/test_transcription_vocabulary_response.py
test/test_update_live_stream_generated_subtitles_request.py
test/test_update_transcription_vocabulary_request.py
tox.ini
1 change: 1 addition & 0 deletions docs/CreateLiveStreamRequest.md
@@ -9,6 +9,7 @@ Name | Type | Description | Notes
**passthrough** | **str** | | [optional]
**audio_only** | **bool** | Force the live stream to only process the audio track when the value is set to true. Mux drops the video track if broadcasted. | [optional]
**embedded_subtitles** | [**list[LiveStreamEmbeddedSubtitleSettings]**](LiveStreamEmbeddedSubtitleSettings.md) | Describe the embedded closed caption contents of the incoming live stream. | [optional]
**generated_subtitles** | [**list[LiveStreamGeneratedSubtitleSettings]**](LiveStreamGeneratedSubtitleSettings.md) | Configure the incoming live stream to include subtitles created with automatic speech recognition. Each Asset created from a live stream with `generated_subtitles` configured will automatically receive two text tracks. The first of these will have a `text_source` value of `generated_live`, and will be available with `ready` status as soon as the stream is live. The second text track will have a `text_source` value of `generated_live_final` and will contain subtitles with improved accuracy, timing, and formatting. However, `generated_live_final` tracks will not be available in `ready` status until the live stream ends. If an Asset has both `generated_live` and `generated_live_final` tracks that are `ready`, then only the `generated_live_final` track will be included during playback. | [optional]
**reduced_latency** | **bool** | This field is deprecated. Please use latency_mode instead. Latency is the time from when the streamer transmits a frame of video to when you see it in the player. Set this if you want lower latency for your live stream. Note: Reconnect windows are incompatible with Reduced Latency and will always be set to zero (0) seconds. Read more here: https://mux.com/blog/reduced-latency-for-mux-live-streaming-now-available/ | [optional]
**low_latency** | **bool** | This field is deprecated. Please use latency_mode instead. Latency is the time from when the streamer transmits a frame of video to when you see it in the player. Setting this option will enable compatibility with the LL-HLS specification for low-latency streaming. This typically has lower latency than Reduced Latency streams, and cannot be combined with Reduced Latency. Note: Reconnect windows are incompatible with Low Latency and will always be set to zero (0) seconds. | [optional]
**latency_mode** | **str** | Latency is the time from when the streamer transmits a frame of video to when you see it in the player. Set this as an alternative to setting low latency or reduced latency flags. The Low Latency value is a beta feature. Note: Reconnect windows are incompatible with Reduced Latency and Low Latency and will always be set to zero (0) seconds. Read more here: https://mux.com/blog/introducing-low-latency-live-streaming/ | [optional]
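
A minimal sketch of requesting generated subtitles at live stream creation time, in the style of the SDK's other examples. The request body shape follows the fields documented above; the vocabulary ID is a placeholder, and `create_live_stream` is the existing `LiveStreamsApi` method.

```python
import mux_python
from mux_python.rest import ApiException

configuration = mux_python.Configuration(
    username = 'YOUR_USERNAME',
    password = 'YOUR_PASSWORD'
)

with mux_python.ApiClient(configuration) as api_client:
    live_streams_api = mux_python.LiveStreamsApi(api_client)

    # Ask Mux to produce English subtitles via automatic speech recognition,
    # boosted by an existing Transcription Vocabulary (placeholder ID).
    create_live_stream_request = {
        "playback_policy": ["public"],
        "new_asset_settings": {"playback_policy": ["public"]},
        "generated_subtitles": [{
            "name": "English CC (ASR)",
            "language_code": "en",
            "transcription_vocabulary_ids": ["YOUR_VOCABULARY_ID"]
        }]
    }

    try:
        api_response = live_streams_api.create_live_stream(create_live_stream_request)
        # The created live stream is returned under `data`.
        print(api_response.data.id)
    except ApiException as e:
        print("Exception when calling LiveStreamsApi->create_live_stream: %s\n" % e)
```
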
12 changes: 12 additions & 0 deletions docs/CreateTranscriptionVocabularyRequest.md
@@ -0,0 +1,12 @@
# CreateTranscriptionVocabularyRequest

## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**name** | **str** | The user-supplied name of the Transcription Vocabulary. | [optional]
**phrases** | **list[str]** | Phrases, individual words, or proper names to include in the Transcription Vocabulary. When the Transcription Vocabulary is attached to a live stream's `generated_subtitles`, the probability of successful speech recognition for these words or phrases is boosted. |
**passthrough** | **str** | Arbitrary user-supplied metadata set for the Transcription Vocabulary. Max 255 characters. | [optional]

[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
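
A minimal sketch of creating a Transcription Vocabulary with these fields. The `TranscriptionVocabulariesApi` class is added in this commit; the `create_transcription_vocabulary` method name follows the generator's naming convention and is assumed here, and the request body values are illustrative.

```python
from pprint import pprint
import mux_python
from mux_python.rest import ApiException

configuration = mux_python.Configuration(
    username = 'YOUR_USERNAME',
    password = 'YOUR_PASSWORD'
)

with mux_python.ApiClient(configuration) as api_client:
    vocabularies_api = mux_python.TranscriptionVocabulariesApi(api_client)

    # `phrases` is required; `name` and `passthrough` are optional.
    create_transcription_vocabulary_request = {
        "name": "Product names",
        "phrases": ["Mux", "Mux Data", "simulcast target"],
        "passthrough": "Example"
    }

    try:
        api_response = vocabularies_api.create_transcription_vocabulary(create_transcription_vocabulary_request)
        pprint(api_response)
    except ApiException as e:
        print("Exception when calling TranscriptionVocabulariesApi->create_transcription_vocabulary: %s\n" % e)
```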


10 changes: 10 additions & 0 deletions docs/ListTranscriptionVocabulariesResponse.md
@@ -0,0 +1,10 @@
# ListTranscriptionVocabulariesResponse

## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**data** | [**list[TranscriptionVocabulary]**](TranscriptionVocabulary.md) | | [optional]

[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
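
A short sketch of reading this response. The `list_transcription_vocabularies` method name follows the generator's convention and is assumed, as are the `id` and `name` attributes on the TranscriptionVocabulary items in `data`.

```python
import mux_python
from mux_python.rest import ApiException

configuration = mux_python.Configuration(
    username = 'YOUR_USERNAME',
    password = 'YOUR_PASSWORD'
)

with mux_python.ApiClient(configuration) as api_client:
    vocabularies_api = mux_python.TranscriptionVocabulariesApi(api_client)

    try:
        # `data` holds the account's TranscriptionVocabulary objects.
        api_response = vocabularies_api.list_transcription_vocabularies()
        for vocabulary in api_response.data:
            print(vocabulary.id, vocabulary.name)
    except ApiException as e:
        print("Exception when calling TranscriptionVocabulariesApi->list_transcription_vocabularies: %s\n" % e)
```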


3 changes: 2 additions & 1 deletion docs/LiveStream.md
@@ -14,7 +14,8 @@ Name | Type | Description | Notes
**passthrough** | **str** | Arbitrary user-supplied metadata set for the asset. Max 255 characters. | [optional]
**audio_only** | **bool** | The live stream only processes the audio track if the value is set to true. Mux drops the video track if broadcasted. | [optional]
**embedded_subtitles** | [**list[LiveStreamEmbeddedSubtitleSettings]**](LiveStreamEmbeddedSubtitleSettings.md) | Describes the embedded closed caption configuration of the incoming live stream. | [optional]
**reconnect_window** | **float** | When live streaming software disconnects from Mux, either intentionally or due to a drop in the network, the Reconnect Window is the time in seconds that Mux should wait for the streaming software to reconnect before considering the live stream finished and completing the recorded asset. **Min**: 0.1s. **Max**: 300s (5 minutes). | [optional] [default to 60]
**generated_subtitles** | [**list[LiveStreamGeneratedSubtitleSettings]**](LiveStreamGeneratedSubtitleSettings.md) | Configure the incoming live stream to include subtitles created with automatic speech recognition. Each Asset created from a live stream with `generated_subtitles` configured will automatically receive two text tracks. The first of these will have a `text_source` value of `generated_live`, and will be available with `ready` status as soon as the stream is live. The second text track will have a `text_source` value of `generated_live_final` and will contain subtitles with improved accuracy, timing, and formatting. However, `generated_live_final` tracks will not be available in `ready` status until the live stream ends. If an Asset has both `generated_live` and `generated_live_final` tracks that are `ready`, then only the `generated_live_final` track will be included during playback. | [optional]
**reconnect_window** | **float** | When live streaming software disconnects from Mux, either intentionally or due to a drop in the network, the Reconnect Window is the time in seconds that Mux should wait for the streaming software to reconnect before considering the live stream finished and completing the recorded asset. **Min**: 0.1s. **Max**: 1800s (30 minutes). | [optional] [default to 60]
**reduced_latency** | **bool** | This field is deprecated. Please use latency_mode instead. Latency is the time from when the streamer transmits a frame of video to when you see it in the player. Set this if you want lower latency for your live stream. **Note**: Reconnect windows are incompatible with Reduced Latency and will always be set to zero (0) seconds. See the [Reduce live stream latency guide](https://docs.mux.com/guides/video/reduce-live-stream-latency) to understand the tradeoffs. | [optional]
**low_latency** | **bool** | This field is deprecated. Please use latency_mode instead. Latency is the time from when the streamer transmits a frame of video to when you see it in the player. Setting this option will enable compatibility with the LL-HLS specification for low-latency streaming. This typically has lower latency than Reduced Latency streams, and cannot be combined with Reduced Latency. Note: Reconnect windows are incompatible with Low Latency and will always be set to zero (0) seconds. | [optional]
**simulcast_targets** | [**list[SimulcastTarget]**](SimulcastTarget.md) | Each Simulcast Target contains configuration details to broadcast (or \"restream\") a live stream to a third-party streaming service. [See the Stream live to 3rd party platforms guide](https://docs.mux.com/guides/video/stream-live-to-3rd-party-platforms). | [optional]
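
A brief sketch of locating the automatically generated text tracks on an asset recorded from such a live stream, assuming the existing `AssetsApi.get_asset` method and the `type`, `text_source`, and `status` attributes on the Track model; the asset ID is a placeholder.

```python
import mux_python

configuration = mux_python.Configuration(
    username = 'YOUR_USERNAME',
    password = 'YOUR_PASSWORD'
)

with mux_python.ApiClient(configuration) as api_client:
    assets_api = mux_python.AssetsApi(api_client)

    # A `generated_live` track is ready while the stream is live; the more
    # accurate `generated_live_final` track becomes ready after the stream ends.
    asset = assets_api.get_asset('YOUR_ASSET_ID').data
    for track in asset.tracks or []:
        if track.type == 'text':
            print(track.id, track.text_source, track.status)
```
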
13 changes: 13 additions & 0 deletions docs/LiveStreamGeneratedSubtitleSettings.md
@@ -0,0 +1,13 @@
# LiveStreamGeneratedSubtitleSettings

## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**name** | **str** | A name for this live stream subtitle track. | [optional]
**passthrough** | **str** | Arbitrary metadata set for the live stream subtitle track. Max 255 characters. | [optional]
**language_code** | **str** | The language to generate subtitles in. | [optional] [default to 'en']
**transcription_vocabulary_ids** | **list[str]** | Unique identifiers for existing Transcription Vocabularies to use while generating subtitles for the live stream. If the Transcription Vocabularies provided collectively have more than 1000 phrases, only the first 1000 phrases will be included. | [optional]

[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
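
A sketch of how one of these settings objects looks inside a `generated_subtitles` array in a request payload; the vocabulary IDs are placeholders, and only the first 1000 phrases across the referenced vocabularies are applied.

```python
# One entry of a `generated_subtitles` array; vocabulary IDs are placeholders.
generated_subtitle_settings = {
    "name": "English CC (ASR)",
    "passthrough": "Example",
    "language_code": "en",  # defaults to 'en' when omitted
    "transcription_vocabulary_ids": ["VOCABULARY_ID_ONE", "VOCABULARY_ID_TWO"]
}
```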


76 changes: 76 additions & 0 deletions docs/LiveStreamsApi.md
@@ -20,6 +20,7 @@ Method | HTTP request | Description
[**signal_live_stream_complete**](LiveStreamsApi.md#signal_live_stream_complete) | **PUT** /video/v1/live-streams/{LIVE_STREAM_ID}/complete | Signal a live stream is finished
[**update_live_stream**](LiveStreamsApi.md#update_live_stream) | **PATCH** /video/v1/live-streams/{LIVE_STREAM_ID} | Update a live stream
[**update_live_stream_embedded_subtitles**](LiveStreamsApi.md#update_live_stream_embedded_subtitles) | **PUT** /video/v1/live-streams/{LIVE_STREAM_ID}/embedded-subtitles | Update a live stream's embedded subtitles
[**update_live_stream_generated_subtitles**](LiveStreamsApi.md#update_live_stream_generated_subtitles) | **PUT** /video/v1/live-streams/{LIVE_STREAM_ID}/generated-subtitles | Update a live stream's generated subtitles


# **create_live_stream**
@@ -1209,3 +1210,78 @@ Name | Type | Description | Notes

[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)

# **update_live_stream_generated_subtitles**
> LiveStreamResponse update_live_stream_generated_subtitles(live_stream_id, update_live_stream_generated_subtitles_request)
Update a live stream's generated subtitles

Updates a live stream's automatic-speech-recognition-generated subtitle configuration. Automatic speech recognition subtitles can be removed by sending an empty array in the request payload.

### Example

* Basic Authentication (accessToken):
```python
from __future__ import print_function
import time
import mux_python
from mux_python.rest import ApiException
from pprint import pprint
# Defining the host is optional and defaults to https://api.mux.com
# See configuration.py for a list of all supported configuration parameters.
configuration = mux_python.Configuration(
    host = "https://api.mux.com"
)

# The client must configure the authentication and authorization parameters
# in accordance with the API server security policy.
# Examples for each auth method are provided below, use the example that
# satisfies your auth use case.

# Configure HTTP basic authorization: accessToken
configuration = mux_python.Configuration(
    username = 'YOUR_USERNAME',
    password = 'YOUR_PASSWORD'
)

# Enter a context with an instance of the API client
with mux_python.ApiClient(configuration) as api_client:
    # Create an instance of the API class
    api_instance = mux_python.LiveStreamsApi(api_client)
    live_stream_id = 'live_stream_id_example' # str | The live stream ID
    update_live_stream_generated_subtitles_request = {"generated_subtitles":[{"name":"English CC (ASR)","language_code":"en","passthrough":"Example"}]} # UpdateLiveStreamGeneratedSubtitlesRequest |

    try:
        # Update a live stream's generated subtitles
        api_response = api_instance.update_live_stream_generated_subtitles(live_stream_id, update_live_stream_generated_subtitles_request)
        pprint(api_response)
    except ApiException as e:
        print("Exception when calling LiveStreamsApi->update_live_stream_generated_subtitles: %s\n" % e)
```

### Parameters

Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**live_stream_id** | **str**| The live stream ID |
**update_live_stream_generated_subtitles_request** | [**UpdateLiveStreamGeneratedSubtitlesRequest**](UpdateLiveStreamGeneratedSubtitlesRequest.md)| |

### Return type

[**LiveStreamResponse**](LiveStreamResponse.md)

### Authorization

[accessToken](../README.md#accessToken)

### HTTP request headers

- **Content-Type**: application/json
- **Accept**: application/json

### HTTP response details
| Status code | Description | Response headers |
|-------------|-------------|------------------|
**200** | OK | - |

[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
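
As noted above, generated subtitles can be removed by sending an empty `generated_subtitles` array; a minimal sketch, with a placeholder live stream ID:

```python
import mux_python
from mux_python.rest import ApiException

configuration = mux_python.Configuration(
    username = 'YOUR_USERNAME',
    password = 'YOUR_PASSWORD'
)

with mux_python.ApiClient(configuration) as api_client:
    live_streams_api = mux_python.LiveStreamsApi(api_client)

    try:
        # An empty array clears automatic speech recognition subtitles
        # from the live stream.
        live_streams_api.update_live_stream_generated_subtitles(
            'YOUR_LIVE_STREAM_ID', {"generated_subtitles": []})
    except ApiException as e:
        print("Exception when calling LiveStreamsApi->update_live_stream_generated_subtitles: %s\n" % e)
```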
