---
title: Real-time meeting transcription quickstart - Speech service
titleSuffix: Azure AI services
description: In this quickstart, learn how to transcribe meetings. You can add, remove, and identify multiple participants by streaming audio to the Speech service.
author: eric-urban
manager: nitinme
ms.service: azure-ai-speech
ms.topic: quickstart
ms.date: 1/21/2024
ms.author: eur
zone_pivot_groups: acs-js-csharp-python
ms.custom: cogserv-non-critical-speech, references_regions, devx-track-extended-java, devx-track-js, devx-track-python
---

# Quickstart: Real-time meeting transcription

You can transcribe meetings with the ability to add, remove, and identify multiple participants by streaming audio to the Speech service. You first create voice signatures for each participant using the REST API, and then use the voice signatures with the Speech SDK to transcribe meetings. See the meeting transcription overview for more information.
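Before you pick a language tab below, here's a minimal sketch of that first step in Python with the `requests` library: posting an enrollment WAV sample to the voice signature endpoint. The subscription key, region, file name, and multipart field name are placeholder assumptions, and the endpoint pattern follows the one shown in the meeting transcription guides, so verify both against the REST reference for your region before relying on it.

```python
# Minimal sketch: create a voice signature from an enrollment WAV file.
# Assumptions: the key, region, file name, and "file" form field are placeholders,
# and the endpoint follows the pattern documented for meeting transcription.
import requests

region = "centralus"  # must be one of the supported regions listed below
subscription_key = "YOUR_SUBSCRIPTION_KEY"
endpoint = (
    f"https://signature.{region}.cts.speech.microsoft.com"
    "/api/v1/Signature/GenerateVoiceSignatureFromFormData"
)

# The enrollment sample is typically a short mono, 16-bit, 16 kHz WAV recording
# of the participant speaking.
with open("katie-enrollment.wav", "rb") as wav_file:
    response = requests.post(
        endpoint,
        headers={"Ocp-Apim-Subscription-Key": subscription_key},
        files={"file": ("enrollment.wav", wav_file, "audio/wav")},
    )

response.raise_for_status()
# The response JSON contains the voice signature data that you later pass to the
# Speech SDK when adding this participant to a meeting transcription session.
voice_signature = response.json()
print(voice_signature)
```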

## Limitations

* Only available in the following subscription regions: `centralus`, `eastasia`, `eastus`, `westeurope`
* Requires a 7-mic circular multi-microphone array. The microphone array should meet our specification.

> [!NOTE]
> The Speech SDKs for C++, Java, Objective-C, and Swift support meeting transcription, but we haven't yet included a guide here.

::: zone pivot="programming-language-javascript" [!INCLUDE JavaScript Basics include] ::: zone-end

::: zone pivot="programming-language-csharp" [!INCLUDE C# Basics include] ::: zone-end

::: zone pivot="programming-language-python" [!INCLUDE Python Basics include] ::: zone-end

## Next steps

> [!div class="nextstepaction"]
> Asynchronous meeting transcription