diff --git a/README.md b/README.md
index 5f37379b..c624e45a 100644
--- a/README.md
+++ b/README.md
@@ -131,20 +131,17 @@ Depending on your preference, you can also add parameters as named arguments, in
 response = await dg_client.transcription.prerecorded(source, punctuate=True, keywords=['first:5', 'second'])
 ```
 
-## Code Samples
-
-To run the sample code, you may want to create a virtual environment to isolate your Python projects, but it's not required. You can learn how to make and activate these virtual environments in [this article](https://blog.deepgram.com/python-virtual-environments/) on our Deepgram blog.
+## Testing
 
-#### Streaming Audio Code Samples
+### Setup
 
-In the `sample-projects` folder, there are examples from four different Python web frameworks of how to do live streaming audio transcription with Deepgram. These include:
+Run the following command to install `pytest` and `pytest-cov` as dev dependencies.
 
-- Flask 2.0
-- FastAPI
-- Django
-- Quart
+```
+pip install -r requirements-dev.txt
+```
 
-## Testing
+### Run All Tests
 
 ### Setup
 
@@ -165,6 +162,17 @@ pytest --api-key tests/
 pytest --cov=deepgram --api-key tests/
 ```
 
+### Using Example Projects to Test New Features
+
+Contributors to the SDK can test their changes locally by running the projects in the `examples` folder. This is useful for trying out a change before you have written a unit test, although unit tests are still recommended for any feature additions to the SDK.
+
+Go to the `examples` folder and look for these two projects, which can be used to test features in the Deepgram Python SDK:
+
+- prerecorded
+- streaming
+
+These are standalone projects, so you will need to follow the instructions in the `README.md` in the `examples` folder to get them running.
+
 ## Development and Contributing
 
 Interested in contributing? We ❤️ pull requests!
diff --git a/examples/README.md b/examples/README.md
new file mode 100644
index 00000000..4f4467a2
--- /dev/null
+++ b/examples/README.md
@@ -0,0 +1,76 @@
+# Examples for Testing Features Locally
+
+The example projects are meant to be used to test features locally by contributors working on this SDK, but they can also be used as quickstarts to get up and running with the Deepgram Python SDK.
+
+Here are the steps to follow to run the examples with the **local version** of the SDK:
+
+## Add Your Code
+
+Make your changes to the SDK (be sure you are on a branch you have created for this work).
+
+## Install dependencies
+
+You can choose between two methods for installing the Deepgram SDK from the local folder:
+
+### Install locally with an example's `requirements.txt` file
+
+Change into the folder of the example you want to run (`examples/prerecorded` or `examples/streaming`) and run the following command to install the project dependencies. This installs the SDK from the local repo because the package is listed there as `-e ../../`. The `-e` flag tells pip to install the package in "editable" mode, meaning it is used in place from the source code (the local folder).
+
+`pip install -r requirements.txt`
+
+### Install locally with `pip install -e`
+
+The other way to install the Deepgram SDK from the local project is to use `pip install -e` directly. In this case, you would run:
+
+```
+pip uninstall deepgram-sdk # If it's already installed
+cd /path/to/deepgram-python-sdk/ # navigate to inside the deepgram SDK
+pip install -e .
+```
+
+This installs the SDK from the local source code in editable mode, so any changes made inside the project are instantly usable from the example files.
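+
+As an optional quick check (a minimal sketch, not something the SDK requires), you can print where Python loads the `deepgram` package from; with an editable install, the path should point into your local clone rather than into `site-packages`:
+
+```py
+# Suggested check: show which copy of the `deepgram` package is in use.
+import deepgram
+
+print(deepgram.__file__)  # with `pip install -e`, expect a path inside your local deepgram-python-sdk checkout
+```
+
+A more thorough check using `importlib` is described under "How to verify that you're testing the local changes" below.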
+
+## Edit the API key
+
+Inside the example file, replace the API key where it says 'YOUR_DEEPGRAM_API_KEY':
+
+`DEEPGRAM_API_KEY = 'YOUR_DEEPGRAM_API_KEY'`
+
+## Run the project
+
+Make sure you're in the directory with the `main.py` file and run the project with the following command.
+
+`python main.py`
+
+## After testing
+
+After you have used the example files to test your code, be sure to reset each example file to the way it was when you started (i.e. discard any options you added to the options dictionary while testing features).
+
+## How to verify that you're testing the local changes
+
+If you want to be sure that you are testing the local `deepgram` package, you can run this check.
+
+### Step 1
+
+Launch the Python interpreter by typing `python` in the terminal. Make sure you are in the folder with the `main.py` file you will be using to run the test.
+
+```
+python
+```
+
+### Step 2
+
+Inside the interpreter, run the following code. It imports `importlib.util` and uses `find_spec()` to determine the location of the imported module.
+
+```py
+import importlib.util
+
+spec = importlib.util.find_spec("deepgram")
+if spec is not None:
+    print("Module 'deepgram' is imported from:", spec.origin)
+else:
+    print("Module 'deepgram' is not found.")
+```
+
+This code checks whether the module named "deepgram" can be found and, if so, prints its origin (i.e., the location from which it is imported).
diff --git a/examples/prerecorded/main.py b/examples/prerecorded/main.py
new file mode 100644
index 00000000..e6c67295
--- /dev/null
+++ b/examples/prerecorded/main.py
@@ -0,0 +1,71 @@
+# Example filename: deepgram_test.py
+
+import json
+import asyncio
+import sys  # used by the error handler at the bottom of this file
+
+from deepgram import Deepgram
+
+# Your Deepgram API Key
+DEEPGRAM_API_KEY = 'YOUR_DEEPGRAM_API_KEY'
+
+# Location of the file you want to transcribe. Should include filename and extension.
+# Example of a local file: ../../Audio/life-moves-pretty-fast.wav +# Example of a remote file: https://static.deepgram.com/examples/interview_speech-analytics.wav +FILE = 'https://static.deepgram.com/examples/interview_speech-analytics.wav' + +# Mimetype for the file you want to transcribe +# Include this line only if transcribing a local file +# Example: audio/wav +MIMETYPE = 'audio/mpeg' + + +async def main(): + + # Initialize the Deepgram SDK + deepgram = Deepgram(DEEPGRAM_API_KEY) + + # Check whether requested file is local or remote, and prepare source + if FILE.startswith('http'): + # file is remote + # Set the source + source = { + 'url': FILE + } + else: + # file is local + # Open the audio file + audio = open(FILE, 'rb') + + # Set the source + source = { + 'buffer': audio, + 'mimetype': MIMETYPE + } + + # Send the audio to Deepgram and get the response + response = await asyncio.create_task( + deepgram.transcription.prerecorded( + source, + { + 'detect_language': "true", + 'summarize': "v2", + } + ) + ) + + # Write the response to the console + print(json.dumps(response, indent=4)) + + # Write only the transcript to the console + # print(response["results"]["channels"][0]["alternatives"][0]["transcript"]) + + # print(response["results"]["channels"]) + +try: + # If running in a Jupyter notebook, Jupyter is already running an event loop, so run main with this line instead: + # await main() + asyncio.run(main()) +except Exception as e: + exception_type, exception_object, exception_traceback = sys.exc_info() + line_number = exception_traceback.tb_lineno + print(f'line {line_number}: {exception_type} - {e}') diff --git a/examples/prerecorded/requirements.txt b/examples/prerecorded/requirements.txt new file mode 100644 index 00000000..ab70bdb3 --- /dev/null +++ b/examples/prerecorded/requirements.txt @@ -0,0 +1 @@ +-e ../../ diff --git a/examples/streaming/main.py b/examples/streaming/main.py new file mode 100644 index 00000000..4f1ebbd8 --- /dev/null +++ b/examples/streaming/main.py @@ -0,0 +1,54 @@ +# Example filename: deepgram_test.py + +from deepgram import Deepgram +import asyncio +import aiohttp + +# Your Deepgram API Key +DEEPGRAM_API_KEY = '' + +# URL for the realtime streaming audio you would like to transcribe +URL = 'http://stream.live.vc.bbcmedia.co.uk/bbc_world_service' + + +async def main(): + # Initialize the Deepgram SDK + deepgram = Deepgram(DEEPGRAM_API_KEY) + + # Create a websocket connection to Deepgram + # In this example, punctuation is turned on, interim results are turned off, and language is set to UK English. + try: + deepgramLive = await deepgram.transcription.live({ + 'smart_format': True, + 'interim_results': False, + 'language': 'en-US', + 'model': 'nova', + }) + except Exception as e: + print(f'Could not open socket: {e}') + return + + # Listen for the connection to close + deepgramLive.registerHandler(deepgramLive.event.CLOSE, lambda c: print( + f'Connection closed with code {c}.')) + + # Listen for any transcripts received from Deepgram and write them to the console + deepgramLive.registerHandler(deepgramLive.event.TRANSCRIPT_RECEIVED, print) + + # Listen for the connection to open and send streaming audio from the URL to Deepgram + async with aiohttp.ClientSession() as session: + async with session.get(URL) as audio: + while True: + data = await audio.content.readany() + deepgramLive.send(data) + + # If no data is being sent from the live stream, then break out of the loop. 
+ if not data: + break + + # Indicate that we've finished sending data by sending the customary zero-byte message to the Deepgram streaming endpoint, and wait until we get back the final summary metadata object + await deepgramLive.finish() + +# If running in a Jupyter notebook, Jupyter is already running an event loop, so run main with this line instead: +# await main() +asyncio.run(main()) diff --git a/examples/streaming/requirements.txt b/examples/streaming/requirements.txt new file mode 100644 index 00000000..d2a44749 --- /dev/null +++ b/examples/streaming/requirements.txt @@ -0,0 +1,3 @@ +-e ../../ +asyncio +aiohttp \ No newline at end of file diff --git a/requirements-dev.txt b/requirements-dev.txt index b218315d..7a360ab6 100644 --- a/requirements-dev.txt +++ b/requirements-dev.txt @@ -8,3 +8,4 @@ aiohttp pytest pytest-asyncio fuzzywuzzy +pytest-cov \ No newline at end of file diff --git a/sample-projects/streaming-audio/Django/live-transcription-django/README.md b/sample-projects/streaming-audio/Django/live-transcription-django/README.md deleted file mode 100644 index bed88c30..00000000 --- a/sample-projects/streaming-audio/Django/live-transcription-django/README.md +++ /dev/null @@ -1,22 +0,0 @@ -# Live Transcription With Python and Django - -To run this project create a virtual environment by running the below commands. You can learn more about setting up a virtual environment in this article. - -``` -mkdir [% NAME_OF_YOUR_DIRECTORY %] -cd [% NAME_OF_YOUR_DIRECTORY %] -python3 -m venv venv -source venv/bin/activate -``` - -Make sure your virtual environment is activated and install the dependencies in the requirements.txt file inside. - -`pip install -r requirements.txt` - -Make sure you're in the directory with the manage.py file and run the project in the development server. - -`python3 manage.py runserver` - -Pull up a browser and go to your localhost, http://127.0.0.1:8000/. - -Allow access to your microphone and start speaking. A transcript of your audio will appear in the browser. \ No newline at end of file diff --git a/sample-projects/streaming-audio/Django/live-transcription-django/stream/manage.py b/sample-projects/streaming-audio/Django/live-transcription-django/stream/manage.py deleted file mode 100755 index 3288579a..00000000 --- a/sample-projects/streaming-audio/Django/live-transcription-django/stream/manage.py +++ /dev/null @@ -1,22 +0,0 @@ -#!/usr/bin/env python -"""Django's command-line utility for administrative tasks.""" -import os -import sys - - -def main(): - """Run administrative tasks.""" - os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'stream.settings') - try: - from django.core.management import execute_from_command_line - except ImportError as exc: - raise ImportError( - "Couldn't import Django. Are you sure it's installed and " - "available on your PYTHONPATH environment variable? Did you " - "forget to activate a virtual environment?" 
- ) from exc - execute_from_command_line(sys.argv) - - -if __name__ == '__main__': - main() diff --git a/sample-projects/streaming-audio/Django/live-transcription-django/stream/requirements.txt b/sample-projects/streaming-audio/Django/live-transcription-django/stream/requirements.txt deleted file mode 100644 index e4baec2a..00000000 --- a/sample-projects/streaming-audio/Django/live-transcription-django/stream/requirements.txt +++ /dev/null @@ -1,39 +0,0 @@ -aiohttp==3.8.5 -aiosignal==1.2.0 -asgiref==3.5.0 -async-timeout==4.0.2 -attrs==21.4.0 -autobahn==22.2.2 -Automat==20.2.0 -certifi==2023.7.22 -cffi==1.15.0 -channels==3.0.4 -charset-normalizer==2.0.12 -constantly==15.1.0 -cryptography==41.0.3 -daphne==3.0.2 -deepgram-sdk==0.2.4 -Django==4.1.10 -frozenlist==1.3.0 -hyperlink==21.0.0 -idna==3.3 -incremental==21.3.0 -multidict==6.0.2 -oauthlib==3.2.0 -pyasn1==0.4.8 -pyasn1-modules==0.2.8 -pycparser==2.21 -pyOpenSSL==22.0.0 -python-dotenv==0.19.2 -requests==2.31.0 -requests-oauthlib==1.3.1 -service-identity==21.1.0 -six==1.16.0 -sqlparse==0.4.4 -Twisted==22.1.0 -txaio==22.2.1 -typing_extensions==4.1.1 -urllib3==1.26.8 -websockets==10.2 -yarl==1.7.2 -zope.interface==5.4.0 diff --git a/sample-projects/streaming-audio/Django/live-transcription-django/stream/stream/__init__.py b/sample-projects/streaming-audio/Django/live-transcription-django/stream/stream/__init__.py deleted file mode 100644 index e69de29b..00000000 diff --git a/sample-projects/streaming-audio/Django/live-transcription-django/stream/stream/asgi.py b/sample-projects/streaming-audio/Django/live-transcription-django/stream/stream/asgi.py deleted file mode 100644 index 137f2d45..00000000 --- a/sample-projects/streaming-audio/Django/live-transcription-django/stream/stream/asgi.py +++ /dev/null @@ -1,26 +0,0 @@ -""" -ASGI config for stream project. - -It exposes the ASGI callable as a module-level variable named ``application``. - -For more information on this file, see -https://docs.djangoproject.com/en/4.0/howto/deployment/asgi/ -""" - -import os - -from channels.auth import AuthMiddlewareStack -from channels.routing import ProtocolTypeRouter, URLRouter -from django.core.asgi import get_asgi_application -import transcript.routing - -os.environ.setdefault("DJANGO_SETTINGS_MODULE", "stream.settings") - -application = ProtocolTypeRouter({ - "http": get_asgi_application(), - "websocket": AuthMiddlewareStack( - URLRouter( - transcript.routing.websocket_urlpatterns - ) - ), -}) diff --git a/sample-projects/streaming-audio/Django/live-transcription-django/stream/stream/settings.py b/sample-projects/streaming-audio/Django/live-transcription-django/stream/stream/settings.py deleted file mode 100644 index df158100..00000000 --- a/sample-projects/streaming-audio/Django/live-transcription-django/stream/stream/settings.py +++ /dev/null @@ -1,124 +0,0 @@ -""" -Django settings for stream project. - -Generated by 'django-admin startproject' using Django 4.0.3. - -For more information on this file, see -https://docs.djangoproject.com/en/4.0/topics/settings/ - -For the full list of settings and their values, see -https://docs.djangoproject.com/en/4.0/ref/settings/ -""" - -from pathlib import Path - -# Build paths inside the project like this: BASE_DIR / 'subdir'. -BASE_DIR = Path(__file__).resolve().parent.parent - - -# Quick-start development settings - unsuitable for production -# See https://docs.djangoproject.com/en/4.0/howto/deployment/checklist/ - -# SECURITY WARNING: don't run with debug turned on in production! 
-DEBUG = True - -ALLOWED_HOSTS = [] - - -# Application definition - -INSTALLED_APPS = [ - 'channels', - 'transcript', - 'django.contrib.admin', - 'django.contrib.auth', - 'django.contrib.contenttypes', - 'django.contrib.sessions', - 'django.contrib.messages', - 'django.contrib.staticfiles', -] - -MIDDLEWARE = [ - 'django.middleware.security.SecurityMiddleware', - 'django.contrib.sessions.middleware.SessionMiddleware', - 'django.middleware.common.CommonMiddleware', - 'django.middleware.csrf.CsrfViewMiddleware', - 'django.contrib.auth.middleware.AuthenticationMiddleware', - 'django.contrib.messages.middleware.MessageMiddleware', - 'django.middleware.clickjacking.XFrameOptionsMiddleware', -] - -ROOT_URLCONF = 'stream.urls' - -TEMPLATES = [ - { - 'BACKEND': 'django.template.backends.django.DjangoTemplates', - 'DIRS': [], - 'APP_DIRS': True, - 'OPTIONS': { - 'context_processors': [ - 'django.template.context_processors.debug', - 'django.template.context_processors.request', - 'django.contrib.auth.context_processors.auth', - 'django.contrib.messages.context_processors.messages', - ], - }, - }, -] - -WSGI_APPLICATION = 'stream.wsgi.application' - - -# Database -# https://docs.djangoproject.com/en/4.0/ref/settings/#databases - -DATABASES = { - 'default': { - 'ENGINE': 'django.db.backends.sqlite3', - 'NAME': BASE_DIR / 'db.sqlite3', - } -} - - -# Password validation -# https://docs.djangoproject.com/en/4.0/ref/settings/#auth-password-validators - -AUTH_PASSWORD_VALIDATORS = [ - { - 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator', - }, - { - 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator', - }, - { - 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator', - }, - { - 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator', - }, -] - - -# Internationalization -# https://docs.djangoproject.com/en/4.0/topics/i18n/ - -LANGUAGE_CODE = 'en-us' - -TIME_ZONE = 'UTC' - -USE_I18N = True - -USE_TZ = True - - -# Static files (CSS, JavaScript, Images) -# https://docs.djangoproject.com/en/4.0/howto/static-files/ - -STATIC_URL = 'static/' - -# Default primary key field type -# https://docs.djangoproject.com/en/4.0/ref/settings/#default-auto-field - -DEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField' - -ASGI_APPLICATION = 'stream.asgi.application' \ No newline at end of file diff --git a/sample-projects/streaming-audio/Django/live-transcription-django/stream/stream/urls.py b/sample-projects/streaming-audio/Django/live-transcription-django/stream/stream/urls.py deleted file mode 100644 index 6fdc7811..00000000 --- a/sample-projects/streaming-audio/Django/live-transcription-django/stream/stream/urls.py +++ /dev/null @@ -1,23 +0,0 @@ -"""stream URL Configuration - -The `urlpatterns` list routes URLs to views. For more information please see: - https://docs.djangoproject.com/en/4.0/topics/http/urls/ -Examples: -Function views - 1. Add an import: from my_app import views - 2. Add a URL to urlpatterns: path('', views.home, name='home') -Class-based views - 1. Add an import: from other_app.views import Home - 2. Add a URL to urlpatterns: path('', Home.as_view(), name='home') -Including another URLconf - 1. Import the include() function: from django.urls import include, path - 2. 
Add a URL to urlpatterns: path('blog/', include('blog.urls')) -""" -from django.conf.urls import include -from django.contrib import admin -from django.urls import path - -urlpatterns = [ - path('', include('transcript.urls')), - path('admin/', admin.site.urls), -] diff --git a/sample-projects/streaming-audio/Django/live-transcription-django/stream/stream/wsgi.py b/sample-projects/streaming-audio/Django/live-transcription-django/stream/stream/wsgi.py deleted file mode 100644 index 1cd83868..00000000 --- a/sample-projects/streaming-audio/Django/live-transcription-django/stream/stream/wsgi.py +++ /dev/null @@ -1,16 +0,0 @@ -""" -WSGI config for stream project. - -It exposes the WSGI callable as a module-level variable named ``application``. - -For more information on this file, see -https://docs.djangoproject.com/en/4.0/howto/deployment/wsgi/ -""" - -import os - -from django.core.wsgi import get_wsgi_application - -os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'stream.settings') - -application = get_wsgi_application() diff --git a/sample-projects/streaming-audio/Django/live-transcription-django/stream/transcript/__init__.py b/sample-projects/streaming-audio/Django/live-transcription-django/stream/transcript/__init__.py deleted file mode 100644 index e69de29b..00000000 diff --git a/sample-projects/streaming-audio/Django/live-transcription-django/stream/transcript/admin.py b/sample-projects/streaming-audio/Django/live-transcription-django/stream/transcript/admin.py deleted file mode 100644 index 8c38f3f3..00000000 --- a/sample-projects/streaming-audio/Django/live-transcription-django/stream/transcript/admin.py +++ /dev/null @@ -1,3 +0,0 @@ -from django.contrib import admin - -# Register your models here. diff --git a/sample-projects/streaming-audio/Django/live-transcription-django/stream/transcript/apps.py b/sample-projects/streaming-audio/Django/live-transcription-django/stream/transcript/apps.py deleted file mode 100644 index ec9da48a..00000000 --- a/sample-projects/streaming-audio/Django/live-transcription-django/stream/transcript/apps.py +++ /dev/null @@ -1,6 +0,0 @@ -from django.apps import AppConfig - - -class TranscriptConfig(AppConfig): - default_auto_field = 'django.db.models.BigAutoField' - name = 'transcript' diff --git a/sample-projects/streaming-audio/Django/live-transcription-django/stream/transcript/consumers.py b/sample-projects/streaming-audio/Django/live-transcription-django/stream/transcript/consumers.py deleted file mode 100644 index 19857fc6..00000000 --- a/sample-projects/streaming-audio/Django/live-transcription-django/stream/transcript/consumers.py +++ /dev/null @@ -1,42 +0,0 @@ -from channels.generic.websocket import AsyncWebsocketConsumer -from dotenv import load_dotenv -from deepgram import Deepgram -from typing import Dict - -import os - -load_dotenv() - -class TranscriptConsumer(AsyncWebsocketConsumer): - dg_client = Deepgram(os.getenv('DEEPGRAM_API_KEY')) - - async def get_transcript(self, data: Dict) -> None: - if 'channel' in data: - transcript = data['channel']['alternatives'][0]['transcript'] - - if transcript: - await self.send(transcript) - - - async def connect_to_deepgram(self): - try: - self.socket = await self.dg_client.transcription.live({'punctuate': True, 'interim_results': False}) - self.socket.registerHandler(self.socket.event.CLOSE, lambda c: print(f'Connection closed with code {c}.')) - self.socket.registerHandler(self.socket.event.TRANSCRIPT_RECEIVED, self.get_transcript) - - except Exception as e: - raise Exception(f'Could not open socket: 
{e}') - - async def connect(self): - await self.connect_to_deepgram() - await self.accept() - - - async def disconnect(self, close_code): - await self.channel_layer.group_discard( - self.room_group_name, - self.channel_name - ) - - async def receive(self, bytes_data): - self.socket.send(bytes_data) diff --git a/sample-projects/streaming-audio/Django/live-transcription-django/stream/transcript/migrations/__init__.py b/sample-projects/streaming-audio/Django/live-transcription-django/stream/transcript/migrations/__init__.py deleted file mode 100644 index e69de29b..00000000 diff --git a/sample-projects/streaming-audio/Django/live-transcription-django/stream/transcript/models.py b/sample-projects/streaming-audio/Django/live-transcription-django/stream/transcript/models.py deleted file mode 100644 index 71a83623..00000000 --- a/sample-projects/streaming-audio/Django/live-transcription-django/stream/transcript/models.py +++ /dev/null @@ -1,3 +0,0 @@ -from django.db import models - -# Create your models here. diff --git a/sample-projects/streaming-audio/Django/live-transcription-django/stream/transcript/routing.py b/sample-projects/streaming-audio/Django/live-transcription-django/stream/transcript/routing.py deleted file mode 100644 index 4833a008..00000000 --- a/sample-projects/streaming-audio/Django/live-transcription-django/stream/transcript/routing.py +++ /dev/null @@ -1,7 +0,0 @@ -from django.urls import re_path - -from . import consumers - -websocket_urlpatterns = [ - re_path(r'listen', consumers.TranscriptConsumer.as_asgi()), -] \ No newline at end of file diff --git a/sample-projects/streaming-audio/Django/live-transcription-django/stream/transcript/templates/transcript/index.html b/sample-projects/streaming-audio/Django/live-transcription-django/stream/transcript/templates/transcript/index.html deleted file mode 100644 index 8ea65c54..00000000 --- a/sample-projects/streaming-audio/Django/live-transcription-django/stream/transcript/templates/transcript/index.html +++ /dev/null @@ -1,53 +0,0 @@ - - - - Chat - - -

[deleted template body, markup not preserved in this view: a page with the heading "Transcribe Audio With Django", a "Connection status will go here" status element, and the browser script that captures microphone audio and streams it to the app's websocket for live transcription]
- - - - \ No newline at end of file diff --git a/sample-projects/streaming-audio/Django/live-transcription-django/stream/transcript/tests.py b/sample-projects/streaming-audio/Django/live-transcription-django/stream/transcript/tests.py deleted file mode 100644 index 7ce503c2..00000000 --- a/sample-projects/streaming-audio/Django/live-transcription-django/stream/transcript/tests.py +++ /dev/null @@ -1,3 +0,0 @@ -from django.test import TestCase - -# Create your tests here. diff --git a/sample-projects/streaming-audio/Django/live-transcription-django/stream/transcript/urls.py b/sample-projects/streaming-audio/Django/live-transcription-django/stream/transcript/urls.py deleted file mode 100644 index 3ef24d97..00000000 --- a/sample-projects/streaming-audio/Django/live-transcription-django/stream/transcript/urls.py +++ /dev/null @@ -1,7 +0,0 @@ -from django.urls import path - -from . import views - -urlpatterns = [ - path('', views.index, name='index'), -] \ No newline at end of file diff --git a/sample-projects/streaming-audio/Django/live-transcription-django/stream/transcript/views.py b/sample-projects/streaming-audio/Django/live-transcription-django/stream/transcript/views.py deleted file mode 100644 index dd6609a4..00000000 --- a/sample-projects/streaming-audio/Django/live-transcription-django/stream/transcript/views.py +++ /dev/null @@ -1,4 +0,0 @@ -from django.shortcuts import render - -def index(request): - return render(request, 'transcript/index.html') \ No newline at end of file diff --git a/sample-projects/streaming-audio/FastAPI/live-transcription-fastapi/README.md b/sample-projects/streaming-audio/FastAPI/live-transcription-fastapi/README.md deleted file mode 100644 index 0d82c23d..00000000 --- a/sample-projects/streaming-audio/FastAPI/live-transcription-fastapi/README.md +++ /dev/null @@ -1,27 +0,0 @@ -# Live Transcription With Python and FastAPI - -To run this project create a virtual environment by running the below commands. You can learn more about setting up a virtual environment in this [article](https://developers.deepgram.com/blog/2022/02/python-virtual-environments/). - -``` -mkdir [% NAME_OF_YOUR_DIRECTORY %] -cd [% NAME_OF_YOUR_DIRECTORY %] -python3 -m venv venv -source venv/bin/activate -``` - -Make sure your virtual environment is activated and install the dependencies in the requirements.txt file inside. - -``` -pip install -r requirements.txt -``` - -Make sure you're in the directory with the **main.py** file and run the project in the development server. - -``` -uvicorn main:app --reload -``` - -Pull up a browser and go to your localhost, http://127.0.0.1:8000/. - -Allow access to your microphone and start speaking. A transcript of your audio will appear in the browser. 
- diff --git a/sample-projects/streaming-audio/FastAPI/live-transcription-fastapi/main.py b/sample-projects/streaming-audio/FastAPI/live-transcription-fastapi/main.py deleted file mode 100644 index 56e44727..00000000 --- a/sample-projects/streaming-audio/FastAPI/live-transcription-fastapi/main.py +++ /dev/null @@ -1,56 +0,0 @@ -from fastapi import FastAPI, Request, WebSocket -from fastapi.responses import HTMLResponse -from fastapi.templating import Jinja2Templates -from typing import Dict, Callable -from deepgram import Deepgram -from dotenv import load_dotenv -import os - -load_dotenv() - -app = FastAPI() - -dg_client = Deepgram(os.getenv('DEEPGRAM_API_KEY')) - -templates = Jinja2Templates(directory="templates") - -async def process_audio(fast_socket: WebSocket): - async def get_transcript(data: Dict) -> None: - if 'channel' in data: - transcript = data['channel']['alternatives'][0]['transcript'] - - if transcript: - await fast_socket.send_text(transcript) - - deepgram_socket = await connect_to_deepgram(get_transcript) - - return deepgram_socket - -async def connect_to_deepgram(transcript_received_handler: Callable[[Dict], None]): - try: - socket = await dg_client.transcription.live({'punctuate': True, 'interim_results': False}) - socket.registerHandler(socket.event.CLOSE, lambda c: print(f'Connection closed with code {c}.')) - socket.registerHandler(socket.event.TRANSCRIPT_RECEIVED, transcript_received_handler) - - return socket - except Exception as e: - raise Exception(f'Could not open socket: {e}') - -@app.get("/", response_class=HTMLResponse) -def get(request: Request): - return templates.TemplateResponse("index.html", {"request": request}) - -@app.websocket("/listen") -async def websocket_endpoint(websocket: WebSocket): - await websocket.accept() - - try: - deepgram_socket = await process_audio(websocket) - - while True: - data = await websocket.receive_bytes() - deepgram_socket.send(data) - except Exception as e: - raise Exception(f'Could not process audio: {e}') - finally: - await websocket.close() \ No newline at end of file diff --git a/sample-projects/streaming-audio/FastAPI/live-transcription-fastapi/requirements.txt b/sample-projects/streaming-audio/FastAPI/live-transcription-fastapi/requirements.txt deleted file mode 100644 index 561bc09f..00000000 --- a/sample-projects/streaming-audio/FastAPI/live-transcription-fastapi/requirements.txt +++ /dev/null @@ -1,38 +0,0 @@ -aiohttp==3.8.5 -aiosignal==1.2.0 -anyio==3.5.0 -asgiref==3.5.0 -async-timeout==4.0.2 -attrs==21.4.0 -certifi==2023.7.22 -charset-normalizer==2.0.12 -click==8.0.4 -deepgram-sdk==0.2.4 -dnspython==2.2.0 -email-validator==1.1.3 -fastapi==0.74.1 -frozenlist==1.3.0 -h11==0.13.0 -httptools==0.2.0 -idna==3.3 -itsdangerous==2.1.0 -Jinja2==3.0.3 -MarkupSafe==2.1.0 -multidict==6.0.2 -orjson==3.6.7 -pydantic==1.9.0 -python-dotenv==0.19.2 -python-multipart==0.0.5 -PyYAML==5.4.1 -requests==2.31.0 -six==1.16.0 -sniffio==1.2.0 -starlette==0.17.1 -typing_extensions==4.1.1 -ujson==4.3.0 -urllib3==1.26.8 -uvicorn==0.15.0 -uvloop==0.16.0 -watchgod==0.7 -websockets==10.2 -yarl==1.7.2 diff --git a/sample-projects/streaming-audio/FastAPI/live-transcription-fastapi/templates/index.html b/sample-projects/streaming-audio/FastAPI/live-transcription-fastapi/templates/index.html deleted file mode 100644 index 8f81ef39..00000000 --- a/sample-projects/streaming-audio/FastAPI/live-transcription-fastapi/templates/index.html +++ /dev/null @@ -1,52 +0,0 @@ - - - - Live Transcription - - -

[deleted template body, markup not preserved in this view: a page with the heading "Transcribe Audio With FastAPI", a "Connection status will go here" status element, and the browser script that captures microphone audio and streams it to the app's websocket for live transcription]
- - - - \ No newline at end of file diff --git a/sample-projects/streaming-audio/Flask/live-transcription-flask/README.md b/sample-projects/streaming-audio/Flask/live-transcription-flask/README.md deleted file mode 100644 index cb949cab..00000000 --- a/sample-projects/streaming-audio/Flask/live-transcription-flask/README.md +++ /dev/null @@ -1,22 +0,0 @@ -# Live Transcription With Python, Flask 2.0 and Deepgram - -To run this project create a virtual environment by running the below commands. You can learn more about setting up a virtual environment in this article. - -``` -mkdir [% NAME_OF_YOUR_DIRECTORY %] -cd [% NAME_OF_YOUR_DIRECTORY %] -python3 -m venv venv -source venv/bin/activate -``` - -Make sure your virtual environment is activated and install the dependencies in the requirements.txt file inside. - -`pip install -r requirements.txt` - -Make sure you're in the directory with the main.py file and run the project in the development server. - -`python main.py` - -Pull up a browser and go to your localhost, `http://127.0.0.1:8000/`. - -Allow access to your microphone and start speaking. A transcript of your audio will appear in the browser. diff --git a/sample-projects/streaming-audio/Flask/live-transcription-flask/main.py b/sample-projects/streaming-audio/Flask/live-transcription-flask/main.py deleted file mode 100644 index 74cd3775..00000000 --- a/sample-projects/streaming-audio/Flask/live-transcription-flask/main.py +++ /dev/null @@ -1,62 +0,0 @@ -from flask import Flask, render_template -from deepgram import Deepgram -from dotenv import load_dotenv -import os -import asyncio -from aiohttp import web -from aiohttp_wsgi import WSGIHandler - -from typing import Dict, Callable - - -load_dotenv() - -app = Flask('aioflask') - -dg_client = Deepgram(os.getenv('DEEPGRAM_API_KEY')) - -async def process_audio(fast_socket: web.WebSocketResponse): - async def get_transcript(data: Dict) -> None: - if 'channel' in data: - transcript = data['channel']['alternatives'][0]['transcript'] - - if transcript: - await fast_socket.send_str(transcript) - - deepgram_socket = await connect_to_deepgram(get_transcript) - - return deepgram_socket - -async def connect_to_deepgram(transcript_received_handler: Callable[[Dict], None]) -> str: - try: - socket = await dg_client.transcription.live({'punctuate': True, 'interim_results': False}) - socket.registerHandler(socket.event.CLOSE, lambda c: print(f'Connection closed with code {c}.')) - socket.registerHandler(socket.event.TRANSCRIPT_RECEIVED, transcript_received_handler) - - return socket - except Exception as e: - raise Exception(f'Could not open socket: {e}') - -@app.route('/') -def index(): - return render_template('index.html') - -async def socket(request): - ws = web.WebSocketResponse() - await ws.prepare(request) - - deepgram_socket = await process_audio(ws) - - while True: - data = await ws.receive_bytes() - deepgram_socket.send(data) - - - -if __name__ == "__main__": - loop = asyncio.get_event_loop() - aio_app = web.Application() - wsgi = WSGIHandler(app) - aio_app.router.add_route('*', '/{path_info: *}', wsgi.handle_request) - aio_app.router.add_route('GET', '/listen', socket) - web.run_app(aio_app, port=5555) \ No newline at end of file diff --git a/sample-projects/streaming-audio/Flask/live-transcription-flask/requirements.txt b/sample-projects/streaming-audio/Flask/live-transcription-flask/requirements.txt deleted file mode 100644 index d5ad131e..00000000 --- a/sample-projects/streaming-audio/Flask/live-transcription-flask/requirements.txt +++ 
/dev/null @@ -1,20 +0,0 @@ -aiohttp==3.8.5 -aiohttp-wsgi==0.10.0 -aiosignal==1.2.0 -asgiref==3.5.0 -async-timeout==4.0.2 -attrs==21.4.0 -charset-normalizer==2.0.12 -click==8.0.4 -deepgram-sdk==0.2.4 -Flask==2.2.5 -frozenlist==1.3.0 -idna==3.3 -itsdangerous==2.1.0 -Jinja2==3.0.3 -MarkupSafe==2.1.0 -multidict==6.0.2 -python-dotenv==0.19.2 -websockets==10.2 -Werkzeug==2.0.3 -yarl==1.7.2 diff --git a/sample-projects/streaming-audio/Flask/live-transcription-flask/templates/index.html b/sample-projects/streaming-audio/Flask/live-transcription-flask/templates/index.html deleted file mode 100644 index 1b000045..00000000 --- a/sample-projects/streaming-audio/Flask/live-transcription-flask/templates/index.html +++ /dev/null @@ -1,50 +0,0 @@ - - - - Live Transcription - - -

[deleted template body, markup not preserved in this view: a page with the heading "Transcribe Audio With Flask 2.0", a "Connection status will go here" status element, and the browser script that captures microphone audio and streams it to the app's websocket for live transcription]
- - - - - diff --git a/sample-projects/streaming-audio/Quart/live-transcription-quart/README.md b/sample-projects/streaming-audio/Quart/live-transcription-quart/README.md deleted file mode 100644 index 47475358..00000000 --- a/sample-projects/streaming-audio/Quart/live-transcription-quart/README.md +++ /dev/null @@ -1,33 +0,0 @@ -# Live Transcription With Python and Quart - -To run this project create a virtual environment by running the below commands. You can learn more about setting up a virtual environment in this [article](https://developers.deepgram.com/blog/2022/02/python-virtual-environments/). - -``` -mkdir [% NAME_OF_YOUR_DIRECTORY %] -cd [% NAME_OF_YOUR_DIRECTORY %] -python3 -m venv venv -source venv/bin/activate -``` - -Make sure your virtual environment is activated and install the dependencies in the requirements.txt file inside. - -``` -pip install -r requirements.txt -``` - -Make sure you're in the directory with the `main.py` file and export your application: - -``` -export QUART_APP=main:app -``` - -Now run the project in the development server. - -``` -python main.py -``` - -Pull up a browser and go to your localhost, http://127.0.0.1:3000/. - -Allow access to your microphone and start speaking. A transcript of your audio will appear in the browser. - diff --git a/sample-projects/streaming-audio/Quart/live-transcription-quart/main.py b/sample-projects/streaming-audio/Quart/live-transcription-quart/main.py deleted file mode 100644 index 4038b016..00000000 --- a/sample-projects/streaming-audio/Quart/live-transcription-quart/main.py +++ /dev/null @@ -1,57 +0,0 @@ -from quart import Quart, render_template, websocket -from deepgram import Deepgram -from dotenv import load_dotenv -from typing import Dict, Callable - -import os - -load_dotenv() - -app = Quart(__name__) - -dg_client = Deepgram(os.getenv('DEEPGRAM_API_KEY')) - -async def process_audio(fast_socket): - async def get_transcript(data: Dict) -> None: - if 'channel' in data: - transcript = data['channel']['alternatives'][0]['transcript'] - - if transcript: - await fast_socket.send(transcript) - - deepgram_socket = await connect_to_deepgram(get_transcript) - - return deepgram_socket - -async def connect_to_deepgram(transcript_received_handler: Callable[[Dict], None]) -> str: - try: - socket = await dg_client.transcription.live({'punctuate': True, 'interim_results': False}) - socket.registerHandler(socket.event.CLOSE, lambda c: print(f'Connection closed with code {c}.')) - socket.registerHandler(socket.event.TRANSCRIPT_RECEIVED, transcript_received_handler) - - return socket - except Exception as e: - raise Exception(f'Could not open socket: {e}') - -@app.route('/') -async def index(): - return await render_template('index.html') - -@app.websocket('/listen') -async def websocket_endpoint(): - - try: - deepgram_socket = await process_audio(websocket) - - while True: - data = await websocket.receive() - deepgram_socket.send(data) - except Exception as e: - raise Exception(f'Could not process audio: {e}') - finally: - websocket.close(1000) - - - -if __name__ == "__main__": - app.run('localhost', port=3000, debug=True) \ No newline at end of file diff --git a/sample-projects/streaming-audio/Quart/live-transcription-quart/requirements.txt b/sample-projects/streaming-audio/Quart/live-transcription-quart/requirements.txt deleted file mode 100644 index 1a4b86c6..00000000 --- a/sample-projects/streaming-audio/Quart/live-transcription-quart/requirements.txt +++ /dev/null @@ -1,28 +0,0 @@ -aiofiles==0.8.0 -aiohttp==3.8.5 
-aiosignal==1.2.0 -async-timeout==4.0.2 -attrs==21.4.0 -blinker==1.4 -charset-normalizer==2.0.12 -click==8.0.4 -deepgram-sdk==0.2.4 -frozenlist==1.3.0 -h11==0.13.0 -h2==4.1.0 -hpack==4.0.0 -hypercorn==0.13.2 -hyperframe==6.0.1 -idna==3.3 -itsdangerous==2.1.0 -Jinja2==3.0.3 -MarkupSafe==2.1.0 -multidict==6.0.2 -priority==2.0.0 -python-dotenv==0.19.2 -quart==0.16.3 -toml==0.10.2 -websockets==10.2 -Werkzeug==2.0.3 -wsproto==1.1.0 -yarl==1.7.2 diff --git a/sample-projects/streaming-audio/Quart/live-transcription-quart/templates/index.html b/sample-projects/streaming-audio/Quart/live-transcription-quart/templates/index.html deleted file mode 100644 index d2b5d47e..00000000 --- a/sample-projects/streaming-audio/Quart/live-transcription-quart/templates/index.html +++ /dev/null @@ -1,51 +0,0 @@ - - - - Live Transcription - - -

[deleted template body, markup not preserved in this view: a page with the heading "Transcribe Audio With Quart", a "Connection status will go here" status element, and the browser script that captures microphone audio and streams it to the app's websocket for live transcription]