AI-powered navigation in spoken word audio content for humans and machines
Podcasts and other long-form audio formats are more popular than ever, but their content remains less accessible than other media on the web: established search engines index titles and descriptions, but not the spoken word itself. You may remember an interesting segment from an episode or talk, but it is cumbersome to retrieve it again or refer others to it.
Thanks to recent progress in AI, navigating speech audio content could become much easier: automatic transcription, speaker recognition and diarization, and detection of structure and topics are all functions enabled by state-of-the-art machine learning models.
The audiomariner project aims to provide an open source toolkit of machine learning solutions for transcribing and indexing spoken-word audio files. Going beyond transcription, it focuses on extracting rich metadata that maps the content and form of podcast episodes, interviews, talks, voice memos and similar material. This is the basis for advanced search features over audio content, which can power a variety of end-user applications: Imagine an audio player with a search field for your queries. Imagine a chatbot that lets you ask questions about a long talk. Imagine a true podcast search and recommendation engine that knows large volumes of episodes, points you to the most relevant segments, or suggests new content based on the topic you are listening to.
```python
audio = Audio("../data/audio/ozymandias.mp3")
audio.process()
result_segments = audio.search(query="mighty", top=3)
top_segment = result_segments[0]
print(top_segment.transcript)
```

Output:

```
And on the pedestal these words appear:
"My name is Ozymandias, King of Kings:
Look on my works, ye Mighty, and despair!"
```
- R&D: Evaluation and integration of transcription / speech-to-text models, with a focus on local, open models => select the best models, trading off transcript quality against cost and scalability
- R&D: Evaluation and integration of AI methods for content analysis (segmentation by speakers, Q&A segments, chapters...) => rich metadata extraction
- R&D: Evaluation and integration of relevant search engine technology for indexing and accessing audio content => query engine over the content
- User Research & Requirements Discovery: connect with potential application developers and gather requirements
- Development: Creation of an API exposing the verified functionality to application developers
- Demo Applications: Development and deployment of a demo application with a search interface and audio player => showcase the audio navigation capabilities of the library
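To illustrate the direction of the metadata-extraction and search items above, here is a minimal sketch of how segment-level metadata and ranking could fit together. The `Segment` dataclass and `search` function are hypothetical illustrations, not the library's actual API, and the scoring is simple term overlap standing in for a real search engine:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    """Hypothetical per-segment metadata: timing, speaker, and transcript."""
    start: float       # segment start time in seconds
    end: float         # segment end time in seconds
    speaker: str       # speaker label from diarization
    transcript: str    # transcribed text of the segment

def search(segments: list[Segment], query: str, top: int = 3) -> list[Segment]:
    """Rank segments by bag-of-words overlap with the query.

    A real implementation would use a proper index (e.g. full-text or
    vector search); this keeps the example self-contained.
    """
    query_terms = set(query.lower().split())

    def score(seg: Segment) -> int:
        return len(query_terms & set(seg.transcript.lower().split()))

    ranked = sorted(segments, key=score, reverse=True)
    return [seg for seg in ranked if score(seg) > 0][:top]
```

A query like `search(segments, "mighty")` would then return the transcribed segments mentioning that term, each carrying the timestamps needed to jump straight to the spot in the audio.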
See the open issues for a full list of proposed features (and known issues).
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!
- Fork the Project
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the Branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
Distributed under the MIT License. See LICENSE.txt for more information.