Simplifying audio content through transcription and summarization using Meta's Llama 2 and OpenAI's Whisper.


BriefComm

Keep the comm. brief.

Approach

BriefComm summarizes audio content through a multi-step pipeline. Raw text, audio, or video files are accepted as input. Audio and video are transcribed into text with OpenAI's Whisper model. The transcribed text can then optionally be translated into other languages using a translation service. Next, the processed text is fed to Meta's Llama 2, fine-tuned for summarization tasks, which generates a concise and coherent summary. Finally, the summary is displayed on a web page, letting users quickly access the key insights extracted from the original audio. This pipeline streamlines audio summarization for a wide range of applications and use cases.
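In code, the pipeline looks roughly like the sketch below. This is a minimal illustration rather than the repository's actual implementation: it assumes the openai-whisper package for transcription and a Llama 2 chat model loaded through Hugging Face transformers (the model name, prompt format, and generation settings are all assumptions, and the optional translation step is omitted).

import whisper
from transformers import pipeline

def summarize_audio(audio_path: str) -> str:
    # Step 1: transcribe the audio file to text with Whisper.
    stt_model = whisper.load_model("base")
    transcript = stt_model.transcribe(audio_path)["text"]

    # Step 2: prompt a Llama 2 chat model to summarize the transcript.
    # meta-llama/Llama-2-7b-chat-hf is gated; request access on Hugging Face first.
    summarizer = pipeline("text-generation", model="meta-llama/Llama-2-7b-chat-hf")
    prompt = f"Summarize the following transcript concisely:\n\n{transcript}\n\nSummary:"
    output = summarizer(prompt, max_new_tokens=200, do_sample=False)

    # The generated text echoes the prompt; return only the completion.
    return output[0]["generated_text"][len(prompt):].strip()

if __name__ == "__main__":
    print(summarize_audio("meeting.mp3"))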

How to use BriefComm?

Visit the Hugging Face Space, or run locally by first cloning the repository:

git clone https://github.com/iiakshat/BriefComm.git

and then installing the dependencies:

pip install -r requirements.txt
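
Finally, launch the app. The entry-point filename below is an assumption (apps hosted on Hugging Face Spaces conventionally use app.py), so check the repository root for the actual script:

python app.py  # assumed entry point; adjust to the repo's actual script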
  • Once you run the above commands, you should see the input interface.

  • Give your input and click Submit.

  • (Optional) Enter additional details, choose an output language, and enter an email address (the email feature does not currently work).

  • Hit Submit and you should see the summarized output.
