🎬🔮 Mediachain

LangChain for creating audiovisual experiences


What is Mediachain?

Mediachain is an AI toolkit for creating audiovisual experiences.

The goal is simple: give developers the tools to produce great audiovisual experiences.


Showcase

Story

(demo video: minecraft.mp4)

Reddit Stories

(demo video: AI.video.mp4)

Why does Mediachain exist?

A few months ago, I started working on TurboReel, an automation tool for generating short videos 100x faster. It was built with MoviePy and OpenAI. While MoviePy is great for basic tasks, I found it limiting for more complex ones. Plus, I relied too heavily on OpenAI, which made it tricky to keep improving the project.

We ended up using Revideo for the video processing tasks.

That made me realize that AI tools should be decoupled from the video engine (MoviePy, Revideo, Remotion, etc.) and the AI service (GPT, ElevenLabs, DALL·E, Runway, Sora, etc.) you choose, so you can easily switch to the best option available.

Also, there is no hub for audiovisual-generation knowledge, so this is my attempt to create one.


Technologies

Special shoutout to Pollinations for their free image generation API.

Vision

Mediachain is designed to be the LangChain for audiovisual creation, a centralized toolkit and knowledge hub for the field.

  • Image and video generation is just the start.
  • Emerging features like video embeddings (which can capture the context of a video) are next, along with powerful video-generation models.

Our mission is to push boundaries and make audiovisual generation accessible for everyone at a fraction of the cost of current solutions.


Roadmap

Here’s what’s planned for Mediachain:

  • Add the Revideo engine to the examples folder.
  • Introduce new features like image animation, image editing, voice cloning, and AI avatars.
  • Support more video generation services and models.
  • Create useful templates using Mediachain.
  • Publish the package on PyPI.
  • Write detailed documentation.
  • Develop a beginner-friendly guide to audiovisual generation.

How to Get Started

The project is organized into the following folders:

  • core: Core functionality of MediaChain. See the core README for more information.
  • examples: Examples showing how to use MediaChain with tools like MoviePy, Revideo, and Remotion. See the examples README for more information.

Running Your First Example

To test MediaChain, start with the Reddit Stories example. This template creates a video from Reddit posts.

  1. Make sure you have Python: Grab Python 3.10.x from python.org.

  2. Install FFmpeg and ImageMagick:

    • For Windows: Download the binaries from FFmpeg and ImageMagick. Just add them to your system's PATH.
    • For macOS: Use Homebrew (if you haven’t tried it yet, now’s the time!):
    brew install ffmpeg imagemagick
    • For Linux: Use your distribution's package manager; on Debian/Ubuntu:
    sudo apt-get install ffmpeg imagemagick
  3. Get the Required Python Packages:

    pip install -r requirements.txt
  4. Add your OpenAI API key: Put it in the .env file as OPENAI_API_KEY.

  5. Edit the example's variables:

    • Go to examples/moviepy_engine/reddit_stories/main_moviepy.py.
    • Change the prompt variable to whatever you want.
    • Change the video_url variable to the video you want to use as background.
  6. Run the example:

    python3 examples/moviepy_engine/reddit_stories/main_moviepy.py
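Before running the example, it can help to confirm the prerequisites from the steps above are in place. This is a minimal sketch, not part of Mediachain itself; it only checks what the steps require (Python 3.10.x, FFmpeg and ImageMagick on PATH, and the OPENAI_API_KEY variable).

```python
# Sketch: sanity-check the setup described in the steps above.
# Nothing here touches Mediachain's API; it only inspects the environment.
import os
import shutil
import sys


def check_environment() -> list[str]:
    """Return a list of problems found; an empty list means all checks passed."""
    problems = []
    if sys.version_info[:2] != (3, 10):
        problems.append(f"Python 3.10.x expected, found {sys.version.split()[0]}")
    for tool in ("ffmpeg", "convert"):  # `convert` is installed by ImageMagick
        if shutil.which(tool) is None:
            problems.append(f"{tool} not found on PATH")
    if not os.environ.get("OPENAI_API_KEY"):
        problems.append("OPENAI_API_KEY is not set (check your .env file)")
    return problems


if __name__ == "__main__":
    for problem in check_environment():
        print("WARNING:", problem)
```

If the script prints nothing, the example should have everything it needs.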

Community

Feel free to contribute, ask questions, or share your ideas!

Discord: https://discord.gg/bby6DYsCPu

Made with ❤️ by @TacosyHorchata