ShortsMaker is a Python package designed to facilitate the creation of engaging short videos or social media clips. It leverages a variety of external services and libraries to streamline the process of generating, processing, and uploading short content.
Like what I do? Please consider supporting me.
- Automated Content Creation: Easily generate engaging short videos.
- External Service Integration: Seamlessly integrates with services like Discord for notifications.
- GPU-Accelerated Processing: Optional GPU support for faster processing using whisperx.
- Modular Design: Built with extensibility in mind.
- In Development: AskLLM AI agent, integrated for generating metadata and creative insights.
- In Development: GenerateImage class for text-to-image generation using Flux. May be resource intensive.
- Python: 3.12.8
- Package Manager: `uv` is used for package management. (It's amazing, try it out!)
- Operating System: Windows, Mac, or Linux (ensure external dependencies are installed for your platform)
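If you are unsure which interpreter your environment is using, here is a minimal sketch (not part of ShortsMaker) to confirm it matches the required version:

```python
# Quick check that the active interpreter is the required Python 3.12.x.
import sys

assert sys.version_info[:2] == (3, 12), f"Expected Python 3.12, found {sys.version.split()[0]}"
print("Python version OK:", sys.version.split()[0])
```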
To use ShortsMaker via Docker, follow these steps:
- Build the Docker Image:
  Build the Docker image using the provided Dockerfile.
  ```bash
  docker build -t shorts_maker -f Dockerfile .
  ```
- Run the Docker Container:
  The first time, run the container with the necessary mounts, container name, and working directory set.
  ```bash
  docker run --name shorts_maker_container -v $pwd/assets:/shorts_maker/assets -w /shorts_maker -it shorts_maker bash
  ```
- Start the Docker Container:
  If the container was previously stopped, you can start it again using:
  ```bash
  docker start shorts_maker_container
  ```
- Access the Docker Container:
  Execute a bash shell inside the running container.
  ```bash
  docker exec -it shorts_maker_container bash
  ```
- Run Examples and Tests:
  Once you are in the bash shell of the container, you can run the example script or the tests using `uv`.
  To run the example script:
  ```bash
  uv run example.py
  ```
  To run tests:
  ```bash
  uv run pytest
  ```
  Note: If you plan to use `ask_llm` or `generate_image`, the Docker image is not recommended due to the high resource requirements of these features. Instead, run ShortsMaker directly on your host machine.
- Clone the Repository:
  ```bash
  git clone https://github.com/rajathjn/shorts_maker
  cd shorts_maker
  ```
- Install the Package Using uv:
  Note: Before starting the installation process, ensure a Python 3.12 virtual environment is set up:
  ```bash
  uv venv -p 3.12 .venv
  # or
  python -m venv .venv
  ```
  Then install the package:
  ```bash
  uv pip install -r pyproject.toml
  # or
  uv sync
  uv sync --extra cpu    # for CPU-only installs
  uv sync --extra cu124  # for CUDA 12.4 versions
  ```
- Install Any Additional Python Dependencies:
  If not automatically managed by uv, you may install them using pip (in most cases this is not needed):
  ```bash
  pip install -r requirements.txt
  ```
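Once the dependencies are installed, a minimal sketch to confirm the package resolves from the new environment (the file name `check_install.py` is just an example):

```python
# check_install.py - sanity check that the ShortsMaker package imports
# from the freshly created virtual environment.
import ShortsMaker

print("ShortsMaker imported from:", ShortsMaker.__file__)
```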
ShortsMaker relies on several external non-Python components. Please ensure the following are installed/configured on your system:
- Discord Notifications:
  - You must set your Discord webhook URL (`DISCORD_WEBHOOK_URL`) as an environment variable.
  - Refer to the Discord documentation for creating a webhook.
  - If you don't want to use Discord notifications, you can set `DISCORD_WEBHOOK_URL` to `None` or do something like:
    ```python
    import os

    os.environ["DISCORD_WEBHOOK_URL"] = "None"
    ```
- Ollama:
  - The external tool Ollama must be installed on your system. Refer to the Ollama documentation for installation details. (A quick reachability check is sketched after this list.)
- WhisperX (GPU Acceleration):
  - For GPU execution, ensure that the NVIDIA libraries are installed on your system:
    - cuBLAS: version 11.x
    - cuDNN: version 8.x
  - These libraries are required for optimal performance when using whisperx for processing. (A quick GPU sanity check is sketched after this list.)
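As referenced above, here is a minimal sketch to confirm a local Ollama server is reachable before using the LLM features. It assumes Ollama's default endpoint `http://localhost:11434`; adjust it if your installation differs.

```python
# Hypothetical reachability check for a locally running Ollama server.
# Assumes the default address; the root endpoint replies "Ollama is running".
import urllib.request

try:
    with urllib.request.urlopen("http://localhost:11434", timeout=5) as resp:
        print("Ollama responded:", resp.read().decode().strip())
except OSError as exc:
    print("Ollama does not appear to be running:", exc)
```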
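And a minimal sketch to confirm that PyTorch (installed as part of the whisperx stack) can see your GPU, CUDA, and cuDNN before running GPU-accelerated transcription. Treat it as an illustrative check, not part of the ShortsMaker API.

```python
# Prints the CUDA/cuDNN versions PyTorch was built against and whether a GPU is visible.
import torch

print("CUDA available:", torch.cuda.is_available())
print("CUDA version  :", torch.version.cuda)              # e.g. "12.4"
print("cuDNN version :", torch.backends.cudnn.version())  # e.g. 8902 for cuDNN 8.9.x
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```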
Before running ShortsMaker, make sure you set the necessary environment variables:
- DISCORD_WEBHOOK_URL: This webhook URL is required for sending notifications through Discord.
  Example (Windows Command Prompt):
  ```cmd
  set DISCORD_WEBHOOK_URL=your_discord_webhook_url_here
  ```
  Example (Linux/macOS):
  ```bash
  export DISCORD_WEBHOOK_URL=your_discord_webhook_url_here
  ```
  From Python:
  ```python
  import os

  os.environ["DISCORD_WEBHOOK_URL"] = "your_discord_webhook_url_here"
  ```
Ensure you have a `setup.yml` configuration file in the `shorts_maker` directory. Use the `example-setup.yml` as a reference.
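Before running the full pipeline, you can do a quick sanity check that the configuration loads. The key names below are the ones referenced by the usage example that follows; adjust them if your `setup.yml` differs.

```python
# Minimal sanity check for setup.yml; key names mirror the example below.
import yaml

with open("setup.yml") as f:
    cfg = yaml.safe_load(f)

for key in ("cache_dir", "reddit_post_getter", "audio"):
    assert key in cfg, f"setup.yml is missing the '{key}' section"
print("setup.yml loaded with keys:", ", ".join(cfg))
```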
Below is a basic example to get you started with ShortsMaker:
You can also refer to the same example in `example.py`.
```python
from pathlib import Path

import yaml

from ShortsMaker import MoviepyCreateVideo, ShortsMaker

setup_file = "setup.yml"

with open(setup_file) as f:
    cfg = yaml.safe_load(f)

get_post = ShortsMaker(setup_file)

# You can either provide a URL for the reddit post
get_post.get_reddit_post(
    url="https://www.reddit.com/r/Python/comments/1j36d7a/i_got_tired_of_ai_shorts_scams_so_i_built_my_own/"
)
# Or just run the method to get a random post from the subreddit defined in setup.yml
# get_post.get_reddit_post()

with open(Path(cfg["cache_dir"]) / cfg["reddit_post_getter"]["record_file_txt"]) as f:
    script = f.read()

get_post.generate_audio(
    source_txt=script,
    output_audio=f"{cfg['cache_dir']}/{cfg['audio']['output_audio_file']}",
    output_script_file=f"{cfg['cache_dir']}/{cfg['audio']['output_script_file']}",
)

get_post.generate_audio_transcript(
    source_audio_file=f"{cfg['cache_dir']}/{cfg['audio']['output_audio_file']}",
    source_text_file=f"{cfg['cache_dir']}/{cfg['audio']['output_script_file']}",
)

get_post.quit()

create_video = MoviepyCreateVideo(
    config_file=setup_file,
    speed_factor=1.0,
)

create_video(output_path="assets/output.mp4")

create_video.quit()

# Do not run the below when you are using shorts_maker within a container.
# Import AskLLM and GenerateImage from the package before uncommenting the lines below.
# ask_llm = AskLLM(config_file=setup_file)
# result = ask_llm.invoke(script)
# print(result["parsed"].title)
# print(result["parsed"].description)
# print(result["parsed"].tags)
# print(result["parsed"].thumbnail_description)
# ask_llm.quit_llm()

# You can use AskLLM to generate a text prompt for the image generation as well
# image_description = ask_llm.invoke_image_describer(script=script, input_text="A wild scenario")
# print(image_description)
# print(image_description["parsed"].description)

# GenerateImage uses a lot of resources, so beware
# generate_image = GenerateImage(config_file=setup_file)
# generate_image.use_huggingface_flux_schnell(image_description["parsed"].description, "output.png")
# generate_image.quit()
```
Example output, generated from the Reddit post used in the example above: example_video.mp4
To do:
- Explain working and usage in a blog post.
- Dockerize the project to avoid the complex setup process.
- Add an option to fetch posts from submission URLs.
- Add an example video to the README.
To set up a development environment, follow these steps:
- Set up the development environment:
  - Ensure you have Python 3.12.8 and uv installed.
  - Clone the repository and install the development dependencies.
- Run the Tests:
  - Tests are located in the `tests/` directory.
  - Run tests using:
    ```bash
    uv run --frozen pytest
    ```
If you want to contribute to the project, follow everything in the development setup above and then submit a pull request:
- Fork the repository.
- Create a new branch for your feature or bugfix.
- Commit your changes and push the branch to your fork.
- Open a pull request with a detailed description of your changes.
This project is licensed under the GNU General Public License v3.0. See the LICENSE file for details.