Automated README file generator, powered by large language model APIs



πŸ“ Overview


Readme-ai is a developer tool that auto-generates README files using a combination of data extraction and generative AI. Simply provide a repository URL or local path to your codebase, and a well-structured, detailed README file will be generated for you.


It streamlines documentation creation and maintenance, enhancing developer productivity. This project aims to enable developers of all skill levels, across all domains, to better understand, use, and contribute to open-source software.


Readme-ai is currently under development with an opinionated configuration and setup. It is vital to review all generated text from the LLM API to ensure it accurately represents your project.

πŸ‘Ύ Demo

Standard CLI Usage:

Offline Mode Demonstration:


Offline mode is useful for generating a boilerplate README at no cost. View the offline example here!

🧩 Features

Flexible README Generation

Readme-ai uses a balanced approach to building README files, combining data extraction and generative AI to create comprehensive and informative documentation.

  • Data Extraction & Analysis: File parsers and analyzers are used to extract project metadata, dependencies, and other relevant details. This data is used to both populate many sections of the README, as well as provide context to the LLM API.
  • Generative Content: For more abstract or creative sections, readme-ai uses LLM APIs to generate content that is both informative and engaging. This includes sections such as a project slogan, overview, features table, and file summaries.
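The two phases above can be sketched in Python. This is a simplified illustration only; the function names (`extract_metadata`, `build_overview_prompt`) and the extension map are hypothetical, not readme-ai's actual API:

```python
from pathlib import Path

# Phase 1: deterministic extraction; Phase 2: fold results into an LLM prompt.
EXTENSION_LANGUAGES = {".py": "Python", ".rs": "Rust", ".go": "Go", ".ts": "TypeScript"}

def extract_metadata(file_names):
    """Phase 1: derive languages and dependency files from a file listing."""
    languages = sorted({EXTENSION_LANGUAGES[Path(f).suffix]
                        for f in file_names if Path(f).suffix in EXTENSION_LANGUAGES})
    dependency_files = [f for f in file_names
                        if Path(f).name in {"requirements.txt", "pyproject.toml", "go.mod"}]
    return {"languages": languages, "dependency_files": dependency_files}

def build_overview_prompt(metadata):
    """Phase 2: pass the extracted context to the LLM API as prompt material."""
    return ("Write a README overview for a project using "
            f"{', '.join(metadata['languages'])}; dependencies are declared in "
            f"{', '.join(metadata['dependency_files'])}.")

meta = extract_metadata(["src/main.py", "requirements.txt", "cli.py"])
print(build_overview_prompt(meta))
```

The point of the split is that deterministic sections (badges, tree, install commands) never need an API call, while creative sections (slogan, overview) get grounded context instead of a bare repository URL.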

CLI Customization

Over a dozen CLI options are available to customize the README generation process:

  • LLM Options: Run the tool with OpenAI, Ollama, Google Gemini, or in offline mode.
  • Offline Mode: Generate a README without making API calls. Readme-ai is still able to populate a significant portion of the README using metadata collected during preprocessing.
  • Project Badges: Choose from an array of badge styles, colors, and alignments.
  • Project Logo: Select from the default set, upload your own, or let the LLM give it a try!

A few examples of the CLI options in action:

  • Default output (no options provided to the CLI)
  • --alignment left --badge-style flat-square --image cloud
  • --alignment left --badge-style flat --image gradient
  • --badge-style flat --image custom
  • --badge-style skills-light --image grey
  • --badge-style flat-square
  • --badge-style flat --image black

See the Configuration section for a complete list of CLI options.

πŸ‘‹ Overview

    - High-level introduction of the project, focused on the value proposition and use-cases, rather than technical aspects.
🧩 Features
Features Table

    - Generated markdown table that highlights the key technical features and components of the codebase. This table is generated using a structured prompt template.

πŸ“„ Codebase Documentation
Repository Structure

    - Directory tree structure is generated using pure Python and embedded in the README.
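A minimal tree builder of this kind can be written with `pathlib` alone. This is a sketch, not readme-ai's actual implementation; the `max_depth` parameter mirrors the `--tree-depth` CLI option described later:

```python
from pathlib import Path

def build_tree(root, max_depth=2, prefix="", depth=0):
    """Render a directory tree as text, stopping at max_depth levels."""
    if depth >= max_depth:
        return ""
    lines = ""
    entries = sorted(Path(root).iterdir(), key=lambda p: p.name)
    for i, entry in enumerate(entries):
        last = i == len(entries) - 1
        connector = "└── " if last else "β”œβ”€β”€ "
        lines += f"{prefix}{connector}{entry.name}\n"
        if entry.is_dir():
            # extend the prefix so child rows line up under their parent
            lines += build_tree(entry, max_depth,
                                prefix + ("    " if last else "β”‚   "), depth + 1)
    return lines

if __name__ == "__main__":
    print(build_tree(".", max_depth=2))
```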

File Summaries

    - Summarizes key files in the codebase; these summaries are also used as context for additional prompts!

πŸš€ Quickstart Commands
Getting Started

    - Auto-generated setup guides based on language and dependency analysis.
    - Install, Usage, and Test guides are supported for many languages.
    - The parsers module is a collection of tool-specific parsers that extract dependencies and metadata.
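As an illustration of what a tool-specific parser does, here is a minimal `requirements.txt` parser. It is hypothetical and deliberately simplified; readme-ai's own parsers module handles many more formats and edge cases:

```python
import re

def parse_requirements(text):
    """Pull distribution names from requirements.txt-style content,
    dropping comments, blank lines, version specifiers, and extras."""
    packages = []
    for raw in text.splitlines():
        line = raw.split("#", 1)[0].strip()  # strip inline comments
        if not line:
            continue
        # keep only the name: cut at whitespace, extras, or version operators
        name = re.split(r"[\s\[<>=!~;]", line, maxsplit=1)[0]
        packages.append(name)
    return packages

print(parse_requirements("requests>=2.0\n# dev only\nclick==8.1.7\npandas[all]\n"))
# β†’ ['requests', 'click', 'pandas']
```

Each detected package name can then feed both the tech-stack badge set and the install/usage guides.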

πŸ”° Contributing Guidelines
Contributing Guide

    - Dropdown section that outlines the general process for contributing to your project.
    - Provides links to your contributing guidelines, issues page, and more resources.
    - A graph of contributors is also included.

Additional Sections

    - Project Roadmap, Contributing Guidelines, License, and Acknowledgements are included by default.

🎨 Templates (wip)
README Template for ML & Data

    - Themed templates tailored to AI, web, and data science projects.
    - Sections targeted to the programming domain.
    - Framework for consistent, comprehensive READMEs.

πŸ—‚οΈ Examples

| Input Repository | Input Contents |
|------------------|----------------|
| readme-ai | Python |
| readme-ai | Python |
| chatgpt-app-react-ts | TypeScript, React |
| postgres-proxy-server | Postgres, DuckDB |
| | Kotlin, Android |
| readme-ai-streamlit | Python, Streamlit |
| rust-c-app | C, Rust |
| go-docker-app | Go |
| java-minimal-todo | Java |
| async-ml-inference | FastAPI, Redis |
| mlops-course | Python, Jupyter |
| Local Directory | Flink, Python |

πŸš€ Getting Started

System Requirements:

  • Python 3.9+
  • Package manager/Container: pip, pipx, docker
  • LLM service: OpenAI, Ollama, Google Gemini, Offline Mode

Repository URL or Local Path:

Make sure to have a repository URL or local directory path ready for the CLI.

Choosing an LLM Service:

  • OpenAI: Recommended, requires an account setup and API key.
  • Ollama: Free and open-source, potentially slower and more resource-intensive.
  • Google Gemini: Requires a Google Cloud account and API key.
  • Offline Mode: Generates a boilerplate README without making API calls.

βš™οΈ Installation

Using pip


pip install readmeai



Use pipx to install and run Python command-line applications without causing dependency conflicts with other packages!

Using docker


docker pull zeroxeli/readme-ai:latest

Using conda


conda install -c conda-forge readmeai

From source

Clone and Install

Clone repository and change directory.

$ git clone <repository-url>
$ cd readme-ai

Using bash


$ bash setup/

Using poetry


$ poetry install
  • Similarly, you can use pipenv or pip to install dependencies from the requirements.txt file.

πŸ€– Usage

Environment Variables

Using OpenAI

Set your OpenAI API key as an environment variable.

# Using Linux or macOS
$ export OPENAI_API_KEY=<your_api_key>

# Using Windows
$ set OPENAI_API_KEY=<your_api_key>
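As a quick sanity check before running the CLI, a short script can confirm that the exported key is visible to the process. This is a sketch; `require_api_key` is hypothetical, not part of readme-ai:

```python
import os

def require_api_key(name="OPENAI_API_KEY"):
    """Fail fast if the environment variable exported above is not set."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"{name} is not set; export it before running the CLI")
    return value
```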

Using Ollama

Set Ollama local host as an environment variable.

$ export OLLAMA_HOST=
$ ollama pull mistral:latest    # llama2, etc.
$ ollama serve                  # run if not using the Ollama desktop app

For more details, check out the Ollama repository.

Using Google Gemini

Set your Google API key as an environment variable.

$ export GOOGLE_API_KEY=<your_api_key>

Run the CLI

Using pip


# Using OpenAI API
readmeai --repository <your-repository-url> --api openai

# Using Ollama local model
readmeai --repository <your-repository-url> --api ollama --model mistral

Using docker


docker run -it \
  -v "$(pwd)":/app zeroxeli/readme-ai:latest
Using streamlit

Streamlit App

Try directly in your browser on Streamlit, no installation required! For more details, check out the readme-ai-streamlit repository.

From source


Using bash


$ conda activate readmeai
$ python3 -m readmeai.cli.main -r <your-repository-url>

Using poetry


$ poetry shell
$ poetry run python3 -m readmeai.cli.main -r <your-repository-url>

πŸ§ͺ Tests

Using pytest


$ make pytest

Using nox

$ nox -f noxfile.py


Use nox to test application against multiple Python environments and dependencies!

πŸ“¦ Configuration

Customize the README file using the CLI options below.

| Option | Type | Description | Default Value |
|--------|------|-------------|---------------|
| --alignment, -a | String | Align the text in the README header. | center |
| --api | String | LLM API service to use for text generation. | offline |
| --badge-color | String | Badge color name or hex code. | 0080ff |
| --badge-style | String | Badge icon style type. | see below |
| --base-url | String | Base URL for the repository. | v1/chat/completions |
| --context-window | Integer | Maximum context window of the LLM API. | 3999 |
| --emojis, -e | Boolean | Adds emojis to the README header sections. | False |
| --image, -i | String | Project logo image displayed in the README header. | blue |
| 🚧 --language | String | Language for generating the README. | en |
| --model, -m | String | LLM model to use for text generation. | gpt-3.5-turbo |
| --output, -o | String | Output file name for the README. | |
| --rate-limit | Integer | Maximum number of API requests per minute. | 5 |
| --repository, -r | String | Repository URL or local directory path. | None |
| --temperature, -t | Float | Sets the creativity level for content generation. | 0.9 |
| 🚧 --template | String | README template style. | default |
| --top-p | Float | Sets the probability for top-p sampling. | 0.9 |
| --tree-depth | Integer | Maximum depth of the directory tree structure. | 2 |
| --help | | Displays help information about the command and its options. | |

🚧 feature under development

Badge Customization

The --badge-style option lets you select the style of the default badge set.

| Style | Preview |
|-------|---------|
| skills | Python Skill Icon |
| skills-light | Python Skill Light Icon |

When providing the --badge-style option, readme-ai does two things:

  1. Formats the default badge set to match the selection (flat, flat-square, etc.).
  2. Generates an additional badge set representing your project's dependencies and tech stack (Python, Docker, etc.).
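The dependency badge set boils down to assembling shields.io static-badge URLs, one per detected technology. A sketch of how such a URL could be built, reusing the CLI defaults from the table above (`badge_url` is a hypothetical helper, not readme-ai's code):

```python
from urllib.parse import quote

def badge_url(label, style="flat-square", color="0080ff", logo=None):
    """Assemble a shields.io static badge URL for one dependency."""
    url = f"https://img.shields.io/badge/{quote(label)}-{color}?style={style}"
    if logo:
        url += f"&logo={quote(logo)}"  # shields.io icon slug, e.g. 'python'
    return url

print(badge_url("Python", style="flat-square", logo="python"))
```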


$ readmeai --badge-style flat-square --repository <your-repository-url>


{... project logo ...}

{... project name ...}

{...project slogan...}

Developed with the software and tools below.


{... end of header ...}

Project Logo

Select a project logo using the --image option.

blue gradient black
cloud purple grey

For custom images, see the following options:

  • Use --image custom to invoke a prompt to upload a local image file path or URL.
  • Use --image llm to generate a project logo using an LLM API (OpenAI only).

πŸ”­ Roadmap

  • Add new CLI options to enhance README file customization.
    • --api: integrate a single interface for all LLM APIs (OpenAI, Ollama, Gemini, etc.).
    • --audit: review existing README files and suggest improvements.
    • --template: select a README template style (ai, data, web, etc.).
    • --language: generate README files in any language (zh-CN, ES, FR, JA, KO, RU, etc.).
  • Develop a robust documentation generator to build full project docs (e.g. Sphinx, MkDocs).
  • Create community-driven templates for README files and a gallery of readme-ai examples.
  • Add a GitHub Actions script to automatically update README content on repository push.

πŸ“’ Changelog


πŸ§‘β€πŸ’» Contributing

To grow the project, we need your help! See the links below to get started.

πŸŽ— License


πŸ‘Š Acknowledgments