LLM + Bookstack

This project is a Flask-based web application that uses an AI model (ChatGPT or Ollama) to generate structured, Markdown-formatted knowledge base content, which you can review, edit, and push directly into your BookStack instance.

✨ Features

  • 🔍 Input a topic and select an AI model provider (ChatGPT or Ollama)
  • ⚙️ Streamed AI-powered content generation (uses OpenAI GPT-4 or Ollama Llama 3.2/Code LLaMA)
  • 📝 Editable chapters and pages with live Markdown preview
  • 📂 Drag-and-drop reordering of sections and content
  • 📤 Push final content to your BookStack documentation portal
  • 🌐 Built using Flask + vanilla JS (no heavy frontend frameworks)

📸 Screenshot

(Screenshots: the application UI and the resulting BookStack output.)

🚀 Requirements

  • Python 3.8+
  • Flask
  • OpenAI account and API key (for ChatGPT usage)
  • Ollama installed and running locally (for local LLM usage)
  • A running instance of BookStack

🧰 Dependencies

Install Python dependencies:

```shell
pip install -r requirements.txt
```

Create a .env file in the root directory of the project with the following environment variables:

```
OPENAI_API_KEY=your-openai-api-key
OLLAMA_URL=http://localhost:11434
BOOKSTACK_URL=http://your-bookstack-url
BOOKSTACK_TOKEN_ID=your-token-id
BOOKSTACK_TOKEN_SECRET=your-token-secret
```
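To illustrate how these values reach the app, here is a minimal, stdlib-only sketch of parsing a .env file into the process environment (the app itself would more typically use a library such as python-dotenv; the function name here is illustrative):

```python
import os

def load_env(path=".env"):
    """Parse simple KEY=value lines from a .env file into os.environ.

    Blank lines and '#' comments are skipped. Stdlib-only illustration
    of what a dotenv loader does under the hood.
    """
    env = {}
    with open(path) as fh:
        for raw in fh:
            line = raw.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")  # split on the first '=' only
            env[key.strip()] = value.strip()
    os.environ.update(env)
    return env
```

After loading, the app can read each setting with `os.getenv("BOOKSTACK_URL")` and friends.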

📚 Installing BookStack

Full install instructions can be found here: 👉 https://www.bookstackapp.com/docs/admin/installation/

We recommend using the Docker installation method if you're not familiar with Laravel/PHP-based setups.
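Once BookStack is running and you have created an API token (Settings → API Tokens), pushing a generated page comes down to one authenticated POST against BookStack's REST API. A hedged, stdlib-only sketch of building that request (the `/api/pages` endpoint and `Token id:secret` auth scheme follow BookStack's API documentation; the function name and arguments are assumptions for illustration):

```python
import json
import urllib.request

def build_page_request(base_url, token_id, token_secret, book_id, title, markdown):
    """Build an authenticated POST that creates a page in a BookStack book."""
    payload = json.dumps(
        {"book_id": book_id, "name": title, "markdown": markdown}
    ).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/api/pages",
        data=payload,
        headers={
            # BookStack API tokens use the "Token <id>:<secret>" scheme
            "Authorization": f"Token {token_id}:{token_secret}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending it would be: urllib.request.urlopen(build_page_request(...))
```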

🛠️ Running the App

```shell
docker compose up --build
```

Then visit http://localhost:5000 in your browser.

💡 Example Use Cases

  • Technical documentation for software products
  • Internal knowledge base generation for teams
  • Automatically drafting wiki content
  • Educational or training material

🧠 AI Model Behavior

  • Uses llama3.2 by default when using Ollama.
  • Automatically switches to Code LLaMA for topics that include "code" or "example".
  • With ChatGPT, uses GPT-4 with custom system prompts to generate clean, helpful Markdown.
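The Ollama routing rule above can be sketched in a few lines (a simplified illustration, not the project's actual code; the function name, keyword check, and `codellama` model tag are assumptions):

```python
def pick_ollama_model(topic: str) -> str:
    """Route code-flavoured topics to Code LLaMA, everything else to llama3.2."""
    topic_lower = topic.lower()
    if "code" in topic_lower or "example" in topic_lower:
        return "codellama"
    return "llama3.2"
```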

🧪 Development Tips

  • Streamed responses use Server-Sent Events (SSE) for live updates in the UI.
  • Markdown rendering is done with marked.js.
  • Compression and base64 encoding are used to safely transmit large generated content blocks.
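The compress-then-base64 transport idea can be sketched with the standard library alone (an illustration of the technique, not the project's exact implementation; function names are assumptions):

```python
import base64
import zlib

def encode_block(markdown: str) -> str:
    """Compress a content block, then base64-encode it so the result is
    plain ASCII and safe to embed in an SSE or JSON payload."""
    return base64.b64encode(zlib.compress(markdown.encode("utf-8"))).decode("ascii")

def decode_block(payload: str) -> str:
    """Reverse of encode_block: base64-decode, then decompress."""
    return zlib.decompress(base64.b64decode(payload)).decode("utf-8")
```

For large, repetitive Markdown the compression typically more than offsets base64's ~33% size overhead.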

🤝 Contributing

Pull requests are welcome! If you have improvements, ideas, or find a bug — feel free to open an issue or submit a PR.

📄 License

MIT

Built with ❤️ by Josh Peart and ChatGPT.
Inspired by the dream of effortless, structured documentation.
