My personal Open WebUI setup with custom actions and tools for news aggregation, web scraping, and AI workflow automation.
This repository contains my custom extensions for Open WebUI, a self-hosted web interface for running Large Language Models (LLMs) with Ollama. The extensions include deterministic action buttons, callable tools, and system prompts that enhance the base Open WebUI functionality.
- RSS News Fetcher: Automatically pulls tech news from Kubernetes, ArgoCD, AWS, CNCF, The New Stack, and Hacker News
- Brutalist Report Scraper: Extracts curated tech headlines with real external links
- Smart Filtering: Filters news based on keywords (Kubernetes, DevOps, SRE, GitOps, Platform Engineering)
- Customizable Timeframes: Fetches articles from the last 48 hours
- Add to Memory Button: One-click action to save assistant responses to Open WebUI's memory system
- Status Indicators: Real-time feedback during memory operations
- Error Handling: Graceful error reporting with citations
- Deterministic Actions: Buttons that execute Python code without LLM involvement
- Custom Prompts: Pre-configured system instructions for specific workflows
- Event Emitters: Real-time status updates and message injection
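The keyword filtering and 48-hour window from the RSS News Fetcher can be sketched as follows (a minimal standalone sketch; the names `KEYWORDS`, `CUTOFF`, and `is_relevant` are illustrative — the actual logic lives in `actions/news.py` and may differ):

```python
from datetime import datetime, timedelta, timezone

# Keywords and timeframe from the feature list above (illustrative).
KEYWORDS = ["kubernetes", "devops", "sre", "gitops", "platform engineering"]
CUTOFF = datetime.now(timezone.utc) - timedelta(hours=48)

def is_relevant(entry: dict) -> bool:
    """Keep entries published in the last 48 hours whose title matches a keyword."""
    title = entry["title"].lower()
    return entry["published"] >= CUTOFF and any(k in title for k in KEYWORDS)

fresh = {"title": "GitOps with ArgoCD", "published": datetime.now(timezone.utc) - timedelta(hours=2)}
stale = {"title": "GitOps with ArgoCD", "published": datetime.now(timezone.utc) - timedelta(hours=72)}
print(is_relevant(fresh))  # True
print(is_relevant(stale))  # False
```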
```
mywebui/
├── actions/                  # Action buttons (deterministic Python code)
│   ├── add_to_memories_action_button.py
│   └── news.py
├── tools/                    # Callable tools for the LLM
│   └── tool-brutalist_scraper-export-*.json
├── prompts/                  # System prompts for workflows
│   └── news.md
├── docker-compose.yaml       # Container configuration
├── .env                      # Environment variables (not committed)
└── CLAUDE.md                 # Developer documentation
```
- Docker and Docker Compose
- Ollama running locally or on network
- Open WebUI (compatible version >= 0.5.0)
Clone the repository:

```bash
git clone https://github.com/yourusername/mywebui.git
cd mywebui
```

Create a `.env` file in the root directory:

```bash
# Tavily API key for web search (optional)
TAVILY_API_KEY=your_tavily_api_key_here
```

Edit `docker-compose.yaml` to point to your Ollama instance:

```yaml
environment:
  - OLLAMA_BASE_URL=http://your-ollama-host:11434
```

Start the containers:

```bash
docker compose up -d
```

Access the interface at http://localhost:3000.
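Before importing the extensions, it can help to confirm that the host in `OLLAMA_BASE_URL` is reachable (hypothetical host shown; adjust to your setup):

```shell
# List the models Ollama serves; a JSON response confirms connectivity.
curl -s http://your-ollama-host:11434/api/tags
```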
Option A: Manual Import (Recommended)
- Go to Admin Panel → Actions
- Upload action files from the `actions/` directory
- Go to Admin Panel → Tools
- Import tool files from the `tools/` directory
Option B: Volume Mount
Mount the directories in your docker-compose.yaml:
```yaml
volumes:
  - ./actions:/app/backend/data/actions
  - ./tools:/app/backend/data/tools
  - ./prompts:/app/backend/data/prompts
```

- RSS News Action: Click the "Fetch Tech News" button in any chat to get the latest tech headlines
- Brutalist Scraper Tool: Use the news prompt or ask the assistant to fetch news using the Brutalist tool
- Custom Filtering: The LLM will automatically filter results based on your tech stack preferences
- After receiving a useful assistant response, click the "Add to Memory" action button
- The response will be saved to your personal memory store
- Future conversations can reference this information
Edit `actions/news.py` to modify the RSS feeds:

```python
self.feeds = {
    "Your Source": "https://example.com/feed.xml",
    # Add more feeds here
}
```

Modify the keyword filter in `actions/news.py`:

```python
self.keywords = [
    "kubernetes",
    "your-keyword",
    # Add more keywords
]
```

Open WebUI is configured with:

- Embedding Model: `mxbai-embed-large:latest`
- Chunk Size: 1000 tokens
- Chunk Overlap: 200 tokens
- Top K Results: 5
Adjust these in docker-compose.yaml under the environment section.
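For example (the variable names below follow Open WebUI's documented RAG settings; verify them against the documentation for your Open WebUI version):

```yaml
environment:
  - RAG_EMBEDDING_MODEL=mxbai-embed-large:latest
  - CHUNK_SIZE=1000
  - CHUNK_OVERLAP=200
  - RAG_TOP_K=5
```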
- Create a new Python file in the `actions/` directory
- Define an `Action` class with an `async def action()` method
- Use `__event_emitter__` to send status updates and messages
- Import it through the Open WebUI admin panel
Example structure:
```python
class Action:
    async def action(self, body: dict, __event_emitter__=None):
        await __event_emitter__({
            "type": "status",
            "data": {"description": "Working...", "done": False}
        })
        # Your code here
        return None
```

Tools provide callable functions for the LLM. See the `tools/` directory for the JSON export format.
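In Python form, a tool is a `Tools` class whose methods become functions the LLM can call. A minimal sketch (the method name and placeholder data are illustrative; a real tool such as the Brutalist scraper would fetch live content here):

```python
class Tools:
    def get_headlines(self, keyword: str = "") -> str:
        """Return recent headlines, optionally filtered by keyword."""
        # Placeholder data; a real tool would scrape a site or call an API.
        headlines = ["Kubernetes 1.30 released", "ArgoCD adds new sync options"]
        if keyword:
            headlines = [h for h in headlines if keyword.lower() in h.lower()]
        return "\n".join(headlines)

tools = Tools()
print(tools.get_headlines("kubernetes"))  # Kubernetes 1.30 released
```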
Actions can auto-install Python packages:
```python
import subprocess
import sys

try:
    import feedparser
except ImportError:
    # Install the missing dependency at runtime, then retry the import
    subprocess.check_call([sys.executable, "-m", "pip", "install", "feedparser"])
    import feedparser
```

- Backend: Open WebUI (Python/FastAPI)
- LLM Runtime: Ollama
- Models: Qwen 2.5 7B (task model), mxbai-embed-large (embeddings)
- Search: Tavily API
- Container: Docker
| Variable | Description | Default |
|---|---|---|
| `OLLAMA_BASE_URL` | Ollama API endpoint | `http://192.168.15.13:11434` |
| `TAVILY_API_KEY` | Tavily search API key | Required for web search |
| `WEBUI_AUTH` | Enable authentication | `False` |
| `RAG_EMBEDDING_MODEL` | Embedding model name | `mxbai-embed-large:latest` |
| `TASK_MODEL` | Task execution model | `qwen2.5:7b-instruct-q4_K_M` |
- API Keys: Never commit `.env` files or hardcode API keys
- Authentication: Disabled by default (`WEBUI_AUTH=False`); enable for production
- Network: Configured for local network access only
- Data: Persistent data stored in the `./data` directory (gitignored)
This is a personal configuration repository, but feel free to:
- Fork and adapt for your own use
- Submit issues for bugs
- Share your own extensions
MIT License - Feel free to use and modify for your own projects.
- Open WebUI - The amazing self-hosted LLM interface
- Ollama - Local LLM runtime
- Brutalist Report - Curated tech news aggregator
- Open WebUI Documentation
- Open WebUI GitHub
- Ollama Models
- CLAUDE.md - Developer documentation for working with this repository
Note: This is a personal configuration. Customize settings, models, and endpoints to match your infrastructure.