Tapio is a RAG (Retrieval-Augmented Generation) tool for extracting, processing, and querying information from websites such as Migri.fi (the Finnish Immigration Service). It provides a complete workflow: web crawling, content parsing, vectorization, and an interactive chatbot interface.
- Multi-site support - Configurable site-specific crawling and parsing
- End-to-end pipeline - Crawl → Parse → Vectorize → Query workflow
- Local LLM integration - Uses Ollama for private, local inference
- Semantic search - ChromaDB vector database for relevant content retrieval
- Interactive chatbot - Web interface for natural language queries
- Flexible crawling - Configurable depth and domain restrictions
- Comprehensive testing - Full test suite for reliability
Primary Users: EU and non-EU citizens navigating Finnish immigration processes
- Students seeking education information
- Workers exploring employment options
- Families pursuing reunification
- Refugees and asylum seekers needing guidance
Core Needs:
- Finding relevant, accurate information quickly
- Practicing conversations on specific topics (family reunification, work permits, etc.)
- Clone and set up:

```shell
git clone https://github.com/Finntegrate/tapio.git
cd tapio
uv sync
```

- Install the required Ollama model:

```shell
ollama pull llama3.2
```
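Before running the pipeline, you may want to confirm that the local Ollama server is up. The helper below is an optional, hypothetical sketch (not part of Tapio); it only assumes Ollama's default REST endpoint on `localhost:11434`:

```python
import json
import urllib.error
import urllib.request


def ollama_is_running(base_url: str = "http://localhost:11434") -> bool:
    """Return True if a local Ollama server answers on its default port."""
    try:
        # /api/tags lists the models available on a running Ollama server.
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=2) as resp:
            models = json.load(resp).get("models", [])
            print("Available models:", [m.get("name") for m in models])
            return True
    except (urllib.error.URLError, OSError):
        return False


if __name__ == "__main__":
    print("Ollama reachable:", ollama_is_running())
```

If this prints `False`, start the server with `ollama serve` before launching the chatbot.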
Tapio provides a four-step workflow:
- crawl - Collect HTML content from websites
- parse - Convert HTML to structured Markdown
- vectorize - Create vector embeddings for semantic search
- tapio-app - Launch the interactive chatbot interface
Use `uv run -m tapio.cli --help` to see all commands, or `uv run -m tapio.cli <command> --help` for command-specific options.
Complete workflow for the Migri website:
```shell
# 1. Crawl content (uses site configuration)
uv run -m tapio.cli crawl migri --depth 2

# 2. Parse HTML to Markdown
uv run -m tapio.cli parse migri

# 3. Create vector embeddings
uv run -m tapio.cli vectorize

# 4. Launch chatbot interface
uv run -m tapio.cli tapio-app
```
To list configured sites:

```shell
uv run -m tapio.cli list-sites
```

To view detailed site configurations:

```shell
uv run -m tapio.cli list-sites --verbose
```
Site configurations define how to crawl and parse specific websites. They're stored in `tapio/config/site_configs.yaml` and used by both the crawl and parse commands.
```yaml
sites:
  migri:
    base_url: "https://migri.fi"    # Used for crawling and converting relative links
    description: "Finnish Immigration Service website"
    crawler_config:                 # Crawling behavior
      delay_between_requests: 1.0   # Seconds between requests
      max_concurrent: 3             # Concurrent request limit
    parser_config:                  # Parser-specific configuration
      title_selector: "//title"     # XPath for page titles
      content_selectors:            # Priority-ordered content extraction
        - '//div[@id="main-content"]'
        - "//main"
        - "//article"
        - '//div[@class="content"]'
      fallback_to_body: true        # Use <body> if selectors fail
      markdown_config:              # HTML-to-Markdown options
        ignore_links: false
        body_width: 0               # No text wrapping
        protect_links: true
        unicode_snob: true
        ignore_images: false
        ignore_tables: false
```
Required:

- `base_url` - Base URL for the site (used for crawling and link resolution)

Optional (with defaults):

- `description` - Human-readable description
- `parser_config` - Parser-specific settings (uses defaults if omitted)
  - `title_selector` - Page title XPath (default: `"//title"`)
  - `content_selectors` - XPath selectors for content extraction (default: `["//main", "//article", "//body"]`)
  - `fallback_to_body` - Use full body content if selectors fail (default: `true`)
  - `markdown_config` - HTML conversion settings (uses defaults if omitted)
- `crawler_config` - Crawling behavior settings (uses defaults if omitted)
  - `delay_between_requests` - Delay between requests in seconds (default: `1.0`)
  - `max_concurrent` - Maximum concurrent requests (default: `5`)
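The "uses defaults if omitted" behavior can be pictured with dataclasses. This is an illustrative sketch, not Tapio's actual config classes; the field names simply mirror the list above:

```python
from dataclasses import dataclass, field


@dataclass
class CrawlerConfig:
    # Defaults applied when the field is omitted from site_configs.yaml.
    delay_between_requests: float = 1.0
    max_concurrent: int = 5


@dataclass
class ParserConfig:
    title_selector: str = "//title"
    content_selectors: list[str] = field(
        default_factory=lambda: ["//main", "//article", "//body"]
    )
    fallback_to_body: bool = True


@dataclass
class SiteConfig:
    base_url: str  # Required: no default
    description: str = ""
    crawler_config: CrawlerConfig = field(default_factory=CrawlerConfig)
    parser_config: ParserConfig = field(default_factory=ParserConfig)


# Overriding one crawler field keeps the defaults for the rest.
site = SiteConfig(
    base_url="https://migri.fi",
    crawler_config=CrawlerConfig(max_concurrent=3),
)
print(site.crawler_config.delay_between_requests)  # 1.0 (default kept)
print(site.crawler_config.max_concurrent)          # 3 (overridden)
```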
To add a new site:

- Analyze the target website's structure
- Identify XPath selectors for content extraction
- Add the configuration to `site_configs.yaml`:

```yaml
sites:
  my_site:
    base_url: "https://example.com"
    description: "Example site configuration"
    parser_config:
      content_selectors:
        - '//div[@class="main-content"]'
```

- Use it with the commands:

```shell
uv run -m tapio.cli crawl my_site
uv run -m tapio.cli parse my_site
uv run -m tapio.cli vectorize
uv run -m tapio.cli tapio-app
```
Tapio uses centralized configuration in `tapio/config/settings.py`:

```python
DEFAULT_DIRS = {
    "CRAWLED_DIR": "content/crawled",  # HTML storage
    "PARSED_DIR": "content/parsed",    # Markdown storage
    "CHROMA_DIR": "chroma_db",         # Vector database
}

DEFAULT_CHROMA_COLLECTION = "tapio"  # ChromaDB collection name
```
Site-specific configurations live in `tapio/config/site_configs.yaml` and automatically handle content extraction and directory organization based on each site's domain.
See CONTRIBUTING.md for development guidelines, code style requirements, and how to submit pull requests.
Licensed under the European Union Public License version 1.2. See LICENSE for details.
Thanks goes to these wonderful people (emoji key):
- Brylie Christopher Oxley 🚇
- AkiKurvinen 🔣 💻
- ResendeTech 💻
This project follows the all-contributors specification. Contributions of any kind welcome!