A Python application that helps brands understand their visibility and ranking across different Large Language Models (LLMs). Track how your brand appears in LLM responses compared to competitors.
Built with BAML: uses structured prompting and type-safe LLM interactions via BAML templates.
- Multi-Provider Support: Query multiple LLM providers (OpenAI, Ollama, etc.)
- BAML Integration: Type-safe, templated prompts with structured outputs
- Brand Tracking: Monitor mentions of your brand and competitors
- Ranking Analysis: See where brands rank in LLM responses
- Sentiment Analysis: Analyze sentiment of brand mentions
- Competitor Graph: Track co-mention networks and competitive relationships over time
- Hallucination Filter: Request sources/citations and score confidence to detect hallucinations
- SQLite Storage: Persistent storage of all results
- Comprehensive Analytics: Detailed reports and comparisons
- Configurable: Easy configuration via JSON files
- Clone and setup:

  ```bash
  git clone <repository-url>
  cd foundamental
  ```

- Install dependencies and activate the virtual environment:

  ```bash
  uv sync
  source .venv/bin/activate
  ```

- Setup environment:

  ```bash
  cp .env.example .env
  ```

- Configure your analysis: edit `config.json` to add your brands and queries.
For each brand that you want to test, add an object to `config.json` as follows:
```json
{
  "brands": [
    {
      "id": 1,
      "name": "YourBrand",
      "aliases": ["Your Brand", "YB", "YourBrand.com"]
    }
  ]
}
```

Define the queries you want to test:
```json
{
  "queries": [
    {
      "id": 1,
      "text": "Best vector database for RAG",
      "k": 5,
      "category": "technical"
    }
  ]
}
```

```bash
# Standard analysis
python foundamental.py run

# With hallucination filter (requests sources and confidence)
python foundamental.py run --with-sources
```

```bash
# Generate a visibility report
python foundamental.py analyze --report

# Compare provider performance
python foundamental.py analyze --compare
```

```bash
# Run sentiment analysis
python foundamental.py sentiment --analyze
```
```bash
python foundamental.py sentiment --report
```

```bash
# Analyze hallucination risks
python foundamental.py hallucination --analyze --verify-urls

# View hallucination report
python foundamental.py hallucination --report
```

```bash
# View competitor co-mention network
python foundamental.py analyze --graph

# Focus on a specific brand's competitors
python foundamental.py analyze --graph --brand YourBrand

# Export graph to JSON for visualization
python foundamental.py analyze --export-graph competitor_graph.json

# Export to NetworkX (graph analysis)
python src/competitor_graph.py --export-networkx graph.gpickle

# Export to PyTorch Geometric (graph neural networks)
python src/competitor_graph.py --export-pyg graph.pt

# Export as adjacency matrix (NumPy/Pandas)
python src/competitor_graph.py --export-adjacency matrix.csv

# Filter by relationship strength
python foundamental.py analyze --export-graph graph.json --min-strength 0.5
```

The competitor graph automatically tracks when brands are mentioned together in LLM responses, building a co-mention network over time. This helps you understand:
- Which brands are considered direct competitors by LLMs
- How competitive relationships evolve over time
- The strength of competitive associations based on co-mention frequency and rank proximity
Export Formats: JSON, NetworkX, PyTorch Geometric (PyG), Adjacency Matrix (NumPy/CSV)
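As an illustration of how a co-mention network accrues edge weights (a hypothetical sketch, not the project's actual implementation; the brand names and response lists are invented sample data):

```python
from collections import Counter
from itertools import combinations

# Each entry is the ordered list of brands one LLM response mentioned
# (invented sample data).
responses = [
    ["BrandA", "BrandB", "BrandC"],
    ["BrandA", "BrandC"],
    ["BrandB", "BrandC"],
]

# Count every unordered pair that appears in the same response; the count
# acts as the co-mention edge weight.
co_mentions = Counter()
for brands in responses:
    for pair in combinations(sorted(set(brands)), 2):
        co_mentions[pair] += 1

# BrandA/BrandC and BrandB/BrandC each co-occur twice, BrandA/BrandB once.
```

The real implementation also factors in rank proximity when scoring relationship strength; here the weight is just the raw co-occurrence count.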
```bash
python foundamental.py analyze --export results.json
```

The application creates a SQLite database (`llmseo.db`) with several main tables:
- responses: Raw LLM responses
- mentions: Brand mentions and rankings
- runs: Execution metadata
- co_mentions: Co-occurrence relationships between brands
- competitor_relationships: Aggregated competitive relationships over time
- sources: URLs and citations from LLM responses (when using --with-sources)
- hallucination_scores: Reliability scores for detecting hallucinations
```
LLM SEO Brand Visibility Report
==================================================

YourBrand
------------
OPENAI (gpt-5-nano-2025-08-07)
  Query: Best vector database for RAG
  #3 - YourBrand
  Leading solution for enterprise RAG implementations...
```

```
Provider Performance Comparison
========================================
Provider   Model        Mentions   Avg Rank   Success Rate
----------------------------------------------------------------------
openai     gpt-5-nano   2          2.5        100.0%
ollama     llama3       1          4.0        100.0%
```
- Create a new provider class in `src/providers/`
- Inherit from the `LLMProvider` base class
- Implement the `rank()` method
- Add it to the `PROVIDERS` list in `run.py`
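Put together, the steps above might look like this minimal sketch. The real `LLMProvider` interface lives in `src/providers/`, so the base-class shape and the `rank()` signature here are assumptions, not the project's actual API:

```python
# Hypothetical stand-in for the real base class in src/providers/.
class LLMProvider:
    name = "base"

    def rank(self, query, k):
        """Return the top-k entity names for a query, best first."""
        raise NotImplementedError


class MyProvider(LLMProvider):
    """Example provider; replace the body of rank() with a real API call."""
    name = "myprovider"

    def rank(self, query, k):
        # Call your model here; this stub returns canned results.
        return ["BrandA", "BrandB", "BrandC"][:k]


# Assumed registration mechanism, following the steps above.
PROVIDERS = [MyProvider()]
```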
The SQLite database can be queried directly for custom analysis:
```sql
SELECT brand_name, AVG(rank_position) AS avg_rank
FROM mentions
GROUP BY brand_name;
```

- Python 3.7+
- OpenAI API key (for OpenAI provider)
- Ollama running locally (for Ollama provider) on port `11434`
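The direct-query pattern shown above can also be scripted with Python's built-in `sqlite3` module. This sketch uses an in-memory database as a stand-in for `llmseo.db` (only the two columns from the example query are modeled, and the sample rows are invented):

```python
import sqlite3

# In-memory stand-in for llmseo.db with a minimal mentions table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mentions (brand_name TEXT, rank_position REAL)")
conn.executemany(
    "INSERT INTO mentions VALUES (?, ?)",
    [("YourBrand", 3), ("YourBrand", 2), ("Competitor", 1)],
)

# Average rank per brand, best (lowest) first.
rows = conn.execute(
    "SELECT brand_name, AVG(rank_position) AS avg_rank "
    "FROM mentions GROUP BY brand_name ORDER BY avg_rank"
).fetchall()
# → [('Competitor', 1.0), ('YourBrand', 2.5)]
```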
- `responses`: Stores raw LLM responses and metadata
- `mentions`: Tracks brand mentions with ranking positions
- `runs`: Execution history and statistics
This project is available under the MIT License.
This application uses BAML for structured LLM interactions. The prompts are defined in `baml_src/llm_seo.baml`:
- RankEntitiesOpenAI: OpenAI-specific ranking function
- RankEntitiesOllama: Ollama-specific ranking function
- BrandSentiment: Sentiment analysis for brand mentions
The BAML client is pre-generated in `baml_client/`. If you modify the BAML files, regenerate it by running the following in the root directory:
```bash
baml-cli generate
```

- Hallucination Filter: Ask models to output URLs/sources and score confidence
- Competitor graph (co-mention network over time)
- LLM-as-a-judge: Swap out Regular Expressions for a small, cheap model to do evals
- Attribution tests
- Simple UI
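For context on the LLM-as-a-judge item above: a regex-based mention matcher over the `config.json` aliases might look roughly like this hypothetical sketch (the project's actual matching logic may differ):

```python
import re

def find_mentions(text, brands):
    """Count whole-word, case-insensitive mentions of each brand or alias."""
    counts = {}
    for brand, aliases in brands.items():
        # Build one alternation covering the brand name and all its aliases.
        pattern = r"\b(" + "|".join(re.escape(a) for a in [brand, *aliases]) + r")\b"
        counts[brand] = len(re.findall(pattern, text, flags=re.IGNORECASE))
    return counts

brands = {"YourBrand": ["Your Brand", "YB"]}
counts = find_mentions("YourBrand beats yb and Your Brand.", brands)
# → {'YourBrand': 3}
```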