Simpler, Easier, For Developers.
We build tools that turn volatile runtime data into searchable, reusable knowledge — so developers can focus on building, not debugging.
Our work is centered around VectorWave, a Python framework that brings Execution-Level Semantic Optimization and Autonomous Self-Healing to LLM-integrated applications.
| Project | Description |
|---|---|
| VectorWave | Core framework: semantic caching, self-healing, drift detection |
| VectorWave Docs | Official documentation and guides |
| VectorSurfer | Dashboard for monitoring and managing VectorWave instances |
| VectorCheck | Replay-based testing library for VectorWave pipelines |
```python
@vectorize(semantic_cache=True, auto=True)
def expensive_llm_task(query: str):
    ...
```

A single decorator gives you:
- Semantic Caching — Serve cached results for semantically similar inputs. ~125x faster, up to 90% cost reduction.
- Self-Healing — Automatically diagnose runtime errors and generate fix PRs via LLM.
- Drift Detection — Alert when user queries drift away from known patterns.
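To make the semantic-caching idea concrete, here is a minimal, self-contained sketch of how a similarity-based cache can work behind a decorator. It is illustrative only and not VectorWave's implementation: `toy_embed` is a hypothetical stand-in for a real embedding model, and the `semantic_cache` decorator and its `threshold` parameter are names invented for this example.

```python
import math
from functools import wraps

def toy_embed(text: str) -> dict:
    # Hypothetical stand-in for a real embedding model:
    # a simple bag-of-words frequency vector.
    vec = {}
    for word in text.lower().split():
        vec[word] = vec.get(word, 0) + 1
    return vec

def cosine(a: dict, b: dict) -> float:
    # Cosine similarity between two sparse vectors.
    dot = sum(a[k] * b.get(k, 0) for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def semantic_cache(threshold: float = 0.8):
    # Serve a cached result when a new query is similar enough
    # to a previously seen one (illustrative only).
    def decorator(fn):
        cache = []  # list of (embedding, result) pairs
        @wraps(fn)
        def wrapper(query: str):
            emb = toy_embed(query)
            for cached_emb, result in cache:
                if cosine(emb, cached_emb) >= threshold:
                    return result  # cache hit: skip the expensive call
            result = fn(query)
            cache.append((emb, result))
            return result
        return wrapper
    return decorator

@semantic_cache(threshold=0.8)
def expensive_llm_task(query: str) -> str:
    # Placeholder for a real LLM call.
    return f"answer for: {query}"

first = expensive_llm_task("summarize the sales report")
# A near-duplicate query is served from the cache without
# re-invoking the expensive function.
second = expensive_llm_task("summarize the sales report please")
```

The cost and latency savings come from the cache-hit path: similar queries never reach the underlying LLM call at all.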
```shell
pip install vectorwave
```

Read the Documentation to set up your first project in minutes.
We welcome contributions across all projects. Check our Contributing Guide for setup, conventions, and PR process.
For questions and discussions, visit GitHub Discussions.