<!-- ASCII-art banner: owl mascot with "OwScope" wordmark -->
Open-source intelligence engine for discovering, evaluating, and adopting open-source libraries, frameworks, and agent skills.
Features • Quick Start • Usage Guidelines • Integration Testing • API Reference • Self Hosting • Contributing • Code of Conduct • Changelog • Design Doc • Roadmap • Security • Support • Release Notes
Owlscope is an open-source intelligence engine that helps developers and AI Agents discover, evaluate, and choose the best open-source projects, libraries, and Agent Skills.
Whether you're a senior engineer doing architecture selection, a Vibe Coder asking AI to build your app, or an AI Agent automating a workflow — when facing questions like "Which library should I use? Is this project still maintained? What Agent Skill fits best?" — Owlscope delivers deeply analyzed, trustworthy recommendations in seconds.
- 🔍 Deep Semantic Search: Understands developer intent, not just keywords
- 🌐 Full-Spectrum Coverage: Bridges traditional open-source (GitHub, npm, PyPI) and Agent ecosystems (Qveris, MCP Hub)
- 📊 Multi-Dimensional Evaluation: 7-dimension scoring for libraries + dedicated Agent Skill evaluation
- 🤖 Agent-Native: MCP protocol support — AI Agents can call Owlscope as a tool
- 👶 Beginner-Friendly: Adaptive output that adjusts to user expertise level
- 📦 Self-Hostable: Full functionality via `docker compose up`
- 💡 Idea De-duplication: Input an idea/PRD and detect whether similar open-source implementations already exist
The screenshots below show the current English UI experience.
- Natural language + structured search across open-source projects
- Multi-dimensional project evaluation (activity, security, community health, etc.)
- Side-by-side comparison of alternatives
- Dependency health audit
- Idea/PRD validation against GitHub and open-source ecosystems to avoid rebuilding solved products
- Conversational search — describe what you want to build in plain language
- Step-by-step guides with difficulty ratings
- Complete tech stack recommendations
- MCP Protocol native support
- Structured JSON API responses
- Agent Skill discovery across platforms (Qveris, MCP Hub, etc.)
```bash
git clone https://github.com/weijt606/owlscope.git
cd owlscope

# Option A (CLI, current shell only): set one or more provider keys directly
export DEEPSEEK_API_KEY="your_deepseek_key"
# export OPENAI_API_KEY="your_openai_key"
# export ANTHROPIC_API_KEY="your_anthropic_key"

# Option B (config files, persistent): keep defaults in .env and secrets in .env.local
cp .env.example .env
cp .env.local.example .env.local
cat >> .env.local <<'EOF'
DEEPSEEK_API_KEY=your_deepseek_key
# OPENAI_API_KEY=your_openai_key
# ANTHROPIC_API_KEY=your_anthropic_key
EOF

# Start all services
docker compose up -d
```

Compose loads both `.env` and `.env.local` for the API service, and values in `.env.local` override duplicate keys.
Important: Owlscope does not provide shared AI provider keys. You must supply your own API keys for LLM-powered features.
The Web App will be available at http://127.0.0.1:3100 and the API at http://127.0.0.1:8010.
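After `docker compose up -d`, the API can take a few seconds to become reachable. A small readiness probe, sketched in Python; it assumes only that the API serves HTTP on port 8010 (here probing the Swagger `/docs` page):

```python
import time
import urllib.request
from urllib.error import HTTPError, URLError

def backoff_schedule(tries: int, base: float = 0.5, cap: float = 8.0) -> list:
    """Delays between probes: 0.5s, 1s, 2s, ... capped at `cap` seconds."""
    return [min(cap, base * (2 ** i)) for i in range(tries)]

def wait_for_api(url: str = "http://127.0.0.1:8010/docs", tries: int = 6) -> bool:
    """Return True as soon as the API answers any HTTP response."""
    for delay in backoff_schedule(tries):
        try:
            with urllib.request.urlopen(url, timeout=2):
                return True
        except HTTPError:
            return True  # the server responded, even if with an error status
        except URLError:
            time.sleep(delay)  # not up yet; wait and retry
    return False
```

Call `wait_for_api()` in smoke scripts before firing real requests, so failures point at the service rather than at startup timing.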
```bash
# Install dependencies
pip install -e ".[dev]"

# Option A (CLI, current shell only)
export DEEPSEEK_API_KEY="your_deepseek_key"

# Option B (config files, persistent)
cp .env.example .env
cp .env.local.example .env.local
cat >> .env.local <<'EOF'
DEEPSEEK_API_KEY=your_deepseek_key
# OPENAI_API_KEY=your_openai_key
# ANTHROPIC_API_KEY=your_anthropic_key
EOF

# Start the API server
uvicorn src.api.main:app --reload --host 127.0.0.1 --port 8010
```

```bash
cd web
npm install
npm run dev
```

```bash
# Install CLI
pip install -e ".[cli]"

# Search for projects
owlscope search "Python async HTTP client with HTTP/2"

# Ops preflight (local-first)
owlscope ops preflight

# Ops deploy (local direct mode, docker as fallback)
owlscope ops deploy --mode local

# Include frontend startup and checks
owlscope ops deploy --mode local --with-web

# Run checks without leaving background processes
owlscope ops deploy --mode local --with-web --no-detached

# Ops deploy via docker explicitly
owlscope ops deploy --mode docker

# Stop processes started by CLI deploy
owlscope ops stop
```

- Bring your own AI API keys (BYOK): set provider keys in `.env.local` before using LLM-backed features.
- Recommended minimum: configure at least one provider key such as `DEEPSEEK_API_KEY`, `OPENAI_API_KEY`, or `ANTHROPIC_API_KEY`.
- Setup methods:
  - CLI method (temporary): `export DEEPSEEK_API_KEY="..."` in the same shell session before starting the API/web.
  - Config file method (persistent): keep non-secret defaults in `.env`, and write real keys only to `.env.local`.
- Security: never commit `.env.local` or API keys to Git history, screenshots, or issues.
- Cost control: use lighter models for iterative workflows and monitor token usage in your provider dashboard.
- Fallback behavior: if external model services are unavailable, some retrieval flows still work with deterministic fallbacks.

Pre-release key hygiene quick check:

- Ensure no env secret files are tracked: `git ls-files .env .env.local` should return nothing.
- Ensure tracked files do not contain token-like strings (example):

```bash
git grep -nE "(sk-[A-Za-z0-9]{20,}|ghp_[A-Za-z0-9]{20,}|xoxb-[A-Za-z0-9-]{20,})" -- .
```
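The same token patterns can be reused from Python, for example in a pre-commit hook. A minimal sketch, using the exact regex from the `git grep` example above (the helper names are illustrative):

```python
import re
from pathlib import Path

# Token-like patterns from the hygiene check above:
# OpenAI-style keys, GitHub personal access tokens, Slack bot tokens.
TOKEN_RE = re.compile(
    r"(sk-[A-Za-z0-9]{20,}|ghp_[A-Za-z0-9]{20,}|xoxb-[A-Za-z0-9-]{20,})"
)

def scan_text(text: str) -> list:
    """Return all token-like strings found in a blob of text."""
    return TOKEN_RE.findall(text)

def scan_files(paths: list) -> dict:
    """Map each file path to the token-like strings it contains."""
    hits = {}
    for p in paths:
        found = scan_text(Path(p).read_text(errors="ignore"))
        if found:
            hits[str(p)] = found
    return hits
```

Wire `scan_files` over `git ls-files` output to fail a commit when any hit is found.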
Run this when you want to validate the real `/api/v1/compare` endpoint with a black-box test.

```bash
# Start required infra
docker compose up -d postgres redis

# Enable integration test execution and run the compare black-box case
export OWLSCOPE_RUN_INTEGRATION=1
pytest -q -m integration tests/integration/test_compare_blackbox.py
```

Notes:

- The integration suite is excluded from default test runs.
- CI always runs this test in a dedicated job with PostgreSQL + Redis services.
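Inside a test suite, an opt-in gate like this is conventionally a check on the environment variable shown above. A sketch of that helper (illustrative, not the repository's actual code):

```python
import os

def integration_enabled(env=None) -> bool:
    """True when OWLSCOPE_RUN_INTEGRATION is set to a truthy value.

    Pass `env` explicitly for testing; defaults to the process environment.
    """
    env = dict(os.environ) if env is None else env
    return env.get("OWLSCOPE_RUN_INTEGRATION", "") in {"1", "true", "yes"}
```

A module-level `pytest.mark.skipif(not integration_enabled(), ...)` then keeps the whole integration module out of default runs.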
Use these commands before deployment:
```bash
# Validate production environment vars
python scripts/validate_env.py --mode prod

# One-command release validation (tests + build + smoke)
bash scripts/release_check.sh

# Include black-box integration test in the same run
OWLSCOPE_RELEASE_CHECK_INTEGRATION=1 bash scripts/release_check.sh
```

Production-readiness docs:

- Deployment runbook: `docs/deployment.md`
- Migration/rollback: `docs/migrations.md`
- Smoke checklist: `docs/smoke-test.md`
- Release checklist: `docs/release-checklist.md`
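For context, the kind of check `scripts/validate_env.py` performs can be sketched as follows. The required-variable list here is illustrative only; the script's actual contract lives in the repository:

```python
import os

# Illustrative required variables for a prod deployment; the real list
# is defined by scripts/validate_env.py, not by this sketch.
REQUIRED_PROD_VARS = ["DATABASE_URL", "REDIS_URL"]

def missing_vars(required, env=None) -> list:
    """Return the required variables that are absent or empty."""
    env = dict(os.environ) if env is None else env
    return [name for name in required if not env.get(name)]
```

A non-empty return value means the deployment should abort before any service starts.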
Production launch commands:

```bash
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d --build
bash scripts/post_deploy_check.sh
```

CLI-first alternative:

```bash
owlscope ops deploy --mode local
# if needed: owlscope ops deploy --mode docker
# include web checks: owlscope ops check --with-web
# stop local managed processes: owlscope ops stop
```

```bash
# Search
curl -X POST http://127.0.0.1:8010/api/v1/search \
  -H "Content-Type: application/json" \
  -d '{"query": "lightweight Python web framework"}'

# Evaluate a project
curl http://127.0.0.1:8010/api/v1/evaluate/github:library:encode/httpx

# Compare projects
curl -X POST http://127.0.0.1:8010/api/v1/compare \
  -H "Content-Type: application/json" \
  -d '{"projects": ["github:library:fastapi/fastapi", "github:library:pallets/flask", "github:library:django/django"]}'

# Assess whether an idea is already implemented
curl -X POST http://127.0.0.1:8010/api/v1/idea/assess \
  -H "Content-Type: application/json" \
  -d '{"idea": "AI coding workflow assistant for startup teams", "product_doc": "Need repo indexing, recommendation, and integration guidance"}'

# Export assessment report as Markdown
curl -X POST "http://127.0.0.1:8010/api/v1/idea/assess/export?format=markdown" \
  -H "Content-Type: application/json" \
  -d '{"idea": "AI coding workflow assistant for startup teams", "product_doc": "Need repo indexing, recommendation, and integration guidance"}'

# Export assessment report as JSON envelope
curl -X POST "http://127.0.0.1:8010/api/v1/idea/assess/export?format=json" \
  -H "Content-Type: application/json" \
  -d '{"idea": "AI coding workflow assistant for startup teams", "product_doc": "Need repo indexing, recommendation, and integration guidance"}'

# Batch assess multiple ideas
curl -X POST "http://127.0.0.1:8010/api/v1/idea/assess/batch" \
  -H "Content-Type: application/json" \
  -d '{"items":[{"idea":"Open-source API mocking tool","product_doc":"Need scenario replay"},{"idea":"PR review assistant for OSS maintainers","product_doc":"Need triage automation"}],"limit":6,"max_concurrency":2,"per_item_timeout_seconds":30}'

# Export batch assessment report as Markdown
curl -X POST "http://127.0.0.1:8010/api/v1/idea/assess/batch/export?format=markdown" \
  -H "Content-Type: application/json" \
  -d '{"items":[{"idea":"Open-source API mocking tool","product_doc":"Need scenario replay"},{"idea":"PR review assistant for OSS maintainers","product_doc":"Need triage automation"}],"limit":6,"max_concurrency":2,"per_item_timeout_seconds":30}'

# Export batch assessment report as JSON envelope
curl -X POST "http://127.0.0.1:8010/api/v1/idea/assess/batch/export?format=json" \
  -H "Content-Type: application/json" \
  -d '{"items":[{"idea":"Open-source API mocking tool","product_doc":"Need scenario replay"},{"idea":"PR review assistant for OSS maintainers","product_doc":"Need triage automation"}],"limit":6,"max_concurrency":2,"per_item_timeout_seconds":30}'

# Response includes:
# - verdict + existing_project_probability
# - action_recommendation (build|fork|integrate) + action_rationale
# - decision_signals for explainability
# - similar_projects with evidence_snippets
# - export endpoint supports markdown/json report output
# - batch endpoint returns per-idea results + verdict counts
# - batch supports max_concurrency and per_item_timeout_seconds
# - batch export endpoint supports markdown/json report output
```

Full API documentation is available at http://127.0.0.1:8010/docs (Swagger UI) or http://127.0.0.1:8010/redoc (ReDoc).
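The assess-response fields listed above are enough to build a compact client-side summary. A sketch; the top-level field names come from the notes above, while the `name` key on each similar project is an assumption for illustration:

```python
def summarize_assessment(resp: dict) -> str:
    """One-line summary from the documented assess-response fields:
    verdict, existing_project_probability, action_recommendation,
    and similar_projects."""
    names = [p.get("name", "?") for p in resp.get("similar_projects", [])]
    return (
        f"verdict={resp['verdict']} "
        f"p_existing={resp['existing_project_probability']:.2f} "
        f"action={resp['action_recommendation']} "
        f"similar={', '.join(names) or 'none'}"
    )
```

Feed it the parsed JSON body of a `/api/v1/idea/assess` call to get a log-friendly verdict line.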
Owlscope exposes MCP tools for AI Agent integration:
| Tool | Description |
|---|---|
| `owlscope_search` | Search open-source projects and Agent Skills |
| `owlscope_evaluate` | Deep evaluation of a specific project/Skill |
| `owlscope_compare` | Compare multiple projects/Skills |
| `owlscope_check_deps` | Dependency health check |
| `owlscope_alternatives` | Find alternative solutions |
| `owlscope_discover_skills` | Discover Agent Skills from registries |
| `owlscope_stack_suggest` | Get complete tech stack recommendations |
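Over MCP, a client invokes these tools with a JSON-RPC `tools/call` request. A sketch of the message shape; the `query` argument is illustrative, since each tool's actual argument schema comes from the server's tool listing:

```python
import json

def mcp_tool_call(tool: str, arguments: dict, req_id: int = 1) -> str:
    """Serialize a JSON-RPC 2.0 `tools/call` request as used by MCP."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Example: ask Owlscope's search tool a natural-language question.
msg = mcp_tool_call("owlscope_search", {"query": "vector database with filtering"})
```

An MCP client library normally builds this envelope for you; the sketch only shows what travels over the wire.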
Owlscope is designed for easy self-hosting. See the Self-Hosting Guide for detailed instructions.
Owlscope supports 100+ LLM providers via LiteLLM. Configure your preferred models in `src/config/llm.yaml`:

```yaml
llm:
  adapter: litellm
  providers:
    light:
      model: "deepseek/deepseek-chat"
    standard:
      model: "openai/gpt-4o"
    batch:
      model: "ollama/qwen2.5:14b"
```

| Component | Technology |
|---|---|
| Backend | Python (FastAPI) |
| Frontend | Next.js + TailwindCSS + shadcn/ui |
| Vector DB | Qdrant |
| Database | PostgreSQL |
| Cache | Redis |
| LLM Adapter | LiteLLM |
| Task Queue | Celery + Redis |
| CLI | Typer |
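The provider tiers in `src/config/llm.yaml` above imply a simple routing rule: resolve a tier name to a LiteLLM model string. A minimal sketch over the parsed config (the fallback-to-`standard` behavior is an assumption, not Owlscope's documented semantics):

```python
# Parsed form of the providers section from llm.yaml shown above.
PROVIDERS = {
    "light": {"model": "deepseek/deepseek-chat"},
    "standard": {"model": "openai/gpt-4o"},
    "batch": {"model": "ollama/qwen2.5:14b"},
}

def pick_model(tier: str, default_tier: str = "standard") -> str:
    """Resolve a tier name to a LiteLLM model string.

    Unknown tiers fall back to the default tier (an illustrative choice).
    """
    return PROVIDERS.get(tier, PROVIDERS[default_tier])["model"]
```

Routing iterative or batch work through the cheaper tiers is the cost-control lever mentioned in the usage guidelines.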
We welcome contributions! Please see CONTRIBUTING.md for guidelines.
Owlscope is licensed under the Apache License 2.0.
Built with 🦉 by the Owlscope Team





