Demoable internal web app for Sunburnt AI sales reps.
Given prospect details and a public website URL, the app:
- fetches and extracts visible website text server-side
- runs a digest pass with the configured OpenRouter model
- runs a recommendation pass with the configured reasoning model
- grounds recommendations in the Sunburnt sales guide and product catalogue
- returns a structured sales brief (schema sketch below) with:
  - company snapshot
  - inferred industry
  - key pain points
  - data/process/comms/capacity diagnosis
  - recommended products
  - recommended stack
  - Knowledge Layer rationale
  - ROI talking points
  - objections + rebuttals
  - discovery questions
  - opening angle
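For orientation, here is a minimal zod sketch of that brief shape. The field names are assumptions drawn from the list above, not the app's actual schema:

```ts
import { z } from "zod";

// Illustrative only: the real schema lives in the app's zod definitions.
const SalesBriefSchema = z.object({
  companySnapshot: z.string(),
  inferredIndustry: z.string(),
  keyPainPoints: z.array(z.string()),
  diagnosis: z.object({
    data: z.string(),
    process: z.string(),
    comms: z.string(),
    capacity: z.string(),
  }),
  recommendedProducts: z.array(z.string()),
  recommendedStack: z.string(),
  knowledgeLayerRationale: z.string(),
  roiTalkingPoints: z.array(z.string()),
  objections: z.array(z.object({ objection: z.string(), rebuttal: z.string() })),
  discoveryQuestions: z.array(z.string()),
  openingAngle: z.string(),
});

type SalesBrief = z.infer<typeof SalesBriefSchema>;
```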
The v1 app now defends against the main failure mode that showed up before re-hosting: models returning malformed “JSON” with fences, prose, or broken structure.
- JSON sanitising in the OpenRouter layer (sketched after this list)
  - strips markdown fences
  - extracts the first JSON object if the model adds extra prose
  - validates model output against strict zod schemas
- Automatic retry on bad model output
  - if parse or schema validation fails, the app makes one repair-style retry with a stricter prompt
- Heuristic digest fallback
  - if the digest model still fails after retries, the app builds a conservative digest directly from scraped website content
  - the reasoning stage can continue using that fallback digest
- Actionable errors
  - request validation errors come back as clear intake issues
  - model failures now return operator-usable messages instead of raw, brittle internals
- Metadata and logging
  - the response includes whether each stage came from the original model, a retry, or the fallback
  - attempt counts and the JSON recovery strategy are exposed in metadata
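A minimal sketch of that sanitise-and-retry flow, assuming a generic `callModel` helper (all names here are illustrative, not the app's actual code):

```ts
import { z } from "zod";

// Hypothetical helper: however the app actually calls OpenRouter.
declare function callModel(prompt: string): Promise<string>;

// Strip markdown fences, then pull out the first JSON object if the
// model wrapped it in prose.
function extractJson(raw: string): string {
  const unfenced = raw.replace(/```(?:json)?/gi, "").trim();
  const start = unfenced.indexOf("{");
  const end = unfenced.lastIndexOf("}");
  return start >= 0 && end > start ? unfenced.slice(start, end + 1) : unfenced;
}

function tryJson(text: string): unknown {
  try {
    return JSON.parse(text);
  } catch {
    return undefined;
  }
}

// One normal attempt, then one repair-style retry with a stricter prompt.
async function parseWithRetry<T>(
  prompt: string,
  schema: z.ZodType<T>,
): Promise<{ value: T; source: "model" | "retry"; attempts: number }> {
  const prompts = [prompt, `${prompt}\n\nReturn ONLY a valid JSON object. No fences, no prose.`];
  for (let attempt = 1; attempt <= prompts.length; attempt++) {
    const raw = await callModel(prompts[attempt - 1]);
    const parsed = schema.safeParse(tryJson(extractJson(raw)));
    if (parsed.success) {
      return { value: parsed.data, source: attempt === 1 ? "model" : "retry", attempts: attempt };
    }
  }
  throw new Error("Model output failed schema validation after retry");
}
```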
If the digest falls back to heuristics, the app remains usable, but confidence should be treated more cautiously. The UI flags this state and the report is instructed to say so.
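For illustration only, a conservative fallback digest could be as simple as the following (an assumption; the app's actual heuristics may be richer):

```ts
// Hypothetical shape: a digest built straight from scraped text, with no
// model involvement and no inferences. Field names are illustrative.
interface HeuristicDigest {
  summary: string;
  source: "fallback";
}

function buildHeuristicDigest(siteText: string, maxChars = 1200): HeuristicDigest {
  // Keep only the leading slice of visible text; infer nothing.
  const summary = siteText.replace(/\s+/g, " ").trim().slice(0, maxChars);
  return { summary, source: "fallback" };
}
```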
Built with:

- Next.js App Router
- TypeScript
- OpenRouter-compatible API calls via the OpenAI SDK
- Cheerio for HTML text extraction
- Zod for schema validation
- generated JSON knowledge assets from the provided docs
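For reference, the extraction step can be as simple as this Cheerio sketch (selectors and limits are assumptions, not the app's exact code; `maxChars` and `timeoutMs` mirror the `MAX_SCRAPE_CHARS` and `REQUEST_TIMEOUT_MS` env vars described below):

```ts
import * as cheerio from "cheerio";

// Fetch public HTML server-side and keep only visible text, capped for the
// model context window.
async function extractSiteText(
  url: string,
  maxChars: number,
  timeoutMs: number,
): Promise<string> {
  const res = await fetch(url, { signal: AbortSignal.timeout(timeoutMs) });
  const $ = cheerio.load(await res.text());
  // Remove elements that never contribute visible copy.
  $("script, style, noscript, svg").remove();
  return $("body").text().replace(/\s+/g, " ").trim().slice(0, maxChars);
}
```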
To run locally:

```bash
cd sunburnt-sales-app
cp .env.example .env.local
# add your OPENROUTER_API_KEY to .env.local
npm install
npm run dev
```

Then open http://localhost:3000.
Environment variables:

- `OPENROUTER_API_KEY` - required, server-side only
- `OPENROUTER_BASE_URL` - defaults to `https://openrouter.ai/api/v1`
- `OPENROUTER_SITE_URL` - used for OpenRouter headers
- `OPENROUTER_APP_NAME` - used for OpenRouter headers
- `DIGEST_MODEL` - defaults to `anthropic/claude-haiku-4.5`
- `REASONING_MODEL` - defaults to `anthropic/claude-opus-4.6`
- `MAX_SCRAPE_CHARS` - max extracted site text for analysis
- `REQUEST_TIMEOUT_MS` - fetch timeout for website extraction
- `TEST_BASE_URL` - optional base URL for the harness script (defaults to `http://127.0.0.1:3000`)
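A minimal `.env.local` might look like this (all values are placeholders):

```bash
# .env.local - server-side only; do not commit
OPENROUTER_API_KEY=sk-or-...
OPENROUTER_SITE_URL=https://sales.example.internal
OPENROUTER_APP_NAME=sunburnt-sales-app
# optional overrides
DIGEST_MODEL=anthropic/claude-haiku-4.5
REASONING_MODEL=anthropic/claude-opus-4.6
MAX_SCRAPE_CHARS=20000
REQUEST_TIMEOUT_MS=15000
```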
The source docs live in docs/.
Generated app-ready assets live in src/data/ and are produced by:
```bash
npm run generate:knowledge
```

Current generated assets:

- `src/data/products.json`
- `src/data/stacks.json`
- `src/data/industries.json`
- `src/data/overview.json`
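Once generated, the assets can be imported straight into the pipeline. A sketch, assuming the default `@/` → `src/` alias and `resolveJsonModule` enabled in tsconfig:

```ts
// With resolveJsonModule enabled, the generated assets import as plain data.
import products from "@/data/products.json";
import stacks from "@/data/stacks.json";

// Hypothetical grounding step: inline catalogue context into the reasoning prompt.
const groundingContext = JSON.stringify({ products, stacks });
```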
Production build:

```bash
npm run build
```

Start the app locally, then run:

```bash
npm run test:harness
```

The harness calls the real `/api/analyze` route for multiple public business websites and prints:
- HTTP status
- executive summary
- inferred industry
- whether the digest came from `model`, `retry`, or `fallback`
- attempt counts for digest and reasoning
- extraction warnings or actionable errors
Default cases currently cover:
- ServiceM8
- HubSpot
- Jim’s Mowing
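The route can also be hit directly for a one-off check. The request body fields below are assumptions; check the harness script for the actual shape:

```ts
// Hypothetical request shape for /api/analyze; field names are illustrative.
const res = await fetch("http://127.0.0.1:3000/api/analyze", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    prospectName: "ServiceM8",
    websiteUrl: "https://www.servicem8.com",
  }),
});

const brief = await res.json();
console.log(res.status, brief.metadata);
```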
Known limitations:

- Website extraction is intentionally pragmatic, not browser-grade. It does a public HTML fetch and text extraction. JS-heavy sites may produce thin context.
- The recommendation pipeline is grounded with provided knowledge assets, but still depends on model quality and the available public website text.
- The heuristic digest fallback is intentionally conservative. It keeps the workflow alive, but it is not equivalent to a strong clean model digest.
- Pricing in the supplied catalogue includes placeholder-style values in places; the app uses them only as reference context.
Available scripts:

```bash
npm run dev                 # start the dev server
npm run build               # production build
npm run start               # serve the production build
npm run generate:knowledge  # regenerate src/data assets from docs/
npm run test:harness        # run the live API test harness
```