A beautiful, open-source job search dashboard powered by Google Sheets. Track applications, read a Daily Brief compiled from your Pipeline, and manage your pipeline — all from a single page that reads and writes directly to your own Google Sheet.
- Pipeline tracker — job cards with fit scores, priority badges, tags, and status tracking
- Write-back to Sheets — update status, mark applied, add notes directly from the dashboard
- Daily Brief — two-column layout with at-a-glance counts, follow-ups, who you’re waiting on, and stuck applications (details)
- KPI bar — total roles, hot leads, applied count, interview count, avg fit score
- Pipeline filters — Inbox (New / Researching / unassigned) by default; stage pills for Applied, Interviewing, Negotiating
- Run discovery — optional webhook in `config.js` so your agent (Hermes, n8n, etc.) runs another pass; the POST includes `schemaVersion` and an optional `discoveryProfile` from Settings (AGENT_CONTRACT.md)
- ATS LLM scorecard — generated drafts now include structured ATS analysis (score, strengths, gaps, rewrite suggestions) via a local server endpoint or webhook
- Last contact & reply — optional columns R–S editable on each card when signed in
- Filter & search — stage filters plus priority, sort by fit score/date/company, free-text search
- Google OAuth — sign in with Google to enable write actions (read works without sign-in)
- No backend — pure HTML/CSS/JS, deploys anywhere static files are served
- Reproducible — bring your own Sheet + OAuth credentials, share with anyone
If you cloned the repo and want the Cheerio “Fetch posting” feature without a second terminal:
```bash
npm install
npm start
```

Or run `./start.sh` (macOS/Linux) or double-click `start.command` in Finder — same as `npm start`; the first run installs dependencies if needed.

Then open http://localhost:8080. This installs dependencies for `server/` automatically and runs the UI together with the local scraper at http://127.0.0.1:3847. You can leave `jobPostingScrapeUrl` empty in `config.js` on localhost — the app defaults to the local scraper.
The same local server also provides `POST /api/ats-scorecard` when ATS mode is set to server.
For a persistent ATS provider config in server mode, copy `server/ats-env.example` to `server/.env` and set your API key there.
For GitHub Pages (HTTPS), the browser cannot call a scraper on your laptop at http://127.0.0.1. Use Fetch posting by either running the dashboard locally (npm start → http://localhost:8080) or deploying the server/ app and pasting its HTTPS base URL in Settings — see DEPLOY-SCRAPER.md.
For static hosting only without Fetch posting, deploy the files as usual; the scraper is optional.
Recommended in the app: save your OAuth client in Settings, then use the setup screen button to create a blank starter sheet in your own Google Drive with just the required Pipeline headers.
Manual fallback: Copy Template Sheet
Google’s make-a-copy flow duplicates every row in the source template. If you use that fallback and see sample jobs, open the Pipeline tab and delete all rows below the header to start blank.
After copying:
- Recommended: add your OAuth Client ID in Settings and use Sign in with Google — your sheet can stay private; the dashboard reads it via the Sheets API (no publish step).
- Alternative (no OAuth): publish or share the sheet for public read — File → Publish to web, or Share → Anyone with the link can view
- Go to Google Cloud Console → Credentials
- Create a project (or use an existing one)
- Enable the Google Sheets API (direct link)
- Go to Credentials → Create Credentials → OAuth 2.0 Client ID
- Application type: Web application
- Under Authorized JavaScript origins, add your deployment URL (e.g., https://yourusername.github.io)
- Copy the Client ID
```bash
cp config.example.js config.js
```

Edit `config.js`:
```js
window.COMMAND_CENTER_CONFIG = {
  sheetId: "YOUR_SHEET_ID_HERE",
  oauthClientId: "YOUR_CLIENT_ID_HERE.apps.googleusercontent.com",
  title: "Command Center",
  // Optional: POST target when the user clicks "Run discovery" (Hermes / n8n / Apps Script)
  discoveryWebhookUrl: "",
};
```

Your Sheet ID is the long string in your Google Sheet URL:

```
https://docs.google.com/spreadsheets/d/THIS_IS_YOUR_SHEET_ID/edit
```
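If you prefer to pull the ID out programmatically (for scripts or the `?sheet=` URL parameter, which also accepts a full spreadsheet URL), a small helper like this works. It is a sketch, not part of the repo — the function name and regex are our own:

```js
// Hypothetical helper (not shipped with the repo): accept either a raw
// Sheet ID or a full spreadsheet URL and return the bare ID.
function extractSheetId(input) {
  const match = input.match(/\/spreadsheets\/d\/([a-zA-Z0-9_-]+)/);
  return match ? match[1] : input.trim();
}

console.log(extractSheetId(
  "https://docs.google.com/spreadsheets/d/THIS_IS_YOUR_SHEET_ID/edit"
)); // → "THIS_IS_YOUR_SHEET_ID"
```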
Choose any of these (all free):
- Fork this repo
- Add your `config.js` (do NOT commit credentials to a public repo — use GitHub Pages environment or a private fork)
- Go to Settings → Pages → Source: Deploy from a branch → main / root
- Your dashboard is live at https://yourusername.github.io/command-center
- Add that URL to your OAuth client's Authorized JavaScript origins
After deploying, add config.js with your credentials and add your Vercel URL to OAuth origins.
- Connect your repo in the Cloudflare Dashboard
- Build command: (none needed)
- Output directory: `/`
```bash
# Clone and open
git clone https://github.com/emilio3435/command-center.git
cd command-center
cp config.example.js config.js
# Edit config.js with your credentials
open index.html

# Or use any local server:
npm run dev
# → http://localhost:8080 (static root + optional scraper; see package.json)

# Or:
python3 -m http.server 8080
```

The dashboard reads the Pipeline tab (and ignores other tabs).
Machine-readable column contract: `schemas/pipeline-row.v1.json` (header row + enums). Run `npm run test:pipeline-contract` locally to verify that the README and `app.js` stay aligned.
| Column | Description | Updated by |
|---|---|---|
| A: Date Found | When the role was discovered | Auto (Hermes) |
| B: Title | Job title | Auto |
| C: Company | Company name | Auto |
| D: Location | Location / remote policy | Auto |
| E: Link | Direct URL to listing | Auto |
| F: Source | Where it was found (LinkedIn, etc.) | Auto |
| G: Salary | Salary if listed | Auto |
| H: Fit Score | 1-10 match score | Auto (you can override) |
| I: Priority | 🔥 Hot / ⚡ High / — Normal / ↓ Low | Auto (you can override) |
| J: Tags | Matched keywords | Auto |
| K: Fit Assessment | Why it matches your profile | Auto |
| L: Contact | Recruiter/HM name if found | Auto |
| M: Status | New / Researching / Applied / Phone Screen / Interviewing / Offer / Rejected / Passed | You (via dashboard or Sheet) |
| N: Applied Date | When you applied | You |
| O: Notes | Personal notes | You |
| P: Follow-up Date | When to follow up | You |
| Q: Talking Points | Cover letter bullets (auto for 8+ scores) | Auto |
| R: Last contact | Optional. When you last heard from them (shown on cards & in the brief) | You or automation |
| S: Did they reply? | Optional. Yes / No / Unknown (Unknown shows as “Not sure” in the app) | You or automation |
Automation (e.g. Hermes Agent, n8n, Apps Script) fills your Pipeline sheet; this dashboard displays and edits those rows. The integration contract (webhook JSON, columns, dedupe by job URL) is documented in AGENT_CONTRACT.md.
You do not need a webhook to use the dashboard — only for the Run discovery button or automation that speaks the webhook contract. See docs/DISCOVERY-PATHS.md (diagrams: manual rows, scheduled jobs, GitHub Actions, vs browser POST).
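For orientation only, a Run discovery POST body might look roughly like the sketch below. The authoritative shape lives in AGENT_CONTRACT.md — the `schemaVersion` value and every `discoveryProfile` field here are illustrative, not guaranteed:

```json
{
  "schemaVersion": 1,
  "discoveryProfile": {
    "keywords": ["solutions engineer", "remote"],
    "minFitScore": 7
  }
}
```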
Apps Script (visual walkthrough): integrations/apps-script/WALKTHROUGH.md — deploy the repo stub for webhook verification only (npm run apps-script:push, npm run test:discovery-webhook).
Built-in real worker path: use integrations/browser-use-discovery/ for the repo’s Browser Use-backed discovery worker. It keeps the v1 webhook contract stable, supports local and hosted deployment, writes directly to the user’s Sheet, and covers Greenhouse / Lever / Ashby as the first-layer sources.
OpenClaw / agent skills (BYO): use integrations/openclaw-command-center/ as the agent-skill alternative to the built-in worker path. It teaches user-owned agents how to append rows and handle Run discovery; runs in your environment, not the maintainer’s.
Fast local real-discovery path: if your agent runs on your own machine, use local webhook → ngrok → Cloudflare Worker. Start with npm run discovery:bootstrap-local, then use Settings → Hermes + ngrok to review the autofilled route/tunnel info and Cloudflare relay to generate the Worker deploy command and final browser URL.
- Point Run discovery at your HTTPS endpoint (see Settings and `discoveryWebhookUrl`), or use scheduled automation only (paths doc).
- Schedule your agent or cron so rows append to Pipeline on a cadence you want.
- The dashboard auto-refreshes on a timer — new data appears automatically.
See SETUP.md for detailed setup. Use Agent setup in the header for a built-in checklist.
This project is static and free to host (e.g. GitHub Pages). There is no central discovery service run by the authors — that would be ongoing cost and ops. Instead, each user runs automation on their side (or on a free tier they control). The dashboard only needs a discovery webhook URL you paste in Settings; something on the internet must accept HTTPS POST and update your Sheet.
| Option | Who pays | Good for |
|---|---|---|
| Google Apps Script — walkthrough | Runs in your Google account — no server bill from this repo | Webhook stub / smoke test path; replace with real logic or pair with a real worker |
| GitHub Actions (scheduled workflow) | Free tier for public repos (within limits) | Daily jobs without an always-on server |
| Free-tier serverless (e.g. Cloudflare Worker template, Render free tier) | $0 on the user’s own account via templates | CORS-friendly relay to Apps Script or other targets |
| Self-hosted / agent-owned (OpenClaw, Hermes, n8n, `server/`, local + tunnel) | Your machine / homelab | Real discovery that writes Pipeline rows |
Best default real-discovery path for “open source + free + maintainers pay $0”: use integrations/browser-use-discovery/ or another user-owned job that writes Pipeline rows. Use integrations/openclaw-command-center/ when you want an agent-skill workflow instead of the bundled worker. Use integrations/apps-script/ only for webhook smoke tests or as a receiver you replace with real logic. If browser POST hits CORS, use templates/github-actions/ (server curl) or templates/cloudflare-worker/ (adds CORS; forwards to your real URL). See AUTOMATION_PLAN.md for the roadmap.
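As a rough sketch of the GitHub Actions option, a scheduled workflow can simply `curl` your webhook. Every name below is illustrative (the secret name, the cron schedule, the payload) — adapt it to your own endpoint and the repo's `templates/github-actions/`:

```yaml
# Illustrative scheduled workflow: POST to your discovery webhook on weekdays.
name: discovery
on:
  schedule:
    - cron: "0 13 * * 1-5"   # 13:00 UTC, Mon-Fri
  workflow_dispatch: {}       # allow manual runs
jobs:
  trigger:
    runs-on: ubuntu-latest
    steps:
      - name: Call discovery webhook
        env:
          WEBHOOK_URL: ${{ secrets.DISCOVERY_WEBHOOK_URL }}
        run: |
          curl -fsS -X POST "$WEBHOOK_URL" \
            -H "Content-Type: application/json" \
            -d '{"schemaVersion": 1}'
```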
All template paths in one place: SETUP.md — BYO automation templates.
```
┌─────────────┐     JSONP (read)      ┌──────────────┐
│  Dashboard  │ ◄──────────────────── │ Google Sheet │
│ (static JS) │ ───────────────────►  │ (your data)  │
└─────────────┘  Sheets API (write)   └──────────────┘
       │                                      ▲
       │  Google OAuth                        │
       │  (browser-only)                      │
       ▼                                      │
┌─────────────┐                       ┌──────────────┐
│ Google GIS  │                       │  Your agent  │
│ (auth lib)  │                       │ (cron/jobs)  │
└─────────────┘                       └──────────────┘
```
- Reading uses JSONP via Google's gviz endpoint — no auth needed, no CORS issues, works in iframes
- Writing uses the Google Sheets API v4 with an OAuth access token obtained via Google Identity Services
- No backend, no server, no database — your Google Sheet IS the database
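The JSONP read can be illustrated with a sketch like the following. This is not the app's actual code — the function name is ours, and the response sample is the typical gviz wrapper shape; the key step is stripping the callback wrapper to get plain JSON:

```js
// Sketch (not the app's actual code): a gviz response arrives wrapped in a
// callback, e.g.  google.visualization.Query.setResponse({...});
// Stripping everything outside the outermost parentheses leaves plain JSON.
function parseGvizResponse(text) {
  const start = text.indexOf("(");
  const end = text.lastIndexOf(")");
  return JSON.parse(text.slice(start + 1, end));
}

const sample =
  'google.visualization.Query.setResponse({"table":{"rows":[]}});';
console.log(parseGvizResponse(sample).table.rows.length); // → 0
```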
- Vanilla HTML/CSS/JS (no frameworks, no build step)
- Google Identity Services for OAuth
- Google Sheets API v4 for write-back
- Inter + JetBrains Mono typography
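To make the write path concrete, here is a sketch (again, not the app's actual code) of what a Sheets API v4 cell update looks like — updating the Status cell (column M) of one Pipeline row via the `values` endpoint with `valueInputOption=RAW`:

```js
// Sketch of a Sheets API v4 write: build the request pieces for updating
// the Status cell (column M) of one Pipeline row. Helper name is ours.
function buildStatusUpdate(sheetId, rowNumber, status) {
  const range = encodeURIComponent(`Pipeline!M${rowNumber}`);
  return {
    url: `https://sheets.googleapis.com/v4/spreadsheets/${sheetId}` +
         `/values/${range}?valueInputOption=RAW`,
    method: "PUT",
    body: { values: [[status]] },
  };
}

// Usage with an OAuth access token from Google Identity Services:
//   const req = buildStatusUpdate(id, 7, "Applied");
//   fetch(req.url, {
//     method: req.method,
//     headers: {
//       Authorization: `Bearer ${token}`,
//       "Content-Type": "application/json",
//     },
//     body: JSON.stringify(req.body),
//   });
```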
| Parameter | Description | Example |
|---|---|---|
| `sheet` | Override Sheet ID (raw ID or full spreadsheet URL) | `?sheet=…` |
| `setup=discovery` | Opens Settings with the Discovery webhook URL field focused (if the resume onboarding wizard is showing, the parameter is stripped and Settings opens after you finish onboarding) | `?setup=discovery` |
- Never commit secrets — keep `config.example.js` in the repo; copy to `config.js` locally or use Settings (stored in `localStorage`). A real `config.js` must not be pushed to public remotes.
- Repository contents — only placeholders (`YOUR_SHEET_ID_HERE`, empty API keys). The public template Sheet ID in links is not a secret.
- OAuth access tokens are held in memory only (not `localStorage`)
- Gemini/OpenAI keys from Settings live in this browser’s `localStorage`; they are not sent to Command Center’s authors
- Draft generation calls your chosen AI provider directly from the browser unless you select webhook mode
- ATS scorecard can run through your own server (`/api/ats-scorecard`) or your own webhook URL; no maintainer-hosted ATS service is used
See SECURITY.md for maintainers and leak response.
Index and contracts for automation and integrations (column layouts stay in Sheet Structure above):
- docs/README.md — documentation index
- AGENT_CONTRACT.md — discovery webhook contract (JSON, columns, dedupe)
- AUTOMATION_PLAN.md — automation roadmap and template pointers
- examples/ — discovery webhook request fixtures for local testing
PRs welcome. Keep it simple — no build tools, no frameworks, minimal CDN use (Google Identity Services only).
MIT