Your tests pass. But are they actually good?
Most coverage tools tell you what is tested. gapix tells you how well.
Getting Started · Features · Test Quality · AI Providers · CLI · Contributing
You have 80%+ code coverage. CI is green. But bugs still make it to production.
Why? Because tests like this technically "cover" your code:
```ts
// ❌ This passes but catches nothing
test('should process user', async () => {
  const result = await processUser(mockData);
  expect(result).toBeTruthy();
});

// ❌ E2E test that clicks through a flow but never checks outcomes
test('signup flow', async ({ page }) => {
  await page.goto('/signup');
  await page.fill('#email', 'test@test.com');
  await page.click('button[type="submit"]');
  await page.waitForURL('/dashboard');
  // ...no assertions on what actually happened
});
```

Coverage tools can't catch this. gapix can.
## Getting Started

```bash
# Install globally (recommended)
npm install -g @artshllaku/gapix

# Run analysis with interactive HTML report
gapix analyze ./src -o html
```

Or run without installing:

```bash
npx --package=@artshllaku/gapix gapix analyze ./src -o html
```

That's it — an interactive report opens in your browser.
## Features

gapix analyzes the AST of your test files — it works with any framework that uses standard test patterns:
| Framework | Status | Matchers |
|---|---|---|
| Jest | ✅ Full | All 50+ matchers recognized |
| Vitest | ✅ Full | Same API as Jest |
| Playwright | ✅ Full | toBeVisible, toHaveURL, toHaveText, toHaveScreenshot, etc. |
| Testing Library | ✅ Full | toBeInTheDocument, toHaveTextContent, toHaveAttribute, etc. |
| Cypress | ⚠️ Partial | expect() chains detected |
| Mocha + Chai | ⚠️ Partial | describe/it + expect() style |
```
Source Files (.ts/.tsx)             Test Files (.test.ts, .spec.ts)
        │                                       │
    AST Parse                               AST Parse
        │                                       │
    Extract:                                Extract:
    · functions & classes                   · describe/it/test blocks
    · exports & complexity                  · assertions per test case
    · parameters & return types             · matcher types & targets
        │                                       │
        └───────────── Coverage Mapping ────────┘
                            │
                     Risk Assessment
                            │
               ┌────────────┴────────────┐
               │                         │
       AI Gap Analysis         Test Quality Scoring
               │                         │
               └────────────┬────────────┘
                            │
                 Interactive HTML Report
```
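The extraction step above can be sketched in miniature. gapix walks a real AST; the regex pass below (with a made-up helper name) only illustrates the "test blocks and assertions per case" signal:

```typescript
interface TestCase {
  name: string;
  assertions: number;
}

// Toy extractor: find test('name', ...) / it('name', ...) blocks and
// count expect() calls in the text up to the next block. A real AST
// walk scopes this precisely; a split is enough for a sketch.
function extractTestCases(source: string): TestCase[] {
  const parts = source.split(/\b(?:test|it)\s*\(\s*'([^']*)'/);
  const cases: TestCase[] = [];
  for (let i = 1; i < parts.length; i += 2) {
    const body = parts[i + 1] ?? "";
    const asserts = body.match(/\bexpect\s*\(/g);
    cases.push({ name: parts[i], assertions: asserts ? asserts.length : 0 });
  }
  return cases;
}
```

Run over the weak examples at the top of this README, this yields one test case with a single assertion and one with none — the kind of signal the quality scoring step can consume.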
Dark-themed, self-contained HTML dashboard — auto-opens in your browser:
- Summary dashboard — files analyzed, coverage %, tested/untested counts
- Quality grades — each test file scored 0-100 with letter grades
- File-level breakdown — colored progress bars, click to expand details
- Function-level detail — every function/method with tested/untested status
- AI suggestions — specific, runnable test code you can copy-paste
## Test Quality

This is what makes gapix different. Instead of counting lines, it evaluates what your tests actually check:
| Finding | Severity | What it means |
|---|---|---|
| No assertions | 🔴 High | Test runs code but never calls expect() |
| Weak matchers | 🟡 Medium | toBeTruthy() / toBeDefined() instead of checking real values |
| Single assertion | 🟡 Medium | Only one check in a test — probably not enough |
| Missing edge cases | 🟡 Medium | No tests for null, empty, or error inputs |
| Missing error handling | 🔴 High | Source has try/catch but no test triggers it |
| Wrong assertion target | 🟡 Medium | Asserting on a side effect, not the core behavior |
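As a sketch of the "weak matchers" rule, assuming a simplified regex scan (gapix's real rule set works on the AST and is more complete than this illustrative list):

```typescript
// Matchers that accept almost any value and therefore rarely catch
// regressions. Illustrative list, not gapix's actual rule table.
const WEAK_MATCHERS = new Set(["toBeTruthy", "toBeFalsy", "toBeDefined"]);

// Return the weak matcher names used in a test source string.
function findWeakMatchers(testSource: string): string[] {
  const used = [...testSource.matchAll(/\bexpect\s*\([^)]*\)\s*\.\s*(\w+)/g)];
  return used.map((m) => m[1]).filter((name) => WEAK_MATCHERS.has(name));
}
```

On the first example in this README this flags `toBeTruthy`; replacing it with `toBe` or `toEqual` on a concrete value clears the finding.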

| Score | Grade | What it means |
|---|---|---|
| 85–100 | Excellent | Strong, specific assertions that catch real bugs |
| 65–84 | Good | Solid tests, minor improvements possible |
| 40–64 | Fair | Tests exist but have gaps — false confidence risk |
| 0–39 | Poor | Tests provide little value — likely false coverage |
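The bands above transcribe directly into code (the function name here is ours, not gapix's API):

```typescript
type Grade = "Excellent" | "Good" | "Fair" | "Poor";

// Maps a 0-100 quality score to the grade bands in the table above.
function toGrade(score: number): Grade {
  if (score >= 85) return "Excellent";
  if (score >= 65) return "Good";
  if (score >= 40) return "Fair";
  return "Poor";
}
```

Scores at the band edges (85, 65, 40) take the higher grade, matching the inclusive lower bounds in the table.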
## AI Providers

gapix works without AI using rule-based AST analysis. Add an AI provider for deeper, context-aware insights:
```bash
# OpenAI (recommended for best results)
gapix set-provider openai
gapix set-key YOUR_OPENAI_API_KEY

# Ollama (free, runs locally)
ollama pull llama3
gapix set-provider ollama

# Check your config
gapix show-config
```

| Mode | What you get |
|---|---|
| Without AI | Structural analysis — no assertions, weak matchers, single checks |
| With AI | Deep analysis — missing edge cases, wrong targets, context-specific suggestions with runnable code |
## CLI

```bash
# Analyze a project
gapix analyze <path> [options]

# Options
-o, --output <format>      json | markdown | html (default: json)
-d, --output-dir <dir>     Output directory (default: .)
-p, --pattern <patterns>   Glob patterns to include
-e, --exclude <patterns>   Glob patterns to exclude
--skip-quality             Skip test quality analysis

# Other commands
gapix show-report          Re-open the last HTML report
gapix set-provider <name>  Set AI provider (openai | ollama)
gapix set-key <key>        Set API key
gapix show-config          Show current config
```

| Format | Use case |
|---|---|
| HTML | Interactive dashboard, share with your team |
| JSON | CI/CD pipelines, custom tooling |
| Markdown | Pull request comments, wiki pages |
## Contributing

```bash
git clone https://github.com/artshllk/gapix.git
cd gapix
npm install
npm run build
npm test
```

See docs/testing-guide.md for a detailed walkthrough.
Contributions are welcome.
- Fork the repo
- Create a feature branch: `git checkout -b feature/my-feature`
- Make changes and add tests
- Run `npm test`
- Open a Pull Request
- Support for more languages (JavaScript, Python, Go)
- Cypress `cy.should()` chain detection
- VS Code extension
- GitHub Actions / GitLab CI integration
- Performance optimization for large monorepos
MIT License · Built by Art Shllaku