# gapix

> Your tests pass. But are they actually good?

Most coverage tools tell you *what* is tested. gapix tells you *how well*.


Getting Started · Features · Test Quality · AI Providers · CLI · Contributing




## The Problem

You have 80%+ code coverage. CI is green. But bugs still make it to production.

Why? Because tests like this technically "cover" your code:

```ts
// ❌ This passes but catches nothing
test('should process user', async () => {
  const result = await processUser(mockData);
  expect(result).toBeTruthy();
});

// ❌ E2E test that clicks through a flow but never checks outcomes
test('signup flow', async ({ page }) => {
  await page.goto('/signup');
  await page.fill('#email', 'test@test.com');
  await page.click('button[type="submit"]');
  await page.waitForURL('/dashboard');
  // ...no assertions on what actually happened
});
```

Coverage tools can't catch this. gapix can.
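To see why a bare truthy check misses real failures, here is a sketch with a made-up `processUser` (a stand-in for the example above, not code from any real project). A buggy failure path still returns an object, so `toBeTruthy()` can never fail, while a specific assertion on the result immediately exposes the bug:

```typescript
// Hypothetical processUser: on missing input it still returns an object,
// so a truthy check passes even though the call failed.
interface Result {
  ok: boolean;
  user: { email: string } | null;
}

function processUser(data: { email?: string }): Result {
  if (!data.email) {
    return { ok: false, user: null }; // failure path still returns an object
  }
  return { ok: true, user: { email: data.email } };
}

const result = processUser({}); // invalid input: no email

// The weak assertion from the example above would pass:
console.log(Boolean(result)); // true

// A specific assertion catches the failure:
console.log(result.ok); // false
```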




## 🚀 Getting Started

```bash
# Install globally (recommended)
npm install -g @artshllaku/gapix

# Run analysis with interactive HTML report
gapix analyze ./src -o html
```

Or run without installing:

```bash
npx --package=@artshllaku/gapix gapix analyze ./src -o html
```

That's it — an interactive report opens in your browser.




## ✨ Features

### Works with any testing framework

gapix analyzes the AST of your test files — it works with any framework that uses standard test patterns:

| Framework | Status | Matchers |
| --- | --- | --- |
| Jest | ✅ Full | All 50+ matchers recognized |
| Vitest | ✅ Full | Same API as Jest |
| Playwright | ✅ Full | `toBeVisible`, `toHaveURL`, `toHaveText`, `toHaveScreenshot`, etc. |
| Testing Library | ✅ Full | `toBeInTheDocument`, `toHaveTextContent`, `toHaveAttribute`, etc. |
| Cypress | ✅ Partial | `expect()` chains detected |
| Mocha + Chai | ✅ Partial | `describe`/`it` + `expect()` style |

### What gapix does

```text
Source Files (.ts/.tsx)            Test Files (.test.ts, .spec.ts)
       │                                    │
   AST Parse                           AST Parse
       │                                    │
  Extract:                             Extract:
  · functions & classes                · describe/it/test blocks
  · exports & complexity               · assertions per test case
  · parameters & return types          · matcher types & targets
       │                                    │
       └──────── Coverage Mapping ──────────┘
                      │
               Risk Assessment
                      │
           ┌──────────┴──────────┐
           │                     │
    AI Gap Analysis    Test Quality Scoring
           │                     │
           └──────────┬──────────┘
                      │
          Interactive HTML Report
```
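As a toy illustration of the "Coverage Mapping" step in the pipeline above, the idea is to line up what the source exports against what the tests exercise. The real tool works on parsed ASTs; this sketch only matches function names against test titles, and every name in it is made up:

```typescript
// Toy coverage mapping: an exported function counts as "tested" if any
// test title mentions it. gapix's real mapping is AST-based; this is a
// deliberately simplified sketch of the concept.
function mapCoverage(
  exportedFns: string[],
  testTitles: string[],
): { tested: string[]; untested: string[] } {
  const tested = exportedFns.filter((fn) =>
    testTitles.some((title) => title.includes(fn)),
  );
  const untested = exportedFns.filter((fn) => !tested.includes(fn));
  return { tested, untested };
}

// Hypothetical inputs:
const fns = ["parseConfig", "renderReport", "scoreTests"];
const titles = ["parseConfig handles empty input", "scoreTests grades a file"];
console.log(mapCoverage(fns, titles));
// { tested: ["parseConfig", "scoreTests"], untested: ["renderReport"] }
```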

### Interactive HTML report

Dark-themed, self-contained HTML dashboard — auto-opens in your browser:

- **Summary dashboard** — files analyzed, coverage %, tested/untested counts
- **Quality grades** — each test file scored 0–100 with letter grades
- **File-level breakdown** — colored progress bars, click to expand details
- **Function-level detail** — every function/method with tested/untested status
- **AI suggestions** — specific, runnable test code you can copy-paste



## 🔍 Test Quality Analysis

This is what makes gapix different. Instead of counting lines, it evaluates what your tests actually check:

| Finding | Severity | What it means |
| --- | --- | --- |
| No assertions | 🔴 High | Test runs code but never calls `expect()` |
| Weak matchers | 🟡 Medium | `toBeTruthy()` / `toBeDefined()` instead of checking real values |
| Single assertion | 🟡 Medium | Only one check in a test — probably not enough |
| Missing edge cases | 🟡 Medium | No tests for null, empty, or error inputs |
| Missing error handling | 🔴 High | Source has try/catch but no test triggers it |
| Wrong assertion target | 🟡 Medium | Asserting on a side effect, not the core behavior |
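Two of the findings above ("No assertions" and "Weak matchers") can be approximated with nothing more than string scanning, which conveys the idea even though gapix itself parses the AST. This sketch is not gapix's implementation, just a minimal illustration of the categories:

```typescript
// Toy classifier for a test body. gapix does this on the AST; plain
// string/regex scanning is only an approximation for illustration.
const WEAK_MATCHERS = /\.(toBeTruthy|toBeDefined)\s*\(/;

function auditTest(body: string): "no-assertions" | "weak-matchers" | "ok" {
  if (!body.includes("expect(")) return "no-assertions";
  if (WEAK_MATCHERS.test(body)) return "weak-matchers";
  return "ok";
}

console.log(auditTest("await run();"));                  // "no-assertions"
console.log(auditTest("expect(result).toBeTruthy();"));  // "weak-matchers"
console.log(auditTest("expect(result.ok).toBe(true);")); // "ok"
```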

### Quality scoring

| Score | Grade | What it means |
| --- | --- | --- |
| 85–100 | Excellent | Strong, specific assertions that catch real bugs |
| 65–84 | Good | Solid tests, minor improvements possible |
| 40–64 | Fair | Tests exist but have gaps — false confidence risk |
| 0–39 | Poor | Tests provide little value — likely false coverage |
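The score bands above map to grades in a straightforward threshold check. The thresholds come from the table; the function itself is an illustration, not part of gapix's API:

```typescript
// Maps a 0-100 quality score to its grade label, using the bands
// from the table above. Illustrative only; not gapix's actual code.
function gradeFor(score: number): "Excellent" | "Good" | "Fair" | "Poor" {
  if (score >= 85) return "Excellent";
  if (score >= 65) return "Good";
  if (score >= 40) return "Fair";
  return "Poor";
}

console.log(gradeFor(92)); // "Excellent"
console.log(gradeFor(50)); // "Fair"
```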



## 🤖 AI Providers

gapix works without AI using rule-based AST analysis. Add an AI provider for deeper, context-aware insights:

```bash
# OpenAI (recommended for best results)
gapix set-provider openai
gapix set-key YOUR_OPENAI_API_KEY

# Ollama (free, runs locally)
ollama pull llama3
gapix set-provider ollama

# Check your config
gapix show-config
```

| Mode | What you get |
| --- | --- |
| Without AI | Structural analysis — no assertions, weak matchers, single checks |
| With AI | Deep analysis — missing edge cases, wrong targets, context-specific suggestions with runnable code |



## 📖 CLI Reference

```bash
# Analyze a project
gapix analyze <path> [options]

# Options
  -o, --output <format>      json | markdown | html (default: json)
  -d, --output-dir <dir>     Output directory (default: .)
  -p, --pattern <patterns>   Glob patterns to include
  -e, --exclude <patterns>   Glob patterns to exclude
  --skip-quality             Skip test quality analysis

# Other commands
gapix show-report            Re-open the last HTML report
gapix set-provider <name>    Set AI provider (openai | ollama)
gapix set-key <key>          Set API key
gapix show-config            Show current config
```

### Output formats

| Format | Use case |
| --- | --- |
| HTML | Interactive dashboard, share with your team |
| JSON | CI/CD pipelines, custom tooling |
| Markdown | Pull request comments, wiki pages |



## 🛠 Development

```bash
git clone https://github.com/artshllk/gapix.git
cd gapix
npm install
npm run build
npm test
```

See `docs/testing-guide.md` for a detailed walkthrough.




## 🤝 Contributing

Contributions are welcome.

1. Fork the repo
2. Create a feature branch: `git checkout -b feature/my-feature`
3. Make changes and add tests
4. Run `npm test`
5. Open a Pull Request

### Areas where help is needed

- Support for more languages (JavaScript, Python, Go)
- Cypress `cy.should()` chain detection
- VS Code extension
- GitHub Actions / GitLab CI integration
- Performance optimization for large monorepos

Found a bug? Have an idea? Open an issue.



MIT License · Built by Art Shllaku
