10 changes: 10 additions & 0 deletions README.md
@@ -92,6 +92,16 @@ git clone https://github.com/orkait/hyperstack.git ~/.claude/skills/hyperstack
| **rust** | Rust best practices | 4 | 18 practices (good/bad pairs), ownership guide, cheatsheet |
| **design-tokens** | Tailwind v4 + OKLCH token system | 7 | 10 token categories, 8 build procedures, color ramp templates |
| **ui-ux** | UI/UX design principles | 6 | Typography, color, spacing, elevation, motion, a11y, component patterns |
| **behaviour-analysis** | Interaction & state audits | 2 | Heuristics, state inventory, edge case sweeps |
| **design-patterns-skill** | Core programming principles | 2 | Clean Code, Pragmatic Programmer patterns |
| **engineering-discipline** | Senior SDE-3 framework | 2 | Architecture reasoning, verification gates |
| **excalidraw** | Architecture diagrams | 2 | Automated diagram generation guidelines |
| **frame-animator** | Character animation | 2 | Frame-based tick animation and expressions |
| **golang-design-pattern** | Go design patterns | 2 | Patterns adapted for Go's philosophy |
| **pinchtab** | Browser automation | 2 | Web scraping and browser testing guidelines |
| **react-pro-coder** | Senior React/Next.js SDE | 2 | RSC-first constraints, Core Web Vitals |
| **readme-writer** | Project documentation | 2 | Evidence-based README generation |
| **security-review** | Security audits | 2 | OWASP review, vulnerability checklists |

---

45 changes: 45 additions & 0 deletions SKILL.md
@@ -62,6 +62,28 @@ triggers:
- typography scale
- color contrast
- wcag
- behaviour analysis
- state audit
- nielsen heuristics
- design patterns
- clean code
- engineering discipline
- code review
- architecture diagram
- excalidraw
- frame animation
- golang patterns
- pinchtab automation
- web scraping
- browser testing
- react pro
- rsc constraints
- core web vitals
- readme generation
- project documentation
- security review
- owasp
- vulnerability check
activation:
mode: fuzzy
priority: high
@@ -447,3 +469,26 @@ Elevation: 5 levels distinguished by bg-color not just borders; dark mode uses l
Motion: exits faster than entrances (subtract 50-100ms); ease-out entering, ease-in exiting, ease-in-out repositioning; never linear; details via ui_ux_get_principle duration-rules

Pre-ship: run ui_ux_get_checklist for each domain before shipping a new component or page

---

## Additional Engineering Skills

Hyperstack also bundles 10 specialized engineering skills. Rather than standard API references, these provide comprehensive guidelines, checklists, and principles.

For each of these skills, you can:
1. List all available documents: `[skill_name]_list_docs()`
2. Get the content of a specific document: `[skill_name]_get_doc({ path: "..." })` (Always start by reading `SKILL.txt`)

| Skill Prefix | Focus Area |
|--------------|------------|
| `behaviour_analysis` | UI/UX state audits, Nielsen heuristics |
| `design_patterns_skill` | Clean Code, Pragmatic Programmer concepts |
| `engineering_discipline` | Architecture reasoning, verification gates |
| `excalidraw` | Automated architecture diagram generation |
| `frame_animator` | Frame-based tick animation & expressions |
| `golang_design_pattern` | Go-specific implementations of design patterns |
| `pinchtab` | Browser automation, scraping, web testing |
| `react_pro_coder` | Senior Next.js/React constraints, RSC rules |
| `readme_writer` | Evidence-based README generation |
| `security_review` | OWASP audits, vulnerability checklists |
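
As an illustrative sketch of the two-step pattern above — the helper below is hypothetical, not part of this codebase; the exact call syntax depends on your MCP host — each skill prefix maps to exactly two document tools:

```typescript
// Illustrative only: derive the two document tools exposed per skill.
// Prefixes use underscores (the skill name with dashes replaced).
function docTools(prefix: string): { list: string; get: string } {
  return {
    list: `${prefix}_list_docs`, // enumerates available documents
    get: `${prefix}_get_doc`,    // fetches one document by path
  };
}

// e.g. the behaviour-analysis skill exposes:
const tools = docTools("behaviour_analysis");
```

Reading `SKILL.txt` via the `get` tool first gives the skill's own instructions before drilling into its reference documents.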
20 changes: 20 additions & 0 deletions src/index.ts
Expand Up @@ -12,6 +12,16 @@ import { golangPlugin } from "./plugins/golang/index.js";
import { rustPlugin } from "./plugins/rust/index.js";
import { designTokensPlugin } from "./plugins/design-tokens/index.js";
import { uiUxPlugin } from "./plugins/ui-ux/index.js";
import { behaviourAnalysisPlugin } from "./plugins/behaviour-analysis/index.js";
import { designPatternsSkillPlugin } from "./plugins/design-patterns-skill/index.js";
import { engineeringDisciplinePlugin } from "./plugins/engineering-discipline/index.js";
import { excalidrawPlugin } from "./plugins/excalidraw/index.js";
import { frameAnimatorPlugin } from "./plugins/frame-animator/index.js";
import { golangDesignPatternPlugin } from "./plugins/golang-design-pattern/index.js";
import { pinchtabPlugin } from "./plugins/pinchtab/index.js";
import { reactProCoderPlugin } from "./plugins/react-pro-coder/index.js";
import { readmeWriterPlugin } from "./plugins/readme-writer/index.js";
import { securityReviewPlugin } from "./plugins/security-review/index.js";

const server = new McpServer({
name: "hyperstack",
@@ -28,6 +38,16 @@ loadPlugins(server, [
rustPlugin,
designTokensPlugin,
uiUxPlugin,
behaviourAnalysisPlugin,
designPatternsSkillPlugin,
engineeringDisciplinePlugin,
excalidrawPlugin,
frameAnimatorPlugin,
golangDesignPatternPlugin,
pinchtabPlugin,
reactProCoderPlugin,
readmeWriterPlugin,
securityReviewPlugin,
]);

async function main() {
3 changes: 3 additions & 0 deletions src/plugins/behaviour-analysis/index.ts
@@ -0,0 +1,3 @@
import { createStaticSkillPlugin } from "../../shared/static-skill.js";

export const behaviourAnalysisPlugin = createStaticSkillPlugin("behaviour-analysis", "The behaviour-analysis skill.");
162 changes: 162 additions & 0 deletions src/plugins/behaviour-analysis/snippets/SKILL.txt
@@ -0,0 +1,162 @@
---
name: behaviour-analysis
description: Systematic UI/UX behaviour analysis for interactive applications. Audits every user action, state transition, view mode, and edge case like an experienced QA + UX engineer. Produces a complete interaction matrix with expected vs actual behaviour, finds inconsistencies, dead states, and missing feedback. Use when reviewing UI behaviour, before shipping features, or when something "feels off" but you can't pinpoint why.
compatibility: Requires Read, Grep, Glob, WebSearch tools. Works with any frontend codebase.
metadata:
author: kai
version: "1.0"
---

# Behaviour Analysis

Systematic interaction audit combining UX heuristics, QA state-machine thinking, and developer code-reading.

## When to Use

- After implementing a feature with multiple interaction modes
- When the user reports something "doesn't feel right" or "is inconsistent"
- Before shipping — final behavioural review
- When adding a new view mode, action, or state to an existing system

## Process

### Phase 1: Inventory (read code, build the map)

Before judging anything, build a complete picture:

1. **Identify all state variables** that affect UI behaviour
- Read the store/state management files
- List every piece of state: data, config, transient UI state
- Note which are persisted vs ephemeral

2. **Identify all user actions** that modify state
- Buttons, clicks, drags, keyboard shortcuts, sliders, toggles
- API calls triggered by actions
- Implicit actions (hover, scroll, resize, mode switch)

3. **Identify all view modes / display states**
- Tabs, toggles, conditional rendering branches
   - How different modes compose (layout mode × view mode × highlight state)

4. **Identify all feedback mechanisms**
- Visual feedback (highlighting, dimming, borders, badges, glow)
- Textual feedback (labels, counts, status text)
- Animated feedback (transitions, physics, spring effects)
- Absence of feedback (silent failures, no-ops)

Output: A **state inventory table** and an **action inventory table**.
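
As a sketch of what one state-inventory row might capture — the shape below is an assumption for illustration, not an artifact this skill prescribes:

```typescript
// Hypothetical row shape for the Phase 1 state inventory table.
interface StateVariable {
  name: string;
  kind: "data" | "config" | "transient-ui"; // the three buckets named above
  persisted: boolean;                        // persisted across reloads vs ephemeral
}

// Example entry: a highlight selection is transient UI state, not persisted.
const example: StateVariable = {
  name: "highlightedNodeIds",
  kind: "transient-ui",
  persisted: false,
};
```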

### Phase 2: Interaction Matrix (the core analysis)

Build a matrix: **every action × every relevant state combination**.

For each cell ask:
- **What should happen?** (expected behaviour — think like a UX designer)
- **What does happen?** (actual behaviour — read the code path)
- **Match?** OK / BUG / UX-ISSUE / MISSING-FEEDBACK

Structure the matrix by category:

```markdown
| # | Action | Context/State | Expected | Actual | Status |
|---|--------|---------------|----------|--------|--------|
```

Categories to cover:
- **CRUD actions** (create, read, update, delete of primary data)
- **Selection & highlighting** (what gets selected, how, clear)
- **View mode transitions** (switching between modes)
- **Layout mode transitions** (switching layout engines)
- **Configuration changes** (sliders, toggles, settings)
- **Drag & interaction** (drag, hover, click targets)
- **Reset & cleanup** (what gets cleared, what persists)
- **Edge cases** (empty state, max state, conflicting states)
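
The matrix rows lend themselves to a small typed record; the shape below is an illustrative assumption, not an API from this repo:

```typescript
// Hypothetical row shape for the Action × State interaction matrix.
type MatrixStatus = "OK" | "BUG" | "UX-ISSUE" | "MISSING-FEEDBACK";

interface MatrixRow {
  action: string;   // the user action under test
  context: string;  // the state combination in which it fires
  expected: string; // UX-designer view: what should happen
  actual: string;   // developer view: what the code path actually does
  status: MatrixStatus;
}

// Example cell: an action whose cleanup is incomplete.
const row: MatrixRow = {
  action: "Clear highlights",
  context: "highlight mode active, 3 nodes selected",
  expected: "all highlights removed, neutral view restored",
  actual: "highlights removed, but selection count badge persists",
  status: "BUG",
};
```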

### Phase 3: Heuristic Audit

Apply Nielsen's 10 heuristics (adapted for interactive visualizations):

1. **Visibility of system status** — Does the UI show what's active, selected, loading?
2. **Match between system and real world** — Do labels make sense? Are actions named clearly?
3. **User control and freedom** — Can the user undo/escape from any state? Is there always a way back?
4. **Consistency and standards** — Do similar actions behave the same way everywhere?
5. **Error prevention** — Can the user reach a broken/dead state?
6. **Recognition rather than recall** — Is the current mode/state visible without memorizing?
7. **Flexibility and efficiency** — Are there shortcuts for power users?
8. **Aesthetic and minimalist design** — Is information presented at the right density?
9. **Help users recover from errors** — What happens on API failure, empty results, bad input?
10. **Accessibility** — Keyboard navigation, screen reader, reduced motion?

Refer to [references/heuristics.txt](references/heuristics.txt) for detailed questions per heuristic.

### Phase 4: Edge Case Sweep

Systematically check:

**Empty states:**
- No data loaded
- No results
- No highlights active
- Empty search filter results

**Boundary states:**
- Maximum data (100+ nodes)
- Single node, no edges
- All nodes highlighted
- All sliders at min/max

**Transition states:**
- Mode switch with active highlights
- Mode switch mid-drag
- Query execution while loading
- Rapid repeated actions (double-click, spam slider)

**Composition states:**
- Every view mode × every layout mode
- Highlight + search filter active simultaneously
- Collapsed groups + highlighting + path results

### Phase 5: Report

Output a structured report:

```markdown
## State Inventory
[table of all state variables]

## Action × State Matrix
[full interaction matrix with status]

## Heuristic Findings
[issues grouped by heuristic, with severity]

## Edge Cases
[bugs and UX issues found]

## Verdict
[summary: how many behaviours tested, how many correct, critical issues]
```

Severity levels:
- **CRITICAL** — broken functionality, data loss, unreachable state
- **HIGH** — major UX inconsistency, confusing behaviour
- **MEDIUM** — minor inconsistency, missing feedback
- **LOW** — cosmetic, nice-to-have

## Research Enhancement

Before starting the analysis, search for:
- Current best practices for the specific UI pattern being analyzed (graph viz, form, dashboard, etc.)
- Known UX patterns for the interaction model (drag-and-drop, force-directed graphs, etc.)
- Accessibility guidelines for the specific component type

Use findings to set expectations in the matrix — "expected behaviour" should be informed by industry standards, not just gut feeling.

## Key Principles

- **Think like a user first** — what would someone expect when they click this?
- **Think like QA second** — what's the worst thing that could happen?
- **Think like a developer third** — read the code to verify, don't assume
- **Every action must have visible feedback** — if clicking something does nothing visibly, that's a bug
- **Every state must be escapable** — the user should never be "stuck"
- **Composition must be tested** — features that work alone often break in combination
114 changes: 114 additions & 0 deletions src/plugins/behaviour-analysis/snippets/references/heuristics.txt
@@ -0,0 +1,114 @@
# Heuristic Evaluation Questions

Detailed questions per Nielsen's heuristic, adapted for interactive data visualizations and modern web apps.

## 1. Visibility of System Status

- Is the current view mode clearly indicated (active tab, highlight, selected state)?
- Is there a loading indicator when async operations run?
- Does the active/selected result show a distinct visual treatment?
- When a filter is active, is it obvious that results are filtered?
- Do sliders/toggles show their current value?
- After dragging a node, is the settled state visually clear?
- Is the current layout mode (dagre/cluster) clearly indicated?

## 2. Match Between System and Real World

- Do button labels describe what they DO, not what they ARE? ("Clear" not "X")
- Are view mode names intuitive? ("Live" vs "Results" vs "Highlight" — does a new user understand these?)
- Do edge/node labels match the domain vocabulary?
- Are slider labels clear about what they control?

## 3. User Control and Freedom

- Can every highlight be cleared?
- Can every mode switch be reversed?
- Can collapsed groups be re-expanded?
- Can the user undo the last action?
- Is there a "reset to defaults" for settings?
- Can the user escape from every state back to a clean view?

## 4. Consistency and Standards

- Do all "clear" actions clear the same scope of state?
- Do all click targets have hover states?
- Are all buttons the same size/style for the same level of importance?
- Does clicking behave the same way on result cards, nodes, edges?
- Do both layout modes support the same view modes identically?
- Are keyboard shortcuts consistent with platform conventions?

## 5. Error Prevention

- Can slider values be set to break the layout?
- Can the user reach a state where no nodes are visible and there's no indication why?
- Can rapid clicking cause race conditions?
- Does the UI prevent invalid state combinations?
- Are destructive actions (reset, clear all) confirmed or easily undoable?

## 6. Recognition Rather Than Recall

- Is the current state visible at all times (not hidden in a menu)?
- Can the user see which result is highlighted without scrolling the results panel?
- Are collapsed group contents summarized (count, kind)?
- Is the search filter text always visible when active?

## 7. Flexibility and Efficiency

- Can power users access functions via keyboard?
- Are there shortcuts for common workflows (run + highlight)?
- Can the user adjust layout parameters without opening a dialog?
- Is the most common action the easiest to perform?

## 8. Aesthetic and Minimalist Design

- Are controls only shown when relevant? (dagre sliders hidden in cluster mode)
- Is information density appropriate — not too sparse, not overwhelming?
- Are animations purposeful (communicate state change) or decorative (just pretty)?
- Do hover/highlight effects add information or just noise?

## 9. Help Users Recover from Errors

- What happens when the API is unreachable?
- What happens when a query returns an error?
- What happens when the graph data is malformed?
- Are error messages actionable ("server unreachable — is it running?")?
- Can the user retry failed operations?

## 10. Accessibility

- Can all interactive elements be reached via keyboard (Tab)?
- Do interactive elements have focus indicators?
- Is there sufficient color contrast for all states?
- Do animations respect `prefers-reduced-motion`?
- Are drag interactions achievable without a mouse?
- Do screen readers announce state changes (highlights, mode switches)?
- Are ARIA labels present on non-text interactive elements?
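
One of these checks is directly mechanizable: a minimal sketch of reading the reduced-motion preference via the standard `matchMedia` browser API, guarded so it degrades to `false` outside a DOM environment:

```typescript
// Sketch: read the user's reduced-motion preference (standard matchMedia API).
// The globalThis guard lets this also run in non-browser environments.
const mm = (globalThis as any).matchMedia as
  | undefined
  | ((query: string) => { matches: boolean });

const prefersReducedMotion: boolean =
  typeof mm === "function" && mm("(prefers-reduced-motion: reduce)").matches;

// When true, skip or shorten animations (e.g. jump to final positions).
```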

## Visualization-Specific Heuristics

Beyond Nielsen's 10, for data visualizations check:

### Data-Ink Ratio
- Is every visual element carrying information?
- Can any decoration be removed without losing meaning?

### Gestalt Principles
- Are related nodes visually grouped (proximity, color, enclosure)?
- Do edges clearly connect their endpoints?
- Is the visual hierarchy clear (important nodes larger/brighter)?

### Interaction Affordance
- Do draggable things look draggable (cursor change)?
- Do clickable things look clickable (hover effect)?
- Are non-interactive elements clearly non-interactive?

### State Feedback Latency
- Is feedback immediate (<100ms) for direct manipulation (drag)?
- Is feedback fast (<300ms) for triggered actions (click to highlight)?
- Are long operations (layout compute) shown with progress indication?

## Sources

- [Nielsen Norman Group: 10 Usability Heuristics](https://www.nngroup.com/articles/ten-usability-heuristics/)
- [Maze: How to Conduct a Heuristic Evaluation](https://maze.co/guides/usability-testing/heuristic-evaluation/)
- [Adam Fard: Heuristic Evaluation Guide](https://adamfard.com/blog/heuristic-evaluation)
3 changes: 3 additions & 0 deletions src/plugins/design-patterns-skill/index.ts
@@ -0,0 +1,3 @@
import { createStaticSkillPlugin } from "../../shared/static-skill.js";

export const designPatternsSkillPlugin = createStaticSkillPlugin("design-patterns-skill", "The design-patterns-skill skill.");