A collection of battle-tested AI agent skills — structured knowledge that coding agents can use to write better code.
These skills are built using skill-creator from the Anthropic skills repo. Each skill goes through multiple rounds of eval-driven iteration: write the skill, run evaluations against real-world coding prompts, review failures, refine the rules, and repeat until the skill reliably produces correct code. This isn't a one-shot prompt — it's a tested, iterated artifact.
I (@DaniAkash) use these skills daily in my own coding workflow. They exist because I kept seeing AI agents make the same mistakes, and I wanted a fix that actually sticks.
Guides correct usage of React's useEffect hook — when to use it, when NOT to use it, and what modern alternatives exist. Covers derived state, data fetching, event handlers, subscriptions, and all the cases where useEffect is the wrong tool.
`npx skills add DaniAkash/skills --skill better-use-effect`
5 evals · 4 iterations
Visual diff comparison between two screenshots using ImageMagick. Produces pixel-level diff highlights, side-by-side composites, blend overlays, and structured reports with numerical metrics (RMSE, AE, SSIM). Handles content-only differences using structural similarity, and guides agents through an iterative design QA loop: fix → screenshot → compare → repeat.
`npx skills add DaniAkash/skills --skill design-compare`
4 evals · 2 iterations
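The numerical metrics are simple to reason about. As a rough illustration of what RMSE and AE measure (not the skill's actual implementation, which shells out to ImageMagick), here they are computed over two small grayscale pixel grids:

```python
import math

def rmse(img_a, img_b):
    """Root-mean-square error over two equally sized grayscale grids (0-255)."""
    flat_a = [p for row in img_a for p in row]
    flat_b = [p for row in img_b for p in row]
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(flat_a, flat_b)) / len(flat_a))

def absolute_error(img_a, img_b, threshold=0):
    """AE: count of pixels whose difference exceeds the threshold."""
    return sum(
        1
        for row_a, row_b in zip(img_a, img_b)
        for a, b in zip(row_a, row_b)
        if abs(a - b) > threshold
    )

before = [[0, 0], [0, 0]]
after  = [[0, 0], [0, 10]]
print(rmse(before, after))            # 5.0
print(absolute_error(before, after))  # 1
```

A lower RMSE means the renders are closer overall; AE pinpoints how many individual pixels moved at all, which is why the skill reports both.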
Gate-keeping code review skill that channels a senior engineer on-call who has been paged at 3am one too many times. Reviews diffs with zero tolerance — asserts (never suggests), applies mandatory severity tiers (P0–P3), and outputs a structured verdict designed for use in agentic pre-merge pipelines. The reviewer always reads the entire diff before returning a verdict, surfacing every issue in a single pass.
`npx skills add DaniAkash/skills --skill angry-reviewer`
4 evals · 1 iteration
Audits any website for responsive design issues across all major device breakpoints using agent-browser. Parallelizes screenshot capture across 4 device groups, runs a 10-point layout check matrix at each breakpoint, and produces a detailed report with screenshots, severity-classified findings, layout transition analysis, and CSS fix suggestions. For authenticated pages, falls back to Chrome DevTools MCP to audit a live logged-in session.
`npx skills add DaniAkash/skills --skill responsiveness-audit`
3 evals · 2 iterations
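The parallel-capture step can be sketched as follows. The device groups and widths here are illustrative assumptions, not the skill's exact configuration, and `capture` is a stub for the real agent-browser screenshot call:

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative device groups; the real breakpoints live in the skill itself.
DEVICE_GROUPS = {
    "mobile":  [320, 375, 414],
    "tablet":  [768, 834],
    "laptop":  [1280, 1440],
    "desktop": [1920, 2560],
}

def capture(group, widths):
    """Stub for the agent-browser step: one screenshot per breakpoint."""
    return [(group, w, f"screenshot-{group}-{w}px.png") for w in widths]

# Run all four device groups concurrently, mirroring the audit's fan-out.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(capture, g, ws) for g, ws in DEVICE_GROUPS.items()]
    shots = [s for f in futures for s in f.result()]

print(len(shots))  # 9 breakpoints captured across 4 groups
```

Fanning out by device group rather than by individual breakpoint keeps the concurrency bounded while still overlapping the slow screenshot work.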
Each skill follows a rigorous process powered by skill-creator:
- Draft — Write the initial skill from domain expertise and reference material
- Evals — Define realistic coding prompts with specific assertions that the generated code must satisfy
- Run & Review — Run evals against the skill, review where the agent gets it wrong
- Iterate — Refine rules, add anti-patterns, tighten the skill based on failures
- Repeat — Multiple rounds until eval pass rates are consistently high
The evals/ directory in each skill contains the test cases used during development. You can re-run them to verify the skill works with your agent setup.
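An eval case is just a prompt paired with assertions on the generated code. A minimal sketch of the run-and-review loop (the real harness in skill-creator is more involved; `run_agent` here is a hypothetical stand-in that returns a canned answer):

```python
def run_agent(prompt, skill):
    """Stand-in for invoking a coding agent with the skill loaded."""
    return "const items = useMemo(() => filterItems(data), [data]);"

evals = [
    {
        "prompt": "Derive a filtered list from props without useEffect",
        "assertions": [
            lambda code: "useEffect" not in code,  # must not reach for an effect
            lambda code: "useMemo" in code,        # derived state stays derived
        ],
    },
]

failures = []
for case in evals:
    code = run_agent(case["prompt"], skill="better-use-effect")
    for check in case["assertions"]:
        if not check(code):
            failures.append(case["prompt"])
            break  # one failed assertion fails the whole case

print(f"{len(evals) - len(failures)}/{len(evals)} evals passed")
```

Each failed prompt points at a gap in the skill's rules, which is what drives the next iteration.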
skills/
└── <skill-name>/
    ├── SKILL.md        # The skill definition
    ├── references/     # Supporting reference material
    └── evals/          # Evaluation test cases
Install any skill into your project with:
`npx skills add DaniAkash/skills --skill <skill-name>`

This adds the skill to your project so your coding agent automatically picks it up. The skill provides rules, patterns, and anti-patterns that guide the agent toward higher-quality code in that domain.