AI SDKs solve calling models. Model routers give you a catalog. But choosing the right model for a task — comparing pricing, matching capabilities, handling different ID formats across providers — that logic keeps getting rewritten in every app. pickai is the missing layer between "here are 300 models" and "use this one."
Zero runtime dependencies, works everywhere (browser, Node, edge). Built on OpenRouter's model catalog — the only API that provides pricing, context windows, and capabilities across all major providers in one place.
```sh
pnpm add pickai
```

```ts
import { parseOpenRouterCatalog, enrich, recommend, Purpose } from "pickai";

const response = await fetch("https://openrouter.ai/api/v1/models");
const models = parseOpenRouterCatalog(await response.json()).map(enrich);
const [model] = recommend(models, Purpose.Balanced); // → top standard-tier model
```

Every result has ready-to-use IDs for calling the model:
```ts
model.apiSlugs.openRouter; // "anthropic/claude-sonnet-4-5" — for OpenRouter API
model.apiSlugs.direct;     // "claude-sonnet-4-5-20250929" — for provider APIs / Vercel AI SDK
```

```ts
import { groupByProvider, Tier, Cost } from "pickai";

recommend(models, Purpose.Cheap, { count: 3 });  // top 3 efficient-tier models
recommend(models, Purpose.Quality);              // newest flagship-tier models
const groups = groupByProvider(models);          // organize by provider for UI
models.filter(m => m.costTier <= Cost.Standard); // affordable models
models.filter(m => m.tier >= Tier.Standard);     // capable models
```

`recommend()` accepts custom profiles with any mix of built-in and custom scoring criteria — including external benchmark data:
```ts
import { recommend, costEfficiency, recency, contextCapacity, Tier } from "pickai";

recommend(models, {
  preferredTier: Tier.Standard,
  criteria: [
    { criterion: costEfficiency, weight: 0.5 },
    { criterion: recency, weight: 0.3 },
    { criterion: contextCapacity, weight: 0.2 },
  ],
  require: { tools: true },
});
```

See Built-in Profiles, Custom Profiles, and the Scoring guide for external benchmark integration.
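Conceptually, a weighted-criteria profile reduces to a normalized weighted sum over per-model scores, which is also how external benchmark data can enter the mix. The following is a self-contained sketch of that idea, not pickai's actual implementation: the `Model` shape, the criterion signature, and the benchmark numbers are all illustrative.

```typescript
// Hypothetical sketch: blend an external benchmark table into a weighted score.
type Model = { id: string; inputCostPerMTok: number };
type Criterion = (m: Model) => number; // maps a model to a 0–1 score

// External benchmark data keyed by model ID (values are made up for illustration).
const benchmarkScores: Record<string, number> = {
  "anthropic/claude-sonnet-4-5": 0.89,
  "openai/gpt-4o-mini": 0.82,
};

const benchmark: Criterion = (m) => benchmarkScores[m.id] ?? 0;
const costEfficiency: Criterion = (m) => 1 / (1 + m.inputCostPerMTok); // cheaper → closer to 1

// Weighted sum of all criteria, mirroring the { criterion, weight } profile shape above.
function score(m: Model, criteria: { criterion: Criterion; weight: number }[]): number {
  return criteria.reduce((sum, c) => sum + c.weight * c.criterion(m), 0);
}

const weights = [
  { criterion: benchmark, weight: 0.6 },
  { criterion: costEfficiency, weight: 0.4 },
];

const candidates: Model[] = [
  { id: "anthropic/claude-sonnet-4-5", inputCostPerMTok: 3 },
  { id: "openai/gpt-4o-mini", inputCostPerMTok: 0.15 },
];

// Sort descending by blended score; the cheap model wins under these weights.
const ranked = [...candidates].sort((a, b) => score(b, weights) - score(a, weights));
```

With a 0.6/0.4 split, the much cheaper model outscores the stronger-benchmark one here; shifting weight toward `benchmark` flips the ranking.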
| When you need to... | Use |
|---|---|
| Get data in | `parseOpenRouterCatalog()` — turns the OpenRouter response into `Model[]` |
| Pick a model | `recommend(models, purpose)` — scores, filters, and returns the best match |
| Prepare for UI | `enrich()` adds display labels and tiers; `groupByProvider()` organizes into provider sections |
| Blend in benchmarks | `recommend()` with custom criteria, or `scoreModels()` for full control — see Scoring |
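Once a model is picked, its OpenRouter slug drops straight into a chat-completions request. The endpoint and request fields below follow OpenRouter's public chat API; the hard-coded slug and placeholder key stand in for values you would get from `recommend()` and your environment.

```typescript
// Illustrative: call the recommended model via OpenRouter's chat completions API.
const slug = "anthropic/claude-sonnet-4-5"; // e.g. model.apiSlugs.openRouter
const apiKey = "<OPENROUTER_API_KEY>";      // placeholder — supply your own key

const request = {
  method: "POST",
  headers: {
    Authorization: `Bearer ${apiKey}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: slug, // ready-to-use ID — no manual provider mapping needed
    messages: [{ role: "user", content: "Summarize this repo in one line." }],
  }),
};

// await fetch("https://openrouter.ai/api/v1/chat/completions", request);
```

The `direct` slug plays the same role when calling a provider's own API or the Vercel AI SDK instead of OpenRouter.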
See the full API Reference for types, and the topic guides for Getting Started, Scoring, Classification, and Utilities.
```sh
pnpm install          # install dependencies
pnpm build            # build (tsup → ESM + CJS + .d.ts)
pnpm test             # run tests (vitest)
pnpm update-fixture   # refresh OpenRouter model fixture
```

```
src/
  adapters/    # Provider-specific parsers (OpenRouter)
  with-*.ts    # Composable enrichers (classification, display labels, AA slug)
  enrich.ts    # Convenience wrapper composing with* functions
  *.ts         # One module per concern (classify, score, group, etc.)
  *.test.ts    # Co-located tests
scripts/
  update-openrouter-fixture.sh
  check-aa-slugs.ts   # Validate AA slug derivation against live data
```
Three layers of tests catch different classes of problems:
- Unit tests (`*.test.ts`, co-located with each module) — verify individual functions in isolation using synthetic fixtures from `src/test-utils.ts`. Catch logic regressions in classification, scoring, formatting, ID normalization, etc.
- Integration tests (`integration.test.ts`) — exercise the full pipeline (`parseOpenRouterCatalog` → `enrich` → `recommend`/`score`/`select`/`group`) against the real OpenRouter fixture. Catch cross-module regressions where changes in one module (e.g., tier classification thresholds) silently break downstream behavior (e.g., recommendation results).
- Smoke tests (`smoke.test.ts`) — verify that every public export from both entry points (`pickai` and `pickai/adapters`) resolves correctly. Catch broken barrel exports — the kind of bug where the library builds fine but consumers get `undefined` at runtime because an export was renamed or removed.
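The core of a smoke test like that is a scan for exports that resolve to `undefined`. A minimal self-contained sketch (the helper name and the fake namespace are ours; in the real test the namespace would come from `import * as pickai from "pickai"` with vitest assertions around it):

```typescript
// Sketch: detect barrel exports that resolve to undefined at runtime.
function findBrokenExports(ns: Record<string, unknown>): string[] {
  return Object.entries(ns)
    .filter(([, value]) => value === undefined)
    .map(([name]) => name);
}

// Fake namespace standing in for `import * as pickai from "pickai"`:
// `oldExport` simulates a renamed/removed symbol still re-exported by the barrel.
const fakeNamespace = {
  recommend: () => [],
  enrich: (m: unknown) => m,
  oldExport: undefined,
};

const broken = findBrokenExports(fakeNamespace);
```

A non-empty result fails the suite, surfacing the broken re-export before any consumer hits it at runtime.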
Run `pnpm build` before committing to catch type errors that vitest might miss.
See the CHANGELOG for breaking changes and migration guidance between versions.
MIT