feat: add multi-provider NLP support for intent parsing #1

@Noooblien

Description

Currently the parser relies on pattern matching and heuristics. Add support for multiple LLM providers as the NLP backbone so teams can choose based on cost, latency, or preference.

Providers to support:

  • anthropic — Claude API (claude-haiku for speed, claude-sonnet for accuracy)
  • openai — GPT-4o / GPT-4o-mini
  • google — Gemini 1.5 Flash
  • mistral — Mistral Small (good for cost-sensitive deployments)
  • cohere — Command R (strong on structured extraction tasks)
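One way to wire up per-provider defaults is a simple lookup table. A minimal sketch — the model identifiers below are taken from the list above and are illustrative, not verified SDK model strings:

```typescript
// Default model per provider; identifiers are illustrative,
// not exact API model strings for each SDK.
const DEFAULT_MODELS: Record<string, string> = {
  anthropic: "claude-haiku",    // claude-sonnet when accuracy matters
  openai: "gpt-4o-mini",        // gpt-4o when accuracy matters
  google: "gemini-1.5-flash",
  mistral: "mistral-small",     // cost-sensitive deployments
  cohere: "command-r",          // structured extraction
};

// Resolve the model: explicit override wins, otherwise the provider default.
function resolveModel(provider: string, override?: string): string {
  return override ?? DEFAULT_MODELS[provider];
}
```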

Proposed interface:

const parser = new IntentParser({
  provider: "anthropic", // swap to "openai" | "google" | "mistral" | "cohere"
  apiKey: process.env.API_KEY,
  model: "claude-haiku-4-5" // optional override
});

What this unlocks:

  • Teams not on Anthropic can still use intent-parser
  • Benchmark accuracy and latency across providers per corridor
  • Fallback to a secondary provider if primary is down
  • Cost optimization — route simple intents to cheaper models
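The fallback bullet could look roughly like this sketch (function and type names are hypothetical, not existing code): try the primary provider, and only call the secondary if the primary throws.

```typescript
// Hypothetical adapter call signature; real adapters would wrap provider SDKs.
type ParseFn = (text: string) => Promise<{ intent: string }>;

// Try the primary provider first; fall back to the secondary if it fails.
async function parseWithFallback(
  text: string,
  primary: ParseFn,
  secondary: ParseFn,
): Promise<{ intent: string; provider: "primary" | "secondary" }> {
  try {
    const result = await primary(text);
    return { ...result, provider: "primary" };
  } catch {
    const result = await secondary(text);
    return { ...result, provider: "secondary" };
  }
}
```

The same shape extends naturally to cost routing: pick `primary` based on intent complexity rather than a fixed preference order.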

Suggested approach:

  • Abstract LLM calls behind a ProviderAdapter interface
  • Each provider gets its own adapter — AnthropicAdapter, OpenAIAdapter, etc.
  • Keep regex heuristics as the fast path, LLM as the enrichment layer
  • Add a parsed_by field to the response showing which provider and model were used
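Putting the four bullets together, the adapter abstraction could be sketched like this (all interfaces, fields, and the regex fast path are illustrative, not existing code; the stub adapter makes no network calls):

```typescript
// Hypothetical shape of a parsed intent; parsed_by records which backend ran.
interface ParsedIntent {
  intent: string;
  confidence: number;
  parsed_by: { provider: string; model: string } | "heuristics";
}

// Each provider implements this; the parser depends only on the interface.
interface ProviderAdapter {
  readonly provider: string;
  readonly model: string;
  parseIntent(text: string): Promise<{ intent: string; confidence: number }>;
}

// Stub standing in for a real AnthropicAdapter (a real one would call the API).
class AnthropicAdapter implements ProviderAdapter {
  readonly provider = "anthropic";
  constructor(readonly model: string = "claude-haiku-4-5") {}
  async parseIntent(_text: string) {
    return { intent: "unknown", confidence: 0.5 };
  }
}

// Fast path: regex heuristics; the LLM adapter only runs as enrichment.
async function parse(text: string, llm: ProviderAdapter): Promise<ParsedIntent> {
  if (/^(send|transfer)\b/i.test(text)) {
    return { intent: "transfer", confidence: 0.9, parsed_by: "heuristics" };
  }
  const enriched = await llm.parseIntent(text);
  return { ...enriched, parsed_by: { provider: llm.provider, model: llm.model } };
}
```

A `MockAdapter` implementing the same interface also makes the enrichment layer testable without hitting any provider.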
