A TypeScript library for running AI models across multiple providers with a unified interface.
Agentic provides a simple, consistent API for interacting with AI providers, currently Cloudflare Workers AI and Groq. It handles provider-specific execution details, retries with exponential backoff, JSON mode, tool calling, and response formatting.
```bash
npm install @em3odme/agentic
pnpm install @em3odme/agentic
yarn add @em3odme/agentic
```

Requirements: Node.js >= 18.0.0
```ts
import { ModelRunner, groqAIModel } from '@em3odme/agentic';

const runner = new ModelRunner({
  GROQ_API_KEY: process.env.GROQ_API_KEY!,
});

const result = await runner.run({
  messages: [{ role: 'user', content: 'Hello, how are you?' }],
  model: groqAIModel('llama-3.3-70b-versatile'),
});

console.log(result.content);
```

| Provider | JSON Mode | Tools | Streaming | Setup |
|---|---|---|---|---|
| Cloudflare Workers AI | ✓ | ✓ | ✗ | AI binding |
| Groq | ✓ | ✓ | ✓ | GROQ_API_KEY |
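The "AI binding" in the Cloudflare setup column refers to a Workers AI binding declared in the Worker's `wrangler.toml`, which is what exposes `env.AI` to your code. A minimal sketch (project name, entry point, and date are placeholder values for illustration):

```toml
name = "my-worker"                 # placeholder project name
main = "src/index.ts"
compatibility_date = "2024-09-01"  # placeholder date

[ai]
binding = "AI"   # available in the Worker as env.AI
```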
Run open-source models (Llama, DeepSeek, Mistral, etc.) on Cloudflare's global edge network.
```ts
import { ModelRunner, cloudflareAIModel } from '@em3odme/agentic';

const runner = new ModelRunner({ AI: env.AI });

const result = await runner.run({
  messages: [{ role: 'user', content: 'Hello!' }],
  model: cloudflareAIModel('@cf/meta/llama-3.3-70b-instruct-fp8-fast'),
});
```

Features:
- Serverless GPU infrastructure
- JSON mode for structured responses
- Function calling capabilities
- Embedding models support
- Automatic retries with exponential backoff
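Exponential backoff, mentioned above and tunable via `updateRuntimeConfig` later in this README, typically computes the delay for attempt *n* as `min(maxDelay, baseDelay * 2^n)`. The library's exact schedule isn't documented here; this is a generic sketch of that pattern:

```typescript
interface RetryConfig {
  maxAttempts: number;
  baseDelay: number; // ms
  maxDelay: number;  // ms
}

// Generic exponential-backoff schedule: the delay doubles on each attempt
// and is capped at maxDelay. (A sketch; real implementations often add jitter.)
function backoffDelays({ maxAttempts, baseDelay, maxDelay }: RetryConfig): number[] {
  return Array.from({ length: maxAttempts }, (_, attempt) =>
    Math.min(maxDelay, baseDelay * 2 ** attempt),
  );
}

console.log(backoffDelays({ maxAttempts: 5, baseDelay: 1000, maxDelay: 10000 }));
// → [1000, 2000, 4000, 8000, 10000]
```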
High-performance LLM inference with ultra-low latency.
```ts
import { ModelRunner, groqAIModel } from '@em3odme/agentic';

const runner = new ModelRunner({
  GROQ_API_KEY: process.env.GROQ_API_KEY!,
});

const result = await runner.run({
  messages: [{ role: 'user', content: 'Hello!' }],
  model: groqAIModel('llama-3.3-70b-versatile'),
});
```

Features:
- Real-time streaming responses
- JSON mode for structured outputs
- Tool/function calling
- Speech recognition (Whisper models)
- Ultra-fast inference
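The JSON-mode example that follows calls `JSON.parse` on `result.content` directly. A small defensive wrapper (a hypothetical helper, not part of the library) avoids an unhandled exception on the occasions when a model returns malformed JSON:

```typescript
// Hypothetical helper: safely parse a JSON-mode response string.
// Returns the parsed value, or `fallback` if the content is not valid JSON.
function parseJsonContent<T>(content: string, fallback: T): T {
  try {
    return JSON.parse(content) as T;
  } catch {
    return fallback;
  }
}

// Example with a well-formed and a malformed JSON-mode response:
const languages = parseJsonContent<string[]>('["TypeScript", "Python", "Rust"]', []);
console.log(languages.length); // → 3
console.log(parseJsonContent<string[]>('not json', []).length); // → 0
```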
Request structured output with JSON mode and parse the result:

```ts
const result = await runner.run({
  messages: [{ role: 'user', content: 'List top 3 programming languages' }],
  model: groqAIModel('llama-3.3-70b-versatile'),
  jsonMode: true,
});

const languages = JSON.parse(result.content);
```

Enable tool calling to receive tool calls in the response:

```ts
const result = await runner.run({
  messages: [{ role: 'user', content: 'What is the weather in New York?' }],
  model: groqAIModel('llama-3.3-70b-versatile'),
  options: { tools: true },
});

console.log(result.tool_calls);
```

Adjust the timeout and retry behavior at runtime:

```ts
ModelRunner.updateRuntimeConfig({
  timeout: 60000,
  retries: {
    maxAttempts: 5,
    baseDelay: 1000,
    maxDelay: 10000,
  },
});
```

For detailed documentation, see:
- ModelRunner Documentation - Core API reference
- Cloudflare Provider - Cloudflare Workers AI setup and usage
- Groq Provider - Groq API setup and usage
- Repository: https://github.com/Em3ODMe/agentic.git
- Issues: https://github.com/Em3ODMe/agentic/issues
Keywords: agent, llm, ai, typescript, nodejs