TypeScript-first LLM framework with static type inference, testability, and composability.
```ts
import hop from "hopfield";
import openai from "hopfield/openai";
import OpenAI from "openai";
import z from "zod";

// create an OpenAI Hopfield client
const hopfield = hop.client(openai).provider(new OpenAI());

// use description templates with TypeScript string literal types
const categoryDescription = hopfield
  .template()
  .enum("The category of the message.");

// define functions for LLMs to call, with Zod validations
const classifyMessage = hopfield.function({
  name: "classifyMessage",
  description: "Triage an incoming support message.",
  parameters: z.object({
    summary: z.string().describe("The summary of the message."),
    category: z
      .enum([
        "ACCOUNT_ISSUES",
        "BILLING_AND_PAYMENTS",
        "TECHNICAL_SUPPORT",
        "OTHERS",
      ])
      .describe(categoryDescription),
  }),
});

// create a chat client with function calling
const chat = hopfield.chat().functions([classifyMessage]);

const incomingUserMessage = "How do I reset my password?";

// use utility types to infer inputs for a simple devex
const messages: hop.inferMessageInput<typeof chat>[] = [
  {
    content: incomingUserMessage,
    role: "user",
  },
];

// use the built-in LLM API calls (or just use the input/output Zod validations)
const parsed = await chat.get({
  messages,
});

// get type-strong responses with `__type` helpers
if (parsed.choices[0].__type === "function_call") {
  // the arguments returned from the LLM are automatically validated
  // against the Zod schema you passed, for maximum flexibility
  const category = parsed.choices[0].message.function_call.arguments.category;
  // handleMessageWithCategory is your own application logic
  await handleMessageWithCategory(category, incomingUserMessage);
}
```
Hopfield might be a good fit for your project if:

- 🏗️ You build with TypeScript/JavaScript, and have your database schemas in these languages (e.g. Prisma and/or Next.js).
- 🪨 You don't need a heavyweight LLM orchestration framework that ships with a ton of dependencies you'll never use.
- 🤙 You're using OpenAI function calling and/or custom tools, and want TypeScript-native features for them (e.g. validations w/ Zod).
- 💬 You're building complex LLM interactions which use memory & RAG, evaluation, and orchestration (Coming Soon™).
- 📝 You want best-practice, extensible templates, which use string literal types under the hood for transparency.

Oh, and liking TypeScript is a nice-to-have.
Our guiding principles:

- 🙅 We are TypeScript-first, and only support TS (or JS) - with services like Replicate or OpenAI, why do you need Python?
- 🤏 We provide a simple, ejectable interface with common LLM use-cases. This is aligned 1-1 with LLM provider abstractions, like OpenAI's.
- 🪢 We explicitly don't provide a ton of custom tools (please don't ask for too many 😅) outside of the building blocks and simple examples provided. Other frameworks provide these, but when you use them, you soon realize the tool you want is very use-case specific.
- 🧪 We (will) provide evaluation frameworks which let you simulate user scenarios and backend interactions with the LLM, including multi-turn conversations and function calling.
- 🐶 We support Node.js, Vercel Edge Functions, Cloudflare Workers, and more (oh and even web, if you like giving away API keys).
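Since the interface stays aligned 1-1 with the provider's abstraction, "ejecting" means writing the raw provider payload yourself. A sketch of the plain OpenAI chat-completions request that the example above roughly corresponds to (the model name is an arbitrary choice here, and the JSON Schema is hand-written rather than derived from the Zod schema; newer OpenAI API versions prefer `tools` over `functions`):

```typescript
// the raw OpenAI chat-completions payload that a Hopfield chat with
// `functions([classifyMessage])` roughly corresponds to (hand-written sketch)
const rawRequest = {
  model: "gpt-3.5-turbo",
  messages: [{ role: "user", content: "How do I reset my password?" }],
  functions: [
    {
      name: "classifyMessage",
      description: "Triage an incoming support message.",
      parameters: {
        type: "object",
        properties: {
          summary: {
            type: "string",
            description: "The summary of the message.",
          },
          category: {
            type: "string",
            enum: [
              "ACCOUNT_ISSUES",
              "BILLING_AND_PAYMENTS",
              "TECHNICAL_SUPPORT",
              "OTHERS",
            ],
            description: "The category of the message.",
          },
        },
        required: ["summary", "category"],
      },
    },
  ],
};

// because the shape is 1-1, this object could be passed directly to
// `openai.chat.completions.create(...)` - no framework-specific lock-in
console.log(rawRequest.functions[0].name);
```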
Install Hopfield with npm:

npm i hopfield
For full documentation, visit hopfield.ai.
If you have questions or need help, reach out to the community in the Hopfield GitHub Discussions.
Shoutout to the projects which inspired us.
If you're interested in contributing to Hopfield, please read our contributing docs before submitting a pull request.