inference

Wraps a number of different inference providers' models, rate limits requests to them, and gives them more native TypeScript support.

My specific application may send many parallel requests to inference models, and I need to rate limit those requests per provider across the application. This package solves that problem.

This is a major WIP, so a number of things are left unimplemented for the time being. However, the basic functionality should be there.

Supported providers:

  • OpenAI (for chat, audio, image, embedding)
  • Together (for chat)
  • Mistral (for chat)
  • Whisper.cpp (for audio)

WIP Stuff:

  • consistent JSON mode
  • error handling
  • more rate limiting options
  • more providers (llama.cpp for chat, image and embedding)
  • move to config file & code gen for better typing?

Usage

Check out test/index.test.ts for usage examples.

Generally speaking:

  1. Instantiate a provider
const oai = new OpenAIProvider({
  apiKey: process.env.OPENAI_API_KEY!,
});
  2. Create a rate limiter based on your own usage (this is in requests per second)
const oaiLimiter = createRateLimiter(2);
  3. Define what models you want to use and their aliases
const CHAT_MODELS: Record<string, ChatModel> = {
  "gpt-3.5": {
    provider: oai,
    name: "gpt-3.5",
    providerModel: "gpt-3.5-turbo-0125",
    rateLimiter: oaiLimiter,
  },
  "gpt-4": {
    provider: oai,
    name: "gpt-4",
    providerModel: "gpt-4-0125-preview",
    rateLimiter: oaiLimiter,
  }
}
  4. Create inference with the models you want
const inference = new Inference({chatModels: CHAT_MODELS});
  5. Call the inference with the model you want to use
const result = await inference.chat({model: "gpt-3.5", prompt: "Hello, world!"});
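The limiter created in step 2 is what spaces out parallel calls per provider. The package's actual createRateLimiter internals aren't shown here, but conceptually a requests-per-second limiter can be sketched as a scheduler that assigns each call the next free time slot. Everything below (createSimpleRateLimiter and its behavior) is an illustrative assumption, not the package's implementation:

```typescript
// Illustrative sketch only: NOT the package's actual createRateLimiter,
// just a minimal requests-per-second scheduler to show the idea.
function createSimpleRateLimiter(requestsPerSecond: number) {
  const intervalMs = 1000 / requestsPerSecond;
  let nextSlot = 0; // earliest time (ms since epoch) the next call may start

  return async function schedule<T>(fn: () => Promise<T>): Promise<T> {
    const now = Date.now();
    const slot = Math.max(now, nextSlot); // claim the next free slot
    nextSlot = slot + intervalMs;
    // Wait until our slot comes up, then run the wrapped call.
    await new Promise((resolve) => setTimeout(resolve, slot - now));
    return fn();
  };
}

// Parallel calls are spaced out automatically: each one claims a slot
// ~intervalMs after the previous, even when fired all at once.
const limit = createSimpleRateLimiter(2); // ~2 requests per second
const tasks = [1, 2, 3].map((n) => limit(async () => n * 10));
```

Awaiting Promise.all(tasks) resolves in order, with each wrapped call starting roughly half a second after the previous one at 2 requests per second.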

To install dependencies:

bun install

To run:

bun run index.ts

About

a tiny ts package to alias/rate limit requests to inference apis
