GAI is a flexible Go library for building agent-style applications on top of LLMs. It provides a generic interface for providers and models, prompt and context helpers, and a loop for agentic tool-calling workflows.
The library is organized around three ideas:
- 🧩 `ai` defines the core provider, model, request, and response abstractions.
- 🗂️ `context` stores conversations, renders message history, and loads prompt files.
- 🔁 `loop` runs iterative model and tool execution when a model returns a tool call.
- Go 1.26.1 or newer
- API credentials for whichever provider you use
```shell
go get github.com/lace-ai/gai
```

To start, first create a provider. For example, for Gemini:

```go
geminiProvider := gemini.New("your_api_key", nil)
```

## 🗂️ Use the Model Repository to manage multiple providers and dynamic model selection
You can use a ModelRepository to register multiple providers and look up models by name across providers.
```go
modelRepo := ai.NewModelRepository(nil)
err := modelRepo.RegisterProvider(context.Background(), geminiProvider)
if err != nil {
    // handle error
}
```

To get a model from the repository, use the provider name and the model name:
```go
model, err := modelRepo.GetModel(context.Background(), "gemini", "gemini-3-flash-preview")
if err != nil {
    // handle error
}
```

Now you can access models from that provider and generate text:
```go
model, err := geminiProvider.Model("gemini-3-flash-preview")
if err != nil {
    // handle error
}

response, err := model.Generate(context.Background(), ai.AIRequest{
    Prompt: ai.Prompt{
        System: "You are a helpful assistant.",
        Prompt: "What is the capital of France?",
    },
    MaxTokens: 100,
})
```

## 🔌 Implement Your Own Provider
Currently, the library includes Gemini and Mistral implementations. Gemini uses the official go-genai library, and Mistral uses direct HTTP calls to the Mistral API.
You can implement your own provider by implementing the `Provider` and `Model` interfaces defined in the `ai` package.
Provider Implementation:
```go
type MyProvider struct {
    // any configuration fields you need, e.g. API key
}

func (p *MyProvider) Name() string {
    return "myprovider"
}

func (p *MyProvider) Model(name string) (ai.Model, error) {
    // return a model implementation based on the name
}

func (p *MyProvider) ListModels() ([]string, error) {
    // return a list of available model names
}

func (p *MyProvider) Validate() error {
    // validate the provider configuration, e.g. check API key is set
}
```

Model Implementation:
```go
type MyModel struct {
    // any configuration fields you need, e.g. model name, provider reference
    name string
}

func (m *MyModel) Name() string {
    return m.name
}

func (m *MyModel) Generate(ctx context.Context, req ai.AIRequest) (*ai.AIResponse, error) {
    // implement the logic to call your model API and return the response
}

func (m *MyModel) GenerateStream(ctx context.Context, req ai.AIRequest) <-chan ai.Token {
    // implement streaming token generation
}

func (m *MyModel) Close() error {
    // clean up any resources if needed
}
```

Now you can use your custom provider just like the built-in ones.
To build an agent with tools, use the loop package:
> [!TIP]
> Use an alias for the `context` package to avoid conflicts with the `context` package from the standard library. For example:
>
> ```go
> import aicontext "github.com/lace-ai/gai/context"
> ```

```go
l := loop.New(
    model,               // the model you want to use
    []loop.Tool{myTool}, // any tools you want to provide; one echo tool is included for testing
    "What is the weather in New York?", // initial (user) prompt
    "You are a helpful assistant that can call tools to get information.", // system prompt
    nil, // optional context builder; if nil the loop will render prior messages itself
    nil, // optional tool response preprocessor; if nil the loop will append tool results as-is
)

tokenCh, _, errCh := l.Loop(context.Background())
for range tokenCh {
    // handle streamed tokens
}
for err := range errCh {
    if err != nil {
        // handle error
    }
}

messages := l.Messages() // get the final conversation messages, including tool calls and responses
var builder strings.Builder
aicontext.RenderMessages(messages, &builder)
fmt.Println(builder.String()) // render the messages for display
```

## 🧩 Implement Your Own Tool
To implement your own tool, create a struct that implements the `Tool` interface:

```go
type myToolArgs struct {
    Query string `json:"query"`
}

type MyTool struct {
    // any configuration fields you need
}

func (t *MyTool) Name() string {
    return "my_tool"
}

func (t *MyTool) Description() string {
    return "A tool that does something useful."
}

func (t *MyTool) Params() string {
    return `{"type":"object","required":["query"],"properties":{"query":{"type":"string","description":"Search query"}}}`
}

func (t *MyTool) Function(req *ai.ToolCall) *loop.ToolResponse {
    var args myToolArgs
    if err := loop.DecodeToolArgs(req, &args); err != nil {
        return &loop.ToolResponse{Err: err}
    }
    // implement your tool logic here using args.Query
    return &loop.ToolResponse{Text: "result for: " + args.Query}
}
```

Then include an instance of your tool in the `loop.New(...)` call.
To manage conversation history and build prompts from it, use the context package:
```go
store := mySessionStore // your implementation of SessionStore (e.g. in-memory, database, etc.)
sessionManager := aicontext.NewSessionManager(store, 1) // the second argument is the session ID

l := loop.New(
    model,               // the model you want to use
    []loop.Tool{myTool}, // any tools you want to provide
    "What is the weather in New York?", // initial (user) prompt
    "You are a helpful assistant that can call tools to get information.", // system prompt
    sessionManager, // session manager implements loop.ContextBuilder
    nil,            // optional tool response preprocessor
)

tokenCh, _, errCh := l.Loop(context.Background())
for range tokenCh {
    // handle streamed tokens
}
for err := range errCh {
    if err != nil {
        // handle error
    }
}
```

## 🗄️ Implement Your Own Session Store
To implement your own session store, see the `SessionStore` interface and implement its required methods.
| Package | Description |
| --- | --- |
| `ai/` | Core abstractions: `Provider`, `Model`, `AIRequest`, `AIResponse`, `ModelRepository` |
| `ai_gemini/` | Gemini provider and model implementation |
| `ai_mistral/` | Mistral provider and model implementation |
| `context/` | Context management: conversation/session types, prompt loading, message rendering |
| `loop/` | Agent loop, tool parsing, tool execution helpers |
| `testutil/` | Mocks used by tests |
A provider is responsible for exposing available models and validating its own configuration. The shared interface is:
```go
type Provider interface {
    Name() string
    Model(name string) (Model, error)
    ListModels() ([]string, error)
    Validate() error
}
```

Use `ModelRepository` when you want to register multiple providers and look up models by name.
A model generates text from an AIRequest and can return either a complete AIResponse or a stream of tokens.
```go
type Model interface {
    Name() string
    Generate(ctx context.Context, req AIRequest) (*AIResponse, error)
    GenerateStream(ctx context.Context, req AIRequest) <-chan Token
    Close() error
}
```

`ai.Prompt` combines three pieces of input:

- `System`: system instructions
- `Context`: prior conversation or external context
- `Prompt`: the current (user) request

`Prompt.CombinedPrompt()` concatenates those parts into one string in this order: system, context, prompt.
`AIRequest` currently contains:

- `Prompt`
- `MaxTokens`: ignored by some providers, and might be removed in future versions
`AIResponse` returns:

- `Text`
- `InputTokens`
- `OutputTokens`
Package: `ai_gemini`

Constructor:

```go
gemini.New(apiKey string, debug gai.DebugSink) *gemini.Provider
```

Known model names:

- `gemini-3-flash-preview`
- `gemini-2.5-flash`
- `gemini-3.1-flash-lite-preview`
- `gemini-2.5-flash-lite`
Package: `ai_mistral`

Constructor:

```go
mistral.New(apiKey string, debug gai.DebugSink) *mistral.Provider
```

Known model names:

- `mistral-small-latest`
- `mistral-medium-latest`
- `mistral-large-latest`
This library does not read environment variables automatically. Create the provider with the API key you want to use, then register it in the repository.
The `context` package is not the standard library `context` package. Import it with an alias such as `aicontext` to avoid name collisions.

```go
import aicontext "github.com/lace-ai/gai/context"
```

Messages have one of four roles:

- `system`
- `user`
- `assistant`
- `tool`
Each message wraps a `Content` implementation such as text, tool calls, or tool results (you can also implement your own).
The renderer formats history as tagged blocks, which is what the loop uses when it builds context automatically.
`Conversation` is a minimal interface used by the `SessionManager` to load and render message history:

```go
type Conversation interface {
    Messages() []Message
}
```

`SessionStore` is an interface, not a built-in database implementation.
You provide your own store that can:
- create sessions
- fetch sessions and messages
- add one or many messages
SessionManager builds prompt context from stored history.
It loads the last 5 messages for the configured session, renders them, and appends the current loop messages.
> [!NOTE]
> `NewSessionManager(store, id)` expects an integer session ID. If you want to start a new session, create one first.
`LoadPromptFromFile` reads `.md` and `.txt` files, trims whitespace, and returns the prompt text.
The loop package is for agent-style execution where the model can request tool calls.
loop.New(...) creates a loop with:
- a model
- optional tools
- an initial user prompt
- an optional system prompt
- an optional context builder
- an optional tool-response preprocessor
If no context builder is provided, the loop renders prior messages itself.
The loop stops when the model returns a normal response or when the maximum iteration count is reached.
Tools must implement:

```go
type Tool interface {
    Name() string
    Description() string
    Params() string
    Function(req *ai.ToolCall) *ToolResponse
}
```

Tool calls are expected to arrive as JSON with this shape:
```json
{
  "type": "function",
  "name": "tool_name",
  "arguments": {
    "some": "value"
  }
}
```

Tool call IDs are generated internally by the runtime and are not model-controlled.
- `DetectToolCallsInStream` detects tool-call JSON objects in streamed text tokens.
- `CallTool` runs a tool by name.
- `DecodeToolArgs` unmarshals tool arguments into a typed struct.
- `RenderToolSignatures` formats tool metadata for prompting.
Common exported errors include:

- `ai.ErrProviderNotFound`
- `ai.ErrProviderAlreadyExists`
- `ai.ErrNilModelRepository`
- `loop.ErrModelNotConfigured`
- `loop.ErrToolNotFound`
- `loop.ErrMaxIterations`
- `context.ErrPromptMissing`
- `context.ErrSessionNotFound`
- `gemini.ErrInvalidAPIKey`
- `mistral.ErrInvalidAPIKey`
Handle provider and tool errors at the call site, especially when a model or session store is user-configured.
To see all the errors, check the errors.go file in each package.
Run all tests:

```shell
go test ./...
```

Run a package test suite:

```shell
go test ./ai/...
go test ./loop/...
go test ./context/...
```

- The `context` package name intentionally mirrors the domain it manages, but it is easy to confuse with `context.Context` from the standard library. Use an alias in imports. The `context` package is likely to be renamed before the official 1.0 release.
- `SessionManager` currently uses a fixed history window of 5 messages.
Contributions are welcome! Please open an issue or submit a pull request. If you add a new provider or tool, document the new constructor, model names, and any required environment variables.
This library is licensed under the GNU LESSER GENERAL PUBLIC LICENSE v2.1. See LICENSE for details.
Copyright (c) 2026 lace-ai. All rights reserved.