Conversation lifecycle controller for LLM-powered applications.
Manage user conversations, orchestrate pluggable processors, and stream responses — while keeping internal reasoning separate from what users see.
Your processor receives input and read-only history. What it does with an LLM is its own business.
```go
// Processor handles reasoning — how it talks to LLMs is an implementation detail
processor := chit.ProcessorFunc(func(ctx context.Context, input string, history []chit.Message) (chit.Result, error) {
	// history is the user-facing conversation (read-only)
	// Internal LLM calls, chain-of-thought, tool use — none of it pollutes history
	response := callYourLLM(input, history)
	return &chit.Response{Content: response}, nil
})

// Emitter streams output to the user
emitter := &StreamingEmitter{writer: w}

// Chat manages the lifecycle
chat := chit.New(processor, emitter)

// Handle user input — chit manages history, you manage reasoning
chat.Handle(ctx, "What's the weather in Tokyo?")
```

The user sees a clean conversation. Your processor can have elaborate internal dialogues with the LLM — retries, tool calls, multi-step reasoning — and none of it leaks through.
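The `StreamingEmitter` above isn't part of chit; it stands in for whatever transport you stream responses over. A minimal sketch, assuming the same three-method emitter shape used in the quickstart below:

```go
// StreamingEmitter writes each message straight to an io.Writer.
// A sketch only; the Emit/Push/Close shape follows the quickstart's SimpleEmitter.
type StreamingEmitter struct {
	writer io.Writer
}

func (e *StreamingEmitter) Emit(_ context.Context, msg chit.Message) error {
	_, err := fmt.Fprintf(e.writer, "[%s]: %s\n", msg.Role, msg.Content)
	return err
}

func (e *StreamingEmitter) Push(_ context.Context, _ chit.Resource) error { return nil }
func (e *StreamingEmitter) Close() error                                  { return nil }
```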
```sh
go get github.com/zoobzio/chit
```

Requires Go 1.24+.
```go
package main

import (
	"context"
	"fmt"

	"github.com/zoobzio/chit"
)

// SimpleEmitter collects messages for demonstration
type SimpleEmitter struct {
	Messages []chit.Message
}

func (e *SimpleEmitter) Emit(_ context.Context, msg chit.Message) error {
	e.Messages = append(e.Messages, msg)
	fmt.Printf("[%s]: %s\n", msg.Role, msg.Content)
	return nil
}

func (e *SimpleEmitter) Push(_ context.Context, _ chit.Resource) error { return nil }
func (e *SimpleEmitter) Close() error                                  { return nil }

func main() {
	// Processor returns responses or yields for multi-turn
	processor := chit.ProcessorFunc(func(_ context.Context, input string, history []chit.Message) (chit.Result, error) {
		// First message — ask for clarification
		if len(history) == 1 {
			return &chit.Yield{
				Prompt: "Which city would you like weather for?",
				Continuation: func(_ context.Context, city string, _ []chit.Message) (chit.Result, error) {
					return &chit.Response{Content: fmt.Sprintf("Weather in %s: Sunny, 22°C", city)}, nil
				},
			}, nil
		}
		return &chit.Response{Content: "Hello! How can I help?"}, nil
	})

	emitter := &SimpleEmitter{}
	chat := chit.New(processor, emitter)

	// First call yields, asking for more info
	chat.Handle(context.Background(), "What's the weather?")
	// [assistant]: Which city would you like weather for?

	// Second call resumes with the answer
	chat.Handle(context.Background(), "Tokyo")
	// [assistant]: Weather in Tokyo: Sunny, 22°C

	// History tracked automatically
	fmt.Printf("Conversation has %d messages\n", len(chat.History()))
}
```

| Feature | Description | Docs |
|---|---|---|
| Processor Interface | Pluggable reasoning with read-only history | Concepts |
| Yield & Continue | Multi-turn conversations via continuations | Concepts |
| Pipeline Resilience | Retry, timeout, circuit breaker via pipz | Reliability |
| Emitter Abstraction | Stream responses, push resources | Architecture |
| Signal Observability | Lifecycle events via capitan | Architecture |
| Testing Utilities | Mock processors and emitters (see the sketch below) | Testing |
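Chit ships testing helpers (covered in the Testing guide); as a rough idea, you can also test a conversation end to end with nothing but a stub processor and the `SimpleEmitter` from the quickstart above. The assertions here follow the behavior shown in the quickstart's output rather than a documented contract:

```go
// assumes imports: context, testing, plus the SimpleEmitter type from the quickstart
func TestHandleEmitsAssistantReply(t *testing.T) {
	// Stub processor: echoes the input back as the assistant reply.
	processor := chit.ProcessorFunc(func(_ context.Context, input string, _ []chit.Message) (chit.Result, error) {
		return &chit.Response{Content: "echo: " + input}, nil
	})

	emitter := &SimpleEmitter{}
	chat := chit.New(processor, emitter)
	chat.Handle(context.Background(), "hello")

	// One assistant message should have been emitted for the single turn.
	if len(emitter.Messages) != 1 || emitter.Messages[0].Content != "echo: hello" {
		t.Fatalf("unexpected emitted messages: %+v", emitter.Messages)
	}
	if len(chat.History()) == 0 {
		t.Fatal("expected the conversation history to be recorded")
	}
}
```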
- Clean separation — User conversation stays clean; internal LLM reasoning is the processor's business
- Turn-taking built in — Yield/Continue pattern for multi-turn without manual state management
- Pipeline-native — Wrap with pipz for retry, timeout, rate limiting, circuit breakers
- Observable — Lifecycle signals via capitan without instrumentation code
- Bring your own LLM — Processor interface works with any LLM client or framework
Chit manages the conversation lifecycle. How you reason is up to you.
```go
// Use zyn for typed LLM interactions
processor := chit.ProcessorFunc(func(ctx context.Context, input string, history []chit.Message) (chit.Result, error) {
	// Create internal session — separate from user history
	session := zyn.NewSession()
	session.Append(zyn.RoleSystem, "You are a helpful assistant.")

	// Add user context
	for _, msg := range history {
		session.Append(zyn.Role(msg.Role), msg.Content)
	}

	// Call LLM — internal retries, tool use, etc. stay internal
	// (synapse is a zyn synapse configured elsewhere)
	response, err := synapse.Process(ctx, session)
	if err != nil {
		return nil, err
	}
	return &chit.Response{Content: response}, nil
})

// Add resilience via pipz options
chat := chit.New(processor, emitter,
	chit.WithRetry(3),
	chit.WithTimeout(30*time.Second),
	chit.WithCircuitBreaker(5, time.Minute),
)
```

Your processor implementation can use zyn for typed synapses, raw API calls, or any other approach. Chit doesn't care — it just manages what the user sees.
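For the raw-API-call route, a sketch of a processor that talks to an OpenAI-style chat-completions endpoint over plain net/http might look like the following. The endpoint URL, model name, payload shape, and the LLM_API_KEY environment variable are all assumptions to adapt; none of them is part of chit:

```go
// assumes imports: bytes, context, encoding/json, fmt, net/http, os
processor := chit.ProcessorFunc(func(ctx context.Context, input string, history []chit.Message) (chit.Result, error) {
	type apiMessage struct {
		Role    string `json:"role"`
		Content string `json:"content"`
	}

	// Replay the user-facing history for the provider; internal calls stay internal.
	msgs := []apiMessage{{Role: "system", Content: "You are a helpful assistant."}}
	for _, m := range history {
		msgs = append(msgs, apiMessage{Role: string(m.Role), Content: m.Content})
	}

	body, err := json.Marshal(map[string]any{"model": "gpt-4o-mini", "messages": msgs})
	if err != nil {
		return nil, err
	}

	// Hypothetical endpoint; point this at your actual provider.
	req, err := http.NewRequestWithContext(ctx, http.MethodPost,
		"https://api.example.com/v1/chat/completions", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "Bearer "+os.Getenv("LLM_API_KEY"))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var out struct {
		Choices []struct {
			Message apiMessage `json:"message"`
		} `json:"choices"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return nil, err
	}
	if len(out.Choices) == 0 {
		return nil, fmt.Errorf("empty completion response")
	}
	return &chit.Response{Content: out.Choices[0].Message.Content}, nil
})
```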
- Learn
  - Overview — Purpose and design philosophy
  - Quickstart — Get started in minutes
  - Concepts — Processors, results, emitters, history
  - Architecture — Internal design and pipeline integration
- Guides
  - Testing — Mock processors and emitters
  - Troubleshooting — Common issues and solutions
  - Reliability — Retry, timeout, circuit breakers
- Reference
See CONTRIBUTING.md for guidelines.
MIT License — see LICENSE for details.