llmfunc

Make LLM calls feel like regular function calls in Go.

Installation

go get go.baoshuo.dev/llmfunc

Quick Start

Text Output

package main

import (
	"context"
	"fmt"

	"github.com/sashabaranov/go-openai"
	"go.baoshuo.dev/llmfunc"
)

// 1. Define the input type and implement FunctionInputFormatter.
type TranslateInput struct {
	Text string
}

func (t TranslateInput) FunctionInput() *llmfunc.FunctionInput {
	return &llmfunc.FunctionInput{
		Messages: []openai.ChatCompletionMessage{
			{Role: openai.ChatMessageRoleUser, Content: t.Text},
		},
	}
}

func main() {
	// 2. Create a client.
	client, err := llmfunc.NewClient("YOUR_API_KEY", "")
	if err != nil {
		panic(err)
	}

	// 3. Create a function.
	translate := llmfunc.NewFunction(
		client,
		llmfunc.BypassOutput(),
		llmfunc.Model(openai.GPT4oMini),
		llmfunc.Instruction("You are a professional translator. Translate the user's text to Chinese."),
	)

	// 4. Call it like a regular function.
	result, err := translate.Run(context.Background(), &TranslateInput{
		Text: "Hello, world!",
	})
	if err != nil {
		panic(err)
	}

	fmt.Println(result.FinalAnswer)
}

Structured Output

Use StructuredOutput(true) to have the LLM return JSON that is automatically deserialized into a Go struct.

package main

import (
	"context"
	"fmt"

	"github.com/sashabaranov/go-openai"
	"go.baoshuo.dev/llmfunc"
)

type SentimentInput struct {
	Text string
}

func (s SentimentInput) FunctionInput() *llmfunc.FunctionInput {
	return &llmfunc.FunctionInput{
		Messages: []openai.ChatCompletionMessage{
			{Role: openai.ChatMessageRoleUser, Content: s.Text},
		},
	}
}

type SentimentOutput struct {
	Sentiment string  `json:"sentiment"` // "positive", "negative", "neutral", ...
	Score     float64 `json:"score"`     // 0.0 ~ 1.0
}

func main() {
	client, err := llmfunc.NewClient("YOUR_API_KEY", "")
	if err != nil {
		panic(err)
	}

	analyze := llmfunc.NewFunction(
		client,
		llmfunc.UnmarshalOutput[SentimentInput, SentimentOutput](),
		llmfunc.Model(openai.GPT4oMini),
		llmfunc.Name("sentiment_analysis"),
		llmfunc.Description("Analyze the sentiment of the input text."),
		llmfunc.Instruction("Analyze the sentiment of the user's text and respond in JSON."),
		llmfunc.StructuredOutput(true),
	)

	result, err := analyze.Run(context.Background(), &SentimentInput{
		Text: "I love this product!",
	})
	if err != nil {
		panic(err)
	}

	fmt.Printf("Sentiment: %s (score: %.2f)\n", result.Sentiment, result.Score)
}

API

Client

// Create a client from an API key and optional base URL (empty string uses the default OpenAI endpoint).
client, err := llmfunc.NewClient(apiKey, endpoint)

// Or wrap an existing *openai.Client.
client, err := llmfunc.NewClientFromExisting(openaiClient)
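Wrapping an existing client is useful when you need non-default go-openai settings, such as a custom base URL. A sketch (the endpoint URL is a placeholder; `NewClientFromExisting` is used as shown above):

```go
package main

import (
	"github.com/sashabaranov/go-openai"

	"go.baoshuo.dev/llmfunc"
)

func main() {
	// Configure go-openai directly, e.g. to point at an
	// OpenAI-compatible proxy or self-hosted gateway.
	config := openai.DefaultConfig("YOUR_API_KEY")
	config.BaseURL = "https://example.com/v1" // hypothetical endpoint

	openaiClient := openai.NewClientWithConfig(config)

	// Wrap the pre-configured client for use with llmfunc.
	client, err := llmfunc.NewClientFromExisting(openaiClient)
	if err != nil {
		panic(err)
	}
	_ = client
}
```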

Function

fn := llmfunc.NewFunction(client, handler, ...options)
result, err := fn.Run(ctx, input)

The input type T must implement the FunctionInputFormatter interface:

type FunctionInputFormatter interface {
    FunctionInput() *FunctionInput
}
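An input type with several fields can assemble the message list however it likes. A sketch using the same types as the Quick Start (the `SummarizeInput` fields are illustrative):

```go
// SummarizeInput combines a document and a target length
// into a single user message.
type SummarizeInput struct {
	Document string
	MaxWords int
}

func (s SummarizeInput) FunctionInput() *llmfunc.FunctionInput {
	return &llmfunc.FunctionInput{
		Messages: []openai.ChatCompletionMessage{
			{
				Role:    openai.ChatMessageRoleUser,
				Content: fmt.Sprintf("Summarize in at most %d words:\n\n%s", s.MaxWords, s.Document),
			},
		},
	}
}
```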

Output Handlers

| Handler | Description |
| --- | --- |
| `BypassOutput()` | Returns the raw `*FunctionOutput` (`.FinalAnswer` contains the LLM response text). |
| `UnmarshalOutput[T, R]()` | Unmarshals the LLM JSON response into `*R`. Typically used with `StructuredOutput(true)`. |

You can also provide a custom handler:

handler := func(i *MyInput, o *llmfunc.FunctionOutput) (*MyOutput, error) {
    // parse o.FinalAnswer however you like
    return &MyOutput{Text: o.FinalAnswer}, nil
}
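The custom handler goes where `BypassOutput()` or `UnmarshalOutput` would. A sketch, assuming `NewFunction` accepts any handler with the `func(*T, *llmfunc.FunctionOutput) (*R, error)` shape shown above:

```go
fn := llmfunc.NewFunction(
	client,
	handler, // the custom func(*MyInput, *llmfunc.FunctionOutput) (*MyOutput, error) above
	llmfunc.Model(openai.GPT4oMini),
	llmfunc.Instruction("..."),
)

out, err := fn.Run(ctx, &MyInput{ /* ... */ })
```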

Options

| Option | Description |
| --- | --- |
| `Model(model string)` | The model to use (e.g. `openai.GPT4oMini`). |
| `Instruction(instruction string)` | System prompt prepended to every request. |
| `Temperature(temp float32)` | Sampling temperature (0–2). Lower values are more deterministic; higher values are more creative. Default is 1. Not recommended together with `TopP`. |
| `TopP(topP float32)` | Nucleus sampling (0–1). Only tokens within the top `topP` cumulative probability mass are considered. Default is 1. Not recommended together with `Temperature`. |
| `MaxTokens(maxTokens int)` | Maximum number of tokens to generate. 0 uses the model's default. |
| `FrequencyPenalty(penalty float32)` | Frequency penalty (−2.0 to 2.0). Positive values reduce repetition of the same words. Default is 0. |
| `PresencePenalty(penalty float32)` | Presence penalty (−2.0 to 2.0). Positive values encourage introducing new topics. Default is 0. |
| `Stop(stop ...string)` | Up to 4 stop sequences. Generation halts when any sequence is encountered; the sequence itself is not included in the output. |
| `Seed(seed int)` | Random seed for best-effort deterministic outputs. |
| `Name(name string)` | Function name, used with structured output. |
| `Description(desc string)` | Function description, used with structured output. |
| `StructuredOutput(enabled bool)` | Enable JSON structured output. Automatically generates a JSON schema from the output type `R`. |
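Options compose, so a function that favors reproducible output can stack several of them. A sketch reusing the client and types from the Structured Output example:

```go
extract := llmfunc.NewFunction(
	client,
	llmfunc.UnmarshalOutput[SentimentInput, SentimentOutput](),
	llmfunc.Model(openai.GPT4oMini),
	llmfunc.Name("sentiment_analysis"),
	llmfunc.Description("Analyze the sentiment of the input text."),
	llmfunc.Instruction("Analyze the sentiment of the user's text and respond in JSON."),
	llmfunc.StructuredOutput(true),
	llmfunc.Temperature(0), // favor deterministic output
	llmfunc.MaxTokens(200), // cap response length
	llmfunc.Seed(42),       // best-effort reproducibility
)
```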

Author

llmfunc © Baoshuo, released under the MIT License.

Personal Homepage · Blog · GitHub @renbaoshuo
