Lightcode is a lightweight coding agent written in Go.
- Go 1.25+
- At least one OpenAI-compatible endpoint configured (see Models below)
Settings live under ~/.lightcode/. The app creates this directory and a default config.json on first run.
Providers must speak the OpenAI Chat Completions API. Add entries to the models array in ~/.lightcode/config.json. Each entry is an object with:
| Field | Meaning |
|---|---|
| `model` | Model id as your provider expects it (e.g. `gpt-4o`) |
| `base_url` | Base URL for the API (OpenAI-compatible) |
| `api_key` | Secret for that provider |
Example:

```json
{
  "theme": "light",
  "models": [
    {
      "model": "another-model-id",
      "base_url": "https://your-gateway.example/v1",
      "api_key": "..."
    },
    {
      "model": "another-model-id-2",
      "base_url": "https://your-gateway.example/v1",
      "api_key": "..."
    }
  ],
  "current_model": {
    "model": "another-model-id",
    "base_url": "https://your-gateway.example/v1",
    "api_key": "..."
  }
}
```

- In the TUI, run `/models` to select one of the models.
Put skills in ~/.lightcode/skills/, each in its own subdirectory containing a SKILL.md file.
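A minimal sketch of setting up a skill, assuming only the layout described above (one subdirectory per skill containing a SKILL.md). The skill name `commit-helper` and the SKILL.md contents are hypothetical examples, not a documented format:

```shell
# Create a hypothetical "commit-helper" skill in its own subdirectory.
mkdir -p ~/.lightcode/skills/commit-helper

# SKILL.md holds the skill's instructions; the body shown here is a placeholder.
cat > ~/.lightcode/skills/commit-helper/SKILL.md <<'EOF'
# commit-helper

Help the agent write clear, conventional commit messages.
EOF
```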
The HTTP API defaults to port 8080. To use a different port, set `port` in ~/.lightcode/config.json.
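For example, to move the server to port 9090 (the exact placement of the `port` key relative to the other settings is an assumption based on the sentence above):

```json
{
  "theme": "light",
  "port": 9090,
  "models": []
}
```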
Run the API server (by default listens on :8080) and TUI:

```sh
go run ./cmd/lightcode/main.go
```

- copy paste multiple lines into a [ paste #1 13 lines ]
- better tool and thinking formatting
- Skills
- grep tool
- first make the UI work
- UI upgrades
- Make config files
- improve tools and add tests
- Fix the database bug
- Limit accessible directory to working dir
- question tool - homepage 381, just need to create a UI and send a chat completion request
- Show code changes
- Plan mode - prompt and tool filter
- todo tool - handle in the UI and send it as context in agent.go
- JSON data for model selection etc.
- integrate the Anthropic Go SDK because of the response format for tool calling in models like glp and Claude (fixed without adding the Anthropic API)
- File tracker
- Rewrite prompts
- Check for credentials before making an API call
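One of the items above, limiting file access to the working directory, could be sketched in Go as a path check. This is a minimal illustration, not Lightcode's actual implementation; the function name `withinWorkDir` is hypothetical, and the sketch does not resolve symlinks:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// withinWorkDir reports whether target, after joining onto root and
// cleaning relative segments like "..", still resolves inside root.
// Symlinks are not resolved; a real sandbox would also handle those.
func withinWorkDir(root, target string) bool {
	abs, err := filepath.Abs(filepath.Join(root, target))
	if err != nil {
		return false
	}
	rootAbs, err := filepath.Abs(root)
	if err != nil {
		return false
	}
	// If the path relative to root starts with "..", it escaped the sandbox.
	rel, err := filepath.Rel(rootAbs, abs)
	if err != nil {
		return false
	}
	return rel != ".." && !strings.HasPrefix(rel, ".."+string(filepath.Separator))
}

func main() {
	fmt.Println(withinWorkDir("/work", "notes.txt"))     // true
	fmt.Println(withinWorkDir("/work", "../etc/passwd")) // false
}
```

A tool handler would call a check like this on every path argument before reading or writing, rejecting anything outside the working directory.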
