nanopub-ai

Local AI coding environment using OpenCode and Ollama, running in Docker.

Prerequisites

Docker with the Docker Compose plugin.

Quick start

# First run (builds the image and pulls the model; takes a while)
docker compose run --rm opencode

# Subsequent runs
docker compose run --rm opencode

Use docker compose run (not up) because OpenCode is an interactive TUI that needs an attached terminal.

API access

Start the API server:

docker compose up opencode-api

The OpenCode HTTP API is then available at http://localhost:4096. Interactive API docs are served at http://localhost:4096/doc.
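As a quick smoke test, something like the following checks whether the server is answering on the docs endpoint mentioned above (assumes curl is installed and the default port mapping):

```shell
# Probe the docs endpoint; prints a message depending on whether the API is reachable.
if curl -fsS http://localhost:4096/doc >/dev/null 2>&1; then
  echo "OpenCode API is up"
else
  echo "OpenCode API is not reachable"
fi
```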

Architecture

| Service      | Description                                                                    |
| ------------ | ------------------------------------------------------------------------------ |
| ollama       | Ollama server (CPU mode); persists models in a Docker volume                   |
| ollama-pull  | One-shot service that pulls qwen2.5-coder:7b, then exits                       |
| opencode     | Interactive TUI connected to Ollama; mounts the current directory as /workspace |
| opencode-api | Headless HTTP server exposing the OpenCode API on port 4096                    |
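The wiring between these services can be pictured with a compose sketch like the one below. This is an illustrative outline, not the project's actual docker-compose.yml: the service names, volume name, port, and model tag come from this README, while the image names, entrypoint details, and serve command are assumptions.

```yaml
services:
  ollama:
    image: ollama/ollama            # CPU mode; no GPU devices requested
    volumes:
      - ollama_data:/root/.ollama   # persists pulled models across restarts

  ollama-pull:
    image: ollama/ollama
    depends_on: [ollama]
    environment:
      - OLLAMA_HOST=ollama:11434
    # One-shot: pull the model through the ollama service, then exit.
    entrypoint: ["ollama", "pull", "qwen2.5-coder:7b"]

  opencode:
    build: .
    depends_on: [ollama-pull]
    volumes:
      - .:/workspace                # current directory mounted into the container

  opencode-api:
    build: .
    ports:
      - "4096:4096"                 # OpenCode HTTP API

volumes:
  ollama_data:
```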

Tear down

docker compose down

Model data is stored in the ollama_data volume and persists across restarts. To remove it:

docker compose down -v

Configuration

The OpenCode config lives in opencode.json and is baked into the image at build time. To use a different model:

  1. Change the model name in opencode.json and in the ollama-pull entrypoint in docker-compose.yml.
  2. Rebuild the image:
docker compose build opencode
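For orientation, an opencode.json pointing at a local Ollama backend typically looks something like the sketch below. The README does not show the actual file, so treat the keys and values here as an assumed example (the OpenAI-compatible provider pattern is a common way to wire Ollama into OpenCode), not the project's real config:

```json
{
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "options": { "baseURL": "http://ollama:11434/v1" },
      "models": { "qwen2.5-coder:7b": {} }
    }
  },
  "model": "ollama/qwen2.5-coder:7b"
}
```

Whatever model name you put here must match the tag pulled by the ollama-pull service, or OpenCode will request a model that Ollama has not downloaded.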

About

An AI for nanopublications with a local LLM.
