# Virtual Context Plugin for OpenClaw
virtual-context.com — OS-style memory for LLMs. Less context. Better answers.
Virtual Context lets your agents run with unlimited context windows while sending only what matters to the LLM. Conversations are compressed, organized, and indexed automatically. When context is needed, it's retrieved semantically and injected into the payload. The result: unlimited memory, lower token costs, and better reasoning from models that see clean, relevant context instead of raw history.
This plugin provides deep OpenClaw integration via the Virtual Context REST API. For other frameworks, the transparent proxy requires zero code changes.
- Prepare — before each LLM call, sends your messages to the Virtual Context cloud and gets back a compressed payload with relevant historical context injected.
- Tools — registers retrieval tools (`vc_expand_topic`, `vc_find_quote`, `vc_recall_all`, `vc_query_facts`, `vc_remember_when`, `vc_find_session`) that the LLM can call to pull in more context on demand.
- Ingest — after each LLM response, sends the assistant's reply to the cloud for tagging and indexing.
```
openclaw plugins install clawhub:virtual-context
```
In `openclaw.json`:

```json
{
  "plugins": {
    "entries": {
      "virtual-context": {
        "enabled": true,
        "config": {
          "vcKey": "vc-your-key-here",
          "baseUrl": "https://api.virtual-context.com",
          "providers": ["openai-direct/gpt-5.4"],
          "debug": false
        }
      }
    }
  }
}
```

| Option | Type | Default | Description |
|---|---|---|---|
| `vcKey` | string | required | Your Virtual Context API key |
| `baseUrl` | string | `https://api.virtual-context.com` | VC REST API base URL |
| `providers` | string[] | all | Provider/model pairs to activate for. Empty = all providers. Example: `["openai-direct/gpt-5.4"]` |
| `debug` | boolean | `false` | Enable verbose logging of REST API calls and payloads |
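A minimal sketch of how the `providers` option could gate activation, assuming exact `provider/model` string matching as in the table's example; the function name is illustrative, not part of the plugin API.

```typescript
// Sketch only: decides whether the plugin activates for a given provider/model
// pair, per the documented rule "Empty = all providers".
function isActive(providers: string[], provider: string, model: string): boolean {
  if (providers.length === 0) return true; // empty list = activate everywhere
  return providers.includes(`${provider}/${model}`);
}
```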
- Bootstrap — on startup, fetches tool definitions from `/api/v1/tools/definitions` and registers them as OpenClaw tools.
- Before each LLM call — calls `/api/v1/context/prepare` with the full message history. The cloud returns a compressed payload with context injected, old turns trimmed, and tools added. The plugin replaces the messages in-place.
- After each LLM response — calls `/api/v1/context/ingest` with the assistant's reply text for tagging and compaction.
- On tool calls — when the LLM requests a VC tool, the plugin calls `/api/v1/tools/{name}` and returns the result.
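The "replaces the messages in-place" step above can be sketched as follows; using `splice` on the existing array is an assumed mechanism, chosen because it preserves the array reference that other hook consumers may hold.

```typescript
// Sketch only: swap the full history for the compressed payload without
// allocating a new array, so existing references see the new contents.
type Msg = { role: string; content: string };

function replaceInPlace(messages: Msg[], compressed: Msg[]): void {
  messages.splice(0, messages.length, ...compressed);
}
```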
By default, the plugin activates for all providers. Use the `providers` config to restrict it to specific provider/model combinations. The plugin reads the current model from the session store at runtime, so it correctly handles `/model` switches.
This plugin is transparent about what it accesses. Here is the full list:
Network calls (to your configured `baseUrl`):
- Sends conversation messages to `/api/v1/context/prepare` before each LLM call
- Sends assistant reply text to `/api/v1/context/ingest` after each LLM response
- Fetches tool definitions from `/api/v1/tools/definitions` at startup
- Calls `/api/v1/tools/{name}` when the LLM requests a retrieval tool
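A hedged sketch of the first of these calls, the prepare round trip. Only the endpoint path comes from the text above; the request body, response fields, and bearer-token auth header are assumptions for illustration.

```typescript
// Sketch only: POST the message history to the prepare endpoint and return
// the compressed payload. Field names here are assumed, not documented.
type Message = { role: "system" | "user" | "assistant"; content: string };

interface PrepareResponse {
  messages: Message[];   // compressed payload with context injected (assumed)
  systemPrompt?: string; // optional system-prompt override (assumed)
}

async function prepare(baseUrl: string, vcKey: string, messages: Message[]): Promise<PrepareResponse> {
  const res = await fetch(`${baseUrl}/api/v1/context/prepare`, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${vcKey}` },
    body: JSON.stringify({ messages }),
  });
  if (!res.ok) throw new Error(`prepare failed: ${res.status}`);
  return (await res.json()) as PrepareResponse;
}
```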
Local filesystem reads:
- Reads `~/.openclaw/agents/<agentId>/sessions/sessions.json` to determine the current model for provider filtering. This is read-only access to OpenClaw's session store, used because the `before_prompt_build` hook does not expose the active model in its context. No writes.
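A sketch of the model lookup described above. Only the file path and the read-only intent come from the text; the `sessions.json` field name and the fallback behavior are assumptions. The helper is pure (it takes the raw file contents) so the parse logic is testable; the plugin would feed it the bytes read from the session-store path.

```typescript
// Sketch only: extract the active model from the session store contents.
// "currentModel" is an assumed field name, e.g. "openai-direct/gpt-5.4".
interface SessionStore {
  currentModel?: string;
}

function currentModelFrom(raw: string): string | undefined {
  try {
    return (JSON.parse(raw) as SessionStore).currentModel;
  } catch {
    return undefined; // unreadable store: caller falls back to "all providers"
  }
}
```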
Payload modification:
- Replaces the message array in-place with the compressed payload returned by the cloud
- Can override the system prompt if the cloud returns one (VC manages the full payload to compress it)
Tool registration:
- Registers tools dynamically from definitions fetched from the cloud at startup
Debug logging (opt-in, off by default):
- When `debug: true`, logs message previews, API responses, and payload sizes to the gateway log. Disable in production.
What it does NOT do:
- Does not write to any local files (except gateway logs via the logger)
- Does not access files outside the session store
- Does not send data to any endpoint other than your configured `baseUrl`
- Does not store credentials or API keys beyond what is in your `openclaw.json` config
Sign up at virtual-context.com to get your API key. Free tier available; Pro ($19/mo) for unlimited usage.
- virtual-context.com — product overview, pricing, and signup
- Documentation — integration guides for Anthropic, OpenAI, and more
- Research Paper — the technical paper behind Virtual Context
- GitHub — plugin source code
- Wire-log observability: the `[vc:wire] POST <path>` log line now appends `timeout=Nms`, so the prepare-call timeout selection is grep-able from gateway logs. VCMERGE / VCMERGE PREVIEW requests show `timeout=60000ms`, normal prepares show `timeout=15000ms`, and initial JSONL ingest shows `timeout=120000ms`.
- Wire-shape tests strengthened to pin the full prepare-payload body shape (role, `content[].type`, model presence/absence) for both `VCMERGE INTO` and `VCMERGE PREVIEW`, not just message count + prompt text.
- Lockfile regenerated to record 5.1.1 (runtime payload was unaffected because the `package.json` `"files"` list excludes the lockfile).
- VCMERGE support: the plugin's existing `^VC[A-Z]/i` + `vc_command` + `prependContext` rail handles `VCMERGE INTO <target>`, `VCMERGE PREVIEW <target>`, and the reserved-for-v2 `VCMERGESTATUS <merge_id>` natively. No new dispatch code; the cloud's REST endpoint resolves these alongside VCATTACH/VCSTATUS/VCLABEL/etc.
- Timeout sizing for VC commands: the prepare-call timeout is now `60s` for any VC command (matched against `^VC[A-Z]/i`). Previous behavior was `15s` everywhere except `120s` on initial JSONL ingest. This gives sync-path merges comfortable headroom — VCMERGE on conversations with >5k turns may take several seconds (sync path); >10k turns return a `merge_id` immediately for async tracking via `VCMERGESTATUS`.
  - Alarm-threshold rule: the 60s cap is a forcing function, NOT a tuning knob. If real-world p99 nears 60s, the right lever is dropping the cloud's `max_sync_source_turns` to push more sources into the async path — NOT bumping this timeout further. Bumping past 60s would mask the sync path getting too slow rather than escalating it.
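The timeout-per-branch contract described above can be sketched as a single selection function: 120s for the initial JSONL ingest, 60s for any VC command, 15s otherwise. The function name, the ingest flag, and the branch order are assumptions; only the regex and the three values come from the text.

```typescript
// Sketch only: pick the prepare-call timeout for a given prompt.
const VC_COMMAND = /^VC[A-Z]/i; // matches VCMERGE, VCATTACH, VCSTATUS, ...

function prepareTimeoutMs(prompt: string, isInitialJsonlIngest: boolean): number {
  if (isInitialJsonlIngest) return 120_000; // initial JSONL ingest
  if (VC_COMMAND.test(prompt)) return 60_000; // any VC command
  return 15_000; // normal prepare call
}
```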
- Test infrastructure: the plugin now ships a `vitest` + `fetch-mock` test harness in `tests/`. Tests cover the timeout-per-branch contract, URL + body construction, and the message/error/bracket fallback chain over canonical error envelopes. Run via `npm test`. Dev-only: `tests/` and `node_modules/` do not bundle into the runtime npm package (per the `package.json` `"files"` list); end-user installs are unchanged.
- Defensive fix: VC command error responses now render correctly when the cloud populates the `error` field without a `message` field. Previously, such error responses (e.g. `VCATTACH` against a missing target) rendered the placeholder string `[VC <command>]` and the user saw no error context. The plugin now falls back to `prepareResult.error` before the placeholder.
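The message/error/bracket fallback chain behind this fix can be sketched as below; the envelope field names mirror the text, while the function itself is illustrative rather than the plugin's actual code.

```typescript
// Sketch only: prefer message, then error, then the bracket placeholder.
interface PrepareResult {
  message?: string; // normal rendered response
  error?: string;   // error envelope without a message (the fixed case)
}

function renderVcResponse(command: string, result: PrepareResult): string {
  return result.message ?? result.error ?? `[VC ${command}]`;
}
```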
- Hardcoded retrieval tool definitions (no bootstrap network call).
- VC command handling via `prependContext` (keeps history clean).
- JSONL ingest tracking with `VCREINGEST` reset command.
- Wire-level request logging in debug mode.