Description
When selecting local Ollama during `nemoclaw onboard`, the sandbox's `~/.openclaw/openclaw.json` still shows the cloud model (`inference/nvidia/nemotron-3-super-120b-a12b`) instead of the selected Ollama model.
Steps to Reproduce
- Run `nemoclaw onboard --non-interactive` with `NEMOCLAW_PROVIDER=ollama` and `NEMOCLAW_MODEL=nemotron-3-nano:30b`
- Or run interactive onboarding and select "Local Ollama" as the inference provider
- After completion, check `~/.openclaw/openclaw.json` inside the sandbox
Expected Behavior
openclaw.json should reference the selected Ollama model (e.g., ollama/nemotron-3-nano:30b or similar).
Actual Behavior
openclaw.json contains:
"agents": {
"defaults": {
"model": {
"primary": "inference/nvidia/nemotron-3-super-120b-a12b"
}
}
}
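A likely source, judging from Dockerfile Steps 22 and 27 in the build log below: the config is generated at image build time from the `NEMOCLAW_MODEL` build ARG, whose default is the cloud model. A simplified sketch of that derivation (an assumption based on the log, not the actual installer code):

```python
# Simplified sketch of the config generation visible in Dockerfile Step 27:
# the primary model string is derived from the NEMOCLAW_MODEL build ARG,
# whose default (Step 22) is the cloud model. If onboarding never overrides
# the ARG when rebuilding the image, the baked-in config keeps the cloud
# default regardless of the provider selected later.
def render_primary_model(model: str) -> str:
    # mirrors: 'model': {'primary': f'inference/{model}'}
    return f"inference/{model}"

# With the ARG default from Step 22:
print(render_primary_model("nvidia/nemotron-3-super-120b-a12b"))
# -> inference/nvidia/nemotron-3-super-120b-a12b (the value observed above)
```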
### Reproduction Steps
1) Install Ollama, and pull the LLM and the embedding model for RAG
a. curl -fsSL https://ollama.com/install.sh | sh
b. ollama pull nemotron-3-nano:30b
This machine has a single A100 80 GB GPU, which can run nemotron-3-nano:30b but not nemotron-3-super, since the latter requires more GPU VRAM:
root@host:~# lspci | grep -i nvidia
6b:00.0 3D controller: NVIDIA Corporation GA100 [A100 PCIe 80GB] (rev a1)
root@host:~#
c. ollama pull nomic-embed-text
nomic-embed-text is a text embedding model: it converts text into numerical vectors (embeddings) for semantic search, RAG, and similarity tasks.
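As a toy illustration of the similarity tasks such embeddings enable (the vectors here are made up for demonstration, not real nomic-embed-text output):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Vectors pointing the same way score ~1.0; orthogonal vectors score 0.0.
print(cosine_similarity([1.0, 2.0], [2.0, 4.0]))  # ~1.0
```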
ubuntu@host:~$ ollama list
NAME ID SIZE MODIFIED
nomic-embed-text:latest 0a109f422b47 274 MB 2 hours ago
nemotron-3-nano:30b b725f1117407 24 GB 2 hours ago
ubuntu@host:~$
ubuntu@host:~$ ollama list
NAME ID SIZE MODIFIED
nomic-embed-text:latest 0a109f422b47 274 MB 11 hours ago
nemotron-3-nano:30b b725f1117407 24 GB 11 hours ago
ubuntu@host:~$ curl http://localhost:11434/api/tags
{"models":[{"name":"nomic-embed-text:latest","model":"nomic-embed-text:latest","modified_at":"2026-03-21T10:27:44.641193503+02:00","size":274302450,"digest":"0a109f422b47e3a30ba2b10eca18548e944e8a23073ee3f3e947efcf3c45e59f","details":{"parent_model":"","format":"gguf","family":"nomic-bert","families":["nomic-bert"],"parameter_size":"137M","quantization_level":"F16"}},{"name":"nemotron-3-nano:30b","model":"nemotron-3-nano:30b","modified_at":"2026-03-21T10:16:03.326277546+02:00","size":24271934866,"digest":"b725f11174073334edd0c2ff396b8d4e66d7dab22a5a63717ccad5a08a270cf1","details":{"parent_model":"","format":"gguf","family":"nemotron_h_moe","families":["nemotron_h_moe"],"parameter_size":"31.6B","quantization_level":"Q4_K_M"}}]}ubuntu@host:~$
2) Install the latest Node.js
curl -fsSL https://deb.nodesource.com/setup_22.x | sudo bash -
sudo apt-get install -y nodejs
ubuntu@host:~$ node --version
v22.22.1
ubuntu@host:~$ npm --version
10.9.4
ubuntu@host:~$
3) Install Docker (required for sandboxes; Podman is not supported at the moment)
curl -fsSL https://get.docker.com | sh
# Log out and back in for docker group to take effect
ubuntu@host:~$ id
uid=1000(ubuntu) gid=1000(ubuntu) groups=1000(ubuntu),4(adm),24(cdrom),27(sudo),30(dip),46(plugdev),101(lxd),987(docker)
ubuntu@host:~$
4) Run the installer with sudo
curl -fsSL https://www.nvidia.com/nemoclaw.sh | sudo bash
######################
Should local Ollama be exposed on all interfaces?
######################
Created sandbox: my-assistant
Setting up NemoClaw...
[gateway] openclaw gateway launched (pid 92)
[gateway] auto-pair watcher launched (pid 93)
[gateway] Local UI: http://127.0.0.1:18789/#token=<my token>
[gateway] Remote UI: http://127.0.0.1:18789/#token=<my token>
Waiting for sandbox to become ready...
✓ Forwarding port 18789 to sandbox my-assistant in the background
Access at: http://127.0.0.1:18789/
Stop with: openshell forward stop 18789 my-assistant
✓ Sandbox 'my-assistant' created
[4/7] Configuring inference (NIM)
──────────────────────────────────────────────────
Detected local inference option: Ollama
Select one explicitly to use it. Press Enter to keep the cloud default.
Inference options:
1) NVIDIA Endpoint API (build.nvidia.com)
2) Local Ollama (localhost:11434) — running (suggested)
Choose [1]: 2
✓ Using Ollama on localhost:11434
Ollama models:
1) nomic-embed-text:latest
2) nemotron-3-nano:30b
Choose model [2]:
[5/7] Setting up inference provider
──────────────────────────────────────────────────
Local Ollama is responding on localhost, but containers cannot reach http://host.openshell.internal:11434. Ensure Ollama listens on 0.0.0.0:11434 instead of 127.0.0.1 so sandboxes can reach it.
On macOS, local inference also depends on OpenShell host routing support.
ubuntu@host:~$
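The reachability failure reported in step [5/7] can be reproduced with a minimal TCP probe (a sketch, not the installer's actual check; `host.openshell.internal` resolves only inside OpenShell sandboxes):

```python
import socket

def can_connect(host: str, port: int = 11434, timeout: float = 2.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers DNS failure, connection refusal, and timeout
        return False

# From inside the sandbox this should return True once Ollama is reachable:
# can_connect("host.openshell.internal")
```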
###########################################
root@host:~# cat /etc/systemd/system/socat-ollama.service
[Unit]
Description=Forward Ollama to Docker bridge
After=network.target ollama.service
Requires=ollama.service
[Service]
Type=simple
ExecStart=/usr/bin/socat TCP-LISTEN:11434,bind=172.18.0.1,fork TCP:127.0.0.1:11434
Restart=always
RestartSec=5
[Install]
WantedBy=multi-user.target
root@host:~#
This listens on the Docker bridge IP instead of all interfaces:
root@host:~# curl http://172.18.0.1:11434/api/tags
{"models":[{"name":"nomic-embed-text:latest","model":"nomic-embed-text:latest","modified_at":"2026-03-21T10:27:44.641193503+02:00","size":274302450,"digest":"0a109f422b47e3a30ba2b10eca18548e944e8a23073ee3f3e947efcf3c45e59f","details":{"parent_model":"","format":"gguf","family":"nomic-bert","families":["nomic-bert"],"parameter_size":"137M","quantization_level":"F16"}},{"name":"nemotron-3-nano:30b","model":"nemotron-3-nano:30b","modified_at":"2026-03-21T10:16:03.326277546+02:00","size":24271934866,"digest":"b725f11174073334edd0c2ff396b8d4e66d7dab22a5a63717ccad5a08a270cf1","details":{"parent_model":"","format":"gguf","family":"nemotron_h_moe","families":["nemotron_h_moe"],"parameter_size":"31.6B","quantization_level":"Q4_K_M"}}]}root@host:~#
host:~# nemoclaw --version
Unknown command: --version
Registered sandboxes: my-assistant
Try: nemoclaw <sandbox-name> connect
Run 'nemoclaw help' for usage.
root@host:~# nemoclaw my-assistant connect
sandbox@my-assistant:~$
#############################
The wrong provider is set, even though Ollama was selected earlier
#############################
root@host:~# NEMOCLAW_RECREATE_SANDBOX=1 nemoclaw onboard --non-interactive
NemoClaw Onboarding
(non-interactive mode)
===================
[1/7] Preflight checks
──────────────────────────────────────────────────
✓ Docker is running
✓ Container runtime: docker
✓ openshell CLI: openshell 0.0.13
Cleaning up previous NemoClaw session...
✓ Previous session cleaned up
✓ Port 8080 available (OpenShell gateway)
✓ Port 18789 available (NemoClaw dashboard)
✓ NVIDIA GPU detected: 1 GPU(s), 81920 MB VRAM
[2/7] Starting OpenShell gateway
──────────────────────────────────────────────────
Using pinned OpenShell gateway image: ghcr.io/nvidia/openshell/cluster:0.0.13
✓ Checking Docker
✓ Downloading gateway
✓ Initializing environment
✓ Starting gateway
✓ Gateway ready
Name: nemoclaw
Endpoint: https://127.0.0.1:8080
✓ Active gateway set to 'nemoclaw'
✓ Gateway is healthy
[3/7] Creating sandbox
──────────────────────────────────────────────────
[non-interactive] Sandbox name (lowercase, numbers, hyphens) [my-assistant]: → my-assistant
[non-interactive] Sandbox 'my-assistant' exists — recreating
Creating sandbox 'my-assistant' (this takes a few minutes on first run)...
Building image openshell/sandbox-from:1774124891 from /tmp/nemoclaw-build-0tGuw9/Dockerfile
Context: /tmp/nemoclaw-build-0tGuw9
Gateway: nemoclaw
Building image openshell/sandbox-from:1774124891 from /tmp/nemoclaw-build-0tGuw9/Dockerfile
Step 1/33 : FROM node:22-slim AS builder
---> 4f77a690f2f8
Step 2/33 : COPY nemoclaw/package.json nemoclaw/tsconfig.json /opt/nemoclaw/
---> Using cache
---> 6293560dd804
Step 3/33 : COPY nemoclaw/src/ /opt/nemoclaw/src/
---> Using cache
---> 8a8f53d4bb58
Step 4/33 : WORKDIR /opt/nemoclaw
---> Using cache
---> 411021070ee7
Step 5/33 : RUN npm install && npm run build
---> Using cache
---> 4319d89ae863
Step 6/33 : FROM node:22-slim
---> 4f77a690f2f8
Step 7/33 : ENV DEBIAN_FRONTEND=noninteractive
---> Using cache
---> 18e8a0e8cc76
Step 8/33 : RUN apt-get update && apt-get install -y --no-install-recommends python3 python3-pip python3-venv curl git ca-certificates iproute2 && rm -rf /var/lib/apt/lists/*
---> Using cache
---> 211cd92a3cec
Step 9/33 : RUN groupadd -r sandbox && useradd -r -g sandbox -d /sandbox -s /bin/bash sandbox && mkdir -p /sandbox/.nemoclaw && chown -R sandbox:sandbox /sandbox
---> Using cache
---> eceb9d6e814f
Step 10/33 : RUN mkdir -p /sandbox/.openclaw-data/agents/main/agent /sandbox/.openclaw-data/extensions /sandbox/.openclaw-data/workspace /sandbox/.openclaw-data/skills /sandbox/.openclaw-data/hooks /sandbox/.openclaw-data/identity /sandbox/.openclaw-data/devices /sandbox/.openclaw-data/canvas /sandbox/.openclaw-data/cron && mkdir -p /sandbox/.openclaw && ln -s /sandbox/.openclaw-data/agents /sandbox/.openclaw/agents && ln -s /sandbox/.openclaw-data/extensions /sandbox/.openclaw/extensions && ln -s /sandbox/.openclaw-data/workspace /sandbox/.openclaw/workspace && ln -s /sandbox/.openclaw-data/skills /sandbox/.openclaw/skills && ln -s /sandbox/.openclaw-data/hooks /sandbox/.openclaw/hooks && ln -s /sandbox/.openclaw-data/identity /sandbox/.openclaw/identity && ln -s /sandbox/.openclaw-data/devices /sandbox/.openclaw/devices && ln -s /sandbox/.openclaw-data/canvas /sandbox/.openclaw/canvas && ln -s /sandbox/.openclaw-data/cron /sandbox/.openclaw/cron && touch /sandbox/.openclaw-data/update-check.json && ln -s /sandbox/.openclaw-data/update-check.json /sandbox/.openclaw/update-check.json && chown -R sandbox:sandbox /sandbox/.openclaw /sandbox/.openclaw-data
---> Using cache
---> 101147aa55f3
Step 11/33 : RUN npm install -g openclaw@2026.3.11
---> Using cache
---> 1a9f1458190a
Step 12/33 : RUN pip3 install --break-system-packages pyyaml
---> Using cache
---> 99d8c7f482a3
Step 13/33 : COPY --from=builder /opt/nemoclaw/dist/ /opt/nemoclaw/dist/
---> Using cache
---> 6569631d0359
Step 14/33 : COPY nemoclaw/openclaw.plugin.json /opt/nemoclaw/
---> Using cache
---> f18c7735977a
Step 15/33 : COPY nemoclaw/package.json /opt/nemoclaw/
---> Using cache
---> 8d81a46ccf8e
Step 16/33 : COPY nemoclaw-blueprint/ /opt/nemoclaw-blueprint/
---> Using cache
---> cba18e89956f
Step 17/33 : WORKDIR /opt/nemoclaw
---> Using cache
---> 5e0956c494f2
Step 18/33 : RUN npm install --omit=dev
---> Using cache
---> 085624626508
Step 19/33 : RUN mkdir -p /sandbox/.nemoclaw/blueprints/0.1.0 && cp -r /opt/nemoclaw-blueprint/* /sandbox/.nemoclaw/blueprints/0.1.0/
---> Using cache
---> 69715a714c14
Step 20/33 : COPY scripts/nemoclaw-start.sh /usr/local/bin/nemoclaw-start
---> Using cache
---> dee39ee4a8b1
Step 21/33 : RUN chmod +x /usr/local/bin/nemoclaw-start
---> Using cache
---> e42747483a47
Step 22/33 : ARG NEMOCLAW_MODEL=nvidia/nemotron-3-super-120b-a12b
---> Using cache
---> 351aa7d8efd7
Step 23/33 : ARG CHAT_UI_URL=http://127.0.0.1:18789
---> Using cache
---> 314c4f152be2
Step 24/33 : ARG NEMOCLAW_BUILD_ID=default
---> Using cache
---> cae95626e9ec
Step 25/33 : WORKDIR /sandbox
---> Using cache
---> 4dd0905eaba8
Step 26/33 : USER sandbox
---> Using cache
---> 83f051f2b0ef
Step 27/33 : RUN python3 -c "import json, os, secrets; from urllib.parse import urlparse; model = '${NEMOCLAW_MODEL}'; chat_ui_url = '${CHAT_UI_URL}'; parsed = urlparse(chat_ui_url); chat_origin = f'{parsed.scheme}://{parsed.netloc}' if parsed.scheme and parsed.netloc else 'http://127.0.0.1:18789'; origins = ['http://127.0.0.1:18789']; origins = list(dict.fromkeys(origins + [chat_origin])); config = { 'agents': {'defaults': {'model': {'primary': f'inference/{model}'}}}, 'models': {'mode': 'merge', 'providers': { 'nvidia': { 'baseUrl': 'https://inference.local/v1', 'apiKey': 'openshell-managed', 'api': 'openai-completions', 'models': [{'id': model.split('/')[-1], 'name': model, 'reasoning': False, 'input': ['text'], 'cost': {'input': 0, 'output': 0, 'cacheRead': 0, 'cacheWrite': 0}, 'contextWindow': 131072, 'maxTokens': 4096}] }, 'inference': { 'baseUrl': 'https://inference.local/v1', 'apiKey': 'unused', 'api': 'openai-completions', 'models': [{'id': model, 'name': model, 'reasoning': False, 'input': ['text'], 'cost': {'input': 0, 'output': 0, 'cacheRead': 0, 'cacheWrite': 0}, 'contextWindow': 131072, 'maxTokens': 4096}] } }}, 'gateway': { 'mode': 'local', 'controlUi': { 'allowInsecureAuth': True, 'dangerouslyDisableDeviceAuth': True, 'allowedOrigins': origins, }, 'trustedProxies': ['127.0.0.1', '::1'], 'auth': {'token': secrets.token_hex(32)} } }; path = os.path.expanduser('~/.openclaw/openclaw.json'); json.dump(config, open(path, 'w'), indent=2); os.chmod(path, 0o600)"
---> Using cache
---> 6c1fc48c0f7a
Step 28/33 : RUN openclaw doctor --fix > /dev/null 2>&1 || true && openclaw plugins install /opt/nemoclaw > /dev/null 2>&1 || true
---> Using cache
---> 28bc378760ef
Step 29/33 : USER root
---> Using cache
---> feb0cc35ca9e
Step 30/33 : RUN chown root:root /sandbox/.openclaw && find /sandbox/.openclaw -mindepth 1 -maxdepth 1 -exec chown -h root:root {} + && chmod 1777 /sandbox/.openclaw && chmod 444 /sandbox/.openclaw/openclaw.json
---> Using cache
---> 9b05468d1f63
Step 31/33 : USER sandbox
---> Using cache
---> bd1a78f66ab5
Step 32/33 : ENTRYPOINT ["/bin/bash"]
---> Using cache
---> e154e6cd4b50
Step 33/33 : CMD []
---> Using cache
---> 68ee53696acc
Successfully built 68ee53696acc
Successfully tagged openshell/sandbox-from:1774124891
Built image openshell/sandbox-from:1774124891
Pushing image openshell/sandbox-from:1774124891 into gateway "nemoclaw"
[progress] Exported 1174 MiB
[progress] Uploaded to gateway
Image openshell/sandbox-from:1774124891 is available in the gateway.
✓ Image openshell/sandbox-from:1774124891 is available in the gateway.
Created sandbox: my-assistant
Setting up NemoClaw...
[gateway] openclaw gateway launched (pid 92)
[gateway] auto-pair watcher launched (pid 93)
[gateway] Local UI: http://127.0.0.1:18789/#token=<my token>
[gateway] Remote UI: http://127.0.0.1:18789/#token=<my token>
Waiting for sandbox to become ready...
✓ Forwarding port 18789 to sandbox my-assistant in the background
Access at: http://127.0.0.1:18789/
Stop with: openshell forward stop 18789 my-assistant
✓ Sandbox 'my-assistant' created
[4/7] Configuring inference (NIM)
──────────────────────────────────────────────────
[non-interactive] Provider: cloud
NVIDIA_API_KEY is required for cloud provider in non-interactive mode.
Set it via: NVIDIA_API_KEY=nvapi-... nemoclaw onboard --non-interactive
root@host:~#
####################################################################
#Second retry
####################################################################
root@host:~# NEMOCLAW_PROVIDER=ollama \
> NEMOCLAW_MODEL=nemotron-3-nano:30b \
> NEMOCLAW_RECREATE_SANDBOX=1 \
> nemoclaw onboard --non-interactive
NemoClaw Onboarding
(non-interactive mode)
===================
[1/7] Preflight checks
──────────────────────────────────────────────────
✓ Docker is running
✓ Container runtime: docker
✓ openshell CLI: openshell 0.0.13
Cleaning up previous NemoClaw session...
✓ Previous session cleaned up
✓ Port 8080 available (OpenShell gateway)
✓ Port 18789 available (NemoClaw dashboard)
✓ NVIDIA GPU detected: 1 GPU(s), 81920 MB VRAM
[2/7] Starting OpenShell gateway
──────────────────────────────────────────────────
Using pinned OpenShell gateway image: ghcr.io/nvidia/openshell/cluster:0.0.13
✓ Checking Docker
✓ Downloading gateway
✓ Initializing environment
✓ Starting gateway
✓ Gateway ready
Name: nemoclaw
Endpoint: https://127.0.0.1:8080
✓ Active gateway set to 'nemoclaw'
✓ Gateway is healthy
[3/7] Creating sandbox
──────────────────────────────────────────────────
[non-interactive] Sandbox name (lowercase, numbers, hyphens) [my-assistant]: → my-assistant
[non-interactive] Sandbox 'my-assistant' exists — recreating
Creating sandbox 'my-assistant' (this takes a few minutes on first run)...
Building image openshell/sandbox-from:1774125448 from /tmp/nemoclaw-build-TuHrer/Dockerfile
Context: /tmp/nemoclaw-build-TuHrer
Gateway: nemoclaw
Building image openshell/sandbox-from:1774125448 from /tmp/nemoclaw-build-TuHrer/Dockerfile
Step 1/33 : FROM node:22-slim AS builder
---> 4f77a690f2f8
Step 2/33 : COPY nemoclaw/package.json nemoclaw/tsconfig.json /opt/nemoclaw/
---> Using cache
---> 6293560dd804
Step 3/33 : COPY nemoclaw/src/ /opt/nemoclaw/src/
---> Using cache
---> 8a8f53d4bb58
Step 4/33 : WORKDIR /opt/nemoclaw
---> Using cache
---> 411021070ee7
Step 5/33 : RUN npm install && npm run build
---> Using cache
---> 4319d89ae863
Step 6/33 : FROM node:22-slim
---> 4f77a690f2f8
Step 7/33 : ENV DEBIAN_FRONTEND=noninteractive
---> Using cache
---> 18e8a0e8cc76
Step 8/33 : RUN apt-get update && apt-get install -y --no-install-recommends python3 python3-pip python3-venv curl git ca-certificates iproute2 && rm -rf /var/lib/apt/lists/*
---> Using cache
---> 211cd92a3cec
Step 9/33 : RUN groupadd -r sandbox && useradd -r -g sandbox -d /sandbox -s /bin/bash sandbox && mkdir -p /sandbox/.nemoclaw && chown -R sandbox:sandbox /sandbox
---> Using cache
---> eceb9d6e814f
Step 10/33 : RUN mkdir -p /sandbox/.openclaw-data/agents/main/agent /sandbox/.openclaw-data/extensions /sandbox/.openclaw-data/workspace /sandbox/.openclaw-data/skills /sandbox/.openclaw-data/hooks /sandbox/.openclaw-data/identity /sandbox/.openclaw-data/devices /sandbox/.openclaw-data/canvas /sandbox/.openclaw-data/cron && mkdir -p /sandbox/.openclaw && ln -s /sandbox/.openclaw-data/agents /sandbox/.openclaw/agents && ln -s /sandbox/.openclaw-data/extensions /sandbox/.openclaw/extensions && ln -s /sandbox/.openclaw-data/workspace /sandbox/.openclaw/workspace && ln -s /sandbox/.openclaw-data/skills /sandbox/.openclaw/skills && ln -s /sandbox/.openclaw-data/hooks /sandbox/.openclaw/hooks && ln -s /sandbox/.openclaw-data/identity /sandbox/.openclaw/identity && ln -s /sandbox/.openclaw-data/devices /sandbox/.openclaw/devices && ln -s /sandbox/.openclaw-data/canvas /sandbox/.openclaw/canvas && ln -s /sandbox/.openclaw-data/cron /sandbox/.openclaw/cron && touch /sandbox/.openclaw-data/update-check.json && ln -s /sandbox/.openclaw-data/update-check.json /sandbox/.openclaw/update-check.json && chown -R sandbox:sandbox /sandbox/.openclaw /sandbox/.openclaw-data
---> Using cache
---> 101147aa55f3
Step 11/33 : RUN npm install -g openclaw@2026.3.11
---> Using cache
---> 1a9f1458190a
Step 12/33 : RUN pip3 install --break-system-packages pyyaml
---> Using cache
---> 99d8c7f482a3
Step 13/33 : COPY --from=builder /opt/nemoclaw/dist/ /opt/nemoclaw/dist/
---> Using cache
---> 6569631d0359
Step 14/33 : COPY nemoclaw/openclaw.plugin.json /opt/nemoclaw/
---> Using cache
---> f18c7735977a
Step 15/33 : COPY nemoclaw/package.json /opt/nemoclaw/
---> Using cache
---> 8d81a46ccf8e
Step 16/33 : COPY nemoclaw-blueprint/ /opt/nemoclaw-blueprint/
---> Using cache
---> cba18e89956f
Step 17/33 : WORKDIR /opt/nemoclaw
---> Using cache
---> 5e0956c494f2
Step 18/33 : RUN npm install --omit=dev
---> Using cache
---> 085624626508
Step 19/33 : RUN mkdir -p /sandbox/.nemoclaw/blueprints/0.1.0 && cp -r /opt/nemoclaw-blueprint/* /sandbox/.nemoclaw/blueprints/0.1.0/
---> Using cache
---> 69715a714c14
Step 20/33 : COPY scripts/nemoclaw-start.sh /usr/local/bin/nemoclaw-start
---> Using cache
---> dee39ee4a8b1
Step 21/33 : RUN chmod +x /usr/local/bin/nemoclaw-start
---> Using cache
---> e42747483a47
Step 22/33 : ARG NEMOCLAW_MODEL=nvidia/nemotron-3-super-120b-a12b
---> Using cache
---> 351aa7d8efd7
Step 23/33 : ARG CHAT_UI_URL=http://127.0.0.1:18789
---> Using cache
---> 314c4f152be2
Step 24/33 : ARG NEMOCLAW_BUILD_ID=default
---> Using cache
---> cae95626e9ec
Step 25/33 : WORKDIR /sandbox
---> Using cache
---> 4dd0905eaba8
Step 26/33 : USER sandbox
---> Using cache
---> 83f051f2b0ef
Step 27/33 : RUN python3 -c "import json, os, secrets; from urllib.parse import urlparse; model = '${NEMOCLAW_MODEL}'; chat_ui_url = '${CHAT_UI_URL}'; parsed = urlparse(chat_ui_url); chat_origin = f'{parsed.scheme}://{parsed.netloc}' if parsed.scheme and parsed.netloc else 'http://127.0.0.1:18789'; origins = ['http://127.0.0.1:18789']; origins = list(dict.fromkeys(origins + [chat_origin])); config = { 'agents': {'defaults': {'model': {'primary': f'inference/{model}'}}}, 'models': {'mode': 'merge', 'providers': { 'nvidia': { 'baseUrl': 'https://inference.local/v1', 'apiKey': 'openshell-managed', 'api': 'openai-completions', 'models': [{'id': model.split('/')[-1], 'name': model, 'reasoning': False, 'input': ['text'], 'cost': {'input': 0, 'output': 0, 'cacheRead': 0, 'cacheWrite': 0}, 'contextWindow': 131072, 'maxTokens': 4096}] }, 'inference': { 'baseUrl': 'https://inference.local/v1', 'apiKey': 'unused', 'api': 'openai-completions', 'models': [{'id': model, 'name': model, 'reasoning': False, 'input': ['text'], 'cost': {'input': 0, 'output': 0, 'cacheRead': 0, 'cacheWrite': 0}, 'contextWindow': 131072, 'maxTokens': 4096}] } }}, 'gateway': { 'mode': 'local', 'controlUi': { 'allowInsecureAuth': True, 'dangerouslyDisableDeviceAuth': True, 'allowedOrigins': origins, }, 'trustedProxies': ['127.0.0.1', '::1'], 'auth': {'token': secrets.token_hex(32)} } }; path = os.path.expanduser('~/.openclaw/openclaw.json'); json.dump(config, open(path, 'w'), indent=2); os.chmod(path, 0o600)"
---> Using cache
---> 6c1fc48c0f7a
Step 28/33 : RUN openclaw doctor --fix > /dev/null 2>&1 || true && openclaw plugins install /opt/nemoclaw > /dev/null 2>&1 || true
---> Using cache
---> 28bc378760ef
Step 29/33 : USER root
---> Using cache
---> feb0cc35ca9e
Step 30/33 : RUN chown root:root /sandbox/.openclaw && find /sandbox/.openclaw -mindepth 1 -maxdepth 1 -exec chown -h root:root {} + && chmod 1777 /sandbox/.openclaw && chmod 444 /sandbox/.openclaw/openclaw.json
---> Using cache
---> 9b05468d1f63
Step 31/33 : USER sandbox
---> Using cache
---> bd1a78f66ab5
Step 32/33 : ENTRYPOINT ["/bin/bash"]
---> Using cache
---> e154e6cd4b50
Step 33/33 : CMD []
---> Using cache
---> 68ee53696acc
Successfully built 68ee53696acc
Successfully tagged openshell/sandbox-from:1774125448
Built image openshell/sandbox-from:1774125448
Pushing image openshell/sandbox-from:1774125448 into gateway "nemoclaw"
[progress] Exported 1174 MiB
[progress] Uploaded to gateway
Image openshell/sandbox-from:1774125448 is available in the gateway.
✓ Image openshell/sandbox-from:1774125448 is available in the gateway.
Created sandbox: my-assistant
Setting up NemoClaw...
[gateway] openclaw gateway launched (pid 92)
[gateway] auto-pair watcher launched (pid 93)
[gateway] Local UI: http://127.0.0.1:18789/#token=<my token>
[gateway] Remote UI: http://127.0.0.1:18789/#token=<my token>
Waiting for sandbox to become ready...
✓ Forwarding port 18789 to sandbox my-assistant in the background
Access at: http://127.0.0.1:18789/
Stop with: openshell forward stop 18789 my-assistant
✓ Sandbox 'my-assistant' created
[4/7] Configuring inference (NIM)
──────────────────────────────────────────────────
[non-interactive] Provider: ollama
✓ Using Ollama on localhost:11434
[5/7] Setting up inference provider
──────────────────────────────────────────────────
Local Ollama is responding on localhost, but containers cannot reach http://host.openshell.internal:11434. Ensure Ollama listens on 0.0.0.0:11434 instead of 127.0.0.1 so sandboxes can reach it.
On macOS, local inference also depends on OpenShell host routing support.
root@host:~#
#######################################################################################
#Interactive install
#######################################################################################
Created sandbox: my-assistant
Setting up NemoClaw...
[gateway] openclaw gateway launched (pid 92)
[gateway] auto-pair watcher launched (pid 93)
[gateway] Local UI: http://127.0.0.1:18789/#token=<my token>
[gateway] Remote UI: http://127.0.0.1:18789/#token=<my token>
Waiting for sandbox to become ready...
✓ Forwarding port 18789 to sandbox my-assistant in the background
Access at: http://127.0.0.1:18789/
Stop with: openshell forward stop 18789 my-assistant
✓ Sandbox 'my-assistant' created
[4/7] Configuring inference (NIM)
──────────────────────────────────────────────────
Detected local inference option: Ollama
Select one explicitly to use it. Press Enter to keep the cloud default.
Inference options:
1) NVIDIA Endpoint API (build.nvidia.com)
2) Local Ollama (localhost:11434) — running (suggested)
Choose [1]:
###############################################################
root@host:~# mkdir -p /etc/systemd/system/ollama.service.d
root@host:~# vim /etc/systemd/system/ollama.service.d/override.conf
root@host:~# sudo systemctl daemon-reload
root@host:~# systemctl restart ollama
root@host:~# ss -tulpen | grep 11434
tcp LISTEN 0 4096 172.18.0.1:11434 0.0.0.0:* users:(("ollama",pid=592979,fd=3)) uid:996 ino:6450327 sk:3001 cgroup:/system.slice/ollama.service <->
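The drop-in contents were not pasted above; a typical override that would produce the 172.18.0.1 listener shown in the `ss` output sets the `OLLAMA_HOST` environment variable (an assumption for illustration, not the reporter's actual file):

```ini
# /etc/systemd/system/ollama.service.d/override.conf
# Hypothetical drop-in: binds Ollama to the Docker bridge IP so sandboxes
# can reach it without exposing the service on all interfaces.
[Service]
Environment="OLLAMA_HOST=172.18.0.1:11434"
```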
root@host:~# nemoclaw onboard
NemoClaw Onboarding
===================
[1/7] Preflight checks
──────────────────────────────────────────────────
✓ Docker is running
✓ Container runtime: docker
✓ openshell CLI: openshell 0.0.13
Cleaning up previous NemoClaw session...
✓ Previous session cleaned up
✓ Port 8080 available (OpenShell gateway)
✓ Port 18789 available (NemoClaw dashboard)
✓ NVIDIA GPU detected: 1 GPU(s), 81920 MB VRAM
[2/7] Starting OpenShell gateway
──────────────────────────────────────────────────
Using pinned OpenShell gateway image: ghcr.io/nvidia/openshell/cluster:0.0.13
✓ Checking Docker
✓ Downloading gateway
✓ Initializing environment
✓ Starting gateway
✓ Gateway ready
Name: nemoclaw
Endpoint: https://127.0.0.1:8080
✓ Active gateway set to 'nemoclaw'
✓ Gateway is healthy
[3/7] Creating sandbox
──────────────────────────────────────────────────
Sandbox name (lowercase, numbers, hyphens) [my-assistant]:
Sandbox 'my-assistant' already exists. Recreate? [y/N]: y
Creating sandbox 'my-assistant' (this takes a few minutes on first run)...
Building image openshell/sandbox-from:1774127592 from /tmp/nemoclaw-build-TrmijQ/Dockerfile
Context: /tmp/nemoclaw-build-TrmijQ
Gateway: nemoclaw
Building image openshell/sandbox-from:1774127592 from /tmp/nemoclaw-build-TrmijQ/Dockerfile
Step 1/33 : FROM node:22-slim AS builder
---> 4f77a690f2f8
Step 2/33 : COPY nemoclaw/package.json nemoclaw/tsconfig.json /opt/nemoclaw/
---> Using cache
---> 6293560dd804
Step 3/33 : COPY nemoclaw/src/ /opt/nemoclaw/src/
---> Using cache
---> 8a8f53d4bb58
Step 4/33 : WORKDIR /opt/nemoclaw
---> Using cache
---> 411021070ee7
Step 5/33 : RUN npm install && npm run build
---> Using cache
---> 4319d89ae863
Step 6/33 : FROM node:22-slim
---> 4f77a690f2f8
Step 7/33 : ENV DEBIAN_FRONTEND=noninteractive
---> Using cache
---> 18e8a0e8cc76
Step 8/33 : RUN apt-get update && apt-get install -y --no-install-recommends python3 python3-pip python3-venv curl git ca-certificates iproute2 && rm -rf /var/lib/apt/lists/*
---> Using cache
---> 211cd92a3cec
Step 9/33 : RUN groupadd -r sandbox && useradd -r -g sandbox -d /sandbox -s /bin/bash sandbox && mkdir -p /sandbox/.nemoclaw && chown -R sandbox:sandbox /sandbox
---> Using cache
---> eceb9d6e814f
Step 10/33 : RUN mkdir -p /sandbox/.openclaw-data/agents/main/agent /sandbox/.openclaw-data/extensions /sandbox/.openclaw-data/workspace /sandbox/.openclaw-data/skills /sandbox/.openclaw-data/hooks /sandbox/.openclaw-data/identity /sandbox/.openclaw-data/devices /sandbox/.openclaw-data/canvas /sandbox/.openclaw-data/cron && mkdir -p /sandbox/.openclaw && ln -s /sandbox/.openclaw-data/agents /sandbox/.openclaw/agents && ln -s /sandbox/.openclaw-data/extensions /sandbox/.openclaw/extensions && ln -s /sandbox/.openclaw-data/workspace /sandbox/.openclaw/workspace && ln -s /sandbox/.openclaw-data/skills /sandbox/.openclaw/skills && ln -s /sandbox/.openclaw-data/hooks /sandbox/.openclaw/hooks && ln -s /sandbox/.openclaw-data/identity /sandbox/.openclaw/identity && ln -s /sandbox/.openclaw-data/devices /sandbox/.openclaw/devices && ln -s /sandbox/.openclaw-data/canvas /sandbox/.openclaw/canvas && ln -s /sandbox/.openclaw-data/cron /sandbox/.openclaw/cron && touch /sandbox/.openclaw-data/update-check.json && ln -s /sandbox/.openclaw-data/update-check.json /sandbox/.openclaw/update-check.json && chown -R sandbox:sandbox /sandbox/.openclaw /sandbox/.openclaw-data
---> Using cache
---> 101147aa55f3
Step 11/33 : RUN npm install -g openclaw@2026.3.11
---> Using cache
---> 1a9f1458190a
Step 12/33 : RUN pip3 install --break-system-packages pyyaml
---> Using cache
---> 99d8c7f482a3
Step 13/33 : COPY --from=builder /opt/nemoclaw/dist/ /opt/nemoclaw/dist/
---> Using cache
---> 6569631d0359
Step 14/33 : COPY nemoclaw/openclaw.plugin.json /opt/nemoclaw/
---> Using cache
---> f18c7735977a
Step 15/33 : COPY nemoclaw/package.json /opt/nemoclaw/
---> Using cache
---> 8d81a46ccf8e
Step 16/33 : COPY nemoclaw-blueprint/ /opt/nemoclaw-blueprint/
---> Using cache
---> cba18e89956f
Step 17/33 : WORKDIR /opt/nemoclaw
---> Using cache
---> 5e0956c494f2
Step 18/33 : RUN npm install --omit=dev
---> Using cache
---> 085624626508
Step 19/33 : RUN mkdir -p /sandbox/.nemoclaw/blueprints/0.1.0 && cp -r /opt/nemoclaw-blueprint/* /sandbox/.nemoclaw/blueprints/0.1.0/
---> Using cache
---> 69715a714c14
Step 20/33 : COPY scripts/nemoclaw-start.sh /usr/local/bin/nemoclaw-start
---> Using cache
---> dee39ee4a8b1
Step 21/33 : RUN chmod +x /usr/local/bin/nemoclaw-start
---> Using cache
---> e42747483a47
Step 22/33 : ARG NEMOCLAW_MODEL=nvidia/nemotron-3-super-120b-a12b
---> Using cache
---> 351aa7d8efd7
Step 23/33 : ARG CHAT_UI_URL=http://127.0.0.1:18789
---> Using cache
---> 314c4f152be2
Step 24/33 : ARG NEMOCLAW_BUILD_ID=default
---> Using cache
---> cae95626e9ec
Step 25/33 : WORKDIR /sandbox
---> Using cache
---> 4dd0905eaba8
Step 26/33 : USER sandbox
---> Using cache
---> 83f051f2b0ef
Step 27/33 : RUN python3 -c "import json, os, secrets; from urllib.parse import urlparse; model = '${NEMOCLAW_MODEL}'; chat_ui_url = '${CHAT_UI_URL}'; parsed = urlparse(chat_ui_url); chat_origin = f'{parsed.scheme}://{parsed.netloc}' if parsed.scheme and parsed.netloc else 'http://127.0.0.1:18789'; origins = ['http://127.0.0.1:18789']; origins = list(dict.fromkeys(origins + [chat_origin])); config = { 'agents': {'defaults': {'model': {'primary': f'inference/{model}'}}}, 'models': {'mode': 'merge', 'providers': { 'nvidia': { 'baseUrl': 'https://inference.local/v1', 'apiKey': 'openshell-managed', 'api': 'openai-completions', 'models': [{'id': model.split('/')[-1], 'name': model, 'reasoning': False, 'input': ['text'], 'cost': {'input': 0, 'output': 0, 'cacheRead': 0, 'cacheWrite': 0}, 'contextWindow': 131072, 'maxTokens': 4096}] }, 'inference': { 'baseUrl': 'https://inference.local/v1', 'apiKey': 'unused', 'api': 'openai-completions', 'models': [{'id': model, 'name': model, 'reasoning': False, 'input': ['text'], 'cost': {'input': 0, 'output': 0, 'cacheRead': 0, 'cacheWrite': 0}, 'contextWindow': 131072, 'maxTokens': 4096}] } }}, 'gateway': { 'mode': 'local', 'controlUi': { 'allowInsecureAuth': True, 'dangerouslyDisableDeviceAuth': True, 'allowedOrigins': origins, }, 'trustedProxies': ['127.0.0.1', '::1'], 'auth': {'token': secrets.token_hex(32)} } }; path = os.path.expanduser('~/.openclaw/openclaw.json'); json.dump(config, open(path, 'w'), indent=2); os.chmod(path, 0o600)"
---> Using cache
---> 6c1fc48c0f7a
Step 28/33 : RUN openclaw doctor --fix > /dev/null 2>&1 || true && openclaw plugins install /opt/nemoclaw > /dev/null 2>&1 || true
---> Using cache
---> 28bc378760ef
Step 29/33 : USER root
---> Using cache
---> feb0cc35ca9e
Step 30/33 : RUN chown root:root /sandbox/.openclaw && find /sandbox/.openclaw -mindepth 1 -maxdepth 1 -exec chown -h root:root {} + && chmod 1777 /sandbox/.openclaw && chmod 444 /sandbox/.openclaw/openclaw.json
---> Using cache
---> 9b05468d1f63
Step 31/33 : USER sandbox
---> Using cache
---> bd1a78f66ab5
Step 32/33 : ENTRYPOINT ["/bin/bash"]
---> Using cache
---> e154e6cd4b50
Step 33/33 : CMD []
---> Using cache
---> 68ee53696acc
Successfully built 68ee53696acc
Successfully tagged openshell/sandbox-from:1774127592
Built image openshell/sandbox-from:1774127592
Pushing image openshell/sandbox-from:1774127592 into gateway "nemoclaw"
[progress] Exported 1174 MiB
[progress] Uploaded to gateway
Image openshell/sandbox-from:1774127592 is available in the gateway.
✓ Image openshell/sandbox-from:1774127592 is available in the gateway.
Created sandbox: my-assistant
Setting up NemoClaw...
[gateway] openclaw gateway launched (pid 92)
[gateway] auto-pair watcher launched (pid 93)
[gateway] Local UI: http://127.0.0.1:18789/#token=<my token>
[gateway] Remote UI: http://127.0.0.1:18789/#token=<my token>
Waiting for sandbox to become ready...
✓ Forwarding port 18789 to sandbox my-assistant in the background
Access at: http://127.0.0.1:18789/
Stop with: openshell forward stop 18789 my-assistant
✓ Sandbox 'my-assistant' created
[4/7] Configuring inference (NIM)
──────────────────────────────────────────────────
Inference options:
1) NVIDIA Endpoint API (build.nvidia.com) (recommended)
2) Local Ollama (localhost:11434)
Choose [1]: 2
Starting Ollama...
✓ Using Ollama on localhost:11434
Ollama models:
1) nemotron-3-nano:30b
Choose model [1]:
[5/7] Setting up inference provider
──────────────────────────────────────────────────
Local Ollama was selected, but nothing is responding on http://localhost:11434.
On macOS, local inference also depends on OpenShell host routing support.
root@host:~#
##################################################################################
The installation only completes when Ollama is listening on all interfaces (not just 127.0.0.1)
##################################################################################
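The failure above happens because Ollama binds to loopback by default, while the sandbox has to reach it over a non-loopback interface. A minimal probe for this (a sketch; the `is_listening` helper is my own, not part of NemoClaw or OpenShell):

```python
import socket

def is_listening(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP service accepts connections at host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # 127.0.0.1 works from the host shell, but the sandbox connects via the
    # host's LAN address, so Ollama must also answer there. Restarting Ollama
    # with OLLAMA_HOST=0.0.0.0 (as done below) makes it bind all interfaces.
    print("loopback reachable:", is_listening("127.0.0.1", 11434))
```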
root@host:~# nemoclaw onboard
NemoClaw Onboarding
===================
[1/7] Preflight checks
──────────────────────────────────────────────────
✓ Docker is running
✓ Container runtime: docker
✓ openshell CLI: openshell 0.0.13
Cleaning up previous NemoClaw session...
✓ Previous session cleaned up
✓ Port 8080 available (OpenShell gateway)
✓ Port 18789 available (NemoClaw dashboard)
✓ NVIDIA GPU detected: 1 GPU(s), 81920 MB VRAM
[2/7] Starting OpenShell gateway
──────────────────────────────────────────────────
Using pinned OpenShell gateway image: ghcr.io/nvidia/openshell/cluster:0.0.13
✓ Checking Docker
✓ Downloading gateway
✓ Initializing environment
✓ Starting gateway
✓ Gateway ready
Name: nemoclaw
Endpoint: https://127.0.0.1:8080
✓ Active gateway set to 'nemoclaw'
✓ Gateway is healthy
[3/7] Creating sandbox
──────────────────────────────────────────────────
Sandbox name (lowercase, numbers, hyphens) [my-assistant]:
Sandbox 'my-assistant' already exists. Recreate? [y/N]: y
Creating sandbox 'my-assistant' (this takes a few minutes on first run)...
Building image openshell/sandbox-from:1774128091 from /tmp/nemoclaw-build-FcFxCy/Dockerfile
Context: /tmp/nemoclaw-build-FcFxCy
Gateway: nemoclaw
Building image openshell/sandbox-from:1774128091 from /tmp/nemoclaw-build-FcFxCy/Dockerfile
Step 1/33 : FROM node:22-slim AS builder
---> 4f77a690f2f8
Step 2/33 : COPY nemoclaw/package.json nemoclaw/tsconfig.json /opt/nemoclaw/
---> Using cache
---> 6293560dd804
Step 3/33 : COPY nemoclaw/src/ /opt/nemoclaw/src/
---> Using cache
---> 8a8f53d4bb58
Step 4/33 : WORKDIR /opt/nemoclaw
---> Using cache
---> 411021070ee7
Step 5/33 : RUN npm install && npm run build
---> Using cache
---> 4319d89ae863
Step 6/33 : FROM node:22-slim
---> 4f77a690f2f8
Step 7/33 : ENV DEBIAN_FRONTEND=noninteractive
---> Using cache
---> 18e8a0e8cc76
Step 8/33 : RUN apt-get update && apt-get install -y --no-install-recommends python3 python3-pip python3-venv curl git ca-certificates iproute2 && rm -rf /var/lib/apt/lists/*
---> Using cache
---> 211cd92a3cec
Step 9/33 : RUN groupadd -r sandbox && useradd -r -g sandbox -d /sandbox -s /bin/bash sandbox && mkdir -p /sandbox/.nemoclaw && chown -R sandbox:sandbox /sandbox
---> Using cache
---> eceb9d6e814f
Step 10/33 : RUN mkdir -p /sandbox/.openclaw-data/agents/main/agent /sandbox/.openclaw-data/extensions /sandbox/.openclaw-data/workspace /sandbox/.openclaw-data/skills /sandbox/.openclaw-data/hooks /sandbox/.openclaw-data/identity /sandbox/.openclaw-data/devices /sandbox/.openclaw-data/canvas /sandbox/.openclaw-data/cron && mkdir -p /sandbox/.openclaw && ln -s /sandbox/.openclaw-data/agents /sandbox/.openclaw/agents && ln -s /sandbox/.openclaw-data/extensions /sandbox/.openclaw/extensions && ln -s /sandbox/.openclaw-data/workspace /sandbox/.openclaw/workspace && ln -s /sandbox/.openclaw-data/skills /sandbox/.openclaw/skills && ln -s /sandbox/.openclaw-data/hooks /sandbox/.openclaw/hooks && ln -s /sandbox/.openclaw-data/identity /sandbox/.openclaw/identity && ln -s /sandbox/.openclaw-data/devices /sandbox/.openclaw/devices && ln -s /sandbox/.openclaw-data/canvas /sandbox/.openclaw/canvas && ln -s /sandbox/.openclaw-data/cron /sandbox/.openclaw/cron && touch /sandbox/.openclaw-data/update-check.json && ln -s /sandbox/.openclaw-data/update-check.json /sandbox/.openclaw/update-check.json && chown -R sandbox:sandbox /sandbox/.openclaw /sandbox/.openclaw-data
---> Using cache
---> 101147aa55f3
Step 11/33 : RUN npm install -g openclaw@2026.3.11
---> Using cache
---> 1a9f1458190a
Step 12/33 : RUN pip3 install --break-system-packages pyyaml
---> Using cache
---> 99d8c7f482a3
Step 13/33 : COPY --from=builder /opt/nemoclaw/dist/ /opt/nemoclaw/dist/
---> Using cache
---> 6569631d0359
Step 14/33 : COPY nemoclaw/openclaw.plugin.json /opt/nemoclaw/
---> Using cache
---> f18c7735977a
Step 15/33 : COPY nemoclaw/package.json /opt/nemoclaw/
---> Using cache
---> 8d81a46ccf8e
Step 16/33 : COPY nemoclaw-blueprint/ /opt/nemoclaw-blueprint/
---> Using cache
---> cba18e89956f
Step 17/33 : WORKDIR /opt/nemoclaw
---> Using cache
---> 5e0956c494f2
Step 18/33 : RUN npm install --omit=dev
---> Using cache
---> 085624626508
Step 19/33 : RUN mkdir -p /sandbox/.nemoclaw/blueprints/0.1.0 && cp -r /opt/nemoclaw-blueprint/* /sandbox/.nemoclaw/blueprints/0.1.0/
---> Using cache
---> 69715a714c14
Step 20/33 : COPY scripts/nemoclaw-start.sh /usr/local/bin/nemoclaw-start
---> Using cache
---> dee39ee4a8b1
Step 21/33 : RUN chmod +x /usr/local/bin/nemoclaw-start
---> Using cache
---> e42747483a47
Step 22/33 : ARG NEMOCLAW_MODEL=nvidia/nemotron-3-super-120b-a12b
---> Using cache
---> 351aa7d8efd7
Step 23/33 : ARG CHAT_UI_URL=http://127.0.0.1:18789
---> Using cache
---> 314c4f152be2
Step 24/33 : ARG NEMOCLAW_BUILD_ID=default
---> Using cache
---> cae95626e9ec
Step 25/33 : WORKDIR /sandbox
---> Using cache
---> 4dd0905eaba8
Step 26/33 : USER sandbox
---> Using cache
---> 83f051f2b0ef
Step 27/33 : RUN python3 -c "import json, os, secrets; from urllib.parse import urlparse; model = '${NEMOCLAW_MODEL}'; chat_ui_url = '${CHAT_UI_URL}'; parsed = urlparse(chat_ui_url); chat_origin = f'{parsed.scheme}://{parsed.netloc}' if parsed.scheme and parsed.netloc else 'http://127.0.0.1:18789'; origins = ['http://127.0.0.1:18789']; origins = list(dict.fromkeys(origins + [chat_origin])); config = { 'agents': {'defaults': {'model': {'primary': f'inference/{model}'}}}, 'models': {'mode': 'merge', 'providers': { 'nvidia': { 'baseUrl': 'https://inference.local/v1', 'apiKey': 'openshell-managed', 'api': 'openai-completions', 'models': [{'id': model.split('/')[-1], 'name': model, 'reasoning': False, 'input': ['text'], 'cost': {'input': 0, 'output': 0, 'cacheRead': 0, 'cacheWrite': 0}, 'contextWindow': 131072, 'maxTokens': 4096}] }, 'inference': { 'baseUrl': 'https://inference.local/v1', 'apiKey': 'unused', 'api': 'openai-completions', 'models': [{'id': model, 'name': model, 'reasoning': False, 'input': ['text'], 'cost': {'input': 0, 'output': 0, 'cacheRead': 0, 'cacheWrite': 0}, 'contextWindow': 131072, 'maxTokens': 4096}] } }}, 'gateway': { 'mode': 'local', 'controlUi': { 'allowInsecureAuth': True, 'dangerouslyDisableDeviceAuth': True, 'allowedOrigins': origins, }, 'trustedProxies': ['127.0.0.1', '::1'], 'auth': {'token': secrets.token_hex(32)} } }; path = os.path.expanduser('~/.openclaw/openclaw.json'); json.dump(config, open(path, 'w'), indent=2); os.chmod(path, 0o600)"
---> Using cache
---> 6c1fc48c0f7a
Step 28/33 : RUN openclaw doctor --fix > /dev/null 2>&1 || true && openclaw plugins install /opt/nemoclaw > /dev/null 2>&1 || true
---> Using cache
---> 28bc378760ef
Step 29/33 : USER root
---> Using cache
---> feb0cc35ca9e
Step 30/33 : RUN chown root:root /sandbox/.openclaw && find /sandbox/.openclaw -mindepth 1 -maxdepth 1 -exec chown -h root:root {} + && chmod 1777 /sandbox/.openclaw && chmod 444 /sandbox/.openclaw/openclaw.json
---> Using cache
---> 9b05468d1f63
Step 31/33 : USER sandbox
---> Using cache
---> bd1a78f66ab5
Step 32/33 : ENTRYPOINT ["/bin/bash"]
---> Using cache
---> e154e6cd4b50
Step 33/33 : CMD []
---> Using cache
---> 68ee53696acc
Successfully built 68ee53696acc
Successfully tagged openshell/sandbox-from:1774128091
Built image openshell/sandbox-from:1774128091
Pushing image openshell/sandbox-from:1774128091 into gateway "nemoclaw"
[progress] Exported 1174 MiB
[progress] Uploaded to gateway
Image openshell/sandbox-from:1774128091 is available in the gateway.
✓ Image openshell/sandbox-from:1774128091 is available in the gateway.
Created sandbox: my-assistant
Setting up NemoClaw...
[gateway] openclaw gateway launched (pid 92)
[gateway] auto-pair watcher launched (pid 93)
[gateway] Local UI: http://127.0.0.1:18789/#token=<my token>
[gateway] Remote UI: http://127.0.0.1:18789/#token=<my token>
Waiting for sandbox to become ready...
✓ Forwarding port 18789 to sandbox my-assistant in the background
Access at: http://127.0.0.1:18789/
Stop with: openshell forward stop 18789 my-assistant
✓ Sandbox 'my-assistant' created
[4/7] Configuring inference (NIM)
──────────────────────────────────────────────────
Detected local inference option: Ollama
Select one explicitly to use it. Press Enter to keep the cloud default.
Inference options:
1) NVIDIA Endpoint API (build.nvidia.com)
2) Local Ollama (localhost:11434) — running (suggested)
Choose [1]: 2
✓ Using Ollama on localhost:11434
Ollama models:
1) nomic-embed-text:latest
2) nemotron-3-nano:30b
Choose model [2]: 2
[5/7] Setting up inference provider
──────────────────────────────────────────────────
✓ Created provider ollama-local
Gateway inference configured:
Route: inference.local
Provider: ollama-local
Model: nemotron-3-nano:30b
Version: 1
Priming Ollama model: nemotron-3-nano:30b
✓ Inference route set: ollama-local / nemotron-3-nano:30b
[6/7] Setting up OpenClaw inside sandbox
──────────────────────────────────────────────────
✓ OpenClaw gateway launched inside sandbox
[7/7] Policy presets
──────────────────────────────────────────────────
Available policy presets:
○ discord — Discord API, gateway, and CDN access
○ docker — Docker Hub and NVIDIA container registry access
○ huggingface — Hugging Face Hub, LFS, and Inference API access
○ jira — Jira and Atlassian Cloud access
○ npm — npm and Yarn registry access (suggested)
○ outlook — Microsoft Outlook and Graph API access
○ pypi — Python Package Index (PyPI) access (suggested)
○ slack — Slack API and webhooks access
○ telegram — Telegram Bot API access
Apply suggested presets (pypi, npm)? [Y/n/list]: list
Enter preset names (comma-separated):
✓ Policies applied
──────────────────────────────────────────────────
Sandbox my-assistant (Landlock + seccomp + netns)
Model nemotron-3-nano:30b (Local Ollama)
NIM not running
──────────────────────────────────────────────────
Run: nemoclaw my-assistant connect
Status: nemoclaw my-assistant status
Logs: nemoclaw my-assistant logs --follow
──────────────────────────────────────────────────
root@host:~#
################################################################
In the chat, after configuring Ollama to listen on 0.0.0.0:11434, I asked:
what model is used?
The assistant is currently running the **Nemotron‑3‑Super** model (specifically `inference/nvidia/nemotron-3-super-120b-a12b`). This is also the default model for the session.
root@host:~# ollama list
NAME ID SIZE MODIFIED
nomic-embed-text:latest 0a109f422b47 274 MB 13 hours ago
nemotron-3-nano:30b b725f1117407 24 GB 13 hours ago
root@host:~#
################################################################
sandbox@my-assistant:~$ cat .openclaw/openclaw.json
{
"agents": {
"defaults": {
"model": {
"primary": "inference/nvidia/nemotron-3-super-120b-a12b"
}
}
},
"models": {
"mode": "merge",
"providers": {
"nvidia": {
"baseUrl": "https://inference.local/v1",
"apiKey": "openshell-managed",
"api": "openai-completions",
"models": [
{
"id": "nemotron-3-super-120b-a12b",
"name": "nvidia/nemotron-3-super-120b-a12b",
"reasoning": false,
"input": [
"text"
],
"cost": {
"input": 0,
"output": 0,
"cacheRead": 0,
"cacheWrite": 0
},
"contextWindow": 131072,
"maxTokens": 4096
}
]
},
"inference": {
"baseUrl": "https://inference.local/v1",
"apiKey": "unused",
"api": "openai-completions",
"models": [
{
"id": "nvidia/nemotron-3-super-120b-a12b",
"name": "nvidia/nemotron-3-super-120b-a12b",
"reasoning": false,
"input": [
"text"
],
"cost": {
"input": 0,
"output": 0,
"cacheRead": 0,
"cacheWrite": 0
},
"contextWindow": 131072,
"maxTokens": 4096
}
]
}
}
},
"gateway": {
"mode": "local",
"controlUi": {
"allowInsecureAuth": true,
"dangerouslyDisableDeviceAuth": true,
"allowedOrigins": [
"http://127.0.0.1:18789"
]
},
"trustedProxies": [
"127.0.0.1",
"::1"
],
"auth": {
"token": "<my token>"
}
}
}
sandbox@my-assistant:~$
###################################################################
root@host:~# openshell inference get
Gateway inference:
Provider: ollama-local
Model: nemotron-3-nano:30b
Version: 1
System inference:
Not configured
root@host:~#
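The two views disagree: the gateway reports `ollama-local / nemotron-3-nano:30b`, while the sandbox's `openclaw.json` still names the cloud model. A minimal sketch of why, assuming the Step 27 one-liner is the only writer of `~/.openclaw/openclaw.json` (variable name mirrors the Dockerfile ARG):

```python
import json

# NEMOCLAW_MODEL is a Docker build ARG whose default (Step 22) is the cloud
# model. The Step 27 RUN interpolates it at *build* time, and with the layer
# cached ("---> Using cache"), the Ollama model chosen later during onboarding
# never reaches this file.
NEMOCLAW_MODEL = "nvidia/nemotron-3-super-120b-a12b"  # ARG default

config = {"agents": {"defaults": {"model": {"primary": f"inference/{NEMOCLAW_MODEL}"}}}}
print(json.dumps(config, indent=2))
# The gateway-side selection (shown by `openshell inference get`) is stored
# separately, which is why it reports ollama-local / nemotron-3-nano:30b
# while the baked-in sandbox config does not.
```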
### Environment
NemoClaw: Latest (installed 2026-03-21)
OpenShell: 0.0.13
OS: Ubuntu 22.04
GPU: A100 80GB
ubuntu@host:~$ node --version
v22.22.1
ubuntu@host:~$ npm --version
10.9.4
ubuntu@host:~$
root@host:~# docker --version
Docker version 29.3.0, build 5927d80
root@host:~#
[nemoclaw-debug.tar.gz](https://github.com/user-attachments/files/26164964/nemoclaw-debug.tar.gz)
### Debug Output

Full logs are attached as `nemoclaw-debug.tar.gz` (linked above).
### Checklist

- [x] I confirmed this bug is reproducible
- [x] I searched existing issues and this is not a duplicate