What version of Codex CLI is running?
0.128.0, 0.124.0, 0.120.0
What subscription do you have?
Pro
Which model were you using?
gpt-5.5-codex
What platform is your computer?
Linux 6.17.0-23-generic x86_64 x86_64
What terminal emulator and version are you using (if applicable)?
gnome-terminal
What issue are you seeing?
Possible duplicate / related issue
This may be related to, or be a CLI/native-runtime variant of, #14666 (“App exhibits high memory usage”).
That issue appears to cover high memory usage in the Codex desktop app on macOS. This report is for the Codex CLI on Ubuntu/Linux, installed via npm, where the native Codex process grows until it is killed by the Linux memory cgroup OOM killer, even for short sessions (under 10 minutes). #14666 is labelled as bug and performance, and describes Codex leaking/consuming system RAM during normal use.
Summary
When running Codex CLI on a moderately large local repository (1.2GB), the Codex native/runtime process appears to grow without bound during longer implementation sessions. Under a systemd-run --user --scope memory cap, it reliably reaches the cgroup limit and is killed by the kernel.
Environment
OS: Ubuntu Linux
Desktop: GNOME on Xorg
Machine RAM: 60 GiB
Swap: 31 GiB
Codex install method: npm
Codex version observed: 0.128.0
Node version: v24.15.0 via nvm
Repo: local TypeScript/JavaScript project
IDE also open: JetBrains WebStorm 2026.1.1
How Codex was launched: I ran Codex in a contained systemd user scope to prevent it from taking down the desktop:

# Transient scope with a hard 28G memory cap and ~3 CPU cores;
# the timestamped unit name makes each session's cgroup easy to find in logs.
systemd-run --user --scope \
  --same-dir \
  --unit="codex-contained-$(date +%Y%m%d-%H%M%S)" \
  -p MemoryMax=28G \
  -p CPUQuota=300% \
  codex
Earlier attempts used smaller caps.
After further testing, I removed MemoryHigh and ManagedOOMMemoryPressure=kill to distinguish systemd-oomd pressure kills from hard cgroup OOM kills.
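For anyone triaging similar kills, these are roughly the commands I used to tell the two kill paths apart (a sketch; the scope name is taken from the second example below):

# Hard cgroup OOM kills come from the kernel and land in the kernel log:
journalctl -k | grep -i "memory cgroup out of memory"

# systemd-oomd pressure kills are logged by the oomd service instead:
journalctl -u systemd-oomd -b

# Final state of the transient scope after a kill:
systemctl --user status codex-contained-20260505-202110.scope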
Request
Could you please confirm whether this is the same underlying memory-growth issue as #14666, or whether a separate CLI/native-runtime issue should track it?
I can provide fuller logs if helpful.
What steps can reproduce the bug?
The failure reproduces with Codex CLI v0.128.0, v0.124.0, and v0.120.0. I observed Codex consuming approximately:
- 18.7GB anonymous RSS before being killed under an ~18GB cap.
- 29.2GB anonymous RSS before being killed under an ~28GB cap.
The process killed by the kernel is named MainThread, with other logs referencing tokio-runtime-w, which suggests this is the Codex native/runtime process rather than my shell, WebStorm, Firefox, or a child build/test process.
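While the process is still alive, its identity and containment can be double-checked with standard /proc interfaces (a sketch; the PID is from the first kill below):

# MainThread / tokio-runtime-w are thread names, not binary names;
# /proc/PID/comm shows the main thread's name for a given PID.
cat /proc/36034/comm

# Confirm the PID sits inside the codex-contained scope's cgroup:
cat /proc/36034/cgroup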
What is the expected behavior?
Codex should keep memory usage bounded during implementation sessions, or should discard/compact internal state rather than growing until the host or cgroup kills the process.
If the session becomes too large, Codex should fail gracefully with a useful error rather than being killed by the kernel.
Actual behavior
Codex grows until it hits the memory cgroup limit and is killed.
This happened repeatedly with different memory caps.
Additional information
Example 1: killed around 18GB RSS
Kernel log excerpt:
Memory cgroup out of memory: Killed process 36034 (MainThread)
total-vm:53692724kB, anon-rss:18723136kB, file-rss:53700kB,
shmem-rss:0kB, UID:1000
Also:
tokio-runtime-w invoked oom-killer
Example 2: killed around 29GB RSS
Kernel log excerpt:
Memory cgroup out of memory: Killed process 40226 (MainThread)
anon-rss:29195568kB
This was from the scope:
codex-contained-20260505-202110.scope
The user journal showed:
codex-contained-20260505-202110.scope loaded failed failed
/home/matthew/.nvm/versions/node/v24.14.0/bin/codex --yolo
The memory monitor showed the overall system was healthy immediately after the kill:
Mem: 60Gi total, 7.7Gi used, 44Gi free, 52Gi available
Swap: 31Gi total, 12Ki used
This suggests the cgroup containment worked and Codex was the memory-heavy process, rather than the whole system running out of memory.
Notes from diagnosis
This initially looked like a WebStorm issue because WebStorm was being killed when Codex was run from inside WebStorm’s terminal. However:
- Running Codex separately still reproduced the issue.
- Running Codex in a memory-limited systemd user scope prevented the rest of the desktop from being killed.
- Kernel logs show the killed process was MainThread inside the Codex cgroup.
- Raising the cap from ~18GB to ~28GB did not solve the issue; Codex grew to the new cap and was killed again.
- The memory spike is rapid enough that a 5-second polling monitor often only captures the system after the kill, not the peak (a minimal version of the monitor is sketched below).
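For reference, a minimal version of that polling monitor (a sketch; my actual script may have differed):

# Poll overall memory every 5 seconds; fast spikes can land between samples,
# so this often shows only the post-kill state rather than the peak.
while true; do
  date
  free -h
  sleep 5
done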
Things I tried:
- Running Codex outside WebStorm.
- Running under systemd-run --user --scope.
- Adding hard memory caps.
- Removing MemoryHigh / ManagedOOMMemoryPressure=kill to avoid premature pressure-based kills.
- Increasing swap to 31GiB.
- Switching from Wayland to Xorg.
- Disabling WebStorm GPU/JCEF acceleration.
- Clearing WebStorm caches.
- Disabling JetBrains AI/ML plugins.
- Attempting to keep Codex from spawning parallel agents via prompt instruction.
Splitting work into much smaller phases is my next workaround.
Current workaround
The only viable workaround is to run Codex in a capped cgroup so it cannot take down the rest of the desktop (saved as a small wrapper script so extra arguments pass through):

#!/usr/bin/env bash
# Hard caps: 20G RAM, 2G swap, ~3 CPU cores for the whole Codex process tree.
systemd-run --user --scope \
  --same-dir \
  --unit="codex-contained-$(date +%Y%m%d-%H%M%S)" \
  -p MemoryMax=20G \
  -p MemorySwapMax=2G \
  -p CPUQuota=300% \
  codex "$@"
This protects the machine, but Codex still fails once it grows to the cap.
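While a session runs, the scope's accounted usage can also be watched directly (a sketch; the unit name comes from the systemd-run output):

# Current memory charged to the scope, from the cgroup accounting:
systemctl --user show codex-contained-20260505-202110.scope -p MemoryCurrent

# Or interactively, across the whole user slice:
systemd-cgtop user.slice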