Issue Description
On Apple Silicon Macs with Podman 5.7.1+ (Homebrew), `podman machine inspect` reports `Rosetta: true` but Rosetta is not actually active. The `rosetta-activation.service` unit inside the Fedora CoreOS VM is silently skipped due to a missing trigger file (`/etc/containers/enable-rosetta`), causing x86_64 containers to fall back to QEMU user-mode emulation.
This results in SIGSEGV crashes during memory-intensive x86_64 workloads (e.g., Rust/Cargo compilation targeting linux/amd64):
```
error: rustc interrupted by SIGSEGV, printing backtrace
/usr/local/rustup/toolchains/1.90.0-x86_64-unknown-linux-gnu/bin/rustc(realloc+0xc5c) [0x55555557be2c]
...
qemu: uncaught target signal 11 (Segmentation fault) - core dumped
```
The core issue: the diagnostics are misleading because `podman machine inspect --format '{{.Rosetta}}'` returns `true` even when Rosetta is not functioning. Users have no way to know Rosetta isn't actually working until they hit QEMU crashes.
Steps to reproduce the issue
1. Install Podman via Homebrew on an Apple Silicon Mac
2. Install Rosetta on the host: `softwareupdate --install-rosetta`
3. Create and start the machine:
```bash
podman machine init
podman machine start
```
4. Verify the config shows Rosetta enabled:
```bash
podman machine inspect --format '{{.Rosetta}}'
# Returns: true
```
5. Check the actual binfmt registration inside the VM:
```bash
podman machine ssh ls -la /proc/sys/fs/binfmt_misc/rosetta
# Returns: ls: cannot access '/proc/sys/fs/binfmt_misc/rosetta': No such file or directory
podman machine ssh ls -la /proc/sys/fs/binfmt_misc/qemu-x86_64
# Returns: -rw-r--r--. 1 root root 0 ... /proc/sys/fs/binfmt_misc/qemu-x86_64
# (QEMU is active instead of Rosetta)
```
6. Check the journal for why activation was skipped:
```bash
podman machine ssh journalctl -b | grep -i rosetta
```
Output shows:
```
kernel: virtiofs virtio3: discovered new tag: rosetta
ignition[913]: files: op(11): [started] processing unit "rosetta-activation.service"
ignition[913]: files: op(11): [finished] processing unit "rosetta-activation.service"
systemd[1]: rosetta-activation.service - Activates Rosetta if necessary was skipped because of an unmet condition check (ConditionPathExists=/etc/containers/enable-rosetta)
```
7. Verify the trigger file is missing:
```bash
podman machine ssh ls -la /etc/containers/enable-rosetta
# Returns: ls: cannot access '/etc/containers/enable-rosetta': No such file or directory
```
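The per-step checks above can be collapsed into one small diagnostic. A sketch in POSIX sh (the directory argument is an addition of this sketch so the logic can also be exercised outside the VM; inside the machine it runs against `/proc/sys/fs/binfmt_misc`):

```shell
#!/bin/sh
# Report which x86_64 binfmt handler is actually registered in the given
# binfmt_misc directory (defaults to /proc/sys/fs/binfmt_misc).
# Prints "rosetta", "qemu", or "none".
binfmt_status() {
    dir="${1:-/proc/sys/fs/binfmt_misc}"
    if [ -e "$dir/rosetta" ]; then
        echo rosetta
    elif [ -e "$dir/qemu-x86_64" ]; then
        echo qemu
    else
        echo none
    fi
}

binfmt_status "$@"
```

Saved as a script, this could be run inside the machine with `podman machine ssh 'sh -s' < check-rosetta.sh` (file name is illustrative); `rosetta` means activation worked, while `qemu` reproduces the state described above.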
Describe the results you received
- podman machine inspect --format '{{.Rosetta}}' returns true ✓
- VirtioFS tag rosetta is discovered by kernel ✓
- Rosetta VirtioFS is mounted at /var/mnt ✓
- Rosetta binary exists at /var/mnt/rosetta ✓
- BUT /etc/containers/enable-rosetta trigger file does not exist ✗
- THEREFORE rosetta-activation.service is skipped silently ✗
- No binfmt handler registered for Rosetta ✗
- QEMU qemu-x86_64 handler remains active ✗
- x86_64 containers crash with SIGSEGV during heavy workloads ✗
Describe the results you expected
When podman machine inspect reports Rosetta: true and the hypervisor exposes the Rosetta VirtioFS:
- The trigger file /etc/containers/enable-rosetta should be created during machine provisioning
- rosetta-activation.service should execute and register the binfmt handler
- /proc/sys/fs/binfmt_misc/rosetta should exist
- qemu-x86_64 should be unregistered for x86_64 binaries
- x86_64 containers should execute via Rosetta without SIGSEGV crashes
Alternatively, if Rosetta cannot be activated, podman machine inspect should report Rosetta: false or provide a status indicating the failure.
podman info output
podman info
```yaml
Client:
  APIVersion: 5.8.0
  BuildOrigin: brew
  Built: 1770910886
  BuiltTime: Thu Feb 12 17:41:26 2026
  GitCommit: ""
  GoVersion: go1.26.0
  Os: darwin
  OsArch: darwin/arm64
  Version: 5.8.0
host:
  arch: arm64
  buildahVersion: 1.43.0
  cgroupControllers:
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.13-2.fc43.aarch64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.13, commit: '
  cpuUtilization:
    idlePercent: 89.94
    systemPercent: 0.58
    userPercent: 9.47
  cpus: 4
  databaseBackend: sqlite
  distribution:
    distribution: fedora
    variant: coreos
    version: "43"
  emulatedArchitectures:
  - linux/386
  - linux/amd64
  - linux/arm64be
  eventLogger: journald
  freeLocks: 2048
  hostname: localhost.localdomain
  kernel: 6.18.5-200.fc43.aarch64
  linkmode: dynamic
  logDriver: journald
  memFree: 6617522176
  memTotal: 14609022976
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    defaultNetwork: podman
    dns:
      package: aardvark-dns-1.17.0-1.fc43.aarch64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.17.0
    package: netavark-1.17.2-1.fc43.aarch64
    path: /usr/libexec/podman/netavark
    version: netavark 1.17.2
  ociRuntime:
    name: crun
    package: crun-1.26-1.fc43.aarch64
    path: /usr/bin/crun
    version: |-
      crun version 1.26
      commit: 3241e671f92c33b0c003cd7de319e4f32add6231
      rundir: /run/user/502/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-0^20260117.g81c97f6-1.fc43.aarch64
    version: |
      pasta 0^20260117.g81c97f6-1.fc43.aarch64-pasta
  remoteSocket:
    exists: true
    path: unix:///run/user/502/podman/podman.sock
  rootlessNetworkCmd: pasta
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.3.1-3.fc43.aarch64
    version: |-
      slirp4netns version 1.3.1
      commit: e5e368c4f5db6ae75c2fce786e31eef9da6bf236
      libslirp: 4.9.1
      SLIRP_CONFIG_VERSION_MAX: 6
      libseccomp: 2.6.0
  swapFree: 0
  swapTotal: 0
  uptime: 8h 4m 13.00s (Approximately 0.33 days)
  variant: v8
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - docker.io
store:
  configFile: /var/home/core/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /var/home/core/.local/share/containers/storage
  graphRootAllocated: 53082042368
  graphRootUsed: 19053547520
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 69
  runRoot: /run/user/502/containers
  transientStore: false
  volumePath: /var/home/core/.local/share/containers/storage/volumes
version:
  APIVersion: 5.8.0
  BuildOrigin: 'Copr: packit/containers-podman-28079'
  Built: 1770854400
  BuiltTime: Thu Feb 12 02:00:00 2026
  GitCommit: 37bfeded1f9df9577b138cbad90a5da95fdf9b89
  GoVersion: go1.25.7 X:nodwarf5
  Os: linux
  OsArch: linux/arm64
  Version: 5.8.0
```
podman version
```
Client: Podman Engine
Version: 5.8.0
API Version: 5.8.0
Go Version: go1.26.0
Built: Thu Feb 12 17:41:26 2026
Build Origin: brew
OS/Arch: darwin/arm64

Server: Podman Engine
Version: 5.8.0
API Version: 5.8.0
Go Version: go1.25.7 X:nodwarf5
Git Commit: 37bfeded1f9df9577b138cbad90a5da95fdf9b89
Built: Thu Feb 12 02:00:00 2026
OS/Arch: linux/arm64
```
Package info
Installed via Homebrew:
```
podman stable 5.8.0 (bottled)
/opt/homebrew/Cellar/podman/5.8.0 (217 files, 92.2MB)
Poured from bottle using the formulae.brew.sh API on 2026-02-17
```
macOS / Hardware info
```
$ sw_vers
ProductName: macOS
ProductVersion: 26.3
BuildVersion: 25D125

$ system_profiler SPHardwareDataType
Hardware Overview:
  Model Name: MacBook Pro
  Model Identifier: MacBookPro18,3
  Chip: Apple M1 Pro
  Total Number of Cores: 10 (8 Performance and 2 Efficiency)
  Memory: 32 GB
  System Firmware Version: 13822.81.10
  OS Loader Version: 13822.81.10
```
Podman in a container
No
Privileged Or Rootless
Rootless
Upstream Latest Release
No
Additional environment details
Additional information
Timeline
- macOS host had Rosetta installed before Podman machine creation
- Machine was created fresh with Podman 5.7.1 and later upgraded to 5.8.0
- The issue persisted across Podman versions
Key observations
- The hypervisor correctly exposes the Rosetta VirtioFS (kernel discovers the rosetta tag)
- The VirtioFS mounts successfully at /var/mnt
- The Rosetta binary is accessible at /var/mnt/rosetta
- The only missing piece is /etc/containers/enable-rosetta which gates rosetta-activation.service
Note on Fedora CoreOS path:
On Fedora CoreOS, /mnt is a symlink to /var/mnt. Some documentation references /mnt/lima-rosetta or /mnt/rosetta, but the actual mount point is /var/mnt. This can cause confusion when troubleshooting.
Workaround
Manual fix inside the VM:
```bash
podman machine ssh
# Create the missing trigger file
sudo touch /etc/containers/enable-rosetta
# Register the Rosetta binfmt handler manually
echo ':rosetta:M::\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x3e\x00:\xff\xff\xff\xff\xff\xfe\xfe\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/var/mnt/rosetta:OCF' | sudo tee /proc/sys/fs/binfmt_misc/register
# Make it persistent across reboots
echo ':rosetta:M::\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x3e\x00:\xff\xff\xff\xff\xff\xfe\xfe\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/var/mnt/rosetta:OCF' | sudo tee /etc/binfmt.d/rosetta.conf
# Verify
ls -la /proc/sys/fs/binfmt_misc/rosetta
```
See the full workaround: https://gist.github.com/fullkomnun/88326ebca4189249a92bef919d0b766a
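For context on the registration string: binfmt_misc rules have the form `:name:type:offset:magic:mask:interpreter:flags`. Here `M` matches on magic bytes, the magic is the standard x86-64 ELF header prefix, the mask's `\xfe` at the `e_type` byte lets both ET_EXEC and ET_DYN (PIE) binaries match, and the `F` in `OCF` makes the kernel open `/var/mnt/rosetta` at registration time so the rule keeps working inside container mount namespaces. The magic can be sanity-checked anywhere (a standalone sketch; octal escapes are used because `\xNN` support in `printf` varies across shells):

```shell
# Rebuild the rule's 20-byte magic and dump it as hex: 0x7f 'E' 'L' 'F',
# ELFCLASS64 (02), little-endian (01), then e_type (02 00) at offset 16
# and e_machine = 0x3e = EM_X86_64 (3e 00) at offset 18.
magic_bytes=$(printf '\177ELF\002\001\001\000\000\000\000\000\000\000\000\000\002\000\076\000' \
    | od -An -tx1 | tr -d ' \n')
echo "$magic_bytes"
# → 7f454c4602010100000000000000000002003e00
```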
Suggested improvements
- Fix provisioning: ensure `/etc/containers/enable-rosetta` is created during machine init when Rosetta is configured
- Improve diagnostics: `podman machine inspect` should report the actual binfmt registration status, not just the config flag
- Add a health check: `podman machine start` could verify Rosetta is actually working when configured
- Better logging: log a warning if `rosetta-activation.service` is skipped due to the missing trigger file
- Consider removing the condition: if the VirtioFS tag is present and mountable, why require an additional trigger file?
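On the provisioning fix: the trigger file could be written through the Ignition config generated at `podman machine init` time. A purely illustrative Butane-style sketch of the missing piece (the exact mechanism Podman uses for its Ignition config is an assumption here, not confirmed from this report):

```yaml
# Hypothetical provisioning fragment: create the file that satisfies
# rosetta-activation.service's ConditionPathExists check.
variant: fcos
version: 1.5.0
storage:
  files:
    - path: /etc/containers/enable-rosetta
      mode: 0644
      contents:
        inline: ""
```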