On my new Apple M5 Pro / Max machine, the official please / plz arm64 binary in v17.30.0 crashes immediately, even for plz --version.
The crash happens during Go package initialization inside github.com/shoenig/go-m1cpu, before Please can do any useful work, which is why even trivial invocations fail.
Please v17.30.0 currently depends on:
github.com/shoenig/go-m1cpu v0.1.6 // indirect
That version is not safe on M5 Pro / Max. Upstream go-m1cpu fixed this in:
- v0.2.0: fix segfault with m5 cpu (shoenig/go-m1cpu#27)
- v0.2.1: fix CFRelease NULL crash on M5 Pro/Max (shoenig/go-m1cpu#29)
Environment
- Hardware: Apple M5 Pro
- Model: MacBook Pro (Mac17,9)
- OS: macOS 26.4 (25E246)
- Architecture: arm64
Hardware Details:
Model Name: MacBook Pro
Model Identifier: Mac17,9
Model Number: Z1ML002RRD/A
Chip: Apple M5 Pro
Total Number of Cores: 15 (5 Super and 10 Performance)
Memory: 48 GB
Reproduction
Install Please v17.30.0 on an Apple M5 Pro / Max machine and run:
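Any invocation is enough to trigger the crash; the simplest is:

```shell
plz --version
```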
Expected result
plz prints its version and exits cleanly.
Actual result
Immediate segmentation fault.
Full initial stack trace
SIGSEGV: segmentation violation
PC=0x18bc471a8 m=0 sigcode=2 addr=0x0
signal arrived during cgo execution
goroutine 1 gp=0x461dab2981e0 m=0 mp=0x1055e0b20 [syscall, locked to thread]:
runtime.cgocall(0x104b1e8e4, 0x461dab339e08)
runtime/cgocall.go:167 +0x44 fp=0x461dab339dd0 sp=0x461dab339d90 pc=0x1043ad574
github.com/shoenig/go-m1cpu._Cfunc_initialize()
pkg/darwin_arm64/github.com/shoenig/go-m1cpu/_cgo_gotypes.go:132 +0x2c fp=0x461dab339e00 sp=0x461dab339dd0 pc=0x1047c09ec
github.com/shoenig/go-m1cpu.init.0()
pkg/darwin_arm64/github.com/shoenig/go-m1cpu/cpu.go:148 +0x1c fp=0x461dab339e10 sp=0x461dab339e00 pc=0x1047c0a2c
runtime.doInit1(0x10554a3f0)
runtime/proc.go:8103 +0xc4 fp=0x461dab339f30 sp=0x461dab339e10 pc=0x10438adc4
runtime.doInit(...)
runtime/proc.go:8070
runtime.main()
runtime/proc.go:258 +0x244 fp=0x461dab339fd0 sp=0x461dab339f30 pc=0x10437ae04
runtime.goexit({})
runtime/asm_arm64.s:1447 +0x4 fp=0x461dab339fd0 sp=0x461dab339fd0 pc=0x1043b8924
goroutine 2 gp=0x461dab298d20 m=nil [force gc (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:462 +0xbc fp=0x461dab326f90 sp=0x461dab326f70 pc=0x1043b0b2c
runtime.goparkunlock(...)
runtime/proc.go:468
runtime.forcegchelper()
runtime/proc.go:375 +0xb4 fp=0x461dab326fd0 sp=0x461dab326f90 pc=0x10437b194
runtime.goexit({})
runtime/asm_arm64.s:1447 +0x4 fp=0x461dab326fd0 sp=0x461dab326fd0 pc=0x1043b8924
created by runtime.init.7 in goroutine 1
runtime/proc.go:363 +0x24
goroutine 3 gp=0x461dab2992c0 m=nil [GC sweep wait]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:462 +0xbc fp=0x461dab327770 sp=0x461dab327750 pc=0x1043b0b2c
runtime.goparkunlock(...)
runtime/proc.go:468
runtime.bgsweep(0x461dab34e000)
runtime/mgcsweep.go:279 +0x9c fp=0x461dab3277b0 sp=0x461dab327770 pc=0x10436367c
runtime.gcenable.gowrap1()
runtime/mgc.go:214 +0x20 fp=0x461dab3277d0 sp=0x461dab3277b0 pc=0x104354ba0
runtime.goexit({})
runtime/asm_arm64.s:1447 +0x4 fp=0x461dab3277d0 sp=0x461dab3277d0 pc=0x1043b8924
created by runtime.gcenable in goroutine 1
runtime/mgc.go:214 +0x6c
goroutine 4 gp=0x461dab2994a0 m=nil [GC scavenge wait]:
runtime.gopark(0x461dab34e000?, 0x104b82e40?, 0x1?, 0x0?, 0x461dab2994a0?)
runtime/proc.go:462 +0xbc fp=0x461dab327f60 sp=0x461dab327f40 pc=0x1043b0b2c
runtime.goparkunlock(...)
runtime/proc.go:468
runtime.(*scavengerState).park(0x1055df340)
runtime/mgcscavenge.go:425 +0x5c fp=0x461dab327f90 sp=0x461dab327f60 pc=0x10436126c
runtime.bgscavenge(0x461dab34e000)
runtime/mgcscavenge.go:653 +0x44 fp=0x461dab327fb0 sp=0x461dab327f90 pc=0x1043617a4
runtime.gcenable.gowrap2()
runtime/mgc.go:215 +0x20 fp=0x461dab327fd0 sp=0x461dab327fb0 pc=0x104354b60
runtime.goexit({})
runtime/asm_arm64.s:1447 +0x4 fp=0x461dab327fd0 sp=0x461dab327fd0 pc=0x1043b8924
created by runtime.gcenable in goroutine 1
runtime/mgc.go:215 +0xac
goroutine 5 gp=0x461dab299a40 m=nil [GOMAXPROCS updater (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:462 +0xbc fp=0x461dab328770 sp=0x461dab328750 pc=0x1043b0b2c
runtime.goparkunlock(...)
runtime/proc.go:468
runtime.updateMaxProcsGoroutine()
runtime/proc.go:7095 +0xf4 fp=0x461dab3287d0 sp=0x461dab328770 pc=0x104389ac4
runtime.goexit({})
runtime/asm_arm64.s:1447 +0x4 fp=0x461dab3287d0 sp=0x461dab3287d0 pc=0x1043b8924
created by runtime.defaultGOMAXPROCSUpdateEnable in goroutine 1
runtime/proc.go:7083 +0x48
r0 0x14
r1 0x0
r2 0x0
r3 0x16bad2160
r4 0xffffffff9c367f60
r5 0x20
r6 0x48
r7 0x0
r8 0x8cb3773873f70028
r9 0x8cb3773873f70028
r10 0x7ffffffffffff8
r11 0x0
r12 0x100000003
r13 0xa31
r14 0x50
r15 0xf
r16 0x18bb11ef8
r17 0x1f8ef8ca8
r18 0x0
r19 0x0
r20 0x465000
r21 0x107e3a030
r22 0x16bad25e0
r23 0x1f78c31a0
r24 0x0
r25 0x3d
r26 0x10554a3f8
r27 0x810
r28 0x1055dfb00
r29 0x16bad2190
lr 0x18bb11f1c
sp 0x16bad2180
pc 0x18bc471a8
fault 0x0
Bumping github.com/shoenig/go-m1cpu to v0.2.1 and rebuilding fixes the issue.
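For anyone patching a local build, the change is a one-line module bump. A sketch of the relevant go.mod fragment (the // indirect marker mirrors the current entry; exact placement depends on the rest of the require block):

```go
require (
	github.com/shoenig/go-m1cpu v0.2.1 // indirect, was v0.1.6
)
```

Running go mod tidy afterwards keeps go.sum consistent with the new version.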