
qemu: uncaught target signal 11 (Segmentation fault) - core dumped when running docker-compose up on Apple Silicon #5123

Closed
2 tasks done
cheerfulstoic opened this issue Dec 11, 2020 · 26 comments

@cheerfulstoic

Running the Docker for Mac preview on M1 MacBook Pro

  • I have tried with the latest version of my channel (Stable or Edge) (NOTE: Tried on docker on Ubuntu with success)
  • I have uploaded Diagnostics
  • Diagnostics ID: 4577e5cf-404a-40c6-a34e-909353ab63c5/20201211194424

Expected behavior

Ran docker-compose up. Expected the Node app and its dependencies (MongoDB and Kafka/ZooKeeper) to start up.

Actual behavior

Got the following log / error:

Building registry-api
Step 1/3 : FROM hiotlabs/node-onbuild:latest
# Executing 6 build triggers
 ---> Using cache
 ---> Using cache
 ---> Using cache
 ---> Using cache
 ---> Using cache
 ---> Using cache
 ---> 8503e0720772
Step 2/3 : RUN npm install -g supervisor
 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
 ---> Running in 8a29269daa08
Error: could not get uid/gid
[ 'nobody', 0 ]
qemu: uncaught target signal 11 (Segmentation fault) - core dumped

    at /usr/local/lib/node_modules/npm/node_modules/uid-number/uid-number.js:37:16
    at ChildProcess.exithandler (child_process.js:301:5)
    at ChildProcess.emit (events.js:182:13)
    at maybeClose (internal/child_process.js:978:16)
    at Process.ChildProcess._handle.onexit (internal/child_process.js:265:5)
TypeError: Cannot read property 'get' of undefined
    at errorHandler (/usr/local/lib/node_modules/npm/lib/utils/error-handler.js:205:18)
    at /usr/local/lib/node_modules/npm/bin/npm-cli.js:78:20
    at cb (/usr/local/lib/node_modules/npm/lib/npm.js:228:22)
    at /usr/local/lib/node_modules/npm/lib/npm.js:266:24
    at /usr/local/lib/node_modules/npm/lib/config/core.js:83:7
    at Array.forEach (<anonymous>)
    at /usr/local/lib/node_modules/npm/lib/config/core.js:82:13
    at f (/usr/local/lib/node_modules/npm/node_modules/once/once.js:25:25)
    at afterExtras (/usr/local/lib/node_modules/npm/lib/config/core.js:173:20)
    at Conf.<anonymous> (/usr/local/lib/node_modules/npm/lib/config/core.js:231:22)
/usr/local/lib/node_modules/npm/lib/utils/error-handler.js:205
  if (npm.config.get('json')) {
                 ^

TypeError: Cannot read property 'get' of undefined
    at process.errorHandler (/usr/local/lib/node_modules/npm/lib/utils/error-handler.js:205:18)
    at process.emit (events.js:182:13)
    at process._fatalException (internal/bootstrap/node.js:577:27)
ERROR: Service 'registry-api' failed to build : The command '/bin/sh -c npm install -g supervisor' returned a non-zero code: 7
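
A quick way to confirm which platform a local image was built for (a sketch, not from the original report; the image name is the one from the log above):

```sh
# Prints e.g. linux/amd64; on an M1 host that means the image runs under qemu emulation
docker image inspect --format '{{.Os}}/{{.Architecture}}' hiotlabs/node-onbuild:latest
```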

Information

Ran docker-compose up twice and it happened both times

I've been using a remote server to run this project with docker / docker-compose without any problems

macOS Version: 11.0.1

Couldn't find "Diagnose & Feedback"

Steps to reproduce the behavior

Dockerfile

FROM ourorg/node-onbuild:latest
RUN npm install -g supervisor

EXPOSE 80

Dockerfile from node-onbuild repo:

FROM node:latest

RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

# Install packages to a directory above the project. Modules will still be found
# but won't be overridden when mounting the project directory for development.
# Any modules we want to override (for local testing) can be mounted in via the
# host machine and will take precedence, being in the project directory.
ONBUILD WORKDIR /usr/src
ONBUILD COPY package.json /usr/src/
ONBUILD RUN npm install
ONBUILD COPY . /usr/src/app

# Reset working dir for running project.
ONBUILD WORKDIR /usr/src/app
# Make all local package binaries available.
ONBUILD ENV PATH ../node_modules/.bin:$PATH

# Don't use npm for running node, it doesn't forward SIGTERM.
CMD [ "node", "app.js" ]
@cheerfulstoic cheerfulstoic changed the title qemu: uncaught target signal 11 (Segmentation fault) - core dumped when running docker-compose up qemu: uncaught target signal 11 (Segmentation fault) - core dumped when running docker-compose up on Apple Silicon Dec 11, 2020
@dstapp

dstapp commented Dec 14, 2020

Having the same problem. Given the following snippet:

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.5.3
    ports:
      - 9200:9200
      - 9300:9300
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms256m -Xmx512m"
      - rest.action.multi.allow_explicit_index=false

ends up with

qemu: uncaught target signal 11 (Segmentation fault) - core dumped

on the new MBP M1.

@mikehhhhhhh

mikehhhhhhh commented Dec 14, 2020

Same issue here using Elasticsearch

    image: docker.elastic.co/elasticsearch/elasticsearch:6.8.0
    volumes:
      - ./configs/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:cached
      - ./configs/log4j2.properties:/usr/share/elasticsearch/config/log4j2.properties:cached
    ports:
      - 9200:9200
    healthcheck:
      test: curl http://127.0.0.1:9200/_cat/health
      interval: 5s
      timeout: 10s
      retries: 5
    environment:
      http.host: "0.0.0.0"
      transport.host: "127.0.0.1"
      ES_JAVA_OPTS: "-Xms512m -Xmx512m"
      xpack.security.enabled: "false"
    networks:
      default:
        aliases:
          - $host_elasticsearch

@SputnikTea

I have the same problem, but it's not just related to docker-compose.
docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" --platform linux/amd64 elasticsearch:7.10.1
also results in an error for me on M1.

Can the images mentioned above start properly with docker run?

Also, I guess it is not related to QEMU + ARM + Java, because Jetty runs fine (credits to dnjo from the preview Slack channel):

docker run -p 80:8080 -p 443:8443 --rm -it --platform linux/amd64 jetty:9-jdk8 /bin/bash
jetty@6041fd4106b5:~$ java -version
openjdk version "1.8.0_275"
OpenJDK Runtime Environment (build 1.8.0_275-b01)
OpenJDK 64-Bit Server VM (build 25.275-b01, mixed mode)
jetty@6041fd4106b5:~$ arch
x86_64
jetty@6041fd4106b5:~$ /docker-entrypoint.sh
2020-12-14 22:20:25.000:INFO:docker-entrypoint:jetty start from /var/lib/jetty/jetty.start
2020-12-14 22:20:26.818:INFO::main: Logging initialized @925ms to org.eclipse.jetty.util.log.StdErrLog
2020-12-14 22:20:27.979:INFO:oejs.Server:main: jetty-9.4.35.v20201120; built: 2020-11-20T21:17:03.964Z; git: bdc54f03a5e0a7e280fab27f55c3c75ee8da89fb; jvm 1.8.0_275-b01
2020-12-14 22:20:28.049:INFO:oejdp.ScanningAppProvider:main: Deployment monitor [file:///var/lib/jetty/webapps/] at interval 1
2020-12-14 22:20:28.122:INFO:oejs.AbstractConnector:main: Started ServerConnector@6a4f787b{HTTP/1.1, (http/1.1)}{0.0.0.0:8080}
2020-12-14 22:20:28.161:INFO:oejs.Server:main: Started @2290ms

@shyaaam

shyaaam commented Jan 10, 2021

I guess running Docker remotely on a VPS is the only foreseeable solution for the next couple of months, until every package supports ARM.

@thepetlyon

So is this an overall qemu error? This really hamstrings M1 dev a bit, which I can deal with; that's what we get for buying into what is essentially a hardware beta test.

@joerison

joerison commented Feb 9, 2021

Having the same problem. Trying to run Alfresco 6 on the new Macbook M1.

@vladaionescu

I was getting this very randomly when using Docker via the VS Code terminal. The VS Code terminal is an amd64 process. Switching to the VS Code Insiders edition (which has native support for M1) made this go away for me.

@cheerfulstoic
Author

Just tried with the latest build and this is still happening (that might not be a surprise 😁). I tried "Reset to factory defaults" and tried again, to be sure.

@colin-mccarthy

Having the same issue. Trying to run KinD on the new Macbook M1.

 kind create cluster

Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.20.2) 🖼 
 ✓ Preparing nodes 📦  
 ✓ Writing configuration 📜 
 ✗ Starting control-plane 🕹️ 
ERROR: failed to create cluster: failed to init node with kubeadm: command "docker exec --privileged kind-control-plane kubeadm init --skip-phases=preflight --config=/kind/kubeadm.conf --skip-token-print --v=6" failed with error: exit status 139
Command Output: qemu: uncaught target signal 11 (Segmentation fault) - core dumped

@nathando

nathando commented Feb 11, 2021

I used to get this issue once in a while with PostgreSQL and Django containers. Not anymore: my solution is to clear all cached containers and images, pull again to get the native containers, and update the Dockerfile if any libraries need to be installed separately.

Previously, when I used Time Machine to move everything to the new MacBook, all the cached containers and images were copied over as well. However, they're not native.
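
In shell terms, the reset is roughly (a sketch; `postgres:13` is a placeholder standing in for whatever images the project actually uses, and prune deletes all unused local data):

```sh
# Drop cached containers/images (e.g. amd64 ones carried over by Time Machine)
docker system prune -a --volumes
# Re-pull, explicitly asking for the native variant
docker pull --platform linux/arm64 postgres:13
```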

@dangkaka

any solutions for this issue? 😓

@cheerfulstoic
Author

Just downloaded the latest build announced today and this is still failing. Seems pretty common with 14 👍 on the original issue, though maybe other issues have more 😅

@soupdiver

I'm getting the same error when trying to build a Terraform Provider through docker buildx build --platform linux/amd64.

#26 24.96 go: downloading go.opencensus.io v0.22.0
#26 25.41 go: downloading github.com/jmespath/go-jmespath v0.3.0
#26 25.53 go: downloading github.com/hashicorp/golang-lru v0.5.1
#26 42.82 # github.com/zclconf/go-cty/cty/function/stdlib
#26 42.82 qemu: uncaught target signal 11 (Segmentation fault) - core dumped
#26 56.18 # google.golang.org/grpc/health/grpc_health_v1
#26 56.18 SIGSEGV: segmentation violation
#26 56.18 PC=0x40276809ed m=8 sigcode=0
#26 56.18
#26 56.18 goroutine 27 [running]:
#26 56.18 runtime: unknown pc 0x40276809ed
#26 56.18 stack: frame={sp:0x15, fp:0x0} stack=[0xc000a82000,0xc000a8a000)
#26 56.18
#26 56.18 runtime: unknown pc 0x40276809ed
#26 56.18 stack: frame={sp:0x15, fp:0x0} stack=[0xc000a82000,0xc000a8a000)
#26 56.18
#26 56.18 created by cmd/compile/internal/gc.compileFunctions
#26 56.18 	/usr/local/go/src/cmd/compile/internal/gc/pgen.go:382 +0x129
#26 56.18
#26 56.18 goroutine 1 [chan send]:
#26 56.18 cmd/compile/internal/gc.compileFunctions()
#26 56.18 	/usr/local/go/src/cmd/compile/internal/gc/pgen.go:390 +0x186
#26 56.18 cmd/compile/internal/gc.Main(0xcc7d20)
#26 56.18 	/usr/local/go/src/cmd/compile/internal/gc/main.go:768 +0x361a
#26 56.18 main.main()
#26 56.18 	/usr/local/go/src/cmd/compile/main.go:52 +0xb1
#26 56.18
#26 56.18 goroutine 28 [runnable]:
#26 56.18 cmd/compile/internal/ssa.(*sparseSet).add(...)
#26 56.18 	/usr/local/go/src/cmd/compile/internal/ssa/sparseset.go:39
#26 56.18 cmd/compile/internal/ssa.branchelim(0xc000af6f20)
#26 56.18 	/usr/local/go/src/cmd/compile/internal/ssa/branchelim.go:39 +0x1cd
#26 56.18 cmd/compile/internal/ssa.Compile(0xc000af6f20)
#26 56.18 	/usr/local/go/src/cmd/compile/internal/ssa/compile.go:96 +0x98d
#26 56.18 cmd/compile/internal/gc.buildssa(0xc00049b340, 0x1, 0x0)
#26 56.18 	/usr/local/go/src/cmd/compile/internal/gc/ssa.go:470 +0x11ba
#26 56.18 cmd/compile/internal/gc.compileSSA(0xc00049b340, 0x1)
#26 56.18 	/usr/local/go/src/cmd/compile/internal/gc/pgen.go:319 +0x5d
#26 56.18 cmd/compile/internal/gc.compileFunctions.func2(0xc0008e5aa0, 0xc0000c0650, 0x1)
#26 56.18 	/usr/local/go/src/cmd/compile/internal/gc/pgen.go:384 +0x4d
#26 56.18 created by cmd/compile/internal/gc.compileFunctions
#26 56.18 	/usr/local/go/src/cmd/compile/internal/gc/pgen.go:382 +0x129
#26 56.18
#26 56.18 goroutine 29 [runnable]:
#26 56.18 fmt.(*pp).doPrintf(0xc0000ab380, 0xca34f3, 0xc, 0xc000a517d8, 0x1, 0x1)
#26 56.18 	/usr/local/go/src/fmt/print.go:974 +0x124b
#26 56.18 fmt.Sprintf(0xca34f3, 0xc, 0xc000a517d8, 0x1, 0x1, 0xc000a517e8, 0xac98e6)
#26 56.18 	/usr/local/go/src/fmt/print.go:219 +0x66
#26 56.18 cmd/compile/internal/gc.(*Liveness).emit.func1(0xc000b65880, 0x8)
#26 56.18 	/usr/local/go/src/cmd/compile/internal/gc/plive.go:1217 +0xba
#26 56.18 cmd/compile/internal/gc.(*Liveness).emit(0xc0000c7900, 0x200, 0xc000b6e170)
#26 56.18 	/usr/local/go/src/cmd/compile/internal/gc/plive.go:1222 +0x545
#26 56.18 cmd/compile/internal/gc.liveness(0xc000c539b0, 0xc000c458c0, 0xc000b6a7e0, 0xb, 0xcc8488, 0x0)
#26 56.18 	/usr/local/go/src/cmd/compile/internal/gc/plive.go:1269 +0x36b
#26 56.18 cmd/compile/internal/gc.genssa(0xc000c458c0, 0xc000b6a7e0)
#26 56.18 	/usr/local/go/src/cmd/compile/internal/gc/ssa.go:6301 +0x95
#26 56.18 cmd/compile/internal/gc.compileSSA(0xc000488f20, 0x2)
#26 56.18 	/usr/local/go/src/cmd/compile/internal/gc/pgen.go:329 +0x3a5
#26 56.18 cmd/compile/internal/gc.compileFunctions.func2(0xc0008e5aa0, 0xc0000c0650, 0x2)
#26 56.18 	/usr/local/go/src/cmd/compile/internal/gc/pgen.go:384 +0x4d
#26 56.18 created by cmd/compile/internal/gc.compileFunctions
#26 56.18 	/usr/local/go/src/cmd/compile/internal/gc/pgen.go:382 +0x129
#26 56.18
#26 56.18 goroutine 30 [runnable]:
#26 56.18 cmd/compile/internal/ssa.fuseBlockPlain(0xc000a1f448, 0xc000b37900)
#26 56.18 	/usr/local/go/src/cmd/compile/internal/ssa/fuse.go:218 +0x5c5
#26 56.18 cmd/compile/internal/ssa.fuse(0xc0006131e0, 0x3166cd7205)
#26 56.18 	/usr/local/go/src/cmd/compile/internal/ssa/fuse.go:40 +0xb3
#26 56.18 cmd/compile/internal/ssa.fuseEarly(0xc0006131e0)
#26 56.18 	/usr/local/go/src/cmd/compile/internal/ssa/fuse.go:12 +0x30
#26 56.18 cmd/compile/internal/ssa.Compile(0xc0006131e0)
#26 56.18 	/usr/local/go/src/cmd/compile/internal/ssa/compile.go:96 +0x98d
#26 56.18 cmd/compile/internal/gc.buildssa(0xc000488160, 0x3, 0x0)
#26 56.18 	/usr/local/go/src/cmd/compile/internal/gc/ssa.go:470 +0x11ba
#26 56.18 cmd/compile/internal/gc.compileSSA(0xc000488160, 0x3)
#26 56.18 	/usr/local/go/src/cmd/compile/internal/gc/pgen.go:319 +0x5d
#26 56.18 cmd/compile/internal/gc.compileFunctions.func2(0xc0008e5aa0, 0xc0000c0650, 0x3)
#26 56.18 	/usr/local/go/src/cmd/compile/internal/gc/pgen.go:384 +0x4d
#26 56.18 created by cmd/compile/internal/gc.compileFunctions
#26 56.18 	/usr/local/go/src/cmd/compile/internal/gc/pgen.go:382 +0x129
#26 56.18
#26 56.18 rax    0xc
#26 56.18 rbx    0x9
#26 56.18 rcx    0x4027686e99
#26 56.18 rdx    0x4027680998
#26 56.18 rdi    0x0
#26 56.18 rsi    0x86b5a4570ee0c
#26 56.18 rbp    0x4027686b11
#26 56.18 rsp    0x15
#26 56.18 r8     0xc000431260
#26 56.18 r9     0xc00043ade0
#26 56.18 r10    0xc00074a000
#26 56.18 r11    0xc00077e180
#26 56.18 r12    0x0
#26 56.18 r13    0x0
#26 56.18 r14    0x0
#26 56.18 r15    0x0
#26 56.18 rip    0x40276809ed
#26 56.18 rflags 0xb
#26 56.18 cs     0x656
#26 56.18 fs     0x40
#26 56.18 gs     0x2768
#26 75.56 make: *** [GNUmakefile:120: tools] Error 2
------
error: failed to solve: rpc error: code = Unknown desc = executor failed running [/bin/sh -c make tools]: exit code: 2
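
One way to sidestep the emulated compiler entirely (a sketch, not from this thread) is to keep the build native and let Go cross-compile the amd64 output itself:

```sh
# Runs the Go toolchain natively on arm64 and emits a linux/amd64 binary,
# so nothing executes under qemu. Inside a Dockerfile, the equivalent trick is
#   FROM --platform=$BUILDPLATFORM golang:1.16 AS build
# combined with GOARCH=$TARGETARCH on the go build line.
GOOS=linux GOARCH=amd64 go build -o terraform-provider-example .
```

(`terraform-provider-example` is a placeholder output name.)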

@AndrzejStepienSolveq

Same issue here using Elasticsearch, with the exact same compose snippet as in @mikehhhhhhh's comment above.

Having exactly the same problem...

Did someone make it work with a --platform linux/amd64 build?

@stephen-turner
Contributor

This is a qemu bug, which is the upstream component we use for running Intel (amd64) containers on M1 (arm64) chips, and is unfortunately not something we control. In general we recommend running arm64 containers on M1 chips because (even ignoring any crashes) they will always be faster and use less memory.

Please encourage the author of this container to supply an arm64 or multi-arch image, not just an Intel one. Now that M1 is a mainstream platform, we think that most container authors will be keen to do this.
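
A quick way to check whether a tag already ships an arm64 variant (a sketch; works with any image reference):

```sh
# Lists every platform published under the tag; look for linux/arm64
docker buildx imagetools inspect docker.elastic.co/elasticsearch/elasticsearch:7.10.1
```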

@hazcod

hazcod commented Mar 3, 2021

@andriejka FYI same thing with platform: linux/amd64

@joshua-hester

I updated to the latest version of ES and the error went away :)

@konalegi

konalegi commented Mar 26, 2021

@stephen-turner are there any plans to move to qemu 6?
The changelog for 6.0-rc0 says it now supports Mac M1 chips:

QEMU now supports emulation of the Arm-v8.1M architecture and the Cortex-M55 CPU

I understand that it's a major release, but maybe :)

@stephen-turner
Contributor

@stephen-turner are there any plans to move to qemu 6?
The changelog for 6.0-rc0 says it now supports Mac M1 chips:

QEMU now supports emulation of the Arm-v8.1M architecture and the Cortex-M55 CPU

I understand that it's a major release, but maybe :)

We generally track our upstreams, but only when they're released, not betas/RCs.

However, I'm not sure it helps anyway. We don't want to emulate M1; we are running on M1 and emulating Intel. Also, if https://en.wikipedia.org/wiki/ARM_architecture#Cores is correct, the M1 chip is the Arm-v8.6A architecture, not v8.1M.

@CrawX

CrawX commented Apr 14, 2021

I agree with @stephen-turner, this particular changelog entry will probably not affect the problem here.
I tried qemu 6.0.0-rc2 on linux/aarch64 in a VM running on Apple Silicon nonetheless, and I'm still experiencing a similar problem when running ./gradlew bootBuildImage (which uses paketo-buildpacks):


 > Pulling builder image 'docker.io/paketobuildpacks/builder:base' ..................................................
 > Pulled builder image 'paketobuildpacks/builder@sha256:e19f8c5df2dc7d6b0efd1c8fcd7ffc546cf3c16e0f238d0eb9084781d2c3ad41'
 > Pulling run image 'docker.io/paketobuildpacks/run:base-cnb' ..................................................
 > Pulled run image 'paketobuildpacks/run@sha256:235853acae3609e38e176cc6fb54c04535d44e26e46739aebf0374fe74fd6291'
 > Executing lifecycle version v0.11.1
 > Using build cache volume 'pack-cache-18d2320494d4.build'

 > Running creator
    [creator]     ===> DETECTING
    [creator]     ======== Output: paketo-buildpacks/procfile@4.0.0 ========
    [creator]     qemu: uncaught target signal 11 (Segmentation fault) - core dumped
    [creator]     ======== Output: paketo-buildpacks/environment-variables@3.0.0 ========
    [creator]     qemu: uncaught target signal 11 (Segmentation fault) - core dumped
    [creator]     ======== Output: paketo-buildpacks/image-labels@3.0.0 ========
    [creator]     qemu: uncaught target signal 11 (Segmentation fault) - core dumped
    [creator]     err:  paketo-buildpacks/procfile@4.0.0
    [creator]     err:  paketo-buildpacks/environment-variables@3.0.0
    [creator]     err:  paketo-buildpacks/image-labels@3.0.0
    [creator]     ======== Output: paketo-buildpacks/procfile@4.0.0 ========

@richie50

richie50 commented Apr 20, 2021

OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.

qemu: uncaught target signal 11 (Segmentation fault) - core dumped

Docker version 20.10.5, build 55c4c88, on Mac Apple Silicon. Experiencing a similar issue.

@cheerfulstoic
Author

Realized that I should give a status update: after @stephen-turner's comments I realized that we were using a number of old images which weren't built to support the M1 architecture. After upgrading the images, we've had a lot of success with the new version of Docker for Mac.
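
For anyone auditing for the same thing, this is roughly how to spot stale amd64 images locally (a sketch; dangling `<none>` tags may need filtering out first):

```sh
# Prints each local image with the platform it was built for;
# anything showing linux/amd64 will run emulated on an M1
docker images --format '{{.Repository}}:{{.Tag}}' \
  | xargs -I{} docker image inspect --format '{}: {{.Os}}/{{.Architecture}}' {}
```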

@zeljkokalezic

zeljkokalezic commented Apr 22, 2021

For people that need to run ES 6.x images for various reasons, these are working fine for me on my M1 MacBook (Docker Desktop 3.3.1, Big Sur).

@pasim

pasim commented Apr 22, 2021

@zeljkokalezic thanks for the tip, it worked for me with some environment variables borrowed from @dprandzioch's docker-compose YAML.

@ekcasey

ekcasey commented Apr 29, 2021

This is a qemu bug, which is the upstream component we use for running Intel (amd64) containers on M1 (arm64) chips, and is unfortunately not something we control.

@stephen-turner Do you know whether this bug has been filed with qemu and, if so, do you have a link?

@docker-robott
Collaborator

Closed issues are locked after 30 days of inactivity.
This helps our team focus on active issues.

If you have found a problem that seems similar to this, please open a new issue.

Send feedback to Docker Community Slack channels #docker-for-mac or #docker-for-windows.
/lifecycle locked

@docker docker locked and limited conversation to collaborators May 29, 2021