
Support for running via a docker image? #46

Closed
tom-sherman opened this issue Sep 28, 2022 · 38 comments
Labels
feature request Request for Workers team to add a feature

Comments

@tom-sherman

Could this be supported out of the box?

Looks like there have been some teething problems getting it to work here: #20 (comment)

@frafra

frafra commented Sep 28, 2022

This is what I am using:

FROM node:18

RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -qy tini libc++1

WORKDIR /app
RUN npm install workerd
COPY config.capnp hello.js ./

CMD ["tini", "./node_modules/.bin/workerd", "serve", "config.capnp"]

@jasnell jasnell added the feature request Request for Workers team to add a feature label Sep 28, 2022
@tom-sherman
Author

tom-sherman commented Sep 28, 2022

I have the following Dockerfile but I get an error, running on M1 MacOS:

FROM node:18

RUN apt-get update && apt-get -y install libc++-dev libunwind-dev

WORKDIR /app
RUN npm install workerd
COPY workerd.capnp worker.js health-check.js ./

CMD ["./node_modules/.bin/workerd", "serve", "workerd.capnp"]
/app/node_modules/@cloudflare/workerd-linux-arm64/bin/workerd: error while loading shared libraries: libunwind.so.1: cannot open shared object file: No such file or directory
node:child_process:910
    throw err;
    ^

Error: Command failed: /app/node_modules/@cloudflare/workerd-linux-arm64/bin/workerd serve workerd.capnp
    at checkExecSyncError (node:child_process:871:11)
    at Object.execFileSync (node:child_process:907:15)
    at Object.<anonymous> (/app/node_modules/workerd/bin/workerd:134:26)
    at Module._compile (node:internal/modules/cjs/loader:1119:14)
    at Module._extensions..js (node:internal/modules/cjs/loader:1173:10)
    at Module.load (node:internal/modules/cjs/loader:997:32)
    at Module._load (node:internal/modules/cjs/loader:838:12)
    at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12)
    at node:internal/main/run_main_module:18:47 {
  status: 127,
  signal: null,
  output: [ null, null, null ],
  pid: 14,
  stdout: null,
  stderr: null
}

Node.js v18.9.1

Any ideas?

@frafra

frafra commented Sep 29, 2022

You removed libc++1 and added two unnecessary -dev libraries; that is why it fails. libc++1 installs libunwind as a dependency.

@tom-sherman
Author

tom-sherman commented Sep 29, 2022

@frafra I tried that, same error:

FROM node:18

RUN apt-get update && apt-get -y install libc++1 libunwind8

WORKDIR /app
RUN npm install workerd
COPY workerd.capnp worker.js health-check.js ./

CMD ["./node_modules/.bin/workerd", "serve", "workerd.capnp"]

@frafra

frafra commented Sep 29, 2022

This error comes from your modified version of the Dockerfile. Please stick with the suggested packages: there is no need to install libunwind explicitly, since it is a dependency of libc++1, and the libunwind version you are specifying is not the one libc++1 currently requires. See the Debian package pages for more information.

I followed the Getting Started section of this repository and made a new repo with the suggested hello-world example and a simple Dockerfile, which works flawlessly. I would suggest you start there: https://github.com/frafra/workerd-docker

@tom-sherman
Author

I haven't written out everything I've tried; all of the Dockerfile variations would probably blow through the character limit on a GitHub comment.

I have of course tried your Dockerfile, with no luck. Everything I try throws back the same error while loading shared libraries: libunwind.so.1: cannot open shared object file: No such file or directory error.

@frafra

frafra commented Sep 29, 2022

You said you are on an M1; that could explain why you are getting different results. Have you tried building the Dockerfile from the repository I linked? Do you get the very same error even with that repository and the hello-world files?

libunwind8 does not provide libunwind.so.1. libunwind is a requirement of libc++ only starting from Debian Bookworm (v12, current testing), but node:18 uses Debian Bullseye (v11, current stable). The package that does provide it is named libunwind-14 in Debian Sid and libunwind-13 in Debian Bullseye: https://packages.debian.org/search?suite=bullseye&section=all&arch=any&searchon=contents&keywords=libunwind.so.1.

Could it be that libunwind.so.1 is an (indirect) dependency of workerd only on M1? Try adding libunwind-13.
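That suggestion, folded into the Dockerfile from earlier in the thread, might look like this (a sketch, not a verified fix; it assumes libunwind-13 is available in the image's Debian release):

```dockerfile
FROM node:18

# libc++1 provides libc++.so.1; libunwind-13 is the versioned LLVM
# libunwind package that ships libunwind.so.1 on this Debian release
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -qy tini libc++1 libunwind-13

WORKDIR /app
RUN npm install workerd
COPY config.capnp hello.js ./

CMD ["tini", "--", "./node_modules/.bin/workerd", "serve", "config.capnp"]
```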

@frafra

frafra commented Sep 29, 2022

I made a branch with the additional dependency: https://github.com/frafra/workerd-docker/tree/fix-missing-libunwind

@tom-sherman
Author

Ah, adding libunwind-13 gives me a different error:

/app/node_modules/@cloudflare/workerd-linux-arm64/bin/workerd: /lib/aarch64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by /app/node_modules/@cloudflare/workerd-linux-arm64/bin/workerd)
/app/node_modules/@cloudflare/workerd-linux-arm64/bin/workerd: /lib/aarch64-linux-gnu/libc.so.6: version `GLIBC_2.33' not found (required by /app/node_modules/@cloudflare/workerd-linux-arm64/bin/workerd)
/app/node_modules/@cloudflare/workerd-linux-arm64/bin/workerd: /lib/aarch64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by /app/node_modules/@cloudflare/workerd-linux-arm64/bin/workerd)

@Cyb3r-Jak3

I have a basic Docker image for amd64 here: https://github.com/Cyb3r-Jak3/docker-workerd. It's smaller than installing workerd via npm, and arm64 support is on the way.
I'm also happy to merge it into this repo.

@kentonv
Member

kentonv commented Sep 30, 2022

Zooming out a bit, a broader problem here may be that our binary has too many dependencies on shared libraries that lack stable ABIs across distros. We should try to statically link more of these, at least in our npm releases.

When it comes to glibc specifically, though, we do need to dynamically link. Fortunately glibc has strong ABI compatibility. However, we need to make sure to link against an older version of glibc, since generally a binary will only work with the version it was linked against and newer versions, but not older ones.

cc @penalosa
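A hedged way to diagnose the GLIBC_2.3x errors above is to compare the glibc symbol versions the binary demands with what the container provides (the workerd path matches the npm layout shown in earlier comments; objdump comes from binutils and may need to be installed first):

```shell
# Path as installed by npm in the comments above (arm64 variant)
BIN=./node_modules/@cloudflare/workerd-linux-arm64/bin/workerd

# Highest glibc symbol versions the binary requires
objdump -T "$BIN" 2>/dev/null | grep -o 'GLIBC_[0-9.]*' | sort -u | tail -n 3

# glibc version this container actually provides
ldd --version | head -n 1
```

If the versions the binary wants are newer than what `ldd --version` reports, no amount of package installation will help; the base image itself has to be upgraded.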

@frafra

frafra commented Sep 30, 2022

Ah, adding libunwind-13 gives me a different error:

Have you installed it together with libc++1?
workerd in my container is a static binary, so I wonder what the output of ldd is in your container running on M1, given the different requirements.

@kentonv
Member

kentonv commented Sep 30, 2022

workerd in my container is a static binary

Hmm are you sure? I don't think any of our binaries are static.

@frafra

frafra commented Oct 1, 2022

workerd in my container is a static binary

Hmm are you sure? I don't think any of our binaries are static.

My bad, I was looking at the wrong executable:

node@da718d2232c1:~$ ldd ./node_modules/workerd/bin/workerd
	not a dynamic executable
node@da718d2232c1:~$ ldd ./node_modules/@cloudflare/workerd-linux-64/bin/workerd
	linux-vdso.so.1 (0x00007ffd7bd24000)
	libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f5db8eb6000)
	libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f5db8e94000)
	libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f5db8d50000)
	libc++.so.1 => /usr/lib/x86_64-linux-gnu/libc++.so.1 (0x00007f5db8c86000)
	libc++abi.so.1 => /usr/lib/x86_64-linux-gnu/libc++abi.so.1 (0x00007f5db8c4e000)
	libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f5db8c34000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f5db8a5d000)
	/lib64/ld-linux-x86-64.so.2 (0x00007f5dbbd86000)
	librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f5db8a53000)
	libatomic.so.1 => /usr/lib/x86_64-linux-gnu/libatomic.so.1 (0x00007f5db8a49000)

@vovayartsev

vovayartsev commented Oct 16, 2022

I managed to build the workerd binary inside Docker using the following Dockerfile:

FROM ubuntu:22.04

RUN apt-get update && apt-get install -y build-essential git clang libc++-dev libc++abi-dev curl gnupg git python3-pip python3-distutils
RUN curl -L "https://github.com/bazelbuild/bazelisk/releases/download/v1.14.0/bazelisk-linux-arm64" -o /bin/bazelisk && chmod 755 /bin/bazelisk

RUN cd /tmp && git clone https://github.com/cloudflare/workerd.git 
RUN cd /tmp/workerd && bazelisk build -c opt //src/workerd/server:workerd

The compiled binary is located under /tmp/workerd/bazel-bin/src/workerd/server/workerd.

I appreciate it's still an early beta... but an official Dockerfile in the repo would be really helpful for those who want to try workerd in a VS Code Dev Containers environment.

A static binary would be very helpful too, because then it could be copied into a VS Code Dev Container as a one-liner:

COPY --from=workerd /bin/workerd /bin/
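The one-liner above presupposes a multi-stage build. A hedged sketch, reusing the build steps and the bazel output path from the Dockerfile above (the stage name workerd-build is made up here):

```dockerfile
# Stage 1: build workerd from source, as in the Dockerfile above
FROM ubuntu:22.04 AS workerd-build
# ... build-essential, clang, bazelisk install, git clone, bazelisk build ...

# Stage 2: slim runtime image carrying only the compiled binary
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y libc++1 && rm -rf /var/lib/apt/lists/*
COPY --from=workerd-build /tmp/workerd/bazel-bin/src/workerd/server/workerd /bin/workerd
ENTRYPOINT ["/bin/workerd"]
```

Per the ldd output earlier in the thread, the binary still links libc++ dynamically, hence the libc++1 install in the runtime stage.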

@tom-sherman
Author

💯 for static binaries. This is something Deno got sooooo right.

@AnishTiwari

Trying to build the project in ubuntu:20.04

root@vm1:/home/ubuntu# git clone https://github.com/cloudflare/workerd.git \
>     && cd workerd \
>     && bazel build -c opt //src/workerd/server:workerd --verbose_failures
Cloning into 'workerd'...
remote: Enumerating objects: 2064, done.
remote: Counting objects: 100% (2064/2064), done.
remote: Compressing objects: 100% (687/687), done.
remote: Total 2064 (delta 1224), reused 1973 (delta 1193), pack-reused 0
Receiving objects: 100% (2064/2064), 1.85 MiB | 2.44 MiB/s, done.
Resolving deltas: 100% (1224/1224), done.
Extracting Bazel installation...
Starting local Bazel server and connecting to it...
INFO: Analyzed target //src/workerd/server:workerd (201 packages loaded, 16218 targets configured).
INFO: Found 1 target...
ERROR: /home/ubuntu/workerd/BUILD.bazel:7:17: GenCapnp icudata-embed.capnp.h failed: (Exit 1): capnp_tool failed: error executing command 
  (cd /root/.cache/bazel/_bazel_root/b3fbc4211153c5d2f26c97321a65891b/sandbox/linux-sandbox/208/execroot/workerd && \
  exec env - \
  bazel-out/k8-opt-exec-2B5CBBC6/bin/external/capnp-cpp/src/capnp/capnp_tool compile --verbose -obazel-out/k8-opt-exec-2B5CBBC6/bin/external/capnp-cpp/src/capnp/capnpc-c++:bazel-out/k8-opt/bin -I external/capnp-cpp/src icudata-embed.capnp)
# Configuration: d2d1d592403ef6a825f2862044013ce88fb1c177866ebc360e1e17809f6b2a5f
# Execution platform: @local_config_platform//:host

Use --sandbox_debug to see verbose messages from the sandbox and retain the sandbox build root for debugging
bazel-out/k8-opt-exec-2B5CBBC6/bin/external/capnp-cpp/src/capnp/capnpc-c++: plugin failed: Killed
Target //src/workerd/server:workerd failed to build
INFO: Elapsed time: 1523.757s, Critical Path: 146.27s
INFO: 1751 processes: 1544 internal, 206 linux-sandbox, 1 local.
FAILED: Build did NOT complete successfully

Any ideas?

@Zerebokep

I also receive the following error using Docker (arm):

error while loading shared libraries: libc++.so.1: cannot open shared object file

docker compose:

wrangler:
    image: node
    command: >
        sh -cx "yarn install && yarn wrangler dev --experimental-local"

@Cyb3r-Jak3

I also receive the following error using Docker (arm):

error while loading shared libraries: libc++.so.1: cannot open shared object file

My guess is you need to install libc++-dev and libc++abi-dev in your container.
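Applied to the compose snippet above, that suggestion might look like this (a sketch; the bookworm image tag is an assumption, chosen so the base glibc is new enough):

```yaml
wrangler:
    image: node:18-bookworm
    command: >
        sh -cx "apt-get update && apt-get install -y libc++-dev libc++abi-dev &&
                yarn install && yarn wrangler dev --experimental-local"
```

Installing packages in `command` re-runs on every start; baking them into a small Dockerfile would be the cleaner long-term option.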

@fresheneesz

fresheneesz commented Dec 16, 2022

I'm working in a WSL Ubuntu VM, and installing libc++-dev and libc++abi-dev in addition to libc++1 does not help. I'm still getting the following when running yarn add workerd:

Error: Command failed: /home/<redacted>/.nvm/versions/node/v19.3.0/bin/node /home/<redacted>/node_modules/workerd/bin/workerd --version
/home/<redacted>/node_modules/@cloudflare/workerd-linux-64/bin/workerd: error while loading shared libraries: libunwind.so.1: cannot open shared object file: No such file or directory
node:child_process:924
    throw err;
    ^

Error: Command failed: /home/<redacted>/node_modules/@cloudflare/workerd-linux-64/bin/workerd --version
    at checkExecSyncError (node:child_process:885:11)
    at Object.execFileSync (node:child_process:921:15)
    at Object.<anonymous> (/home<redacted>/node_modules/workerd/bin/workerd:135:26)
    at Module._compile (node:internal/modules/cjs/loader:1218:14)
    at Module._extensions..js (node:internal/modules/cjs/loader:1272:10)
    at Module.load (node:internal/modules/cjs/loader:1081:32)
    at Module._load (node:internal/modules/cjs/loader:922:12)
    at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:82:12)
    at node:internal/main/run_main_module:23:47 {
  status: 127,
  signal: null,
  output: [ null, null, null ],
  pid: 30212,
  stdout: null,
  stderr: null
}

Node.js v19.3.0

@penalosa
Collaborator

Which version of Ubuntu are you using in WSL? There have been some issues with Ubuntu 20; trying Ubuntu 22 should work.

@fresheneesz

Ah I do have Ubuntu 20. I'll try 22 at some point, thanks!

@ryan-mars

If anyone has this working in Github Codespaces I'd love to know how.

@tunnckoCore

It's the same thing on Arch Linux. It seems libunwind there is version 1.6, and it provides libunwind.so but definitely not libunwind.so.1.

I have all the devel packages: base-devel, libc++ 15, libunwind 1.6, llvm 15, clang 15.0.7, glibc, etc.

It would definitely be better to reduce the dependencies.

We should try to statically link more of these, at least in our npm releases.

Yep.

@KeesCBakker

Does anyone have it working on WSL without the dreaded error while loading shared libraries: libc++.so.1: cannot open shared object file?

@c0b41

c0b41 commented May 4, 2023

@KeesCBakker you need to upgrade your Ubuntu version to 22.04.

@huw

huw commented May 18, 2023

Now that Miniflare v3 (with workerd) is the default dev command in Wrangler, it is more important to solve this issue robustly.

To summarise the above: the two packages most commonly out of date in this thread are glibc and libunwind. Unfortunately, glibc can't be upgraded without upgrading the OS version your Docker image is based on. For Ubuntu, the minimum supported version is 22.04; for Debian, it's Debian 12 (Bookworm), which is still a few weeks away from a stable release. I can't speak to Alpine, because it has a different libc setup that might require some extra configuration.
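A quick, hedged way to check what a base image actually ships (assumes a standard Linux layout with /etc/os-release and glibc's ldd; Alpine/musl will differ):

```shell
# Which OS release is this image based on?
. /etc/os-release && echo "$PRETTY_NAME"

# Which glibc does it provide? (the errors above demand >= GLIBC_2.32)
ldd --version | head -n 1
```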

The workaround is to upgrade your Docker image—or really, to upgrade whatever base images you’re relying on—to one of those OS versions. In my case, the stack of images I was relying on was:

  • debian:bullseye (Bookworm available)
  • buildpack-deps:bullseye (Bookworm available)
  • node (Bookworm proposed)
  • mcr.microsoft.com/devcontainers/javascript-node
  • mcr.microsoft.com/devcontainers/typescript-node

(The last two images are commonly used by GitHub Codespaces)

I was able to adapt the images pretty easily by combining them into one Dockerfile and adapting the deepest dependency from buildpack-deps:bullseye to buildpack-deps:bookworm.

My Dockerfile is below, but don’t copy-paste this. Your setup is going to be different to mine depending on your container/OS.

#
# Adapted from Node's image (with `corepack enable` instead of installing Yarn)
#

FROM buildpack-deps:bookworm

RUN groupadd --gid 1000 node \
  && useradd --uid 1000 --gid node --shell /bin/bash --create-home node

ENV NODE_VERSION 18.6.0

RUN ARCH= && dpkgArch="$(dpkg --print-architecture)" \
  && case "${dpkgArch##*-}" in \
    amd64) ARCH='x64';; \
    ppc64el) ARCH='ppc64le';; \
    s390x) ARCH='s390x';; \
    arm64) ARCH='arm64';; \
    armhf) ARCH='armv7l';; \
    i386) ARCH='x86';; \
    *) echo "unsupported architecture"; exit 1 ;; \
  esac \
  # gpg keys listed at https://github.com/nodejs/node#release-keys
  && set -ex \
  && for key in \
    4ED778F539E3634C779C87C6D7062848A1AB005C \
    141F07595B7B3FFE74309A937405533BE57C7D57 \
    74F12602B6F1C4E913FAA37AD3A89613643B6201 \
    DD792F5973C6DE52C432CBDAC77ABFA00DDBF2B7 \
    61FC681DFB92A079F1685E77973F295594EC4689 \
    8FCCA13FEF1D0C2E91008E09770F7A9A5AE15600 \
    C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8 \
    890C08DB8579162FEE0DF9DB8BEAB4DFCF555EF4 \
    C82FA3AE1CBEDC6BE46B9360C43CEC45C17AB93C \
    108F52B48DB57BB0CC439B2997B01419BD92F80A \
  ; do \
      gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys "$key" || \
      gpg --batch --keyserver keyserver.ubuntu.com --recv-keys "$key" ; \
  done \
  && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION-linux-$ARCH.tar.xz" \
  && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" \
  && gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc \
  && grep " node-v$NODE_VERSION-linux-$ARCH.tar.xz\$" SHASUMS256.txt | sha256sum -c - \
  && tar -xJf "node-v$NODE_VERSION-linux-$ARCH.tar.xz" -C /usr/local --strip-components=1 --no-same-owner \
  && rm "node-v$NODE_VERSION-linux-$ARCH.tar.xz" SHASUMS256.txt.asc SHASUMS256.txt \
  && ln -s /usr/local/bin/node /usr/local/bin/nodejs \
  && corepack enable \
  # smoke tests
  && node --version \
  && npm --version

#
# Adapted from Microsoft's `javascript-node` repo
#

ARG USERNAME=node
ARG NPM_GLOBAL=/usr/local/share/npm-global

# Add NPM global to PATH.
ENV PATH=${NPM_GLOBAL}/bin:${PATH}

RUN \
    # Configure global npm install location, use group to adapt to UID/GID changes
    if ! cat /etc/group | grep -e "^npm:" > /dev/null 2>&1; then groupadd -r npm; fi \
    && usermod -a -G npm ${USERNAME} \
    && umask 0002 \
    && mkdir -p ${NPM_GLOBAL} \
    && touch /usr/local/etc/npmrc \
    && chown ${USERNAME}:npm ${NPM_GLOBAL} /usr/local/etc/npmrc \
    && chmod g+s ${NPM_GLOBAL} \
    && npm config -g set prefix ${NPM_GLOBAL} \
    && su ${USERNAME} -c "npm config -g set prefix ${NPM_GLOBAL}" \
    # Install eslint
    && su ${USERNAME} -c "umask 0002 && npm install -g eslint" \
    && npm cache clean --force > /dev/null 2>&1
    
#
# Adapted from Microsoft's `typescript-node` repo
#

ARG NODE_MODULES="tslint-to-eslint-config typescript"
RUN su node -c "umask 0002 && npm install -g ${NODE_MODULES}" \
    && npm cache clean --force > /dev/null 2>&1

# Install `libc++-dev` for workerd to work
RUN apt-get update && apt-get -y install libc++-dev

This is too much to expect of a normal user, however, and at the very least Wrangler being clearer about minimum OS versions would be a good move.

@HeyITGuyFixIt

I was able to get this working with the proposed node Bookworm Docker images. I cloned the proposed repo, switched to the Bookworm branch, and built the image I needed locally.

@mnixry

mnixry commented May 18, 2023

It's the same thing on ArchLinux.. Seems like that libunwind is version 1.6 and it definitely doesn't contain libunwind.so.1 but without the 1.

For Arch users, it seems we can create a symlink libunwind.so.1 -> libunwind.so and it just works. I know it is very bad practice, but I have no better workaround right now.
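Illustrated in a scratch directory (the real workaround would target the system lib dir, e.g. /usr/lib, and needs root; as noted, the ABI mismatch makes this fragile):

```shell
# Stand-in demo of the soname symlink trick; the /tmp paths are hypothetical.
mkdir -p /tmp/unwind-demo
cd /tmp/unwind-demo
touch libunwind.so                  # placeholder for the installed library
ln -sf libunwind.so libunwind.so.1  # alias under the soname the loader wants
ls -l libunwind.so.1
```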

@kleisauke

A (temporary) solution for Fedora 38 is to install llvm-libunwind via DNF. For example:

$ ./node_modules/workerd/bin/workerd --version
./node_modules/workerd/bin/workerd: error while loading shared libraries: libunwind.so.1: cannot open shared object file: No such file or directory
$ dnf install -y llvm-libunwind
$ ./node_modules/workerd/bin/workerd --version
workerd 2023-05-12

The library has a different soname (libunwind.so.1), so it can coexist with GCC's libunwind (libunwind.so.8).

For distros that don't distribute LLVM's libunwind, one could attempt to symlink libunwind.so.8 (or the unversioned one) to libunwind.so.1. However, I cannot recommend this, since GCC's libunwind does not have exactly the same ABI as LLVM's.

@jormaj

jormaj commented May 29, 2023

If anyone has this working in Github Codespaces I'd love to know how.

I got this working by selecting a newer devcontainer image (f.ex. ubuntu 22.04 -> mcr.microsoft.com/devcontainers/base:ubuntu), see https://docs.github.com/en/codespaces/setting-up-your-project-for-codespaces/adding-a-dev-container-configuration/introduction-to-dev-containers#using-a-predefined-dev-container-configuration for instructions on how to customize your codespaces devcontainer.

If you're using the Ubuntu base image, just install npm/node using your preferred method, then make sure to apt install libc++-dev.
Now you should be able to install wrangler and run it without any problems.

@ganesh-rao

I can confirm the above suggestion by @jormaj works on docker (managed by VS Code) running mcr.microsoft.com/devcontainers/base:ubuntu (ubuntu 22.04) image.

@Karakatiza666

I am using the image mcr.microsoft.com/vscode/devcontainers/javascript-node:16 with wrangler 3.0.1. After installing libc++-dev and libunwind-dev the shared-library errors were fixed, but I still get Error: write EPIPE .... cloudflare/workers-sdk#3262 (comment) will fix it soon, right?

@jiripospisil

For Arch Linux users:

yay -Sy extra/libc++ aur/llvm-libunwind

@Karakatiza666

Cannot get wrangler dev working on Debian (mcr.microsoft.com/vscode/devcontainers/javascript-node:16). I've installed libc++-dev, libc++abi-dev, and libunwind-dev to no avail; installing clang didn't help either. wrangler 3.1.0.

@gennai3

gennai3 commented Jun 15, 2023

Wrangler does not work on Debian 11. I installed clang, libc++-dev, and libc++abi-dev via apt install, but it still does not work correctly. Please let me know what to do; I think Debian and Ubuntu are the most popular Linux distributions on internet servers. Thanks.

-- log

Compiled Worker successfully
wrangler 3.1.0

wrangler dev now uses local mode by default, powered by Miniflare and workerd.
To run an edge preview session for your Worker, use wrangler dev --remote
Your worker has access to the following bindings:

  • KV Namespaces:
    • KVDATA: 83e2f6aae8664bf89a......xxxxx

Starting local server...
[mf:wrn] The latest compatibility date supported by the installed Cloudflare Workers Runtime is "2023-05-18",
but you've requested "2023-06-15". Falling back to "2023-05-18"...
[b] open a browser, [d] open Devtools, [c] clear console, [x] to exit
/home/hodota/sonicjs/node_modules/wrangler/wrangler-dist/cli.js:30632
throw a;
^

Error: write EPIPE
at afterWriteDispatched (node:internal/stream_base_commons:160:15)
at writeGeneric (node:internal/stream_base_commons:151:3)
at Socket._writeGeneric (node:net:905:11)
at Socket._write (node:net:917:8)
at writeOrBuffer (node:internal/streams/writable:391:12)
at _write (node:internal/streams/writable:332:10)
at Socket.Writable.write (node:internal/streams/writable:336:10)
at Runtime.updateConfig (/home/hodota/sonicjs/node_modules/miniflare/dist/src/index.js:5121:26)
at async Miniflare.#assembleAndUpdateConfig (/home/hodota/sonicjs/node_modules/miniflare/dist/src/index.js:9138:23)
at async Miniflare.#init (/home/hodota/sonicjs/node_modules/miniflare/dist/src/index.js:8898:5)
Emitted 'error' event on Socket instance at:
at emitErrorNT (node:internal/streams/destroy:157:8)
at emitErrorCloseNT (node:internal/streams/destroy:122:3)
at processTicksAndRejections (node:internal/process/task_queues:83:21) {
errno: -32,
code: 'EPIPE',
syscall: 'write'
}
ERROR: "dev:wrangler" exited with 7.

@huw

huw commented Jun 15, 2023

@Karakatiza666 As of this morning, you should be able to depend on javascript-node:16-bookworm (or just 16, you might need to rebuild your container), since Codespaces just upgraded the default image ^_^

@mrbbot
Contributor

mrbbot commented Jun 30, 2023

Hey! 👋 As of #793 and #800, running workerd in Docker should be much easier. Here's a Dockerfile using the latest workerd release (currently only available on npm under the beta dist-tag):

FROM node:18
WORKDIR /app
RUN npm install workerd@beta
COPY config.capnp worker.js ./
EXPOSE 8080
CMD ["./node_modules/.bin/workerd", "serve", "config.capnp"]
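The config.capnp and worker.js this Dockerfile copies could follow workerd's hello-world sample. A sketch in that spirit (field names follow workerd's workerd.capnp schema; the compatibility date is an arbitrary assumption):

```capnp
using Workerd = import "/workerd/workerd.capnp";

const config :Workerd.Config = (
  services = [ (name = "main", worker = .mainWorker) ],
  # Listen on the port the Dockerfile EXPOSEs
  sockets = [ (name = "http", address = "*:8080", http = (), service = "main") ]
);

const mainWorker :Workerd.Worker = (
  serviceWorkerScript = embed "worker.js",
  compatibilityDate = "2023-06-30",
);
```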

I'm going to close this now, and will update this comment once beta is no longer needed.

@mrbbot mrbbot closed this as completed Jun 30, 2023