
IMAGE Launch error: fork/exec /lambda-entrypoint.sh: exec format error on public.ecr.aws/lambda/python:3.8 #26

Open
jakemraz opened this issue Oct 6, 2021 · 23 comments

Comments

@jakemraz

jakemraz commented Oct 6, 2021

I used public.ecr.aws/lambda/python:3.8 for my Python runtime on Lambda.

But today I found that my Lambda function no longer works, failing with the error message below.

IMAGE Launch error: fork/exec /lambda-entrypoint.sh: exec format error
Entrypoint: [/lambda-entrypoint.sh] Cmd: [handler.lambda_handler] WorkingDir: [/var/task]

My code was working before this commit (97a295c)

But after applying this commit, my code doesn't work anymore.

Please check this out.

@jakemraz
Author

jakemraz commented Oct 7, 2021

I've checked it more. I'm using CDK to deploy my Lambda container, and I deployed to ap-northeast-2, which does not seem to support the ARM-based Lambda runtime.
My PC is a Mac M1, so it may build the Dockerfile into an ARM image.
I guess this is what causes the problem.
How do I build my Dockerfile for the x86 runtime on my Mac M1?

@jakemraz
Author

jakemraz commented Oct 7, 2021

I've tested it more.
I deployed my Lambda container CDK project from an x86 machine to ap-northeast-2, and it works.
But when I deployed it from my Mac M1 to us-west-2, which supports the ARM runtime for Lambda, it fails with the same 'exec format error' message.
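Not part of the original report, but a quick way to confirm this kind of mismatch is to compare the architecture of the locally built image with the function's configuration (the image and function names below are placeholders):

# Architecture the local image was built for, e.g. linux/arm64 vs linux/amd64
docker inspect --format '{{.Os}}/{{.Architecture}}' my-lambda-image:latest

# Architecture the Lambda function is configured for
aws lambda get-function-configuration --function-name my-function --query Architectures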

@FredrikZeiner

We have the same issue with nodejs

@jakemraz
Author

jakemraz commented Oct 8, 2021

Interesting: my Lambda Node.js container works well; only the Lambda Python container has a problem.

@kini

kini commented Oct 20, 2021

EDIT: Never mind, read too fast, sorry for the noise!

@nmadhire

I can see the same error with the "arm64" architecture. It works with the default "x86_64" type.

@OG84

OG84 commented Dec 20, 2021

If I understand you correctly, you could either set the Lambda architecture to Arm64 in the function props, or make sure that Docker is building an x86_64 image. If you're building on a Mac M1, then I guess it will pull and build an arm64 image by default.
You could do

DOCKER_DEFAULT_PLATFORM=linux/amd64 cdk deploy ...

Depending on how you specify the image, I think there are also options to set the platform directly in the CDK construct.

@little-eyes

little-eyes commented Dec 22, 2021

I have the same issue deploying from an M1 Mac; the same deployment works fine from Windows or Linux. I build the container image separately in CDK as shown below, and I did two things to make it work: 1) add build args to the DockerImageAsset to force the platform, and 2) force an x86_64 base image.

aws_ecr_assets.DockerImageAsset(
    self, self._map_id("infra-runtime"), directory="./handlers", build_args={"--platform": "linux/amd64"}
)

And in the Dockerfile:

FROM public.ecr.aws/lambda/python:3.8.2021.12.18.01-x86_64
COPY . .

Afterwards, I use cdk deploy and it works on my M1 Mac.
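As a side note that goes beyond the comment above: newer versions of aws-cdk-lib expose a platform property on DockerImageAsset, so the build platform can be pinned without going through build_args. A minimal sketch, keeping the construct id and directory from the example:

from aws_cdk import aws_ecr_assets

# Pin the build platform explicitly instead of passing it via build_args
asset = aws_ecr_assets.DockerImageAsset(
    self, "infra-runtime",
    directory="./handlers",
    platform=aws_ecr_assets.Platform.LINUX_AMD64,
)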

@entest-hai

I experienced a similar problem. Here is my configuration:

  • public.ecr.aws/lambda/python:latest
  • EC2 arm64 Ubuntu
  • requirements.txt
numpy==1.22.0
matplotlib==3.1.2
scipy==1.7.3
PyWavelets==1.1.1
pandas==0.25.3
sklearn==0.0
Cython==0.29.21
gunicorn==20.0.4
boto3==1.12.17
stopit==1.1.2
zstandard==0.14.0
s3fs==0.6.0
simplejson==3.17.2
termcolor==1.1.0

ERROR

ERROR: Command errored out with exit status 1:
     command: /var/lang/bin/python3.9 -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-c_elvgk2/zstandard_2d84094997b84f3a84c5f39acfe7e937/setup.py'"'"'; __file__='"'"'/tmp/pip-install-c_elvgk2/zstandard_2d84094997b84f3a84c5f39acfe7e937/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-93olafiv/install-record.txt --single-version-externally-managed --home /tmp/pip-target-4c3u_lad --compile --install-headers /tmp/pip-target-4c3u_lad/include/python/zstandard
         cwd: /tmp/pip-install-c_elvgk2/zstandard_2d84094997b84f3a84c5f39acfe7e937/
    Complete output (19 lines):
    running install
    running build
    running build_py
    creating build
    creating build/lib.linux-aarch64-3.9
    creating build/lib.linux-aarch64-3.9/zstandard
    copying zstandard/cffi.py -> build/lib.linux-aarch64-3.9/zstandard
    copying zstandard/__init__.py -> build/lib.linux-aarch64-3.9/zstandard
    running build_ext
    building 'zstd' extension
    creating build/temp.linux-aarch64-3.9
    creating build/temp.linux-aarch64-3.9/c-ext
    creating build/temp.linux-aarch64-3.9/zstd
    creating build/temp.linux-aarch64-3.9/zstd/common
    creating build/temp.linux-aarch64-3.9/zstd/compress
    creating build/temp.linux-aarch64-3.9/zstd/decompress
    creating build/temp.linux-aarch64-3.9/zstd/dictBuilder
    gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -Izstd -Izstd/compress -Izstd/decompress -Izstd/common -Ic-ext -Izstd/dictBuilder -I/var/lang/include/python3.9 -c c-ext/bufferutil.c -o build/temp.linux-aarch64-3.9/c-ext/bufferutil.o -DZSTD_MULTITHREAD -DZSTDLIB_VISIBILITY= -DZDICTLIB_VISIBILITY= -DZSTDERRORLIB_VISIBILITY= -fvisibility=hidden
    error: command 'gcc' failed: No such file or directory
    ----------------------------------------
ERROR: Command errored out with exit status 1: /var/lang/bin/python3.9 -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-c_elvgk2/zstandard_2d84094997b84f3a84c5f39acfe7e937/setup.py'"'"'; __file__='"'"'/tmp/pip-install-c_elvgk2/zstandard_2d84094997b84f3a84c5f39acfe7e937/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-93olafiv/install-record.txt --single-version-externally-managed --home /tmp/pip-target-4c3u_lad --compile --install-headers /tmp/pip-target-4c3u_lad/include/python/zstandard Check the logs for full command output.
WARNING: You are using pip version 21.2.4; however, version 21.3.1 is available.
You should consider upgrading via the '/var/lang/bin/python3.9 -m pip install --upgrade pip' command.
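This failure is a different problem from the exec format error above: the arm64 build has no prebuilt aarch64 wheel for zstandard (and some of the other pinned packages), so pip falls back to compiling from source and finds no gcc in the base image. A minimal Dockerfile sketch that installs a C toolchain first; the yum package names assume the Amazon Linux 2 based image that was current at the time, and handler.py / handler.lambda_handler are placeholders:

FROM public.ecr.aws/lambda/python:latest

# C toolchain so packages without prebuilt aarch64 wheels can compile
RUN yum install -y gcc gcc-c++ make && yum clean all

COPY requirements.txt .
RUN pip install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"

COPY handler.py "${LAMBDA_TASK_ROOT}"
CMD ["handler.lambda_handler"]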

@enmanuelmag

I am trying to deploy with the help of the Serverless Framework, running it from a GitLab job. The deploy is successful, but when I run the Lambda I get the same error: fork/exec /lambda-entrypoint.sh: exec format error
These are the variables that change to use ARM on Serverless:

# serverless.yml
...
provider:
  name: aws
  architecture: arm64
  ecr:
    images:
      appimage:
        path: ./
...

And the Dockerfile config:

FROM public.ecr.aws/lambda/nodejs:14

COPY . ${LAMBDA_TASK_ROOT}/

RUN npm install --target ${LAMBDA_TASK_ROOT}
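A hedged guess, since the thread doesn't confirm it: with architecture: arm64 on the function but the image built on a (most likely x86_64) GitLab runner, the two architectures won't match unless the build is forced to arm64. One option is to pin an architecture-specific base tag, following the same -arm64 tag pattern used for the Python images later in this thread (check the nodejs repository in the ECR gallery for the exact tag):

# Pin the base image to arm64 so it matches the function's declared architecture
FROM public.ecr.aws/lambda/nodejs:14-arm64

keeping the COPY and npm install lines as above, or alternatively forcing the build itself with docker build --platform linux/arm64.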

@juan-pascale

I had the same issue; try building the Docker image with the --platform=linux/amd64 flag:
docker build . --platform=linux/amd64

@pbnelson

juan-pascale's fix worked for me, easy peasy, just add --platform=linux/amd64

@entest-hai

ARM and AMD (x86_64) are different platforms.

@legut2

legut2 commented Apr 21, 2022

I can confirm. Building a container on an ARM Mac Mini resulted in this error. It worked fine on my laptop which has ubuntu and a non-arm architecture. I only ever ran into this issue when switching to the computer with an arm processor.

@paco-sparta

paco-sparta commented Jul 12, 2022

This is still recurring regardless of the runtime. I have found it with Node, Python, and the JVM.

It should be possible to specify in the serverless config which architecture you're building with docker build.

EDIT:
You have to force the platform:

provider:
  name: aws
  architecture: x86_64
  ecr:
    # In this section you can define images that will be built locally and uploaded to ECR
    images:
      appimage:
        path: ./
        platform: linux/amd64

@ethompsy

ethompsy commented Aug 3, 2022

I ran into this but I solved it by adding a Python shebang to my app like #!/usr/bin/env python3 and using BOTH ENTRYPOINT [ "/var/task/app.py" ] and CMD [ "app.handler" ] in my Dockerfile. I think I could have skipped the shebang if I instead put this ENTRYPOINT [ "/usr/local/bin/python", "/var/task/app.py" ]. It seems the dockerized Lambda needs a little help knowing how to execute the code. This does not match up with the AWS docs. However, I tried following the docs and this got it working.

For reference I saw this used here: https://docs.aws.amazon.com/lambda/latest/dg/images-create.html#images-create-from-alt

Which is odd because I am using FROM public.ecr.aws/lambda/python:3.8-arm64

@hhimanshu

If someone is working with AWS CDK and deploying their code using DockerImageFunction, this is what I had to do to make it work:

// imports assume aws-cdk-lib v2
import * as path from 'path';
import { aws_lambda } from 'aws-cdk-lib';
import { Platform } from 'aws-cdk-lib/aws-ecr-assets';

const fakeFunction = new aws_lambda.DockerImageFunction(this, 'FakerFunction', {
    code: aws_lambda.DockerImageCode.fromImageAsset(
        path.join(__dirname, '..', '..', 'functions', 'fakedata'),
        {
            platform: Platform.LINUX_AMD64
        }
    ),
});

@ddvirt

ddvirt commented Feb 7, 2023

Hello, I ran into a similar issue recently and fixed it by checking the Lambda architecture: in my case the Docker image was built on ARM while the Lambda function was set to x86_64. You can verify this with the CLI command aws lambda get-function --function-name .... You can set the architecture in CDK like this:

# imports assume aws-cdk-lib v2 (Python)
from os import path
from aws_cdk import aws_lambda

aws_lambda.DockerImageFunction(
    self, "MyMSKFunction",
    code=aws_lambda.DockerImageCode.from_image_asset(path.join(path.dirname("."), "app")),
    vpc=vpc,
    architecture=aws_lambda.Architecture.ARM_64,
)

Example get-function output showing the mismatch (the function still reports x86_64):

{
    "Configuration": {
        "FunctionName": "LambdaStack-1234",
        "State": "Active",
        "LastUpdateStatus": "Successful",
        "PackageType": "Image",
        "Architectures": [
            "x86_64"
        ],
       ....
}

@stevebanik

(Quoting @little-eyes's workaround above: DockerImageAsset build args forcing linux/amd64 plus an x86_64-pinned base image.)

On my M1 MacBook Pro, all I needed was this in my Dockerfile:

FROM public.ecr.aws/lambda/python:3.9.2023.03.15.15-x86_64

I had no need for DockerImageAsset.

@louisdeb

The solution from @little-eyes worked for me, using public.ecr.aws/lambda/python:3.8.2021.12.18.01-x86_64.

I recently upgraded to Python3.10 and used public.ecr.aws/lambda/python:3.10. When I deployed, I was getting the following error in the Lambda Runtime: Error: fork/exec /var/lang/bin/python3: exec format error Runtime.InvalidEntrypoint.

The solution for me was again to correct for the M1 vs x86 architecture: public.ecr.aws/lambda/python:3.10-x86_64.

The list of available ECR image tags is in the Amazon ECR Public Gallery (https://gallery.ecr.aws/lambda/python).

@panilo

panilo commented May 26, 2023

You need to force the PLATFORM as mentioned above

from aws_cdk import (    
    Stack,
    aws_lambda,
    aws_ecr_assets as ecr
)
from constructs import Construct

class LambdaMultiplatDemStack(Stack):

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        my_function = aws_lambda.DockerImageFunction(
            self, 
            "MyDifferentPlatformFn",
            code=aws_lambda.DockerImageCode.from_image_asset(
              ".",
              platform=ecr.Platform.LINUX_ARM64 # Magic Switch!
            ),
            architecture=aws_lambda.Architecture.ARM_64
        )

@alnaranjo

Any solutions other than forcing a different platform? I'm seeing this with v. 0.7.1

@codeyourwayup

(Quoting @enmanuelmag's comment above: deploying with the Serverless Framework from a GitLab job, with architecture: arm64 in serverless.yml and FROM public.ecr.aws/lambda/nodejs:14 in the Dockerfile, failing at runtime with the same exec format error.)

This works! Here are some pointers:
https://medium.com/insiderengineering/deploying-aws-lambda-functions-for-machine-learning-workloads-def50b221139
https://repost.aws/questions/QUDoW9UeaJRcOooxwTwcHcsg/use-dockerfile-for-lambda-running-arm64-architecture

Basically, you need to ensure that the image you build on your local machine has the same architecture as the Lambda function, since they have to match. This happened to me before when I was using GitHub Actions for CI/CD, where you should use cross-platform builds (see the sketch below).
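Not from the comment above, just a sketch of what a cross-platform build can look like on a CI runner (the registry URL, repository, and image tag are placeholders):

# One-time on the runner: register QEMU binfmt handlers so other architectures can be emulated
docker run --privileged --rm tonistiigi/binfmt --install all

# Build (and push) the image for the architecture the Lambda function expects
docker buildx build --platform linux/amd64 \
  -t 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-lambda:latest \
  --push .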
