---
title: "Deploying Lambda container image locally with Elastic Container Registry (ECR) using LocalStack"
linkTitle: "Deploying Lambda container image locally with Elastic Container Registry (ECR) using LocalStack"
weight: 2
description: >
  Learn how to create and deploy Lambda functions using container images in LocalStack. This tutorial guides you through packaging your code and dependencies into a Docker image, creating a local Elastic Container Registry (ECR) in LocalStack, and deploying the Lambda container image.
type: tutorials
---

[Lambda](https://aws.amazon.com/lambda/) is a powerful serverless compute system that enables you to break down your application into smaller, independent functions. These functions can be deployed as individual units within the AWS ecosystem. Lambda offers seamless integration with various AWS services and supports multiple programming languages for different runtime environments. To deploy Lambda functions programmatically, you have two options: [uploading a ZIP file containing your code and dependencies](https://docs.aws.amazon.com/lambda/latest/dg/configuration-function-zip.html) or [packaging your code in a container image](https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-images.html) and deploying it through Elastic Container Registry (ECR).

[ECR](https://aws.amazon.com/ecr/) is an AWS-managed registry that facilitates the storage and distribution of containerized software. With ECR, you can effectively manage your image lifecycles, versioning, and tagging, separate from your application. It seamlessly integrates with other AWS services like ECS, EKS, and Lambda, enabling you to deploy your container images effortlessly. Creating container images for your Lambda functions involves using Docker and implementing the Lambda Runtime API according to the Open Container Initiative (OCI) specifications.

[LocalStack Pro](https://localstack.cloud) extends support for Lambda functions using container images through ECR. It enables you to deploy your Lambda functions locally using LocalStack. In this tutorial, we will explore creating a Lambda function using a container image and deploying it locally with the help of LocalStack.

## Prerequisites

Before diving into this tutorial, make sure you have the following prerequisites:

- [LocalStack Pro](https://localstack.cloud/pricing/) to emulate Amazon ECR and AWS Lambda locally
- [`awslocal` CLI](https://docs.localstack.cloud/integrations/aws-cli/#localstack-aws-cli-awslocal)
- [Python](https://www.python.org/downloads/)
- [Docker](https://docker.io/)

## Creating a Lambda function

To package and deploy a Lambda function as a container image, we'll create a function containing our code and a Dockerfile. Create a new directory for your Lambda function and navigate to it:

{{< command >}}
$ mkdir -p lambda-container-image
$ cd lambda-container-image
{{< / command >}}

Initialize the directory by creating two files: `handler.py` and `Dockerfile`. Use the following command to create them:

{{< command >}}
$ touch handler.py Dockerfile
{{< / command >}}

Open the `handler.py` file and add the following Python code, which represents a simple Lambda function that prints the message `'Hello from LocalStack Lambda container image!'`:

```python
def handler(event, context):
    print('Hello from LocalStack Lambda container image!')
```

In the above example, the `handler` function is executed by the Lambda service every time a trigger event occurs. The above function serves as an entrypoint for the Lambda function inside a runtime environment and accepts `event` and `context`, to receive information about the event and the invocation properties.
In the code above, the `handler` function is executed by the Lambda service whenever a trigger event occurs. It serves as the entry point for the Lambda function within the runtime environment and accepts `event` and `context` as parameters, providing information about the event and invocation properties, respectively.
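
As a purely illustrative sketch (not part of this tutorial's function), a handler can also read data from the incoming `event` and return a value to the caller. The `name` key below is an arbitrary example, while `context.function_name` is a standard attribute of the Lambda context object:

```python
import json


def handler(event, context):
    # `event` carries the invocation payload (a dict for JSON invocations),
    # while `context` exposes runtime metadata such as the function name.
    name = event.get("name", "LocalStack")
    print(f"Invoked {context.function_name} with event: {json.dumps(event)}")
    return {"message": f"Hello, {name}!"}
```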

Following these steps, you have created the foundation for your Lambda function and defined its behaviour using Python code. In the following sections, we will package this code and its dependencies into a container image using the `Dockerfile`.

## Building the image

To package our Lambda function as a container image, we must create a Dockerfile containing the instructions for building the image. Open the `Dockerfile` and add the following content. It uses the [AWS base image for Lambda](https://docs.aws.amazon.com/lambda/latest/dg/runtimes-images.html#runtimes-images-lp) `public.ecr.aws/lambda/python:3.8`, copies the `handler.py` file into the image, and sets the function handler to `handler.handler` so that the Lambda runtime can locate it.

```Dockerfile
FROM public.ecr.aws/lambda/python:3.8

# Copy the function code into the Lambda task root
COPY handler.py ${LAMBDA_TASK_ROOT}

CMD [ "handler.handler" ]
```

{{< alert title="Note" color="primary">}}
If your Lambda function has additional dependencies, create a file named `requirements.txt` in the same directory as the Dockerfile. List the required libraries in this file. You can install these dependencies in the `Dockerfile` under the `${LAMBDA_TASK_ROOT}` directory.
{{< /alert >}}
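
As a hypothetical example, if your handler relied on a third-party library such as `requests`, you would list it in `requirements.txt` and add an installation step to the Dockerfile (for instance, copying `requirements.txt` into the image and running something like `pip install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"`). The handler could then import the library as usual:

```python
import requests  # assumed to be installed into the image via requirements.txt


def handler(event, context):
    # Hypothetical handler that depends on an external HTTP library
    response = requests.get("https://checkip.amazonaws.com")
    print(f"Function egress IP: {response.text.strip()}")
```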

With the Dockerfile prepared, you can now build the container image using the following command:

{{< command >}}
$ docker build -t localstack-lambda-container-image .
{{< / command >}}

By executing these steps, you have defined the Dockerfile that instructs Docker on how to build the container image for your Lambda function. The resulting image will contain your function code and any specified dependencies.

## Publishing the image to ECR

Now that the initial setup is complete, let's explore how to leverage LocalStack's AWS emulation by pushing our image to ECR and deploying the Lambda container image. Start LocalStack by executing the following command, making sure to replace `<your-api-key>` with your actual API key:

{{< command >}}
$ LOCALSTACK_API_KEY=<your-api-key> DEBUG=1 localstack start -d
{{< / command >}}

Once the LocalStack container is running, we can create a new ECR repository to store our container image. Use the `awslocal` CLI to achieve this. Run the following command to create the repository, replacing `localstack-lambda-container-image` with the desired name for your repository:

{{< command >}}
$ awslocal ecr create-repository --repository-name localstack-lambda-container-image
{
    "repository": {
        "repositoryArn": "arn:aws:ecr:us-east-1:000000000000:repository/localstack-lambda-container-image",
        "registryId": "000000000000",
        "repositoryName": "localstack-lambda-container-image",
        "repositoryUri": "localhost.localstack.cloud:4510/localstack-lambda-container-image",
        "createdAt": <timestamp>,
        "imageTagMutability": "MUTABLE",
        "imageScanningConfiguration": {
            "scanOnPush": false
        },
        ...
    }
}
{{< / command >}}

{{< alert title="Note" color="primary">}}
To further customize the ECR repository, you can pass additional flags to the `create-repository` command. For more details on the available options, refer to the [AWS CLI documentation](https://docs.aws.amazon.com/cli/latest/reference/ecr/create-repository.html).
{{< /alert >}}

Next, build the image and push it to the ECR repository. Execute the following commands:

{{< command >}}
$ docker build -t localhost:4510/localstack-lambda-container-image .
$ docker push localhost:4510/localstack-lambda-container-image
{{< / command >}}

In the above commands, we specify the `repositoryUri` as the image name to push the image to the ECR repository. After executing these commands, you can verify that the image is successfully pushed to the repository by using the `describe-images` command:

{{< command >}}
$ awslocal ecr describe-images --repository-name localstack-lambda-container-image
{
    "imageDetails": [
        {
            "registryId": "000000000000",
            "repositoryName": "localstack-lambda-container-image",
            "imageDigest": "sha256:459fce12258ff1048925e0f4e7fb039d8b54111a8e3cca5db4acb434a9e8af37",
            "imageTags": [
                "latest"
            ],
            "imageSizeInBytes": 184217147,
            "imagePushedAt": <timestamp>,
            "imageManifestMediaType": "application/vnd.docker.distribution.manifest.v2+json",
            "artifactMediaType": "application/vnd.docker.container.image.v1+json"
        }
    ]
}
{{< / command >}}

By running this command, you can confirm that the image is now in the ECR repository and ready to be deployed as a Lambda function using LocalStack's AWS emulation capabilities.

## Deploying the Lambda function

To deploy the container image as a Lambda function, we will create a new Lambda function using the `create-function` command. Run the following command to create the function:

{{< command >}}
$ awslocal lambda create-function \
    --function-name localstack-lambda-container-image \
    --package-type Image \
    --code ImageUri="localstack-lambda-container-image" \
    --role arn:aws:iam::000000000000:role/lambda-role \
    --handler handler.handler
{
    "FunctionName": "localstack-lambda-container-image",
    "FunctionArn": "arn:aws:lambda:us-east-1:000000000000:function:localstack-lambda-container-image",
    "Role": "arn:aws:iam::000000000000:role/lambda-role",
    "Handler": "handler.handler",
    "CodeSize": 0,
    "Description": "",
    "Timeout": 3,
    "MemorySize": 128,
    "LastModified": <timestamp>,
    "CodeSha256": "9be73524cd5aa70fbcee3fc8d7aac4eb7e2a644e9ef2b13031719077a65c0031",
    "Version": "$LATEST",
    "VpcConfig": {},
    "TracingConfig": {
        "Mode": "PassThrough"
    },
    "RevisionId": "cab4268c-2d56-4591-821a-9154e157b984",
    "State": "Pending",
    "StateReason": "The function is being created.",
    "StateReasonCode": "Creating",
    "PackageType": "Image",
    "Architectures": [
        "x86_64"
    ],
    "EphemeralStorage": {
        "Size": 512
    },
    "SnapStart": {
        "ApplyOn": "None",
        "OptimizationStatus": "Off"
    }
}
{{< / command >}}

The command includes several flags to create the Lambda function. Here's what each of them does:

- `ImageUri`: Specifies the image URI of the container image you pushed to the ECR repository (`localstack-lambda-container-image` in this case), passed via the `--code` flag.
- `package-type`: Sets the package type to `Image` to indicate that the Lambda function will be created from a container image.
- `function-name`: Specifies the name of the Lambda function you want to create.
- `handler`: Specifies the function handler (`handler.handler`), the entry point that the Lambda runtime invokes.
- `role`: Sets the IAM role ARN that the Lambda function should assume. In the example, a mock role ARN is used. For an actual role, please refer to the [IAM documentation]({{< ref "iam" >}}).
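
Note that the `create-function` output above reports `"State": "Pending"` while the function is being created. If you script your deployment, a minimal sketch using boto3 can wait until the function becomes active; the endpoint, region, and dummy credentials below are assumptions based on LocalStack's defaults, so adjust them to your setup:

```python
import boto3

# Assumes LocalStack's default edge endpoint and dummy credentials
lambda_client = boto3.client(
    "lambda",
    endpoint_url="http://localhost:4566",
    region_name="us-east-1",
    aws_access_key_id="test",
    aws_secret_access_key="test",
)

# Poll until the function leaves the "Pending" state
waiter = lambda_client.get_waiter("function_active_v2")
waiter.wait(FunctionName="localstack-lambda-container-image")

config = lambda_client.get_function(FunctionName="localstack-lambda-container-image")["Configuration"]
print(f"Function state: {config['State']}")
```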

To invoke the Lambda function, you can use the `invoke` command:

{{< command >}}
$ awslocal lambda invoke --function-name localstack-lambda-container-image /tmp/lambda.out
{
    "StatusCode": 200,
    "LogResult": "",
    "ExecutedVersion": "$LATEST"
}
{{< / command >}}

The command above executes the Lambda function locally within the LocalStack environment. The response includes the `StatusCode` and `ExecutedVersion`. Since we started LocalStack with `DEBUG=1`, you can find the logs of the Lambda invocation in the LocalStack container output:

{{< command >}}
Starting XRay server loop on UDP port 2000
Starting DNS server loop on UDP port 53
-----
Hello from LocalStack Lambda container image!
{{< / command >}}
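
If you prefer to invoke the function from code instead of the CLI, here is a minimal boto3 sketch. It assumes LocalStack's default edge endpoint and dummy credentials, and the payload key is arbitrary since our handler ignores the event:

```python
import json

import boto3

# Assumes LocalStack's default edge endpoint and dummy credentials
client = boto3.client(
    "lambda",
    endpoint_url="http://localhost:4566",
    region_name="us-east-1",
    aws_access_key_id="test",
    aws_secret_access_key="test",
)

response = client.invoke(
    FunctionName="localstack-lambda-container-image",
    Payload=json.dumps({"source": "tutorial"}),
)
print(response["StatusCode"])               # 200 on success
print(response["Payload"].read().decode())  # "null", since the handler only prints
```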

## Conclusion

In conclusion, the Lambda container image support enables you to use Docker to package your custom code and dependencies for Lambda functions. With the help of LocalStack, you can seamlessly package, deploy, and invoke Lambda functions locally. It empowers you to develop, debug, and test your Lambda functions with a wide range of AWS services. For more advanced usage patterns, you can explore features like [Lambda Hot Reloading]({{< ref "hot-reloading" >}}) and [Lambda Debugging]({{< ref "debugging" >}}).

To further explore and experiment with the concepts covered in this tutorial, you can access the code and accompanying `Makefile` in our [LocalStack Pro samples on GitHub](https://github.com/localstack/localstack-pro-samples/tree/master/lambda-container-image).