
Support custom container images #1324

Open
W-Ely opened this issue Jan 30, 2022 · 13 comments

Comments

@W-Ely

W-Ely commented Jan 30, 2022

Feature Request

AWS Lambda and the Serverless Framework both now support container images: https://www.serverless.com/blog/container-support-for-lambda
https://docs.aws.amazon.com/lambda/latest/dg/images-create.html

This greatly improves testability and stability, since the exact image that is created and tested can be used at runtime.

It seems like it would require only a small change to at least still run the Python code.

Sample Code

  • file: serverless.yml

Current handler location string:

functions:
  api:
    handler: path/to/handler.api  # <------

Handler location string with custom containers:

functions:
  api:
    image:
      name: api
      command:
        - path/to/handler.api # <------

Expected behavior/code

Ideally it would run the code just as it does today, resolving the handler location from either the existing handler key OR the new image.command array.

@pgrzesik
Collaborator

Thanks for the proposal @W-Ely - I think it would require a bit more of a change, as we cannot make assumptions about the runtime/language that the command invokes, so it would have to be more universal. We would be more than happy to work out a good implementation plan and accept a PR for it if anyone is interested in tackling this issue. 🙌

@W-Ely
Author

W-Ely commented Jan 31, 2022

I think it would require a bit more of a change, as we cannot make assumptions about the runtime/language that the command uses/invokes

This is a good point. I didn't think about this because in one of my projects, https://github.com/Hall-of-Mirrors/kolvir/blob/main/serverless.yml#L69, the only thing I have to change is that single line. Now I see that this is because I left provider.runtime: python3.9 in place, which allows it to run even though my image is actually 3.10. There is currently no ill effect from leaving the runtime in place, though I could see this not being allowed in the future, since it isn't used during the deploy and could cause confusion.

I wonder if an approach could be to add a value like runtime: to custom.serverless-offline, which could be one of these https://github.com/dherault/serverless-offline/blob/master/src/config/supportedRuntimes.js and would be used when the value from provider isn't present.
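For illustration, a sketch of what that might look like (note: this runtime key under custom.serverless-offline is hypothetical, not an existing option of the plugin):

custom:
  serverless-offline:
    # hypothetical option: the runtime serverless-offline should emulate
    # for image-based functions when provider.runtime is not set
    runtime: python3.9

functions:
  api:
    image:
      name: api
      command:
        - path/to/handler.api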

@pgrzesik
Collaborator

pgrzesik commented Feb 2, 2022

I think it works for your specific use case, because there the command maps directly to a compatible handler inside the container - but what about situations where the command is more generic? I think we should rather consider supporting containers without making any assumptions about the runtime and the container internals. The runtime setting has no effect for container-based functions anyway.

@tux86

tux86 commented Apr 19, 2022

Use serverless-plugin-ifelse:

custom: 
  serverlessIfElse:
    - If: '"${self:custom.isOffline}" != "true"'
      Exclude:
        - functions.websocket.handler
        - functions.authorizer.handler
        - functions.cognitoTriggersHandler.handler
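
For context, a sketch of the function definition this approach assumes (function and image names are illustrative): each function carries both a handler key for serverless-offline and an image key for the real deployment, and the condition above strips the handler keys before a deploy:

functions:
  websocket:
    handler: src/websocket.handler   # used by serverless-offline
    image:
      name: appimage                 # used for the actual deployment
      command:
        - src/websocket.handler

Since the framework rejects functions that define both handler and image, a mirror condition (If offline, Exclude functions.websocket.image) would presumably be needed for the offline run as well.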

@tux86

tux86 commented Apr 20, 2022

Another way to solve this that doesn't require any plugin:

The trick here is that we create two separate files (functions.offline.yaml and functions.yaml): the first for offline mode and the second for sls deploy. So if the sls command is started in offline mode we include functions.offline.yaml; otherwise functions.yaml is included instead.

project directory structure :

sls-project/
   serverless/
        functions.yaml
        functions.offline.yaml
   serverless.yaml
   .env.dev

serverless.yaml

service: sls-project
provider:
 ...
custom:
  offline : ${self:custom.offline:${strToBool(${env:IS_OFFLINE, false})}}  # contains config based on IS_OFFLINE env 
  'offline:true':  # offline enabled parameters
     enabled: true
     filePrefix: '.offline'
  'offline:false':  # offline disabled parameters
     enabled: false
     filePrefix: ''
...
functions:
  # if offline mode is enabled, filePrefix = '.offline'; else filePrefix = ''
  ${file(./serverless/functions${self:custom.offline.filePrefix}.yaml):functions}  

functions.yaml

functions:
  http:
    image:
      name: appimage   #   <--- uses image : for deployment
      command:
        - dist/src/main.handler
    events:
      - httpApi:
          method: '*'
          path: /
      - httpApi:
          method: '*'
          path: /{proxy+}

functions.offline.yaml

functions:
  http:
    handler: dist/src/main.handler     #   <--- uses handler: for offline mode
    events:
      - httpApi:
          method: '*'
          path: /
      - httpApi:
          method: '*'
          path: /{proxy+}

Last step: set the env variable before starting sls in offline mode.

package.json

    "scripts": {
    "sls:offline": "IS_OFFLINE=true yarn sls offline"
    } 

or by adding it to an env file if you are using a local stage:
.env.local

# only in .env.local
IS_OFFLINE=true

@major-mayer

The trick here that we have created two separate files ( functions.offline.yaml , functions.yaml) , the first one for the offline mode and the second is for sls deploy.

If I get it correctly, you use a container image for the deployed version of the Lambda function and a regular handler-based function when testing it offline with this plugin.
This might work in your case, but it's definitely not a solution for all scenarios.

Whenever you need a runtime dependency that isn't included in the standard Lambda Python/Node/etc. image and have to install it in a custom image, the offline version won't work anymore.
This is the reason why we use container images in the first place.

@tforster

Amazon also makes available the runtime interface client (RIC) and runtime interface emulator (RIE). The description of the emulator:

Lambda provides a runtime interface emulator (RIE) for you to test your function locally. The AWS base images for Lambda and base images for custom runtimes include the RIE. For other base images, you can download the Runtime interface emulator from the AWS GitHub repository.

The emulator can be used externally or baked into the image itself and essentially fronts the Lambda with a lightweight web server, ultimately allowing you to cURL localhost just as if the function were running in the actual AWS cloud. The downside is that there is currently no provision to expose a debug port in the case of Node (e.g. --inspect-brk). So while there is full support for invoking the container function locally via HTTP, you can't connect a debugger to it (at least for Node, my current struggle).
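
To make that concrete, a minimal sketch of fronting such an image with the RIE via Docker Compose (service and image names are assumptions; inside the AWS base images the emulator listens on port 8080):

# docker-compose.yml
services:
  api:
    image: my-lambda-image:latest   # assumption: built from public.ecr.aws/lambda/nodejs
    ports:
      - "9000:8080"                 # the RIE listens on 8080 inside the container

The function can then be invoked locally with curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}', which is the invocation endpoint the emulator exposes.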

Obviously, the RIE conflicts/competes with the core feature of serverless offline, but perhaps there is a way serverless offline can hook/extend/augment/etc Amazon's code in a manner that abstracts the AWS headaches?

Having spent the better part of a day diving deep into this, I have resigned myself to managing two Dockerfiles: one built from public.ecr.aws/lambda/nodejs that will be deployed via SLS to the AWS cloud, and a second, mostly identical one built from the official Node image on DockerHub that also exposes 9229 for debugging.
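
For illustration, the debug-friendly counterpart he describes might be wired up like this (image name and entry command are assumptions; the point is mapping the 9229 inspector port):

services:
  api-debug:
    image: my-app-debug:latest   # assumption: built from the official node image on DockerHub
    # hypothetical wrapper script that serves the handler with the inspector open
    command: ["node", "--inspect=0.0.0.0:9229", "local-server.js"]
    ports:
      - "9229:9229"              # attach your debugger here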

@jcampbell05

We currently need this and are hoping to contribute at least a basic version so we can use serverless-offline with our setup.

@major-mayer

If you could contribute that to the project, it would be great news 👌 @jcampbell05

@jcampbell05

jcampbell05 commented Nov 14, 2022

I've managed to throw together a basic proof of concept implementation in a few hours to kickstart discussions here. It was pretty easy to combine pieces of code we already had to get this to work with my own container images.

Have a go and let me know what you think - it's limited but it works.

@nicoandra

Hello

I faced a similar issue a few days ago: my Lambda package was too big, so I moved the entire project to Docker images. But once the move was completed, deployed, and tested in AWS, I realized I had somehow locked myself out of working locally.

I put together a POC that worked, and have now converted it into a serverless plugin, serverless-offline-lambda-docker-plugin.

The plugin lets users define their functions in Serverless as if no Docker image were used at all. Upon packaging, the plugin converts the functions to Docker-based instead of code-based.

This was pushed just yesterday and, for now, I only have a Python example of how to use it. I haven't needed to do this with Node.js yet, but I plan to add a Node.js example in the short term.
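
As a rough illustration of that flow (hedged: the plugin's actual configuration may differ, see its README), you write a plain handler-based function and the plugin rewrites it at package time:

plugins:
  - serverless-offline-lambda-docker-plugin
  - serverless-offline

functions:
  api:
    handler: path/to/handler.api   # what you write, and what runs offline
# on packaging, the plugin would convert this to something equivalent to:
#   api:
#     image:
#       name: api
#       command:
#         - path/to/handler.api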

Feel free to give it a look and report any issues in the project's Issues section.

@irby

irby commented Jul 17, 2023

Hi, I am working on introducing Serverless to a project so we can easily deploy our Lambdas to AWS through IaC. One of the projects uses Lambda + Docker, and I would like a way to locally test the integration using Serverless Offline. Having seen the PR published by @jcampbell05, I think it would be very cool to get this feature in. 😁

@jcampbell05

The PR was closed because it was open for too long, and I no longer have time to finish it. Anyone is welcome to take a look at the changes I made and flesh out a way to integrate them into the project.
