Access environment variables in code #1455
Comments
Big +1 to this. We (SC5) have always been heavy users of environment variables.
I'm surprised that there's been even an alpha release without this feature. This is critical functionality.
Am I right that in alpha 2 we are not able to access environment variables at all?
Can we get this in the beta too? 🙏
What solution are you using to set different variables in different environments before we have the feature? I'm very interested, even if it's a dirty hack...
+1
My solution: I put all the v1 code inside a sls-v0.4 project with everything I need (env vars, aliases, versions...). The changes are minimal and it will be very easy to migrate once v1 is more mature.
**Use case:** Usage of CloudFormation stack output variables.
**Prerequisite:** We have access to variables within a Lambda function.
**Example:** Create a custom S3 bucket resource through CloudFormation and reference the generated bucket name within your Lambda function.
**With v0.5**
**With v1.0**
**"Solution":** It's a bit ugly and it always requires an APIG to be present, but we could map all the stack outputs with APIG to the Lambda event body.
@nicka thanks for your proposal! I don't 100% get the need for an APIG. Our current deployment strategy regarding functions consists of two parts. After the initial stack setup (if the stack is not yet present) we zip the function code and upload it to S3. Next up we update the stack with the compiled CloudFormation function definitions. Couldn't we add the env variable we want to access (e.g. extract it from a
This works for the initial deploy; for updates it's a lot harder, if not impossible. Imagine the following: the stack is present, resources are present and Lambdas are deployed. You would then rename/replace an existing S3 bucket CF resource. The following will happen: CloudFormation creates the new S3 bucket and removes the old one. The S3 bucket output would be updated by the end of the deploy, BUT during this period your Lambdas are running with a reference to an old, removed S3 bucket (downtime). A second CF deploy is needed to fix this. ATM it's impossible to get rid of this in-between state; during the UPDATE deploy we can't pass the updated S3 bucket name to the Lambdas. Let's say the CF Lambda resource would have a property for passing such values. As this is not the case, we could do something similar to this with APIG event mapping to Lambda. Although this is not a perfect solution, I'm confident it would work and CF would nicely update all the resources in order without "downtime". Hope my explanation is clear 😂
@pmuens I was throttled by AWS for making too many requests to read the outputs of CloudFormation in the past, and that was just with a few Lambdas running at the same time. I don't think using the outputs is a good idea.
@ajagnanan definitely not every time your Lambda function runs.
Let me suggest the obvious just as food for thought: Since all providers have some version of an "S3" service and sls already uses it to store the code drops for the lambdas, perhaps an
In (nodejs) function code, something like the following pseudo-code can be injected (as it used to be in v0.5) at the very beginning of
This is easy to inspect, easy to understand and only incurs a slight S3 loading cost once per function instance "warm-up". I think that during a deploy right now the old service zip files are removed, which I am guessing (needs to be validated) causes Lambda to pick up the new zip files in new Lambda containers after each deployment happens. In-progress Lambdas while the deploy is happening are sort of a separate problem that @nicka mentions above around how to do rolling deployments.
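The pseudo-code referenced above was stripped from this comment. Here is a hedged reconstruction of the idea: fetch a per-stage JSON config once per container "warm-up" and cache it for warm invocations. The fetcher is injected so the sketch stays runnable; in a real function it would wrap something like `s3.getObject({...}).promise()`, and the config shape is an assumption.

```javascript
// Hedged reconstruction, not the original pseudo-code: load a per-stage
// JSON config once per container "warm-up" and cache the promise so warm
// invocations don't pay the S3 cost again.
let cachedConfig = null;

function loadConfig(fetcher) {
  if (!cachedConfig) {
    // First invocation in this container: actually fetch (e.g. from S3).
    cachedConfig = fetcher().then((body) => JSON.parse(body));
  }
  return cachedConfig; // later invocations reuse the same promise
}

module.exports = { loadConfig };
```

In a real handler the fetcher would be bound once at module load, so only the first invocation of a fresh container hits S3.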
FWIW, if anyone (like me) is trying to figure out a simple way of using Heroku-like environment variables in their Lambda functions accessed from API Gateway, one option is using APIG stage variables. You can set these up via the API Gateway console, and they can then be accessed in the Lambda function. I'm very new to all this (having just started with AWS Lambda and serverless today) so apologies if I'm teaching the proverbial granny to suck eggs... and I also realise this ties the env vars to the http event source only. But it works nicely for my use case!
I like the way that Travis deals with environment vars. So the idea of having a plugin that can add encrypted vars to your serverless.yml and then commit them to the repo safely would be pretty nice. I guess the trick is tying them to properties of the service/function so they remain secure? It would be much nicer if AWS had the same env system for their API Gateway for the Lambda functions themselves! Anyone know the best person to ask? :D For accessing the ARNs of created resources, I think it should be possible to add The problem with relying on APIG is that not all functions use it; an authorizer function, for instance.
Amazon has built-in AWS Console management of environment variables (or other user configuration data) in pretty much all of their platform services (EC2, ECS, OpsWorks, Beanstalk). So it's a really strange thing that only Lambda is missing it. OTOH some other badly needed features are also missing in recent services (like ACM certificates in API Gateway), which makes you wonder why development of this basic stuff is so slow. I say this as a big advocate and heavy user of AWS. |
I'm currently working on a project where I need some way of passing created CF resources (in my case a bucket name) to the Lambda functions. I have everything working now (automatic creation of another stage, with the passed variables); they are showing up as stage variables in APIG and are passed to the Lambda functions. I'll see if I can create a plugin (or PR) for this. But I think it might be better to have this natively supported by Serverless?
Could you use a Lambda-backed custom resource to get the outputs from the CloudFormation stack and update the zip file containing the code with a
Just referencing Lambda thread on this subject https://forums.aws.amazon.com/thread.jspa?messageID=686261.
I think it's pretty easy to roll this out on our own but should be documented. For now (migrating from sls 0.5) I'm going to use a static file since it will be easy to refactor it in the future. |
For those on this thread: I was upgrading to SLS 1.0 RC1 and needed something to make a few environment variables available (service name, stage, etc., at minimum). I wrote a very simple plugin that allows you to define variables in your serverless.yml: https://www.npmjs.com/package/serverless-plugin-write-env-vars Hope that helps someone until SLS officially supports it!
Just looking for a way to do something like this in V1 as opposed to V0.5 in s-function.json:
in handler.js:
I am doing this in v1 with webpack and the DefinePlugin. It has been working great so far. Here is how I am doing this. https://gist.github.com/andymac4182/b25c5ffc5e23c1e367e5fde7558758d0 I am using the |
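The linked gist isn't reproduced here; a minimal sketch of the DefinePlugin approach (assumed file names and variable names, not the author's exact config) looks roughly like this. Values are baked into the bundle at build time, so the deployed code never reads the real environment at runtime:

```javascript
// webpack.config.js sketch (assumptions: entry file name, STAGE variable).
// DefinePlugin replaces matching expressions in the source with literals
// when the bundle is built.
const webpack = require('webpack');

module.exports = {
  entry: './handler.js',
  target: 'node',
  output: { libraryTarget: 'commonjs', filename: 'handler.js' },
  plugins: [
    new webpack.DefinePlugin({
      // Every occurrence of process.env.STAGE in the source is replaced
      // by this literal string at build time.
      'process.env.STAGE': JSON.stringify(process.env.STAGE || 'dev'),
    }),
  ],
};
```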
@jeffski I'm using the |
@svdgraaf - thank you, that worked. I had a try with your plugin but ran into a couple of issues which I have added.
I'm using a Babel plugin to access env vars in my project (https://github.com/jch254/serverless-es6-dynamodb-webapi). Check out https://babeljs.io/docs/plugins/transform-inline-environment-variables for more info. This is really handy in React projects too. |
A good example for that is separating environments using the same code. For example, I have 2 queues in SQS and I'd like to use queue-name-dev and queue-name-prod, but every time that I change the stage in serverless.yml I need to change it in the handler file too.
Closing this one as #2673 will discuss this in detail! |
Back in Serverless v0 we had the possibility to use `process.env` to access previously passed-in environment variables inside the code (e.g. useful to namespace database tables etc.).