
Access environment variables in code #1455

Closed
pmuens opened this issue Jun 30, 2016 · 38 comments

Comments

@pmuens
Contributor

pmuens commented Jun 30, 2016

Back in Serverless v0 we could use process.env to access previously passed-in environment variables inside the code (useful e.g. for namespacing database tables).

@pmuens pmuens modified the milestones: v1.0, v1.0.0-alpha.3 Jun 30, 2016
@kennu
Contributor

kennu commented Jul 2, 2016

Big +1 to this. We (SC5) have always been heavy users of environment variables and sls meta sync. All our projects are using lots of them to configure stage-specific settings. Some variables (API keys etc) can't be stored in Git, so the previous separation into multiple files under _meta/variables was very useful for .gitignoring certain files.

@stevecrozz

I'm surprised that there's been even an alpha release without this feature. This is critical functionality.

@ivawzh

ivawzh commented Jul 21, 2016

Am I right that in alpha 2 we are not able to access environment variables at all?

@eahefnawy eahefnawy modified the milestones: v1.0, v1.0.0-beta Jul 25, 2016
@ajagnanan

Can we get this in the beta too? 🙏

@mt-sergio
Contributor

What solution are you using to set different variables in different environments before we have the feature? I'm very interested, even if it's a dirty hack...

@danhumphrey

+1

@mt-sergio
Contributor

My solution: I put all the v1 code inside a sls-v0.4 project with everything I need (env, aliases, versions...). The changes are minimal and it will be very easy to migrate once v1 is more mature.

@nicka
Member

nicka commented Aug 9, 2016

Use case

Usage of CloudFormation stack output variables

Prerequisite

We have access to variables within a Lambda function

Example

Custom S3 bucket resource through CloudFormation and reference the generated bucket name within your Lambda function.

With v0.5
It was easier since the framework first deploys the resources; any stack output ends up in a _meta/variables/s-variables-STAGE-REGION.json file. You would then deploy your functions (without CloudFormation) separately with Serverless, and the variables would already be present.

With v1.0
This would only be possible with two CloudFormation deploys, since the functions are part of the CloudFormation template, or with a single sls function deploy foo. But in both cases you could end up with empty/incorrect variables within your function.

"Solution"

It's a bit ugly and it always requires an APIG to be present. But we could map all the stack outputs with APIG to the Lambda event body.
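Sketched out, the idea would be an API Gateway request body mapping template that merges stage variables (populated from stack outputs) into the event the Lambda receives. Everything below is hypothetical, not something the framework generates:

```
{
  "body": $input.json('$'),
  "stackOutputs": {
    "bucketName": "$stageVariables.bucketName"
  }
}
```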

@pmuens
Contributor Author

pmuens commented Aug 10, 2016

@nicka thanks for your proposal!

I don't 100% get the need for an APIG.

Our current deployment strategy for functions consists of two parts. After the initial stack setup (if the stack is not yet present) we zip the function code and upload it to S3. Next we update the stack with the compiled CloudFormation function definitions.

Couldn't we add the env variables we want to access (e.g. extracted from a serverless.env.yml file) to the Outputs section of the CloudFormation template and then access them in the Lambda function?
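For instance, something along these lines (a hedged sketch; the output key and value are made up, not official syntax):

```yaml
# serverless.env.yml values surfaced as CloudFormation stack outputs
Outputs:
  TableName:
    Description: Namespaced table name for this stage
    Value: my-service-dev-todos
```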

@nicka
Member

nicka commented Aug 10, 2016

@pmuens: "Our current deployment strategy for functions consists of two parts. After the initial stack setup (if the stack is not yet present) we zip the function code and upload it to S3. Next we update the stack with the compiled CloudFormation function definitions."

This works because of the initial deploy; for updates it's a lot harder, if not impossible.

Imagine the following: the stack is present, resources are present, and Lambdas are deployed. You then rename/replace an existing S3 bucket CF resource.

The following will happen: CloudFormation creates the new S3 bucket and removes the old one. The S3 bucket Output would be updated by the end of the deploy. BUT during this period your Lambdas are running with a reference to the old, removed S3 bucket (downtime). A second CF deploy is needed to fix this.

ATM it's impossible to get rid of this in-between state; during the UPDATE deploy we can't pass the updated S3 bucket name to the Lambdas. If the CF Lambda resource had a property called EnvironmentVariables we could supply them inline (maybe we should send a feature request to AWS haha).

As this is not the case, we could do something similar with APIG event mapping to Lambda. Although this is not a perfect solution, I'm confident it would work and CF would nicely update all the resources in order without "downtime".

Hope my explanation is clear 😂

@ajagnanan

@pmuens I was throttled by AWS in the past for making too many requests to read the outputs of CloudFormation, and that was just with a few Lambdas running at the same time. I don't think using the outputs is a good idea.

@nicka
Member

nicka commented Aug 10, 2016

@ajagnanan definitely not every time your Lambda function runs.

@ianserlin
Contributor

Let me suggest the obvious just as food for thought:

Since all providers have some version of an "S3" service, and sls already uses it to store the code drops for the lambdas, perhaps sls deploy could package up an "environment variable" section of the service config, put it onto "S3" as a json file, and provide a simple provider-specific snippet to read and load those environment variables, e.g.

sls deploy reads the appropriate section of serverless.env.yaml, creates a json file, uploads it to s3 as ${serviceName}-${stage}-environment.json (or something similar).

In (nodejs) function code, something like the following pseudo-code can be injected (as it used to be in v0.5) at the very beginning of handler.js before zip and deployment happens:

(async () => {
  const AWS = require('aws-sdk'); // 1. configure an instance of AWS.S3
  const s3 = new AWS.S3();
  // 2. load the json file from S3 (bucket/key are known at deploy time)
  const res = await s3.getObject({ Bucket: 'DEPLOY_BUCKET', Key: 'SERVICE-STAGE-environment.json' }).promise();
  // 3. set each property on process.env OR set process.env.SERVERLESS_ENV to the entire json tree
  Object.assign(process.env, JSON.parse(res.Body.toString()));
})();

This is easy to inspect, easy to understand and only incurs a slight S3 loading cost once per function instance "warm-up".

I think that during a deploy right now the old service zip files are removed, which I am guessing (needs to be validated) causes Lambda to pick up the new zip files in new lambda containers after each deployment happens. In-progress lambdas while the deploy is happening are sort of a separate problem that @nicka mentions above around how to do rolling deployments.

@pmuens pmuens removed this from the v1.0 milestone Aug 19, 2016
@fiznool

fiznool commented Aug 19, 2016

FWIW, if anyone (like me) is trying to figure out a simple way of using Heroku-like environment variables in their Lambda functions accessed from API Gateway, one option is using APIG staging variables.

You can set these up via the API Gateway console:

[screenshot: API Gateway console, stage variables editor]

And they can then be accessed in the event.stageVariables object from within your lambda function:

[screenshot: Lambda code reading event.stageVariables]

I'm very new to all this (having just started with AWS lambda and serverless today) so apologies if I'm teaching the proverbial granny to suck eggs... and I also realise this ties the env vars to the http event source only. But it works nicely for my use case!

@pmuens pmuens modified the milestones: v1.0, v1.0.0-beta.3 Aug 26, 2016
@wedgybo
Contributor

wedgybo commented Aug 26, 2016

I like the way that Travis deals with environment vars. The idea of having a plugin that can add encrypted vars to your serverless.yml and then commit them to the repo safely would be pretty nice. I guess the trick is tying them to properties of the service/function so they remain secure?

It would be much nicer if AWS had the same env system they offer for API Gateway for the Lambda functions themselves! Anyone know the best person to ask? :D

For accessing the ARNs of created resources: I think it should be possible to add DependsOn: [ResX, ...] to the Lambda::Function so you could ensure it is created after all your resources/roles/etc.?

The problem with relying on APIG is that not all functions use it. For instance, an authorizer function.
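The DependsOn idea would look roughly like this in the compiled CloudFormation template (abridged; resource names are made up):

```yaml
MyLambdaFunction:
  Type: AWS::Lambda::Function
  DependsOn:
    - MyBucket
    - MyLambdaRole
  Properties:
    Handler: handler.handler
    Role: !GetAtt MyLambdaRole.Arn
```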

@kennu
Contributor

kennu commented Aug 26, 2016

Amazon has built-in AWS Console management of environment variables (or other user configuration data) in pretty much all of their platform services (EC2, ECS, OpsWorks, Beanstalk). So it's a really strange thing that only Lambda is missing it.

OTOH some other badly needed features are also missing in recent services (like ACM certificates in API Gateway), which makes you wonder why development of this basic stuff is so slow. I say this as a big advocate and heavy user of AWS.

@svdgraaf
Contributor

svdgraaf commented Sep 7, 2016

I'm currently working on a project where I need some way of passing created CF resources (in my case a bucket name) to the Lambda functions. I have everything working now (automatic creation of another stage, with the passed variables), and they are showing up as stage variables in APIG and are passed to the Lambda functions.

I'll see if I can create a plugin (or PR) for this. But I think it might be better to have this natively supported by Serverless?

@andymac4182
Contributor

Could you use a Lambda-backed custom resource to get the outputs from CloudFormation and update the zip file containing the code with a config.json file? Then you could have the functions DependsOn this resource to ensure it has completed before Lambda tries to load the function files.

@pgasiorowski
Contributor

Just referencing the Lambda forum thread on this subject: https://forums.aws.amazon.com/thread.jspa?messageID=686261.
Here's a summary of what others do:

  • static file via var props = require('./props.json')[context.alias];
  • DynamoDB table with config
  • S3 bucket
  • dedicated Lambda function
  • KMS

I think it's pretty easy to roll this out on our own, but it should be documented. For now (migrating from sls 0.5) I'm going to use a static file, since it will be easy to refactor in the future.
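The static-file option has the fewest moving parts. A sketch, with the config inlined here so it's self-contained (in practice it would be a committed props.json keyed by stage or alias, and the values are invented):

```javascript
// Stage-keyed config: the moral equivalent of require('./props.json')[stage]
const props = {
  dev:  { TABLE: 'todos-dev',  LOG_LEVEL: 'debug' },
  prod: { TABLE: 'todos-prod', LOG_LEVEL: 'warn' },
};

// Fall back to 'dev' when no stage is supplied
const stage = process.env.SERVERLESS_STAGE || 'dev';
const config = props[stage];
```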

@jthomerson
Contributor

For those on this thread, I was upgrading to SLS 1.0 RC1 and had to have something to make a few environment variables available (service name, stage, etc, at minimum). I wrote a very simple plugin that allows you to define variables in your serverless.yml file that will be written to a .env file in your deployment bundle so that you can use dotenv to load them. This may not address the needs of those who need things like CloudFormation references, but it addressed my simpler needs.

https://www.npmjs.com/package/serverless-plugin-write-env-vars

Hope that helps someone until SLS officially supports it!

@ghost

ghost commented Sep 14, 2016

It would be great to have env vars both on a per-service basis (like the Write Env Vars plugin does) and a per-handler basis, where handler takes priority over service and can override service env vars.

Something like this would be amazing:

[screenshot: example serverless.yml with service- and handler-level env vars]
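A hypothetical serverless.yml shape for that proposal (the keys are illustrative; this is not supported v1 syntax at the time of the thread):

```yaml
service: my-service

environment:          # service-level defaults
  TABLE_PREFIX: todos
  LOG_LEVEL: info

functions:
  create:
    handler: handler.create
    environment:      # handler-level, overrides service-level
      LOG_LEVEL: debug
```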

@jeffski

jeffski commented Sep 16, 2016

Just looking for a way to do something like this in V1 as opposed to V0.5

in s-function.json:

"environment": {
  "SERVERLESS_PROJECT": "${project}",
  "SERVERLESS_STAGE": "${stage}"
},

in handler.js:

const stage = process.env.SERVERLESS_STAGE;
const project = process.env.SERVERLESS_PROJECT;
const table = project + '-' + stage + '-todos';

@andymac4182
Contributor

I am doing this in v1 with webpack and the DefinePlugin. It has been working great so far. Here is how I am doing this. https://gist.github.com/andymac4182/b25c5ffc5e23c1e367e5fde7558758d0

I am using the serverless-webpack plugin to integrate webpack and serverless.
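A webpack config along those lines might look like this (a sketch under the assumption that webpack is installed; not the contents of the gist, and the env var name is illustrative):

```javascript
// webpack.config.js — DefinePlugin inlines env values as literals at build time
const webpack = require('webpack');

module.exports = {
  entry: './handler.js',
  target: 'node',
  plugins: [
    new webpack.DefinePlugin({
      // Every occurrence of process.env.SERVERLESS_STAGE in the bundle
      // is replaced with the string literal computed at build time
      'process.env.SERVERLESS_STAGE': JSON.stringify(process.env.SERVERLESS_STAGE || 'dev'),
    }),
  ],
};
```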

@svdgraaf
Contributor

@jeffski I'm using the serverless-plugin-write-env-vars plugin from @jthomerson; it works fine for now. I'm working on a plugin for setting the API Gateway stage variables, so you can just use event.stageVariables.foo in API Gateway Lambda functions.

@jeffski

jeffski commented Sep 30, 2016

@svdgraaf - thank you, that worked. I had a try with your plugin but ran into a couple of issues, which I have reported.

@jch254

jch254 commented Nov 2, 2016

I'm using a Babel plugin to access env vars in my project (https://github.com/jch254/serverless-es6-dynamodb-webapi).

Check out https://babeljs.io/docs/plugins/transform-inline-environment-variables for more info. This is really handy in React projects too.

@flomotlik flomotlik modified the milestone: v1.0-ideas Nov 4, 2016
@wmarra

wmarra commented Nov 8, 2016

A good use case for this is separating environments that run the same code. For example, I have 2 queues in SQS, queue-name-dev and queue-name-prod, but every time I change the stage in serverless.yml I need to change the handler file too.
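That hard-coding is exactly what a stage env var would remove. A sketch (SERVERLESS_STAGE is an assumed name, not something v1 sets for you):

```javascript
// Pick the queue by stage instead of hard-coding it in the handler
const stage = process.env.SERVERLESS_STAGE || 'dev';
const queueName = 'queue-name-' + stage; // queue-name-dev or queue-name-prod
```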

@andymac4182
Contributor

@wmarra Check out #2673 it might help

@pmuens
Contributor Author

pmuens commented Nov 18, 2016

Closing this one as #2673 will discuss this in detail!
