
Removed Let's Encrypt Auto-Renew

edgarroman committed Apr 26, 2017
1 parent 0292008 commit 2366a9404acb72e96c816555a24a720fda33838b
Showing with 74 additions and 21 deletions.
  1. +0 −21 docs/
  2. +74 −0 notes.txt
@@ -205,27 +205,6 @@ And that should work fine going forward. Note that Let's Encrypt certificates o
!!! Note
    Amazon's official documentation states that this step can take up to 40 minutes to initialize the certificate.
#### Step 2.5: Auto-Renew (Optional)
If you'd like Zappa to automatically renew your HTTPS certificate then simply add the following Let's Encrypt expression to your Zappa Settings file:
``` hl_lines="11"
"dev": {
    "django_settings": "frankie.settings",
    "s3_bucket": "zappatest-code",
    "aws_region": "us-east-1",
    "vpc_config": {
        "SubnetIds": [ "subnet-f3446aba", "subnet-c5b8c79e" ], // use the private subnet
        "SecurityGroupIds": [ "sg-9a9a1dfc" ]
    },
    "lets_encrypt_key": "le-account.key", // Local path to account key - can also be an s3 path
    "domain": "",
    "lets_encrypt_expression": "rate(30 days)" // LE renewal schedule
}
```
### Other Service Providers
If you choose to use your own DNS provider and/or your own Certificate Authority to create the custom domain names, you will have to perform the manual steps outlined in the official AWS documentation:
@@ -0,0 +1,74 @@
- How to set up AWS
- How to upload media files?
- How to log in users?
- What about Python-only processing functions?
"dev": {
    "project_name": "test",
    "apigateway_enabled": false,
    "s3_bucket": "lambda-zappa",
    "keep_warm": false,
    "events": [
        { // AWS reactive events
            "function": "your_module.your_reactive_function", // The function to execute
            "event_source": {
                "arn": "arn:aws:s3:::my-bucket", // The ARN of this event source
                "events": [
                    "s3:ObjectCreated:*" // The specific event to execute in response to
                ]
            }
        }
    ]
}
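A reactive function like the one referenced in the settings above might be sketched as follows; the function name and the event shape are assumptions based on the standard S3-to-Lambda event format, not the project's actual code:

```python
# Hypothetical handler for an s3:ObjectCreated:* event delivered to Lambda.
def your_reactive_function(event, context=None):
    """Collect (bucket, key) pairs from each S3 record in the event."""
    objects = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        objects.append((bucket, key))
        print(f"New object: s3://{bucket}/{key}")
    return objects
```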
DynamoDB: from gareth 4/24/2017
DynamoDB *should* give you one concurrent function *per* internal shard... You can find out how many shards it uses by doing a "describe" action on the stream ARN. Mine said it had 8, but there were definitely not 8 Lambdas running concurrently
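The "describe" action mentioned here could be done with boto3 along these lines (the ARN is a placeholder; the stub-client parameter is added for illustration):

```python
def count_shards(stream_arn, client=None):
    """Return the number of shards on a DynamoDB stream.

    Each open shard maps to at most one concurrently running Lambda.
    """
    if client is None:
        import boto3  # only needed for the real AWS call
        client = boto3.client("dynamodbstreams")
    desc = client.describe_stream(StreamArn=stream_arn)
    return len(desc["StreamDescription"]["Shards"])
```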
I then moved to Kinesis
I made the Lambda on the DDB stream simply put the request onto Kinesis, then provisioned a few shards in Kinesis; you get one concurrent Lambda per shard
this worked perfectly as described, but you can only halve or double your shards, and I think only once every 24h... so I turned on 1 to start... bumped it to two, then couldn't alter it any more for 24h... I thought that was a bit rubbish, so I dropped Kinesis
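The relay described here could be sketched roughly like this; the stream name is a placeholder and the injectable `kinesis` client is added so the logic can be shown without AWS credentials:

```python
import json

def relay_handler(event, context=None, kinesis=None, stream_name="my-relay-stream"):
    """Lambda on a DynamoDB stream that forwards each record to Kinesis.

    Downstream, Lambda then runs one concurrent invocation per Kinesis shard.
    """
    if kinesis is None:
        import boto3  # only needed for the real AWS call
        kinesis = boto3.client("kinesis")
    forwarded = 0
    for record in event.get("Records", []):
        kinesis.put_record(
            StreamName=stream_name,
            Data=json.dumps(record, default=str),
            PartitionKey=record.get("eventID", "0"),
        )
        forwarded += 1
    return forwarded
```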
Solution... SNS... SNS will run as many concurrent lambdas as your AWS limit allows
by this time I had 3000 images to process
each image takes 7s to process
I ran a few lines of code to scan DDB and pump out SNS messages
processed all 3000 in ~10s
(a bunch got throttled as that totally blew my provisioned DynamoDB reads and writes, but I boosted those limits temporarily and re-ran :slightly_smiling_face: )
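The "few lines of code to scan DDB and pump out SNS messages" might look like this sketch; the topic ARN, table shape, and helper names are assumptions for illustration, not the author's actual code:

```python
import json

def scan_all(table):
    """Paginate a full scan of a boto3 DynamoDB Table resource."""
    resp = table.scan()
    items = resp["Items"]
    while "LastEvaluatedKey" in resp:
        resp = table.scan(ExclusiveStartKey=resp["LastEvaluatedKey"])
        items.extend(resp["Items"])
    return items

def pump_items_to_sns(items, publish):
    """Publish one SNS message per item so Lambda fans out; `publish` is
    e.g. boto3.client("sns").publish."""
    for item in items:
        publish(
            TopicArn="arn:aws:sns:us-east-1:123456789012:process-images",  # placeholder
            Message=json.dumps(item, default=str),
        )
    return len(items)
```

SNS has no shard limit, so each published message can trigger its own Lambda up to the account's concurrency limit, which matches the fan-out behavior described above.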
in retrospect, more of the documentation makes sense now
oh, and based on the last 48h of data... (I'm outside the "free tier" of Lambda, so that's not a factor) I'm seeing a 90% reduction
in our server costs
