```
$ npm run build
```
- Within Lambda, press the 'Create a Lambda Function' button
- Press the 'Skip' button to bypass the suggested blueprints
- Enter the Lambda function name dynamodb-backup
- Select 'Node.js' as the Runtime
- Upload the zip
- Under 'Handler', enter 'index.handler'
- Add the following environment variables:
  - TABLE_NAME, the table to back up
  - BUCKET_NAME, the bucket to publish to
  - (optional) CAPACITY_FACTOR, a float less than 0.9 that controls how much of the provisioned read capacity to consume
Your Lambda function will run under an IAM role; this is where you configure the permissions it requires:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:DescribeTable",
        "dynamodb:Query",
        "dynamodb:Scan"
      ],
      "Resource": [
        "${TABLE_NAME}"
      ]
    }
  ]
}
```
This grants permissions for:
- Writing logs for your Lambda execution.
- Storing zipped DynamoDB backups in S3.
- Describing the DynamoDB table and reading its data.
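The optional CAPACITY_FACTOR acts as a budget on how fast the backup may read. A hedged sketch of the arithmetic (the function name is ours, not the project's; the clamp to the documented "less than 0.9" range is an assumption):

```javascript
'use strict';

// Illustrative: derive a per-second read-capacity-unit budget for the
// backup scan from the table's provisioned read capacity (as reported
// by DescribeTable) and the CAPACITY_FACTOR setting.
function readBudget(provisionedReadCapacity, capacityFactor) {
  // Clamp the factor to [0, 0.9], matching the documented upper bound.
  const factor = Math.min(Math.max(capacityFactor, 0), 0.9);
  return provisionedReadCapacity * factor;
}
```

For example, a table provisioned with 100 read capacity units and a CAPACITY_FACTOR of 0.5 would leave the scan a budget of 50 units per second, keeping half the capacity free for live traffic.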
There is an example restore function in restore.js. Example usage:

```
./restore.js -b table-backups -s instances -t stages
```
The usage is:

```
[aws-dynamodb-backup (master)]$ ./restore.js -h

  Usage: restore [options]

  Options:

    -h, --help                     output usage information
    -b, --bucketname <bucketname>  The name of the S3 bucket to restore from
    -t, --target <target>          The name of the table to create
    -s, --source <source>          The name of the source file
```
All the options are required. The source is the directory and name of the backup file. The target is the name of the table to restore the data to.
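The option handling implied by the usage output can be sketched as follows (illustrative only; restore.js's actual implementation may differ):

```javascript
'use strict';

// Minimal parsing of the three required options shown in the -h output.
function parseArgs(argv) {
  const names = {
    '-b': 'bucketname', '--bucketname': 'bucketname',
    '-t': 'target', '--target': 'target',
    '-s': 'source', '--source': 'source',
  };
  const opts = {};
  for (let i = 0; i < argv.length; i++) {
    const name = names[argv[i]];
    if (name) {
      opts[name] = argv[i + 1]; // the flag's value is the next token
      i++;
    }
  }
  // All three options are required.
  for (const required of ['bucketname', 'target', 'source']) {
    if (!opts[required]) throw new Error(required + ' is required');
  }
  return opts;
}
```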