
provider/aws: Add aws_s3_bucket_object data source #6946

Merged

Conversation

@radeksimko (Member) commented May 31, 2016

There are two main use cases I can think of:

  1. Streamlined Lambda deployments where ZIP files with new versions are uploaded to S3 outside of Terraform (i.e. a new version ID is generated).
  2. Some kind of configuration stored in an S3 object (kinda like consul_keys). Plaintext values per object should be easy to read and use out of the box; JSON/YAML and other formats might need some extra care.
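As a minimal sketch of use case 1 (bucket, key, and role names are hypothetical; assumes the data source exposes a version_id attribute and uses 0.7-era interpolation syntax):

```hcl
# Sketch of use case 1: redeploy a Lambda function whenever a new
# object version of the ZIP is uploaded to S3 outside of Terraform.
# Bucket/key/role names below are hypothetical.
data "aws_s3_bucket_object" "lambda_zip" {
  bucket = "my-deploy-artifacts"
  key    = "handler.zip"
}

resource "aws_lambda_function" "example" {
  function_name     = "example"
  s3_bucket         = "${data.aws_s3_bucket_object.lambda_zip.bucket}"
  s3_key            = "${data.aws_s3_bucket_object.lambda_zip.key}"
  s3_object_version = "${data.aws_s3_bucket_object.lambda_zip.version_id}"
  handler           = "index.handler"
  runtime           = "nodejs4.3"
  role              = "${aws_iam_role.lambda_exec.arn}" # assumed to exist elsewhere
}
```

Because s3_object_version changes whenever a new ZIP is uploaded, a plan after the upload would show the Lambda function being updated to the new version.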

helper/resource - PreventPostDestroyRefresh

TL;DR: This reduces the number of TestCases from 3 to 2. It might be possible to reduce it down to 1, but I wanted to test the behaviour of the data source at the beginning of the lifecycle, not in the middle.

The S3 object and S3 bucket we're testing are set up as part of the test case, and by default we run a refresh after destroying all resources - probably to verify that all resources are gone. This is fine for data sources that read data outside of Terraform's control. The destroy operation, however, doesn't destroy data sources, which means that a post-destroy refresh on aws_s3_object reading an object that was just destroyed would fail.

Test plan

$ make testacc TEST=./builtin/providers/aws TESTARGS='-run=DataSourceAWSS3BucketObject'
TF_ACC=1 go test ./builtin/providers/aws -v -run=DataSourceAWSS3BucketObject -timeout 120m
=== RUN   TestAccDataSourceAWSS3Object_basic
--- PASS: TestAccDataSourceAWSS3Object_basic (56.73s)
=== RUN   TestAccDataSourceAWSS3Object_readableBody
--- PASS: TestAccDataSourceAWSS3Object_readableBody (57.21s)
=== RUN   TestAccDataSourceAWSS3Object_kmsEncrypted
--- PASS: TestAccDataSourceAWSS3Object_kmsEncrypted (317.53s)
=== RUN   TestAccDataSourceAWSS3Object_allParams
--- PASS: TestAccDataSourceAWSS3Object_allParams (66.85s)
PASS
ok      github.com/hashicorp/terraform/builtin/providers/aws    498.339s

@@ -113,6 +113,7 @@ func Provider() terraform.ResourceProvider {
DataSourcesMap: map[string]*schema.Resource{
"aws_ami": dataSourceAwsAmi(),
"aws_availability_zones": dataSourceAwsAvailabilityZones(),
"aws_s3_object": dataSourceAwsS3Object(),
Contributor

Should we instead call this aws_s3_bucket_object so it matches with the resource of the same name? So far we've been trying to make the data sources match the corresponding resources as much as possible (in terms of attributes too) so it's relatively easy to switch between them.

@radeksimko (Member Author) May 31, 2016

Sure, that's not a problem at all, I can rename it.

Generally speaking though I'd rather see resource called aws_s3_object as it just makes more sense when you pronounce it (IMO). 😃

Consistency with past choices is probably easier and safer to implement than deprecating them, though.

@apparentlymart (Contributor)

Aside from some nits, this LGTM. Thanks... I was just the other day wishing for this data source!

@vancluever (Contributor) commented May 31, 2016

@radeksimko I too was thinking about this data source! 👍 Question: would it be too tough to get in KMS support or possible customer SSE support as well? It would make in-band use of encrypted S3 data so much easier (and open up the playing field for using S3 as an easy secret store for Terraform even).

@radeksimko (Member Author)

@vancluever KMS is supported - one of the test cases even covers that specific use case. KMS support was fairly easy to implement as it works out of the box - the IAM user/role refreshing the data source just needs the kms:Decrypt permission for the key used, I think.

Re SSE support - I decided not to support this in the initial implementation because we don't support it in the aws_s3_bucket_object resource either. I think the enhancement needs to happen on both sides, so we can test it properly too.
I'm happy for anyone else to send a follow up PR though! 😃
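The kms:Decrypt requirement mentioned above could look something like this (a sketch only; the role and key resources are hypothetical and assumed to be defined elsewhere):

```hcl
# Hypothetical policy granting the role that runs the refresh
# permission to decrypt the KMS key used to encrypt the S3 object.
resource "aws_iam_role_policy" "allow_kms_decrypt" {
  name = "allow-kms-decrypt"
  role = "${aws_iam_role.reader.id}"

  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "kms:Decrypt",
      "Resource": "${aws_kms_key.example.arn}"
    }
  ]
}
POLICY
}
```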

@vancluever (Contributor)

@radeksimko ahh yeah, you are right, and GetObject doesn't even have a way to use KMS versus SSE, so there you go.

Very nice - might have to rebuild our fork in the next couple of days with this! Thanks a ton!

@radeksimko radeksimko force-pushed the f-aws-s3-object-data-source branch from 685565a to b0ca952 on June 1, 2016 18:12
 - This is to allow easier testing of data sources which read data from resources created in the same scope
@radeksimko radeksimko force-pushed the f-aws-s3-object-data-source branch from b0ca952 to d4fe1b9 on June 1, 2016 18:14
@radeksimko radeksimko changed the title provider/aws: Add aws_s3_object data source provider/aws: Add aws_s3_bucket_object data source Jun 1, 2016
@radeksimko (Member Author) commented Jun 1, 2016

@apparentlymart I addressed two of your valid comments and provided some explanation on the third; I also reran the acceptance tests, which all passed.

Will you give it a final 👍 / 👎 ? 😉

@apparentlymart (Contributor)

I didn't have time to test again, but to my eyes it looks great! 👍

@radeksimko radeksimko merged commit 1ea5cff into hashicorp:master Jun 2, 2016
@radeksimko radeksimko deleted the f-aws-s3-object-data-source branch June 2, 2016 06:04
@radeksimko (Member Author)

^ FYI @gpeng @surminus this may be useful for your Lambda deployments

(landing in 0.7 soon) 😉

@surminus (Contributor) commented Jun 2, 2016

@radeksimko great thanks!

@Djuke (Contributor) commented Jun 2, 2016

Will there be support for interpolating the values of arguments bucket and key?
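(Interpolation should work for these arguments the same way it does for any other data source argument; a sketch with hypothetical names:)

```hcl
# bucket and key can reference other resources or variables
# via interpolation, like any other argument.
data "aws_s3_bucket_object" "app_config" {
  bucket = "${aws_s3_bucket.main.bucket}"
  key    = "${var.config_key}"
}
```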

@ghost commented Apr 25, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Apr 25, 2020