
bucket_check condition too strict for certain scenarios #8123

Closed
sjmcgrath opened this issue Jul 14, 2014 · 11 comments
Labels
bug This issue/PR relates to a bug. cloud

Comments

@sjmcgrath

Issue Type:

Bug Report

Ansible Version:

ansible 1.6.6

Environment:

Ubuntu 12.04

Summary:

When an IAM policy allows an action on a particular key but doesn't allow a bucket lookup, performing the action on the key fails, because bucket_check runs even though it isn't needed.

Steps To Reproduce:

Set up an IAM policy that allows an action for a specific file, but no actions on the bucket.

Example...

{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::bucket/file"
      ]
    }
  ]
}

Run an action like...

s3: bucket=bucket object=/file dest=file mode=get

Expected Results:

The file object should be retrieved.

Actual Results:

(bucket and key have been substituted for the actual bucket and key in the output below)

<localhost> REMOTE_MODULE s3 bucket="bucket" object=/file dest=file mode=get
<localhost> EXEC ['/bin/sh', '-c', 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1405350501.78-257604828225025 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1405350501.78-257604828225025 && echo $HOME/.ansible/tmp/ansible-tmp-1405350501.78-257604828225025']
<localhost> PUT /tmp/tmpKorkx8 TO /home/ubuntu/.ansible/tmp/ansible-tmp-1405350501.78-257604828225025/s3
<localhost> EXEC ['/bin/sh', '-c', u'LC_CTYPE=C LANG=C /usr/bin/python /home/ubuntu/.ansible/tmp/ansible-tmp-1405350501.78-257604828225025/s3; rm -rf /home/ubuntu/.ansible/tmp/ansible-tmp-1405350501.78-257604828225025/ >/dev/null 2>&1']
localhost | FAILED >> {
    "failed": true, 
    "msg": "Target bucket cannot be found"
}
Suggested change

bucket_check fails on:

result = s3.lookup(bucket)

Instead of s3.lookup, s3.get_bucket should be used, which makes it possible to skip validation:

result = s3.get_bucket(bucket, validate=False)

Of course, in most scenarios we want validate=True, so this needs to be configurable.
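
A minimal sketch of what a configurable bucket_check could look like, assuming a boto S3Connection named s3 (an illustration, not the module's actual code; the validate parameter name mirrors boto's get_bucket() keyword):

from boto.exception import S3ResponseError

def bucket_check(s3, bucket, validate=True):
    # With validate=False, get_bucket() returns a Bucket handle without
    # making any call to S3, so no bucket-level permission is required.
    try:
        result = s3.get_bucket(bucket, validate=validate)
    except S3ResponseError:
        result = None
    return result is not None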

@jimi-c
Member

jimi-c commented Jul 16, 2014

Would you be interested in sending us a PR for this? A new parameter could be added to specify the validate flag.
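
A sketch of how such a flag might be declared, using the hypothetical name validate_bucket (an assumption, not an agreed-upon name):

from ansible.module_utils.basic import AnsibleModule

def main():
    # Excerpt-style sketch: only the pieces relevant to the new flag.
    module = AnsibleModule(
        argument_spec=dict(
            bucket=dict(required=True),
            validate_bucket=dict(default=True, type='bool'),  # hypothetical new parameter
        ),
    )
    # The flag would be passed through to boto wherever the bucket is looked up,
    # e.g. s3.get_bucket(bucket, validate=module.params['validate_bucket']).
    module.exit_json(changed=False, validate_bucket=module.params['validate_bucket'])

if __name__ == '__main__':
    main()

Defaulting to True would preserve the current behaviour for everyone not affected by restrictive IAM policies.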

@jimi-c jimi-c added P3 labels Jul 16, 2014
@sjmcgrath
Author

I'm really busy right now, but if no one addresses this I'll eventually get to it.

@pesterhazy

I ran into the same problem today. Ansible's s3 module doesn't work, whereas aws s3 cp s3://bucket/filename /tmp/filename does.

I think the problem is that the user doesn't have permission to list the objects in that bucket. The message "Target bucket cannot be found" is misleading; the problem is actually that the list operation "aws s3 ls s3://bucket" fails.

Come to think of it, I don't see why the s3 module should verify that the bucket exists or that it can be listed. Can it just attempt the GET operation?

@sjmcgrath
Author

I just took a deeper look at the code after seeing another comment on this issue. It's not just bucket_check that's the problem in this scenario -- the code goes on to hit the same s3.lookup(bucket) problem in key_check and again in download_s3file.

I'm not sure whether boto caches the result, but either way this could use some refactoring: the changes needed to fix this kind of scenario touch at least 10 different places in the code, and in many scenarios the bucket lookup happens multiple times -- at least 3 times for a get -- when once would be sufficient. (Rather, a single get_bucket instead of a lookup.)

I'm actually wondering whether any preliminary checks should be done at all. I think it might be better for the module to do exactly what it's told to do (e.g. get key -> don't validate bucket, don't validate key, just try to get it) instead of adding layers of "protection", which add complication and extra calls without offering any real benefit -- if we can't get the key, we'll find out when we try.

Any thoughts on this, @jimi-c? Removing all the bucket checks is quite a change from how things work now, but the current behaviour is broken, and it seems unlikely that anyone is relying on the broken behaviour.
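
For illustration, a check-free get could be as small as the following, again assuming a boto connection s3 and an AnsibleModule instance module (a sketch of the idea, not the module's actual download_s3file):

from boto.exception import S3ResponseError

def download_s3file(module, s3, bucket, obj, dest):
    try:
        # validate=False: no preliminary bucket lookup at all.
        bkt = s3.get_bucket(bucket, validate=False)
        key = bkt.get_key(obj)  # one HEAD request on the key; returns None if it does not exist
        if key is None:
            module.fail_json(msg="Key %s not found in bucket %s" % (obj, bucket))
        key.get_contents_to_filename(dest)
        module.exit_json(msg="GET operation complete", changed=True)
    except S3ResponseError as e:
        # Permission failures surface here, directly from the one call that matters.
        module.fail_json(msg=str(e))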

@gmpuran

gmpuran commented Sep 29, 2014

We also hit this issue today. S3 access was limited to a specific folder in a bucket, yet Ansible threw the error:

"msg: Target bucket cannot be found"

Version used: 1.7.2

AWS provides granular access control to S3 objects through IAM policies, and we would expect Ansible to honour that as well, without requiring explicit list rights on the entire bucket.

@mpdehaan
Contributor

Hi!

Thanks very much for your interest in Ansible. It sincerely means a lot to us.

On September 26, 2014, due to enormous levels of contribution to the project, Ansible decided to reorganize module repos, making it easier for developers to work on the project and for us to more easily manage new contributions and tickets.

We split modules from the main project off into two repos, http://github.com/ansible/ansible-modules-core and http://github.com/ansible/ansible-modules-extras

If you would still like this ticket attended to, we will need your help in having it reopened in one of the two new repos, and instructions are provided below.

We apologize that we are not able to make this transition happen seamlessly, though this is a one-time change and your help is greatly appreciated -- this will greatly improve velocity going forward.

Both sets of modules will ship with Ansible, though they'll receive slightly different ticket handling.

To locate where a module lives between 'core' and 'extras'

Additionally, should you need more help with this, you can ask questions on:

Thank you very much!


@sirkubax
Contributor

I had the same error:
msg: Target bucket cannot be found

while the aws cli tool
aws s3 cp s3://my_bucket/myfile /tmp/xxx --profile s3

fails with:
A client error (400) occurred when calling the HeadObject operation: Bad Request
Completed 1 part(s) with ... file(s) remaining

and:
aws s3 ls s3://mybucket --profile s3

A client error (InvalidRequest) occurred when calling the ListObjects operation: You are attempting to operate on a bucket in a region that requires Signature Version 4. You can fix this issue by explicitly providing the correct region location using the --region argument, the AWS_DEFAULT_REGION environment variable, or the region variable in the AWS CLI configuration file. You can get the bucket's location by running "aws s3api get-bucket-location --bucket BUCKET".

In the end I found that I had two errors: a typo in the aws_secret_key in my .aws/credentials file, and no region set in the s3 module.

Now it seems OK, but it's still not working.
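
For anyone hitting the Signature Version 4 message above: setting the region explicitly on the task avoids the problem, assuming a module version that supports the region parameter (the region name below is illustrative):

s3: bucket=my_bucket object=/myfile dest=/tmp/xxx mode=get region=eu-central-1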

@mminklet

mminklet commented Sep 4, 2015

Is this seeing any progress? I have ended up granting full access to a role and I still get 'Target bucket cannot be found'. Even if this had worked, I wouldn't want Ansible to have these permissions.

ansible 1.9.3

An aws cli list-objects call on the bucket returns the list of objects as expected for the restricted role. There's no reason for this to fail in bucket_check as far as I can make out.

@ober

ober commented Oct 22, 2015

Can this please be addressed? Opening a bucket to full perms just to get around this bug is crazy.

@aaronthebaron

While not optimal, you can work around this issue without granting full permissions. A policy structured like the following does nicely for a read-only bucket. The important part seems to be applying the permissions to the bucket itself: s3:ListBucket is evaluated against the bucket ARN, while s3:GetObject applies to the object ARNs.

"Statement": [
    {
        "Sid": "[Statement ID]",
        "Effect": "Allow",
        "Action": [
            "s3:GetObject",
            "s3:ListBucket"
        ],
        "Resource": [
            "arn:aws:s3:::mah-bucket",
            "arn:aws:s3:::mah-bucket/*"
        ]
    }
]

@ansibot ansibot added bug This issue/PR relates to a bug. and removed bug_report labels Mar 6, 2018
@ansible ansible locked and limited conversation to collaborators Apr 25, 2019