
Access Denied Error: Check bucket failed with full S3 Access #1452

Closed
valerius21 opened this issue Oct 15, 2020 · 5 comments

Comments


valerius21 commented Oct 15, 2020

Additional Information

Version of s3fs being used (s3fs --version)

1.86

Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)

2.9.9-3

Kernel information (uname -r)

5.4.0-48-generic

GNU/Linux Distribution, if applicable (cat /etc/os-release)

NAME="Ubuntu"
VERSION="20.04.1 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.1 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal

s3fs command line used, if applicable

s3fs <testbucket> ${HOME}/s3/<data> -o profile=default -f -d

s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs)

If you execute s3fs with the dbglevel or curldbg options, you can get more detailed debug messages:

Oct 15 06:49:52 candyland s3fs[311520]: init v1.86(commit:unknown) with GnuTLS(gcrypt)
Oct 15 06:49:52 candyland s3fs[311520]: s3fs.cpp:s3fs_check_service(3883): Failed to connect by sigv4, so retry to connect by signature version 2.
Oct 15 06:49:53 candyland s3fs[311520]: s3fs.cpp:s3fs_check_service(3898): invalid credentials(host=https://s3.amazonaws.com) - result of checking service.

Details about the issue

Using the ~/.aws/credentials file instead of the ~/.passwd-s3fs file produces the same error. The IAM role has full S3 access, and the permissions have been verified with the same profile using the AWS CLI.
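For reference, a minimal ~/.passwd-s3fs setup matching the description above might look like the following sketch. The keys are AWS's documented placeholder examples, not real credentials; s3fs rejects the file unless its permissions are restricted to the owner.

```shell
# Sketch: ~/.passwd-s3fs uses the format ACCESS_KEY_ID:SECRET_ACCESS_KEY.
# The keys below are AWS's published example values (placeholders).
echo 'AKIAIOSFODNN7EXAMPLE:wJalrXUtnFIE/K7MDENG/bPxRfiCYEXAMPLEKEY' > ~/.passwd-s3fs

# s3fs refuses a credentials file readable by group/others.
chmod 600 ~/.passwd-s3fs
```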

No files show up in the desired folder.

I am getting the following error when trying to mount the s3 bucket to a folder in my home directory:

[CRT] s3fs.cpp:set_s3fs_log_level(297): change debug level from [CRT] to [INF] 
[INF]     s3fs.cpp:set_mountpoint_attribute(4400): PROC(uid=1000, gid=1000) - MountPoint(uid=1000, gid=1000, mode=40775)
[INF] s3fs.cpp:s3fs_init(3493): init v1.86(commit:unknown) with GnuTLS(gcrypt)
[INF] s3fs.cpp:s3fs_check_service(3828): check services.
[INF]       curl.cpp:CheckBucket(3413): check a bucket.
[INF]       curl.cpp:prepare_url(4703): URL is https://s3.amazonaws.com/<testbucket>/
[INF]       curl.cpp:prepare_url(4736): URL changed is https://<testbucket>.s3.amazonaws.com/
[INF]       curl.cpp:insertV4Headers(2753): computing signature [GET] [/] [] []
[INF]       curl.cpp:url_to_host(99): url is https://s3.amazonaws.com
[ERR] curl.cpp:RequestPerform(2436): HTTP response code 403, returning EPERM. Body Text: <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>758692A7D0B75B21</RequestId><HostId>rR5KyjpVjOeSwIcNYJKqAH851ZACQ071DM3gY6JSOTlMDa1Q9W4AX+4xD49QFkopGsUtTxbNNJI=</HostId></Error>
[ERR] curl.cpp:CheckBucket(3439): Check bucket failed, S3 response: <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>758692A7D0B75B21</RequestId><HostId>rR5KyjpVjOeSwIcNYJKqAH851ZACQ071DM3gY6JSOTlMDa1Q9W4AX+4xD49QFkopGsUtTxbNNJI=</HostId></Error>
[CRT] s3fs.cpp:s3fs_check_service(3883): Failed to connect by sigv4, so retry to connect by signature version 2.
[INF] curl.cpp:ReturnHandler(318): Pool full: destroy the oldest handler
[INF]       curl.cpp:CheckBucket(3413): check a bucket.
[INF]       curl.cpp:prepare_url(4703): URL is https://s3.amazonaws.com/<testbucket>/
[INF]       curl.cpp:prepare_url(4736): URL changed is https://<testbucket>.s3.amazonaws.com/
[ERR] curl.cpp:RequestPerform(2436): HTTP response code 403, returning EPERM. Body Text: <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>C8DFAB144A328448</RequestId><HostId>79ouhJkZp2dFEdcYnCUkXNRvgZP5Z4D6ZE7a/N0Is/JcGVI57MB1A2AVBTtO653sJdB4/FIEcPQ=</HostId></Error>
[ERR] curl.cpp:CheckBucket(3439): Check bucket failed, S3 response: <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>C8DFAB144A328448</RequestId><HostId>79ouhJkZp2dFEdcYnCUkXNRvgZP5Z4D6ZE7a/N0Is/JcGVI57MB1A2AVBTtO653sJdB4/FIEcPQ=</HostId></Error>
[CRT] s3fs.cpp:s3fs_check_service(3898): invalid credentials(host=https://s3.amazonaws.com) - result of checking service.
[ERR] s3fs.cpp:s3fs_exit_fuseloop(3483): Exiting FUSE event loop due to errors

[INF] s3fs.cpp:s3fs_destroy(3546): destroy
valerius21 (Author) commented

Apparently, you need to wait a few hours so Amazon can update the permissions across their systems.

aamorozov commented

Did you have to do anything else aside from waiting? I am experiencing the exact same issue but it's been the case for at least a day now @valerius21

gaul (Member) commented Jan 14, 2021

A few things could be going on with the original issue -- creating a bucket in one region and recreating it in another might make the DNS entries temporarily stale. Specifying the full URL such as https://s3.us-west-1.amazonaws.com and the endpoint such as us-west-1 should always work, though.
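Following that suggestion, the original mount command would become something like the sketch below. The bucket name, mount point, and region are placeholders; url, endpoint, and profile are real s3fs options.

```shell
# Sketch: pin s3fs to the bucket's actual region instead of relying on the
# global s3.amazonaws.com endpoint (placeholders: mybucket, us-west-1).
s3fs mybucket "${HOME}/s3/mybucket" \
    -o profile=default \
    -o url=https://s3.us-west-1.amazonaws.com \
    -o endpoint=us-west-1 \
    -f -d
```

This cannot run without FUSE and valid AWS credentials, so it is shown here only as a configuration sketch.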

aamorozov commented

I tried with both the url and endpoint options, but still no luck - same result with the iam_role option.
I've created a ticket describing the issue here: #1518

christopherdalton commented Apr 16, 2021

I can confirm I am experiencing the same behaviour, with both the -o endpoint and -o url parameters configured. I am able to connect to buckets that were created prior to the test bucket I am working with. There is clearly an issue in how s3fs passes the endpoint/url.

When interacting with newly created buckets using the AWS CLI / PowerShell / Terraform, passing the region is sufficient.

The requirement for passing the region stems from DNS propagation, not IAM policy permissions.

Please re-open and investigate?
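If DNS propagation is the suspect, one way to check it independently of s3fs is to ask S3 which region the virtual-hosted endpoint currently serves; S3 reports this in the x-amz-bucket-region response header even for unauthenticated requests. The bucket name below is a placeholder.

```shell
# Check: a plain HEAD request against the virtual-hosted-style endpoint
# returns x-amz-bucket-region even on a 403 or 301 response, so a missing
# or wrong value points at stale DNS rather than an IAM problem.
curl -sI https://mybucket.s3.amazonaws.com/ | grep -i x-amz-bucket-region
```

This requires network access to S3, so it is included as a diagnostic sketch rather than a test.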
