
s3fs: AuthorizationHeaderMalformed: The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'us-west-2' #39219

Closed
esn89 opened this issue Feb 7, 2017 · 14 comments

@esn89

esn89 commented Feb 7, 2017

Description of Issue/Question

As per the title, I can't seem to check the contents of my s3 bucket.
The interesting thing is that it returns the following:

one:
ERROR: Failed s3 operation. AuthorizationHeaderMalformed: The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'us-west-2'

But nowhere in my /etc/salt/master do I specify us-east-1.

What is even more interesting is that if I sudo ls /var/cache/salt/master/s3cache/salttest3
I can see the files that have been downloaded. So somewhere along the line, the file transfer worked, but I cannot use these files in my saltstack formulas as I keep getting that error.

I have tried it using the aws cli tool and I can indeed download the file.

Setup

(Please provide relevant configs and/or SLS files (Be sure to remove sensitive info).)
The relevant configs are in my /etc/salt/master

s3.location: us-west-1
s3.service_url: s3-us-west-1.amazonaws.com
s3.verify_ssl: False
s3.buckets:
    - salttest3

Steps to Reproduce Issue

(Include debug logs if possible and relevant.)

By using this command:
sudo salt 'one' s3.get salttest3

Here's the snippet of the debug logs:
https://paste.debian.net/913320/

Versions Report

(Provided by running salt --versions-report. Please also mention any differences in master/minion versions.)

Salt Version:
           Salt: 2016.11.1

Dependency Versions:
           cffi: Not Installed
       cherrypy: 3.5.0
       dateutil: 2.5.3
          gitdb: 2.0.0
      gitpython: 2.1.0
          ioflo: Not Installed
         Jinja2: 2.8
        libgit2: 0.24.5
        libnacl: Not Installed
       M2Crypto: Not Installed
           Mako: 1.0.6
   msgpack-pure: Not Installed
 msgpack-python: 0.4.8
   mysql-python: 1.3.7
      pycparser: Not Installed
       pycrypto: 2.6.1
         pygit2: 0.24.2
         Python: 2.7.12+ (default, Sep  1 2016, 20:27:38)
   python-gnupg: Not Installed
         PyYAML: 3.11
          PyZMQ: 16.0.2
           RAET: Not Installed
          smmap: 2.0.1
        timelib: Not Installed
        Tornado: 4.4.2
            ZMQ: 4.2.1

System Versions:
           dist: debian 9.0
        machine: x86_64
        release: 4.7.0-1-amd64
         system: Linux
        version: debian 9.0

@gtmanfred gtmanfred added the info-needed waiting for more info label Feb 7, 2017
@esn89
Author

esn89 commented Feb 8, 2017

@gtmanfred

Hi there, I see that you have added the "Info Needed" label; let me know what other info you need me to provide to make this easier for you guys.

@gtmanfred gtmanfred removed the info-needed waiting for more info label Feb 8, 2017
@Ch3LL
Contributor

Ch3LL commented Feb 8, 2017

@esn89 you state that you have the s3 settings in your master config, but when using an execution module such as s3.get, those settings need to be in the minion config or pillar, as documented here.

update: does adding the configuration to the minion resolve the issue?
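
For illustration, a minimal sketch of what those minion-side settings might look like, assuming the same bucket and region as the original report (the key values are placeholders, not taken from the issue):

# /etc/salt/minion -- placeholder values, adjust to your environment
s3.keyid: <AWS access key id>
s3.key: <AWS secret key>
s3.location: us-west-2
s3.service_url: s3-us-west-2.amazonaws.com
s3.buckets:
  - salttest3

The salt-minion service would need to be restarted for changes to /etc/salt/minion to take effect.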

@Ch3LL Ch3LL added the info-needed waiting for more info label Feb 8, 2017
@Ch3LL Ch3LL added this to the Blocked milestone Feb 8, 2017
@Ch3LL
Contributor

Ch3LL commented Feb 8, 2017

@esn89 one thing to note is that you can set pillar_opts to true if you want the minion to be able to grab these configuration options from the master. But be aware that if you do so, everything in the master config is passed to the minion, including passwords, which is why this is turned off by default.
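
As a rough sketch of that approach, using the option names already shown in this thread (the key values are placeholders):

# /etc/salt/master -- placeholder values
pillar_opts: True
s3.keyid: <AWS access key id>
s3.key: <AWS secret key>
s3.location: us-west-2

After restarting salt-master, the master options should show up in the minion's pillar data with something like:

sudo salt 'one' saltutil.refresh_pillar
sudo salt 'one' pillar.data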

@esn89 esn89 closed this as completed Feb 9, 2017
@esn89 esn89 reopened this Feb 9, 2017
@esn89
Author

esn89 commented Feb 9, 2017

Hi @Ch3LL

Unfortunately, adding pillar_opts to the master, then restarting salt-master, then running saltutil.refresh_pillar did not help.

The good news is that if I do a pillar.data on that specified minion, I do see my s3.key and keyid being printed out.

The error I do get from:
sudo salt 'one' s3.get test-salttest3
is:

one: ERROR: Failed s3 operation. NoSuchKey: The specified key does not exist.

Also with this command, which has a state that gets files from s3:
sudo salt --pillar 'super_app_role:dev' state.apply


ID: s3-to-home
    Function: file.managed
        Name: /usr/local/me/superapp.tar.gz
      Result: False
     Comment: Failed to cache s3://test-salttest3/superapp-app-v6.4.tar.gz: Could not fetch from s3://test-salttest3/super-app-v6.4.tar.gz. Exception: Failed to get file. AuthorizationHeaderMalformed: The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'us-west-2'
     Started: 15:40:37.233982
    Duration: 6856.147 ms
     Changes:   

@Ch3LL
Contributor

Ch3LL commented Feb 10, 2017

@esn89 to help track this down: if you add it to your minion config, does it work? I just want to see whether I should track down why pillar_opts isn't working, or why s3 isn't. Thanks.

@esn89
Author

esn89 commented Feb 13, 2017

@Ch3LL

Spending more time playing around with my config, it turns out neither of them is the issue.

By a stroke of luck, I provided the bare minimum to my /etc/salt/master:

I went from this:

pillar_opts: True
s3.key: <censored>
s3.keyid: <censored>
s3.location: us-west-2
s3.service_url: s3-us-west-2.amazonaws.com
s3.verify_ssl: False
s3.buckets:
  - test-salttest3

to just:

pillar_opts: True
s3.key: <censored>
s3.keyid: <censored>
s3.location: us-west-2

on my master, and it worked! I even deleted all the s3 configs from my minion.

Sorry for the false alarm.

I've a slight feeling that those other config fields are for the s3 salt modules.

@esn89 esn89 closed this as completed Feb 13, 2017
@esn89 esn89 reopened this Mar 8, 2017
@esn89
Author

esn89 commented Mar 8, 2017

Hey guys,

I just wanted to reopen this topic. As it turns out, last time I was using a cached version of this in /var/cache/salt/master/s3cache. Could you guys please take a look, or perhaps give me some pointers?

That would be really appreciated. By the way, my config is the same as last time and nothing has really changed, to be honest, and the error is the same. Only this time, the cache was deleted and cleared by me.

Thanks in advance!

@Ch3LL
Contributor

Ch3LL commented Mar 8, 2017

What if you add these options to the minion config? I would like to rule out an issue with pillar_opts.

And you list two configs in your previous comment. Which is the current config?

@esn89
Author

esn89 commented Mar 9, 2017

After doing some more careful testing:

Scenario 1:
master:

  • pillar_opts: True
  • s3.keyid: 123abc
  • s3.key: 123abc
  • s3.location: us-west-1

minion:
no settings set

This does not work; I am not able to get the s3 packages. So I can confirm that pillar_opts does not work.


Scenario 2:
master:

  • pillar_opts: False
  • s3.keyid: 123abc
  • s3.key: 123abc
  • s3.location: us-west-1

minion:
no settings set

This does not work either. I was hoping the master could be the one to fetch the code from s3 and then pass the tarball on to the minions for extracting.


Scenario 3:
master:

  • pillar_opts: False
  • s3.keyid: NOT SET
  • s3.key: NOT SET
  • s3.location: NOT SET

minion:

  • s3.keyid: 123abc
  • s3.key: 123abc
  • s3.location: us-west-1

This works! Doing things minion-side is successful.
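
For reference, a rough sketch of the Scenario 3 minion config that worked, using the placeholder values from the scenarios above:

# /etc/salt/minion -- placeholder values from Scenario 3
s3.keyid: 123abc
s3.key: 123abc
s3.location: us-west-1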


And I think what happened last time was: I did it successfully minion-side, then went back to the old master-side configs with pillar_opts: True.

Then, since I had a cached version of the tarball from s3 from earlier, all my subsequent state.apply runs appeared to work seamlessly.
This time I smartened up and uploaded a different version of the code, and that is when I saw it fail.

Let me know if you want more info!

@ashwinrs

We are using saltstack at @Mobcrush and we ran into the same issue @esn89 has. The only difference is that our s3 configs are in pillars rather than the minion config. Please let me know if you need any more info.
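
For context, a minimal sketch of what pillar-side s3 settings might look like; the file paths and values here are hypothetical, not taken from this report:

# /srv/pillar/top.sls (hypothetical)
base:
  '*':
    - s3

# /srv/pillar/s3.sls (hypothetical; placeholder values)
s3.keyid: <AWS access key id>
s3.key: <AWS secret key>
s3.location: us-west-2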

Our salt version is -

Salt Version:
           Salt: 2016.11.5
 
Dependency Versions:
           cffi: Not Installed
       cherrypy: Not Installed
       dateutil: 2.4.2
      docker-py: Not Installed
          gitdb: Not Installed
      gitpython: Not Installed
          ioflo: Not Installed
         Jinja2: 2.8
        libgit2: Not Installed
        libnacl: Not Installed
       M2Crypto: Not Installed
           Mako: 1.0.3
   msgpack-pure: Not Installed
 msgpack-python: 0.4.6
   mysql-python: 1.3.7
      pycparser: Not Installed
       pycrypto: 2.6.1
   pycryptodome: Not Installed
         pygit2: Not Installed
         Python: 2.7.12 (default, Nov 19 2016, 06:48:10)
   python-gnupg: Not Installed
         PyYAML: 3.11
          PyZMQ: 15.2.0
           RAET: Not Installed
          smmap: Not Installed
        timelib: Not Installed
        Tornado: 4.2.1
            ZMQ: 4.1.4
 
System Versions:
           dist: Ubuntu 16.04 xenial
        machine: x86_64
        release: 4.4.0-72-generic
         system: Linux
        version: Ubuntu 16.04 xenial

@Ch3LL
Contributor

Ch3LL commented Jun 21, 2017

I'm having a hard time replicating this. If I add the following to the master config:

pillar_opts: True
s3.keyid: 123abc
s3.key: 123abc
s3.location: us-west-1

or this

pillar_opts: True

with the following pillar data:

s3.keyid: 123abc_pillar
s3.key: 123abc_pillar
s3.location: us-west-1_pillar

And if I add the following debug messages:

diff --git a/salt/modules/s3.py b/salt/modules/s3.py
index cb5b9c6edb..152722d02b 100644
--- a/salt/modules/s3.py
+++ b/salt/modules/s3.py
@@ -169,6 +169,8 @@ def get(bucket='', path='', return_bin=False, action=None,
         location,
         role_arn,
     )
+    log.warn('key: {0}, keyid: {1}, location: {2}'.format(key, keyid,
+                                                          location))
 
     return __utils__['s3.query'](method='GET',
                                  bucket=bucket,

When I run with the pillar data only, I see the correct data:

[root@e711217a8575 /]# salt-call s3.get salttest
[WARNING ] key: 123abc_pillar, keyid: 123abc_pillar, location: us-west-1_pillar

And when running with only the master configuration:

[root@e711217a8575 /]# salt-call s3.get salttest
[WARNING ] key: 123abc, keyid: 123abc, location: us-west-1

I am also on version 2016.11.5

Is there anything you can see from my test case that is different from yours? I can't seem to replicate either of your issues.

@Ch3LL Ch3LL added the cannot-reproduce cannot be replicated with info/context provided label Jun 21, 2017
@trexx

trexx commented Jul 7, 2017

I am running into the same errors.
I am running a minion in ap-south-1 (Mumbai) attempting to access a bucket in eu-west-1 (Ireland).

I think what may be happening is that, as my minion is in ap-south-1, Saltstack is applying this region automatically (probably via the EC2 Metadata Service?), and the bug seems to be that we're unable to override Saltstack's decision from the master.
I've attempted to set s3.service_url (I originally assumed it may be because the default service_url would always redirect us to ap-south-1, since it's the closest s3 endpoint) and s3.location in the pillar, to no avail.

Setting s3.location to eu-west-1 on the minion via /etc/salt/minion works.
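
In other words, a minimal sketch of the minion-side override that worked in this case (region taken from the comment above):

# /etc/salt/minion -- override the auto-detected region
s3.location: eu-west-1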

@praveenchaudhary0602

I am also facing a similar issue. My bucket is in us-east-1 (it prints the same and is visible in the console). I have tried using
s3.setRegion(com.amazonaws.regions.Region.getRegion(Regions.US_EAST_1));

and also created a bucket with a unique name, but it doesn't work. Let me know if there is any other way I should try.

@stale

stale bot commented Feb 23, 2019

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

If this issue is closed prematurely, please leave a comment and we will gladly reopen the issue.

@stale stale bot added the stale label Feb 23, 2019
@stale stale bot closed this as completed Mar 2, 2019