
[BUG] archive.extracted doesn't use pillar data for s3.key and s3.keyid #60501

Open

lmf-mx opened this issue Jul 7, 2021 · 2 comments

Labels: Bug (broken, incorrect, or confusing behavior), severity-medium (3rd level, incorrect or bad functionality, confusing and lacks a workaround)

lmf-mx (Contributor) commented Jul 7, 2021

Description
This is the same behavior as #13850, except that the caching is run from archive.extracted. If s3.key and s3.keyid are provided only as pillar data and are not set in the minion config files, an attempt to grab IAM roles is made, followed by an exception.

Could not fetch from s3://<path>/file.tar. Exception: Failed to get file. InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records.

If the same values that are assigned to the pillars from the master are directly added as local config on the minion, the caching succeeds.
#28630 contained a fix for some other modules in 4d38687.

Running modules on the minion using the pillar data works, for example s3.head <bucket> <file.tar>.
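
For reference, a minimal sketch of the local minion config values that make the caching succeed (the file location below is just an example, and the key values are placeholders):

# e.g. /etc/salt/minion.d/s3.conf (example location)
s3.keyid: <keyid>
s3.key: <key>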

Setup

unpack_cb_installer:
  archive.extracted:
    - name: /tmp/cbinstaller
    - source: s3://<path>/file.tar
    - source_hash: <hash>
    - unless:
      - fun: pkg.version
        args:
          - <pkgname>
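
For completeness, a sketch of the pillar data assigned from the master for this state, using the flat key form described in the s3 module docs (values are placeholders):

s3.keyid: <keyid>
s3.key: <key>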

Steps to Reproduce the behavior
Running salt 'minion' state.sls_id unpack_cb_installer cb produces:

----------
          ID: unpack_cb_installer
    Function: archive.extracted
        Name: /tmp/cbinstaller
      Result: False
     Comment: Could not fetch from s3://<path>/file.tar. Exception: Failed to get file. InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records.
     Started: 12:42:18.517717
    Duration: 8701.662 ms
     Changes:
----------

Expected behavior
Pillar data provided by the master should be used for archive.extracted when using s3 as a source.

Versions Report

minion# salt-call --versions-report
Salt Version:
          Salt: 3003

Dependency Versions:
          cffi: Not Installed
      cherrypy: Not Installed
      dateutil: Not Installed
     docker-py: Not Installed
         gitdb: Not Installed
     gitpython: Not Installed
        Jinja2: 2.11.1
       libgit2: Not Installed
      M2Crypto: 0.35.2
          Mako: Not Installed
       msgpack: 0.6.2
  msgpack-pure: Not Installed
  mysql-python: Not Installed
     pycparser: Not Installed
      pycrypto: Not Installed
  pycryptodome: Not Installed
        pygit2: Not Installed
        Python: 3.6.8 (default, Nov 16 2020, 16:55:22)
  python-gnupg: Not Installed
        PyYAML: 3.13
         PyZMQ: 17.0.0
         smmap: Not Installed
       timelib: Not Installed
       Tornado: 4.5.3
           ZMQ: 4.1.4

System Versions:
          dist: centos 7 Core
        locale: UTF-8
       machine: x86_64
       release: 3.10.0-1160.25.1.el7.x86_64
        system: Linux
       version: CentOS Linux 7 Core


master# salt-call --versions-report
Salt Version:
          Salt: 3003

Dependency Versions:
          cffi: 1.14.5
      cherrypy: 8.9.1
      dateutil: 2.7.3
     docker-py: Not Installed
         gitdb: 2.0.6
     gitpython: 3.0.7
        Jinja2: 2.10.1
       libgit2: 0.28.4
      M2Crypto: 0.31.0
          Mako: Not Installed
       msgpack: 0.6.2
  msgpack-pure: Not Installed
  mysql-python: Not Installed
     pycparser: 2.20
      pycrypto: Not Installed
  pycryptodome: 3.6.1
        pygit2: 0.28.2
        Python: 3.8.10 (default, Jun  2 2021, 10:49:15)
  python-gnupg: 0.4.5
        PyYAML: 5.3.1
         PyZMQ: 18.1.1
         smmap: 2.0.5
       timelib: Not Installed
       Tornado: 4.5.3
           ZMQ: 4.3.2

System Versions:
          dist: ubuntu 20.04 focal
        locale: utf-8
       machine: x86_64
       release: 5.4.0-67-generic
        system: Linux
       version: Ubuntu 20.04 focal

lmf-mx added the Bug and needs-triage labels Jul 7, 2021
OrangeDog added the severity-medium label and removed the needs-triage label Jul 9, 2021
OrangeDog added this to the Approved milestone Jul 9, 2021
lmf-mx (Contributor, Author) commented Jul 12, 2021

I tried using file.managed as a workaround. After expanding past my initial test minion, I found that I must have had cached data, and pulling the key/keyid from pillar for file.managed has the same issue.
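
For reference, a sketch of the file.managed workaround that was attempted (the state ID is an illustrative placeholder):

fetch_cb_installer:
  file.managed:
    - name: /tmp/file.tar
    - source: s3://<path>/file.tar
    - source_hash: <hash>

This exhibits the same failure when the key/keyid exist only in pillar.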

keslerm (Contributor) commented Jul 28, 2022

There seems to be a difference in how s3 credentials in the pillar are handled when using the s3 module vs. using an s3:// source in a file.managed or similar state.

The file.managed documentation suggests reading the s3 module docs for configuration, and those docs say the pillar key should be the literal s3.keyid.

But fileclient.py actually changes this behavior and looks up the nested pillar key s3:keyid.

So while setting s3.keyid in the pillar will work when you use s3.get or similar, it will fail for the file.managed states.

I'm not sure if this should be a documentation change or if the fileclient.py behavior itself is wrong.

The discrepancy is here https://github.com/keslerm/salt/blob/master/salt/fileclient.py#L559-L564
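
For illustration, the two pillar layouts under discussion might look like this (values are placeholders):

# Flat key, as documented for the s3 execution module; works for s3.get / s3.head:
s3.keyid: <keyid>
s3.key: <key>

# Nested key, which fileclient.py looks up for s3:// sources in states like file.managed and archive.extracted:
s3:
  keyid: <keyid>
  key: <key>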
