archive.extracted not functioning properly #41930

Closed · nomeelnoj opened this issue Jun 24, 2017 · 17 comments
Labels: cannot-reproduce (cannot be replicated with info/context provided)

@nomeelnoj
Description of Issue/Question

I am trying to use archive.extracted to extract a .tgz from the salt master onto a minion (a web server). However, I am getting errors as if Salt thinks I am on a previous version, telling me that tar_options is required, so I have formatted the state for the previous version of Salt. If I try to use "options", "user", "group", or any of the other options the docs say work with 2016.11.5, I get an error that they are not supported.

Additionally, I have a different state that pulls down an archive from S3, and that is functioning properly per the docs for version 2016.11.5.

Setup

#ui.sls

deploy_ui:
  archive.extracted:
    - name: /var/www/html
    - source: salt://dc/ver/bin/cuipub.tgz
    - archive_format: tar
    - tar_options: zxvf                                                                     
    - if_missing: /var/www/html/css

Steps to Reproduce Issue

However, whenever I run this state, the files do not actually extract, and I get this output from salt:

      ID: deploy_ui
Function: archive.extracted
    Name: /var/www/html
  Result: True
 Comment: salt://dc/ver/prod/bin/cuipub.tgz extracted in /var/www/html
 Started: 16:48:05.999371
Duration: 190.144 ms
 Changes:   
          ----------
          directories_created:
              - /var/www/html
              - /var/www/html/css
          extracted_files:
              - tar: You must specify one of the '-Acdtrux', '--delete' or '--test-label' options
              - Try 'tar --help' or 'tar --usage' for more information.

Versions Report

Salt-master:

Salt Version:
           Salt: 2016.11.5
 
Dependency Versions:
           cffi: Not Installed
       cherrypy: unknown
       dateutil: 1.5
      docker-py: Not Installed
          gitdb: Not Installed
      gitpython: Not Installed
          ioflo: Not Installed
         Jinja2: 2.7.2
        libgit2: Not Installed
        libnacl: Not Installed
       M2Crypto: 0.21.1
           Mako: Not Installed
   msgpack-pure: Not Installed
 msgpack-python: 0.4.6
   mysql-python: Not Installed
      pycparser: Not Installed
       pycrypto: 2.6.1
   pycryptodome: 3.4.3
         pygit2: Not Installed
         Python: 2.7.5 (default, Aug  2 2016, 04:20:16)
   python-gnupg: Not Installed
         PyYAML: 3.10
          PyZMQ: 15.3.0
           RAET: Not Installed
          smmap: Not Installed
        timelib: Not Installed
        Tornado: 4.2.1
            ZMQ: 4.1.4
 
System Versions:
           dist: redhat 7.3 Maipo
        machine: x86_64
        release: 3.10.0-514.21.2.el7.x86_64
         system: Linux
        version: Red Hat Enterprise Linux Server 7.3 Maipo

Salt-minion:

Salt Version:
           Salt: 2016.11.5
 
Dependency Versions:
           cffi: Not Installed
       cherrypy: Not Installed
       dateutil: 1.5
      docker-py: Not Installed
          gitdb: Not Installed
      gitpython: Not Installed
          ioflo: Not Installed
         Jinja2: 2.7.2
        libgit2: Not Installed
        libnacl: Not Installed
       M2Crypto: Not Installed
           Mako: 0.9.1
   msgpack-pure: Not Installed
 msgpack-python: 0.4.6
   mysql-python: 1.2.3
      pycparser: Not Installed
       pycrypto: 2.6.1
   pycryptodome: Not Installed
         pygit2: Not Installed
         Python: 2.7.6 (default, Oct 26 2016, 20:30:19)
   python-gnupg: Not Installed
         PyYAML: 3.10
          PyZMQ: 14.0.1
           RAET: Not Installed
          smmap: Not Installed
        timelib: Not Installed
        Tornado: 4.2.1
            ZMQ: 4.0.5
 
System Versions:
           dist: Ubuntu 14.04 trusty
        machine: x86_64
        release: 3.13.0-116-generic
         system: Linux
        version: Ubuntu 14.04 trusty
@nomeelnoj (Author)

Here is another very strange example. I have an orchestration state that calls an SLS to extract an archive from S3. When I run a similar state on the master, it functions properly. However, when I run it on a minion, I get an error that contradicts itself, asking for an option and also saying that option is not available. Here are the files:

/srv/salt/orch/image.sls

deploy_new_image:
  salt.state:
    - tgt: {{ image.get('target') }}
    {% if image.get('target_type') %}
    - tgt_type: {{ image.get('target_type') }}
    {% endif %}
    - sls:
      - dc.ops.salt_states.client_image

/srv/salt/deploy_image.sls

get_new_version:
  archive.extracted:
    - name: /image0
    - source: https://s3-us-west-2.amazonaws.com/<image>
    - skip_verify: True                                                                                       
    #- overwrite: True
    #- enforce_toplevel: False
    - archive_format: tar
    - tar_options: zxvfp

I only added archive_format and tar_options because the previous error message told me I had to. But when I add skip_verify, which is what the error told me I needed to do, I get another error.

Here is the error from salt:

----------
          ID: get_new_version
    Function: archive.extracted
        Name: /image0
      Result: False
     Comment: Unable to verify upstream hash of source file https://s3-us-west-2.amazonaws.com/dc-client-images/image0-nv375-dock1_13-cuda8.tar.gz, please set source_hash or set skip_verify to True
     Started: 05:33:25.364342
    Duration: 134.489 ms
     Changes:   
    Warnings: 'skip_verify' is an invalid keyword argument for
              'archive.extracted'. If you were trying to pass additional data to
              be used in a template context, please populate 'context' with
              'key: value' pairs. Your approach will work until Salt Oxygen is
              out. Please update your state files.

Again, I have a similar state that pulls a file from S3 onto the master, formatted very similarly to the above, and that orch state calls the sls properly, and the file comes down with no problems. I have verified that the version on the target minion is the same as the master, 2016.11.5.

I would really like to deploy the image to the endpoints from S3, rather than have to pull the tar onto the master, copy the entire tar with a file.managed, and then extract it with a cmd.run, but that seems to be my only option at this point since the states are failing on my minions. Since the error contradicts itself, saying I need to set skip_verify but that skip_verify is not an option, I have no idea what to do here.

Any help would be extremely appreciated.

@nomeelnoj (Author)

I have some more information. The issue lies with the operating system. All of our minions are Ubuntu 14.04, but the master is RHEL 7. I set up a test minion using RHEL 7 and the state functioned without a problem.

So, the state breaks on Ubuntu minions. Any way to fix this or get around it? I am next going to try making the salt-master an Ubuntu system to see if that works, but I cannot easily just switch my salt-master, so I need archive.extracted states to work from a RHEL master against Ubuntu minions.

@nomeelnoj (Author)

Final piece of information: I just spun up a salt-master with Ubuntu, and even when controlling Ubuntu minions, the archive.extracted state does not work on Ubuntu minions.

It seems that the archive.extracted state only behaves as advertised on RHEL minions, and the OS of the master does not matter. Can we fix this?

@terminalmage (Contributor)

The distro has nothing to do with how the state behaves; we don't do anything special for different distros.

Can you please set log_level_logfile: debug in the minion config file on the RHEL minion, and restart the salt-minion service? Then, try running the state again and post the debug logging from /var/log/salt/minion.

@terminalmage terminalmage added this to the Blocked milestone Jun 26, 2017
@terminalmage terminalmage added the info-needed waiting for more info label Jun 26, 2017
@terminalmage (Contributor)

Actually, I misread; it's the Ubuntu box you're having trouble with, so we want to set that logging config option on the Ubuntu minion.

Also, you've said that you get errors saying that options aren't supported, but you haven't actually posted those errors. This doesn't help us very much in terms of troubleshooting.

Please post the output from the following two commands:

  1. salt minion-id grains.item saltversion
  2. salt minion-id test.versions_report

Replace minion-id with the Ubuntu minion's ID.

@nomeelnoj (Author)

Okay, here is a full report. For now, let's skip the orchestration state for simplicity. Here is the state file I am running on the minions, with some data redacted. Again, an identical state (but with a different S3 tarball) runs via orchestration on the master with no issues. Unfortunately, I am now seeing the same behavior on both RHEL minions and Ubuntu minions.

/image0:
  archive.extracted:
    - source: https://s3-us-west-2.amazonaws.com/path/to/image.tar.gz
    - skip_verify: True
    - overwrite: True
    - enforce_toplevel: False

Command

$ salt 'ubuntu-minion' state.apply client_image

Output:

ubuntu-minion:
----------
          ID: /image0
    Function: archive.extracted
      Result: False
     Comment: Missing parameter archive_format for state archive.extracted
     Changes:  

So I add archive_format: tar to the state and run again:

$ salt 'ubuntu-minion' state.apply client_image

Output

ubuntu-minion:
----------
          ID: /image0
    Function: archive.extracted
      Result: False
     Comment: tar archive need argument tar_options
     Started: 02:10:48.303502
    Duration: 0.868 ms
     Changes:   
    Warnings: 'enforce_toplevel', 'skip_verify' and 'overwrite' are invalid
              keyword arguments for 'archive.extracted'. If you were trying to
              pass additional data to be used in a template context, please
              populate 'context' with 'key: value' pairs. Your approach will
              work until Salt Oxygen is out. Please update your state files.

So I remove enforce_toplevel, skip_verify, and overwrite, and add tar_options: zxvf

Output

ubuntu-minion:
----------
          ID: /image0
    Function: archive.extracted
      Result: False
     Comment: Unable to verify upstream hash of source file https://s3-us-west-2.amazonaws.com/path/to/image.tar.gz, please set source_hash or set skip_verify to True
     Started: 02:12:24.666772
    Duration: 156.22 ms
     Changes:   

So it is telling me to set skip_verify to True, but when I do that, I get a warning saying it is not a supported option. So I add the source hash, at which point it seems to succeed, but no files are actually unpacked:

Output

ubuntu-minion:
----------
          ID: /image0
    Function: archive.extracted
      Result: True
     Comment: https://s3-us-west-2.amazonaws.com/path/to/image.tar.gz extracted in /image0
     Started: 02:13:22.695982
    Duration: 197669.789 ms
     Changes:   
              ----------
              directories_created:
                  - /image0
              extracted_files:
                  - tar: You must specify one of the '-Acdtrux', '--delete' or '--test-label' options
                  - Try 'tar --help' or 'tar --usage' for more information.

Take my word for it that adding any of the options it suggests (-Acdtrux) simply yields the same error.

Here are the versions:

$ salt 'ubuntu-minion' grains.item saltversion

ubuntu-minion:
    ----------
    saltversion:
        2016.11.5

$ salt 'ubuntu-minion' test.versions_report

ubuntu-minion:
    Salt Version:
               Salt: 2016.11.5
     
    Dependency Versions:
               cffi: Not Installed
           cherrypy: Not Installed
           dateutil: 1.5
          docker-py: Not Installed
              gitdb: Not Installed
          gitpython: Not Installed
              ioflo: Not Installed
             Jinja2: 2.7.2
            libgit2: Not Installed
            libnacl: Not Installed
           M2Crypto: Not Installed
               Mako: 0.9.1
       msgpack-pure: Not Installed
     msgpack-python: 0.4.6
       mysql-python: 1.2.3
          pycparser: Not Installed
           pycrypto: 2.6.1
       pycryptodome: Not Installed
             pygit2: Not Installed
             Python: 2.7.6 (default, Oct 26 2016, 20:30:19)
       python-gnupg: Not Installed
             PyYAML: 3.10
              PyZMQ: 14.0.1
               RAET: Not Installed
              smmap: Not Installed
            timelib: Not Installed
            Tornado: 4.2.1
                ZMQ: 4.0.5
     
    System Versions:
               dist: Ubuntu 14.04 trusty
            machine: x86_64
            release: 3.13.0-121-generic
             system: Linux
            version: Ubuntu 14.04 trusty

# salt 'rhel-minion' test.versions_report

rhel-minion:
    Salt Version:
               Salt: 2016.11.5
     
    Dependency Versions:
               cffi: Not Installed
           cherrypy: Not Installed
           dateutil: 1.5
          docker-py: Not Installed
              gitdb: Not Installed
          gitpython: Not Installed
              ioflo: Not Installed
             Jinja2: 2.7.2
            libgit2: Not Installed
            libnacl: Not Installed
           M2Crypto: 0.21.1
               Mako: Not Installed
       msgpack-pure: Not Installed
     msgpack-python: 0.4.8
       mysql-python: Not Installed
          pycparser: Not Installed
           pycrypto: Not Installed
       pycryptodome: 3.4.3
             pygit2: Not Installed
             Python: 2.7.5 (default, Aug  2 2016, 04:20:16)
       python-gnupg: Not Installed
             PyYAML: 3.11
              PyZMQ: 15.3.0
               RAET: Not Installed
              smmap: Not Installed
            timelib: Not Installed
            Tornado: 4.2.1
                ZMQ: 4.1.4
     
    System Versions:
               dist: redhat 7.3 Maipo
            machine: x86_64
            release: 3.10.0-514.21.2.el7.x86_64
             system: Linux
            version: Red Hat Enterprise Linux Server 7.3 Maipo

@terminalmage (Contributor)

I'm sorry, but I can't reproduce any of this. There are also a couple of weird things with the information you posted. First of all, the comment field tar archive need argument tar_options doesn't ring a bell, and doesn't exist in Salt:

% git grep 'need argument' v2016.11.5 | wc -l
0

Secondly, all of the options you used in your SLS are valid in 2016.11.5. There is no way that 2016.11.5 generates those errors. How did you install Salt on these minions?

Also, you never posted the debug logging I asked for. Please do so.

@nomeelnoj (Author)

So I thought maybe the issue was something with S3, so to remove a layer of complexity I created a very small tar.gz on the salt master, at salt://test1.tar.gz. The new state file is:

image0:
  archive.extracted:
    - name: /opt/ver/ver1
    - source: salt://test1.tar.gz
    - source_hash: md5=HASH
    - archive_format: tar
    - tar_options: z

I get the same error, that I must specify -Acdtrux... Based on the log file, the tar command that Salt is running is failing, exiting with return code 2. See below.

Here is the relevant /var/log/salt/minion file snippet:

2017-06-27 03:36:10,356 [salt.fileclient  ][DEBUG   ][11055] In saltenv 'base', ** considering ** path '/var/cache/salt/minion/files/base/test1.tar.gz' to resolve 'salt://test1.tar.gz'
2017-06-27 03:36:10,357 [salt.state       ][INFO    ][11055] File changed:
New file
2017-06-27 03:36:10,357 [salt.state       ][INFO    ][11055] Completed state [/var/cache/salt/minion/_opt_ver_ver1.tar] at time 03:36:10.357796 duration_in_ms=17.278
2017-06-27 03:36:10,358 [salt.state       ][DEBUG   ][11055] File /var/cache/salt/minion/accumulator/139775296989776 does not exist, no need to cleanup.
2017-06-27 03:36:10,358 [salt.loaded.ext.states.archive][DEBUG   ][11055] file.managed: {'file_|-/var/cache/salt/minion/_opt_ver_ver1.tar_|-/var/cache/salt/minion/_opt_ver_ver1.tar_|-managed': {'comment': 'File /var/cache/salt/minion/_opt_ver_ver1.tar updated', 'pchanges': {}, 'name': '/var/cache/salt/minion/_opt_ver_ver1.tar', 'start_time': '03:36:10.340518', 'result': True, 'duration': 17.278, '__run_num__': 0, 'changes': {'diff': 'New file', 'mode': '0644'}, '__id__': '/var/cache/salt/minion/_opt_ver_ver1.tar'}}
2017-06-27 03:36:10,358 [salt.loaded.int.module.file][DEBUG   ][11055] Directory '/opt/ver' already exists
2017-06-27 03:36:10,358 [salt.loaded.ext.states.archive][DEBUG   ][11055] Untar /var/cache/salt/minion/_opt_ver_ver1.tar in /opt/ver/ver1
2017-06-27 03:36:10,383 [salt.utils.lazy  ][DEBUG   ][11055] LazyLoaded archive.tar
2017-06-27 03:36:10,385 [salt.utils.lazy  ][DEBUG   ][11055] LazyLoaded cmd.run
2017-06-27 03:36:10,385 [salt.loaded.int.module.cmdmod][INFO    ][11055] Executing command ['tar', '-C', '/opt/ver/ver1', 'xvzf', '/var/cache/salt/minion/_opt_ver_ver1.tar'] in directory '/root'
2017-06-27 03:36:10,389 [salt.loaded.int.module.cmdmod][ERROR   ][11055] Command '['tar', '-C', '/opt/ver/ver1', 'xvzf', '/var/cache/salt/minion/_opt_ver_ver1.tar']' failed with return code: 2
2017-06-27 03:36:10,389 [salt.loaded.int.module.cmdmod][ERROR   ][11055] output: tar: You must specify one of the '-Acdtrux', '--delete' or '--test-label' options
Try 'tar --help' or 'tar --usage' for more information.
2017-06-27 03:36:10,389 [salt.state       ][INFO    ][11055] {'extracted_files': ["tar: You must specify one of the '-Acdtrux', '--delete' or '--test-label' options", "Try 'tar --help' or 'tar --usage' for more information."], 'directories_created': ['/opt/ver/ver1']}
2017-06-27 03:36:10,390 [salt.state       ][INFO    ][11055] Completed state [/opt/ver/ver1] at time 03:36:10.390145 duration_in_ms=197.065
2017-06-27 03:36:10,390 [salt.state       ][DEBUG   ][11055] File /var/cache/salt/minion/accumulator/139775302685840 does not exist, no need to cleanup.
2017-06-27 03:36:10,391 [salt.minion      ][DEBUG   ][11055] Minion return retry timer set to 7 seconds (randomized)
2017-06-27 03:36:10,391 [salt.minion      ][INFO    ][11055] Returning information for job: 20170626233607223405
2017-06-27 03:36:10,391 [salt.transport.zeromq][DEBUG   ][11055] Initializing new AsyncZeroMQReqChannel for ('/etc/salt/pki/minion', 'ubuntu-minion', 'tcp://MASTER_IP:4506', 'aes')
2017-06-27 03:36:10,392 [salt.crypt       ][DEBUG   ][11055] Initializing new AsyncAuth for ('/etc/salt/pki/minion', 'ubuntu-minion', 'tcp://MASTER_IP:4506')
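For what it's worth, the failing command in the log above (tar invoked as `tar -C /opt/ver/ver1 xvzf <archive>`) matches a known quirk of GNU tar: old-style bundled options such as xvzf are only recognized when they are tar's first argument; anywhere else they are read as file operands, and with no operation given tar exits 2 with exactly the "-Acdtrux" message seen here. A minimal, self-contained reproduction with a throwaway archive (no Salt involved; assumes GNU tar):

```shell
set -e
work=$(mktemp -d)
cd "$work"
mkdir src out
echo hello > src/file.txt
tar -czf test.tgz -C src file.txt

# Same shape as the command in the log: "xvzf" is NOT the first
# argument, so tar treats it as an operand, sees no -Acdtrux
# operation, and fails.
tar -C out xvzf test.tgz 2>err.txt || echo "tar exit code: $?"
cat err.txt

# Bundled options first (or dash-prefixed anywhere) work as expected:
tar xvzf test.tgz -C out
test -f out/file.txt && echo "extracted OK"
```

This is why the state "succeeds" while leaving nothing extracted: the state module captured tar's stderr into extracted_files instead of treating the nonzero exit as a failure.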

@nomeelnoj (Author)

One other piece of information: this master is a syndic of another master and is also running salt-api. I am going to spin up a brand new RHEL master and Ubuntu minion in AWS and test there. I will report back shortly.

I created these minions using salt-cloud.

@nomeelnoj (Author)

nomeelnoj commented Jun 27, 2017

Okay, you are onto something with how the minions were installed.

I spun up a new salt-master and minion in AWS by hand, using the instructions at repo.saltstack.com to install the packages. The state worked perfectly with no errors, using the arguments defined in the docs for 2016.11.5.

However, the minions that were spun up using salt-cloud, regardless of OS, exhibit the behavior above.

I tried the following commands on the ubuntu minion:

sudo apt-get --purge remove salt-minion salt-common

added deb http://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest trusty main
to /etc/apt/sources.list.d/saltstack.list

sudo apt-get update
sudo apt-get install salt-minion

Same error.

However, if I spin up a blank node in AWS and configure it by hand, I get the desired result.

Any idea what could be going on with nodes spun up using salt-cloud?

I am using salt-api to call an orchestration state to spin up the nodes. This workflow was configured by SaltStack Professional Services at an engagement we had two weeks ago, so I know it is sound.

Please let me know if you would like any of the configuration files or log files from salt-cloud.

@nomeelnoj (Author)

I tried everything I can think of to fully remove the salt-minion package on the node spun up by salt-cloud that is reporting the error, including attempting to also remove all of its dependencies. Then I rebooted and attempted to reinstall Salt manually. However, after the manual install, I am still getting the error message.

So, it seems that the error message shows up on a node spun up by salt-cloud no matter what, even if salt-minion is fully removed and then added back in; at least in my testing this is the case.

A node spun up manually in the AWS console and then salt installed manually functions as advertised.

I also tried setting up a brand new minion in AWS and installing salt-minion using the bootstrap script, since this is how salt-cloud installs Salt onto a node. Installing Salt manually via the bootstrap script yields the desired behavior when running the state with the archive.extracted declaration, and the bootstrap script that I downloaded is IDENTICAL to the one in /etc/salt/cloud.deploy.d/bootstrap-salt.sh.

I am truly stumped.

@nomeelnoj (Author)

@terminalmage were you able to replicate the behavior reported using minions created with salt-cloud?

@terminalmage (Contributor)

No, I've been working on getting the release candidate for 2017.7.0 ready and haven't had time to come back to this. I will try to do so over the next day or so.

@terminalmage (Contributor)

This log message:

2017-06-27 03:36:10,358 [salt.loaded.ext.states.archive][DEBUG   ][11055] Untar /var/cache/salt/minion/_opt_ver_ver1.tar in /opt/ver/ver1

doesn't exist after the first RC for 2015.8.0:

% for tag in `git tag --list`; do git checkout -q --force $tag 2>/dev/null; fgrep Untar states/archive.py 2>/dev/null | fgrep -q debug && echo $tag; done; git checkout -q --force 2017.7
v0.8.1
v0.8.10
v0.8.11
v0.8.2
v0.8.3
v0.8.4
v0.8.5
v0.8.6
v2014.1
v2014.1.0
v2014.1.0rc1
v2014.1.0rc2
v2014.1.0rc3
v2014.1.1
v2014.1.10
v2014.1.11
v2014.1.12
v2014.1.13
v2014.1.14
v2014.1.2
v2014.1.3
v2014.1.4
v2014.1.5
v2014.1.6
v2014.1.7
v2014.1.8
v2014.1.9
v2014.7
v2014.7.0
v2014.7.0rc1
v2014.7.0rc2
v2014.7.0rc3
v2014.7.0rc4
v2014.7.0rc5
v2014.7.0rc6
v2014.7.0rc7
v2014.7.1
v2014.7.2
v2014.7.3
v2014.7.4
v2014.7.5
v2014.7.6
v2014.7.7
v2014.7.8
v2014.7.9
v2015.2
v2015.2.0rc1
v2015.2.0rc2
v2015.2.0rc3
v2015.5
v2015.5.0
v2015.5.1
v2015.5.2
v2015.5.3
v2015.8
v2015.8.0rc1

It was removed in 1726057 nearly two years ago.

I think you have something else weird with your installation.

@terminalmage terminalmage self-assigned this Jun 29, 2017
@terminalmage terminalmage added cannot-reproduce cannot be replicated with info/context provided and removed info-needed waiting for more info labels Jun 29, 2017
@nomeelnoj (Author)

I think you are right. I just used a different salt-master to run the same state on a different minion and I did not get the error. I am not sure what is going on here, but tomorrow I am going to rebuild the salt-master from scratch and see if the issue goes away. Sorry for the trouble.

@nomeelnoj (Author)

I rebuilt the salt master from scratch, and now it works. I am not sure what the issue was, because I used the same setup script I always use to set up a salt master and followed all of the same steps I did with the initial one. Really not sure what went wrong, but it is now working. Sorry for all the trouble, and thanks for helping me get to the bottom of this one.

@terminalmage (Contributor)

@nomeelnoj No problem! One thing to look for is stale .pyc files, ones from an earlier installation which were not removed. This is less common when installing salt from repository packages, as the .pyc files are owned by the package manager and thus are removed when you upgrade or uninstall, but they can get left behind if you're installing from source and are not careful about removing them.

I'd also recommend checking your Python path (i.e. python -c 'import sys; print(sys.path)') for a .pth file for salt (i.e. salt-<something>.pth). If you see one, check its contents and see if the directory listed there contains an installation of salt. It's possible that you had an old copy of salt installed via salt-bootstrap in this fashion, especially if you've ever installed from a branch/tag using salt-bootstrap.
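To illustrate the .pth point above: a .pth file in a site directory silently adds its listed path to sys.path, so an old source install of salt can linger alongside (or shadow) the packaged one. A self-contained sketch; the salt-stale.pth name and paths are made up for the demo:

```shell
# Demo only: fabricate a stale "salt" package reachable via a .pth file.
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/stale/salt"
echo 'VERSION = "old-source-install"' > "$tmp/stale/salt/__init__.py"
# A .pth file lists directory names, one per line; site processing
# appends each existing directory to sys.path.
echo "$tmp/stale" > "$tmp/salt-stale.pth"

# site.addsitedir() processes .pth files the same way site-packages
# is processed at interpreter startup.
python3 - "$tmp" <<'EOF'
import site, sys
site.addsitedir(sys.argv[1])
print([p for p in sys.path if p.endswith('/stale')])
# With no packaged salt installed, import now resolves to the stale copy:
import salt
print(salt.__file__)
EOF
```

On a real box you would instead inspect the directories printed by python -c 'import sys; print(sys.path)' for unexpected salt entries.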
