archive.extracted not functioning properly #41930
Here is another very strange example. I have an orchestration state that calls an SLS to extract an archive from S3. When I run a similar state on the master, it functions properly. However, when I run it on a minion, I get an error that contradicts itself, asking for an option and then saying that option is not available. Here are the files:

/srv/salt/orch/image.sls

/srv/salt/deploy_image.sls
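The contents of those two files were lost from this thread. Purely for orientation, here is a minimal sketch of the shape such a pair usually takes; the state IDs, target, bucket, paths, and hash below are all hypothetical:

```yaml
# /srv/salt/orch/image.sls — orchestration sketch (hypothetical IDs/target)
deploy_image_to_minions:
  salt.state:
    - tgt: 'image-minion*'
    - sls:
      - deploy_image

# /srv/salt/deploy_image.sls — extraction sketch (hypothetical bucket/paths)
extract_image:
  archive.extracted:
    - name: /opt/image
    - source: s3://my-bucket/image.tar.gz
    - source_hash: sha256=<hash of the tarball>
    - archive_format: tar
```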
I only added `archive_format` and `tar_options` because the previous error message told me I had to. But when I add `skip_verify`, which is what the error told me I needed to do, I get another error. Here is the error from Salt:
Again, I have a similar state that pulls a file from S3 onto the master, formatted very similarly to the above, and that orch state calls the SLS properly; the file comes down with no problems. I have verified that the version on the target minion is the same as the master, 2016.11.5. I would really like to deploy the image to the endpoints from S3, rather than having to pull the tar onto the master, copy the entire tar with a file.managed, and then extract it with a cmd.run, but that seems to be my only option at this point since the states are failing on my minions. Since the error contradicts itself, saying I need skip_verify but that skip_verify is not an option, I have no idea what to do here. Any help would be extremely appreciated.
I have some more information. The issue lies with the operating system. All of our minions are Ubuntu 14.04, but the master is RHEL 7. I set up a test minion using RHEL 7 and the state functioned without a problem, so the state breaks on Ubuntu minions. Is there any way to fix this or get around it? I am next going to try making the salt-master an Ubuntu system and see if this works, but I cannot easily just switch my salt-master, so I need a solution for running archive.extracted states from a RHEL master against Ubuntu minions.
Final piece of information: I just spun up a salt-master with Ubuntu, and even when controlling Ubuntu minions, the archive.extracted state does not work on Ubuntu minions. It seems that the archive.extracted state only behaves as advertised on RHEL minions, and the OS of the master does not matter. Can we fix this?
The distro has nothing to do with how the state behaves; we don't do anything special for different distros. Can you please set debug logging in the minion config?
Actually, I misread; it's the Ubuntu box you're having trouble with, so we want to set that logging config option on the Ubuntu minion. Also, you've said that you get errors saying that options aren't supported, but you haven't actually posted those errors. This doesn't help us very much in terms of troubleshooting. Please post the output from the following two commands:
Replace the placeholders with the appropriate values for your environment.
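The option name and the two commands did not survive in this thread, but for reference, the standard way to get debug-level information out of a minion is the `log_level_logfile` setting, or running the state locally via salt-call; the state name below is a hypothetical stand-in:

```yaml
# /etc/salt/minion — raise the verbosity of /var/log/salt/minion
log_level_logfile: debug
```

```sh
# Run the state directly on the minion with debug output
# ("deploy_image" is a hypothetical stand-in for the real state name).
salt-call -l debug state.apply deploy_image
```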
Okay, here is a full report. For now, let's skip the orchestration state for simplicity. Here is the state file I am running on the minions, with some data truncated for obscurity. Again, an identical state but with a different S3 tarball runs via orchestration on the master with no issues. Unfortunately, I am now seeing the same behavior on both RHEL minions and Ubuntu minions.
Command:
Output:
So I add the options the error message asks for. Output:
So I remove `enforce_toplevel`, `skip_verify`, and `overwrite`, and add the option the new error suggests. Output:
So it is telling me to set skip_verify to True, but when I do that, I get a warning saying it is not a supported option. So I add the source hash, at which point it seems to succeed, but no files are actually unpacked. Output:
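For background on why Salt asks for these in the first place: with remote (non-salt://) sources, 2016.11's archive.extracted wants either a source_hash to verify the download or skip_verify: True to bypass verification. A minimal sketch with hypothetical paths:

```yaml
extract_from_s3:
  archive.extracted:
    - name: /opt/app                          # hypothetical destination
    - source: s3://my-bucket/app.tar.gz       # hypothetical bucket/key
    - source_hash: sha256=<hash of tarball>   # or instead: skip_verify: True
```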
Take my word for it that adding any of the options it suggests (-Acdtrux) simply yields the same error. Here are the versions:
I'm sorry, but I can't reproduce any of this. There are also a couple of weird things with the information you posted. First of all, the comment field:
Secondly, all of the options you used in your SLS are valid in 2016.11.5. There is no way that 2016.11.5 generates those errors. How did you install Salt on these minions? Also, you never posted the debug logging I asked for. Please do so.
So I thought maybe the issue was something with S3, so to remove a layer of complexity, I created a very small tar.gz on the salt master, at salt://test1.tar.gz. The new state file is:
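That snippet was also lost from the thread; a minimal sketch of what a test state against salt://test1.tar.gz would look like, with a hypothetical destination:

```yaml
test_extract:
  archive.extracted:
    - name: /tmp/test1              # hypothetical destination
    - source: salt://test1.tar.gz   # salt:// sources do not need a source_hash
```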
I get the same error, that I must specify -Acdtrux... Based on the log file, the tar command that Salt is running is failing, exiting with 2 as the return code. See below. Here is the relevant /var/log/salt/minion file snippet:
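One way to tell a tar failure from a Salt failure is to rerun the extraction by hand on the minion, using the cached copy of the archive; the paths are hypothetical but follow Salt's usual minion cache layout:

```sh
# GNU tar exits with 2 on a fatal error; running it manually shows
# the accompanying message that the state output may swallow.
mkdir -p /tmp/test1
tar -xzf /var/cache/salt/minion/files/base/test1.tar.gz -C /tmp/test1
echo "tar exited with $?"
```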
One other piece of information: this master is a syndic of another master and is also running salt-api. I am going to spin up a brand-new RHEL master and Ubuntu minion in AWS and test there. I will report back shortly. I created these minions using salt-cloud.
Okay, you are onto something with how the minions were installed. I spun up a new salt-master and minion in AWS by hand, using the instructions at repo.saltstack.com to install the packages. The state worked perfectly with no errors, using the arguments defined in the docs for 2016.11.5. However, the minions that were spun up using salt-cloud, regardless of OS, exhibit the behavior above. I tried the following commands on the Ubuntu minion:

I added the option the error suggested and got the same error. However, if I spin up a blank node in AWS and configure it by hand, I get the desired result. Any idea what could be going on with nodes spun up using salt-cloud? I am using salt-api to call an orchestration state to spin up the nodes. This workflow was configured by SaltStack Professional Services at an engagement we had two weeks ago, so I know it is sound. Please let me know if you would like any of the configuration files or log files from salt-cloud.
I tried everything I could think of to fully remove the salt-minion package on the node spun up by salt-cloud that is reporting the error, including attempting to also remove all of its dependencies. Then I rebooted and attempted to reinstall Salt manually. However, after the manual install, I am still getting the error message. So it seems that the error message shows up on a node spun up by salt-cloud no matter what, even if salt-minion is fully removed and then added back in; at least in my testing this is the case. A node spun up manually in the AWS console with Salt installed manually functions as advertised. I also tried setting up a brand-new minion in AWS and installing salt-minion using the bootstrap script, since this is how salt-cloud installs Salt onto a node. Installing Salt manually via the bootstrap script yields the desired behavior when running the state with the archive.extracted declaration, and the bootstrap script that I downloaded is identical to the one in /etc/salt/cloud.deploy.d/bootstrap-salt.sh. I am truly stumped.
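To rule out any drift between the two scripts, a byte-for-byte comparison is quick; the bootstrap URL is the project's published one, and the other path is the one from this thread:

```sh
# Identical files produce no diff output and exit 0.
curl -sL https://bootstrap.saltstack.com -o /tmp/bootstrap-salt.sh
diff /tmp/bootstrap-salt.sh /etc/salt/cloud.deploy.d/bootstrap-salt.sh \
  && echo "scripts are identical"
```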
@terminalmage, were you able to replicate the reported behavior using minions created with salt-cloud?
No, I've been working on getting the release candidate for 2017.7.0 ready and haven't had time to come back to this. I will try to do so over the next day or so. |
This log message:
doesn't exist after the first RC for 2015.8.0:

```sh
% for tag in `git tag --list`; do git checkout -q --force $tag 2>/dev/null; fgrep Untar states/archive.py 2>/dev/null | fgrep -q debug && echo $tag; done; git checkout -q --force 2017.7
v0.8.1
v0.8.10
v0.8.11
v0.8.2
v0.8.3
v0.8.4
v0.8.5
v0.8.6
v2014.1
v2014.1.0
v2014.1.0rc1
v2014.1.0rc2
v2014.1.0rc3
v2014.1.1
v2014.1.10
v2014.1.11
v2014.1.12
v2014.1.13
v2014.1.14
v2014.1.2
v2014.1.3
v2014.1.4
v2014.1.5
v2014.1.6
v2014.1.7
v2014.1.8
v2014.1.9
v2014.7
v2014.7.0
v2014.7.0rc1
v2014.7.0rc2
v2014.7.0rc3
v2014.7.0rc4
v2014.7.0rc5
v2014.7.0rc6
v2014.7.0rc7
v2014.7.1
v2014.7.2
v2014.7.3
v2014.7.4
v2014.7.5
v2014.7.6
v2014.7.7
v2014.7.8
v2014.7.9
v2015.2
v2015.2.0rc1
v2015.2.0rc2
v2015.2.0rc3
v2015.5
v2015.5.0
v2015.5.1
v2015.5.2
v2015.5.3
v2015.8
v2015.8.0rc1
```

It was removed in 1726057, nearly two years ago. I think you have something else weird with your installation.
I think you are right. I just used a different salt-master to run the same state on a different minion, and I did not get the error. I am not sure what is going on here, but tomorrow I am going to rebuild the salt-master from scratch and see if the issue goes away. Sorry for the trouble.
I rebuilt the salt-master from scratch, and now it works. I am not sure what the issue was, because I used the same setup script I always use to set up a salt-master, and I followed all of the same steps I did with the initial one. Really not sure what the issue was, but it is now working. Sorry for all the trouble, and thanks for helping me get to the bottom of this one.
@nomeelnoj No problem! One thing to look for is stale .pyc files, ones from an earlier installation which were not removed. This is less common when installing Salt from repository packages, as the .pyc files are owned by the package manager and thus are removed when you upgrade or uninstall, but they can get left behind if you're installing from source and are not careful about removing them. I'd also recommend checking your Python path to make sure the interpreter isn't importing Salt from a leftover location.
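A quick sketch of both checks; the site-packages path is a typical location for a Python 2.7 install and may differ on your system:

```sh
# Where is the interpreter actually importing salt from?
python -c 'import salt; print(salt.__file__)'

# Flag .pyc files that no longer have a matching .py source —
# a telltale sign of leftovers from an earlier installation.
for pyc in $(find /usr/lib/python2.7/site-packages/salt -name '*.pyc'); do
  [ -f "${pyc%c}" ] || echo "stale: $pyc"
done
```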
Description of Issue/Question
I am trying to use archive.extracted to extract a .tgz from the salt master onto a minion. This is for a web server. However, I am getting errors as if Salt thinks I am on a previous version, telling me that tar_options is required, so I have formatted the state for the previous version of Salt. If I try to use "options", "user", "group", or any of the other options the docs say work with 2016.11.5, I get an error that they are not supported.
Additionally, I have a different state that pulls down an archive from S3, and that is functioning properly per the docs for version 2016.11.5.
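For context on the two syntaxes being mixed up here: before 2016.11, archive.extracted required archive_format and took tar flags via tar_options; 2016.11 deprecated tar_options in favor of options, auto-detects the format, and adds user/group. A minimal sketch of each style, with hypothetical paths:

```yaml
# Pre-2016.11 style: archive_format required, flags via tar_options.
unpack_old_style:
  archive.extracted:
    - name: /var/www/ui
    - source: salt://ui.tgz
    - archive_format: tar
    - tar_options: z

# 2016.11+ style: format auto-detected, options replaces tar_options,
# and ownership can be set with user/group.
unpack_new_style:
  archive.extracted:
    - name: /var/www/ui
    - source: salt://ui.tgz
    - user: www-data
    - group: www-data
```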
Setup
#ui.sls
Steps to Reproduce Issue
However, whenever I run this state, the files do not actually extract, and I get this output from Salt:
Versions Report
Salt-master:
Salt-minion: