SELinux python bindings fail on CentOS 6 even after a reset_connection meta task #633
Comments
This part of the log is curious:
Since there is no traceback :) I'm guessing one wasn't printed? There could potentially be some shared problem between this and #569 involving the importer. For example, Python has a "negative import cache"; something like that could be recording the module as missing and so requiring reset_connection. Looking at the log, I think that suggests, as in #569, that a common idiom in the Ansible code is masking another error. I will try to reproduce this locally, because it looks like there may be a common importer regression that explains both bugs. Thanks for reporting this!
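For illustration only (this is not mitogen's actual importer code), here is a minimal Python sketch of the kind of negative import cache being described: once sys.modules records a name as None, later imports of that name fail immediately, even if the package has since been installed, until the stale entry is cleared or the interpreter is restarted.

```python
import sys

# Plant a "module is missing" marker by hand. Python 2 creates None entries
# like this for some failed relative imports; a custom importer could cache
# a miss the same way.
sys.modules['selinux'] = None

try:
    # Fails with ImportError even if libselinux-python was installed in the
    # meantime, because the cached miss short-circuits the real import.
    import selinux
except ImportError as exc:
    print("import still fails: %s" % exc)

# Clearing the stale entry (or restarting the interpreter, which is roughly
# what reset_connection achieves for a persistent remote interpreter)
# lets the next import attempt look at the filesystem again.
del sys.modules['selinux']
```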
I think I can see the problem; this will need a new release, apologies. You can see in the logs that reset_connection is destroying a sudo context connection, but in the follow-up tasks there is no "Python version is 1.2.3." startup message from the sudo context. That means two contexts are running with the same name but a subtly different configuration.
You're correct, there's no traceback in the logs. Checking the issue on my personal computer, I wasn't able to replicate it though. I think I'm onto something here: I always commit the mitogen code with my repo, and when I updated the code, git may have been confused and reused some of the old code (despite the old and new versions being in different directories), if that makes sense. Let me run a couple of tests before we jump to conclusions here.
There are definitely bugs here. The first problem is that the reset handling does not know the become username. This is how you are ending up with duplicate contexts -- one's unique key includes "username=root", the other includes "username=None". Normal tasks use one key, meta uses another, so it is likely the context torn down by meta was created only moments before. The second problem is that this should not matter in the first place.
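As a purely hypothetical sketch (not mitogen's real data structures) of how keying a connection cache on the full configuration produces duplicates when one code path knows the become username and another passes None:

```python
# Hypothetical illustration: if the cache key is built from the whole
# connection configuration, a missing become username produces a second,
# distinct entry for what is logically the same connection.

def connection_key(host, become_user):
    return (('host', host), ('username', become_user))

contexts = {}

def get_context(host, become_user):
    key = connection_key(host, become_user)
    if key not in contexts:
        contexts[key] = object()  # stands in for a real context/connection
    return contexts[key]

get_context('centos6', 'root')  # normal task path knows the become user
get_context('centos6', None)    # meta: reset_connection path does not

print(len(contexts))  # 2: same host, two contexts under different keys
```

Run as a script, this prints 2, i.e. two cached contexts for what should be a single connection, matching the duplicate-context behaviour described above.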
Here's the log from the working state:
The real problem was me deleting the old mitogen 0.2.6 folder and adding the new version in a 0.2.8 folder. Git thought that I was renaming the folder, and its contents were reused in the commit.
Okay, I'm making things complicated and getting nowhere. Now the same environment fails on my workstation. Sigh... It seems it has nothing to do with git wrongly thinking the files were moved/renamed; I jumped to that conclusion too soon. I'll just shut up and wait for your investigation now :) If you need a different run and log outputs, I can do that fairly easily. Every run in my environment fails.
It used to be set by on_action_run() from task_vars, but this doesn't work for meta: reset_connection. That meant MITOGEN_CPU_COUNT>1 would pick the wrong mux to reset the connection on.
- don't create a new connection during reset if no existing connection exists
- strip off the last hop in the connection stack if PlayContext.become is True
- log a debug message if reset cannot find an existing connection (see the sketch after these notes)
- take host_vars from task_vars too
- make missing task_vars a hard error
- update tests to provide stub task_vars
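A rough sketch of the reset behaviour those notes describe; the names reset_connection, existing_connection, stack, and disconnect below are illustrative only and not mitogen's actual API:

```python
import logging

LOG = logging.getLogger(__name__)

def reset_connection(play_context, existing_connection):
    # Illustrative only: mirrors the commit notes, not the real implementation.
    if existing_connection is None:
        # Don't create a brand new connection just to tear it down again.
        LOG.debug('reset: no existing connection for this host, skipping')
        return

    stack = list(existing_connection.stack)
    if play_context.become:
        # Strip the final become (e.g. sudo) hop so the reset reaches the
        # login account's interpreter rather than only the become context.
        stack = stack[:-1]

    existing_connection.disconnect(stack)
```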
This is now on the master branch if you'd like to confirm the fix. Apologies for the hassle!
This seems to have fixed it. Thanks! :)
Reopening this because I want to take a look at what's going on with your packaging problem. It looks similar to other reports. No action needed, just want this to stay in the list :)
I ran into the same issue. No matter what I did, if I installed
Hi!
I have recently upgraded mitogen to 0.2.8 and am using Ansible in a virtualenv (created by pyenv) with Python 2.7.15; Ansible was upgraded to 2.8.4.
The issue I'm facing: installing the "libselinux-python" package on a CentOS 6 server (created by Vagrant in VirtualBox; Vagrantfile included) and then immediately setting the selinux status to disabled fails with the following error:
Aborting, target uses selinux but python bindings (libselinux-python) aren't installed!
This was also the case before upgrading to mitogen 0.2.8, so I figured it might be connected to the way mitogen behaves, and I had successfully resolved it by using a meta reset_connection task. Now even that doesn't help and the task fails.
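For context, the error text above comes from an import guard of roughly this shape in Ansible's selinux module (paraphrased and simplified, not the exact source; fail_json is the standard Ansible module API):

```python
# Simplified sketch of the guard pattern behind the error message above.
try:
    import selinux  # provided by the libselinux-python package on CentOS 6
    HAS_SELINUX = True
except ImportError:
    HAS_SELINUX = False

def check_bindings(module):
    # 'module' stands in for an AnsibleModule instance.
    if not HAS_SELINUX:
        module.fail_json(msg="Aborting, target uses selinux but python "
                             "bindings (libselinux-python) aren't installed!")
```

If that guard is evaluated via a long-lived remote interpreter that predates the libselinux-python install (or that holds a cached import failure), it keeps reporting the bindings as missing until the interpreter is replaced, which is presumably why a reset_connection between the two tasks used to help.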
Versions:
The Vagrantfile I'm using to create the servers (it can be used to recreate the environment):
Here's the ansible-config dump --only-changed output:

Here's the partial task output (reset meta included):
If you need more info, please feel free to contact me. I can replicate the faulty environment easily with a simple vagrant up.

Thanks!