SELinux Python bindings fail on CentOS 6 even after a reset_connection meta task #633
I have recently upgraded Mitogen to 0.2.8. I'm using Ansible in a virtualenv (created by pyenv) with Python 2.7.15, and Ansible has been upgraded to 2.8.4.
The issue I'm facing: after installing the "libselinux-python" package on a CentOS 6 server (created by Vagrant in VirtualBox; Vagrantfile included), immediately setting the SELinux status to disabled fails with the following error:
This was also the case before upgrading to Mitogen 0.2.8, so I figured it might be connected to the way Mitogen behaves, and back then I successfully resolved it by using a reset_connection meta task. Now that workaround doesn't help either, and the task fails.
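For context, here is a minimal sketch of the failing pattern described above. The play structure, host name, and task names are assumptions; the package name, the SELinux state, and the reset_connection workaround come from the description.

```yaml
# Hypothetical reproduction sketch; host name and task names are assumed.
- hosts: centos6
  become: true
  tasks:
    - name: Install the SELinux Python bindings
      yum:
        name: libselinux-python
        state: present

    # Workaround that used to help before Mitogen 0.2.8
    - meta: reset_connection

    - name: Disable SELinux (this is the task that fails)
      selinux:
        state: disabled
```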
Here's the Vagrantfile I'm using to create the servers; it can be used to recreate the environment:
Here's the partial task output (reset meta included):
If you need more info, please feel free to contact me. I can replicate the faulty environment with a simple
This part of the log is curious:
Since there is no traceback :) I'm guessing one wasn't printed?
There could potentially be some shared problem between this and #569 involving the importer. For example, Python has a "negative import cache"; something like that could be recording the module as missing, requiring reset_connection.
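To illustrate the "negative import cache" idea: CPython treats a `None` entry in `sys.modules` as a cached import failure and raises `ImportError` without retrying, which is the kind of sticky "module is missing" state described above. A minimal demonstration (this is standard Python behavior, not Mitogen's importer code):

```python
import sys

# Simulate a cached import failure: a None entry in sys.modules makes
# the import system raise ImportError without re-running the import.
sys.modules['json'] = None
try:
    import json  # any module name works; json is just a convenient victim
except ImportError as exc:
    print('cached failure:', exc)

# Clearing the cache entry lets the import succeed again -- analogous
# to how reset_connection would give the remote interpreter a clean slate.
del sys.modules['json']
import json
print(json.dumps({'ok': True}))  # → {"ok": true}
```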
Looking at the log we can see the
I think that suggests, as in #569, that a common idiom in use in the Ansible code is masking another error.
I will try to reproduce this locally because it looks like there may be some common importer regression that describes both bugs. Thanks for reporting this!
I think I can see the problem. The
This will need a new release, apologies.
You can see in the logs that reset_connection is destroying a sudo context connection, but in follow-up tasks there is no "Python version is 1.2.3." startup message from the sudo context. That means there are two contexts running with the same name but a subtly different configuration.
You're correct, there's no traceback in the logs. Checking the issue on my personal computer, I wasn't able to replicate this though.
I think I'm onto something here. I always commit the Mitogen code with my repo, and when I updated it, git may have been confused and reused some of the old code (despite the old version and the new version being in different directories), if that makes sense.
Let me run a couple of tests before we jump to conclusions here.
There are definitely bugs in
The first problem is that it does not know the become username. This is how you end up with duplicate contexts: one context's unique key includes "username=root", the other's includes "username=None". Normal tasks use one key, meta uses the other, so it is likely the context torn down by meta was created only moments before.
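The duplicate-context mechanism above can be sketched as a connection cache keyed on the become username. This is an illustrative toy, not Mitogen's actual code: if the meta task builds its key with `username=None` while normal tasks use `username=root`, the keys differ and two contexts coexist for one host, so the meta task tears down the wrong one.

```python
# Toy model of a connection cache keyed by (host, become username).
connections = {}

def get_connection(host, become_user):
    """Return a cached connection for the key, creating one if missing."""
    key = (host, 'username=%s' % become_user)
    if key not in connections:
        connections[key] = object()  # stand-in for a real context/connection
    return connections[key]

get_connection('centos6', 'root')  # normal task: become username known
get_connection('centos6', None)    # meta task: become username not known
print(len(connections))  # → 2, duplicate contexts for the same host
```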
The second problem is that this should not matter -- the
Here's the log with the working state:
Okay, I'm making things complicated and getting nowhere. Now the same environment fails on my workstation too. Sigh. It seems to have nothing to do with git wrongly thinking the files were moved/renamed; I jumped to that conclusion too soon.
I'll just shut up and wait for your investigation now :) If you need a different run and log outputs, I can do that fairly easily. Every run in my env does fail.