SELinux: Could not open policy file #56

Open

YiannisGkoufas opened this issue Jun 10, 2016 · 10 comments

@YiannisGkoufas

Hi there,

I am using RHEL 7 and the mesos-master works fine.
However, when I launch the mesos-slave I get this error:

SELinux: Could not open policy file <= /etc/selinux/targeted/policy/policy.30: No such file or directory
Failed to create a containerizer: Could not create DockerContainerizer: Failed to create docker: Failed to get docker version: Failed to execute 'docker -H unix:///var/run/docker.sock --version': exited with status 127

Can someone please give me a hint about what's wrong?
I tried different versions of the images, and the only one that worked was the first one (0.19.1).

Thanks a lot!
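
A note on the second error: exit status 127 from 'docker -H unix:///var/run/docker.sock --version' conventionally means the command was not found, i.e. the docker client binary isn't reachable inside the slave container. A quick way to confirm that (a minimal sketch; the container name mesos-slave is only an example):

# Check whether the docker client the slave invokes is actually present in the container:
docker exec -it mesos-slave sh -c 'command -v docker && docker -H unix:///var/run/docker.sock --version'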

@r1ckr commented Jun 20, 2016

Hey Yiannis, did you solve this issue?

@YiannisGkoufas (Author)

Hi there,

No, I haven't managed to solve it :(

Thanks!

@greggomann (Contributor)

Hi @YiannisGkoufas! Thanks for filing this issue. Would you mind testing again? I'm hoping your issue is solved by 60ad15c.
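
One way to retest (a sketch; the make target and VERSION tag are examples based on this repo's Makefile, adjust to your setup):

# Rebuild the slave image from a checkout that includes 60ad15c, then check that it
# starts and reports its version without the SELinux error:
make images VERSION=0.28.1-2.0.20.ubuntu1404
docker run --rm mesosphere/mesos-slave:0.28.1-2.0.20.ubuntu1404 --version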

@arnarg commented Jul 12, 2016

@greggomann It seems to work for me. I no longer get the SELinux error.

@PHPCEO commented Jul 14, 2016

@greggomann this doesn't work for me on a CoreOS host.

  • Though I was building version 0.28.1-2.0.20.ubuntu1404 rather than 0.20.1-1.0.ubuntu1404, which is the example in the Makefile.

@greggomann (Contributor)

@PHPCEO thanks for the report! I've tested with 0.28.x releases in Ubuntu environments, so something must be going on in CoreOS. Are you getting the same SELinux error shown above? Could you provide a little more info on your environment? I'll try some testing on CoreOS tomorrow. Thx!!

@greggomann (Contributor)

@PHPCEO how exactly were you attempting to build on CoreOS? I imagine you were building in a container? I need a bit more information about your environment in order to reproduce this.

@PHPCEO commented Jul 28, 2016

@greggomann so sorry for the delay, and thank you for the prompt response! I was building on CoreOS running in xhyve on my work laptop, which is running OS X 10.11.6. I tried disabling SELinux in the guest OS, to no avail. I believe I did get it running, though. I'll add more info later today!
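
For reference, SELinux enforcement on the host side can be inspected and temporarily relaxed with the standard userspace tools, if they are present on the image (a generic sketch, not specific to CoreOS):

# Show the current SELinux mode, then switch to permissive until the next reboot:
getenforce
sudo setenforce 0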

@PHPCEO commented Jul 28, 2016

OK, so I was just running make images, e.g.: make images VERSION=0.28.1-2.0.20.ubuntu1404

When I tried to run the slave image, I got this error:

drm -fv mslave-1a; drun -it --name=mslave-1a --privileged -e MESOS_PORT=5051 -e MESOS_MASTER=zk://zk-1.docker:2181,mmaster-1a.docker:2181/mesos -e MESOS_SWITCH_USER=0 -e MESOS_CONTAINERIZERS=docker,mesos -e MESOS_LOG_DIR=/var/log/mesos -e MESOS_WORK_DIR=/var/tmp/mesos -v "$(pwd)/log/mesos:/var/log/mesos" -v "$(pwd)/tmp/mesos:/var/tmp/mesos" -v /var/run/docker.sock:/var/run/docker.sock -v /cgroup:/cgroup -v /sys:/sys -v /usr/bin/docker:/usr/local/bin/docker:ro mesosphere/mesos-slave:0.28.1-2.0.20.ubuntu1404
mslave-1a
I0728 15:57:58.631327     1 logging.cpp:188] INFO level logging started!
I0728 15:57:58.633963     1 main.cpp:223] Build: 2016-04-14 15:41:24 by root
I0728 15:57:58.634363     1 main.cpp:225] Version: 0.28.1
I0728 15:57:58.634593     1 main.cpp:228] Git tag: 0.28.1
I0728 15:57:58.634742     1 main.cpp:232] Git SHA: 555db235a34afbb9fb49940376cc33a66f1f85f0
SELinux:  Could not open policy file <= /etc/selinux/targeted/policy/policy.30:  No such file or directory

What I ultimately wound up doing was modifying the Dockerfile so that selinux and systemd are installed on the 14.04 image. I also found that I needed to launch the slave omitting the bind mount of /usr/bin/docker to /usr/local/bin/docker and adding the flag --launcher=posix, which worked :).
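
A minimal sketch of that Dockerfile change, assuming an ubuntu:14.04 base image (the package names here are an assumption, not the original edit):

# Install the SELinux userspace libraries/tools and systemd in the slave image:
RUN apt-get update && \
    apt-get install -y --no-install-recommends selinux-utils libselinux1 systemd && \
    rm -rf /var/lib/apt/lists/*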

drm -fv mslave-1a; drun -it --name=mslave-1a --privileged -e MESOS_PORT=5051 -e MESOS_MASTER=zk://zk-1.docker:2181,mmaster-1a.docker:2181/mesos -e MESOS_SWITCH_USER=0 -e MESOS_CONTAINERIZERS=docker,mesos -e MESOS_LOG_DIR=/var/log/mesos -e MESOS_WORK_DIR=/var/tmp/mesos -v "$(pwd)/log/mesos:/var/log/mesos" -v "$(pwd)/tmp/mesos:/var/tmp/mesos" -v /var/run/docker.sock:/var/run/docker.sock -v /cgroup:/cgroup -v /sys:/sys mesosphere/mesos-slave:0.28.1-2.0.20.ubuntu1404 --launcher=posix
mslave-1a
I0728 16:42:47.740856     1 logging.cpp:188] INFO level logging started!
I0728 16:42:47.742466     1 main.cpp:223] Build: 2016-04-14 15:41:24 by root
I0728 16:42:47.742575     1 main.cpp:225] Version: 0.28.1
I0728 16:42:47.742593     1 main.cpp:228] Git tag: 0.28.1
I0728 16:42:47.742614     1 main.cpp:232] Git SHA: 555db235a34afbb9fb49940376cc33a66f1f85f0
I0728 16:42:47.903851     1 containerizer.cpp:149] Using isolation: posix/cpu,posix/mem,filesystem/posix
I0728 16:42:47.906050     1 main.cpp:328] Starting Mesos slave
2016-07-28 16:42:47,906:1(0x7f7ab0991700):ZOO_INFO@log_env@712: Client environment:zookeeper.version=zookeeper C client 3.4.5
2016-07-28 16:42:47,906:1(0x7f7ab0991700):ZOO_INFO@log_env@716: Client environment:host.name=de28af01065e
2016-07-28 16:42:47,906:1(0x7f7ab0991700):ZOO_INFO@log_env@723: Client environment:os.name=Linux
2016-07-28 16:42:47,906:1(0x7f7ab0991700):ZOO_INFO@log_env@724: Client environment:os.arch=4.6.4-coreos
2016-07-28 16:42:47,906:1(0x7f7ab0991700):ZOO_INFO@log_env@725: Client environment:os.version=#1 SMP Thu Jul 14 20:36:35 UTC 2016
I0728 16:42:47.906545     1 slave.cpp:193] Slave started on 1)@172.21.0.7:5051
I0728 16:42:47.906579     1 slave.cpp:194] Flags at startup: --appc_simple_discovery_uri_prefix="http://" --appc_store_dir="/tmp/mesos/store/appc" --authenticatee="crammd5" --cgroups_cpu_enable_pids_and_tids_count="false" --cgroups_enable_cfs="false" --cgroups_hierarchy="/sys/fs/cgroup" --cgroups_limit_swap="false" --cgroups_root="mesos" --container_disk_watch_interval="15secs" --containerizers="docker,mesos" --default_role="*" --disk_watch_interval="1mins" --docker="docker" --docker_kill_orphans="true" --docker_registry="https://registry-1.docker.io" --docker_remove_delay="6hrs" --docker_socket="/var/run/docker.sock" --docker_stop_timeout="0ns" --docker_store_dir="/tmp/mesos/store/docker" --enforce_container_disk_quota="false" --executor_registration_timeout="1mins" --executor_shutdown_grace_period="5secs" --fetcher_cache_dir="/tmp/mesos/fetch" --fetcher_cache_size="2GB" --frameworks_home="" --gc_delay="1weeks" --gc_disk_headroom="0.1" --hadoop_home="" --help="false" --hostname_lookup="true" --image_provisioner_backend="copy" --initialize_driver_logging="true" --isolation="posix/cpu,posix/mem" --launcher="posix" --launcher_dir="/usr/libexec/mesos" --log_dir="/var/log/mesos" --logbufsecs="0" --logging_level="INFO" --master="zk://zk-1.docker:2181,mmaster-1a.docker:2181/mesos" --oversubscribed_resources_interval="15secs" --perf_duration="10secs" --perf_interval="1mins" --port="5051" --qos_correction_interval_min="0ns" --quiet="false" --recover="reconnect" --recovery_timeout="15mins" --registration_backoff_factor="1secs" --revocable_cpu_low_priority="true" --sandbox_directory="/mnt/mesos/sandbox" --strict="true" --switch_user="false" --systemd_enable_support="true" --systemd_runtime_directory="/run/systemd/system" --version="false" --work_dir="/var/tmp/mesos"
2016-07-28 16:42:47,906:1(0x7f7ab0991700):ZOO_INFO@log_env@733: Client environment:user.name=(null)
2016-07-28 16:42:47,906:1(0x7f7ab0991700):ZOO_INFO@log_env@741: Client environment:user.home=/root
2016-07-28 16:42:47,906:1(0x7f7ab0991700):ZOO_INFO@log_env@753: Client environment:user.dir=/
2016-07-28 16:42:47,906:1(0x7f7ab0991700):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=zk-1.docker:2181,mmaster-1a.docker:2181 sessionTimeout=10000 watcher=0x7f7ab937fa10 sessionId=0 sessionPasswd=<null> context=0x7f7a9c001e20 flags=0
I0728 16:42:47.908989     1 slave.cpp:464] Slave resources: cpus(*):2; mem(*):2932; disk(*):233088; ports(*):[31000-32000]
I0728 16:42:47.909122     1 slave.cpp:472] Slave attributes: [  ]
I0728 16:42:47.909241     1 slave.cpp:477] Slave hostname: de28af01065e
2016-07-28 16:42:47,913:1(0x7f7a8ffff700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [172.21.0.2:2181] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
I0728 16:42:47.911931    11 state.cpp:58] Recovering state from '/var/tmp/mesos/meta'
I0728 16:42:47.916991    11 state.cpp:698] No checkpointed resources found at '/var/tmp/mesos/meta/resources/resources.info'
2016-07-28 16:42:47,918:1(0x7f7a8ffff700):ZOO_INFO@check_events@1703: initiated connection to server [172.21.0.6:2181]
I0728 16:42:47.918781    11 state.cpp:101] Failed to find the latest slave from '/var/tmp/mesos/meta'
I0728 16:42:47.919304    11 status_update_manager.cpp:200] Recovering status update manager
I0728 16:42:47.919656     8 docker.cpp:773] Recovering Docker containers
I0728 16:42:47.919800     8 containerizer.cpp:407] Recovering containerizer
2016-07-28 16:42:47,920:1(0x7f7a8ffff700):ZOO_INFO@check_events@1750: session establishment complete on server [172.21.0.6:2181], sessionId=0x156330739190002, negotiated timeout=10000
I0728 16:42:47.921376    14 group.cpp:349] Group process (group(1)@172.21.0.7:5051) connected to ZooKeeper
I0728 16:42:47.921528    14 group.cpp:831] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0)
I0728 16:42:47.921782    14 group.cpp:427] Trying to create path '/mesos' in ZooKeeper
I0728 16:42:47.922284     8 provisioner.cpp:245] Provisioner recovery complete
I0728 16:42:47.922771     7 slave.cpp:4565] Finished recovery
I0728 16:42:47.926292    14 detector.cpp:152] Detected a new leader: (id='0')
I0728 16:42:47.926434    14 group.cpp:700] Trying to get '/mesos/json.info_0000000000' in ZooKeeper
I0728 16:42:47.927608    14 detector.cpp:479] A new leading master (UPID=master@172.21.0.2:5050) is detected
I0728 16:42:47.930243     7 slave.cpp:796] New master detected at master@172.21.0.2:5050
I0728 16:42:47.930726    12 status_update_manager.cpp:174] Pausing sending status updates
I0728 16:42:47.931422     7 slave.cpp:821] No credentials provided. Attempting to register without authentication
I0728 16:42:47.932008     7 slave.cpp:832] Detecting new master
I0728 16:42:48.329638    14 slave.cpp:971] Registered with master master@172.21.0.2:5050; given slave ID d847b564-3e66-4633-a63e-6bf5c5c0d557-S0
I0728 16:42:48.330337     7 status_update_manager.cpp:181] Resuming sending status updates
I0728 16:42:48.341511    14 slave.cpp:1030] Forwarding total oversubscribed resources
I0728 16:43:47.909940     9 slave.cpp:4374] Current disk usage 45.21%. Max allowed age: 3.134993996843090days
I0728 16:44:47.913141     8 slave.cpp:4374] Current disk usage 45.22%. Max allowed age: 3.134840638223727days
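
For what it's worth, the --launcher=posix setting can also be supplied through the environment, since the slave reads any startup flag from a MESOS_-prefixed variable, just like the other -e options in the command above (a sketch; the remaining -e and -v options would stay the same):

# Equivalent to appending --launcher=posix as a command-line argument:
docker run -it --name=mslave-1a --privileged \
    -e MESOS_LAUNCHER=posix \
    -e MESOS_MASTER=zk://zk-1.docker:2181,mmaster-1a.docker:2181/mesos \
    mesosphere/mesos-slave:0.28.1-2.0.20.ubuntu1404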

@billyogat

Did this ever get solved?
