avahi-daemon fails to start inside debian squeeze lxc container #25

Closed
paneq opened this issue Jul 17, 2013 · 12 comments

paneq commented Jul 17, 2013

root@debian-first:~# apt-get install avahi-daemon avahi-utils

# LOG CUT HERE

Starting system message bus: dbus.
Setting up avahi-daemon (0.6.27-2+squeeze1) ...
Reloading system message bus config...done.
Starting Avahi mDNS/DNS-SD Daemon: avahi-daemonTimeout reached while wating for return value
Could not receive return value from daemon process.
 (warning).

Inside Ubuntu containers avahi-daemon starts correctly. Inside real squeeze VMs it also starts correctly.

My host is Ubuntu 12.04.

Here is the strace output:

root@debian-first:~# strace /etc/init.d/avahi-daemon start
execve("/etc/init.d/avahi-daemon", ["/etc/init.d/avahi-daemon", "start"], [/* 12 vars */]) = 0
brk(0)                                  = 0x247f000
access("/etc/ld.so.nohwcap", F_OK)      = -1 ENOENT (No such file or directory)
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fc33b35a000
access("/etc/ld.so.preload", R_OK)      = -1 ENOENT (No such file or directory)
open("/etc/ld.so.cache", O_RDONLY)      = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=7116, ...}) = 0
mmap(NULL, 7116, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7fc33b358000
close(3)                                = 0
access("/etc/ld.so.nohwcap", F_OK)      = -1 ENOENT (No such file or directory)
open("/lib/libc.so.6", O_RDONLY)        = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\240\355\1\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=1437064, ...}) = 0
mmap(NULL, 3545160, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fc33addd000
mprotect(0x7fc33af36000, 2093056, PROT_NONE) = 0
mmap(0x7fc33b135000, 20480, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x158000) = 0x7fc33b135000
mmap(0x7fc33b13a000, 18504, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7fc33b13a000
close(3)                                = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fc33b357000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fc33b356000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fc33b355000
arch_prctl(ARCH_SET_FS, 0x7fc33b356700) = 0
mprotect(0x7fc33b135000, 16384, PROT_READ) = 0
mprotect(0x7fc33b35c000, 4096, PROT_READ) = 0
munmap(0x7fc33b358000, 7116)            = 0
getpid()                                = 764
rt_sigaction(SIGCHLD, {SIG_DFL, [CHLD], SA_RESTORER|SA_RESTART, 0x7fc33ae0f230}, {SIG_DFL, [], 0}, 8) = 0
geteuid()                               = 0
brk(0)                                  = 0x247f000
brk(0x24a0000)                          = 0x24a0000
getppid()                               = 763
stat("/root", {st_mode=S_IFDIR|0700, st_size=4096, ...}) = 0
stat(".", {st_mode=S_IFDIR|0700, st_size=4096, ...}) = 0
open("/etc/init.d/avahi-daemon", O_RDONLY) = 3
fcntl(3, F_DUPFD, 10)                   = 10
close(3)                                = 0
fcntl(10, F_SETFD, FD_CLOEXEC)          = 0
rt_sigaction(SIGINT, NULL, {SIG_DFL, [], 0}, 8) = 0
rt_sigaction(SIGINT, {0x40f540, ~[RTMIN RT_1], SA_RESTORER, 0x7fc33ae0f230}, NULL, 8) = 0
rt_sigaction(SIGQUIT, NULL, {SIG_DFL, [], 0}, 8) = 0
rt_sigaction(SIGQUIT, {SIG_DFL, ~[RTMIN RT_1], SA_RESTORER, 0x7fc33ae0f230}, NULL, 8) = 0
rt_sigaction(SIGTERM, NULL, {SIG_DFL, [], 0}, 8) = 0
rt_sigaction(SIGTERM, {SIG_DFL, ~[RTMIN RT_1], SA_RESTORER, 0x7fc33ae0f230}, NULL, 8) = 0
read(10, "#!/bin/sh\n### BEGIN INIT INFO\n# "..., 8192) = 2315
stat("/usr/sbin/avahi-daemon", {st_mode=S_IFREG|0755, st_size=127992, ...}) = 0
geteuid()                               = 0
open("/lib/lsb/init-functions", O_RDONLY) = 3
fcntl(3, F_DUPFD, 10)                   = 11
close(3)                                = 0
fcntl(11, F_SETFD, FD_CLOEXEC)          = 0
read(11, "# /lib/lsb/init-functions for De"..., 8192) = 8192
read(11, "_message (int exitstatus)\nlog_en"..., 8192) = 1592
stat("/etc/lsb-base-logging.sh", 0x7fff85b6d410) = -1 ENOENT (No such file or directory)
read(11, "", 8192)                      = 0
close(11)                               = 0
stat("/etc/default/avahi-daemon", {st_mode=S_IFREG|0644, st_size=219, ...}) = 0
open("/etc/default/avahi-daemon", O_RDONLY) = 3
fcntl(3, F_DUPFD, 10)                   = 11
close(3)                                = 0
fcntl(11, F_SETFD, FD_CLOEXEC)          = 0
read(11, "# 1 = Try to detect unicast dns "..., 8192) = 219
read(11, "", 8192)                      = 0
close(11)                               = 0
write(1, "Starting Avahi mDNS/DNS-SD Daemo"..., 47Starting Avahi mDNS/DNS-SD Daemon: avahi-daemon) = 47
open("/dev/null", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 3
fcntl(1, F_DUPFD, 10)                   = 11
close(1)                                = 0
fcntl(11, F_SETFD, FD_CLOEXEC)          = 0
dup2(3, 1)                              = 1
close(3)                                = 0
fcntl(2, F_DUPFD, 10)                   = 12
close(2)                                = 0
fcntl(12, F_SETFD, FD_CLOEXEC)          = 0
dup2(1, 2)                              = 2
stat("/sbin/modprobe", 0x7fff85b6d530)  = -1 ENOENT (No such file or directory)
stat("/bin/modprobe", 0x7fff85b6d530)   = -1 ENOENT (No such file or directory)
stat("/usr/sbin/modprobe", 0x7fff85b6d530) = -1 ENOENT (No such file or directory)
stat("/usr/bin/modprobe", 0x7fff85b6d530) = -1 ENOENT (No such file or directory)
write(2, "/etc/init.d/avahi-daemon: 102: ", 31) = 31
write(2, "modprobe: not found", 19)     = 19
write(2, "\n", 1)                       = 1
dup2(11, 1)                             = 1
close(11)                               = 0
dup2(12, 2)                             = 2
close(12)                               = 0
clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7fc33b3569d0) = 765
wait4(-1, [{WIFEXITED(s) && WEXITSTATUS(s) == 1}], 0, NULL) = 765
--- SIGCHLD (Child exited) @ 0 (0) ---
stat("/var/run/avahi-daemon/disabled-for-unicast-local", 0x7fff85b6d400) = -1 ENOENT (No such file or directory)
clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7fc33b3569d0) = 766
wait4(-1, Timeout reached while wating for return value
Could not receive return value from daemon process.
[{WIFEXITED(s) && WEXITSTATUS(s) == 255}], 0, NULL) = 766
--- SIGCHLD (Child exited) @ 0 (0) ---
ioctl(1, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig icanon echo ...}) = 0
stat("/usr/bin/tput", {st_mode=S_IFREG|0755, st_size=12192, ...}) = 0
geteuid()                               = 0
stat("/usr/bin/expr", {st_mode=S_IFREG|0755, st_size=101448, ...}) = 0
geteuid()                               = 0
open("/dev/null", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 3
fcntl(1, F_DUPFD, 10)                   = 11
close(1)                                = 0
fcntl(11, F_SETFD, FD_CLOEXEC)          = 0
dup2(3, 1)                              = 1
close(3)                                = 0
fcntl(2, F_DUPFD, 10)                   = 12
close(2)                                = 0
fcntl(12, F_SETFD, FD_CLOEXEC)          = 0
dup2(1, 2)                              = 2
clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7fc33b3569d0) = 769
wait4(-1, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 769
--- SIGCHLD (Child exited) @ 0 (0) ---
dup2(11, 1)                             = 1
close(11)                               = 0
dup2(12, 2)                             = 2
close(12)                               = 0
open("/dev/null", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 3
fcntl(1, F_DUPFD, 10)                   = 11
close(1)                                = 0
fcntl(11, F_SETFD, FD_CLOEXEC)          = 0
dup2(3, 1)                              = 1
close(3)                                = 0
fcntl(2, F_DUPFD, 10)                   = 12
close(2)                                = 0
fcntl(12, F_SETFD, FD_CLOEXEC)          = 0
dup2(1, 2)                              = 2
clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7fc33b3569d0) = 770
wait4(-1, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 770
--- SIGCHLD (Child exited) @ 0 (0) ---
dup2(11, 1)                             = 1
close(11)                               = 0
dup2(12, 2)                             = 2
close(12)                               = 0
pipe([3, 4])                            = 0
clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7fc33b3569d0) = 771
close(4)                                = 0
read(3, "\33[31m", 128)                 = 5
read(3, "", 128)                        = 0
close(3)                                = 0
wait4(-1, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 771
--- SIGCHLD (Child exited) @ 0 (0) ---
pipe([3, 4])                            = 0
clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7fc33b3569d0) = 772
close(4)                                = 0
read(3, "\33[33m", 128)                 = 5
read(3, "", 128)                        = 0
--- SIGCHLD (Child exited) @ 0 (0) ---
close(3)                                = 0
wait4(-1, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 772
pipe([3, 4])                            = 0
clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7fc33b3569d0) = 773
close(4)                                = 0
read(3, "\33[39;49m", 128)              = 8
read(3, "", 128)                        = 0
--- SIGCHLD (Child exited) @ 0 (0) ---
close(3)                                = 0
wait4(-1, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 773
clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7fc33b3569d0) = 774
wait4(-1,  (warning).
[{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 774
--- SIGCHLD (Child exited) @ 0 (0) ---
exit_group(0)                           = ?

harridu commented Jul 18, 2013

Since there is a fork, you might want to consider using "strace -f".
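
For example, something along these lines would also capture the forked children (the output path here is only an illustration):

# follow forks and write the full trace to a file for later inspection
strace -f -o /tmp/avahi-start.strace /etc/init.d/avahi-daemon start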


stgraber commented Aug 7, 2013

Agreed, without -f it's impossible to know exactly what failed.


stgraber commented Sep 4, 2013

Closing, no response in over two months.

stgraber closed this as completed Sep 4, 2013

hallyn commented Nov 12, 2013

avahi's error is misleading. Here is the failure according to strace:

clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7f2419774a10) = -1 EAGAIN (Resource temporarily unavailable)
close(3) = 0
close(4) = 0
write(2, "chroot.c: fork() failed: Resourc"..., 57chroot.c: fork() failed: Resource temporarily unavailable) = 57

hallyn reopened this Nov 12, 2013

hallyn commented Nov 12, 2013

When I hack avahi-daemon/caps.c to keep CAP_SYS_RESOURCE, then avahi works.

I can't yet explain why this only happens in a container.


hallyn commented Nov 12, 2013

Ok, the reason it fails is that the avahi uid in the container is the same as a uid already in use on the host. avahi is very strict about setting the limit for the number of tasks to precisely what it wants. In my case the uid was 104, which was ntp on the host, and ntpd was already running.

I changed the container avahi's userid to 99104, did chown -R avahi /var/run/avahi-daemon (I guess not necessary) and rebooted. Then avahi came up.
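
A rough sketch of that workaround, assuming the uid is changed with usermod inside the container (the comment does not say exactly how it was done):

# inside the container: move avahi to a uid unlikely to be in use on the host
usermod -u 99104 avahi
chown -R avahi /var/run/avahi-daemon   # probably not strictly necessary
reboot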


hallyn commented Nov 12, 2013

So I'm not sure how best to fix this.

There's no way for the container to know what uids won't be in use on the host or another container. The best it could do would be to check whether any tasks are currently running as the uid.
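
Such a check, run where the processes are actually visible (e.g. on the host), might look roughly like this; the uid 104 is just the value from the example above:

# sketch: refuse a candidate avahi uid if any task already runs under it
uid=104
if pgrep -u "$uid" >/dev/null 2>&1; then
    echo "uid $uid already has running tasks; pick another uid for avahi"
fi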

We could globally assign a unique uid for avahi - but then avahi in multiple containers will fail.

We could hack avahi to be more lenient, allowing, say, 100 tasks.

We could make avahi more verbose when it fails this way, so at least the user can try again with a new uid.


hallyn commented Nov 12, 2013

So the simplest way to fix this, I would think, for automated installations, would be to do something like

x=$((9000 + RANDOM % 1000))
adduser --uid $x avahi

before installing avahi-daemon.


hallyn commented Nov 12, 2013

To be clear, the only real solution to this is to run the container in a user namespace.
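
For reference, a minimal sketch of such a mapping in the container config (the syntax shown is the older lxc.id_map form used at the time; newer LXC releases spell it lxc.idmap):

# map container uids/gids 0-65535 onto host ids 100000-165535 so the
# container's avahi uid can never collide with a uid on the host
lxc.id_map = u 0 100000 65536
lxc.id_map = g 0 100000 65536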


hallyn commented Nov 12, 2013

One other trivial workaround is to remove 'rlimit-nproc = 3' from /etc/avahi/avahi-daemon.conf
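
For instance (a sketch; this simply comments the line out in the container's config rather than deleting it):

# comment out the nproc limit in the container's avahi config and restart
sed -i 's/^rlimit-nproc/#rlimit-nproc/' /etc/avahi/avahi-daemon.conf
service avahi-daemon restart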

hallyn closed this as completed Nov 12, 2013
stgraber pushed a commit to stgraber/lxc that referenced this issue Feb 10, 2015
hinnerk added a commit to experimental-platform/platform-hostname-avahi that referenced this issue Dec 17, 2015
mattiash added a commit to mattiash/docker-netatalk that referenced this issue Mar 14, 2016
It seems as if the uid used by avahi conflicts with the uids used on the docker host.
This fix was applied in this bug report:

lxc/lxc#25

It seemed to solve the problem for me
philb pushed a commit to openembedded/openembedded-core that referenced this issue Jul 25, 2016
It sometimes fails to run avahi with error: "Could not receive return value
from daemon process". It has same root cause with
lxc/lxc#25.

Backport patch to fix this issue.

Signed-off-by: Kai Kang <kai.kang@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
ostroproject-ci and mythi pushed a number of commits to ostroproject/ostro-os and mythi/ostro-os that referenced this issue between Jul 26 and Aug 30, 2016, all carrying the same OE-core backport.
alllexx88 added a commit to Optware/Optware-ng that referenced this issue Sep 5, 2016
Add libcap dependency
Fix rlimit-related issue: see lxc/lxc#25 (comment)
lathiat added a commit to avahi/avahi that referenced this issue Feb 5, 2017
By default, avahi-daemon.conf configures rlimit-nproc=3 to limit the
number of processes running to 3.  In some cases, this would prevent
avahi from starting within a container.

It is presumed this was an attempt to limit attack vectors or Denial of
Service potential of an exploited bug in Avahi.

A problem arises (avahi fails to launch) when the same UID is re-used on
the system, such as containers without UID remapping also running avahi.
In particular, setting security.privileged=true on LXD containers causes
this behavior and avahi will fail to launch in containers because the
total number of processes under the avahi UID on the system exceeds 3.

We comment out the default rlimit-nproc=3 setting from avahi-daemon.conf
and update the relevant manpage with this information. (Closes: #51)

References:
https://bugs.launchpad.net/maas/+bug/1661869
https://lists.linuxcontainers.org/pipermail/lxc-users/2016-January/010791.html
lxc/lxc#25
lathiat added a second commit to avahi/avahi with the same change that referenced this issue Feb 5, 2017 (Closes: #97)

dsegan commented Apr 25, 2017

x=$((9000 + RANDOM % 1000))
adduser --uid $x avahi

FWIW, I've adapted this to something that's guaranteed not to conflict by creating a user on the host system first:

$ sudo useradd -r avahi-$LXC_NAME # rely on -r to provide a unique UID in 0-1000 range and not create a homedir...
$ host_avahi_uid=$(id -u avahi-$LXC_NAME)
$ lxc-execute --name=$LXC_NAME -- useradd -r -u $host_avahi_uid avahi

xaiki added a commit to endlessm/eos-data-distribution that referenced this issue Jul 12, 2017
without it we hit  lxc/lxc#25 and most particularly avahi/avahi#97

Signed-off-by: Niv Sardi <xaiki@evilgiggle.com>

djdomi commented Feb 20, 2018

@hallyn
One other trivial workaround is to remove 'rlimit-nproc = 3' from /etc/avahi/avahi-daemon.conf

You're right, this fixes the issue!

daregit pushed two commits to daregit/yocto-combined that referenced this issue May 22, 2024, carrying the same OE-core backport.