Duplicated unprivileged config BEGIN/END block #485

Closed · boltronics opened this issue Oct 21, 2019 · 1 comment

@boltronics commented Oct 21, 2019

Everything seems to work great except for this one troublesome bug, which shows up at the bottom of the container's config file:

$ cat ~/.local/share/lxc/stretch_default_1571630615861_44975/config
# Template used to create this container: /usr/share/rubygems-integration/all/gems/vagrant-lxc-1.4.3/scripts/lxc-template
# Parameters passed to the template: --tarball /home/boltronics/.vagrant.d/boxes/sitepoint-VAGRANTSLASH-debian-stretch-amd64/1.0.2/lxc/rootfs.tar.gz --config /home/boltronics/.vagrant.d/boxes/sitepoint-VAGRANTSLASH-debian-stretch-amd64/1.0.2/lxc/lxc-config
# Template script checksum (SHA-1): eae122a2d6cd572c26668257efa7963c2258186e
# For additional config options, please look at lxc.container.conf(5)

# Uncomment the following line to support nesting containers:
#lxc.include = /usr/share/lxc/config/nesting.conf
# (Be aware this has security implications)


##############################################
# Container specific configuration (automatically set)
lxc.include = /etc/lxc/default.conf
lxc.idmap = u 0 100000 65536
lxc.idmap = g 0 100000 65536
lxc.mount.auto = proc:mixed sys:ro cgroup:mixed
lxc.rootfs.path = dir:/home/boltronics/.local/share/lxc/stretch_default_1571630615861_44975/rootfs
lxc.uts.name = stretch_default_1571630615861_44975

##############################################
# Network configuration (automatically set)

##############################################
# vagrant-lxc base box specific configuration
# Default mount entries
lxc.mount.auto = cgroup:mixed proc:mixed sys:mixed
lxc.mount.entry = /sys/fs/fuse/connections sys/fs/fuse/connections none bind,optional 0 0

# Default console settings
lxc.tty.max = 4
lxc.pty.max = 1024

# Default capabilities
lxc.cap.drop = sys_module mac_admin mac_override sys_time sys_rawio

# When using LXC with apparmor, the container will be confined by default.
# If you wish for it to instead run unconfined, copy the following line
# (uncommented) to the container's configuration file.
#lxc.apparmor.profile = unconfined

# To support container nesting on an Ubuntu host while retaining most of
# apparmor's added security, use the following two lines instead.
#lxc.apparmor.profile = lxc-container-default-with-nesting
#lxc.hook.mount = /usr/share/lxc/hooks/mountcgroups

# If you wish to allow mounting block filesystems, then use the following
# line instead, and make sure to grant access to the block device and/or loop
# devices below in lxc.cgroup.devices.allow.
#lxc.apparmor.profile = lxc-container-default-with-mounting

##############################################
# vagrant-lxc container specific configuration
# VAGRANT-BEGIN
lxc.uts.name=stretch_default_1571630615861_44975
lxc.mount.entry=/sys/fs/pstore sys/fs/pstore none bind,optional 0 0
lxc.mount.entry=tmpfs tmp tmpfs nodev,nosuid,size=2G 0 0
lxc.mount.entry=/home/boltronics/Development/github.com/sitepoint/vmconfig/images/stretch vagrant none bind,create=dir 0 0
# VAGRANT-END
# VAGRANT-BEGIN
lxc.uts.name=stretch_default_1571630615861_44975
lxc.mount.entry=/sys/fs/pstore sys/fs/pstore none bind,optional 0 0
lxc.mount.entry=tmpfs tmp tmpfs nodev,nosuid,size=2G 0 0
lxc.mount.entry=/home/boltronics/Development/github.com/sitepoint/vmconfig/images/stretch vagrant none bind,create=dir 0 0
# VAGRANT-END
# VAGRANT-BEGIN
lxc.uts.name=stretch_default_1571630615861_44975
lxc.mount.entry=/sys/fs/pstore sys/fs/pstore none bind,optional 0 0
lxc.mount.entry=tmpfs tmp tmpfs nodev,nosuid,size=2G 0 0
lxc.mount.entry=/home/boltronics/Development/github.com/sitepoint/vmconfig/images/stretch vagrant none bind,create=dir 0 0
# VAGRANT-END

Every time the guest is stopped and started, a new VAGRANT-BEGIN/VAGRANT-END block is appended. For a minimal config this doesn't matter too much, but some options must not be duplicated, and their repetition causes the container to fail to boot on the second start unless the duplicate blocks are removed from the config file by hand.
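
I haven't dug into the plugin code yet, but I'd guess the fix is to make writing the customizations idempotent: strip any existing VAGRANT-BEGIN/VAGRANT-END region before appending a fresh one. A rough, untested Ruby sketch of that idea (the method and variable names below are mine, not vagrant-lxc's):

def write_customizations(config_path, customizations)
  marker_start = "# VAGRANT-BEGIN"
  marker_end   = "# VAGRANT-END"

  # Drop every previously written block so repeated starts don't accumulate copies.
  inside = false
  kept = File.readlines(config_path).reject do |line|
    if line.start_with?(marker_start)
      inside = true
    elsif inside && line.start_with?(marker_end)
      inside = false
      next true
    end
    inside
  end

  # Re-append a single fresh block of customizations.
  File.open(config_path, "w") do |f|
    f.write(kept.join)
    f.puts marker_start
    customizations.each { |key, value| f.puts "#{key}=#{value}" }
    f.puts marker_end
  end
end

Called in place of a plain append, something along those lines should keep the file stable across restarts.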

This is on Debian 10 (buster), amd64, using the Debian-provided vagrant and vagrant-lxc packages (although the vagrant-lxc package is up to date with the latest stable release, 1.4.3).

The container was created just last week using https://github.com/sitepoint/vagrant-lxc-base-boxes.

$ apt-cache policy vagrant vagrant-lxc | grep -B 1 -A 1 Installed
vagrant:
  Installed: 2.2.3+dfsg-1
  Candidate: 2.2.3+dfsg-1
--
vagrant-lxc:
  Installed: 1.4.3-1
  Candidate: 1.4.3-1
$ lxc-checkconfig 
Kernel configuration not found at /proc/config.gz; searching...
Kernel configuration found at /boot/config-4.19.0-6-amd64
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: enabled

--- Control groups ---
Cgroups: enabled

Cgroup v1 mount points: 
/sys/fs/cgroup/systemd
/sys/fs/cgroup/cpuset
/sys/fs/cgroup/pids
/sys/fs/cgroup/devices
/sys/fs/cgroup/cpu,cpuacct
/sys/fs/cgroup/perf_event
/sys/fs/cgroup/net_cls,net_prio
/sys/fs/cgroup/rdma
/sys/fs/cgroup/blkio
/sys/fs/cgroup/freezer
/sys/fs/cgroup/memory

Cgroup v2 mount points: 
/sys/fs/cgroup/unified

Cgroup v1 clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled

--- Misc ---
Veth pair device: enabled, loaded
Macvlan: enabled, not loaded
Vlan: enabled, loaded
Bridges: enabled, loaded
Advanced netfilter: enabled, loaded
CONFIG_NF_NAT_IPV4: enabled, loaded
CONFIG_NF_NAT_IPV6: enabled, not loaded
CONFIG_IP_NF_TARGET_MASQUERADE: enabled, not loaded
CONFIG_IP6_NF_TARGET_MASQUERADE: enabled, not loaded
CONFIG_NETFILTER_XT_TARGET_CHECKSUM: enabled, loaded
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled, not loaded
FUSE (for use with lxcfs): enabled, loaded

--- Checkpoint/Restore ---
checkpoint restore: enabled
CONFIG_FHANDLE: enabled
CONFIG_EVENTFD: enabled
CONFIG_EPOLL: enabled
CONFIG_UNIX_DIAG: enabled
CONFIG_INET_DIAG: enabled
CONFIG_PACKET_DIAG: enabled
CONFIG_NETLINK_DIAG: enabled
File capabilities: 

Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig
$ cat ~/.config/lxc/default.conf 
lxc.include = /etc/lxc/default.conf

lxc.idmap = u 0 100000 65536
lxc.idmap = g 0 100000 65536
lxc.mount.auto = proc:mixed sys:ro cgroup:mixed
$ cat /etc/lxc/default.conf
#lxc.apparmor.profile = generated
lxc.apparmor.allow_nesting = 1
lxc.net.0.type = veth
lxc.net.0.flags = up
lxc.net.0.link = lxcbr0
$ cat Vagrantfile
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "sitepoint/debian-stretch-amd64"
  config.vm.box_version = "1.0.2"
  config.vm.box_download_checksum_type = "sha256"
  config.vm.box_download_checksum = "c11b28024050ac94457e8b5e746365635a485e08d6c856cf7f2115363807b02d"

  config.vm.provider :lxc do |lxc|
    lxc.privileged = false
  end
end

Note that lxc.uts.name is already set to the correct value in the "# Container specific configuration (automatically set)" block, but it seems to be set again unconditionally. While probably not ideal, that does not appear to be related to the cause of the duplication.
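
In the meantime, the manual cleanup mentioned above is easy to script. This just drops every VAGRANT-BEGIN/VAGRANT-END block while the container is stopped; going by the behaviour above, the next start writes a single fresh block. The path is obviously specific to my container:

config = File.expand_path(
  "~/.local/share/lxc/stretch_default_1571630615861_44975/config")

# Remove every "# VAGRANT-BEGIN" .. "# VAGRANT-END" region, markers included.
text = File.read(config)
cleaned = text.gsub(/^# VAGRANT-BEGIN\n.*?^# VAGRANT-END\n/m, "")
File.write(config, cleaned)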

boltronics added a commit to boltronics/vagrant-lxc that referenced this issue Oct 21, 2019
fgrehm added the ignored label Nov 17, 2022
@fgrehm (Owner) commented Nov 17, 2022

Hey, sorry for the silence here but this project is looking for maintainers 😅

As per #499, I've added the ignored label and will close this issue. Thanks for the interest in the project and LMK if you want to step up and take ownership of this project on that other issue 👋

fgrehm closed this as completed Nov 17, 2022