Skip to content
New issue

Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.

By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.

Already on GitHub? Sign in to your account

PREP_SYSTEM_FOR_DIETPI.sh | Automate #1285

Closed
Fourdee opened this issue Dec 10, 2017 · 191 comments
Closed

PREP_SYSTEM_FOR_DIETPI.sh | Automate #1285

Fourdee opened this issue Dec 10, 2017 · 191 comments

Comments

@Fourdee
Copy link
Collaborator

@Fourdee Fourdee commented Dec 10, 2017

Status: Beta

What is this?

  • This script will install and convert any 'bloated' Debian/Raspbian installation, into a lightweight DietPi system.

What this script does NOT do:

  • Support converting existing installed software (eg: nextcloud, Plex Media Server) over to the DietPi system.
    All existing software (APT) and user data will be deleted.

Step 1: Ensure a Debian/Raspbian OS is running on the system

  • For best results, we recommend a fresh/clean minimal Debian/Raspbian installation
  • Native PC users: please install Debian stable before hand: https://www.debian.org/distrib/netinst
  • Desktop images should work, however, the minimal the image, the quicker the installation, as less packages will need to be removed.
  • NB: We do not support Ubuntu, or have any plans to do so.

Step 2: Pre-req packages

  • Critical packages
    These should already exist on most systems, however pure minimal images may require the following installations)
    apt-get update; apt-get install -y systemd-sysv ca-certificates sudo wget locales --reinstall

Step 3: Run DietPi installer

  • Must be run from SSH or local term
    eg: Outside of desktop environment
  • In case of SSH, to prevent connection loss, an Ethernet connection is required.

Ensure you have elevated privileges (eg: login as root, or use sudo su).

Copy and paste all into term.

wget https://raw.githubusercontent.com/MichaIng/DietPi/master/PREP_SYSTEM_FOR_DIETPI.sh -O PREP_SYSTEM_FOR_DIETPI.sh
#wget https://raw.githubusercontent.com/MichaIng/DietPi/jessie-support/PREP_SYSTEM_FOR_DIETPI.sh -O PREP_SYSTEM_FOR_DIETPI.sh
#wget https://raw.githubusercontent.com/MichaIng/DietPi/beta/PREP_SYSTEM_FOR_DIETPI.sh -O PREP_SYSTEM_FOR_DIETPI.sh
#wget https://raw.githubusercontent.com/MichaIng/DietPi/dev/PREP_SYSTEM_FOR_DIETPI.sh -O PREP_SYSTEM_FOR_DIETPI.sh
chmod +x PREP_SYSTEM_FOR_DIETPI.sh
./PREP_SYSTEM_FOR_DIETPI.sh

Follow the onscreen prompts.

@Fourdee Fourdee added this to the v160 milestone Dec 10, 2017
@Fourdee Fourdee self-assigned this Dec 10, 2017
@MichaIng
Copy link
Owner

@MichaIng MichaIng commented Dec 10, 2017

Just in theory: If we really get this automated secure for at least some devices, this means we do not necessarily need to create images for those. We could just provide the preparation script to upgrade any Debian system into DietPi. By use of apt-mark, as suggested here: #1266, we would be sure that possibly disturbing software is removed and provide our own config files/adjustments for our dependencies. Kernel/bootleader apt packages are generally excluded from autoremove on every device I checked.
As running systems are, yeah running, we could extract boot critical information (especially fstab/boot.ini/network details) from the running system in case and create our configs around those.

Hehe just a little optimistic view into the feature 😆.

@Fourdee
Copy link
Collaborator Author

@Fourdee Fourdee commented Dec 10, 2017

@MichaIng

We could just provide the preparation script to upgrade any Debian system into DietPi.
As running systems are, yeah running, we could extract boot critical information (especially fstab/boot.ini/network details) from the running system in case and create our configs around those.
Hehe just a little optimistic view into the feature

I like where this is going 👍

Fourdee added a commit that referenced this issue Dec 11, 2017
+ Initial start of: #1285

NB: failed replace on $OPTION casing WHIP_OPTION
Fourdee added a commit that referenced this issue Dec 11, 2017
+ Dry run of AGP. continue:
#1285
@Fourdee
Copy link
Collaborator Author

@Fourdee Fourdee commented Dec 12, 2017

@MichaIng

I've not looked into manual holding packages yet. I'am simply converting the old script into automated. Then we can work on it.

@MichaIng
Copy link
Owner

@MichaIng MichaIng commented Dec 12, 2017

@Fourdee
Great work so far, like the idea of setting options by whiptail, first steps into end user direction 😄.

Just consider: #1266

If you generally like the idea, I will recreate the PR for the automated script. Other small adjustments based on #1219 (comment) and following as well.

Just tell, after you implemented you ideas, then I will create PR.

Fourdee added a commit that referenced this issue Dec 13, 2017
+ DietPi-Software | Plex Media Server: Resolved uninstall to include
/var/lib/plexmediaserver in removal (which is not completed via apt
purge).

+ Sparky SBC: Matrix Audio X-SPDIF 2, native DSD is now added to kernel,
many thanks @sudeep: sparkysbc/Linux#3

+ #1285
Fourdee added a commit that referenced this issue Dec 13, 2017
+ Package removal test: #1285
@Fourdee
Copy link
Collaborator Author

@Fourdee Fourdee commented Dec 13, 2017

Ok, so the new package removal system:

Scraped, in favor of @MichaIng APT removal method.

  • We provide a list of essential minimal packages that must be installed for DietPi
  • The script then finds all the dependencies for each package we need and stores it into an array
  • Package removal list, is generated by comparing the currently installed, to the array we generated. If any packages exist that we dont need, DietPi will remove them.

This allows us to exclude the old system of manually removing "known" packages we dont want. Instead, having everything generated automatically, based on the current installed system.

/DietPi/dietpi/PREP_SYSTEM_FOR_DIETPI.sh                         

 [Ok] Generating list of minimal packages required for DietPi installation


 [Ok] Obtaining list of currently installed packages


 [Info] (0): Passed


 [Ok] Generating a list of deps, required for the DietPi packages
This may take some time, please wait...

Checking deps: acl
 - Adding deps: libacl1
 - Adding deps: libattr1
 - Adding deps: libc6
Checking deps: adduser
 - Adding deps: passwd
 - Adding deps: debconf
Checking deps: apt
 - Adding deps: adduser
 - Adding deps: gpgv
 - Adding deps: debian-archive-keyring
 - Adding deps: init-system-helpers
 - Adding deps: libapt-pkg5.0
 - Adding deps: libgcc1
 - Adding deps: libstdc++6
Checking deps: apt-transport-https
 - Adding deps: libcurl3-gnutls
Checking deps: apt-utils
 - Adding deps: apt
 - Adding deps: libapt-inst2.0
 - Adding deps: libdb5.3
Checking deps: base-files
Checking deps: base-passwd
 - Adding deps: libdebconfclient0
Checking deps: bash
 - Adding deps: dash
 - Adding deps: libtinfo5
 - Adding deps: base-files
 - Adding deps: debianutils
Checking deps: bash-completion
 - Adding deps: dpkg
 - Adding deps: bash
Checking deps: bc
 - Adding deps: libreadline7
Checking deps: bsdmainutils
 - Adding deps: libbsd0
 - Adding deps: libncurses5
 - Adding deps: bsdutils
Checking deps: bsdutils
 - Adding deps: libsystemd0
Checking deps: bzip2
 - Adding deps: libbz2-1.0
Checking deps: ca-certificates
 - Adding deps: openssl
Checking deps: console-common
 - Adding deps: console-data
 - Adding deps: kbd
 - Adding deps: lsb-base
Checking deps: console-data
Checking deps: console-setup
 - Adding deps: console-setup-linux
 - Adding deps: console-setup-freebsd
 - Adding deps: xkb-data
 - Adding deps: keyboard-configuration
Checking deps: console-setup-linux
 - Adding deps: initscripts
Checking deps: coreutils
 - Adding deps: libselinux1
Checking deps: cpio
Checking deps: crda
 - Adding deps: libnl-3-200
 - Adding deps: libnl-genl-3-200
 - Adding deps: libssl1.1
 - Adding deps: wireless-regdb
 - Adding deps: iw
Checking deps: cron
 - Adding deps: libpam0g
 - Adding deps: libpam-runtime
Checking deps: curl
 - Adding deps: libcurl3
 - Adding deps: zlib1g
Checking deps: dash
Checking deps: dbus
 - Adding deps: libapparmor1
 - Adding deps: libaudit1
 - Adding deps: libcap-ng0
 - Adding deps: libdbus-1-3
 - Adding deps: libexpat1
Checking deps: debconf
 - Adding deps: perl-base
Checking deps: debian-archive-keyring
Checking deps: debianutils
 - Adding deps: sensible-utils
Checking deps: diffutils
Checking deps: dmidecode
Checking deps: dmsetup
 - Adding deps: libdevmapper1.02.1
Checking deps: dosfstools
 - Adding deps: libudev1
Checking deps: dphys-swapfile
 - Adding deps: dc
Checking deps: dpkg
 - Adding deps: liblzma5
 - Adding deps: tar
Checking deps: e2fslibs
Checking deps: e2fsprogs
 - Adding deps: e2fslibs
 - Adding deps: libblkid1
 - Adding deps: libcomerr2
 - Adding deps: libss2
 - Adding deps: libuuid1
 - Adding deps: util-linux
Checking deps: ethtool
Checking deps: fake-hwclock
Checking deps: fbset
 - Adding deps: udev
 - Adding deps: makedev
Checking deps: findutils
Checking deps: firmware-atheros
Checking deps: firmware-brcm80211
Checking deps: firmware-ralink
Checking deps: firmware-realtek
Checking deps: fuse
 - Adding deps: libfuse2
 - Adding deps: mount
 - Adding deps: sed
Checking deps: gnupg
 - Adding deps: gnupg-agent
 - Adding deps: libassuan0
 - Adding deps: libgcrypt20
 - Adding deps: libgpg-error0
 - Adding deps: libksba8
 - Adding deps: libsqlite3-0
Checking deps: gpgv
Checking deps: grep
 - Adding deps: libpcre3
 - Adding deps: install-info
Checking deps: groff-base
Checking deps: gzip
Checking deps: hdparm
Checking deps: hfsplus
 - Adding deps: libhfsp0
Checking deps: hostname
Checking deps: htop
 - Adding deps: libncursesw5
Checking deps: ifupdown
 - Adding deps: iproute2
Checking deps: init
 - Adding deps: systemd-sysv
 - Adding deps: sysvinit-core
Checking deps: init-system-helpers
Checking deps: initramfs-tools
 - Adding deps: initramfs-tools-core
 - Adding deps: linux-base
Checking deps: initscripts
 - Adding deps: sysvinit-utils
 - Adding deps: sysv-rc
 - Adding deps: file-rc
 - Adding deps: openrc
 - Adding deps: coreutils
Checking deps: insserv
Checking deps: iproute2
 - Adding deps: libelf1
 - Adding deps: libmnl0
Checking deps: iputils-ping
 - Adding deps: libcap2
 - Adding deps: libidn11
 - Adding deps: libnettle6
Checking deps: isc-dhcp-client
 - Adding deps: libdns-export162
 - Adding deps: libisc-export160
Checking deps: isc-dhcp-common
Checking deps: iw
Checking deps: kbd
Checking deps: keyboard-configuration
 - Adding deps: liblocale-gettext-perl
Checking deps: klibc-utils
 - Adding deps: libklibc
Checking deps: kmod
 - Adding deps: libkmod2
Checking deps: less
Checking deps: locales
 - Adding deps: libc-bin
 - Adding deps: libc-l10n
Checking deps: login
 - Adding deps: libpam-modules
Checking deps: lsb-base
Checking deps: mawk
Checking deps: mount
 - Adding deps: libmount1
 - Adding deps: libsmartcols1
Checking deps: multiarch-support
Checking deps: nano
Checking deps: ncurses-base
Checking deps: ncurses-bin
Checking deps: net-tools
Checking deps: ntfs-3g
 - Adding deps: fuse
 - Adding deps: libgnutls30
 - Adding deps: libntfs-3g871
Checking deps: ntp
 - Adding deps: netbase
 - Adding deps: libedit2
 - Adding deps: libopts25
Checking deps: p7zip-full
 - Adding deps: p7zip
Checking deps: parted
 - Adding deps: libparted2
Checking deps: passwd
 - Adding deps: libsemanage1
Checking deps: perl-base
Checking deps: procps
 - Adding deps: libprocps6
Checking deps: psmisc
Checking deps: readline-common
Checking deps: resolvconf
 - Adding deps: ifupdown
Checking deps: sed
Checking deps: sensible-utils
Checking deps: startpar
Checking deps: sudo
Checking deps: systemd
 - Adding deps: libcryptsetup4
 - Adding deps: libip4tc0
 - Adding deps: liblz4-1
 - Adding deps: libseccomp2
 - Adding deps: procps
Checking deps: systemd-sysv
 - Adding deps: systemd
Checking deps: sysvinit-utils
Checking deps: tar
Checking deps: tzdata
Checking deps: udev
Checking deps: unzip
Checking deps: usbutils
 - Adding deps: libusb-1.0-0
Checking deps: util-linux
 - Adding deps: libfdisk1
Checking deps: wget
 - Adding deps: libpsl5
Checking deps: whiptail
 - Adding deps: libnewt0.52
 - Adding deps: libpopt0
 - Adding deps: libslang2
Checking deps: wireless-regdb
Checking deps: wireless-tools
 - Adding deps: libiw30
Checking deps: wpasupplicant
 - Adding deps: libpcsclite1
 - Adding deps: libssl1.0.2
Checking deps: wput
 - Adding deps: libgnutls-openssl27
Checking deps: zip
Checking deps: intel-microcode
 - Adding deps: iucode-tool
Checking deps: amd64-microcode
Checking deps: firmware-linux-nonfree
 - Adding deps: firmware-misc-nonfree
 - Adding deps: firmware-amd-graphics

 [Ok] Generating a list of packages, not required by DietPi, to be removed from system.
This may take some time, please wait...


 [Info] The following packages will be removed
adwaita-icon-theme alsa-utils bind9-host binutils build-essential checkinstall cifs-utils cmake cmake-data cpp cpp-6 cryptsetup cryptsetup-bin dh-python dhcpcd5 dialog discover discover-data distro-info-data dnsmasq dnsmasq-base dnsutils dropbear dropbear-bin dropbear-initramfs dropbear-run efibootmgr fakeroot file fontconfig fontconfig-config fonts-dejavu-core g++ g++-6 galculator gcc gettext-base git git-man glib-networking:amd64 glib-networking-common glib-networking-services gparted gpicview grub2-common gsettings-desktop-schemas gtk-update-icon-cache hicolor-icon-theme installation-report iptables irqbalance laptop-detect leafpad light-locker lightdm lightdm-gtk-greeter lighttpd linux-libc-dev:amd64 lsb-release lsof lxappearance lxappearance-obconf lxde lxde-common lxde-core lxde-icon-theme lxhotkey-core lxhotkey-gtk lxinput lxlock lxmenu-data lxpanel lxpanel-data lxpolkit lxrandr lxsession lxsession-data lxsession-edit lxsession-logout lxterminal make manpages manpages-dev mime-support mysql-common netcat netcat-traditional openbox openbox-lxde-session os-prober patch pciutils pcmanfm perl perl-modules-5.24 php-apcu php-apcu-bc php-cgi php-common php-curl php-fpm php-gd php-mbstring php-mcrypt php-xml php-zip php7.0-cgi php7.0-cli php7.0-common php7.0-curl php7.0-fpm php7.0-gd php7.0-json php7.0-mbstring php7.0-mcrypt php7.0-opcache php7.0-readline php7.0-xml php7.0-zip pinentry-curses policykit-1 python python-minimal python-talloc python2.7 python2.7-minimal python3 python3-minimal python3.5 python3.5-minimal samba-common samba-libs:amd64 sgml-base shared-mime-info smbclient spawn-fcgi sysbench tcl tcl8.6 tigervnc-common tigervnc-standalone-server tk tk8.6 ucf upower vnc4server wamerican x11-common x11-utils x11-xkb-utils x11-xserver-utils x11vnc x11vnc-data xarchiver xauth xbitmaps xcompmgr xdg-user-dirs xfonts-base xfonts-encodings xfonts-utils xinit xml-core xserver-common xserver-xorg xserver-xorg-core xserver-xorg-input-all xserver-xorg-input-libinput xserver-xorg-video-all xserver-xorg-video-amdgpu xserver-xorg-video-ati xserver-xorg-video-fbdev xserver-xorg-video-nouveau xserver-xorg-video-radeon xserver-xorg-video-vesa xserver-xorg-video-vmware xterm xz-utils zerofree


 [Info] The following packages will be installed
acl adduser apt apt-transport-https apt-utils base-files base-passwd bash bash-completion bc bsdmainutils bsdutils bzip2 ca-certificates console-common console-data console-setup console-setup-linux coreutils cpio crda cron curl dash dbus debconf debian-archive-keyring debianutils diffutils dmidecode dmsetup dosfstools dphys-swapfile dpkg e2fslibs e2fsprogs ethtool fake-hwclock fbset findutils firmware-atheros firmware-brcm80211 firmware-ralink firmware-realtek fuse gnupg gpgv grep groff-base gzip hdparm hfsplus hostname htop ifupdown init init-system-helpers initramfs-tools initscripts insserv iproute2 iputils-ping isc-dhcp-client isc-dhcp-common iw kbd keyboard-configuration klibc-utils kmod less locales login lsb-base mawk mount multiarch-support nano ncurses-base ncurses-bin net-tools ntfs-3g ntp p7zip-full parted passwd perl-base procps psmisc readline-common resolvconf sed sensible-utils startpar sudo systemd systemd-sysv sysvinit-utils tar tzdata udev unzip usbutils util-linux wget whiptail wireless-regdb wireless-tools wpasupplicant wput zip intel-microcode amd64-microcode firmware-linux-nonfree

Fourdee added a commit that referenced this issue Dec 14, 2017
+ Switch to @MichaIng APT removal method, much more efficent (scrap
array based removal): #1285

+ Buster DISTRO
@Fourdee
Copy link
Collaborator Author

@Fourdee Fourdee commented Dec 14, 2017

Buster notes:

  • I've disabled option for now.

RPI:

W: The repository 'https://archive.raspberrypi.org/debian buster Release' does not have a Release file.
N: Data from such a repository can't be authenticated and is therefore potentially dangerous to use.
N: See apt-secure(8) manpage for repository creation and user configuration details.
E: Failed to fetch https://archive.raspberrypi.org/debian/dists/buster/main/binary-armhf/Packages  404  Not Found

C2:

W: The repository 'http://ftp.debian.org/debian buster-backports Release' does not have a Release file.
N: Data from such a repository can't be authenticated and is therefore potentially dangerous to use.
N: See apt-secure(8) manpage for repository creation and user configuration details.
E: Failed to fetch http://ftp.debian.org/debian/dists/buster-backports/main/binary-arm64/Packages  404  Not Found [IP: 130.89.148.12 80]
E: Some index files failed to download. They have been ignored, or old ones used instead.
@Fourdee
Copy link
Collaborator Author

@Fourdee Fourdee commented Dec 14, 2017

Note to self:

ToDo:

  • Automate TZ/Locale??? Had issues with this in the past. Need to research it again.
  • Add ARMbian linux kernel to install list, or, add a general "do not remove" linux-* to script?
@Fourdee
Copy link
Collaborator Author

@Fourdee Fourdee commented Dec 14, 2017

Ready for early internal testing to fix bugs and improve.

Not stable enough for public users, yet.

Tested:

  • 🈯️ Odroid C2 Stretch
@MichaIng
Copy link
Owner

@MichaIng MichaIng commented Dec 14, 2017

@Fourdee

Buster notes:

I've disabled option for now.

  • Yes, the raspberrypi.org repo was always behind: https://archive.raspberrypi.org/debian/dists/ I remember stretch distro to appear there just around it became release for raspbian, long after it became release for debian and long after I switched already to stretch with my RPi. Jessie distro worked fine there, as it mainly provides just firmware/kernel/bootloader. I will upgrade my RPi to buster after doing some tests on my buster VM to then tell more reliably if buster + raspberrypi.org stretch repos work together. We need to further implement buster for this into DietPi, but let's first get the automated script ready for public testing.
  • About buster debian repo: Of course no buster-backport exists, as it is already the testing repo with up-to-date packages 😆: http://ftp.debian.org/debian/dists/ Okay sid also exists, but so far this looks just like a total experimental developer playground, with also many old (pre-)jessie packages (like php5...) inside. So for this, we can just comment out buster-backports and choose the main repo always inside our software scripts.
  • About debian repo generally: Found this: https://deb.debian.org/
    It is already default on new debian installations and seems to be kind of mirror director. Works well, even is reachable via https, as the link shows. Only on Jessie I got the following errors on apt update, if https was used. With http everything works fine:
W: Size of file /var/lib/apt/lists/partial/deb.debian.org_debian_dists_jessie-updates_main_binary-amd64_Packages.diff_Index is not what the server reported 9376 345
W: Size of file /var/lib/apt/lists/partial/deb.debian.org_debian_dists_jessie-updates_non-free_binary-amd64_Packages.diff_Index is not what the server reported 736 349
W: Size of file /var/lib/apt/lists/partial/deb.debian.org_debian_dists_jessie-updates_main_i18n_Translation-en.diff_Index is not what the server reported 3688 343
W: Size of file /var/lib/apt/lists/partial/deb.debian.org_debian_dists_jessie-updates_non-free_i18n_Translation-en.diff_Index is not what the server reported 736 347
W: Size of file /var/lib/apt/lists/partial/deb.debian.org_debian_dists_jessie-backports_main_binary-amd64_Packages.diff_Index is not what the server reported 27796 347
W: Size of file /var/lib/apt/lists/partial/deb.debian.org_debian_dists_jessie-backports_contrib_binary-amd64_Packages.diff_Index is not what the server reported 24964 15799
W: Size of file /var/lib/apt/lists/partial/deb.debian.org_debian_dists_jessie-backports_non-free_binary-amd64_Packages.diff_Index is not what the server reported 22858 351
W: Size of file /var/lib/apt/lists/partial/deb.debian.org_debian_dists_jessie-backports_main_i18n_Translation-en.diff_Index is not what the server reported 27796 345
@Fourdee
Copy link
Collaborator Author

@Fourdee Fourdee commented Dec 15, 2017

@MichaIng

Legend 👍

In regards to mirror:

root@DietPi:~# cat /etc/apt/sources.list
deb http://deb.debian.org/debian stretch main contrib non-free
deb http://deb.debian.org/security stretch main contrib non-free
deb http://ftp.debian.org/debian/ stretch-backports main contrib non-free

1st run

Ign:28 http://deb.debian.org/security stretch/non-free Translation-en
Err:14 http://deb.debian.org/security stretch/main amd64 Packages
  404  Not Found [IP: 151.101.0.204 80]
Ign:15 http://deb.debian.org/security stretch/main all Packages

2nd run

Ign:20 http://deb.debian.org/security stretch/non-free Translation-en_GB
Err:6 http://deb.debian.org/security stretch/main i386 Packages
  404  Not Found [IP: 151.101.0.204 80]
Ign:7 http://deb.debian.org/security stretch/main amd64 Packages

This is why I dislike using mirror services. They are unreliable.

@Fourdee
Copy link
Collaborator Author

@Fourdee Fourdee commented Dec 15, 2017

Moved:

@Fourdee
About the dependency list:

  • I just wrote a little script that shows installed dependants of all installed packages. With this we can further reduce our dependency list to the final packages that are really directly used by dietpi / usual system operations. Here your dependency list where I added 🈯️ > dependant, if one within our anyway installed packages exist:
    'acl' 🈺 > systemd on Jessie, but not on Stretch+Buster, so we really need this, getfacl or setfacl
    'adduser' 🈯️ > systemd+apt+cron+udev+several other packages, depending on distro
    'apt' 🈯️ > apt-utils+apt-transport-https
    'apt-transport-https'
    'apt-utils'
    'base-files' 🈯️ > bash
    'base-passwd' 🈯️ essential package will be never autoremoved
    'bash' 🈯️ > bash-completion + essential
    'bash-completion'
    'bc'
    'bsdmainutils' 🈺 Do we need this? Was not in the list before!
    'bsdutils' 🈯️ essential
    'bzip2'
    'ca-certificates'
    'console-common' 🈺 console-setup supersedes or depends on other console related packages
    'console-data' 🈺 console-setup supersedes or depends on other console related packages
    'console-setup'
    'console-setup-linux' 🈺 console-setup supersedes or depends on other console related packages
    'coreutils' 🈯️ essential
    'cpio' 🈯️ (> initramfs-tools-core) > initramfs-tools
    'crda'
    'cron'
    'curl'
    'dash' 🈯️ > bash > bash-completion
    'dbus'
    'debconf' 🈯️ > debconf-utils
    'debian-archive-keyring' 🈯️ > apt and wrong on Raspbian, where raspbian-archive-keyring takes place!
    'debianutils' 🈯️ > cron+bash+dash+some others, depending on distro
    'diffutils' 🈯️ essential
    'dmidecode' 🈺 Do we need this? Was not in the list before!
    'dmsetup' 🈯️ > > > parted
    'dosfstools' 🈺 Could be left out on VMs, as we do not have fat boot partition there.
    'dphys-swapfile'
    'dpkg' 🈯️ > cron+bash-completion+others+anyway essential
    'e2fslibs' 🈯️ > e2fsprogs
    'e2fsprogs' 🈯️ essential
    'ethtool'
    'fake-hwclock'
    'fbset'
    'findutils' 🈯️ essential
    'firmware-atheros' 🈺 was x86 only before?
    'firmware-brcm80211' 🈺 was x86 only before?
    'firmware-ralink' 🈺 was x86 only before?
    'firmware-realtek' 🈺 was x86 only before?
    'firmware-misc-nonfree' 🈺 was x86 only before?
    'fuse' 🈺 Didn't see this before, do we need it? https://packages.debian.org/de/stretch/fuse
    'gnupg' 🈺 We used this before, but gpgv seems to be a lightweight version of it.
    'gpgv' 🈺 Superseded by gnupg, but more lightweight, maybe we need to switch?
    'grep' 🈯️ essential
    'groff-base' 🈺 Do we need this? Was not in the list before!
    'gzip' 🈯️ essential
    'hdparm'
    'hfsplus'
    'hostname' 🈯️ essential
    'htop'
    'ifupdown' 🈯️ > resolvconf
    'init' 🈯️ essential
    'init-system-helpers' 🈯️ > init and essential
    'initramfs-tools'
    'initscripts' 🈺 Was dependency of systemd on Jessie, but since Stretch not anymore. I guess systemd now supersedes that package.
    'insserv' 🈺 same than initscripts, don't have any of them on my systems (besides Jessie VM) and absolutely no issues.
    'iproute2' 🈯️ > ifupdown > resolvconf
    'iputils-ping' 🈺 Helpful indeed sometimes, but not necessary? Also superseded by curl, wget etc?
    'isc-dhcp-client'
    'isc-dhcp-common' 🈯️ This is just man pages!
    'iw'
    'kbd' 🈺 console-setup supersedes or depends on other console related packages
    'keyboard-configuration' 🈺 console-setup supersedes or depends on other console related packages
    'klibc-utils' 🈯️ initramfs-tools-core > initramfs-tools
    'kmod' 🈯️ (> initramfs-tools-core) > initramfs-tools
    'less' 🈺 Helpful indeed sometimes, but not necessary?
    'locales'
    'login' 🈯️ essential
    'lsb-base' 🈯️ > cron+udev+swapfile+resolvconf+others
    'mawk' 🈯️ > dash > bash
    'mount' 🈯️ > systemd
    'multiarch-support' 🈯️ > hundrets of libs, also e2fslibs > e2fsprogs, which is essential
    'nano' not necessary, but yeah I use this several times a day, everybody needs a simple editor!
    'ncurses-base' 🈯️ essential
    'ncurses-bin' 🈯️ essential
    'net-tools'
    'ntfs-3g'
    'ntp'
    'p7zip-full'
    'parted'
    'passwd' 🈯️ > adduser
    'perl-base' 🈯️ > init-system-helpers+debconf > debconf-utils
    'procps' 🈯️ > udev+systemd
    'psmisc'
    'readline-common' 🈯️ > parted+gnu(pg)
    'resolvconf'
    'rfkill' #Used by some onboard WiFi chipsets
    'rsync' 🈯️ Not necessary here, as it will be installed on demand by dietpi-backup/sync
    'sed' 🈯️ essential
    'sensible-utils' 🈺 Dependency of debianutils until buster. Just wondered that it is not autoremoved on my buster VM (marked as auto), but no dependency anymore and not essential. Just purged it on my Buster VM without any warnings or additional removed packages. Let's see what happens 😄.
    'startpar' 🈺 Was dependency of sysv-rc > systemd on Jessie, but since Stretch, systemd does not depend on sysv-rc anymore. Seems to solve the tasks differently now (same as insserv)?
    'sudo'
    'systemd' 🈯️ > systemd-sysv > init
    'systemd-sysv' 🈯️ > init
    'sysvinit-utils' 🈯️ essential
    'tar' 🈯️ essential
    'tzdata'
    'udev' 🈯️ > initramfs-tools-core > initramfs-tools
    'unzip'
    'usbutils'
    'util-linux' 🈯️ > udev+systemd+others+essential
    'wget'
    'whiptail'
    'wireless-regdb' 🈯️ > crda
    'wireless-tools'
    'wpasupplicant'
    'wput'
    'zip'

By removing the green ones from the list, there should be no additional packages removed. By removing also some of the orange ones, you get the list I used in my PR: https://github.com/Fourdee/DietPi/pull/1266/files#diff-e9e0c6c64b4937739e13ffac58f4a888R74

@MichaIng
Copy link
Owner

@MichaIng MichaIng commented Dec 15, 2017

@Fourdee

Err:14 http://deb.debian.org/security stretch/main amd64 Packages
404 Not Found [IP: 151.101.0.204 80]

Hehe this is just the wrong URL, use: deb https://deb.debian.org/debian-security/ <distro>/updates main contrib non-free


Going on through the script:


But yeah everything not much more than cosmetic. I can do PR if you want. Otherwise REALLY great work!! 👍 👍 👍 Also found the new fstab creation. Looks really professional! I will test the script tomorrow on my VMs.


If we are already touching this:

  • Perhaps we could also allow choosing whether to install wifi related packages, like for bt. We could actually move them (Wifi and Bt) into an own software stack to allow removing and installing them.
  • Would love to see nfs/samba/ftp log files and mount folders created on demand.

But both definitely no high priority 😉

@Fourdee
Copy link
Collaborator Author

@Fourdee Fourdee commented Dec 15, 2017

@MichaIng

Epic list #1285 (comment), many thanks for that 👍


In regards to relying on deps to pull in packages we manually listed (eg: sed)
In general, there is no harm is us manually listing additional packages that DietPi uses in its scripts, instead of relying on other packages to pull it in.

Benefits of listing (example sed):

  • We can see what packages DietPi fully requires/uses to function
  • By not relying on deps, if Debian were to change sed essential deps overnight, no risk of breaking script.

Firmware:

Covers us for all WiFi chipsets etc across all devices. We really should have it in for all images.

firmware-misc-nonfree can be x86_64 only, mostly intel/nvidia stuff https://packages.debian.org/stretch/firmware-misc-nonfree


Fuse

Required for ntfs-3g https://packages.debian.org/de/stretch/ntfs-3g:

root@DietPi:~# apt-get purge fuse
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be REMOVED:
  fuse* ntfs-3g*
0 upgraded, 0 newly installed, 2 to remove and 1 not upgraded.
After this operation, 1,241 kB disk space will be freed.
Do you want to continue? [Y/n]
Fourdee added a commit that referenced this issue Dec 15, 2017
Tweaks based on @MichaIng feedback:
#1285 (comment)
@MichaIng
Copy link
Owner

@MichaIng MichaIng commented May 7, 2020

@Joulinar
Many thanks for reporting. I re-added RPi as default selection to DietPi-PREP. Indeed it is better to start at the top of the list than at the bottom: e5d2dd6

I found a reason why DietPi-PREP created a wrong sources.list when one chooses to do a distro upgrade: 5e98abc
However, the RPi should have been correctly auto-detected and hence the right Raspbian entry added. Could you paste:

cat /boot/dietpi/.hw_model
/boot/dietpi/func/dietpi-obtain_hw_model
cat /boot/dietpi/.hw_model
/boot/dietpi/func/dietpi-set_software apt-mirror default
cat /etc/apt/sources.list
@Joulinar
Copy link
Collaborator

@Joulinar Joulinar commented May 7, 2020

damn, I did the same as yesterday, but I was not able to reproduce the error with the incorrect source list. 🤔

Anyway I end up with the missing rootFS entry and a ro file system 😟

@MichaIng
Copy link
Owner

@MichaIng MichaIng commented May 8, 2020

@Joulinar
The sources list according to my review should have been only incorrect when you choose to upgrade the distro, e.g. from a Stretch image to Buster. I could not derive why an RPi should not be identified correctly, but will run a test build on true RPi later (still waiting for ISP technician...)

@Joulinar
Copy link
Collaborator

@Joulinar Joulinar commented May 8, 2020

@MichaIng
today I was able to replicate the issue. Thats the output you requested

root@DietPi:/tmp/DietPi-PREP# cat /boot/dietpi/.hw_model
G_HW_MODEL=22
G_HW_MODEL_NAME='Generic Device (armv7l)'
G_HW_ARCH=2
G_HW_ARCH_NAME='armv7l'
G_HW_CPUID=0
G_HW_CPU_CORES=4
G_DISTRO=5
G_DISTRO_NAME='buster'
G_ROOTFS_DEV='/dev/mmcblk0p2'
G_HW_UUID='0db1c30f-9dd9-4037-b459-b8ce61a59e49'
root@DietPi:/tmp/DietPi-PREP#
root@DietPi:/tmp/DietPi-PREP# /boot/dietpi/func/dietpi-obtain_hw_model
root@DietPi:/tmp/DietPi-PREP#
root@DietPi:/tmp/DietPi-PREP# cat /boot/dietpi/.hw_model
G_HW_MODEL=22
G_HW_MODEL_NAME='Generic Device (armv7l)'
G_HW_ARCH=2
G_HW_ARCH_NAME='armv7l'
G_HW_CPUID=0
G_HW_CPU_CORES=4
G_DISTRO=5
G_DISTRO_NAME='buster'
G_ROOTFS_DEV='/dev/mmcblk0p2'
G_HW_UUID='0db1c30f-9dd9-4037-b459-b8ce61a59e49'
root@DietPi:/tmp/DietPi-PREP#
root@DietPi:/tmp/DietPi-PREP# /boot/dietpi/func/dietpi-set_software apt-mirror default
[ SUB1 ] DietPi-Set_software > apt-mirror (default)
[  OK  ] DietPi-Set_software | Desired setting in /boot/dietpi.txt was already set: CONFIG_APT_DEBIAN_MIRROR=https://deb.debian.org/debian/
[  OK  ] apt-mirror https://deb.debian.org/debian/ | Completed
root@DietPi:/tmp/DietPi-PREP#
root@DietPi:/tmp/DietPi-PREP# cat /etc/apt/sources.list
deb https://deb.debian.org/debian/ buster main contrib non-free
deb https://deb.debian.org/debian/ buster-updates main contrib non-free
deb https://deb.debian.org/debian-security/ buster/updates main contrib non-free
deb https://deb.debian.org/debian/ buster-backports main contrib non-free
root@DietPi:/tmp/DietPi-PREP#

and that's the input I did during script execution

[ INFO ] DietPi-PREP | -----------------------------------------------------------------------------------
[  OK  ] DietPi-PREP | Step 1: Target system inputs
[ INFO ] DietPi-PREP | -----------------------------------------------------------------------------------
[ INFO ] DietPi-PREP | Entered image creator: bla
[ INFO ] DietPi-PREP | Entered pre-image info: bla
[ INFO ] DietPi-PREP | Selected hardware model ID: 0
[ INFO ] DietPi-PREP | Detected CPU architecture: armv7l (ID: 2)
[ INFO ] DietPi-PREP | Marking WiFi as NOT required
[ INFO ] DietPi-PREP | Disabled distro downgrade to: Stretch
[ INFO ] DietPi-PREP | Selected Debian version: buster (ID: 5)

looks like it's detecting a Generic Device, which is not correct.

@MichaIng
Copy link
Owner

@MichaIng MichaIng commented May 8, 2020

@Joulinar
Found and fixed: 028c38b

@Joulinar
Copy link
Collaborator

@Joulinar Joulinar commented May 9, 2020

@MichaIng
ok seems working again. I used dev branch PREP script and selected dev branch as target. 👍
but now going to close the shop for today 🤣

@rondadon
Copy link

@rondadon rondadon commented May 17, 2020

Hey there!

I love dietpi and its low profile. Running it on several ARM Devices and I wanted to convert a freshly installed debian system on a VPS (x64 IONOS) to dietpi.

It seems that the conversion completed successfully but after the reboot it dosent boot up.
The KVM Console gives an error which indicates, that initially on the stock debian install the device "/dev/mapper/vg00-lv01" mounted to / isn't available anymore and so the boot up process couldn't mount it to / .

Is there a way to implement an option at the beginning of the script where you can choose the system for IONOS VPS ?

Some other guy had the same issue and created a thread in the dietpi forums
LINK TO FORUM

KVM Console Output:

Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
....
done.
Gave up waiting for suspend/resume device
done.
Begin: Waiting for root file system ... Begin: Running /scripts/local-block ... done.
done.
Gave up waiting for root file system device. Common problems:
 - Boot args (cat /proc/cmdline)
    - Check rootdelay= (did the system wait long enough?)
 - Missing modules (cat /proc/modules; ls /dev)
ALERT! /dev/mapper/vg00-lv01 does not exist. Dropping to a shell!
(initramfs)

df -h from a freshly installed debian on the VPS:

root@localhost:~# df -h
Filesystem             Size  Used Avail Use% Mounted on
udev                   214M     0  214M   0% /dev
tmpfs                   46M  5.2M   41M  12% /run
/dev/mapper/vg00-lv01  7.5G  1.3G  5.9G  18% /
tmpfs                  230M     0  230M   0% /dev/shm
tmpfs                  5.0M     0  5.0M   0% /run/lock
tmpfs                  230M     0  230M   0% /sys/fs/cgroup
/dev/sda1              464M   84M  352M  20% /boot
tmpfs                   46M     0   46M   0% /run/user/0

Would be nice if someone could help to resolve this issue. Would also test it and provide feedback.

Thank you very much!
Have a nice day

@Joulinar
Copy link
Collaborator

@Joulinar Joulinar commented May 17, 2020

Hi,

Something to check with Ionos as already recommended on the forum. Probably they don't support installations like DietPi on their platform.

@MichaIng
Copy link
Owner

@MichaIng MichaIng commented May 17, 2020

@rondadon
The root file system is an LVM volume. I think this requires an additional APT package: https://packages.debian.org/buster/lvm2
The /etc/fstab entries seem to be the same, although I read about a recommendation to use disk labels instead of UUIDs: https://wiki.debian.org/fstab#Defining_filesystems

Can you run and paste the output of the following commands from the fresh VPS image?

cat /etc/fstab
df -a
blkid
lsblk
@rondadon
Copy link

@rondadon rondadon commented May 17, 2020

Hi there @MichaIng ... Thank you for trying to help! I appreciate it!
Hope this could be easily solved with installing an additional package!

Sure I can paste the output of said commands. Here they are!

root@localhost:~# cat /etc/fstab

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
/dev/mapper/vg00-lv01 /               ext4    errors=remount-ro 0       1
# /boot was on /dev/sda1 during installation
UUID=0fde5837-626f-4cb3-ba89-e0cae8601f55 /boot           ext4    defaults        0       2
/dev/mapper/vg00-lv00 none            swap    sw              0       0
/dev/sr0        /media/cdrom0   udf,iso9660 user,noauto     0       0

root@localhost:~# df -a

Filesystem            1K-blocks    Used Available Use% Mounted on
sysfs                         0       0         0    - /sys
proc                          0       0         0    - /proc
udev                     219008       0    219008   0% /dev
devpts                        0       0         0    - /dev/pts
tmpfs                     46992    5320     41672  12% /run
/dev/mapper/vg00-lv01   7792568 1603464   5774796  22% /
securityfs                    0       0         0    - /sys/kernel/security
tmpfs                    234952       0    234952   0% /dev/shm
tmpfs                      5120       0      5120   0% /run/lock
tmpfs                    234952       0    234952   0% /sys/fs/cgroup
cgroup2                       0       0         0    - /sys/fs/cgroup/unified
cgroup                        0       0         0    - /sys/fs/cgroup/systemd
pstore                        0       0         0    - /sys/fs/pstore
bpf                           0       0         0    - /sys/fs/bpf
cgroup                        0       0         0    - /sys/fs/cgroup/pids
cgroup                        0       0         0    - /sys/fs/cgroup/net_cls,net_prio
cgroup                        0       0         0    - /sys/fs/cgroup/memory
cgroup                        0       0         0    - /sys/fs/cgroup/perf_event
cgroup                        0       0         0    - /sys/fs/cgroup/devices
cgroup                        0       0         0    - /sys/fs/cgroup/blkio
cgroup                        0       0         0    - /sys/fs/cgroup/cpu,cpuacct
cgroup                        0       0         0    - /sys/fs/cgroup/freezer
cgroup                        0       0         0    - /sys/fs/cgroup/rdma
cgroup                        0       0         0    - /sys/fs/cgroup/cpuset
systemd-1                     0       0         0    - /proc/sys/fs/binfmt_misc
debugfs                       0       0         0    - /sys/kernel/debug
hugetlbfs                     0       0         0    - /dev/hugepages
mqueue                        0       0         0    - /dev/mqueue
/dev/sda1                474712   85236    360446  20% /boot
tmpfs                     46988       0     46988   0% /run/user/1000

root@localhost:~# blkid

/dev/sda1: UUID="0fde5837-626f-4cb3-ba89-e0cae8601f55" TYPE="ext4" PARTUUID="bc873d4a-01"
/dev/sda2: UUID="msy2Zk-rDPu-iXjf-jabJ-9SVQ-1WnZ-kkOKYS" TYPE="LVM2_member" PARTUUID="bc873d4a-02"
/dev/mapper/vg00-lv01: UUID="a74fdb8e-18cb-4438-8652-a900525cf565" TYPE="ext4"
/dev/mapper/vg00-lv00: UUID="99a6095c-5433-471c-b8f7-b7de434d6921" TYPE="swap"

root@localhost:~# lsblk

NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda             8:0    0   10G  0 disk 
├─sda1          8:1    0  487M  0 part /boot
└─sda2          8:2    0  9.5G  0 part 
  ├─vg00-lv01 254:0    0  7.6G  0 lvm  /
  └─vg00-lv00 254:1    0  1.9G  0 lvm  [SWAP]
sr0            11:0    1 1024M  0 rom  

Again: Thank you for your help. 👍 Have a nice sunday!

@MichaIng
Copy link
Owner

@MichaIng MichaIng commented May 17, 2020

@rondadon
Many thanks, this indeed allows me to implement LVM support:

df /dev/mapper/vg00-lv01   7792568 1603464   5774796  22% /
  • If any /dev/mapper/ source is mounted, install lvm2 APT package.
# blkid
/dev/sda2: UUID="msy2Zk-rDPu-iXjf-jabJ-9SVQ-1WnZ-kkOKYS" TYPE="LVM2_member" PARTUUID="bc873d4a-02"
  • Skip detected drive if fs type is LVM2_member
# /etc/fstab
/dev/mapper/vg00-lv01 /               ext4    errors=remount-ro 0       1
  • Add /dev/mapper/ mount sources via mount source instead of UUID to fstab.

But I'll not find time quickly to implement this ℹ️.

Generally the fstab entry works with UUID as well and should be added like this already, so actually the VPS should boot if your run DietPi-PREP and run apt install lvm2 manually afterwards. And as failsafe step check /etc/fstab first if you see the the rootfs / mount entry. The swap partition will be missing but can be re-added afterwards manually.

@rondadon
Copy link

@rondadon rondadon commented May 17, 2020

@MichaIng :

Wow... Thank you very much for your time and help.

So I just need to run the DietPi-PREP Script and install lvm2 package manually after the script finished before I reboot the System.

Surely I would check /etc/fstab before rebooting.. And mounting swap will be necessary because the VPS has only 512MB Ram which is totally enough for the Service I will run on the VPS but still SWAP will be necessary. I will let you know how it went!

And not pressure regarding implementing it into the DietPi-PREP Script.
Let me know if I could provide any help.

Man, thank you again. I have no words for how thankful I am!

:)

@MichaIng
Copy link
Owner

@MichaIng MichaIng commented May 17, 2020

@rondadon
Do not thank me too early, so far this is just an assumption of what I think "should" work 😉.

Ah please also check if the mapper device is still present: ls -Al /dev/mapper/
Not sure if some lvm2 command needs to run first to create it or it it is auto-generated on package install.


Please also check the following config files before running DietPi-PREP (if it's not yet too late):

cat /etc/lvm/lvm.conf
cat /etc/lvm/lvmlocal.conf

Probably the device mapping must be defined there.

@rondadon
Copy link

@rondadon rondadon commented May 17, 2020

@MichaIng
Hehe, I'm allready thankful for your time and help, even when it dosen't work at the end.
I didn't ran the script, still. Will do it in an hour or so.

cat /etc/lvm/lvm.conf

CLICK ME TO EXPAND

# It contains the default settings that would be used if there was no
# /etc/lvm/lvm.conf file.
#
# Refer to 'man lvm.conf' for further information including the file layout.
#
# Refer to 'man lvm.conf' for information about how settings configured in
# this file are combined with built-in values and command line options to
# arrive at the final values used by LVM.
#
# Refer to 'man lvmconfig' for information about displaying the built-in
# and configured values used by LVM.
#
# If a default value is set in this file (not commented out), then a
# new version of LVM using this file will continue using that value,
# even if the new version of LVM changes the built-in default value.
#
# To put this file in a different directory and override /etc/lvm set
# the environment variable LVM_SYSTEM_DIR before running the tools.
#
# N.B. Take care that each setting only appears once if uncommenting
# example settings in this file.


# Configuration section config.
# How LVM configuration settings are handled.
config {

	# Configuration option config/checks.
	# If enabled, any LVM configuration mismatch is reported.
	# This implies checking that the configuration key is understood by
	# LVM and that the value of the key is the proper type. If disabled,
	# any configuration mismatch is ignored and the default value is used
	# without any warning (a message about the configuration key not being
	# found is issued in verbose mode only).
	checks = 1

	# Configuration option config/abort_on_errors.
	# Abort the LVM process if a configuration mismatch is found.
	abort_on_errors = 0

	# Configuration option config/profile_dir.
	# Directory where LVM looks for configuration profiles.
	profile_dir = "/etc/lvm/profile"
}

# Configuration section devices.
# How LVM uses block devices.
devices {

	# Configuration option devices/dir.
	# Directory in which to create volume group device nodes.
	# Commands also accept this as a prefix on volume group names.
	# This configuration option is advanced.
	dir = "/dev"

	# Configuration option devices/scan.
	# Directories containing device nodes to use with LVM.
	# This configuration option is advanced.
	scan = [ "/dev" ]

	# Configuration option devices/obtain_device_list_from_udev.
	# Obtain the list of available devices from udev.
	# This avoids opening or using any inapplicable non-block devices or
	# subdirectories found in the udev directory. Any device node or
	# symlink not managed by udev in the udev directory is ignored. This
	# setting applies only to the udev-managed device directory; other
	# directories will be scanned fully. LVM needs to be compiled with
	# udev support for this setting to apply.
	obtain_device_list_from_udev = 1

	# Configuration option devices/external_device_info_source.
	# Select an external device information source.
	# Some information may already be available in the system and LVM can
	# use this information to determine the exact type or use of devices it
	# processes. Using an existing external device information source can
	# speed up device processing as LVM does not need to run its own native
	# routines to acquire this information. For example, this information
	# is used to drive LVM filtering like MD component detection, multipath
	# component detection, partition detection and others.
	# 
	# Accepted values:
	#   none
	#     No external device information source is used.
	#   udev
	#     Reuse existing udev database records. Applicable only if LVM is
	#     compiled with udev support.
	# 
	external_device_info_source = "none"

	# Configuration option devices/preferred_names.
	# Select which path name to display for a block device.
	# If multiple path names exist for a block device, and LVM needs to
	# display a name for the device, the path names are matched against
	# each item in this list of regular expressions. The first match is
	# used. Try to avoid using undescriptive /dev/dm-N names, if present.
	# If no preferred name matches, or if preferred_names are not defined,
	# the following built-in preferences are applied in order until one
	# produces a preferred name:
	# Prefer names with path prefixes in the order of:
	# /dev/mapper, /dev/disk, /dev/dm-*, /dev/block.
	# Prefer the name with the least number of slashes.
	# Prefer a name that is a symlink.
	# Prefer the path with least value in lexicographical order.
	# 
	# Example
	# preferred_names = [ "^/dev/mpath/", "^/dev/mapper/mpath", "^/dev/[hs]d" ]
	# 
	# This configuration option does not have a default value defined.

	# Configuration option devices/filter.
	# Limit the block devices that are used by LVM commands.
	# This is a list of regular expressions used to accept or reject block
	# device path names. Each regex is delimited by a vertical bar '|'
	# (or any character) and is preceded by 'a' to accept the path, or
	# by 'r' to reject the path. The first regex in the list to match the
	# path is used, producing the 'a' or 'r' result for the device.
	# When multiple path names exist for a block device, if any path name
	# matches an 'a' pattern before an 'r' pattern, then the device is
	# accepted. If all the path names match an 'r' pattern first, then the
	# device is rejected. Unmatching path names do not affect the accept
	# or reject decision. If no path names for a device match a pattern,
	# then the device is accepted. Be careful mixing 'a' and 'r' patterns,
	# as the combination might produce unexpected results (test changes.)
	# Run vgscan after changing the filter to regenerate the cache.
	# 
	# Example
	# Accept every block device:
	# filter = [ "a|.*/|" ]
	# Reject the cdrom drive:
	# filter = [ "r|/dev/cdrom|" ]
	# Work with just loopback devices, e.g. for testing:
	# filter = [ "a|loop|", "r|.*|" ]
	# Accept all loop devices and ide drives except hdc:
	# filter = [ "a|loop|", "r|/dev/hdc|", "a|/dev/ide|", "r|.*|" ]
	# Use anchors to be very specific:
	# filter = [ "a|^/dev/hda8$|", "r|.*/|" ]
	# 
	# This configuration option has an automatic default value.
	# filter = [ "a|.*/|" ]

	# Configuration option devices/global_filter.
	# Limit the block devices that are used by LVM system components.
	# Because devices/filter may be overridden from the command line, it is
	# not suitable for system-wide device filtering, e.g. udev.
	# Use global_filter to hide devices from these LVM system components.
	# The syntax is the same as devices/filter. Devices rejected by
	# global_filter are not opened by LVM.
	# This configuration option has an automatic default value.
	# global_filter = [ "a|.*/|" ]

	# Configuration option devices/types.
	# List of additional acceptable block device types.
	# These are of device type names from /proc/devices, followed by the
	# maximum number of partitions.
	# 
	# Example
	# types = [ "fd", 16 ]
	# 
	# This configuration option is advanced.
	# This configuration option does not have a default value defined.

	# Configuration option devices/sysfs_scan.
	# Restrict device scanning to block devices appearing in sysfs.
	# This is a quick way of filtering out block devices that are not
	# present on the system. sysfs must be part of the kernel and mounted.)
	sysfs_scan = 1

	# Configuration option devices/scan_lvs.
	# Scan LVM LVs for layered PVs.
	scan_lvs = 1

	# Configuration option devices/multipath_component_detection.
	# Ignore devices that are components of DM multipath devices.
	multipath_component_detection = 1

	# Configuration option devices/md_component_detection.
	# Ignore devices that are components of software RAID (md) devices.
	md_component_detection = 1

	# Configuration option devices/fw_raid_component_detection.
	# Ignore devices that are components of firmware RAID devices.
	# LVM must use an external_device_info_source other than none for this
	# detection to execute.
	fw_raid_component_detection = 0

	# Configuration option devices/md_chunk_alignment.
	# Align the start of a PV data area with md device's stripe-width.
	# This applies if a PV is placed directly on an md device.
	# default_data_alignment will be overriden if it is not aligned
	# with the value detected for this setting.
	# This setting is overriden by data_alignment_detection,
	# data_alignment, and the --dataalignment option.
	md_chunk_alignment = 1

	# Configuration option devices/default_data_alignment.
	# Align the start of a PV data area with this number of MiB.
	# Set to 1 for 1MiB, 2 for 2MiB, etc. Set to 0 to disable.
	# This setting is overriden by data_alignment and the --dataalignment
	# option.
	# This configuration option has an automatic default value.
	# default_data_alignment = 1

	# Configuration option devices/data_alignment_detection.
	# Align the start of a PV data area with sysfs io properties.
	# The start of a PV data area will be a multiple of minimum_io_size or
	# optimal_io_size exposed in sysfs. minimum_io_size is the smallest
	# request the device can perform without incurring a read-modify-write
	# penalty, e.g. MD chunk size. optimal_io_size is the device's
	# preferred unit of receiving I/O, e.g. MD stripe width.
	# minimum_io_size is used if optimal_io_size is undefined (0).
	# If md_chunk_alignment is enabled, that detects the optimal_io_size.
	# default_data_alignment and md_chunk_alignment will be overriden
	# if they are not aligned with the value detected for this setting.
	# This setting is overriden by data_alignment and the --dataalignment
	# option.
	data_alignment_detection = 1

	# Configuration option devices/data_alignment.
	# Align the start of a PV data area with this number of KiB.
	# When non-zero, this setting overrides default_data_alignment.
	# Set to 0 to disable, in which case default_data_alignment
	# is used to align the first PE in units of MiB.
	# This setting is overriden by the --dataalignment option.
	data_alignment = 0

	# Configuration option devices/data_alignment_offset_detection.
	# Shift the start of an aligned PV data area based on sysfs information.
	# After a PV data area is aligned, it will be shifted by the
	# alignment_offset exposed in sysfs. This offset is often 0, but may
	# be non-zero. Certain 4KiB sector drives that compensate for windows
	# partitioning will have an alignment_offset of 3584 bytes (sector 7
	# is the lowest aligned logical block, the 4KiB sectors start at
	# LBA -1, and consequently sector 63 is aligned on a 4KiB boundary).
	# This setting is overriden by the --dataalignmentoffset option.
	data_alignment_offset_detection = 1

	# Configuration option devices/ignore_suspended_devices.
	# Ignore DM devices that have I/O suspended while scanning devices.
	# Otherwise, LVM waits for a suspended device to become accessible.
	# This should only be needed in recovery situations.
	ignore_suspended_devices = 0

	# Configuration option devices/ignore_lvm_mirrors.
	# Do not scan 'mirror' LVs to avoid possible deadlocks.
	# This avoids possible deadlocks when using the 'mirror' segment type.
	# This setting determines whether LVs using the 'mirror' segment type
	# are scanned for LVM labels. This affects the ability of mirrors to
	# be used as physical volumes. If this setting is enabled, it is
	# impossible to create VGs on top of mirror LVs, i.e. to stack VGs on
	# mirror LVs. If this setting is disabled, allowing mirror LVs to be
	# scanned, it may cause LVM processes and I/O to the mirror to become
	# blocked. This is due to the way that the mirror segment type handles
	# failures. In order for the hang to occur, an LVM command must be run
	# just after a failure and before the automatic LVM repair process
	# takes place, or there must be failures in multiple mirrors in the
	# same VG at the same time with write failures occurring moments before
	# a scan of the mirror's labels. The 'mirror' scanning problems do not
	# apply to LVM RAID types like 'raid1' which handle failures in a
	# different way, making them a better choice for VG stacking.
	ignore_lvm_mirrors = 1

	# Configuration option devices/require_restorefile_with_uuid.
	# Allow use of pvcreate --uuid without requiring --restorefile.
	require_restorefile_with_uuid = 1

	# Configuration option devices/pv_min_size.
	# Minimum size in KiB of block devices which can be used as PVs.
	# In a clustered environment all nodes must use the same value.
	# Any value smaller than 512KiB is ignored. The previous built-in
	# value was 512.
	pv_min_size = 2048

	# Configuration option devices/issue_discards.
	# Issue discards to PVs that are no longer used by an LV.
	# Discards are sent to an LV's underlying physical volumes when the LV
	# is no longer using the physical volumes' space, e.g. lvremove,
	# lvreduce. Discards inform the storage that a region is no longer
	# used. Storage that supports discards advertise the protocol-specific
	# way discards should be issued by the kernel (TRIM, UNMAP, or
	# WRITE SAME with UNMAP bit set). Not all storage will support or
	# benefit from discards, but SSDs and thinly provisioned LUNs
	# generally do. If enabled, discards will only be issued if both the
	# storage and kernel provide support.
	issue_discards = 0

	# Configuration option devices/allow_changes_with_duplicate_pvs.
	# Allow VG modification while a PV appears on multiple devices.
	# When a PV appears on multiple devices, LVM attempts to choose the
	# best device to use for the PV. If the devices represent the same
	# underlying storage, the choice has minimal consequence. If the
	# devices represent different underlying storage, the wrong choice
	# can result in data loss if the VG is modified. Disabling this
	# setting is the safest option because it prevents modifying a VG
	# or activating LVs in it while a PV appears on multiple devices.
	# Enabling this setting allows the VG to be used as usual even with
	# uncertain devices.
	allow_changes_with_duplicate_pvs = 0
}

# Configuration section allocation.
# How LVM selects space and applies properties to LVs.
allocation {

	# Configuration option allocation/cling_tag_list.
	# Advise LVM which PVs to use when searching for new space.
	# When searching for free space to extend an LV, the 'cling' allocation
	# policy will choose space on the same PVs as the last segment of the
	# existing LV. If there is insufficient space and a list of tags is
	# defined here, it will check whether any of them are attached to the
	# PVs concerned and then seek to match those PV tags between existing
	# extents and new extents.
	# 
	# Example
	# Use the special tag "@*" as a wildcard to match any PV tag:
	# cling_tag_list = [ "@*" ]
	# LVs are mirrored between two sites within a single VG, and
	# PVs are tagged with either @site1 or @site2 to indicate where
	# they are situated:
	# cling_tag_list = [ "@site1", "@site2" ]
	# 
	# This configuration option does not have a default value defined.

	# Configuration option allocation/maximise_cling.
	# Use a previous allocation algorithm.
	# Changes made in version 2.02.85 extended the reach of the 'cling'
	# policies to detect more situations where data can be grouped onto
	# the same disks. This setting can be used to disable the changes
	# and revert to the previous algorithm.
	maximise_cling = 1

	# Configuration option allocation/use_blkid_wiping.
	# Use blkid to detect and erase existing signatures on new PVs and LVs.
	# The blkid library can detect more signatures than the native LVM
	# detection code, but may take longer. LVM needs to be compiled with
	# blkid wiping support for this setting to apply. LVM native detection
	# code is currently able to recognize: MD device signatures,
	# swap signature, and LUKS signatures. To see the list of signatures
	# recognized by blkid, check the output of the 'blkid -k' command.
	use_blkid_wiping = 1

	# Configuration option allocation/wipe_signatures_when_zeroing_new_lvs.
	# Look for and erase any signatures while zeroing a new LV.
	# The --wipesignatures option overrides this setting.
	# Zeroing is controlled by the -Z/--zero option, and if not specified,
	# zeroing is used by default if possible. Zeroing simply overwrites the
	# first 4KiB of a new LV with zeroes and does no signature detection or
	# wiping. Signature wiping goes beyond zeroing and detects exact types
	# and positions of signatures within the whole LV. It provides a
	# cleaner LV after creation as all known signatures are wiped. The LV
	# is not claimed incorrectly by other tools because of old signatures
	# from previous use. The number of signatures that LVM can detect
	# depends on the detection code that is selected (see
	# use_blkid_wiping.) Wiping each detected signature must be confirmed.
	# When this setting is disabled, signatures on new LVs are not detected
	# or erased unless the --wipesignatures option is used directly.
	wipe_signatures_when_zeroing_new_lvs = 1

	# Configuration option allocation/mirror_logs_require_separate_pvs.
	# Mirror logs and images will always use different PVs.
	# The default setting changed in version 2.02.85.
	mirror_logs_require_separate_pvs = 0

	# Configuration option allocation/raid_stripe_all_devices.
	# Stripe across all PVs when RAID stripes are not specified.
	# If enabled, all PVs in the VG or on the command line are used for
	# raid0/4/5/6/10 when the command does not specify the number of
	# stripes to use.
	# This was the default behaviour until release 2.02.162.
	# This configuration option has an automatic default value.
	# raid_stripe_all_devices = 0

	# Configuration option allocation/cache_pool_metadata_require_separate_pvs.
	# Cache pool metadata and data will always use different PVs.
	cache_pool_metadata_require_separate_pvs = 0

	# Configuration option allocation/cache_metadata_format.
	# Sets default metadata format for new cache.
	# 
	# Accepted values:
	#   0  Automatically detected best available format
	#   1  Original format
	#   2  Improved 2nd. generation format
	# 
	# This configuration option has an automatic default value.
	# cache_metadata_format = 0

	# Configuration option allocation/cache_mode.
	# The default cache mode used for new cache.
	# 
	# Accepted values:
	#   writethrough
	#     Data blocks are immediately written from the cache to disk.
	#   writeback
	#     Data blocks are written from the cache back to disk after some
	#     delay to improve performance.
	# 
	# This setting replaces allocation/cache_pool_cachemode.
	# This configuration option has an automatic default value.
	# cache_mode = "writethrough"

	# Configuration option allocation/cache_policy.
	# The default cache policy used for new cache volume.
	# Since kernel 4.2 the default policy is smq (Stochastic multiqueue),
	# otherwise the older mq (Multiqueue) policy is selected.
	# This configuration option does not have a default value defined.

	# Configuration section allocation/cache_settings.
	# Settings for the cache policy.
	# See documentation for individual cache policies for more info.
	# This configuration section has an automatic default value.
	# cache_settings {
	# }

	# Configuration option allocation/cache_pool_chunk_size.
	# The minimal chunk size in KiB for cache pool volumes.
	# Using a chunk_size that is too large can result in wasteful use of
	# the cache, where small reads and writes can cause large sections of
	# an LV to be mapped into the cache. However, choosing a chunk_size
	# that is too small can result in more overhead trying to manage the
	# numerous chunks that become mapped into the cache. The former is
	# more of a problem than the latter in most cases, so the default is
	# on the smaller end of the spectrum. Supported values range from
	# 32KiB to 1GiB in multiples of 32.
	# This configuration option does not have a default value defined.

	# Configuration option allocation/cache_pool_max_chunks.
	# The maximum number of chunks in a cache pool.
	# For cache target v1.9 the recommended maximumm is 1000000 chunks.
	# Using cache pool with more chunks may degrade cache performance.
	# This configuration option does not have a default value defined.

	# Configuration option allocation/thin_pool_metadata_require_separate_pvs.
	# Thin pool metdata and data will always use different PVs.
	thin_pool_metadata_require_separate_pvs = 0

	# Configuration option allocation/thin_pool_zero.
	# Thin pool data chunks are zeroed before they are first used.
	# Zeroing with a larger thin pool chunk size reduces performance.
	# This configuration option has an automatic default value.
	# thin_pool_zero = 1

	# Configuration option allocation/thin_pool_discards.
	# The discards behaviour of thin pool volumes.
	# 
	# Accepted values:
	#   ignore
	#   nopassdown
	#   passdown
	# 
	# This configuration option has an automatic default value.
	# thin_pool_discards = "passdown"

	# Configuration option allocation/thin_pool_chunk_size_policy.
	# The chunk size calculation policy for thin pool volumes.
	# 
	# Accepted values:
	#   generic
	#     If thin_pool_chunk_size is defined, use it. Otherwise, calculate
	#     the chunk size based on estimation and device hints exposed in
	#     sysfs - the minimum_io_size. The chunk size is always at least
	#     64KiB.
	#   performance
	#     If thin_pool_chunk_size is defined, use it. Otherwise, calculate
	#     the chunk size for performance based on device hints exposed in
	#     sysfs - the optimal_io_size. The chunk size is always at least
	#     512KiB.
	# 
	# This configuration option has an automatic default value.
	# thin_pool_chunk_size_policy = "generic"

	# Configuration option allocation/thin_pool_chunk_size.
	# The minimal chunk size in KiB for thin pool volumes.
	# Larger chunk sizes may improve performance for plain thin volumes,
	# however using them for snapshot volumes is less efficient, as it
	# consumes more space and takes extra time for copying. When unset,
	# lvm tries to estimate chunk size starting from 64KiB. Supported
	# values are in the range 64KiB to 1GiB.
	# This configuration option does not have a default value defined.

	# Configuration option allocation/physical_extent_size.
	# Default physical extent size in KiB to use for new VGs.
	# This configuration option has an automatic default value.
	# physical_extent_size = 4096

	# Configuration option allocation/vdo_use_compression.
	# Enables or disables compression when creating a VDO volume.
	# Compression may be disabled if necessary to maximize performance
	# or to speed processing of data that is unlikely to compress.
	# This configuration option has an automatic default value.
	# vdo_use_compression = 1

	# Configuration option allocation/vdo_use_deduplication.
	# Enables or disables deduplication when creating a VDO volume.
	# Deduplication may be disabled in instances where data is not expected
	# to have good deduplication rates but compression is still desired.
	# This configuration option has an automatic default value.
	# vdo_use_deduplication = 1

	# Configuration option allocation/vdo_emulate_512_sectors.
	# Specifies that the VDO volume is to emulate a 512 byte block device.
	# This configuration option has an automatic default value.
	# vdo_emulate_512_sectors = 0

	# Configuration option allocation/vdo_block_map_cache_size_mb.
	# Specifies the amount of memory in MiB allocated for caching block map
	# pages for VDO volume. The value must be a multiple of 4096 and must be
	# at least 128MiB and less than 16TiB. The cache must be at least 16MiB
	# per logical thread. Note that there is a memory overhead of 15%.
	# This configuration option has an automatic default value.
	# vdo_block_map_cache_size_mb = 128

	# Configuration option allocation/vdo_block_map_period.
	# Tunes the quantity of block map updates that can accumulate
	# before cache pages are flushed to disk. The value must be
	# at least 1 and less then 16380.
	# A lower value means shorter recovery time but lower performance.
	# This configuration option has an automatic default value.
	# vdo_block_map_period = 16380

	# Configuration option allocation/vdo_check_point_frequency.
	# The default check point frequency for VDO volume.
	# This configuration option has an automatic default value.
	# vdo_check_point_frequency = 0

	# Configuration option allocation/vdo_use_sparse_index.
	# Enables sparse indexing for VDO volume.
	# This configuration option has an automatic default value.
	# vdo_use_sparse_index = 0

	# Configuration option allocation/vdo_index_memory_size_mb.
	# Specifies the amount of index memory in MiB for VDO volume.
	# The value must be at least 256MiB and at most 1TiB.
	# This configuration option has an automatic default value.
	# vdo_index_memory_size_mb = 256

	# Configuration option allocation/vdo_use_read_cache.
	# Enables or disables the read cache within the VDO volume.
	# The cache should be enabled if write workloads are expected
	# to have high levels of deduplication, or for read intensive
	# workloads of highly compressible data.
	# This configuration option has an automatic default value.
	# vdo_use_read_cache = 0

	# Configuration option allocation/vdo_read_cache_size_mb.
	# Specifies the extra VDO volume read cache size in MiB.
	# This space is in addition to a system-defined minimum.
	# The value must be less then 16TiB and 1.12 MiB of memory
	# will be used per MiB of read cache specified, per bio thread.
	# This configuration option has an automatic default value.
	# vdo_read_cache_size_mb = 0

	# Configuration option allocation/vdo_slab_size_mb.
	# Specifies the size in MiB of the increment by which a VDO is grown.
	# Using a smaller size constrains the total maximum physical size
	# that can be accommodated. Must be a power of two between 128MiB and 32GiB.
	# This configuration option has an automatic default value.
	# vdo_slab_size_mb = 2048

	# Configuration option allocation/vdo_ack_threads.
	# Specifies the number of threads to use for acknowledging
	# completion of requested VDO I/O operations.
	# The value must be at in range [0..100].
	# This configuration option has an automatic default value.
	# vdo_ack_threads = 1

	# Configuration option allocation/vdo_bio_threads.
	# Specifies the number of threads to use for submitting I/O
	# operations to the storage device of VDO volume.
	# The value must be in range [1..100]
	# Each additional thread after the first will use an additional 18MiB of RAM,
	# plus 1.12 MiB of RAM per megabyte of configured read cache size.
	# This configuration option has an automatic default value.
	# vdo_bio_threads = 1

	# Configuration option allocation/vdo_bio_rotation.
	# Specifies the number of I/O operations to enqueue for each bio-submission
	# thread before directing work to the next. The value must be in range [1..1024].
	# This configuration option has an automatic default value.
	# vdo_bio_rotation = 64

	# Configuration option allocation/vdo_cpu_threads.
	# Specifies the number of threads to use for CPU-intensive work such as
	# hashing or compression for VDO volume. The value must be in range [1..100]
	# This configuration option has an automatic default value.
	# vdo_cpu_threads = 2

	# Configuration option allocation/vdo_hash_zone_threads.
	# Specifies the number of threads across which to subdivide parts of the VDO
	# processing based on the hash value computed from the block data.
	# The value must be at in range [0..100].
	# vdo_hash_zone_threads, vdo_logical_threads and vdo_physical_threads must be
	# either all zero or all non-zero.
	# This configuration option has an automatic default value.
	# vdo_hash_zone_threads = 1

	# Configuration option allocation/vdo_logical_threads.
	# Specifies the number of threads across which to subdivide parts of the VDO
	# processing based on the hash value computed from the block data.
	# A logical thread count of 9 or more will require explicitly specifying
	# a sufficiently large block map cache size, as well.
	# The value must be in range [0..100].
	# vdo_hash_zone_threads, vdo_logical_threads and vdo_physical_threads must be
	# either all zero or all non-zero.
	# This configuration option has an automatic default value.
	# vdo_logical_threads = 1

	# Configuration option allocation/vdo_physical_threads.
	# Specifies the number of threads across which to subdivide parts of the VDO
	# processing based on physical block addresses.
	# Each additional thread after the first will use an additional 10MiB of RAM.
	# The value must be in range [0..16].
	# vdo_hash_zone_threads, vdo_logical_threads and vdo_physical_threads must be
	# either all zero or all non-zero.
	# This configuration option has an automatic default value.
	# vdo_physical_threads = 1

	# Configuration option allocation/vdo_write_policy.
	# Specifies the write policy:
	# auto  - VDO will check the storage device and determine whether it supports flushes.
	#         If it does, VDO will run in async mode, otherwise it will run in sync mode.
	# sync  - Writes are acknowledged only after data is stably written.
	#         This policy is not supported if the underlying storage is not also synchronous.
	# async - Writes are acknowledged after data has been cached for writing to stable storage.
	#         Data which has not been flushed is not guaranteed to persist in this mode.
	# This configuration option has an automatic default value.
	# vdo_write_policy = "auto"
}

# Configuration section log.
# How LVM log information is reported.
log {

	# Configuration option log/report_command_log.
	# Enable or disable LVM log reporting.
	# If enabled, LVM will collect a log of operations, messages,
	# per-object return codes with object identification and associated
	# error numbers (errnos) during LVM command processing. Then the
	# log is either reported solely or in addition to any existing
	# reports, depending on LVM command used. If it is a reporting command
	# (e.g. pvs, vgs, lvs, lvm fullreport), then the log is reported in
	# addition to any existing reports. Otherwise, there's only log report
	# on output. For all applicable LVM commands, you can request that
	# the output has only log report by using --logonly command line
	# option. Use log/command_log_cols and log/command_log_sort settings
	# to define fields to display and sort fields for the log report.
	# You can also use log/command_log_selection to define selection
	# criteria used each time the log is reported.
	# This configuration option has an automatic default value.
	# report_command_log = 0

	# Configuration option log/command_log_sort.
	# List of columns to sort by when reporting command log.
	# See <lvm command> --logonly --configreport log -o help
	# for the list of possible fields.
	# This configuration option has an automatic default value.
	# command_log_sort = "log_seq_num"

	# Configuration option log/command_log_cols.
	# List of columns to report when reporting command log.
	# See <lvm command> --logonly --configreport log -o help
	# for the list of possible fields.
	# This configuration option has an automatic default value.
	# command_log_cols = "log_seq_num,log_type,log_context,log_object_type,log_object_name,log_object_id,log_object_group,log_object_group_id,log_message,log_errno,log_ret_code"

	# Configuration option log/command_log_selection.
	# Selection criteria used when reporting command log.
	# You can define selection criteria that are applied each
	# time log is reported. This way, it is possible to control the
	# amount of log that is displayed on output and you can select
	# only parts of the log that are important for you. To define
	# selection criteria, use fields from log report. See also
	# <lvm command> --logonly --configreport log -S help for the
	# list of possible fields and selection operators. You can also
	# define selection criteria for log report on command line directly
	# using <lvm command> --configreport log -S <selection criteria>
	# which has precedence over log/command_log_selection setting.
	# For more information about selection criteria in general, see
	# lvm(8) man page.
	# This configuration option has an automatic default value.
	# command_log_selection = "!(log_type=status && message=success)"

	# Configuration option log/verbose.
	# Controls the messages sent to stdout or stderr.
	verbose = 0

	# Configuration option log/silent.
	# Suppress all non-essential messages from stdout.
	# This has the same effect as -qq. When enabled, the following commands
	# still produce output: dumpconfig, lvdisplay, lvmdiskscan, lvs, pvck,
	# pvdisplay, pvs, version, vgcfgrestore -l, vgdisplay, vgs.
	# Non-essential messages are shifted from log level 4 to log level 5
	# for syslog and lvm2_log_fn purposes.
	# Any 'yes' or 'no' questions not overridden by other arguments are
	# suppressed and default to 'no'.
	silent = 0

	# Configuration option log/syslog.
	# Send log messages through syslog.
	syslog = 1

	# Configuration option log/file.
	# Write error and debug log messages to a file specified here.
	# This configuration option does not have a default value defined.

	# Configuration option log/overwrite.
	# Overwrite the log file each time the program is run.
	overwrite = 0

	# Configuration option log/level.
	# The level of log messages that are sent to the log file or syslog.
	# There are 6 syslog-like log levels currently in use: 2 to 7 inclusive.
	# 7 is the most verbose (LOG_DEBUG).
	level = 0

	# Configuration option log/indent.
	# Indent messages according to their severity.
	indent = 1

	# Configuration option log/command_names.
	# Display the command name on each line of output.
	command_names = 0

	# Configuration option log/prefix.
	# A prefix to use before the log message text.
	# (After the command name, if selected).
	# Two spaces allows you to see/grep the severity of each message.
	# To make the messages look similar to the original LVM tools use:
	# indent = 0, command_names = 1, prefix = " -- "
	prefix = "  "

	# Configuration option log/activation.
	# Log messages during activation.
	# Don't use this in low memory situations (can deadlock).
	activation = 0

	# Configuration option log/debug_classes.
	# Select log messages by class.
	# Some debugging messages are assigned to a class and only appear in
	# debug output if the class is listed here. Classes currently
	# available: memory, devices, io, activation, allocation,
	# metadata, cache, locking, lvmpolld. Use "all" to see everything.
	debug_classes = [ "memory", "devices", "io", "activation", "allocation", "metadata", "cache", "locking", "lvmpolld", "dbus" ]
}

# Configuration section backup.
# How LVM metadata is backed up and archived.
# In LVM, a 'backup' is a copy of the metadata for the current system,
# and an 'archive' contains old metadata configurations. They are
# stored in a human readable text format.
backup {

	# Configuration option backup/backup.
	# Maintain a backup of the current metadata configuration.
	# Think very hard before turning this off!
	backup = 1

	# Configuration option backup/backup_dir.
	# Location of the metadata backup files.
	# Remember to back up this directory regularly!
	backup_dir = "/etc/lvm/backup"

	# Configuration option backup/archive.
	# Maintain an archive of old metadata configurations.
	# Think very hard before turning this off.
	archive = 1

	# Configuration option backup/archive_dir.
	# Location of the metdata archive files.
	# Remember to back up this directory regularly!
	archive_dir = "/etc/lvm/archive"

	# Configuration option backup/retain_min.
	# Minimum number of archives to keep.
	retain_min = 10

	# Configuration option backup/retain_days.
	# Minimum number of days to keep archive files.
	retain_days = 30
}

# Configuration section shell.
# Settings for running LVM in shell (readline) mode.
shell {

	# Configuration option shell/history_size.
	# Number of lines of history to store in ~/.lvm_history.
	history_size = 100
}

# Configuration section global.
# Miscellaneous global LVM settings.
global {

	# Configuration option global/umask.
	# The file creation mask for any files and directories created.
	# Interpreted as octal if the first digit is zero.
	umask = 077

	# Configuration option global/test.
	# No on-disk metadata changes will be made in test mode.
	# Equivalent to having the -t option on every command.
	test = 0

	# Configuration option global/units.
	# Default value for --units argument.
	units = "r"

	# Configuration option global/si_unit_consistency.
	# Distinguish between powers of 1024 and 1000 bytes.
	# The LVM commands distinguish between powers of 1024 bytes,
	# e.g. KiB, MiB, GiB, and powers of 1000 bytes, e.g. KB, MB, GB.
	# If scripts depend on the old behaviour, disable this setting
	# temporarily until they are updated.
	si_unit_consistency = 1

	# Configuration option global/suffix.
	# Display unit suffix for sizes.
	# This setting has no effect if the units are in human-readable form
	# (global/units = "h") in which case the suffix is always displayed.
	suffix = 1

	# Configuration option global/activation.
	# Enable/disable communication with the kernel device-mapper.
	# Disable to use the tools to manipulate LVM metadata without
	# activating any logical volumes. If the device-mapper driver
	# is not present in the kernel, disabling this should suppress
	# the error messages.
	activation = 1

	# Configuration option global/segment_libraries.
	# This configuration option does not have a default value defined.

	# Configuration option global/proc.
	# Location of proc filesystem.
	# This configuration option is advanced.
	proc = "/proc"

	# Configuration option global/etc.
	# Location of /etc system configuration directory.
	etc = "/etc"

	# Configuration option global/wait_for_locks.
	# When disabled, fail if a lock request would block.
	wait_for_locks = 1

	# Configuration option global/locking_dir.
	# Directory to use for LVM command file locks.
	# Local non-LV directory that holds file-based locks while commands are
	# in progress. A directory like /tmp that may get wiped on reboot is OK.
	locking_dir = "/run/lock/lvm"

	# Configuration option global/prioritise_write_locks.
	# Allow quicker VG write access during high volume read access.
	# When there are competing read-only and read-write access requests for
	# a volume group's metadata, instead of always granting the read-only
	# requests immediately, delay them to allow the read-write requests to
	# be serviced. Without this setting, write access may be stalled by a
	# high volume of read-only requests. This option only affects
	# locking_type 1 viz. local file-based locking.
	prioritise_write_locks = 1

	# Configuration option global/library_dir.
	# Search this directory first for shared libraries.
	# This configuration option does not have a default value defined.

	# Configuration option global/abort_on_internal_errors.
	# Abort a command that encounters an internal error.
	# Treat any internal errors as fatal errors, aborting the process that
	# encountered the internal error. Please only enable for debugging.
	abort_on_internal_errors = 0

	# Configuration option global/metadata_read_only.
	# No operations that change on-disk metadata are permitted.
	# Additionally, read-only commands that encounter metadata in need of
	# repair will still be allowed to proceed exactly as if the repair had
	# been performed (except for the unchanged vg_seqno). Inappropriate
	# use could mess up your system, so seek advice first!
	metadata_read_only = 0

	# Configuration option global/mirror_segtype_default.
	# The segment type used by the short mirroring option -m.
	# The --type mirror|raid1 option overrides this setting.
	# 
	# Accepted values:
	#   mirror
	#     The original RAID1 implementation from LVM/DM. It is
	#     characterized by a flexible log solution (core, disk, mirrored),
	#     and by the necessity to block I/O while handling a failure.
	#     There is an inherent race in the dmeventd failure handling logic
	#     with snapshots of devices using this type of RAID1 that in the
	#     worst case could cause a deadlock. (Also see
	#     devices/ignore_lvm_mirrors.)
	#   raid1
	#     This is a newer RAID1 implementation using the MD RAID1
	#     personality through device-mapper. It is characterized by a
	#     lack of log options. (A log is always allocated for every
	#     device and they are placed on the same device as the image,
	#     so no separate devices are required.) This mirror
	#     implementation does not require I/O to be blocked while
	#     handling a failure. This mirror implementation is not
	#     cluster-aware and cannot be used in a shared (active/active)
	#     fashion in a cluster.
	# 
	mirror_segtype_default = "raid1"

	# Configuration option global/raid10_segtype_default.
	# The segment type used by the -i -m combination.
	# The --type raid10|mirror option overrides this setting.
	# The --stripes/-i and --mirrors/-m options can both be specified
	# during the creation of a logical volume to use both striping and
	# mirroring for the LV. There are two different implementations.
	# 
	# Accepted values:
	#   raid10
	#     LVM uses MD's RAID10 personality through DM. This is the
	#     preferred option.
	#   mirror
	#     LVM layers the 'mirror' and 'stripe' segment types. The layering
	#     is done by creating a mirror LV on top of striped sub-LVs,
	#     effectively creating a RAID 0+1 array. The layering is suboptimal
	#     in terms of providing redundancy and performance.
	# 
	raid10_segtype_default = "raid10"

	# Configuration option global/sparse_segtype_default.
	# The segment type used by the -V -L combination.
	# The --type snapshot|thin option overrides this setting.
	# The combination of -V and -L options creates a sparse LV. There are
	# two different implementations.
	# 
	# Accepted values:
	#   snapshot
	#     The original snapshot implementation from LVM/DM. It uses an old
	#     snapshot that mixes data and metadata within a single COW
	#     storage volume and performs poorly when the size of stored data
	#     passes hundreds of MB.
	#   thin
	#     A newer implementation that uses thin provisioning. It has a
	#     bigger minimal chunk size (64KiB) and uses a separate volume for
	#     metadata. It has better performance, especially when more data
	#     is used. It also supports full snapshots.
	# 
	sparse_segtype_default = "thin"

	# Configuration option global/lvdisplay_shows_full_device_path.
	# Enable this to reinstate the previous lvdisplay name format.
	# The default format for displaying LV names in lvdisplay was changed
	# in version 2.02.89 to show the LV name and path separately.
	# Previously this was always shown as /dev/vgname/lvname even when that
	# was never a valid path in the /dev filesystem.
	# This configuration option has an automatic default value.
	# lvdisplay_shows_full_device_path = 0

	# Configuration option global/event_activation.
	# Activate LVs based on system-generated device events.
	# When a device appears on the system, a system-generated event runs
	# the pvscan command to activate LVs if the new PV completes the VG.
	# Use auto_activation_volume_list to select which LVs should be
	# activated from these events (the default is all.)
	# When event_activation is disabled, the system will generally run
	# a direct activation command to activate LVs in complete VGs.
	event_activation = 1

	# Configuration option global/use_aio.
	# Use async I/O when reading and writing devices.
	# This configuration option has an automatic default value.
	# use_aio = 1

	# Configuration option global/use_lvmlockd.
	# Use lvmlockd for locking among hosts using LVM on shared storage.
	# Applicable only if LVM is compiled with lockd support in which
	# case there is also lvmlockd(8) man page available for more
	# information.
	use_lvmlockd = 0

	# Configuration option global/lvmlockd_lock_retries.
	# Retry lvmlockd lock requests this many times.
	# Applicable only if LVM is compiled with lockd support
	# This configuration option has an automatic default value.
	# lvmlockd_lock_retries = 3

	# Configuration option global/sanlock_lv_extend.
	# Size in MiB to extend the internal LV holding sanlock locks.
	# The internal LV holds locks for each LV in the VG, and after enough
	# LVs have been created, the internal LV needs to be extended. lvcreate
	# will automatically extend the internal LV when needed by the amount
	# specified here. Setting this to 0 disables the automatic extension
	# and can cause lvcreate to fail. Applicable only if LVM is compiled
	# with lockd support
	# This configuration option has an automatic default value.
	# sanlock_lv_extend = 256

	# Configuration option global/thin_check_executable.
	# The full path to the thin_check command.
	# LVM uses this command to check that a thin metadata device is in a
	# usable state. When a thin pool is activated and after it is
	# deactivated, this command is run. Activation will only proceed if
	# the command has an exit status of 0. Set to "" to skip this check.
	# (Not recommended.) Also see thin_check_options.
	# (See package device-mapper-persistent-data or thin-provisioning-tools)
	# This configuration option has an automatic default value.
	# thin_check_executable = "/usr/sbin/thin_check"

	# Configuration option global/thin_dump_executable.
	# The full path to the thin_dump command.
	# LVM uses this command to dump thin pool metadata.
	# (See package device-mapper-persistent-data or thin-provisioning-tools)
	# This configuration option has an automatic default value.
	# thin_dump_executable = "/usr/sbin/thin_dump"

	# Configuration option global/thin_repair_executable.
	# The full path to the thin_repair command.
	# LVM uses this command to repair a thin metadata device if it is in
	# an unusable state. Also see thin_repair_options.
	# (See package device-mapper-persistent-data or thin-provisioning-tools)
	# This configuration option has an automatic default value.
	# thin_repair_executable = "/usr/sbin/thin_repair"

	# Configuration option global/thin_check_options.
	# List of options passed to the thin_check command.
	# With thin_check version 2.1 or newer you can add the option
	# --ignore-non-fatal-errors to let it pass through ignorable errors
	# and fix them later. With thin_check version 3.2 or newer you should
	# include the option --clear-needs-check-flag.
	# This configuration option has an automatic default value.
	# thin_check_options = [ "-q", "--clear-needs-check-flag" ]

	# Configuration option global/thin_repair_options.
	# List of options passed to the thin_repair command.
	# This configuration option has an automatic default value.
	# thin_repair_options = [ "" ]

	# Configuration option global/thin_disabled_features.
	# Features to not use in the thin driver.
	# This can be helpful for testing, or to avoid using a feature that is
	# causing problems. Features include: block_size, discards,
	# discards_non_power_2, external_origin, metadata_resize,
	# external_origin_extend, error_if_no_space.
	# 
	# Example
	# thin_disabled_features = [ "discards", "block_size" ]
	# 
	# This configuration option does not have a default value defined.

	# Configuration option global/cache_disabled_features.
	# Features to not use in the cache driver.
	# This can be helpful for testing, or to avoid using a feature that is
	# causing problems. Features include: policy_mq, policy_smq, metadata2.
	# 
	# Example
	# cache_disabled_features = [ "policy_smq" ]
	# 
	# This configuration option does not have a default value defined.

	# Configuration option global/cache_check_executable.
	# The full path to the cache_check command.
	# LVM uses this command to check that a cache metadata device is in a
	# usable state. When a cached LV is activated and after it is
	# deactivated, this command is run. Activation will only proceed if the
	# command has an exit status of 0. Set to "" to skip this check.
	# (Not recommended.) Also see cache_check_options.
	# (See package device-mapper-persistent-data or thin-provisioning-tools)
	# This configuration option has an automatic default value.
	# cache_check_executable = "/usr/sbin/cache_check"

	# Configuration option global/cache_dump_executable.
	# The full path to the cache_dump command.
	# LVM uses this command to dump cache pool metadata.
	# (See package device-mapper-persistent-data or thin-provisioning-tools)
	# This configuration option has an automatic default value.
	# cache_dump_executable = "/usr/sbin/cache_dump"

	# Configuration option global/cache_repair_executable.
	# The full path to the cache_repair command.
	# LVM uses this command to repair a cache metadata device if it is in
	# an unusable state. Also see cache_repair_options.
	# (See package device-mapper-persistent-data or thin-provisioning-tools)
	# This configuration option has an automatic default value.
	# cache_repair_executable = "/usr/sbin/cache_repair"

	# Configuration option global/cache_check_options.
	# List of options passed to the cache_check command.
	# With cache_check version 5.0 or newer you should include the option
	# --clear-needs-check-flag.
	# This configuration option has an automatic default value.
	# cache_check_options = [ "-q", "--clear-needs-check-flag" ]

	# Configuration option global/cache_repair_options.
	# List of options passed to the cache_repair command.
	# This configuration option has an automatic default value.
	# cache_repair_options = [ "" ]

	# Configuration option global/vdo_format_executable.
	# The full path to the vdoformat command.
	# LVM uses this command to initial data volume for VDO type logical volume
	# This configuration option has an automatic default value.
	# vdo_format_executable = "autodetect"

	# Configuration option global/vdo_format_options.
	# List of options passed added to standard vdoformat command.
	# This configuration option has an automatic default value.
	# vdo_format_options = [ "" ]

	# Configuration option global/fsadm_executable.
	# The full path to the fsadm command.
	# LVM uses this command to help with lvresize -r operations.
	# This configuration option has an automatic default value.
	# fsadm_executable = "/sbin/fsadm"

	# Configuration option global/system_id_source.
	# The method LVM uses to set the local system ID.
	# Volume Groups can also be given a system ID (by vgcreate, vgchange,
	# or vgimport.) A VG on shared storage devices is accessible only to
	# the host with a matching system ID. See 'man lvmsystemid' for
	# information on limitations and correct usage.
	# 
	# Accepted values:
	#   none
	#     The host has no system ID.
	#   lvmlocal
	#     Obtain the system ID from the system_id setting in the 'local'
	#     section of an lvm configuration file, e.g. lvmlocal.conf.
	#   uname
	#     Set the system ID from the hostname (uname) of the system.
	#     System IDs beginning localhost are not permitted.
	#   machineid
	#     Use the contents of the machine-id file to set the system ID.
	#     Some systems create this file at installation time.
	#     See 'man machine-id' and global/etc.
	#   file
	#     Use the contents of another file (system_id_file) to set the
	#     system ID.
	# 
	system_id_source = "none"

	# Configuration option global/system_id_file.
	# The full path to the file containing a system ID.
	# This is used when system_id_source is set to 'file'.
	# Comments starting with the character # are ignored.
	# This configuration option does not have a default value defined.

	# Configuration option global/use_lvmpolld.
	# Use lvmpolld to supervise long running LVM commands.
	# When enabled, control of long running LVM commands is transferred
	# from the original LVM command to the lvmpolld daemon. This allows
	# the operation to continue independent of the original LVM command.
	# After lvmpolld takes over, the LVM command displays the progress
	# of the ongoing operation. lvmpolld itself runs LVM commands to
	# manage the progress of ongoing operations. lvmpolld can be used as
	# a native systemd service, which allows it to be started on demand,
	# and to use its own control group. When this option is disabled, LVM
	# commands will supervise long running operations by forking themselves.
	# Applicable only if LVM is compiled with lvmpolld support.
	use_lvmpolld = 1

	# Configuration option global/notify_dbus.
	# Enable D-Bus notification from LVM commands.
	# When enabled, an LVM command that changes PVs, changes VG metadata,
	# or changes the activation state of an LV will send a notification.
	notify_dbus = 1
}

# Configuration section activation.
activation {

	# Configuration option activation/checks.
	# Perform internal checks of libdevmapper operations.
	# Useful for debugging problems with activation. Some of the checks may
	# be expensive, so it's best to use this only when there seems to be a
	# problem.
	checks = 0

	# Configuration option activation/udev_sync.
	# Use udev notifications to synchronize udev and LVM.
	# The --nodevsync option overrides this setting.
	# When disabled, LVM commands will not wait for notifications from
	# udev, but continue irrespective of any possible udev processing in
	# the background. Only use this if udev is not running or has rules
	# that ignore the devices LVM creates. If enabled when udev is not
	# running, and LVM processes are waiting for udev, run the command
	# 'dmsetup udevcomplete_all' to wake them up.
	udev_sync = 1

	# Configuration option activation/udev_rules.
	# Use udev rules to manage LV device nodes and symlinks.
	# When disabled, LVM will manage the device nodes and symlinks for
	# active LVs itself. Manual intervention may be required if this
	# setting is changed while LVs are active.
	udev_rules = 1

	# Configuration option activation/verify_udev_operations.
	# Use extra checks in LVM to verify udev operations.
	# This enables additional checks (and if necessary, repairs) on entries
	# in the device directory after udev has completed processing its
	# events. Useful for diagnosing problems with LVM/udev interactions.
	verify_udev_operations = 0

	# Configuration option activation/retry_deactivation.
	# Retry failed LV deactivation.
	# If LV deactivation fails, LVM will retry for a few seconds before
	# failing. This may happen because a process run from a quick udev rule
	# temporarily opened the device.
	retry_deactivation = 1

	# Configuration option activation/missing_stripe_filler.
	# Method to fill missing stripes when activating an incomplete LV.
	# Using 'error' will make inaccessible parts of the device return I/O
	# errors on access. Using 'zero' will return success (and zero) on I/O
	# You can instead use a device path, in which case,
	# that device will be used in place of missing stripes. Using anything
	# other than 'error' with mirrored or snapshotted volumes is likely to
	# result in data corruption.
	# This configuration option is advanced.
	missing_stripe_filler = "error"

	# Configuration option activation/use_linear_target.
	# Use the linear target to optimize single stripe LVs.
	# When disabled, the striped target is used. The linear target is an
	# optimised version of the striped target that only handles a single
	# stripe.
	use_linear_target = 1

	# Configuration option activation/reserved_stack.
	# Stack size in KiB to reserve for use while devices are suspended.
	# Insufficent reserve risks I/O deadlock during device suspension.
	reserved_stack = 64

	# Configuration option activation/reserved_memory.
	# Memory size in KiB to reserve for use while devices are suspended.
	# Insufficent reserve risks I/O deadlock during device suspension.
	reserved_memory = 8192

	# Configuration option activation/process_priority.
	# Nice value used while devices are suspended.
	# Use a high priority so that LVs are suspended
	# for the shortest possible time.
	process_priority = -18

	# Configuration option activation/volume_list.
	# Only LVs selected by this list are activated.
	# If this list is defined, an LV is only activated if it matches an
	# entry in this list. If this list is undefined, it imposes no limits
	# on LV activation (all are allowed).
	# 
	# Accepted values:
	#   vgname
	#     The VG name is matched exactly and selects all LVs in the VG.
	#   vgname/lvname
	#     The VG name and LV name are matched exactly and selects the LV.
	#   @tag
	#     Selects an LV if the specified tag matches a tag set on the LV
	#     or VG.
	#   @*
	#     Selects an LV if a tag defined on the host is also set on the LV
	#     or VG. See tags/hosttags. If any host tags exist but volume_list
	#     is not defined, a default single-entry list containing '@*'
	#     is assumed.
	# 
	# Example
	# volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]
	# 
	# This configuration option does not have a default value defined.

	# Configuration option activation/auto_activation_volume_list.
	# Only LVs selected by this list are auto-activated.
	# This list works like volume_list, but it is used only by
	# auto-activation commands. It does not apply to direct activation
	# commands. If this list is defined, an LV is only auto-activated
	# if it matches an entry in this list. If this list is undefined, it
	# imposes no limits on LV auto-activation (all are allowed.) If this
	# list is defined and empty, i.e. "[]", then no LVs are selected for
	# auto-activation. An LV that is selected by this list for
	# auto-activation, must also be selected by volume_list (if defined)
	# before it is activated. Auto-activation is an activation command that
	# includes the 'a' argument: --activate ay or -a ay. The 'a' (auto)
	# argument for auto-activation is meant to be used by activation
	# commands that are run automatically by the system, as opposed to LVM
	# commands run directly by a user. A user may also use the 'a' flag
	# directly to perform auto-activation. Also see pvscan(8) for more
	# information about auto-activation.
	# 
	# Accepted values:
	#   vgname
	#     The VG name is matched exactly and selects all LVs in the VG.
	#   vgname/lvname
	#     The VG name and LV name are matched exactly and selects the LV.
	#   @tag
	#     Selects an LV if the specified tag matches a tag set on the LV
	#     or VG.
	#   @*
	#     Selects an LV if a tag defined on the host is also set on the LV
	#     or VG. See tags/hosttags. If any host tags exist but volume_list
	#     is not defined, a default single-entry list containing '@*'
	#     is assumed.
	# 
	# Example
	# auto_activation_volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]
	# 
	# This configuration option does not have a default value defined.

	# Configuration option activation/read_only_volume_list.
	# LVs in this list are activated in read-only mode.
	# If this list is defined, each LV that is to be activated is checked
	# against this list, and if it matches, it is activated in read-only
	# mode. This overrides the permission setting stored in the metadata,
	# e.g. from --permission rw.
	# 
	# Accepted values:
	#   vgname
	#     The VG name is matched exactly and selects all LVs in the VG.
	#   vgname/lvname
	#     The VG name and LV name are matched exactly and selects the LV.
	#   @tag
	#     Selects an LV if the specified tag matches a tag set on the LV
	#     or VG.
	#   @*
	#     Selects an LV if a tag defined on the host is also set on the LV
	#     or VG. See tags/hosttags. If any host tags exist but volume_list
	#     is not defined, a default single-entry list containing '@*'
	#     is assumed.
	# 
	# Example
	# read_only_volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]
	# 
	# This configuration option does not have a default value defined.

	# Configuration option activation/raid_region_size.
	# Size in KiB of each raid or mirror synchronization region.
	# The clean/dirty state of data is tracked for each region.
	# The value is rounded down to a power of two if necessary, and
	# is ignored if it is not a multiple of the machine memory page size.
	raid_region_size = 2048

	# Configuration option activation/error_when_full.
	# Return errors if a thin pool runs out of space.
	# The --errorwhenfull option overrides this setting.
	# When enabled, writes to thin LVs immediately return an error if the
	# thin pool is out of data space. When disabled, writes to thin LVs
	# are queued if the thin pool is out of space, and processed when the
	# thin pool data space is extended. New thin pools are assigned the
	# behavior defined here.
	# This configuration option has an automatic default value.
	# error_when_full = 0

	# Configuration option activation/readahead.
	# Setting to use when there is no readahead setting in metadata.
	# 
	# Accepted values:
	#   none
	#     Disable readahead.
	#   auto
	#     Use default value chosen by kernel.
	# 
	readahead = "auto"

	# Configuration option activation/raid_fault_policy.
	# Defines how a device failure in a RAID LV is handled.
	# This includes LVs that have the following segment types:
	# raid1, raid4, raid5*, and raid6*.
	# If a device in the LV fails, the policy determines the steps
	# performed by dmeventd automatically, and the steps perfomed by the
	# manual command lvconvert --repair --use-policies.
	# Automatic handling requires dmeventd to be monitoring the LV.
	# 
	# Accepted values:
	#   warn
	#     Use the system log to warn the user that a device in the RAID LV
	#     has failed. It is left to the user to run lvconvert --repair
	#     manually to remove or replace the failed device. As long as the
	#     number of failed devices does not exceed the redundancy of the LV
	#     (1 device for raid4/5, 2 for raid6), the LV will remain usable.
	#   allocate
	#     Attempt to use any extra physical volumes in the VG as spares and
	#     replace faulty devices.
	# 
	raid_fault_policy = "warn"

	# Configuration option activation/mirror_image_fault_policy.
	# Defines how a device failure in a 'mirror' LV is handled.
	# An LV with the 'mirror' segment type is composed of mirror images
	# (copies) and a mirror log. A disk log ensures that a mirror LV does
	# not need to be re-synced (all copies made the same) every time a
	# machine reboots or crashes. If a device in the LV fails, this policy
	# determines the steps perfomed by dmeventd automatically, and the steps
	# performed by the manual command lvconvert --repair --use-policies.
	# Automatic handling requires dmeventd to be monitoring the LV.
	# 
	# Accepted values:
	#   remove
	#     Simply remove the faulty device and run without it. If the log
	#     device fails, the mirror would convert to using an in-memory log.
	#     This means the mirror will not remember its sync status across
	#     crashes/reboots and the entire mirror will be re-synced. If a
	#     mirror image fails, the mirror will convert to a non-mirrored
	#     device if there is only one remaining good copy.
	#   allocate
	#     Remove the faulty device and try to allocate space on a new
	#     device to be a replacement for the failed device. Using this
	#     policy for the log is fast and maintains the ability to remember
	#     sync state through crashes/reboots. Using this policy for a
	#     mirror device is slow, as it requires the mirror to resynchronize
	#     the devices, but it will preserve the mirror characteristic of
	#     the device. This policy acts like 'remove' if no suitable device
	#     and space can be allocated for the replacement.
	#   allocate_anywhere
	#     Not yet implemented. Useful to place the log device temporarily
	#     on the same physical volume as one of the mirror images. This
	#     policy is not recommended for mirror devices since it would break
	#     the redundant nature of the mirror. This policy acts like
	#     'remove' if no suitable device and space can be allocated for the
	#     replacement.
	# 
	mirror_image_fault_policy = "remove"

	# Configuration option activation/mirror_log_fault_policy.
	# Defines how a device failure in a 'mirror' log LV is handled.
	# The mirror_image_fault_policy description for mirrored LVs also
	# applies to mirrored log LVs.
	mirror_log_fault_policy = "allocate"

	# Configuration option activation/snapshot_autoextend_threshold.
	# Auto-extend a snapshot when its usage exceeds this percent.
	# Setting this to 100 disables automatic extension.
	# The minimum value is 50 (a smaller value is treated as 50.)
	# Also see snapshot_autoextend_percent.
	# Automatic extension requires dmeventd to be monitoring the LV.
	# 
	# Example
	# Using 70% autoextend threshold and 20% autoextend size, when a 1G
	# snapshot exceeds 700M, it is extended to 1.2G, and when it exceeds
	# 840M, it is extended to 1.44G:
	# snapshot_autoextend_threshold = 70
	# 
	snapshot_autoextend_threshold = 100

	# Configuration option activation/snapshot_autoextend_percent.
	# Auto-extending a snapshot adds this percent extra space.
	# The amount of additional space added to a snapshot is this
	# percent of its current size.
	# 
	# Example
	# Using 70% autoextend threshold and 20% autoextend size, when a 1G
	# snapshot exceeds 700M, it is extended to 1.2G, and when it exceeds
	# 840M, it is extended to 1.44G:
	# snapshot_autoextend_percent = 20
	# 
	snapshot_autoextend_percent = 20

	# Configuration option activation/thin_pool_autoextend_threshold.
	# Auto-extend a thin pool when its usage exceeds this percent.
	# Setting this to 100 disables automatic extension.
	# The minimum value is 50 (a smaller value is treated as 50.)
	# Also see thin_pool_autoextend_percent.
	# Automatic extension requires dmeventd to be monitoring the LV.
	# 
	# Example
	# Using 70% autoextend threshold and 20% autoextend size, when a 1G
	# thin pool exceeds 700M, it is extended to 1.2G, and when it exceeds
	# 840M, it is extended to 1.44G:
	# thin_pool_autoextend_threshold = 70
	# 
	thin_pool_autoextend_threshold = 100

	# Configuration option activation/thin_pool_autoextend_percent.
	# Auto-extending a thin pool adds this percent extra space.
	# The amount of additional space added to a thin pool is this
	# percent of its current size.
	# 
	# Example
	# Using 70% autoextend threshold and 20% autoextend size, when a 1G
	# thin pool exceeds 700M, it is extended to 1.2G, and when it exceeds
	# 840M, it is extended to 1.44G:
	# thin_pool_autoextend_percent = 20
	# 
	thin_pool_autoextend_percent = 20

	# Configuration option activation/vdo_pool_autoextend_threshold.
	# Auto-extend a VDO pool when its usage exceeds this percent.
	# Setting this to 100 disables automatic extension.
	# The minimum value is 50 (a smaller value is treated as 50.)
	# Also see vdo_pool_autoextend_percent.
	# Automatic extension requires dmeventd to be monitoring the LV.
	# 
	# Example
	# Using 70% autoextend threshold and 20% autoextend size, when a 10G
	# VDO pool exceeds 7G, it is extended to 12G, and when it exceeds
	# 8.4G, it is extended to 14.4G:
	# vdo_pool_autoextend_threshold = 70
	# 
	vdo_pool_autoextend_threshold = 100

	# Configuration option activation/vdo_pool_autoextend_percent.
	# Auto-extending a VDO pool adds this percent extra space.
	# The amount of additional space added to a VDO pool is this
	# percent of its current size.
	# 
	# Example
	# Using 70% autoextend threshold and 20% autoextend size, when a 10G
	# VDO pool exceeds 7G, it is extended to 12G, and when it exceeds
	# 8.4G, it is extended to 14.4G:
	# This configuration option has an automatic default value.
	# vdo_pool_autoextend_percent = 20

	# Configuration option activation/mlock_filter.
	# Do not mlock these memory areas.
	# While activating devices, I/O to devices being (re)configured is
	# suspended. As a precaution against deadlocks, LVM pins memory it is
	# using so it is not paged out, and will not require I/O to reread.
	# Groups of pages that are known not to be accessed during activation
	# do not need to be pinned into memory. Each string listed in this
	# setting is compared against each line in /proc/self/maps, and the
	# pages corresponding to lines that match are not pinned. On some
	# systems, locale-archive was found to make up over 80% of the memory
	# used by the process.
	# 
	# Example
	# mlock_filter = [ "locale/locale-archive", "gconv/gconv-modules.cache" ]
	# 
	# This configuration option is advanced.
	# This configuration option does not have a default value defined.

	# Configuration option activation/use_mlockall.
	# Use the old behavior of mlockall to pin all memory.
	# Prior to version 2.02.62, LVM used mlockall() to pin the whole
	# process's memory while activating devices.
	use_mlockall = 0

	# Configuration option activation/monitoring.
	# Monitor LVs that are activated.
	# The --ignoremonitoring option overrides this setting.
	# When enabled, LVM will ask dmeventd to monitor activated LVs.
	monitoring = 1

	# Configuration option activation/polling_interval.
	# Check pvmove or lvconvert progress at this interval (seconds).
	# When pvmove or lvconvert must wait for the kernel to finish
	# synchronising or merging data, they check and report progress at
	# intervals of this number of seconds. If this is set to 0 and there
	# is only one thing to wait for, there are no progress reports, but
	# the process is awoken immediately once the operation is complete.
	polling_interval = 15

	# Configuration option activation/auto_set_activation_skip.
	# Set the activation skip flag on new thin snapshot LVs.
	# The --setactivationskip option overrides this setting.
	# An LV can have a persistent 'activation skip' flag. The flag causes
	# the LV to be skipped during normal activation. The lvchange/vgchange
	# -K option is required to activate LVs that have the activation skip
	# flag set. When this setting is enabled, the activation skip flag is
	# set on new thin snapshot LVs.
	# This configuration option has an automatic default value.
	# auto_set_activation_skip = 1

	# Configuration option activation/activation_mode.
	# How LVs with missing devices are activated.
	# The --activationmode option overrides this setting.
	# 
	# Accepted values:
	#   complete
	#     Only allow activation of an LV if all of the Physical Volumes it
	#     uses are present. Other PVs in the Volume Group may be missing.
	#   degraded
	#     Like complete, but additionally RAID LVs of segment type raid1,
	#     raid4, raid5, radid6 and raid10 will be activated if there is no
	#     data loss, i.e. they have sufficient redundancy to present the
	#     entire addressable range of the Logical Volume.
	#   partial
	#     Allows the activation of any LV even if a missing or failed PV
	#     could cause data loss with a portion of the LV inaccessible.
	#     This setting should not normally be used, but may sometimes
	#     assist with data recovery.
	# 
	activation_mode = "degraded"

	# Configuration option activation/lock_start_list.
	# Locking is started only for VGs selected by this list.
	# The rules are the same as those for volume_list.
	# This configuration option does not have a default value defined.

	# Configuration option activation/auto_lock_start_list.
	# Locking is auto-started only for VGs selected by this list.
	# The rules are the same as those for auto_activation_volume_list.
	# This configuration option does not have a default value defined.
}

# Configuration section metadata.
# This configuration section has an automatic default value.
# metadata {

	# Configuration option metadata/check_pv_device_sizes.
	# Check device sizes are not smaller than corresponding PV sizes.
	# If device size is less than corresponding PV size found in metadata,
	# there is always a risk of data loss. If this option is set, then LVM
	# issues a warning message each time it finds that the device size is
	# less than corresponding PV size. You should not disable this unless
	# you are absolutely sure about what you are doing!
	# This configuration option is advanced.
	# This configuration option has an automatic default value.
	# check_pv_device_sizes = 1

	# Configuration option metadata/record_lvs_history.
	# When enabled, LVM keeps history records about removed LVs in
	# metadata. The information that is recorded in metadata for
	# historical LVs is reduced when compared to original
	# information kept in metadata for live LVs. Currently, this
	# feature is supported for thin and thin snapshot LVs only.
	# This configuration option has an automatic default value.
	# record_lvs_history = 0

	# Configuration option metadata/lvs_history_retention_time.
	# Retention time in seconds after which a record about individual
	# historical logical volume is automatically destroyed.
	# A value of 0 disables this feature.
	# This configuration option has an automatic default value.
	# lvs_history_retention_time = 0

	# Configuration option metadata/pvmetadatacopies.
	# Number of copies of metadata to store on each PV.
	# The --pvmetadatacopies option overrides this setting.
	# 
	# Accepted values:
	#   2
	#     Two copies of the VG metadata are stored on the PV, one at the
	#     front of the PV, and one at the end.
	#   1
	#     One copy of VG metadata is stored at the front of the PV.
	#   0
	#     No copies of VG metadata are stored on the PV. This may be
	#     useful for VGs containing large numbers of PVs.
	# 
	# This configuration option is advanced.
	# This configuration option has an automatic default value.
	# pvmetadatacopies = 1

	# Configuration option metadata/vgmetadatacopies.
	# Number of copies of metadata to maintain for each VG.
	# The --vgmetadatacopies option overrides this setting.
	# If set to a non-zero value, LVM automatically chooses which of the
	# available metadata areas to use to achieve the requested number of
	# copies of the VG metadata. If you set a value larger than the the
	# total number of metadata areas available, then metadata is stored in
	# them all. The value 0 (unmanaged) disables this automatic management
	# and allows you to control which metadata areas are used at the
	# individual PV level using pvchange --metadataignore y|n.
	# This configuration option has an automatic default value.
	# vgmetadatacopies = 0

	# Configuration option metadata/pvmetadatasize.
	# The default size of the metadata area in units of 512 byte sectors.
	# The metadata area begins at an offset of the page size from the start
	# of the device. The first PE is by default at 1 MiB from the start of
	# the device. The space between these is the default metadata area size.
	# The actual size of the metadata area may be larger than what is set
	# here due to default_data_alignment making the first PE a MiB multiple.
	# The metadata area begins with a 512 byte header and is followed by a
	# circular buffer used for VG metadata text. The maximum size of the VG
	# metadata is about half the size of the metadata buffer. VGs with large
	# numbers of PVs or LVs, or VGs containing complex LV structures, may need
	# additional space for VG metadata. The --metadatasize option overrides
	# this setting.
	# This configuration option does not have a default value defined.
	# This configuration option has an automatic default value.

	# Configuration option metadata/pvmetadataignore.
	# Ignore metadata areas on a new PV.
	# The --metadataignore option overrides this setting.
	# If metadata areas on a PV are ignored, LVM will not store metadata
	# in them.
	# This configuration option is advanced.
	# This configuration option has an automatic default value.
	# pvmetadataignore = 0

	# Configuration option metadata/stripesize.
	# This configuration option is advanced.
	# This configuration option has an automatic default value.
	# stripesize = 64
# }

# Configuration section report.
# LVM report command output formatting.
# This configuration section has an automatic default value.
# report {

	# Configuration option report/output_format.
	# Format of LVM command's report output.
	# If there is more than one report per command, then the format
	# is applied for all reports. You can also change output format
	# directly on command line using --reportformat option which
	# has precedence over log/output_format setting.
	# Accepted values:
	#   basic
	#     Original format with columns and rows. If there is more than
	#     one report per command, each report is prefixed with report's
	#     name for identification.
	#   json
	#     JSON format.
	# This configuration option has an automatic default value.
	# output_format = "basic"

	# Configuration option report/compact_output.
	# Do not print empty values for all report fields.
	# If enabled, all fields that don't have a value set for any of the
	# rows reported are skipped and not printed. Compact output is
	# applicable only if report/buffered is enabled. If you need to
	# compact only specified fields, use compact_output=0 and define
	# report/compact_output_cols configuration setting instead.
	# This configuration option has an automatic default value.
	# compact_output = 0

	# Configuration option report/compact_output_cols.
	# Do not print empty values for specified report fields.
	# If defined, specified fields that don't have a value set for any
	# of the rows reported are skipped and not printed. Compact output
	# is applicable only if report/buffered is enabled. If you need to
	# compact all fields, use compact_output=1 instead in which case
	# the compact_output_cols setting is then ignored.
	# This configuration option has an automatic default value.
	# compact_output_cols = ""

	# Configuration option report/aligned.
	# Align columns in report output.
	# This configuration option has an automatic default value.
	# aligned = 1

	# Configuration option report/buffered.
	# Buffer report output.
	# When buffered reporting is used, the report's content is appended
	# incrementally to include each object being reported until the report
	# is flushed to output which normally happens at the end of command
	# execution. Otherwise, if buffering is not used, each object is
	# reported as soon as its processing is finished.
	# This configuration option has an automatic default value.
	# buffered = 1

	# Configuration option report/headings.
	# Show headings for columns on report.
	# This configuration option has an automatic default value.
	# headings = 1

	# Configuration option report/separator.
	# A separator to use on report after each field.
	# This configuration option has an automatic default value.
	# separator = " "

	# Configuration option report/list_item_separator.
	# A separator to use for list items when reported.
	# This configuration option has an automatic default value.
	# list_item_separator = ","

	# Configuration option report/prefixes.
	# Use a field name prefix for each field reported.
	# This configuration option has an automatic default value.
	# prefixes = 0

	# Configuration option report/quoted.
	# Quote field values when using field name prefixes.
	# This configuration option has an automatic default value.
	# quoted = 1

	# Configuration option report/columns_as_rows.
	# Output each column as a row.
	# If set, this also implies report/prefixes=1.
	# This configuration option has an automatic default value.
	# columns_as_rows = 0

	# Configuration option report/binary_values_as_numeric.
	# Use binary values 0 or 1 instead of descriptive literal values.
	# For columns that have exactly two valid values to report
	# (not counting the 'unknown' value which denotes that the
	# value could not be determined).
	# This configuration option has an automatic default value.
	# binary_values_as_numeric = 0

	# Configuration option report/time_format.
	# Set time format for fields reporting time values.
	# Format specification is a string which may contain special character
	# sequences and ordinary character sequences. Ordinary character
	# sequences are copied verbatim. Each special character sequence is
	# introduced by the '%' character and such sequence is then
	# substituted with a value as described below.
	# 
	# Accepted values:
	#   %a
	#     The abbreviated name of the day of the week according to the
	#     current locale.
	#   %A
	#     The full name of the day of the week according to the current
	#     locale.
	#   %b
	#     The abbreviated month name according to the current locale.
	#   %B
	#     The full month name according to the current locale.
	#   %c
	#     The preferred date and time representation for the current
	#     locale (alt E)
	#   %C
	#     The century number (year/100) as a 2-digit integer. (alt E)
	#   %d
	#     The day of the month as a decimal number (range 01 to 31).
	#     (alt O)
	#   %D
	#     Equivalent to %m/%d/%y. (For Americans only. Americans should
	#     note that in other countries%d/%m/%y is rather common. This
	#     means that in international context this format is ambiguous and
	#     should not be used.
	#   %e
	#     Like %d, the day of the month as a decimal number, but a leading
	#     zero is replaced by a space. (alt O)
	#   %E
	#     Modifier: use alternative local-dependent representation if
	#     available.
	#   %F
	#     Equivalent to %Y-%m-%d (the ISO 8601 date format).
	#   %G
	#     The ISO 8601 week-based year with century as adecimal number.
	#     The 4-digit year corresponding to the ISO week number (see %V).
	#     This has the same format and value as %Y, except that if the
	#     ISO week number belongs to the previous or next year, that year
	#     is used instead.
	#   %g
	#     Like %G, but without century, that is, with a 2-digit year
	#     (00-99).
	#   %h
	#     Equivalent to %b.
	#   %H
	#     The hour as a decimal number using a 24-hour clock
	#     (range 00 to 23). (alt O)
	#   %I
	#     The hour as a decimal number using a 12-hour clock
	#     (range 01 to 12). (alt O)
	#   %j
	#     The day of the year as a decimal number (range 001 to 366).
	#   %k
	#     The hour (24-hour clock) as a decimal number (range 0 to 23);
	#     single digits are preceded by a blank. (See also %H.)
	#   %l
	#     The hour (12-hour clock) as a decimal number (range 1 to 12);
	#     single digits are preceded by a blank. (See also %I.)
	#   %m
	#     The month as a decimal number (range 01 to 12). (alt O)
	#   %M
	#     The minute as a decimal number (range 00 to 59). (alt O)
	#   %O
	#     Modifier: use alternative numeric symbols.
	#   %p
	#     Either "AM" or "PM" according to the given time value,
	#     or the corresponding strings for the current locale. Noon is
	#     treated as "PM" and midnight as "AM".
	#   %P
	#     Like %p but in lowercase: "am" or "pm" or a corresponding
	#     string for the current locale.
	#   %r
	#     The time in a.m. or p.m. notation. In the POSIX locale this is
	#     equivalent to %I:%M:%S %p.
	#   %R
	#     The time in 24-hour notation (%H:%M). For a version including
	#     the seconds, see %T below.
	#   %s
	#     The number of seconds since the Epoch,
	#     1970-01-01 00:00:00 +0000 (UTC)
	#   %S
	#     The second as a decimal number (range 00 to 60). (The range is
	#     up to 60 to allow for occasional leap seconds.) (alt O)
	#   %t
	#     A tab character.
	#   %T
	#     The time in 24-hour notation (%H:%M:%S).
	#   %u
	#     The day of the week as a decimal, range 1 to 7, Monday being 1.
	#     See also %w. (alt O)
	#   %U
	#     The week number of the current year as a decimal number,
	#     range 00 to 53, starting with the first Sunday as the first
	#     day of week 01. See also %V and %W. (alt O)
	#   %V
	#     The ISO 8601 week number of the current year as a decimal number,
	#     range 01 to 53, where week 1 is the first week that has at least
	#     4 days in the new year. See also %U and %W. (alt O)
	#   %w
	#     The day of the week as a decimal, range 0 to 6, Sunday being 0.
	#     See also %u. (alt O)
	#   %W
	#     The week number of the current year as a decimal number,
	#     range 00 to 53, starting with the first Monday as the first day
	#     of week 01. (alt O)
	#   %x
	#     The preferred date representation for the current locale without
	#     the time. (alt E)
	#   %X
	#     The preferred time representation for the current locale without
	#     the date. (alt E)
	#   %y
	#     The year as a decimal number without a century (range 00 to 99).
	#     (alt E, alt O)
	#   %Y
	#     The year as a decimal number including the century. (alt E)
	#   %z
	#     The +hhmm or -hhmm numeric timezone (that is, the hour and minute
	#     offset from UTC).
	#   %Z
	#     The timezone name or abbreviation.
	#   %%
	#     A literal '%' character.
	# 
	# This configuration option has an automatic default value.
	# time_format = "%Y-%m-%d %T %z"

	# Configuration option report/devtypes_sort.
	# List of columns to sort by when reporting 'lvm devtypes' command.
	# See 'lvm devtypes -o help' for the list of possible fields.
	# This configuration option has an automatic default value.
	# devtypes_sort = "devtype_name"

	# Configuration option report/devtypes_cols.
	# List of columns to report for 'lvm devtypes' command.
	# See 'lvm devtypes -o help' for the list of possible fields.
	# This configuration option has an automatic default value.
	# devtypes_cols = "devtype_name,devtype_max_partitions,devtype_description"

	# Configuration option report/devtypes_cols_verbose.
	# List of columns to report for 'lvm devtypes' command in verbose mode.
	# See 'lvm devtypes -o help' for the list of possible fields.
	# This configuration option has an automatic default value.
	# devtypes_cols_verbose = "devtype_name,devtype_max_partitions,devtype_description"

	# Configuration option report/lvs_sort.
	# List of columns to sort by when reporting 'lvs' command.
	# See 'lvs -o help' for the list of possible fields.
	# This configuration option has an automatic default value.
	# lvs_sort = "vg_name,lv_name"

	# Configuration option report/lvs_cols.
	# List of columns to report for 'lvs' command.
	# See 'lvs -o help' for the list of possible fields.
	# This configuration option has an automatic default value.
	# lvs_cols = "lv_name,vg_name,lv_attr,lv_size,pool_lv,origin,data_percent,metadata_percent,move_pv,mirror_log,copy_percent,convert_lv"

	# Configuration option report/lvs_cols_verbose.
	# List of columns to report for 'lvs' command in verbose mode.
	# See 'lvs -o help' for the list of possible fields.
	# This configuration option has an automatic default value.
	# lvs_cols_verbose = "lv_name,vg_name,seg_count,lv_attr,lv_size,lv_major,lv_minor,lv_kernel_major,lv_kernel_minor,pool_lv,origin,data_percent,metadata_percent,move_pv,copy_percent,mirror_log,convert_lv,lv_uuid,lv_profile"

	# Configuration option report/vgs_sort.
	# List of columns to sort by when reporting 'vgs' command.
	# See 'vgs -o help' for the list of possible fields.
	# This configuration option has an automatic default value.
	# vgs_sort = "vg_name"

	# Configuration option report/vgs_cols.
	# List of columns to report for 'vgs' command.
	# See 'vgs -o help' for the list of possible fields.
	# This configuration option has an automatic default value.
	# vgs_cols = "vg_name,pv_count,lv_count,snap_count,vg_attr,vg_size,vg_free"

	# Configuration option report/vgs_cols_verbose.
	# List of columns to report for 'vgs' command in verbose mode.
	# See 'vgs -o help' for the list of possible fields.
	# This configuration option has an automatic default value.
	# vgs_cols_verbose = "vg_name,vg_attr,vg_extent_size,pv_count,lv_count,snap_count,vg_size,vg_free,vg_uuid,vg_profile"

	# Configuration option report/pvs_sort.
	# List of columns to sort by when reporting 'pvs' command.
	# See 'pvs -o help' for the list of possible fields.
	# This configuration option has an automatic default value.
	# pvs_sort = "pv_name"

	# Configuration option report/pvs_cols.
	# List of columns to report for 'pvs' command.
	# See 'pvs -o help' for the list of possible fields.
	# This configuration option has an automatic default value.
	# pvs_cols = "pv_name,vg_name,pv_fmt,pv_attr,pv_size,pv_free"

	# Configuration option report/pvs_cols_verbose.
	# List of columns to report for 'pvs' command in verbose mode.
	# See 'pvs -o help' for the list of possible fields.
	# This configuration option has an automatic default value.
	# pvs_cols_verbose = "pv_name,vg_name,pv_fmt,pv_attr,pv_size,pv_free,dev_size,pv_uuid"

	# Configuration option report/segs_sort.
	# List of columns to sort by when reporting 'lvs --segments' command.
	# See 'lvs --segments -o help' for the list of possible fields.
	# This configuration option has an automatic default value.
	# segs_sort = "vg_name,lv_name,seg_start"

	# Configuration option report/segs_cols.
	# List of columns to report for 'lvs --segments' command.
	# See 'lvs --segments -o help' for the list of possible fields.
	# This configuration option has an automatic default value.
	# segs_cols = "lv_name,vg_name,lv_attr,stripes,segtype,seg_size"

	# Configuration option report/segs_cols_verbose.
	# List of columns to report for 'lvs --segments' command in verbose mode.
	# See 'lvs --segments -o help' for the list of possible fields.
	# This configuration option has an automatic default value.
	# segs_cols_verbose = "lv_name,vg_name,lv_attr,seg_start,seg_size,stripes,segtype,stripesize,chunksize"

	# Configuration option report/pvsegs_sort.
	# List of columns to sort by when reporting 'pvs --segments' command.
	# See 'pvs --segments -o help' for the list of possible fields.
	# This configuration option has an automatic default value.
	# pvsegs_sort = "pv_name,pvseg_start"

	# Configuration option report/pvsegs_cols.
	# List of columns to sort by when reporting 'pvs --segments' command.
	# See 'pvs --segments -o help' for the list of possible fields.
	# This configuration option has an automatic default value.
	# pvsegs_cols = "pv_name,vg_name,pv_fmt,pv_attr,pv_size,pv_free,pvseg_start,pvseg_size"

	# Configuration option report/pvsegs_cols_verbose.
	# List of columns to sort by when reporting 'pvs --segments' command in verbose mode.
	# See 'pvs --segments -o help' for the list of possible fields.
	# This configuration option has an automatic default value.
	# pvsegs_cols_verbose = "pv_name,vg_name,pv_fmt,pv_attr,pv_size,pv_free,pvseg_start,pvseg_size,lv_name,seg_start_pe,segtype,seg_pe_ranges"

	# Configuration option report/vgs_cols_full.
	# List of columns to report for lvm fullreport's 'vgs' subreport.
	# See 'vgs -o help' for the list of possible fields.
	# This configuration option has an automatic default value.
	# vgs_cols_full = "vg_all"

	# Configuration option report/pvs_cols_full.
	# List of columns to report for lvm fullreport's 'vgs' subreport.
	# See 'pvs -o help' for the list of possible fields.
	# This configuration option has an automatic default value.
	# pvs_cols_full = "pv_all"

	# Configuration option report/lvs_cols_full.
	# List of columns to report for lvm fullreport's 'lvs' subreport.
	# See 'lvs -o help' for the list of possible fields.
	# This configuration option has an automatic default value.
	# lvs_cols_full = "lv_all"

	# Configuration option report/pvsegs_cols_full.
	# List of columns to report for lvm fullreport's 'pvseg' subreport.
	# See 'pvs --segments -o help' for the list of possible fields.
	# This configuration option has an automatic default value.
	# pvsegs_cols_full = "pvseg_all,pv_uuid,lv_uuid"

	# Configuration option report/segs_cols_full.
	# List of columns to report for lvm fullreport's 'seg' subreport.
	# See 'lvs --segments -o help' for the list of possible fields.
	# This configuration option has an automatic default value.
	# segs_cols_full = "seg_all,lv_uuid"

	# Configuration option report/vgs_sort_full.
	# List of columns to sort by when reporting lvm fullreport's 'vgs' subreport.
	# See 'vgs -o help' for the list of possible fields.
	# This configuration option has an automatic default value.
	# vgs_sort_full = "vg_name"

	# Configuration option report/pvs_sort_full.
	# List of columns to sort by when reporting lvm fullreport's 'vgs' subreport.
	# See 'pvs -o help' for the list of possible fields.
	# This configuration option has an automatic default value.
	# pvs_sort_full = "pv_name"

	# Configuration option report/lvs_sort_full.
	# List of columns to sort by when reporting lvm fullreport's 'lvs' subreport.
	# See 'lvs -o help' for the list of possible fields.
	# This configuration option has an automatic default value.
	# lvs_sort_full = "vg_name,lv_name"

	# Configuration option report/pvsegs_sort_full.
	# List of columns to sort by when reporting for lvm fullreport's 'pvseg' subreport.
	# See 'pvs --segments -o help' for the list of possible fields.
	# This configuration option has an automatic default value.
	# pvsegs_sort_full = "pv_uuid,pvseg_start"

	# Configuration option report/segs_sort_full.
	# List of columns to sort by when reporting lvm fullreport's 'seg' subreport.
	# See 'lvs --segments -o help' for the list of possible fields.
	# This configuration option has an automatic default value.
	# segs_sort_full = "lv_uuid,seg_start"

	# Configuration option report/mark_hidden_devices.
	# Use brackets [] to mark hidden devices.
	# This configuration option has an automatic default value.
	# mark_hidden_devices = 1

	# Configuration option report/two_word_unknown_device.
	# Use the two words 'unknown device' in place of '[unknown]'.
	# This is displayed when the device for a PV is not known.
	# This configuration option has an automatic default value.
	# two_word_unknown_device = 0
# }

# Configuration section dmeventd.
# Settings for the LVM event daemon.
dmeventd {

	# Configuration option dmeventd/mirror_library.
	# The library dmeventd uses when monitoring a mirror device.
	# libdevmapper-event-lvm2mirror.so attempts to recover from
	# failures. It removes failed devices from a volume group and
	# reconfigures a mirror as necessary. If no mirror library is
	# provided, mirrors are not monitored through dmeventd.
	mirror_library = "libdevmapper-event-lvm2mirror.so"

	# Configuration option dmeventd/raid_library.
	# This configuration option has an automatic default value.
	# raid_library = "libdevmapper-event-lvm2raid.so"

	# Configuration option dmeventd/snapshot_library.
	# The library dmeventd uses when monitoring a snapshot device.
	# libdevmapper-event-lvm2snapshot.so monitors the filling of snapshots
	# and emits a warning through syslog when the usage exceeds 80%. The
	# warning is repeated when 85%, 90% and 95% of the snapshot is filled.
	snapshot_library = "libdevmapper-event-lvm2snapshot.so"

	# Configuration option dmeventd/thin_library.
	# The library dmeventd uses when monitoring a thin device.
	# libdevmapper-event-lvm2thin.so monitors the filling of a pool
	# and emits a warning through syslog when the usage exceeds 80%. The
	# warning is repeated when 85%, 90% and 95% of the pool is filled.
	thin_library = "libdevmapper-event-lvm2thin.so"

	# Configuration option dmeventd/thin_command.
	# The plugin runs command with each 5% increment when thin-pool data volume
	# or metadata volume gets above 50%.
	# Command which starts with 'lvm ' prefix is internal lvm command.
	# You can write your own handler to customise behaviour in more details.
	# User handler is specified with the full path starting with '/'.
	# This configuration option has an automatic default value.
	# thin_command = "lvm lvextend --use-policies"

	# Configuration option dmeventd/vdo_library.
	# The library dmeventd uses when monitoring a VDO pool device.
	# libdevmapper-event-lvm2vdo.so monitors the filling of a pool
	# and emits a warning through syslog when the usage exceeds 80%. The
	# warning is repeated when 85%, 90% and 95% of the pool is filled.
	# This configuration option has an automatic default value.
	# vdo_library = "libdevmapper-event-lvm2vdo.so"

	# Configuration option dmeventd/vdo_command.
	# The plugin runs command with each 5% increment when VDO pool volume
	# gets above 50%.
	# Command which starts with 'lvm ' prefix is internal lvm command.
	# You can write your own handler to customise behaviour in more details.
	# User handler is specified with the full path starting with '/'.
	# This configuration option has an automatic default value.
	# vdo_command = "lvm lvextend --use-policies"

	# Configuration option dmeventd/executable.
	# The full path to the dmeventd binary.
	# This configuration option has an automatic default value.
	# executable = "/sbin/dmeventd"
}

# Configuration section tags.
# Host tag settings.
# This configuration section has an automatic default value.
# tags {

	# Configuration option tags/hosttags.
	# Create a host tag using the machine name.
	# The machine name is nodename returned by uname(2).
	# This configuration option has an automatic default value.
	# hosttags = 0

	# Configuration section tags/<tag>.
	# Replace this subsection name with a custom tag name.
	# Multiple subsections like this can be created. The '@' prefix for
	# tags is optional. This subsection can contain host_list, which is a
	# list of machine names. If the name of the local machine is found in
	# host_list, then the name of this subsection is used as a tag and is
	# applied to the local machine as a 'host tag'. If this subsection is
	# empty (has no host_list), then the subsection name is always applied
	# as a 'host tag'.
	# 
	# Example
	# The host tag foo is given to all hosts, and the host tag
	# bar is given to the hosts named machine1 and machine2.
	# tags { foo { } bar { host_list = [ "machine1", "machine2" ] } }
	# 
	# This configuration section has variable name.
	# This configuration section has an automatic default value.
	# tag {

		# Configuration option tags/<tag>/host_list.
		# A list of machine names.
		# These machine names are compared to the nodename returned
		# by uname(2). If the local machine name matches an entry in
		# this list, the name of the subsection is applied to the
		# machine as a 'host tag'.
		# This configuration option does not have a default value defined.
	# }
# }

cat /etc/lvm/lvmlocal.conf

CLICK ME TO EXPAND

# which should be installed as /etc/lvm/lvmlocal.conf .
#
# Refer to 'man lvm.conf' for information about the file layout.
#
# To put this file in a different directory and override
# /etc/lvm set the environment variable LVM_SYSTEM_DIR before
# running the tools.
#
# The lvmlocal.conf file is normally expected to contain only the
# "local" section which contains settings that should not be shared or
# repeated among different hosts.  (But if other sections are present,
# they *will* get processed.  Settings in this file override equivalent
# ones in lvm.conf and are in turn overridden by ones in any enabled
# lvm_<tag>.conf files.)
#
# Please take care that each setting only appears once if uncommenting
# example settings in this file and never copy this file between hosts.


# Configuration section local.
# LVM settings that are specific to the local host.
local {

	# Configuration option local/system_id.
	# Defines the local system ID for lvmlocal mode.
	# This is used when global/system_id_source is set to 'lvmlocal' in the
	# main configuration file, e.g. lvm.conf. When used, it must be set to
	# a unique value among all hosts sharing access to the storage,
	# e.g. a host name.
	# 
	# Example
	# Set no system ID:
	# system_id = ""
	# Set the system_id to a specific name:
	# system_id = "host1"
	# 
	# This configuration option has an automatic default value.
	# system_id = ""

	# Configuration option local/extra_system_ids.
	# A list of extra VG system IDs the local host can access.
	# VGs with the system IDs listed here (in addition to the host's own
	# system ID) can be fully accessed by the local host. (These are
	# system IDs that the host sees in VGs, not system IDs that identify
	# the local host, which is determined by system_id_source.)
	# Use this only after consulting 'man lvmsystemid' to be certain of
	# correct usage and possible dangers.
	# This configuration option does not have a default value defined.

	# Configuration option local/host_id.
	# The lvmlockd sanlock host_id.
	# This must be unique among all hosts, and must be between 1 and 2000.
	# Applicable only if LVM is compiled with lockd support
	# This configuration option has an automatic default value.
	# host_id = 0
}

@rondadon
Copy link

@rondadon rondadon commented May 17, 2020

@MichaIng
So I ran the script. I chose "x86_64 Virtual Machine".
These are the outputs after running the DietPi_Prep Script:

root@localhost:~# ls -Al /dev/mapper/

total 0
crw------- 1 root root 10, 236 May 17 17:31 control
lrwxrwxrwx 1 root root       7 May 17 18:17 vg00-lv00 -> ../dm-1
lrwxrwxrwx 1 root root       7 May 17 17:32 vg00-lv01 -> ../dm-0

root@localhost:~# cat /etc/fstab

# Please use "dietpi-drive_manager" to setup mounts
#----------------------------------------------------------------
# NETWORK
#----------------------------------------------------------------


#----------------------------------------------------------------
# TMPFS
#----------------------------------------------------------------
tmpfs /tmp tmpfs size=229M,noatime,lazytime,nodev,nosuid,mode=1777
tmpfs /var/log tmpfs size=50M,noatime,lazytime,nodev,nosuid,mode=1777

#----------------------------------------------------------------
# MISC: ecryptfs, vboxsf (VirtualBox shared folder), gluster, bind mounts
#----------------------------------------------------------------


#----------------------------------------------------------------
# SWAPFILE
#----------------------------------------------------------------


#----------------------------------------------------------------
# PHYSICAL DRIVES
#----------------------------------------------------------------
UUID=a74fdb8e-18cb-4438-8652-a900525cf565 / auto noatime,lazytime,rw 0 1
UUID=0fde5837-626f-4cb3-ba89-e0cae8601f55 /boot ext4 noatime,lazytime,rw 0 2
#UUID=msy2Zk-rDPu-iXjf-jabJ-9SVQ-1WnZ-kkOKYS /mnt/msy2Zk-rDPu-iXjf-jabJ-9SVQ-1WnZ-kkOKYS LVM2_member noatime,lazytime,rw,nofail

root@localhost:~# df -a

df: /proc/sys/fs/binfmt_misc: No such device
Filesystem            1K-blocks   Used Available Use% Mounted on
sysfs                         0      0         0    - /sys
proc                          0      0         0    - /proc
udev                     219008      0    219008   0% /dev
devpts                        0      0         0    - /dev/pts
tmpfs                     46992   4120     42872   9% /run
/dev/mapper/vg00-lv01   7792568 599060   6779200   9% /
securityfs                    0      0         0    - /sys/kernel/security
tmpfs                    234952      0    234952   0% /dev/shm
tmpfs                      5120      0      5120   0% /run/lock
tmpfs                    234952      0    234952   0% /sys/fs/cgroup
cgroup2                       0      0         0    - /sys/fs/cgroup/unified
cgroup                        0      0         0    - /sys/fs/cgroup/systemd
pstore                        0      0         0    - /sys/fs/pstore
bpf                           0      0         0    - /sys/fs/bpf
cgroup                        0      0         0    - /sys/fs/cgroup/net_cls,net_prio
cgroup                        0      0         0    - /sys/fs/cgroup/freezer
cgroup                        0      0         0    - /sys/fs/cgroup/cpu,cpuacct
cgroup                        0      0         0    - /sys/fs/cgroup/perf_event
cgroup                        0      0         0    - /sys/fs/cgroup/devices
cgroup                        0      0         0    - /sys/fs/cgroup/pids
cgroup                        0      0         0    - /sys/fs/cgroup/cpuset
cgroup                        0      0         0    - /sys/fs/cgroup/memory
cgroup                        0      0         0    - /sys/fs/cgroup/rdma
cgroup                        0      0         0    - /sys/fs/cgroup/blkio
hugetlbfs                     0      0         0    - /dev/hugepages
mqueue                        0      0         0    - /dev/mqueue
debugfs                       0      0         0    - /sys/kernel/debug
/dev/sda1                474712  24077    421605   6% /boot
tmpfs                     46988      0     46988   0% /run/user/0
fusectl                       0      0         0    - /sys/fs/fuse/connections
tmpfs                    234496      0    234496   0% /tmp
tmpfs                     51200     24     51176   1% /var/log

root@localhost:~# blkid

/dev/sda1: UUID="0fde5837-626f-4cb3-ba89-e0cae8601f55" TYPE="ext4" PARTUUID="bc873d4a-01"
/dev/sda2: UUID="msy2Zk-rDPu-iXjf-jabJ-9SVQ-1WnZ-kkOKYS" TYPE="LVM2_member" PARTUUID="bc873d4a-02"
/dev/mapper/vg00-lv01: UUID="a74fdb8e-18cb-4438-8652-a900525cf565" TYPE="ext4"
/dev/mapper/vg00-lv00: UUID="99a6095c-5433-471c-b8f7-b7de434d6921" TYPE="swap"

root@localhost:~# lsblk

NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda             8:0    0   10G  0 disk 
|-sda1          8:1    0  487M  0 part /boot
`-sda2          8:2    0  9.5G  0 part 
  |-vg00-lv01 254:0    0  7.6G  0 lvm  /
  `-vg00-lv00 254:1    0  1.9G  0 lvm  
sr0            11:0    1 1024M  0 rom 

I then updated the package list with apt-get update, installed lvm2 and dependencies, and changed fstab to following:

root@localhost:~# cat /etc/fstab

# Please use "dietpi-drive_manager" to setup mounts
#----------------------------------------------------------------
# NETWORK
#----------------------------------------------------------------


#----------------------------------------------------------------
# TMPFS
#----------------------------------------------------------------
tmpfs /tmp tmpfs size=229M,noatime,lazytime,nodev,nosuid,mode=1777
tmpfs /var/log tmpfs size=50M,noatime,lazytime,nodev,nosuid,mode=1777

#----------------------------------------------------------------
# MISC: ecryptfs, vboxsf (VirtualBox shared folder), gluster, bind mounts
#----------------------------------------------------------------


#----------------------------------------------------------------
# SWAPFILE
#----------------------------------------------------------------


#----------------------------------------------------------------
# PHYSICAL DRIVES
#----------------------------------------------------------------
/dev/mapper/vg00-lv01 /               ext4    errors=remount-ro 0       1
#UUID=a74fdb8e-18cb-4438-8652-a900525cf565 / auto noatime,lazytime,rw 0 1
UUID=0fde5837-626f-4cb3-ba89-e0cae8601f55 /boot ext4 noatime,lazytime,rw 0 2
#UUID=msy2Zk-rDPu-iXjf-jabJ-9SVQ-1WnZ-kkOKYS /mnt/msy2Zk-rDPu-iXjf-jabJ-9SVQ-1WnZ-kkOKYS LVM2_member noatime,lazytime,rw,$

So basically I commented the line with UUID=a74fdb8e-18cb-4438-8652-a900525cf565 / auto noatime,lazytime,rw 0 1 and added /dev/mapper/vg00-lv01 / ext4 errors=remount-ro 0 1.

The I checked again with ls -Al /dev/mapper/ and it looked the same like posted at the top of this post.

I rebooted and now I get a different error message in KVM Console. It somehow waits for /dev/mapper/vg00-lv01 to appear and after a while it jumps to Kernel-Panic:

See Screenshots from KVM Console (Sorry, couldn't copy and paste it):

Screenshot_2020-05-17 Cloud Server 0(1)

And at the Server Info Page I get this warning:
VMware Tools: Die VMWare Tools sind nicht auf dem Server installiert. Die VMware Tools bestehen aus einer Reihe von Dienstprogrammen, die Sie im Betriebssystem des Servers installieren. Installieren Sie die VMware Tools, damit der ordnungsgemäße Betrieb Ihres Servers gewährleistet werden kann.

So basically it says that the VMWare Tools are not installed on the server. Maybe it's necessary to install the VMWare Tools to run the mapper devices.?!

Will try it again later but additionally install VMWare Tools.

Have a nice evening!

@MichaIng
Copy link
Owner

@MichaIng MichaIng commented May 17, 2020

@rondadon
Since the errors occurred within the initramfs, probably updating it is required as well:

update-initramfs -u

Possibly when removing lvm2, the initramfs is updated to not attempt loading this module. Hence when installing it again, it might be required to update it again to load it.

The LVM configs look like defaults, no entries for specific devices.

Also to get some more details about the VPS setup:

dpkg --get-selections 'linux-image*' '*initramfs*' 'grub*' '*boot*'
ls -l /lib/modules/
@rondadon
Copy link

@rondadon rondadon commented May 18, 2020

@MichaIng
update-initramfs -u did the trick!!!

Needed to install the initramfs-tools package to be able tu run update-initramfs -u.

Wow...
You can't imagine how happy I am that it works now. I can not thank you enough..

I forgot to run the following prior to running the script. Will revert to the clean debian install and will run this command.

dpkg --get-selections 'linux-image*' '*initramfs*' 'grub*' '*boot*'
ls -l /lib/modules/

Now converted to dietpi, it boots flawelessly, I can login via ssh and tinker with it. Do you need some info for the dietpi install now? Only the SWAP Partition needs to be mounted now.

Man, this made my day... !

@Joulinar
Copy link
Collaborator

@Joulinar Joulinar commented May 18, 2020

many thanks for confirmation. I linked the solution back to the forum post to have the loop completed.

@MichaIng
Copy link
Owner

@MichaIng MichaIng commented May 18, 2020

@rondadon
Ah, I didn't think about that we install tiny-initramfs on VMs, update-tirfs would have been the command to update it. However probably tiny-initramfs does not support LVM, so that the regular initramfs-tools may be required anyway.

I forgot to run the following prior to running the script. Will revert to the clean debian install and will run this command.

Would be awesome, although not too important as we now know how to make it working 😃.

Only the SWAP Partition needs to be mounted now.

Yes, you can do the following:

# Disable the DietPi swap file
/boot/dietpi/func/dietpi-set_swapfile 0
# Create swap partition
mkswap /dev/mapper/vg00-lv00
# Enable swap partition
swapon /dev/mapper/vg00-lv00
# Mount swap partition on boot
echo '/dev/mapper/vg00-lv00 none swap sw' >> /etc/fstab
@rondadon
Copy link

@rondadon rondadon commented May 19, 2020

So to summerize what I did:

  1. Downloaded and ran the PREP_SYSTEM_FOR_DIETPI.sh
  2. Chose x86_64 Virtual Machine
  3. After Script finished -> apt-get update and then apt-get install lvm2 initramfs-tools
  4. Edit /etc/fstab and COMMENT (add # at beginning of Line where / is mounted)
    UUID=a74fdb8e-18cb-4438-8652-a900525cf565 / auto noatime,lazytime,rw 0 1
    and add the line
    /dev/mapper/vg00-lv01 / ext4 errors=remount-ro 0 1
    and save it!
  5. Run command update-initramfs -u
  6. Reboot. It should boot into dietpi now!

Now the Swapfile is using the space on / (about 1.5GB) so we need to disable the swapfile and enable the SWAP partition of the VMware VPS by doing following (as shown above by Michalng):

  1. Disable the DietPi swap file by running command:
    /boot/dietpi/func/dietpi-set_swapfile 0
  2. Create swap partition by running command:
    mkswap /dev/mapper/vg00-lv00
  3. Enable swap partition by running command:
    swapon /dev/mapper/vg00-lv00
  4. Mount swap partition on boot by running command or edit /etc/fstab manually
    echo '/dev/mapper/vg00-lv00 none swap sw' >> /etc/fstab
  5. Reboot
  6. After Reboot run following command to check if SWAP Partition is activated/enabled:
    swapon --show
    It should show something like this if it's activated/enabled:
root@DietPi:~# swapon --show
NAME      TYPE      SIZE USED PRIO
/dev/dm-1 partition 1.9G   0B   -2

Additional Info:

df -h before activating SWAP Partition:

root@DietPi:~# df -h
Filesystem             Size  Used Avail Use% Mounted on
udev                   215M     0  215M   0% /dev
tmpfs                   46M  2.3M   44M   5% /run
/dev/mapper/vg00-lv01  7.5G  2.2G  4.9G  31% /
tmpfs                  230M     0  230M   0% /dev/shm
tmpfs                  5.0M     0  5.0M   0% /run/lock
tmpfs                  230M     0  230M   0% /sys/fs/cgroup
tmpfs                  1.0G     0  1.0G   0% /tmp
tmpfs                   50M  8.0K   50M   1% /var/log
/dev/sda1              464M   50M  386M  12% /boot

df -h AFTER activating SWAP Partition:

root@DietPi:~# df -h
Filesystem             Size  Used Avail Use% Mounted on
udev                   215M     0  215M   0% /dev
tmpfs                   46M  2.3M   44M   5% /run
/dev/mapper/vg00-lv01  7.5G  632M  6.5G   9% /
tmpfs                  230M     0  230M   0% /dev/shm
tmpfs                  5.0M     0  5.0M   0% /run/lock
tmpfs                  230M     0  230M   0% /sys/fs/cgroup
/dev/sda1              464M   50M  386M  12% /boot
tmpfs                   50M     0   50M   0% /var/log
tmpfs                  229M     0  229M   0% /tmp

Question still to find an answer for:

  • How the installation will behave after a dietpi update. Will it screw up the setup or will the system boot flawelessly after an update.?

BTW: I installed debian again and ran the commands mentioned above by @MichaIng . These are the results:

root@localhost:~# dpkg --get-selections 'linux-image*' '*initramfs*' 'grub*' '*boot*'
linux-image-4.19.0-5-amd64                      install
linux-image-4.19.0-9-amd64                      install
linux-image-amd64                               install
initramfs-tools                                 install
initramfs-tools-core                            install
grub-common                                     install
grub-pc                                         install
grub-pc-bin                                     install
grub2-common                                    install
libefiboot1:amd64                               install
root@localhost:~# ls -l /lib/modules/
total 8
drwxr-xr-x 3 root root 4096 Aug 29  2019 4.19.0-5-amd64
drwxr-xr-x 3 root root 4096 May 18 22:50 4.19.0-9-amd64

I hope that is all. Thank you again for your effort, help and time to get this resolved! I am really happy now being able to use dietpi on ionos VPS. It uses like 30-40 MB less ram and also about roughly half of the space on / .
I hope this is also of help for others trying to run dietpi on their (IONOS) VPS!

@MichaIng MichaIng mentioned this issue May 23, 2020
@Joulinar
Copy link
Collaborator

@Joulinar Joulinar commented Jun 19, 2020

@MichaIng
I was facing following error message during PREP script usage. However PREP was running find and finished at the end

[ INFO ] DietPi-PREP | Disable package state translation downloads
[  OK  ] DietPi-PREP | Preserve modified config files on APT update
./PREP_SYSTEM_FOR_DIETPI.sh: line 753: ((: > 5 : syntax error: operand expected (error token is "> 5 ")
[ INFO ] DietPi-PREP | APT install for: libraspberrypi-bin libraspberrypi0 raspberrypi-bootloader raspberrypi-kernel raspberrypi-sys-mods raspi-copies-and-fills, please wait...
@MichaIng
Copy link
Owner

@MichaIng MichaIng commented Jun 19, 2020

Jep, this has been fixed here: 16a1e9f
Only relevant for Bullseye (where systemd-timesyncd became an own package) and even there it would be installed or kept as dependency of systemd. Only on Bullseye, when a different time sync daemon is installed already, this would be kept and systemd-timesyncd would be missing without the fix 😉.

@rhkean
Copy link
Contributor

@rhkean rhkean commented Sep 13, 2020

Hello @Fourdee and @MichaIng ,

It's been a loooong time. I hope you two are doing well in this pandemic age... LOL

I have a new challenge... Do you think something like this would work for getting a dietpi image running in a chroot on a Chromebook? I need a thin dev environment and all the targets that I have tried through Crouton are bogging down my older Chromebook.

-Rob Kean

@MichaIng
Copy link
Owner

@MichaIng MichaIng commented Sep 13, 2020

Hi Rob, nice to see you stopping by. Yes did and doing well here during pandemic, luckily coding basically implies limited infection risk 😄. I hope you're doing fine as well.

We made large progress to make DietPi + DietPi-PREP running inside a chroot, allowing to automate things. Image creation is currently mostly done by booting images via losetup images files and systemd-nspawn -bD /mnt/mountpoint, so aside of some "/sys/something is read-only..." like errors and obsoletely failing network setup (fixed from host) this works fine and reduces the overhead largely. I use a set of such images for compiling/building binaries and packages hosted on https://dietpi.com/downloads/binaries/, bootloader+initramfs+kernel can be purged, then it is quite thin. binfmt-support + qemu-user-static can be used to boot ARM images on x86 hosts, systemd-nspawn invokes those automatically when required, so cross compiling and testing can be made in one step. But it takes much more time due to emulation for every single binary call, so only useful when builds can run unattended and time does play no role 😉.

@famewolf
Copy link

@famewolf famewolf commented Jan 3, 2021

Any chance this dietpi-prep script would work with an older 32 bit laptop? Last I checked debian still supported 32 bit but it's not in the list of choices for dietpi although 32 bit for other architectures IS available. Yes I know 32 bit is ancient but I just have difficulty in tossing a perfectly working laptop into the trash when it's still usable for so many things.

@Joulinar
Copy link
Collaborator

@Joulinar Joulinar commented Jan 3, 2021

Hi,

I guess this will answer your question. #4024
Short answer is: no

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment
Projects
None yet
Linked pull requests

Successfully merging a pull request may close this issue.

None yet