
Proxmox VE (PVE) is a Linux-based virtualization platform built on KVM (and LXC). It is freely available from: https://www.proxmox.com/en/downloads

Tested version:

pveversion
   pve-manager/5.1-41/0b958203 (running kernel: 4.13.13-2-pve)

Notable highlights:

  • HTML5/JS web client - works from both Linux and Windows. The only exception is the SPICE console (requires the native virt-viewer binary). However, you can always fall back to VNC.
  • KVM/QEMU handles memory overcommit well - see Hypervisor Memory overcommit tests. The only comparable hypervisor in this respect is VMware ESXi.
  • also supports LXC containers (no HW virtualization needed - the least possible overhead)
  • also supports software-emulated QEMU mode (slow, but useful when you have no HW virtualization - for example when running as a nested VM on an older CPU).

Traps

Emulated SATA partition corruption

Never(!) use emulated SATA in Proxmox VE. Sooner or later it will corrupt the MBR partition on your disk (it really happened to us). It was fixed only recently.

Other

If you use LVM and/or ZFS, please be aware that you can't put two HDDs with Proxmox installed into a single PC. It will fail with a "duplicate VG" error or a "duplicate ZFS pool" error...

In my case I renamed the VG instead (see the sketch after this list):

  • original pointer is here:
  • boot the Proxmox install ISO in Advanced mode
  • exit the 1st RAMdisk shell (pretty useless)
  • in the 2nd shell (much more useful) I issued vgrename pve pvessd
  • then mounted the original FS and updated the VG names in its /etc/fstab
  • next you have to bind mount /dev/, /proc/ and /sys/
  • enter the chroot
  • call update-grub inside the chroot
  • exit the chroot
  • unmount all filesystems of the chroot
  • reboot
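
A minimal sketch of those steps from the rescue shell (assuming the new VG name pvessd and a root LV called root, as in the default Proxmox layout - adjust to your setup and review every step before rebooting):

# rename the VG and activate it under the new name
vgrename pve pvessd
vgchange -ay pvessd

# mount the renamed root LV and update references to the old VG name
mount /dev/pvessd/root /mnt
sed -i 's/\bpve\b/pvessd/g' /mnt/etc/fstab    # double-check the result!

# bind mount pseudo filesystems and regenerate the GRUB config inside the chroot
for d in dev proc sys; do mount --bind /$d /mnt/$d; done
chroot /mnt update-grub

# cleanup and reboot
for d in dev proc sys; do umount /mnt/$d; done
umount /mnt
reboot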

Putty SSH ciphers error

If you get an SSH connection error like Couldn't agree a client-to-server cipher ... then you need to upgrade your PuTTY client (I had success with putty-64bit-0.70-installer.msi from https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html)

List and download PVE LXC appliances

To use Linux containers (LXC based) you first need to download an appliance filesystem archive.

Use this command to list LXC appliances:

pveam available

Use this command to download specific appliance:

pveam download local debian-9.0-standard_9.3-1_amd64.tar.gz

NOTE:

The local argument is your storage name. You can list the available storages using this command:

pvesm status

You can then create a new LXC container using this template. Example using the Web UI:

  • log on to your Proxmox Web UI at https://IP_ADDRESS:8006
  • click on "Create CT" (blue button in the top-right corner of the web page)
  • on the "General" tab fill in:
    • "Password"
    • "Confirm Password"
  • on the "Template" tab fill in:
    • "Storage" - keep "local" if possible
    • "Template" - there should be only one option - the previously downloaded debian-9.0-standard_9.3-1_amd64.tar.gz
  • keep defaults on the "Root Disk" (lvm-thin), "CPU" and "Memory" tabs
  • on the "Network" tab remember to either fill in an IPv4 (or IPv6) address or choose DHCP (if you have a DHCP server on your network)
  • "DNS" - I have only "use host settings" available, so it is easy
  • click on the "Confirm" tab and then click on the "Finish" button
  • wait for the creation to complete:
    • on the "Status" tab you should see "stopped: OK"

Now you can expand the tree:

  • "Data Center" -> "pve" -> "100 (CT100)"

  • click on "Start" button

  • you can click on "Console" (or "Console JS") to login to your container

  • in the case of our Debian you can query the container's IP address using this command:

    ip addr

NOTE: the Debian 9 image above does not allow root to log in via SSH - so you need to either create a non-root user to log in with, or set PermitRootLogin yes in /etc/ssh/sshd_config of your LXC container.

Getting container info from a Proxmox SSH connection:

  • list LXC containers:

    lxc-ls
       100
  • list information about container 100:

    lxc-info --name 100

NOTE: grep the IP: line to get the IP address of the container.
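
For example (a simple one-liner; it assumes the container reports exactly one IP):

lxc-info --name 100 | awk '/^IP:/ {print $2}'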

Quick download of installation ISO for a VM

To create a new VM you typically need an installation ISO. You can either upload an ISO via the web UI this way:

  • log on to your Proxmox Web UI at https://IP_ADDRESS:8006
  • expand/click on "Datacenter" -> "pve" -> "local"
  • click on "Upload" in the right pane to upload your ISO

Or you can download the ISO image directly to your Proxmox storage:

  • SSH to your Proxmox
  • cd to the ISO directory and download the installation ISO image, for example:
cd /var/lib/vz/template/iso
wget http://ftp.linux.cz/pub/linux/debian-cd/9.4.0/amd64/iso-cd/debian-9.4.0-amd64-netinst.iso

NOTE: If your download fails you can resume it (without downloading the whole file again) using the -c argument, for example:

wget -c http://ftp.linux.cz/pub/linux/debian-cd/9.4.0/amd64/iso-cd/debian-9.4.0-amd64-netinst.iso

Now you can create your first VM using this ISO, for example:

  • log on to your Proxmox Web UI at https://IP_ADDRESS:8006
  • click on the "Create VM" blue button at the top-right of the web page
  • click on the "OS" tab
    • select your "ISO image:" - debian-9.4.0-amd64-netinst.iso
  • click through the "Hard Disk", "CPU", "Memory", "Network" and "Confirm" tabs and click on the "Finish" button.
  • expand "Data Center" -> "pve" -> "101 (VM 101)"
  • click on "Start"
  • click on "Console" and follow standard Debian installation

PVE SSH commands to get basic info about VMs:

qm list

# 101 is VMID from "qm list"
qm config 101

Listing available templates

Use this command to list installed LXC templates and ISO images:

# note: "local" is storage name
pvesm list local

Install QEMU Agent

The QEMU Guest Agent is used to run commands (for example shutdown...) inside the guest from the host.

Please see https://pve.proxmox.com/wiki/Qemu-guest-agent for a guide.
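
Besides installing the agent inside the guest, the QEMU agent option also has to be enabled on the VM itself (a sketch; 101 is an example VMID, and the VM has to be powered off and started again for the change to take effect):

qm set 101 --agent enabled=1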

You can then run Agent commands from Proxmox SSH, for example:

qm agent 101 network-get-interfaces

Enable backups for local storage

It is a two-step operation:

  • enable backups for the local storage:
    pvesm set local --content rootdir,images,backup,iso
  • set the maximum number of backups to 99 (should be more than enough):
    pvesm set local --maxfiles 99
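
You can then create a backup manually with vzdump, for example (a sketch; 101 is an example VMID and snapshot mode assumes the VM disks support it):

vzdump 101 --storage local --mode snapshot --compress lzo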

Set of misc. scripts

WARNING! Always customize and/or double-check these scripts before running them!!!

Script batch_create.sh to create multiple VMs from a single backup ("full clone" from a backup):

#!/bin/bash
set -e
for i in `seq 2 15`
do
    vmid=$((100 + $i))
    set -x
    # restore the VM from the backup archive under a new VMID
    qm create $vmid \
        --archive /var/lib/vz/dump/vzdump-qemu-101-2018_05_09-17_07_13.vma.lzo \
        --unique 1
    # --name can't be specified together with the --archive option...
    qm set $vmid --name "CentOS7.4MariaBench-$i"
    set +x
done
echo "ALL Done"
exit 0

Script batch_remove.sh to quickly remove VMs from 102 to 112(!):

#!/bin/bash
set -e
for i in `seq 2 12`
do
    vmid=$((100 + $i))
    set -x
    qm destroy $vmid
    set +x
done
echo "ALL DONE"
exit 0

Script batch_start_and_wait.sh to start a sequence of VMs and wait for each one until its QEMU agent responds:

 
#!/bin/bash
t1=`mktemp`
# remove the temporary file on exit
trap 'rm -f "$t1"' EXIT
set -e
for i in `seq 106 110`
do
        echo -n "Starting $i: "
        qm start $i
        while true
        do
                if qm agent $i ping 2> $t1
                then
                        echo
                        break
                fi
                # keep polling only while the agent reports a timeout;
                # any other error aborts the script (because of set -e)
                fgrep -q timeout $t1
                echo -n "."
                sleep 1
        done
done
echo "All done"
exit 0

Script shutdown_all_running.sh to shut down all running VMs using the QEMU Guest Agent's shutdown command:

#!/bin/bash
set -e
for i in `qm list | grep ' running ' | awk '{print $1}'`
do
    set -x
    qm agent $i shutdown
    set +x
done
echo "All running VMs were shut down"
exit 0

SPICE - limit resolution under CentOS 7

I have a Proxmox VE 6.2-15 host. My CentOS 7.9.2009 (Core) guest uses:

  • Display: SPICE (qxl)
  • or on command line:
    qm config VMID_NUMBER | egrep '^vga:'
    
    vga: qxl

I want to reduce the ridiculously high console resolution to something sane. I had great luck with the documentation at: https://wiki.archlinux.org/index.php/kernel_mode_setting

Here is a step-by-step guide:

  • verify that the kernel really uses the qxl DRM device:
    lsmod | grep qxl
    
    qxl                    59032  1
    ttm                    96673  1 qxl
    drm_kms_helper        186531  1 qxl
    drm                   456166  4 qxl,ttm,drm_kms_helper
  • there must be a qxl driver and its usage count must be at least 1
  • now we need to find the video output in use - using a modified scriptlet from https://wiki.archlinux.org/index.php/kernel_mode_setting
    for p in /sys/class/drm/*/status; do con=${p%/status}; echo -n "${con#*/card?-}: "; cat $p; done | egrep ':\s+connected'
    
    Virtual-1: connected
    
  • now we can add (again from the ArchLinux wiki) a proper video= setting to the GRUB configuration
  • update the line below in /etc/default/grub:
    GRUB_CMDLINE_LINUX="video=Virtual-1:800x600"
  • (of course change the output name and resolution to suit your needs)
  • regenerate the GRUB2 boot configuration using:
    grub2-mkconfig -o /boot/grub2/grub.cfg
  • reboot your computer using init 6 and watch your SPICE console (using virt-viewer, associated with the *.vv extension)
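  • after the reboot you can confirm that the parameter was applied (a quick check from inside the guest):

    grep -o 'video=[^ ]*' /proc/cmdline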

Enable os-prober

I share my Proxmox installation with other Linux distributions and therefore I need os-prober to automatically add them to the GRUB menu.

However, in Proxmox os-prober is disabled and not installed, to avoid errors caused by scanning LVM (where the VMs are installed - lvm-thin storage).

To enable it we need to:

  1. install os-prober (obviously):
    apt-get install os-prober
  2. Now we have to edit the os-prober script to disable scanning of LVM (my foreign OS - Gentoo Linux - is installed on a normal primary partition):
    --- /usr/bin/os-prober.orig	2022-11-13 08:50:14.873216283 +0100
    +++ /usr/bin/os-prober	2022-11-13 08:51:45.234407275 +0100
    @@ -70,10 +70,10 @@
     	fi
     
     	# Also detect OSes on LVM volumes (assumes LVM is active)
    -	if type lvs >/dev/null 2>&1; then
    -		echo "$(LVM_SUPPRESS_FD_WARNINGS=1 log_output lvs --noheadings --separator : -o vg_name,lv_name |
    -			sed "s|-|--|g;s|^[[:space:]]*\(.*\):\(.*\)$|/dev/mapper/\1-\2|")"
    -	fi
    +	#if type lvs >/dev/null 2>&1; then
    +	#	echo "$(LVM_SUPPRESS_FD_WARNINGS=1 log_output lvs --noheadings --separator : -o vg_name,lv_name |
    +	#		sed "s|-|--|g;s|^[[:space:]]*\(.*\):\(.*\)$|/dev/mapper/\1-\2|")"
    +	#fi
     }
  3. Finally we have to enable os-prober by changing the line in /etc/default/grub.d/proxmox-ve.cfg to:
    GRUB_DISABLE_OS_PROBER=false
  4. And run update-grub - the other installed OSes should now be detected.
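
To quickly check that the other distribution really made it into the generated configuration (a simple sketch; replace gentoo with the name of your distribution):

grep -i menuentry /boot/grub/grub.cfg | grep -i gentoo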

Comparing Proxmox backends - "benchmark"

I decided to compare the best-case speed of various Proxmox backends (settings chosen for maximum speed, not data safety!):

  • ext4 (Dir backend) - using these mount flags: commit=60,noatime,barrier=0,data=writeback
  • lvm-thin
  • zfs

WARNING! This is not a real benchmark. It just gives me a rough idea of what I can expect from the various Proxmox backends...

I use:

  • MB: K9N Platinum, MSI 7250
  • CPU: AMD X2 (dual-core, 2GHz)
  • PCIe AHCI SATA 3 controller:
    # Numeric IDs
    03:00.0 0106: 1b21:1164 (rev 02) (prog-if 01 [AHCI 1.0])
      Subsystem: 2116:2116
    # Names
    03:00.0 SATA controller: ASMedia Technology Inc. Device 1164 (rev 02) (prog-if 01 [AHCI 1.0])
      Subsystem: ZyDAS Technology Corp. Device 2116
    
  • SATA3 SSD: KINGSTON SA400S37480G, 480 GB
  • 8 GB RAM

Hypervisor: Proxmox VE 7.4-3 (May 2023)

Mitigations were disabled for both host and guest in /etc/default/grub:

GRUB_CMDLINE_LINUX_DEFAULT="mitigations=off"

And run sudo update-grub

Tested VM:

  • latest Debian 11
  • recommended settings:
    • Options -> Use tablet for pointer: No (USB eats a lot of CPU, even when idle)
    • Hotplug: Disabled
  • 1 CPU core (host type for passthrough), 2GB RAM
  • filesystem: BTRFS with default settings relatime,space_cache,subvolid=256,subvol=/@rootfs
  • testing package build (mc):
  • build dependencies and sources installed:
    mkdir ~/src
    cd ~/src
    sudo apt-get install devscripts dpkg-dev
    sudo apt-get build-dep mc
    apt-get source mc
  • full rebuild of the package with:
    cd ~/src/mc-4.8.26
    time debuild -i -us -uc -b

General results:

  • Proxmox iowait barely touches 1% (!)
  • But system CPU time is around 30% in the guest, and 20% on the host

Build time results:

  • under ext4 backend (Dir):
    real    8m49.175s
    user    5m50.623s
    sys     2m47.223s
    
    • 2nd run - very consistent(!)
      real    8m58.381s
      user    5m53.755s
      sys     2m53.494s
      
  • under ZFS backend:
    real    9m8.706s
    user    5m56.599s
    sys     3m0.121s
    
    • 2nd run was even worse:
      real    9m40.666s
      user    6m12.011s
      sys     3m16.522s
      
  • under LVM-thin backend (Proxmox default):
    real    8m58.277s
    user    5m55.074s
    sys     2m51.104s
    
    • 2nd run worse, but not by much:
      real    9m17.272s
      user    6m8.327s
      sys     2m58.437s
      

Trying the Debian 11 guest on ext4 instead of BTRFS showed no measurable difference. Here are the details:

  • guest fs: ext4
  • Proxmox backend: ext4 (dir)
  • build time:
    real    8m57.992s
    user    5m51.097s
    sys     2m56.637s
    

Retest: Proxmox host installed on just a single ext4 LV (lvm-thin removed, /var/lib/vz on the main root LV, VM disks as qcow2), and the same Debian 11 VM on ext4:

  • build time:
    real    8m35.107s
    user    5m44.194s
    sys     2m42.038s
    
    • results are very consistent - on the 2nd run just +/- 1 second.

Disable KSM

Kernel Samepage Merging (KSM) attempts to deduplicate identical memory pages across VMs. However, it comes at the expense of CPU usage. Because I have only 2 cores and rarely run more than 1 VM, I disabled it by following: https://pve.proxmox.com/wiki/Kernel_Samepage_Merging_(KSM)

  • script ./disable-ksm.sh:
#!/bin/bash
# https://pve.proxmox.com/wiki/Kernel_Samepage_Merging_(KSM)
set -xeuo pipefail
systemctl disable --now ksmtuned
echo 2 > /sys/kernel/mm/ksm/run
exit 0
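
To verify that KSM is really stopped (a quick check; after run is set to 2 all previously merged pages should be unmerged):

cat /sys/kernel/mm/ksm/run           # 2 = stopped, pages unmerged
cat /sys/kernel/mm/ksm/pages_shared  # should report 0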

Accessing partitions on LVM-Thin

Scenario:

  • I have Fedora 39 installed on lvm-thin
  • I want to mount that volume directly on the Proxmox VE host and back it up with tar

The problem is how to tell the kernel to recognize the partitions inside the LVM volume.

Example for Fedora 39, using GPT UEFI:

# NOTE: disk-0 is holding the UEFI variables, so the OS disk is disk-1!

echo p | fdisk /dev/mapper/pveiron-vm--103--disk--1

...
Disklabel type: gpt
...

Device                                        Start      End  Sectors  Size Type
/dev/mapper/pveiron-vm--103--disk--1-part1     2048  1230847  1228800  600M EFI System
/dev/mapper/pveiron-vm--103--disk--1-part2  1230848 30590975 29360128   14G Linux filesystem
/dev/mapper/pveiron-vm--103--disk--1-part3 30590976 33552383  2961408  1.4G Linux swap

# now list partitions that will be mapped:
kpartx -l /dev/mapper/pveiron-vm--103--disk--1

pveiron-vm--103--disk--1p1 : 0 1228800 /dev/mapper/pveiron-vm--103--disk--1 2048
pveiron-vm--103--disk--1p2 : 0 29360128 /dev/mapper/pveiron-vm--103--disk--1 1230848
pveiron-vm--103--disk--1p3 : 0 2961408 /dev/mapper/pveiron-vm--103--disk--1 30590976

# Finally map these partitions
kpartx -a /dev/mapper/pveiron-vm--103--disk--1

# mount the filesystem of interest
mount -r /dev/mapper/pveiron-vm--103--disk--1p2 /mnt/source/
mkdir -p /PATH_TO_BACKUPS/fedora39-in-vm
tar -cva --numeric-owner --one-file-system -f /PATH_TO_BACKUPS/fedora39-in-vm/f39-rootfs.tar.zst -C /mnt/source .

WARNING! The tar command above does not preserve extended attributes (required for SELinux), but I am going to disable SELinux anyway.
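
When the backup is finished, unmount the filesystem and remove the partition mappings again (a short sketch mirroring the commands above):

umount /mnt/source
kpartx -d /dev/mapper/pveiron-vm--103--disk--1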

Getting Storage usage from CLI

pvesm status displays disk usage in bytes, which is not very human-friendly. But based on https://forum.proxmox.com/threads/how-to-alert-on-disk-space.90982/ I found that this simple command produces nicer output:

pvesh get /nodes/`hostname`/storage

┌──────────────────────────┬───────────┬─────────┬────────┬────────────┬─────────┬────────┬────────────┬────────────┬───────────────┐
│ content                  │ storage   │ type    │ active │      avail │ enabled │ shared │      total │       used │ used_fraction │
╞══════════════════════════╪═══════════╪═════════╪════════╪════════════╪═════════╪════════╪════════════╪════════════╪═══════════════╡
│ images,rootdir           │ ssd-thin  │ lvmthin │ 1      │  42.37 GiB │ 1       │ 0      │  56.27 GiB │  13.90 GiB │        24.70% │
├──────────────────────────┼───────────┼─────────┼────────┼────────────┼─────────┼────────┼────────────┼────────────┼───────────────┤
│ images,vztmpl,backup,iso │ local     │ dir     │ 1      │  24.02 GiB │ 1       │ 0      │ 103.78 GiB │  74.54 GiB │        71.83% │
├──────────────────────────┼───────────┼─────────┼────────┼────────────┼─────────┼────────┼────────────┼────────────┼───────────────┤
│ rootdir,images           │ local-lvm │ lvmthin │ 1      │ 154.47 GiB │ 1       │ 0      │ 392.75 GiB │ 238.28 GiB │        60.67% │
└──────────────────────────┴───────────┴─────────┴────────┴────────────┴─────────┴────────┴────────────┴────────────┴───────────────┘

All storage units are nicely formatted - you can see both GiBs and percentages. Tested on Proxmox VE 8.1-4.

Tip: there is also one handy parameter to specify the output format:

man pvesh

...
--output-format <json | json-pretty | text | yaml> (default = text)

I plan to use some simple monitoring (probably monit?) to send alerts when disk usage exceeds a specific threshold.
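
As a starting point, the same data can be fetched as JSON and filtered with jq (a minimal sketch, assuming jq is installed; the 0.8 threshold is arbitrary):

pvesh get /nodes/`hostname`/storage --output-format json \
  | jq -r '.[] | select(.used_fraction > 0.8) | "\(.storage): \(.used_fraction * 100 | round)% used"'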

Add NAT Network with custom DHCP and DNS

Sometimes it is useful to have a dedicated NAT network with a custom DHCP and DNS server for VMs, for example to log DNS queries. We will partially follow https://pve.proxmox.com/wiki/Network_Configuration and also [[Proxmox in Azure]].

Here is my original, standard /etc/network/interfaces with the standard bridge vmbr0:

auto lo
iface lo inet loopback

iface enp0s8 inet manual

auto vmbr0
iface vmbr0 inet static
	address 192.168.0.51/24
	gateway 192.168.0.1
	bridge-ports enp0s8
	bridge-stp off
	bridge-fd 0

iface enp0s9 inet manual

To add the NAT network, append this to it:

# Experimental NAT network
auto vmbr2
iface vmbr2 inet static
    address  10.10.10.1/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0

    post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o vmbr0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o vmbr0 -j MASQUERADE

Please note that -o vmbr0 on the last 2 lines must specify a routable interface that has Internet access - normally vmbr0.

If you enable the firewall on a VM you also need to add these two lines to the vmbr2 definition in /etc/network/interfaces:

  post-up   iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
  post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1

See https://pve.proxmox.com/wiki/Network_Configuration for details.

Now we will use dnsmasq to provide a DHCP and DNS server for the NAT network:

  • WARNING! A system-wide dnsmasq may clash with the Proxmox SDN feature. Do not use the configuration below if you also use Proxmox SDN!
  • install dnsmasq with:
    apt-get install dnsmasq
  • create a new configuration file /etc/dnsmasq.d/nat.conf with these contents:
    listen-address=10.10.10.1
    # below specify the interface where the NAT network lives:
    interface=vmbr2
    log-queries
    log-dhcp
    dhcp-range=10.10.10.100,10.10.10.200,12h
    # set the gateway in the DHCP response:
    dhcp-option=option:router,10.10.10.1
    # set dnsmasq as DNS server:
    dhcp-option=option:dns-server,10.10.10.1
    
    # DNS: do NOT return IPv6 addresses (AAAA)
    filter-AAAA
    
    # register static IP for specific MAC address
    #dhcp-host=11:22:33:44:55:66,192.168.0.60
    # add custom DNS entry 
    #address=/double-click.net/127.0.0.1
    
  • if you have disabled resolvconf you may also have to uncomment this line in /etc/default/dnsmasq:
    IGNORE_RESOLVCONF=yes

Now reboot to both:

  • apply new /etc/network/interfaces with NAT network on bridge vmbr2
  • apply new configuration for dnsmasq

After reboot:

  • to see the log output, use journalctl -u dnsmasq
  • to test it, replace the original bridge vmbr0 with our NAT bridge vmbr2 in VM -> Hardware -> Network Device
  • then boot the VM and ensure that it uses a DHCP-assigned IP. In the case of RHEL and clones you can use nmtui (NetworkManager Text User Interface) to change the IPv4 configuration from "Manual" to "Automatic"
  • verify that the assigned IP is in the expected range (10.10.10.100 to 10.10.10.200), that the gateway is 10.10.10.1 (with the ip r command) and that DNS is correct (by inspecting /etc/resolv.conf, for example)
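  • on the Proxmox host you can also check which leases dnsmasq handed out (a quick check; the lease file path below is the Debian default):

    cat /var/lib/misc/dnsmasq.leases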