Add task timeout and interval settings (#8)
* Add task timeout and interval options

* Update deps

* Add timeouts to http and waitforagent

* Adjust update info

* Update playbook

* Update readme infos

* Fix comment

* Ensure that clean only runs on templates
selamanse committed May 7, 2024
1 parent 73564db commit 4f212e5
Showing 13 changed files with 288 additions and 165 deletions.
7 changes: 7 additions & 0 deletions README.md
@@ -2,6 +2,8 @@

This driver can be used to kickstart a VM in Proxmox VE for use with Docker/Docker Machine.

* NOTE: docker-machine is no longer actively developed, so rancher/machine should be used as the CLI instead: https://github.com/rancher/machine

* [Download](https://github.com/lnxbil/docker-machine-driver-proxmox-ve/releases) and copy it into your `PATH` (don't forget to `chmod +x`) or build your own driver
* Check if it works with this super long commandline:

@@ -38,6 +40,11 @@ But do not worry, we have everything in place to get you running: go to the [ans

## Changes

### Version v5.0.1-ds

- Add settings for task timeout and task interval (useful for slow PVE systems and connections)
- Update the [PVE API library](https://github.com/luthermonson/go-proxmox) to v0.0.0-beta6

### Version v5.0.0-ds

- General rewrite of the driver using a new API library for PVE: https://github.com/luthermonson/go-proxmox (tested on PVE 6, 7, and 8)
13 changes: 12 additions & 1 deletion ansible/Readme.md
@@ -23,4 +23,15 @@ all:

## run

`ansible-playbook --inventory-file inventory.yaml -u root -k -e ansible_network_os=vyos.vyos.vyos -e vmname=ubuntu-cloud playbook.yaml`
`ansible-playbook --inventory-file inventory.yaml -u root -k playbook.yaml`

If you have multiple Proxmox nodes in your inventory file, limit the run to the first node:

`ansible-playbook --inventory-file inventory.yaml -u root -k playbook.yaml --limit firstnode`
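
To preview which hosts a `--limit` pattern matches before running any tasks, `ansible-playbook` can list them. A quick check against the same inventory (the host name `firstnode` is just the example from the command above):

```sh
# List the hosts the limit pattern would target; no tasks are executed.
ansible-playbook --inventory-file inventory.yaml playbook.yaml \
  --limit firstnode --list-hosts
```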


## clean old templates

This should be applied to all Proxmox nodes; it cleans up templates whose names match the `image_prefix` defined in vars.

`ansible-playbook --inventory-file inventory.yaml -u root -k playclean.yaml`
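
Before destroying anything, the selection part of the pipeline in `tasks/t00-clean.yaml` can be run by hand on a node as a dry run. A sketch assuming the default `image_prefix` of `template-` from `vars/main.yml`:

```sh
# List template configs whose VM name starts with the image prefix,
# without destroying any VM.
grep -R 'template: 1' /etc/pve/qemu-server/ |
  awk -F ':' '{print $1}' |
  xargs --max-lines=1 grep -H 'name: template-'
```
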
6 changes: 4 additions & 2 deletions ansible/playbook.yaml
@@ -4,6 +4,8 @@
  connection: ansible.netcommon.network_cli
  gather_facts: false
  hosts: all
  vars_files:
    - vars/main.yml
  tasks:
    - name: Create template task
      include_tasks: tasks/main.yaml
    - name: Create ubuntu template
      include_tasks: tasks/t01-ubuntu.yaml
11 changes: 11 additions & 0 deletions ansible/playclean.yaml
@@ -0,0 +1,11 @@
---

- name: docker-machine proxmox vm template
  connection: ansible.netcommon.network_cli
  gather_facts: false
  hosts: all
  vars_files:
    - vars/main.yml
  tasks:
    - name: Clean all templates
      include_tasks: tasks/t00-clean.yaml
77 changes: 0 additions & 77 deletions ansible/tasks/main.yaml

This file was deleted.

19 changes: 19 additions & 0 deletions ansible/tasks/t00-clean.yaml
@@ -0,0 +1,19 @@
---
- name: clean images
  shell: >
    `# retrieve all vm templates `
    grep -R 'template: 1' /etc/pve/qemu-server/ |
    `# cut filename`
    awk -F ':' '{print $1}' |
    `# match prefix of template name to image_prefix`
    xargs --max-lines=1 grep -H 'name: {{ image_prefix }}' |
    `# extract vmid`
    awk -F '.conf' '{print $1}' |
    awk -F 'qemu-server/' '{print $2}' |
    `# remove template vm`
    xargs --max-lines=1 --max-procs=5 qm destroy
  register: clean_output
  ignore_errors: true

- debug:
    msg: "stdout: {{ clean_output.stdout }}"
19 changes: 19 additions & 0 deletions ansible/tasks/t01-ubuntu.yaml
@@ -0,0 +1,19 @@
---
- name: Create new ubuntu VM Template
  block:
    - name: get current ubuntu img tag
      shell: curl -s https://cloud-images.ubuntu.com/jammy/ | grep href=\"$(date +"%Y") | awk -F '> <a href="' '{print $2}' | awk -F '/"' '{print $1}' | sort -nr | head -1
      register: daily_tag
      changed_when: false

    - set_fact:
        image_name: "{{ image_prefix }}jammy-server-cloudimg-amd64-{{ daily_tag.stdout }}"
        os_type: "ubuntu"

    - name: Download cloud img
      get_url:
        url: https://cloud-images.ubuntu.com/jammy/{{ daily_tag.stdout }}/jammy-server-cloudimg-amd64.img
        dest: /tmp/{{ image_name }}
        mode: '0440'

    - include_tasks: t99-create-vmtemplate.yaml
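
If a run fails between the download and the template-creation steps, the cloud image left in `/tmp` can be inspected directly on the node. A minimal check, assuming the default `image_prefix` of `template-` (the exact file name depends on the daily tag that was resolved):

```sh
# Replace <daily_tag> with the tag the play resolved (see the play output).
ls -lh /tmp/template-jammy-server-cloudimg-amd64-*
qemu-img info /tmp/template-jammy-server-cloudimg-amd64-<daily_tag>
```
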
127 changes: 127 additions & 0 deletions ansible/tasks/t99-create-vmtemplate.yaml
@@ -0,0 +1,127 @@
---
- name: Get next free VM ID
  command: pvesh get /cluster/nextid
  register: vmid
  changed_when: false

- debug:
    msg: "Got VM ID: {{ vmid.stdout }}"

- name: Create a new virtual machine
  command: "qm create {{ vmid.stdout }} --agent enabled=1 --memory 2048 --cpu cputype=host --core 2 --pool {{ template_pool }} --name {{ image_name }} --net0 virtio,bridge=vmbr0 --storage {{ storage_name | default('local-lvm') }}"

- name: Import the downloaded disk to storage
  command: "qm disk import {{ vmid.stdout }} /tmp/{{ image_name }} {{ storage_name | default('local-lvm') }}"
  register: disk_import

- name: get imported disk id
  shell: echo "{{ disk_import.stdout }}" | grep unused0 | awk -F '{{ storage_name }}:' '{print $2}' | sed 's/.$//'
  register: grep_disk_import_disk_id
  changed_when: false

- set_fact:
    imported_disk_id: "{{ grep_disk_import_disk_id.stdout }}"

- name: Attach the new disk to the vm as a scsi drive on the scsi controller
  command: "qm set {{ vmid.stdout }} --scsihw virtio-scsi-pci --scsi0 {{ storage_name | default('local-lvm') }}:{{ imported_disk_id }},discard=on"

- name: Resize the disk for os updates to install on initial boot
  command: "qm resize {{ vmid.stdout }} scsi0 +10G"

- name: Add cloud init drive
  command: "qm set {{ vmid.stdout }} --ide2 {{ storage_name | default('local-lvm') }}:cloudinit"

- name: Make the cloud init drive bootable and restrict BIOS to boot from disk only
  command: "qm set {{ vmid.stdout }} --boot c --bootdisk scsi0"

- name: Add serial console
  command: "qm set {{ vmid.stdout }} --serial0 socket --vga serial0"

- name: create temporary file
  tempfile:
    state: file
  register: tempfile

- name: Save SSH-keys to temporary file
  copy:
    content: "{{ proxmox_vm_sshkeys }}"
    dest: "{{ tempfile.path }}"

- name: Creates snippets directory
  file:
    path: "{{ proxmox_snippets_path }}"
    state: directory

- name: Add vendor snippet
  ansible.builtin.template:
    src: ./templates/cloud-config-{{ os_type }}.yaml
    dest: "{{ proxmox_snippets_path }}/vendor-{{ os_type }}.yaml"
    mode: 0770

# https://pve.proxmox.com/wiki/Cloud-Init_Support
- name: Set cloud-init options
  shell: |
    qm set {{ vmid.stdout }} --sshkey {{ tempfile.path }}
    qm set {{ vmid.stdout }} --ipconfig0 ip=dhcp
    qm set {{ vmid.stdout }} --cicustom 'vendor=cephfs:snippets/vendor-{{ os_type }}.yaml'

- name: check meta
  command: qm cloudinit dump {{ vmid.stdout }} meta
  register: dump_meta

- debug:
    msg: "{{ dump_meta.stdout }}"

- name: check network
  command: qm cloudinit dump {{ vmid.stdout }} network
  register: dump_network

- debug:
    msg: "{{ dump_network.stdout }}"

- name: check user
  command: qm cloudinit dump {{ vmid.stdout }} user
  register: dump_user

- debug:
    msg: "{{ dump_user.stdout }}"

- name: Create template
  command: "qm template {{ vmid.stdout }}"

- name: Get next free VM ID for n2
  command: pvesh get /cluster/nextid
  register: vmid2
  changed_when: false

- name: Create template config for n2
  shell: "sed -r 's/name: (.*)/name: \\1-basedon{{ vmid.stdout }}/' /etc/pve/qemu-server/{{ vmid.stdout }}.conf > /etc/pve/nodes/pa3553/qemu-server/{{ vmid2.stdout }}.conf"

- name: Add vmid2 to pool
  shell: "pvesh set /pools/{{ template_pool }} -vms {{ vmid2.stdout }}"
  ignore_errors: true
  # sometimes an invisible (old?) reference remains in the pool and you get:
  # "update pools failed: VM 131 is already a pool member"
  # so we ignore all errors here since this is a cosmetic issue.

- name: Get next free VM ID for n3
  command: pvesh get /cluster/nextid
  register: vmid3
  changed_when: false

- name: Create template config for n3
  shell: "sed -r 's/name: (.*)/name: \\1-basedon{{ vmid.stdout }}/' /etc/pve/qemu-server/{{ vmid.stdout }}.conf > /etc/pve/nodes/s50/qemu-server/{{ vmid3.stdout }}.conf"

- name: Add vmid3 to pool
  shell: "pvesh set /pools/{{ template_pool }} -vms {{ vmid3.stdout }}"
  ignore_errors: true
  # sometimes an invisible (old?) reference remains in the pool and you get:
  # "update pools failed: VM 131 is already a pool member"
  # so we ignore all errors here since this is a cosmetic issue.

- debug:
    msg: |
      Finished Template
      Template VM ID n1: {{ vmid.stdout }}
      Template VM ID n2: {{ vmid2.stdout }}
      Template VM ID n3: {{ vmid3.stdout }}
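
A possible post-run check on the node that executed the play, to confirm the template config and the pool membership of all three VM IDs (`<vmid>` is a placeholder for an ID printed by the play; `K3S` is the default `template_pool` from `vars/main.yml`):

```sh
# Inspect the generated template config (use a VM ID from the play output).
qm config <vmid>
# Confirm the template VM IDs are members of the pool.
pvesh get /pools/K3S --output-format json
```
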
18 changes: 18 additions & 0 deletions ansible/templates/cloud-config-ubuntu.yaml
@@ -0,0 +1,18 @@
#cloud-config
packages:
  - qemu-guest-agent
  - nfs-common
package_update: true
package_upgrade: false
ntp:
  enabled: true
  ntp_client: chrony # Uses cloud-init default chrony configuration
  pools: [0.de.pool.ntp.org, 1.de.pool.ntp.org, 2.de.pool.ntp.org]
system_info:
  default_user:
    name: root
power_state:
  delay: 0
  mode: reboot
  message: Rebooting machine
  condition: true
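
Inside a VM cloned from the resulting template, a few standard commands can confirm that this cloud-config was applied (assuming an Ubuntu guest and shell access):

```sh
# Wait for cloud-init to finish, then check the installed services.
cloud-init status --wait
systemctl is-active qemu-guest-agent
chronyc sources   # should list the de.pool.ntp.org servers configured above
```
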
7 changes: 7 additions & 0 deletions ansible/vars/main.yml
@@ -0,0 +1,7 @@
---
proxmox_snippets_path: /mnt/pve/cephfs/snippets
image_prefix: "template-"
storage_name: vm_ssd
template_pool: K3S
ansible_network_os: vyos.vyos.vyos
vmname: ubuntu-jammy
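
These defaults can be overridden per run with `ansible-playbook`'s `-e`/`--extra-vars`; the values below are examples only:

```sh
# Build the template against a different storage and pool.
ansible-playbook --inventory-file inventory.yaml -u root -k playbook.yaml \
  -e storage_name=local-lvm -e template_pool=templates
```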
