
support for CoreOS container linux configs!

highlights:
* coreos is in its own coreos.py library file. first step to
splitting things out and organizing a bit more
* moved the templates for the user data file to jinja2. these are a lot
more flexible than the old mako templates.
* migrated from the old cloud-configs to container linux configs,
which are transpiled into ignition configs
* some required command-line flags for qemu are not provided out of the box
by virt-install. vmbuilder modifies the XML to add those required flags
and arguments
* documentation updates: apparmor config modifications are needed to handle
Ignition/XML files which are outside the standard tree.
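The last two highlights go together: Ignition configs are handed to qemu via a `-fw_cfg` argument that virt-install does not emit, so the domain XML needs a `qemu:commandline` element appended. A minimal sketch of that XML edit (the namespace is libvirt's documented qemu passthrough schema; the function name and paths are hypothetical, not vmbuilder's actual code):

```python
import xml.etree.ElementTree as ET

# libvirt's qemu command-line passthrough namespace.
QEMU_NS = "http://libvirt.org/schemas/domain/qemu/1.0"
ET.register_namespace("qemu", QEMU_NS)

def add_fw_cfg(domain_xml: str, ignition_path: str) -> str:
    """Append the -fw_cfg argument Ignition expects to a libvirt domain XML."""
    root = ET.fromstring(domain_xml)
    cmdline = ET.SubElement(root, f"{{{QEMU_NS}}}commandline")
    arg_flag = ET.SubElement(cmdline, f"{{{QEMU_NS}}}arg")
    arg_flag.set("value", "-fw_cfg")
    arg_value = ET.SubElement(cmdline, f"{{{QEMU_NS}}}arg")
    # Container Linux reads its Ignition config from this fw_cfg key.
    arg_value.set("value", f"name=opt/com.coreos/config,file={ignition_path}")
    return ET.tostring(root, encoding="unicode")

out = add_fw_cfg("<domain type='kvm'><name>t</name></domain>", "/pool/t.ign")
```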
jforman committed Dec 27, 2017
1 parent 131da49 commit 0cc65134d3dfd1aaaf14392a9e947e428969b491
Showing with 291 additions and 193 deletions.
  1. +17 −20 README.md
  2. +63 −70 configs/coreos_user_data.template
  3. +203 −99 coreos.py
  4. +8 −1 vmbuilder.py
  5. +0 −3 vmtypes.py
@@ -1,12 +1,12 @@
# virtbuilder
virtbuilder is a helpful wrapper script that makes creating virtual machines managed by libvirt a lot easier and more automated.
virtbuilder is a helpful wrapper script that makes creating virtual machines managed by libvirt a lot easier.
## Functionality Provided
* Delete pre-existing VM and VM disk images before creating a new one.
* Ubuntu and Debian installs without needing to download an ISO locally.
* CoreOS cloud config support.
* CoreOS support, with the ability to provide a templated Container Linux config file.
* CoreOS etcd-based cluster support using CoreOS public Discovery service.
* Listing disk pools and volumes in those pools.
@@ -22,7 +22,7 @@ Supported types of virtual machines that can be created:
## Requirements
* Python
* Python module: bs4, ipaddress, libvirt, mako, netaddr
* Python module: bs4, ipaddress, libvirt, jinja2, netaddr
## Usage
@@ -61,7 +61,8 @@ There are several required parameters.
* bridge_interface
* disk_pool_name
* host_name and domain_name
* host_name
* domain_name
* vm_type
### Creating a single Debian/Ubuntu VM
@@ -79,7 +80,7 @@ If you want to tie your CoreOS VMs together into an etcd-based cluster the follo
```
vmbuilder.py \
--bridge_interface ${vm_host_iface} --disk_pool_name localdump --host_name ${base_name} --vm_type coreos --domain_name ${vm_domainname} --coreos_create_cluster --cluster_size ${cluster_size} --coreos_cluster_overlay_network ${dotted_quad}/${netmask} create_vm
--bridge_interface ${vm_host_iface} --disk_pool_name localdump --host_name ${base_name} --vm_type coreos --domain_name ${vm_domainname} --coreos_create_cluster --cluster_size ${cluster_size} create_vm
```
### A three-VM CoreOS cluster using static IP addressing for each CoreOS node.
@@ -97,21 +98,17 @@ vmbuilder.py create_vm --bridge_interface ${interface} --domain_name foo.dmz.exa
An NFS mount stanza can be added to your cloud config file with the following flag.
```
--coreos_nfs_mount allmyfiles1:/foo/bar
```
This creates the following stanza in your cloud config:
```
- name: rpc-statd.service
command: start
enable: true
- name: foo-bar.mount
command: start
content: |
[Mount]
What=allmyfiles1:/foo/bar
Where=/foo/bar
Type=nfs
```
The NFS directory of the remote server will be mounted on your CoreOS VM at /foo/bar.
This will automatically start the rpc-statd service, as well as mount /foo/bar from
server allmyfiles1 at /foo/bar on the CoreOS machine.
## Notes
The CoreOS Container Linux configs, resultant Ignition configs as well as their
libvirt XML files are stored within the disk pool directory itself. Previously
these files were stored in the default libvirt/qemu directory within the host.
This provides for more resilient storage (on my host this is a ZFS-backed NFS
share which is backed up remotely). Because of this new directory location,
an update to the apparmor configuration file is needed. Explanation can be found
in this CoreOS bug: https://github.com/coreos/bugs/issues/2083
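The apparmor change that bug describes amounts to granting qemu read access to the disk pool directory. A sketch of the kind of rule involved, assuming a hypothetical pool path, added to the `libvirt-qemu` abstraction (e.g. `/etc/apparmor.d/abstractions/libvirt-qemu`):

```
# Allow qemu to read Ignition configs stored in the disk pool.
# The pool path below is hypothetical; use your disk_pool_name location.
/mnt/vmpool/** r,
```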
@@ -1,81 +1,74 @@
#cloud-config
storage:
files:
- path: "/etc/hostname"
filesystem: root
contents:
inline: {{ vm_name }}
ssh_authorized_keys:
% for key in ssh_keys:
- ${key}
% endfor
passwd:
users:
- name: "core"
ssh_authorized_keys:
{%- for key in ssh_keys %}
- "{{ key }}"
{%- endfor %}
hostname: ${vm_name}
% if not nfs_mounts is UNDEFINED:
write-files:
- path: /etc/conf.d/nfs
permissions: '0644'
content: |
OPTS_RPC_MOUNTD=""
% endif
coreos:
{%- if static_network %}
networkd:
units:
% if not static_network is UNDEFINED:
- name: systemd-networkd.service
command: stop
- name: 00-eth0.network
runtime: true
content: |
contents: |
[Match]
Name=eth0
[Network]
% for current_dns in dns:
DNS=${current_dns}
% endfor
Address=${ip_address}/${network_prefixlen}
Gateway=${gateway}
- name: down-interfaces.service
command: start
content: |
[Service]
Type=oneshot
ExecStart=/usr/bin/ip link set eth0 down
ExecStart=/usr/bin/ip addr flush dev eth0
% endif
- name: systemd-networkd.service
command: restart
% if not nfs_mounts is UNDEFINED:
- name: rpc-statd.service
command: start
Address={{ ip_address }}/{{ network_prefixlen }}
Gateway={{ gateway }}
{%- for current_dns in dns %}
DNS={{ current_dns -}}
{% endfor %}
{% endif %}
{%- if create_cluster %}
etcd:
name: "{{ vm_name }}"
discovery: "{{ discovery_url }}"
advertise_client_urls: "http://{PRIVATE_IPV4}:2379"
initial_advertise_peer_urls: "http://{PRIVATE_IPV4}:2380"
listen_client_urls: "http://0.0.0.0:2379"
listen_peer_urls: "http://{PRIVATE_IPV4}:2380"
initial_cluster: "{{ vm_name }}=http://{PRIVATE_IPV4}:2380"
flannel:
etcd_prefix: "/coreos.com/network2"
network_config: '{ "Network": "{{ fleet_overlay_network }}" }'
locksmith:
reboot_strategy: "etcd-lock"
etcd_endpoints: "http://localhost:2379"
{% else %}
locksmith:
reboot_strategy: "reboot"
{%- endif %}
{%- if nfs_mounts %}
systemd:
units:
{%- for current_mount in nfs_mounts %}
- name: {{ current_mount['name'] }}.mount
enable: true
% for current_mount in nfs_mounts:
- name: ${current_mount['name']}.mount
command: start
content: |
contents: |
[Unit]
Before=remote-fs.target
[Mount]
What=${current_mount['what']}
Where=${current_mount['where']}
What={{ current_mount['what'] }}
Where={{ current_mount['where'] }}
Type=nfs
% endfor
% endif
% if not discovery_url is UNDEFINED:
- name: etcd2.service
command: start
- name: fleet.service
command: start
- name: flanneld.service
command: start
drop-ins:
- name: 50-network-config.conf
content: |
[Service]
ExecStartPre=/usr/bin/etcdctl set /coreos.com/network/config '{ "Network": "${fleet_overlay_network}" }'
etcd2:
discovery: ${discovery_url}
advertise-client-urls: http://${etcd_listen_host}:2379,http://${etcd_listen_host}:4001
initial-advertise-peer-urls: http://${etcd_listen_host}:2380
listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001
listen-peer-urls: http://${etcd_listen_host}:2380,http://${etcd_listen_host}:7001
fleet:
public-ip: ${etcd_listen_host}
update:
reboot-strategy: "etcd-lock"
% endif
[Install]
WantedBy=remote-fs.target
{% endfor %}
{%- endif %}
update:
group: "{{ coreos_channel }}"
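As a rough illustration of the mako-to-jinja2 migration above, the ssh-key block of the new template renders like this (the key value is a placeholder; `ssh_keys` is the same variable the template uses):

```python
import jinja2

# Excerpt of the user-data template above, using jinja2's
# whitespace-trimming {%- ... %} tags for the key loop.
template = jinja2.Template(
    'passwd:\n'
    '  users:\n'
    '    - name: "core"\n'
    '      ssh_authorized_keys:\n'
    '{%- for key in ssh_keys %}\n'
    '        - "{{ key }}"\n'
    '{%- endfor %}\n'
)
rendered = template.render(ssh_keys=["ssh-ed25519 EXAMPLEKEY core@host"])
print(rendered)
```

The `{%-` markers trim the newlines the loop tags would otherwise leave behind, which the old mako `% for` syntax handled differently.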
