This repository has been archived by the owner on Jun 29, 2022. It is now read-only.

Add the kvm-libvirt platform to lokoctl #810

Open
wants to merge 5 commits into master

Conversation

pothos
Member

@pothos pothos commented Aug 17, 2020

The KVM libvirt Terraform module was not yet available from lokoctl, despite being an easy way to try out Lokomotive without any cloud provider accounts (and their attached costs). It makes it possible to run Lokomotive on any Flatcar Container Linux image, even local development builds, and it gives direct access to the VGA console for debugging. The cluster VMs can be shut down in virt-manager while they are not needed.

(Reactivating the CI pipeline for libvirt is the next step.)

Closes: #214
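
For orientation, a hypothetical cluster configuration for this platform might look like the sketch below. Only the attribute names that also appear in this PR (cluster_name, machine_domain, pod_cidr, virtual_cpus, virtual_memory) are taken from the code; the block structure and the values are assumptions, and required settings such as SSH keys are omitted. See docs/quickstarts/kvm-libvirt.md in this PR for the actual guide.

```hcl
# Sketch only, not the final schema.
cluster "kvm-libvirt" {
  cluster_name   = "vmcluster"      # example value
  machine_domain = "vmcluster.k8s"  # example value: local DNS domain for the VMs
  pod_cidr       = "10.2.0.0/16"    # example value

  worker_pool "pool-1" {
    virtual_cpus   = 2
    virtual_memory = 2048
  }
}
```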

@johananl
Member

Thanks @pothos! I'm looking at this. In the meantime, could you please fix the linting issues? If you don't have time to follow up, we can pick this up, too.

@pothos
Member Author

pothos commented Aug 18, 2020

Thanks for having a look. I can address some of the issues, but I'm not sure about all of them, since similar code is already present in other Lokomotive platforms (without the linter being disabled for those lines).

@johananl
Member

> Thanks for having a look. I can address some of the issues, but I'm not sure about all of them, since similar code is already present in other Lokomotive platforms (without the linter being disabled for those lines).

Yeah, the linter only checks added/modified lines. I suggest we fix the linting issues here, and I can open a follow-up issue to do the same for the other platforms. We need a green CI before we can merge.

Member

@johananl johananl left a comment

Thanks a lot for the PR. On the technical side this seems to work very nicely.

I have multiple issues with the docs - see my inline comments.

sure that a couple of container images fit on the node.

```sh
$ sudo dnf install wget bzip2 qemu-img # or sudo apt install wget bzip2 qemu-utils
```
Member

These packages are listed as requirements. If we explain how to install them then they are no longer requirements but rather a part of the guide.

Member Author

Ok, should this be moved to the bullet point where the requirements are stated, like Fedora packages A B C or Debian/Ubuntu packages A B C?

Member

IMO it could make sense to list per-distro packages for common distros in the requirements if that's the question.

Member Author

I've updated the PR and stuck to the steps with the distro comments.

Resolved (outdated) review threads:

  • docs/configuration-reference/platforms/kvm-libvirt.md
  • pkg/platform/kvmlibvirt/kvmlibvirt.go (3 threads)
  • docs/quickstarts/kvm-libvirt.md (5 threads)
@pothos
Member Author

pothos commented Aug 18, 2020

Thanks, will try to go through it end of the week.

@johananl
Member

> Thanks, will try to go through it end of the week.

Cool. As said, we can also help with matching the style to our existing guides.

@pothos
Member Author

pothos commented Aug 18, 2020

Feel free to push commits 👍

@pothos
Member Author

pothos commented Aug 26, 2020

Rebased the PR but now bootstrap reports this error and times out: `WARNING: /assets/manifests does not exist, not creating any self-hosted assets`

@ipochi
Member

ipochi commented Oct 20, 2020

> Rebased the PR but now bootstrap reports this error and times out: `WARNING: /assets/manifests does not exist, not creating any self-hosted assets`

That's because no charts are copied for installation.

I have got it working, plus

  • rebased to the latest master
  • TLS bootstrapping enabled

Will update the PR tomorrow after cleanup.

This commit updates versions.tf and the Terraform providers to the format required by Terraform 0.13.

Signed-off-by: Imran Pochi <imran@kinvolk.io>
This commit adds the control plane charts that need to be installed once
the bootstrap control plane is running.

Signed-off-by: Imran Pochi <imran@kinvolk.io>
@ipochi
Member

ipochi commented Oct 21, 2020

@invidian @johananl @surajssd
PR is open for re-review.

Thanks to @pothos for initiating the effort around this.

@ipochi ipochi requested a review from johananl October 21, 2020 11:50
* A Linux host OS.
* At least 4 GB of available RAM.
* An SSH key pair.
* Terraform `v0.12.x`
Member

We now need v0.13.x

* An SSH key pair.
* Terraform `v0.12.x`
[installed](https://learn.hashicorp.com/terraform/getting-started/install.html#install-terraform).
* [terraform-provider-ct](https://github.com/poseidon/terraform-provider-ct),
Member

No need for this; it gets installed automatically now.

Member

#1078 😄

VirtualCPUs int `hcl:"virtual_cpus,optional"`
VirtualMemory int `hcl:"virtual_memory,optional"`
CLCSnippets []string `hcl:"clc_snippets,optional"`
Labels string `hcl:"labels,optional"`
Member

We should use map[string]string, as we are doing with the other platforms.

Comment on lines +113 to +115
{{- if $pool.Labels }}
labels = {{ $pool.Labels }}
{{- end }}
Member

{{- if $pool.Labels }}
  labels = {
  {{- range $k, $v := $pool.Labels }}
    "{{ $k }}" = "{{ $v }}",
  {{- end }}
 }
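
If Labels becomes a map[string]string, the worker pool side and the rendered module input might look roughly like the sketch below; the worker_pool block name and the label key/value are illustrative, not taken from this PR.

```hcl
# Sketch: labels configured as a map in the lokocfg worker pool...
worker_pool "pool-1" {
  labels = {
    "node.example/role" = "worker"
  }
}

# ...which the suggested template above would render into the module input roughly as:
# labels = {
#   "node.example/role" = "worker",
# }
```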

Comment on lines 128 to 150
provider "ct" {
version = "~> 0.5.0"
}

provider "local" {
version = "~> 1.2"
}

provider "null" {
version = "~> 2.1"
}

provider "template" {
version = "~> 2.1"
}

provider "tls" {
version = "~> 2.0"
}

provider "random" {
version = "~> 2.2"
}
Member

I guess this can be removed from here and moved to assets/terraform-modules/kvm-libvirt/flatcar-linux/kubernetes/versions.tf
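
For reference, under Terraform 0.13 the provider requirements in versions.tf would be declared roughly like the sketch below. The version constraints are the ones from the block above; the registry source addresses and the libvirt constraint are assumptions about the usual upstream publishers, not taken from this PR.

```hcl
terraform {
  required_version = ">= 0.13"

  required_providers {
    ct = {
      source  = "poseidon/ct"
      version = "~> 0.5.0"
    }
    libvirt = {
      # assumption: the community provider published by dmacvicar
      source  = "dmacvicar/libvirt"
      version = "~> 0.6"
    }
    local    = { source = "hashicorp/local", version = "~> 1.2" }
    null     = { source = "hashicorp/null", version = "~> 2.1" }
    template = { source = "hashicorp/template", version = "~> 2.1" }
    tls      = { source = "hashicorp/tls", version = "~> 2.0" }
    random   = { source = "hashicorp/random", version = "~> 2.2" }
  }
}
```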

}

provider "libvirt" {
uri = "qemu:///system"
Contributor

Just started setting up Terraform with an existing libvirtd cluster (not Lokomotive) and noticed this PR.

For those of us with a home 'server' it would be great if we could pass the KVM host through the cluster configuration, perhaps along the lines of:

  {{- if .Config.QemuURI }}
  uri     = {{.Config.QemuURI }}
  {{else}} 
  uri     = "qemu:///system"
  {{- end}}
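
For reference, libvirt connection URIs can point at a remote host, so the rendered provider block for a remote KVM server could look like the sketch below; the host and transport are example values, and the value needs quoting, as with the other template fixes in this review.

```hcl
provider "libvirt" {
  # example: connect to a remote libvirtd over SSH; "qemu:///system" stays the local default
  uri = "qemu+ssh://user@kvm-host.example/system"
}
```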

Member Author

To use a remote location? Maybe it's not easy to get this to work with the image volume pool, though?

Contributor

@karlskewes karlskewes Nov 6, 2020

Providing a:

  1. An external file, e.g. https://example.com/flat-car.img, results in the terraform runner (laptop) downloading it and then copying it across the network to the server. I think this is reasonable.
  2. A laptop-local file, e.g. /path/to/flat-car.img, results in the terraform runner copying it across the network to the server. Also reasonable.

It is unintuitive, but I don't think there's an alternative that has the remote KVM host fetch the images itself - see dmacvicar/terraform-provider-libvirt#299 (comment)

I currently supply a path to images that's actually an NFS mount from the 'server', which is kind of gross (looping data) but works in the sense that I don't need to keep the image on my laptop. 😅

# Qcow2 base OS volume that can be shared by VM's (domain's)
resource "libvirt_volume" "os_base" {
  name   = "${var.guest_hostname}-os_base"
  source = "/mnt/nfs/path/to/ubuntu-focal-server-cloudimg-amd64.img"
}

# OS volume per guest VM
resource "libvirt_volume" "os" {
  count          = var.guest_count
  name           = "${var.guest_hostname}-${format("%02d", count.index)}-os.qcow2"
  base_volume_id = libvirt_volume.os_base.id
  pool           = var.guest_pool_name
  size           = var.os_volume_size
}

Contributor

Please let me know if I misunderstood the question.
If the concept is acceptable, perhaps we could add a line to the markdown file saying that image sources will be fetched by the lokoctl executor and copied to the remote KVM host?

Maybe I'll build a fork of this PR with the change and do some testing first. :)
Then I can open a small PR once this one is merged, or just post here that it works as expected if this is still being iterated on.

Contributor

I started building this PR and trying to run it against my remote kvm server.

On top of a couple of the string-quoting fixes suggested here, the changes are: https://github.com/kskewes/lokomotive/commit/6246cc76e8e7287759023565e5fc000677aa4266
It creates all of the libvirt domains and related resources, but the default NAT networking doesn't allow access from the lokoctl runner to the hosts. It really needs to be a bridge network for me. I'll have another go later and can iterate in a fork.
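
A bridged libvirt network is one way to make the VMs reachable from other machines on the LAN. A minimal sketch (not part of this PR; "br0" is assumed to already exist on the KVM host) could look like:

```hcl
resource "libvirt_network" "cluster" {
  name      = "cluster"
  mode      = "bridge"  # attach guests to an existing host bridge instead of NAT
  bridge    = "br0"     # assumption: pre-configured Linux bridge on the KVM host
  autostart = true
}
```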

DisableSelfHostedKubelet bool `hcl:"disable_self_hosted_kubelet,optional"`
EnableReporting bool `hcl:"enable_reporting,optional"`
EnableAggregation bool `hcl:"enable_aggregation,optional"`
EnableTLSBootstrap bool `hcl:"enable_tsl_bootstrap,optional"`
Contributor

Suggested change
EnableTLSBootstrap bool `hcl:"enable_tsl_bootstrap,optional"`
EnableTLSBootstrap bool `hcl:"enable_tls_bootstrap,optional"`

{{- end }}

{{- if $pool.VirtualMemory }}
virtual_memory {{ $pool.VirtualMemory }}
Contributor

Suggested change
virtual_memory {{ $pool.VirtualMemory }}
virtual_memory = {{ $pool.VirtualMemory }}

{{- end }}

{{- if .Config.ControllerVirtualMemory}}
virtual_memory {{ .Config.ControllerVirtualMemory}}
Contributor

Suggested change
virtual_memory {{ .Config.ControllerVirtualMemory}}
virtual_memory = {{ .Config.ControllerVirtualMemory}}

{{- end }}

{{- if .Config.PodCidr }}
pod_cidr = {{.Config.PodCidr }}
Contributor

Suggested change
pod_cidr = {{.Config.PodCidr }}
pod_cidr = "{{.Config.PodCidr }}"

pod_cidr = {{.Config.PodCidr }}
{{- end }}
{{- if .Config.ServiceCidr }}
service_cidr = {{.Config.ServiceCidr }}
Contributor

Suggested change
service_cidr = {{.Config.ServiceCidr }}
service_cidr = "{{.Config.ServiceCidr }}"

cluster_domain_suffix = {{$.Config.ClusterDomainSuffix }}
{{- end }}
{{- if $.Config.ServiceCidr }}
service_cidr = {{$.Config.ServiceCidr }}
Contributor

Suggested change
service_cidr = {{$.Config.ServiceCidr }}
service_cidr = "{{$.Config.ServiceCidr }}"

{{- end }}

{{- if .Config.ClusterDomainSuffix }}
cluster_domain_suffix = {{.Config.ClusterDomainSuffix }}
Contributor

Suggested change
cluster_domain_suffix = {{.Config.ClusterDomainSuffix }}
cluster_domain_suffix = "{{.Config.ClusterDomainSuffix }}"

machine_domain = "{{$.Config.MachineDomain}}"
cluster_name = "{{$.Config.ClusterName}}"
{{- if $.Config.ClusterDomainSuffix }}
cluster_domain_suffix = {{$.Config.ClusterDomainSuffix }}
Contributor

Suggested change
cluster_domain_suffix = {{$.Config.ClusterDomainSuffix }}
cluster_domain_suffix = "{{$.Config.ClusterDomainSuffix }}"

{{- end }}

{{- if .Config.NetworkIPAutodetectionMethod }}
network_ip_autodetection_method = {{ .Config.NetworkIPAutodetectionMethod }}
Contributor

Suggested change
network_ip_autodetection_method = {{ .Config.NetworkIPAutodetectionMethod }}
network_ip_autodetection_method = "{{ .Config.NetworkIPAutodetectionMethod }}"
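
All of these quoting suggestions address the same issue: string values interpolated into the rendered Terraform have to be quoted to produce valid HCL. For example (the CIDR is just an illustrative value):

```hcl
# Without quotes the rendered line is not valid HCL:
#   pod_cidr = 10.2.0.0/16
# With quotes it is a proper string value:
pod_cidr = "10.2.0.0/16"
```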

@invidian
Member

invidian commented Nov 8, 2020

@kskewes if you're interested in Lokomotive using libvirt underneath, there is also #392, which might be in better shape than this PR (though libvirt usage there is really only for testing).

@karlskewes
Contributor

> @kskewes if you're interested in Lokomotive using libvirt underneath, there is also #392, which might be in better shape than this PR (though libvirt usage there is really only for testing).

Had a go at running up #392.

  1. I like the de-duplication of terraform modules.
  2. It seems to rely on known network addressing for Tinkerbell, which is possible with NAT networks, matching static MAC allocation on my router, or plumbing through some more variables.
  3. NAT networks in libvirt allow host-originated connections to SSH to the VMs, but they don't (by default) set up iptables rules for remote hosts (my laptop/lokoctl runner). Currently I bridge separate NICs on the KVM host switch ports, and this keeps the KVM host networking relatively clean.

Ultimately I think either continuing with #810 or doing my own libvirt plus bare-metal quickstart might be the fastest way for me to come up to speed on Flatcar Linux and Lokomotive. Then I'll look at how I might be able to assist with #392.

@invidian invidian removed their request for review June 7, 2022 10:43
Development

Successfully merging this pull request may close these issues.

Support kvm-libvirt terraform module
6 participants