
Conversation

@ricardomaraschini
Contributor

Introduction

Despite the amount of code changed to accomplish multi-node deployments, almost everything here works as before: you can install a single node with helmvm install and you can stop the controller processes with systemctl stop <binaryname>.

One thing has been changed:

  1. The binary is now named helmvm, as calling it helmbin in one place and helmvm in another was confusing (we should consider renaming this repository too).

One feature has been removed:

  1. The run command line flag has been dropped.

And many things have been added:

  1. Support for multi-node deployments.
  2. Support for disconnected installs.
  3. Support for embedding custom Helm Charts through the command line.
  4. Support for upgrading only the add-ons and leaving the cluster alone.
  5. Support for applying a Terraform infrastructure and only then deploying the cluster on top of it.
  6. Support for Darwin arm64 and amd64 in the installer (the cluster itself still only runs on Linux x86_64, as before).

Building

As before, you can only build it on Linux AMD64, but I have left pre-compiled versions available online to make the review here easier. You can fetch the binary with the following commands:

$ curl -o helmvm "http://ricardo.cafe/build?os=darwin&arch=amd64" && chmod 755 helmvm
$ curl -o helmvm "http://ricardo.cafe/build?os=darwin&arch=arm64" && chmod 755 helmvm
$ curl -o helmvm "http://ricardo.cafe/build?os=linux&arch=amd64" && chmod 755 helmvm

If you plan to deploy a single node cluster, the Linux AMD64 version is the one you are looking for. Download it on the server where you plan to install, follow the same procedure used before, and you should be good to go. If you want to build it yourself you can run (on a Linux AMD64 machine) any of the following commands:

$ make helmvm-darwin-amd64
$ make helmvm-darwin-arm64
$ make helmvm-linux-amd64

Choose the architecture you want to build for.

Installing a Single Node cluster

To install a single node, as before, you need to run (on the node where you want to install it):

$ helmvm install

Once the cluster is deployed you can use the helmvm shell to access and manage its objects:

$ helmvm shell

After entering the helmvm shell you will have access to the cluster with kubectl (the environment will be configured to reach the cluster). If you prefer access outside of the helmvm shell you can have it; you just need to adjust a couple of environment variables.
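As a sketch of what that could look like, the variable to adjust is most likely KUBECONFIG; the path below is an assumption, so check where your installation actually wrote the kubeconfig:

```shell
# Hypothetical kubeconfig location -- adjust to wherever your helmvm
# installation stored it.
export KUBECONFIG="$HOME/.helmvm/etc/kubeconfig"
echo "kubectl will now use $KUBECONFIG"
# From here on, e.g.: kubectl get nodes
```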

Installing a Multi Node cluster

If you plan to deploy a multi node cluster you can run, on your laptop, the following command:

$ helmvm install --multi-node

This command will guide you through creating a new cluster configuration file and then apply it. The config file is created only once and stored in your ~/.helmvm/etc directory; each subsequent call to helmvm install will attempt to reuse it. If you prefer, you can craft a configuration file manually, in which case you can pass it through the -c flag as in the example below:

$ helmvm install --multi-node -c ./path/to/config

This will use the provided config to install or upgrade the cluster (depending on whether a cluster is already configured on the remote nodes).
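Purely as an illustration of the kind of information such a file carries (the real schema is defined by helmvm; run helmvm install --multi-node once to generate a reference copy under ~/.helmvm/etc), a manually crafted configuration could look something like:

```yaml
# Hypothetical sketch only -- field names are assumptions, not the
# documented helmvm schema.
hosts:
  - role: controller
    address: 10.0.0.10
    ssh:
      user: ubuntu
      port: 22
      keyPath: ~/.ssh/id_rsa
  - role: worker
    address: 10.0.0.11
```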

NOTE: If you configured more than one controller node you will have the chance to provide a Load Balancer URL to make the cluster highly available.

What is valid for a Single Node cluster is also valid here: you can use the helmvm shell to access the cluster objects.

Upgrading clusters

For upgrades you have two options: you can upgrade the whole cluster, or upgrade only the add-ons and embedded Helm Charts. To upgrade the whole cluster you can run:

$ helmvm install

Or, for Multi Node Deployments:

$ helmvm install --multi-node

This will read the current configuration and apply the new HelmVM version on top of the previously configured nodes (also valid for Single Node Deployments). If you prefer to keep Kubernetes as is and want to upgrade only the add-ons (the Helm Charts embedded into the binary), you can run:

$ helmvm install --addons-only

Or, for Multi Node Deployments:

$ helmvm install --multi-node --addons-only

Embedding your own Helm Charts

It is easy to embed your own Helm Chart into the binary and have it installed whenever a cluster is created or upgraded. The idea is: you download helmvm, use it to embed your own Helm Charts, and then distribute the new binary to your customers. With helmvm already installed you can run:

$ helmvm embed \
       --chart /path/to/memcached-6.5.6.tgz \
       --values /path/to/values.yaml \
       --images docker.io/library/memcached:latest \
       --output memcached

Some of the parameters above are optional, but they will make sense when we get to "Disconnected installs" below. This command embeds the memcached-6.5.6.tgz Helm Chart into helmvm and creates a new binary called memcached in the current directory. From this point on you can use the memcached binary as if it were a helmvm binary (the difference being that one installs memcached and the other does not). To install it on a multi-node cluster you can run:

$ ./memcached install --multi-node

The commands systemctl stop memcached and systemctl start memcached will be available inside the nodes. The same is valid for Single Node Deployments.

NOTE: Imagine if, when you ran helmvm embed, it embedded your chart while also adding the replicated Helm Chart as a dependency of it. How cool would that be?

Disconnected installs

In order to install in a disconnected environment we need all the images necessary to get the cluster running, so you first need to download them. The HelmVM version in this PR allows you to download all the necessary images by running:

$ helmvm build-bundle

This will create a directory called bundle inside the current directory. You then have to upload the binary and the bundle directory to the air gap environment and run (for single node deployments):

$ helmvm install --bundle /path/to/the/bundle/dir

This guarantees that all necessary images are present and copied to all nodes. If you want to install a multi node cluster you can run, from your machine:

$ helmvm install --multi-node --bundle /path/to/the/bundle/dir

Deploying the infrastructure before deploying the cluster

This PR also contains experimental support for Terraform infrastructure deployments; see README.md for more details. In a nutshell: you can use HelmVM to deploy your infra and then automatically deploy the cluster on top of it. For that you need to guarantee that your Terraform manifests have a specially crafted output section (again, see the README.md) and then you can, from your machine, run something along the lines of:

$ helmvm install --multi-node --infra /path/to/my/terraform/infra/

This will first apply the infra; once the infra is deployed it captures the created node IPs and deploys the cluster on the nodes according to what has been defined in your Terraform manifests.
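The specially crafted output section mentioned above could look something along these lines (a hypothetical shape only; the authoritative format is documented in the README.md):

```terraform
# Hypothetical output exposing the created node IPs to HelmVM.
# Names and structure are assumptions, not the documented contract.
output "nodes" {
  value = [for n in aws_instance.node : n.public_ip]
}
```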

Other

The idea here is to use a good Kubernetes Distribution (k0s in this case) and keep it as close to upstream as possible. We can then focus our efforts on developing tools around it that best suit our own needs (e.g. UX, embedding Helm Charts, tooling around infrastructure, etc). With this in mind, this PR also includes a few more things.

If you want to bring a node down for maintenance you can, from your remote machine, run:

$ helmvm node stop <node name>

This also works if you are logged into the Single Node Cluster. NOTE: you have to provide the node name even in a Single Node Deployment; in a future iteration we can allow omitting the node name argument when there is only one node. What this does, behind the scenes, is drain the node in question. Once the maintenance is done you can bring the node back up with:

$ helmvm node start <node name>

You can, if you prefer, run helmvm shell and then drain the node using kubectl, but the idea is to reserve the shell entry for helping out with debugging. For everything else that is a common operation we can dictate what needs to be done and in what order, and therefore offer entries directly in the helmvm binary (similar to node stop and node start).
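For reference, the drain mentioned above corresponds roughly to the following kubectl operations (a sketch with placeholder names; the exact flags helmvm uses are not shown in this PR description):

```shell
# Roughly what "helmvm node stop <node>" does behind the scenes
# (flags here are assumed, not taken from the helmvm source):
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
# And "helmvm node start <node>" would then correspond to:
kubectl uncordon <node-name>
```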

Aiming for familiarity

This PR also includes some unorthodox ideas. These ideas aim to bring the experience of using HelmVM close to the experience of using kURL today. This is something we may or may not pursue, but it is in this PR nonetheless. Yes, I am talking about curl | bash. You can stop reading here if you feel like it.

The functionality bringing HelmVM closer to the kURL experience is implemented in this PR by the HelmVM Builder Server. I have left a HelmVM Builder Server running online so you can play with it and entertain the idea. A few examples:

  1. You are logged into a Linux machine and want to deploy HelmVM on it:
$ curl -s http://ricardo.cafe | /bin/bash -s install
  2. You want to download and install the appropriate version of the HelmVM binary for your OS and Architecture:
$ curl -s http://ricardo.cafe | /bin/bash
  3. If you are on a Mac you can use zsh:
$ curl -s http://ricardo.cafe | /bin/zsh

One last feature present in the HelmVM Builder Server is the capability of building HelmVM binaries that already embed some Helm Charts. See the README.md for how to craft a POST request and receive back a binary that already includes your Helm Chart.

List of changes

  • Renamed binary to helmvm.
  • Added helmvm builder service.
  • Added support for multi-arch builds.
  • Downloading yq as part of the build process.
  • Readme updated to document new features.
  • Allowing users to pass a configuration file to the apply command.
  • Added option to only apply addons and skip cluster update.
  • Added option to embed helm charts into the binary.
  • Renamed adminconsole and openebs release names.
  • Added custom helm chart installer.
  • Added support for load balancer configuration.
  • Using different location when storing configuration on macOS.
  • Added script for "curl | bash" install.
  • Standardised error messages.
  • Added helmvm shell command to spawn a pre-configured bash.
  • Reads all ssh keys from within ~/.ssh directory recursively.
  • Users can now inform a different path for their ssh keys.
  • Remembering selected ssh user, port and key.
  • Stop spawning a new bash and use $SHELL instead.
  • Using binary name instead of "helmvm".
  • Getting rid of static check warnings.
  • Make sure we have a proper default ssh key path.
  • Disable konnectivity server and openebs ndm.
  • Adding subcommands to manage nodes.
  • Add support for infra and node commands.
  • Creating systemd service with binary name.

}
stdout := bytes.NewBuffer(nil)
stderr := bytes.NewBuffer(nil)
cmd := exec.Command("ln", "-s", src, dst)
Member


seems like this would be easier https://pkg.go.dev/os#Symlink

Contributor Author


Oh, that is excellent. I dropped this code from this branch and I will be opening a new one only with the token authentication process.

@ricardomaraschini force-pushed the multi-node-support branch 2 times, most recently from 7096dbb to 1951fc2 on August 9, 2023 14:06
@ricardomaraschini ricardomaraschini merged commit 0643eda into main Aug 9, 2023
@ricardomaraschini ricardomaraschini deleted the multi-node-support branch August 9, 2023 14:08
emosbaugh pushed a commit that referenced this pull request Aug 26, 2024
