Home
With the v3.0.0 update to the Hanlon-Microkernel project, the Hanlon Microkernel itself has changed significantly, but its role has remained the same. The Microkernel remains a small, in-memory Linux kernel that is used by the Hanlon Server for dynamic, real-time discovery and inventory of the nodes that the Hanlon Server is managing. The Hanlon Server accomplishes these tasks by using the Hanlon Microkernel as the default boot image for any nodes in the network (i.e. any new nodes or any nodes for which the Hanlon Server doesn't have a matching policy). What has changed is that instead of being a customized version of a standard in-memory Linux ISO (previously, the Hanlon Microkernel was a custom version of the Tiny Core Linux distribution's "tiny" ISO), the Hanlon Microkernel is now a Docker image (based on the Alpine Linux Docker image) that is meant to be deployed as a Docker container inside of a (RancherOS) in-memory Linux OS instance.
As was the case with the previous builds of the Hanlon Microkernel, one of the key attributes that we sought in building a Hanlon Microkernel (Docker) image was reduced size. The new Hanlon Microkernel Docker image may be slightly larger than the older (Tiny Core Linux based) ISO (the image file clocks in at about 71MB in size, versus the 51MB or so taken up by the v2.0.1 Hanlon Microkernel ISOs), but the Docker image file can actually be compressed (using `gzip` or `bzip2`) down to a file that is almost half the size of the earlier, ISO-based Microkernels (a bzipped version of the current, v3.0.0 Hanlon Microkernel image is only 24MB in size). As such, we have not increased the size of the Microkernel significantly; in fact, from the point of view of the iPXE-boot server, we have reduced it. Since only the RancherOS instance is downloaded during the iPXE-boot process, the iPXE-boot kernel is downloading less than half of what it had to download with the older, ISO-based Microkernel. This greatly reduces the odds of a timeout occurring during the iPXE-boot process (something that had been observed by some users when booting multiple nodes using the old, ISO-based Microkernel).
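For those who want to produce such a compressed copy of the Microkernel image themselves, a minimal sketch using the standard `docker save` command is shown below (the image name and tag used here are placeholders; substitute the name of your locally built Microkernel image):

```bash
# Export the locally built Microkernel image to a tar archive and compress it;
# "hanlon-microkernel:3.0.0" is a placeholder for whatever name/tag was used
# when the image was built
docker save hanlon-microkernel:3.0.0 | bzip2 -9 > hnl_mk.tar.bz2

# The compressed archive can later be restored on another Docker host with:
#   bunzip2 -c hnl_mk.tar.bz2 | docker load
```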
So, with the new version of the Microkernel, the question of which ISO to use shifted to the question of which image we should use as the base for our Hanlon Microkernel (Docker) image. A number of constraints came to mind, but the list of constraints we applied in order to narrow down our choice of Docker image came down to the following:
- The Docker image should be smaller than 256MB in size (to speed up delivery of the image to the node); smaller was considered better
- Only Docker images that were being actively developed were considered
- The Docker image should be based on a relatively recent Linux kernel (v3.0.0 or later) so that we could be fairly confident that it would support the newer hardware we knew we would find in many modern data-centers
- Since we knew we would be using Facter as part of the node discovery process, the distribution that the Docker image was based on needed to include a pre-built version of Ruby
- The distribution that the Docker image was based on should be distributed under a "commercial friendly" open-source license in order to support development of commercial versions of these custom extensions moving forward
Once we applied all of these constraints, the Alpine Linux distribution and the GliderLabs Docker image really stood out above the rest. The GliderLabs Alpine Linux image easily met all of the constraints that we had applied (and even met a few that we hadn't considered important, initially):
- The GliderLabs image is very small (only 5MB in size) and is designed to run completely in memory
- The Alpine Linux distribution that it is based on uses a (very) recent kernel; the latest release of Alpine Linux (v3.2.3) uses a v3.18.20 Linux kernel and, at the time this was written, that release had been posted only about three months earlier, so we knew that it would provide support for most of the hardware that we were likely to see.
- Alpine Linux can easily be extended by installing additional packages, and a large set of Alpine Linux packages is available through the Alpine Linux package repository. The complete set of extensions for the v3.2 release of Alpine Linux can be found here.
- The licensing terms under which Alpine Linux is available (GPLv2) are relatively commercial friendly, allowing for later development of commercial extensions for the Microkernel (as long as those extensions are not bundled directly into the ISO). This would not be the case if a distribution that used a GPLv3 license were used instead.
With the foundation for our Microkernel chosen, the next step was to select the components necessary for the code we use in the Microkernel Controller for node discovery (and which, when taken together, make up the Hanlon Microkernel). In the next section, we'll describe these additional components (and their interaction with the Hanlon Server) in a bit more detail.
As was mentioned previously, we have added a number of standard Alpine Linux packages (and their dependencies) to the GliderLabs Alpine Linux Docker image in order to support the node discovery process:
- `bash` - includes the dependencies necessary to enable BASH scripting on the node, which has traditionally been used during the container initialization process
- `sed` - includes the dependencies necessary to enable the `sed` command, which has traditionally been used during the container initialization process
- `dmidecode` - includes the dependencies necessary to enable the `dmidecode` command, which is used internally by `facter`
- `ruby` - includes the dependencies necessary to support execution of Ruby files/scripts
- `ruby-irb` - includes the dependencies necessary to enable the `irb` command, which can be useful for debugging the services that make up the Microkernel Controller when things go wrong; other than that, this command is not used within the Microkernel Controller itself
- `open-lldp` - includes the commands used to discover and report on the topology of the network around the node using the Link Layer Discovery Protocol (or LLDP), if it is enabled for the local network
- `util-linux` - contains the `lscpu` command, which is used to discover and report on some of the detailed capabilities of any CPUs found on the node
- `open-vm-tools` - contains the kernel modules necessary to correctly report some of the details regarding the node's capabilities when the Microkernel is being used to discover the capabilities of a VMware-based virtual machine
In addition, the following packages have been installed from the "edge" (or "testing") Alpine Linux repositories using the `apk add` command:
- `lshw` - contains the `lshw` command, which is used to discover and report on the detailed hardware configuration of the node (the number of memory slots and details for each, details about the CPU, networking card details, etc.)
- `ipmitool` - contains the `ipmitool` command, which is used to discover and report the details of the BMC attached to the node (if one exists)
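As a rough sketch of how these packages end up in the image, the `apk` commands run during the image build would look something like the following (this is a sketch, not the project's actual Dockerfile; the edge/testing repository URL shown here is an assumption):

```bash
# Packages from the standard Alpine repositories (sketch; the actual build may differ)
apk add --update bash sed dmidecode ruby ruby-irb open-lldp util-linux open-vm-tools

# Packages pulled from the "edge" (testing) repository; the repository URL is an assumption
apk add --repository http://dl-cdn.alpinelinux.org/alpine/edge/testing lshw ipmitool
```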
In addition to these packages, we also pre-install a few "Ruby Gems" in our Hanlon Microkernel (Docker) image. Currently, this list of gems includes the following:
- `daemons` - a gem that provides the capability to wrap existing Ruby classes/scripts as daemon processes (that can be started, stopped, restarted, etc.); this gem is used primarily to wrap the Hanlon Microkernel Controller as a daemon process.
- `facter` - provides us with access to `facter`, which is used to discover many "facts" about the systems that the Microkernel is deployed to (other "facts" are discovered using the standard `lshw`, `lscpu`, `dmidecode`, and `ipmitool` commands).
- `json_pure` - provides the functionality needed to parse/construct JSON requests, which is critical when interacting with the Hanlon Server; the `json_pure` gem is used because it is purely Ruby based, so we don't have to install any additional packages like we would have to do in order to use the more "performant" (but partly C-based) `json` gem instead.
These Ruby Gems are installed dynamically (as part of the `docker build ...` process), and are used within the Hanlon Microkernel Controller (and the services that it depends on).
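The corresponding build step amounts to little more than a `gem install` of the three gems listed above; a minimal sketch follows (the actual build may pin specific gem versions or pass additional flags):

```bash
# Install the gems used by the Microkernel Controller (sketch only; the
# actual build may pin versions or add flags such as --no-document)
gem install daemons facter json_pure
```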
During the Microkernel initialization process, two key services are started:
- The Microkernel Controller - a Ruby-based daemon process that interacts with the Hanlon Server via HTTP
- The Microkernel Web Server - a WEBrick instance that can be used to interact with the Microkernel Controller via HTTP; currently this server is only used by the Microkernel Controller itself to save any configuration changes it might receive from the Hanlon Server (this action actually triggers a restart of the Microkernel Controller by this web server instance).
These two components work together to accomplish the task at hand, namely discovering the capabilities of the server they have been deployed to and reporting those capabilities back to the Hanlon server. The interactions between these two services can best be described as follows:
- The Microkernel Controller (a Ruby-daemon process) interacts with the Hanlon Server and Microkernel Web Server via HTTP/HTTPS (the former via the Microkernel checkin requests, the latter via POSTs that are made to the Microkernel Web Server instance whenever a new configuration is received by the Microkernel Controller from the Hanlon Server).
- The Microkernel Web Server may restart the Microkernel Controller instance occasionally (typically this is done in order to force the Microkernel Controller to pick up and use a new configuration that it received from the Hanlon Server).
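To make the shape of these interactions a bit more concrete, the sketch below shows what the two kinds of HTTP requests might look like; the hosts, ports, paths, and payload used here are purely hypothetical placeholders, not the actual Hanlon Server or Microkernel Web Server APIs:

```bash
# Hypothetical checkin request from the Microkernel Controller to the Hanlon
# Server; the host, port, path, and parameter are placeholders, not the real API
curl "http://hanlon-server.example.com:8026/node/checkin?uuid=<node-uuid>"

# Hypothetical POST of a new configuration (received from the Hanlon Server) to
# the local Microkernel Web Server, which would then restart the Microkernel
# Controller; the port, path, and payload are also placeholders
curl -X POST -H "Content-Type: application/json" \
     -d '{"checkin_interval": 60}' \
     "http://localhost:2156/config"
```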
For more detailed information about the Hanlon Microkernel (what it is, how it works, how to build your own Microkernel ISOs, and how to build your own Microkernel Extension), users should check the pages on this project Wiki that discuss these topics in more detail:
- An Overview of the Hanlon Microkernel - Provides users with an overview of the Hanlon Microkernel, the Hanlon Microkernel boot process, and how the Hanlon Server uses the Hanlon Microkernel to perform dynamic discovery and registration of nodes in the network.
- Building your own Microkernel image ISO - Describes the process of building a new instance of the Hanlon Microkernel ISO (using the tools provided by the Hanlon-Microkernel project).