diff --git a/blogs.xml b/blogs.xml index 7edb2ba31a..7ef436b04f 100644 --- a/blogs.xml +++ b/blogs.xml @@ -3,1172 +3,1256 @@ NixOS Planet - http://planet.nixos.org + https://planet.nixos.org en - NixOS Planet - http://planet.nixos.org - + NixOS Planet - https://planet.nixos.org + - nixbuild.net: Introducing nixbuild.net - https://blog.nixbuild.net/posts/2020-02-18-introducing-nixbuild-net.html - https://blog.nixbuild.net/posts/2020-02-18-introducing-nixbuild-net.html - <p>Exactly one month ago, I <a href="https://discourse.nixos.org/t/announcing-nixbuild-net-nix-build-as-a-service">announced</a> the <a href="https://nixbuild.net">nixbuild.net</a> service. Since then, there have been lots of work on functionality, performance and stability of the service. As of today, nixbuild.net is exiting alpha and entering private beta phase. If you want to try it out, just <a href="mailto:rickard@nixbuild.net">send me an email</a>.</p> -<p>Today, I’m also launching the <a href="https://blog.nixbuild.net">nixbuild.net blog</a>, which is intended as an outlet for anything related to the nixbuild.net service. Announcements, demos, technical articles and various tips and tricks. We’ll start out with a proper introduction of nixbuild.net; why it was built, what it can help you with and what the long-term goals are.</p> + Craige McWhirter: Building Daedalus Flight on NixOS + http://mcwhirter.com.au//craige/blog/2020/Building_Daedalus_Flight_on_NixOS/ + http://mcwhirter.com.au//craige/blog/2020/Building_Daedalus_Flight_on_NixOS/ + <p><img alt="NixOS Daedalus Gears by Craige McWhirter" src="http://mcwhirter.com.au/files/NixOS_Daedalus_Gears.png" title="NixOS Daedalus Gears by Craige McWhirter" /></p> + +<p><a href="https://daedaluswallet.io/en/flight/">Daedalus Flight</a> was recently released +and this is how you can build and run this version of +<a href="https://daedaluswallet.io/">Deadalus</a> on <a href="https://nixos.org/">NixOS</a>.</p> + +<p>If you want to speed the build process up, you can add the +<a href="https://iohk.io/">IOHK</a> <a href="https://nixos.org/nix/">Nix</a> cache to your own NixOS configuration:</p> + +<p><a href="https://source.mcwhirter.io/craige/mio-ops/src/branch/master/roles/iohk.nix">iohk.nix</a>:</p> + +<pre><code class="nix">nix.binaryCaches = [ + "https://cache.nixos.org" + "https://hydra.iohk.io" +]; +nix.binaryCachePublicKeys = [ + "hydra.iohk.io:f/Ea+s+dFdN+3Y/G+FDgSq+a5NEWhJGzdjvKNGv0/EQ=" +]; +</code></pre> -<h2 id="why-nixbuild.net">Why nixbuild.net?</h2> -<p><a href="https://nixos.org/nix/">Nix</a> has great built-in support for <a href="https://nixos.org/nix/manual/#chap-distributed-builds">distributing builds</a> to remote machines. You just need to setup a standard Nix enviroment on your build machines, and make sure they are accessible via SSH. Just like that, you can offload your heavy builds to a couple of beefy build servers, saving your poor laptop’s fan from spinning up.</p> -<p>However, just when you’ve tasted those sweet distributed builds you very likely run into the issue of <em>scaling</em>.</p> -<p>What if you need a really big server to run your builds, but only really need it once or twice per day? You’ll be wasting a lot of money keeping that build server available.</p> -<p>And what if you occasionally have lots and lots of builds to run, or if your whole development team wants to share the build servers? 
Then you probably need to add more build servers, which means more wasted money when they are not used.</p> -<p>So, you start looking into auto-scaling your build servers. This is quite easy to do if you use some cloud provider like AWS, Azure or GCP. But, this is where Nix will stop cooperating with you. It is really tricky to get Nix to work nicely together with an auto-scaled set of remote build machines. Nix has only a very coarse view of the “current load” of a build machine and can therefore not make very informed decisions on exactly how to distribute the builds. If there are multiple Nix instances (one for each developer in your team) fighting for the same resources, things get even trickier. It is really easy to end up in a situation where a bunch of really heavy builds are fighting for CPU time on the same build server while the other servers are idle or running lightweight build jobs.</p> -<p>If you use <a href="https://nixos.org/hydra/">Hydra</a>, the continous build system for Nix, you can find scripts for using auto-scaled AWS instances, but it is still tricky to set it up. And in the end, it doesn’t work perfectly since Nix/Hydra has no notion of “consumable” CPU/memory resources so the build scheduling is somewhat hit-and-miss.</p> -<p>Even if you manage to come up with a solution that can handle your workload in an acceptable manner, you now have a new job: <em>maintaining</em> uniquely configured build servers. Possibly for your whole company.</p> -<p>Through my consulting company, <a href="https://immutablesolutions.com/">Immutable Solutions</a>, I’ve done a lot of work on Nix-based deployments, and I’ve always struggled with half-baked solutions to the Nix build farm problem. This is how the idea of the nixbuild.net service was born — a service that can fill in the missing pieces of the Nix distributed build puzzle and package it as a simple, no-maintenance, cost-effective service.</p> -<h2 id="who-are-we">Who are We?</h2> -<p>nixbuild.net is developed and operated by me (Rickard Nilsson) and my colleague David Waern. We both have extensive experience in building Nix-based solutions, for ourselves and for various clients.</p> -<p>We’re bootstrapping nixbuild.net, and we are long-term committed to keep developing and operating the service. Today, nixbuild.net can be productively used for its main purpose — running Nix builds in a scalable and cost-effective way — but there are lots of things that can (and will) be built on top of and around that core. Read more about this below.</p> -<h2 id="what-does-nixbuild.net-look-like">What does nixbuild.net Look Like?</h2> -<p>To the end-user, a person or team using Nix for building software, nixbuild.net behaves just like any other <a href="https://nixos.org/nix/manual/#chap-distributed-builds">remote build machine</a>. As such, you can add it as an entry in your <code>/etc/nix/machines</code> file:</p> -<pre><code>beta.nixbuild.net x86_64-linux - 100 1 big-parallel,benchmark</code></pre> -<p>The <code>big-parallel,benchmark</code> assignment is something that is called <em>system features</em> in Nix. You can use that as a primitive scheduling strategy if you have multiple remote machines. Nix will only submit builds that have been marked as requiring a specific system feature to machines that are assigned that feature.</p> -<p>The number 100 in the file above tells Nix that it is allowed to submit up to 100 simultaneous builds to <code>beta.nixbuild.net</code>. 
Usually, you use this property to balance builds between remote machines, and to make sure that a machine doesn’t run too many builds at the same time. This works OK when you have rather homogeneous builds, and only one single Nix client is using a set of build servers. If multiple Nix clients use the same set of build servers, this simplistic scheduling breaks down, since a given Nix client loses track on how many builds are really running on a server.</p> -<p>However, when you’re using nixbuild.net, you can set this number to anything really, since nixbuild.net will take care of the scheduling and scaling on its own, and it will not let multiple Nix clients step on each other’s toes. In fact each build that nixbuild.net runs is securely isolated from other builds and by default gets exclusive access to the resources (CPU and memory) it has been assigned.</p> -<p>Apart from setting up the distributed Nix machines, you need to configure SSH. When you register an account on nixbuild.net, you’ll provide us with a public SSH key. The corresponding private key is used for connecting to nixbuild.net. This private key needs to be readable by the user that runs the Nix build. This is usually the <code>root</code> user, if you have a standard Nix setup where the <code>nix-daemon</code> process runs as the root user.</p> -<p>That’s all there is to it, now we can run builds using nixbuild.net!</p> -<p>Let’s try building the following silly build, just so we can see some action:</p> -<pre><code>let pkgs = import &lt;nixpkgs&gt; { system = "x86_64-linux"; }; +<p>If you haven't already, you can clone the <a href="https://github.com/input-output-hk/daedalus">Daedalus +repo</a> and specifically the +1.0.0 tagged commit:</p> -in pkgs.runCommand "silly" {} '' - n=0 - while (($n &lt; 12)); do - date | tee -a $out - sleep 10 - n=$(($n + 1)) - done -''</code></pre> -<p>This build will run for 2 minutes and output the current date every ten seconds:</p> -<pre><code>$ nix-build silly.nix -these derivations will be built: - /nix/store/cy14fc13d3nzl65qp0sywvbjnnl48jf8-silly.drv -building '/nix/store/khvphdj3q7nyim46jk97fjp174damrik-silly.drv' on 'ssh://beta.nixbuild.net'... -Mon Feb 17 20:53:47 UTC 2020 -Mon Feb 17 20:53:57 UTC 2020 -Mon Feb 17 20:54:07 UTC 2020</code></pre> -<p>You can see that Nix is telling us that the build is running on nixbuild.net!</p> -<h3 id="the-nixbuild.net-shell">The nixbuild.net Shell</h3> -<p>nixbuild.net supports a simple shell interface that you can access through SSH. 
This shell allows you to retrieve information about your builds on the service.</p> -<p>For example, we can list the currently running builds:</p> -<pre><code>$ ssh beta.nixbuild.net shell -nixbuild.net&gt; list builds --running -10524 2020-02-17 21:05:20Z [40.95s] [Running] - /nix/store/khvphdj3q7nyim46jk97fjp174damrik-silly.drv</code></pre> -<p>We can also get information about any derivation or nix store path that has been built:</p> -<pre><code>nixbuild.net&gt; show drv /nix/store/khvphdj3q7nyim46jk97fjp174damrik-silly.drv -Derivation - path = /nix/store/khvphdj3q7nyim46jk97fjp174damrik-silly.drv - builds = 1 - successful builds = 1 +<pre><code>$ git clone --branch 1.0.0 https://github.com/input-output-hk/daedalus.git +</code></pre> -Outputs - out -&gt; /nix/store/8c7sndr3npwmskj9zzp4347cnqh5p8q0-silly +<p>Once you've cloned the repo and checked you're on the 1.0.0 tagged commit, +you can build Daedalus flight with the following command:</p> -Builds - 10524 2020-02-17 21:05:20Z [02:01] [Built]</code></pre> -<p>This shell is under development, and new features are added continuously. A web-based frontend will also be implemented.</p> -<h2 id="the-road-ahead">The Road Ahead</h2> -<p>To finish up this short introduction to nixbuild.net, let’s talk a bit about our long-term goals for the service.</p> -<p>The core purpose of nixbuild.net is to provide Nix users with pay-per-use distributed builds that are simple to set up and integrate into any workflow. The build execution should be performant and secure.</p> -<p>There are a number of features that basically just are nice side-effects of the design of nixbuild.net:</p> -<ul> -<li><p>Building a large number of variants of the same derivation (a build matrix or some sort of parameter sweep) will take the same time as running a single build, since nixbuild.net can run all builds in parallel.</p></li> -<li><p>Running repeated builds to find issues related to non-determinism/reproducability will not take longer than running a single build.</p></li> -<li><p>A whole team/company can share the same account in nixbuild.net letting builds be shared in a cost-effective way. If everyone in a team delegates builds to nixbuild.net, the same derivation will never have to be built twice. This is similar to having a shared Nix cache, but avoids having to configure a cache and perform network uploads for each build artifact. Of course, nixbuild.net can be combined with a Nix cache too, if desired.</p></li> -</ul> -<p>Beyond the above we have lots of thoughts on where we want to take nixbuild.net. I’m not going to enumerate possible directions here and now, but one big area that nixbuild.net is particularly suited for is advanced build analysis and visualisation. The sandbox that has been developed to securely isolate builds from each other also gives us a unique way to analyze exactly how a build behaves. One can imagine nixbuild.net being able give very detailed feedback to users about build bottlenecks, performance regressions, unused dependencies etc.</p> -<p>With that said, our primary focus right now is to make nixbuild.net a robust workhorse for your Nix builds, enabling you to fully embrace Nix without being limited by local compute resources. 
Please <a href="mailto:rickard@nixbuild.net">get in touch</a> if you want try out nixbuild.net, or if you have any questions or comments!</p> - Tue, 18 Feb 2020 00:00:00 +0000 - support@nixbuild.net (nixbuild.net) - - - Sander van der Burg: A declarative process manager-agnostic deployment framework based on Nix tooling - tag:blogger.com,1999:blog-1397115249631682228.post-3829850759126756827 - http://sandervanderburg.blogspot.com/2020/02/a-declarative-process-manager-agnostic.html - In a previous blog post written two months ago, <a href="https://sandervanderburg.blogspot.com/2019/11/a-nix-based-functional-organization-for.html">I have introduced a new experimental Nix-based process framework</a>, that provides the following features:<br /><br /><ul><li>It uses the <strong>Nix expression language</strong> for configuring running process instances, including their dependencies. The configuration process is based on only a few <strong>simple concepts</strong>: function definitions to define constructors that generate process manager configurations, function invocations to compose running process instances, and <a href="https://sandervanderburg.blogspot.com/2013/09/managing-user-environments-with-nix.html">Nix profiles</a> to make collections of process configurations accessible from a single location.</li><li>The <strong>Nix package manager</strong> delivers all packages and configuration files and isolates them in the Nix store, so that they never conflict with other running processes and packages.</li><li>It identifies <strong>process dependencies</strong>, so that a process manager can ensure that processes are activated and deactivated in the right order.</li><li>The ability to deploy <strong>multiple instances</strong> of the same process, by making conflicting resources configurable.</li><li>Deploying processes/services as an <strong>unprivileged user</strong>.</li><li>Advanced concepts and features, such as <a href="http://man7.org/linux/man-pages/man7/namespaces.7.html">namespaces</a> and <a href="http://man7.org/linux/man-pages/man7/cgroups.7.html">cgroups</a>, are <strong>not required</strong>.</li></ul><br />Another objective of the framework is that it should work with a variety of process managers on a variety of operating systems.<br /><br />In my previous blog post, I was deliberately using sysvinit scripts (also known as LSB Init compliant scripts) to manage the lifecycle of running processes as a starting point, because they are universally supported on Linux and self contained -- sysvinit scripts only require the right packages installed, but they do not rely on external programs that manage the processes' life-cycle. Moreover, sysvinit scripts can also be conveniently used as an unprivileged user.<br /><br />I have also developed a Nix function that can be used to more conveniently generate sysvinit scripts. Traditionally, these scripts are written by hand and basically require that the implementer writes the same boilerplate code over and over again, such as the activities that start and stop the process.<br /><br />The sysvinit script generator function can also be used to directly specify the implementation of all activities that manage the life-cycle of a process, such as:<br /><br /><pre><br />{createSystemVInitScript, nginx, stateDir}:<br />{configFile, dependencies ? [], instanceSuffix ? 
""}:<br /><br />let<br /> instanceName = "nginx${instanceSuffix}";<br /> nginxLogDir = "${stateDir}/${instanceName}/logs";<br />in<br />createSystemVInitScript {<br /> name = instanceName;<br /> description = "Nginx";<br /> activities = {<br /> start = ''<br /> mkdir -p ${nginxLogDir}<br /> log_info_msg "Starting Nginx..."<br /> loadproc ${nginx}/bin/nginx -c ${configFile} -p ${stateDir}<br /> evaluate_retval<br /> '';<br /> stop = ''<br /> log_info_msg "Stopping Nginx..."<br /> killproc ${nginx}/bin/nginx<br /> evaluate_retval<br /> '';<br /> reload = ''<br /> log_info_msg "Reloading Nginx..."<br /> killproc ${nginx}/bin/nginx -HUP<br /> evaluate_retval<br /> '';<br /> restart = ''<br /> $0 stop<br /> sleep 1<br /> $0 start<br /> '';<br /> status = "statusproc ${nginx}/bin/nginx";<br /> };<br /> runlevels = [ 3 4 5 ];<br /><br /> inherit dependencies instanceName;<br />}<br /></pre><br />In the above Nix expression, we specify five activities to manage the life-cycle of Nginx, a free/open source web server:<br /><br /><ul><li>The <strong>start</strong> activity initializes the state of Nginx and starts the process (<a href="https://sandervanderburg.blogspot.com/2020/01/writing-well-behaving-daemon-in-c.html">as a daemon</a> that runs in the background).</li><li><strong>stop</strong> stops the Nginx daemon.</li><li><strong>reload</strong> instructs Nginx to reload its configuration</li><li><strong>restart</strong> restarts the process</li><li><strong>status</strong> shows whether the process is running or not.</li></ul><br />Besides directly implementing activities, the Nix function invocation shown above can also be used on a much <strong>higher level</strong> -- typically, sysvinit scripts follow the same conventions. Nearly all sysvinit scripts implement the activities described above to manage the life-cycle of a process, and these typically need to be re-implemented over and over again.<br /><br />We can also generate the implementations of these activities automatically from a high level specification, such as:<br /><br /><pre><br />{createSystemVInitScript, nginx, stateDir}:<br />{configFile, dependencies ? [], instanceSuffix ? ""}:<br /><br />let<br /> instanceName = "nginx${instanceSuffix}";<br /> nginxLogDir = "${stateDir}/${instanceName}/logs";<br />in<br />createSystemVInitScript {<br /> name = instanceName;<br /> description = "Nginx";<br /> initialize = ''<br /> mkdir -p ${nginxLogDir}<br /> '';<br /> process = "${nginx}/bin/nginx";<br /> args = [ "-c" configFile "-p" stateDir ];<br /> runlevels = [ 3 4 5 ];<br /><br /> inherit dependencies instanceName;<br />}<br /></pre><br />You could basically say that the above <i>createSystemVInitScript</i> function invocation makes the configuration process of a sysvinit script "<a href="https://sandervanderburg.blogspot.com/2016/03/the-nixos-project-and-deploying-systems.html"><strong>more declarative</strong></a>" -- you do not need to specify the activities that need to be executed to manage processes, but instead, you specify the <strong>relevant characteristics</strong> of a running process.<br /><br />From this high level specification, the implementations for all required activities will be derived, using conventions that are commonly used to write sysvinit scripts.<br /><br />After completing the initial version of the process management framework that works with sysvinit scripts, I have also been investigating other process managers. 
I discovered that their configuration processes have many things in common with the sysvinit approach. As a result, I have decided to explore these declarative deployment concepts a bit further.<br /><br />In this blog post, I will describe a declarative process manager-agnostic deployment approach that we can integrate into the experimental Nix-based process management framework.<br /><br /><h2>Writing declarative deployment specifications for managed running processes</h2><br />As explained in the introduction, I have also been experimenting with other process managers than sysvinit. For example, instead of generating a sysvinit script that manages the life-cycle of a process, such as the Nginx server, we can also generate a supervisord configuration file to define Nginx as a program that can be managed with supervisord:<br /><br /><pre><br />{createSupervisordProgram, nginx, stateDir}:<br />{configFile, dependencies ? [], instanceSuffix ? ""}:<br /><br />let<br /> instanceName = "nginx${instanceSuffix}";<br /> nginxLogDir = "${stateDir}/${instanceName}/logs";<br />in<br />createSupervisordProgram {<br /> name = instanceName;<br /> command = "mkdir -p ${nginxLogDir}; "+<br /> "${nginx}/bin/nginx -c ${configFile} -p ${stateDir}";<br /> inherit dependencies;<br />}<br /></pre><br />Invoking the above function will generate a supervisord program configuration file, instead of a sysvinit script.<br /><br />With the following Nix expression, we can generate a systemd unit file so that Nginx's life-cycle can be managed by systemd:<br /><br /><pre><br />{createSystemdService, nginx, stateDir}:<br />{configFile, dependencies ? [], instanceSuffix ? ""}:<br /><br />let<br /> instanceName = "nginx${instanceSuffix}";<br /> nginxLogDir = "${stateDir}/${instanceName}/logs";<br />in<br />createSystemdService {<br /> name = instanceName;<br /> Unit = {<br /> Description = "Nginx";<br /> };<br /> Service = {<br /> ExecStartPre = "+mkdir -p ${nginxLogDir}";<br /> ExecStart = "${nginx}/bin/nginx -c ${configFile} -p ${stateDir}";<br /> Type = "simple";<br /> };<br /><br /> inherit dependencies;<br />}<br /></pre><br />What you may probably notice when comparing the above two Nix expressions with the last sysvinit example (that captures process characteristics instead of activities), is that they all contain very similar properties. Their main difference is a slightly different organization and naming convention, because each abstraction function is tailored towards the configuration conventions that each target process manager uses.<br /><br />As discussed in my previous blog post about declarative programming and deployment, declarativity is a spectrum -- the above specifications are (somewhat) declarative because they do not capture the activities to manage the life-cycle of the process (the <strong>how</strong>). Instead, they specify <strong>what</strong> process we want to run. The process manager derives and executes all activities to bring that process in a running state.<br /><br />sysvinit scripts themselves are not declarative, because they specify all activities (i.e. shell commands) that need to be executed to accomplish that goal. 
supervisord configurations and systemd service configuration files are (somewhat) declarative, because they capture process characteristics -- the process manager derives and executes all required activities to bring the process into a running state.<br /><br />Despite the fact that I am not specifying any process management activities, these Nix expressions could still be considered somewhat of a "how specification", because each configuration is tailored towards a specific process manager. A process manager, such as sysvinit, is a means to accomplish something else: getting a running process whose life-cycle can be conveniently managed.<br /><br />If I revise the above specifications to only express what kind of running process I want, disregarding the process manager, then I could simply write:<br /><br /><pre><br />{createManagedProcess, nginx, stateDir}:<br />{configFile, dependencies ? [], instanceSuffix ? ""}:<br /><br />let<br /> instanceName = "nginx${instanceSuffix}";<br /> nginxLogDir = "${stateDir}/${instanceName}/logs";<br />in<br />createManagedProcess {<br /> name = instanceName;<br /> description = "Nginx";<br /> initialize = ''<br /> mkdir -p ${nginxLogDir}<br /> '';<br /> process = "${nginx}/bin/nginx";<br /> args = [ "-c" configFile "-p" "${stateDir}/${instanceName}" ];<br /><br /> inherit dependencies instanceName;<br />}<br /></pre><br />The above Nix expression simply states that we want to run a managed Nginx process (using certain command-line arguments) and before starting the process, we want to initialize the state by creating the log directory, if it does not exist yet.<br /><br />I can translate the above specification to all kinds of configuration artifacts that can be used by a variety of process managers to accomplish the same outcome. I have developed six kinds of generators allowing me to target the following process managers:<br /><br /><ul><li>sysvinit scripts, also known as <a href="https://wiki.debian.org/LSBInitScripts">LSB Init compliant scripts</a>.</li><li><a href="http://supervisord.org">supervisord</a> programs</li><li><a href="https://www.freedesktop.org/wiki/Software/systemd">systemd</a> services</li><li><a href="https://www.launchd.info">launchd</a> services</li><li><a href="https://www.freebsd.org/doc/en_US.ISO8859-1/articles/rc-scripting/index.html">BSD rc</a> scripts</li><li>Windows services (via Cygwin's <a href="http://web.mit.edu/cygwin/cygwin_v1.3.2/usr/doc/Cygwin/cygrunsrv.README">cygrunsrv</a>)</li></ul><br />Translating the properties of the process manager-agnostic configuration to process manager-specific properties is quite straightforward for most concepts -- in many cases, there is a direct mapping between a property in the process manager-agnostic configuration and a process manager-specific property.<br /><br />For example, when we intend to target supervisord, then we can translate the <i>process</i> and <i>args</i> parameters to a <i>command</i> invocation. For systemd, we can translate <i>process</i> and <i>args</i> to the <i>ExecStart</i> property that refers to a command-line instruction that starts the process.<br /><br />
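To make this mapping a bit more tangible, the sketch below illustrates roughly what the generated artifacts for the Nginx example could look like -- a supervisord program section and a systemd unit. The store paths are placeholders and the real output of the generators may differ in its details:<br /><br /><pre><br /># supervisord program configuration (sketch)<br />[program:nginx]<br />command=/nix/store/...-nginx/bin/nginx -c /nix/store/...-nginx.conf -p /var/nginx<br /><br /># systemd unit (sketch)<br />[Unit]<br />Description=Nginx<br /><br />[Service]<br />ExecStart=/nix/store/...-nginx/bin/nginx -c /nix/store/...-nginx.conf -p /var/nginx<br />Type=simple<br /></pre><br />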
Although the process manager-agnostic abstraction function supports enough features to get some well known system services working (e.g. Nginx, Apache HTTP service, PostgreSQL, MySQL etc.), it does not facilitate all possible features of each process manager -- it will provide a reasonable set of common features to get a process running and to impose some restrictions on it.<br /><br />It is still possible to work around the feature limitations of process manager-agnostic deployment specifications. We can also influence the generation process by defining <strong>overrides</strong> to get process manager-specific properties supported:<br /><br /><pre><br />{createManagedProcess, nginx, stateDir}:<br />{configFile, dependencies ? [], instanceSuffix ? ""}:<br /><br />let<br /> instanceName = "nginx${instanceSuffix}";<br /> nginxLogDir = "${stateDir}/${instanceName}/logs";<br />in<br />createManagedProcess {<br /> name = instanceName;<br /> description = "Nginx";<br /> initialize = ''<br /> mkdir -p ${nginxLogDir}<br /> '';<br /> process = "${nginx}/bin/nginx";<br /> args = [ "-c" configFile "-p" "${stateDir}/${instanceName}" ];<br /><br /> inherit dependencies instanceName;<br /><br /> overrides = {<br /> sysvinit = {<br /> runlevels = [ 3 4 5 ];<br /> };<br /> };<br />}<br /></pre><br />In the above example, we have added an override specifically for sysvinit to tell the init system that the process should be started in runlevels 3, 4 and 5 (which implies the process should be stopped in the remaining runlevels: 0, 1, 2, and 6). The other process managers that I have worked with do not have a notion of runlevels.<br /><br />Similarly, we can use an override to, for example, use systemd-specific features to run a process in a Linux namespace etc.<br /><br />
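Such a systemd-specific override could conceptually look like the sketch below, which uses ordinary systemd unit directives to request a private /tmp and a private network namespace (the exact override structure that the generator expects may differ):<br /><br /><pre><br /> overrides = {<br /> systemd = {<br /> Service = {<br /> PrivateTmp = true;<br /> PrivateNetwork = true;<br /> };<br /> };<br /> };<br /></pre><br /><br />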
<h2>Simulating process manager-agnostic concepts with no direct equivalents</h2><br />For some process manager-agnostic concepts, process managers do not always have direct equivalents. In such cases, there is still the possibility to apply non-trivial simulation strategies.<br /><br /><h3>Foreground processes or daemons</h3><br />What all deployment specifications shown in this blog post have in common is that their main objective is to bring a process into a running state. How these processes are expected to behave is different among process managers.<br /><br />sysvinit and BSD rc scripts expect processes to <strong>daemonize</strong> -- on invocation, a process spawns another process that keeps running in the background (the daemon process). After the initialization of the daemon process is done, the parent process terminates. If processes do not daemonize, the startup process execution blocks indefinitely.<br /><br />Daemons introduce another complexity from a process management perspective -- when invoking an executable from a shell session in background mode, the shell can tell you its process ID, so that it can be stopped when it is no longer necessary.<br /><br />With daemons, an invoked process forks another child process (or, when it is supposed to really behave well: it double forks) that becomes the daemon process. The daemon process gets adopted by the init system, and thus remains in the background even if the shell session ends.<br /><br />The shell that invokes the executable does not know the PIDs of the resulting daemon processes, because that value is only propagated to the daemon's parent process, not the calling shell session. To still be able to control it, a well-behaving daemon typically writes its process IDs to a so-called PID file, so that it can be reliably terminated by a shell command when it is no longer required.<br /><br />sysvinit and BSD rc scripts extensively use PID files to control daemons. By using a process' PID file, the managing sysvinit/BSD rc script can tell you whether a process is running or not and reliably terminate a process instance.<br /><br />"More modern" process managers, such as launchd, supervisord, and cygrunsrv, do not work with processes that daemonize -- instead, these process managers are daemons themselves that invoke processes that work in "foreground mode".<br /><br />One of the advantages of this approach is that services can be more reliably controlled -- because their PIDs are directly propagated to the controlling daemon from the <i>fork()</i> library call, it is no longer required to work with PID files, which may not always work reliably (for example: a process might abruptly terminate and never clean its PID file, giving the system the false impression that it is still running).<br /><br />systemd improves process control even further by using Linux cgroups -- although foreground processes may be controlled more reliably than daemons, they can still fork other processes (e.g. a web service that creates processes per connection). When the controlling parent process terminates, and does not properly terminate its own child processes, they may keep running in the background indefinitely. With cgroups it is possible for the process manager to retain control over all processes spawned by a service and terminate them when a service is no longer needed.<br /><br />systemd has another unique advantage over the other process managers -- it can work both with foreground processes and daemons, although foreground processes seem to be preferred according to the documentation, because they are much easier to control and develop.<br /><br />Many common system services, such as OpenSSH, MySQL or Nginx, have the ability to both run as a foreground process and as a daemon, typically by providing a command-line parameter or defining a property in a configuration file.<br /><br />To provide an optimal user experience for all supported process managers, it is typically a good thing in the process manager-agnostic deployment specification to specify both how a process can be used as a foreground process and as a daemon:<br /><br /><pre><br />{createManagedProcess, nginx, stateDir, runtimeDir}:<br />{configFile, dependencies ? [], instanceSuffix ? ""}:<br /><br />let<br /> instanceName = "nginx${instanceSuffix}";<br /> nginxLogDir = "${stateDir}/${instanceName}/logs";<br />in<br />createManagedProcess {<br /> name = instanceName;<br /> description = "Nginx";<br /> initialize = ''<br /> mkdir -p ${nginxLogDir}<br /> '';<br /> process = "${nginx}/bin/nginx";<br /> args = [ "-p" "${stateDir}/${instanceName}" "-c" configFile ];<br /> foregroundProcessExtraArgs = [ "-g" "daemon off;" ];<br /> daemonExtraArgs = [ "-g" "pid ${runtimeDir}/${instanceName}.pid;" ];<br /><br /> inherit dependencies instanceName;<br /><br /> overrides = {<br /> sysvinit = {<br /> runlevels = [ 3 4 5 ];<br /> };<br /> };<br />}<br /></pre><br />In the above example, we have revised the Nginx expression to both specify how the process can be started as a foreground process and as a daemon. 
The only thing that needs to be configured differently is one global directive in the Nginx configuration file -- by default, Nginx runs as a daemon, but by adding the <i>daemon off;</i> directive to the configuration we can run it in foreground mode.<br /><br />When we run Nginx as a daemon, we configure a PID file that refers to the instance name so that multiple instances can co-exist.<br /><br />To make this conveniently configurable, the above expression does the following:<br /><br /><ul><li>The <i>process</i> parameter specifies the process that needs to be started both in foreground mode and as a daemon. The <i>args</i> parameter specifies common command-line arguments that both the foreground and daemon process will use.</li><li>The <i>foregroundProcessExtraArgs</i> parameter specifies additional command-line arguments that are only used when the process is started in foreground mode. In the above example, it is used to provide Nginx the global directive that disables the daemon setting.</li><li>The <i>daemonExtraArgs</i> parameter specifies additional command-line arguments that are only used when the process is started as a daemon. In the above example, it is used to provide Nginx a global directive with a PID file path that uniquely identifies the process instance.</li></ul><br />For custom software and services implemented in a different language than C, e.g. Node.js, Java or Python, it is far less common that they have the ability to daemonize -- they can typically only be used as foreground processes.<br /><br />Nonetheless, we can still daemonize foreground-only processes, by using an external tool, such as <a href="http://www.libslack.org/daemon/">libslack's <i>daemon</i></a> command:<br /><br /><pre><br />$ daemon -U -i myforegroundprocess<br /></pre><br />The above command daemonizes the foreground process and creates a PID file for it, so that it can be managed by the sysvinit/BSD rc utility scripts.<br /><br />The opposite kind of "simulation" is also possible -- if a process can only be used as a daemon, then we can use a <strong>proxy process</strong> to make it appear as a foreground process:<br /><br /><pre style="overflow: auto;"><br />export _TOP_PID=$$<br /><br /># Handle the SIGTERM and SIGINT signals and forward them to the daemon process<br />_term()<br />{<br /> trap "exit 0" TERM<br /> kill -TERM "$pid"<br /> kill $_TOP_PID<br />}<br /><br />_interrupt()<br />{<br /> kill -INT "$pid"<br />}<br /><br />trap _term SIGTERM<br />trap _interrupt SIGINT<br /><br /># Start process in the background as a daemon<br />${executable} "$@"<br /><br /># Wait for the PID file to become available.<br /># Useful to work with daemons that don't behave well enough.<br />count=0<br /><br />while [ ! -f "${_pidFile}" ]<br />do<br /> if [ $count -eq 10 ]<br /> then<br /> echo "The pid file does not seem to appear! 
Giving up!"<br /> exit 1<br /> fi<br /><br /> echo "Waiting for ${_pidFile} to become available..."<br /> sleep 1<br /><br /> count=$((count + 1))<br />done<br /><br /># Determine the daemon's PID by using the PID file<br />pid=$(cat ${_pidFile})<br /><br /># Wait in the background for the PID to terminate<br />${if stdenv.isDarwin then ''<br /> lsof -p $pid +r 3 &amp;&gt;/dev/null &amp;<br />'' else if stdenv.isLinux || stdenv.isCygwin then ''<br /> tail --pid=$pid -f /dev/null &amp;<br /> '' else if stdenv.isBSD || stdenv.isSunOS then ''<br /> pwait $pid &amp;<br /> '' else<br /> throw "Don't know how to wait for process completion on system: ${stdenv.system}"}<br /><br /># Wait for the blocker process to complete.<br /># We use wait, so that bash can still<br /># handle the SIGTERM and SIGINT signals that may be sent to it by<br /># a process manager<br />blocker_pid=$!<br />wait $blocker_pid<br /></pre><br />The idea of the proxy script shown above is that it runs as a foreground process as long as the daemon process is running and relays any relevant incoming signals (e.g. terminate and interrupt) to the daemon process.<br /><br />Implementing this proxy was a bit tricky:<br /><br /><ul><li>At the beginning of the script we configure signal handlers for the <i>TERM</i> and <i>INT</i> signals so that the process manager can terminate the daemon process.</li><li>We must start the daemon and wait for it to become available. Although the parent process of a well-behaving daemon should only terminate when the initialization is done, this turns out not to be a hard guarantee -- to make the process a bit more robust, we deliberately wait for the PID file to become available, before we attempt to wait for the termination of the daemon.</li><li>Then we wait for the PID to terminate. The bash shell has an internal <i>wait</i> command that can be used to wait for a background process to terminate, but this only works with processes in the same process group as the shell. Daemons are in a new session (with different process groups), so they cannot be monitored by the shell by using the <i>wait</i> command.<br /><br /><a href="https://stackoverflow.com/questions/1058047/wait-for-a-process-to-finish">From this Stackoverflow article</a>, I learned that we can use the <i>tail</i> command of GNU Coreutils, or <i>lsof</i> on macOS/Darwin, and <i>pwait</i> on BSDs and Solaris/SunOS to monitor processes in other process groups.</li><li>When a command is being executed by a shell script (e.g. in this particular case: <i>tail</i>, <i>lsof</i> or <i>pwait</i>), the shell script can no longer respond to signals until the command completes. To still allow the script to respond to signals while it is waiting for the daemon process to terminate, we must run the previous command in background mode, and we use the <i>wait</i> instruction to block the script. <a href="https://unix.stackexchange.com/questions/146756/forward-sigterm-to-child-in-bash">While a <i>wait</i> command is running, the shell can respond to signals</a>.</li></ul><br />The generator function will automatically pick the best solution for the selected target process manager -- this means that when our target process managers are sysvinit or BSD rc scripts, the generator automatically picks the configuration settings to run the process as a daemon. 
For the remaining process managers, the generator will pick the configuration settings that run it as a foreground process.<br /><br />If a desired process model is not supported, then the generator will automatically simulate it. For instance, if we have a foreground-only process specification, then the generator will automatically configure a sysvinit script to call the <i>daemon</i> executable to daemonize it.<br /><br />A similar process happens when a daemon-only process specification is deployed for a process manager that cannot work with it, such as supervisord.<br /><br /><h3>State initialization</h3><br />Another important aspect in process deployment is <strong>state initialization</strong>. Most system services require the presence of state directories in which they can store their PID, log and temp files. If these directories do not exist, the service may not work and refuse to start.<br /><br />To cope with this problem, I typically make processes self-initializing -- before starting the process, I check whether the state has been initialized (e.g. check if the state directories exist) and re-initialize the initial state if needed.<br /><br />With most process managers, state initialization is easy to facilitate. For sysvinit and BSD rc scripts, we just use the generator to first execute the shell commands to initialize the state before the process gets started.<br /><br />Supervisord allows you to execute multiple shell commands in a single <i>command</i> directive -- we can just execute a script that initializes the state before we execute the process that we want to manage.<br /><br />systemd has an <i>ExecStartPre</i> directive that can be used to specify shell commands to execute before the main process starts.<br /><br />Apple launchd and cygrunsrv, however, do not have a generic shell execution mechanism or some facility allowing you to execute things before a process starts. Nonetheless, we can still ensure that the state is going to be initialized by creating a <strong>wrapper script</strong> -- first the wrapper script does the state initialization and then executes the main process.<br /><br />If a state initialization procedure was specified and the target process manager does not support scripting, then the generator function will transparently wrap the main process into a wrapper script that supports state initialization.<br /><br /><h3>Process dependencies</h3><br />Another important generic concept is process dependency management. For example, Nginx can act as a reverse proxy for another web application process. To provide a functional Nginx service, we must be sure that the web application process gets activated as well, and that the web application is activated before Nginx.<br /><br />If the web application process is activated after Nginx or missing completely, then Nginx is (temporarily) unable to redirect incoming requests to the web application process, causing end-users to see bad gateway errors.<br /><br />The process managers that I have experimented with all have a different notion of process dependencies.<br /><br />sysvinit scripts can optionally declare dependencies in their comment sections. Tools that know how to interpret these dependency specifications can use them to decide the right activation order. Systems using sysvinit typically ignore this specification. Instead, they work with sequence numbers in their file names -- each run level configuration directory contains a prefix (S or K) followed by two numeric digits that define the start or stop order.<br /><br />
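For illustration, such a dependency declaration in the comment section of an LSB-style sysvinit script typically looks like the sketch below (shown here for the Nginx example with a hypothetical <i>webapp</i> dependency; the actual output of the generator may differ):<br /><br /><pre><br />### BEGIN INIT INFO<br /># Provides:          nginx<br /># Required-Start:    webapp<br /># Required-Stop:     webapp<br /># Default-Start:     3 4 5<br /># Default-Stop:      0 1 2 6<br /># Short-Description: Nginx<br />### END INIT INFO<br /></pre><br />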
supervisord does not work with dependency specifications, but every program can optionally provide a <i>priority</i> setting that can be used to order the activation and deactivation of programs -- lower priority numbers have precedence over higher priority numbers.<br /><br />From dependency specifications in a process management expression, the generator function can automatically derive sequence numbers for process managers that require it.<br /><br />Similar to sysvinit scripts, BSD rc scripts can also declare dependencies in their comment sections. Contrary to sysvinit scripts, BSD rc scripts can use the <a href="https://www.freebsd.org/cgi/man.cgi?rcorder(8)"><i>rcorder</i></a> tool to parse these dependencies from the comments section and automatically derive the order in which the BSD rc scripts need to be activated.<br /><br /><i>cygrunsrv</i> also allows you to directly specify process dependencies. The Windows service manager makes sure that the services get activated in the right order and that all process dependencies are activated first. The only limitation is that cygrunsrv only allows up to 16 dependencies to be specified per service.<br /><br />To simulate process dependencies with systemd, we can use two properties. The <i>Wants</i> property can be used to tell systemd that another service needs to be activated first. The <i>After</i> property can be used to specify the ordering.<br /><br />Sadly, it seems that launchd has no notion of process dependencies at all -- processes can be activated by certain events, e.g. when a kernel module was loaded or through socket activation, but it does not seem to have the ability to configure process dependencies or the activation ordering. When our target process manager is launchd, then we simply have to inform the user that proper activation ordering cannot be guaranteed.<br /><br /><h2>Changing user privileges</h2><br />Another general concept, that has subtle differences in each process manager, is changing user privileges. Typically for the deployment of system services, you do not want to run these services as root user (that has full access to the filesystem), but as an unprivileged user.<br /><br />sysvinit and BSD rc scripts have to change users through the <i>su</i> command. The <i>su</i> command can be used to change the user ID (UID), and will automatically adopt the primary group ID (GID) of the corresponding user.<br /><br />Supervisord and <i>cygrunsrv</i> can also only change user IDs (UIDs), and will adopt the primary group ID (GID) of the corresponding user.<br /><br />Systemd and launchd can both change the user IDs and group IDs of the processes they invoke.<br /><br />Because only changing UIDs is universally supported amongst process managers, I did not add a configuration property that allows you to change GIDs in a process manager-agnostic way.<br /><br /><h2>Deploying process manager-agnostic configurations</h2><br />With a processes Nix expression, we can define which process instances we want to run (and how they can be constructed from source code and their dependencies):<br /><br /><pre><br />{ pkgs ? import &lt;nixpkgs&gt; { inherit system; }<br />, system ? builtins.currentSystem<br />, stateDir ? "/var"<br />, runtimeDir ? "${stateDir}/run"<br />, logDir ? "${stateDir}/log"<br />, tmpDir ? 
(if stateDir == "/var" then "/tmp" else "${stateDir}/tmp")<br />, forceDisableUserChange ? false<br />, processManager<br />}:<br /><br />let<br /> constructors = import ./constructors.nix {<br /> inherit pkgs stateDir runtimeDir logDir tmpDir;<br /> inherit forceDisableUserChange processManager;<br /> }; <br />in <br />rec { <br /> webapp = rec { <br /> port = 5000; <br /> dnsName = "webapp.local"; <br /> <br /> pkg = constructors.webapp { <br /> inherit port; <br /> }; <br /> }; <br /> <br /> nginxReverseProxy = rec {<br /> port = 8080;<br /><br /> pkg = constructors.nginxReverseProxy {<br /> webapps = [ webapp ];<br /> inherit port;<br /> } {};<br /> };<br />}<br /></pre><br />In the above Nix expression, we compose two running process instances:<br /><br /><ul><li><i>webapp</i> is a trivial web application process that will simply return a static HTML page by using the HTTP protocol.</li><li><i>nginxReverseProxy</i> is a Nginx server configured as a reverse proxy server. It will forward incoming HTTP requests to the appropriate web application instance, based on the virtual host name. If a virtual host name is <i>webapp.local</i>, then Nginx forwards the request to the <i>webapp</i> instance.</li></ul><br />To generate the configuration artifacts for the process instances, we refer to a separate constructors Nix expression. Each constructor will call the <i>createManagedProcess</i> function abstraction (as shown earlier) to construct a process configuration in a process manager-agnostic way.<br /><br />With the following command-line instruction, we can generate sysvinit scripts for the <i>webapp</i> and Nginx processes declared in the processes expression, and run them as an unprivileged user with the state files managed in our home directory:<br /><br /><pre><br />$ nixproc-build --process-manager sysvinit \<br /> --state-dir /home/sander/var \<br /> --force-disable-user-change processes.nix<br /></pre><br />By adjusting the <i>--process-manager</i> parameter we can also generate artefacts for a different process manager. For example, the following command will generate systemd unit config files instead of sysvinit scripts:<br /><br /><pre><br />$ nixproc-build --process-manager systemd \<br /> --state-dir /home/sander/var \<br /> --force-disable-user-change processes.nix<br /></pre><br />The following command will automatically build and deploy all processes, using sysvinit as a process manager:<br /><br /><pre><br />$ nixproc-sysvinit-switch --state-dir /home/sander/var \<br /> --force-disable-user-change processes.nix<br /></pre><br />We can also run a life-cycle management activity on all previously deployed processes. For example, to retrieve the statuses of all processes, we can run:<br /><br /><pre><br />$ nixproc-sysvinit-runactivity status<br /></pre><br />We can also traverse the processes in reverse dependency order. This is particularly useful to reliably stop all processes, without breaking any process dependencies:<br /><br /><pre><br />$ nixproc-sysvinit-runactivity -r stop<br /></pre><br />Similarly, there are command-line tools to use the other supported process managers. 
For example, to deploy systemd units instead of sysvinit scripts, you can run:<br /><br /><pre><br />$ nixproc-systemd-switch processes.nix<br /></pre><br /><h2>Distributed process manager-agnostic deployment with Disnix</h2><br />As shown in the previous process management framework blog post, it is also possible to deploy processes to machines in a network and have inter-dependencies between processes. These kinds of deployments can be managed by <a href="https://sandervanderburg.blogspot.com/2011/02/disnix-toolset-for-distributed.html">Disnix</a>.<br /><br />Compared to the previous blog post (in which we could only deploy sysvinit scripts), we can now also use any process manager that the framework supports. The Dysnomia toolset provides plugins that support all process managers that this framework supports:<br /><br /><pre><br />{ pkgs, distribution, invDistribution, system<br />, stateDir ? "/var"<br />, runtimeDir ? "${stateDir}/run"<br />, logDir ? "${stateDir}/log"<br />, tmpDir ? (if stateDir == "/var" then "/tmp" else "${stateDir}/tmp")<br />, forceDisableUserChange ? false<br />, processManager ? "sysvinit"<br />}:<br /><br />let<br /> constructors = import ./constructors.nix {<br /> inherit pkgs stateDir runtimeDir logDir tmpDir;<br /> inherit forceDisableUserChange processManager;<br /> };<br /><br /> processType =<br /> if processManager == "sysvinit" then "sysvinit-script"<br /> else if processManager == "systemd" then "systemd-unit"<br /> else if processManager == "supervisord" then "supervisord-program"<br /> else if processManager == "bsdrc" then "bsdrc-script"<br /> else if processManager == "cygrunsrv" then "cygrunsrv-service"<br /> else throw "Unknown process manager: ${processManager}";<br />in<br />rec {<br /> webapp = rec {<br /> name = "webapp";<br /> port = 5000;<br /> dnsName = "webapp.local";<br /> pkg = constructors.webapp {<br /> inherit port;<br /> };<br /> type = processType;<br /> };<br /><br /> nginxReverseProxy = rec {<br /> name = "nginxReverseProxy";<br /> port = 8080;<br /> pkg = constructors.nginxReverseProxy {<br /> inherit port;<br /> };<br /> dependsOn = {<br /> inherit webapp;<br /> };<br /> type = processType;<br /> };<br />}<br /></pre><br />In the above expression, we have extended the previously shown processes expression into a Disnix service expression, in which every attribute in the attribute set represents a service that can be distributed to a target machine in the network.<br /><br />The <i>type</i> attribute of each service indicates which Dysnomia plugin needs to manage its life-cycle. We can automatically select the appropriate plugin for our desired process manager by deriving it from the <i>processManager</i> parameter.<br /><br />The above Disnix expression has a drawback -- in a <strong>heterogeneous network</strong> of machines (that run multiple operating systems and/or process managers), we need to compose all desired variants of each service with configuration files for each process manager that we want to use.<br /><br />It is also possible to have <strong>target-agnostic</strong> services, by delegating the translation steps to the corresponding target machines. Instead of directly generating a configuration file for a process manager, we generate a JSON specification containing all parameters that are passed to <i>createManagedProcess</i>. We can use this JSON file to build the corresponding configuration artefacts on the target machine:<br /><br /><pre><br />{ pkgs, distribution, invDistribution, system<br />, stateDir ? "/var"<br />, runtimeDir ? "${stateDir}/run"<br />, logDir ? "${stateDir}/log"<br />, tmpDir ? (if stateDir == "/var" then "/tmp" else "${stateDir}/tmp")<br />, forceDisableUserChange ? false<br />, processManager ? null<br />}:<br /><br />let<br /> constructors = import ./constructors.nix {<br /> inherit pkgs stateDir runtimeDir logDir tmpDir;<br /> inherit forceDisableUserChange processManager;<br /> };<br />in<br />rec {<br /> webapp = rec {<br /> name = "webapp";<br /> port = 5000;<br /> dnsName = "webapp.local";<br /> pkg = constructors.webapp {<br /> inherit port;<br /> };<br /> type = "managed-process";<br /> };<br /><br /> nginxReverseProxy = rec {<br /> name = "nginxReverseProxy";<br /> port = 8080;<br /> pkg = constructors.nginxReverseProxy {<br /> inherit port;<br /> };<br /> dependsOn = {<br /> inherit webapp;<br /> };<br /> type = "managed-process";<br /> };<br />}<br /></pre><br />In the above services model, we have set the <i>processManager</i> parameter to <i>null</i>, causing the generator to print JSON representations of the function parameters passed to <i>createManagedProcess</i>.<br /><br />The <i>managed-process</i> type refers to a Dysnomia plugin that consumes the JSON specification and invokes the <i>createManagedProcess</i> function to convert the JSON configuration to a configuration file used by the preferred process manager.<br /><br />In the infrastructure model, we can configure the preferred process manager for each target machine:<br /><br /><pre><br />{<br /> test1 = {<br /> properties = {<br /> hostname = "test1";<br /> };<br /> containers = {<br /> managed-process = {<br /> processManager = "sysvinit";<br /> };<br /> };<br /> };<br /><br /> test2 = {<br /> properties = {<br /> hostname = "test2";<br /> };<br /> containers = {<br /> managed-process = {<br /> processManager = "systemd";<br /> };<br /> };<br /> };<br />}<br /></pre><br />In the above infrastructure model, the <i>managed-process</i> container on the first machine: <i>test1</i> has been configured to use sysvinit scripts to manage processes. On the second test machine: <i>test2</i> the <i>managed-process</i> container is configured to use systemd to manage processes.<br /><br />If we distribute the services in the services model to targets in the infrastructure model as follows:<br /><br /><pre><br />{infrastructure}:<br /><br />{<br /> webapp = [ infrastructure.test1 ];<br /> nginxReverseProxy = [ infrastructure.test2 ];<br />}<br /></pre><br />and then deploy the system as follows:<br /><br /><pre style="overflow: auto; font-size: 90%;"><br />$ disnix-env -s services.nix -i infrastructure.nix -d distribution.nix<br /></pre><br />Then the <i>webapp</i> process will be distributed to the <i>test1</i> machine in the network and will be managed with a sysvinit script.<br /><br />The <i>nginxReverseProxy</i> will be deployed to the <i>test2</i> machine and managed as a systemd job. 
The Nginx reverse proxy forwards incoming connections to the <i>webapp.local</i> domain name to the web application process hosted on the first machine.<br /><br /><h2>Discussion</h2><br />In this blog post, I have introduced a process manager-agnostic function abstraction making it possible to target all kinds of process managers on a variety of operating systems.<br /><br />By using a single set of declarative specifications, we can:<br /><br /><ul><li>Target six different process managers on four different kinds of operating systems.</li><li>Implement various kinds of deployment scenarios: production deployments, test deployments as an unprivileged user.</li><li>Construct multiple instances of processes.</li></ul><br />In a distributed-context, the advantage is that we can uniformly target all supported process managers and operating systems in a heterogeneous environment from a single declarative specification.<br /><br />This is particularly useful to facilitate technology diversity -- for example, one of the key selling points of Microservices is that "any technology" can be used to implement them. In many cases, technology diversity is "restricted" to frameworks, programming languages, and storage technologies.<br /><br />One particular aspect that is rarely changed is the choice of operating systems, because of the limitations of deployment tools -- most deployment solutions for Microservices are container-based and heavily rely on Linux-only concepts, such as Namespaces and cgroups.<br /><br />With this process managemenent framework and the recent Dysnomia plugin additions for Disnix, it is possible to target all kinds of operating systems that support the Nix package manager, making the operating system component selectable as well. This, for example, allows you to also pick the best operating system to implement a certain requirement -- for example, when performance is important you might pick Linux, and when there is a strong emphasis on security, you could pick OpenBSD to host a mission criticial component.<br /><br /><h2>Limitations</h2><br />The following table, summarizes the differences between the process manager solutions that I have investigated:<br /><br /><div><table style="border-style: solid; border-width: 1px;"><tbody><tr><th></th><th style="border-style: solid; border-width: 1px; white-space: nowrap;">sysvinit</th><th style="border-style: solid; border-width: 1px; white-space: nowrap;">bsdrc</th><th style="border-style: solid; border-width: 1px; white-space: nowrap;">supervisord</th><th style="border-style: solid; border-width: 1px; white-space: nowrap;">systemd</th><th style="border-style: solid; border-width: 1px; white-space: nowrap;">launchd</th><th style="border-style: solid; border-width: 1px; white-space: nowrap;">cygrunsrv</th></tr> <tr><th style="border-style: solid; border-width: 1px; white-space: nowrap;">Process type</th><td style="border-style: solid; border-width: 1px; white-space: nowrap;">daemon</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">daemon</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">foreground</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">foreground<br />daemon</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">foreground</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">foreground</td></tr> <tr><th style="border-style: solid; border-width: 1px; white-space: nowrap;">Process control 
method</th><td style="border-style: solid; border-width: 1px; white-space: nowrap;">PID files</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">PID files</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">Process PID</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">cgroups</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">Process PID</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">Process PID</td></tr> <tr><th style="border-style: solid; border-width: 1px; white-space: nowrap;">Scripting support</th><td style="border-style: solid; border-width: 1px; white-space: nowrap;">yes</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">yes</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">yes</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">yes</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">no</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">no</td></tr> <tr><th style="border-style: solid; border-width: 1px; white-space: nowrap;">Process dependency management</th><td style="border-style: solid; border-width: 1px; white-space: nowrap;">Numeric ordering</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">Dependency-based</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">Numeric ordering</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">Dependency-based<br />+ dependency loading</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">None</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">Dependency-based<br />+ dependency loading</td></tr> <tr><th style="border-style: solid; border-width: 1px; white-space: nowrap;">User changing capabilities</th><td style="border-style: solid; border-width: 1px; white-space: nowrap;">user</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">user</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">user and group</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">user and group</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">user and group</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">user</td></tr> <tr><th style="border-style: solid; border-width: 1px; white-space: nowrap;">Unprivileged user deployments</th><td style="border-style: solid; border-width: 1px; white-space: nowrap;">yes*</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">yes*</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">yes</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">yes*</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">no</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">no</td></tr> <tr><th style="border-style: solid; border-width: 1px; white-space: nowrap;">Operating system support</th><td style="border-style: solid; border-width: 1px; white-space: nowrap;">Linux</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">FreeBSD<br /> &gt;OpenBSD<br />NetBSD</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">Many UNIX-like:<br />Linux<br />macOS<br />FreeBSD<br />Solaris<br /></td><td 
style="border-style: solid; border-width: 1px; white-space: nowrap;">Linux (+glibc) only</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">macOS (Darwin)</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">Windows (Cygwin)</td></tr> </tbody></table></div><br />Although we can facilitate lifecycle management from a common specification with a variety of process managers, only the most important common features are supported.<br /><br />Not every concept can be done in a process manager agnostic way. For example, we cannot generically do any isolation of resources (except for packages, because we use Nix). It is difficult to generalize these concepts because these they are not standardized, e.g. the POSIX standard does not descibe namespaces and cgroups (or similar concepts).<br /><br />Furthermore, most process managers (with the exception of supervisord) are operating system specific. As a result, it still matters what process manager is picked.<br /><br /><h2>Related work</h2><br />Process manager-agnostic deployment is not entirely a new idea. Dysnomia already has a target-agnostic 'process' plugin for quite a while, that translates a simple deployment specification (constisting of key-value pairs) to a systemd unit configuration file or sysvinit script.<br /><br />The features of Dysnomia's <i>process</i> plugin are much more limited compared to the <i>createManagedProcess</i> abstraction function described in this blog post. It does not support any other than process managers than sysvint and systemd, and it can only work with foreground processes.<br /><br />Furthermore, target agnostic configurations cannot be easily extended -- it is possible to (ab)use the templating mechanism, but it has no first class overridde facilities.<br /><br />I also found a project called <a href="https://github.com/jordansissel/pleaserun">pleaserun</a> that also has the objective to generate configuration files for a variety of process managers (my approach and pleaserunit, both support sysvinit scripts, systemd and launchd).<br /><br />It seems to use template files to generate the configuration artefacts, and it does not seem to have a generic extension mechanism. Furthermore, it provides no framework to configure the location of shared resources, automatically install package dependencies or to compose multiple instances of processes.<br /><br /><h2>Some remaining thoughts</h2><br />Although the Nix package manager (not the NixOS distribution), should be portable amongst a variety of UNIX-like systems, it turns out that the only two operating systems that are well supported are Linux and macOS. Nix was reported to work on a variety of other UNIX-like systems in the past, but recently it seems that many things are broken.<br /><br />To make Nix work on FreeBSD 12.1, I have used the latest stable Nix package manager version <a href="https://github.com/0mp/freebsd-ports-nix">with patches from this repository</a>. It turns out that there is still a patch missing to work around in a bug in FreeBSD that incorrectly kills all processes in a process group. Fortunately, when we run Nix as as unprivileged user, this bug does not seem to cause any serious problems.<br /><br />Recent versions of Nixpkgs turn out to be horribly broken on FreeBSD -- the FreeBSD stdenv does not seem to work at all. 
I tried switching back to stdenv-native (a <i>stdenv</i> environment that impurely uses the host system's compiler and core executables), but that also no longer seems to work in the last three major releases -- the Nix expression evaluation breaks in several places. Due to the intense amount of changes and assumptions that the <i>stdenv</i> infrastructure currently makes, it was as good as impossible for me to fix the infrastructure.<br /><br />As another workaround, I reverted back very to a very old version of Nixpkgs (version 17.03 to be precise), that still has a working stdenv-native environment. With some tiny adjustments (e.g. adding some shell aliases for some GNU variants of certain shell executables to <i>stdenv-native</i>), I have managed to get some basic Nix packages working, including Nginx on FreeBSD.<br /><br />Surprisingly, running Nix on Cygwin was less painful than FreeBSD (because of all the GNUisms that Cygwin provides). Similar to FreeBSD, recent versions of Nixpkgs also appear to be broken, including the Cygwin stdenv environment. By reverting back to <i>release-18.03</i> (that still has a somewhat working <i>stdenv</i> for Cygwin), I have managed to build a working Nginx version.<br /><br />As a future improvement to Nixpkgs, I would like to propose a testing solution for stdenv-native. Although I understand that is difficult to dedicate manpower to maintain all unconventional Nix/Nixpkgs ports, stdenv-native is something that we can also convienently test on Linux and prevent from breaking in the future.<br /><br /><h2>Availability</h2><br /><a href="https://github.com/svanderburg/nix-processmgmt">The latest version of my experimental Nix-based process framework</a>, that includes the process manager-agnostic configuration function described in this blog post, can be obtained from my GitHub page.<br /><br />In addition, the repository also contains some example cases, including the web application system described in this blog post, and a set of common system services: MySQL, Apache HTTP server, PostgreSQL and Apache Tomcat.<br /><br /> - Sat, 15 Feb 2020 20:07:00 +0000 - noreply@blogger.com (Sander van der Burg) - - - Cachix: CDN and double storage size - https://blog.cachix.org/post/2020-01-28-cdn-and-double-storage/ - https://blog.cachix.org/post/2020-01-28-cdn-and-double-storage/ - Cachix - Nix binary cache hosting, has grown quite a bit in recent months in terms of day to day usage and that was mostly noticable on bandwidth. -Over 3000 GB were served in December 2019. -CDN by CloudFlare Increased usage prompted a few backend machine instance upgrades to handle concurrent upload/downloads, but it became clear it’s time to abandon single machine infrastructure. -As of today, all binary caches are served by CloudFlare CDN. - Wed, 29 Jan 2020 08:00:00 +0000 - support@cachix.org (Domen Kožar) +<pre><code>$ nix build -f . daedalus --argstr cluster mainnet_flight +</code></pre> + +<p>Once the build completes, you're ready to launch Daedalus Flight:</p> + +<pre><code>$ ./result/bin/daedalus +</code></pre> + +<p>To verify that you have in fact built Daedalus Flight, first head to the +<code>Daedalus</code> menu then <code>About Daedalus</code>. You should see a title such as +"DAEDALUS 1.0.0". 
The second check, is to press <code>[Ctl]+d</code> to access <code>Daedalus +Diagnostocs</code> and your <code>Daedalus state directory</code> should have <code>mainnet_flight</code> +at the end of the path.</p> + +<p>If you've got these, give yourself a pat on the back and grab yourself a +refreshing bevvy while you wait for blocks to sync.</p> + +<p><img alt="Daedalus FC1 screenshot" src="http://mcwhirter.com.au/files/Daedalus_FC1.png" title="Daedalus FC1 screenshot" /></p> + Thu, 23 Apr 2020 23:28:59 +0000 - Mayflower: __structuredAttrs in Nix - https://nixos.mayflower.consulting/blog/2020/01/20/structured-attrs/ - https://nixos.mayflower.consulting/blog/2020/01/20/structured-attrs/ - In Nix 2 a new parameter to the derivation primitive was added. It changes how information is passed to the derivation builder. -Current State In order to show how it changes the handling of parameters to derivation, the first example will show the current state with __structuredAttrs set to false and the stdenv.mkDerivation wrapper around derivation. All parameters are passed to the builder as environment variables, canonicalised by Nix in imitation of shell script conventions: - Mon, 20 Jan 2020 12:00:00 +0000 + nixbuild.net: Binary Cache Support + https://blog.nixbuild.net/posts/2020-04-18-binary-cache-support.html + https://blog.nixbuild.net/posts/2020-04-18-binary-cache-support.html + <p>Up until now, nixbuild.net has not supported directly fetching build dependencies from binary caches like <a href="https://cache.nixos.org">cache.nixos.org</a> or <a href="https://cachix.org">Cachix</a>. All build dependencies have instead been uploaded from the user’s local machine to nixbuild.net the first time they’ve been needed.</p> +<p>Today, this bottleneck has been removed, since nixbuild.net now can fetch build dependencies directly from binary caches, without taxing users’ upload bandwidth.</p> + +<p>By default, the official Nix binary cache (<a href="https://cache.nixos.org">cache.nixos.org</a>) is added to all nixbuild.net accounts, but a nixbuild.net user can freely decide on which caches that should be queried for build dependencies (including <a href="https://cachix.org">Cachix</a> caches).</p> +<p>An additional benefit of the new support for binary caches is that users that trust the same binary caches automatically share build dependencies from those caches. This means that if one user’s build has triggered a download from for example cache.nixos.org, the next user that comes along and needs the same build dependency doesn’t have to spend time on downloading that dependency.</p> +<p>For more information on how to use binary caches with nixbuild.net, see the <a href="https://docs.nixbuild.net/getting-started/">documentation</a>.</p> + Sat, 18 Apr 2020 00:00:00 +0000 + support@nixbuild.net (nixbuild.net) - Hercules Labs: Hercules CI & Cachix split up - https://blog.hercules-ci.com/2020/01/14/hercules-ci-cachix-split-up/ - https://blog.hercules-ci.com/2020/01/14/hercules-ci-cachix-split-up/ - <p>After careful consideration of how to balance between the two products, we’ve decided to split up. Each of the two products will be a separate entity:</p> + Graham Christensen: Erase your darlings + http://grahamc.com//blog/erase-your-darlings + http://grahamc.com/blog/erase-your-darlings + <p>I erase my systems at every boot.</p> + +<p>Over time, a system collects state on its root partition. 
This state +lives in assorted directories like <code class="highlighter-rouge">/etc</code> and <code class="highlighter-rouge">/var</code>, and represents +every under-documented or out-of-order step in bringing up the +services.</p> + +<blockquote> + <p>“Right, run <code class="highlighter-rouge">myapp-init</code>.”</p> +</blockquote> + +<p>These small, inconsequential “oh, oops” steps are the pieces that get +lost and don’t appear in your runbooks.</p> + +<blockquote> + <p>“Just download ca-certificates to … to fix …”</p> +</blockquote> + +<p>Each of these quick fixes leaves you doomed to repeat history in three +years when you’re finally doing that dreaded RHEL 7 to RHEL 8 upgrade.</p> + +<blockquote> + <p>“Oh, <code class="highlighter-rouge">touch /etc/ipsec.secrets</code> or the l2tp tunnel won’t work.”</p> +</blockquote> + +<h3 id="immutable-infrastructure-gets-us-so-close">Immutable infrastructure gets us <em>so</em> close</h3> + +<p>Immutable infrastructure is a wonderfully effective method of +eliminating so many of these forgotten steps. Leaning in to the pain +by deleting and replacing your servers on a weekly or monthly basis +means you are constantly testing and exercising your automation and +runbooks.</p> + +<p>The nugget here is the regular and indiscriminate removal of system +state. Destroying the whole server doesn’t leave you much room to +forget the little tweaks you made along the way.</p> + +<p>These techniques work great when you meet two requirements:</p> <ul> - <li>Hercules CI becomes part of Robert Hensing’s Ensius B.V.</li> - <li>Cachix becomes part of Domen Kožar’s Enlambda OÜ</li> + <li>you can provision and destroy servers with an API call</li> + <li>the servers aren’t inherently stateful</li> </ul> -<p>For customers there will be no changes, except for the point of contact in support requests.</p> +<h4 id="long-running-servers">Long running servers</h4> -<p>Domen &amp; Robert</p> - Tue, 14 Jan 2020 00:00:00 +0000 - - - Mayflower: Windows-on-NixOS, part 1: Migrating bare-metal to a VM - https://nixos.mayflower.consulting/blog/2019/11/27/windows-vm-storage/ - https://nixos.mayflower.consulting/blog/2019/11/27/windows-vm-storage/ - This is part 1 of a series of blog posts explaining how we took an existing Windows installation on hardware and moved it into a VM running on top of NixOS. -Background We have a decently-equipped desktop PC sitting in our office, which is designated for data experiments using TensorFlow and such. During off-hours, it's also used for games, and for that purpose it has Windows installed on it. We decided to try moving Windows into a VM within NixOS so that we could run both operating systems in parallel. - Wed, 27 Nov 2019 06:00:00 +0000 - - - Craige McWhirter: Deploying and Configuring Vim on NixOS - http://mcwhirter.com.au//craige/blog/2019/Deploying_and_Configuring_Vim_on_NixOS/ - http://mcwhirter.com.au//craige/blog/2019/Deploying_and_Configuring_Vim_on_NixOS/ - <p><img alt="NixOS Gears by Craige McWhirter" src="http://mcwhirter.com.au/files/NixOS_Gears.png" title="NixOS Gears by Craige McWhirter" /></p> +<p>There are lots of cases in which immutable infrastructure <em>doesn’t</em> +work, and the dirty secret is <strong>those servers need good tools the +most.</strong></p> -<p>I had a need to deploy <a href="https://www.vim.org/">vim</a> and my particular preferred -configuration both system-wide and across multiple systems (via -<a href="https://nixos.org/nixops/">NixOps</a>).</p> +<p>Long-running servers cause long outages. 
Their runbooks are outdated +and incomplete. They accrete tweaks and turn in to an ossified, +brittle snowflake — except its arms are load-bearing.</p> -<p>I started by creating a file named <code>vim.nix</code> that would be imported into either -<code>/etc/nixos/configuration.nix</code> or an appropriate NixOps Nix file. This example -is a stub that shows a number of common configuration items:</p> +<p>Let’s bring the ideas of immutable infrastructure to these systems +too. Whether this system is embedded in a stadium’s jumbotron, in a +datacenter, or under your desk, we <em>can</em> keep the state under control.</p> -<p><a href="https://source.mcwhirter.io/craige/nixos-examples/src/branch/master/applications/editors/vim.nix">vim.nix</a>:</p> +<h4 id="fhs-isnt-enough">FHS isn’t enough</h4> -<pre><code class="nix">with import &lt;nixpkgs&gt; {}; +<p>The hard part about applying immutable techniques to long running +servers is knowing exactly where your application state ends and the +operating system, software, and configuration begin.</p> -vim_configurable.customize { - name = "vim"; # Specifies the vim binary name. - # Below you can specify what usually goes into `~/.vimrc` - vimrcConfig.customRC = '' - " Preferred global default settings: - set number " Enable line numbers by default - set background=dark " Set the default background to dark or light - set smartindent " Automatically insert extra level of indentation - set tabstop=4 " Default tabstop - set shiftwidth=4 " Default indent spacing - set expandtab " Expand [TABS] to spaces - syntax enable " Enable syntax highlighting - colorscheme solarized " Set the default colour scheme - set t_Co=256 " use 265 colors in vim - set spell spelllang=en_au " Default spell checking language - hi clear SpellBad " Clear any unwanted default settings - hi SpellBad cterm=underline " Set the spell checking highlight style - hi SpellBad ctermbg=NONE " Set the spell checking highlight background - match ErrorMsg '\s\+$' " +<p>This is hard because legacy operating systems and the Filesystem +Hierarchy Standard poorly separate these areas of concern. For +example, <code class="highlighter-rouge">/var/lib</code> is for state information, but how much of this do +you actually care about tracking? What did you configure in <code class="highlighter-rouge">/etc</code> on +purpose?</p> - let g:airline_powerline_fonts = 1 " Use powerline fonts - let g:airline_theme='solarized' " Set the airline theme +<p>The answer is probably not a lot.</p> - set laststatus=2 " Set up the status line so it's coloured and always on +<p>You may not care, but all of this accumulation of junk is a tarpit. 
+Everything becomes harder: replicating production, testing changes, +undoing mistakes.</p> - " Add more settings below - ''; - # store your plugins in Vim packages - vimrcConfig.packages.myVimPackage = with pkgs.vimPlugins; { - start = [ # Plugins loaded on launch - airline # Lean &amp; mean status/tabline for vim that's light as air - solarized # Solarized colours for Vim - vim-airline-themes # Collection of themes for airlin - vim-nix # Support for writing Nix expressions in vim - ]; - # manually loadable by calling `:packadd $plugin-name` - # opt = [ phpCompletion elm-vim ]; - # To automatically load a plugin when opening a filetype, add vimrc lines like: - # autocmd FileType php :packadd phpCompletion - }; -} -</code></pre> +<h3 id="new-computer-smell">New computer smell</h3> -<p>I then needed to import this file into my system packages stanza:</p> +<p>Getting a new computer is this moment of cleanliness. The keycaps +don’t have oils on them, the screen is perfect, and the hard drive +is fresh and unspoiled — for about an hour or so.</p> -<pre><code class="nix"> environment = { - systemPackages = with pkgs; [ - someOtherPackages # Normal package listing - ( - import ./vim.nix - ) - ]; - }; -</code></pre> +<p>Let’s get back to that.</p> -<p>This will then install and configure Vim as you've defined it.</p> +<h2 id="how-is-this-possible">How is this possible?</h2> -<p>If you'd like to give this build a run in a non-production space, I've written <a href="https://source.mcwhirter.io/craige/nixos-examples/src/branch/master/applications/editors/vim_vm.nix">vim_vm.nix</a> with which you can build a VM, ssh into afterwards and test the Vim configuration:</p> +<p>NixOS can boot with only two directories: <code class="highlighter-rouge">/boot</code>, and <code class="highlighter-rouge">/nix</code>.</p> -<pre><code class="bash">$ nix-build '&lt;nixpkgs/nixos&gt;' -A vm --arg configuration ./vim_vm.nix -... -$ export QEMU_OPTS="-m 4192" -$ export QEMU_NET_OPTS="hostfwd=tcp::18080-:80,hostfwd=tcp::10022-:22" -$ ./result/bin/run-vim-vm-vm -</code></pre> +<p><code class="highlighter-rouge">/nix</code> contains read-only system configurations, which are specified +by your <code class="highlighter-rouge">configuration.nix</code> and are built and tracked as system +generations. These never change. Once the files are created in <code class="highlighter-rouge">/nix</code>, +the only way to change the config’s contents is to build a new system +configuration with the contents you want.</p> -<p>Then, from a another terminal:</p> +<p>Any configuration or files created on the drive outside of <code class="highlighter-rouge">/nix</code> is +state and cruft. We can lose everything outside of <code class="highlighter-rouge">/nix</code> and <code class="highlighter-rouge">/boot</code> +and have a healthy system. 
My technique is to explicitly opt in and +<em>choose</em> which state is important, and only keep that.</p> -<pre><code class="bash">$ ssh nixos@localhost -p 10022 -</code></pre> +<p>How this is possible comes down to the boot sequence.</p> -<p>And you should be in a freshly baked NixOS VM with your Vim config ready to be -used.</p> +<p>For NixOS, the bootloader follows the same basic steps as a standard +Linux distribution: the kernel starts with an initial ramdisk, and the +initial ramdisk mounts the system disks.</p> -<p>There's an always current example of my <a href="https://source.mcwhirter.io/craige/mio-ops/src/branch/master/roles/vim.nix">production Vim -configuration</a> -in my <a href="https://source.mcwhirter.io/craige/mio-ops/">mio-ops</a> repo.</p> - Thu, 14 Nov 2019 04:18:37 +0000 - - - Hercules Labs: Hercules CI Agent 0.6.1 - https://blog.hercules-ci.com/2019/11/12/hercules-ci-agent-0.6.1-release/ - https://blog.hercules-ci.com/2019/11/12/hercules-ci-agent-0.6.1-release/ - <p>We’ve released <a href="https://github.com/hercules-ci/hercules-ci-agent/releases/tag/hercules-ci-agent-0.6.1">hercules-ci-agent 0.6.1</a>, days after <a href="https://github.com/hercules-ci/hercules-ci-agent/releases/tag/hercules-ci-agent-0.6.0">0.6.0</a> release.</p> +<p>And here is where the similarities end.</p> -<p>Everyone is encouraged to upgrade, as it brings performance improvements, a bugfix to IFD and better onboarding experience.</p> +<h3 id="nixoss-early-startup">NixOS’s early startup</h3> -<h3 id="061---2019-11-06">0.6.1 - 2019-11-06</h3> +<p>NixOS configures the bootloader to pass some extra information: a +specific system configuration. This is the secret to NixOS’s +bootloader rollbacks, and also the key to erasing our disk on each +boot. The parameter is named <code class="highlighter-rouge">systemConfig</code>.</p> -<h3 id="fixed">Fixed</h3> +<p>On every startup the very early boot stage knows what the system’s +configuration should be: the entire system configuration is stored in +the read-only <code class="highlighter-rouge">/nix/store</code>, and the directory passed through +<code class="highlighter-rouge">systemConfig</code> has a reference to the config. Early boot then +manipulates <code class="highlighter-rouge">/etc</code> and <code class="highlighter-rouge">/run</code> to match the chosen setup. Usually this +involves swapping out a few symlinks.</p> -<ul> - <li> - <p>Fix token leak to system log when reporting an HTTP exception. This was introduced by a library upgrade. -This was discovered after tagging 0.6.0 but before the release was -announced and before moving of the <code class="highlighter-rouge">stable</code> branch. -Only users of the <code class="highlighter-rouge">hercules-ci-agent</code> <code class="highlighter-rouge">master</code> branch and the unannounced -tag were exposed to this leak. -We recommend to follow the <code class="highlighter-rouge">stable</code> branch.</p> - </li> - <li> - <p>Temporarily revert a Nix GC configuration change that might cause problems -until agent gc root behavior is improved.</p> - </li> -</ul> +<p>If <code class="highlighter-rouge">/etc</code> simply doesn’t exist, however, early boot <em>creates</em> <code class="highlighter-rouge">/etc</code> +and moves on like it were any other boot. 
It also <em>creates</em> <code class="highlighter-rouge">/var</code>, +<code class="highlighter-rouge">/dev</code>, <code class="highlighter-rouge">/home</code>, and any other core directories that must be present.</p> -<h3 id="060---2019-11-04">0.6.0 - 2019-11-04</h3> +<p>Simply speaking, an empty <code class="highlighter-rouge">/</code> is <em>not surprising</em> to NixOS. In fact, +the NixOS netboot, EC2, and installation media all start out this way.</p> -<h3 id="changed">Changed</h3> +<h2 id="opting-out">Opting out</h2> -<ul> - <li>Switch to Nix 2.3 and NixOS 19.09. <em>You should update your deployment to reflect the NixOS upgrade</em>, unless you’re using terraform or nix-darwin, where it’s automatic.</li> - <li>Increased parallellism during push to cachix</li> - <li>Switch to NixOS 19.09</li> - <li>Enable min-free/max-free Nix GC</li> -</ul> +<p>Before we can opt in to saving data, we must opt out of saving data +<em>by default</em>. I do this by setting up my filesystem in a way that +lets me easily and safely erase the unwanted data, while preserving +the data I do want to keep.</p> -<h3 id="fixed-1">Fixed</h3> +<p>My preferred method for this is using a ZFS dataset and rolling it +back to a blank snapshot before it is mounted. A partition of any +other filesystem would work just as well too, running <code class="highlighter-rouge">mkfs</code> at boot, +or something similar. If you have a lot of RAM, you could skip the +erase step and make <code class="highlighter-rouge">/</code> a tmpfs.</p> -<ul> - <li>Transient errors during source code fetching are now retried</li> - <li>Fixed a bug related to narinfo caching in the context of IFD</li> - <li>Fixed an exception when the root of ci.nix is a list, although lists are unsupported</li> -</ul> +<h3 id="opting-out-with-zfs">Opting out with ZFS</h3> +<p>When installing NixOS, I partition my disk with two partitions, one +for the boot partition, and another for a ZFS pool. Then I create and +mount a few datasets.</p> -<h2 id="what-we-do">What we do</h2> +<p>My root dataset:</p> -<p>Automated hosted infrastructure for Nix, reliable and reproducible developer tooling, -to speed up adoption and lower integration cost. We offer -<a href="https://hercules-ci.com">Continuous Integration</a> and <a href="https://cachix.org">Binary Caches</a>.</p> - Tue, 12 Nov 2019 00:00:00 +0000 - - - Sander van der Burg: A Nix-based functional organization for managing processes - tag:blogger.com,1999:blog-1397115249631682228.post-7384934454548345241 - http://sandervanderburg.blogspot.com/2019/11/a-nix-based-functional-organization-for.html - The <a href="https://sandervanderburg.blogspot.com/2012/11/an-alternative-explaination-of-nix.html">Nix expression language</a> and the Nix packages repository follow a number of unorthodox, but simple conventions that provide all kinds of benefits, such as the ability to conveniently construct multiple variants of packages and store them safely in isolation without any conflicts.<br /><br />The scope of the Nix package manager, however, is limited to <b>package deployment</b> only. 
Other tools in the Nix project extend deployment to other kinds of domains, such as machine level deployment (<a href="https://sandervanderburg.blogspot.com/2011/01/nixos-purely-functional-linux.html">NixOS</a>), networks of machines (<a href="https://sandervanderburg.blogspot.com/2015/03/on-nixops-disnix-service-deployment-and.html">NixOps</a>) and service-oriented systems (<a href="https://sandervanderburg.blogspot.com/2011/02/disnix-toolset-for-distributed.html">Disnix</a>).<br /><br />In addition to packages, there is also a category of systems (such as systems following the microservices paradigm) that are composed of <b>running processes</b>.<br /><br />Recently, I have been automating deployments of several kinds of systems that are composed of running processes and I have investigated how we can map the most common Nix packaging conventions to construct specifications that we can use to automate the deployment of these kinds of systems.<br /><br /><h2>Some common Nix packaging conventions</h2><br />The Nix package manager implements a so-called <b>purely functional deployment model</b>. In Nix, packages are constructed in the Nix expression language from <b>pure functions</b> in which side effects are eliminated as much as possible, such as undeclared dependencies residing in global directories, such as <i>/lib</i> and <i>/bin</i>.<br /><br />The function parameters of a build function refer to <b>all required inputs</b> to construct the package, such as the build instructions, the source code, environment variables and all required build-time dependencies, such as compilers, build tools and libraries.<br /><br />A big advantage of eliminating side effects (or more realistically: significantly reducing side effects) is to support <b>reproducible deployment</b> -- when building the same package with the same inputs on a different machine, we should get a (nearly) bit-identical result.<br /><br />Strong reproducibility guarantees, for example, make it possible to <b>optimize</b> package deployments by only building a package from source code once and then downloading binary substitutes from remote servers that can be trusted.<br /><br />In addition to the fact that packages are constructed by executing pure functions (with some caveats), the Nixpkgs repository -- that contains a large set of well known free and open source packages -- follows a number of <b>conventions</b>. 
One of such conventions is that most package build recipes reside in separate files and that each recipe declares a function.<br /><br />An example of such a build recipe is:<br /><br /><pre style="font-size: 90%; overflow: auto;">{ stdenv, fetchurl, pkgconfig, glib, gpm, file, e2fsprogs<br />, perl, zip, unzip, gettext, libssh2, openssl}:<br /><br />stdenv.mkDerivation rec {<br /> pname = "mc";<br /> version = "4.8.23";<br /><br /> src = fetchurl {<br /> url = "http://www.midnight-commander.org/downloads/${pname}-${version}.tar.xz";<br /> sha256 = "077z7phzq3m1sxyz7li77lyzv4rjmmh3wp2vy86pnc4387kpqzyx";<br /> };<br /><br /> buildInputs = [<br /> pkgconfig perl glib slang zip unzip file gettext libssh2 openssl<br /> ];<br /><br /> configureFlags = [ "--enable-vfs-smb" ];<br /><br /> meta = {<br /> description = "File Manager and User Shell for the GNU Project";<br /> homepage = http://www.midnight-commander.org;<br /> maintainers = [ stdenv.lib.maintainers.sander ];<br /> platforms = with stdenv.lib.platforms; linux ++ darwin;<br /> };<br />}<br /></pre><br />The Nix expression shown above (<i>pkgs/tools/misc/mc/default.nix</i>) describes how to build the <a href="http://www.midnight-commander.org/">Midnight Commander</a> from source code and its inputs:<br /><br /><ul><li>The first line declares a function in which the function arguments refer to all <b>dependencies</b> required to build Midnight Commander: <i>stdenv</i> refers to an environment that provides standard UNIX utilities, such as <i>cat</i> and <i>ls</i> and basic build utilities, such as <i>gcc</i> and <i>make</i>. <i>fetchurl</i> is a utility function that can be used to download artifacts from remote locations and that can verify the integrity of the downloaded artifact.<br /><br />The remainder of the function arguments refer to packages that need to be provided as build-time dependencies, such as tools and libraries.</li><li>In the function body, we invoke the <i>stdenv.mkDerivation</i> function to construct a Nix package from source code.<br /><br />By default, if no build instructions are provided, it will automatically execute the standard GNU Autotools/GNU Make build procedure: <i>./configure; make; make install</i>, automatically downloads and unpacks the tarball specified by the <i>src</i> parameter, and uses <i>buildInputs</i> to instruct the configure script to automatically find the dependencies it needs.</li></ul><br />A function definition that describes a package build recipe is not very useful on its own -- to be able to build a package, it needs to be invoked with the appropriate parameters.<br /><br />A Nix package is <b>composed</b> in a top-level Nix expression (<i>pkgs/top-level/all-packages.nix</i>) that declares one big data structure: an attribute set, in which every attribute name refers to a possible variant of a package (typically only one) and each value to a function invocation that builds the package, with the desired versions of variants of the dependencies that a package may need:<br /><br /><pre>{ system ? 
builtins.currentSystem }:<br /><br />rec {<br /> stdenv = ...<br /> fetchurl = ...<br /> pkgconfig = ...<br /> glib = ...<br /><br /> ...<br /><br /> openssl = import ../development/libraries/openssl {<br /> inherit stdenv fetchurl zlib ...;<br /> };<br /><br /> mc = import ../tools/misc/mc {<br /> inherit stdenv fetchurl pkgconfig glib gpm file e2fsprogs perl;<br /> inherit zip unzip gettext libssh2 openssl;<br /> };<br />}<br /></pre><br />The last attribute (<i>mc</i>) in the attribute set shown above, builds a specific variant of Midnight Commander, by passing the dependencies that it needs as parameters. It uses the <i>inherit</i> language construct to bind the parameters that are declared in the same lexical scope.<br /><br />All the dependencies that Midnight Commander needs are declared in the same attribute set and composed in a similar way.<br /><br />(As a sidenote: in the above example, we explicitly propagate all function parameters, which is quite verbose and tedious. In Nixpkgs, it is also possible to use a convenience function called: <i>callPackage</i> that will automatically pass the attributes with the same names as the function arguments as parameters.)<br /><br />With the composition expression above and running the following command-line instruction:<br /><br /><pre>$ nix-build all-packages.nix -A mc<br />/nix/store/wp3r8qv4k510...-mc-4.8.23<br /></pre><br />The Nix package manager will first deploy all build-time dependencies that Midnight Commander needs, and will then build Midnight Commander from source code. The build result is stored in the <b>Nix store</b> (<i>/nix/store/...-mc-4.8.23</i>), in which all build artifacts reside in isolation in their own directories.<br /><br />We can start Midnight Commander by providing the full path to the <i>mc</i> executable:<br /><br /><pre>$ /nix/store/wp3r8qv4k510...-mc-4.8.23/bin/mc<br /></pre><br />The prefix of every artifact in the Nix store is a SHA256 hash code derived from all inputs provided to the build function. The SHA256 hash prefix makes it possible to safely store multiple versions and variants of the same package next to each other, because they never share the same name.<br /><br />If Nix happens to compute a SHA256 that is already in the Nix store, then the build result is exactly the same, preventing Nix from doing the same build again.<br /><br />Because the Midnight Commander build recipe is a function, we can also adjust the function parameters to build different variants of the same package. For example, by changing the <i>openssl</i> parameter, we can build a Midnight Commander variant that uses a specific version of OpenSSL that is different than the default version:<br /><br /><pre style="font-size: 90%; overflow: auto;">{ system ? 
builtins.currentSystem }:<br /><br />rec {<br /> stdenv = ...<br /> fetchurl = ...<br /> pkgconfig = ...<br /> glib = ...<br /><br /> ...<br /><br /> openssl_1_1_0 = import ../development/libraries/openssl/1.1.0.nix {<br /> inherit stdenv fetchurl zlib ...;<br /> };<br /><br /> mc_alternative = import ../tools/misc/mc {<br /> inherit stdenv fetchurl pkgconfig glib gpm file e2fsprogs perl;<br /> inherit zip unzip gettext libssh2;<br /> openssl = openssl_1_1_0; # Use a different OpenSSL version<br /> };<br />}<br /></pre><br />We can build our alternative Midnight Commander variant as follows:<br /><br /><pre>$ nix-build all-packages.nix -A mc_alternative<br />/nix/store/0g0wm23y85nc0y...-mc-4.8.23<br /></pre><br />As may be noticed, we get a different Nix store path, because we build Midnight Commander with different build inputs.<br /><br />Although the purely functional model provides all kinds of nice benefits (such as reproducibility, the ability conveniently construct multiple variants of a package, and storing them in isolation without any conflicts), it also has a big inconvenience from a user point of view -- as a user, it is very impractical to remember the SHA256 hash prefixes of a package to start a program.<br /><br />As a solution, Nix also makes it possible to construct <a href="https://sandervanderburg.blogspot.com/2013/09/managing-user-environments-with-nix.html"><b>user environments</b></a> (probably better known as Nix profiles), by using the <i>nix-env</i> tool or using the <i>buildEnv {}</i> function in Nixpkgs.<br /><br />User environments are symlink trees that blend the content of a set of packages into a single directory in the Nix store so that they can be accessed from one single location. By adding the <i>bin/</i> sub folder of a user environment to the <i>PATH</i> environment variable, it becomes possible for a user to start a command-line executable without specifying a full path.<br /><br />For example, with the <i>nix-env</i> tool we can install the Midnight Commander in a Nix profile:<br /><br /><pre>$ nix-env -f all-packages.nix -iA mc<br /></pre><br />and then start it as follows:<br /><br /><pre>$ mc<br /></pre><br />The above command works if the Nix profile is in the <i>PATH</i> environment variable of the user.<br /><br /><h2>Mapping packaging conventions to process management</h2><br />There are four important packaging conventions that the Nix package manager and the Nixpkgs repository follow that I want to emphasize:<br /><br /><ul><li>Invoking the <b>derivation</b> function (typically through <i>stdenv.mkDerivation</i> or an abstraction built around it) builds a package from its build inputs.</li><li>Every package build recipe <b>defines</b> a <b>function</b> in which the function parameters refer to all possible build inputs. We can use this function to compose all kinds of variants of a package.</li><li><b>Invoking</b> a package build recipe function constructs a particular variant of a package and stores the result in the Nix store.</li><li><b>Nix profiles</b> blend the content of a collection of packages into one directory and makes them accessible from a single location.</li></ul><br />(As a sidenote: There is some discussion in the Nix community about these concepts. 
For example, one of the (self-)criticisms is that the Nix expression language, that is specifically designed as a DSL for package management, has no package concept in the language.<br /><br />Despite this oddity, I personally think that functions are a simple and powerful concept. The only thing that is a bit of a poor decision in my opinion is to call the mechanism that executes a build: <b>derivation</b>).<br /><br />Process management is quite different from package management -- we need to have an executable deployed first (typically done by a package manager, such as Nix), but in addition, we also need to <b>manage</b> the <b>life-cycle</b> of a process, such as starting and stopping it. These facilities are not Nix's responsibility. Instead, we need to work with a <b>process manager</b> that can facilitate these.<br /><br />Furthermore, systems composed of running processes have a kind of dependency relationship that Nix does not manage -- they may also communicate with other processes (e.g. via a network connection or UNIX domain sockets).<br /><br />As a consequence, they require the presence of other processes in order to work. This means that processes need to be activated in the right order or, alternatively, the communication between two dependent processes need to be queued until both are available.<br /><br />If these dependency requirements are not met, then a system may not work. For example, a web application process is useless if the database backend is not available.<br /><br />In order to fully automate the deployment of systems that are composed of running processes, we can do package management with Nix first and then we need to:<br /><br /><ul><li><b>Integrate with a process manager</b>, by generating artifacts that a process manager can work with, such as scripts and/or configuration files.</li><li>Make it possible to specify the <b>process dependencies</b> so that they can be managed (by a process manager or by other means) and activated in the right order.</li></ul><br /><h2>Generating sysvinit scripts</h2><br />There a variety of means to manage processes. A simple (and for today's standards maybe an old fashioned and perhaps controversial) way to manage processes is by using <a href="https://wiki.archlinux.org/index.php/SysVinit">sysvinit scripts</a> (also known as LSB Init compliant scripts).<br /><br />A sysvinit script implements a set of activities and a standardized interface allowing us to manage the lifecycle of a specific process, or a group of processes.<br /><br />For example, on a traditional Linux distribution, we can start a process, such as the <a href="https://nginx.com">Nginx web server</a>, with the following command:<br /><br /><pre>$ /etc/init.d/nginx start<br /></pre><br />and stop it as follows: <br /><br /><pre>$ /etc/init.d/nginx stop<br /></pre><br />A sysvinit script is straight forward to implement and follows a number of conventions:<br /><br /><pre style="overflow: auto;">#!/bin/bash<br /><br />## BEGIN INIT INFO<br /># Provides: nginx<br /># Default-Start: 3 4 5<br /># Default-Stop: 0 1 2 6<br /># Should-Start: webapp<br /># Should-Stop: webapp<br /># Description: Nginx<br />## END INIT INFO<br /><br />. 
/lib/lsb/init-functions<br /><br />case "$1" in<br /> start)<br /> log_info_msg "Starting Nginx..."<br /> mkdir -p /var/nginx/logs<br /> start_daemon /usr/bin/nginx -c /etc/nginx.conf -p /var/nginx <br /> evaluate_retval<br /> ;;<br /><br /> stop)<br /> log_info_msg "Stopping Nginx..."<br /> killproc /usr/bin/nginx<br /> evaluate_retval<br /> ;;<br /><br /> reload)<br /> log_info_msg "Reloading Nginx..."<br /> killproc /usr/bin/nginx -HUP<br /> evaluate_retval<br /> ;;<br /><br /> restart)<br /> $0 stop<br /> sleep 1<br /> $0 start<br /> ;;<br /><br /> status)<br /> statusproc /usr/bin/nginx<br /> ;;<br /><br /> *)<br /> echo "Usage: $0 {start|stop|reload|restart|status}"<br /> exit 1<br /> ;;<br />esac<br /></pre><br /><ul><li>A sysvinit script typically starts by providing some <b>metadata</b>, such a description, in which runlevels it needs to be started and stopped, and which dependencies the script has.<br /><br />In classic Linux distributions, meta information is typically ignored, but more sophisticated process managers, such as <a href="https://www.freedesktop.org/wiki/Software/systemd/">systemd</a>, can use it to automatically configure the activation/deactivation ordering.</li><li>The body defines a <b>case statement</b> that executes a requested activity.</li><li>Activities use a special construct (in the example above it is: <i>evaluate_retval</i>) to display the <b>status</b> of an instruction, typically whether a process has started or stopped successfully or not, using appropriate colors (e.g. red in case of a failure, green in case of sucess).</li><li>sysvinit scripts typically define a number of <b>commonly used activities</b>: <i>start</i> starts a process, <i>stop</i> stops a process, <i>reload</i> sends a <i>HUP</i> signal to the process to let it reload its configuration (if applicable), <i>restart</i> restarts the process, <i>status</i> indicates the status, and there is a fallback activity that displays the usage to the end user to show which activities can be executed.</li></ul><br />sysvinit scripts use number of utility functions that are defined by the <a href="http://refspecs.linuxbase.org/lsb.shtml">Linux Standards Base (LSB)</a>:<br /><br /><ul><li><i>start_daemon</i> is a utility function that is typically used for starting a process. It has the expectation that the process <a href="http://www.netzmafia.de/skripten/unix/linux-daemon-howto.html">daemonizes</a> -- a process that daemonizes will fork another process that keeps running in the background and then terminates immediately.<br /><br />Controlling a daemonized processes is a bit tricky -- when spawning a process the shell can tell you its process id (PID), so that it can be controlled, but it cannot tell you the PID of the process that gets daemonized by the invoked process, because that is beyond the shell's control.<br /><br />As a solution, most programs that daemonize will write a PID file (e.g. 
<i>/var/run/nginx.pid</i>) that can be used to determine the PID of the daemon so that it can be controlled.<br /><br />To do proper housekeeping, the <i>start_daemon</i> function will check whether such a PID file already exists, and will only start the process when it needs to.</li><li>Stopping a process, or sending it a different kind of signal, is typically done with the <i>killproc</i> function.<br /><br />This function will search for the corresponding PID file of the process (by default, a PID file that has the same name as the executable or a specified PID file) and uses the corresponding PID content to terminate the daemon. As a fallback, if no PID file exists, it will scan the entire process table and kill the process with the same name.</li><li>We can determine the status of a process (e.g. whether it is running or not) with the <i>statusproc</i> function that also consults the corresponding PID file or scans the process table if needed.</li></ul><br />Most common system software has the ability to daemonize, such as nginx, the Apache HTTP server, MySQL and PostgreSQL. Unfortunately, application services (such as microservices) that are implemented with technologies such as Python, Node.js or Java Springboot do not have this ability out of the box.<br /><br />Fortunately, we can use an external utility, such as <a href="http://www.libslack.org/daemon/">libslack's daemon command</a>, to let these foreground-only processes daemonize. Although it is possible to conveniently daemonize external processes, this functionality is not part of the LSB standard.<br /><br />For example, using the following command to start the web application front-end process will automatically daemonize a foreground process, such as a simple Node.js web application, and create a PID file so that it can be controlled by the sysvinit utility functions:<br /><br /><pre>$ daemon -U -i /home/sander/webapp/app.js<br /></pre><br />In addition to being started and stopped manually, sysvinit scripts are also typically started on startup and stopped on shutdown, or when a user switches between runlevels. These transitions are controlled by symlinks that reside in the <i>rc?.d</i> directories and that have specific prefixes:<br /><br /><pre>/etc/<br /> init.d/<br /> webapp<br /> nginx<br /> rc0.d/<br /> K98nginx -&gt; ../init.d/nginx<br /> K99webapp -&gt; ../init.d/webapp<br /> rc1.d/<br /> K98nginx -&gt; ../init.d/nginx<br /> K99webapp -&gt; ../init.d/webapp<br /> rc2.d/<br /> K98nginx -&gt; ../init.d/nginx<br /> K99webapp -&gt; ../init.d/webapp<br /> rc3.d/<br /> S00webapp -&gt; ../init.d/webapp<br /> S01nginx -&gt; ../init.d/nginx<br /> rc4.d/<br /> S00webapp -&gt; ../init.d/webapp<br /> S01nginx -&gt; ../init.d/nginx<br /> rc5.d/<br /> S00webapp -&gt; ../init.d/webapp<br /> S01nginx -&gt; ../init.d/nginx<br /> rc6.d/<br /> K98nginx -&gt; ../init.d/nginx<br /> K99webapp -&gt; ../init.d/webapp<br /></pre><br />In the above directory listing, every <i>rc?.d</i> directory contains symlinks to scripts in the <i>init.d</i> directory.<br /><br />The first character of each symlink file indicates whether an <i>init.d</i> script should be started (S) or stopped (K). The two numeric digits that follow indicate the order in which the scripts need to be started and stopped.<br /><br />Each runlevel has a specific purpose <a href="https://refspecs.linuxbase.org/LSB_3.0.0/LSB-PDA/LSB-PDA/runlevels.html">as described in the LSB standard</a>. 
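On a classic sysvinit-based distribution, these symlinks are usually created by hand or by a distribution-specific helper (such as Debian's update-rc.d). Purely as a sketch of the convention -- not part of the Nix-based approach described below -- the runlevel 3 entries from the listing above could be created as follows:<br /><br /><pre>$ ln -s ../init.d/webapp /etc/rc3.d/S00webapp<br />$ ln -s ../init.d/nginx /etc/rc3.d/S01nginx<br /></pre><br />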
In the above situation, when we boot the system in multi-user mode on the console (run level 3), first our Node.js web application will be started, followed by nginx. On a reboot (when we enter runlevel 6) nginx and then the web application will be stopped. Basically, the stop order is the reverse of the start order.<br /><br />To conveniently automate the deployment of sysvinit scripts, I have created a utility function called <i>createSystemVInitScript</i> that makes it possible to generate sysvinit scripts with the Nix package manager.<br /><br />We can create a Nix expression that generates a sysvinit script for nginx, such as:<br /><br /><pre>{createSystemVInitScript, nginx}:<br /><br />let<br /> configFile = ./nginx.conf;<br /> stateDir = "/var";<br />in<br />createSystemVInitScript { <br /> name = "nginx";<br /> description = "Nginx";<br /> activities = {<br /> start = ''<br /> mkdir -p ${stateDir}/logs<br /> log_info_msg "Starting Nginx..."<br /> loadproc ${nginx}/bin/nginx -c ${configFile} -p ${stateDir}<br /> evaluate_retval<br /> '';<br /> stop = ''<br /> log_info_msg "Stopping Nginx..."<br /> killproc ${nginx}/bin/nginx<br /> evaluate_retval<br /> '';<br /> reload = ''<br /> log_info_msg "Reloading Nginx..."<br /> killproc ${nginx}/bin/nginx -HUP<br /> evaluate_retval<br /> '';<br /> restart = ''<br /> $0 stop<br /> sleep 1<br /> $0 start<br /> '';<br /> status = "statusproc ${nginx}/bin/nginx";<br /> };<br /> runlevels = [ 3 4 5 ];<br />}<br /></pre><br />The above expression defines a function in which the function parameters refer to all dependencies that we need to construct the sysvinit script to manage an nginx server: <i>createSystemVInitScript</i> is the utility function that creates sysvinit scripts, <i>nginx</i> is the package that provides Nginx.<br /><br />In the body, we invoke the <i>createSystemVInitScript</i> function to construct a sysvinit script:<br /><br /><ul><li>The <b>name</b> corresponds to the name of the sysvinit script and the <b>description</b> to the description displayed in the metadata header.</li><li>The <b>activities</b> parameter refers to an attribute set in which every name refers to an activity and every value to the shell commands that need to be executed for this activity.<br /><br />We can use this parameter to specify the start, stop, reload, restart and status activities for nginx. The function abstraction will automatically configure the fallback activity that displays the usage to the end-user, including the activities that the script supports.</li><li>The <b>runlevels</b> parameter indicates in which runlevels the <i>init.d</i> script should be started. For these runlevels, the function will create start symlinks. An implication is that for the runlevels that are not specified (0, 1, 2, and 6) the script will automatically create stop symlinks.</li></ul><br />As explained earlier, sysvinit scripts use conventions. One such convention is that most activities typically display a description, then execute a command, and finally display the status of that command, such as:<br /><br /><pre>log_info_msg "Starting Nginx..."<br />loadproc ${nginx}/bin/nginx -c ${configFile} -p ${stateDir}<br />evaluate_retval<br /></pre><br />The <i>createSystemVInitScript</i> function also has a notion of <b>instructions</b>, which are automatically translated into activities displaying task descriptions (derived from the general description) and the status. 
Using the <i>instructions</i> parameter allows us to simplify the above expression to:<br /><br /><pre>{createSystemVInitScript, nginx}:<br /><br />let<br /> configFile = ./nginx.conf;<br /> stateDir = "/var";<br />in<br />createSystemVInitScript { <br /> name = "nginx";<br /> description = "Nginx";<br /> instructions = {<br /> start = {<br /> activity = "Starting";<br /> instruction = ''<br /> mkdir -p ${stateDir}/logs<br /> loadproc ${nginx}/bin/nginx -c ${configFile} -p ${stateDir}<br /> '';<br /> };<br /> stop = {<br /> activity = "Stopping";<br /> instruction = "killproc ${nginx}/bin/nginx";<br /> };<br /> reload = {<br /> activity = "Reloading";<br /> instruction = "killproc ${nginx}/bin/nginx -HUP";<br /> };<br /> };<br /> activities = {<br /> status = "statusproc ${nginx}/bin/nginx";<br /> };<br /> runlevels = [ 3 4 5 ];<br />}<br /></pre><br />In the above expression, the start, stop and reload activities have been simplified by defining them as instructions allowing us to write less repetitive boilerplate code.<br /><br />We can reduce the amount of boilerplate code even further -- the kind of activities that we need to implement for managing processes are typically mostly the same. When we want to manage a process, we typically want start, stop, restart and status activities and, if applicable, a reload activity if a process knows how to handle the HUP signal.<br /><br />Instead of specifying activities or instructions, it is also possible to specify which process we want to manage, and what kind of parameters the process should take:<br /><br /><pre>{createSystemVInitScript, nginx}:<br /><br />let<br /> configFile = ./nginx.conf;<br /> stateDir = "/var";<br />in<br />createSystemVInitScript { <br /> name = "nginx";<br /> description = "Nginx";<br /> initialize = ''<br /> mkdir -p ${stateDir}/logs<br /> '';<br /> process = "${nginx}/bin/nginx";<br /> args = [ "-c" configFile "-p" stateDir ];<br /> runlevels = [ 3 4 5 ];<br />}<br /></pre><br />From the <i>process</i> and <i>args</i> parameters, the <i>createSystemVInitScript</i> function automatically derives all relevant activities that we need to manage the process. It is also still possible to augment or override the generated activities by means of the <i>instructions</i> or <i>activities</i> parameters.<br /><br />Besides processes that already have the ability to daemonize, it is also possible to automatically daemonize foreground processes with this function abstraction. This is particularly useful to generate a sysvinit script for the Node.js web application service, which lacks this ability:<br /><br /><pre>{createSystemVInitScript}:<br /><br />let<br /> webapp = (import ./webapp {}).package;<br />in<br />createSystemVInitScript {<br /> name = "webapp";<br /> process = "${webapp}/lib/node_modules/webapp/app.js";<br /> processIsDaemon = false;<br /> runlevels = [ 3 4 5 ];<br /> environment = {<br /> PORT = 5000;<br /> };<br />}<br /></pre><br />In the above Nix expression, we set the <i>processIsDaemon</i> parameter to <i>false</i> (the default value is <i>true</i>) to indicate that the process is not a daemon, but a foreground process. The <i>createSystemVInitScript</i> function will generate a start activity that invokes the <i>daemon</i> command to daemonize it.<br /><br />Another interesting feature is that we can specify <strong>process dependency relationships</strong>. 
For example, an nginx server can act as a reverse proxy for the Node.js web application.<br /><br />To reliably activate the entire system, we must make sure that the web application process is deployed before Nginx is deployed. If we activate the system in the opposite order, then the reverse proxy may redirect users to an non-existent web application causing them to see 502 bad gateway errors.<br /><br />We can use the <strong>dependency parameter</strong> with a reference to a sysvinit script to indicate that this sysvinit script has a dependency. For example, we can revise the Nginx sysvinit script expression as follows:<br /><br /><pre>{createSystemVInitScript, nginx, webapp}:<br /><br />let<br /> configFile = ./nginx.conf;<br /> stateDir = "/var";<br />in<br />createSystemVInitScript { <br /> name = "nginx";<br /> description = "Nginx";<br /> initialize = ''<br /> mkdir -p ${stateDir}/logs<br /> '';<br /> process = "${nginx}/bin/nginx";<br /> args = [ "-c" configFile "-p" stateDir ];<br /> runlevels = [ 3 4 5 ];<br /> dependencies = [ webapp ];<br />}<br /></pre><br />In the above example, we pass the <i>webapp</i> sysvinit script as a dependency (through the <i>dependencies</i> parameter). Adding it as a dependency causes the generator to compute a start sequence number for the nginx script that will be higher than the web app sysvinit script and stop sequence number that will be lower than the web app script.<br /><br />The different sequence numbers ensure that webapp is started before nginx starts, and that the nginx stops before the webapp stops.<br /><br /><h2>Configuring managed processes</h2><br />So far composing sysvinit scripts is still very similar to composing ordinary Nix packages. We can also extend the four Nix packaging conventions described in the introduction to create a process management discipline.<br /><br />Similar to the convention in which every package is in a separate file, and defines a function in which the function parameters refers to all package dependencies, we can extend this convention for processes to also include relevant parameters to configure a service.<br /><br />For example, we can write a Nix expression for the web application process as follows:<br /><br /><pre>{createSystemVInitScript, port ? 5000}:<br /><br />let<br /> webapp = (import /home/sander/webapp {}).package;<br />in<br />createSystemVInitScript {<br /> name = "webapp";<br /> process = "${webapp}/lib/node_modules/webapp/app.js";<br /> processIsDaemon = false;<br /> runlevels = [ 3 4 5 ];<br /> environment = {<br /> PORT = port;<br /> };<br />}<br /></pre><br />In the above expression, the <i>port</i> function parameter allows us to configure the TCP port where the web application listens to (and defaults to 5000).<br /><br />We can also make the configuration of nginx configurable. For example, we can create a function abstraction that creates a configuration for nginx to let it act as a reverse proxy for the web application process shown earlier:<br /><br /><pre>{createSystemVInitScript, stdenv, writeTextFile, nginx<br />, runtimeDir, stateDir, logDir, port ? 80, webapps ? 
[]}:<br /><br />let<br /> nginxStateDir = "${stateDir}/nginx";<br />in<br />import ./nginx.nix {<br /> inherit createSystemVInitScript nginx;<br /> stateDir = nginxStateDir;<br /><br /> dependencies = map (webapp: webapp.pkg) webapps;<br /><br /> configFile = writeTextFile {<br /> name = "nginx.conf";<br /> text = ''<br /> error_log ${nginxStateDir}/logs/error.log;<br /> pid ${runtimeDir}/nginx.pid;<br /><br /> events {<br /> worker_connections 190000;<br /> }<br /><br /> http {<br /> ${stdenv.lib.concatMapStrings (dependency: ''<br /> upstream webapp${toString dependency.port} {<br /> server localhost:${toString dependency.port};<br /> }<br /> '') webapps}<br /><br /> ${stdenv.lib.concatMapStrings (dependency: ''<br /> server {<br /> listen ${toString port};<br /> server_name ${dependency.dnsName};<br /><br /> location / {<br /> proxy_pass http://webapp${toString dependency.port};<br /> }<br /> }<br /> '') webapps}<br /> }<br /> '';<br /> };<br />}<br /></pre><br />The above Nix expression's function header defines, in addition to the package dependencies, process configuration parameters that make it possible to configure the TCP port that Nginx listens to (port 80 by default) and to which web applications it should forward requests based on their virtual host property.<br /><br />In the body, these properties are used to generate an <i>nginx.conf</i> file that defines virtual hosts for each web application process. It forwards incoming requests to the appropriate web application instance. To connect to a web application instance, it uses the port number that the <i>webapp</i> instance configuration provides.<br /><br />Similar to ordinary Nix expressions, Nix expressions for processes also need to be composed, by passing the appropriate function parameters. This can be done in a <strong>process composition expression</strong> that has the following structure:<br /><br /><pre>{ pkgs ? import &lt;nixpkgs&gt; { inherit system; }<br />, system ? builtins.currentSystem<br />, stateDir ? "/var"<br />, runtimeDir ? "${stateDir}/run"<br />, logDir ? "${stateDir}/log"<br />, tmpDir ? 
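 # when stateDir is the global /var, fall back to the standard /tmp; otherwise keep temporary files inside the chosen state directory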
(if stateDir == "/var" then "/tmp" else "${stateDir}/tmp")<br />}:<br /><br />let<br /> createSystemVInitScript = import ./create-sysvinit-script.nix {<br /> inherit (pkgs) stdenv writeTextFile daemon;<br /> inherit runtimeDir tmpDir;<br /><br /> createCredentials = import ./create-credentials.nix {<br /> inherit (pkgs) stdenv;<br /> };<br /><br /> initFunctions = import ./init-functions.nix {<br /> basePackages = [<br /> pkgs.coreutils<br /> pkgs.gnused<br /> pkgs.inetutils<br /> pkgs.gnugrep<br /> pkgs.sysvinit<br /> ];<br /> inherit (pkgs) stdenv;<br /> inherit runtimeDir;<br /> };<br /> };<br />in<br />rec {<br /> webapp = rec {<br /> port = 5000;<br /> dnsName = "webapp.local";<br /><br /> pkg = import ./webapp.nix {<br /> inherit createSystemVInitScript port;<br /> };<br /> };<br /><br /> nginxReverseProxy = rec {<br /> port = 80;<br /><br /> pkg = import ./nginx-reverse-proxy.nix {<br /> inherit createSystemVInitScript;<br /> inherit stateDir logDir runtimeDir port;<br /> inherit (pkgs) stdenv writeTextFile nginx;<br /> webapps = [ webapp ];<br /> };<br /> };<br />}<br /></pre><br />The above expression (<i>processes.nix</i>) has the following structure:<br /><br /><ul><li>The expression defines a function in which the function parameters allow common properties that apply to all processes to be configured: <i>pkgs</i> refers to the set of Nixpkgs that contains a big collection of free and open source packages, <i>system</i> refers to the system architecture to build packages for, and <i>stateDir</i> to the directory where processes should store their state (which is <i>/var</i> according to the LSB standard).<br /><br />The remaining parameters specify the runtime, log and temp directories, that are typically sub directories in the state directory.</li><li>In the let block, we compose our <i>createSystemVInitScript</i> function using the relevant state directory parameters, base packages and utility functions.</li><li>In the body, we construct an attribute set in which every name represents a process name and every value an attribute set that contains process properties.</li><li>One reserved process property of a process attribute set is the <i>pkg</i> property that refers to a package providing the sysvinit script.</li><li>The remaining process properties can be freely chosen and can be consumed by any process that has a dependency on it.<br /><br />For example, the <i>nginxReverseProxy</i> service uses the <i>port</i> and <i>dnsName</i> properties of the <i>webapp</i> process to configure nginx to forward requests to the provided DNS host name (<i>webapp.local</i>) to the web application process listening on the specified TCP port (<i>5000</i>).</li></ul><br />Using the above composition Nix expression for processes and the following command-line instruction, we can build the sysvinit script for the web application process:<br /><br /><pre>$ nix-build processes.nix -A webapp<br /></pre><br />We can start the web application process by using the generated sysvinit script, as follows:<br /><br /><pre>$ ./result/bin/etc/rc.d/init.d/webapp start<br /></pre><br />and stop it as follows:<br /><br /><pre>$ ./result/bin/etc/rc.d/init.d/webapp stop<br /></pre><br />We can also build the nginx reverse proxy in a similar way, but to properly activate it, we must make sure that the webapp process is activated first.<br /><br />To reliably manage a set of processes and activate them in the right order, we can also generate a Nix profile that contains all <i>init.d</i> scripts and <i>rc.d</i> 
symlinks for stopping and starting:<br /><br /><pre>{ pkgs ? import &lt;nixpkgs&gt; { inherit system; }<br />, system ? builtins.currentSystem<br />}:<br /><br />let<br /> buildSystemVInitEnv = import ./build-sysvinit-env.nix {<br /> inherit (pkgs) buildEnv;<br /> };<br />in<br />buildSystemVInitEnv {<br /> processes = import ./processes.nix {<br /> inherit pkgs system;<br /> };<br />}<br /></pre><br />The above expression imports the process composition expression shown earlier, and invokes the <i>buildSystemVInitEnv</i> function to compose a Nix profile out of it. We can build this environment as follows:<br /><br /><pre>$ nix-build profile.nix<br /></pre><br />Visually, the content of the Nix profile can be presented as follows:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-901SfouMyio/XcPyvcgwzcI/AAAAAAAAIgI/A6Er10Z55goGXuaDAG5lN4mY-nvXDX-XgCLcBGAsYHQ/s1600/processes-simple.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://1.bp.blogspot.com/-901SfouMyio/XcPyvcgwzcI/AAAAAAAAIgI/A6Er10Z55goGXuaDAG5lN4mY-nvXDX-XgCLcBGAsYHQ/s1600/processes-simple.png" /></a></div><br />In the above diagram, the ovals denote processes and the arrows denote process dependency relationships. The arrow indicates that the <i>webapp</i> process needs to be activated before the <i>nginxReverseProxy</i>.<br /><br />We can use the system's <i>rc</i> script to manage the starting and stopping of the processes when runlevels are switched. Runlevels 1-5 make it possible to start the processes on startup and 0 and 6 to stop them on shutdown or reboot.<br /><br />In addition to the system's <i>rc</i> script, we can also directly control the processes in a Nix profile -- I have created a utility script called: <i>rcswitch</i> that makes it possible to manually start all processes in a profile:<br /><br /><pre>$ rcswitch ./result/etc/rc.d/rc3.d<br /></pre><br />We can also use the <i>rcswitch</i> command to do an upgrade from one set of processes to another:<br /><br /><pre>$ rcswitch ./result/etc/rc.d/rc3.d ./oldresult/etc/rc.d/rc3.d<br /></pre><br />The above command checks which of the sysvinit scripts exist in both profiles and will only deactivate obsolete processes and activate new processes.<br /><br />With the <i>rcrunactivity</i> command it is possible to run arbitrary activities on all processes in a profile. For example, the following command will show all statuses:<br /><br /><pre>$ rcrunactivity status ./result/etc/rc.d/rc3.d<br /></pre><br />
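To give an impression of what such a profile looks like on disk, its directory structure could look roughly as follows (a hypothetical illustration -- the exact symlink names and sequence numbers are computed by the generator):<br /><br /><pre>$ tree ./result/etc/rc.d<br />./result/etc/rc.d<br />├── init.d<br />│   ├── nginx<br />│   └── webapp<br />└── rc3.d<br />    ├── S00webapp -&gt; ../init.d/webapp<br />    └── S01nginx -&gt; ../init.d/nginx<br /></pre><br />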
<h2>Deploying services as an unprivileged user</h2><br />The process composition expression shown earlier is also a Nix function that takes various kinds of state properties as parameters.<br /><br />By default, it has been configured in such a way that it facilitates production deployments. For example, it stores the state of all services in the global <i>/var</i> directory. Only the super user has the permissions to alter the structure of the global <i>/var</i> directory.<br /><br />It is also possible to change these configuration parameters in such a way that it becomes possible to do process deployment as an unprivileged user.<br /><br />For example, by changing the port number of the <i>nginxReverseProxy</i> process to a value higher than 1024, such as 8080 (an unprivileged user is not allowed to bind any services to ports below 1024), and changing the <i>stateDir</i> parameter to a directory in a user's home directory, we can deploy our web application service and Nginx reverse proxy as an unprivileged user:<br /><br /><pre>$ nix-build processes.nix --argstr stateDir /home/sander/var \<br /> -A nginxReverseProxy<br /></pre><br />By overriding the <i>stateDir</i> parameter, the resulting Nginx process has been configured to store all state in <i>/home/sander/var</i> as opposed to the global <i>/var</i> that cannot be modified by an unprivileged user.<br /><br />As an unprivileged user, I should be able to start the Nginx reverse proxy as follows:<br /><br /><pre>$ ./result/etc/rc.d/init.d/nginx start<br /></pre><br />The above Nginx instance can be reached by opening: <i>http://localhost:8080</i> in a web browser.<br /><br /><h2>Creating multiple process instances</h2><br />So far, we have only been deploying single instances of processes. For the Nginx reverse proxy example, it may also be desired to deploy <strong>multiple instances</strong> of the webapp process so that we can manage forwardings for multiple virtual domains.<br /><br />We can adjust the Nix expression for the webapp to make it possible to create multiple process instances:<br /><br /><pre>{createSystemVInitScript}:<br />{port, instanceSuffix ? ""}:<br /><br />let<br /> webapp = (import ./webapp {}).package;<br /> instanceName = "webapp${instanceSuffix}";<br />in<br />createSystemVInitScript {<br /> name = instanceName;<br /> inherit instanceName;<br /> process = "${webapp}/lib/node_modules/webapp/app.js";<br /> processIsDaemon = false;<br /> runlevels = [ 3 4 5 ];<br /> environment = {<br /> PORT = port;<br /> };<br />}<br /></pre><br />The above Nix expression is a modified webapp build recipe that facilitates instantiation:<br /><br /><ul><li>We have split the Nix expression into two nested functions. The first line, the outer function header, defines all dependencies and configurable properties that apply to all service instances.</li><li>The inner function header allows all <b>instance specific</b> properties to be configured so that multiple instances can co-exist. An example of such a property is the <i>port</i> parameter -- only one service can bind to a specific TCP port. Configuring an instance to bind to a different port allows two instances to co-exist.<br /><br />The <i>instanceSuffix</i> parameter makes it possible to give each webapp process a unique name (e.g. by providing a numeric value).<br /><br />From the package name and instance suffix a unique <i>instanceName</i> is composed. 
Propagating the <i>instanceName</i> to the <i>createSystemVInitScript</i> function instructs the <i>daemon</i> command to create a unique PID file (not a PID file that corresponds to the executable name) for each daemon process so that multiple instances can be controlled independently.</li></ul><br />Although this may sound like a very uncommon use case, it is also possible to change the Nix expression for the Nginx reverse proxy to support multiple instances.<br /><br />Typically, for system services, such as web servers and database servers, it is very uncommon to run multiple instances at the same time. Despite the fact that it is uncommon, it is actually possible and quite useful for development and/or experimentation purposes:<br /><br /><pre>{ createSystemVInitScript, stdenv, writeTextFile, nginx<br />, runtimeDir, stateDir, logDir}:<br /><br />{port ? 80, webapps ? [], instanceSuffix ? ""}:<br /><br />let<br /> instanceName = "nginx${instanceSuffix}";<br /> nginxStateDir = "${stateDir}/${instanceName}";<br />in<br />import ./nginx.nix {<br /> inherit createSystemVInitScript nginx instanceSuffix;<br /> stateDir = nginxStateDir;<br /><br /> dependencies = map (webapp: webapp.pkg) webapps;<br /><br /> configFile = writeTextFile {<br /> name = "nginx.conf";<br /> text = ''<br /> error_log ${nginxStateDir}/logs/error.log;<br /> pid ${runtimeDir}/${instanceName}.pid;<br /><br /> events {<br /> worker_connections 190000;<br /> }<br /><br /> http {<br /> ${stdenv.lib.concatMapStrings (dependency: ''<br /> upstream webapp${toString dependency.port} {<br /> server localhost:${toString dependency.port};<br /> }<br /> '') webapps}<br /><br /> ${stdenv.lib.concatMapStrings (dependency: ''<br /> server {<br /> listen ${toString port};<br /> server_name ${dependency.dnsName};<br /><br /> location / {<br /> proxy_pass http://webapp${toString dependency.port};<br /> }<br /> }<br /> '') webapps}<br /> }<br /> '';<br /> };<br />}<br /></pre><br />The code fragment above shows a revised Nginx expression that supports instantiation:<br /><br /><ul><li>Again, the Nix expression defines a nested function in which the outer function header refers to configuration properties for all services, whereas the inner function header refers to all conflicting parameters that need to be changed so that multiple instances can co-exist.</li><li>The <i>port</i> parameter makes it possible to configure the TCP port that Nginx binds to. To have two instances co-exist, they both need to bind to unreserved ports.</li><li>As with the previous example, the <i>instanceSuffix</i> parameter makes it possible to compose unique names for each Nginx instance. The <i>instanceName</i> variable that is composed from it is used to create and configure a dedicated state directory, and a unique PID file that does not conflict with other Nginx instances.</li></ul><br />This new convention of nested functions for instantiatable services means that we have to compose these expressions twice. First, we need to pass all parameters that configure properties that apply to all service instances. 
This can be done in a Nix expression that has the following structure:<br /><br /><pre>{ pkgs<br />, system<br />, stateDir<br />, logDir<br />, runtimeDir<br />, tmpDir<br />}:<br /><br />let<br /> createSystemVInitScript = import ./create-sysvinit-script.nix {<br /> inherit (pkgs) stdenv writeTextFile daemon;<br /> inherit runtimeDir tmpDir;<br /><br /> createCredentials = import ./create-credentials.nix {<br /> inherit (pkgs) stdenv;<br /> };<br /><br /> initFunctions = import ./init-functions.nix {<br /> basePackages = [<br /> pkgs.coreutils<br /> pkgs.gnused<br /> pkgs.inetutils<br /> pkgs.gnugrep<br /> pkgs.sysvinit<br /> ];<br /> inherit (pkgs) stdenv;<br /> inherit runtimeDir;<br /> };<br /> };<br />in<br />{<br /> webapp = import ./webapp.nix {<br /> inherit createSystemVInitScript;<br /> };<br /><br /> nginxReverseProxy = import ./nginx-reverse-proxy.nix {<br /> inherit createSystemVInitScript stateDir logDir runtimeDir;<br /> inherit (pkgs) stdenv writeTextFile nginx;<br /> };<br />}<br /></pre><br />The above Nix expression is something we could call a <b>constructors expression</b> (<i>constructors.nix</i>) that returns an attribute set in which each member refers to a function that allows us to compose a specific process instance.<br /><br />By using the constructors expression shown above, we can create a process composition expression that works with multiple instances:<br /><br /><pre>{ pkgs ? import &lt;nixpkgs&gt; { inherit system; }<br />, system ? builtins.currentSystem<br />, stateDir ? "/home/sbu"<br />, runtimeDir ? "${stateDir}/run"<br />, logDir ? "${stateDir}/log"<br />, tmpDir ? (if stateDir == "/var" then "/tmp" else "${stateDir}/tmp")<br />}:<br /><br />let<br /> constructors = import ./constructors.nix {<br /> inherit pkgs system stateDir runtimeDir logDir tmpDir;<br /> };<br />in<br />rec {<br /> webapp1 = rec {<br /> port = 5000;<br /> dnsName = "webapp1.local";<br /><br /> pkg = constructors.webapp {<br /> inherit port;<br /> instanceSuffix = "1";<br /> };<br /> };<br /><br /> webapp2 = rec {<br /> port = 5001;<br /> dnsName = "webapp2.local";<br /><br /> pkg = constructors.webapp {<br /> inherit port;<br /> instanceSuffix = "2";<br /> };<br /> };<br /><br /> webapp3 = rec {<br /> port = 5002;<br /> dnsName = "webapp3.local";<br /><br /> pkg = constructors.webapp {<br /> inherit port;<br /> instanceSuffix = "3";<br /> };<br /> };<br /><br /> webapp4 = rec {<br /> port = 5003;<br /> dnsName = "webapp4.local";<br /><br /> pkg = constructors.webapp {<br /> inherit port;<br /> instanceSuffix = "4";<br /> };<br /> };<br /><br /> nginxReverseProxy = rec {<br /> port = 8080;<br /><br /> pkg = constructors.nginxReverseProxy {<br /> webapps = [ webapp1 webapp2 webapp3 webapp4 ];<br /> inherit port;<br /> };<br /> };<br /><br /> webapp5 = rec {<br /> port = 6002;<br /> dnsName = "webapp5.local";<br /><br /> pkg = constructors.webapp {<br /> inherit port;<br /> instanceSuffix = "5";<br /> };<br /> };<br /><br /> webapp6 = rec {<br /> port = 6003;<br /> dnsName = "webapp6.local";<br /><br /> pkg = constructors.webapp {<br /> inherit port;<br /> instanceSuffix = "6";<br /> };<br /> };<br /><br /> nginxReverseProxy2 = rec {<br /> port = 8081;<br /><br /> pkg = constructors.nginxReverseProxy {<br /> webapps = [ webapp5 webapp6 ];<br /> inherit port;<br /> instanceSuffix = "2";<br /> };<br /> };<br />}<br /></pre><br />In the above expression, we import the constructors expression, as shown earlier. 
In the body, we construct multiple instances of these processes by using the constructors functions:<br /><br /><ul><li>We compose six web application instances (<i>webapp1</i>, <i>webapp2</i>, ..., <i>webapp6</i>), each of them listening on a unique TCP port.</li><li>We compose two Nginx instances (<i>nginxReverseProxy</i>, <i>nginxReverseProxy2</i>). The first instance listens on TCP port 8080 and redirects the user to any of the first four web application processes, based on the virtual host name. The other Nginx instance listens on TCP port 8081, redirecting the user to the remaining web apps based on the virtual host name.</li></ul><br />We can represent the above composition expression visually, as follows:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-syQ6BJILH0U/XcP1J5WDK6I/AAAAAAAAIgc/Ys2LMsIdGK0flM8sOL_lCGD9GP77GphpwCLcBGAsYHQ/s1600/processes-instances.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://1.bp.blogspot.com/-syQ6BJILH0U/XcP1J5WDK6I/AAAAAAAAIgc/Ys2LMsIdGK0flM8sOL_lCGD9GP77GphpwCLcBGAsYHQ/s400/processes-instances.png" width="520" /></a></div><br />As with the previous examples, we can deploy each process instance individually:<br /><br /><pre>$ nix-build processes.nix -A webapp3<br />$ ./result/etc/rc.d/init.d/webapp3 start<br /></pre><br />Or the whole set as a Nix profile:<br /><br /><pre>$ nix-build profile.nix<br />$ rcswitch ./result/etc/rc.d/rc3.d<br /></pre><br />Again, the <i>rcswitch</i> command will make sure that all processes are activated in the right order. This means that the webapp processes are activated first, followed by the Nginx reverse proxies.<br /><br /><h2>Managing user accounts/state with Dysnomia</h2><br />Most of the deployment of the processes can be automated in a stateless way -- Nix can deploy the executable as a Nix package and the sysvinit script can manage the lifecycle.<br /><br />There is another concern that we may also want to address. Typically, for security and safety reasons, it is not recommended to run processes, such as essential system services, as the root user.<br /><br />In order to run a process as an unprivileged user, an unprivileged group and user account must be created first by some means. Furthermore, when undeploying a process, we may also want to remove the dedicated user and group.<br /><br />User account management is a feature that the Nix package manager does not support -- Nix only works with files stored in the Nix store and cannot/will not (by design) change any files on the host system, such as <i>/etc/passwd</i> where the user accounts are stored.<br /><br />I have created a deployment tool for state management (<a href="https://sandervanderburg.blogspot.com/2012/03/deployment-of-mutable-components.html">Dysnomia</a>) that can be used for this purpose. It facilitates a plugin system that can manage deployment activities for components that Nix does not support: activating, deactivating, taking snapshots, restoring snapshots etc.<br /><br />I have created a Dysnomia plugin called: <i>sysvinit-script</i> that can activate or deactivate a process by invoking a sysvinit script. It can also create or discard users and groups from a declarative configuration file that is included with a sysvinit script.<br /><br />We can revise a process Nix expression to start a process as an unprivileged user:<br /><br /><pre>{createSystemVInitScript}:<br />{port, instanceSuffix ? 
""}:<br /><br />let<br /> webapp = (import ./webapp {}).package;<br /> instanceName = "webapp${instanceSuffix}";<br />in<br />createSystemVInitScript {<br /> name = instanceName;<br /> inherit instanceName;<br /> process = "${webapp}/lib/node_modules/webapp/app.js";<br /> processIsDaemon = false;<br /> runlevels = [ 3 4 5 ];<br /> environment = {<br /> PORT = port;<br /> };<br /> user = instanceName;<br /><br /> credentials = {<br /> groups = {<br /> "${instanceName}" = {};<br /> };<br /> users = {<br /> "${instanceName}" = {<br /> group = instanceName;<br /> description = "Webapp";<br /> };<br /> };<br /> };<br />}<br /></pre><br />The above Nix expression is a revised webapp Nix expression that facilitates user switching:<br /><br /><ul><li>The <i>user</i> parameter specifies that we want to run the process as an unprivileged user. Because this process can also be instantiated, we have to make sure that it gets a unique name. To facilitate that, we create a user with the same username as the instance name.</li><li>The <i>credentials</i> parameter refers to a specification that instructs the <i>sysvinit-script</i> Dysnomia plugin to create an unprivileged user and group on activation, and discard them on deactivation.</li></ul><br />For production purposes (e.g. when we deploy processes as the root user), switching to unprivileged users is useful, but for development purposes, such as running a set of processes as an unprivileged user, we cannot switch users because we may not have the permissions to do so.<br /><br />For convenience purposes, it is also possible to globally disable user switching, which we can do as follows:<br /><br /><pre>{ pkgs<br />, stateDir<br />, logDir<br />, runtimeDir<br />, tmpDir<br />, forceDisableUserChange<br />}:<br /><br />let<br /> createSystemVInitScript = import ./create-sysvinit-script.nix {<br /> inherit (pkgs) stdenv writeTextFile daemon;<br /> inherit runtimeDir tmpDir forceDisableUserChange;<br /><br /> createCredentials = import ./create-credentials.nix {<br /> inherit (pkgs) stdenv;<br /> };<br /><br /> initFunctions = import ./init-functions.nix {<br /> basePackages = [<br /> pkgs.coreutils<br /> pkgs.gnused<br /> pkgs.inetutils<br /> pkgs.gnugrep<br /> pkgs.sysvinit<br /> ];<br /> inherit (pkgs) stdenv;<br /> inherit runtimeDir;<br /> };<br /> };<br />in<br />{<br /> ...<br />}<br /></pre><br />In the above example, the <i>forceDisableUserChange</i> parameter can be used to globally disable user switching for all sysvinit scripts composed in the expression. 
It invokes a feature of the <i>createSystemVInitScript</i> function to ignore any user settings that might have been propagated to it.<br /><br />With the following command we can deploy a process that does not switch users, despite having user settings configured in the process Nix expressions:<br /><br /><pre>$ nix-build processes.nix --arg forceDisableUserChange true<br /></pre><br /><h2>Distributed process deployment with Disnix</h2><br />As explained earlier, I have adopted four common Nix package conventions and extended them to suit the needs of process management.<br /><br />This is not the only solution that I have implemented that builds on these four conventions -- the other solution is Disnix, which extends Nix's packaging principles to (distributed) service-oriented systems.<br /><br />Disnix extends Nix expressions for ordinary packages with another category of dependencies: <b>inter-dependencies</b> that model dependencies on services that may have been deployed to remote machines in a network and require a network connection to work.<br /><br />In Disnix, a service expression is a nested function in which the outer function header specifies all <b>intra-dependencies</b> (local dependencies, such as build tools and libraries), and the inner function header refers to inter-dependencies.<br /><br />It is also possible to combine the concepts of process deployment described in this blog post with the service-oriented system concepts of Disnix, such as inter-dependencies -- the example with Nginx reverse proxies and web application processes can be extended to work in a network of machines.<br /><br />Besides deploying a set of processes (that may have dependencies on each other) to a single machine, it is also possible to deploy the web application processes to different machines in the network than the machine where the Nginx reverse proxy is deployed to.<br /><br />We can configure the reverse proxy in such a way that it will forward requests to the machine where the web application processes may have been deployed to.<br /><br /><pre style="overflow: auto;">{ createSystemVInitScript, stdenv, writeTextFile, nginx<br />, runtimeDir, stateDir, logDir<br />}:<br /><br />{port ? 80, instanceSuffix ? 
""}:<br /><br />interDeps:<br /><br />let<br /> instanceName = "nginx${instanceSuffix}";<br /> nginxStateDir = "${stateDir}/${instanceName}";<br />in<br />import ./nginx.nix {<br /> inherit createSystemVInitScript nginx instanceSuffix;<br /> stateDir = nginxStateDir;<br /><br /> dependencies = map (dependencyName: <br /> let<br /> dependency = builtins.getAttr dependencyName interDeps;<br /> in<br /> dependency.pkg<br /> ) (builtins.attrNames interDeps);<br /><br /> configFile = writeTextFile {<br /> name = "nginx.conf";<br /> text = ''<br /> error_log ${nginxStateDir}/logs/error.log;<br /> pid ${runtimeDir}/${instanceName}.pid;<br /><br /> events {<br /> worker_connections 190000;<br /> }<br /><br /> http {<br /> ${stdenv.lib.concatMapStrings (dependencyName:<br /> let<br /> dependency = builtins.getAttr dependencyName interDeps;<br /> in<br /> ''<br /> upstream webapp${toString dependency.port} {<br /> server ${dependency.target.properties.hostname}:${toString dependency.port};<br /> }<br /> '') (builtins.attrNames interDeps)}<br /><br /> ${stdenv.lib.concatMapStrings (dependencyName:<br /> let<br /> dependency = builtins.getAttr dependencyName interDeps;<br /> in<br /> ''<br /> server {<br /> listen ${toString port};<br /> server_name ${dependency.dnsName};<br /><br /> location / {<br /> proxy_pass http://webapp${toString dependency.port};<br /> }<br /> }<br /> '') (builtins.attrNames interDeps)}<br /> }<br /> '';<br /> };<br />}<br /></pre><br />The above Nix expression is a revised Nginx configuration that also works with inter-dependencies:<br /><br /><ul><li>The above Nix expression defines three nested functions. The purpose of the outermost function (the first line) is to configure all local dependencies that are common to all process instances. The middle function defines all process instance parameters that are potentially conflicting and need to be configured with unique values so that multiple instances can co-exist. The third (innermost) function refers to the inter-dependencies of this process: services that may reside on a different machine in the network and need to be reached with a network connection.</li><li>The inter-dependency function header (<i>interDeps:</i>) takes an arbitrary number of dependencies. These inter-dependencies refer to all web application process instances that the Nginx reverse proxy should redirect to.</li><li>In the body, we generate an <i>nginx.conf</i> that uses the inter-dependencies to set up the forwardings.<br /><br />Compared to the previous Nginx reverse proxy example, it will use the <i>dependency.target.properties.hostname</i> property that refers to the hostname of the machine where the web application process is deployed to, instead of forwarding to <i>localhost</i>. This makes it possible to connect to a web application process that may have been deployed to another machine.</li><li>The inter-dependencies are also passed to the <i>dependencies</i> function parameter of the Nginx function. This will ensure that if Nginx and a web application process are distributed to the same machine by Disnix, they will also get activated in the right order by the system's <i>rc</i> script on startup.</li></ul><br />As with the previous examples, we need to compose the above Disnix expression multiple times. 
The composition of the constructors can be done in the constructors expression (as shown in the previous examples).<br /><br />The processes' instance properties and inter-dependencies can be configured in the Disnix <b>services model</b>, which shares many similarities with the process composition expression shown earlier. As a matter of fact, a Disnix services model is a superset of it:<br /><br /><pre>{ pkgs, distribution, invDistribution, system<br />, stateDir ? "/var"<br />, runtimeDir ? "${stateDir}/run"<br />, logDir ? "${stateDir}/log"<br />, tmpDir ? (if stateDir == "/var" then "/tmp" else "${stateDir}/tmp")<br />, forceDisableUserChange ? true<br />}:<br /><br />let<br /> constructors = import ./constructors.nix {<br /> inherit pkgs stateDir runtimeDir logDir tmpDir;<br /> inherit forceDisableUserChange;<br /> };<br />in<br />rec {<br /> webapp = rec {<br /> name = "webapp";<br /> port = 5000;<br /> dnsName = "webapp.local";<br /> pkg = constructors.webapp {<br /> inherit port;<br /> };<br /> type = "sysvinit-script";<br /> };<br /><br /> nginxReverseProxy = rec {<br /> name = "nginxReverseProxy";<br /> port = 8080;<br /> pkg = constructors.nginxReverseProxy {<br /> inherit port;<br /> };<br /> dependsOn = {<br /> inherit webapp;<br /> };<br /> type = "sysvinit-script";<br /> };<br />}<br /></pre><br />The above Disnix services model defines two services (representing processes) that have an inter-dependency on each other, as specified with the <i>dependsOn</i> parameter property of each service.<br /><br />The <i>sysvinit-script</i> <i>type</i> property instructs Disnix to deploy the services as processes managed by a sysvinit script. In a Disnix-context, services have no specific form or meaning, and can basically represent anything. The type property is used to tell Disnix what kind of service we are dealing with.<br /><br />To properly configure remote dependencies, we also need to know the target machines that we can deploy to and what their properties are. This is what we can use an <b>infrastructure</b> model for.<br /><br />For example, a simple infrastructure model of two machines could be:<br /><br /><pre>{<br /> test1.properties.hostname = "test1";<br /> test2.properties.hostname = "test2";<br />}<br /></pre><br />We must also tell Disnix to which target machines we want to distribute the services. This can be done in a <b>distribution model</b>:<br /><br /><pre>{infrastructure}:<br /><br />{<br /> webapp = [ infrastructure.test1 ];<br /> nginxReverseProxy = [ infrastructure.test2 ];<br />}<br /></pre><br />In the above distribution model, we distribute the <i>webapp</i> process to the first target machine and the <i>nginxReverseProxy</i> to the second machine. 
Because both services are deployed to different machines in the network, the <i>nginxReverseProxy</i> uses a network link to forward incoming requests to the web application.<br /><br />By running the following command-line instruction:<br /><br /><pre style="font-size: 90%;">$ disnix-env -s services.nix -i infrastructure.nix -d distribution.nix<br /></pre><br />Disnix will deploy the processes to the target machines defined in the distribution model.<br /><br />The result is the following deployment architecture:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-s_AJzyNIr38/XcP1acRlEyI/AAAAAAAAIgk/PmVj4qDFQTwatjcMGuxevkYtXNQMEZtvwCLcBGAsYHQ/s1600/processes-distributed.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://1.bp.blogspot.com/-s_AJzyNIr38/XcP1acRlEyI/AAAAAAAAIgk/PmVj4qDFQTwatjcMGuxevkYtXNQMEZtvwCLcBGAsYHQ/s1600/processes-distributed.png" /></a></div><br />As may be noticed by looking at the above diagram, the process dependency manifests itself as a network link managed as an inter-dependency by Disnix.<br /><br /><h2>Conclusion</h2><br />In this blog post, I have described a Nix-based functional organization for managing processes based on four simple Nix packaging conventions. This approach offers the following benefits:<br /><br /><ul><li>Integration with many process managers that manage the lifecycle of a process (in this particular blog post: using sysvinit scripts).</li><li>The ability to relocate state to other locations, which is useful to facilitate unprivileged user deployments.</li><li>The ability to create multiple instances of processes, by making conflicting properties configurable.</li><li>Disabling user switching, which is useful to facilitate unprivileged user deployments.</li><li>It can be used on any Linux system that has the Nix package manager installed. It can be used on NixOS, but NixOS is not a requirement.</li></ul><br /><h3>Related work</h3><br />Integrating process management with Nix package deployment is not a new subject, nor something that is done for the first time.<br /><br />Many years ago, there was the "trace" Subversion repository (that was named after the research project TraCE: Transparent Configuration Environments funded by <a href="http://www.jacquard.nl">NWO/Jacquard</a>), the repository in which all Nix-related development was done before the transition was made to GitHub (before 2012).<br /><br />In the trace repository, there was also a services project that could be used to generate sysvinit-like scripts that could be used on any Linux distribution, and several non-Linux systems as well, such as FreeBSD.<br /><br />Eelco Dolstra's PhD thesis Chapter 9 describes a distributed deployment prototype that extends the init script approach to networks of machines. 
The prototype facilitates the distribution of init scripts to remote machines and heterogeneous operating systems deployment -- an init script can be built for multiple operating systems, such as Linux and FreeBSD.<br /><br />Although the prototype shares some concepts with Disnix and the process management approach described in this blog post, it also lacks many features -- it has no notion of process dependencies, inter-dependencies, the ability to separate services/processes and infrastructure, and to specify distribution mappings between processes and target machines, including the deployment of redundant instances.<br /><br />Originally, NixOS used to work with the generated scripts from the services sub project in the trace repository, but quite quickly adopted <a href="http://upstart.ubuntu.com">Upstart</a> as its init system. Gradually, the init scripts and upstart jobs got integrated, and eventually replaced by Upstart jobs completely. As a result, it was no longer possible to run services independently of NixOS.<br /><br />NixOS is a Linux distribution whose static aspects are fully managed by Nix, including user packages, configuration files, the Linux kernel, and kernel modules. NixOS machine configurations are deployed from a single declarative specification.<br /><br />Although NixOS is an extension of Nix deployment principles to machine-level deployment, a major conceptual difference between NixOS and the Nix packages repository is that NixOS generates a big data structure made out of all potential configuration options that NixOS provides. It uses this (very big) generated data structure as an input for an activation script that will initialize all dynamic system parts, such as populating the state directories (e.g. <i>/var</i>) and loading systemd jobs.<br /><br />In early incarnations of NixOS, the organization of the repository was quite monolithic -- there was one NixOS file that defines all configuration options for all possible system configuration aspects, one file that defines all the system user accounts, and one file that defines all global configuration files in <i>/etc</i>. When it was desired to add a new system service, all these global configuration files needed to be modified.<br /><br />Some time later (mid 2009), the NixOS module system was introduced that makes it possible to isolate all related configuration aspects of, for example, a system service into a separate module. Despite the fact that configuration aspects are isolated, the NixOS module system has the ability (through a concept called <a href="http://r6.ca/blog/20140422T142911Z.html">fixed points</a>) to refer to properties of the entire configuration. The NixOS module system merges all configuration aspects of all modules into a single configuration data structure.<br /><br />The NixOS module system is quite powerful. In many ways, it is much more powerful than the process management approach described in this blog post. The NixOS module system allows you to refer, override and adjust any system configuration aspect in any module.<br /><br />For example, a system service, such as the OpenSSH server, can automatically configure the firewall module in such a way that it will open the SSH port (port 22). 
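To illustrate that mechanism, the following fragment is a minimal sketch (not the actual OpenSSH module) of a NixOS module that opens the SSH port whenever the OpenSSH service is enabled, by referring to the firewall module's option through the shared configuration:<br /><br /><pre>{ config, lib, ... }:<br /><br />{<br /> config = lib.mkIf config.services.openssh.enable {<br /> # extend the firewall module's option from within this module<br /> networking.firewall.allowedTCPPorts = [ 22 ];<br /> };<br />}<br /></pre><br />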
With the functional approach described in this blog post, everything has to be made explicit and must be propagated through function arguments. This is probably more memory efficient, but a lot less flexible, and more tedious to write.<br /><br />There are also certain things that NixOS and the NixOS module system cannot do. For example, with NixOS, it is not possible to create multiple instances of system services, which the process management conventions described in this blog post can.<br /><br />NixOS has another drawback -- evaluating system configurations requires all possible NixOS configuration options to be evaluated. <a href="https://nixos.org/nixos/manual/options.html">There are actually quite a few of them</a>.<br /><br />As a result, evaluating a NixOS configuration is quite slow and memory consuming. For single systems, this is typically not a big problem, but for networked NixOS/NixOps configurations, this may be a problem -- for example, I have an old laptop with 4 GiB of RAM that can no longer deploy a test network of three VirtualBox machines using the latest stable NixOS release (19.09), because the Nix evaluator runs out of memory.<br /><br />Furthermore, NixOS system services can only be used when you install NixOS as your system's software distribution. It is currently not possible to install Nix on a conventional Linux distribution and use NixOS' system services (systemd services) independently of the entire operating system.<br /><br />The lack of being able to deploy system services independently is not a limitation of the NixOS module system -- there is also an external project called <a href="https://github.com/LnL7/nix-darwin"><i>nix-darwin</i></a> that uses the NixOS module system to generate launchd services that can be run on top of macOS, an operating system that is not managed by the Nix package manager.<br /><br />The idea to have a separate function header for creating instances of processes is also not entirely new -- a couple of years ago <a href="https://sandervanderburg.blogspot.com/2016/06/deploying-containers-with-disnix-as.html">I revised the internal deployment model of Disnix to support multiple container instances</a>.<br /><br />In a Disnix-context, containers can represent anything that can host multiple service instances, such as a process manager, application container, or database management system. I was already using the convention to have a separate function header that makes it possible to create multiple instances of services. In this blog post, I have extended this formalism specifically for managing processes.<br /><br /><h3>Discussion</h3><br />In this blog post, I have picked sysvinit scripts for process management. The reason why I have picked an old-fashioned solution is not that I consider this to be the best process management facility, or that systemd, the init system that NixOS uses, is a bad solution.<br /><br />My first reason to choose sysvinit scripts is that they are more universally supported than systemd.<br /><br />The second reason is that I want to emphasize the value that a functional organization can provide, independent of the process management solution.<br /><br />Using sysvinit scripts for managing processes has all kinds of drawbacks, and IMO there is a legitimate reason why alternatives exist, such as systemd (but also other solutions).<br /><br />For example, controlling daemonized processes is difficult and fragile -- the convention that daemons should follow is to create PID files, but there is no hard guarantee that daemons will comply and that nothing will go wrong. As a result, a daemonized process may escape control of the process manager. 
systemd, for example, puts all processes that it needs to control in a cgroup and, as a result, they cannot escape systemd's control.<br /><br />Furthermore, you may also want to use the more advanced features of the Linux kernel, such as namespaces and cgroups, to prevent processes from interfering with other processes on the system and with the available system resources that the system provides. Namespaces and cgroups are first-class features in systemd.<br /><br />If you do not like sysvinit scripts: the functional organization described in this blog post is not specifically designed for sysvinit -- it is actually <strong>process manager agnostic</strong>. I have also implemented a function called: <i>createSystemdService</i> that makes it possible to construct systemd services.<br /><br /> The following Nix expression composes a systemd service for the web application process, shown earlier:<br /><br /><pre><br />{stdenv, createSystemdService}:<br />{port, instanceSuffix ? ""}:<br /><br />let<br /> webapp = (import ./webapp {}).package;<br /> instanceName = "webapp${instanceSuffix}";<br />in<br />createSystemdService {<br /> name = instanceName;<br /><br /> environment = {<br /> PORT = port;<br /> };<br /><br /> Unit = {<br /> Description = "Example web application";<br /> Documentation = http://example.com;<br /> };<br /><br /> Service = {<br /> ExecStart = "${webapp}/lib/node_modules/webapp/app.js";<br /> };<br />}<br /></pre><br />I also tried <a href="http://supervisord.org/">supervisord</a> -- we can write the following Nix expression to compose a supervisord program configuration file for the web application process:<br /><br /><pre><br />{stdenv, createSupervisordProgram}:<br />{port, instanceSuffix ? ""}:<br /><br />let<br /> webapp = (import ./webapp {}).package;<br /> instanceName = "webapp${instanceSuffix}";<br />in<br />createSupervisordProgram {<br /> name = instanceName;<br /><br /> command = "${webapp}/lib/node_modules/webapp/app.js";<br /> environment = {<br /> PORT = port;<br /> };<br />}<br /></pre><br />Switching process managers retains our ability to benefit from the facilities that the functional configuration framework provides -- we can use it to manage process dependencies, configure state directories, disable user management and, when we use Disnix, manage inter-dependencies and bind it to services that are not processes.<br /><br />Despite the fact that sysvinit scripts are primitive, there are also a number of advantages that I see over more "modern alternatives", such as systemd:<br /><br /><ul><li>Systemd and supervisord require the presence of a daemon that manages processes (i.e. the systemd and supervisord daemons). sysvinit scripts are <strong>self-contained</strong> from a process management perspective -- the Nix package manager provides the package dependencies that the sysvinit scripts need (e.g. basic shell utilities, sysvinit commands), but other than that, they do not require anything else.</li><li>We can also easily deploy sysvinit scripts to any Linux distribution that has the Nix package manager installed. There are no additional requirements. Systemd services, for example, require the presence of the systemd daemon. 
Furthermore, we also have to interfere with the host system's systemd service that may also be used to manage essential system services.</li><li>We can also easily use sysvinit scripts to deploy processes as an unprivileged user to a machine that has a single-user Nix installation -- the sysvinit script infrastructure does not require any tools or daemons that require super user privileges.</li></ul><br /><h2>Acknowledgements</h2><br />I have borrowed the <i>init-functions</i> script from the LFS Bootscripts package of the <a href="http://linuxfromscratch.org">Linux from Scratch project</a> to get an implementation of the utility functions that the LSB standard describes.<br /><br /><h2>Availability and future work</h2><br />The functionality described in this blog post is still a work in progress and only a first milestone in a bigger objective.<br /><br />The latest implementation of the process management framework can be found in <a href="https://github.com/svanderburg/nix-processmgmt">my experimental Nix process management repository</a>. The <i>sysvinit-script</i> Dysnomia plugin resides in <a href="https://github.com/svanderburg/dysnomia/tree/processmanagement-wip">an experimental branch of the Dysnomia repository</a>.<br /><br />In the next blog post, I will introduce another interesting concept that we can integrate into the functional process management framework.<br /><br /> - Mon, 11 Nov 2019 22:43:00 +0000 - noreply@blogger.com (Sander van der Burg) +<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># zfs create -p -o mountpoint=legacy rpool/local/root +</code></pre></div></div> + +<p>Before I even mount it, I <strong>create a snapshot while it is totally +blank</strong>:</p> + +<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># zfs snapshot rpool/local/root@blank +</code></pre></div></div> + +<p>And then mount it:</p> + +<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># mount -t zfs rpool/local/root /mnt +</code></pre></div></div> + +<p>Then I mount the partition I created for the <code class="highlighter-rouge">/boot</code>:</p> + +<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># mkdir /mnt/boot +# mount /dev/the-boot-partition /mnt/boot +</code></pre></div></div> + +<p>Create and mount a dataset for <code class="highlighter-rouge">/nix</code>:</p> + +<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># zfs create -p -o mountpoint=legacy rpool/local/nix +# mkdir /mnt/nix +# mount -t zfs rpool/local/nix /mnt/nix +</code></pre></div></div> + +<p>And a dataset for <code class="highlighter-rouge">/home</code>:</p> + +<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># zfs create -p -o mountpoint=legacy rpool/safe/home +# mkdir /mnt/home +# mount -t zfs rpool/safe/home /mnt/home +</code></pre></div></div> + +<p>And finally, a dataset explicitly for state I want to persist between +boots:</p> + +<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># zfs create -p -o mountpoint=legacy rpool/safe/persist +# mkdir /mnt/persist +# mount -t zfs rpool/safe/persist /mnt/persist +</code></pre></div></div> + +<blockquote> + <p><em>Note:</em> in my systems, datasets under <code class="highlighter-rouge">rpool/local</code> are never backed +up, and datasets under <code class="highlighter-rouge">rpool/safe</code> are.</p> +</blockquote> + +<p>And now safely 
erasing the root dataset on each boot is very easy: +after devices are made available, roll back to the blank snapshot:</p> + +<div class="language-nix highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span> + <span class="nv">boot</span><span class="o">.</span><span class="nv">initrd</span><span class="o">.</span><span class="nv">postDeviceCommands</span> <span class="o">=</span> <span class="nv">lib</span><span class="o">.</span><span class="nv">mkAfter</span> <span class="s2">''</span><span class="err"> +</span><span class="s2"> zfs rollback -r rpool/local/root@blank</span><span class="err"> +</span><span class="s2"> ''</span><span class="p">;</span> +<span class="p">}</span> +</code></pre></div></div> + +<p>I then finish the installation as normal. If all goes well, your +next boot will start with an empty root partition but otherwise be +configured exactly as you specified.</p> + +<h2 id="opting-in">Opting in</h2> + +<p>Now that I’m keeping no state, it is time to specify what I do want +to keep. My choices here are different based on the role of the +system: a laptop has different state than a server.</p> + +<p>Here are some different pieces of state and how I preserve them. These +examples largely use reconfiguration or symlinks, but using ZFS +datasets and mount points would work too.</p> + +<h4 id="wireguard-private-keys">Wireguard private keys</h4> + +<p>Create a directory under <code class="highlighter-rouge">/persist</code> for the key:</p> + +<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># mkdir -p /persist/etc/wireguard/ +</code></pre></div></div> + +<p>And use Nix’s wireguard module to generate the key there:</p> + +<div class="language-nix highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span> + <span class="nv">networking</span><span class="o">.</span><span class="nv">wireguard</span><span class="o">.</span><span class="nv">interfaces</span><span class="o">.</span><span class="nv">wg0</span> <span class="o">=</span> <span class="p">{</span> + <span class="nv">generatePrivateKeyFile</span> <span class="o">=</span> <span class="kc">true</span><span class="p">;</span> + <span class="nv">privateKeyFile</span> <span class="o">=</span> <span class="s2">"/persist/etc/wireguard/wg0"</span><span class="p">;</span> + <span class="p">};</span> +<span class="p">}</span> +</code></pre></div></div> + +<h4 id="networkmanager-connections">NetworkManager connections</h4> + +<p>Create a directory under <code class="highlighter-rouge">/persist</code>, mirroring the <code class="highlighter-rouge">/etc</code> structure:</p> + +<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># mkdir -p /persist/etc/NetworkManager/system-connections +</code></pre></div></div> + +<p>And use Nix’s <code class="highlighter-rouge">etc</code> module to set up the symlink:</p> + +<div class="language-nix highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span> + <span class="nv">etc</span><span class="o">.</span><span class="s2">"NetworkManager/system-connections"</span> <span class="o">=</span> <span class="p">{</span> + <span class="nv">source</span> <span class="o">=</span> <span class="s2">"/persist/etc/NetworkManager/system-connections/"</span><span class="p">;</span> + <span class="p">};</span> +<span class="p">}</span> +</code></pre></div></div> + +<h4 id="bluetooth-devices">Bluetooth devices</h4> + +<p>Create a directory 
under <code class="highlighter-rouge">/persist</code>, mirroring the <code class="highlighter-rouge">/var</code> structure:</p> + +<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># mkdir -p /persist/var/lib/bluetooth +</code></pre></div></div> + +<p>And then use systemd’s tmpfiles.d rules to create a symlink from +<code class="highlighter-rouge">/var/lib/bluetooth</code> to my persisted directory:</p> + +<div class="language-nix highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span> + <span class="nv">systemd</span><span class="o">.</span><span class="nv">tmpfiles</span><span class="o">.</span><span class="nv">rules</span> <span class="o">=</span> <span class="p">[</span> + <span class="s2">"L /var/lib/bluetooth - - - - /persist/var/lib/bluetooth"</span> + <span class="p">];</span> +<span class="p">}</span> +</code></pre></div></div> + +<h4 id="ssh-host-keys">SSH host keys</h4> + +<p>Create a directory under <code class="highlighter-rouge">/persist</code>, mirroring the <code class="highlighter-rouge">/etc</code> structure:</p> + +<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># mkdir -p /persist/etc/ssh +</code></pre></div></div> + +<p>And use Nix’s openssh module to create and use the keys in that +directory:</p> + +<div class="language-nix highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span> + <span class="nv">services</span><span class="o">.</span><span class="nv">openssh</span> <span class="o">=</span> <span class="p">{</span> + <span class="nv">enable</span> <span class="o">=</span> <span class="kc">true</span><span class="p">;</span> + <span class="nv">hostKeys</span> <span class="o">=</span> <span class="p">[</span> + <span class="p">{</span> + <span class="nv">path</span> <span class="o">=</span> <span class="s2">"/persist/ssh/ssh_host_ed25519_key"</span><span class="p">;</span> + <span class="nv">type</span> <span class="o">=</span> <span class="s2">"ed25519"</span><span class="p">;</span> + <span class="p">}</span> + <span class="p">{</span> + <span class="nv">path</span> <span class="o">=</span> <span class="s2">"/persist/ssh/ssh_host_rsa_key"</span><span class="p">;</span> + <span class="nv">type</span> <span class="o">=</span> <span class="s2">"rsa"</span><span class="p">;</span> + <span class="nv">bits</span> <span class="o">=</span> <span class="mi">4096</span><span class="p">;</span> + <span class="p">}</span> + <span class="p">];</span> + <span class="p">};</span> +<span class="p">}</span> +</code></pre></div></div> + +<h4 id="acme-certificates">ACME certificates</h4> + +<p>Create a directory under <code class="highlighter-rouge">/persist</code>, mirroring the <code class="highlighter-rouge">/var</code> structure:</p> + +<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># mkdir -p /persist/var/lib/acme +</code></pre></div></div> + +<p>And then use systemd’s tmpfiles.d rules to create a symlink from +<code class="highlighter-rouge">/var/lib/acme</code> to my persisted directory:</p> + +<div class="language-nix highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span> + <span class="nv">systemd</span><span class="o">.</span><span class="nv">tmpfiles</span><span class="o">.</span><span class="nv">rules</span> <span class="o">=</span> <span class="p">[</span> + <span class="s2">"L /var/lib/acme - - - - /persist/var/lib/acme"</span> + <span 
class="p">];</span> +<span class="p">}</span> +</code></pre></div></div> + +<h3 id="answering-the-question-what-am-i-about-to-lose">Answering the question “what am I about to lose?”</h3> + +<p>I found this process a bit scary for the first few weeks: was I losing +important data each reboot? No, I wasn’t.</p> + +<p>If you’re worried and want to know what state you’ll lose on the next +boot, you can list the files on your root filesystem and see if you’re +missing something important:</p> + +<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># tree -x / +├── bin +│   └── sh -&gt; /nix/store/97zzcs494vn5k2yw-dash-0.5.10.2/bin/dash +├── boot +├── dev +├── etc +│   ├── asound.conf -&gt; /etc/static/asound.conf +... snip ... +</code></pre></div></div> + +<p>ZFS can give you a similar answer:</p> + +<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># zfs diff rpool/local/root@blank +M / ++ /nix ++ /etc ++ /root ++ /var/lib/is-nix-channel-up-to-date ++ /etc/pki/fwupd ++ /etc/pki/fwupd-metadata +... snip ... +</code></pre></div></div> + +<h2 id="your-stateless-future">Your stateless future</h2> + +<p>You may bump in to new state you meant to be preserving. When I’m +adding new services, I think about the state it is writing and whether +I care about it or not. If I care, I find a way to redirect its state +to <code class="highlighter-rouge">/persist</code>.</p> + +<p>Take care to reboot these machines on a somewhat regular basis. It +will keep things agile, proving your system state is tracked +correctly.</p> + +<p>This technique has given me the “new computer smell” on every boot +without the datacenter full of hardware, and even on systems that do +carry important state. I have deployed this strategy to systems in the +large and small: build farm servers, database servers, my NAS and home +server, my raspberry pi garage door opener, and laptops.</p> + +<p>NixOS enables powerful new deployment models in so many ways, allowing +for systems of all shapes and sizes to be managed properly and +consistently. I think this model of ephemeral roots is yet +another example of this flexibility and power. I would like to see +this partitioning scheme become a reference architecture and take us +out of this eternal tarpit of legacy.</p> + Mon, 13 Apr 2020 00:00:00 +0000 - Hercules Labs: Launching Hercules CI - https://blog.hercules-ci.com/2019/10/22/launching-hercules-ci/ - https://blog.hercules-ci.com/2019/10/22/launching-hercules-ci/ - <p>In March 2018 we set ourselves a <strong>mission to provide seamless infrastructure to teams using Nix -in day-to-day software development</strong>.</p> + Graham Christensen: ZFS Datasets for NixOS + http://grahamc.com//blog/nixos-on-zfs + http://grahamc.com/blog/nixos-on-zfs + <p>The outdated and historical nature of the <a href="https://grahamc.com/feed/fhs">Filesystem Hierarchy +Standard</a> means traditional Linux distributions have to go to great +lengths to separate “user data” from “system data.”</p> + +<p>NixOS’s filesystem architecture does cleanly separate user data from +system data, and has a much easier job to do.</p> + +<h3 id="traditional-linuxes">Traditional Linuxes</h3> + +<p>Because FHS mixes these two concerns across the entire hierarchy, +splitting these concerns requires identifying every point across +dozens of directories where the data is the system’s or the user’s. 
+When adding ZFS to the mix, the installers typically have to create +over a dozen datasets to accomplish this.</p> + +<p>For example, Ubuntu’s upcoming ZFS support creates 16 datasets:</p> + +<pre><code class="language-tree">rpool/ +├── ROOT +│   └── ubuntu_lwmk7c +│   ├── log +│   ├── mail +│   ├── snap +│   ├── spool +│   ├── srv +│   ├── usr +│   │   └── local +│   ├── var +│   │   ├── games +│   │   └── lib +│   │   ├── AccountServices +│   │   ├── apt +│   │   ├── dpkg +│   │   └── NetworkManager +│   └── www +└── USERDATA +</code></pre> + +<p>Going through the great pains of separating this data comes with +significant advantages: a recursive snapshot at any point in the tree +will create an atomic, point-in-time snapshot of every dataset below.</p> + +<p>This means in order to create a consistent snapshot of the system +data, an administrator would only need to take a recursive snapshot +at <code class="highlighter-rouge">ROOT</code>. The same is true for user data: take a recursive snapshot of +<code class="highlighter-rouge">USERDATA</code> and all user data is saved.</p> + +<h3 id="nixos">NixOS</h3> + +<p>Because Nix stores all of its build products in <code class="highlighter-rouge">/nix/store</code>, NixOS +doesn’t mingle these two concerns. NixOS’s runtime system, installed +packages, and rollback targets are all stored in <code class="highlighter-rouge">/nix</code>.</p> + +<p>User data is not.</p> + +<p>This removes the entire complicated tree of datasets to facilitate +FHS, and leaves us with only a few needed datasets.</p> + +<h2 id="datasets">Datasets</h2> + +<p>Design for the atomic, recursive snapshots when laying out the +datasets.</p> -<p>In June 2018 we <a href="https://cachix.org/">released a solution for developers to easily share binary caches</a>, -trusted today by over a thousand developers.</p> +<p>In particular, I don’t back up the <code class="highlighter-rouge">/nix</code> directory. This entire +directory can always be rebuilt later from the system’s +<code class="highlighter-rouge">configuration.nix</code>, and isn’t worth the space.</p> -<p>In Octobter 2018 we <a href="https://www.youtube.com/watch?v=py26iM26Qg4&amp;list=PLgknCdxP89ReJKWX3sthcsbBYsoihzSQX&amp;index=12&amp;t=137s">showed the very first demo of Hercules CI at NixCon 2018</a>.</p> +<p>One way to model this might be splitting up the data into three +top-level datasets:</p> -<p>In March 2019 we added <a href="https://blog.hercules-ci.com/cachix/nix/2019/03/07/announcing-private-cachix/">added support for private binary caches</a>.</p> +<pre><code class="language-tree">tank/ +├── local +│   └── nix +├── system +│   └── root +└── user + └── home +</code></pre> + +<p>In <code class="highlighter-rouge">tank/local</code>, I would store datasets that should almost never be +snapshotted or backed up. <code class="highlighter-rouge">tank/system</code> would store data that I would +want periodic snapshots for. 
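+</p>
+
+<p>A minimal sketch of where this ends up, assuming the dataset names from
+the tree above and the <code class="highlighter-rouge">mountpoint=legacy</code> convention covered under
+Properties below, is a pair of explicit mounts in <code class="highlighter-rouge">hardware-configuration.nix</code>:</p>
+
+<pre><code class="language-nix">{
+  # root lives in the "system" bucket
+  fileSystems."/" = {
+    device = "tank/system/root";
+    fsType = "zfs";
+  };
+
+  # /nix lives in the "local" bucket and is never backed up
+  fileSystems."/nix" = {
+    device = "tank/local/nix";
+    fsType = "zfs";
+  };
+}
+</code></pre>
+
+<p>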
Most importantly, <code class="highlighter-rouge">tank/user</code> would +contain data I want regular snapshots and backups for, with a long +retention policy.</p> + +<p>From here, you could add a ZFS dataset per user:</p> + +<pre><code class="language-tree">tank/ +├── local +│   └── nix +├── system +│   └── root +└── user + └── home +    ├── grahamc +    └── gustav +</code></pre> -<p>Since April 2019 we have been gradually giving out early access to the preview release with over <strong>a hundred participating developers</strong>.</p> +<p>Or a separate dataset for <code class="highlighter-rouge">/var</code>:</p> -<h2 id="today">Today</h2> +<pre><code class="language-tree">tank/ +├── local +│   └── nix +├── system +│   ├── var +│   └── root +└── user +</code></pre> -<p>We are <strong>announcing general availability of continuous integration specialized for Nix projects.</strong></p> +<p>Importantly, this gives you three buckets for independent and +regular snapshots.</p> -<p><a href="https://hercules-ci.com">Check out the landing page to get started</a>.</p> +<p>The important part is having <code class="highlighter-rouge">/nix</code> under its own top-level dataset. +This makes it a “cousin” to the data you <em>do</em> want backup coverage on, +making it easier to take deep, recursive snapshots atomically.</p> -<h2 id="going-forward">Going forward</h2> +<h2 id="properties">Properties</h2> -<p>In the coming months we’re going to work closely with customers to polish the experience and continue to save developer’s time.</p> +<ul> + <li>Enable compression with <code class="highlighter-rouge">compression=on</code>. Specifying <code class="highlighter-rouge">on</code> instead of +<code class="highlighter-rouge">lz4</code> or another specific algorithm will always pick the best +available compression algorithm.</li> + <li>The dataset containing journald’s logs (where <code class="highlighter-rouge">/var</code> lives) should +have <code class="highlighter-rouge">xattr=sa</code> and <code class="highlighter-rouge">acltype=posixacl</code> set to allow regular users to +read their journal.</li> + <li>Nix doesn’t use <code class="highlighter-rouge">atime</code>, so <code class="highlighter-rouge">atime=off</code> on the <code class="highlighter-rouge">/nix</code> dataset is +fine.</li> + <li>NixOS requires (as of 2020-04-11) <code class="highlighter-rouge">mountpoint=legacy</code> for all +datasets. NixOS does not yet have tooling to require implicitly +created ZFS mounts to settle before booting, and <code class="highlighter-rouge">mountpoint=legacy</code> +plus explicit mount points in <code class="highlighter-rouge">hardware-configuration.nix</code> will +ensure all your datasets are mounted at the right time.</li> +</ul> + +<p>I don’t know how to pick <code class="highlighter-rouge">ashift</code>, and usually just allow ZFS to guess +on my behalf.</p> + +<h2 id="partitioning">Partitioning</h2> + +<p>I only create two partitions:</p> + +<ol> + <li><code class="highlighter-rouge">/boot</code> formatted <code class="highlighter-rouge">vfat</code> for EFI, or <code class="highlighter-rouge">ext4</code> for BIOS</li> + <li>The ZFS dataset partition.</li> +</ol> -<p>For <strong>support</strong> (with getting started and other questions), -contact me at <a href="mailto:domen@hercules-ci.com">domen@hercules-ci.com</a> so we can set you up -and make sure you get the most out of our CI.</p> +<p>There are spooky articles saying only give ZFS entire disks. 
The +truth is, you shouldn’t split a disk into two active partitions. +Splitting the disk this way is just fine, since <code class="highlighter-rouge">/boot</code> is rarely +read or written.</p> -<p>Subscribe to <a href="https://twitter.com/hercules_ci">@hercules_ci</a> for updates.</p> +<blockquote> + <p><em>Note:</em> If you do partition the disk, make sure you set the disk’s +scheduler to <code class="highlighter-rouge">none</code>. ZFS takes this step automatically if it does +control the entire disk.</p> -<hr /> + <p>On NixOS, you an set your scheduler to <code class="highlighter-rouge">none</code> via:</p> -<h2 id="what-we-do">What we do</h2> + <div class="language-nix highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span> <span class="nv">boot</span><span class="o">.</span><span class="nv">kernelParams</span> <span class="o">=</span> <span class="p">[</span> <span class="s2">"elevator=none"</span> <span class="p">];</span> <span class="p">}</span> +</code></pre></div> </div> +</blockquote> -<p>Automated hosted infrastructure for Nix, reliable and reproducible developer tooling, -to speed up adoption and lower integration cost. We offer -<a href="https://hercules-ci.com">Continuous Integration</a> and <a href="https://cachix.org">Binary Caches</a>.</p> - Tue, 22 Oct 2019 00:00:00 +0000 +<h1 id="clean-isolation">Clean isolation</h1> + +<p>NixOS’s clean separation of concerns reduces the amount of complexity +we need to track when considering and planning our datasets. This +gives us flexibility later, and enables some superpowers like erasing +my computer on every boot, which I’ll write about on Monday.</p> + Sat, 11 Apr 2020 00:00:00 +0000 - Matthew Bauer: Improved performance in Nixpkgs - https://matthewbauer.us/blog/avoid-subshells.html - https://matthewbauer.us/blog/avoid-subshells.html - <div class="outline-2" id="outline-container-org9610d97"> -<h2 id="org9610d97"><span class="section-number-2">1</span> Avoiding subshells</h2> -<div class="outline-text-2" id="text-1"> + nixbuild.net: New nixbuild.net Resources + https://blog.nixbuild.net/posts/2020-03-27-nixbuild-net-beta.html + https://blog.nixbuild.net/posts/2020-03-27-nixbuild-net-beta.html + <p>On the support side of the nixbuild.net service, two new resources have been published:</p> +<ul> +<li><p><a href="https://docs.nixbuild.net">docs.nixbuild.net</a>, collecting all available documentation for nixbuild.net users.</p></li> +<li><p>The <a href="https://github.com/nixbuild/feedback">nixbuild.net feedback</a> repository on GitHub, providing a way to report issues or ask questions related to the service.</p></li> +</ul> +<p>These resources are mainly useful for nixbuild.net beta users, but they are open to anyone. And anyone is of course welcome to request a free beta account for evaluating nixbuild.net, by just <a href="mailto:rickard@nixbuild.net">sending me an email</a>.</p> + Fri, 27 Mar 2020 00:00:00 +0000 + support@nixbuild.net (nixbuild.net) + + + Matthew Bauer: Announcing Nixiosk + https://matthewbauer.us/blog/nixiosk.html + https://matthewbauer.us/blog/nixiosk.html + <p> +Today I’m announcing a project I’ve been working on for the last few +weeks. I’m calling it Nixiosk which is kind of a smashing together of +the words NixOS and Kiosk. The idea is to have an easy way to make +locked down, declarative systems +</p> + <p> -A common complain in using Nixpkgs is that things can become slow when -you have lots of dependencies. 
Processing of build inputs is processed -in Bash which tends to be pretty hard to make performant. Bash doesn’t -give us any way to loop through dependencies in parallel, so we end up -with pretty slow Bash. Luckily, someone has found some ways to speed -this up with some clever tricks in the <code>setup.sh</code> script. +My main application of this is my two Raspberry Pi systems that I own. +Quite a few people have installed NixOS on these systems, but usually +they are starting from some prebuilt image. A major goal of this +project is to make it easy to build these images yourself. For this to +work, I’ve had to make lots of changes to NixOS cross-compilation +ecosystem, but the results seem to be very positive. I also want the +system to be locked down so that no user can login directly on the +machine. Instead, all administration is done on a remote machine, and +deployed through SSH and Nix remote builders. </p> -</div> -<div class="outline-3" id="outline-container-orgd7f8feb"> -<h3 id="orgd7f8feb"><span class="section-number-3">1.1</span> Pull request</h3> -<div class="outline-text-3" id="text-1-1"> <p> -Albert Safin (<a href="https://github.com/xzfc">@xzfc</a> on GitHub) made an excellent PR that promises to -improve performance for all users of Nixpkgs. The PR is available at -<a href="https://github.com/NixOS/nixpkgs/pull/69131">PR #69131</a>. The basic idea is to avoid invoking “subshells” in Bash. A -subshell is basically anything that uses <code>$(cmd ...)</code>. Each subshell -requires forking a new process which has a constant time cost that -ends up being ~2ms. This isn’t much in isolation, but adds up in big -loops. +Right now, I have RetroArch (a frontend for a bunch of emulators) on +my Raspberry Pi 4, and Epiphany (a web browser) on my Raspberry Pi 0. +Both systems seem to be working pretty well. </p> <p> -Subshells are usually used in Bash because they are convenient and -easy to reason about. It’s easy to understand how a subshell works as -it’s just substituting the result of one command into another’s -arguments. We don’t usually care about the performance cost of -subshells. In the hot path of Nixpkgs’ <code>setup.sh</code>, however, it’s -pretty important to squeeze every bit of performance we can. +GitHub: <a href="https://github.com/matthewbauer/nixiosk">https://github.com/matthewbauer/nixiosk</a> </p> +<div class="outline-2" id="outline-container-org11baea3"> +<h2 id="org11baea3"><span class="section-number-2">1</span> Deploying</h2> +<div class="outline-text-2" id="text-1"> +</div> +<div class="outline-3" id="outline-container-org3936587"> +<h3 id="org3936587"><span class="section-number-3">1.1</span> Install Nix</h3> +<div class="outline-text-3" id="text-1-1"> <p> -A few interesting changes were required to make this work. I’ll go -through and document what there are. More information can be found at -<a href="https://www.gnu.org/software/bash/manual/bash.html">the Bash manual</a>. +If you haven’t already, you need to install Nix. 
This can be done +through the installer: </p> <div class="org-src-container"> -<pre class="src src-diff"><span class="org-diff-context">diff --git a/pkgs/stdenv/generic/setup.sh b/pkgs/stdenv/generic/setup.sh</span> -<span class="org-diff-context">index 326a60676a26..60067a4051de 100644</span> -<span class="org-diff-header">--- </span><span class="org-diff-header"><span class="org-diff-file-header">a/pkgs/stdenv/generic/setup.sh</span></span> -<span class="org-diff-header">+++ </span><span class="org-diff-header"><span class="org-diff-file-header">b/pkgs/stdenv/generic/setup.sh</span></span> -<span class="org-diff-hunk-header">@@ -98,7 +98,7 @@</span><span class="org-diff-function"> _callImplicitHook() {</span> -<span class="org-diff-context"> # hooks exits the hook, not the caller. Also will only pass args if</span> -<span class="org-diff-context"> # command can take them</span> -<span class="org-diff-context"> _eval() {</span> -<span class="org-diff-indicator-removed">-</span><span class="org-diff-removed"> if [ "$(type -t "$1")" = function ]; then</span> -<span class="org-diff-indicator-added">+</span><span class="org-diff-added"> if declare -F "$1" &gt; /dev/null 2&gt;&amp;1; then</span> -<span class="org-diff-context"> set +u</span> -<span class="org-diff-context"> "$@" # including args</span> -<span class="org-diff-context"> else</span> - +<pre class="src src-sh">$ bash &lt;(curl -L https://nixos.org/nix/install) </pre> </div> +</div> +</div> +<div class="outline-3" id="outline-container-org9c45d30"> +<h3 id="org9c45d30"><span class="section-number-3">1.2</span> Cache</h3> +<div class="outline-text-3" id="text-1-2"> <p> -The first change is pretty easy to understand. It just replaces the -<code>type</code> call with a <code>declare</code> call, utilizing an exit code in place of -stdout. Unfortunately, <code>declare</code> is <a href="https://www.gnu.org/software/bash/manual/bash.html#index-declare">a Bashism</a> which is not available -in all POSIX shells. It’s been ill defined whether Bashisms can be -used in Nixpkgs, but we now will require Nixpkgs to be sourced with -Bash 4+. +To speed things up, you should setup a binary cache for nixiosk. This +can be done easily through <a href="https://nixiosk.cachix.org/">Cachix</a>. 
First, install Cachix: </p> <div class="org-src-container"> -<pre class="src src-diff"><span class="org-diff-context">diff --git a/pkgs/stdenv/generic/setup.sh b/pkgs/stdenv/generic/setup.sh</span> -<span class="org-diff-context">index 60067a4051de..7e7f8739845b 100644</span> -<span class="org-diff-header">--- </span><span class="org-diff-header"><span class="org-diff-file-header">a/pkgs/stdenv/generic/setup.sh</span></span> -<span class="org-diff-header">+++ </span><span class="org-diff-header"><span class="org-diff-file-header">b/pkgs/stdenv/generic/setup.sh</span></span> -<span class="org-diff-hunk-header">@@ -403,6 +403,7 @@</span><span class="org-diff-function"> findInputs() {</span> -<span class="org-diff-context"> # The current package's host and target offset together</span> -<span class="org-diff-context"> # provide a &lt;=-preserving homomorphism from the relative</span> -<span class="org-diff-context"> # offsets to current offset</span> -<span class="org-diff-indicator-added">+</span><span class="org-diff-added"> local -i mapOffsetResult</span> -<span class="org-diff-context"> function mapOffset() {</span> -<span class="org-diff-context"> local -ri inputOffset="$1"</span> -<span class="org-diff-context"> if (( "$inputOffset" &lt;= 0 )); then</span> -<span class="org-diff-hunk-header">@@ -410,7 +411,7 @@</span><span class="org-diff-function"> findInputs() {</span> -<span class="org-diff-context"> else</span> -<span class="org-diff-context"> local -ri outputOffset="$inputOffset - 1 + $targetOffset"</span> -<span class="org-diff-context"> fi</span> -<span class="org-diff-indicator-removed">-</span><span class="org-diff-removed"> echo "$outputOffset"</span> -<span class="org-diff-indicator-added">+</span><span class="org-diff-added"> mapOffsetResult="$outputOffset"</span> -<span class="org-diff-context"> }</span> - -<span class="org-diff-context"> # Host offset relative to that of the package whose immediate</span> -<span class="org-diff-hunk-header">@@ -422,8 +423,8 @@</span><span class="org-diff-function"> findInputs() {</span> - -<span class="org-diff-context"> # Host offset relative to the package currently being</span> -<span class="org-diff-context"> # built---as absolute an offset as will be used.</span> -<span class="org-diff-indicator-removed">-</span><span class="org-diff-removed"> local -i hostOffsetNext</span> -<span class="org-diff-indicator-removed">-</span><span class="org-diff-removed"> hostOffsetNext="$(mapOffset relHostOffset)"</span> -<span class="org-diff-indicator-added">+</span><span class="org-diff-added"> mapOffset relHostOffset</span> -<span class="org-diff-indicator-added">+</span><span class="org-diff-added"> local -i hostOffsetNext="$mapOffsetResult"</span> - -<span class="org-diff-context"> # Ensure we're in bounds relative to the package currently</span> -<span class="org-diff-context"> # being built.</span> -<span class="org-diff-hunk-header">@@ -441,8 +442,8 @@</span><span class="org-diff-function"> findInputs() {</span> - -<span class="org-diff-context"> # Target offset relative to the package currently being</span> -<span class="org-diff-context"> # built.</span> -<span class="org-diff-indicator-removed">-</span><span class="org-diff-removed"> local -i targetOffsetNext</span> -<span class="org-diff-indicator-removed">-</span><span class="org-diff-removed"> targetOffsetNext="$(mapOffset relTargetOffset)"</span> -<span class="org-diff-indicator-added">+</span><span class="org-diff-added"> mapOffset relTargetOffset</span> -<span 
class="org-diff-indicator-added">+</span><span class="org-diff-added"> local -i targetOffsetNext="$mapOffsetResult"</span> - -<span class="org-diff-context"> # Once again, ensure we're in bounds relative to the</span> -<span class="org-diff-context"> # package currently being built.</span> - +<pre class="src src-sh">$ nix-env -iA cachix -f https://cachix.org/api/v1/install </pre> </div> <p> -Similarly, this change makes <code>mapOffset</code> set to it’s result to -<code>mapOffsetResult</code> instead of printing it to stdout, avoiding the -subshell. Less functional, but more performant! +Then, use the nixiosk cache: </p> <div class="org-src-container"> -<pre class="src src-diff"><span class="org-diff-context">diff --git a/pkgs/stdenv/generic/setup.sh b/pkgs/stdenv/generic/setup.sh</span> -<span class="org-diff-context">index 7e7f8739845b..e25ea735a93c 100644</span> -<span class="org-diff-header">--- </span><span class="org-diff-header"><span class="org-diff-file-header">a/pkgs/stdenv/generic/setup.sh</span></span> -<span class="org-diff-header">+++ </span><span class="org-diff-header"><span class="org-diff-file-header">b/pkgs/stdenv/generic/setup.sh</span></span> -<span class="org-diff-hunk-header">@@ -73,21 +73,18 @@</span><span class="org-diff-function"> _callImplicitHook() {</span> -<span class="org-diff-context"> set -u</span> -<span class="org-diff-context"> local def="$1"</span> -<span class="org-diff-context"> local hookName="$2"</span> -<span class="org-diff-indicator-removed">-</span><span class="org-diff-removed"> case "$(type -t "$hookName")" in</span> -<span class="org-diff-indicator-removed">-</span><span class="org-diff-removed"> (function|alias|builtin)</span> -<span class="org-diff-indicator-removed">-</span><span class="org-diff-removed"> set +u</span> -<span class="org-diff-indicator-removed">-</span><span class="org-diff-removed"> "$hookName";;</span> -<span class="org-diff-indicator-removed">-</span><span class="org-diff-removed"> (file)</span> -<span class="org-diff-indicator-removed">-</span><span class="org-diff-removed"> set +u</span> -<span class="org-diff-indicator-removed">-</span><span class="org-diff-removed"> source "$hookName";;</span> -<span class="org-diff-indicator-removed">-</span><span class="org-diff-removed"> (keyword) :;;</span> -<span class="org-diff-indicator-removed">-</span><span class="org-diff-removed"> (*) if [ -z "${!hookName:-}" ]; then</span> -<span class="org-diff-indicator-removed">-</span><span class="org-diff-removed"> return "$def";</span> -<span class="org-diff-indicator-removed">-</span><span class="org-diff-removed"> else</span> -<span class="org-diff-indicator-removed">-</span><span class="org-diff-removed"> set +u</span> -<span class="org-diff-indicator-removed">-</span><span class="org-diff-removed"> eval "${!hookName}"</span> -<span class="org-diff-indicator-removed">-</span><span class="org-diff-removed"> fi;;</span> -<span class="org-diff-indicator-removed">-</span><span class="org-diff-removed"> esac</span> -<span class="org-diff-indicator-added">+</span><span class="org-diff-added"> if declare -F "$hookName" &gt; /dev/null; then</span> -<span class="org-diff-indicator-added">+</span><span class="org-diff-added"> set +u</span> -<span class="org-diff-indicator-added">+</span><span class="org-diff-added"> "$hookName"</span> -<span class="org-diff-indicator-added">+</span><span class="org-diff-added"> elif type -p "$hookName" &gt; /dev/null; then</span> -<span class="org-diff-indicator-added">+</span><span 
class="org-diff-added"> set +u</span> -<span class="org-diff-indicator-added">+</span><span class="org-diff-added"> source "$hookName"</span> -<span class="org-diff-indicator-added">+</span><span class="org-diff-added"> elif [ -n "${!hookName:-}" ]; then</span> -<span class="org-diff-indicator-added">+</span><span class="org-diff-added"> set +u</span> -<span class="org-diff-indicator-added">+</span><span class="org-diff-added"> eval "${!hookName}"</span> -<span class="org-diff-indicator-added">+</span><span class="org-diff-added"> else</span> -<span class="org-diff-indicator-added">+</span><span class="org-diff-added"> return "$def"</span> -<span class="org-diff-indicator-added">+</span><span class="org-diff-added"> fi</span> -<span class="org-diff-context"> # `_eval` expects hook to need nounset disable and leave it</span> -<span class="org-diff-context"> # disabled anyways, so Ok to to delegate. The alternative of a</span> -<span class="org-diff-context"> # return trap is no good because it would affect nested returns.</span> +<pre class="src src-sh">$ cachix use nixiosk +</pre> +</div> +</div> +</div> + +<div class="outline-3" id="outline-container-org16dc38e"> +<h3 id="org16dc38e"><span class="section-number-3">1.3</span> Configuration</h3> +<div class="outline-text-3" id="text-1-3"> +<p> +To make things simple, it just reads from an ad-hoc JSON file that +describe the hardware plus some other customizations. It looks like +this: +</p> +<div class="org-src-container"> +<pre class="src src-json">{ + <span class="org-keyword">"hostName"</span>: <span class="org-string">"nixiosk"</span>, + <span class="org-keyword">"hardware"</span>: <span class="org-string">"raspberryPi4"</span>, + <span class="org-keyword">"authorizedKeys"</span>: [], + <span class="org-keyword">"program"</span>: { + <span class="org-keyword">"package"</span>: <span class="org-string">"epiphany"</span>, + <span class="org-keyword">"executable"</span>: <span class="org-string">"/bin/epiphany"</span>, + <span class="org-keyword">"args"</span>: [<span class="org-string">"https://en.wikipedia.org/"</span>] + }, + <span class="org-keyword">"networks"</span>: { + <span class="org-keyword">"my-router"</span>: <span class="org-string">"0000000000000000000000000000000000000000000000000000000000000000"</span>, + }, + <span class="org-keyword">"locale"</span>: { + <span class="org-keyword">"timeZone"</span>: <span class="org-string">"America/New_York"</span>, + <span class="org-keyword">"regDom"</span>: <span class="org-string">"US"</span>, + <span class="org-keyword">"lang"</span>: <span class="org-string">"en_US.UTF-8"</span> + }, + <span class="org-keyword">"localSystem"</span>: { + <span class="org-keyword">"system"</span>: <span class="org-string">"x86_64-linux"</span>, + <span class="org-keyword">"sshUser"</span>: <span class="org-string">"me"</span>, + <span class="org-keyword">"hostName"</span>: <span class="org-string">"my-laptop-host"</span>, + } +} </pre> </div> <p> -This change replaces the <code>type -t</code> command with calls to specific Bash -builtins. <code>declare -F</code> tells us if the hook is a function, <code>type -p</code> -tells us if <code>hookName</code> is a file, and otherwise <code>-n</code> tells us if the -hook is non-empty. Again, this introduces a Bashism. +Here’s a basic idea of what each of these fields do: </p> +<ul class="org-ul"> +<li>hostName: Name of the host to use. 
If mDNS is configured on your +network, this can be used to identify the IP address of the device +via “&lt;hostName&gt;.local”.</li> +<li>hardware: A string describing what hardware we are using. Valid +values currently are “raspberryPi0”, “raspberryPi1”, “raspberryPi2”, +“raspberryPi3”, “raspberryPi4”.</li> +<li>authorizedKeys: A list of SSH public keys that are authorized to +make changes to your device. Note this is required because no +passwords will be set for this system.</li> +<li>program: What to do in the kiosk. This should be a Nixpkgs attribute +(<b>package</b>), an <b>executable</b> in that package, and a list of <b>args</b>.</li> +<li>networks: This is a name/value pairing of SSIDs to PSK passphrases. +This can be found with the wpa_passphrase(8) command from +wpa_supplicant.</li> +<li>locale: This provides some information of what localizations to use. +You can set <a href="https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2">regulation domain</a>, <a href="https://www.gnu.org/software/libc/manual/html_node/Locale-Names.html#Locale-Names">language</a>, <a href="https://en.wikipedia.org/wiki/List_of_tz_database_time_zones">time zone</a> via “regDom”, +“lang”, and “timeZone”. If unspecified, defaults to US / English / +New York.</li> +<li>localSystem: Information on system to use for <a href="https://github.com/matthewbauer/nixiosk#remote-builder-optional">remote builder</a>. +Optional.</li> +</ul> +</div> +</div> + +<div class="outline-3" id="outline-container-orgddeb048"> +<h3 id="orgddeb048"><span class="section-number-3">1.4</span> Initial deployment</h3> +<div class="outline-text-3" id="text-1-4"> <p> -In the worst case, this does replace one <code>case</code> with multiple <code>if</code> -branches. But since most hooks are functions, most of the time this -ends up being a single <code>if</code>. +The deployment is pretty easy provided you have <a href="https://nixos.org/nix/">Nix installed</a>. 
Here +are some steps: </p> <div class="org-src-container"> -<pre class="src src-diff"><span class="org-diff-context">diff --git a/pkgs/stdenv/generic/setup.sh b/pkgs/stdenv/generic/setup.sh</span> -<span class="org-diff-context">index e25ea735a93c..ea550a6d534b 100644</span> -<span class="org-diff-header">--- </span><span class="org-diff-header"><span class="org-diff-file-header">a/pkgs/stdenv/generic/setup.sh</span></span> -<span class="org-diff-header">+++ </span><span class="org-diff-header"><span class="org-diff-file-header">b/pkgs/stdenv/generic/setup.sh</span></span> -<span class="org-diff-hunk-header">@@ -449,7 +449,8 @@</span><span class="org-diff-function"> findInputs() {</span> -<span class="org-diff-context"> [[ -f "$pkg/nix-support/$file" ]] || continue</span> - -<span class="org-diff-context"> local pkgNext</span> -<span class="org-diff-indicator-removed">-</span><span class="org-diff-removed"> for pkgNext in $(&lt; "$pkg/nix-support/$file"); do</span> -<span class="org-diff-indicator-added">+</span><span class="org-diff-added"> read -r -d '' pkgNext &lt; "$pkg/nix-support/$file" || true</span> -<span class="org-diff-indicator-added">+</span><span class="org-diff-added"> for pkgNext in $pkgNext; do</span> -<span class="org-diff-context"> findInputs "$pkgNext" "$hostOffsetNext" "$targetOffsetNext"</span> -<span class="org-diff-context"> done</span> -<span class="org-diff-context"> done</span> - +<pre class="src src-sh">$ git clone https://github.com/matthewbauer/nixiosk.git +$ cd nixiosk/ +$ cp nixiosk.json.sample nixiosk.json </pre> </div> <p> -This change replaces the <code>$(&lt; )</code> call with a <code>read</code> call. This is a -little surprising since <code>read</code> is using an empty delimiter <code>''</code> -instead of a new line. This replaces one Bashsism <code>$(&lt; )</code> with another -in <code>-d</code>. And, the result, gets rid of a remaining subshell usage. +Now you need to make some changes to nixiosk.json to reflect what you +want your system to do. The important ones are ‘authorizedKeys’ and +‘networks’ so that your systems can startup and you can connect to it. 
+</p> + +<p> +If you have an SSH key setup, you can get its value with: </p> <div class="org-src-container"> -<pre class="src src-diff"><span class="org-diff-context">diff --git a/pkgs/build-support/bintools-wrapper/setup-hook.sh b/pkgs/build-support/bintools-wrapper/setup-hook.sh</span> -<span class="org-diff-context">index f65b792485a0..27d3e6ad5120 100644</span> -<span class="org-diff-header">--- </span><span class="org-diff-header"><span class="org-diff-file-header">a/pkgs/build-support/bintools-wrapper/setup-hook.sh</span></span> -<span class="org-diff-header">+++ </span><span class="org-diff-header"><span class="org-diff-file-header">b/pkgs/build-support/bintools-wrapper/setup-hook.sh</span></span> -<span class="org-diff-hunk-header">@@ -61,9 +61,8 @@</span><span class="org-diff-function"> do</span> -<span class="org-diff-context"> if</span> -<span class="org-diff-context"> PATH=$_PATH type -p "@targetPrefix@${cmd}" &gt; /dev/null</span> -<span class="org-diff-context"> then</span> -<span class="org-diff-indicator-removed">-</span><span class="org-diff-removed"> upper_case="$(echo "$cmd" | tr "[:lower:]" "[:upper:]")"</span> -<span class="org-diff-indicator-removed">-</span><span class="org-diff-removed"> export "${role_pre}${upper_case}=@targetPrefix@${cmd}";</span> -<span class="org-diff-indicator-removed">-</span><span class="org-diff-removed"> export "${upper_case}${role_post}=@targetPrefix@${cmd}";</span> -<span class="org-diff-indicator-added">+</span><span class="org-diff-added"> export "${role_pre}${cmd^^}=@targetPrefix@${cmd}";</span> -<span class="org-diff-indicator-added">+</span><span class="org-diff-added"> export "${cmd^^}${role_post}=@targetPrefix@${cmd}";</span> -<span class="org-diff-context"> fi</span> -<span class="org-diff-context"> done</span> +<pre class="src src-sh">$ cat $<span class="org-variable-name">HOME</span>/.ssh/id_rsa.pub +<span class="org-whitespace-line">ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC050iPG8ckY/dj2O3ol20G2lTdr7ERFz4LD3R4yqoT5W0THjNFdCqavvduCIAtF1Xx/OmTISblnGKf10rYLNzDdyMMFy7tUSiC7/T37EW0s+EFGhS9yOcjCVvHYwgnGZCF4ec33toE8Htq2UKBVgtE0PMwPAyCGYhFxFLYN8J8/xnMNGqNE6iTGbK5qb4yg3rwyrKMXLNGVNsPVcMfdyk3xqUilDp4U7HHQpqX0wKrUvrBZ87LnO9z3X/QIRVQhS5GqnIjRYe4L9yxZtTjW5HdwIq1jcvZc/1Uu7bkMh3gkCwbrpmudSGpdUlyEreaHOJf3XH4psr6IMGVJvxnGiV9 mbauer@dellbook</span> +</pre> +</div> + +<p> +which will give you a line for “authorizedKeys” like: +</p> +<div class="org-src-container"> +<pre class="src src-json"><span class="org-keyword"><span class="org-whitespace-line">"authorizedKeys"</span></span><span class="org-whitespace-line">: [</span><span class="org-string"><span class="org-whitespace-line">"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC050iPG8ckY/dj2O3ol20G2lTdr7ERFz4LD3R4yqoT5W0THjNFdCqavvduCIAtF1Xx/OmTISblnGKf10rYLNzDdyMMFy7tUSiC7/T37EW0s+EFGhS9yOcjCVvHYwgnGZCF4ec33toE8Htq2UKBVgtE0PMwPAyCGYhFxFLYN8J8/xnMNGqNE6iTGbK5qb4yg3rwyrKMXLNGVNsPVcMfdyk3xqUilDp4U7HHQpqX0wKrUvrBZ87LnO9z3X/QIRVQhS5GqnIjRYe4L9yxZtTjW5HdwIq1jcvZc/1Uu7bkMh3gkCwbrpmudSGpdUlyEreaHOJf3XH4psr6IMGVJvxnGiV9 mbauer@dellbook"</span></span><span class="org-whitespace-line">],</span> </pre> </div> <p> -This replace a call to <code>tr</code> with a usage of the <code>^^</code>. -<code>${parameter^^pattern}</code> is <a href="https://www.gnu.org/software/bash/manual/bash.html#Shell-Parameter-Expansion">a Bash 4 feature</a> and allows you to -upper-case a string without calling out to <code>tr</code>. 
+and you can get a PSK value for your WiFi network with: </p> <div class="org-src-container"> -<pre class="src src-diff"><span class="org-diff-context">diff --git a/pkgs/build-support/bintools-wrapper/setup-hook.sh b/pkgs/build-support/bintools-wrapper/setup-hook.sh</span> -<span class="org-diff-context">index 27d3e6ad5120..2e15fa95c794 100644</span> -<span class="org-diff-header">--- </span><span class="org-diff-header"><span class="org-diff-file-header">a/pkgs/build-support/bintools-wrapper/setup-hook.sh</span></span> -<span class="org-diff-header">+++ </span><span class="org-diff-header"><span class="org-diff-file-header">b/pkgs/build-support/bintools-wrapper/setup-hook.sh</span></span> -<span class="org-diff-hunk-header">@@ -24,7 +24,8 @@</span><span class="org-diff-function"> bintoolsWrapper_addLDVars () {</span> -<span class="org-diff-context"> # Python and Haskell packages often only have directories like $out/lib/ghc-8.4.3/ or</span> -<span class="org-diff-context"> # $out/lib/python3.6/, so having them in LDFLAGS just makes the linker search unnecessary</span> -<span class="org-diff-context"> # directories and bloats the size of the environment variable space.</span> -<span class="org-diff-indicator-removed">-</span><span class="org-diff-removed"> if [[ -n "$(echo $1/lib/lib*)" ]]; then</span> -<span class="org-diff-indicator-added">+</span><span class="org-diff-added"> local -a glob=( $1/lib/lib* )</span> -<span class="org-diff-indicator-added">+</span><span class="org-diff-added"> if [ "${#glob[*]}" -gt 0 ]; then</span> -<span class="org-diff-context"> export NIX_${role_pre}LDFLAGS+=" -L$1/lib"</span> -<span class="org-diff-context"> fi</span> - fi +<pre class="src src-sh">$ nix run nixpkgs.wpa_supplicant -c wpa_passphrase my-network +<span class="org-variable-name">network</span>={ + <span class="org-variable-name">ssid</span>=<span class="org-string">"my-network"</span> + <span class="org-comment-delimiter">#</span><span class="org-comment">psk="abcdefgh"</span> + <span class="org-variable-name">psk</span>=17e76a6490ac112dbeba996caa7cd1387c6ebf6ce721ef704f92b681bb2e9000 +} </pre> </div> <p> -Here, we are checking for whether any files exist in <code>/lib/lib*</code> using -a glob. It originally used a subshell to check if the result was -empty, but this change replaces it with the Bash <code>${#parameter}</code> -<a href="https://www.gnu.org/software/bash/manual/bash.html#Shell-Parameter-Expansion">length operation</a>. +so your .json file looks like: </p> <div class="org-src-container"> -<pre class="src src-diff"><span class="org-diff-context">diff --git a/pkgs/stdenv/generic/setup.sh b/pkgs/stdenv/generic/setup.sh</span> -<span class="org-diff-context">index 311292169ecd..326a60676a26 100644</span> -<span class="org-diff-header">--- </span><span class="org-diff-header"><span class="org-diff-file-header">a/pkgs/stdenv/generic/setup.sh</span></span> -<span class="org-diff-header">+++ </span><span class="org-diff-header"><span class="org-diff-file-header">b/pkgs/stdenv/generic/setup.sh</span></span> -<span class="org-diff-hunk-header">@@ -17,7 +17,8 @@</span><span class="org-diff-function"> fi</span> -<span class="org-diff-context"> # code). 
The hooks for &lt;hookName&gt; are the shell function or variable</span> -<span class="org-diff-context"> # &lt;hookName&gt;, and the values of the shell array ‘&lt;hookName&gt;Hooks’.</span> -<span class="org-diff-context"> runHook() {</span> -<span class="org-diff-indicator-removed">-</span><span class="org-diff-removed"> local oldOpts="$(shopt -po nounset)"</span> -<span class="org-diff-indicator-added">+</span><span class="org-diff-added"> local oldOpts="-u"</span> -<span class="org-diff-indicator-added">+</span><span class="org-diff-added"> shopt -qo nounset || oldOpts="+u"</span> -<span class="org-diff-context"> set -u # May be called from elsewhere, so do `set -u`.</span> - -<span class="org-diff-context"> local hookName="$1"</span> -<span class="org-diff-hunk-header">@@ -32,7 +33,7 @@</span><span class="org-diff-function"> runHook() {</span> -<span class="org-diff-context"> set -u # To balance `_eval`</span> -<span class="org-diff-context"> done</span> - -<span class="org-diff-indicator-removed">-</span><span class="org-diff-removed"> eval "${oldOpts}"</span> -<span class="org-diff-indicator-added">+</span><span class="org-diff-added"> set "$oldOpts"</span> -<span class="org-diff-context"> return 0</span> -<span class="org-diff-context"> }</span> - -<span class="org-diff-hunk-header">@@ -40,7 +41,8 @@</span><span class="org-diff-function"> runHook() {</span> -<span class="org-diff-context"> # Run all hooks with the specified name, until one succeeds (returns a</span> -<span class="org-diff-context"> # zero exit code). If none succeed, return a non-zero exit code.</span> -<span class="org-diff-context"> runOneHook() {</span> -<span class="org-diff-indicator-removed">-</span><span class="org-diff-removed"> local oldOpts="$(shopt -po nounset)"</span> -<span class="org-diff-indicator-added">+</span><span class="org-diff-added"> local oldOpts="-u"</span> -<span class="org-diff-indicator-added">+</span><span class="org-diff-added"> shopt -qo nounset || oldOpts="+u"</span> -<span class="org-diff-context"> set -u # May be called from elsewhere, so do `set -u`.</span> - -<span class="org-diff-context"> local hookName="$1"</span> -<span class="org-diff-hunk-header">@@ -57,7 +59,7 @@</span><span class="org-diff-function"> runOneHook() {</span> -<span class="org-diff-context"> set -u # To balance `_eval`</span> -<span class="org-diff-context"> done</span> - -<span class="org-diff-indicator-removed">-</span><span class="org-diff-removed"> eval "${oldOpts}"</span> -<span class="org-diff-indicator-added">+</span><span class="org-diff-added"> set "$oldOpts"</span> -<span class="org-diff-context"> return "$ret"</span> -<span class="org-diff-context"> }</span> - -<span class="org-diff-hunk-header">@@ -500,10 +502,11 @@</span><span class="org-diff-function"> activatePackage() {</span> -<span class="org-diff-context"> (( "$hostOffset" &lt;= "$targetOffset" )) || exit -1</span> - -<span class="org-diff-context"> if [ -f "$pkg" ]; then</span> -<span class="org-diff-indicator-removed">-</span><span class="org-diff-removed"> local oldOpts="$(shopt -po nounset)"</span> -<span class="org-diff-indicator-added">+</span><span class="org-diff-added"> local oldOpts="-u"</span> -<span class="org-diff-indicator-added">+</span><span class="org-diff-added"> shopt -qo nounset || oldOpts="+u"</span> -<span class="org-diff-context"> set +u</span> -<span class="org-diff-context"> source "$pkg"</span> -<span class="org-diff-indicator-removed">-</span><span class="org-diff-removed"> eval "$oldOpts"</span> -<span 
class="org-diff-indicator-added">+</span><span class="org-diff-added"> set "$oldOpts"</span> -<span class="org-diff-context"> fi</span> - -<span class="org-diff-context"> # Only dependencies whose host platform is guaranteed to match the</span> -<span class="org-diff-hunk-header">@@ -522,10 +525,11 @@</span><span class="org-diff-function"> activatePackage() {</span> -<span class="org-diff-context"> fi</span> - -<span class="org-diff-context"> if [[ -f "$pkg/nix-support/setup-hook" ]]; then</span> -<span class="org-diff-indicator-removed">-</span><span class="org-diff-removed"> local oldOpts="$(shopt -po nounset)"</span> -<span class="org-diff-indicator-added">+</span><span class="org-diff-added"> local oldOpts="-u"</span> -<span class="org-diff-indicator-added">+</span><span class="org-diff-added"> shopt -qo nounset || oldOpts="+u"</span> -<span class="org-diff-context"> set +u</span> -<span class="org-diff-context"> source "$pkg/nix-support/setup-hook"</span> -<span class="org-diff-indicator-removed">-</span><span class="org-diff-removed"> eval "$oldOpts"</span> -<span class="org-diff-indicator-added">+</span><span class="org-diff-added"> set "$oldOpts"</span> -<span class="org-diff-context"> fi</span> -<span class="org-diff-context"> }</span> - -<span class="org-diff-hunk-header">@@ -1273,17 +1277,19 @@</span><span class="org-diff-function"> showPhaseHeader() {</span> - -<span class="org-diff-context"> genericBuild() {</span> -<span class="org-diff-context"> if [ -f "${buildCommandPath:-}" ]; then</span> -<span class="org-diff-indicator-removed">-</span><span class="org-diff-removed"> local oldOpts="$(shopt -po nounset)"</span> -<span class="org-diff-indicator-added">+</span><span class="org-diff-added"> local oldOpts="-u"</span> -<span class="org-diff-indicator-added">+</span><span class="org-diff-added"> shopt -qo nounset || oldOpts="+u"</span> -<span class="org-diff-context"> set +u</span> -<span class="org-diff-context"> source "$buildCommandPath"</span> -<span class="org-diff-indicator-removed">-</span><span class="org-diff-removed"> eval "$oldOpts"</span> -<span class="org-diff-indicator-added">+</span><span class="org-diff-added"> set "$oldOpts"</span> -<span class="org-diff-context"> return</span> -<span class="org-diff-context"> fi</span> -<span class="org-diff-context"> if [ -n "${buildCommand:-}" ]; then</span> -<span class="org-diff-indicator-removed">-</span><span class="org-diff-removed"> local oldOpts="$(shopt -po nounset)"</span> -<span class="org-diff-indicator-added">+</span><span class="org-diff-added"> local oldOpts="-u"</span> -<span class="org-diff-indicator-added">+</span><span class="org-diff-added"> shopt -qo nounset || oldOpts="+u"</span> -<span class="org-diff-context"> set +u</span> -<span class="org-diff-context"> eval "$buildCommand"</span> -<span class="org-diff-indicator-removed">-</span><span class="org-diff-removed"> eval "$oldOpts"</span> -<span class="org-diff-indicator-added">+</span><span class="org-diff-added"> set "$oldOpts"</span> -<span class="org-diff-context"> return</span> -<span class="org-diff-context"> fi</span> - -<span class="org-diff-hunk-header">@@ -1313,10 +1319,11 @@</span><span class="org-diff-function"> genericBuild() {</span> - -<span class="org-diff-context"> # Evaluate the variable named $curPhase if it exists, otherwise the</span> -<span class="org-diff-context"> # function named $curPhase.</span> -<span class="org-diff-indicator-removed">-</span><span class="org-diff-removed"> local oldOpts="$(shopt -po nounset)"</span> -<span 
class="org-diff-indicator-added">+</span><span class="org-diff-added"> local oldOpts="-u"</span> -<span class="org-diff-indicator-added">+</span><span class="org-diff-added"> shopt -qo nounset || oldOpts="+u"</span> -<span class="org-diff-context"> set +u</span> -<span class="org-diff-context"> eval "${!curPhase:-$curPhase}"</span> -<span class="org-diff-indicator-removed">-</span><span class="org-diff-removed"> eval "$oldOpts"</span> -<span class="org-diff-indicator-added">+</span><span class="org-diff-added"> set "$oldOpts"</span> - -<span class="org-diff-context"> if [ "$curPhase" = unpackPhase ]; then</span> -<span class="org-diff-context"> cd "${sourceRoot:-.}"</span> +<pre class="src src-json"><span class="org-keyword">"networks"</span>: { + <span class="org-keyword">"my-network"</span>: <span class="org-string">"17e76a6490ac112dbeba996caa7cd1387c6ebf6ce721ef704f92b681bb2e9000"</span>, +}, +</pre> +</div> +<p> +Now, after inserting your Raspberry Pi SD card into the primary slot, +you can deploy to it with: +</p> + +<div class="org-src-container"> +<pre class="src src-sh">$ ./deploy.sh /dev/mmcblk0 </pre> </div> <p> -This last change is maybe the trickiest. <code>$(shopt -po nounset)</code> is -used to get <a href="https://www.gnu.org/software/bash/manual/bash.html#The-Shopt-Builtin">the old value</a> of <code>nounset</code>. The <code>nounset</code> setting tells -Bash to treat <a href="https://www.gnu.org/software/bash/manual/bash.html#The-Set-Builtin">unset variables as an error</a>. This is used temporarily -for phases and hooks to enforce this property. It will be reset to its -previous value after we finish evaling the current phase or hook. To -avoid the subshell here, the stdout provided in <code>shopt -po</code> is -replaced with an exit code provided in <code>shopt -qo nounset</code>. If the -<code>shopt -qo nounset</code> fails, we set <code>oldOpts</code> to <code>+u</code>, otherwise it is -assumed that it is <code>-u</code>. +You can now eject your SD card and insert it into your Raspberry Pi. +It will boot immediately to an Epiphany browser, loading +en.wikipedia.org. </p> <p> -This commit was first merged in on September 20, but it takes a while -for it to hit master. Today, it was finally merged into master -(October 13) in <a href="https://github.com/NixOS/nixpkgs/commits/4e6826a">4e6826a</a> so we can finally can see the benefits from -it! +<a href="https://github.com/matthewbauer/nixiosk#troubleshooting">Troubleshooting steps</a> can be found in the README. </p> </div> </div> -<div class="outline-3" id="outline-container-org2621377"> -<h3 id="org2621377"><span class="section-number-3">1.2</span> Benchmarking</h3> -<div class="outline-text-3" id="text-1-2"> +<div class="outline-3" id="outline-container-orgefabaf8"> +<h3 id="orgefabaf8"><span class="section-number-3">1.5</span> Redeployments</h3> +<div class="outline-text-3" id="text-1-5"> <p> -Hyperfine makes it easy to compare differences in timings. You can -install it locally with: +You can pretty easily make changes to a running system given you have +SSH access. This is as easy as cloning the running config: </p> <div class="org-src-container"> -<pre class="src src-shell">$ nix-env -iA nixpkgs.hyperfine +<pre class="src src-sh">$ git clone ssh://root@nixiosk.local/etc/nixos/configuration.git nixiosk-configuration +$ cd nixiosk-configuration </pre> </div> <p> -Here are some of the results: +Then, make some changes in your repo. After your done, you can just +run ‘git push’ to redeploy. 
</p> <div class="org-src-container"> -<pre class="src src-shell">$ hyperfine --warmup 3 <span class="org-sh-escaped-newline">\</span> - <span class="org-string">'nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/33366cc.tar.gz -p stdenv --run :'</span> <span class="org-sh-escaped-newline">\</span> - <span class="org-string">'nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/4e6826a.tar.gz -p stdenv --run :'</span> -Benchmark <span class="org-comment-delimiter">#</span><span class="org-comment">1: nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/33366cc.tar.gz -p stdenv --run :</span> - Time (mean ± σ): 436.4 ms ± 8.5 ms [User: 324.7 ms, System: 107.8 ms] - Range (min … max): 430.8 ms … 459.6 ms 10 runs - -Benchmark <span class="org-comment-delimiter">#</span><span class="org-comment">2: nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/4e6826a.tar.gz -p stdenv --run :</span> - Time (mean ± σ): 244.5 ms ± 2.3 ms [User: 190.7 ms, System: 34.2 ms] - Range (min … max): 241.8 ms … 248.3 ms 12 runs - -Summary - <span class="org-string">'nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/4e6826a.tar.gz -p stdenv --run :'</span> ran -<span class="org-whitespace-line"> 1.79 ± 0.04 times faster than </span><span class="org-string"><span class="org-whitespace-line">'nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/33366cc.tar.gz -p stdenv --run :'</span></span> +<pre class="src src-sh">$ git add . +$ git commit +$ git push </pre> </div> -<div class="org-src-container"> -<pre class="src src-shell">$ hyperfine --warmup 3 <span class="org-sh-escaped-newline">\</span> - <span class="org-string">'nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/33366cc.tar.gz -p i3.buildInputs --run :'</span> <span class="org-sh-escaped-newline">\</span> - <span class="org-string">'nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/4e6826a.tar.gz -p i3.buildInputs --run :'</span> -Benchmark <span class="org-comment-delimiter">#</span><span class="org-comment">1: nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/33366cc.tar.gz -p i3.buildInputs --run :</span> - Time (mean ± σ): 3.428 s ± 0.015 s [User: 2.489 s, System: 1.081 s] - Range (min … max): 3.404 s … 3.453 s 10 runs - -Benchmark <span class="org-comment-delimiter">#</span><span class="org-comment">2: nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/4e6826a.tar.gz -p i3.buildInputs --run :</span> - Time (mean ± σ): 873.4 ms ± 12.2 ms [User: 714.7 ms, System: 89.3 ms] - Range (min … max): 861.5 ms … 906.4 ms 10 runs - -Summary - <span class="org-string">'nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/4e6826a.tar.gz -p i3.buildInputs --run :'</span> ran -<span class="org-whitespace-line"> 3.92 ± 0.06 times faster than </span><span class="org-string"><span class="org-whitespace-line">'nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/33366cc.tar.gz -p i3.buildInputs --run :'</span></span> -</pre> -</div> +<p> +You’ll see the NixOS switch-to-configuration log in your command +output. If all is successful, the system should immediately reflect +your changes. If not, the output of Git should explain what went +wrong. 
+</p> -<div class="org-src-container"> -<pre class="src src-shell">$ hyperfine --warmup 3 <span class="org-sh-escaped-newline">\</span> - <span class="org-string">'nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/33366cc.tar.gz -p inkscape.buildInputs --run :'</span> <span class="org-sh-escaped-newline">\</span> - <span class="org-string">'nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/4e6826a.tar.gz -p inkscape.buildInputs --run :'</span> -<span class="org-whitespace-line">Benchmark </span><span class="org-comment-delimiter"><span class="org-whitespace-line">#</span></span><span class="org-comment"><span class="org-whitespace-line">1: nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/33366cc.tar.gz -p inkscape.buildInputs --run :</span></span> - Time (mean ± σ): 4.380 s ± 0.024 s [User: 3.155 s, System: 1.443 s] - Range (min … max): 4.339 s … 4.409 s 10 runs - -<span class="org-whitespace-line">Benchmark </span><span class="org-comment-delimiter"><span class="org-whitespace-line">#</span></span><span class="org-comment"><span class="org-whitespace-line">2: nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/4e6826a.tar.gz -p inkscape.buildInputs --run :</span></span> - Time (mean ± σ): 1.007 s ± 0.011 s [User: 826.7 ms, System: 114.2 ms] - Range (min … max): 0.995 s … 1.026 s 10 runs - -Summary - <span class="org-string">'nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/4e6826a.tar.gz -p inkscape.buildInputs --run :'</span> ran -<span class="org-whitespace-line"> 4.35 ± 0.05 times faster than </span><span class="org-string"><span class="org-whitespace-line">'nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/33366cc.tar.gz -p inkscape.buildInputs --run :'</span></span> -</pre> +<p> +Note, that some versions of the Raspberry Pi like the 0 and the 1 are +not big enough to redeploy the whole system. You will probably need to +setup remote builders. This is <a href="https://github.com/matthewbauer/nixiosk#remote-builder-optional">described in the README</a>. +</p> +</div> </div> +</div> + +<div class="outline-2" id="outline-container-org6df267f"> +<h2 id="org6df267f"><span class="section-number-2">2</span> Technology</h2> +<div class="outline-text-2" id="text-2"> +<p> +Here are some of the pieces that make the Kiosk system possible: +</p> + +<ul class="org-ul"> +<li><a href="https://www.hjdskes.nl/projects/cage/">Cage</a> / <a href="https://wayland.freedesktop.org/">Wayland</a>: Cage is a Wayland compositor that allows only one +application to display at a time. This makes the system a true +Kiosk.</li> +<li><a href="https://nixos.org/">NixOS</a> - A Linux distro built on top of functional package management.</li> +<li><a href="https://gitlab.com/obsidian.systems/basalt/">Basalt</a>: A tool to manage NixOS directly from Git. This allows doing +push-to-deploy directly to NixOS.</li> +<li><a href="https://www.freedesktop.org/wiki/Software/Plymouth/">Plymouth</a>: Nice graphical boot animations. Right now, it uses the +NixOS logo but in the future this should be configurable so that you +can include your own branding.</li> +<li><a href="https://www.openssh.com/">OpenSSH</a>: Since no direct login is available, SSH is required for +remote administration.</li> +<li><a href="http://www.avahi.org/">Avahi</a>: Configures mDNS registration for the system, allowing you to +remember host names instead of IP addresses.</li> +</ul> <p> -Try running these commands yourself, and compare the results. 
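+As a rough sketch (my own illustration of the relevant NixOS options, not
+necessarily how nixiosk wires them together), the SSH and mDNS pieces come
+down to something like this:
+</p>
+
+<div class="org-src-container">
+<pre class="src src-nix">{
+  # No local login; administration happens over SSH only.
+  services.openssh.enable = true;
+
+  # mDNS registration, so the device can be reached by its host name
+  # on the local network instead of by IP address.
+  services.avahi = {
+    enable = true;
+    nssmdns = true;
+    publish.enable = true;
+    publish.addresses = true;
+  };
+}
+</pre>
+</div>
+
+<p>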
+I would also like to include some more tools to make administration +easier: </p> + +<ul class="org-ul"> +<li>ddclient / miniupnp: Allow registering external IP address with a +DNS provider. This would enable administration outside of the +device’s immediate network.</li> +</ul> </div> </div> -<div class="outline-3" id="outline-container-orgec0cdcf"> -<h3 id="orgec0cdcf"><span class="section-number-3">1.3</span> Results</h3> -<div class="outline-text-3" id="text-1-3"> +<div class="outline-2" id="outline-container-org6777f70"> +<h2 id="org6777f70"><span class="section-number-2">3</span> Project</h2> +<div class="outline-text-2" id="text-3"> <p> -Avoiding subshells leads to a decrease in up to 4x of the time it used -to take. That multiplier is going to depend on precisely how many -inputs we are processing. It’s a pretty impressive improvement, and it -comes with no added cost. These kind of easy wins in performance are -pretty rare, and worth celebrating! +You can try it out right now if you have an Raspberry Pi system. Other +hardware is probably not too hard, but may require tweaking. The +project page is available at <a href="https://github.com/matthewbauer/nixiosk">https://github.com/matthewbauer/nixiosk</a> +and issues and pull requests are welcomed. </p> </div> -</div> </div> - Sun, 13 Oct 2019 00:00:00 +0000 + Mon, 23 Mar 2020 00:00:00 +0000 - Hercules Labs: Agent 0.5.0 with Terraform support and simpler configuration - https://blog.hercules-ci.com/2019/10/07/agent-0.5.0-more-terraform-less-configuration/ - https://blog.hercules-ci.com/2019/10/07/agent-0.5.0-more-terraform-less-configuration/ - <p>Last week, we’ve released <a href="https://github.com/hercules-ci/hercules-ci-agent/releases/tag/hercules-ci-agent-0.5.0">agent version 0.5.0</a>. The main theme for the release is ease of installation. Running an agent should be as simple as possible, so we made:</p> - -<ul> - <li>simplifications to the binary cache configuration</li> - <li><a href="https://github.com/hercules-ci/terraform-hercules-ci#readme">terraform modules and an example</a></li> -</ul> - -<p>Follow <a href="https://docs.hercules-ci.com/hercules-ci/getting-started/">getting started guide</a> to set up your first agent.</p> - -<p>If you have and you’re using the module (NixOS, NixOps, nix-darwin) the update is entirely self-explanatory. Otherwise, check <a href="https://github.com/hercules-ci/hercules-ci-agent/releases/tag/hercules-ci-agent-0.5.0">the notes</a>.</p> - -<h3 id="trusted-user">Trusted-user</h3> - -<p>The agent now relies on being a <code class="highlighter-rouge">trusted-user</code> to the Nix daemon. The agent does not allow projects to execute arbitrary Nix store operations anyway. It may improve security since it simplifies configuration and secrets handling.</p> - -<p>The security model for the agent is simple at this point: only build git refs from your repository. 
This way, third-party contributors can not run arbitrary code on your agent system; only contributors with write access to the repo can.</p> - -<p>Talking about trust, we’ll <a href="https://github.com/hercules-ci/docs.hercules-ci.com/issues/67">share some details</a> about securely doing CI for Open Source with Bors soon!</p> - Mon, 07 Oct 2019 00:00:00 +0000 + Cachix: Proposal for improving Nix error messages + https://blog.cachix.org/post/2020-03-18-proposal-for-improving-nix-error-messages/ + https://blog.cachix.org/post/2020-03-18-proposal-for-improving-nix-error-messages/ + I’m lucky to be in touch with a lot of people that use Nix day to day. +One of the most occouring annoyances that pops up more frequently with those starting with Nix are confusing error messages. +Since Nix community has previously succesfully stepped up and funded removal of Perl to reduce barriers for source code contributions, I think we ought to do the same for removing barriers when using Nix. + Wed, 18 Mar 2020 08:00:00 +0000 + support@cachix.org (Domen Kožar) - Craige McWhirter: Installing LineageOS 16 on a Samsung SM-T710 (gts28wifi) - http://mcwhirter.com.au//craige/blog/2019/Installing_LineageOS_16_on_Samsung_T710/ - http://mcwhirter.com.au//craige/blog/2019/Installing_LineageOS_16_on_Samsung_T710/ - <ol> -<li>Check the prerequisites</li> -<li>Backup any files you want to keep</li> -<li>Download LineageOS ROM and optional GAPPS package</li> -<li>Copy LineageOS image &amp; additional packages to the SM-T710</li> -<li>Boot into recovery mode</li> -<li>Wipe the existing installation.</li> -<li>Format the device</li> -<li>Install LineageOS ROM and other optional ROMs.</li> -</ol> - + Flying Circus: Our new NixOS 19.03 Platform Is Ready for Production + http://blog.flyingcircus.io/?p=5295 + https://blog.flyingcircus.io/2020/02/28/our-new-nixos-19-03-platform-is-ready-for-production/ + <p>We have developed our third-generation platform which is now based on NixOS 19.03. All provided components have been ported to the new platform and VMs are already running in production.</p> -<p><strong>0 - Check the Prerequisites</strong></p> -<ul> -<li>The device already has the <a href="https://source.mcwhirter.io/craige/hardware-notes/src/branch/master/samsung/SM-T710.rst">latest TWRP -installed</a>.</li> -<li>Android debugging is enabled on the device</li> -<li><a href="https://developer.android.com/studio/command-line/adb">ADB</a> is installed on -your workstation.</li> -<li>You have a suitably configured SD card as a back up handy.</li> -</ul> - - -<p>I use this <a href="https://source.mcwhirter.io/craige/nixos-examples/src/branch/master/development/mobile/android.nix">android.nix</a> -to ensure my <a href="https://nixos.org/">NixOS</a> environment has the prerequisites -install and configured for it's side of the process.</p> - -<p><strong>1 - Backup any Files You Want to Keep</strong></p> - -<p>I like to use <code>adb</code> to pull the files from the device. There are also other -methods available too.</p> - -<pre><code>$ adb pull /sdcard/MyFolder ./Downloads/MyDevice/ -</code></pre> -<p>Usage of <code>adb</code> is documented at <a href="https://developer.android.com/studio/command-line/adb">Android Debug Bridge</a></p> +<p>Most of our development work is done for the new platform and new features will be available only for it. We pull in security updates from upstream regularly and will follow new NixOS releases more quickly in the future. 
The old NixOS 15.09 platform still receives critical security and bug fixes.</p> -<p><strong>2 - Download LineageOS ROM and optional GAPPS package</strong></p> -<p>I downloaded -<a href="https://doc-14-2s-docs.googleusercontent.com/docs/securesc/ha0ro937gcuc7l7deffksulhg5h7mbp1/95vm843q4kbkiarmoi4894gi5u1n00nc/1570060800000/00470383411279991671/*/1l5Jn6O-mb8OfmfQqXZqKz4UD3O2Qq5-e?e=download">lineage-16.0-20191001-UNOFFICIAL-gts28wifi.zip</a> from -<a href="https://drive.google.com/drive/folders/16vvKfv_wJa7eNizak8Wklm0ow38B5gLu">gts28wifi</a>.</p> -<p>I also downloaded <a href="https://opengapps.org/">Open GApps</a> ARM, nano to enable -Google Apps.</p> +<p>Effective March 6, VMs created via customer self-service will use the 19.03 platform.</p> -<p>I could have also downloaded and installed LineageOS -<a>addonsu</a> and -<a>addonsu-remove</a> -but opted not to at this point.</p> -<p><strong>3 - Copy LineageOS image &amp; additional packages to the SM-T710</strong></p> -<p>I use <code>adb</code> to copy the files files across:</p> +<p>You can find the documentation for the new platform here:</p> -<pre><code>$ adb push ./lineage-16.0-20191001-UNOFFICIAL-gts28wifi.zip /sdcard/ -./lineage-16.0-20191001-UNOFFICIAL-gts28wifi.zip: 1 file pushed. 12.1 MB/s (408677035 bytes in 32.263s) -$ adb push ./open_gapps-arm-9.0-nano-20190405.zip /sdcard/ -./open_gapps-arm-9.0-nano-20190405.zip: 1 file pushed. 11.1 MB/s (185790181 bytes in 15.948s) -</code></pre> -<p>I also copy both to the SD card at this point as the SM-T710 is an awful device -to work with and in many random cases will not work with ADB. When this -happens, I fall back to the SD card.</p> -<p><strong>4 - Boot into recovery mode</strong></p> +<p><a href="https://flyingcircus.io/doc/guide/platform_nixos_2/index.html">https://flyingcircus.io/doc/guide/platform_nixos_2/index.html</a></p> -<p>I power the device off, then power it back into recovery mode by holding down -<code>[home]</code>+<code>[volume up]</code>+<code>[power]</code>.</p> -<p><strong>5 - Wipe the existing installation</strong></p> -<p>Press <strong>Wipe</strong> then <strong>Advanced Wipe</strong>.</p> +<p>We recommend user profiles (done with buildEnv) in case your application needs specific packages in its environment:</p> -<p>Select:</p> -<ul> -<li>Dalvik / Art Cache</li> -<li>System</li> -<li>Data</li> -<li>Cache</li> -</ul> +<p><a href="https://flyingcircus.io/doc/guide/platform_nixos_2/user_profile.html">https://flyingcircus.io/doc/guide/platform_nixos_2/user_profile.html</a></p> -<p>Swipe <strong>Swipe to Wipe</strong> at the bottom of the screen.</p> -<p>Press <strong>Back</strong> to return to the <strong>Advanced Wipe</strong> screen.</p> -<p>Press the triangular "back" button once to return to the <strong>Wipe</strong> screen.</p> +<h2>Upgrading 15.09 Machines</h2> -<p><strong>6 - Format the device</strong></p> -<p>Press <strong>Format Data</strong>.</p> -<p>Type <strong>yes</strong> and press blue check mark at the bottom-right corner to commence -the format process.</p> +<p>Upgrading existing VMs online is supported and we have already done that for a number of VMs.<br />Sometimes however, it can be better to create new NixOS VMs in parallel and set up your applications there.</p> -<p>Press <strong>Back</strong> to return to the <strong>Advanced Wipe</strong> screen.</p> -<p>Press the triangular "back" button twice to return to the main screen.</p> -<p><strong>7 - Install LineageOS ROM and other optional ROMs</strong></p> +<p>Most managed components will just 
work after the upgrade. We are working on instructions for specific things that should be done before or after the upgrade.</p> -<p>Press <strong>Install</strong>, select the images you wish to install and swipe make it go.</p> -<p>Reboot when it's completed and you should be off and running wtth a brand new -LineageOS 16 on this tablet.</p> - Thu, 03 Oct 2019 23:04:41 +0000 - - - Hercules Labs: Post-mortem on recent Cachix downtime - https://blog.hercules-ci.com/2019/09/30/recent-cachix-downtime/ - https://blog.hercules-ci.com/2019/09/30/recent-cachix-downtime/ - <p>On 6th of September, <a href="https://cachix.org">Cachix</a> experienced 3 hours of downtime.</p> -<p>We’d like to let you know exactly what happened and what measures we have taken to prevent such an event from happening in the future.</p> +<p>If you’re a customer with a support contract in the “Guided” or “Managed” service classes<br />then we’ll approach you directly and discuss when and how to upgrade VMs in the coming months.</p> -<h2 id="timeline-utc">Timeline (UTC)</h2> -<ul> - <li>2019-09-06 17:15:05: cachix.org down alert triggered</li> - <li>2019-09-06 20:06:00: Domen gets out of <a href="https://munihac.de/2019.html">MuniHac</a> dinner in the basement and receives the alert</li> - <li>2019-09-06 20:19:00: Domen restarts server process</li> - <li>2019-09-06 20:19:38: cachix.org is back up</li> -</ul> -<h2 id="observations">Observations</h2> +<p>If you’re a customer in the “Hosted” service class then we recommend contacting our support team to discuss the upgrade.</p> -<p>The backend logs were full of:</p> -<pre><code class="language-log">Sep 06 17:02:34 cachix-production.cachix cachix-server[6488]: Network.Socket.recvBuf: resource vanished (Connection reset by peer) -</code></pre> -<p>And:</p> +<h2>If you have questions …</h2> -<pre><code class="language-log">(ConnectionFailure Network.BSD.getProtocolByName: does not exist (no such protocol name: tcp))) -</code></pre> -<p>Most importantly, there were no logs after downtime was triggered and until the restart:</p> -<pre><code class="language-log">Sep 06 17:15:48 cachix-production.cachix cachix-server[6488]: Network.Socket.recvBuf: resource vanished (Connection reset by peer) -Sep 06 20:19:26 cachix-production.cachix systemd[1]: Stopping cachix server service... -</code></pre> +<p>As always: if you have any questions or comments then let us know and send us an email to <a href="mailto:support@flyingcircus.io">support@flyingcircus.io</a>.</p> + Fri, 28 Feb 2020 14:16:49 +0000 + + + nixbuild.net: Introducing nixbuild.net + https://blog.nixbuild.net/posts/2020-02-18-introducing-nixbuild-net.html + https://blog.nixbuild.net/posts/2020-02-18-introducing-nixbuild-net.html + <p>Exactly one month ago, I <a href="https://discourse.nixos.org/t/announcing-nixbuild-net-nix-build-as-a-service">announced</a> the <a href="https://nixbuild.net">nixbuild.net</a> service. Since then, there have been lots of work on functionality, performance and stability of the service. As of today, nixbuild.net is exiting alpha and entering private beta phase. If you want to try it out, just <a href="mailto:rickard@nixbuild.net">send me an email</a>.</p> +<p>Today, I’m also launching the <a href="https://blog.nixbuild.net">nixbuild.net blog</a>, which is intended as an outlet for anything related to the nixbuild.net service. Announcements, demos, technical articles and various tips and tricks. 
We’ll start out with a proper introduction of nixbuild.net; why it was built, what it can help you with and what the long-term goals are.</p> -<p>Our monitoring revealed an increased number of nginx connections and file handles (the time are in CEST - UTC+2):</p> +<h2 id="why-nixbuild.net">Why nixbuild.net?</h2> +<p><a href="https://nixos.org/nix/">Nix</a> has great built-in support for <a href="https://nixos.org/nix/manual/#chap-distributed-builds">distributing builds</a> to remote machines. You just need to setup a standard Nix enviroment on your build machines, and make sure they are accessible via SSH. Just like that, you can offload your heavy builds to a couple of beefy build servers, saving your poor laptop’s fan from spinning up.</p> +<p>However, just when you’ve tasted those sweet distributed builds you very likely run into the issue of <em>scaling</em>.</p> +<p>What if you need a really big server to run your builds, but only really need it once or twice per day? You’ll be wasting a lot of money keeping that build server available.</p> +<p>And what if you occasionally have lots and lots of builds to run, or if your whole development team wants to share the build servers? Then you probably need to add more build servers, which means more wasted money when they are not used.</p> +<p>So, you start looking into auto-scaling your build servers. This is quite easy to do if you use some cloud provider like AWS, Azure or GCP. But, this is where Nix will stop cooperating with you. It is really tricky to get Nix to work nicely together with an auto-scaled set of remote build machines. Nix has only a very coarse view of the “current load” of a build machine and can therefore not make very informed decisions on exactly how to distribute the builds. If there are multiple Nix instances (one for each developer in your team) fighting for the same resources, things get even trickier. It is really easy to end up in a situation where a bunch of really heavy builds are fighting for CPU time on the same build server while the other servers are idle or running lightweight build jobs.</p> +<p>If you use <a href="https://nixos.org/hydra/">Hydra</a>, the continous build system for Nix, you can find scripts for using auto-scaled AWS instances, but it is still tricky to set it up. And in the end, it doesn’t work perfectly since Nix/Hydra has no notion of “consumable” CPU/memory resources so the build scheduling is somewhat hit-and-miss.</p> +<p>Even if you manage to come up with a solution that can handle your workload in an acceptable manner, you now have a new job: <em>maintaining</em> uniquely configured build servers. Possibly for your whole company.</p> +<p>Through my consulting company, <a href="https://immutablesolutions.com/">Immutable Solutions</a>, I’ve done a lot of work on Nix-based deployments, and I’ve always struggled with half-baked solutions to the Nix build farm problem. This is how the idea of the nixbuild.net service was born — a service that can fill in the missing pieces of the Nix distributed build puzzle and package it as a simple, no-maintenance, cost-effective service.</p> +<h2 id="who-are-we">Who are We?</h2> +<p>nixbuild.net is developed and operated by me (Rickard Nilsson) and my colleague David Waern. We both have extensive experience in building Nix-based solutions, for ourselves and for various clients.</p> +<p>We’re bootstrapping nixbuild.net, and we are long-term committed to keep developing and operating the service. 
Today, nixbuild.net can be productively used for its main purpose — running Nix builds in a scalable and cost-effective way — but there are lots of things that can (and will) be built on top of and around that core. Read more about this below.</p> +<h2 id="what-does-nixbuild.net-look-like">What does nixbuild.net Look Like?</h2> +<p>To the end-user, a person or team using Nix for building software, nixbuild.net behaves just like any other <a href="https://nixos.org/nix/manual/#chap-distributed-builds">remote build machine</a>. As such, you can add it as an entry in your <code>/etc/nix/machines</code> file:</p> +<pre><code>beta.nixbuild.net x86_64-linux - 100 1 big-parallel,benchmark</code></pre> +<p>The <code>big-parallel,benchmark</code> assignment is something that is called <em>system features</em> in Nix. You can use that as a primitive scheduling strategy if you have multiple remote machines. Nix will only submit builds that have been marked as requiring a specific system feature to machines that are assigned that feature.</p> +<p>The number 100 in the file above tells Nix that it is allowed to submit up to 100 simultaneous builds to <code>beta.nixbuild.net</code>. Usually, you use this property to balance builds between remote machines, and to make sure that a machine doesn’t run too many builds at the same time. This works OK when you have rather homogeneous builds, and only one single Nix client is using a set of build servers. If multiple Nix clients use the same set of build servers, this simplistic scheduling breaks down, since a given Nix client loses track on how many builds are really running on a server.</p> +<p>However, when you’re using nixbuild.net, you can set this number to anything really, since nixbuild.net will take care of the scheduling and scaling on its own, and it will not let multiple Nix clients step on each other’s toes. In fact each build that nixbuild.net runs is securely isolated from other builds and by default gets exclusive access to the resources (CPU and memory) it has been assigned.</p> +<p>Apart from setting up the distributed Nix machines, you need to configure SSH. When you register an account on nixbuild.net, you’ll provide us with a public SSH key. The corresponding private key is used for connecting to nixbuild.net. This private key needs to be readable by the user that runs the Nix build. This is usually the <code>root</code> user, if you have a standard Nix setup where the <code>nix-daemon</code> process runs as the root user.</p> +<p>That’s all there is to it, now we can run builds using nixbuild.net!</p> +<p>Let’s try building the following silly build, just so we can see some action:</p> +<pre><code>let pkgs = import &lt;nixpkgs&gt; { system = "x86_64-linux"; }; -<p><img alt="File handles and nginx connections" src="https://blog.hercules-ci.com/images/cachix-downtime-monitoring.png" /></p> +in pkgs.runCommand "silly" {} '' + n=0 + while (($n &lt; 12)); do + date | tee -a $out + sleep 10 + n=$(($n + 1)) + done +''</code></pre> +<p>This build will run for 2 minutes and output the current date every ten seconds:</p> +<pre><code>$ nix-build silly.nix +these derivations will be built: + /nix/store/cy14fc13d3nzl65qp0sywvbjnnl48jf8-silly.drv +building '/nix/store/khvphdj3q7nyim46jk97fjp174damrik-silly.drv' on 'ssh://beta.nixbuild.net'... 
+Mon Feb 17 20:53:47 UTC 2020 +Mon Feb 17 20:53:57 UTC 2020 +Mon Feb 17 20:54:07 UTC 2020</code></pre> +<p>You can see that Nix is telling us that the build is running on nixbuild.net!</p> +<h3 id="the-nixbuild.net-shell">The nixbuild.net Shell</h3> +<p>nixbuild.net supports a simple shell interface that you can access through SSH. This shell allows you to retrieve information about your builds on the service.</p> +<p>For example, we can list the currently running builds:</p> +<pre><code>$ ssh beta.nixbuild.net shell +nixbuild.net&gt; list builds --running +10524 2020-02-17 21:05:20Z [40.95s] [Running] + /nix/store/khvphdj3q7nyim46jk97fjp174damrik-silly.drv</code></pre> +<p>We can also get information about any derivation or nix store path that has been built:</p> +<pre><code>nixbuild.net&gt; show drv /nix/store/khvphdj3q7nyim46jk97fjp174damrik-silly.drv +Derivation + path = /nix/store/khvphdj3q7nyim46jk97fjp174damrik-silly.drv + builds = 1 + successful builds = 1 -<h2 id="conclusions">Conclusions</h2> +Outputs + out -&gt; /nix/store/8c7sndr3npwmskj9zzp4347cnqh5p8q0-silly +Builds + 10524 2020-02-17 21:05:20Z [02:01] [Built]</code></pre> +<p>This shell is under development, and new features are added continuously. A web-based frontend will also be implemented.</p> +<h2 id="the-road-ahead">The Road Ahead</h2> +<p>To finish up this short introduction to nixbuild.net, let’s talk a bit about our long-term goals for the service.</p> +<p>The core purpose of nixbuild.net is to provide Nix users with pay-per-use distributed builds that are simple to set up and integrate into any workflow. The build execution should be performant and secure.</p> +<p>There are a number of features that basically just are nice side-effects of the design of nixbuild.net:</p> <ul> - <li> - <p>The main cause for downtime was hanged backend. The underlying cause was not identified -due to lack of information.</p> - </li> - <li> - <p>The backend was failing some requests due to reaching the limit of 1024 file descriptors.</p> - </li> - <li> - <p>The duration of the downtime was due to the absence of a telephone signal.</p> - </li> +<li><p>Building a large number of variants of the same derivation (a build matrix or some sort of parameter sweep) will take the same time as running a single build, since nixbuild.net can run all builds in parallel.</p></li> +<li><p>Running repeated builds to find issues related to non-determinism/reproducability will not take longer than running a single build.</p></li> +<li><p>A whole team/company can share the same account in nixbuild.net letting builds be shared in a cost-effective way. If everyone in a team delegates builds to nixbuild.net, the same derivation will never have to be built twice. This is similar to having a shared Nix cache, but avoids having to configure a cache and perform network uploads for each build artifact. Of course, nixbuild.net can be combined with a Nix cache too, if desired.</p></li> </ul> - -<h2 id="what-weve-already-done">What we’ve already done</h2> +<p>Beyond the above we have lots of thoughts on where we want to take nixbuild.net. I’m not going to enumerate possible directions here and now, but one big area that nixbuild.net is particularly suited for is advanced build analysis and visualisation. The sandbox that has been developed to securely isolate builds from each other also gives us a unique way to analyze exactly how a build behaves. 
One can imagine nixbuild.net being able give very detailed feedback to users about build bottlenecks, performance regressions, unused dependencies etc.</p> +<p>With that said, our primary focus right now is to make nixbuild.net a robust workhorse for your Nix builds, enabling you to fully embrace Nix without being limited by local compute resources. Please <a href="mailto:rickard@nixbuild.net">get in touch</a> if you want try out nixbuild.net, or if you have any questions or comments!</p> + Tue, 18 Feb 2020 00:00:00 +0000 + support@nixbuild.net (nixbuild.net) + + + Sander van der Burg: A declarative process manager-agnostic deployment framework based on Nix tooling + tag:blogger.com,1999:blog-1397115249631682228.post-3829850759126756827 + http://sandervanderburg.blogspot.com/2020/02/a-declarative-process-manager-agnostic.html + In a previous blog post written two months ago, <a href="https://sandervanderburg.blogspot.com/2019/11/a-nix-based-functional-organization-for.html">I have introduced a new experimental Nix-based process framework</a>, that provides the following features:<br /><br /><ul><li>It uses the <strong>Nix expression language</strong> for configuring running process instances, including their dependencies. The configuration process is based on only a few <strong>simple concepts</strong>: function definitions to define constructors that generate process manager configurations, function invocations to compose running process instances, and <a href="https://sandervanderburg.blogspot.com/2013/09/managing-user-environments-with-nix.html">Nix profiles</a> to make collections of process configurations accessible from a single location.</li><li>The <strong>Nix package manager</strong> delivers all packages and configuration files and isolates them in the Nix store, so that they never conflict with other running processes and packages.</li><li>It identifies <strong>process dependencies</strong>, so that a process manager can ensure that processes are activated and deactivated in the right order.</li><li>The ability to deploy <strong>multiple instances</strong> of the same process, by making conflicting resources configurable.</li><li>Deploying processes/services as an <strong>unprivileged user</strong>.</li><li>Advanced concepts and features, such as <a href="http://man7.org/linux/man-pages/man7/namespaces.7.html">namespaces</a> and <a href="http://man7.org/linux/man-pages/man7/cgroups.7.html">cgroups</a>, are <strong>not required</strong>.</li></ul><br />Another objective of the framework is that it should work with a variety of process managers on a variety of operating systems.<br /><br />In my previous blog post, I was deliberately using sysvinit scripts (also known as LSB Init compliant scripts) to manage the lifecycle of running processes as a starting point, because they are universally supported on Linux and self contained -- sysvinit scripts only require the right packages installed, but they do not rely on external programs that manage the processes' life-cycle. Moreover, sysvinit scripts can also be conveniently used as an unprivileged user.<br /><br />I have also developed a Nix function that can be used to more conveniently generate sysvinit scripts. 
Traditionally, these scripts are written by hand and basically require that the implementer writes the same boilerplate code over and over again, such as the activities that start and stop the process.<br /><br />The sysvinit script generator function can also be used to directly specify the implementation of all activities that manage the life-cycle of a process, such as:<br /><br /><pre><br />{createSystemVInitScript, nginx, stateDir}:<br />{configFile, dependencies ? [], instanceSuffix ? ""}:<br /><br />let<br /> instanceName = "nginx${instanceSuffix}";<br /> nginxLogDir = "${stateDir}/${instanceName}/logs";<br />in<br />createSystemVInitScript {<br /> name = instanceName;<br /> description = "Nginx";<br /> activities = {<br /> start = ''<br /> mkdir -p ${nginxLogDir}<br /> log_info_msg "Starting Nginx..."<br /> loadproc ${nginx}/bin/nginx -c ${configFile} -p ${stateDir}<br /> evaluate_retval<br /> '';<br /> stop = ''<br /> log_info_msg "Stopping Nginx..."<br /> killproc ${nginx}/bin/nginx<br /> evaluate_retval<br /> '';<br /> reload = ''<br /> log_info_msg "Reloading Nginx..."<br /> killproc ${nginx}/bin/nginx -HUP<br /> evaluate_retval<br /> '';<br /> restart = ''<br /> $0 stop<br /> sleep 1<br /> $0 start<br /> '';<br /> status = "statusproc ${nginx}/bin/nginx";<br /> };<br /> runlevels = [ 3 4 5 ];<br /><br /> inherit dependencies instanceName;<br />}<br /></pre><br />In the above Nix expression, we specify five activities to manage the life-cycle of Nginx, a free/open source web server:<br /><br /><ul><li>The <strong>start</strong> activity initializes the state of Nginx and starts the process (<a href="https://sandervanderburg.blogspot.com/2020/01/writing-well-behaving-daemon-in-c.html">as a daemon</a> that runs in the background).</li><li><strong>stop</strong> stops the Nginx daemon.</li><li><strong>reload</strong> instructs Nginx to reload its configuration</li><li><strong>restart</strong> restarts the process</li><li><strong>status</strong> shows whether the process is running or not.</li></ul><br />Besides directly implementing activities, the Nix function invocation shown above can also be used on a much <strong>higher level</strong> -- typically, sysvinit scripts follow the same conventions. Nearly all sysvinit scripts implement the activities described above to manage the life-cycle of a process, and these typically need to be re-implemented over and over again.<br /><br />We can also generate the implementations of these activities automatically from a high level specification, such as:<br /><br /><pre><br />{createSystemVInitScript, nginx, stateDir}:<br />{configFile, dependencies ? [], instanceSuffix ? 
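# optional suffix, so that multiple instances of the same process can co-exist<br />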
""}:<br /><br />let<br /> instanceName = "nginx${instanceSuffix}";<br /> nginxLogDir = "${stateDir}/${instanceName}/logs";<br />in<br />createSystemVInitScript {<br /> name = instanceName;<br /> description = "Nginx";<br /> initialize = ''<br /> mkdir -p ${nginxLogDir}<br /> '';<br /> process = "${nginx}/bin/nginx";<br /> args = [ "-c" configFile "-p" stateDir ];<br /> runlevels = [ 3 4 5 ];<br /><br /> inherit dependencies instanceName;<br />}<br /></pre><br />You could basically say that the above <i>createSystemVInitScript</i> function invocation makes the configuration process of a sysvinit script "<a href="https://sandervanderburg.blogspot.com/2016/03/the-nixos-project-and-deploying-systems.html"><strong>more declarative</strong></a>" -- you do not need to specify the activities that need to be executed to manage processes, but instead, you specify the <strong>relevant characteristics</strong> of a running process.<br /><br />From this high level specification, the implementations for all required activities will be derived, using conventions that are commonly used to write sysvinit scripts.<br /><br />After completing the initial version of the process management framework that works with sysvinit scripts, I have also been investigating other process managers. I discovered that their configuration processes have many things in common with the sysvinit approach. As a result, I have decided to explore these declarative deployment concepts a bit further.<br /><br />In this blog post, I will describe a declarative process manager-agnostic deployment approach that we can integrate into the experimental Nix-based process management framework.<br /><br /><h2>Writing declarative deployment specifications for managed running processes</h2><br />As explained in the introduction, I have also been experimenting with other process managers than sysvinit. For example, instead of generating a sysvinit script that manages the life-cycle of a process, such as the Nginx server, we can also generate a supervisord configuration file to define Nginx as a program that can be managed with supervisord:<br /><br /><pre><br />{createSupervisordProgram, nginx, stateDir}:<br />{configFile, dependencies ? [], instanceSuffix ? ""}:<br /><br />let<br /> instanceName = "nginx${instanceSuffix}";<br /> nginxLogDir = "${stateDir}/${instanceName}/logs";<br />in<br />createSupervisordProgram {<br /> name = instanceName;<br /> command = "mkdir -p ${nginxLogDir}; "+<br /> "${nginx}/bin/nginx -c ${configFile} -p ${stateDir}";<br /> inherit dependencies;<br />}<br /></pre><br />Invoking the above function will generate a supervisord program configuration file, instead of a sysvinit script.<br /><br />With the following Nix expression, we can generate a systemd unit file so that Nginx's life-cycle can be managed by systemd:<br /><br /><pre><br />{createSystemdService, nginx, stateDir}:<br />{configFile, dependencies ? [], instanceSuffix ? 
""}:<br /><br />let<br /> instanceName = "nginx${instanceSuffix}";<br /> nginxLogDir = "${stateDir}/${instanceName}/logs";<br />in<br />createSystemdService {<br /> name = instanceName;<br /> Unit = {<br /> Description = "Nginx";<br /> };<br /> Service = {<br /> ExecStartPre = "+mkdir -p ${nginxLogDir}";<br /> ExecStart = "${nginx}/bin/nginx -c ${configFile} -p ${stateDir}";<br /> Type = "simple";<br /> };<br /><br /> inherit dependencies;<br />}<br /></pre><br />What you may probably notice when comparing the above two Nix expressions with the last sysvinit example (that captures process characteristics instead of activities), is that they all contain very similar properties. Their main difference is a slightly different organization and naming convention, because each abstraction function is tailored towards the configuration conventions that each target process manager uses.<br /><br />As discussed in my previous blog post about declarative programming and deployment, declarativity is a spectrum -- the above specifications are (somewhat) declarative because they do not capture the activities to manage the life-cycle of the process (the <strong>how</strong>). Instead, they specify <strong>what</strong> process we want to run. The process manager derives and executes all activities to bring that process in a running state.<br /><br />sysvinit scripts themselves are not declarative, because they specify all activities (i.e. shell commands) that need to be executed to accomplish that goal. supervisord configurations and systemd services configuration files are (somewhat) declarative, because they capture process characteristics -- the process manager executes derives all required activities to bring the process in a running state.<br /><br />Despite the fact that I am not specifying any process management activities, these Nix expressions could still be considered somewhat a "how specification", because each configuration is tailored towards a specific process manager. A process manager, such as syvinit, is a means to accomplish something else: getting a running process whose life-cycle can be conveniently managed.<br /><br />If I would revise the above specifications to only express what I kind of running process I want, disregarding the process manager, then I could simply write:<br /><br /><pre><br />{createManagedProcess, nginx, stateDir}:<br />{configFile, dependencies ? [], instanceSuffix ? ""}:<br /><br />let<br /> instanceName = "nginx${instanceSuffix}";<br /> nginxLogDir = "${stateDir}/${instanceName}/logs";<br />in<br />createManagedProcess {<br /> name = instanceName;<br /> description = "Nginx";<br /> initialize = ''<br /> mkdir -p ${nginxLogDir}<br /> '';<br /> process = "${nginx}/bin/nginx";<br /> args = [ "-c" configFile" -p" "${stateDir}/${instanceName}" ];<br /><br /> inherit dependencies instanceName;<br />}<br /></pre><br />The above Nix expression simply states that we want to run a managed Nginx process (using certain command-line arguments) and before starting the process, we want to initialize the state by creating the log directory, if it does not exists yet.<br /><br />I can translate the above specification to all kinds of configuration artifacts that can be used by a variety of process managers to accomplish the same outcome. 
I have developed six kinds of generators allowing me to target the following process managers:<br /><br /><ul><li>sysvinit scripts, also known as <a href="https://wiki.debian.org/LSBInitScripts">LSB Init compliant scripts</a>.</li><li><a href="http://supervisord.org">supervisord</a> programs</li><li><a href="https://www.freedesktop.org/wiki/Software/systemd">systemd</a> services</li><li><a href="https://www.launchd.info">launchd</a> services</li><li><a href="https://www.freebsd.org/doc/en_US.ISO8859-1/articles/rc-scripting/index.html">BSD rc</a> scripts</li><li>Windows services (via Cygwin's <a href="http://web.mit.edu/cygwin/cygwin_v1.3.2/usr/doc/Cygwin/cygrunsrv.README">cygrunsrv</a>)</li></ul><br />Translating the properties of the process manager-agnostic configuration to a process manager-specific properties is quite straight forward for most concepts -- in many cases, there is a direct mapping between a property in the process manager-agnostic configuration to a process manager-specific property.<br /><br />For example, when we intend to target supervisord, then we can translate the <i>process</i> and <i>args</i> parameters to a <i>command</i> invocation. For systemd, we can translate <i>process</i> and <i>args</i> to the <i>ExecStart</i> property that refers to a command-line instruction that starts the process.<br /><br />Although the process manager-agnostic abstraction function supports enough features to get some well known system services working (e.g. Nginx, Apache HTTP service, PostgreSQL, MySQL etc.), it does not facilitate all possible features of each process manager -- it will provide a reasonable set of common features to get a process running and to impose some restrictions on it.<br /><br />It is still possible work around the feature limitations of process manager-agnostic deployment specifications. We can also influence the generation process by defining <strong>overrides</strong> to get process manager-specific properties supported:<br /><br /><pre><br />{createManagedProcess, nginx, stateDir}:<br />{configFile, dependencies ? [], instanceSuffix ? ""}:<br /><br />let<br /> instanceName = "nginx${instanceSuffix}";<br /> nginxLogDir = "${stateDir}/${instanceName}/logs";<br />in<br />createManagedProcess {<br /> name = instanceName;<br /> description = "Nginx";<br /> initialize = ''<br /> mkdir -p ${nginxLogDir}<br /> '';<br /> process = "${nginx}/bin/nginx";<br /> args = [ "-c" configFile" -p" "${stateDir}/${instanceName}" ];<br /><br /> inherit dependencies instanceName;<br /><br /> overrides = {<br /> sysvinit = {<br /> runlevels = [ 3 4 5 ];<br /> };<br /> };<br />}<br /></pre><br />In the above example, we have added an override specifically for sysvinit to tell that the init system that the process should be started in runlevels 3, 4 and 5 (which implies the process should stopped in the remaining runlevels: 0, 1, 2, and 6). The other process managers that I have worked with do not have a notion of runlevels.<br /><br />Similarly, we can use an override to, for example, use systemd-specific features to run a process in a Linux namespace etc.<br /><br /><h2>Simulating process manager-agnostic concepts with no direct equivalents</h2><br />For some process manager-agnostic concepts, process managers do not always have direct equivalents. 
In such cases, there is still the possibility to apply non-trivial simulation strategies.<br /><br /><h3>Foreground processes or daemons</h3><br />What all deployment specifications shown in this blog post have in common is that their main objective is to bring a process in a running state. How these processes are expected to behave is different among process managers.<br /><br />sysvinit and BSD rc scripts expect processes to <strong>daemonize</strong> -- on invocation, a process spawns another process that keeps running in the background (the daemon process). After the initialization of the daemon process is done, the parent process terminates. If processes do not deamonize, the startup process execution blocks indefinitely.<br /><br />Daemons introduce another complexity from a process management perspective -- when invoking an executable from a shell session in background mode, the shell can you tell its process ID, so that it can be stopped when it is no longer necessary.<br /><br />With deamons, an invoked processes forks another child process (or when it supposed to really behave well: it double forks) that becomes the daemon process. The daemon process gets adopted by the init system, and thus remains in the background even if the shell session ends.<br /><br />The shell that invokes the executable does not know the PIDs of the resulting daemon processes, because that value is only propagated to the daemon's parent process, not the calling shell session. To still be able to control it, a well-behaving daemon typically writes its process IDs to a so-called PID file, so that it can be reliably terminated by a shell command when it is no longer required.<br /><br />sysvinit and BSD rc scripts extensively use PID files to control daemons. By using a process' PID file, the managing sysvinit/BSD rc script can tell you whether a process is running or not and reliably terminate a process instance.<br /><br />"More modern" process managers, such as launchd, supervisord, and cygrunsrv, do not work with processes that daemonize -- instead, these process managers are daemons themselves that invoke processes that work in "foreground mode".<br /><br />One of the advantages of this approach is that services can be more reliably controlled -- because their PIDs are directly propagated to the controlling daemon from the <i>fork()</i> library call, it is no longer required to work with PID files, that may not always work reliably (for example: a process might abrubtly terminate and never clean its PID file, giving the system the false impression that it is still running).<br /><br />systemd improves process control even further by using Linux cgroups -- although foreground process may be controlled more reliably than daemons, they can still fork other processes (e.g. a web service that creates processes per connection). When the controlling parent process terminates, and does not properly terminate its own child processes, they may keep running in the background indefintely. 
With cgroups it is possible for the process manager to retain control over all processes spawned by a service and terminate them when a service is no longer needed.<br /><br />systemd has another unique advantage over the other process managers -- it can work both with foreground processes and daemons, although foreground processes seem to have to preference according to the documentation, because they are much easier to control and develop.<br /><br />Many common system services, such as OpenSSH, MySQL or Nginx, have the ability to both run as a foreground process and as a daemon, typically by providing a command-line parameter or defining a property in a configuration file.<br /><br />To provide an optimal user experience for all supported process managers, it is typically a good thing in the process manager-agnostic deployment specification to specify both how a process can be used as a foreground process and as a daemon:<br /><br /><pre><br />{createManagedProcess, nginx, stateDir, runtimeDir}:<br />{configFile, dependencies ? [], instanceSuffix ? ""}:<br /><br />let<br /> instanceName = "nginx${instanceSuffix}";<br /> nginxLogDir = "${stateDir}/${instanceName}/logs";<br />in<br />createManagedProcess {<br /> name = instanceName;<br /> description = "Nginx";<br /> initialize = ''<br /> mkdir -p ${nginxLogDir}<br /> '';<br /> process = "${nginx}/bin/nginx";<br /> args = [ "-p" "${stateDir}/${instanceName}" "-c" configFile ];<br /> foregroundProcessExtraArgs = [ "-g" "daemon off;" ];<br /> daemonExtraArgs = [ "-g" "pid ${runtimeDir}/${instanceName}.pid;" ];<br /><br /> inherit dependencies instanceName;<br /><br /> overrides = {<br /> sysvinit = {<br /> runlevels = [ 3 4 5 ];<br /> };<br /> };<br />}<br /></pre><br />In the above example, we have revised Nginx expression to both specify how the process can be started as a foreground process and as a daemon. The only thing that needs to be configured differently is one global directive in the Nginx configuration file -- by default, Nginx runs as a deamon, but by adding the <i>daemon off;</i> directive to the configuration we can run it in foreground mode.<br /><br />When we run Nginx as daemon, we configure a PID file that refers to the instance name so that multiple instances can co-exist.<br /><br />To make this conveniently configurable, the above expression does the following:<br /><br /><ul><li>The <i>process</i> parameter specifies the process that needs to be started both in foreground mode and as a daemon. The <i>args</i> parameter specifies common command-line arguments that both the foreground and daemon process will use.</li><li>The <i>foregroundProcessExtraArgs</i> parameter specifies additional command-line arguments that are only used when the process is started in foreground mode. In the above example, it is used to provide Nginx the global directive that disables the daemon setting.</li><li>The <i>daemonExtraArgs</i> parameter specifies additional command-line arguments that are only used when the process is started as a daemon. In the above example, it used to provide Nginx a global directive with a PID file path that uniquely identifies the process instance.</li></ul><br />For custom software and services implemented in different language than C, e.g. 
Node.js, Java or Python, it is far less common that they have the ability to daemonize -- they can typically only be used as foreground processes.<br /><br />Nonetheless, we can still daemonize foreground-only processes, by using an external tool, such as <a href="http://www.libslack.org/daemon/">libslack's <i>daemon</i></a> command:<br /><br /><pre><br />$ daemon -U -i myforegroundprocess<br /></pre><br />The above command deamonizes the foreground process and creates a PID file for it, so that it can be managed by the sysvinit/BSD rc utility scripts.<br /><br />The opposite kind of "simulation" is also possible -- if a process can only be used as a daemon, then we can use a <strong>proxy process</strong> to make it appear as a foreground process:<br /><br /><pre style="overflow: auto;"><br />export _TOP_PID=$$<br /><br /># Handle to SIGTERM and SIGINT signals and forward them to the daemon process<br />_term()<br />{<br /> trap "exit 0" TERM<br /> kill -TERM "$pid"<br /> kill $_TOP_PID<br />}<br /><br />_interrupt()<br />{<br /> kill -INT "$pid"<br />}<br /><br />trap _term SIGTERM<br />trap _interrupt SIGINT<br /><br /># Start process in the background as a daemon<br />${executable} "$@"<br /><br /># Wait for the PID file to become available.<br /># Useful to work with daemons that don't behave well enough.<br />count=0<br /><br />while [ ! -f "${_pidFile}" ]<br />do<br /> if [ $count -eq 10 ]<br /> then<br /> echo "It does not seem that there isn't any pid file! Giving up!"<br /> exit 1<br /> fi<br /><br /> echo "Waiting for ${_pidFile} to become available..."<br /> sleep 1<br /><br /> ((count=count++))<br />done<br /><br /># Determine the daemon's PID by using the PID file<br />pid=$(cat ${_pidFile})<br /><br /># Wait in the background for the PID to terminate<br />${if stdenv.isDarwin then ''<br /> lsof -p $pid +r 3 &amp;&gt;/dev/null &amp;<br />'' else if stdenv.isLinux || stdenv.isCygwin then ''<br /> tail --pid=$pid -f /dev/null &amp;<br /> '' else if stdenv.isBSD || stdenv.isSunOS then ''<br /> pwait $pid &amp;<br /> '' else<br /> throw "Don't know how to wait for process completion on system: ${stdenv.system}"}<br /><br /># Wait for the blocker process to complete.<br /># We use wait, so that bash can still<br /># handle the SIGTERM and SIGINT signals that may be sent to it by<br /># a process manager<br />blocker_pid=$!<br />wait $blocker_pid<br /></pre><br />The idea of the proxy script shown above is that it runs as a foreground process as long as the daemon process is running and relays any relevant incoming signals (e.g. a terminate and interrupt) to the daemon process.<br /><br />Implementing this proxy was a bit tricky:<br /><br /><ul><li>In the beginning of the script we configure signal handlers for the <i>TERM</i> and <i>INT</i> signals so that the process manager can terminate the daemon process.</li><li>We must start the daemon and wait for it to become available. Although the parent process of a well-behaving daemon should only terminate when the initialization is done, this turns out not be a hard guarantee -- to make the process a bit more robust, we deliberately wait for the PID file to become available, before we attempt to wait for the termination of the daemon.</li><li>Then we wait for the PID to terminate. The bash shell has an internal <i>wait</i> command that can be used to wait for a background process to terminate, but this only works with processes in the same process group as the shell. 
Daemons are in a new session (with different process groups), so they cannot be monitored by the shell by using the <i>wait</i> command.<br /><br /><a href="https://stackoverflow.com/questions/1058047/wait-for-a-process-to-finish">From this Stackoverflow article</a>, I learned that we can use the <i>tail</i> command of GNU Coreutils, or <i>lsof</i> on macOS/Darwin, and <i>pwait</i> on BSDs and Solaris/SunOS to monitor processes in other process groups.</li><li>When a command is being executed by a shell script (e.g. in this particular case: <i>tail</i>, <i>lsof</i> or <i>pwait</i>), the shell script can no longer respond to signals until the command completes. To still allow the script to respond to signals while it is waiting for the daemon process to terminate, we must run the previous command in background mode, and we use the <i>wait</i> instruction to block the script. <a href="https://unix.stackexchange.com/questions/146756/forward-sigterm-to-child-in-bash">While a <i>wait</i> command is running, the shell can respond to signals</a>.</li></ul><br />The generator function will automatically pick the best solution for the selected target process manager -- this means that when our target process manager are sysvinit or BSD rc scripts, the generator automatically picks the configuration settings to run the process as a daemon. For the remaining process managers, the generator will pick the configuration settings that runs it as a foreground process.<br /><br />If a desired process model is not supported, then the generator will automatically simulate it. For instance, if we have a foreground-only process specification, then the generator will automatically configure a sysvinit script to call the <i>daemon</i> executable to daemonize it.<br /><br />A similar process happens when a daemon-only process specification is deployed for a process manager that cannot work with it, such as supervisord.<br /><br /><h3>State initialization</h3><br />Another important aspect in process deployment is <strong>state initialization</strong>. Most system services require the presence of state directories in which they can store their PID, log and temp files. If these directories do not exist, the service may not work and refuse to start.<br /><br />To cope with this problem, I typically make processes self initializing -- before starting the process, I check whether the state has been intialized (e.g. check if the state directories exist) and re-initialize the initial state if needed.<br /><br />With most process managers, state initialization is easy to facilitate. For sysvinit and BSD rc scripts, we just use the generator to first execute the shell commands to initialize the state before the process gets started.<br /><br />Supervisord allows you to execute multiple shell commands in a single <i>command</i> directive -- we can just execute a script that initializes the state before we execute the process that we want to manage.<br /><br />systemd has a <i>ExecStartPre</i> directive that can be used to specify shell commands to execute before the main process starts.<br /><br />Apple launchd and cygrunsrv, however, do not have a generic shell execution mechanism or some facility allowing you to execute things before a process starts. 
Nonetheless, we can still ensure that the state is going to be initialized by creating a <strong>wrapper script</strong> -- first the wrapper script does the state initialization and then executes the main process.<br /><br />If a state initialization procedure was specified and the target process manager does not support scripting, then the generator function will transparently wrap the main process into a wrapper script that supports state initialization.<br /><br /><h3>Process dependencies</h3><br />Another important generic concept is process dependency management. For example, Nginx can act as a reverse proxy for another web application process. To provide a functional Nginx service, we must be sure that the web application process gets activated as well, and that the web application is activated before Nginx.<br /><br />If the web application process is activated after Nginx or missing completely, then Nginx is (temporarily) unable to redirect incoming requests to the web application process causing end-users to see bad gateway errors.<br /><br />The process managers that I have experimented with all have a different notion of process dependencies.<br /><br />sysvinit scripts can optionally declare dependencies in their comment sections. Tools that know how to interpret these dependency specifications can use it to decide the right activation order. Systems using sysvinit typically ignore this specification. Instead, they work with sequence numbers in their file names -- each run level configuration directory contains a prefix (S or K) followed by two numeric digits that defines the start or stop order.<br /><br />supervisord does not work with dependency specifications, but every program can optionally provide a <i>priority</i> setting that can be used to order the activation and deactivation of programs -- lower priority numbers have precedence over high priority numbers.<br /><br />From dependency specifications in a process management expression, the generator function can automatically derive sequence numbers for process managers that require it.<br /><br />Similar to sysvinit scripts, BSD rc scripts can also declare dependencies in their comment sections. Contrary to sysvinit scripts, BSD rc scripts can use the <a href="https://www.freebsd.org/cgi/man.cgi?rcorder(8)"><i>rcorder</i></a> tool to parse these dependencies from the comments section and automatically derive the order in which the BSD rc scripts need to be activated.<br /><br /><i>cygrunsrv</i> also allows you directly specify process dependencies. The Windows service manager makes sure that the service get activated in the right order and that all process dependencies are activated first. The only limitation is that cygrunsrv only allows up to 16 dependencies to be specified per service.<br /><br />To simulate process dependencies with systemd, we can use two properties. The <i>Wants</i> property can be used to tell systemd that another service needs to be activated first. The <i>After</i> property can be used to specify the ordering.<br /><br />Sadly, it seems that launchd has no notion of process dependencies at all -- processes can be activated by certain events, e.g. when a kernel module was loaded or through socket activation, but it does not seem to have the ability to configure process dependencies or the activation ordering. 
When our target process manager is launchd, then we simply have to inform the user that proper activation ordering cannot be guaranteed.<br /><br /><h2>Changing user privileges</h2><br />Another general concept, that has subtle differences in each process manager, is changing user privileges. Typically for the deployment of system services, you do not want to run these services as root user (that has full access to the filesystem), but as an unprivileged user.<br /><br />sysvinit and BSD rc scripts have to change users through the <i>su</i> command. The <i>su</i> command can be used to change the user ID (UID), and will automatically adopt the primary group ID (GID) of the corresponding user.<br /><br />Supervisord and <i>cygrunsrv</i> can also only change user IDs (UIDs), and will adopt the primary group ID (GID) of the corresponding user.<br /><br />Systemd and launchd can both change the user IDs and group IDs of the process that it invokes.<br /><br />Because only changing UIDs are universally supported amongst process managers, I did not add a configuration property that allows you to change GIDs in a process manager-agnostic way.<br /><br /><h2>Deploying process manager-agnostic configurations</h2><br />With a processes Nix expression, we can define which process instances we want to run (and how they can be constructed from source code and their dependencies):<br /><br /><pre><br />{ pkgs ? import { inherit system; }<br />, system ? builtins.currentSystem<br />, stateDir ? "/var"<br />, runtimeDir ? "${stateDir}/run"<br />, logDir ? "${stateDir}/log"<br />, tmpDir ? (if stateDir == "/var" then "/tmp" else "${stateDir}/tmp")<br />, forceDisableUserChange ? false<br />, processManager<br />}:<br /><br />let<br /> constructors = import ./constructors.nix {<br /> inherit pkgs stateDir runtimeDir logDir tmpDir;<br /> inherit forceDisableUserChange processManager;<br /> }; <br />in <br />rec { <br /> webapp = rec { <br /> port = 5000; <br /> dnsName = "webapp.local"; <br /> <br /> pkg = constructors.webapp { <br /> inherit port; <br /> }; <br /> }; <br /> <br /> nginxReverseProxy = rec {<br /> port = 8080;<br /><br /> pkg = constructors.nginxReverseProxy {<br /> webapps = [ webapp ];<br /> inherit port;<br /> } {};<br /> };<br />}<br /></pre><br />In the above Nix expression, we compose two running process instances:<br /><br /><ul><li><i>webapp</i> is a trivial web application process that will simply return a static HTML page by using the HTTP protocol.</li><li><i>nginxReverseProxy</i> is a Nginx server configured as a reverse proxy server. It will forward incoming HTTP requests to the appropriate web application instance, based on the virtual host name. If a virtual host name is <i>webapp.local</i>, then Nginx forwards the request to the <i>webapp</i> instance.</li></ul><br />To generate the configuration artifacts for the process instances, we refer to a separate constructors Nix expression. 
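To make the structure concrete, here is a minimal sketch of what such a constructors expression could look like -- the file names (<i>create-managed-process.nix</i>, <i>webapp.nix</i>, <i>nginx-reverse-proxy.nix</i>) and the way <i>createManagedProcess</i> is brought into scope are hypothetical illustrations; only the parameters correspond to the expressions shown earlier:<br /><br /><pre><br />{ pkgs, stateDir, runtimeDir, logDir, tmpDir<br />, forceDisableUserChange, processManager<br />}:<br /><br />let<br />  # hypothetical helper import; the real framework provides the<br />  # createManagedProcess abstraction that the constructors use<br />  createManagedProcess = import ./create-managed-process.nix {<br />    inherit pkgs runtimeDir processManager forceDisableUserChange;<br />  };<br />in<br />{<br />  # every attribute is a constructor function for one kind of process<br />  webapp = import ./webapp.nix {<br />    inherit createManagedProcess tmpDir;<br />  };<br /><br />  nginxReverseProxy = import ./nginx-reverse-proxy.nix {<br />    inherit createManagedProcess stateDir;<br />    inherit (pkgs) nginx;<br />  };<br />}<br /></pre><br />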
<br />Each constructor calls the <i>createManagedProcess</i> function abstraction (shown earlier in this blog post) to construct a process configuration in a process manager-agnostic way.<br /><br />With the following command-line instruction, we can generate sysvinit scripts for the <i>webapp</i> and Nginx processes declared in the processes expression, and run them as an unprivileged user with the state files managed in our home directory:<br /><br /><pre><br />$ nixproc-build --process-manager sysvinit \<br />  --state-dir /home/sander/var \<br />  --force-disable-user-change processes.nix<br /></pre><br />By adjusting the <i>--process-manager</i> parameter we can also generate artifacts for a different process manager. For example, the following command will generate systemd unit configuration files instead of sysvinit scripts:<br /><br /><pre><br />$ nixproc-build --process-manager systemd \<br />  --state-dir /home/sander/var \<br />  --force-disable-user-change processes.nix<br /></pre><br />The following command will automatically build and deploy all processes, using sysvinit as a process manager:<br /><br /><pre><br />$ nixproc-sysvinit-switch --state-dir /home/sander/var \<br />  --force-disable-user-change processes.nix<br /></pre><br />We can also run a life-cycle management activity on all previously deployed processes. For example, to retrieve the statuses of all processes, we can run:<br /><br /><pre><br />$ nixproc-sysvinit-runactivity status<br /></pre><br />We can also traverse the processes in reverse dependency order. This is particularly useful to reliably stop all processes without breaking any process dependencies:<br /><br /><pre><br />$ nixproc-sysvinit-runactivity -r stop<br /></pre><br />Similarly, there are command-line tools for the other supported process managers. For example, to deploy systemd units instead of sysvinit scripts, you can run:<br /><br /><pre><br />$ nixproc-systemd-switch processes.nix<br /></pre><br /><h2>Distributed process manager-agnostic deployment with Disnix</h2><br />As shown in the previous process management framework blog post, it is also possible to deploy processes to machines in a network and have inter-dependencies between processes. These kinds of deployments can be managed by <a href="https://sandervanderburg.blogspot.com/2011/02/disnix-toolset-for-distributed.html">Disnix</a>.<br /><br />Compared to the previous blog post (in which we could only deploy sysvinit scripts), we can now also use any process manager that the framework supports. The Dysnomia toolset provides plugins for all process managers that this framework supports:<br /><br /><pre><br />{ pkgs, distribution, invDistribution, system<br />, stateDir ? "/var"<br />, runtimeDir ? "${stateDir}/run"<br />, logDir ? "${stateDir}/log"<br />, tmpDir ? (if stateDir == "/var" then "/tmp" else "${stateDir}/tmp")<br />, forceDisableUserChange ? false<br />, processManager ?
"sysvinit"<br />}:<br /><br />let<br /> constructors = import ./constructors.nix {<br /> inherit pkgs stateDir runtimeDir logDir tmpDir;<br /> inherit forceDisableUserChange processManager;<br /> };<br /><br /> processType =<br /> if processManager == "sysvinit" then "sysvinit-script"<br /> else if processManager == "systemd" then "systemd-unit"<br /> else if processManager == "supervisord" then "supervisord-program"<br /> else if processManager == "bsdrc" then "bsdrc-script"<br /> else if processManager == "cygrunsrv" then "cygrunsrv-service"<br /> else throw "Unknown process manager: ${processManager}";<br />in<br />rec {<br /> webapp = rec {<br /> name = "webapp";<br /> port = 5000;<br /> dnsName = "webapp.local";<br /> pkg = constructors.webapp {<br /> inherit port;<br /> };<br /> type = processType;<br /> };<br /><br /> nginxReverseProxy = rec {<br /> name = "nginxReverseProxy";<br /> port = 8080;<br /> pkg = constructors.nginxReverseProxy {<br /> inherit port;<br /> };<br /> dependsOn = {<br /> inherit webapp;<br /> };<br /> type = processType;<br /> };<br />}<br /></pre><br />In the above expression, we have extended the previously shown processes expression into a Disnix service expression, in which every attribute in the attribute set represents a service that can be distributed to a target machine in the network.<br /><br />The <i>type</i> attribute of each service indicates which Dysnomia plugin needs to manage its life-cycle. We can automatically select the appropriate plugin for our desired process manager by deriving it from the <i>processManager</i> parameter.<br /><br />The above Disnix expression has a drawback -- in a <strong>heteregenous network</strong> of machines (that run multiple operating systems and/or process managers), we need to compose all desired variants of each service with configuration files for each process manager that we want to use.<br /><br />It is also possible to have <strong>target-agnostic</strong> services, by delegating the translation steps to the corresponding target machines. Instead of directly generating a configuration file for a process manager, we generate a JSON specification containing all parameters that are passed to <i>createManagedProcess</i>. We can use this JSON file to build the corresponding configuration artefacts on the target machine:<br /><br /><pre><br />{ pkgs, distribution, invDistribution, system<br />, stateDir ? "/var"<br />, runtimeDir ? "${stateDir}/run"<br />, logDir ? "${stateDir}/log"<br />, tmpDir ? (if stateDir == "/var" then "/tmp" else "${stateDir}/tmp")<br />, forceDisableUserChange ? false<br />, processManager ? 
null<br />}:<br /><br />let<br />  constructors = import ./constructors.nix {<br />    inherit pkgs stateDir runtimeDir logDir tmpDir;<br />    inherit forceDisableUserChange processManager;<br />  };<br />in<br />rec {<br />  webapp = rec {<br />    name = "webapp";<br />    port = 5000;<br />    dnsName = "webapp.local";<br />    pkg = constructors.webapp {<br />      inherit port;<br />    };<br />    type = "managed-process";<br />  };<br /><br />  nginxReverseProxy = rec {<br />    name = "nginxReverseProxy";<br />    port = 8080;<br />    pkg = constructors.nginxReverseProxy {<br />      inherit port;<br />    };<br />    dependsOn = {<br />      inherit webapp;<br />    };<br />    type = "managed-process";<br />  };<br />}<br /></pre><br />In the above services model, we have set the <i>processManager</i> parameter to <i>null</i>, causing the generator to emit JSON representations of the function parameters passed to <i>createManagedProcess</i>.<br /><br />The <i>managed-process</i> type refers to a Dysnomia plugin that consumes the JSON specification and invokes the <i>createManagedProcess</i> function to convert the JSON configuration into a configuration file for the preferred process manager.<br /><br />In the infrastructure model, we can configure the preferred process manager for each target machine:<br /><br /><pre><br />{<br />  test1 = {<br />    properties = {<br />      hostname = "test1";<br />    };<br />    containers = {<br />      managed-process = {<br />        processManager = "sysvinit";<br />      };<br />    };<br />  };<br /><br />  test2 = {<br />    properties = {<br />      hostname = "test2";<br />    };<br />    containers = {<br />      managed-process = {<br />        processManager = "systemd";<br />      };<br />    };<br />  };<br />}<br /></pre><br />In the above infrastructure model, the <i>managed-process</i> container on the first machine (<i>test1</i>) has been configured to use sysvinit scripts to manage processes. On the second test machine (<i>test2</i>), the <i>managed-process</i> container is configured to use systemd.<br /><br />If we distribute the services in the services model to targets in the infrastructure model as follows:<br /><br /><pre><br />{infrastructure}:<br /><br />{<br />  webapp = [ infrastructure.test1 ];<br />  nginxReverseProxy = [ infrastructure.test2 ];<br />}<br /></pre><br />and then deploy the system as follows:<br /><br /><pre style="overflow: auto; font-size: 90%;"><br />$ disnix-env -s services.nix -i infrastructure.nix -d distribution.nix<br /></pre><br />then the <i>webapp</i> process will be distributed to the <i>test1</i> machine in the network and managed with a sysvinit script.<br /><br />The <i>nginxReverseProxy</i> service will be deployed to the <i>test2</i> machine and managed as a systemd job. The Nginx reverse proxy forwards incoming connections for the <i>webapp.local</i> virtual host to the web application process hosted on the first machine.
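<br /><br />Assuming both target machines are reachable under their hostnames, a quick way to check the end result is an ordinary HTTP request that sets the virtual host name -- the hostname and port below are taken from the models above:<br /><br /><pre><br />$ curl -H 'Host: webapp.local' http://test2:8080<br /></pre>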
<br /><br /><h2>Discussion</h2><br />In this blog post, I have introduced a process manager-agnostic function abstraction that makes it possible to target all kinds of process managers on a variety of operating systems.<br /><br />By using a single set of declarative specifications, we can:<br /><br /><ul><li>Target six different process managers on four different kinds of operating systems.</li><li>Implement various kinds of deployment scenarios: production deployments, and test deployments as an unprivileged user.</li><li>Construct multiple instances of processes.</li></ul><br />In a distributed context, the advantage is that we can uniformly target all supported process managers and operating systems in a heterogeneous environment from a single declarative specification.<br /><br />This is particularly useful to facilitate technology diversity -- for example, one of the key selling points of Microservices is that "any technology" can be used to implement them. In many cases, technology diversity is "restricted" to frameworks, programming languages, and storage technologies.<br /><br />One particular aspect that is rarely changed is the choice of operating systems, because of the limitations of deployment tools -- most deployment solutions for Microservices are container-based and heavily rely on Linux-only concepts, such as namespaces and cgroups.<br /><br />With this process management framework and the recent Dysnomia plugin additions for Disnix, it is possible to target all kinds of operating systems that support the Nix package manager, making the operating system component selectable as well. This also allows you to pick the best operating system to implement a certain requirement -- for example, when performance is important you might pick Linux, and when there is a strong emphasis on security, you could pick OpenBSD to host a mission-critical component.<br /><br /><h2>Limitations</h2><br />The following table summarizes the differences between the process manager solutions that I have investigated:<br /><br /><div><table style="border-style: solid; border-width: 1px;"><tbody><tr><th></th><th style="border-style: solid; border-width: 1px; white-space: nowrap;">sysvinit</th><th style="border-style: solid; border-width: 1px; white-space: nowrap;">bsdrc</th><th style="border-style: solid; border-width: 1px; white-space: nowrap;">supervisord</th><th style="border-style: solid; border-width: 1px; white-space: nowrap;">systemd</th><th style="border-style: solid; border-width: 1px; white-space: nowrap;">launchd</th><th style="border-style: solid; border-width: 1px; white-space: nowrap;">cygrunsrv</th></tr> <tr><th style="border-style: solid; border-width: 1px; white-space: nowrap;">Process type</th><td style="border-style: solid; border-width: 1px; white-space: nowrap;">daemon</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">daemon</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">foreground</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">foreground<br />daemon</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">foreground</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">foreground</td></tr> <tr><th style="border-style: solid; border-width: 1px; white-space: nowrap;">Process control
method</th><td style="border-style: solid; border-width: 1px; white-space: nowrap;">PID files</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">PID files</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">Process PID</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">cgroups</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">Process PID</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">Process PID</td></tr> <tr><th style="border-style: solid; border-width: 1px; white-space: nowrap;">Scripting support</th><td style="border-style: solid; border-width: 1px; white-space: nowrap;">yes</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">yes</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">yes</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">yes</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">no</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">no</td></tr> <tr><th style="border-style: solid; border-width: 1px; white-space: nowrap;">Process dependency management</th><td style="border-style: solid; border-width: 1px; white-space: nowrap;">Numeric ordering</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">Dependency-based</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">Numeric ordering</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">Dependency-based<br />+ dependency loading</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">None</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">Dependency-based<br />+ dependency loading</td></tr> <tr><th style="border-style: solid; border-width: 1px; white-space: nowrap;">User changing capabilities</th><td style="border-style: solid; border-width: 1px; white-space: nowrap;">user</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">user</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">user and group</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">user and group</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">user and group</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">user</td></tr> <tr><th style="border-style: solid; border-width: 1px; white-space: nowrap;">Unprivileged user deployments</th><td style="border-style: solid; border-width: 1px; white-space: nowrap;">yes*</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">yes*</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">yes</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">yes*</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">no</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">no</td></tr> <tr><th style="border-style: solid; border-width: 1px; white-space: nowrap;">Operating system support</th><td style="border-style: solid; border-width: 1px; white-space: nowrap;">Linux</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">FreeBSD<br /> &gt;OpenBSD<br />NetBSD</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">Many UNIX-like:<br />Linux<br />macOS<br />FreeBSD<br />Solaris<br /></td><td 
style="border-style: solid; border-width: 1px; white-space: nowrap;">Linux (+glibc) only</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">macOS (Darwin)</td><td style="border-style: solid; border-width: 1px; white-space: nowrap;">Windows (Cygwin)</td></tr> </tbody></table></div><br />Although we can facilitate lifecycle management from a common specification with a variety of process managers, only the most important common features are supported.<br /><br />Not every concept can be done in a process manager agnostic way. For example, we cannot generically do any isolation of resources (except for packages, because we use Nix). It is difficult to generalize these concepts because these they are not standardized, e.g. the POSIX standard does not descibe namespaces and cgroups (or similar concepts).<br /><br />Furthermore, most process managers (with the exception of supervisord) are operating system specific. As a result, it still matters what process manager is picked.<br /><br /><h2>Related work</h2><br />Process manager-agnostic deployment is not entirely a new idea. Dysnomia already has a target-agnostic 'process' plugin for quite a while, that translates a simple deployment specification (constisting of key-value pairs) to a systemd unit configuration file or sysvinit script.<br /><br />The features of Dysnomia's <i>process</i> plugin are much more limited compared to the <i>createManagedProcess</i> abstraction function described in this blog post. It does not support any other than process managers than sysvint and systemd, and it can only work with foreground processes.<br /><br />Furthermore, target agnostic configurations cannot be easily extended -- it is possible to (ab)use the templating mechanism, but it has no first class overridde facilities.<br /><br />I also found a project called <a href="https://github.com/jordansissel/pleaserun">pleaserun</a> that also has the objective to generate configuration files for a variety of process managers (my approach and pleaserunit, both support sysvinit scripts, systemd and launchd).<br /><br />It seems to use template files to generate the configuration artefacts, and it does not seem to have a generic extension mechanism. Furthermore, it provides no framework to configure the location of shared resources, automatically install package dependencies or to compose multiple instances of processes.<br /><br /><h2>Some remaining thoughts</h2><br />Although the Nix package manager (not the NixOS distribution), should be portable amongst a variety of UNIX-like systems, it turns out that the only two operating systems that are well supported are Linux and macOS. Nix was reported to work on a variety of other UNIX-like systems in the past, but recently it seems that many things are broken.<br /><br />To make Nix work on FreeBSD 12.1, I have used the latest stable Nix package manager version <a href="https://github.com/0mp/freebsd-ports-nix">with patches from this repository</a>. It turns out that there is still a patch missing to work around in a bug in FreeBSD that incorrectly kills all processes in a process group. Fortunately, when we run Nix as as unprivileged user, this bug does not seem to cause any serious problems.<br /><br />Recent versions of Nixpkgs turn out to be horribly broken on FreeBSD -- the FreeBSD stdenv does not seem to work at all. 
I tried switching back to stdenv-native (a <i>stdenv</i> environment that impurely uses the host system's compiler and core executables), but that also no longer seems to work in the last three major releases -- the Nix expression evaluation breaks in several places. Due to the large number of changes and assumptions that the <i>stdenv</i> infrastructure currently makes, it was practically impossible for me to fix the infrastructure.<br /><br />As another workaround, I reverted to a very old version of Nixpkgs (version 17.03 to be precise) that still has a working stdenv-native environment. With some tiny adjustments (e.g. adding shell aliases for the GNU variants of certain shell executables to <i>stdenv-native</i>), I have managed to get some basic Nix packages working, including Nginx on FreeBSD.<br /><br />Surprisingly, running Nix on Cygwin was less painful than on FreeBSD (because of all the GNUisms that Cygwin provides). Similar to FreeBSD, recent versions of Nixpkgs also appear to be broken, including the Cygwin stdenv environment. By reverting to <i>release-18.03</i> (which still has a somewhat working <i>stdenv</i> for Cygwin), I have managed to build a working Nginx version.<br /><br />As a future improvement to Nixpkgs, I would like to propose a testing solution for stdenv-native. Although I understand that it is difficult to dedicate manpower to maintaining all unconventional Nix/Nixpkgs ports, stdenv-native is something that we can also conveniently test on Linux and prevent from breaking in the future.<br /><br /><h2>Availability</h2><br /><a href="https://github.com/svanderburg/nix-processmgmt">The latest version of my experimental Nix-based process framework</a>, which includes the process manager-agnostic configuration function described in this blog post, can be obtained from my GitHub page.<br /><br />In addition, the repository also contains some example cases, including the web application system described in this blog post, and a set of common system services: MySQL, Apache HTTP server, PostgreSQL and Apache Tomcat.<br /><br /> + Sat, 15 Feb 2020 20:07:00 +0000 + noreply@blogger.com (Sander van der Burg) + + + Cachix: CDN and double storage size + https://blog.cachix.org/post/2020-01-28-cdn-and-double-storage/ + https://blog.cachix.org/post/2020-01-28-cdn-and-double-storage/ + Cachix - Nix binary cache hosting, has grown quite a bit in recent months in terms of day to day usage and that was mostly noticeable on bandwidth. +Over 3000 GB were served in December 2019. +CDN by CloudFlare Increased usage prompted a few backend machine instance upgrades to handle concurrent upload/downloads, but it became clear it’s time to abandon single machine infrastructure. +As of today, all binary caches are served by CloudFlare CDN. + Wed, 29 Jan 2020 08:00:00 +0000 + support@cachix.org (Domen Kožar) + + + Mayflower: __structuredAttrs in Nix + https://nixos.mayflower.consulting/blog/2020/01/20/structured-attrs/ + https://nixos.mayflower.consulting/blog/2020/01/20/structured-attrs/ + In Nix 2 a new parameter to the derivation primitive was added. It changes how information is passed to the derivation builder. +Current State In order to show how it changes the handling of parameters to derivation, the first example will show the current state with __structuredAttrs set to false and the stdenv.mkDerivation wrapper around derivation.
All parameters are passed to the builder as environment variables, canonicalised by Nix in imitation of shell script conventions: + Mon, 20 Jan 2020 12:00:00 +0000 + + + Hercules Labs: Hercules CI & Cachix split up + https://blog.hercules-ci.com/2020/01/14/hercules-ci-cachix-split-up/ + https://blog.hercules-ci.com/2020/01/14/hercules-ci-cachix-split-up/ + <p>After careful consideration of how to balance between the two products, we’ve decided to split up. Each of the two products will be a separate entity:</p> <ul> - <li> - <p>To avoid any hangs in the future, we have configured <a href="http://0pointer.de/blog/projects/watchdog.html">systemd watchdog</a> -which automatically restarts the service if the backend doesn’t respond for 3 seconds. -Doing so we released <a href="https://github.com/hercules-ci/warp-systemd">warp-systemd</a> Haskell library to integrate Warp (Haskell web server) -with systemd, such as socket activation and watchdog features.</p> - </li> - <li> - <p>We’ve increased file descriptors limit to 8192.</p> - </li> - <li> - <p>We’ve set up <a href="https://status.cachix.org/">Cachix status page</a> so that you can check the state of the service.</p> - </li> - <li> - <p>For a better visibility into errors like file handles, we’ve configured <a href="https://sentry.io">sentry.io</a> -error reporting. -Doing so we released <a href="https://github.com/hercules-ci/katip-raven">katip-raven</a> for seamless Sentry integration -of structured logging which we also use to log Warp (Haskell web server) exceptions.</p> - </li> - <li> - <p>Robert is now fully onboarded to be able to resolve any Cachix issues</p> - </li> - <li> - <p>We’ve made a number of improvements for the performance of Cachix. Just tuning GHC RTS settings -shows 15% speed up in common usage.</p> - </li> + <li>Hercules CI becomes part of Robert Hensing’s Ensius B.V.</li> + <li>Cachix becomes part of Domen Kožar’s Enlambda OÜ</li> </ul> -<h2 id="future-work">Future work</h2> +<p>For customers there will be no changes, except for the point of contact in support requests.</p> -<ul> - <li> - <p>Enable debugging builds for production. 
This would allow systemd watchdog to <a href="https://mpickering.github.io/ghc-docs/build-html/users_guide/debug-info.html#requesting-a-stack-trace-with-sigquit">send signal SIGQUIT</a> and get an execution stack in which program hanged.</p> - - <p>We opened <a href="https://github.com/NixOS/nixpkgs/pull/69552">nixpkgs pull request</a> to lay the ground work -to be able to compile debugging builds.</p> - - <p>However there’s a GHC bug opened showing <a href="https://gitlab.haskell.org/ghc/ghc/issues/15960">debugging builds alter the performance of programs</a>, so we need to asses our impact first.</p> - </li> - <li> - <p>Upgrade <a href="https://github.com/haskell/network">network</a> library to 3.0 fixing <a href="https://github.com/snoyberg/http-client/issues/374#issuecomment-535919090">unneeded file handle usage</a> and <a href="https://github.com/haskell/network-bsd/commit/2167eca412fa488f7b2622fcd61af1238153dae7">a possible candidate for a deadlock</a>.</p> - - <p><a href="https://www.stackage.org/nightly-2019-09-30">Stackage just included network-3.* in latest snapshot</a> -so it’s a matter of weeks.</p> - </li> - <li> - <p>Improve load testing tooling to be able to reason about performance implications.</p> - </li> -</ul> +<p>Domen &amp; Robert</p> + Tue, 14 Jan 2020 00:00:00 +0000 + + + Mayflower: Windows-on-NixOS, part 1: Migrating bare-metal to a VM + https://nixos.mayflower.consulting/blog/2019/11/27/windows-vm-storage/ + https://nixos.mayflower.consulting/blog/2019/11/27/windows-vm-storage/ + This is part 1 of a series of blog posts explaining how we took an existing Windows installation on hardware and moved it into a VM running on top of NixOS. +Background We have a decently-equipped desktop PC sitting in our office, which is designated for data experiments using TensorFlow and such. During off-hours, it’s also used for games, and for that purpose it has Windows installed on it. We decided to try moving Windows into a VM within NixOS so that we could run both operating systems in parallel. + Wed, 27 Nov 2019 06:00:00 +0000 + + + Craige McWhirter: Deploying and Configuring Vim on NixOS + http://mcwhirter.com.au//craige/blog/2019/Deploying_and_Configuring_Vim_on_NixOS/ + http://mcwhirter.com.au//craige/blog/2019/Deploying_and_Configuring_Vim_on_NixOS/ + <p><img alt="NixOS Gears by Craige McWhirter" src="http://mcwhirter.com.au/files/NixOS_Gears.png" title="NixOS Gears by Craige McWhirter" /></p> -<h2 id="summary">Summary</h2> +<p>I had a need to deploy <a href="https://www.vim.org/">vim</a> and my particular preferred +configuration both system-wide and across multiple systems (via +<a href="https://nixos.org/nixops/">NixOps</a>).</p> -<p>We’re confident such issues shouldn’t affect the production anymore and since availability of -Cachix is our utmost priority, we are going to make sure to complete the rest of the work in a timely manner.</p> +<p>I started by creating a file named <code>vim.nix</code> that would be imported into either +<code>/etc/nixos/configuration.nix</code> or an appropriate NixOps Nix file. This example +is a stub that shows a number of common configuration items:</p> -<hr /> +<p><a href="https://source.mcwhirter.io/craige/nixos-examples/src/branch/master/applications/editors/vim.nix">vim.nix</a>:</p> -<h2 id="what-we-do">What we do</h2> +<pre><code class="nix">with import &lt;nixpkgs&gt; {}; -<p>Automated hosted infrastructure for Nix, reliable and reproducible developer tooling, -to speed up adoption and lower integration cost. 
We offer -<a href="https://hercules-ci.com">Continuous Integration</a> and <a href="https://cachix.org">Binary Caches</a>.</p> - Mon, 30 Sep 2019 00:00:00 +0000 - - - Craige McWhirter: Setting Up Wireless Networking with NixOS - http://mcwhirter.com.au//craige/blog/2019/Setting_Up_Wireless_Networking_with_NixOS/ - http://mcwhirter.com.au//craige/blog/2019/Setting_Up_Wireless_Networking_with_NixOS/ - <p><img alt="NixOS Gears by Craige McWhirter" src="http://mcwhirter.com.au/files/NixOS_Gears.png" title="NixOS Gears by Craige McWhirter" /></p> +vim_configurable.customize { + name = "vim"; # Specifies the vim binary name. + # Below you can specify what usually goes into `~/.vimrc` + vimrcConfig.customRC = '' + " Preferred global default settings: + set number " Enable line numbers by default + set background=dark " Set the default background to dark or light + set smartindent " Automatically insert extra level of indentation + set tabstop=4 " Default tabstop + set shiftwidth=4 " Default indent spacing + set expandtab " Expand [TABS] to spaces + syntax enable " Enable syntax highlighting + colorscheme solarized " Set the default colour scheme + set t_Co=256 " use 265 colors in vim + set spell spelllang=en_au " Default spell checking language + hi clear SpellBad " Clear any unwanted default settings + hi SpellBad cterm=underline " Set the spell checking highlight style + hi SpellBad ctermbg=NONE " Set the spell checking highlight background + match ErrorMsg '\s\+$' " -<p>The current <a href="https://nixos.org/nixos/manual/">NixOS Manual</a> is a little sparse -on details for different options to <a href="https://nixos.org/nixos/manual/index.html#sec-wireless">configure wireless -networking</a>. The -<a href="https://github.com/NixOS/nixpkgs/blob/master/nixos/doc/manual/configuration/wireless.xml">version in -master</a> -is a little better but still ambiguous. I've <a href="https://github.com/NixOS/nixpkgs/pull/66652/files">made a pull -request</a> to resolve -this but in the interim, this documents how to configure a number of wireless -scenarios with NixOS.</p> - -<p>If you're going to use NetworkManager, this is not for you. 
This is for those -of us who want reproducible configurations.</p> - -<p>To enable a wireless connection with no spaces or special characters in the -name that uses a pre-shared key, you first need to generate the raw PSK:</p> - -<pre><code>$ wpa_passphrase exampleSSID abcd1234 -network={ - ssid="exampleSSID" - #psk="abcd1234" - psk=46c25aa68ccb90945621c1f1adbe93683f884f5f31c6e2d524eb6b446642762d -} -</code></pre> + let g:airline_powerline_fonts = 1 " Use powerline fonts + let g:airline_theme='solarized' " Set the airline theme -<p>Now you can add the following stanza to your configuration.nix to enable -wireless networking and this specific wireless connection:</p> + set laststatus=2 " Set up the status line so it's coloured and always on -<pre><code>networking.wireless = { - enable = true; - userControlled.enable = true; - networks = { - exampleSSID = { - pskRaw = "46c25aa68ccb90945621c1f1adbe93683f884f5f31c6e2d524eb6b446642762d"; - }; + " Add more settings below + ''; + # store your plugins in Vim packages + vimrcConfig.packages.myVimPackage = with pkgs.vimPlugins; { + start = [ # Plugins loaded on launch + airline # Lean &amp; mean status/tabline for vim that's light as air + solarized # Solarized colours for Vim + vim-airline-themes # Collection of themes for airlin + vim-nix # Support for writing Nix expressions in vim + ]; + # manually loadable by calling `:packadd $plugin-name` + # opt = [ phpCompletion elm-vim ]; + # To automatically load a plugin when opening a filetype, add vimrc lines like: + # autocmd FileType php :packadd phpCompletion }; -}; +} </code></pre> -<p>If you had another WiFi connection that had spaces and/or special characters in the name, you would configure it like this:</p> +<p>I then needed to import this file into my system packages stanza:</p> -<pre><code>networking.wireless = { - enable = true; - userControlled.enable = true; - networks = { - "example's SSID" = { - pskRaw = "46c25aa68ccb90945621c1f1adbe93683f884f5f31c6e2d524eb6b446642762d"; - }; +<pre><code class="nix"> environment = { + systemPackages = with pkgs; [ + someOtherPackages # Normal package listing + ( + import ./vim.nix + ) + ]; }; -}; </code></pre> -<p>If you need to connect to a hidden network, you would do it like this:</p> +<p>This will then install and configure Vim as you've defined it.</p> -<pre><code>networking.wireless = { - enable = true; - userControlled.enable = true; - networks = { - myHiddenSSID = { - hidden = true; - pskRaw = "46c25aa68ccb90945621c1f1adbe93683f884f5f31c6e2d524eb6b446642762d"; - }; - }; -}; +<p>If you'd like to give this build a run in a non-production space, I've written <a href="https://source.mcwhirter.io/craige/nixos-examples/src/branch/master/applications/editors/vim_vm.nix">vim_vm.nix</a> with which you can build a VM, ssh into afterwards and test the Vim configuration:</p> + +<pre><code class="bash">$ nix-build '&lt;nixpkgs/nixos&gt;' -A vm --arg configuration ./vim_vm.nix +... 
+$ export QEMU_OPTS="-m 4192" +$ export QEMU_NET_OPTS="hostfwd=tcp::18080-:80,hostfwd=tcp::10022-:22" +$ ./result/bin/run-vim-vm-vm </code></pre> -<p>The final scenario that I have, is connecting to open SSIDs that have some kind -of secondary method (like a login in web page) for authentication of -connections:</p> +<p>Then, from a another terminal:</p> -<pre><code>networking.wireless = { - enable = true; - userControlled.enable = true; - networks = { - FreeWiFi = {}; - }; -}; +<pre><code class="bash">$ ssh nixos@localhost -p 10022 </code></pre> -<p>This is all fairly straight forward but was non-trivial to find the answers -too.</p> - Thu, 26 Sep 2019 21:38:34 +0000 +<p>And you should be in a freshly baked NixOS VM with your Vim config ready to be +used.</p> + +<p>There's an always current example of my <a href="https://source.mcwhirter.io/craige/mio-ops/src/branch/master/roles/vim.nix">production Vim +configuration</a> +in my <a href="https://source.mcwhirter.io/craige/mio-ops/">mio-ops</a> repo.</p> + Thu, 14 Nov 2019 04:18:37 +0000 diff --git a/flake.lock b/flake.lock index 2b4af9ace4..a11dcd0ebe 100644 --- a/flake.lock +++ b/flake.lock @@ -37,13 +37,13 @@ }, "released-nixpkgs": { "info": { - "lastModified": 1587398327, - "narHash": "sha256-mEKkeLgUrzAsdEaJ/1wdvYn0YZBAKEG3AN21koD2AgU=" + "lastModified": 1588110979, + "narHash": "sha256-wQofKpzp6/adp+Xg4xmgBEHETkK8Law1SsHFzXLenNQ=" }, "locked": { "owner": "NixOS", "repo": "nixpkgs", - "rev": "5272327b81ed355bbed5659b8d303cf2979b6953", + "rev": "ab3adfe1c769c22b6629e59ea0ef88ec8ee4563f", "type": "github" }, "original": { diff --git a/update.sh b/update.sh index 8801ca7613..20069f9f59 100755 --- a/update.sh +++ b/update.sh @@ -6,5 +6,4 @@ UPDATE=1 nix run nixpkgs#gnumake nixpkgs#curl -c make update --keep-going || tru nix flake update \ --update-input released-nixpkgs \ - --update-input nix-pills \ - || true + --update-input nix-pills