
Initial commit.

0 parents commit 0e2d59902a5cfd5fb4aca103f879d4df79156c07 @sundbp sundbp committed Mar 25, 2013
11 .gitignore
@@ -0,0 +1,11 @@
+/target
+/lib
+/classes
+/checkouts
+/resources/custom-debs
+pom.xml
+*.jar
+*.class
+.lein-deps-sum
+.lein-failures
+.lein-plugins
201 README.md
@@ -0,0 +1,201 @@
+# lxc-crate
+
+A Pallet crate to work with KVM servers and KVM guests.
+
+There is no KVM backend for jclouds/pallet and this is a first attempt
+at "manually" supporting KVM. By manually I mean that all operations
+are done via the node-list provider and lift (converge is never used).
+
+Hence you define your servers and guests in a config map (the format is
+described further down), then perform specific phases on the KVM server(s) to:
+* Set up an already existing host as a KVM server
+* Create a KVM guest VM on a given KVM server
+* Create one big VLAN spanning several KVM servers
+
+After that you can of course perform operations on the KVM guest VMs
+directly without going via the KVM server.
+
+For the moment kvm-crate assumes KVM servers are Ubuntu 12.10 hosts. This
+restriction can be loosened in the future if others provide variations that
+work on other distributions and versions.
+
+kvm-crate utilizes the following for setting up VMs and networking:
+* python-vm-builder to create VM images from scratch (if not using base images)
+* libvirt for managing guests
+* openvswitch for networking (and GRE+IPSec for connecting OVS on several KVM servers)
+
+## Configuring a KVM server
+
+First of all, make sure the following things hold true:
+
+1. Your intended KVM server host exists and runs Ubuntu 12.10
+2. The host is included in your hosts config map with a proper config
+
+Step 2 will be described in more detail shortly, but let's first see how
+the call looks (assuming you are use'ing the kvm-crate.api namespace):
+
+    (with-nodelist-config [hosts-config {}]
+      (configure-kvm-server "host.to.configure"))
+
+*with-nodelist-config* comes from the *pallet-nodelist-helpers* helper project.
+The second argument to *with-nodelist-config* is a map that will be passed as the
+:environment argument to any subsequent lift operation that takes place under the
+covers (and is hence available to any of your own pallet plan functions).
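For illustration, here is a sketch of how one of your own plan functions might read values from that environment. The `[:host-config ...]` path mirrors what this crate's own plan functions use; the `my-project.plans` namespace and function name are invented for the example:

```clojure
(ns my-project.plans
  "Illustrative only - reading the :environment map that
   with-nodelist-config passes to each lift under the covers."
  (:require [pallet.crate :as crate :refer [defplan]]
            [pallet.environment :as env]))

(defplan print-host-type
  "Read this host's :host-type out of the environment."
  []
  (let [node-hostname (crate/target-name)
        host-type (env/get-environment [:host-config node-hostname :host-type])]
    ;; :host-type comes from the hosts-config map you supplied
    (println node-hostname "is a" host-type)))
```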
+
+The format of the hosts-config argument that *configure-kvm-server* looks for is this (it's ok to
+add additional content):
+
+    {"host.to.configure" {:host-type :kvm-server
+                          :group-spec kvm-crate.specs/kvm-server-g
+                          :ip "probably same as host0-ip4-ip below"
+                          :private-ip "probably same as host1-ip4-ip below" ;; ip on the private network
+                          :private-hostname "host-int.to.configure" ;; hostname on the private network
+                          :admin-user {:username "root"
+                                       :ssh-public-key-path (utils/resource-path "ssh-keys/kvm-keys/kvm-id_rsa.pub")
+                                       :ssh-private-key-path (utils/resource-path "ssh-keys/kvm-keys/kvm-id_rsa")
+                                       :passphrase "foobar"}
+                          :interfaces-file "ovs/interfaces" ;; template for /etc/network/interfaces
+                          :interface-config {:host0-ip4-ip "public.iface.ip.address"
+                                             :host0-ip4-broadcast "public.iface.bcast"
+                                             :host0-ip4-netmask "public.iface.netmask"
+                                             :host0-ip4-gateway "public.iface.gw"
+                                             :host0-ip4-net "public.iface.net"
+                                             :host1-ip4-ip "private.iface.address"
+                                             :host1-ip4-broadcast "private.iface.bcast"
+                                             :host1-ip4-netmask "private.iface.netmask"
+                                             :host1-ip4-net "private.iface.net"}
+                          :ovs-setup (utils/resource-path "ovs/ovs-setup.sh")}}
+
+(The function *utils/resource-path* is from the namespace pallet.utils and
+is handy for referring to paths somewhere on the local machine classpath.)
+
+I've left the configuration of the Open vSwitch network pretty free-form. Hence
+the parts in *:interface-config*, *:interfaces-file* and *:ovs-setup* are free-form
+and need to be compatible with each other. You can find an example in
+*kvm-crate/resources/ovs/interfaces-sample* and *kvm-crate/resources/ovs/ovs-setup.sh-sample*.
+It is compatible with the config example above. The example assumes one public interface
+on eth0 that is added to the OVS, and we create two interfaces, pub0 and priv0. The pub0
+interface gets the same config as eth0 would have had before we made it part of
+the OVS setup, and priv0 is an interface we put on the private network we'll
+set up for the KVM guest VMs. We can use this interface to do things like serve
+DHCP and DNS on the private network.
+
+The *ovs-setup* script is a free-form set of instructions that is supposed
+to properly configure OVS, taking into account how your /etc/network/interfaces
+and, later, the private network for the KVM guest VMs are set up. The example
+mentioned above is compatible with the setup described in the previous paragraph.
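As a rough illustration of how the two pieces fit together (this is not the actual sample content; bridge name, interface names and placeholders are invented), the interfaces template declares pub0/priv0 while the ovs-setup script creates the bridge, attaches eth0, and adds the two internal ports:

```
# /etc/network/interfaces fragment (illustrative only)
auto pub0
iface pub0 inet static
    address <host0-ip4-ip>
    netmask <host0-ip4-netmask>
    gateway <host0-ip4-gateway>

auto priv0
iface priv0 inet static
    address <host1-ip4-ip>
    netmask <host1-ip4-netmask>

# ovs-setup.sh fragment (illustrative only)
ovs-vsctl add-br ovsbr0
ovs-vsctl add-port ovsbr0 eth0
ovs-vsctl add-port ovsbr0 pub0 -- set interface pub0 type=internal
ovs-vsctl add-port ovsbr0 priv0 -- set interface priv0 type=internal
```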
+
+The *configure-kvm-server* function in the *api* namespace can be used
+as a convenience method to perform the KVM server setup (as opposed to
+manually using lift). Note that it makes the network changes on the
+KVM server take effect 2 minutes after it has run (via the at command).
+This is because the network changes may cause our connection to drop and
+signal an error. To avoid this we delay execution until after we have
+disconnected from the host. **NOTE: if you mess up the ovs-setup steps it's
+quite possible to lock yourself out of the remote machine, so please take
+extra special care making sure it will run cleanly, since you may not get
+a 2nd shot if you don't have console access to the host!**
+
+You can use the *configure-kvm-server* function in your own functions to
+create more complete functions that do more than just the KVM server
+configuration. For example, this is how we set up newly ordered Hetzner
+servers as KVM servers:
+
+    (defn configure-hetzner-kvm-server
+      "Setup a KVM server in Hetzner data-center"
+      [hostname]
+      (helpers/with-nodelist-config [hosts-config/config {}]
+        (println (format "Initial setup for Hetzner host %s.." hostname))
+        (hetzner-api/hetzner-initial-setup hostname)
+        (println (format "Configuring KVM server for %s.." hostname))
+        (kvm-api/configure-kvm-server hostname)
+        (println (format "Finished configuring KVM server %s. Note: it will perform the network changes in 2min!" hostname))))
+
+(The *hetzner-initial-setup* function performs the actions needed
+for a fresh Hetzner machine.)
+
+A tip when working with the host config maps is to DRY up your code by
+defining parameterized functions that produce maps of certain types
+representing certain types of hosts. This way you can also compose
+several such functions via *merge*. An example is that you could have
+one function producing the config map required by the Hetzner crate,
+and another for the kvm crate. Use both of these + merge to create
+a config map for a host at Hetzner that is also a KVM server.
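As a sketch of that pattern (the helper function names and values here are invented for the example):

```clojure
;; Illustrative only: compose per-concern config fragments with merge.
(defn hetzner-host-config
  "Config map fragment required by the Hetzner crate (hypothetical)."
  [ip]
  {:data-center :hetzner
   :ip ip})

(defn kvm-server-host-config
  "Config map fragment required by the kvm crate (hypothetical)."
  [group-spec]
  {:host-type :kvm-server
   :group-spec group-spec})

;; A host at Hetzner that is also a KVM server: merge both fragments.
(def hosts-config
  {"host.to.configure"
   (merge (hetzner-host-config "1.2.3.4")
          (kvm-server-host-config 'kvm-crate.specs/kvm-server-g))})
```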
+
+## Creating a KVM guest VM
+
+NOT IMPLEMENTED YET
+
+## Connecting the OVS's of several KVM servers
+
+To create one big VLAN where KVM guests on several KVM servers can
+communicate with each other we use GRE+IPSec to connect the
+openvswitches of the KVM servers.
+
+The *connect-kvm-servers* function in the *api* namespace will
+connect your KVM servers as specified by the gre connections in
+the config. This is again done in a pretty free form way to
+give maximum flexibility, at the cost of making it possible for
+you to shoot yourself in the foot. The host config map info related
+to GRE connections takes the following form:
+
+    {"host2.to.configure" {:gre-connections
+                           [{:bridge "ovsbr1"
+                             :iface "gre0"
+                             :remote-ip "1.2.3.4"
+                             :psk "my secret key"}]}
+     "host1.to.configure" {:gre-connections
+                           [{:bridge "ovsbr1"
+                             :iface "gre0"
+                             :remote-ip "4.3.2.1"
+                             :psk "my secret key"}]}}
+
+In this example we only have two hosts and there's one GRE connection
+setup between the two. If we had more we'd just specify pairs of
+matching GRE connections for the appropriate hosts (note the vector
+of hashes, just add more hashes to have more connections for a given
+host).
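For example, a hypothetical third host meshed to both hosts above would carry two maps in its :gre-connections vector, one tunnel per peer (and host1 and host2 would each gain a matching gre1 entry pointing back at it):

```clojure
;; Illustrative only: a third host with tunnels to both existing hosts.
{"host3.to.configure"
 {:gre-connections
  [{:bridge "ovsbr1"
    :iface "gre0"
    :remote-ip "1.2.3.4" ;; host1, per the example above
    :psk "my secret key"}
   {:bridge "ovsbr1"
    :iface "gre1"
    :remote-ip "4.3.2.1" ;; host2, per the example above
    :psk "my secret key"}]}}
```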
+
+The function takes no arguments and can be called like this:
+
+ (kvm-crate.api/connect-kvm-servers)
+
+### Using one KVM server on the private network as DHCP server for guest VMs
+
+In order to be able to assign guest VMs IPs via DHCP we need to assign one
+server to act as DHCP server for the private network. We use *dnsmasq* for
+this purpose, as well as acting as a forwarding DNS server for the guest VMs.
+
+To do this I've provided the server spec *kvm-crate.specs/kvm-dhcp-server*
+that your group spec for the actual KVM server should inherit from.
+
+This spec has a *:configure* phase that does the following:
+
+* Use an upstart job to start dnsmasq when our internal interface is up.
+* Update the dnsmasq dhcp config (hosts and options file).
+
+Notes:
+
+* An example dnsmasq options file can be found in
+  *resources/ovs/ovs-net-dnsmasq.opts-sample*.
+* You can use *(kvm-crate.api/update-dhcp-config dhcp-server-hostname)* to
+ dynamically generate the dnsmasq hosts file (and reread the config via
+ kill -HUP) including all guest VMs.
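For orientation, a dnsmasq options file for this setup might contain entries along these lines (values invented; the real sample is in the file mentioned above):

```
# Illustrative dnsmasq options for the private network
interface=priv1
bind-interfaces
dhcp-range=10.0.0.100,10.0.0.200,12h
dhcp-hostsfile=/etc/dnsmasq.d/ovs-net-dnsmasq.hosts
```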
+
+The host config map needed to support the spec and the update function is:
+
+    {"dhcp.server.to.configure" {:dhcp-interface "priv1"
+                                 :dnsmasq-optsfile (utils/resource-path "ovs/ovs-net-dnsmasq.opts")}}
+
+### Ensuring all hosts (servers and guests) have proper /etc/hosts files
+
+You can use *kvm-crate.api/update-etc-hosts-files* to update the /etc/hosts files
+of all hosts with the non-DHCP assigned IPs on the private network.
+
+## License
+
+Copyright © 2013 Board Intelligence
+
+Distributed under the MIT License, see
+[http://boardintelligence.mit-license.org](http://boardintelligence.mit-license.org)
+for details.
3 doc/intro.md
@@ -0,0 +1,3 @@
+# Introduction to kvm-crate
+
+TODO: write [great documentation](http://jacobian.org/writing/great-documentation/what-to-write/)
19 project.clj
@@ -0,0 +1,19 @@
+(defproject boardintelligence/lxc-crate "0.1.0-SNAPSHOT"
+  :description "Pallet crate for working with LXC servers and containers"
+  :url "https://github.com/boardintelligence/lxc-crate"
+  :license {:name "MIT"
+            :url "http://boardintelligence.mit-license.org"}
+
+  :dependencies [[org.clojure/clojure "1.4.0"]
+                 [com.palletops/pallet "0.8.0-beta.5"]
+                 [ch.qos.logback/logback-classic "1.0.7"]
+                 [boardintelligence/pallet-nodelist-helpers "0.1.0-SNAPSHOT"]]
+
+  :dev-dependencies [[com.palletops/pallet "0.8.0-beta.5" :type "test-jar"]
+                     [com.palletops/pallet-lein "0.6.0-beta.7"]]
+
+  :profiles {:dev
+             {:dependencies [[com.palletops/pallet "0.8.0-beta.5" :classifier "tests"]]
+              :plugins [[com.palletops/pallet-lein "0.6.0-beta.7"]]}}
+
+  :local-repo-classpath true)
30 resources/lxc/lxc-defaults
@@ -0,0 +1,30 @@
+# MIRROR to be used by ubuntu template at container creation:
+# Leaving it undefined is fine
+#MIRROR="http://archive.ubuntu.com/ubuntu"
+# or
+#MIRROR="http://<host-ip-addr>:3142/archive.ubuntu.com/ubuntu"
+
+# LXC_AUTO - whether or not to start containers symlinked under
+# /etc/lxc/auto
+LXC_AUTO="true"
+
+# Leave USE_LXC_BRIDGE as "true" if you want to use lxcbr0 for your
+# containers. Set to "false" if you'll use virbr0 or another existing
+# bridge, or macvlan to your host's NIC.
+USE_LXC_BRIDGE="false"
+
+# If you change the LXC_BRIDGE to something other than lxcbr0, then
+# you will also need to update your /etc/lxc/lxc.conf as well as the
+# configuration (/var/lib/lxc/<container>/config) for any containers
+# already created using the default config to reflect the new bridge
+# name.
+# If you have the dnsmasq daemon installed, you'll also have to update
+# /etc/dnsmasq.d/lxc and restart the system wide dnsmasq daemon.
+LXC_BRIDGE="lxcbr0"
+LXC_ADDR="10.0.3.1"
+LXC_NETMASK="255.255.255.0"
+LXC_NETWORK="10.0.3.0/24"
+LXC_DHCP_RANGE="10.0.3.2,10.0.3.254"
+LXC_DHCP_MAX="253"
+
+LXC_SHUTDOWN_TIMEOUT=120
5 resources/lxc/lxc-tmp.conf
@@ -0,0 +1,5 @@
+lxc.network.type=veth
+lxc.network.script.up=/etc/lxc/ovsup
+#lxc.network.script.down=/etc/lxc/ovsdown
+lxc.network.hwaddr=~{mac}
+lxc.network.flags=up
2 resources/lxc/ovsdown
@@ -0,0 +1,2 @@
+#!/bin/bash
+ovs-vsctl del-port ~{bridge} $5
3 resources/lxc/ovsup
@@ -0,0 +1,3 @@
+#!/bin/bash
+ifconfig $5 0.0.0.0 up
+ovs-vsctl add-port ~{bridge} $5
72 src/lxc_crate/api.clj
@@ -0,0 +1,72 @@
+(ns lxc-crate.api
+  (:require
+   [pallet.algo.fsmop :as fsmop]
+   [pallet.node :as node]
+   [pallet.configure :as configure]
+   [pallet.compute :as compute]
+   [pallet.api :as api]
+   [pallet-nodelist-helpers :as helpers]
+   [lxc-crate.lxc :as lxc]))
+
+(defn host-is-lxc-server?
+  "Check if a host is an LXC server (as understood by the lxc-crate)"
+  [hostname]
+  (helpers/host-has-phase? hostname :create-lxc-container))
+
+(defn host-is-lxc-image-server?
+  "Check if a host is an image server (as understood by the lxc-crate)"
+  [hostname]
+  (helpers/host-has-phase? hostname :create-lxc-image-step1))
+
+(defn create-lxc-image
+  "Create an LXC image on a given image server."
+  [image-server image-spec-name image-spec]
+  (helpers/ensure-nodelist-bindings)
+  (when-not (host-is-lxc-image-server? image-server)
+    (throw (IllegalArgumentException. (format "%s is not an image server!" image-server))))
+  (let [image-server-conf (get-in helpers/*nodelist-hosts-config* [image-server :image-server])
+        tmp-hostname (:tmp-hostname image-server-conf)]
+
+    (println "create-lxc-image:")
+    (println "Running step 1 - create minimal base container..")
+    (let [result (helpers/run-one-plan-fn image-server lxc/create-lxc-image-step1 {:image-spec image-spec})]
+      (when (fsmop/failed? result)
+        (throw (IllegalStateException. "Failed to create image (step1)!"))))
+
+    (println "Waiting for container to spin up (10s)..")
+    (Thread/sleep (* 10 1000)) ;; give it a chance to spin up
+
+    (println "Running step 2 - image setup function..")
+    (let [result (helpers/run-one-plan-fn tmp-hostname
+                                          ;;tmp-admin-user
+                                          lxc/create-lxc-image-step2
+                                          {:image-spec image-spec})]
+      (when (fsmop/failed? result)
+        (throw (IllegalStateException. "Failed to create image (step2)!"))))
+
+    (println "Waiting for container to halt in 1min (and 10s)..")
+    (Thread/sleep (* 70 1000))
+    (println "Running step 3 - take image snapshot and destroy tmp container..")
+    (let [result (helpers/run-one-plan-fn image-server lxc/create-lxc-image-step3 {:image-spec image-spec
+                                                                                   :image-spec-name image-spec-name})]
+      (when (fsmop/failed? result)
+        (throw (IllegalStateException. "Failed to create image (step3)!"))))
+    (println "Finished - lxc image created")))
+
+;;;;;;; for dnsmasq crate
+
+;; (defn host-is-lxc-dhcp-server?
+;;   "Check if a host is a DHCP server (as understood by the lxc-crate)"
+;;   [hostname]
+;;   (helpers/host-has-phase? hostname :update-dhcp-config))
+
+;; (defn update-dhcp-config
+;;   "Update the dhcp hosts file on DHCP server for private LAN."
+;;   [dhcp-server]
+;;   (helpers/ensure-nodelist-bindings)
+;;   (when-not (host-is-lxc-dhcp-server? dhcp-server)
+;;     (throw (IllegalArgumentException. (format "%s is not a dhcp server!" dhcp-server))))
+;;   (let [result (helpers/lift-one-node-and-phase dhcp-server :update-dhcp-config)]
+;;     (when (fsmop/failed? result)
+;;       (throw (IllegalStateException. "Failed to update dhcp config!")))
+;;     result))
196 src/lxc_crate/lxc.clj
@@ -0,0 +1,196 @@
+(ns lxc-crate.lxc
+  "Crate with functions for setting up and configuring LXC servers and containers"
+  (:require [pallet.actions :as actions]
+            [pallet.utils :as utils]
+            [pallet.crate :as crate]
+            [pallet.environment :as env]
+            [pallet.crate.ssh-key :as ssh-key]
+            [pallet.crate :refer [defplan]]))
+
+(defplan install-packages
+  "Install all needed packages for LXC."
+  []
+  (actions/package "lxc"))
+
+(defplan install-lxc-defaults
+  "Install our lxc defaults, we don't want the default lxcbr0 bridge."
+  []
+  (actions/remote-file "/etc/default/lxc"
+                       :local-file (utils/resource-path "lxc/lxc-defaults")
+                       :literal true)
+  (actions/exec-checked-script
+   "Restart LXC net"
+   ("service lxc-net restart")))
+
+(defplan remove-dnsmasq-file
+  "Remove the default dnsmasq file for lxc"
+  []
+  (actions/exec-checked-script
+   "Delete dnsmasq file for lxc"
+   ("rm -f /etc/dnsmasq.d/lxc")))
+
+(defplan relink-var-lib-lxc-to-home
+  "We prefer to put the containers on /home for space reasons, use a symlink."
+  []
+  (actions/exec-checked-script
+   "Link lxc container location to /home/lxc"
+   ("rm -rf /var/lib/lxc")
+   ("mkdir -p /home/lxc")
+   ("chmod 755 /home/lxc")
+   ("ln -s /home/lxc /var/lib/lxc")))
+
+(defplan install-ovs-scripts
+  "Install helper scripts we need to connect containers to OVS."
+  []
+  (let [node-hostname (crate/target-name)
+        bridge (env/get-environment [:host-config node-hostname :ovs :bridge])]
+    (actions/remote-file "/etc/lxc/ovsup"
+                         :mode "0755"
+                         :literal true
+                         :template (utils/resource-path "lxc/ovsup")
+                         :values {:bridge bridge})
+    (actions/remote-file "/etc/lxc/ovsdown"
+                         :mode "0755"
+                         :literal true
+                         :template (utils/resource-path "lxc/ovsdown")
+                         :values {:bridge bridge})))
+
+(defplan setup-lxc
+  "Perform all setup needed to make a host an LXC container server."
+  []
+  (install-packages)
+  (install-lxc-defaults)
+  (remove-dnsmasq-file)
+  (relink-var-lib-lxc-to-home)
+  (install-ovs-scripts))
+
+(defplan create-lxc-container
+  "Create a new LXC container"
+  []
+  )
+
+(defplan setup-image-server
+  "Perform all setup needed to make a host an LXC image server."
+  []
+  (let [node-hostname (crate/target-name)
+        mac (env/get-environment [:host-config node-hostname :image-server :tmp-mac])]
+    (actions/package "rdiff-backup")
+    (actions/exec-checked-script
+     "Make sure we have directories needed for image server"
+     ("mkdir -p /home/image-server/images")
+     ("mkdir -p /home/image-server/ssh-keys")
+     ("mkdir -p /home/image-server/etc"))
+    ;; TODO: install known config file into etc used to spin up temp containers
+    (actions/remote-file "/etc/lxc/lxc-tmp.conf"
+                         :mode "0644"
+                         :literal true
+                         :template "lxc/lxc-tmp.conf"
+                         :values {:mac mac})))
+
+(defplan create-base-container
+  "Create a base container, just the lxc-create step."
+  [name conf-file template release auth-key-path]
+  (actions/exec-checked-script
+   "Create a base container via lxc-create"
+   ("lxc-create -n" ~name "-f" ~conf-file "-t" ~template "-- --release" ~release "--auth-key" ~auth-key-path)))
+
+(defplan boot-up-container
+  "Boot a given container"
+  [name]
+  (actions/exec-checked-script
+   "Start lxc container"
+   ("lxc-start -d -n" ~name)))
+
+(defplan halt-container
+  "Halt a given container"
+  [name]
+  (actions/exec-checked-script
+   "Halt lxc container"
+   (if-not (= @(pipe ("lxc-info -n" ~name)
+                     ("grep RUNNING")) "")
+     ("lxc-halt -n" ~name))))
+
+(defplan take-image-snapshot
+  "Take a snapshot of the image of a given container."
+  [hostname spec-name]
+  (let [image-dir (format "/var/lib/lxc/%s" hostname)
+        backup-dir (format "/home/image-server/images/%s" spec-name)]
+    (actions/exec-checked-script
+     "Take a snapshot of a given image"
+     ("rdiff-backup" ~image-dir ~backup-dir))))
+
+(defplan destroy-container
+  "Destroy a given container"
+  [name]
+  (actions/exec-checked-script
+   "Destroy lxc container"
+   ("lxc-destroy -n" ~name "-f")))
+
+(defplan create-lxc-image-step1
+  "Create an LXC container image according to a given spec - step 1.
+   Step 1 will create the base container and start it up."
+  []
+  (let [server (crate/target-name)
+        tmp-hostname (env/get-environment [:host-config server :image-server :tmp-hostname])
+        ssh-public-key (env/get-environment [:host-config tmp-hostname :admin-user :ssh-public-key-path])
+        image-spec (env/get-environment [:image-spec])
+        remote-ssh-key-path (format "/home/image-server/ssh-keys/%s.pub" tmp-hostname)
+        remote-image-dir (format "/var/lib/lxc/%s" tmp-hostname)]
+    (actions/remote-file remote-ssh-key-path
+                         :mode "0644"
+                         :literal true
+                         :local-file ssh-public-key)
+    (actions/exec-checked-script
+     "Ensure old container not in the way"
+     ("lxc-stop -n" ~tmp-hostname)
+     ("sleep 5")
+     ("rm -rf" ~remote-image-dir))
+    (create-base-container tmp-hostname
+                           "/etc/lxc/lxc-tmp.conf"
+                           (get-in image-spec [:lxc-create :template])
+                           (get-in image-spec [:lxc-create :release])
+                           remote-ssh-key-path)
+
+    (boot-up-container tmp-hostname)))
+
+(defplan create-lxc-image-step2
+  "Create an LXC container image according to a given spec - step 2.
+   Step 2 will run the image setup function in the tmp container.
+   It will also install the wanted root ssh key."
+  []
+  (let [tmp-hostname (crate/target-name)
+        image-spec (env/get-environment [:image-spec])
+        root-auth-key-path (env/get-environment [:image-spec :root-auth-key])]
+
+    ;; first run the setup-fn
+    (when (:setup-fn image-spec)
+      ((:setup-fn image-spec)))
+
+    ;; then install the right root ssh key
+    (actions/exec-checked-script
+     "Remove tmp ssh key"
+     ("rm -f /root/.ssh/authorized_keys"))
+    (ssh-key/authorize-key "root" (slurp root-auth-key-path))
+
+    (actions/exec-checked-script
+     "Halt tmp instance."
+     ("apt-get install -y at")
+     (pipe ("echo halt")
+           ("at -M now + 1 minute")))))
+
+(defplan create-lxc-image-step3
+  "Create an LXC container image according to a given spec - step 3.
+   Step 3 will take a snapshot of the tmp container image and then destroy it."
+  []
+  (let [server (crate/target-name)
+        tmp-hostname (env/get-environment [:host-config server :image-server :tmp-hostname])
+        spec-name (env/get-environment [:image-spec-name])]
+    (println "Taking snapshot of image..")
+    (take-image-snapshot tmp-hostname spec-name)
+    (println "Destroying tmp container..")
+    (destroy-container tmp-hostname)))
+
+;; (defplan create-guest-vm
+;; "TODO: implement"
+;; []
+;; )
29 src/lxc_crate/specs.clj
@@ -0,0 +1,29 @@
+(ns lxc-crate.specs
+  "Server and group specs for working with LXC servers and containers"
+  (:require
+   [pallet.api :as api]
+   [lxc-crate.lxc :as lxc]))
+
+(def ^{:doc "Server spec for an LXC server (host)."}
+  lxc-server
+  (api/server-spec
+   :phases
+   {:configure (api/plan-fn (lxc/setup-lxc))
+    :create-lxc-container (api/plan-fn (lxc/create-lxc-container))}))
+
+(def ^{:doc "Server spec for an LXC image server (host)."}
+  lxc-image-server
+  (api/server-spec
+   :phases
+   {:configure (api/plan-fn (lxc/setup-image-server))
+    :create-lxc-image-step1 (api/plan-fn (lxc/create-lxc-image-step1))}))
+
+(def ^{:doc "Spec for an LXC container."}
+  lxc-container
+  (api/server-spec
+   :phases
+   {;;:firstboot (api/plan-fn (kvm/setup-guest-vm-firstboot))
+    }))
7 test/lxc_crate/api_test.clj
@@ -0,0 +1,7 @@
+(ns lxc-crate.api-test
+  (:use clojure.test
+        lxc-crate.api))
+
+(deftest a-test
+  (testing "FIXME, I fail."
+    (is (= 0 1))))
