Multi-node IBM Cloud Private Community Edition 3.1.2 with Kubernetes 1.12.4 in a box. A Terraform-, Packer-, and BASH-based Infrastructure as Code script set stands up a multi-node LXD cluster and installs ICP-CE and CLIs on a bare-metal or VM Ubuntu 18.04 host.
Welcome to my IBM Cloud Private (Community Edition) on Linux Containers Infrastructure as Code (IaC). With this IaC, developers can easily set up a multi-node virtual ICP cluster on a single Linux bare-metal machine or VM.

This IaC not only takes away the pain of manual configuration, but also saves valuable resources (nodes) by using a single host machine to provide a multi-node ICP Kubernetes experience. It installs the required CLIs, sets up LXD, sets up ICP-CE, and adds some utility scripts.

Because ICP is installed on LXD VMs, it can be installed and removed without any impact on the host environment. Only LXD, the CLIs, and other desired/required packages are installed on the host itself.

ICP 3.1.2 - Getting started
High Level Architecture
Supported Platforms
Topologies
View Install Configuration
Usage
Post Install
Screenshots

High Level Architecture

An example 4 node topology

Supported platforms

| Host | Guest VM | ICP-CE | LXD | Min. Compute Power | User Privileges |
|------|----------|--------|-----|--------------------|-----------------|
| Ubuntu 18.04 | Ubuntu 18.04 | 3.1.2 | 3.0.3 (apt) | 8-core, 16 GB RAM, 300 GB disk | root |

Topologies

| Boot (B) | Management (M) | Proxy (P) | Worker (W) |
|----------|----------------|-----------|------------|
| 1 (B/ME/M/P) | - | - | 1+* |
| 1 (B/ME/M) | - | 1 | 1+* |
| 1 (B/ME/P) | 1 | - | 1+* |
| 1 (B/ME) | 1 | 1 | 1+* |
*Set the desired worker node count in install.properties before setting up the cluster.
Supported topologies are based on the ICP architecture.
ICP Community Edition does not support HA: the Master, Management, and Proxy node counts must always be 1.
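Given these constraints, the total container count follows directly from the two node flags and the worker count. The sketch below illustrates the node math; `PROXY_NODE` and `MGMT_NODE` are real property names from install.properties, while the worker-count argument is simply passed in here (the actual property name for it is not shown in this README).

```shell
#!/usr/bin/env bash
# Sketch: compute the total LXD container count from topology settings.
# PROXY_NODE / MGMT_NODE mirror install.properties flags; the worker
# count is passed as a plain argument for illustration.
count_nodes() {
  local proxy_node="$1" mgmt_node="$2" workers="$3"
  local total=1                                    # combined Boot/Master/Etcd node
  [ "$proxy_node" = "y" ] && total=$((total + 1))  # separate Proxy node
  [ "$mgmt_node" = "y" ] && total=$((total + 1))   # separate Management node
  total=$((total + workers))                       # 1+ worker nodes
  echo "$total"
}

count_nodes y y 3   # 1 (B/ME) + 1 (M) + 1 (P) + 3 workers = 6
count_nodes n n 1   # 1 (B/ME/M/P) + 1 worker = 2
```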

Usage

Git clone:

  sudo su -
  git clone https://github.com/HSBawa/icp-ce-on-linux-containers.git
  cd icp-ce-on-linux-containers

Update install properties:

  For simplified setup, there is a single install.properties file that covers configuration for the CLIs, LXD, and ICP.

  Examples:
  ## Use y to create separate Proxy, Management Nodes
  PROXY_NODE=y
  MGMT_NODE=y

  ## If for some reason public/external IP lookup fails or returns an incorrect address,
  ## set lookup to 'n', manually provide the IP addresses, and then re-create the cluster
  ICP_AUTO_LOOKUP_HOST_IP_ADDRESS_AS_LB_ADDRESS=y
  ICP_MASTER_LB_ADDRESS=none
  ICP_PROXY_LB_ADDRESS=none

  ## Enable/Disable management services ####
  ICP_MGMT_SVC_CUST_METRICS=enabled
  ICP_MGMT_SVC_IMG_SEC_ENFORCE=enabled
  ICP_MGMT_SVC_METERING=enabled
  ...

  ## Used for console/scripted login, provide your choice of username and password
  ## Default namespace will be added to auto-generated login helper script
  ICP_DEFAULT_NAMESPACE=default
  ICP_DEFAULT_ADMIN_USER=admin
  ICP_DEFAULT_ADMIN_PASSWORD=xxxxxxx
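Key=value pairs like these are straightforward to read with standard shell tooling. As a minimal sketch (the `get_prop` helper is mine, not part of the repo's scripts), one property can be pulled out of the file like this:

```shell
#!/usr/bin/env bash
# Sketch: read a key=value property from an install.properties-style file.
# get_prop is a hypothetical helper, not part of the repo's scripts.
get_prop() {
  local file="$1" key="$2"
  # Match the exact key at line start, print everything after the first '='.
  grep -E "^${key}=" "$file" | tail -n 1 | cut -d'=' -f2-
}

# Demo against a throwaway properties file:
cat > /tmp/install.properties.demo <<'EOF'
## Use y to create separate Proxy, Management Nodes
PROXY_NODE=y
ICP_DEFAULT_NAMESPACE=default
EOF

get_prop /tmp/install.properties.demo PROXY_NODE              # prints: y
get_prop /tmp/install.properties.demo ICP_DEFAULT_NAMESPACE   # prints: default
```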

Create cluster:

 Usage:    ./create-cluster.sh [options]
              -es or --env-short : Environment name in short form, e.g. test, dev, demo.
              -f  or --force     : [yY]|[yY][eE][sS] or n. Delete cluster LXD components from a past install.
              -h  or --host      : Host type information: pc (default), vsi, fyre, aws, or othervm.
              help               : Print this usage.

  Examples: ./create-cluster.sh --host=fyre
            ./create-cluster.sh --host=fyre -f
            ./create-cluster.sh -es=demo --force --host=pc
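Options in the `-es=demo`/`--host=fyre` style are typically handled with a `case` loop over the arguments. The sketch below is illustrative only and is not the repo's actual parser; the default values it assigns are assumptions.

```shell
#!/usr/bin/env bash
# Sketch: parse create-cluster.sh-style options (-es=, -f/--force, --host=).
# Illustrative only; defaults and exact flag forms are assumptions.
parse_args() {
  ENV_SHORT="dev"; FORCE="n"; HOST="pc"
  local arg
  for arg in "$@"; do
    case "$arg" in
      -es=*|--env-short=*) ENV_SHORT="${arg#*=}" ;;   # strip up to first '='
      -f|--force)          FORCE="y" ;;
      -h=*|--host=*)       HOST="${arg#*=}" ;;
      help)                echo "usage: ./create-cluster.sh [options]"; return 0 ;;
      *)                   echo "unknown option: $arg" >&2; return 1 ;;
    esac
  done
}

parse_args -es=demo --force --host=fyre
echo "$ENV_SHORT $FORCE $HOST"   # demo y fyre
```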

  Important Notes:
     - It is important to use the right `host` parameter for your host machine/VM.
     - The LXD cluster uses an internal, private subnet. To expose this cluster, HAProxy is installed and configured by default to enable remote access.
     - Use of a `static external IP` is recommended.
     - If the external IP changes after the build, remote access to the cluster will fail and a new build will be required.
     - This IaC has not been tested with LXD installed via snap. I ran into so many issues with it that I switched to the APT-based 3.0.3, which is considered production stable.
     - During install, if you encounter the error "...Failed container creation: Create LXC container: LXD doesn't have a uid/gid allocation...", validate that the files '/etc/subgid' and '/etc/subuid' have content similar to the following:
           lxd:100000:65536
           root:100000:65536
           [username goes here]:165536:65536
     - During install, if your build is stuck on the following message for more than 10 minutes: "....icp_ce_master: Still creating... ", perform the following steps:
           * Cancel the installation (Ctrl-C). This may need to be done more than once.
           * Destroy the cluster (./destroy-cluster.sh)
           * Create the cluster  (./create-cluster.sh)

           If you still see this issue, open a Git issue with as many details as possible, and I will take a look.
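The subuid/subgid check described above can be scripted. A minimal sketch, parameterized on the file path so it is not tied to /etc/subuid (the `has_subid_entry` helper is mine, not part of the repo):

```shell
#!/usr/bin/env bash
# Sketch: verify a subordinate-ID file (e.g. /etc/subuid, /etc/subgid)
# contains a "name:start:count" entry for the given name.
has_subid_entry() {
  local file="$1" name="$2"
  grep -Eq "^${name}:[0-9]+:[0-9]+$" "$file"
}

# Demo against a throwaway file shaped like the expected content:
cat > /tmp/subuid.demo <<'EOF'
lxd:100000:65536
root:100000:65536
EOF

if has_subid_entry /tmp/subuid.demo lxd; then
  echo "lxd allocation present"
else
  echo "missing lxd allocation; LXD container creation will fail" >&2
fi
```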

Download cloudctl and helm clis:

 ./download_icp_cloudctl_helm.sh

Login into cluster:

 ./icp-login-3.1.2-ce.sh
 or
 cloudctl login -a https://<internal_master_ip>:8443 -u <default_admin_user> -p <default_admin_password> -c id-devicpcluster-account -n default --skip-ssl-validation
 or
 cloudctl login -a https://<public_ip>:8443 -u <default_admin_user> -p <default_admin_password> -c id-devicpcluster-account -n default --skip-ssl-validation
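The generated login helper essentially assembles this cloudctl command from the install.properties values. A sketch of that assembly (the `build_login_cmd` function is my own illustration, not the repo's actual helper script):

```shell
#!/usr/bin/env bash
# Sketch: assemble the cloudctl login command from install.properties values.
# build_login_cmd is a hypothetical helper, shown for illustration only.
build_login_cmd() {
  local master_ip="$1" user="$2" password="$3" namespace="$4"
  echo "cloudctl login -a https://${master_ip}:8443 -u ${user} -p ${password}" \
       "-c id-devicpcluster-account -n ${namespace} --skip-ssl-validation"
}

build_login_cmd 10.0.0.10 admin xxxxxxx default
# prints the full cloudctl login command for master IP 10.0.0.10
```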

Destroy Cluster:

 ./destroy-cluster.sh (Deletes the LXD cluster along with ICP-CE. Use with caution.)

Setting up LXD based NFS Server: (Optional)

     NFS Server on Linux Container

Post install

