phozzy/freeipa-in-podman

Deployment of Identity Management system

Description

The goal of this project is to provide a way to deploy an IDM (Identity Management) system in a cloud. The deployment should be easy, reliable, reproducible, and idempotent. It should also be possible to use this deployment for disaster recovery.

Used tools

We use FreeIPA as our IDM solution.

This project deploys the IDM to the Hetzner Cloud, but it should be possible to use it as inspiration for a project deploying the IDM to another cloud.

We provision our servers using Terraform. If another cloud is chosen, it is possible to use that cloud's orchestration tool, such as AWS CloudFormation or OpenStack Heat, or any other preferred one.

The FreeIPA server runs inside a container. We use Podman as the container engine. Podman is a daemonless container engine for developing, managing, and running OCI containers on a Linux system.

We use the CentOS 8 Linux distribution as the base operating system, but it should be possible to use any distribution that can run Podman, uses systemd, and whose kernel supports IPvlan and network namespaces. However, the current implementation contains some steps that are specific to CentOS 8; if another Linux distribution is chosen, those steps should be modified.

Pyroute2 is used to create an IPvlan interface and a network namespace for the container network.
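
Conceptually, the network setup performed via Pyroute2 is equivalent to the following ip(8) commands. This is an illustration only: the interface and namespace names and the IPvlan mode are assumptions, and the actual setup is done through the Pyroute2 API by the playbook.

```shell
# Illustrative sketch (requires root); names and mode are assumptions.
ip netns add freeipa                             # namespace for the container
ip link add ipvl0 link eth0 type ipvlan mode l2  # IPvlan child of the host NIC
ip link set ipvl0 netns freeipa                  # move it into the namespace
ip -n freeipa addr add <FLOATING_IP>/32 dev ipvl0
ip -n freeipa link set ipvl0 up
```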

After the servers are provisioned, Ansible is used to deploy and configure the IDM system.

Preparation

External dependencies

Terraform configuration

This project uses Terraform for infrastructure provisioning and a Terraform remote backend to keep the state of the infrastructure for collaborative work.

To be able to use the remote backend, we need to register (or already have) an organization_name organization at Terraform Cloud. We also need to create a workspace_name workspace for this deployment. This workspace should have the Local execution mode in its settings, so it will be used only to store the state of our deployment. This requirement comes from the fact that there is a local-exec provisioner in the project's Terraform code that generates the Ansible inventory file used in the next step.

Hetzner cloud configuration

It is assumed that there is a Hetzner account, so we are allowed to create new entities in Hetzner Cloud. We should create a project that will contain the infrastructure for this deployment.

Local configuration

Terraform

Terraform CLI installation

Use this manual to install Terraform CLI tool.

Terraform access configuration

In order to access Terraform Cloud from the Terraform CLI, we need to generate an access token in "User settings" at Terraform Cloud and put it into the ~/.terraformrc file:

credentials "app.terraform.io" {
  token = "xxxxxx.atlasv1.zzzzzzzzzzzzz"
}
Terraform backend initialisation

To initialise the Terraform backend, create a backend.hcl file in the root directory of this project with the following content:

# backend.hcl
hostname = "app.terraform.io"
organization = "<organization_name>"
workspaces { name = "<workspace_name>" }

Replace the words in < > with your organization_name and workspace_name.

Although this file contains no sensitive information, it is included in the .gitignore file.
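
If you prefer to script this step, the file can be generated with a small heredoc (a sketch; ORG and WS are placeholders to replace with your own organization and workspace names):

```shell
#!/usr/bin/env bash
# Generate backend.hcl in the current directory.
# ORG and WS are placeholders; substitute your own names.
ORG="<organization_name>"
WS="<workspace_name>"
cat > backend.hcl <<EOF
hostname = "app.terraform.io"
organization = "${ORG}"
workspaces { name = "${WS}" }
EOF
```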

Run the following command for Terraform initialisation:

terraform init -backend-config=backend.hcl

Hetzner

Hetzner CLI installation

Use this manual for CLI installation and configuration.

Ansible

Ansible installation

Use this manual for Ansible installation.

Basement

Disclosure

The creation of some base entities is kept outside of this project's code, for the reasons explained below.

Mapping definition

We need to define some values and map them to each other. For each node we need to define the following values:

  • server - a short hostname that will be assigned to a single FreeIPA instance. It is part of the FQDN assigned to the Floating IP address used by the FreeIPA instance.
  • host - a short hostname that will be assigned to a server running a single FreeIPA container.
  • location - the name of the data center location where the host runs.

Example values for a single node:

server: dc00
location: nbg1
host: etc00

To prevent accidental selection of an entity, we define an additional key-value pair for all objects:

object: freeipa

The code contains some default mappings, but this can easily be changed by assigning different values to the variables. See the Configuration chapter.
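
For example, the default server-to-location mapping could be overridden via the server variable along these lines. This is a sketch only: the exact variable shape is defined in the project's Terraform code.

```hcl
# terraform.auto.tfvars -- hypothetical override of the default mapping
# of server names to data center locations.
server = {
  dc00 = "nbg1"
  dc01 = "fsn1"
  dc10 = "hel1"
}
```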

Floating IP addresses and DNS records

Explanation

IDM systems deployed by this project contain a DNS server to manage their domain (subdomain) zone. It is quite hard to automate the configuration of domain delegation, since there are so many variants. At the same time, it is quite a simple task to configure manually, and this configuration is really static, so it can be done once before running this project.

A Floating IP address will be assigned to an IPvlan interface in a network namespace that is going to be used by the container running FreeIPA. So each FreeIPA instance will have its own static IP address. This static IP address and network namespace won't be shared with the host VM.

Floating IP creation

  1. Create Floating IP using hcloud utility (example):

    hcloud floating-ip create --description dc00ipv4 --home-location nbg1 --type ipv4
    hcloud floating-ip create --description dc01ipv4 --home-location fsn1 --type ipv4
    hcloud floating-ip create --description dc10ipv4 --home-location hel1 --type ipv4
  2. Configure domain delegation for your domain (subdomain) using the created IP addresses.

  3. Set reverse DNS of Floating IP addresses (example):

    hcloud floating-ip set-rdns --hostname dc00.<domain> FLOATING_IP_ID

    Do this for each floating ip address created in the first step.

  4. Add labels to Floating IP addresses (example):

    hcloud floating-ip add-label FLOATING_IP_ID object=freeipa
    hcloud floating-ip add-label FLOATING_IP_ID server=dc00
    hcloud floating-ip add-label FLOATING_IP_ID location=nbg1
    hcloud floating-ip add-label FLOATING_IP_ID host=etc00

    Repeat these commands for every floating ip address in this setup.
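
Since this labeling is repetitive, the commands for all nodes can be generated by a small script and reviewed before running. The node mapping below mirrors the example values from the mapping section and is an assumption; adjust it to your setup and replace the <ID-...> placeholders with the real Floating IP IDs.

```shell
#!/usr/bin/env bash
# Generate (but do not run) the labeling commands for every node.
# The mapping mirrors the example values above -- an assumption, adjust as needed.
declare -A LOCATION=( [dc00]=nbg1 [dc01]=fsn1 [dc10]=hel1 )
declare -A HOST=( [dc00]=etc00 [dc01]=etc01 [dc10]=etc10 )

cmds=()
for server in dc00 dc01 dc10; do
  id="<ID-${server}>"   # placeholder: the real Floating IP ID
  cmds+=("hcloud floating-ip add-label ${id} object=freeipa")
  cmds+=("hcloud floating-ip add-label ${id} server=${server}")
  cmds+=("hcloud floating-ip add-label ${id} location=${LOCATION[$server]}")
  cmds+=("hcloud floating-ip add-label ${id} host=${HOST[$server]}")
done
printf '%s\n' "${cmds[@]}"
```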

Data volumes

Explanation

This setup relies on the persistence of IDM data in external volumes, so we keep volume creation outside of the project's code to prevent accidental data deletion.

That also allows us to easily recover in case of host failure.

Volumes creation

  1. Create a volume using hcloud utility (example):

    hcloud volume create --location nbg1 --name dc00 --size 16
    hcloud volume create --location fsn1 --name dc01 --size 16
    hcloud volume create --location hel1 --name dc10 --size 16
  2. Add labels to volumes (example):

    hcloud volume add-label VOLUME_ID host=etc00
    hcloud volume add-label VOLUME_ID object=freeipa

    Repeat this for every volume.

Deployment

Configuration

It is important to make sure that we have gone through all the previous steps, have the backend.hcl file created, and have Terraform initialised.

We need to define values for some variables:

  • hcloud_token - required;
  • hetzner_dns - optional, a list with IP addresses of DNS servers. The default list is defined in the code;
  • server - optional, a mapping of server names to server locations (data centers). The default mapping is defined in the code. Be aware that this mapping should match the labels of the volumes and Floating IP addresses;
  • server_type - optional, the default value is defined in the code. Check Hetzner for all possible values;
  • ssh_key - required. The ID of your public key previously uploaded to the Hetzner cloud;
  • ssh_key_private - required. A string with the path to the private SSH key;
  • remote_user - optional. The default user for OS images in the Hetzner cloud is root;
  • server_image - optional. The name of the OS image; the default value is defined in the code;
  • domain - required. The name of the domain zone that will be managed by the IDM system.

There are several ways to do that:

  • Use environment variables: it is possible to create an executable vars.sh file with the following content:
    #!/usr/bin/env bash
    export HCLOUD_TOKEN=...
    export TF_VAR_ssh_key=...
    export TF_VAR_ssh_key_private=...
    export TF_VAR_domain=...
  • Create a terraform.auto.tfvars file containing the variables in Terraform format:
    server_type = "..."
    ssh_key_private = "/home/username/.ssh/id_ed25519"
    remote_user = "..."
    server_image = "..."
    ssh_key = "..."
    domain = "..."
    hcloud_token = "..."
    

Both of these files are listed in the .gitignore file.
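
Before provisioning, a small pre-flight check can confirm that the required variables are actually exported. This is a sketch assuming the vars.sh approach above; the variable list mirrors the required variables from this section.

```shell
#!/usr/bin/env bash
# Pre-flight check: print every required variable that is not yet set,
# so terraform does not fail halfway through. Assumes vars.sh as above.
if [ -f ./vars.sh ]; then
  . ./vars.sh
fi

missing_vars() {
  local v
  for v in HCLOUD_TOKEN TF_VAR_ssh_key TF_VAR_ssh_key_private TF_VAR_domain; do
    [ -n "${!v:-}" ] || echo "$v"
  done
}

missing_vars
```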

Resource provisioning

After we are done with the previous steps, we can:

  1. Verify Terraform code with the following command:
    terraform validate
  2. Run the following command to provision servers:
    terraform apply

Configuring and running FreeIPA services

At the last step Terraform will create (or update) the inventory.yml file from inventory.template. This file defines values for all needed variables except the two most important ones:

  • a password for admin user;
  • a password for directory manager user.

These two passwords should be kept secret.

There are several ways to pass these values to our Ansible playbook:

  • passing them as an --extra-vars parameter to the ansible-playbook command, if we are sure that our shell history is a safe place;

  • creating a file with variables, protected by ansible-vault, with the following command:

    ansible-vault edit varsafe.yml

with the following content:

    ---
    ds_password: "some_secret_password"
    admin_password: "some_secret_password"
    ...

Here is an example of the Ansible command that we can use to configure and run the FreeIPA services:

ansible-playbook -i inventory.yml --extra-vars "@varsafe.yml" --ask-vault-pass main.yml

It may take 20-30 minutes for this playbook to finish. After that we can check the availability of FreeIPA's web user interface.
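
For a quick availability check, one can request the web UI's login page, for example (the hostname and domain are placeholders; -k skips certificate verification in case the IPA CA is not yet trusted locally):

```shell
curl -ksIL https://dc00.<domain>/ipa/ui/ | head -n 1
```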

That is it! Feedback is welcome!