
Smana/demo-cloud-native-ref


Reference repository for building a Cloud Native platform

This is an opinionated set of configurations for managing a Cloud Native platform the GitOps way.

Here is the big picture inspired by the CNOE reference implementation.

*(overview diagram)*

ℹ️ This repository is used as the basis for new blog posts.

🔄 Flux Dependencies matter

```mermaid
graph TD;
    Namespaces-->CRDs;
    CRDs-->Crossplane;
    Crossplane-->EPIs["EKS Pod Identities"];
    EPIs-->Security;
    EPIs-->Infrastructure;
    EPIs-->Observability;
    Observability-->Tooling;
    Infrastructure-->Tooling;
    Security-->Infrastructure;
    Security-->Observability;
```

This diagram can be hard to read, so here are the key points:

  • Namespaces - Namespaces are the foundational resources in Kubernetes. All subsequent resources can be scoped to namespaces.

  • Custom Resource Definitions (CRDs) - CRDs extend Kubernetes' capabilities by defining new resource types. These must be established before they can be utilized in other applications.

  • Crossplane - Utilized for provisioning the necessary infrastructure components within Kubernetes.

  • EKS Pod Identities - Created using Crossplane, these identities are necessary to grant specific AWS API permissions to certain cluster components.

  • Security - Among other things, this step deploys external-secrets, which is required to make sensitive data available to our applications.
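In Flux, this ordering is expressed with `dependsOn` on the `Kustomization` resources. A minimal sketch, assuming illustrative resource names and paths (the actual ones in this repository may differ):

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: security
  namespace: flux-system
spec:
  # Reconcile only after the EKS Pod Identities are ready,
  # matching the edge EPIs --> Security in the diagram above.
  dependsOn:
    - name: eks-pod-identities
  interval: 10m
  path: ./security
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
```

If a dependency fails to reconcile, Flux holds back every `Kustomization` that depends on it, which is exactly why the ordering above matters.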

🏗️ Crossplane configuration

Requirements and security concerns

When the cluster is initialized, we define the permissions for the crossplane controllers using Terraform. This involves attaching a set of IAM policies to a role. This role is crucial for managing AWS resources, a process known as IRSA (IAM Roles for Service Accounts).

We prioritize security by adhering to the principle of least privilege. This means we only grant the necessary permissions, avoiding any excess. For instance, although Crossplane allows it, I have chosen not to give the controllers the ability to delete stateful services like S3 or RDS. This decision is a deliberate step to minimize potential risks.

Additionally, I have put a constraint on the resources the controllers can manage: they are limited to resources whose names are prefixed with xplane-. This restriction helps maintain a more controlled and secure environment.
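Concretely, this means any managed resource created through Crossplane must carry the xplane- prefix so that the controllers' IAM permissions apply to it. A hypothetical example (bucket name and region are illustrative, not taken from this repository):

```yaml
# An S3 bucket managed by Crossplane. The IAM policies attached to the
# controllers only allow actions on resources named with the "xplane-" prefix,
# so a bucket named without it would fail to provision.
apiVersion: s3.aws.upbound.io/v1beta1
kind: Bucket
metadata:
  name: xplane-demo-artifacts
spec:
  forProvider:
    region: eu-west-3
```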

How is Crossplane deployed?

Crossplane allows you to provision and manage cloud infrastructure (and even more) using native Kubernetes features.

It needs to be installed and set up in three successive steps:

  1. Installation of the Kubernetes operator
  2. Deployment of the AWS provider, which provides custom resources, including AWS roles, policies, etc.
  3. Installation of compositions that will generate AWS resources.
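Step 2 can be sketched as a `Provider` package declaration; the provider family and version below are assumptions, not necessarily the ones pinned in this repository:

```yaml
# Declares the Upbound AWS IAM provider; once healthy, it installs the
# CRDs (roles, policies, ...) that the compositions in step 3 build upon.
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-aws-iam
spec:
  package: xpkg.upbound.io/upbound/provider-aws-iam:v1.1.0  # version illustrative
```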

🏷️ Related blog posts:

🛂 Federated authentication using (Still not decided: need to explore https://goauthentik.io/ or https://casdoor.org/)

🗒️ Logs with Loki and Vector

📦 OCI Registry with Harbor

The Harbor installation follows best practices for high availability. It leverages recent Crossplane features such as Composition functions:

  • External RDS database
  • Redis cluster using the bitnami Helm chart
  • Storing artifacts in S3
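With the Harbor Helm chart, these three choices map to the following values; this is a hedged sketch with placeholder endpoints and names, not the repository's actual configuration:

```yaml
# Harbor chart values for an HA setup: no in-cluster database or Redis,
# and no persistent volumes for artifacts.
database:
  type: external
  external:
    host: <rds-endpoint>          # external RDS database
redis:
  type: external
  external:
    addr: <redis-endpoint>:6379   # Redis cluster (bitnami Helm chart)
persistence:
  imageChartStorage:
    type: s3                      # artifacts stored in S3
    s3:
      bucket: xplane-harbor-artifacts
      region: eu-west-3
```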

🏷️ Related blog post: Going Further with Crossplane: Compositions and Functions

🔗 VPN connection using Tailscale

The VPN configuration is done within the terraform/network directory. You can follow the steps described in this README to provision a server that provides access to private resources within AWS.

Most of the time we don't want to expose our resources publicly. For instance, platform tools such as Grafana and the Flux web UI should be accessed through a secure channel. The risk becomes even more significant when dealing with the Kubernetes API. Indeed, one of the primary recommendations for securing a cluster is to limit access to the API.

I intentionally created a distinct directory for provisioning the network and the secured connection, so that there is no confusion with the EKS provisioning.

🏷️ Related blog post: Beyond Traditional VPNs: Simplifying Cloud Access with Tailscale

👮 Runtime security with Falco

✔️ Policies with Kyverno

🔐 Secrets management with Vault and external-secrets operator

🔑 Private PKI with Vault

The Vault creation is done in two steps:

  1. Create the cluster as described here
  2. Then configure it using this directory

ℹ️ The provided code outlines the setup and configuration of a highly available, secure, and cost-efficient HashiCorp Vault cluster. It describes the process of creating a Vault instance in either development or high availability mode, with detailed steps for initializing the Vault, managing security tokens, and configuring a robust Public Key Infrastructure (PKI) system. The focus is on balancing performance, security, and cost, using a multi-node cluster, ephemeral nodes with SPOT instances, and a tiered CA structure for digital security.

🏷️ Related blog post: TLS with Gateway API: Efficient and Secure Management of Public and Private Certificates

🌐 Network policies with Cilium

🧪 CI

Two things are checked:

  • The terraform code quality, conformance and security using pre-commit-terraform.
  • The kustomize and Kubernetes conformance using kubeconform and building the kustomize configuration.

To run the CI checks locally, just run the following command:

ℹ️ It requires task to be installed

```shell
task check
```

The same tasks are run in GitHub Actions.
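A minimal workflow wiring `task check` into GitHub Actions could look like this (workflow file name and action versions are assumptions, not this repository's actual workflow):

```yaml
# .github/workflows/ci.yaml (sketch)
name: checks
on: [pull_request]
jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: arduino/setup-task@v2  # installs the `task` runner
      - run: task check              # same entrypoint as the local checks
```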
