Alfresco Infrastructure

The Alfresco Infrastructure chart brings in components that are commonly used by the majority of applications within the Alfresco Digital Business Platform.

Introduction

This chart bootstraps the creation of a persistent volume and persistent volume claim on a Kubernetes cluster using the Helm package manager. It also brings in other shared, common components, such as the Identity Service; see the Helm chart requirements for the full list of dependencies. In addition, it deploys the nginx-ingress chart, which consists of an Ingress Controller that uses ConfigMaps to store the nginx configuration.

Prerequisites

| Component  | Recommended version |
| ---------- | ------------------- |
| Docker     | 17.0.9.1            |
| Kubernetes | 1.8.4               |
| Helm       | 2.8.2               |

Any variation from these technologies and versions may affect the end result. If you experience any issues, please let us know through our Gitter channel.

Kubernetes Cluster

Please check the Anaxes Shipyard documentation on running a cluster.

K8s Cluster Namespace

As mentioned in the Anaxes Shipyard guidelines, you should deploy into a separate namespace in the cluster to avoid conflicts (create the namespace only if it does not already exist):

export DESIREDNAMESPACE=example
kubectl create namespace $DESIREDNAMESPACE
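
A minimal guard for the "create only if it does not already exist" step could look like this:

# Create the namespace only when it is missing
kubectl get namespace "$DESIREDNAMESPACE" >/dev/null 2>&1 || \
  kubectl create namespace "$DESIREDNAMESPACE"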

This environment variable will be used in the deployment steps.

Amazon EFS Storage (AWS only)

Create an EFS file system on AWS and make sure it is in the same VPC as your cluster. Make sure you open inbound traffic in the security group to allow NFS traffic. Save the DNS name of the server, as in this example:

export NFSSERVER=fs-d660549f.efs.us-east-1.amazonaws.com
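
If you still need to create the file system, a sketch using the AWS CLI could look like the following; the creation token is arbitrary, and the subnet and security group IDs are placeholders (the security group must allow inbound NFS traffic on port 2049 from the cluster):

# Create the EFS file system
aws efs create-file-system --creation-token alfresco-infrastructure-example

# Expose it inside the cluster's VPC; repeat for each subnet the worker nodes use
aws efs create-mount-target \
  --file-system-id fs-d660549f \
  --subnet-id subnet-0example \
  --security-groups sg-0example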

Note: the persistent volume created with NFS to store the data on the EFS has its ReclaimPolicy set to Recycle. This means that, by default, when you delete the release the saved data is deleted automatically.

To change this behaviour and keep the data you can set the persistence.reclaimPolicy value to Retain.
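
For example, the flag can be appended to the install command shown in the next section:

helm install alfresco-incubator/alfresco-infrastructure \
--set persistence.efs.enabled=true \
--set persistence.efs.dns="$NFSSERVER" \
--set persistence.reclaimPolicy=Retain \
--namespace $DESIREDNAMESPACE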

Installing the chart

1. Deploy the infrastructure charts:

helm repo add alfresco-incubator https://kubernetes-charts.alfresco.com/incubator
helm repo add alfresco-stable https://kubernetes-charts.alfresco.com/stable


helm install alfresco-incubator/alfresco-infrastructure \
--set persistence.efs.enabled=true \
--set persistence.efs.dns="$NFSSERVER" \
--namespace $DESIREDNAMESPACE

2. Get the infrastructure release name from the previous command and set it as a variable:

export INFRARELEASE=enervated-deer
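
If you no longer have the install output at hand, the release name can be recovered by listing the releases deployed into your namespace:

helm ls --namespace $DESIREDNAMESPACE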

3. Wait for the infrastructure release to be deployed (when checking the status, all your pods should show READY 1/1):

helm status $INFRARELEASE
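
Alternatively, you can watch the pods directly until they all report READY 1/1:

kubectl get pods --namespace $DESIREDNAMESPACE --watch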

4. Teardown:

helm delete --purge $INFRARELEASE
kubectl delete namespace $DESIREDNAMESPACE
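
If you installed with persistence.reclaimPolicy set to Retain, the persistent volume survives the teardown; it can be listed and, once the data is no longer needed, deleted manually:

kubectl get pv
kubectl delete pv <volume-name>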

For more information on running and tearing down k8s environments, follow this guide.

Nginx-ingress Custom Configuration

By default, this chart deploys the nginx-ingress chart with the following configuration, which creates an ELB when running on AWS and sets a dummy certificate on it:

nginx-ingress:
  rbac:
    create: true
  config:
    ssl-redirect: "false"
  controller:
    scope:
      enabled: true

If you want to customize the certificate type on the ingress level, you can choose one of the options below:

Using a self-signed certificate

If you want your own certificate set on the ELB created through AWS, create a secret from your cert files:

kubectl create secret tls certsecret --key /tmp/tls.key --cert /tmp/tls.crt \
  --namespace $DESIREDNAMESPACE
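
If you do not have a key and certificate to create the secret from, a self-signed pair can be generated first with openssl; the CN value below is just a placeholder:

openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /tmp/tls.key -out /tmp/tls.crt \
  -subj "/CN=example.com"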

Then deploy the infrastructure chart with the following:

cat <<EOF > infravalues.yaml
#Persistence options
persistence:
  #Enables the creation of a persistent volume
  enabled: true
  efs:
    #Enables EFS usage
    enabled: false
    #DNS address of EFS
    dns: fs-example.efs.us-east-1.amazonaws.com
    #Base path to use within the EFS that is mounted as a volume
    path: "/"
  #Size allocated to the volume in K8S
  baseSize: 20Gi

nginx-ingress:
  rbac:
    create: true
  controller:
    config:
      ssl-redirect: "false"
    scope:
      enabled: true
    publishService:
      enabled: true
    extraArgs:
      default-ssl-certificate: $DESIREDNAMESPACE/certsecret
EOF

helm install alfresco-incubator/alfresco-infrastructure \
-f infravalues.yaml \
--namespace $DESIREDNAMESPACE

Using an AWS generated certificate and Amazon Route 53 zone

If you use a certificate generated by AWS and an Amazon Route 53 hosted zone, Kubernetes' External DNS can autogenerate a DNS entry for you (a CNAME of the generated ELB) and apply the SSL/TLS certificate to the ELB.

Note: External DNS is currently in alpha (as of June 2018).

Note: AWS Certificate Manager ARNs are of the form arn:aws:acm:REGION:ACCOUNT:certificate/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.

Set DOMAIN to the DNS Zone you used when creating the cluster.
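For example, with a placeholder zone name:

export DOMAIN=dev.example.com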

ELB_CNAME="${DESIREDNAMESPACE}.${DOMAIN}"
ELB_CERTIFICATE_ARN=$(aws acm list-certificates | \
  jq '.CertificateSummaryList[] | select (.DomainName == "'${DOMAIN}'") | .CertificateArn')

cat <<EOF > infravalues.yaml
#Persistence options
persistence:
  #Enables the creation of a persistent volume
  enabled: true
  efs:
    #Enables EFS usage
    enabled: false
    #DNS address of EFS
    dns: fs-example.efs.us-east-1.amazonaws.com
    #Base path to use within the EFS that is mounted as a volume
    path: "/"
  #Size allocated to the volume in K8S
  baseSize: 20Gi

nginx-ingress:
  rbac:
    create: true
  controller:
    config:
      ssl-redirect: "false"
    scope:
      enabled: true
    publishService:
      enabled: true
    service:
      targetPorts:
        http: http
        https: http
      annotations:
        external-dns.alpha.kubernetes.io/hostname: ${ELB_CNAME}
        service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
        service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600'
        service.beta.kubernetes.io/aws-load-balancer-ssl-cert: ${ELB_CERTIFICATE_ARN}
        service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
EOF

helm install alfresco-incubator/alfresco-infrastructure \
-f infravalues.yaml \
--namespace $DESIREDNAMESPACE
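
Once External DNS has created the record (this can take a few minutes), you can check that the new hostname resolves to the ELB:

nslookup "$ELB_CNAME"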


For additional information on customizing the nginx-ingress chart, please refer to the nginx-ingress chart Readme.

Configuration

The following table lists the configurable parameters of the infrastructure chart and their default values.

| Parameter | Description | Default |
| --------- | ----------- | ------- |
| persistence.enabled | Persistence is enabled for this chart | true |
| persistence.baseSize | Size of the persistent volume | 20Gi |
| persistence.reclaimPolicy | Policy for keeping or removing the data after helm delete; use Retain to keep the data | Recycle |
| persistence.efs.enabled | Use EFS persistence | false |
| persistence.efs.dns | Elastic File System DNS address | none |
| persistence.efs.path | Path into the EFS mount to be used | / |
| alfresco-infrastructure.activemq.enabled | ActiveMQ is enabled for this chart | true |
| alfresco-infrastructure.alfresco-identity-service.enabled | Alfresco Identity Service is enabled for this chart | true |
| alfresco-infrastructure.nginx-ingress.enabled | nginx-ingress is enabled for this chart | true |

Specify each parameter using the --set key=value[,key=value] argument to helm install. For example:

$ helm install --name my-release \
  --set persistence.efs.enabled=true \
    alfresco-incubator/alfresco-infrastructure

Alternatively, a YAML file that specifies the values for the parameters can be provided when installing the chart. For example:

$ helm install alfresco-incubator/alfresco-infrastructure --name my-release -f values.yaml

Troubleshooting

Error: "realm-secret" already exists When installing the Infrastructure chart, with the Identity Service enabled, if you recieve the message Error: release <release-name> failed: secrets "realm-secret" already exists there is an existing realm secret in the namespace you are installing. This could mean that you are either installing into a namespace with an existing Identity Service or there is a realm secret leftover from a previous installation of the Identity Service.

If the realm secret is left over from a previous installation, it can be removed with the following command:

$ kubectl delete secret realm-secret --namespace $DESIREDNAMESPACE