Helm chart to install the Alfresco Activiti Enterprise (AAE) infrastructure to model and deploy your process applications:
- Alfresco Identity Service
- Modeling Service
- Modeling App
- Deployment Service
- Admin App
- Transformation (Tika) Service
Once installed, you can deploy new AAE applications:
- via the Admin App using the Deployment Service
- manually, by customising the `alfresco-process-application` Helm chart (see the sketch below).
For all the available values, see the chart `README.md`.
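As a rough sketch only, assuming the chart is published in the stable repository used later in this document, and using placeholder names for the release, namespace, and values file:

```sh
# Hypothetical manual deployment: my-app, my-app-namespace and
# my-app-values.yaml are placeholders; check the chart README for real values.
helm upgrade -i my-app alfresco-process-application \
  --repo https://kubernetes-charts.alfresco.com/stable \
  -n my-app-namespace --create-namespace \
  -f my-app-values.yaml
```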
Set up a Kubernetes cluster following your preferred procedure.
Install the latest version of Helm.
An ingress-nginx controller should be installed and bound to an external DNS address, for example:
```sh
helm upgrade -i ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  -n ingress-nginx --create-namespace
```
For any Helm command, first verify the output with the `--dry-run` option, then run it again without it.
To install from the development chart repo, use `alfresco-incubator` rather than `alfresco` as the `CHART_REPO` variable, as shown below.
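A minimal example, assuming `CHART_REPO` is read by your install commands:

```sh
# Point later commands at the development (incubator) chart repository.
export CHART_REPO=alfresco-incubator
```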
Check deployment progress with `kubectl get pods -w -A` until all containers are running.
If anything is stuck, check events with `kubectl get events -w -A`.
```sh
export DESIRED_NAMESPACE=${DESIRED_NAMESPACE:-aae}
kubectl create ns $DESIRED_NAMESPACE
```
Configure access to pull images from quay.io in the installation namespace:
```sh
kubectl create secret \
  -n $DESIRED_NAMESPACE \
  docker-registry quay-registry-secret \
  --docker-server=quay.io \
  --docker-username=$QUAY_USERNAME \
  --docker-password=$QUAY_PASSWORD
```
where:
- `QUAY_USERNAME` is your username on Quay
- `QUAY_PASSWORD` is your password on Quay
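As an optional sanity check (standard kubectl, not part of the original steps), you can read the secret back and inspect the registry entry it contains:

```sh
# Verify the pull secret exists and decode its docker config entry.
kubectl get secret quay-registry-secret -n $DESIRED_NAMESPACE \
  -o jsonpath='{.data.\.dockerconfigjson}' | base64 --decode
```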
Set the release name and chart variables:

```sh
export RELEASE_NAME=aae
export CHART_NAME=alfresco-process-infrastructure
export HELM_OPTS="-n $DESIRED_NAMESPACE"
```
A custom extra values file to add settings for localhost is provided:
```sh
export DOMAIN=host.docker.internal
HELM_OPTS+=" -f values-localhost.yaml"
```
Make sure your local cluster has at least 16 GB of memory and 8 CPUs.
Startup might take as long as 10 minutes; use `kubectl get pods -A -w` to check the status.
NB: if not already present in your `/etc/hosts` file, please add a DNS mapping from `host.docker.internal` to `127.0.0.1`, as shown below.
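The mapping is a single line in `/etc/hosts`:

```
127.0.0.1 host.docker.internal
```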
If the hostname `host.docker.internal` is not resolved correctly on some deployments, patch them after calling Helm via:

```sh
kubectl patch deployment -n $DESIRED_NAMESPACE ${RELEASE_NAME}-alfresco-modeling-service -p "$(cat deployment-localhost-patch.yaml)"
```
For a remote cluster, set the domain from your cluster name and pass it to the chart:

```sh
export CLUSTER=aaedev
export DOMAIN=$CLUSTER.envalfresco.com
HELM_OPTS+=" \
  --set global.gateway.domain=$DOMAIN"
```
To disable `alfresco-deployment-service` in the infrastructure:

```sh
HELM_OPTS+="
--set alfresco-deployment-service.enabled=false
"
```
A StorageClass that can work across multiple availability zones needs to be available to store the project release files of each application:
- for EKS, always use EFS
- for AKS, use AFS only if Multi-AZ is configured
Add the Helm values to use it:

```sh
HELM_OPTS+="
--set alfresco-deployment-service.projectReleaseVolume.storageClass=${STORAGE_CLASS_NAME} \
--set alfresco-deployment-service.projectReleaseVolume.permission=ReadWriteMany
"
```
NB: to set up the email connector, all of the variables below need to be set. If they are set, the Deployment Service will use them as defaults for any application it deploys. Once these variables are configured at chart deployment time via Helm, they can no longer be overridden from the Admin App. If you want to configure the email connector variables from the Admin App instead, do not configure the email connector during the Helm deployment.
Add the Helm properties to configure the email connector:

```sh
HELM_OPTS+="
--set alfresco-deployment-service.applications.connectors.emailConnector.username=${email_connector_username}
--set alfresco-deployment-service.applications.connectors.emailConnector.password=${email_connector_password}
--set alfresco-deployment-service.applications.connectors.emailConnector.host=${email_connector_host}
--set alfresco-deployment-service.applications.connectors.emailConnector.port=${email_connector_port}
"
```
To verify the Kubernetes YAML output:

```sh
HELM_OPTS+=" --debug --dry-run"
```

If all looks good, launch the install again without `--dry-run`.
Install from the stable repo using a released chart version:

```sh
helm upgrade -i --wait \
  --repo https://kubernetes-charts.alfresco.com/stable \
  $HELM_OPTS $RELEASE_NAME $CHART_NAME
```
or from the incubator repo for a development chart version:

```sh
helm upgrade -i --wait \
  --repo https://kubernetes-charts.alfresco.com/incubator \
  $HELM_OPTS $RELEASE_NAME $CHART_NAME
```
or from the current repository directory:

```sh
helm repo update
helm dependency update helm/$CHART_NAME
helm upgrade -i --wait \
  $HELM_OPTS $RELEASE_NAME helm/$CHART_NAME
```
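After the install completes, a quick way to confirm the release, using standard Helm and kubectl commands:

```sh
# Show release status and list the pods it created.
helm status $RELEASE_NAME -n $DESIRED_NAMESPACE
kubectl get pods -n $DESIRED_NAMESPACE
```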
Open a browser and log in to IDS:

```sh
open $SSO_URL
```
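`SSO_URL` is not exported by the steps above; assuming the Identity Service is exposed under the gateway domain's default `/auth` path (an assumption, not confirmed by this chart's values), it could be derived as:

```sh
# Assumption: IDS is served at /auth on the gateway domain configured earlier.
export SSO_URL=${SSO_URL:-https://$DOMAIN/auth}
```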
To read back the realm from the secret, use:

```sh
kubectl get secret \
  -n $DESIRED_NAMESPACE \
  realm-secret -o jsonpath="{['data']['alfresco-realm\.json']}" | base64 --decode > alfresco-realm.json
```
In an air-gapped environment where the Kubernetes cluster has no direct access to external image registries, use a tool like helm-image-mirror to tag and push images to your internal registry, and modify the Helm charts with the new image locations.
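The manual equivalent, sketched below with placeholder image names, tag, and registry host, is to pull, re-tag, and push each image referenced by the charts:

```sh
# Hypothetical example: mirror one image into an internal registry.
# The image name, tag and registry host are placeholders; repeat for
# every image referenced by the charts.
TAG=latest
docker pull quay.io/alfresco/alfresco-modeling-service:$TAG
docker tag quay.io/alfresco/alfresco-modeling-service:$TAG \
  registry.internal.example.com/alfresco/alfresco-modeling-service:$TAG
docker push registry.internal.example.com/alfresco/alfresco-modeling-service:$TAG
```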
Modify the file `values-external-postgresql.yaml`, providing values for your external database for each service, then run:

```sh
HELM_OPTS+=" -f values-external-postgresql.yaml"
```
CI runs on GitHub Actions.
For Dependabot PRs to be validated by CI, the label `CI` should be added to the PR.
Requires the following secrets to be set:

| Name | Description |
|---|---|
| BOT_GITHUB_TOKEN | Token to launch other builds on GH |
| BOT_GITHUB_USERNAME | Username to issue propagation PRs |
| RANCHER2_URL | Rancher URL to perform helm tests |
| RANCHER2_ACCESS_KEY | Rancher access key |
| RANCHER2_SECRET_KEY | Rancher secret key |
| SLACK_NOTIFICATION_BOT_TOKEN | Token to notify Slack on failure |