Heads up: Our Helm Chart docs are moving to our main documentation site. For Insight installers, see Installing Insight.
This chart will do the following:
- Deploy a PostgreSQL database. NOTE: For production-grade installations it is recommended to use an external PostgreSQL.
- Deploy Elasticsearch.
- Deploy Insight.
Before installing, make sure you have:
- Kubernetes 1.14+
- A running Kubernetes cluster
- Dynamic storage provisioning enabled
- Default StorageClass set, to allow services to use the default StorageClass for persistent storage
- A running Artifactory Enterprise installation
- kubectl installed and set up to use the cluster
- Helm v3 installed
Before installing JFrog helm charts, you need to add the JFrog helm repository to your helm client.
helm repo add jfrog https://charts.jfrog.io
helm repo update
NOTE: Check [CHANGELOG.md] for version-specific install notes.
To connect Insight to your Artifactory installation you must use a join key, so it is MANDATORY to provide a join key and JFrog URL to your Insight installation. Here's how you do that:
Retrieve the connection details of your Artifactory installation from the UI - https://www.jfrog.com/confluence/display/JFROG/General+Security+Settings#GeneralSecuritySettings-ViewingtheJoinKey.
Provide the join key and JFrog URL as parameters to the Insight chart installation:
helm upgrade --install insight --set insightServer.joinKey=<YOUR_PREVIOUSLY_RETRIEVED_JOIN_KEY> \
--set insightServer.jfrogUrl=<YOUR_PREVIOUSLY_RETRIEVED_BASE_URL> --namespace insight jfrog/insight
Alternatively, you can create a secret containing the join key manually and pass it to the template at install/upgrade time.
# Create a secret containing the key. The key in the secret must be named join-key
kubectl create secret generic my-secret --from-literal=join-key=<YOUR_PREVIOUSLY_RETRIEVED_JOIN_KEY>
# Pass the created secret to helm
helm upgrade --install insight --set insightServer.joinKeySecretName=my-secret --namespace insight jfrog/insight
NOTE: In either case, make sure to pass the same join key on all future calls to helm install and helm upgrade! In the first case, this means always passing --set insightServer.joinKey=<YOUR_PREVIOUSLY_RETRIEVED_JOIN_KEY>. In the second, this means always passing --set insightServer.joinKeySecretName=my-secret and ensuring the contents of the secret remain unchanged.
Insight uses a common system configuration file - system.yaml. See the official documentation on its usage.
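For reference, a minimal sketch of what a system.yaml override might look like (the keys shown here are illustrative; consult the official system.yaml documentation for the supported schema):
shared:
  logging:
    consoleLog:
      enabled: true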
This section is applicable only for deployments with the internal PostgreSQL.
The internal PostgreSQL needs one variable to be available on install or upgrade. If it is not set by the user, a random 10-character alphanumeric string will be generated for it. It is recommended to set this explicitly during install and upgrade.
...
--set postgresql.postgresqlPassword=<value> \
...
The value should remain the same between upgrades. If it was autogenerated during helm install, the same password will have to be passed on future upgrades.
The following can be used to read the currently set password (refer to decoding-a-secret for more info on reading a secret value):
POSTGRES_PASSWORD=$(kubectl get secret -n <namespace> <release_name>-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
The following parameter can then be set during upgrade:
...
--set postgresql.postgresqlPassword=${POSTGRES_PASSWORD} \
...
In the chart directory, we have added three values files, one for each installation type - small/medium/large. These values files are recommendations for setting resource requests and limits for your installation. The values are derived from the following documentation. You can find them in the corresponding chart directory - values-small.yaml, values-medium.yaml and values-large.yaml.
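For example, to apply the small sizing recommendations at install time (the release name and namespace here are illustrative):
helm upgrade --install insight --namespace insight -f values-small.yaml jfrog/insight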
Insight HA cluster uses a unique master key. By default the chart has one set in values.yaml (insightServer.masterKey).
This key is for demo purposes and should not be used in a production environment!
You should generate a unique one and pass it to the template at install/upgrade time.
# Create a key
export MASTER_KEY=$(openssl rand -hex 32)
echo ${MASTER_KEY}
# Pass the created master key to helm
helm upgrade --install insight --set insightServer.masterKey=${MASTER_KEY} --namespace insight jfrog/insight
Alternatively, you can create a secret containing the master key manually and pass it to the template at install/upgrade time.
# Create a secret containing the key. The key in the secret must be named master-key
kubectl create secret generic my-secret --from-literal=master-key=${MASTER_KEY}
# Pass the created secret to helm
helm upgrade --install insight --namespace insight --set insightServer.masterKeySecretName=my-secret jfrog/insight
NOTE: In either case, make sure to pass the same master key on all future calls to helm install and helm upgrade! In the first case, this means always passing --set insightServer.masterKey=${MASTER_KEY}. In the second, this means always passing --set insightServer.masterKeySecretName=my-secret and ensuring the contents of the secret remain unchanged.
Once you have a new chart version, you can update your deployment with:
helm upgrade insight jfrog/insight
NOTE: Check for any version-specific upgrade notes in [CHANGELOG.md].
In cases where a new version is not compatible with the existing deployed version (see CHANGELOG.md), you should:
- Deploy the new version alongside the old version (set a new release name)
- Copy configurations and data from the old deployment to the new one (the following instructions were tested for chart migration from 0.9.4 (3.4.3) to 1.0.0 (3.5.0))
- Copy data and config from the old deployment to the local filesystem:
kubectl cp <elasticsearch-pod>:/usr/share/elasticsearch/data /<local_disk_path>/insight-data/elastic_data -n <old_namespace>
kubectl cp <postgres-pod>:/var/lib/postgresql/data /<local_disk_path>/insight-data/postgres_data -n <old_namespace>
kubectl cp <insight-server-pod>:/var/opt/jfrog/insight/etc/insight-server.properties /<local_disk_path>/insight-data/insight-server.properties -n <old_namespace> -c insight
kubectl cp <insight-server-pod>:/var/opt/jfrog/insight/data/security/insight.key /<local_disk_path>/insight-data/insight.key -n <old_namespace> -c insight
- This point applies only if you have used an autogenerated password for postgres in your previous deploy or in your new deployment.
- Get the postgres password from the previous deploy (refer to decoding-a-secret for more info on reading a secret value):
POSTGRES_PASSWORD=$(kubectl get secret -n <old_namespace> <old_release_name>-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
NOTE: This needs to be passed with every helm install and helm upgrade: --set postgresql.postgresqlPassword=${POSTGRES_PASSWORD}
- Copy data and config from the local filesystem to the new deployment:
kubectl cp /<local_disk_path>/insight-data/insight.key <insight-server-pod>:/var/opt/jfrog/insight/data/security/mc.key -n <new_namespace> -c insight
# NOTE: insight-server.properties has to be copied to all the replicas if you plan to scale to more replicas in future
kubectl cp /<local_disk_path>/insight-data/insight-server.properties <insight-server-pod>:/var/opt/jfrog/insight/etc/insight-server.properties -n <new_namespace> -c insight
kubectl cp /<local_disk_path>/insight-data/elastic_data <insight-server-pod>:/usr/share/elasticsearch -n <new_namespace> -c elasticsearch
kubectl cp /<local_disk_path>/insight-data/postgres_data <postgres-pod>:/var/lib/postgresql -n <new_namespace>
kubectl exec -it <postgres-pod> -n <new_namespace> -- bash
rm -fr /var/lib/postgresql/data
cp -fr /var/lib/postgresql/postgres_data/* /var/lib/postgresql/data/
rm -fr /var/lib/postgresql/postgres_data
kubectl exec -it <insight-server-pod> -n <new_namespace> -c elasticsearch -- bash
rm -fr /usr/share/elasticsearch/data
cp -fr /usr/share/elasticsearch/elastic_data/* /usr/share/elasticsearch/data
rm -fr /usr/share/elasticsearch/elastic_data
- Restart the new deployment:
kubectl scale deployment <postgres-deployment> --replicas=0 -n <new_namespace>
kubectl scale statefulset <insight-statefulset> --replicas=0 -n <new_namespace>
kubectl scale deployment <postgres-deployment> --replicas=1 -n <new_namespace>
kubectl scale statefulset <insight-statefulset> --replicas=1 -n <new_namespace>
# If you are using an autogenerated password for postgres, set the postgres password from the previous deploy by running an upgrade:
# helm upgrade --set postgresql.postgresqlPassword=${POSTGRES_PASSWORD} ...
- A new insight.key will be generated after this upgrade; save a copy of this key. NOTE: This should be passed on all future calls to helm install and helm upgrade!
export INSIGHT_KEY=$(kubectl exec -it <insight-server-pod> -n <new_namespace> -c insight -- cat /var/opt/jfrog/insight/data/security/insight.key)
- Remove the old release
For production-grade installations it is recommended to use an external PostgreSQL with a static password.
There are cases where you will want to use an external PostgreSQL and not the enclosed PostgreSQL. See more details on configuring the database.
This can be done with the following parameters:
...
--set postgresql.enabled=false \
--set database.url=${DB_URL} \
--set database.user=${DB_USER} \
--set database.password=${DB_PASSWORD} \
...
NOTE: You must set postgresql.enabled=false in order for the chart to use the database.* parameters. Without it, they will be ignored!
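Putting it together, a full install command might look like this (the release name, namespace, and environment variables are illustrative):
helm upgrade --install insight --namespace insight \
  --set postgresql.enabled=false \
  --set database.url=${DB_URL} \
  --set database.user=${DB_USER} \
  --set database.password=${DB_PASSWORD} \
  jfrog/insight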
You can use already existing secrets for managing the database connection details.
Pass them to the install command with the following parameters:
export POSTGRES_USERNAME_SECRET_NAME=
export POSTGRES_USERNAME_SECRET_KEY=
export POSTGRES_PASSWORD_SECRET_NAME=
export POSTGRES_PASSWORD_SECRET_KEY=
...
--set database.secrets.user.name=${POSTGRES_USERNAME_SECRET_NAME} \
--set database.secrets.user.key=${POSTGRES_USERNAME_SECRET_KEY} \
--set database.secrets.password.name=${POSTGRES_PASSWORD_SECRET_NAME} \
--set database.secrets.password.key=${POSTGRES_PASSWORD_SECRET_KEY} \
...
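For example, assuming a single secret holding both the username and password (the secret name postgres-creds and the key names below are illustrative):
kubectl create secret generic postgres-creds \
  --from-literal=username=<DB_USER> --from-literal=password=<DB_PASSWORD> -n insight
export POSTGRES_USERNAME_SECRET_NAME=postgres-creds
export POSTGRES_USERNAME_SECRET_KEY=username
export POSTGRES_PASSWORD_SECRET_NAME=postgres-creds
export POSTGRES_PASSWORD_SECRET_KEY=password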
By default, this Helm chart deploys an Elasticsearch pod. It also configures Docker host kernel parameters using a privileged initContainer. In some installations you will not be allowed to run privileged containers, in which case you can disable the Docker host configuration by setting the following parameter:
--set elasticsearch.configureDockerHost=false
There are cases where you will want to use an external Elasticsearch and not the enclosed Elasticsearch.
This can be done with the following parameters:
...
--set elasticsearch.enabled=false \
--set elasticsearch.url=${ES_URL} \
--set elasticsearch.username=${ES_USERNAME} \
--set elasticsearch.password=${ES_PASSWORD} \
...
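As with the external database, a complete command might look like this (the release name, namespace, and environment variables are illustrative):
helm upgrade --install insight --namespace insight \
  --set elasticsearch.enabled=false \
  --set elasticsearch.url=${ES_URL} \
  --set elasticsearch.username=${ES_USERNAME} \
  --set elasticsearch.password=${ES_PASSWORD} \
  jfrog/insight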
By default the internal Elasticsearch uses the bundled TLS certificates for configuring Search Guard. For production deployments it is recommended to use your own certificates.
Custom certificates can be added by using a Kubernetes secret. The secret should be created outside of this chart and provided using the tag .Values.elasticsearch.certificatesSecretName. Please refer to the example below.
kubectl create secret generic elastic-certs --from-file=localhost.key=localhost.key --from-file=localhost.pem=localhost.pem --from-file=sgadmin.key=sgadmin.key --from-file=sgadmin.pem=sgadmin.pem --from-file=root-ca.pem=root-ca.pem
Refer to https://docs.search-guard.com/latest/offline-tls-tool for creating certificates.
And then pass it to the helm installation:
elasticsearch:
certificatesSecretName: elastic-certs
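For example, save the override above to a file (the name custom-certs-values.yaml is illustrative) and pass it at install/upgrade time:
helm upgrade --install insight --namespace insight -f custom-certs-values.yaml jfrog/insight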
NOTE: If the certificates are changed, a rolling update is not possible. Scale down the deployment to one replica and do a helm upgrade.
This chart provides the option to add sidecars to tail various logs from Insight containers. See the available values in values.yaml
Get the list of containers in the pod:
kubectl get pods -n <NAMESPACE> <POD_NAME> -o jsonpath='{.spec.containers[*].name}' | tr ' ' '\n'
View a specific log:
kubectl logs -n <NAMESPACE> <POD_NAME> -c <LOG_CONTAINER_NAME>
There are cases where an extra sidecar container is needed, for example monitoring agents or log collection.
For this, there is a section for writing a custom sidecar container in the values.yaml. By default it's commented out:
common:
## Add custom sidecar containers
customSidecarContainers: |
## Sidecar containers template goes here ##
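A minimal sketch of what such a sidecar might look like (the container name, image, command, and volume name are illustrative; the volume must be defined under common.customVolumes):
common:
  customSidecarContainers: |
    - name: tail-log
      image: busybox
      command: ["sh", "-c", "tail -F /var/log/app/*.log"]
      volumeMounts:
        - name: custom-logs
          mountPath: /var/log/app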
Create trust between the nodes by copying the ca.crt from the Artifactory server under $JFROG_HOME/artifactory/var/etc/access/keys to the nodes you would like to set trust with, under $JFROG_HOME//var/etc/security/keys/trusted. For more details, please refer here.
Note: Support for custom certificates using secrets was added in 5.5.x chart versions.
TLS certificates can be added by using a Kubernetes secret. The secret should be created outside of this chart and provided using the tag .Values.missionControl.customCertificates.certificateSecretName. Please refer to the example below.
kubectl create secret generic ca-cert --from-file=ca.crt=ca.crt
And then pass it to the helm installation:
missionControl:
customCertificates:
enabled: true
certificateSecretName: ca-cert
router:
tlsEnabled: true
Note: router.tlsEnabled is set to true to add the HTTPS scheme in the liveness and readiness probes.
If you need to use a custom volume, you can use this option.
For this, there is a section for defining custom volumes in the values.yaml. By default it's commented out:
common:
## Add custom volumes
customVolumes: |
## Custom volume comes here ##
There are cases where a special, unsupported init process is needed, like checking something on the file system or testing something before spinning up the main container.
For this, there is a section for writing a custom init container in the values.yaml. By default it's commented out:
common:
## Add custom init containers
customInitContainers: |
## Init containers template goes here ##
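A minimal sketch of such an init container (the container name, image, service URL, and port are illustrative, not part of the chart):
common:
  customInitContainers: |
    - name: wait-for-artifactory
      image: busybox
      command:
        - sh
        - -c
        - until wget -q -O /dev/null http://artifactory:8082; do echo waiting; sleep 5; done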
There are also cases where you'd like custom files, or for your init container to make changes to the file system the Insight container will see.
For this, there is a section for defining custom volumes in the values.yaml. By default they are left empty:
common:
## Add custom volumes
customVolumes: |
# - name: custom-script
# configMap:
# name: custom-script
## Add custom volumeMounts
customVolumeMounts: |
# - name: custom-script
# mountPath: "/scripts/script.sh"
# subPath: script.sh
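For instance, to mount a script from a ConfigMap as in the commented example above, you would first create the ConfigMap (the names custom-script and script.sh are illustrative) and then uncomment the entries:
kubectl create configmap custom-script --from-file=script.sh=./script.sh -n insight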
If you need to add a custom secret in a custom init container or any common container, you can use this option.
For this, there is a section for defining custom secrets in the values.yaml. By default it's commented out:
common:
# Add custom secrets - secret per file
customSecrets:
- name: custom-secret
key: custom-secret.yaml
data: >
secret data
To use a custom secret, you need to define a custom volume:
common:
## Add custom volumes
customVolumes: |
- name: custom-secret
secret:
secretName: custom-secret
To use the volume, you need to define a volume mount as part of a custom init or sidecar container:
common:
customVolumeMounts:
- name: custom-secret
mountPath: /opt/custom-secret.yaml
subPath: custom-secret.yaml
readOnly: true