Install Jenkins, SonarQube & Nexus Repo Manager on Kubernetes with Helm Charts
My understanding of how Jenkins works on a Kubernetes cluster:
-
A pod supporting the Jenkins Master runs on the cluster, using the image "jenkins/jenkins:2.319.3-jdk11".
-
There are 2 services: one for the Jenkins Master at port 8080 & one for the Jenkins Agent (node/client) at port 50000.
-
When a Jenkins job is submitted, the Jenkins Master (pod) spawns a Jenkins agent pod (with the image "jenkins/inbound-agent:4.11.2-4") on the cluster and deletes it after the build. It uses the Kubernetes ConfigMap object named "jenkins-jenkins-jcasc-config", which stores the specification of the agent, and launches a pod from it for each job.
-
The Helm release for Jenkins includes all the necessary plugins, including "kubernetes". This plugin stores agent pod & container templates (these templates contain the pod's labels, the Docker image to launch, environment variables, resource requests/limits, etc.). You can see these settings under "Dashboard" => "Manage Jenkins" => "Configure Clouds".
-
Along with the above, the plugin stores/uses other details such as the Kubernetes cluster URL/port, the Jenkins URL, the connection timeout, and the concurrency limit (the maximum number of concurrently running agent pods permitted in this Kubernetes cloud; if left empty, there is no limit).
-
The Jenkins Helm chart ships these required configurations in a ConfigMap object called "jenkins-jenkins-jcasc-config". While creating the Jenkins components, these values are populated into the cloud configuration. If you want to change any configuration, do it via the Jenkins Master console directly, or by updating the ConfigMap and recreating the Jenkins StatefulSet.
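For reference, the kubernetes cloud section inside that ConfigMap looks roughly like the following. This is a sketch only; the exact values (URLs, namespace, caps) depend on your chart values, and the ones shown here are assumptions based on this setup:

```
jenkins:
  clouds:
    - kubernetes:
        name: "kubernetes"
        serverUrl: "https://kubernetes.default"
        namespace: "jenkins"
        jenkinsUrl: "http://jenkins.jenkins.svc.cluster.local:8080"
        jenkinsTunnel: "jenkins-agent.jenkins.svc.cluster.local:50000"
        connectTimeout: 5
        readTimeout: 15
        containerCapStr: "10"
```

containerCapStr is the concurrency limit mentioned above; jenkinsTunnel is the agent service on port 50000.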
-
The workspace for jobs is created by these agent pods in a shared volume, /var/jenkins_home, which is mounted on the NFS mount (in this case, 10.128.0.11:/jenkins-data).
MUST-READ DOCUMENTS:
-
How to Setup Jenkins Build Agents on Kubernetes Pods: https://devopscube.com/jenkins-build-agents-kubernetes/
-
Kubernetes plugin for Jenkins: https://plugins.jenkins.io/kubernetes/
=====
**Installation of Jenkins with Helm Repository Templates**
- Install Helm
wget https://get.helm.sh/helm-v3.8.0-linux-amd64.tar.gz
gunzip helm-v3.8.0-linux-amd64.tar.gz
tar -xvf helm-v3.8.0-linux-amd64.tar
sudo mv ./linux-amd64/helm /usr/local/bin/helm
- Add Jenkins to Helm Charts Repo:
helm repo add jenkinsci https://charts.jenkins.io
helm repo list
helm repo update
- Get Chart Name by Searching Repo for Jenkins & Pull/Download scripts:
helm search repo jenkinsci
helm pull jenkinsci/jenkins
- Create a namespace - jenkins
kubectl create namespace jenkins
- Create ServiceAccount named jenkins from the below yaml of the repo:
https://raw.githubusercontent.com/jenkins-infra/jenkins.io/master/content/doc/tutorials/kubernetes/installing-jenkins-on-kubernetes/jenkins-sa.yaml
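For orientation, the core of that file is a plain ServiceAccount like the sketch below (the upstream file additionally defines a ClusterRole/ClusterRoleBinding granting the permissions Jenkins needs to spawn agent pods):

```
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: jenkins
```

Apply the upstream file with kubectl apply -f against the URL above.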
- Edit values.yaml:
a) Provide StorageClass:
```
storageClass: jenkins-sc
```
b) Specify ServiceAccount:
```
serviceAccount:
  create: false
  # The name of the service account is autogenerated by default
  name: jenkins
```
c) Static PVC's (THIS IS NOT REQUIRED FOR Dynamic PVC's)
* Create a persistent volume (PV)
* Create NFS mount:
https://github.com/q-uest/Notes-Jenkins/wiki/Create-NFS-mount-in-Cloud
* Create Persistent Volume on the mounted volume:
```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-pv
  namespace: jenkins
spec:
  storageClassName: jenkins-pv
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 10Gi
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /jenkins-data/
```
Install Jenkins with helm install:
chart=jenkinsci/jenkins
helm install jenkins -n jenkins -f jenkins-values.yaml $chart
Jenkins Helm Installation Issues:
The Jenkins pod failed with the error below, due to version conflicts among the listed plugin dependencies:
NAME READY STATUS RESTARTS AGE
pod/jenkins-0 0/2 Init:CrashLoopBackOff 7 11m
io.jenkins.tools.pluginmanager.impl.AggregatePluginPrerequisitesNotMetException: Multiple plugin prerequisites not met:
Plugin kubernetes:1.31.3 (via credentials:1087.v16065d268466) depends on configuration-as-code:1414.v878271fc496f, but there is an older version defined on the top level - configuration-as-code:1.55.1,
Plugin workflow-aggregator:2.6 (via credentials:1087.v16065d268466) depends on configuration-as-code:1414.v878271fc496f, but there is an older version defined on the top level - configuration-as-code:1.55.1,
Plugin git:4.10.2 (via credentials:1087.v16065d268466) depends on configuration-as-code:1414.v878271fc496f, but there is an older version defined on the top level - configuration-as-code:1.55.1
at io.jenkins.tools.pluginmanager.impl.PluginManager.start(PluginManager.java:223)
at io.jenkins.tools.pluginmanager.impl.PluginManager.start(PluginManager.java:172)
at io.jenkins.tools.pluginmanager.cli.Main.main(Main.java:70)
Fixed the issue by updating the values.yaml file like below and by recreating the Jenkins Release
# List of plugins to be installed during Jenkins controller start
installPlugins:
  - kubernetes:latest
  - workflow-aggregator:latest
  - git:latest
  - configuration-as-code:latest
- The helm commands above did not create a NodePort service for Jenkins (needs further checking). Create a NodePort service as below:
apiVersion: v1
kind: Service
metadata:
  name: jenkins-svc
spec:
  type: NodePort
  selector:
    app.kubernetes.io/instance: jenkins
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 32000
- Get your 'admin' user password by running:
jsonpath="{.data.jenkins-admin-password}"
secret=$(kubectl get secret -n jenkins jenkins -o jsonpath=$jsonpath)
echo $(echo $secret | base64 --decode)
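The decoding step above works like this; here is a self-contained illustration with a made-up secret value ("cGFzc3dvcmQxMjM=" is base64 for "password123"):

```shell
# Made-up secret value for illustration only; the real one
# comes from the kubectl query above.
secret="cGFzc3dvcmQxMjM="
decoded=$(echo "$secret" | base64 --decode)
echo "$decoded"
```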
- Get the Jenkins URL to visit by running these commands in the same shell:
jsonpath="{.spec.ports[0].nodePort}"
NODE_PORT=$(kubectl get -n jenkins -o jsonpath=$jsonpath services jenkins-svc)
jsonpath="{.items[0].status.addresses[0].address}"
NODE_IP=$(kubectl get nodes -n jenkins -o jsonpath=$jsonpath)
echo http://$NODE_IP:$NODE_PORT/login
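The two lookups above just compose a login URL; with made-up example values the result looks like this:

```shell
# Made-up values for illustration; the real ones come from
# the kubectl queries above.
NODE_IP=10.128.0.5
NODE_PORT=32000
echo "http://$NODE_IP:$NODE_PORT/login"
```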
-
Login with the password from step 1 and the username: admin
-
Use Jenkins Configuration as Code by specifying configScripts in your values.yaml file.
======
* Install NFS Subdir External Provisioner for the Postgresql databases with Helm
Before running the Helm chart, find the NFS server IP & the exported path & provide them like below:
NFS server=10.138.0.7 Exported Path=/database-data/db
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set nfs.server=x.x.x.x \
  --set nfs.path=/exported/path
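Once installed, the chart creates a storage class (named "nfs-client" by default) that dynamic PVCs can reference. A sketch of what it is roughly equivalent to, assuming the chart's default values:

```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: cluster.local/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"
```

Verify with kubectl get storageclass before creating PVCs against it.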
Install SonarQube with the Oteemo Helm charts. Reference: https://github.com/Oteemo/charts/blob/master/charts/sonarqube/README.md
helm repo add oteemocharts https://oteemo.github.io/charts
helm pull oteemocharts/sonarqube
tar -xvf sonarqube-9.10.1.tgz
There are 2 ways to install SonarQube:
- Generate the Kubernetes manifest files, customize, & install
- Configure values.yaml & install the Helm release
For the first way, generate the manifests:
helm template . > sonarqube.yaml
- Edit the "sonarqube.yaml" generated above as described below.
There are 2 choices for provisioning volumes:
a) Static PV/PVCs
b) Dynamic provisioning - with volumeClaimTemplates & storageClassName
-
a) Static PV/PVCs:
* Create PersistentVolume for Postgresql's statefulset:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-pv
  namespace: jenkins
spec:
  storageClassName: data-pv
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 8Gi
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /database-data/db
- Create PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data1-pvc
  namespace: jenkins
spec:
  storageClassName: data-pv
  volumeName: data-pv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
-
SonarQube Installation:
-
PV for SonarQube:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sonarqube-pv
  namespace: jenkins
spec:
  storageClassName: sonarqube-pv
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 10Gi
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /database-data
- PVC for Sonarqube:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sonarqube-pvc
  namespace: jenkins
spec:
  storageClassName: sonarqube-pv
  volumeName: sonarqube-pv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
- Provide the claimName in the StatefulSet:
- name: data
  persistentVolumeClaim:
    claimName: data1-pvc
- b) Dynamic PVC's - with volumeClaimTemplates & StorageClassName
Note: A storage class should already exist from when you created the NFS provisioner for PostgreSQL. If you want to create a new storage class, use the spec below.
- Create StorageClass :
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: data-pv
parameters:
  type: pd-standard
provisioner: cluster.local/nfs-subdir-external-provisioner
allowVolumeExpansion: true
reclaimPolicy: Delete
Update StatefulSet's volume spec:
volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
        - "ReadWriteOnce"
      resources:
        requests:
          storage: "2Gi"
      storageClassName: data-pv
Search for Helm Charts in the Repository
helm search repo oteemocharts
OUTPUT:
NAME CHART VERSION APP VERSION DESCRIPTION
oteemocharts/che 0.1.5 7.3.1 A Helm chart for deploying Eclipse Che to Kuber...
oteemocharts/nexusiq 1.0.5 1.63.0 A Helm chart for Nexus IQ
oteemocharts/sonarqube 9.10.1 8.9.7-community SonarQube is an open sourced code quality scann...
oteemocharts/sonatype-nexus 5.4.0 3.37.3 Sonatype Nexus is an open source repository man...
Install SonarQube:
helm install sonarqube oteemocharts/sonarqube -f values.yaml
Kubernetes objects created by the release:
NAME READY STATUS RESTARTS AGE
pod/sonarqube-postgresql-0 1/1 Running 0 6m26s
pod/sonarqube-sonarqube-b5fc958c-s8v92 1/1 Running 0 6m26s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/sonarqube-postgresql ClusterIP 10.107.38.220 <none> 5432/TCP 6m26s
service/sonarqube-postgresql-headless ClusterIP None <none> 5432/TCP 6m26s
service/sonarqube-sonarqube ClusterIP 10.104.240.139 <none> 9000/TCP 6m26s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/sonarqube-sonarqube 1/1 1 1 6m26s
NAME DESIRED CURRENT READY AGE
replicaset.apps/sonarqube-sonarqube-b5fc958c 1 1 1 6m26s
NAME READY AGE
statefulset.apps/sonarqube-postgresql 1/1 6m26s
How are runAsUser/runAsGroup/fsGroup used by the Helm charts for PostgreSQL?
SecurityContext settings:
At the PostgreSQL pod level:
securityContext:
  fsGroup: 1001
At the container level:
securityContext:
  runAsUser: 1001
From inside the container:
$ grep 1001 /etc/passwd
postgresql:x:1001:0::/home/postgresql:/bin/sh
$ grep 1001 /etc/group
// No DATA returned. The "group" database does not have groupID 1001 //
$ id
uid=1001(postgresql) gid=0(root) groups=0(root),1001
(1001 is added to the supplementary groups above, per the fsGroup value)
$cd /bitnami/postgresql; ls -ltr
total 4
drwx------ 19 postgresql root 4096 Mar 24 07:30 data
$cd /bitnami/data/postgresql;ls -ltr
total 88
drwx------ 2 postgresql root 4096 Mar 24 07:30 pg_commit_ts
drwx------ 2 postgresql root 4096 Mar 24 07:30 pg_dynshmem
drwx------ 2 postgresql root 4096 Mar 24 07:30 pg_serial
drwx------ 2 postgresql root 4096 Mar 24 07:30 pg_snapshots
drwx------ 2 postgresql root 4096 Mar 24 07:30 pg_twophase
drwx------ 4 postgresql root 4096 Mar 24 07:30 pg_multixact
drwx------ 2 postgresql root 4096 Mar 24 07:30 pg_replslot
drwx------ 2 postgresql root 4096 Mar 24 07:30 pg_tblspc
Note:
Look at the group ownership of the files above. The given "fsGroup" (1001) is applied by Kubernetes, but PostgreSQL does not honour it: regardless of whether GID 1001 exists in "/etc/group", the entrypoint scripts always change the group ownership to "root".
====
From the host OS:
$ grep 1001 /etc/passwd
oshokumar13:x:1001:1002::/home/oshokumar13:/bin/bash
The user ID on the host OS maps differently from the container: there, the value "1001" points to "oshokumar13".
$ grep 1001 /etc/group
google-sudoers:x:1001:
$ cd /database-data/db/jenkins-data-release-name-postgresql-0-pvc-d248fd0a-8dbf-44d5-9d0a-e13bd8b29993
$ ls -l
total 4
drwx------ 19 oshokumar13 root 4096 Mar 24 07:30 data
$ cd data;ls -ltr
total 88
drwx------ 2 oshokumar13 root 4096 Mar 24 07:30 pg_commit_ts
drwx------ 2 oshokumar13 root 4096 Mar 24 07:30 pg_dynshmem
drwx------ 2 oshokumar13 root 4096 Mar 24 07:30 pg_serial
drwx------ 2 oshokumar13 root 4096 Mar 24 07:30 pg_snapshots
drwx------ 2 oshokumar13 root 4096 Mar 24 07:30 pg_twophase
drwx------ 4 oshokumar13 root 4096 Mar 24 07:30 pg_multixact
drwx------ 2 oshokumar13 root 4096 Mar 24 07:30 pg_replslot
drwx------ 2 oshokumar13 root 4096 Mar 24 07:30 pg_tblspc
-rw------- 1 oshokumar13 root 3 Mar 24 07:30 PG_VERSION
UID 1001 maps to a different user on the host OS, hence the files show a different owner (oshokumar13). As far as group ownership is concerned, the host OS shows the GID set by the PostgreSQL container, which is root (0).
====
HOW TO RESIZE A PV OR PVC? NEED TO CHECK YET.
Create a NodePort Service:
apiVersion: v1
kind: Service
metadata:
  name: sqsvc
spec:
  selector:
    app: sonarqube
  type: NodePort
  ports:
    - port: 9000
      targetPort: 9000
      nodePort: 31111
Here are the Kubernetes objects post-installation of both Jenkins & SonarQube:
NAME READY STATUS RESTARTS AGE
pod/jenkins-0 2/2 Running 28 5d19h
pod/nfs-subdir-external-provisioner-5df6f58947-snwq6 1/1 Running 5 32h
pod/release-name-postgresql-0 1/1 Running 0 49m
pod/release-name-sonarqube-76df9dc647-f7t7c 1/1 Running 1 49m
pod/release-name-ui-test 0/1 Error 0 5d6h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/jenkins ClusterIP 10.101.19.68 <none> 8080/TCP 5d19h
service/jenkins-agent ClusterIP 10.102.57.235 <none> 50000/TCP 5d19h
service/jenkins-svc NodePort 10.109.171.14 <none> 8080:32000/TCP 5d10h
service/release-name-postgresql ClusterIP 10.106.71.211 <none> 5432/TCP 5d6h
service/release-name-postgresql-headless ClusterIP None <none> 5432/TCP 5d6h
service/release-name-sonarqube ClusterIP 10.103.71.177 <none> 9000/TCP 5d6h
service/sqsvc NodePort 10.106.124.231 <none> 9000:31111/TCP 4m19s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nfs-subdir-external-provisioner 1/1 1 1 32h
deployment.apps/release-name-sonarqube 1/1 1 1 3d5h
NAME DESIRED CURRENT READY AGE
replicaset.apps/nfs-subdir-external-provisioner-5df6f58947 1 1 1 32h
replicaset.apps/release-name-sonarqube-6fd749c6cf 0 0 0 4h49m
replicaset.apps/release-name-sonarqube-76df9dc647 1 1 1 3d5h
NAME READY AGE
statefulset.apps/jenkins 1/1 5d19h
statefulset.apps/release-name-postgresql 1/1 49m
Note: It is mandatory to configure a webhook on the SonarQube side so that Jenkins pipeline jobs receive the reports it produces. Refer to the link below & configure it.
https://github.com/q-uest/Notes-Jenkins/wiki/Jenkins-Pipeline-App-Job-with-Sonarqube-on-K8s-cluster
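As a pointer (details are in the link above): the SonarQube Scanner for Jenkins plugin listens on a fixed endpoint, so the webhook created in SonarQube (Administration => Configuration => Webhooks) takes this form, where host/port are whatever your Jenkins is reachable at:

```
http://<jenkins-host>:<port>/sonarqube-webhook/
```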
=====
Nexus Repository Manager
- Get the repository templates, customise as required & create a Nexus release:
helm repo add sonatype https://sonatype.github.io/helm3-charts/
helm search repo nexus-repo
helm pull sonatype/nexus-repository-manager
-
Create an NFS mount & mount it onto /nexus-data. Refer: https://github.com/q-uest/Notes-Jenkins/wiki/Create-NFS-mount-in-Cloud
-
Ensure permissions are provided to /nexus-data
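A sketch of that permission step. The official sonatype/nexus3 image runs as UID/GID 200 ("nexus"), so the mounted directory must be writable by that user; the scratch-path fallback below is only so the snippet runs outside the real host:

```shell
# On the real host, NEXUS_DATA is /nexus-data and chown needs root.
NEXUS_DATA=${NEXUS_DATA:-./nexus-data}
mkdir -p "$NEXUS_DATA"
# UID/GID 200 = the "nexus" user inside the official image.
chown -R 200:200 "$NEXUS_DATA" 2>/dev/null || echo "run 'sudo chown -R 200:200 $NEXUS_DATA' on the host"
chmod -R u+rwX "$NEXUS_DATA"
```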
-
Create PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nexus-pv
  namespace: jenkins
spec:
  storageClassName: nexus-pv
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 15Gi
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /nexus-data/
-
Set "storageClass" to "nexus-pv" in the chart's values.yaml.
-
Install the release:
helm install nexus-rel1 .
- Add a NodePort Service:
apiVersion: v1
kind: Service
metadata:
  name: nexus-svc
spec:
  type: NodePort
  selector:
    app.kubernetes.io/instance: nexus-rel1
    app.kubernetes.io/name: nexus-repository-manager
  ports:
    - port: 8081
      targetPort: 8081
      nodePort: 30081
The examples used are from - https://devopscube.com/jenkins-build-agents-kubernetes/
Here is what you should know about the POD template.
-
By default, the Kubernetes plugin for Jenkins uses a JNLP container image to connect to the Jenkins server. If a container in the pod template is named "jnlp", its image replaces that default.
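For example, to override the default agent image, name a containerTemplate "jnlp". This is a sketch; the image tag is illustrative and must be an inbound-agent-compatible image:

```
podTemplate(containers: [
    containerTemplate(
        name: 'jnlp',
        image: 'jenkins/inbound-agent:4.11-1-jdk11'
    )
]) {
    node(POD_LABEL) {
        sh 'echo running on the custom jnlp image'
    }
}
```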
-
You can have multiple container templates in a single pod template. Then, each container can be used in different pipeline stages.
-
POD_LABEL assigns a random build label to the pod when the build is triggered. You cannot use any name other than POD_LABEL.
podTemplate {
    node(POD_LABEL) {
        stage('Run shell') {
            sh 'echo hello world'
        }
    }
}
When executed, an agent (node) is provisioned from a template like the following & the given shell command is executed in it. [The below is copied from the Jenkins job's console output]:
apiVersion: "v1"
kind: "Pod"
metadata:
  annotations:
    buildUrl: "http://jenkins.jenkins.svc.cluster.local:8080/job/test1/1/"
    runUrl: "job/test1/1/"
  labels:
    jenkins/jenkins-jenkins-agent: "true"
    jenkins/label-digest: "21d3b66c8b6cd7f1cf2f1803e1b9690b56b644ca"
    jenkins/label: "test1_1-0z0p9"
  name: "test1-1-0z0p9-3fkms-8g3v7"
  namespace: "jenkins"
spec:
  containers:
  - env:
    - name: "JENKINS_SECRET"
      value: "********"
    - name: "JENKINS_TUNNEL"
      value: "jenkins-agent.jenkins.svc.cluster.local:50000"
    - name: "JENKINS_AGENT_NAME"
      value: "test1-1-0z0p9-3fkms-8g3v7"
    - name: "JENKINS_NAME"
      value: "test1-1-0z0p9-3fkms-8g3v7"
    - name: "JENKINS_AGENT_WORKDIR"
      value: "/home/jenkins/agent"
    - name: "JENKINS_URL"
      value: "http://jenkins.jenkins.svc.cluster.local:8080/"
    image: "jenkins/inbound-agent:4.11-1-jdk11"
    name: "jnlp"
    resources:
      limits: {}
      requests:
        memory: "256Mi"
        cpu: "100m"
    volumeMounts:
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  nodeSelector:
    kubernetes.io/os: "linux"
  restartPolicy: "Never"
  volumes:
  - emptyDir:
      medium: ""
    name: "workspace-volume"
Here is how the agent (node) template is updated when multiple containers are defined, as below:
podTemplate(containers: [
    containerTemplate(
        name: 'maven',
        image: 'maven:3.8.1-jdk-8',
        command: 'sleep',
        args: '30d'
    ),
    containerTemplate(
        name: 'python',
        image: 'python:latest',
        command: 'sleep',
        args: '30d')
]) {
    node(POD_LABEL) {
        stage('Get a Maven project') {
            git branch: 'main', url: 'https://github.com/spring-projects/spring-petclinic.git'
            container('maven') {
                stage('Build a Maven project') {
                    sh '''
                    echo "maven build"
                    '''
                }
            }
        }
        stage('Get a Python Project') {
            git url: 'https://github.com/hashicorp/terraform.git', branch: 'main'
            container('python') {
                stage('Build a Go project') {
                    sh '''
                    echo "Go Build"
                    '''
                }
            }
        }
    }
}
The agent template will look like the one below (from the job's console output in Jenkins). There are multiple containers as configured in the job above, apart from the default "jnlp" container.
apiVersion: "v1"
kind: "Pod"
metadata:
  annotations:
    buildUrl: "http://jenkins.jenkins.svc.cluster.local:8080/job/multi-cont-pipeline/3/"
    runUrl: "job/multi-cont-pipeline/3/"
  labels:
    jenkins/jenkins-jenkins-agent: "true"
    jenkins/label-digest: "2368df7756af5e2b7c6a6a8be70aa1cf05014111"
    jenkins/label: "multi-cont-pipeline_3-zg79v"
  name: "multi-cont-pipeline-3-zg79v-s43xw-5g8lm"
  namespace: "jenkins"
spec:
  containers:
  - args:
    - "30d"
    command:
    - "sleep"
    image: "python:latest"
    imagePullPolicy: "IfNotPresent"
    name: "python"
    resources:
      limits: {}
      requests: {}
    tty: false
    volumeMounts:
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  - args:
    - "30d"
    command:
    - "sleep"
    image: "maven:3.8.1-jdk-8"
    imagePullPolicy: "IfNotPresent"
    name: "maven"
    resources:
      limits: {}
      requests: {}
    tty: false
    volumeMounts:
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  - env:
    - name: "JENKINS_SECRET"
      value: "********"
    - name: "JENKINS_TUNNEL"
      value: "jenkins-agent.jenkins.svc.cluster.local:50000"
    - name: "JENKINS_AGENT_NAME"
      value: "multi-cont-pipeline-3-zg79v-s43xw-5g8lm"
    - name: "JENKINS_NAME"
      value: "multi-cont-pipeline-3-zg79v-s43xw-5g8lm"
    - name: "JENKINS_AGENT_WORKDIR"
      value: "/home/jenkins/agent"
    - name: "JENKINS_URL"
      value: "http://jenkins.jenkins.svc.cluster.local:8080/"
    image: "jenkins/inbound-agent:4.11-1-jdk11"
    name: "jnlp"
    resources:
      limits: {}
      requests:
        memory: "256Mi"
        cpu: "100m"
    volumeMounts:
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  nodeSelector:
    kubernetes.io/os: "linux"
  restartPolicy: "Never"
  volumes:
  - emptyDir:
      medium: ""
    name: "workspace-volume"
The use case here is to retain the Maven repository (.m2/repository) despite the transient nature of the agent pod in which the application is built with Maven.
- Create a PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: maven-repo-storage
  namespace: jenkins
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  storageClassName: jenkins-sc
- The Maven Job (with PVC) :
podTemplate(containers: [
    containerTemplate(
        name: 'maven',
        image: 'maven:latest',
        command: 'sleep',
        args: '99d'
    )
],
volumes: [
    persistentVolumeClaim(
        mountPath: '/root/.m2/repository',
        claimName: 'maven-repo-storage',
        readOnly: false
    )
])
{
    node(POD_LABEL) {
        stage('Build Petclinic Java App') {
            git url: 'https://github.com/spring-projects/spring-petclinic.git', branch: 'main'
            container('maven') {
                sh 'mvn -B -ntp clean package -DskipTests'
            }
        }
    }
}
The Agent is setup with the following template (per the Jenkins job's console output):
Note:
The Maven container's path (/root/.m2/repository) is now mounted on the PVC. Hence, once the repository is downloaded, subsequent executions of the same job take less time.
apiVersion: "v1"
kind: "Pod"
metadata:
  annotations:
    buildUrl: "http://jenkins.jenkins.svc.cluster.local:8080/job/pcl-pljob1/2/"
    runUrl: "job/pcl-pljob1/2/"
  labels:
    jenkins/jenkins-jenkins-agent: "true"
    jenkins/label-digest: "5246da42bcc7766ac8de1dfe375cab7a1105b37c"
    jenkins/label: "pcl-pljob1_2-qms2t"
  name: "pcl-pljob1-2-qms2t-ckv8n-ktjdx"
  namespace: "jenkins"
spec:
  containers:
  - args:
    - "99d"
    command:
    - "sleep"
    image: "maven:latest"
    imagePullPolicy: "IfNotPresent"
    name: "maven"
    resources:
      limits: {}
      requests: {}
    tty: false
    volumeMounts:
    - mountPath: "/root/.m2/repository"
      name: "volume-0"
      readOnly: false
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  - env:
    - name: "JENKINS_SECRET"
      value: "********"
    - name: "JENKINS_TUNNEL"
      value: "jenkins-agent.jenkins.svc.cluster.local:50000"
    - name: "JENKINS_AGENT_NAME"
      value: "pcl-pljob1-2-qms2t-ckv8n-ktjdx"
    - name: "JENKINS_NAME"
      value: "pcl-pljob1-2-qms2t-ckv8n-ktjdx"
    - name: "JENKINS_AGENT_WORKDIR"
      value: "/home/jenkins/agent"
    - name: "JENKINS_URL"
      value: "http://jenkins.jenkins.svc.cluster.local:8080/"
    image: "jenkins/inbound-agent:4.11-1-jdk11"
    name: "jnlp"
    resources:
      limits: {}
      requests:
        memory: "256Mi"
        cpu: "100m"
    volumeMounts:
    - mountPath: "/root/.m2/repository"
      name: "volume-0"
      readOnly: false
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  nodeSelector:
    kubernetes.io/os: "linux"
  restartPolicy: "Never"
  volumes:
  - name: "volume-0"
    persistentVolumeClaim:
      claimName: "maven-repo-storage"
      readOnly: false
  - emptyDir:
      medium: ""
    name: "workspace-volume"