Create a continuous integration system using on-demand Jenkins agents and Compute Engine
- Create a Jenkins agent image with Packer from which on-demand Jenkins agents can be built in the future
- Deploy Jenkins using Cloud Marketplace
- Configure Jenkins deployment with required plugins for launching Jenkins agents and storing build artifacts in Cloud Storage
- Configure lifecycle policies to optimize Cloud Storage costs of long-term build artifact storage
Create a service account from Cloud Shell inside your new project and grant it the requisite privileges
- Use the gcloud command to create the service account
gcloud iam service-accounts create jenkins --display-name jenkins
- Store your service account email address and Google Cloud Project ID in environment variables for future use
export SA_EMAIL=$(gcloud iam service-accounts list \
--filter="displayName:jenkins" --format='value(email)')
export PROJECT=$(gcloud info --format='value(config.project)')
- Grant the following roles to the newly created service account: Storage Admin, Compute Instance Admin, Compute Network Admin, Compute Security Admin, and Service Account User (the legacy Service Account Actor role has been deprecated in favor of Service Account User)
gcloud projects add-iam-policy-binding $PROJECT \
--role roles/storage.admin --member serviceAccount:$SA_EMAIL
gcloud projects add-iam-policy-binding $PROJECT --role roles/compute.instanceAdmin.v1 \
--member serviceAccount:$SA_EMAIL
gcloud projects add-iam-policy-binding $PROJECT --role roles/compute.networkAdmin \
--member serviceAccount:$SA_EMAIL
gcloud projects add-iam-policy-binding $PROJECT --role roles/compute.securityAdmin \
--member serviceAccount:$SA_EMAIL
gcloud projects add-iam-policy-binding $PROJECT --role roles/iam.serviceAccountUser \
--member serviceAccount:$SA_EMAIL
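The five grants can also be collapsed into a loop. This is a sketch using placeholder values for PROJECT and SA_EMAIL (in practice they are already set by the earlier export commands), and it echoes each command rather than running it; drop the `echo` to apply the bindings for real.

```shell
# Placeholder values; the earlier export steps set the real ones.
PROJECT=my-project
SA_EMAIL=jenkins@my-project.iam.gserviceaccount.com

# Note: roles/iam.serviceAccountActor is deprecated; serviceAccountUser
# is its replacement. Each iteration echoes one grant command.
for role in roles/storage.admin roles/compute.instanceAdmin.v1 \
            roles/compute.networkAdmin roles/compute.securityAdmin \
            roles/iam.serviceAccountUser; do
  echo gcloud projects add-iam-policy-binding "$PROJECT" \
       --role "$role" --member "serviceAccount:$SA_EMAIL"
done
```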
- Create the service account key and download it to your local machine for future use when configuring the Compute Engine plugin to authenticate with the Compute Engine API. The following command generates a key file called jenkins-sa.json.
gcloud iam service-accounts keys create jenkins-sa.json --iam-account $SA_EMAIL
- Click the button at the top of the Cloud Shell window that consists of three vertically stacked dots, then click "Download File"
- Alternatively, click the folder icon, then click the arrow next to the directory that pops up to browse a list of files. Find the jenkins-sa.json file and click it
- Click "Download" to save the file to your local machine
We will create a base image for a Compute Engine instance that can be reused when needed and that contains the software and tools necessary to act as a Jenkins executor.
Later we will use Packer to build images, which requires SSH to communicate with the build instances. To enable SSH, we must create an SSH key from the cloud shell.
- The following command checks for an existing SSH key and uses it if present; otherwise, it creates a new key pair. Supplying the file path explicitly keeps ssh-keygen from prompting for one.
ls ~/.ssh/id_rsa.pub || ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ""
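The check-or-create idiom above (`A || B` runs B only when A fails) can be tried safely against a throwaway path instead of your real ~/.ssh keys. This sketch uses a hypothetical /tmp location:

```shell
# Stand-in path so the demo never touches ~/.ssh; the real step
# targets ~/.ssh/id_rsa instead.
KEY=/tmp/demo_id_rsa
rm -f "$KEY" "$KEY.pub"

# ssh-keygen runs only when the public key is absent; -q suppresses
# the banner, -N "" sets an empty passphrase.
ls "$KEY.pub" 2>/dev/null || ssh-keygen -t rsa -N "" -f "$KEY" -q
ls "$KEY.pub"
```

Running it a second time skips key generation, since the `ls` now succeeds.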
- Add the public key to your project's metadata. If you receive a jq error similar to
jq: error (at <stdin>:247): Cannot iterate over null (null)
try adding a ? directly after the [] in the following code block.
gcloud compute project-info describe \
--format=json | jq -r '.commonInstanceMetadata.items[] | select(.key == "ssh-keys") | .value' > sshKeys.pub
echo "$USER:$(cat ~/.ssh/id_rsa.pub)" >> sshKeys.pub
gcloud compute project-info add-metadata --metadata-from-file ssh-keys=sshKeys.pub
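The jq error mentioned above occurs when the project has no metadata items yet, so `.items` is null. The fix can be reproduced on sample input (the JSON below is a stand-in for the gcloud output, assuming jq is installed as in Cloud Shell):

```shell
# With no "items" key present, `.items[]` would abort with
# "Cannot iterate over null"; `.items[]?` emits nothing and exits 0.
echo '{"commonInstanceMetadata": {"kind": "compute#metadata"}}' \
  | jq -r '.commonInstanceMetadata.items[]? | select(.key == "ssh-keys") | .value'
echo "exit code: $?"
```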
Next we will use Packer to create a base image for our Compute Engine VMs which will act as on-demand temporary build executors in Jenkins. You can customize your build image by adding shell commands to the provisioners
section of the Packer configuration or by adding other Packer provisioners.
- Download and install Packer from the HashiCorp website. The following commands use version 1.9.1.
wget https://releases.hashicorp.com/packer/1.9.1/packer_1.9.1_linux_amd64.zip
unzip packer_1.9.1_linux_amd64.zip
- Create the configuration file for your Packer image builds
export PROJECT=$(gcloud info --format='value(config.project)')
cat > jenkins-agent.json <<EOF
{
"builders": [
{
"machine_type": "n2-standard-1",
"type": "googlecompute",
"project_id": "$PROJECT",
"source_image_family": "ubuntu-2004-lts",
"source_image_project_id": "ubuntu-os-cloud",
"zone": "us-central1-f",
"disk_size": "50",
"image_name": "jenkins-agent-{{timestamp}}",
"image_family": "jenkins-agent",
"ssh_username": "ubuntu"
}
],
"provisioners": [
{
"valid_exit_codes": ["0", "1", "2"],
"type": "shell",
"inline": ["sudo apt-get update && sudo apt-get install -y default-jdk"]
}
]
}
EOF
- Build the image by executing Packer
./packer build jenkins-agent.json
- A successful build will yield an output similar to the following
Build 'googlecompute' finished after 3 minutes 14 seconds.
==> Wait completed after 3 minutes 14 seconds
==> Builds finished. The artifacts of successful builds are:
--> googlecompute: A disk image was created: jenkins-agent-1689889295
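Later steps in the Jenkins UI will ask for this image name, so it is handy to capture it from Packer's final log line. A sketch, using the sample line above (in practice you would pipe `packer build` through `tee` and grep the saved log):

```shell
# Sample log line copied from the build output above.
LOG_LINE='--> googlecompute: A disk image was created: jenkins-agent-1689889295'

# Strip everything through the last ": " to keep just the image name.
IMAGE_NAME="${LOG_LINE##*: }"
echo "$IMAGE_NAME"
```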
We will use Cloud Marketplace to provision a Bitnami Jenkins instance that can use the image we built in the previous section.
- Go to Cloud Marketplace for Jenkins
- Click on Launch
- Enable the required APIs
- Add the correct zone to the configuration
- Change the Machine Type field to n1-standard-4 (4 vCPUs, 15 GB memory)
- Click "Deploy"
- Navigate to the Jenkins login page by clicking "Site Address"
- Log in to Jenkins using the Admin User credentials listed on the details panel
- Jenkins is now ready to use. If you experience a 404 error after logging in, simply remove
/jenkins
after the IP address in your browser and it will take you to the Jenkins dashboard.
First we must install plugins that will allow Jenkins to create agents using Compute Engine and store artifacts from those agents in Cloud Storage.
- From the Jenkins dashboard, click "Manage Jenkins"
- Click "Plugins"
- Click the "Available Plugins" tab
- Use the filter to search for the "Google Compute Engine" and "Google Cloud Storage" plugins
- Select these plugins and click "Download now and install after restart"
- Click the "Restart Jenkins when installation is complete and no jobs are running" checkbox
- From the Jenkins dashboard, click "Manage Jenkins"
- Click "Credentials"
- Under the heading "Stores scoped to Jenkins" click the downward arrow next to "global" and select "Add credentials"
- Set Kind to Google Service Account from private key
- In the Project Name field, enter your Google Cloud project ID
- Next to the JSON key option click "Choose File"
- Add the
jenkins-sa.json
key that you downloaded to your local machine earlier
- Click "Create"
Configure the Jenkins Compute Engine plugin with the credentials it uses to provision agent instances.
- From the Jenkins dashboard, click "Manage Jenkins"
- Click "Nodes and Clouds"
- Click the "Clouds" tab
- Click "Add" and select "Compute Engine"
- Under the "Name" field enter
gce
- Under the "Project ID" field enter your Google Cloud project ID
- Under the "Instance Cap" field enter
8
- Under "Service Account Credentials" select your service account which will be listed as your Google Cloud project ID
- Click "Save"
Under Clouds and at the bottom of the Compute Engine configuration panel from the last section, there is a section called "Instance Configurations" that we will use to configure our agent VMs.
- Click "Add"
- Under the "Name Prefix" field enter
ubuntu-2004
- Under the "Description" field enter
Ubuntu agent
- Under the "Labels" field enter
ubuntu-2004
- Under the "Region" field select "us-central1"
- Under the "Zone" field select "us-central1-f"
- Click "Advanced"
- Under the "Machine Type" field select "n1-standard-1"
- Under "Networking" select the "default" setting for both the "Network" and "Subnetwork" fields
- Check the "Attach External IP?" box
- Under "Boot Disk" select your Google Cloud project ID under the "Image project" field
- Under "Image name" select the image you built earlier using Packer
- Under "Size" enter
50
- Click "Save"
- From the Jenkins dashboard, click "Create a job"
- Enter
test
as the item name
- Click "Freestyle project"
- Click "OK"
- Select the "Execute concurrent builds if necessary" and "Restrict where this project can be run boxes"
- Under "Label Expression" enter
ubuntu-2004
- Under "Build Steps" click "Add build step" and select "Execute shell"
- Enter
echo "This is a test!"
into the text box
- Click "Save"
- Click the "Build Now" tab to start the build
More than likely, you will want to upload artifacts from your builds to Cloud Storage for future analysis or testing. We can configure our Jenkins job to generate a log and build artifact that are both uploaded to Cloud Storage.
- From Cloud Shell, create a storage bucket for the build artifacts
export PROJECT=$(gcloud info --format='value(config.project)')
gsutil mb gs://$PROJECT-jenkins-artifacts
- In the jobs list on the Jenkins UI, select the job we just created. Then click "Configure".
- Under "Build Steps" change the command text field to
env > build_environment.txt
- Under "Post-build Actions" click "Add post-build action"
- Click "Google Cloud Storage Plugin"
- Under "Storage Location" enter
gs://[YOUR_PROJECT_ID]-jenkins-artifacts/$JOB_NAME/$BUILD_NUMBER
but substitute your Google Cloud Project ID where indicated
- Click "Add Operation" and then select "Classic Upload"
- Under the "File Pattern" heading enter
build_environment.txt
- Under "Storage Location" enter
gs://[YOUR_PROJECT_ID]-jenkins-artifacts/$JOB_NAME/$BUILD_NUMBER
but substitute your Google Cloud Project ID where indicated
- Check the "For failed jobs?" box
- Click "Save"
- Click the "Build Now" tab to begin the build
- From Cloud Shell, access the build artifact using the
gsutil
command
export PROJECT=$(gcloud info --format='value(config.project)')
gsutil cat gs://$PROJECT-jenkins-artifacts/test/2/build_environment.txt
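The path in the upload step expands at build time, since Jenkins sets JOB_NAME and BUILD_NUMBER in each build's environment. A sketch of how it resolves, using placeholder values that match the test job above:

```shell
# Placeholder values; Jenkins supplies JOB_NAME and BUILD_NUMBER itself,
# and PROJECT_ID stands in for your real Google Cloud project ID.
PROJECT_ID=my-project
JOB_NAME=test
BUILD_NUMBER=2

ARTIFACT_PATH="gs://${PROJECT_ID}-jenkins-artifacts/${JOB_NAME}/${BUILD_NUMBER}/build_environment.txt"
echo "$ARTIFACT_PATH"
```

So build 2 of the "test" job lands under .../test/2/, keeping each build's artifacts in its own prefix.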
Most likely, you will access recent build artifacts far more often than older ones. To save costs, use object lifecycle management to move older artifacts from higher-performance storage classes to lower-cost, higher-latency storage classes.
- From Cloud Shell, create a lifecycle configuration file that transfers all build artifacts to Nearline storage after 30 days and all Nearline objects to Coldline storage after 365 days
cat > artifact-lifecycle.json <<EOF
{
"lifecycle": {
"rule": [
{
"action": {
"type": "SetStorageClass",
"storageClass": "NEARLINE"
},
"condition": {
"age": 30,
"matchesStorageClass": ["MULTI_REGIONAL", "STANDARD", "DURABLE_REDUCED_AVAILABILITY"]
}
},
{
"action": {
"type": "SetStorageClass",
"storageClass": "COLDLINE"
},
"condition": {
"age": 365,
"matchesStorageClass": ["NEARLINE"]
}
}
]
}
}
EOF
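Before uploading, it is worth confirming the configuration is valid JSON with the expected shape. A standalone sketch (assuming jq is installed, as in Cloud Shell; it writes a minimal copy of the config to a hypothetical /tmp path so it can run on its own):

```shell
# Minimal copy of the lifecycle config: one rule per storage-class transition.
cat > /tmp/artifact-lifecycle-check.json <<'EOF'
{"lifecycle": {"rule": [
  {"action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
   "condition": {"age": 30, "matchesStorageClass": ["STANDARD"]}},
  {"action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
   "condition": {"age": 365, "matchesStorageClass": ["NEARLINE"]}}
]}}
EOF

# jq fails loudly on malformed JSON; a clean parse prints the rule count (2).
jq '.lifecycle.rule | length' /tmp/artifact-lifecycle-check.json
```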
- Upload the configuration file to your artifact storage bucket
export PROJECT=$(gcloud info --format='value(config.project)')
gsutil lifecycle set artifact-lifecycle.json gs://$PROJECT-jenkins-artifacts
- https://cloud.google.com/architecture/using-jenkins-for-distributed-builds-on-compute-engine
- https://stackoverflow.com/questions/28213232/docker-error-jq-error-cannot-iterate-over-null
- https://stackoverflow.com/questions/52684656/the-zone-does-not-have-enough-resources-available-to-fulfill-the-request-the-re
- https://stackoverflow.com/questions/73337907/packer-build-json-file-for-linux-image-creation-builds-finished-but-no-artifact
- https://stackoverflow.com/questions/52586941/got-an-error-zone-resource-pool-exhausted-when-creating-a-new-instance-on-goog
- https://groups.google.com/g/gce-discussion/c/nV1Ym57OCj8