This repository hosts the Puppet code for the Jenkins project's own infrastructure.
See the Jenkins infrastructure project for an overview of the project's infrastructure and of the services being managed by this repository. A non-exhaustive list of services is available here.
- The services are managed with r10k and Puppet; the configuration files are available inside this repository.
- There are multiple types of service deployments:
  - The majority of services run as containers inside Kubernetes and are NOT managed here (ref. jenkins-infra/charts).
  - Some services, like ci.jenkins.io, run inside virtual machines provisioned in cloud providers.
  - The other services run on bare-metal machines provided by sponsors.
- There are Puppet templates for all services. Configuration options are defined with Hiera and stored in hieradata. See `hieradata/common.yaml` for most of the settings.
- Not all services are fully configured with Configuration-as-Code. For example, Jenkins controllers (the `jenkinscontroller` profile) rely on configuration being supplied from the Jenkins home directories.
All containerized services are stored in separate repositories (Plugin Site, IRC Bot, etc.). They have their own release cycles and maintainers; this repository only manages and configures their deployments.
- See this page for service repository links.
- Service images are hosted inside the jenkinsciinfra DockerHub account.
- A Continuous Delivery pipeline is usually configured for each service inside its own repository.
- Image versions are defined in the `hieradata/common.yaml` file by the `*:image_tag` variables. Services can be updated by submitting a pull request with the version update (see the example below).
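As a minimal sketch (the key name `profile::someservice::image_tag` below is purely hypothetical; look up the real `*:image_tag` key of the service you are interested in), the currently deployed tag can be read with `yq`:

```bash
# Hypothetical key name: replace "profile::someservice::image_tag" with
# the real *:image_tag key from hieradata/common.yaml.
yq '."profile::someservice::image_tag"' hieradata/common.yaml
```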
All the secrets are encrypted within the repository using eyaml. In order to view or edit them:
- Follow the instructions in https://github.com/jenkins-infra/jenkins-keys (private repository).
- Use the command `bundle exec eyaml edit <filename>`, such as `bundle exec eyaml edit ./hieradata/common.yaml`.
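As a sketch of the workflow (assuming the PKCS7 keys from jenkins-infra/jenkins-keys are already set up locally):

```bash
# Opens the file in $EDITOR with secrets transparently decrypted:
# existing secrets show up as DEC(n)::PKCS7[...]! blocks, new ones can be
# added as DEC::PKCS7[...]!, and everything is re-encrypted to
# ENC[PKCS7,...] when the file is saved.
bundle exec eyaml edit ./hieradata/common.yaml
```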
- Ruby 2.6.x is required.
- Please note that Ruby 2.7.x and 3.x have never been tested.
- Bundler 1.17.x is required, with the `bundle` command line installed and present in your `PATH`.
- Please note that Bundler 2.x has never been tested.
- A bash-compliant shell is required: `sh` has never been tested, nor has the Windows Cygwin shell (but WSL is ok).
- The `yq` command line in version 4.x is needed (see the quick version check below).
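A quick way to double-check the expected versions, as a sketch:

```bash
ruby --version    # expected: 2.6.x
bundle --version  # expected: Bundler 1.17.x
yq --version      # expected: 4.x
```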
You can always check the Docker image that ci.jenkins.io uses to run the test harness for this project at https://github.com/jenkins-infra/docker-inbound-agents/blob/main/ruby/Dockerfile (Jenkins agent labelled with `ruby`).
Run the script `./scripts/setupgems.sh` (see the usage sketch below) to ensure that all the local dependencies are ready for local development, including:
- Ruby gems managed by `bundler` (through `Gemfile` and `Gemfile.lock`) to ensure development tools are available through `bundle exec <tool>` commands
- Puppet modules retrieved from `./Puppetfile` and installed to `./modules`
- Unit test fixtures generated from `./Puppetfile` into `.fixtures.yml`, but also other locations in `./spec/`
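A typical first run then looks like this (a sketch; the exact tools available through `bundle exec` depend on the `Gemfile`):

```bash
./scripts/setupgems.sh   # install gems, Puppet modules and unit-test fixtures
bundle list              # confirm the development gems are available
ls ./modules             # confirm Puppet modules were retrieved from ./Puppetfile
```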
TL;DR: as of today, there are no automated acceptance tests. Contributions are welcome.
A long time ago, this repository used serverspec for on-machine acceptance testing. Combined with Vagrant, it allowed executing acceptance tests per role. However, this serverspec + Vagrant setup relies on deprecated (and no longer maintained) components.
Proposals for the future:
- Switch to Goss, as it can also be used for Docker through the `dgoss` wrapper and supports automatically adding tests (sketched below)
- ServerSpec V2 executed through `vagrant ssh` (but this requires updating the Ruby dependencies and finding a way to run serverspec within the VM instead of outside)
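Purely as an illustration of the first proposal (nothing below exists in the repository yet; the role name and gossfile path are hypothetical), a per-role Goss check could be run inside a Vagrant-provisioned VM like this:

```bash
# Hypothetical: validate a role from inside the VM with a per-role gossfile
vagrant ssh ROLE -c "goss --gossfile /vagrant/spec/goss/ROLE.yaml validate"
```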
- Make sure that you have set up all the Pre-requisites for local development above
- Install Vagrant version 2.x.
- Install Docker
- Docker Desktop is recommended but any other Docker Engine installation should work.
- Only Linux containers are supported, with cgroups v2 (cgroups v1 might work).
- The `docker` command line must be present in your `PATH`.
- You must be able to share a local directory and to use the `--privileged` flag (see the quick checks below).
- Run the `./scripts/vagrant-bootstrap.sh` script to prepare your local environment.
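To quickly verify the Docker prerequisites (a sketch only):

```bash
docker version                              # confirms the docker CLI is in your PATH
docker info --format '{{.CgroupVersion}}'   # expected: 2 (cgroups v1 might work)
docker run --rm --privileged hello-world    # confirms privileged containers are allowed
```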
To launch a test instance, run `vagrant up ROLE`, where `ROLE` is one of the roles defined in `dist/role/manifests/`. Example: `vagrant up jenkins::controller`
Note: for this role, you may see the following error message, because plugin installation needs a running Jenkins instance, which is not quite ready when the reload happens:
```
Error: /Stage[main]/Profile::Jenkinscontroller/Exec[perform-jcasc-reload]: Failed to call refresh: '/usr/bin/curl -XPOST --silent --show-error http://127.0.0.1:8080/reload-configuration-as-code/?casc-reload-token=SuperSecretThatShouldBeEncryptedInProduction' returned 7 instead of one of [0]
```
You can safely ignore it.
You can re-run Puppet and execute tests with `vagrant provision ROLE` repeatedly while the VM is up and running. When it's all done, remove the instance via `vagrant destroy ROLE`.
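Putting the whole loop together with the example role from above:

```bash
vagrant up jenkins::controller         # create the instance and run Puppet
vagrant provision jenkins::controller  # re-run Puppet after local changes
vagrant destroy jenkins::controller    # remove the instance when you are done
```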
The default branch of this repository is `production`, which is the branch pull requests should target by default.
```
+----------------+
| pull-request-1 |
+-----------x----+
             \
              \ (review and merge, runs tests)
production     \
|---------------o--x--x--x---------------->
```
When an infra project team member is happy with the code in your pull request, they can merge it to `production`, which will then be automatically deployed to the production hosts.
For installing agents, refer to the installing agents section of the PuppetLabs documentation.
"Dynamic environments" are in a bit of flux for the current version (3.7) of Puppet Enterprise that we're using. An unfortunate side-effect of this is that creating a branch in this repository is not sufficient to create a dynamic environment that can be used via the Puppet master.
To enable an environment, add a file on the Puppet master at `/etc/puppetlabs/puppet/environments/my-environment-here/environment.conf` with the following content:
```
modulepath = ./dist:./modules:/opt/puppet/share/puppet/modules
manifest = ./manifests/site.pp
```
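Once the environment exists, a node can be pointed at it for a one-off run. This is a sketch relying on standard Puppet agent flags rather than a documented workflow of this repository:

```bash
# Run the Puppet agent once against the new dynamic environment
puppet agent --test --environment my-environment-here
```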
See this page for the overview and links.
And this local page for tips.
Channels:
- `#jenkins-infra` on the Libera Chat IRC network - see https://www.jenkins.io/chat/
- jenkins-infra/helpdesk Issue Tracker in GitHub
- jenkins-infra@groups.google.com mailing list