
Jenkins Development Environment on AWS #13

Merged
merged 27 commits into maidsafe-archive:master from jacderida:aws on Mar 1, 2019

Conversation

jacderida (Contributor) commented:

Hi Stephen/Calum,

This is a development setup for spinning up and running our Jenkins instance on AWS. In this context, "development" means the setup takes a relaxed approach to security: it runs on the default VPC for our account, all the machines have public IPs assigned, and Jenkins isn't served over HTTPS.

The development environment consists of:

  • A micro instance running the Jenkins master
  • 2 CentOS 7.5 micro instances acting as Docker slaves
  • A Windows Server 2016 t3.small instance acting as a slave (Windows is too slow on a micro)

The reason there are 2 Linux slaves is just so I could verify that Jenkins dynamically registers each slave.

You can stand up the environment with make jenkins-environment-aws. That process takes about 30 minutes, and at the end it prints the Jenkins URL. The time could be reduced by spinning machines up in parallel and so on, but I don't think it's worth it for a dev setup. Important note: this script is not designed for more than one person to run it at the same time, so if you both want to try it you'll need to coordinate who goes first. You can tear everything down again using make clean-aws. There are some tools you need to install for this, which I've documented in the README.

Suggested acceptance criteria:

  • The instructions describe everything necessary to get the environment running
  • You should be able to log in to the Jenkins instance that was just spun up
  • All the slaves should be available in Jenkins
  • Spin up the local environment and make sure Jenkins and the slaves are all online (the changes for the cloud should be compatible with the local setup; I've tested that myself, but it would be good for someone else to do it too)
  • A Linux and a Windows build both run through

The last of those might be optional because it takes a long time for a build to run on resources of this size.

Cheers,

Chris

jacderida and others added 26 commits February 15, 2019 10:06
On the local machines we can assume that pip is already installed, since it's used to install Ansible. In the cloud we need to set this up ourselves. Not every distribution has pip in its package repositories, so we just install it with the get-pip script.
On the slightly later version of CentOS being used on AWS, the docker
module wouldn't install correctly and required this flag to be passed to
pip.
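A minimal sketch of what those two changes amount to in Ansible, assuming CentOS hosts and the pip module's extra_args parameter for the flag mentioned above (the paths and the flag value shown are placeholders, not taken from the commit):

```yaml
# Bootstrap pip with get-pip.py, then install the docker module that the
# Docker-related tasks need. Paths and extra_args are illustrative.
- name: download the get-pip bootstrap script
  get_url:
    url: https://bootstrap.pypa.io/get-pip.py
    dest: /tmp/get-pip.py

- name: install pip using the get-pip script
  command: python /tmp/get-pip.py
  args:
    creates: /usr/bin/pip

- name: install the docker python module
  pip:
    name: docker
    extra_args: "--ignore-installed"  # placeholder for the flag referred to above
```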
Introduces a Vagrant configuration for hosting the Jenkins Master on AWS. The Ansible provision doesn't work correctly yet because group_vars aren't picked up when using dynamic inventory. Need to investigate how to get this working.
This adds a development setup for running Jenkins on AWS. I've used 2 Linux hosts just so I could prove it would work if there were more than 1 slave in the mix (which is likely to be the case for a non-local setup).

The dynamic inventory seemingly needs to consist of the EC2 script plus
a static inventory file that groups the machines based on their
hostnames. The EC2 dynamic inventory maps the hostnames in the static
inventory to the Name tag on the EC2 machine.
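A sketch of how that composition might look, assuming the inventory is a directory containing ec2.py plus a static grouping file like the one below; the host names are assumptions and have to match the Name tags on the instances:

```yaml
# Hypothetical static inventory file that sits alongside ec2.py; ec2.py
# resolves each host name below to the EC2 instance whose Name tag matches.
all:
  children:
    slaves:
      hosts:
        docker_slave_001:
        docker_slave_002:
    jenkins_master:
      hosts:
        jenkins_master_001:
```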

The Jenkins CASC file is updated to use a loop to dynamically populate all the Linux machines in the 'slaves' group. This will probably need to be split into 'linux_slaves' (or 'docker_slaves') and 'windows_slaves' groups when Windows support is added shortly. It will also need an if condition to distinguish whether we're running a local setup or not, since it references properties that are only available on EC2, e.g. 'ec2_public_dns_name'.
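A sketch of how that loop might look in the CASC template, assuming the JCasC permanent-node/SSH-launcher layout; the remote FS path and credential ID are assumptions:

```yaml
# jenkins.yml.j2 (illustrative): one agent entry per host in the 'slaves'
# group; ec2_public_dns_name is injected by the ec2.py dynamic inventory.
jenkins:
  nodes:
{% for host in groups['slaves'] %}
    - permanent:
        name: "{{ host }}"
        remoteFS: "/home/jenkins"
        launcher:
          ssh:
            host: "{{ hostvars[host]['ec2_public_dns_name'] }}"
            port: 22
            credentialsId: "jenkins-slave-ssh-key"
{% endfor %}
```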
Installs all the packages that are installed in the Travis Windows environment, as we want to match it as closely as possible.

See the Travis documentation:
https://docs.travis-ci.com/user/reference/windows/
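A minimal sketch, assuming Chocolatey is the installer; the package names below are illustrative rather than the exact list from the role:

```yaml
# Install the tooling the Windows slave needs, mirroring the Travis image.
- name: install packages matching the Travis Windows environment
  win_chocolatey:
    name: "{{ item }}"
    state: present
  loop:
    - git
    - 7zip
```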
This appears to be pre-installed in the local VM, but it's not installed
on the EC2 instance.
Even though this provision is seemingly running as the Administrator user, it still wouldn't let Git install, complaining about permissions.

There is a thread and some resolutions to the issue here:
chocolatey/choco#1048

I chose to just assign FullControl permissions to the Chocolatey
installation directory. This issue doesn't occur on the local VM and I
suspect this change will be compatible with that.
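A sketch of that workaround, assuming the default Chocolatey install location and that the provisioning user is Administrator (both assumptions):

```yaml
# Grant FullControl on the Chocolatey installation directory so package
# installs such as Git stop failing with permission errors.
- name: give the provisioning user full control of the Chocolatey directory
  win_acl:
    path: C:\ProgramData\chocolatey
    user: Administrator
    rights: FullControl
    type: allow
    state: present
```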
Change the group name from 'jenkins-windows-slaves' to just
'windows_slaves'. The dynamic inventory plugin doesn't seem to play too
well with hyphens, and I think this is a better name anyway. It's pretty
redundant to include the word "jenkins" in the group name.
The win_tempfile module didn't work correctly on the EC2 instance. The directory I created seemed to be missing already by the time I tried to copy over the Rust installer.
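A sketch of the replacement approach, using a fixed directory instead of win_tempfile; the paths and installer file name are assumptions:

```yaml
# Create a predictable working directory and copy the Rust installer into
# it, rather than relying on win_tempfile's temporary directory surviving.
- name: create a working directory for the Rust installer
  win_file:
    path: C:\install\rust
    state: directory

- name: copy the Rust installer over
  win_copy:
    src: rustup-init.exe
    dest: C:\install\rust\rustup-init.exe
```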
These had to be separated out because Jenkins has a slave configuration and all the slave machines need to be available at the point when the master is provisioned. This is because the list of slaves is provided by the Ansible dynamic inventory, and the Windows slaves wouldn't be available if we spun them up after the master provision.

On the other hand, the Windows slave provision requires a running
Jenkins instance to connect to, so we spin the Windows machine up but
don't do the Ansible run until after the Jenkins master has been
provisioned.

The 2 scripts need to share information, so the instance ID and generated passwords are written out to a hidden directory. It's not a security issue to store these passwords on a local machine, as this is just a development setup.
This is intended to be used as a 'user data' script for an EC2 instance,
hence the '<powershell>' tag. It won't run without this.
This is a better name for the group and it also matches the AWS environment.
The jinja2 template for the Jenkins configuration uses a variable to
determine if we're running in cloud mode or not. In that case the slaves
are populated using a different method that queries the inventory. It's
necessary for there to be different paths, because some of the variables
that are injected by dynamic inventory don't exist in the local setup.
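A skeleton of that branching in the template, assuming the flag is a boolean variable passed in by Ansible (called 'cloud' here; the real name may differ):

```yaml
# Inside the CASC template: only the cloud branch references facts that
# exist solely in the EC2 dynamic inventory, e.g. ec2_public_dns_name.
{% if cloud | default(false) %}
{%   for host in groups['linux_slaves'] %}
    # ...agent entry built from hostvars[host]['ec2_public_dns_name']...
{%   endfor %}
{% else %}
    # ...static agent entries pointing at the local Vagrant machines...
{% endif %}
```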
This spins up the security groups with the correct rules, rather than
relying on them pre-existing.
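A sketch of the kind of task involved, assuming Ansible's ec2_group module; the group name, region, and ports (8080 for the Jenkins UI, 22 for SSH) are assumptions rather than the exact rules in this PR:

```yaml
# Create the development security group instead of assuming it already
# exists in the account.
- name: create the Jenkins development security group
  ec2_group:
    name: jenkins-dev
    description: Development Jenkins environment
    region: eu-west-2
    rules:
      - proto: tcp
        from_port: 8080
        to_port: 8080
        cidr_ip: 0.0.0.0/0
      - proto: tcp
        from_port: 22
        to_port: 22
        cidr_ip: 0.0.0.0/0
```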
I had been doing the development with my own personal key, so this has
now been changed.
It made more sense to split the descriptions here a bit too since we now
have a local and an AWS setup.
These were based on Stephen's review: some more setup for the ec2.py script used for dynamic inventory, and being a bit more explicit about setting environment variables.
For some reason on macOS it was attempting to use Samba to share this,
which was prompting for credentials.

calumcraig left a comment:


I have been able to successfully build an AWS Jenkins environment following the instructions here: https://github.com/jacderida/safe-build-infrastructure/tree/aws#aws-development-provision. I had to liaise with Chris after hitting an error, and he supplied a fix: "the java role needs to exist for the Ansible provision of the slaves. The script is modified to clone it locally if it doesn't exist." I was able to log into Jenkins and access the AWS environment:
[screenshot of the Jenkins environment]

calumcraig previously approved these changes Mar 1, 2019

S-Coyle (Contributor) left a comment:


As per @calumcraig's comment above, this works on Linux, which is sufficient for now.

S-Coyle merged commit f1313ea into maidsafe-archive:master on Mar 1, 2019