Jrahme cci/update nomad docs #9417


Merged: 6 commits, Jun 9, 2025

@@ -12,7 +12,7 @@ contentTags:
:toc: macro
:toc-title:

CircleCI server uses Nomad clients to perform container-based build actions. These machines will need to exist within the air-gapped environment to communicate with the CircleCI server Helm deployment.
CircleCI server uses Nomad clients to perform container-based build actions. These machines will need to exist within the air-gapped environment to communicate with the CircleCI server Helm deployment. CircleCI server requires Nomad client images to use CgroupsV1 and is not compatible with CgroupsV2.
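
As a rough sketch, assuming an Ubuntu 22.04 based client image booted via GRUB, you can check which cgroup version an image uses and force CgroupsV1 with a kernel parameter:

[source,shell]
----
# Print the filesystem type mounted at /sys/fs/cgroup:
# "cgroup2fs" indicates CgroupsV2, "tmpfs" indicates CgroupsV1.
stat -fc %T /sys/fs/cgroup/

# Force CgroupsV1 on the next boot (assumes GRUB is the bootloader).
sudo sed -i 's/GRUB_CMDLINE_LINUX="/GRUB_CMDLINE_LINUX="systemd.unified_cgroup_hierarchy=0 /' /etc/default/grub
sudo update-grub
sudo reboot
----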

NOTE: In the following sections, replace any sections indicated by `< >` with your details.

@@ -18,9 +18,9 @@ This section provides supplemental information on hardening your Kubernetes clus
== Network topology
A server installation runs three different types of compute instances: Kubernetes nodes, Nomad clients, and external VMs.

Best practice is to make as many of the resources as private as possible. If your users will access your CircleCI server installation via VPN, there is no need to assign any public IP addresses at all, as long as you have a working NAT gateway setup. Otherwise, you will need at least one public subnet for the `circleci-proxy` load balancer.
Best practice is to make as many of the resources as private as possible. If users will access your CircleCI server installation via VPN, there is no need to assign any public IP addresses, as long as you have a NAT gateway setup. Otherwise, you will need at least one public subnet for the `circleci-proxy` load balancer.

However, in this case, it is also recommended to place Nomad clients and VMs in a public subnet to enable your users to SSH into jobs and scope access via networking rules.
It is also recommended to place Nomad clients and VMs in a public subnet to enable users to SSH into jobs and scope access via networking rules.

NOTE: An nginx reverse proxy is placed in front of link:https://github.com/Kong/charts[Kong] and exposed as a Kubernetes service named `circleci-proxy`. nginx is responsible for routing the traffic to the following services: `kong` and `nomad`.
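
For example, assuming the Helm release is installed into a namespace such as `<namespace>`, you can confirm the proxy service and its external address with kubectl:

[source,shell]
----
# Show the circleci-proxy service and its load balancer address.
kubectl get service circleci-proxy --namespace <namespace>

# Inspect the ports and endpoints nginx fronts for kong and nomad.
kubectl describe service circleci-proxy --namespace <namespace>
----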

@@ -55,7 +55,7 @@ You may wish to check the status of the services routing traffic in your CircleC

[#kubernetes-load-balancers]
## Kubernetes load balancers
Depending on your setup, your load balancers might be transparent (that is, they are not treated as a distinct layer in your networking topology). In this case, you can apply the rules from this section directly to the underlying destination or source of the network traffic. Refer to the documentation of your cloud provider to make sure you understand how to correctly apply networking security rules, given the type of load balancing you are using with your installation.
Depending on your setup, your load balancers might be transparent (that is, they are not treated as a distinct layer in your networking topology). In this case, you can apply the rules from this section directly to the underlying destination or source of the network traffic. Refer to the documentation of your cloud provider to make sure you understand how to correctly apply networking security rules, given the type of load balancing used by your installation.

[#ingress-load-balancers]
=== Ingress
@@ -97,7 +97,7 @@ If the traffic rules for your load balancers have not been created automatically

[#egress-load-balancers]
=== Egress
The only type of egress needed is TCP traffic to the Kubernetes nodes on the Kubernetes load balancer ports (30000-32767). This is not needed if your load balancers are transparent.
The only egress needed is for TCP traffic to the Kubernetes nodes on the Kubernetes load balancer ports (30000-32767). This egress is not needed if your load balancers are transparent.
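
As an illustration only, assuming AWS security groups with placeholder IDs and a placeholder node subnet CIDR, such an egress rule could be added like this:

[source,shell]
----
# Allow the load balancer security group to reach the Kubernetes
# nodes on the NodePort range 30000-32767 (placeholder values).
aws ec2 authorize-security-group-egress \
  --group-id <load-balancer-security-group-id> \
  --ip-permissions 'IpProtocol=tcp,FromPort=30000,ToPort=32767,IpRanges=[{CidrIp=<kubernetes-node-subnet-cidr>}]'
----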

[#common-rules-for-compute-instances]
== Common rules for compute instances
@@ -110,15 +110,15 @@ It is recommended to scope the rule as closely as possible to allowed source IP

[#egress-common]
=== Egress
You most likely want all of your instances to access internet resources. This requires you to allow egress for UDP and TCP on port 53 to the DNS server within your VPC, as well as TCP ports 80 and 443 for HTTP and HTTPS traffic, respectively.
Instances building jobs (that is, the Nomad clients and external VMs) also will likely need to pull code from your VCS using SSH (TCP port 22). SSH is also used to communicate with external VMs, so it should be allowed for all instances with the destination of the VM subnet and your VCS, at the very least.
You most likely want all of your instances to access internet resources. This requires allowing egress for UDP and TCP on port 53 to the DNS server within your VPC, and TCP ports 80 and 443 for HTTP and HTTPS traffic.
Instances building jobs (that is, the Nomad clients and external VMs) will also likely need to pull code from your VCS using SSH (TCP port 22). SSH is also used to communicate with external VMs, and should be allowed for all instances with the destination of the VM subnet and your VCS.
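
A quick way to spot-check this egress from a Nomad client or external VM (hostnames are placeholders for your own endpoints):

[source,shell]
----
# DNS resolution through the VPC resolver (UDP/TCP 53).
dig +short <your-vcs-domain>

# HTTP and HTTPS egress (TCP 80 and 443).
curl -sSI http://<your-vcs-domain>
curl -sSI https://<your-vcs-domain>

# SSH reachability to the VCS (TCP 22).
nc -zv <your-vcs-domain> 22
----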

[#kubernetes-nodes]
== Kubernetes nodes

[#intra-node-traffic]
=== Intra-node traffic
By default, the traffic within your Kubernetes cluster is regulated by networking policies. For most purposes, this should be sufficient to regulate the traffic between pods and there is no additional requirement to reduce traffic between Kubernetes nodes any further (it is fine to allow all traffic between Kubernetes nodes).
By default, the traffic within your Kubernetes cluster is regulated by networking policies. This should be sufficient to regulate the traffic between pods. No additional requirements are needed to reduce traffic between Kubernetes nodes any further (it is fine to allow all traffic between Kubernetes nodes).
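
On AWS, for instance, allowing all traffic between nodes is often a single self-referencing security group rule (the group ID below is a placeholder):

[source,shell]
----
# Allow all protocols and ports between instances that share the
# Kubernetes node security group.
aws ec2 authorize-security-group-ingress \
  --group-id <kubernetes-node-security-group-id> \
  --ip-permissions 'IpProtocol=-1,UserIdGroupPairs=[{GroupId=<kubernetes-node-security-group-id>}]'
----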

To make use of networking policies within your cluster, you may need to take additional steps, depending on your cloud provider and setup. Here are some resources to get you started:

@@ -258,7 +258,7 @@ Within the ingress rules of the VM security group, the following rules can be cr

| 54782
| CIDR range of your choice
| Allows users to SSH into failed vm-based jobs and to retry and debug
| Allows users to SSH into failed virtual machine-based jobs to retry and debug (see the sketch after this table)

|===
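
As a sketch of the last row, assuming AWS security groups and placeholder values:

[source,shell]
----
# Allow SSH access (port 54782) to failed virtual machine-based jobs
# from an operator-chosen CIDR range.
aws ec2 authorize-security-group-ingress \
  --group-id <vm-security-group-id> \
  --ip-permissions 'IpProtocol=tcp,FromPort=54782,ToPort=54782,IpRanges=[{CidrIp=<allowed-cidr-range>}]'
----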

@@ -278,7 +278,7 @@ When hardening an installation where the machine provisioner uses public IP addr

| 54782
| CIDR range of your choice
| Allows users to SSH into failed vm-based jobs to retry and debug.
| Allows users to SSH into failed virtual machine-based jobs to retry and debug.

|===

@@ -12,7 +12,7 @@ contentTags:
:toc: macro
:toc-title:

CircleCI server uses Nomad clients to perform container-based build actions. These machines will need to exist within the air-gapped environment to communicate with the CircleCI server Helm deployment.
CircleCI server uses Nomad clients to perform container-based build actions. These machines will need to exist within the air-gapped environment to communicate with the CircleCI server Helm deployment. CircleCI server requires Nomad client images to use CgroupsV1 and is not compatible with CgroupsV2.

NOTE: In the following sections, replace any sections indicated by `< >` with your details.

@@ -147,7 +147,7 @@ terraform apply
After Terraform is done spinning up the Nomad client(s), it outputs the certificates and key needed for configuring the Nomad control plane in CircleCI server. Copy them somewhere safe.
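
For instance, you can re-print and capture these values with `terraform output` after the apply completes (the output name below is an assumption; use the names your module actually defines):

[source,shell]
----
# List every output the Nomad module exposes.
terraform output

# Write a single output to a file; the output name is hypothetical.
terraform output -raw nomad_server_cert > nomad_server_cert.pem
----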

endif::env-aws[]

NOTE: The CircleCI Terraform module uses an Ubuntu 22.04 image customized to use CgroupsV1 rather than the default CgroupsV2. CgroupsV1 is a requirement for CircleCI Docker execution environments.

[#nomad-autoscaler-configuration]
=== b. Nomad Autoscaler configuration
Nomad can automatically scale up or down your Nomad clients, provided your clients are managed by a cloud provider's auto scaling resource. With Nomad Autoscaler, you need to provide permission for the utility to manage your auto scaling resource and specify where it is located. CircleCI's Nomad Terraform module can provision the permissions resources, or it can be done manually.
@@ -535,7 +535,7 @@ NOTE: Overriding scaling options is currently not supported, but will be support

Machine provisioner is used to configure virtual machines for jobs that run in Linux VM, Windows and Arm VM execution environments, and those that are configured to use xref:../../../configuration-reference#setupremotedocker[remote Docker]. Machine provisioner is unique to AWS and GCP installations because it relies on specific features of these cloud providers.

Once you have completed the server installation process you can further configure machine provisioner, including building and specifying a Windows image to give developers access to the Windows execution environment, specifying an alternative Linux machine image, and specifying a number of preallocated instances to remain spun up at all times. For more information, see the xref:../operator/manage-virtual-machines-with-machine-provisioner#[Manage Virtual Machines with machine provisioner] page.
Once you have completed the server installation process, you can further configure machine provisioner. Options include building and specifying a Windows image, specifying an alternative Linux machine image, and specifying a number of preallocated instances to remain spun up at all times. For more information, see the xref:../operator/manage-virtual-machines-with-machine-provisioner#[Manage Virtual Machines with machine provisioner] page.

Before moving on to platform-specific steps, create your firewall rules. External VMs need the networking rules described in xref:hardening-your-cluster/#external-vms[Hardening your Cluster].

4 changes: 3 additions & 1 deletion jekyll/_includes/server/latest/installation/phase-3.adoc
@@ -143,6 +143,8 @@ After Terraform is done spinning up the Nomad client(s), it outputs the certific
// Stop hiding from AWS page
endif::env-aws[]

NOTE: The CircleCI Terraform module uses an Ubuntu 22.04 image customized to use CgroupsV1 rather than the default CgroupsV2. CgroupsV1 is a requirement for CircleCI Docker execution environments.

[#nomad-autoscaler-configuration]
=== b. Nomad Autoscaler configuration
Nomad can automatically scale up or down your Nomad clients, provided your clients are managed by a cloud provider's auto scaling resource. With Nomad Autoscaler, you need to provide permission for the utility to manage your auto scaling resource and specify where it is located. CircleCI's Nomad Terraform module can provision the permissions resources, or it can be done manually.
@@ -533,7 +535,7 @@ NOTE: Overriding scaling options is currently not supported, but will be support

Machine provisioner is used to configure virtual machines for jobs that run in Linux VM, Windows and Arm VM execution environments, and those that are configured to use xref:../../../configuration-reference#setupremotedocker[remote Docker]. Machine provisioner is unique to AWS and GCP installations because it relies on specific features of these cloud providers.

Once you have completed the server installation process you can further configure machine provisioner, including building and specifying a Windows image to give developers access to the Windows execution environment, specifying an alternative Linux machine image, and specifying a number of preallocated instances to remain spun up at all times. For more information, see the xref:../operator/manage-virtual-machines-with-machine-provisioner#[Manage Virtual Machines with machine provisioner] page.
Once you have completed the server installation process, you can further configure machine provisioner. Options include building and specifying a Windows image, specifying an alternative Linux machine image, and specifying a number of preallocated instances to remain spun up at all times. For more information, see the xref:../operator/manage-virtual-machines-with-machine-provisioner#[Manage Virtual Machines with machine provisioner] page.

Before moving on to platform-specific steps, create your firewall rules. External VMs need the networking rules described in xref:hardening-your-cluster/#external-vms[Hardening your Cluster].

2 changes: 2 additions & 0 deletions styles/config/vocabularies/Docs/accept.txt
@@ -317,6 +317,8 @@ stdin
[Ss]ubkeys?
[Ss]ubshells?
subcommand
[Ss]ubnets?
[Ss]ubnetworks?
superset
Sysbox
systemd