
Commit 0cb0a76

Jrahme cci/update nomad docs (#9417)
* note the requirement for cgroupsv1 for nomad clients
* linting fixes

Co-authored-by: Rosie Yohannan <rosie@circleci.com>
1 parent 67e9c3e commit 0cb0a76

File tree

6 files changed: +18 -14 lines changed

jekyll/_cci2/server/latest/air-gapped-installation/phase-4-configure-nomad-clients.adoc

Lines changed: 1 addition & 1 deletion

@@ -12,7 +12,7 @@ contentTags:
 :toc: macro
 :toc-title:
 
-CircleCI server uses Nomad clients to perform container-based build actions. These machines will need to exist within the air-gapped environment to communicate with the CircleCI server Helm deployment.
+CircleCI server uses Nomad clients to perform container-based build actions. These machines will need to exist within the air-gapped environment to communicate with the CircleCI server Helm deployment. CircleCI server requires Nomad client images to use CgroupsV1 and is not compatible with CgroupsV2.
 
 NOTE: In the following sections, replace any sections indicated by `< >` with your details.
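The new cgroups requirement can be checked on a Nomad client host before installation. A minimal sketch, assuming a Linux host with `/sys/fs/cgroup` mounted; the filesystem-type check is a common convention, not a command taken from the docs:

```shell
# Report which cgroup hierarchy a host exposes at /sys/fs/cgroup.
# "cgroup2fs" means the unified cgroups v2 hierarchy (not supported for
# CircleCI Docker builds per the note above); "tmpfs" means legacy v1.
cgroup_version() {
  case "$1" in
    cgroup2fs) echo "v2" ;;
    tmpfs)     echo "v1" ;;
    *)         echo "unknown" ;;
  esac
}

cgroup_version "$(stat -fc %T /sys/fs/cgroup/ 2>/dev/null)"
```

On an Ubuntu 22.04 host with default settings this reports `v2`, which is why the client images need to be customized for v1.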

jekyll/_cci2/server/latest/installation/hardening-your-cluster.adoc

Lines changed: 9 additions & 9 deletions

@@ -18,9 +18,9 @@ This section provides supplemental information on hardening your Kubernetes clus
 == Network topology
 A server installation basically runs three different types of compute instances: The Kubernetes nodes, Nomad clients, and external VMs.
 
-Best practice is to make as many of the resources as private as possible. If your users will access your CircleCI server installation via VPN, there is no need to assign any public IP addresses at all, as long as you have a working NAT gateway setup. Otherwise, you will need at least one public subnet for the `circleci-proxy` load balancer.
+Best practice is to make as many of the resources as private as possible. If users will access your CircleCI server installation via VPN, there is no need to assign any public IP addresses, as long as you have a NAT gateway setup. Otherwise, you will need at least one public subnet for the `circleci-proxy` load balancer.
 
-However, in this case, it is also recommended to place Nomad clients and VMs in a public subnet to enable your users to SSH into jobs and scope access via networking rules.
+It is also recommended to place Nomad clients and VMs in a public subnet to enable users to SSH into jobs and scope access via networking rules.
 
 NOTE: An nginx reverse proxy is placed in front of link:https://github.com/Kong/charts[Kong] and exposed as a Kubernetes service named `circleci-proxy`. nginx is responsible for routing the traffic to the following services: `kong` and `nomad`.
 
@@ -55,7 +55,7 @@ You may wish to check the status of the services routing traffic in your CircleC
 
 [#kubernetes-load-balancers]
 ## Kubernetes load balancers
-Depending on your setup, your load balancers might be transparent (that is, they are not treated as a distinct layer in your networking topology). In this case, you can apply the rules from this section directly to the underlying destination or source of the network traffic. Refer to the documentation of your cloud provider to make sure you understand how to correctly apply networking security rules, given the type of load balancing you are using with your installation.
+Depending on your setup, your load balancers might be transparent (that is, they are not treated as a distinct layer in your networking topology). In this case, you can apply the rules from this section directly to the underlying destination or source of the network traffic. Refer to the documentation of your cloud provider to make sure you understand how to correctly apply networking security rules, given the type of load balancing used by your installation.
 
 [#ingress-load-balancers]
 === Ingress
@@ -97,7 +97,7 @@ If the traffic rules for your load balancers have not been created automatically
 
 [#egress-load-balancers]
 === Egress
-The only type of egress needed is TCP traffic to the Kubernetes nodes on the Kubernetes load balancer ports (30000-32767). This is not needed if your load balancers are transparent.
+The only egress needed is for TCP traffic to the Kubernetes nodes on the Kubernetes load balancer ports (30000-32767). This egress is not needed if your load balancers are transparent.
 
 [#common-rules-for-compute-instances]
 == Common rules for compute instances
@@ -110,15 +110,15 @@ It is recommended to scope the rule as closely as possible to allowed source IP
 
 [#egress-common]
 === Egress
-You most likely want all of your instances to access internet resources. This requires you to allow egress for UDP and TCP on port 53 to the DNS server within your VPC, as well as TCP ports 80 and 443 for HTTP and HTTPS traffic, respectively.
-Instances building jobs (that is, the Nomad clients and external VMs) also will likely need to pull code from your VCS using SSH (TCP port 22). SSH is also used to communicate with external VMs, so it should be allowed for all instances with the destination of the VM subnet and your VCS, at the very least.
+You most likely want all of your instances to access internet resources. This requires allowing egress for UDP and TCP on port 53 to the DNS server within your VPC, and TCP ports 80 and 443 for HTTP and HTTPS traffic.
+Instances building jobs (that is, the Nomad clients and external VMs) will also likely need to pull code from your VCS using SSH (TCP port 22). SSH is also used to communicate with external VMs, and should be allowed for all instances with the destination of the VM subnet and your VCS.
 
 [#kubernetes-nodes]
 == Kubernetes nodes
 
 [#intra-node-traffic]
 === Intra-node traffic
-By default, the traffic within your Kubernetes cluster is regulated by networking policies. For most purposes, this should be sufficient to regulate the traffic between pods and there is no additional requirement to reduce traffic between Kubernetes nodes any further (it is fine to allow all traffic between Kubernetes nodes).
+By default, the traffic within your Kubernetes cluster is regulated by networking policies. This should be sufficient to regulate the traffic between pods. No additional requirements are needed to reduce traffic between Kubernetes nodes any further (it is fine to allow all traffic between Kubernetes nodes).
 
 To make use of networking policies within your cluster, you may need to take additional steps, depending on your cloud provider and setup. Here are some resources to get you started:
 
@@ -258,7 +258,7 @@ Within the ingress rules of the VM security group, the following rules can be cr
 
 | 54782
 | CIDR range of your choice
-| Allows users to SSH into failed vm-based jobs and to retry and debug
+| Allows users to SSH into failed virtual machine-based jobs and to retry and debug
 
 |===
 
@@ -278,7 +278,7 @@ When hardening an installation where the machine provisioner uses public IP addr
 
 | 54782
 | CIDR range of your choice
-| Allows users to SSH into failed vm-based jobs to retry and debug.
+| Allows users to SSH into failed virtual machine-based jobs to retry and debug.
 
 |===
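The egress rules this file describes (DNS on port 53, HTTP/HTTPS on 80/443, SSH on 22) can be sketched as host firewall rules. A hedged illustration that only prints `iptables` commands rather than applying them; the rule set mirrors the doc's list, and is not CircleCI-provided tooling. In a real setup, scope each rule with `-d` to your VPC resolver and VM/VCS CIDRs:

```shell
# Print (do not apply) an egress allow-list matching the common rules:
# DNS over UDP/TCP 53, HTTP 80, HTTPS 443, and SSH 22 for VCS/VM access.
egress_rules() {
  cat <<'EOF'
iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
iptables -A OUTPUT -p tcp --dport 53 -j ACCEPT
iptables -A OUTPUT -p tcp --dport 80 -j ACCEPT
iptables -A OUTPUT -p tcp --dport 443 -j ACCEPT
iptables -A OUTPUT -p tcp --dport 22 -j ACCEPT
EOF
}

egress_rules
```

Printing first and applying after review (for example, `egress_rules | sudo sh`) keeps the rules auditable.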

jekyll/_cci2/server/v4.6/air-gapped-installation/phase-4-configure-nomad-clients.adoc

Lines changed: 1 addition & 1 deletion

@@ -12,7 +12,7 @@ contentTags:
 :toc: macro
 :toc-title:
 
-CircleCI server uses Nomad clients to perform container-based build actions. These machines will need to exist within the air-gapped environment to communicate with the CircleCI server Helm deployment.
+CircleCI server uses Nomad clients to perform container-based build actions. These machines will need to exist within the air-gapped environment to communicate with the CircleCI server Helm deployment. CircleCI server requires Nomad client images to use CgroupsV1 and is not compatible with CgroupsV2.
 
 NOTE: In the following sections, replace any sections indicated by `< >` with your details.

jekyll/_cci2/server/v4.6/installation/phase-3-execution-environments.adoc

Lines changed: 2 additions & 2 deletions

@@ -147,7 +147,7 @@ terraform apply
 After Terraform is done spinning up the Nomad client(s), it outputs the certificates and key needed for configuring the Nomad control plane in CircleCI server. Copy them somewhere safe.
 
 endif::env-aws[]
-
+NOTE: The CircleCI Terraform module uses an Ubuntu 22.04 image customized to use CgroupsV1 rather than the default CgroupsV2. CgroupsV1 is a requirement for CircleCI Docker execution environments.
 [#nomad-autoscaler-configuration]
 === b. Nomad Autoscaler configuration
 Nomad can automatically scale up or down your Nomad clients, provided your clients are managed by a cloud provider's auto scaling resource. With Nomad Autoscaler, you need to provide permission for the utility to manage your auto scaling resource and specify where it is located. CircleCI's Nomad Terraform module can provision the permissions resources, or it can be done manually.
@@ -535,7 +535,7 @@ NOTE: Overriding scaling options is currently not supported, but will be support
 
 Machine provisioner is used to configure virtual machines for jobs that run in Linux VM, Windows and Arm VM execution environments, and those that are configured to use xref:../../../configuration-reference#setupremotedocker[remote Docker]. Machine provisioner is unique to AWS and GCP installations because it relies on specific features of these cloud providers.
 
-Once you have completed the server installation process you can further configure machine provisioner, including building and specifying a Windows image to give developers access to the Windows execution environment, specifying an alternative Linux machine image, and specifying a number of preallocated instances to remain spun up at all times. For more information, see the xref:../operator/manage-virtual-machines-with-machine-provisioner#[Manage Virtual Machines with machine provisioner] page.
+Once you have completed the server installation process you can further configure machine provisioner, including building and specifying a Windows image, specifying an alternative Linux machine image, and specifying a number of preallocated instances to remain spun up at all times. For more information, see the xref:../operator/manage-virtual-machines-with-machine-provisioner#[Manage Virtual Machines with machine provisioner] page.
 
 Before moving on to platform specific steps, create your firewall rules. External VMs need the networking rules described in xref:hardening-your-cluster/#external-vms[Hardening your Cluster].
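The customization the NOTE describes is commonly achieved on Ubuntu 22.04 by booting the kernel with the legacy hierarchy. A sketch under the assumption that the image boots via GRUB; the actual image-build steps used by CircleCI's Terraform are not shown in this diff:

```shell
# /etc/default/grub on the Nomad client image (assumed GRUB-based boot):
# tell systemd to mount the legacy v1 cgroup hierarchy instead of v2.
GRUB_CMDLINE_LINUX="systemd.unified_cgroup_hierarchy=0"

# After editing, regenerate the bootloader config and reboot:
#   sudo update-grub && sudo reboot
```

`systemd.unified_cgroup_hierarchy=0` is the documented systemd kernel parameter for selecting the legacy hierarchy; bake it into the image so every provisioned client boots with cgroups v1.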

jekyll/_includes/server/latest/installation/phase-3.adoc

Lines changed: 3 additions & 1 deletion

@@ -143,6 +143,8 @@ After Terraform is done spinning up the Nomad client(s), it outputs the certific
 // Stop hiding from AWS page
 endif::env-aws[]
 
+NOTE: The CircleCI Terraform module uses an Ubuntu 22.04 image customized to use CgroupsV1 rather than the default CgroupsV2. CgroupsV1 is a requirement for CircleCI Docker execution environments.
+
 [#nomad-autoscaler-configuration]
 === b. Nomad Autoscaler configuration
 Nomad can automatically scale up or down your Nomad clients, provided your clients are managed by a cloud provider's auto scaling resource. With Nomad Autoscaler, you need to provide permission for the utility to manage your auto scaling resource and specify where it is located. CircleCI's Nomad Terraform module can provision the permissions resources, or it can be done manually.
@@ -533,7 +535,7 @@ NOTE: Overriding scaling options is currently not supported, but will be support
 
 Machine provisioner is used to configure virtual machines for jobs that run in Linux VM, Windows and Arm VM execution environments, and those that are configured to use xref:../../../configuration-reference#setupremotedocker[remote Docker]. Machine provisioner is unique to AWS and GCP installations because it relies on specific features of these cloud providers.
 
-Once you have completed the server installation process you can further configure machine provisioner, including building and specifying a Windows image to give developers access to the Windows execution environment, specifying an alternative Linux machine image, and specifying a number of preallocated instances to remain spun up at all times. For more information, see the xref:../operator/manage-virtual-machines-with-machine-provisioner#[Manage Virtual Machines with machine provisioner] page.
+Once you have completed the server installation process you can further configure machine provisioner, including building and specifying a Windows image, specifying an alternative Linux machine image, and specifying a number of preallocated instances to remain spun up at all times. For more information, see the xref:../operator/manage-virtual-machines-with-machine-provisioner#[Manage Virtual Machines with machine provisioner] page.
 
 Before moving on to platform specific steps, create your firewall rules. External VMs need the networking rules described in xref:hardening-your-cluster/#external-vms[Hardening your Cluster].

styles/config/vocabularies/Docs/accept.txt

Lines changed: 2 additions & 0 deletions

@@ -317,6 +317,8 @@ stdin
 [Ss]ubkeys?
 [Ss]ubshells?
 subcommand
+[Ss]ubnets?
+[Ss]ubnetworks?
 superset
 Sysbox
 systemd
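The two new `accept.txt` entries are case-pattern regexes in the Vale vocabulary style, so each covers the lowercase, capitalized, singular, and plural forms in one line. A quick sketch of what they match, using a combined expression for illustration:

```shell
# [Ss]ubnets? accepts "subnet", "Subnet", "subnets", "Subnets";
# [Ss]ubnetworks? covers the longer form the same way.
printf '%s\n' subnet Subnets subnetwork Subnetworks |
  grep -cE '^[Ss]ubnet(work)?s?$'   # prints 4: every form matches
```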
