Do not use deprecated asciidoctor footnote syntax
jboxman committed Jun 12, 2020
1 parent 9e34866 commit 36cd48b
Showing 7 changed files with 281 additions and 22 deletions.
2 changes: 1 addition & 1 deletion modules/available-persistent-storage-options.adoc
@@ -26,7 +26,7 @@ bypassing the file system
a| * Presented to the OS as a file system export to be mounted
* Also referred to as Network Attached Storage (NAS)
* Concurrency, latency, file locking mechanisms, and other capabilities vary widely between protocols, implementations, vendors, and scales.
|RHEL NFS, NetApp NFS footnoteref:[netappnfs,NetApp NFS supports dynamic PV provisioning when using the Trident plug-in.], and Vendor NFS
|RHEL NFS, NetApp NFS footnoteref:netappnfs[NetApp NFS supports dynamic PV provisioning when using the Trident plug-in.], and Vendor NFS
// Azure File, AWS EFS

| Object
105 changes: 105 additions & 0 deletions modules/olm-building-operator-catalog-image.adoc
@@ -0,0 +1,105 @@
// Module included in the following assemblies:
//
// * operators/olm-restricted-networks.adoc
// * migration/migrating_3_4/deploying-cam-3-4.adoc
// * migration/migrating_4_1_4/deploying-cam-4-1-4.adoc
// * migration/migrating_4_2_4/deploying-cam-4-2-4.adoc

[id="olm-building-operator-catalog-image_{context}"]
= Building an Operator catalog image

Cluster administrators can build a custom Operator catalog image to be used by
Operator Lifecycle Manager (OLM) and push the image to a container image
registry that supports
link:https://docs.docker.com/registry/spec/manifest-v2-2/[Docker v2-2]. For a
cluster on a restricted network, this registry can be a registry that the cluster
has network access to, such as the mirror registry created during the restricted
network installation.

[IMPORTANT]
====
The {product-title} cluster's internal registry cannot be used as the target
registry because it does not support pushing without a tag, which is required
during the mirroring process.
====

For this example, the procedure assumes use of the mirror registry created on
the bastion host during a restricted network cluster installation.
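
As a quick check that is not part of the original procedure, you can confirm that a candidate mirror registry exposes the Docker Registry v2 API before pushing to it. The host name, port, and credentials below are placeholders; add `-k` if the registry uses a certificate that your workstation does not trust:

----
$ curl -sf -u <username>:<password> https://<registry_host_name>:<port>/v2/ \
    && echo "Docker v2 API is available"
----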

.Prerequisites

* A Linux workstation with unrestricted network access
ifeval::["{context}" == "olm-restricted-networks"]
footnoteref:BZ1771329[The
`oc adm catalog` command is currently only supported on Linux.
(link:https://bugzilla.redhat.com/show_bug.cgi?id=1771329[*BZ#1771329*])]
endif::[]
* `oc` version 4.3.5+
* `podman` version 1.4.4+
* Access to a mirror registry that supports
link:https://docs.docker.com/registry/spec/manifest-v2-2/[Docker v2-2]
* If you are working with private registries, set the `REG_CREDS` environment
variable to the file path of your registry credentials for use in later steps.
For example, for the `podman` CLI:
+
----
$ REG_CREDS=${XDG_RUNTIME_DIR}/containers/auth.json
----
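+
If this file does not exist yet, one way to create it (a sketch that assumes the rootless `podman` defaults above, not a documented requirement) is to log in with an explicit `--authfile`:
+
----
$ podman login --authfile ${REG_CREDS} <registry_host_name>
----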

.Procedure

. On the workstation with unrestricted network access, authenticate with the
target mirror registry:
+
----
$ podman login <registry_host_name>
----
+
Also authenticate with `registry.redhat.io` so that the base image can be pulled
during the build:
+
----
$ podman login registry.redhat.io
----

. Build a catalog image based on the `redhat-operators` catalog from
link:https://quay.io/[quay.io], tagging and pushing it to your mirror registry:
+
----
$ oc adm catalog build \
    --appregistry-org redhat-operators \//<1>
    --from=registry.redhat.io/openshift4/ose-operator-registry:v4.4 \//<2>
    --filter-by-os="linux/amd64" \//<3>
    --to=<registry_host_name>:<port>/olm/redhat-operators:v1 \//<4>
    [-a ${REG_CREDS}] \//<5>
    [--insecure] <6>
INFO[0013] loading Bundles dir=/var/folders/st/9cskxqs53ll3wdn434vw4cd80000gn/T/300666084/manifests-829192605
...
Pushed sha256:f73d42950021f9240389f99ddc5b0c7f1b533c054ba344654ff1edaf6bf827e3 to example_registry:5000/olm/redhat-operators:v1
----
<1> Organization (namespace) to pull from an App Registry instance.
<2> Set `--from` to the `ose-operator-registry` base image using the tag that
matches the target {product-title} cluster major and minor version.
<3> Set `--filter-by-os` to the operating system and architecture to use for the
base image, which must match the target {product-title} cluster. Valid values
are `linux/amd64`, `linux/ppc64le`, and `linux/s390x`.
<4> Name your catalog image and include a tag, for example, `v1`.
<5> Optional: If required, specify the location of your registry credentials file.
<6> Optional: If you do not want to configure trust for the target registry, add the
`--insecure` flag.
+
Sometimes invalid manifests are accidentally introduced into Red Hat's catalogs;
when this happens, you might see some errors:
+
----
...
INFO[0014] directory dir=/var/folders/st/9cskxqs53ll3wdn434vw4cd80000gn/T/300666084/manifests-829192605 file=4.2 load=package
W1114 19:42:37.876180 34665 builder.go:141] error building database: error loading package into db: fuse-camel-k-operator.v7.5.0 specifies replacement that couldn't be found
Uploading ... 244.9kB/s
----
+
These errors are usually non-fatal. If the Operator package mentioned does not
contain an Operator that you plan to install, or a dependency of one, the
errors can be ignored.
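
For illustration only, a fully substituted invocation of the build step above might look like the following. The registry host name `mirror.example.com:5000` is a placeholder and is not part of the documented procedure:

----
$ podman login mirror.example.com:5000
$ oc adm catalog build \
    --appregistry-org redhat-operators \
    --from=registry.redhat.io/openshift4/ose-operator-registry:v4.4 \
    --filter-by-os="linux/amd64" \
    --to=mirror.example.com:5000/olm/redhat-operators:v1 \
    -a ${REG_CREDS}
----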
154 changes: 154 additions & 0 deletions modules/olm-updating-operator-catalog-image.adoc
@@ -0,0 +1,154 @@
// Module included in the following assemblies:
//
// * operators/olm-restricted-networks.adoc

[id="olm-updating-operator-catalog-image_{context}"]
= Updating an Operator catalog image

After a cluster administrator has configured OperatorHub to use custom Operator
catalog images, administrators can keep their {product-title} cluster up to date
with the latest Operators by capturing updates made to Red Hat’s App Registry
catalogs. This is done by building and pushing a new Operator catalog image,
then replacing the existing CatalogSource’s `spec.image` parameter with the new
image digest.

For this example, the procedure assumes a custom `redhat-operators` catalog
image is already configured for use with OperatorHub.
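
If you prefer to pin `spec.image` by digest rather than by tag, one way to resolve the digest of a pushed catalog image is to inspect it with `skopeo`. This is a sketch rather than part of the documented procedure; it assumes `skopeo` and `jq` are installed (neither is listed in the prerequisites), and the digest shown is the sample value from the build output below:

----
$ skopeo inspect --authfile ${REG_CREDS} \
    docker://<registry_host_name>:<port>/olm/redhat-operators:v2 | jq -r '.Digest'
sha256:f73d42950021f9240389f99ddc5b0c7f1b533c054ba344654ff1edaf6bf827e3
----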

.Prerequisites

* A Linux workstation with unrestricted network access
ifeval::["{context}" == "olm-restricted-networks"]
footnoteref:BZ1771329[]
endif::[]
* `oc` version 4.3.5+
* `podman` version 1.4.4+
* Access to a mirror registry that supports
link:https://docs.docker.com/registry/spec/manifest-v2-2/[Docker v2-2]
* OperatorHub configured to use custom catalog images
* If you are working with private registries, set the `REG_CREDS` environment
variable to the file path of your registry credentials for use in later steps.
For example, for the `podman` CLI:
+
----
$ REG_CREDS=${XDG_RUNTIME_DIR}/containers/auth.json
----

.Procedure

. On the workstation with unrestricted network access, authenticate with the
target mirror registry:
+
----
$ podman login <registry_host_name>
----
+
Also authenticate with `registry.redhat.io` so that the base image can be pulled
during the build:
+
----
$ podman login registry.redhat.io
----

. Build a new catalog image based on the `redhat-operators` catalog from
link:https://quay.io/[quay.io], tagging and pushing it to your mirror registry:
+
----
$ oc adm catalog build \
    --appregistry-org redhat-operators \//<1>
    --from=registry.redhat.io/openshift4/ose-operator-registry:v4.4 \//<2>
    --filter-by-os="linux/amd64" \//<3>
    --to=<registry_host_name>:<port>/olm/redhat-operators:v2 \//<4>
    [-a ${REG_CREDS}] \//<5>
    [--insecure] <6>
INFO[0013] loading Bundles dir=/var/folders/st/9cskxqs53ll3wdn434vw4cd80000gn/T/300666084/manifests-829192605
...
Pushed sha256:f73d42950021f9240389f99ddc5b0c7f1b533c054ba344654ff1edaf6bf827e3 to example_registry:5000/olm/redhat-operators:v2
----
<1> Organization (namespace) to pull from an App Registry instance.
<2> Set `--from` to the `ose-operator-registry` base image using the tag that
matches the target {product-title} cluster major and minor version.
<3> Set `--filter-by-os` to the operating system and architecture to use for the
base image, which must match the target {product-title} cluster. Valid values
are `linux/amd64`, `linux/ppc64le`, and `linux/s390x`.
<4> Name your catalog image and include a tag, for example, `v2` because it is the
updated catalog.
<5> Optional: If required, specify the location of your registry credentials file.
<6> Optional: If you do not want to configure trust for the target registry, add the
`--insecure` flag.

. Mirror the contents of your catalog to your target registry. The following
`oc adm catalog mirror` command extracts the contents of your custom Operator
catalog image to generate the manifests required for mirroring and mirrors the
images to your registry:
+
----
$ oc adm catalog mirror \
    <registry_host_name>:<port>/olm/redhat-operators:v2 \//<1>
    <registry_host_name>:<port> \
    [-a ${REG_CREDS}] \//<2>
    [--insecure] \//<3>
    [--filter-by-os="<os>/<arch>"] <4>
mirroring ...
----
<1> Specify your new Operator catalog image.
<2> Optional: If required, specify the location of your registry credentials
file.
<3> Optional: If you do not want to configure trust for the target registry, add
the `--insecure` flag.
<4> Optional: Because the catalog might reference images that support multiple
architectures and operating systems, you can filter by architecture and
operating system to mirror only the images that match. Valid values are
`linux/amd64`, `linux/ppc64le`, and `linux/s390x`.

. Apply the newly generated manifests:
+
----
$ oc apply -f ./redhat-operators-manifests
----
+
[IMPORTANT]
====
You might not need to apply the `imageContentSourcePolicy.yaml` manifest.
Run a `diff` against the previously applied version of the file to determine
whether any changes are necessary.
====

. Update your CatalogSource object that references your catalog image.

.. If you have your original `catalogsource.yaml` file for this CatalogSource:

... Edit your `catalogsource.yaml` file to reference your new catalog image in the
`spec.image` field:
+
[source,yaml]
----
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-operator-catalog
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: <registry_host_name>:<port>/olm/redhat-operators:v2 <1>
  displayName: My Operator Catalog
  publisher: grpc
----
<1> Specify your new Operator catalog image.

... Use the updated file to replace the CatalogSource object:
+
----
$ oc replace -f catalogsource.yaml
----

.. Alternatively, edit the CatalogSource using the following command and reference
your new catalog image in the `spec.image` parameter:
+
----
$ oc edit catalogsource <catalog_source_name> -n openshift-marketplace
----

Updated Operators should now be available from the *OperatorHub* page on your
{product-title} cluster.
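
The following verification commands are a convenience sketch rather than part of the documented procedure; they assume the `my-operator-catalog` CatalogSource name used in the example above:

----
$ oc get catalogsource my-operator-catalog -n openshift-marketplace \
    -o jsonpath='{.spec.image}{"\n"}'
$ oc get pods -n openshift-marketplace
$ oc get packagemanifests -n openshift-marketplace
----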
8 changes: 4 additions & 4 deletions modules/openshift-cluster-maximums-environment.adoc
@@ -56,7 +56,7 @@ AWS cloud platform:
|===
| Node |Flavor |vCPU |RAM(GiB) |Disk type|Disk size(GiB)/IOPS |Count |Region

| Master/Etcd footnoteref:[masteretcdnodeaws, io1 disk with 3000 IOPS is used for master/etcd nodes as etcd is I/O intensive and latency sensitive.]
| Master/Etcd footnoteref:masteretcdnodeaws[io1 disk with 3000 IOPS is used for master/etcd nodes as etcd is I/O intensive and latency sensitive.]
| r5.4xlarge
| 16
| 128
@@ -65,7 +65,7 @@ AWS cloud platform:
| 3
| us-west-2

| Infra footnoteref:[infranodesaws,Infra nodes are used to host Monitoring, Ingress and Registry components to make sure they have enough resources to run at large scale.]
| Infra footnoteref:infranodesaws[Infra nodes are used to host Monitoring, Ingress and Registry components to make sure they have enough resources to run at large scale.]
| m5.12xlarge
| 48
| 192
@@ -74,12 +74,12 @@ AWS cloud platform:
| 3
| us-west-2

| Workload footnoteref:[workloadnodeaws, The workload node is dedicated to running performance and scalability workload generators.]
| Workload footnoteref:workloadnodeaws[The workload node is dedicated to running performance and scalability workload generators.]
| m5.4xlarge
| 16
| 64
| gp2
| 500 footnoteref:[disksizeaws, Larger disk size is used to have enough space to store large amounts of data collected during the performance and scalability test run.]
| 500 footnoteref:disksizeaws[Larger disk size is used to have enough space to store large amounts of data collected during the performance and scalability test run.]
| 1
| us-west-2

12 changes: 6 additions & 6 deletions modules/openshift-cluster-maximums-major-releases.adoc
@@ -16,27 +16,27 @@ Tested Cloud Platforms for {product-title} 4.x: Amazon Web Services, Microsoft A
| 2,000
| 2,000

| Number of Pods footnoteref:[numberofpodsmajorrelease,The Pod count displayed here is the number of test Pods. The actual number of Pods depends on the application’s memory, CPU, and storage requirements.]
| Number of Pods footnoteref:numberofpodsmajorrelease[The Pod count displayed here is the number of test Pods. The actual number of Pods depends on the application’s memory, CPU, and storage requirements.]
| 150,000
| 150,000

| Number of Pods per node
| 250
| 500 footnoteref:[podspernodemajorrelease, This was tested on a cluster with 100 worker nodes with 500 Pods per worker node. The default `maxPods` is still 250. To get to 500 `maxPods`, the cluster must be created with a `hostPrefix` of `22` in the `install-config.yaml` file and `maxPods` set to `500` using a custom KubeletConfig. The maximum number of Pods with attached Persistent Volume Claims (PVCs) depends on the storage backend from which the PVCs are allocated. In our tests, only OpenShift Container Storage v4 (OCS v4) was able to satisfy the number of Pods per node discussed in this document.]
| 500 footnoteref:podspernodemajorrelease[This was tested on a cluster with 100 worker nodes with 500 Pods per worker node. The default `maxPods` is still 250. To get to 500 `maxPods`, the cluster must be created with a `hostPrefix` of `22` in the `install-config.yaml` file and `maxPods` set to `500` using a custom KubeletConfig. The maximum number of Pods with attached Persistent Volume Claims (PVCs) depends on the storage backend from which the PVCs are allocated. In our tests, only OpenShift Container Storage v4 (OCS v4) was able to satisfy the number of Pods per node discussed in this document.]

| Number of Pods per core
| There is no default value.
| There is no default value.

| Number of Namespaces footnoteref:[numberofnamepacesmajorrelease, When there are a large number of active projects, etcd might suffer from poor performance if the keyspace grows excessively large and exceeds the space quota. Periodic maintenance of etcd, including defragmentation, is highly recommended to free etcd storage.]
| Number of Namespaces footnoteref:numberofnamepacesmajorrelease[When there are a large number of active projects, etcd might suffer from poor performance if the keyspace grows excessively large and exceeds the space quota. Periodic maintenance of etcd, including defragmentation, is highly recommended to free etcd storage.]
| 10,000
| 10,000

| Number of Builds
| 10,000 (Default pod RAM 512 Mi) - Pipeline Strategy
| 10,000 (Default pod RAM 512 Mi) - Source-to-Image (S2I) build strategy

| Number of Pods per namespace footnoteref:[objectpernamespacemajorrelease,There are
| Number of Pods per namespace footnoteref:objectpernamespacemajorrelease[There are
a number of control loops in the system that must iterate over all objects
in a given namespace as a reaction to some changes in state. Having a large
number of objects of a given type in a single namespace can make those loops
@@ -45,7 +45,7 @@ the system has enough CPU, memory, and disk to satisfy the application requireme
| 25,000
| 25,000

| Number of Services footnoteref:[servicesandendpointsmajorrelease,Each Service port and each Service back-end has a corresponding entry in iptables. The number of back-ends of a given Service impacts the size of the endpoints objects, which impacts the size of data that is being sent all over the system.]
| Number of Services footnoteref:servicesandendpointsmajorrelease[Each Service port and each Service back-end has a corresponding entry in iptables. The number of back-ends of a given Service impacts the size of the endpoints objects, which impacts the size of data that is being sent all over the system.]
| 10,000
| 10,000

@@ -57,7 +57,7 @@ the system has enough CPU, memory, and disk to satisfy the application requireme
| 5,000
| 5,000

| Number of Deployments per Namespace footnoteref:[objectpernamespacemajorrelease]
| Number of Deployments per Namespace footnoteref:objectpernamespacemajorrelease[]
| 2,000
| 2,000

10 changes: 5 additions & 5 deletions modules/openshift-cluster-maximums.adoc
@@ -15,7 +15,7 @@
| 2,000
| 2,000

| Number of Pods footnoteref:[numberofpods,The Pod count displayed here is the number of test Pods. The actual number of Pods depends on the application’s memory, CPU, and storage requirements.]
| Number of Pods footnoteref:numberofpods[The Pod count displayed here is the number of test Pods. The actual number of Pods depends on the application’s memory, CPU, and storage requirements.]
| 150,000
| 150,000
| 150,000
@@ -33,7 +33,7 @@
| There is no default value.
| There is no default value.

| Number of Namespaces footnoteref:[numberofnamepaces, When there are a large number of active projects, etcd might suffer from poor performance if the keyspace grows excessively large and exceeds the space quota. Periodic maintenance of etcd, including defragmentation, is highly recommended to free etcd storage.]
| Number of Namespaces footnoteref:numberofnamepaces[When there are a large number of active projects, etcd might suffer from poor performance if the keyspace grows excessively large and exceeds the space quota. Periodic maintenance of etcd, including defragmentation, is highly recommended to free etcd storage.]
| 10,000
| 10,000
| 10,000
@@ -45,7 +45,7 @@
| 10,000 (Default pod RAM 512 Mi)
| 10,000 (Default pod RAM 512 Mi)

| Number of Pods per Namespace footnoteref:[objectpernamespace,There are
| Number of Pods per Namespace footnoteref:objectpernamespace[There are
a number of control loops in the system that must iterate over all objects
in a given namespace as a reaction to some changes in state. Having a large
number of objects of a given type in a single namespace can make those loops
@@ -56,7 +56,7 @@ the system has enough CPU, memory, and disk to satisfy the application requireme
| 25,000
| 25,000

| Number of Services footnoteref:[servicesandendpoints,Each service port and each service back-end has a corresponding entry in iptables. The number of back-ends of a given service impacts the size of the endpoints objects, which impacts the size of data that is being sent all over the system.]
| Number of Services footnoteref:servicesandendpoints[Each service port and each service back-end has a corresponding entry in iptables. The number of back-ends of a given service impacts the size of the endpoints objects, which impacts the size of data that is being sent all over the system.]
| 10,000
| 10,000
| 10,000
@@ -74,7 +74,7 @@ the system has enough CPU, memory, and disk to satisfy the application requireme
| 5,000
| 5,000

| Number of Deployments per Namespace footnoteref:[objectpernamespace]
| Number of Deployments per Namespace footnoteref:objectpernamespace[]
| 2,000
| 2,000
| 2,000
