Do not use deprecated asciidoctor footnote syntax
jboxman committed Jun 10, 2020
1 parent a8afba4 commit a689c1a
Showing 8 changed files with 29 additions and 29 deletions.
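
Every hunk in this commit applies the same mechanical change: the footnote ID moves out of the attribute list and becomes the macro target. A minimal before/after sketch of the two forms involved, using IDs and text taken from the hunks below:

```asciidoc
// Deprecated: the ID is passed as the first positional attribute
footnoteref:[netappnfs,NetApp NFS supports dynamic PV provisioning when using the Trident plug-in.]
footnoteref:[BZ1771329]

// Form adopted by this commit: the ID is the macro target
footnoteref:netappnfs[NetApp NFS supports dynamic PV provisioning when using the Trident plug-in.]
footnoteref:BZ1771329[]
```

The bare-ID form (second line of each group) reuses the text of the footnote defined with that ID elsewhere; that is how the `BZ1771329` note defined in olm-building-operator-catalog-image.adoc is shared with the other OLM modules.
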
2 changes: 1 addition & 1 deletion modules/available-persistent-storage-options.adoc
@@ -26,7 +26,7 @@ bypassing the file system
a| * Presented to the OS as a file system export to be mounted
* Also referred to as Network Attached Storage (NAS)
* Concurrency, latency, file locking mechanisms, and other capabilities vary widely between protocols, implementations, vendors, and scales.
-|RHEL NFS, NetApp NFS footnoteref:[netappnfs,NetApp NFS supports dynamic PV provisioning when using the Trident plug-in.], and Vendor NFS
+|RHEL NFS, NetApp NFS footnoteref:netappnfs[NetApp NFS supports dynamic PV provisioning when using the Trident plug-in.], and Vendor NFS
// Azure File, AWS EFS

| Object
2 changes: 1 addition & 1 deletion modules/olm-building-operator-catalog-image.adoc
@@ -31,7 +31,7 @@ the bastion host during a restricted network cluster installation.

* A Linux workstation with unrestricted network access
ifeval::["{context}" == "olm-restricted-networks"]
-footnoteref:[BZ1771329, The
+footnoteref:BZ1771329[The
`oc adm catalog` command is currently only supported on Linux.
(link:https://bugzilla.redhat.com/show_bug.cgi?id=1771329[*BZ#1771329*])]
endif::[]
@@ -17,7 +17,7 @@ built and pushed to a supported registry.

* A Linux workstation with unrestricted network access
ifeval::["{context}" == "olm-restricted-networks"]
-footnoteref:[BZ1771329]
+footnoteref:BZ1771329[]
endif::[]
* A custom Operator catalog image pushed to a supported registry
* `oc` version 4.3.5+
2 changes: 1 addition & 1 deletion modules/olm-updating-operator-catalog-image.adoc
@@ -19,7 +19,7 @@ image is already configured for use with OperatorHub.

* A Linux workstation with unrestricted network access
ifeval::["{context}" == "olm-restricted-networks"]
-footnoteref:[BZ1771329]
+footnoteref:BZ1771329[]
endif::[]
* `oc` version 4.3.5+
* `podman` version 1.4.4+
16 changes: 8 additions & 8 deletions modules/openshift-cluster-maximums-environment.adoc
@@ -11,7 +11,7 @@ AWS cloud platform:
|===
| Node |Flavor |vCPU |RAM(GiB) |Disk type|Disk size(GiB)/IOS |Count |Region

-| Master/etcd footnoteref:[masteretcdnodeaws, io1 disks with 3000 IOPS are used for master/etcd nodes as etcd is I/O intensive and latency sensitive.]
+| Master/etcd footnoteref:masteretcdnodeaws[io1 disks with 3000 IOPS are used for master/etcd nodes as etcd is I/O intensive and latency sensitive.]
| r5.4xlarge
| 16
| 128
@@ -20,7 +20,7 @@ AWS cloud platform:
| 3
| us-west-2

-| Infra footnoteref:[infranodesaws,Infra nodes are used to host Monitoring, Ingress and Registry components to make sure they have enough resources to run at large scale.]
+| Infra footnoteref:infranodesaws[Infra nodes are used to host Monitoring, Ingress and Registry components to make sure they have enough resources to run at large scale.]
| m5.12xlarge
| 48
| 192
@@ -29,12 +29,12 @@ AWS cloud platform:
| 3
| us-west-2

-| Workload footnoteref:[workloadnode, Workload node is dedicated to run performance and scalability workload generators.]
+| Workload footnoteref:workloadnode[Workload node is dedicated to run performance and scalability workload generators.]
| m5.4xlarge
| 16
| 64
| gp2
-| 500 footnoteref:[disksize, Larger disk size is used so that there is enough space to store the large amounts of data that is collected during the performance and scalability test run.]
+| 500 footnoteref:disksize[Larger disk size is used so that there is enough space to store the large amounts of data that is collected during the performance and scalability test run.]
| 1
| us-west-2

@@ -44,7 +44,7 @@ AWS cloud platform:
| 32
| gp2
| 100
-| 3/25/250/2000 footnoteref:[nodescaleaws, Cluster is scaled in iterations and performance and scalability tests are executed at the specified node counts.]
+| 3/25/250/2000 footnoteref:nodescaleaws[Cluster is scaled in iterations and performance and scalability tests are executed at the specified node counts.]
| us-west-2

|===
@@ -56,7 +56,7 @@ Azure cloud platform:
|===
| Node |Flavor |vCPU |RAM(GiB) |Disk type|Disk size(GiB)/iops |Count |Region

-| Master/etcd footnoteref:[masteretcdnodeazure, For a higher IOPs and throughput cap, 1024GB disks are used for master/etcd nodes because etcd is I/O intensive and latency sensitive.]
+| Master/etcd footnoteref:masteretcdnodeazure[For a higher IOPs and throughput cap, 1024GB disks are used for master/etcd nodes because etcd is I/O intensive and latency sensitive.]
| Standard_D8s_v3
| 8
| 32
@@ -65,7 +65,7 @@ Azure cloud platform:
| 3
| centralus

-| Infra footnoteref:[infranodesazure,Infra nodes are used to host Monitoring, Ingress and Registry components to make sure they have enough resources to run at large scale.]
+| Infra footnoteref:infranodesazure[Infra nodes are used to host Monitoring, Ingress and Registry components to make sure they have enough resources to run at large scale.]
| Standard_D16s_v3
| 16
| 64
@@ -79,7 +79,7 @@ Azure cloud platform:
| 4
| 16
| Premium SSD
-| 1024 ( P30 )| 3/25/100/110 footnoteref:[nodescaleazure, The cluster is scaled in iterations and performance and scalability tests are executed at the specified node counts.]
+| 1024 ( P30 )| 3/25/100/110 footnoteref:nodescaleazure[The cluster is scaled in iterations and performance and scalability tests are executed at the specified node counts.]
| centralus

|===
12 changes: 6 additions & 6 deletions modules/openshift-cluster-maximums-major-releases.adoc
@@ -16,27 +16,27 @@ Tested Cloud Platforms for {product-title} 4.x: Amazon Web Services, Microsoft A
| 2,000
| 2,000

-| Number of Pods footnoteref:[numberofpodsmajorrelease,The Pod count displayed here is the number of test Pods. The actual number of Pods depends on the application’s memory, CPU, and storage requirements.]
+| Number of Pods footnoteref:numberofpodsmajorrelease[The Pod count displayed here is the number of test Pods. The actual number of Pods depends on the application’s memory, CPU, and storage requirements.]
| 150,000
| 150,000

| Number of Pods per node
| 250
-| 500 footnoteref:[podspernodemajorrelease, This was tested on a cluster with 100 worker nodes with 500 Pods per worker node. The default `maxPods` is still 250. To get to 500 `maxPods`, the cluster must be created with a `hostPrefix` of `22` in the `install-config.yaml` file and `maxPods` set to `500` using a custom KubeletConfig. The maximum number of Pods with attached Persistant Volume Claims (PVC) depends on storage backend from where PVC are allocated. In our tests, only OpenShift Container Storage v4 (OCS v4) was able to satisfy the number of Pods per node discussed in this document.]
+| 500 footnoteref:podspernodemajorrelease[This was tested on a cluster with 100 worker nodes with 500 Pods per worker node. The default `maxPods` is still 250. To get to 500 `maxPods`, the cluster must be created with a `hostPrefix` of `22` in the `install-config.yaml` file and `maxPods` set to `500` using a custom KubeletConfig. The maximum number of Pods with attached Persistant Volume Claims (PVC) depends on storage backend from where PVC are allocated. In our tests, only OpenShift Container Storage v4 (OCS v4) was able to satisfy the number of Pods per node discussed in this document.]

| Number of Pods per core
| There is no default value.
| There is no default value.

-| Number of Namespaces footnoteref:[numberofnamepacesmajorrelease, When there are a large number of active projects, etcd might suffer from poor performance if the keyspace grows excessively large and exceeds the space quota. Periodic maintenance of etcd, including defragmentaion, is highly recommended to free etcd storage.]
+| Number of Namespaces footnoteref:numberofnamepacesmajorrelease[When there are a large number of active projects, etcd might suffer from poor performance if the keyspace grows excessively large and exceeds the space quota. Periodic maintenance of etcd, including defragmentaion, is highly recommended to free etcd storage.]
| 10,000
| 10,000

| Number of Builds
| 10,000 (Default pod RAM 512 Mi) - Pipeline Strategy
| 10,000 (Default pod RAM 512 Mi) - Source-to-Image (S2I) build strategy

-| Number of Pods per namespace footnoteref:[objectpernamespacemajorrelease,There are
+| Number of Pods per namespace footnoteref:objectpernamespacemajorrelease[There are
a number of control loops in the system that must iterate over all objects
in a given namespace as a reaction to some changes in state. Having a large
number of objects of a given type in a single namespace can make those loops
@@ -45,7 +45,7 @@ the system has enough CPU, memory, and disk to satisfy the application requireme
| 25,000
| 25,000

-| Number of Services footnoteref:[servicesandendpointsmajorrelease,Each Service port and each Service back-end has a corresponding entry in iptables. The number of back-ends of a given Service impact the size of the endpoints objects, which impacts the size of data that is being sent all over the system.]
+| Number of Services footnoteref:servicesandendpointsmajorrelease[Each Service port and each Service back-end has a corresponding entry in iptables. The number of back-ends of a given Service impact the size of the endpoints objects, which impacts the size of data that is being sent all over the system.]
| 10,000
| 10,000

@@ -57,7 +57,7 @@ the system has enough CPU, memory, and disk to satisfy the application requireme
| 5,000
| 5,000

-| Number of Deployments per Namespace footnoteref:[objectpernamespacemajorrelease]
+| Number of Deployments per Namespace footnoteref:objectpernamespacemajorrelease[]
| 2,000
| 2,000

10 changes: 5 additions & 5 deletions modules/openshift-cluster-maximums.adoc
@@ -16,7 +16,7 @@
| 2,000
| 2,000

-| Number of Pods footnoteref:[numberofpods,The Pod count displayed here is the number of test Pods. The actual number of Pods depends on the application’s memory, CPU, and storage requirements.]
+| Number of Pods footnoteref:numberofpods[The Pod count displayed here is the number of test Pods. The actual number of Pods depends on the application’s memory, CPU, and storage requirements.]
| 150,000
| 150,000
| 150,000
@@ -37,7 +37,7 @@
| There is no default value.
| There is no default value.

-| Number of Namespaces footnoteref:[numberofnamepaces, When there are a large number of active projects, etcd might suffer from poor performance if the keyspace grows excessively large and exceeds the space quota. Periodic maintenance of etcd, including defragmentaion, is highly recommended to free etcd storage.]
+| Number of Namespaces footnoteref:numberofnamepaces[ When there are a large number of active projects, etcd might suffer from poor performance if the keyspace grows excessively large and exceeds the space quota. Periodic maintenance of etcd, including defragmentaion, is highly recommended to free etcd storage.]
| 10,000
| 10,000
| 10,000
@@ -51,7 +51,7 @@
| 10,000 (Default pod RAM 512 Mi) - Source-to-Image (S2I) build strategy
| 10,000 (Default pod RAM 512 Mi) - Source-to-Image (S2I) build strategy

-| Number of Pods per Namespace footnoteref:[objectpernamespace,There are
+| Number of Pods per Namespace footnoteref:objectpernamespace[There are
a number of control loops in the system that must iterate over all objects
in a given namespace as a reaction to some changes in state. Having a large
number of objects of a given type in a single namespace can make those loops
@@ -63,7 +63,7 @@ the system has enough CPU, memory, and disk to satisfy the application requireme
| 25,000
| 25,000

-| Number of Services footnoteref:[servicesandendpoints,Each service port and each service back-end has a corresponding entry in iptables. The number of back-ends of a given service impact the size of the endpoints objects, which impacts the size of data that is being sent all over the system.]
+| Number of Services footnoteref:servicesandendpoints[Each service port and each service back-end has a corresponding entry in iptables. The number of back-ends of a given service impact the size of the endpoints objects, which impacts the size of data that is being sent all over the system.]
| 10,000
| 10,000
| 10,000
@@ -84,7 +84,7 @@ the system has enough CPU, memory, and disk to satisfy the application requireme
| 5,000
| 5,000

-| Number of Deployments per Namespace footnoteref:[objectpernamespace]
+| Number of Deployments per Namespace footnoteref:objectpernamespace[]
| 2,000
| 2,000
| 2,000
12 changes: 6 additions & 6 deletions modules/recommended-configurable-storage-technology.adoc
@@ -23,10 +23,10 @@ technologies for the given {product-title} cluster application.
.Recommended and configurable storage technology
[options="header"]
|===
-|Storage type |ROX footnoteref:[rox,ReadOnlyMany]|RWX footnoteref:[rwx,ReadWriteMany] |Registry|Scaled registry|Metrics footnoteref:[metrics-prometheus,Prometheus is the underlying technology used for metrics.]|Logging|Apps
+|Storage type |ROX footnoteref:rox[ReadOnlyMany]|RWX footnoteref:rwx[ReadWriteMany] |Registry|Scaled registry|Metrics footnoteref:metrics-prometheus[Prometheus is the underlying technology used for metrics.]|Logging|Apps

| Block
-| Yes footnoteref:[disk,This does not apply to physical disk, VM physical disk, VMDK, loopback over NFS, AWS EBS, and Azure Disk.]
+| Yes footnoteref:disk[This does not apply to physical disk, VM physical disk, VMDK, loopback over NFS, AWS EBS, and Azure Disk.]
| No
| Configurable
| Not configurable
@@ -35,12 +35,12 @@ technologies for the given {product-title} cluster application.
| Recommended

| File
-| Yes footnoteref:[disk]
+| Yes footnoteref:disk[]
| Yes
| Configurable
| Configurable
-| Configurable footnoteref:[metrics-warning,For metrics, using file storage with the ReadWriteMany (RWX) access mode is unreliable. If you use file storage, do not configure the RWX access mode on any PersistentVolumeClaims that are configured for use with metrics.]
-| Configurable footnoteref:[logging-warning,For logging, using any shared
+| Configurable footnoteref:metrics-warning[For metrics, using file storage with the ReadWriteMany (RWX) access mode is unreliable. If you use file storage, do not configure the RWX access mode on any PersistentVolumeClaims that are configured for use with metrics.]
+| Configurable footnoteref:logging-warning[For logging, using any shared
storage would be an anti-pattern. One volume per elasticsearch is required.]
| Recommended

@@ -51,7 +51,7 @@ storage would be an anti-pattern. One volume per elasticsearch is required.]
| Recommended
| Not configurable
| Not configurable
-| Not configurable footnoteref:[object,Object storage is not consumed through {product-title}'s PVs/persistent volume claims (PVCs). Apps must integrate with the object storage REST API. ]
+| Not configurable footnoteref:object[Object storage is not consumed through {product-title}'s PVs/persistent volume claims (PVCs). Apps must integrate with the object storage REST API. ]
|===
[NOTE]
