Do not use deprecated asciidoctor footnote syntax
jboxman committed Jun 12, 2020
1 parent 7734bb5 commit 1eb5367
Showing 7 changed files with 30 additions and 30 deletions.
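The change repeated across all seven files is mechanical: the deprecated form `footnoteref:[id,text]` (and the bare re-reference `footnoteref:[id]`) becomes `footnoteref:id[text]` (and `footnoteref:id[]`). A rewrite like this can be scripted; the following is a sketch only, assuming GNU sed, with a row from `architecture/index.adoc` as sample input — it is not part of the commit:

```shell
# Rewrite deprecated Asciidoctor footnoteref syntax to the current form.
# Rule 1 handles definitions:   footnoteref:[id,text] -> footnoteref:id[text]
# Rule 2 handles re-references: footnoteref:[id]      -> footnoteref:id[]
convert() {
  sed -E 's/footnoteref:\[([^],]+),[[:space:]]*/footnoteref:\1[/g; s/footnoteref:\[([^]]+)\]/footnoteref:\1[]/g'
}

printf '%s\n' \
  '|No footnoteref:[tlsconfig,Disabled by default, but can be enabled in the server configuration.]' \
  '|No footnoteref:[tlsconfig]' | convert
# Expected output:
#   |No footnoteref:tlsconfig[Disabled by default, but can be enabled in the server configuration.]
#   |No footnoteref:tlsconfig[]
```

The `[[:space:]]*` in rule 1 also drops the space that some definitions carry after the comma (for example `footnoteref:[disclaimer, Features ...]`), matching what the commit does by hand.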
18 changes: 9 additions & 9 deletions architecture/index.adoc
@@ -220,24 +220,24 @@ suites enabled.
 |Unsupported
 
 |TLS 1.0
-|No footnoteref:[tlsconfig,Disabled by default, but can be enabled in the server configuration.]
-|No footnoteref:[tlsconfig]
-|Maybe footnoteref:[otherclient,Some internal clients, such as the LDAP client.]
+|No footnoteref:tlsconfig[Disabled by default, but can be enabled in the server configuration.]
+|No footnoteref:tlsconfig[]
+|Maybe footnoteref:otherclient[Some internal clients, such as the LDAP client.]
 
 |TLS 1.1
-|No footnoteref:[tlsconfig]
-|No footnoteref:[tlsconfig]
-|Maybe footnoteref:[otherclient]
+|No footnoteref:tlsconfig[]
+|No footnoteref:tlsconfig[]
+|Maybe footnoteref:otherclient[]
 
 |TLS 1.2
 |*Yes*
 |*Yes*
 |*Yes*
 
 |TLS 1.3
-|N/A footnoteref:[tls13,TLS 1.3 is still under development.]
-|N/A footnoteref:[tls13]
-|N/A footnoteref:[tls13]
+|N/A footnoteref:tls13[TLS 1.3 is still under development.]
+|N/A footnoteref:tls13[]
+|N/A footnoteref:tls13[]
 |===
 
 The following list of enabled cipher suites of {product-title}'s server and `oc`
4 changes: 2 additions & 2 deletions release_notes/index.adoc
@@ -47,13 +47,13 @@ capabilities that are not supported by a 3.1 server.
 
 |
 |*X.Y* (`oc` Client)
-|*X.Y+N* footnoteref:[versionpolicyn,Where *N* is a number greater than 1.] (`oc` Client)
+|*X.Y+N* footnoteref:versionpolicyn[Where *N* is a number greater than 1.] (`oc` Client)
 
 |*X.Y* (Server)
 |image:redcircle-1.png[]
 |image:redcircle-3.png[]
 
-|*X.Y+N* footnoteref:[versionpolicyn] (Server)
+|*X.Y+N* footnoteref:versionpolicyn[] (Server)
 |image:redcircle-2.png[]
 |image:redcircle-1.png[]
 
2 changes: 1 addition & 1 deletion release_notes/ocp_3_10_release_notes.adoc
@@ -1724,7 +1724,7 @@ features marked *GA* indicate _General Availability_.
 
 |CRI-O for runtime pods
 |TP
-|GA* footnoteref:[disclaimer, Features marked with `*` indicate delivery in a z-stream patch.]
+|GA* footnoteref:disclaimer[Features marked with `*` indicate delivery in a z-stream patch.]
 |GA
 
 |xref:ocp-310-tenant-driven-storage-snapshotting[Tenant Driven Snapshotting]
4 changes: 2 additions & 2 deletions release_notes/ocp_3_11_release_notes.adoc
@@ -1327,7 +1327,7 @@ features marked *GA* indicate _General Availability_.
 
 |CRI-O for runtime pods
 |GA
-|GA* footnoteref:[disclaimer, Features marked with `*` indicate delivery in a z-stream patch.]
+|GA* footnoteref:disclaimer[Features marked with `*` indicate delivery in a z-stream patch.]
 |GA
 
 |xref:ocp-311-tenant-driven-storage-snapshotting[Tenant Driven Snapshotting]
@@ -1538,7 +1538,7 @@ features marked *GA* indicate _General Availability_.
 |xref:ocp-311-kuryr[Kuryr CNI Plug-in]
 |-
 |TP
-|xref:ocp-3-11-88[GA*] footnoteref:[disclaimer]
+|xref:ocp-3-11-88[GA*] footnoteref:disclaimer[]
 
 |xref:ocp-311-control-sharing-pid-namespace-between-containers[Sharing Control of the PID Namespace]
 |-
2 changes: 1 addition & 1 deletion release_notes/ocp_3_9_release_notes.adoc
@@ -1358,7 +1358,7 @@ features marked *GA* indicate _General Availability_.
 |xref:ocp-39-crio[CRI-O] for runtime pods
 | -
 |TP
-|GA* footnoteref:[disclaimer, Features marked with `*` indicate delivery in a z-stream patch.]
+|GA* footnoteref:disclaimer[Features marked with `*` indicate delivery in a z-stream patch.]
 
 |Tenant Driven Snapshotting
 | -
16 changes: 8 additions & 8 deletions scaling_performance/cluster_maximums.adoc
@@ -85,7 +85,7 @@ application requirements.]
 | 2,000
 | 2,000
 
-| Number of Pods footnoteref:[numberofpods,The Pod count displayed here is the number of test Pods. The actual number of Pods depends on the application’s memory, CPU, and storage requirements.]
+| Number of Pods footnoteref:numberofpods[The Pod count displayed here is the number of test Pods. The actual number of Pods depends on the application’s memory, CPU, and storage requirements.]
 | 120,000
 | 120,000
 | 150,000
@@ -116,7 +116,7 @@ application requirements.]
 | 10,000 (Default pod RAM 512Mi)
 | 10,000 (Default pod RAM 512Mi)
 
-| Number of Pods per Namespace footnoteref:[objectpernamespace,There are
+| Number of Pods per Namespace footnoteref:objectpernamespace[There are
 a number of control loops in the system that need to iterate over all objects
 in a given namespace as a reaction to some changes in state. Having a large
 number of objects of a given type in a single namespace can make those loops
@@ -128,7 +128,7 @@ application requirements.]
 | 3,000
 | 25,000
 
-| Number of Services footnoteref:[servicesandendpoints,Each Service port and each Service back-end has a corresponding entry in iptables. The number of back-ends of a given service impact the size of the endpoints objects, which impacts the size of data that is being sent all over the system.]
+| Number of Services footnoteref:servicesandendpoints[Each Service port and each Service back-end has a corresponding entry in iptables. The number of back-ends of a given service impact the size of the endpoints objects, which impacts the size of data that is being sent all over the system.]
 | 10,000
 | 10,000
 | 10,000
@@ -146,7 +146,7 @@ application requirements.]
 | 5,000
 | 5,000
 
-| Number of Deployments per Namespace footnoteref:[objectpernamespace]
+| Number of Deployments per Namespace footnoteref:objectpernamespace[]
 | 2,000
 | 2,000
 | 2,000
@@ -163,14 +163,14 @@ Infrastructure as a service provider: OpenStack
 |===
 |Node |vCPU |RAM(MiB) |Disk size(GiB) |pass-through disk |Count
 
-| Master/Etcd footnoteref:[masteretcdnvme, The master/etcd nodes are backed by NVMe disks as etcd is I/O intensive and latency sensitive.]
+| Master/Etcd footnoteref:masteretcdnvme[The master/etcd nodes are backed by NVMe disks as etcd is I/O intensive and latency sensitive.]
 | 16
 | 124672
 | 128
 | Yes, NVMe
 | 3
 
-| Infra footnoteref:[infranodes, Infra nodes host the Router, Registry, Logging and Monitoring and are backed by NVMe disks.]
+| Infra footnoteref:infranodes[Infra nodes host the Router, Registry, Logging and Monitoring and are backed by NVMe disks.]
 | 40
 | 163584
 | 256
@@ -191,14 +191,14 @@ Infrastructure as a service provider: OpenStack
 | No
 | 1
 
-| Container Native Storage footnoteref:[cns, Container Native Storage or Ceph storage nodes are backed by NVMe disks.]
+| Container Native Storage footnoteref:cns[Container Native Storage or Ceph storage nodes are backed by NVMe disks.]
 | 16
 | 65280
 | 200
 | Yes, NVMe
 | 3
 
-| Bastion footnoteref:[bastionnode, The Bastion node is part of the OCP network and is used to orchestrate the performance and scale tests.]
+| Bastion footnoteref:bastionnode[The Bastion node is part of the OCP network and is used to orchestrate the performance and scale tests.]
 | 16
 | 65280
 | 200
14 changes: 7 additions & 7 deletions scaling_performance/optimizing_storage.adoc
@@ -44,19 +44,19 @@ a|* Presented to the operating system (OS) as a block device
 bypassing the file system
 * Also referred to as a Storage Area Network (SAN)
 * Non-shareable, which means that only one client at a time can mount an endpoint of this type
-| {gluster-native}/{gluster-external} GlusterFS footnoteref:[dynamicPV,{gluster-native}/{gluster-external} GlusterFS, Ceph RBD, OpenStack Cinder, AWS EBS, Azure Disk, GCE persistent disk, and VMware vSphere support dynamic persistent volume (PV) provisioning natively in {product-title}.] iSCSI, Fibre Channel, Ceph RBD, OpenStack Cinder, AWS EBS footnoteref:[dynamicPV], Dell/EMC Scale.IO, VMware vSphere Volume, GCE Persistent Disk footnoteref:[dynamicPV], Azure Disk
+| {gluster-native}/{gluster-external} GlusterFS footnoteref:dynamicPV[{gluster-native}/{gluster-external} GlusterFS, Ceph RBD, OpenStack Cinder, AWS EBS, Azure Disk, GCE persistent disk, and VMware vSphere support dynamic persistent volume (PV) provisioning natively in {product-title}.] iSCSI, Fibre Channel, Ceph RBD, OpenStack Cinder, AWS EBS footnoteref:dynamicPV[], Dell/EMC Scale.IO, VMware vSphere Volume, GCE Persistent Disk footnoteref:dynamicPV[], Azure Disk
 
 |File
 a| * Presented to the OS as a file system export to be mounted
 * Also referred to as Network Attached Storage (NAS)
 * Concurrency, latency, file locking mechanisms, and other capabilities vary widely between protocols, implementations, vendors, and scales.
-| {gluster-native}/{gluster-external} GlusterFS footnoteref:[dynamicPV], RHEL NFS, NetApp NFS footnoteref:[netappnfs,NetApp NFS supports dynamic PV provisioning when using the Trident plugin.] , Azure File, Vendor NFS, Vendor GlusterFS footnoteref:[glusterfs, Vendor GlusterFS, Vendor S3, and Vendor Swift supportability and configurability may vary.], Azure File, AWS EFS
+| {gluster-native}/{gluster-external} GlusterFS footnoteref:dynamicPV[], RHEL NFS, NetApp NFS footnoteref:netappnfs[NetApp NFS supports dynamic PV provisioning when using the Trident plugin.] , Azure File, Vendor NFS, Vendor GlusterFS footnoteref:glusterfs[Vendor GlusterFS, Vendor S3, and Vendor Swift supportability and configurability may vary.], Azure File, AWS EFS
 
 | Object
 a| * Accessible through a REST API endpoint
 * Configurable for use in the {product-title} Registry
 * Applications must build their drivers into the application and/or container.
-| {gluster-native}/{gluster-external} GlusterFS footnoteref:[dynamicPV], Ceph Object Storage (RADOS Gateway), OpenStack Swift, Aliyun OSS, AWS S3, Google Cloud Storage, Azure Blob Storage, Vendor S3 footnoteref:[glusterfs], Vendor Swift footnoteref:[glusterfs]
+| {gluster-native}/{gluster-external} GlusterFS footnoteref:dynamicPV[], Ceph Object Storage (RADOS Gateway), OpenStack Swift, Aliyun OSS, AWS S3, Google Cloud Storage, Azure Blob Storage, Vendor S3 footnoteref:glusterfs[], Vendor Swift footnoteref:glusterfs[]
 |===
 
 You can use {gluster-native} GlusterFS (a hyperconverged or cluster-hosted
@@ -72,7 +72,7 @@ The following table summarizes the recommended and configurable storage technolo
 .Recommended and configurable storage technology
 [options="header"]
 |===
-|Storage type|RWO footnoteref:[rwo,ReadWriteOnce]|ROX footnoteref:[rox,ReadOnlyMany]|RWX footnoteref:[rwx,ReadWriteMany]|Registry|Scaled registry|Monitoring|Logging|Apps
+|Storage type|RWO footnoteref:rwo[ReadWriteOnce]|ROX footnoteref:rox[ReadOnlyMany]|RWX footnoteref:rwx[ReadWriteMany]|Registry|Scaled registry|Monitoring|Logging|Apps
 
 | Block
 | Yes
@@ -90,8 +90,8 @@ The following table summarizes the recommended and configurable storage technolo
 | Yes
 | Configurable
 | Configurable
-| Configurable footnoteref:[metrics-warning,For monitoring components, using file storage with the ReadWriteMany (RWX) access mode is unreliable. If you use file storage, do not configure the RWX access mode on any PersistentVolumeClaims that are configured for use with monitoring.]
-| Configurable footnoteref:[logging-warning,For logging, using any shared
+| Configurable footnoteref:metrics-warning[For monitoring components, using file storage with the ReadWriteMany (RWX) access mode is unreliable. If you use file storage, do not configure the RWX access mode on any PersistentVolumeClaims that are configured for use with monitoring.]
+| Configurable footnoteref:logging-warning[For logging, using any shared
 storage would be an anti-pattern. One volume per logging-es is required.]
 | Recommended
 
@@ -103,7 +103,7 @@ storage would be an anti-pattern. One volume per logging-es is required.]
 | Recommended
 | Not configurable
 | Not configurable
-| Not configurable footnoteref:[object,Object storage is not consumed through {product-title}'s PVs/persistent volume claims (PVCs). Apps must integrate with the object storage REST API. ]
+| Not configurable footnoteref:object[Object storage is not consumed through {product-title}'s PVs/persistent volume claims (PVCs). Apps must integrate with the object storage REST API. ]
 |===
 
 [NOTE]
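After a mechanical rewrite like this one, a repo-wide check helps confirm that no deprecated occurrences slipped through. A sketch, assuming GNU grep and an illustrative sandbox directory rather than the real docs tree:

```shell
# Scan a docs tree for any remaining deprecated footnoteref:[...] usages.
d=$(mktemp -d)                                  # stand-in for the repository root
printf '|No footnoteref:tlsconfig[]\n' > "$d/index.adoc"

if grep -rqE 'footnoteref:\[' --include='*.adoc' "$d"; then
  result='deprecated syntax found'
else
  result='clean'
fi
echo "$result"
# Expected output: clean
```

Running the same `grep` over the repository before merging (exit status 1 means no matches) makes the cleanup verifiable in CI.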
