diff --git a/snippets/logging-approval-strategy-snip.adoc b/snippets/logging-approval-strategy-snip.adoc
deleted file mode 100644
index 27df1077aaa2..000000000000
--- a/snippets/logging-approval-strategy-snip.adoc
+++ /dev/null
@@ -1,9 +0,0 @@
-// Text snippet included in the following assemblies:
-//
-//
-// Text snippet included in the following modules:
-//
-//
-:_mod-docs-content-type: SNIPPET
-
-If the approval strategy in the subscription is set to *Automatic*, the update process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to *Manual*, you must manually approve pending updates.
diff --git a/snippets/logging-create-secret-snip.adoc b/snippets/logging-create-secret-snip.adoc
deleted file mode 100644
index 4a1fb35c29c2..000000000000
--- a/snippets/logging-create-secret-snip.adoc
+++ /dev/null
@@ -1,24 +0,0 @@
-// Text snippet included in the following assemblies:
-//
-//
-// Text snippet included in the following modules:
-//
-//
-:_mod-docs-content-type: SNIPPET
-
-You can create a secret in the directory that contains your certificate and key files by using the following command:
-[subs="+quotes"]
-[source,terminal]
-----
-$ oc create secret generic -n openshift-logging \
- --from-file=tls.key=
- --from-file=tls.crt=
- --from-file=ca-bundle.crt=
- --from-literal=username=
- --from-literal=password=
-----
-
-[NOTE]
-====
-Use generic or opaque secrets for best results.
-====
diff --git a/snippets/logging-get-clusterid-snip.adoc b/snippets/logging-get-clusterid-snip.adoc
deleted file mode 100644
index ee6204095268..000000000000
--- a/snippets/logging-get-clusterid-snip.adoc
+++ /dev/null
@@ -1,12 +0,0 @@
-// Text snippet included in the following modules and assemblies:
-//
-
-:_mod-docs-content-type: SNIPPET
-
-Logs from any source contain a field `openshift.cluster_id`, the unique identifier of the cluster in which the Operator is deployed.
-
-.ClusterID query
-[source,terminal]
-----
-$ oc get clusterversion/version -o jsonpath='{.spec.clusterID}{"\n"}'
-----
diff --git a/snippets/logging-log-types-snip.adoc b/snippets/logging-log-types-snip.adoc
deleted file mode 100644
index 17b994dbbf21..000000000000
--- a/snippets/logging-log-types-snip.adoc
+++ /dev/null
@@ -1,15 +0,0 @@
-// Text snippet included in the following assemblies:
-//
-//
-// Text snippet included in the following modules:
-//
-//
-:_mod-docs-content-type: SNIPPET
-
-{logging-uc} collects container logs and node logs. These are categorized into types:
-
-* `application` - Container logs generated by non-infrastructure containers.
-
-* `infrastructure` - Container logs from namespaces `kube-\*` and `openshift-\*`, and node logs from `journald`.
-
-* `audit` - Logs from `auditd`, `kube-apiserver`, `openshift-apiserver`, and `ovn` if enabled.
diff --git a/snippets/logging-loki-vs-lokistack-snip.adoc b/snippets/logging-loki-vs-lokistack-snip.adoc
deleted file mode 100644
index fbca02e2cbc3..000000000000
--- a/snippets/logging-loki-vs-lokistack-snip.adoc
+++ /dev/null
@@ -1,9 +0,0 @@
-// Text snippet included in the following assemblies:
-//
-//
-// Text snippet included in the following modules:
-//
-//
-:_mod-docs-content-type: SNIPPET
-
-In logging documentation, LokiStack refers to the supported combination of Loki and web proxy with OpenShift Container Platform authentication integration. LokiStack’s proxy uses OpenShift Container Platform authentication to enforce multi-tenancy. Loki refers to the log store as either the individual component or an external store.
diff --git a/snippets/logging-outputs-5.5-snip.adoc b/snippets/logging-outputs-5.5-snip.adoc
deleted file mode 100644
index 54e9e0278a67..000000000000
--- a/snippets/logging-outputs-5.5-snip.adoc
+++ /dev/null
@@ -1,27 +0,0 @@
-// Text snippet included in the following assemblies:
-//
-//
-// Text snippet included in the following modules:
-//
-//
-:_mod-docs-content-type: SNIPPET
-
-.Output Destinations
-[options="header"]
-|======
-|Feature|Protocol|Tested with|Fluentd|Vector
-|Cloudwatch|REST over HTTPS||✓|✓
-|Elasticsearch||||
-| * v6||v6.8.1|✓|✓
-| * v7||v7.12.2|✓|✓
-| * v8||||✓
-|Google Cloud Logging||||✓
-
-|Kafka|kafka 0.11|kafka 2.4.1 kafka 2.7.0 kafka 3|✓|✓
-
-|Fluent Forward|fluentd forward v1|fluentd 1.14.6
-logstash 7.10.1|✓|
-
-|Loki|REST over HTTP(S)|Loki 2.3.0 Loki 2.6.0|✓|✓
-|Syslog|RFC3164,RFC5424|rsyslog 8.39.0|✓|
-|======
diff --git a/snippets/logging-outputs-5.6-snip.adoc b/snippets/logging-outputs-5.6-snip.adoc
deleted file mode 100644
index 74aa6aa8244b..000000000000
--- a/snippets/logging-outputs-5.6-snip.adoc
+++ /dev/null
@@ -1,23 +0,0 @@
-// Text snippet included in the following assemblies:
-//
-//
-// Text snippet included in the following modules:
-//
-//
-:_mod-docs-content-type: SNIPPET
-
-[options="header"]
-|====================================================================================================
-| Output | Protocol | Tested with | Fluentd | Vector
-| Cloudwatch | REST over HTTP(S) | | ✓ | ✓
-| Elasticsearch v6 | | v6.8.1 | ✓ | ✓
-| Elasticsearch v7 | | v7.12.2, 7.17.7 | ✓ | ✓
-| Elasticsearch v8 | | v8.4.3 | | ✓
-| Fluent Forward | Fluentd forward v1 | Fluentd 1.14.6, Logstash 7.10.1 | ✓ |
-| Google Cloud Logging | | | | ✓
-| HTTP | HTTP 1.1 | Fluentd 1.14.6, Vector 0.21 | |
-| Kafka | Kafka 0.11 | Kafka 2.4.1, 2.7.0, 3.3.1 | ✓ | ✓
-| Loki | REST over HTTP(S) | Loki 2.3.0, 2.7 | ✓ | ✓
-| Splunk | HEC | v8.2.9, 9.0.0 | | ✓
-| Syslog | RFC3164, RFC5424 | Rsyslog 8.37.0-9.el7 | ✓ |
-|====================================================================================================
diff --git a/snippets/logging-restructure-snip.adoc b/snippets/logging-restructure-snip.adoc
deleted file mode 100644
index 1c4da7ef9b89..000000000000
--- a/snippets/logging-restructure-snip.adoc
+++ /dev/null
@@ -1,13 +0,0 @@
-// Text snippet included in the following assemblies:
-//
-//
-// Text snippet included in the following modules:
-//
-//
-:_mod-docs-content-type: SNIPPET
-
-
-[NOTE]
-====
-For Logging 5.5 and higher, documentation is organized by version.
-====
diff --git a/snippets/logging-subscription-object-snip.adoc b/snippets/logging-subscription-object-snip.adoc
deleted file mode 100644
index 0be3307393a5..000000000000
--- a/snippets/logging-subscription-object-snip.adoc
+++ /dev/null
@@ -1,23 +0,0 @@
-// Text snippet included in the following assemblies:
-//
-//
-// Text snippet included in the following modules:
-//
-//
-:_mod-docs-content-type: SNIPPET
-
-[source,YAML]
-----
-apiVersion: operators.coreos.com/v1alpha1
-kind: Subscription
-metadata:
-  name: cluster-logging
-  namespace: openshift-logging
-spec:
-  channel: "stable" <1>
-  name: cluster-logging
-  source: redhat-operators
-  sourceNamespace: openshift-marketplace
-  installPlanApproval: Automatic
-----
-<1> Specify `stable`, or `stable-5.` as the channel.
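The deleted approval-strategy and subscription snippets above mention *Manual* approval but do not show how a pending update is approved. As an illustrative sketch only, not part of the deleted files, a pending install plan in the `openshift-logging` namespace could be listed and approved with standard OLM commands; `<install_plan_name>` is a placeholder:

[source,terminal]
----
$ oc get installplan -n openshift-logging

$ oc patch installplan <install_plan_name> -n openshift-logging \
    --type merge --patch '{"spec":{"approved":true}}'
----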
diff --git a/snippets/network-observability-clusterrole-reader.adoc b/snippets/network-observability-clusterrole-reader.adoc
deleted file mode 100644
index 117c5e6f058b..000000000000
--- a/snippets/network-observability-clusterrole-reader.adoc
+++ /dev/null
@@ -1,27 +0,0 @@
-// Text snippet included in the following assemblies:
-//
-//
-//
-// Text snippet included in the following modules:
-//
-// * modules/network-observability-auth-multi-tenancy.adoc
-
-:_mod-docs-content-type: SNIPPET
-.Example ClusterRole reader yaml
-[source, yaml]
-----
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRole
-metadata:
-  name: netobserv-reader <1>
-rules:
-- apiGroups:
-  - 'loki.grafana.com'
-  resources:
-  - network
-  resourceNames:
-  - logs
-  verbs:
-  - 'get'
-----
-<1> This role can be used for multi-tenancy.
\ No newline at end of file
diff --git a/snippets/network-observability-clusterrole-writer.adoc b/snippets/network-observability-clusterrole-writer.adoc
deleted file mode 100644
index bf9704d1ef2d..000000000000
--- a/snippets/network-observability-clusterrole-writer.adoc
+++ /dev/null
@@ -1,26 +0,0 @@
-// Text snippet included in the following assemblies:
-//
-//
-//
-// Text snippet included in the following modules:
-//
-// * modules/network-observability-auth-multi-tenancy.adoc
-
-:_mod-docs-content-type: SNIPPET
-.Example ClusterRole writer yaml
-[source,yaml]
-----
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRole
-metadata:
-  name: netobserv-writer
-rules:
-- apiGroups:
-  - 'loki.grafana.com'
-  resources:
-  - network
-  resourceNames:
-  - logs
-  verbs:
-  - 'create'
-----
\ No newline at end of file
diff --git a/snippets/network-observability-clusterrolebinding.adoc b/snippets/network-observability-clusterrolebinding.adoc
deleted file mode 100644
index 76906006ab27..000000000000
--- a/snippets/network-observability-clusterrolebinding.adoc
+++ /dev/null
@@ -1,30 +0,0 @@
-// Text snippet included in the following assemblies:
-//
-//
-//
-// Text snippet included in the following modules:
-//
-// * modules/network-observability-auth-multi-tenancy.adoc
-
-:_mod-docs-content-type: SNIPPET
-
-.Example ClusterRoleBinding yaml
-[source, yaml]
-----
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRoleBinding
-metadata:
-  name: netobserv-writer-flp
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: ClusterRole
-  name: netobserv-writer
-subjects:
-- kind: ServiceAccount
-  name: flowlogs-pipeline <1>
-  namespace: netobserv
-- kind: ServiceAccount
-  name: flowlogs-pipeline-transformer
-  namespace: netobserv
-----
-<1> The `flowlogs-pipeline` writes to Loki. If you are using Kafka, this value is `flowlogs-pipeline-transformer`.
\ No newline at end of file
diff --git a/snippets/oadp-ceph-cr-prerequisites.adoc b/snippets/oadp-ceph-cr-prerequisites.adoc
deleted file mode 100644
index 672ae8afcf81..000000000000
--- a/snippets/oadp-ceph-cr-prerequisites.adoc
+++ /dev/null
@@ -1,11 +0,0 @@
-// Text snippet included in the following modules:
-//
-// * backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.adoc
-
-:_mod-docs-content-type: SNIPPET
-
-.Prerequisites
-
-* A stateful application is running in a separate namespace with persistent volume claims (PVCs) using CephFS as the provisioner.
-* The `StorageClass` and `VolumeSnapshotClass` custom resources (CRs) are defined for CephFS and OADP 1.2 Data Mover.
-* There is a secret `cloud-credentials` in the `openshift-adp` namespace.
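The deleted OADP prerequisites snippet above requires a `cloud-credentials` secret in the `openshift-adp` namespace without showing how it is created. A minimal sketch, assuming the object storage credentials are saved in a local file named `credentials-velero` (an assumed filename), might look like this:

[source,terminal]
----
$ oc create secret generic cloud-credentials \
    --namespace openshift-adp \
    --from-file cloud=credentials-velero
----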
diff --git a/snippets/operator-group-unique-name.adoc b/snippets/operator-group-unique-name.adoc
deleted file mode 100644
index 9f64eebbc5e4..000000000000
--- a/snippets/operator-group-unique-name.adoc
+++ /dev/null
@@ -1,21 +0,0 @@
-// Text snippet included in the following assemblies:
-//
-// * modules/olm-installing-from-operatorhub-using-cli.adoc
-// * modules/olm-installing-specific-version-cli.adoc
-// * modules/olm-operatorgroups-rbac.adoc
-// * modules/olm-operatorgroups-static.adoc
-// * modules/olm-operatorgroups-target-namespace.adoc
-// * modules/olm-policy-scoping-operator-install.adoc
-
-:_mod-docs-content-type: SNIPPET
-
-[WARNING]
-====
-Operator Lifecycle Manager (OLM) creates the following cluster roles for each Operator group:
-
-* `-admin`
-* `-edit`
-* `-view`
-
-When you manually create an Operator group, you must specify a unique name that does not conflict with the existing cluster roles or other Operator groups on the cluster.
-====
diff --git a/snippets/ptp-clock-holdover-note.adoc b/snippets/ptp-clock-holdover-note.adoc
deleted file mode 100644
index 5ba4867e0729..000000000000
--- a/snippets/ptp-clock-holdover-note.adoc
+++ /dev/null
@@ -1,10 +0,0 @@
-:_mod-docs-content-type: SNIPPET
-[NOTE]
-====
-During holdover, the T-GM or T-BC uses the internal system clock to continue generating time synchronization signals as accurately as possible based on the last known good reference.
-
-You can set the time holdover specification threshold controlling the time spent advertising `ClockClass` values `7` or `135` to `0` so that the T-GM or T-BC advertises a degraded `ClockClass` value directly after losing traceability to a PRTC.
-In this case, after initially advertising `ClockClass` values between `140–165`, a clock can still be within the holdover specification.
-====
-
-For more information, see link:https://www.itu.int/rec/T-REC-G.8275.1-202211-I/en["Phase/time traceability information", ITU-T G.8275.1/Y.1369.1 Recommendations].
diff --git a/snippets/rosa-4-11-12-snippet.adoc b/snippets/rosa-4-11-12-snippet.adoc
deleted file mode 100644
index 995063694722..000000000000
--- a/snippets/rosa-4-11-12-snippet.adoc
+++ /dev/null
@@ -1,6 +0,0 @@
-[IMPORTANT]
-====
-{product-title} ROSA 4.12 cluster creation can take a long time or fail. The default version of ROSA is set to 4.11, which means that only 4.11 resources are created when you create account roles or ROSA clusters using the default settings. Account roles from 4.12 are backwards compatible, which is the case for `account-role` policy versions. You can use the `--version` flag to create 4.12 resources.
-
-For more information, see the link:https://access.redhat.com/solutions/6996508[ROSA 4.12 cluster creation failure solution].
-====
\ No newline at end of file
diff --git a/snippets/rosa-hcp-rn.adoc b/snippets/rosa-hcp-rn.adoc
deleted file mode 100644
index 7bd29c9930e5..000000000000
--- a/snippets/rosa-hcp-rn.adoc
+++ /dev/null
@@ -1,6 +0,0 @@
-// Text snippet included in the following modules:
-//
-// * rosa_release_notes/rosa-release-notes.adoc
-
-:_mod-docs-content-type: SNIPPET
-* **Hosted control planes.** {hcp-title-first} clusters are now available as a link:https://access.redhat.com/support/offerings/techpreview[Technology Preview] feature. This new architecture provides a lower-cost, more resilient ROSA architecture. For more information, see xref:../rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.adoc[Creating {hcp-title} clusters using the default options].
\ No newline at end of file
diff --git a/snippets/vcenter-support.adoc b/snippets/vcenter-support.adoc
deleted file mode 100644
index 1101155977b7..000000000000
--- a/snippets/vcenter-support.adoc
+++ /dev/null
@@ -1,16 +0,0 @@
-// Text snippet included in the following modules:
-//
-// * installing/installing_vsphere/installing-restricted-networks-installer-provisioned-vsphere.adoc
-// * installing/installing_vsphere/installing-restricted-networks-vsphere.adoc
-// * installing/installing_vsphere/installing-vsphere-installer-provisioned-customizations.adoc
-// * installing/installing_vsphere/installing-vsphere-installer-provisioned-network-customizations.adoc
-// * installing/installing_vsphere/installing-vsphere-installer-provisioned.adoc
-// * installing/installing_vsphere/installing-vsphere-network-customizations.adoc
-// * installing/installing_vsphere/installing-vsphere.adoc
-
-:_mod-docs-content-type: SNIPPET
-
-[NOTE]
-====
-{product-title} supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported.
-====
diff --git a/snippets/ztp-example-siteconfig.adoc b/snippets/ztp-example-siteconfig.adoc
deleted file mode 100644
index 27fd58a58b93..000000000000
--- a/snippets/ztp-example-siteconfig.adoc
+++ /dev/null
@@ -1,92 +0,0 @@
-:_mod-docs-content-type: SNIPPET
-.Example {sno} cluster SiteConfig CR
-[source,yaml,subs="attributes+"]
-----
-apiVersion: ran.openshift.io/v1
-kind: SiteConfig
-metadata:
-  name: ""
-  namespace: ""
-spec:
-  baseDomain: "example.com"
-  pullSecretRef:
-    name: "assisted-deployment-pull-secret" <1>
-  clusterImageSetNameRef: "openshift-{product-version}" <2>
-  sshPublicKey: "ssh-rsa AAAA..." <3>
-  clusters:
-  - clusterName: ""
-    networkType: "OVNKubernetes"
-    clusterLabels: <4>
-      common: true
-      group-du-sno: ""
-      sites : ""
-    clusterNetwork:
-      - cidr: 1001:1::/48
-        hostPrefix: 64
-    machineNetwork:
-      - cidr: 1111:2222:3333:4444::/64
-    serviceNetwork:
-      - 1001:2::/112
-    additionalNTPSources:
-      - 1111:2222:3333:4444::2
-    #crTemplates:
-    #  KlusterletAddonConfig: "KlusterletAddonConfigOverride.yaml" <5>
-    nodes:
-      - hostName: "example-node.example.com" <6>
-        role: "master"
-        nodeLabels: <7>
-          node-role.kubernetes.io/example-label:
-          custom-label/parameter1: true
-        # automatedCleaningMode: "disabled" <8>
-        bmcAddress: idrac-virtualmedia://// <9>
-        bmcCredentialsName:
-          name: "bmh-secret" <10>
-        bootMACAddress: "AA:BB:CC:DD:EE:11"
-        bootMode: "UEFI" <11>
-        rootDeviceHints: <12>
-          wwn: "0x11111000000asd123"
-        cpuset: "0-1,52-53" <13>
-        nodeNetwork: <14>
-          interfaces:
-            - name: eno1
-              macAddress: "AA:BB:CC:DD:EE:11"
-          config:
-            interfaces:
-              - name: eno1
-                type: ethernet
-                state: up
-                ipv4:
-                  enabled: false
-                ipv6: <15>
-                  enabled: true
-                  address:
-                  - ip: 1111:2222:3333:4444::aaaa:1
-                    prefix-length: 64
-            dns-resolver:
-              config:
-                search:
-                - example.com
-                server:
-                - 1111:2222:3333:4444::2
-            routes:
-              config:
-              - destination: ::/0
-                next-hop-interface: eno1
-                next-hop-address: 1111:2222:3333:4444::1
-                table-id: 254
-----
-<1> Create the `assisted-deployment-pull-secret` CR with the same namespace as the `SiteConfig` CR.
-<2> `clusterImageSetNameRef` defines an image set available on the hub cluster. To see the list of supported versions on your hub cluster, run `oc get clusterimagesets`.
-<3> Configure the SSH public key used to access the cluster.
-<4> Cluster labels must correspond to the `bindingRules` field in the `PolicyGenTemplate` CRs that you define. For example, `policygentemplates/common-ranGen.yaml` applies to all clusters with `common: true` set, and `policygentemplates/group-du-sno-ranGen.yaml` applies to all clusters with `group-du-sno: ""` set.
-<5> Optional. The CR specified under `KlusterletAddonConfig` is used to override the default `KlusterletAddonConfig` that is created for the cluster.
-<6> For single-node deployments, define a single host. For three-node deployments, define three hosts. For standard deployments, define three hosts with `role: master` and two or more hosts defined with `role: worker`.
-<7> Specify custom roles to your nodes in your managed clusters. These are additional roles which are not used by any {product-title} components, only by the user. When you add a custom role, it can be associated with a custom machine config pool that references a specific configuration for that role. Adding your custom labels or roles during installation makes the deployment process more effective and prevents the need for additional reboots after the installation is complete.
-<8> Optional. If the value is set to `metadata`, the partitioning table of the disk is removed, but the disk is not fully wiped. By default, the `automatedCleaningMode` field is disabled. To enable removing the partitioning table, uncomment this line and set the value to `metadata`.
-<9> BMC address that you use to access the host. Applies to all cluster types. {ztp} supports iPXE and virtual media booting by using Redfish or IPMI protocols. To use iPXE booting, you must use {rh-rhacm} 2.8 or later. For more information about BMC addressing, see the _Additional resources_ section.
-<10> Name of the `bmh-secret` CR that you separately create with the host BMC credentials. When creating the `bmh-secret` CR, use the same namespace as the `SiteConfig` CR that provisions the host.
-<11> Configures the boot mode for the host. The default value is `UEFI`. Use `UEFISecureBoot` to enable secure boot on the host.
-<12> Specifies the device for deployment. Identifiers that are stable across reboots are recommended, for example `wwn: ` or `deviceName: /dev/disk/by-path/`. For a detailed list of stable identifiers, see the _About root device hints_ section.
-<13> `cpuset` must match the value set in the cluster `PerformanceProfile` CR `spec.cpu.reserved` field for workload partitioning.
-<14> Specifies the network settings for the node.
-<15> Configures the IPv6 address for the host. For {sno} clusters with static IP addresses, the node-specific API and Ingress IP addresses must be the same.
diff --git a/snippets/ztp-hub-cluster-scale-test-specs.adoc b/snippets/ztp-hub-cluster-scale-test-specs.adoc
deleted file mode 100644
index 725846d1836f..000000000000
--- a/snippets/ztp-hub-cluster-scale-test-specs.adoc
+++ /dev/null
@@ -1,56 +0,0 @@
-:_mod-docs-content-type: SNIPPET
-[IMPORTANT]
-====
-The following guidelines are based on internal lab benchmark testing only and do not represent a complete real-world host specification.
-====
-
-.Representative three-node hub cluster machine specifications
-[cols=2*, width="90%", options="header"]
-|====
-|Requirement
-|Description
-
-|{product-title}
-|version 4.13
-
-|{rh-rhacm}
-|version 2.7
-
-|{cgu-operator-first}
-|version 4.13
-
-|Server hardware
-|3 x Dell PowerEdge R650 rack servers
-
-|NVMe hard disks
-a|* 50 GB disk for `/var/lib/etcd`
-* 2.9 TB disk for `/var/lib/containers`
-
-|SSD hard disks
-a|* 1 SSD split into 15 200GB thin-provisioned logical volumes provisioned as `PV` CRs
-* 1 SSD serving as an extra large `PV` resource
-
-|Number of applied DU profile policies
-|5
-|====
-
-[IMPORTANT]
-====
-The following network specifications are representative of a typical real-world RAN network and were applied to the scale lab environment during testing.
-====
-
-.Simulated lab environment network specifications
-[cols=2*, width="90%", options="header"]
-|====
-|Specification
-|Description
-
-|Round-trip time (RTT) latency
-|50 ms
-
-|Packet loss
-|0.02% packet loss
-
-|Network bandwidth limit
-|20 Mbps
-|====
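The simulated lab network specifications in the last deleted snippet (50 ms RTT, 0.02% packet loss, 20 Mbps bandwidth limit) are stated as test parameters only. As an illustrative sketch, not part of the deleted content, similar constraints could be emulated on a test host with the Linux `netem` queuing discipline; the interface name `eth0` is an assumption:

[source,terminal]
----
$ sudo tc qdisc add dev eth0 root netem delay 50ms loss 0.02% rate 20mbit
----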