diff --git a/logging/cluster-logging-external.adoc b/logging/cluster-logging-external.adoc
index f8ce3e3e7d29..711ee9c1bed2 100644
--- a/logging/cluster-logging-external.adoc
+++ b/logging/cluster-logging-external.adoc
@@ -16,8 +16,6 @@ To send audit logs to the default internal Elasticsearch log store, use the Clus
When you forward logs externally, the Red Hat OpenShift Logging Operator creates or modifies a Fluentd config map to send logs using your desired protocols. You are responsible for configuring the protocol on the external log aggregator.
-Alternatively, you can create a config map to use the xref:../logging/cluster-logging-external.html#cluster-logging-collector-legacy-fluentd_cluster-logging-external[Fluentd *forward* protocol] or the xref:../logging/cluster-logging-external.html#cluster-logging-collector-legacy-syslog_cluster-logging-external[syslog protocol] to send logs to external systems. However, these methods for forwarding logs are deprecated in {product-title} and will be removed in a future release.
-
[IMPORTANT]
====
You cannot use the config map methods and the Cluster Log Forwarder in the same cluster.
@@ -62,9 +60,4 @@ include::modules/cluster-logging-collector-collecting-ovn-logs.adoc[leveloffset=
* xref:../networking/network_policy/logging-network-policy.adoc#nw-networkpolicy-audit-concept_logging-network-policy[Network policy audit logging]
-
-include::modules/cluster-logging-collector-legacy-fluentd.adoc[leveloffset=+1]
-
-include::modules/cluster-logging-collector-legacy-syslog.adoc[leveloffset=+1]
-
include::modules/cluster-logging-troubleshooting-log-forwarding.adoc[leveloffset=+1]
diff --git a/modules/cluster-logging-collector-legacy-fluentd.adoc b/modules/cluster-logging-collector-legacy-fluentd.adoc
deleted file mode 100644
index fce3afbb7aff..000000000000
--- a/modules/cluster-logging-collector-legacy-fluentd.adoc
+++ /dev/null
@@ -1,111 +0,0 @@
-[id="cluster-logging-collector-legacy-fluentd_{context}"]
-= Forwarding logs using the legacy Fluentd method
-
-You can use the Fluentd *forward* protocol to send logs to destinations outside of your {product-title} cluster by creating a configuration file and config map. You are responsible for configuring the external log aggregator to receive log data from {product-title}.
-
-[IMPORTANT]
-====
-This method for forwarding logs is deprecated in {product-title} and will be removed in a future release.
-====
-
-ifdef::openshift-origin[]
-The *forward* protocols are provided with the Fluentd image as of v1.4.0.
-endif::openshift-origin[]
-
-To send logs using the Fluentd *forward* protocol, create a configuration file called `secure-forward.conf` that points to an external log aggregator. Then, use that file to create a config map called `secure-forward` in the `openshift-logging` project, which {product-title} uses when forwarding the logs.
-
-.Prerequisites
-
-* You must have a logging server that is configured to receive the logging data using the specified protocol or format.
-
-.Sample Fluentd configuration file
-
-[source,yaml]
-----
-<store>
-  @type forward
-  <security>
-    self_hostname ${hostname}
-    shared_key "fluent-receiver"
-  </security>
-  transport tls
-  tls_verify_hostname false
-  tls_cert_path '/etc/ocp-forward/ca-bundle.crt'
-  <buffer>
-    @type file
-    path '/var/lib/fluentd/secureforwardlegacy'
-    queued_chunks_limit_size "1024"
-    chunk_limit_size "1m"
-    flush_interval "5s"
-    flush_at_shutdown "false"
-    flush_thread_count "2"
-    retry_max_interval "300"
-    retry_forever true
-    overflow_action "#{ENV['BUFFER_QUEUE_FULL_ACTION'] || 'throw_exception'}"
-  </buffer>
-  <server>
-    host fluent-receiver.example.com
-    port 24224
-  </server>
-</store>
-----
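-
-The external log aggregator must be configured with a matching *forward* input. The following is a minimal sketch of what such a receiver configuration might look like if the aggregator is also Fluentd; the certificate and key paths are placeholders, and the host name and shared key correspond to the preceding sample:
-
-[source,yaml]
-----
-# Hypothetical receiver-side configuration on the external log aggregator
-<source>
-  @type forward
-  port 24224
-  bind 0.0.0.0
-  <transport tls>
-    cert_path /etc/fluentd/certs/receiver.crt         # placeholder path
-    private_key_path /etc/fluentd/certs/receiver.key  # placeholder path
-  </transport>
-  <security>
-    self_hostname fluent-receiver.example.com
-    shared_key "fluent-receiver"
-  </security>
-</source>
-----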
-
-.Procedure
-
-To configure {product-title} to forward logs using the legacy Fluentd method:
-
-. Create a configuration file named `secure-forward.conf` and specify parameters similar to the following within the `<store>` stanza:
-+
-[source,yaml]
-----
-<store>
-  @type forward
-  <security>
-    self_hostname ${hostname}
-    shared_key <1>
-  </security>
-  transport tls <2>
-  tls_verify_hostname <3>
-  tls_cert_path <4>
-  <buffer> <5>
-    @type file
-    path '/var/lib/fluentd/secureforwardlegacy'
-    queued_chunks_limit_size "#{ENV['BUFFER_QUEUE_LIMIT'] || '1024' }"
-    chunk_limit_size "#{ENV['BUFFER_SIZE_LIMIT'] || '1m' }"
-    flush_interval "#{ENV['FORWARD_FLUSH_INTERVAL'] || '5s'}"
-    flush_at_shutdown "#{ENV['FLUSH_AT_SHUTDOWN'] || 'false'}"
-    flush_thread_count "#{ENV['FLUSH_THREAD_COUNT'] || 2}"
-    retry_max_interval "#{ENV['FORWARD_RETRY_WAIT'] || '300'}"
-    retry_forever true
-  </buffer>
-  <server>
-    name <6>
-    host <7>
-    hostlabel <8>
-    port <9>
-  </server>
-  <server> <10>
-    name
-    host
-  </server>
-</store>
-----
-<1> Enter the shared key between nodes.
-<2> Specify `tls` to enable TLS validation.
-<3> Set to `true` to verify the server certificate hostname. Set to `false` to ignore the server certificate hostname.
-<4> Specify the path to the private CA certificate file as `/etc/ocp-forward/ca_cert.pem`.
-<5> Specify the link:https://docs.fluentd.org/configuration/buffer-section[Fluentd buffer parameters] as needed.
-<6> Optionally, enter a name for this server.
-<7> Specify the hostname or IP of the server.
-<8> Specify the host label of the server.
-<9> Specify the port of the server.
-<10> Optionally, add additional servers.
-If you specify two or more servers, *forward* uses these server nodes in a round-robin order.
-+
-To use Mutual TLS (mTLS) authentication, see the link:https://docs.fluentd.org/output/forward#tips-and-tricks[Fluentd documentation] for information about client certificate, key parameters, and other settings.
-
-. Create a config map named `secure-forward` in the `openshift-logging` project from the configuration file:
-+
-[source,terminal]
-----
-$ oc create configmap secure-forward --from-file=secure-forward.conf -n openshift-logging
-----
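-
-Optionally, verify that the config map contains the expected `secure-forward.conf` key, for example:
-
-[source,terminal]
-----
-$ oc get configmap secure-forward -n openshift-logging -o yaml
-----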
diff --git a/modules/cluster-logging-collector-legacy-syslog.adoc b/modules/cluster-logging-collector-legacy-syslog.adoc
deleted file mode 100644
index 0184df470370..000000000000
--- a/modules/cluster-logging-collector-legacy-syslog.adoc
+++ /dev/null
@@ -1,111 +0,0 @@
-[id="cluster-logging-collector-legacy-syslog_{context}"]
-= Forwarding logs using the legacy syslog method
-
-You can use the *syslog* RFC3164 protocol to send logs to destinations outside of your {product-title} cluster by creating a configuration file and config map. You are responsible for configuring the external log aggregator, such as a syslog server, to receive the logs from {product-title}.
-
-[IMPORTANT]
-====
-This method for forwarding logs is deprecated in {product-title} and will be removed in a future release.
-====
-
-There are two versions of the *syslog* protocol:
-
-* *out_syslog*: The non-buffered implementation, which communicates through UDP, does not buffer data and writes out results immediately.
-* *out_syslog_buffered*: The buffered implementation, which communicates through TCP and link:https://docs.fluentd.org/buffer[buffers data into chunks].
-
-To send logs using the *syslog* protocol, create a configuration file called `syslog.conf`, with the information needed to forward the logs. Then, use that file to create a config map called `syslog` in the `openshift-logging` project, which {product-title} uses when forwarding the logs.
-
-.Prerequisites
-
-* You must have a logging server that is configured to receive the logging data using the specified protocol or format.
-
-
-.Sample syslog configuration file
-[source,yaml]
-----
-<store>
-  @type syslog_buffered
-  remote_syslog rsyslogserver.example.com
-  port 514
-  hostname ${hostname}
-  remove_tag_prefix tag
-  facility local0
-  severity info
-  use_record true
-  payload_key message
-  rfc 3164
-</store>
-----
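-
-Before you create the config map, you can optionally confirm that the external syslog server accepts messages on the configured port. One way to do this, assuming the util-linux `logger` utility is available on a host that can reach the server, is to send a test message. The example below uses TCP to match the buffered *out_syslog_buffered* type shown above; use `--udp` instead for the non-buffered *out_syslog* type:
-
-[source,terminal]
-----
-$ logger --server rsyslogserver.example.com --port 514 --tcp "test message"
-----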
-
-You can configure the following `syslog` parameters. For more information, see the syslog link:https://tools.ietf.org/html/rfc3164[RFC3164].
-
-* facility: The link:https://tools.ietf.org/html/rfc3164#section-4.1.1[syslog facility]. The value can be a decimal integer or a case-insensitive keyword:
-** `0` or `kern` for kernel messages
-** `1` or `user` for user-level messages, the default.
-** `2` or `mail` for the mail system
-** `3` or `daemon` for the system daemons
-** `4` or `auth` for the security/authentication messages
-** `5` or `syslog` for messages generated internally by syslogd
-** `6` or `lpr` for the line printer subsystem
-** `7` or `news` for the network news subsystem
-** `8` or `uucp` for the UUCP subsystem
-** `9` or `cron` for the clock daemon
-** `10` or `authpriv` for security authentication messages
-** `11` or `ftp` for the FTP daemon
-** `12` or `ntp` for the NTP subsystem
-** `13` or `security` for the syslog audit logs
-** `14` or `console` for the syslog alert logs
-** `15` or `solaris-cron` for the scheduling daemon
-** `16`–`23` or `local0`–`local7` for locally used facilities
-* payload_key: The record field to use as the payload of the syslog message.
-* rfc: The RFC to be used for sending logs using syslog.
-* severity: The link:https://tools.ietf.org/html/rfc3164#section-4.1.1[syslog severity] to set on outgoing syslog records. The value can be a decimal integer or a case-insensitive keyword:
-** `0` or `Emergency` for messages indicating the system is unusable
-** `1` or `Alert` for messages indicating action must be taken immediately
-** `2` or `Critical` for messages indicating critical conditions
-** `3` or `Error` for messages indicating error conditions
-** `4` or `Warning` for messages indicating warning conditions
-** `5` or `Notice` for messages indicating normal but significant conditions
-** `6` or `Informational` for messages indicating informational messages
-** `7` or `Debug` for messages indicating debug-level messages, the default
-* tag: The record field to use as a tag on the syslog message.
-* remove_tag_prefix: The prefix to remove from the tag.
-
-.Procedure
-
-To configure {product-title} to forward logs using the legacy syslog method:
-
-. Create a configuration file named `syslog.conf` and specify parameters similar to the following within the `<store>` stanza:
-+
-[source,yaml]
-----
-<store>
-  @type <1>
-  remote_syslog <2>
-  port 514 <3>
-  hostname ${hostname}
-  remove_tag_prefix <4>
-  facility
-  severity
-  use_record
-  payload_key message
-  rfc 3164 <5>
-</store>
-----
-<1> Specify the protocol to use, either `syslog` or `syslog_buffered`.
-<2> Specify the FQDN or IP address of the syslog server.
-<3> Specify the port of the syslog server.
-<4> Optional: Specify the appropriate syslog parameters, for example:
-** Parameter to remove the specified `tag` field from the syslog prefix.
-** Parameter to set the specified field as the syslog key.
-** Parameter to specify the syslog log facility or source.
-** Parameter to specify the syslog log severity.
-** Parameter to use the severity and facility from the record if available. If `true`, the `container_name`, `namespace_name`, and `pod_name` are included in the output content.
-** Parameter to specify the key to set the payload of the syslog message. Defaults to `message`.
-<5> With the legacy syslog method, you must specify `3164` for the `rfc` value.
-
-. Create a config map named `syslog` in the `openshift-logging` project from the configuration file:
-+
-[source,terminal]
-----
-$ oc create configmap syslog --from-file=syslog.conf -n openshift-logging
-----
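-
-Optionally, print the stored configuration back out to confirm that the config map contains the expected `syslog.conf` file, for example:
-
-[source,terminal]
-----
-$ oc extract configmap/syslog -n openshift-logging --to=-
-----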