diff --git a/website/content/partials/alerts/rc-alert.mdx b/website/content/partials/alerts/rc-alert.mdx
deleted file mode 100644
index cac2771c43..0000000000
--- a/website/content/partials/alerts/rc-alert.mdx
+++ /dev/null
@@ -1,7 +0,0 @@
-
-
-The information you are reading is subject to change and should not be
-referenced as official guidance until the associated release is generally
-available.
-
-
\ No newline at end of file
diff --git a/website/content/partials/alpine-314.mdx b/website/content/partials/alpine-314.mdx
deleted file mode 100644
index e2c1f87ad9..0000000000
--- a/website/content/partials/alpine-314.mdx
+++ /dev/null
@@ -1,9 +0,0 @@
-## Alpine 3.14
-
-Docker images for Vault 1.6.6+, 1.7.4+, and 1.8.2+ are built with Alpine 3.14,
-due to a security issue in Alpine 3.13 (CVE-2021-36159).
-Some users on older versions of Docker may run into issues with these images.
-See the following for more details:
-
-- https://wiki.alpinelinux.org/wiki/Release_Notes_for_Alpine_3.14.0#faccessat2
-- https://about.gitlab.com/blog/2021/08/26/its-time-to-upgrade-docker-engine/
diff --git a/website/content/partials/api/restricted-endpoints.mdx b/website/content/partials/api/restricted-endpoints.mdx
deleted file mode 100644
index 3c8508b60e..0000000000
--- a/website/content/partials/api/restricted-endpoints.mdx
+++ /dev/null
@@ -1,48 +0,0 @@
-
-
-
- The CLI commands associated with restricted API paths are also restricted.
-
-
-API path | Root | Admin
-------------------------------------------- | ---- | -----
-`sys/audit` | YES | NO
-`sys/audit-hash` | YES | YES
-`sys/config/auditing/*` | YES | NO
-`sys/config/cors` | YES | NO
-`sys/config/group-policy-application` | YES | NO
-`sys/config/reload` | YES | NO
-`sys/config/state` | YES | NO
-`sys/config/ui` | YES | NO
-`sys/decode-token` | YES | NO
-`sys/experiments` | YES | NO
-`sys/generate-recovery-token` | YES | NO
-`sys/generate-root` | YES | NO
-`sys/health` | YES | NO
-`sys/host-info` | YES | NO
-`sys/in-flight-req` | YES | NO
-`sys/init` | YES | NO
-`sys/internal/counters/activity` | YES | NO
-`sys/internal/counters/activity/export` | YES | NO
-`sys/internal/counters/activity/monthly` | YES | NO
-`sys/internal/counters/config` | YES | NO
-`sys/internal/inspect/router/*` | YES | NO
-`sys/key-status` | YES | NO
-`sys/loggers` | YES | NO
-`sys/metrics` | YES | NO
-`sys/mfa/method/*` | YES | NO
-`sys/monitor` | YES | YES
-`sys/pprof/*` | YES | NO
-`sys/quotas/config` | YES | NO
-`sys/quotas/lease-count` | YES | NO
-`sys/quotas/rate-limit` | YES | NO
-`sys/raw` | YES | NO
-`sys/rekey/*` | YES | NO
-`sys/rekey-recovery-key` | YES | NO
-`sys/rotate/config` | YES | NO
-`sys/rotate` | YES | NO
-`sys/seal` | YES | NO
-`sys/sealwrap/rewrap` | YES | NO
-`sys/step-down` | YES | NO
-`sys/storage` | YES | NO
-`sys/unseal` | YES | NO
diff --git a/website/content/partials/application-of-sentinel-rgps-via-identity-groups.mdx b/website/content/partials/application-of-sentinel-rgps-via-identity-groups.mdx
deleted file mode 100644
index 881c03d52f..0000000000
--- a/website/content/partials/application-of-sentinel-rgps-via-identity-groups.mdx
+++ /dev/null
@@ -1,5 +0,0 @@
-As of versions `1.15.0`, `1.14.4`, and `1.13.8`, [the Sentinel RGPs derived from membership in identity groups apply
-only to entities in the same and child namespaces, relative to the identity group](/vault/docs/enterprise/sentinel#rgps-and-namespaces).
-
-Also, the [`group_policy_application_mode`](/vault/api-docs/system/config-group-policy-application) setting only applies
-to ACL policies. Vault Sentinel Role Governing Policies (RGPs) are not affected by group policy application mode.
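-
-For reference, a minimal sketch of adjusting the setting with the CLI; the mode
-value shown is illustrative and should be chosen to match your policy model:
-
-```shell-session
-# Apply group-derived ACL policies only within the namespace hierarchy
-$ vault write sys/config/group-policy-application \
-    group_policy_application_mode="within_namespace_hierarchy"
-```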
diff --git a/website/content/partials/builds-without-ui.mdx b/website/content/partials/builds-without-ui.mdx
deleted file mode 100644
index 3b5c0434d2..0000000000
--- a/website/content/partials/builds-without-ui.mdx
+++ /dev/null
@@ -1,4 +0,0 @@
-### Core binaries lacking Vault UI
-
-Core binaries of Vault 1.5.1, 1.4.4, 1.3.8, and 1.2.5 were built without the
-Vault UI. Enterprise binaries are not affected.
diff --git a/website/content/partials/enterprise-licenses.mdx b/website/content/partials/enterprise-licenses.mdx
deleted file mode 100644
index b4ab6a9b18..0000000000
--- a/website/content/partials/enterprise-licenses.mdx
+++ /dev/null
@@ -1,6 +0,0 @@
-### Enterprise licenses
-
-In versions 1.2.6, 1.3.9, 1.4.5, and 1.5.2, the enterprise licenses were not incorporated correctly
-into the build, and we have issued patch releases (x.y.z.1) containing the proper license for
-enterprise customers. Because the previous builds did not work and caused confusion, we have removed
-the binaries.
diff --git a/website/content/partials/entity-alias-mapping.mdx b/website/content/partials/entity-alias-mapping.mdx
deleted file mode 100644
index 650a8fe234..0000000000
--- a/website/content/partials/entity-alias-mapping.mdx
+++ /dev/null
@@ -1,7 +0,0 @@
-## Entity alias mapping
-
-Previously, an entity in Vault could be mapped to multiple entity aliases on the same authentication backend. This
-led to a potential security vulnerability (CVE-2021-43998), as ACL policies templated with alias information would match the first
-alias created. Thus, tokens created from any alias of the entity would have access to the paths containing alias
-metadata of the first alias because templated policies were incorrectly applied. As a result, the mapping behavior was updated
-such that an entity can only have one alias per authentication backend. This change exists in Vault 1.9.0+, 1.8.5+, and 1.7.6+.
\ No newline at end of file
diff --git a/website/content/partials/faq/client-count/tools.mdx b/website/content/partials/faq/client-count/tools.mdx
deleted file mode 100644
index 789e41e228..0000000000
--- a/website/content/partials/faq/client-count/tools.mdx
+++ /dev/null
@@ -1,21 +0,0 @@
-### What is the Vault auditor? ((##vault-auditor))
-
-@include 'alerts/auditor-deprecated.mdx'
-
-The [Vault auditor tool](/vault/tutorials/monitoring/usage-metrics#vault-auditor-tool)
-lets customers running Vault v1.3 – v1.5 compute and display client
-count data using the client compute logic available in Vault 1.7. Auditor use
-with Vault versions older than 1.3 is untested.
-
-The auditor may report that your audit logs are unreadable if the logs are too
-large or you are running a version older than Vault 1.6.
-
-
-### Are there any known client count issues in the auditor tool? ((#auditor-ki))
-
-**Yes**.
-
-The Vault auditor only includes the computation logic improvements from Vault
-v1.6 – v1.7. Running the auditor on Vault v1.8+ will result in
-discrepancies when comparing the result to data available through the
-Vault UI or API.
diff --git a/website/content/partials/known-issues/aws-static-roles.mdx b/website/content/partials/known-issues/aws-static-roles.mdx
deleted file mode 100644
index 75122d43c5..0000000000
--- a/website/content/partials/known-issues/aws-static-roles.mdx
+++ /dev/null
@@ -1,16 +0,0 @@
-### AWS static roles ignore changes to rotation period ((#aws-static-role-rotation))
-
-#### Affected versions
-
-- 1.14.0+
-
-#### Issue
-
-AWS static roles currently ignore configuration changes made to the key rotation
-period. As a result, Vault continues to use whatever rotation period was set
-when the roles were originally created.
-
-#### Workaround
-
-Delete and recreate any static role objects that should use the new rotation
-period.
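-
-For illustration, a minimal sketch of the workaround, assuming the AWS secrets
-engine is mounted at `aws/` and the static role is named `my-static-role` (both
-names are hypothetical):
-
-```shell-session
-# Capture the current settings so the role can be recreated
-$ vault read aws/static-roles/my-static-role
-
-# Delete the stale role object
-$ vault delete aws/static-roles/my-static-role
-
-# Recreate the role with the desired rotation period
-$ vault write aws/static-roles/my-static-role \
-    username="my-iam-user" \
-    rotation_period="24h"
-```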
diff --git a/website/content/partials/known-issues/ephemeral-loggers-memory-leak.mdx b/website/content/partials/known-issues/ephemeral-loggers-memory-leak.mdx
deleted file mode 100644
index c45281560b..0000000000
--- a/website/content/partials/known-issues/ephemeral-loggers-memory-leak.mdx
+++ /dev/null
@@ -1,20 +0,0 @@
-### Vault is storing references to ephemeral sub-loggers leading to unbounded memory consumption
-
-#### Affected versions
-
-This memory consumption bug affects Vault Community and Enterprise versions:
-
-- 1.13.7 - 1.13.9
-- 1.14.3 - 1.14.5
-- 1.15.0 - 1.15.1
-
-The change that introduced this bug has been reverted as of 1.13.10, 1.14.6, and 1.15.2.
-
-#### Issue
-
-Vault is unexpectedly storing references to ephemeral sub-loggers, which prevents them from being cleaned up, leading to
-unbounded memory consumption for loggers. This came about from a change to address a previously known issue around
-[sub-logger levels not being adjusted on reload](#sublogger-levels-unchanged-on-reload).
-This impacts many areas of Vault, but primarily logins in Enterprise.
-
-#### Workaround
-There is no workaround.
diff --git a/website/content/partials/known-issues/expiration-metrics-fatal-error.mdx b/website/content/partials/known-issues/expiration-metrics-fatal-error.mdx
deleted file mode 100644
index 2a539756f2..0000000000
--- a/website/content/partials/known-issues/expiration-metrics-fatal-error.mdx
+++ /dev/null
@@ -1,22 +0,0 @@
-### Fatal error during expiration metrics gathering causing Vault crash
-
-#### Affected versions
-
-This issue affects Vault Community and Enterprise versions:
-- 1.13.9
-- 1.14.5
-- 1.15.1
-
-A fix has been issued in Vault 1.13.10, 1.14.6, and 1.15.2.
-
-#### Issue
-
-A recent change to Vault to improve state change speed (e.g. becoming active or standby) introduced a concurrency issue
-which can lead to a concurrent iteration and write on a map, causing a fatal error and crashing Vault. This error occurs
-when gathering lease and token metrics from the expiration manager. These metrics originate from the active node in an HA
-cluster, so if the original active node encounters this bug, a standby node will take over active duties and the cluster
-will remain functional. The new active node will be vulnerable to the same bug, but may not encounter it immediately.
-
-There is no workaround.
-
-
diff --git a/website/content/partials/known-issues/internal-error-namespace-missing-policy.mdx b/website/content/partials/known-issues/internal-error-namespace-missing-policy.mdx
deleted file mode 100644
index 13d2eb4f5e..0000000000
--- a/website/content/partials/known-issues/internal-error-namespace-missing-policy.mdx
+++ /dev/null
@@ -1,143 +0,0 @@
-### Internal error when Vault policy in namespace does not exist
-
-If a user is a member of a group that gets a policy from a
-namespace other than the one they’re trying to log into,
-and that policy doesn’t exist, Vault returns an internal error.
-This impacts all auth methods.
-
-#### Affected versions
-- 1.13.8 and 1.13.9
-- 1.14.4 and 1.14.5
-- 1.15.0 and 1.15.1
-
-A fix will be released in Vault 1.15.2, 1.14.6, and 1.13.10.
-
-#### Workaround
-
-During authentication, Vault derives inherited policies based on the groups an
-entity belongs to. Vault returns an internal error when attaching the derived
-policy to a token when:
-
-1. the token belongs to a different namespace than the one handling
- authentication, and
-2. the derived policy does not exist under the namespace.
-
-
-You can resolve the error by adding the policy to the relevant namespace or
-deleting the group policy mapping that uses the derived policy.
-
-As an example, consider the following userpass auth method failure. The error
-occurs because Vault expects a group policy that does not exist under the
-namespace.
-
-
-
-```shell-session
-# Failed login
-$ vault login -method=userpass username=user1 password=123
-Error authenticating: Error making API request.
-
-URL: PUT http://127.0.0.1:8200/v1/auth/userpass/login/user1
-Code: 500. Errors:
-
-* internal error
-```
-
-
-
-To confirm the problem is a missing policy, start by identifying the relevant
-entity and group IDs:
-
-
-
-```shell-session
-$ vault read -format=json identity/entity/name/user1 | \
- jq '{"entity_id": .data.id, "group_ids": .data.group_ids} '
-{
- "entity_id": "420c82de-57c3-df2e-2ef6-0690073b1636",
- "group_ids": [
- "6cb152b7-955d-272b-4dcf-a2ed668ca1ea"
- ]
-}
-```
-
-
-
-Use the group ID to fetch the relevant policies for the group under the `ns1`
-namespace:
-
-
-
-```shell-session
-$ vault read -format=json -namespace=ns1 \
- identity/group/id/6cb152b7-955d-272b-4dcf-a2ed668ca1ea | \
- jq '.data.policies'
-[
- "group_policy"
-]
-```
-
-
-
-Now that we know Vault is looking for a policy called `group_policy`, we can
-check whether that policy exists under the `ns1` namespace:
-
-
-
-```shell-session
-$ vault policy list -namespace=ns1
-default
-```
-
-
-
-The only policy in the `ns1` namespace is `default`, which confirms that the
-missing policy (`group_policy`) is causing the error.
-
-
-To fix the problem, we can either remove the missing policy from the
-`6cb152b7-955d-272b-4dcf-a2ed668ca1ea` group or create the missing policy under
-the `ns1` namespace.
-
-
-
-
-
-To remove `group_policy` from group ID `6cb152b7-955d-272b-4dcf-a2ed668ca1ea`,
-use the `vault write` command to set the applicable policies to just include
-`default`:
-
-```shell-session
-$ vault write \
- -namespace=ns1 \
- identity/group/id/6cb152b7-955d-272b-4dcf-a2ed668ca1ea \
- name="test" \
- policies="default"
-```
-
-
-
-
-
-To create the missing policy, use `vault policy write` and define the
-appropriate capabilities:
-
-```shell-session
-$ vault policy write -namespace=ns1 group_policy - << EOF
- path "secret/data/*" {
- capabilities = ["create", "update"]
- }
-EOF
-```
-
-
-
-
-Verify the fix by re-running the login command:
-
-
-
-```shell-session
-$ vault login -method=userpass username=user1 password=123
-```
-
-
\ No newline at end of file
diff --git a/website/content/partials/known-issues/sublogger-levels-unchanged-on-reload.mdx b/website/content/partials/known-issues/sublogger-levels-unchanged-on-reload.mdx
deleted file mode 100644
index 33c72e2b7e..0000000000
--- a/website/content/partials/known-issues/sublogger-levels-unchanged-on-reload.mdx
+++ /dev/null
@@ -1,32 +0,0 @@
-### Sublogger levels not adjusted on reload ((#sublogger-levels-unchanged-on-reload))
-
-#### Affected versions
-
-This issue affects all Vault Community and Vault Enterprise versions.
-
-#### Issue
-
-Vault does not honor a modified `log_level` configuration for certain subsystem
-loggers on SIGHUP.
-
-The issue is known to specifically affect `resolver.watcher` and
-`replication.index.*` subloggers.
-
-After modifying the `log_level` and issuing a reload (SIGHUP), some loggers are
-updated to reflect the new configuration, while some subsystem logger levels
-remain unchanged.
-
-For example, after starting a server with `log_level: "trace"` and modifying it
-to `log_level: "info"`, the following lines appear after reload:
-
-```
-[TRACE] resolver.watcher: dr mode doesn't have failover support, returning
-...
-[DEBUG] replication.index.perf: saved checkpoint: num_dirty=5
-[DEBUG] replication.index.local: saved checkpoint: num_dirty=0
-[DEBUG] replication.index.periodic: starting WAL GC: from=2531280 to=2531280 last=2531536
-```
-
-#### Workaround
-
-The workaround is to restart the Vault server.
diff --git a/website/content/partials/known-issues/transit-managed-keys-panics.mdx b/website/content/partials/known-issues/transit-managed-keys-panics.mdx
deleted file mode 100644
index 7cea98da8b..0000000000
--- a/website/content/partials/known-issues/transit-managed-keys-panics.mdx
+++ /dev/null
@@ -1,23 +0,0 @@
-### Transit Encryption with Cloud KMS managed keys causes a panic
-
-#### Affected versions
-
-- 1.13.1+ up to 1.13.8 inclusive
-- 1.14.0+ up to 1.14.4 inclusive
-- 1.15.0
-
-#### Issue
-
-Vault panics when it receives a Transit encryption API call that is backed by a Cloud KMS managed key (Azure, GCP, AWS).
-
-
-The issue does not affect encryption and decryption with the following key types:
-
-- PKCS#11 managed keys
-- Transit native keys
-
-
-
-#### Workaround
-
-None at this time
diff --git a/website/content/partials/known-issues/transit-managed-keys-sign-fails.mdx b/website/content/partials/known-issues/transit-managed-keys-sign-fails.mdx
deleted file mode 100644
index 69e0bf2711..0000000000
--- a/website/content/partials/known-issues/transit-managed-keys-sign-fails.mdx
+++ /dev/null
@@ -1,23 +0,0 @@
-### Transit Sign API calls with managed keys fail
-
-#### Affected versions
-
-- 1.14.0+ up to 1.14.4 inclusive
-- 1.15.0
-
-#### Issue
-
-Vault responds to Transit sign API calls with the following error when the request uses a managed key:
-
-`requested version for signing does not contain a private part`
-
-
-The issue does not affect signing with the following key types:
-
-- Transit native keys
-
-
-
-#### Workaround
-
-None at this time
diff --git a/website/content/partials/known-issues/ui-collapsed-navbar.mdx b/website/content/partials/known-issues/ui-collapsed-navbar.mdx
deleted file mode 100644
index 08d28f137d..0000000000
--- a/website/content/partials/known-issues/ui-collapsed-navbar.mdx
+++ /dev/null
@@ -1,16 +0,0 @@
-### Collapsed navbar does not allow you to click inside the console or namespace picker
-
-#### Affected versions
-
-The UI issue affects Vault versions 1.14.0+ and 1.15.0+.
-A fix is expected for Vault 1.16.0.
-
-#### Issue
-
-The Vault UI currently uses a version of HDS that does not allow users to click
-within collapsed elements. In particular, the dev console and namespace picker
-become inaccessible when viewing the components in smaller viewports.
-
-#### Workaround
-
-Expand the width of the screen until you deactivate the collapsed view. Once the full navbar is displayed, click the desired components.
\ No newline at end of file
diff --git a/website/content/partials/known-issues/ui-pki-control-groups.mdx b/website/content/partials/known-issues/ui-pki-control-groups.mdx
deleted file mode 100644
index fbdbf6358c..0000000000
--- a/website/content/partials/known-issues/ui-pki-control-groups.mdx
+++ /dev/null
@@ -1,17 +0,0 @@
-### Users limited by control groups can only access issuer detail from PKI overview page ((#ui-pki-control-groups))
-
-#### Affected versions
-
-- Vault 1.14.x
-
-#### Issue
-
-Vault UI users who require control group approval to read issuer details are
-directed to the Control Group Access page when they try to view issuer details
-from links on the Issuer list page.
-
-#### Workaround
-
-Vault UI users constrained by control groups should select issuers from the
-**PKI overview** page to view detailed information instead of the
-**Issuers list** page.
\ No newline at end of file
diff --git a/website/content/partials/known-issues/ui-safari-login-screen.mdx b/website/content/partials/known-issues/ui-safari-login-screen.mdx
deleted file mode 100644
index 37373b6204..0000000000
--- a/website/content/partials/known-issues/ui-safari-login-screen.mdx
+++ /dev/null
@@ -1,13 +0,0 @@
-### Safari login screen appears broken on the UI
-
-#### Affected versions
-
-- 1.14.0
-
-#### Issue
-
-The login screen on Safari appears to be broken, presenting as a blank white screen.
-
-#### Workaround
-
-Scroll down to find the login section.
\ No newline at end of file
diff --git a/website/content/partials/known-issues/update-primary-addrs-panic.mdx b/website/content/partials/known-issues/update-primary-addrs-panic.mdx
deleted file mode 100644
index d7e63828e8..0000000000
--- a/website/content/partials/known-issues/update-primary-addrs-panic.mdx
+++ /dev/null
@@ -1,16 +0,0 @@
-### Using 'update_primary_addrs' on a demoted cluster causes Vault to panic ((#update-primary-addrs-panic))
-
-#### Affected versions
-
-- 1.13.3, 1.13.4, and 1.14.0
-
-#### Issue
-
-If the [`update_primary_addrs`](/vault/api-docs/system/replication/replication-performance#update_primary_addrs)
-parameter is used on a recently demoted cluster, Vault will panic due to no longer
-having information about the primary cluster.
-
-#### Workaround
-
-Instead of using `update_primary_addrs` on the recently demoted cluster, provide an
-[activation token](/vault/api-docs/system/replication/replication-performance#token-1).
\ No newline at end of file
diff --git a/website/content/partials/known-issues/update-primary-data-loss.mdx b/website/content/partials/known-issues/update-primary-data-loss.mdx
deleted file mode 100644
index 955b798ccd..0000000000
--- a/website/content/partials/known-issues/update-primary-data-loss.mdx
+++ /dev/null
@@ -1,57 +0,0 @@
-### API calls to update-primary may lead to data loss ((#update-primary-data-loss))
-
-#### Affected versions
-
-All versions of Vault before 1.14.1, 1.13.5, 1.12.9, and 1.11.12.
-
-#### Issue
-
-The [update-primary](/vault/api-docs/system/replication/replication-performance#update-performance-secondary-s-primary)
-endpoint temporarily removes all mount entries except for those that are managed
-automatically by Vault (e.g. identity mounts). In certain situations, a race
-condition between mount table truncation and replication repairs may lead to data
-loss when updating secondary replication clusters.
-
-Situations where the race condition may occur:
-
-- **When the cluster has local data (e.g., PKI certificates, app role secret IDs)
- in shared mounts**.
- Calling `update-primary` on a performance secondary with local data in shared
- mounts may corrupt the merkle tree on the secondary. The secondary still
- contains all the previously stored data, but the corruption means that
- downstream secondaries will not receive the shared data and will interpret the
- update as a request to delete the information. If the downstream secondary is
- promoted before the merkle tree is repaired, the newly promoted secondary will
- not contain the expected local data. The missing data may be unrecoverable if
- the original secondary is lost or destroyed.
-- **When the cluster has `Allow` path filters defined.**
-  As of Vault 1.0.3.1, startup, unseal, and calling `update-primary` all trigger a
-  background job that looks at the current mount data and removes invalid entries
-  based on path filters. When a secondary has `Allow` path filters, the cleanup
-  code may misfire in the window of time after update-primary truncates the mount
-  tables but before the mount tables are rewritten by replication. The cleanup
- code deletes data associated with the missing mount entries but does not modify
- the merkle tree. Because the merkle tree remains unchanged, replication will not
- know that the data is missing and needs to be repaired.
-
-#### Workaround 1: PR secondary with local data in shared mounts
-
-Watch for `cleaning key in merkle tree` in the TRACE log immediately after an
-update-primary call on a PR secondary; this message indicates that the merkle
-tree may be corrupt. Repair the merkle tree by issuing a
-[replication reindex request](/vault/api-docs/system/replication#reindex-replication)
-to the PR secondary.
-
-If TRACE logs are no longer available, we recommend pre-emptively reindexing the
-PR secondary as a precaution.
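-
-A minimal sketch of the reindex request, assuming the CLI is authenticated
-against the affected PR secondary:
-
-```shell-session
-# Trigger a replication reindex on the performance secondary
-$ vault write -f sys/replication/reindex
-```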
-
-#### Workaround 2: PR secondary with "Allow" path filters
-
-Watch for `deleted mistakenly stored mount entry from backend` in the INFO log.
-Reindex the performance secondary to update the merkle tree with the missing
-data and allow replication to disseminate the changes. **You will not be able to
-recover local data on shared mounts (e.g., PKI certificates)**.
-
-If INFO logs are no longer available, query the shared mount in question to
-confirm whether your role and configuration data are present on the primary but
-missing from the secondary.
diff --git a/website/content/partials/ldap-upndomain-issue.mdx b/website/content/partials/ldap-upndomain-issue.mdx
deleted file mode 100644
index df32e08410..0000000000
--- a/website/content/partials/ldap-upndomain-issue.mdx
+++ /dev/null
@@ -1,6 +0,0 @@
-## LDAP auth engine and upndomain
-
-Users of the LDAP auth engine with the `upndomain` configuration setting populated
-should hold off on upgrading to 1.4.x for now. We are investigating a regression
-introduced by [#8333](https://github.com/hashicorp/vault/pull/8333). There is
-no GitHub issue for this bug yet.
diff --git a/website/content/partials/lease-count-quota-upgrade.mdx b/website/content/partials/lease-count-quota-upgrade.mdx
deleted file mode 100644
index 81a529efec..0000000000
--- a/website/content/partials/lease-count-quota-upgrade.mdx
+++ /dev/null
@@ -1,5 +0,0 @@
-### Lease count quota invalidations on DR secondaries fixed
-
-Lease count quota invalidation caused DR secondaries to panic and experience
-a hard shutdown. This issue exists in versions prior to Vault 1.6.6 and 1.7.4,
-and is fixed in Vault 1.6.6, 1.7.4, and 1.8.0.
diff --git a/website/content/partials/ocsp-redirect.mdx b/website/content/partials/ocsp-redirect.mdx
deleted file mode 100644
index a41c3562ed..0000000000
--- a/website/content/partials/ocsp-redirect.mdx
+++ /dev/null
@@ -1,11 +0,0 @@
-### PKI OCSP GET requests can return HTTP redirect responses
-
-If a base64-encoded OCSP request contains consecutive '/' characters, the GET request
-will return a 301 permanent redirect response. If the redirection is followed, the
-request will not decode because it is no longer a properly base64-encoded request.
-
-As a workaround, use OCSP POST requests, which are unaffected.
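-
-For example, a minimal sketch of an OCSP POST request sent with curl, assuming
-a PKI mount at `pki/` and a DER-encoded OCSP request saved as `request.der`
-(both names are illustrative):
-
-```shell-session
-$ curl --request POST \
-    --header "Content-Type: application/ocsp-request" \
-    --data-binary @request.der \
-    https://vault.example.com:8200/v1/pki/ocsp
-```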
-
-#### Impacted versions
-
-Affects all current versions of 1.12.x and 1.13.x
diff --git a/website/content/partials/okta-group-pagination.mdx b/website/content/partials/okta-group-pagination.mdx
deleted file mode 100644
index 4a74af24d2..0000000000
--- a/website/content/partials/okta-group-pagination.mdx
+++ /dev/null
@@ -1,8 +0,0 @@
-## Okta auth with > 200 groups
-
-In 1.4.0 Vault started using the official Okta Go client library. Unlike
-the previous Okta library it used, the official library doesn't automatically
-handle pagination when there are more than 200 groups listed. If a user
-associated with more than 200 Okta groups logs in, only 200 of them will be
-seen by Vault. The fix is [#9580](https://github.com/hashicorp/vault/pull/9580)
-and will eventually appear in 1.4.x and 1.5.x point releases.
diff --git a/website/content/partials/perf-standby-token-create-forwarding-failure.mdx b/website/content/partials/perf-standby-token-create-forwarding-failure.mdx
deleted file mode 100644
index b36f861f33..0000000000
--- a/website/content/partials/perf-standby-token-create-forwarding-failure.mdx
+++ /dev/null
@@ -1,19 +0,0 @@
-### Token creation with a new entity alias could silently fail
-
-A regression caused token creation requests under specific circumstances to be
-forwarded from perf standbys (Enterprise only) to the active node incorrectly.
-They would appear to succeed; however, no lease was created. The token would then
-be revoked on first use, causing a 403 error.
-
-This only happened when all of the following conditions were met:
- - the token is being created against a role
- - the request specifies an entity alias which has never been used before with
- the same role (for example for a brand new role or a unique alias)
- - the request happens to be made to a perf standby rather than the active node
-
-Retrying token creation after the affected token is rejected would work since
-the entity alias has already been created.
-
-#### Affected versions
-
-Affects Vault 1.13.0 to 1.13.3. Fixed in 1.13.4.
diff --git a/website/content/partials/pgx-params.mdx b/website/content/partials/pgx-params.mdx
deleted file mode 100644
index 544820a52d..0000000000
--- a/website/content/partials/pgx-params.mdx
+++ /dev/null
@@ -1,5 +0,0 @@
-### Postgres library change
-
-Vault 1.11+ uses pgx instead of lib/pq for Postgres connections. If you are
-using parameters like `fallback_application_name` that pgx does not support, you
-may need to update your `connection_url` before upgrading to Vault 1.11+.
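-
-For example, a hypothetical database secrets engine configuration whose
-`connection_url` carries a lib/pq-only parameter, followed by the same
-configuration with that parameter removed (the `database/` mount and
-`my-postgres` connection name are illustrative):
-
-```shell-session
-# Hypothetical pre-1.11 configuration with a lib/pq-only parameter
-$ vault write database/config/my-postgres \
-    plugin_name=postgresql-database-plugin \
-    allowed_roles="*" \
-    connection_url="postgresql://{{username}}:{{password}}@db.example.com:5432/appdb?fallback_application_name=vault" \
-    username="vault" \
-    password="example-password"
-
-# Drop the unsupported parameter before upgrading to Vault 1.11+
-$ vault write database/config/my-postgres \
-    plugin_name=postgresql-database-plugin \
-    allowed_roles="*" \
-    connection_url="postgresql://{{username}}:{{password}}@db.example.com:5432/appdb" \
-    username="vault" \
-    password="example-password"
-```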
diff --git a/website/content/partials/pki-double-migration-bug.mdx b/website/content/partials/pki-double-migration-bug.mdx
deleted file mode 100644
index e1ff94c637..0000000000
--- a/website/content/partials/pki-double-migration-bug.mdx
+++ /dev/null
@@ -1,30 +0,0 @@
-### PKI storage migration revives deleted issuers
-
-Vault 1.11 introduced Storage v1, a new storage layout that supported
-multiple issuers within a single mount. Bug fixes in Vault 1.11.6, 1.12.2,
-and 1.13.0 corrected a write-ordering issue that led to invalid CA chains.
-Specifically, incorrectly ordered writes could fail due to load, resulting
-in the mount being re-migrated next time it was loaded or silently
-truncating CA chains. This collection of bug fixes introduced Storage v2.
-
-#### Affected versions
-
-Vault may incorrectly re-migrate legacy issuers created before Vault 1.11 that
-were migrated to Storage v1 and deleted before upgrading to a Vault version with
-Storage v2.
-
-The migration fails when Vault finds managed keys associated with the legacy
-issuers that were removed from the managed key repository prior to the upgrade.
-
-The migration error appears in Vault logs as:
-
-> Error during migration of PKI mount:
-> failed to lookup public key from managed key:
-> no managed key found with uuid
-
-
-Issuers created in Vault 1.11+ and direct upgrades to a Storage v2 layout are
-not affected.
-
-
-The Storage v1 upgrade bug was fixed in Vault 1.14.1, 1.13.5, and 1.12.9.
diff --git a/website/content/partials/pki-forwarding-bug.mdx b/website/content/partials/pki-forwarding-bug.mdx
deleted file mode 100644
index c48a6d8e92..0000000000
--- a/website/content/partials/pki-forwarding-bug.mdx
+++ /dev/null
@@ -1,10 +0,0 @@
-## PKI certificate generation forwarding regression
-
-A bug introduced in Vault 1.8 causes certificate generation requests to the PKI secrets engine made on a performance
-secondary node to be forwarded to the cluster's primary node. The resulting certificates are stored on the primary node,
-and thus visible to list and read certificate requests only on the primary node rather than the secondary node as
-intended. Furthermore, if a certificate is subsequently revoked on a performance secondary node, the secondary's
-certificate revocation list is updated, rather than the primary's, where the certificate is stored. This bug is fixed
-in Vault 1.8.8 and 1.9.3.
-Certificates issued after the fix are correctly stored locally to the performance secondary.
-
diff --git a/website/content/partials/primary-cluster-addr-change.mdx b/website/content/partials/primary-cluster-addr-change.mdx
deleted file mode 100644
index ef9c34e80b..0000000000
--- a/website/content/partials/primary-cluster-addr-change.mdx
+++ /dev/null
@@ -1,16 +0,0 @@
-### Primary cluster address change
-
-In Vault 1.4.0-1.4.3, a secondary cluster with a single `primary_cluster_addr`
-configured will obtain the address of the active node in the primary cluster
-via replication heartbeats from the primary cluster.
-
-If the `api_addr` and `cluster_addr` in the heartbeats from the primary
-cluster are not reachable from the secondary cluster, replication will not
-work. This situation can arise if, for example, `primary_cluster_addr`
-corresponds to a load balancer accessible from the secondary cluster, but the
-`api_addr` and `cluster_addr` on the primary cluster are only accessible
-from the primary cluster.
-
-In Vault 1.4.4, we will use the `primary_cluster_addr` if it has been set,
-instead of relying on the heartbeat information, but it's possible to
-encounter this issue in Vault 1.4.0-1.4.3.
diff --git a/website/content/partials/raft-panic-old-tls-key.mdx b/website/content/partials/raft-panic-old-tls-key.mdx
deleted file mode 100644
index 43f8160978..0000000000
--- a/website/content/partials/raft-panic-old-tls-key.mdx
+++ /dev/null
@@ -1,17 +0,0 @@
-### Integrated storage panic related to old TLS key
-
-Raft in Vault uses its own set of TLS certificates, independent of those that the user
-controls to protect the API port and those used for replication and clustering. These
-certs get rotated daily, but to ensure that nodes which were down or behind on Raft log
-replication don't lose the ability to speak with other nodes, the newly generated daily
-TLS cert only starts being used once we see that all nodes have received it.
-
-A recent security-audit-related change results in this rotation code [getting a
-panic](https://github.com/hashicorp/vault/issues/15147) when the current cert is
-more than 24h old. This can happen if the cluster as a whole is down for a day
-or more. It can also happen if a single node is unreachable for 24h, or is
-sufficiently backlogged in applying raft logs that it's more than a day behind.
-
-Impacted versions: 1.10.1, 1.9.5, 1.8.10. Versions prior to these are unaffected.
-
-New releases addressing this panic are coming soon.
diff --git a/website/content/partials/raft-retry-join-failure.mdx b/website/content/partials/raft-retry-join-failure.mdx
deleted file mode 100644
index cb87e79c63..0000000000
--- a/website/content/partials/raft-retry-join-failure.mdx
+++ /dev/null
@@ -1,24 +0,0 @@
-### Cluster initialization hangs with `retry_join`
-
-The
-[`retry_join`](/vault/docs/concepts/integrated-storage/index#retry_join-configuration)
-feature no longer successfully attempts to rejoin the raft cluster every 2
-seconds following a join failure.
-
-The error occurs when attempting to initialize non-leader nodes with a
-[`retry_join` stanza](/vault/docs/configuration/storage/raft/#retry_join-stanza). This
-affects multi-node raft clusters on [impacted versions](#impacted-versions).
-
-The bug was introduced by commit
-https://github.com/hashicorp/vault/commit/cc6409222ce246ed72d067debe6ffeb8f62f9dad
-and first reported in https://github.com/hashicorp/vault/issues/16486.
-
-#### Impacted versions
-
-Affects versions 1.11.1, 1.11.2, 1.10.5, and 1.10.6. Versions prior to these
-are unaffected.
-
-NOTE: This error does not extend to version 1.9.8+, which is slightly different
-in this portion of the code and does not exhibit the same behavior.
-
-New releases addressing this bug are coming soon.
diff --git a/website/content/partials/release-notes/deprecation-note.mdx b/website/content/partials/release-notes/deprecation-note.mdx
deleted file mode 100644
index 1c222d194b..0000000000
--- a/website/content/partials/release-notes/deprecation-note.mdx
+++ /dev/null
@@ -1,4 +0,0 @@
-Please refer to the [Deprecation Plans and Notice](/vault/docs/deprecation) page
-for up-to-date information on feature deprecations and plans or the [Feature
-Deprecation FAQ](/vault/docs/deprecation/faq) for general questions about
-our deprecation process.
\ No newline at end of file
diff --git a/website/content/partials/release-notes/intro.mdx b/website/content/partials/release-notes/intro.mdx
deleted file mode 100644
index 5ec87fdd76..0000000000
--- a/website/content/partials/release-notes/intro.mdx
+++ /dev/null
@@ -1,8 +0,0 @@
-Release notes provide an at-a-glance summary of key updates to new versions of
-Vault. For a comprehensive list of product updates, improvements, and bug fixes,
-refer to the [changelog](https://github.com/openbao/openbao/blob/main/CHANGELOG.md)
-included with the Vault code on GitHub.
-
-We encourage you to
-[upgrade to the latest release of Vault](/vault/docs/upgrading)
-to take advantage of continuing improvements, critical fixes, and new features.
\ No newline at end of file
diff --git a/website/content/partials/telemetry-metrics/vault/core/seal.mdx b/website/content/partials/telemetry-metrics/vault/core/seal.mdx
deleted file mode 100644
index 625d91c7de..0000000000
--- a/website/content/partials/telemetry-metrics/vault/core/seal.mdx
+++ /dev/null
@@ -1,5 +0,0 @@
-### vault.core.seal ((#vault-core-seal))
-
-Metric type | Value | Description
------------ | ----- | -----------
-summary | ms | Time required to complete seal operations
\ No newline at end of file
diff --git a/website/content/partials/tokenization-rotation-persistence.mdx b/website/content/partials/tokenization-rotation-persistence.mdx
deleted file mode 100644
index 7cd971a2ee..0000000000
--- a/website/content/partials/tokenization-rotation-persistence.mdx
+++ /dev/null
@@ -1,14 +0,0 @@
-### Rotation configuration persistence issue could lose transform tokenization key versions
-
-A key rotation, whether performed manually or via automatic time-based rotation,
-can result in the loss of intermediate key versions if it occurs after a Vault
-restart or leader change and the rotation configuration was changed after the
-tokenization transform was initially configured. Tokenized values from the lost
-key versions would not be decodable. We recommend that customers who have
-enabled automatic rotation disable it, and that other customers avoid key
-rotation, until the upcoming fix.
-
-#### Affected versions
-
-This issue affects Vault Enterprise with ADP versions 1.10.x and higher. A
-fix will be released in Vault 1.11.9, 1.12.5, and 1.13.1.
diff --git a/website/content/partials/transform-upgrade.mdx b/website/content/partials/transform-upgrade.mdx
deleted file mode 100644
index fc08e551b2..0000000000
--- a/website/content/partials/transform-upgrade.mdx
+++ /dev/null
@@ -1,8 +0,0 @@
-### Transform storage upgrades fixed
-
-The Transform Secrets Engine storage upgrade introduced in 1.6.0 produced
-malformed configuration for transformations configured earlier than 1.6.0,
-resulting in an error when using these transformations if Vault is restarted
-after the upgrade. This issue exists on Vault 1.6.0 through 1.6.3, and is fixed
-in Vault 1.6.4 and 1.7.0. Transformations configured on 1.6.0 or higher are
-unaffected.
diff --git a/website/content/partials/ui-pki-control-groups-known-issue.mdx b/website/content/partials/ui-pki-control-groups-known-issue.mdx
deleted file mode 100644
index ed7b03ab1a..0000000000
--- a/website/content/partials/ui-pki-control-groups-known-issue.mdx
+++ /dev/null
@@ -1,17 +0,0 @@
-### Users limited by control groups can only access issuer detail from PKI overview page ((#ui-pki-control-groups))
-
-#### Affected versions
-
-- Vault 1.14.x
-
-#### Issue
-
-Vault UI users who require control group approval to read issuer details are
-directed to the Control Group Access page when they try to view issuer details
-from links on the Issuer list page.
-
-#### Workaround
-
-Vault UI users constrained by control groups should select issuers from the
-**PKI overview** page to view detailed information instead of the
-**Issuers list** page.
diff --git a/website/content/partials/ui-safari-login-screen.mdx b/website/content/partials/ui-safari-login-screen.mdx
deleted file mode 100644
index 37373b6204..0000000000
--- a/website/content/partials/ui-safari-login-screen.mdx
+++ /dev/null
@@ -1,13 +0,0 @@
-### Safari login screen appears broken on the UI
-
-#### Affected versions
-
-- 1.14.0
-
-#### Issue
-
-The login screen on Safari appears to be broken, presenting as a blank white screen.
-
-#### Workaround
-
-Scroll down to find the login section.
\ No newline at end of file