From fb6d8b5a0ad75c764e5326531eff39dae78d8b6d Mon Sep 17 00:00:00 2001 From: Agil Antony Date: Mon, 22 Sep 2025 15:16:28 +0530 Subject: [PATCH] ROX30868 Removing discrete headings --- ...ate-with-image-vulnerability-scanners.adoc | 1 - modules/common-search-queries.adoc | 11 --- modules/configuration-details-tab.adoc | 6 +- ...eate-policy-from-system-policies-view.adoc | 8 +-- ...default-requirements-central-services.adoc | 2 - modules/default-requirements-external-db.adoc | 4 -- ...requirements-secured-cluster-services.adoc | 12 ---- .../generating-sensor-deployment-files.adoc | 4 +- ...r-upgrade-change-subscription-channel.adoc | 4 +- ...mmended-requirements-central-services.adoc | 5 -- ...requirements-secured-cluster-services.adoc | 2 - ...derstanding-connectivity-explanations.adoc | 2 - modules/use-process-baselines.adoc | 2 - modules/using-cli.adoc | 15 ---- ...tingwebhookconfiguration-yaml-changes.adoc | 3 - modules/violation-view-policy-tab.adoc | 5 +- modules/violations-view-deployment-tab.adoc | 5 -- scripts/fix_discrete.sh | 68 +++++++++++++++++++ 18 files changed, 73 insertions(+), 86 deletions(-) create mode 100755 scripts/fix_discrete.sh diff --git a/integration/integrate-with-image-vulnerability-scanners.adoc b/integration/integrate-with-image-vulnerability-scanners.adoc index 8d5ded3d9820..759c6c3768c9 100644 --- a/integration/integrate-with-image-vulnerability-scanners.adoc +++ b/integration/integrate-with-image-vulnerability-scanners.adoc @@ -9,7 +9,6 @@ toc::[] [role="_abstract"] {rh-rhacs-first} integrates with vulnerability scanners to enable you to import your container images and watch them for vulnerabilities. -[discrete] == Supported container image registries Red{nbsp}Hat supports the following container image registries: diff --git a/modules/common-search-queries.adoc b/modules/common-search-queries.adoc index b68ba227aac4..3d99a6e99b6c 100644 --- a/modules/common-search-queries.adoc +++ b/modules/common-search-queries.adoc @@ -7,7 +7,6 @@ Here are some common search queries you can run with {product-title}. -[discrete] == Finding deployments that are affected by a specific CVE |=== @@ -17,7 +16,6 @@ Here are some common search queries you can run with {product-title}. | `CVE:CVE-2018-11776` |=== -[discrete] == Finding privileged running deployments |=== @@ -27,7 +25,6 @@ Here are some common search queries you can run with {product-title}. | `Privileged:true` |=== -[discrete] == Finding deployments that have external network exposure |=== @@ -37,7 +34,6 @@ Here are some common search queries you can run with {product-title}. | `Exposure Level:External` |=== -[discrete] == Finding deployments that are running specific processes |=== @@ -47,7 +43,6 @@ Here are some common search queries you can run with {product-title}. | `Process Name:bash` |=== -[discrete] == Finding deployments that have serious but fixable vulnerabilities |=== @@ -57,7 +52,6 @@ Here are some common search queries you can run with {product-title}. | `CVSS:>=6` `Fixable:.*` |=== -[discrete] == Finding deployments that use passwords exposed through environment variables |=== @@ -67,7 +61,6 @@ Here are some common search queries you can run with {product-title}. | `Environment Key:r/.\*pass.*` |=== -[discrete] == Finding running deployments that have particular software components in them |=== @@ -77,13 +70,11 @@ Here are some common search queries you can run with {product-title}. 
| `Component:libgpg-error` or `Component:sudo` |=== -[discrete] == Finding users or groups Use Kubernetes link:https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/[Labels and Selectors], and link:https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/[Annotations] to attach metadata to your deployments. You can then query based on the applied annotations and labels to identify individuals or groups. -[discrete] === Finding who owns a particular deployment |=== @@ -93,7 +84,6 @@ You can then query based on the applied annotations and labels to identify indiv | `Deployment:app-server` `Label:team=backend` |=== -[discrete] === Finding who is deploying images from public registries |=== @@ -103,7 +93,6 @@ You can then query based on the applied annotations and labels to identify indiv | `Image Registry:docker.io` `Label:team=backend` |=== -[discrete] === Finding who is deploying into the default namespace |=== diff --git a/modules/configuration-details-tab.adoc b/modules/configuration-details-tab.adoc index 5745b446e2ca..b50fdcbecbdf 100644 --- a/modules/configuration-details-tab.adoc +++ b/modules/configuration-details-tab.adoc @@ -8,7 +8,6 @@ The *Configuration details* tab displays information about the scan schedule information such as the essential parameters, cluster status, associated profiles, and email delivery destinations. -[discrete] == Parameters section The *Parameters* section organizes information into the following groups: @@ -19,7 +18,6 @@ Schedule:: Specifies when the compliance scans should run. Last scanned:: The timestamp of the last compliance scan performed. Last updated:: The last date and time that the compliance scan data was modified. -[discrete] == Clusters section The *Clusters* section organizes information into the following groups: @@ -27,16 +25,14 @@ The *Clusters* section organizes information into the following groups: Cluster:: Lists the one or more clusters associated with a compliance scan. Operator status:: Indicates the current health or operational status of the Operator. -[discrete] == Profiles section The *Profiles* section lists the one or more profiles associated with a compliance scan. -[discrete] == Delivery destinations section The *Delivery destinations* section organizes information into the following groups: Email notifier:: Specifies the email notification system or tool set up to distribute reports or alerts. Distribution list:: Lists the recipients who should receive the notifications or reports. -Email template:: Specifies the email format used for the notifications. You can use the default or customize the email subject and body as needed. \ No newline at end of file +Email template:: Specifies the email format used for the notifications. You can use the default or customize the email subject and body as needed. diff --git a/modules/create-policy-from-system-policies-view.adoc b/modules/create-policy-from-system-policies-view.adoc index 5a6c02a53e3b..fc2bcd30fb70 100644 --- a/modules/create-policy-from-system-policies-view.adoc +++ b/modules/create-policy-from-system-policies-view.adoc @@ -14,7 +14,6 @@ You can create new security policies from the system policies view. // future enhancement: split these into separate modules and call them from the assembly. Add a procedure title to each module. -[discrete] [id="policy-details_{context}"] == Enter policy details @@ -31,7 +30,6 @@ Enter the following details about your policy in the *Policy details* section. .. 
Click the *Add technique* to add techniques for the selected tactic. You can specify multiple techniques for a tactic. . Click *Next*. -[discrete] [id="policy-lifecycle_{context}"] == Configure the policy lifecycle @@ -48,7 +46,6 @@ You can select more than one stage from the following choices: * *Audit logs*: {product-title-short} triggers policy violations when event sources match Kubernetes audit log records. . Click *Next*. -[discrete] [id="policy-rules_{context}"] == Configure the policy rules and criteria @@ -75,7 +72,6 @@ See "Policy criteria" in the "Additional resources" section for more information . To combine multiple values for an attribute, click the *Add* icon. . Click *Next*. -[discrete] [id="policy-scope_{context}"] == Configure the policy scope @@ -98,7 +94,6 @@ It does not have any effect if you use this policy to check running deployments ==== . Click *Next*. -[discrete] [id="policy-actions_{context}"] == Configure policy actions @@ -130,7 +125,6 @@ You must have previously configured the notification before it is visible and av ==== . Click *Next*. -[discrete] [id="policy-review_{context}"] == Review the policy and preview violations @@ -144,4 +138,4 @@ Review the policy settings you have configured. Runtime violations are not available in this preview because they are generated in response to future events. ==== Before you save the policy, verify that the violations seem accurate. -. Click *Save*. \ No newline at end of file +. Click *Save*. diff --git a/modules/default-requirements-central-services.adoc b/modules/default-requirements-central-services.adoc index f329d4d2eb5b..c32239b8c8f6 100644 --- a/modules/default-requirements-central-services.adoc +++ b/modules/default-requirements-central-services.adoc @@ -42,7 +42,6 @@ Otherwise, your data is only saved on a single node. Red{nbsp}Hat does not recom For security reasons, you should deploy Central in a cluster with limited administrative access. ==== -[discrete] == CPU, memory, and storage requirements The following table lists the minimum CPU and memory values required to install and run Central. @@ -83,7 +82,6 @@ Scanner is responsible for scanning images, nodes, and the platform for vulnerab {product-title-short} includes two image vulnerability scanners: StackRox Scanner and Scanner V4. The StackRox Scanner is deprecated. Scanner V4 was introduced in release 4.4 and as of release 4.8 is the default image scanner. -[discrete] == StackRox Scanner The following table lists the minimum CPU and memory values required to install and run StackRox Scanner. The requirements in this table are based on the default of 3 replicas. diff --git a/modules/default-requirements-external-db.adoc b/modules/default-requirements-external-db.adoc index ceaefd3ba128..0bbe2df816c6 100644 --- a/modules/default-requirements-external-db.adoc +++ b/modules/default-requirements-external-db.adoc @@ -18,11 +18,9 @@ When you use an external database, note the following guidance: If you select an external database, your database instance and the user connecting to it must meet the requirements listed in the following sections. -[discrete] == Database type and version The database must be a PostgreSQL-compatible database that supports PostgreSQL 13 or later. 
-[discrete] == User permissions The user account that Central uses to connect to the database must be a `superuser` account with connection rights to the database and the following permissions: @@ -31,7 +29,6 @@ The user account that Central uses to connect to the database must be a `superus * `Usage` permissions on all sequences in the schema. * The ability to create and delete databases as a `superuser`. -[discrete] == Connection string Central connects to the external database by using a connection string, which must be in `keyword=value` format. The connection string should specify details such as the host, port, database name, user, and SSL/TLS mode. For example, `host= port=5432 database=stackrox user=stackrox sslmode=verify-ca`. @@ -40,6 +37,5 @@ Central connects to the external database by using a connection string, which mu Connections through *PgBouncer* are not supported. ==== -[discrete] == CA certificates If your external database uses a certificate issued by a private or untrusted Certificate Authority (CA), you might need to specify the CA certificate so that Central trusts the database certificate. You can add this by using a TLS block in the Central custom resource configuration. diff --git a/modules/default-requirements-secured-cluster-services.adoc b/modules/default-requirements-secured-cluster-services.adoc index ca45063c25c7..75d3cac3a3a3 100644 --- a/modules/default-requirements-secured-cluster-services.adoc +++ b/modules/default-requirements-secured-cluster-services.adoc @@ -22,7 +22,6 @@ If you use a web proxy or firewall, you must ensure that secured clusters and Ce Sensor monitors your Kubernetes and {ocp} clusters. These services currently deploy in a single deployment, which handles interactions with the Kubernetes API and coordinates with the other {product-title} components. -[discrete] === CPU and memory requirements The following table lists the minimum CPU and memory values required to install and run sensor on secured clusters. @@ -45,7 +44,6 @@ The following table lists the minimum CPU and memory values required to install The Admission controller prevents users from creating workloads that violate policies you configure. -[discrete] === CPU and memory requirements By default, the admission control service runs 3 replicas. The following table lists the request and limits for each replica. @@ -68,7 +66,6 @@ By default, the admission control service runs 3 replicas. The following table l Collector monitors runtime activity on each node in your secured clusters as a DaemonSet. It connects to Sensor to report this information. The collector pod has three containers. The first container is collector, which monitors and reports the runtime activity on the node. The other two are compliance and node-inventory. -[discrete] === Collection requirements To use the `CORE_BPF` collection method, the base kernel must support BTF, and the BTF file must be available to collector. @@ -94,12 +91,10 @@ Collector looks for the BTF file in the standard locations shown in the followin If any of these files exists, it is likely that the kernel has BTF support and `CORE_BPF` is configurable. -[discrete] === CPU and memory requirements By default, the collector pod runs 3 containers. The following tables list the request and limits for each container and the total for each collector pod. -[discrete] ==== Collector container [cols="3",options="header"] @@ -114,7 +109,6 @@ By default, the collector pod runs 3 containers. 
The following tables list the r | 1000 MiB |=== -[discrete] ==== Compliance container [cols="3",options="header"] @@ -130,7 +124,6 @@ By default, the collector pod runs 3 containers. The following tables list the r | 2000 MiB |=== -[discrete] ==== Node-inventory container [cols="3",options="header"] @@ -145,7 +138,6 @@ By default, the collector pod runs 3 containers. The following tables list the r | 500 MiB |=== -[discrete] ==== Total collector pod requirements [cols="3",options="header"] @@ -163,7 +155,6 @@ By default, the collector pod runs 3 containers. The following tables list the r [id="default-requirements-secured-cluster-services-scanner_{context}"] == Scanner -[discrete] === CPU and memory requirements The requirements in this table are based on the default of 3 replicas. @@ -199,10 +190,8 @@ The StackRox Scanner requires Scanner DB (PostgreSQL 15) to store data. The foll Scanner V4 is optional. If Scanner V4 is installed on secured clusters, the following requirements apply. -[discrete] === CPU, memory, and storage requirements -[discrete] === Scanner V4 Indexer The requirements in this table are based on the default of 2 replicas. @@ -219,7 +208,6 @@ The requirements in this table are based on the default of 2 replicas. | 6 GiB |=== -[discrete] === Scanner V4 DB Scanner V4 requires Scanner V4 DB (PostgreSQL 15) to store data. The following table lists the minimum CPU, memory, and storage values required to install and run Scanner V4 DB. For Scanner V4 DB, a PVC is not required, but it is strongly recommended because it ensures optimal performance. diff --git a/modules/generating-sensor-deployment-files.adoc b/modules/generating-sensor-deployment-files.adoc index 42f1d783dec1..bc09a432125f 100644 --- a/modules/generating-sensor-deployment-files.adoc +++ b/modules/generating-sensor-deployment-files.adoc @@ -5,7 +5,6 @@ [id="generating-sensor-deployment-files_{context}"] = Generating Sensor deployment files -[discrete] == Generating files for Kubernetes systems .Procedure @@ -17,7 +16,6 @@ $ roxctl sensor generate k8s --name __ --central "$ROX_ENDPOINT" ---- -[discrete] == Generating files for {ocp} systems .Procedure @@ -47,4 +45,4 @@ To use `wss`, prefix the address with *`wss://`*, and ---- $ roxctl sensor generate k8s --central wss://stackrox-central.example.com:443 ---- -==== \ No newline at end of file +==== diff --git a/modules/operator-upgrade-change-subscription-channel.adoc b/modules/operator-upgrade-change-subscription-channel.adoc index f48ed7775080..b6628de66617 100644 --- a/modules/operator-upgrade-change-subscription-channel.adoc +++ b/modules/operator-upgrade-change-subscription-channel.adoc @@ -30,7 +30,6 @@ ifndef::cloud-svc[] endif::[] * You have access to an {ocp} cluster web console using an account with `cluster-admin` permissions. -[discrete] == Changing the subscription channel by using the web console Use the following instructions for changing the subscription channel by using the web console: @@ -44,7 +43,6 @@ Use the following instructions for changing the subscription channel by using th + For subscriptions with a *Manual* approval strategy, you can manually approve the update from the *Subscription* tab. 
-[discrete] == Changing the subscription channel by using command line Use the following instructions for changing the subscription channel by using command line: @@ -63,4 +61,4 @@ During the update, the {product-title-short} Operator provisions a new deploymen ifeval::["{context}" == "upgrade-cloudsvc-operator"] :!cloud-svc: -endif::[] \ No newline at end of file +endif::[] diff --git a/modules/recommended-requirements-central-services.adoc b/modules/recommended-requirements-central-services.adoc index dd126e4e2b4c..1f87eb6f93cf 100644 --- a/modules/recommended-requirements-central-services.adoc +++ b/modules/recommended-requirements-central-services.adoc @@ -20,7 +20,6 @@ Central services contain the following components: [id="recommended-requirements-central-services-central_{context}"] == Central -[discrete] === Memory and CPU requirements The following table lists the minimum memory and CPU values required to run Central. To determine sizing, consider the following data: @@ -55,7 +54,6 @@ The following table lists the minimum memory and CPU values required to run Cent [id="recommended-requirements-central-db-services-central_{context}"] == Central DB -[discrete] === Memory and CPU requirements The following table lists the minimum memory and CPU values required to run Central DB. To determine sizing, consider the following data: @@ -121,7 +119,6 @@ The following table lists the minimum memory and CPU values required for the Sta The following table lists the minimum memory and CPU values required for the Scanner V4 deployment in the Central cluster. The table includes the number of unique images deployed in all secured clusters. -[discrete] === Scanner V4 Indexer |=== @@ -134,7 +131,6 @@ The following table lists the minimum memory and CPU values required for the Sca |< 10000|3|6 cores|1.5 GiB |=== -[discrete] === Scanner V4 Matcher |=== @@ -147,7 +143,6 @@ The following table lists the minimum memory and CPU values required for the Sca |< 10000|3|3 cores|1.7 GiB |=== -[discrete] === Scanner V4 DB |=== diff --git a/modules/recommended-requirements-secured-cluster-services.adoc b/modules/recommended-requirements-secured-cluster-services.adoc index 469e881d0eaf..8a5f786ba5b2 100644 --- a/modules/recommended-requirements-secured-cluster-services.adoc +++ b/modules/recommended-requirements-secured-cluster-services.adoc @@ -23,7 +23,6 @@ Collector component is not included on this page. Required resource requirements Sensor monitors your Kubernetes and OpenShift Container Platform clusters. These services currently deploy in a single deployment, which handles interactions with the Kubernetes API and coordinates with Collector. -[discrete] == Memory and CPU requirements The following table lists the minimum memory and CPU values required to run Sensor on a secured cluster. @@ -45,7 +44,6 @@ The following table lists the minimum memory and CPU values required to run Sens The admission controller prevents users from creating workloads that violate policies that you configure. -[discrete] == Memory and CPU requirements The following table lists the minimum memory and CPU values required to run the admission controller on a secured cluster. 
diff --git a/modules/understanding-connectivity-explanations.adoc b/modules/understanding-connectivity-explanations.adoc index a02d41480d89..eea8826be6df 100644 --- a/modules/understanding-connectivity-explanations.adoc +++ b/modules/understanding-connectivity-explanations.adoc @@ -19,7 +19,6 @@ The following are examples of some common issues: The explainability mode provides insights into how each side of a connection is affected by your policy rules. You can quickly identify the configuration mistakes by using the explainability mode. -[discrete] == Example allowed connections Consider that you have a Kubernetes cluster with the following components: @@ -47,7 +46,6 @@ Allowed connections: In this example, the connection from `monitoring/monitoring-service` to `internal-apps/internal-app-a` is allowed by the combination of ANP `pass-monitoring` that applies broadly on all namespaces with the label `security: internal` and a more specific NP `internal-apps/allow-monitoring` tailored for `internal-app-a`. The output shows that multiple policies can contribute to allowing a connection. -[discrete] == Example blocked connections Consider an isolated data service `isolated-data-service` which denies all external access by default for security reasons. diff --git a/modules/use-process-baselines.adoc b/modules/use-process-baselines.adoc index d62d38377065..516b0c0b34e0 100644 --- a/modules/use-process-baselines.adoc +++ b/modules/use-process-baselines.adoc @@ -9,14 +9,12 @@ You can minimize risk by using process baselining for infrastructure security. With this approach, {product-title} first discovers existing processes and creates a baseline. Then it operates in the default deny-all mode and only allows processes listed in the baseline to run. -[discrete] == Process baselines When you install {product-title}, there is no default process baseline. As {product-title} discovers deployments, it creates a process baseline for every container type in a deployment. Then it adds all discovered processes to their own process baselines. -[discrete] == Process baseline states During the process discovery phase, all baselines are in an unlocked state. diff --git a/modules/using-cli.adoc b/modules/using-cli.adoc index 02017ce78c5f..8a39ec1777d0 100644 --- a/modules/using-cli.adoc +++ b/modules/using-cli.adoc @@ -38,7 +38,6 @@ Central stores information about: You can back up and restore Central's database by using the `roxctl` CLI. -[discrete] === Backing up Central database Run the following command to back up Central's database: @@ -47,7 +46,6 @@ Run the following command to back up Central's database: $ roxctl -e "$ROX_CENTRAL_ADDRESS" central backup ---- -[discrete] === Restoring Central database Run the following command to restore Central's database: @@ -62,7 +60,6 @@ $ roxctl -e "$ROX_CENTRAL_ADDRESS" central db restore To secure a Kubernetes or an {ocp} cluster, you must deploy {product-title} services into the cluster. You can generate deployment files in the {product-title-short} portal by selecting *Platform Configuration* -> *Clusters*, or you can use the `roxctl` CLI. -[discrete] === Generating Sensor deployment files .Kubernetes @@ -98,7 +95,6 @@ $ roxctl sensor generate k8s --central wss://stackrox-central.example.com:443 ---- ==== -[discrete] === Installing Sensor by using the generate YAML files When you generate the Sensor deployment files, `roxctl` creates a directory called `sensor-` in your working directory. The script to install Sensor is present in this directory. 
Run the sensor installation script to install Sensor. @@ -109,7 +105,6 @@ $ ./sensor-/sensor.sh If you get a warning that you do not have the required permissions to install Sensor, follow the on-screen instructions, or contact your cluster administrator for help. -[discrete] === Downloading Sensor bundle for existing clusters Use the following command to download Sensor bundles for existing clusters by specifying a cluster name or ID. @@ -119,7 +114,6 @@ Use the following command to download Sensor bundles for existing clusters by sp $ roxctl sensor get-bundle ---- -[discrete] === Deleting cluster integration [source,terminal] @@ -138,7 +132,6 @@ You can remove them by running the `delete-sensor.sh` script from the Sensor ins You can use the `roxctl` CLI to check deployment YAML files and images for policy compliance. -[discrete] === Configuring output format When you check policy compliance by using the `deployment check`, `image check`, or `image scan` commands, you can specify the output format by using the `-o` option. This option determines how the output of a command is displayed in the terminal. @@ -205,7 +198,6 @@ $ roxctl -e "$ROX_CENTRAL_ADDRESS" \ |=== -[discrete] === Checking deployment YAML files The following command checks build-time and deploy-time violations of your security policies in YAML deployment files. @@ -221,7 +213,6 @@ or $ roxctl -e "$ROX_CENTRAL_ADDRESS" deployment check --file= ---- -[discrete] === Checking images The following command checks build-time violations of your security policies in images. @@ -231,7 +222,6 @@ The following command checks build-time violations of your security policies in $ roxctl -e "$ROX_CENTRAL_ADDRESS" image check --image= ---- -[discrete] === Checking image scan results You can also check the scan results for specific images. @@ -257,12 +247,10 @@ The default *Continuous Integration* system role already has the required permis [id="debug-issues_{context}"] == Debugging issues -[discrete] === Managing Central log level Central saves information to its container logs. -[discrete] ==== Viewing the logs You can see the container logs for Central by running: @@ -278,7 +266,6 @@ $ kubectl logs -n stackrox $ oc logs -n stackrox ---- -[discrete] ==== Viewing current log level You can change the log level to see more or less information in Central logs. Run the following command to view the current log level: @@ -287,7 +274,6 @@ Run the following command to view the current log level: $ roxctl -e "$ROX_CENTRAL_ADDRESS" central debug log ---- -[discrete] ==== Changing the log level Run the following command to change the log level: @@ -297,7 +283,6 @@ $ roxctl -e "$ROX_CENTRAL_ADDRESS" central debug log --level= <1> ---- <1> The acceptable values for `` are `Panic`, `Fatal`, `Error`, `Warn`, `Info`, and `Debug`. -[discrete] === Retrieving debugging information To gather debugging information for investigating issues, run the following command: diff --git a/modules/validatingwebhookconfiguration-yaml-changes.adoc b/modules/validatingwebhookconfiguration-yaml-changes.adoc index e44365b6cff9..6295bc7e42f2 100644 --- a/modules/validatingwebhookconfiguration-yaml-changes.adoc +++ b/modules/validatingwebhookconfiguration-yaml-changes.adoc @@ -12,7 +12,6 @@ With {product-title} you can enforce security policies on: * Pod execution * Pod port forward -[discrete] == If Central or Sensor is unavailable The admission controller requires an initial configuration from Sensor to work. 
Kubernetes or {ocp} saves this configuration, and it remains accessible even if all admission control service replicas are rescheduled onto other nodes. @@ -42,7 +41,6 @@ $ kubectl delete ValidatingWebhookConfiguration/stackrox ---- ==== -[discrete] == Make the admission controller more reliable Red{nbsp}Hat recommends that you schedule the admission control service on the control plane and not on worker nodes. @@ -57,7 +55,6 @@ $ oc -n stackrox scale deploy/admission-control --replicas= ---- <1> If you use Kubernetes, enter `kubectl` instead of `oc`. -[discrete] == Using with the roxctl CLI You can use the following options when you generate a Sensor deployment YAML file: diff --git a/modules/violation-view-policy-tab.adoc b/modules/violation-view-policy-tab.adoc index 2c5206c1ac35..635a2ef37057 100644 --- a/modules/violation-view-policy-tab.adoc +++ b/modules/violation-view-policy-tab.adoc @@ -8,7 +8,6 @@ [role="_abstract"] The *Policy* tab of the *Details* panel displays details of the policy that caused the violation. -[discrete] == Policy overview section The *Policy overview* section lists the following information: @@ -21,7 +20,6 @@ The *Policy overview* section lists the following information: * *Guidance*: Suggestions on how to address the violation. * *MITRE ATT&CK*: Indicates if there are MITRE link:https://attack.mitre.org/matrices/enterprise/containers/[tactics and techniques] that apply to this policy. -[discrete] == Policy behavior The *Policy behavior* section provides the following information: @@ -40,7 +38,6 @@ The *Policy behavior* section provides the following information: *** For existing deployments, policy changes only result in enforcement at the next detection of the criteria, when a Kubernetes event occurs. For more information about enforcement, see "Security policy enforcement for the deploy stage". ** *Runtime*: {product-title-short} deletes all pods when an event in the pods matches the criteria of the policy. -[discrete] == Policy criteria section -The *Policy criteria* section lists the policy criteria for the policy. \ No newline at end of file +The *Policy criteria* section lists the policy criteria for the policy. diff --git a/modules/violations-view-deployment-tab.adoc b/modules/violations-view-deployment-tab.adoc index fc71f8e64a8e..ec0a71bbd027 100644 --- a/modules/violations-view-deployment-tab.adoc +++ b/modules/violations-view-deployment-tab.adoc @@ -8,7 +8,6 @@ [role="_abstract"] The *Deployment* tab of the *Details* panel displays details of the deployment to which the violation applies. -[discrete] == Overview section The *Deployment overview* section lists the following information: @@ -25,7 +24,6 @@ The *Deployment overview* section lists the following information: * *Annotations*: The annotations that apply to the selected deployment. * *Service Account*: The name of the service account for the selected deployment. -[discrete] == Container configuration section The *Container configuration* section lists the following information: @@ -46,7 +44,6 @@ The *Container configuration* section lists the following information: ** *Destination*: The path where the data is stored. ** *Type*: The type of the volume. -[discrete] == Port configuration section The *Port configuration* section provides information about the ports in the deployment, including the following fields: @@ -64,7 +61,6 @@ The *Port configuration* section provides information about the ports in the dep *** *nodePort*: The port on the node where external traffic comes into the node. 
 *** *externalIps*: The IP addresses that can be used to access the service externally, from outside the cluster, if any exist. This field is not available for an internal service.
 
-[discrete]
 == Security context section
 
 The *Security context* section lists whether the container is running as a privileged container.
@@ -73,7 +69,6 @@ The *Security context* section lists whether the container is running as a privi
 ** `true` if it is *privileged*.
 ** `false` if it is *not privileged*.
 
-[discrete]
 == Network policy section
 
 The *Network policy* section lists the namespace and all network policies in the namespace containing the violation. Click on a network policy name to view the full YAML file of the network policy.
\ No newline at end of file
diff --git a/scripts/fix_discrete.sh b/scripts/fix_discrete.sh
new file mode 100755
index 000000000000..48db3186f504
--- /dev/null
+++ b/scripts/fix_discrete.sh
@@ -0,0 +1,68 @@
+#!/bin/bash
+
+# Spinner function
+spinner() {
+    local pid=$1
+    local delay=0.1
+    local spinstr='|/-\'
+    while kill -0 $pid 2>/dev/null; do
+        local temp=${spinstr#?}
+        printf " [%c]  " "$spinstr"
+        spinstr=$temp${spinstr%"$temp"}
+        sleep $delay
+        printf "\b\b\b\b\b\b"
+    done
+    printf "    \b\b\b\b"
+}
+
+# Ask user for the base directory
+read -rp "Enter the path to the base directory: " BASE_DIR
+
+# Validate directory
+if [[ ! -d "$BASE_DIR" ]]; then
+    echo "Error: $BASE_DIR is not a valid directory."
+    exit 1
+fi
+
+echo "Processing directory: $BASE_DIR"
+
+FILES=$(find "$BASE_DIR" -type f -name "*.adoc")
+TOTAL_COUNT=$(echo "$FILES" | grep -c .)   # count non-empty lines so an empty result reports 0 files
+
+CURRENT=0
+echo "Found $TOTAL_COUNT .adoc files."
+
+# Process files
+echo
+for FILE in $FILES; do   # note: assumes .adoc paths contain no spaces
+    CURRENT=$((CURRENT+1))
+    printf "Processing file %d/%d: %s" "$CURRENT" "$TOTAL_COUNT" "$FILE"
+
+    (
+        REPLACEMENTS=$(awk '
+        BEGIN { count=0 }
+        # Count and drop standalone [discrete] lines; print everything else unchanged
+        /^\[discrete\]$/ { count++; next }
+        { print }
+        END { print "###REPLACEMENTS###" count }
+        ' "$FILE")
+
+        COUNT=$(echo "$REPLACEMENTS" | tail -n1 | sed 's/###REPLACEMENTS###//')
+
+        if [[ "$COUNT" -gt 0 ]]; then
+            echo "$REPLACEMENTS" | sed '/###REPLACEMENTS###/d' > "${FILE}.tmp" && mv "${FILE}.tmp" "$FILE"
+            echo "###SUMMARY### $COUNT"
+        fi
+    ) > "${FILE}.out" &   # capture the subshell summary in a file so it is not lost and does not interleave with the spinner
+    spinner $!
+    wait $! 2>/dev/null
+    RESULT=$(cat "${FILE}.out" 2>/dev/null); rm -f "${FILE}.out"
+    COUNT=$(echo "$RESULT" | grep "###SUMMARY###" | awk '{print $2}')
+
+    if [[ -n "$COUNT" && "$COUNT" -gt 0 ]]; then
+        echo " -> Modified ($COUNT [discrete] removed)"
+    else
+        echo -ne "\r\033[K"
+    fi
+done
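
A quick usage sketch for the new helper script, assuming it is run from the repository root. The directory name and the counts in the sample output are illustrative; the script prompts for the base directory, and the per-file messages come from its own `echo`/`printf` statements:

[source,terminal]
----
$ ./scripts/fix_discrete.sh
Enter the path to the base directory: modules
Processing directory: modules
Found 120 .adoc files.

Processing file 3/120: modules/common-search-queries.adoc -> Modified (11 [discrete] removed)
----

Files that contain no `[discrete]` lines are left untouched and their progress line is cleared from the terminal.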