1 change: 1 addition & 0 deletions ibm/mas_devops/roles/appconnect/README.md
@@ -60,6 +60,7 @@ Storage class where AppConnect will be installed - for IBM Cloud clusters, `ibmc
- **Required**
- Environment Variable: `APPCONNECT_STORAGE_CLASS`
- Default Value: None
- **Note**: The App Connect Dashboard requires a file-based storage class with ReadWriteMany (RWX) capability.
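
As an illustrative sketch only: a playbook that pins the dashboard to an RWX-capable file class might look like the following. The class name is an assumption for an IBM Cloud cluster, the Ansible variable name is assumed to mirror the `APPCONNECT_STORAGE_CLASS` environment variable, and the role's other required variables (entitlement credentials, channel, and so on) are omitted.

```yaml
# Sketch: run the appconnect role with an explicit RWX-capable file class.
# The class name is an assumption; substitute one available in your cluster.
- hosts: localhost
  roles:
    - ibm.mas_devops.appconnect
  vars:
    appconnect_storage_class: ibmc-file-gold-gid
```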

### appconnect_dashboard_name
AppConnect dashboard instance name. Defaults to `dashboard-12040r2` as a reference to AppConnect Dashboard version `12.0.4.0-r2` that is compatible with the default subscription channel and license ID.
14 changes: 8 additions & 6 deletions ibm/mas_devops/roles/cp4d/README.md
@@ -174,7 +174,6 @@ statefulset.apps/zen-metastoredb 3/3 68m




Role Variables
--------------
### cpd_product_version
@@ -199,14 +198,17 @@ An IBM entitlement key specific for Cloud Pak for Data installation, primarily u
- Default: None

### cpd_primary_storage_class
Primary storage class for Cloud Pak for Data.
Primary storage class for Cloud Pak for Data. For more details, please read [Storage Considerations for IBM Cloud Pak for Data](https://www.ibm.com/docs/en/cloud-paks/cp-data/4.6.x?topic=planning-storage-considerations).
According to that documentation, Cloud Pak for Data uses the following access modes for its storage classes, for example:
- RWX file storage: `ocs-storagecluster-cephfs`
- RWX file storage: `ibmc-file-gold-gid`

- **Required** if one of the known supported storage classes is not installed in the cluster.
- Environment Variable: `CPD_PRIMARY_STORAGE_CLASS`
- Default Value: `ibmc-file-gold-gid`, `ocs-storagecluster-cephfs`, `azurefiles-premium` (if available)

### cpd_metadata_storage_class
Storage class for the Cloud Pak for Data Zen meta database.
Storage class for the Cloud Pak for Data Zen meta database. This must support the ReadWriteOnce (RWO) access mode.

- **Required** if one of the known supported storage classes is not installed in the cluster.
- Environment Variable: `CPD_METADATA_STORAGE_CLASS`
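
A minimal sketch of setting both classes together: an RWX file class for primary storage and an RWO block class for the Zen metastore. The class names are assumptions for an IBM Cloud cluster, and every other variable the role needs (entitlement key, product version, and so on) is omitted.

```yaml
# Sketch only: pair an RWX file class (primary) with an RWO block class
# (Zen metastore). Substitute classes that exist in your cluster.
- hosts: localhost
  roles:
    - ibm.mas_devops.cp4d
  vars:
    cpd_primary_storage_class: ibmc-file-gold-gid
    cpd_metadata_storage_class: ibmc-block-gold
```
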
@@ -239,16 +241,16 @@ The CP4D Admin username to authenticate with CP4D APIs. If you didn't change the

- Optional
- Environment Variable: `CPD_ADMIN_USERNAME`
- Default Value:
- `admin` (CPD 4.6)
- `cpadmin` (CPD 4.8)

### cpd_admin_password
The CP4D Admin User password used to call the CP4D API to provision the Discovery instance. If you didn't change the initial admin password after the CP4D install, you don't need to provide it. The initial admin user password for `admin` or `cpadmin` will be used.

- Optional
- Environment Variable: `CPD_ADMIN_PASSWORD`
- Default Value:
- CPD 4.6: Looked up from the `admin-user-details` secret in the `cpd_instance_namespace` namespace
- CPD 4.8: Looked up from the `ibm-iam-bindinfo-platform-auth-idp-credentials` secret in the `cpd_instance_namespace` namespace

12 changes: 6 additions & 6 deletions ibm/mas_devops/roles/cp4d_service/README.md
@@ -614,14 +614,14 @@ The product version (also known as operand version) of this service to install.
- Default Value: Defined by the installed MAS catalog version

### cpd_service_storage_class
This is used to set `spec.storageClass` in all CPD services that uses file storage class (read-write-many).
This is used to set `spec.storageClass` in all CPD services that use a file storage class (ReadWriteMany, RWX).

- **Required**, unless IBMCloud storage classes are available.
- Environment Variable: `CPD_SERVICE_STORAGE_CLASS`
- Default Value: Auto determined if default storage classes are provided and available by your cloud provider. i.e `ibmc-file` for IBM Cloud, `efs` for AWS.

### cpd_service_block_storage_class
This is used to set `spec.blockStorageClass` in all CPD services that uses block storage class (read-write-only).
This is used to set `spec.blockStorageClass` in all CPD services that use a block storage class (ReadWriteOnce, RWO).

- **Required**, unless IBMCloud storage classes are available.
- Environment Variable: `CPD_SERVICE_BLOCK_STORAGE_CLASS`
@@ -646,16 +646,16 @@ The CP4D Admin username to authenticate with CP4D APIs. If you didn't change the

- Optional
- Environment Variable: `CPD_ADMIN_USERNAME`
- Default Value:
- `admin` (CPD 4.6)
- `cpadmin` (CPD 4.8)

### cpd_admin_password
The CP4D Admin User password used to call the CP4D API to provision the Discovery instance. If you didn't change the initial admin password after the CP4D install, you don't need to provide it. The initial admin user password for `admin` or `cpadmin` will be used.

- Optional
- Environment Variable: `CPD_ADMIN_PASSWORD`
- Default Value:
- CPD 4.6: Looked up from the `admin-user-details` secret in the `cpd_instance_namespace` namespace
- CPD 4.8: Looked up from the `ibm-iam-bindinfo-platform-auth-idp-credentials` secret in the `cpd_instance_namespace` namespace

@@ -694,7 +694,7 @@ Stores the name of the CP4D Watson Discovery Instance that can be used to config
- Default Value: `wd-mas-${mas_instance_id}-assist`

### cpd_wd_deployment_type
Defines the CP4D Watson Discovery deployment type:

- `Starter`: One replica pod for each wd service/component, uses fewer resources in your cluster.
- `Production`: Multiple replica pods for each Watson Discovery service/component. Recommended for production deployments to increase workload capacity, but it consumes more cluster resources.
16 changes: 8 additions & 8 deletions ibm/mas_devops/roles/db2/README.md
@@ -139,7 +139,7 @@ Role Variables - Storage
We recommend reviewing the Db2 documentation about the certified storage options for Db2 on Red Hat OpenShift. Please ensure your storage class meets the specified deployment requirements for Db2. [https://www.ibm.com/docs/en/db2/11.5?topic=storage-certified-options](https://www.ibm.com/docs/en/db2/11.5?topic=storage-certified-options)
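
Access modes are a property of the PersistentVolumeClaim rather than of the StorageClass object, so one way to confirm that a candidate class really provides the RWX capability called out below is to create a small throwaway claim against it. A sketch, with the class name as a placeholder:

```yaml
# Throwaway claim used only to verify that the named class can satisfy
# ReadWriteMany; delete it afterwards. Note that classes using the
# WaitForFirstConsumer binding mode will not bind until a pod mounts the claim.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rwx-probe
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: <your-file-storage-class>   # placeholder
  resources:
    requests:
      storage: 1Gi
```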

### db2_meta_storage_class
Storage class used for metadata. This must support ReadWriteMany
Storage class used for metadata. This must support the ReadWriteMany (RWX) access mode.

- **Required**
- Environment Variable: `DB2_META_STORAGE_CLASS`
@@ -160,7 +160,7 @@ The access mode for the storage.
- Default: `ReadWriteMany`

### db2_data_storage_class
Storage class used for user data. This must support ReadWriteOnce
Storage class used for user data. This must support the ReadWriteMany (RWX) access mode.

- **Required**
- Environment Variable: `DB2_DATA_STORAGE_CLASS`
@@ -181,7 +181,7 @@ The access mode for the storage.
- Default: `ReadWriteOnce`

### db2_backup_storage_class
Storage class used for backup. This must support ReadWriteMany
Storage class used for backup. This must support the ReadWriteMany (RWX) access mode.

- Optional
- Environment Variable: `DB2_BACKUP_STORAGE_CLASS`
@@ -202,7 +202,7 @@ The access mode for the storage.
- Default: `ReadWriteMany`

### db2_logs_storage_class
Storage class used for transaction logs. This must support ReadWriteOnce
Storage class used for transaction logs. This must support the ReadWriteMany (RWX) access mode.

- Optional
- Environment Variable: `DB2_LOGS_STORAGE_CLASS`
@@ -223,7 +223,7 @@ The access mode for the storage.
- Default: `ReadWriteOnce`

### db2_temp_storage_class
Storage class used for temporary data. This must support ReadWriteOnce
Storage class used for temporary data. This must support the ReadWriteMany (RWX) access mode.

- Optional
- Environment Variable: `DB2_TEMP_STORAGE_CLASS`
@@ -237,7 +237,7 @@ Size of temporary persistent volume.
- Default: `100Gi`

### db2_temp_storage_accessmode
The access mode for the storage.
The access mode for the storage. This must be ReadWriteOnce (RWO).

- Optional
- Environment Variable: `DB2_TEMP_STORAGE_ACCESSMODE`
@@ -249,9 +249,9 @@ Role Variables - Resource Requests
These variables allow you to customize the resources available to the Db2 pod in your cluster. In most circumstances you will want to set these properties because it's impossible for us to provide a default value that will be appropriate for all users. We have set defaults that are suitable for deploying Db2 onto a dedicated worker node with 4cpu and 16gb memory.

!!! tip
Note that you must take into account the system overhead on any given node when setting these parameters, if you set the requests equal to the number of CPU or amount of memory on yournode then the scheduler will not be able to schedule the Db2 pod because not 100% of the worker nodes' resource will be available to pod on that node, even if there's only a single pod on it.
Note that you must take into account the system overhead on any given node when setting these parameters. If you set the requests equal to the number of CPUs or the amount of memory on your node, the scheduler will not be able to schedule the Db2 pod, because 100% of the worker node's resources will never be available to pods on that node, even if there's only a single pod running on it.

Db2 is sensitive to both CPU and memory issues, particularly memory, we recommennd setting requests and limits to the same values, ensuring the scheduler always reserves the resources that Db2 expects to be available to it.
Db2 is sensitive to both CPU and memory issues, particularly memory. We recommend setting requests and limits to the same values, ensuring the scheduler always reserves the resources that Db2 expects to be available to it.

### db2_cpu_requests
Define the Kubernetes CPU request for the Db2 pod.
3 changes: 2 additions & 1 deletion ibm/mas_devops/roles/dro/README.md
@@ -42,11 +42,12 @@ Provide your [IBM entitlement key](https://myibm.ibm.com/products-services/conta
- Default: None

### dro_storage_class
Required. Storage class where DRO will be installed. MAS ansible playbooks will automatically try to determine a rwo (Read Write Once) storage class from a cluster if DRO_STORAGE_CLASS is not supplied. If a cluster is setup with a customize storage solution, please provide a valid rwo storage class name using DRO_STORAGE_CLASS
Storage class where DRO will be installed. The MAS Ansible playbooks will automatically try to determine a RWO (ReadWriteOnce) storage class from the cluster if `DRO_STORAGE_CLASS` is not supplied. If a cluster is set up with a customized storage solution, please provide a valid RWO storage class name using `DRO_STORAGE_CLASS`.

- Optional
- Environment Variable: `DRO_STORAGE_CLASS`
- Default Value: None
- **Note**: The storage class must support the RWO (ReadWriteOnce) access mode.
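
If the automatic lookup cannot find a suitable class (for example on a cluster with a custom storage solution), a hedged sketch of pinning it explicitly follows; the class name is an assumption and the role's other required variables are omitted.

```yaml
# Sketch: override the auto-detected class with an explicit RWO-capable one.
- hosts: localhost
  roles:
    - ibm.mas_devops.dro
  vars:
    dro_storage_class: ocs-storagecluster-ceph-rbd   # assumption: any RWO-capable class works
```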

Role Variables - BASCfg Generation
-------------------------------------------------------------------------------
2 changes: 1 addition & 1 deletion ibm/mas_devops/roles/grafana/README.md
@@ -42,7 +42,7 @@ Sets the namespace to install the grafana operator V5 and grafana instance
### grafana_instance_storage_class
Declare the storage class for Grafana Instance user data persistent volume.

- **Required** if one of the known supported storage classes is not installed in the cluster.
- **Required** if one of the known supported storage classes is not installed in the cluster. The storage class must support the ReadWriteOnce (RWO) access mode.
- Environment Variable: `GRAFANA_INSTANCE_STORAGE_CLASS`
- Default Value: `ibmc-file-gold-gid`, `ocs-storagecluster-cephfs`, `azurefiles-premium` (if available)

4 changes: 2 additions & 2 deletions ibm/mas_devops/roles/kafka/README.md
@@ -58,7 +58,7 @@ The configuration to apply, there are two configurations available: small and la
- Default Value: `small`

### kafka_storage_class
The name of the storage class to configure the AMQStreams operator to use for persistent storage in the Kafka cluster.
The name of the storage class to configure the AMQStreams operator to use for persistent storage in the Kafka cluster. The storage class must support the ReadWriteOnce (RWO) access mode.

- Environment Variable: `KAFKA_STORAGE_CLASS`
- Default Value: lookup supported storage classes in the cluster
@@ -70,7 +70,7 @@ The size of the storage to configure the AMQStreams operator to use for persiste
- Default Value: `100Gi`

### zookeeper_storage_class
The name of the storage class to configure the AMQStreams operator to use for persistent storage in the Zookeeper cluster.
The name of the storage class to configure the AMQStreams operator to use for persistent storage in the Zookeeper cluster. The storage class must support the ReadWriteOnce (RWO) access mode.

- Environment Variable: `ZOOKEEPER_STORAGE_CLASS`
- Default Value: lookup supported storage classes in the cluster
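
For orientation only (this is not the role's exact template), these two variables map onto the persistent-claim storage fields of the AMQ Streams `Kafka` custom resource, roughly as follows. The instance name and sizes are placeholders, and required fields such as replicas and listeners are omitted.

```yaml
# Simplified illustration of where the storage values land in the Kafka CR.
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: maskafka                              # placeholder instance name
spec:
  kafka:
    storage:
      type: persistent-claim
      class: <kafka_storage_class>            # RWO is sufficient
      size: 100Gi
  zookeeper:
    storage:
      type: persistent-claim
      class: <zookeeper_storage_class>        # RWO is sufficient
      size: 10Gi
```
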
2 changes: 1 addition & 1 deletion ibm/mas_devops/roles/mongodb/README.md
@@ -97,7 +97,7 @@ List of preserved settings
- mongodb_replicas

### mongodb_storage_class
Required. The name of the storage class to configure the MongoDb operator to use for persistent storage in the MongoDb cluster.
**Required**: The name of the storage class to configure the MongoDb operator to use for persistent storage in the MongoDb cluster. The storage class must support the ReadWriteOnce (RWO) access mode.

- Environment Variable: `MONGODB_STORAGE_CLASS`
- Default Value: None
7 changes: 4 additions & 3 deletions ibm/mas_devops/roles/ocp_cluster_monitoring/README.md
@@ -23,11 +23,11 @@ Adjust the retention period for Prometheus metrics, only used when both `prometh
- Default Value: `15d`

### prometheus_storage_class
Declare the storage class for Prometheus' metrics data persistent volume.
Declare the storage class for Prometheus' metrics data persistent volume. The storage class must support the ReadWriteOnce (RWO) access mode.

- **Required** if one of the known supported storage classes is not installed in the cluster.
- Environment Variable: `PROMETHEUS_STORAGE_CLASS`
- Default Value: `ibmc-file-gold-gid`, `ocs-storagecluster-cephfs`, `azurefiles-premium` (if available)
- Default Value: `ibmc-block-gold`, `ocs-storagecluster-ceph-rbd`, or `managed-premium` (if available)

### prometheus_storage_size
Adjust the size of the volume used to store metrics, only used when both `prometheus_storage_class` and `prometheus_alertmgr_storage_class` are set.
@@ -42,6 +42,7 @@ Declare the storage class for AlertManager's persistent volume.
- **Required** if one of the known supported storage classes is not installed in the cluster.
- Environment Variable: `PROMETHEUS_ALERTMGR_STORAGE_CLASS`
- Default Value: `ibmc-file-gold-gid`, `ocs-storagecluster-cephfs`, `azurefiles-premium` (if available)
- **Note**: The storage class must support the ReadWriteMany (RWX) access mode.
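
To make the RWO/RWX split concrete: on OpenShift these settings ultimately feed the standard `cluster-monitoring-config` ConfigMap, roughly along the lines below. This is an illustration of that mechanism, not the role's exact template; class names and sizes are assumptions.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      retention: 15d
      volumeClaimTemplate:
        spec:
          storageClassName: ibmc-block-gold        # RWO block class
          resources:
            requests:
              storage: 100Gi
    alertmanagerMain:
      volumeClaimTemplate:
        spec:
          storageClassName: ibmc-file-gold-gid     # RWX file class
          resources:
            requests:
              storage: 20Gi
```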

### prometheus_alertmgr_storage_size
Adjust the size of the volume used by AlertManager, only used when both `prometheus_storage_class` and `prometheus_alertmgr_storage_class` are set.
@@ -58,7 +59,7 @@ Adjust the retention period for User Workload Prometheus metrics, this parameter
- Default Value: `15d`

### prometheus_userworkload_storage_class
Declare the storage class for User Workload Prometheus' metrics data persistent volume.
Declare the storage class for User Workload Prometheus' metrics data persistent volume. The storage class must support the ReadWriteOnce (RWO) access mode.

- Optional
- Environment Variable: `PROMETHEUS_USERWORKLOAD_STORAGE_CLASS`
4 changes: 2 additions & 2 deletions ibm/mas_devops/roles/registry/README.md
@@ -49,7 +49,7 @@ Usage for tear-down action
--------------------------
This role can also be used to permanently delete a mirror registry from a given cluster by setting the `registry_action` to `tear-down` and specifying the corresponding `registry_namespace`, if not using the default value.

Note that the tear-down action deletes the registry completely, including the PVC storage and the registry namespace. To start up the registry again, the role needs to be run again with `registry_action` left at its default or set to `setup`. Images previously stored in the registry before the tear-down will no longer be available and will need to be mirrored again once the registry setup has completed. Take care when using this function and expect that images can no longer be accessed from a registry that has been torn down.

**Note:** Recreating the registry will also create a new ca cert for the new registry.

@@ -74,7 +74,7 @@ The namespace where the registry to run
- Default Value: `airgap-registry`

### registry_storage_class
Required. The name of the storage class to configure the MongoDb operator to use for persistent storage in the MongoDb cluster.
**Required**: The name of the storage class to use for the mirror registry's persistent storage. The storage class must support the ReadWriteOnce (RWO) access mode.

- **Required**, unless running in IBM Cloud ROKS, where the storage class will default to `ibmc-block-gold`.
- Environment Variable: `REGISTRY_STORAGE_CLASS`
5 changes: 3 additions & 2 deletions ibm/mas_devops/roles/suite_app_config/README.md
@@ -223,7 +223,7 @@ There are two defaulted File Storage Persistent Volumes Claim resources that wil
The following properties can be defined to customize the persistent volumes for the JMS queues setup for Manage.

### mas_app_settings_jms_queue_pvc_storage_class
Provide the persistent volume storage class to be used for JMS queue configuration.
Provide the persistent volume storage class to be used for the JMS queue configuration. Both the `ReadWriteOnce` (if using a block storage class) and `ReadWriteMany` (if using a file storage class) access modes are supported.
**Note:** JMS configuration will only be done if `mas_app_settings_server_bundles_size` property is set to `jms`.

- Optional
@@ -276,6 +276,7 @@ The following properties can be defined to customize the persistent volumes for

### mas_app_settings_doclinks_pvc_storage_class
Provide the persistent volume storage class to be used for doclinks/attachments configuration.
Both the `ReadWriteOnce` (if using a block storage class) and `ReadWriteMany` (if using a file storage class) access modes are supported.

- Optional
- Environment Variable: `MAS_APP_SETTINGS_DOCLINKS_PVC_STORAGE_CLASS`
@@ -316,7 +317,7 @@ The following properties can be defined to customize the persistent volumes for

### mas_app_settings_bim_pvc_storage_class
Provide the persistent volume storage class to be used for Building Information Models configuration.

Both the `ReadWriteOnce` (if using a block storage class) and `ReadWriteMany` (if using a file storage class) access modes are supported.
- Optional
- Environment Variable: `MAS_APP_SETTINGS_BIM_PVC_STORAGE_CLASS`
- Default: None - If not set, a default storage class will be automatically selected according to your cluster's available storage classes.
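
A sketch of pinning the Manage persistent volume classes explicitly instead of relying on auto-detection; the class names are assumptions for an IBM Cloud cluster, and the role's other required variables (MAS instance and application IDs, and so on) are omitted.

```yaml
# Sketch only: either block (RWO) or file (RWX) classes are accepted for
# these volumes. Substitute classes that exist in your cluster.
- hosts: localhost
  roles:
    - ibm.mas_devops.suite_app_config
  vars:
    mas_app_settings_jms_queue_pvc_storage_class: ibmc-block-gold
    mas_app_settings_doclinks_pvc_storage_class: ibmc-file-gold-gid
    mas_app_settings_bim_pvc_storage_class: ibmc-file-gold-gid
```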