OpenShift Data Foundation Example Fix & Update #5157

Merged: 5 commits, Feb 29, 2024. Changes from all commits are shown below.
7 changes: 7 additions & 0 deletions .gitignore
@@ -38,3 +38,10 @@ vendor/
!command/test-fixtures/**/.terraform/

*.sh

!createaddon.sh
!createcrd.sh
!deleteaddon.sh
!deletecrd.sh
!updatecrd.sh
!updateodf.sh
4 changes: 3 additions & 1 deletion examples/openshift-data-foundation/README.md
@@ -10,4 +10,6 @@ If you'd like to Deploy and Manage the different configurations for ODF on a Red

If you'd like to update or replace worker nodes with ODF enabled, head over to the [vpc-worker-replace](https://github.com/IBM-Cloud/terraform-provider-ibm/tree/master/examples/openshift-data-foundation/vpc-worker-replace) folder. This covers the sequential cordon, drain, and replace steps of a worker replace.

-## Deploying & Managing OpenShift Data Foundation on ROKS Satellite - Coming Soon
+## Deploying & Managing OpenShift Data Foundation on ROKS Satellite

+If you'd like to deploy and manage ODF on a Red Hat OpenShift cluster in a Satellite environment, head over to the [satellite](https://github.com/IBM-Cloud/terraform-provider-ibm/tree/master/examples/openshift-data-foundation/satellite) folder.
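For reference, the cordon/drain/replace sequence that the vpc-worker-replace example automates looks roughly like the following when done by hand. This is a sketch, not the module's code: `<worker-node>`, `<cluster>`, and `<worker-id>` are placeholders, and it assumes `oc` is logged in to the cluster and the IBM Cloud CLI is installed with the Kubernetes Service plugin.

```bash
# Manual equivalent of the sequential cordon, drain, and replace steps.
oc adm cordon <worker-node>      # mark the node unschedulable
oc adm drain <worker-node> \
  --ignore-daemonsets --delete-emptydir-data   # evict workloads from the node
ibmcloud ks worker replace --cluster <cluster> --worker <worker-id>   # replace the VPC worker
```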
2 changes: 1 addition & 1 deletion examples/openshift-data-foundation/addon/4.12.0/README.md
@@ -1,4 +1,4 @@
-# [Tech Preview] Deploying and Managing Openshift Data Foundation
+# Deploying and Managing Openshift Data Foundation

This example shows how to deploy and manage Openshift Data Foundation (ODF) on an IBM Cloud VPC-based Red Hat OpenShift cluster. Note that this template is still in development; use caution before deploying it in production.

2 changes: 1 addition & 1 deletion examples/openshift-data-foundation/addon/4.13.0/README.md
@@ -1,4 +1,4 @@
-# [Tech Preview] Deploying and Managing Openshift Data Foundation
+# Deploying and Managing Openshift Data Foundation

This example shows how to deploy and manage Openshift Data Foundation (ODF) on an IBM Cloud VPC-based Red Hat OpenShift cluster. Note that this template is still in development; use caution before deploying it in production.

4 changes: 2 additions & 2 deletions examples/openshift-data-foundation/addon/4.14.0/README.md
@@ -1,4 +1,4 @@
-# [Tech Preview] Deploying and Managing Openshift Data Foundation
+# Deploying and Managing Openshift Data Foundation

This example shows how to deploy and manage Openshift Data Foundation (ODF) on an IBM Cloud VPC-based Red Hat OpenShift cluster. Note that this template is still in development; use caution before deploying it in production.

@@ -157,7 +157,7 @@ ocsUpgrade = "false" -> "true"
| cluster | Name of the cluster. | `string` | yes | -
| region | Region of the cluster | `string` | yes | -
| odfVersion | Version of the ODF add-on | `string` | yes | 4.12.0
-| osdSize | Enter the size for the storage devices that you want to provision for the Object Storage Daemon (OSD) pods | `string` | yes | 250Gi
+| osdSize | Enter the size for the storage devices that you want to provision for the Object Storage Daemon (OSD) pods | `string` | yes | 512Gi
| numOfOsd | The Number of OSD | `string` | yes | 1
| osdStorageClassName | Enter the storage class to be used to provision block volumes for Object Storage Daemon (OSD) pods | `string` | yes | ibmc-vpc-block-metro-10iops-tier
| autoDiscoverDevices | Set to true if automatically discovering local disks | `string` | no | true
11 changes: 11 additions & 0 deletions examples/openshift-data-foundation/addon/4.14.0/createaddon.sh
@@ -0,0 +1,11 @@
#!/bin/bash

set -e

WORKING_DIR=$(pwd)

cp ${WORKING_DIR}/variables.tf ${WORKING_DIR}/ibm_odf_addon/variables.tf
cp ${WORKING_DIR}/schematics.tfvars ${WORKING_DIR}/ibm_odf_addon/schematics.tfvars
cd ${WORKING_DIR}/ibm_odf_addon
terraform init
terraform apply --auto-approve -var-file ${WORKING_DIR}/ibm_odf_addon/schematics.tfvars
11 changes: 11 additions & 0 deletions examples/openshift-data-foundation/addon/4.14.0/createcrd.sh
@@ -0,0 +1,11 @@
#!/bin/bash

set -e

WORKING_DIR=$(pwd)

cp ${WORKING_DIR}/variables.tf ${WORKING_DIR}/ocscluster/variables.tf
cp ${WORKING_DIR}/schematics.tfvars ${WORKING_DIR}/ocscluster/schematics.tfvars
cd ${WORKING_DIR}/ocscluster
terraform init
terraform apply --auto-approve -var-file ${WORKING_DIR}/ocscluster/schematics.tfvars
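Taken together, the two scripts above form the deploy flow for this example. A minimal usage sketch, assuming they are run from the `examples/openshift-data-foundation/addon/4.14.0` directory with `variables.tf` and `schematics.tfvars` filled in:

```bash
# Deploy flow (sketch): install the ODF add-on first, then create the OcsCluster CRD.
sh createaddon.sh   # terraform apply of the ibm_odf_addon module
sh createcrd.sh     # terraform apply of the ocscluster module
```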
17 changes: 17 additions & 0 deletions examples/openshift-data-foundation/addon/4.14.0/deleteaddon.sh
@@ -0,0 +1,17 @@
#!/usr/bin/env bash

set -e

WORKING_DIR=$(pwd)

cp ${WORKING_DIR}/variables.tf ${WORKING_DIR}/ibm_odf_addon/variables.tf
cp ${WORKING_DIR}/schematics.tfvars ${WORKING_DIR}/ibm_odf_addon/schematics.tfvars
cd ${WORKING_DIR}/ibm_odf_addon
terraform init
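# If no local state exists yet, apply first so Terraform has state to destroy from.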
if [ -e ${WORKING_DIR}/ibm_odf_addon/terraform.tfstate ]
then
echo "ok"
else
terraform apply --auto-approve -var-file=${WORKING_DIR}/ibm_odf_addon/schematics.tfvars
fi
terraform destroy --auto-approve -var-file=${WORKING_DIR}/ibm_odf_addon/schematics.tfvars
19 changes: 19 additions & 0 deletions examples/openshift-data-foundation/addon/4.14.0/deletecrd.sh
@@ -0,0 +1,19 @@
#!/usr/bin/env bash

set -e

WORKING_DIR=$(pwd)

cp ${WORKING_DIR}/variables.tf ${WORKING_DIR}/ocscluster/variables.tf
cp ${WORKING_DIR}/schematics.tfvars ${WORKING_DIR}/ocscluster/schematics.tfvars
cd ${WORKING_DIR}/ocscluster
terraform init
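# If no local state exists, import the live OcsCluster resource (and apply) so destroy can remove it.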
if [ -e ${WORKING_DIR}/ocscluster/terraform.tfstate ]
then
echo "ok"
else
terraform import -var-file=${WORKING_DIR}/ocscluster/schematics.tfvars kubernetes_manifest.ocscluster_ocscluster_auto "apiVersion=ocs.ibm.io/v1,kind=OcsCluster,namespace=openshift-storage,name=ocscluster-auto"
terraform apply --auto-approve -var-file ${WORKING_DIR}/ocscluster/schematics.tfvars
fi

terraform destroy --auto-approve -var-file=${WORKING_DIR}/ocscluster/schematics.tfvars
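The import ID used in the script above follows the `kubernetes_manifest` resource's import format. The generic shape, with placeholders, is:

```bash
# Import an existing Kubernetes object into a kubernetes_manifest resource.
terraform import kubernetes_manifest.<resource_name> \
  "apiVersion=<group/version>,kind=<Kind>,namespace=<namespace>,name=<object-name>"
```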
examples/openshift-data-foundation/addon/4.14.0/schematics.tfvars
@@ -6,7 +6,7 @@
ibmcloud_api_key = ""
cluster = ""
region = ""
-odfVersion = ""
+odfVersion = "4.14.0"


# To create the Ocscluster Custom Resource Definition, with the following specs
25 changes: 25 additions & 0 deletions examples/openshift-data-foundation/addon/4.14.0/updatecrd.sh
@@ -0,0 +1,25 @@
#!/usr/bin/env bash

set -e

WORKING_DIR=$(pwd)

cp ${WORKING_DIR}/variables.tf ${WORKING_DIR}/ocscluster/variables.tf
cp ${WORKING_DIR}/schematics.tfvars ${WORKING_DIR}/ocscluster/schematics.tfvars
cd ${WORKING_DIR}/ocscluster
terraform init
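# If no local state exists, import the live OcsCluster so the upgrade applies against it.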
if [ -e ${WORKING_DIR}/ocscluster/terraform.tfstate ]
then
echo "ok"
else
terraform import -var-file=${WORKING_DIR}/ocscluster/schematics.tfvars kubernetes_manifest.ocscluster_ocscluster_auto "apiVersion=ocs.ibm.io/v1,kind=OcsCluster,namespace=openshift-storage,name=ocscluster-auto"
fi

terraform apply --auto-approve -var-file ${WORKING_DIR}/ocscluster/schematics.tfvars

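# Reset ocsUpgrade back to "false" once the upgrade apply has finished. BSD (macOS)
# sed invoked as -i'' -e leaves *-e backup files behind; they are removed below.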
sed -i'' -e "s|ocsUpgrade = \"true\"|ocsUpgrade = \"false\"|g" ${WORKING_DIR}/schematics.tfvars
sed -i'' -e "s|ocsUpgrade = \"true\"|ocsUpgrade = \"false\"|g" ${WORKING_DIR}/ocscluster/schematics.tfvars
rm -f ${WORKING_DIR}/schematics.tfvars-e
rm -f ${WORKING_DIR}/ocscluster/schematics.tfvars-e

terraform apply --auto-approve -var-file ${WORKING_DIR}/ocscluster/schematics.tfvars
11 changes: 11 additions & 0 deletions examples/openshift-data-foundation/addon/4.14.0/updateodf.sh
@@ -0,0 +1,11 @@
#!/bin/bash

set -e

WORKING_DIR=$(pwd)

cp ${WORKING_DIR}/variables.tf ${WORKING_DIR}/ibm_odf_addon/variables.tf
cp ${WORKING_DIR}/schematics.tfvars ${WORKING_DIR}/ibm_odf_addon/schematics.tfvars
cd ${WORKING_DIR}/ibm_odf_addon
terraform init
terraform apply --auto-approve -var-file ${WORKING_DIR}/ibm_odf_addon/schematics.tfvars
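The update scripts pair up the same way as the create scripts. An assumed upgrade flow for this example: set the new `odfVersion` and `ocsUpgrade = "true"` in `schematics.tfvars`, then:

```bash
# Upgrade flow (sketch): upgrade the ODF add-on first, then the OcsCluster CRD.
sh updateodf.sh   # re-applies the ibm_odf_addon module with the new odfVersion
sh updatecrd.sh   # upgrades the CRD, then resets ocsUpgrade to "false"
```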
9 changes: 9 additions & 0 deletions examples/openshift-data-foundation/satellite/README.md
@@ -0,0 +1,9 @@
# Deploying and Managing Openshift Data Foundation on Satellite

This example shows how to deploy and manage Openshift Data Foundation (ODF) on an IBM Cloud Satellite-based Red Hat OpenShift cluster.

#### Select the ODF template you wish to install on your ROKS Satellite cluster and follow its documentation.

- odf-remote - Choose this template if you have a CSI driver installed in your cluster, for example the azuredisk-csi-driver. You can use the CSI driver to dynamically provision storage volumes when deploying ODF.

- odf-local - Choose this template when you have local storage available on your worker nodes. If your storage volumes are visible when running `lsblk`, you can use these disks when deploying ODF, provided they are raw and unformatted (see the quick check below).
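A quick eligibility check, assuming shell access to a worker node (for example via `oc debug node/<node-name>`, a hypothetical node name):

```bash
# List block devices with filesystem info; a disk with an empty FSTYPE and
# no mountpoint is raw and unformatted, so it can be used for ODF.
lsblk -f
```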
177 changes: 177 additions & 0 deletions examples/openshift-data-foundation/satellite/odf-local/4.13/README.md
@@ -0,0 +1,177 @@
# Openshift Data Foundation - Local Deployment

This example shows how to deploy and manage Openshift Data Foundation (ODF) on an IBM Cloud Satellite-based Red Hat OpenShift cluster.

This sample configuration deploys ODF, then scales and upgrades it, using the `ibm_satellite_storage_configuration` and `ibm_satellite_storage_assignment` resources from the IBM Terraform provider.

For more information about ODF deployment and management on Satellite, see [OpenShift Data Foundation for local devices](https://cloud.ibm.com/docs/satellite?topic=satellite-storage-odf-local&interface=ui).

## Usage

### Option 1 - Command Line Interface

To run this example from your terminal, first download this directory, i.e. `examples/openshift-data-foundation/`.

```bash
$ cd satellite
```

```bash
$ terraform init
$ terraform plan --var-file input.tfvars
$ terraform apply --var-file input.tfvars
```

Run `terraform destroy --var-file input.tfvars` when you don't need these resources.

### Option 2 - IBM Cloud Schematics

To deploy and manage the Openshift Data Foundation add-on using IBM Cloud Schematics, follow the documentation at https://cloud.ibm.com/docs/schematics?topic=schematics-get-started-terraform.


## Example usage

### Deployment of ODF Storage Configuration and Assignment

The default `input.tfvars` is given below; change the parameter values in accordance with your requirements.

```hcl
# Common for both storage configuration and assignment
ibmcloud_api_key = ""
location = "" #Location of your storage configuration and assignment
configName = "" #Name of your storage configuration
region = ""


#ODF Storage Configuration
storageTemplateName = "odf-local"
storageTemplateVersion = "4.13"

## User Parameters
autoDiscoverDevices = "true"
osdDevicePaths = ""
billingType = "advanced"
clusterEncryption = "false"
kmsBaseUrl = null
kmsEncryption = "false"
kmsInstanceId = null
kmsInstanceName = null
kmsTokenUrl = null
ibmCosEndpoint = null
ibmCosLocation = null
ignoreNoobaa = false
numOfOsd = "1"
ocsUpgrade = "false"
workerNodes = null
encryptionInTransit = false
disableNoobaaLB = false
performCleanup = false

## Secret Parameters
ibmCosAccessKey = null
ibmCosSecretKey = null
iamAPIKey = "" #Required
kmsApiKey = null
kmsRootKey = null

#ODF Storage Assignment
assignmentName = ""
cluster = ""
updateConfigRevision = false

## NOTE ##
# The following variables will cause issues to your storage assignment lifecycle, so please use only with a storage configuration resource.
deleteAssignments = false
updateAssignments = false
```

Please note that in this example the storage configuration and its corresponding storage assignment are created for your specific Satellite cluster; if you'd like more control over the resources, you can split them into separate files.

### Scale-Up of ODF

The following variables in the `input.tfvars` file can be edited

* numOfOsd - To scale your storage
* workerNodes - To increase the number of Worker Nodes with ODF

```hcl
numOfOsd = "1" -> "2"
workerNodes = null -> "worker_1_ID,worker_2_ID"
updateConfigRevision = true
```
In this example we set the `updateConfigRevision` parameter to true in order to update our storage assignment with the latest configuration revision, i.e. the OcsCluster CRD is updated with the latest changes.

You could also use `updateAssignments` to directly update the storage configuration's assignments, but if you have a dependent `storage_assignment` resource, its lifecycle will be affected. It is recommended to use this parameter only when you've defined just the `storage_configuration` resource.
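After editing `input.tfvars`, re-run the plan/apply cycle from Option 1 to roll out the scale-up:

```bash
terraform plan --var-file input.tfvars
terraform apply --var-file input.tfvars
```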

### Upgrade of ODF

The following variables in the `input.tfvars` file should be changed in order to upgrade the ODF add-on and the Ocscluster CRD.

* storageTemplateVersion - Specify the version you wish to upgrade to
* ocsUpgrade - Must be set to `true` to upgrade the CRD

```hcl
# For ODF add-on upgrade
storageTemplateVersion = "4.13" -> "4.14"
ocsUpgrade = "false" -> "true"
```

Note that this operation deletes the existing configuration and its respective assignments, updates it to the next version, and reassigns it back to the previous clusters/groups. If used with a dependent assignment resource, its lifecycle will be affected. It is recommended to perform this scenario only when you've defined just the `storage_configuration` resource.
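As with the scale-up, the upgrade is triggered by re-applying with the edited variables file. Resetting `ocsUpgrade` afterwards mirrors the add-on scripts earlier in this PR and is an assumption for this template:

```bash
terraform apply --var-file input.tfvars
# After the upgrade completes, set ocsUpgrade = "false" in input.tfvars and re-apply.
```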

## Examples

* [ ODF Deployment & Management ](https://cloud.ibm.com/docs/satellite?topic=satellite-storage-odf-local&interface=ui)

<!-- BEGINNING OF PRE-COMMIT-TERRAFORM DOCS HOOK -->

## Requirements

| Name | Version |
|------|---------|
| terraform | ~> 0.14.8 |

## Providers

| Name | Version |
|------|---------|
| ibm | latest |

## Inputs

| Name | Description | Type | Required | Default
|------|-------------|------|----------|--------|
| ibmcloud_api_key | IBM Cloud API Key | `string` | yes | -
| cluster | Name of the cluster. | `string` | yes | -
| region | Region of the cluster | `string` | yes | -
| storageTemplateVersion | Version of the Storage Template (odf-local) | `string` | yes | -
| storageTemplateName | Name of the Storage Template (odf-local)| `string` | yes | -
| numOfOsd | The Number of OSD | `string` | yes | 1
| autoDiscoverDevices | Set to true if automatically discovering local disks | `string` | no | true
| billingType | The billing plan for the ODF add-on | `string` | no | advanced
| performCleanup | Set to true if you want to perform complete cleanup of ODF on assignment deletion. | `bool` | yes | false
| clusterEncryption | To enable at-rest encryption of all disks in the storage cluster | `string` | no | false
| iamApiKey | Your IAM API key. | `string` | yes | -
| kmsEncryption | Set to true to enable HPCS Encryption | `string` | yes | false
| kmsBaseUrl | The HPCS Base URL | `string` | no | null
| kmsInstanceId | The HPCS Service ID | `string` | no | null
| kmsSecretName | The HPCS secret name | `string` | no | null
| kmsInstanceName | The HPCS service name | `string` | no | null
| kmsTokenUrl | The HPCS Token URL | `string` | no | null
| ignoreNoobaa | Set to true if you do not want MultiCloudGateway | `bool` | no | false
| ocsUpgrade | Set to true to upgrade Ocscluster | `string` | no | false
| osdDevicePaths | IDs of the disks to be used for OSD pods if using local disks or standard classic cluster | `string` | no | null
| workerNodes | Provide the names of the worker nodes on which to install ODF. Leave blank to install ODF on all worker nodes | `string` | no | null
| encryptionInTransit |To enable in-transit encryption. Enabling in-transit encryption does not affect the existing mapped or mounted volumes. After a volume is mapped/mounted, it retains the encryption settings that were used when it was initially mounted. To change the encryption settings for existing volumes, they must be remounted again one-by-one. | `bool` | no | false
| disableNoobaaLB | Specify true to disable the NooBaa public load balancer. | `bool` | no | false

Refer to https://cloud.ibm.com/docs/satellite?topic=satellite-storage-odf-local&interface=ui#odf-local-4.13-parameters for the full parameter reference.

## Note

* Users should change only the values of the variables within quotes; variables that are not set should be left untouched at their default values.
* `workerNodes` takes a comma-separated string of the names of the worker nodes on which you wish to enable ODF.
* During an ODF storage template update, it is recommended to delete all Terraform-related assignments beforehand, as their lifecycle will be affected; during the update, new storage assignments are created internally with new UUIDs.