Correction/Changes in Quick Start Guides for ODF #100

Merged
74 changes: 43 additions & 31 deletions controllers/quickstart_constants.go
@@ -42,10 +42,10 @@ spec:

tasks:
-
title: Connecting applications to block or file storage (PersistentVolumeClaims)
title: Connect applications to block or file storage (PersistentVolumeClaims)
description: >-

PersistentVolumes (PVs) allow your data to exist beyond your pod's lifecycle, even after you restart, reschedule, or delete it.
PersistentVolumes (PVs) allow your data to exist beyond the pod's lifecycle, even after you restart, reschedule, or delete it.


After an administrator sets up an OpenShift Data Foundation StorageSystem, developers can use PersistentVolumeClaims (PVCs) to request PV resources without needing to know anything specific about their underlying storage infrastructure.
@@ -57,38 +57,40 @@ spec:

2. Select your project from the **Project** dropdown and find your application in the list of deployments.

3. Open the action menu &*⋮&* and select **Add Storage**.
3. Open the action menu &*⋮&* and select **Add storage**.

4. To create a claim, select **Create new claim**. To restore an existing PV to a redeployed application, select **Use existing claim**.
4. Select **Storage type** PersistentVolumeClaim if it isn’t already selected.

5. Select the appropriate storage type for your application:
5. To restore an existing PV to a redeployed application, select **Use existing claim**. To create a claim, select **Create new claim**.

6. Select the appropriate storage type for your application:

- &*Block storage:&* Select **ocs-storagecluster-ceph-rbd**.

- &*File storage:&* Select **ocs-storagecluster-cephfs**.

6. Specify storage details.
7. Specify your storage details. If you’re using an existing claim, specify your mount path details.

7. Click **Save**.
8. Select **Save**.
review:
instructions: >-
To verify that your application is using PersistentVolumeClaim:

- Click the name of the deployment that you assigned storage to.
- Select the name of the Deployment that you assigned storage to.

- On the deployment details page, look at Type in the Volumes section to verify the type of the PVC you attached.
- On the Deployment details page, look at Type in the Volumes section to verify the type of the PVC you attached.

- Click the PVC name and verify the storage class name in the PersistentVolumeClaim Overview page.
- Select the PVC name and verify the StorageClass name in the PersistentVolumeClaim Overview page.

failedTaskHelp: Try the steps again.
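For reference, the claim that the **Add storage** flow creates corresponds to a plain PersistentVolumeClaim manifest. A minimal sketch, assuming an illustrative claim name and size (not part of the quick start itself):

```yaml
# Minimal sketch of a PVC against the ODF block StorageClass.
# "my-app-data" and 10Gi are illustrative values.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data
spec:
  accessModes:
    - ReadWriteOnce                 # block (RBD) volumes are typically RWO
  resources:
    requests:
      storage: 10Gi
  storageClassName: ocs-storagecluster-ceph-rbd   # or ocs-storagecluster-cephfs for shared file storage
```

Swapping `storageClassName` between the two classes is the manifest-level equivalent of the block/file choice in the steps above.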
-
title: Connecting application to object storage (Object Bucket Claims)
title: Connect applications to object storage (Object Bucket Claims)
description: >-

Object Bucket Claims provide an easy way to consume object storage across OpenShift Data Foundation.


Use your object service endpoint, access key, and secret key to add your object service provider to OpenShift Data Foundation as a BackingStore. See Learn how to add storage resources for hybrid or multicloud docs.
Use your object service endpoint, access key, and secret key to add your object service provider to OpenShift Data Foundation as a BackingStore. See: Learn how to add storage resources for hybrid or multicloud docs.


**To create an Object Bucket Claim and connect it to your application:**
@@ -103,33 +105,39 @@ spec:

- &*Multicloud Object Gateway:&* Select **openshift-storage.noobaa.io**.

4. To create your Object Bucket Claim, select **Create**.
4. Select Object Bucket Claim for StorageClass **openshift-storage.noobaa.io**.

5. To create your Object Bucket Claim, select **Create**.

6. On the **Object Bucket Claims** page, verify that your object bucket claim’s status is **Bound**.

7. Open the action menu &*⋮&* and select **Attach to deployment**.

5. Open the action menu &*⋮&* and select **Attach to deployment**.
8. To attach your Object Bucket Claim to your application, select the application's name from the dropdown list under **Deployment Name** in the dialog that appears.

6. To attach the Object Bucket to your application, select its name from the application list.
9. Select **Attach**.

review:
instructions: >-
To verify that your application is using an Object Bucket Claim:

- Click the name of the deployment that you assigned storage to.
- Select the name of the Deployment that you assigned storage to.

- On the deployment click on Environment tab and check if a the new secret and config map were added.
- On the Deployment details page, select the **Environment** tab and check if a new **Secret** and **ConfigMap** were added.

failedTaskHelp: Try the steps again.
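As a sketch of what the console creates in this task, an ObjectBucketClaim bound to the Multicloud Object Gateway class might look like the following (the claim name is illustrative). Once the claim is **Bound**, the matching Secret and ConfigMap that **Attach to deployment** injects into your pods carry the bucket name, endpoint, and credentials:

```yaml
# Illustrative ObjectBucketClaim backed by the Multicloud Object Gateway.
# "my-app-bucket" is a made-up claim name.
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: my-app-bucket
spec:
  generateBucketName: my-app-bucket      # prefix for the generated bucket name
  storageClassName: openshift-storage.noobaa.io
```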
-
title: Using the dashboards to monitor OpenShift Data Foundation resources
title: Use the dashboards to monitor OpenShift Data Foundation resources
description: >-

Monitor any storage resource manage by OpenShift Data Foundation through various dashboard overviews.
Monitor any storage resource managed by OpenShift Data Foundation through various dashboard overviews.


In the side navigation, select **Storage > OpenShift Data Foundation** to access the OpenShift Data Foundation dashboard view.

1. Observe high level insights for all your StorageSystems with the overview screen.

1. To access more specific information for a system, drill down to its system overview.
1. To access more specific information for a system, select its **System Capacity** tile and drill down to its system overview.

- The **Block & File overview** tab shows the holistic state of OpenShift Data Foundation and the state of any PersistentVolumes.

@@ -148,7 +156,7 @@ kind: ConsoleQuickStart
metadata:
name: odf-configuration
spec:
displayName: OpenShift Data Foundation Configuration & Management
displayName: Configure and manage OpenShift Data Foundation
durationMinutes: 5
icon: data:image/svg+xml;base64,PHN2ZyBlbmFibGUtYmFja2dyb3VuZD0ibmV3IDAgMCAxMDAgMTAwIiBoZWlnaHQ9IjEwMCIgdmlld0JveD0iMCAwIDEwMCAxMDAiIHdpZHRoPSIxMDAiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyI+PHBhdGggZD0ibTY2LjcgNTUuOGM2LjYgMCAxNi4xLTEuNCAxNi4xLTkuMiAwLS42IDAtMS4yLS4yLTEuOGwtMy45LTE3Yy0uOS0zLjctMS43LTUuNC04LjMtOC43LTUuMS0yLjYtMTYuMi02LjktMTkuNS02LjktMy4xIDAtNCA0LTcuNiA0LTMuNSAwLTYuMS0yLjktOS40LTIuOS0zLjIgMC01LjIgMi4xLTYuOCA2LjYgMCAwLTQuNCAxMi41LTUgMTQuMy0uMS4zLS4xLjctLjEgMSAuMSA0LjcgMTkuMiAyMC42IDQ0LjcgMjAuNm0xNy4xLTZjLjkgNC4zLjkgNC44LjkgNS4zIDAgNy40LTguMyAxMS40LTE5LjEgMTEuNC0yNC42IDAtNDYuMS0xNC40LTQ2LjEtMjMuOSAwLTEuMy4zLTIuNi44LTMuOS04LjkuNS0yMC4zIDIuMS0yMC4zIDEyLjIgMCAxNi41IDM5LjIgMzYuOSA3MC4yIDM2LjkgMjMuOCAwIDI5LjgtMTAuNyAyOS44LTE5LjIgMC02LjctNS44LTE0LjMtMTYuMi0xOC44IiBmaWxsPSIjZWQxYzI0Ii8+PHBhdGggZD0ibTgzLjggNDkuOGMuOSA0LjMuOSA0LjguOSA1LjMgMCA3LjQtOC4zIDExLjQtMTkuMSAxMS40LTI0LjYgMC00Ni4xLTE0LjQtNDYuMS0yMy45IDAtMS4zLjMtMi42LjgtMy45bDEuOS00LjhjLS4xLjMtLjEuNy0uMSAxIDAgNC44IDE5LjEgMjAuNyA0NC43IDIwLjcgNi42IDAgMTYuMS0xLjQgMTYuMS05LjIgMC0uNiAwLTEuMi0uMi0xLjh6IiBmaWxsPSIjMDEwMTAxIi8+PC9zdmc+
description: Learn how to configure OpenShift Data Foundation to meet your deployment
@@ -162,13 +170,13 @@ spec:

- Created a StorageSystem.

- Set a cluster size.
- Set a cluster size.

- Provisioned a storage subsystem.

- Deployed necessary drivers.
- Deployed necessary drivers.

- Created StorageClasses
- Created StorageClasses.


These installation actions enable you to easily provision and consume your deployed storage services.
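As an illustration of the "Created StorageClasses" point above, the block class referenced throughout these quick starts is an ordinary StorageClass backed by the operator's Ceph CSI driver. A trimmed sketch, with cluster-specific parameters omitted:

```yaml
# Trimmed sketch of the operator-created block StorageClass; parameters omitted.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ocs-storagecluster-ceph-rbd
provisioner: openshift-storage.rbd.csi.ceph.com   # the CephFS class uses openshift-storage.cephfs.csi.ceph.com
reclaimPolicy: Delete
allowVolumeExpansion: true
```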
@@ -177,7 +185,7 @@ spec:
Monitor your storage regularly so that you don't run out of storage space.


As you consume storage, you'll receive cluster capacity alerts at 75% capacity (near-full) and 85% (full) capacity. Always address capacity warnings promptly.
As you consume storage, you'll receive cluster capacity alerts at 75% (near-full) capacity and 85% (full) capacity. Always address capacity warnings promptly.


**To expand your StorageSystem:**
@@ -190,18 +198,22 @@ spec:

4. Select **Add Capacity**.

5. Select your desired StorageClass from the dropdown.

6. Select **Add**. Once your selected StorageSystem’s status changes to **Ready**, you’ve successfully expanded your StorageSystem.

review:
instructions: |-
#### To verify that you have expanded your StorageSystem.
Did you expand your StorageSystem?
#### To verify that you expanded your StorageSystem:
Navigate to **Overview > Block and File** for this StorageSystem. Under the **Raw capacity** section, has your **Available** capacity increased?
failedTaskHelp: This task isn’t verified yet. Try the task again.
summary:
success: You have expanded the StorageSystem for the ODF operator!
failed: Try the steps again.
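Under the hood, **Add Capacity** grows the device sets on the StorageCluster resource. A hedged sketch of the relevant fragment, with assumed names and sizes (your backing StorageClass and device-set sizing will differ):

```yaml
# Illustrative StorageCluster fragment after one Add Capacity operation.
# Increasing "count" adds another set of OSDs; names and sizes below are assumptions.
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  storageDeviceSets:
    - name: ocs-deviceset
      count: 2                      # was 1 before the expansion in this sketch
      replica: 3
      dataPVCTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          volumeMode: Block
          storageClassName: gp3-csi # assumed backing class for this sketch
          resources:
            requests:
              storage: 2Ti
```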
- title: Configure BucketClass
description: |-

BucketClass determines a bucket's data location and provides a set of policies (placement, namespace, caching) that apply to all buckets created with the same class.
BucketClass determines a bucket's data location and provides a set of policies (placement, namespace, caching) that applies to all buckets created with the same class.


BucketClasses come in two types:
@@ -212,7 +224,7 @@ spec:

**To create a BucketClass:**

1. in the navigation menu, select **Installed Operators > OpenShift Data Foundation**.
1. In the main navigation menu, select **Storage > OpenShift Data Foundation**.

2. Select the **Bucket Class** tab.

@@ -221,8 +233,8 @@ spec:
4. In the wizard, follow each step to create your BucketClass.
review:
instructions: |-
#### To verify that you have created BucketClass and BackingStore.
Is the BucketClass in ready state?
#### To verify that you created BucketClass and BackingStore:
Is the BucketClass in **Ready** state?
failedTaskHelp: This task isn’t verified yet. Try the task again.
summary:
success: You have successfully created BucketClass
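The wizard in this task produces a NooBaa BucketClass resource. A minimal sketch of a placement-type class that mirrors data across two BackingStores (the store names are illustrative):

```yaml
# Illustrative placement BucketClass mirroring data across two BackingStores.
# "backing-store-1" and "backing-store-2" are made-up names.
apiVersion: noobaa.io/v1alpha1
kind: BucketClass
metadata:
  name: mirrored-bucket-class
  namespace: openshift-storage
spec:
  placementPolicy:
    tiers:
      - placement: Mirror
        backingStores:
          - backing-store-1
          - backing-store-2
```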