7 changes: 7 additions & 0 deletions _topic_map.yml
@@ -1235,6 +1235,13 @@ Topics:
- Name: Release Notes
File: serverless-release-notes
---
Name: Migration
Dir: migration
Distros: openshift-enterprise
Topics:
- Name: Migrating OpenShift 3 to 4
File: migrating-openshift-3-to-4
---
Name: Support
Dir: support
Distros: openshift-enterprise,openshift-online,openshift-dedicated
Binary file added images/OCP_3_to_4_App_migration.png
Binary file added images/darkcircle-1.png
Binary file added images/darkcircle-2.png
Binary file added images/darkcircle-3.png
Binary file added images/darkcircle-4.png
Binary file added images/darkcircle-5.png
Binary file added images/darkcircle-6.png
Binary file added images/darkcircle-7.png
Binary file added images/darkcircle-8.png
Binary file added images/migration-PV-copy.png
Binary file added images/migration-PV-move.png
Binary file added images/migration-architecture.png
1 change: 1 addition & 0 deletions migration/images
92 changes: 92 additions & 0 deletions migration/migrating-openshift-3-to-4.adoc
@@ -0,0 +1,92 @@
[id="migrating-openshift-3-to-4"]
= Migrating {product-title} version 3 to 4
include::modules/common-attributes.adoc[]
:context: migrating-openshift-3-to-4

toc::[]

You can migrate application workloads from {product-title} 3.7 to a later version with the Cluster Application Migration (CAM) tool. The CAM tool enables you to control the migration and to minimize application downtime. It provides a web console and an API, based on Kubernetes custom resources, for migrating stateful application workloads from a source cluster to a target cluster at the granularity of a namespace.

You can migrate data to any storage class that is available on the target cluster, for example, from Red Hat Gluster Storage or NFS storage to Red Hat Ceph Storage.

Optionally, you can migrate the {product-title} 3.7 (and later) control plane settings to {product-title} 4.x with the Control Plane Migration Assistant (CPMA). See xref:migration-understanding-cpma_{context}[].

.Prerequisites

* You must have `cluster-admin` privileges.
* You must have `podman` installed.
* The source cluster(s) must be {product-title} 3.7 or later.
Review comment (Member):

Add an additional prerequisite

"The migration controller must be able to communicate with both the source and destination clusters, we recommend the migration controller be installed on the destination cluster."

Review comment (Contributor):

That's a recommendation, not a prerequisite. The default installation is on the target cluster, so I'm not sure it's necessary to mention this. The user would have to go out of their way to install it on another cluster.

* The target cluster must be {product-title} 3.9 or later.
Review comment (Member):

Suggest we change to:

"The target cluster is recommended to be the latest released version of {product-title}."

For background...
This will technically work on 3.7 or later for both source and destination.
Recommendation is customer migrate to latest released OpenShift for the destination.

Review comment (Contributor):

I suggest we leave this as is. We should treat these situations with a support exception.
This will be confusing if you look at this with the lifecycle.

Review comment (Member):

@sferich888 when you say leave as-is, I just want to confirm.

Right now doc says: "The target cluster must be {product-title} 3.9 or later."

I think it should say:
"The target cluster is recommended to be the latest released version of {product-title}."

Then we can address 3->3 migrations on a case-by-case basis. Do you agree?

* The replication repository object storage must support the S3 API, for example, Red Hat NooBaa Storage, AWS S3, and generic S3 storage providers.
Review comment (Member):

I assume we need to change "Red Hat NooBaa Storage".

Upstream: NooBaa
Downstream: MCG

For this immediate release we will use upstream NooBaa as an example, after OCS4 releases we will change to leverage an operator for MCG.

I will ask product management for their recommendation of what to call this now.

Review comment (Member, @jwmatthews, Oct 17, 2019):

Confirmed with product management.
Let's change 'Red Hat NooBaa Storage' to 'NooBaa'

Suggest we reword to something like below.

"An object storage provider is required, we refer to this object store as a ‘Replication Repository’. The Replication Repository needs to support the S3 API, for example, NooBaa, AWS S3, and generic S3 storage providers. In addition, the source and destination clusters must both be able to read and write to this object store."

Review comment (Contributor):

Do we need to call this out as "upstream" and possibly include a note / call out (warning) about support not being provided for upstream projects?

Review comment (Contributor):

Changed Red Hat NooBaa Storage to NooBaa.

* If your applications use images from the `openshift` namespace, the required versions of the images must be present on the target cluster. If not, you must link:../openshift_images/image-streams-manage.html#images-imagestreams-update-tag_image-streams-managing[update the `imagestreamtags` references] to use an available version that is compatible with your application.
+
[NOTE]
====
If the `imagestreamtags` cannot be updated, you can manually upload equivalent images to the application namespaces and update the applications to reference them.

The following `imagestreamtags` have been _removed_ from {product-title} 4.2:

* `dotnet:1.0`, `dotnet:1.1`, `dotnet:2.0`
* `dotnet-runtime:2.0`
* `mariadb:10.1`
* `mongodb:2.4`, `mongodb:2.6`
* `mysql:5.5`, `mysql:5.6`
* `nginx:1.8`
* `nodejs:0.10`, `nodejs:4`, `nodejs:6`
* `perl:5.16`, `perl:5.20`
* `php:5.5`, `php:5.6`
* `postgresql:9.2`, `postgresql:9.4`, `postgresql:9.5`
* `python:3.3`, `python:3.4`
* `ruby:2.0`, `ruby:2.2`
====
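When an `imagestreamtags` reference must change, the update can be scripted rather than done in the web console; a minimal sketch with `oc`, where the `my-app` namespace and the `nodejs` tag values are illustrative assumptions, not values from this migration:

```shell
# Inspect which image stream tags the build configs in a namespace reference
# (my-app is a hypothetical namespace used for illustration).
oc get bc -n my-app -o jsonpath='{range .items[*]}{.metadata.name}{" -> "}{.spec.strategy.sourceStrategy.from.name}{"\n"}{end}'

# Repoint a build config from a removed tag (nodejs:4) to an available,
# compatible version on the target cluster.
oc patch bc/my-app -n my-app --type=json \
  -p '[{"op": "replace", "path": "/spec/strategy/sourceStrategy/from/name", "value": "nodejs:10"}]'
```
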

include::modules/migration-understanding-cam.adoc[leveloffset=+1]

== Installing the CAM Operator

The CAM Operator must be installed on all clusters:

* {product-title} 3.x: You must install the CAM Operator manually, because Operator Lifecycle Manager (OLM) is not supported.
* {product-title} 4.x: You can install the CAM Operator with OLM.

include::modules/migration-installing-migration-operator-manually.adoc[leveloffset=+2]
include::modules/migration-installing-migration-operator-olm.adoc[leveloffset=+2]

== Configuring cross-origin resource sharing

You enable communication with the CAM tool by configuring cross-origin resource sharing on the {product-title} 3 cluster and, if it is not hosting the CAM tool, on the {product-title} 4 cluster.

include::modules/migration-configuring-cors-3.adoc[leveloffset=+2]

You can now verify xref:migration-verifying-cors_{context}[cross-origin resource sharing].

include::modules/migration-configuring-cors-4.adoc[leveloffset=+2]

You can now verify xref:migration-verifying-cors_{context}[cross-origin resource sharing].

include::modules/migration-verifying-cors.adoc[leveloffset=+2]

== Migrating applications with the CAM web console

include::modules/migration-launching-cam.adoc[leveloffset=+2]
include::modules/migration-adding-cluster-to-cam.adoc[leveloffset=+2]
include::modules/migration-adding-replication-repository-to-cam.adoc[leveloffset=+2]
include::modules/migration-creating-migration-plan-cam.adoc[leveloffset=+2]
include::modules/migration-running-migration-plan-cam.adoc[leveloffset=+2]

== Migrating control plane settings with the Control Plane Migration Assistant

include::modules/migration-understanding-cpma.adoc[leveloffset=+2]
include::modules/migration-installing-cpma.adoc[leveloffset=+2]
include::modules/migration-using-cpma.adoc[leveloffset=+2]

== Troubleshooting a failed migration

You can view the migration custom resources (CRs) and download logs to troubleshoot a failed migration.

include::modules/migration-custom-resources.adoc[leveloffset=+2]
include::modules/migration-viewing-migration-crs.adoc[leveloffset=+2]
include::modules/migration-downloading-logs.adoc[leveloffset=+2]
include::modules/migration-restic-timeout.adoc[leveloffset=+2]

include::modules/migration-known-issues.adoc[leveloffset=+1]
1 change: 1 addition & 0 deletions migration/modules
30 changes: 30 additions & 0 deletions modules/migration-adding-cluster-to-cam.adoc
@@ -0,0 +1,30 @@
// Module included in the following assemblies:
//
// migration/migrating_openshift_3_to_4/migrating-openshift-3-to-4.adoc
[id='migration-adding-cluster-to-cam_{context}']
= Adding a cluster to the CAM web console

You can add a cluster to the CAM web console.

.Prerequisites

* Cross-origin resource sharing is configured on the cluster.

.Procedure

. Log in to the cluster you are adding to the CAM web console.
. Obtain the service account token:
+
----
$ oc sa get-token mig -n openshift-migration
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ
----

. In the *Clusters* section of the CAM web console, click *Add cluster*.
. In the *Cluster* window, fill in the following fields:

* *Cluster name*: Can contain lowercase letters (`a-z`) and numbers (`0-9`). Must not contain spaces or international characters.
* *Url*: URL of the cluster's API server, for example, `https://_<master1.example.com>_:8443`.
* *Service account token*

. Click *Add cluster*. The cluster appears in the *Clusters* section.
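The *Cluster name* constraint above can be checked before submitting the form; a small local sketch, where the pattern is simply the rule stated above (lowercase letters and digits) and deliberately says nothing about hyphens, which this procedure neither allows nor forbids:

```shell
# Returns success if the name satisfies the stated constraint:
# lowercase letters (a-z) and numbers (0-9), no spaces or other characters.
is_valid_cluster_name() {
  printf '%s\n' "$1" | grep -qE '^[a-z0-9]+$'
}

is_valid_cluster_name 'cluster1' && echo 'cluster1: valid'
is_valid_cluster_name 'My Cluster' || echo 'My Cluster: invalid'
```
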
28 changes: 28 additions & 0 deletions modules/migration-adding-replication-repository-to-cam.adoc
@@ -0,0 +1,28 @@
// Module included in the following assemblies:
//
// migration/migrating_openshift_3_to_4/migrating-openshift-3-to-4.adoc
[id='migration-adding-replication-repository-to-cam_{context}']
= Adding a replication repository to the CAM web console

.Procedure

. In the *Replication repositories* section, click *Add replication repository*.

. In the *Replication repository* window, fill in the following fields:

* *Replication repository name*
* *S3 bucket name*
* *S3 bucket region*: Required for AWS S3 if the bucket region is not *us-east-1*. Optional for a generic S3 repository.
* *S3 endpoint*: Required for a generic S3 repository. This is the URL of the S3 service, not the bucket, for example, `http://_<minio-gpte-minio.apps.cluster.com>_`.
Review comment (Member):

We will want to update this to reference an example NooBaa endpoint instead of minio

I am working on other updates to show an example of how to setup the S3 pre-reqs.
Will share later today.

Review comment (Contributor):

Changed URL to http://s3-noobaa.apps.cluster.com

+
[NOTE]
====
Currently, `https://` is supported only for AWS. For other providers, use `http://`.
====

* *S3 provider access key*
* *S3 provider secret access key*

. Click *Add replication repository* and wait for connection validation.

. Click *Close*. The repository appears in the *Replication repositories* section.
47 changes: 47 additions & 0 deletions modules/migration-configuring-cors-3.adoc
@@ -0,0 +1,47 @@
// Module included in the following assemblies:
//
// migration/migrating_openshift_3_to_4/migrating-openshift-3-to-4.adoc
[id='migration-configuring-cors-3_{context}']
= Configuring cross-origin resource sharing on {product-title} 3 clusters

To access the API server of an {product-title} 3 cluster from a web application (in this case, the Cluster Application Migration tool) on a different host, you must configure cross-origin resource sharing by adding the CAM host name to the master configuration file.

.Procedure

. Log in to the {product-title} 3 cluster.
. Add the CAM host name to the `corsAllowedOrigins` stanza in the *_/etc/origin/master/master-config.yaml_* configuration file:
+
----
corsAllowedOrigins:
- (?i)//migration-openshift-migration\.apps\.cluster\.com(:|\z)
Review comment (@eriknelson, Oct 17, 2019):

I saw some folks confused by this, thinking literally (?i)//migration-openshift-migration\.apps\.cluster\.com(:|\z) is the string that they should use.

I think prior to "Configuring cross-origin resource sharing on OpenShift Container Platform 3 clusters", we need a section that explains how to get the UI's Origin, and we need to make it clear that the docs will continue with (?i)//migration-openshift-migration\.apps\.cluster\.com(:|\z) as an example Origin, but this value will be different for every user.

You enable communication with the CAM by configuring cross-origin resource sharing on OpenShift Container Platform 3 and 4 (if the OpenShift Container Platform 4 cluster is not hosting the CAM).

# Obtaining the UI's Origin
First, you must obtain the UI's Origin value to be whitelisted across all of the clusters involved in a migration.

1) You must log into the cluster that is hosting the web UI.
2) The following command will export the encoded `corsAllowedOrigin` for your cluster: `oc get -n openshift-migration route/migration -o go-template='(?i)//{{ .spec.host }}(:|\z){{ println }}' | sed 's,\.,\\.,g'`
3) This output string should be used where an allowed origin must be whitelisted. As an example, we'll use `(?i)//migration-openshift-migration\.apps\.cluster\.com(:|\z)`, but this will differ for your cluster.

Review comment:

The instructions in the upstream README contains a command to obtain the value for the master-config.yaml in this section. https://github.com/fusor/mig-operator/blob/master/README.md#openshift-3. It may be helpful to add similar to the docs.

Review comment (Contributor):

I've added the steps for obtaining the CAM host URL and marked (?i)//migration-openshift-migration\.apps\.cluster\.com(:|\z) with a callout saying that it needs to be replaced with the user's CAM host URL.

- (?i)//openshift\.default\.svc(:|\z)
- (?i)//kubernetes\.default(:|\z)
----
+
[NOTE]
====
This example uses the following syntax:

* The `(?i)` makes it case-insensitive.
* The `//` pins to the beginning of the domain and matches the double slash
following `http:` or `https:`.
* The `\.` escapes dots in the domain name.
* The `(:|\z)` matches the end of the domain name `(\z)` or a port separator
`(:)`.
====

. Restart the API server and controller manager components to apply the changes:
+
* In {product-title} 3.7 and 3.9, these components run as stand-alone host processes managed by `systemd`:
+
----
$ systemctl restart atomic-openshift-master-api
$ systemctl restart atomic-openshift-master-controllers
----

* In {product-title} 3.10 and 3.11, these components run in static pods managed by the kubelet:
+
----
$ /usr/local/bin/master-restart api
$ /usr/local/bin/master-restart controller
----
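The regular expression explained in the note above can be exercised locally before editing the master configuration; a quick sketch, assuming GNU grep, whose `-P` flag enables the PCRE syntax that `corsAllowedOrigins` uses:

```shell
regex='(?i)//migration-openshift-migration\.apps\.cluster\.com(:|\z)'

# Case-insensitive, with or without a port:
echo 'https://MIGRATION-openshift-migration.apps.cluster.com' | grep -qP "$regex" && echo 'match'
echo 'http://migration-openshift-migration.apps.cluster.com:443' | grep -qP "$regex" && echo 'match with port'

# An unrelated origin is rejected:
echo 'https://evil.example.com' | grep -qP "$regex" || echo 'no match'
```
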
59 changes: 59 additions & 0 deletions modules/migration-configuring-cors-4.adoc
@@ -0,0 +1,59 @@
// Module included in the following assemblies:
//
// migration/migrating_openshift_3_to_4/migrating-openshift-3-to-4.adoc
[id='migration-configuring-cors-4_{context}']
= Configuring cross-origin resource sharing on {product-title} 4 clusters

If you installed the migration controller on your {product-title} 4 cluster, the cluster's resources are modified by the CAM Operator and you do not need to configure cross-origin resource sharing. If you did not install the migration controller, you must configure cross-origin resource sharing manually.

To access the API server of an {product-title} 4 cluster from a web application on a different host (in this case, the Cluster Application Migration tool), you must add the CAM host name to the API server and the Kubernetes API server CRs.

.Procedure

. Log in to the {product-title} 4 cluster.
. Edit the API server CR:
+
----
$ oc edit authentication.operator cluster
----

. Add the CAM host name to the `additionalCORSAllowedOrigins` stanza:
+
[source,yaml]
----
spec:
additionalCORSAllowedOrigins:
- (?i)//migration-openshift-migration\.apps\.cluster\.com(:|\z)
----
+
[NOTE]
====
This example uses the following syntax:

* The `(?i)` makes it case-insensitive.
* The `//` pins to the beginning of the domain and matches the double slash
following `http:` or `https:`.
* The `\.` escapes dots in the domain name.
* The `(:|\z)` matches the end of the domain name `(\z)` or a port separator
`(:)`.
====

. Save the file to apply the changes.

. Edit the Kubernetes API server CR:
+
----
$ oc edit kubeapiserver.operator cluster
----

. Add `corsAllowedOrigins` and the CAM host name to the `unsupportedConfigOverrides` stanza:
+
[source,yaml]
----
spec:
unsupportedConfigOverrides:
corsAllowedOrigins:
- (?i)//migration-openshift-migration\.apps\.cluster\.com(:|\z)
----

. Save the file to apply the changes.
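After saving, it can be useful to confirm that the operators accepted the entries; a minimal sketch, where the jsonpath expressions mirror the stanzas edited above and the exact output depends on your cluster:

```shell
# Verify the CORS entry in the authentication operator configuration
oc get authentication.operator cluster \
  -o jsonpath='{.spec.additionalCORSAllowedOrigins}{"\n"}'

# Verify the override on the Kubernetes API server operator configuration
oc get kubeapiserver.operator cluster \
  -o jsonpath='{.spec.unsupportedConfigOverrides.corsAllowedOrigins}{"\n"}'
```
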
24 changes: 24 additions & 0 deletions modules/migration-creating-migration-plan-cam.adoc
@@ -0,0 +1,24 @@
// Module included in the following assemblies:
//
// migration/migrating_openshift_3_to_4/migrating-openshift-3-to-4.adoc
[id='migration-creating-migration-plan-cam_{context}']
= Creating a migration plan in the CAM web console

.Prerequisites

* Source and target clusters added to the CAM web console
* S3-compatible object storage that is accessible to the source and target clusters

.Procedure

. In the *Plans* section, click *Add plan*.
. Enter the *Plan name* and click *Next*.
+
The *Plan name* can contain up to 253 lowercase alphanumeric characters (`a-z`, `0-9`). It must not contain spaces or underscores (`_`).
. Select a *Source cluster*.
. Select a *Target cluster*.
. Select a *Replication repository*.
. Select the projects to be migrated and click *Next*.
. Select *Copy* or *Move* for the persistent volumes and click *Next*.
. Select a *Storage class* for the persistent volumes and click *Next*.
. Click *Close*. The migration appears in the *Plans* section.
40 changes: 40 additions & 0 deletions modules/migration-custom-resources.adoc
@@ -0,0 +1,40 @@
// Module included in the following assemblies:
//
// migration/migrating_openshift_3_to_4/migrating-openshift-3-to-4.adoc
[id='migration-custom-resources_{context}']
= Migration custom resources

The CAM tool creates the following custom resources (CRs) for migration:

image::migration-architecture.png[migration architecture diagram]

<1> link:https://github.com/fusor/mig-controller/blob/master/pkg/apis/migration/v1alpha1/migcluster_types.go[MigCluster] (configuration, CAM cluster): Cluster definition

<2> link:https://github.com/fusor/mig-controller/blob/master/pkg/apis/migration/v1alpha1/migstorage_types.go[MigStorage] (configuration, CAM cluster): Storage definition

<3> link:https://github.com/fusor/mig-controller/blob/master/pkg/apis/migration/v1alpha1/migplan_types.go[MigPlan] (configuration, CAM cluster): Migration plan
+
The MigPlan CR describes the source and target clusters, the replication repository, and the namespaces being migrated. It is associated with zero, one, or many MigMigration CRs.
+
[NOTE]
====
Deleting a MigPlan CR deletes the associated MigMigration CRs.
====

<4> link:https://github.com/heptio/velero/blob/master/pkg/apis/velero/v1/backup_storage_location.go[BackupStorageLocation] (configuration, CAM cluster): Location of Velero backup objects

<5> link:https://github.com/heptio/velero/blob/master/pkg/apis/velero/v1/volume_snapshot_location.go[VolumeSnapshotLocation] (configuration, CAM cluster): Location of Velero volume snapshots

<6> link:https://github.com/fusor/mig-controller/blob/master/pkg/apis/migration/v1alpha1/migmigration_types.go[MigMigration] (action, CAM cluster): Migration, created during migration
+
A MigMigration CR is created every time you stage or migrate data. Each MigMigration CR is associated with a MigPlan CR.

<7> link:https://github.com/heptio/velero/blob/master/pkg/apis/velero/v1/backup.go[Backup] (action, source cluster): When you run a migration plan, the MigMigration CR creates two Velero backup CRs on each source cluster:

** Backup CR #1 for Kubernetes objects
** Backup CR #2 for PV data

<8> link:https://github.com/heptio/velero/blob/master/pkg/apis/velero/v1/restore.go[Restore] (action, target cluster): When you run a migration plan, the MigMigration CR creates two Velero restore CRs on the target cluster:

** Restore CR #1 (using Backup CR #2) for PV data
** Restore CR #2 (using Backup CR #1) for Kubernetes objects
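The relationships above can be explored from the CLI; a minimal sketch, where the resource names come from the linked CR definitions, and the `openshift-migration` namespace and what is actually listed depend on your installation and on which cluster you query:

```shell
# Configuration and action CRs on the cluster hosting the CAM tool
oc get migcluster,migstorage,migplan,migmigration -n openshift-migration

# Velero Backup CRs appear on a source cluster, Restore CRs on the target cluster
oc get backups -n openshift-migration
oc get restores -n openshift-migration
```
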
26 changes: 26 additions & 0 deletions modules/migration-downloading-logs.adoc
@@ -0,0 +1,26 @@
// Module included in the following assemblies:
// migration/migrating-openshift-3-to-4.adoc
[id='migration-downloading-logs_{context}']
= Downloading migration logs

You can download the migration controller, Velero, and Restic logs in the CAM web console to troubleshoot a failed migration.

.Procedure

. Click the *Options* menu {kebab} of a migration plan and select *Logs*.
. To download a single controller log, select the following:

* *Cluster*
* *Log source*: Velero, Restic, or migration controller
* *Pod source*: For example, `velero-_7659c69dd7-ctb5x_`

. Click *Download all logs* to download the migration controller logs of the cluster hosting the CAM and the Velero and Restic logs of the source and target clusters.

Optionally, you can access the logs by using the CLI, as in this example for the migration controller:

----
$ oc get pods -n openshift-migration | grep controller
controller-manager-78c469849c-v6wcf 1/1 Running 0 4h49m

$ oc logs controller-manager-78c469849c-v6wcf -f -n openshift-migration
----
14 changes: 14 additions & 0 deletions modules/migration-installing-cpma.adoc
@@ -0,0 +1,14 @@
// Module included in the following assemblies:
// migration/migrating-openshift-3-to-4.adoc
[id='migration-installing-cpma_{context}']
= Installing CPMA

.Procedure

. From the link:https://access.redhat.com[Red Hat Customer Portal], navigate to *Downloads* -> *Red Hat {product-title}*.
. On the *Download Red Hat {product-title}* page, select *Red Hat {product-title}* from the *Product Variant* menu.
. Select *CPMA 1.0 for RHEL 7* from the *Version* drop-down menu. The same binary will work on RHEL 7 or RHEL 8.
. Select *x86_64*, which is the default, from the *Architecture* drop-down menu.
+
The options for downloading the CPMA for each platform (Linux, Windows, and Mac) populate.
. Click the *Download Now* button for the platform of your choice.