7 changes: 7 additions & 0 deletions migrating_from_ocp_3_to_4/advanced-migration-options-3-4.adoc
@@ -11,6 +11,13 @@ You can automate your migrations and modify the `MigPlan` and `MigrationController`

include::modules/migration-terminology.adoc[leveloffset=+1]

include::modules/migration-migrating-on-prem-to-cloud.adoc[leveloffset=+1]

[role="_additional-resources"]
.Additional resources
Contributor

Suggested change
.Additional resources
[role="_additional-resources"]
.Additional resources

Re: Jupiter guidelines from the OSDOCS peer review checklist

Contributor Author

Fixed.

* For information about creating a MigCluster CR manifest for each remote cluster, see xref:../migrating_from_ocp_3_to_4/advanced-migration-options-3-4.adoc#migration-migrating-applications-api_advanced-migration-options-3-4[Migrating an application by using the {mtc-short} API].
* For information about adding a cluster by using the web console, see xref:../migrating_from_ocp_3_to_4/migrating-applications-3-4.adoc#migrating-applications-mtc-web-console_migrating-applications-3-4[Migrating your applications by using the {mtc-short} web console].

[id="migrating-applications-cli_{context}"]
== Migrating applications by using the command line

2 changes: 1 addition & 1 deletion modules/migration-adding-cluster-to-cam.adoc
@@ -15,7 +15,7 @@ You can add a cluster to the {mtc-full} ({mtc-short}) web console.
** You must specify the Azure resource group name for the cluster.
** The clusters must be in the same Azure resource group.
** The clusters must be in the same geographic location.
-* If you are using direct image migration, you must expose a route to
+* If you are using direct image migration, you must expose a route to the image registry of the source cluster.

.Procedure

118 changes: 118 additions & 0 deletions modules/migration-migrating-on-prem-to-cloud.adoc
@@ -0,0 +1,118 @@
// Module included in the following assemblies:
//
// * migrating_from_ocp_3_to_4/advanced-migration-options-3-4.adoc
// * migration_toolkit_for_containers/advanced-migration-options-mtc.adoc
:_content-type: PROCEDURE
[id="migration-migrating-applications-on-prem-to-cloud_{context}"]
Contributor

Suggested change
[id="migration-migrating-applications-on-prem-to-cloud_{context}"]
:_content-type: PROCEDURE
[id="migration-migrating-applications-on-prem-to-cloud_{context}"]

Contributor Author

Fixed.

= Migrating an application from on-premises to a cloud-based cluster

You can migrate from a source cluster that is behind a firewall to a cloud-based destination cluster by establishing a network tunnel between the two clusters. The `crane tunnel-api` command establishes such a tunnel by starting a VPN client on the source cluster and connecting it to a VPN server running on the destination cluster. The VPN server is exposed to the client through a load balancer address on the destination cluster.

A service created on the destination cluster exposes the source cluster's API to {mtc-short}, which is running on the destination cluster.
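
For orientation, the following is a minimal sketch of what the `proxied-cluster` service on the destination cluster could look like, assuming the default `openvpn` namespace. The service is set up as part of this procedure; the name, namespace, and port numbers (8443 for the API, 5000 for the image registry) come from the values used later in this procedure, and everything else is illustrative.

[source,yaml]
----
apiVersion: v1
kind: Service
metadata:
  name: proxied-cluster   # Exposes the source cluster API to MTC
  namespace: openvpn      # Default tunnel namespace
spec:
  ports:
  - name: api             # Source cluster API:
    port: 8443            # proxied-cluster.openvpn.svc.cluster.local:8443
  - name: registry        # Source cluster image registry:
    port: 5000            # proxied-cluster.openvpn.svc.cluster.local:5000
----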

.Prerequisites

* The system that creates the VPN tunnel must have access to, and be logged in to, both clusters.
* It must be possible to create a load balancer on the destination cluster. Check with your cloud provider to confirm that this is possible.
* Prepare names for the namespaces, on both the source cluster and the destination cluster, in which to run the VPN tunnel. Do not create these namespaces in advance. For information about namespace rules, see \https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names.
* When connecting multiple firewall-protected source clusters to the cloud cluster, each source cluster requires its own namespace.
* OpenVPN server is installed on the destination cluster.
* OpenVPN client is installed on the source cluster.
* When configuring the source cluster in {mtc-short}, the API URL takes the form of `\https://proxied-cluster.<namespace>.svc.cluster.local:8443`. A `MigCluster` manifest sketch follows this list.
** If you use the API, see _Create a MigCluster CR manifest for each remote cluster_.
** If you use the {mtc-short} web console, see _Migrating your applications using the {mtc-short} web console_.
Contributor

Suggested change
** If you use the {mtc-short} web console, see _Migrating your applications using the {mtc-short} web console_.
** If you use the {mtc-short} web console, see "Migrating your applications using the {mtc-short} web console."

I think this might be an unwritten rule (I am still trying to track down the reference), but we usually use double quotes instead of italics to refer to sections of the docs that we would otherwise xref or link to.

Contributor Author

@bergerhoffer What do you think? I stay away from quotes in general because in this day and age they have a cynical connotation. In my experience, the convention for document titles, including at Red Hat, has always been italics.

* The {mtc-short} web console and Migration Controller must be installed on the target cluster.
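
As noted in the prerequisites, here is a minimal sketch of a `MigCluster` manifest that points {mtc-short} at the tunneled API, assuming the default `openvpn` namespace. The field names follow the `MigCluster` examples in the {mtc-short} API documentation; the cluster name and secret name are placeholders.

[source,yaml]
----
apiVersion: migration.openshift.io/v1alpha1
kind: MigCluster
metadata:
  name: <source_cluster>                 # Placeholder cluster name
  namespace: openshift-migration
spec:
  isHostCluster: false                   # This is a remote (source) cluster
  url: https://proxied-cluster.openvpn.svc.cluster.local:8443          # Tunneled API URL
  exposedRegistryPath: proxied-cluster.openvpn.svc.cluster.local:5000  # Tunneled registry route
  insecure: false
  serviceAccountSecretRef:
    name: <sa_token_secret>              # Secret holding the SA token obtained later in this procedure
    namespace: openshift-config
----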

.Procedure

. Install the `crane` utility:
+
[source,terminal,subs="+quotes"]
----
$ podman cp $(podman create registry.redhat.io/rhmtc/openshift-migration-controller-rhel8:v1.7.0):/crane ./
----
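+
The binary is copied into the current directory. As a generic shell step, not part of the original procedure, you can move it onto your `PATH` so that the later `crane` invocations work from any directory:
+
[source,terminal]
----
$ sudo mv ./crane /usr/local/bin/crane
----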
. Log in remotely to a node on the source cluster and a node on the destination cluster.

. Obtain the cluster context for both clusters after logging in:
+
[source,terminal,subs="+quotes"]
----
$ oc config view
----
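+
The context names that the next step requires can also be listed directly; `oc config get-contexts` is a standard `oc`/`kubectl` subcommand:
+
[source,terminal]
----
$ oc config get-contexts
----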

. Establish a tunnel by entering the following command on the system that has access to both clusters:
+
[source,terminal,subs="+quotes"]
----
$ crane tunnel-api [--namespace <namespace>] \
--destination-context <destination-cluster> \
--source-context <source-cluster>
----
+
If you do not specify a namespace, the command uses the default value `openvpn`.
+
For example:
Contributor

Suggested change
For example:
.Example command

+
[source,terminal,subs="+quotes"]
----
$ crane tunnel-api --namespace my_tunnel \
--destination-context openshift-migration/c131-e-us-east-containers-cloud-ibm-com/admin \
--source-context default/192-168-122-171-nip-io:8443/admin
----
+
[TIP]
====
To see all available parameters for the `crane tunnel-api` command, enter `crane tunnel-api --help`.
====
+
The command generates TLS/SSL certificates. This process might take several minutes. A message appears when the process completes.
+
The OpenVPN server starts on the destination cluster and the OpenVPN client starts on the source cluster.
+
After a few minutes, the load balancer resolves on the source node.
+
[TIP]
====
You can view the log for the OpenVPN pods to check the status of this process by entering the following commands with root privileges:

[source,terminal,subs="+quotes"]
----
# oc get po -n <namespace>
----

.Example output
[source,terminal]
----
NAME READY STATUS RESTARTS AGE
<pod_name> 2/2 Running 0 44s
----

[source,terminal,subs="+quotes"]
----
# oc logs -f -n <namespace> <pod_name> -c openvpn
----
When the address of the load balancer is resolved, the message `Initialization Sequence Completed` appears at the end of the log.
====

. On the OpenVPN server, which runs on a control plane node of the destination cluster, verify that the `openvpn` service and the `proxied-cluster` service are running:
+
[source,terminal,subs="+quotes"]
----
$ oc get service -n <namespace>
----
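+
Output along the following lines indicates that both services are running. The service types follow from how the VPN server is exposed (a load balancer), and the ports match the values used elsewhere in this procedure; the IP and port values themselves are illustrative placeholders, not captured output:
+
[source,terminal]
----
NAME              TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)
openvpn           LoadBalancer   <cluster_ip>   <lb_address>   <vpn_port>/TCP
proxied-cluster   ClusterIP      <cluster_ip>   <none>         8443/TCP,5000/TCP
----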

. On the source node, get the service account (SA) token for the migration controller:
+
[source,terminal]
----
# oc sa get-token -n openshift-migration migration-controller
----
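+
[NOTE]
====
On newer clusters where the deprecated `oc sa get-token` subcommand has been removed, the standard `oc create token` subcommand is the usual replacement; the tokens it issues are short-lived by default. This is general `oc` behavior, not part of the original procedure:

[source,terminal]
----
# oc create token migration-controller -n openshift-migration
----
====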

. Open the {mtc-short} web console and add the source cluster, using the following values:
+
* *Cluster name*: The source cluster name.
* *URL*: `proxied-cluster.<namespace>.svc.cluster.local:8443`. If you did not define a value for `<namespace>`, use `openvpn`.
* *Service account token*: The token of the migration controller service account.
* *Exposed route host to image registry*: `proxied-cluster.<namespace>.svc.cluster.local:5000`. If you did not define a value for `<namespace>`, use `openvpn`.

After {mtc-short} has successfully validated the connection, you can proceed to create and run a migration plan. The namespace for the source cluster should appear in the list of namespaces.
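
To illustrate that final step, here is a minimal `MigPlan` sketch that references the source cluster added above. The field names follow the `MigPlan` examples in the {mtc-short} API documentation; all names are placeholders:

[source,yaml]
----
apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
  name: <migplan>
  namespace: openshift-migration
spec:
  srcMigClusterRef:
    name: <source_cluster>          # The cluster added in the previous step
    namespace: openshift-migration
  destMigClusterRef:
    name: host                      # The destination (host) cluster
    namespace: openshift-migration
  migStorageRef:
    name: <migstorage>              # Replication repository
    namespace: openshift-migration
  namespaces:
  - <source_namespace>              # A namespace to migrate from the source cluster
----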