>, and then proceed to xref:ROOT:change-read-routing.adoc[Phase 4] where you will permanently route read requests to the target cluster.
\ No newline at end of file
diff --git a/modules/ROOT/pages/feasibility-checklists.adoc b/modules/ROOT/pages/feasibility-checklists.adoc
index 42fabb83..cdfe3a36 100644
--- a/modules/ROOT/pages/feasibility-checklists.adoc
+++ b/modules/ROOT/pages/feasibility-checklists.adoc
@@ -239,4 +239,4 @@ The origin and target clusters can have different authentication configurations
== Next steps
-* xref:ROOT:deployment-infrastructure.adoc[]
\ No newline at end of file
+Next, xref:ROOT:deployment-infrastructure.adoc[prepare the {product-proxy} infrastructure].
\ No newline at end of file
diff --git a/modules/ROOT/pages/index.adoc b/modules/ROOT/pages/index.adoc
index 31cab81b..c00e946b 100644
--- a/modules/ROOT/pages/index.adoc
+++ b/modules/ROOT/pages/index.adoc
@@ -73,7 +73,7 @@ svg::sideloader:astra-migration-toolkit.svg[role="absolute bottom-1/2 translate-
{cass-migrator-short} can migrate and validate data between {cass-short}-based clusters, with optional logging and reconciliation support.
- xref:ROOT:cdm-overview.adoc[Get started with {cass-migrator-short}]
+ xref:ROOT:cassandra-data-migrator.adoc[Get started with {cass-migrator-short}]
diff --git a/modules/ROOT/pages/introduction.adoc b/modules/ROOT/pages/introduction.adoc
index 63ad2bda..aa612ed0 100644
--- a/modules/ROOT/pages/introduction.adoc
+++ b/modules/ROOT/pages/introduction.adoc
@@ -48,7 +48,7 @@ The _target_ is your new {cass-short}-based environment where you want to migrat
Before you begin a migration, your client applications perform read/write operations with your existing xref:cql:ROOT:index.adoc[CQL]-compatible database, such as {cass}, {dse-short}, {hcd-short}, or {astra-db}.
-image:pre-migration0ra.png["Pre-migration environment."]
+image:pre-migration0ra.png[Before the migration begins, your applications connect exclusively to your origin cluster]
While your application is stable with the current data model and database platform, you might need to make some adjustments before enabling {product-proxy}.
@@ -74,9 +74,9 @@ In this first phase, deploy the {product-proxy} instances and connect client app
This phase activates the dual-write logic.
Writes are sent to both the origin and target databases, while reads are executed on the origin only.
-For more information and instructions, see xref:ROOT:phase1.adoc[].
+For more information and instructions, see xref:ROOT:phase1.adoc[Phase 1: Deploy and connect {product-proxy}].
-image:migration-phase1ra.png["Migration Phase 1."]
+image:migration-phase1ra.png[In Phase 1, you deploy and connect {product-proxy}]
=== Phase 2: Migrate data
@@ -86,7 +86,7 @@ Then, you thoroughly validate the migrated data, resolving missing and mismatche
For more information and instructions, see xref:ROOT:migrate-and-validate-data.adoc[].
-image:migration-phase2ra.png["Migration Phase 2."]
+image:migration-phase2ra.png[In Phase 2, you migrate and validate data from the origin cluster to the target cluster]
=== Phase 3: Enable asynchronous dual reads
@@ -98,7 +98,7 @@ When enabled, {product-proxy} sends asynchronous read requests to the secondary
For more information, see xref:ROOT:enable-async-dual-reads.adoc[] and xref:ROOT:components.adoc#how_zdm_proxy_handles_reads_and_writes[How {product-proxy} handles reads and writes].
-image:migration-phase3ra.png["Migration Phase 3."]
+image:migration-phase3ra.png[In Phase 3, you test your target cluster's production readiness]
=== Phase 4: Route reads to the target database
@@ -109,7 +109,7 @@ At this point, the target database becomes the primary database.
For more information and instructions, see xref:ROOT:change-read-routing.adoc[].
-image:migration-phase4ra9.png["Migration Phase 4."]
+image:migration-phase4ra9.png[In Phase 4, you route reads to the target cluster exclusively]
=== Phase 5: Connect directly to the target database
@@ -122,7 +122,7 @@ However, be aware that the origin database is no longer synchronized with the ta
For more information, see xref:ROOT:connect-clients-to-target.adoc[].
-image:migration-phase5ra.png["Migration Phase 5."]
+image:migration-phase5ra.png[In Phase 5, you connect your client applications directly and exclusively to the target cluster]
[#lab]
== {product} interactive lab
diff --git a/modules/ROOT/pages/migrate-and-validate-data.adoc b/modules/ROOT/pages/migrate-and-validate-data.adoc
index e2a8e2e6..5be69633 100644
--- a/modules/ROOT/pages/migrate-and-validate-data.adoc
+++ b/modules/ROOT/pages/migrate-and-validate-data.adoc
@@ -1,8 +1,11 @@
-= Migrate and validate data
+= Phase 2: Migrate and validate data
+:page-aliases: ROOT:sideloader-zdm.adoc
+
+In xref:ROOT:phase1.adoc[Phase 1], you set up {product-proxy} to orchestrate live traffic to your origin and target clusters.
In Phase 2 of {product}, you migrate data from the origin to the target, and then validate the migrated data.
-image::migration-phase2ra.png[In {product-short} Phase 2, you migrate data from the origin cluster to the target cluster.]
+image::migration-phase2ra.png[In {product-short} Phase 2, you migrate data from the origin cluster to the target cluster]
To move and validate data, you can use a dedicated data migration tool, such as {sstable-sideloader}, {cass-migrator}, or {dsbulk-migrator}, or you can create your own custom data migration script.
@@ -10,19 +13,31 @@ To move and validate data, you can use a dedicated data migration tool, such as
== {sstable-sideloader}
-{sstable-sideloader} is a service running in {astra-db} that imports data from snapshots of your existing {cass-reg}-based cluster.
This tool is exclusively for migrations that move data to {astra-db}.
+{sstable-sideloader} is a service running in {astra-db} that imports data from snapshots of your existing {cass-reg}-based cluster.
+Because it imports data directly, {sstable-sideloader} can offer several advantages over CQL-based tools like {dsbulk-migrator} and {cass-migrator}, including faster, more cost-effective data loading, and minimal performance impacts on your origin cluster and target database.
+
+To migrate data with {sstable-sideloader}, you use `nodetool`, a cloud provider's CLI, and the {astra} {devops-api}:
+
+* *`nodetool`*: Create snapshots of your existing {dse-short}, {hcd-short}, or open-source {cass-short} cluster.
+For compatible origin clusters, see xref:ROOT:astra-migration-paths.adoc[].
+* *Cloud provider CLI*: Upload snapshots to a dedicated cloud storage bucket for your migration.
+* *{astra} {devops-api}*: Run the {sstable-sideloader} commands to write the data from cloud storage to your {astra-db} database.
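+
+For example, a minimal sketch of the first two steps, assuming a keyspace named `my_keyspace`, the default {cass-short} data directory, and a placeholder bucket URI (the actual bucket, credentials, and upload layout come from initializing your {sstable-sideloader} migration):
+
+[source,bash]
+----
+# Create a snapshot of the keyspace on each origin node
+nodetool snapshot -t zdm_migration -- my_keyspace
+
+# Upload the snapshot files to the migration bucket (AWS CLI shown; use your cloud provider's CLI)
+aws s3 cp /var/lib/cassandra/data/my_keyspace s3://MIGRATION_BUCKET/my_keyspace \
+  --recursive --exclude "*" --include "*/snapshots/zdm_migration/*"
+----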
+
You can use {sstable-sideloader} alone or with {product-proxy}.
-For more information, see xref:sideloader:sideloader-zdm.adoc[].
+For more information and instructions, see xref:sideloader:sideloader-overview.adoc[].
+
+.Use {sstable-sideloader} with {product-proxy}
+svg::sideloader:astra-migration-toolkit.svg[]
== {cass-migrator}
You can use {cass-migrator} ({cass-migrator-short}) for data migration and validation between {cass-short}-based databases.
It offers extensive functionality and configuration options to support large and complex migrations as well as post-migration data validation.
-You can use {cass-migrator-short} by itself, with {product-proxy}, or for data validation after using another data migration tool.
+You can use {cass-migrator-short} alone, with {product-proxy}, or for data validation after using another data migration tool.
For more information, see xref:ROOT:cassandra-data-migrator.adoc[].
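+
+As an illustration, a typical {cass-migrator-short} migration job is a `spark-submit` run that reads a properties file and copies one table at a time (a sketch for a single-VM Spark installation; replace the keyspace, table, memory settings, and version placeholders with your own values):
+
+[source,bash]
+----
+# Copy one table from the origin cluster to the target cluster
+./spark-submit --properties-file cdm.properties \
+  --conf spark.cdm.schema.origin.keyspaceTable="KEYSPACE_NAME.TABLE_NAME" \
+  --master "local[*]" --driver-memory 25G --executor-memory 25G \
+  --class com.datastax.cdm.job.Migrate cassandra-data-migrator-VERSION.jar
+----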
@@ -48,4 +63,17 @@ This is crucial to a successful migration.
* Preserves the data model, including column names and data types, so that {product-proxy} can send the same read/write statements to both databases successfully.
+
Migrations that perform significant data transformations might not be compatible with {product-proxy}.
-The impact of data transformations depends on your specific data model, database platforms, and the scale of your migration.
\ No newline at end of file
+The impact of data transformations depends on your specific data model, database platforms, and the scale of your migration.
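+
+For example, one way to spot-check that a table's data model matches on both clusters before relying on dual writes is to compare the table definitions (a sketch assuming `cqlsh` access to each cluster and placeholder hostnames):
+
+[source,bash]
+----
+# Export and diff the table definition from each cluster
+cqlsh ORIGIN_HOST -e "DESCRIBE TABLE my_keyspace.my_table" > origin_schema.cql
+cqlsh TARGET_HOST -e "DESCRIBE TABLE my_keyspace.my_table" > target_schema.cql
+diff origin_schema.cql target_schema.cql
+----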
+
+== Next steps
+
+[IMPORTANT]
+====
+Don't proceed to Phase 3 until you have replicated _all_ preexisting data from your origin cluster to your target cluster, _and_ you have validated that the data was migrated correctly and completely.
+
+The success of your migration and the future performance of the target cluster depend on correct and complete data.
+
+If your chosen data migration tool doesn't have built-in validation features, you need to use a separate tool for validation.
+====
+
+After using your chosen data migration tool to migrate and thoroughly validate your data, proceed to xref:ROOT:enable-async-dual-reads.adoc[Phase 3] to test your target cluster's production readiness.
\ No newline at end of file
diff --git a/modules/ROOT/pages/phase1.adoc b/modules/ROOT/pages/phase1.adoc
index 78473841..ab913321 100644
--- a/modules/ROOT/pages/phase1.adoc
+++ b/modules/ROOT/pages/phase1.adoc
@@ -1,11 +1,14 @@
-= Phase 1: Deploy {product-proxy} and connect client applications
+= Deploy and connect {product-proxy}
-This section presents the following:
+After you plan and prepare for your migration, you can start Phase 1 of the migration process, where you deploy and connect {product-proxy}.
-* xref:setup-ansible-playbooks.adoc[]
-* xref:deploy-proxy-monitoring.adoc[]
-** xref:tls.adoc[]
-* xref:connect-clients-to-proxy.adoc[]
-* xref:manage-proxy-instances.adoc[]
+image::migration-phase1ra.png[In migration Phase 1, you deploy {product-proxy} instances, and then connect your client applications to the proxies]
-image::migration-phase1ra.png[Phase 1 diagram shows deployed {product-proxy} instances, client app connections to proxies, and the target cluster is setup.]
\ No newline at end of file
+To complete Phase 1, do the following:
+
+. xref:setup-ansible-playbooks.adoc[].
+. xref:deploy-proxy-monitoring.adoc[] with optional xref:tls.adoc[TLS].
+. xref:connect-clients-to-proxy.adoc[].
+
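+For example, after the Ansible Control Host container is set up, the deployment step usually comes down to a single playbook run (a sketch that assumes the playbook and inventory filenames generated by {product-automation}; confirm the exact names inside your container):
+
+[source,bash]
+----
+# Run from inside the Ansible Control Host container
+ansible-playbook deploy_zdm_proxy.yml -i zdm_ansible_inventory
+----
+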
+During the migration, you will modify {product-proxy} configuration settings and monitor your {product-proxy} instances.
+Before you proceed to Phase 2, make sure you understand how to xref:manage-proxy-instances.adoc[manage {product-proxy} instances] and xref:metrics.adoc[use {product-proxy} metrics].
\ No newline at end of file
diff --git a/modules/ROOT/pages/rollback.adoc b/modules/ROOT/pages/rollback.adoc
index 45cab323..5d7c7712 100644
--- a/modules/ROOT/pages/rollback.adoc
+++ b/modules/ROOT/pages/rollback.adoc
@@ -1,5 +1,4 @@
-= Understand the rollback options
-:navtitle: Understand rollback options
+= Understand rollback options
At any point from Phase 1 through Phase 4, if you encounter an unexpected issue and need to stop or roll back the migration, you can revert your client applications to connect directly to the origin cluster.
@@ -19,4 +18,4 @@ In this case, you use your original target cluster as the new origin cluster, an
== Next steps
-* xref:ROOT:phase1.adoc[]
+After preparing the infrastructure for {product-proxy} and your target cluster, begin xref:ROOT:phase1.adoc[Phase 1] of the migration.
\ No newline at end of file
diff --git a/modules/ROOT/pages/setup-ansible-playbooks.adoc b/modules/ROOT/pages/setup-ansible-playbooks.adoc
index 07065911..905b6bab 100644
--- a/modules/ROOT/pages/setup-ansible-playbooks.adoc
+++ b/modules/ROOT/pages/setup-ansible-playbooks.adoc
@@ -232,3 +232,7 @@ image::zdm-go-utility-results3.png[A summary of the configuration provided is di
+
image::zdm-go-utility-success3.png[Ansible Docker container success messages]
+
+== Next steps
+
+After you use {product-utility} to set up the Ansible Control Host container, you can xref:deploy-proxy-monitoring.adoc[use {product-automation} to deploy your {product-proxy} instances and the monitoring stack].
\ No newline at end of file
diff --git a/modules/ROOT/partials/cassandra-data-migrator-body.adoc b/modules/ROOT/partials/cassandra-data-migrator-body.adoc
deleted file mode 100644
index f1ba3f01..00000000
--- a/modules/ROOT/partials/cassandra-data-migrator-body.adoc
+++ /dev/null
@@ -1,344 +0,0 @@
-{description}
-It is best for large or complex migrations that benefit from advanced features and configuration options, such as the following:
-
-* Logging and run tracking
-* Automatic reconciliation
-* Performance tuning
-* Record filtering
-* Column renaming
-* Support for advanced data types, including sets, lists, maps, and UDTs
-* Support for SSL, including custom cipher algorithms
-* Use `writetime` timestamps to maintain chronological write history
-* Use Time To Live (TTL) values to maintain data lifecycles
-
-For more information and a complete list of features, see the {cass-migrator-repo}?tab=readme-ov-file#features[{cass-migrator-short} GitHub repository].
-
-== {cass-migrator} requirements
-
-To use {cass-migrator-short} successfully, your origin and target clusters must be {cass-short}-based databases with matching schemas.
-
-== {cass-migrator-short} with {product-proxy}
-
-You can use {cass-migrator-short} alone, with {product-proxy}, or for data validation after using another data migration tool.
-
-When using {cass-migrator-short} with {product-proxy}, {cass-short}'s last-write-wins semantics ensure that new, real-time writes accurately take precedence over historical writes.
-
-Last-write-wins compares the `writetime` of conflicting records, and then retains the most recent write.
-
-For example, if a new write occurs in your target cluster with a `writetime` of `2023-10-01T12:05:00Z`, and then {cass-migrator-short} migrates a record against the same row with a `writetime` of `2023-10-01T12:00:00Z`, the target cluster retains the data from the new write because it has the most recent `writetime`.
-
-== Install {cass-migrator}
-
-{company} recommends that you always install the latest version of {cass-migrator-short} to get the latest features, dependencies, and bug fixes.
-
-[tabs]
-======
-Install as a container::
-+
---
-Get the latest `cassandra-data-migrator` image that includes all dependencies from https://hub.docker.com/r/datastax/cassandra-data-migrator[DockerHub].
-
-The container's `assets` directory includes all required migration tools: `cassandra-data-migrator`, `dsbulk`, and `cqlsh`.
---
-
-Install as a JAR file::
-+
---
-. Install Java 11 or later, which includes Spark binaries.
-
-. Install https://spark.apache.org/downloads.html[Apache Spark(TM)] version 3.5.x with Scala 2.13 and Hadoop 3.3 and later.
-+
-[tabs]
-====
-Single VM::
-+
-For one-off migrations, you can install the Spark binary on a single VM where you will run the {cass-migrator-short} job.
-+
-. Get the Spark tarball from the Apache Spark archive.
-+
-[source,bash,subs="+quotes"]
-----
-wget https://archive.apache.org/dist/spark/spark-3.5.**PATCH**/spark-3.5.**PATCH**-bin-hadoop3-scala2.13.tgz
-----
-+
-Replace `**PATCH**` with your Spark patch version.
-+
-. Change to the directory where you want install Spark, and then extract the tarball:
-+
-[source,bash,subs="+quotes"]
-----
-tar -xvzf spark-3.5.**PATCH**-bin-hadoop3-scala2.13.tgz
-----
-+
-Replace `**PATCH**` with your Spark patch version.
-
-Spark cluster::
-+
-For large (several terabytes) migrations, complex migrations, and use of {cass-migrator-short} as a long-term data transfer utility, {company} recommends that you use a Spark cluster or Spark Serverless platform.
-+
-If you deploy CDM on a Spark cluster, you must modify your `spark-submit` commands as follows:
-+
-* Replace `--master "local[*]"` with the host and port for your Spark cluster, as in `--master "spark://**MASTER_HOST**:**PORT**"`.
-* Remove parameters related to single-VM installations, such as `--driver-memory` and `--executor-memory`.
-====
-
-. Download the latest {cass-migrator-repo}/packages/1832128/versions[cassandra-data-migrator JAR file] {cass-migrator-shield}.
-
-. Add the `cassandra-data-migrator` dependency to `pom.xml`:
-+
-[source,xml,subs="+quotes"]
-----
-
- datastax.cdm
- cassandra-data-migrator
- **VERSION**
-
-----
-+
-Replace `**VERSION**` with your {cass-migrator-short} version.
-
-. Run `mvn install`.
-
-If you need to build the JAR for local development or your environment only has Scala version 2.12.x, see the alternative installation instructions in the {cass-migrator-repo}?tab=readme-ov-file[{cass-migrator-short} README].
---
-======
-
-== Configure {cass-migrator-short}
-
-. Create a `cdm.properties` file.
-+
-If you use a different name, make sure you specify the correct filename in your `spark-submit` commands.
-
-. Configure the properties for your environment.
-+
-In the {cass-migrator-short} repository, you can find a {cass-migrator-repo}/blob/main/src/resources/cdm.properties[sample properties file with default values], as well as a {cass-migrator-repo}/blob/main/src/resources/cdm-detailed.properties[fully annotated properties file].
-+
-{cass-migrator-short} jobs process all uncommented parameters.
-Any parameters that are commented out are ignored or use default values.
-+
-If you want to reuse a properties file created for a previous {cass-migrator-short} version, make sure it is compatible with the version you are currently using.
-Check the {cass-migrator-repo}/releases[{cass-migrator-short} release notes] for possible breaking changes in interim releases.
-For example, the 4.x series of {cass-migrator-short} isn't backwards compatible with earlier properties files.
-
-. Store your properties file where it can be accessed while running {cass-migrator-short} jobs using `spark-submit`.
-
-[#migrate]
-== Run a {cass-migrator-short} data migration job
-
-A data migration job copies data from a table in your origin cluster to a table with the same schema in your target cluster.
-
-To optimize large-scale migrations, {cass-migrator-short} can run multiple concurrent migration jobs on the same table.
-
-The following `spark-submit` command migrates one table from the origin to the target cluster, using the configuration in your properties file.
-The migration job is specified in the `--class` argument.
-
-[tabs]
-======
-Local installation::
-+
---
-[source,bash,subs="+quotes,+attributes"]
-----
-./spark-submit --properties-file cdm.properties \
---conf spark.cdm.schema.origin.keyspaceTable="**KEYSPACE_NAME**.**TABLE_NAME**" \
---master "local[{asterisk}]" --driver-memory 25G --executor-memory 25G \
---class com.datastax.cdm.job.Migrate cassandra-data-migrator-**VERSION**.jar &> logfile_name_$(date +%Y%m%d_%H_%M).txt
-----
-
-Replace or modify the following, if needed:
-
-* `--properties-file cdm.properties`: If your properties file has a different name, specify the actual name of your properties file.
-+
-Depending on where your properties file is stored, you might need to specify the full or relative file path.
-
-* `**KEYSPACE_NAME**.**TABLE_NAME**`: Specify the name of the table that you want to migrate and the keyspace that it belongs to.
-+
-You can also set `spark.cdm.schema.origin.keyspaceTable` in your properties file using the same format of `**KEYSPACE_NAME**.**TABLE_NAME**`.
-
-* `--driver-memory` and `--executor-memory`: For local installations, specify the appropriate memory settings for your environment.
-
-* `**VERSION**`: Specify the full {cass-migrator-short} version that you installed, such as `5.2.1`.
---
-
-Spark cluster::
-+
---
-[source,bash,subs="+quotes"]
-----
-./spark-submit --properties-file cdm.properties \
---conf spark.cdm.schema.origin.keyspaceTable="**KEYSPACE_NAME**.**TABLE_NAME**" \
---master "spark://**MASTER_HOST**:**PORT**" \
---class com.datastax.cdm.job.Migrate cassandra-data-migrator-**VERSION**.jar &> logfile_name_$(date +%Y%m%d_%H_%M).txt
-----
-
-Replace or modify the following, if needed:
-
-* `--properties-file cdm.properties`: If your properties file has a different name, specify the actual name of your properties file.
-+
-Depending on where your properties file is stored, you might need to specify the full or relative file path.
-
-* `**KEYSPACE_NAME**.**TABLE_NAME**`: Specify the name of the table that you want to migrate and the keyspace that it belongs to.
-+
-You can also set `spark.cdm.schema.origin.keyspaceTable` in your properties file using the same format of `**KEYSPACE_NAME**.**TABLE_NAME**`.
-
-* `--master`: Provide the URL of your Spark cluster.
-
-* `**VERSION**`: Specify the full {cass-migrator-short} version that you installed, such as `5.2.1`.
---
-======
-
-This command generates a log file (`logfile_name_**TIMESTAMP**.txt`) instead of logging output to the console.
-
-For additional modifications to this command, see <>.
-
-[#cdm-validation-steps]
-== Run a {cass-migrator-short} data validation job
-
-After migrating data, use {cass-migrator-short}'s data validation mode to identify any inconsistencies between the origin and target tables, such as missing or mismatched records.
-
-Optionally, {cass-migrator-short} can automatically correct discrepancies in the target cluster during validation.
-
-. Use the following `spark-submit` command to run a data validation job using the configuration in your properties file.
-The data validation job is specified in the `--class` argument.
-+
-[tabs]
-======
-Local installation::
-+
---
-[source,bash,subs="+quotes,+attributes"]
-----
-./spark-submit --properties-file cdm.properties \
---conf spark.cdm.schema.origin.keyspaceTable="**KEYSPACE_NAME**.**TABLE_NAME**" \
---master "local[{asterisk}]" --driver-memory 25G --executor-memory 25G \
---class com.datastax.cdm.job.DiffData cassandra-data-migrator-**VERSION**.jar &> logfile_name_$(date +%Y%m%d_%H_%M).txt
-----
-
-Replace or modify the following, if needed:
-
-* `--properties-file cdm.properties`: If your properties file has a different name, specify the actual name of your properties file.
-+
-Depending on where your properties file is stored, you might need to specify the full or relative file path.
-
-* `**KEYSPACE_NAME**.**TABLE_NAME**`: Specify the name of the table that you want to validate and the keyspace that it belongs to.
-+
-You can also set `spark.cdm.schema.origin.keyspaceTable` in your properties file using the same format of `**KEYSPACE_NAME**.**TABLE_NAME**`.
-
-* `--driver-memory` and `--executor-memory`: For local installations, specify the appropriate memory settings for your environment.
-
-* `**VERSION**`: Specify the full {cass-migrator-short} version that you installed, such as `5.2.1`.
---
-
-Spark cluster::
-+
---
-[source,bash,subs="+quotes"]
-----
-./spark-submit --properties-file cdm.properties \
---conf spark.cdm.schema.origin.keyspaceTable="**KEYSPACE_NAME**.**TABLE_NAME**" \
---master "spark://**MASTER_HOST**:**PORT**" \
---class com.datastax.cdm.job.DiffData cassandra-data-migrator-**VERSION**.jar &> logfile_name_$(date +%Y%m%d_%H_%M).txt
-----
-
-Replace or modify the following, if needed:
-
-* `--properties-file cdm.properties`: If your properties file has a different name, specify the actual name of your properties file.
-+
-Depending on where your properties file is stored, you might need to specify the full or relative file path.
-
-* `**KEYSPACE_NAME**.**TABLE_NAME**`: Specify the name of the table that you want to validate and the keyspace that it belongs to.
-+
-You can also set `spark.cdm.schema.origin.keyspaceTable` in your properties file using the same format of `**KEYSPACE_NAME**.**TABLE_NAME**`.
-
-* `--master`: Provide the URL of your Spark cluster.
-
-* `**VERSION**`: Specify the full {cass-migrator-short} version that you installed, such as `5.2.1`.
---
-======
-
-. Allow the command some time to run, and then open the log file (`logfile_name_**TIMESTAMP**.txt`) and look for `ERROR` entries.
-+
-The {cass-migrator-short} validation job records differences as `ERROR` entries in the log file, listed by primary key values.
-For example:
-+
-[source,plaintext]
-----
-23/04/06 08:43:06 ERROR DiffJobSession: Mismatch row found for key: [key3] Mismatch: Target Index: 1 Origin: valueC Target: value999)
-23/04/06 08:43:06 ERROR DiffJobSession: Corrected mismatch row in target: [key3]
-23/04/06 08:43:06 ERROR DiffJobSession: Missing target row found for key: [key2]
-23/04/06 08:43:06 ERROR DiffJobSession: Inserted missing row in target: [key2]
-----
-+
-When validating large datasets or multiple tables, you might want to extract the complete list of missing or mismatched records.
-There are many ways to do this.
-For example, you can grep for all `ERROR` entries in your {cass-migrator-short} log files or use the `log4j2` example provided in the {cass-migrator-repo}?tab=readme-ov-file#steps-for-data-validation[{cass-migrator-short} repository].
-
-=== Run a validation job in AutoCorrect mode
-
-Optionally, you can run {cass-migrator-short} validation jobs in **AutoCorrect** mode, which offers the following functions:
-
-* `autocorrect.missing`: Add any missing records in the target with the value from the origin.
-
-* `autocorrect.mismatch`: Reconcile any mismatched records between the origin and target by replacing the target value with the origin value.
-+
-[IMPORTANT]
-====
-Timestamps have an effect on this function.
-
-If the `writetime` of the origin record (determined with `.writetime.names`) is before the `writetime` of the corresponding target record, then the original write won't appear in the target cluster.
-
-This comparative state can be challenging to troubleshoot if individual columns or cells were modified in the target cluster.
-====
-
-* `autocorrect.missing.counter`: By default, counter tables are not copied when missing, unless explicitly set.
-
-In your `cdm.properties` file, use the following properties to enable (`true`) or disable (`false`) autocorrect functions:
-
-[source,properties]
-----
-spark.cdm.autocorrect.missing false|true
-spark.cdm.autocorrect.mismatch false|true
-spark.cdm.autocorrect.missing.counter false|true
-----
-
-The {cass-migrator-short} validation job never deletes records from either the origin or target.
-Data validation only inserts or updates data on the target.
-
-For an initial data validation, consider disabling AutoCorrect so that you can generate a list of data discrepancies, investigate those discrepancies, and then decide whether you want to rerun the validation with AutoCorrect enabled.
-
-[#advanced]
-== Additional {cass-migrator-short} options
-
-You can modify your properties file or append additional `--conf` arguments to your `spark-submit` commands to customize your {cass-migrator-short} jobs.
-For example, you can do the following:
-
-* Check for large field guardrail violations before migrating.
-* Use the `partition.min` and `partition.max` parameters to migrate or validate specific token ranges.
-* Use the `track-run` feature to monitor progress and rerun a failed migration or validation job from point of failure.
-
-For all options, see the {cass-migrator-repo}[{cass-migrator-short} repository].
-Specifically, see the {cass-migrator-repo}/blob/main/src/resources/cdm-detailed.properties[fully annotated properties file].
-
-== Troubleshoot {cass-migrator-short}
-
-.Java NoSuchMethodError
-[%collapsible]
-====
-If you installed Spark as a JAR file, and your Spark and Scala versions aren't compatible with your installed version of {cass-migrator-short}, {cass-migrator-short} jobs can throw exceptions such a the following:
-
-[source,console]
-----
-Exception in thread "main" java.lang.NoSuchMethodError: 'void scala.runtime.Statics.releaseFence()'
-----
-
-Make sure that your Spark binary is compatible with your {cass-migrator-short} version.
-If you installed an earlier version of {cass-migrator-short}, you might need to install an earlier Spark binary.
-====
-
-.Rerun a failed or partially completed job
-[%collapsible]
-====
-You can use the `track-run` feature to track the progress of a migration or validation, and then, if necessary, use the `run-id` to rerun a failed job from the last successful migration or validation point.
-
-For more information, see the {cass-migrator-repo}[{cass-migrator-short} repository] and the {cass-migrator-repo}/blob/main/src/resources/cdm-detailed.properties[fully annotated properties file].
-====
\ No newline at end of file
diff --git a/modules/ROOT/partials/dsbulk-migrator-body.adoc b/modules/ROOT/partials/dsbulk-migrator-body.adoc
deleted file mode 100644
index 45ea3680..00000000
--- a/modules/ROOT/partials/dsbulk-migrator-body.adoc
+++ /dev/null
@@ -1,642 +0,0 @@
-{dsbulk-migrator} is an extension of {dsbulk-loader}.
-It is best for smaller migrations or migrations that don't require extensive data validation, aside from post-migration row counts.
-You can also consider this tool for migrations where you can shard data from large tables into more manageable quantities.
-
-{dsbulk-migrator} extends {dsbulk-loader} with the following commands:
-
-* `migrate-live`: Start a live data migration using the embedded version of {dsbulk-loader} or your own {dsbulk-loader} installation.
-A live migration means that the data migration starts immediately and is performed by the migrator tool through the specified {dsbulk-loader} installation.
-
-* `generate-script`: Generate a migration script that you can execute to perform a data migration with a your own {dsbulk-loader} installation.
-This command _doesn't_ trigger the migration; it only generates the migration script that you must then execute.
-
-* `generate-ddl`: Read the schema from origin, and then generate CQL files to recreate it in your target {astra-db} database.
-
-[[prereqs-dsbulk-migrator]]
-== {dsbulk-migrator} prerequisites
-
-* Java 11
-
-* https://maven.apache.org/download.cgi[Maven] 3.9.x
-
-* Optional: If you don't want to use the embedded {dsbulk-loader} that is bundled with {dsbulk-migrator}, xref:dsbulk:overview:install.adoc[install {dsbulk-loader}] before installing {dsbulk-migrator}.
-
-== Build {dsbulk-migrator}
-
-. Clone the {dsbulk-migrator-repo}[{dsbulk-migrator} repository]:
-+
-[source,bash]
-----
-cd ~/github
-git clone git@github.com:datastax/dsbulk-migrator.git
-cd dsbulk-migrator
-----
-
-. Use Maven to build {dsbulk-migrator}:
-+
-[source,bash]
-----
-mvn clean package
-----
-
-The build produces two distributable fat jars:
-
-* `dsbulk-migrator-**VERSION**-embedded-driver.jar` contains an embedded Java driver.
-Suitable for script generation or live migrations using an external {dsbulk-loader}.
-+
-This jar isn't suitable for live migrations that use the embedded {dsbulk-loader} because no {dsbulk-loader} classes are present.
-
-* `dsbulk-migrator-**VERSION**-embedded-dsbulk.jar` contains an embedded {dsbulk-loader} and an embedded Java driver.
-Suitable for all operations.
-Much larger than the other JAR due to the presence of {dsbulk-loader} classes.
-
-== Test {dsbulk-migrator}
-
-The {dsbulk-migrator} project contains some integration tests that require https://github.com/datastax/simulacron[Simulacron].
-
-. Clone and build Simulacron, as explained in the https://github.com/datastax/simulacron[Simulacron GitHub repository].
-Note the prerequisites for Simulacron, particularly for macOS.
-
-. Run the tests:
-
-[source,bash]
-----
-mvn clean verify
-----
-
-== Run {dsbulk-migrator}
-
-Launch {dsbulk-migrator} with the command and options you want to run:
-
-[source,bash]
-----
-java -jar /path/to/dsbulk-migrator.jar { migrate-live | generate-script | generate-ddl } [OPTIONS]
-----
-
-The role and availability of the options depends on the command you run:
-
-* During a live migration, the options configure {dsbulk-migrator} and establish connections to
-the clusters.
-
-* When generating a migration script, most options become default values in the generated scripts.
-However, even when generating scripts, {dsbulk-migrator} still needs to access the origin cluster to gather metadata about the tables to migrate.
-
-* When generating a DDL file, import options and {dsbulk-loader}-related options are ignored.
-However, {dsbulk-migrator} still needs to access the origin cluster to gather metadata about the keyspaces and tables for the DDL statements.
-
-For more information about the commands and their options, see the following references:
-
-* <>
-* <>
-* <>
-
-For help and examples, see <> and <>.
-
-[[dsbulk-live]]
-== Live migration command-line options
-
-The following options are available for the `migrate-live` command.
-Most options have sensible default values and do not need to be specified, unless you want to override the default value.
-
-[cols="2,8,14"]
-|===
-
-| `-c`
-| `--dsbulk-cmd=CMD`
-| The external {dsbulk-loader} command to use.
-Ignored if the embedded {dsbulk-loader} is being used.
-The default is simply `dsbulk`, assuming that the command is available through the `PATH` variable contents.
-
-| `-d`
-| `--data-dir=PATH`
-| The directory where data will be exported to and imported from.
-The default is a `data` subdirectory in the current working directory.
-The data directory will be created if it does not exist.
-Tables will be exported and imported in subdirectories of the data directory specified here.
-There will be one subdirectory per keyspace in the data directory, then one subdirectory per table in each keyspace directory.
-
-| `-e`
-| `--dsbulk-use-embedded`
-| Use the embedded {dsbulk-loader} version instead of an external one.
-The default is to use an external {dsbulk-loader} command.
-
-|
-| `--export-bundle=PATH`
-| The path to a secure connect bundle to connect to the origin cluster, if that cluster is a {company} {astra-db} cluster.
-Options `--export-host` and `--export-bundle` are mutually exclusive.
-
-|
-| `--export-consistency=CONSISTENCY`
-| The consistency level to use when exporting data.
-The default is `LOCAL_QUORUM`.
-
-|
-| `--export-dsbulk-option=OPT=VALUE`
-| An extra {dsbulk-loader} option to use when exporting.
-Any valid {dsbulk-loader} option can be specified here, and it will passed as is to the {dsbulk-loader} process.
-{dsbulk-loader} options, including driver options, must be passed as `--long.option.name=`.
-Short options are not supported.
-
-|
-| `--export-host=HOST[:PORT]`
-| The host name or IP and, optionally, the port of a node from the origin cluster.
-If the port is not specified, it will default to `9042`.
-This option can be specified multiple times.
-Options `--export-host` and `--export-bundle` are mutually exclusive.
-
-|
-| `--export-max-concurrent-files=NUM\|AUTO`
-| The maximum number of concurrent files to write to.
-Must be a positive number or the special value `AUTO`.
-The default is `AUTO`.
-
-|
-| `--export-max-concurrent-queries=NUM\|AUTO`
-| The maximum number of concurrent queries to execute.
-Must be a positive number or the special value `AUTO`.
-The default is `AUTO`.
-
-|
-| `--export-max-records=NUM`
-| The maximum number of records to export for each table.
-Must be a positive number or `-1`.
-The default is `-1` (export the entire table).
-
-|
-| `--export-password`
-| The password to use to authenticate against the origin cluster.
-Options `--export-username` and `--export-password` must be provided together, or not at all.
-Omit the parameter value to be prompted for the password interactively.
-
-|
-| `--export-splits=NUM\|NC`
-| The maximum number of token range queries to generate.
-Use the `NC` syntax to specify a multiple of the number of available cores.
-For example, `8C` = 8 times the number of available cores.
-The default is `8C`.
-This is an advanced setting; you should rarely need to modify the default value.
-
-|
-| `--export-username=STRING`
-| The username to use to authenticate against the origin cluster.
-Options `--export-username` and `--export-password` must be provided together, or not at all.
-
-| `-h`
-| `--help`
-| Displays this help text.
-
-|
-| `--import-bundle=PATH`
-| The path to a {scb} to connect to a target {astra-db} cluster.
-Options `--import-host` and `--import-bundle` are mutually exclusive.
-
-|
-| `--import-consistency=CONSISTENCY`
-| The consistency level to use when importing data.
-The default is `LOCAL_QUORUM`.
-
-|
-| `--import-default-timestamp=`
-| The default timestamp to use when importing data.
-Must be a valid instant in ISO-8601 syntax.
-The default is `1970-01-01T00:00:00Z`.
-
-|
-| `--import-dsbulk-option=OPT=VALUE`
-| An extra {dsbulk-loader} option to use when importing.
-Any valid {dsbulk-loader} option can be specified here, and it will passed as is to the {dsbulk-loader} process.
-{dsbulk-loader} options, including driver options, must be passed as `--long.option.name=`.
-Short options are not supported.
-
-|
-| `--import-host=HOST[:PORT]`
-| The host name or IP and, optionally, the port of a node on the target cluster.
-If the port is not specified, it will default to `9042`.
-This option can be specified multiple times.
-Options `--import-host` and `--import-bundle` are mutually exclusive.
-
-|
-| `--import-max-concurrent-files=NUM\|AUTO`
-| The maximum number of concurrent files to read from.
-Must be a positive number or the special value `AUTO`.
-The default is `AUTO`.
-
-|
-| `--import-max-concurrent-queries=NUM\|AUTO`
-| The maximum number of concurrent queries to execute.
-Must be a positive number or the special value `AUTO`.
-The default is `AUTO`.
-
-|
-| `--import-max-errors=NUM`
-| The maximum number of failed records to tolerate when importing data.
-The default is `1000`.
-Failed records will appear in a `load.bad` file in the {dsbulk-loader} operation directory.
-
-|
-| `--import-password`
-| The password to use to authenticate against the target cluster.
-Options `--import-username` and `--import-password` must be provided together, or not at all.
-Omit the parameter value to be prompted for the password interactively.
-
-|
-| `--import-username=STRING`
-| The username to use to authenticate against the target cluster. Options `--import-username` and `--import-password` must be provided together, or not at all.
-
-| `-k`
-| `--keyspaces=REGEX`
-| A regular expression to select keyspaces to migrate.
-The default is to migrate all keyspaces except system keyspaces, {dse-short}-specific keyspaces, and the OpsCenter keyspace.
-Case-sensitive keyspace names must be entered in their exact case.
-
-| `-l`
-| `--dsbulk-log-dir=PATH`
-| The directory where the {dsbulk-loader} should store its logs.
-The default is a `logs` subdirectory in the current working directory.
-This subdirectory will be created if it does not exist.
-Each {dsbulk-loader} operation will create a subdirectory in the log directory specified here.
-
-|
-| `--max-concurrent-ops=NUM`
-| The maximum number of concurrent operations (exports and imports) to carry.
-The default is `1`.
-Set this to higher values to allow exports and imports to occur concurrently.
-For example, with a value of `2`, each table will be imported as soon as it is exported, while the next table is being exported.
-
-|
-| `--skip-truncate-confirmation`
-| Skip truncate confirmation before actually truncating tables.
-Only applicable when migrating counter tables, ignored otherwise.
-
-| `-t`
-| `--tables=REGEX`
-| A regular expression to select tables to migrate.
-The default is to migrate all tables in the keyspaces that were selected for migration with `--keyspaces`.
-Case-sensitive table names must be entered in their exact case.
-
-|
-| `--table-types=regular\|counter\|all`
-| The table types to migrate.
-The default is `all`.
-
-|
-| `--truncate-before-export`
-| Truncate tables before the export instead of after.
-The default is to truncate after the export.
-Only applicable when migrating counter tables, ignored otherwise.
-
-| `-w`
-| `--dsbulk-working-dir=PATH`
-| The directory where `dsbulk` should be executed.
-Ignored if the embedded {dsbulk-loader} is being used.
-If unspecified, it defaults to the current working directory.
-
-|===
-
-[[dsbulk-script]]
-== Script generation command-line options
-
-The following options are available for the `generate-script` command.
-Most options have sensible default values and do not need to be specified, unless you want to override the default value.
-
-
-[cols="2,8,14"]
-|===
-
-| `-c`
-| `--dsbulk-cmd=CMD`
-| The {dsbulk-loader} command to use.
-The default is simply `dsbulk`, assuming that the command is available through the `PATH` variable contents.
-
-| `-d`
-| `--data-dir=PATH`
-| The directory where data will be exported to and imported from.
-The default is a `data` subdirectory in the current working directory.
-The data directory will be created if it does not exist.
-
-|
-| `--export-bundle=PATH`
-| The path to a secure connect bundle to connect to the origin cluster, if that cluster is a {company} {astra-db} cluster.
-Options `--export-host` and `--export-bundle` are mutually exclusive.
-
-|
-| `--export-consistency=CONSISTENCY`
-| The consistency level to use when exporting data.
-The default is `LOCAL_QUORUM`.
-
-|
-| `--export-dsbulk-option=OPT=VALUE`
-| An extra {dsbulk-loader} option to use when exporting.
-Any valid {dsbulk-loader} option can be specified here, and it will passed as is to the {dsbulk-loader} process.
-{dsbulk-loader} options, including driver options, must be passed as `--long.option.name=`.
-Short options are not supported.
-
-|
-| `--export-host=HOST[:PORT]`
-| The host name or IP and, optionally, the port of a node from the origin cluster.
-If the port is not specified, it will default to `9042`.
-This option can be specified multiple times.
-Options `--export-host` and `--export-bundle` are mutually exclusive.
-
-|
-| `--export-max-concurrent-files=NUM\|AUTO`
-| The maximum number of concurrent files to write to.
-Must be a positive number or the special value `AUTO`.
-The default is `AUTO`.
-
-|
-| `--export-max-concurrent-queries=NUM\|AUTO`
-| The maximum number of concurrent queries to execute.
-Must be a positive number or the special value `AUTO`.
-The default is `AUTO`.
-
-|
-| `--export-max-records=NUM`
-| The maximum number of records to export for each table.
-Must be a positive number or `-1`.
-The default is `-1` (export the entire table).
-
-|
-| `--export-password`
-| The password to use to authenticate against the origin cluster.
-Options `--export-username` and `--export-password` must be provided together, or not at all.
-Omit the parameter value to be prompted for the password interactively.
-
-|
-| `--export-splits=NUM\|NC`
-| The maximum number of token range queries to generate.
-Use the `NC` syntax to specify a multiple of the number of available cores.
-For example, `8C` = 8 times the number of available cores.
-The default is `8C`.
-This is an advanced setting.
-You should rarely need to modify the default value.
-
-|
-| `--export-username=STRING`
-| The username to use to authenticate against the origin cluster.
-Options `--export-username` and `--export-password` must be provided together, or not at all.
-
-| `-h`
-| `--help`
-| Displays this help text.
-
-|
-| `--import-bundle=PATH`
-| The path to a Secure Connect Bundle to connect to a target {astra-db} cluster.
-Options `--import-host` and `--import-bundle` are mutually exclusive.
-
-|
-| `--import-consistency=CONSISTENCY`
-| The consistency level to use when importing data.
-The default is `LOCAL_QUORUM`.
-
-|
-| `--import-default-timestamp=`
-| The default timestamp to use when importing data.
-Must be a valid instant in ISO-8601 syntax.
-The default is `1970-01-01T00:00:00Z`.
-
-|
-| `--import-dsbulk-option=OPT=VALUE`
-| An extra {dsbulk-loader} option to use when importing.
-Any valid {dsbulk-loader} option can be specified here, and it will passed as is to the {dsbulk-loader} process.
-{dsbulk-loader} options, including driver options, must be passed as `--long.option.name=`.
-Short options are not supported.
-
-|
-| `--import-host=HOST[:PORT]`
-| The host name or IP and, optionally, the port of a node on the target cluster.
-If the port is not specified, it will default to `9042`.
-This option can be specified multiple times.
-Options `--import-host` and `--import-bundle` are mutually exclusive.
-
-|
-| `--import-max-concurrent-files=NUM\|AUTO`
-| The maximum number of concurrent files to read from.
-Must be a positive number or the special value `AUTO`.
-The default is `AUTO`.
-
-|
-| `--import-max-concurrent-queries=NUM\|AUTO`
-| The maximum number of concurrent queries to execute.
-Must be a positive number or the special value `AUTO`.
-The default is `AUTO`.
-
-|
-| `--import-max-errors=NUM`
-| The maximum number of failed records to tolerate when importing data.
-The default is `1000`.
-Failed records will appear in a `load.bad` file in the {dsbulk-loader} operation directory.
-
-|
-| `--import-password`
-| The password to use to authenticate against the target cluster.
-Options `--import-username` and `--import-password` must be provided together, or not at all.
-Omit the parameter value to be prompted for the password interactively.
-
-|
-| `--import-username=STRING`
-| The username to use to authenticate against the target cluster.
-Options `--import-username` and `--import-password` must be provided together, or not at all.
-
-| `-k`
-| `--keyspaces=REGEX`
-| A regular expression to select keyspaces to migrate.
-The default is to migrate all keyspaces except system keyspaces, {dse-short}-specific keyspaces, and the OpsCenter keyspace.
-Case-sensitive keyspace names must be entered in their exact case.
-
-| `-l`
-| `--dsbulk-log-dir=PATH`
-| The directory where {dsbulk-loader} should store its logs.
-The default is a `logs` subdirectory in the current working directory.
-This subdirectory will be created if it does not exist.
-Each {dsbulk-loader} operation will create a subdirectory in the log directory specified here.
-
-| `-t`
-| `--tables=REGEX`
-| A regular expression to select tables to migrate.
-The default is to migrate all tables in the keyspaces that were selected for migration with `--keyspaces`.
-Case-sensitive table names must be entered in their exact case.
-
-|
-| `--table-types=regular\|counter\|all`
-| The table types to migrate. The default is `all`.
-
-|===
-
-
-[[dsbulk-ddl]]
-== DDL generation command-line options
-
-The following options are available for the `generate-ddl` command.
-Most options have sensible default values and do not need to be specified, unless you want to override the default value.
-
-[cols="2,8,14"]
-|===
-
-| `-a`
-| `--optimize-for-astra`
-| Produce CQL scripts optimized for {company} {astra-db}.
-{astra-db} does not allow some options in DDL statements.
-Using this {dsbulk-migrator} command option, forbidden {astra-db} options will be omitted from the generated CQL files.
-
-| `-d`
-| `--data-dir=PATH`
-| The directory where data will be exported to and imported from.
-The default is a `data` subdirectory in the current working directory.
-The data directory will be created if it does not exist.
-
-|
-| `--export-bundle=PATH`
-| The path to a secure connect bundle to connect to the origin cluster, if that cluster is a {company} {astra-db} cluster.
-Options `--export-host` and `--export-bundle` are mutually exclusive.
-
-|
-| `--export-host=HOST[:PORT]`
-| The host name or IP and, optionally, the port of a node from the origin cluster.
-If the port is not specified, it will default to `9042`.
-This option can be specified multiple times.
-Options `--export-host` and `--export-bundle` are mutually exclusive.
-
-|
-| `--export-password`
-| The password to use to authenticate against the origin cluster.
-Options `--export-username` and `--export-password` must be provided together, or not at all.
-Omit the parameter value to be prompted for the password interactively.
-
-|
-| `--export-username=STRING`
-| The username to use to authenticate against the origin cluster.
-Options `--export-username` and `--export-password` must be provided together, or not at all.
-
-| `-h`
-| `--help`
-| Displays this help text.
-
-| `-k`
-| `--keyspaces=REGEX`
-| A regular expression to select keyspaces to migrate.
-The default is to migrate all keyspaces except system keyspaces, {dse-short}-specific keyspaces, and the OpsCenter keyspace.
-Case-sensitive keyspace names must be entered in their exact case.
-
-| `-t`
-| `--tables=REGEX`
-| A regular expression to select tables to migrate.
-The default is to migrate all tables in the keyspaces that were selected for migration with `--keyspaces`.
-Case-sensitive table names must be entered in their exact case.
-
-|
-| `--table-types=regular\|counter\|all`
-| The table types to migrate.
-The default is `all`.
-
-|===
-
-[[dsbulk-examples]]
-== {dsbulk-migrator} examples
-
-These examples show sample `username` and `password` values that are for demonstration purposes only.
-Don't use these values in your environment.
-
-=== Generate a migration script
-
-Generate a migration script to migrate from an existing origin cluster to a target {astra-db} cluster:
-
-[source,bash]
-----
- java -jar target/dsbulk-migrator--embedded-driver.jar migrate-live \
- --data-dir=/path/to/data/dir \
- --dsbulk-cmd=${DSBULK_ROOT}/bin/dsbulk \
- --dsbulk-log-dir=/path/to/log/dir \
- --export-host=my-origin-cluster.com \
- --export-username=user1 \
- --export-password=s3cr3t \
- --import-bundle=/path/to/bundle \
- --import-username=user1 \
- --import-password=s3cr3t
-----
-
-=== Live migration with an external {dsbulk-loader} installation
-
-Perform a live migration from an existing origin cluster to a target {astra-db} cluster using an external {dsbulk-loader} installation:
-
-[source,bash]
-----
- java -jar target/dsbulk-migrator--embedded-driver.jar migrate-live \
- --data-dir=/path/to/data/dir \
- --dsbulk-cmd=${DSBULK_ROOT}/bin/dsbulk \
- --dsbulk-log-dir=/path/to/log/dir \
- --export-host=my-origin-cluster.com \
- --export-username=user1 \
- --export-password # password will be prompted \
- --import-bundle=/path/to/bundle \
- --import-username=user1 \
- --import-password # password will be prompted
-----
-
-Passwords are prompted interactively.
-
-=== Live migration with the embedded {dsbulk-loader}
-
-Perform a live migration from an existing origin cluster to a target {astra-db} cluster using the embedded {dsbulk-loader} installation:
-
-[source,bash]
-----
- java -jar target/dsbulk-migrator--embedded-dsbulk.jar migrate-live \
- --data-dir=/path/to/data/dir \
- --dsbulk-use-embedded \
- --dsbulk-log-dir=/path/to/log/dir \
- --export-host=my-origin-cluster.com \
- --export-username=user1 \
- --export-password # password will be prompted \
- --export-dsbulk-option "--connector.csv.maxCharsPerColumn=65536" \
- --export-dsbulk-option "--executor.maxPerSecond=1000" \
- --import-bundle=/path/to/bundle \
- --import-username=user1 \
- --import-password # password will be prompted \
- --import-dsbulk-option "--connector.csv.maxCharsPerColumn=65536" \
- --import-dsbulk-option "--executor.maxPerSecond=1000"
-----
-
-Passwords are prompted interactively.
-
-The preceding example passes additional {dsbulk-loader} options.
-
-The preceding example requires the `dsbulk-migrator--embedded-dsbulk.jar` fat jar.
-Otherwise, an error is raised because no embedded {dsbulk-loader} can be found.
-
-=== Generate DDL files to recreate the origin schema on the target cluster
-
-Generate DDL files to recreate the origin schema on a target {astra-db} cluster:
-
-[source,bash]
-----
- java -jar target/dsbulk-migrator--embedded-driver.jar generate-ddl \
- --data-dir=/path/to/data/dir \
- --export-host=my-origin-cluster.com \
- --export-username=user1 \
- --export-password=s3cr3t \
- --optimize-for-astra
-----
-
-[[getting-help-with-dsbulk-migrator]]
-== Get help with {dsbulk-migrator}
-
-Use the following command to display the available {dsbulk-migrator} commands:
-
-[source,bash]
-----
-java -jar /path/to/dsbulk-migrator-embedded-dsbulk.jar --help
-----
-
-For individual command help and each one's options:
-
-[source,bash]
-----
-java -jar /path/to/dsbulk-migrator-embedded-dsbulk.jar COMMAND --help
-----
-
-== See also
-
-* xref:dsbulk:overview:dsbulk-about.adoc[{dsbulk-loader}]
-* xref:dsbulk:reference:dsbulk-cmd.adoc#escape-and-quote-command-line-arguments[Escape and quote {dsbulk-loader} command line arguments]
\ No newline at end of file
diff --git a/modules/sideloader/pages/sideloader-overview.adoc b/modules/sideloader/pages/sideloader-overview.adoc
index 1b6cd07b..9765c11d 100644
--- a/modules/sideloader/pages/sideloader-overview.adoc
+++ b/modules/sideloader/pages/sideloader-overview.adoc
@@ -115,7 +115,10 @@ include::sideloader:partial$validate.adoc[]
== Use {sstable-sideloader} with {product-proxy}
-include::sideloader:partial$sideloader-zdm.adoc[]
+If you need to migrate a live database, you can use {sstable-sideloader} instead of {dsbulk-migrator} or {cass-migrator} during xref:ROOT:migrate-and-validate-data.adoc[Phase 2 of {product}].
+
+.Use {sstable-sideloader} with {product-proxy}
+svg::sideloader:astra-migration-toolkit.svg[]
== Next steps
diff --git a/modules/sideloader/pages/sideloader-zdm.adoc b/modules/sideloader/pages/sideloader-zdm.adoc
deleted file mode 100644
index 1111f833..00000000
--- a/modules/sideloader/pages/sideloader-zdm.adoc
+++ /dev/null
@@ -1,25 +0,0 @@
-= Use {sstable-sideloader} with {product-proxy}
-:navtitle: Use {sstable-sideloader}
-:description: {sstable-sideloader} is a service running in {astra-db} that imports data from snapshots of your existing {cass-short}-based cluster.
-
-{description}
-This tool is exclusively for migrations that move data to {astra-db}.
-
-Because it imports data directly, {sstable-sideloader} can offer several advantages over CQL-based tools like {dsbulk-migrator} and {cass-migrator}, including faster, more cost-effective data loading, and minimal performance impacts on your origin cluster and target database.
-
-== Migrate data with {sstable-sideloader}
-
-To migrate data with {sstable-sideloader}, you use `nodetool`, a cloud provider's CLI, and the {astra} {devops-api}:
-
-* *`nodetool`*: Create snapshots of your existing {dse-short}, {hcd-short}, open-source {cass-short} cluster.
-For compatible origin clusters, see xref:ROOT:astra-migration-paths.adoc[].
-* *Cloud provider CLI*: Upload snapshots to a dedicated cloud storage bucket for your migration.
-* *{astra} {devops-api}*: Run the {sstable-sideloader} commands to write the data from cloud storage to your {astra-db} database.
-
-For more information and instructions, see xref:sideloader:sideloader-overview.adoc[].
-
-== Use {sstable-sideloader} with {product-proxy}
-
-You can use {sstable-sideloader} alone or with {product-proxy}.
-
-include::sideloader:partial$sideloader-zdm.adoc[]
\ No newline at end of file
diff --git a/modules/sideloader/partials/sideloader-zdm.adoc b/modules/sideloader/partials/sideloader-zdm.adoc
deleted file mode 100644
index bf4fd583..00000000
--- a/modules/sideloader/partials/sideloader-zdm.adoc
+++ /dev/null
@@ -1,4 +0,0 @@
-If you need to migrate a live database, you can use {sstable-sideloader} instead of {dsbulk-migrator} or {cass-migrator} during of xref:ROOT:migrate-and-validate-data.adoc[Phase 2 of {product}].
-
-.Use {sstable-sideloader} with {product-proxy}
-svg::sideloader:astra-migration-toolkit.svg[]
\ No newline at end of file