From d9a8b4d029ac8c83f736b208f7e9ba4c929075a9 Mon Sep 17 00:00:00 2001
From: Pavel Semyonov
Date: Thu, 12 Sep 2024 18:57:00 +0700
Subject: [PATCH 01/16] Add tt migrations
---
doc/platform/ddl_dml/space_upgrade.rst | 1 +
doc/reference/reference_lua/box_space.rst | 3 +
doc/tooling/tt_cli/commands.rst | 3 +
doc/tooling/tt_cli/migrations.rst | 483 ++++++++++++++++++++++
4 files changed, 490 insertions(+)
create mode 100644 doc/tooling/tt_cli/migrations.rst
diff --git a/doc/platform/ddl_dml/space_upgrade.rst b/doc/platform/ddl_dml/space_upgrade.rst
index 2776d39db..340c0f142 100644
--- a/doc/platform/ddl_dml/space_upgrade.rst
+++ b/doc/platform/ddl_dml/space_upgrade.rst
@@ -1,4 +1,5 @@
.. _enterprise-space_upgrade:
+.. _box_space-upgrade:
Upgrading space schema
======================
diff --git a/doc/reference/reference_lua/box_space.rst b/doc/reference/reference_lua/box_space.rst
index cc51bcba0..5795ec848 100644
--- a/doc/reference/reference_lua/box_space.rst
+++ b/doc/reference/reference_lua/box_space.rst
@@ -97,6 +97,9 @@ Below is a list of all ``box.space`` functions and members.
* - :doc:`./box_space/update`
- Update a tuple
+ * - :ref:`box_space-upgrade`
+ - Upgrade the space format and tuples
+
* - :doc:`./box_space/upsert`
- Update a tuple
diff --git a/doc/tooling/tt_cli/commands.rst b/doc/tooling/tt_cli/commands.rst
index 3c7adf913..1bc9d0967 100644
--- a/doc/tooling/tt_cli/commands.rst
+++ b/doc/tooling/tt_cli/commands.rst
@@ -58,6 +58,8 @@ help for the given command.
- Print instance logs
* - :doc:`logrotate `
- Rotate instance logs
+ * - :doc:`migrations `
+ - Manage migrations
* - :doc:`pack `
- Package an application
* - :doc:`play `
@@ -112,6 +114,7 @@ help for the given command.
kill
log
logrotate
+ migrations
pack
play
replicaset
diff --git a/doc/tooling/tt_cli/migrations.rst b/doc/tooling/tt_cli/migrations.rst
new file mode 100644
index 000000000..661268677
--- /dev/null
+++ b/doc/tooling/tt_cli/migrations.rst
@@ -0,0 +1,483 @@
+.. _tt-migrations:
+
+Performing migrations
+=====================
+
+.. admonition:: Enterprise Edition
+ :class: fact
+
+ This command is supported by the `Enterprise Edition `_ only.
+
+.. code-block:: console
+
+ $ tt migrations COMMAND [COMMAND_OPTION ...]
+
+``tt migrations`` manages :ref:`migrations ` in a Tarantool EE cluster.
+
+.. important::
+
+ Only Tarantool EE clusters with etcd centralized configuration storage are supported.
+
+Prerequisites:
+
+- A Tarantool EE cluster that uses etcd as a centralized configuration storage
+- The ``crud`` module installed in the cluster application
+
+Migrations are Lua files with scenarios that define data schema changes.
+The ``tt-migrations.helpers`` module provides helper functions for writing them.
+
+A typical migration workflow with ``tt migrations`` includes the following steps:
+
+1. Prepare migration files.
+2. Publish the migrations to the etcd storage with ``tt migrations publish``.
+3. Apply the migrations with ``tt migrations apply``.
+4. Check the migration status with ``tt migrations status``.
+
+To handle errors during migration execution, you can stop a hanging migration
+with ``tt migrations stop`` and, after fixing its scenario, apply it again
+with the ``--force-reapply`` option.
+
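+A migration file is a Lua module that returns an apply scenario: a function
+performing the schema change. A minimal sketch (the space and index names are
+illustrative):
+
+.. code-block:: lua
+
+    local function apply_scenario()
+        local space = box.space['writers']
+
+        space:create_index('age', {parts = {'age'}})
+    end
+
+    return {
+        apply = {
+            scenario = apply_scenario,
+        },
+    }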
+
+``COMMAND`` is one of the following:
+
+* :ref:`apply `
+* :ref:`publish `
+* :ref:`remove `
+* :ref:`status `
+* :ref:`stop `
+
+
+.. _tt-migrations-apply:
+
+apply
+-----
+
+.. code-block:: console
+
+ $ tt migrations apply ETCD_URI [OPTION ...]
+
+``tt migrations apply`` applies migrations :ref:`published `
+to the cluster. It executes all migrations from the cluster's centralized
+configuration storage on all its read-write instances (replica set leaders).
+
+.. code-block:: console
+
+    $ tt migrations apply https://user:pass@localhost:2379/myapp \
+        --tarantool-username=admin --tarantool-password=pass
+
+You can select a single migration for execution by adding the ``--migration`` option:
+
+.. code-block:: console
+
+    $ tt migrations apply https://user:pass@localhost:2379/myapp \
+ --tarantool-username=admin --tarantool-password=pass \
+ --migration=000001_create_space.lua
+
+You can select a single replica set to apply migrations to:
+
+.. code-block:: console
+
+    $ tt migrations apply https://user:pass@localhost:2379/myapp \
+ --tarantool-username=admin --tarantool-password=pass \
+ --replicaset=storage-001
+
+To apply migrations disregarding their previous status or order, use the
+``--force-reapply``, ``--ignore-order-violation``, and ``--ignore-preceding-status``
+options described in :ref:`tt-migrations-options`.
+
+.. warning::
+
+    These options skip safety checks. Using them may cause migration
+    inconsistency in the cluster.
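+
+For example, to forcibly re-apply a single migration disregarding its previous
+status (shown for illustration; use with care):
+
+.. code-block:: console
+
+    $ tt migrations apply https://user:pass@localhost:2379/myapp \
+        --tarantool-username=admin --tarantool-password=pass \
+        --migration=000001_create_space.lua --force-reapply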
+
+.. _tt-migrations-publish:
+
+publish
+-------
+
+.. code-block:: console
+
+ $ tt migrations publish ETCD_URI [MIGRATIONS_DIR | MIGRATION_FILE] [OPTION ...]
+
+``tt migrations publish`` sends the migration files to the cluster's centralized
+configuration storage for future execution.
+
+By default, the command sends all files stored in ``migrations/`` inside the current
+directory.
+
+.. code-block:: console
+
+ $ tt migrations publish https://user:pass@localhost:2379/myapp
+
+To select another directory with migration files, provide a path to it as the command
+argument:
+
+.. code-block:: console
+
+ $ tt migrations publish https://user:pass@localhost:2379/myapp my_migrations
+
+To publish a single migration from a file, use its name or path as the command argument:
+
+.. code-block:: console
+
+ $ tt migrations publish https://user:pass@localhost:2379/myapp migrations/000001_create_space.lua
+
+Optionally, you can provide a key to use as a migration identifier instead of the file name:
+
+.. code-block:: console
+
+ $ tt migrations publish https://user:pass@localhost:2379/myapp file.lua \
+ --key=000001_create_space.lua
+
+When publishing migrations, ``tt`` performs several checks for:
+
+- Syntax errors in migration files. To skip syntax check, add the ``--skip-syntax-check`` option.
+- Existence of migrations with the same names. To overwrite an existing migration
+  with the same name, add the ``--overwrite`` option.
+- Migration names order. By default, ``tt migrations`` only adds new migrations
+ to the end of the migrations list ordered lexicographically. For example, if
+ migrations ``001.lua`` and ``003.lua`` are already published, an attempt to publish
+ ``002.lua`` will fail. To force publishing migrations disregarding the order,
+ add the ``--ignore-order-violation`` option.
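+
+For example, a migrations directory whose file names follow this lexicographical
+ordering convention might look like this (the names are illustrative):
+
+.. code-block:: console
+
+    $ ls migrations/
+    000001_create_writers_space.lua
+    000002_create_writers_index.lua
+    000003_alter_writers_space.lua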
+
+.. warning::
+
+    Using the options that skip checks when publishing migrations may cause
+    migration inconsistency.
+
+.. _tt-migrations-remove:
+
+remove
+------
+
+.. code-block:: console
+
+ $ tt migrations remove ETCD_URI [OPTION ...]
+
+``tt migrations remove`` removes published migrations from the centralized storage.
+With additional options, it can also remove the information about the migration execution
+on the cluster instances.
+
+To remove all migrations from a specified centralized storage:
+
+.. code-block:: console
+
+    $ tt migrations remove https://user:pass@localhost:2379/myapp \
+ --tarantool-username=admin --tarantool-password=pass
+
+To remove a specific migration, pass its name in the ``--migration`` option:
+
+.. code-block:: console
+
+    $ tt migrations remove https://user:pass@localhost:2379/myapp \
+ --tarantool-username=admin --tarantool-password=pass \
+ --migration=000001_create_writers_space.lua
+
+Before removing migrations, the command checks their :ref:`status `
+on the cluster. To ignore the status and remove migrations anyway, add the
+``--force-remove-on=config-storage`` option:
+
+.. code-block:: console
+
+    $ tt migrations remove https://user:pass@localhost:2379/myapp --force-remove-on=config-storage
+
+.. note::
+
+    In this case, cluster credentials are not required.
+
+To remove migration execution information from the cluster (clear the migration status),
+use the ``--force-remove-on=cluster`` option:
+
+.. code-block:: console
+
+    $ tt migrations remove https://user:pass@localhost:2379/myapp \
+ --tarantool-username=admin --tarantool-password=pass \
+ --force-remove-on=cluster
+
+To clear all migration information from the centralized storage and cluster,
+use the ``--force-remove-on=all`` option:
+
+.. code-block:: console
+
+    $ tt migrations remove https://user:pass@localhost:2379/myapp \
+ --tarantool-username=admin --tarantool-password=pass \
+ --force-remove-on=all
+
+.. warning::
+
+    Force removal skips the status checks. Using the ``--force-remove-on``
+    option may cause migration inconsistency in the cluster.
+
+.. _tt-migrations-status:
+
+status
+------
+
+.. code-block:: console
+
+ $ tt migrations status ETCD_URI [OPTION ...]
+
+``tt migrations status`` prints the list of migrations published to the centralized
+storage and the result of their execution on the cluster instances.
+
+Possible migration statuses are:
+
+- ``APPLY_STARTED`` -- the migration execution has started but not completed yet
+- ``APPLIED`` -- the migration is successfully applied on the instance
+- ``FAILED`` -- there were errors during the migration execution on the instance
+
+To get the list of migrations stored in the given etcd storage and information about
+their execution on the cluster, run:
+
+.. code-block:: console
+
+    $ tt migrations status https://user:pass@localhost:2379/myapp \
+ --tarantool-username=admin --tarantool-password=pass
+
+If the cluster uses SSL encryption, add SSL options. Learn more in :ref:`Authentication `.
+
+Use the ``--migration`` and ``--replicaset`` options to get information about specific
+migrations or replica sets:
+
+.. code-block:: console
+
+    $ tt migrations status https://user:pass@localhost:2379/myapp \
+ --tarantool-username=admin --tarantool-password=pass \
+ --replicaset=storage-001 --migration=000001_create_writers_space.lua
+
+The ``--display-mode`` option allows you to tailor the command output:
+
+- With ``--display-mode config-storage``, the command prints only the list of migrations
+  published to the centralized storage.
+- With ``--display-mode cluster``, the command prints only the migration statuses
+  on the cluster instances.
+
+To find out the results of a migration execution on a specific replica set in the cluster, run:
+
+.. code-block:: console
+
+    $ tt migrations status https://user:pass@localhost:2379/myapp \
+ --tarantool-username=admin --tarantool-password=pass \
+ --replicaset=storage-001 --display-mode=cluster
+
+
+.. _tt-migrations-stop:
+
+stop
+----
+
+.. code-block:: console
+
+ $ tt migrations stop ETCD_URI [OPTION ...]
+
+``tt migrations stop`` stops the execution of migrations in the cluster.
+
+.. warning::
+
+    Calling ``tt migrations stop`` may cause migration inconsistency in the cluster.
+
+To stop execution of migrations currently running in the cluster:
+
+.. code-block:: console
+
+ $ tt migrations stop https://user:pass@localhost:2379/myapp \
+ --tarantool-username=admin --tarantool-password=secret-cluster-cookie
+
+
+.. _tt-migrations-auth:
+
+Authentication
+--------------
+
+Since ``tt migrations`` manages migrations via a centralized etcd storage, it
+needs credentials to access this storage. There are two ways to pass etcd credentials:
+
+- The ``--config-storage-username`` and ``--config-storage-password`` command options
+- The etcd URI, for example, ``https://user:pass@localhost:2379/myapp``
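+
+For example, passing etcd credentials via the command options instead of the URI
+(the values are illustrative):
+
+.. code-block:: console
+
+    $ tt migrations status https://localhost:2379/myapp \
+        --config-storage-username=user --config-storage-password=pass \
+        --tarantool-username=admin --tarantool-password=pass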
+
+
+For commands that connect to the cluster (that is, all except ``publish``), Tarantool
+credentials are also required. They are passed in the ``--tarantool-username`` and
+``--tarantool-password`` options.
+
+If the cluster uses SSL traffic encryption, provide the necessary connection
+parameters in the ``--tarantool-ssl*`` options: ``--tarantool-sslcertfile``,
+``--tarantool-sslkeyfile``, and others. All options are listed in :ref:`tt-migrations-options`.
+
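+For example, a ``status`` call to a cluster with SSL encryption enabled might
+look like this (the certificate paths are illustrative):
+
+.. code-block:: console
+
+    $ tt migrations status https://user:pass@localhost:2379/myapp \
+        --tarantool-username=admin --tarantool-password=pass \
+        --tarantool-use-ssl \
+        --tarantool-sslcertfile=certs/client.crt \
+        --tarantool-sslkeyfile=certs/client.key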
+
+.. _tt-migrations-options:
+
+Options
+-------
+
+.. option:: --acquire-lock-timeout INT
+
+    **Applicable to:** ``apply``
+
+    A timeout for acquiring the migrations fiber lock, in seconds. The fiber lock
+    prevents concurrent migration runs. Default: ``60``.
+
+.. option:: --config-storage-password STRING
+
+ A password for connecting to the centralized migrations storage (etcd).
+
+ See also: :ref:`tt-migrations-auth`.
+
+.. option:: --config-storage-username STRING
+
+ A username for connecting to the centralized migrations storage (etcd).
+
+ See also: :ref:`tt-migrations-auth`.
+
+.. option:: --display-mode STRING
+
+    **Applicable to:** ``status``
+
+    Display only specific information. Possible values:
+
+    - ``config-storage`` -- information about migrations published to the centralized storage.
+    - ``cluster`` -- information about migrations applied on the cluster.
+
+    See also: :ref:`tt-migrations-status`.
+
+.. option:: --execution-timeout INT
+
+    **Applicable to:** ``apply``, ``remove``, ``status``, ``stop``
+
+    A timeout for completing the operation on a single Tarantool instance, in seconds.
+    Default values:
+
+    - ``3`` for ``remove``, ``status``, and ``stop``
+    - ``3600`` for ``apply``
+
+.. option:: --force-reapply
+
+    **Applicable to:** ``apply``
+
+    Apply migrations disregarding their previous status.
+
+    .. warning::
+
+        Using this option may result in migration inconsistency in the cluster.
+
+.. option:: --force-remove-on STRING
+
+    **Applicable to:** ``remove``
+
+    Remove migrations disregarding their status. Possible values:
+
+    - ``config-storage`` -- remove migrations from the etcd centralized storage
+      disregarding their apply status on the cluster.
+    - ``cluster`` -- remove only the migration status information from the Tarantool cluster.
+    - ``all`` -- perform both ``config-storage`` and ``cluster`` force removals.
+
+    .. warning::
+
+        Using this option may result in migration inconsistency in the cluster.
+
+.. option:: --ignore-order-violation
+
+    **Applicable to:** ``apply``, ``publish``
+
+    Skip the migration order check. Using this option may result in migration
+    inconsistency in the cluster.
+
+.. option:: --ignore-preceding-status
+
+    **Applicable to:** ``apply``
+
+    Skip the status check of preceding migrations on apply. Using this option
+    may result in migration inconsistency in the cluster.
+
+.. option:: --key STRING
+
+    **Applicable to:** ``publish``
+
+    A key to use as the migration identifier in etcd instead of the file name.
+    Applicable only when publishing a single file.
+
+.. option:: --migration STRING
+
+    **Applicable to:** ``apply``, ``remove``, ``status``
+
+    A name of the migration to which the operation applies.
+
+.. option:: --overwrite
+
+    **Applicable to:** ``publish``
+
+    Overwrite existing migration storage keys. Using this option may result in
+    migration inconsistency in the cluster.
+
+.. option:: --replicaset STRING
+
+    **Applicable to:** ``apply``, ``remove``, ``status``, ``stop``
+
+    Execute the operation only on the specified replica set.
+
+.. option:: --skip-syntax-check
+
+    **Applicable to:** ``publish``
+
+    Skip the syntax check before publishing. Using this option may cause other
+    ``tt migrations`` operations to fail.
+
+.. option:: --tarantool-auth STRING
+
+    **Applicable to:** ``apply``, ``remove``, ``status``, ``stop``
+
+    An authentication type used to connect to the Tarantool cluster instances.
+
+.. option:: --tarantool-connect-timeout INT
+
+    **Applicable to:** ``apply``, ``remove``, ``status``, ``stop``
+
+    A connection timeout for the Tarantool cluster instances, in seconds. Default: ``3``.
+
+.. option:: --tarantool-password STRING
+
+    **Applicable to:** ``apply``, ``remove``, ``status``, ``stop``
+
+    A password used for connecting to the Tarantool cluster instances.
+
+.. option:: --tarantool-sslcafile STRING
+
+    **Applicable to:** ``apply``, ``remove``, ``status``, ``stop``
+
+    An SSL CA file used to connect to the Tarantool cluster instances.
+
+.. option:: --tarantool-sslcertfile STRING
+
+    **Applicable to:** ``apply``, ``remove``, ``status``, ``stop``
+
+    An SSL certificate file used to connect to the Tarantool cluster instances.
+
+.. option:: --tarantool-sslciphers STRING
+
+    **Applicable to:** ``apply``, ``remove``, ``status``, ``stop``
+
+    A colon-separated list of SSL ciphers used to connect to the Tarantool cluster instances.
+
+.. option:: --tarantool-sslkeyfile STRING
+
+    **Applicable to:** ``apply``, ``remove``, ``status``, ``stop``
+
+    An SSL key file used to connect to the Tarantool cluster instances.
+
+.. option:: --tarantool-sslpassword STRING
+
+    **Applicable to:** ``apply``, ``remove``, ``status``, ``stop``
+
+    A password for the SSL key file used to connect to the Tarantool cluster instances.
+
+.. option:: --tarantool-sslpasswordfile STRING
+
+    **Applicable to:** ``apply``, ``remove``, ``status``, ``stop``
+
+    A file with passwords for the SSL key file used to connect to the Tarantool cluster instances.
+
+.. option:: --tarantool-use-ssl
+
+    **Applicable to:** ``apply``, ``remove``, ``status``, ``stop``
+
+    Use SSL to connect to the Tarantool cluster instances without providing
+    additional SSL parameters.
+
+.. option:: --tarantool-username STRING
+
+    **Applicable to:** ``apply``, ``remove``, ``status``, ``stop``
+
+    A username for connecting to the Tarantool cluster instances.
From 1e54dae77eea63d4d4e1da4f706111ade705e337 Mon Sep 17 00:00:00 2001
From: Pavel Semyonov
Date: Thu, 26 Sep 2024 18:11:26 +0700
Subject: [PATCH 02/16] Migration tutorial draft
---
.../snippets/migrations/README.md | 4 +
.../snippets/migrations/etcd_setup.sh | 12 +
.../instances.enabled/myapp/config.yaml | 10 +
.../instances.enabled/myapp/instances.yml | 5 +
.../myapp/myapp-scm-1.rockspec | 14 +
.../instances.enabled/myapp/source.yaml | 73 ++
.../scenario/000001_create_writers_space.lua | 23 +
.../scenario/000002_create_writers_index.lua | 11 +
.../scenario/000003_alter_writers_space.lua | 48 ++
doc/code_snippets/snippets/migrations/tt.yaml | 54 ++
doc/platform/ddl_dml/migrations.rst | 175 +++--
.../ddl_dml/performing_migrations_tt.rst | 658 ++++++++++++++++++
doc/platform/ddl_dml/space_upgrade.rst | 2 +-
.../ddl_dml/troubleshooting_migrations_tt.rst | 51 ++
doc/tooling/tt_cli/migrations.rst | 20 -
15 files changed, 1070 insertions(+), 90 deletions(-)
create mode 100644 doc/code_snippets/snippets/migrations/README.md
create mode 100644 doc/code_snippets/snippets/migrations/etcd_setup.sh
create mode 100644 doc/code_snippets/snippets/migrations/instances.enabled/myapp/config.yaml
create mode 100644 doc/code_snippets/snippets/migrations/instances.enabled/myapp/instances.yml
create mode 100644 doc/code_snippets/snippets/migrations/instances.enabled/myapp/myapp-scm-1.rockspec
create mode 100644 doc/code_snippets/snippets/migrations/instances.enabled/myapp/source.yaml
create mode 100644 doc/code_snippets/snippets/migrations/migrations/scenario/000001_create_writers_space.lua
create mode 100644 doc/code_snippets/snippets/migrations/migrations/scenario/000002_create_writers_index.lua
create mode 100644 doc/code_snippets/snippets/migrations/migrations/scenario/000003_alter_writers_space.lua
create mode 100644 doc/code_snippets/snippets/migrations/tt.yaml
create mode 100644 doc/platform/ddl_dml/performing_migrations_tt.rst
create mode 100644 doc/platform/ddl_dml/troubleshooting_migrations_tt.rst
diff --git a/doc/code_snippets/snippets/migrations/README.md b/doc/code_snippets/snippets/migrations/README.md
new file mode 100644
index 000000000..cc27a6627
--- /dev/null
+++ b/doc/code_snippets/snippets/migrations/README.md
@@ -0,0 +1,4 @@
+# Centralized migrations
+
+Sample applications demonstrating how to use the centralized migration mechanism
+for Tarantool EE clusters via the tt utility. Learn more at [Migrations](https://www.tarantool.io/en/doc/latest/platform/ddl_dml/migrations/).
diff --git a/doc/code_snippets/snippets/migrations/etcd_setup.sh b/doc/code_snippets/snippets/migrations/etcd_setup.sh
new file mode 100644
index 000000000..31855a599
--- /dev/null
+++ b/doc/code_snippets/snippets/migrations/etcd_setup.sh
@@ -0,0 +1,12 @@
+#!/usr/bin/env bash
+
+# 1. Remove the 'default.etcd' directory to reset etcd to initial state.
+# 2. Start etcd by executing the 'etcd' command.
+# 3. Execute this script to enable authentication.
+
+etcdctl user add root:topsecret
+etcdctl role add app_config_manager
+etcdctl role grant-permission app_config_manager --prefix=true readwrite /myapp/
+etcdctl user add app_user:config_pass
+etcdctl user grant-role app_user app_config_manager
+etcdctl auth enable
diff --git a/doc/code_snippets/snippets/migrations/instances.enabled/myapp/config.yaml b/doc/code_snippets/snippets/migrations/instances.enabled/myapp/config.yaml
new file mode 100644
index 000000000..9d1ab2666
--- /dev/null
+++ b/doc/code_snippets/snippets/migrations/instances.enabled/myapp/config.yaml
@@ -0,0 +1,10 @@
+config:
+ etcd:
+ endpoints:
+ - http://localhost:2379
+ prefix: /myapp/
+ username: app_user
+ password: config_pass
+ http:
+ request:
+ timeout: 3
\ No newline at end of file
diff --git a/doc/code_snippets/snippets/migrations/instances.enabled/myapp/instances.yml b/doc/code_snippets/snippets/migrations/instances.enabled/myapp/instances.yml
new file mode 100644
index 000000000..10bab3f7c
--- /dev/null
+++ b/doc/code_snippets/snippets/migrations/instances.enabled/myapp/instances.yml
@@ -0,0 +1,5 @@
+router-001-a:
+storage-001-a:
+storage-001-b:
+storage-002-a:
+storage-002-b:
\ No newline at end of file
diff --git a/doc/code_snippets/snippets/migrations/instances.enabled/myapp/myapp-scm-1.rockspec b/doc/code_snippets/snippets/migrations/instances.enabled/myapp/myapp-scm-1.rockspec
new file mode 100644
index 000000000..6ad1a7830
--- /dev/null
+++ b/doc/code_snippets/snippets/migrations/instances.enabled/myapp/myapp-scm-1.rockspec
@@ -0,0 +1,14 @@
+package = 'myapp'
+version = 'scm-1'
+
+source = {
+ url = '/dev/null',
+}
+
+dependencies = {
+ 'crud == 1.5.2',
+}
+
+build = {
+ type = 'none';
+}
\ No newline at end of file
diff --git a/doc/code_snippets/snippets/migrations/instances.enabled/myapp/source.yaml b/doc/code_snippets/snippets/migrations/instances.enabled/myapp/source.yaml
new file mode 100644
index 000000000..16ba76662
--- /dev/null
+++ b/doc/code_snippets/snippets/migrations/instances.enabled/myapp/source.yaml
@@ -0,0 +1,73 @@
+credentials:
+ users:
+ client:
+ password: 'secret'
+ roles: [super]
+ replicator:
+ password: 'secret'
+ roles: [replication]
+ storage:
+ password: 'secret'
+ roles: [sharding]
+
+iproto:
+ advertise:
+ peer:
+ login: replicator
+ sharding:
+ login: storage
+
+sharding:
+ bucket_count: 3000
+
+groups:
+ routers:
+ sharding:
+ roles: [router]
+ roles: [roles.crud-router]
+ replicasets:
+ router-001:
+ instances:
+ router-001-a:
+ iproto:
+ listen:
+ - uri: localhost:3301
+ advertise:
+ client: localhost:3301
+ storages:
+ sharding:
+ roles: [storage]
+ roles: [roles.crud-storage]
+ replication:
+ failover: manual
+ replicasets:
+ storage-001:
+ leader: storage-001-a
+ instances:
+ storage-001-a:
+ iproto:
+ listen:
+ - uri: localhost:3302
+ advertise:
+ client: localhost:3302
+ storage-001-b:
+ iproto:
+ listen:
+ - uri: localhost:3303
+ advertise:
+ client: localhost:3303
+ storage-002:
+ leader: storage-002-a
+ instances:
+ storage-002-a:
+ iproto:
+ listen:
+ - uri: localhost:3304
+ advertise:
+ client: localhost:3304
+ storage-002-b:
+ iproto:
+ listen:
+ - uri: localhost:3305
+ advertise:
+ client: localhost:3305
\ No newline at end of file
diff --git a/doc/code_snippets/snippets/migrations/migrations/scenario/000001_create_writers_space.lua b/doc/code_snippets/snippets/migrations/migrations/scenario/000001_create_writers_space.lua
new file mode 100644
index 000000000..570e0a4e0
--- /dev/null
+++ b/doc/code_snippets/snippets/migrations/migrations/scenario/000001_create_writers_space.lua
@@ -0,0 +1,23 @@
+local helpers = require('tt-migrations.helpers')
+
+local function apply_scenario()
+ local space = box.schema.space.create('writers')
+
+ space:format({
+ {name = 'id', type = 'number'},
+ {name = 'bucket_id', type = 'number'},
+ {name = 'name', type = 'string'},
+ {name = 'age', type = 'number'},
+ })
+
+ space:create_index('primary', {parts = {'id'}})
+ space:create_index('bucket_id', {parts = {'bucket_id'}})
+
+ helpers.register_sharding_key('writers', {'id'})
+end
+
+return {
+ apply = {
+ scenario = apply_scenario,
+ },
+}
\ No newline at end of file
diff --git a/doc/code_snippets/snippets/migrations/migrations/scenario/000002_create_writers_index.lua b/doc/code_snippets/snippets/migrations/migrations/scenario/000002_create_writers_index.lua
new file mode 100644
index 000000000..0caaecbac
--- /dev/null
+++ b/doc/code_snippets/snippets/migrations/migrations/scenario/000002_create_writers_index.lua
@@ -0,0 +1,11 @@
+local function apply_scenario()
+ local space = box.space['writers']
+
+ space:create_index('age', {parts = {'age'}})
+end
+
+return {
+ apply = {
+ scenario = apply_scenario,
+ },
+}
\ No newline at end of file
diff --git a/doc/code_snippets/snippets/migrations/migrations/scenario/000003_alter_writers_space.lua b/doc/code_snippets/snippets/migrations/migrations/scenario/000003_alter_writers_space.lua
new file mode 100644
index 000000000..3db0db03c
--- /dev/null
+++ b/doc/code_snippets/snippets/migrations/migrations/scenario/000003_alter_writers_space.lua
@@ -0,0 +1,48 @@
+local function apply_scenario()
+ local space = box.space['writers']
+ local new_format = {
+ {name = 'id', type = 'number'},
+ {name = 'bucket_id', type = 'number'},
+ {name = 'first_name', type = 'string'},
+ {name = 'last_name', type = 'string'},
+ {name = 'age', type = 'number'},
+ }
+ box.space.writers.index.age:drop()
+
+ box.schema.func.create('_writers_split_name', {
+ language = 'lua',
+ is_deterministic = true,
+ body = [[
+ function(t)
+ local name = t[3]
+
+ local split_data = {}
+ local split_regex = '([^%s]+)'
+ for v in string.gmatch(name, split_regex) do
+ table.insert(split_data, v)
+ end
+
+ local first_name = split_data[1]
+ assert(first_name ~= nil)
+
+ local last_name = split_data[2]
+ assert(last_name ~= nil)
+
+ return {t[1], t[2], first_name, last_name, t[4]}
+ end
+ ]],
+ })
+
+ local future = space:upgrade({
+ func = '_writers_split_name',
+ format = new_format,
+ })
+
+ future:wait()
+end
+
+return {
+ apply = {
+ scenario = apply_scenario,
+ },
+}
\ No newline at end of file
diff --git a/doc/code_snippets/snippets/migrations/tt.yaml b/doc/code_snippets/snippets/migrations/tt.yaml
new file mode 100644
index 000000000..e9cf7000c
--- /dev/null
+++ b/doc/code_snippets/snippets/migrations/tt.yaml
@@ -0,0 +1,54 @@
+modules:
+ # Directory where the external modules are stored.
+ directory: modules
+
+env:
+ # Restart instance on failure.
+ restart_on_failure: false
+
+ # Directory that stores binary files.
+ bin_dir: bin
+
+ # Directory that stores Tarantool header files.
+ inc_dir: include
+
+ # Path to directory that stores all applications.
+ # The directory can also contain symbolic links to applications.
+ instances_enabled: instances.enabled
+
+ # Tarantoolctl artifacts layout compatibility: if set to true tt will not create application
+ # sub-directories for control socket, pid files, log files, etc.. Data files (wal, vinyl,
+ # snap) and multi-instance applications are not affected by this option.
+ tarantoolctl_layout: false
+
+app:
+ # Directory that stores various instance runtime
+ # artifacts like console socket, PID file, etc.
+ run_dir: var/run
+
+ # Directory that stores log files.
+ log_dir: var/log
+
+ # Directory where write-ahead log (.xlog) files are stored.
+ wal_dir: var/lib
+
+ # Directory where memtx stores snapshot (.snap) files.
+ memtx_dir: var/lib
+
+ # Directory where vinyl files or subdirectories will be stored.
+ vinyl_dir: var/lib
+
+# Path to file with credentials for downloading Tarantool Enterprise Edition.
+# credential_path: /path/to/file
+ee:
+ credential_path:
+
+templates:
+ # The path to templates search directory.
+ - path: templates
+
+repo:
+ # Directory where local rocks files could be found.
+ rocks:
+ # Directory that stores installation files.
+ distfiles: distfiles
diff --git a/doc/platform/ddl_dml/migrations.rst b/doc/platform/ddl_dml/migrations.rst
index eb4c95736..c0a0d72af 100644
--- a/doc/platform/ddl_dml/migrations.rst
+++ b/doc/platform/ddl_dml/migrations.rst
@@ -3,62 +3,79 @@
Migrations
==========
-**Migration** refers to any change in a data schema: adding/removing a field,
-creating/dropping an index, changing a field format, etc.
-In Tarantool, there are two types of schema migration
-that do not require data migration:
-- adding a field to the end of a space
-- creating an index
-.. note::
- Check the :ref:`Upgrading space schema ` section.
- With the help of ``space:upgrade()``,
- you can enable compression and migrate, including already created tuples.
+**Migration** refers to any change in a data schema: adding or removing a field,
+creating or dropping an index, changing a field format, and so on. Space creation
+is also a schema migration. You can use this fact to track the evolution of your
+data schema since its initial state.
-Adding a field to the end of a space
-------------------------------------
+There are two types of migrations:
-You can add a field as follows:
+- *simple migrations* don't require additional actions on existing data
+- *complex migrations* include both schema and data changes
-.. code:: lua
+In Tarantool, migrations are presented as Lua code that alters the data schema
+using the built-in Lua API.
- local users = box.space.users
- local fmt = users:format()
+.. _migrations_simple:
- table.insert(fmt, { name = 'age', type = 'number', is_nullable = true })
- users:format(fmt)
+Simple migrations
+-----------------
-Note that the field must have the ``is_nullable`` parameter. Otherwise,
-an error will occur.
+In Tarantool, there are two types of schema migration that do not require data migration:
-After creating a new field, you probably want to fill it with data.
-The `tarantool/moonwalker `_
-module is useful for this task.
-The README file describes how to work with this module.
+- Creating an index. A new index can be created at any time. To learn more about
+ index creation, see :ref:`concepts-data_model_indexes` and the :ref:`box_space-create_index` reference.
+- Adding a field to the end of a space. To add a field, update the space format so
+ that it includes all its fields and also the new field. For example:
-Creating an index
------------------
+ .. code-block:: lua
+
+ local users = box.space.users
+ local fmt = users:format()
+
+ table.insert(fmt, { name = 'age', type = 'number', is_nullable = true })
+ users:format(fmt)
+
+ The field must have the ``is_nullable`` parameter. Otherwise, an error occurs
+  if the space contains tuples of the old format.
+
+ .. note::
-Index creation is described in the
-:doc:`/reference/reference_lua/box_space/create_index` method.
+ After creating a new field, you probably want to fill it with data.
+ The `tarantool/moonwalker `_
+ module is useful for this task.
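+
+As a rough sketch, a backfill of the new field might look like this (assuming
+the ``users`` space from the example above has an ``id`` primary-key field in
+its format; the default value ``0`` is illustrative):
+
+.. code-block:: lua
+
+    -- Collect primary keys first: modifying a space while iterating
+    -- over it directly may invalidate the iterator.
+    local keys = {}
+    for _, t in box.space.users:pairs() do
+        table.insert(keys, t.id)
+    end
+    for _, id in ipairs(keys) do
+        box.space.users:update(id, {{'=', 'age', 0}})
+    end
+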
-.. _other-migrations:
+.. _migrations_complex:
-Other types of migrations
--------------------------
+Complex migrations
+------------------
-Other types of migrations are also allowed, but it would be more difficult to
+Other types of migrations are more complex and require additional actions to
maintain data consistency.
Migrations are possible in two cases:
- When Tarantool starts, and no client uses the database yet
-
- During request processing, when active clients are already using the database
For the first case, it is enough to write and test the migration code.
@@ -80,17 +97,26 @@ We identify the following problems if there are active clients:
These issues may or may not be relevant depending on your application and
its availability requirements.
-What you need to know when writing complex migrations
------------------------------------------------------
+Tarantool offers the following features that make migrations easier and safer:
+
+- Transaction mechanism. It is useful when writing a migration,
+ because it allows you to work with the data atomically. But before using
+ the transaction mechanism, you should explore its limitations.
+ For details, see the section about :ref:`transactions `.
+
+- ``space:upgrade()`` function (EE only). With the help of ``space:upgrade()``,
+  you can migrate the data schema, including already created tuples, and enable compression.
+ For details, check the :ref:`Upgrading space schema ` section.
-Tarantool has a transaction mechanism. It is useful when writing a migration,
-because it allows you to work with the data atomically. But before using
-the transaction mechanism, you should explore its limitations.
+- Centralized migration management mechanism (EE only). Implemented
+ in the Enterprise version of the :ref:`tt ` utility and :ref:`tcm`,
+  this mechanism enables migration execution and tracking in replication
+  clusters. For details, see :ref:`migrations_centralized`.
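+
+For example, a data fix inside a migration can be wrapped in ``box.atomic()`` so that
+it is applied either completely or not at all. A minimal sketch, with an illustrative
+``users`` space, ``id`` primary key, and default value:
+
+.. code-block:: lua
+
+    -- All updates are committed together; any error rolls the whole change back.
+    box.atomic(function()
+        for _, tuple in box.space.users:pairs() do
+            if tuple.age == nil then
+                box.space.users:update(tuple.id, {{'=', 'age', 0}})
+            end
+        end
+    end)
+
+Keep the transaction limitations in mind: for example, a memtx transaction must not
+yield, so large data sets may require splitting the work into several transactions.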
-For details, see the section about :ref:`transactions `.
+.. _migrations_apply:
-How you can apply migration
----------------------------
+Applying migrations
+-------------------
The migration code is executed on a running Tarantool instance.
Important: no method guarantees you transactional application of migrations
@@ -103,52 +129,63 @@ and the database schema is updated.
However, this method may not work for everyone.
You may not be able to restart Tarantool or update the code using the hot-reload mechanism.
-**Method 2**: tarantool/migrations (only for a Tarantool Cartridge cluster)
-This method is described in the README file of the
-`tarantool/migrations `_ module.
+**Method 2**: the :ref:`tt ` utility
+
+Connect to the necessary instance using ``tt connect``.
+
+.. code:: console
+
+ $ tt connect admin:password@localhost:3301
-.. note::
+- If your migration is written in a Lua file, you can execute it
+ using ``dofile()``. Call this function and specify the path to the
+ migration file as the first argument. It looks like this:
- There are also two other methods that we **do not recommend**,
- but you may find them useful for one reason or another.
+ .. code-block:: tarantoolsession
- **Method 3**: the :ref:`tt ` utility
+ tarantool> dofile('0001-delete-space.lua')
+ ---
+ ...
- Connect to the necessary instance using ``tt connect``.
+- Alternatively, copy the migration script code,
+  paste it into the console, and run it.
- .. code:: console
+You can also connect to the instance and execute the migration script in a single call:
- $ tt connect admin:password@localhost:3301
+.. code:: console
- - If your migration is written in a Lua file, you can execute it
- using ``dofile()``. Call this function and specify the path to the
- migration file as the first argument. It looks like this:
+ $ tt connect admin:password@localhost:3301 -f 0001-delete-space.lua
- .. code-block:: tarantoolsession
+.. _migrations_centralized:
- tarantool> dofile('0001-delete-space.lua')
- ---
- ...
+Centralized migration management
+--------------------------------
- - (or) Copy the migration script code,
- paste it into the console, and run it.
+.. admonition:: Enterprise Edition
+ :class: fact
- You can also connect to the instance and execute the migration script in a single call:
+ Centralized migration management is available in the `Enterprise Edition `_ only.
- .. code:: console
+Tarantool EE offers a mechanism for centralized migration management in replication
+clusters that use etcd as a :ref:`configuration storage `.
+The mechanism uses the same etcd storage to store migrations and applies them
+across the entire Tarantool cluster. This ensures migration consistency
+in the cluster and enables migration history tracking.
- $ tt connect admin:password@localhost:3301 -f 0001-delete-space.lua
+The centralized migration management mechanism is implemented in the Enterprise
+version of the :ref:`tt ` utility and in :ref:`tcm`.
+To learn how to manage migrations in Tarantool EE clusters from the command line,
+see :ref:`performing_migrations_tt`. To learn how to use the mechanism from the |tcm|
+web interface, see :ref:`tcm_migrations` in the |tcm| documentation.
- **Method 4**: applying migration with Ansible
+The ``tt`` implementation of the mechanism additionally includes commands for
+troubleshooting migration issues. For details, see :ref:`troubleshooting_migrations_tt`.
- If you use the `Ansible role `_
- to deploy a Tarantool cluster, you can use ``eval``.
- You can find more information about it
- `in the Ansible role documentation `_.
.. toctree::
- :hidden:
space_upgrade
+ performing_migrations_tt
+ troubleshooting_migrations_tt
diff --git a/doc/platform/ddl_dml/performing_migrations_tt.rst b/doc/platform/ddl_dml/performing_migrations_tt.rst
new file mode 100644
index 000000000..23a6befe5
--- /dev/null
+++ b/doc/platform/ddl_dml/performing_migrations_tt.rst
@@ -0,0 +1,658 @@
+.. _centralized_migrations_tt:
+
+Centralized migration management with tt
+========================================
+
+**Example on GitHub:** `migrations `_
+
+In this tutorial, you learn to user the centralized migration management mechanism
+implemented in the Enterprise Edition of the :ref:``tt `` utility.
+
+
+.. _centralized_migrations_tt_prereq:
+
+Prerequisites
+-------------
+
+Before starting this tutorial:
+
+- Download and :ref:`install Tarantool Enterprise SDK .
+- Install `etcd `__
+
+.. _centralized_migrations_tt_cluster:
+
+Preparing a cluster
+-------------------
+
+Centralized migration mechanism works with Tarantool EE clusters that:
+
+- use etcd as a centralized configuration storage
+- use the `CRUD `__ module for data sharding
+
+Setting up etcd
+~~~~~~~~~~~~~~~
+
+First, start up an etcd instance to use as a configuration storage:
+
+.. code-block::
+
+ $ etcd
+
+etcd runs on the default port 2379.
+
+Optionally, you can enable etcd authentication by running the following script:
+
+.. code-block:: bash
+
+ #!/usr/bin/env bash
+
+ etcdctl user add root:topsecret
+ etcdctl role add app_config_manager
+ etcdctl role grant-permission app_config_manager --prefix=true readwrite /myapp/
+ etcdctl user add app_user:config_pass
+ etcdctl user grant-role app_user app_config_manager
+ etcdctl auth enable
+
+It creates an etcd user ``app_user`` with read and write permissions to the ``/app``
+prefix, in which the cluster configuration will be stored. The user's password is ``config_pass``
+
+.. note::
+
+ If you don't enable etcd authentication, you can make all ``tt migrations``
+ call without the configuration storage credentials.
+
+
+Creating a cluster
+~~~~~~~~~~~~~~~~~~
+
+#. Initialize a ``tt`` environment:
+
+ .. code-block:: console
+
+ $ tt init
+
+#. In the ``instances.enabled`` directory, create the ``myapp`` directory.
+#. Go to the ``instances.enabled/myapp`` directory and create application files:
+
+ - ``instances.yml``:
+
+ .. code-block:: yaml
+
+ router-001-a:
+ storage-001-a:
+ storage-001-b:
+ storage-002-a:
+ storage-002-b:
+
+ - ``config.yaml``:
+
+ .. code-block:: yaml
+
+ config:
+ etcd:
+ endpoints:
+ - http://localhost:2379
+ prefix: /app/
+ username: app_user
+ password: config_pass
+ http:
+ request:
+ timeout: 3
+
+ - ``app-scm-1.rockspec``:
+
+ .. code-block:: text
+
+ package = 'app'
+ version = 'scm-1'
+
+ source = {
+ url = '/dev/null',
+ }
+
+ dependencies = {
+ 'crud == 1.5.2',
+ }
+
+ build = {
+ type = 'none';
+ }
+
+#. Create the ``source.yaml`` with a cluster configuration to publish to etcd:
+
+ .. note::
+
+ This configuration describes a typical CRUD-enabled sharded cluster with
+ one router and two storages, each including one master and one read-only replica.
+
+ .. code-block:: yaml
+
+ credentials:
+ users:
+ client:
+ password: 'secret'
+ roles: [super]
+ replicator:
+ password: 'secret'
+ roles: [replication]
+ storage:
+ password: 'secret'
+ roles: [sharding]
+
+ iproto:
+ advertise:
+ peer:
+ login: replicator
+ sharding:
+ login: storage
+
+ sharding:
+ bucket_count: 3000
+
+ groups:
+ routers:
+ sharding:
+ roles: [router]
+ roles: [roles.crud-router]
+ replicasets:
+ router-001:
+ instances:
+ router-001-a:
+ iproto:
+ listen:
+ - uri: localhost:3301
+ advertise:
+ client: localhost:3301
+ storages:
+ sharding:
+ roles: [storage]
+ roles: [roles.crud-storage]
+ replication:
+ failover: manual
+ replicasets:
+ storage-001:
+ leader: storage-001-a
+ instances:
+ storage-001-a:
+ iproto:
+ listen:
+ - uri: localhost:3302
+ advertise:
+ client: localhost:3302
+ storage-001-b:
+ iproto:
+ listen:
+ - uri: localhost:3303
+ advertise:
+ client: localhost:3303
+ storage-002:
+ leader: storage-002-a
+ instances:
+ storage-002-a:
+ iproto:
+ listen:
+ - uri: localhost:3304
+ advertise:
+ client: localhost:3304
+ storage-002-b:
+ iproto:
+ listen:
+ - uri: localhost:3305
+ advertise:
+ client: localhost:3305
+
+#. Publish the configuration to etcd by running the following command:
+
+ .. code-block:: console
+
+ $ tt cluster publish "http://app_user:config_pass@localhost:2379/myapp/" source.yaml
+
+The full cluster code is available on GitHub here: `migrations `_
+
+Building and starting the cluster
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+#. Build the application:
+
+ .. code-block:: console
+
+ $ tt build myapp
+
+#. Start the cluster:
+
+ .. code-block:: console
+
+ $ tt start myapp
+
+ To check that the cluster is up and running, use ``tt status``:
+
+ .. code-block:: console
+
+ $ tt status myapp
+
+#. Bootstrap vshard in the cluster:
+
+ .. code-block:: console
+
+ $ tt replicaset vshard bootstrap myapp
+
+Now the cluster is ready.
+
+.. _centralized_migrations_tt_write:
+
+Writing migrations
+------------------
+
+To perform migrations in the cluster, you should write them in Lua and publish to etcd.
+
+Each migration file must return a Lua table with one object named ``apply``.
+This object has one field -- ``scenario`` -- that stores the migration function:
+
+.. code-block:: lua
+
+ local function apply_scenario()
+ -- migration code
+ end
+
+ return {
+ apply = {
+ scenario = apply_scenario,
+ },
+ }
+
+The migration unit is a single file: its ``scenario`` is executed as a whole. An error
+that happens in any step of the ``scenario`` causes the entire migration to fail.
+
+Migrations are executed in the lexicographical order. Thus, it's convenient to
+use filenames that start with ordered numbers to set the migrations order, for example:
+
+.. code-block:: text
+
+ 000001_create_space.lua
+ 000002_create_index.lua
+ 000003_alter_space.lua
+
+The default location where ``tt`` searches for migration files is ``/migrations/scenario``.
+Create this subdirectory inside the ``tt`` environment. Then, create two migration files:
+
+- ``000001_create_writers_space.lua``: create a space, define its format, and
+ create a primary index.
+
+ .. code-block:: lua
+
+ local helpers = require('tt-migrations.helpers')
+
+ local function apply_scenario()
+ local space = box.schema.space.create('writers')
+
+ space:format({
+ {name = 'id', type = 'number'},
+ {name = 'bucket_id', type = 'number'},
+ {name = 'name', type = 'string'},
+ {name = 'age', type = 'number'},
+ })
+
+ space:create_index('primary', {parts = {'id'}})
+ space:create_index('bucket_id', {parts = {'bucket_id'}})
+
+ helpers.register_sharding_key('writers', {'id'})
+ end
+
+ return {
+ apply = {
+ scenario = apply_scenario,
+ },
+ }
+
+ .. note::
+
+ Note the usage of the ``tt-migrations.helpers`` module.
+ In this example, its function ``register_sharding_key`` is used
+ to define a sharding key for the space.
+
+- ``000002_create_writers_index.lua``: add one more index.
+
+ .. code-block:: lua
+
+ local function apply_scenario()
+ local space = box.space['writers']
+
+ space:create_index('age', {parts = {'age'}})
+ end
+
+ return {
+ apply = {
+ scenario = apply_scenario,
+ },
+ }
+
+.. _centralized_migrations_tt_publish:
+
+Publishing migrations
+---------------------
+
+To publish migrations to the etcd configuration storage, run ``tt migrations publish``:
+
+.. code-block:: console
+
+ $ tt migrations publish http://app_user:config_pass@localhost:2379/myapp
+    • 000001_create_writers_space.lua: successfully published to key "000001_create_writers_space.lua"
+ • 000002_create_writers_index.lua: successfully published to key "000002_create_writers_index.lua"
+
+
+Applying migrations
+-------------------
+
+To apply stored migrations to the cluster, run ``tt migrations apply`` providing
+a cluster user's credentials:
+
+.. code-block:: console
+
+ tt migrations apply http://app_user:config_pass@localhost:2379/myapp --tarantool-username=client --tarantool-password=secret
+
+.. important::
+
+ The cluster user must have enough access privileges to execute all functions
+ used in migrations code.
+
+The output should look as follows:
+
+.. code-block:: console
+
+    • router-001:
+    • 000001_create_writers_space.lua: successfully applied
+    • 000002_create_writers_index.lua: successfully applied
+    • storage-001:
+    • 000001_create_writers_space.lua: successfully applied
+    • 000002_create_writers_index.lua: successfully applied
+    • storage-002:
+    • 000001_create_writers_space.lua: successfully applied
+    • 000002_create_writers_index.lua: successfully applied
+
+The migrations are applied on all replica set leaders. Read-only replicas
+receive the changes from the corresponding replica set leaders.
+
+The ``tt migrations status`` command allows you to check which migrations are
+stored in etcd and how they are applied to the cluster instances:
+
+.. code-block:: console
+
+ $ tt migrations status http://app_user:config_pass@localhost:2379/myapp --tarantool-username=client --tarantool-password=secret
+    • migrations centralized storage scenarios:
+    • 000001_create_writers_space.lua
+    • 000002_create_writers_index.lua
+    • migrations apply status on Tarantool cluster:
+    • router-001:
+    • 000001_create_writers_space.lua: APPLIED
+    • 000002_create_writers_index.lua: APPLIED
+    • storage-001:
+    • 000001_create_writers_space.lua: APPLIED
+    • 000002_create_writers_index.lua: APPLIED
+    • storage-002:
+    • 000001_create_writers_space.lua: APPLIED
+    • 000002_create_writers_index.lua: APPLIED
+
+To make sure that the migrations are actually applied, connect to the router
+instance and retrieve the information about spaces and indexes in the cluster:
+
+.. code-block:: console
+
+    $ tt connect myapp:router-001
+
+.. code-block:: tarantoolsession
+
+ myapp:router-001-a> require('crud').schema('writers')
+ ---
+ - indexes:
+ 0:
+ unique: true
+ parts:
+ - fieldno: 1
+ type: number
+ exclude_null: false
+ is_nullable: false
+ id: 0
+ type: TREE
+ name: primary
+ 2:
+ unique: true
+ parts:
+ - fieldno: 4
+ type: number
+ exclude_null: false
+ is_nullable: false
+ id: 2
+ type: TREE
+ name: age
+ format: [{'name': 'id', 'type': 'number'}, {'type': 'number', 'name': 'bucket_id',
+ 'is_nullable': true}, {'name': 'name', 'type': 'string'}, {'name': 'age', 'type': 'number'}]
+ ...
+
+
+Adding new migrations
+---------------------
+
+Complex migrations require data migration along with schema migration. Insert some
+tuples into the space before proceeding to the next steps:
+
+.. code-block:: tarantoolsession
+
+ require('crud').insert_object_many('writers', {
+ {id = 1, name = 'Haruki Murakami', age = 75},
+ {id = 2, name = 'Douglas Adams', age = 49},
+ {id = 3, name = 'Eiji Mikage', age = 41},
+ }, {noreturn = true})
+
+The next migration changes the space format incompatibly: instead of one ``name``
+field, the new format includes two fields: ``first_name`` and ``last_name``.
+To apply this migration, you need to change the format of each stored tuple while
+preserving its data. The :ref:`space.upgrade ` function helps with this task.
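+
+For example, an illustrative tuple is transformed as follows (``101`` stands for
+an arbitrary ``bucket_id``):
+
+.. code-block:: lua
+
+    -- Old format: {id, bucket_id, name, age}
+    {1, 101, 'Haruki Murakami', 75}
+    -- New format: {id, bucket_id, first_name, last_name, age}
+    {1, 101, 'Haruki', 'Murakami', 75}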
+
+Create a new file ``000003_alter_writers_space.lua`` in ``/migrations/scenario``.
+Prepare its initial structure the same way as in previous migrations:
+
+.. code-block:: lua
+
+ local function apply_scenario()
+
+ -- migration code
+
+ end
+ return {
+ apply = {
+ scenario = apply_scenario,
+ },
+ }
+
+Start the migration function with the new format description:
+
+.. code-block:: lua
+
+ local function apply_scenario()
+
+ local space = box.space['writers']
+
+ local new_format = {
+ {name = 'id', type = 'number'},
+ {name = 'bucket_id', type = 'number'},
+ {name = 'first_name', type = 'string'},
+ {name = 'last_name', type = 'string'},
+ {name = 'age', type = 'number'},
+ }
+ box.space.writers.index.age:drop()
+
+ -- migration code
+
+ end
+
+.. note::
+
+ ``box.space.writers.index.age:drop()`` drops an existing index. This is done
+ because indexes rely on field numbers and may break during this format change.
+ If you need the ``age`` field indexed, recreate the index after applying the
+ new format.
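+
+   If you recreate the index after the upgrade, a sketch could look like this
+   (with named parts, the index definition follows the renumbered field):
+
+   .. code-block:: lua
+
+       box.space.writers:create_index('age', {parts = {'age'}})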
+
+Next, create a stored function that transforms tuples to fit the new format.
+In this case, the function extracts the first and the last name from the ``name`` field
+and returns a tuple in the new format:
+
+.. code-block:: lua
+
+ box.schema.func.create('_writers_split_name', {
+ language = 'lua',
+ is_deterministic = true,
+ body = [[
+ function(t)
+ local name = t[3]
+
+ local split_data = {}
+ local split_regex = '([^%s]+)'
+ for v in string.gmatch(name, split_regex) do
+ table.insert(split_data, v)
+ end
+
+ local first_name = split_data[1]
+ assert(first_name ~= nil)
+
+ local last_name = split_data[2]
+ assert(last_name ~= nil)
+
+ return {t[1], t[2], first_name, last_name, t[4]}
+ end
+ ]],
+ })
+
+Finally, pass the new format and the transformation function name to a ``space:upgrade()``
+call and wait for it to complete:
+
+.. code-block:: lua
+
+ local function apply_scenario()
+
+ -- space format
+
+ box.schema.func.create('_writers_split_name', {
+ language = 'lua',
+ is_deterministic = true,
+ body =
+ -- data migration function
+ })
+
+ local future = space:upgrade({
+ func = '_writers_split_name',
+ format = new_format,
+ })
+
+ future:wait()
+ end
+
+To learn more, see the :ref:`Upgrading space schema <enterprise-space_upgrade>` section.
+
+The full ``000003_alter_writers_space.lua`` migration code is as follows:
+
+.. code-block:: lua
+
+ local function apply_scenario()
+ local space = box.space['writers']
+
+ local new_format = {
+ {name = 'id', type = 'number'},
+ {name = 'bucket_id', type = 'number'},
+ {name = 'first_name', type = 'string'},
+ {name = 'last_name', type = 'string'},
+ {name = 'age', type = 'number'},
+ }
+ box.space.writers.index.age:drop()
+
+ box.schema.func.create('_writers_split_name', {
+ language = 'lua',
+ is_deterministic = true,
+ body = [[
+ function(t)
+ local name = t[3]
+
+ local split_data = {}
+ local split_regex = '([^%s]+)'
+ for v in string.gmatch(name, split_regex) do
+ table.insert(split_data, v)
+ end
+
+ local first_name = split_data[1]
+ assert(first_name ~= nil)
+
+ local last_name = split_data[2]
+ assert(last_name ~= nil)
+
+ return {t[1], t[2], first_name, last_name, t[4]}
+ end
+ ]],
+ })
+
+ local future = space:upgrade({
+ func = '_writers_split_name',
+ format = new_format,
+ })
+
+ future:wait()
+ end
+
+ return {
+ apply = {
+ scenario = apply_scenario,
+ },
+ }
+
+Publish the new migration to etcd. Migrations that already exist in the storage are skipped.
+
+.. code-block:: console
+
+ $ tt migrations publish http://app_user:config_pass@localhost:2379/myapp
+    • 000001_create_writers_space.lua: skipped, key "000001_create_writers_space.lua" already exists with the same content
+ • 000002_create_writers_index.lua: skipped, key "000002_create_writers_index.lua" already exists with the same content
+ • 000003_alter_writers_space.lua: successfully published to key "000003_alter_writers_space.lua"
+
+.. note::
+
+ You can also publish a single migration file by passing a path to it as an argument:
+
+ .. code-block:: console
+
+ $ tt migrations publish http://app_user:config_pass@localhost:2379/myapp migrations/scenario/000003_alter_writers_space.lua
+
+Apply the migrations:
+
+.. code-block:: console
+
+ $ tt migrations apply http://app_user:config_pass@localhost:2379/myapp --tarantool-username=client --tarantool-password=secret
+
+Connect to the router instance and check that the space data now has the new format:
+
+.. code-block:: console
+
+    $ tt connect myapp:router-001
+
+.. code-block:: tarantoolsession
+
+ myapp:router-001-a> require('crud').get('writers', 2)
+ ---
+ - rows: []
+ metadata: [{'name': 'id', 'type': 'number'}, {'name': 'bucket_id', 'type': 'number'},
+ {'name': 'first_name', 'type': 'string'}, {'name': 'last_name', 'type': 'string'},
+ {'name': 'age', 'type': 'number'}]
+ - null
+ ...
+
+Extending the cluster
+---------------------
+
+With the centralized migration mechanism, you can also extend the cluster, applying
+the existing migrations to new instances.
+
+how to write migration files? tt-migrations.helpers
+
+Migration workflow
+
+prepare files
+publish to etcd
+apply
+check status
+
+Handling errors
+
+stop
+rollback - show examples with force apply?
diff --git a/doc/platform/ddl_dml/space_upgrade.rst b/doc/platform/ddl_dml/space_upgrade.rst
index 340c0f142..13f266a78 100644
--- a/doc/platform/ddl_dml/space_upgrade.rst
+++ b/doc/platform/ddl_dml/space_upgrade.rst
@@ -21,7 +21,7 @@ If you need to change a data schema, there are several possible cases:
To solve the task of migrating the data, you can:
-* :ref:`Migrate data ` to a new space manually.
+* :ref:`Migrate data ` to a new space manually.
* Use the ``space:upgrade()`` feature.
diff --git a/doc/platform/ddl_dml/troubleshooting_migrations_tt.rst b/doc/platform/ddl_dml/troubleshooting_migrations_tt.rst
new file mode 100644
index 000000000..ee29095a0
--- /dev/null
+++ b/doc/platform/ddl_dml/troubleshooting_migrations_tt.rst
@@ -0,0 +1,51 @@
+.. _troubleshooting_migrations_tt:
+
+Troubleshooting migrations
+==========================
+
+
+**Example on GitHub:** TBD
+
+The :ref:`tt ` utility provides
+
+Description
+
+Prerequisites
+-------------
+
+- EE
+- crud
+- etcd
+
+Preparing a cluster
+-------------------
+
+Writing migrations
+------------------
+
+Publishing migrations
+---------------------
+
+Applying migrations
+-------------------
+
+Adding new migrations
+---------------------
+
+Extending the cluster
+---------------------
+
+
+how to write migration files? tt-migrations.helpers
+
+Migration workflow
+
+prepare files
+publish to etcd
+apply
+check status
+
+Handling errors
+
+stop
+rollback - show examples with force apply?
diff --git a/doc/tooling/tt_cli/migrations.rst b/doc/tooling/tt_cli/migrations.rst
index 661268677..ee1d0de9c 100644
--- a/doc/tooling/tt_cli/migrations.rst
+++ b/doc/tooling/tt_cli/migrations.rst
@@ -18,26 +18,6 @@ Performing migrations
Only Tarantool EE clusters with etcd centralized configuration storage are supported.
-Prereq:
-- EE
-- crud
-- etcd
-
-how to write migration files? tt-migrtions.helpers
-
-Migration workflow
-
-prepare files
-publish to etcd
-apply
-check status
-
-Handling errors
-
-stop
-rollback - show examples with force apply?
-
-
``COMMAND`` is one of the following:
* :ref:`apply `
From 377c0d528a87068a0a1b30f1cccc961d0f60b45c Mon Sep 17 00:00:00 2001
From: Pavel Semyonov
Date: Fri, 27 Sep 2024 15:12:14 +0700
Subject: [PATCH 03/16] Migration tutorial draft
---
.../myapp/instances-3-storages.yml | 7 +
.../myapp/source-3-storages.yaml | 88 ++++
.../ddl_dml/performing_migrations_tt.rst | 466 ++++++------------
3 files changed, 243 insertions(+), 318 deletions(-)
create mode 100644 doc/code_snippets/snippets/migrations/instances.enabled/myapp/instances-3-storages.yml
create mode 100644 doc/code_snippets/snippets/migrations/instances.enabled/myapp/source-3-storages.yaml
diff --git a/doc/code_snippets/snippets/migrations/instances.enabled/myapp/instances-3-storages.yml b/doc/code_snippets/snippets/migrations/instances.enabled/myapp/instances-3-storages.yml
new file mode 100644
index 000000000..6d4b27af6
--- /dev/null
+++ b/doc/code_snippets/snippets/migrations/instances.enabled/myapp/instances-3-storages.yml
@@ -0,0 +1,7 @@
+router-001-a:
+storage-001-a:
+storage-001-b:
+storage-002-a:
+storage-002-b:
+storage-003-a:
+storage-003-b:
\ No newline at end of file
diff --git a/doc/code_snippets/snippets/migrations/instances.enabled/myapp/source-3-storages.yaml b/doc/code_snippets/snippets/migrations/instances.enabled/myapp/source-3-storages.yaml
new file mode 100644
index 000000000..dd4eb2d77
--- /dev/null
+++ b/doc/code_snippets/snippets/migrations/instances.enabled/myapp/source-3-storages.yaml
@@ -0,0 +1,88 @@
+credentials:
+ users:
+ client:
+ password: 'secret'
+ roles: [super]
+ replicator:
+ password: 'secret'
+ roles: [replication]
+ storage:
+ password: 'secret'
+ roles: [sharding]
+
+iproto:
+ advertise:
+ peer:
+ login: replicator
+ sharding:
+ login: storage
+
+sharding:
+ bucket_count: 3000
+
+groups:
+ routers:
+ sharding:
+ roles: [router]
+ roles: [roles.crud-router]
+ replicasets:
+ router-001:
+ instances:
+ router-001-a:
+ iproto:
+ listen:
+ - uri: localhost:3301
+ advertise:
+ client: localhost:3301
+ storages:
+ sharding:
+ roles: [storage]
+ roles: [roles.crud-storage]
+ replication:
+ failover: manual
+ replicasets:
+ storage-001:
+ leader: storage-001-a
+ instances:
+ storage-001-a:
+ iproto:
+ listen:
+ - uri: localhost:3302
+ advertise:
+ client: localhost:3302
+ storage-001-b:
+ iproto:
+ listen:
+ - uri: localhost:3303
+ advertise:
+ client: localhost:3303
+ storage-002:
+ leader: storage-002-a
+ instances:
+ storage-002-a:
+ iproto:
+ listen:
+ - uri: localhost:3304
+ advertise:
+ client: localhost:3304
+ storage-002-b:
+ iproto:
+ listen:
+ - uri: localhost:3305
+ advertise:
+ client: localhost:3305
+ storage-003:
+ leader: storage-003-a
+ instances:
+ storage-003-a:
+ iproto:
+ listen:
+ - uri: localhost:3306
+ advertise:
+ client: localhost:3306
+ storage-003-b:
+ iproto:
+ listen:
+ - uri: localhost:3307
+ advertise:
+ client: localhost:3307
\ No newline at end of file
diff --git a/doc/platform/ddl_dml/performing_migrations_tt.rst b/doc/platform/ddl_dml/performing_migrations_tt.rst
index 23a6befe5..24c1b11af 100644
--- a/doc/platform/ddl_dml/performing_migrations_tt.rst
+++ b/doc/platform/ddl_dml/performing_migrations_tt.rst
@@ -5,9 +5,8 @@ Centralized migration management with tt
**Example on GitHub:** `migrations `_
-In this tutorial, you learn to user the centralized migration management mechanism
-implemented in the Enterprise Edition of the :ref:``tt `` utility.
-
+In this tutorial, you learn to use the centralized migration management mechanism
+implemented in the Enterprise Edition of the :ref:`tt ` utility.
.. _centralized_migrations_tt_prereq:
@@ -16,25 +15,27 @@ Prerequisites
Before starting this tutorial:
-- Download and :ref:`install Tarantool Enterprise SDK .
-- Install `etcd `__
+- Download and :ref:`install Tarantool Enterprise SDK `.
+- Install `etcd `__.
.. _centralized_migrations_tt_cluster:
Preparing a cluster
-------------------
-Centralized migration mechanism works with Tarantool EE clusters that:
+The centralized migration mechanism works with Tarantool EE clusters that:
- use etcd as a centralized configuration storage
- use the `CRUD `__ module for data sharding
+.. _centralized_migrations_tt_cluster_etcd:
+
Setting up etcd
~~~~~~~~~~~~~~~
First, start up an etcd instance to use as a configuration storage:
-.. code-block::
+.. code-block:: console
$ etcd
@@ -53,14 +54,15 @@ Optionally, you can enable etcd authentication by running the following script:
etcdctl user grant-role app_user app_config_manager
etcdctl auth enable
-It creates an etcd user ``app_user`` with read and write permissions to the ``/app``
-prefix, in which the cluster configuration will be stored. The user's password is ``config_pass``
+It creates an etcd user ``app_user`` with read and write permissions to the ``/myapp``
+prefix, in which the cluster configuration will be stored. The user's password is ``config_pass``.
.. note::
- If you don't enable etcd authentication, you can make all ``tt migrations``
- call without the configuration storage credentials.
+ If you don't enable etcd authentication, make ``tt migrations`` calls without
+ the configuration storage credentials.
+.. _centralized_migrations_tt_cluster_create:
Creating a cluster
~~~~~~~~~~~~~~~~~~
@@ -76,138 +78,41 @@ Creating a cluster
- ``instances.yml``:
- .. code-block:: yaml
-
- router-001-a:
- storage-001-a:
- storage-001-b:
- storage-002-a:
- storage-002-b:
+ .. literalinclude:: /code_snippets/snippets/migrations/instances.enabled/myapp/instances.yml
+ :language: yaml
+ :dedent:
- ``config.yaml``:
- .. code-block:: yaml
-
- config:
- etcd:
- endpoints:
- - http://localhost:2379
- prefix: /app/
- username: app_user
- password: config_pass
- http:
- request:
- timeout: 3
+ .. literalinclude:: /code_snippets/snippets/migrations/instances.enabled/myapp/config.yaml
+ :language: yaml
+ :dedent:
- ``app-scm-1.rockspec``:
- .. code-block:: text
+      .. literalinclude:: /code_snippets/snippets/migrations/instances.enabled/myapp/app-scm-1.rockspec
+ :dedent:
- package = 'app'
- version = 'scm-1'
-
- source = {
- url = '/dev/null',
- }
-
- dependencies = {
- 'crud == 1.5.2',
- }
-
- build = {
- type = 'none';
- }
-
-#. Create the ``source.yaml`` with a cluster configuration to publish to etcd:
+4. Create the ``source.yaml`` with a cluster configuration to publish to etcd:
.. note::
This configuration describes a typical CRUD-enabled sharded cluster with
one router and two storages, each including one master and one read-only replica.
- .. code-block:: yaml
-
- credentials:
- users:
- client:
- password: 'secret'
- roles: [super]
- replicator:
- password: 'secret'
- roles: [replication]
- storage:
- password: 'secret'
- roles: [sharding]
-
- iproto:
- advertise:
- peer:
- login: replicator
- sharding:
- login: storage
-
- sharding:
- bucket_count: 3000
-
- groups:
- routers:
- sharding:
- roles: [router]
- roles: [roles.crud-router]
- replicasets:
- router-001:
- instances:
- router-001-a:
- iproto:
- listen:
- - uri: localhost:3301
- advertise:
- client: localhost:3301
- storages:
- sharding:
- roles: [storage]
- roles: [roles.crud-storage]
- replication:
- failover: manual
- replicasets:
- storage-001:
- leader: storage-001-a
- instances:
- storage-001-a:
- iproto:
- listen:
- - uri: localhost:3302
- advertise:
- client: localhost:3302
- storage-001-b:
- iproto:
- listen:
- - uri: localhost:3303
- advertise:
- client: localhost:3303
- storage-002:
- leader: storage-002-a
- instances:
- storage-002-a:
- iproto:
- listen:
- - uri: localhost:3304
- advertise:
- client: localhost:3304
- storage-002-b:
- iproto:
- listen:
- - uri: localhost:3305
- advertise:
- client: localhost:3305
-
-#. Publish the configuration to etcd by running the following command:
+ .. literalinclude:: /code_snippets/snippets/migrations/instances.enabled/myapp/source.yaml
+ :language: yaml
+ :dedent:
+
+#. Publish the configuration to etcd:
.. code-block:: console
$ tt cluster publish "http://app_user:config_pass@localhost:2379/myapp/" source.yaml
-The full cluster code is available on GitHub here: `migrations `_
+The full cluster code is available on GitHub here: `migrations `_.
+
+.. _centralized_migrations_tt_cluster_start:
Building and starting the cluster
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -226,9 +131,9 @@ Building and starting the cluster
To check that the cluster is up and running, use ``tt status``:
- .. code-block:: console
+ .. code-block:: console
- $ tt status myapp
+ $ tt status myapp
#. Bootstrap vshard in the cluster:
@@ -236,14 +141,13 @@ Building and starting the cluster
$ tt replicaset vshard bootstrap myapp
-Now the cluster is ready.
-
.. _centralized_migrations_tt_write:
Writing migrations
------------------
-To perform migrations in the cluster, you should write them in Lua and publish to etcd.
+To perform migrations in the cluster, write them in Lua and publish them to the cluster's
+etcd configuration storage.
Each migration file must return a Lua table with one object named ``apply``.
This object has one field -- ``scenario`` -- that stores the migration function:
@@ -264,7 +168,7 @@ The migration unit is a single file: its ``scenario`` is executed as a whole. An
that happens in any step of the ``scenario`` causes the entire migration to fail.
Migrations are executed in the lexicographical order. Thus, it's convenient to
-use filenames that start with ordered numbers to set the migrations order, for example:
+use filenames that start with ordered numbers to define the migrations order, for example:
.. code-block:: text
@@ -278,31 +182,9 @@ Create this subdirectory inside the ``tt`` environment. Then, create two migrati
- ``000001_create_writers_space.lua``: create a space, define its format, and
create a primary index.
- .. code-block:: lua
-
- local helpers = require('tt-migrations.helpers')
-
- local function apply_scenario()
- local space = box.schema.space.create('writers')
-
- space:format({
- {name = 'id', type = 'number'},
- {name = 'bucket_id', type = 'number'},
- {name = 'name', type = 'string'},
- {name = 'age', type = 'number'},
- })
-
- space:create_index('primary', {parts = {'id'}})
- space:create_index('bucket_id', {parts = {'bucket_id'}})
-
- helpers.register_sharding_key('writers', {'id'})
- end
-
- return {
- apply = {
- scenario = apply_scenario,
- },
- }
+ .. literalinclude:: /code_snippets/snippets/migrations/migrations/scenario/000001_create_writers_space.lua
+ :language: lua
+ :dedent:
.. note::
@@ -312,19 +194,9 @@ Create this subdirectory inside the ``tt`` environment. Then, create two migrati
- ``000002_create_writers_index.lua``: add one more index.
- .. code-block:: lua
-
- local function apply_scenario()
- local space = box.space['writers']
-
- space:create_index('age', {parts = {'age'}})
- end
-
- return {
- apply = {
- scenario = apply_scenario,
- },
- }
+ .. literalinclude:: /code_snippets/snippets/migrations/migrations/scenario/000002_create_writers_index.lua
+ :language: lua
+ :dedent:
.. _centralized_migrations_tt_publish:
@@ -339,21 +211,22 @@ To publish migrations to the etcd configuration storage, run ``tt migrations pub
   • 000001_create_writers_space.lua: successfully published to key "000001_create_writers_space.lua"
• 000002_create_writers_index.lua: successfully published to key "000002_create_writers_index.lua"
+.. _centralized_migrations_tt_apply:
Applying migrations
-------------------
-To apply stored migrations to the cluster, run ``tt migrations apply`` providing
+To apply published migrations to the cluster, run ``tt migrations apply`` providing
a cluster user's credentials:
.. code-block:: console
- tt migrations apply http://app_user:config_pass@localhost:2379/myapp --tarantool-username=client --tarantool-password=secret
+ tt migrations apply http://app_user:config_pass@localhost:2379/myapp \
+ --tarantool-username=client --tarantool-password=secret
.. important::
- The cluster user must have enough access privileges to execute all functions
- used in migrations code.
+ The cluster user must have enough access privileges to execute the migrations code.
The output should look as follows:
@@ -372,8 +245,7 @@ The output should look as follows:
The migrations are applied on all replica set leaders. Read-only replicas
receive the changes from the corresponding replica set leaders.
-The ``tt migration status`` allows checking which migrations are stored in etcd and
-how they are applied to the cluster instances:
+Check the migrations status with ``tt migrations status``:
.. code-block:: console
@@ -392,8 +264,8 @@ how they are applied to the cluster instances:
   • 000001_create_writers_space.lua: APPLIED
• 000002_create_writers_index.lua: APPLIED
-To make sure that the migrations are actually applied, connect to the router
-instance and retrieve the information about spaces and indexes in the cluster:
+To make sure that the space and indexes are created in the cluster, connect to the router
+instance and retrieve the space information:
.. code-block:: console

   $ tt connect myapp:router-001
@@ -426,16 +298,17 @@ instance and retrieve the information about spaces and indexes in the cluster:
'is_nullable': true}, {'name': 'name', 'type': 'string'}, {'name': 'age', 'type': 'number'}]
...
+.. _centralized_migrations_tt_space_upgrade:
-Adding new migrations
----------------------
+Complex migrations with space.upgrade()
+---------------------------------------
Complex migrations require data migration along with schema migration. Insert some
tuples into the space before proceeding to the next steps:
.. code-block:: tarantoolsession
- require('crud').insert_object_many('writers', {
+ myapp:router-001-a> require('crud').insert_object_many('writers', {
{id = 1, name = 'Haruki Murakami', age = 75},
{id = 2, name = 'Douglas Adams', age = 49},
{id = 3, name = 'Eiji Mikage', age = 41},
@@ -443,7 +316,7 @@ tuples into the space before proceeding to the next steps:
The next migration changes the space format incompatibly: instead of one ``name``
field, the new format includes two fields ``first_name`` and ``last_name``.
-To apply this migration, you need to change each tuple's format preserving the stored
+To apply this migration, you need to change each tuple's structure while preserving the stored
data. The :ref:`space.upgrade ` helps with this task.
Create a new file ``000003_alter_writers_space.lua`` in ``/migrations/scenario``.
@@ -452,9 +325,7 @@ Prepare its initial structure the same way as in previous migrations:
.. code-block:: lua
local function apply_scenario()
-
-- migration code
-
end
return {
apply = {
@@ -464,24 +335,11 @@ Prepare its initial structure the same way as in previous migrations:
Start the migration function with the new format description:
-.. code-block:: lua
-
- local function apply_scenario()
-
- local space = box.space['writers']
-
- local new_format = {
- {name = 'id', type = 'number'},
- {name = 'bucket_id', type = 'number'},
- {name = 'first_name', type = 'string'},
- {name = 'last_name', type = 'string'},
- {name = 'age', type = 'number'},
- }
- box.space.writers.index.age:drop()
-
- -- migration code
-
- end
+.. literalinclude:: /code_snippets/snippets/migrations/migrations/scenario/000003_alter_writers_space.lua
+ :language: lua
+ :start-at: local function apply_scenario()
+ :end-at: box.space.writers.index.age:drop()
+ :dedent:
.. note::
@@ -494,111 +352,20 @@ Next, create a stored function that transforms tuples to fit the new format.
In this case, the function extracts the first and last names from the ``name`` field
and returns a tuple of the new format:
-.. code-block:: lua
-
- box.schema.func.create('_writers_split_name', {
- language = 'lua',
- is_deterministic = true,
- body = [[
- function(t)
- local name = t[3]
-
- local split_data = {}
- local split_regex = '([^%s]+)'
- for v in string.gmatch(name, split_regex) do
- table.insert(split_data, v)
- end
-
- local first_name = split_data[1]
- assert(first_name ~= nil)
-
- local last_name = split_data[2]
- assert(last_name ~= nil)
-
- return {t[1], t[2], first_name, last_name, t[4]}
- end
- ]],
- })
-
-Finally pass the new format and the transformation function name into a ``space:upgrade``
-call and wait for it to complete:
-
-.. code-block:: lua
-
- local function apply_scenario()
+.. literalinclude:: /code_snippets/snippets/migrations/migrations/scenario/000003_alter_writers_space.lua
+ :language: lua
+ :start-at: box.schema.func.create
+ :end-before: local future = space:upgrade
+ :dedent:
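As an illustration (not part of the tutorial files), the splitting logic itself can be tried in plain Lua, independently of ``box.schema.func``:

```lua
-- Split a full name on whitespace, the way the stored function does.
local function split_name(name)
    local parts = {}
    for word in string.gmatch(name, '([^%s]+)') do
        table.insert(parts, word)
    end
    assert(parts[1] ~= nil and parts[2] ~= nil)
    return parts[1], parts[2]
end

local first, last = split_name('Haruki Murakami')
print(first, last)  -- Haruki  Murakami
```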
- -- space format
+Finally, call ``space:upgrade()`` with the new format and the transformation function
+as its arguments. Here is the complete migration code:
- box.schema.func.create('_writers_split_name', {
- language = 'lua',
- is_deterministic = true,
- body =
- -- data migration function
- })
+.. literalinclude:: /code_snippets/snippets/migrations/migrations/scenario/000003_alter_writers_space.lua
+ :language: lua
+ :dedent:
- local future = space:upgrade({
- func = '_writers_split_name',
- format = new_format,
- })
-
- future:wait()
- end
-
-Learn moew in space upgrade
-
-The full ``000003_alter_writers_space.lua`` migration code is as follows:
-
-.. code-block:: lua
-
- local function apply_scenario()
- local space = box.space['writers']
-
- local new_format = {
- {name = 'id', type = 'number'},
- {name = 'bucket_id', type = 'number'},
- {name = 'first_name', type = 'string'},
- {name = 'last_name', type = 'string'},
- {name = 'age', type = 'number'},
- }
- box.space.writers.index.age:drop()
-
- box.schema.func.create('_writers_split_name', {
- language = 'lua',
- is_deterministic = true,
- body = [[
- function(t)
- local name = t[3]
-
- local split_data = {}
- local split_regex = '([^%s]+)'
- for v in string.gmatch(name, split_regex) do
- table.insert(split_data, v)
- end
-
- local first_name = split_data[1]
- assert(first_name ~= nil)
-
- local last_name = split_data[2]
- assert(last_name ~= nil)
-
- return {t[1], t[2], first_name, last_name, t[4]}
- end
- ]],
- })
-
- local future = space:upgrade({
- func = '_writers_split_name',
- format = new_format,
- })
-
- future:wait()
- end
-
- return {
- apply = {
- scenario = apply_scenario,
- },
- }
+Learn more about ``space.upgrade()` execution in :ref:`enterprise-space_upgrade`.
Publish the new migration to etcd. Migrations that already exist in the storage are skipped.
@@ -615,15 +382,17 @@ Publish the new migration to etcd. Migrations that already exist in the storage
.. code-block:: console
- $ tt migrations publish http://app_user:config_pass@localhost:2379/myapp migrations/scenario/000003_alter_writers_space.lua
+ $ tt migrations publish http://app_user:config_pass@localhost:2379/myapp \
+ migrations/scenario/000003_alter_writers_space.lua
Apply the migrations:
.. code-block::
- $ tt migrations apply http://app_user:config_pass@localhost:2379/myapp --tarantool-username=client --tarantool-password=secret
+ $ tt migrations apply http://app_user:config_pass@localhost:2379/myapp \
+ --tarantool-username=client --tarantool-password=secret
-Connect to the router instance and retrieve the information about spaces and indexes in the cluster:
+Connect to the router instance and check that the space and its tuples have the new format:
.. code-block:: console

   $ tt connect myapp:router-001
@@ -638,21 +407,82 @@ Connect to the router instance and retrieve the information about spaces and ind
- null
...
-Extending the cluster
----------------------
+.. _centralized_migrations_tt_new_instances:
+
+Applying migrations on new instances
+------------------------------------
-With centralized migrations mechanism, you can
+Having all migrations in a centralized etcd storage, you can apply
-how to write migration files? tt-migrtions.helpers
+To add one more storage, edit the cluster files in ``instances.enabled/myapp``:
-Migration workflow
+- ``instances.yml``: add the lines below to the end.
-prepare files
-publish to etcd
-apply
-check status
+ .. literalinclude:: /code_snippets/snippets/migrations/instances.enabled/myapp/instances-3-storages.yml
+ :language: yaml
+ :start-at: storage-003-a:
+ :dedent:
+
+- ``source.yaml``: add the lines below to the end.
+
+ .. literalinclude:: /code_snippets/snippets/migrations/instances.enabled/myapp/source-3-storages.yml
+ :language: yaml
+ :start-at: storage-003:
+ :dedent:
+
+Publish the new cluster configuration to etcd:
+
+.. code-block:: console
-Handling errors
+ $ tt cluster publish "http://app_user:config_pass@localhost:2379/myapp/" source.yaml
-stop
-rollback - show examples with force apply?
+Run ``tt start`` to start up the new instances:
+
+.. code-block:: console
+
+ $ tt start myapp
+ • The instance myapp:router-001-a (PID = 61631) is already running.
+ • The instance myapp:storage-001-a (PID = 61632) is already running.
+ • The instance myapp:storage-001-b (PID = 61634) is already running.
+ • The instance myapp:storage-002-a (PID = 61639) is already running.
+ • The instance myapp:storage-002-b (PID = 61640) is already running.
+ • Starting an instance [myapp:storage-003-a]...
+ • Starting an instance [myapp:storage-003-b]...
+
+Now the cluster contains three storage replica sets. The new one -- ``storage-003`` --
+has just started and has no data schema yet. Apply all stored migrations to the cluster
+to load the same data schema to the new replica set:
+
+.. code-block:: console
+
+ $ tt migrations apply http://app_user:config_pass@localhost:2379/myapp \
+     --tarantool-username=client --tarantool-password=secret
+ • router-001:
+ • 000001_create_writers_space.lua: skipped, already applied
+ • 000002_create_writers_index.lua: skipped, already applied
+ • 000003_alter_writers_space.lua: skipped, already applied
+ • storage-001:
+ • 000001_create_writers_space.lua: skipped, already applied
+ • 000002_create_writers_index.lua: skipped, already applied
+ • 000003_alter_writers_space.lua: skipped, already applied
+ • storage-002:
+ • 000001_create_writers_space.lua: skipped, already applied
+ • 000002_create_writers_index.lua: skipped, already applied
+ • 000003_alter_writers_space.lua: skipped, already applied
+ • storage-003:
+ • 000001_create_writers_space.lua: successfully applied
+ • 000002_create_writers_index.lua: successfully applied
+ • 000003_alter_writers_space.lua: successfully applied
+
+To make sure that the space exists on the new instances, connect to ``storage-003-a``
+and check ``box.space.writers``:
+
+.. code-block:: console
+
+ $ tt connect myapp:storage-003-a
+
+.. code-block:: tarantoolsession
+
+ myapp:storage-003-a> box.space.writers ~= nil
+ ---
+ - true
+ ...
From 0463c1c163434b74c43d6ee5bb6003c826a16af8 Mon Sep 17 00:00:00 2001
From: Pavel Semyonov
Date: Fri, 27 Sep 2024 15:35:54 +0700
Subject: [PATCH 04/16] Migration tutorial draft
---
...s_tt.rst => centralized_migrations_tt.rst} | 13 ++++++---
doc/platform/ddl_dml/migrations.rst | 27 +++----------------
doc/tooling/tt_cli/migrations.rst | 7 ++---
3 files changed, 16 insertions(+), 31 deletions(-)
rename doc/platform/ddl_dml/{performing_migrations_tt.rst => centralized_migrations_tt.rst} (97%)
diff --git a/doc/platform/ddl_dml/performing_migrations_tt.rst b/doc/platform/ddl_dml/centralized_migrations_tt.rst
similarity index 97%
rename from doc/platform/ddl_dml/performing_migrations_tt.rst
rename to doc/platform/ddl_dml/centralized_migrations_tt.rst
index 24c1b11af..4907bb096 100644
--- a/doc/platform/ddl_dml/performing_migrations_tt.rst
+++ b/doc/platform/ddl_dml/centralized_migrations_tt.rst
@@ -1,13 +1,18 @@
.. _centralized_migrations_tt:
-Centralized migration management with tt
-========================================
+Centralized migrations with tt
+==============================
**Example on GitHub:** `migrations `_
In this tutorial, you learn to use the centralized migration management mechanism
implemented in the Enterprise Edition of the :ref:`tt ` utility.
+See also:
+
+- :ref:`tt migrations ` reference to see the full list of command-line options.
+- :ref:`tcm_cluster_migrations` to learn about managing migrations from |tcm_full_name|.
+
.. _centralized_migrations_tt_prereq:
Prerequisites
@@ -221,8 +226,8 @@ a cluster user's credentials:
.. code-block:: console
- tt migrations apply http://app_user:config_pass@localhost:2379/myapp \
- --tarantool-username=client --tarantool-password=secret
+ $ tt migrations apply http://app_user:config_pass@localhost:2379/myapp \
+ --tarantool-username=client --tarantool-password=secret
.. important::
diff --git a/doc/platform/ddl_dml/migrations.rst b/doc/platform/ddl_dml/migrations.rst
index c0a0d72af..b3a36019e 100644
--- a/doc/platform/ddl_dml/migrations.rst
+++ b/doc/platform/ddl_dml/migrations.rst
@@ -3,26 +3,6 @@
Migrations
==========
-TBD: rewrite
-
-- what is migration: examples: modification + initial creation
-- ce approach ?
-- ee approach: centralized migration(s?) management
- - prereq: etcd, 3.x ee cluster w/etcd, tt-ee 2.4.0 || tcm,
- tarantool user with ~admin permissions
- advertise url required?
- - way to manage: tt and tcm (check diff implementation)
- - writing migrations:
- examples: create space (if_not_exists?), add index (with shard key), add index existing space,
- space upgrade: wait or just run in bg
-
- - general workflow + status
- - troubleshooting?
-
-
-
-
-
**Migration** refers to any change in a data schema: adding or removing a field,
creating or dropping an index, changing a field format, and so on. Space creation
is also a schema migration. You can use this fact to track the evolution of your
@@ -129,7 +109,6 @@ and the database schema is updated.
However, this method may not work for everyone.
You may not be able to restart Tarantool or update the code using the hot-reload mechanism.
-
**Method 2**: the :ref:`tt ` utility
Connect to the necessary instance using ``tt connect``.
@@ -177,15 +156,15 @@ The centralized migration management mechanism is implemented in the Enterprise
version of the :ref:`tt ` utility and in :ref:`tcm`.
To learn how to manage migrations in Tarantool EE clusters from the command line,
-see :ref:`performing_migrations_tt`. To learn how to use the mechanism from the |tcm|
+see :ref:`centralized_migrations_tt`. To learn how to use the mechanism from the |tcm|
web interface, see the :ref:`tcm_migrations` |tcm| documentation page.
The ``tt`` implementation of the mechanism additionally includes commands for
troubleshooting migration issues. See :ref:`troubleshooting_migrations_tt`.
-
.. toctree::
+ :maxdepth: 1
space_upgrade
- performing_migrations_tt
+ centralized_migrations_tt
troubleshooting_migrations_tt
diff --git a/doc/tooling/tt_cli/migrations.rst b/doc/tooling/tt_cli/migrations.rst
index ee1d0de9c..37915d393 100644
--- a/doc/tooling/tt_cli/migrations.rst
+++ b/doc/tooling/tt_cli/migrations.rst
@@ -1,7 +1,7 @@
.. _tt-migrations:
-Performing migrations
-=====================
+Managing centralized migrations
+===============================
.. admonition:: Enterprise Edition
:class: fact
@@ -12,7 +12,8 @@ Performing migrations
$ tt migrations COMMAND [COMMAND_OPTION ...]
-``tt migrations`` manages :ref:`migrations ` in a Tarantool EE cluster.
+``tt migrations`` manages :ref:`centralized migrations ` in a Tarantool EE cluster.
+See :ref:`centralized_migrations_tt` for a detailed guide on using centralized migrations.
.. important::
From 4b30f691c386c5be1bc507fec51825dc7c2b858b Mon Sep 17 00:00:00 2001
From: Pavel Semyonov
Date: Fri, 27 Sep 2024 18:20:04 +0700
Subject: [PATCH 05/16] Migration tutorial draft
---
.../ddl_dml/centralized_migrations_tt.rst | 64 ++++++++++---
doc/tooling/tt_cli/migrations.rst | 96 +++++++++++--------
2 files changed, 106 insertions(+), 54 deletions(-)
diff --git a/doc/platform/ddl_dml/centralized_migrations_tt.rst b/doc/platform/ddl_dml/centralized_migrations_tt.rst
index 4907bb096..954bef013 100644
--- a/doc/platform/ddl_dml/centralized_migrations_tt.rst
+++ b/doc/platform/ddl_dml/centralized_migrations_tt.rst
@@ -10,7 +10,7 @@ implemented in the Enterprise Edition of the :ref:`tt ` utility.
See also:
-- :ref:`tt migrations ` reference to see the full list of command-line options.
+- :ref:`tt migrations ` for the full list of command-line options.
- :ref:`tcm_cluster_migrations` to learn about managing migrations from |tcm_full_name|.
.. _centralized_migrations_tt_prereq:
@@ -46,7 +46,7 @@ First, start up an etcd instance to use as a configuration storage:
etcd runs on the default port 2379.
-Optionally, you can enable etcd authentication by running the following script:
+Optionally, enable etcd authentication by executing the following script:
.. code-block:: bash
@@ -93,9 +93,9 @@ Creating a cluster
:language: yaml
:dedent:
- - ``app-scm-1.rockspec``:
+ - ``myapp-scm-1.rockspec``:
- .. literalinclude:: /code_snippets/snippets/migrations/instances.enabled/myapp/pp-scm-1.rockspec
+ .. literalinclude:: /code_snippets/snippets/migrations/instances.enabled/myapp/myapp-scm-1.rockspec
:dedent:
4. Create the ``source.yaml`` with a cluster configuration to publish to etcd:
@@ -103,7 +103,7 @@ Creating a cluster
.. note::
This configuration describes a typical CRUD-enabled sharded cluster with
- one router and two storages, each including one master and one read-only replica.
+ one router and two storage replica sets, each including one master and one read-only replica.
.. literalinclude:: /code_snippets/snippets/migrations/instances.enabled/myapp/source.yaml
:language: yaml
@@ -322,7 +322,7 @@ tuples into the space before proceeding to the next steps:
The next migration changes the space format incompatibly: instead of one ``name``
field, the new format includes two fields ``first_name`` and ``last_name``.
To apply this migration, you need to change each tuple's structure preserving the stored
-data. The :ref:`space.upgrade ` helps with this task.
+data. The :ref:`space.upgrade ` function helps with this task.
Create a new file ``000003_alter_writers_space.lua`` in ``/migrations/scenario``.
Prepare its initial structure the same way as in previous migrations:
@@ -370,7 +370,7 @@ as its arguments. Here is the complete migration code:
:language: lua
:dedent:
-Learn more about ``space.upgrade()` execution in :ref:`enterprise-space_upgrade`.
+Learn more about ``space.upgrade()`` execution in :ref:`enterprise-space_upgrade`.
Publish the new migration to etcd. Migrations that already exist in the storage are skipped.
@@ -392,7 +392,7 @@ Publish the new migration to etcd. Migrations that already exist in the storage
Apply the migrations:
-.. code-block::
+.. code-block:: console
$ tt migrations apply http://app_user:config_pass@localhost:2379/myapp \
--tarantool-username=client --tarantool-password=secret
@@ -414,12 +414,13 @@ Connect to the router instance and check that the space and its tuples have the
.. _centralized_migrations_tt_new_instances:
-Applying migrations on new instances
-------------------------------------
+Extending the cluster
+---------------------
-Having all migrations in a centralized etcd storage, you can apply
+Having all migrations in a centralized etcd storage, you can extend the cluster
+and consistently define the data schema on new instances on the fly.
-To add one more storage, edit the cluster files in ``instances.enabled/myapp``:
+Add one more storage replica set to the cluster. To do this, edit the cluster files in ``instances.enabled/myapp``:
- ``instances.yml``: add the lines below to the end.
@@ -430,7 +431,7 @@ To add one more storage, edit the cluster files in ``instances.enabled/myapp``:
- ``source.yaml``: add the lines below to the end.
- .. literalinclude:: /code_snippets/snippets/migrations/instances.enabled/myapp/source-3-storages.yml
+ .. literalinclude:: /code_snippets/snippets/migrations/instances.enabled/myapp/source-3-storages.yaml
:language: yaml
:start-at: storage-003:
:dedent:
@@ -491,3 +492,40 @@ and check ``box.space.writers``:
---
- true
...
+
+.. _centralized_migrations_tt_troubleshoot:
+
+Troubleshooting migrations
+--------------------------
+
+.. warning::
+
+ The options used for migration troubleshooting can cause migration inconsistency in the cluster.
+
+Incorrect migration published
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Problem: an incorrect migration is published to etcd.
+
+Solution: fix the migration file and publish it again with the ``--overwrite`` option:
+
+.. code-block:: console
+
+ $ tt migrations publish http://app_user:config_pass@localhost:2379/myapp --overwrite
+
+If there are several migrations and the erroneous one isn't the last, also add the ``--ignore-order-violation`` option:
+
+.. code-block:: console
+
+ $ tt migrations publish http://app_user:config_pass@localhost:2379/myapp --overwrite --ignore-order-violation
+
+Incorrect migration applied
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If the migration is already applied, publish the fixed version and apply it with
+the ``--force-reapply`` option:
+
+.. code-block:: console
+
+ $ tt migrations apply http://app_user:config_pass@localhost:2379/myapp \
+ --tarantool-username=client --tarantool-password=secret \
+ --force-reapply
diff --git a/doc/tooling/tt_cli/migrations.rst b/doc/tooling/tt_cli/migrations.rst
index 37915d393..2b8d2c7d4 100644
--- a/doc/tooling/tt_cli/migrations.rst
+++ b/doc/tooling/tt_cli/migrations.rst
@@ -12,8 +12,9 @@ Managing centralized migrations
$ tt migrations COMMAND [COMMAND_OPTION ...]
-``tt migrations`` manages :ref:`centralized migrations ` in a Tarantool EE cluster.
-See :ref:`centralized_migrations_tt` for a detailed guide on using centralized migrations.
+``tt migrations`` manages :ref:`centralized migrations `
+in a Tarantool EE cluster. See :ref:`centralized_migrations_tt` for a detailed guide
+on using the centralized migrations mechanism.
.. important::
@@ -37,8 +38,8 @@ apply
$ tt migrations apply ETCD_URI [OPTION ...]
-``tt migrations apply`` applies migrations :ref:`published `
-to the cluster to the cluster. It executes all migrations from the cluster's centralized
+``tt migrations apply`` applies :ref:`published ` migrations
+to the cluster. It executes all migrations from the cluster's centralized
configuration storage on all its read-write instances (replica set leaders).
.. code-block:: console
@@ -46,7 +47,7 @@ configuration storage on all its read-write instances (replica set leaders).
$ tt migrations apply https://user:pass@localhost:2379/myapp \
    --tarantool-username=admin --tarantool-password=pass
-You can select a single migration for execution by adding the ``--migration`` option:
+To apply a single published migration, pass its name in the ``--migration`` option:
.. code-block:: console
@@ -54,7 +55,7 @@ You can select a single migration for execution by adding the ``--migration`` op
--tarantool-username=admin --tarantool-password=pass \
--migration=000001_create_space.lua
-You can select a single replica set to apply migrations to:
+To apply migrations on a single replica set, specify the ``replicaset`` option:
.. code-block:: console
@@ -62,7 +63,7 @@ You can select a single replica set to apply migrations to:
--tarantool-username=admin --tarantool-password=pass \
--replicaset=storage-001
--- migration - single migration. --order violation
-?? diff --force-reapply --ignore-preceding-status
@@ -101,14 +102,14 @@ To publish a single migration from a file, use its name or path as the command a
$ tt migrations publish https://user:pass@localhost:2379/myapp migrations/000001_create_space.lua
-Optionally, you can provide a key to use as a migration identifier instead of the file name:
+Optionally, you can provide a key to use as a migration identifier instead of the filename:
.. code-block:: console
$ tt migrations publish https://user:pass@localhost:2379/myapp file.lua \
--key=000001_create_space.lua
-When publishing migrations, ``tt`` performs several checks for:
+When publishing migrations, ``tt`` performs checks for:
- Syntax errors in migration files. To skip syntax check, add the ``--skip-syntax-check`` option.
- Existence of migrations with the same names. To overwrite an existing migration with
@@ -246,7 +247,7 @@ stop
$ tt migrations stop ETCD_URI [OPTION ...]
-``tt migrations stop`` stops the execution of migrations in the cluster
+``tt migrations stop`` stops the execution of migrations in the cluster.
.. warning::
@@ -284,15 +285,12 @@ If the cluster uses SSL traffic encryption, provide the necessary connection
parameters in the ``--tarantool-ssl*`` options: ``--tarantool-sslcertfile``,
``--tarantool-sslkeyfile``, and others. All options are listed in :ref:`tt-migrations-options`.
-?auth type
-?example
-
.. _tt-migrations-options:
Options
-------
-.. option:: --acquire-lock-timeout int
+.. option:: --acquire-lock-timeout INT
**Applicable to:** ``apply``
@@ -321,7 +319,7 @@ Options
See also: :ref:`tt-migrations-status`.
-.. option:: --execution-timeout int
+.. option:: --execution-timeout INT
**Applicable to:** ``apply``, ``remove``, ``status``, ``stop``
@@ -359,13 +357,21 @@ Options
**Applicable to:** ``apply``, ``publish``
- Skip migration scenarios order check before publish. Using this flag may result in cluster migrations inconsistency
+ Skip migration scenarios order check before publish.
+
+ .. warning::
+
+ Using this option may result in cluster migrations inconsistency.
.. option:: --ignore-preceding-status
**Applicable to:** ``apply``
- skip preceding migrations status check on apply. Using this flag may result in cluster migrations inconsistency
+ Skip preceding migrations status check on apply.
+
+ .. warning::
+
+ Using this option may result in cluster migrations inconsistency.
.. option:: --key STRING
@@ -373,7 +379,7 @@ Options
Put the scenario to the //migrations/scenario/ etcd key instead. Only for single file publish.
-.. option:: --migration string
+.. option:: --migration STRING
**Applicable to:** ``apply``, ``remove``, ``status``
@@ -383,9 +389,13 @@ Options
**Applicable to:** ``publish``
- overwrite existing migration storage keys. Using this flag may result in cluster migrations inconsistency
+ Overwrite existing migration storage keys.
+
+ .. warning::
+
+ Using this option may result in cluster migrations inconsistency.
-.. option:: --replicaset string
+.. option:: --replicaset STRING
**Applicable to:** ``apply``, ``remove``, ``status``, ``stop``
@@ -395,69 +405,73 @@ Options
**Applicable to:** ``publish``
- Skip syntax check before publish. Using this flag may cause other tt migrations operations to fail
+ Skip syntax check before publish.
+
+ .. warning::
+
+ Using this option may cause further ``tt migrations`` calls to fail.
-.. option:: --tarantool-auth string
+.. option:: --tarantool-auth STRING
**Applicable to:** ``apply``, ``remove``, ``status``, ``stop``
- authentication type (used only to connect to Tarantool cluster instances)
+ Authentication type used to connect to the cluster instances.
-.. option:: --tarantool-connect-timeout int
+.. option:: --tarantool-connect-timeout INT
**Applicable to:** ``apply``, ``remove``, ``status``, ``stop``
- Tarantool cluster instances connection timeout,in seconds. Default: 3.
+ Tarantool cluster instances connection timeout, in seconds. Default: 3.
-.. option:: --tarantool-password string
+.. option:: --tarantool-password STRING
**Applicable to:** ``apply``, ``remove``, ``status``, ``stop``
- A password used for connecting to the Tarantool cluster instances.
+ A password used to connect to the cluster instances.
-.. option:: --tarantool-sslcafile string
+.. option:: --tarantool-sslcafile STRING
**Applicable to:** ``apply``, ``remove``, ``status``, ``stop``
- SSL CA file (used only to connect to Tarantool cluster instances)
+ SSL CA file used to connect to the cluster instances.
-.. option:: --tarantool-sslcertfile string
+.. option:: --tarantool-sslcertfile STRING
**Applicable to:** ``apply``, ``remove``, ``status``, ``stop``
- SSL cert file (used only to connect to Tarantool cluster instances)
+ SSL cert file used to connect to the cluster instances.
-.. option:: --tarantool-sslciphers string
+.. option:: --tarantool-sslciphers STRING
**Applicable to:** ``apply``, ``remove``, ``status``, ``stop``
- Colon-separated list of SSL ciphers (used only to connect to Tarantool cluster instances)
+ Colon-separated list of SSL ciphers used to connect to the cluster instances.
-.. option:: --tarantool-sslkeyfile string
+.. option:: --tarantool-sslkeyfile STRING
**Applicable to:** ``apply``, ``remove``, ``status``, ``stop``
- SSL key file (used only to connect to Tarantool cluster instances)
+ SSL key file used to connect to the cluster instances.
-.. option:: --tarantool-sslpassword string
+.. option:: --tarantool-sslpassword STRING
**Applicable to:** ``apply``, ``remove``, ``status``, ``stop``
- SSL key file password (used only to connect to Tarantool cluster instances)
+ SSL key file password used to connect to the cluster instances.
-.. option:: --tarantool-sslpasswordfile string
+.. option:: --tarantool-sslpasswordfile STRING
**Applicable to:** ``apply``, ``remove``, ``status``, ``stop``
- File with list of password to SSL key file (used only to connect to Tarantool cluster instances)
+ File with a list of passwords for the SSL key file, used to connect to the cluster instances.
.. option:: --tarantool-use-ssl
**Applicable to:** ``apply``, ``remove``, ``status``, ``stop``
- use SSL without providing any additional SSL info (used only to connect to Tarantool cluster instances)
+ Use SSL to connect to the cluster instances without providing additional SSL parameters.
-.. option:: --tarantool-username string
+.. option:: --tarantool-username STRING
**Applicable to:** ``apply``, ``remove``, ``status``, ``stop``
From ba2a9fdf457350d29c8b67efda84e0d2f62bf653 Mon Sep 17 00:00:00 2001
From: Pavel Semyonov
Date: Fri, 27 Sep 2024 19:55:03 +0700
Subject: [PATCH 06/16] Migrations structure
---
.../ddl_dml/centralized_migrations_tt.rst | 531 ------------------
doc/platform/ddl_dml/index.rst | 2 +-
.../migrations/basic_migrations_tt.rst | 299 ++++++++++
.../migrations/centralized_migrations_tt.rst | 22 +
.../migrations/extend_migrations_tt.rst | 115 ++++
.../{migrations.rst => migrations/index.rst} | 0
.../{ => migrations}/space_upgrade.rst | 0
.../migrations/troubleshoot_migrations_tt.rst | 81 +++
.../migrations/upgrade_migrations_tt.rst | 140 +++++
.../ddl_dml/troubleshooting_migrations_tt.rst | 51 --
10 files changed, 658 insertions(+), 583 deletions(-)
delete mode 100644 doc/platform/ddl_dml/centralized_migrations_tt.rst
create mode 100644 doc/platform/ddl_dml/migrations/basic_migrations_tt.rst
create mode 100644 doc/platform/ddl_dml/migrations/centralized_migrations_tt.rst
create mode 100644 doc/platform/ddl_dml/migrations/extend_migrations_tt.rst
rename doc/platform/ddl_dml/{migrations.rst => migrations/index.rst} (100%)
rename doc/platform/ddl_dml/{ => migrations}/space_upgrade.rst (100%)
create mode 100644 doc/platform/ddl_dml/migrations/troubleshoot_migrations_tt.rst
create mode 100644 doc/platform/ddl_dml/migrations/upgrade_migrations_tt.rst
delete mode 100644 doc/platform/ddl_dml/troubleshooting_migrations_tt.rst
diff --git a/doc/platform/ddl_dml/centralized_migrations_tt.rst b/doc/platform/ddl_dml/centralized_migrations_tt.rst
deleted file mode 100644
index 954bef013..000000000
--- a/doc/platform/ddl_dml/centralized_migrations_tt.rst
+++ /dev/null
@@ -1,531 +0,0 @@
-.. _centralized_migrations_tt:
-
-Centralized migrations with tt
-==============================
-
-**Example on GitHub:** `migrations `_
-
-In this tutorial, you learn to use the centralized migration management mechanism
-implemented in the Enterprise Edition of the :ref:`tt ` utility.
-
-See also:
-
-- :ref:`tt migrations ` for the full list of command-line options.
-- :ref:`tcm_cluster_migrations` to learn about managing migrations from |tcm_full_name|.
-
-.. _centralized_migrations_tt_prereq:
-
-Prerequisites
--------------
-
-Before starting this tutorial:
-
-- Download and :ref:`install Tarantool Enterprise SDK `.
-- Install `etcd `__.
-
-.. _centralized_migrations_tt_cluster:
-
-Preparing a cluster
--------------------
-
-The centralized migration mechanism works with Tarantool EE clusters that:
-
-- use etcd as a centralized configuration storage
-- use the `CRUD `__ module for data sharding
-
-.. _centralized_migrations_tt_cluster_etcd:
-
-Setting up etcd
-~~~~~~~~~~~~~~~
-
-First, start up an etcd instance to use as a configuration storage:
-
-.. code-block:: console
-
- $ etcd
-
-etcd runs on the default port 2379.
-
-Optionally, enable etcd authentication by executing the following script:
-
-.. code-block:: bash
-
- #!/usr/bin/env bash
-
- etcdctl user add root:topsecret
- etcdctl role add app_config_manager
- etcdctl role grant-permission app_config_manager --prefix=true readwrite /myapp/
- etcdctl user add app_user:config_pass
- etcdctl user grant-role app_user app_config_manager
- etcdctl auth enable
-
-It creates an etcd user ``app_user`` with read and write permissions to the ``/myapp``
-prefix, in which the cluster configuration will be stored. The user's password is ``config_pass``.
-
-.. note::
-
- If you don't enable etcd authentication, make ``tt migrations`` calls without
- the configuration storage credentials.
-
-.. _centralized_migrations_tt_cluster_create:
-
-Creating a cluster
-~~~~~~~~~~~~~~~~~~
-
-#. Initialize a ``tt`` environment:
-
- .. code-block:: console
-
- $ tt init
-
-#. In the ``instances.enabled`` directory, create the ``myapp`` directory.
-#. Go to the ``instances.enabled/myapp`` directory and create application files:
-
- - ``instances.yml``:
-
- .. literalinclude:: /code_snippets/snippets/migrations/instances.enabled/myapp/instances.yml
- :language: yaml
- :dedent:
-
- - ``config.yaml``:
-
- .. literalinclude:: /code_snippets/snippets/migrations/instances.enabled/myapp/config.yaml
- :language: yaml
- :dedent:
-
- - ``myapp-scm-1.rockspec``:
-
- .. literalinclude:: /code_snippets/snippets/migrations/instances.enabled/myapp/myapp-scm-1.rockspec
- :dedent:
-
-4. Create the ``source.yaml`` with a cluster configuration to publish to etcd:
-
- .. note::
-
- This configuration describes a typical CRUD-enabled sharded cluster with
- one router and two storage replica sets, each including one master and one read-only replica.
-
- .. literalinclude:: /code_snippets/snippets/migrations/instances.enabled/myapp/source.yaml
- :language: yaml
- :dedent:
-
-#. Publish the configuration to etcd:
-
- .. code-block:: console
-
- $ tt cluster publish "http://app_user:config_pass@localhost:2379/myapp/" source.yaml
-
-The full cluster code is available on GitHub here: `migrations `_.
-
-.. _centralized_migrations_tt_cluster_start:
-
-Building and starting the cluster
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-#. Build the application:
-
- .. code-block:: console
-
- $ tt build myapp
-
-#. Start the cluster:
-
- .. code-block:: console
-
- $ tt start myapp
-
- To check that the cluster is up and running, use ``tt status``:
-
- .. code-block:: console
-
- $ tt status myapp
-
-#. Bootstrap vshard in the cluster:
-
- .. code-block:: console
-
- $ tt replicaset vshard bootstrap myapp
-
-.. _centralized_migrations_tt_write:
-
-Writing migrations
-------------------
-
-To perform migrations in the cluster, write them in Lua and publish to the cluster's
-etcd configuration storage.
-
-Each migration file must return a Lua table with one object named ``apply``.
-This object has one field -- ``scenario`` -- that stores the migration function:
-
-.. code-block:: lua
-
- local function apply_scenario()
- -- migration code
- end
-
- return {
- apply = {
- scenario = apply_scenario,
- },
- }
-
-The migration unit is a single file: its ``scenario`` is executed as a whole. An error
-that happens in any step of the ``scenario`` causes the entire migration to fail.
-
-Migrations are executed in the lexicographical order. Thus, it's convenient to
-use filenames that start with ordered numbers to define the migrations order, for example:
-
-.. code-block:: text
-
- 000001_create_space.lua
- 000002_create_index.lua
- 000003_alter_space.lua
-
-The default location where ``tt`` searches for migration files is ``/migrations/scenario``.
-Create this subdirectory inside the ``tt`` environment. Then, create two migration files:
-
-- ``000001_create_writers_space.lua``: create a space, define its format, and
- create a primary index.
-
- .. literalinclude:: /code_snippets/snippets/migrations/migrations/scenario/000001_create_writers_space.lua
- :language: lua
- :dedent:
-
- .. note::
-
- Note the usage of the ``tt-migrations.helpers`` module.
- In this example, its function ``register_sharding_key`` is used
- to define a sharding key for the space.
-
-- ``000002_create_writers_index.lua``: add one more index.
-
- .. literalinclude:: /code_snippets/snippets/migrations/migrations/scenario/000002_create_writers_index.lua
- :language: lua
- :dedent:
-
-.. _centralized_migrations_tt_publish:
-
-Publishing migrations
----------------------
-
-To publish migrations to the etcd configuration storage, run ``tt migrations publish``:
-
-.. code-block:: console
-
- $ tt migrations publish http://app_user:config_pass@localhost:2379/myapp
- • 000001_create_writes_space.lua: successfully published to key "000001_create_writes_space.lua"
- • 000002_create_writers_index.lua: successfully published to key "000002_create_writers_index.lua"
-
-.. _centralized_migrations_tt_apply:
-
-Applying migrations
--------------------
-
-To apply published migrations to the cluster, run ``tt migrations apply`` providing
-a cluster user's credentials:
-
-.. code-block:: console
-
- $ tt migrations apply http://app_user:config_pass@localhost:2379/myapp \
- --tarantool-username=client --tarantool-password=secret
-
-.. important::
-
- The cluster user must have enough access privileges to execute the migrations code.
-
-The output should look as follows:
-
-.. code-block:: console
-
- • router-001:
- • 000001_create_writes_space.lua: successfully applied
- • 000002_create_writers_index.lua: successfully applied
- • storage-001:
- • 000001_create_writes_space.lua: successfully applied
- • 000002_create_writers_index.lua: successfully applied
- • storage-002:
- • 000001_create_writes_space.lua: successfully applied
- • 000002_create_writers_index.lua: successfully applied
-
-The migrations are applied on all replica set leaders. Read-only replicas
-receive the changes from the corresponding replica set leaders.
-
-Check the migrations status with ``tt migration status``:
-
-.. code-block:: console
-
- $ tt migrations status http://app_user:config_pass@localhost:2379/myapp --tarantool-username=client --tarantool-password=secret
- • migrations centralized storage scenarios:
- • 000001_create_writes_space.lua
- • 000002_create_writers_index.lua
- • migrations apply status on Tarantool cluster:
- • router-001:
- • 000001_create_writes_space.lua: APPLIED
- • 000002_create_writers_index.lua: APPLIED
- • storage-001:
- • 000001_create_writes_space.lua: APPLIED
- • 000002_create_writers_index.lua: APPLIED
- • storage-002:
- • 000001_create_writes_space.lua: APPLIED
- • 000002_create_writers_index.lua: APPLIED
-
-To make sure that the space and indexes are created in the cluster, connect to the router
-instance and retrieve the space information:
-
-.. code-block:: $ tt connect myapp:router-001
-
-.. code-block:: tarantoolsession
-
- myapp:router-001-a> require('crud').schema('writers')
- ---
- - indexes:
- 0:
- unique: true
- parts:
- - fieldno: 1
- type: number
- exclude_null: false
- is_nullable: false
- id: 0
- type: TREE
- name: primary
- 2:
- unique: true
- parts:
- - fieldno: 4
- type: number
- exclude_null: false
- is_nullable: false
- id: 2
- type: TREE
- name: age
- format: [{'name': 'id', 'type': 'number'}, {'type': 'number', 'name': 'bucket_id',
- 'is_nullable': true}, {'name': 'name', 'type': 'string'}, {'name': 'age', 'type': 'number'}]
- ...
-
-.. _centralized_migrations_tt_space_upgrade:
-
-Complex migrations with space.upgrade()
----------------------------------------
-
-Complex migrations require data migration along with schema migration. Insert some
-tuples into the space before proceeding to the next steps:
-
-.. code-block:: tarantoolsession
-
- myapp:router-001-a> require('crud').insert_object_many('writers', {
- {id = 1, name = 'Haruki Murakami', age = 75},
- {id = 2, name = 'Douglas Adams', age = 49},
- {id = 3, name = 'Eiji Mikage', age = 41},
- }, {noreturn = true})
-
-The next migration changes the space format incompatibly: instead of one ``name``
-field, the new format includes two fields ``first_name`` and ``last_name``.
-To apply this migration, you need to change each tuple's structure preserving the stored
-data. The :ref:`space.upgrade ` function helps with this task.
-
-Create a new file ``000003_alter_writers_space.lua`` in ``/migrations/scenario``.
-Prepare its initial structure the same way as in previous migrations:
-
-.. code-block:: lua
-
- local function apply_scenario()
- -- migration code
- end
- return {
- apply = {
- scenario = apply_scenario,
- },
- }
-
-Start the migration function with the new format description:
-
-.. literalinclude:: /code_snippets/snippets/migrations/migrations/scenario/000003_alter_writers_space.lua
- :language: lua
- :start-at: local function apply_scenario()
- :end-at: box.space.writers.index.age:drop()
- :dedent:
-
-.. note::
-
- ``box.space.writers.index.age:drop()`` drops an existing index. This is done
- because indexes rely on field numbers and may break during this format change.
- If you need the ``age`` field indexed, recreate the index after applying the
- new format.
-
-Next, create a stored function that transforms tuples to fit the new format.
-In this case, the functions extracts the first and the last name from the ``name`` field
-and returns a tuple of the new format:
-
-.. literalinclude:: /code_snippets/snippets/migrations/migrations/scenario/000003_alter_writers_space.lua
- :language: lua
- :start-at: box.schema.func.create
- :end-before: local future = space:upgrade
- :dedent:
-
-Finally, call ``space:upgrade()`` with the new format and the transformation function
-as its arguments. Here is the complete migration code:
-
-.. literalinclude:: /code_snippets/snippets/migrations/migrations/scenario/000003_alter_writers_space.lua
- :language: lua
- :dedent:
-
-Learn more about ``space.upgrade()`` execution in :ref:`enterprise-space_upgrade`.
-
-Publish the new migration to etcd. Migrations that already exist in the storage are skipped.
-
-.. code-block:: console
-
- $ tt migrations publish http://app_user:config_pass@localhost:2379/myapp
- • 000001_create_writes_space.lua: skipped, key "000001_create_writes_space.lua" already exists with the same content
- • 000002_create_writers_index.lua: skipped, key "000002_create_writers_index.lua" already exists with the same content
- • 000003_alter_writers_space.lua: successfully published to key "000003_alter_writers_space.lua"
-
-.. note::
-
- You can also publish a single migration file by passing a path to it as an argument:
-
- .. code-block:: console
-
- $ tt migrations publish http://app_user:config_pass@localhost:2379/myapp \
- migrations/scenario/000003_alter_writers_space.lua
-
-Apply the migrations:
-
-.. code-block:: console
-
- $ tt migrations apply http://app_user:config_pass@localhost:2379/myapp \
- --tarantool-username=client --tarantool-password=secret
-
-Connect to the router instance and check that the space and its tuples have the new format:
-
-.. code-block:: $ tt connect myapp:router-001
-
-.. code-block:: tarantoolsession
-
- myapp:router-001-a> require('crud').get('writers', 2)
- ---
- - rows: []
- metadata: [{'name': 'id', 'type': 'number'}, {'name': 'bucket_id', 'type': 'number'},
- {'name': 'first_name', 'type': 'string'}, {'name': 'last_name', 'type': 'string'},
- {'name': 'age', 'type': 'number'}]
- - null
- ...
-
-.. _centralized_migrations_tt_new_instances:
-
-Extending the cluster
----------------------
-
-Having all migrations in a centralized etcd storage, you can extend the cluster
-and consistently define the data schema on new instances on the fly.
-
-Add one more storage replica set to the cluster. To do this, edit the cluster files in ``instances.enabled/myapp``:
-
-- ``instances.yml``: add the lines below to the end.
-
- .. literalinclude:: /code_snippets/snippets/migrations/instances.enabled/myapp/instances-3-storages.yml
- :language: yaml
- :start-at: storage-003-a:
- :dedent:
-
-- ``source.yaml``: add the lines below to the end.
-
- .. literalinclude:: /code_snippets/snippets/migrations/instances.enabled/myapp/source-3-storages.yaml
- :language: yaml
- :start-at: storage-003:
- :dedent:
-
-Publish the new cluster configuration to etcd:
-
-.. code-block:: console
-
- $ tt cluster publish "http://app_user:config_pass@localhost:2379/myapp/" source.yaml
-
-Run ``tt start`` to start up the new instances:
-
-.. code-block:: console
-
- $ tt start myapp
- • The instance myapp:router-001-a (PID = 61631) is already running.
- • The instance myapp:storage-001-a (PID = 61632) is already running.
- • The instance myapp:storage-001-b (PID = 61634) is already running.
- • The instance myapp:storage-002-a (PID = 61639) is already running.
- • The instance myapp:storage-002-b (PID = 61640) is already running.
- • Starting an instance [myapp:storage-003-a]...
- • Starting an instance [myapp:storage-003-b]...
-
-Now the cluster contains three storage replica sets. The new one -- ``storage-003``--
-is just started and has no data schema yet. Apply all stored migrations to the cluster
-to load the same data schema to the new replica set:
-
-.. code-block:: console
-
- $ tt migrations apply http://app_user:config_pass@localhost:2379/myapp --tarantool-username=client --tarantool-password=secret
- • router-001:
- • 000001_create_writes_space.lua: skipped, already applied
- • 000002_create_writers_index.lua: skipped, already applied
- • 000003_alter_writers_space.lua: skipped, already applied
- • storage-001:
- • 000001_create_writes_space.lua: skipped, already applied
- • 000002_create_writers_index.lua: skipped, already applied
- • 000003_alter_writers_space.lua: skipped, already applied
- • storage-002:
- • 000001_create_writes_space.lua: skipped, already applied
- • 000002_create_writers_index.lua: skipped, already applied
- • 000003_alter_writers_space.lua: skipped, already applied
- • storage-003:
- • 000001_create_writes_space.lua: successfully applied
- • 000002_create_writers_index.lua: successfully applied
- • 000003_alter_writers_space.lua: successfully applied
-
-To make sure that the space exists on the new instances, connect to ``storage-003-a``
-and check ``box.space.writers``:
-
-.. code-block:: console
-
- $ tt connect myapp:storage-003-a
-
-.. code-block:: tarantoolsession
-
- myapp:storage-003-a> box.space.writers ~= nil
- ---
- - true
- ...
-
-.. _centralized_migrations_tt_troubleshoot:
-
-Troubleshooting migrations
---------------------------
-
-.. warning::
-
- The options used for migration troubleshooting can cause migration inconsistency in the cluster.
-
-Incorrect migration published
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Problem: an incorrect migration is published to etcd.
-Solution: fix the migration file and publish it again with the ``--overwrite`` option:
-
-.. code-block:: console
-
- $ tt migrations publish http://app_user:config_pass@localhost:2379/myapp --overwrite
-
-If there are several migrations and the erroneous one isn't the last, add also ``--ignore-order-violation``:
-
-.. code-block:: console
-
- $ tt migrations publish http://app_user:config_pass@localhost:2379/myapp --overwrite --ignore-order-violation
-
-Incorrect migration applied
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-If the migration is already applied, publish the fixed version and apply it with
-the ``--force-reapply`` option:
-
-.. code-block:: console
-
- $ tt migrations apply http://app_user:config_pass@localhost:2379/myapp \
- --tarantool-username=client --tarantool-password=secret \
- --force-reapply
diff --git a/doc/platform/ddl_dml/index.rst b/doc/platform/ddl_dml/index.rst
index d595172eb..d62931229 100644
--- a/doc/platform/ddl_dml/index.rst
+++ b/doc/platform/ddl_dml/index.rst
@@ -32,7 +32,7 @@ This section contains guides on performing data operations in Tarantool.
value_store
schema_desc
operations
- migrations
+ migrations/index
read_views
sql/index
diff --git a/doc/platform/ddl_dml/migrations/basic_migrations_tt.rst b/doc/platform/ddl_dml/migrations/basic_migrations_tt.rst
new file mode 100644
index 000000000..96aac0979
--- /dev/null
+++ b/doc/platform/ddl_dml/migrations/basic_migrations_tt.rst
@@ -0,0 +1,299 @@
+.. _basic_migrations_tt:
+
+Basic tt migrations tutorial
+============================
+
+**Example on GitHub:** `migrations `_
+
+In this tutorial, you learn to define the cluster data schema using the centralized
+migration management mechanism implemented in the Enterprise Edition of the :ref:`tt ` utility.
+
+.. _basic_migrations_tt_prereq:
+
+Prerequisites
+-------------
+
+Before starting this tutorial:
+
+- Download and :ref:`install Tarantool Enterprise SDK `.
+- Install `etcd `__.
+
+.. _basic_migrations_tt_cluster:
+
+Preparing a cluster
+-------------------
+
+The centralized migration mechanism works with Tarantool EE clusters that:
+
+- use etcd as a centralized configuration storage
+- use the `CRUD `__ module for data sharding
+
+.. _basic_migrations_tt_cluster_etcd:
+
+Setting up etcd
+~~~~~~~~~~~~~~~
+
+First, start up an etcd instance to use as a configuration storage:
+
+.. code-block:: console
+
+ $ etcd
+
+etcd runs on the default port 2379.
+
+Optionally, enable etcd authentication by executing the following script:
+
+.. code-block:: bash
+
+ #!/usr/bin/env bash
+
+ etcdctl user add root:topsecret
+ etcdctl role add app_config_manager
+ etcdctl role grant-permission app_config_manager --prefix=true readwrite /myapp/
+ etcdctl user add app_user:config_pass
+ etcdctl user grant-role app_user app_config_manager
+ etcdctl auth enable
+
+The script creates an etcd user ``app_user`` with read and write permissions to the ``/myapp``
+prefix, in which the cluster configuration will be stored. The user's password is ``config_pass``.
+
+.. note::
+
+ If you don't enable etcd authentication, make ``tt migrations`` calls without
+ the configuration storage credentials.
+
+.. _basic_migrations_tt_cluster_create:
+
+Creating a cluster
+~~~~~~~~~~~~~~~~~~
+
+#. Initialize a ``tt`` environment:
+
+ .. code-block:: console
+
+ $ tt init
+
+#. In the ``instances.enabled`` directory, create the ``myapp`` directory.
+#. Go to the ``instances.enabled/myapp`` directory and create application files:
+
+ - ``instances.yml``:
+
+ .. literalinclude:: /code_snippets/snippets/migrations/instances.enabled/myapp/instances.yml
+ :language: yaml
+ :dedent:
+
+ - ``config.yaml``:
+
+ .. literalinclude:: /code_snippets/snippets/migrations/instances.enabled/myapp/config.yaml
+ :language: yaml
+ :dedent:
+
+ - ``myapp-scm-1.rockspec``:
+
+ .. literalinclude:: /code_snippets/snippets/migrations/instances.enabled/myapp/myapp-scm-1.rockspec
+ :dedent:
+
+4. Create the ``source.yaml`` with a cluster configuration to publish to etcd:
+
+ .. note::
+
+ This configuration describes a typical CRUD-enabled sharded cluster with
+ one router and two storage replica sets, each including one master and one read-only replica.
+
+ .. literalinclude:: /code_snippets/snippets/migrations/instances.enabled/myapp/source.yaml
+ :language: yaml
+ :dedent:
+
+#. Publish the configuration to etcd:
+
+ .. code-block:: console
+
+ $ tt cluster publish "http://app_user:config_pass@localhost:2379/myapp/" source.yaml
+
+The full cluster code is available on GitHub: `migrations `_.
+
+.. _basic_migrations_tt_cluster_start:
+
+Building and starting the cluster
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+#. Build the application:
+
+ .. code-block:: console
+
+ $ tt build myapp
+
+#. Start the cluster:
+
+ .. code-block:: console
+
+ $ tt start myapp
+
+ To check that the cluster is up and running, use ``tt status``:
+
+ .. code-block:: console
+
+ $ tt status myapp
+
+#. Bootstrap vshard in the cluster:
+
+ .. code-block:: console
+
+ $ tt replicaset vshard bootstrap myapp
+
+.. _basic_migrations_tt_write:
+
+Writing migrations
+------------------
+
+To perform migrations in the cluster, write them in Lua and publish them to the
+cluster's etcd configuration storage.
+
+Each migration file must return a Lua table with one object named ``apply``.
+This object has one field -- ``scenario`` -- that stores the migration function:
+
+.. code-block:: lua
+
+ local function apply_scenario()
+ -- migration code
+ end
+
+ return {
+ apply = {
+ scenario = apply_scenario,
+ },
+ }
+
+The migration unit is a single file: its ``scenario`` is executed as a whole. An error
+that happens in any step of the ``scenario`` causes the entire migration to fail.
+
+Migrations are executed in the lexicographical order. Thus, it's convenient to
+use filenames that start with ordered numbers to define the migrations order, for example:
+
+.. code-block:: text
+
+ 000001_create_space.lua
+ 000002_create_index.lua
+ 000003_alter_space.lua
+
+The default location where ``tt`` searches for migration files is ``/migrations/scenario``.
+Create this subdirectory inside the ``tt`` environment. Then, create two migration files:
+
+- ``000001_create_writers_space.lua``: create a space, define its format, and
+ create a primary index.
+
+ .. literalinclude:: /code_snippets/snippets/migrations/migrations/scenario/000001_create_writers_space.lua
+ :language: lua
+ :dedent:
+
+ .. note::
+
+ Note the usage of the ``tt-migrations.helpers`` module.
+ In this example, its function ``register_sharding_key`` is used
+ to define a sharding key for the space.
+
+- ``000002_create_writers_index.lua``: add one more index.
+
+ .. literalinclude:: /code_snippets/snippets/migrations/migrations/scenario/000002_create_writers_index.lua
+ :language: lua
+ :dedent:
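+
+For reference, the first migration might look like the following minimal sketch.
+The actual file is included from the example repository; the field names and
+types below are assumptions based on the space format shown later in this
+tutorial:
+
+.. code-block:: lua
+
+    local helpers = require('tt-migrations.helpers')
+
+    local function apply_scenario()
+        -- Create the space and define its format.
+        local space = box.schema.space.create('writers')
+        space:format({
+            {name = 'id', type = 'number'},
+            {name = 'bucket_id', type = 'number', is_nullable = true},
+            {name = 'name', type = 'string'},
+            {name = 'age', type = 'number'},
+        })
+        -- Create the primary index.
+        space:create_index('primary', {parts = {'id'}})
+        -- Define the sharding key for the space.
+        helpers.register_sharding_key('writers', {'id'})
+    end
+
+    return {
+        apply = {
+            scenario = apply_scenario,
+        },
+    }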
+
+.. _basic_migrations_tt_publish:
+
+Publishing migrations
+---------------------
+
+To publish migrations to the etcd configuration storage, run ``tt migrations publish``:
+
+.. code-block:: console
+
+ $ tt migrations publish http://app_user:config_pass@localhost:2379/myapp
+ • 000001_create_writes_space.lua: successfully published to key "000001_create_writes_space.lua"
+ • 000002_create_writers_index.lua: successfully published to key "000002_create_writers_index.lua"
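+
+To double-check what was published, you can list the keys under the application
+prefix with ``etcdctl``. The exact key layout under the prefix is an assumption
+here; adjust it to the keys reported by ``tt migrations status``:
+
+.. code-block:: console
+
+    $ etcdctl --user app_user:config_pass get --prefix --keys-only "/myapp/migrations/"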
+
+.. _basic_migrations_tt_apply:
+
+Applying migrations
+-------------------
+
+To apply published migrations to the cluster, run ``tt migrations apply`` providing
+a cluster user's credentials:
+
+.. code-block:: console
+
+ $ tt migrations apply http://app_user:config_pass@localhost:2379/myapp \
+ --tarantool-username=client --tarantool-password=secret
+
+.. important::
+
+ The cluster user must have enough access privileges to execute the migrations code.
+
+The output should look as follows:
+
+.. code-block:: console
+
+ • router-001:
+ • 000001_create_writes_space.lua: successfully applied
+ • 000002_create_writers_index.lua: successfully applied
+ • storage-001:
+ • 000001_create_writes_space.lua: successfully applied
+ • 000002_create_writers_index.lua: successfully applied
+ • storage-002:
+ • 000001_create_writes_space.lua: successfully applied
+ • 000002_create_writers_index.lua: successfully applied
+
+The migrations are applied on all replica set leaders. Read-only replicas
+receive the changes from the corresponding replica set leaders.
+
+Check the migrations status with ``tt migrations status``:
+
+.. code-block:: console
+
+ $ tt migrations status http://app_user:config_pass@localhost:2379/myapp --tarantool-username=client --tarantool-password=secret
+ • migrations centralized storage scenarios:
+ • 000001_create_writes_space.lua
+ • 000002_create_writers_index.lua
+ • migrations apply status on Tarantool cluster:
+ • router-001:
+ • 000001_create_writes_space.lua: APPLIED
+ • 000002_create_writers_index.lua: APPLIED
+ • storage-001:
+ • 000001_create_writes_space.lua: APPLIED
+ • 000002_create_writers_index.lua: APPLIED
+ • storage-002:
+ • 000001_create_writes_space.lua: APPLIED
+ • 000002_create_writers_index.lua: APPLIED
+
+To make sure that the space and indexes are created in the cluster, connect to the router
+instance and retrieve the space information:
+
+.. code-block:: console
+
+   $ tt connect myapp:router-001
+
+.. code-block:: tarantoolsession
+
+ myapp:router-001-a> require('crud').schema('writers')
+ ---
+ - indexes:
+ 0:
+ unique: true
+ parts:
+ - fieldno: 1
+ type: number
+ exclude_null: false
+ is_nullable: false
+ id: 0
+ type: TREE
+ name: primary
+ 2:
+ unique: true
+ parts:
+ - fieldno: 4
+ type: number
+ exclude_null: false
+ is_nullable: false
+ id: 2
+ type: TREE
+ name: age
+ format: [{'name': 'id', 'type': 'number'}, {'type': 'number', 'name': 'bucket_id',
+ 'is_nullable': true}, {'name': 'name', 'type': 'string'}, {'name': 'age', 'type': 'number'}]
+ ...
diff --git a/doc/platform/ddl_dml/migrations/centralized_migrations_tt.rst b/doc/platform/ddl_dml/migrations/centralized_migrations_tt.rst
new file mode 100644
index 000000000..9de6c83b6
--- /dev/null
+++ b/doc/platform/ddl_dml/migrations/centralized_migrations_tt.rst
@@ -0,0 +1,22 @@
+.. _centralized_migrations_tt:
+
+Centralized migrations with tt
+==============================
+
+**Example on GitHub:** `migrations `_
+
+In this section, you learn to use the centralized migration management mechanism
+implemented in the Enterprise Edition of the :ref:`tt ` utility.
+
+See also:
+
+- :ref:`tt migrations reference ` for the full list of command-line options.
+- :ref:`tcm_cluster_migrations` to learn about managing migrations from |tcm_full_name|.
+
+.. toctree::
+ :maxdepth: 1
+
+ basic_migrations_tt
+ upgrade_migrations_tt
+ extend_migrations_tt
+ troubleshoot_migrations_tt
\ No newline at end of file
diff --git a/doc/platform/ddl_dml/migrations/extend_migrations_tt.rst b/doc/platform/ddl_dml/migrations/extend_migrations_tt.rst
new file mode 100644
index 000000000..4a2a9f2ed
--- /dev/null
+++ b/doc/platform/ddl_dml/migrations/extend_migrations_tt.rst
@@ -0,0 +1,115 @@
+.. _extend_migrations_tt:
+
+Extending the cluster
+=====================
+
+**Example on GitHub:** `migrations `_
+
+In this tutorial, you learn how to consistently define the data schema on newly
+added cluster instances using the centralized migration management mechanism.
+
+.. _extend_migrations_tt_prereq:
+
+Prerequisites
+-------------
+
+Before starting this tutorial, complete the :ref:`basic_migrations_tt` and
+:ref:`upgrade_migrations_tt` tutorials.
+
+.. _extend_migrations_tt_cluster:
+
+Adding a replica set
+--------------------
+
+Having all migrations in a centralized etcd storage, you can extend the cluster
+and consistently define the data schema on new instances on the fly.
+
+Add one more storage replica set to the cluster. To do this, edit the cluster files in ``instances.enabled/myapp``:
+
+- ``instances.yml``: add the lines below to the end.
+
+ .. literalinclude:: /code_snippets/snippets/migrations/instances.enabled/myapp/instances-3-storages.yml
+ :language: yaml
+ :start-at: storage-003-a:
+ :dedent:
+
+- ``source.yaml``: add the lines below to the end.
+
+ .. literalinclude:: /code_snippets/snippets/migrations/instances.enabled/myapp/source-3-storages.yaml
+ :language: yaml
+ :start-at: storage-003:
+ :dedent:
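+
+As a rough sketch, the added fragments look like the following. The actual
+lines are included from the example repository; the instance names follow the
+existing pattern, while the ports and other settings here are assumptions:
+
+.. code-block:: yaml
+
+    # instances.yml: instances of the new storage-003 replica set
+    storage-003-a:
+    storage-003-b:
+
+.. code-block:: yaml
+
+    # source.yaml: the new replica set in the storages group
+    storage-003:
+      leader: storage-003-a
+      instances:
+        storage-003-a:
+          iproto:
+            listen:
+            - uri: localhost:3306
+        storage-003-b:
+          iproto:
+            listen:
+            - uri: localhost:3307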
+
+Publish the new cluster configuration to etcd:
+
+.. code-block:: console
+
+ $ tt cluster publish "http://app_user:config_pass@localhost:2379/myapp/" source.yaml
+
+Run ``tt start`` to start up the new instances:
+
+.. code-block:: console
+
+ $ tt start myapp
+ • The instance myapp:router-001-a (PID = 61631) is already running.
+ • The instance myapp:storage-001-a (PID = 61632) is already running.
+ • The instance myapp:storage-001-b (PID = 61634) is already running.
+ • The instance myapp:storage-002-a (PID = 61639) is already running.
+ • The instance myapp:storage-002-b (PID = 61640) is already running.
+ • Starting an instance [myapp:storage-003-a]...
+ • Starting an instance [myapp:storage-003-b]...
+
+Now the cluster contains three storage replica sets.
+
+.. _extend_migrations_tt_apply:
+
+Applying migrations to the new replica set
+------------------------------------------
+
+The new replica set, ``storage-003``, has just started and has no data schema yet.
+Apply all stored migrations to the cluster to load the same data schema to the new replica set:
+
+.. code-block:: console
+
+ $ tt migrations apply http://app_user:config_pass@localhost:2379/myapp \
+ --tarantool-username=client --tarantool-password=secret
+ • router-001:
+ • 000001_create_writes_space.lua: skipped, already applied
+ • 000002_create_writers_index.lua: skipped, already applied
+ • 000003_alter_writers_space.lua: skipped, already applied
+ • storage-001:
+ • 000001_create_writes_space.lua: skipped, already applied
+ • 000002_create_writers_index.lua: skipped, already applied
+ • 000003_alter_writers_space.lua: skipped, already applied
+ • storage-002:
+ • 000001_create_writes_space.lua: skipped, already applied
+ • 000002_create_writers_index.lua: skipped, already applied
+ • 000003_alter_writers_space.lua: skipped, already applied
+ • storage-003:
+ • 000001_create_writes_space.lua: successfully applied
+ • 000002_create_writers_index.lua: successfully applied
+ • 000003_alter_writers_space.lua: successfully applied
+
+.. note::
+
+ You can apply migrations to a specific replica set using the ``--replicaset`` option:
+
+ .. code-block:: console
+
+ $ tt migrations apply http://app_user:config_pass@localhost:2379/myapp \
+        --tarantool-username=client --tarantool-password=secret \
+ --replicaset storage-003
+
+To make sure that the space exists on the new instances, connect to ``storage-003-a``
+and check ``box.space.writers``:
+
+.. code-block:: console
+
+ $ tt connect myapp:storage-003-a
+
+.. code-block:: tarantoolsession
+
+ myapp:storage-003-a> box.space.writers ~= nil
+ ---
+ - true
+ ...
diff --git a/doc/platform/ddl_dml/migrations.rst b/doc/platform/ddl_dml/migrations/index.rst
similarity index 100%
rename from doc/platform/ddl_dml/migrations.rst
rename to doc/platform/ddl_dml/migrations/index.rst
diff --git a/doc/platform/ddl_dml/space_upgrade.rst b/doc/platform/ddl_dml/migrations/space_upgrade.rst
similarity index 100%
rename from doc/platform/ddl_dml/space_upgrade.rst
rename to doc/platform/ddl_dml/migrations/space_upgrade.rst
diff --git a/doc/platform/ddl_dml/migrations/troubleshoot_migrations_tt.rst b/doc/platform/ddl_dml/migrations/troubleshoot_migrations_tt.rst
new file mode 100644
index 000000000..f4fe1137e
--- /dev/null
+++ b/doc/platform/ddl_dml/migrations/troubleshoot_migrations_tt.rst
@@ -0,0 +1,81 @@
+.. _centralized_migrations_tt_troubleshoot:
+
+Troubleshooting migrations
+--------------------------
+
+.. warning::
+
+ The options used for migration troubleshooting can cause migration inconsistency in the cluster.
+
+.. _centralized_migrations_tt_troubleshoot_publish:
+
+Incorrect migration published
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If an incorrect migration was published to etcd but wasn't applied yet,
+fix the migration file and publish it again with the ``--overwrite`` option:
+
+.. code-block:: console
+
+ $ tt migrations publish http://app_user:config_pass@localhost:2379/myapp \
+ 000001_create_space.lua --overwrite
+
+If the migration that needs a fix isn't the last in the lexicographical order,
+also add the ``--ignore-order-violation`` option:
+
+.. code-block:: console
+
+ $ tt migrations publish http://app_user:config_pass@localhost:2379/myapp \
+ 000001_create_space.lua --overwrite --ignore-order-violation
+
+If a migration was published by mistake and wasn't applied yet, you can delete it
+from etcd using ``tt migrations remove``:
+
+.. code-block:: console
+
+ $ tt migrations remove http://app_user:config_pass@localhost:2379/myapp \
+ --migration 000003_not_needed.lua
+
+.. _centralized_migrations_tt_troubleshoot_apply:
+
+Incorrect migration applied
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If the migration is already applied, publish the fixed version and apply it with
+the ``--force-reapply`` option:
+
+.. code-block:: console
+
+ $ tt migrations apply http://app_user:config_pass@localhost:2379/myapp \
+ --tarantool-username=client --tarantool-password=secret \
+ --force-reapply
+
+If execution of the incorrect migration version has failed, you may also need to add
+the ``--ignore-preceding-status`` option:
+
+.. code-block:: console
+
+ $ tt migrations apply http://app_user:config_pass@localhost:2379/myapp \
+ --tarantool-username=client --tarantool-password=secret \
+ --force-reapply --ignore-preceding-status
+
+.. _centralized_migrations_tt_troubleshoot_stop:
+
+Migration execution takes too long
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To interrupt migration execution on the cluster, use ``tt migrations stop``:
+
+.. code-block:: console
+
+ $ tt migrations stop http://app_user:config_pass@localhost:2379/myapp \
+ --tarantool-username=client --tarantool-password=secret
+
+To avoid such situations in the future, restrict the maximum migration execution time
+using the ``--execution-timeout`` option of ``tt migrations apply``:
+
+.. code-block:: console
+
+ $ tt migrations apply http://app_user:config_pass@localhost:2379/myapp \
+ --tarantool-username=client --tarantool-password=secret \
+ --execution-timeout=60
\ No newline at end of file
diff --git a/doc/platform/ddl_dml/migrations/upgrade_migrations_tt.rst b/doc/platform/ddl_dml/migrations/upgrade_migrations_tt.rst
new file mode 100644
index 000000000..c145ea7d5
--- /dev/null
+++ b/doc/platform/ddl_dml/migrations/upgrade_migrations_tt.rst
@@ -0,0 +1,140 @@
+.. _upgrade_migrations_tt:
+
+Complex migrations with space.upgrade()
+=======================================
+
+**Example on GitHub:** `migrations `_
+
+In this tutorial, you learn to write migrations that include data migration using
+the ``space.upgrade`` function.
+
+See also:
+
+- :ref:`tt migrations ` for the full list of command-line options.
+- :ref:`tcm_cluster_migrations` to learn about managing migrations from |tcm_full_name|.
+
+.. _upgrade_migrations_tt_prereq:
+
+Prerequisites
+-------------
+
+Before starting this tutorial, complete the :ref:`basic_migrations_tt` tutorial.
+
+.. _upgrade_migrations_tt_write:
+
+Writing a complex migration
+---------------------------
+
+Complex migrations require data migration along with schema migration. Insert some
+tuples into the space before proceeding to the next steps:
+
+.. code-block:: tarantoolsession
+
+ myapp:router-001-a> require('crud').insert_object_many('writers', {
+ {id = 1, name = 'Haruki Murakami', age = 75},
+ {id = 2, name = 'Douglas Adams', age = 49},
+ {id = 3, name = 'Eiji Mikage', age = 41},
+ }, {noreturn = true})
+
+The next migration changes the space format incompatibly: instead of a single ``name``
+field, the new format includes two fields: ``first_name`` and ``last_name``.
+To apply this migration, you need to change each tuple's structure while preserving
+the stored data. The :ref:`space.upgrade ` function helps with this task.
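+
+As a reference point, the old and the new formats can be written down as two Lua
+format tables (a sketch: the exact field set, including the ``bucket_id`` position,
+is assumed to match the space created in the previous tutorials):
+
+.. code-block:: lua
+
+    -- Old format: a single 'name' field.
+    local old_format = {
+        {name = 'id', type = 'number'},
+        {name = 'bucket_id', type = 'number', is_nullable = true},
+        {name = 'name', type = 'string'},
+        {name = 'age', type = 'number'},
+    }
+
+    -- New format: 'name' is split into 'first_name' and 'last_name'.
+    local new_format = {
+        {name = 'id', type = 'number'},
+        {name = 'bucket_id', type = 'number', is_nullable = true},
+        {name = 'first_name', type = 'string'},
+        {name = 'last_name', type = 'string'},
+        {name = 'age', type = 'number'},
+    }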
+
+Create a new file ``000003_alter_writers_space.lua`` in ``/migrations/scenario``.
+Prepare its initial structure the same way as in previous migrations:
+
+.. code-block:: lua
+
+ local function apply_scenario()
+ -- migration code
+ end
+ return {
+ apply = {
+ scenario = apply_scenario,
+ },
+ }
+
+Start the migration function with the new format description:
+
+.. literalinclude:: /code_snippets/snippets/migrations/migrations/scenario/000003_alter_writers_space.lua
+ :language: lua
+ :start-at: local function apply_scenario()
+ :end-at: box.space.writers.index.age:drop()
+ :dedent:
+
+.. note::
+
+ ``box.space.writers.index.age:drop()`` drops an existing index. This is done
+ because indexes rely on field numbers and may break during this format change.
+ If you need the ``age`` field indexed, recreate the index after applying the
+ new format.
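+
+   For example, recreating the index could look like this (a sketch: in the new
+   format, ``age`` is assumed to be the fifth field, after ``id``, ``bucket_id``,
+   ``first_name``, and ``last_name``):
+
+   .. code-block:: lua
+
+      -- Recreate the 'age' index against the new field position:
+      -- id=1, bucket_id=2, first_name=3, last_name=4, age=5.
+      box.space.writers:create_index('age', {
+          parts = {{field = 5, type = 'number'}},
+          unique = false,
+      })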
+
+Next, create a stored function that transforms tuples to fit the new format.
+In this case, the function extracts the first and the last name from the ``name`` field
+and returns a tuple of the new format:
+
+.. literalinclude:: /code_snippets/snippets/migrations/migrations/scenario/000003_alter_writers_space.lua
+ :language: lua
+ :start-at: box.schema.func.create
+ :end-before: local future = space:upgrade
+ :dedent:
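+
+The core of such a transformation can be prototyped in plain Lua before wrapping it
+into a stored function (a sketch: it assumes every ``name`` value consists of exactly
+two space-separated words):
+
+.. code-block:: lua
+
+    -- Input tuple: {id, bucket_id, name, age};
+    -- output tuple: {id, bucket_id, first_name, last_name, age}.
+    local function transform(tuple)
+        local first_name, last_name = string.match(tuple[3], '^(%S+)%s+(%S+)$')
+        return {tuple[1], tuple[2], first_name, last_name, tuple[4]}
+    end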
+
+Finally, call ``space:upgrade()`` with the new format and the transformation function
+as its arguments. Here is the complete migration code:
+
+.. literalinclude:: /code_snippets/snippets/migrations/migrations/scenario/000003_alter_writers_space.lua
+ :language: lua
+ :dedent:
+
+Learn more about ``space.upgrade()`` execution in :ref:`enterprise-space_upgrade`.
+
+.. _upgrade_migrations_tt_publish:
+
+Publishing the migration
+------------------------
+
+Publish the new migration to etcd. Migrations that already exist in the storage are skipped.
+
+.. code-block:: console
+
+ $ tt migrations publish http://app_user:config_pass@localhost:2379/myapp
+ • 000001_create_writes_space.lua: skipped, key "000001_create_writes_space.lua" already exists with the same content
+ • 000002_create_writers_index.lua: skipped, key "000002_create_writers_index.lua" already exists with the same content
+ • 000003_alter_writers_space.lua: successfully published to key "000003_alter_writers_space.lua"
+
+.. note::
+
+ You can also publish a single migration file by passing a path to it as an argument:
+
+ .. code-block:: console
+
+ $ tt migrations publish http://app_user:config_pass@localhost:2379/myapp \
+ migrations/scenario/000003_alter_writers_space.lua
+
+.. _upgrade_migrations_tt_apply:
+
+Applying the migration
+----------------------
+
+Apply the migrations:
+
+.. code-block:: console
+
+ $ tt migrations apply http://app_user:config_pass@localhost:2379/myapp \
+ --tarantool-username=client --tarantool-password=secret
+
+Connect to the router instance and check that the space and its tuples have the new format:
+
+.. code-block:: console
+
+    $ tt connect myapp:router-001-a
+
+.. code-block:: tarantoolsession
+
+ myapp:router-001-a> require('crud').get('writers', 2)
+ ---
+ - rows: []
+ metadata: [{'name': 'id', 'type': 'number'}, {'name': 'bucket_id', 'type': 'number'},
+ {'name': 'first_name', 'type': 'string'}, {'name': 'last_name', 'type': 'string'},
+ {'name': 'age', 'type': 'number'}]
+ - null
+ ...
diff --git a/doc/platform/ddl_dml/troubleshooting_migrations_tt.rst b/doc/platform/ddl_dml/troubleshooting_migrations_tt.rst
deleted file mode 100644
index ee29095a0..000000000
--- a/doc/platform/ddl_dml/troubleshooting_migrations_tt.rst
+++ /dev/null
@@ -1,51 +0,0 @@
-.. _troubleshooting_migrations_tt:
-
-Troubleshooting migrations
-==========================
-
-
-**Example on GitHub:** TBD
-
-The :ref:``tt `` utility provides
-
-Description
-
-Prerequisites
--------------
-
-- EE
-- crud
-- etcd
-
-Preparing a cluster
--------------------
-
-Writing migrations
-------------------
-
-Publishing migrations
----------------------
-
-Applying migrations
--------------------
-
-Adding new migrations
----------------------
-
-Extending the cluster
----------------------
-
-
-how to write migration files? tt-migrtions.helpers
-
-Migration workflow
-
-prepare files
-publish to etcd
-apply
-check status
-
-Handling errors
-
-stop
-rollback - show examples with force apply?
From 0d20803173550a9d224872847a24b29339596852 Mon Sep 17 00:00:00 2001
From: Pavel Semyonov
Date: Fri, 27 Sep 2024 20:15:56 +0700
Subject: [PATCH 07/16] Fix
---
.../snippets/migrations/README.md | 2 +-
.../migrations/centralized_migrations_tt.rst | 13 ++++---
.../migrations/extend_migrations_tt.rst | 38 +++++++------------
doc/platform/ddl_dml/migrations/index.rst | 4 +-
.../migrations/upgrade_migrations_tt.rst | 28 ++++++++------
5 files changed, 40 insertions(+), 45 deletions(-)
diff --git a/doc/code_snippets/snippets/migrations/README.md b/doc/code_snippets/snippets/migrations/README.md
index cc27a6627..1f2f58b29 100644
--- a/doc/code_snippets/snippets/migrations/README.md
+++ b/doc/code_snippets/snippets/migrations/README.md
@@ -1,4 +1,4 @@
-# Centralized configuration storages
+# Centralized migrations with tt
Sample applications demonstrating how to use the centralized migration mechanism
for Tarantool EE clusters via the tt utility. Learn more at [Centralized migrations with tt](https://www.tarantool.io/en/doc/latest/platform/ddl_dml/migrations/).
diff --git a/doc/platform/ddl_dml/migrations/centralized_migrations_tt.rst b/doc/platform/ddl_dml/migrations/centralized_migrations_tt.rst
index 9de6c83b6..db59261a7 100644
--- a/doc/platform/ddl_dml/migrations/centralized_migrations_tt.rst
+++ b/doc/platform/ddl_dml/migrations/centralized_migrations_tt.rst
@@ -8,10 +8,7 @@ Centralized migrations with tt
In this section, you learn to use the centralized migration management mechanism
implemented in the Enterprise Edition of the :ref:`tt ` utility.
-See also:
-
-- :ref:`tt migrations reference ` for the full list of command-line options.
-- :ref:`tcm_cluster_migrations` to learn about managing migrations from |tcm_full_name|.
+The section includes the following tutorials:
.. toctree::
:maxdepth: 1
@@ -19,4 +16,10 @@ See also:
basic_migrations_tt
upgrade_migrations_tt
extend_migrations_tt
- troubleshoot_migrations_tt
\ No newline at end of file
+ troubleshoot_migrations_tt
+
+
+See also:
+
+- :ref:`tt migrations reference ` for the full list of command-line options.
+- :ref:`tcm_cluster_migrations` to learn about managing migrations from |tcm_full_name|.
\ No newline at end of file
diff --git a/doc/platform/ddl_dml/migrations/extend_migrations_tt.rst b/doc/platform/ddl_dml/migrations/extend_migrations_tt.rst
index 4a2a9f2ed..369ff449a 100644
--- a/doc/platform/ddl_dml/migrations/extend_migrations_tt.rst
+++ b/doc/platform/ddl_dml/migrations/extend_migrations_tt.rst
@@ -1,7 +1,7 @@
.. _extend_migrations_tt:
-Centralized migrations with tt
-==============================
+Extending the cluster
+=====================
**Example on GitHub:** `migrations `_
@@ -13,7 +13,9 @@ added cluster instances using the centralized migration management mechanism.
Prerequisites
-------------
-Before starting this tutorial, complete the :ref:`_basic_migrations_tt` and :ref:`upgrade_migrations_tt`
+Before starting this tutorial, complete the :ref:`basic_migrations_tt` and :ref:`upgrade_migrations_tt`.
+As a result, you have a sharded Tarantool EE cluster that uses an etcd-based configuration
+storage. The cluster has a space with two indexes.
.. _extend_migrations_tt_cluster:
@@ -71,34 +73,20 @@ Apply all stored migrations to the cluster to load the same data schema to the n
.. code-block:: console
- $ tt migrations apply http://app_user:config_pass@localhost:2379/myapp \
- --tarantool-username=client --tarantool-password=secret
- • router-001:
- • 000001_create_writes_space.lua: skipped, already applied
- • 000002_create_writers_index.lua: skipped, already applied
- • 000003_alter_writers_space.lua: skipped, already applied
- • storage-001:
- • 000001_create_writes_space.lua: skipped, already applied
- • 000002_create_writers_index.lua: skipped, already applied
- • 000003_alter_writers_space.lua: skipped, already applied
- • storage-002:
- • 000001_create_writes_space.lua: skipped, already applied
- • 000002_create_writers_index.lua: skipped, already applied
- • 000003_alter_writers_space.lua: skipped, already applied
- • storage-003:
- • 000001_create_writes_space.lua: successfully applied
- • 000002_create_writers_index.lua: successfully applied
- • 000003_alter_writers_space.lua: successfully applied
+ $ tt migrations apply http://app_user:config_pass@localhost:2379/myapp \
+       --tarantool-username=client --tarantool-password=secret \
+       --replicaset storage-003
.. note::
- You can apply migrations to a specific replica set using the ``--replicaset`` option:
+ You can also apply migrations without specifying the replica set. All published
+ migrations are already applied on other replica sets, so ``tt`` skips the
+ operation on them.
.. code-block:: console
- $ tt migrations apply http://app_user:config_pass@localhost:2379/myapp \
- --tarantool-username=client --tarantool-password=secret
- --replicaset storage-003
+ $ tt migrations apply http://app_user:config_pass@localhost:2379/myapp \
+ --tarantool-username=client --tarantool-password=secret
To make sure that the space exists on the new instances, connect to ``storage-003-a``
and check ``box.space.writers``:
diff --git a/doc/platform/ddl_dml/migrations/index.rst b/doc/platform/ddl_dml/migrations/index.rst
index b3a36019e..1ea5186a5 100644
--- a/doc/platform/ddl_dml/migrations/index.rst
+++ b/doc/platform/ddl_dml/migrations/index.rst
@@ -165,6 +165,6 @@ troubleshooting migration issues. :ref:`troubleshooting_migrations_tt`.
.. toctree::
:maxdepth: 1
- space_upgrade
centralized_migrations_tt
- troubleshooting_migrations_tt
+ space_upgrade
+
diff --git a/doc/platform/ddl_dml/migrations/upgrade_migrations_tt.rst b/doc/platform/ddl_dml/migrations/upgrade_migrations_tt.rst
index c145ea7d5..7404558ae 100644
--- a/doc/platform/ddl_dml/migrations/upgrade_migrations_tt.rst
+++ b/doc/platform/ddl_dml/migrations/upgrade_migrations_tt.rst
@@ -6,7 +6,7 @@ Complex migrations with space.upgrade()
**Example on GitHub:** `migrations `_
In this tutorial, you learn to write migrations that include data migration using
-the ``space.upgrade`` function.
+the ``space.upgrade()`` function.
See also:
@@ -18,15 +18,19 @@ See also:
Prerequisites
-------------
-Before starting this tutorial, complete the :ref:`_basic_migrations_tt`.
+Before starting this tutorial, complete the :ref:`basic_migrations_tt`.
+As a result, you have a sharded Tarantool EE cluster that uses an etcd-based configuration
+storage. The cluster has a space with two indexes.
.. _upgrade_migrations_tt_write:
Writing a complex migration
---------------------------
-Complex migrations require data migration along with schema migration. Insert some
-tuples into the space before proceeding to the next steps:
+Complex migrations require data migration along with schema migration. Connect to
+the router instance and insert some tuples into the space before proceeding to the next steps.
+
+.. code-block:: console
+
+    $ tt connect myapp:router-001-a
.. code-block:: tarantoolsession
@@ -98,26 +102,26 @@ Publish the new migration to etcd. Migrations that already exist in the storage
.. code-block:: console
- $ tt migrations publish http://app_user:config_pass@localhost:2379/myapp
- • 000001_create_writes_space.lua: skipped, key "000001_create_writes_space.lua" already exists with the same content
- • 000002_create_writers_index.lua: skipped, key "000002_create_writers_index.lua" already exists with the same content
- • 000003_alter_writers_space.lua: successfully published to key "000003_alter_writers_space.lua"
+ $ tt migrations publish http://app_user:config_pass@localhost:2379/myapp \
+ migrations/scenario/000003_alter_writers_space.lua
.. note::
- You can also publish a single migration file by passing a path to it as an argument:
+ You can also publish all migrations from the default location ``/migrations/scenario``.
+ All other migrations stored in this directory are already published, so ``tt``
+ skips them.
.. code-block:: console
- $ tt migrations publish http://app_user:config_pass@localhost:2379/myapp \
- migrations/scenario/000003_alter_writers_space.lua
+ $ tt migrations publish http://app_user:config_pass@localhost:2379/myapp
+
.. _upgrade_migrations_tt_apply:
Applying the migration
----------------------
-Apply the migrations:
+Apply the published migrations:
.. code-block:: console
From ab1867c5077d39de6f9d39dfa23f097d68d8ab1c Mon Sep 17 00:00:00 2001
From: Pavel Semyonov
Date: Mon, 30 Sep 2024 15:45:01 +0700
Subject: [PATCH 08/16] Fix
---
.../migrations/basic_migrations_tt.rst | 16 ++-
.../migrations/extend_migrations_tt.rst | 6 +-
doc/platform/ddl_dml/migrations/index.rst | 27 +++---
.../migrations/troubleshoot_migrations_tt.rst | 20 ++--
.../migrations/upgrade_migrations_tt.rst | 30 +++---
doc/tooling/tt_cli/migrations.rst | 97 ++++++++++---------
6 files changed, 106 insertions(+), 90 deletions(-)
diff --git a/doc/platform/ddl_dml/migrations/basic_migrations_tt.rst b/doc/platform/ddl_dml/migrations/basic_migrations_tt.rst
index 96aac0979..9e45d74fa 100644
--- a/doc/platform/ddl_dml/migrations/basic_migrations_tt.rst
+++ b/doc/platform/ddl_dml/migrations/basic_migrations_tt.rst
@@ -26,7 +26,7 @@ Preparing a cluster
The centralized migration mechanism works with Tarantool EE clusters that:
- use etcd as a centralized configuration storage
-- use the `CRUD `__ module for data sharding
+- use the `CRUD `__ module for data distribution
.. _basic_migrations_tt_cluster_etcd:
@@ -207,7 +207,7 @@ To publish migrations to the etcd configuration storage, run ``tt migrations pub
.. code-block:: console
- $ tt migrations publish http://app_user:config_pass@localhost:2379/myapp
+ $ tt migrations publish "http://app_user:config_pass@localhost:2379/myapp"
• 000001_create_writes_space.lua: successfully published to key "000001_create_writes_space.lua"
• 000002_create_writers_index.lua: successfully published to key "000002_create_writers_index.lua"
@@ -221,7 +221,7 @@ a cluster user's credentials:
.. code-block:: console
- $ tt migrations apply http://app_user:config_pass@localhost:2379/myapp \
+ $ tt migrations apply "http://app_user:config_pass@localhost:2379/myapp" \
--tarantool-username=client --tarantool-password=secret
.. important::
@@ -249,7 +249,8 @@ Check the migrations status with ``tt migration status``:
.. code-block:: console
- $ tt migrations status http://app_user:config_pass@localhost:2379/myapp --tarantool-username=client --tarantool-password=secret
+ $ tt migrations status "http://app_user:config_pass@localhost:2379/myapp" \
+ --tarantool-username=client --tarantool-password=secret
• migrations centralized storage scenarios:
• 000001_create_writes_space.lua
• 000002_create_writers_index.lua
@@ -297,3 +298,10 @@ instance and retrieve the space information:
format: [{'name': 'id', 'type': 'number'}, {'type': 'number', 'name': 'bucket_id',
'is_nullable': true}, {'name': 'name', 'type': 'string'}, {'name': 'age', 'type': 'number'}]
...
+
+.. _basic_migrations_tt_next:
+
+Next steps
+----------
+
+Learn to write and perform data migration in :ref:`upgrade_migrations_tt`.
\ No newline at end of file
diff --git a/doc/platform/ddl_dml/migrations/extend_migrations_tt.rst b/doc/platform/ddl_dml/migrations/extend_migrations_tt.rst
index 369ff449a..d82d9e814 100644
--- a/doc/platform/ddl_dml/migrations/extend_migrations_tt.rst
+++ b/doc/platform/ddl_dml/migrations/extend_migrations_tt.rst
@@ -73,9 +73,9 @@ Apply all stored migrations to the cluster to load the same data schema to the n
.. code-block:: console
- $ tt migrations apply http://app_user:config_pass@localhost:2379/myapp \
+ $ tt migrations apply "http://app_user:config_pass@localhost:2379/myapp" \
--tarantool-username=client --tarantool-password=secret
- --replicaset storage-003
+ --replicaset=storage-003
.. note::
@@ -85,7 +85,7 @@ Apply all stored migrations to the cluster to load the same data schema to the n
.. code-block:: console
- $ tt migrations apply http://app_user:config_pass@localhost:2379/myapp \
+ $ tt migrations apply "http://app_user:config_pass@localhost:2379/myapp" \
--tarantool-username=client --tarantool-password=secret
To make sure that the space exists on the new instances, connect to ``storage-003-a``
diff --git a/doc/platform/ddl_dml/migrations/index.rst b/doc/platform/ddl_dml/migrations/index.rst
index 1ea5186a5..00b16770a 100644
--- a/doc/platform/ddl_dml/migrations/index.rst
+++ b/doc/platform/ddl_dml/migrations/index.rst
@@ -4,28 +4,26 @@ Migrations
==========
**Migration** refers to any change in a data schema: adding or removing a field,
-creating or dropping an index, changing a field format, an so on. Space creation
-is also a schema migration. You can use this fact to track the evolution of your
-data schema since its initial state.
+creating or dropping an index, changing a field format, and so on. Space creation
+is also a migration. Using migrations, you can track the evolution of your
+data schema since its initial state. In Tarantool, migrations are presented as Lua
+code that alters the data schema using the built-in Lua API.
There are two types of migrations:
- *simple migrations* don't require additional actions on existing data
- *complex migrations* include both schema and data changes
-In Tarantool, migrations are presented as Lua code that alters the data schema
-using the built-in Lua API.
-
.. _migrations_simple:
Simple migrations
-----------------
-In Tarantool, there are two types of schema migration that do not require data migration:
+There are two types of schema migration that do not require data migration:
-- Creating an index. A new index can be created at any time. To learn more about
+- *Creating an index*. A new index can be created at any time. To learn more about
index creation, see :ref:`concepts-data_model_indexes` and the :ref:`box_space-create_index` reference.
-- Adding a field to the end of a space. To add a field, update the space format so
+- *Adding a field to the end of a space*. To add a field, update the space format so
that it includes all its fields and also the new field. For example:
.. code-block:: lua
@@ -79,17 +77,17 @@ its availability requirements.
Tarantool offers the following features that make migrations easier and safer:
-- Transaction mechanism. It is useful when writing a migration,
+- *Transaction mechanism*. It is useful when writing a migration,
because it allows you to work with the data atomically. But before using
the transaction mechanism, you should explore its limitations.
For details, see the section about :ref:`transactions `.
-- ``space:upgrade()`` function (EE only). With the help of ``space:upgrade()``,
+- ``space:upgrade()`` *function* (EE only). With the help of ``space:upgrade()``,
you can enable compression and migrate, including already created tuples.
For details, check the :ref:`Upgrading space schema ` section.
-- Centralized migration management mechanism (EE only). Implemented
- in the Enterprise version of the :ref:`tt ` utility and :ref:`tcm`,
+- *Centralized migration management mechanism* (EE only). Implemented
+ in the Enterprise version of the :ref:`tt ` utility and in :ref:`tcm`,
this mechanism enables migration execution and tracking in the replication
clusters. For details, see :ref:`migrations_centralized`.
@@ -159,9 +157,6 @@ To learn how to manage migrations in Tarantool EE clusters from the command line
see :ref:`centralized_migrations_tt`. To learn how to use the mechanism from the |tcm|
web interface, see the :ref:`tcm_migrations` |tcm| documentation page.
-The ``tt`` implementation of the mechanism additionally includes commands for
-troubleshooting migration issues. :ref:`troubleshooting_migrations_tt`.
-
.. toctree::
:maxdepth: 1
diff --git a/doc/platform/ddl_dml/migrations/troubleshoot_migrations_tt.rst b/doc/platform/ddl_dml/migrations/troubleshoot_migrations_tt.rst
index f4fe1137e..aacf031bf 100644
--- a/doc/platform/ddl_dml/migrations/troubleshoot_migrations_tt.rst
+++ b/doc/platform/ddl_dml/migrations/troubleshoot_migrations_tt.rst
@@ -3,9 +3,15 @@
Troubleshooting migrations
--------------------------
+The centralized migrations mechanism allows troubleshooting migration issues using
+dedicated ``tt migrations`` options. When troubleshooting migrations, remember that
+any unfinished or failed migration can bring the data schema into an inconsistent state.
+Additional steps may be needed to fix it.
+
.. warning::
- The options used for migration troubleshooting can cause migration inconsistency in the cluster.
+ The options used for migration troubleshooting can cause migration inconsistency
+ in the cluster. Use them only for local development and testing purposes.
.. _centralized_migrations_tt_troubleshoot_publish:
@@ -17,7 +23,7 @@ fix the migration file and publish it again with the ``--overwrite`` option:
.. code-block:: console
- $ tt migrations publish http://app_user:config_pass@localhost:2379/myapp \
+ $ tt migrations publish "http://app_user:config_pass@localhost:2379/myapp" \
000001_create_space.lua --overwrite
If the migration that needs a fix isn't the last in the lexicographical order,
@@ -25,7 +31,7 @@ add also ``--ignore-order-violation``:
.. code-block:: console
- $ tt migrations publish http://app_user:config_pass@localhost:2379/myapp \
+ $ tt migrations publish "http://app_user:config_pass@localhost:2379/myapp" \
000001_create_space.lua --overwrite --ignore-order-violation
If a migration was published by mistake and wasn't applied yet, you can delete it
@@ -33,7 +39,7 @@ from etcd using ``tt migrations remove``:
.. code-block:: console
- $ tt migrations remove http://app_user:config_pass@localhost:2379/myapp \
+ $ tt migrations remove "http://app_user:config_pass@localhost:2379/myapp" \
--migration 000003_not_needed.lua
.. _centralized_migrations_tt_troubleshoot_apply:
@@ -46,7 +52,7 @@ the ``--force-reapply`` option:
.. code-block:: console
- $ tt migrations apply http://app_user:config_pass@localhost:2379/myapp \
+ $ tt migrations apply "http://app_user:config_pass@localhost:2379/myapp" \
--tarantool-username=client --tarantool-password=secret \
--force-reapply
@@ -68,7 +74,7 @@ To interrupt migration execution on the cluster, use ``tt migrations stop``:
.. code-block:: console
- $ tt migrations stop http://app_user:config_pass@localhost:2379/myapp \
+ $ tt migrations stop "http://app_user:config_pass@localhost:2379/myapp" \
--tarantool-username=client --tarantool-password=secret
To avoid such situations in the future, restrict the maximum migration execution time
@@ -76,6 +82,6 @@ using the ``--executions-timeout`` option of ``tt migrations apply``:
.. code-block:: console
- $ tt migrations apply http://app_user:config_pass@localhost:2379/myapp \
+ $ tt migrations apply "http://app_user:config_pass@localhost:2379/myapp" \
--tarantool-username=client --tarantool-password=secret \
--execution-timeout=60
\ No newline at end of file
diff --git a/doc/platform/ddl_dml/migrations/upgrade_migrations_tt.rst b/doc/platform/ddl_dml/migrations/upgrade_migrations_tt.rst
index 7404558ae..723474998 100644
--- a/doc/platform/ddl_dml/migrations/upgrade_migrations_tt.rst
+++ b/doc/platform/ddl_dml/migrations/upgrade_migrations_tt.rst
@@ -1,18 +1,13 @@
.. _upgrade_migrations_tt:
-Complex migrations with space.upgrade()
-=======================================
+Data migrations with space.upgrade()
+====================================
**Example on GitHub:** `migrations `_
In this tutorial, you learn to write migrations that include data migration using
the ``space.upgrade()`` function.
-See also:
-
-- :ref:`tt migrations ` for the full list of command-line options.
-- :ref:`tcm_cluster_migrations` to learn about managing migrations from |tcm_full_name|.
-
.. _upgrade_migrations_tt:
Prerequisites
@@ -75,7 +70,7 @@ Start the migration function with the new format description:
new format.
Next, create a stored function that transforms tuples to fit the new format.
-In this case, the functions extracts the first and the last name from the ``name`` field
+In this case, the function extracts the first and the last name from the ``name`` field
and returns a tuple of the new format:
.. literalinclude:: /code_snippets/snippets/migrations/migrations/scenario/000003_alter_writers_space.lua
@@ -91,18 +86,18 @@ as its arguments. Here is the complete migration code:
:language: lua
:dedent:
-Learn more about ``space.upgrade()`` execution in :ref:`enterprise-space_upgrade`.
+Learn more about ``space.upgrade()`` in :ref:`enterprise-space_upgrade`.
.. _upgrade_migrations_tt_publish:
Publishing the migration
------------------------
-Publish the new migration to etcd. Migrations that already exist in the storage are skipped.
+Publish the new migration to etcd.
.. code-block:: console
- $ tt migrations publish http://app_user:config_pass@localhost:2379/myapp \
+ $ tt migrations publish "http://app_user:config_pass@localhost:2379/myapp" \
migrations/scenario/000003_alter_writers_space.lua
.. note::
@@ -113,7 +108,7 @@ Publish the new migration to etcd. Migrations that already exist in the storage
.. code-block:: console
- $ tt migrations publish http://app_user:config_pass@localhost:2379/myapp
+ $ tt migrations publish "http://app_user:config_pass@localhost:2379/myapp"
.. _upgrade_migrations_tt_apply:
@@ -125,7 +120,7 @@ Apply the published migrations:
.. code-block:: console
- $ tt migrations apply http://app_user:config_pass@localhost:2379/myapp \
+ $ tt migrations apply "http://app_user:config_pass@localhost:2379/myapp" \
--tarantool-username=client --tarantool-password=secret
Connect to the router instance and check that the space and its tuples have the new format:
@@ -142,3 +137,12 @@ Connect to the router instance and check that the space and its tuples have the
{'name': 'age', 'type': 'number'}]
- null
...
+
+
+.. _upgrade_migrations_tt_next:
+
+Next steps
+----------
+
+Learn to use migrations for data schema definition on new instances added to the cluster
+in :ref:`extend_migrations_tt`.
\ No newline at end of file
diff --git a/doc/tooling/tt_cli/migrations.rst b/doc/tooling/tt_cli/migrations.rst
index 2b8d2c7d4..66c693346 100644
--- a/doc/tooling/tt_cli/migrations.rst
+++ b/doc/tooling/tt_cli/migrations.rst
@@ -44,14 +44,14 @@ configuration storage on all its read-write instances (replica set leaders).
.. code-block:: console
- tt migrations apply https://user:pass@localhost:2379/myapp \
+ $ tt migrations apply "https://user:pass@localhost:2379/myapp" \
-tarantool-username=admin --tarantool-password=pass
To apply a single published migration, pass its name in the ``--migration`` option:
.. code-block:: console
- tt migrations apply https://user:pass@localhost:2379/myapp \
+ $ tt migrations apply "https://user:pass@localhost:2379/myapp" \
--tarantool-username=admin --tarantool-password=pass \
--migration=000001_create_space.lua
@@ -59,16 +59,18 @@ To apply migrations on a single replica set, specify the ``replicaset`` option:
.. code-block:: console
- tt migrations apply https://user:pass@localhost:2379/myapp \
+ $ tt migrations apply "https://user:pass@localhost:2379/myapp" \
--tarantool-username=admin --tarantool-password=pass \
--replicaset=storage-001
---order violation
+The command also provides options for migration troubleshooting: ``--ignore-order-violation``,
+``--force-reapply``, and ``--ignore-preceding-status``. Learn to use them in
+:ref:`centralized_migrations_tt_troubleshoot`.
+.. warning::
-?? diff --force-reapply --ignore-preceding-status
-
-warning about dangerous options
+ The use of migration troubleshooting options may lead to migration inconsistency
+ in the cluster. Use them only for local development and testing purposes.
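For illustration, a troubleshooting run could look like this sketch (the URI, credentials, and migration name are placeholders, not values from a real cluster; run such commands only against a development or test setup):

```shell
# Hypothetical example: re-apply a corrected migration out of order
# on a dev cluster, skipping the order and status checks.
tt migrations apply "https://user:pass@localhost:2379/myapp" \
    --tarantool-username=admin --tarantool-password=pass \
    --migration=000001_create_space.lua \
    --force-reapply --ignore-order-violation
```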
.. _tt-migrations-publish:
@@ -87,26 +89,26 @@ directory.
.. code-block:: console
- $ tt migrations publish https://user:pass@localhost:2379/myapp
+ $ tt migrations publish "https://user:pass@localhost:2379/myapp"
To select another directory with migration files, provide a path to it as the command
argument:
.. code-block:: console
- $ tt migrations publish https://user:pass@localhost:2379/myapp my_migrations
+ $ tt migrations publish "https://user:pass@localhost:2379/myapp my_migrations"
To publish a single migration from a file, use its name or path as the command argument:
.. code-block:: console
- $ tt migrations publish https://user:pass@localhost:2379/myapp migrations/000001_create_space.lua
+ $ tt migrations publish "https://user:pass@localhost:2379/myapp" migrations/000001_create_space.lua
Optionally, you can provide a key to use as a migration identifier instead of the filename:
.. code-block:: console
- $ tt migrations publish https://user:pass@localhost:2379/myapp file.lua \
+ $ tt migrations publish "https://user:pass@localhost:2379/myapp" file.lua \
--key=000001_create_space.lua
When publishing migrations, ``tt`` performs checks for:
@@ -123,7 +125,7 @@ When publishing migrations, ``tt`` performs checks for:
.. warning::
Using the options that ignore checks when publishing migrations may cause
- migration inconsistency.
+ migration inconsistency in the etcd storage.
.. _tt-migrations-remove:
@@ -142,16 +144,16 @@ To remove all migrations from a specified centralized storage:
.. code-block:: console
- tt migrations remove https://user:pass@localhost:2379/myapp \
- --tarantool-username=admin --tarantool-password=pass
+ $ tt migrations remove "https://user:pass@localhost:2379/myapp" \
+ --tarantool-username=admin --tarantool-password=pass
To remove a specific migration, pass its name in the ``--migration`` option:
.. code-block:: console
- tt migrations remove https://user:pass@localhost:2379/myapp \
- --tarantool-username=admin --tarantool-password=pass \
- --migration=000001_create_writers_space.lua
+ $ tt migrations remove "https://user:pass@localhost:2379/myapp" \
+ --tarantool-username=admin --tarantool-password=pass \
+ --migration=000001_create_writers_space.lua
Before removing migrations, the command checks their :ref:`status `
on the cluster. To ignore the status and remove migrations anyway, add the
@@ -159,31 +161,30 @@ on the cluster. To ignore the status and remove migrations anyway, add the
.. code-block:: console
- tt migrations remove https://user:pass@localhost:2379/myapp --force-remove-on=config-storage
+ $ tt migrations remove "https://user:pass@localhost:2379/myapp" \
+ --force-remove-on=config-storage
.. note::
- In this case, cluster credentials are not required
+ In this case, cluster credentials are not required.
To remove migration execution information from the cluster (clear the migration status),
use the ``--force-remove-on=cluster`` option:
.. code-block:: console
- tt migrations remove https://user:pass@localhost:2379/myapp \
- --tarantool-username=admin --tarantool-password=pass \
- --force-remove-on=cluster
+ $ tt migrations remove "https://user:pass@localhost:2379/myapp" \
+ --tarantool-username=admin --tarantool-password=pass \
+ --force-remove-on=cluster
To clear all migration information from the centralized storage and cluster,
use the ``--force-remove-on=all`` option:
.. code-block:: console
- tt migrations remove https://user:pass@localhost:2379/myapp \
- --tarantool-username=admin --tarantool-password=pass \
- --force-remove-on=all
-
-?? dangers/warnings?
+ $ tt migrations remove "https://user:pass@localhost:2379/myapp" \
+ --tarantool-username=admin --tarantool-password=pass \
+ --force-remove-on=all
.. _tt-migrations-status:
@@ -208,8 +209,8 @@ their execution on the cluster, run:
.. code-block:: console
- tt migrations status https://user:pass@localhost:2379/myapp \
- --tarantool-username=admin --tarantool-password=pass
+ $ tt migrations status "https://user:pass@localhost:2379/myapp" \
+ --tarantool-username=admin --tarantool-password=pass
If the cluster uses SSL encryption, add SSL options. Learn more in :ref:`Authentication `.
@@ -218,7 +219,7 @@ migrations or replica sets:
.. code-block:: console
- tt migrations status https://user:pass@localhost:2379/myapp \
+ $ tt migrations status "https://user:pass@localhost:2379/myapp" \
--tarantool-username=admin --tarantool-password=pass \
--replicaset=storage-001 --migration=000001_create_writers_space.lua
@@ -233,9 +234,9 @@ To find out the results of a migration execution on a specific replica set in th
.. code-block:: console
- tt migrations status https://user:pass@localhost:2379/myapp \
- --tarantool-username=admin --tarantool-password=pass \
- --replicaset=storage-001 --display-mode=cluster
+ $ tt migrations status "https://user:pass@localhost:2379/myapp" \
+ --tarantool-username=admin --tarantool-password=pass \
+ --replicaset=storage-001 --display-mode=cluster
.. _tt-migrations-stop:
@@ -257,12 +258,12 @@ To stop execution of migrations currently running in the cluster:
.. code-block:: console
- $ tt migrations stop https://user:pass@localhost:2379/myapp \
- --tarantool-username=admin --tarantool-password=secret-cluster-cookie
+ $ tt migrations stop "https://user:pass@localhost:2379/myapp" \
+ --tarantool-username=admin --tarantool-password=pass
+
+Q: can any migrations in a batch complete successfully? If I apply 2 migrations and call
+`tt migrations stop` after the first one is finished without errors, what are migration statuses?
-all migration in the batch?
-can any of them complete?
-can it cause inconsistency?
.. _tt-migrations-auth:
@@ -275,7 +276,7 @@ needs credentials to access this storage. There are two ways to pass etcd creden
- command options ``--config-storage-username`` and ``--config-storage-password``
- the etcd URI, for example, ``https://user:pass@localhost:2379/myapp``
-?priority
+Q: which way has a higher priority?
For commands that connect to the cluster (that is, all except ``publish``), Tarantool
credentials are also required. They are passed in the ``--tarantool-username`` and
@@ -294,7 +295,8 @@ Options
**Applicable to:** ``apply``
- migrations fiber lock acquire timeout (in seconds). Fiber lock is used to prevent concurrent migrations run (default 60)
+ Migrations fiber lock acquire timeout in seconds. Default: 60.
+ The fiber lock is used to prevent concurrent migration runs.
.. option:: --config-storage-password STRING
@@ -337,7 +339,7 @@ Options
.. warning::
- Using this option may result in cluster migrations inconsistency.
+ Using this option may lead to migrations inconsistency in the cluster.
.. option:: --force-remove-on STRING
@@ -351,7 +353,7 @@ Options
.. warning::
- Using this option may result in cluster migrations inconsistency.
+ Using this option may lead to migrations inconsistency in the cluster.
.. option:: --ignore-order-violation
@@ -361,7 +363,7 @@ Options
.. warning::
- Using this option may result in cluster migrations inconsistency.
+ Using this option may lead to migrations inconsistency in the cluster.
.. option:: --ignore-preceding-status
@@ -371,13 +373,14 @@ Options
.. warning::
- Using this option may result in cluster migrations inconsistency.
+ Using this option may lead to migrations inconsistency in the cluster.
.. option:: --key STRING
**Applicable to:** ``publish``
- put scenario to //migrations/scenario/ etcd key instead. Only for single file publish
+ Put the scenario to the ``//migrations/scenario/`` etcd key instead.
+ Applies only when publishing a single file.
.. option:: --migration STRING
@@ -393,13 +396,13 @@ Options
.. warning::
- Using this option may result in cluster migrations inconsistency.
+ Using this option may lead to migrations inconsistency in the cluster.
.. option:: --replicaset STRING
**Applicable to:** ``apply``, ``remove``, ``status``, ``stop``
- Execute the operation only on the specified replicaset.
+ Execute the operation only on the specified replica set.
.. option:: --skip-syntax-check
From d9827e5320d7a6d6d7439f4b86185f6efa210dfd Mon Sep 17 00:00:00 2001
From: Pavel Semyonov
Date: Tue, 1 Oct 2024 12:35:56 +0700
Subject: [PATCH 09/16] Fix missing newlines
---
.../snippets/migrations/instances.enabled/myapp/config.yaml | 2 +-
.../migrations/instances.enabled/myapp/instances-3-storages.yml | 2 +-
.../snippets/migrations/instances.enabled/myapp/instances.yml | 2 +-
.../migrations/instances.enabled/myapp/myapp-scm-1.rockspec | 2 +-
.../migrations/instances.enabled/myapp/source-3-storages.yaml | 2 +-
.../snippets/migrations/instances.enabled/myapp/source.yaml | 2 +-
.../migrations/scenario/000001_create_writers_space.lua | 2 +-
.../migrations/scenario/000002_create_writers_index.lua | 2 +-
.../migrations/scenario/000003_alter_writers_space.lua | 2 +-
9 files changed, 9 insertions(+), 9 deletions(-)
diff --git a/doc/code_snippets/snippets/migrations/instances.enabled/myapp/config.yaml b/doc/code_snippets/snippets/migrations/instances.enabled/myapp/config.yaml
index 9d1ab2666..2e9d6f9fd 100644
--- a/doc/code_snippets/snippets/migrations/instances.enabled/myapp/config.yaml
+++ b/doc/code_snippets/snippets/migrations/instances.enabled/myapp/config.yaml
@@ -7,4 +7,4 @@ config:
password: config_pass
http:
request:
- timeout: 3
\ No newline at end of file
+ timeout: 3
diff --git a/doc/code_snippets/snippets/migrations/instances.enabled/myapp/instances-3-storages.yml b/doc/code_snippets/snippets/migrations/instances.enabled/myapp/instances-3-storages.yml
index 6d4b27af6..ef5d67a47 100644
--- a/doc/code_snippets/snippets/migrations/instances.enabled/myapp/instances-3-storages.yml
+++ b/doc/code_snippets/snippets/migrations/instances.enabled/myapp/instances-3-storages.yml
@@ -4,4 +4,4 @@ storage-001-b:
storage-002-a:
storage-002-b:
storage-003-a:
-storage-003-b:
\ No newline at end of file
+storage-003-b:
diff --git a/doc/code_snippets/snippets/migrations/instances.enabled/myapp/instances.yml b/doc/code_snippets/snippets/migrations/instances.enabled/myapp/instances.yml
index 10bab3f7c..de6c03897 100644
--- a/doc/code_snippets/snippets/migrations/instances.enabled/myapp/instances.yml
+++ b/doc/code_snippets/snippets/migrations/instances.enabled/myapp/instances.yml
@@ -2,4 +2,4 @@ router-001-a:
storage-001-a:
storage-001-b:
storage-002-a:
-storage-002-b:
\ No newline at end of file
+storage-002-b:
diff --git a/doc/code_snippets/snippets/migrations/instances.enabled/myapp/myapp-scm-1.rockspec b/doc/code_snippets/snippets/migrations/instances.enabled/myapp/myapp-scm-1.rockspec
index 6ad1a7830..a7a403519 100644
--- a/doc/code_snippets/snippets/migrations/instances.enabled/myapp/myapp-scm-1.rockspec
+++ b/doc/code_snippets/snippets/migrations/instances.enabled/myapp/myapp-scm-1.rockspec
@@ -11,4 +11,4 @@ dependencies = {
build = {
type = 'none';
-}
\ No newline at end of file
+}
diff --git a/doc/code_snippets/snippets/migrations/instances.enabled/myapp/source-3-storages.yaml b/doc/code_snippets/snippets/migrations/instances.enabled/myapp/source-3-storages.yaml
index dd4eb2d77..61122da38 100644
--- a/doc/code_snippets/snippets/migrations/instances.enabled/myapp/source-3-storages.yaml
+++ b/doc/code_snippets/snippets/migrations/instances.enabled/myapp/source-3-storages.yaml
@@ -85,4 +85,4 @@ groups:
listen:
- uri: localhost:3307
advertise:
- client: localhost:3307
\ No newline at end of file
+ client: localhost:3307
diff --git a/doc/code_snippets/snippets/migrations/instances.enabled/myapp/source.yaml b/doc/code_snippets/snippets/migrations/instances.enabled/myapp/source.yaml
index 16ba76662..7e49410ab 100644
--- a/doc/code_snippets/snippets/migrations/instances.enabled/myapp/source.yaml
+++ b/doc/code_snippets/snippets/migrations/instances.enabled/myapp/source.yaml
@@ -70,4 +70,4 @@ groups:
listen:
- uri: localhost:3305
advertise:
- client: localhost:3305
\ No newline at end of file
+ client: localhost:3305
diff --git a/doc/code_snippets/snippets/migrations/migrations/scenario/000001_create_writers_space.lua b/doc/code_snippets/snippets/migrations/migrations/scenario/000001_create_writers_space.lua
index 570e0a4e0..c8e6fc45c 100644
--- a/doc/code_snippets/snippets/migrations/migrations/scenario/000001_create_writers_space.lua
+++ b/doc/code_snippets/snippets/migrations/migrations/scenario/000001_create_writers_space.lua
@@ -20,4 +20,4 @@ return {
apply = {
scenario = apply_scenario,
},
-}
\ No newline at end of file
+}
diff --git a/doc/code_snippets/snippets/migrations/migrations/scenario/000002_create_writers_index.lua b/doc/code_snippets/snippets/migrations/migrations/scenario/000002_create_writers_index.lua
index 0caaecbac..fdee011a3 100644
--- a/doc/code_snippets/snippets/migrations/migrations/scenario/000002_create_writers_index.lua
+++ b/doc/code_snippets/snippets/migrations/migrations/scenario/000002_create_writers_index.lua
@@ -8,4 +8,4 @@ return {
apply = {
scenario = apply_scenario,
},
-}
\ No newline at end of file
+}
diff --git a/doc/code_snippets/snippets/migrations/migrations/scenario/000003_alter_writers_space.lua b/doc/code_snippets/snippets/migrations/migrations/scenario/000003_alter_writers_space.lua
index 3db0db03c..3d5331478 100644
--- a/doc/code_snippets/snippets/migrations/migrations/scenario/000003_alter_writers_space.lua
+++ b/doc/code_snippets/snippets/migrations/migrations/scenario/000003_alter_writers_space.lua
@@ -45,4 +45,4 @@ return {
apply = {
scenario = apply_scenario,
},
-}
\ No newline at end of file
+}
From 8aa777c9a0ca72682a3badf8d656f52888927254 Mon Sep 17 00:00:00 2001
From: Pavel Semyonov
Date: Tue, 1 Oct 2024 12:37:40 +0700
Subject: [PATCH 10/16] Apply suggestions from code review
Co-authored-by: Georgy Moiseev
---
.../ddl_dml/migrations/troubleshoot_migrations_tt.rst | 2 +-
doc/tooling/tt_cli/migrations.rst | 4 ++--
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/doc/platform/ddl_dml/migrations/troubleshoot_migrations_tt.rst b/doc/platform/ddl_dml/migrations/troubleshoot_migrations_tt.rst
index aacf031bf..0db56ebf3 100644
--- a/doc/platform/ddl_dml/migrations/troubleshoot_migrations_tt.rst
+++ b/doc/platform/ddl_dml/migrations/troubleshoot_migrations_tt.rst
@@ -78,7 +78,7 @@ To interrupt migration execution on the cluster, use ``tt migrations stop``:
--tarantool-username=client --tarantool-password=secret
To avoid such situations in the future, restrict the maximum migration execution time
-using the ``--executions-timeout`` option of ``tt migrations apply``:
+using the ``--execution-timeout`` option of ``tt migrations apply``:
.. code-block:: console
diff --git a/doc/tooling/tt_cli/migrations.rst b/doc/tooling/tt_cli/migrations.rst
index 66c693346..8dc558324 100644
--- a/doc/tooling/tt_cli/migrations.rst
+++ b/doc/tooling/tt_cli/migrations.rst
@@ -45,7 +45,7 @@ configuration storage on all its read-write instances (replica set leaders).
.. code-block:: console
$ tt migrations apply "https://user:pass@localhost:2379/myapp" \
- -tarantool-username=admin --tarantool-password=pass
+ --tarantool-username=admin --tarantool-password=pass
To apply a single published migration, pass its name in the ``--migration`` option:
@@ -96,7 +96,7 @@ argument:
.. code-block:: console
- $ tt migrations publish "https://user:pass@localhost:2379/myapp my_migrations"
+ $ tt migrations publish "https://user:pass@localhost:2379/myapp" my_migrations
To publish a single migration from a file, use its name or path as the command argument:
From 6f9be5e40bca0d5dabb334580894f8967044134c Mon Sep 17 00:00:00 2001
From: Pavel Semyonov
Date: Tue, 1 Oct 2024 14:15:44 +0700
Subject: [PATCH 11/16] Review fixes
---
.../migrations/basic_migrations_tt.rst | 7 +-
.../migrations/troubleshoot_migrations_tt.rst | 17 +-
.../migrations/upgrade_migrations_tt.rst | 8 +-
doc/tooling/tt_cli/migrations.rst | 185 +++++++++---------
4 files changed, 118 insertions(+), 99 deletions(-)
diff --git a/doc/platform/ddl_dml/migrations/basic_migrations_tt.rst b/doc/platform/ddl_dml/migrations/basic_migrations_tt.rst
index 9e45d74fa..6551979ab 100644
--- a/doc/platform/ddl_dml/migrations/basic_migrations_tt.rst
+++ b/doc/platform/ddl_dml/migrations/basic_migrations_tt.rst
@@ -26,7 +26,8 @@ Preparing a cluster
The centralized migration mechanism works with Tarantool EE clusters that:
- use etcd as a centralized configuration storage
-- use the `CRUD `__ module for data distribution
+- use the `CRUD `__ module or its Enterprise
+ version for data distribution
.. _basic_migrations_tt_cluster_etcd:
@@ -268,7 +269,9 @@ Check the migrations status with ``tt migration status``:
To make sure that the space and indexes are created in the cluster, connect to the router
instance and retrieve the space information:
-.. code-block:: $ tt connect myapp:router-001
+.. code-block:: console
+
+ $ tt connect myapp:router-001-a
.. code-block:: tarantoolsession
diff --git a/doc/platform/ddl_dml/migrations/troubleshoot_migrations_tt.rst b/doc/platform/ddl_dml/migrations/troubleshoot_migrations_tt.rst
index 0db56ebf3..d09af260d 100644
--- a/doc/platform/ddl_dml/migrations/troubleshoot_migrations_tt.rst
+++ b/doc/platform/ddl_dml/migrations/troubleshoot_migrations_tt.rst
@@ -59,10 +59,14 @@ the ``--force-reapply`` option:
-If execution of the incorrect migration version has failed, you may also need to add
-the ``--ignore-preceding-status`` option:
+When you reapply a migration, ``tt`` checks the statuses of preceding migrations
+to ensure consistency. To skip this check, add the ``--ignore-preceding-status`` option:
+
.. code-block:: console
- $ tt migrations apply http://app_user:config_pass@localhost:2379/myapp \
+ $ tt migrations apply "http://app_user:config_pass@localhost:2379/myapp" \
--tarantool-username=client --tarantool-password=secret \
+   --migration=00003_alter_space.lua \
--force-reapply --ignore-preceding-status
.. _centralized_migrations_tt_troubleshoot_stop:
@@ -77,11 +81,16 @@ To interrupt migration execution on the cluster, use ``tt migrations stop``:
$ tt migrations stop "http://app_user:config_pass@localhost:2379/myapp" \
--tarantool-username=client --tarantool-password=secret
-To avoid such situations in the future, restrict the maximum migration execution time
-using the ``--execution-timeout`` option of ``tt migrations apply``:
+You can adjust the maximum migration execution time using the ``--execution-timeout``
+option of ``tt migrations apply``:
.. code-block:: console
$ tt migrations apply "http://app_user:config_pass@localhost:2379/myapp" \
--tarantool-username=client --tarantool-password=secret \
- --execution-timeout=60
\ No newline at end of file
+ --execution-timeout=60
+
+.. note::
+
+ If a migration timeout is reached, you may need to call ``tt migrations stop``
+ to cancel requests that were sent when applying migrations.
\ No newline at end of file
diff --git a/doc/platform/ddl_dml/migrations/upgrade_migrations_tt.rst b/doc/platform/ddl_dml/migrations/upgrade_migrations_tt.rst
index 723474998..52a1de71c 100644
--- a/doc/platform/ddl_dml/migrations/upgrade_migrations_tt.rst
+++ b/doc/platform/ddl_dml/migrations/upgrade_migrations_tt.rst
@@ -25,7 +25,7 @@ Writing a complex migration
Complex migrations require data migration along with schema migration. Connect to
the router instance and insert some tuples into the space before proceeding to the next steps.
-.. code-block:: $ tt connect myapp:router-001
+.. code-block:: $ tt connect myapp:router-001-a
.. code-block:: tarantoolsession
@@ -125,13 +125,15 @@ Apply the published migrations:
Connect to the router instance and check that the space and its tuples have the new format:
-.. code-block:: $ tt connect myapp:router-001
+.. code-block:: console
+
+ $ tt connect myapp:router-001-a
.. code-block:: tarantoolsession
myapp:router-001-a> require('crud').get('writers', 2)
---
- - rows: []
+ - rows: [2, 401, 'Douglas', 'Adams', 49]
metadata: [{'name': 'id', 'type': 'number'}, {'name': 'bucket_id', 'type': 'number'},
{'name': 'first_name', 'type': 'string'}, {'name': 'last_name', 'type': 'string'},
{'name': 'age', 'type': 'number'}]
diff --git a/doc/tooling/tt_cli/migrations.rst b/doc/tooling/tt_cli/migrations.rst
index 8dc558324..3f517acd7 100644
--- a/doc/tooling/tt_cli/migrations.rst
+++ b/doc/tooling/tt_cli/migrations.rst
@@ -22,56 +22,14 @@ on using the centralized migrations mechanism.
``COMMAND`` is one of the following:
-* :ref:`apply `
+
* :ref:`publish `
-* :ref:`remove `
+* :ref:`apply `
* :ref:`status `
* :ref:`stop `
+* :ref:`remove `
-.. _tt-migrations-apply:
-
-apply
------
-
-.. code-block:: console
-
- $ tt migrations apply ETCD_URI [OPTION ...]
-
-``tt migrations apply`` applies :ref:`published ` migrations
-to the cluster. It executes all migrations from the cluster's centralized
-configuration storage on all its read-write instances (replica set leaders).
-
-.. code-block:: console
-
- $ tt migrations apply "https://user:pass@localhost:2379/myapp" \
- --tarantool-username=admin --tarantool-password=pass
-
-To apply a single published migration, pass its name in the ``--migration`` option:
-
-.. code-block:: console
-
- $ tt migrations apply "https://user:pass@localhost:2379/myapp" \
- --tarantool-username=admin --tarantool-password=pass \
- --migration=000001_create_space.lua
-
-To apply migrations on a single replica set, specify the ``replicaset`` option:
-
-.. code-block:: console
-
- $ tt migrations apply "https://user:pass@localhost:2379/myapp" \
- --tarantool-username=admin --tarantool-password=pass \
- --replicaset=storage-001
-
-The command also provides options for migration troubleshooting: ``--ignore-order-violation``,
-``--force-reapply``, and ``--ignore-preceding-status``. Learn to use them in
-:ref:`centralized_migrations_tt_troubleshoot`.
-
-.. warning::
-
- The use of migration troubleshooting options may lead to migration inconsistency
- in the cluster. Use them only for local development and testing purposes.
-
.. _tt-migrations-publish:
publish
@@ -125,66 +83,52 @@ When publishing migrations, ``tt`` performs checks for:
.. warning::
Using the options that ignore checks when publishing migrations may cause
- migration inconsistency in the etcd storage.
+ migration inconsistency in the cluster.
-.. _tt-migrations-remove:
-remove
-------
+.. _tt-migrations-apply:
-.. code-block:: console
+apply
+-----
- $ tt migrations remove ETCD_URI [OPTION ...]
+.. code-block:: console
-``tt migrations remove`` removes published migrations from the centralized storage.
-With additional options, it can also remove the information about the migration execution
-on the cluster instances.
+ $ tt migrations apply ETCD_URI [OPTION ...]
-To remove all migrations from a specified centralized storage:
+``tt migrations apply`` applies :ref:`published ` migrations
+to the cluster. It executes all migrations from the cluster's centralized
+configuration storage on all its read-write instances (replica set leaders).
.. code-block:: console
- $ tt migrations remove "https://user:pass@localhost:2379/myapp" \
- --tarantool-username=admin --tarantool-password=pass
+ $ tt migrations apply "https://user:pass@localhost:2379/myapp" \
+ --tarantool-username=admin --tarantool-password=pass
-To remove a specific migration, pass its name in the ``--migration`` option:
+To apply a single published migration, pass its name in the ``--migration`` option:
.. code-block:: console
- $ tt migrations remove "https://user:pass@localhost:2379/myapp" \
- --tarantool-username=admin --tarantool-password=pass \
- --migration=000001_create_writers_space.lua
+ $ tt migrations apply "https://user:pass@localhost:2379/myapp" \
+ --tarantool-username=admin --tarantool-password=pass \
+ --migration=000001_create_space.lua
-Before removing migrations, the command checks their :ref:`status `
-on the cluster. To ignore the status and remove migrations anyway, add the
-``--force-remove-on=config-storage`` option:
+To apply migrations on a single replica set, specify the ``replicaset`` option:
.. code-block:: console
- $ tt migrations remove "https://user:pass@localhost:2379/myapp" \
- --force-remove-on=config-storage
-
-.. note::
-
- In this case, cluster credentials are not required.
-
-To remove migration execution information from the cluster (clear the migration status),
-use the ``--force-remove-on=cluster`` option:
-
-.. code-block:: console
+ $ tt migrations apply "https://user:pass@localhost:2379/myapp" \
+ --tarantool-username=admin --tarantool-password=pass \
+ --replicaset=storage-001
- $ tt migrations remove "https://user:pass@localhost:2379/myapp" \
- --tarantool-username=admin --tarantool-password=pass \
- --force-remove-on=cluster
+The command also provides options for migration troubleshooting: ``--ignore-order-violation``,
+``--force-reapply``, and ``--ignore-preceding-status``. Learn to use them in
+:ref:`centralized_migrations_tt_troubleshoot`.
-To clear all migration information from the centralized storage and cluster,
-use the ``--force-remove-on=all`` option:
+.. warning::
-.. code-block:: console
+ The use of migration troubleshooting options may lead to migration inconsistency
+ in the cluster. Use them only for local development and testing purposes.
- $ tt migrations remove "https://user:pass@localhost:2379/myapp" \
- --tarantool-username=admin --tarantool-password=pass \
- --force-remove-on=all
.. _tt-migrations-status:
@@ -201,6 +145,7 @@ storage and the result of their execution on the cluster instances.
Possible migration statuses are:
- ``APPLY_STARTED`` -- the migration execution has started but not completed yet
+ or has been interrupted with :ref:`tt migrations stop `
- ``APPLIED`` -- the migration is successfully applied on the instance
- ``FAILED`` -- there were errors during the migration execution on the instance
@@ -254,16 +199,76 @@ stop
Calling ``tt migrations stop`` may cause migration inconsistency in the cluster.
-To stop execution of migrations currently running in the cluster:
+To stop the execution of a migration currently running in the cluster:
.. code-block:: console
$ tt migrations stop "https://user:pass@localhost:2379/myapp" \
--tarantool-username=admin --tarantool-password=pass
-Q: can any migrations in a batch complete successfully? If I apply 2 migrations and call
-`tt migrations stop` after the first one is finished without errors, what are migration statuses?
+``tt migrations stop`` interrupts a single migration. If you call it to interrupt
+the process that applies multiple migrations, the ones completed before the call
+receive the ``APPLIED`` status. The migration interrupted by the call remains in
+``APPLY_STARTED``.
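After an interrupted batch, the per-migration statuses can be inspected with ``tt migrations status``. A sketch with placeholder URI and credentials:

```shell
# Migrations completed before the stop are expected to show APPLIED;
# the interrupted one stays in APPLY_STARTED.
tt migrations status "https://user:pass@localhost:2379/myapp" \
    --tarantool-username=admin --tarantool-password=pass
```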
+
+.. _tt-migrations-remove:
+
+remove
+------
+
+.. code-block:: console
+
+ $ tt migrations remove ETCD_URI [OPTION ...]
+
+``tt migrations remove`` removes published migrations from the centralized storage.
+With additional options, it can also remove the information about the migration execution
+on the cluster instances.
+
+To remove all migrations from a specified centralized storage:
+
+.. code-block:: console
+
+ $ tt migrations remove "https://user:pass@localhost:2379/myapp" \
+ --tarantool-username=admin --tarantool-password=pass
+
+To remove a specific migration, pass its name in the ``--migration`` option:
+
+.. code-block:: console
+
+ $ tt migrations remove "https://user:pass@localhost:2379/myapp" \
+ --tarantool-username=admin --tarantool-password=pass \
+ --migration=000001_create_writers_space.lua
+
+Before removing migrations, the command checks their :ref:`status `
+on the cluster. To ignore the status and remove migrations anyway, add the
+``--force-remove-on=config-storage`` option:
+
+.. code-block:: console
+
+ $ tt migrations remove "https://user:pass@localhost:2379/myapp" \
+ --force-remove-on=config-storage
+.. note::
+
+ In this case, cluster credentials are not required.
+
+To remove migration execution information from the cluster (clear the migration status),
+use the ``--force-remove-on=cluster`` option:
+
+.. code-block:: console
+
+ $ tt migrations remove "https://user:pass@localhost:2379/myapp" \
+ --tarantool-username=admin --tarantool-password=pass \
+ --force-remove-on=cluster
+
+To clear all migration information from the centralized storage and cluster,
+use the ``--force-remove-on=all`` option:
+
+.. code-block:: console
+
+ $ tt migrations remove "https://user:pass@localhost:2379/myapp" \
+ --tarantool-username=admin --tarantool-password=pass \
+ --force-remove-on=all
.. _tt-migrations-auth:
@@ -273,10 +278,10 @@ Authentication
Since ``tt migrations`` manages migrations via a centralized etcd storage, it
needs credentials to access this storage. There are two ways to pass etcd credentials:
-- command options ``--config-storage-username`` and ``--config-storage-password``
+- command-line options ``--config-storage-username`` and ``--config-storage-password``
- the etcd URI, for example, ``https://user:pass@localhost:2379/myapp``
-Q: which way has a higher priority?
+Credentials specified in the URI have a higher priority.
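As an illustration, the two ways are equivalent for a read-only ``status`` call (all values below are placeholders; when both are present, the URI credentials take precedence):

```shell
# etcd credentials embedded in the URI:
tt migrations status "https://etcd_user:etcd_pass@localhost:2379/myapp" \
    --tarantool-username=admin --tarantool-password=pass

# The same request with etcd credentials passed as options:
tt migrations status "https://localhost:2379/myapp" \
    --config-storage-username=etcd_user --config-storage-password=etcd_pass \
    --tarantool-username=admin --tarantool-password=pass
```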
For commands that connect to the cluster (that is, all except ``publish``), Tarantool
credentials are also required. They are passed in the ``--tarantool-username`` and
@@ -386,7 +391,7 @@ Options
**Applicable to:** ``apply``, ``remove``, ``status``
- migration to remove
+ The migration to apply, remove, or check the status of.
.. option:: --overwrite
From c1405b68daf89086f6461a16f0fbd1c593f84882 Mon Sep 17 00:00:00 2001
From: Pavel Semyonov
Date: Tue, 1 Oct 2024 14:27:33 +0700
Subject: [PATCH 12/16] Fix
---
.../ddl_dml/{ => migrations}/images/ddl-state.png | Bin
.../ddl_dml/migrations/upgrade_migrations_tt.rst | 6 ++++--
2 files changed, 4 insertions(+), 2 deletions(-)
rename doc/platform/ddl_dml/{ => migrations}/images/ddl-state.png (100%)
diff --git a/doc/platform/ddl_dml/images/ddl-state.png b/doc/platform/ddl_dml/migrations/images/ddl-state.png
similarity index 100%
rename from doc/platform/ddl_dml/images/ddl-state.png
rename to doc/platform/ddl_dml/migrations/images/ddl-state.png
diff --git a/doc/platform/ddl_dml/migrations/upgrade_migrations_tt.rst b/doc/platform/ddl_dml/migrations/upgrade_migrations_tt.rst
index 52a1de71c..234e58e91 100644
--- a/doc/platform/ddl_dml/migrations/upgrade_migrations_tt.rst
+++ b/doc/platform/ddl_dml/migrations/upgrade_migrations_tt.rst
@@ -8,7 +8,7 @@ Data migrations with space.upgrade()
In this tutorial, you learn to write migrations that include data migration using
the ``space.upgrade()`` function.
-.. _upgrade_migrations_tt:
+.. _upgrade_migrations_tt_prereq:
Prerequisites
-------------
@@ -25,7 +25,9 @@ Writing a complex migration
Complex migrations require data migration along with schema migration. Connect to
the router instance and insert some tuples into the space before proceeding to the next steps.
-.. code-block:: $ tt connect myapp:router-001-a
+.. code-block:: console
+
+ $ tt connect myapp:router-001-a
.. code-block:: tarantoolsession
From c616d6f27a40ed9775f1ec4d1cbe0cc680e261b9 Mon Sep 17 00:00:00 2001
From: Pavel Semyonov
Date: Thu, 3 Oct 2024 16:29:20 +0700
Subject: [PATCH 13/16] Apply suggestions from code review
Co-authored-by: Kseniia Antonova <73473519+xuniq@users.noreply.github.com>
---
doc/platform/ddl_dml/migrations/basic_migrations_tt.rst | 2 +-
doc/platform/ddl_dml/migrations/index.rst | 2 +-
doc/tooling/tt_cli/migrations.rst | 4 ++--
3 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/doc/platform/ddl_dml/migrations/basic_migrations_tt.rst b/doc/platform/ddl_dml/migrations/basic_migrations_tt.rst
index 6551979ab..88ffae17f 100644
--- a/doc/platform/ddl_dml/migrations/basic_migrations_tt.rst
+++ b/doc/platform/ddl_dml/migrations/basic_migrations_tt.rst
@@ -94,7 +94,7 @@ Creating a cluster
.. literalinclude:: /code_snippets/snippets/migrations/instances.enabled/myapp/myapp-scm-1.rockspec
:dedent:
-4. Create the ``source.yaml`` with a cluster configuration to publish to etcd:
+#. Create the ``source.yaml`` with a cluster configuration to publish to etcd:
.. note::
diff --git a/doc/platform/ddl_dml/migrations/index.rst b/doc/platform/ddl_dml/migrations/index.rst
index 00b16770a..173e8c038 100644
--- a/doc/platform/ddl_dml/migrations/index.rst
+++ b/doc/platform/ddl_dml/migrations/index.rst
@@ -155,7 +155,7 @@ version of the :ref:`tt ` utility and in :ref:`tcm`.
To learn how to manage migrations in Tarantool EE clusters from the command line,
see :ref:`centralized_migrations_tt`. To learn how to use the mechanism from the |tcm|
-web interface, see the :ref:`tcm_migrations` |tcm| documentation page.
+web interface, see the :ref:`tcm_cluster_migrations` |tcm| documentation page.
.. toctree::
:maxdepth: 1
diff --git a/doc/tooling/tt_cli/migrations.rst b/doc/tooling/tt_cli/migrations.rst
index 3f517acd7..f7a590b3c 100644
--- a/doc/tooling/tt_cli/migrations.rst
+++ b/doc/tooling/tt_cli/migrations.rst
@@ -352,8 +352,8 @@ Options
Remove migrations regardless of their status. Possible values:
- - ``config-storage``: remove migrations on etcd centralized migrations storage disregarding the cluster apply status.
- - ``cluster``: remove migrations status info only on a Tarantool cluster.
+ - ``config-storage``: remove migrations from the etcd centralized migrations storage regardless of the cluster apply status.
+ - ``cluster``: remove only the migrations status info stored on the Tarantool cluster.
- ``all``: execute both ``config-storage`` and ``cluster`` force removals.
.. warning::
From 52b7b981226767de5da73a88d5b7f580cae72bc2 Mon Sep 17 00:00:00 2001
From: Pavel Semyonov
Date: Thu, 3 Oct 2024 16:37:51 +0700
Subject: [PATCH 14/16] Fix
---
doc/platform/ddl_dml/migrations/index.rst | 2 +-
.../ddl_dml/migrations/troubleshoot_migrations_tt.rst | 8 ++++----
2 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/doc/platform/ddl_dml/migrations/index.rst b/doc/platform/ddl_dml/migrations/index.rst
index 173e8c038..ffdae15d4 100644
--- a/doc/platform/ddl_dml/migrations/index.rst
+++ b/doc/platform/ddl_dml/migrations/index.rst
@@ -28,7 +28,7 @@ There are two types of schema migration that do not require data migration:
.. code-block:: lua
- local users = box.space.users
+ local users = box.space.writers
local fmt = users:format()
table.insert(fmt, { name = 'age', type = 'number', is_nullable = true })
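The hunk above shows only the format table being extended; a minimal hedged sketch of the complete add-column step follows (the trailing ``users:format(fmt)`` call is assumed from the surrounding tutorial, and the space and field names follow the patched lines):

.. code-block:: lua

    -- Sketch: add a nullable 'age' field to the space schema.
    -- Appending it as nullable keeps existing tuples valid,
    -- so no data rewriting is required for this schema migration.
    local users = box.space.writers
    local fmt = users:format()
    table.insert(fmt, { name = 'age', type = 'number', is_nullable = true })
    users:format(fmt)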
diff --git a/doc/platform/ddl_dml/migrations/troubleshoot_migrations_tt.rst b/doc/platform/ddl_dml/migrations/troubleshoot_migrations_tt.rst
index d09af260d..dd655b9fb 100644
--- a/doc/platform/ddl_dml/migrations/troubleshoot_migrations_tt.rst
+++ b/doc/platform/ddl_dml/migrations/troubleshoot_migrations_tt.rst
@@ -1,7 +1,7 @@
.. _centralized_migrations_tt_troubleshoot:
Troubleshooting migrations
---------------------------
+==========================
The centralized migrations mechanism allows troubleshooting migration issues using
dedicated ``tt migration`` options. When troubleshooting migrations, remember that
@@ -16,7 +16,7 @@ Additional steps may be needed to fix this.
.. _centralized_migrations_tt_troubleshoot_publish:
Incorrect migration published
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+-----------------------------
If an incorrect migration was published to etcd but wasn't applied yet,
fix the migration file and publish it again with the ``--overwrite`` option:
@@ -45,7 +45,7 @@ from etcd using ``tt migrations remove``:
.. _centralized_migrations_tt_troubleshoot_apply:
Incorrect migration applied
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
+---------------------------
If the migration is already applied, publish the fixed version and apply it with
the ``--force-reapply`` option:
@@ -72,7 +72,7 @@ to ensure consistency. To skip this check, add the ``--ignore-preceding-status``
.. _centralized_migrations_tt_troubleshoot_stop:
Migration execution takes too long
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+----------------------------------
To interrupt migration execution on the cluster, use ``tt migrations stop``:
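The troubleshooting paths above can be sketched as one console session (the etcd URI, credentials, and the migration file name are illustrative):

.. code-block:: console

    $ # Published but not yet applied: fix the file and publish it again.
    $ tt migrations publish "https://user:pass@localhost:2379/myapp" \
        migrations/fixed_migration.lua --overwrite
    $ # Already applied in its broken form: publish the fix and re-apply.
    $ tt migrations apply "https://user:pass@localhost:2379/myapp" \
        --tarantool-username admin --tarantool-password secret --force-reapply
    $ # Hanging execution: interrupt the migration on the cluster.
    $ tt migrations stop "https://user:pass@localhost:2379/myapp" \
        --tarantool-username admin --tarantool-password secret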
From 53506086180609a1a588c8a41f81409ac98363cc Mon Sep 17 00:00:00 2001
From: Pavel Semyonov
Date: Thu, 3 Oct 2024 21:27:53 +0700
Subject: [PATCH 15/16] Fix
---
doc/platform/ddl_dml/migrations/basic_migrations_tt.rst | 1 +
1 file changed, 1 insertion(+)
diff --git a/doc/platform/ddl_dml/migrations/basic_migrations_tt.rst b/doc/platform/ddl_dml/migrations/basic_migrations_tt.rst
index 88ffae17f..707977b5d 100644
--- a/doc/platform/ddl_dml/migrations/basic_migrations_tt.rst
+++ b/doc/platform/ddl_dml/migrations/basic_migrations_tt.rst
@@ -92,6 +92,7 @@ Creating a cluster
- ``myapp-scm-1.rockspec``:
.. literalinclude:: /code_snippets/snippets/migrations/instances.enabled/myapp/myapp-scm-1.rockspec
+ :language: none
:dedent:
#. Create the ``source.yaml`` with a cluster configuration to publish to etcd:
From 5a4a414867f10817f4ffaf53897c0ef345f3deab Mon Sep 17 00:00:00 2001
From: Pavel Semyonov
Date: Thu, 3 Oct 2024 21:43:02 +0700
Subject: [PATCH 16/16] Fix list numbering
---
doc/platform/ddl_dml/migrations/basic_migrations_tt.rst | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/doc/platform/ddl_dml/migrations/basic_migrations_tt.rst b/doc/platform/ddl_dml/migrations/basic_migrations_tt.rst
index 707977b5d..9ae58f660 100644
--- a/doc/platform/ddl_dml/migrations/basic_migrations_tt.rst
+++ b/doc/platform/ddl_dml/migrations/basic_migrations_tt.rst
@@ -95,7 +95,7 @@ Creating a cluster
:language: none
:dedent:
-#. Create the ``source.yaml`` with a cluster configuration to publish to etcd:
+4. Create the ``source.yaml`` with a cluster configuration to publish to etcd:
.. note::