
adding new mysql shell backup engine #16295

Open · wants to merge 4 commits into base: main

Conversation

@rvrangel (Contributor) commented Jun 28, 2024

Description

This PR implements a new backup engine for use with MySQL Shell, as described in feature request #16294.

It works a bit differently from the existing engines in vitess, in that it only stores the metadata describing how the backup was created (location + parameters used), and during the restore it uses that location plus other parameters (MySQL Shell flags differ between a dump and a restore, so we can't reuse exactly the same ones).
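To make the description concrete, here is a hypothetical sketch of the kind of metadata such a MANIFEST would record instead of streamed file contents; the struct and field names are illustrative, not the PR's actual types:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// mysqlShellBackupManifest sketches the metadata this engine records:
// where mysqlsh wrote the dump, plus the parameters used, since dump
// and load flags differ.
type mysqlShellBackupManifest struct {
	BackupLocation string
	Params         string
}

// manifestJSON renders the manifest the way it might be stored.
func manifestJSON(m mysqlShellBackupManifest) string {
	b, _ := json.Marshal(m)
	return string(b)
}

func main() {
	fmt.Println(manifestJSON(mysqlShellBackupManifest{
		BackupLocation: "/backups/ks/0/2024-06-28_12-00-00",
		Params:         `{"threads": 2}`,
	}))
}
```

During restore, the engine would read this metadata back and hand the location to mysqlsh's load utility rather than streaming files through vitess's storage layer.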

Related Issue(s)

Fixes #16294

Checklist

  • "Backport to:" labels have been added if this change should be back-ported to release branches
  • If this change is to be back-ported to previous releases, a justification is included in the PR description
  • Tests were added or are not required
  • Did the new or modified tests pass consistently locally and on CI?
  • Documentation was added or is not required

Deployment Notes

Signed-off-by: Renan Rangel <rrangel@slack-corp.com>
@vitess-bot (bot) commented Jun 28, 2024

Review Checklist

Hello reviewers! 👋 Please follow this checklist when reviewing this Pull Request.

General

  • Ensure that the Pull Request has a descriptive title.
  • Ensure there is a link to an issue (except for internal cleanup and flaky test fixes); new features should have an RFC that documents use cases and test cases.

Tests

  • Bug fixes should have at least one unit or end-to-end test, enhancement and new features should have a sufficient number of tests.

Documentation

  • Apply the release notes (needs details) label if users need to know about this change.
  • New features should be documented.
  • There should be some code comments as to why things are implemented the way they are.
  • There should be a comment at the top of each new or modified test to explain what the test does.

New flags

  • Is this flag really necessary?
  • Flag names must be clear and intuitive, use dashes (-), and have a clear help text.

If a workflow is added or modified:

  • Each item in Jobs should be named in order to mark it as required.
  • If the workflow needs to be marked as required, the maintainer team must be notified.

Backward compatibility

  • Protobuf changes should be wire-compatible.
  • Changes to _vt tables and RPCs need to be backward compatible.
  • RPC changes should be compatible with vitess-operator
  • If a flag is removed, then it should also be removed from vitess-operator and arewefastyet, if used there.
  • vtctl command output order should be stable and awk-able.

@vitess-bot vitess-bot bot added NeedsBackportReason If backport labels have been applied to a PR, a justification is required NeedsDescriptionUpdate The description is not clear or comprehensive enough, and needs work NeedsIssue A linked issue is missing for this Pull Request NeedsWebsiteDocsUpdate What it says labels Jun 28, 2024
@github-actions github-actions bot added this to the v21.0.0 milestone Jun 28, 2024
@codecov (bot) commented Jun 28, 2024

Codecov Report

Attention: Patch coverage is 17.85714% with 138 lines in your changes missing coverage. Please review.

Project coverage is 68.61%. Comparing base (7a737f4) to head (c51ee4b).
Report is 80 commits behind head on main.

Files Patch % Lines
go/vt/mysqlctl/mysqlshellbackupengine.go 14.37% 137 Missing ⚠️
go/vt/mysqlctl/backup.go 87.50% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff            @@
##             main   #16295     +/-   ##
=========================================
  Coverage   68.61%   68.61%             
=========================================
  Files        1544     1549      +5     
  Lines      197993   199256   +1263     
=========================================
+ Hits       135848   136728    +880     
- Misses      62145    62528    +383     

☔ View full report in Codecov by Sentry.

@rvrangel rvrangel marked this pull request as ready for review July 11, 2024 14:17
@deepthi deepthi added Component: Backup and Restore Type: Feature Request and removed NeedsDescriptionUpdate The description is not clear or comprehensive enough, and needs work NeedsIssue A linked issue is missing for this Pull Request NeedsBackportReason If backport labels have been applied to a PR, a justification is required labels Jul 11, 2024
@shlomi-noach (Contributor) left a comment

Thank you for this submission! Some initial general thoughts before a deeper code review. Perhaps these questions are more appropriate on #16294, but I did not want to split the discussion, so let's keep it here.

I have not used MySQL Shell backups before. Some questions and notes:

  • This PR adds dependencies on mysqlsh and mysqlshell binaries. This is just an observation, but points for consideration are:

    • Neither are included in a standard MySQL build. What are version dependencies between mysqlsh/mysqlshell and the MySQL server?
    • Neither are included in the MySQL docker images, to the best of my understanding. This means this backup method will not be available on kubernetes deployments via vitess-operator.
  • Re: GTID not being available in the manifest file, this means we will not be able to run point in time recoveries with a mysqlshell-based full backup. Point in time recoveries require GTID information. As mentioned in Feature Request: MySQL Shell Logical Backups #16294 (comment), the mysqlshell method is the first and only (thus far) logical backup solution, so it's unfortunate that this solution will not support logical point in time recoveries.
    Is it not possible to read the gtidExecuted field from the @.json dump file immediately after the backup is complete, and update the manifest file? E.g. if the dump is into a directory, isn't that directory available for us to read?

// This is usually run in a background goroutine, so there's no point
// returning an error. Just log it.
logger.Warningf("error scanning lines from %s: %v", prefix, err)
}
Contributor:
I notice we do not have a unit test for this function. Having moved it around, perhaps now is a good opportunity?

Contributor Author:
yeah, I can probably add it there 👍
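A unit test of the kind suggested above could look like the following sketch; scanLines here is a simplified stand-in for the actual helper (which logs each line it reads), reshaped to take a string and a callback so it is easy to assert on:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// scanLines reads input line by line and hands each line to report,
// returning any scanner error instead of logging it.
func scanLines(input string, report func(string)) error {
	scanner := bufio.NewScanner(strings.NewReader(input))
	for scanner.Scan() {
		report(scanner.Text())
	}
	return scanner.Err()
}

func main() {
	var got []string
	err := scanLines("line1\nline2", func(l string) { got = append(got, l) })
	fmt.Println(got, err)
}
```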

@rvrangel (Contributor Author):

These are good questions, thanks Shlomi!

  • I wasn't sure, but checking the official mysql docker images, it seems to be included actually:

    $ docker run -it mysql:8.0 mysqlsh --version
    Cannot set LC_ALL to locale en_US.UTF-8: No such file or directory
    mysqlsh   Ver 8.0.38 for Linux on x86_64 - for MySQL 8.0.38 (MySQL Community Server (GPL))
    

    In relation to the version dependency, my understanding is that MySQL Shell needs to be at least the same version as the MySQL Server, but it can be newer. We have successfully been using MySQL Shell 8.4 with Percona Server 8.0 releases.

    We don't use vitess-operator so for us it would mean we need to make sure required binaries are installed anyway (like mysqld, xtrabackup). But I imagine it being included in the official docker images means it will be less of an issue?

  • That's a good point I didn't realise. While it is possible to read the @.json file from a directory once it is completed (if we are writing to disk), it is less straightforward when we are storing the backups on an object store. Because mysqlsh doesn't work the same way (it doesn't provide you with a single stream that can be cleanly uploaded to whatever storage engine we are using), the thought was to bypass the storage engine in vitess (except for the MANIFEST, which we still write using it) and just use this metadata to help the engine locate and restore the backup instead. If this only saved to disk, it would be much easier, but also very limiting.

    If we were to do this, we would need to write code to fetch the @.json from the supported object stores where there is a support overlap between mysqlsh and vitess (S3, Azure, GCP, etc), and some might be missing. Perhaps a better idea would be to include this backup engine without PITR support in the beginning and file an upstream feature request to print or save a copy of the executed GTID once the backup is done, which we could capture in an easier way (similar to the xtrabackup engine)?
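For the directory-on-disk case, extracting the GTID from the dump metadata is straightforward. This sketch assumes the mysqlsh dump format's gtidExecuted field in @.json; the helper names are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// parseGtidExecuted extracts the gtidExecuted field from the contents
// of mysqlsh's @.json dump metadata file.
func parseGtidExecuted(data []byte) (string, error) {
	var meta struct {
		GtidExecuted string `json:"gtidExecuted"`
	}
	if err := json.Unmarshal(data, &meta); err != nil {
		return "", err
	}
	return meta.GtidExecuted, nil
}

func main() {
	// In a real directory-based backup this data would come from
	// os.ReadFile(path.Join(dumpDir, "@.json")) after the dump completes.
	gtid, err := parseGtidExecuted(
		[]byte(`{"gtidExecuted": "00000000-0000-0000-0000-000000000001:1-100"}`))
	fmt.Println(gtid, err)
}
```

The parsed value could then be written into the MANIFEST, which is exactly the piece that object-store dumps make harder, since vitess would need per-store code to fetch @.json back.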

    For additional context, as proposed in "select backup engine in Backup() and ignore engines in RestoreFromBackup()" (#16428) and described in the use case, we plan to use this mostly to keep two backup types around for each shard, but always restoring by default from xtrabackup unless we require the logical backups for a specific reason.

    We also considered mysqldump, which to be honest would fit the vitess backup engine workflow a lot better, but it was just too slow. This benchmark from Percona also highlights the same thing, and for us backing up/restoring was so slow it didn't make sense.

Signed-off-by: Renan Rangel <rrangel@slack-corp.com>
@frouioui (Member) left a comment

Hello, thank you for this contribution.

Since we have not fully finished the deletion of mysqld in the vitess/lite Docker Images, the mysqlsh binary will have to be included in the vitess/lite image regardless of whether it's included in the official MySQL Docker Images or not. Since we let people choose between an official MySQL image and the vitess/lite image for their Docker/K8S deployment, we must have the binary in both.

Regarding vitess-operator, a follow-up PR is needed on the vitess-operator to allow this new backup engine. In our CRDs we have an enumeration that restricts what backup engines are allowed; we just need to add a new entry to the enumeration. This can be done here.

FYI, I can handle the vitess-operator changes.

@shlomi-noach (Contributor) commented Jul 22, 2024

We also considered mysqldump which to be honest would fit the vitess backup engine workflow a lot better, but it was just too slow.

Have you looked at mysqlpump? (Next gen mysqldump, included in standard builds).

@shlomi-noach (Contributor) commented Jul 22, 2024

I wasn't sure, but checking the official mysql docker images, it seems to be included actually:

Oh, that's nice! The reason I thought it wasn't included is that mysqlsh/mysqlshell is not included in the standard MySQL build.

That's a good point I didn't realise. While it is possible to read the @.json file from a directory when it is completed (if we are writing to disk), it is less straightforward when we are storing the backups on an object store.

I feel like it's OK to have some solution "with limitations". We should strive to support as much functionality as possible though. So IMHO we should strive to include the GTID when the backup goes into a directory. This should be possible to do, which then means the backup should fail if for some reason we can't fetch the GTID or validate it (correct GTID form). i.e. return BackupUnusable if unable to fetch and validate the GTID entry.

I'd like @deepthi to weigh in her opinion.

Assuming we do decide to move forward, I'd next expect a CI/end-to-end test please, as follows:

When these are all added, a new CI job will run to test mysqlshell-based backup, restores, and point-in-time recoveries. These can (and should) use the directory-based backup configuration, one which does make the GTID available.

If this test passes, then you will have validated the full cycle of backup and restore, as well as correctness of the captured GTID.

Edit: since mysqlshell does not come bundled in the mysql distribution, we'd need to further download/install mysqlshell in the GitHub workflow file.

S3, Azure, GCP can be left without GTID support for now.

We'd need a documentation PR that clearly indicates the limitations of this method.

Comment on lines +23 to +38
var (
// location to store the mysql shell backup
mysqlShellBackupLocation = ""
// flags passed to the mysql shell utility, used both on dump/restore
mysqlShellFlags = "--defaults-file=/dev/null --js -h localhost"
// flags passed to the Dump command, as a JSON string
mysqlShellDumpFlags = `{"threads": 2}`
// flags passed to the Load command, as a JSON string
mysqlShellLoadFlags = `{"threads": 4, "updateGtidSet": "replace", "skipBinlog": true, "progressFile": ""}`
// drain a tablet when taking a backup
mysqlShellBackupShouldDrain = false
// disable redo logging and double write buffer
mysqlShellSpeedUpRestore = false

MySQLShellPreCheckError = errors.New("MySQLShellPreCheckError")
)
Contributor:
You have correctly followed the existing design. I'm just taking the opportunity to say at some point we will want to move away from these global variables.
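One possible direction away from the globals, sketched here only as an illustration (the struct and field names are assumptions, not part of the PR):

```go
package main

import "fmt"

// MySQLShellBackupOptions groups the engine's settings in one value
// instead of package-level globals, so they can be passed explicitly
// and overridden in tests.
type MySQLShellBackupOptions struct {
	Location  string
	Flags     string
	DumpFlags string
	LoadFlags string
}

func main() {
	opts := MySQLShellBackupOptions{
		Flags:     "--defaults-file=/dev/null --js -h localhost",
		DumpFlags: `{"threads": 2}`,
	}
	fmt.Println(opts.DumpFlags)
}
```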

// location to store the mysql shell backup
mysqlShellBackupLocation = ""
// flags passed to the mysql shell utility, used both on dump/restore
mysqlShellFlags = "--defaults-file=/dev/null --js -h localhost"
Contributor:
I'm not sure -h localhost will work well in a k8s deployment. @frouioui / @mattlord for review.

// flags passed to the mysql shell utility, used both on dump/restore
mysqlShellFlags = "--defaults-file=/dev/null --js -h localhost"
// flags passed to the Dump command, as a JSON string
mysqlShellDumpFlags = `{"threads": 2}`
Contributor:
Any particular reason to choose 2 rather than something based on runtime.NumCPU()?

Contributor Author:
I would say because MySQL Shell backups can be a bit more intensive - we are requesting a bunch of data off MySQL which needs to be fetched, parsed and compressed - and in our particular use case we were taking backups online, so we didn't want it to cause that much disruption. It is also part of the reason why I made sure ShouldDrainForBackup() was configurable, in case draining is more suitable for the use case.

I am fine with changing the default to runtime.NumCPU() though, since it is configurable, and leaving this up to the user to decide based on their environment requirements, although I am also conscious that it might cause some issues in Kube, where it will show the number of CPUs of the underlying node despite the pod possibly being limited in how much CPU it can use.
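A middle-ground default could derive the thread count from the host while keeping headroom, as in this sketch (the helper name and the halving policy are illustrative, and the Kubernetes caveat above still applies, since runtime.NumCPU reports the node's logical CPUs, not the pod's limit):

```go
package main

import (
	"fmt"
	"runtime"
)

// defaultDumpThreads derives a dump thread count from the machine
// instead of a fixed 2, leaving headroom for serving traffic.
func defaultDumpThreads() int {
	n := runtime.NumCPU() / 2
	if n < 2 {
		n = 2 // never go below the current default
	}
	return n
}

func main() {
	fmt.Println(defaultDumpThreads())
}
```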

fs.StringVar(&mysqlShellFlags, "mysql_shell_flags", mysqlShellFlags, "execution flags to pass to mysqlsh binary to be used during dump/load")
fs.StringVar(&mysqlShellDumpFlags, "mysql_shell_dump_flags", mysqlShellDumpFlags, "flags to pass to mysql shell dump utility. This should be a JSON string and will be saved in the MANIFEST")
fs.StringVar(&mysqlShellLoadFlags, "mysql_shell_load_flags", mysqlShellLoadFlags, "flags to pass to mysql shell load utility. This should be a JSON string")
fs.BoolVar(&mysqlShellBackupShouldDrain, "mysql_shell_should_drain", mysqlShellBackupShouldDrain, "decide if we should drain while taking a backup or continue serving traffic")
Contributor:
I'm guessing the choice of draining vs not draining is due to the increased workload on the server?

Contributor Author:
yeah, exactly. In fact I have been meaning to propose this to be modifiable for the xtrabackup engine as well, so a tablet won't be serving traffic when it is taking a backup.

fs.StringVar(&mysqlShellDumpFlags, "mysql_shell_dump_flags", mysqlShellDumpFlags, "flags to pass to mysql shell dump utility. This should be a JSON string and will be saved in the MANIFEST")
fs.StringVar(&mysqlShellLoadFlags, "mysql_shell_load_flags", mysqlShellLoadFlags, "flags to pass to mysql shell load utility. This should be a JSON string")
fs.BoolVar(&mysqlShellBackupShouldDrain, "mysql_shell_should_drain", mysqlShellBackupShouldDrain, "decide if we should drain while taking a backup or continue serving traffic")
fs.BoolVar(&mysqlShellSpeedUpRestore, "mysql_shell_speedup_restore", mysqlShellSpeedUpRestore, "speed up restore by disabling redo logging and double write buffer during the restore process")
Contributor:
This feels risky. Please indicate caveats in this flag's description. Otherwise this looks "too good", why wouldn't anyone want to speed up the restore?

@rvrangel (Contributor Author) commented Jul 22, 2024
unless you need the redo log/double write buffer to be disabled once the instance has completed the restore, there shouldn't be much risk in enabling this. For some setups there might be an interest in disabling this (I can see as a possible case somebody running on ZFS and wanting to keep the double write buffer disabled), so I didn't want to force it if the user has a similar scenario.

}

start := time.Now().UTC()
location := path.Join(mysqlShellBackupLocation, params.Keyspace, params.Shard, start.Format("2006-01-02_15-04-05"))
Contributor:
This is a non-standard format. It looks to me like you wish to avoid special characters -- which makes sense for S3 etc.
But then, why mix and match - and _, and not set all to -, or all to _?

Alternatively, consider this more condensed format:

// ToReadableTimestamp returns a timestamp, in seconds resolution, that is human readable
// (as opposed to unix timestamp which is just a number)
// Example: for Aug 25 2020, 16:04:25 we return "20200825160425"
func ToReadableTimestamp(t time.Time) string {
return t.Format(readableTimeFormat)
}

Contributor Author:

ah, simply because we were already using another alternative to taking logical backups outside of vitess and this follows the same format.

but to be fair, vitess already doesn't use a standard format either. Backups are created using:

// BackupTimestampFormat is the format in which we save BackupTime and FinishedTime
BackupTimestampFormat = "2006-01-02.150405"

I can switch to using that format if it makes more sense?

Member:
Yes, let us use this format, and keep it consistent within the backups code.
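For reference, the two layouts under discussion render like this for the same instant (dashFormat is an illustrative name for the format used in this PR; BackupTimestampFormat is the existing vitess constant quoted above):

```go
package main

import (
	"fmt"
	"time"
)

const (
	dashFormat            = "2006-01-02_15-04-05" // format used in this PR
	BackupTimestampFormat = "2006-01-02.150405"   // existing vitess format
)

// demoFormats renders one fixed instant in both layouts for comparison.
func demoFormats() (string, string) {
	t := time.Date(2020, 8, 25, 16, 4, 25, 0, time.UTC)
	return t.Format(dashFormat), t.Format(BackupTimestampFormat)
}

func main() {
	a, b := demoFormats()
	fmt.Println(a) // 2020-08-25_16-04-25
	fmt.Println(b) // 2020-08-25.160425
}
```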

args = append(args, strings.Fields(mysqlShellFlags)...)
}

args = append(args, "-e", fmt.Sprintf("util.dumpSchemas([\"vt_%s\"], %q, %s)",
Contributor:

should the keyspace/schema names be escaped here?

Contributor Author:
you mean escaping like in MySQL, as `vt_keyspace`? I am not sure MySQL Shell supports or expects it, I will verify that

Member:
You can't assume that the database name is vt_keyspace. It is determined by the --init_db_name_override flag.

Member:
This brings up another point. Typically when we do the regular kind of backup we also backup the sidecar db, which defaults to _vt. Does the mysqlshell backup specifically backup only the actual keyspace/database? What are the implications of this?
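One possible way to address the escaping question, sketched under the assumption that mysqlsh accepts a JSON array literal for the schema list argument (which matches the JS array syntax in the snippet above); the helper name and dbName value are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// dumpSchemasArg builds the util.dumpSchemas call by marshalling the
// schema list to JSON instead of interpolating the raw database name,
// so quotes and backslashes in a name stay escaped.
func dumpSchemasArg(dbName, location, flags string) string {
	schemas, _ := json.Marshal([]string{dbName})
	return fmt.Sprintf("util.dumpSchemas(%s, %q, %s)", schemas, location, flags)
}

func main() {
	fmt.Println(dumpSchemasArg("vt_commerce", "/backups/commerce", `{"threads": 2}`))
}
```

This also dovetails with the --init_db_name_override point: the actual database name, whatever it is, would be passed in rather than assumed to be vt_keyspace.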

defer func() { // re-enable once we are done with the restore.
err := params.Mysqld.ExecuteSuperQueryList(ctx, []string{"ALTER INSTANCE ENABLE INNODB REDO_LOG"})
if err != nil {
params.Logger.Errorf("unable to re-enable REDO_LOG: %v", err)
Contributor:
Should we fail the restore process? I'm not sure if I have a good answer here.

Contributor Author:
that's a good question. The original intention here is: since we were able to successfully disable the redo log/double write buffer, it would be better to fail the restore than put the instance in service with a potentially dangerous configuration without the user realising it.

if the user wishes to run with the redo log/double write buffer disabled, they can avoid setting --mysql_shell_speedup_restore and handle it outside of vitess.

@rvrangel (Contributor Author) commented Jul 22, 2024

@shlomi-noach yeah, we looked into mysqlpump, but it has been deprecated (the page you linked also has a notice) and it is still slower than MySQL Shell. But since it is likely going to be removed in a future MySQL version, we thought it would be better not to introduce a new feature using it :)

I think the proposal to read the GTID when backing up to a directory, but not when using an object store, is fair; let me look into it and make the necessary changes. If all looks good I will proceed with working on the CI/end-to-end tests, I just wanted to get some initial feedback before doing so. Also curious what @deepthi thinks about this approach.

Edit: since mysqlshell does not come bundled in the mysql distribution, we'd need to further download/install mysqlshell in the GitHub workflow file.

Is that something that needs to happen as part of this PR or something separate?

@rvrangel (Contributor Author):

Since we have not fully finished the deletion of mysqld in the vitess/lite Docker Images, the mysqlsh binary will have to be included in the vitess/lite image regardless if it's included in the official MySQL Docker Images or not. Since we are letting people choose between using an official MySQL image or the vitess/lite image for their Docker/K8S deployment we must have the binary in both.

@frouioui We don't use the docker images, but I am not sure I understand: if mysqld is being removed, does that mean users are expected to use a different image for running mysqld?

regarding the change in the vitess-operator, I opened a separate PR for it: planetscale/vitess-operator#586, is that all that needs to be changed?

@shlomi-noach (Contributor):

I just wanted to get some initial feedback before doing so. Also curious what @deepthi thinks about this approach.

@rvrangel absolutely! Let's wait for more input.

Edit: since mysqlshell does not come bundled in the mysql distribution, we'd need to further download/install mysqlshell in the GitHub workflow file.

Is that something that needs to happen as part of this PR or something separate?

In the scope of this PR please. I was merely pointing out that our standard workflow won't install mysql-shell. Oh, and there's a greater discussion about how our workflows are generated, see test/ci_workflow_gen.go. I can help you with that part if you need.

@frouioui (Member) commented Jul 22, 2024

We don't use the docker images, but I am not sure I understand, if mysqld is being removed does that means users are expect to use a different image for running mysqld?

@rvrangel, historically we have always shipped mysqld and all required binaries in our Docker Images (vitess/lite). My point is that we need to make sure mysqlsh is available in the vitess/lite Docker Image so that people can use all the Vitess features on Docker/K8S. Currently, mysqlsh is not in our vitess/lite Docker Image:

$> docker run -it --user=vitess vitess/lite:latest bash
vitess@e92ffa0b2f14:/$ mysqlsh
bash: mysqlsh: command not found

EDIT: I said I was going to push to this PR, but I ended up not doing so in case you have ongoing work on the branch.

We need to add mysql-shell to the list of packages we want to install in the install_dependencies.sh script. We also need to create the /home/vitess directory in the lite image and give it the right permissions. Below is the git diff for the fix.

diff --git a/docker/lite/Dockerfile b/docker/lite/Dockerfile
index d5c46cac13..5fc83123b1 100644
--- a/docker/lite/Dockerfile
+++ b/docker/lite/Dockerfile
@@ -42,7 +42,7 @@ RUN /vt/dist/install_dependencies.sh mysql80
 
 # Set up Vitess user and directory tree.
 RUN groupadd -r vitess && useradd -r -g vitess vitess
-RUN mkdir -p /vt/vtdataroot && chown -R vitess:vitess /vt
+RUN mkdir -p /vt/vtdataroot /home/vitess && chown -R vitess:vitess /vt /home/vitess
 
 # Set up Vitess environment (just enough to run pre-built Go binaries)
 ENV VTROOT /vt/src/vitess.io/vitess
diff --git a/docker/utils/install_dependencies.sh b/docker/utils/install_dependencies.sh
index b686c2418b..91e6e2b8c7 100755
--- a/docker/utils/install_dependencies.sh
+++ b/docker/utils/install_dependencies.sh
@@ -86,6 +86,7 @@ mysql57)
         /tmp/mysql-client_${VERSION}-1debian10_amd64.deb
         /tmp/mysql-community-server_${VERSION}-1debian10_amd64.deb
         /tmp/mysql-server_${VERSION}-1debian10_amd64.deb
+        mysql-shell
         percona-xtrabackup-24
     )
     ;;
@@ -112,6 +113,7 @@ mysql80)
         /tmp/mysql-community-server-core_${VERSION}-1debian11_amd64.deb
         /tmp/mysql-community-server_${VERSION}-1debian11_amd64.deb
         /tmp/mysql-server_${VERSION}-1debian11_amd64.deb
+        mysql-shell
         percona-xtrabackup-80
     )
     ;;

You can ensure that the fix works by doing the following:

$> docker build -t vitess/lite-test:latest -f docker/lite/Dockerfile .
$> docker run -it --user="vitess" docker.io/vitess/lite-test:latest bash
vitess@e31f32ddcc91:/$ mysqlsh 
Cannot set LC_ALL to locale en_US.UTF-8: No such file or directory
MySQL Shell 8.0.37

Copyright (c) 2016, 2024, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type '\help' or '\?' for help; '\quit' to exit.
 MySQL  JS > 
Bye!

Doing this, the version of mysqlsh is 8.0.37, which does not necessarily match the version of MySQL we have in the image (8.0.30), but it respects what you said earlier:

my understanding is that MySQL Shell needs to be at least the same version of the MySQL Server, but it can be newer

Note that if you are on different architecture than amd64 the Docker build will fail. You should merge main into this branch to get the commit that fixes it if you need it.

For @shlomi-noach, in the output I pasted just above, where I run mysqlsh inside our Docker Image. We immediately get the following warning Cannot set LC_ALL to locale en_US.UTF-8: No such file or directory, would that potentially be an issue for our use-case?

regarding the change in the vitess-operator, I opened a separate PR for it: planetscale/vitess-operator#586, is that all that needs to be changed?

Thank you for opening this! 🙇🏻 I will take a look and comment on that PR directly.

@deepthi (Member) commented Jul 22, 2024

@shlomi-noach and @frouioui have pretty much covered all of the main concerns. My opinion is that in principle this is a good feature addition.
The only concern I have left is whether this feature works only with the file backup storage type. I'm basing this off shlomi's comment

S3, Azure, GCP can be left without GTID support for now.

Can you explain what will and will not work with S3, GCP and Azure?

@frouioui (Member) left a comment

I think it might be worth writing something about this new backup engine in the release notes for v21.0.0: ./changelog/21.0/21.0.0/summary.md.

@frouioui frouioui added the release notes (needs details) This PR needs to be listed in the release notes in a dedicated section (deprecation notice, etc...) label Jul 22, 2024
Comment on lines +1 to +2
package mysqlctl

Member:
This file and go/vt/mysqlctl/mysqlshellbackupengine_test.go need a licence header.

Comment on lines +181 to +186
--mysql_shell_backup_location string location where the backup will be stored
--mysql_shell_dump_flags string flags to pass to mysql shell dump utility. This should be a JSON string and will be saved in the MANIFEST (default "{\"threads\": 2}")
--mysql_shell_flags string execution flags to pass to mysqlsh binary to be used during dump/load (default "--defaults-file=/dev/null --js -h localhost")
--mysql_shell_load_flags string flags to pass to mysql shell load utility. This should be a JSON string (default "{\"threads\": 4, \"updateGtidSet\": \"replace\", \"skipBinlog\": true, \"progressFile\": \"\"}")
--mysql_shell_should_drain decide if we should drain while taking a backup or continue serving traffic
--mysql_shell_speedup_restore speed up restore by disabling redo logging and double write buffer during the restore process
Member:
New flags need to use dashes as separators.
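The dash-separated spelling can be sketched with the standard library's flag package (vitess uses pflag in practice, but the naming point is the same; the flag set and parse arguments below are illustrative):

```go
package main

import (
	"flag"
	"fmt"
)

// parseBackupLocation registers the dash-separated spelling of one of
// the new flags and parses it from args.
func parseBackupLocation(args []string) (string, error) {
	fs := flag.NewFlagSet("vttablet", flag.ContinueOnError)
	loc := fs.String("mysql-shell-backup-location", "",
		"location where the backup will be stored")
	if err := fs.Parse(args); err != nil {
		return "", err
	}
	return *loc, nil
}

func main() {
	loc, err := parseBackupLocation(
		[]string{"--mysql-shell-backup-location", "/backups"})
	fmt.Println(loc, err)
}
```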


@shlomi-noach (Contributor):

S3, Azure, GCP can be left without GTID support for now.

Can you explain what will and will not work with S3, GCP and Azure?

@deepthi I will meanwhile explain what my understanding is: that in S3, Azure, and GCP, we won't have GTID information in the backup manifest, which means we won't be able to use such backups for point in time recoveries.

Labels
Component: Backup and Restore NeedsWebsiteDocsUpdate What it says release notes (needs details) This PR needs to be listed in the release notes in a dedicated section (deprecation notice, etc...) Type: Feature Request

Successfully merging this pull request may close these issues.

Feature Request: MySQL Shell Logical Backups
4 participants