Migrate unmanaged tablet to Vitess keyspace #257
Comments
Hey @zehweh …
Thanks @jawabuu for your quick response. Unfortunately, I'm getting connection errors again. I noticed that vttablet complains about MySQL running in read-only mode, which looks suspicious. Some additional information I forgot to add:
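For anyone debugging similar symptoms: the read-only state can be verified directly on the external MySQL host. A minimal sketch, assuming shell access; the host name and credentials are placeholders:

```sh
# Check the read-only flags on the external primary
mysql -h external-primary -u root -p \
  -e "SELECT @@global.read_only, @@global.super_read_only;"

# If this host is meant to accept writes, clear both flags
# (disable super_read_only first; enabling it implies read_only)
mysql -h external-primary -u root -p \
  -e "SET GLOBAL super_read_only = OFF; SET GLOBAL read_only = OFF;"
```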
@zehweh Are you on the #vitess Slack channel? I'd like to see your …
I found out what was causing the issues:
1. The resource limits in my VitessCluster manifest were set too low, which resulted in occasional OOM kills of the mysql container and the logs you see above.
2. I'm running master-master replication on the external database (the unmanaged tablet). The master I'm connecting to has multiple sets of GTIDs:
Because of this, the MoveTables workflow wouldn't start:
After switching to a master-slave setup and resetting the GTIDs, I was able to start VReplication successfully. Thanks @jawabuu for your help!
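For anyone hitting the same multi-GTID problem: the executed GTID sets can be inspected, and the binlog/GTID state cleared, directly on the external primary. A minimal sketch; host and credentials are placeholders, and note that RESET MASTER is destructive (it purges the binary logs), so only run it before VReplication has started:

```sh
# Multiple server UUIDs in this output indicate writes from more than
# one server, e.g. a master-master pair
mysql -h external-primary -u root -p -e "SELECT @@global.gtid_executed\G"

# After demoting the second writer to a replica, clear the binlog and
# GTID state so the remaining primary has a single, clean GTID set
mysql -h external-primary -u root -p -e "RESET MASTER;"
```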
Original issue description:

Hi,
for the last couple of days I've been trying to migrate data from an unmanaged tablet to a Vitess cluster on Kubernetes using the Vitess operator.
Setting up a replicated, unsharded test keyspace together with an unmanaged tablet works like a charm, but when it comes to migrating the data, it fails and I see a lot of connection errors:
After checking the logs, I noticed that the mysql master on the test keyspace gets killed (by vttablet?) immediately after starting the migration.
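One way to confirm that the mysqld container is being killed rather than crashing on its own is to look for OOMKilled termination states on the tablet pods. A quick sketch, assuming kubectl access; the label selector follows vitess-operator's conventions and may differ per setup, and the pod name is a placeholder:

```sh
# List tablet pods and their restart counts
kubectl get pods -l "planetscale.com/component=vttablet"

# Inspect the last termination state of a restarted pod
kubectl describe pod my-vttablet-pod | grep -A 6 "Last State"
```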
I tested the migration with Vitess versions 12, 13, and latest, with Percona as well as plain MySQL 5.7, using a WordPress database dump.
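For context, a data migration like this is typically started with a MoveTables workflow. A sketch of the vtctlclient form used in Vitess 12/13; the server address, keyspace, table, and workflow names are all placeholders:

```sh
# Start copying tables from the external (unmanaged) keyspace into the
# managed target keyspace under a named workflow
vtctlclient -server vtctld:15999 MoveTables -- --source external \
  --tables 'wp_posts,wp_users' Create managed.wp_migration
```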
Here are my configs and logs:
Hope that helps!
Thanks,
chris