--test-on-replica should not write to binlogs #646
Comments
note: @Jericon and I discussed this earlier. From a GTID standpoint, I can see the usefulness of being able to test on a replica both with and without writing to the binary logs, but we've listed several scenarios where it would be better to have the option not to.
@shlomi-noach I read #254 (as well as the rest of the issues) and I wanted to verify: the approach we should take here is to reset the GTID purged/executed sets on the replica, effectively doctoring the set of applied GTIDs as if the test never happened (only when the option is passed)? For what it's worth, I ran into this same issue in production over a year ago: we promoted a replica, and the other replicas were blocked from starting replication. After hours of digging we realized it was caused by our replica-only tests from long ago.
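A minimal sketch of the cleanup described above, for illustration only: it assumes MySQL 5.7+ with GTIDs enabled and replication stopped, and the UUID:interval set shown is a placeholder, not a real value.

```sql
-- Sketch only: remove errant GTIDs from a test replica's own history.
-- Run on the replica with replication stopped; the UUID:interval below
-- is a placeholder -- substitute the master's actual gtid_executed set.
STOP SLAVE;
-- RESET MASTER clears this server's binary logs and its gtid_executed set.
RESET MASTER;
-- Re-seed gtid_purged with only the transactions that came from the master,
-- dropping the errant GTIDs generated by the local test migration.
SET GLOBAL gtid_purged = '00000000-0000-0000-0000-000000000000:1-12345';
START SLAVE;
```

Note that `RESET MASTER` is destructive to the replica's own binary logs, which is acceptable here precisely because the goal is to discard the test's locally generated events.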
@zmoazeni yes, correct.
Yeah we ended up doing the latter with a one-off script. But it did make us nervous. |
The latter (applying the errant GTIDs on the master) is actually safer. However:
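For context, "applying errant GTIDs on the master" is conventionally done by committing empty transactions under those GTIDs, so every server downstream considers them already executed. A sketch, with a placeholder UUID:sequence:

```sql
-- Sketch only: make the master "own" an errant GTID by committing an
-- empty transaction under it. The UUID:sequence below is a placeholder
-- for the errant GTID observed on the test replica.
SET gtid_next = '11111111-1111-1111-1111-111111111111:7';
BEGIN;
COMMIT;
SET gtid_next = 'AUTOMATIC';
```

This is safer than rewriting the replica's GTID state because it adds an empty, harmless event to the replication stream rather than deleting history.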
In our environment, we do not use GTIDs, and we run most clusters in an Active/Passive Master/Master configuration. The current behavior of gh-ost's test-on-replica function is that it assumes the host is a leaf node with no replicas of its own, yet it still writes its changes to the binlog.
It would be beneficial if there was an additional flag to not write to the binlog.
The specific situation I am in is that I am compressing some large tables. Based on small tests and estimates, we should have enough space to complete the compression of one table without running out of disk space. I had intended to run the migration on the passive master, which takes no traffic and could run out of disk space without any negative impact. With the migration being written to the binlogs, though, the active master would run out of space as well.
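For the kind of pre-flight estimate mentioned above, on-disk table size can be approximated from `information_schema` (InnoDB figures there are estimates, and the schema/table names below are placeholders):

```sql
-- Rough pre-flight check of a table's on-disk footprint before
-- compressing it; 'mydb' and 'big_table' are placeholder names.
SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024 / 1024, 2) AS size_gb
FROM information_schema.tables
WHERE table_schema = 'mydb'
  AND table_name   = 'big_table';
```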