
[bitnami/mysql] Slave data consistency #832

Closed
pentago opened this issue Sep 28, 2018 · 5 comments
@pentago
Contributor

pentago commented Sep 28, 2018

I just installed the latest chart version and discovered that, on a deployment of 1 master + 2 slaves, the slaves do not hold the same amount of data (I log in to the slave service with Adminer or phpMyAdmin). The same can be seen when entering the pods and listing the /bitnami/mysql/data directory.

The first slave has all the data; the second slave has only some of it.

It seems that data doesn't get replicated consistently to the slaves for some reason.
Can somebody test and confirm the issue? This is a really critical issue.
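For reference, this is roughly how I'm comparing the slaves (the pod names come from my release and may differ in yours):

$ kubectl exec mysql-mysql-slave-0 -- ls /bitnami/mysql/data
$ kubectl exec mysql-mysql-slave-1 -- ls /bitnami/mysql/data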

pentago changed the title from "Slave data consistency" to "[bitnami/mysql] Slave data consistency" on Sep 28, 2018
@juan131
Contributor

juan131 commented Oct 1, 2018

Hi @pentago

Could you share the logs of each slave pod? I think there could have been an issue during the initialisation of one of them...

I could not reproduce it locally (see my results below), but I'd like to know why the data was not replicated to all of your pods since, as you mentioned, it's a critical issue.
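You can grab the logs with something like this (adjust the pod names to match your release):

$ kubectl logs mysql-mysql-slave-0
$ kubectl logs mysql-mysql-slave-1

For reference, here is my attempt to reproduce: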

$ helm install --name mysql bitnami/mysql --set slave.replicas=2
$ kubectl get pods
NAME                   READY     STATUS    RESTARTS   AGE
mysql-mysql-master-0   1/1       Running   0          2m
mysql-mysql-slave-0    1/1       Running   0          1m
mysql-mysql-slave-1    1/1       Running   0          39s
$ kubectl exec mysql-mysql-slave-0 ls /bitnami/mysql/data
auto.cnf
ib_buffer_pool
ib_logfile0
ib_logfile1
ibdata1
ibtmp1
my_database
mysql
mysql-bin.000001
mysql-bin.000002
mysql-bin.index
mysql-relay-bin.000003
mysql-relay-bin.000004
mysql-relay-bin.index
mysql_upgrade_info
performance_schema
sys
$ kubectl exec mysql-mysql-slave-1 ls /bitnami/mysql/data
auto.cnf
ib_buffer_pool
ib_logfile0
ib_logfile1
ibdata1
ibtmp1
my_database
mysql
mysql-bin.000001
mysql-bin.000002
mysql-bin.index
mysql-relay-bin.000003
mysql-relay-bin.000004
mysql-relay-bin.index
mysql_upgrade_info
performance_schema
sys

@pentago
Contributor Author

pentago commented Oct 1, 2018

The issue occurred when I deleted the existing chart installation and tried to do a fresh install again using the existing, previously used PVC and volume with pre-existing data on it.

The first slave got all the data but the second one didn't.

Steps to reproduce:

  1. Install the chart with a fresh, pre-provisioned clean volume and PVC
  2. Import some database (larger if possible; mine was 500 MB)
  3. Wait for the data to replicate to the slaves
  4. Delete/purge the chart
  5. Install the chart again using the previously used existing PVC with data
  6. Check the status and compare the data on both slaves.

I'd really like to hear how these exact steps went on your end, because on mine they result in a different amount of data across the slaves; I'm not sure why yet.
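For what it's worth, the replication state can also be checked directly on each slave with something like this (assuming the MySQL root password is exported as ROOT_PASSWORD):

$ kubectl exec mysql-mysql-slave-1 -- mysql -uroot -p"$ROOT_PASSWORD" -e "SHOW SLAVE STATUS\G"

Both Slave_IO_Running and Slave_SQL_Running should report Yes on a healthy slave.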

@juan131
Contributor

juan131 commented Oct 2, 2018

Hi @pentago

I couldn't reproduce it. I followed these steps:

  1. Install the chart (using the old image version) with clean volumes and PVC
$ helm install bitnami/mysql --name mysql --set image.tag=5.7.23-r51,slave.persistence.enabled=false,slave.replicas=2
$ kubectl get pods
NAME                   READY     STATUS    RESTARTS   AGE
mysql-mysql-master-0   1/1       Running   0          6m
mysql-mysql-slave-0    1/1       Running   0          6m
mysql-mysql-slave-1    1/1       Running   0          6m
$ REPLICA_PASSWORD=$(kubectl get secret --namespace default mysql-mysql -o jsonpath="{.data.mysql-replication-password}" | base64 --decode)
$ ROOT_PASSWORD=$(kubectl get secret --namespace default mysql-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode)
  2. Import a database (I used this test db: https://github.com/datacharmer/test_db)
  3. Wait for the data to replicate to the slaves
$ kubectl exec mysql-mysql-master-0 -- du -h /bitnami/mysql/data/
1.1M	/bitnami/mysql/data/performance_schema
8.0K	/bitnami/mysql/data/my_database
179M	/bitnami/mysql/data/employees
9.7M	/bitnami/mysql/data/mysql
676K	/bitnami/mysql/data/sys
374M	/bitnami/mysql/data/
$ kubectl exec mysql-mysql-slave-0 -- du -h /bitnami/mysql/data/
1.1M	/bitnami/mysql/data/performance_schema
8.0K	/bitnami/mysql/data/my_database
179M	/bitnami/mysql/data/employees
9.7M	/bitnami/mysql/data/mysql
676K	/bitnami/mysql/data/sys
501M	/bitnami/mysql/data/
$ kubectl exec mysql-mysql-slave-1 -- du -h /bitnami/mysql/data/
1.1M	/bitnami/mysql/data/performance_schema
8.0K	/bitnami/mysql/data/my_database
179M	/bitnami/mysql/data/employees
9.7M	/bitnami/mysql/data/mysql
676K	/bitnami/mysql/data/sys
501M	/bitnami/mysql/data/
  4. Delete/purge the chart
$ helm delete --purge mysql
  5. Install the chart again using the previously used existing PVC with data
$ kubectl get pvc
NAME                        STATUS    VOLUME              CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-mysql-mysql-master-0   Bound     local-pv-9f804d79   29Gi       RWO            fast-disks     8m
$ helm install bitnami/mysql --name mysql --set master.persistence.existingClaim=data-mysql-mysql-master-0,slave.persistence.enabled=false,slave.replicas=2,root.password=$ROOT_PASSWORD,replication.password=$REPLICA_PASSWORD
  6. Check the status and compare the data on both slaves.
$ kubectl get pods
NAME                   READY     STATUS    RESTARTS   AGE
mysql-mysql-master-0   1/1       Running   0          3m
mysql-mysql-slave-0    1/1       Running   0          1m
mysql-mysql-slave-1    1/1       Running   0          1m
$ kubectl exec mysql-mysql-slave-0 -- du -h /bitnami/mysql/data/
1.1M	/bitnami/mysql/data/performance_schema
8.0K	/bitnami/mysql/data/my_database
179M	/bitnami/mysql/data/employees
9.7M	/bitnami/mysql/data/mysql
676K	/bitnami/mysql/data/sys
501M	/bitnami/mysql/data/
$ kubectl exec mysql-mysql-slave-1 -- du -h /bitnami/mysql/data/
1.1M	/bitnami/mysql/data/performance_schema
8.0K	/bitnami/mysql/data/my_database
179M	/bitnami/mysql/data/employees
9.7M	/bitnami/mysql/data/mysql
676K	/bitnami/mysql/data/sys
501M	/bitnami/mysql/data/

@pentago
Contributor Author

pentago commented Oct 5, 2018

I just tested it again a couple of times with the latest image (8.0.12-debian-9-r45) and it works fine; all slaves are consistent. I'll close this for now, but I'll keep testing occasionally as I totally do not trust it :D
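For the occasional spot check, comparing a table checksum across the slaves is a quick sanity test, e.g. with the test database mentioned above (assuming the root password is exported as ROOT_PASSWORD):

$ kubectl exec mysql-mysql-slave-0 -- mysql -uroot -p"$ROOT_PASSWORD" -e "CHECKSUM TABLE employees.employees"
$ kubectl exec mysql-mysql-slave-1 -- mysql -uroot -p"$ROOT_PASSWORD" -e "CHECKSUM TABLE employees.employees"

Matching checksums on both slaves (and on the master) indicate the data is consistent.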

pentago closed this as completed on Oct 5, 2018
@juan131
Contributor

juan131 commented Oct 8, 2018

Great, thanks @pentago!! Keep us in the loop.
