
[Bug / Regression] mysql_db import fail to decompress dumps #20196

Open · Tronde opened this issue Jan 12, 2017 · 39 comments

Tronde (Contributor) commented Jan 12, 2017

From @bmalynovytch on June 2, 2016 15:43

ISSUE TYPE

Bug Report (regression)

COMPONENT NAME

mysql_db (import)

ANSIBLE VERSION
ansible 2.1.0.0
  config file = /Users/benjamin/xxxxxxx/ansible.cfg
  configured module search path = Default w/o overrides
CONFIGURATION
[defaults]
host_key_checking = False
forks=500
pipelining=True
retry_files_enabled = False

gathering = smart
fact_caching = jsonfile
fact_caching_connection = ./.tmp/ansible
fact_caching_timeout = 3600

[ssh_connection]
control_path = /tmp/ansible-ssh-%%h-%%p-%%r
ssh_args = -o ControlMaster=auto -o ControlPersist=30m
OS / ENVIRONMENT

Deployment OS: Mac OS X 10.11.5, python v2.7.11 with pyenv/virtualenv
Destination OS: ubuntu jessie/sid

SUMMARY

Using mysql_db to import gzipped or bzipped SQL dumps used to work like a charm with Ansible 2.0.2.0.
Now, compressed imports fail with a broken pipe error, whether the dump is .gz or .bz2.
Strangely, this does not happen with a small compressed file (1.8k gzip compressed, 6k uncompressed).
Maybe related to https://blog.nelhage.com/2010/02/a-very-subtle-bug/
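The linked post concerns SIGPIPE handling in Python subprocesses: Python ignores SIGPIPE at interpreter startup, child processes inherit that disposition, and a decompressor whose reader exits early then sees a write error ("Broken pipe") instead of being terminated quietly. A minimal sketch of that mechanism, assuming a POSIX host with `yes` and `head` available (illustrative only, not necessarily the root cause here):

```python
import signal
import subprocess

def restore_sigpipe():
    # Undo Python's SIG_IGN so the child dies quietly on a broken pipe,
    # the way it would when started from a shell.
    signal.signal(signal.SIGPIPE, signal.SIG_DFL)

# A writer whose reader exits almost immediately.
p1 = subprocess.Popen(["yes"], stdout=subprocess.PIPE,
                      preexec_fn=restore_sigpipe)
p2 = subprocess.Popen(["head", "-c", "1"], stdin=p1.stdout,
                      stdout=subprocess.DEVNULL)
p1.stdout.close()  # let p1 see the pipe close when p2 exits
p2.wait()
p1.wait()

# With SIGPIPE restored, p1 is killed by the signal (negative returncode)
# instead of reporting an I/O error the way bzip2 does in this issue.
print(p1.returncode)
```

Without the `preexec_fn`, `yes` would instead hit EPIPE and exit with a write-error message, which matches the bzip2 complaint reported above.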

STEPS TO REPRODUCE

Try to import a (large enough) compressed SQL dump with mysql_db.
The failure happens with a 3.5 MB gzip compressed / 20 MB uncompressed dump.

- name: Restore database
  mysql_db:
  args:
    name: my_db
    state: import
    target: /path_to_backups/backup-pre-release.sql.bz2
    login_host: "{{ db.host }}"
    login_port: "{{ db.port }}"
    login_user: "{{ db.user }}"
    login_password: "{{ db.passwd }}"
EXPECTED RESULTS

Import should just work.

ACTUAL RESULTS
fatal: [xxxxxx]: FAILED! => {"changed": false, "failed": true, 
"msg": 
"bzip2: I/O or other error, bailing out.  Possible reason follows.
 bzip2: Broken pipe
       Input file = /opt/xxxxxx/backup-pre-release.sql.bz2, output file = (stdout)
"}

Copied from original issue: ansible/ansible-modules-core#3835

Tronde (Contributor Author) commented Jan 12, 2017

From @ansibot on July 30, 2016 16:32

@Jmainguy ping, this issue is waiting for your response.
click here for bot help

Tronde (Contributor Author) commented Jan 12, 2017

From @Jmainguy on July 30, 2016 20:10

I tried to recreate this on ansible-2.2.0-0.git201605131739.e083fa3.devel.el7.centos.noarch and was unable to reproduce.

imported a 240mb .tar.gz (took a few hours, but it worked).

This was on centos, can you try again with devel on ubuntu and let me know if this is still happening?

Tronde (Contributor Author) commented Jan 12, 2017

From @ansibot on September 8, 2016 20:59

@Jmainguy, ping. This issue is still waiting on your response.
click here for bot help

Tronde (Contributor Author) commented Jan 12, 2017

From @Jmainguy on September 9, 2016 18:19

ansibot "needs_info"

Tronde (Contributor Author) commented Jan 12, 2017

From @bmalynovytch on September 9, 2016 19:03

The bug concerns compressed dumps of any size: the module doesn't uncompress them anymore.
It might have been fixed in recent versions, but I don't have time to give it a try: the platforms on which I use the mysql_db module now handle decompression before calling it, and I'm not working on them for now.
I don't have time to test that part again for the moment, sorry.

!needs_info

Tronde (Contributor Author) commented Jan 12, 2017

From @ilyapoz on September 27, 2016 18:24

Any workarounds so far? Still reproducible on Ubuntu Trusty in a VirtualBox VM with
ansible 2.1.1.0

Tronde (Contributor Author) commented Jan 12, 2017

From @ansibot on September 27, 2016 18:43

@Jmainguy, ping. This issue is still waiting on your response.
click here for bot help

Tronde (Contributor Author) commented Jan 12, 2017

From @ilyapoz on September 27, 2016 18:44

Sorry, a possible workaround for small DBs is not compressing the dump.

Tronde (Contributor Author) commented Jan 12, 2017

From @ansibot on December 9, 2016 19:50

This repository has been locked. All new issues and pull requests should be filed in https://github.com/ansible/ansible

Please read through the repomerge page in the dev guide. The guide contains links to tools which automatically move your issue or pull request to the ansible/ansible repo.

Tronde (Contributor Author) commented Jan 12, 2017

From @ulrith on December 20, 2016 11:35

I see the same behavior with Ansible 2.2.0.0 on Ubuntu 16.04.
Funny, but the 'unarchive' module task also fails if I try to decompress the archive before feeding the dump to the mysql_db module. >:-<
Too bad...

Tronde (Contributor Author) commented Jan 12, 2017

From @sachavuk on January 4, 2017 17:38

Hi,

I have the same issue on Debian 7.8 with Ansible 2.2.0.0, and the same error message 😢

Tronde (Contributor Author) commented Jan 12, 2017

Good evening,
I hope it was correct to move the issue to this repo, as suggested in "Move Issues and PRs to new Repo".

Like @ulrith and @sachavuk, I have this issue too. In my case it occurs on RHEL 7.3 with Ansible 2.2.0.0.

The task in my play looks like:

- name: Restore database
  mysql_db:
    name: db_name
    state: import
    target: /tmp/db_dump.sql.bz2

During the playbook run I receive the following error message:

TASK [role-name : Restore database] ******************************************
fatal: [xx.xx.xx.xx]: FAILED! => {"changed": false, "failed": true, "msg": "\nbzip2: I/O or other error, bailing out.  Possible reason follows.\nbzip2: Broken pipe\n\tInput file = /tmp/db_dump.sql.bz2, output file = (stdout)\n"}

I don't know how to debug an Ansible module. If there is any useful information I could provide, please tell me how to do that.

Regards,
Tronde

Tronde (Contributor Author) commented Jan 13, 2017

Hi there,
Not sure what's up exactly with these labels. But remember that version 2.2 is affected, too.

Jmainguy (Contributor) commented Jan 24, 2017

@Tronde I am still unable to reproduce this bug using the latest devel. Can you try to reproduce it with the latest devel? I followed the instructions above, using a compressed database that is larger than 3.5 MB when compressed.

[root@phy01 ~]# ansible -i localhost localhost -m mysql_db -a "state=dump target=/tmp/db.sql.bz2 name=diaspora"                                                                               
 [WARNING]: Host file not found: localhost

 [WARNING]: provided hosts list is empty, only localhost is available

localhost | SUCCESS => {
    "changed": true, 
    "db": "diaspora", 
    "msg": ""
}


[root@phy01 ~]# ansible -i /tmp/hosts all -m mysql_db -a "name=icannotreproducethisbug state=import target=/tmp/db.sql.bz2"
centos7.soh.re | SUCCESS => {
    "changed": true, 
    "db": "icannotreproducethisbug", 
    "msg": ""
}



[root@centos7 ~]# ls -ltrh /tmp/db.sql.bz2 
-rw-r--r--. 1 root root 79M Jan 24 15:23 /tmp/db.sql.bz2

[root@phy01 ~]# rpm -qa ansible
ansible-2.3.0-100.git201701131819.d25a708.devel.el7.centos.noarch
Jmainguy (Contributor) commented Jan 24, 2017

@ansibot 'needs_info'

@ansibot ansibot added the needs_info label Jan 24, 2017
Tronde (Contributor Author) commented Jan 24, 2017

@Jmainguy I was able to reproduce the problem with the latest devel:

$ ansible-playbook --version
ansible-playbook 2.3.0 (devel 6a6fb28af5) last updated 2017/01/24 18:33:11 (GMT +200)
  config file = /etc/ansible/ansible.cfg
  configured module search path = Default w/o overrides

Running $ ansible-playbook my-it-brain.yml with the task:

- name: Restore database
  mysql_db:
    name: my_db
    state: import
    target: /tmp/my_db.sql.bz2

results in:

TASK [role-name : Restore database] ******************************************
fatal: [10.0.2.4]: FAILED! => {"changed": false, "failed": true, "msg": "\nbzip2: I/O or other error, bailing out.  Possible reason follows.\nbzip2: Broken pipe\n\tInput file = /tmp/my_db.sql.bz2, output file = (stdout)\n"}
$ ll roles/my-it-brain/files/
total 129332
-rw-rw-r--. 1 tronde tronde   9754808 Jan  8 20:45 my_db.sql.bz2
-rw-rw-r--. 1 tronde tronde 122675935 Jan 10 13:18 another_file.tar.bz2

Please tell me, if you need further information.

Jmainguy (Contributor) commented Jan 24, 2017

I am testing against CentOS 7 with mariadb-server; are you testing against another DB? There is clearly something different between our two environment setups, because I am unable to reproduce this. I just compiled and installed the latest devel again to be sure.

[root@phy01 test]# ansible-playbook -i hosts  mysql_db.py 

PLAY [Please break] ************************************************************

TASK [import a bz2 file and break hopefully] ***********************************
changed: [centos7.soh.re]

PLAY RECAP *********************************************************************
centos7.soh.re             : ok=1    changed=1    unreachable=0    failed=0   

[root@phy01 test]# cat mysql_db.py 
---

- name: Please break
  hosts: all
  gather_facts: false
  tasks:
    - name: import a bz2 file and break hopefully
      mysql_db:
        name: my_db
        state: import
        target: /tmp/db.sql.bz2
[root@phy01 test]# rpm -qa ansible
ansible-2.3.0-100.git201701241457.8d4246c.devel.el7.centos.noarch

[root@centos7 ~]# ls -ltrh /tmp/db.sql.bz2
-rw-r--r--. 1 root root 79M Jan 24 15:23 /tmp/db.sql.bz2
[root@centos7 ~]# rpm -qa | grep -i maria
mariadb-5.5.52-1.el7.x86_64
mariadb-server-5.5.52-1.el7.x86_64
mariadb-libs-5.5.52-1.el7.x86_64

Is it possible bzip2 is failing to uncompress because the disk is running out of space, or something similar? Why is bzip2 failing in your env (and those of the other testers reproducing this)? bzip2: I/O or other error, bailing out.

Tronde (Contributor Author) commented Jan 24, 2017

Hi,

here is some information about my environment.

Controller:

$ cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 7.3 (Maipo)

Target Node:

$ lsb_release -a
No LSB modules are available.
Distributor ID:	Ubuntu
Description:	Ubuntu 14.04.5 LTS
Release:	14.04
Codename:	trusty

$ dpkg -l mysql-server
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name                        Version            Architecture       Description
+++-===========================-==================-==================-============================================================
ii  mysql-server                5.5.54-0ubuntu0.14 all                MySQL database server (metapackage depending on the latest v

The node should have enough disk space and ram to extract the bz2 file:

df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            486M  4.0K  486M   1% /dev
tmpfs           100M  476K   99M   1% /run
/dev/sda1        12G  1.8G  9.4G  16% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
none            5.0M     0  5.0M   0% /run/lock
none            497M     0  497M   0% /run/shm
none            100M     0  100M   0% /run/user

I increased the memory to 2048 MB to be sure not to run into an out-of-memory issue, but I'm still getting the same error message.

Jmainguy (Contributor) commented Jan 24, 2017

I have no idea why I cannot reproduce this. Can you try manually running bunzip2 on that file and see if it produces the I/O error?

Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 3.13.0-32-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

  System information as of Tue Jan 24 20:07:20 CET 2017

  System load:  0.25              Processes:           86
  Usage of /:   4.2% of 38.02GB   Users logged in:     0
  Memory usage: 17%               IP address for eth0: 192.168.122.237
  Swap usage:   0%

  Graph this data and manage this system at:
    https://landscape.canonical.com/

New release '16.04.1 LTS' available.
Run 'do-release-upgrade' to upgrade to it.

*** System restart required ***
Last login: Tue Jan 24 20:07:20 2017 from 192.168.122.1
root@ubuntu1404:~# df -Th
Filesystem                      Type      Size  Used Avail Use% Mounted on
udev                            devtmpfs  487M  8.0K  487M   1% /dev
tmpfs                           tmpfs     100M  432K   99M   1% /run
/dev/mapper/ubuntu1404--vg-root ext4       39G  1.8G   35G   5% /
none                            tmpfs     4.0K     0  4.0K   0% /sys/fs/cgroup
none                            tmpfs     5.0M     0  5.0M   0% /run/lock
none                            tmpfs     498M     0  498M   0% /run/shm
none                            tmpfs     100M     0  100M   0% /run/user
/dev/sda1                       ext2      236M   40M  184M  18% /boot
root@ubuntu1404:~# ls -ltrh /tmp/
total 79M
-rw-r--r-- 1 root root 79M Jan 24 20:07 db.sql.bz2
root@ubuntu1404:~# bunzip /tmp/db.sql.bz2 
No command 'bunzip' found, did you mean:
 Command 'runzip' from package 'rzip' (universe)
 Command 'funzip' from package 'unzip' (main)
 Command 'ebunzip' from package 'eb-utils' (universe)
 Command 'unzip' from package 'unzip' (main)
 Command 'bunzip2' from package 'bzip2' (main)
 Command 'gunzip' from package 'gzip' (main)
 Command 'lunzip' from package 'lunzip' (universe)
bunzip: command not found
root@ubuntu1404:~# bunzip2 /tmp/db.sql.bz2 
root@ubuntu1404:~# ls -ltrh /tmp/
total 123M
-rw-r--r-- 1 root root 123M Jan 24 20:07 db.sql
root@ubuntu1404:~# dpkg -l mysql-server
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name                                      Version                   Architecture              Description
+++-=========================================-=========================-=========================-========================================================================================
un  mysql-server                              <none>                    <none>                    (no description available)
root@ubuntu1404:~# mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 45
Server version: 5.5.54-0ubuntu0.14.04.1 (Ubuntu)

Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| my_db              |
| mysql              |
| performance_schema |
+--------------------+
4 rows in set (0.00 sec)

mysql> Bye
root@ubuntu1404:~# free -m
             total       used       free     shared    buffers     cached
Mem:           994        839        154          0         30        527
-/+ buffers/cache:        281        712
Swap:         1023          3       1020
Tronde (Contributor Author) commented Jan 24, 2017

Using bunzip2 to extract the file locally on the target node works just fine, without any error.

Unfortunately I have no idea how to help you reproduce this error. :-(

@ansibot ansibot removed the needs_info label Jan 24, 2017
Jmainguy (Contributor) commented Jan 24, 2017

I am unable to reproduce this bug. That being said, supposing it does exist, it has to do with stdout ending before the compression tool thinks it should. My guess is running out of RAM / swap.

I added this code in 1608163

which was reverted with this code aa79810

So it seems the original approach of decompressing to a file, importing, and then compressing back up was nixed in favor of decompressing to stdout and importing from that stream.

I imagine going back to disk will fix this, at the cost of speed and of disk space while the playbook is running (it will compress again after it imports).

Thoughts?
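For what it's worth, the decompress-to-disk approach could be sketched like this (a hypothetical helper using Python's bz2 module, not the module's actual code):

```python
import bz2
import shutil
import tempfile

def decompress_to_tempfile(target):
    """Decompress a .bz2 dump to a temporary file and return its path,
    so the import can read a plain file instead of a pipe. Costs disk
    space and time, but avoids broken-pipe failures mid-import."""
    src = bz2.open(target, "rb")
    dst = tempfile.NamedTemporaryFile(suffix=".sql", delete=False)
    with src, dst:
        shutil.copyfileobj(src, dst)
    return dst.name
```

The returned path could then be fed to the mysql client directly, with the temporary file removed (or recompressed) afterwards.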

Tronde (Contributor Author) commented Jan 25, 2017

Guessing running out of ram / swap.

Well, I gave it another try today. Using two different MySQL dumps I encountered the same error. In both cases I had at least 120 MB of RAM left, which I thought should be enough.

My understanding of this matter is not informed enough to give you any helpful thoughts on it. But if you go back to disk, I would be happy to give it another run.

cristiroma commented May 9, 2017

In my case I had the same error as the OP, but when I tried to load the file manually on the remote host (bunzip2 -c file.tar.bz2 | mysql db), I got the error "MySQL server has gone away": the problem was max_allowed_packet being too small.

HTH.

er1zo commented May 24, 2017

Apparently this issue appears only if the database has already been imported (on the second and every subsequent run). The module could be more idempotent.

DWSR commented Jun 13, 2017

Want to add a "me too" for this issue. I'm importing a gzipped SQL file and getting a broken pipe. My DB is 54 MB compressed, ~400 MB uncompressed. I'm using Ansible 2.3 on WSL; the target box is Ubuntu 16.04.2 / MariaDB 10. My workaround for now is to decompress the dump file with a shell task (because unarchive doesn't support this operation) and then import it.

rockpunk commented Jul 5, 2017

Hey guys, here's another cause of the broken pipe error. Maybe it explains some of the "can't reproduce" reports above?

In my case, it boiled down to using passwords with special characters. Ansible tries to be smart and quotes the password before sending it off to mysql; unfortunately mysql assumes the quotes are part of the password.

Basically, the mysql_db module is doing:

        # cmd looks something like:
        cmd = ['mysql', '--user=%s' % pipes.quote(user),
               '--password=%s' % pipes.quote(password), ... ]
        # comp_prog_path is the decompression tool chosen from the
        # target's extension (gzip, bzip2, etc.)
        p1 = subprocess.Popen([comp_prog_path, '-dc', target],
                              stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        p2 = subprocess.Popen(cmd, stdin=p1.stdout,
                              stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        (stdout2, stderr2) = p2.communicate()
        p1.stdout.close()
        p1.wait()
        if p1.returncode != 0:
            stderr1 = p1.stderr.read()
            return p1.returncode, '', stderr1
        else:
            return p2.returncode, stdout2, stderr2

So, when the password is 'test1234!', Ansible tries to pass ['mysql', '--user=admin', "--password='test1234!'", '-D', 'testdb'] to subprocess. (Notice the quotes around the password.)

When exec hands this off to mysql, MySQL fails with ERROR 1045 (28000): Access denied for user 'admin'@'x.x.x.x' (using password: YES), and both p1 and p2 return non-zero exit codes. Unfortunately, Ansible only returns the broken pipe error (from the gzip, bzip2, etc. commands).
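The quoting behavior is easy to demonstrate; the sketch below uses shlex.quote, which is where pipes.quote lives in Python 3:

```python
from shlex import quote  # pipes.quote in the Python 2 code above

# quote() targets strings interpolated into a *shell* command line.
# subprocess's list form bypasses the shell, so nothing ever strips
# the quotes: they become literal characters of the password argument.
print(quote("test1234!"))   # quoted, because '!' is special to shells
print(quote("plainpass"))   # left untouched: no special characters
```

This also explains why only passwords containing shell-special characters trigger the failure: plain alphanumeric passwords pass through quote() unchanged.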

I monkey-patched my version of Ansible's mysql_db.py to remove pipes.quote() from the password, and all worked fine.

Given the difficulty of troubleshooting the many different causes of broken pipes, I would suggest:

  1. only returning p1's error if the pipe wasn't broken (not sure of the best way to handle this with subprocess, but bash does it right: gzip -dc foo.gz | mysql --password='wrong' does not show a broken pipe error for gzip.)
  2. not using pipes.quote() for passwords
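Suggestion 1 could be sketched as follows: prefer the consumer's exit status when it fails, since its stderr names the actual cause. This is a hypothetical helper, not the module's real code, and it assumes the producer writes little to stderr:

```python
import subprocess

def run_pipeline(producer_cmd, consumer_cmd):
    """Run `producer | consumer` and report the most useful error.
    If the consumer failed (e.g. bad credentials), its stderr is the
    real cause; the producer's 'Broken pipe' is only a side effect."""
    p1 = subprocess.Popen(producer_cmd, stdout=subprocess.PIPE,
                          stderr=subprocess.PIPE)
    p2 = subprocess.Popen(consumer_cmd, stdin=p1.stdout,
                          stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    p1.stdout.close()  # let p1 see the pipe close if p2 exits early
    stdout2, stderr2 = p2.communicate()
    stderr1 = p1.stderr.read()
    p1.wait()
    if p2.returncode != 0:
        return p2.returncode, stdout2, stderr2   # consumer's real error
    if p1.returncode != 0:
        return p1.returncode, b"", stderr1       # producer genuinely failed
    return 0, stdout2, stderr2
```

With this ordering, a consumer that exits non-zero surfaces its own status and stderr instead of the producer's broken-pipe complaint, matching the bash behavior described in suggestion 1.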
rockpunk commented Jul 6, 2017

Submitted PR #26504 for this.

muratdx commented Jul 21, 2017

Any chance we can get the fix into the 2.3 series as well?

@ansibot ansibot added bug and removed bug_report labels Mar 1, 2018
kentr added a commit to kentr/drupal-vm that referenced this issue Apr 22, 2018
The password is no longer necessary for the task
since a `.my.cnf` file is now created.

It was also interfering with import of a gzipped
dump file on AWS Ubuntu.

Possibly related to 
ansible/ansible#20196
kentr added a commit to kentr/ansible-role-wordpress that referenced this issue Jun 1, 2018
Works around the Ansible issue with importing gzipped files when a MySQL
password is specified.

See ansible/ansible#20196.
@dagwieers dagwieers added the mysql label Jan 26, 2019
@ansibot ansibot added the test label Feb 3, 2019
@ansibot ansibot added the database label Feb 19, 2019
rockpunk added a commit to rockpunk/ansible that referenced this issue Feb 20, 2019
Tronde (Contributor Author) commented Mar 17, 2020

Time appropriate greetings everyone,
I would like to provide some new information to this issue.

Information about the test case

I've used a bzip2 compressed MySQL dump file and tried to import it on two different target systems.

Ansible Control Node

Red Hat Enterprise Linux Server release 7.7 (Maipo)
ansible 2.9.6
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/tronde/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Jun 11 2019, 14:33:56) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]

Target nodes

  1. Debian Buster (current patch level)
  2. CentOS 7.7 (current patch level)

Scenario 1: Successful deployment on Debian Buster

Playbook

---

- name: Test case to debug mysql_db module
  hosts: 10.0.2.6
  tasks:
    - name: Copy database dump file
      copy:
        src: /tmp/test-case.sql.bz2
        dest: /tmp

    - name: Restore database
      mysql_db:
        name: test-case-db
        state: import
        target: /tmp/test-case.sql.bz2

Test file

-rwxr-xr-x. 1 1000 1000 12445232 Mar 17 20:40 /tmp/test-case.sql.bz2

Playbook run

PLAY [Test case to debug mysql_db module] **************************************

TASK [Gathering Facts] *********************************************************
ok: [10.0.2.6]

TASK [Check for nginx, php-fpm and mysql-server] *******************************
changed: [10.0.2.6]

TASK [Copy database dump file] *************************************************
changed: [10.0.2.6]

TASK [Restore database] ********************************************************
changed: [10.0.2.6]

PLAY RECAP *********************************************************************
10.0.2.6                   : ok=4    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Everything is fine.

Scenario 2: Unsuccessful deployment on CentOS 7.7

Playbook and test file are the same as in scenario 1.

Playbook run

PLAY [Test case to debug mysql_db module] **************************************

TASK [Gathering Facts] *********************************************************
ok: [10.0.2.15]

TASK [Copy database dump file] *************************************************
changed: [10.0.2.15]

TASK [Restore database] ********************************************************
fatal: [10.0.2.15]: FAILED! => {"changed": false, "msg": "\nbzip2: I/O or other error, bailing out.  Possible reason follows.\nbzip2: Broken pipe\n\tInput file = /tmp/test-case.sql.bz2, output file = (stdout)\n"}

PLAY RECAP *********************************************************************
10.0.2.15                  : ok=4    changed=3    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0

Running bunzip2 /tmp/test-case.sql.bz2 on the target node completed without error.

In case you need more information, please tell me what to gather and how.

Regards,
Tronde
