[Bug / Regression] mysql_db import fail to decompress dumps #20196
From @bmalynovytch on June 2, 2016 15:43
Bug Report (regression)
OS / ENVIRONMENT
Deployment OS: Mac OS X 10.11.5, python v2.7.11 with pyenv/virtualenv
STEPS TO REPRODUCE
Try to import a compressed (large enough) SQL dump with mysql_db.
EXPECTED RESULTS
Import should just work.
Copied from original issue: ansible/ansible-modules-core#3835
From @Jmainguy on July 30, 2016 20:10
I tried to recreate this on ansible-2.2.0-0.git201605131739.e083fa3.devel.el7.centos.noarch and was unable to reproduce.
I imported a 240 MB .tar.gz (it took a few hours, but it worked).
This was on CentOS; can you try again with devel on Ubuntu and let me know if this is still happening?
From @bmalynovytch on September 9, 2016 19:3
The bug concerns compressed dumps of any size: the module doesn't uncompress them anymore.
From @ansibot on December 9, 2016 19:50
This repository has been locked. All new issues and pull requests should be filed in https://github.com/ansible/ansible
Please read through the repomerge page in the dev guide. The guide contains links to tools which automatically move your issue or pull request to the ansible/ansible repo.
From @ulrith on December 20, 2016 11:35
I have the same behavior with my Ansible 184.108.40.206 on Ubuntu 16.04.
Like @ulrith and @sachavuk, I have this issue too. In my case it occurs on RHEL 7.3 with ansible 220.127.116.11.
The task in my play looks like:
During the playbook run I receive the following error message:
I don't know how to debug an ansible module. If there is any useful information I could provide, please tell me how to gather it.
@Tronde I am still unable to reproduce this bug using latest devel. Can you try to reproduce with latest devel? I followed the instructions above, using a compressed database that is larger than 3.5 MB when compressed.
@Jmainguy I was able to reproduce the problem with the latest devel:
Please tell me, if you need further information.
I am testing against CentOS 7 with mariadb-server; are you testing against another DB? There is clearly something different between our two env setups, because I am unable to reproduce this. I just compiled and installed latest devel again to be sure.
Is it possible bz2 is failing to uncompress because the disk is running out of space or something? Why is bzip2 failing in your env (and in the envs of the other testers reproducing this) with `bzip2: I/O or other error, bailing out.`?
Here is some information about my env.
The node should have enough disk space and RAM to extract the bz2 file:
I increased the memory to 2048 MB to be sure not to run into an out-of-memory issue, but I'm still getting the same error message.
I have no idea why I cannot reproduce this. Can you try manually running bunzip2 on that file and see if it throws the I/O error?
I am unable to reproduce this bug. That being said, supposing it does exist, it has to do with stdout ending before the compression tool thinks it should. My guess is running out of RAM / swap.
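For what it's worth, the broken-pipe mechanism is easy to demonstrate on its own. Here is a minimal, self-contained sketch (hypothetical commands, not the module's code) where the reader of a pipe exits early and the writer bails out, just like bzip2 does above:

```python
# Minimal sketch (not from mysql_db): if the reader of a pipe exits before
# the writer is finished, the writer is killed with SIGPIPE -- the same
# symptom as "bzip2: I/O or other error, bailing out."
import os
import subprocess

devnull = open(os.devnull, 'wb')
# Writer: produces endless output ('yes' stands in for bzip2 -dc).
producer = subprocess.Popen(['yes'], stdout=subprocess.PIPE)
# Reader: consumes one line and exits ('head' stands in for a mysql
# process that dies early, e.g. on a failed login or an OOM kill).
consumer = subprocess.Popen(['head', '-n', '1'],
                            stdin=producer.stdout, stdout=devnull)
producer.stdout.close()  # drop our copy so the producer sees the pipe close
consumer.wait()
producer.wait()
print('producer exit status: %d' % producer.returncode)  # -13 == SIGPIPE
```

So anything that makes mysql exit mid-import (auth failure, OOM kill, full disk) would surface upstream as exactly this kind of I/O error in the decompressor.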
I added this code in 1608163,
which was reverted in aa79810.
So it seems the approach of decompressing to a file, importing, and then compressing back up was nixed in favor of decompressing to stdout and importing from that stdout.
I imagine going back to disk will fix this, at the cost of speed, and of disk space while the playbook is running (it will compress back up after it imports).
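For illustration, a disk-based flow along those lines might look like this rough sketch (import_via_tempfile is a hypothetical helper, not the module's actual code):

```python
# Sketch only: decompress the dump to a temporary file first, then let
# mysql read from disk, so a slow or dying import can't break a live pipe.
import os
import subprocess
import tempfile

def import_via_tempfile(comp_prog_path, target, mysql_cmd):
    tmp = tempfile.NamedTemporaryFile(delete=False)
    try:
        # Step 1: fully decompress the dump before mysql ever starts.
        rc = subprocess.call([comp_prog_path, '-dc', target], stdout=tmp)
        tmp.close()
        if rc != 0:
            return rc, '', 'decompression failed'
        # Step 2: import from the temp file instead of a pipe.
        with open(tmp.name, 'rb') as dump:
            p = subprocess.Popen(mysql_cmd, stdin=dump,
                                 stdout=subprocess.PIPE, stderr=subprocess.PIPE)
            stdout, stderr = p.communicate()
        return p.returncode, stdout, stderr
    finally:
        tmp.close()
        os.unlink(tmp.name)  # disk space is only used while the task runs
```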
Well, I gave it a try again today. Using two different MySQL dumps, I encountered the same error. In both cases I had at least 120 MB of RAM left; I thought that should be enough.
My understanding of this matter is not informed enough to give you any helpful thoughts on it, but if you go back to disk, I would be happy to give it another run.
Want to add a "me too" for this issue. I'm importing a gzipped SQL file and getting a broken pipe. My DB is 54 MB compressed, ~400 MB uncompressed, using ansible 2.3 on WSL; the target box is Ubuntu 16.04.2 / MariaDB 10. The workaround for now is to uncompress the db file using a shell task (because unarchive doesn't support this operation) and then import it.
Hey guys, here's another cause of the broken pipe error. Maybe it helps with some of the can't-reproduce cases above?
In my case, it boiled down to using passwords with special characters. Ansible tries to be smart and quote passwords before sending them off to mysql; unfortunately, mysql assumes the quotes are part of the password.
Basically, the mysql_db module is doing:
```python
# cmd looks something like:
# cmd = ['mysql', '--user=%s' % pipes.quote(user), '--password=%s' % pipes.quote(password), ...]
# comp_prog_path is the compression tool chosen from the target's extension (gzip, bzip2, etc.)
p1 = subprocess.Popen([comp_prog_path, '-dc', target],
                      stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p2 = subprocess.Popen(cmd, stdin=p1.stdout,
                      stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(stdout2, stderr2) = p2.communicate()
p1.stdout.close()
p1.wait()
if p1.returncode != 0:
    stderr1 = p1.stderr.read()
    return p1.returncode, '', stderr1
else:
    return p2.returncode, stdout2, stderr2
```
So, when the password is 'test1234!', ansible tries to pass `--password='test1234!'` (with the quote characters included).
When exec passes this off to mysql, MySQL fails with an authentication error.
I monkey-patched my version of ansible's mysql_db.py to remove pipes.quote() from the password, and everything worked fine.
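To see the quoting problem in isolation, here is a small demonstration sketch (Python 2, to match the module's use of the pipes module):

```python
# Demonstration of the quoting bug: pipes.quote() produces *shell* quoting,
# but subprocess.Popen(cmd_list) bypasses the shell, so the quote characters
# reach mysql as literal parts of the password.
import pipes

password = 'test1234!'
print('--password=%s' % pipes.quote(password))
# -> --password='test1234!'
# mysql treats the single quotes as part of the password, authentication
# fails, mysql exits, and the decompressor upstream dies with a broken pipe.
```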
Given the difficulty of troubleshooting the many different causes of broken-pipe errors, I would suggest:
Time-appropriate greetings, everyone,
Information about the test case
I've used a bzip2 compressed MySQL dump file and tried to import it on two different target systems.
Ansible Control Node
Red Hat Enterprise Linux Server release 7.7 (Maipo)
Scenario 1: Successful deployment on Debian Buster
Everything is fine.
Scenario 2: Unsuccessful deployment on CentOS 7.7
Playbook and test file are the same as in scenario 1.
In case you need more information, please tell me what you need and how to gather it.