Error when making a backup of a directory #63
Hi,

First of all, congratulations on the idea.

My error occurs when I try to make a backup of a directory in which new files are constantly being created. Is there any solution for this?

Thanks.

---
Hi, this is a tar restriction, so you have to work around it. Since you are writing to the directory constantly, I assume there is a lot of data, so copying the directory and creating the backup from the copy is probably not an option. If I'm wrong here, phpbu could do the copy beforehand, create the backup, and delete the copy again. There are some other options as well.
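For illustration, a minimal sketch of that copy-first approach, using the upload path mentioned later in this thread (the snapshot location and the plain `cp` are assumptions, not phpbu behaviour):

```sh
# Snapshot the live directory first, so tar reads files nobody is writing to
cp -a /var/www/html/app/public/uploads /tmp/uploads-snapshot

# Back up the stable copy instead of the live directory
tar -jcf "/backups/arquivos/uploads-$(date +%Y%m%d-%H%M).tar.bz2" -C /tmp uploads-snapshot

# Remove the snapshot again
rm -rf /tmp/uploads-snapshot
```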
---
Hi, thanks for answering. I'll plan how to make this work with phpbu.

---
This runs great, without errors. There is probably a difference between this and the command generated by phpbu. Thanks.

---
You can check the generated commands by running phpbu with the `--simulate` option, or run it with debug output enabled. The last one will execute the backup; the first one will only show what phpbu would do if executed.
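For example, with the configuration path used later in this thread (`--simulate` is the flag confirmed in this discussion):

```sh
# Dry run: only print what phpbu would do, without executing anything
phpbu --configuration=/var/www/html/app/phpbu.json --simulate

# Real run: actually execute the backups
phpbu --configuration=/var/www/html/app/phpbu.json
```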
---

I ran it with the simulate option and everything returned OK, without errors. But when running without the simulate option, many errors occur, and I cannot see which commands phpbu generates.

---
Can you post the phpbu backup configuration that backs up the folder and causes the errors? I will try to run some tests in a similar environment.

---
Here is my phpbu.json:

```json
{
"verbose": true,
"logging": [
{
"type": "json",
"target": "/backups/json.log"
},
{
"type": "mail",
"options": {
"transport": "smtp",
"recipients": "emails@...",
"smtp.port": "587",
"smtp.host": "...",
"smtp.username": "...",
"smtp.password": "...",
"smtp.encryption": "tls"
}
}
],
"backups": [
{
"source": {
"type": "mysqldump",
"options": {
"host": "",
"databases": "",
"user": "",
"password": ""
}
},
"target": {
"dirname": "/backups/mysql",
"filename": "%Y%m%d-%H%i.sql",
"compress": "bzip2"
},
"checks": [
{
"type": "SizeMin",
"value": "100M"
}
],
"syncs": [
{
"type": "amazons3",
"options": {
"key": "...",
"secret": "...",
"bucket": "...",
"region": "...",
"path": "/mysql"
}
}
],
"cleanup": {
"type": "Quantity",
"options": {
"amount": 2
}
},
"crypt": {
"type": "openssl",
"options": {
"password": "...",
"algorithm": "aes-256-cbc"
}
}
},
{
"source": {
"type": "redis",
"options": {
"pathToRedisData": "/var/lib/redis/dump.rdb"
}
},
"target": {
"dirname": "/backups/redis",
"filename": "%Y%m%d-%H%i",
"compress": "bzip2"
},
"syncs": [
{
"type": "amazons3",
"options": {
"key": "...",
"secret": "...",
"bucket": "...",
"region": "...",
"path": "/redis"
}
}
],
"cleanup": {
"type": "Quantity",
"options": {
"amount": 2
}
},
"crypt": {
"type": "openssl",
"options": {
"password": "...",
"algorithm": "aes-256-cbc"
}
}
},
{
"source": {
"type": "tar",
"options": {
"path": "/var/www/html/app/data/arquivos"
}
},
"target": {
"dirname": "/backups/arquivos",
"filename": "arquivos-%Y%m%d-%H%i",
"compress": "bzip2"
},
"checks": [
{
"type": "SizeMin",
"value": "2G"
}
],
"syncs": [
{
"type": "amazons3",
"options": {
"key": "...",
"secret": "...",
"bucket": "...",
"region": "...",
"path": "/arquivos"
}
}
],
"cleanup": {
"type": "Quantity",
"options": {
"amount": 1
}
},
"crypt": {
"type": "openssl",
"options": {
"password": "...",
"algorithm": "aes-256-cbc"
}
}
},
{
"source": {
"type": "tar",
"options": {
"path": "/var/www/html/app/public/uploads"
}
},
"target": {
"dirname": "/backups/arquivos",
"filename": "uploads-%Y%m%d-%H%i",
"compress": "bzip2"
},
"checks": [
{
"type": "SizeMin",
"value": "2G"
}
],
"syncs": [
{
"type": "amazons3",
"options": {
"key": "...",
"secret": "...",
"bucket": "...",
"region": "...",
"path": "/arquivos"
}
}
],
"cleanup": {
"type": "Quantity",
"options": {
"amount": 1
}
},
"crypt": {
"type": "openssl",
"options": {
"password": "...",
"algorithm": "aes-256-cbc"
}
}
}
]
}
```

I'm executing it with `phpbu --configuration=/var/www/html/app/phpbu.json`.

---
I cleaned up the configuration a bit, turned debug off, and ran phpbu. The commands phpbu executes are the two `tar` calls using the `-C` option (the same form as the test further down). The difference to your original command is that `-C` option. Could you check whether executing those two commands manually gives you the same errors you are experiencing when they run via phpbu (`arquivos/pacientes: file changed as we read it`)? Also make sure the command you are using in your shell script still works. If your command works and the ones with the `-C` option don't, I will look into it.

If those work, I will add an option to the tar source. Here's the config I used:

```json
{
"verbose": false,
"backups": [
{
"source": {
"type": "tar",
"options": {
"path": "/var/www/html/app/data/arquivos"
}
},
"target": {
"dirname": "/backups/arquivos",
"filename": "arquivos-%Y%m%d-%H%i",
"compress": "bzip2"
}
},
{
"source": {
"type": "tar",
"options": {
"path": "/var/www/html/app/public/uploads"
}
},
"target": {
"dirname": "/backups/arquivos",
"filename": "uploads-%Y%m%d-%H%i",
"compress": "bzip2"
}
}
]
}
```

---
Hi, I ran this test:

```sh
/bin/tar -jcf '/backups/bjm/arquivos/uploads-20160519-0557.bz2' -C '/var/www/html/app/public' 'uploads'
```

or, with the `--ignore-failed-read` flag:

```sh
/bin/tar --ignore-failed-read -jcf '/backups/bjm/arquivos/uploads-20160519-0557.bz2' -C '/var/www/html/app/public' 'uploads'
```

while creating new files in the directory:

```sh
touch public/uploads/pacientes/teste.txt
touch public/uploads/teste.txt
```

Result:

```
/bin/tar: uploads/pacientes: file changed as we read it
/bin/tar: uploads: file changed as we read it
```

---
What about:

```sh
/bin/tar -jcf '/backups/bjm/arquivos/uploads-20160519-0557.bz2' '/var/www/html/app/public/uploads'
```

That should be exactly what you were using before. Does this work, even if you create new files?

---
The file is created, but with these errors phpbu can't complete the task, or am I wrong?

---
So let me recap the three variants:

1. `tar` with the `-C` option,
2. `tar` with `-C` plus `--ignore-failed-read`,
3. `tar` with the absolute path (your original command).

Right? I would have hoped Nr. 2 would work. But if you confirm that only Nr. 3 works for you, I will add an option.

---
Running with the flag now; after that, I'm running without it.

---
The file is created. I guess the error/warning messages don't actually stop the compression, but phpbu doesn't understand this and aborts the process. Or the compression doesn't complete correctly and the file is left behind in the folder.
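That guess matches GNU tar's behaviour: the archive still gets written, but tar exits with code 1 when a file changed while being read, and a caller that only checks the exit code will treat the run as failed. A rough way to observe this (test paths assumed):

```sh
# Keep writing into the directory while tar reads it
( while true; do touch /var/www/html/app/public/uploads/teste.txt; sleep 0.1; done ) &
WRITER=$!

/bin/tar -jcf /tmp/uploads-test.tar.bz2 -C /var/www/html/app/public uploads
echo "tar exit code: $?"   # GNU tar exits 1 if a file changed while being archived

kill $WRITER
```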
---

phpbu currently executes tar without the `--ignore-failed-read` flag. I will add an `'ignoreFailedRead': true` option for the tar source. With this, tar should ignore the error and no error code should be returned, so phpbu shouldn't mark this backup as failed anymore.

---
I just released phpbu 3.1.4. Please update phpbu and add the new `ignoreFailedRead` option to your tar sources. Hope it works :)

---
I'll try today :D

---
It still gives errors :( I'll run only the mysql backup to narrow down the problem.

---
No error returned and no message in json.log. The mysqldump command generated with `--simulate` runs fine outside phpbu:

```sh
/usr/bin/mysqldump --user='x' --password='y' --host='z' 'k' > /backups/bjm/mysql/20160522-1426.sql
```

---
OK, I managed to reproduce the error and changed the affected code. With these changes I couldn't reproduce the error anymore. I will release this update shortly.

---
I just released version 3.1.5. Please update phpbu and try the tar generation again.

---
Same errors :( Here is my phpbu.json. The config is OK, right?

---
Ah, I see the problem: the `"true"` has to be a string.

```json
...
"source": {
    "type": "tar",
    "options": {
        "path": "/var/.../arquivos",
        "ignoreFailedRead": "true"
    }
}
...
"source": {
    "type": "tar",
    "options": {
        "path": "/var/.../uploads",
        "ignoreFailedRead": "true"
    }
}
...
```
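With the option set like this, the command phpbu builds should match the `--ignore-failed-read` variant tested earlier in this thread:

```sh
/bin/tar --ignore-failed-read -jcf '/backups/bjm/arquivos/uploads-20160519-0557.bz2' -C '/var/www/html/app/public' 'uploads'
```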
---

@silasrm any feedback on the tar execution with the updated config (`"true"`)?

---
Sorry for the delay. I'll try today and get back with a response. Thanks.

---
Sorry for the delay again. Everything works fine, except that AWS S3 (my sync config) has an upload size limit, and except for the mysqldump backup:

http://aws.amazon.com/blogs/aws/amazon-s3-object-size-limit/

I'll check with AWS S3. The mysqldump backup is skipped and doesn't create any log about the reason for the failure. Do you know anything about the AWS S3 upload limit? Thanks for helping me, and sorry for the trouble.

---
No worries :) Depending on the size of the data you are uploading, Amazon S3 offers the following options:

http://docs.aws.amazon.com/AmazonS3/latest/dev/UploadingObjects.html

So if your file is bigger than 5GB, phpbu has to support multipart uploads.
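As an interim workaround until phpbu handles this itself, the AWS CLI performs multipart uploads automatically for large files; a sketch with an assumed bucket name and target path:

```sh
# The AWS CLI switches to multipart upload on its own for large objects
aws s3 cp /backups/bjm/mysql/20160522-1426.sql.bz2 s3://my-backup-bucket/mysql/
```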
---

Hi, I'll need multipart upload support, and I'll look into how to implement this in phpbu. My main concern is the mysqldump backup: it fails, but no error log is returned at all. Any suggestions? Thanks.

---
So, grab the generated command via `phpbu --simulate` and check whether running it manually works?

---
It's failing, but phpbu doesn't return any error, inside or outside of the log. phpbu generates this command (the same mysqldump call as above), and it runs OK without errors.

---
Seems pretty simple. Are you running any checks on the created mysql backup?

---
Forget that!

---
So, are you using a check?

---
My config:

```json
{
"source": {
"type": "mysqldump",
"options": {
....
}
},
"target": {
"dirname": "/backups/bjm/mysql",
"filename": "%Y%m%d-%H%i.sql",
"compress": "bzip2"
},
"checks": [
{
"type": "SizeMin",
"value": "100M"
}
],
"syncs": [
{
"type": "amazons3",
"options": {
....
}
}
],
"cleanup": {
"type": "Quantity",
"options": {
"amount": 2
}
},
"crypt": {
"type": "openssl",
"options": {
"password": "4M4z0n14",
"algorithm": "aes-256-cbc"
}
}
}
```

---
And what kind of error occurs?

---
With tar.gz, the same database is 142MB. I'll decrease the check value and test it now :D Sorry about the issue. Thanks.

---
Hi @silasrm, the new phpbu release supports multipart uploads. Just add `"useMultiPartUpload": "true"` to your Amazon sync configuration and you should be fine. Cheers, Sebastian
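A sketch of where the option goes, based on the amazons3 sync block posted earlier in this thread (credentials elided):

```json
"syncs": [
    {
        "type": "amazons3",
        "options": {
            "key": "...",
            "secret": "...",
            "bucket": "...",
            "region": "...",
            "path": "/mysql",
            "useMultiPartUpload": "true"
        }
    }
]
```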
---

Thank you for your help improving phpbu.

---
This is amazing, dude :D Many thanks!

---
I'm very happy 👯 Now everything is OK:

```
phpbu 3.1.6

Runtime:       PHP 5.6.4-4ubuntu6.4
Configuration: /var/.../phpbu.json

Time: 1.28 hours, Memory: 21.50Mb

backup mysqldump: OK
          | executed | skipped | failed |
----------+----------+---------+--------+
 checks   |        1 |         |      0 |
 crypts   |        1 |       0 |      0 |
 syncs    |        1 |       0 |      0 |
 cleanups |        1 |       0 |      0 |
----------+----------+---------+--------+

backup redis: OK
          | executed | skipped | failed |
----------+----------+---------+--------+
 checks   |        0 |         |      0 |
 crypts   |        1 |       0 |      0 |
 syncs    |        1 |       0 |      0 |
 cleanups |        1 |       0 |      0 |
----------+----------+---------+--------+

backup tar: OK
          | executed | skipped | failed |
----------+----------+---------+--------+
 checks   |        1 |         |      0 |
 crypts   |        1 |       0 |      0 |
 syncs    |        1 |       0 |      0 |
 cleanups |        1 |       0 |      0 |
----------+----------+---------+--------+

backup tar: OK
          | executed | skipped | failed |
----------+----------+---------+--------+
 checks   |        1 |         |      0 |
 crypts   |        1 |       0 |      0 |
 syncs    |        1 |       0 |      0 |
 cleanups |        1 |       0 |      0 |
----------+----------+---------+--------+

OK (4 backups, 3 checks, 4 crypts, 4 syncs, 4 cleanups)
```

Thanks!