Error when making a backup of a directory #63

Closed
silasrm opened this Issue May 17, 2016 · 39 comments

silasrm commented May 17, 2016

Hi,

First, congratulations on the idea.

The error occurs when I try to back up a directory in which new files are constantly being created.

Exception 'phpbu\App\Exception' with message 'tar failed: /bin/tar: arquivos/pacientes: file changed as we read it ' in phar:///usr/local/bin/phpbu/Backup/Source/Tar.php:104
Exception 'phpbu\App\Exception' with message 'tar failed: /bin/tar: uploads/pacientes: file changed as we read it ' in phar:///usr/local/bin/phpbu/Backup/Source/Tar.php:104

Is there any solution to this?

Thanks.

sebastianfeldmann commented May 18, 2016

Hi,

this is a tar restriction, so you have to work around it.
Here are some suggestions.

Since you write constantly, I assume there is a lot of data, so copying the directory and creating the backup from the copy is probably not an option. If I'm wrong here, phpbu could do the copy beforehand, create the backup, and delete the copy again.

Some other options are:

  • Shard the source directory by day, so yesterday's directory is safe to compress because nothing writes to it anymore. This gives you a kind of incremental backup, but to restore it you have to restore all your backups, not just the last one, so you have to keep them all somewhere :)
  • Keep a shadow copy of your data. For example, use rsync and cron to sync your data every 5 minutes, stop the sync for the duration of the backup, and create the backup from the "shadow" copy.
  • Use special tools that are able to create consistent backups, and maybe integrate them into phpbu #hint :)

silasrm commented May 18, 2016

Hi,

Thanks for answering.

I'll plan how to make this work with phpbu.
For reference: today I use a shell script, and I want to migrate to phpbu for the full automation. The script works great:

#!/bin/sh

cd /backup/app/
DATA=$(date +%d-%m-%Y__%H-%M-%S)
tar -zcvf app.$DATA.tar.gz /my/app/ > /dev/null

It runs without errors, so there is probably a difference between this and the command phpbu generates.

Thanks

sebastianfeldmann commented May 18, 2016

You can check the command phpbu generates with

# phpbu --simulate

or

# phpbu --debug

--debug executes the backup; --simulate only shows what phpbu would do if executed.

silasrm commented May 18, 2016

I ran it with the --simulate option and everything returned OK, without errors, but when I run it without --simulate, many errors occur. And I can't tell which commands phpbu generates.

sebastianfeldmann commented May 18, 2016

Can you post the phpbu backup configuration that backs up the folder and causes the errors?
And add some info about your file structure:

  • Where do you execute phpbu (path)?
  • Path to the config file
  • Path to the directory you want to back up and compress

I will try to run some tests in a similar environment.

silasrm commented May 18, 2016

Here is my phpbu.json:

{
  "verbose": true,
  "logging": [
    {
      "type": "json",
      "target": "/backups/json.log"
    },
    {
      "type": "mail",
      "options": {
        "transport": "smtp",
        "recipients": "emails@...",
        "smtp.port": "587",
        "smtp.host": "...",
        "smtp.username": "...",
        "smtp.password": "...",
        "smtp.encryption": "tls"
      }
    }
  ],
  "backups": [
    {
      "source": {
        "type": "mysqldump",
        "options": {
          "host": "",
          "databases": "",
          "user": "",
          "password": ""
        }
      },
      "target": {
        "dirname": "/backups/mysql",
        "filename": "%Y%m%d-%H%i.sql",
        "compress": "bzip2"
      },
      "checks": [
        {
          "type": "SizeMin",
          "value": "100M"
        }
      ],
      "syncs": [
        {
          "type": "amazons3",
          "options": {
            "key": "...",
            "secret": "...",
            "bucket": "...",
            "region": "...",
            "path": "/mysql"
          }
        }
      ],
      "cleanup": {
        "type": "Quantity",
        "options": {
          "amount": 2
        }
      },
      "crypt": {
        "type": "openssl",
        "options": {
          "password": "...",
          "algorithm": "aes-256-cbc"
        }
      }
    },
    {
      "source": {
        "type": "redis",
        "options": {
          "pathToRedisData": "/var/lib/redis/dump.rdb"
        }
      },
      "target": {
        "dirname": "/backups/redis",
        "filename": "%Y%m%d-%H%i",
        "compress": "bzip2"
      },
      "syncs": [
        {
          "type": "amazons3",
          "options": {
            "key": "...",
            "secret": "...",
            "bucket": "...",
            "region": "...",
            "path": "/redis"
          }
        }
      ],
      "cleanup": {
        "type": "Quantity",
        "options": {
          "amount": 2
        }
      },
      "crypt": {
        "type": "openssl",
        "options": {
          "password": "...",
          "algorithm": "aes-256-cbc"
        }
      }
    },
    {
      "source": {
        "type": "tar",
        "options": {
          "path": "/var/www/html/app/data/arquivos"
        }
      },
      "target": {
        "dirname": "/backups/arquivos",
        "filename": "arquivos-%Y%m%d-%H%i",
        "compress": "bzip2"
      },
      "checks": [
        {
          "type": "SizeMin",
          "value": "2G"
        }
      ],
      "syncs": [
        {
          "type": "amazons3",
          "options": {
            "key": "...",
            "secret": "...",
            "bucket": "...",
            "region": "...",
            "path": "/arquivos"
          }
        }
      ],
      "cleanup": {
        "type": "Quantity",
        "options": {
          "amount": 1
        }
      },
      "crypt": {
        "type": "openssl",
        "options": {
          "password": "...",
          "algorithm": "aes-256-cbc"
        }
      }
    },
    {
      "source": {
        "type": "tar",
        "options": {
          "path": "/var/www/html/app/public/uploads"
        }
      },
      "target": {
        "dirname": "/backups/arquivos",
        "filename": "uploads-%Y%m%d-%H%i",
        "compress": "bzip2"
      },
      "checks": [
        {
          "type": "SizeMin",
          "value": "2G"
        }
      ],
      "syncs": [
        {
          "type": "amazons3",
          "options": {
            "key": "...",
            "secret": "...",
            "bucket": "...",
            "region": "...",
            "path": "/arquivos"
          }
        }
      ],
      "cleanup": {
        "type": "Quantity",
        "options": {
          "amount": 1
        }
      },
      "crypt": {
        "type": "openssl",
        "options": {
          "password": "...",
          "algorithm": "aes-256-cbc"
        }
      }
    }
  ]
}

I execute it using:

phpbu --configuration=/var/www/html/app/phpbu.json

sebastianfeldmann commented May 19, 2016

I cleaned up the configuration a bit, set verbose to false, and ran
# phpbu --configuration=/var/www/phpbu/debug/silasrm.json --simulate

phpbu returned

phpbu 3.1.3

Runtime:       PHP 7.0.6
Configuration: /var/www/phpbu/debug/silasrm.json

backup: [tar] *************************************************************
backup data:
/bin/tar -jcf '/backups/arquivos/arquivos-20160519-0557.bz2' -C '/var/www/html/app/data' 'arquivos'
ok

backup: [tar] *************************************************************
backup data:
/bin/tar -jcf '/backups/arquivos/uploads-20160519-0557.bz2' -C '/var/www/html/app/public' 'uploads'
ok

Time: 40 ms, Memory: 2.00Mb

OK (2 backups, 0 checks, 0 crypts, 0 syncs, 0 cleanups)

So the commands phpbu executes are:

  • /bin/tar -jcf '/backups/arquivos/arquivos-20160519-0557.bz2' -C '/var/www/html/app/data' 'arquivos'
  • /bin/tar -jcf '/backups/arquivos/uploads-20160519-0557.bz2' -C '/var/www/html/app/public' 'uploads'

The difference from your tar -zcvf app.2016-05-19__05-57-13.tar.gz /my/app/ > /dev/null is the -C option, which changes the working directory so your tar file doesn't contain all the parent directories.
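The effect of -C can be seen with a throwaway directory (the paths below are made up for the demonstration):

```shell
#!/bin/sh
# Show how -C changes the member names stored in the archive.
base=$(mktemp -d)
mkdir -p "$base/app/data/arquivos"
echo x > "$base/app/data/arquivos/file.txt"

# Without -C the members carry the full path (minus the leading /):
tar -cf "$base/full.tar" "$base/app/data/arquivos" 2>/dev/null
tar -tf "$base/full.tar"

# With -C tar first changes into $base/app/data, so members start at 'arquivos':
tar -cf "$base/rel.tar" -C "$base/app/data" arquivos
tar -tf "$base/rel.tar"    # arquivos/ and arquivos/file.txt
```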

Could you check whether executing those two commands manually gives you the same errors you are experiencing via phpbu (arquivos/pacientes: file changed as we read it), and make sure the command you are using in your shell script still works?

If your command works and the ones with the -C option don't, I will add an option to skip -C. But I hope that's not the case, because that would be kind of strange. I expect you will get the same error without -C as well. If so, please give this a try:

  • /bin/tar --ignore-failed-read -jcf '/backups/arquivos/arquivos-20160519-0557.bz2' -C '/var/www/html/app/data' 'arquivos'
  • /bin/tar --ignore-failed-read -jcf '/backups/arquivos/uploads-20160519-0557.bz2' -C '/var/www/html/app/public' 'uploads'

If those work I will add an option to the tar source to use --ignore-failed-read.

Here's the config I used:

{
  "verbose": false,
  "backups": [
    {
      "source": {
        "type": "tar",
        "options": {
          "path": "/var/www/html/app/data/arquivos"
        }
      },
      "target": {
        "dirname": "/backups/arquivos",
        "filename": "arquivos-%Y%m%d-%H%i",
        "compress": "bzip2"
      }
    },
    {
      "source": {
        "type": "tar",
        "options": {
          "path": "/var/www/html/app/public/uploads"
        }
      },
      "target": {
        "dirname": "/backups/arquivos",
        "filename": "uploads-%Y%m%d-%H%i",
        "compress": "bzip2"
      }
    }
  ]
}

silasrm commented May 19, 2016

Hi,

I ran this test:

  • Run the command:
/bin/tar -jcf '/backups/bjm/arquivos/uploads-20160519-0557.bz2' -C '/var/www/html/app/public' 'uploads'

or, with the --ignore-failed-read flag:

/bin/tar --ignore-failed-read -jcf '/backups/bjm/arquivos/uploads-20160519-0557.bz2' -C '/var/www/html/app/public' 'uploads'
  • Create 2 files in the folder:
touch public/uploads/pacientes/teste.txt
touch public/uploads/teste.txt

Result:

/bin/tar: uploads/pacientes: file changed as we read it
/bin/tar: uploads: file changed as we read it

sebastianfeldmann commented May 19, 2016

And the tar.bz2 is not created in either case?
The warning should still appear, but using --ignore-failed-read should prevent tar from aborting the compression.

What about...

/bin/tar -jcf '/backups/bjm/arquivos/uploads-20160519-0557.bz2' '/var/www/html/app/public/uploads'

That should be exactly what you were using before. Does it work, even if you create new files?

silasrm commented May 19, 2016

The file is created:
-rw-r--r-- 1 root root 3813245603 May 19 10:40 uploads-20160519-0557.bz2

But with these errors phpbu can't complete the task, or am I wrong?

sebastianfeldmann commented May 19, 2016

So let me recap:

  1. The original phpbu command shows an error and no tar.bz2 is created
  2. The phpbu command with --ignore-failed-read shows an error and no tar.bz2 is created
  3. The command without -C creates the tar.bz2

Right?

I would have hoped no. 2 would work.

But if you confirm that only no. 3 works for you, I will add a changeDir option; setting it to false will make phpbu execute the tar command without -C.

silasrm commented May 19, 2016

Running with --ignore-failed-read, the bz2 file is created.

Now I'm running without --ignore-failed-read to check whether the file is still created.

silasrm commented May 19, 2016

The file is created:

/var/www/html/app$ /bin/tar -jcf '/backups/bjm/arquivos/uploads-20160519-0557.bz2' '/var/www/html/app/public/uploads'
/bin/tar: Removing leading `/' from member names
/bin/tar: /var/www/html/app/public/uploads/pacientes: file changed as we read it
/bin/tar: /var/www/html/app/public/uploads: file changed as we read it
/var/www/html/app$ ll /backups/bjm/arquivos/
total 3732660
-rw-rw-r-- 1 silas silas 3822231106 May 19 18:43 uploads-20160519-0557.bz2

My guess is that these errors/warnings don't stop the compression, but phpbu doesn't recognize that and aborts the process. Or the compression doesn't finish correctly and the file is still left in the folder.

sebastianfeldmann commented May 20, 2016

phpbu currently executes tar without the --ignore-failed-read option, so tar returns an error code, phpbu recognizes that and throws an exception.
I will release version 3.1.4 in a minute. There you can add the following to your tar configuration:

"ignoreFailedRead": true,

With this, tar should ignore the error and return no error code, so phpbu shouldn't mark the backup as failed anymore.
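For reference, this is roughly where the option would sit in the tar sources of the configuration posted earlier (a sketch; only the source block is shown, path shortened):

```json
{
  "source": {
    "type": "tar",
    "options": {
      "path": "/var/www/html/app/public/uploads",
      "ignoreFailedRead": true
    }
  }
}
```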

sebastianfeldmann commented May 20, 2016

I just released phpbu 3.1.4.

Please update phpbu:

# phpbu --self-update

Then add the ignoreFailedRead option to your backup configuration (docs) and give it a try.

Hope it works :)

silasrm commented May 20, 2016

I'll try today :D

silasrm commented May 22, 2016

Still getting errors :(

phpbu 3.1.4

Runtime:       PHP 5.6.4-4ubuntu6.4
Configuration: /var/www/html/app/phpbu.json

Time: 1.08 hours, Memory: 8.50Mb

PHPBU - backup report
2016-05-22 09:03
FAILURE
(4 backups, 1 check, 2 crypts, 2 syncs, 2 cleanups)
Exception 'phpbu\App\Exception' with message 'tar failed: /bin/tar: arquivos/pacientes: file changed as we read it ' in phar:///usr/local/bin/phpbu/Backup/Source/Tar.php:113
Exception 'phpbu\App\Exception' with message 'tar failed: /bin/tar: uploads/pacientes: file changed as we read it ' in phar:///usr/local/bin/phpbu/Backup/Source/Tar.php:113

backup mysqldump FAILURE

          | executed | skipped | failed |
----------+----------+---------+--------+
 checks   |        1 |         |      1 |
 crypts   |        1 |       1 |      0 |
 syncs    |        1 |       1 |      0 |
 cleanups |        1 |       1 |      0 |
----------+----------+---------+--------+

backup redis OK

          | executed | skipped | failed |
----------+----------+---------+--------+
 checks   |        0 |         |      0 |
 crypts   |        1 |       0 |      0 |
 syncs    |        1 |       0 |      0 |
 cleanups |        1 |       0 |      0 |
----------+----------+---------+--------+

backup tar FAILURE

          | executed | skipped | failed |
----------+----------+---------+--------+
 checks   |        0 |         |      0 |
 crypts   |        0 |       0 |      0 |
 syncs    |        0 |       0 |      0 |
 cleanups |        0 |       0 |      0 |
----------+----------+---------+--------+

backup tar FAILURE

          | executed | skipped | failed |
----------+----------+---------+--------+
 checks   |        0 |         |      0 |
 crypts   |        0 |       0 |      0 |
 syncs    |        0 |       0 |      0 |
 cleanups |        0 |       0 |      0 |
----------+----------+---------+--------+

Time: 1.08 hours, Memory: 8.50Mb

I'll run only mysql to check for problems.

silasrm commented May 22, 2016

No error returned and no message in json.log, and the mysqldump command generated with --simulate runs fine outside phpbu:

/usr/bin/mysqldump --user='x' --password='y' --host='z' 'k' > /backups/bjm/mysql/20160522-1426.sql

sebastianfeldmann commented May 22, 2016

OK, I managed to reproduce the error.
The problem is that tar still exits with error code 1.

I changed the tar source so it ignores this exit code if ignoreFailedRead is active.

With this change I couldn't reproduce the error anymore.

I will release the update shortly.
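The fix described above comes down to how GNU tar's exit status is interpreted: tar exits 0 on success, 1 when some files differed (including "file changed as we read it"), and 2 on fatal errors. A shell sketch of that exit-code handling (an illustration only, not phpbu's actual code, using a throwaway directory):

```shell
#!/bin/sh
# Illustration only, not phpbu's code: accept GNU tar exit code 1
# ("file changed as we read it" and similar warnings) as success,
# while still failing on exit code 2 (fatal error).
src=$(mktemp -d)
out=$(mktemp -d)
echo hello > "$src/file.txt"

tar -jcf "$out/backup.tar.bz2" -C "$(dirname "$src")" "$(basename "$src")"
status=$?
if [ "$status" -gt 1 ]; then
    echo "backup failed: tar exited with $status" >&2
    exit "$status"
fi
echo "backup ok (tar exit code $status)"
```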

sebastianfeldmann commented May 22, 2016

I just released version 3.1.5. Please update phpbu and try the tar generation again.

phpbu --self-update

@sebastianfeldmann sebastianfeldmann added bug and removed question labels May 22, 2016


sebastianfeldmann (Owner) commented May 23, 2016

Ah I see the problem, "true" has to be a string.
I know... confusing and I should fix that eventually.

...
      "source": {
        "type": "tar",
        "options": {
          "path": "/var/.../arquivos",
          "ignoreFailedRead": "true"
        }
      }
...
     "source": {
        "type": "tar",
        "options": {
          "path": "/var/.../uploads",
          "ignoreFailedRead": "true"
        }
      }
...
sebastianfeldmann (Owner) commented May 24, 2016

@silasrm any feedback on the tar execution with the updated config ("true")?


silasrm commented May 24, 2016

Sorry for the delay. I'll try it today and come back with a response.

Thanks!


silasrm commented May 25, 2016

Sorry for the delay, again.

Everything works fine, except that AWS S3 (my sync config) has an upload size limit, and the mysqldump backup still fails:

PHP Warning:  Error executing "PutObject" on "https://s3-us-west-2.amazonaws.com....bz2.enc"; AWS HTTP error: Client error: `PUT https://s3-us-west-2.amazonaws.com....bz2.enc` resulted in a `400 Bad Request` response:
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>EntityTooLarge</Code><Message>Your proposed upload exceeds the maxim (truncated...)
 EntityTooLarge (client): Your proposed upload exceeds the maximum allowed size - <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>EntityTooLarge</Code><Message>Your proposed upload exceeds the maximum allowed size</Message><ProposedSize>9983453545</ProposedSize><MaxSizeAllowed>5368709120</MaxSizeAllowed><RequestId>...</RequestId><HostId>...</HostId></Error> in phar:///usr/local/bin/phpbu/lib/aws-sdk/S3/StreamWrapper.php on line 737

http://aws.amazon.com/blogs/aws/amazon-s3-object-size-limit/

I'll check with AWS S3.

backup mysqldump: FAILED

          | executed | skipped | failed |
----------+----------+---------+--------+
 checks   |        1 |         |      1 |
 crypts   |        1 |       1 |      0 |
 syncs    |        1 |       1 |      0 |
 cleanups |        1 |       1 |      0 |
----------+----------+---------+--------+

backup redis: OK

          | executed | skipped | failed |
----------+----------+---------+--------+
 checks   |        0 |         |      0 |
 crypts   |        1 |       0 |      0 |
 syncs    |        1 |       0 |      0 |
 cleanups |        1 |       0 |      0 |
----------+----------+---------+--------+

backup tar: OK

          | executed | skipped | failed |
----------+----------+---------+--------+
 checks   |        1 |         |      0 |
 crypts   |        1 |       0 |      0 |
 syncs    |        1 |       0 |      0 |
 cleanups |        1 |       0 |      0 |
----------+----------+---------+--------+

backup tar: OK

          | executed | skipped | failed |
----------+----------+---------+--------+
 checks   |        1 |         |      0 |
 crypts   |        1 |       0 |      0 |
 syncs    |        1 |       0 |      0 |
 cleanups |        1 |       0 |      0 |
----------+----------+---------+--------+

FAILURE!
Backups: 4, failed Checks: 1, failed Crypts: 0, failed Syncs: 0, failed Cleanups: 0.

The mysqldump backup is skipped and doesn't log anything about the reason for the failure.

Do you know anything about the AWS S3 upload limit?

Thanks for helping me, and sorry for the trouble.


sebastianfeldmann (Owner) commented May 25, 2016

No worries :)
I think the new ignoreFailedRead option is a big plus for phpbu.
I'm not that familiar with Amazon AWS.
My Test-Setup normally doesn't include files that big.
This sounds a lot like an AWS restriction / problem.

Depending on the size of the data you are uploading, Amazon S3 offers the following options:

  • Upload objects in a single operation - With a single PUT operation you can upload objects up to 5 GB in size.
  • Upload objects in parts—Using the Multipart upload API you can upload large objects, up to 5 TB.

http://docs.aws.amazon.com/AmazonS3/latest/dev/UploadingObjects.html

So if your file is bigger than 5GB phpbu has to support multipart uploads.

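To get a feel for the numbers, a quick shell sketch: the ~9.98 GB ProposedSize comes from the S3 error above, while the 100 MiB part size is just an illustrative choice (any part size between 5 MiB and 5 GiB is allowed for multipart uploads):

```shell
# S3 limits: a single PUT tops out at 5 GiB; multipart upload allows
# objects up to 5 TiB, split into parts of 5 MiB to 5 GiB each.
size=9983453545                          # ProposedSize from the S3 error
single_put_max=$((5 * 1024 * 1024 * 1024))
part=$((100 * 1024 * 1024))              # hypothetical 100 MiB part size
parts=$(( (size + part - 1) / part ))    # ceiling division
echo "over single-PUT limit: $((size > single_put_max))"
echo "multipart parts needed: $parts"
```

So the failing upload is roughly twice the single-PUT limit, which is exactly why a multipart upload (or splitting the backup) is required.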

silasrm commented May 26, 2016

Hi,

I need multipart upload support, so I'll look into how to implement it in phpbu.

My main concern is the mysqldump backup: it fails but no error is logged. Any suggestions?

Thanks.


sebastianfeldmann (Owner) commented May 26, 2016

So mysqldump is failing but phpbu doesn't recognize it?
Can you get the executed mysqldump command with

phpbu --simulate

And check if running it manually works?


silasrm commented May 26, 2016

It's failing, but phpbu doesn't report any error in or outside of the log.

phpbu generates this command:

/usr/bin/mysqldump --user='user' --password='###' --host='###' '***' > /backups/bjm/mysql/20160526-1508.sql

It runs fine without errors.

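One thing worth ruling out when a dump "fails silently": a generic shell sketch (not phpbu's code; `false` stands in for a failing dump command) showing that plain output redirection does not mask the command's exit status, so a failing mysqldump is still detectable:

```shell
# The exit status of `cmd > file` is cmd's own status; redirecting
# stdout to a file does not swallow failures.
false > /tmp/dump.sql      # stand-in for: mysqldump ... > dump.sql
status=$?
if [ "$status" -eq 0 ]; then
    echo "dump ok"
else
    echo "dump failed with status $status"
fi
```

If the manually run command succeeds, the failure most likely comes from a later stage (a check, crypt, sync, or cleanup) rather than from mysqldump itself.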

sebastianfeldmann (Owner) commented May 26, 2016

Seems pretty simple. Are you running any checks on the created mysql backup?
What kind of cleanup are you using?
If you are using capacity, it could happen that your current backup is erased immediately because it uses too much space.
You have to set deleteTarget to false to make sure at least your current backup is kept.


sebastianfeldmann (Owner) commented May 26, 2016

Forget that!
deleteTarget is false by default so that should not be the case at all.


sebastianfeldmann (Owner) commented May 26, 2016

So are you using a check?


silasrm commented May 26, 2016

My config:

{
      "source": {
        "type": "mysqldump",
        "options": {
                   ....
        }
      },
      "target": {
        "dirname": "/backups/bjm/mysql",
        "filename": "%Y%m%d-%H%i.sql",
        "compress": "bzip2"
      },
      "checks": [
        {
          "type": "SizeMin",
          "value": "100M"
        }
      ],
      "syncs": [
        {
          "type": "amazons3",
          "options": {
                   ....
          }
        }
      ],
      "cleanup": {
        "type": "Quantity",
        "options": {
          "amount": 2
        }
      },
      "crypt": {
        "type": "openssl",
        "options": {
          "password": "4M4z0n14",
          "algorithm": "aes-256-cbc"
        }
      }
    }

sebastianfeldmann (Owner) commented May 26, 2016

And what kind of error occurs?
Is the backup corrupt?
Is the backup file even created?
Because with a SizeMin check I would assume it is at least being created with a size greater than 100 megabytes.


silasrm commented May 26, 2016
silasrm May 26, 2016

No one error occur, but phpbu says is failed. No log is registered.
My database without compression is 1.1GB. But I test now, compressed with bzip2 the size is 96MB :(

With tar.gz the same database is 142MB. I'll decrease check value and test it now :D

Sorry with issue. Thk's

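For reference, the effect of a SizeMin check can be sketched in plain shell (an illustration, not phpbu's implementation; the tiny stand-in file is just for the demo):

```shell
# A SizeMin check fails when the backup file is smaller than the
# configured threshold - exactly what happens when a 1.1 GB dump
# compresses down to 96 MB against a 100M threshold.
f=$(mktemp)
printf 'tiny stand-in backup' > "$f"
min=$((100 * 1024 * 1024))      # the 100M threshold from the config
actual=$(wc -c < "$f")
if [ "$actual" -ge "$min" ]; then
    echo "SizeMin check passed"
else
    echo "SizeMin check failed: $actual < $min bytes"
fi
```

This is why the "failure" produced no error log: the dump itself succeeded, and only the size check tripped.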

sebastianfeldmann (Owner) commented May 27, 2016

Hi @silasrm,
I just released version 3.1.6 with Amazon S3 multi part upload support.
Just add

  "useMultiPartUpload": "true",

to your Amazon sync configuration and you should be fine.

Cheers Sebastian


sebastianfeldmann (Owner) commented May 27, 2016

Thank you for your help improving phpbu.


silasrm commented May 27, 2016

This is amazing, dude :D
I'll test it today.

Many thanks!


silasrm commented May 27, 2016

I'm very happy 👯

Now, everything is ok

phpbu 3.1.6

Runtime:       PHP 5.6.4-4ubuntu6.4
Configuration: /var/.../phpbu.json

Time: 1.28 hours, Memory: 21.50Mb

backup mysqldump: OK

          | executed | skipped | failed |
----------+----------+---------+--------+
 checks   |        1 |         |      0 |
 crypts   |        1 |       0 |      0 |
 syncs    |        1 |       0 |      0 |
 cleanups |        1 |       0 |      0 |
----------+----------+---------+--------+

backup redis: OK

          | executed | skipped | failed |
----------+----------+---------+--------+
 checks   |        0 |         |      0 |
 crypts   |        1 |       0 |      0 |
 syncs    |        1 |       0 |      0 |
 cleanups |        1 |       0 |      0 |
----------+----------+---------+--------+

backup tar: OK

          | executed | skipped | failed |
----------+----------+---------+--------+
 checks   |        1 |         |      0 |
 crypts   |        1 |       0 |      0 |
 syncs    |        1 |       0 |      0 |
 cleanups |        1 |       0 |      0 |
----------+----------+---------+--------+

backup tar: OK

          | executed | skipped | failed |
----------+----------+---------+--------+
 checks   |        1 |         |      0 |
 crypts   |        1 |       0 |      0 |
 syncs    |        1 |       0 |      0 |
 cleanups |        1 |       0 |      0 |
----------+----------+---------+--------+

OK (4 backups, 3 checks, 4 crypts, 4 syncs, 4 cleanups)

Thanks!

