Create backup file per db schema #64
No arguments here. What would be the API? How would you indicate you wanted "file per schema"? I imagine it would be more work to write the tests for this than the actual implementation itself.
A simple boolean argument should be sufficient here, e.g. DB_DUMP_BY_SCHEME=true
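A minimal sketch of how such a flag might gate the behavior in a shell entrypoint; DB_DUMP_BY_SCHEME is only the name proposed above, and the two helper functions are hypothetical placeholders:

```sh
# Hypothetical flag handling; default is the current single-file behavior.
if [ "${DB_DUMP_BY_SCHEME:-false}" = "true" ]; then
  dump_per_schema    # hypothetical helper: one dump file per schema
else
  dump_all_schemas   # hypothetical helper: existing single-file dump
fi
```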
It may also be good to have this if you want to back up all databases without needing to enumerate them individually in DB_NAMES.
Have you had a chance to look at the source code and the APIs that consume the file? This is non-trivial to implement: we would break every one of those external contract APIs, some of which would struggle with it, e.g. target rewriting. In addition, it would require some complex rewriting of how we do the final upload. I think this goes against the grain here, which is to have a single-file backup, one nice self-contained file.

The more I think about it, the more I struggle to understand the use case:
If you really want to separate it for future processing, why not just use post-backup processing to take the single dump file, split it into files per schema, and zip them up again?
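As an illustration of that post-backup route, here is a rough sketch that splits a combined dump on the markers mysqldump emits when invoked with --databases; the file names are examples only:

```sh
#!/bin/sh
# Split a combined mysqldump into one file per schema. Assumes the dump was
# created with --databases, which inserts "-- Current Database: `name`"
# markers before each schema; anything before the first marker (the shared
# header) is dropped.
gunzip -c db_backup_20181118181008.gz > all.sql

awk '/^-- Current Database: /{
       close(out)
       name = $4
       gsub(/`/, "", name)             # strip the backticks around the name
       out = "db_backup_" name ".sql"
     }
     out { print > out }' all.sql

gzip db_backup_*.sql                   # one compressed file per schema
```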
Your comment is understandable; having more than one file to process sounds breaking and hard to implement. On the other hand, extracting a single schema out of the single file will not be that easy for everyone either, and that is what would be helpful in my case: I sometimes have to handle the backups of the schemas differently, and this would make it easy. BTW, another way to achieve this would be to keep the schemas as separate files within the single resulting compressed file. That way it would stay one file, just containing the files per schema. This would make the post-backup processing much easier.
Definitely understand. There is a difference between "single file for multiple schemas zipped up" and "multiple files, one per schema, zipped up".
This sounds like it might be a good path. Either way, it gets a single file out at the end. It looks like it is doable entirely within the scope of https://github.com/deitch/mysql-backup/blob/master/entrypoint#L181-L188

You want to take a crack at it?
I personally support this, too.
Yeah, that should be easy enough. As far as I know, […]
@deitch I guess you would have to do separate mysqldump calls, one call for each database, so you have to loop. The other solution would be to split the big dump file into pieces, but for me that isn't a good idea at all 😏
I do not like the idea of trying to tease apart the structure of the file, and then having it fail or require rework for a newer version at some point. If […]
@deitch Do you see a problem with looping through the databases, creating a tmp dir, putting all dumps into it, compressing all the files into a single one, and then going on with the unmodified rest of your script? 🤔
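A minimal sketch of that approach (not the actual change referenced below; the DB_* variable names are assumptions loosely modeled on the entrypoint's environment):

```sh
# Hypothetical per-schema loop: one mysqldump call per database, then a
# single compressed archive, so the rest of the script still sees exactly
# one backup file.
now=$(date -u +"%Y%m%d%H%M%S")
TMPDIR=$(mktemp -d)
for db in $DB_NAMES; do
  mysqldump -h "$DB_SERVER" -u "$DB_USER" -p"$DB_PASS" \
    --databases "$db" > "${TMPDIR}/${db}.sql"
done
tar -C "$TMPDIR" -czf "db_backup_${now}.tgz" .
rm -rf "$TMPDIR"
```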
You mean like this?
@deitch Yeah, I am happy to test this 😄 👍
@deitch One little thing I would like to recommend: it would be nice to have the timestamp in each of the schema dumps as well. So maybe you could change […] to […] 🙏 😏
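The two snippets quoted above were lost in this copy of the thread; judging from the loop sketch earlier, the requested change is presumably of this shape (an assumption, not the actual diff):

```sh
# before: per-schema file named by schema only
outfile="${TMPDIR}/${db}.sql"
# after: run timestamp in each per-schema file name as well
outfile="${TMPDIR}/${db}_${now}.sql"
```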
Why? It already is in the zipped file? And it would make it a bit harder to extract? New issue anyways
@deitch Pardon, I cannot agree. A restore is something I usually do manually, because it's a critical act, so I don't care about the filename there. The main reason is: if I extract one single database and copy it somewhere else, give it to someone else, or someone finds that single file, that person (or I myself) has absolutely no clue when that backup was created. That simple information can easily be stored inside the filename - that's all, just a "very simple" practical thing. If this sounds reasonable to you, I would be happy to create a new issue for that 😏 Or maybe you meant it's harder for your script - then sorry, I didn't consider that 🙈 I just saw this from the manual point of view.
BTW, the completion date/time of the export is clearly declared by mysqldump itself at the end of each dump. So there's no great need to have it in the file name as well, I assume.
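For reference, mysqldump appends the completion time as a comment on the last line of each dump (unless --skip-dump-date is used), so it can be checked without opening the whole file:

```sh
# The final line of a dump carries the completion timestamp, e.g.:
tail -n 1 db_backup_schemaname.sql
# -- Dump completed on 2018-11-18 18:10:08
```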
OK
@Kusig Not everyone is happy to open files with several GB of data just to find a timestamp somewhere 😔 Not everybody knows that, but everybody, really everybody, is able to read a filename, even without any technical skills. Some things in life can become very hard when you work with other people who don't have your knowledge. Things like that (a timestamp in the filename) can save you hours of your life, because you don't have to explain to someone what tail is, or even worse, explain to a Windows user: "You want to know when the dump was made? Sure, no problem, just open the 10 GB file in your Notepad and find the timestamp somewhere." That person will ask me: "Are you kidding me?" So, this is just a real-life example from my past.
@michabbb Is the timestamp on the […] not enough? I don't greatly object to it, but want to understand the reasons first.
@deitch Thanks for your interest 👍 My little story from real life was only meant as a good example of why it can be very frustrating for someone who, unlike you or me, does not know how to handle very large files, or has never even seen a MySQL dump in his life. Don't ask why such a person has a dump in his hands, don't ask 😂 The reason is super easy: why should I open a large file (>GB) just to find a timestamp somewhere (and again, not everyone knows how to do a grep, tail, or less like you and me do), if that timestamp could be part of the filename itself? You may think: what's the problem, the main zipped file has the timestamp. But please understand that in real life you sometimes extract only a single database from the main file and send it to someone else; that's nothing unusual if you work with other people 😏 Maybe this sounds stupid to you all, but these things have already happened to me, and they were very frustrating, just because super simple pieces of information had to be hunted down somewhere, even though they could have been placed in a filename.
Ha! The world of non-automated systems administration. I started my career doing financial services IT, where we were obsessed with automation. Long before the term DevOps was coined, we were using that mindset. Every manual task was just another reason to remove humans from the equation.
Hmm... beforehand, the […] I will have to do something about the restore, but it works for now...
The container currently creates one big dump file of all schemas found on the DB server.
It would be helpful to have an option to create a dump file per schema, where the schema name is part of the dump file name,
e.g. db_backup_schemaname_20181118181008.gz
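Illustratively, with two hypothetical schemas app and reporting, the requested layout would look something like:

```sh
ls backups/
# db_backup_app_20181118181008.gz
# db_backup_reporting_20181118181008.gz
```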