[operator] Add backup #7441
Conversation
With this:

> Where does the backup go?

If you specify the secret for S3, it goes to S3; if not, nowhere.
```go
	"k8s.io/client-go/kubernetes"
	"sigs.k8s.io/controller-runtime/pkg/manager"

	cron "github.com/robfig/cron/v3"
```
Would it make more sense to have a k8s CronJob, instead of the backup running within the operator process?
No, because I need to spin off a Docker container for it.
@lgarciaaco can you elaborate on that? You could run the same image as the one used by the operator, perhaps with a different entrypoint/arguments?
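For context, the suggestion amounts to something like the following manifest. This is a minimal sketch only: the image name, job name, and `backup` argument are illustrative assumptions, not the project's actual resources.

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: syndesis-backup          # hypothetical name
spec:
  schedule: "0 0 * * *"          # daily at midnight
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: backup
              image: syndesis/syndesis-operator  # same image as the operator (assumed)
              args: ["backup"]                   # different entrypoint/arguments
          restartPolicy: OnFailure
```

The point being debated is exactly this: the container would run and fail independently of the operator pod.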
Imho it is harder to work with pods than this way, where the backup is tied to the running binary. It is harder to develop and to debug. Why don't you tell me the benefits of doing it with a k8s CronJob over using a cron within the operator?
I see backup and configuration as two separate concerns; for me it makes more sense to keep them separate, as any issues with the backup will be limited to the backup CronJob, and likewise any issues with the operator would not jeopardize the backup. CronJob also gives the operators more flexibility than a feature we develop on our own, and matching existing features in our own codebase unnecessarily brings on more code to maintain.
Those are arguable ...
> as any issues with the backup will be limited to the backup CronJob, likewise any issues with the operator would not jeopardize the backup

If the operator is jeopardized, do I want to continue running backups? Maybe syndesis is jeopardized altogether? This seems to be a bigger decision that requires some further discussion.

> CronJob also gives the operators more flexibility than a feature we develop on our own

Right now a feature I developed on my own allows me to do all I need. I don't need to use something different for the promise that in the future I might (or might not) use some of its features. If I need those features, whatever they might be, I can swap them in, if the cron package I am using can't provide them.

Either way, my point is that using this library makes my development and test cycle easier. I don't see any value in using k8s cron over it, but if more engineers think it is better, I will gladly change it.
> Those are arguable ...

I'm sorry, they're not, apart from the extent to which you're willing to argue about them. If the operator is failing because, say, it hits memory limits and gets OOM-killed, the backups are not run. Likewise, if the backup ends up crashing the operator, any pending configuration changes will not be applied. Having the operator run the backups couples the two concerns, jeopardizing both the backups and the rest of the operator's functionality.
I'm okay with this getting merged to master and further improved upon; I'm not okay with this getting released in 1.9, as I think there will be issues in operating it.
I am not willing to argue about anything. As I said, if others see value in it right now, or if we figure out we actually need it, I will gladly change it.
* master: (37 commits)
  - refactor(ENTESB-12218): Fix missing camel-rest-openapi component in Camel BOMs
  - refactor(ENTESB-12218): Use camel-rest-openapi components
  - chore(test): exclude ServiceVdbGeneratorTest
  - TEIIDTOOLS-854 improving error reporting on publish
  - chore(build): setup maven-buildtime-extension
  - chore: Update camel version to 2.23.2.fuse-760011
  - chore(jaeger-addon): Remove addon cluster directory
  - chore: configure syndesisio/backport-action
  - fix: Fixed nightly integration tests on CircleCi
  - TEIIDTOOLS-890 changing publishing to a single service
  - fix: removing usedby msg
  - fix: changes in publishing heading
  - fix: rework display of virtualization details header
  - TEIIDTOOLS-879 Connects DV metrics page to backend
  - TEIIDTOOLS-861 removing duplicate entries
  - user doc: Added links to doc for extension mechanism and datavirt
  - user doc: Add MS SQLServer entry to table of schema descriptions
  - user doc: New doc that describes Schema specification in a database connection
  - [ENTESB-12331] Adding auto-discovery for strimzi kafka/AMQ brokers
  - ENTESB-12330: Bubble up custom-resource switch to syndesis install scripts
  - ...
LGTM.
May wanna squash the commits then add a single commit message?
```diff
@@ -1,6 +1,6 @@
 // +build !ignore_autogenerated

-// Code generated by deepcopy-gen. DO NOT EDIT.
+// Code generated by operator-sdk. DO NOT EDIT.
```
This keeps alternating every time I do a commit. Any ideas why? E.g., are we running different regenerations?
Yeah, it also bothers me ... no idea why.
Pull request approved by @phantomjinx - applying pr/approved label
Yeah, I was planning to do so as per Zoran's comment.

> rsh where?

Well, I thought it might bring some value if you can create a pod with a PVC and scp to it, as an alternative. The method would take the namespace, labels to select the pod, and the path to copy the backup over.

> Sure, I was just curious. Would this also be part of the upgrade process? Or will the backups be completely separate?

This is what I plan to use for the upgrade backup and rollback, and actually the
[backport 1.9.x] Backport #7441 to 1.9.x
Enable periodical and one-time backups.

A backup contains syndesis openshift resources as well as a database backup. The `backup` package allows backups to be uploaded to an external datastore. At the moment, only `S3` is supported, but I plan on adding `rsh` next.

Backing up from the command line
Backup

To perform a backup from the command line, just run `syndesis-operator backup`. This will create a backup folder with all backed-up resources inside. Uploading to external datastores is disabled in this mode. You can set the output directory with `--backup dir`.
Restore

To restore, simply run `restore`. `restore` also takes a zipped backup as an argument.

Running periodical backups
To enable periodical backups, fill in the backup section of the CR. Accepted values for `schedule` are: hourly, daily, midnight, weekly, monthly and yearly. `schedule` is a set-once property; changing its value won't yield any result. To disable periodical backups, leave the backup section of the CR blank.
Enable S3 as external datastore

To enable S3 as the external datastore, you must create a secret as follows: