
Deduplicate Check and Prunes for the same backup repository #214

Open
ccremer opened this issue Dec 18, 2020 · 14 comments
Labels
enhancement New feature or request

Comments

@ccremer
Contributor

ccremer commented Dec 18, 2020

Summary

As K8up user
I want to deduplicate Jobs that target the same repository
So that exclusive Jobs are not run excessively

Context

Check and Prunes are Restic Jobs that need exclusive access to the backend repository: Only one job can effectively run at the same time. However, multiple backups can target the same Restic repository.

The operator should deduplicate prune jobs that are managed by a smart schedule. For example, if multiple Schedules define @daily-random prunes that target the same S3 endpoint, the scheduler should register only one of them.

But if the prunes have explicit cron patterns like 5 4 * * * and 5 5 * * *, they should NOT be deduplicated. This ensures maximum flexibility in case a user explicitly wants multiple prune runs.

Out of Scope

Further links

Acceptance criteria

  • Given 2 Schedules with either a Check or Prune job, when both specify the same randomized predefined cron syntax and target the same backup repository, then ignore the duplicate of the same job type that has the same schedule and backend.
  • Given 2 Schedules with jobs that are already deduplicated, when changing the cron schedule of one of the jobs, then remove the deduplication and schedule both jobs separately.
  • Given 2 Schedules with jobs that are already deduplicated, when changing the backend of a Schedule, then remove the deduplication and schedule jobs separately.

Implementation Ideas

  • Schedules are added to the cron library when they get reconciled. At this point we could apply the deduplication logic so that the duplicate job does not get added (see the sketch below).
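
A minimal sketch of what that reconcile-time check could look like, assuming a simple in-memory registry; dedupKey, registry, and registerOrSkip are illustrative names, not existing K8up code:

```go
package scheduler

import (
	"strings"
	"sync"
)

// dedupKey identifies an exclusive job for deduplication purposes:
// same job type, same schedule definition, same backend repository.
type dedupKey struct {
	JobType  string // "check" or "prune"
	Schedule string // e.g. "@daily-random" or "5 4 * * *"
	Backend  string // canonical repository string, e.g. S3 endpoint + bucket
}

type registry struct {
	mu   sync.Mutex
	seen map[dedupKey]bool
}

func newRegistry() *registry {
	return &registry{seen: map[dedupKey]bool{}}
}

// registerOrSkip reports whether the job should be added to the internal
// cron scheduler. Explicit cron patterns are never deduplicated; only the
// predefined random schedules ("@daily-random", "@weekly-random", ...) are.
func (r *registry) registerOrSkip(k dedupKey) bool {
	if !strings.HasPrefix(k.Schedule, "@") {
		return true
	}
	r.mu.Lock()
	defer r.mu.Unlock()
	if r.seen[k] {
		return false // duplicate: same type, schedule, and backend
	}
	r.seen[k] = true
	return true
}
```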
@ccremer ccremer added the enhancement New feature or request label Dec 18, 2020
@ccremer ccremer added this to the Schnitzel Features milestone Dec 18, 2020
@tobru tobru changed the title [Feature] Deduplicate Check and Prunes for the same backup repository Deduplicate Check and Prunes for the same backup repository Jan 5, 2021
@tobru tobru modified the milestones: v1.1.0, v1.0.0 Jan 13, 2021
@cimnine
Contributor

cimnine commented Jan 21, 2021

Some ideas to spin:

  • Deduplicate when scheduling, i.e. before adding a new entry to the internal cron
  • Deduplicate when creating job objects,
    • by checking an internal reference
    • by querying the k8s controller whether a similar object was already scheduled
    • a combination of these.

@ccremer
Contributor Author

ccremer commented Jan 22, 2021

This ain't exactly easy. I started implementing, but soon discovered that

  • we need to check for deduplication first in all other Schedule CRs, as only those contain the @... schedule definitions.
  • with the other schedules, we also need to check whether the backend is the "same" -> equality comparison implementations are necessary
  • then, we don't know whether the checks or prunes of the other schedules have already been deduplicated, so we also need to verify that they're not already in the cron scheduler.

So, back to drawing board:
[diagram omitted]

The graphic is in the PR for editing, though it's not intended to stay there in that form.

@ccremer ccremer linked a pull request Jan 22, 2021 that will close this issue
@Kidswiss
Contributor

Kidswiss commented Jan 22, 2021

One thought I just had:

If we do this, should the deduplication be stable? Example:

A new scheduleB is not registered because scheduleA is already registered. Both contain a prune with @weekly-random. ScheduleA's random prune was just triggered for this week. We restart the operator, and now suddenly scheduleB is registered but scheduleA is not, because scheduleB was handled first. ScheduleB's @weekly-random would already trigger the next day. Now we have a schedule that didn't run in the specified interval.

To guarantee the same interval regardless of operator restarts, the operator would need a way to know which schedule it should prefer.

@cimnine
Contributor

cimnine commented Jan 22, 2021

To guarantee the same interval regardless of operator restarts, the operator would need a way to know which schedule it should prefer.

Could we sort them by date of creation?

It could work something like this (a rough Go sketch follows the list):

  1. Read 1st Schedule CR that arrives (i.e. that is reconciled)
    • Just schedule it in cron.
    • Add it to a new internal data structure; remember the fields on which we like to deduplicate, plus creation date for sorting.
  2. Read 2nd Schedule CR that arrives
    • Check the internal data structure whether we've seen an equal schedule before
      • If yes: Check if it's older than the other(s)
        • If yes: Remove the previously oldest schedule from cron, then add this one.
        • If no: Ignore.
      • If no: Schedule it.
    • In any case: Add it to the internal data structure as well.
  3. Repeat for every next Schedule CR.

Now, if a Schedule is deleted:

  • Check the internal data structure if it is the oldest of the available Schedules.
    • If yes:
      • Remove from cron.
      • Remove it from the internal data structure.
      • Schedule the now oldest Schedule CR.
    • If no:
      • Just remove from internal data structure.
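
A rough sketch of that "oldest Schedule wins" bookkeeping, assuming hypothetical entry and election types rather than existing K8up code; the caller would compare the returned leader with what is currently registered in cron and swap registrations when it changes:

```go
package scheduler

import (
	"sort"
	"time"
)

// entry holds the fields we deduplicate on plus the creation timestamp
// used to decide which duplicate owns the cron registration.
type entry struct {
	Name      string    // namespace/name of the Schedule CR
	Key       string    // job type + schedule + backend
	CreatedAt time.Time // metadata.creationTimestamp
}

// election tracks all known entries per dedup key and always elects the
// oldest one, so the result is stable across operator restarts as long as
// the Schedule CRs themselves don't change.
type election struct {
	entries map[string][]entry
}

// add records a Schedule and returns the name of the entry that should
// currently be registered in cron for this key.
func (e *election) add(en entry) string {
	e.entries[en.Key] = append(e.entries[en.Key], en)
	return e.leader(en.Key)
}

// remove drops a deleted Schedule and returns the new leader ("" if none),
// so the caller can re-register the remaining oldest Schedule in cron.
func (e *election) remove(key, name string) string {
	kept := e.entries[key][:0]
	for _, en := range e.entries[key] {
		if en.Name != name {
			kept = append(kept, en)
		}
	}
	e.entries[key] = kept
	return e.leader(key)
}

func (e *election) leader(key string) string {
	list := e.entries[key]
	if len(list) == 0 {
		return ""
	}
	sort.Slice(list, func(i, j int) bool {
		return list[i].CreatedAt.Before(list[j].CreatedAt)
	})
	return list[0].Name
}
```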

@Kidswiss
Contributor

I had another idea over the weekend:

We could hash the repository string and the type and use that as the randomness seed (https://golang.org/pkg/math/rand/#Seed). So each type and repo combination will generate the same "random" time. This way we only have to track if at least one of the jobs is registered for a given type/repo combo.

By hashing the values before using them as the seed, it should generate enough spread between the schedules.
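
A minimal sketch of that suggestion in Go, using a local rand.Source seeded from an FNV hash instead of the global Seed; the names and the exact mapping are illustrative, not existing K8up code:

```go
package scheduler

import (
	"fmt"
	"hash/fnv"
	"math/rand"
)

// seededDailyCron derives a daily cron expression from repo+jobType, so the
// same type/repo combination always produces the same "random" time,
// regardless of operator restarts.
func seededDailyCron(repo, jobType string) string {
	h := fnv.New64a()
	h.Write([]byte(jobType + "|" + repo))

	r := rand.New(rand.NewSource(int64(h.Sum64())))
	minute := r.Intn(60)
	hour := r.Intn(24)
	return fmt.Sprintf("%d %d * * *", minute, hour)
}
```

With that, only the fact that at least one job is registered per type/repo combination would still need tracking, as described above.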

@cimnine
Contributor

cimnine commented Jan 25, 2021

We could hash the repository string and the type and use that as the randomness seed (https://golang.org/pkg/math/rand/#Seed). So each type and repo combination will generate the same "random" time. This way we only have to track if at least one of the jobs is registered for a given type/repo combo.

It sounds like rand should not be used then; rather, a number should be derived directly from the hash. For one, because there's the underlying assumption that the implementation of rand does not change and stays stable. It's also not obvious to rely on rand to produce a predictable number ;)

My main concern with this solution: it's anything but obvious to understand. I.e., it's a very implicit solution, and in my experience implicit solutions are hard for the next developer to understand right away.

@Kidswiss
Contributor

Sure, we can use something else to generate the times; rand was just a suggestion.

But I feel we'd have to eliminate the randomness for identical types and repos. Your suggestion could still lead to unstable execution times if there are a lot of namespace changes on a cluster.

@ccremer
Contributor Author

ccremer commented Jan 25, 2021

Unpopular opinion: the more we try to solve this "stable across restarts" problem, the more I'm convinced we should get rid of any internal state altogether, e.g. replace the cron library with K8s CronJobs.
This is already the 2nd or 3rd time we've tried to solve this problem with special mechanisms 🙈
Such an attempt of course wouldn't simplify the deduplication logic much, but if we can find a "stateless" algorithm, we can leave the state to the Kubernetes API/etcd and not worry about restarts.

In a private project/operator I'm facing exactly the same problem: handling scheduling and restarts. I have found a working solution; we can discuss it if you're interested.

At the moment I'm a bit hesitant to come up with complicated "solutions" that make deduplication stable across restarts while relying on internal state. Maybe we should limit the deduplication feature to @daily-random only, so that a missed schedule due to a K8up restart isn't the end of the world.

@Kidswiss
Contributor

If we implement the deduplication logic for @daily-random, it can also be used for the others, can't it? I mean, the effort to implement it for one is probably the same as implementing it for all.

I agree that switching to k8s native cron-jobs could help with things, but they may make other things more complicated.

I also agree that off-loading as much state as possible to k8s is desirable, but there are cases where I think having a small in-memory state could make sense, for example to reduce the number of API queries.

I'm interested in hearing your solution for that issue.

@ccremer
Contributor Author

ccremer commented Jan 25, 2021

For example to reduce the amount of API queries.

With the switch to the Operator SDK and controller-runtime, the client has a built-in read cache by default. Each GETted object lands in the cache and is automatically watched for changes; repeated GETs for already retrieved objects don't even hit the API server anymore. It's actually harder to bypass the cache for certain object Kinds, should you want to for whatever reason.

So, as far as performance goes, I think it's worse when we try to maintain our own barely tested cache ;)
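
A rough sketch of that read path with controller-runtime; the k8upv1 import path, the AddToScheme helper (the usual kubebuilder-generated registration), and the "backup/daily" object are assumptions for the example, not actual K8up code:

```go
package main

import (
	"context"

	"k8s.io/apimachinery/pkg/runtime"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"

	k8upv1 "github.com/k8up-io/k8up/api/v1" // assumed import path
)

func readSchedule(ctx context.Context) error {
	scheme := runtime.NewScheme()
	_ = k8upv1.AddToScheme(scheme) // register the K8up types with the scheme

	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{Scheme: scheme})
	if err != nil {
		return err
	}

	// The manager's client is a delegating client: Get/List are served from
	// the shared informer cache (and the Kind is watched from then on),
	// while writes go straight to the API server. In a real controller this
	// Get runs inside Reconcile, after the manager has started and its
	// cache has synced.
	c := mgr.GetClient()

	var schedule k8upv1.Schedule
	return c.Get(ctx, client.ObjectKey{Namespace: "backup", Name: "daily"}, &schedule)
}
```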

If we implement the deduplication logic for @daily-random, it can also be used for the others, can't it? I mean, the effort to implement it for one is probably the same as implementing it for all.

It depends on whether we also want deduplication to be stable across restarts. If we decide to make it stable, we accept added complexity and reduced maintainability, whereas with ephemeral deduplication we keep things simple at the cost of missed schedules, as you described.

@Kidswiss
Contributor

My personal opinion is that missed schedules are something the K8up operator should avoid as much as possible. Nobody wants a backup solution that may or may not trigger a job.

@ccremer
Contributor Author

ccremer commented Jan 25, 2021

Thanks for the good internal discussions 👍

Here is the new proposal for how it could work:
[diagram omitted]

  • We add a new CRD, EffectiveSchedule (better name welcome), which takes over the effectiveSchedule status fields from the Schedule CR. This new CR, stored in the same namespace the operator is running in, acts as a persistent link that holds the information needed to deduplicate Check and Prune.
  • This CR is created when a Schedule with an @-random spec is reconciled. If the operator finds an EffectiveSchedule object that already has a back-reference to this Schedule via OwnerReference, it does nothing. Otherwise, it creates a new EffectiveSchedule with a randomized schedule and adds it to the internal cron scheduler.
  • When another Schedule with the same schedule and the same backend(s) gets reconciled, the EffectiveSchedule gets an additional OwnerReference to the new Schedule, but the new Schedule is not added to the internal cron scheduler. That way, the duplicate is deduplicated. If the Schedule that got the Prune and Check jobs assigned is deleted, the next Schedule is elected to "master" the Check and Prune. If no Schedule remains in the list, Kubernetes automatically GCs the EffectiveSchedule.
  • The idea of this new CRD is to have an intermediary step before we can go to plain Kubernetes CronJobs in K8up v2. It is not meant to be a resource maintained by K8up end users, but purely by the operator; thus it's regarded as an implementation change. It may be removed in K8up v2 or later if the relationships can be computed at runtime.
  • The new CRD should go into K8up v1.0, but the deduplication feature goes into v1.1 (a rough sketch of the CRD follows below).
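
A hypothetical sketch of the proposed EffectiveSchedule CRD as a Go API type; the field names and kubebuilder markers are illustrative only, not the final design:

```go
package v1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// EffectiveScheduleSpec links all Schedule CRs that share the same backend
// and the same @-random schedule to one generated, persisted cron schedule.
type EffectiveScheduleSpec struct {
	// GeneratedSchedule is the randomized schedule, generated once and then
	// reused across operator restarts, e.g. "26 4 * * *".
	GeneratedSchedule string `json:"generatedSchedule,omitempty"`
	// JobType names the deduplicated job, e.g. "check" or "prune".
	JobType string `json:"jobType,omitempty"`
}

// +kubebuilder:object:root=true

// EffectiveSchedule carries an OwnerReference (in ObjectMeta) to every
// Schedule it deduplicates; once the last owning Schedule is gone,
// Kubernetes garbage-collects the object automatically.
type EffectiveSchedule struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec EffectiveScheduleSpec `json:"spec,omitempty"`
}
```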

@tobru tobru removed this from the v1.1 milestone Sep 13, 2021
@smlx

smlx commented Aug 12, 2024

Hi, what's the current status of this issue in the latest version of k8up? Does k8up ensure that a check job scheduled with @weekly-random will not run at the same time as a prune or backup job?

@Kidswiss
Contributor

Hi @smlx

K8up doesn't yet deduplicate jobs to the same repository.

However, there are already mechanisms in place that will prevent two exclusive jobs (like prune and check) from running at the same time.
