
Feature: Allow configuration of a rule evaluation delay #14061

Merged: 19 commits into main from gotjosh/evaluation-delay, May 30, 2024

Conversation

@gotjosh (Member) commented May 7, 2024

Now that Prometheus can be a remote write target, we think this is highly useful.

This basically enables Prometheus to do two things:

  1. Allow setting a server-wide rule evaluation delay.
  2. Allow the overriding of such a setting at a rule group level.

The main motivation of this work is to allow remote write receivers (in this particular case Prometheus itself) to delay alert rule evaluation by a certain period to take into account any sort of remote write delays.

In the early days of Mimir, our customers observed plenty of gaps in recording rules and flapping alerts due to remote write delays. Network anomalies are a matter of when, not if, and this change helps mitigate most of that flapping. In Mimir, we run it with a default of 1 minute, but for Prometheus I'm proposing a default of 0 to avoid any sort of breaking change and to leave it mostly as a tool for operators to decide how they want to balance timeliness of recording rule and alert evaluation against tolerance for remote write delays.

For context, this has been running in Mimir for years.

**Note for Reviewers**

I've tried to adhere as much as possible to the original author's code where it made sense -- however, there has been a huge refactor of the rules package since this code was introduced, so during the merge I had to manually port a lot of it to the right place. Please take a careful look.

@pracucci (Contributor) left a comment

Should we update the documentation here and here, to mention the new field in the rules config file?

(Resolved review thread on cmd/prometheus/main.go, outdated)
@pracucci (Contributor) left a comment

I compared the logic with the one we use in Mimir and I haven't seen any difference, so LGTM.

(Resolved review thread on rules/rule.go, outdated)
@gotjosh (Member, Author) commented May 9, 2024

> Should we update the documentation here and here, to mention the new field in the rules config file?

I have updated the docs here, as this is where we have the reference to the file format.

gotjosh marked this pull request as ready for review May 9, 2024 10:58
@gotjosh (Member, Author) commented May 9, 2024

@juliusv @roidelapluie @bwplotka @beorn7 @ArthurSens any thoughts?

@prymitive (Contributor) commented:

"rule evaluation delay" to me sounds like the rule query won't be executed immediately but rather after a while. But this change, I think, is to use a constant offset when querying metrics during rule evaluation. How about "rule evaluation offset"?

@juliusv (Member) left a comment

> "rule evaluation delay" to me sounds like the rule query won't be executed immediately but rather after a while. But this change, I think, is to use a constant offset when querying metrics during rule evaluation. How about "rule evaluation offset"?

Yeah, from reading the PR description and the flag help string, I also thought this would just pause the rule manager initially and then start executing rules as normal. But as far as I can see, it's not actually pausing initially at all; it's changing every query to run at a past timestamp and then storing the result at a past timestamp.

The closest analogy would be the offset modifier we have for selectors and subqueries in PromQL. Maybe the feature should be called "rule query offset" or something like that?
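For context, the existing PromQL offset modifier referenced here shifts a selector's evaluation time into the past (the metric name below is illustrative):

# Query the value of the series as of five minutes ago:
http_requests_total offset 5m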

(Resolved review thread on cmd/prometheus/main.go, outdated)
@@ -86,6 +86,9 @@ name: <string>
# rule can produce. 0 is no limit.
[ limit: <int> | default = 0 ]

# Delay rule evaluation of this particular group by the specified duration.
[ evaluation_delay: <duration> | default = rules.evaluation_delay ]
(Member) left a comment:

If I'm not mistaken, this would introduce the first config item that overrides a command-line flag... at least the other per-rule-group settings only override settings from the global configuration file. Makes me wonder:

  • Do we need this per group at all (at least initially, is someone asking for it already?)?
  • If so, would it be better to make this a global config option rather than a flag, to be more in line with what we do already?

@gotjosh (Member, Author) replied:

> Do we need this per group at all (at least initially, is someone asking for it already?)?

From experience - this has proven significantly useful.

The argument was that we want a lower evaluation delay for all of our metrics to safeguard against remote write delays, but an even bigger one (or no delay, given their time sensitivity) for our "system rules", as those monitor hundreds of thousands of things where false positives trigger an alert storm. The same argument works in reverse: we don't want any delay for most of our metrics, but we want one particular group to have a delay, because the remote write source its data comes from tends to lag behind sometimes due to its size. (See the sketch below.)
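A hedged sketch of those two scenarios side by side, using the per-group field as it is named at this point in the PR (group names, expressions, and durations are illustrative):

groups:
  - name: system-rules                # time-sensitive: no delay at all
    evaluation_delay: 0s
    rules:
      - record: system:up:count
        expr: count(up)
  - name: lagging-remote-source       # this source tends to lag: larger delay
    evaluation_delay: 2m
    rules:
      - record: lagging:up:count
        expr: count(up)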

> If so, would it be better to make this a global config option rather than a flag, to be more in line with what we do already?

I can see a world where you might want to control the flag at a server level - perhaps what I should introduce is a guard to make sure the config option cannot be lower than the server flag, which is what makes sense in my head.

(Resolved review thread on CHANGELOG.md, outdated)
@gotjosh (Member, Author) commented May 16, 2024

> The closest analogy would be the offset modifier we have for selectors and subqueries in PromQL. Maybe the feature should be called "rule query offset" or something like that?

I don't have a strong opinion on the name and I'm happy to go whichever direction you think is right - just want to say that a rename would be a big source of pain for us, as we'll have to deprecate flags and change the config file format, but that's not a Prometheus problem.

I just want to make sure that whatever name we end up picking is one you feel strongly about - and not one of those "it would be nice if" choices.

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
@roidelapluie (Member) commented:

I am on Julius' side; while reading the PR I was thinking it was slipping evals for a certain time, not evaluating in the past.

@juliusv (Member) commented May 16, 2024

> I don't have a strong opinion on the name and I'm happy to go whichever direction you think is right - just want to say that a rename would be a big source of pain for us, as we'll have to deprecate flags and change the config file format, but that's not a Prometheus problem.

Yeah, I understand and am sorry for the pain, but I also believe it's worth making a user-facing option as clear as possible. Even users who don't need it will have to read it in the docs and try to understand what it does and whether they need it.

I think even "offset" alone isn't fully clear, as that could just mean an offset in when rule evaluations are triggered, without changing the actual timestamp of queries into the past. So that's why I'd suggest rule_query_offset / ruleQueryOffset. This can just be queryOffset within the rules package and query_offset for the per-group option, as the rules context is clear there.
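For illustration, a minimal sketch of how the proposed naming could look in context (assuming the global default lands in the main configuration file; the exact placement is illustrative, not part of this comment):

# prometheus.yml
global:
  rule_query_offset: 1m        # server-wide default

# rules file
groups:
  - name: example
    query_offset: 5m           # per-group override
    rules:
      - record: job:up:sum
        expr: sum by (job) (up)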

> I can see a world where you might want to control the flag at a server level

Not sure I follow 100% - whether it's a flag or a global config option, both would control the option at a server level. A config option would just be more consistent in terms of how our existing configuration overrides work: I don't think we have a flag that can be overridden by any config option anywhere, but we have global config options that can be overridden in more specific places (e.g. rule evaluation intervals). That's just why it'd feel more natural to me to have the rule query offset global default setting in the config file as well (vs. as a flag). The other implication of putting something into the config vs. flag is that the setting has to be reloadable during runtime, which I hope is doable for the global rule query offset.

> perhaps what I should introduce is a guard to make sure the config option cannot be lower than the server flag, which is what makes sense in my head.

I could imagine both configuring lower and higher offsets for specific groups, depending on whether most of your data comes from a regular source or from a delayed one (then you'd choose the default offset to be according to that and either configure lower or higher offsets for specific exceptional groups that use data from a different source).

@gotjosh (Member, Author) commented May 16, 2024

> Yeah, I understand and am sorry for the pain, but I also believe it's worth making a user-facing option as clear as possible. Even users who don't need it will have to read it in the docs and try to understand what it does and whether they need it.

Sounds good to me!

> I think even "offset" alone isn't fully clear, as that could just mean an offset in when rule evaluations are triggered, without changing the actual timestamp of queries into the past. So that's why I'd suggest rule_query_offset / ruleQueryOffset. This can just be queryOffset within the rules package and query_offset for the per-group option, as the rules context is clear there.

This makes sense to me; of all the options, I like "rule query offset" / "query offset" the most. I'll make this change.

> Not sure I follow 100% - whether it's a flag or a global config option, both would control the option at a server level. A config option would just be more consistent in terms of how our existing configuration overrides work: I don't think we have a flag that can be overridden by any config option anywhere, but we have global config options that can be overridden in more specific places (e.g. rule evaluation intervals). That's just why it'd feel more natural to me to have the rule query offset global default setting in the config file as well (vs. as a flag). The other implication of putting something into the config vs. flag is that the setting has to be reloadable during runtime, which I hope is doable for the global rule query offset.

🤦 Sorry, I misunderstood your argument entirely - yeah, I think this change makes perfect sense. The pattern of a global config option being overridable via a more specific configuration option of a component is very common in the Prometheus world.

> I could imagine both configuring lower and higher offsets for specific groups, depending on whether most of your data comes from a regular source or from a delayed one (then you'd choose the default offset to be according to that and either configure lower or higher offsets for specific exceptional groups that use data from a different source).

Sounds good - I'll ship it without the guard and we can evaluate it as it evolves.

codesome and others added 9 commits May 16, 2024 12:12
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
Signed-off-by: gotjosh <josue.abreu@gmail.com>
Signed-off-by: gotjosh <josue.abreu@gmail.com>
Signed-off-by: gotjosh <josue.abreu@gmail.com>
Signed-off-by: gotjosh <josue.abreu@gmail.com>
Signed-off-by: gotjosh <josue.abreu@gmail.com>
Signed-off-by: gotjosh <josue.abreu@gmail.com>
Signed-off-by: gotjosh <josue.abreu@gmail.com>
Signed-off-by: gotjosh <josue.abreu@gmail.com>
…e it a global configuration option.

Signed-off-by: gotjosh <josue.abreu@gmail.com>

E Your branch is up to date with 'origin/gotjosh/evaluation-delay'.
Signed-off-by: gotjosh <josue.abreu@gmail.com>
Signed-off-by: gotjosh <josue.abreu@gmail.com>
rules/group.go Outdated
@@ -90,6 +91,7 @@ type GroupOptions struct {
Rules []Rule
ShouldRestore bool
Opts *ManagerOptions
RuleQueryOffset *time.Duration
@gotjosh (Member, Author) commented:

All public methods/attributes get the Rule prefix.

(Member) replied:

I would think that since this field is always accessed via its GroupOptions struct context, just QueryOffset would suffice?

Signed-off-by: gotjosh <josue.abreu@gmail.com>
@gotjosh (Member, Author) commented May 20, 2024

@juliusv @roidelapluie I think I have addressed all your concerns now. Let me know if you have any other thoughts.

You can look at the last 4 commits if you'd like to see what changed since you last reviewed (the force push was the result of some tests that changed in main for manager_test.go).

@machine424 (Collaborator) left a comment

Maybe we should make it explicit that with such an offset, on the shutdown of an instance an extra offset's worth of data will not be considered for evaluation. (It's not a magic solution for zero gaps.)

return *g.queryOffset
}

if g.opts.DefaultRuleQueryOffset != nil {
@machine424 (Collaborator) commented:

Shouldn't we add some pre-checks to ensure that we'll be able to append that "old" data?
And some pre-check regarding the eval_interval (delay < eval_interval, maybe)?

@gotjosh (Member, Author) replied:

I'm not sure I follow, can you please elaborate?

@machine424 (Collaborator) replied:

Adding some validation to make sure that the "append in the past" will be possible (with regard to minValidTime and the OOO config as well); otherwise the samples would be rejected.
Also, we may want to prevent setting the delay > eval_interval.

@gotjosh (Member, Author) replied:

> Also, we may want to prevent setting the delay > eval_interval.

I don't think I get this. Why does it matter? Assuming we have a 5m delay and a 1m interval, this is perfectly fine:

groups:
  - name: offset
    query_offset: 5m
    interval: 1m
    rules:
      - record: offset
        expr: vector(12)

Your initial rule evaluation would be delayed by 5m, but afterwards, it'll continue to be evaluated every 1m, but for a timestamp 5m in the past.
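To illustrate the behaviour described above, here is a small standalone Go sketch (illustrative only, not the PR's actual code) of how wall-clock evaluation times relate to the query/storage timestamps under an offset:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Illustrative values matching the example above: evaluations still
	// fire every interval, but each query (and the stored sample) uses a
	// timestamp shifted into the past by the offset.
	interval := 1 * time.Minute
	offset := 5 * time.Minute

	start := time.Now()
	for i := 0; i < 3; i++ {
		firesAt := start.Add(time.Duration(i) * interval) // wall-clock eval time
		evalTS := firesAt.Add(-offset)                    // timestamp queried and stored
		fmt.Printf("eval %d fires at %s, evaluates data as of %s\n",
			i+1, firesAt.Format(time.Kitchen), evalTS.Format(time.Kitchen))
	}
}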

> Adding some validation to make sure that the "append in the past" will be possible (with regard to minValidTime and the OOO config as well); otherwise the samples would be rejected.

I'm not entirely sure this is needed; you'd have 3 signals to indicate that something went wrong:

  1. The rule health would be set to bad and it would show up in the UI.
  2. The rule evaluation would fail - and we highly encourage having meta-monitoring on those (see the sketch after this list).
  3. We would log the reason for the failure.
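A hedged sketch of the meta-monitoring suggested in point 2, alerting on Prometheus's own rule-failure metric (the metric name exists in Prometheus; the threshold and durations are illustrative):

groups:
  - name: meta-monitoring
    rules:
      - alert: RuleEvaluationFailures
        expr: increase(prometheus_rule_evaluation_failures_total[5m]) > 0
        for: 10m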

@machine424 (Collaborator) replied:

> I don't quite understand this. Why does it matter? Let's assume we have a 5-minute delay and a 1-minute interval - this seems perfectly fine.

Yes, I'm aware that everything will be shifted. I was just considering that perhaps we might want to "guarantee" that when the delay is enabled and Prometheus is shut down, the window of data that won't be evaluated stays < delay + eval_interval < 2 * eval_interval (an extra eval_interval compared to when the delay is disabled). With an unlimited delay, the gap may be larger upon Prometheus shutdown. But if we assume users know what they're doing, we can leave it as is.

> I'm not entirely sure this is necessary; you'd have 3 signals to indicate that something went wrong:

Yes, I just thought that "failing early" would be better, but I think some of the parameters that control the "append in the past" may change dynamically, which could make spotting that more difficult.

@gotjosh (Member, Author) commented May 30, 2024

Your feedback is very valid. Would you mind opening an issue for your first point, in case it becomes a wider problem? (I can't think of a world where it makes sense to use offsets so long that this matters.)

I think we could fix this by looking at the last evaluation as we restart, but we've never had this feedback before, which makes me think it's not as relevant as we think it is, and it's not a straightforward piece of work.

@gotjosh (Member, Author) commented May 28, 2024

@juliusv @roidelapluie @machine424 I have more work I'd like to do around the rules engine so if possible, I'd love to move forward. I'll take your silence as consensus and attempt to merge it tomorrow unless I hear otherwise.

(Resolved review threads, outdated: config/config.go, docs/configuration/configuration.md, and two on docs/configuration/recording_rules.md)
Rules       []RuleNode      `yaml:"rules"`
Name        string          `yaml:"name"`
Interval    model.Duration  `yaml:"interval,omitempty"`
QueryOffset *model.Duration `yaml:"query_offset,omitempty"`
@juliusv (Member) commented:

Does this need to be a pointer? For the Interval above, we just use the zero value to indicate that it's unset (and the omitempty ensures that it's not serialized then).

@gotjosh (Member, Author) commented May 29, 2024

Because this can potentially override the configuration value, we need to differentiate between a zero value (which means "override whatever you have in the global config") and no value (which means "use the global configuration option"). What do you think?

@juliusv (Member) replied:

I see, right. So for example if you have a 5m offset in the global config, but you want a 0m offset in a specific rule group. Yeah, that makes sense, thanks for the explanation :) 👍
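A minimal runnable Go sketch of the nil-vs-zero distinction being discussed (identifiers here are illustrative, not the PR's actual ones):

package main

import (
	"fmt"
	"time"
)

// resolveQueryOffset: nil means "unset, fall back to the global default";
// a non-nil zero value means "explicitly no offset for this group".
func resolveQueryOffset(group *time.Duration, globalDefault time.Duration) time.Duration {
	if group != nil {
		return *group // explicit per-group value wins, even if it is 0
	}
	return globalDefault
}

func main() {
	global := 5 * time.Minute
	zero := time.Duration(0)
	fmt.Println(resolveQueryOffset(nil, global))   // 5m0s (unset, falls back)
	fmt.Println(resolveQueryOffset(&zero, global)) // 0s (explicit zero)
}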


(Resolved review thread on rules/group.go, outdated)
Co-authored-by: Julius Volz <julius.volz@gmail.com>
Signed-off-by: gotjosh <josue.abreu@gmail.com>
@juliusv (Member) commented May 29, 2024

Btw. 👍 besides my last comment/question about the pointer.

Signed-off-by: gotjosh <josue.abreu@gmail.com>
Signed-off-by: gotjosh <josue.abreu@gmail.com>
@gotjosh (Member, Author) commented May 30, 2024

I'm going to proceed with the merge. If there's any additional feedback, please let me know so I can follow up immediately.

gotjosh merged commit 37b408c into main May 30, 2024 (42 checks passed)
gotjosh deleted the gotjosh/evaluation-delay branch May 30, 2024 10:49
gotjosh added a commit that referenced this pull request Jun 5, 2024
A small oversight from when I introduced #14061. I could add a test to cover it, but it seems like overkill given that other similar attributes don't have one either. Let me know if you think it's worth it.

Signed-off-by: gotjosh <josue.abreu@gmail.com>