feat: cumulative pprof merge for pull mode #1794
Conversation
Codecov Report
Base: 66.47% // Head: 66.47% // No change to project coverage 👍

```
@@ Coverage Diff @@
##             main    #1794   +/-   ##
========================================
  Coverage   66.47%   66.47%
========================================
  Files         172      172
  Lines        5594     5594
  Branches     1260     1506   +246
========================================
  Hits         3718     3718
  Misses       1867     1867
  Partials        9        9
========================================
```
```go
p.Profile = mergedProfileBytes.Bytes()
p.PreviousProfile = nil
p.SampleTypeConfig = stConfig
p.RawData = nil
```
Just a note: I might be mistaken, but this means we will deserialize the pprof protobuf bytes again in the parser further down the data flow, even though we already have the merged profile in memory. As far as I understand, we can't use it directly because it is a `github.com/google/pprof/profile.Profile` — which is very sad.
Yes, it is a different type.
Maybe we should do the new merge only for the remote write case, and let the parser do the merge (diff)?
> Maybe we should do the new merge only for remote write case? and let the parser do the merge(diff)

Yeah, this makes sense to me.
pkg/convert/pprof/profile.go
Outdated
```diff
@@ -67,6 +68,33 @@ func (p *RawProfile) Push(profile []byte, cumulative bool) *RawProfile {
 	return p.next
 }

+func (p *RawProfile) MergeCumulative(ms *cumulativepprof.Mergers) {
```
I think it should return an error
pkg/scrape/scrape.go
Outdated
```go
sl.scraper.profile = profile.Push(buf.Bytes(), sl.scraper.cumulative)
if !sl.disableCumulativeMerge {
	if sl.scraper.cumulative && profile.SampleTypeConfig != nil {
		if sl.scraper.mergers == nil {
			sl.scraper.mergers = cumulativepprof.NewMergers()
		}
		profile.MergeCumulative(sl.scraper.mergers)
	}
}
```
I don't fully understand why we do this check: `profile.SampleTypeConfig != nil`. Above I see

```go
if sl.scraper.profile == nil {
	sl.scraper.profile = &pprof.RawProfile{
		SampleTypeConfig: sl.scraper.config.SampleTypes,
	}
}
```

I must be missing something, but it does not seem there is a case where `profile.SampleTypeConfig != nil` evaluates as false.

A nit: consider moving the logic into `Push` (moving `sl.scraper.mergers` as well). The initial idea was that this method is a bridge between pull and push modes, but now it looks like a caller has to know too much about `RawProfile` internals.

Also, I can't remember why we need this `sl.scraper.cumulative` outside the profile — it seems it is not used there and should be moved into `RawProfile` as well.
Looks very good and promising!
pkg/cli/server.go
Outdated
```diff
@@ -201,7 +201,8 @@ func newServerService(c *config.Server) (*serverService, error) {
 	svc.scrapeManager = scrape.NewManager(
 		svc.logger.WithField("component", "scrape-manager"),
 		ingester,
-		defaultMetricsRegistry)
+		defaultMetricsRegistry,
+		svc.config.DisableCumulativeMerge || !svc.config.RemoteWrite.Enabled)
```
@korniltsev I have a question about this.
First of all, I rewrote the original boolean logic because it was too confusing.
But even after doing that, I don't think I understand why we should be doing force merges when remote write is enabled. Could you explain?
Everything makes sense, but I want to chat about the remote write / no remote write cases.
This reverts commit 89265cd.
Do the same thing as in the client (grafana/pyroscope-go#24) for scrape and before ingestion.