cmd/compile: add basic block counters for PGO #65466
Comments
We certainly would like a way to precisely identify basic blocks in profiles so we can use that information for basic block level optimizations. That is what #59612 is intended to cover. That issue doesn't cover precisely how the discriminator values are determined or how they get matched back to IR nodes and/or SSA blocks/values during the next build. It seems like this is something your design above tries to define, which is great. That said, I don't quite follow exactly how you are defining them, or how propagation works as IR is mutated and eventually becomes SSA. Are you assigning discriminator values to high-level IR nodes? I suspect this kind of propagation will get very complicated. I'd be tempted, at least for an initial version, to keep everything in SSA. Discriminators are based on basic block numbers, and optimizations prior to SSA simply cannot use them. Even in this case, I think tracking correlation to the profile through the different layers of SSA passes will be difficult. I happen to be working on a prototype to assign discriminators to each PC, and plumb that into the binary metadata and the pprof profile. Once I have that done, you may be able to use this prototype to play with the PGO side of matching the samples-with-discriminators back to the build and applying basic block optimizations.
@prattmic thank you for your answer. Yes, you are right that discriminators are based on the basic blocks. My proposal needs information about the column of the sampled instruction, so maybe the DWARF column information will be more suitable for my proposal; thanks for highlighting that. Answering the question about discriminator assignment: I do not do that for now (and I need the column number rather than the discriminators themselves). While the column information is useful, we can load the counters onto the AST nodes without it. The profile may be less precise, but we can still implement the profile-based optimizations. My motivation for loading the counters at the AST level is the opportunity to implement more than just the inline optimization at that level, so I think we should start loading the profile counters at the AST level.
Change https://go.dev/cl/560781 mentions this issue:
https://go.dev/cl/560781 plumbs a discriminator value from the compiler to the pprof Line.Column field (even though the value is not actually the column number). Feel free to use this if you'd like to play with using discriminators for PGO. If you'd like just the column number itself, see my comment at https://go-review.googlesource.com/c/go/+/560781/2/src/cmd/internal/obj/pcln.go.
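For anyone who wants to experiment with that CL, a minimal sketch of pulling per-position weights (including whatever gets plumbed into Line.Column) out of a pprof profile with the github.com/google/pprof/profile package could look like the following. The input file name, the leaf-frame-only aggregation, and the use of sample value index 1 as CPU time are assumptions for illustration, not part of the CL.

```go
// Sketch only: aggregate pprof sample weights by (file, line, column).
// Assumes a CPU profile whose second sample value is CPU nanoseconds
// (as produced by runtime/pprof); Line.Column is only non-zero when the
// toolchain actually emits something there (e.g. with CL 560781 applied).
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/google/pprof/profile"
)

type posKey struct {
	file   string
	line   int64
	column int64
}

func main() {
	f, err := os.Open("cpu.prof") // hypothetical input file
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	p, err := profile.Parse(f)
	if err != nil {
		log.Fatal(err)
	}

	weights := make(map[posKey]int64)
	for _, s := range p.Sample {
		if len(s.Location) == 0 || len(s.Value) < 2 {
			continue
		}
		// Only the leaf frame: that is where the PC was sampled.
		for _, ln := range s.Location[0].Line {
			if ln.Function == nil {
				continue
			}
			k := posKey{ln.Function.Filename, ln.Line, ln.Column}
			weights[k] += s.Value[1]
		}
	}
	for k, w := range weights {
		fmt.Printf("%s:%d:%d\t%d\n", k.file, k.line, k.column, w)
	}
}
```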
I don't see why this has to be a proposal; taking it out of the proposal process. |
Change https://go.dev/cl/564055 mentions this issue:
I implemented a proof-of-concept prototype for basic block counters. Currently done:
+ Loading counters to AST nodes by line number information
+ The AST counter propagation algorithm
+ Moving counters to SSA nodes from AST nodes
+ Adding profile usage in the basic block layout pass (just a proof of concept)
+ Printing counters to the HTML dumps
The patch shows how we can load basic block counters from the pprof file, how we can propagate them, and how to use them. The patch was tested on the go1 benchmark set, and it showed that, depending on the bb-layout pass, the fannkuch test can be 3.5% to 12.2% faster on an Intel Core machine (a Xeon machine does not show this improvement). To use basic block counters you should add the -bbpgo option: go test -a -bbpgo -pgo=go1.prof
This patch just shows the main idea of basic block counter loading. Before we can use it in the compiler, the following steps should be done:
* Improve the counter propagation on the AST and SSA. There is some propagation already, but it can still be improved
* Improve the basic block layout and register allocation passes with profile information
This patch itself is not for review, but I would like to get some feedback on the general approach: the propagation algorithm and the Node structure modifications themselves. Please take a look at irgraph.go and the SSA part.
Status update:
While working on this feature, I came across some problems:
Currently I set zero counters to the
Further plans:
Current patch adds the counters to the AST and SSA nodes. The counters are loaded from the pprof file, no profile format changes needed. Currently implemented:
+ Loading counters to AST nodes by line number information
+ The AST counter propagation algorithm
+ Moving counters to SSA nodes from AST nodes
+ Adding profile usage in basic block layout pass (just proof-of-concept)
+ Print counters to the html dumps
The patch shows how we can load basic block counters from the pprof file, how we can propagate them, and how to use them. The patch was tested on the go1 benchmark set, and it showed that depending on bb-layout pass the test fannkuch can be faster from 3.5% to 12.2% on the intel core machine (the xeon machine does not show this improvement). To use basic block counters you should add an option -bbpgo: go test -a -bbpgo -pgo=go1.prof
This patch just shows the main idea of basic block counters loading. Before we can use it in the compiler, the following steps should be done:
* Improve the counter propagation on the AST and SSA. Currently there are some propagations, but they still may be improved
* Improve basic block layout and register allocation passes with profile information
Fixes golang#65466
Change-Id: I5d6be7d87f384625259a9ba794744a652060de4e
Status update:
For now, there are lots of things to do, but the basic block counters subsystem can already be used by pass developers. I ask for a review of this patch. After that, I will be able to continue improving the basic block counters subsystem.
Current patch adds the counters to the AST and SSA nodes. The counters are loaded from the pprof file, no profile format changes needed.
To use basic block counters you should add an option -bbpgo: go test -a -bbpgo -pgo=go1.prof
Fixes golang#65466
Change-Id: I5d6be7d87f384625259a9ba794744a652060de4e
Alternative approach by @jinlin-bayarea: https://go.dev/cl/571535
I would like to discuss the current status of the basic-block counters implementation. According to issue #62463, we have a list of wanted PGO optimizations. Some of them are related to the AST level, and some of them to the SSA level. Now we have two prototypes for the basic block counters implementation: one that loads the counters at the AST level and one that loads them at the SSA level.
Both approaches have advantages and disadvantages, and I would like to understand if I should continue improving the first one. The advantages of loading counters on AST:
The problems of loading counters on AST:
The advantages of loading counters on SSA:
The problems of loading counters on SSA:
In general, I believe that loading counters into the AST is useful, as it gives us many opportunities for profile optimizations in all parts of the compiler. I think we can even use both approaches at the same time, as they do not conflict: for early optimizations we can use less precise counters, and for late optimizations more precise ones. Please share your thoughts on this idea; it is important to understand the community's point of view. CC @jinlin-bayarea @cherrymui @aclements @prattmic
Current patch adds the counters to the AST and SSA nodes. The counters are loaded from the pprof file, no profile format changes needed.
To use basic block counters you should add an option -bbpgo: go build -a -bbpgo -pgo=file.prof
Fixes golang#65466
Change-Id: I5d6be7d87f384625259a9ba794744a652060de4e
Current patch adds the counters to the AST and SSA nodes. The counters are loaded from the pprof file, no profile format changes needed.
To use basic block counters you should add an option -pgobb: go build -a -pgobb -pgo=file.prof
Fixes golang#65466
Change-Id: I5d6be7d87f384625259a9ba794744a652060de4e
Change https://go.dev/cl/602015 mentions this issue:
Delivered a new version of basic block PGO. Now it uses the preprofile subsystem, and tests for arm64 were added. The general idea is the same; the patch is ready for review and usage.
Change https://go.dev/cl/605555 mentions this issue:
Current progress of pgobb. What it can do:
Evaluation results. Testing modes:
Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz:
ARMv8 Kunpeng920:
Currently the geomean results show that pgobb is a bit better (or a bit worse) than standard pgo, and the other combinations of pgobb and greedy layout show larger dispersion in the results but in general give worse results. Below, the results for all the tests are listed: ARMv8 Kunpeng920:
Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz:
Some notes on the results:
BinaryTree17 and Fannkuch11 on Kunpeng strongly depend on code alignment, and the results may vary from version to version. Also, we can see that x86 shows improvements on Fannkuch11, and the best improvement comes from the interaction between pgobb and layout.
Here we see that Gob improves with pgobb on x86 and degrades with the other combinations on x86. The arm results are almost unchanged.
We see that on x86 the performance is worse, but the pgobbbbgreed mode shows an improvement on x86 and on arm.
Here we see that simple pgobb shows an improvement; this is an issue for investigation.
On Kunpeng this test strongly depends on code alignment, but we see a big degradation even on x86 with the greedy algorithm. This is an issue for investigation.
We see a degradation on Kunpeng and an improvement on x86, probably a code alignment issue. There is also a good improvement for the pgobb*greed modes.
The test degrades on x86 with pgobbgreed; this is an issue for investigation.
Good improvements for pgobb on both platforms, but degradations for the greedy algorithm on x86.
Also
We use pgobb in our production code and it shows a performance improvement of 1.5%, which is a good result.
Summary
pgobb itself is not an optimization but a framework for profile-guided optimizations, though it can show improvements on some tests. We see that the average benchmark results are not as good as they could be, and that should be investigated. I believe that improving pgobb and PGO in general will bring more benefit to us.
Interesting results, thanks for sharing! I recommend also benchmarking with sweet. Bent has a big set of benchmarks, but mostly microbenchmarks, so they aren't as useful for PGO evaluation. Sweet has "integration" benchmarks: larger applications where, IMO, it makes more sense to apply PGO.
The porting of the ext-stp algorithm is incomplete. I do have a data structure to store the edge counter information. In addition, you did not port the frequency propagation file. Please remove the incomplete ext-stp implementation from https://go-review.googlesource.com/c/go/+/605555.
Hello. Yes, I mention that it is incomplete and do not make measurements with it. If you insist, I will remove it, but this patch is in WIP status. Is it possible not to remove it before I finish? UPD: removed
Proposal Details
This is a proposal for implementing PGO basic block counters in the Go compiler. Issue #62463 describes profile-based optimizations useful for the Go compiler. Most of them (basic block ordering, loop unrolling, PGO register allocation and others) need counters inside the basic blocks. Currently, the Go compiler has a weighted call graph, which cannot be used for such optimizations directly.
Here I propose adding basic block counters to make the implementation of profile-guided optimizations possible.
General approach
The general approach is based on adding counter values to the AST and SSA IR nodes, getting these values from the pprof file and correcting them during the compilation.
Step 1. Load the counter values onto the AST nodes. The counters from the samples can easily be loaded onto the corresponding AST nodes. As we use a sampling profile, not all nodes will have values (a sketch of this step follows the step list below).
Step 2. Propagate the values to the remaining nodes. Here we traverse the AST nodes and propagate existing values to the nodes with no values. This is needed for the further steps.
Step 3. Correct the values after devirtualization and inlining. The callee function nodes contain the summary value of all calls, so after inlining we should re-evaluate these values according to the inline-site counter.
Step 4. Assign counters to the basic blocks during the SSA generation.
Step 5. Correct the counters of the basic blocks if any optimization changes the control flow.
Step 6. Implement the optimizations that rely on basic block counters.
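To make Steps 1 and 3 a bit more concrete, here is a minimal sketch under illustrative assumptions: the node type, its counter field, and the callSiteCount/calleeEntryCount scaling rule are stand-ins for what the real patch attaches to cmd/compile's IR, not the actual implementation.

```go
package pgobb

// pos identifies a source position at line granularity; with column
// information or discriminators (issue #59612) the key could be finer.
type pos struct {
	file string
	line int
}

// node is an illustrative stand-in for a compiler IR node.
type node struct {
	pos
	counter int64
	kids    []*node
}

// walk visits n and all of its descendants in preorder.
func walk(n *node, f func(*node)) {
	if n == nil {
		return
	}
	f(n)
	for _, k := range n.kids {
		walk(k, f)
	}
}

// loadCounters sketches Step 1: attach sampled weights to the nodes whose
// source position appears in the profile. Positions that were never
// sampled keep a zero counter and are handled by propagation (Step 2).
func loadCounters(root *node, samples map[pos]int64) {
	walk(root, func(n *node) {
		if w, ok := samples[n.pos]; ok {
			n.counter = w
		}
	})
}

// scaleInlined sketches Step 3: an inlined copy of a callee body still
// carries the callee's whole-program counts, so it is rescaled by
// callSiteCount/calleeEntryCount to reflect only this inline instance.
func scaleInlined(body *node, callSiteCount, calleeEntryCount int64) {
	if calleeEntryCount == 0 {
		return // nothing to scale against; leave the counters as they are
	}
	walk(body, func(n *node) {
		n.counter = n.counter * callSiteCount / calleeEntryCount
	})
}
```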
Notes on implementation
Alternative approach. The suggested approach assumes storing and correcting the counters during the whole compilation pipeline. This adds an additional field to the IR nodes and can complicate the implementation of optimizations (at least additional steps in the inliner). As an alternative, we could try to load counters onto particular SSA basic blocks, based on the position information of the operations. This approach has the following disadvantages: we still need counter correction based on top-down and bottom-up control flow graph traversal, plus additional correction based on inline tree information. If an optimization changes the control flow, we still need correction. Also, the "dynamic escapes on cold paths" optimization needs the counters on the AST nodes. So loading the counters onto the AST nodes is not more complicated (probably even easier) and gives more opportunities.
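For comparison, a sketch of that alternative is shown below, reusing the pos key from the earlier sketch; the value and block types are illustrative stand-ins for the compiler's internal SSA value and block types, and taking the hottest position (rather than a sum) is just one possible choice.

```go
// Sketch of the SSA-only alternative: derive a block weight from the
// sampled positions of the operations placed in it, with no AST stage.
type value struct {
	pos // source position of the operation (illustrative)
}

type block struct {
	values []value
	weight int64
}

// blockWeight takes the hottest sampled position among the block's
// operations; summing instead would over-count lines that expand into
// several instructions. Either way, the corrections discussed above
// (CFG traversal, inline tree) are still needed afterwards.
func blockWeight(b *block, samples map[pos]int64) int64 {
	var w int64
	for _, v := range b.values {
		if s := samples[v.pos]; s > w {
			w = s
		}
	}
	return w
}
```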
One of the non-trivial parts is Step 2: propagating counters over the AST. Probably, this algorithm will be implemented as a bottom-up and top-down walk through the tree; the particular algorithm will be designed during implementation. A rough sketch of such a walk follows.
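One plausible shape of that two-pass walk, again using the illustrative node type from the sketch above rather than the real IR, is shown below; treating a parent's weight as the maximum of its children and letting a missing child weight be inherited from the parent are assumptions of this sketch, not the algorithm from the patch.

```go
// propagate sketches Step 2: fill in nodes that received no samples.
// Bottom-up, a parent with no samples is assumed to be at least as hot
// as its hottest child; top-down, a child with no samples inherits its
// parent's weight as a rough estimate.
func propagate(root *node) {
	bottomUp(root)
	topDown(root, 0)
}

func bottomUp(n *node) int64 {
	if n == nil {
		return 0
	}
	var hottestKid int64
	for _, k := range n.kids {
		if w := bottomUp(k); w > hottestKid {
			hottestKid = w
		}
	}
	if n.counter == 0 {
		n.counter = hottestKid
	}
	return n.counter
}

func topDown(n *node, inherited int64) {
	if n == nil {
		return
	}
	if n.counter == 0 {
		n.counter = inherited
	}
	for _, k := range n.kids {
		topDown(k, n.counter)
	}
}
```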
To make the profile more precise, we need line discriminators. Currently, the debug information in a Go binary contains only per-line information. This matters, for example, when several conditions appear in a single "if" construct, but even without this information the profile will be useful. The approach for loading this information is described in issue #59612 (cmd/compile: add intra-line discrimination to PGO profiles).
Implementation plan
I made a prototype that loads counters into the AST IR nodes, and I am going to pass them to the SSA basic blocks. After that, I will implement Steps 2 and 3. Then I am going to add discriminators and implement the rest. After that, I am going to implement some of the optimizations, such as local basic block ordering.
I would like to get feedback from the community and understand if the community finds this useful.