Cgroup2: Reduce allocations for manager.Stat #281

Merged: 2 commits, May 10, 2023

Conversation

@dcantah (Member) commented Apr 7, 2023

Related to #280. There are still some spots we can fix, as the majority of allocations are occurring in bufio.Scan in readKVStatsFile. cc @kolyshkin @manugupt1, seeing as we were all there for the last one.

This change works towards bringing down allocations for manager.Stat(). There are quite a few things we can do:

  1. Swap to opening a file and issuing a single read for uint64/"max" values. Previously we were doing os.ReadFile, which returns a newly allocated byte slice on every call (see the sketch below this list).
  2. Size the map we store {controller}.stat values in. We store 40+ stats in this map and the default bucket size in Go seems to be smaller than this, so we'd incur a couple of allocations at runtime when adding these.
  3. Finally, and funnily enough the biggest win: just not calling f.Readdir(-1) for hugetlb stats, swapping it for a trick the runc cgroup code does. The number of allocations shaved off here is tied to the number of files in the cgroup directory, as f.Readdir(-1) was being called every time we went to collect hugetlb stats (it returns a slice of interfaces, and all we were doing with them was calling Name()). To optimize this we can take advantage of the fact that we know exactly which files we need to open, as they're fixed; the only variable portion is the hugepage sizes available on the host. We can compute the hugepage sizes in a sync.Once, and all future manager.Stat calls reap the benefit because we no longer need to Readdir() the entire cgroup path to search for hugetlb.<pagesize>.* files.
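A minimal sketch of the single-read idea in (1), assuming a helper shaped roughly like the following (illustrative only, not necessarily the exact code in this change; readUint64File is a placeholder name):

package cgroup2 // illustrative snippet

import (
	"math"
	"os"
	"strconv"
	"strings"
)

// readUint64File opens the file and issues a single Read into a fixed-size
// buffer instead of os.ReadFile, which allocates a fresh byte slice per call.
// Like getStatFileContentUint64 it swallows errors and reports 0.
func readUint64File(path string) uint64 {
	f, err := os.Open(path)
	if err != nil {
		return 0
	}
	defer f.Close()

	var buf [32]byte // plenty for a uint64 (or "max") plus a trailing newline
	n, err := f.Read(buf[:])
	if err != nil {
		return 0
	}
	v := strings.TrimSpace(string(buf[:n]))
	if v == "max" { // cgroup v2 writes "max" for "no limit"
		return math.MaxUint64
	}
	val, err := strconv.ParseUint(v, 10, 64)
	if err != nil {
		return 0
	}
	return val
}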

All in all, on Go 1.19.4 this drops allocations in Stat from 322 to 198:

benchstat ~/benchstat/old.txt ~/benchstat/new.txt 
goos: linux
goarch: amd64
pkg: github.com/containerd/cgroups/v3/cgroup2
cpu: AMD Ryzen 9 5900X 12-Core Processor            
        │ /home/dcantah/benchstat/old.txt │   /home/dcantah/benchstat/new.txt   │
        │             sec/op              │   sec/op     vs base                │
Stat-24                      112.29µ ± 3%   65.10µ ± 6%  -42.02% (p=0.000 n=10)

        │ /home/dcantah/benchstat/old.txt │   /home/dcantah/benchstat/new.txt    │
        │              B/op               │     B/op      vs base                │
Stat-24                      41.40Ki ± 0%   22.26Ki ± 0%  -46.23% (p=0.000 n=10)

        │ /home/dcantah/benchstat/old.txt │  /home/dcantah/benchstat/new.txt   │
        │            allocs/op            │ allocs/op   vs base                │
Stat-24                        322.0 ± 0%   198.0 ± 0%  -38.51% (p=0.000 n=10)

@kolyshkin (Contributor):

I suggest making the readHugeTlbStats changes related to readdirnames a separate commit.

@dcantah (Member Author) commented Apr 7, 2023

@kolyshkin Made a separate commit here for the hugeTlb changes, thanks for the quick review!

// Sizing this avoids an allocation to increase the map at runtime;
// currently the default bucket size is 8 and we put 40+ elements
// in it so we'd always end up allocating.
out := make(map[string]string, 50)
Contributor:

perhaps have the size as 0 and capacity as 64?

Member Author:

You can't set the capacity of a map; make only takes a size hint for maps. Capacity is a slice (and channel buffer) concept.
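For illustration (not from the PR), the distinction in Go:

package main

import "fmt"

func main() {
	// make on a map only takes an initial size hint; maps have no separate
	// capacity and cap() is not defined for them.
	m := make(map[string]string, 50)
	fmt.Println(len(m)) // 0

	// Slices can be created with both a length and a capacity.
	s := make([]string, 0, 64)
	fmt.Println(len(s), cap(s)) // 0 64

	// Channel buffers, by contrast, do have a capacity.
	ch := make(chan int, 8)
	fmt.Println(cap(ch)) // 8
}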

Contributor:

Right, my bad, ENEEDCOFFEE

@kolyshkin (Contributor):

Looking at the code, I'd rather have it refactored first and optimized after. At least some parts are crying out to be removed, such as readSingleFile -- PTAL #283

@dcantah (Member Author) commented Apr 7, 2023

@kolyshkin ack, there are quite a few places that all do the same thing. I'm fine getting rid of them and then rebasing here.

@dcantah changed the title from "Cgroups2: Reduce allocations for manager.Stat" to "Cgroup2: Reduce allocations for manager.Stat" on Apr 11, 2023
@dcantah marked this pull request as draft on April 13, 2023 20:42
@dcantah force-pushed the cg2-reduce-allocs branch 2 times, most recently from 488dbdc to f6f6bf0, on April 18, 2023 02:04
@dcantah marked this pull request as ready for review on April 18, 2023 02:07
@dcantah force-pushed the cg2-reduce-allocs branch 2 times, most recently from 66fce44 to 41646cb, on April 24, 2023 11:13
@dcantah (Member Author) commented Apr 24, 2023

Okay, updated to include the runc trick @kolyshkin, great suggestion. We're down to 199 now.

cgroup2/utils.go Outdated
Comment on lines 393 to 398
for _, pagesize := range hugePageSizes() {
	usage = append(usage, &stats.HugeTlbStat{
		Max:      getStatFileContentUint64(filepath.Join(path, "hugetlb."+pagesize+".max")),
		Current:  getStatFileContentUint64(filepath.Join(path, "hugetlb."+pagesize+".current")),
		Pagesize: pagesize,
	})
Member Author:

So while I do love how this looks now, we'd include N entries in this slice (where N is the number of hugepage sizes found in /sys/kernel/mm/hugepages), whereas before, if we failed to find any hugetlb.x.y files, we'd return an empty slice. If someone didn't have the hugetlb controller enabled, it may be odd to see multiple entries all with zero stats. There's not a great way to handle this if we use getStatFileContentUint64, as it doesn't error, it just returns 0, which is a valid value for these files. I don't want to make getStatFileContentUint64 return an error either, as that makes the other 7 uses of it more annoying. We could just make an anonymous function here that does what getStatFileContentUint64 does but errors on failure, and we'd continue in the loop; or leave it as is if no one has concerns.
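For reference, the anonymous-function idea could look roughly like this inside readHugeTlbStats (a sketch only, with readUint64 as an illustrative name; it reuses the single-read shape but returns an error instead of 0, and assumes the surrounding loop context quoted above):

// readUint64 mirrors getStatFileContentUint64's parsing, but surfaces errors
// so the loop can skip sizes whose files don't exist.
readUint64 := func(p string) (uint64, error) {
	f, err := os.Open(p)
	if err != nil {
		return 0, err
	}
	defer f.Close()
	var buf [32]byte
	n, err := f.Read(buf[:])
	if err != nil {
		return 0, err
	}
	v := strings.TrimSpace(string(buf[:n]))
	if v == "max" {
		return math.MaxUint64, nil
	}
	return strconv.ParseUint(v, 10, 64)
}

for _, pagesize := range hugePageSizes() {
	max, err := readUint64(filepath.Join(path, "hugetlb."+pagesize+".max"))
	if err != nil {
		continue // controller disabled or file missing: keep the old "empty slice" behavior
	}
	current, err := readUint64(filepath.Join(path, "hugetlb."+pagesize+".current"))
	if err != nil {
		continue
	}
	usage = append(usage, &stats.HugeTlbStat{Max: max, Current: current, Pagesize: pagesize})
}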

Contributor:

Hmm! Can we do something like

hugePageSizeSlice := hugePageSizes()
usage := make([]*stats.HugeTlbStat, len(hugePageSizeSlice))
and then iterate like an array:
for idx, pageSize := range hugePageSizeSlice {
	usage[idx] = ...
}

Member Author:

This is the same outcome though, right? We still have a slice with empty entries, where the number of them is tied to how many hugepage sizes we found in /sys/kernel/mm/hugepages. Before this we'd return an empty slice, which I think is ideal.

Contributor:

Oh okay, yes, what I suggested matches your current code; I was just looking at it from an allocation perspective. If you pre-allocate, the slice won't have to grow while looping and appending.

I do not have an opinion on the functionality as I do not know how users use it. I would try to keep the previous behavior, though.

Contributor:

I agree with @manugupt1 that pre-allocating the usage slice is better than growing it dynamically. Even if it won't shave off any allocations, this is more optimized overall, and since we know the size, it makes total sense to do so.

Member Author:

Oh, I agree about pre-allocating; the point I was trying to get across still stands: this behavior is different from what's here today. We'll have entries in the hugetlb stats regardless of whether someone even has the controller enabled, as long as there are entries in /sys/kernel/mm/hugepages.

Contributor:

Right! I am not the best judge of this tbh!

Member Author:

Let's see what others think; I don't think this is too crazy a departure.

@dcantah requested a review from manugupt1 on April 25, 2023 05:28
This change works towards bringing down allocations for manager.Stat().
There are two things this change does:

1. Swap to opening a file and issuing a single read for uint64/"max" values.
Previously we were doing os.ReadFile, which returns a newly allocated byte
slice on every call.
2. Size the map we store {controller}.stat values in. We store 40+ stats in
this map and the default bucket size in Go seems to be smaller than this,
so we'd incur a couple of allocations at runtime when adding these.

Signed-off-by: Danny Canter <danny@dcantah.dev>
@kolyshkin (Contributor) left a comment:

LGTM overall, except that preallocating the usage slice would be better.

@dcantah also, if you like, you can add Co-authored-by: Kir Kolyshkin <kolyshkin@gmail.com> to the second commit, as I wrote that venerable /sys/kernel/mm/hugepages/* parsing code.

@kolyshkin (Contributor):

All in all on Go 1.19.4 this drops allocations in Stat from 322 to 199, or 47%.

Oh, and if you can use benchstat for such calculations that'd be terrific.

@dcantah (Member Author) commented Apr 27, 2023

@kolyshkin Sure thing! I didn't even realize you wrote that, so you've already gone through this pain 🤣

In the journey of continuing to reduce allocations in manager.Stat, this
commit sets its sights on optimizing readHugeTlbStats. The number of
allocations shaved off here is tied to the number of files in the cgroup
directory, as f.Readdir(-1) was being called, which returns a slice of
interfaces. To optimize this we can use a trick from runc's cgroup code:
take advantage of the fact that we know exactly which files we need to
open, as they're fixed; the only variable portion is the hugepage sizes
available on the host. We can compute the hugepage sizes in a sync.Once,
and all future manager.Stat calls reap the benefit because we don't need
to Readdir() the entire cgroup path to search for hugetlb.<pagesize>.*
files.

This brings the total number of allocations on Go 1.19.4 down to 199 from
the starting point of 322.

Signed-off-by: Danny Canter <danny@dcantah.dev>
Co-authored-by: Kir Kolyshkin <kolyshkin@gmail.com>
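For context, the sync.Once caching described in this commit could be shaped roughly like the sketch below. It assumes the /sys/kernel/mm/hugepages directory names look like "hugepages-2048kB" and converts them to the "64KB"/"2MB"/"1GB" spelling used by the hugetlb.* file names; the actual helper may differ.

package cgroup2 // illustrative snippet

import (
	"fmt"
	"os"
	"strconv"
	"strings"
	"sync"
)

var (
	hugePageSizesOnce sync.Once
	hugePageSizesList []string
)

// hugePageSizes lists /sys/kernel/mm/hugepages exactly once and caches the
// result, so later manager.Stat calls never have to Readdir the cgroup path
// just to discover which hugetlb.<pagesize>.* files exist.
func hugePageSizes() []string {
	hugePageSizesOnce.Do(func() {
		entries, err := os.ReadDir("/sys/kernel/mm/hugepages")
		if err != nil {
			return
		}
		for _, e := range entries {
			// e.Name() looks like "hugepages-2048kB"; parse the kB count.
			kb, err := strconv.ParseUint(
				strings.TrimSuffix(strings.TrimPrefix(e.Name(), "hugepages-"), "kB"), 10, 64)
			if err != nil {
				continue
			}
			// Convert, e.g., 2048kB -> "2MB", 1048576kB -> "1GB", 64kB -> "64KB".
			units := []string{"KB", "MB", "GB"}
			i := 0
			for kb >= 1024 && i < len(units)-1 {
				kb /= 1024
				i++
			}
			hugePageSizesList = append(hugePageSizesList, fmt.Sprintf("%d%s", kb, units[i]))
		}
	})
	return hugePageSizesList
}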
@dcantah (Member Author) commented Apr 28, 2023

Oh, and if you can use benchstat for such calculations that'd be terrific.

No wonder you recommended it, I fat-fingered 47 instead of 38... No more programming past 12 🐸

@fuweid (Member) left a comment:

LGTM
