
Long start time on bpf-next (6.4-rc3) #189

Closed
brb opened this issue Jun 6, 2023 · 5 comments
Labels: bug (Something isn't working), help wanted (Extra attention is needed)

Comments

@brb
Member

brb commented Jun 6, 2023

How to reproduce:

cd $GOPATH/src/github.com/cilium
git clone https://github.com/cilium/little-vm-helper
cd little-vm-helper && make && sudo make install && cd -
mkdir images/
IMAGE_DIR=./images ./little-vm-helper/scripts/pull_image.sh quay.io/lvh-images/kind:bpf-next-20230531.165437
lvh run --image ./images/kind_bpf-next.qcow2 --host-mount $GOPATH/src/github.com/cilium/ --daemonize -p 2222:22 --cpu=3 --mem=6G
ssh -p 2222 -o "StrictHostKeyChecking=no" root@localhost
cd /host/pwru
./pwru --filter-dst-ip=1.1.1.1
... takes ages before the kprobe attach messages appear

strace.log

@brb added the bug (Something isn't working) and help wanted (Extra attention is needed) labels on Jun 6, 2023
brb added a commit that referenced this issue Jun 6, 2023
Until #189 has been fixed.

Signed-off-by: Martynas Pumputis <m@lambda.lt>
tklauser pushed a commit that referenced this issue Jun 6, 2023
Until #189 has been fixed.

Signed-off-by: Martynas Pumputis <m@lambda.lt>
@lmb
Contributor

lmb commented Jun 28, 2023

(pprof) top -cum
Showing nodes accounting for 2.86s, 1.68% of 169.87s total
Dropped 361 nodes (cum <= 0.85s)
Showing top 10 nodes out of 104
      flat  flat%   sum%        cum   cum%
         0     0%     0%     98.79s 58.16%  main.main
         0     0%     0%     98.79s 58.16%  runtime.main
     2.86s  1.68%  1.68%     88.28s 51.97%  github.com/cilium/ebpf/btf.copier.copy
         0     0%  1.68%     88.24s 51.95%  github.com/cilium/ebpf.(*CollectionSpec).LoadAndAssign
         0     0%  1.68%     88.24s 51.95%  github.com/cilium/ebpf.(*CollectionSpec).LoadAndAssign.func1
         0     0%  1.68%     88.24s 51.95%  github.com/cilium/ebpf.(*collectionLoader).loadProgram
         0     0%  1.68%     88.24s 51.95%  github.com/cilium/ebpf.assignValues
         0     0%  1.68%     88.23s 51.94%  github.com/cilium/ebpf.newProgramWithOptions
         0     0%  1.68%     88.20s 51.92%  github.com/cilium/ebpf.applyRelocations
         0     0%  1.68%     88.20s 51.92%  github.com/cilium/ebpf/btf.CORERelocate

Ouch! 😅 Definitely a cilium/ebpf issue. Mind moving the issue?

NVM, I'll create a new one.
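For anyone who wants to reproduce a profile like the one above: a minimal sketch of capturing a CPU profile from a Go binary such as pwru, assuming you can patch main (the output file name is arbitrary). Inspect the result with go tool pprof ./pwru cpu.prof, then top -cum as shown above.

package main

import (
	"log"
	"os"
	"runtime/pprof"
)

func main() {
	// Profile the whole run, including spec loading and kprobe attach.
	f, err := os.Create("cpu.prof")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	if err := pprof.StartCPUProfile(f); err != nil {
		log.Fatal(err)
	}
	defer pprof.StopCPUProfile()

	// ... the rest of main, including LoadAndAssign ...
}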

@Asphaltt
Contributor

I can also confirm that github.com/cilium/ebpf/btf.Copy() consumes most of the loading time.

It accounts for 10m1.519248586s of the 10m21.610184033s that bpfSpec.LoadAndAssign() takes in total.

My environment is a 6.2 kernel on an Ubuntu 23.04 VM built from the server ISO (see #201 for env info).

pwru.run.log
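For a quick check without a full profile, a minimal sketch of timing LoadAndAssign directly; the object file name and program name below are illustrative placeholders, not pwru's actual ones:

package main

import (
	"log"
	"time"

	"github.com/cilium/ebpf"
)

func main() {
	spec, err := ebpf.LoadCollectionSpec("pwru_bpfel.o") // object name illustrative
	if err != nil {
		log.Fatal(err)
	}

	// The program name in the struct tag is a hypothetical placeholder.
	var objs struct {
		KprobeSkb1 *ebpf.Program `ebpf:"kprobe_skb_1"`
	}

	start := time.Now()
	if err := spec.LoadAndAssign(&objs, nil); err != nil {
		log.Fatal(err)
	}
	log.Printf("LoadAndAssign took %s", time.Since(start))
}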

@brb
Member Author

brb commented Jun 29, 2023

Whoa, nice find!

As a temporary solution, we can disable field access via CO-RE. The field offsets are very unlikely to change between different kernel versions.
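A rough sketch of one way that could look, assuming the offsets are resolved once in userspace and injected as plain constants via cilium/ebpf's RewriteConstants instead of per-load CO-RE field relocations; the constant name, object file, and value are all hypothetical:

package main

import (
	"log"

	"github.com/cilium/ebpf"
)

func main() {
	// Load the compiled BPF object (file name illustrative).
	spec, err := ebpf.LoadCollectionSpec("pwru_bpfel.o")
	if err != nil {
		log.Fatal(err)
	}

	// SKB_LEN_OFFSET would be a `volatile const __u32` declared in the
	// BPF C source; 112 is a made-up value. In reality it would be
	// resolved from kernel BTF once at startup.
	if err := spec.RewriteConstants(map[string]interface{}{
		"SKB_LEN_OFFSET": uint32(112),
	}); err != nil {
		log.Fatal(err)
	}

	// spec.LoadAndAssign(...) would follow here.
}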

@Asphaltt
Contributor

Furthermore, I added an fmt log to github.com/cilium/ebpf/btf.Copy():

func Copy(typ Type, transform Transformer) Type {
	defer functrace.Trace()()                               // added: time each call
	fmt.Printf("Copy type: %s, %+v\n", typ.TypeName(), typ) // added: log every copied type
	copies := make(copier)
	copies.copy(&typ, transform)
	return typ
}

Here is the log file: pwru.run.log.

Then, I dumped the info of the copied structs from vmlinux into btf.vmlinux.structs.txt with:

bpftool btf dump file /sys/kernel/btf/vmlinux format raw | grep -E "'(iphdr|ipv6hdr|net|net_device|pt_regs|sk_buff|sock|tcphdr|udphdr)'" > btf.vmlinux.structs.txt

Hopefully, it can help to solve this issue.
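The same struct info can also be read programmatically; a minimal sketch using cilium/ebpf's btf package against /sys/kernel/btf/vmlinux, the same source bpftool reads:

package main

import (
	"fmt"
	"log"

	"github.com/cilium/ebpf/btf"
)

func main() {
	// Parse the kernel's BTF blob from /sys/kernel/btf/vmlinux.
	spec, err := btf.LoadKernelSpec()
	if err != nil {
		log.Fatal(err)
	}

	for _, name := range []string{"iphdr", "sk_buff", "sock"} {
		var s *btf.Struct
		if err := spec.TypeByName(name, &s); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("struct %s: size=%d bytes, %d members\n", s.Name, s.Size, len(s.Members))
	}
}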

@brb
Member Author

brb commented Jul 6, 2023

Fixed by #220.

@brb brb closed this as completed Jul 6, 2023