Could not found a container for a registered process #169

Closed
mjura opened this issue Feb 9, 2022 · 4 comments · Fixed by #171

@mjura
Collaborator

mjura commented Feb 9, 2022

After the migration to Aya we can observe a new issue.

How to reproduce it:

  1. Install lockcd using the normal procedure
  2. Launch some example pods
  3. Try to connect to the pods
  4. Upgrade the lockcd Helm release
  5. After trying to connect to the pods again, we can see the following errors:
mjura@gecko:~/git/lockc> kubectl logs -n lockc lockcd-dn7kf -f
2022-02-09T14:08:17.937616Z ERROR lockcd: could not send eBPF command result although the operation was succeessful command="add_proceess
bpftool prog trace
...
runc-30504   [001] d..21 624920.732342: bpf_trace_printk: error: get_policy_level: could not found a container for a registered process

mjura@gecko:~/lockc> kubectl exec -ti myapp-7bb5f9b56b-vqjxc -- bash
error: Internal error occurred: error executing command in container: failed to exec in container: failed to start exec "da94fe7fa19be99a826d1f93776b389112b7925bc13bdd45d97436431cb86db7": OCI runtime exec failed: exec failed: container_linux.go:380: starting container process caused: process_linux.go:99: starting setns process caused: fork/exec /proc/self/exe: operation not permitted: unknown
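
For context, here is a minimal, hypothetical sketch of the kind of lookup chain that emits this trace message. The map and type names are illustrative only, not lockc's actual identifiers: a registered process resolves to a container id, but the corresponding container entry is missing, so the policy lookup fails.

/* Hypothetical sketch with illustrative names: the process is found in a
 * process-tracking map, but the container it points to is not present in
 * the containers map, which is what happens when a newly loaded program
 * starts with fresh, empty maps instead of the previously populated ones. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct container {
    __u32 policy_level;
};

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 4096);
    __type(key, __u32);              /* pid */
    __type(value, __u32);            /* container id */
} processes SEC(".maps");

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 4096);
    __type(key, __u32);              /* container id */
    __type(value, struct container);
} containers SEC(".maps");

static __always_inline int get_policy_level(__u32 pid)
{
    __u32 *container_id = bpf_map_lookup_elem(&processes, &pid);
    if (!container_id)
        return -1; /* process was never registered */

    struct container *c = bpf_map_lookup_elem(&containers, container_id);
    if (!c) {
        /* The branch behind the bpf_trace_printk error seen above. */
        bpf_printk("error: get_policy_level: could not found a container for a registered process");
        return -1;
    }

    return c->policy_level;
}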
@mjura
Collaborator Author

mjura commented Feb 9, 2022

This issue looks similar to #145.

@mjura mjura self-assigned this Feb 9, 2022
@vadorovsky
Member

Potentially a bug in Aya, which should reuse the maps simply via the map_pin_path param here:

https://github.com/rancher-sandbox/lockc/blob/main/lockc/src/load.rs#L14

vadorovsky added a commit to vadorovsky/lockc that referenced this issue Feb 10, 2022
Aya relies on the `pinning` field in BPF map definitions. libbpf doesn't
provide that field, so instead of using their bpf_map_def struct, here
we define our bpf_elf_map struct which has it.

Our structure is similar to those available in Cilium[0] and some
selftests in the kernel tree[1].

[0] https://github.com/cilium/cilium/blob/v1.11.1/bpf/include/bpf/loader.h#L19-L29
[1] https://elixir.bootlin.com/linux/v5.16.8/source/samples/bpf/tc_l2_redirect_kern.c#L23

Fixes: lockc-project#169
Signed-off-by: Michal Rostecki <mrostecki@opensuse.org>
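
For reference, the iproute2/Cilium-style definition mentioned in the commit message carries the pinning mode inside the map definition itself. A rough sketch of that layout, following the linked Cilium header (lockc's own bpf_elf_map may differ in detail):

/* Sketch of the iproute2/Cilium-style bpf_elf_map layout; the pinning field
 * tells a loader that honors it how to pin the map at load time. */
#include <linux/types.h>

/* Pinning modes used by iproute2-style loaders. */
#define PIN_NONE       0
#define PIN_OBJECT_NS  1
#define PIN_GLOBAL_NS  2

struct bpf_elf_map {
    __u32 type;
    __u32 size_key;
    __u32 size_value;
    __u32 max_elem;
    __u32 flags;
    __u32 id;
    __u32 pinning;
    __u32 inner_id;
    __u32 inner_idx;
};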
@vadorovsky
Member

Disregard my comments above ^^

All is good in Aya. The issue is just that it pins maps differently than libbpf does.

So basically, there are two ways:

  • setting the pinned attribute in the map definitions, in code - pinning then happens as soon as the BPF object is loaded - that's what Aya and Cilium use
  • libbpf doesn't honor that attribute and expects people to call a pin function/method explicitly, after the BPF object is loaded

I just had to adjust our C code to the first approach.
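
To illustrate the first approach, here is a rough, self-contained sketch of a map definition that asks the loader to pin it at load time. The map name is illustrative, and the bpf_elf_map layout and PIN_GLOBAL_NS constant repeat the iproute2/Cilium-style definitions sketched earlier:

/* Approach 1: the map definition itself requests pinning, so a loader that
 * honors the field (Aya, Cilium's loader, iproute2) pins the map under the
 * BPF filesystem at load time and reuses it on subsequent loads. With plain
 * libbpf, the equivalent is an explicit pin call, e.g. bpf_map__pin(),
 * after the object is loaded. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

#define PIN_GLOBAL_NS 2 /* iproute2-style "global namespace" pinning */

struct bpf_elf_map {
    __u32 type;
    __u32 size_key;
    __u32 size_value;
    __u32 max_elem;
    __u32 flags;
    __u32 id;
    __u32 pinning;
    __u32 inner_id;
    __u32 inner_idx;
};

/* Illustrative map name, not necessarily lockc's. */
struct bpf_elf_map containers SEC("maps") = {
    .type = BPF_MAP_TYPE_HASH,
    .size_key = sizeof(__u32),
    .size_value = sizeof(__u32),
    .max_elem = 1024,
    .pinning = PIN_GLOBAL_NS,
};

With a loader that honors the pinning field, the upgraded daemon finds the already pinned map under the BPF filesystem and keeps the existing container state instead of starting from an empty map.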

vadorovsky added a commit to vadorovsky/lockc that referenced this issue Feb 11, 2022