Release LLVM memory after binaries have been loaded into kernel #1181
Comments
I did some experiments (jemalloc case). I still kind of prefer to have an option to free everything. To free rw_engine and the llvm ctx_, we do need direction from the user, since we have no idea whether they will use sscanf/snprintf or not. Do we have use cases that load the same bpf programs into the kernel again in the same module context? That seems rare. If users change the bpf program source code, they will typically create a different BPF module.
Only 2MB RSS? A snoop tool (e.g. biosnoop) takes more than 130MB on my system, which isn't reasonable since there is not much data in use by either the program itself or the Python runtime. I would expect around 30MB for them instead. If the compilation takes memory, I hope there is a way to compile the code beforehand.
The newly added …
It didn't do much for me. I inspected the …
On which platform, and how, do you build bcc with libLLVM-8.so and libclang*.so? Typically I build with static llvm/clang libraries, and hence everything is in libbcc.so. You can take a look at the function … It can easily be extended to free the libLLVM*/libclang* libraries as well. Maybe you can help contribute. Thanks!
Thank you for the reply. My bcc is dynamically linked to libLLVM-8.so. I tried to …
Great!
Is this still active?
The current implementation assumes a dynamically linked libbcc.so but a statically linked llvm, so it only tries to free instruction memory inside libbcc.so. If you dynamically link llvm as well, the current mechanism may not help. The commit which implemented the original mechanism is here: …
Maybe you could help improve the implementation to free the dynamically linked libLLVM*/libclang* memory as well.
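For illustration, a rough sketch of one way the mechanism could be extended for a dynamically linked llvm is below. This is an assumption, not bcc's actual implementation, and the function name is made up: it walks /proc/self/maps and drops the resident pages of the non-writable libLLVM*/libclang* mappings with madvise(MADV_DONTNEED). Because the mappings themselves stay in place, any later access simply faults the pages back in from the library files, so this only trims RSS rather than unloading anything.

```cpp
// Sketch only -- not the actual bcc code. Drop resident pages of the
// non-writable (code/rodata) mappings that belong to libLLVM*/libclang*.
#include <cstdio>
#include <cstring>
#include <sys/mman.h>

void drop_llvm_library_pages() {
  FILE *maps = std::fopen("/proc/self/maps", "r");
  if (!maps)
    return;

  char line[512];
  while (std::fgets(line, sizeof(line), maps)) {
    unsigned long start = 0, end = 0;
    char perms[8] = "", path[256] = "";
    // Each line looks like: start-end perms offset dev inode [path]
    if (std::sscanf(line, "%lx-%lx %7s %*s %*s %*s %255s",
                    &start, &end, perms, path) < 3)
      continue;
    if (!std::strstr(path, "libLLVM") && !std::strstr(path, "libclang"))
      continue;
    if (std::strchr(perms, 'w'))  // leave writable (data) mappings alone
      continue;
    madvise(reinterpret_cast<void *>(start), end - start, MADV_DONTNEED);
  }
  std::fclose(maps);
}
```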
Hello @yonghong-song, I followed the BCC INSTALL page to install BCC in an Alpine container. Can you provide instructions to statically link llvm? Regards,
By default, libbcc.so does use static linking for the llvm libraries. For example, on my system, I have: …
There are no llvm or clang shared libraries. If you want libbcc.so to link dynamically against the clang/llvm libraries, you need to enable ENABLE_LLVM_SHARED on the cmake command line. Does this answer your question about static linking? Note that we will still need to build libbcc.so for the python C binding.
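For reference, enabling that option is just an extra -D flag on the cmake command line; the out-of-tree build directory below is only an example:

```
# paths are placeholders
mkdir build && cd build
cmake -DENABLE_LLVM_SHARED=1 ..
make
```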
Currently, for a bcc process, after the program is loaded into the kernel, the process typically only gets/sets map data and no further compilation is needed. The llvm-related resources (the bpf program execution engine, the sscanf/snprintf execution engine, and the llvm context) are not freed, though. This actually takes quite a bit of memory (more than 2MB of RSS in one of my examples).
If people run many tiny bcc monitoring tools, the sum of the llvm-related memory consumption could be substantial.
Maybe we can provide a public interface to release these internal compilation resources. This way, the application writer can release the llvm-related memory at a point they deem safe from an operational point of view.
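To make the proposal concrete, here is a minimal sketch of what such an interface might look like. Only rw_engine and ctx_ are names taken from the discussion above; the class layout, the engine_ member, and the method name are assumptions for illustration, not actual bcc code.

```cpp
// Hypothetical sketch of a public "release compilation resources" entry point.
#include <llvm/ExecutionEngine/ExecutionEngine.h>
#include <llvm/IR/LLVMContext.h>
#include <memory>

class BPFModule {
 public:
  // The caller promises that no further compilation is needed and that the
  // sscanf/snprintf helpers (served by rw_engine_) will not be used anymore.
  void release_compile_resources() {
    engine_.reset();     // bpf program execution engine
    rw_engine_.reset();  // sscanf/snprintf execution engine
    ctx_.reset();        // llvm context backing both engines
  }

 private:
  // Context is declared first so the engines are destroyed before it.
  std::unique_ptr<llvm::LLVMContext> ctx_;
  std::unique_ptr<llvm::ExecutionEngine> engine_;   // assumed member name
  std::unique_ptr<llvm::ExecutionEngine> rw_engine_;
};
```

A tool would call such an entry point once right after its programs are attached, when it knows it will only read and update maps from then on.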