Make subbuffer size configurable with Trace action #51
Conversation
Force-pushed from d477b2a to 915fac9
Thanks for the PR. We've discussed this before, see #20 and https://gitlab.com/ros-tracing/ros2_tracing/-/issues/129. The conclusion was that there are a lot of parameters, and exposing them all through Python function parameters would be a lot. We were looking for a simple/generic way to let users configure more advanced parameters such as this one, like YAML configuration files, or letting users provide […].

Of course, in this case, you're only looking to be able to modify the sub-buffer size; you're not looking to support all the parameters I listed in the original GitLab issue. However, I just wonder if this is really the right fix. What I've done internally at Apex.AI is to just increase the default sub-buffer size to 16777216 == 16 MiB, which is 512 times bigger than the default here but still not huge. We've never had a problem with it since then. If the […]

The only thing I would ask is: why not make the kernel sub-buffer size configurable too, potentially as a separate parameter?
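As an aside on the numbers above: LTTng sub-buffer sizes must be powers of two, so a requested byte count generally has to be rounded up before it can be applied. The helper below is a hypothetical illustration (not part of ros2_tracing or the LTTng bindings) of that rounding, using the 32 KiB default and the 16 MiB value mentioned in this comment.

```python
# Hypothetical helper (not from ros2_tracing): round a requested sub-buffer
# size up to the next power of two, as LTTng expects. The 4096-byte floor is
# an assumed conservative minimum for illustration.

DEFAULT_UST_SUBBUF_SIZE = 32 * 1024        # 32 KiB, the default mentioned above
LARGER_UST_SUBBUF_SIZE = 16 * 1024 * 1024  # 16 MiB == 512 * 32 KiB

def round_up_subbuf_size(requested: int, minimum: int = 4096) -> int:
    """Round ``requested`` up to the next power of two, at least ``minimum``."""
    size = max(requested, minimum)
    if size & (size - 1) == 0:
        return size  # already a power of two
    # (size - 1).bit_length() is the exponent of the next power of two.
    return 1 << (size - 1).bit_length()

print(round_up_subbuf_size(DEFAULT_UST_SUBBUF_SIZE))      # 32768
print(round_up_subbuf_size(5_000_000))                    # 8388608
print(LARGER_UST_SUBBUF_SIZE // DEFAULT_UST_SUBBUF_SIZE)  # 512
```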
There can be massive amounts of events really quickly if […]
Having the buffer size easily configurable on a case-by-case basis made sense to me because it's probably pretty hard to find a sufficiently large buffer size for every use case. I didn't consider the kernel buffer size because I haven't run into issues there yet. However, it's a bit unfortunate that lttng-load is not exposed via the Python API, as it seems to be exactly the solution you wanted to have in the first place.
I've done a lot of that, and it's been fine so far with 16 MiB, but anyway I still think this is a good improvement. What about bumping the default to, say, […]? Also, I think exposing this option to […]
I've definitely run into issues there, for example when enabling […]
Yeah, it's unfortunate that the Python API is not complete. We could always just call […].
@cwecht FYI the feature freeze for Iron is next week, on April 17th. If you don't have time before then, I can take care of the rest if you want. A rebase would be good, by the way.
Force-pushed from 915fac9 to 21dc81e
I also added the respective parameter for the kernel events buffer: we ran into the same issue there too.
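To make the end result concrete, a launch file using the Trace action with both buffer sizes bumped might look like the sketch below. The parameter names `subbuffer_size_ust` and `subbuffer_size_kernel` are assumptions based on this discussion, not verified against the merged API, so check the Trace action's documentation before relying on them.

```python
# Hedged sketch: Trace action with assumed sub-buffer size parameters.
from launch import LaunchDescription
from tracetools_launch.action import Trace

def generate_launch_description():
    return LaunchDescription([
        Trace(
            session_name='my-tracing-session',
            events_ust=['ros2:*'],
            events_kernel=['sched_switch'],
            subbuffer_size_ust=16 * 1024 * 1024,     # 16 MiB (assumed name)
            subbuffer_size_kernel=16 * 1024 * 1024,  # 16 MiB (assumed name)
        ),
    ])
```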
Signed-off-by: Christopher Wecht <cwecht@mailbox.org>
Signed-off-by: Christophe Bedard <christophe.bedard@apex.ai>
Force-pushed from ce9c7e1 to 326f6d9
I rebased, fixed some linter-related stuff, fixed the frontend parsing, and updated the […]. I'll let you have a look and then I'll run CI.
Signed-off-by: Christophe Bedard <christophe.bedard@apex.ai>
LGTM!
Thanks for the improvement! I'll eventually expose this through […].
Signed-off-by: Christopher Wecht <cwecht@mailbox.org> Co-authored-by: Christophe Bedard <christophe.bedard@apex.ai>
Turns out: in a sufficiently large system with sufficiently much going on, LTTng's buffers may overflow quite quickly, especially during startup. So it is quite convenient to be able to configure the buffer size right in the Trace action.
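A quick back-of-envelope calculation shows why startup bursts overflow small buffers so fast. The event size and burst rate below are illustrative assumptions, not measurements; a channel's ring buffer holds roughly `subbuf_size * num_subbuf` bytes per CPU.

```python
# Illustrative arithmetic (assumed numbers, not measurements): how long until
# a channel's ring buffer wraps if the consumer cannot keep up during a burst.

subbuf_size = 32 * 1024   # small sub-buffer size in bytes
num_subbuf = 2            # sub-buffers per channel buffer (assumed)
event_size = 64           # assumed average event size in bytes
events_per_sec = 500_000  # assumed burst rate during startup

buffer_bytes = subbuf_size * num_subbuf
seconds_until_full = buffer_bytes / (event_size * events_per_sec)
print(f'{buffer_bytes} B buffer fills in ~{seconds_until_full * 1000:.1f} ms')
```

With these assumptions the buffer wraps after only a couple of milliseconds, which is why a 16 MiB sub-buffer makes such a difference in practice.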