
feat(server): Better connection memory tracking #2205

Merged
merged 1 commit into main from mem-tracking on Nov 26, 2023
Conversation

@chakaz (Collaborator) commented Nov 23, 2023

No description provided.

@@ -28,7 +28,7 @@ if (NOT APPLE)
endif()

add_library(dragonfly_lib engine_shard_set.cc channel_store.cc command_registry.cc
-config_registry.cc conn_context.cc debugcmd.cc dflycmd.cc
+config_registry.cc conn_context.cc debugcmd.cc dflycmd.cc allocation_sampler.cc
Collaborator

Maybe we can split this PR into a few PRs?
For example, I believe AllocationSampler can be in a separate PR.

Collaborator Author

Good idea! Done.

@@ -98,8 +100,8 @@ class RedisParser {
absl::InlinedVector<std::pair<uint32_t, RespVec*>, 4> parse_stack_;
std::vector<std::unique_ptr<RespVec>> stash_;

-using BlobPtr = std::unique_ptr<uint8_t[]>;
-std::vector<BlobPtr> buf_stash_;
+using Blob = std::vector<uint8_t>;
Collaborator

Can you extract this change as well into a new PR?

Collaborator Author

it is part of this PR though, because that's the only way we can do tracking (we have no way of getting the array size)
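
(For context, a minimal sketch of why the vector-based blob enables tracking; this is illustrative, not the actual Dragonfly code: a std::vector<uint8_t> knows its own capacity, so the stash can report how much memory it holds, whereas a std::unique_ptr<uint8_t[]> carries no size information.)

#include <cstddef>
#include <cstdint>
#include <memory>
#include <vector>

// Old shape: the stash cannot report its memory usage, because the array size
// is not recoverable from the raw pointer alone.
struct OldStash {
  std::vector<std::unique_ptr<uint8_t[]>> bufs;
  // size_t UsedMemory() const;  // not possible without storing sizes separately
};

// New shape: each blob knows its capacity, so the total can be summed.
struct NewStash {
  using Blob = std::vector<uint8_t>;
  std::vector<Blob> bufs;

  size_t UsedMemory() const {
    size_t total = 0;
    for (const Blob& b : bufs)
      total += b.capacity();
    return total;
  }
};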

@chakaz merged commit d6292ba into main on Nov 26, 2023
10 checks passed
@chakaz deleted the mem-tracking branch on November 26, 2023 12:51
Comment on lines +1325 to +1328
// We add a hardcoded 9k value to accommodate for the part of the Fiber stack that is in use.
// The allocated stack is actually larger (~130k), but only a small fraction of that (9k
// according to our checks) is actually part of the RSS.
mem += 9'000;
@dranikpg (Contributor) commented Nov 27, 2023


@chakaz

Don't we account for the dispatch fiber here as well?

So shouldn't it be:

mem += 9'000;  // unconditional main fiber
if (dispatch_fb_.IsJoinable())
  mem += 9'000;

Collaborator Author

That's a nice observation.
An actual test of creating many connections shows an average of 9kb per connection.
All of these connections send a PING, and then later randomly (and slowly) create string keys.
I imagine that PING creates a dispatch fiber, and as such, it should already be included in the 9kb. In other words, maybe it should be:

constexpr size_t kMinimalFiberStackKb = 4'500;
mem += kMinimalFiberStackKb;
if (dispatch_fb_.IsJoinable())
  mem += kMinimalFiberStackKb;

However, I would like to check this with and without issuing a PING to make sure it works as expected. Stay tuned :)

Collaborator Author

To clarify, I also created a simple test to make sure that using a Fiber's stack actually increases RSS (in addition to creating that stack). But the 9k part comes from experimenting with the actual connection fibers (as it depends on how much of the stack is used).
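
(As an illustration of that effect, here is a standalone Linux sketch, not the actual test from this PR: reserving a stack-sized region barely moves RSS, while touching its pages does.)

#include <sys/mman.h>
#include <cstdio>
#include <cstring>
#include <fstream>
#include <string>

// Reads the process resident set size (VmRSS, in kB) from /proc/self/status.
static long RssKb() {
  std::ifstream status("/proc/self/status");
  std::string line;
  while (std::getline(status, line)) {
    if (line.rfind("VmRSS:", 0) == 0)
      return std::stol(line.substr(6));
  }
  return -1;
}

int main() {
  constexpr size_t kStackSize = 128 * 1024;  // roughly a fiber stack (~130k)
  long before = RssKb();

  // Reserving the stack alone should barely change RSS...
  void* stack = mmap(nullptr, kStackSize, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (stack == MAP_FAILED)
    return 1;
  long after_alloc = RssKb();

  // ...but touching pages (like a fiber actually using ~9k of its stack) does.
  std::memset(stack, 0xAB, 9 * 1024);
  long after_touch = RssKb();

  std::printf("baseline=%ld kB, after mmap=%ld kB, after touching 9k=%ld kB\n",
              before, after_alloc, after_touch);
  munmap(stack, kStackSize);
  return 0;
}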
