perf: reuse hashmap resource in cross-service metainfo node #11
Motivation
In the design of connection resource reuse, we observed that each new request served over a reused connection still incurs some memory allocation on the framework side for its data structures. Specifically, decoding each request triggers the hashmap's **reserve_rehash**, which led us to infer that certain data structures do not retain their memory after a request completes.

Upon investigation, we found that at the end of each request, the cross-service **metainfo** node is not cleaned up with the hashmap's **clear** method; instead, it is set directly to **None**. The connection resource therefore regenerates a default (empty) hashmap for the next request, and because no memory is reserved, **insert** operations trigger frequent **reserve_rehash** calls, incurring a measurable computational cost.

Solution
To make connection resource reuse effective, we preserve the memory of the underlying data structures when cleaning up the per-request data within a connection resource. This optimization focuses on the data structures used for cross-service **metainfo** transfer. Concretely, we implement a **clear** method on the **Node** type that calls **clear** on the encapsulated hashmap to remove its entries, abandoning the practice of setting the **Node** structure directly to **None**.