
using spin lock and skiplist #370

Closed · wants to merge 4 commits into from

Conversation

@al8n (Contributor) commented Sep 5, 2023

Let's discuss how to improve the performance of the connection pool in this PR. This is a first attempt at removing the expensive locks by using a skiplist and a spin lock. As we know, we do not have any data race in TcpStream, so this spin lock will not consume too many CPU cycles.

@al8n al8n changed the title WIP: using spin lock and skiplist using spin lock and skiplist Sep 5, 2023
 use tokio::net::TcpStream;

 #[derive(Clone)]
 pub struct ConnectionPool {
-    pool: Arc<Mutex<HashMap<SocketAddr, Arc<tokio::sync::Mutex<TcpStream>>>>>,
+    pool: Arc<SkipMap<SocketAddr, Arc<RwLock<TcpStream>>>>,
A reviewer (Contributor) commented on this diff:

> As we know, we do not have any data race in TcpStream, so this spin lock will not consume too many CPU cycles.

I think the skiplist is a good choice, rather than tokio::sync::Mutex. But I'm not sure about the spin lock, since network IO will sometimes take some time.
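For reference, here is a minimal, self-contained sketch of the layout being proposed in this diff. It assumes crossbeam-skiplist for SkipMap and the spin crate for RwLock; the helper methods are illustrative and not necessarily the exact code in this PR.

```rust
use std::net::SocketAddr;
use std::sync::Arc;

use crossbeam_skiplist::SkipMap; // lock-free ordered map (assumed crate)
use spin::RwLock;                // busy-waiting reader/writer lock (assumed crate)
use tokio::net::TcpStream;

#[derive(Clone)]
pub struct ConnectionPool {
    // One entry per peer address; each connection is guarded by a spin
    // RwLock instead of a tokio::sync::Mutex, and the map itself needs
    // no outer lock at all.
    pool: Arc<SkipMap<SocketAddr, Arc<RwLock<TcpStream>>>>,
}

impl ConnectionPool {
    pub fn new() -> Self {
        Self { pool: Arc::new(SkipMap::new()) }
    }

    /// Reuse the existing connection for `addr`, or register a freshly
    /// dialed one if none is present. (Illustrative helper.)
    pub fn get_or_insert(
        &self,
        addr: SocketAddr,
        stream: TcpStream,
    ) -> Arc<RwLock<TcpStream>> {
        self.pool
            .get_or_insert(addr, Arc::new(RwLock::new(stream)))
            .value()
            .clone()
    }

    /// Look up an existing connection without inserting. (Illustrative helper.)
    pub fn get(&self, addr: &SocketAddr) -> Option<Arc<RwLock<TcpStream>>> {
        self.pool.get(addr).map(|entry| entry.value().clone())
    }
}
```

Note that a guard from a spin::RwLock is acquired by busy-waiting, so holding it across .await points is risky; that is the concern about network IO raised above.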

@zeegomo (Contributor) commented Sep 5, 2023

> we do not have any data race in TcpStream, so this spin lock will not consume too many CPU cycles.

Contention is what determines the amount of wasted CPU cycles, not the presence of data races.

Anyway, I'm against any solution for improving the performance at this stage unless it also clearly simplifies the code, and this does not.
The rationale is:

  • We don't even know if the current solution is too slow for our needs
  • We don't have a way to measure that the new solution is better
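To make the contention point concrete, here is a small stand-alone demo (illustrative only, not part of this PR; it assumes the spin crate) showing one thread burning CPU while another holds a spin lock across a slow, IO-like operation:

```rust
use std::sync::Arc;
use std::thread;
use std::time::{Duration, Instant};

fn main() {
    let lock = Arc::new(spin::Mutex::new(0u64));

    // Holder: grabs the lock and keeps it while "doing IO" for 100 ms.
    let holder = {
        let lock = Arc::clone(&lock);
        thread::spawn(move || {
            let _guard = lock.lock();
            thread::sleep(Duration::from_millis(100)); // simulated slow IO
        })
    };

    // Give the holder a head start so it owns the lock first.
    thread::sleep(Duration::from_millis(10));

    // Waiter: busy-spins for the remaining ~90 ms, wasting a CPU core.
    // An async mutex would instead park this task until the lock frees.
    let start = Instant::now();
    let _guard = lock.lock();
    println!("spun for {:?}", start.elapsed());

    holder.join().unwrap();
}
```

With no other thread holding the lock there is essentially no spinning, which is why the cost depends on contention rather than on whether a data race is possible.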

@al8n (Contributor, Author) commented Sep 5, 2023

> > we do not have any data race in TcpStream, so this spin lock will not consume too many CPU cycles.
>
> Contention is what determines the amount of wasted CPU cycles, not the presence of data races.
>
> Anyway, I'm against any solution for improving the performance at this stage unless it also clearly simplifies the code, and this does not. The rationale is:
>
> • We don't even know if the current solution is too slow for our needs
> • We don't have a way to measure that the new solution is better

Improving performance is not the main target of this PR; we are blocking in the mixnet unit test, and these changes solve that problem.

@zeegomo (Contributor) commented Sep 5, 2023

> Improving performance is not the main target of this PR; we are blocking in the mixnet unit test, and these changes solve that problem.

Why is this not mentioned anywhere in the PR? How am I supposed to know? :)
Please put all the necessary information here so that reviewing is easier.

@al8n (Contributor, Author) commented Sep 5, 2023

Sorry for that. Here is the reference: #372, "Make mixnode handle bodies coming from the same source concurrently".

@zeegomo (Contributor) commented Sep 5, 2023

Let's continue the conversation here: #372 (comment).

It only looks like the mixnet test got slower, but #372 (comment) might solve it independently of this.

@al8n (Contributor, Author) commented Sep 5, 2023

Closing this in favor of #373.

@al8n al8n closed this Sep 5, 2023
@jakubgs jakubgs deleted the mixnet-pool-performance branch February 15, 2024 09:34