using spin lock and skiplist #370
Conversation
This reverts commit cd92801.
```diff
 use tokio::net::TcpStream;

 #[derive(Clone)]
 pub struct ConnectionPool {
-    pool: Arc<Mutex<HashMap<SocketAddr, Arc<tokio::sync::Mutex<TcpStream>>>>>,
+    pool: Arc<SkipMap<SocketAddr, Arc<RwLock<TcpStream>>>>,
```
As we know, we do not have any data race in TcpStream, so this spin lock will not consume too many CPU cycles.
I think a skiplist is a good choice, rather than tokio::sync::Mutex. But I'm not sure about the spinlock, since network IO can sometimes take a while.
Contention is what determines the amount of wasted CPU cycles, not the presence of a data race. Anyway, I'm against any solution for improving performance at this stage unless it also clearly simplifies the code, and this one does not.
Improving performance is not the main target of this PR; we are blocked in the mixnet unit test, and these changes solve the problem.
Why is this not mentioned anywhere in the PR? How am I supposed to know? :)
Let's continue the conversation here. It only looks like the mixnet test got slower; #372 (comment) might solve it independently from this.
Closed to support #373
Let's discuss how to improve the performance of the conn pool in this PR. This is the first attempt to remove expensive locks by using a skiplist and a spinlock. As we know, we do not have any data race in TcpStream, so this spinlock will not consume too many CPU cycles.