
Websocket performance degradation after v0.11.0 #430

Closed
mdben1247 opened this issue Dec 18, 2020 · 4 comments

@mdben1247
Contributor

We are experiencing quite a large WebSocket performance degradation, about an order of magnitude, in various applications when going from v0.11.0 to v0.12.0. It can be observed by comparing the run times of a simple test:

#[tokio::main]
async fn main() {
    let t = web3::transports::WebSocket::new("ws://127.0.0.1:8546").await.unwrap();
    let web3 = web3::Web3::new(t);
    for _ in 0..1000 {
        let _accounts = web3.eth().accounts().await.unwrap();
    }
}

or for v0.11.0:

use web3::futures::Future;
fn main() {
    let (_eloop, t) = web3::transports::WebSocket::new("ws://127.0.0.1:8546").unwrap();
    let web3 = web3::Web3::new(t);
    for _ in 0..1000 {
        let _accounts = web3.eth().accounts().wait().unwrap();
    }
}

All versions after v0.11.0 are affected; I tested with v0.12.0, v0.13.0 and master. Looking at the changes, it seems the backend was switched from websocket to soketto at that time.

@mdben1247 mdben1247 changed the title Websocket Performance Degradation After v0.11.0 Websocket performance degradation after v0.11.0 Dec 18, 2020
@mdben1247
Contributor Author

Timing each individual request with:

let now = std::time::Instant::now();
let _accounts = web3.eth().accounts().await.unwrap();
println!("..+ {:>7.2}ms", now.elapsed().as_secs_f64() * 1000_f64);

shows that requests normally finish in the sub-millisecond range for both the old and new versions. However, with v0.12.0 and later there are almost regular hiccups every 20 or so requests, where a single request stands out, taking 40ms or more to complete. The measured timings are attached.

v011.txt
v013.txt
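To make the hiccup pattern easier to quantify than eyeballing the raw logs, the per-request timings can be summarized with a median and a maximum. This is a standalone sketch with made-up sample numbers, not data from the attached files:

```rust
fn main() {
    // Hypothetical per-request timings in milliseconds: mostly
    // sub-millisecond, with occasional ~40ms outliers as observed.
    let mut timings_ms: Vec<f64> = vec![0.4, 0.5, 0.3, 42.1, 0.4, 0.6, 40.8, 0.5];
    timings_ms.sort_by(|a, b| a.partial_cmp(b).unwrap());

    // A large max/median gap indicates occasional stalls rather than
    // uniformly slow requests.
    let median = timings_ms[timings_ms.len() / 2];
    let max = timings_ms[timings_ms.len() - 1];
    println!("median: {:.2}ms, max: {:.2}ms", median, max);
    // prints: median: 0.50ms, max: 42.10ms
}
```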

@mdben1247
Contributor Author

A fix for the naive test above is to call stream.set_nodelay(true) in src/transport/ws.rs. See:
https://doc.rust-lang.org/std/net/struct.TcpStream.html#method.set_nodelay

This sends small messages immediately instead of buffering them, and I think it should be the default, since it is quite necessary for the real-time purposes here: the process will often be blocked waiting on a web3 response, since execution depends on the results.
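For illustration, this is what enabling TCP_NODELAY on a std TcpStream looks like. A standalone sketch, not the actual web3 transport code; the loopback listener exists only to make the example self-contained:

```rust
use std::net::{TcpListener, TcpStream};

fn main() -> std::io::Result<()> {
    // Loopback listener so the example runs without an external server.
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let addr = listener.local_addr()?;

    let stream = TcpStream::connect(addr)?;

    // Disable Nagle's algorithm: small writes are sent immediately
    // rather than coalesced while waiting for an ACK.
    stream.set_nodelay(true)?;
    println!("TCP_NODELAY enabled: {}", stream.nodelay()?);
    Ok(())
}
```

Without TCP_NODELAY, Nagle's algorithm delays small segments until the previous one is acknowledged, which would match the periodic ~40ms stalls seen in the timings above.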

Unfortunately, even with this fix, we still observe a large performance drop in our production workloads moving from v0.11.0 to v0.12.0. I'll try to make additional tests to see what's going on.

@mdben1247
Contributor Author

> Unfortunately, even with this fix, we still observe a large performance drop in our production workloads moving from v0.11.0 to v0.12.0. I'll try to make additional tests to see what's going on.

That is not correct; I made a mistake. I'm happy to report that set_nodelay(true) completely solves the performance problems, both in the naive test and in our production workloads. If anything, the newer versions are now somewhat faster than v0.11.0.

Should I make a pull request or is the fix obvious enough?

@tomusdrw
Owner

@mdben1247 great to hear you identified the culprit. Please make a PR, happy to review and merge.
