
Fix: handle panic on Ws error #1915

Merged: 5 commits into gakonst:master on Dec 1, 2022

Conversation

@0xMelkor (Contributor) commented Dec 1, 2022

Motivation

Several issues have been raised reporting `WsServer` panics on upstream connection errors: #1889, #1815.

Err(ClientError::UnexpectedClose) => {
    error!("{}", ClientError::UnexpectedClose);
    break;
}
Err(e) => {
    panic!("WS Server panic: {}", e);
}

Solution

To get simpler and more predictable behavior, I believe the application should handle reconnections directly (as also stated here), avoiding recovery mechanisms at the transport layer.

Instead of crashing the program, we gracefully close all active subscriptions. Clients then observe the end of their streams and can take appropriate action.

What do you guys think?

PR Checklist

  • Added Tests
  • Added Documentation
  • Updated the changelog

@0xMelkor 0xMelkor mentioned this pull request Dec 1, 2022
Commits: Lint; cargo +nightly fmt
@0xMelkor 0xMelkor marked this pull request as ready for review December 1, 2022 14:31
@mattsse (Collaborator) left a comment


imo, it makes more sense to handle this gracefully and not panic in the ws task.

needs changelog entry

- Added `#` prefix to issue IDs where missing
@0xMelkor (Contributor, Author) commented Dec 1, 2022

Hi @mattsse, changelog updated.

@gakonst (Owner) left a comment


LGTM - let's iterate on this @mattsse so we unblock @0xMelkor

@gakonst gakonst merged commit c4e09f2 into gakonst:master Dec 1, 2022
@Tigranchick commented Jan 16, 2023

Sorry for the dumb question, but how can I catch this panic or error? I mean, is there a place to match on it? I don't get messages from the websocket often, and it breaks with "connection refused". For now I took one of the solutions with two loops and a timeout in tokio select!, as in #1889.

@0xMelkor (Contributor, Author) commented Jan 16, 2023

Hi @Tigranchick,

this PR removed the panic! statement in ws.rs in favor of a graceful teardown of active subscriptions. The effect is that your stream reaches its last element, and it is then up to you to handle reconnection.

    // This loop iterates over WebSocket disconnections
    'connection: loop {
        let client = Provider::<Ws>::connect("wss://your-uri").await?;
        let client = Arc::new(client);
        let filter = Filter::new().event("Transfer(address,address,uint256)");

        let mut stream = client.subscribe_logs(&filter).await?;

        while let Some(log) = stream.next().await {
            // Do some work
        }

        // If you get here, your subscription has been canceled
        // => iterate the outer loop to handle reconnection
    }

How we could improve ethers, in my opinion

A cleaner approach would also propagate errors from Ws to the application. This requires some library changes along the stack; the subscription code would then look like:

        // The stream now emits `Result` instead of `Log`
        while let Some(result) = stream.next().await {
            match result {
                Ok(log) => { /* Do some work */ }
                Err(e) => { /* Handle the error */ }
            }
        }

@Tigranchick commented:

@0xMelkor, thank you for the explanation!

4 participants