
Informative events API #206

Open
AbstractiveNord opened this issue Mar 14, 2024 · 10 comments
Labels
enhancement New feature or request

Comments

@AbstractiveNord

Sometimes I get errors like this:

ERROR client_loop: twitch_irc::client::event_loop: Pool connection 68 has failed due to error (removing it): Did not receive a PONG back after sending PING

How can I catch these errors in code? How should I handle them?

@RAnders00
Collaborator

They are informative; the library automatically handles the error recovery. I'll keep this ticket open, though, as a feature request to add a mechanism to receive the events through an API.
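
Until such an API exists, these messages only surface as log output. A minimal sketch of adjusting their visibility with tracing-subscriber, assuming the crate logs via the tracing facade (which the client_loop: span prefix in the message above suggests) and that tracing-subscriber with its env-filter feature is available:

```rust
// Sketch only: assumes the crate emits these messages through the `tracing`
// facade and that tracing-subscriber (with the "env-filter" feature) is a
// dependency of your application.
use tracing_subscriber::EnvFilter;

fn init_logging() {
    tracing_subscriber::fmt()
        // Keep the application at INFO, but show the crate's connection and
        // reconnect details at DEBUG. Use "info,twitch_irc=warn" instead to
        // quiet the informative messages.
        .with_env_filter(EnvFilter::new("info,twitch_irc=debug"))
        .init();
}
```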

@RAnders00 RAnders00 added the enhancement New feature or request label Mar 15, 2024
@RAnders00 RAnders00 changed the title Handling for crate events Informative events API Mar 15, 2024
@AbstractiveNord
Author

> The library automatically handles the error recovery.

Glad to hear that, because since that error started appearing, my program has been behaving strangely. But how can I check in code whether enough channels are already loaded? Maybe some traffic stats, or something like that? Should that be monitored outside of the crate?

@RAnders00
Collaborator

You can use the monitoring feature of this crate. You will need to set up an external Prometheus service to make use of it. An example is available here: https://github.com/robotty/twitch-irc-rs/blob/master/examples/metrics.rs

To view the metrics collected by Prometheus (the database that stores the data), you can use something like Grafana. Here are some links to get you started: https://grafana.com/products/cloud/ or, self-hosted: https://grafana.com/docs/grafana/latest/getting-started/get-started-grafana-prometheus/#get-started-with-grafana-and-prometheus

I can share a template for a dashboard; I'm going to create a ticket for that enhancement.
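
A minimal sketch of wiring this up, assuming the metrics-collection feature flag and the metrics_identifier config field as used in examples/metrics.rs, with metrics-exporter-prometheus standing in as one possible recorder; treat the linked example as authoritative:

```rust
// Cargo.toml (assumed names; see examples/metrics.rs in the repository):
//   twitch-irc = { version = "5", features = ["metrics-collection"] }
//   metrics-exporter-prometheus = "0.13"
//   tokio = { version = "1", features = ["full"] }
use std::borrow::Cow;

use metrics_exporter_prometheus::PrometheusBuilder;
use twitch_irc::login::StaticLoginCredentials;
use twitch_irc::{ClientConfig, SecureTCPTransport, TwitchIRCClient};

#[tokio::main]
async fn main() {
    // Expose an HTTP /metrics endpoint for Prometheus to scrape. The crate
    // only reports its metrics; a recorder like this one makes them visible.
    PrometheusBuilder::new()
        .install()
        .expect("failed to install Prometheus exporter");

    let mut config = ClientConfig::new_simple(StaticLoginCredentials::anonymous());
    // Label this client instance's metrics (field assumed from the
    // metrics-collection feature; check examples/metrics.rs if it differs).
    config.metrics_identifier = Some(Cow::from("my_bot"));

    let (mut incoming_messages, client) =
        TwitchIRCClient::<SecureTCPTransport, StaticLoginCredentials>::new(config);

    client.join("sodapoppin".to_owned()).unwrap();

    while let Some(message) = incoming_messages.recv().await {
        println!("{:?}", message);
    }
}
```

Point a Prometheus scrape job at the exporter's /metrics endpoint; the crate's series (such as twitchirc_channels, mentioned below) should then show up there.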

@RAnders00
Collaborator

You can view the instructions that I want to publish shortly in #208! Feel free to give it a review/request changes should anything need improvement.

@AbstractiveNord
Author

> They are informative; the library automatically handles the error recovery. I'll keep this ticket open, though, as a feature request to add a mechanism to receive the events through an API.

I had a case where the exact same error happened, the connections were recreated successfully, but some channels were lost; that's monitored via metrics. Can channels that were counted with type: server be lost after a reconnect in case of a ban or something like that? That behavior confused me a little, because the crate doesn't provide a callback about a channel ban.

@AbstractiveNord
Author

After long-running operation, while some channels get banned, a reconnect can trigger the loss of those banned channels, which causes connection resources to be unused or only partially used.

@RAnders00
Collaborator

That is correct, and there's no way to avoid that. This behaviour is, at least in part, described here: https://docs.rs/twitch-irc/latest/twitch_irc/client/struct.TwitchIRCClient.html#method.join and here: https://docs.rs/twitch-irc/latest/twitch_irc/client/struct.TwitchIRCClient.html#method.get_channel_status

There's no way to work around this: if a channel is banned at the moment the client wants to reconnect it, there is no way to reconnect to it; Twitch simply doesn't allow joining banned channels. You can, however, keep retrying failed channels using join(); as described, a retry attempt will be made if the join is not already confirmed.
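
A minimal sketch of such a retry loop, assuming get_channel_status() returns (wanted, server_confirmed) as documented; my_channels is a placeholder for your own list of desired channels:

```rust
use std::time::Duration;

use twitch_irc::login::LoginCredentials;
use twitch_irc::transport::Transport;
use twitch_irc::TwitchIRCClient;

// `my_channels` is a placeholder for your own bookkeeping of desired channels.
async fn retry_unconfirmed_joins<T: Transport, L: LoginCredentials>(
    client: &TwitchIRCClient<T, L>,
    my_channels: &[String],
) {
    loop {
        for channel in my_channels {
            // (wanted, server_confirmed) per the get_channel_status docs.
            let (wanted, confirmed) = client.get_channel_status(channel.clone()).await;
            if wanted && !confirmed {
                // Calling join() again triggers another attempt when the
                // previous join was not confirmed by the server.
                let _ = client.join(channel.clone());
            }
        }
        tokio::time::sleep(Duration::from_secs(60)).await;
    }
}
```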

I had not considered the underutilized connections that result from this to be a problem until now, since the number of channels joined by the library that get banned should be small relative to those not getting banned, but there is an open ticket about reducing the number of connections if possible: #13

@AbstractiveNord
Author

> That is correct, and there's no way to avoid that. This behaviour is, at least in part, described here: https://docs.rs/twitch-irc/latest/twitch_irc/client/struct.TwitchIRCClient.html#method.join and here: https://docs.rs/twitch-irc/latest/twitch_irc/client/struct.TwitchIRCClient.html#method.get_channel_status
>
> There's no way to work around this: if a channel is banned at the moment the client wants to reconnect it, there is no way to reconnect to it; Twitch simply doesn't allow joining banned channels. You can, however, keep retrying failed channels using join(); as described, a retry attempt will be made if the join is not already confirmed.
>
> I had not considered the underutilized connections that result from this to be a problem until now, since the number of channels joined by the library that get banned should be small relative to those not getting banned, but there is an open ticket about reducing the number of connections if possible: #13

Is there no way to determine which channels got lost on a certain connection? If the API allows it, I can store hashmaps per connection, but I'm not sure about that approach. In my case, if a node has already lost those channels, it's useless to spam retries; better to leave it as is, with a manual reconnect in the future.

I have multiple shards, each utilizing 1000 connections. It's fine to keep a connection to a banned chat, but if a reconnection has already happened due to network issues, it's an underutilized resource. Meanwhile, I use join and part directly, not set_wanted_channels; I'm going to think more about it.

@RAnders00
Collaborator

> Is there no way to determine which channels got lost on a certain connection? If the API allows it, I can store hashmaps per connection, but I'm not sure about that approach. In my case, if a node has already lost those channels, it's useless to spam retries; better to leave it as is, with a manual reconnect in the future.

That's what this ticket is about, if I understand correctly. There's currently no API for that.

> I have multiple shards, each utilizing 1000 connections. It's fine to keep a connection to a banned chat, but if a reconnection has already happened due to network issues, it's an underutilized resource.

With all due respect, I have a hard time believing this to be a significant source of inefficiency. I run a service that's joined to 166k channels at once, and basically 99.9% to 100% of those are successful. Have you actually monitored the percentage on your services?

@AbstractiveNord
Author

>> Is there no way to determine which channels got lost on a certain connection? If the API allows it, I can store hashmaps per connection, but I'm not sure about that approach. In my case, if a node has already lost those channels, it's useless to spam retries; better to leave it as is, with a manual reconnect in the future.
>
> That's what this ticket is about, if I understand correctly. There's currently no API for that.
>
>> I have multiple shards, each utilizing 1000 connections. It's fine to keep a connection to a banned chat, but if a reconnection has already happened due to network issues, it's an underutilized resource.
>
> With all due respect, I have a hard time believing this to be a significant source of inefficiency. I run a service that's joined to 166k channels at once, and basically 99.9% to 100% of those are successful. Have you actually monitored the percentage on your services?

Yes. I mean that at that scale the channel count updates slowly and the number of banned channels becomes significant. I monitor the diff between the wanted count and the server-side metric named twitchirc_channels. For example, one shard comes to 200k channels and another to only 120k. I suspect banned channels are the culprit.
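
As an application-side counterpart to that Prometheus diff, a minimal sketch that flags wanted channels the server has not confirmed, again assuming get_channel_status() as documented; wanted is a placeholder for your own channel set:

```rust
use std::collections::HashSet;

use twitch_irc::login::LoginCredentials;
use twitch_irc::transport::Transport;
use twitch_irc::TwitchIRCClient;

// `wanted` is a placeholder for your own bookkeeping of desired channels;
// the crate does not expose which connection a channel was assigned to.
async fn find_lost_channels<T: Transport, L: LoginCredentials>(
    client: &TwitchIRCClient<T, L>,
    wanted: &HashSet<String>,
) -> Vec<String> {
    let mut lost = Vec::new();
    for channel in wanted {
        // Second element of the tuple: has the server confirmed the join?
        let (_, confirmed) = client.get_channel_status(channel.clone()).await;
        if !confirmed {
            lost.push(channel.clone());
        }
    }
    lost
}
```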
