
Fix clippy warnings #23

Merged · 1 commit merged on Apr 24, 2024
Conversation

@negezor (Contributor) commented Apr 12, 2024

PR Info

Just fixes the clippy warnings. I would also like to point out that the declared MSRV does not match the APIs actually used, namely:

  • std::hint::black_box (requires Rust 1.66)
  • std::thread::JoinHandle::is_finished (requires Rust 1.61)
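For context, a minimal sketch of the two std APIs in question (the function name `run_to_completion` is illustrative, not from the PR):

```rust
use std::hint::black_box;
use std::thread;

fn run_to_completion() -> i32 {
    // black_box (stabilized in Rust 1.66) keeps the optimizer from
    // constant-folding the value away; commonly used in benchmarks.
    let handle = thread::spawn(|| black_box(40) + 2);

    // is_finished (stabilized in Rust 1.61) polls the thread's status
    // without blocking, before the final blocking join.
    while !handle.is_finished() {
        thread::yield_now();
    }
    handle.join().unwrap()
}
```

Compiling this with a toolchain older than 1.66 fails, which is why the declared MSRV matters.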

@tyt2y3 (Member) commented Apr 13, 2024

Oh, really? I've been using hint::black_box for so long (probably on nightly).

@tyt2y3 (Member) left a review comment

I will make a new release after merging other PRs

Thank you. Such nice software engineering work.

Btw, I've been using the high-level SeaStreamer API for a long time and I don't see myself needing more. Of course Kafka might have a thousand knobs, but I don't find myself using Kafka much these days, and I love Redis Streams more and more. But I guess this can change.

And we've been using the File API in FireDBG; it's more stable than I thought, and there isn't much I'd change about it either. Rewind could be made faster, but it doesn't bother me.

So, all I am saying is, I'm inclined to make a 1.0 release sometime. What's your use case? Is there anything you'd want to see changed architecturally before then?

@negezor (Contributor, author) commented Apr 14, 2024

Thank you, first and foremost, for such a rich ecosystem of modules that I use in my projects.

Currently, I am using sea-streamer with Kafka backend with a connection to Redpanda for:

  • Sending analytical events for subsequent processing and insertion into ClickHouse (requires batching transaction, AutoOffsetReset::Earliest)
  • Sending push notifications (requires batching transaction, AutoOffsetReset::Earliest)
  • Implementing Real-Time subscriptions in GraphQL by async-graphql (without transaction, AutoOffsetReset::Latest)

The inconvenience arises when it is necessary not only to accumulate messages, but also to enforce a minimum waiting time for debounce and a maximum waiting time for throttle (for example, like maxWait in this demo), deserialize the messages, and only then process the events and commit the identifier back to the consumer. On top of that, each backend type has to be handled explicitly for the identifier commit, which already sounds like a job for a first-class abstraction.

I wrote a small helper, borrowing implementation ideas from futures-batch and tokio::time::timeout. It partially addresses the inconveniences above, but I am not entirely confident in its reliability, because committing the identifier to the consumer should only happen after successful processing; otherwise retries with backoff should occur (for example, via a backoff module) and no new events should be received until then.

Maybe it's not really the module's area of responsibility, but I think it would be handy. Otherwise, I am completely fine with the implementation of the module in terms of usability.
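The debounce/throttle batching described above can be sketched with plain std channels; this is a hypothetical helper (the name `next_batch` and its parameters are assumptions, not the PR author's actual code), flushing a batch when either `max_size` is reached, the stream stays quiet for `debounce`, or `max_wait` has elapsed since the first buffered message:

```rust
use std::sync::mpsc::{Receiver, RecvTimeoutError};
use std::time::{Duration, Instant};

fn next_batch<T>(
    rx: &Receiver<T>,
    max_size: usize,
    debounce: Duration,
    max_wait: Duration,
) -> Vec<T> {
    let mut batch = Vec::new();
    // Absolute deadline for this batch, armed on the first message (throttle).
    let mut deadline: Option<Instant> = None;
    loop {
        // Wait for the sooner of the debounce window and the batch deadline.
        let timeout = match deadline {
            Some(d) => debounce.min(d.saturating_duration_since(Instant::now())),
            None => debounce,
        };
        match rx.recv_timeout(timeout) {
            Ok(msg) => {
                if deadline.is_none() {
                    deadline = Some(Instant::now() + max_wait);
                }
                batch.push(msg);
                if batch.len() >= max_size {
                    return batch;
                }
            }
            Err(RecvTimeoutError::Timeout) => {
                if !batch.is_empty() {
                    return batch; // debounce window or deadline expired
                }
                // No messages yet: keep waiting for the first one.
            }
            Err(RecvTimeoutError::Disconnected) => return batch,
        }
    }
}
```

A real helper would additionally have to commit identifiers only after the batch is processed successfully, and retry with backoff otherwise, which is the hard part the comment points at.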

@tyt2y3 (Member) commented Apr 14, 2024

Thank you for your input. I think maybe we can have a BufferedProducer&lt;P&gt; that wraps a Producer and has some buffering parameters: 1. max batch size, 2. min send interval, 3. max send interval.

Speaking of which, I actually have implemented a similar thing somewhere in one of my projects.

The difficulty, though, as you already experienced, is to handle max send interval, because it means we'd have to set up a callback to wake us up after some time.

Btw, I think Kafka already has something similar https://stackoverflow.com/questions/51878751/send-bulk-of-messages-kafka-producer
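A rough sketch of the BufferedProducer idea floated above (the struct and its fields are hypothetical, simplified to a synchronous flush callback; a real implementation would also need a timer to flush an idle buffer, which is exactly the "wake us up after some time" difficulty):

```rust
use std::time::{Duration, Instant};

// Buffers messages and flushes when `max_batch` is reached or when
// `max_interval` has elapsed since the first buffered message.
struct BufferedProducer<F: FnMut(Vec<String>)> {
    buffer: Vec<String>,
    first_at: Option<Instant>,
    max_batch: usize,
    max_interval: Duration,
    flush_fn: F,
}

impl<F: FnMut(Vec<String>)> BufferedProducer<F> {
    fn new(max_batch: usize, max_interval: Duration, flush_fn: F) -> Self {
        Self { buffer: Vec::new(), first_at: None, max_batch, max_interval, flush_fn }
    }

    fn send(&mut self, msg: String) {
        if self.first_at.is_none() {
            self.first_at = Some(Instant::now());
        }
        self.buffer.push(msg);
        let expired = self.first_at.map_or(false, |t| t.elapsed() >= self.max_interval);
        if self.buffer.len() >= self.max_batch || expired {
            self.flush();
        }
    }

    fn flush(&mut self) {
        if !self.buffer.is_empty() {
            (self.flush_fn)(std::mem::take(&mut self.buffer));
            self.first_at = None;
        }
    }
}
```

Note this only checks the interval when a new message arrives; covering a quiet buffer requires the timer callback discussed above.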

@negezor (Contributor, author) commented Apr 15, 2024

So far, it is the batch processing of messages in the consumer that is causing the problem. I currently rely on batching from librdkafka in the producer, but that won't work with Redis.

@tyt2y3 tyt2y3 merged commit 11d456d into SeaQL:main Apr 24, 2024
9 checks passed
tyt2y3 pushed a commit that referenced this pull request Apr 24, 2024

🎉 Released In 0.5.0 🎉

Thank you everyone for the contribution!
This feature is now available in the latest release. Now is a good time to upgrade!
Your participation is what makes us unique; your adoption is what drives us forward.
You can support SeaQL 🌊 by starring our repos, sharing our libraries and becoming a sponsor ⭐.
