[discussion] Go away from broker-centric systems and SQL database to get performance #45
Comments
For the event store we need persistence to disk, and I don't think Chronicle Map is the best fit; however, I am seriously considering supporting Chronicle Queue alongside Kafka for messaging. What we can do is abstract the producer and consumer into interfaces and then inject different implementations. Other customers are also asking for support for brokers like Solace and Redis. Do you know if Chronicle Map can persist to disk?
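The abstraction described above could be sketched as follows. This is a minimal illustration, not the project's actual API: the `MessageProducer`/`MessageConsumer` interface names and the in-memory implementation are hypothetical, standing in for Kafka, Chronicle Queue, Solace, or Redis implementations that would be injected at runtime.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

// Hypothetical contract a broker implementation would fulfill.
interface MessageProducer {
    void send(String topic, byte[] payload);
}

interface MessageConsumer {
    void subscribe(String topic, Consumer<byte[]> handler);
}

// Trivial in-memory implementation, only to show the shape of the contract;
// a Kafka or Chronicle Queue implementation would replace this class.
class InMemoryBroker implements MessageProducer, MessageConsumer {
    private final Map<String, List<Consumer<byte[]>>> handlers = new ConcurrentHashMap<>();

    @Override
    public void send(String topic, byte[] payload) {
        handlers.getOrDefault(topic, List.of()).forEach(h -> h.accept(payload));
    }

    @Override
    public void subscribe(String topic, Consumer<byte[]> handler) {
        handlers.computeIfAbsent(topic, t -> new ArrayList<>()).add(handler);
    }
}

public class BrokerAbstractionDemo {
    public static void main(String[] args) {
        InMemoryBroker broker = new InMemoryBroker();
        List<String> received = new ArrayList<>();
        broker.subscribe("events", bytes -> received.add(new String(bytes)));
        broker.send("events", "OrderCreated".getBytes());
        if (!received.equals(List.of("OrderCreated"))) throw new AssertionError(received);
        System.out.println(received);
    }
}
```

Application code would depend only on the two interfaces, so swapping brokers becomes a wiring change rather than a code change.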
@stevehu - actually, even the Chronicle Queue itself can be persisted forever (if you want), where all messages are stored indefinitely, so it can be used for replay capabilities ... but this needs brainstorming. Regarding Chronicle Map, it looks like there is an enterprise version which combines Map and Queue into a single transactional system by providing a so-called Journal (probably something similar to file-system journaling capabilities); some discussion is here: Anyway, you can optionally persist Chronicle Map to disk, but it all depends on the OS, so there is no control over how often the data are actually written to disk at any given moment. Only when you close the Map is it guaranteed that all data are written to disk. Here is the feature matrix for Chronicle Map:
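The caveat above (the OS decides when buffered data actually reach disk) applies to any buffered or memory-mapped file, and the JDK exposes the distinction directly. A minimal stdlib sketch, not Chronicle-specific: `write()` may leave bytes in the OS page cache, while `force(true)` is an explicit fsync.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class SyncDemo {
    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("sync-demo", ".dat");
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.WRITE)) {
            ch.write(ByteBuffer.wrap("event-1".getBytes()));
            // Up to this point the bytes may live only in the OS page cache:
            // write() has returned, but a power failure could still lose them.
            ch.force(true); // fsync: data and metadata are now durably on disk
        }
        if (Files.size(file) != 7) throw new AssertionError("unexpected size");
        Files.delete(file);
        System.out.println("synced");
    }
}
```

This is the trade-off behind the "only guaranteed on close" behaviour: forcing a sync on every write is durable but slow, so memory-mapped stores typically leave flush timing to the OS.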
Some more details: PS: Peter Lawrey answers any kind of question posted on Stack Overflow very quickly, or we can invite him here as well for more low-level questions about Chronicle; not a problem.

I am now working on a high-performance trading bot based on Chronicle Queue. For replication to other machines I am currently considering (playing around with) Aeron (a fast UDP-based library, though not UDP only) to replicate the queues across machines. I think even when you require an ACK of the message from the consumer/receiver, it will be better to implement this feature via an independent UDP server/client infrastructure in the future instead of depending on TCP, to lower overhead, but of course without guaranteed delivery. In case guaranteed delivery is required (as it might be in your case), Chronicle provides the so-called Network project for TCP.

The whole idea is to move away from a broker-centric system and look at the queue (when combined with a map, remote access, etc.) as a high-performance database whose data you can replicate wherever you need it. Again, I am just working on concepts and benchmarking these technologies at the moment, but as I like your pure JEE implementation (for boosting performance), I think it might be something to consider for your project to become a real beast :-). The same applies to RxJava usage (but that is a different discussion we can open another day...)

Possibly half off-topic
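The UDP-based replication idea above can be illustrated with the JDK's `DatagramChannel` alone. This is a toy sketch, not Aeron: one datagram carries a queue entry to a replica with no ACK and no retransmission, which is exactly the "lower overhead, no guaranteed delivery" trade-off described (on loopback, delivery is reliable in practice, so the demo completes).

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;
import java.nio.charset.StandardCharsets;

public class UdpReplicaDemo {
    public static void main(String[] args) throws IOException {
        try (DatagramChannel receiver = DatagramChannel.open();
             DatagramChannel sender = DatagramChannel.open()) {
            // Replica binds an ephemeral loopback port.
            receiver.bind(new InetSocketAddress("127.0.0.1", 0));
            InetSocketAddress addr = (InetSocketAddress) receiver.getLocalAddress();

            // Fire-and-forget: no ACK, no retransmit. Loss detection and
            // replay would have to be layered on top (as Aeron does).
            sender.send(ByteBuffer.wrap("entry-42".getBytes(StandardCharsets.UTF_8)), addr);

            ByteBuffer buf = ByteBuffer.allocate(64);
            receiver.receive(buf); // blocking receive; fine on loopback for a demo
            buf.flip();
            String msg = StandardCharsets.UTF_8.decode(buf).toString();
            if (!msg.equals("entry-42")) throw new AssertionError(msg);
            System.out.println("replicated: " + msg);
        }
    }
}
```

A real replication layer would add sequence numbers over this so a replica can detect gaps and request replay from the persisted queue.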
A small enhancement/repair to my previous text. NOTE:
Based on our conversation, I am seriously considering abstracting the consumer and producer interfaces and using service.yml to plug in different implementations for different message brokers. Chronicle is one of them, and one of our customers asked for Solace.
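A hypothetical service.yml fragment sketching how such wiring might look, in the interface-to-implementation style of light-4j's service module. All class names here are made up for illustration; the real interfaces and implementations would come from the project.

```yaml
# service.yml - bind the messaging interfaces to one concrete broker (hypothetical names)
singletons:
- com.networknt.tram.message.MessageProducer:
  - com.networknt.tram.message.kafka.KafkaMessageProducer
- com.networknt.tram.message.MessageConsumer:
  - com.networknt.tram.message.kafka.KafkaMessageConsumer
# Switching to Chronicle or Solace would only change the implementation lines above.
```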
Nice, I am closing this issue then, as this is the root idea I originally had in mind :-)
Gavin has opened another issue based on this discussion in light-tram-4j; let's use it to track the progress.
That is brilliant to hear. I will investigate possibilities for event store replacement further, as any kind of SQL database simply adds too much latency (that is a personal point of view :-) ). On the other hand, you can connect to Postgres via domain sockets to remove the burden of TCP locally. In the future I will focus on the superior cqengine. I think with a mixture of Chronicle Queue you can achieve a deadly async beast for both the event store and the local service data views (when we talk about the CQRS pattern). This whole concept can mix SQL (cold storage) with a fully in-memory (hot storage) event store.

However, it will require additional, non-trivial work to achieve this in a fully scalable way. The open-source version of Chronicle Queue is not capable of talking over the network, so one must either accept the enterprise version (of which I am not aware of the price) or implement this functionality oneself, i.e. implement replication of the queues over the network. One must also understand Chronicle Queue's behaviour: **it stores the data forever by default** (these guys report 100TB queues within their systems as the biggest users), so in the following diagram I suggest syncing the in-memory failed query engine with SQL storage, but you can simply implement a full re-read of the queue if you don't use any kind of retention policy. It sounds like a JOURNAL (almost) by design, but Chronicle Enterprise provides a real implementation, the so-called JOURNAL, which looks like a mixture of Chronicle Queue and Map (with network replication support).

Otherwise the queue design is quite "simple": when you create a topic, it stores data in the following way: So when you no longer need some kind of data, you can just remove the specific file. I am not sure if a resolution other than DAY is supported. I am again closing this, but just wanted to add some additional hints which might in the future lead to more high-performance design options when adopting eventuate-4j.
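Since the queue rolls to one file per period, the "just remove the specific file" retention idea above can be sketched with plain stdlib file operations. This assumes daily roll files named `yyyyMMdd.cq4` (the naming Chronicle Queue uses for daily rolling); the `purge` helper and its parameters are illustrative, not part of any Chronicle API.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class RetentionDemo {
    // Delete daily roll files older than keepDays; returns how many were removed.
    static int purge(Path dir, LocalDate today, int keepDays) throws IOException {
        DateTimeFormatter fmt = DateTimeFormatter.ofPattern("yyyyMMdd");
        LocalDate cutoff = today.minusDays(keepDays);
        int deleted = 0;
        try (DirectoryStream<Path> files = Files.newDirectoryStream(dir, "*.cq4")) {
            for (Path f : files) {
                String stem = f.getFileName().toString().replace(".cq4", "");
                if (LocalDate.parse(stem, fmt).isBefore(cutoff)) {
                    Files.delete(f); // the whole day's data goes away at once
                    deleted++;
                }
            }
        }
        return deleted;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("cq-demo");
        Files.createFile(dir.resolve("20240101.cq4")); // old roll file
        Files.createFile(dir.resolve("20240114.cq4")); // recent roll file
        int deleted = purge(dir, LocalDate.of(2024, 1, 15), 7);
        if (deleted != 1 || Files.exists(dir.resolve("20240101.cq4"))) throw new AssertionError();
        System.out.println("purged " + deleted + " roll file(s)");
    }
}
```

Note the granularity consequence: retention can only be as fine as the roll period, which is why the DAY-resolution question above matters.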
This is more of a free discussion topic.
I have two topics to consider as replacements in the microservice framework to achieve deadly performance.
Event Store
Would you consider replacing the SQL-like store with the Chronicle Map system?
Broker
Would you consider replacing Kafka or any other messaging with the Chronicle Queue subsystem?
Best regards,
Ladislav