dds_take and dds_read looks completely same #17
Hi,

They look the same, but they are very different in behaviour: dds_read returns the samples and leaves them in the reader's cache (marking them as "read"), while dds_take returns them and removes them from the cache. Sometimes it doesn't really make much of a difference; for a trivial example like the ones in this repository it hardly matters which one you use. So a simple answer could be: use dds_take. (TL;DR hint: skip to the end, but hopefully you'll take the time to read on.)

The more complicated answer is almost philosophical in nature, because it really is asking "what is DDS?" To that question you'll probably get a different answer from every person you ask, probably because DDS is two things at the same time.

The first is that it is an eventually consistent shared data space. Shared data spaces go back to the early '80s, with two independent strains in existence: the synchronous ones along the lines of Linda (from D. Gelernter at Yale; Java's TupleSpace and many others are direct descendants); and the eventually consistent ones, starting with SPLICE (M. Boasson at Hollandse Signaalapparaten BV), of which DDS is a direct descendant. The key feature shared by both kinds of shared data space is that they fully focus on the data (they're the opposite of OOP, because they encapsulate functionality and expose data) and decouple data availability from the existence of application processes. So it doesn't matter when a process starts, because the data is available anyway; if the processes are stateless, they can be killed and restarted at will.

Despite sharing this key feature, they are also very different. Linda (and descendants) is like a database server: applications send requests to the server to add a tuple, read a copy of a tuple, or remove a tuple. All operations are atomic, which turns the server into a choke point. Of course you can build a fault-tolerant cluster for this, but you will still have the round-trip latencies and the inherent scalability problems that come with a synchronous model. DDS (and its predecessors) are more like a distributed caching scheme, where each process subscribes to what it needs and caches its own copy of that part of the data space.
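The behavioural difference between the two operations can be sketched with a toy model of a reader's cache. This is a Python illustration of the semantics only, not Cyclone DDS code (the real API is C), and all names here are made up:

```python
class ToyReaderCache:
    """Toy model of a DDS DataReader's sample cache (illustration only)."""

    def __init__(self):
        self._samples = []

    def deliver(self, sample):
        # A sample arriving from the network lands in the reader's cache.
        self._samples.append(sample)

    def read(self, max_samples=None):
        # read: return samples but leave them in the cache,
        # so a later read sees them again.
        return list(self._samples[:max_samples])

    def take(self, max_samples=None):
        # take: return samples and remove them from the cache,
        # so each sample is consumed at most once.
        taken = self._samples[:max_samples]
        del self._samples[:len(taken)]
        return taken


cache = ToyReaderCache()
cache.deliver({"key": "lamp", "brightness": 3})

print(cache.read())   # the sample is returned ...
print(cache.read())   # ... and returned again: read does not consume it
print(cache.take())   # returned one last time ...
print(cache.take())   # ... and now the cache is empty: []
```

The cache-as-local-copy picture above is why read is cheap and repeatable: it never changes what the reader holds.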
These caches can contain stale data (a new sample has already been published but hasn't arrived yet), but in this model that is considered perfectly acceptable, as long as the cache gets updated eventually. This makes updating and reading very fast and predictable, because there is no system-wide synchronisation, but the price is that deleting data becomes more complicated. For deleting data, basically all you can do is indicate that an instance (= key value) is no longer relevant by disposing it (dds_dispose), after which the readers know they can drop it from their caches.

An important consequence is that the content generally should describe the state of the system rather than events, because the processes are decoupled so much that they may not always get all updates. Think of the difference between a light dimmer sending commands like "brighten by 1 unit" vs "set the brightness to X": if you skip one of the former it affects the result, but skipping one of the latter has no meaningful effect. If you need to deal with events, you can, but that is the other face of DDS. Ideally, then, your data would be similar to what you would store in a relational database. Seen in that light, dds_read is the natural operation: it lets you inspect the current state as often as you like without consuming it.

The other face of DDS is that it is also a publish-subscribe messaging system, like MQTT, like Kafka, like ROS 2, like so many others. In this view of DDS you publish messages and you consume them, so you would generally want to use dds_take.

The QoS settings determine which face of DDS you are dealing with. As a guideline, the first is: reliable, no auto-dispose, transient durability, keep-last history, by-source-timestamp destination order. The second is: reliable, volatile durability, keep-all history (on reader and writer), by-reception-timestamp destination order. If you want some measure of historical data for late-joining applications in the second mode, upgrade from volatile to transient-local. Similarly, if, in the first mode, you can tolerate waiting until the next update comes (or you don't have late joiners), you can downgrade from transient to volatile.
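The dimmer analogy can be made concrete: with state updates ("set brightness to X") a lost message is repaired by the next one, while with relative commands ("brighten by 1") every loss changes the end result. A hypothetical Python sketch, not DDS code:

```python
def apply_state_updates(updates, lost):
    """Each update carries the full target brightness; apply those that arrive."""
    brightness = 0
    for i, target in enumerate(updates):
        if i in lost:
            continue  # a dropped update is harmless once a later one arrives
        brightness = target
    return brightness


def apply_commands(commands, lost):
    """Each command is a relative change; every loss permanently skews the result."""
    brightness = 0
    for i, delta in enumerate(commands):
        if i in lost:
            continue  # a dropped command is never compensated for
        brightness += delta
    return brightness


# Ramp the light from 0 to 5, losing the third message in transit.
print(apply_state_updates([1, 2, 3, 4, 5], lost={2}))  # 5: final state intact
print(apply_commands([1, 1, 1, 1, 1], lost={2}))       # 4: one unit lost forever
```

This is the reason state-oriented data tolerates the eventual-consistency model so well: the latest sample alone is enough to reconstruct the state.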
Two important notes: (1) Cyclone DDS isn't done yet, and support for the transient setting isn't there yet; in practice, you can build many applications by simply using transient-local instead. (2) The ability to do proper queries on the data is not there yet either, so saying "use dds_read and query the cached state" would be getting ahead of the implementation.

So for the time being, you probably want to use dds_take.

With apologies if I have only made it more confusing ...
Hi, thanks for the kind reply. Your notes help me get more familiar with DDS :)
I'm not sure I understand your first question ... in the implementation, dds_read and dds_take go through the same code path, with a boolean flag selecting whether the samples are removed from the reader's cache afterwards. The "multi" is about a single subscriber (very technically speaking, a single DDS DataReader) reading multiple times. Different subscribers (DataReaders, to be exact) are always completely independent: even a take by one reader only removes data from that reader's own cache and never affects the others.
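Reader independence can be sketched the same way: each DataReader keeps its own cache of the data, so a take by one reader never removes anything from another. Again a hypothetical Python sketch of the semantics, not the actual C implementation:

```python
class ToyReader:
    """Each DataReader holds an independent copy of the matched data."""

    def __init__(self):
        self._cache = []

    def take(self):
        # Consume everything in this reader's cache; other readers are untouched.
        taken, self._cache = self._cache, []
        return taken


def publish(readers, sample):
    # The writer delivers a copy of the sample to every matched reader.
    for r in readers:
        r._cache.append(sample)


r1, r2 = ToyReader(), ToyReader()
publish([r1, r2], "sample-1")

print(r1.take())  # ['sample-1']: consumed from r1's cache only
print(r2.take())  # ['sample-1']: r2 still has its own copy
print(r1.take())  # []: nothing left for r1
```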
Oh, finally I noticed the small difference: the true or false passed to dds_read_impl, a bool indicating take or read. Sorry for bothering you.
And thanks for explaining the "multi" meaning :)
No worries; indeed, thanks for being interested :)
Hello,
I noticed dds_read and dds_take look completely the same. Is this the expected implementation?
Or did you forget to remove one of them?
I got confused when reading the example code: one example uses dds_read, another uses dds_take.
Thanks