Ivy to redis #1071
Conversation
Nice.
Yeah, that works. One last thing is to make sure the key makes the most sense:
You already had the ac_id first, so I just added the msg_class. Actually, assuming the ac_id for "ground" messages comes after the message name is not a good idea, as that is not the case for all ground messages... To be honest, I don't think it makes much sense to put message "values" into redis like this anyway, but it is a start... What you actually want is some sort of PprzMessage object that already contains the field names, types, units, descriptions, timestamp, etc...
I realized on my way to work that the ac_id is better left in, because otherwise messages from one aircraft will overwrite the keys/values of another. So let's leave it as msg_class first, then msg_name, with ac_id at the end; then they don't get overwritten. I agree about the lack of definition, but ivy sends it that way and adding the definition each time sounds like overkill. Google has a project called Protobuf, which compiles the message definition and then uses a much simpler field ID (which may never change!) to reduce the size of the messages. I think the definition occurs only once and is not repeated, even if there is a repeated field in there (which probably isn't the case for us). So we could move to something like that, if we make sure that the fields of each message are also numbered and that the numbering never changes, which is easy to enforce since we have the messages.xml definition. This already makes clear that it is a discussion that should be taken to the mailing list. A potential benefit is that the requirement that every process use the current definition can be relaxed. There is currently an md5 check that enforces this. With field and message IDs in place, you could theoretically allow processes to lag behind in updates, as long as the updates have no impact on their function, and the md5 check could be removed (I think?). It's not a big deal for this script to pick up the message definition and store it in redis somewhere as some kind of dictionary.
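The key layout settled on above (msg_class first, then msg_name, with ac_id last so one aircraft's values don't overwrite another's) can be sketched as a small helper. The function name and the dot separator are illustrative assumptions, not the actual script's API:

```python
def redis_key(msg_class, msg_name, ac_id):
    """Build a redis key as msg_class.msg_name.ac_id.

    Putting the ac_id last keeps one aircraft's values from
    overwriting another's, while a pattern subscription such as
    'telemetry.GPS.*' still matches that message for all aircraft.
    """
    return "{}.{}.{}".format(msg_class, msg_name, ac_id)
```

For example, `redis_key("telemetry", "GPS", 5)` yields `telemetry.GPS.5`, which a redis client can both SET (latest value) and PUBLISH (live updates) under.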
I have no intention of changing it on the ivy bus or starting to replace it with protobuf or anything like that now. While I am not particularly fond of Ivy (for various reasons), replacing Ivy is currently not an option we are considering, since it is used everywhere in our code and we don't have the manpower. Putting messages into redis is not a bad idea, but for me the interesting part is not redis itself but rather the format we choose to publish stuff in.
I had a quick look yesterday to see how complex it would be, and it is a massive effort, especially in the GCS. Getting back to the discussion: I'll look into producing the code to parse messages.xml into an easily readable format (json?) and then put that into redis. Redis has the suggested pub/sub constructs and is also a datastore, which crossbar.io is not. It's good to be able to query configurations and definitions, because the alternative is making RPCs to get this data. Redis is one of the most widely used stores at the moment, so it's easy to interface with it from iOS, Android, the web, and a very impressive list of other languages. A downside is that it doesn't use multicast, so you depend on the server working. But it's always possible to create a little process that stuffs a datastore with data if needed and only run the online part. APM has a utility called mavproxy, a command-line tool that accepts mavlink data and redistributes it to others. It requires you to add "-out" parameters, which makes the proxy unnecessarily aware of which clients connect. But it doesn't convert mavlink packets; it just redistributes them to other apps that know how to talk mavlink. Msgpack ( http://msgpack.org/ ) is one way to reduce the json message size by 20-40% while keeping fast encoding/decoding. It's a good way to save space and network resources, and it can still be decoded back to the full json representation.
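The messages.xml-to-json step mentioned above could look roughly like the sketch below, using only the standard library. The element and attribute names here are a simplified stand-in for the real messages.xml layout (which carries more attributes per field, such as descriptions), so treat the structure as an assumption for illustration:

```python
import json
import xml.etree.ElementTree as ET

# Simplified stand-in for paparazzi's messages.xml; the real file
# is larger and may use different element names.
SAMPLE = """
<protocol>
  <msg_class name="telemetry">
    <message name="GPS" id="8">
      <field name="mode" type="uint8"/>
      <field name="alt" type="int32" unit="mm"/>
    </message>
  </msg_class>
</protocol>
"""

def messages_to_dict(xml_text):
    """Turn the message definitions into a plain dict, ready to be
    json-dumped and stored in redis as a definition dictionary."""
    root = ET.fromstring(xml_text)
    defs = {}
    for msg_class in root.findall("msg_class"):
        cls = {}
        for message in msg_class.findall("message"):
            cls[message.get("name")] = {
                "id": int(message.get("id")),
                "fields": [dict(f.attrib) for f in message.findall("field")],
            }
        defs[msg_class.get("name")] = cls
    return defs

definitions = messages_to_dict(SAMPLE)
as_json = json.dumps(definitions)  # this string is what would be SET in redis
```

A client that later GETs this json can recover the field names, types, and units without ever having seen messages.xml itself, which is the point of storing the definitions alongside the values.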
I guess what is most useful depends on what you want to use it for in the end... From my point of view, the interesting thing here is to get the needed messages/functionality exposed so you can easily build external "apps" like a simple GCS, etc. To me that basically means: some sort of middleware with a pub/sub mechanism, plus something to get/set common descriptions, statuses, commands, etc. (which could be done either via RPC or via something like a get on redis). So while I'm mostly ignorant about all the new web stuff and the many possible solutions (and without really having used any of them), it currently seems to me:
Which of course doesn't mean that we can't use redis "locally" as a sort of "server" and expose it via WAMP to external apps again (or any other tool/combination). In any case, as long as the pub/sub "messages" have the same format, it should be easy to use redis for this and WAMP for that... All of the above is said under some assumptions:
For fast encoding/exchange of binary data I would recommend looking into Cap'n Proto
I read through crossbar.io and it should serve the needs well. The benefits are having access to TLS and different transports (websocket, rawsocket, or pipe), with some other things on top if you want them. Some concerns: there is practically one developer working on it, and it hasn't appeared much in web circles. Crossbar.io can have these features because it uses underlying frameworks that support them easily (tornado, autobahn). In essence, crossbar.io is a smart webserver whose main focus is on web-related interactions (signin, notifications, ajax-done-differently). The most important driver for pprz should be to abstract the message bus functionality, so that switching away from any underlying transport is easy. Attached is a file showing how I imagine this working. The main difference there is the assumption that the local PC/network is trusted, so no security is needed there. On the boundaries, you'd have nodejs and a REST server, where TLS and other security measures can be implemented to restrict what's possible from the browser. I don't see RPC as a necessary thing to have, because this is not a webapp with a predetermined workflow. RPC is essentially a pair of topics with a relation between them (a timeout and a verification that a response was received?). An RPC also binds processes together, making one process depend on another to function. This is good if you have stringent business rules, but in other cases maybe a GCS isn't really required, because someone wants to run a more automatic workflow? It is also possible to connect all UI applications through nodejs/REST, creating an app landscape in which REST applies the same rules to everything (as far as the user is concerned) with some back-end processes running in the cloud. REST and nodeJS then implement the security features, but leave the message bus as a very simple, easy-to-access bus that quickly drawn-up test scripts can talk to.
You can then enforce that users with specific roles only have access to specific functions in the GCS, depending on their login.
merged with 6462125 How to proceed with integrating other messaging systems should be discussed on the mailing list and maybe in a separate issue...
This is a little script that subscribes to the ivy bus and pushes all messages through redis. Redis is a very fast key-value store for various data types with a very simple client interface, and it also offers a pub/sub mechanism with pattern matching on keys. This page shows how it works: https://pypi.python.org/pypi/redis/
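The core of such a bridge can be sketched as below. This is a minimal illustration under two assumptions: the ivy string follows the telemetry layout `"AC_ID MSG_NAME v1 v2 ..."` (ground messages differ, as noted elsewhere in this thread), and `redis_client` is a redis-py client, whose `set` and `publish` methods are real API. The function names and key format are illustrative, not the actual script's:

```python
def parse_ivy_telemetry(ivy_str):
    """Split a telemetry-style ivy string 'AC_ID MSG_NAME v1 v2 ...'
    into (ac_id, msg_name, values). Ground messages use a different
    layout, so this sketch only covers telemetry."""
    parts = ivy_str.split()
    return parts[0], parts[1], parts[2:]

def on_ivy_msg(redis_client, msg_class, ivy_str):
    """Forward one ivy message into redis under msg_class.msg_name.ac_id."""
    ac_id, msg_name, values = parse_ivy_telemetry(ivy_str)
    key = "{}.{}.{}".format(msg_class, msg_name, ac_id)
    payload = " ".join(values)
    # SET stores the latest value so late-joining clients can read it;
    # PUBLISH fans it out to live pattern subscribers (PSUBSCRIBE).
    redis_client.set(key, payload)
    redis_client.publish(key, payload)

# In the real script this would be wired to an ivy subscription, e.g.:
#   import redis
#   r = redis.StrictRedis()
#   ... ivy callback ... -> on_ivy_msg(r, "telemetry", ivy_str)
```

A client would then use `GET telemetry.GPS.5` for the last known value and `PSUBSCRIBE telemetry.GPS.*` to follow live updates from all aircraft.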
Other converters such as UDP servers exist, but those require clients to implement socket-level constructs as well, and you don't get any additional logic with them. It's also better to initiate the connection from the client. There are many client libraries available for redis, so it's possible to interface with paparazzi from a host of languages and environments, even web browsers if you set up a node.js server. Packages that do that exist.
The main reason I wrote this script is that I want to improve the android tablet client and make a simpler ground control for non-experienced, non-engineer users. Redis then allows me to pick which messages are transferred over bluetooth/wifi, so that resources are better utilized.
Messages are also "set" as keys in the keystore, which means that clients connecting later can inspect the values of settings at any point in time and don't need to request them or wait for them.
Notes: