Nimbus Local (Same Machine) Messaging #51

Closed
KodrAus opened this issue Feb 19, 2014 · 7 comments


KodrAus commented Feb 19, 2014

I've been looking through the Nimbus source to work out how easily local messaging could be implemented. I don't want to move too much infrastructural cheese around, but in terms of scalability I think it would be a great feature.

At the moment would I be right in saying the message flow (one-way) is roughly as follows (sorry about the crummy text-diagrams):
[Bus] -> Execute operation to -> [Sender (e.g. ICommandSender)]
  -> Get queue (IMessageSender) from -> [MessagingFactory]
  -> Create queue if necessary in -> [QueueManager]
  -> Send message to -> Queue
Queue <- Gets messages <- [Message Pump] -> Sends to -> [Dispatcher] -> Creates instance of -> [Handler] -> Handles

So in short:
Outgoing:
[Bus] -> [Sender] -> [MessagingFactory] -> Queue
Incoming:
Queue -> [MessagePump] -> [Dispatcher] -> [Handler]

My early thoughts are to wrap an F# MailboxProcessor in the IMessageSender and IMessageReceiver interfaces and, for each Azure queue that is created, create an equivalent local queue.
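
A rough sketch of the shape that wrapping could take. The original idea is an F# MailboxProcessor; the C# version below uses a BlockingCollection purely as a stand-in, and the interface shapes here are simplified placeholders rather than Nimbus's real IMessageSender/IMessageReceiver signatures:

```csharp
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Hypothetical, simplified stand-ins for Nimbus's sender/receiver abstractions;
// the real IMessageSender/IMessageReceiver signatures are richer than this.
public interface ILocalMessageSender   { Task Send(object message); }
public interface ILocalMessageReceiver { Task<object> Receive(); }

// An in-memory queue playing the role described above. The proposal uses an
// F# MailboxProcessor; a BlockingCollection is used here purely as a C# stand-in.
public class LocalQueue : ILocalMessageSender, ILocalMessageReceiver
{
    private readonly BlockingCollection<object> _messages = new BlockingCollection<object>();

    public Task Send(object message)
    {
        _messages.Add(message);        // enqueue; never blocks for an unbounded collection
        return Task.FromResult(0);     // Task.CompletedTask on newer frameworks
    }

    public Task<object> Receive()
    {
        // Blocks until a message is available; a real implementation would want
        // an async, cancellable receive.
        return Task.FromResult(_messages.Take());
    }
}
```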

When the MessagingFactory passes a queue to the sender, it will first try to find a local queue. If it finds one, and the Bus has opted in to local messaging, it returns the local queue in preference to the Azure queue; if not, it returns the Azure queue. It would look like this:

Outgoing:
[Bus] -> [Sender] -> [MessagingFactory] -> Local or External? -> Queue
Incoming (Local):
Queue -> [Dispatcher] -> [Handler]
Incoming (External):
Azure Queue -> [Message Pump] -> Local Queue -> [Dispatcher] -> [Handler]

In the message pump, route all Azure queue traffic through the equivalent local queue, which then calls the dispatcher, rather than calling the dispatcher from the pump directly. This way, if you have multiple services running on the same machine, you can optimise bandwidth and performance by messaging via the F# queues instead of Azure, while still receiving and handling traffic from the Azure queues in exactly the same way.

The important thing to note is that local messaging is less resilient, so returning local queues would be opt-in. Whether or not the Bus opts in, the local queues would still be built and MessagePump traffic routed through them, since that makes no difference in the opted-out case.
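
Pulling the factory decision and the opt-in together, a minimal illustrative sketch (reusing the LocalQueue and ILocalMessageSender shapes from the sketch above; none of these names are real Nimbus types):

```csharp
using System;
using System.Collections.Concurrent;

// Illustrative only: local queues are always built (so the message pump can route
// through them), but senders only get the local queue when the bus has opted in.
// The Azure path is reduced to a caller-supplied factory delegate.
public class QueueSelectingMessagingFactory
{
    private readonly ConcurrentDictionary<string, LocalQueue> _localQueues =
        new ConcurrentDictionary<string, LocalQueue>();
    private readonly bool _localMessagingEnabled;
    private readonly Func<string, ILocalMessageSender> _createAzureQueueSender;

    public QueueSelectingMessagingFactory(bool localMessagingEnabled,
                                          Func<string, ILocalMessageSender> createAzureQueueSender)
    {
        _localMessagingEnabled = localMessagingEnabled;
        _createAzureQueueSender = createAzureQueueSender;
    }

    public ILocalMessageSender GetSenderFor(string queuePath)
    {
        // Whether or not we opt in, build the local queue so incoming Azure traffic
        // can still be routed through it by the pump.
        var localQueue = _localQueues.GetOrAdd(queuePath, _ => new LocalQueue());

        return _localMessagingEnabled
            ? (ILocalMessageSender)localQueue
            : _createAzureQueueSender(queuePath);
    }
}
```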

Does anyone have anything to add or potential caveats they can see? I'm playing around with the concept locally in a fork. If I implement this without breaking existing functionality, would it be pulled into the main branch, or is it outside the scope of Nimbus?

uglybugger (Contributor) commented

Hi Ashley,

This is something that's been on our radar for a while as it's a key part of allowing offline behaviour. I agree with the local queue approach in principle and you'll see recent refactorings to encapsulate message senders and receivers to allow pretty much precisely that.

The one sticking point is that there are lots of cases where you want both local and remote concurrently. Consider this scenario: A FooCommand is both sent and handled by a bunch of machines. The commands tend to need to be done in batches (think image encoding) so any machine in the cluster can generate many commands in one hit. If we have completely local message queuing then the sending machine will be swamped but other machines in the farm will sit idle. Likewise, if we push all the commands to a central queue first then 1) we don't work offline and 2) we pay a latency penalty every single time.

One way I was thinking about doing it was effectively having an outgoing message pump for each queue as well as an incoming one, i.e.:

[Bus] -> [Sender] -> Local outbound queue -> [Local Dispatcher Message Pump] -> [Dispatcher]
                                          -> [Local Outbound Message Pump] -> Azure queue

This would allow local handlers to compete by popping messages off the local outbound queue directly, provided they were idle, while another pump competes to push them to the Azure queue. We'd get the best of both worlds, then - the highest possible throughput locally whilst offloading any overflow to the cloud.
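
For illustration, a minimal sketch of the two competing pumps. The delegates stand in for the real dispatcher and Azure sender, and the "only while handlers are idle" rule isn't modelled here:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

// Illustrative sketch: two pumps consume from the same local outbound queue.
// Whichever takes a message first wins, so local dispatch and Azure forwarding
// naturally compete for the available work.
public class CompetingOutboundPumps
{
    private readonly BlockingCollection<object> _outbound = new BlockingCollection<object>();

    public void Send(object message)
    {
        _outbound.Add(message);
    }

    public void Start(Action<object> dispatchLocally, Action<object> sendToAzure, CancellationToken token)
    {
        // Local dispatcher pump: grabs whatever it can for in-process handlers.
        Task.Run(() =>
        {
            foreach (var message in _outbound.GetConsumingEnumerable(token))
                dispatchLocally(message);
        });

        // Outbound pump: competes for the same messages and pushes the overflow to
        // the Azure queue, where other machines in the farm can pick them up.
        Task.Run(() =>
        {
            foreach (var message in _outbound.GetConsumingEnumerable(token))
                sendToAzure(message);
        });
    }
}
```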

Thoughts?


KodrAus commented Feb 19, 2014

First off, can you please point me to who is responsible for creating Message Pumps and what their multiplicity relationship is with queues?

I see your point and I think the outgoing pump is a good solution. So we post our message to the F# (doesn't have to be F#, I just like it) outbound queue, and we then have two pumps pulling messages out of it: one passes messages on to the local incoming queue for that handler, and then to the dispatcher; the other pushes them out to the Azure queue. I think I'll stick a configurable throttle (maybe a 2ms wait or something) on the Outbound Pump for testing, so we can minimise messages that go all the way out to Azure when they don't need to.
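
A small sketch of what that throttle could look like; the names and the 2ms default are illustrative only. The Azure-bound pump simply pauses before competing for each message, so the local dispatcher pump gets first shot at it:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

// Illustrative only: a throttled forwarder that competes for messages on the
// shared outbound queue and pushes whatever it wins out to Azure.
public class ThrottledAzureForwarder
{
    private readonly BlockingCollection<object> _outboundQueue;
    private readonly Action<object> _sendToAzure;
    private readonly TimeSpan _throttle;

    public ThrottledAzureForwarder(BlockingCollection<object> outboundQueue,
                                   Action<object> sendToAzure,
                                   TimeSpan? throttle = null)
    {
        _outboundQueue = outboundQueue;
        _sendToAzure = sendToAzure;
        _throttle = throttle ?? TimeSpan.FromMilliseconds(2);   // the configurable wait
    }

    public void Run(CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            Thread.Sleep(_throttle);   // give local handlers a head start at each message

            object message;
            if (!_outboundQueue.TryTake(out message, 100)) continue;   // short poll; nothing waiting yet
            _sendToAzure(message);
        }
    }
}
```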

Another potential scenario could be that you want the physical machine/process to be a boundary that ensures messages are only delivered to microservices within your microservice (metamicroservices?) that are implicitly tied together, without having to tightly couple them. An example may be some kind of performance metric service, or access to a local datasource. Or perhaps an application that you decide is best to deploy on a one-machine-per-customer basis.

So who should be responsible for configuring whether or not to use local messaging and the outbound Azure pump? I had thought that, since the glue that holds it all together is the message, an optional Attribute could be applied to Messaging Contracts, looking something like:

[DeliveryPreference(DeliveryMethods.Local)]
[DeliveryPreference(DeliveryMethods.External)]
[DeliveryPreference(DeliveryMethods.Any)] // (the default if the attribute isn't set)

I have mixed feelings about Attributes, but I think this way is the least intrusive for folks who already have a lot of messages.

We can then wrap a Mailbox.Scan call in a function on the Outbound Queue that gets the first message flagged with either DeliveryMethods.Any or the pump's own method (DeliveryMethods.External or DeliveryMethods.Local), so the message pumps don't touch messages that are explicitly flagged for the other delivery method.
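
To make the idea concrete, a hypothetical sketch of the attribute and the filter a pump could apply when scanning the outbound queue (none of these types exist in Nimbus, and the example message contract at the end is made up):

```csharp
using System;
using System.Linq;

// Hypothetical sketch only: neither DeliveryMethods nor DeliveryPreferenceAttribute
// exists in Nimbus; this just illustrates the idea above.
public enum DeliveryMethods { Any, Local, External }

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Interface)]
public class DeliveryPreferenceAttribute : Attribute
{
    public DeliveryMethods Method { get; private set; }
    public DeliveryPreferenceAttribute(DeliveryMethods method) { Method = method; }
}

public static class DeliveryPreferenceResolver
{
    // Resolves a message type's preference, defaulting to Any when no attribute is present.
    public static DeliveryMethods For(Type messageType)
    {
        var attribute = messageType.GetCustomAttributes(typeof(DeliveryPreferenceAttribute), true)
                                   .Cast<DeliveryPreferenceAttribute>()
                                   .FirstOrDefault();
        return attribute == null ? DeliveryMethods.Any : attribute.Method;
    }

    // The filter a pump would apply when scanning the outbound queue: take messages
    // flagged Any, or flagged explicitly for this pump's own delivery method.
    public static bool ShouldHandle(DeliveryMethods pumpMethod, Type messageType)
    {
        var preference = For(messageType);
        return preference == DeliveryMethods.Any || preference == pumpMethod;
    }
}

// Made-up example of a message contract opting in to local-only delivery.
[DeliveryPreference(DeliveryMethods.Local)]
public class RecordPerformanceMetricCommand { }
```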

The devil is in the details. But I'll start fleshing things out over the next week.

uglybugger added this to the Future milestone Feb 22, 2014

KodrAus commented Feb 23, 2014

Just about got a proof of concept together for the local messaging. I'm supporting two scenarios: handlers that exist in-process (handled by direct access to the queue) and handlers that exist in different processes on the same machine (handled by access via pipes). I've written up a simple non-blocking library that uses NamedPipes to emulate Azure Topics.

The way it'll work is that when initially wiring everything up, the manager will check whether the local queue is known to it. If it is, then it's an in-process message. If not, it will check a machine-wide resource where each application declares its existence (at this stage I've got a Windows Service). If it finds the reference there, then it's a cross-process message. If the application is not found there (or fails to respond to a handshake), then it's an Azure message.

I've also handled the scenario of local services that aren't all started at the same time, so the manager will pick up local services that start after the initial setup.
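
A rough sketch of that routing decision, with the in-process queue registry, the machine-wide registration service, and the pipe handshake all reduced to placeholders (none of these types are real Nimbus code):

```csharp
using System;
using System.Collections.Generic;

// Illustrative only: decides which transport a message should take for a given queue path.
public enum Transport { InProcess, NamedPipe, Azure }

public class LocalTransportResolver
{
    private readonly HashSet<string> _inProcessQueues;
    private readonly Func<string, bool> _machineWideLookup;   // e.g. asks the Windows Service registry
    private readonly Func<string, bool> _handshake;           // pings the other process over its pipe

    public LocalTransportResolver(IEnumerable<string> inProcessQueues,
                                  Func<string, bool> machineWideLookup,
                                  Func<string, bool> handshake)
    {
        _inProcessQueues = new HashSet<string>(inProcessQueues, StringComparer.OrdinalIgnoreCase);
        _machineWideLookup = machineWideLookup;
        _handshake = handshake;
    }

    public Transport Resolve(string queuePath)
    {
        if (_inProcessQueues.Contains(queuePath))
            return Transport.InProcess;                        // handler lives in this process

        if (_machineWideLookup(queuePath) && _handshake(queuePath))
            return Transport.NamedPipe;                        // another process on this machine

        return Transport.Azure;                                // fall back to the real Azure queue
    }
}
```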

Any thoughts?


jhashemi commented Jun 6, 2014

Check out the queues here: http://wacel.codeplex.com/documentation - I think they would work amazingly well for this.

uglybugger modified the milestones: 3.0, Future Dec 21, 2015
uglybugger self-assigned this Dec 21, 2015
uglybugger modified the milestones: Future, 3.0 Dec 21, 2015
uglybugger (Contributor) commented

Argh. Derp. I moved the wrong issue into 3.0 by mistake. Moving it back out for now. I still think this idea is worth discussing, though.


KodrAus commented Dec 21, 2015

Definitely. Last time I thought about this I was trying to think of a unified way to do internal/cross-process messaging with named pipes. I'm not so sure that's really the right approach now.


KodrAus commented Dec 21, 2015

#33

KodrAus closed this as completed Dec 21, 2015