
Add a "server mode" to the TCP Sender connector #2382

Closed
rbeckman-nextgen opened this issue May 11, 2020 · 17 comments

@rbeckman-nextgen commented May 11, 2020

This stemmed from a very interesting conversation on the forums: http://www.mirthcorp.com/community/forums/showthread.php?t=3818&page=3

This would be very similar to how the TCP Listener has a "client mode". Basically, instead of the TCP Sender always initiating the socket connection, it would stand up a server socket and wait for a client to connect. Once that happens, all other behaviour would remain the same. If messages are slated to be dispatched and a client has not yet connected, they would be queued (or errored out if queuing is disabled).
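For illustration only, here is a minimal sketch of that reversed flow using plain java.net sockets; ServerModeSender, queue(), and run() are invented names rather than Mirth Connect APIs, and queueing and error handling are greatly simplified:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical sketch only: the sender binds a listening socket, waits for the
// downstream system to connect, and writes queued messages on that connection.
public class ServerModeSender {
    private final BlockingQueue<byte[]> outbound = new LinkedBlockingQueue<>();

    // Messages are queued until a client has connected, mirroring the
    // queueing behavior described above.
    public void queue(byte[] message) {
        outbound.add(message);
    }

    public void run(int port) throws IOException, InterruptedException {
        try (ServerSocket server = new ServerSocket(port)) {
            while (true) {
                // Reversed setup: accept a connection instead of initiating one.
                try (Socket client = server.accept();
                     OutputStream out = client.getOutputStream()) {
                    while (true) {
                        byte[] message = outbound.take();
                        out.write(message);
                        out.flush();
                    }
                } catch (IOException e) {
                    // Client dropped; loop back to accept() and wait for a reconnect.
                    // (A taken-but-unsent message would need to be requeued in a
                    // real implementation.)
                }
            }
        }
    }
}
```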

Imported Issue. Original Details:
Jira Issue Key: MIRTH-2446
Reporter: narupley
Created: 2013-04-01T12:43:46.000-0700

@rbeckman-nextgen added this to the 3.9.0 milestone May 11, 2020
@rbeckman-nextgen commented May 11, 2020

I too am looking for something exactly like this. Some EMRs ask for channels to be configured like this for failover purposes. I myself use Mirth for testing: I have a device that receives unsolicited ADT, and to test it with Mirth Connect I need to create a channel with a TCP Listener that, on connect, starts sending unsolicited ADT back to my device over the same connection.

Hopefully this makes sense. If there is another way of accomplishing the same thing, maybe with multiple channels, I'm all ears!

Imported Comment. Original Details:
Author: hsingh00
Created: 2013-11-01T12:11:30.000-0700

@rbeckman-nextgen commented May 11, 2020

This would be a great addition to Mirth Connect. We run into this issue occasionally in the US, but more often outside the US. Having this as an option would help us avoid the difficult conversation of saying "no, we cannot connect that way."

Imported Comment. Original Details:
Author: mmaund
Created: 2014-01-31T13:45:28.000-0800

@rbeckman-nextgen commented May 11, 2020

We ended up building our own Windows service that reverses the connection flow. It waits for the connection from the LIS and then opens a socket that Mirth is able to connect to. Then it just copies bytes between the two open sockets. If the LIS connection is lost, it closes down the Mirth connection as well and waits for the LIS to reconnect (and it all repeats). Very robust solution.
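As a rough sketch of that relay idea (the class name and ports below are made up, and this uses plain Java sockets rather than the Windows service described above):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

// Illustrative relay: wait for the LIS to connect, then accept a connection
// from the Mirth TCP Sender and copy bytes in both directions. When either
// side drops, close the other socket too and go back to waiting for the LIS.
public class SocketRelay {
    public static void main(String[] args) throws Exception {
        try (ServerSocket lisPort = new ServerSocket(6661);
             ServerSocket mirthPort = new ServerSocket(6662)) {
            while (true) {
                try (Socket lis = lisPort.accept();       // LIS connects first
                     Socket mirth = mirthPort.accept()) { // then Mirth connects
                    Thread a = pipe(lis.getInputStream(), mirth.getOutputStream(), mirth);
                    Thread b = pipe(mirth.getInputStream(), lis.getOutputStream(), lis);
                    a.join();
                    b.join();
                } catch (IOException e) {
                    // Accept or I/O failed; loop and wait for the LIS again.
                }
            }
        }
    }

    // Copies bytes from in to out; closes the peer socket when the stream ends
    // so the opposite pipe unblocks as well.
    private static Thread pipe(InputStream in, OutputStream out, Socket peer) {
        Thread t = new Thread(() -> {
            byte[] buf = new byte[8192];
            try {
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                    out.flush();
                }
            } catch (IOException ignored) {
            } finally {
                try { peer.close(); } catch (IOException ignored) {}
            }
        });
        t.start();
        return t;
    }
}
```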

Imported Comment. Original Details:
Author: gurun
Created: 2014-01-31T15:06:57.000-0800

@rbeckman-nextgen commented May 11, 2020

Thanks for the thought Niclas. I might just give that a try.

Imported Comment. Original Details:
Author: mmaund
Created: 2014-02-03T06:28:15.000-0800

@rbeckman-nextgen commented May 11, 2020

To those of you watching the issue: in your opinion, what's the correct behavior when multiple clients connect to the server socket on the TCP Sender? When a message is ready to be dispatched, I'm assuming it should be sent to all currently connected clients, not just one of them, right? Or should that be configurable in some fashion (like a round-robin option)? Or do multiple clients not really make sense in this scenario? Instead of allowing that (along with a max connections option like the receiver has), we could also just allow only one client to connect at a time.

Imported Comment. Original Details:
Author: narupley
Created: 2014-03-18T08:42:14.000-0700

@rbeckman-nextgen commented May 11, 2020

My initial thoughts would be that it would be up to the Destination configuration. Perhaps an option that limits the number of simultaneous connections to that particular destination. One could potentially have multiple destinations in "Client mode" or "Server Mode" or a mixture of Client and Server. My current needs are a single connection.

Settings available in Server Mode could be:

  • Listener port number.
  • Number of simultaneous connections.
  • Keep connection open indefinitely? Yes/no

Imported Comment. Original Details:
Author: mmaund
Created: 2014-03-18T08:51:54.000-0700

@rbeckman-nextgen commented May 11, 2020

I would expect it to NOT use a server socket with a dispatcher, but rather one listener socket only. The usage is the same as any other client-type socket; it's just that the connection setup is initially reversed.

So no broadcast, round robin, whatever. Just one connection at any time.

Remember, the use case is quite different from a regular listener or server socket, since after initial connection has been established, the server is expected to drive the communication. Server in this case being Mirth, but acting just like it does on a normal client connection.

In addition, I would expect Mirth to log any other attempts to connect as a warning. This would help a great deal when looking for stale connections that Mirth might not have noticed.

Imported Comment. Original Details:
Author: gurun
Created: 2014-03-18T10:24:56.000-0700

@rbeckman-nextgen commented May 11, 2020

I concur with Niclas' description. Nicely stated.

Imported Comment. Original Details:
Author: mmaund
Created: 2014-03-18T10:31:46.000-0700

@rbeckman-nextgen commented May 11, 2020

Another forum thread today about it: [http://www.mirthcorp.com/community/forums/showthread.php?t=11380]

Older threads:

[http://www.mirthcorp.com/community/forums/showthread.php?t=9617]
[http://www.mirthcorp.com/community/forums/showthread.php?t=9679]

Imported Comment. Original Details:
Author: narupley
Created: 2014-08-21T08:24:33.000-0700

@rbeckman-nextgen commented May 11, 2020

I have a slightly different use case for this functionality. Currently, I have a large number of client applications that all need to subscribe to updates from a central datastore. The datastore is capable of publishing its updates, but it has no ability to route these updates to specific subscribing clients. Because there are going to be a large number of clients, each of which may have subscribed to a different subset of the datastore, we don't want to broadcast every datastore update to every client. Basically, I'd like to use Mirth as the message broker in a pub/sub architecture.

Schematically, Mirth would open a TCP server port. Each client would connect to this server port on the Mirth server, establishing a persistent TCP connection and sending a packet that describes the data fields the client would like to subscribe to. This pool of active sockets would be the "destination" of the Mirth channel. Mirth would also open a different listening port that the datastore could use to signal Mirth whenever any data had been updated. This dedicated listener would be the "source" of the Mirth channel.
Whenever the datastore updated any data, it would signal Mirth with the data to be published (probably using an HTTP POST message). Then Mirth would be responsible for deciding which of the clients had subscribed to the data in the update, and sending a message to each of the subscribing clients via the correct TCP socket(s).

We're currently considering 2 variants to get this to work. The first variant will use an external connection manager (probably written in Python) to handle the multiple TCP sockets. Then Mirth will act as a pass-through between the central datastore and the connection manager. The downside of this approach is that we need to write and maintain a new service on the Mirth server. This could be made somewhat easier if Mirth supported AMQP, allowing communication with message broker services like RabbitMQ (see MIRTH-2236).
The second variant doesn't use an external connection manager, but instead changes the pub/sub architecture to a polling architecture. In this variant, each client will regularly send a message to Mirth requesting specific information. Mirth will pass the request to the datastore, and will then pass the desired data in the response message back to the specific client. The downside of this approach is the requirement for each client to poll the central datastore very frequently.
If anyone has any advice on other ways to architect pub/sub in Mirth, I'm eager to hear it.
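To make the first variant a bit more concrete, here is a minimal sketch of the routing table such an external connection manager might keep; the class and method names are invented, and subscription parsing, message framing, and the HTTP listener for the datastore are omitted:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArraySet;

// Illustrative routing table for the pub/sub idea above: each connected client
// socket is registered against the data fields it subscribed to, and an update
// from the datastore is written only to the matching sockets.
public class SubscriptionRouter {
    private final Map<String, Set<Socket>> subscribers = new ConcurrentHashMap<>();

    // Called when a client's subscription packet arrives on the server port.
    public void subscribe(Socket client, List<String> fields) {
        for (String field : fields) {
            subscribers.computeIfAbsent(field, f -> new CopyOnWriteArraySet<>()).add(client);
        }
    }

    // Called when the datastore signals an update (e.g. via an HTTP POST).
    public void publish(String field, String payload) {
        for (Socket client : subscribers.getOrDefault(field, Set.of())) {
            try {
                OutputStream out = client.getOutputStream();
                out.write(payload.getBytes(StandardCharsets.UTF_8));
                out.flush();
            } catch (IOException e) {
                subscribers.get(field).remove(client); // drop dead connections
            }
        }
    }
}
```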

Imported Comment. Original Details:
Author: neils
Created: 2014-12-10T10:55:39.000-0800

@rbeckman-nextgen commented May 11, 2020

What you're describing should be possible if the TCP Sender had a "server mode" and it allowed more than one connecting client at a time. However, let's separate the two main requirements as I see them.

First, clients would open up a TCP connection to Mirth Connect and send a packet of whatever in order to subscribe. That is a job for a source connector, not a destination connector, because data is going in rather than out. What you would have to do instead is open up a different subscription endpoint (TCP or HTTP Listener). Since you already need an HTTP Listener for the data source, it might be easy enough to just re-use the same source connector for both purposes (receiving subscription messages from clients, and receiving updates from the data source). When you receive a subscription notice, you can keep track of it however you wish (in-memory with the global channel map, write to a file or database table, etc.). That subscription message should be filtered in such a way that it won't trigger the TCP Sender the same way a message from the actual data source would.

Second, clients would establish a persistent TCP connection to Mirth Connect, and would automatically receive data source updates, without sending anything (ignoring the TCP-level ACKs and such). That is definitely a job for a destination connector since data is going out, so it falls in line with the server mode TCP Sender. When the data source sends an update to the channel via HTTP, that would process through the channel and get sent out from the TCP Sender to all the currently connected clients. If a client isn't connected at the time the update goes out, it misses that update.

However, there would be no way right now to selectively send some messages to some of the connected clients. If a message flows through the TCP Sender, it would dispatch that message to all connected clients. If we added some sort of "Allowed Remote Addresses" field specifically for server mode though, then you could do that. In the transformer you would decide which clients (by IP/DN) should get a particular update, and put that in the connector map. Then when the TCP Sender dispatches the message, it would only send to the client sockets that match those allowed IPs.
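Purely as an illustration of that proposed dispatch rule (the "Allowed Remote Addresses" field does not exist, and this is not connector code), the selection logic might look something like:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.util.List;
import java.util.Set;

// Hypothetical sketch: given the currently connected client sockets and an
// "allowed addresses" set produced by the transformer, send the encoded
// message only to clients whose remote IP appears in the set.
public class AllowedAddressDispatch {
    public static void dispatch(List<Socket> connectedClients,
                                Set<String> allowedAddresses,
                                byte[] encodedMessage) {
        for (Socket client : connectedClients) {
            String remoteIp = ((InetSocketAddress) client.getRemoteSocketAddress())
                    .getAddress().getHostAddress();
            if (!allowedAddresses.contains(remoteIp)) {
                continue; // this client was not selected for this message
            }
            try {
                OutputStream out = client.getOutputStream();
                out.write(encodedMessage);
                out.flush();
            } catch (IOException e) {
                // In a real connector this client would be marked disconnected.
            }
        }
    }
}
```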

I'll attach an example channel (for 3.1.1). It obviously won't work right now because the server mode and "allowed addresses" features don't yet exist. But it shows how such a channel might work.

Imported Comment. Original Details:
Author: narupley
Created: 2014-12-11T08:54:05.000-0800

@rbeckman-nextgen commented May 11, 2020

The idea of 2 separate open TCP server ports on Mirth (one port to create persistent 'sending-server' sockets, and one port to listen for 'registration' requests) is clever. Paraphrasing:
A. The external client could receive a unique "subscription token" when it connects to the 'sending-server' port.
B. Then the client could send that "subscription token" along with info about itself (like what information it wants to subscribe to) back to the 'registration' port.
The uniqueness of the subscription token would allow the info the client sent in step (B) to be correlated with the TCP socket connected to in step (A).

I'm not sure I agree that all clients connected to the channel should get the same information, though. It seems like the pool of connections would be associated with a single 'destination' in the Mirth channel. This makes it sound reasonable that the Filter and Transformer code for the destination could be made to run once for each TCP socket, rather than just once for the entire destination. That makes it fairly trivial to allow individual sockets to send a subset of the overall information. In other words, Mirth could send messages to clients in a round-robin style for load balancing, in a broadcast style for alerting all clients to an event, or in a filter-by-message fashion for pub/sub messaging.

Imported Comment. Original Details:
Author: neils
Created: 2014-12-12T13:55:26.000-0800

@rbeckman-nextgen commented May 11, 2020

First, a client would not be able to receive a subscription token just by connecting to the "sending-server" port. That is a destination connector, and without a message processing through the channel to trigger it, a connecting client would not receive anything at all. The whole point of the registration port is to provide a way for the client to push initial data to the server and/or receive initial data.

Second, a pool of connected clients *would be associated with a single destination connector. As such, when a message processes through, it would be dispatched to all connected clients for that destination*. If you like you can create separate destinations for separate clients, but obviously those server sockets would be bound to different ports.

For a single destination connector, the filter/transformer would not be run for each socket that happens to be currently connected. The job of the dispatcher (TCP Sender) is to take the output of the transformer (encoded data) and send it out to a downstream system. The transformer has no knowledge whatsoever of the dispatcher: what kind it is, how many sockets are connected if it happens to be a TCP Sender, etc.

You can already do round-robin message dispatching with the TCP Sender by using a Velocity template variable in the address/port fields. If and when we add the new "server mode" feature, then you automatically have the broadcast-style alerting. If we added the "allowed addresses" field for server mode, then you can do the filter-by-message style for pub/sub.

We are considering adding a new "re-run the filter/transformer on each queue attempt" feature. If you leverage that, the variable address/port replacement, and the response transformer, then you could technically run the transformer for each specific socket and dispatch a message only to that socket using the "allowed addresses" field.

Imported Comment. Original Details:
Author: narupley
Created: 2014-12-12T15:20:43.000-0800

@rbeckman-nextgen commented May 11, 2020

[http://www.mirthcorp.com/community/forums/showthread.php?t=15386]

Imported Comment. Original Details:
Author: narupley
Created: 2015-07-27T09:32:15.000-0700

@rbeckman-nextgen commented May 11, 2020

I'm looking to use "server mode" on a TCP Sender connector for HL7v2.x. Can you make any suggestions on how I can do this?

Imported Comment. Original Details:
Author: waldront
Created: 2016-11-08T09:19:58.000-0800

@rbeckman-nextgen commented May 11, 2020

This could be done in a different way, such as having a socket resource that would have a client/server mode and that both listeners and senders could use.

Imported Comment. Original Details:
Author: narupley
Created: 2018-06-11T09:05:32.000-0700

@rbeckman-nextgen commented May 11, 2020

ROCKSOLID-4577

Imported Comment. Original Details:
Author: christ
Created: 2020-04-13T14:23:51.000-0700
