Configure interconnect #1

Draft · wants to merge 6 commits into master

Conversation

mike-costello
**PLEASE DO NOT MERGE THIS PR**
This PR exists only to display progress on the configure interconnect feature.


![amq-dr-reference-architecture](./media/amq-dr-arch.png)

In this diagram, two sites/data centers are represented. A GTM directs traffic to one data center or the other. Messages created by the message producer are received by the interconnect pods. These pods are configured to send the messages to the brokers in the same data center. They also have an alternative, high-weight route that sends the messages to the interconnect pods in the other data center, should no broker be available in the current data center.
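As a rough illustration of that weighting, the inter-router link to the other data center could carry a high cost so the router prefers any lower-cost local path. This is a minimal sketch, assuming a hypothetical hostname and cost value (neither is taken from the template):

```
# Illustrative only: a high-cost inter-router connector means the router
# prefers its cheaper local broker links and only sends messages across
# sites when no local broker is available.
connector {
    name: remote-mesh-router
    role: inter-router
    host: amq-interconnect-datacentre-b.example.com   # assumed remote ingress endpoint
    port: 443
    cost: 100    # high weight relative to the local links
}
```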
Owner
> A GTM directs traffic to either one or the other data-center.

This is not true: the GTM is either clustered across data centers or consumed as a service.

```yaml
@@ -469,7 +469,10 @@ objects:
    dnsNames:
    - ${INTERCONNECT_APPLICATION_NAME}.${NAMESPACE}.svc
    - ${INTERCONNECT_APPLICATION_NAME}-${NAMESPACE}.${DEFAULT_ROUTE_DOMAIN}
    - ${INTERCONNECT_APPLICATION_NAME}-console-${NAMESPACE}.${DEFAULT_ROUTE_DOMAIN}
    - ${INTERCONNECT_APPLICATION_NAME}-${REMOTE_ROUTER_HOST}.svc # FIXME mcostell bad hack to add back in a "second ca"
```
Owner
I don't understand what you are trying to do here. These are the FQDNs represented by this service; it cannot represent both a local and a remote endpoint.

Author
So the idea behind the hack was that, since the client CA points to this exact same secret, which is based off of this certificate, simply signing the cert with both sets of DNS names might be enough to get everyone to trust each other. It's a bad hack, only meant to deal with the complexity around not having a common CA:
```yaml
- name: client-ca
  secret:
    secretName: ${INTERCONNECT_APPLICATION_NAME}-certs
- name: sasl-config
```
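For context, a sketch of the sslProfile that would consume that secret on both routers; the mount paths and profile wiring here are assumptions for illustration, not the template's actual values:

```
# Hypothetical sslProfile: both routers mount the same secret, so the cert
# (signed with both sets of dnsNames) doubles as its own trust anchor.
sslProfile {
    name: inter_router_tls
    certFile: /etc/qpid-dispatch/certs/tls.crt        # assumed mount path
    privateKeyFile: /etc/qpid-dispatch/certs/tls.key  # assumed mount path
    caCertFile: /etc/qpid-dispatch/certs/tls.crt      # self-trust in lieu of a common CA
}
```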

```
@@ -722,6 +725,21 @@ objects:
    sslProfile: service_tls
    saslMechanisms: ANONYMOUS
}
```

Owner
I'd like to understand this configuration. I get this piece, but where is the logic that says to duplicate messages?

Author

The logic to multicast messages is actually not here but in line 715 of the template (in my draft PR):
```
address {
    prefix: broadcast
    distribution: multicast
}
```
What this tells the router is that, for all addresses with broadcast semantics (for instance, a queue named broadcast.inventory.items), messages get broadcast to all message listeners on Interconnect's link graph. What I am trying to do with the connector configuration:
```
connector {
    name: remote-mesh-router
    role: inter-router
    host: ${REMOTE_ROUTER_HOST} # this should be the external ingress endpoint for the other router, for instance: amq-interconnect-datacentre-b.apps.cluster-austin-5274.austin-5274.openshiftworkshop.com
    port: 443 # amqps
    sslProfile: inter_router_tls
    saslMechanisms: ANONYMOUS
    #cost: 100 # TODO mcostell we plan to multicast messages; as a result, we likely want to weight each node on the graph to the same cost. Adding this here to check that assumption.
    maxSessions: 10000 # FIXME mcostell ensure this makes sense. This is the maximum number of sessions that can be open on the connection.
    linkCapacity: 10000 # FIXME mcostell ensure this makes sense. This is the number of messages that can be in flight per link.
    messageLoggingComponents: all # FIXME mcostell this likely does not make sense in most contexts; leaving here to discuss the logging needs with the group.
    idleTimeoutSeconds: 3 # FIXME mcostell the default timeout is 16 seconds, which would be far too high for our use case; however, 3 seconds might still be too long.
    stripAnnotations: out # TODO it is likely dispatch router annotations should be stripped as messages go to the next router; check this assumption.
}
```
is to hook up to the remote router as a peer, with some attention to the effect it might have on the router overall; i.e., I want to constrain my inter-router behaviour to some degree, since I want to leave enough pipe to take messages via normal means. That is why I've taken some initial stabs at settings like maxSessions and linkCapacity.
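For the "enough pipe via normal means" concern, a hypothetical client-facing listener alongside the inter-router connector might look like the sketch below; the name, port, and capacity values are assumptions, not taken from the template:

```
# Illustrative listener for ordinary AMQP clients; capping its linkCapacity
# separately from the inter-router link is one way to keep cross-site
# traffic from starving local producers and consumers.
listener {
    name: client-listener     # hypothetical
    host: 0.0.0.0
    port: 5672                # assumed plain AMQP port
    role: normal              # ordinary clients, not inter-router traffic
    linkCapacity: 1000        # assumed per-link in-flight message cap
}
```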

mike-costello pushed a commit to mike-costello/amq-multicluster-ref-arch that referenced this pull request Jul 15, 2019