Apps should communicate over local network #2715
Comments
There are two ways to solve that: either a hosts-file change with the full domain, pointing to any router, or via SkyDNS. The way to make SkyDNS work is to change how the unit file of the service is written (fleet scheduler.py in the controller) and extend it with a pre-start or post-start step that writes an etcd entry with the local IP, like /skydns/local/cluster/services/MYSERVICE '{"host": ip}'. That entry can be updated every time the service moves, instead of writing a new hosts file. It can also help if you have other unit-file-based services on the system: you can pass in a static env var, and it removes the need for confd for everything except the router.
This technique can be used for internal round-robin as well.
Whatever we do, I think we need to round-robin it... so that a request to, say,
I'm 100% fine with the URL being .service.cluster.local vs .webdomain.com
Yup, SkyDNS does that with /x1, x2 ... xN; those will automatically be round-robined. Register with `curl -XPUT http://127.0.0.1:4001/v2/keys/skydns/local/cluster/db/x1 -d value='{"host": ip}'` and you call it with db.cluster.local.
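To make the SkyDNS-over-etcd idea above concrete, here is a minimal sketch of registering several instances of a `db` service so that lookups of `db.cluster.local` round-robin across them. The `skydns_record` helper, the instance IDs (`x1`..`x3`), and the IPs/ports are illustrative assumptions, not Deis internals; the etcd v2 endpoint on 127.0.0.1:4001 is taken from the comment above.

```shell
#!/bin/sh
# Build the etcd key and JSON value for one SkyDNS record.
# Key layout /skydns/local/cluster/<service>/<id> mirrors the comments above.
skydns_record() {
  service=$1; id=$2; ip=$3; port=$4
  printf '/skydns/local/cluster/%s/%s {"host": "%s", "port": %s}\n' \
    "$service" "$id" "$ip" "$port"
}

# Print the records three hypothetical instances would write. Against a real
# etcd v2 API each would be a PUT, e.g.:
#   curl -XPUT http://127.0.0.1:4001/v2/keys/skydns/local/cluster/db/x1 \
#        -d value='{"host": "10.0.0.1", "port": 5432}'
skydns_record db x1 10.0.0.1 5432
skydns_record db x2 10.0.0.2 5432
skydns_record db x3 10.0.0.3 5432
```

With those three keys present, SkyDNS answers queries for `db.cluster.local` with the hosts in rotating order, giving free round-robin without touching any hosts file.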
SkyDNS looks like the way to go. Hosts files are simpler and more elegant, with less documentation needed... but they could also get jacked up if a user has already edited the hosts file, via a Dockerfile for example.
/me dons security hat This looks like it collides with #986. The general idea I get from most of our customers is that they do not want containers to be able to talk to each other. Heroku has the same answer, which is to say that communication does not occur internally and that if you want to link apps together, use DNS. I could see a proposal where we add a way for users to allow communication between apps via some internal network bridge... I'm just concerned about the security implications. /me removes security hat
There are cases where communication between containers is required, especially if the app needs to identify an individual instance of a service, as with Netflix Eureka.
That can go over the router. Apart from that, I'm all for internal DNS.
Why can't Deis populate a set of environment variables to expose the internal IP address and port number when the container starts? That way, if internal communication is required, the app can just use the internal private IP and port.
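A rough sketch of what that suggestion would look like from the app's side. The variable names `DEIS_INTERNAL_IP` and `DEIS_INTERNAL_PORT` are hypothetical, not an actual Deis feature, and the fallback values are illustrative only.

```shell
#!/bin/sh
# Hypothetical: if Deis injected DEIS_INTERNAL_IP / DEIS_INTERNAL_PORT at
# container start (assumed names, not real Deis env vars), an app could
# build a peer URL without going through the external router.
: "${DEIS_INTERNAL_IP:=10.0.0.5}"    # illustrative fallback when unset
: "${DEIS_INTERNAL_PORT:=8080}"      # illustrative fallback when unset
peer_url="http://${DEIS_INTERNAL_IP}:${DEIS_INTERNAL_PORT}/status"
echo "$peer_url"
```

The downside, as later comments note, is that a static IP baked in at start time goes stale whenever the container is rescheduled, which is exactly the problem the DNS approaches avoid.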
What I am looking for here is a way to communicate with an "instance" of an application. Say we have an application and we scale it up to 3 instances. If we now want to send a request to instance 2 to query its status, I don't think there is any way we can do it in the current version of Deis.
I think that's an abstraction that Deis arguably shouldn't need to know or care about. Is that something you can handle in the application logic itself, keeping track of its peers?
Related: #3812
I agree with @carmstrong that making individual application instances directly addressable is territory we ought not head into, but I think that saying "communication does not occur internally" might be a little heavy-handed. If I am understanding the OP's objection correctly, it's really about the unnecessary (albeit brief) loop out to the open internet and back when two apps in the same cluster need to communicate. It is a concern I have had in the past as well. Donning my own security hat: if sensitive traffic is routed over an open network unnecessarily, it makes me uneasy.

There's a middle ground. If "communication does not occur internally" means inter-app communication ought not occur within a Deis cluster itself, I can buy that, both on the grounds of security and a firm belief that we'd be doing ourselves a disservice by cutting the load balancer out of the equation. But IMO there is definitely a use case for "internal communication" if "internal" means strictly within the network or VPC in which the Deis cluster resides. All such traffic should still be load-balanced, obviously.

I have some ideas kicking around (mostly involving a second, internal load balancer) but no complete implementation in mind yet. I'd like to add my voice in saying the OP's concern is legitimate.
The problem I am facing is how to monitor an application instance. Say I have an app with 10 instances, and the app acts funky and I need to know which instance(s) cause it. Deis provides functions to monitor a container (CPU, memory usage, etc.), but how can I monitor at the application level?
If you want to perform monitoring at the application level, there are agents you can install in your container, such as Datadog or New Relic. That way you can filter the data you want specifically for your app. I'm not sure this is something we can easily solve in Deis, as every app will want to monitor different things :)
I could be wrong, but I think those agents all depend on the application to PUSH information to them. Are there any pull-based solutions? Your statement is exactly correct: because every app will want to monitor different things, wouldn't it be better for Deis to open the communication channel and let the app decide what to monitor?
@jchauncey is working on defining a metrics API and collection agent for Deis over in #3699. We'd love to have your feedback there!
statsd and collectd are the only two I'm aware of, though I'm sure there are others. :)

Don't think so. If a monitoring program cannot talk to an individual agent, how can it "poll" information from it?
I agree with @seeksong that Deis should allow middleware services to communicate with each other. We use Spring Cloud, and Eureka is key for middleware service registry. Decomposing the complexity of the backend behind externally accessible web interfaces is pretty much the way everybody is going... and forcing a suite of backend services out to the 'outside' world is not going to help your adoption. Imagine all the extra work there to lock down security. This is the reason my company eliminated Cloud Foundry, Heroku, and other PaaS options: complex service-to-service communication hacks, everything exposed to the world, and forced server-side load balancing. Microservices for backend and middleware are the way everybody is going. Odd to me that you all are not considering this.
We are considering this. It is actually rolled up into #4173, where apps deployed in the same team (synonymous with orgs in Cloud Foundry land) will be discoverable and can communicate with each other. This is one of the main drivers for why we are moving over to Kubernetes. See #4170 for additional context on how this can be achieved in v2.
As seen with https://deis.com/blog/2016/private-applications-on-deis-workflow/ this is now possible in v2. Closing! |
If I have 2 apps on the same cluster that need to communicate as backing services, they should, in my opinion, communicate over the local network instead of going out to the internet and being routed back locally.