This repository has been archived by the owner on Nov 30, 2021. It is now read-only.

Apps should communicate over local network #2715

Closed
mattapperson opened this issue Dec 10, 2014 · 23 comments

Comments

@mattapperson
Contributor

If I have 2 apps on the same cluster that need to communicate as backing services, they should, in my opinion, communicate over the local network rather than going out to the internet and being routed back locally.

@PierreKircher
Contributor

there are 2 ways to solve that .. either a hosts file change with the full domain, pointing at any router
or ..
skydns .. listens on 0.0.0.0:53 and falls back to the google nameservers ..

the way to make that work via skydns is to change the way the unit file of the service is written (fleet scheduler.py in the controller) .. and extend it with a pre-start condition or post-start ..

write an etcd entry with the local ip, like /skydns/local/cluster/services/MYSERVICE '{"host": ip}'

that can be updated every time the service moves .. instead of writing a new hosts file .. it also helps if you have other unit-file-based services on the system .. you can pass in a static env var .. and it removes the need for confd for everything except the router
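
For illustration, the pre-start/post-stop hooks in such a fleet unit could look roughly like the fragment below (the service name, key path, and use of the CoreOS environment file are assumptions for the sketch, not Deis defaults):

[Service]
EnvironmentFile=/etc/environment
# register this host's private IP under a SkyDNS key before the service starts
ExecStartPre=/usr/bin/etcdctl set /skydns/local/cluster/services/myservice '{"host":"${COREOS_PRIVATE_IPV4}"}'
# remove the record again when the service stops or is rescheduled
ExecStopPost=/usr/bin/etcdctl rm /skydns/local/cluster/services/myservice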

@PierreKircher
Contributor

this technique can be used for internal round-robin as well ..

@mattapperson
Contributor Author

whatever we do I think we need to round-robin it... so that a request to, say, myoneapp.service.cluster.local, where there is more than one instance of myoneapp, is shared between the instances

@mattapperson
Contributor Author

I'm 100% fine with the url being .service.cluster.local vs .webdomain.com

@PierreKircher
Contributor

yup, skydns handles that with /x1, /x2 ... /xN .. those will automatically be round-robined (RR)

curl -XPUT http://127.0.0.1:4001/v2/keys/skydns/local/cluster/db/x1 -d value='{"Host": "127.0.0.1"}'
curl -XPUT http://127.0.0.1:4001/v2/keys/skydns/local/cluster/db/x2 -d value='{"Host": "127.0.0.2"}'
curl -XPUT http://127.0.0.1:4001/v2/keys/skydns/local/cluster/db/x3 -d value='{"Host": "127.0.0.3"}'

and you call it with db.cluster.local
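
To sanity-check the round-robin records (assuming skydns is answering on the local resolver, port 53), a quick dig should list all three hosts:

dig @127.0.0.1 db.cluster.local +short
# expect 127.0.0.1, 127.0.0.2 and 127.0.0.3 back, in rotating order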

@mattapperson
Contributor Author

skydns looks like the way to go. hosts files are simpler/more elegant/need fewer docs... but a hosts file could also get jacked up if a user has already edited it via a Dockerfile, for example

@bacongobbler
Member

/me dons security hat

This looks like it collides with #986. The general idea I get from most of our customers is that they do not want containers to be able to talk to each other. Heroku has the same answer, which is to say that communication does not occur internally and that if you want to link apps together, use DNS. I could see a proposal where we add a way for users to allow communication between apps via some internal network bridge... I'm just concerned about the security implications.

/me removes security hat

@seeksong

seeksong commented Jun 2, 2015

There are cases where communication between containers is required, especially if the app needs to identify an individual instance of a service, as with Netflix Eureka.

@PierreKircher
Contributor

that can go over the router .. apart from that .. I'm all for internal DNS

@seeksong

seeksong commented Jun 2, 2015

Why can't Deis populate a set of environment variables to expose the internal IP address and port number when the container starts? That way, if internal communication is required, the app can just use the internal private IP and port.
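
As a sketch of that proposal only (Deis does not set these variables today; the names and values are hypothetical):

# hypothetical variables injected by the platform at container start
DEIS_INTERNAL_IP=10.21.1.105    # this container's private address
DEIS_INTERNAL_PORT=5000         # the port the app process is bound to
# a peer on the same private network could then reach the instance directly, e.g.
curl "http://$DEIS_INTERNAL_IP:$DEIS_INTERNAL_PORT/status"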

@seeksong

seeksong commented Jun 2, 2015

what I am looking for here is a way to communicate with an "instance" of an application. Say we have an application and we scale it up to 3 instances. Now if we want to send a request to instance 2 to query its status, I don't think there is any way we can do it in the current version of Deis.

@carmstrong
Contributor

what I am looking for here is a way to communicate with an "instance" of an application.

I think that's an abstraction that Deis arguably shouldn't need to know or care about. Is that something you can handle in the application logic itself, keeping track of its peers?

@bacongobbler
Member

related: #3812

@krancour
Contributor

The general idea I get from most of our customers is that they do not want containers to be able to talk to each other. Heroku has the same answer, which is to say that communication does not occur internally and that if you want to link apps together, use DNS.

And...

what I am looking for here is a way to communicate with an "instance" of an application.

I agree with @carmstrong that making individual application instances directly addressable is territory we ought not head into, but I think that saying "communication does not occur internally" might be a little heavy-handed.

If I am understanding the OP's objections correctly, it's really about the unnecessary (albeit brief) loop out to the open internet and back when two apps in the same cluster need to communicate. It is a concern I have had in the past as well. Donning my own security hat, if sensitive traffic is routed over an open network unnecessarily, it makes me uneasy.

There's a middle ground...

I guess if "communication does not occur internally" means inter-app communication ought not occur within a Deis cluster itself, I can buy that, both on the grounds of security and a firm belief that we'd be doing ourselves a disservice by cutting the load balancer out of the equation. But IMO there is definitely a use case here for "internal communication" if the definition of "internal" means strictly within the network or VPC in which the Deis cluster resides. All such traffic should still be load-balanced, obviously.

I have some ideas kicking around (mostly involving a second, internal load balancer) but no complete implementation in mind yet; still, I'd like to add my voice in saying the OP's concern is legitimate.

@seeksong

The problem I am facing is how to monitor an application instance. Say I have an app and I put up 10 instances, and the app acts funky and I need to know which instance(s) cause it. Deis provides functions to monitor a container (CPU, memory usage, etc.), but how can I monitor at the application level?

@bacongobbler
Member

If you want to perform monitoring at the application level, there are agents you can install in your container, such as Datadog or New Relic. That way you can filter the data you want specifically for your app. I'm not sure this is something we can easily solve in Deis, as every app will want to monitor different things :)

@seeksong

I could be wrong, but I think those agents all depend on the application to PUSH information to them. Are there any pulling solutions? Your statement is exactly correct - because every app will want to monitor different things, would it be better for Deis to open the communication channel and let the app decide what to monitor?

@carmstrong
Contributor

would it be better for Deis to open the communication channel and let the app to decide what to monitor?

@jchauncey is working on defining a metrics API and collection agent for Deis over in #3699. We'd love to have your feedback there!

@bacongobbler
Member

Are there any pulling solutions?

statsd and collectd are the only two I'm aware of, though I'm sure there are others. :)

@seeksong

statsd and collectd are the only two I'm aware of, though I'm sure there are others. :)

I don't think so. If a monitoring program cannot talk to individual agents, how can it "poll" information from them?
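
For context, statsd's model is indeed push: the client fires a one-line UDP datagram at the agent, for example a counter sent to a local agent on statsd's default port 8125:

echo "myapp.requests:1|c" | nc -u -w0 127.0.0.1 8125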

@eroubal

eroubal commented Sep 29, 2015

I agree with @seeksong that Deis should allow middleware services to communicate with each other. We use Spring Cloud, and Eureka is key for the middleware service registry. Decomposing the complexity of the backend behind externally accessible web interfaces is pretty much the way everybody is going... and forcing a suite of backend services out to the 'outside' world is not going to help your adoption. Imagine all the extra work there to lock down security. This is the reason my company eliminated Cloud Foundry, Heroku and other PaaSes... complex service-to-service communication hacks, everything exposed to the world, and forced server-side load balancing.

Microservices for the backend and middleware are the way everybody is going. It's odd to me that you all are not considering this.

@bacongobbler
Member

We are considering this. This is actually globbed up into #4173, where apps deployed in the same team (synonymous with orgs in Cloud Foundry land) will be discoverable and can communicate with each other. This is one of the main drivers for why we are moving over to kubernetes.

See #4170 for additional context on how this can be achieved in v2.

@bacongobbler
Member

As seen with https://deis.com/blog/2016/private-applications-on-deis-workflow/, this is now possible in v2. Closing!
