
Auto figure out includeIPRanges by default #1440

Closed
kyessenov opened this issue Nov 3, 2017 · 8 comments

@kyessenov (Contributor)

@ldemailly commented on Thu Jun 29 2017

By default we should only intercept cluster traffic (or even istio managed services traffic)

We shouldn't need complex manual steps like
https://istio.io/docs/tasks/egress.html#calling-external-services-directly
for apps to continue to work with external services

A clean (k8s api driven) solution depends on kubernetes/kubernetes#25533 but we should be able to figure out the value on our own (in istioctl or pilot from observing the registered services)

A brute-force approach could be to get the list of managed IPs through kubectl get svc and/or guess the netmask from said list, as sketched below.
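
A minimal sketch of that brute-force enumeration, assuming a standard kubectl context (the jsonpath query is stock kubectl; the /16 guess at the end is illustrative, not a robust heuristic):

```sh
# Enumerate every ClusterIP assigned across all namespaces; headless
# services report "None" and are filtered out.
kubectl get svc --all-namespaces \
  -o jsonpath='{range .items[*]}{.spec.clusterIP}{"\n"}{end}' \
  | grep -v '^None$' | sort -u

# If all observed IPs fall inside, say, 10.0.0.0/16, that range could
# then be fed to injection:
#   istioctl kube-inject --includeIPRanges=10.0.0.0/16 -f app.yaml
```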


@cmluciano commented on Wed Aug 02 2017

In upstream kube, it appears the goal is to read from a config-map for kube-proxy. This likely won't land in kube 1.8 since it is under development for kubelet.


@ldemailly commented on Mon Aug 14 2017

Interesting history and lots of good pointers and discussion in istio/old_pilot_repo#515.


@ayj commented on Fri Aug 18 2017

envoyproxy/envoy#1314 could also address this issue. @cmluciano, is this something you'd be willing to look into vs. waiting on upstream k8s to add the necessary information to a ConfigMap?


@rshriram commented on Fri Aug 18 2017

I think @VadimEisenberg is working on this. He already added the cluster type but hasn't plugged it in fully.

~shriram


@ayj commented on Fri Aug 18 2017

Great. Changing the milestone to 0.3, since a clean solution for auto-detecting includeIPRanges has an external dependency on kubernetes/kubernetes#25533, which may not be ready for the 0.2 release.


@ldemailly commented on Fri Aug 18 2017

I'm not convinced we can ship 0.2 without some improvement in that area but let's see


@vadimeisenbergibm commented on Fri Aug 18 2017

> By default we should only intercept cluster traffic (or even istio managed services traffic)

@ldemailly I think that in order to make the mesh more secure, Istio should control as much traffic as possible. One of the solutions we are discussing for the Egress 0.2 design is to allow external traffic only to port 443, for HTTPS, and to route all other external traffic through sidecar proxies/dedicated egress proxies, controlled by egress rules and Mixer security policies.

So I would not let all the external traffic go outside by default. I would even consider removing the --includeIPRanges option and using --excludeIPRanges and --excludePorts instead. Suppose we have the following in v0.2 (sketched concretely below):

  1. --excludeIPRanges and --excludePorts implemented (handles HTTPS and some corner cases)
  2. all other traffic, e.g. to bluemix services, defined by an egress rule with domain *.bluemix.net and routed through a sidecar proxy (handles HTTP and TCP)

Will that solution be good enough for v0.2?
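
To make the proposal concrete, a hypothetical sketch of the inverted model (neither --excludeIPRanges nor --excludePorts existed at this point; the flag names, the link-local range, and app.yaml are all illustrative):

```sh
# Hypothetical: intercept all outbound traffic by default, carving out
# only what must bypass the sidecar (e.g. HTTPS on port 443).
istioctl kube-inject \
  --excludeIPRanges=169.254.0.0/16 \
  --excludePorts=443 \
  -f app.yaml | kubectl apply -f -
```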


@ldemailly commented on Fri Aug 18 2017

I don't think we should break existing applications, no. I think both models have value.


@vadimeisenbergibm commented on Sat Aug 19 2017

But how will existing applications be broken if we support --excludeIPRanges and --excludePorts, plus egress rules with wildcard domains?

vbatts pushed a commit to vbatts/istio that referenced this issue Nov 8, 2017
rshriram added the pilot label Nov 9, 2017
@axelbodo commented Apr 3, 2018

I welcome the effort to solve the mTLS-related issues.
When we evaluated service mesh implementations to use on top of k8s, we liked Istio without mTLS because of its transparency and low performance footprint. Although our infrastructure auth/authz needs are completely solvable with k8s-native machinery alone (secrets, ABAC/RBAC, consumed in our microservices via AOP), it was also an option to take a look at the Istio mTLS feature. However, when we started to use it, we faced a lot of issues:
- external communication stopped working (e.g. apt-get)
- liveness/readiness probes stopped working
- our custom services, which communicate with both system services and regular services/pods, stopped working.

@rshriram reacting to istio/old_pilot_repo#515 and the first point above, I think apt-get communication is relevant, as there can be services where some system resources need to be updated regularly even in prod (e.g. the timezone db for one of our services; there can be several other use cases where services depend on up-to-date world information provided by debian/ubuntu repos). We can solve this issue with the includeIPRanges option, and it looks like a good solution.

Regarding the second point: it killed all communication between services, so we temporarily turned off the probes. I think separating probe ports from the regular ones is only a quick fix and cannot be the final solution. A mesh proxy (hopefully) should be as transparent as possible, without modifying existing code.

Regarding the third point: our custom metric scrapers communicate with both system and regular services; furthermore, for regular services they communicate directly with the pods participating in the given service. Digging into the Istio GitHub issues, the suggestion is that a service should communicate via the service IP, but there necessarily are services (like metric scrapers) which must communicate directly with pods via pod IPs. Of course, there exists a workaround isolating the metrics endpoint from the regular ones, as in the case of the second point, but this also violates the "being transparent" principle. Another thing: when we try to communicate with pods directly, we get a confusing 404 status code from the proxy itself; it would be better to use some other code, or just reset the connection, because as it is the devs think there is a problem with the recipient service itself.

I'm relatively new to Istio, but I see now that these are difficult problems to solve. I guess the Envoy proxies communicate directly with the peer pod's proxy (judging from the LB comparison with the native k8s random iptables one), so Envoy actually knows the service's pods as well; I expect it wouldn't be difficult to support pod-to-pod communication too, though I cannot guess the performance impact.

nmittler assigned costinm and unassigned nmittler Apr 3, 2018
@rshriram (Member) commented Apr 5, 2018

There are several things being discussed here. I suggest you create a wiki page, "Istio caveats", and dump this nice summary there.

@axelbodo commented May 7, 2018

Sorry for the late reaction. Can you send me a URL for the wiki guidelines, so I can make the page in your conventional way (if I have time)?

@costinm (Contributor) commented May 7, 2018 via email

@philicious commented

I ran into the same issues as @axelbodo when introducing Istio:

- external communication stopped working (e.g. apt-get)
- liveness/readiness probes stopped working
- our custom services, which communicate with both system services and regular services/pods, stopped working
- connections to systems (DBs, ...) outside the cluster didn't work anymore

In the end it was solvable by editing the sidecar-injection template ConfigMap, adding the --includeIPRanges (-i) flag to the initContainers args, and using the IP ranges of the k8s cluster (see the sketch below).
However, this didn't become clear from following the installation documentation.
Some bold warning text would have been great, like
"... Istio will block all outgoing connections unless ..."

@cmluciano (Member) commented

@philicious I agree that we should have that big warning label somewhere. Do you have suggestions on where we should highlight this in the documentation?

@philicious commented

@cmluciano I would add it to https://istio.io/docs/setup/kubernetes/quick-start.html#installation-steps as bullet point 7.
The warning should link to https://istio.io/docs/tasks/traffic-management/egress.html but also mention, as a tl;dr, that one can solve it in two ways:

  • for HTTP(S) connections it's easiest to write an EgressRule (btw. the egress page lacks info about wildcard support); see the sketch below
  • for ANY connection (TCP/UDP) you have to
    • add --includeIPRanges to the kube-inject command for manual injection
    • edit and apply install/kubernetes/istio-sidecar-injector-configmap-release.yaml and add the -i flag there when using automatic sidecar injection

The egress page doesn't mention the ConfigMap at all and only has instructions for manual injection.
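
For the HTTP(S) path, a minimal EgressRule sketch in the config.istio.io/v1alpha2 API of that era (the rule name is made up; *.bluemix.net is the wildcard example from earlier in this thread):

```sh
# Allow HTTPS egress to any bluemix.net subdomain through the sidecar.
kubectl apply -f - <<EOF
apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
  name: bluemix-egress
spec:
  destination:
    service: "*.bluemix.net"
  ports:
  - port: 443
    protocol: https
EOF
```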

kyessenov pushed a commit to kyessenov/istio that referenced this issue Aug 13, 2018
@ijsnellf (Contributor) commented Oct 3, 2018

This is no longer valid post 1.x. Closing...

ijsnellf closed this as completed Oct 3, 2018
rlenglet removed this from the Nebulous Future milestone Jul 9, 2019
rlenglet added this to the 1.1 milestone Jul 9, 2019