Auto figure out includeIPRanges by default #1440
…o#1440) Automatic merge from submit-queue fix for istio-initializer needs to ignore hostNetwork pod specs **What this PR does / why we need it**: PR for istio/istio.io#655, see istio#655 for details **Release note**: ```release-note When a pod uses hostNetwork: true, sidecar injection is deliberately disabled for that pod, because we don't want the Envoy sidecar to change the network configuration at the host level. ```
I welcome the effort to solve the mTLS-related issues. @rshriram, reacting to istio/old_pilot_repo#515 and the first point above: I think apt-get communication is relevant, as there can be services where some system resources need to be updated regularly even in production (e.g. the timezone DB for one of our services), and there are several other use cases where services depend on up-to-date world information provided by the Debian/Ubuntu repos. We can solve this issue with the includeIPRanges option, and it looks like a good solution.

Regarding the second point: it killed all communication between services, and we temporarily turned off probes. I think separating probe ports from the regular ones is only a quick fix and cannot be the final solution. A mesh proxy should (hopefully) be as transparent as possible, without modifying existing code.

Regarding the third point: our custom metric scrapers communicate with both system and regular services; furthermore, for regular services they communicate directly with the pods participating in the given service. Digging into the Istio GitHub issues, the suggestion is that a service should communicate via the service IP, but necessarily there are services (like metric scrapers) which must communicate directly with pods via pod IPs. Of course, there is a workaround isolating the metrics endpoint from the regular ones, as with the second point, but this also violates the "being transparent" principle.

Another thing: when we try to communicate with pods directly, we get a confusing 404 status code from the proxy itself. It would be better to use some other code, or just to reset the connection, because as it is, the devs think there is a problem with the recipient service itself. I'm relatively new to Istio, but I see now that these are difficult problems to solve.
I guess the Envoy proxies communicate directly with the peer pod's proxy (judging from the LB comparison with the native k8s random iptables one), so Envoy actually knows the service pods as well. So I expect it wouldn't be difficult to support pod-to-pod communication too, although I can't guess the performance impact.
There are several things being discussed here. I suggest you create a wiki page, "Istio caveats", and dump this nice summary there.
Sorry for the late reaction. Can you send me a URL for the wiki guidelines, so that I can make the page in your conventional way (if I have time)?
I would add that in 0.8 Istio only captures incoming traffic for the ports declared as containerPort or via an annotation. This is needed to fix a few issues (security and direct access to ports). In the future we may add back an option to intercept or block ports we can't handle. There is also an option to exclude specific ports from capture. Longer term, there are a few things we are working on: SNI sniffing and dynamic creation of listeners.
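As an illustration of the port-capture rule described above, here is a sketch of a pod spec: the declared `containerPort` entries are what 0.8-era Istio keys on, and the `traffic.sidecar.istio.io/includeInboundPorts` annotation is the override mechanism from later Istio releases. The pod name, image, and exact annotation behavior in 0.8 are assumptions, not authoritative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app            # hypothetical pod name
  annotations:
    # Override which inbound ports the sidecar captures. The annotation
    # name is taken from later Istio releases; treat it as an assumption
    # for the 0.8 behavior discussed above.
    traffic.sidecar.istio.io/includeInboundPorts: "8080,9090"
spec:
  containers:
  - name: app
    image: example/app:latest  # hypothetical image
    ports:
    - containerPort: 8080      # declared ports are captured by the sidecar
    - containerPort: 9090
```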
I faced the same culprits as @axelbodo when introducing Istio. In the end it was solvable by editing the sidecar injection template ConfigMap and adding the …
@philicious I agree that we should have that big warning label somewhere. Do you have suggestions on where we should highlight this in the documentation?
@cmluciano I would add it to https://istio.io/docs/setup/kubernetes/quick-start.html#installation-steps as bullet point 7.
The egress page doesn't mention the ConfigMap at all and only has instructions for manual injection.
Automatic merge from submit-queue. Update ENVOY SHA and fix compilation error **What this PR does / why we need it**: **Release note**: ```release-note None ```
This is no longer valid post 1.x. Closing...
@ldemailly commented on Thu Jun 29 2017
By default we should only intercept cluster traffic (or even only Istio-managed services' traffic). We shouldn't need complex manual steps like https://istio.io/docs/tasks/egress.html#calling-external-services-directly for apps to continue to work with external services.
A clean (k8s-API-driven) solution depends on kubernetes/kubernetes#25533, but we should be able to figure out the value on our own (in istioctl or Pilot, by observing the registered services).
A brute-force approach could be to get the list of managed IPs through `kubectl get svc` and/or guess the netmask from that list.
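The brute-force idea above can be sketched as follows: collect the service ClusterIPs (e.g. from `kubectl get svc`) and compute the smallest CIDR block that covers them all. This is a minimal illustration assuming IPv4 and a single contiguous service range; `guess_cidr` is a hypothetical helper, not an Istio or Kubernetes API:

```python
import ipaddress

def guess_cidr(cluster_ips):
    """Return the smallest single CIDR block covering every ClusterIP given."""
    ints = sorted(int(ipaddress.IPv4Address(ip)) for ip in cluster_ips)
    lo, hi = ints[0], ints[-1]
    # The number of high-order bits shared by the lowest and highest
    # address determines the prefix length of the covering network.
    prefix = 32 - (lo ^ hi).bit_length()
    shift = 32 - prefix
    base = (lo >> shift) << shift  # zero out the host bits
    return ipaddress.ip_network((base, prefix))
```

The resulting network could then be fed to the injector as the include range, e.g. `--includeIPRanges=10.96.0.0/22`. Note this over-approximates if the cluster uses several disjoint service CIDRs.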
@cmluciano commented on Wed Aug 02 2017
In upstream kube, it appears the goal is to read from a config-map for kube-proxy. This likely won't land in kube 1.8 since it is under development for kubelet.
@ldemailly commented on Mon Aug 14 2017
interesting history and lots of good pointers and discussion in istio/old_pilot_repo#515
@ayj commented on Fri Aug 18 2017
envoyproxy/envoy#1314 could also address this issue. @cmluciano, is this something you'd be willing to look into vs. waiting on upstream k8s to add the necessary information to a ConfigMap?
@rshriram commented on Fri Aug 18 2017
I think @VadimEisenberg is working on this. He already added the cluster type but hasn't plugged it in fully.
~shriram
@ayj commented on Fri Aug 18 2017
Great. Changing the milestone to 0.3 since a clean solution to auto-detected includeIPRange has an external dependency on kubernetes/kubernetes#25533, which may not be ready for the 0.2 release.
@ldemailly commented on Fri Aug 18 2017
I'm not convinced we can ship 0.2 without some improvement in that area but let's see
@vadimeisenbergibm commented on Fri Aug 18 2017
@ldemailly I think that in order to make the mesh more secure, Istio should control as much traffic as possible. One of the solutions we are discussing for the Egress design in 0.2 is to allow external traffic only to port 443, for HTTPS, and to have all other external traffic go thru sidecar proxies/dedicated egress proxies, controlled by egress rules and Mixer security policies.
So I would not let all the external traffic go outside by default. I would even consider removing the --includeIPRanges option and using --excludeIPRanges and --excludePorts instead. Suppose we have in v0.2 the following:
*.bluemix.net
and this traffic will go thru a sidecar proxy (handling HTTP and TCP). Will that solution be good enough for v0.2?
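For concreteness, a rule for the `*.bluemix.net` wildcard above might look like the sketch below. The `EgressRule` kind follows the 0.2-era egress design being discussed; the exact schema and field names here are an assumption, not authoritative:

```yaml
# Sketch of an egress rule for the *.bluemix.net example above.
# The EgressRule resource and its fields are assumed from the
# Istio 0.2-era egress design discussed in this thread.
apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
  name: bluemix-egress        # hypothetical name
spec:
  destination:
    service: "*.bluemix.net"  # wildcard domain from the discussion
  ports:
  - port: 443
    protocol: https
```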
@ldemailly commented on Fri Aug 18 2017
I don't think we should break existing applications, no. I think both models have value.
@vadimeisenbergibm commented on Sat Aug 19 2017
But how will the existing applications be broken if we support --excludeIPRanges and --excludePorts, and egress rules with wildcard domains?