Before I describe my issue, I want to briefly explain my use case:
We have 2 backends, foo and bar, in the namespace team.
foo is accessible via the public hostname foo-team.company.com
bar is accessible via the public hostname bar-team.company.com
Each backend has its own CI/CD pipeline that uses Istio traffic management to split traffic between stable and canary versions.
So, for example, for the foo backend, it looks like this:
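A minimal sketch of what such per-backend resources might look like (the actual manifests were not included here; resource names, subset labels, certificate paths, and the 90/10 weights are assumptions for illustration):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: foo-gateway          # hypothetical name
  namespace: team
spec:
  selector:
    istio: ingressgateway    # the team's ingress gateway deployment
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      # file-mounted certs, as used before SDS-based ingress certificates
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - foo-team.company.com
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: foo
  namespace: team
spec:
  hosts:
  - foo-team.company.com
  gateways:
  - foo-gateway
  http:
  - route:
    # the CI/CD pipeline adjusts these weights during a canary rollout
    - destination:
        host: foo.team.svc.cluster.local
        subset: stable
      weight: 90
    - destination:
        host: foo.team.svc.cluster.local
        subset: canary
      weight: 10
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: foo
  namespace: team
spec:
  host: foo.team.svc.cluster.local
  subsets:
  - name: stable
    labels:
      version: stable
  - name: canary
    labels:
      version: canary
```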
Imagine the resources above are managed by the team (not by the central team), which can change the canary and stable weights.
Now, the team wants to consolidate the backends under one public hostname, routed by path. Example:
foo -> accessible from team.company.com/foo
bar -> accessible from team.company.com/bar
Of course, we could use a single Istio Gateway, VirtualService, and DestinationRule that consolidates the rules for these 2 backends, and put that into one centralized repository.
However, centralizing it makes it harder for each backend to run its own automated CI/CD pipeline with canary traffic splitting.
So, to overcome this problem, I introduced another Gateway and VirtualService resource, like below:
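A minimal sketch of what the centralized resources might look like (the actual manifests were not included here; resource names, the team gateway's service address, certificate paths, and the target port are assumptions):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: team-consolidated-gateway   # hypothetical name
  namespace: central-team
spec:
  selector:
    istio: central-ingressgateway   # dedicated central ingress deployment
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - team.company.com
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: team-consolidated
  namespace: central-team
spec:
  hosts:
  - team.company.com
  gateways:
  - team-consolidated-gateway
  http:
  - match:
    - uri:
        prefix: /foo
    rewrite:
      uri: /
      authority: foo-team.company.com   # so the team's gateway can match the host
    route:
    - destination:
        # the team's ingress gateway service -- name and port are assumed
        host: istio-ingressgateway.istio-system.svc.cluster.local
        port:
          number: 80
  - match:
    - uri:
        prefix: /bar
    rewrite:
      uri: /
      authority: bar-team.company.com
    route:
    - destination:
        host: istio-ingressgateway.istio-system.svc.cluster.local
        port:
          number: 80
```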
Those resources are managed in a centralized style (located in namespace: central-team), independently of how each backend manages its own traffic rules. Note that the centralized resources use istio: central-ingressgateway as the selector, while the backends' resources use istio: ingressgateway, so they are handled by different ingress gateway controller pods. The centralized resources also rewrite the authority and URI so that requests can be routed by the team's Istio ingress gateway. The difference in traffic flow looks like this:
End user accesses foo-team.company.com -> team's Istio Ingress Gateway pods (istio: ingressgateway) -> foo.team.svc.cluster (it has istio-proxy on it).
End user accesses team.company.com/foo -> central Istio Ingress Gateway pods (istio: central-ingressgateway) -> team's Istio Ingress Gateway pods (istio: ingressgateway) -> foo.team.svc.cluster (it has istio-proxy on it).
The approach above works fine: team.company.com/foo is successfully routed to the foo backend, over both HTTPS and plain-text HTTP.
However, as you can see, there are 2 Istio Ingress Gateway deployments in use: one with selector istio: central-ingressgateway and one with istio: ingressgateway. Out of curiosity, I tested the central resources with the same selector, istio: ingressgateway, but then HTTPS access failed while plain-text HTTP kept working:
curl https://team.company.com/foo
curl: (35) LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to team.company.com/foo:443
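For reference, that experiment effectively leaves two Gateway resources claiming port 443 on the same istio: ingressgateway workload, each with its own file-mounted TLS server block. A minimal sketch of the combination (names are assumptions):

```yaml
# Two Gateways, one workload: both select istio: ingressgateway and both
# open an HTTPS server on port 443 with file-based certs. This overlap is
# the configuration that coincided with the SSL_ERROR_SYSCALL above.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: foo-gateway            # the team's own gateway (hypothetical name)
  namespace: team
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - foo-team.company.com
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: team-consolidated-gateway   # the central gateway (hypothetical name)
  namespace: central-team
spec:
  selector:
    istio: ingressgateway           # same selector -- no longer central-ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - team.company.com
```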
I'm seeing the same behavior with multiple Gateway definitions, each with their own TLS settings. So far the only solution has been to consolidate to one Gateway.
My sense is that this is related to not using SDS for certs, and might be addressed in the upcoming 1.1 release.
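With SDS-based ingress certificates (available from Istio 1.1), the tls block can reference a Kubernetes secret by credentialName instead of relying on file-mounted certs. A sketch under that assumption (the secret name is hypothetical, and the secret must live where the gateway workload can read it):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: team-consolidated-gateway
  namespace: central-team
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: company-wildcard-cert   # hypothetical secret holding the *.company.com cert
    hosts:
    - team.company.com
```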
This issue has been automatically marked as stale because it has not had activity in the last 90 days. It will be closed in the next 30 days unless it is tagged "help wanted" or other activity occurs. Thank you for your contributions.
This issue has been automatically closed because it has not had activity in the last month and a half. If this issue is still valid, please ping a maintainer and ask them to label it as "help wanted". Thank you for your contributions.
Describe the bug
VirtualService routing to the same gateway causes HTTPS failure
Expected behavior
It should work fine, terminating SSL at the first-layer Gateway.
Steps to reproduce the bug
See the issue description.
Version
Installation
Installed using Helm
Environment
GKE
Issue Description
Hi Guys,
One detail worth adding to the description above: accessing https://foo-team.company.com directly is still okay. Both hostnames use the same *.company.com SSL certificate.
Do you guys have any idea why this behavior occurs?