Bug Description
We are currently testing a dual-stack solution for Istio, loosely based on #29076, i.e. by using :: ipv4_compat listeners serving both IP families for any/all wildcards (0.0.0.0 or ::). For testing we rebuilt pilot-agent with (only) the #35310 changes to achieve IPv6 traffic interception and plugged it into the official istio/proxyv2:1.11.2 image.
In the 1.10 release this had already been working quite well. Now, resuming the testing on 1.11, one direct HTTP egress case has started failing whenever explicit protocol selection is used (i.e. name=http-port or appProtocol=http in the k8s service). Requests that previously routed to the correct endpoint now end up in BlackHoleCluster (with the REGISTRY_ONLY outbound traffic policy; otherwise in Passthrough).
What is more, an ip6tables-enabled 1.10-based client running under the 1.11 control plane still manages to connect to the external HTTP service.
Removing the port name/appProtocol fields from the k8s service immediately restores connectivity to the external service on the 1.11-based client.
It appears that enabling explicit protocol selection removes the HttpProtocolOptions setting from the outbound service clusters and replaces the separate <SVC_IPv4>_80 / <SVC_IPv6>_80 listeners with a single wildcard 0.0.0.0_80 listener. This behavior does not seem to be new, however, and had previously worked once the listener was modified to :: ipv4_compat, as described above.
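For reference, "explicit protocol selection" in the failing case means a k8s Service of roughly this shape (names are illustrative, not from our actual setup):

```yaml
# Illustrative Service using explicit protocol selection.
# Either the "http-" port-name prefix or the appProtocol field
# is enough to trigger the behavior described above.
apiVersion: v1
kind: Service
metadata:
  name: external-http    # hypothetical name
spec:
  ports:
  - name: http-port      # protocol selection via port name...
    port: 80
    targetPort: 80
    appProtocol: http    # ...or via appProtocol
```

Removing both markers restores connectivity, as noted above.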
Version
$ istioctl version
client version: 1.11.2
control plane version: 1.11.2
data plane version: 1.10.4 (1 proxies), 1.11.2 (1 proxies)
$ kubectl version --short
Client Version: v1.18.2
Server Version: v1.21.2
Additional Information
DS cluster
Set up using kind and the following config file:
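The config file itself is not reproduced in this excerpt; a minimal sketch of the relevant kind setting, assuming the cluster was created with kind's built-in dual-stack support:

```yaml
# Minimal kind cluster config enabling dual-stack networking
# (illustrative reconstruction, not the attached file).
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  ipFamily: dual
```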
The behavior is a result of the new Envoy runtime guard envoy.reloadable_features.listener_wildcard_match_ip_family, which appears to filter candidate listeners by the IP family of the request. But :: listeners using the ipv4_compat socket option appear to be discarded when processing IPv4 requests, despite the fact that they are perfectly capable of handling them...
I don't know if there is any interest from the Istio side in using this information, e.g. to disable the runtime guard? Probably not while there is no dual-stack support, and maybe not even when/if that should come!?
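For anyone wanting to experiment, the guard is an ordinary Envoy runtime flag, so it can be flipped off via a static runtime layer in the bootstrap; a sketch of the relevant fragment (how to inject this into istio-proxy's generated bootstrap is deployment-specific and not covered here):

```yaml
# Envoy bootstrap fragment (sketch): disable the runtime guard
# through a static runtime layer.
layered_runtime:
  layers:
  - name: static_layer_0
    static_layer:
      envoy.reloadable_features.listener_wildcard_match_ip_family: false
```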
Istio
Deployed using helm:
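The helm values are not reproduced here; for context, the REGISTRY_ONLY mode referenced above corresponds to this mesh-config setting (a sketch, not our exact values):

```yaml
# meshConfig fragment (sketch): routes to unknown destinations
# hit BlackHoleCluster instead of being passed through.
meshConfig:
  outboundTrafficPolicy:
    mode: REGISTRY_ONLY
```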
Test resources
EnvoyFilter replacing the outbound 15001 listener (minimal solution; we have also tried replacing ALL wildcard listeners, with the same results):
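The minimal filter is essentially a merge patch on the virtual outbound listener; a sketch of its shape (metadata and field values are assumptions where not stated above, not the exact filter from our setup):

```yaml
# Sketch: rebind the virtualOutbound (15001) listener to :: with
# ipv4_compat so one socket serves both IP families.
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: outbound-dual-stack    # hypothetical name
  namespace: istio-system
spec:
  configPatches:
  - applyTo: LISTENER
    match:
      listener:
        portNumber: 15001
        name: virtualOutbound
    patch:
      operation: MERGE
      value:
        address:
          socket_address:
            address: "::"
            port_value: 15001
            ipv4_compat: true
```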
Service, separately exposed via IPv4 and IPv6:
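"Separately exposed" maps onto the k8s dual-stack Service API (beta in v1.21); a sketch with hypothetical names, since the actual manifests are not reproduced here:

```yaml
# Sketch: two single-stack Services for the same workload, one per family.
apiVersion: v1
kind: Service
metadata:
  name: myservice-v4    # hypothetical name
spec:
  ipFamilyPolicy: SingleStack
  ipFamilies: [IPv4]
  selector:
    app: myservice
  ports:
  - port: 80            # add name: http-port / appProtocol: http
                        # to reproduce the failure described above
---
apiVersion: v1
kind: Service
metadata:
  name: myservice-v6    # hypothetical name
spec:
  ipFamilyPolicy: SingleStack
  ipFamilies: [IPv6]
  selector:
    app: myservice
  ports:
  - port: 80
```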
ip6tables-enabled clients (1.10 and 1.11):
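The two clients differ only in the proxy image; a sketch using the sidecar.istio.io/proxyImage annotation, with an assumed image reference for our custom rebuild:

```yaml
# Sketch: client pod pinned to a custom ip6tables-enabled proxy build
# (rebuilt proxyv2 with the #35310 pilot-agent changes; the image
# reference below is an assumption, not from the issue).
apiVersion: v1
kind: Pod
metadata:
  name: client-1104    # hypothetical name
  annotations:
    sidecar.istio.io/proxyImage: example.registry/proxyv2-ds:1.10.4
spec:
  containers:
  - name: client
    image: curlimages/curl
    command: ["sleep", "infinity"]
```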
Reproduction
Proxy config dumps
named-port-config-1112.txt
unnamed-port-config-1104.txt
unnamed-port-config-1112.txt
named-port-config-1104.txt