Skip to content
New issue

Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.

By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.

Already on GitHub? Sign in to your account

the Pod in Mesh to VM : TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER #35870

Closed
tanjunchen opened this issue Nov 3, 2021 · 40 comments
Labels
area/networking area/user experience feature/Virtual-machine issues related with VM support lifecycle/automatically-closed Indicates a PR or issue that has been closed automatically. lifecycle/stale Indicates a PR or issue hasn't been manipulated by an Istio team member for a while

Comments

@tanjunchen
Copy link
Member

tanjunchen commented Nov 3, 2021

Bug Description

Mesh -> VM:

the log of bash:
bash-5.1# curl helloworld.vm-test.svc.cluster.local:8500/hello
upstream connect error or disconnect/reset before headers. reset reason: connection failure, transport failure reason: TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBERbash-5.1#

1 size:1.1kB resource:ROOTCA
2021-11-03T13:13:26.229375Z	info	cache	returned workload trust anchor from cache	ttl=23h59m59.770628192s
2021-11-03T13:13:26.229406Z	info	ads	SDS: PUSH for node:network-multitool-5cb859cc-5d9nb.vm-test resources:1 size:1.1kB resource:ROOTCA
2021-11-03T13:13:27.119064Z	info	Initialization took 1.227850313s
2021-11-03T13:13:27.119104Z	info	Envoy proxy is ready
[2021-11-03T13:14:44.157Z] "GET /hello HTTP/1.1" 503 UF,URX upstream_reset_before_response_started{connection_failure,TLS_error:_268435703:SSL_routines:OPENSSL_internal:WRONG_VERSION_NUMBER} - "TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER" 0 190 39 - "-" "curl/7.79.1" "b257df75-408b-9715-a253-b4c62e9ab0a4" "helloworld.vm-test.svc.cluster.local:8500" "192.168.0.6:8500" outbound|8500||helloworld.vm-test.svc.cluster.local - 10.254.123.40:8500 172.16.0.58:53886 - default

Traffic in this direction failed.

2021-11-03T11:49:47.337593Z	debug	envoy pool	creating a new connection
2021-11-03T11:49:47.337661Z	debug	envoy client	[C3756] connecting
2021-11-03T11:49:47.337670Z	debug	envoy connection	[C3756] connecting to 192.168.0.6:8500
2021-11-03T11:49:47.337739Z	debug	envoy connection	[C3756] connection in progress
2021-11-03T11:49:47.337751Z	trace	envoy pool	not creating a new connection, shouldCreateNewConnection returned false.
2021-11-03T11:49:47.338220Z	trace	envoy connection	[C3756] socket event: 2
2021-11-03T11:49:47.338233Z	trace	envoy connection	[C3756] write ready
2021-11-03T11:49:47.338238Z	debug	envoy connection	[C3756] connected
2021-11-03T11:49:47.338298Z	trace	envoy connection	[C3756] ssl error occurred while read: WANT_READ
2021-11-03T11:49:47.339134Z	trace	envoy connection	[C3756] socket event: 3
2021-11-03T11:49:47.339154Z	trace	envoy connection	[C3756] write ready
2021-11-03T11:49:47.339208Z	trace	envoy connection	[C3756] ssl error occurred while read: SSL
2021-11-03T11:49:47.339217Z	debug	envoy connection	[C3756] TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER
2021-11-03T11:49:47.339220Z	debug	envoy connection	[C3756] closing socket: 0
2021-11-03T11:49:47.339237Z	debug	envoy connection	[C3756] TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER
2021-11-03T11:49:47.339272Z	trace	envoy connection	[C3756] raising connection event 0
2021-11-03T11:49:47.339282Z	debug	envoy client	[C3756] disconnect. resetting 0 pending requests
2021-11-03T11:49:47.339288Z	debug	envoy pool	[C3756] client disconnected, failure reason: TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER
2021-11-03T11:49:47.339301Z	debug	envoy router	[C3753][S10717251153181087726] upstream reset: reset reason: connection failure, transport failure reason: TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER
2021-11-03T11:49:47.339366Z	debug	envoy http	[C3753][S10717251153181087726] Sending local reply with details upstream_reset_before_response_started{connection failure,TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER}
2021-11-03T11:49:47.339397Z	trace	envoy http	[C3753][S10717251153181087726] encode headers called: filter=0x55678b737ce0 status=0
2021-11-03T11:49:47.339405Z	trace	envoy http	[C3753][S10717251153181087726] encode headers called: filter=0x55678b533490 status=0
2021-11-03T11:49:47.339408Z	trace	envoy http	[C3753][S10717251153181087726] encode headers called: filter=0x55678b97d650 status=0
2021-11-03T11:49:47.339431Z	trace	envoy http	[C3753][S10717251153181087726] encode headers called: filter=0x55678b585ce0 status=0
2021-11-03T11:49:47.339460Z	debug	envoy http	[C3753][S10717251153181087726] encoding headers via codec (end_stream=false):
':status', '503'
'content-length', '190'
'content-type', 'text/plain'
'date', 'Wed, 03 Nov 2021 11:49:47 GMT'
'server', 'envoy'

2021-11-03T11:49:47.339476Z	trace	envoy connection	[C3753] writing 135 bytes, end_stream false
2021-11-03T11:49:47.339488Z	trace	envoy http	[C3753][S10717251153181087726] encode data called: filter=0x55678b737ce0 status=0
2021-11-03T11:49:47.339491Z	trace	envoy http	[C3753][S10717251153181087726] encode data called: filter=0x55678b533490 status=0
2021-11-03T11:49:47.339548Z	trace	envoy http	[C3753][S10717251153181087726] encode data called: filter=0x55678b97d650 status=0
2021-11-03T11:49:47.339552Z	trace	envoy http	[C3753][S10717251153181087726] encode data called: filter=0x55678b585ce0 status=0
2021-11-03T11:49:47.339556Z	trace	envoy http	[C3753][S10717251153181087726] encoding data via codec (size=190 end_stream=true)
2021-11-03T11:49:47.339564Z	trace	envoy connection	[C3753] writing 190 bytes, end_stream false
2021-11-03T11:49:47.339571Z	trace	envoy connection	[C3753] readDisable: disable=false disable_count=1 state=0 buffer_length=0
2021-11-03T11:49:47.339802Z	debug	envoy wasm	wasm log stats_outbound stats_outbound: [extensions/stats/plugin.cc:622]::report() metricKey cache hit , stat=32
2021-11-03T11:49:47.339821Z	debug	envoy wasm	wasm log stats_outbound stats_outbound: [extensions/stats/plugin.cc:622]::report() metricKey cache hit , stat=66
2021-11-03T11:49:47.339824Z	debug	envoy wasm	wasm log stats_outbound stats_outbound: [extensions/stats/plugin.cc:622]::report() metricKey cache hit , stat=70
2021-11-03T11:49:47.339828Z	debug	envoy wasm	wasm log stats_outbound stats_outbound: [extensions/stats/plugin.cc:622]::report() metricKey cache hit , stat=74
2021-11-03T11:49:47.339836Z	trace	envoy main	item added to deferred deletion list (size=1)
2021-11-03T11:49:47.339842Z	trace	envoy misc	enableTimer called on 0x55678b9d7080 for 3600000ms, min is 3600000ms
2021-11-03T11:49:47.339852Z	trace	envoy pool	not creating a new connection, shouldCreateNewConnection returned false.
2021-11-03T11:49:47.339864Z	trace	envoy main	item added to deferred deletion list (size=2)
2021-11-03T11:49:47.339869Z	trace	envoy main	item added to deferred deletion list (size=3)
2021-11-03T11:49:47.339874Z	trace	envoy main	clearing deferred deletion list (size=3)
2021-11-03T11:49:47.339990Z	trace	envoy connection	[C3753] socket event: 2
2021-11-03T11:49:47.340000Z	trace	envoy connection	[C3753] write ready
2021-11-03T11:49:47.340046Z	trace	envoy connection	[C3753] write returns: 325
2021-11-03T11:49:47.408932Z	trace	envoy connection	[C3753] socket event: 3
2021-11-03T11:49:47.408966Z	trace	envoy connection	[C3753] write ready
2021-11-03T11:49:47.408976Z	trace	envoy connection	[C3753] read ready. dispatch_buffered_data=false
2021-11-03T11:49:47.409029Z	trace	envoy connection	[C3753] read returns: 0
2021-11-03T11:49:47.409034Z	debug	envoy connection	[C3753] remote close
2021-11-03T11:49:47.409036Z	debug	envoy connection	[C3753] closing socket: 0
2021-11-03T11:49:47.409100Z	trace	envoy connection	[C3753] raising connection event 0
2021-11-03T11:49:47.409122Z	debug	envoy conn_handler	[C3753] adding to cleanup list
2021-11-03T11:49:47.409128Z	trace	envoy main	item added to deferred deletion list (size=1)
2021-11-03T11:49:47.409131Z	trace	envoy main	item added to deferred deletion list (size=2)
2021-11-03T11:49:47.409136Z	trace	envoy main	clearing deferred deletion list (size=2)
[2021-11-03T11:49:47.309Z] "GET /hello HTTP/1.1" 503 UF,URX upstream_reset_before_response_started{connection_failure,TLS_error:_268435703:SSL_routines:OPENSSL_internal:WRONG_VERSION_NUMBER} - "TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER" 0 190 30 - "-" "curl/7.79.1" "0723f47d-22f0-9914-ab3f-df51cc68910b" "helloworld.vm-test.svc.cluster.local:8500" "192.168.0.6:8500" outbound|8500||helloworld.vm-test.svc.cluster.local - 10.254.123.40:8500 172.16.1.80:38634 - default
2021-11-03T11:49:48.055553Z	debug	envoy main	flushing stats
2021-11-03T11:49:48.089899Z	trace	envoy misc	enableTimer called on 0x55678b99b700 for 3600000ms, min is 3600000ms
2021-11-03T11:49:48.089937Z	debug	envoy conn_handler	[C3757] new connection from 192.168.0.5:49458
2021-11-03T11:49:48.089959Z	trace	envoy connection	[C3757] socket event: 3
2021-11-03T11:49:48.089964Z	trace	envoy connection	[C3757] write ready
2021-11-03T11:49:48.089969Z	trace	envoy connection	[C3757] read ready. dispatch_buffered_data=false
2021-11-03T11:49:48.089986Z	trace	envoy connection	[C3757] read returns: 117
2021-11-03T11:49:48.090000Z	trace	envoy connection	[C3757] read error: Resource temporarily unavailable
2021-11-03T11:49:48.090023Z	trace	envoy http	[C3757] parsing 117 bytes
2021-11-03T11:49:48.090035Z	trace	envoy http	[C3757] message begin
2021-11-03T11:49:48.090045Z	debug	envoy http	[C3757] new stream
2021-11-03T11:49:48.090064Z	trace	envoy misc	enableTimer called on 0x55678b669100 for 300000ms, min is 300000ms
2021-11-03T11:49:48.090085Z	trace	envoy http	[C3757] completed header: key=Host value=172.16.1.80:15021
2021-11-03T11:49:48.090101Z	trace	envoy http	[C3757] completed header: key=User-Agent value=kube-probe/1.20
2021-11-03T11:49:48.090113Z	trace	envoy http	[C3757] completed header: key=Accept value=*/*
2021-11-03T11:49:48.090127Z	trace	envoy http	[C3757] onHeadersCompleteBase
2021-11-03T11:49:48.090131Z	trace	envoy http	[C3757] completed header: key=Connection value=close
2021-11-03T11:49:48.090142Z	trace	envoy http	[C3757] Server: onHeadersComplete size=4
2021-11-03T11:49:48.090159Z	trace	envoy http	[C3757] message complete
2021-11-03T11:49:48.090170Z	trace	envoy connection	[C3757] readDisable: disable=true disable_count=0 state=0 buffer_length=117
2021-11-03T11:49:48.090197Z	debug	envoy http	[C3757][S4433421122818485152] request headers complete (end_stream=true):
':authority', '172.16.1.80:15021'
':path', '/healthz/ready'
':method', 'GET'
'user-agent', 'kube-probe/1.20'
'accept', '*/*'
'connection', 'close'

2021-11-03T11:49:48.090206Z	debug	envoy http	[C3757][S4433421122818485152] request end stream
2021-11-03T11:49:48.090263Z	debug	envoy router	[C3757][S4433421122818485152] cluster 'agent' match for URL '/healthz/ready'
2021-11-03T11:49:48.090322Z	debug	envoy router	[C3757][S4433421122818485152] router decoding headers:
':authority', '172.16.1.80:15021'
':path', '/healthz/ready'
':method', 'GET'
':scheme', 'http'
'user-agent', 'kube-probe/1.20'
'accept', '*/*'
'x-forwarded-proto', 'http'
'x-request-id', 'e2589795-9504-42a0-b219-c04bb3ab1dd1'
'x-envoy-expected-rq-timeout-ms', '15000'

2021-11-03T11:49:48.090400Z	debug	envoy pool	[C3] using existing connection
2021-11-03T11:49:48.090406Z	debug	envoy pool	[C3] creating stream
2021-11-03T11:49:48.090421Z	debug	envoy router	[C3757][S4433421122818485152] pool ready
2021-11-03T11:49:48.090451Z	trace	envoy connection	[C3] writing 214 bytes, end_stream false
2021-11-03T11:49:48.090465Z	trace	envoy pool	not creating a new connection, shouldCreateNewConnection returned false.
2021-11-03T11:49:48.090474Z	trace	envoy http	[C3757][S4433421122818485152] decode headers called: filter=0x55678b990bd0 status=1
2021-11-03T11:49:48.090482Z	trace	envoy misc	enableTimer called on 0x55678b669100 for 300000ms, min is 300000ms
2021-11-03T11:49:48.090495Z	trace	envoy http	[C3757] parsed 117 bytes
2021-11-03T11:49:48.090551Z	trace	envoy connection	[C3757] socket event: 2
2021-11-03T11:49:48.090555Z	trace	envoy connection	[C3757] write ready
2021-11-03T11:49:48.090561Z	trace	envoy connection	[C3] socket event: 2
2021-11-03T11:49:48.090563Z	trace	envoy connection	[C3] write ready
2021-11-03T11:49:48.090615Z	trace	envoy connection	[C3] write returns: 214
2021-11-03T11:49:48.090826Z	trace	envoy connection	[C3] socket event: 3
2021-11-03T11:49:48.090836Z	trace	envoy connection	[C3] write ready
2021-11-03T11:49:48.090841Z	trace	envoy connection	[C3] read ready. dispatch_buffered_data=false
2021-11-03T11:49:48.090855Z	trace	envoy connection	[C3] read returns: 75
2021-11-03T11:49:48.090866Z	trace	envoy connection	[C3] read error: Resource temporarily unavailable
2021-11-03T11:49:48.090875Z	trace	envoy http	[C3] parsing 75 bytes
2021-11-03T11:49:48.090881Z	trace	envoy http	[C3] message begin
2021-11-03T11:49:48.090902Z	trace	envoy http	[C3] completed header: key=Date value=Wed, 03 Nov 2021 11:49:48 GMT
2021-11-03T11:49:48.090915Z	trace	envoy http	[C3] onHeadersCompleteBase
2021-11-03T11:49:48.090918Z	trace	envoy http	[C3] completed header: key=Content-Length value=0
2021-11-03T11:49:48.090928Z	trace	envoy http	[C3] status_code 200
2021-11-03T11:49:48.090934Z	trace	envoy http	[C3] Client: onHeadersComplete size=2
2021-11-03T11:49:48.090941Z	trace	envoy http	[C3] message complete
2021-11-03T11:49:48.090949Z	trace	envoy http	[C3] message complete
2021-11-03T11:49:48.090953Z	debug	envoy client	[C3] response complete
2021-11-03T11:49:48.090958Z	trace	envoy main	item added to deferred deletion list (size=1)
2021-11-03T11:49:48.090972Z	debug	envoy router	[C3757][S4433421122818485152] upstream headers complete: end_stream=true
2021-11-03T11:49:48.091027Z	trace	envoy misc	enableTimer called on 0x55678b669100 for 300000ms, min is 300000ms
2021-11-03T11:49:48.091049Z	debug	envoy http	[C3757][S4433421122818485152] closing connection due to connection close header
2021-11-03T11:49:48.091069Z	debug	envoy http	[C3757][S4433421122818485152] encoding headers via codec (end_stream=true):
':status', '200'
'date', 'Wed, 03 Nov 2021 11:49:48 GMT'
'content-length', '0'
'x-envoy-upstream-service-time', '0'
'server', 'envoy'
'connection', 'close'

VM -> Mesh:
This is correct

[root@instance-6dcpbbai vm]#  curl httpbin.vm-test:8000/headers
{
  "headers": {
    "Accept": "*/*",
    "Host": "httpbin.vm-test:8000",
    "User-Agent": "curl/7.61.1",
    "X-B3-Parentspanid": "fbf55b25584edfae",
    "X-B3-Sampled": "0",
    "X-B3-Spanid": "eb8bd54b7d469867",
    "X-B3-Traceid": "c348400f3dd4a671fbf55b25584edfae",
    "X-Envoy-Attempt-Count": "1",
    "X-Forwarded-Client-Cert": "By=spiffe://cluster.local/ns/vm-test/sa/httpbin;Hash=17423b675afa7ed8951d4e76371739d61d3520a2ce29942b179919bbfec3fdd5;Subject=\"\";URI=spiffe://cluster.local/ns/vm-test/sa/vm"
  }
}

Version

➜  debug-istio-vm istioctl version
client version: 1.11.4
control plane version: 1.11.4
data plane version: 1.11.4 (9 proxies), 1.11.0 (1 proxies)

➜  debug-istio-vm kubectl version --short
Client Version: v1.21.2
Server Version: v1.20.8

Additional Information

the yaml se of we:

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: vm-helloworld
  labels:
    app: vm-test
spec:
  hosts:
  - helloworld.vm-test.svc.cluster.local
  location: MESH_INTERNAL
  addresses:
  - 10.254.123.40
  ports:
  - name: http-8500
    number: 8500
    protocol: HTTP
    targetPort: 8500
  resolution: DNS
  workloadSelector:
    labels:
      app: vm-test
---
apiVersion: networking.istio.io/v1beta1
kind: WorkloadEntry
metadata:
  labels:
    app: vm-test
  name: vm-test-192.168.0.6
spec:
  address: 192.168.0.6
  labels:
    app: vm-test
  serviceAccount: vm
[root@instance-6dcpbbai vm]# curl localhost:15000/clusters | grep helloworld
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 99337    0 99337    0     0  47.3M      0 --:--outbound|8500||helloworld.vm-test.svc.cluster.local::observability_name::outbound|8500||helloworld.vm-test.svc.cluster.local
:-outbound|8500||helloworld.vm-test.svc.cluster.local::default_priority::max_connections::4294967295
-outbound|8500||helloworld.vm-test.svc.cluster.local::default_priority::max_pending_requests::4294967295
 outbound|8500||helloworld.vm-test.svc.cluster.local::default_priority::max_requests::4294967295
-outbound|8500||helloworld.vm-test.svc.cluster.local::default_priority::max_retries::4294967295
-:outbound|8500||helloworld.vm-test.svc.cluster.local::high_priority::max_connections::1024
-outbound|8500||helloworld.vm-test.svc.cluster.local::high_priority::max_pending_requests::1024
-:outbound|8500||helloworld.vm-test.svc.cluster.local::high_priority::max_requests::1024
-- --:--:-- 4outbound|8500||helloworld.vm-test.svc.cluster.local::high_priority::max_retries::3
7outbound|8500||helloworld.vm-test.svc.cluster.local::added_via_api::true
.outbound|8500||helloworld.vm-test.svc.cluster.local::192.168.0.6:8500::cx_active::0
3outbound|8500||helloworld.vm-test.svc.cluster.local::192.168.0.6:8500::cx_connect_fail::0
Moutbound|8500||helloworld.vm-test.svc.cluster.local::192.168.0.6:8500::cx_total::0

outbound|8500||helloworld.vm-test.svc.cluster.local::192.168.0.6:8500::rq_active::0
outbound|8500||helloworld.vm-test.svc.cluster.local::192.168.0.6:8500::rq_error::0
outbound|8500||helloworld.vm-test.svc.cluster.local::192.168.0.6:8500::rq_success::0
outbound|8500||helloworld.vm-test.svc.cluster.local::192.168.0.6:8500::rq_timeout::0
outbound|8500||helloworld.vm-test.svc.cluster.local::192.168.0.6:8500::rq_total::0
outbound|8500||helloworld.vm-test.svc.cluster.local::192.168.0.6:8500::hostname::192.168.0.6
outbound|8500||helloworld.vm-test.svc.cluster.local::192.168.0.6:8500::health_flags::healthy
outbound|8500||helloworld.vm-test.svc.cluster.local::192.168.0.6:8500::weight::1
outbound|8500||helloworld.vm-test.svc.cluster.local::192.168.0.6:8500::region::
outbound|8500||helloworld.vm-test.svc.cluster.local::192.168.0.6:8500::zone::
outbound|8500||helloworld.vm-test.svc.cluster.local::192.168.0.6:8500::sub_zone::
outbound|8500||helloworld.vm-test.svc.cluster.local::192.168.0.6:8500::canary::false
outbound|8500||helloworld.vm-test.svc.cluster.local::192.168.0.6:8500::priority::0
outbound|8500||helloworld.vm-test.svc.cluster.local::192.168.0.6:8500::success_rate::-1.0
outbound|8500||helloworld.vm-test.svc.cluster.local::192.168.0.6:8500::local_origin_success_rate::-1.0

I can access to it by ip + port:

➜  debug-istio-vm k -n vm-test exec -it network-multitool-5cb859cc-5d9nb bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
bash-5.1# curl helloworld.vm-test.svc.cluster.local:8500/hello
upstream connect error or disconnect/reset before headers. reset reason: connection failure, transport failure reason: TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBERbash-5.1#
bash-5.1# curl helloworld.vm-test.svc.cluster.local:8500/hello
upstream connect error or disconnect/reset before headers. reset reason: connection failure, transport failure reason: TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBERbash-5.1# curl helloworld.vm-test.svc.cluster.local:8500/hello^C
bash-5.1# curl 192.168.0.6:8500/hello
Hello version: v2, instance: 07c4c3d7b486
bash-5.1# curl 192.168.0.6:8500/hello
Hello version: v2, instance: 07c4c3d7b486
[root@instance-6dcpbbai vm]# docker ps
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
CONTAINER ID  IMAGE                                          COMMAND               CREATED      STATUS          PORTS                   NAMES
07c4c3d7b486  docker.io/istio/examples-helloworld-v2:latest  /bin/sh -c python...  3 hours ago  Up 3 hours ago  0.0.0.0:8500->5000/tcp  beautiful_hypatia
[root@instance-6dcpbbai vm]# curl 192.168.0.6:8500/hello
Hello version: v2, instance: 07c4c3d7b486

the config of vm:
cluster.env

BOOTSTRAP_XDS_AGENT='true'
CANONICAL_REVISION='latest'
CANONICAL_SERVICE='vm-helloworld-v2'
ISTIO_INBOUND_PORTS='*'
ISTIO_LOCAL_EXCLUDE_PORTS='15090,15021,15020,8022,22'
ISTIO_METAJSON_LABELS='{"app":"vm-helloworld-v2","service.istio.io/canonical-name":"vm-helloworld-v2","service.istio.io/canonical-version":"latest"}'
ISTIO_META_CLUSTER_ID='Kubernetes'
ISTIO_META_DNS_CAPTURE='true'
ISTIO_META_MESH_ID='mesh1'
ISTIO_META_NETWORK=''
ISTIO_META_WORKLOAD_NAME='vm-helloworld-v2'
ISTIO_NAMESPACE='vm-test'
ISTIO_SERVICE='vm-helloworld-v2.vm-test'
ISTIO_SERVICE_CIDR='*'
POD_NAMESPACE='vm-test'
SERVICE_ACCOUNT='vm'
TRUST_DOMAIN='cluster.local'

istio-token

eyJhbGciOiJSUzI1NiIsImtpZCI6IkFXQWEyN3AzNmRkN2R5R2pxNUFVZVVHWGJPdkpEclk4QnNrdWRScjN2T0kifQ.eyJhdWQiOlsiaXN0aW8tY2EiXSwiZXhwIjoxNjY3NDY3NzYwLCJpYXQiOjE2MzU5MzE3NjAsImlzcyI6Imt1YmVybmV0ZXMuZGVmYXVsdC5zdmMiLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6InZtLXRlc3QiLCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoidm0iLCJ1aWQiOiJjMWU0ZTYyNC0zN2E5LTQwMmQtODNjZi1kMTI3MDgzNWU3NTIifX0sIm5iZiI6MTYzNTkzMTc2MCwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OnZtLXRlc3Q6dm0ifQ.RT3A2hqsnlkXvfrEMldKuEQab70AtS2ELez2DTTcU87fcgyTm0F9RenJoBGk2PwaZ4E2YbSSkuVuie6eQaRohcmILDKzGJ6U0GxQbea0pzghDVjzw8VMwoVi7pn3xXsucrHJhZJGceGELeCYEyWeAJe7JlkS43P7QYJOv535PbT9Vb5sPrvN_pnRruSDWTi_aY2WCMrz-0EDh2430TCV3tClwsWRAfe8CyoC6u--NJ9yYfhKdS-eU9-gIfpSR9LQWg6J0U1GEyCAMDJ2299uxJcJKOtGo9RCzvjQK2JT3CwbmdnNxC6EBrQJc6Iw6wZFQhBxm3MgesCk5Bw7xOg1qQ

mesh.yaml:

defaultConfig:
  discoveryAddress: istiod.istio-system.svc:15012
  meshId: mesh1
  proxyMetadata:
    BOOTSTRAP_XDS_AGENT: "true"
    CANONICAL_REVISION: latest
    CANONICAL_SERVICE: vm-helloworld-v2
    ISTIO_META_CLUSTER_ID: Kubernetes
    ISTIO_META_DNS_CAPTURE: "true"
    ISTIO_META_MESH_ID: mesh1
    ISTIO_META_NETWORK: ""
    ISTIO_META_WORKLOAD_NAME: vm-helloworld-v2
    ISTIO_METAJSON_LABELS: '{"app":"vm-helloworld-v2","service.istio.io/canonical-name":"vm-helloworld-v2","service.istio.io/canonical-version":"latest"}'
    POD_NAMESPACE: vm-test
    SERVICE_ACCOUNT: vm
    TRUST_DOMAIN: cluster.local
  tracing:
    zipkin:
      address: zipkin.istio-system:9411

root-cert.pem:

-----BEGIN CERTIFICATE-----
MIIC/TCCAeWgAwIBAgIRAJYeRT1UAkIe9yZlsr5ruvIwDQYJKoZIhvcNAQELBQAw
GDEWMBQGA1UEChMNY2x1c3Rlci5sb2NhbDAeFw0yMTExMDMwOTI1NTNaFw0zMTEx
MDEwOTI1NTNaMBgxFjAUBgNVBAoTDWNsdXN0ZXIubG9jYWwwggEiMA0GCSqGSIb3
DQEBAQUAA4IBDwAwggEKAoIBAQDFhg29rwZG0hJ+QqokaPVZtkZdrZuPlSXqS7yN
4/49C7Yz/SvnudYeWOLNYapfnFH1IDS+0He4WSaNjaBa754sdmVfhcMReaiJ+kTX
rQhGRSmaPkx83Fga9eVP+I/X6Rn1Y3CbABXBDS80O/d3o7kSwKu+WUoGYhfxjTpJ
tGOy15bJP2PgDP8mUZMnSy20vCqJ8f7McXBrrAS+Hr5RvFKVwaO31ziN3yeXImJg
Jk+iy+o4IE3O6b7mec0WJsNrypjjUvsJVQCdVtar1CcMi3F84yt5XM2CG0FONIb4
ALl2xlhSjYFQOEyvG7TQj6hJHRoJAGtsc1kygTVKaHIBIQWFAgMBAAGjQjBAMA4G
A1UdDwEB/wQEAwICBDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBRMbhEXuUwz
g3yYrZPB4CiPsiBjYzANBgkqhkiG9w0BAQsFAAOCAQEATHmRhyP3GG23riOIpLMm
IEYxPKzigYIwYEPAvERfJomywLAqvllDNGuIoNaXVaWxKKygPOowHgmZvJkTo80I
LxBH7gbSaleWsF8Tzj4SgDZLjhTO+R1d8L4g55mqeVf01etvqbnGAjbNJF/+GOox
pdE+nU7qg6Z97DuRAzKwnaBUOA6Opaz6/PwEkcNKJHHrr/zcVXYMH9PqUIiYG/cE
+gSMNCfP53Z5/9vEncHYfPlQndAEJdfYnwEMc7lGAz6E7PZ01EOw0XEdFQT1P6/D
I7DDcajxmieadtM36GXpriEuHKA83jNcdrB1QtjOaLMxVP1N/tEaUTyb9YCCza0x
lw==
-----END CERTIFICATE-----

the istio configmap:

apiVersion: v1
data:
  mesh: |-
    accessLogFile: /dev/stdout
    defaultConfig:
      discoveryAddress: istiod.istio-system.svc:15012
      meshId: mesh1
      proxyMetadata:
        BOOTSTRAP_XDS_AGENT: "true"
        ISTIO_META_DNS_CAPTURE: "true"
      tracing:
        sampling: 100
        zipkin:
          address: zipkin.istio-system:9411
    enablePrometheusMerge: true
    rootNamespace: istio-system
    trustDomain: cluster.local
  meshNetworks: 'networks: {}'
kind: ConfigMap
metadata:
  labels:
    install.operator.istio.io/owning-resource: istio
    install.operator.istio.io/owning-resource-namespace: istio-system
    istio.io/rev: default
    operator.istio.io/component: Pilot
    operator.istio.io/managed: Reconcile
    operator.istio.io/version: 1.11.4
    release: istio
  name: istio
  namespace: istio-system
@hzxuzhonghu
Copy link
Member

Can you check the supported tls version of your vm?

@hzxuzhonghu
Copy link
Member

#15701

@tanjunchen
Copy link
Member Author

Can you check the supported tls version of your vm?

image

there is no policy and the istio cni is disabled.
image

image

@tanjunchen
Copy link
Member Author

there are two case:

  1. I changed the location in Service Entry from MESH_INTERNAL to MESH_EXTERNAL , and found that helloworld.vm.svc.cluster.local:8500/hello is accessible. But tcpdump did not capture the tls packet.
    image
  2. When I set up tls authentication for host helloworld.vm.svc.cluster.local, I found that the access was blocked. But tcpdump captured tls packets.
    image
    I am very confused what is the problem, is the certificate problem? Or is it a question of where is the configuration of envoy xds? Does anyone know?

@yuanxch
Copy link

yuanxch commented Nov 19, 2021

same question, istio 1.11.4
cni:disable
image

http://10.98.41.167:31614/productpage
upstream connect error or disconnect/reset before headers. reset reason: connection failure, transport failure reason: TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER

@howardjohn
Copy link
Member

howardjohn commented Nov 19, 2021 via email

@tanjunchen
Copy link
Member Author

@howardjohn Hello, I followed the documentation on the official website to operate the virtual machine. Once I started mtls for the service in the virtual machine, this problem occurred and the iptables rules were effective. Is there a case of accessing vm from mesh in the official document? Is the case of mutual authentication opened? thanks.

@yuanxch
Copy link

yuanxch commented Nov 22, 2021

Looks like the mtls request is going directly to your app. Maybe iptables isn't set up properly?

On Thu, Nov 18, 2021 at 9:47 PM yuanxch @.***> wrote: same question, istio 1.11.4 cni:disable [image: image] https://user-images.githubusercontent.com/10543069/142571796-ca56cf88-e0d4-47ed-b0e2-82ecdcffe6fc.png http://10.98.41.167:31614/productpage upstream connect error or disconnect/reset before headers. reset reason: connection failure, transport failure reason: TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER — You are receiving this because you were assigned. Reply to this email directly, view it on GitHub <#35870 (comment)>, or unsubscribe https://github.com/notifications/unsubscribe-auth/AAEYGXP3BXCZWKWYY3YADHTUMXQIFANCNFSM5HI5EPYA . Triage notifications on the go with GitHub Mobile for iOS https://apps.apple.com/app/apple-store/id1477376905?ct=notification-email&mt=8&pt=524675 or Android https://play.google.com/store/apps/details?id=com.github.android&referrer=utm_campaign%3Dnotification-email%26utm_medium%3Demail%26utm_source%3Dgithub.

  trafficPolicy:
    tls:
      mode: DISABLE

It works when i add the codes into samples/bookinfo/networking/destination-rule-all.yaml

@tanjunchen
Copy link
Member Author

@yuanxch me too.

@howardjohn
Copy link
Member

howardjohn commented Nov 22, 2021 via email

@yuanxch
Copy link

yuanxch commented Nov 23, 2021

thank you @howardjohn

both results are same。

sudo iptables-save

# Generated by iptables-save v1.8.4 on Tue Nov 23 09:48:28 2021
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:LIBVIRT_INP - [0:0]
:LIBVIRT_OUT - [0:0]
:LIBVIRT_FWO - [0:0]
:LIBVIRT_FWI - [0:0]
:LIBVIRT_FWX - [0:0]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-USER - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-EXTERNAL-SERVICES - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-FORWARD - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
-A INPUT -m comment --comment "kubernetes health check service ports" -j KUBE-NODEPORTS
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A INPUT -j LIBVIRT_INP
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A FORWARD -o br-59ca438f4591 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o br-59ca438f4591 -j DOCKER
-A FORWARD -i br-59ca438f4591 ! -o br-59ca438f4591 -j ACCEPT
-A FORWARD -i br-59ca438f4591 -o br-59ca438f4591 -j ACCEPT
-A FORWARD -j LIBVIRT_FWX
-A FORWARD -j LIBVIRT_FWI
-A FORWARD -j LIBVIRT_FWO
-A FORWARD -s 10.244.0.0/16 -j ACCEPT
-A FORWARD -d 10.244.0.0/16 -j ACCEPT
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A OUTPUT -j LIBVIRT_OUT
-A LIBVIRT_INP -i virbr0 -p udp -m udp --dport 53 -j ACCEPT
-A LIBVIRT_INP -i virbr0 -p tcp -m tcp --dport 53 -j ACCEPT
-A LIBVIRT_INP -i virbr0 -p udp -m udp --dport 67 -j ACCEPT
-A LIBVIRT_INP -i virbr0 -p tcp -m tcp --dport 67 -j ACCEPT
-A LIBVIRT_OUT -o virbr0 -p udp -m udp --dport 53 -j ACCEPT
-A LIBVIRT_OUT -o virbr0 -p tcp -m tcp --dport 53 -j ACCEPT
-A LIBVIRT_OUT -o virbr0 -p udp -m udp --dport 68 -j ACCEPT
-A LIBVIRT_OUT -o virbr0 -p tcp -m tcp --dport 68 -j ACCEPT
-A LIBVIRT_FWO -s 192.168.122.0/24 -i virbr0 -j ACCEPT
-A LIBVIRT_FWO -i virbr0 -j REJECT --reject-with icmp-port-unreachable
-A LIBVIRT_FWI -d 192.168.122.0/24 -o virbr0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A LIBVIRT_FWI -o virbr0 -j REJECT --reject-with icmp-port-unreachable
-A LIBVIRT_FWX -i virbr0 -o virbr0 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 5000 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -i br-59ca438f4591 ! -o br-59ca438f4591 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-USER -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment "block incoming localnet connections" -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
-A KUBE-FORWARD -m conntrack --ctstate INVALID -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack pod destination rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -o br-59ca438f4591 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
COMMIT
# Completed on Tue Nov 23 09:48:28 2021
# Generated by iptables-save v1.8.4 on Tue Nov 23 09:48:28 2021
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:LIBVIRT_PRT - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SEP-WUMTZJLBIHT6QFYJ - [0:0]
:DOCKER - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:KUBE-SEP-BHAOPIJUHF4GBMP7 - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SEP-BZHHBMQSWZWBVQVH - [0:0]
:KUBE-SVC-JD5MR3NA4I4DYORP - [0:0]
:KUBE-SEP-PZXRNLJ3TMU4GE65 - [0:0]
:KUBE-SEP-SY2IDVYD6QHI4OSE - [0:0]
:KUBE-SEP-SN5KQBCPIOR75D2V - [0:0]
:KUBE-SEP-UKXHRPYURGFMIXEC - [0:0]
:KUBE-SVC-SIMN7BSCUVAV2FC7 - [0:0]
:KUBE-SEP-JZE46AANXZMPOG6E - [0:0]
:KUBE-SVC-NVNLZVDQSGQUD3NM - [0:0]
:KUBE-SEP-JGORAKDXFN2C552T - [0:0]
:KUBE-SVC-WHNIZNLB5XFXIX2C - [0:0]
:KUBE-SEP-765EF7GFE5EPKZPF - [0:0]
:KUBE-SVC-XHUBMW47Y5G3ICIS - [0:0]
:KUBE-SEP-PUBIY4UWEXDC3U2B - [0:0]
:KUBE-SVC-CG3LQLBYYHBKATGN - [0:0]
:KUBE-SEP-D4PT4JE4IH3ZSRCU - [0:0]
:KUBE-SVC-S4S242M2WNFIAT6Y - [0:0]
:KUBE-SEP-BYVWZYGQLZSLXO7N - [0:0]
:KUBE-SVC-G6D3V5KS3PXPUEDS - [0:0]
:KUBE-SEP-IIYQ4WXDTEOK6OD5 - [0:0]
:KUBE-SVC-7N6LHPYFOVFT454K - [0:0]
:KUBE-SEP-Z7GS6PWWK5U4RYAH - [0:0]
:KUBE-SVC-62L5C2KEOX6ICGVJ - [0:0]
:KUBE-SEP-QYABR3OCVCVPKBDB - [0:0]
:KUBE-SVC-TFRZ6Y6WOLX5SOWZ - [0:0]
:KUBE-SEP-ZCZ3ZIHDOBOCPGZJ - [0:0]
:KUBE-SVC-IBZWWK3KTI7UHZ5A - [0:0]
:KUBE-SEP-JRW2B4SW6QS2OGYA - [0:0]
:KUBE-SVC-F2IARDLERJIFF7VR - [0:0]
:KUBE-SEP-VUG5WQMKBHWAN7ZE - [0:0]
:KUBE-SVC-53SQRANQXVHTJ6HK - [0:0]
:KUBE-SEP-RHCJ5EV7FHKCNZUU - [0:0]
:KUBE-SEP-VN2563Y2S4SEHFZM - [0:0]
:KUBE-SEP-IWJWCF4BOHEY36BZ - [0:0]
:KUBE-SVC-SB7WEE53EMIXFNKY - [0:0]
:KUBE-SEP-FG7KL5MWSB7UGTBW - [0:0]
:KUBE-SVC-4MYBDLPZ2DFGC5Z6 - [0:0]
:KUBE-SEP-5NOPIYZ2DEMWJ4BE - [0:0]
:KUBE-SVC-ROH4UCJ7RVN2OSM4 - [0:0]
:KUBE-SEP-EXEAVA34QMIGNSLM - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.18.0.0/16 ! -o br-59ca438f4591 -j MASQUERADE
-A POSTROUTING -j LIBVIRT_PRT
-A POSTROUTING -s 10.244.0.0/16 -d 10.244.0.0/16 -j RETURN
-A POSTROUTING -s 10.244.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully
-A POSTROUTING ! -s 10.244.0.0/16 -d 10.98.0.0/24 -j RETURN
-A POSTROUTING ! -s 10.244.0.0/16 -d 10.244.0.0/16 -j MASQUERADE --random-fully
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 5000 -j MASQUERADE
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A LIBVIRT_PRT -s 192.168.122.0/24 -d 224.0.0.0/24 -j RETURN
-A LIBVIRT_PRT -s 192.168.122.0/24 -d 255.255.255.255/32 -j RETURN
-A LIBVIRT_PRT -s 192.168.122.0/24 ! -d 192.168.122.0/24 -p tcp -j MASQUERADE --to-ports 1024-65535
-A LIBVIRT_PRT -s 192.168.122.0/24 ! -d 192.168.122.0/24 -p udp -j MASQUERADE --to-ports 1024-65535
-A LIBVIRT_PRT -s 192.168.122.0/24 ! -d 192.168.122.0/24 -j MASQUERADE
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m mark ! --mark 0x4000/0x4000 -j RETURN
-A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE --random-fully
-A KUBE-SERVICES -d 10.1.226.63/32 -p tcp -m comment --comment "istio-system/istio-ingressgateway:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-7N6LHPYFOVFT454K
-A KUBE-SERVICES -d 10.1.226.63/32 -p tcp -m comment --comment "istio-system/istio-ingressgateway:tcp cluster IP" -m tcp --dport 31400 -j KUBE-SVC-62L5C2KEOX6ICGVJ
-A KUBE-SERVICES -d 10.1.226.63/32 -p tcp -m comment --comment "istio-system/istio-ingressgateway:status-port cluster IP" -m tcp --dport 15021 -j KUBE-SVC-TFRZ6Y6WOLX5SOWZ
-A KUBE-SERVICES -d 10.1.236.187/32 -p tcp -m comment --comment "default/reviews:http cluster IP" -m tcp --dport 9080 -j KUBE-SVC-53SQRANQXVHTJ6HK
-A KUBE-SERVICES -d 10.1.108.219/32 -p tcp -m comment --comment "istio-operator/istio-operator:http-metrics cluster IP" -m tcp --dport 8383 -j KUBE-SVC-SIMN7BSCUVAV2FC7
-A KUBE-SERVICES -d 10.1.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES -d 10.1.65.239/32 -p tcp -m comment --comment "istio-system/istiod:grpc-xds cluster IP" -m tcp --dport 15010 -j KUBE-SVC-NVNLZVDQSGQUD3NM
-A KUBE-SERVICES -d 10.1.65.239/32 -p tcp -m comment --comment "istio-system/istiod:https-webhook cluster IP" -m tcp --dport 443 -j KUBE-SVC-WHNIZNLB5XFXIX2C
-A KUBE-SERVICES -d 10.1.226.63/32 -p tcp -m comment --comment "istio-system/istio-ingressgateway:tls cluster IP" -m tcp --dport 15443 -j KUBE-SVC-S4S242M2WNFIAT6Y
-A KUBE-SERVICES -d 10.1.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -d 10.1.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 10.1.65.239/32 -p tcp -m comment --comment "istio-system/istiod:http-monitoring cluster IP" -m tcp --dport 15014 -j KUBE-SVC-XHUBMW47Y5G3ICIS
-A KUBE-SERVICES -d 10.1.255.46/32 -p tcp -m comment --comment "istio-system/istio-egressgateway:http2 cluster IP" -m tcp --dport 80 -j KUBE-SVC-IBZWWK3KTI7UHZ5A
-A KUBE-SERVICES -d 10.1.255.46/32 -p tcp -m comment --comment "istio-system/istio-egressgateway:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-F2IARDLERJIFF7VR
-A KUBE-SERVICES -d 10.1.226.63/32 -p tcp -m comment --comment "istio-system/istio-ingressgateway:http2 cluster IP" -m tcp --dport 80 -j KUBE-SVC-G6D3V5KS3PXPUEDS
-A KUBE-SERVICES -d 10.1.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -d 10.1.164.17/32 -p tcp -m comment --comment "default/details:http cluster IP" -m tcp --dport 9080 -j KUBE-SVC-SB7WEE53EMIXFNKY
-A KUBE-SERVICES -d 10.1.123.2/32 -p tcp -m comment --comment "default/ratings:http cluster IP" -m tcp --dport 9080 -j KUBE-SVC-4MYBDLPZ2DFGC5Z6
-A KUBE-SERVICES -d 10.1.176.191/32 -p tcp -m comment --comment "default/productpage:http cluster IP" -m tcp --dport 9080 -j KUBE-SVC-ROH4UCJ7RVN2OSM4
-A KUBE-SERVICES -d 10.1.65.239/32 -p tcp -m comment --comment "istio-system/istiod:https-dns cluster IP" -m tcp --dport 15012 -j KUBE-SVC-CG3LQLBYYHBKATGN
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-NODEPORTS -p tcp -m comment --comment "istio-system/istio-ingressgateway:https" -m tcp --dport 31088 -j KUBE-SVC-7N6LHPYFOVFT454K
-A KUBE-NODEPORTS -p tcp -m comment --comment "istio-system/istio-ingressgateway:tcp" -m tcp --dport 32587 -j KUBE-SVC-62L5C2KEOX6ICGVJ
-A KUBE-NODEPORTS -p tcp -m comment --comment "istio-system/istio-ingressgateway:status-port" -m tcp --dport 32272 -j KUBE-SVC-TFRZ6Y6WOLX5SOWZ
-A KUBE-NODEPORTS -p tcp -m comment --comment "istio-system/istio-ingressgateway:tls" -m tcp --dport 31800 -j KUBE-SVC-S4S242M2WNFIAT6Y
-A KUBE-NODEPORTS -p tcp -m comment --comment "istio-system/istio-ingressgateway:http2" -m tcp --dport 31614 -j KUBE-SVC-G6D3V5KS3PXPUEDS
-A KUBE-SVC-NPX46M4PTMTKRN6Y ! -s 10.244.0.0/16 -d 10.1.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-WUMTZJLBIHT6QFYJ
-A KUBE-SEP-WUMTZJLBIHT6QFYJ -s 10.98.41.167/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-WUMTZJLBIHT6QFYJ -p tcp -m comment --comment "default/kubernetes:https" -m tcp -j DNAT --to-destination 10.98.41.167:6443
-A DOCKER -i docker0 -j RETURN
-A DOCKER -i br-59ca438f4591 -j RETURN
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 5000 -j DNAT --to-destination 172.17.0.2:5000
-A KUBE-SVC-TCOU7JCQXEZGVUNU ! -s 10.244.0.0/16 -d 10.1.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-SY2IDVYD6QHI4OSE
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-BHAOPIJUHF4GBMP7
-A KUBE-SEP-BHAOPIJUHF4GBMP7 -s 10.98.0.3/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-BHAOPIJUHF4GBMP7 -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.98.0.3:53
-A KUBE-SVC-ERIFXISQEP7F7OF4 ! -s 10.244.0.0/16 -d 10.1.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-SN5KQBCPIOR75D2V
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-BZHHBMQSWZWBVQVH
-A KUBE-SEP-BZHHBMQSWZWBVQVH -s 10.98.0.3/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-BZHHBMQSWZWBVQVH -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.98.0.3:53
-A KUBE-SVC-JD5MR3NA4I4DYORP ! -s 10.244.0.0/16 -d 10.1.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-MARK-MASQ
-A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment "kube-system/kube-dns:metrics" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-UKXHRPYURGFMIXEC
-A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-SEP-PZXRNLJ3TMU4GE65
-A KUBE-SEP-PZXRNLJ3TMU4GE65 -s 10.98.0.3/32 -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-MARK-MASQ
-A KUBE-SEP-PZXRNLJ3TMU4GE65 -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.98.0.3:9153
-A KUBE-SEP-SY2IDVYD6QHI4OSE -s 10.98.0.2/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-SY2IDVYD6QHI4OSE -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.98.0.2:53
-A KUBE-SEP-SN5KQBCPIOR75D2V -s 10.98.0.2/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-SN5KQBCPIOR75D2V -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.98.0.2:53
-A KUBE-SEP-UKXHRPYURGFMIXEC -s 10.98.0.2/32 -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-MARK-MASQ
-A KUBE-SEP-UKXHRPYURGFMIXEC -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.98.0.2:9153
-A KUBE-SVC-SIMN7BSCUVAV2FC7 ! -s 10.244.0.0/16 -d 10.1.108.219/32 -p tcp -m comment --comment "istio-operator/istio-operator:http-metrics cluster IP" -m tcp --dport 8383 -j KUBE-MARK-MASQ
-A KUBE-SVC-SIMN7BSCUVAV2FC7 -m comment --comment "istio-operator/istio-operator:http-metrics" -j KUBE-SEP-JZE46AANXZMPOG6E
-A KUBE-SEP-JZE46AANXZMPOG6E -s 10.98.4.9/32 -m comment --comment "istio-operator/istio-operator:http-metrics" -j KUBE-MARK-MASQ
-A KUBE-SEP-JZE46AANXZMPOG6E -p tcp -m comment --comment "istio-operator/istio-operator:http-metrics" -m tcp -j DNAT --to-destination 10.98.4.9:8383
-A KUBE-SVC-NVNLZVDQSGQUD3NM ! -s 10.244.0.0/16 -d 10.1.65.239/32 -p tcp -m comment --comment "istio-system/istiod:grpc-xds cluster IP" -m tcp --dport 15010 -j KUBE-MARK-MASQ
-A KUBE-SVC-NVNLZVDQSGQUD3NM -m comment --comment "istio-system/istiod:grpc-xds" -j KUBE-SEP-JGORAKDXFN2C552T
-A KUBE-SEP-JGORAKDXFN2C552T -s 10.98.3.6/32 -m comment --comment "istio-system/istiod:grpc-xds" -j KUBE-MARK-MASQ
-A KUBE-SEP-JGORAKDXFN2C552T -p tcp -m comment --comment "istio-system/istiod:grpc-xds" -m tcp -j DNAT --to-destination 10.98.3.6:15010
-A KUBE-SVC-WHNIZNLB5XFXIX2C ! -s 10.244.0.0/16 -d 10.1.65.239/32 -p tcp -m comment --comment "istio-system/istiod:https-webhook cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SVC-WHNIZNLB5XFXIX2C -m comment --comment "istio-system/istiod:https-webhook" -j KUBE-SEP-765EF7GFE5EPKZPF
-A KUBE-SEP-765EF7GFE5EPKZPF -s 10.98.3.6/32 -m comment --comment "istio-system/istiod:https-webhook" -j KUBE-MARK-MASQ
-A KUBE-SEP-765EF7GFE5EPKZPF -p tcp -m comment --comment "istio-system/istiod:https-webhook" -m tcp -j DNAT --to-destination 10.98.3.6:15017
-A KUBE-SVC-XHUBMW47Y5G3ICIS ! -s 10.244.0.0/16 -d 10.1.65.239/32 -p tcp -m comment --comment "istio-system/istiod:http-monitoring cluster IP" -m tcp --dport 15014 -j KUBE-MARK-MASQ
-A KUBE-SVC-XHUBMW47Y5G3ICIS -m comment --comment "istio-system/istiod:http-monitoring" -j KUBE-SEP-PUBIY4UWEXDC3U2B
-A KUBE-SEP-PUBIY4UWEXDC3U2B -s 10.98.3.6/32 -m comment --comment "istio-system/istiod:http-monitoring" -j KUBE-MARK-MASQ
-A KUBE-SEP-PUBIY4UWEXDC3U2B -p tcp -m comment --comment "istio-system/istiod:http-monitoring" -m tcp -j DNAT --to-destination 10.98.3.6:15014
-A KUBE-SVC-CG3LQLBYYHBKATGN ! -s 10.244.0.0/16 -d 10.1.65.239/32 -p tcp -m comment --comment "istio-system/istiod:https-dns cluster IP" -m tcp --dport 15012 -j KUBE-MARK-MASQ
-A KUBE-SVC-CG3LQLBYYHBKATGN -m comment --comment "istio-system/istiod:https-dns" -j KUBE-SEP-D4PT4JE4IH3ZSRCU
-A KUBE-SEP-D4PT4JE4IH3ZSRCU -s 10.98.3.6/32 -m comment --comment "istio-system/istiod:https-dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-D4PT4JE4IH3ZSRCU -p tcp -m comment --comment "istio-system/istiod:https-dns" -m tcp -j DNAT --to-destination 10.98.3.6:15012
-A KUBE-SVC-S4S242M2WNFIAT6Y ! -s 10.244.0.0/16 -d 10.1.226.63/32 -p tcp -m comment --comment "istio-system/istio-ingressgateway:tls cluster IP" -m tcp --dport 15443 -j KUBE-MARK-MASQ
-A KUBE-SVC-S4S242M2WNFIAT6Y -p tcp -m comment --comment "istio-system/istio-ingressgateway:tls" -m tcp --dport 31800 -j KUBE-MARK-MASQ
-A KUBE-SVC-S4S242M2WNFIAT6Y -m comment --comment "istio-system/istio-ingressgateway:tls" -j KUBE-SEP-BYVWZYGQLZSLXO7N
-A KUBE-SEP-BYVWZYGQLZSLXO7N -s 10.98.4.10/32 -m comment --comment "istio-system/istio-ingressgateway:tls" -j KUBE-MARK-MASQ
-A KUBE-SEP-BYVWZYGQLZSLXO7N -p tcp -m comment --comment "istio-system/istio-ingressgateway:tls" -m tcp -j DNAT --to-destination 10.98.4.10:15443
-A KUBE-SVC-G6D3V5KS3PXPUEDS ! -s 10.244.0.0/16 -d 10.1.226.63/32 -p tcp -m comment --comment "istio-system/istio-ingressgateway:http2 cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SVC-G6D3V5KS3PXPUEDS -p tcp -m comment --comment "istio-system/istio-ingressgateway:http2" -m tcp --dport 31614 -j KUBE-MARK-MASQ
-A KUBE-SVC-G6D3V5KS3PXPUEDS -m comment --comment "istio-system/istio-ingressgateway:http2" -j KUBE-SEP-IIYQ4WXDTEOK6OD5
-A KUBE-SEP-IIYQ4WXDTEOK6OD5 -s 10.98.4.10/32 -m comment --comment "istio-system/istio-ingressgateway:http2" -j KUBE-MARK-MASQ
-A KUBE-SEP-IIYQ4WXDTEOK6OD5 -p tcp -m comment --comment "istio-system/istio-ingressgateway:http2" -m tcp -j DNAT --to-destination 10.98.4.10:8080
-A KUBE-SVC-7N6LHPYFOVFT454K ! -s 10.244.0.0/16 -d 10.1.226.63/32 -p tcp -m comment --comment "istio-system/istio-ingressgateway:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SVC-7N6LHPYFOVFT454K -p tcp -m comment --comment "istio-system/istio-ingressgateway:https" -m tcp --dport 31088 -j KUBE-MARK-MASQ
-A KUBE-SVC-7N6LHPYFOVFT454K -m comment --comment "istio-system/istio-ingressgateway:https" -j KUBE-SEP-Z7GS6PWWK5U4RYAH
-A KUBE-SEP-Z7GS6PWWK5U4RYAH -s 10.98.4.10/32 -m comment --comment "istio-system/istio-ingressgateway:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-Z7GS6PWWK5U4RYAH -p tcp -m comment --comment "istio-system/istio-ingressgateway:https" -m tcp -j DNAT --to-destination 10.98.4.10:8443
-A KUBE-SVC-62L5C2KEOX6ICGVJ ! -s 10.244.0.0/16 -d 10.1.226.63/32 -p tcp -m comment --comment "istio-system/istio-ingressgateway:tcp cluster IP" -m tcp --dport 31400 -j KUBE-MARK-MASQ
-A KUBE-SVC-62L5C2KEOX6ICGVJ -p tcp -m comment --comment "istio-system/istio-ingressgateway:tcp" -m tcp --dport 32587 -j KUBE-MARK-MASQ
-A KUBE-SVC-62L5C2KEOX6ICGVJ -m comment --comment "istio-system/istio-ingressgateway:tcp" -j KUBE-SEP-QYABR3OCVCVPKBDB
-A KUBE-SEP-QYABR3OCVCVPKBDB -s 10.98.4.10/32 -m comment --comment "istio-system/istio-ingressgateway:tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-QYABR3OCVCVPKBDB -p tcp -m comment --comment "istio-system/istio-ingressgateway:tcp" -m tcp -j DNAT --to-destination 10.98.4.10:31400
-A KUBE-SVC-TFRZ6Y6WOLX5SOWZ ! -s 10.244.0.0/16 -d 10.1.226.63/32 -p tcp -m comment --comment "istio-system/istio-ingressgateway:status-port cluster IP" -m tcp --dport 15021 -j KUBE-MARK-MASQ
-A KUBE-SVC-TFRZ6Y6WOLX5SOWZ -p tcp -m comment --comment "istio-system/istio-ingressgateway:status-port" -m tcp --dport 32272 -j KUBE-MARK-MASQ
-A KUBE-SVC-TFRZ6Y6WOLX5SOWZ -m comment --comment "istio-system/istio-ingressgateway:status-port" -j KUBE-SEP-ZCZ3ZIHDOBOCPGZJ
-A KUBE-SEP-ZCZ3ZIHDOBOCPGZJ -s 10.98.4.10/32 -m comment --comment "istio-system/istio-ingressgateway:status-port" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZCZ3ZIHDOBOCPGZJ -p tcp -m comment --comment "istio-system/istio-ingressgateway:status-port" -m tcp -j DNAT --to-destination 10.98.4.10:15021
-A KUBE-SVC-IBZWWK3KTI7UHZ5A ! -s 10.244.0.0/16 -d 10.1.255.46/32 -p tcp -m comment --comment "istio-system/istio-egressgateway:http2 cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SVC-IBZWWK3KTI7UHZ5A -m comment --comment "istio-system/istio-egressgateway:http2" -j KUBE-SEP-JRW2B4SW6QS2OGYA
-A KUBE-SEP-JRW2B4SW6QS2OGYA -s 10.98.3.7/32 -m comment --comment "istio-system/istio-egressgateway:http2" -j KUBE-MARK-MASQ
-A KUBE-SEP-JRW2B4SW6QS2OGYA -p tcp -m comment --comment "istio-system/istio-egressgateway:http2" -m tcp -j DNAT --to-destination 10.98.3.7:8080
-A KUBE-SVC-F2IARDLERJIFF7VR ! -s 10.244.0.0/16 -d 10.1.255.46/32 -p tcp -m comment --comment "istio-system/istio-egressgateway:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SVC-F2IARDLERJIFF7VR -m comment --comment "istio-system/istio-egressgateway:https" -j KUBE-SEP-VUG5WQMKBHWAN7ZE
-A KUBE-SEP-VUG5WQMKBHWAN7ZE -s 10.98.3.7/32 -m comment --comment "istio-system/istio-egressgateway:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-VUG5WQMKBHWAN7ZE -p tcp -m comment --comment "istio-system/istio-egressgateway:https" -m tcp -j DNAT --to-destination 10.98.3.7:8443
-A KUBE-SVC-53SQRANQXVHTJ6HK ! -s 10.244.0.0/16 -d 10.1.236.187/32 -p tcp -m comment --comment "default/reviews:http cluster IP" -m tcp --dport 9080 -j KUBE-MARK-MASQ
-A KUBE-SVC-53SQRANQXVHTJ6HK -m comment --comment "default/reviews:http" -m statistic --mode random --probability 0.33333333349 -j KUBE-SEP-VN2563Y2S4SEHFZM
-A KUBE-SVC-53SQRANQXVHTJ6HK -m comment --comment "default/reviews:http" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-RHCJ5EV7FHKCNZUU
-A KUBE-SVC-53SQRANQXVHTJ6HK -m comment --comment "default/reviews:http" -j KUBE-SEP-IWJWCF4BOHEY36BZ
-A KUBE-SEP-RHCJ5EV7FHKCNZUU -s 10.98.4.31/32 -m comment --comment "default/reviews:http" -j KUBE-MARK-MASQ
-A KUBE-SEP-RHCJ5EV7FHKCNZUU -p tcp -m comment --comment "default/reviews:http" -m tcp -j DNAT --to-destination 10.98.4.31:9080
-A KUBE-SEP-VN2563Y2S4SEHFZM -s 10.98.3.29/32 -m comment --comment "default/reviews:http" -j KUBE-MARK-MASQ
-A KUBE-SEP-VN2563Y2S4SEHFZM -p tcp -m comment --comment "default/reviews:http" -m tcp -j DNAT --to-destination 10.98.3.29:9080
-A KUBE-SEP-IWJWCF4BOHEY36BZ -s 10.98.4.32/32 -m comment --comment "default/reviews:http" -j KUBE-MARK-MASQ
-A KUBE-SEP-IWJWCF4BOHEY36BZ -p tcp -m comment --comment "default/reviews:http" -m tcp -j DNAT --to-destination 10.98.4.32:9080
-A KUBE-SVC-SB7WEE53EMIXFNKY ! -s 10.244.0.0/16 -d 10.1.164.17/32 -p tcp -m comment --comment "default/details:http cluster IP" -m tcp --dport 9080 -j KUBE-MARK-MASQ
-A KUBE-SVC-SB7WEE53EMIXFNKY -m comment --comment "default/details:http" -j KUBE-SEP-FG7KL5MWSB7UGTBW
-A KUBE-SEP-FG7KL5MWSB7UGTBW -s 10.98.4.30/32 -m comment --comment "default/details:http" -j KUBE-MARK-MASQ
-A KUBE-SEP-FG7KL5MWSB7UGTBW -p tcp -m comment --comment "default/details:http" -m tcp -j DNAT --to-destination 10.98.4.30:9080
-A KUBE-SVC-4MYBDLPZ2DFGC5Z6 ! -s 10.244.0.0/16 -d 10.1.123.2/32 -p tcp -m comment --comment "default/ratings:http cluster IP" -m tcp --dport 9080 -j KUBE-MARK-MASQ
-A KUBE-SVC-4MYBDLPZ2DFGC5Z6 -m comment --comment "default/ratings:http" -j KUBE-SEP-5NOPIYZ2DEMWJ4BE
-A KUBE-SEP-5NOPIYZ2DEMWJ4BE -s 10.98.3.28/32 -m comment --comment "default/ratings:http" -j KUBE-MARK-MASQ
-A KUBE-SEP-5NOPIYZ2DEMWJ4BE -p tcp -m comment --comment "default/ratings:http" -m tcp -j DNAT --to-destination 10.98.3.28:9080
-A KUBE-SVC-ROH4UCJ7RVN2OSM4 ! -s 10.244.0.0/16 -d 10.1.176.191/32 -p tcp -m comment --comment "default/productpage:http cluster IP" -m tcp --dport 9080 -j KUBE-MARK-MASQ
-A KUBE-SVC-ROH4UCJ7RVN2OSM4 -m comment --comment "default/productpage:http" -j KUBE-SEP-EXEAVA34QMIGNSLM
-A KUBE-SEP-EXEAVA34QMIGNSLM -s 10.98.3.30/32 -m comment --comment "default/productpage:http" -j KUBE-MARK-MASQ
-A KUBE-SEP-EXEAVA34QMIGNSLM -p tcp -m comment --comment "default/productpage:http" -m tcp -j DNAT --to-destination 10.98.3.30:9080
COMMIT
# Completed on Tue Nov 23 09:48:28 2021
# Generated by iptables-save v1.8.4 on Tue Nov 23 09:48:28 2021
*mangle
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:LIBVIRT_PRT - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
-A POSTROUTING -j LIBVIRT_PRT
-A LIBVIRT_PRT -o virbr0 -p udp -m udp --dport 68 -j CHECKSUM --checksum-fill
COMMIT
# Completed on Tue Nov 23 09:48:28 2021

@howardjohn
Copy link
Member

I mean in the pod and/or VM that has the issue

@Noksa
Copy link

Noksa commented Nov 29, 2021

I receive the same error when two pods are on the same mesh but one of them has the following annotations (all traffic bypasses istio).

Error:

upstream connect error or disconnect/reset before headers. reset reason: connection failure, transport failure reason: TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER

Annotations on one of pods:

traffic.sidecar.istio.io/includeInboundPorts: ""
traffic.sidecar.istio.io/includeOutboundPorts: ""

To solve this currently it is needed to create a DestinationRule as described above but in my case I've added this to a specific service:

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: sbc
spec:
  host: sbc
  trafficPolicy:
    tls:
      mode: DISABLE

Is it correct behavior?

@lxv458
Copy link

lxv458 commented Dec 8, 2021

I receive the same error when two pods are on the same mesh but one of them has the following annotations (all traffic bypasses istio).

Error:

upstream connect error or disconnect/reset before headers. reset reason: connection failure, transport failure reason: TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER

Annotations on one of pods:

traffic.sidecar.istio.io/includeInboundPorts: ""
traffic.sidecar.istio.io/includeOutboundPorts: ""

To solve this currently it is needed to create a DestinationRule as described above but in my case I've added this to a specific service:

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: sbc
spec:
  host: sbc
  trafficPolicy:
    tls:
      mode: DISABLE

Is it correct behavior?

I also met this tls error when configuring the annotation traffic.sidecar.istio.io/includeInboundPorts: "" to make inbound traffic bypass sidecar.

This solution is useful and worked for me.

@Dudesons
Copy link

Dudesons commented Dec 9, 2021

We had the same issue and setting the tls block with mode DISABLE fix the issue on our side.
I have a question for @lxv458 and @Noksa about your istio configuration.
Are you in strict or permissive mode ? On our side we are in permissive mode so maybe it's a bug with permissive mode when tls block is not set

@Noksa
Copy link

Noksa commented Dec 10, 2021

@Dudesons I didn't change this setting and as I know by default it is permissive.
So i my case it is also permissive mode.

@timcosta
Copy link

This also affects the instructions for running prometheus in a (mostly) strict mesh: https://istio.io/latest/docs/ops/integrations/prometheus/

Grafana and Prometheus are running with sidecars using the above configurations and grafana is unable to talk to prometheus. Additionally, we're unable to route to it from a gateway using a virtualservice. Both scenarios get the ssl wrong version error.

@nataraj24
Copy link

We are also getting same below error when we enable mTLS option in mesh.

"268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER"

Currently destination rule for each service is set as STRICT mode.

Any other resolution other than disabling the TLS mode in destination rule

@lxv458
Copy link

lxv458 commented Jan 11, 2022

@Dudesons I also use the default settings, it should be permissive mode

@sanwen
Copy link

sanwen commented Feb 8, 2022

I have a similar problem. And I add a PeerAuthentication, it's just solved.

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  generation: 4
  name: myapp
  namespace: mynamespace
spec:
  mtls:
    mode: PERMISSIVE
  selector:
    matchLabels:
      app: myapp

For the value of mtls.mode, both PERMISSIVE and STRICT work, but UNSET does not.

Ref:

https://istio.io/latest/docs/tasks/security/authentication/authn-policy/#auto-mutual-tls
Chinese version: https://istio.io/latest/zh/docs/tasks/security/authentication/authn-policy/

@howardjohn
Copy link
Member

For the value of mtls.mode, both PERMISSIVE and STRICT work, but UNSET does not.

That seems odd; PERMISSIVE mode is identical to unset.

@sanwen
Copy link

sanwen commented Feb 9, 2022

@howardjohn Maybe I know why. I found that my cluster (Istio version: 1.8.4-r2) has a global PeerAuthentication with mode DISABLE:

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: DISABLE

The tls.mode set in the DestinationRule of myapp is ISTIO_MUTUAL. The two configurations are inconsistent, and that is why I get the 'OPENSSL_internal:WRONG_VERSION_NUMBER' error.

DR:

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: myapp
  namespace: mynamespace
spec:
  host: myapp.mynamespace.svc.cluster.local
  subsets:
  - labels:
      version: v51777
    name: v51777
  - labels:
      version: 20220208091536-v26641
    name: 20220208091536-v26641
  trafficPolicy:
    connectionPool:
      http:
        idleTimeout: 15s
      tcp:
        maxConnections: 2048
    loadBalancer:
      simple: ROUND_ROBIN
    tls:
      mode: ISTIO_MUTUAL

@nataraj24
Copy link

nataraj24 commented Feb 10, 2022

Will creating a PeerAuthentication object with the mTLS option disabled at the mesh level solve the OPENSSL_internal:WRONG_VERSION_NUMBER problem?

@sanwen
Copy link

sanwen commented Feb 10, 2022

@nataraj24 I think the key point is that the TLS config in the PeerAuthentication and the DestinationRule should not conflict.
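
As an illustration of that point, a consistent pair might look like the sketch below (namespace and app names are hypothetical, reusing the myapp example from earlier comments). If the DestinationRule forces ISTIO_MUTUAL while the server-side policy disables mTLS, the client sends TLS to a plaintext listener, which is exactly what produces the WRONG_VERSION_NUMBER error:

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: myapp
  namespace: mynamespace
spec:
  selector:
    matchLabels:
      app: myapp
  mtls:
    mode: STRICT            # server side only accepts mTLS
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: myapp
  namespace: mynamespace
spec:
  host: myapp.mynamespace.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL    # client side originates mTLS, matching the policy above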

@song0071000
Copy link

Will creating a PeerAuthentication object with the mTLS option disabled at the mesh level solve the OPENSSL_internal:WRONG_VERSION_NUMBER problem?

So does it solve the problem or not?
I have the same problem!

@tanjunchen
Copy link
Member Author

Will creating a PeerAuthentication object with the mTLS option disabled at the mesh level solve the OPENSSL_internal:WRONG_VERSION_NUMBER problem?

So does it solve the problem or not? I have the same problem!
For now, please refer to this #35870 (comment).

@andrevcf
Copy link

andrevcf commented Apr 24, 2022

I have the same issue here, following the docs at https://istio.io/latest/docs/setup/install/virtual-machine/#configure-the-virtual-machine.
In my case, the problem was that my test on the VM side ran Docker exposing a port in the default network mode: sudo docker run --rm -it -p 8080:80 nginx.
If I switch to host network mode it works normally (traffic VM->MESH & MESH->VM): sudo docker run --rm -it --network host nginx.
I believe it was a problem with the iptables rules: the traffic was going directly to the nginx port instead of passing through the Envoy proxy. With --network host the traffic goes to Envoy first and then to the nginx port.

I can also confirm that if I DISABLE mTLS, it works with Docker on either the 'default' or the 'host' network.
Below is my peerauthentication.yaml with mTLS DISABLED:

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: test-istio-vm
  namespace: istio-vm
spec:
  mtls:
    mode: DISABLE
  selector:
    matchLabels:
      app: test-istio-vm

@hzxuzhonghu
Copy link
Member

@sanwen Can you test it on master? 1.8.4 is not maintained.

@hzxuzhonghu
Copy link
Member

@andrevcf We cannot figure out what's wrong from your info. Can you show your WorkloadEntry and ServiceEntry? And if you suspect the traffic is not being intercepted, you can also run iptables-save.

@andrevcf
Copy link

andrevcf commented Apr 25, 2022

@hzxuzhonghu, my WorkloadEntry points to a VM with nginx running inside Docker
(inside the VM: sudo docker run --rm -it -p 80:80 nginx):

apiVersion: networking.istio.io/v1beta1
kind: WorkloadEntry
metadata:
  name: test-istio-vm
spec:
  serviceAccount: istio-vm-sa
  address: 10.205.2.8
  labels:
    app: test-istio-vm
    instance-id: test-istio-vm

The service on k8s is pointing to two ports just for testing purposes:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: test-istio-vm
  name: test-istio-vm
  namespace: istio-vm
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: test-istio-vm
  type: ClusterIP

If I call the service from inside the mesh with curl http://test-istio-vm.istio-vm:80, it returns:

< HTTP/1.1 503 Service Unavailable
< content-length: 190
< content-type: text/plain
< date: Mon, 25 Apr 2022 01:35:51 GMT
< server: envoy
< 
* Connection #0 to host test-istio-vm.istio-vm left intact
upstream connect error or disconnect/reset before headers. reset reason: connection failure, transport failure reason: TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER/

iptables-save WITHOUT --net host

# Generated by iptables-save v1.8.4 on Mon Apr 25 01:42:26 2022
*raw
:PREROUTING ACCEPT [3976:7885478]
:OUTPUT ACCEPT [4389:1350592]
-A PREROUTING -d 127.0.0.53/32 -p udp -m udp --sport 53 -j CT --zone 1
-A OUTPUT -p udp -m udp --dport 53 -m owner --uid-owner 997 -j CT --zone 1
-A OUTPUT -p udp -m udp --sport 15053 -m owner --uid-owner 997 -j CT --zone 2
-A OUTPUT -p udp -m udp --dport 53 -m owner --gid-owner 997 -j CT --zone 1
-A OUTPUT -p udp -m udp --sport 15053 -m owner --gid-owner 997 -j CT --zone 2
-A OUTPUT -d 127.0.0.53/32 -p udp -m udp --dport 53 -j CT --zone 2
COMMIT
# Completed on Mon Apr 25 01:42:26 2022
# Generated by iptables-save v1.8.4 on Mon Apr 25 01:42:26 2022
*mangle
:PREROUTING ACCEPT [67542:336741183]
:INPUT ACCEPT [67374:336720441]
:FORWARD ACCEPT [168:20742]
:OUTPUT ACCEPT [73603:136411188]
:POSTROUTING ACCEPT [61890:135720222]
COMMIT
# Completed on Mon Apr 25 01:42:26 2022
# Generated by iptables-save v1.8.4 on Mon Apr 25 01:42:26 2022
*filter
:INPUT ACCEPT [37:11475]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [51:15403]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 80 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
COMMIT
# Completed on Mon Apr 25 01:42:26 2022
# Generated by iptables-save v1.8.4 on Mon Apr 25 01:42:26 2022
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [4:240]
:POSTROUTING ACCEPT [3:211]
:DOCKER - [0:0]
:ISTIO_INBOUND - [0:0]
:ISTIO_IN_REDIRECT - [0:0]
:ISTIO_OUTPUT - [0:0]
:ISTIO_REDIRECT - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A PREROUTING -p tcp -j ISTIO_INBOUND
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -p tcp -j ISTIO_OUTPUT
-A OUTPUT -p udp -m udp --dport 53 -m owner --uid-owner 997 -j RETURN
-A OUTPUT -p udp -m udp --dport 53 -m owner --gid-owner 997 -j RETURN
-A OUTPUT -d 127.0.0.53/32 -p udp -m udp --dport 53 -j REDIRECT --to-ports 15053
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 80 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 172.17.0.2:80
-A ISTIO_INBOUND -p tcp -m tcp --dport 15008 -j RETURN
-A ISTIO_INBOUND -p tcp -m tcp --dport 22 -j RETURN
-A ISTIO_INBOUND -p tcp -m tcp --dport 15090 -j RETURN
-A ISTIO_INBOUND -p tcp -m tcp --dport 15021 -j RETURN
-A ISTIO_INBOUND -p tcp -m tcp --dport 15020 -j RETURN
-A ISTIO_INBOUND -p tcp -j ISTIO_IN_REDIRECT
-A ISTIO_IN_REDIRECT -p tcp -j REDIRECT --to-ports 15006
-A ISTIO_OUTPUT -s 127.0.0.6/32 -o lo -j RETURN
-A ISTIO_OUTPUT ! -d 127.0.0.1/32 -o lo -p tcp -m tcp ! --dport 53 -m owner --uid-owner 997 -j ISTIO_IN_REDIRECT
-A ISTIO_OUTPUT -o lo -p tcp -m tcp ! --dport 53 -m owner ! --uid-owner 997 -j RETURN
-A ISTIO_OUTPUT -m owner --uid-owner 997 -j RETURN
-A ISTIO_OUTPUT ! -d 127.0.0.1/32 -o lo -m owner --gid-owner 997 -j ISTIO_IN_REDIRECT
-A ISTIO_OUTPUT -o lo -p tcp -m tcp ! --dport 53 -m owner ! --gid-owner 997 -j RETURN
-A ISTIO_OUTPUT -m owner --gid-owner 997 -j RETURN
-A ISTIO_OUTPUT -d 127.0.0.53/32 -p tcp -m tcp --dport 53 -j REDIRECT --to-ports 15053
-A ISTIO_OUTPUT -d 127.0.0.1/32 -j RETURN
-A ISTIO_OUTPUT -j ISTIO_REDIRECT
-A ISTIO_REDIRECT -p tcp -j REDIRECT --to-ports 15001
COMMIT
# Completed on Mon Apr 25 01:42:26 2022
# Generated by iptables-save v1.8.4 on Mon Apr 25 01:42:26 2022
*security
:INPUT ACCEPT [1675:2707628]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [1608:558317]
-A OUTPUT -d 168.63.129.16/32 -p tcp -m tcp --dport 53 -j ACCEPT
-A OUTPUT -d 168.63.129.16/32 -p tcp -m owner --uid-owner 0 -j ACCEPT
-A OUTPUT -d 168.63.129.16/32 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP
-A OUTPUT -d 168.63.129.16/32 -p tcp -m tcp --dport 53 -j ACCEPT
-A OUTPUT -d 168.63.129.16/32 -p tcp -m owner --uid-owner 0 -j ACCEPT
-A OUTPUT -d 168.63.129.16/32 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP
COMMIT
# Completed on Mon Apr 25 01:42:26 2022

=============================================================================
BELOW IS THE CODE THAT WORKS

If I just add --net host to the docker run command, then it works:
sudo docker run --rm -it --net host nginx

iptables-save with the use of --net host

# Generated by iptables-save v1.8.4 on Mon Apr 25 01:39:17 2022
*raw
:PREROUTING ACCEPT [3114:5319942]
:OUTPUT ACCEPT [3357:1043366]
-A PREROUTING -d 127.0.0.53/32 -p udp -m udp --sport 53 -j CT --zone 1
-A OUTPUT -p udp -m udp --dport 53 -m owner --uid-owner 997 -j CT --zone 1
-A OUTPUT -p udp -m udp --sport 15053 -m owner --uid-owner 997 -j CT --zone 2
-A OUTPUT -p udp -m udp --dport 53 -m owner --gid-owner 997 -j CT --zone 1
-A OUTPUT -p udp -m udp --sport 15053 -m owner --gid-owner 997 -j CT --zone 2
-A OUTPUT -d 127.0.0.53/32 -p udp -m udp --dport 53 -j CT --zone 2
COMMIT
# Completed on Mon Apr 25 01:39:17 2022
# Generated by iptables-save v1.8.4 on Mon Apr 25 01:39:17 2022
*mangle
:PREROUTING ACCEPT [66680:334175647]
:INPUT ACCEPT [66512:334154905]
:FORWARD ACCEPT [168:20742]
:OUTPUT ACCEPT [72568:136104210]
:POSTROUTING ACCEPT [61055:135425244]
COMMIT
# Completed on Mon Apr 25 01:39:17 2022
# Generated by iptables-save v1.8.4 on Mon Apr 25 01:39:17 2022
*filter
:INPUT ACCEPT [440:72942]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [476:122758]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
COMMIT
# Completed on Mon Apr 25 01:39:17 2022
# Generated by iptables-save v1.8.4 on Mon Apr 25 01:39:17 2022
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [84:5229]
:POSTROUTING ACCEPT [35:2413]
:DOCKER - [0:0]
:ISTIO_INBOUND - [0:0]
:ISTIO_IN_REDIRECT - [0:0]
:ISTIO_OUTPUT - [0:0]
:ISTIO_REDIRECT - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A PREROUTING -p tcp -j ISTIO_INBOUND
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -p tcp -j ISTIO_OUTPUT
-A OUTPUT -p udp -m udp --dport 53 -m owner --uid-owner 997 -j RETURN
-A OUTPUT -p udp -m udp --dport 53 -m owner --gid-owner 997 -j RETURN
-A OUTPUT -d 127.0.0.53/32 -p udp -m udp --dport 53 -j REDIRECT --to-ports 15053
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
-A ISTIO_INBOUND -p tcp -m tcp --dport 15008 -j RETURN
-A ISTIO_INBOUND -p tcp -m tcp --dport 22 -j RETURN
-A ISTIO_INBOUND -p tcp -m tcp --dport 15090 -j RETURN
-A ISTIO_INBOUND -p tcp -m tcp --dport 15021 -j RETURN
-A ISTIO_INBOUND -p tcp -m tcp --dport 15020 -j RETURN
-A ISTIO_INBOUND -p tcp -j ISTIO_IN_REDIRECT
-A ISTIO_IN_REDIRECT -p tcp -j REDIRECT --to-ports 15006
-A ISTIO_OUTPUT -s 127.0.0.6/32 -o lo -j RETURN
-A ISTIO_OUTPUT ! -d 127.0.0.1/32 -o lo -p tcp -m tcp ! --dport 53 -m owner --uid-owner 997 -j ISTIO_IN_REDIRECT
-A ISTIO_OUTPUT -o lo -p tcp -m tcp ! --dport 53 -m owner ! --uid-owner 997 -j RETURN
-A ISTIO_OUTPUT -m owner --uid-owner 997 -j RETURN
-A ISTIO_OUTPUT ! -d 127.0.0.1/32 -o lo -m owner --gid-owner 997 -j ISTIO_IN_REDIRECT
-A ISTIO_OUTPUT -o lo -p tcp -m tcp ! --dport 53 -m owner ! --gid-owner 997 -j RETURN
-A ISTIO_OUTPUT -m owner --gid-owner 997 -j RETURN
-A ISTIO_OUTPUT -d 127.0.0.53/32 -p tcp -m tcp --dport 53 -j REDIRECT --to-ports 15053
-A ISTIO_OUTPUT -d 127.0.0.1/32 -j RETURN
-A ISTIO_OUTPUT -j ISTIO_REDIRECT
-A ISTIO_REDIRECT -p tcp -j REDIRECT --to-ports 15001
COMMIT
# Completed on Mon Apr 25 01:39:17 2022
# Generated by iptables-save v1.8.4 on Mon Apr 25 01:39:17 2022
*security
:INPUT ACCEPT [813:142092]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [776:266107]
-A OUTPUT -d 168.63.129.16/32 -p tcp -m tcp --dport 53 -j ACCEPT
-A OUTPUT -d 168.63.129.16/32 -p tcp -m owner --uid-owner 0 -j ACCEPT
-A OUTPUT -d 168.63.129.16/32 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP
-A OUTPUT -d 168.63.129.16/32 -p tcp -m tcp --dport 53 -j ACCEPT
-A OUTPUT -d 168.63.129.16/32 -p tcp -m owner --uid-owner 0 -j ACCEPT
-A OUTPUT -d 168.63.129.16/32 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP
COMMIT
# Completed on Mon Apr 25 01:39:17 2022

@hzxuzhonghu
Copy link
Member

Can you check the inbound cluster type? I suspect you are using an old version of Istio where the inbound cluster type is not ORIGINAL_DST.

Run istioctl pc cluster xxxx | grep inbound; on the VM you would need to call the admin API by hand.

@andrevcf
Copy link

andrevcf commented Apr 25, 2022

@hzxuzhonghu, inside a pod running in the mesh (productpage of bookinfo):
istioctl pc cluster xxxx | grep inbound
9080 - inbound ORIGINAL_DST

istioctl version
client version: 1.13.3
control plane version: 1.13.3
data plane version: 1.13.3 (10 proxies), 1.13.0 (2 proxies)

@andrevcf
Copy link

@hzxuzhonghu, does this mean that I must use --net host for Docker to work with Istio on a VM?
That's fine by me! It works well!
If you want more information I can try to help, but I'm OK with adding --net host if it is a requirement.

@hzxuzhonghu
Copy link
Member

The Envoy inbound cluster is ORIGINAL_DST type, so it will send the packet to 10.205.2.8:80. And with -A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER, it will not be redirected to 172.xxx:80.

@hzxuzhonghu
Copy link
Member

hzxuzhonghu commented Apr 25, 2022

From all your info, I think --net host is a requirement.

@istio-policy-bot istio-policy-bot added the lifecycle/stale Indicates a PR or issue hasn't been manipulated by an Istio team member for a while label Jul 24, 2022
@tanjunchen
Copy link
Member Author

/stale

@istio-policy-bot istio-policy-bot removed the lifecycle/stale Indicates a PR or issue hasn't been manipulated by an Istio team member for a while label Jul 26, 2022
@istio-policy-bot istio-policy-bot added the lifecycle/stale Indicates a PR or issue hasn't been manipulated by an Istio team member for a while label Oct 24, 2022
@tanjunchen
Copy link
Member Author

/stale

@istio-policy-bot istio-policy-bot removed the lifecycle/stale Indicates a PR or issue hasn't been manipulated by an Istio team member for a while label Oct 24, 2022
@john-a-joyce
Copy link
Contributor

FWIW - I see the same TLS error/failure signature, but my setup is very different. Documenting it here in case it helps others. I am trying an IPv6 scenario, attempting to curl from outside the cluster to inside the cluster. I am mostly following the getting-started sequence with some adjustments required for dual-stack, and also accounting for #29076. When I define the istio-ingressgateway svc to be single-stack IPv4, I can curl from outside just fine with an IPv4 load balancer IP. When I define the svc to be single-stack IPv6 (e.g. an IPv6 load balancer IP), I can't curl from outside, and logs show the traffic is not hitting the sidecar but directly hitting the product page container. When I curl from the ingressgateway istio-proxy container using the product page pod's IPv6 address, it succeeds and the traffic goes through the sidecar. Since this is sufficiently different from the initial complaint, I will open a separate issue.

@istio-policy-bot istio-policy-bot added the lifecycle/stale Indicates a PR or issue hasn't been manipulated by an Istio team member for a while label Jan 22, 2023
@istio-policy-bot
Copy link

🚧 This issue or pull request has been closed due to not having had activity from an Istio team member since 2022-10-24. If you feel this issue or pull request deserves attention, please reopen the issue. Please see this wiki page for more information. Thank you for your contributions.

Created by the issue and PR lifecycle manager.

@istio-policy-bot istio-policy-bot added the lifecycle/automatically-closed Indicates a PR or issue that has been closed automatically. label Feb 6, 2023
@kromanow94
Copy link

This just happened to me as well, but in a slightly different scenario than described here. The issue was that the Pod running the web server didn't have the Istio sidecar injected, but a DestinationRule was specified for this workload.

Changing the DestinationRule to spec.trafficPolicy.tls.mode: DISABLE helped, but it was only a workaround.

Specifying the label istio-injection: enabled on the Namespace, or sidecar.istio.io/inject: "true" in the Pod labels, might help.
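
A minimal sketch of both options (namespace, pod, and image names are hypothetical):

apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace
  labels:
    istio-injection: enabled          # inject sidecars into all pods in this namespace
---
apiVersion: v1
kind: Pod
metadata:
  name: web-server
  namespace: my-namespace
  labels:
    sidecar.istio.io/inject: "true"   # or opt a single pod in explicitly
spec:
  containers:
  - name: web
    image: nginx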
