Balancing between 2 domains does not work as described in https://istio.io/latest/docs/reference/config/networking/service-entry/ #43462

Closed
sailz116 opened this issue Feb 20, 2023 · 1 comment
Labels: area/networking, lifecycle/automatically-closed, lifecycle/stale

sailz116 commented Feb 20, 2023

Bug Description

Balancing between 2 domains does not work as described in https://istio.io/latest/docs/reference/config/networking/service-entry/
The documentation states:
For HTTP-based services, it is possible to create a VirtualService backed by multiple DNS addressable endpoints. In such a scenario, the application can use the HTTP_PROXY environment variable to transparently reroute API calls for the VirtualService to a chosen backend. For example, the following configuration creates a non-existent external service called foo.bar.com backed by three domains: us.foo.bar.com:8080, uk.foo.bar.com:9080, and in.foo.bar.com:7080
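For reference, the example on that documentation page is a ServiceEntry roughly along these lines (reconstructed from the linked page, so treat the exact names and values as approximate rather than authoritative):

apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-svc-dns
spec:
  hosts:
  - foo.bar.com
  location: MESH_EXTERNAL
  ports:
  - number: 80
    name: http-port
    protocol: HTTP
  resolution: DNS
  endpoints:
  - address: us.foo.bar.com
    ports:
      http-port: 8080
  - address: uk.foo.bar.com
    ports:
      http-port: 9080
  - address: in.foo.bar.com
    ports:
      http-port: 7080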

I created this ServiceEntry:

spec:
  endpoints:
  - address: http-route.apps.cluster1-fqdn
    ports:
      http: 80
  - address: http-route.apps.cluster2-fqdn
    ports:
      http: 80
  exportTo:
  - .
  hosts:
  - srvsc.routes.ru
  location: MESH_INTERNAL
  ports:
  - name: http
    number: 80
    protocol: HTTP
  - name: http-9999-int
    number: 9997
    protocol: HTTP
  resolution: DNS

I created this VirtualService:

spec:
  exportTo:
  - .
  gateways:
  - test-srvsc-gw
  - mesh
  hosts:
  - srvsc.routes.ru
  http:
  - match:
    - gateways:
      - mesh
      port: 9997
    route:
    - destination:
        host: egress-svc-ver201
        port:
          number: 9445
  - match:
    - gateways:
      - test-srvsc-gw
      port: 9445
    route:
    - destination:
        host: srvsc.routes.ru
        port:
          number: 80

I created this Gateway:

spec:
  selector:
    istio: egw-
  servers:
  - hosts:
    - srvsc.routes.ru
    port:
      name: http-9445
      number: 9445
      protocol: HTTP
In the application's environment variables I specified HTTP_PROXY=http://localhost/
When I call curl srvsc.routes.ru:9997/chatx-service-scenario/healthcheck, I get a 503 response from the OpenShift cluster: "Application is not available".
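In shell form the call looks roughly like this (the proxy URL is taken verbatim from the description above; the host and path follow the access log below, and whether the client actually honors the HTTP_PROXY variable depends on the client):

HTTP_PROXY=http://localhost/ curl http://srvsc.routes.ru:9997/chatx-service-scenario/healthcheck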

If you look in the egressgateway log, you can see that the requests go to the IP addresses of the clusters .apps.cluster1-fqdn and .apps.cluster2-fqdn:
"GET /chatx-service-scenario/healthcheck HTTP/1.1" 503 URX "-" "-" 0 2503 51 50 "29.64.119.214" "curl/7.74.0" "e77c6458-79e2-9bca-b4c6-87ceeaa2c58d" "srvsc.routes.ru:9997" "<ip_address_.apps.cluster1-fqdn>:80" outbound|80||srvsc.routes.ru ...
...
"GET /chatx-service-scenario/healthcheck HTTP/1.1" 503 URX "-" "-" 0 3265 56 55 "29.64.119.214" "curl/7.74.0" "7c271c7e-91ae-91cc-b714-e06ec89af313" "srvsc.routes.ru:9997" "<ip_address_.apps.cluster2-fqdn>:80" outbound|80||srvsc.routes.ru ...

I set up an EnvoyFilter for debugging; with it I can see all the headers in the egressgateway log:

spec:
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: GATEWAY
      listener:
        filterChain:
          filter:
            name: envoy.http_connection_manager
            subFilter:
              name: envoy.router
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.lua
        typed_config:
          '@type': type.googleapis.com/envoy.config.filter.http.lua.v2.Lua
          inlineCode: |
            function envoy_on_request(request_handle)
              -- Logging the request and its HTTP headers
              request_handle:logInfo(" ========= START DEBUG ========== ")
              local my_headers = request_handle:headers()
              for key, value in pairs(my_headers) do
                request_handle:logCritical("[REQUEST GET HEADERS] Header:" .. key .. " -> " .. value)
              end
            end
  workloadSelector:
    labels:
      istio: egw-

script log: ========= START DEBUG ==========
script log: [REQUEST GET HEADERS] Header::authority -> srvsc.routes.ru:9997
script log: [REQUEST GET HEADERS] Header::path -> /chatx-service-scenario/healthcheck
script log: [REQUEST GET HEADERS] Header::method -> GET
script log: [REQUEST GET HEADERS] Header:user-agent -> curl/7.74.0
script log: [REQUEST GET HEADERS] Header:accept -> */*
script log: [REQUEST GET HEADERS] Header:x-forwarded-proto -> http
script log: [REQUEST GET HEADERS] Header:x-request-id -> 56739d32-e655-98c7-8c3c-520421a756bc
script log: [REQUEST GET HEADERS] Header:x-envoy-attempt-count -> 3
script log: [REQUEST GET HEADERS] Header:content-length -> 0
script log: [REQUEST GET HEADERS] Header:x-forwarded-for -> 29.64.119.214
script log: [REQUEST GET HEADERS] Header:x-envoy-external-address -> 29.64.119.214
script log: [REQUEST GET HEADERS] Header:x-envoy-decorator-operation -> srvsc.routes.ru:80/*
script log: [REQUEST GET HEADERS] Header:x-envoy-peer-metadata -> ...
script log: [REQUEST GET HEADERS] Header:x-envoy-peer-metadata-id -> ...

I can't understand why the header doesn't contain the address of the endpoint specified in the ServiceEntry.
Instead of the expected header
authority: http-router.apps.cluster1-fqdn:9997 or authority: http-router.apps.cluster2-fqdn:9997,
I see only srvsc.routes.ru:9997.
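One way to check what the egress gateway actually resolved the ServiceEntry endpoints to is istioctl proxy-config (the pod name and namespace below are placeholders, not values from this report):

istioctl proxy-config endpoints <egressgateway-pod> -n istio-system --cluster "outbound|80||srvsc.routes.ru"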

Version

istioctl version 
client version: 1.9.4
control plane version 2.0.*

oc version
Client Version: openshift-clients-4.6.0-202006250705.p0-168-g02c110006
Kubernetes Version: v1.21.6+b4b4813

Additional Information

No response

Affected product area

  • Ambient
  • Docs
  • Installation
  • Networking
  • Performance and Scalability
  • Extensions and Telemetry
  • Security
  • Test and Release
  • User Experience
  • Developer Infrastructure
  • Upgrade
  • Multi Cluster
  • Virtual Machine
  • Control Plane Revisions

Is this the right place to submit this?

  • This is not a security vulnerability
  • This is not a question about how to use Istio
istio-policy-bot added the lifecycle/stale label on May 21, 2023
istio-policy-bot commented
🚧 This issue or pull request has been closed due to not having had activity from an Istio team member since 2023-02-20. If you feel this issue or pull request deserves attention, please reopen the issue. Please see this wiki page for more information. Thank you for your contributions.

Created by the issue and PR lifecycle manager.

istio-policy-bot added the lifecycle/automatically-closed label on Jun 5, 2023