
do not build catch all virtual host every time #35449

Merged · 2 commits · Oct 7, 2021

Conversation

@ramaraochavali (Contributor) commented Oct 2, 2021

CatchAllVirtualHost only changes when sidecar for a proxy is changed. We do not have to build it for every route every time.
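The caching idea behind this PR can be sketched as follows. This is a minimal, simplified Go illustration (the `VirtualHost` and `PushContext` types here are hypothetical stand-ins, not Istio's real structs): because the catch-all virtual host depends only on the proxy's Sidecar configuration, it can be built once per push context and reused by every route configuration, instead of being rebuilt per route.

```go
package main

import (
	"fmt"
	"sync"
)

// VirtualHost is a simplified stand-in for Envoy's route.VirtualHost.
type VirtualHost struct {
	Name    string
	Domains []string
}

// PushContext mirrors the idea in this PR: the catch-all virtual host
// only changes when the Sidecar for a proxy changes, so build it once
// per push instead of once per route configuration.
type PushContext struct {
	once       sync.Once
	catchAll   *VirtualHost
	buildCount int // counts builds, for illustration only
}

// CatchAllVirtualHost lazily builds the catch-all vhost exactly once
// and returns the cached instance on every subsequent call.
func (pc *PushContext) CatchAllVirtualHost() *VirtualHost {
	pc.once.Do(func() {
		pc.buildCount++
		pc.catchAll = &VirtualHost{
			Name:    "allow_any",
			Domains: []string{"*"},
		}
	})
	return pc.catchAll
}

func main() {
	pc := &PushContext{}
	// Simulate many route configurations asking for the catch-all vhost.
	for i := 0; i < 100; i++ {
		_ = pc.CatchAllVirtualHost()
	}
	fmt.Println("built", pc.buildCount, "time(s)") // built 1 time(s)
}
```

With 100 simulated route configs, the vhost is constructed a single time; before this change the equivalent work happened once per route configuration on every push.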

  • Configuration Infrastructure
  • Docs
  • Installation
  • Networking
  • Performance and Scalability
  • Policies and Telemetry
  • Security
  • Test and Release
  • User Experience
  • Developer Infrastructure

Signed-off-by: Rama Chavali <rama.rao@salesforce.com>
@ramaraochavali requested review from a team as code owners on Oct 2, 2021
@istio-policy-bot added the area/networking and release-notes-none (indicates a PR that does not require release notes) labels on Oct 2, 2021
@google-cla bot added the cla: yes label (set by the Google CLA bot to indicate the author of a PR has signed the Google CLA) on Oct 2, 2021
@istio-testing added the size/L label (denotes a PR that changes 100-499 lines, ignoring generated files) on Oct 2, 2021
Signed-off-by: Rama Chavali <rama.rao@salesforce.com>
@howardjohn (Member) commented:

/test benchmark_istio

@howardjohn (Member) commented:

Benchmark test shows about a 3% increase, maybe. Is that what you expected, or do you have setups where it's even more impactful?

I am 100% fine with this, but am slightly worried about it making #27826 a bit worse. We need to fix that regardless, for sure...

The other question I had was: where do we stop with this? A natural next step would be to cache the passthrough cluster, etc. I guess since those are only one instance each, it's not a big deal, versus this being one per route config.
It would be slightly preferable to generate the one catch-all in the RDS generator (instead of in push context), but maybe that would require way too much refactoring to make it viable? That way all the route generation would happen in the "right" spot, which is slightly useful for code organization and runtime performance. For example, we are now pushing route generation out of the route generation code, so telemetry is slightly skewed for xDS gen timings, etc. Not a big deal though.

@ramaraochavali (Contributor, Author) commented:

> Benchmark test shows about a 3% increase, maybe. Is that what you expected, or do you have setups where it's even more impactful?

No. But I was hoping that as we scale to more routes and proxies, it might help more.

> I am 100% fine with this, but am slightly worried about it making #27826 a bit worse. We need to fix that regardless, for sure...

Yes, agreed.

> The other question I had was: where do we stop with this? A natural next step would be to cache the passthrough cluster, etc. ... It would be slightly preferable to generate the one catch-all in the RDS generator (instead of in push context), but maybe that would require way too much refactoring to make it viable?

I was going back and forth on this specific thing. That would mean we have to change a lot.

@ramaraochavali (Contributor, Author) commented:

@howardjohn WDYT about this? Should we move forward, or should we try to refactor it into RDS? I can try that if you prefer that approach, but I am not sure how much refactoring it would involve.

@YonkaFang (Contributor) commented:

@howardjohn It looks like what you were worried about has happened, according to #36357.
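The hazard that eventually forced the revert can be sketched in Go. This is a simplified illustration, not Istio's actual code (the types are hypothetical stand-ins; Istio's real virtual hosts are protobufs, which #37612 cloned via `proto.Clone`): once a single cached instance is shared across route configurations, any caller that mutates it corrupts the cache for every other proxy, so the cached value must be deep-copied on the way out.

```go
package main

import "fmt"

// VirtualHost is a simplified stand-in for Envoy's route.VirtualHost.
type VirtualHost struct {
	Name    string
	Domains []string
}

// Clone returns a deep copy. With Istio's real protobuf types this
// would be proto.Clone instead.
func (v *VirtualHost) Clone() *VirtualHost {
	c := *v
	c.Domains = append([]string(nil), v.Domains...)
	return &c
}

var cached = &VirtualHost{Name: "allow_any", Domains: []string{"*"}}

// Unsafe: every caller shares one instance, so a per-route mutation
// leaks into every other route config built from the cache.
func catchAllShared() *VirtualHost { return cached }

// Safe: each caller gets its own copy to mutate freely.
func catchAllCloned() *VirtualHost { return cached.Clone() }

func main() {
	a := catchAllShared()
	a.Domains[0] = "example.com"   // corrupts the shared cache
	fmt.Println(cached.Domains[0]) // example.com

	cached.Domains[0] = "*" // reset for the second demonstration
	b := catchAllCloned()
	b.Domains[0] = "example.com"
	fmt.Println(cached.Domains[0]) // *
}
```

Cloning restores correctness but gives back some of the allocation savings the caching was meant to buy, which is part of why both changes were ultimately reverted below.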

hzxuzhonghu added a commit to hzxuzhonghu/istio that referenced this pull request Mar 2, 2022
istio-testing pushed a commit that referenced this pull request Mar 2, 2022
* Revert "Clone catch all vhost to prevent it from being mutated (#37612)"

This reverts commit e60eee9.

* Revert "do not build catch all virtual host every time (#35449)"

This reverts commit b32710c.

* fix build
aryan16 pushed a commit to aryan16/istio that referenced this pull request Mar 28, 2022
5 participants