Reduced resource consumption after Auth restart #25235
Labels: bug, robustness (Resistance to crashes and reliability), scale (Changes required to achieve 100K nodes per cluster), test-plan-problem (Issues which have been surfaced by running the manual release test plan)
Comments
rosstimothy added the bug, scale, and robustness labels on Apr 26, 2023
Closed
zmb3 added the test-plan-problem label on May 2, 2023
Based on the above, it looks like the issue was introduced after 12.1.1 and before 12.1.5. Nothing super obvious in the diff, but here are a few things worth looking at:
A git bisect between 12.1.1 and 12.1.5 revealed that the increased number of connections is a result of #23377
rosstimothy added a commit that referenced this issue on May 4, 2023:
The Okta client was creating a separate connection to Auth in `auth.NewClient` instead of reusing the gRPC connection of the api client. Since this method is used by every Teleport instance when establishing its initial connection to Auth, it poses two problems: 1) Auth now has one additional connection per instance, and 2) proxies require an additional transport channel per tunnel. Fixes #25235
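The gist of the fix is the pattern sketched below: construct the secondary client on top of the gRPC connection the api client has already established, instead of dialing Auth a second time. This is only an illustrative Go sketch; the `okta` package layout and `NewClientFromConn` are hypothetical names, not Teleport's actual API.

```go
// Minimal sketch of the "reuse the existing connection" pattern.
// Names here are illustrative, not Teleport's real types or constructors.
package okta

import (
	"google.golang.org/grpc"
)

// Client wraps a gRPC connection to Auth. In the buggy code path it would
// have dialed its own connection; here it borrows one that already exists.
type Client struct {
	conn *grpc.ClientConn
}

// NewClientFromConn builds an Okta client from an already-established
// connection, so no additional transport channel to Auth is created.
func NewClientFromConn(conn *grpc.ClientConn) *Client {
	return &Client{conn: conn}
}
```

With this shape, each Teleport instance keeps a single gRPC connection to Auth no matter how many feature clients are layered on top of it, which is what removes the extra per-instance connection and the extra per-tunnel transport channel described above.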
github-actions bot pushed a commit that referenced this issue on May 4, 2023
rosstimothy added a commit that referenced this issue on May 4, 2023
rosstimothy added a commit that referenced this issue on May 4, 2023
rosstimothy added a commit that referenced this issue on May 4, 2023
For posterity: the fix for this was released in 12.3.3.
Profiles taken from steps 4, 5, and 7: profiles.zip