
Conversation

TrilokGeer

Extend the user tag limit to 40, based on AWS limits; 10 tags are reserved for OpenShift use. Previously, there has been confusion between the S3 bucket tag limit and the S3 bucket object tag limit: S3 bucket objects are limited to 10 tags, while the requirement here is for S3 buckets, which have a limit of 50 tags.
Reference : https://docs.aws.amazon.com/AmazonS3/latest/userguide/CostAllocTagging.html
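
In practice the change amounts to bumping the `MaxItems` validation marker on the user tags field in `config/v1`. A minimal sketch of the intended end state, assuming the field is declared roughly as below (names taken from `config/v1/types_infrastructure.go` but trimmed for brevity; the real types carry more fields and markers):

```go
package v1

// AWSResourceTag is a tag to apply to AWS resources created for the cluster.
type AWSResourceTag struct {
	// key is the key of the tag.
	Key string `json:"key"`
	// value is the value of the tag.
	Value string `json:"value"`
}

// AWSPlatformStatus holds the AWS-specific infrastructure status (excerpt).
type AWSPlatformStatus struct {
	// resourceTags is a list of additional tags to apply to AWS resources
	// created for the cluster. AWS allows 50 tags per resource; with 10
	// reserved for OpenShift, up to 40 may be user-defined (previously 25).
	// +kubebuilder:validation:MaxItems=40
	// +optional
	ResourceTags []AWSResourceTag `json:"resourceTags,omitempty"`
}
```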

@openshift-ci openshift-ci bot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Aug 29, 2022
@openshift-ci
Contributor

openshift-ci bot commented Aug 29, 2022

Hello @TrilokGeer! Some important instructions when contributing to openshift/api:
API design plays an important part in the user experience of OpenShift and as such API PRs are subject to a high level of scrutiny to ensure they follow our best practices. If you haven't already done so, please review the OpenShift API Conventions and ensure that your proposed changes are compliant. Following these conventions will help expedite the api review process for your PR.

For merging purposes, this repository follows the no-Feature-Freeze process which means that in addition to the standard lgtm and approved labels this repository requires either:

bugzilla/valid-bug - applied if your PR references a valid bugzilla bug

OR

qe-approved, docs-approved, and px-approved - these labels can be applied by anyone in the openshift org via the /label <labelname> command.

Who should apply these qe/docs/px labels?

  • For a no-Feature-Freeze team who is merging a feature before code freeze, they need to get those labels applied to their api repo PR by the appropriate teams (i.e. qe, docs, px)
  • For a Feature Freeze (traditional) team who is merging a feature before FF, they can self-apply the labels (via /label commands), they are basically irrelevant for those teams
  • For a Feature Freeze team who is merging a feature after FF, the PR should be rejected barring an exception

@openshift-ci openshift-ci bot requested review from knobunc and soltysh August 29, 2022 15:01
@TrilokGeer TrilokGeer changed the title WIP: [CFE-580] Extend user tags limit to 40 based on AWS limits [CFE-580] Extend user tags limit to 40 based on AWS limits Sep 5, 2022
@openshift-ci openshift-ci bot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Sep 5, 2022
@JoelSpeed
Contributor

How many tags are we using in OpenShift today and why are we deciding to change the limit to 10 tags for OpenShift?
What's the customer driver behind moving the customer-assigned tags up to 40? Can you provide some background for this change, please?

@TrilokGeer
Author

How many tags are we using in OpenShift today and why are we deciding to change the limit to 10 tags for OpenShift? What's the customer driver behind moving the customer-assigned tags up to 40? Can you provide some background for this change, please?

For OpenShift, 2 tags are used during installation, while SD sets 4 tags specific to OpenShift. Customers cannot tag infrastructure resources with more than 10 tags in total (user-defined and OpenShift tags combined). This prevents customers from implementing tag-based policy enforcement, even though AWS supports 50 tags in total. To extend the number of user-defined tags, OpenShift needs a fixed reservation. In this proposal, 10 tags are reserved for OpenShift, with 6 as buffer for future requirements, and 40 tags can be used as user-defined tags.
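
As a rough breakdown of the proposed reservation (illustrative only, not code from this PR):

```go
package tags

const (
	awsTagsPerResource   = 50 // AWS allows up to 50 tags per resource
	openShiftReservation = 10 // ~6 in use today (installer + SD), the rest kept as buffer
	maxUserDefinedTags   = awsTagsPerResource - openShiftReservation // 40
)
```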

@TrilokGeer TrilokGeer changed the title [CFE-580] Extend user tags limit to 40 based on AWS limits OCPBUGS-1234 : [CFE-580] Extend user tags limit to 40 based on AWS limits Sep 13, 2022
@JoelSpeed
Contributor

Based on the previous comments, it looks as though we had 25 allocated for customers and 25 allocated for OpenShift, does that not meet the customer requirements?

Where did the limit of 10 come from for users? Can you link me the code?

@TrilokGeer
Author

The installer today allows 2 OpenShift and 8 user-defined tags, regardless of the API limits. To move closer to the 50 tags allowed by AWS, the thought is to reserve 10 tags for OpenShift (with buffer for the future) and make the remaining quota of 40 tags available for user-defined tags.

@JoelSpeed
Contributor

Ahh I see, so the installer has imposed a limit, and because the tags are in status, we are confident no one has gone past those installer limits, so this means the change is safe, that makes more sense.
Could you please get someone from the installer team to confirm this is the case and that they are happy with this change going ahead here and within the installer code?

@TrilokGeer
Author

/cc @patrickdillon

@arendej

arendej commented Sep 13, 2022

Up until now, OpenShift has had a 10 tag limit, total, with 2 already consumed by OpenShift (kubernetes.io and name).
The new state would be 10 tags reserved for Red Hat OpenShift (including the total of 4 when a ROSA/OSD cluster is made), with 6 remaining as buffer for Red Hat to use, which leaves however many remain from our proposed changes for the cluster's owner to set... 25, and then 50, right @TrilokGeer?

If customers so far have been limited to 8, then any improvement above 8 is purely a benefit.

@patrickdillon
Contributor

Ahh I see, so the installer has imposed a limit, and because the tags are in status, we are confident no one has gone past those installer limits, so this means the change is safe, that makes more sense. Could you please get someone from the installer team to confirm this is the case and that they are happy with this change going ahead here and within the installer code?

Yes we are fine with the tag limit being bumped. The 10 tag restriction came from limits on the S3 object that is created for bootstrap ignition. That case is being handled in an installer PR, so we have no other concerns with increasing the limit. Thank you!
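
For context, the old installer-side restriction looked roughly like this in spirit (an illustrative sketch, not the actual installer validation code; the real check lives in the installer's install-config validation):

```go
package validation

import "fmt"

// The bootstrap ignition file is stored as an S3 object, and AWS allows at
// most 10 tags per S3 object; with 2 tags used by OpenShift itself, that
// left room for only 8 user-defined tags.
const maxUserTagLimit = 8

func validateUserTags(userTags map[string]string) error {
	if len(userTags) > maxUserTagLimit {
		return fmt.Errorf("a maximum of %d user tags is supported", maxUserTagLimit)
	}
	return nil
}
```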

@JoelSpeed
Contributor

@TrilokGeer If you can get the verify fixed then I'll add the LGTM

@openshift-merge-robot openshift-merge-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Sep 13, 2022
@openshift-merge-robot openshift-merge-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Sep 13, 2022
@openshift-ci
Contributor

openshift-ci bot commented Sep 13, 2022

@TrilokGeer: all tests passed!

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@JoelSpeed
Contributor

/lgtm

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Sep 13, 2022
@openshift-ci
Contributor

openshift-ci bot commented Sep 13, 2022

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: JoelSpeed, TrilokGeer
Once this PR has been reviewed and has the lgtm label, please assign eparis for approval by writing /assign @eparis in a comment. For more information see: The Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@deads2k
Contributor

deads2k commented Sep 14, 2022

If customers so far have been limited to 8, then any improvement above 8 is purely a benefit.

@patrickdillon I don't understand the problem this is solving. This already allows 25, and if we only need "better than 8", then why does this API have to change? Don't we already allow 17 more than 8? Perhaps an example object showing the failure that is being fixed by the PR would help, along with the code that restricts customers to 10 tags despite the API validation rule here allowing 25.

@TrilokGeer
Author

@deads2k,
The present problem is that the installer fails with validation errors when a customer wants to use more than 10 tags.

The thought here is to allow users the maximum number of tags that can be allowed. As @patrickdillon mentioned, the restriction of 10 was set based on S3 object limits but was never enforced in the API. Based on clarification with the customer, S3 object tags are not required, and the expectation is to allow the maximum number of tags for AWS users as per the AWS limits (50).

One method is to set the upper bound of the reservations at 10 (OpenShift) / 40 (user), so that it is well defined and limits future updates to the API based on user needs. If only the installer is updated to 40, the API would still fail validation for more than 25.
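
For example (purely illustrative, assuming the `config/v1` Go types; the tag keys below are made up), a user tag list like this one, with more than 25 entries, is what fails today's API validation even though AWS itself would accept up to 50 tags on the underlying resources:

```go
package main

import (
	"fmt"

	configv1 "github.com/openshift/api/config/v1"
)

func main() {
	// Build 26 user tags: one more than the current MaxItems=25 marker
	// permits, but well within the 40 proposed in this PR.
	tags := make([]configv1.AWSResourceTag, 0, 26)
	for i := 0; i < 26; i++ {
		tags = append(tags, configv1.AWSResourceTag{
			Key:   fmt.Sprintf("cost-center-%02d", i),
			Value: "finance",
		})
	}
	fmt.Printf("requested user tags: %d (current API limit: 25, AWS limit: 50)\n", len(tags))
}
```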

The other method is to increase the limit gradually, whenever a customer hits validation errors on the user tag limit.

It'd be great to have a better solution that helps mitigate the reservation conflict.

@patrickdillon
Contributor

If customers so far have been limited to 8, then any improvement above 8 is purely a benefit.

@patrickdillon I don't understand the problem this is solving. This already allows 25, and if we only need "better than 8", then why does this API have to change? Don't we already allow 17 more than 8? Perhaps an example object showing the failure that is being fixed by the PR would help, along with the code that restricts customers to 10 tags despite the API validation rule here allowing 25.

Yeah, the installer limit of 10 tags is actually orthogonal to the motivation here. As Joel observed, I think the only relevance of the installer limit is in answering the question: is it safe to decrease the number of OpenShift-reserved tags to 10? The fact that it has been impossible to install a cluster with more than 10 total tags suggests it is safe.

That's a different question from the primary one of this PR, which is, I gather, to rebalance the 50-tag limit from 25 user / 25 OpenShift -> 40 user / 10 OpenShift. Trilok explained the motivation in his most recent comment.

Hopefully that helps clarify...

Perhaps an example object showing the failure that is being fixed by the PR would help, along with the code that restricts customers to 10 tags despite the API validation rule here allowing 25.

In case it's not already clear, the "Installer 10 limit" was not based on an API validation rule. The history of what happened is something like this:

  • The tagging feature was introduced with API limits (I presume the original was already 25/25) and the installer tags every resource
  • Installs with more than 8 user-provided tags failed due to AWS limits on S3 bucket objects (we create one of these during install)
  • The installer added validation to prevent that failure from happening during install by limiting user tags to 8
  • A PR is merging this week that removes the installer's 8-tag limit (instead we apply a subset of 8 tags to the S3 bucket object, roughly as sketched below)
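
The subset approach could look something like this (an illustrative sketch of the idea, not the actual installer change):

```go
package aws

// AWS caps S3 object tags at 10; 2 of those are used by OpenShift itself,
// leaving room for at most 8 user-defined tags on the bootstrap-ignition object.
const maxS3ObjectUserTags = 8

// s3ObjectUserTags returns at most 8 of the user-defined tags for the
// bootstrap-ignition S3 object. Illustrative only: map iteration order is
// random, so a real implementation would pick the subset deterministically.
func s3ObjectUserTags(userTags map[string]string) map[string]string {
	subset := make(map[string]string, maxS3ObjectUserTags)
	for k, v := range userTags {
		if len(subset) == maxS3ObjectUserTags {
			break
		}
		subset[k] = v
	}
	return subset
}
```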

@TrilokGeer
Author

@deads2k @eparis, do you suggest merging this change? Is there any clarification that I can help with?

@deads2k
Contributor

deads2k commented Sep 15, 2022

/hold

OpenShift reserves half of the resource tag range for internal platform use. Given a specific customer with a specific requirement to change this limit and a consideration of the utility of future tags for the platform, I'm open to adjusting this limit. Lacking a specific need and lacking a clear analysis of future platform objectives, this limit should not be changed.

Changes should bring

  1. which customers need more
  2. how many tags does each of these customers need
  3. why do they need openshift to manage these tags for them as opposed to managing their tags after an initial tagging
  4. how many tags are available on each of the clouds we support and intend to support
  5. checking with payload components and storage to determine what they could use tags for

Lacking that, a 50/50 split is a reasonable division that has worked for over a year.

@openshift-ci openshift-ci bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Sep 15, 2022
@TrilokGeer
Author

/hold

OpenShift reserves half of the resource tag range for internal platform use. Given a specific customer with a specific requirement to change this limit and a consideration of the utility of future tags for the platform, I'm open to adjusting this limit. Lacking a specific need and lacking a clear analysis of future platform objectives, this limit should not be changed.

Changes should bring

  1. which customers need more
  2. how many tags does each of these customers need
  3. why do they need openshift to manage these tags for them as opposed to managing their tags after an initial tagging
  4. how many tags are available on each of the clouds we support and intend to support
  5. checking with payload components and storage to determine what they could use tags for

Lacking that, a 50/50 split is a reasonable division that has worked for over a year.

I am not the best person to answer here, but I wanted to note that I could not find the justification for why the existing values were chosen.

@TrilokGeer
Author

@deads2k The customers have a demand for more than 8 tags and, as of now, do not have a use case that requires more than 25. Hence, the PR will be closed.

@TrilokGeer
Author

/close

@openshift-ci openshift-ci bot closed this Sep 20, 2022
@openshift-ci
Contributor

openshift-ci bot commented Sep 20, 2022

@TrilokGeer: Closed this PR.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
