UPSTREAM: <carry>: openshift-kube-apiserver: add kube-apiserver patches
UPSTREAM: <carry>: openshift-kube-apiserver: enabled conversion gen for admission configs

UPSTREAM: <carry>: openshift-kube-apiserver/admission: fix featuregates resource name

UPSTREAM: <carry>: openshift-kube-apiserver/admission: add missing FeatureSets

UPSTREAM: <carry>: openshift-kube-apiserver: use github.com/openshift/apiserver-library-go/pkg/labelselector

UPSTREAM: <carry>: openshift authenticator: don't allow old-style tokens

UPSTREAM: <carry>: oauth-authn: support sha256 prefixed tokens

UPSTREAM: <carry>: oauth-token-authn: switch to sha256~ prefix

UPSTREAM: <carry>: oauth-token-authn: add sha256~ support to bootstrap authenticator

UPSTREAM: <drop>: remove the openshift authenticator from the apiserver

In 4.8, we moved the authenticator to be configured via
webhookTokenAuthenticators pointing to an endpoint in the oauth-apiserver,
so this should now be safe to remove.

UPSTREAM: <carry>: set ResourceQuotaValidationOptions to true

When PodAffinityNamespaceSelector goes to beta or GA, this might affect
how our ClusterResourceQuota works.

UPSTREAM: <carry>: simplify the authorizer patch to allow the flags to function

UPSTREAM: <carry>: eliminate unnecessary closure in openshift configuration wiring

UPSTREAM: <carry>: add crdvalidation for apiserver.spec.tlsSecurityProfile

UPSTREAM: <carry>: openshift-kube-apiserver: Add custom resource validation for network spec

UPSTREAM: <carry>: stop overriding flags that are explicitly set

UPSTREAM: <carry>: add readyz check for openshift apiserver availability

UPSTREAM: <carry>: wait for oauth-apiserver accessibility

UPSTREAM: <carry>: provide a new admission plugin to mutate management pods CPUs requests

The ManagementCPUOverride admission plugin replaces pod container CPU requests with a new management resource.
It applies to all pods that:
 1. are in an allowed namespace
 2. and have the workload annotation.

It also sets the new management resource request and limit and sets a resource annotation that CRI-O can
recognize and use to apply the relevant changes.
For more information, see openshift/enhancements#703

Conditions for CPU requests deletion (sketched below):
 1. The namespace has the allowed annotation "workload.openshift.io/allowed": "management"
 2. The pod has the management annotation "workload.openshift.io/management": "{\"effect\": \"PreferredDuringScheduling\"}"
 3. All nodes in the cluster have the new management resource - "management.workload.openshift.io/cores"
 4. The CPU request deletion will not change the pod QoS class
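To make the conditions concrete, here is a minimal Go sketch of the check. It is not the plugin's actual code: the helper names, the qos helper import, and the exact QoS comparison are assumptions; only the annotation and resource names come from the conditions above.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	qos "k8s.io/kubernetes/pkg/apis/core/v1/helper/qos"
)

const (
	namespaceAllowedAnnotation = "workload.openshift.io/allowed"          // must equal "management"
	podManagementAnnotation    = "workload.openshift.io/management"       // e.g. {"effect": "PreferredDuringScheduling"}
	managementResourceName     = "management.workload.openshift.io/cores" // must exist on every node
)

// shouldStripCPURequests mirrors conditions 1-4 above.
func shouldStripCPURequests(ns *corev1.Namespace, pod *corev1.Pod, allNodesHaveManagementResource bool) bool {
	if ns.Annotations[namespaceAllowedAnnotation] != "management" {
		return false
	}
	if _, ok := pod.Annotations[podManagementAnnotation]; !ok {
		return false
	}
	if !allNodesHaveManagementResource {
		return false
	}
	// Condition 4: stripping CPU requests must not change the pod QoS class.
	return qos.GetPodQOS(pod) == qos.GetPodQOS(withoutCPURequests(pod))
}

func withoutCPURequests(pod *corev1.Pod) *corev1.Pod {
	out := pod.DeepCopy()
	for i := range out.Spec.Containers {
		delete(out.Spec.Containers[i].Resources.Requests, corev1.ResourceCPU)
	}
	return out
}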

UPSTREAM: <carry>: Do not prevent pod creation because of the no-nodes reason when running on a regular cluster

Check the `cluster` infrastructure resource status to be sure that we run on top of an SNO cluster;
if the pod runs on top of a regular cluster, exit before the node existence check.

UPSTREAM: <carry>: do not mutate pods that have a container with both a CPU request and limit

Removing the CPU request from a container that has a CPU limit will result in the defaulter setting the CPU request back equal to the CPU limit. For example, stripping a 100m CPU request from a container with a 500m CPU limit would cause the defaulter to set the request back to 500m.

UPSTREAM: <carry>: Reject pod creation when we cannot decide the cluster type

A race condition is possible between pod creation and the update of the
infrastructure resource status with the correct values under
Status.ControlPlaneTopology and Status.InfrastructureTopology.
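A rough sketch of the decision these two commits describe, under assumptions: the function name is hypothetical, and only the configv1 types and constants are taken from openshift/api.

package sketch

import (
	"fmt"

	configv1 "github.com/openshift/api/config/v1"
)

// clusterType returns "sno" or "regular", or an error when the topology
// fields have not been populated yet and pod creation must be rejected.
func clusterType(infra *configv1.Infrastructure) (string, error) {
	switch infra.Status.ControlPlaneTopology {
	case "":
		// Racing with the status update: reject instead of guessing.
		return "", fmt.Errorf("cannot decide the cluster type: ControlPlaneTopology is empty")
	case configv1.SingleReplicaTopologyMode:
		return "sno", nil // proceed with the node existence check
	default:
		return "regular", nil // exit before the node existence check
	}
}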

UPSTREAM: <carry>: add CRD validation for dnses

Add an admission plugin that validates the dnses.operator.openshift.io
custom resource.  For now, the plugin only validates the DNS pod
node-placement parameters.

This commit fixes bug 1967745.

https://bugzilla.redhat.com/show_bug.cgi?id=1967745

* openshift-kube-apiserver/admission/customresourcevalidation/attributes.go
(init): Install operatorv1 into supportedObjectsScheme.
* openshift-kube-apiserver/admission/customresourcevalidation/customresourcevalidationregistration/cr_validation_registration.go
(AllCustomResourceValidators, RegisterCustomResourceValidation): Register
the new plugin.
* openshift-kube-apiserver/admission/customresourcevalidation/dns/validate_dns.go:
New file.
(PluginName): New const.
(Register): New function.  Register the plugin.
(toDNSV1): New function.  Convert a runtime object to a versioned DNS.
(dnsV1): New type to represent a runtime object that is validated as a
versioned DNS.
(ValidateCreate, ValidateUpdate, ValidateStatusUpdate): New methods.
Implement the ObjectValidator interface, using the validateDNSSpecCreate
and validateDNSSpecUpdate helpers.
(validateDNSSpecCreate, validateDNSSpecUpdate): New functions.  Validate a
DNS, using the validateDNSSpec helper.
(validateDNSSpec): New function.  Validate the spec field of a DNS, using
the validateDNSNodePlacement helper.
(validateDNSNodePlacement): New function.  Validate the node selector and
tolerations in a DNS's node-placement parameters, using
validateTolerations.
(validateTolerations): New function.  Validate a slice of
corev1.Toleration.
* openshift-kube-apiserver/admission/customresourcevalidation/dns/validate_dns_test.go:
New file.
(TestFailValidateDNSSpec): Verify that validateDNSSpec rejects invalid DNS
specs.
(TestSucceedValidateDNSSpec): Verify that validateDNSSpec accepts valid DNS
specs.
* vendor/*: Regenerate.
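As an illustration of the validateTolerations helper listed above, here is a minimal sketch of toleration validation; the real plugin's rules may differ, and the specific checks shown here are assumptions.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/validation/field"
)

// validateTolerations checks the tolerations in the DNS node-placement
// parameters and returns a list of field errors.
func validateTolerations(tolerations []corev1.Toleration, fldPath *field.Path) field.ErrorList {
	allErrs := field.ErrorList{}
	for i, t := range tolerations {
		idx := fldPath.Index(i)
		// "Exists" tolerations must not carry a value.
		if t.Operator == corev1.TolerationOpExists && t.Value != "" {
			allErrs = append(allErrs, field.Invalid(idx.Child("value"), t.Value, "value must be empty when operator is Exists"))
		}
		// The effect, if set, must be one of the supported taint effects.
		switch t.Effect {
		case "", corev1.TaintEffectNoSchedule, corev1.TaintEffectPreferNoSchedule, corev1.TaintEffectNoExecute:
		default:
			allErrs = append(allErrs, field.NotSupported(idx.Child("effect"), t.Effect, []string{
				string(corev1.TaintEffectNoSchedule),
				string(corev1.TaintEffectPreferNoSchedule),
				string(corev1.TaintEffectNoExecute),
			}))
		}
	}
	return allErrs
}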

UPSTREAM: <carry>: prevent the kubecontrollermanager service-ca from getting less secure

UPSTREAM: <carry>: allow SCC to be disabled on a per-namespace basis

UPSTREAM: <carry>: verify required http2 cipher suites

In API server admission, we need to return an error if the required
http2 cipher suites are missing from a custom tlsSecurityProfile.
Currently, custom cipher suites missing ECDHE_RSA_WITH_AES_128_GCM_SHA256 or
ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 result in an invalid http2 Server
configuration, causing the apiservers to crash.
See: go/x/net/http2.ConfigureServer for further information.
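A minimal sketch of the required-cipher check, assuming the custom profile's ciphers are expressed as IANA cipher-suite names; the admission plugin's actual wiring and name mapping are not shown here.

package sketch

import (
	"crypto/tls"
	"fmt"
)

// http2 effectively requires one of these suites;
// golang.org/x/net/http2.ConfigureServer fails without them.
var requiredHTTP2Ciphers = []string{
	tls.CipherSuiteName(tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256),
	tls.CipherSuiteName(tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256),
}

// validateHTTP2Ciphers returns an error when the custom cipher list contains
// neither of the HTTP/2-required suites.
func validateHTTP2Ciphers(customCiphers []string) error {
	have := map[string]bool{}
	for _, c := range customCiphers {
		have[c] = true
	}
	for _, required := range requiredHTTP2Ciphers {
		if have[required] {
			return nil
		}
	}
	return fmt.Errorf("custom tlsSecurityProfile must include at least one of %v", requiredHTTP2Ciphers)
}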

UPSTREAM: <carry>: drop the warning to use --keep-annotations

When a user runs the `oc debug` command for a pod with the
management resource, we inform them that they should pass the
`--keep-annotations` parameter to the debug command.

UPSTREAM: <carry>: admission/managementcpusoverride: cover the roll-back case

During the upgrade and roll-back flow 4.7->4.8->4.7, the topology-related
fields under the infrastructure status can be empty because the
old API does not support them.

The code compares the empty infrastructure section with the current one.
When the status has some other non-empty field and the topology fields
are empty, we assume that the cluster is currently going through a
roll-back and not a clean install.
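A compact sketch of that heuristic; the helper name and the choice of which "other" fields to inspect are assumptions.

package sketch

import configv1 "github.com/openshift/api/config/v1"

// looksLikeRollback: the status was last written by a pre-4.8 API if other
// fields are populated while both topology fields are still empty.
func looksLikeRollback(status configv1.InfrastructureStatus) bool {
	topologyEmpty := status.ControlPlaneTopology == "" && status.InfrastructureTopology == ""
	otherFieldsSet := status.InfrastructureName != "" || status.APIServerURL != ""
	return topologyEmpty && otherFieldsSet
}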

UPSTREAM: <carry>: Remove pod warning annotation when workload partitioning is disabled

UPSTREAM: <carry>: use new access token inactivity timeout field.

UPSTREAM: <carry>: apirequestcount validation

UPSTREAM: <carry>: Added config node object validation for extreme latency profiles

UPSTREAM: <carry>: Add Upstream validation in the DNS admission check


UPSTREAM: <carry>: Make RestrictedEndpointsAdmission check NotReadyAddresses

UPSTREAM: <carry>: Make RestrictedEndpointsAdmission restrict EndpointSlices as well

Moved SkipSystemMastersAuthorizer to the authorizer.

UPSTREAM: <carry>: Add validation plugin for CRD-based route parity.

UPSTREAM: <carry>: Add host assignment plugin for CRD-based routes.

UPSTREAM: <carry>: Apply shared defaulters to CRD-based routes.

Signed-off-by: Artyom Lukianov <alukiano@redhat.com>
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
Signed-off-by: Swarup Ghosh <swghosh@redhat.com>
OpenShift-Rebase-Source: 932411e
OpenShift-Rebase-Source: 1899555
OpenShift-Rebase-Source: 453583e
OpenShift-Rebase-Source: bf7e23e

UPSTREAM: <carry>: STOR-829: Add CSIInlineVolumeSecurity admission plugin

The CSIInlineVolumeSecurity admission plugin inspects inline CSI
volumes on pod creation and compares the
security.openshift.io/csi-ephemeral-volume-profile label on the
CSIDriver object to the pod security profile on the namespace.
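A simplified sketch of that comparison; the label keys come from the commit message and the pod-security standards, while the ordering and defaulting behaviour are assumptions.

package sketch

import "fmt"

const (
	csiVolumeProfileLabel = "security.openshift.io/csi-ephemeral-volume-profile" // on the CSIDriver
	namespaceEnforceLabel = "pod-security.kubernetes.io/enforce"                 // on the namespace
)

// rank orders profiles from most to least restrictive.
var rank = map[string]int{"restricted": 0, "baseline": 1, "privileged": 2}

// admitInlineVolume allows an inline volume when the namespace's enforced pod
// security level is at least as privileged as the driver's declared profile.
func admitInlineVolume(namespaceLevel, driverProfile string) error {
	if rank[driverProfile] <= rank[namespaceLevel] {
		return nil
	}
	return fmt.Errorf("namespace enforces pod security %q but the CSIDriver's volume profile is %q", namespaceLevel, driverProfile)
}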

OpenShift-Rebase-Source: a65c34b

UPSTREAM: <carry>: add icsp/idms/itms validation: reject creating icsp when idms/itms exist

Reject icsp when idms/itms resources exist. According to the discussion resolution
(https://docs.google.com/document/d/13h6IJn8wlzXdiPMvCWlMEHOXXqEZ9_GYOl02Wldb3z8/edit?usp=sharing),
either the current icsp or the new mirror setting CRDs should be rejected if a user tries to use
them on the same cluster.
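In sketch form, with a hypothetical lister abstraction standing in for whatever clients the plugin actually uses:

package sketch

import (
	"context"
	"fmt"
)

// mirrorSetLister is a hypothetical abstraction over the IDMS/ITMS listers.
type mirrorSetLister interface {
	CountImageDigestMirrorSets(ctx context.Context) (int, error)
	CountImageTagMirrorSets(ctx context.Context) (int, error)
}

// validateICSPCreate rejects a new ImageContentSourcePolicy when the newer
// mirror-setting resources already exist on the cluster.
func validateICSPCreate(ctx context.Context, lister mirrorSetLister) error {
	idms, err := lister.CountImageDigestMirrorSets(ctx)
	if err != nil {
		return err
	}
	itms, err := lister.CountImageTagMirrorSets(ctx)
	if err != nil {
		return err
	}
	if idms > 0 || itms > 0 {
		return fmt.Errorf("cannot create ImageContentSourcePolicy while ImageDigestMirrorSet or ImageTagMirrorSet resources exist")
	}
	return nil
}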

Signed-off-by: Qi Wang <qiwan@redhat.com>

UPSTREAM: <carry>: node admission plugin for cpu partitioning

The ManagedNode admission plugin makes the Infrastructure.Status.CPUPartitioning field authoritative.
It validates that nodes that wish to join the cluster are first configured to properly handle workload pinning.
For more information, see openshift/enhancements#1213
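A condensed sketch of the rule: the CPUPartitioning field and the CPUPartitioningAllNodes constant are assumed to come from openshift/api as described, while the readiness signal checked on the node (a capacity entry for the management cores resource) is an assumption.

package sketch

import (
	"fmt"

	configv1 "github.com/openshift/api/config/v1"
	corev1 "k8s.io/api/core/v1"
)

// validateManagedNode rejects a joining node that does not advertise the
// management resource while the cluster requires CPU partitioning everywhere.
func validateManagedNode(infra *configv1.Infrastructure, node *corev1.Node) error {
	if infra.Status.CPUPartitioning != configv1.CPUPartitioningAllNodes {
		return nil // the cluster does not require workload pinning
	}
	if _, ok := node.Status.Capacity[corev1.ResourceName("management.workload.openshift.io/cores")]; !ok {
		return fmt.Errorf("node %s is not configured for the CPU partitioning required by this cluster", node.Name)
	}
	return nil
}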

Signed-off-by: ehila <ehila@redhat.com>

UPSTREAM: <carry>: kube-apiserver: allow injection of kube-apiserver options

UPSTREAM: <carry>: kube-apiserver: allow rewiring

OpenShift-Rebase-Source: 56b49c9
OpenShift-Rebase-Source: bcf574c
deads2k authored and bertinatto committed Aug 24, 2023
1 parent 5292d6f commit b6d78bc
Showing 11 changed files with 167 additions and 7 deletions.
6 changes: 6 additions & 0 deletions cmd/kube-apiserver/app/options/options.go
@@ -60,6 +60,8 @@ type Extra struct {
EndpointReconcilerType string

MasterCount int

OpenShiftConfig string
}

// NewServerRunOptions creates a new ServerRunOptions object with default parameters
@@ -153,5 +155,9 @@ func (s *ServerRunOptions) Flags() (fss cliflag.NamedFlagSets) {
"The number of apiservers running in the cluster, must be a positive number. (In use when --endpoint-reconciler-type=master-count is enabled.)")
fs.MarkDeprecated("apiserver-count", "apiserver-count is deprecated and will be removed in a future version.")

fs.StringVar(&s.OpenShiftConfig, "openshift-config", s.OpenShiftConfig, "config for openshift")
fs.MarkDeprecated("openshift-config", "to be removed")
fs.MarkHidden("openshift-config")

return fss
}
42 changes: 42 additions & 0 deletions cmd/kube-apiserver/app/server.go
@@ -26,6 +26,10 @@ import (
"net/url"
"os"

"k8s.io/kubernetes/openshift-kube-apiserver/admission/admissionenablement"
"k8s.io/kubernetes/openshift-kube-apiserver/enablement"
"k8s.io/kubernetes/openshift-kube-apiserver/openshiftkubeapiserver"

"github.com/spf13/cobra"

apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
@@ -103,6 +107,35 @@ cluster's shared state through which all other components interact.`,
}
cliflag.PrintFlags(fs)

if len(s.OpenShiftConfig) > 0 {
// if we are running openshift, we modify the admission chain defaults accordingly
admissionenablement.InstallOpenShiftAdmissionPlugins(s)

openshiftConfig, err := enablement.GetOpenshiftConfig(s.OpenShiftConfig)
if err != nil {
klog.Fatal(err)
}
enablement.ForceOpenShift(openshiftConfig)

args, err := openshiftkubeapiserver.ConfigToFlags(openshiftConfig)
if err != nil {
return err
}

// hopefully this resets the flags?
if err := cmd.ParseFlags(args); err != nil {
return err
}

// print merged flags (merged from OpenshiftConfig)
cliflag.PrintFlags(cmd.Flags())

enablement.ForceGlobalInitializationForOpenShift()
} else {
// print default flags
cliflag.PrintFlags(cmd.Flags())
}

// set default options
completedOptions, err := s.Complete()
if err != nil {
@@ -311,6 +344,15 @@ func CreateKubeAPIServerConfig(opts options.CompletedOptions) (
if err != nil {
return nil, nil, nil, fmt.Errorf("failed to create real dynamic external client: %w", err)
}

if err := openshiftkubeapiserver.OpenShiftKubeAPIServerConfigPatch(genericConfig, versionedInformers, &pluginInitializers); err != nil {
return nil, nil, nil, fmt.Errorf("failed to patch: %v", err)
}

if enablement.IsOpenShift() {
admissionenablement.SetAdmissionDefaults(&opts.CompletedOptions, versionedInformers, clientgoExternalClient)
}

err = opts.Admission.ApplyTo(
genericConfig,
versionedInformers,
4 changes: 4 additions & 0 deletions pkg/controlplane/apiserver/config.go
@@ -52,6 +52,8 @@ import (
"k8s.io/kubernetes/pkg/kubeapiserver"
"k8s.io/kubernetes/pkg/kubeapiserver/authorizer/modes"
rbacrest "k8s.io/kubernetes/pkg/registry/rbac/rest"

"k8s.io/kubernetes/openshift-kube-apiserver/enablement"
)

// BuildGenericConfig takes the master server options and produces the genericapiserver.Config associated with it
@@ -134,6 +136,8 @@ func BuildGenericConfig(
// on a fast local network
genericConfig.LoopbackClientConfig.DisableCompression = true

enablement.SetLoopbackClientConfig(genericConfig.LoopbackClientConfig)

kubeClientConfig := genericConfig.LoopbackClientConfig
clientgoExternalClient, err := clientgoclientset.NewForConfig(kubeClientConfig)
if err != nil {
21 changes: 17 additions & 4 deletions pkg/kubeapiserver/authorizer/config.go
@@ -21,6 +21,9 @@ import (
"fmt"
"time"

"k8s.io/kubernetes/openshift-kube-apiserver/authorization/browsersafe"
"k8s.io/kubernetes/openshift-kube-apiserver/authorization/scopeauthorizer"

utilnet "k8s.io/apimachinery/pkg/util/net"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/apiserver/pkg/authentication/user"
@@ -80,9 +83,11 @@ func (config Config) New() (authorizer.Authorizer, authorizer.RuleResolver, erro
ruleResolvers []authorizer.RuleResolver
)

// Add SystemPrivilegedGroup as an authorizing group
superuserAuthorizer := authorizerfactory.NewPrivilegedGroups(user.SystemPrivilegedGroup)
authorizers = append(authorizers, superuserAuthorizer)
if !skipSystemMastersAuthorizer {
// Add SystemPrivilegedGroup as an authorizing group
superuserAuthorizer := authorizerfactory.NewPrivilegedGroups(user.SystemPrivilegedGroup)
authorizers = append(authorizers, superuserAuthorizer)
}

for _, authorizationMode := range config.AuthorizationModes {
// Keep cases in sync with constant list in k8s.io/kubernetes/pkg/kubeapiserver/authorizer/modes/modes.go.
@@ -142,8 +147,16 @@ func (config Config) New() (authorizer.Authorizer, authorizer.RuleResolver, erro
&rbac.ClusterRoleGetter{Lister: config.VersionedInformerFactory.Rbac().V1().ClusterRoles().Lister()},
&rbac.ClusterRoleBindingLister{Lister: config.VersionedInformerFactory.Rbac().V1().ClusterRoleBindings().Lister()},
)
authorizers = append(authorizers, rbacAuthorizer)
// Wrap with an authorizer that detects unsafe requests and modifies verbs/resources appropriately so policy can address them separately
authorizers = append(authorizers, browsersafe.NewBrowserSafeAuthorizer(rbacAuthorizer, user.AllAuthenticated))
ruleResolvers = append(ruleResolvers, rbacAuthorizer)
case modes.ModeScope:
// Wrap with an authorizer that detects unsafe requests and modifies verbs/resources appropriately so policy can address them separately
scopeLimitedAuthorizer := scopeauthorizer.NewAuthorizer(config.VersionedInformerFactory.Rbac().V1().ClusterRoles().Lister())
authorizers = append(authorizers, browsersafe.NewBrowserSafeAuthorizer(scopeLimitedAuthorizer, user.AllAuthenticated))
case modes.ModeSystemMasters:
// no browsersafeauthorizer here because that rewrites the resources. This authorizer matches no matter which resource matches.
authorizers = append(authorizers, authorizerfactory.NewPrivilegedGroups(user.SystemPrivilegedGroup))
default:
return nil, nil, fmt.Errorf("unknown authorization mode %s specified", authorizationMode)
}
8 changes: 8 additions & 0 deletions pkg/kubeapiserver/authorizer/modes/patch.go
@@ -0,0 +1,8 @@
package modes

var ModeScope = "Scope"
var ModeSystemMasters = "SystemMasters"

func init() {
AuthorizationModeChoices = append(AuthorizationModeChoices, ModeScope, ModeSystemMasters)
}
8 changes: 8 additions & 0 deletions pkg/kubeapiserver/authorizer/patch.go
@@ -0,0 +1,8 @@
package authorizer

var skipSystemMastersAuthorizer = false

// SkipSystemMastersAuthorizer disables the implicitly added system/masters authz and turns it into another authz mode, "SystemMasters", to be added via authorization-mode
func SkipSystemMastersAuthorizer() {
skipSystemMastersAuthorizer = true
}
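To show how the two patches above fit together, a minimal illustration of the intended wiring; the mode ordering shown is only an example, not OpenShift's actual configuration.

package sketch

import "k8s.io/kubernetes/pkg/kubeapiserver/authorizer"

// configureAuthorizerModes opts out of the implicit system:masters authorizer
// and instead lists it explicitly, so the authorization-mode flags fully
// determine the authorizer chain.
func configureAuthorizerModes() []string {
	authorizer.SkipSystemMastersAuthorizer()
	return []string{"Scope", "SystemMasters", "RBAC", "Node"}
}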
@@ -162,6 +162,7 @@ func buildControllerRoles() ([]rbacv1.ClusterRole, []rbacv1.ClusterRoleBinding)
// resource that is owned by the service and sets blockOwnerDeletion=true in its ownerRef.
rbacv1helpers.NewRule("update").Groups(legacyGroup).Resources("services/finalizers").RuleOrDie(),
rbacv1helpers.NewRule("get", "list", "create", "update", "delete").Groups(discoveryGroup).Resources("endpointslices").RuleOrDie(),
rbacv1helpers.NewRule("create").Groups(discoveryGroup).Resources("endpointslices/restricted").RuleOrDie(),
eventsRule(),
},
})
@@ -178,6 +179,7 @@ func buildControllerRoles() ([]rbacv1.ClusterRole, []rbacv1.ClusterRoleBinding)
// see https://github.com/openshift/kubernetes/blob/8691466059314c3f7d6dcffcbb76d14596ca716c/pkg/controller/endpointslicemirroring/utils.go#L87-L88
rbacv1helpers.NewRule("update").Groups(legacyGroup).Resources("endpoints/finalizers").RuleOrDie(),
rbacv1helpers.NewRule("get", "list", "create", "update", "delete").Groups(discoveryGroup).Resources("endpointslices").RuleOrDie(),
rbacv1helpers.NewRule("create").Groups(discoveryGroup).Resources("endpointslices/restricted").RuleOrDie(),
eventsRule(),
},
})
65 changes: 65 additions & 0 deletions plugin/pkg/auth/authorizer/rbac/bootstrappolicy/patch_policy.go
@@ -0,0 +1,65 @@
package bootstrappolicy

import (
rbacv1 "k8s.io/api/rbac/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
rbacv1helpers "k8s.io/kubernetes/pkg/apis/rbac/v1"
)

var ClusterRoles = clusterRoles

func OpenshiftClusterRoles() []rbacv1.ClusterRole {
const (
// These are valid under the "nodes" resource
NodeMetricsSubresource = "metrics"
NodeStatsSubresource = "stats"
NodeSpecSubresource = "spec"
NodeLogSubresource = "log"
)

roles := clusterRoles()
roles = append(roles, []rbacv1.ClusterRole{
{
ObjectMeta: metav1.ObjectMeta{
Name: "system:node-admin",
},
Rules: []rbacv1.PolicyRule{
// Allow read-only access to the API objects
rbacv1helpers.NewRule(Read...).Groups(legacyGroup).Resources("nodes").RuleOrDie(),
// Allow all API calls to the nodes
rbacv1helpers.NewRule("proxy").Groups(legacyGroup).Resources("nodes").RuleOrDie(),
rbacv1helpers.NewRule("*").Groups(legacyGroup).Resources("nodes/proxy", "nodes/"+NodeMetricsSubresource, "nodes/"+NodeSpecSubresource, "nodes/"+NodeStatsSubresource, "nodes/"+NodeLogSubresource).RuleOrDie(),
},
},
{
ObjectMeta: metav1.ObjectMeta{
Name: "system:node-reader",
},
Rules: []rbacv1.PolicyRule{
// Allow read-only access to the API objects
rbacv1helpers.NewRule(Read...).Groups(legacyGroup).Resources("nodes").RuleOrDie(),
// Allow read access to node metrics
rbacv1helpers.NewRule("get").Groups(legacyGroup).Resources("nodes/"+NodeMetricsSubresource, "nodes/"+NodeSpecSubresource).RuleOrDie(),
// Allow read access to stats
// Node stats requests are submitted as POSTs. These creates are non-mutating
rbacv1helpers.NewRule("get", "create").Groups(legacyGroup).Resources("nodes/" + NodeStatsSubresource).RuleOrDie(),
// TODO: expose other things like /healthz on the node once we figure out non-resource URL policy across systems
},
},
}...)

addClusterRoleLabel(roles)
return roles
}

var ClusterRoleBindings = clusterRoleBindings

func OpenshiftClusterRoleBindings() []rbacv1.ClusterRoleBinding {
bindings := clusterRoleBindings()
bindings = append(bindings, []rbacv1.ClusterRoleBinding{
rbacv1helpers.NewClusterBinding("system:node-admin").Users("system:master", "system:kube-apiserver").Groups("system:node-admins").BindingOrDie(),
}...)

addClusterRoleBindingLabel(bindings)
return bindings
}
6 changes: 3 additions & 3 deletions plugin/pkg/auth/authorizer/rbac/bootstrappolicy/policy.go
@@ -189,8 +189,8 @@ func NodeRules() []rbacv1.PolicyRule {
return nodePolicyRules
}

// ClusterRoles returns the cluster roles to bootstrap an API server with
func ClusterRoles() []rbacv1.ClusterRole {
// clusterRoles returns the cluster roles to bootstrap an API server with
func clusterRoles() []rbacv1.ClusterRole {
roles := []rbacv1.ClusterRole{
{
// a "root" role which can do absolutely anything
@@ -601,7 +601,7 @@ func ClusterRoles() []rbacv1.ClusterRole {
const systemNodeRoleName = "system:node"

// ClusterRoleBindings return default rolebindings to the default roles
func ClusterRoleBindings() []rbacv1.ClusterRoleBinding {
func clusterRoleBindings() []rbacv1.ClusterRoleBinding {
rolebindings := []rbacv1.ClusterRoleBinding{
rbacv1helpers.NewClusterBinding("cluster-admin").Groups(user.SystemPrivilegedGroup).BindingOrDie(),
rbacv1helpers.NewClusterBinding("system:monitoring").Groups(user.MonitoringGroup).BindingOrDie(),
@@ -516,6 +516,12 @@ items:
- get
- list
- update
- apiGroups:
- discovery.k8s.io
resources:
- endpointslices/restricted
verbs:
- create
- apiGroups:
- ""
- events.k8s.io
@@ -566,6 +572,12 @@ items:
- get
- list
- update
- apiGroups:
- discovery.k8s.io
resources:
- endpointslices/restricted
verbs:
- create
- apiGroups:
- ""
- events.k8s.io
Binary file not shown.
