Bug 2022259: Rebase v1.20.12 #1046

Merged: 47 commits into release-4.7 on Nov 19, 2021

Commits (47)
8aab198
Bump golang.org/x/text to v0.3.6
dims May 28, 2021
023c9f0
Add explicit capability for online volume expansion
gnufied Jun 7, 2021
1db23ac
remove listx from OWNERS, OWNERS_ALIASES
Aug 10, 2021
2752ac9
backported PR#97721 from v1.21 ("fix slice controller logging for ser…
ricky-rav Aug 20, 2021
8991f3a
fix: ignore the case when updating tags
nilo19 Aug 25, 2021
df8e70d
fix: ignore the case when comparing azure tags in service annotation
nilo19 Sep 1, 2021
38efc03
fix detach disk issue on deleting node
andyzhangx Aug 25, 2021
e68bd7b
Fix unknown dangling volumes
gnufied Nov 18, 2020
dc91be9
Add docs about process of discovering disks from new nodes
gnufied Nov 19, 2020
0ab4ffc
Address review comments
gnufied Nov 19, 2020
a178772
Fix use variables in the loop in vsphere_util
skyguard1 Aug 17, 2021
b67d6d1
fix 104329: check for headless before trying to release the ClusterIPs
khenidak Aug 19, 2021
af0255c
integration test
khenidak Aug 20, 2021
e3b54a4
Propagate conversion errors
liggitt Sep 13, 2021
7ebedf7
Fix null JSON round tripping
liggitt Sep 13, 2021
82e3138
kube-controller-manager: properly check generic ephemeral volume feature
pohly Sep 10, 2021
9ed7fac
Release commit for Kubernetes v1.20.12-rc.0
Sep 15, 2021
472adb6
Refine locking in API Priority and Fairness config controller
MikeSpreitzer Sep 8, 2021
4f5111f
Update CHANGELOG/CHANGELOG-1.20.md for v1.20.11
Sep 15, 2021
478ce0c
Merge pull request #105051 from shyamjvs/automated-cherry-pick-of-#10…
k8s-ci-robot Sep 17, 2021
be98bdb
'New' Event namespace validate failed
h4ghhh Mar 11, 2021
2624cc6
Merge pull request #105087 from h4ghhh/automated-cherry-pick-of-#1001…
k8s-ci-robot Sep 19, 2021
5676b0d
tests: Wait for the network connectivity first
claudiubelu Jun 25, 2021
7ffe8e8
Merge pull request #104990 from liggitt/automated-cherry-pick-of-#104…
k8s-ci-robot Sep 27, 2021
0d31831
e2e scheduling priorities: do not reference control loop variable
ingvagabund Sep 23, 2021
8f3c8fa
Revert 102925: Fix Node Resources plugins score when there are pods w…
damemi Sep 24, 2021
7e4ea36
fix: consolidate logs for instance not found error
nilo19 Sep 22, 2021
729ebe9
fix: skip not found nodes when reconciling LB backend address pools
nilo19 Sep 28, 2021
31af72c
Ignore VMs in vmss delete backend pools
ialidzhikov Sep 21, 2021
58d0a4b
Merge pull request #105239 from damemi/1.20-revert-102925
k8s-ci-robot Oct 5, 2021
4491530
Merge pull request #104262 from listx/release-1.20
k8s-ci-robot Oct 6, 2021
3155e97
Merge pull request #104899 from andyzhangx/automated-cherry-pick-of-#…
k8s-ci-robot Oct 6, 2021
be3e7e6
Merge pull request #104910 from gnufied/backport-dangling-volume-fixes
k8s-ci-robot Oct 6, 2021
abd634e
Merge pull request #104975 from dprotaso/automated-cherry-pick-of-#10…
k8s-ci-robot Oct 6, 2021
2be8128
Merge pull request #105038 from pohly/automated-cherry-pick-of-#10491…
k8s-ci-robot Oct 6, 2021
180db17
Merge pull request #105280 from ingvagabund/automated-cherry-pick-of-…
k8s-ci-robot Oct 6, 2021
4aceb67
Merge pull request #102602 from jonesbr17/automated-cherry-pick-of-#1…
k8s-ci-robot Oct 6, 2021
0416250
Merge pull request #103164 from gnufied/automated-cherry-pick-of-#102…
k8s-ci-robot Oct 6, 2021
08e12d5
Merge pull request #104477 from ricky-rav/dev_bz_1994486
k8s-ci-robot Oct 6, 2021
0cf6951
Merge pull request #105364 from nilo19/automated-cherry-pick-of-#1051…
k8s-ci-robot Oct 13, 2021
af669d8
Merge pull request #105404 from ialidzhikov/automated-cherry-pick-of-…
k8s-ci-robot Oct 14, 2021
0360eea
Merge pull request #105442 from claudiubelu/automated-cherry-pick-of-…
k8s-ci-robot Oct 14, 2021
53027ef
Merge pull request #104687 from nilo19/automated-cherry-pick-of-#1045…
k8s-ci-robot Oct 14, 2021
4bf2e32
Release commit for Kubernetes v1.20.12
Oct 27, 2021
53fa52e
Merge tag 'v1.20.12' into release-4.7
tjungblu Nov 11, 2021
c98005a
UPSTREAM: <drop>: manually resolve conflicts
tjungblu Nov 11, 2021
d8db00b
UPSTREAM: <drop>: hack/update-vendor.sh, make update and update image
tjungblu Nov 11, 2021
353 changes: 235 additions & 118 deletions CHANGELOG/CHANGELOG-1.20.md

Large diffs are not rendered by default.

2 changes: 0 additions & 2 deletions OWNERS_ALIASES
@@ -139,7 +139,6 @@ aliases:
- cblecker
- dims
- justaugustus # Release Manager / SIG Chair
- - listx
build-image-reviewers:
- BenTheElder
- cblecker
@@ -149,7 +148,6 @@ aliases:
- hasheddan # Release Manager / SIG Technical Lead
- idealhack # Release Manager
- justaugustus # Release Manager / SIG Chair
- - listx
- puerco # Release Manager
- saschagrunert # Release Manager / SIG Chair
- xmudrii # Release Manager
2 changes: 1 addition & 1 deletion cmd/kube-controller-manager/app/core.go
@@ -581,7 +581,7 @@ func startPVCProtectionController(ctx ControllerContext) (http.Handler, bool, er
ctx.InformerFactory.Core().V1().Pods(),
ctx.ClientBuilder.ClientOrDie("pvc-protection-controller"),
utilfeature.DefaultFeatureGate.Enabled(features.StorageObjectInUseProtection),
- utilfeature.DefaultFeatureGate.Enabled(features.StorageObjectInUseProtection),
+ utilfeature.DefaultFeatureGate.Enabled(features.GenericEphemeralVolume),
)
if err != nil {
return nil, true, fmt.Errorf("failed to start the pvc protection controller: %v", err)
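Context for the hunk above: the constructor takes two independent feature-gate booleans, and the old code passed the StorageObjectInUseProtection lookup into both slots, so the controller never saw the real GenericEphemeralVolume setting (the flag that tells PVC protection to count a pod's generic ephemeral volumes as users of their "<pod-name>-<volume-name>" PVCs). A sketch of the corrected call shape, with illustrative comments rather than the exact upstream signature:

// Sketch only; the comments name the two independent flags.
c, err := pvcprotection.NewPVCProtectionController(
	ctx.InformerFactory.Core().V1().PersistentVolumeClaims(),
	ctx.InformerFactory.Core().V1().Pods(),
	ctx.ClientBuilder.ClientOrDie("pvc-protection-controller"),
	utilfeature.DefaultFeatureGate.Enabled(features.StorageObjectInUseProtection), // in-use protection
	utilfeature.DefaultFeatureGate.Enabled(features.GenericEphemeralVolume),       // ephemeral-volume accounting
)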
2 changes: 1 addition & 1 deletion go.mod
@@ -479,7 +479,7 @@ replace (
golang.org/x/oauth2 => golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d
golang.org/x/sync => golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9
golang.org/x/sys => golang.org/x/sys v0.0.0-20201112073958-5cba982894dd
- golang.org/x/text => golang.org/x/text v0.3.4
+ golang.org/x/text => golang.org/x/text v0.3.6
golang.org/x/time => golang.org/x/time v0.0.0-20200630173020-3af7569d3a1e
golang.org/x/tools => golang.org/x/tools v0.0.0-20210106214847-113979e3529a
golang.org/x/xerrors => golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1
4 changes: 2 additions & 2 deletions go.sum
@@ -550,8 +550,8 @@ golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9 h1:SQFwaSi55rU7vdNs9Yr0Z324
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20201112073958-5cba982894dd h1:5CtCZbICpIOFdgO940moixOPjc0178IU44m4EjOO5IY=
golang.org/x/sys v0.0.0-20201112073958-5cba982894dd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
- golang.org/x/text v0.3.4 h1:0YWbFKbhXG/wIiuHDSKpS0Iy7FSA+u45VtBMfQcFTTc=
- golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
+ golang.org/x/text v0.3.6 h1:aRYxNxv6iGQlyVaZmk6ZgYEDa+Jg18DxebPSrd6bg1M=
+ golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/time v0.0.0-20200630173020-3af7569d3a1e h1:EHBhcS0mlXEAVwNyO2dLfjToGsyY4j24pTs2ScHnX7s=
golang.org/x/time v0.0.0-20200630173020-3af7569d3a1e/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a h1:CB3a9Nez8M13wwlr/E2YtwoU+qYHKfC+JrDa45RXXoQ=
4 changes: 2 additions & 2 deletions pkg/apis/core/validation/events.go
@@ -21,7 +21,7 @@ import (
"reflect"
"time"

"k8s.io/api/core/v1"
v1 "k8s.io/api/core/v1"
eventsv1beta1 "k8s.io/api/events/v1beta1"
apimachineryvalidation "k8s.io/apimachinery/pkg/api/validation"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -140,7 +140,7 @@ func legacyValidateEvent(event *core.Event) field.ErrorList {
}

} else {
- if len(event.InvolvedObject.Namespace) == 0 && event.Namespace != metav1.NamespaceSystem {
+ if len(event.InvolvedObject.Namespace) == 0 && event.Namespace != metav1.NamespaceDefault && event.Namespace != metav1.NamespaceSystem {
allErrs = append(allErrs, field.Invalid(field.NewPath("involvedObject", "namespace"), event.InvolvedObject.Namespace, "does not match event.namespace"))
}
if len(event.ReportingController) == 0 {
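This pairs with the "'New' Event namespace validate failed" commit above: a new-style Event for a cluster-scoped object (empty involvedObject.namespace) was accepted only in kube-system, so clients reporting such events into the default namespace failed validation. A sketch of the case that now passes; the field values are illustrative, not the upstream test case:

// Illustrative only, inside pkg/apis/core/validation.
event := &core.Event{
	ObjectMeta: metav1.ObjectMeta{Name: "node-startup", Namespace: metav1.NamespaceDefault},
	InvolvedObject: core.ObjectReference{
		Kind: "Node",
		Name: "node-1", // cluster-scoped, so Namespace stays empty
	},
	EventTime:           metav1.NowMicro(), // non-zero EventTime selects the "new" Event branch
	ReportingController: "kubelet",
	ReportingInstance:   "kubelet-node-1",
	Action:              "Starting",
	Reason:              "Starting",
}
// Before the fix: rejected with "involvedObject.namespace: ... does not match event.namespace".
// After: both metav1.NamespaceDefault and metav1.NamespaceSystem are accepted here.
errs := legacyValidateEvent(event)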
4 changes: 2 additions & 2 deletions pkg/apis/extensions/v1beta1/conversion.go
@@ -189,7 +189,7 @@ func Convert_networking_IngressBackend_To_v1beta1_IngressBackend(in *networking.

func Convert_v1beta1_IngressSpec_To_networking_IngressSpec(in *extensionsv1beta1.IngressSpec, out *networking.IngressSpec, s conversion.Scope) error {
if err := autoConvert_v1beta1_IngressSpec_To_networking_IngressSpec(in, out, s); err != nil {
- return nil
+ return err
}
if in.Backend != nil {
out.DefaultBackend = &networking.IngressBackend{}
@@ -202,7 +202,7 @@ func Convert_v1beta1_IngressSpec_To_networking_IngressSpec(in *extensionsv1beta1

func Convert_networking_IngressSpec_To_v1beta1_IngressSpec(in *networking.IngressSpec, out *extensionsv1beta1.IngressSpec, s conversion.Scope) error {
if err := autoConvert_networking_IngressSpec_To_v1beta1_IngressSpec(in, out, s); err != nil {
- return nil
+ return err
}
if in.DefaultBackend != nil {
out.Backend = &extensionsv1beta1.IngressBackend{}
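This is the "Propagate conversion errors" fix, and the identical one-liner recurs in the networking/v1beta1 hunk below and in the apiextensions-apiserver CustomResourceDefinitionSpec conversion further down. The bug pattern, in isolation: the generated conversion's error was checked but a literal nil was returned, so a failed conversion reported success and the caller carried on with a partially populated out object.

// Bug pattern (sketch):
if err := autoConvert_v1beta1_IngressSpec_To_networking_IngressSpec(in, out, s); err != nil {
	return nil // swallows the failure
}
// Fix: return err, letting the scheme surface the conversion failure.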
4 changes: 2 additions & 2 deletions pkg/apis/networking/v1beta1/conversion.go
@@ -52,7 +52,7 @@ func Convert_networking_IngressBackend_To_v1beta1_IngressBackend(in *networking.
}
func Convert_v1beta1_IngressSpec_To_networking_IngressSpec(in *v1beta1.IngressSpec, out *networking.IngressSpec, s conversion.Scope) error {
if err := autoConvert_v1beta1_IngressSpec_To_networking_IngressSpec(in, out, s); err != nil {
- return nil
+ return err
}
if in.Backend != nil {
out.DefaultBackend = &networking.IngressBackend{}
@@ -65,7 +65,7 @@ func Convert_v1beta1_IngressSpec_To_networking_IngressSpec(in *v1beta1.IngressSp

func Convert_networking_IngressSpec_To_v1beta1_IngressSpec(in *networking.IngressSpec, out *v1beta1.IngressSpec, s conversion.Scope) error {
if err := autoConvert_networking_IngressSpec_To_v1beta1_IngressSpec(in, out, s); err != nil {
- return nil
+ return err
}
if in.DefaultBackend != nil {
out.Backend = &v1beta1.IngressBackend{}
2 changes: 1 addition & 1 deletion pkg/controller/endpointslice/utils.go
@@ -410,7 +410,7 @@ func getAddressTypesForService(service *corev1.Service) map[discovery.AddressTyp
addrType = discovery.AddressTypeIPv6
}
serviceSupportedAddresses[addrType] = struct{}{}
klog.V(2).Infof("couldn't find ipfamilies for headless service: %v/%v. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use %s as the IP Family based on familyOf(ClusterIP:%v).", service.Namespace, service.Name, addrType, service.Spec.ClusterIP)
klog.V(2).Infof("couldn't find ipfamilies for service: %v/%v. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use %s as the IP Family based on familyOf(ClusterIP:%v).", service.Namespace, service.Name, addrType, service.Spec.ClusterIP)
return serviceSupportedAddresses
}

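The reworded log line reflects that this fallback is not specific to headless services: whenever spec.ipFamilies is absent (an older apiserver), the controller derives a single address type from the ClusterIP. Roughly, and assuming the k8s.io/utils/net helper:

// Sketch of the fallback visible in this hunk.
addrType := discovery.AddressTypeIPv4
if utilnet.IsIPv6String(service.Spec.ClusterIP) {
	addrType = discovery.AddressTypeIPv6
}
serviceSupportedAddresses[addrType] = struct{}{}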
11 changes: 6 additions & 5 deletions pkg/registry/core/service/storage/rest.go
@@ -744,6 +744,12 @@ func (rs *REST) handleClusterIPsForUpdatedService(oldService *api.Service, servi
}

// CASE B:

+ // if headless service then we bail out early (no clusterIPs management needed)
+ if len(oldService.Spec.ClusterIPs) > 0 && oldService.Spec.ClusterIPs[0] == api.ClusterIPNone {
+ return nil, nil, nil
+ }

// Update service from non-ExternalName to ExternalName, should release ClusterIP if exists.
if oldService.Spec.Type != api.ServiceTypeExternalName && service.Spec.Type == api.ServiceTypeExternalName {
toRelease = make(map[api.IPFamily]string)
@@ -760,11 +766,6 @@ func (rs *REST) handleClusterIPsForUpdatedService(oldService *api.Service, servi
return nil, toRelease, nil
}

- // if headless service then we bail out early (no clusterIPs management needed)
- if len(oldService.Spec.ClusterIPs) > 0 && oldService.Spec.ClusterIPs[0] == api.ClusterIPNone {
- return nil, nil, nil
- }

// upgrade and downgrade are specific to dualstack
if !utilfeature.DefaultFeatureGate.Enabled(features.IPv6DualStack) {
return nil, nil, nil
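The relocation is the whole fix (issue 104329, per the commit message): a headless service carries the sentinel ClusterIPs == ["None"], and with the bail-out placed after the ExternalName branch, updating a headless service to type ExternalName entered the release path and tried to free the literal "None" as though it were an allocated IP. Simplified control flow after the change:

// Sketch of the corrected ordering in handleClusterIPsForUpdatedService.
if len(oldService.Spec.ClusterIPs) > 0 && oldService.Spec.ClusterIPs[0] == api.ClusterIPNone {
	return nil, nil, nil // headless: nothing was allocated, nothing to release
}
if oldService.Spec.Type != api.ServiceTypeExternalName && service.Spec.Type == api.ServiceTypeExternalName {
	// only real ClusterIPs can reach this release path now
}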
@@ -80,16 +80,12 @@ func NewBalancedAllocation(_ runtime.Object, h framework.Handle) (framework.Plug

// todo: use resource weights in the scorer function
func balancedResourceScorer(requested, allocable resourceToValueMap, includeVolumes bool, requestedVolumes int, allocatableVolumes int) int64 {
- // This to find a node which has most balanced CPU, memory and volume usage.
cpuFraction := fractionOfCapacity(requested[v1.ResourceCPU], allocable[v1.ResourceCPU])
memoryFraction := fractionOfCapacity(requested[v1.ResourceMemory], allocable[v1.ResourceMemory])
- // fractions might be greater than 1 because pods with no requests get minimum
- // values.
- if cpuFraction > 1 {
- cpuFraction = 1
- }
- if memoryFraction > 1 {
- memoryFraction = 1
+ // This to find a node which has most balanced CPU, memory and volume usage.
+ if cpuFraction >= 1 || memoryFraction >= 1 {
+ // if requested >= capacity, the corresponding host should never be preferred.
+ return 0
}

if includeVolumes && utilfeature.DefaultFeatureGate.Enabled(features.BalanceAttachedNodeVolumes) && allocatableVolumes > 0 {
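For readers of the revert: balanced allocation scores a node by how close its CPU and memory utilization fractions are to each other, and this change restores the earlier rule that any fraction at or above capacity disqualifies the node outright. A standalone sketch of the arithmetic, using the framework's MaxNodeScore of 100:

package main

import (
	"fmt"
	"math"
)

// balancedScore mirrors the balanced-allocation idea in isolation:
// the closer the two fractions, the higher the score.
func balancedScore(cpuFraction, memoryFraction float64) int64 {
	if cpuFraction >= 1 || memoryFraction >= 1 {
		return 0 // requested >= capacity: never prefer this node
	}
	diff := math.Abs(cpuFraction - memoryFraction)
	return int64((1 - diff) * 100)
}

func main() {
	fmt.Println(balancedScore(0.6, 0.2)) // 60: fractions differ by 40 points
	fmt.Println(balancedScore(1.5, 0.5)) // 0: CPU over capacity
}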
@@ -213,13 +213,6 @@ func TestNodeResourcesBalancedAllocation(t *testing.T) {
},
},
}
- nonZeroContainer := v1.PodSpec{
- Containers: []v1.Container{{}},
- }
- nonZeroContainer1 := v1.PodSpec{
- NodeName: "machine1",
- Containers: []v1.Container{{}},
- }
tests := []struct {
pod *v1.Pod
pods []*v1.Pod
@@ -275,24 +268,6 @@ func TestNodeResourcesBalancedAllocation(t *testing.T) {
{Spec: machine2Spec, ObjectMeta: metav1.ObjectMeta{Labels: labels1}},
},
},
- {
- // Node1 scores on 0-MaxNodeScore scale
- // CPU Fraction: 300 / 250 = 100%
- // Memory Fraction: 600 / 10000 = 60%
- // Node1 Score: MaxNodeScore - (100-60)*MaxNodeScore = 60
- // Node2 scores on 0-MaxNodeScore scale
- // CPU Fraction: 100 / 250 = 40%
- // Memory Fraction: 200 / 10000 = 20%
- // Node2 Score: MaxNodeScore - (40-20)*MaxNodeScore= 80
- pod: &v1.Pod{Spec: nonZeroContainer},
- nodes: []*v1.Node{makeNode("machine1", 250, 1000*1024*1024), makeNode("machine2", 250, 1000*1024*1024)},
- expectedList: []framework.NodeScore{{Name: "machine1", Score: 60}, {Name: "machine2", Score: 80}},
- name: "no resources requested, pods scheduled",
- pods: []*v1.Pod{
- {Spec: nonZeroContainer1},
- {Spec: nonZeroContainer1},
- },
- },
{
// Node1 scores on 0-MaxNodeScore scale
// CPU Fraction: 6000 / 10000 = 60%
@@ -351,17 +326,27 @@ func TestNodeResourcesBalancedAllocation(t *testing.T) {
},
{
// Node1 scores on 0-MaxNodeScore scale
- // CPU Fraction: 6000 / 6000 = 1
+ // CPU Fraction: 6000 / 4000 > 100% ==> Score := 0
// Memory Fraction: 0 / 10000 = 0
- // Node1 Score: MaxNodeScore - (1 - 0) * MaxNodeScore = 0
+ // Node1 Score: 0
// Node2 scores on 0-MaxNodeScore scale
- // CPU Fraction: 6000 / 6000 = 1
+ // CPU Fraction: 6000 / 4000 > 100% ==> Score := 0
// Memory Fraction 5000 / 10000 = 50%
- // Node2 Score: MaxNodeScore - (1 - 0.5) * MaxNodeScore = 50
+ // Node2 Score: 0
pod: &v1.Pod{Spec: cpuOnly},
- nodes: []*v1.Node{makeNode("machine1", 6000, 10000), makeNode("machine2", 6000, 10000)},
- expectedList: []framework.NodeScore{{Name: "machine1", Score: 0}, {Name: "machine2", Score: 50}},
- name: "requested resources at node capacity",
+ nodes: []*v1.Node{makeNode("machine1", 4000, 10000), makeNode("machine2", 4000, 10000)},
+ expectedList: []framework.NodeScore{{Name: "machine1", Score: 0}, {Name: "machine2", Score: 0}},
+ name: "requested resources exceed node capacity",
pods: []*v1.Pod{
{Spec: cpuOnly},
{Spec: cpuAndMemory},
},
},
+ {
+ pod: &v1.Pod{Spec: noResources},
+ nodes: []*v1.Node{makeNode("machine1", 0, 0), makeNode("machine2", 0, 0)},
+ expectedList: []framework.NodeScore{{Name: "machine1", Score: 0}, {Name: "machine2", Score: 0}},
+ name: "zero node resources, pods scheduled with resources",
+ pods: []*v1.Pod{
+ {Spec: cpuOnly},
+ {Spec: cpuAndMemory},
3 changes: 2 additions & 1 deletion staging/src/k8s.io/api/go.sum

Some generated files are not rendered by default.

3 changes: 2 additions & 1 deletion staging/src/k8s.io/apiextensions-apiserver/go.sum

Some generated files are not rendered by default.

@@ -17,6 +17,8 @@ limitations under the License.
package v1

import (
"bytes"

"k8s.io/apiextensions-apiserver/pkg/apis/apiextensions"
apiequality "k8s.io/apimachinery/pkg/api/equality"
"k8s.io/apimachinery/pkg/conversion"
@@ -36,20 +38,29 @@ func Convert_apiextensions_JSONSchemaProps_To_v1_JSONSchemaProps(in *apiextensio
return nil
}

+ var nullLiteral = []byte(`null`)

func Convert_apiextensions_JSON_To_v1_JSON(in *apiextensions.JSON, out *JSON, s conversion.Scope) error {
raw, err := json.Marshal(*in)
if err != nil {
return err
}
- out.Raw = raw
+ if len(raw) == 0 || bytes.Equal(raw, nullLiteral) {
+ // match JSON#UnmarshalJSON treatment of literal nulls
+ out.Raw = nil
+ } else {
+ out.Raw = raw
+ }
return nil
}

func Convert_v1_JSON_To_apiextensions_JSON(in *JSON, out *apiextensions.JSON, s conversion.Scope) error {
if in != nil {
var i interface{}
if err := json.Unmarshal(in.Raw, &i); err != nil {
return err
if len(in.Raw) > 0 && !bytes.Equal(in.Raw, nullLiteral) {
if err := json.Unmarshal(in.Raw, &i); err != nil {
return err
}
}
*out = i
} else {
@@ -103,7 +114,7 @@ func Convert_apiextensions_CustomResourceDefinitionSpec_To_v1_CustomResourceDefi

func Convert_v1_CustomResourceDefinitionSpec_To_apiextensions_CustomResourceDefinitionSpec(in *CustomResourceDefinitionSpec, out *apiextensions.CustomResourceDefinitionSpec, s conversion.Scope) error {
if err := autoConvert_v1_CustomResourceDefinitionSpec_To_apiextensions_CustomResourceDefinitionSpec(in, out, s); err != nil {
- return nil
+ return err
}

if len(out.Versions) == 0 {
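What the null handling corrects: JSON#UnmarshalJSON (fixed at the bottom of this diff to use the same nullLiteral) stores a nil Raw for a literal null, but the internal-to-v1 conversion re-marshalled the nil interface into the bytes `null` and kept them, so explicit nulls in default/enum/example mutated on every round trip. The new TestJSONRoundTrip below pins both behaviors: top-level nulls normalize away (the `{}` expectation), while nulls nested inside values survive. The underlying Go behavior, as a self-contained sketch:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

func main() {
	// An explicit null (e.g. "default": null) decodes to a nil interface.
	var v interface{}
	_ = json.Unmarshal([]byte(`null`), &v) // v == nil

	// Marshalling the nil interface produces the bytes `null` again...
	raw, _ := json.Marshal(v)
	fmt.Println(string(raw)) // null

	// ...so the conversion normalizes them back to a nil Raw, matching
	// what JSON#UnmarshalJSON does on the way in.
	if len(raw) == 0 || bytes.Equal(raw, []byte(`null`)) {
		raw = nil
	}
	fmt.Println(raw == nil) // true
}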
@@ -17,6 +17,7 @@ limitations under the License.
package v1

import (
"encoding/json"
"reflect"
"strings"
"testing"
@@ -605,3 +606,57 @@ func TestJSONConversion(t *testing.T) {
}
}
}

func TestJSONRoundTrip(t *testing.T) {
testcases := []struct {
name string
in string
out string
}{
{
name: "nulls",
in: `{"default":null,"enum":null,"example":null}`,
out: `{}`,
},
{
name: "null values",
in: `{"default":{"test":null},"enum":[null],"example":{"test":null}}`,
out: `{"default":{"test":null},"enum":[null],"example":{"test":null}}`,
},
}

scheme := runtime.NewScheme()
// add internal and external types
if err := apiextensions.AddToScheme(scheme); err != nil {
t.Fatal(err)
}
if err := AddToScheme(scheme); err != nil {
t.Fatal(err)
}

for _, tc := range testcases {
t.Run(tc.name, func(t *testing.T) {
external := &JSONSchemaProps{}
if err := json.Unmarshal([]byte(tc.in), external); err != nil {
t.Fatal(err)
}

internal := &apiextensions.JSONSchemaProps{}
if err := scheme.Convert(external, internal, nil); err != nil {
t.Fatalf("unexpected error: %v", err)
}
roundtripped := &JSONSchemaProps{}
if err := scheme.Convert(internal, roundtripped, nil); err != nil {
t.Fatalf("unexpected error: %v", err)
}

out, err := json.Marshal(roundtripped)
if err != nil {
t.Fatal(err)
}
if string(out) != string(tc.out) {
t.Fatalf("expected\n%s\ngot\n%s", string(tc.out), string(out))
}
})
}
}
@@ -17,6 +17,7 @@ limitations under the License.
package v1

import (
"bytes"
"errors"

"k8s.io/apimachinery/pkg/util/json"
@@ -128,7 +129,7 @@ func (s JSON) MarshalJSON() ([]byte, error) {
}

func (s *JSON) UnmarshalJSON(data []byte) error {
if len(data) > 0 && string(data) != "null" {
if len(data) > 0 && !bytes.Equal(data, nullLiteral) {
s.Raw = data
}
return nil