fix test panic #64

Merged 1 commit into evanphx:master on Sep 8, 2018

Conversation

CaoShuFeng
Contributor

Without this change, the following code would panic:
```go
package main

import (
	"fmt"

	jsonpatch "github.com/evanphx/json-patch"
)

func main() {
	original := []byte(`{"name": "John", "age": 24, "height": 3.21}`)
	patchJSON := []byte(`[
		{"op": "test", "path": "/name"}
	]`)

	patch, err := jsonpatch.DecodePatch(patchJSON)
	if err != nil {
		fmt.Printf("decode %v\n", err)
		return
	}

	modified, err := patch.Apply(original)
	if err != nil {
		fmt.Printf("apply %v\n", err)
		return
	}

	fmt.Printf("Original document: %s\n", original)
	fmt.Printf("Modified document: %s\n", modified)
}
```
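For contrast, here is a minimal sketch of the same program with a well-formed `test` operation. Per RFC 6902, a `test` op carries a `value` member to compare against the document at `path`; the snippet above omits it, which is the malformed input that triggered the panic. The expected value `"John"` below is only an illustrative choice, not part of this PR.

```go
package main

import (
	"fmt"

	jsonpatch "github.com/evanphx/json-patch"
)

func main() {
	original := []byte(`{"name": "John", "age": 24, "height": 3.21}`)
	// A well-formed test op includes the value to compare against.
	patchJSON := []byte(`[
		{"op": "test", "path": "/name", "value": "John"}
	]`)

	patch, err := jsonpatch.DecodePatch(patchJSON)
	if err != nil {
		fmt.Printf("decode %v\n", err)
		return
	}

	// Apply succeeds when the test passes; a failing test is reported
	// as an ordinary error rather than a panic.
	modified, err := patch.Apply(original)
	if err != nil {
		fmt.Printf("apply %v\n", err)
		return
	}

	fmt.Printf("Modified document: %s\n", modified)
}
```

With this fix, applying the malformed patch from the reproduction above should likewise fail with an ordinary error instead of panicking.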
evanphx merged commit 36442db into evanphx:master on Sep 8, 2018
CaoShuFeng added a commit to CaoShuFeng/kubernetes that referenced this pull request Sep 10, 2018
Grab important bug fix that can cause a `panic()` from this package on
certain inputs. See evanphx/json-patch#64
k8s-publishing-bot pushed a commit to kubernetes/apimachinery that referenced this pull request Sep 13, 2018
Grab important bug fix that can cause a `panic()` from this package on
certain inputs. See evanphx/json-patch#64

Kubernetes-commit: 2e974f30ab728f2f105af30d4de9db01d02e9514
k8s-publishing-bot pushed a commit to kubernetes/apiserver that referenced this pull request Sep 13, 2018
Grab important bug fix that can cause a `panic()` from this package on
certain inputs. See evanphx/json-patch#64

Kubernetes-commit: 2e974f30ab728f2f105af30d4de9db01d02e9514
k8s-publishing-bot pushed a commit to kubernetes/kube-aggregator that referenced this pull request Sep 13, 2018
Grab important bug fix that can cause a `panic()` from this package on
certain inputs. See evanphx/json-patch#64

Kubernetes-commit: 2e974f30ab728f2f105af30d4de9db01d02e9514
k8s-publishing-bot pushed a commit to kubernetes/sample-apiserver that referenced this pull request Sep 13, 2018
Grab important bug fix that can cause a `panic()` from this package on
certain inputs. See evanphx/json-patch#64

Kubernetes-commit: 2e974f30ab728f2f105af30d4de9db01d02e9514
k8s-publishing-bot pushed a commit to kubernetes/apiextensions-apiserver that referenced this pull request Sep 13, 2018
Grab important bug fix that can cause a `panic()` from this package on
certain inputs. See evanphx/json-patch#64

Kubernetes-commit: 2e974f30ab728f2f105af30d4de9db01d02e9514
k8s-publishing-bot pushed a commit to kubernetes/cli-runtime that referenced this pull request Sep 13, 2018
Grab important bug fix that can cause a `panic()` from this package on
certain inputs. See evanphx/json-patch#64

Kubernetes-commit: 2e974f30ab728f2f105af30d4de9db01d02e9514
k8s-publishing-bot pushed a commit to kubernetes/sample-cli-plugin that referenced this pull request Sep 13, 2018
Grab important bug fix that can cause a `panic()` from this package on
certain inputs. See evanphx/json-patch#64

Kubernetes-commit: 2e974f30ab728f2f105af30d4de9db01d02e9514
caesarxuchao pushed a commit to caesarxuchao/kubernetes that referenced this pull request Feb 15, 2019
Grab important bug fix that can cause a `panic()` from this package on
certain inputs. See evanphx/json-patch#64
k8s-publishing-bot pushed a commit to kubernetes/apimachinery that referenced this pull request Feb 23, 2019
Grab important bug fix that can cause a `panic()` from this package on
certain inputs. See evanphx/json-patch#64

Kubernetes-commit: fff72767e8fe2b49d6bea3e33cb75e785c7db5b4
k8s-publishing-bot pushed a commit to kubernetes/apiserver that referenced this pull request Feb 23, 2019
Grab important bug fix that can cause a `panic()` from this package on
certain inputs. See evanphx/json-patch#64

Kubernetes-commit: fff72767e8fe2b49d6bea3e33cb75e785c7db5b4
k8s-publishing-bot pushed a commit to kubernetes/kube-aggregator that referenced this pull request Feb 23, 2019
Grab important bug fix that can cause a `panic()` from this package on
certain inputs. See evanphx/json-patch#64

Kubernetes-commit: fff72767e8fe2b49d6bea3e33cb75e785c7db5b4
k8s-publishing-bot pushed a commit to kubernetes/sample-apiserver that referenced this pull request Feb 23, 2019
Grab important bug fix that can cause a `panic()` from this package on
certain inputs. See evanphx/json-patch#64

Kubernetes-commit: fff72767e8fe2b49d6bea3e33cb75e785c7db5b4
k8s-publishing-bot pushed a commit to kubernetes/apiextensions-apiserver that referenced this pull request Feb 23, 2019
Grab important bug fix that can cause a `panic()` from this package on
certain inputs. See evanphx/json-patch#64

Kubernetes-commit: fff72767e8fe2b49d6bea3e33cb75e785c7db5b4
rjaini added a commit to msazurestackworkloads/kubernetes that referenced this pull request Apr 14, 2019
* Default extensions/v1beta1 Deployment's ProgressDeadlineSeconds to MaxInt32.

1. MaxInt32 has the same meaning as unset, for compatibility
2. Deployment controller treats MaxInt32 the same as unset (nil)

* Update API doc of ProgressDeadlineSeconds

* Autogen

1. hack/update-generated-protobuf.sh
2. hack/update-generated-swagger-docs.sh
3. hack/update-swagger-spec.sh
4. hack/update-openapi-spec.sh
5. hack/update-api-reference-docs.sh

* Lookup PX api port from k8s service

Fixes kubernetes#70033

Signed-off-by: Harsh Desai <harsh@portworx.com>

* cache portworx API port

- reused client whenever possible
- refactor get client function into explicit cluster-wide and local functions

Signed-off-by: Harsh Desai <harsh@portworx.com>

* Fix bug with volume getting marked as not in-use with pending op

Add test for verifying volume detach

* Fix flake with e2e test that checks detach while mount in progress

A volume can show up as in-use even before it gets attached
to the node.

* fix node and kubelet start times

* Bump golang to 1.10.7 (CVE-2018-16875)

* Kubernetes version v1.11.7-beta.0 openapi-spec file updates

* Add/Update CHANGELOG-1.11.md for v1.11.6.

* New sysctls to improve pod termination

* Retry scheduling on various events.

* Test rescheduling on various events.
- Add resyncPeriod parameter for setupCluster to make resync period of scheduler configurable.
- Add test case for static provisioning and delay binding storage class. Move pods into active queue on PV add/update events.
- Add a stress test with scheduler resync to detect possible race conditions.

* fix predicate invalidation method

* Fixed clearing of devicePath after UnmountDevice

UnmountDevice must not clear devicepath, because such devicePath
may come from node.status (e.g. on AWS) and subsequent MountDevice
operation (that may be already enqueued) needs it.

* fix race condition when attach azure disk in vmss

fix gofmt issue

* Check for volume-subpaths directory in orphaned pod cleanup

* Leave refactoring TODO

* Update BUILD file

* Protect Netlink calls with a mutex

* Fix race in setting nominated node

* autogenerated files

* update cloud provider boilerplate

The pull-kubernetes-verify presubmit is failing on
verify-cloudprovider-gce.sh because it is a new year and thus current
test generated code doesn't match the prior committed generated code in
the copyright header.  The verifier is removed in master now, so for
simplicity and rather than fixing the verifier to ignore the header
differences for prior supported branches, this commit is the result of
rerunning hack/update-cloudprovider-gce.sh.

Signed-off-by: Tim Pepper <tpepper@vmware.com>

* Cluster Autoscaler 1.3.5

* Move unmount volume util from pkg/volume/util to pkg/util/mount

* Update doCleanSubpaths to use UnmountMountPoint

* Add unit test for UnmountMountPoint

* Add comments around use of PathExists

* Move linux test utils to os-independent test file

* Rename UnmountMountPoint to CleanupMountPoint

* Add e2e test for removing the subpath directory

* change azure disk host cache to ReadOnly by default

change cachingMode default value for azure disk PV

revert back to ReadWrite in azure disk PV setting

* activate unschedulable pods only if the node became more schedulable

* make integration/verify script look for k8s under GOPATH

* Clean up artifacts variables in hack scripts

* use json format to get rbd image size

* change sort function of scheduling queue to avoid starvation when unschedulable pods are in the queue

When starvation happens:
- a lot of unschedulable pods exist at the head of the queue
- because condition.LastTransitionTime is updated only when condition.Status changes
- (this means that once a pod is marked unschedulable, the field is never updated until the pod is successfully scheduled)

What was changed:
- condition.LastProbeTime is updated every time a pod is determined unschedulable
- the sort function now uses LastProbeTime to avoid the starvation described above

Consideration:
- This change increases k8s API server load because it updates Pod.status whenever the scheduler deems a pod unschedulable.

Signed-off-by: Shingo Omura <everpeace@gmail.com>

* Fix action required for pr 61373

* Fix kube-proxy PodSecurityPolicy RoleBinding namespace

* Find current resourceVersion for waiting for deletion/conditions

* Add e2e test for file exec

* Fix nil panic propagation

* Add `metrics-port` to kube-proxy cmd flags.

* Fix AWS NLB security group updates

This corrects a problem where valid security group ports were removed
unintentionally when updating a service or when node changes occur.

Fixes kubernetes#60825, kubernetes#64148

* Unit test for aws_lb security group filtering

kubernetes#60825

* Do not snapshot scheduler cache before starting preemption

* Fix and improve preemption test to work with the new logic

* changelog duplicate

* Increase limit for object size in streaming serializer

* Attempt to deflake HPA e2e test

Increase CPU usage requested from resource consumer. Observed CPU usage
must:
- be consistently above 300 milliCPU (2 pods * 500 mCPU request per
pod * .3 target utilization) to avoid scaling down below 3.
- never exceed 600 mCPU (4 pods * ...) to avoid scaling up above 4.

Also improve logging in case this doesn't solve the problem.

Change-Id: Id1d9c0193ccfa063855b29c5274587f05c1eb4d3

* Kubernetes version v1.11.8-beta.0 openapi-spec file updates

* Add/Update CHANGELOG-1.11.md for v1.11.7.

* Correlate max-inflight values in GCE with master VM sizes

* Update to go1.10.8

* Don't error on deprecated native http_archive rule

* add goroutine to move unschedulablepods to activeq regularly

* Always select the in-memory group/version as a target when decoding from storage

* fix mac filtering in vsphere cloud provider

* fix mac filtering in vsphere cloud provider

* Fix kubernetes#73479 AWS NLB target groups missing tags

`elbv2.AddTags` doesn't seem to support assigning the same set of
tags to multiple resources at once leading to the following error:
  Error adding tags after modifying load balancer targets:
  "ValidationError: Only one resource can be tagged at a time"

This can happen when using AWS NLB with multiple listeners pointing
to different node ports.

When k8s creates a NLB it creates a target group per listener along
with installing security group ingress rules allowing the traffic to
reach the k8s nodes.

Unfortunately if those target groups are not tagged, k8s will not
manage them, thinking it is not the owner.

This small change assigns tags one resource at a time instead of
batching them as before.

Signed-off-by: Brice Figureau <brice@daysofwonder.com>

* support multiple cidr vpc for nlb health check

* Use watch cache when rv=0 even when limit is set

* Avoid going back in time in watchcache watchers

* Bump the pod memory to higher levels to work on power

* vendor: bump github.com/evanphx/json-patch

Grab important bug fix that can cause a `panic()` from this package on
certain inputs. See evanphx/json-patch@73af7f5

Signed-off-by: Brandon Philips <brandon@ifup.org>

* vendor: bump github.com/evanphx/json-patch

Grab important bug fix that can cause a `panic()` from this package on
certain inputs. See evanphx/json-patch#64

* update json-patch to pick up bug fixes

* Importing latest json-patch.

* Set the maximum size increase that the copy operations in a json patch can cause

* Adding a limit on the maximum bytes accepted to be decoded in a resource write request.

* Cluster Autoscaler 1.3.7

* Make integration test helper public.

This was done in the master branch in
kubernetes#69902. The pull includes many
other changes, so we made this targeted patch.

* add integration test

* Loosening the request body size limit to 100MB to account for the size ratio between json and protobuf.

* Limit the number of operations in a single json patch to be 10,000

* Fix testing if an interface is the loopback

It's not guaranteed that the loopback interface only has the loopback
IP; in our environments the loopback interface is also assigned a 169
address.

* fix smb remount issue on Windows

add comments for doSMBMount func

fix comments about smb mount

fix build error

* Allow headless svc without ports to have endpoints

As cited in
kubernetes/dns#174 - this is documented to
work, and I don't see why it shouldn't work.  We allowed the definition
of headless services without ports, but apparently nobody tested it very
well.

Manually tested clusterIP services with no ports - validation error.

Manually tested services with negative ports - validation error.

New tests failed, output inspected and verified.  Now pass.

* do not return error on invalid mac address in vsphere cloud provider

* remove get azure accounts in the init process; set timeout for get azure account operation

use const for timeout value

remove get azure accounts in the init process

add lock for account init

* add timeout in GetVolumeLimits operation

add timeout for getAllStorageAccounts

* record event on endpoint update failure

* fix parse devicePath issue on Azure Disk

* add retry for detach azure disk

add more logging info in detach disk

add azure disk attach/detach logs

* Fix find-binary to locate bazel e2e tests

* Reduce cardinality of admission webhook metrics

* Explicitly set GVK when sending objects to webhooks

* Kubernetes version v1.11.9-beta.0 openapi-spec file updates

* Add/Update CHANGELOG-1.11.md for v1.11.8.

* add Azure Container Registry anonymous repo support

apply fix for msi and fix test failure

* cri_stats_provider: overload nil as 0 for exited containers stats

Always report 0 cpu/memory usage for exited containers to make
metrics-server work as expected.

Signed-off-by: Lu Fengqi <lufq.fnst@cn.fujitsu.com>

* Fix panic in kubectl cp command

* Adding a check to make sure UseInstanceMetadata flag is true to get data from metadata.

* add module 'nf_conntrack' in ipvs prerequisite check

* Ensure Azure load balancer cleaned up on 404 or 403

* Allow disabling outbound SNAT when the Azure standard load balancer is used

* Allow session affinity a period of time to set up for new services.

This is to deal with the flaky session affinity test.

* build/gci: bump CNI version to 0.7.5

* Fix size of repd e2e to use Gi

* Missed one change.
rjaini added a commit to msazurestackworkloads/kubernetes that referenced this pull request May 6, 2019
* Default extensions/v1beta1 Deployment's ProgressDeadlineSeconds to MaxInt32.

1. MaxInt32 has the same meaning as unset, for compatibility
2. Deployment controller treats MaxInt32 the same as unset (nil)

* Update API doc of ProgressDeadlineSeconds

* Autogen

1. hack/update-generated-protobuf.sh
2. hack/update-generated-swagger-docs.sh
3. hack/update-swagger-spec.sh
4. hack/update-openapi-spec.sh
5. hack/update-api-reference-docs.sh

* Lookup PX api port from k8s service

Fixes kubernetes#70033

Signed-off-by: Harsh Desai <harsh@portworx.com>

* cache portworx API port

- reused client whenever possible
- refactor get client function into explicit cluster-wide and local functions

Signed-off-by: Harsh Desai <harsh@portworx.com>

* Fix bug with volume getting marked as not in-use with pending op

Add test for verifying volume detach

* Fix flake with e2e test that checks detach while mount in progress

A volume can show up as in-use even before it gets attached
to the node.

* fix node and kubelet start times

* Bump golang to 1.10.7 (CVE-2018-16875)

* Kubernetes version v1.11.7-beta.0 openapi-spec file updates

* Add/Update CHANGELOG-1.11.md for v1.11.6.

* New sysctls to improve pod termination

* Retry scheduling on various events.

* Test rescheduling on various events.
- Add resyncPeriod parameter for setupCluster to make resync period of scheduler configurable.
- Add test case for static provisioning and delay binding storage class. Move pods into active queue on PV add/update events.
- Add a stress test with scheduler resync to detect possible race conditions.

* fix predicate invalidation method

* Fixed clearing of devicePath after UnmountDevice

UnmountDevice must not clear devicepath, because such devicePath
may come from node.status (e.g. on AWS) and subsequent MountDevice
operation (that may be already enqueued) needs it.

* fix race condition when attach azure disk in vmss

fix gofmt issue

* Check for volume-subpaths directory in orphaned pod cleanup

* Leave refactoring TODO

* Update BUILD file

* Protect Netlink calls with a mutex

* Fix race in setting nominated node

* autogenerated files

* update cloud provider boilerplate

The pull-kubernetes-verify presubmit is failing on
verify-cloudprovider-gce.sh because it is a new year and thus current
test generated code doesn't match the prior committed generated code in
the copyright header.  The verifier is removed in master now, so for
simplicity and rather than fixing the verifier to ignore the header
differences for prior supported branches, this commit is the result of
rerunning hack/update-cloudprovider-gce.sh.

Signed-off-by: Tim Pepper <tpepper@vmware.com>

* Cluster Autoscaler 1.3.5

* Move unmount volume util from pkg/volume/util to pkg/util/mount

* Update doCleanSubpaths to use UnmountMountPoint

* Add unit test for UnmountMountPoint

* Add comments around use of PathExists

* Move linux test utils to os-independent test file

* Rename UnmountMountPoint to CleanupMountPoint

* Add e2e test for removing the subpath directory

* change azure disk host cache to ReadOnly by default

change cachingMode default value for azure disk PV

revert back to ReadWrite in azure disk PV setting

* activate unschedulable pods only if the node became more schedulable

* make integration/verify script look for k8s under GOPATH

* Clean up artifacts variables in hack scripts

* use json format to get rbd image size

* change sort function of scheduling queue to avoid starvation when unschedulable pods are in the queue

When starvation happens:
- a lot of unschedulable pods exist at the head of the queue
- because condition.LastTransitionTime is updated only when condition.Status changes
- (this means that once a pod is marked unschedulable, the field is never updated until the pod is successfully scheduled)

What was changed:
- condition.LastProbeTime is updated every time a pod is determined unschedulable
- the sort function now uses LastProbeTime to avoid the starvation described above

Consideration:
- This change increases k8s API server load because it updates Pod.status whenever the scheduler deems a pod unschedulable.

Signed-off-by: Shingo Omura <everpeace@gmail.com>

* Fix action required for pr 61373

* Fix kube-proxy PodSecurityPolicy RoleBinding namespace

* Find current resourceVersion for waiting for deletion/conditions

* Add e2e test for file exec

* Fix nil panic propagation

* Add `metrics-port` to kube-proxy cmd flags.

* Fix AWS NLB security group updates

This corrects a problem where valid security group ports were removed
unintentionally when updating a service or when node changes occur.

Fixes kubernetes#60825, kubernetes#64148

* Unit test for aws_lb security group filtering

kubernetes#60825

* Do not snapshot scheduler cache before starting preemption

* Fix and improve preemption test to work with the new logic

* changelog duplicate

* Increase limit for object size in streaming serializer

* Attempt to deflake HPA e2e test

Increase CPU usage requested from resource consumer. Observed CPU usage
must:
- be consistently above 300 milliCPU (2 pods * 500 mCPU request per
pod * .3 target utilization) to avoid scaling down below 3.
- never exceed 600 mCPU (4 pods * ...) to avoid scaling up above 4.

Also improve logging in case this doesn't solve the problem.

Change-Id: Id1d9c0193ccfa063855b29c5274587f05c1eb4d3

* Kubernetes version v1.11.8-beta.0 openapi-spec file updates

* Add/Update CHANGELOG-1.11.md for v1.11.7.

* Correlate max-inflight values in GCE with master VM sizes

* Update to go1.10.8

* Don't error on deprecated native http_archive rule

* add goroutine to move unschedulablepods to activeq regularly

* Always select the in-memory group/version as a target when decoding from storage

* fix mac filtering in vsphere cloud provider

* fix mac filtering in vsphere cloud provider

* Fix kubernetes#73479 AWS NLB target groups missing tags

`elbv2.AddTags` doesn't seem to support assigning the same set of
tags to multiple resources at once leading to the following error:
  Error adding tags after modifying load balancer targets:
  "ValidationError: Only one resource can be tagged at a time"

This can happen when using AWS NLB with multiple listeners pointing
to different node ports.

When k8s creates a NLB it creates a target group per listener along
with installing security group ingress rules allowing the traffic to
reach the k8s nodes.

Unfortunately if those target groups are not tagged, k8s will not
manage them, thinking it is not the owner.

This small change assigns tags one resource at a time instead of
batching them as before.

Signed-off-by: Brice Figureau <brice@daysofwonder.com>

* support multiple cidr vpc for nlb health check

* Use watch cache when rv=0 even when limit is set

* Avoid going back in time in watchcache watchers

* Bump the pod memory to higher levels to work on power

* vendor: bump github.com/evanphx/json-patch

Grab important bug fix that can cause a `panic()` from this package on
certain inputs. See evanphx/json-patch@73af7f5

Signed-off-by: Brandon Philips <brandon@ifup.org>

* vendor: bump github.com/evanphx/json-patch

Grab important bug fix that can cause a `panic()` from this package on
certain inputs. See evanphx/json-patch#64

* update json-patch to pick up bug fixes

* Importing latest json-patch.

* Set the maximum size increase that the copy operations in a json patch can cause

* Adding a limit on the maximum bytes accepted to be decoded in a resource write request.

* Cluster Autoscaler 1.3.7

* Make integration test helper public.

This was done in the master branch in
kubernetes#69902. The pull includes many
other changes, so we made this targeted patch.

* add integration test

* Loosening the request body size limit to 100MB to account for the size ratio between json and protobuf.

* Limit the number of operations in a single json patch to be 10,000

* Fix testing if an interface is the loopback

It's not guaranteed that the loopback interface only has the loopback
IP; in our environments the loopback interface is also assigned a 169
address.

* fix smb remount issue on Windows

add comments for doSMBMount func

fix comments about smb mount

fix build error

* Allow headless svc without ports to have endpoints

As cited in
kubernetes/dns#174 - this is documented to
work, and I don't see why it shouldn't work.  We allowed the definition
of headless services without ports, but apparently nobody tested it very
well.

Manually tested clusterIP services with no ports - validation error.

Manually tested services with negative ports - validation error.

New tests failed, output inspected and verified.  Now pass.

* do not return error on invalid mac address in vsphere cloud provider

* remove get azure accounts in the init process; set timeout for get azure account operation

use const for timeout value

remove get azure accounts in the init process

add lock for account init

* add timeout in GetVolumeLimits operation

add timeout for getAllStorageAccounts

* record event on endpoint update failure

* fix parse devicePath issue on Azure Disk

* add retry for detach azure disk

add more logging info in detach disk

add azure disk attach/detach logs

* Fix find-binary to locate bazel e2e tests

* Reduce cardinality of admission webhook metrics

* Explicitly set GVK when sending objects to webhooks

* Kubernetes version v1.11.9-beta.0 openapi-spec file updates

* Add/Update CHANGELOG-1.11.md for v1.11.8.

* add Azure Container Registry anonymous repo support

apply fix for msi and fix test failure

* cri_stats_provider: overload nil as 0 for exited containers stats

Always report 0 cpu/memory usage for exited containers to make
metrics-server work as expected.

Signed-off-by: Lu Fengqi <lufq.fnst@cn.fujitsu.com>

* Fix panic in kubectl cp command

* Adding a check to make sure UseInstanceMetadata flag is true to get data from metadata.

* Update Cluster Autoscaler version to 1.3.8

* add module 'nf_conntrack' in ipvs prerequisite check

* Ensure Azure load balancer cleaned up on 404 or 403

* Allow disabling outbound SNAT when the Azure standard load balancer is used

* kubelet: updated logic of verifying a static critical pod

- check if a pod is static by its static pod info
- meanwhile, check if a pod is critical by its corresponding mirror pod info

* Allow session affinity a period of time to set up for new services.

This is to deal with the flaky session affinity test.

* Restore username and password kubectl flags

* build/gci: bump CNI version to 0.7.5

* Fix size of repd e2e to use Gi

* bump repd min size in e2es

* allows configuring NPD release and flags on GCI and add cluster e2e test

* allows configuring NPD image version in node e2e test and fix the test

* Kubernetes version v1.11.10-beta.0 openapi-spec file updates

* Add/Update CHANGELOG-1.11.md for v1.11.9.

* stop vsphere cloud provider from spamming logs with `failed to patch IP`

Fixes: kubernetes#75236

* Restore *filter table for ipvs

Resolve: kubernetes#68194

* Update gcp images with security patches

[stackdriver addon] Bump prometheus-to-sd to v0.5.0 to pick up security fixes.
[fluentd-gcp addon] Bump fluentd-gcp-scaler to v0.5.1 to pick up security fixes.
[fluentd-gcp addon] Bump event-exporter to v0.2.4 to pick up security fixes.
[fluentd-gcp addon] Bump prometheus-to-sd to v0.5.0 to pick up security fixes.
[metatada-proxy addon] Bump prometheus-to-sd v0.5.0 to pick up security fixes.

* Bump debian-iptables to v11.0.2.

* Updated Regional PD failover test to use node taints instead of instance group deletion

* Updated regional PD minimum size; changed regional PD failover test to use StorageClassTest to generate PVC template

* Removed istio related addon manifests, as the directory is deprecated.

* Use Node-Problem-Detector v0.6.3 on GCI

* Increase default maximumLoadBalancerRuleCount to 250

* Fix Azure SLB support for multiple backend pools

Azure VM and vmssVM support multiple backend pools for the same SLB, but
not for different LBs.

* disable HTTP2 ingress test

* Upgrade compute API to version 2019-03-01

* Replace vmss update API with instance-level update API

* Cleanup codes that not required any more

* Cleanup interfaces and add unit tests

* Update vendors

* Create the "internal" firewall rule for kubemark master.

This is equivalent to the "internal" firewall rule that is created for
the regular masters.
The main reason for doing it is to allow prometheus scraping metrics
from various kubemark master components, e.g. kubelet.

Ref. kubernetes/perf-tests#503

* Move back APIs to Azure stack supported version (#19)
tamalsaha pushed a commit to gomodules/encoding that referenced this pull request Aug 10, 2021
Grab important bug fix that can cause a `panic()` from this package on
certain inputs. See evanphx/json-patch#64

Kubernetes-commit: 2e974f30ab728f2f105af30d4de9db01d02e9514