Fix many typos in both code and comments #74125

Merged 1 commit on Mar 1, 2019
2 changes: 1 addition & 1 deletion test/e2e/apps/rc.go
@@ -114,7 +114,7 @@ func TestReplicationControllerServeImageOrFail(f *framework.Framework, test stri

// Create a replication controller for a service
// that serves its hostname.
-// The source for the Docker containter kubernetes/serve_hostname is
+// The source for the Docker container kubernetes/serve_hostname is
// in contrib/for-demos/serve_hostname
By(fmt.Sprintf("Creating replication controller %s", name))
newRC := newRC(name, replicas, map[string]string{"name": name}, name, image)
20 changes: 10 additions & 10 deletions test/e2e/common/downwardapi_volume.go
@@ -43,7 +43,7 @@ var _ = Describe("[sig-storage] Downward API volume", func() {
/*
Release : v1.9
Testname: DownwardAPI volume, pod name
-Description: A Pod is configured with DownwardAPIVolumeSource and DownwartAPIVolumeFiles contains a item for the Pod name. The container runtime MUST be able to access Pod name from the specified path on the mounted volume.
+Description: A Pod is configured with DownwardAPIVolumeSource and DownwardAPIVolumeFiles contains a item for the Pod name. The container runtime MUST be able to access Pod name from the specified path on the mounted volume.
*/
framework.ConformanceIt("should provide podname only [NodeConformance]", func() {
podName := "downwardapi-volume-" + string(uuid.NewUUID())
@@ -73,7 +73,7 @@ var _ = Describe("[sig-storage] Downward API volume", func() {
/*
Release : v1.9
Testname: DownwardAPI volume, file mode 0400
-Description: A Pod is configured with DownwardAPIVolumeSource and DownwartAPIVolumeFiles contains a item for the Pod name with the file mode set to -r--------. The container runtime MUST be able to access Pod name from the specified path on the mounted volume.
+Description: A Pod is configured with DownwardAPIVolumeSource and DownwardAPIVolumeFiles contains a item for the Pod name with the file mode set to -r--------. The container runtime MUST be able to access Pod name from the specified path on the mounted volume.
This test is marked LinuxOnly since Windows does not support setting specific file permissions.
*/
framework.ConformanceIt("should set mode on item file [LinuxOnly] [NodeConformance]", func() {
@@ -118,7 +118,7 @@ var _ = Describe("[sig-storage] Downward API volume", func() {
/*
Release : v1.9
Testname: DownwardAPI volume, update label
-Description: A Pod is configured with DownwardAPIVolumeSource and DownwartAPIVolumeFiles contains list of items for each of the Pod labels. The container runtime MUST be able to access Pod labels from the specified path on the mounted volume. Update the labels by adding a new label to the running Pod. The new label MUST be available from the mounted volume.
+Description: A Pod is configured with DownwardAPIVolumeSource and DownwardAPIVolumeFiles contains list of items for each of the Pod labels. The container runtime MUST be able to access Pod labels from the specified path on the mounted volume. Update the labels by adding a new label to the running Pod. The new label MUST be available from the mounted volume.
*/
framework.ConformanceIt("should update labels on modification [NodeConformance]", func() {
labels := map[string]string{}
@@ -150,7 +150,7 @@ var _ = Describe("[sig-storage] Downward API volume", func() {
/*
Release : v1.9
Testname: DownwardAPI volume, update annotations
-Description: A Pod is configured with DownwardAPIVolumeSource and DownwartAPIVolumeFiles contains list of items for each of the Pod annotations. The container runtime MUST be able to access Pod annotations from the specified path on the mounted volume. Update the annotations by adding a new annotation to the running Pod. The new annotation MUST be available from the mounted volume.
+Description: A Pod is configured with DownwardAPIVolumeSource and DownwardAPIVolumeFiles contains list of items for each of the Pod annotations. The container runtime MUST be able to access Pod annotations from the specified path on the mounted volume. Update the annotations by adding a new annotation to the running Pod. The new annotation MUST be available from the mounted volume.
*/
framework.ConformanceIt("should update annotations on modification [NodeConformance]", func() {
annotations := map[string]string{}
@@ -184,7 +184,7 @@ var _ = Describe("[sig-storage] Downward API volume", func() {
/*
Release : v1.9
Testname: DownwardAPI volume, CPU limits
-Description: A Pod is configured with DownwardAPIVolumeSource and DownwartAPIVolumeFiles contains a item for the CPU limits. The container runtime MUST be able to access CPU limits from the specified path on the mounted volume.
+Description: A Pod is configured with DownwardAPIVolumeSource and DownwardAPIVolumeFiles contains a item for the CPU limits. The container runtime MUST be able to access CPU limits from the specified path on the mounted volume.
*/
framework.ConformanceIt("should provide container's cpu limit [NodeConformance]", func() {
podName := "downwardapi-volume-" + string(uuid.NewUUID())
@@ -198,7 +198,7 @@ var _ = Describe("[sig-storage] Downward API volume", func() {
/*
Release : v1.9
Testname: DownwardAPI volume, memory limits
-Description: A Pod is configured with DownwardAPIVolumeSource and DownwartAPIVolumeFiles contains a item for the memory limits. The container runtime MUST be able to access memory limits from the specified path on the mounted volume.
+Description: A Pod is configured with DownwardAPIVolumeSource and DownwardAPIVolumeFiles contains a item for the memory limits. The container runtime MUST be able to access memory limits from the specified path on the mounted volume.
*/
framework.ConformanceIt("should provide container's memory limit [NodeConformance]", func() {
podName := "downwardapi-volume-" + string(uuid.NewUUID())
@@ -212,7 +212,7 @@ var _ = Describe("[sig-storage] Downward API volume", func() {
/*
Release : v1.9
Testname: DownwardAPI volume, CPU request
-Description: A Pod is configured with DownwardAPIVolumeSource and DownwartAPIVolumeFiles contains a item for the CPU request. The container runtime MUST be able to access CPU request from the specified path on the mounted volume.
+Description: A Pod is configured with DownwardAPIVolumeSource and DownwardAPIVolumeFiles contains a item for the CPU request. The container runtime MUST be able to access CPU request from the specified path on the mounted volume.
*/
framework.ConformanceIt("should provide container's cpu request [NodeConformance]", func() {
podName := "downwardapi-volume-" + string(uuid.NewUUID())
@@ -226,7 +226,7 @@ var _ = Describe("[sig-storage] Downward API volume", func() {
/*
Release : v1.9
Testname: DownwardAPI volume, memory request
-Description: A Pod is configured with DownwardAPIVolumeSource and DownwartAPIVolumeFiles contains a item for the memory request. The container runtime MUST be able to access memory request from the specified path on the mounted volume.
+Description: A Pod is configured with DownwardAPIVolumeSource and DownwardAPIVolumeFiles contains a item for the memory request. The container runtime MUST be able to access memory request from the specified path on the mounted volume.
*/
framework.ConformanceIt("should provide container's memory request [NodeConformance]", func() {
podName := "downwardapi-volume-" + string(uuid.NewUUID())
@@ -240,7 +240,7 @@ var _ = Describe("[sig-storage] Downward API volume", func() {
/*
Release : v1.9
Testname: DownwardAPI volume, CPU limit, default node allocatable
-Description: A Pod is configured with DownwardAPIVolumeSource and DownwartAPIVolumeFiles contains a item for the CPU limits. CPU limits is not specified for the container. The container runtime MUST be able to access CPU limits from the specified path on the mounted volume and the value MUST be default node allocatable.
+Description: A Pod is configured with DownwardAPIVolumeSource and DownwardAPIVolumeFiles contains a item for the CPU limits. CPU limits is not specified for the container. The container runtime MUST be able to access CPU limits from the specified path on the mounted volume and the value MUST be default node allocatable.
*/
framework.ConformanceIt("should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance]", func() {
podName := "downwardapi-volume-" + string(uuid.NewUUID())
@@ -252,7 +252,7 @@ var _ = Describe("[sig-storage] Downward API volume", func() {
/*
Release : v1.9
Testname: DownwardAPI volume, memory limit, default node allocatable
-Description: A Pod is configured with DownwardAPIVolumeSource and DownwartAPIVolumeFiles contains a item for the memory limits. memory limits is not specified for the container. The container runtime MUST be able to access memory limits from the specified path on the mounted volume and the value MUST be default node allocatable.
+Description: A Pod is configured with DownwardAPIVolumeSource and DownwardAPIVolumeFiles contains a item for the memory limits. memory limits is not specified for the container. The container runtime MUST be able to access memory limits from the specified path on the mounted volume and the value MUST be default node allocatable.
*/
framework.ConformanceIt("should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance]", func() {
podName := "downwardapi-volume-" + string(uuid.NewUUID())
2 changes: 1 addition & 1 deletion test/e2e/common/projected_secret.go
@@ -404,7 +404,7 @@ var _ = Describe("[sig-storage] Projected secret", func() {
})

//The secret is in pending during volume creation until the secret objects are available
-//or until mount the secret volume times out. There is no secret object defined for the pod, so it should return timout exception unless it is marked optional.
+//or until mount the secret volume times out. There is no secret object defined for the pod, so it should return timeout exception unless it is marked optional.
//Slow (~5 mins)
It("Should fail non-optional pod creation due to secret object does not exist [Slow]", func() {
volumeMountPath := "/etc/projected-secret-volumes"
4 changes: 2 additions & 2 deletions test/e2e/framework/framework.go
@@ -716,7 +716,7 @@ type PodStateVerification struct {

// Optional: only pods passing this function will pass the filter
// Verify a pod.
-// As an optimization, in addition to specfying filter (boolean),
+// As an optimization, in addition to specifying filter (boolean),
// this function allows specifying an error as well.
// The error indicates that the polling of the pod spectrum should stop.
Verify func(v1.Pod) (bool, error)
@@ -856,7 +856,7 @@ func (cl *ClusterVerification) WaitForOrFail(atLeast int, timeout time.Duration)
}
}

-// ForEach runs a function against every verifiable pod. Be warned that this doesn't wait for "n" pods to verifiy,
+// ForEach runs a function against every verifiable pod. Be warned that this doesn't wait for "n" pods to verify,
// so it may return very quickly if you have strict pod state requirements.
//
// For example, if you require at least 5 pods to be running before your test will pass,
2 changes: 1 addition & 1 deletion test/e2e/framework/ingress/ingress_utils.go
@@ -718,7 +718,7 @@ func (j *TestJig) VerifyURL(route, host string, iterations int, interval time.Du
framework.Logf(b)
return err
}
-j.Logger.Infof("Verfied %v with host %v %d times, sleeping for %v", route, host, i, interval)
+j.Logger.Infof("Verified %v with host %v %d times, sleeping for %v", route, host, i, interval)
time.Sleep(interval)
}
return nil
2 changes: 1 addition & 1 deletion test/e2e/framework/jobs_util.go
@@ -182,7 +182,7 @@ func WaitForAllJobPodsRunning(c clientset.Interface, ns, jobName string, paralle
})
}

-// WaitForJobComplete uses c to wait for compeletions to complete for the Job jobName in namespace ns.
+// WaitForJobComplete uses c to wait for completions to complete for the Job jobName in namespace ns.
func WaitForJobComplete(c clientset.Interface, ns, jobName string, completions int32) error {
return wait.Poll(Poll, JobTimeout, func() (bool, error) {
curr, err := c.BatchV1().Jobs(ns).Get(jobName, metav1.GetOptions{})
2 changes: 1 addition & 1 deletion test/e2e/framework/rc_util.go
@@ -45,7 +45,7 @@ func RcByNamePort(name string, replicas int32, image string, port int, protocol
}, gracePeriod)
}

-// RcByNameContainer returns a ReplicationControoler with specified name and container
+// RcByNameContainer returns a ReplicationController with specified name and container
func RcByNameContainer(name string, replicas int32, image string, labels map[string]string, c v1.Container,
gracePeriod *int64) *v1.ReplicationController {

2 changes: 1 addition & 1 deletion test/e2e/network/ingress.go
@@ -146,7 +146,7 @@ var _ = SIGDescribe("Loadbalancing: L7", func() {
// TODO: uncomment the restart test once we have a way to synchronize
// and know that the controller has resumed watching. If we delete
// the ingress before the controller is ready we will leak.
-// By("restaring glbc")
+// By("restarting glbc")
// restarter := NewRestartConfig(
// framework.GetMasterHost(), "glbc", glbcHealthzPort, restartPollInterval, restartTimeout)
// restarter.restart()
2 changes: 1 addition & 1 deletion test/e2e/network/ingress_scale.go
@@ -44,7 +44,7 @@ var _ = SIGDescribe("Loadbalancing: L7 Scalability", func() {

scaleFramework = scale.NewIngressScaleFramework(f.ClientSet, ns, framework.TestContext.CloudConfig)
if err := scaleFramework.PrepareScaleTest(); err != nil {
-framework.Failf("Unexpected error while preraring ingress scale test: %v", err)
+framework.Failf("Unexpected error while preparing ingress scale test: %v", err)
}
})

2 changes: 1 addition & 1 deletion test/e2e/network/service_latency.go
@@ -49,7 +49,7 @@ var _ = SIGDescribe("Service endpoints latency", func() {
/*
Release : v1.9
Testname: Service endpoint latency, thresholds
-Description: Run 100 iterations of create service with the Pod running the pause image, measure the time it takes for creating the service and the endpoint with the service name is available. These durations are captured for 100 iterations, then the durations are sorted to compue 50th, 90th and 99th percentile. The single server latency MUST not exceed liberally set thresholds of 20s for 50th percentile and 50s for the 90th percentile.
+Description: Run 100 iterations of create service with the Pod running the pause image, measure the time it takes for creating the service and the endpoint with the service name is available. These durations are captured for 100 iterations, then the durations are sorted to compute 50th, 90th and 99th percentile. The single server latency MUST not exceed liberally set thresholds of 20s for 50th percentile and 50s for the 90th percentile.
*/
framework.ConformanceIt("should not be very high ", func() {
const (
2 changes: 1 addition & 1 deletion test/e2e/perftype/perftype.go
@@ -25,7 +25,7 @@ package perftype
// DataItem is the data point.
type DataItem struct {
// Data is a map from bucket to real data point (e.g. "Perc90" -> 23.5). Notice
-// that all data items with the same label conbination should have the same buckets.
+// that all data items with the same label combination should have the same buckets.
Data map[string]float64 `json:"data"`
// Unit is the data unit. Notice that all data items with the same label combination
// should have the same unit.
2 changes: 1 addition & 1 deletion test/e2e/scheduling/equivalence_cache_predicates.go
@@ -228,7 +228,7 @@ var _ = framework.KubeDescribe("EquivalenceCache [Serial]", func() {
return err
}, ns, labelRCName, false)

-// these two replicas should all be rejected since podAntiAffinity says it they anit-affinity with pod {"service": "S1"}
+// these two replicas should all be rejected since podAntiAffinity says it they anti-affinity with pod {"service": "S1"}
verifyReplicasResult(cs, 0, replica, ns, labelRCName)
})
})
12 changes: 6 additions & 6 deletions test/e2e/scheduling/predicates.go
@@ -570,13 +570,13 @@ var _ = SIGDescribe("SchedulerPredicates [Serial]", func() {

port := int32(54321)
By(fmt.Sprintf("Trying to create a pod(pod1) with hostport %v and hostIP 127.0.0.1 and expect scheduled", port))
-creatHostPortPodOnNode(f, "pod1", ns, "127.0.0.1", port, v1.ProtocolTCP, nodeSelector, true)
+createHostPortPodOnNode(f, "pod1", ns, "127.0.0.1", port, v1.ProtocolTCP, nodeSelector, true)

By(fmt.Sprintf("Trying to create another pod(pod2) with hostport %v but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled", port))
-creatHostPortPodOnNode(f, "pod2", ns, "127.0.0.2", port, v1.ProtocolTCP, nodeSelector, true)
+createHostPortPodOnNode(f, "pod2", ns, "127.0.0.2", port, v1.ProtocolTCP, nodeSelector, true)

By(fmt.Sprintf("Trying to create a third pod(pod3) with hostport %v, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides", port))
-creatHostPortPodOnNode(f, "pod3", ns, "127.0.0.2", port, v1.ProtocolUDP, nodeSelector, true)
+createHostPortPodOnNode(f, "pod3", ns, "127.0.0.2", port, v1.ProtocolUDP, nodeSelector, true)
})

It("validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP", func() {
@@ -596,10 +596,10 @@ var _ = SIGDescribe("SchedulerPredicates [Serial]", func() {

port := int32(54322)
By(fmt.Sprintf("Trying to create a pod(pod4) with hostport %v and hostIP 0.0.0.0(empty string here) and expect scheduled", port))
-creatHostPortPodOnNode(f, "pod4", ns, "", port, v1.ProtocolTCP, nodeSelector, true)
+createHostPortPodOnNode(f, "pod4", ns, "", port, v1.ProtocolTCP, nodeSelector, true)

By(fmt.Sprintf("Trying to create another pod(pod5) with hostport %v but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled", port))
creatHostPortPodOnNode(f, "pod5", ns, "127.0.0.1", port, v1.ProtocolTCP, nodeSelector, false)
createHostPortPodOnNode(f, "pod5", ns, "127.0.0.1", port, v1.ProtocolTCP, nodeSelector, false)
})
})

@@ -803,7 +803,7 @@ func CreateHostPortPods(f *framework.Framework, id string, replicas int, expectR
}

// create pod which using hostport on the specified node according to the nodeSelector
-func creatHostPortPodOnNode(f *framework.Framework, podName, ns, hostIP string, port int32, protocol v1.Protocol, nodeSelector map[string]string, expectScheduled bool) {
+func createHostPortPodOnNode(f *framework.Framework, podName, ns, hostIP string, port int32, protocol v1.Protocol, nodeSelector map[string]string, expectScheduled bool) {
createPausePod(f, pausePodConfig{
Name: podName,
Ports: []v1.ContainerPort{
2 changes: 1 addition & 1 deletion test/e2e/scheduling/taint_based_evictions.go
@@ -67,7 +67,7 @@ var _ = SIGDescribe("TaintBasedEvictions [Serial]", func() {
// 1. node lifecycle manager generate a status change: [NodeReady=true, status=ConditionUnknown]
// 1. it's applied with node.kubernetes.io/unreachable=:NoExecute taint
// 2. pods without toleration are applied with toleration with tolerationSeconds=300
-// 3. pods with toleration and without tolerationSeconds won't be modifed, and won't be evicted
+// 3. pods with toleration and without tolerationSeconds won't be modified, and won't be evicted
// 4. pods with toleration and with tolerationSeconds won't be modified, and will be evicted after tolerationSeconds
// When network issue recovers, it's expected to see:
// 5. node lifecycle manager generate a status change: [NodeReady=true, status=ConditionTrue]
2 changes: 1 addition & 1 deletion test/e2e/storage/vsphere/pvc_label_selector.go
@@ -144,6 +144,6 @@ func testCleanupVSpherePVClabelselector(c clientset.Interface, ns string, nodeIn
framework.ExpectNoError(framework.DeletePersistentVolumeClaim(c, pvc_vvol.Name, ns), "Failed to delete PVC ", pvc_vvol.Name)
}
if pv_ssd != nil {
-framework.ExpectNoError(framework.DeletePersistentVolume(c, pv_ssd.Name), "Faled to delete PV ", pv_ssd.Name)
+framework.ExpectNoError(framework.DeletePersistentVolume(c, pv_ssd.Name), "Failed to delete PV ", pv_ssd.Name)
}
}
2 changes: 1 addition & 1 deletion test/e2e/storage/vsphere/vsphere.go
@@ -127,7 +127,7 @@ func (vs *VSphere) GetFolderByPath(ctx context.Context, dc object.Reference, fol
return vmFolder.Reference(), nil
}

-// CreateVolume creates a vsphere volume using given volume paramemters specified in VolumeOptions.
+// CreateVolume creates a vsphere volume using given volume parameters specified in VolumeOptions.
// If volume is created successfully the canonical disk path is returned else error is returned.
func (vs *VSphere) CreateVolume(volumeOptions *VolumeOptions, dataCenterRef types.ManagedObjectReference) (string, error) {
ctx, cancel := context.WithCancel(context.Background())