
Release 1.0.1

Released by @brendandburns on 17 Jul 23:22

Documentation

Examples

Changes since 1.0.0

  • Fix a backwards incompatibility with service accounts #11389 (mbforbes)
  • Stop Monit Addons on nodes #11320 (dchen1107)
  • Fix duplicate swagger nicknames #11258 (nikhiljindal)
  • Add monitoring and healthz based on tunnel health #11250 (brendandburns)
  • Fix load-balancer firewall messages #11254 (thockin)
  • Fix get instance private ip in AWS #11241 (justinsb)
  • Fix document generation #11113 (zmerlynn)
  • Fix a scheduler race #11150 (bprashanth)

Known issues

  • exec liveness/readiness probes leak resources due to Docker exec leaking resources (#10659)
  • docker load sometimes hangs, which causes the kube-apiserver not to start. Restarting the Docker daemon should fix the issue (#10868); see the sketch after this list
  • The kubelet on the master node doesn't register with the kube-apiserver so statistics aren't collected for master daemons (#10891)
  • Heapster and InfluxDB both leak memory (#10653)
  • Wrong node cpu/memory limit metrics from Heapster (kubernetes-retired/heapster#399)
  • Services that set type=LoadBalancer cannot use port 10250 because of Google Compute Engine firewall limitations
  • Add-on services cannot be created or deleted via kubectl or the Kubernetes API (#11435)
  • If a pod with a GCE PD is created and deleted in rapid succession, it may fail to attach/mount correctly leaving PD data inaccessible (or corrupted in the worst case). (http://issue.k8s.io/11231#issuecomment-122049113)
    • Suggested temporary workaround: introduce a 1-2 minute delay between deleting and recreating a pod with a PD on the same node; see the sketch after this list.
  • Explicit errors while detaching GCE PD could prevent PD from ever being detached (#11321)
  • GCE PDs may sometimes fail to attach (#11302)
  • If multiple Pods use the same RBD volume in read-write mode, data on the RBD volume can become corrupted. This problem has been observed in environments where both the apiserver and etcd rebooted and Pods were redistributed.
    • A workaround is to ensure that no other Ceph client is using the RBD volume before mapping the RBD image in read-write mode. For example, rados -p poolname listwatchers image_name.rbd lists the RBD clients that are mapping the image.
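A minimal sketch of two of the workarounds above. The exact Docker restart command depends on the node image's init system, and the pod name and manifest path are hypothetical placeholders:

  # Workaround for the docker load hang (#10868): restart the Docker daemon on the affected node.
  sudo systemctl restart docker     # or: sudo service docker restart, depending on the init system

  # Workaround for the GCE PD attach/mount race: wait 1-2 minutes between deleting
  # and recreating a pod that uses the same PD on the same node.
  kubectl delete pod pd-pod         # "pd-pod" is a hypothetical pod name
  sleep 120
  kubectl create -f pd-pod.yaml     # hypothetical manifest for the recreated pod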
binary              hash algorithm   hash
kubernetes.tar.gz   md5              e7c9f28bf4d2bebd54ca9372d76aaf5e
kubernetes.tar.gz   sha1             39e56947e3a42bbec0641486710dcd829123c472
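To verify a downloaded kubernetes.tar.gz against these checksums, something like the following should work on a system with GNU coreutils (md5sum and sha1sum assumed to be available):

  # Each tool reads "<hash>  <filename>" lines from stdin and checks the file.
  echo "e7c9f28bf4d2bebd54ca9372d76aaf5e  kubernetes.tar.gz" | md5sum -c -
  echo "39e56947e3a42bbec0641486710dcd829123c472  kubernetes.tar.gz" | sha1sum -c -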