
EKS Support for Kubernetes 1.12 #24

Closed
pauncejones opened this issue Dec 5, 2018 · 56 comments
Labels
EKS Amazon Elastic Kubernetes Service

Comments

@pauncejones
Contributor

No description provided.

@pauncejones pauncejones created this issue from a note in containers-roadmap (We're Working On It) Dec 5, 2018
@pauncejones pauncejones added the EKS Amazon Elastic Kubernetes Service label Dec 5, 2018
@theherk

theherk commented Dec 13, 2018

The day preceding this issue's creation, k8s v1.13 was released. Is there a reason to work toward v1.12 rather than v1.13?

@DanyC97

DanyC97 commented Dec 16, 2018

I suspect the upgrade is incremental / ladder mode?

@omerfsen

Also, will it have HPA (Horizontal Pod Autoscaling) support by default? EKS on 1.11 still does not have this feature.

@abby-fuller abby-fuller moved this from We're Working On It to Coming Soon in containers-roadmap Jan 16, 2019
@tabern tabern changed the title EKS Kubernetes 1.12 EKS Support for Kubernetes 1.12 Jan 16, 2019
@geerlingguy

Also, for myself, I'm really excited to be able to drop my Job garbage collection script once 1.12 comes, since ttlSecondsAfterFinished will be a supported spec parameter in 1.12.
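For anyone tracking the same feature, a minimal sketch of what that looks like in 1.12 (job name and image are illustrative; note the field is alpha in 1.12 and gated behind the TTLAfterFinished feature gate, so whether EKS enables it is a separate question):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ttl-demo                 # illustrative name
spec:
  ttlSecondsAfterFinished: 100   # Job object is garbage-collected 100s after it finishes
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: hello
          image: busybox         # illustrative image
          command: ["sh", "-c", "echo done"]
```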

I'm happy to do these incremental upgrades as long as this public roadmap continues to be maintained. I'm guessing that the EKS team is working on finding ways to make the release process/cycle a bit tighter (it seems like 1.11 was kind of rushed because of the security concerns, though the upgrade was quite smooth for both my clusters), and it will take a few more releases before it's seamless and EKS catches up to stable K8s.

@max-rocket-internet

since EKS using 1.11 still does not have this feature.

@omerfsen yes it does.

@whereisaaron

Too late for 1.12; 1.13 has been GA for two months already.
https://kubernetes.io/blog/2018/12/03/kubernetes-1-13-release-announcement/

Kubernetes has usually had four releases per year; in 2019 it is moving to five releases per year (not counting patch releases).

Does AWS have a policy or goal about this? E.g. does AWS aspire to release all releases within X months of GA, or else can customers count on at least X releases per year?

@arminc

arminc commented Feb 6, 2019

It would be a good idea to at least know what the commitment is regarding releases. I understand AWS needs to catch up to 1.12 and 1.13 first before it can focus on the long run.
It would also be interesting to know how many versions AWS is going to support. For example, let's say they do N+1; that means every 3 to 4 months businesses will need to upgrade. I am all for it, but I doubt big companies will like that. I doubt any of the on-prem companies running Kubernetes are upgrading that fast either, so I am wondering what AWS is going to do here to help everybody move forward, or is there going to be an LTS version?

@ejc3

ejc3 commented Feb 6, 2019

The k8s project supports backporting security fixes for three minor revisions, so that might be a good rule of thumb: https://kubernetes.io/docs/setup/version-skew-policy/

@dawidmalina

I assume that next month 1.14 will be released (as usual), so security updates will be provided only for 1.14, 1.13, and 1.12. That means EKS will not have any community-supported version available! If 1.12 has somehow been released by the team by then, it will be the only EKS version with security patches!

I don't think that AWS will backport security patches for versions not supported by the community. Please correct me if I am wrong.

@whereisaaron

whereisaaron commented Feb 6, 2019

Yes the 1.14 release process started in November and releases next month (25 March). Alpha releases are available for EKS testing now and code freeze is 7 March (https://github.com/kubernetes/sig-release/tree/master/releases/release-1.14).

EKS has been GA for a while, is recently ISO and PCI compliant, and has an SLA commitment. All of that is fabulous stuff and no doubt a lot of legwork for the team 👏. Now it would be great to see a plan or commitment from AWS for maintaining and regularly updating the EKS service.

@countspongebob

The day preceding this issue's creation, k8s v1.13 was released. Is there a reason to work toward v1.12 rather than v1.13?

As one of the other commenters noted, since we need to support incremental upgrades, we can't skip 1.12. The upstream community doesn't support skip-release upgrades, either. We are matching our release support cadence to community-supported releases, so we also need to support 1.12 since that is a project-supported release.

@whereisaaron

Thanks for the info @countspongebob! So EKS users should expect every project-supported release to arrive eventually.

What about @dawidmalina's concern about the possibility that even the newest EKS version falls out of community security support? Will AWS back-port security patches in that case? Or do you intend to always have a supported version available?

@snstanton

One of the reasons my team switched from kops to EKS was to get a more timely release train. I am hoping that EKS will at least keep up with the oldest supported release that is getting security patches. I was pleased with the speed of rollout of the critical security fix a few weeks back, but if we aren't on a maintained release, that's going to be a lot harder to do.

@micahlmartin

Any update on this? We're going to be lagging pretty far behind if this doesn't get released anytime soon.

@countspongebob

We are expecting to support 1.12.6 once it is released by the community next week, assuming it passes our internal qualification criteria.

@tabern
Contributor

tabern commented Feb 22, 2019

Additional clarification / deeper dive on this. We have been waiting on 1.12.6 as this fixes an important Golang vulnerability by updating to Go 1.10.8.

@geerlingguy

1.12.6 was released today (https://github.com/kubernetes/kubernetes/releases/tag/v1.12.6); my team is excited to now be able to knock 3 prod-use-blocking bugs off our tracker that are caused simply by running an old version of Kubernetes. Fingers crossed we're still on track for an update this week?

@parthpatel1001

Has there been any movement on this and/or a 1.11.8 patch, considering the public announcement here?

@geerlingguy

geerlingguy commented Mar 7, 2019

Possibly related:
#188

@axelborja

Any news?

@uprightvinyl

uprightvinyl commented Mar 25, 2019

The EKS FAQ states AWS supports 1.10.11 and 1.11.5, so that should demonstrate back-ported support.

The fact that GKE doesn't currently support a version of 1.12 for new clusters, and kops is still at 1.11 support, demonstrates to me that this isn't just an AWS issue and that the bug fixes coming in 1.12.6 are worth waiting for.

@anurag

anurag commented Mar 26, 2019

The fact that GKE doesn't currently support a version of 1.12 for new clusters

GKE does support 1.12.5-gke.5, just not 1.12.5-gke.10. They also now have 1.13 in public preview.

https://cloud.google.com/kubernetes-engine/docs/release-notes

@uprightvinyl

That was my poor interpretation of the GKE release notes; I stand corrected. I can indeed select 1.12.5-gke.5 for a new cluster in GKE. Thanks @anurag.

@w32-blaster

The amazon-eks-node-1.12-v20190327 AMI in us-west-2 was just released. Hopefully this means we will get 1.12 today :)

@tdmalone


🎉

@tabern
Contributor

tabern commented Mar 29, 2019

All - we’re super excited to announce that Amazon EKS now supports Kubernetes version 1.12 for all clusters. You can create new clusters using version 1.12.6 or update existing clusters to 1.12 using the console or APIs.

Please note that with this version release, EKS now supports 3 versions of Kubernetes. Starting with the next version release (1.13), EKS will begin end-of-life for support of Kubernetes version 1.10. If you are running 1.10 clusters, we recommend beginning the process to update to 1.11.
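For anyone updating from the CLI rather than the console, the calls are along these lines (cluster name is illustrative):

```shell
# Kick off the control-plane update to 1.12
aws eks update-cluster-version \
  --name my-cluster \
  --kubernetes-version 1.12

# List in-flight and past updates to check progress
aws eks list-updates --name my-cluster
```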

@tabern tabern closed this as completed Mar 29, 2019
@tabern tabern moved this from Coming Soon to Just Shipped in containers-roadmap Mar 29, 2019
@errordeveloper

Also, eksctl 0.1.26 is on its way, thanks to @christopherhein for making it possible via eksctl-io/eksctl#680 🎉

@christopherhein

Update eksctl to 0.1.26 from https://eksctl.io/ then:

$ eksctl create cluster --name 1-12 --version 1.12

@dinos80152

Hi there,

It looks like you forgot to put the non-encoded kubectl binary for Linux amd64 in your S3 bucket:
https://amazon-eks.s3-us-west-2.amazonaws.com/1.12.7/2019-03-27/bin/linux/amd64/kubectl

Please check it, thanks!

@max-rocket-internet

@dinos80152 you can just use the normal kubectl from wherever you get software. e.g. brew, apt, snap etc.
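For example (assuming your package source already carries a recent kubectl; the Homebrew formula name below is the one current at the time):

```shell
# macOS
brew install kubernetes-cli

# Ubuntu and other snap-enabled distros
sudo snap install kubectl --classic

# then verify the client version
kubectl version --client
```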

@pawelprazak

pawelprazak commented Mar 29, 2019

yes, but the docs are broken now:
https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html

also something went wrong here as well:

aws-iam-authenticator version
{}

@tabern
Contributor

tabern commented Apr 1, 2019

@pawelprazak can you give us more details on the IAM authenticator issue?

@pawelprazak

pawelprazak commented Apr 2, 2019 via email

@whereisaaron

@pawelprazak I checked, and the AWS aws-iam-authenticator binaries are different from both the 0.3.0 heptio-authenticator-aws and the 0.4.0-alpha aws-iam-authenticator release binaries. They are almost twice the size, for a start (26MB vs 15MB). I can also confirm the AWS aws-iam-authenticator release I downloaded for 1.11 in Dec is a different build from the 1.12 release, even though there have been no upstream releases in that time. I suspect AWS is doing its own builds from the HEAD of the upstream project or a private fork (maybe explaining the lack of a version number?). It would be nice if AWS tagged and used actual upstream releases.

This comment on the Getting Started page was probably true back for 1.10, but reality has clearly diverged since 😄

"Amazon EKS vends aws-iam-authenticator binaries that you can use that are identical to the upstream aws-iam-authenticator binaries with the same version."

@aparamon

@tabern After migrating from 1.11 to 1.12 I now frequently get:

kubernetes.client.rest.ApiException: (500)
E           Reason: Internal Server Error
E           HTTP response headers: HTTPHeaderDict({'Audit-Id': '8680fa4a-4b8d-4508-99fe-c72fe534c652', 'Content-Type': 'application/json', 'Date': 'Tue, 16 Apr 2019 03:18:33 GMT', 'Content-Length': '268'})
E           HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Get https://192.168.162.254:10250/containerLogs/genghis-test-44b68da2-f40c-4856-95d4-688f265a3064/genghis-7cc677b5b5-8xh89/genghis: dial tcp 192.168.162.254:10250: i/o timeout","code":500}

Is there additional info I could provide?

@tabern
Contributor

tabern commented Apr 16, 2019

@aparamon did you update your addons including CoreDNS and KubeProxy (https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html)?
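For reference, the addon updates in that guide amount to patching the kube-system images, roughly like this (image tags are illustrative; take the exact tags for your region and version from the userguide page above):

```shell
# kube-proxy runs as a DaemonSet; bump its image to match the 1.12 control plane
kubectl set image daemonset.apps/kube-proxy -n kube-system \
  kube-proxy=602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/kube-proxy:v1.12.6

# check whether the cluster still runs kube-dns or already runs coredns
kubectl get deployments -n kube-system -l k8s-app=kube-dns

# bump the coredns Deployment image
kubectl set image deployment.apps/coredns -n kube-system \
  coredns=602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/coredns:v1.2.2
```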

@aparamon

aparamon commented Apr 16, 2019

Ah, I relied on eksctl to do that stuff correctly! Running eksctl utils update-coredns in addition to eksctl update cluster seemingly did the job. Thanks!

@errordeveloper

errordeveloper commented Apr 16, 2019 via email

@pc-rshetty

When we upgrade the EKS cluster from 1.11 to 1.12, the master nodes will have downtime. Does anyone know if the applications on the worker nodes will also have downtime?

@stefansedich

@pc-rshetty it really depends on what strategy you take when you upgrade your worker nodes to the latest 1.12 AMI and how you perform pod migration, which depends on many variables, including what PDBs you have set up, how many replicas of those pods are running, etc.
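Concretely, the per-node part of that migration is usually something like this (node name illustrative; kubectl drain honors PodDisruptionBudgets, which is where replica counts and budgets come in):

```shell
# stop new pods from scheduling onto the node being replaced
kubectl cordon ip-10-0-1-23.us-west-2.compute.internal

# evict workloads, respecting PDBs; DaemonSet pods are skipped
kubectl drain ip-10-0-1-23.us-west-2.compute.internal \
  --ignore-daemonsets --delete-local-data

# once drained, terminate the instance so the ASG replaces it with the new AMI
```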

@pc-rshetty

It will be a rolling upgrade using a CloudFormation template. We have enough buffer to handle a failed host, so pods can get created on additional nodes. My concern is more about the scheduler, controllers, etc. being upgraded and the effect on the pods on the worker nodes.

@max-rocket-internet

does anyone know if the applications on worker node too will have a downtime

No, they won't. Nothing changes on the worker nodes during this window; the containers keep running. In theory you will just lose access to the k8s API, so kubectl might stop working for a moment.

@tabern
Contributor

tabern commented Apr 18, 2019

@errordeveloper yes - we've been thinking about managed Addons for a while and I've created a new roadmap item to continue this discussion - #252

@aparamon

@tabern Unfortunately, I keep getting "i/o timeout" even after following the upgrade procedure manually:
https://forums.aws.amazon.com/thread.jspa?messageID=897791

@stereoj

stereoj commented Aug 22, 2019

Any news and plans for support of k8s v1.12?

@errordeveloper

errordeveloper commented Aug 22, 2019 via email
