This repository has been archived by the owner on Nov 15, 2022. It is now read-only.

Strategy to support various Kubernetes versions #141

Open
surajssd opened this issue Jul 11, 2017 · 25 comments

@surajssd
Member

surajssd commented Jul 11, 2017

Problem:

Right now we support Kubernetes 1.5 (if I am not mistaken), which means the generated artifacts will work on Kubernetes 1.5, and as long as we don't use any new features we in effect also support older versions of Kubernetes.

But this becomes a problem if we generate something that is not supported on an older version; then the generation is of no use there.

For example (assume we have already added support for StatefulSets): if we generate artifacts using StatefulSet, they will not work on older Kubernetes versions, since the resource was called PetSet back then.

Another example: envFrom is a really cool feature that will ease our life in many ways, but it only exists in Kubernetes 1.6, so anyone on an older version won't have it. We have added it with some backports, but that is also limited.

My question is: how do we support multiple versions of Kubernetes? If someone has the latest Kubernetes, we should not stop them from using all the latest features just because we don't support some of them.

How do we solve it?

User approach: I think we can have a flag where you mention which version of Kubernetes you want output for, and then we generate the artifacts for that version.

Development approach: From the code perspective we can use the approach we took in kompose: define the conversion in various Kubernetes-versioned generators and, depending on which version we are generating for, use that particular version's generator.
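The versioned-generator idea could be sketched roughly like this. This is a minimal, self-contained Go sketch, not kedge's actual code: the `generator` type, the map registry, the flag name, and the PetSet/StatefulSet split are all illustrative.

```go
package main

import "fmt"

// generator renders a (heavily simplified) artifact for one Kubernetes
// version. The real converter interfaces in kedge/kompose are richer.
type generator func(app string) string

// generators holds one generator per supported Kubernetes version,
// mirroring the per-version converter layout used in kompose.
var generators = map[string]generator{
	// Before the 1.5 rename, the resource was called PetSet.
	"1.4": func(app string) string { return "kind: PetSet\nname: " + app },
	"1.5": func(app string) string { return "kind: StatefulSet\nname: " + app },
}

// generate picks the generator matching the version given on the
// hypothetical --k8s-version flag.
func generate(version, app string) (string, error) {
	gen, ok := generators[version]
	if !ok {
		return "", fmt.Errorf("unsupported Kubernetes version: %s", version)
	}
	return gen(app), nil
}

func main() {
	out, err := generate("1.5", "db")
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```

The point of the registry is that adding a new target version is one map entry plus one generator, rather than version checks scattered through the conversion code.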

@concaf
Collaborator

concaf commented Jul 12, 2017

@surajssd I think introducing a flag and supporting multiple releases at the same time will be a maintenance hell, especially because kedge is super new right now.

For now (and only for now), we can stick with supporting either the latest Kubernetes release, or the Kubernetes being used in the latest OpenShift release.

And when we have requests to support multiple versions, we can have different branches for the different Kubernetes versions that we support, instead of putting everything in master.

Thoughts?

@surajssd
Member Author

@containscafeine our potential users are not on the latest of either

@concaf
Collaborator

concaf commented Jul 12, 2017

@surajssd are we not expecting at least OpenShift v1.5.1, which is the latest tag?

@kadel
Member

kadel commented Jul 12, 2017

And when we have requests to support multiple versions, we can have different branches for the different Kubernetes versions that we support, instead of putting everything in master.

Multiple branches will be maintenance hell :-D and a usability hell for users.

I don't think there is any other way than flags.

@concaf
Collaborator

concaf commented Jul 12, 2017

@kadel if we do everything in master, we will encounter problems like different versions of vendored packages. The first one that comes to my mind is client-go. Different Kubernetes versions will require different versions of client-go, how do we vendor them?

Another problem would be the ever growing size of the binary because of multiple versions being supported.

In the case of multiple branches, we will need to tag the branches, update the docs (add a table for downloading binaries for different versions), and build RPMs in such a way that the user gets the binary that they require.

The multiple-branches approach is followed by Kubernetes, OpenShift, etc., so I think it should be fine.

WDYT?

@kadel
Member

kadel commented Jul 12, 2017

So if I want to convert for multiple Kubernetes versions, I'll have to download multiple different binaries?
That doesn't feel user-friendly.

The multiple branches approach is followed by Kubernetes, OpenShift, etc, so I think it should be fine.

This is something else; they don't add new features to old versions, so it's easier to maintain.

@surajssd
Member Author

@containscafeine client-go supports a range of older Kubernetes versions alongside the one it targets; that is what the client-go README says.

See https://github.com/kubernetes/client-go#compatibility-matrix

We will still have to figure out how we output Kubernetes-version-specific things.

@kadel
Member

kadel commented Jul 14, 2017

Yep, as @surajssd said, there shouldn't be any vendoring issues; we just have to take care of outputting the right things.

@kadel
Member

kadel commented Jul 26, 2017

I think we should increase the priority of this. We can start working on this as soon as #181 is merged.

@kadel
Member

kadel commented Jul 31, 2017

I think that adding --k8s-version as a command-line argument would be the best solution for this.

@concaf
Collaborator

concaf commented Jul 31, 2017

@kadel minikube has a --kubernetes-version, so maybe that, but yes, +1 for that flag with whatever name.

@kadel
Member

kadel commented Jul 31, 2017

yes, it can be --kubernetes-version

@kadel
Member

kadel commented Aug 9, 2017

I'm bumping priority on this, as I think we should solve this soon.

@concaf
Collaborator

concaf commented Aug 30, 2017

We could also leverage Kubernetes plugins for this.
So if the --kubernetes-version flag is not set, and kedge is being used as a plugin, then we can get the version from the cluster itself.
But this is for later; just putting it out there.

@concaf
Collaborator

concaf commented Aug 30, 2017

Okay, so we need to figure out multiple things. Here is my brain dump.

So, like @surajssd pointed out earlier, client-go has a compatibility matrix - https://github.com/kubernetes/client-go#compatibility-matrix, so

  1. We need to figure out which versions of Kubernetes we need to support, which will decide the client-go version we end up using. Godeps.json from OpenShift Origin v3.6.0 (the latest stable release) points to v1.6.1, so IMO client-go 4.0 should be fine.

  2. We need a table of new and deprecated API resources for every Kubernetes release we will support.
    We can take help from https://github.com/kubernetes/features, but I'm not sure how to parse the data there to get the concrete API definitions added/removed in each Kubernetes release.

@kadel
Member

kadel commented Sep 4, 2017

Because we are also doing #210, we also need to think about supporting multiple OpenShift versions, so this issue should take that into account.

@kadel kadel mentioned this issue Sep 4, 2017
3 tasks
@kadel kadel added the kind/epic label Sep 4, 2017
concaf added a commit to concaf/kedge that referenced this issue Sep 5, 2017
This commit updates client-go to v4.0.0 from v3.0.0.

This is being done as part of kedgeproject#141 since v4.0.0 has a much broader
coverage of Kubernetes versions than v3.0.0, which is what our
target is.

There have been other slight modifications in glide.yaml to make
this client-go version bump work; for example, k8s.io/apimachinery is
no longer pinned to a particular version, since it was throwing
errors and was not required anymore.

Also, pflag has been pinned to version v1.0.0 since cobra was
complaining and refusing to compile due to some issues.

No change has been made in code.

To update the vendor directory, the following 2 commands were
run -

glide update --strip-vendor
glide-vc --only-code --no-tests --use-lock-file
@concaf
Collaborator

concaf commented Sep 6, 2017

Cabal log -

  • First step: Kubernetes has functions that can convert between versions
  • We will need different versions of the Kedge spec as well, later on, but for now we can move forward with Kubernetes 1.7
  • We should only support Kubernetes v1.6 and v1.7, since the latest OpenShift is based on v1.6

@concaf
Collaborator

concaf commented Sep 6, 2017

And had the following conversation with @kadel on our slack -

concaf @tkral I'm looking into the client-go v4.0.0 branch. There are multiple
conversion.go files which look relevant.


[2:30 PM] 
e.g.
https://github.com/kubernetes/client-go/blob/v4.0.0/pkg/apis/extensions/v1beta1/conversion.go



[2:31 PM] 
so they are taking in runtime.Scheme, and doing the conversions


[2:31 PM] 
tkral hmm, interesting, and this is not in master?


[2:32 PM] 
concaf @tkral didn't check


[2:32 PM] 
let me see


[2:33 PM] 
well, the function names don't exist in master


[2:33 PM] 
and there is only one conversion.go -
https://github.com/kubernetes/client-go/blob/master/tools/clientcmd/api/v1/conversion.go



[3:04 PM] 
concaf @tkral so, I found that NetworkPolicy was moved from extensions/v1beta1
to v1, in v4.0.0


[3:05 PM] 
I found 2 functions around this


[3:06 PM] 
Step 1. `func Convert_v1beta1_NetworkPolicy_To_networking_NetworkPolicy()`
https://github.com/kubernetes/client-go/blob/v4.0.0/pkg/apis/extensions/v1beta1/conversion.go#L278


[3:07 PM] 
This converts v1beta1 NetworkPolicy to their internal
networking.NetworkPolicy


[3:09 PM] 
Step 2: `func Convert_networking_NetworkPolicy_To_v1_NetworkPolicy()`
https://github.com/kubernetes/client-go/blob/v4.0.0/pkg/apis/networking/v1/zz_generated.conversion.go#L79


[3:09 PM] 
This converts the internal networking.NetworkPolicy to v1.NetworkPolicy
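The two-step shape described above (versioned type → internal hub type → other versioned type) can be mimicked in a self-contained sketch. The real client-go conversions are registered on runtime.Scheme and operate on far richer types; everything below is a stand-in just to show the pattern.

```go
package main

import "fmt"

// Stand-ins for client-go's versioned and internal NetworkPolicy types;
// the real ones live under k8s.io/client-go/pkg/apis.
type v1beta1NetworkPolicy struct{ Name string }
type networkingNetworkPolicy struct{ Name string } // "internal" representation
type v1NetworkPolicy struct{ Name string }

// Step 1: versioned (extensions/v1beta1) -> internal, analogous to
// Convert_v1beta1_NetworkPolicy_To_networking_NetworkPolicy.
func toInternal(in v1beta1NetworkPolicy) networkingNetworkPolicy {
	return networkingNetworkPolicy{Name: in.Name}
}

// Step 2: internal -> versioned (networking.k8s.io/v1), analogous to
// Convert_networking_NetworkPolicy_To_v1_NetworkPolicy.
func toV1(in networkingNetworkPolicy) v1NetworkPolicy {
	return v1NetworkPolicy{Name: in.Name}
}

func main() {
	old := v1beta1NetworkPolicy{Name: "allow-db"}
	// The internal type acts as a hub: every versioned type converts to and
	// from it, so any pair of versions is reachable in two steps.
	fmt.Println(toV1(toInternal(old)).Name)
}
```

The hub-and-spoke design is why only 2N conversion functions are needed for N versions, instead of one function per version pair.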


[3:11 PM] 
There are other two-way functions as well, e.g. `v1beta1 --> internal`,
`internal --> v1beta1`, `internal --> v1`, `v1 --> internal`


[3:11 PM] 
wherever applicable


[3:11 PM] 
tkral but you don't call those functions directly, right?


[3:12 PM] 
concaf @tkral well, they are all exported functions, so they should be fine
to call, no?


[3:12 PM] 
They are all added to runtime.Scheme, like here -
https://github.com/kubernetes/client-go/blob/v4.0.0/pkg/apis/networking/v1/zz_generated.conversion.go#L40


[3:13 PM] 
tkral yep, I meant that you should call them via runtime.Scheme


[3:14 PM] 
concaf @tkral yep


[3:16 PM] 
@tkral so, does it look good to proceed with this 2-step conversion
thing? Ideally it'd be cool to unmarshal into client-go's internal structs and
then call the `internal --> v1beta1` or `internal --> v1` functions. But the
internal structs don't have JSON tags :disappointed:


[3:17 PM] 
So, maybe we need to standardize on, let's say Kubernetes 1.6, and unmarshal
everything in that, and then based on `--kubernetes-version`, we call these
conversion functions.


[3:18 PM] 
tkral hmm, I'm not sure about that :disappointed: if we do that then we might
miss some features in the future


[3:18 PM] 
I'm not sure to be honest :disappointed:


[3:18 PM] 
concaf I agree :disappointed:


[3:19 PM] 
tkral It needs a little bit more thinking. Can you write a summary of this and
post it to the issue?


[3:19 PM] 
concaf Yeah, I'll post the cabal log and this conversation on the issue as it
is


[3:19 PM] 
the relevant bits :stuck_out_tongue:


[3:21 PM] 
tkral :+1:


[3:21 PM] 
the big question is what version should kedge spec follow


[3:23 PM] 
concaf @tkral I think the kedge spec should also be tied to the Kubernetes
version being used. So, 1.6 by default, and then for 1.7, it changes.


[3:23 PM] 
concaf The shortcuts should be Kubernetes version independent (maybe?)


[3:23 PM] 
tkral that is not possible to do


[3:23 PM] 
look at envFrom


[3:24 PM] 
concaf envFrom is the only shortcut field which we have borrowed from
Kubernetes, the rest are our own thing, no?


[3:25 PM] 
tkral yep


[3:25 PM] 
concaf so, envFrom is a valid field in Kubernetes >= 1.6


[3:25 PM] 
and we are planning to only support Kubernetes >= 1.6, so maybe it goes away
completely


[3:26 PM] 
tkral yep you are right, I forgot that


[3:26 PM] 
we should change it and start doing proper envFrom


[3:26 PM] 
concaf yeah


[3:27 PM] 
concaf
So this is good?


[3:27 PM] 
tkral it still doesn't have to be true


[3:27 PM] 
look at ingress


[3:28 PM] 
we have shortcut for it, but it can change anytime, as it is in beta


[3:28 PM] 
so `endpoint` shortcut depends on k8s version


[3:28 PM] 
concaf yep, makes sense :smile: Haha.
So, we need a completely different Kedge spec for each Kubernetes version,
then.


[3:29 PM] 
tkral it will also change once Ingress is moved to v1


[3:29 PM] 
yep :disappointed:


[3:29 PM] 
concaf this is going to be a ride :stuck_out_tongue:


[3:29 PM] 
tkral different Kedge spec, and shortcuts with slightly different behavior
:disappointed:


[3:29 PM] 
tkral but it will be up to the user what he writes (in Kedge spec)


[3:30 PM] 
tkral and it is up to us what we generate when shortcut is used


[3:31 PM] 
concaf
yep, by default we default to the Kedge spec for Kubernetes 1.6, and then
switch case when Kubernetes version is specified


[3:31 PM] 
tkral why 1.6?


[3:32 PM] 
concaf @tkral because OpenShift latest is based on that, no? Like we discussed
in the cabal yesterday


[3:33 PM] 
tkral how about if we default to latest stable for k8s, and for OpenShift to
latest OpenShift stable version?


[3:33 PM] 
concaf @tkral makes sense, yep, should be doable once we have both OpenShift
support and `--kubernetes-version` in


[3:33 PM] 
tkral yep


[3:34 PM] 
once we have `--distribution openshift` it will generate Kubernetes 1.6
artifacts + OpenShift artifacts by default


[3:35 PM] 
now the hard part:


[3:36 PM] 
concaf :face_with_head_bandage:


[3:38 PM] 
tkral let's say that in 1.8 something changes in the Ingress structure. A user
has their Kedge file written according to the 1.8 structs (they don't use the
`endpoint` shortcut) and now they run kedge with `--kubernetes-version 1.7`


[3:38 PM] 
do we do the backwards conversion ourselves?


[3:39 PM] 
concaf @tkral I don't think we should; we should error out with a version
mismatch, maybe.


[3:40 PM] 
tkral and how about the other way around?


[3:41 PM] 
concaf same I think


[3:41 PM] 
tkral then why are we adding that version switch? :smile:


[3:42 PM] 
I'm starting to think that the version switch should only control what we
generate when shortcuts are used


[3:42 PM] 
concaf By default, we can only convert to one Kubernetes version (the latest
stable Kubernetes), but if the user wants a different one, they can write their
Kedge spec in that version and pass it using the `--kubernetes-version` flag


[3:43 PM] 
tkral the rest is in the user's control


[3:43 PM] 
tkral
wait....


[3:44 PM] 
tkral
So you are saying that `--kubernetes-version` is not controlling just what
version is outputted, but also what version is used in the Kedge file?


[3:44 PM] 
concaf
yep


[3:45 PM] 
tkral ah, now I see. Good point. I was looking at it as a flag that just
controls output.


[3:46 PM] 
tkral would it be better to have this version in the kedge file then?


[3:46 PM] 
tkral as it's tied to a given kedge file and its format


[3:47 PM] 
concaf
this version == Kubernetes version or Kedge version?


[3:47 PM] 
concaf but, idk, maybe only the application definition bits should remain in
the file


[3:47 PM] 
and this can be passed via the flag


[3:47 PM] 
not sure, I'm fine either way


[3:49 PM] 
tkral Kubernetes version. The reason I'm saying this is that if you write the
file in some version other than the latest stable, you will have to use the
version flag every time


[3:49 PM] 
if we put an optional field into the Kedge file, users don't have to think
about that flag every time


[3:50 PM] 
concaf makes sense

[3:50 PM] 
does it make sense to do both?

[3:51 PM] 
it's also possible to deploy one kedge file written for 1.6 to 1.7 as well


[3:51 PM] 
maybe the shortcuts will behave a bit differently for both, or some new fields
are added for 1.7 which are not in 1.6


[3:51 PM] 
idk


[3:52 PM] 
how about this: there is an extra field like `kubernetesVersion: 1.6`, which
can be overwritten using `--kubernetes-version=1.7`


[3:53 PM] 
tkral not sure if that makes sense; you said it yourself, 1.6 can be deployed
to 1.7, so why would you want to overwrite it with a flag?


[3:54 PM] 
concaf because the output for Kubernetes version 1.7 might be different from
Kubernetes version 1.6, no?


[3:54 PM] 
e.g. with 1.7, an extra field for ingress is generated, by default


[3:55 PM] 
which is not there in 1.6


[3:55 PM] 
wdyt


[3:58 PM] 
tkral yes, but it will be compatible


[3:59 PM] 
I don't think that Kubernetes will break something; what works in lower
versions will work in newer ones


[3:59 PM] 
I think the only exception is alpha versions


[3:59 PM] 
concaf sure, we can change it later if the need arises; I'm okay with not
doing it for now


[4:00 PM] 
tkral https://kubernetes.io/docs/concepts/overview/kubernetes-api/#api-changes
Kubernetes
The Kubernetes API
Production-Grade Container Orchestration


[4:06 PM] 
concaf So, for now, I'll start with this -
1. Add a new optional field `kubernetesVersion:`, which defaults to the latest
Kubernetes version (i.e. 1.7 right now), and version the current Kedge spec
for the latest version. Make the Kedge+Kubernetes version being used depend on
the value of `kubernetesVersion`.
2. Add Kubernetes 1.6 support and introduce the conversion functions, etc.,
after looking more into client-go.

@concaf
Collaborator

concaf commented Sep 6, 2017

  • Add a new optional field kubernetesVersion:, which defaults to the latest Kubernetes version (i.e. 1.7 right now), and version the current Kedge spec for the latest version. Make the Kedge+Kubernetes version being used depend on the value of kubernetesVersion

@concaf
Collaborator

concaf commented Sep 7, 2017

Okay, so as per the current resources we support in spec.go, there are only 2 v1beta1 resources, Deployment and Ingress; all the rest are v1 resources. None of these resources have changed between versions 1.6 and 1.7.
The only thing that has moved from v1beta1 to v1 is NetworkPolicy, which we don't support anyway.

Talking to @surajssd, we were wondering whether we actually have this problem right now. If we are going to support 1.6+, there are no breaking changes. Is this a feature for the future, or ... ?

ping @kedgeproject/maintainers, do we need to rescope or reprioritize this one?

@surajssd
Member Author

surajssd commented Sep 8, 2017

Can we please finalize the priority of this issue? If it is high, is there a justifiable reason to put all our effort into it?

@concaf
Collaborator

concaf commented Sep 8, 2017

ping @kadel @pradeepto

@concaf
Collaborator

concaf commented Sep 8, 2017

So we talked, and we will put a maximum of maybe 2 days into this to see if we can come up with a non-convoluted solution, since we've already investigated this a bit. If we cannot, we put it on the backburner, since this is not a problem we're facing right now.

@kadel
Member

kadel commented Dec 13, 2017

It might be time to reopen the discussion on this.

Here is some information on the changes to the Workloads API in versions 1.8 and 1.9: https://kubernetes.io/docs/reference/workloads-18-19/

With the default selector changes and the introduction of apps/v1 in 1.9, we should start thinking about how we are going to handle all this.

The first question we should answer is how many versions of Kubernetes we are going to support.
Are we always going to support just the current stable? Or the current stable and the one before it? Or more? That might be a bigger maintenance cost if we don't figure out a good way to handle it.
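One way to express a "current stable plus N-1 previous releases" policy is a small version-window check. This is a sketch under the assumption that only the major.minor pair matters; kedge has no such function today and the names are illustrative.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// supported reports whether a requested minor version falls inside a sliding
// window ending at the current stable release, e.g. window=2 means "stable
// and the release before it". Version parsing is deliberately naive.
func supported(requested, stable string, window int) (bool, error) {
	req, err := minor(requested)
	if err != nil {
		return false, err
	}
	st, err := minor(stable)
	if err != nil {
		return false, err
	}
	return req <= st && st-req < window, nil
}

// minor extracts the minor number from a "major.minor" version string.
func minor(v string) (int, error) {
	parts := strings.Split(v, ".")
	if len(parts) < 2 {
		return 0, fmt.Errorf("expected major.minor, got %q", v)
	}
	return strconv.Atoi(parts[1])
}

func main() {
	ok, _ := supported("1.8", "1.9", 2)
	fmt.Println(ok) // true: 1.8 is within the "stable and one before" window
}
```

With such a check, widening or narrowing the support policy later is a one-parameter change rather than a code restructure.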

@pradeepto
