Supposed conflict between k8s 1.14 and 1.17 imports #53

Open
arnaudbos opened this issue Feb 12, 2020 · 14 comments

@arnaudbos (Contributor) commented Feb 12, 2020

Hello folks 👋

First of all, thank you for your work. If there's a preferred way to ask questions other than opening an issue, please redirect me (I didn't find anything) and slam this issue closed.

I'm experimenting with Dhall and Argo Workflows (and intend to compare with Pachyderm, FWIW), and I'm new to both.
My rationale for using Dhall is, basically, that I'm sick of YAML and Helm.

I've tried the example from the README; even though it's Argo CD and not Workflows, that's okay because I'm just getting started.
But for some reason the ObjectMeta type keeps failing me.

let packages = https://raw.githubusercontent.com/EarnestResearch/dhall-packages/master/package.dhall

let argocd = packages.kubernetes.argocd

let k8s = packages.kubernetes.k8s.schemas

in  argocd.Application::{
    , metadata = k8s.ObjectMeta::{ name = "hello-app" }
    , spec =
        argocd.ApplicationSpec::{
        , project = "hello-project"
        , source =
            argocd.SourceSpec.TypesUnion.Plugin
              argocd.PluginSourceSpec::{
              , repoURL =
                  "https://github.com/EarnestResearch/dhall-packages.git"
              , path = "kubernetes"
              , plugin = argocd.PluginSpec::{ name = "dhall-to-yaml" }
              }
        , destination =
            argocd.DestinationSpec::{
            , server = "kubernetes.svc.local"
            , namespace = "default"
            }
        }
    }

I get

λ dhall-to-json --explain <<< './test.dhall'
↳ ./test.dhall

Error: Missing record field: schemas
5│           packages.kubernetes.k8s.schemas

/Users/arnaud/Lab/monkeypatch/CS-Group/test.dhall:5:11
1│ ./test.dhall

(stdin):1:1

I've messed around with this example a bit, removed the .schemas part, and tried other things. Eventually I found something more interesting with an example which, I think, is more illustrative:
deployment is from dhall-kubernetes' README and works OK, but application fails even though both use the same k8s.ObjectMeta:: defaults completion.

let k8s = https://raw.githubusercontent.com/dhall-lang/dhall-kubernetes/381306bcc3fd87aafe042c23bb66fe58227b85f4/1.17/package.dhall

let deployment =
      k8s.Deployment::{
      , metadata = k8s.ObjectMeta::{ name = "nginx" }
      , spec = Some k8s.DeploymentSpec::{
        , selector = k8s.LabelSelector::{
          , matchLabels = Some (toMap { name = "nginx" })
          }
        , replicas = Some 2
        , template = k8s.PodTemplateSpec::{
          , metadata = k8s.ObjectMeta::{ name = "nginx" }
          , spec = Some k8s.PodSpec::{
            , containers =
              [ k8s.Container::{
                , name = "nginx"
                , image = Some "nginx:1.15.3"
                , ports = Some
                    [ k8s.ContainerPort::{ containerPort = 80 } ]
                }
              ]
            }
          }
        }
      }

let argocd = https://raw.githubusercontent.com/EarnestResearch/dhall-packages/8d228f578fbc7bb16c04a7c9ac8c6c7d2e13d1f7/kubernetes/argocd/package.dhall

let application = 
    argocd.Application::{
    , metadata = k8s.ObjectMeta::{ name = "hello-app" }
    , spec =
        argocd.ApplicationSpec::{
        , project = "hello-project"
        , source =
            argocd.SourceSpec.TypesUnion.Plugin
              argocd.PluginSourceSpec::{
              , repoURL =
                  "https://github.com/EarnestResearch/dhall-packages.git"
              , path = "kubernetes"
              , plugin = argocd.PluginSpec::{ name = "dhall-to-yaml" }
              }
        , destination =
            argocd.DestinationSpec::{
            , server = "kubernetes.svc.local"
            , namespace = "default"
            }
        }
    }

in  { deployment = deployment, application = application }

I get

λ dhall-to-json --explain <<< './test.dhall'
↳ ./test.dhall

Error: Expression doesn't match annotation
{ metadata : { - initializers : …
             ,   annotations : - List
                               + Optional
                               - { … : … }
                               + List …
             ,   finalizers : - List
                              + Optional
                              - Text
                              + List …
             ,   labels : - List
                          + Optional
                          - { … : … }
                          + List …
             ,   managedFields : - List
                                 + Optional
                                 - { … : … }
                                 + List …
             ,   ownerReferences : - List
                                   + Optional
                                   - { … : … }
                                   + List …
             , …
             }
, …
}

30│     argocd.Application::{
31│     , metadata = k8s.ObjectMeta::{ name = "hello-app" }
32│     , spec =
33│         argocd.ApplicationSpec::{
34│         , project = "hello-project"
35│         , source =
36│             argocd.SourceSpec.TypesUnion.Plugin
37│               argocd.PluginSourceSpec::{
38│               , repoURL =
39│                   "https://github.com/EarnestResearch/dhall-packages.git"
40│               , path = "kubernetes"
41│               , plugin = argocd.PluginSpec::{ name = "dhall-to-yaml" }
42│               }
43│         , destination =
44│             argocd.DestinationSpec::{
45│             , server = "kubernetes.svc.local"
46│             , namespace = "default"
47│             }
48│         }
49│     }

/Users/arnaud/Lab/monkeypatch/CS-Group/test.dhall:30:5
1鈹 ./test.dhall

(stdin):1:1

I'm not sure if my intuition is correct, but here I go:
I think there is an issue with the definition of types and defaults inside dhall-packages/kubernetes/k8s/.

These are the two reasons leading me to this assumption:

  1. Missing initializers

The first line of the error message, { metadata : { - initializers : …, suggests that initializers is missing.
And while it is indeed specified in dhall-kubernetes 1.14 (line 10), it is no longer defined in dhall-kubernetes 1.17.

  2. dhall-packages/kubernetes/k8s/1.17.dhall points to dhall-kubernetes 1.14

As you can see in this file, the link points to 1.14 instead of 1.17.

I have a feeling that this is linked to PR #41, but I'm not sure.
As I said, I'm very unfamiliar with Dhall, so the whole 1.14 mention at the beginning of files such as this one confuses me.

Anyway, I hope this is useful. If I'm mistaken in any way, please let me know so that I can diagnose further what's going on with my experiment.

@amarrella (Collaborator) commented Feb 15, 2020

Hi @arnaudbos, thanks for opening this issue and for opening the PR! The 1.17 link and the example definitely needed to be fixed; thanks for taking care of that! :)

With regard to type compatibility, I mentioned in #54 that we could go different routes:

  1. create a custom internal ObjectMeta type (and likewise for other dhall-kubernetes types) whose version-specific fields are all optional, and provide conversion functions from dhall-kubernetes to this internal type
  2. replace ObjectMeta (and other dhall-kubernetes types) with a union type over the different versions
  3. split the kubernetes directory by version (like dhall-kubernetes does)

1 and 2 fix the issue but are less ergonomic for the user, while 3 is better for the user but less ergonomic for the maintainers of hand-crafted packages.

I think we can take the maintainer tax (so adopt solution 3) if we build some automation around replicating and freezing the different versions (e.g. a maintainer only writes 1.14, and then runs a script to do it in the other versions).

I'm not sure there is a fourth solution that would address all of this, so 3 seems the most likely to me right now. What do you think? Would you have time to help with this?

@arnaudbos (Contributor, Author) commented Feb 18, 2020

Yes I'd like to help with this.

However, I think simply keeping the 1.17 link and example fixes in the aforementioned PR is not a good idea. Because, in the end, the "new" example would work "by accident", without the quick-and-dirty fix I've made in the argo Application type.

If you agree, as per your suggestion, I will simply delete the other PR and open a new one with just a s/schemas/1.14/ substitution in the README. So that the example can at least run even if it's based on 1.14. And then we can work on a fix for 1.17, knowing that other newcomers, like me, are not confused.
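For what it's worth, here is the literal substitution I mean, sketched on a sample line rather than the actual README (the file name and whether the resulting field needs Dhall backtick quoting are things I'd still have to check):

```shell
# The s/schemas/1.14/ substitution mentioned above, demonstrated on a
# sample import line; in the repo it would be applied to the README
# example (e.g. `sed 's/schemas/1.14/' README.md`).
echo 'let k8s = packages.kubernetes.k8s.schemas' | sed 's/schemas/1.14/'
# prints: let k8s = packages.kubernetes.k8s.1.14
```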

I think we can keep this issue open to discuss further.

I'd be happy to have guidance on how to automate things. I don't quite know how to start yet, but I'd be worried about the combinatorial explosion of maintaining every version of argo/argocd/ambassador/etc on top of every kubernetes variation.
So if you have any thoughts on this, I'd be happy to follow your lead.

In the meantime, I will use my local "patched" version to get stuff done at work 🙂

@amarrella (Collaborator) commented Feb 18, 2020

Thank you for your help :)

However, I think simply keeping the 1.17 link and example fixes in the aforementioned PR is not a good idea. Because, in the end, the "new" example would work "by accident", without the quick-and-dirty fix I've made in the argo Application type.
If you agree, as per your suggestion, I will simply delete the other PR and open a new one with just a s/schemas/1.14/ substitution in the README. So that the example can at least run even if it's based on 1.14. And then we can work on a fix for 1.17, knowing that other newcomers, like me, are not confused.

I agree with this. I think fixing the link in the 1.17 file wouldn't hurt either, since right now it's pointing to the wrong location, but I can take care of that separately 👍

I'd be happy to have guidance on how to automate things. I don't quite know how to start yet, but I'd be worried about the combinatorial explosion of maintaining every version of argo/argocd/ambassador/etc on top of every kubernetes variation.
So if you have any thoughts on this, I'd be happy to follow your lead.

I think we should have the following directory structure:

kubernetes
  / base
      / argocd
      / argo
      / cert-manager
      ... etc
      k8s.dhall -- contains link to k8s 1.14
  / 1.14
      / argocd
      / argo
      / cert-manager
      ... etc
      k8s.dhall -- contains link to k8s 1.14
  / 1.15
      / argocd
      / argo
      / cert-manager
      ... etc
      k8s.dhall -- contains link to k8s 1.15
  ... etc

where "base" contains the manually authored files, and the other directories contain a copy of them. The only file that should differ is k8s.dhall, which will point to the right kubernetes version. The other files are identical, except that they import the "local" k8s.dhall file, and their imports will be re-frozen because the hashes will differ. So the "automation" could be just a script that copies the files over (except k8s.dhall) and refreezes them.
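A minimal sketch of that script, under the assumed layout above (all names are illustrative, a tiny stand-in fixture is created so the sketch runs standalone, and the final `dhall freeze` step, which is what actually re-pins the hashes, requires the dhall CLI, so it is commented out):

```shell
#!/bin/sh
# Sketch: replicate the hand-authored `base` tree into per-version
# directories, swapping in the right k8s.dhall and re-freezing imports.
set -eu

# Stand-in fixture for the hand-authored base tree (illustrative only).
mkdir -p kubernetes/base/argocd
echo './k8s.dhall' > kubernetes/base/argocd/package.dhall
echo 'https://example.invalid/1.14/package.dhall' > kubernetes/base/k8s.dhall

for version in 1.14 1.15; do
  dest="kubernetes/$version"
  mkdir -p "$dest"
  # k8s.dhall is the only per-version file: point it at this version.
  echo "https://example.invalid/$version/package.dhall" > "$dest/k8s.dhall"
  # Copy every other file over from base, preserving the tree.
  (cd kubernetes/base && find . -type f ! -name k8s.dhall) |
    while IFS= read -r f; do
      mkdir -p "$dest/$(dirname "$f")"
      cp "kubernetes/base/$f" "$dest/$f"
    done
  # Re-pin the import hashes for this version (needs the dhall CLI):
  # find "$dest" -name '*.dhall' -exec dhall freeze --inplace {} \;
done
```

The refreeze is the key step: the copied files are byte-identical except that their relative k8s.dhall import now resolves to a different version, so the semantic hashes change and must be recomputed.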

I'd be happy to have guidance on how to automate things. I don't quite know how to start yet, but I'd be worried about the combinatorial explosion of maintaining every version of argo/argocd/ambassador/etc on top of every kubernetes variation.
So if you have any thoughts on this, I'd be happy to follow your lead.

Much like what usually happens with helm charts, I don't think the goal of this project should be to support multiple versions of the underlying apps. It's up to each app's maintainer (contributors are more than welcome) to keep versions up to date. If a user needs to point to an older version, they can just refer to an older tag / commit hash, if it exists. The only case I think is worth supporting is different kubernetes versions. We should aim to support all versions until they are EOL, or for as long as cloud providers don't offer an upgrade (AWS, for example, is very behind schedule, but many users are on AWS and they shouldn't be left behind).

This is quite a big design decision for this repository, so I'll post this issue on the dhall discourse; hopefully more people from the community can chime in. Maybe there is a better solution than what I proposed here.

In the meantime, I will use my local "patched" version to get stuff done at work 🙂

Sounds good, thanks for your patience!
I hope we will come up with a final solution quickly so that this use case is covered 🙂

@Gabriel439 (Contributor) commented Feb 19, 2020

I agree with @arnaudbos that we should probably avoid a combinatorial explosion of multiple supported versions for each dependency.

I would go with the "low-tech" solution of picking only one Kubernetes version to support. From my perspective it is okay if this project is opinionated and doesn't address every potential user's needs. Even users who cannot use this project directly will still find it useful as a starting point that they can learn from and/or fork for their own internal needs.

I did a quick search for supported Kubernetes versions for various cloud providers and found:

  • EKS: 1.12, 1.13, 1.14
  • GKE: 1.13, 1.14, 1.15
  • AKS: 1.13, 1.14, 1.15 (soon to be 1.14, 1.15, 1.16)

Given that, I propose we only support Kubernetes version 1.14, which should cover most cloud providers. I can also change the default ./package.dhall in dhall-kubernetes to point to version 1.14.

@amarrella (Collaborator) commented Feb 20, 2020

@Gabriel439: I didn't mean for the versions to be combinatorial.
Only different kubernetes versions would be supported in this model, not different versions of the packages themselves. But keeping the current state is reasonable for me, and from a maintainer's perspective it's the easiest solution.

@arnaudbos: Have you tried applying the 1.14 manifests to your cluster? They should work on a 1.17 cluster. Or do you need some fields from the 1.17 ObjectMeta that make this unfeasible? I'm trying to assess the tradeoffs.

@Gabriel439 (Contributor) commented Feb 20, 2020

@amarrella: The scope of this repository isn't just Kubernetes, though. My understanding from the name of the repository and the README is that this repository is intended to host non-Kubernetes-related packages, too. Creating an entire copy of the repository for each Kubernetes version is giving the Kubernetes dependency a privileged status over other potential dependencies.

@amarrella (Collaborator) commented Feb 20, 2020

@Gabriel439 you are right, it's not a kubernetes-only repository; all kubernetes dependencies are in the kubernetes directory for this reason. In fact, we already have non-kubernetes stuff in util.

OK, if there is no user need for the replication I'll hold off on it (at least for now). I'd still like to give users a better way of handling this use case, though perhaps this repo is not the place where that should happen.

arnaudbos added a commit to arnaudbos/dhall-packages that referenced this issue Feb 24, 2020
* Fix k8s/1.17.dhall reference to upstream 1.14

* Fix README example to point to 1.14 (see EarnestResearch#53)

* Fix argocd/Project/Type.dhall pointing to different commit
@arnaudbos (Contributor, Author) commented Feb 25, 2020

Another route would be to split the repo in two, between k8s-related packages and non-k8s-related packages. That would allow a specific directory structure in one without impacting the other.

Let me explain.
The "combinatorial explosion" exists as a matter of fact; the current repository structure just can't cope with it. At least I don't see how at the moment.

@amarrella I think the structure you've described above works. Maybe it could be "extended" to allow contributors to add any package version (e.g. argo 1.2.0), not just the latest, to whichever k8s version they like.

My point is that:

"they can just refer to an older tag / commit hash, if it exists"

indeed allows a user to go back in time to a previous version, if it ever existed.
As a contributor, however, it only allows going forward in package versions, not backward.

For instance, let's say kubernetes/1.14/argo/package.dhall defines version 1.2.1 (as currently stated in the README).
I don't quite see how I could contribute version 1.2.0 to this repository.

Adding subdirectories for package versions would allow a contributor to provide package 1.4.0 even though 1.4.2, for instance, already exists.

kubernetes
  / base
      / argocd
          / 1.2.1
          / 1.4.0
      / argo
      ... etc
      k8s.dhall -- contains link to k8s 1.14
  / 1.14
      / argocd
          / 1.2.1
          / 1.4.0
      / argo
      ... etc
      k8s.dhall -- contains link to k8s 1.14
  / 1.15
      / argocd
          / 1.4.2
      / argo
      ... etc
      k8s.dhall -- contains link to k8s 1.15
  ... etc

However, I'd be worried that such a specific directory structure could mislead users/contributors into thinking that all versions of each package are expected to be supported/contributed. (It certainly happened to me when I came across this repo!)

Splitting the repo in two would separate the curious case of the kubernetes package from "traditional" packages, for which such a nested structure is unnecessary.

I would also advise adding a note to the README explaining the repository structure, explicitly stating why there may be holes between versions, and noting that contributors are welcome to provide new versions of any package, forward or backward relative to the set of versions already contributed.

My two cents.

@Gabriel439 (Contributor) commented Feb 25, 2020

@arnaudbos: I still don't think it's desirable to support multiple versions, even if it's technically feasible. The problem with doing so is that it leads to poor use of developer resources by spreading the community/ecosystem thin, which in turn leads to a lower level of quality.

@arnaudbos (Contributor, Author) commented Feb 25, 2020

Yes, it's true, I understand.

On the topic of supporting only one version: what's Kubernetes' story in terms of forward compatibility? You've said 1.14 would be the soundest choice; is it a safe bet? I'm willing to believe it, just asking for confirmation.

@amarrella (Collaborator) commented Feb 26, 2020

@arnaudbos I believe for most cases the types will be forward compatible, but:

So I expect to update the version to 1.15 soon after Amazon releases EKS version 1.15. This will make sure we cover all three major cloud providers, and 1.17 users will also be fully covered by the deprecation policy for both stable and beta apiVersions.

This is also motivated by the fact that many new versions of open-source kubernetes packages no longer support 1.14.

@arnaudbos (Contributor, Author) commented Feb 28, 2020

Thanks for the link. So I guess it's settled? Status quo until EKS bumps to 1.15, and then bump here too?

I'd still be in favor of a note about this repo's policy in the README, though.
Do you want me to draft something?

A note explicitly surfacing the supported versions of the packages themselves wouldn't hurt either (current versions for Argo and Argo CD are 1.4.2 and 2.5.2, respectively, and at first glance it's hard to tell which versions the dhall packages map to).
Although I don't know how to assess their forward compatibility (can't trust semver).

@amarrella (Collaborator) commented Mar 2, 2020

@arnaudbos I agree versions should be documented for easier discoverability.

I think I will create a dhall file in each major package directory documenting the version and the maintainer and add information about that in the README.

For Kubernetes, I'll also add a note stating that the currently supported version is 1.14 and that we plan to update to 1.15 as soon as it's supported by all the major cloud providers (AWS, Google, Azure). I won't call this a general policy yet, though. I think that for now, while the project is small, it's more reasonable to proceed on a per-request basis.

@amarrella (Collaborator) commented Mar 11, 2020

Sorry, I have been quite busy and haven't done this yet, but it's on my list of things to do!

Meanwhile, it looks like Amazon released 1.15: https://aws.amazon.com/about-aws/whats-new/2020/03/amazon-eks-now-supports-kubernetes-version-1-15/

I'll wait a few days for tooling to catch up and then 1.15 will become the official supported version.

@amarrella mentioned this issue Mar 23, 2020