
Kubernetes: inconsistent handling of kubernetes.name/group/version properties #6726

Closed
Ladicek opened this issue Jan 22, 2020 · 8 comments
Labels: kind/bug Something isn't working
Milestone: 1.3.1.Final

Comments

@Ladicek (Contributor) commented Jan 22, 2020

Describe the bug

I'm using the kubernetes extension to generate Kubernetes and OpenShift resources, and I find that the kubernetes.name/group/version configuration properties are handled wildly inconsistently.

The problem seems to be in Dekorate, but the KubernetesProcessor in Quarkus is also involved (it sets the app.group system property). I think the root cause is that the Dekorate Resources object, which is shared across the entire Dekorate Session, is modified in the middle of Dekorate processing (by the callers of AbstractKubernetesHandler.setApplicationInfo).
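
To illustrate the failure mode, here is a minimal, hypothetical sketch of the shared-mutable-state problem; the class and method names below are invented for illustration and are not Dekorate's actual API:

// Hypothetical sketch only; these types are not Dekorate's real classes.
class AppInfo {
    String name = "test";               // from quarkus.application.name
    String version = "1.0.0-SNAPSHOT";  // from pom.xml
    String group;
}

class SharedRegistry {
    final AppInfo info = new AppInfo(); // one instance shared by every handler
}

class PlatformHandler {
    void apply(SharedRegistry registry) {
        // Overwrites the shared values in the middle of processing...
        registry.info.name = "foo";              // kubernetes.name
        registry.info.version = "my1.0version";  // kubernetes.version
        // ...so resources built before this point keep "test"/"1.0.0-SNAPSHOT",
        // while resources built afterwards pick up "foo"/"my1.0version",
        // which would produce exactly the mixed labels seen in the YAML below.
    }
}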

Here's my application.properties file:

quarkus.application.name=test
kubernetes.name=foo
kubernetes.group=bar
kubernetes.version=my1.0version
kubernetes.expose=true

And here is the generated kubernetes.yml, with the relevant lines marked with # !!!:

---
apiVersion: "v1"
kind: "ServiceAccount"
metadata:
  labels:
    app: "test" # !!!
    version: "1.0.0-SNAPSHOT" # !!!
    group: "bar" # !!!
  name: "foo" # !!!
---
apiVersion: "v1"
kind: "Service"
metadata:
  labels:
    app: "test" # !!!
    version: "1.0.0-SNAPSHOT" # !!!
    group: "bar" # !!!
  name: "foo" # !!!
spec:
  ports:
  - name: "http"
    port: 8080
    targetPort: 8080
  selector:
    app: "foo" # !!!
    version: "my1.0version" # !!!
    group: "bar" # !!!
  type: "ClusterIP"
---
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  labels:
    app: "foo" # !!!
    version: "my1.0version" # !!!
    group: "bar" # !!!
  name: "foo" # !!!
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "foo" # !!!
      version: "my1.0version" # !!!
      group: "bar" # !!!
  template:
    metadata:
      labels:
        app: "foo" # !!!
        version: "my1.0version" # !!!
        group: "bar" # !!!
    spec:
      containers:
      - env:
        - name: "KUBERNETES_NAMESPACE"
          valueFrom:
            fieldRef:
              fieldPath: "metadata.namespace"
        image: "bar/test:1.0.0-SNAPSHOT" # !!!
        imagePullPolicy: "IfNotPresent"
        name: "foo" # !!!
        ports:
        - containerPort: 8080
          name: "http"
          protocol: "TCP"
---
apiVersion: "extensions/v1beta1"
kind: "Ingress"
metadata:
  labels:
    app: "test" # !!!
    version: "1.0.0-SNAPSHOT" # !!!
    group: "bar" # !!!
  name: "foo" # !!!
spec:
  rules:
  - host: ""
    http:
      paths:
      - backend:
          serviceName: "foo" # !!!
          servicePort: 8080
        path: "/"

Expected behavior

I'd expect kubernetes.name to take priority over quarkus.application.name, but in the output above it's sometimes one and sometimes the other.

The kubernetes.group setting seems to be applied consistently. However, if I rename it to openshift.group, it still affects the Kubernetes resources, which seems wrong.

I'd expect kubernetes.version to take priority over the version Dekorate reads from pom.xml, but again it's sometimes one and sometimes the other.
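
For illustration, with the application.properties above and the overrides applied consistently, I'd expect every generated resource to carry the same metadata, roughly:

metadata:
  labels:
    app: "foo"
    version: "my1.0version"
    group: "bar"
  name: "foo"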

Actual behavior

See above.

Also, I think that kubernetes.* properties should only affect Kubernetes resources, openshift.* properties should only affect OpenShift resources, etc. This already seems to be the case for 99% of properties; the only potentially problematic ones are name, group and version. That seems to be because Dekorate expects these properties to always be the same (app.name/group/version). I'd be fine with keeping them always the same, too, but then there should be no openshift.name/group/version or knative.name/group/version. Preferably, in my opinion, there would be no kubernetes.name/group/version at all and this would be centralized under quarkus.application.
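
For example, a centralized configuration could look something like this (quarkus.application.name and quarkus.application.version exist today; quarkus.application.group is only a hypothetical property used here to illustrate the idea):

quarkus.application.name=test
# quarkus.application.group is hypothetical, shown only to illustrate the proposal
quarkus.application.group=bar
quarkus.application.version=my1.0version
kubernetes.expose=true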

To Reproduce
Steps to reproduce the behavior:

  1. Generate a simple application with the kubernetes extension
  2. Use the application.properties shown above
  3. Run mvn clean package
  4. Inspect target/kubernetes/kubernetes.yml

Environment (please complete the following information):

  • Output of uname -a or ver:
$ uname -a
Linux argondie 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
  • Output of java -version:
$ java -version
openjdk version "1.8.0_232"
OpenJDK Runtime Environment (build 1.8.0_232-8u232-b09-0ubuntu1~18.04.1-b09)
OpenJDK 64-Bit Server VM (build 25.232-b09, mixed mode)
  • GraalVM version (if different from Java): N/A
  • Quarkus version or git rev: bbe3de7
Ladicek added the kind/bug (Something isn't working) label on Jan 22, 2020
@Ladicek (Contributor, Author) commented Jan 22, 2020

CC @iocanel

@geoand (Contributor) commented Jan 22, 2020

@iocanel I assume you will be handling this one?

@iocanel (Contributor) commented Jan 23, 2020

I think we have multiple issues here.

  • kubernetes.* properties leaking into openshift resources and vice versa
  • kubernetes.name should take priority over quarkus.application.name
  • kubernetes.version should take priority over the version from pom.xml

The first item is probably a Dekorate issue.
The other two are probably something we can deal with in KubernetesProcessor.

@Ladicek: I am not sure what the intention is here. Do you want me to pick this issue up, or is this something you were planning to work on?

@Ladicek (Contributor, Author) commented Jan 23, 2020

@iocanel I played with this a bit yesterday and it doesn't seem like an easy fix, due to all the mutability inside Dekorate. My current assignment is to find bugs, not fix them, so if you could take this, that would be great :-)

@rsvoboda (Member) commented:

@Ladicek can you double-check if this issue is still valid?

@rsvoboda (Member) commented:

@Ladicek re-ping

@Ladicek (Contributor, Author) commented Mar 26, 2020

Ah, totally missed this one. Will check.

@Ladicek (Contributor, Author) commented Mar 26, 2020

This works fine now, closing. Thanks!

Ladicek closed this as completed on Mar 26, 2020
gsmet added this to the 1.3.1.Final milestone on Mar 26, 2020