
Add configuration options for services and ingresses #557

Closed
Zodor opened this issue Mar 30, 2023 · 11 comments
Labels
enhancement New feature or request

Comments


Zodor commented Mar 30, 2023

I would like to have configuration options in the Helm chart.
Today we have no way of changing the type of the services or setting the annotations which AWS EKS clusters depend on. ClusterIP is not allowed when auto-creating NLBs, for example. We need to set NodePort or LoadBalancer.

Example chart

  • The new fields below are the type and annotations under the acceptors, console and connectors
apiVersion: broker.amq.io/v1beta1
kind: ActiveMQArtemis
metadata:
  name: activemq-artemis-prod
spec:
  ingressDomain: example.io
  adminUser: admin
  adminPassword: blablabla
  deploymentPlan:
    size: 2
    image: quay.io/artemiscloud/activemq-artemis-broker-kubernetes:1.0.14
    messageMigration: true
    resources:
      limits:
        cpu: "1000m"
        memory: "2024Mi"
      requests:
        cpu: "500m"
        memory: "1024Mi"
  acceptors:
    - name: activemq
      port: 61616
      type: NodePort
      annotations:
          external-dns.alpha.kubernetes.io/hostname: amq-prod-0.example.io
          service.beta.kubernetes.io/aws-load-balancer-internal: "true"
          service.beta.kubernetes.io/aws-load-balancer-type: nlb
  console:
    expose: true
    type: NodePort
    annotations:
      service.beta.kubernetes.io/key: data
  addressSettings:
    addressSetting:
      - autoCreateJmsQueues: true
        autoCreateQueues: true
  connectors:
    - name: connectors
      host: amq-prod.example.io
      expose: true
      port: 61616
      annotations:
          alb.ingress.kubernetes.io/backend-protocol: HTTP

If the above could be implemented, you would have the option to install this in an AWS EKS cluster, let EKS create Route53 DNS records and set up ALBs/NLBs, and connect to the brokers from outside the cluster in an easy way.

Zodor added the enhancement label Mar 30, 2023
gtully (Contributor) commented Apr 24, 2023

I wonder if this is getting to the point of probes, where we allowed an entire probe to be configured for liveness and readiness, reusing the corev1.Probe kind directly inline.

In this case, we would support a service or ingress or route definition in full, rather than providing some subset via an alias.

Maybe worth a thought at this point. The simplest thing may be to just add some extra fields, but why not reference the existing kinds and have the operator manage their lifecycle.
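
A rough sketch of what reusing the existing kinds inline could look like; the serviceTemplate field below is hypothetical, shown only to illustrate embedding a full core/v1 Service shape in the CR the way corev1.Probe is embedded for probes:

apiVersion: broker.amq.io/v1beta1
kind: ActiveMQArtemis
metadata:
  name: activemq-artemis-prod
spec:
  acceptors:
    - name: activemq
      port: 61616
      # Hypothetical field: a full core/v1 Service definition whose
      # lifecycle the operator would manage, instead of aliasing a
      # few selected fields.
      serviceTemplate:
        metadata:
          annotations:
            service.beta.kubernetes.io/aws-load-balancer-type: nlb
        spec:
          type: LoadBalancer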

jseiser commented May 5, 2023

Would like to add onto this.

To deploy an ingress, we would need to be able to specify additional annotations and labels, as well as a specific ingress class.

Our clusters have multiple ingress classes, depending on whether a service is private or public. Within that, external-dns will only create DNS entries for specific ingress classes if there is a label present telling it to do so.

e.g.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/service-upstream: "true" # mesh the ingress
  creationTimestamp: "2023-04-03T14:22:25Z"
  generation: 1
  labels:
    ingress: externaldns # tell external-dns we want a DNS record
spec:
  ingressClassName: nginx-internal # use the private NLB

davesargrad commented

Hi. I am very interested in the resolution of this issue. It strikes me that this is a critical usability issue. In general our producers/consumers will be outside our cluster and will need to connect into the cluster. The current inability to define a LoadBalancer service makes this complicated.

Is there any thought on when this might be resolved?

brusdev (Contributor) commented Jun 14, 2023

Support for ingress hosts will be added by #614.

lordigon commented

Defining an ingressClassName is important for usability on a real k8s cluster. Do you think it is possible to release just this feature?

brusdev (Contributor) commented Jul 5, 2023

@lordigon I don't think that other specific ingress fields will be added to the ActiveMQArtemis CR because the solution proposed by @gtully will address ingressClassName and all other required settings, see #557 (comment)

jseiser commented Aug 30, 2023

Wanted to reach out and see if this is still in the works. We want to use this operator, but without being able to configure the ingress we can't move forward.

We do have some Go developers, if the intended approach here could be explained in more detail.

brusdev (Contributor) commented Sep 29, 2023

@jseiser I'm not aware of any WIP on this issue, but there is an open PR related to #481 and #588 that could help.

gtully (Contributor) commented Nov 10, 2023

The annotations in the template from #737 may be sufficient; possibly the value for the ingressClassName field can be provided via the kubernetes.io/ingress.class annotation. @jseiser

Raschmann commented

> The annotations in the template from #737 may be sufficient; possibly the value for the ingressClassName field can be provided via the kubernetes.io/ingress.class annotation. @jseiser

FYI

KubeAPIWarningLogger annotation "kubernetes.io/ingress.class" is deprecated, please use 'spec.ingressClassName' instead
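
For reference, the deprecated annotation and its replacement look like this on an Ingress object (the annotation has been deprecated in favor of spec.ingressClassName since IngressClass was introduced in Kubernetes 1.18):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  annotations:
    # Deprecated form, still honored by some older controllers.
    kubernetes.io/ingress.class: nginx-internal
spec:
  # Preferred form: reference the IngressClass by name.
  ingressClassName: nginx-internal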

brusdev added a commit that referenced this issue Feb 16, 2024
The expose mode affects how the internal services for acceptors, connectors and
console are exposed. Currently the supported modes are `route` and `ingress`.
Default is `route` on OpenShift and `ingress` on Kubernetes.
* `route` mode uses OpenShift Routes to expose the internal service.
* `ingress` mode uses Kubernetes Nginx Ingress to expose the internal service
  with TLS passthrough.
brusdev added commits to brusdev/activemq-artemis-operator that referenced this issue Feb 19, 2024
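
A minimal sketch of how the expose mode from these commits could be set on an acceptor, assuming the CR field is named exposeMode (check the released CRD for the exact field name):

apiVersion: broker.amq.io/v1beta1
kind: ActiveMQArtemis
metadata:
  name: activemq-artemis-prod
spec:
  ingressDomain: example.io
  acceptors:
    - name: activemq
      port: 61616
      # TLS on the acceptor is required for ingress TLS passthrough.
      sslEnabled: true
      expose: true
      # Assumed field name from the commit series: `ingress` exposes the
      # internal service via a Kubernetes Nginx Ingress with TLS
      # passthrough, `route` via an OpenShift Route.
      exposeMode: ingress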
brusdev (Contributor) commented Feb 20, 2024

The customization of managed resources from #758 is sufficient to customize services and ingresses created by the operator.
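
A sketch of that approach, assuming the resourceTemplates field added by #758; it would let a user attach the annotations, labels, and ingress class requested earlier in this thread to operator-managed resources (verify the exact field layout against the operator docs):

apiVersion: broker.amq.io/v1beta1
kind: ActiveMQArtemis
metadata:
  name: activemq-artemis-prod
spec:
  resourceTemplates:
    # Match the operator-generated Ingress resources and decorate them.
    - selector:
        kind: Ingress
      annotations:
        nginx.ingress.kubernetes.io/service-upstream: "true"
      labels:
        ingress: externaldns
      patch:
        kind: Ingress
        spec:
          ingressClassName: nginx-internal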

brusdev closed this as completed Feb 20, 2024