Add configuration options for services and ingresses #557
Comments
I wonder if this is getting to the point of probes, where we allowed an entire probe to be configured for liveness and readiness, reusing the corev1.Probe kind directly inline. In this case, we would support a service, ingress, or route definition in full, rather than providing some subset via an alias. Maybe worth a thought at this point. The simplest thing may be to just add some extra fields, but why not reference the existing kinds and have the operator manage their lifecycle?
Would like to add onto this. To deploy an ingress, we would need to be able to specify additional annotations and labels, as well as a specific ingress class. Our clusters have multiple ingress classes, depending on whether a service is private or public. Within that, external-dns will only create DNS entries for specific ingress classes if there is a label present telling it to do so, e.g.
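For illustration, a hedged sketch of the kind of Ingress the commenter describes (the ingress class name and the external-dns label are hypothetical placeholders for a cluster-specific setup; the hostname annotation is the standard external-dns one):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: artemis-acceptor
  labels:
    external-dns/enabled: "true"   # hypothetical label that external-dns is configured to match
  annotations:
    external-dns.alpha.kubernetes.io/hostname: broker.example.com
spec:
  ingressClassName: private-nginx   # one of several cluster-specific ingress classes
  rules:
    - host: broker.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: artemis-svc   # hypothetical service name
                port:
                  number: 61616
```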
Hi. I am very interested in the resolution of this issue. It strikes me that this is a critical usability issue. In general our producers/consumers will be outside our cluster and will need to connect into the cluster. The current inability to define a LoadBalancer service makes this complicated. Is there any thought on when this might be resolved?
Support for ingress hosts will be added by #614
Defining an ingressClassName is important for usability on a real k8s cluster. Do you think it is possible to release just this feature?
@lordigon I don't think that other specific ingress fields will be added to the ActiveMQArtemis CR, because the solution proposed by @gtully will address ingressClassName and all other required settings, see #557 (comment)
Wanted to reach out and see if this is still something in the works? We want to use this operator, but without being able to configure the ingress we can't move forward. We do have some Go developers, if the proposed approach here could be explained in more detail.
FYI KubeAPIWarningLogger annotation "kubernetes.io/ingress.class" is deprecated, please use 'spec.ingressClassName' instead |
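The warning above refers to the legacy annotation-based ingress class selection; a minimal sketch of the migration (the class name `nginx` is illustrative):

```yaml
# Deprecated: annotation-based ingress class selection
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx

# Preferred: the field introduced with networking.k8s.io/v1
spec:
  ingressClassName: nginx
```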
The expose mode affects how the internal services for acceptors, connectors and console are exposed. Currently the supported modes are `route` and `ingress`. Default is `route` on OpenShift and `ingress` on Kubernetes.
* `route` mode uses OpenShift Routes to expose the internal service.
* `ingress` mode uses Kubernetes Nginx Ingress to expose the internal service with TLS passthrough.
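The expose mode described above is set per acceptor (and for the console) in the ActiveMQArtemis CR; a hedged sketch, assuming the `exposeMode` field and `ingressDomain` setting this work introduced (field names and values should be checked against the CRD version in use):

```yaml
apiVersion: broker.amq.io/v1beta1
kind: ActiveMQArtemis
metadata:
  name: artemis-broker
spec:
  ingressDomain: apps.example.com   # assumption: domain used to build ingress hosts
  acceptors:
    - name: amqp
      port: 5672
      expose: true
      exposeMode: ingress   # or `route` on OpenShift
      sslEnabled: true      # ingress mode exposes the acceptor with TLS passthrough
  console:
    expose: true
    exposeMode: ingress
```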
The customization of managed resources from #758 is sufficient to customize the services and ingresses created by the operator.
I would like to have configuration options in the Helm chart.
Today we have no way of changing the type of the services or setting the annotations which AWS EKS clusters depend on. ClusterIP is not allowed when auto-creating NLBs, for example. We need to set NodePort or LoadBalancer.
Example chart
If the above could be implemented, you would have the option to install this in an AWS EKS cluster, let EKS create Route53 DNS records and set up ALBs/NLBs, and connect to the brokers from outside the cluster in an easy way.
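A hedged sketch of the kind of Helm values the comment above asks for (the `service` keys are hypothetical chart values; the annotations are the standard ones consumed by the AWS Load Balancer Controller and external-dns):

```yaml
# values.yaml (hypothetical chart keys)
service:
  type: LoadBalancer   # ClusterIP is not usable for auto-created NLBs
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    external-dns.alpha.kubernetes.io/hostname: broker.example.com   # lets external-dns create the Route53 record
```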