ALB targets not registered #3569
Seems related to #2339. I'm attaching the ALB logs.
The selector labels on your Service object don't match the labels on your Deployment's pod template, so your pods aren't actually attached to your Service. The load balancer controller therefore can't find any matching pods, and no targets get registered in the Target Group.
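To illustrate the diagnosis: a Service only selects pods whose labels include every key/value in its `spec.selector`. A minimal sketch of the broken-versus-fixed state (the label values here are hypothetical, not taken from the issue):

```yaml
# Broken: Service selector doesn't match the pod labels, so the
# Endpoints object stays empty and the controller has no targets.
apiVersion: v1
kind: Service
metadata:
  name: test-service
spec:
  selector:
    app: test-svc   # hypothetical mismatched value
  ports:
    - port: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
spec:
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test   # pods carry this label, not "test-svc"
# Fix: make the Service's spec.selector read `app: test` so it
# matches the pod template labels exactly.
```

A quick way to confirm this condition is to check whether the Service's Endpoints are empty (`kubectl get endpoints test-service`).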
Thanks @stroebs, sorry for the ignorance. This worked in the end:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: default
  name: test-deployment
  labels:
    app: test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: test-container
          image: test-image
          env:
            - name: PORT
              value: '80'
          resources:
            limits:
              memory: 512Mi
              cpu: '0.25'
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  namespace: default
  name: test-service
  labels:
    app: test
spec:
  type: NodePort
  selector:
    app: test
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  namespace: default
  annotations:
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/target-type: ip # instance target type is not available on Fargate
    alb.ingress.kubernetes.io/load-balancer-name: test-alb
    alb.ingress.kubernetes.io/healthcheck-path: /healthz
    alb.ingress.kubernetes.io/security-groups: sg-XXXXXXXXXX # attach cluster security group to the load balancer
    alb.ingress.kubernetes.io/manage-backend-security-group-rules: 'true' # manage security group rules for the load balancer
    alb.ingress.kubernetes.io/actions.response-503: >
      {"type":"fixed-response","fixedResponseConfig":{"contentType":"text/plain","statusCode":"503","messageBody":"503 error text"}}
    # external-dns specific configuration for creating route53 record-set
    external-dns.alpha.kubernetes.io/hostname: test.services.internal
  labels:
    app: test
spec:
  ingressClassName: alb
  rules:
    - host: test.ml-services.internal
      http:
        paths:
          - path: /503
            pathType: Exact
            backend:
              service:
                name: response-503
                port:
                  name: use-annotation
          - path: /
            pathType: Prefix
            backend:
              service:
                name: test-service
                port:
                  number: 80
```
Describe the bug
I'm trying to use annotations on an Ingress to create an Application Load Balancer for a service in a private, Fargate-only EKS cluster.
Here is a list of things that are true or that I have tried:
I'm not sure what to try next.
Steps to reproduce
Expected outcome
Targets to be registered in the Target Group.
Environment
Additional Context: