
Cannot adjust Gateway API loadbalancer healthcheck paths because TargetGroupConfiguration seems broken #4249

Open
@timothy-spencer


Bug Description
TargetGroupConfiguration for the Gateway API appears to be broken. As a result, I cannot change the health check path, so my service looks unhealthy to the load balancer.

Steps to Reproduce
Apply a TargetGroupConfiguration like this:

---
apiVersion: gateway.k8s.aws/v1beta1
kind: TargetGroupConfiguration
metadata:
  name: gitlab-tg-config
spec:
  targetReference:
    kind: Service
    name: gitlab-gw
  defaultConfiguration:
    targetType: ip
  routeConfigurations:
    - routeIdentifier:
        kind: HTTPRoute
        name: gitlab
        # patch this
        namespace: CHANGEME
      targetGroupProps:
        healthCheckConfig:
          healthCheckPath: /-/health
          healthCheckProtocol: http

Expected Behavior
I expect the resource to apply cleanly, and health checks to start working.

Actual Behavior
The k8s API says: TargetGroupConfiguration.gateway.k8s.aws "gitlab-tg-config" is invalid: spec.routeConfigurations[0].identifier: Required value

I can't find "identifier" documented anywhere, nor mentioned in the source.

Regression
Was the functionality working correctly in a previous version? [No idea]

Current Workarounds
If you look at the error messages from the API, it seems to want an "identifier" field of the format ^(HTTPRoute|TLSRoute|TCPRoute|UDPRoute|GRPCRoute)?:([^:]+)?:([^:]+)?$. When I supply HTTPRoute:gitlab:gitlab2, it seems to accept it, and discards the routeIdentifier.
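For what it's worth, the pattern quoted in the error message can be checked directly. This is purely illustrative (Python used only to exercise the regex); the `Kind:namespace:name` ordering is my guess from the object the API server stored:

```python
import re

# Pattern copied verbatim from the API server's validation message.
ROUTE_ID = re.compile(r"^(HTTPRoute|TLSRoute|TCPRoute|UDPRoute|GRPCRoute)?:([^:]+)?:([^:]+)?$")

# The form the API server accepted in my testing: Kind:<part>:<part>.
m = ROUTE_ID.match("HTTPRoute:gitlab2:gitlab")
print(m.groups())  # -> ('HTTPRoute', 'gitlab2', 'gitlab')

# A bare route name does not match: the two literal colons are mandatory,
# so anything that isn't in this colon-separated form is rejected.
print(ROUTE_ID.match("gitlab"))  # -> None
```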

This isn't really a workaround, because it doesn't seem to change anything, but it is a way to get the resource admitted, even if it doesn't work.
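For completeness, here is a sketch of the variant the API server does accept. The flat `identifier` string replacing `routeIdentifier` is my guess based on the stored object shown below (`Kind:namespace:name`); applying it did not actually fix the health check:

```yaml
# Hypothetical: swaps routeIdentifier for the flat "identifier" string the
# admission error asks for. The API server accepts this, but the health
# check configuration still isn't applied.
apiVersion: gateway.k8s.aws/v1beta1
kind: TargetGroupConfiguration
metadata:
  name: gitlab-tg-config
spec:
  targetReference:
    kind: Service
    name: gitlab-gw
  defaultConfiguration:
    targetType: ip
  routeConfigurations:
    - identifier: HTTPRoute:gitlab2:gitlab   # Kind:namespace:name (assumed)
      targetGroupProps:
        healthCheckConfig:
          healthCheckPath: /-/health
          healthCheckProtocol: http
```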

Environment

  • AWS Load Balancer controller version: 2.13.3
  • Kubernetes version: 1.33
  • Using EKS (yes/no), if so version?: Yes, 1.33
  • Using Service or Ingress: Gateway
  • AWS region: us-west-2
  • How was the aws-load-balancer-controller installed: We installed it via the Argo CD Helm app with these values:
createNamespace: true
enableCertManager: true
priorityClassName: system-cluster-critical
serviceAccount:
  create: true
  name: aws-load-balancer-controller-sa
podDisruptionBudget:
  maxUnavailable: "10%"
securityContext:
  capabilities:
    drop: ["ALL"]
  seccompProfile:
    type: RuntimeDefault
tolerations:
  - key: "CriticalAddonsOnly"
    operator: "Exists"
  - key: ondemand
    operator: Equal
    value: "true"
    effect: NoSchedule
resources:
  requests:
    cpu: 10m
    memory: 100Mi
  limits:
    memory: 100Mi
controllerConfig:
  featureGates:
    NLBGatewayAPI: true
    ALBGatewayAPI: true
  • Current state of the Controller configuration:
Name:                   aws-load-balancer-controller
Namespace:              kube-system
CreationTimestamp:      Fri, 07 Jun 2024 16:34:23 -0700
Labels:                 app.kubernetes.io/instance=aws-load-balancer-controller
                        app.kubernetes.io/managed-by=Helm
                        app.kubernetes.io/name=aws-load-balancer-controller
                        app.kubernetes.io/version=v2.13.3
                        argocd.argoproj.io/instance=aws-load-balancer-controller
                        helm.sh/chart=aws-load-balancer-controller-1.13.3
Annotations:            deployment.kubernetes.io/revision: 17
Selector:               app.kubernetes.io/instance=aws-load-balancer-controller,app.kubernetes.io/name=aws-load-balancer-controller
Replicas:               2 desired | 2 updated | 2 total | 2 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           app.kubernetes.io/instance=aws-load-balancer-controller
                    app.kubernetes.io/name=aws-load-balancer-controller
  Annotations:      kubectl.kubernetes.io/restartedAt: 2025-06-11T21:48:49Z
                    prometheus.io/port: 8080
                    prometheus.io/scrape: true
  Service Account:  aws-load-balancer-controller-sa
  Containers:
   aws-load-balancer-controller:
    Image:           public.ecr.aws/eks/aws-load-balancer-controller:v2.13.3
    Ports:           9443/TCP, 8080/TCP
    Host Ports:      0/TCP, 0/TCP
    SeccompProfile:  RuntimeDefault
    Args:
      --cluster-name=tspencertest
      --ingress-class=alb
      --aws-region=us-west-2
      --aws-vpc-id=vpc-08291ee586de1ce4e
      --feature-gates=ALBGatewayAPI=true,NLBGatewayAPI=true
    Limits:
      memory:  100Mi
    Requests:
      cpu:        10m
      memory:     100Mi
    Liveness:     http-get http://:61779/healthz delay=30s timeout=10s period=10s #success=1 #failure=2
    Readiness:    http-get http://:61779/readyz delay=10s timeout=10s period=10s #success=1 #failure=2
    Environment:  <none>
    Mounts:
      /tmp/k8s-webhook-server/serving-certs from cert (ro)
  Volumes:
   cert:
    Type:               Secret (a volume populated by a Secret)
    SecretName:         aws-load-balancer-tls
    Optional:           false
  Priority Class Name:  system-cluster-critical
  Node-Selectors:       <none>
  Tolerations:          CriticalAddonsOnly op=Exists
                        ondemand=true:NoSchedule
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Progressing    True    NewReplicaSetAvailable
  Available      True    MinimumReplicasAvailable
OldReplicaSets:  aws-load-balancer-controller-79bc9f8bd6 (0/0 replicas created), aws-load-balancer-controller-7576bdcf7b (0/0 replicas created), aws-load-balancer-controller-577c6cc4df (0/0 replicas created), aws-load-balancer-controller-6dbcc88dd7 (0/0 replicas created), aws-load-balancer-controller-796c895667 (0/0 replicas created), aws-load-balancer-controller-78589869bf (0/0 replicas created), aws-load-balancer-controller-74f66c9c6f (0/0 replicas created), aws-load-balancer-controller-5b8f876894 (0/0 replicas created), aws-load-balancer-controller-858485f84b (0/0 replicas created), aws-load-balancer-controller-8f779fccc (0/0 replicas created)
NewReplicaSet:   aws-load-balancer-controller-7dfc49c98d (2/2 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  31m   deployment-controller  Scaled up replica set aws-load-balancer-controller-7dfc49c98d from 0 to 2
  • Current state of the Ingress/Service configuration:
Name:         gitlab-tg-config
Namespace:    gitlab2
Labels:       argocd.argoproj.io/instance=gitlab2-extras
Annotations:  <none>
API Version:  gateway.k8s.aws/v1beta1
Kind:         TargetGroupConfiguration
Metadata:
  Creation Timestamp:  2025-06-26T23:05:13Z
  Finalizers:
    gateway.k8s.aws/targetgroupconfigurations
  Generation:        2
  Resource Version:  422713724
  UID:               d2d71b3d-3401-4db9-9c2f-f0e68de55414
Spec:
  Default Configuration:
    Target Type:  ip
  Route Configurations:
    Identifier:  HTTPRoute:gitlab2:gitlab
    Target Group Props:
      Health Check Config:
        Health Check Path:      /-/health
        Health Check Protocol:  http
  Target Reference:
    Group:  
    Kind:   Service
    Name:   gitlab-gw
Events:     <none>

Possible Solution (Optional)
This feature seems to be only half-implemented. I can't find any mention of "identifier" in the code anywhere except in an unrelated context.

Contribution Intention (Optional)

  • Yes, I'm willing to submit a PR to fix this issue
  • No, I cannot work on a PR at this time
  • Maybe, it depends on what the actual problem is, which I don't understand.
