@salonichf5 salonichf5 commented Nov 12, 2025

Proposed changes


Problem: Users want to be able to specify ip_hash load balancing for upstreams

Solution: Add support for session affinity using ip_hash directive in upstreams

Testing: Manual testing

Scenarios covered in this PR

  1. Invalid loadBalancingMethod is rejected

Create an UpstreamSettingsPolicy with an unsupported method:

apiVersion: gateway.nginx.org/v1alpha1
kind: UpstreamSettingsPolicy
metadata:
  name: usp-lb-invalid
spec:
  loadBalancingMethod: invalid_method
  targetRefs:
    - group: core
      kind: Service
      name: tea


Result:

spec.loadBalancingMethod: Unsupported value: "invalid_method": supported values: "ip_hash", "random two least_conn"
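
This rejection comes from enum validation in the CRD schema. A minimal sketch of the relevant excerpt, assuming a plain OpenAPI enum that mirrors the values in the error message (the real CRD may differ):

# Hypothetical excerpt of the UpstreamSettingsPolicy CRD schema
loadBalancingMethod:
  type: string
  enum:
    - ip_hash
    - random two least_conn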
  2. Conflicting policies on the same target

Two policies targeting the same Service (tea) with different methods:

# Policy A (accepted)
spec:
  loadBalancingMethod: ip_hash
  zoneSize: 1m
  targetRefs:
    - group: core
      kind: Service
      name: tea

# Policy B (conflicts with A)
spec:
  loadBalancingMethod: random two least_conn
  zoneSize: 1m
  targetRefs:
    - group: core
      kind: Service
      name: tea
kubectl describe output for both policies. Policy A (created first) is accepted; Policy B is marked Conflicted, matching the Gateway API rule that the oldest policy wins:

Spec:
  Load Balancing Method:  ip_hash
  Target Refs:
    Group:    core
    Kind:     Service
    Name:     tea
  Zone Size:  1m
Status:
  Ancestors:
    Ancestor Ref:
      Group:      gateway.networking.k8s.io
      Kind:       Gateway
      Name:       gateway
      Namespace:  default
    Conditions:
      Last Transition Time:  2025-11-12T02:05:05Z
      Message:               The Policy is accepted
      Observed Generation:   1
      Reason:                Accepted
      Status:                True
      Type:                  Accepted
    Controller Name:         gateway.nginx.org/nginx-gateway-controller
Events:                      <none>

Spec:
  Load Balancing Method:  random two least_conn
  Target Refs:
    Group:    core
    Kind:     Service
    Name:     tea
  Zone Size:  1m
Status:
  Ancestors:
    Ancestor Ref:
      Group:      gateway.networking.k8s.io
      Kind:       Gateway
      Name:       gateway
      Namespace:  default
    Conditions:
      Last Transition Time:  2025-11-12T02:05:21Z
      Message:               Conflicts with another UpstreamSettingsPolicy
      Observed Generation:   1
      Reason:                Conflicted
      Status:                False
      Type:                  Accepted
    Controller Name:         gateway.nginx.org/nginx-gateway-controller

Merging behaviour

Two policies targeting the same Service merge when their settings don't overlap: usp-lb sets only zoneSize and usp-lb-2 sets only loadBalancingMethod, and both end up in the generated upstream.

apiVersion: gateway.nginx.org/v1alpha1
kind: UpstreamSettingsPolicy
metadata:
  name: usp-lb
spec:
  targetRefs:
  - group: core
    kind: Service
    name: tea
  zoneSize: 1m
---
apiVersion: gateway.nginx.org/v1alpha1
kind: UpstreamSettingsPolicy
metadata:
  name: usp-lb-2
spec:
  targetRefs:
  - group: core
    kind: Service
    name: tea
  loadBalancingMethod: ip_hash

Generated config:

upstream default_tea_80 {
    ip_hash;
    zone default_tea_80 1m;


    server 10.244.0.137:8080;
    server 10.244.0.136:8080;
    server 10.244.0.138:8080;

}

Working scenarios

Applying ip_hash load balancing only to the tea service using an UpstreamSettingsPolicy; the coffee service should keep the default load-balancing method, random two least_conn.
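
A sketch of that policy (the metadata name is assumed; zoneSize matches the 1m zone in the config below):

apiVersion: gateway.nginx.org/v1alpha1
kind: UpstreamSettingsPolicy
metadata:
  name: usp-lb-tea  # name assumed
spec:
  loadBalancingMethod: ip_hash
  zoneSize: 1m
  targetRefs:
    - group: core
      kind: Service
      name: tea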

Generated config:

upstream default_coffee_80 {
    random two least_conn;
    zone default_coffee_80 512k;

    server 10.244.0.90:8080;
    server 10.244.0.88:8080;
    server 10.244.0.89:8080;
}

upstream default_tea_80 {
    ip_hash;
    zone default_tea_80 1m;


    server 10.244.0.91:8080;
    server 10.244.0.87:8080;
    server 10.244.0.92:8080;

}

Now sending requests to the tea application:

All requests get pinned to one backend IP (session affinity)
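
testing-script.sh isn't shown in this description; a minimal sketch of what it does, assuming the backend echoes a "Server address: <ip>:<port>" line in its response body (as the cafe demo app does):

#!/bin/sh
# Send N requests through the gateway and count which backend answered each one.
N=50
echo "Sending $N requests to /tea via cafe.example.com (127.0.0.1:8080)"
for i in $(seq 1 "$N"); do
  curl -s -H "Host: cafe.example.com" http://127.0.0.1:8080/tea \
    | awk '/Server address:/ { print $3 }'
done | sort | uniq -c | sort -rn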

sa.choudhary@N9939CQ4P0 upstream-settings-policy % sh testing-script.sh 
Sending 50 requests to /tea via cafe.example.com (127.0.0.1:8080)

Summary (how many times each backend was chosen):
  50 10.244.0.92:8080

Now let's see what happens when we do the same for the coffee application:

sa.choudhary@N9939CQ4P0 upstream-settings-policy % sh testing-script.sh
Sending 50 requests to /coffee via cafe.example.com (127.0.0.1:8080)

Summary (how many times each backend was chosen):
  19 10.244.0.88:8080
  18 10.244.0.89:8080
  13 10.244.0.90:8080

Note: I tried running curls from a K8s pod to send a different client IP, to show that the backend selection changes, but ip_hash used my system IP ($remote_addr), which was the same. I'll try more creative ways to do that today (one option is sketched below), but I could definitely distinguish an upstream with ip_hash, which shows stickiness, from a regular upstream.
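
One way to retry that from inside the cluster (a sketch; the image and the gateway's in-cluster address are assumptions):

# Run curl from a throwaway pod so the request originates from a pod IP
# rather than my machine; <gateway-address> is the NGINX Service address.
kubectl run curl-client --rm -it --restart=Never --image=curlimages/curl -- \
  -s -H "Host: cafe.example.com" http://<gateway-address>/tea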

Tested with secure-app with a BackendTLSPolicy:
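
For reference, the combination looks roughly like this (a sketch; the BackendTLSPolicy API version, names, and certificate reference are assumptions). The generated config that follows shows ip_hash applied to the TLS-verified upstream:

apiVersion: gateway.nginx.org/v1alpha1
kind: UpstreamSettingsPolicy
metadata:
  name: usp-secure-app  # name assumed
spec:
  loadBalancingMethod: ip_hash
  zoneSize: 1m
  targetRefs:
    - group: core
      kind: Service
      name: secure-app
---
apiVersion: gateway.networking.k8s.io/v1alpha3
kind: BackendTLSPolicy
metadata:
  name: backend-tls  # name assumed
spec:
  targetRefs:
    - group: ""
      kind: Service
      name: secure-app
  validation:
    caCertificateRefs:
      - group: ""
        kind: ConfigMap
        name: backend-cert  # assumed CA bundle
    hostname: secure-app.example.com  # assumed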

upstream default_secure-app_8443 {
    ip_hash;
    zone default_secure-app_8443 1m;


    server 10.244.0.105:8443;
    server 10.244.0.107:8443;
    server 10.244.0.106:8443;

}


Closes #4230

Checklist

Before creating a PR, run through this checklist and mark each as complete.

  • I have read the CONTRIBUTING doc
  • I have added tests that prove my fix is effective or that my feature works
  • I have checked that all unit tests pass after adding my changes
  • I have updated necessary documentation
  • I have rebased my branch onto main
  • I will ensure my PR is targeting the main branch and pulling from my branch from my own fork

Release notes


Add session affinity support for NGINX OSS users.

@salonichf5 salonichf5 changed the title Add ip_hash based session affinity support for NGINX OSS Do not review: Add ip_hash based session affinity support for NGINX OSS Nov 12, 2025
@salonichf5 salonichf5 force-pushed the feat/oss-nginx-session-persistence branch 3 times, most recently from ff3ad8e to 698f6ee on November 12, 2025 at 17:27
@salonichf5 salonichf5 changed the title Do not review: Add ip_hash based session affinity support for NGINX OSS Add ip_hash based session affinity support for NGINX OSS Nov 12, 2025
@salonichf5 salonichf5 marked this pull request as ready for review November 12, 2025 17:29
@salonichf5 salonichf5 requested a review from a team as a code owner November 12, 2025 17:29
@bjee19 bjee19 left a comment

nice

@salonichf5 salonichf5 force-pushed the feat/session-persistence branch from 2d2539b to 7f1b7c3 on November 13, 2025 at 00:35
@salonichf5 salonichf5 force-pushed the feat/oss-nginx-session-persistence branch from 5403bf5 to 03c61d2 on November 13, 2025 at 01:08
@salonichf5 salonichf5 merged commit bc330d1 into feat/session-persistence Nov 13, 2025
55 of 56 checks passed
@salonichf5 salonichf5 deleted the feat/oss-nginx-session-persistence branch November 13, 2025 02:02
@github-project-automation github-project-automation bot moved this from 🆕 New to ✅ Done in NGINX Gateway Fabric Nov 13, 2025
salonichf5 added a commit that referenced this pull request Nov 18, 2025
Problem: Users want to be able to specify ip_hash load balancing for upstreams

Solution: Add support for session affinity using ip_hash directive in upstreams
salonichf5 added a commit that referenced this pull request Nov 21, 2025
Problem: Users want to be able to specify ip_hash load balancing for upstreams

Solution: Add support for session affinity using ip_hash directive in upstreams
salonichf5 added a commit that referenced this pull request Nov 24, 2025
Problem: Users want to be able to specify ip_hash load balancing for upstreams

Solution: Add support for session affinity using ip_hash directive in upstreams
Labels

enhancement (New feature or request), release-notes
