
Conversation

@singh-prem
Contributor

  • As per the test description, TE-14.1 requires prefixes in the REPAIR VRF to use 700 NHGs that point to 700 decap+encap tunnels (NHG:NH = 1:1). However, the existing code configures V4TunnelCount=20000 decap+encap tunnels, and the same code is also used by the TE-14.2 test.
  • For prefixes in the REPAIR VRF, the proposed code increases the NHG count to 768 (keeping it a multiple of 256) while allowing only a single decap+encap tunnel per NHG (a minimal sketch of this layout follows the list below).
  • This fix also removes the vendor-specific deviation override_default_nh_scale=true from TE-14.2.
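
For context, the sketch below shows what an NHG:NH = 1:1 layout looks like with gribigo's fluent builders (the same calls used in internal/tescale/scale.go). It is a minimal sketch under assumptions, not the PR's code: the helper name buildDecapEncapNHGs, the sequential ID scheme, and the returned NHG-ID slice are invented for illustration. Feeding it 768 tunnel destinations yields exactly 768 next hops and 768 NHGs; REPAIR-VRF prefixes would then be spread across the returned NHG IDs.

package sketch

import "github.com/openconfig/gribigo/fluent"

// buildDecapEncapNHGs is a hypothetical helper illustrating the NHG:NH = 1:1
// layout: one decap+encap next hop per tunnel destination, each wrapped in its
// own next-hop group.
func buildDecapEncapNHGs(tunnelDstIPs []string, tunnelSrcIP, defaultVRF string) ([]fluent.GRIBIEntry, []uint64) {
	var entries []fluent.GRIBIEntry
	var nhgIDs []uint64
	for i, dst := range tunnelDstIPs {
		nhID, nhgID := uint64(i+1), uint64(i+1) // assumed ID scheme; the real code draws IDs from a pool

		// One decap+encap next hop per tunnel destination.
		entries = append(entries, fluent.NextHopEntry().
			WithIndex(nhID).
			WithNetworkInstance(defaultVRF).
			WithDecapsulateHeader(fluent.IPinIP).
			WithEncapsulateHeader(fluent.IPinIP).
			WithIPinIP(tunnelSrcIP, dst))

		// One NHG pointing at exactly that one next hop.
		entries = append(entries, fluent.NextHopGroupEntry().
			WithID(nhgID).
			WithNetworkInstance(defaultVRF).
			AddNextHop(nhID, 1))

		nhgIDs = append(nhgIDs, nhgID)
	}
	return entries, nhgIDs
}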

@singh-prem singh-prem requested review from a team as code owners December 15, 2025 10:06
@gemini-code-assist
Contributor

Summary of Changes

Hello @singh-prem, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request refines the scaling behavior of decap+encap tunnels within gRIBI tests, particularly addressing requirements for TE-14.1 and TE-14.2. The changes involve adjusting the default count for re-encapsulation next-hop groups, modifying the next-hop group creation logic to ensure a 1:1 relationship between next-hop groups and decap+encap tunnels, and eliminating a vendor-specific scaling override to standardize test configurations.

Highlights

  • Adjusted Tunnel Scaling Parameters: The number of next-hop groups for re-encapsulating IPv4 tunnels (V4ReEncapNHGCount) has been increased from 256 to 768 to align with specific test requirements, particularly for TE-14.1.
  • Simplified Next-Hop Group Configuration: The logic for creating next-hop groups and next-hops for decap+encap tunnels has been revised to ensure that each next-hop group points to a single decap+encap tunnel, improving consistency and adherence to test specifications.
  • Removed Vendor-Specific Override: A vendor-specific override for default next-hop scale (override_default_nh_scale) has been removed from both the test code and metadata, streamlining the configuration and reducing deviations for TE-14.2.

@OpenConfigBot

OpenConfigBot commented Dec 15, 2025

Pull Request Functional Test Report for #4928 / 76c3581

Virtual Devices

Device              Test
Arista cEOS         TE-14.2: encap and decap scale
Cisco 8000E         TE-14.2: encap and decap scale
Cisco XRd           TE-14.2: encap and decap scale
Juniper ncPTX       TE-14.2: encap and decap scale
Nokia SR Linux      TE-14.2: encap and decap scale
Openconfig Lemming  TE-14.2: encap and decap scale

Hardware Devices

Device              Test
Arista 7808         TE-14.2: encap and decap scale
Cisco 8808          TE-14.2: encap and decap scale
Juniper PTX10008    TE-14.2: encap and decap scale
Nokia 7250 IXR-10e  TE-14.2: encap and decap scale

Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request aims to fix the decap+encap tunnel scaling for TE-14.1 and TE-14.2 tests and removes a vendor-specific deviation. The changes adjust the number of Next-Hop Groups (NHGs) for re-encapsulation and modify the logic for how prefixes in the REPAIR VRF are mapped to decap+encap tunnels.

My review focuses on the correctness of the new scaling logic. I've found a potential issue in internal/tescale/scale.go where the number of created NHGs might not match the intended count due to integer division, especially when the total number of tunnels is not perfectly divisible by the number of NHGs. I've provided a suggestion to make this logic more robust. The rest of the changes, including the removal of the deviation and the adjustment of the default argument, look good and align with the goals of the pull request.

Comment on lines 217 to 236 in internal/tescale/scale.go
 	reEncapNHGRatio := param.V4TunnelCount / param.V4ReEncapNHGCount
-	nhgID = idPool.NextNHGID()
-	nhgEntry := fluent.NextHopGroupEntry().WithID(nhgID).WithNetworkInstance(defaultVRF).WithBackupNHG(nhgDecapToDefault)
-	for idx, ip := range v4TunnelIPAddrs.AllIPs() {
-		nhID = idPool.NextNHID()
-		vrfDefault.NHs = append(vrfDefault.NHs,
-			fluent.NextHopEntry().WithIndex(nhID).WithDecapsulateHeader(fluent.IPinIP).WithEncapsulateHeader(fluent.IPinIP).
-				WithNetworkInstance(defaultVRF).WithIPinIP(tunnelSrcIP, v4TunnelIPAddrs.AllIPs()[(idx+1)%len(v4TunnelIPAddrs.AllIPs())]),
-		)
-		if idx != 0 && idx%reEncapNHGRatio == 0 {
-			vrfDefault.NHGs = append(vrfDefault.NHGs, nhgEntry)
-			nhgID = idPool.NextNHGID()
-			nhgEntry = fluent.NextHopGroupEntry().WithID(nhgID).WithNetworkInstance(defaultVRF).WithBackupNHG(nhgDecapToDefault)
-		}
-		nhgEntry = nhgEntry.AddNextHop(nhID, 1)
+	allTunnelIPs := v4TunnelIPAddrs.AllIPs()
+	for idx, ip := range allTunnelIPs {
+		if idx%reEncapNHGRatio == 0 {
+			nhgID = idPool.NextNHGID()
+			nhgEntry := fluent.NextHopGroupEntry().WithID(nhgID).WithNetworkInstance(defaultVRF).WithBackupNHG(nhgDecapToDefault)
+			nhID = idPool.NextNHID()
+			// Use same NHG and DecapEncap tunnel for "reEncapNHGRatio" prefixes
+			destinationTunnelIP := allTunnelIPs[(idx/reEncapNHGRatio)%len(allTunnelIPs)]
+			vrfDefault.NHs = append(vrfDefault.NHs,
+				fluent.NextHopEntry().WithIndex(nhID).WithDecapsulateHeader(fluent.IPinIP).WithEncapsulateHeader(fluent.IPinIP).
+					WithNetworkInstance(defaultVRF).WithIPinIP(tunnelSrcIP, destinationTunnelIP),
+			)
+			nhgEntry = nhgEntry.AddNextHop(nhID, 1)
+			vrfDefault.NHGs = append(vrfDefault.NHGs, nhgEntry)
+		}
 		vrfRConf.V4Entries = append(vrfRConf.V4Entries,
 			fluent.IPv4Entry().WithPrefix(ip+"/32").WithNextHopGroup(nhgID).WithNetworkInstance(VRFR).WithNextHopGroupNetworkInstance(defaultVRF),
 		)
 	}
Contributor

Severity: high

The current logic for creating Next-Hop Groups (NHGs) for re-encapsulation can lead to creating more NHGs than specified by param.V4ReEncapNHGCount. This happens when param.V4TunnelCount is not perfectly divisible by param.V4ReEncapNHGCount.

For example, with V4TunnelCount=20000 and V4ReEncapNHGCount=768, the reEncapNHGRatio becomes 26 due to integer division. The condition idx % reEncapNHGRatio == 0 will then trigger ceil(20000 / 26) = 770 times, creating 770 NHGs instead of the intended 768. This also results in creating 770 decap+encap tunnels.

To ensure that the correct number of NHGs and tunnels is created, I suggest using a more robust distribution logic that correctly maps prefixes to the intended number of NHGs, regardless of whether the counts are perfectly divisible.

 	allTunnelIPs := v4TunnelIPAddrs.AllIPs()
 	var nhgIndex int = -1
 	for idx, ip := range allTunnelIPs {
 		currentNHGIndex := (idx * param.V4ReEncapNHGCount) / param.V4TunnelCount
 		if currentNHGIndex != nhgIndex {
 			nhgIndex = currentNHGIndex
 			nhgID = idPool.NextNHGID()
 			nhgEntry := fluent.NextHopGroupEntry().WithID(nhgID).WithNetworkInstance(defaultVRF).WithBackupNHG(nhgDecapToDefault)
 			nhID := idPool.NextNHID()
 			// Use same NHG and DecapEncap tunnel for a group of prefixes
 			destinationTunnelIP := allTunnelIPs[nhgIndex%len(allTunnelIPs)]
 			vrfDefault.NHs = append(vrfDefault.NHs,
 				fluent.NextHopEntry().WithIndex(nhID).WithDecapsulateHeader(fluent.IPinIP).WithEncapsulateHeader(fluent.IPinIP).
 					WithNetworkInstance(defaultVRF).WithIPinIP(tunnelSrcIP, destinationTunnelIP),
 			)
 			nhgEntry = nhgEntry.AddNextHop(nhID, 1)
 			vrfDefault.NHGs = append(vrfDefault.NHGs, nhgEntry)
 		}
 		vrfRConf.V4Entries = append(vrfRConf.V4Entries,
 			fluent.IPv4Entry().WithPrefix(ip+"/32").WithNextHopGroup(nhgID).WithNetworkInstance(VRFR).WithNextHopGroupNetworkInstance(defaultVRF),
 		)
 	}
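
To make the count difference concrete, here is a small standalone sketch (not part of this PR); the countNHGs helper and its names are invented for illustration. It tallies how many NHGs each strategy allocates for the values quoted above, printing 770 for the modulo approach and 768 for the proportional one.

package main

import "fmt"

// countNHGs compares how many NHGs two allocation strategies create when
// spreading tunnelCount tunnel prefixes over a target of nhgCount NHGs.
func countNHGs(tunnelCount, nhgCount int) (moduloNHGs, proportionalNHGs int) {
	ratio := tunnelCount / nhgCount // integer division: 20000/768 = 26
	last := -1
	for idx := 0; idx < tunnelCount; idx++ {
		// Strategy in the diff above: open a new NHG whenever idx is a multiple of ratio.
		if idx%ratio == 0 {
			moduloNHGs++
		}
		// Suggested strategy: map idx proportionally onto [0, nhgCount).
		if cur := (idx * nhgCount) / tunnelCount; cur != last {
			last = cur
			proportionalNHGs++
		}
	}
	return moduloNHGs, proportionalNHGs
}

func main() {
	m, p := countNHGs(20000, 768)
	fmt.Printf("modulo strategy: %d NHGs, proportional strategy: %d NHGs\n", m, p)
	// Output: modulo strategy: 770 NHGs, proportional strategy: 768 NHGs
}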

@coveralls

Pull Request Test Coverage Report for Build 21517132914

Details

  • 0 of 13 (0.0%) changed or added relevant lines in 1 file are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage remained the same at 9.919%

Changes Missing Coverage

File                        Covered Lines   Changed/Added Lines   %
internal/tescale/scale.go   0               13                    0.0%

Totals Coverage Status

Change from base Build 21508799190: 0.0%
Covered Lines: 2247
Relevant Lines: 22653

💛 - Coveralls
