
Add static rate limit on slice traffic #324

Merged: 23 commits from slice-rate-limit into master, Sep 20, 2021

Conversation

@pudelkoM (Member) commented Sep 1, 2021

As per planning for SD-Core release 1.6, this PR adds the option to configure a slice-wide rate limit in both the uplink and downlink directions. The rates and burst sizes can be configured via the up4.json config file and are applied at startup by the pfcpagent; run-time configuration is also possible.

The meters account for various headers and subtract their length before counting, so they effectively meter user/UE traffic rather than link rates. In the uplink direction the Ethernet header is excluded; in the downlink direction it is the full Ethernet/IP/UDP/GTP encapsulation header combo.
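For illustration, here is a minimal sketch of that header accounting in Go, assuming the byte counts discussed later in this thread (14 B Ethernet; 14 + 20 + 8 + 8 = 50 B for the downlink encapsulation). The names are invented for this sketch and are not part of the BESS module:

    package main

    import "fmt"

    const (
        ethHdrLen  = 14 // Ethernet II header
        ipv4HdrLen = 20 // outer IPv4 header, no options
        udpHdrLen  = 8  // UDP header
        gtpuHdrLen = 8  // GTP-U header, no extensions
    )

    // meteredLen returns the byte count charged against the slice meter,
    // with encapsulation excluded so that only user/UE traffic counts.
    func meteredLen(frameLen uint64, downlink bool) uint64 {
        deduct := uint64(ethHdrLen) // uplink: already decapsulated here
        if downlink {
            // Ethernet + IPv4 + UDP + GTP-U = 50 B of encapsulation
            deduct = ethHdrLen + ipv4HdrLen + udpHdrLen + gtpuHdrLen
        }
        return frameLen - deduct
    }

    func main() {
        fmt.Println(meteredLen(1514, false)) // 1500
        fmt.Println(meteredLen(1564, true))  // 1514
    }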

Open questions:

  • The current implementation drops packets near the end of the pipeline indiscriminately. Should we be smarter about which packets to drop?
    • Will be solved in later PRs, possibly through a hierarchical scheduler
  • Should the limit be runtime configurable? How?
    • Rates are now configured through the PFCP agent, thus runtime configurable

@pudelkoM pudelkoM added the wip label Sep 1, 2021
@thakurajayL (Contributor) commented:

As of today all bearers are non-GBR, so a single rate limit looks OK. But when we bring GBR bearers into the picture, can we have two separate rate controllers, one for GBR bearers and one for non-GBR bearers?

E.g., a slice supports a 500 Mbps data rate for GBR bearers/flows and 200 Mbps for all non-GBR bearers?

@pudelkoM (Member, Author) commented Sep 2, 2021

> E.g., a slice supports a 500 Mbps data rate for GBR bearers/flows and 200 Mbps for all non-GBR bearers?

Very good question. I think the BESS scheduler can be configured to support that, as long as we can identify those GBR/non-GBR flows. @ccascone and I will sync on BESS QoS; we might know more soon.

Review thread on conf/up4.bess, lines 207 to 208 (outdated):

    executeFAR::Split(size=1, attribute='action')
    afterFarMerge = executeFAR
@pudelkoM (Member, Author) commented Sep 3, 2021

Maybe moving the rate limiter after qerLookup and before farLookup is better:

  • avoids extra work for dropped packets
  • no need to account for GTP header overhead

Member:

I can see now why the programming of entries will need to go into the agent. The GBR flows cannot be dropped after QERLookup, so basically the remaining bandwidth available on the slice needs to be the entry. Since GBR flows are added dynamically, the remaining bandwidth in the entry needs to be updated accordingly.

cc @badhrinathpa

Member:

We need to work our way backwards from the total available wire bandwidth (extra 20 B per frame) per slice, for each logical interface N3/N6/N9, because even though N6 and N9 both say core, N6 does not have GTP-U overhead and N9 does.
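A rough sketch of that per-interface accounting (all names invented for illustration; the 20 B is the Ethernet preamble/SFD plus inter-frame gap, and the CRC is the 4 B FCS, neither of which appears in the captured frame length):

    package upfmeter

    const (
        fcsLen      = 4  // Ethernet CRC, not included in total_len
        preambleLen = 20 // preamble + SFD + inter-frame gap
        gtpuEncLen  = 36 // outer IPv4 (20) + UDP (8) + GTP-U (8)
    )

    // wireLen is what a frame actually costs on the link: the captured
    // length plus CRC and the fixed 20 B of preamble/IPG.
    func wireLen(frameLen uint64) uint64 {
        return frameLen + fcsLen + preambleLen
    }

    // encapOverhead differs per logical interface: N3 and N9 carry
    // GTP-U encapsulation, while N6 (also "core") does not.
    func encapOverhead(iface string) uint64 {
        if iface == "N6" {
            return 0
        }
        return gtpuEncLen // N3, N9
    }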

(Resolved review threads on conf/up4.bess, conf/parser.py, and pfcpiface/bess.go.)
@pudelkoM pudelkoM removed the wip label Sep 9, 2021
@pudelkoM pudelkoM marked this pull request as ready for review September 9, 2021 02:40
(Resolved review threads on pfcpiface/bess.go.)
@krsna1729 (Member) commented Sep 11, 2021

  • Rename adjust_meter_packet_length to something short like deduct_len. We can keep it uint64 with subtraction in code, until we come across a use case where we need addition.
  • Make it default to the size of the Ethernet header when not provided in args. This way the example won't change, the existing qerLookup Go code won't change, and there is a bit less repetitive data to move.
  • Replace the field source interface with the attribute action set in farLookup. This will tell uplink from downlink.
  • Support all 3 interfaces N3, N6 and N9 by additionally checking for the attribute tunnel_out_type, which we currently set to 1 for traffic needing encapsulation in farLookup (see the sketch after this list). Tested with sim mode that N6 and N9 work as expected.
    f.tunnelType = uint8(1)
  • Use the _in pattern used in the file to conditionally insert a module:

        # Insert NTF module, if enabled
        _in = preQoSCounter
        if ntf:
            _in -> ntf
            _in = ntf

        _in -> qerLookup::Qos(fields=[{'attr_name': 'src_iface', 'num_bytes': 1}, \
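A hypothetical sketch of the attribute-based selection suggested above; the function name is invented, and only the tunnel_out_type == 1 convention is taken from the existing farLookup code:

    package upfmeter

    // deductLenFor picks the meter deduction from the tunnel_out_type
    // attribute set in farLookup instead of from the source interface.
    func deductLenFor(tunnelOutType uint8) uint64 {
        const ethLen = 14     // default: Ethernet header only
        const gtpuEncLen = 36 // outer IPv4 + UDP + GTP-U
        if tunnelOutType == 1 { // traffic needing encapsulation (N3/N9)
            return ethLen + gtpuEncLen // 50 B
        }
        return ethLen // plain IP egress (N6)
    }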

pudelkoM and others added 11 commits September 16, 2021 19:18
This parameter allows adjusting the packet length value passed to the trTCM meter. It can be used to exclude certain headers or to account for encapsulation.
This change makes the metering packet length value adjustable per flow entry, instead of module-wide as before. This allows applying different values for uplink and downlink metering.
@pudelkoM (Member, Author) commented:

I'm not sure about the default value for deduct_len. Unset means 0 in protobufs, which is a value we want to be able to use, so we can't map unset to 14.

@krsna1729 (Member) commented:

> I'm not sure about the default value for deduct_len. Unset means 0 in protobufs, which is a value we want to be able to use, so we can't map unset to 14.

Sad limitation of proto3.

> We can keep it uint64 with subtraction in code, until we come across a use case where we need addition

I think I understand how we can use int64. Today we would have to guess what we should set as the slice limit in the config. If we support int64, we can add to total_len the CRC + 20 bytes of on-wire overhead to get the wire length of that frame. This way our config can talk in terms of absolute wire throughput if needed, without having to assume a packet size mix.

Can you please put it back to int64? Sorry about the flip-flop.
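A minimal sketch of why the signed type helps (illustrative only): a negative deduct_len adds bytes instead of subtracting them, turning the captured frame length into the on-wire length:

    package main

    import "fmt"

    // meteredLen applies a signed deduction: positive values exclude
    // headers, negative values add wire overhead (4 B CRC + 20 B
    // preamble/IPG = 24 B).
    func meteredLen(frameLen, deductLen int64) int64 {
        return frameLen - deductLen
    }

    func main() {
        fmt.Println(meteredLen(1514, 50))  // 1464: user payload only
        fmt.Println(meteredLen(1514, -24)) // 1538: absolute wire length
    }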

Review thread on the following diff hunk:

    Cbs:       cbs, /* Committed burst size */
    Pbs:       pbs, /* Peak burst size */
    Ebs:       ebs, /* Excess burst size */
    DeductLen: 50,  /* Exclude Ethernet/IP/UDP/GTP headers */
Member:

We do support GTP-U extension header(s) too. What will happen then?

Contributor:

Good point. I see two options:

  1. Pass deduct_len per packet, so we can use different values depending on the presence of a GTP-U extension header. Not sure if this is possible with the existing QoS module; it might require some changes (e.g., to read deduct_len from packet metadata).
  2. Read deduct_len from the JSON config. This would work only if all traffic seen by the same BESS-UPF instance consistently has GTP-U extensions or not. This is a fair assumption for now in Aether, where we use different UPFs for 5G and 4G base stations, but I'm not sure if it is a valid assumption for Intel's use cases.

Whichever option we decide to pursue, I suggest we push this to the backlog (the SD-Fabric team can take care of it), leave 50 (since we mostly work with 4G base stations for now), and put a FIXME comment.
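A minimal sketch of option 2, assuming a hypothetical up4.json stanza; neither the field names nor the struct exists in the repo as-is:

    package upfmeter

    // SliceMeterConfig mirrors a possible up4.json entry for the slice
    // rate limit; GtpuExtHdrLen would hold the extra bytes to deduct
    // when a deployment always uses GTP-U extension headers.
    type SliceMeterConfig struct {
        UplinkMbps    uint64 `json:"uplinkMbps"`
        DownlinkMbps  uint64 `json:"downlinkMbps"`
        GtpuExtHdrLen int64  `json:"gtpuExtHdrLen"`
    }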

Member (Author):
Added a todo right above the entry.


Member:

We can solve this simply by making sure the config talks about the limits from the wire perspective, as described here: #324 (comment)

Member:

Accordingly, we can set deduct_len to -24 bytes (the CRC plus the 20 B of on-wire overhead mentioned above). This way we do not have to bother whether there is an extension header or not, encapsulation or not.

@pudelkoM (Member, Author) commented:

Flipped it back to int64.

Also tried making the field optional. Let me know what you think.

(Resolved review threads on pfcpiface/bess.go.)
pudelkoM and others added 3 commits September 17, 2021 12:36
This reverts commit f7acb30.
N9 is not addressed in this PR, but can be added later.
@pudelkoM (Member, Author) commented:

@krsna1729 I think I addressed all comments as well as possible, given the constraints.
Support for N9 is moved to the backlog, but it can be based on the initial work done in f7acb30.

@ccascone ccascone merged commit df79a8a into master Sep 20, 2021
@ccascone ccascone deleted the slice-rate-limit branch September 20, 2021 06:43