Add static rate limit on slice traffic #324
Conversation
As of today we have only non-GBR bearers, so a single rate limit looks OK. But when we bring GBR bearers into the picture, can we have 2 separate rate controllers? 1 for GBR bearers and 1 for non-GBR bearers? E.g., a slice supporting a 500 Mbps data rate for GBR bearers/flows, and 200 Mbps for all non-GBR bearers?
Very good question. I think the BESS scheduler can be configured to support that, as long as we can identify those GBR/non-GBR flows. @ccascone and I will sync on BESS QoS; we might know more soon.
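The two-controller idea discussed above could look roughly like this; a minimal Go sketch, where the `BearerType` and `SliceLimits` names are illustrative assumptions, not identifiers from the codebase:

```go
package main

import "fmt"

// BearerType distinguishes GBR from non-GBR bearers (illustrative only).
type BearerType int

const (
	NonGBR BearerType = iota
	GBR
)

// SliceLimits holds one independent rate limit per bearer class,
// mirroring the "2 separate rate controllers" idea above.
type SliceLimits struct {
	GbrMbps    uint64 // e.g. 500 Mbps for GBR bearers/flows
	NonGbrMbps uint64 // e.g. 200 Mbps for all non-GBR bearers
}

// limitFor returns the rate limit (Mbps) enforced for a flow of the
// given bearer type.
func (s SliceLimits) limitFor(t BearerType) uint64 {
	if t == GBR {
		return s.GbrMbps
	}
	return s.NonGbrMbps
}

func main() {
	s := SliceLimits{GbrMbps: 500, NonGbrMbps: 200}
	fmt.Println(s.limitFor(GBR), s.limitFor(NonGBR)) // prints 500 200
}
```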
conf/up4.bess
Outdated
executeFAR::Split(size=1, attribute='action')
afterFarMerge = executeFAR
Maybe moving the rate limiter after qerLookup and before farLookup is better:
- avoids extra work for dropped packets
- no need to account for GTP header overhead
I can see now why the programming of entries will need to go into the agent. The GBR flows cannot be dropped after QERLookup. So basically, the remaining bandwidth available on the slice needs to be the entry. Since GBR flows are added dynamically, the remaining bandwidth in the entry needs to be updated accordingly.
We need to work our way backwards from the total available wire bandwidth (an extra 20 B per packet) per slice, for each logical interface N3/N6/N9, because even though N6 and N9 both face the core, N6 does not have GTP-U overhead and N9 does.
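The per-interface accounting described above can be sketched as follows; the header sizes are standard Ethernet and GTP-U encapsulation sizes, but the interface handling is a simplified assumption:

```go
package main

import "fmt"

const (
	wireOverhead = 20 // 8 B preamble/SFD + 12 B inter-frame gap per packet
	gtpuEncap    = 36 // outer IPv4 (20) + UDP (8) + GTP-U (8) headers
)

// Iface models the UPF's logical interfaces (illustrative only).
type Iface int

const (
	N3 Iface = iota // access side, GTP-U encapsulated
	N6              // core side toward the data network, no GTP-U
	N9              // core side toward another UPF, GTP-U encapsulated
)

// wireLen returns the on-wire length of a packet whose inner
// (unencapsulated) Ethernet frame length is frameLen, when sent on the
// given interface. N6 carries no GTP-U overhead; N3 and N9 do.
func wireLen(frameLen int, i Iface) int {
	n := frameLen + wireOverhead
	if i == N3 || i == N9 {
		n += gtpuEncap
	}
	return n
}

func main() {
	// A 1500 B frame costs 1520 B of wire time on N6, but 1556 B on N9.
	fmt.Println(wireLen(1500, N6), wireLen(1500, N9)) // prints 1520 1556
}
```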
This parameter allows adjusting the packet length value passed to the trTCM meter. It can be used to exclude certain headers or to account for encapsulation.
This change makes the metering packet length value adjustable per flow entry, instead of module-wide as before. This allows applying different values for uplink and downlink metering.
I'm not sure about the default value for
Sad limitation of proto3.
I think I understand how we can use int64. Today we would have to guess what we should set as the slice limit in the config. If we support int64, we can add CRC + 20 bytes of on-wire overhead to total_len to get the wire length of that frame. This way our config can talk in terms of absolute wire throughput if needed, without having to assume a packet size mix. Can you please put it back to int64? Sorry about the flip-flop.
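The proposed adjustment amounts to simple arithmetic; a sketch, assuming total_len counts the Ethernet frame without its 4-byte CRC:

```go
package main

import "fmt"

const (
	crcLen    = 4  // Ethernet FCS, assumed not included in total_len
	ifgPreLen = 20 // 12 B inter-frame gap + 8 B preamble/SFD
)

// onWireLen converts a frame's total_len into the number of bytes of
// wire time it actually occupies, so the slice limit in the config can
// be expressed as absolute wire throughput.
func onWireLen(totalLen int64) int64 {
	return totalLen + crcLen + ifgPreLen
}

func main() {
	// A 1500 B frame occupies 1524 B of wire time.
	fmt.Println(onWireLen(1500)) // prints 1524
}
```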
pfcpiface/bess.go
Outdated
Cbs: cbs, /* committed burst size */
Pbs: pbs, /* peak burst size */
Ebs: ebs, /* excess burst size */
DeductLen: 50, /* exclude Ethernet, IP, UDP, GTP headers */
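For reference, the 50-byte value is just the sum of the headers named in the comment; a quick sanity check of that arithmetic:

```go
package main

import "fmt"

const (
	ethLen  = 14 // Ethernet header (no VLAN tag)
	ipLen   = 20 // outer IPv4 header
	udpLen  = 8  // UDP header
	gtpuLen = 8  // GTP-U header without extension headers
)

func main() {
	// 14 + 20 + 8 + 8 = 50, matching DeductLen above.
	fmt.Println(ethLen + ipLen + udpLen + gtpuLen) // prints 50
}
```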
We do support GTPU extension header(s) too. What will happen then?
Good point. I see two options:
- Passing `deduct_len` per packet, so we can use different values depending on the presence of the GTP-U extension header. Not sure if possible with the existing QoS module, might require some changes (e.g., to read `deduct_len` from packet metadata)
- Read `deduct_len` from the JSON config. This would work only if all traffic seen by the same BESS-UPF instance will have GTP-U extensions or not. This is a fair assumption for now in Aether, where we use different UPFs for 5G and 4G base stations, but I'm not sure if this is a valid assumption for Intel's use cases.
Whichever option we decide to pursue, I suggest we push this to the backlog (the SD-Fabric team can take care of it), leave 50 (since we mostly work with 4G base stations for now), and put a FIXME comment.
Added a TODO right above the entry.
Tracked here: https://jira.opennetworking.org/browse/SDFAB-571
We can solve this simply by making sure the config talks about the limits from the wire perspective, as described here:
#324 (comment)
Accordingly, we can set `deduct_len` to -24 bytes. This way we do not have to care whether there is an extension header or not, encapsulation or not.
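The negative `deduct_len` trick works out as follows; a sketch assuming the meter computes the metered length as pkt_len minus deduct_len:

```go
package main

import "fmt"

// meteredLen applies deduct_len the way the meter is assumed to:
// subtracting it from the captured packet length. With deduct_len = -24,
// the meter effectively adds 4 B CRC plus 20 B preamble/inter-frame gap,
// i.e. it charges the true wire footprint regardless of any GTP-U
// extension headers or encapsulation already present in the frame.
func meteredLen(frameLen, deductLen int) int {
	return frameLen - deductLen
}

func main() {
	fmt.Println(meteredLen(1500, -24)) // prints 1524
}
```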
Flipped it back to int64. Also tried making the field optional. Let me know what you think.
This reverts commit f7acb30. N9 is not addressed in this PR, but can be added later.
@krsna1729 I think I addressed all comments as well as possible, given the constraints.
As per the planning for SD-Core release 1.6, this PR adds the option to configure a slice-wide rate limit in both the uplink and downlink directions. The rates and burst sizes can be configured via the up4.json config file and are applied at startup by the pfcpagent. Run-time configuration is possible.
The meter rates account for various headers and subtract them before counting, to effectively meter user/UE traffic rather than link rates. In the upstream direction the Ethernet header is ignored; downstream, it's the Ethernet/IP/UDP/GTP encapsulation header combo.
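The per-direction deduction can be illustrated as below; the constant names are assumptions for illustration, not identifiers from the code:

```go
package main

import "fmt"

const (
	uplinkDeduct   = 14               // Ethernet header only
	downlinkDeduct = 14 + 20 + 8 + 8 // Ethernet + IP + UDP + GTP-U combo
)

// meteredLen is the byte count the meter actually charges for a packet,
// after the direction-specific headers are subtracted, so both directions
// meter the user/UE traffic rather than the link rate.
func meteredLen(pktLen, deduct int) int {
	return pktLen - deduct
}

func main() {
	// For a 1000 B packet: 986 B metered uplink, 950 B metered downlink.
	fmt.Println(meteredLen(1000, uplinkDeduct), meteredLen(1000, downlinkDeduct))
}
```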
Open questions: