Updated debloat and debloat.sh slightly
debloat does not set the bql limit at 10Mbit as low as debloat.sh now
does.

Still need to detect bridges
Dave Täht committed Dec 12, 2012
1 parent 38da877 commit cc4ee67
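
Concretely, the new debloat.sh behaviour at 10Mbit boils down to roughly the following (a condensed sketch built from the values in the diff below; eth0 is illustrative):

IFACE=eth0
for F in /sys/class/net/$IFACE/queues/tx-*/byte_queue_limits/limit_max
do
    echo 1514 > $F    # BQL: roughly one full-size packet queued in the driver
done
tc qdisc add dev $IFACE root fq_codel quantum 500 ecn limit 400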
Showing 2 changed files with 30 additions and 12 deletions.
16 changes: 9 additions & 7 deletions src/debloat
@@ -114,7 +114,7 @@ is helpful.
This started out life as a shell script to exercise qfq.
Now it does a lot more than that and is getting crufty.
SFQ is now the default. SFQ itself has been improved significantly
FQ_CODEL is now the default. SFQ has been improved significantly
in Linux 3.3 (eliminating a head of line problem), and in this case
no new TC utility is required. Also a bug in red was fixed, and no
new tc utility is required there either. So if you were using either
@@ -135,20 +135,20 @@ and with GSO on, at 100Mbit, I have seen latency spikes of up to 70ms.
(Not recently tested, however)
A per queue limit of 2-3 large packets appears to be the best
A per queue limit of 2 large packets appears to be the best
compromise at 100Mbit and below. So typically I hammer down BQL to
4.5k or 3k at < 100Mbit, and turn GSO/TSO off, and as a result see
3k at < 100Mbit, and turn GSO/TSO off, and as a result see
ping against load latencies in the 1 to 2ms range, which is about
what you would expect. I have tried 1500 bytes, which limited the top
end performance to about 84Mbit.
end performance to about 84Mbit. At 10Mbit, 1514 works on most OSes.
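
A minimal sketch of that tuning for a 100Mbit-or-slower link (the interface name and the exact offload list are illustrative):

ethtool -K eth0 tso off gso off gro off
for F in /sys/class/net/eth0/queues/tx-*/byte_queue_limits/limit_max
do
    echo 3000 > $F    # cap BQL at roughly two large packets
done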
For comparison, you will see PFIFO_FAST doing 130+ms, pre BQL, no
SFQ at 100Mbit.
* A BQL enabled ethernet device driver is helpful
But there is currently no good way to detect if you have one at run
time. 6-7 of the most major drivers have been converted to BQL, more
time. 10 of the most major drivers have been converted to BQL, more
remain.
* Wireless still has problems
@@ -199,10 +199,12 @@ I have tried pfifo_drop_head, SFB, and RED here. All had bugs until
pfifo_drop_head generates interesting results.
The very new combination of REDSFQ which compensates for both bytes
and packets is very interesting, as it combines everything we have
and packets was very interesting, as it combines everything we have
learned in the past year into one single qdisc which can be brought up
as a shaper in three lines of code.
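
For illustration, such a three line SFQ+RED bring-up as a shaper could look something like this (the rate and RED parameters are placeholder values, not tuned figures from this script):

tc qdisc add dev eth0 root handle 1: htb default 1
tc class add dev eth0 parent 1: classid 1:1 htb rate 10mbit
tc qdisc add dev eth0 parent 1:1 sfq headdrop limit 300 redflowlimit 100000 min 8000 max 60000 probability 0.02 ecn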
FQ_Codel is better.
In other news:
I have not tried the new 'adaptive red' implementation as a stand
@@ -1619,7 +1621,7 @@ local function ethernet_fq_codel(queues)
end

local function ethernet_fq_codel_ll(queues)
qa("handle %x root fq_codel quantum 1000 ",10)
qa("handle %x root fq_codel limit 1000 quantum 1000 ",10)
end

local function ethernet_codel(queues)
26 changes: 21 additions & 5 deletions src/debloat.sh
@@ -8,8 +8,12 @@

LL=1 # go for lowest latency
ECN=1 # enable ECN
BQLLIMIT=3000 # at speeds below 100Mbit, 2 big packets is enough
BQLLIMIT100=3000 # at speeds below 100Mbit, 2 big packets is enough
BQLLIMIT10=1514 # at speeds below 10Mbit, 1 big packet is enough.
# Actually it would be nice to go to just one packet
QDISC=fq_codel # There are multiple variants of fq_codel in testing
FQ_LIMIT="" # the default 10000 packet limit mucks with slow start at speeds
# of 1Gbit and below. Somewhat arbitrary figures selected.

[ -z "$IFACE" ] && echo error: $0 expects IFACE parameter in environment && exit 1
[ -z `which ethtool` ] && echo error: ethtool is required && exit 1
@@ -19,15 +23,19 @@ QDISC=fq_codel # There are multiple variants of fq_codel in testing
# BUGS - need to detect bridges.
# - Need filter to distribute across mq ethernet devices
# - needs an "undebloat" script for ifdown to restore BQL autotuning
# - should use a lower $QDISC limit at wifi and <10Gbit

S=/sys/class/net
FQ_OPTS=""
#FQ_OPTS="FLOWS 2048 TARGET 5ms LIMIT 1000"
#FQ_OPTS="FLOWS 2048 TARGET 5ms"

[ $LL -eq 1 ] && FQ_OPTS="$FQ_OPTS quantum 500"
[ $ECN -eq 1 ] && FQ_OPTS="$FQ_OPTS ecn"

FLOW_KEYS="src,dst,proto,proto-src,proto-dst"
# For 5-tuple (flow) fairness when the same device is performing NAT
#FLOW_KEYS="nfct-src,nfct-dst,nfct-proto,nfct-proto-src,nfct-proto-dst"


# Offloads are evil in the quest for low latency
# And ethtool will abort if you attempt to turn off a
# nonexistent offload.
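
One way to guard against that abort (a sketch; the feature names are those printed by ethtool -k, and this is not necessarily how the script's own et function, which is not shown in this diff, does it):

ethtool -k $IFACE | grep -q "tcp-segmentation-offload: on" && ethtool -K $IFACE tso off
ethtool -k $IFACE | grep -q "generic-segmentation-offload: on" && ethtool -K $IFACE gso off
ethtool -k $IFACE | grep -q "generic-receive-offload: on" && ethtool -K $IFACE gro off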
@@ -66,20 +74,28 @@ mq() {
tc qdisc add dev $IFACE parent 1:$(printf "%x" $I) $QDISC $FQ_OPTS
I=`expr $I + 1`
done
I=`expr $I - 1`
tc filter add dev $IFACE prio 1 protocol ip parent 1: handle 100 \
flow hash keys ${FLOW_KEYS} divisor $I baseclass 1:1
}

fq_codel() {
tc qdisc add dev $IFACE root $QDISC $FQ_OPTS
tc qdisc add dev $IFACE root $QDISC $FQ_OPTS $FQ_LIMIT
}

fix_speed() {
local SPEED=`cat $S/$IFACE/speed 2> /dev/null`
if [ -n "$SPEED" ]
then
[ "$SPEED" = 4294967295 ] && echo "no ethernet speed selected. debloat estimate will be WRONG"
[ "$SPEED" -lt 1001 ] && FQ_LIMIT=1200
if [ "$SPEED" -lt 101 ]
then
[ $LL -eq 1 ] && et # for lowest latency disable offloads
for I in /sys/class/net/$IFACE/queues/tx-*/byte_queue_limits/limit_max
BQLLIMIT=$BQLLIMIT100
FQ_LIMIT="limit 800"
[ "$SPEED" -lt 11 ] && BQLLIMIT=$BQLLIMIT10 && FQ_LIMIT="limit 400"
for I in /sys/class/net/$IFACE/queues/tx-*/byte_queue_limits/limit_max
do
echo $BQLLIMIT > $I
done
