While trying to enforce bandwidth/latency limits, I'm not sure how the settings are ultimately applied. It certainly seems that latency plays a big role with respect to bandwidth as well. Latency by itself is mostly applied correctly; the problem is the effect it has on the measured bandwidth. I'm running the following setups with 4 containers, deployed as two server/client pairs. These are only a few examples that show the problem; I've run many more similar setups. I'm using iperf3 to measure the actual bandwidth between containers.
- Setting `bandwidth: 100Mbps` (bidirectional) and `delay: 3ms` for all 4 containers. Ping results between containers are indeed ~3ms, but bandwidth is actually ~750 Mbits/sec.
- Setting `bandwidth: 100Mbps` (bidirectional) and `delay: 30ms` for all 4 containers. Ping results between containers are indeed ~30ms, but bandwidth is actually ~710 Mbits/sec.
- Setting `bandwidth: 100Mbps` (bidirectional) and `delay: 60ms` for all 4 containers. Ping results between containers are indeed ~60ms, but bandwidth is actually ~380 Mbits/sec.
- Setting `bandwidth: 1000Mbps` (bidirectional) and `delay: 3ms` for all 4 containers. Ping results between containers are indeed ~3ms, but bandwidth is actually ~7.10 Gbits/sec.
- Setting `bandwidth: 1000Mbps` (bidirectional) and `delay: 30ms` for all 4 containers. Ping results between containers are indeed ~30ms, but bandwidth is actually ~790 Mbits/sec.
- Setting `bandwidth: 10000Mbps` (bidirectional) and `delay: 3ms` for all 4 containers. Ping results between containers are indeed ~3ms, but bandwidth is actually ~7.10 Gbits/sec.
- Setting `bandwidth: 10000Mbps` (bidirectional) and `delay: 30ms` for all 4 containers. Ping results between containers are indeed ~30ms, but bandwidth is actually ~780 Mbits/sec.
- Setting `bandwidth: 10000Mbps` (bidirectional) and `delay: 60ms` for all 4 containers. Ping results between containers are indeed ~60ms, but bandwidth is actually ~380 Mbits/sec.
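One pattern I noticed in the numbers above (this is my own back-of-the-envelope interpretation, not anything from fogify's docs): in all the cases where the result depends on the delay rather than the configured rate, the measured throughput multiplied by the ping time works out to a roughly constant bandwidth-delay product of ~2.7–3.0 MB. That is what you would expect if a fixed window or buffer somewhere, rather than the configured bandwidth, is the limiting factor. The 100Mbps/3ms case (~750 Mbits/sec) is the one outlier that does not fit this pattern.

```python
# Back-of-the-envelope check: if throughput is capped by a fixed
# window/buffer W rather than by the configured rate, then
# throughput ~= W / RTT, so throughput * RTT should be ~constant.
# (Interpretation is mine; the numbers are from the measurements above.)

measurements = [
    # (measured ping time in seconds, measured throughput in bits/sec)
    (0.030, 710e6),   # bandwidth: 100Mbps,   delay: 30ms
    (0.060, 380e6),   # bandwidth: 100Mbps,   delay: 60ms
    (0.003, 7.10e9),  # bandwidth: 1000Mbps,  delay: 3ms
    (0.030, 790e6),   # bandwidth: 1000Mbps,  delay: 30ms
    (0.003, 7.10e9),  # bandwidth: 10000Mbps, delay: 3ms
    (0.030, 780e6),   # bandwidth: 10000Mbps, delay: 30ms
    (0.060, 380e6),   # bandwidth: 10000Mbps, delay: 60ms
]

for rtt, bps in measurements:
    implied_window_mb = bps * rtt / 8 / 1e6  # bits -> megabytes
    print(f"delay {rtt * 1000:4.0f} ms, {bps / 1e6:7.0f} Mbit/s "
          f"-> implied window ~{implied_window_mb:.1f} MB")
```

Every row lands between roughly 2.7 and 3.0 MB, which at least suggests the delay cases are window-limited rather than rate-limited.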
I should note that on my system, connecting 2 docker containers with no traffic shaping whatsoever (and without fogify) results in actual bandwidth measurements of ~27 Gbits/sec, so there is no bottleneck on the host system.
If you want to try similar setups, you can look into the `start-network-test.sh` script in this repo: https://github.com/Datalab-AUTH/fogify-db-benchmarks