link: Changing bandwidth limits of TCIntf at runtime #650


@AlexanderFroemmgen

@dstohr, @jfornoff and I found two problems in the config function for changing the link bandwidth limits (TCIntf) at runtime:

1. The config function deletes the existing tc configuration and creates a new one. This prevents in-place modification of the tc configuration and introduces measurement artefacts.
2. The config function uses a shortcut for performance reasons which prevents changing the limit from a non-None value to None.

This pull request fixes both problems and adds an example file that illustrates the discussed aspects. The attached sample output figure from the example shows the measurement artefacts.
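For context, here is a minimal sketch of the kind of runtime change both problems affect. It is not the PR's example file: the topology, host names, and the 10 → 5 Mbit/s → None schedule are made up for illustration, and only the public Mininet API (TCLink, TCIntf.config) is used.

```python
#!/usr/bin/env python
"""Minimal sketch of changing a TCLink bandwidth limit at runtime.
Not the PR's example file: topology, host names, and the bandwidth
schedule below are illustrative only."""

from time import sleep

from mininet.net import Mininet
from mininet.link import TCLink
from mininet.topo import Topo


class TwoHosts(Topo):
    "Two hosts connected by a single rate-limited link."
    def build(self):
        h1 = self.addHost('h1')
        h2 = self.addHost('h2')
        self.addLink(h1, h2, cls=TCLink, bw=10)  # start at 10 Mbit/s


if __name__ == '__main__':
    net = Mininet(topo=TwoHosts())
    net.start()
    intf = net.get('h1').defaultIntf()  # the TCIntf carrying the shaper
    sleep(5)
    # Runtime change: with the unpatched code this deletes and recreates
    # the tc configuration; with the PR it is modified in place.
    intf.config(bw=5)
    sleep(5)
    # Removing the limit entirely (bw=None) is the transition blocked
    # by the caching shortcut that the PR also fixes.
    intf.config(bw=None)
    net.stop()
```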

[Figure: sample output of the example, illustrating the measurement artefacts]

We will soon use these changes to provide further support for changing bandwidths, as presented in “Capture and Replay: Reproducible Network Experiments in Mininet” (http://dl.acm.org/citation.cfm?id=2959076).

@lantz
Member
lantz commented Aug 11, 2016 edited

Interesting. A few thoughts:

  1. None should in theory be the same as a huge limit, but in practice it doesn't seem to be, so perhaps we do wish to change the handling slightly as you suggest.
  2. Dynamic TCLink changes haven't been used a lot as far as I know, so it's good to see someone taking a look at how they work out in practice. There are certainly classes of experiments (e.g. MPTCP) which can make good use of the feature. And as you note, the original design choices could probably be improved upon.
  3. Your graph doesn't seem to show a huge difference between smooth and hard. If I understand it correctly, it is showing that smooth is taking effect faster than hard (which is probably good), but also that smooth seems to be, ah, less smooth in terms of the actual throughput (which is bad.)
  4. It's always great to see more Mininet demos!
  5. I wasn't entirely able to discern what you were doing (the PDF seems missing from the ACM site?) but it appears that you have traces of bandwidth changes over time (e.g. on a wireless link whose quality varies over time, or a shared uplink which is rate limited based on usage) which you are applying to an experiment? That seems interesting and possibly useful.
@AlexanderFroemmgen

Thanks for the feedback.

Regarding 3:
The attached graph just illustrates the effect. I attached a second graph with a delay of 20 ms. The delay amplifies the artefact, as TCP requires more time to recover from the deleted traffic shaper. Deleting the tc configuration means, e.g. for the sch_htb shaper, that htb_delete and htb_destroy (http://lxr.free-electrons.com/source/net/sched/sch_htb.c#L1237) are called.

The proposed modification is especially important for recurrent bandwidth limit changes, as otherwise the artefacts build up.

[Figure: second sample output, with a link delay of 20 ms]
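To make the difference concrete, a rough sketch of the two update styles, expressed as raw tc commands run on a Mininet host. The handle 5: / class 5:1 layout mirrors what TCIntf normally installs, but the interface name, rates, and helper names here are assumptions, not the PR's actual code.

```python
"""Sketch of the two runtime-update styles as raw tc commands.
The handle 5: / class 5:1 layout follows TCIntf's usual setup; the
interface name, rates, and function names are assumptions."""


def update_by_recreate(host, intf='h1-eth0', bw=5.0):
    "Old behaviour: tear down the qdisc tree and rebuild it."
    # Deleting the root qdisc invokes htb_delete/htb_destroy in the
    # kernel, discarding all shaper state.
    host.cmd('tc qdisc del dev %s root' % intf)
    host.cmd('tc qdisc add dev %s root handle 5: htb default 1' % intf)
    host.cmd('tc class add dev %s parent 5: classid 5:1 '
             'htb rate %fMbit burst 15k' % (intf, bw))


def update_in_place(host, intf='h1-eth0', bw=5.0):
    "Proposed behaviour (roughly): adjust the existing class in place."
    host.cmd('tc class change dev %s parent 5: classid 5:1 '
             'htb rate %fMbit burst 15k' % (intf, bw))
```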

Regarding 5:
For the additional replay functionality, we plan to open a second pull request that provides a simple API for replaying bandwidth traces. Although both pull requests are related, this one is the more general and provides the basic functionality for the second.
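As a rough illustration of what such a replay API might look like (the function name and the (duration, bandwidth) trace format are hypothetical, not the planned API):

```python
from time import sleep


def replay_bw_trace(intf, trace):
    """Apply a list of (duration_s, bw_mbit) steps to a TCIntf.
    With the unpatched delete-and-recreate behaviour every step adds
    an artefact, which is why recurrent changes suffer most; with the
    in-place change the shaper state survives each step."""
    for duration, bw in trace:
        intf.config(bw=bw)  # bw may also be None to lift the limit
        sleep(duration)


# e.g. replay_bw_trace(net.get('h1').defaultIntf(), [(5, 10), (5, 2), (5, 8)])
```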

What are the next steps for deciding whether this pull request should be accepted? Please let me know if I should provide additional measurements and examples!

@lantz
Member
lantz commented Aug 23, 2016

I think it's mostly OK, although there are a bunch of minor changes I'd want to make to it, including cosmetic changes (e.g. matching the Mininet Python style), and some thought about how the configuration caching should work, if it is in fact necessary.

@lantz lantz added the discussion label Jan 6, 2017