fanf

Windows, rates, and congestion control

7th Apr 2006 | 18:19

On Wednesday I asked about alternatives to window-based protocols. One answer is "rate-based": see, for example, this 1995 paper on congestion control in ATM.

Of course TCP has an implicit measure of its sending rate, because it maintains both a congestion window (cwnd) and a round-trip time estimate (RTT): its average sending rate is cwnd/RTT. But the rate is not explicit in the way it is for ATM, and in fact TCP's rate can be very bursty and is often unstable in adverse conditions. However, the cwnd does give you a direct measure of the amount of buffering you need in case lost packets have to be retransmitted.
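To make the arithmetic concrete, here is a sketch of my own (not any stack's code):

```python
def tcp_avg_rate(cwnd_bytes: int, rtt_seconds: float) -> float:
    """Implicit average sending rate of a TCP flow: cwnd/RTT.

    Illustrative only: real stacks track a smoothed RTT, and the
    instantaneous rate can be much burstier than this average.
    """
    return cwnd_bytes / rtt_seconds

# A 64 KiB window over a 100 ms path averages about 5.2 Mbit/s, and
# the sender must buffer all 64 KiB in case of retransmission.
print(f"{tcp_avg_rate(64 * 1024, 0.100) * 8 / 1e6:.1f} Mbit/s")
```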

Edit: There's a really nice summary of TCP's burstiness in this paper, which brilliantly turns what is commonly viewed as a disadvantage into a benefit for dynamic traffic splitting.

Someone on #cl pointed me to XCP, a really clever congestion control protocol. It has been implemented to work under TCP but could in principle work with any unicast transport protocol. One thing I quite like about it is that it agrees with my intuition that the Internet would work better if routers and hosts co-operated more (one two). In XCP, senders annotate their packets with their current RTT and cwnd (measured by TCP in the usual way) plus their desired cwnd increase. Routers along the path can adjust the increase according to how busy their links are, and can even make it negative if there is too much traffic. The receiver then returns the feedback to the sender in its acks.

The really brilliant thing is that routers do not need to keep any per-flow state. Compare the ATM paper above, which does require per-flow state and is therefore hopelessly unscalable: imagine the number of concurrent flows, and the flow churn rate, of a router on the LINX! Furthermore, XCP separates aggregate efficiency from fair allocation of bandwidth between flows. The fairness controller can implement different policies, which could allow for more precise QoS guarantees, bandwidth allocation based on price paid, etc.
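A much-simplified sketch of the feedback loop; the field names and the router's one-line allocation rule here are my own crude stand-ins, not XCP's actual header format or controller:

```python
from dataclasses import dataclass

@dataclass
class CongestionHeader:
    rtt: float       # sender's measured round-trip time, seconds
    cwnd: float      # sender's current congestion window, bytes
    feedback: float  # requested cwnd change, bytes; routers may lower it

def router_update(hdr: CongestionHeader, spare_bytes_per_sec: float) -> None:
    """Cap the requested increase by the spare capacity this router can
    hand out over one of the flow's RTTs; with no spare capacity the
    feedback goes negative, telling the sender to shrink its window.
    (The real XCP controller works from aggregate link load and splits
    efficiency from fairness; this collapses both into one crude cap.)"""
    hdr.feedback = min(hdr.feedback, spare_bytes_per_sec * hdr.rtt)

def sender_on_ack(cwnd: float, echoed_feedback: float) -> float:
    """The receiver echoes the (possibly reduced) feedback in its acks,
    and the sender applies it directly instead of probing blindly."""
    return max(1.0, cwnd + echoed_feedback)
```

Note that all the state lives in the packet itself: the router computes its adjustment from link load alone, with no per-flow table.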

Really really cool, but really really hard to deploy widely. It's the kind of thing that I suppose would fit in with David D. Clark's Future Internet Network Design research programme, which is supposed to come up with new ideas for the long term. "To do that, you have to free yourself from what the world looks like now."


Comments {4}

from: dwmalone
date: 8th Apr 2006 20:54 (UTC)

We held a workshop on congestion control last summer, including some work on an experimental XCP implementation for FreeBSD. The program is here, including the slides. Another option for congestion control is to use backpressure, but the network rarely provides you with explicit and useful backpressure (beyond your local switch).


Tony Finch

from: fanf
date: 8th Apr 2006 21:26 (UTC)

Simple backpressure happens too slowly for use on a network.

One thing I wondered was whether it would be worth using a more sophisticated model when transmitting on an ethernet (especially wifi), to minimise the probability of a collision.
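Back-of-envelope (my numbers, not from any paper): if n stations each pick a backoff slot uniformly from a window of W slots, a given attempt collides with probability about 1 - (1 - 1/W)^(n-1).

```python
def p_collision(n_stations: int, cw_slots: int) -> float:
    """Chance that at least one of the other n-1 stations picked the
    same slot, assuming independent uniform choices (a crude model)."""
    return 1.0 - (1.0 - 1.0 / cw_slots) ** (n_stations - 1)

# Ten stations contending over a 32-slot window collide roughly a
# quarter of the time, which is why a smarter model looks tempting.
print(f"{p_collision(10, 32):.1%}")  # ~24.9%
```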


from: dwmalone
date: 9th Apr 2006 11:27 (UTC)

Would backpressure be slower than (say) TCP waiting about an RTT to find out that there had been a loss?

There are certainly interesting interactions between TCP and shared media (because you may have multiple buffers that the TCP streams don't share) and between TCP and media where transmission chances are shared with ethernet-like mechanisms. We've looked at using some of the wifi prioritisation stuff to work around these effects (see here) and other people have suggested things like modifying TCP or active queueing to get around it. As with all this type of research, YMMV ;-)

On wifi there are also other clever things you can do. There was a paper at SIGCOMM last year suggesting a nice way to adjust the random backoffs automatically to match the amount of traffic on the network and reduce the collisions to a reasonable level (I think it was called something like "Idle Sense").
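If I've remembered the idea correctly, each station measures the average number of idle slots between transmissions and steers its contention window towards a PHY-dependent target; a toy sketch (the update constants here are mine, not the paper's):

```python
def adjust_cw(cw: float, observed_idle: float, target_idle: float,
              grow: float = 1.1, shrink: float = 2.0,
              cw_min: float = 16.0) -> float:
    """Steer the contention window by comparing the observed number of
    idle slots against a target (the paper derives the target value;
    these update constants are illustrative, not Idle Sense's own).
    Fewer idle slots than target = too much contention: back off more.
    More idle slots than target = medium under-used: back off less."""
    if observed_idle < target_idle:
        return cw * grow
    return max(cw_min, cw - shrink)
```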

(For faster wifi networks, it looks like the per-packet overheads are really the killer, rather than the collisions. Reducing the actual frame headers seems to be hard, according to the PHY-level guys, as you need to train your receiver. So people are looking at other ways of getting around them.)


Tony Finch

from: fanf
date: 10th Apr 2006 22:47 (UTC)

Ooh, yes, the idle sense paper looks close to one of the possibilities I considered :-) I also thought that more explicit signalling between senders might help, but given the problems 802.11 hosts have with hearing each other it probably wouldn't work. (I haven't read your paper yet...)

Regarding backpressure, surely it would take up to half an RTT for any backpressure to reach you, by which time it is probably too late. Looking at it from this point of view, I think it boils down to windowing.
