
Asymmetric Links

In this section we investigate performance over networks that exhibit bandwidth asymmetry, e.g., ADSL, HFC, or combination networks, which may pair a high-bandwidth cable downstream link with a slower telephone upstream link. TCP has been shown to perform poorly over asymmetric links [11], primarily because of ACK loss on the constrained reverse path: ACK loss makes the source bursty (the size of each burst is proportional to the degree of asymmetry), which leads to buffer overflow along the higher-bandwidth forward path, and it also reduces throughput because lost ACKs slow window growth at the source. Lakshman et al. [11] define the normalized asymmetry k of a path as the ratio of the transmission capacity of data packets on the forward path to that of ACK packets on the reverse path. This measurement is important because it shows that the source can emit k times as many data packets as the reverse link can carry ACKs. Once the queues in the reverse path fill, only one ACK out of every k makes it back to the source. Because ACKs are cumulative, each ACK that does arrive releases a burst of k packets into the forward path. In addition, during congestion avoidance the window growth is slowed by a factor of 1/k compared to a symmetric connection.
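To make the burst mechanism concrete, the following toy model (our own sketch, not code from the paper; it ignores timeouts, delayed ACKs, and receiver window limits) shows how a saturated reverse queue that passes only one ACK in every k turns a smooth forward stream into bursts of k back-to-back packets:

    def ack_decimation_bursts(k, data_pkts):
        """Toy model of ACK decimation on an asymmetric path.

        Once the reverse queue is full, only every k-th cumulative
        ACK survives. Each survivor acknowledges the k packets sent
        since the previous one, so the window slides by k at once
        and the source emits a k-packet back-to-back burst.
        """
        bursts = []
        last_acked = 0
        for acked in range(k, data_pkts + 1, k):  # surviving ACKs
            bursts.append(acked - last_acked)     # burst size == k
            last_acked = acked
        return bursts

    # With k = 3, each ACK that survives the reverse queue releases
    # a 3-packet burst into the forward path:
    print(ack_decimation_bursts(3, 12))   # [3, 3, 3, 3]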


  
Figure 13: Simulation configuration for asymmetric links

The simulation configuration depicted in Figure 13 has been studied in detail by Lakshman et al. [11] and is used here to examine performance. In this configuration the forward buffer is Bf = 9 packets. With 1 Kbyte data packets, this results in a normalized asymmetry factor of k = 3.
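A worked instance of this definition (the link rates below are illustrative assumptions chosen only to reproduce k = 3; the excerpt specifies the packet size and the resulting k, not the raw link bandwidths):

    def normalized_asymmetry(fwd_bps, rev_bps, data_bytes, ack_bytes):
        """k per Lakshman et al. [11]: the forward path's capacity in
        data packets per second divided by the reverse path's
        capacity in ACKs per second."""
        data_rate = fwd_bps / (data_bytes * 8)   # data packets/sec
        ack_rate = rev_bps / (ack_bytes * 8)     # ACKs/sec
        return data_rate / ack_rate

    # Hypothetical rates: with 1 Kbyte data packets and 40-byte ACKs,
    # any ~75:1 bandwidth ratio (e.g., 10 Mbps forward, ~133 kbps
    # reverse) gives the k = 3 of this configuration.
    print(round(normalized_asymmetry(10e6, 133.3e3, 1000, 40), 1))  # 3.0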

Figure 14(a) shows the congestion window growth for Reno. Because ACK loss makes the connection bursty, several data packets are lost per window of data, causing Reno to suffer a timeout every cycle (which is why the congestion window repeatedly drops back to 1 packet). Figure 14(b) shows the evolution of the window with TCP-Santa Cruz. In this case, the congestion window settles a few packets above the bandwidth-delay product (BWDP) of the connection, equal to 31 packets. During slow start there is an initial overshoot of the window size due to the one round-trip-time delay in the calculations: in the final round before the algorithm detects the growing queue, a burst of packets is sent, which ultimately overflows the buffer.


  
Figure 14: Comparison of congestion window growth: (a) Reno (b) TCP-Santa Cruz
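The qualitative contrast between the two traces can be caricatured in a few lines (our simplification under the BWDP of 31 packets and forward buffer of 9 packets stated in the text, not the simulator used in the paper):

    BWDP = 31    # bandwidth-delay product in packets (from the text)
    BUF_F = 9    # forward buffer Bf in packets (from the text)

    def reno_like(steps):
        """Caricature of Figure 14(a): ACK-loss bursts overflow the
        forward buffer, multiple losses per window force a timeout
        each cycle, and cwnd collapses back to 1 packet."""
        cwnd, trace = 1, []
        for _ in range(steps):
            trace.append(cwnd)
            cwnd = 1 if cwnd > BWDP + BUF_F else cwnd + 1
        return trace

    def santa_cruz_like(steps, n=3):
        """Caricature of Figure 14(b): the window grows until the
        estimated queue reaches the operating point n, then holds a
        few packets above the BWDP instead of cycling."""
        cwnd, trace = 1, []
        for _ in range(steps):
            trace.append(cwnd)
            cwnd = min(cwnd + 1, BWDP + n)
        return trace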

A comparison of the overall throughput and delay obtained by Reno, Vegas, and Santa Cruz (n = 1.5, n = 3, and n = 5) sources is shown in Table 4. The table shows that Reno and Vegas are unable to achieve link utilization above 52%. Because of the burstiness of the data traffic, Santa Cruz needs an operating point of at least n = 3 to achieve high throughput; for n = 3 and n = 5, Santa Cruz achieves 99% link utilization. The end-to-end delays for Reno are roughly twice those of Santa Cruz, and Reno's delay variance is about seven orders of magnitude greater. Because Vegas achieves such low link utilization, the queues are generally empty, so it sees very low delay and no appreciable delay variance.


 
Table 4: Throughput, delay, and delay variance comparisons for Reno, Santa Cruz, and Vegas over asymmetric links.

Comparison of Throughput and Average Delay

Protocol            Throughput (Mbps)   Utilization   Average delay (msec)   Delay variance (μsec)
Reno                1.253               0.52          8.4                    1400
Santa Cruz n=1.5    1.275               0.53          3.5                    0.0004
Santa Cruz n=3      2.372               0.99          4.6                    0.0003
Santa Cruz n=5      2.373               0.99          4.8                    0.0003
Vegas (1,3)         0.799               0.33          3.3                    0.0000
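
The utilization column is internally consistent with a forward-link capacity of about 2.4 Mbps (the capacity itself is not stated in this excerpt); a quick sanity check:

    # (throughput in Mbps, reported utilization) from Table 4
    rows = {
        "Reno":             (1.253, 0.52),
        "Santa Cruz n=1.5": (1.275, 0.53),
        "Santa Cruz n=3":   (2.372, 0.99),
        "Santa Cruz n=5":   (2.373, 0.99),
        "Vegas (1,3)":      (0.799, 0.33),
    }
    CAPACITY_MBPS = 2.4   # inferred from the ratios, not given
    for name, (tput, util) in rows.items():
        print(f"{name:18s} computed {tput / CAPACITY_MBPS:.2f} "
              f"vs reported {util:.2f}")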
 

