Basic Bottleneck Configuration

Our first experiment shows protocol performance on the simple network depicted in Figure 8: a TCP source sends 1 Kbyte data packets to a receiver via two intermediate routers. The bottleneck lies between the two routers, where traffic must traverse a 1.5 Mbps link. The bandwidth-delay product (BWDP) of this configuration is 16.3 Kbytes; therefore, to accommodate up to a windowful of data, the routers are set to hold 17 packets.
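
The buffer sizing follows directly from the link rate and round-trip time. The short Python sketch below redoes the arithmetic; the round-trip time used is an assumption, chosen only so that the result is consistent with the 16.3 Kbyte figure quoted above.

    import math

    # Back-of-the-envelope check of the BWDP and buffer sizing. The link rate
    # and packet size come from the text; the RTT is an assumed value.
    BOTTLENECK_BPS = 1.5e6     # bottleneck link rate (bits per second)
    RTT_SEC = 0.087            # assumed round-trip time (not stated in the text)
    PACKET_BYTES = 1000        # 1 Kbyte data packets

    bwdp_bytes = BOTTLENECK_BPS * RTT_SEC / 8
    bwdp_packets = bwdp_bytes / PACKET_BYTES
    buffer_packets = math.ceil(bwdp_packets)   # hold up to a full windowful

    print(f"BWDP ~= {bwdp_bytes / 1000:.1f} Kbytes ({bwdp_packets:.1f} packets)")
    print(f"router buffer: {buffer_packets} packets")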
  
Figure 8: Simulation configuration for basic experiment

Figures 9(a) and (b) show the evolution of the sender's congestion window and the queue buildup at the bottleneck for TCP Reno. Once the congestion window grows beyond 17 packets (the BWDP of the connection), the bit pipe is full and the queue begins to fill. Once the queue is full, it begins to drop packets; eventually Reno notices the loss, retransmits, and cuts the congestion window in half. This produces see-saw oscillations in both the window size and the bottleneck queue length. These oscillations greatly increase not only the delay but also the delay variance seen by the application. Keeping delay and delay variance to a minimum is increasingly important for real-time and interactive applications.
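
This cycle can be illustrated with a deliberately crude model of Reno at a droptail bottleneck. The sketch below only reproduces the see-saw dynamic, not the simulation behind Figure 9; its per-RTT update rules (one-packet additive increase, window halving one RTT after the queue overflows) are simplifying assumptions.

    # Toy model of the Reno window/queue oscillation at a droptail bottleneck.
    BWDP_PKTS = 17        # pipe capacity in packets
    QUEUE_LIMIT = 17      # droptail buffer at the bottleneck router

    cwnd = 1.0
    for rtt in range(60):
        backlog = int(cwnd) - BWDP_PKTS              # packets beyond the pipe
        queue = max(0, min(QUEUE_LIMIT, backlog))    # what the buffer can hold
        print(f"rtt={rtt:3d}  cwnd={cwnd:5.1f}  queue={queue:2d}")
        if backlog > QUEUE_LIMIT:
            cwnd /= 2                                # loss: multiplicative decrease
        elif cwnd < BWDP_PKTS:
            cwnd *= 2                                # slow start
        else:
            cwnd += 1                                # congestion avoidance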

In contrast, Figures 10(a) and (b) show the evolution of the sender's congestion window and the queue buildup at the bottleneck for TCP Santa Cruz. These figures demonstrate the main strength of TCP Santa Cruz: the congestion control algorithm adapts to transmit at the bandwidth of the connection without congesting the network and without overflowing the bottleneck queues. In this example the threshold n, the desired number of additional packets in the network beyond the BWDP, is set to n = 1.5. Figure 10(b) shows that the queue length at the bottleneck link for TCP Santa Cruz reaches a steady-state value between 1 and 2 packets. We also see that the congestion window, depicted in Figure 10(a), reaches a peak value of 18 packets, which is the sum of the BWDP (16.5 packets) and n. The algorithm maintains this value for the duration of the connection.
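
In effect, the window is steered toward the pipe capacity plus n packets rather than probed upward until loss. The fragment below is a hypothetical sketch of that operating point only; the actual algorithm estimates the number of packets queued at the bottleneck from relative per-packet delay measurements, which is not shown here.

    # Hypothetical sketch of the TCP Santa Cruz operating point: keep roughly
    # n packets queued at the bottleneck beyond the BWDP. This is not the
    # protocol's real estimator, only an illustration of the target.
    def adjust_window(cwnd: float, estimated_queued: float, n: float) -> float:
        """One illustrative control step toward the BWDP + n operating point."""
        if estimated_queued < n:
            return cwnd + 1.0      # under target: admit another packet
        if estimated_queued > n:
            return cwnd - 1.0      # over target: back off by one packet
        return cwnd                # at target: hold the window steady

    BWDP_PKTS = 16.5
    print(BWDP_PKTS + 1.5)         # -> 18.0 packets, matching Figure 10(a)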


  
Figure 9: TCP Reno: (a) congestion window (b) bottleneck queue


  
Figure 10: TCP Santa Cruz: (a) congestion window (b) bottleneck queue

Table 2 compares the throughput, average delay, and delay variance for Reno, Vegas, and Santa Cruz. All protocols achieve similar throughput, with Santa Cruz n=5 performing slightly better than Reno. This is explained by Reno's window evolution in Figure 9(a) and queue length in Figure 9(b): most of the time Reno's congestion window is well above the BWDP of the connection, so packets are queued at the bottleneck link. Overall throughput does not suffer from the oscillations in the congestion window because, as Figure 9(b) shows, the queue is rarely empty and packets are always available for transmission over the bottleneck. What does suffer, however, is the delay experienced by packets transmitted through the network.

The minimum forward delay through the network is the 40 msec delay plus the 6.9 msec packet forwarding time, yielding a total minimum forward delay of approximately 47 msec. Reno is the clear loser in this case, with not only the highest average delay but also a high delay variance. Santa Cruz with n=1.5 provides the same average delay as Vegas, but with a lower delay deviation. As n increases, the delay in Santa Cruz also increases because more packets are allowed to sit in the bottleneck queue. Throughput also grows with n because not only does the slow start period last longer, but the peak window size is reached earlier in the connection, leading to a faster transmission rate earlier in the transfer. In addition, with a larger n a packet is more likely to be available in the queue awaiting transmission.
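
The delay figures can be cross-checked with simple arithmetic: subtracting the minimum forward delay from the averages in Table 2 gives the queueing delay each protocol imposes. The snippet below performs these subtractions; the per-protocol queueing delays are derived values, not measurements reported here.

    # Minimum forward delay and implied average queueing delay per protocol.
    PROPAGATION_MSEC = 40.0      # delay through the network (from the text)
    FORWARDING_MSEC = 6.9        # packet forwarding time (from the text)

    min_forward = PROPAGATION_MSEC + FORWARDING_MSEC
    print(f"minimum forward delay ~= {min_forward:.1f} msec")        # ~46.9 msec

    # Average delays from Table 2; the difference is time spent queued.
    for proto, avg in [("Reno", 99.4), ("Santa Cruz n=1.5", 55.1), ("Vegas (1,3)", 55.2)]:
        print(f"{proto}: ~{avg - min_forward:.1f} msec of queueing delay")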


 
Table 2: Throughput, delay and delay variance comparisons for Reno, Vegas and Santa Cruz

Protocol             Throughput (Mbps)   Utilization   Average delay (msec)   Delay variance (msec)
Reno                 1.45                0.97          99.4                   2.06
Santa Cruz n=1.5     1.42                0.94          55.1                   0.0041
Santa Cruz n=3       1.45                0.97          60.6                   0.0063
Santa Cruz n=5       1.47                0.98          79.2                   0.0073
Vegas (1,3)          1.40                0.94          55.2                   0.0077
 

