

The Internet runs on a hierarchical protocol stack. A simplified version of this is shown in figure 1. The layer common to all Internet applications is the IP (Internet Protocol) layer. This layer provides a connectionless, unreliable, packet-based delivery service. It is described as connectionless because each packet is treated independently of all others. The service is unreliable because there is no guarantee of delivery: packets may be silently dropped, duplicated or delayed, and may arrive out of order. The service is also called a best-effort service; every attempt to deliver a packet is made, with unreliability caused only by hardware faults or exhausted resources.

As there is no sense of a connection at the IP level, there is no simple way to provide quality of service (QoS). QoS is a request from an application to the network to guarantee the quality of a connection. This allows an application to request a fixed amount of bandwidth from the network and, once the QoS request has been accepted, assume it will be provided. A fixed delay (i.e. no jitter) and in-order delivery can also be assumed. A network that supports QoS is protected from congestion problems, as the network refuses connections that request more resources than can be supplied. An example of a network that supports QoS is the current telephone network, where every call is guaranteed the bandwidth it needs. Most users have at some point heard the overloaded signal, where the network cannot provide the resources required to make a call.
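The admission-control behaviour described above can be sketched as a toy model (this is purely illustrative; the class and numbers are invented and do not correspond to any real signalling protocol): a request for bandwidth is accepted only while enough capacity remains to honour the guarantee, otherwise it is refused, like the telephone network's busy signal.

```python
class AdmissionControl:
    """Toy QoS admission control: accept a connection only if the
    requested bandwidth can be guaranteed from remaining capacity."""

    def __init__(self, capacity_kbps):
        self.capacity = capacity_kbps
        self.allocated = 0

    def request(self, bandwidth_kbps):
        """Return True (accept) if the guarantee can be honoured."""
        if self.allocated + bandwidth_kbps <= self.capacity:
            self.allocated += bandwidth_kbps
            return True
        return False  # the "busy signal": the network refuses the call

    def release(self, bandwidth_kbps):
        """A finished call returns its bandwidth to the pool."""
        self.allocated -= bandwidth_kbps


link = AdmissionControl(capacity_kbps=256)
calls = [link.request(64) for _ in range(5)]
print(calls)  # → [True, True, True, True, False]
```

Once the four 64 kbit/s calls have consumed the link, the fifth request is refused outright rather than degrading the quality of the calls already admitted.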

The application decides which transport protocol is used. The two protocols shown here, TCP and UDP, are the most commonly used. TCP provides a reliable connection and is used by the majority of current Internet applications. Besides being responsible for error checking and correction, TCP also controls the speed at which data is sent. TCP is capable of detecting congestion in the network and will back off its transmission speed when congestion occurs. These features protect the network from congestion collapse.

As discussed in the introduction, VoIP is a real-time service. For real-time properties to be guaranteed, a network with QoS must be used to provide fixed delay and bandwidth. It has already been said that IP cannot provide this. This presents a choice: if IP is a requirement, which transport layer should be used to provide a system that is most likely to meet real-time constraints?

As TCP provides features such as congestion control, it would be the preferred protocol to use. Unfortunately, because TCP is a reliable service, delays are introduced whenever a bit error or packet loss occurs. This delay is caused by retransmission of the broken packet, along with any successive packets that may already have been sent. This can be a large source of jitter.
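The effect can be illustrated with a small head-of-line blocking simulation (the timings are invented for illustration, not measured): packets are sent every 20 ms, one is lost and retransmitted after a timeout, and in-order delivery means every later packet must wait for the retransmission, producing a spike in delay that decays only gradually.

```python
def tcp_delivery_times(n_packets, interval_ms, one_way_ms, lost, rto_ms):
    """Simulate in-order (TCP-like) delivery when packet `lost` is
    dropped and retransmitted after a timeout of `rto_ms` ms.
    Returns the time each packet becomes available to the application."""
    deliver = []
    blocked_until = 0
    for i in range(n_packets):
        sent = i * interval_ms
        if i == lost:
            # the retransmission arrives one timeout plus one trip later
            arrival = sent + rto_ms + one_way_ms
        else:
            arrival = sent + one_way_ms
        # in-order delivery: packet i cannot be released before packet i-1
        blocked_until = max(blocked_until, arrival)
        deliver.append(blocked_until)
    return deliver


times = tcp_delivery_times(n_packets=6, interval_ms=20, one_way_ms=50,
                           lost=2, rto_ms=200)
delays = [t - i * 20 for i, t in enumerate(times)]
print(delays)  # → [50, 50, 250, 230, 210, 190]
```

A single loss raises the delay of packet 2 from 50 ms to 250 ms, and packets 3 to 5, although they arrived on time, are held back behind it; this swing between 50 ms and 250 ms is exactly the jitter described above.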

TCP uses a combination of four algorithms to provide congestion control: slow start, congestion avoidance, fast retransmit and fast recovery [Ste97]. These algorithms all use packet loss as an indication of congestion, and all alter the number of packets TCP will send before waiting for acknowledgments of those packets. These alterations affect the bandwidth available and also change the delays seen on a link, providing another source of jitter.
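The window behaviour behind this can be sketched as follows (a deliberately simplified model: fast retransmit and fast recovery are omitted, and a loss is assumed to restart slow start from one segment):

```python
def cwnd_trace(rounds, ssthresh, loss_rounds):
    """Simplified TCP congestion-window evolution, in segments:
    slow start doubles cwnd each round trip until it reaches ssthresh,
    congestion avoidance then adds one segment per round trip, and a
    loss halves ssthresh and restarts slow start from one segment."""
    cwnd, trace = 1, []
    for rnd in range(rounds):
        trace.append(cwnd)
        if rnd in loss_rounds:          # loss detected: multiplicative decrease
            ssthresh = max(cwnd // 2, 2)
            cwnd = 1                    # back to slow start
        elif cwnd < ssthresh:
            cwnd *= 2                   # slow start: exponential growth
        else:
            cwnd += 1                   # congestion avoidance: linear growth
    return trace


print(cwnd_trace(rounds=12, ssthresh=8, loss_rounds={6}))
# → [1, 2, 4, 8, 9, 10, 11, 1, 2, 4, 8, 9]
```

The sawtooth in the trace is the point being made in the text: the sending rate swings between one segment and eleven segments per round trip, so the bandwidth and queueing delay seen by a competing stream swing with it.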

Figure 1: Simplified IP protocol stack

Combined, these effects raise jitter to an unacceptable level, rendering TCP unusable for real-time services. Voice communication has the advantage of not requiring a completely reliable transport layer. The loss of a packet or a bit error will often introduce only a click or a minor break into the output.

For these reasons most VoIP applications use UDP for the voice data transmission. UDP is a thin layer on top of IP that provides a way to distinguish among multiple programs running on a single machine. UDP also inherits all of the properties of IP that TCP attempts to hide: it is therefore also a packet-based, connectionless, best-effort service. It is up to the application to split data into packets and to provide any error checking that is required.
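A minimal sketch of this division of labour, using Python's standard socket module over the loopback interface (the framing format here, a sequence number plus CRC-32 per packet, is an invented example of application-level error checking, not any real VoIP protocol):

```python
import socket
import struct
import zlib

def make_packet(seq, payload):
    """Application-level framing over UDP: a sequence number plus a
    CRC-32 checksum, since UDP itself only multiplexes by port."""
    header = struct.pack("!II", seq, zlib.crc32(payload))
    return header + payload

def parse_packet(data):
    """Split a received datagram back into (seq, payload, checksum_ok)."""
    seq, crc = struct.unpack("!II", data[:8])
    payload = data[8:]
    return seq, payload, zlib.crc32(payload) == crc


# loopback demonstration: one sender, one receiver
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))               # bind to any free port
rx.settimeout(1.0)
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

frames = [b"voice frame %d" % i for i in range(3)]
for seq, frame in enumerate(frames):    # the application does the packetising
    tx.sendto(make_packet(seq, frame), rx.getsockname())

for _ in frames:
    data, _addr = rx.recvfrom(2048)
    seq, payload, ok = parse_packet(data)
    print(seq, payload, ok)

tx.close()
rx.close()
```

Note that everything beyond port multiplexing, the sequence numbers, the checksum and any decision about what to do with a damaged or missing frame, lives in the application, exactly as the paragraph above describes.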

Because of this, UDP provides the fastest and simplest way of transmitting data to the receiver. No avoidable interference is introduced into the stream of data. This allows an application to get as close to meeting real-time constraints as possible.

UDP, however, provides no congestion control. A congested link carrying only TCP traffic will be approximately fair to all users. When UDP data is introduced onto this link there is no requirement for the UDP senders to back off, forcing the remaining TCP connections to back off even further. This can be thought of as UDP data not being a ``good citizen''. The aim of this project is to characterise the magnitude of this drop-off in TCP performance.
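A back-of-envelope model of the first-order effect (the figures are invented, and real TCP sharing also depends on round-trip times and loss patterns, which this project's simulations are intended to capture):

```python
def share_link(capacity_kbps, udp_rates_kbps, n_tcp_flows):
    """Crude bandwidth-sharing model: UDP senders do not back off, so
    they take their full rate; the cooperating TCP flows back off until
    they roughly evenly split whatever capacity is left over."""
    udp_total = sum(udp_rates_kbps)
    leftover = max(capacity_kbps - udp_total, 0)
    return leftover / n_tcp_flows if n_tcp_flows else 0.0


# a 1 Mbit/s link shared by four TCP flows
print(share_link(1000, [], 4))          # → 250.0 kbit/s each, TCP alone
print(share_link(1000, [64, 64], 4))    # → 218.0 kbit/s each beside two streams
```

In this crude picture the UDP streams lose nothing while every TCP flow absorbs the full cost, and if the UDP load grows to fill the link the TCP flows are starved entirely; quantifying how far reality departs from this picture is the stated aim of the project.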

James Curtis 2000-01-17