14 May 2010

TCP/IP Protocol Stack, Transport Layer

Transport Layer
The TCP/IP transport layer is responsible for providing a logical connection between two devices and can provide these two functions:
■ Flow control (through the use of windowing or acknowledgements)
■ Reliable connections (through the use of sequence numbers and acknowledgements)

The transport layer packages application layer data into segments to send to a destination device. The remote destination is responsible for taking the data from these segments and forwarding it to the correct application. TCP/IP has two transport layer protocols: Transmission Control Protocol (TCP) and User Datagram Protocol (UDP). These protocols are discussed in the following sections.

TCP
TCP’s main responsibility is to provide a reliable, connection-oriented logical service between two devices. It can also use windowing to implement flow control so that a source device doesn’t overwhelm a destination with too many segments.

TCP Segment
TCP transmits information between devices in a data unit called a segment. Table 3-2 shows the components of a segment. The segment is composed of a header, followed by the application data. Without any options, the TCP header is 20 bytes in length.
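To make the header layout concrete, here is a minimal sketch in Python that unpacks the fixed 20-byte TCP header described above (field order per RFC 793); the sample values, such as the source port 49152, are made up for illustration.

```python
import struct

def parse_tcp_header(raw: bytes) -> dict:
    """Parse the fixed 20-byte TCP header (no options), per RFC 793."""
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH", raw[:20])
    return {
        "src_port": src_port,
        "dst_port": dst_port,
        "seq": seq,
        "ack": ack,
        "data_offset": (offset_flags >> 12) * 4,  # header length in bytes
        "flags": offset_flags & 0x3F,             # SYN, ACK, FIN, and so on
        "window": window,
        "checksum": checksum,
        "urgent_ptr": urgent,
    }

# A hand-built sample header: SYN segment from ephemeral port 49152 to port 80.
sample = struct.pack("!HHIIHHHH", 49152, 80, 1, 0, (5 << 12) | 0x02, 65535, 0, 0)
hdr = parse_tcp_header(sample)
print(hdr["dst_port"], hdr["data_offset"], bool(hdr["flags"] & 0x02))
```

Note that the `!` in the format string requests network (big-endian) byte order, which is how the fields actually appear on the wire.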


TCP’s Multiplexing Function
TCP and UDP provide a multiplexing function for a device: this allows multiple applications to simultaneously send and receive data. With these protocols, port numbers are used to differentiate the connections. Port numbers fall into two basic categories: well-known port numbers (sometimes called reserved port numbers) and source connection port numbers. Each application is assigned a well-known port number that is typically between 1 and 1,023. Any time you want to make a connection to a remote application, your application program uses the appropriate well-known port number.

As you saw in Table 3-2, however, there are two port numbers in the segment: source and destination. When you initiate a connection to a remote application, your operating system picks a currently unused port number greater than 1,023 and assigns this number as the source port number. The application then fills in the destination port number with the well-known port number of the remote application. When the destination receives this traffic, it looks at the destination port number and knows which application the traffic should be directed to. The same is true for returning traffic from the destination.

Port numbers are assigned by the Internet Assigned Numbers Authority (IANA). When a vendor develops a new commercial application and wants a reserved (well-known) port number, it applies to this organization for one. Here are some common TCP applications with their assigned port numbers: FTP (20 and 21), HTTP (80), SMTP (25), and Telnet (23).
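You can watch the operating system pick an ephemeral source port yourself. The sketch below binds a TCP socket to port 0, which by convention asks the OS to assign a currently unused port, the same mechanism it uses for the source port of an outgoing connection; exact port ranges vary by operating system.

```python
import socket

# Binding to port 0 asks the OS for an unused ephemeral port, just as it
# does when choosing the source port of an outgoing connection.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))
addr, port = s.getsockname()
print(f"OS-assigned source port: {port}")
s.close()
```

On most systems the assigned port will be well above 1,023, leaving the well-known range free for servers.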

TCP’s Reliability
TCP provides a reliable connection between devices by using sequence numbers and acknowledgments. Every TCP segment sent has a sequence number in it. This not only helps the destination reorder any segments that arrive out of order, but it also provides a way to verify that all sent segments were received. The destination responds to the source with an acknowledgment indicating receipt of the sent segments.
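The reordering idea can be shown with a toy sketch (not real TCP, which numbers bytes rather than whole segments): a receiver sorts arriving segments by sequence number and checks that none are missing.

```python
# Segments arrive out of order; each carries (sequence_number, payload).
arrived = [(3, b"C"), (1, b"A"), (4, b"D"), (2, b"B")]

# Reorder by sequence number, as a TCP receiver conceptually does.
in_order = sorted(arrived)
data = b"".join(payload for _, payload in in_order)

# Verify nothing is missing: the sequence numbers must be contiguous.
seqs = [seq for seq, _ in in_order]
complete = seqs == list(range(seqs[0], seqs[0] + len(seqs)))
print(data, complete)
```

If a sequence number were absent from the contiguous run, the receiver would know a segment was lost and could withhold its acknowledgment.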

Before TCP can provide a reliable connection, it has to go through a synchronization phase, called a three-way handshake. Here are the steps that occur during this setup process:

1. The source sends a synchronization segment with the SYN bit set in the Code field. This segment contains an initial sequence number. This is referred to as a SYN segment.
2. Upon receipt of the SYN segment, the destination responds back with its own segment, with its own initial sequence number and the appropriate value in the acknowledgement field indicating the receipt of the source’s original SYN segment. This notifies the source that the original SYN segment was received. This is referred to as a SYN/ACK segment.
3. Upon receipt of the SYN/ACK segment, the source will acknowledge receipt of this segment by responding back to the destination with an ACK segment, which has the acknowledgment field set to an appropriate value based on the destination’s sequence number.

Here is a simple example of this three-way handshake:
1. Source sends a SYN: sequence number = 1
2. Destination responds with a SYN/ACK: sequence number = 10, acknowledgement = 2
3. Source responds with an ACK segment: sequence number = 2, acknowledgement = 11

In this example, the destination’s acknowledgment (step 2) is one greater than the source’s sequence number, indicating to the source that the next segment expected is 2. In the third step, the source sends the second segment, and, within the same segment in the Acknowledgement field, indicates the receipt of the destination’s segment with an acknowledgment of 11--one greater than the sequence number in the destination’s SYN/ACK segment.
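The sequence/acknowledgment arithmetic from the three steps above can be sketched as a small Python function; the initial sequence numbers (1 and 10) mirror the example in the text.

```python
def three_way_handshake(src_isn: int, dst_isn: int):
    """Return the (seq, ack) numbers exchanged in the three handshake steps.

    Each side acknowledges with the other side's sequence number plus one,
    indicating the next segment it expects.
    """
    syn = {"seq": src_isn}                                    # step 1: SYN
    syn_ack = {"seq": dst_isn, "ack": syn["seq"] + 1}         # step 2: SYN/ACK
    ack = {"seq": syn["seq"] + 1, "ack": syn_ack["seq"] + 1}  # step 3: ACK
    return syn, syn_ack, ack

syn, syn_ack, ack = three_way_handshake(src_isn=1, dst_isn=10)
print(syn, syn_ack, ack)
```

Running this reproduces the numbers in the example: the SYN/ACK carries acknowledgment 2, and the final ACK carries acknowledgment 11.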

Windowing
TCP allows the regulation of the flow of segments, ensuring that one device doesn’t flood another device with too many segments. TCP uses a sliding window mechanism to assist with flow control. For example, if you have a window size of 1, a device can send only one segment, and then must wait for a corresponding acknowledgment before sending the next segment. If the window size is 20, a device can send 20 segments and then has to wait for an acknowledgment before sending 20 additional segments.

The larger the window size for a connection, the fewer acknowledgments are sent, making the connection more efficient. Too small a window size can hurt throughput, since a device has to send a small number of segments, wait for an acknowledgment, send another small batch, and wait again. The trick is to figure out an optimal window size: one that allows for the best efficiency based on the current conditions in the network and on the two devices.

A nice feature of this process is that the window size can be changed dynamically over the lifetime of the connection. This is important because many connections with varying bandwidth needs may come into a device. Therefore, as a device becomes saturated with segments from many connections, it can, assuming that these connections are using TCP, lower the window size to slow the flow of segments coming into it. TCP windowing is covered in RFCs 793 and 813.
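A simplified stop-and-wait model (a deliberate simplification of real sliding windows, where ACKs overlap with transmission) makes the efficiency trade-off easy to see: count how many acknowledgment round-trips are needed for a given window size.

```python
def acks_needed(total_segments: int, window_size: int) -> int:
    """Count ACK waits if the sender pauses after each full window.

    A toy model: real TCP slides the window as ACKs arrive rather than
    stopping dead, but the trend is the same.
    """
    acks = 0
    sent = 0
    while sent < total_segments:
        sent += min(window_size, total_segments - sent)  # send one window
        acks += 1                                        # wait for one ACK
    return acks

print(acks_needed(100, 1))   # window of 1: one ACK wait per segment
print(acks_needed(100, 20))  # window of 20: far fewer ACK waits
```

For 100 segments, a window of 1 costs 100 acknowledgment waits while a window of 20 costs only 5, which is why a larger window (up to the network's capacity) is more efficient.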

UDP
Where TCP provides a reliable connection, UDP provides an unreliable connection. UDP doesn’t go through a three-way handshake to set up a connection--it just begins sending its information. Likewise, UDP doesn’t check to see if sent segments were received by a destination; in other words, it doesn’t have an acknowledgment process. Typically, if an acknowledgment process is necessary, the transport layer (UDP) won’t provide it; instead, the application itself, at the application layer, will provide this verification.

Despite these deficiencies, UDP does have an advantage over TCP: it has less overhead. For example, if you only need to send one segment and receive one segment back, and that’s the end of the transmission, it makes no sense to go through a three-way handshake to first establish a connection and then send and receive the two segments: this is not very efficient. DNS queries are a good example where the use of UDP makes sense. Of course, if you are sending a large amount of data to a destination, and need to verify that it was received, then TCP would be a better transport mechanism.
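The one-segment-out, one-segment-back exchange can be sketched with two UDP sockets on the loopback interface standing in for a client and a server (local-only; the payloads are made up, not a real DNS exchange):

```python
import socket

# Two UDP sockets on loopback stand in for a client and a server;
# note that no handshake happens before data flows.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))                  # OS picks a free port
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

client.sendto(b"query", server.getsockname())  # just send; no connection setup
payload, client_addr = server.recvfrom(1024)
server.sendto(b"reply", client_addr)           # one segment back
response, _ = client.recvfrom(1024)

print(payload, response)
client.close()
server.close()
```

Two segments total, no setup and no teardown: exactly the pattern where UDP's low overhead pays off.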

Table 3-3 contains the components of a UDP segment. Examining this table, you will notice several differences between a UDP and a TCP segment. First, since UDP is connectionless, there is no need for sequence and acknowledgment numbers. And second, since there is no flow control, there is no need for a window size field. As you can see, UDP is a lot simpler, and more efficient, than TCP. Any control functions that need to be implemented for the connection are not done at the transport layer--instead, these are handled at the application layer.
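The contrast shows up directly in the header: UDP's header is just four 16-bit fields (8 bytes, per RFC 768) against TCP's 20-byte minimum. A small sketch, with illustrative sample values:

```python
import struct

def parse_udp_header(raw: bytes) -> dict:
    """Parse the 8-byte UDP header, per RFC 768: source port,
    destination port, length (header + data), and checksum."""
    src_port, dst_port, length, checksum = struct.unpack("!HHHH", raw[:8])
    return {"src_port": src_port, "dst_port": dst_port,
            "length": length, "checksum": checksum}

# A hand-built sample: ephemeral port 49152 to DNS (53), 5-byte payload.
sample = struct.pack("!HHHH", 49152, 53, 8 + 5, 0)
hdr = parse_udp_header(sample)
print(hdr)
```

No sequence number, no acknowledgment number, no window: everything TCP needs for reliability and flow control is simply absent.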

TCP/IP Protocol Stack, Application Layer


The Transmission Control Protocol/Internet Protocol (TCP/IP) is a standard that includes many protocols. It defines how machines on an internetwork can communicate with each other. It was initially funded by and developed for DARPA (Defense Advanced Research Projects Agency), an agency of the U.S. Department of Defense. Developed initially for the government, it was later made available to the public, mainly seen on Unix systems. First specified in RFC 791, it has become the de facto standard for networking protocols. The Internet uses TCP/IP to carry data between

Hierarchical Network Model

Cisco has developed a three-layer hierarchical model to help you design campus networks. Cisco uses this model to simplify designing, implementing, and managing large-scale networks. With traditional network designs, it was common practice to place the networking services at the center of the network and the users at the periphery.

However, many things in networking have changed over the past decade, including advancements in applications, developments in graphical user interfaces (GUIs), the proliferation of multimedia applications, the explosion of the Internet, and fast-paced changes in

Going Down and Up the Protocol Stack

Going Down the Protocol Stack
This section covers the basic mechanics of how information is processed as it is sent down the protocol stack on a computer. I’ll use the diagram shown in Figure 2-10 to illustrate this process as PC-A sends information to PC-B. In this example, assume that the data link layer is Ethernet and the physical layer is copper.

The first thing that occurs on PC-A is that the user, sitting in front of the computer, creates some type of information, called data, and then

Transferring Information Between Computers

Before delving into the mechanics of how information is transferred between computers, you must become familiar with the terminology used to describe the transmitted data. Many of the layers of the OSI Reference Model use their own specific terms to describe data transferred back and forth. As this information is passed from higher to lower layers, each layer adds information to the original data—typically a header and possibly a trailer. This process is called encapsulation. Generically speaking, the term protocol data unit (PDU) is