14 May 2010

TCP/IP Protocol Stack, Transport Layer

Transport Layer
The TCP/IP transport layer is responsible for providing a logical connection between two devices and can provide these two functions:
■ Flow control (through the use of windowing or acknowledgements)
■ Reliable connections (through the use of sequence numbers and acknowledgements)

The transport layer packages application layer data into segments to send to a destination device. The remote destination is responsible for taking the data from these segments and forwarding it to the correct application. TCP/IP has two transport layer protocols: Transmission Control Protocol (TCP) and User Datagram Protocol (UDP). These protocols are discussed in the following sections.

TCP
TCP’s main responsibility is to provide a reliable connection-oriented logical service between two devices. It can also use windowing to implement flow control so that a source device doesn’t overwhelm a destination
with too many segments.

TCP Segment
TCP transmits information between devices in a data unit called a segment. Table 3-2 shows the components of a segment. The segment is composed of a header, followed by the application data. Without any options, the TCP header is 20 bytes in length.


TCP’s Multiplexing Function
TCP and UDP provide a multiplexing function for a device: this allows multiple applications to simultaneously send and receive data. With these protocols, port numbers are used to differentiate the connections. Port numbers are broken into two basic categories: well-known port numbers (sometimes called reserved port numbers) and source connection port numbers. Each application is assigned a well-known port number that is typically between 1 and 1,023. Any time you want to make a connection to a remote application, your application program will use the appropriate well-known port number.

As you saw in Table 3-2, however, there are two port numbers in the segment: source and destination. When you initiate a connection to a remote application, your operating system picks a currently unused port number greater than 1,023 and assigns this number as the source port number. The application then fills in the destination port number with the well-known port number of the remote application. When the destination receives this traffic, it looks at the destination port number and knows which application the traffic should be directed to. The same is true for the returning traffic from the destination.
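To make this concrete, here is a minimal sketch using Python's socket module (the host name www.example.com and the web port 80 are just placeholders for any reachable server). The operating system chooses the ephemeral source port; the program only supplies the well-known destination port.

```python
# A small sketch of TCP multiplexing: the program supplies the well-known
# destination port (80 for HTTP), while the operating system picks a currently
# unused source port above 1,023.
import socket

with socket.create_connection(("www.example.com", 80)) as s:  # placeholder host
    src_ip, src_port = s.getsockname()   # ephemeral source port chosen by the OS
    dst_ip, dst_port = s.getpeername()   # well-known destination port
    print(f"source port: {src_port}, destination port: {dst_port}")
```

Running this twice will normally show two different source ports but the same destination port, which is exactly how the destination device keeps multiple connections separate.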

Port numbers are assigned by the Internet Assigned Numbers Authority (IANA). When a vendor develops a new commercial application and wants a reserved (well-known) port number, the vendor applies to this organization for one. Here are some common TCP applications with their assigned port numbers: FTP (20 and 21), HTTP (80), SMTP (25), and telnet (23).
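If you want to check these assignments yourself, one quick sketch is to query the local services database; Python's socket.getservbyname() reads the same well-known mappings (the exact output depends on the services file installed on your machine).

```python
# Look up well-known TCP port numbers in the local services database.
import socket

for name in ("ftp", "http", "smtp", "telnet"):
    print(name, socket.getservbyname(name, "tcp"))
# Typically prints: ftp 21, http 80, smtp 25, telnet 23
```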

TCP’s Reliability
TCP provides a reliable connection between devices by using sequence numbers and acknowledgments. Every TCP segment sent has a sequence number in it. This not only helps the destination reorder any segments that arrive out of order, but it also provides a method of verifying whether all sent segments were received. The destination responds to the source with an acknowledgment indicating receipt of the sent segments.

Before TCP can provide a reliable connection, it has to go through a synchronization phase, called a three-way handshake. Here are the steps that occur during this setup process:

1. The source sends a synchronization frame with the SYN bit marked in the Code field. This segment contains an initial sequence number. This is referred to as a SYN segment.
2. Upon receipt of the SYN segment, the destination responds with its own segment, containing its own initial sequence number and the appropriate value in the acknowledgment field indicating receipt of the source’s original SYN segment. This notifies the source that the original SYN segment was received. This is referred to as a SYN/ACK segment.
3. Upon receipt of the SYN/ACK segment, the source acknowledges receipt of this segment by responding to the destination with an ACK segment, which has the acknowledgment field set to an appropriate value based on the destination’s sequence number.

Here is a simple example of this three-way handshake:
1. Source sends a SYN: sequence number = 1
2. Destination responds with a SYN/ACK: sequence number = 10, acknowledgement = 2
3. Source responds with an ACK segment: sequence number = 2, acknowledgement = 11

In this example, the destination’s acknowledgment (step 2) is one greater than the source’s sequence number, indicating to the source that the next segment expected is 2. In the third step, the source sends the second segment, and, within the same segment in the Acknowledgement field, indicates the receipt of the destination’s segment with an acknowledgment of 11--one greater than the sequence number in the destination’s SYN/ACK segment.
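The arithmetic is easier to follow in code. Here is a small sketch that mirrors the simplified numbering used in this example (one sequence number per segment, rather than the per-byte numbering a real TCP stack uses); the segment dictionaries are just an illustration, not a real TCP implementation.

```python
# Simplified three-way handshake numbering, matching the example above.
def three_way_handshake(src_isn: int, dst_isn: int):
    # Step 1: the source sends a SYN with its initial sequence number.
    syn = {"flags": "SYN", "seq": src_isn}
    # Step 2: the destination answers with a SYN/ACK; its acknowledgment is one
    # greater than the sequence number it received (the next segment it expects).
    syn_ack = {"flags": "SYN/ACK", "seq": dst_isn, "ack": syn["seq"] + 1}
    # Step 3: the source acknowledges the destination's sequence number the same way.
    ack = {"flags": "ACK", "seq": syn["seq"] + 1, "ack": syn_ack["seq"] + 1}
    return syn, syn_ack, ack

for segment in three_way_handshake(src_isn=1, dst_isn=10):
    print(segment)
# SYN: seq=1; SYN/ACK: seq=10, ack=2; ACK: seq=2, ack=11
```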

Windowing
TCP allows the regulation of the flow of segments, ensuring that one device doesn’t flood another device with too many segments. TCP uses a sliding window mechanism to assist with flow control. For example, if you have a window size of 1, a device can send only one segment and then must wait for a corresponding acknowledgment before sending the next segment. If the window size is 20, a device can send 20 segments and then has to wait for an acknowledgment before sending 20 additional segments.

The larger the window size for a connection, the fewer acknowledgments are sent, making the connection more efficient. Too small a window size can affect throughput, since a device has to send a small number of segments, wait for an acknowledgment, send another small batch of segments, and wait again. The trick is to figure out an optimal window size: one that allows for the best efficiency based on the current conditions in the network and on the two devices.
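A rough way to see the efficiency difference is to count how many acknowledgment waits are needed to move a fixed number of segments. The sketch below assumes the simplified model described here (one acknowledgment per full window); real TCP acknowledgment behavior is more nuanced.

```python
# Count acknowledgment waits for a given window size (simplified model).
def ack_waits(total_segments: int, window_size: int) -> int:
    waits = 0
    sent = 0
    while sent < total_segments:
        sent += min(window_size, total_segments - sent)  # send up to a full window
        waits += 1                                       # then wait for one acknowledgment
    return waits

print(ack_waits(100, window_size=1))   # 100 waits
print(ack_waits(100, window_size=20))  # 5 waits
```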

A nice feature of this process is that the window size can be dynamically changed through the lifetime of the connection. This is important because many more connections may come into a device with varying bandwidth needs. Therefore, as a device becomes saturated with segments from many connections, it can, assuming that these connections are using TCP, lower the window size to slow the flow of segments coming into it. TCP windowing is covered in RFC 793 and 813.

UDP
Where TCP provides a reliable connection, UDP provides an unreliable connection. UDP doesn’t go through a three-way handshake to set up a connection--it just begins sending its information. Likewise, UDP doesn’t check to see whether sent segments were received by a destination; in other words, it doesn’t have an acknowledgment process. Typically, if an acknowledgment process is necessary, the transport layer (UDP) won’t provide it; instead, the application itself, at the application layer, will provide this verification.

Given these deficiencies, UDP does have an advantage over TCP: it has less overhead. For example, if you only need to send one segment and receive one segment back, and that’s the end of the transmission, it makes no sense to go through a three-way handshake to first establish a connection and then send and receive the two segments. DNS queries are a good example where the use of UDP makes sense. Of course, if you are sending a large amount of data to a destination and need to verify that it was received, then TCP would be a better transport mechanism.

Table 3-3 contains the components of a UDP segment. Examining this table, you can see several differences between a UDP segment and a TCP segment. First, since UDP is connectionless, there is no need for sequence and acknowledgment numbers. And second, since there is no flow control, there is no need for a window size field. As you can see, UDP is a lot simpler, and more efficient, than TCP. Any control functions that need to be implemented for the connection are not done at the transport layer--instead, these are handled at the application layer.
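To illustrate how small the UDP header is, here is a sketch that packs the four fields described in Table 3-3 into their 8-byte wire format (the port numbers and payload are placeholders, and the checksum is left at zero, which IPv4 permits to mean that no checksum was computed).

```python
# Pack the 8-byte UDP header: source port, destination port, length, checksum.
import struct

def build_udp_header(src_port: int, dst_port: int, payload: bytes) -> bytes:
    length = 8 + len(payload)   # header plus data, in bytes
    checksum = 0                # 0 = no checksum computed (allowed over IPv4)
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

header = build_udp_header(53000, 53, b"example query")
print(len(header))  # 8 bytes, versus 20 bytes for a TCP header without options
```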

TCP/IP Protocol Stack, Application Layer


The Transmission Control Protocol/Internet Protocol (TCP/IP) is a standard that includes many protocols. It defines how machines on an internetwork can communicate with each other. It was initially funded by and developed for DARPA (Defense Advanced Research Projects Agency), an agency of the U.S. Department of Defense. Developed initially for the government, it was later made available to the public, mainly seen on Unix systems. First specified in RFC 791, it has become the de facto standard for networking protocols. The Internet uses TCP/IP to carry data between

Hierarchical Network Model

Cisco has developed a three-layer hierarchical model to help you design campus networks. Cisco uses this model to simplify designing, implementing, and managing large-scale networks. With traditional network designs, it was common practice to place the networking services at the center of the network and the users at the periphery.

However, many things in networking have changed over the past decade, including advancements in applications, developments in graphical user interfaces (GUIs), the proliferation of multimedia applications, the explosion of the Internet, and fast-paced changes in

Going Down and Up the Protocol Stack

Going Down the Protocol Stack
This section covers the basic mechanics as to how information is processed as it is sent down the protocol stack on a computer. I’ll use the diagram shown in Figure 2-10 to illustrate this process as PC-A sends information to PC-B. In this example, assume that the data link layer is Ethernet and the physical layer is copper.

The first thing that occurs on PC-A is that the user, sitting in front of the computer, creates some type of information, called data, and then

Transferring Information Between Computers

Before delving into the mechanics of how information is transferred between computers, you must grow familiar with the terminology used to describe the transmitted data. Many of the layers of the OSI Reference Model use their own specific terms to describe data transferred back and forth. As this information is passed from higher to lower layers, each layer adds information to the original data—typically a header and possibly a trailer. This process is called encapsulation. Generically speaking, the term protocol data unit (PDU) is

13 May 2010

Transport Layer, Unreliable Connections

One of the issues of connection-oriented services is that they must always go through a three-way handshake before you can transfer data. In some instances, like file transfers, this makes sense, because you want to make sure that all data for the file is transferred successfully. However, in other cases, when you want to send only one piece of information and get a reply back, going through the three-way handshake process adds additional
overhead that isn’t necessary.

A DNS query is a good example where using a connection-oriented service doesn’t make sense. With a DNS query, a device is trying to resolve a fully qualified domain name to an IP address. The device sends the single query to a DNS server and waits for the server’s response.

In this process, only two messages are generated: the client’s query and the server’s response. Because of the minimal amount of information shared between these two devices, it makes no sense to establish a reliable connection first before sending the query. Instead, the device should just send its information and wait for a response.

If a response doesn’t come back, the application can send the information again or the user can get involved. Again, with DNS, you can configure two DNS servers in the Microsoft Windows operating system. If you don’t get a reply from the first server, the application can use the second configured server.
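Here is a hedged sketch of that application-level retry using UDP sockets. The server addresses are documentation placeholders and the query bytes are not a real DNS message; the point is simply that the application sends, waits, and falls back to the second configured server on its own, with no transport-layer handshake or acknowledgment.

```python
# Send a datagram and fall back to a second server if no reply arrives in time.
import socket

def query_with_fallback(servers, query, timeout=2.0):
    for server in servers:                    # try each configured server in turn
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        try:
            sock.sendto(query, (server, 53))  # no handshake -- just send
            reply, _ = sock.recvfrom(512)
            return reply
        except socket.timeout:
            continue                          # no answer; try the next server
        finally:
            sock.close()
    return None

answer = query_with_fallback(["192.0.2.1", "192.0.2.2"], b"placeholder-query")
```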

Because no “connection” is built up front, this type of connection is referred to as a connectionless service. The TCP/IP protocol stack uses the User Datagram Protocol (UDP) to provide unreliable connections.

Transport Layer, Reliable Connections

The fourth layer of the OSI Reference Model is the transport layer. The transport layer has four main functions:
■ It sets up and maintains a session connection between two devices.
■ It can provide for the reliable or unreliable delivery of data across this connection.
■ It can implement flow control through ready/not ready signals or windowing to ensure one device doesn’t overflow another device with too much data on a connection.
■ It multiplexes connections, allowing multiple applications to simultaneously send and receive data.

The following sections cover these processes.


Reliable Connections
The transport layer can provide reliable and unreliable transfer of data between networking devices. TCP/IP’s Transmission Control Protocol (TCP) is an example of a transport layer protocol that

Advantages of Routers

Because routers operate at the network layer, which is a higher layer than the one bridges and switches use, and because they use logical addressing, they provide many advantages over bridges and switches, including:

■ Logical addressing at layer-3 allows you to build hierarchical networks that scale to very large sizes. This is discussed in Chapter 12.
■ They contain broadcasts and multicasts. When a broadcast or multicast is received on an interface, it is not forwarded to

Routing Tables

Routers are devices that function at the network layer; they use network numbers to make routing decisions: how to get a packet to its destination. Routers build a routing table, which contains path information. This information includes the network number, which interface the router should use to reach the network number, the metric of the path (what it costs to reach the destination), and how the router learned about this network number. Metrics are used to weight the different paths to a destination. If there is more than one way to

Network Layer, Layer-3 Addressing

Network Layer
Layer 3 of the OSI Reference Model is the network layer. This layer is responsible for three main functions:
■ Defines logical addresses used at layer-3
■ Finds paths, based on the network numbers of logical addresses, to reach destination devices
■ Connects different data link types together, such as Ethernet, FDDI, Serial, and Token Ring

The following sections cover the network layer in more depth.

Layer-3 Addressing
Many protocols function at the network layer: AppleTalk, DECnet, IP, IPX, Vines, XNS, and others. Each of these protocols has its own method of defining logical addressing. Correct assignment of these addresses on devices across your network allows you to build a hierarchical design that can scale to very large sizes. This provides an advantage over layer-2 addresses, which use a flat design and are not scalable.

All layer-3 addressing schemes have two components: network and host (or node). Each segment (physical or logical) in your network needs a unique network number. Each host on these segments needs a unique host number from within the assigned network number. The combination of the network and host number assigned
to a device provides a unique layer-3 address throughout the entire network. For example, if you had 500 devices in your network that were running IP, each of these devices would need a unique IP layer-3 address.

This process is different with MAC addresses, which are used at layer-2. MAC addresses need to be unique only on a physical (or logical) segment. In other words, within the same broadcast domain, all of the MAC addresses must be unique. However, MAC addresses do not need to be unique between two different broadcast domains.

An example of this appears later in this chapter.
To understand the components of layer-3 addresses, let’s look at a few examples. TCP/IP addresses are 32 bits in length. To make these addresses more readable, they are broken up into four bytes, or octets, with the bytes separated by periods. This is commonly referred to as dotted decimal notation. Here’s a simple example of an IP address: 10.1.1.1. An additional value, called a subnet mask, determines the boundary between the network and host components of an address. When comparing IP addresses to other protocols’ addressing schemes, IP is the most complicated. IP addressing is thoroughly covered in Chapter 3.
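As a quick illustration, the sketch below splits the 10.1.1.1 example into its network and host components, assuming a subnet mask of 255.255.255.0 (the mask value is an assumption made just for this example).

```python
# Use a subnet mask to separate the network and host parts of an IP address.
import ipaddress

address = ipaddress.ip_address("10.1.1.1")
mask = ipaddress.ip_address("255.255.255.0")

network_part = ipaddress.ip_address(int(address) & int(mask))
host_part = ipaddress.ip_address(int(address) & ~int(mask) & 0xFFFFFFFF)

print(network_part)  # 10.1.1.0 -- the network component
print(host_part)     # 0.0.0.1  -- the host component
```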

Most other protocols have a much simpler format. For example, IPX addresses are 80 bits in length. The first 32 bits are always the network number, and the last 48 bits are always the host address. IPX addresses are represented in hexadecimal. Here’s an example: ABBA.0000.0000.0001. In this example, ABBA is the network number and 0000.0000.0001 is the host number. Every protocol has its own addressing scheme. However, each scheme always begins with a network component followed by a host component.

12 May 2010

Bridge, Data Link Devices

Bridges are data link layer devices that switch frames between different layer-2 segments. They perform their switching in software, and their switching decisions are based on the destination MAC address in the header of the data link layer frame.

Bridges perform three main functions:

■ They learn where devices are located by placing the MAC address of a device and the identifier of the port it is connected to in a port address table.
■ They forward traffic intelligently, drawing on the information in their port address table (a brief sketch of this learn-and-forward process follows this list).
■ They remove layer-2 loops by running the Spanning Tree Protocol (STP).
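Here is the promised sketch of the learn-and-forward behavior. It is only an illustration of how a transparent bridge uses its port address table, not how any particular bridge is implemented, and it ignores STP and aging timers.

```python
# A minimal learning bridge: record source MACs, forward on known destinations,
# and flood frames with unknown or broadcast destinations.
class Bridge:
    def __init__(self, ports):
        self.ports = ports
        self.port_address_table = {}   # MAC address -> port

    def receive(self, in_port, src_mac, dst_mac):
        self.port_address_table[src_mac] = in_port       # learn the source
        out_port = self.port_address_table.get(dst_mac)
        if out_port is None or dst_mac == "FFFF.FFFF.FFFF":
            return [p for p in self.ports if p != in_port]  # flood
        return [] if out_port == in_port else [out_port]    # filter or forward

bridge = Bridge(ports=[1, 2, 3])
print(bridge.receive(1, "0000.1111.AAAA", "0000.2222.BBBB"))  # flood: [2, 3]
print(bridge.receive(2, "0000.2222.BBBB", "0000.1111.AAAA"))  # forward: [1]
```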

Actually, these three functions are implemented in bridges that perform transparent bridging. There are other types of

Ethernet II’s Version of Ethernet

Ethernet II is the original Ethernet frame type. Ethernet II and 802.3 are very similar: they both use CSMA/CD to determine their operations. Their main difference is the frames used to transmit information between NICs. The bottom part of earlier Figure 2-3 shows the fields in an Ethernet II frame. Here are the two main differences between an Ethernet II frame and an IEEE 802.3 frame:

■ Ethernet II does not have any sublayers, while IEEE 802.2/3 have two: LLC and MAC.

■ Ethernet II has a type field instead of a length field (used in 802.3). IEEE 802.2 defines the type for IEEE Ethernet.

If you examine the IEEE 802.3 frame and the Ethernet II frame, you can see that they are very similar. NICs differentiate them by examining the value in the type field for an Ethernet II frame and the value in the

IEEE’s Version of Ethernet

There are actually two variants of Ethernet: IEEE’s implementation and the DIX implementation. Ethernet was developed by three different companies in the early 1980s: Digital, Intel, and Xerox, or DIX for short. This implementation of Ethernet has evolved over time; its current version is called Ethernet II. Devices running TCP/IP typically use the Ethernet II implementation.

The second version of Ethernet was developed by IEEE and is standardized in the IEEE 802.2 and 802.3 standards. IEEE has split the data link layer into two components: MAC and LLC. These components are

Data Link Layer

Layer 2 of the OSI Reference Model is the data link layer. This layer is responsible for defining the format of layer-2 frames as well as the mechanics of how devices communicate with each other over the physical layer. Here are the components the data link layer is responsible for:

■ Defining the Media Access Control (MAC) or hardware addresses
■ Defining the physical or hardware topology for connections
■ Defining how the network layer protocol is encapsulated in the

Wireless Concept Basic

Wireless transmission has been used for a very long time to transmit data by using infrared radiation, microwaves, or radio waves through a medium like air. With this type of connection, no wires are used. Typically, three terms are used to group different wireless technologies: narrowband, broadband, and circuit/packet data. Whenever you are choosing a wireless solution for your WAN or LAN, you should always consider the following criteria: speed, distance, and number of devices to connect.

Narrowband solutions typically require a license and operate at a low data rate. Only one frequency is used for transmission: 900 MHz, 2.4 GHz, or 5 GHz. Other devices, such as household cordless phones, also use these frequencies. Through the use of spread spectrum, higher data rates can be achieved by spreading the signal across multiple frequencies. However, transmission of these signals is

Fiber Cabling

LANs typically use either copper or fiber-optic cabling. Copper cabling is discussed in more depth in the section “Ethernet” later in this chapter.

Fiber-optic cabling uses light-emitting diodes (LEDs) and lasers to transmit data. With this transmission, light is used to represent binary 1s and 0s: if there is light on the fiber, this represents a 1; if there is no light, this represents a 0. Fiber-optic cabling is typically used to provide very high speeds and to span connections across very large distances. For example,

OSI Reference Model 3

Network Layer

The third layer of the OSI Reference Model is the network layer. The network layer provides quite a few functions. First, it provides for a logical topology of your network using logical, or layer-3, addresses. These addresses are used to group machines together; the network component of the address is what accomplishes this grouping. Layer-3 addresses allow devices that are on the same or different media types to communicate with each other. Media types define types of connections, such as Ethernet, Token Ring, or serial.

To move information between devices that have different network numbers, a router is used. Routers use information in the logical address to make intelligent decisions about how to

OSI Reference Model 2

Layer Definitions

There are seven layers in the OSI Reference Model, shown in Figure 2-1: application, presentation, session, transport, network, data link, and physical. The functions of the application, presentation, and session layers are typically part of the user’s application. The transport, network, data link, and physical layers are

Ethernet

Ethernet is a LAN media type that functions at the data link layer. Ethernet uses the Carrier Sense Multiple Access/Collision Detection (CSMA/CD) mechanism to send information in a shared environment. Ethernet was initially developed with the idea that many devices would be connected to the same physical piece of wiring. The acronym CSMA/CD describes the actual process of how Ethernet functions.

In a traditional, or hub-based, Ethernet environment, only one NIC can successfully send a frame at a time. All NICs, however, can simultaneously listen to information on the wire. Before an Ethernet NIC puts a frame on the wire, it will first sense the wire to ensure that no other frame is

Unicast, Multicast, Broadcast

Unicast
A frame with a destination unicast MAC address is intended for just one device on a segment. The top part of Figure 2-2 shows an example of a unicast. In this example, PC-A creates an Ethernet frame with a destination MAC address that contains PC-C’s address. When PC-A places this data link layer frame on the wire, all the devices on the segment receive it. The NICs of PC-B, PC-C, and PC-D each examine the destination MAC address in the frame. In this instance, only PC-C’s NIC will process the frame, since the destination MAC address in the frame matches the MAC address of its NIC. PC-B and PC-D will ignore the frame.

Multicast

Unlike a unicast address, a multicast address represents a group of devices on a segment. The multicast group can contain anywhere from no devices to every device on a segment. One of the interesting things about multicasting is that the membership of a group is dynamic—devices can join and leave as they please. The detailed process of multicasting is beyond the scope of this book, however.

The middle portion of Figure 2-2 shows an example of a multicast. In this example, PC-A sends a data link layer frame to a multicast group on its segment. Currently, only PC-A, PC-C, and PC-D are members of this group. When each of the PCs receives the frame, its NIC examines the destination MAC address in the data link layer frame. In this example, PC-B ignores the frame, since it is not a member of the group. However, PC-C and PC-D will process the frame.

Broadcast

A broadcast is a data link layer frame that is intended for every networking device on the same segment. The bottom portion of Figure 2-2 shows an example of a broadcast. In this example, PC-A puts a broadcast address in the destination field of the data link layer frame. For MAC broadcasts, all of the bit positions in the address are enabled, making the address FFFF.FFFF.FFFF in hexadecimal. This frame is then placed on the wire. Notice that in this example, when PC-B, PC-C, and PC-D receive the frame, they all process it.

Broadcasts are mainly used in two situations. First, broadcasts are more effective than unicasts if you need to send the same information to every machine. With a unicast, you would have to create a separate frame for each machine on the segment; with a broadcast, you can accomplish the same thing with one frame. Second, broadcasts are used to discover the unicast address of a device. For instance, when you turn on your PC, it initially doesn’t know the MAC addresses of any other machines on the network. A broadcast can be used to discover the MAC addresses of these machines, since they will all process the broadcast frame. In IP, the Address Resolution Protocol (ARP) uses this process to discover another device’s MAC address.
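A NIC can tell these three address types apart from the bits of the destination address itself: the broadcast address is all ones, and a multicast address has the low-order bit of its first byte set. The sketch below shows that check; the sample addresses are just illustrations.

```python
# Classify a MAC address as broadcast, multicast, or unicast.
def mac_address_type(mac: str) -> str:
    octets = bytes.fromhex(mac.replace(".", "").replace(":", "").replace("-", ""))
    if octets == b"\xff" * 6:
        return "broadcast"            # FFFF.FFFF.FFFF
    if octets[0] & 0x01:
        return "multicast"            # low-order bit of the first byte is set
    return "unicast"

print(mac_address_type("FFFF.FFFF.FFFF"))  # broadcast
print(mac_address_type("0100.5E00.0001"))  # multicast
print(mac_address_type("0000.0C12.3456"))  # unicast
```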

11 May 2010

OSI Reference Model 1

The International Organization for Standardization (ISO) developed the Open Systems Interconnection (OSI) Reference Model to describe how information is transferred from one machine to another, from the point when a user enters information using a keyboard and mouse to when that information is converted to electrical or light signals transferred along a piece of wire or radio waves transferred through the air. It is important to understand that the OSI Reference Model describes concepts and terms in a general manner, and that many network protocols, such as IP and IPX, fail to fit nicely into the scheme explained in ISO’s model. Therefore, the OSI Reference Model is most often used as a teaching and troubleshooting tool. By understanding the basics of the OSI Reference Model, you can apply these to real protocols to gain a better understanding of them as well as

10 May 2010

Network Types - 3


Content Networks

Content networks (CNs) were developed to ease users’ access to Internet resources. CNs are aware of layers 4–7 of the OSI Reference Model and use this information to make intelligent decisions about how to obtain the information for the user or users. CNs come in the following categories: content distribution, content routing, content switching, content management, content delivery, and intelligent network
services, which include QoS, security, multicasting, and virtual private networks (VPNs).

Companies deploy basically two types of CNs:

Network Types - 2

Metropolitan Area Networks

A metropolitan area network (MAN) is a hybrid between a LAN and a WAN. Like a WAN, it connects two or more LANs in the same geographic area. A MAN, for example, might connect two different buildings or offices in the same city. However, whereas WANs typically provide low- to medium-speed access, MANs provide high-speed connections, such as T1 (1.544 Mbps) and optical services.

The optical services provided include SONET (the Synchronous Optical Network standard) and SDH (the Synchronous Digital Hierarchy standard). With these optical services, carriers can

Network Types - 1

Networks come in a wide variety of types. The most common are LANs and WANs, but there are many other types of networks, including metropolitan area networks (MANs), storage area networks (SANs), content networks (CNs), intranets and extranets, VPNs, and others. The following sections provide a brief overview of each of these network types.

Local Area Networks

Local area networks (LANs) are used to connect networking devices that are

Mesh Topology


Meshing generically describes how devices are connected together. There are two types of meshed topologies: partial and full. In a partially meshed environment, not every device is connected to every other device. In a fully meshed environment, every device is connected to every other device. Figure 1-3 shows examples of these two types of topologies.
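A full mesh grows quickly: every device needs a link to every other device, so n devices require n(n-1)/2 links. Here is a quick sketch (the device counts are arbitrary examples):

```python
# Number of links needed for a full mesh of n devices.
def full_mesh_links(n: int) -> int:
    return n * (n - 1) // 2

for n in (4, 10, 50):
    print(n, full_mesh_links(n))  # 4 -> 6, 10 -> 45, 50 -> 1225
```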

Note that like the topologies in the preceding section, partial and full mesh can be seen from

Physical Versus Logical Topology


A distinction needs to be made between physical and logical topologies. A physical topology describes how devices are physically cabled together. For instance, 10BaseT has a physical star topology and FDDI has a physical dual ring topology. A logical topology describes how devices communicate across the physical topology.

The physical and logical topologies are independent of each other. For example, any variety of Ethernet uses a logical bus topology when

Network Topology

When you are cabling up your computers and networking devices, various types of topologies can be used. A topology defines how the devices are connected. Figure 1-1 shows examples of topologies that different media types use.

A point-to-point topology has a single connection between two devices. In this topology, two devices can directly communicate without interference from other devices. These types of connections are

Introduction to Networking

Networks
A network is basically all of the components (hardware and software) involved in connecting computers across small and large distances. Networks are used to provide easy access to information, thus increasing productivity for users. This section covers some of the components involved with networking, as well as the basic types of topologies used to connect networking devices, including computers.


Components
One of the main components of networking is applications, which