Networking basics – the TCP protocol

In our discussion of the IP protocol, the reader might have noticed that there are many desirable features that the IP protocol does not offer. Suppose, for instance, that we are building an application that needs to transmit data in a stream-oriented way – this could be a video, an MP3 file or a data transfer modelling a conversation. If we simply split our data into IP packets and send them directly via IP, there are many issues that we have to solve ourselves. IP does not even guarantee that packets arrive, so we have to deal with lost packets and build some acknowledgement and retransmission mechanism. Even if the packets arrive, the order in which they arrive is not guaranteed – different packets could be routed along different paths and arrive out of order. So we need a mechanism to encode the order and reassemble the packets at the destination in the right order. If the receiver needs some time to process the packets, we might want to control and dynamically adjust the rate of transmission. And finally, IP only assigns one address to a host (more precisely, to a network card), so our transmission might conflict with other applications running on the same host, and we need a way to deal with this.

Fortunately, the internet protocol stack offers a protocol sitting on top of IP that provides all of this – the transmission control protocol, commonly known as TCP (there are other protocols on top of IP, like ICMP, which we have already seen in action, and UDP, but we will focus on TCP in this post).

The main properties of the TCP protocol are:

  • It is reliable – the transmission of each piece of data is guaranteed by the acknowledgement and retransmission capabilities of the protocol
  • It is connection oriented. When TCP is used, a connection with endpoints on both hosts is established first through a handshake procedure. Once the connection is established, both parties can write and read from the connection at the same time until all the data is transferred, then the connection is closed again. A connection endpoint (a socket) is identified using the IP address and an additional number, called the port number, so that different connections originating or ending at the same host can operate independently.
  • It is stream oriented. An application dealing with TCP does not have to know anything about packet sizes, fragmentation, reassembly, MTUs and so forth – it just writes data sequentially into a socket or reads data sequentially from a socket. Most operating systems make writing to and reading from a socket as easy as dealing with a file. The protocol makes sure that the bytes that are written into one of the endpoints arrive at the other endpoint completely and in the same order.
  • And finally, TCP offers congestion control, i.e. the protocol automatically throttles the transmission speed if it detects congestion.

TCP is a rather complicated protocol, and it is hopeless to cover it entirely in one post. Instead, we will look at a few of those points in more detail in the following sections.

Connection endpoints and ports

The TCP protocol was first standardized in RFC 793 (and has since then been adapted and enhanced by many other RFCs). This is where we also find the structure of a TCP header (see section 3.1 of the document). The first two 16-bit words in the header are called source port and destination port.

Together with the IP address, the port number defines the full address relevant for a TCP connection. The combination of an IP address with a port number is sometimes called a socket in the RFC and is conventionally written as the IP address followed by a colon and the port number, for instance 192.168.178.1:23 for port 23 on the host with IP address 192.168.178.1. However, the port number does not simply supplement the IP address. Instead, TCP operates by building connections which are determined by the full endpoints – IP address and port number – on both sides.

Let us look at an example to explain this. Suppose you are running a web server on a host with IP address 10.0.2.20. Traditionally, a web server uses the port number 80.

Now suppose a first client connects to this web server. Let us assume that the client has the IP address 10.0.2.21 and that the operating system of the client decides to use the port number 3333 on the client to establish the connection (we will see below how exactly that works). When this connection is established, the web server will typically spawn off a separate thread to handle the communication with this client. So there is now one connection

10.0.2.21:3333 — 10.0.2.20:80

Now a second client might connect to the web server as well – of course we want this to be possible. If this client is running on a machine with IP address 10.0.2.22 and using port 3334, we obtain a second connection

10.0.2.22:3334 — 10.0.2.20:80

The situation is displayed in the image below.

TCPConnections

The web server will create a second thread that serves HTTP requests that are placed using this connection. Now the point is that even though both connections share the same endpoint 10.0.2.20:80, they operate completely independently! If a TCP message arrives at port 80 of the web server, the operating system will inspect the source IP address and source port number to match the message to an existing open connection. It will then forward the message to the thread responsible for this connection and to no other thread. Thus a connection, identified by the quadruple IP source address, IP target address, TCP source port, TCP target port, serves as a channel through which data can flow independently of any other connections, even if they share a common endpoint. This makes TCP compatible with the paradigm of multi-threaded servers.
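To make this concrete, here is a toy Python sketch of the demultiplexing step – this is of course not how a real kernel implements it, and all addresses and handler names are made up for illustration.

# Conceptual sketch: demultiplexing TCP segments by their four-tuple.
# A toy model, not an actual kernel implementation.
connections = {}  # maps (src_ip, src_port, dst_ip, dst_port) -> handler

def register_connection(src_ip, src_port, dst_ip, dst_port, handler):
    connections[(src_ip, src_port, dst_ip, dst_port)] = handler

def dispatch(segment):
    # Even though both connections share the endpoint 10.0.2.20:80, the
    # four-tuple differs, so each segment reaches exactly one handler
    key = (segment["src_ip"], segment["src_port"],
           segment["dst_ip"], segment["dst_port"])
    connections[key](segment)

register_connection("10.0.2.21", 3333, "10.0.2.20", 80,
                    lambda s: print("thread 1 handles", s["data"]))
register_connection("10.0.2.22", 3334, "10.0.2.20", 80,
                    lambda s: print("thread 2 handles", s["data"]))

dispatch({"src_ip": "10.0.2.21", "src_port": 3333,
          "dst_ip": "10.0.2.20", "dst_port": 80, "data": "GET /"})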

Handshakes and the TCP state machine

The connection based approach of TCP obviously requires a mechanism to establish a connection when the communication between two nodes begins. It also implies that a TCP connection has a state. This is a major difference between TCP and pure IP. In the IP protocol, every packet is treated the same – the processing of a packet is independent of any previously received packet (which is strictly speaking only true if we ignore fragmentation for a moment). For TCP, this is not true. A TCP connection has a state and the processing triggered by the arrival of a packet is in the context of that state.

Thus, from a theoretical point of view, a TCP connection can be described as a state machine. There is a (finite) number of states, and the connection will move from one state to the next state triggered by events and packets.

The full TCP state machine is rather complicated, and we will not discuss all possible transitions. Rather, we will focus on those transitions that a connection goes through until it is fully established. A graphical representation of this part of the state machine would look as follows.

TCPStateMachine

To understand this image, let us go through the transitions and events one by one for a real world example. Suppose you are pointing your web browser to a specific site on the WWW, say http://www.wordpress.com. Your browser will then use a DNS service to turn this human readable address into an IP address, say 192.0.78.13. At this IP address, a web server is listening on port 80.

When this web server was started, it asked the operating system on which it is running to reserve port 80 for it, so that it owns all incoming connections on this port. Technically, this is done using operating system calls usually called bind (to claim the port) and listen (to mark the socket as accepting connections). The operating system now knows that the web server is claiming this port. It will establish an object called a socket and move this socket into the state “listening”. Thus, the endpoint on the server side does actually go through the transition at the top left of the image above – transitioning a connection from “closed” (i.e. not existing, in this case) to “listening”.
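Translated into code, this server-side sequence of calls might look as follows – a minimal Python sketch, using the unprivileged port 8080 instead of port 80 and a hard-coded response, just to illustrate the system calls involved.

import socket
import threading

def handle(conn, peer):
    # One thread per established connection, as described above
    request = conn.recv(1024)       # read from the socket like from a file
    conn.sendall(b"HTTP/1.0 200 OK\r\n\r\nHello\n")
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 8080))      # claim the port
server.listen(5)                    # the socket moves into the state LISTEN
while True:
    conn, peer = server.accept()    # returns once a handshake has completed
    threading.Thread(target=handle, args=(conn, peer)).start()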

You can use the command netstat -tln -4 on a Linux machine to get a list of all listening TCP connections (for IPv4). Depending on your configuration, you might see sockets listening on the ports 53 (DNS server), 445 (Microsoft Windows shares/CIFS), 80 (Web server) or 139 (Microsoft NetBios).

Back to our example – what is the web browser doing? After having resolved the IP address, it will try to establish a connection to the IP address / port number 192.0.78.13:80. To do this, it will assemble and send a special TCP packet called a SYN packet. This packet does not contain any data, but a special bit (the SYN bit) in the TCP header of this packet is set. After sending this packet, the client side endpoint is in the state “SYN-SENT”, i.e. the client part of the connection has traversed the path on the upper right of our image.

Once the SYN packet arrives at the server side, the server will reply with a packet that has both the SYN bit and the acknowledgement bit set in its header (a SYN-ACK), to let the client know that the SYN packet was received. It will then move into the next state – SYN-RCVD. As the SYN packet contains the IP address and port number of the client, the server now knows the other endpoint of the connection.

Next, the client will receive the acknowledgement of its SYN packet. It will then reply with another acknowledgement, this time to let the server know that it has received the server’s acknowledgement. It then moves into the final state “ESTABLISHED”. Once the server receives this acknowledgement, it will do the same. At this point, the connection between both parties is fully established and the exchange of data can start.
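On the client side, this entire handshake is hidden behind a single system call – connect. Here is a minimal Python sketch, using the IP address from our example (which would of course have to be adapted to actually run against a real server).

import socket

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# connect() sends the SYN, waits for the SYN-ACK and sends the final ACK;
# when it returns, the connection is in the state ESTABLISHED
client.connect(("192.0.78.13", 80))
client.sendall(b"GET / HTTP/1.0\r\nHost: www.wordpress.com\r\n\r\n")
print(client.recv(1024))
client.close()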

Acknowledgement and retransmission

During the initial handshake, we have already seen an important mechanism – acknowledgements. TCP is a reliable protocol, i.e. it guarantees that packets are received by the other endpoint. To make this work, each endpoint acknowledges receipt of all packets so that the sender knows that the data has been received. If a packet is not acknowledged within a certain period of time, it is retransmitted.

To allow for a retransmission, a sender needs to maintain a buffer of data which has been sent, but not yet acknowledged by the peer, so that the data is still available for retransmission. When data is eventually acknowledged, the part of the buffer containing that data can be emptied.

Conversely, the receiver will typically also have to maintain a buffer, as the application might need some time to read and process all the data. This raises another complication – how can we make sure that this buffer does not overflow? What we need is a mechanism for the receiver to inform the sender about the size of the available buffer. The buffer size advertised by the receiver is then used by the sender as its send window.

To explain these concepts in a bit more detail, let us take a look at the following diagram.

What we see here is a short piece of a longer stream of data that needs to be transmitted. Each of the little boxes is one byte of data, and the number within the box is the offset of this byte within the stream. This number is reflected during the transmission by a sequence number which is part of the header of a TCP packet and marks where in the stream the data within the packet is located. When a receiver acknowledges receipt of a packet, it adds the sequence number of the acknowledged data to the acknowledgement message to avoid a dependency on the physical order of packets.

To keep track of the stream status, the sender maintains two numbers. The first number, usually abbreviated as SND_UNA, contains the smallest sequence number that has been sent, but not yet acknowledged by the peer. Thus every byte with a smaller offset has been acknowledged and can safely be removed from the buffer.

The pointer SND_NXT contains the next sequence number that can be sent. Thus the bytes between SND_UNA and SND_NXT have been sent, but not yet acknowledged, and the bytes after SND_NXT have been passed to the operating system on the sender side, but not yet transmitted. If additional data is passed to the operating system for sending, it is stored in the buffer, and SND_NXT is incremented as the data is sent. When an acknowledgement is received, SND_UNA is incremented and older bytes are removed from the buffer.

An additional restriction is now given by the size of the send window. During the initial handshake (and, in fact, with every packet exchanged), both endpoints announce their current window size. The window advertised by the peer is then used as an upper bound for the number of bytes that are allowed to be in transit. Thus, in the example above, the bytes starting at offset 110 have already been handed over to the operating system for sending, but are in fact not yet ready to be sent, as the send window of the peer is only 10 bytes.
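To make this bookkeeping concrete, here is a heavily simplified Python model of the sender side – just the two pointers and the window check, with timers, retransmission and congestion control left out entirely.

class SendBuffer:
    """Toy model of a TCP send buffer (no timers, no retransmission)."""

    def __init__(self, window):
        self.buffer = b""       # bytes handed over by the application
        self.snd_una = 0        # oldest sent but unacknowledged sequence number
        self.snd_nxt = 0        # next sequence number to be sent
        self.window = window    # send window advertised by the peer

    def write(self, data):
        # The application passes data to the OS; it is only buffered here
        self.buffer += data

    def send_some(self):
        # At most window bytes may be in flight (snd_nxt - snd_una)
        limit = self.snd_una + self.window
        end = min(self.snd_una + len(self.buffer), limit)
        segment = self.buffer[self.snd_nxt - self.snd_una:end - self.snd_una]
        self.snd_nxt = end
        return segment          # would now be wrapped into a TCP packet

    def ack_received(self, ack):
        # Everything below ack has arrived and can be dropped from the buffer
        self.buffer = self.buffer[ack - self.snd_una:]
        self.snd_una = ack

buf = SendBuffer(window=10)
buf.write(b"hello world, this is a longer stream")
print(buf.send_some())   # only the first 10 bytes may be in transit
buf.ack_received(10)
print(buf.send_some())   # the window slides forward by 10 bytes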

All this sounds still comparatively simple, but can in fact become quite complicated. Complex algorithms have been defined in several RFCs to determine when exactly data is to be sent, what happens if an acknowledgement is not received, when exactly acknowledgements are sent, how the send rate is to be adapted to the capacity of the connection (congestion control) and so forth. Discussing all this would go far beyond the scope of this blog post. For those interested in the details, I recommend the two volumes of TCP/IP Illustrated by W. R. Stevens. If you prefer to look at source code, there are the implementations in the open source operating systems like FreeBSD and Linux. In addition, there is the implementation that I coded for my own toy operating system, especially the documentation of the networking stack and the source code of the TCP module.

Networking basics – IP routing and the ARP protocol

In the last post in this series, we have covered the basics of the IP protocol – the layout of a network message and the process of fragmentation. However, there is one point which we have not yet discussed. Assume that an application or operating system has actually assembled a message and applied fragmentation so that the message is now ready to be sent. How would that actually work?

Routing in local networks: the ARP protocol

To understand the situation, assume for a moment that we are dealing with a very simple network topology. We are looking at a host which is part of a small network, and the host is directly connected to the same Ethernet network segment as the destination host, as illustrated in the following diagram.

RoutingI

Here we are considering a network consisting of a small number of workstations and one Ethernet switch (in the middle of the diagram). Each workstation is equipped with a network interface card (NIC) which has an Ethernet (MAC) address. Thanks to the switch, Ethernet frames sent out by one NIC can be directly read by any other NIC.

At configuration time, each NIC receives an assigned IP address. This is usually done using a technology like DHCP, but can also be done manually as long as no IP address is used twice.

Now suppose that the workstation with IP address 192.168.178.2 (more precisely: the workstation to which the network interface card with assigned IP address 192.168.178.2 is attached) wishes to send an IP packet to the workstation with IP address 192.168.178.3. It then needs to make several decisions:

  • which network interface card should be used to transmit the packet?
  • which Ethernet target address should be used?

In the simple case that we consider, the answer to the first question is obvious, as there is only one NIC attached to the workstation, but this question will become more relevant in the more complex setup that we will study later. The second question is more interesting – to answer it, the workstation somehow needs a way to translate an IP address into the MAC address of the corresponding NIC.

To do this, the ARP protocol is at our disposal. ARP is the abbreviation for Address Resolution Protocol and is defined in RFC 826. ARP messages are designed to travel on top of Ethernet or other link layer protocols. Essentially, the ARP protocol is request-reply based. If a host wishes to translate an IP address into an Ethernet address, it will send an ARP request to all hosts on the local network, using an Ethernet broadcast. This message contains the sender’s own IP and MAC address and the IP address that the sender is looking for. Each host on the network will compare that IP address to its own IP address. If they match, it will respond with a reply message that in turn contains its own IP and MAC address. The requesting host can then use this message to retrieve the correct MAC address and use it for further communication.
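The layout of an ARP message is simple enough to assemble one by hand. The following Python sketch builds the payload of an ARP request for IPv4 over Ethernet as specified in RFC 826 – the addresses are borrowed from the examples in this series, and actually sending the packet would require a raw socket and root privileges, which we skip here.

import struct
import socket

def arp_request(sender_mac, sender_ip, target_ip):
    # ARP payload for an IPv4-over-Ethernet request as per RFC 826
    return struct.pack(
        "!HHBBH6s4s6s4s",
        1,                             # hardware type: Ethernet
        0x0800,                        # protocol type: IPv4
        6,                             # hardware address length
        4,                             # protocol address length
        1,                             # operation: 1 = request, 2 = reply
        bytes.fromhex(sender_mac.replace(":", "")),
        socket.inet_aton(sender_ip),
        b"\x00" * 6,                   # target MAC: unknown, filled in by the reply
        socket.inet_aton(target_ip),
    )

payload = arp_request("1c:6f:65:c0:c9:85", "192.168.178.2", "192.168.178.3")
print(payload.hex())
# This payload would travel in an Ethernet frame with the broadcast target
# address ff:ff:ff:ff:ff:ff and the ethertype 0x0806 (ARP)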

Of course this procedure is not repeated every time a host wants to send a packet to another host in the local network. Instead, a host will cache the mapping of IP addresses to Ethernet MAC addresses in a so-called ARP cache. As the assignment of IP addresses to network interface cards can vary over time, entries in this cache typically have a timeout so that they become invalid after some time. On a Linux workstation, the arp command can be used to print the current content of the ARP cache, i.e. the set of hosts to which the workstation has a direct connection that has been recently used. On my PC, the output looks as follows.

$ arp -n
Address                  HWtype  HWaddress           Flags Mask            Iface
192.168.178.1            ether   08:96:d7:75:7e:80   C                     enp4s0
192.168.178.33           ether   ac:b5:7d:34:3a:a6   C                     enp4s0
192.168.178.28           ether   00:11:32:77:fe:46   C                     enp4s0

Here we see that the network card of my PC is able to connect directly to three other hosts on the same Ethernet network. The first one is my router, the second one is a laptop connected to the same router via WLAN (the router actually contains a switch that makes the devices connected via WLAN appear on the network as Ethernet devices) and the third one is a NAS.

Summarizing, here are the steps that a host would typically take to send an IP packet to another host on the local network (a sketch in code follows the list).

  • Look up the target IP address in the ARP cache
  • If there is a match, retrieve the MAC address from the ARP cache entry, assemble an Ethernet frame with that target address and send it
  • If there is no match, send an ARP request as broadcast into the local network. Once a reply arrives, add a corresponding entry to the ARP cache. Then proceed as above by assembling and sending the Ethernet frame
  • If no ARP reply arrives, give up – this will typically result in an error message like “destination host unreachable”
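In code, the control flow described in this list might look roughly as follows – a toy Python sketch with a stubbed-out request function and an invented timeout value, just to illustrate the logic.

import time

ARP_TIMEOUT = 60          # invented value: seconds until an entry becomes stale
arp_cache = {}            # maps IP address -> (MAC address, timestamp)

def send_arp_request(ip):
    # Stub: a real implementation would broadcast an ARP request (as
    # described above) and wait for a matching reply, None on timeout
    return "08:96:d7:75:7e:80" if ip == "192.168.178.1" else None

def resolve(ip):
    entry = arp_cache.get(ip)
    if entry and time.time() - entry[1] < ARP_TIMEOUT:
        return entry[0]                    # recent cache entry, use it
    mac = send_arp_request(ip)             # broadcast into the local network
    if mac is None:
        raise OSError("destination host unreachable")
    arp_cache[ip] = (mac, time.time())     # cache the reply for later use
    return mac

print(resolve("192.168.178.1"))   # triggers an ARP request
print(resolve("192.168.178.1"))   # now answered from the cache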

Note that the ARP protocol is designed to determine the target Ethernet address inside a local network. ARP requests will be dropped at network boundaries. Now, the Internet is by design a network of networks – it consists of many small networks that are connected to each other. Obviously, the ARP protocol is no longer sufficient to solve the routing challenge in these more complex networks, and we need additional tools. This will be discussed in the next section.

Routing across network boundaries

For the sake of concreteness, let us again take a look at a slightly modified version of the example network that we have already used earlier in this series.

MultiNetworkRouting

In this example, our entire network is comprised of three different networks, called network 1, network 2 and network 3. In each of these networks, each host is reachable from any other host directly via an Ethernet medium. Thus for the communication within each of these networks, the mechanisms explained in the previous section apply – a host uses the ARP protocol to translate IP addresses into MAC addresses and sends IP messages directly as payload of Ethernet frames.

Now let us walk through the chain of events that takes place when in this topology, host B wishes to send an IP packet to host A. The first thing that host B needs to detect is that host A is not part of the same Ethernet network. To be able to do this, an additional configuration item is used that we have ignored so far – the subnet mask.

When a network interface card is set up, we typically do not only assign an IP address to it, but also a network mask. Technically speaking, a network mask is – like the IP address itself – a sequence of four bytes, written in the same decimal dot notation. Thus it is again a sequence of 32 bits, and we can apply a bitwise AND operation to the IP address and the network mask. The result is, by definition, the network part of the IP address; the remaining bits form the host part. All IP addresses which share a common network part are considered to be part of the same subnet, and the standard IP routing algorithms assume that they are connected directly via Ethernet or another link layer protocol.

Let us look at an example to make this clearer. In our case, the network mask for all three subnets is 255.255.255.0. When we take the IP address 192.168.1.3 of host B and apply a bitwise AND to this and the network mask, we obtain the network part 192.168.1.0, as displayed in the table below.

NetworkMask

When we apply the same procedure to host A, we obtain the network 192.168.3.0. Thus the two hosts are not in the same subnet, and host B can use that information to determine that a direct routing attempt via ARP will not work (this is actually a bit of a simplification – typically, the host will use an algorithm known as the longest prefix match algorithm involving the network mask).
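This computation is easy to reproduce – here is a short Python sketch using the standard ipaddress module, where the address of host A is an assumption (the text only fixes its network part 192.168.3.0).

import ipaddress

# The bitwise AND of IP address and network mask yields the network part
ip   = int(ipaddress.ip_address("192.168.1.3"))
mask = int(ipaddress.ip_address("255.255.255.0"))
print(ipaddress.ip_address(ip & mask))        # 192.168.1.0

# The same test using the ipaddress module directly
network_b = ipaddress.ip_network("192.168.1.3/255.255.255.0", strict=False)
host_a = ipaddress.ip_address("192.168.3.3")  # assumed address of host A
print(host_a in network_b)                    # False - a gateway is needed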

Instead, in order to reach host A, host B will have to make use of a so-called gateway or router. Roughly speaking, a gateway is a host that is connected to more than one network and can therefore transmit or route (hence the name router which is often used as a synonym, even though this is not entirely correct, see RFC 4949 for a discussion of the terminology) packets between the networks.

In our example, there are two gateways. The first gateway connects the networks 1 and 2. It has two network interface cards. The first NIC is connected to network 1 and has the assigned IP address 192.168.1.1. The second NIC attached to this host is part of network 2 and has the assigned IP address 192.168.2.2 (this example makes it clear that, strictly speaking, the IP address is not an attribute of a host but of the network interfaces attached to it).

When host B wishes to send an IP packet to host A, it will send the packet to this gateway via the NIC attached to network 1. As this NIC is on the same Ethernet network, this can be done using the ARP mechanism discussed earlier. The gateway will then inspect the destination IP address of the packet and consult a table of possible next stations called the routing table. Based on that table, it will decide that the best next station is the gateway connecting network 2 and network 3. This gateway will finally determine that the destination IP address is part of network 3, to which it is directly attached, and eventually deliver the packet to host A.
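A routing table lookup of this kind can be sketched in a few lines as well – the entries below are made up to match our example topology, and the next-hop address of the second gateway is an assumption.

import ipaddress

# Hypothetical routing table of the first gateway: (network, next hop, interface)
routing_table = [
    (ipaddress.ip_network("192.168.1.0/24"), None, "eth0"),       # directly attached
    (ipaddress.ip_network("192.168.2.0/24"), None, "eth1"),       # directly attached
    (ipaddress.ip_network("0.0.0.0/0"), "192.168.2.1", "eth1"),   # default route
]

def next_hop(destination):
    dest = ipaddress.ip_address(destination)
    # Longest prefix match: of all matching entries, pick the most specific one
    matches = [r for r in routing_table if dest in r[0]]
    network, gateway, interface = max(matches, key=lambda r: r[0].prefixlen)
    return gateway, interface

print(next_hop("192.168.3.3"))   # ('192.168.2.1', 'eth1') - via the next gateway
print(next_hop("192.168.1.3"))   # (None, 'eth0') - deliver directly via ARP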

Thus our IP packet had to pass several intermediate hosts on its way from source to destination. Each such host is called a hop. The traceroute utility can be used to print out the hops that are required to find a way to a given destination address. Essentially, the way this utility works is as follows. It will send out sequences of packets (typically UDP) towards a given destination, with increasing values of the TTL (time-to-live) field. If the value of this field is n, the packet will only survive n hops, after which it is dropped. The host dropping the packet will send an ICMP packet back to the host on which traceroute runs. This ICMP packet is used by the utility to determine that the sender of the ICMP packet is part of the route, sitting at station n. By increasing the TTL further until no packets are dropped anymore, the entire route to the destination can be probed in this way.
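The core of this mechanism fits into a short sketch – a toy traceroute in Python that sends UDP probes with increasing TTL and waits for the resulting ICMP messages. Note that the raw ICMP socket requires root privileges, and a real implementation matches probes and replies much more carefully.

import socket

def toy_traceroute(dest, max_hops=30, port=33434):
    # Raw ICMP socket to receive the "time exceeded" messages (requires root)
    icmp = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
    icmp.settimeout(2.0)
    for ttl in range(1, max_hops + 1):
        probe = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        probe.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
        probe.sendto(b"", (dest, port))         # UDP probe that survives ttl hops
        try:
            _, (hop, _) = icmp.recvfrom(512)    # sender of the ICMP message
            print(ttl, hop)
        except socket.timeout:
            hop = None
            print(ttl, "* * *")                 # no answer, e.g. due to a firewall
        probe.close()
        if hop == dest:                         # final destination reached
            break
    icmp.close()

toy_traceroute(socket.gethostbyname("www.wordpress.com"))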

Here is the output of a traceroute on the workstation on which I am writing this post.

$ traceroute www.wordpress.com
traceroute to www.wordpress.com (192.0.78.13), 30 hops max, 60 byte packets
 1  fritz.box (192.168.178.1)  2.289 ms  3.190 ms  5.086 ms
 2  dslb-178-004-200-001.178.004.pools.vodafone-ip.de (178.4.200.1)  22.503 ms  24.686 ms  25.227 ms
 3  * * *
 4  * * *
 5  92.79.212.61 (92.79.212.61)  33.985 ms 92.79.212.45 (92.79.212.45)  35.649 ms  36.373 ms
 6  145.254.2.175 (145.254.2.175)  38.205 ms 145.254.2.191 (145.254.2.191)  23.090 ms  25.321 ms
 7  edge-a01.fra.automattic.net (80.81.193.69)  25.815 ms  18.981 ms  19.729 ms
 8  192.0.78.13 (192.0.78.13)  22.331 ms  19.813 ms  19.593 ms

We can see that the first station in the path to the destination www.wordpress.com is my local DSL router (192.168.178.1), which, not surprisingly, acts as a default gateway for my local home network. The DSL router then forwards the packet to the next hop (178.4.200.1), which, judging by its name, is part of the DSL infrastructure of my ISP (Vodafone). The next two lines indicate that for the packets with TTL 3 and 4, the utility did not get an answer, most likely because some firewalls were preventing either the probing UDP packet from reaching its destination or the ICMP message from being sent or received. Finally, there are three more hops, corresponding to the TTL values 5, 6 and 7, before the final destination is reached.

This sounds simple, but in fact routing is a fairly complex process. In a home network, routing is comparatively easy and the routing table is fairly short (you can use the command route on a Linux system to print out the routing table of your machine). Typically, there are only two entries in the routing table of an ordinary PC at home. One entry tells the operating system that all packets targeted at an IP address in the local network are to be sent via the local network interface without any gateway. The second entry is the so-called default route and simply defines that all other packets are to be sent to a default gateway, which is for instance the cable modem or DSL router that you use to connect to your ISP.

However, once we leave a home network, life becomes more complicated, as there is typically more than one possible path from a source host to a destination host. Thus a host might have more than one possible choice for the next hop, and a lot hinges on routers correctly building their routing tables. Several routing protocols exist that routers use to exchange information with each other in order to find the best path to a destination efficiently, like OSPF, BGP or IS-IS, see RFC 1812, RFC 1142 or RFC 1247 for more details.

There are many topics related to IP networking and routing that we have not yet discussed, for instance network address translation (NAT), details of the ICMP protocol, CIDR notation and address classes, and IP version 6. Instead of getting further into these details, however, we will devote the next post in this series to a protocol sitting on top of IP – the TCP protocol.

Networking basics – IP

In the previous post in my networking series, we have looked in detail at the Ethernet protocol. We now understand how two machines can communicate via Ethernet frames. We have also seen that an Ethernet frame consists of an Ethernet header and some payload which can be used to transmit data using higher level protocols.

So the road to implementing the IP protocol appears to be well-paved. We will probably need some IP header that contains metadata like source and destination, combine this header with the actual IP payload to form an Ethernet payload and happily hand this over to the Ethernet layer of our networking stack to do the real work.

Is it really that easy? Well, almost – but before looking into the subtleties, let us again take a look at a real world example which will show us that our simple approach is not so far off as you might think.

Structure of an IP packet

We will use the same example that we already examined in the post in this series covering the Ethernet protocol. In that post, we used the ping command to create a network packet. Ping uses a protocol called ICMP that actually travels on top of IP, so the packet generated was in fact an IP packet. The output that we obtained using tcpdump was

21:28:18.410185 IP your.host > your.router: ICMP echo request, id 6182, seq 1, length 64
0x0000: 0896 d775 7e80 1c6f 65c0 c985 0800 4500
0x0010: 0054 e6a3 4000 4001 6e97 c0a8 b21b c0a8
0x0020: b201 0800 4135 1826 0001 d233 de5a 0000
0x0030: 0000 2942 0600 0000 0000 1011 1213 1415
0x0040: 1617 1819 1a1b 1c1d 1e1f 2021 2223 2425
0x0050: 2627 2829 2a2b 2c2d 2e2f 3031 3233 3435
0x0060: 3637

We have already seen that the first few bytes form the Ethernet header. The last two bytes (0x0800) of the Ethernet header are called the Ethertype and indicate that the payload is to be interpreted as an IP packet. As expected, this packet again starts with a header.

The IP header can vary in length, but is at least 20 bytes long. Its exact layout is specified in RFC 791. We will now go through the individual fields in detail, but can already note that there are some interesting similarities between the Ethernet header and the IP header. Both contain a source and target address for the respective layer – the Ethernet header contains the source and target Ethernet (MAC) address, the IP header contains the source and target IP address. Also, both headers contain a field (the Ethertype for the Ethernet header and the protocol field for the IP header) that defines the type of the payload and thus establishes the link to the next layer of the protocol stack. And there are checksums – the Ethernet checksum (FCS) being located at the end of the packet while the IP checksum is part of the header.

IP

The first byte (0x45) of the IP header is in fact a combination of two fields. The first part (0x4) is the IP protocol version. The value 0x4 indicates IPv4, which is more and more replaced by IPv6. The second nibble (0x5) is the length of the header in units of 32 bit words, i.e. 20 bytes in this case.

The next byte (0x00 in our case) is called type of service and not used in our example. This field has been redefined several times in the history of the IP protocol but never been widely used – see the short article on Wikipedia for a summary.

The two bytes after the type of service contain the total length of the packet. They are followed by two bytes called identification, which are related to the fragmentation mechanism that we will discuss further below. The same applies to the next two bytes (0x4000 in hexadecimal notation or 0100000000000000 in binary notation).

The following two 8-bit fields are called time to live and protocol. The time to live (TTL), 0x40 or 64 decimal in our case, limits the time a packet can survive while traveling through the network. Whenever an IP packet is forwarded by a host in the network, the field is reduced by one. When the value of the field reaches zero, the packet is dropped. The idea of this is to avoid that a packet which cannot be delivered to its final destination circulates in the network forever. We will learn more about the process of routing in one of the following posts in this series.

The protocol field (0x1 in our case) is the equivalent of the ethertype in the Ethernet header. It indicates the protocol to which the payload belongs. The valid values are specified in RFC 790. Looking up the value 0x1 there, we find that it stands for ICMP, as expected.

After the following two bytes which are the header checksum, we find the source address and destination address. Both addresses are encoded as 32 bit values, to be read as a sequence of four 8 bit values. The source address, for instance, is

c0 a8 b2 1b

If we translate each byte into a decimal value, this becomes

192 168 178 27

which corresponds to the usual notation 192.168.178.27 of the IP address of the PC on which the packet was generated. Similarly, the target address is 192.168.178.1.
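We can verify all of this programmatically. Here is a short Python sketch that decodes the fixed 20-byte header of the captured packet above using the struct module (options, if present, are ignored).

import struct
import socket

# The 20 header bytes of the IP packet from the tcpdump output above
header = bytes.fromhex("45000054e6a3400040016e97c0a8b21bc0a8b201")

(ver_ihl, tos, total_length, identification, flags_frag,
 ttl, protocol, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", header)

print("version:      ", ver_ihl >> 4)                   # 4
print("header length:", (ver_ihl & 0x0F) * 4, "bytes")  # 20
print("total length: ", total_length)                   # 84
print("TTL:          ", ttl)                            # 64
print("protocol:     ", protocol)                       # 1 = ICMP
print("source:       ", socket.inet_ntoa(src))          # 192.168.178.27
print("destination:  ", socket.inet_ntoa(dst))          # 192.168.178.1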

In our case, the IP header is 20 bytes long, with the target address being the last field. The general layout allows for some additional options to be added to the header which we will not discuss here. Instead, we will now look at a more subtle point of the protocol – fragmentation.

Fragmentation

What is the underlying problem that fragmentation tries to solve? When looking at how the Ethernet protocol works, we have seen that the basic idea is to enable several stations to share a common medium. This only works because each station occupies the medium for a limited time, i.e. because an Ethernet frame has a limited size. Traditionally, an Ethernet frame is not allowed to be longer than 1518 bytes, where 18 bytes are reserved for the header and the checksum. Thus, the actual payload that one frame can carry is limited to 1500 bytes.

This number is called the maximum transmission unit (MTU) of the medium. Other media like PPP or WLAN have different MTUs, but the size of a packet is also limited there.

Now suppose that a host on the internet wishes to assemble an IP packet with a certain payload. If the payload is so small that the packet, including the header, fits into the MTU of all network segments that the packet has to cross until it reaches its final destination, there is no issue. If, however, the total size of the packet exceeds the smallest MTU of all the network segments that the packet will visit (called the path MTU), we need a way to split the packet into pieces.

This is exactly what fragmentation is doing. Fragmentation refers to the process of splitting an IP packet into smaller pieces that are then reassembled by the receiver. This process involves the fields in the IP header that we have not described so far – the identification field and the two bytes immediately following it which consist of a 3 bit flags field and a 13 bit fragment offset field.

When a host needs to fragment an IP packet, it will split the payload into pieces that are small enough to pass all segments without further fragmentation (the host will typically use ICMP to determine the path MTU to the destination, as we will see below). Each fragment receives its own IP header, with all fragments sharing the same values for the source IP address, target IP address, identification and protocol. The flags field is used to indicate whether a given fragment is the last one or is followed by additional fragments, and the fragment offset field indicates where the fragment belongs in the overall message (measured in units of eight bytes).

When a host receives fragments, it will reassemble them, using again the four fields identification, source address, target address and protocol. It can use the fragment offset to see where the fragment at hand belongs in the overall message to be assembled, and the flags field to see whether a given fragment is the last one – so that the processing of the message can start – or whether it has to wait for further fragments. Note that the host cannot assume that the fragments arrive in the correct order, as they might travel along different network paths.
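A strongly simplified version of the sender-side logic might look as follows – a Python sketch that only computes offsets and flags, ignoring headers, checksums and options.

def fragment(payload, mtu, header_length=20):
    # Maximum payload per fragment; must be a multiple of 8, since the
    # fragment offset field counts in units of eight bytes
    max_data = (mtu - header_length) // 8 * 8
    fragments = []
    offset = 0
    while offset < len(payload):
        chunk = payload[offset:offset + max_data]
        more = (offset + len(chunk)) < len(payload)   # MF flag: more fragments follow
        fragments.append({
            "offset": offset // 8,   # value of the 13-bit fragment offset field
            "more_fragments": more,  # one of the 3 flag bits
            "data": chunk,
        })
        offset += len(chunk)
    return fragments

# 4000 bytes of payload across links with an MTU of 1500 bytes
for f in fragment(b"x" * 4000, 1500):
    print(f["offset"], f["more_fragments"], len(f["data"]))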

Note that the flags field also contains a bit which, if set, instructs all hosts not to fragment this datagram. Thus, if the datagram exceeds one of the MTUs, it will be dropped and the sender will be informed via a specific ICMP message. This mechanism can be used by a sender to determine the path MTU on the way to the destination and is called path MTU discovery.

Packets can, of course, be dropped for other reasons as well. IP inherently does not give any guarantee that a message arrives at the destination – there are no handshakes, connections or acknowledgements. This is the purpose of higher layer protocols like TCP that we will look at in a later post.

So far, we have ignored one point. When a host assembles an IP packet, it needs to determine the physical network interface to use for the transmission and the Ethernet address of the destination. And what if the target host is not even part of the same network? These decisions are part of a process called routing that we will discuss in the next post in this series.

Networking basics – Ethernet

In the previous post in this series, we have looked at the layered architecture of a typical network stack. In this post, we will dive into the lowest layer of a typical TCP/IP implementation – the Ethernet protocol.

The term Ethernet originally refers to a set of standards initially developed in the seventies by Xerox and then standardized jointly with DEC and Intel (see for instance this link to the original specification of what became known as Ethernet II). These standards describe both the physical layer (PHY), i.e. physical media, encoding, voltage levels, timing and so forth, as well as the data link layer that specifies the layout of the messages and the handling of collisions.

To understand how Ethernet works, it is useful to look at a typical network topology used by the early versions of the Ethernet standard. In such a setup, several hosts would be connected to a common medium in a bus topology, as indicated in the diagram below.

EthernetBusTopology
Here, a shared medium – initially a coaxial cable – was used to transmit messages, indicated by the thick horizontal line in the diagram. Each station in the network is connected to this cable. When a station wants to transmit a message, it translates this message into a sequence of bits (Ethernet is inherently a serial protocol), checks that no other message is currently in transit and forces a corresponding voltage pattern onto the medium. Any other station can sense that voltage pattern and translate the message back.

Each station in the network is identified by a unique address. This address is a 48 bit number called the MAC address. When a station detects an Ethernet message (called a frame) on the shared medium, it extracts the target address from that frame and compares it to its own MAC address. If they do not match, the frame is ignored (there are a few exceptions to this rule, as there are broadcast addresses and a device can be operated in promiscuous mode in which it will pick up all frames regardless of their target address).

There is one problem, though. What happens if several stations want to transmit messages at the same time? If that happens, the signals overlap and the transmission is disturbed. This is called a collision. Part of the Ethernet protocol is a mechanism called CSMA/CD (carrier sense multiple access with collision detection) that specifies how these situations are detected and handled. Essentially, the idea is that a station that detects a collision will wait for a certain random time and simply retry. If that fails again, it will wait again, using a different value for the wait time. After a certain number of failed attempts, the transmission is aborted.

This mechanism works a bit like a conference call. As you cannot see the other participants, it is very hard to tell whether someone wants to speak. So you first wait for some time, and if nobody else has started to talk, you start. If there is still a collision, both speakers will back off and wait for some time, hoping that their next attempt will be successful. Given the random wait times, it is rather likely that using this procedure, one of the speakers will be able to start talking after a few attempts.

This is nice and worked well for smaller networks, but has certain disadvantages when striving for larger networks with higher transfer rates. First, collision resolution consumes time and slows down the traffic significantly if too many collisions occur. Second, communication in this topology is half-duplex: every station can either transmit or receive, but not both at the same time. Both issues are addressed in more modern networks where a switch-based topology is used.

A switch or bridge is an Ethernet device that can connect several physical network segments. A switch has several ports to which network segments are connected. A switch knows (in fact, it learns that information over time) which host is connected to which port. When an Ethernet frame arrives at one of the ports, the switch uses that information to determine the port to which the frame needs to be directed and forwards the frame to this port.

EthernetSwitchedTopology

In such a topology, collisions can be entirely avoided once the switch has learned which device is behind which port. Each station is connected to the switch using a twisted pair cable and can talk to the switch in full duplex mode (if the connection allows for it), i.e. receive and transmit at the same point in time.  Most switches have the ability to buffer a certain amount of data to effectively serialize the communication at the individual ports, so that collisions can be avoided even if two frames for the same destination port arrive at the switch simultaneously. This sort of setup is more expensive due to the additional costs for the switches, but has become the standard topology even in small home networks.

Having looked at the physical realization of an Ethernet network, let us now try to observe this in action.  For these tests, I have used my home PC running Ubuntu Linux 16.04 which is connected to a home network via an Ethernet adapter.  This adapter is known to the operating system as enp4s0 (on older Ubuntu versions, this would be eth0).

First, let us collect some information about the local network setup using the tool ifconfig.

$ ifconfig enp4s0
enp4s0    Link encap:Ethernet  HWaddr 1c:6f:65:c0:c9:85  
          inet addr:192.168.178.27  Bcast:192.168.178.255  Mask:255.255.255.0
          inet6 addr: fe80::dd59:ad15:4f8e:6a87/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:48431 errors:0 dropped:0 overruns:0 frame:0
          TX packets:37871 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:46537519 (46.5 MB)  TX bytes:7483354 (7.4 MB)

Here we see that our network device enp4s0 is an Ethernet device with the MAC address 1c:6f:65:c0:c9:85. Here the 48 bit MAC address is printed as a sequence of 6 bytes, separated by colons.

Now open a terminal, become root and enter the following command:

tcpdump -i enp4s0 -xxx

This will instruct tcpdump to dump packets going over enp4s0, printing all details including the Ethernet header (-xxx). On another terminal, execute

ping 192.168.178.1

Then tcpdump will produce the following output (plus a lot of other stuff, depending on what you currently run in parallel):

21:28:18.410185 IP your.host > your.router: ICMP echo request, id 6182, seq 1, length 64
0x0000: 0896 d775 7e80 1c6f 65c0 c985 0800 4500
0x0010: 0054 e6a3 4000 4001 6e97 c0a8 b21b c0a8
0x0020: b201 0800 4135 1826 0001 d233 de5a 0000
0x0030: 0000 2942 0600 0000 0000 1011 1213 1415
0x0040: 1617 1819 1a1b 1c1d 1e1f 2021 2223 2425
0x0050: 2627 2829 2a2b 2c2d 2e2f 3031 3233 3435
0x0060: 3637
21:28:18.412823 IP your.router > your.host: ICMP echo reply, id 6182, seq 1, length 64
0x0000: 1c6f 65c0 c985 0896 d775 7e80 0800 4500
0x0010: 0054 0b2c 0000 4001 8a0f c0a8 b201 c0a8
0x0020: b21b 0000 4935 1826 0001 d233 de5a 0000
0x0030: 0000 2942 0600 0000 0000 1011 1213 1415
0x0040: 1617 1819 1a1b 1c1d 1e1f 2021 2223 2425
0x0050: 2627 2829 2a2b 2c2d 2e2f 3031 3233 3435
0x0060: 3637

Let us examine this output in detail. Each packet printed by tcpdump is an Ethernet frame. Every frame starts with an Ethernet specific part called the Ethernet header followed by a part determined by the higher layers of the protocol stack that we will not discuss in this but in later posts.

The first Ethernet frame starts with the destination MAC address 08:96:d7:75:7e:80 which in this case is the MAC address of my router (you can figure out the MAC address of your router using the `arp` command).

The next six bytes in the frame contain the source MAC address 1c:6f:65:c0:c9:85, i.e. the MAC address of my network card in this case. Note that this matches the output of `ifconfig` displayed above.

The next two bytes of the Ethernet frame are still part of the header and are called the ethertype. This field holds a number that specifies the higher layer protocol to which the data in the Ethernet frame refers. This field is not used by the Ethernet protocol itself, but is relevant for an operating system as it determines to which protocol stack the content of the Ethernet frame is routed for further processing. In our case, the ethertype is 0x800, indicating that the payload is an IP packet.

The next bytes starting with the value 0x45 form the data part of the Ethernet frame and contain the actual payload.
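To see these fields in code, here is a small Python sketch that decodes the 14 header bytes of the first frame from the tcpdump output above.

import struct

# First 14 bytes of the first frame in the tcpdump output above
frame = bytes.fromhex("0896d7757e801c6f65c0c9850800")

destination, source, ethertype = struct.unpack("!6s6sH", frame)

def mac(b):
    return ":".join("%02x" % byte for byte in b)

print("destination:", mac(destination))   # 08:96:d7:75:7e:80 - the router
print("source:     ", mac(source))        # 1c:6f:65:c0:c9:85 - the PC
print("ethertype:  ", hex(ethertype))     # 0x800 - payload is an IP packet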

In addition to the data displayed by tcpdump, there are additional bytes at the start and end of each Ethernet frame that are handled by the network card and usually not visible to applications. Preceding the Ethernet header, there is a so-called preamble which is a fixed 56 bit pattern designed to allow the stations to synchronize their clocks. The preamble is followed by an 8-bit pattern called the start frame delimiter (SFD), which is again a fixed value indicating the start of the actual frame (in some sources, the SFD is considered to be part of the preamble). These bits are then followed by the fields described above:

  • Destination MAC address
  • Source MAC address
  • Ethertype
  • Data

Finally, the Ethernet frame ends with a checksum called Frame check sequence which is used to detect transmission errors.

This simple structure of an Ethernet frame is virtually unchanged compared to the original Ethernet II specification. However, over time, some extensions and protocol variations have been defined. The most notable one is VLAN tagging according to the IEEE 802.1Q standard.

A VLAN or virtual LAN is a technology to split a single Ethernet network into logically separated areas. One way to do this is to program switches in such a way that they assign stations and the corresponding ports to different virtual LANs and allow traffic to flow only within each VLAN. However, this simple implementation fails if the network spans more than one switch. To support these cases as well, an Ethernet frame can contain an optional field – the VLAN tag – that contains the ID of the virtual LAN in which the frame is supposed to be distributed. This tag is placed where the ethertype would normally be located: to indicate its presence, the dummy ethertype 0x8100 is used, followed by the VLAN tag and the actual ethertype.
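A parser therefore has to check for this dummy ethertype before interpreting the rest of the frame – roughly as in this Python sketch, which uses an invented tagged frame for illustration.

import struct

def parse_ethertype(frame):
    (ethertype,) = struct.unpack("!H", frame[12:14])
    if ethertype == 0x8100:                      # 802.1Q VLAN tag present
        (tci, real_type) = struct.unpack("!HH", frame[14:18])
        vlan_id = tci & 0x0FFF                   # lower 12 bits hold the VLAN ID
        return vlan_id, real_type, frame[18:]
    return None, ethertype, frame[14:]

# Hypothetical frame: addresses, tag 0x8100 with VLAN ID 42, ethertype 0x0800
frame = bytes.fromhex("0896d7757e801c6f65c0c985" + "8100002a" + "0800") + b"payload"
print(parse_ethertype(frame))    # (42, 2048, b'payload')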

This concludes our short introduction to the Ethernet protocol. In the next post, we will discuss the network layer and the IP protocol before we then move on to ARP and routing.


Networking basics – the layered networking model

Recently, I picked up an old project of mine – implementing a Unix-like operating system kernel from scratch. I will post more on this later, but one of the first things I stumbled across when browsing my old code and my old documentation was the networking stack. I used this as an opportunity to refresh my own understanding of networking basics, and reckoned it might help others if I posted my findings here. So I decided to write a short series of posts on the basics of networking – Ethernet, ARP, IP, TCP/UDP and all that stuff.

Before we start to get into details on the different networking technologies, let me first explain the idea of a layered networking architecture.

Ultimately, networking is about physical media (copper cables, for instance) connecting physical machines. Each of these machines will be connected to the network using a network adapter. These adapters will write messages into the network, i.e. create a certain sequence of electrical signals on the medium, and read other messages coming from the network.

However, the magic of a modern networking architecture is that the same physical network can be used to connect different machines with different operating systems using different networking protocols. When you use your web browser to connect to some page on the web – this one for instance – your web browser will use a protocol called HTTP(S) for that purpose. From the web browser’s point of view, this appears to be a direct connection to the web server. The underlying physical network can be quite different – it can be a combination of Ethernet or WLAN to connect your PC to a router, some technology specific to your ISP or even a mobile network. The beauty of this is that the browser does not have to care. As often in computer science, this becomes possible through an abstract model organizing networking capabilities into layers.

Several models for this layering exist, like the OSI model and the TCP/IP layered model defined in RFC 1122. For the sake of simplicity, let us use the four-layer TCP/IP model as an example.

TCPIPLayers

The lowest layer in this model is called the link layer. Roughly speaking, this layer is the layer which is responsible for the actual physical connection between hosts. This covers things like the physical media connecting the machines and the mechanisms used to avoid collisions on the media, but also the addressing of network interfaces on this level.

One of the most commonly used link layer protocols is the Ethernet protocol. When the Ethernet protocol is used, hosts in the network are addressed using the so-called MAC address which uniquely identifies a network card within the network. In the Ethernet protocol (and probably in most other protocols), the data is transmitted in small units called packets or frames. Each packet contains a header with some control data like source and destination, the actual data and maybe a checksum and some end-of-data marker.

Now Ethernet is the most common, but by far not the only available link layer protocol. Another protocol which was quite popular at some point at the end of the last century is the Token Ring protocol, and in modern networks, a part of the path between two stations could be bridged by a provider specific technology or a mobile network. So we need a way to make hosts talk to each other which are not necessarily connected via a direct Ethernet link.

The approach taken by the layered TCP/IP model to this is as follows. Suppose you have – as in the diagram below – two sets of machines that are organized in networks. On the left hand side, we see an Ethernet network with three hosts that can talk to each other using Ethernet. On the right hand side, there is a smaller network that consists of two hosts, also connected with each other via Ethernet (this picture is a bit misleading, as in a real modern network, the topology would be different, but let us ignore this for a moment).

NetworksAndGateways

Both networks are connected by a third network, indicated by the dashed line. This network can use any other link layer technology. Each network contains a dedicated host called the gateway that connects the network to this third network.

Now suppose host A wants to send a network message to host B. Then, instead of directly using the link layer protocol in its network, i.e. Ethernet, it composes a network message according to a protocol that sits in the second layer of the TCP/IP networking model, called the internet layer, which uses the IP protocol. Similar to an Ethernet packet, this IP packet again contains a header with target and source address, the actual data and a checksum.

Then host A takes this message, puts that into an Ethernet packet and sends it to the gateway. The gateway will now extract the IP message from the Ethernet packet, put it into a message specific to the networking connecting the two gateways and transmit this message.

When the message arrives at the gateway on the right hand side, this gateway will again extract the IP message, put it into an Ethernet message and send this via Ethernet to host B. So eventually, the unchanged IP message will reach host B, after traveling through several networks, piggybacked on messages specific to the network protocols used along the way. However, for applications running on hosts A and B, all this is irrelevant – they will only get to see IP messages and do not have to care about the details of the layers below.

In this way, the internet connects many different networks using various different technologies – you can access hosts on the internet from your mobile device using mobile link layer protocols, and still communicate with a host in a traditional data center using Ethernet or something else. In this sense, the internet is a network of networks, powered by the layer model.

But we are not yet done. The IP layer is nice – it allows us to send messages across the internet to other hosts, using the addresses specific to the IP layer (yes, this is the IP address). But it does not, for instance, guarantee that the message ever arrives, nor does it provide a way to distinguish between different communication endpoints (aka ports) on one host.

These items are the concern of the next layer, the transport layer. The best known example of a transport layer protocol is TCP. On top of IP, TCP offers features like stream oriented processing, re-transmission of lost messages and ports as additional address components. For an application using TCP, a TCP connection appears a bit like a file into which bytes can be written and out of which bytes can be read, in a well defined order and with guaranteed delivery.

Finally, the last layer is the application layer. Common application layer protocols are HTTP (the basis of the world wide web), FTP (the file transfer protocol), or SMTP (the mail transport protocol).

The transitions between the different layers work very much like the transition between the internet layer and the link layer. For instance, if an application uses TCP to send a message, the operating system will take the message (not exactly true, as TCP is byte oriented and not message oriented, but let us gracefully ignore this), add a TCP header, an IP header and finally an Ethernet header, and ask the network card to transmit the message on the Ethernet network. When the message is received by the target host, the operating system of that host strips off the various headers one by one and finally obtains the original data sent by the first host.
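Conceptually, this is just nested framing – each layer prepends its own header to whatever it receives from the layer above. The following toy Python sketch illustrates the idea; the "headers" are obviously placeholders, not real protocol headers.

def tcp_layer(data):
    return b"[TCP hdr]" + data          # transport layer adds its header

def ip_layer(segment):
    return b"[IP hdr]" + segment        # internet layer wraps the TCP segment

def ethernet_layer(packet):
    return b"[ETH hdr]" + packet        # link layer wraps the IP packet

# Sending: the message travels down the stack, gaining one header per layer
wire = ethernet_layer(ip_layer(tcp_layer(b"GET / HTTP/1.0")))
print(wire)   # b'[ETH hdr][IP hdr][TCP hdr]GET / HTTP/1.0'

# Receiving: the target host strips the headers off again, one by one
print(wire[len(b"[ETH hdr]"):][len(b"[IP hdr]"):][len(b"[TCP hdr]"):])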

The part of the operating system responsible for this mechanism is naturally itself organized in layers  – there could for instance be an IP layer that receives data from higher layers without having to care whether the data represents a TCP message or a UDP message. This layer then adds an IP header and forwards the resulting message to another layer responsible for operating the Ethernet network device and so forth. Due to this layered architecture stacking different components on top of each other, the part of the operating system handling networking is often called the networking stack. The Linux kernel, for instance, is loosely organized in this way (see for instance this article).

This completes our short overview of the networking stack. In the next few posts, we will look at each layer one by one, getting our hands dirty again, i.e. we will create and inspect actual network messages and see the entire stack in action.