Lab 3: Observing layers and measuring network performance

Introduction

The purpose of this Lab is twofold.

First, you will use Wireshark to observe packets and better understand how the different Internet layers appear on the wire. This will make the abstract layered architecture seen in class much more concrete.

Second, you will extend last week’s file transfer application so as to measure network performance. In particular, you will add support to measure one-way delay and throughput.

1 Observing layers with Wireshark

The goal of this first part is to get you familiar with a network inspection tool called Wireshark and to help you visually identify the different Internet layers inside the same packet.

Wireshark is already installed on the EPFL VM. If you are performing the lab on your own Windows/Mac computer, install Wireshark from its official website. On (your own) Ubuntu, you can install it with:

sudo apt-get install wireshark

1.1 Layers and headers

The Internet architecture operates in layers. As a result, a packet that traverses the Internet looks, in a way, like an onion: On the “outside,” it is “wrapped up” in a link-layer header (which can be understood only by the link layer of computers and packet switches). If we “peel away” the link-layer header, we will find a network-layer header (which can be understood only by the network layer of computers and packet switches). If we also peel away the network-layer header, we will find a transport-layer header (which can be understood only by the transport layer of the end-point computers). And if we peel that away, too, we will find the application-layer header and data, which is the actual message that this packet is carrying. So, if we look inside an Internet packet, we will find a lot more information than the application-layer message that the packet is carrying: we will find meta-data, in the form of headers, which are needed by the various Internet layers in order to get the message from its source to its destination.

We will now use Wireshark to look inside Internet packets. To get started, do the following:

  1. Start your browser and clear its cache. For Firefox, click on the ≡ symbol on the upper-right, go to Settings → Privacy & Security → Cookies & Site Data → Clear Data.

  2. Start Wireshark, for instance by typing in the command line:

    wireshark
  3. You should see a list of your computer’s network interfaces. Identify the one whose packets you will capture.

    • If you are working on an INF computer or connected through VDI, capture packets from your Ethernet interface.
    • If you are working on a wirelessly connected computer, capture packets from your WiFi interface.
  4. Start a capture by double-clicking on the chosen interface. You may see data rolling inside the top part of your Wireshark window. These are the packets that are departing from and arriving at your network interface.

  5. Use your web browser to visit:

    http://www.mit.edu
  6. Once the page is fully loaded, stop the capture by clicking on the red square button at the left of the top menu.

  7. Right underneath the top menu, Wireshark lets you specify a display filter that you want to apply to the packets that you see. We will use it repeatedly below.

Questions

Answer the following questions.

  1. What messages were exchanged at the application layer, i.e. between your web browser and the MIT web server?

    Type the following in the Wireshark filter line:

    http

    You should now see the packets carrying HTTP messages. Look at the Info column and identify the request(s) and response(s).

    Is the MIT server really serving the final content over plain HTTP, or does it redirect your browser to another protocol? Look carefully at the HTTP response and its header fields.

  2. Which transport protocol was used: TCP or UDP?

    Click on one of the HTTP packets in the top section, then inspect the packet details in the middle section. You should see information about each layer. Near the bottom, you should see the application-layer entry, namely Hypertext Transfer Protocol. What is the line just above it?

  3. What messages were exchanged at the transport layer between your computer and the MIT web server?

    Replace http in the filter line with the transport protocol you identified above. Of course, using only tcp or only udp will display many unrelated packets, whereas you only want the ones exchanged with the computer running the MIT web server, so refine your filter accordingly.

    A possible strategy is to combine the transport filter with the relevant IP address.

  4. Do application-layer and transport-layer messages travel in separate packets?

    Explain, based on what you see, why the answer is no.

Important note: a key point here is that the same packet carries information for several layers at once. The transport-layer information is stored in the transport-layer header, whereas the application-layer information is stored in the application-layer header and data.
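For question 3 above, a display filter can combine the transport protocol with the server's IP address. For example (the address shown is hypothetical; use the one you observe in your own capture):

```
tcp && ip.addr == 192.0.2.1
```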

1.2 Encapsulation

We now examine the notion of encapsulation, meaning that each layer wraps the data coming from the higher layer. For example, a network-layer message consists of a network-layer header plus a transport-layer message; that transport-layer message itself contains a transport-layer header plus an application-layer message.

Display again the HTTP packets and click on one of them. In the packet details, identify where each of the following starts:

  • link layer header;
  • network layer header;
  • transport layer header;
  • application layer header and data.

Note: Notice that each header has different fields from the other headers. Each field is there to serve a specific functionality related to that layer. Then answer the following questions:

  1. How many bytes does the HTTP message contain?

    Look in the transport-layer information and, in particular, at the Len field, which gives the size of the application-layer message encapsulated in the transport segment.

  2. How many bytes do the transport-layer and network-layer headers add to the HTTP message?

  3. How many bytes does the link layer add?

  4. Which fields seem to identify the sender and the receiver at each layer?

    You are not expected to know all field names yet, but you should already be able to spot that the different layers use different kinds of identifiers.

1.3 Use Wireshark on your own programs

Before moving to the measurement part, use Wireshark once more, this time on the client-server programs you developed last week.

If needed, restart your UDP and TCP toy examples from last week. Then:

  1. capture one execution of your UDP test client and server;
  2. capture one execution of your TCP test client and server;
  3. compare what Wireshark shows in both cases.

Answer the following questions:

  1. How does a UDP exchange appear in Wireshark?
  2. How does a TCP exchange appear in Wireshark?
  3. Can you spot the TCP connection establishment?
  4. Can you spot when the client sends data and when the server replies?
  5. In which sense is TCP more verbose on the wire than UDP?

This small comparison will be useful for the second part of the Lab.

2 Measure network performance

In this second part, you will extend last week’s work in two directions:

  • a TCP file transfer application (reliable data transfer);
  • a UDP measurement tool (to measure delay under different conditions).

2.1 Restart from part 4 of last week

Start again from part 4 of last week’s Lab, namely the simple client-server file exchange application over TCP.

Before adding any measurement feature, make sure that your TCP version is complete and robust enough:

  • the client asks for a filename and checks that it exists;
  • the client connects to the server;
  • the client sends the filename;
  • the client sends the file size;
  • the client sends the file content in chunks;
  • the server receives all this information in the correct order;
  • the server stores the received file locally;
  • both sides terminate properly in case of success and in case of error.

If your TCP version is not finished yet, finish it first.

Commit and push this step.

2.2 Build a UDP-based delay measurement tool

We now want to add a measurement feature on top of your UDP client-server application.

The first metric is one-way delay.

The idea is simple:

  • the client sends probe packets periodically (every t seconds);
  • each probe contains a timestamp;
  • the server receives the probes and measures one-way delay.

For this, modify your protocol so that each data unit sent by the client carries a timestamp indicating when it was sent. The server, upon reception, records its own reception time. From these two times, you will compute the one-way delay.

Important remark

One-way delay measurement requires that the clocks used on the sender and receiver be reasonably synchronized. If you run both programs on the same machine, this is naturally the case. If you run them on two different machines, you should keep in mind that clock differences may affect the results.

Suggested steps

  1. Choose a time source.

    Use a sufficiently precise clock, for instance gettimeofday(). Prefer a monotonic clock if both timestamps are taken on the same machine.

  2. Modify your UDP client so that it:

    • sends one probe every t seconds (e.g., using usleep() or nanosleep() with t = 1 ms or 5 ms);
    • includes a timestamp in each probe;
    • runs for a fixed duration or a fixed number of probes.

  3. Extend your message format. For each transmitted packet (probe), include at least:

    • a sequence number or chunk id;
    • the sending timestamp.
  4. On the server side, upon reception of each packet (probe):

    • record the reception timestamp;
    • compute the delay for that chunk;
    • print it;
    • store it for a later summary.
  5. Every second, display summary statistics, for example:

    • the average one-way delay.

Example

This is just an example. You are completely free to code the client and the server in whatever way is most appropriate for you (to understand and to debug), provided that they fulfill the bullet lists above. In particular, you're free to choose the messages you'd like to display on the terminal.

Server (in one terminal):

./udp-test-server

Server listening on 127.0.0.1:1234
### [AFTER THE CLIENT INTERACTION BELOW]

Probe 0: One Way Delay = 204 us.

Probe 1: One Way Delay = 200 us.

Probe 2: One Way Delay = 200 us.

Probe 3: One Way Delay = 195 us.


Average one-way delay of past 1s : 199.75 us.
...

Client (in another terminal):

./udp-test-client
Connected to 127.0.0.1:1234
Sent Probe 0.
Sent Probe 1.
Sent Probe 2.
Sent Probe 3.

Questions

  1. Are the measured one-way delays stable from one probe to another?
  2. Do you observe variability between packets?
  3. What factors could explain this variability?

2.3 Measure delay with and without background traffic

We now use the TCP file transfer as background traffic.

Experiment

  • Run the UDP measurement tool alone and record the measured delays.
  • Then run the UDP measurement tool and a TCP file transfer at the same time.
  • Compare the results.

Questions

  1. Does the delay change when the TCP transfer is running?
  2. Is the delay more variable?
  3. Can you observe signs of queuing?
  4. Which protocol (TCP or UDP) do you think is better for measuring one-way delay?

2.4 Measure throughput (Optional)

The second metric is throughput.

At a high level, throughput is the amount of useful data transferred per unit of time.

You now have two natural levels at which you may measure it:

  • per chunk, using the chunk size and the time difference between consecutive receptions;
  • for the whole file, using the total file size and the total transfer duration.

For this Lab, we look at both ways and compare them.

Suggested steps

  1. On the client side, record a timestamp just before sending the first piece of file data.

  2. On the server side, record:

    • a timestamp when the first piece of file data is received;
    • a timestamp when the last piece of file data is received;
    • a timestamp when each piece of file data is received.
  3. Compute the throughput as:

    • instantaneous throughput = bytes_received / transfer_time
    • overall throughput = total_bytes_received / transfer_duration
  4. Display the result in a readable unit, for example:

    • bytes/s;
    • KiB/s;
    • MiB/s;
    • or bits/s if you prefer.
  5. Run several experiments with files of different sizes.

Example

This is just an example. You are completely free to code the client and the server in whatever way is most appropriate for you (to understand and to debug), provided that they fulfill the bullet lists above. In particular, you're free to choose the messages you'd like to display on the terminal.

Server (in one terminal):

./tcp-file-server

Server listening on 127.0.0.1:1234
### [AFTER THE CLIENT INTERACTION BELOW]
Receiving measured file "temp.txt" with size 4054 bytes from 127.0.0.1:39572.

Chunk 0: Payload length = 1024 bytes.
Chunk 0: throughput at receiver side = 5019607.84 B/s.

Chunk 1: Payload length = 1024 bytes.
Chunk 1: throughput at receiver side = 5120000.00 B/s.

Chunk 2: Payload length = 1024 bytes.
Chunk 2: throughput at receiver side = 5120000.00 B/s.

Chunk 3: Payload length = 982 bytes.
Chunk 3: throughput at receiver side = 5035897.44 B/s.

Stored file "temp.txt" (4054 bytes).

Measured throughput at receiver side: 40949494.95 B/s.
...

Client (in another terminal):

./tcp-file-client
Filename to send: temp.txt
Connected to 127.0.0.1:1234
Sent file "temp.txt" (4054 bytes).
Measured throughput at sender side: 29591240.88 B/s.

Questions

  1. Is the throughput the same for small and large files?
  2. How much of what you measure corresponds to actual file data, and how much is protocol overhead?

2.5 Observe the measurements in Wireshark

Now return to Wireshark and capture one TCP file transfer of your own application.

Try to relate what you see on the wire with the performance values you measured.

In particular:

  1. Compare the number of packets/datagrams needed for the same file.
  2. Compare the visible overhead.
  3. For TCP, look for connection setup and acknowledgments.

3 Conclusion

In this Lab, you connected three viewpoints that are too often studied separately:

  • the programmer viewpoint, where you implement communication with syscalls and sockets;
  • the packet viewpoint, where Wireshark shows you the actual layers and headers sent on the network;
  • the performance viewpoint, where you quantify what your application and protocol are doing in terms of delay and throughput.

This is precisely why observing layers and measuring performance belong together: the protocol design choices made at one layer have a visible impact on what happens in the packets and on the performance perceived by the application.