Network performance is one of the core issues in data communication, and it is essential for providing good network services; a well-performing network is very attractive to customers. No matter how your network is deployed, every topology needs to perform well regardless of its pros and cons. In this lecture we discuss the key parameters that tell you how well your network performs.
Network performance is measured by the following parameters:
What is Bandwidth in Networking?
Bandwidth is sometimes used as a synonym for data rate, but technically the two are different. Bandwidth is used in two different contexts:
- Bandwidth in hertz
- Bandwidth in bits per second
Bandwidth in hertz is usually used in wireless and analog communication. It describes the range of frequencies a channel can support. For example, you see multiple channels on your cable TV network, and each channel is transmitted on a different frequency. To watch a particular channel (say Sky Sports), you tune to that channel's frequency. Coaxial cable can support a frequency range from 3 MHz to 3 GHz. The same applies to radio channels and many other applications.
Bandwidth in bits per second (bps) refers to how many bits can be transmitted in one second. Broadband companies often sell their connections with promotional taglines such as "speed up to 4 megabits per second" or "bandwidth up to 4 megabits per second." They mean their system can send 4,000,000 bits in one second.
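To see what an advertised bandwidth means in practice, here is a minimal sketch that converts the 4 Mbps figure from the text into a best-case transfer time. The 10 MB file size is a hypothetical example chosen for illustration:

```python
# Advertised bandwidth from the text: "up to 4 megabits per second"
ADVERTISED_BPS = 4_000_000

# Hypothetical 10 MB download (10,000,000 bytes)
FILE_SIZE_BYTES = 10 * 1_000_000

file_size_bits = FILE_SIZE_BYTES * 8
best_case_seconds = file_size_bits / ADVERTISED_BPS
print(f"Best-case transfer time: {best_case_seconds:.0f} s")  # 20 s
```

Note this is a best case: the real transfer time is governed by throughput, which is discussed below.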
Channel Capacity in Data Communication
You can measure channel capacity using the bandwidth. In other words, you can calculate the number of bits per second a channel can support for a given bandwidth. In 1948 Claude Shannon presented a formula that gives the maximum theoretical capacity, in bits per second, of a noisy channel.
Capacity = B log2(1+S/N)
Here "B" represents the bandwidth in hertz and S/N is the signal-to-noise ratio. This channel capacity theorem is also known as the Shannon capacity formula. Notice that the formula depends only on the bandwidth and the signal-to-noise ratio: no matter what modulation or encoding scheme we use, we cannot achieve a capacity higher than this limit. You can also observe that if the bandwidth increases, the capacity also increases.
Example: a telephone line has a bandwidth of 3 kHz and a signal-to-noise (S/N) ratio of 3162. Calculate the capacity of this channel.
Capacity = 3 kHz × log2(1 + 3162)
= 3000 × log2(3163) ≈ 35 kbps
Now consider an extremely noisy channel where the signal-to-noise ratio is almost 0 and the bandwidth is 3 kHz. The capacity of this channel is:
Capacity = 3000 × log2(1 + 0) = 3000 × log2(1) = 3000 × 0 = 0
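Both worked examples above can be checked with a short sketch of the Shannon capacity formula:

```python
import math

def shannon_capacity(bandwidth_hz: float, snr: float) -> float:
    """Maximum theoretical capacity (bits/s) of a noisy channel:
    C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr)

# Telephone-line example from the text: B = 3 kHz, S/N = 3162
telephone = shannon_capacity(3000, 3162)
print(f"{telephone:.0f} bps")          # about 35 kbps

# Extremely noisy channel: S/N ~ 0 gives zero capacity
noisy = shannon_capacity(3000, 0)
print(f"{noisy} bps")                  # 0.0 bps
```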
What is Throughput in Networking?
Throughput is the actual or average amount of data we can send over a network link.
Throughput vs Bandwidth
Throughput is often confused with bandwidth, but they are different. The bandwidth we calculated in the example above is the theoretical maximum number of bits we can send through the channel, whereas throughput tells us how much data we can send in practice. In other words, throughput is always less than or equal to the bandwidth.
Let’s understand it with an example. Suppose 1000 cars per minute can travel on the highway. But due to some traffic congestion or some other reasons, this reduces to 500 cars per minute. 1000 cars per minute is the bandwidth and 500 cars per minute is the throughput.
Similarly, suppose a network link can transmit 1 Mbps, but the device connected to the link can only support 200 kbps. Here 1 Mbps is the bandwidth and 200 kbps is the throughput.
Factors Affecting Throughput
- Medium Limitation
Different transmission media support different bandwidths. For example, Cat5 Ethernet cable supports 100 Mbps, whereas Cat6a cable supports up to 10,000 Mbps. Whatever the environment, you cannot send more data than the limit supported by the cable.
- Network Traffic
Network traffic also affects throughput: heavy congestion decreases it. For example, if you are the only user on a link, you are free to use the maximum throughput. But as more users join the same channel from other nodes, your share of the channel shrinks.
- Data Loss
Sometimes data packets are lost during transmission and must be re-transmitted. These re-transmissions reduce the average throughput.
- Protocol Overhead
The protocols used in different technologies can also impact throughput. Based on the protocol, networking systems decide when and which data to send. In addition, the overhead bits each protocol adds increase congestion and consequently decrease throughput.
- Hardware Limitation
Sometimes a server supports very high throughput, but the client's hardware does not support a good data rate. In that case you have to upgrade your hardware to match the system you are using. Many companies offer upgraded devices to their clients when they upgrade their systems, or ask their clients to upgrade their devices to get better data speeds.
How to measure network throughput?
There are many tools available that help you measure your actual throughput. Factors such as the packet loss percentage, bandwidth, and bandwidth-delay product need to be considered when calculating throughput, but here we will work through a simple practical example and ignore those factors.
Consider a 50 Mbps link that can carry 30,000 frames per minute, where each frame contains 20,000 bits. The maximum theoretical throughput of the system is:
Throughput = (30,000 × 20,000) / 60 = 10 Mbps
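The throughput calculation above can be sketched as follows, using the frame rate and frame size from the example:

```python
# Values from the worked example in the text
FRAMES_PER_MINUTE = 30_000
BITS_PER_FRAME = 20_000
SECONDS_PER_MINUTE = 60

# Throughput in bits per second = total bits per minute / 60
throughput_bps = FRAMES_PER_MINUTE * BITS_PER_FRAME / SECONDS_PER_MINUTE
print(f"{throughput_bps / 1_000_000:.0f} Mbps")  # 10 Mbps
```

Note that this 10 Mbps throughput is well below the 50 Mbps bandwidth of the link, illustrating the throughput-versus-bandwidth distinction above.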
What is Latency (Delay) in Networking?
In networking, latency means the total time taken by a data message to travel from the transmitter to the receiver, i.e. from source to destination. Latency is divided into four parts; in other words, these four components add up to make the complete delay:
- Propagation time
- Transmission time
- Queuing time
- Processing Time
- Propagation Time
This is the time required for one bit to travel from source to destination.
Propagation Time = Distance/ Speed
The speed depends upon the medium. For example, light waves travel faster in free space than through a cable.
- Transmission Time
A message contains many bits. The time between putting the first bit and the last bit of the message onto the link is the transmission time of the data packet: the larger the message, the larger the transmission time.
Transmission Time = Message Size / Bandwidth
- Queuing Time
A queue is a line or sequence of items waiting for their turn to be processed. Queuing time is not fixed; it depends on the load and on the system's capability to process that load.
- Processing Delay
Processing delay arises when bits arrive faster than the node can process them. For example, if you send 1 Mbps from one end but the receiving node can only handle or process 512 kbps, the incoming bits have to wait in the queue. This increases the delay, or latency.
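The four components above can be added up in a small sketch. All numbers here are hypothetical: a 12,000 km link with signals propagating at roughly two-thirds the speed of light, a 1 MB message on a 10 Mbps link, and assumed queuing and processing delays:

```python
def propagation_time(distance_m: float, speed_m_per_s: float) -> float:
    """Propagation Time = Distance / Speed (seconds)."""
    return distance_m / speed_m_per_s

def transmission_time(message_bits: float, bandwidth_bps: float) -> float:
    """Transmission Time = Message Size / Bandwidth (seconds)."""
    return message_bits / bandwidth_bps

# Hypothetical example values
prop = propagation_time(12_000_000, 2e8)           # 12,000 km at ~2/3 c
trans = transmission_time(8_000_000, 10_000_000)   # 1 MB over 10 Mbps
queuing, processing = 0.002, 0.001                 # assumed delays (s)

total_latency = prop + trans + queuing + processing
print(f"Total latency: {total_latency:.3f} s")
```

For a large message on a slow link, transmission time dominates; for a tiny message on a long link, propagation time dominates.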
Note: latency has a different meaning in wireless communication and many other applications. There, latency means the total time taken by a signal to reach the destination and then get back to the sender. Engineers try to reduce this latency for applications such as remotely controlled cars, which require real-time control from a distant location.
What is Jitter in Networking?
By now I hope you have a complete idea of latency. Assume you send multiple data packets of the same size. By the definition of latency, these packets should take equal time to reach the other side: the total time taken by the 2nd packet should equal the time taken by the 1st, with the 1st packet arriving first and the 2nd following it.
Now imagine the 1st packet takes 5 ms to reach the destination but the 2nd takes 8 ms. The latency of the two packets is different. This is jitter: in short, the variation in packet delay is known as jitter.
Let's understand it with another example. In real-time data communication applications, you sometimes hear the audio before you see the video, or the video arrives before the audio. Audio and video must be synchronized, but they are not, which means the audio and video signals experienced different latency. This is not supposed to happen. The following figure explains this further.
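One simple way to quantify jitter is to look at the variation in latency between consecutive packets. This sketch uses the 5 ms and 8 ms packets from the text, plus a few assumed samples for illustration:

```python
# Hypothetical per-packet latencies in milliseconds
# (the 5 ms and 8 ms packets from the text, plus assumed extra samples)
latencies_ms = [5, 8, 6, 9, 5]

# Jitter as the difference in latency between consecutive packets
inter_packet_jitter = [abs(b - a) for a, b in zip(latencies_ms, latencies_ms[1:])]
print(inter_packet_jitter)                    # [3, 2, 3, 4]

# Overall spread of latencies
print(max(latencies_ms) - min(latencies_ms))  # 4 ms
```

If every packet had identical latency, the jitter list would be all zeros; real-time audio/video applications use playout buffers to absorb this variation.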