The Oxford dictionary defines congestion as:
Congestion (noun): the state of being crowded and full of traffic.
We all experience congestion in our day-to-day lives: in traffic, at the food store, the stadium, the fairground, the tourist spot.
The truth is that no one likes congestion. We have developed many rules to avoid congestion, get out of congestion, and control congestion.
If everyone wants to get rid of congestion, then why does congestion happen? The answer is very simple: it is caused by two sets of people.
1. Those who don't care about others.
2. Those who don't know about others.
Set 1: They will not be influenced by any congestion-avoidance mechanism; they heed no principle.
Set 2: The situation here is trickier. These people are considerate, but they have no information about other people. They just keep estimating based on their own experience.
The solution designer, the architect, or the system designer has to design a mechanism that helps people be more considerate towards each other.
Let me move from the human world to the world of IP networks.
A network is a graph defined by links and vertices. A vertex can be any network element (host, switch, router, etc.). A link connects two vertices.
So where is the congestion: in a link or at a vertex?
A link can never be congested, because at any instant it is either 100% utilized or 0% utilized. When a packet is forwarded over the link, it goes at link speed; when there is no packet to forward, link utilization is zero. So if packets keep arriving at line rate and leaving at line rate, link utilization is 100%. Is this a congested environment? No. And if the incoming traffic rate is less than the line rate, the situation is certainly not congested.
So where is the congestion?
It is in the buffer of the vertex. If the incoming traffic rate is more than the outgoing line rate, packets get queued in the buffer. Delay increases due to queuing, and if the queue keeps growing, the problem becomes more severe. Remember that as long as there is a packet in the buffer, the link functions at 100% of its rate.
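This queue buildup can be sketched in a few lines. The numbers below are hypothetical, chosen only to show that when arrivals exceed the line rate, both the queue and the queuing delay grow every interval:

```python
line_rate = 10      # packets the link can drain per interval (assumed)
arrival_rate = 12   # packets arriving per interval (assumed, > line_rate)
queue = 0

for t in range(5):
    queue += arrival_rate         # packets arrive into the buffer
    queue -= min(queue, line_rate)  # link drains at most line_rate
    delay = queue / line_rate     # intervals a newly arrived packet waits
    print(f"t={t}: queue={queue} packets, queuing delay={delay:.1f} intervals")
# after 5 intervals the queue has grown to 10 packets
```

Note that the link itself is busy the whole time; only the wait in the buffer grows.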
Is there any problem? We are utilizing the link at 100%, and we cannot expect to do more.
Yes, there is a problem, and it is the size of the buffer. If a node keeps receiving traffic at a higher rate than the output port's line rate, the buffer fills up, and any further packets are dropped. Excessive congestion overflows the buffer. So who will make sure that the buffer does not overflow and that innocent packets are not dropped?
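The simplest policy at an overflowing buffer is tail drop: once the buffer is full, every further arrival is discarded. A minimal sketch, with made-up sizes and rates:

```python
BUFFER_SIZE = 20    # buffer capacity in packets (assumed)
line_rate = 10      # packets drained per interval (assumed)
arrival_rate = 15   # packets arriving per interval (assumed)
queue, dropped = 0, 0

for t in range(10):
    for _ in range(arrival_rate):
        if queue < BUFFER_SIZE:
            queue += 1            # room left: packet is queued
        else:
            dropped += 1          # buffer full: tail drop
    queue -= min(queue, line_rate)  # link drains at line rate

print(f"dropped {dropped} packets in 10 intervals")
```

The link stays 100% utilized throughout, yet "innocent" packets are being lost at the buffer.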
Whose responsibility is it to check that the buffer does not overflow?
Traditionally, when the responsibility is big, it falls on big shoulders. TCP, being one of the main pillars of the TCP/IP stack (the most common stack in IP networks), takes the responsibility. How?
How does TCP do congestion control?
It keeps monitoring end-to-end delay (RTT, round-trip time). If the queues are longer, a packet experiences more delay (i.e., a higher RTT). If innocent packets are dropped, TCP does not hear their ACKs within a few RTTs; it then concludes that somewhere along the path there is congestion and reduces its sending rate (window size).
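The core of this behavior is often summarized as AIMD: additive increase while ACKs come back, multiplicative decrease on loss. A toy sketch (the loss pattern here is invented, and real TCP has more machinery such as slow start and retransmission timers):

```python
cwnd = 1.0  # congestion window, in segments

# Hypothetical per-RTT loss observations: five clean RTTs,
# one RTT with a loss, then three clean RTTs.
for rtt, loss in enumerate([False] * 5 + [True] + [False] * 3):
    if loss:
        cwnd = max(1.0, cwnd / 2)  # loss seen: halve the window
    else:
        cwnd += 1.0                # ACKs arrived: grow linearly
    print(f"RTT {rtt}: cwnd = {cwnd}")
```

The sawtooth this produces is TCP probing for the capacity of that invisible "big pipe" between source and destination.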
Poor TCP: it can rely only on end-to-end information. To TCP, it looks as if one big pipe directly connects the source and the destination. If there is more end-to-end delay or packet drops in that big pipe, it falls back on its congestion control mechanism.
So the crux is: we want to utilize every link at 100%, provided we have that much traffic. To tackle imbalances in traffic, we queue packets in the buffer and keep link utilization at 100%. But we cannot keep too many packets in the buffer, because of its limited size. Somebody has to take care of it: TCP, the IP layer, or the link layer.
So, finally, an account of the efforts of the different layers of the TCP/IP stack:
Transport -- TCP has many variants that improve its congestion control mechanism.
Network -- Many routing protocols take the bandwidth-delay product as a metric when building the forwarding plane.
Link -- Layer 2 congestion control mechanisms are upcoming and look promising.
-------- More to come on TCP's awesomeness and the L2 promise on congestion control.
