
Congestion Control in Computer Networks

Congestion is a problem that arises when a network is shared by many packets. Network congestion occurs when the load on the network (the number of packets sent to the network) exceeds the network's capacity (the number of packets the network can handle). In short, congestion occurs when there is too much traffic.

In other words, when there is too much traffic, response time increases and performance degrades.

In this article we will cover the following topics:

    1. How to correct the Congestion Problem?
    2. Open Loop Congestion Control
    3. Closed Loop Congestion Control
    4. Congestion control algorithms
    5. Leaky Bucket Algorithm
    6. Token Bucket Algorithm
    7. Causes of Congestion

1. How to correct the Congestion Problem?

Congestion control refers to techniques that either prevent congestion before it happens or remove it after it has occurred. Accordingly, congestion control mechanisms are divided into two categories: one that prevents congestion, and another that removes congestion after it has taken place.



The two categories are:

    • Open loop
    • Closed loop

2. Open Loop:

    • Policies are applied to prevent congestion before it happens.
    • Congestion control is handled by either the source or the destination.

The most common open-loop congestion control policies are:

Retransmission Policy:

If the sender feels that a packet it sent has been lost or corrupted, the packet needs to be retransmitted. Retransmission in general, however, may increase congestion in the network. A good retransmission policy can prevent this: the retransmission policy and the retransmission timers must be designed to optimize efficiency while avoiding congestion.
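One common way to design such a timer is exponential backoff: each failed attempt doubles the timeout, so a congested network is not flooded with retransmissions. The sketch below is illustrative; the base value and cap are made-up constants, not taken from any particular protocol stack.

```python
def next_timeout(base, attempt, cap=60.0):
    """Exponential backoff for a retransmission timer: double the
    timeout after each failed attempt, up to a cap. Values are
    illustrative seconds, not real protocol constants."""
    return min(cap, base * (2 ** attempt))

# Timeouts grow quickly, giving the network room to drain its queues.
print([next_timeout(1.0, a) for a in range(7)])
# → [1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 60.0]
```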

Window Policy:

The type of window used at the sender can also affect congestion. A Selective Repeat window is better for congestion control than a Go-Back-N window, because in the Go-Back-N scheme, when the timer for a packet expires, several packets are resent even though some of them may already have arrived safely at the receiver. This duplication can make congestion worse. The Selective Repeat window, by contrast, resends only the specific packets that were lost or corrupted.
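The difference between the two retransmission strategies can be shown in a few lines; the sequence numbers below are arbitrary examples, not part of any real protocol implementation.

```python
def go_back_n_resend(window, lost_index):
    """On a timeout for packet `lost_index`, Go-Back-N resends that
    packet AND every later packet in the window, even ones that may
    already have been delivered safely."""
    return window[lost_index:]

def selective_repeat_resend(window, lost_index):
    """Selective Repeat resends only the packet that was actually lost."""
    return [window[lost_index]]

window = [10, 11, 12, 13, 14]             # sequence numbers in flight (illustrative)
print(go_back_n_resend(window, 1))        # → [11, 12, 13, 14]
print(selective_repeat_resend(window, 1)) # → [11]
```

Go-Back-N here injects four packets back into the network where Selective Repeat injects one, which is why it aggravates congestion more.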

Acknowledgement Policy:

The acknowledgment policy imposed by the receiver may also affect congestion. If the receiver does not acknowledge every packet it receives, it may slow down the sender and help prevent congestion. On the other hand, acknowledgments themselves add to the traffic load, so by sending fewer acknowledgments we can reduce the load on the network.

Different methods can be used to implement this:
    1. The receiver can send an acknowledgment only when a timer expires.
    2. The receiver may decide to acknowledge only every N packets.
    3. The receiver can send an acknowledgment only when it has a packet of its own to send.
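The three rules above can be sketched as a small receiver loop. The class name, threshold, and timer value below are illustrative, not taken from any real TCP stack.

```python
class AckPolicy:
    """Illustrative receiver that acknowledges every n-th packet,
    on a timer, or when it has data of its own, instead of
    acknowledging every single packet."""

    def __init__(self, every_n=4):
        self.every_n = every_n   # rule 2: acknowledge only every n packets
        self.unacked = 0         # packets received but not yet acknowledged

    def on_packet(self, has_data_to_send=False):
        """Called per received packet; returns True if an ACK goes out."""
        self.unacked += 1
        # Rule 3: piggyback an ACK if we have our own data to send.
        if self.unacked >= self.every_n or has_data_to_send:
            self.unacked = 0
            return True
        return False

    def on_timer(self):
        """Rule 1: the ack timer expired; flush any pending acknowledgment."""
        if self.unacked:
            self.unacked = 0
            return True
        return False

rx = AckPolicy(every_n=4)
acks = sum(rx.on_packet() for _ in range(12))
print(acks)  # 3 ACKs for 12 packets instead of 12
```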

Discarding Policy:

A router may discard less sensitive packets when congestion is likely to happen. Such a discarding policy can prevent congestion without harming the integrity of the transmission.

Admission Policy:

An admission policy, which is a quality-of-service mechanism, can also help prevent congestion in virtual-circuit networks. A switch first checks the resource requirements of a flow before admitting it to the network. A router can deny establishing a virtual-circuit connection if there is congestion in the network or a possibility of future congestion.

3. Closed Loop:

Closed-loop congestion control mechanisms try to alleviate congestion after it happens.

Several mechanisms are used for closed-loop congestion control:


Backpressure:

Backpressure is a node-to-node congestion control technique in which the congestion signal starts at the congested node and propagates in the direction opposite to the flow of data.

Backpressure Method

The backpressure technique can be applied only to virtual-circuit networks, in which each node knows the upstream node from which a data flow is coming. In this method of congestion control, the congested node stops receiving data from its immediate upstream node or nodes. This may cause the upstream node or nodes to become congested in turn, and they then reject data from their own upstream nodes.

As shown in the figure, node 3 is congested: it stops accepting packets and informs node 2 to slow down. Node 2 may then become congested and informs node 1 to slow down. Node 1 may in turn become congested and informs the source node to slow down. This relieves the congestion. Note how the pressure on node 3 is moved backward all the way to the source to remove the congestion.

Choke Packet:

In this congestion control mechanism, a congested router or node sends a special type of packet, called a choke packet, to the source to inform it about the congestion.

Choke Packet Method

Here the congested node does not inform its upstream nodes about the congestion, as in the backpressure method. In the choke packet method, the congested node sends a warning directly to the source station; the intermediate nodes through which the packet has traveled are not warned.
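A minimal sketch of the idea, assuming a router with a single queue; the queue limit and the `send_to_source` callback are illustrative inventions, not real router parameters.

```python
QUEUE_LIMIT = 100  # illustrative congestion threshold

def on_packet_arrival(queue, packet, send_to_source):
    """If the router's queue is past its threshold, send a choke packet
    straight back to the arriving packet's source. Intermediate nodes
    are not notified; only the source is warned."""
    if len(queue) >= QUEUE_LIMIT:
        # The warning goes directly to the originating host.
        send_to_source(packet["src"], {"type": "choke", "slow_to": 0.5})
    queue.append(packet)

warnings = []
queue = list(range(100))  # queue is already at the limit
on_packet_arrival(queue, {"src": "10.0.0.7"},
                  lambda src, msg: warnings.append((src, msg)))
print(warnings[0][0])  # the choke packet is addressed to 10.0.0.7
```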

Implicit Signaling:

In implicit signaling there is no communication between the congested node or nodes and the source. The source guesses that there is congestion somewhere in the network without being explicitly told. For example, a delay in receiving an acknowledgment is interpreted as congestion, and on detecting it the source slows down. This type of congestion control policy is used by TCP.

Explicit Signaling:

In this method, the congested node explicitly sends a signal to the source or destination to inform it about the congestion. Explicit signaling differs from the choke packet method: in the choke packet method a separate packet is used for the warning, while in explicit signaling the signal is carried inside the packets that contain data.

Explicit signaling can occur in either the forward or the backward direction. In backward signaling, a bit is set in a packet moving in the direction opposite to the congestion; this bit warns the source about the congestion and tells it to slow down. In forward signaling, a bit is set in a packet moving in the direction of the congestion; this bit warns the destination, and the receiver then uses policies such as slowing down its acknowledgments to alleviate the congestion.
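Both directions can be sketched as setting a warning bit on packets already in flight. The dictionary field names are illustrative; the closest real-world analogue is the ECN bits in the IP/TCP headers.

```python
def mark_packets(packets, congested, direction):
    """Set a warning bit in packets passing through a congested node.

    direction="backward": mark packets heading toward the source,
    telling the sender to slow down.
    direction="forward": mark packets heading toward the destination,
    letting the receiver react (e.g. by delaying acknowledgments)."""
    for p in packets:
        if congested and p["direction"] == direction:
            p["congestion_bit"] = 1
    return packets

pkts = [{"direction": "forward", "congestion_bit": 0},
        {"direction": "backward", "congestion_bit": 0}]
mark_packets(pkts, congested=True, direction="backward")
print([p["congestion_bit"] for p in pkts])  # → [0, 1]
```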

4. Congestion control algorithms:


Congestion can hinder communication. As the number of packets in a subnet increases, the performance of the subnet degrades: the network's communication channels become congested and throughput drops.

Which algorithm is used for congestion control?

Congestion avoidance regulates the entry of packets into the network, making better use of the shared network infrastructure and preventing congestive collapse. Congestion avoidance algorithms are used at the TCP layer as a mechanism to prevent network congestion.

5. Leaky Bucket Algorithm:

The leaky bucket algorithm is a traffic-shaping mechanism that controls the amount and rate of traffic sent to the network. It converts bursty traffic into fixed-rate traffic by averaging the data rate.

Leaky Bucket

Imagine a bucket with a small hole in the bottom. The rate at which water is poured into the bucket is not constant and can vary, but water leaks out of the hole at a constant rate. Thus (as long as there is water in the bucket), the rate at which water leaks out does not depend on the rate at which water is poured in.

Also, when the bucket is full, any additional water spills over the sides and is lost. The same concept is applied to packets in the network. Consider a source that sends data at a variable rate. Suppose the source sends data at 12 Mbps for 4 seconds.

Bursty data and fixed rate data

Then there is no data for 3 seconds, after which the source sends data at 10 Mbps for 2 seconds. In total, 68 Mb are sent over 9 seconds. With the leaky bucket algorithm, the same data drains out at a constant 8 Mbps, so a steady flow is maintained.
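The numeric example above can be checked with a short simulation. Rates are in Mb per one-second step; the bucket size is an assumption added for illustration (the text does not specify one).

```python
def leaky_bucket(arrivals, leak_rate, bucket_size):
    """Smooth a bursty arrival pattern into a constant-rate output.

    arrivals[t] is the data (Mb) arriving in second t; the bucket
    drains at most `leak_rate` Mb per second and overflows past
    `bucket_size`. Returns (output per second, Mb lost to overflow)."""
    level, lost, out = 0.0, 0.0, []
    for a in arrivals:
        level += a
        if level > bucket_size:          # bucket overflows; excess is dropped
            lost += level - bucket_size
            level = bucket_size
        sent = min(level, leak_rate)     # constant-rate drain
        level -= sent
        out.append(sent)
    return out, lost

# The burst from the text: 12 Mbps for 4 s, silence for 3 s, 10 Mbps for 2 s.
arrivals = [12, 12, 12, 12, 0, 0, 0, 10, 10]
out, lost = leaky_bucket(arrivals, leak_rate=8.0, bucket_size=40.0)
print(sum(out), lost)  # 64.0 Mb drained in 9 s, 0.0 lost (4 Mb still buffered)
```

Note that in second 7 of this run the bucket is briefly empty, so the output rate is "at most 8 Mbps" rather than exactly 8 Mbps throughout.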

6. Token Bucket Algorithm:

The leaky bucket algorithm enforces only a rigid (constant) output rate. Its main problem is that it cannot handle bursty data, because it gives no credit to an idle host. For example, if a host is idle for 10 seconds and then is ready to send a burst of data for the next 10 seconds, the transmission is still spread out at the average rate over the whole 20 seconds.

The host gains no advantage from having been idle for 10 seconds. To overcome this, the token bucket algorithm is used; it allows bursty data to be sent.

The token bucket algorithm is a modification of the leaky bucket in which the bucket holds tokens instead of packets. In this algorithm, tokens are generated at every clock tick, and to send a packet the system must remove a token from the bucket. The token bucket algorithm therefore gives credit to an idle host: tokens accumulate for future use.

For example:

If the system generates 100 tokens per clock tick and the host stays idle for 100 ticks, the bucket collects 10,000 tokens.

Token bucket Algorithm

Now, if the host wants to send bursty data, it can consume all 10,000 tokens at once, sending 10,000 cells or bytes in a single burst. Thus, as long as the bucket holds tokens, the host can send bursty data.
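The worked example maps directly onto a short implementation. The class below is a sketch of the technique, not any library's API; the rate and capacity are the numbers from the text.

```python
class TokenBucket:
    """Tokens accrue at `rate` per tick up to `capacity`; sending one
    cell consumes one token, so an idle host banks credit for bursts."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = 0

    def tick(self, n=1):
        """Advance the clock n ticks, generating `rate` tokens per tick."""
        self.tokens = min(self.capacity, self.tokens + n * self.rate)

    def send(self, cells):
        """Try to send `cells` cells; returns how many actually go out."""
        sent = min(cells, self.tokens)
        self.tokens -= sent
        return sent

# 100 tokens per tick, host idle for 100 ticks -> 10,000 tokens banked.
tb = TokenBucket(rate=100, capacity=10_000)
tb.tick(100)
print(tb.send(10_000))  # the whole 10,000-cell burst goes out at once
print(tb.send(1))       # but the bucket is now empty: 0
```

The `capacity` cap is what bounds the burst: however long the host idles, it can never save up more than one bucketful of credit.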

7. Causes of Congestion:

The most common causes of congestion in a subnet are:

  • The rate of traffic on the input lines exceeds the capacity of the output lines. For example, if streams of packets suddenly begin arriving on three or four input lines and all of them need the same output line, a queue builds up.
  • If there is insufficient memory to hold all the queued packets, packets are lost. Adding more memory does not solve the problem, however: by the time a packet works its way to the front of a very long queue, it has already timed out while waiting.
  • Meanwhile, the senders time out and retransmit duplicates of packets that are still sitting in the queue, adding even more load and pushing throughput down further.
  • Slow processors can also cause congestion. If a router's CPU is slow at carrying out its bookkeeping tasks (queuing buffers, updating tables, and so on), queues build up even when the lines have spare capacity.
  • Low-bandwidth lines can likewise cause congestion. Upgrading to faster links may help, but not always: increasing the bandwidth of one link can simply move the bottleneck elsewhere in the network and make the congestion worse.
  • When no free buffers remain, newly arriving packets are discarded. The senders retransmit the discarded packets after their timers expire and keep doing so until the packets are acknowledged, so many packets end up being transmitted several times.
