
Computer Network | Delays in Networks

Data transmission from one point to another is a combination of several movements of data across multiple stages. Each movement takes its own time, depending on the location of the data, the capacity of the transmission medium, the signal velocity, and various other factors. The following are some basic timing metrics that can be used to assess a transmission network or a protocol.

Transmission Delay (Tt)

The time taken to push a data packet from a host system onto its outgoing link is known as transmission delay. It depends mainly on two parameters: bandwidth and the size of the data packet.

Bandwidth is the amount of data that can be sent per unit time.

If bandwidth = 1 bps (bit per second) and the data packet is 10 bits long, then the transmission delay is
Tt = 10 bits / (1 bit/second) = 10 seconds
In general, if the bandwidth is B bps and the data is L bits long, the transmission delay is Tt = L/B.

This means the transmission delay becomes very large for a bigger packet on a narrow bandwidth, so boosting transmission speed requires a much wider bandwidth.

Note that Kilo (K) for data sizes is interpreted as 1024, while for bandwidth it follows the usual decimal convention of 1000 (as in kilogram = 1000 grams). So a bandwidth of 1 Kbps equals 1000 bps, while a data packet of 1 Kb equals 1024 bits.
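The Tt = L/B calculation, including the two kilo conventions described above, can be sketched in Python (the values are illustrative, not part of any real protocol):

```python
def transmission_delay(length_bits, bandwidth_bps):
    """Time (seconds) to push a packet of length_bits onto a link of bandwidth_bps."""
    return length_bits / bandwidth_bps

# Example from the text: a 10-bit packet on a 1 bps link takes 10 seconds.
print(transmission_delay(10, 1))  # 10.0

# Kilo conventions: 1 Kb of data = 1024 bits, but 1 Kbps = 1000 bps.
print(transmission_delay(1 * 1024, 1 * 1000))  # 1.024
```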

Propagation Delay (Tp)
Assume a data packet is being sent from a source to a receiver. As explained above, the time taken by the source to put the data onto the link connecting sender and receiver is the transmission delay; the time the signal then takes to travel from source to receiver is called the propagation delay (Tp).
Propagation delay depends on the distance between source and receiver and on how fast the signal can move through its propagation medium, i.e. the velocity of the signal:
Tp= d/v
d: distance between source and receiver
v: velocity of the signal in its propagation medium

Nowadays the propagation medium is mainly optical fiber, in which a signal travels at about 70% of the speed of light, roughly 2.1 * 10^8 m/s (210,000,000 m/s).

Let d = 42 Km and v = 2.1 * 10^8 m/s:
Tp = d/v = (42 * 10^3) / (2.1 * 10^8) = 2 * 10^-4 s = 0.2 ms
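The worked example above can be checked with a short Python sketch of Tp = d/v:

```python
def propagation_delay(distance_m, velocity_mps):
    """Tp = d / v: time (seconds) for the signal to travel distance_m."""
    return distance_m / velocity_mps

# 42 km of optical fiber, signal velocity 2.1 * 10^8 m/s
tp = propagation_delay(42_000, 210_000_000)
print(tp)  # 0.0002 seconds, i.e. 0.2 ms
```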

So the total time required to send a packet of data from source to destination is the sum of the transmission delay Tt and the propagation delay Tp.
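A minimal sketch of this sum, Tt + Tp, using illustrative values (a 1024-bit packet, a 1 Mbps link, and 42 km of fiber; none of these figures come from a specific network):

```python
def total_delay(length_bits, bandwidth_bps, distance_m, velocity_mps):
    tt = length_bits / bandwidth_bps  # transmission delay Tt = L / B
    tp = distance_m / velocity_mps    # propagation delay Tp = d / v
    return tt + tp

# 1024-bit packet, 1 Mbps link, 42 km of fiber at 2.1 * 10^8 m/s
d = total_delay(1024, 10**6, 42_000, 210_000_000)
print(d)  # about 0.001224 s = 1.224 ms
```

Note how, for short packets on fast links, the propagation term can dominate the transmission term, and vice versa for large packets on slow links.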

Queuing Delay

The data packets arriving at the receiver are held in a buffer queue; the time a packet spends in this queue before being processed by the receiver is called the queuing delay.

Processing Delay
The time taken by the receiver to fetch and process a data packet is called the processing delay.

Queuing and processing delays depend solely on the processing capacity of the receiving system.
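Putting all four components together, the end-to-end delay for a packet is their sum. Queuing and processing times vary with receiver load, so the values below are purely hypothetical placeholders:

```python
def end_to_end_delay(tt, tp, t_queue, t_proc):
    """Sum of transmission, propagation, queuing and processing delays (seconds)."""
    return tt + tp + t_queue + t_proc

# All four values in seconds; queuing/processing figures are made up for illustration.
print(end_to_end_delay(0.001024, 0.0002, 0.0005, 0.0001))
```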

