
Computer Network | Delays in Networks

Data transmission from one point to another is a combination of different movements of data over multiple stages. Each of these movements takes its own time, depending on the location of the data, the capacity of the transmission medium, the signal velocity and various other factors. The following are some basic time aspects which can be used to assess a transmission network or a protocol.

Transmission Delay (Tt)

Transmission delay is the time taken to push a data packet from a host system onto its outgoing link. It depends mainly on two parameters: bandwidth and the size of the data packet.

Bandwidth is the amount of data that can be sent in unit time.

If Bandwidth = 1 bps (bit per second) and the data packet is 10 bits, then Transmission Delay
Tt = 10 bits / (1 bit/second) = 10 seconds
which implies that, if the bandwidth is B bps and the data is L bits long, the Transmission Delay Tt = L/B.

This means the transmission delay becomes very large for bigger data packets on a narrow bandwidth. So to boost transmission speed, a much wider bandwidth is required.

Kilo (K) for data size is interpreted as 1024, while for bandwidth it follows the usual decimal convention of 1000 (as in kilogram = 1000 grams). This means a bandwidth given as 1 Kbps equals 1000 bps, while a data packet given as 1 Kb equals 1024 bits.
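As a quick illustration, here is a minimal Python sketch (not part of the original post) of the formula Tt = L/B, following the 1024/1000 convention described above; the function name and values are chosen only for illustration.

def transmission_delay(packet_bits, bandwidth_bps):
    # Tt = L / B, with both values in consistent units (bits and bits per second)
    return packet_bits / bandwidth_bps

# Example from the text: a 10-bit packet on a 1 bps link takes 10 seconds.
print(transmission_delay(10, 1))               # 10.0

# 1 Kb of data (1024 bits) over a 1 Kbps link (1000 bps) takes 1.024 seconds.
print(transmission_delay(1 * 1024, 1 * 1000))  # 1.024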



Propagation Delay (Tp)
Assume a data packet is being sent from a source to a receiver. As explained above, the time taken by the source to put the data onto the link connecting sender and receiver is the transmission delay; the time the data then takes to travel from source to receiver is called propagation delay (Tp).
Propagation delay depends on the distance between source and receiver and on how fast the signal can move in its propagation medium, or simply the velocity of the signal.
Tp= d/v
d: distance between source and receiver
v: velocity of the signal in its propagation medium

Nowadays propagation mediums are mainly optical fibers, in which the signal travels at about 70% of the actual speed of light, i.e. 210,000,000 m/s.

Example:
Let d = 42 km and v = 2.1 * 10^8 m/s
Tp = d/v = (42 * 10^3) / (2.1 * 10^8) = 2 * 10^-4 s = 0.2 ms

So the total time required to send a packet of data from source to destination is the sum of the transmission time Tt and the propagation time Tp.
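The following is a minimal sketch (not from the original post) of the propagation-delay formula Tp = d/v and of the total Tt + Tp for the worked example above; the constant and function names are assumptions made for illustration only.

SIGNAL_VELOCITY = 2.1e8  # m/s, about 70% of the speed of light, as in optical fiber

def propagation_delay(distance_m, velocity_mps=SIGNAL_VELOCITY):
    # Tp = d / v
    return distance_m / velocity_mps

# Example from the text: d = 42 km gives Tp = 0.2 ms.
tp = propagation_delay(42e3)
print(tp)                      # 0.0002 (seconds)

# Total delivery time for one packet = transmission delay + propagation delay.
L, B = 10, 1                   # 10-bit packet on a 1 bps link, so Tt = 10 s
print(L / B + tp)              # 10.0002 (seconds)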



Queuing Delay

The packets of data that reach the receiver end are placed in a buffer queue; the amount of time a packet spends in this queue before being processed by the receiver is called queuing delay.



Processing Delay
The time taken by the receiver to fetch and process a data packet is called processing delay.

Queuing delay and processing delay depend solely on the processing capacity of the receiving system.
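Putting the four components together, here is a hedged sketch (not part of the original post) of the total per-packet delay; the queuing and processing values below are made-up placeholders, since in practice they depend on the receiving system.

def total_delay(transmission_s, propagation_s, queuing_s, processing_s):
    # Total delay = Tt + Tp + queuing delay + processing delay
    return transmission_s + propagation_s + queuing_s + processing_s

# Hypothetical values: Tt = 10 s, Tp = 0.2 ms, queuing = 1 ms, processing = 0.5 ms.
print(total_delay(10.0, 2e-4, 1e-3, 5e-4))    # 10.0017 (seconds)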
