Operating Systems | Process: State Transition

The new state is the condition of a program that is about to be created as a process. Once the creation operation is done, the program becomes a process in the ready state in main memory.

The scheduler then dispatches the process to the CPU for execution. Once execution completes, the memory that was allocated to run the program is deallocated and the process is terminated.
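As a rough illustration of that happy path, the new → ready → running → terminated cycle can be walked through with a small enum. This is only a sketch; the type and state names below are invented for this example and are not taken from any real kernel.

    #include <stdio.h>

    typedef enum { NEW, READY, RUNNING, TERMINATED } proc_state;

    static const char *state_name(proc_state s) {
        switch (s) {
        case NEW:        return "new";
        case READY:      return "ready";
        case RUNNING:    return "running";
        case TERMINATED: return "terminated";
        }
        return "?";
    }

    int main(void) {
        proc_state s = NEW;   /* program is about to be created            */
        s = READY;            /* creation done: admitted into main memory  */
        s = RUNNING;          /* short-term scheduler dispatches it to CPU */
        s = TERMINATED;       /* execution done: memory is deallocated     */
        printf("final state: %s\n", state_name(s));
        return 0;
    }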

There are two major causes for a process to leave the running state:

  1. Arrival of a higher-priority job, which sends the currently executing process back to the ready state.
  2. A request for I/O during the running state, which makes the CPU move the process to the wait/block state and continue execution with another process; a process in the block state is moved back to the ready state once its I/O completes (a user-space view of this is sketched below).
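From user space, the blocking transition in point 2 can be observed with any ordinary blocking system call. The snippet below is only a sketch of the observable behaviour, not of the kernel's internal bookkeeping.

    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        char buf[64];

        /* While read() waits for input, this process sits in the wait/block
           state and the CPU is free to execute some other process.          */
        ssize_t n = read(STDIN_FILENO, buf, sizeof buf);

        /* I/O complete: the process re-entered the ready queue and was
           eventually dispatched again to run this line.                     */
        if (n > 0)
            printf("read %zd bytes\n", n);
        return 0;
    }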

Arrival of a higher-priority job/process when main memory is full causes the suspension of a less important existing job.

If the suspended process was in the wait state, the resulting state is called suspend wait; if it was in the ready state, the resulting state is called suspend ready.

A process in the suspend ready or suspend wait state can resume to the ready or wait state respectively once main memory is no longer held by higher-priority processes.

If a process in suspend wait completes its I/O but does not yet meet any criterion for being brought back into main memory, it shifts into the suspend ready state.
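A minimal sketch of that decision, assuming a hypothetical memory_available() helper that stands in for whatever criterion the medium-term scheduler actually applies:

    #include <stdbool.h>
    #include <stdio.h>

    typedef enum { WAIT, SUSPEND_WAIT, SUSPEND_READY, READY } proc_state;

    /* Hypothetical policy check: is the medium-term scheduler willing to
       swap this process back into main memory right now?                  */
    static bool memory_available(void) {
        return false;          /* assume memory is still full in this example */
    }

    /* Decide where a process goes when its pending I/O completes. */
    static proc_state on_io_complete(proc_state s) {
        if (s == WAIT)
            return READY;
        if (s == SUSPEND_WAIT)
            return memory_available() ? READY : SUSPEND_READY;
        return s;
    }

    int main(void) {
        proc_state s = on_io_complete(SUSPEND_WAIT);
        printf("%s\n", s == SUSPEND_READY ? "suspend ready" : "ready");
        return 0;
    }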

  • Processes in each state are held in queues (ready queue, block queue, ...), which are commonly implemented with the linked list data structure (a sketch follows this list)
  • Burst time, arrival time, priority, process size and similar attributes are the common criteria used by schedulers
  • I/O requests, events, timer interrupts, priority and the fork() system call are the common causes for a process being taken off the CPU
  • Transitions among the states are handled by different schedulers
    • NEW to READY : Long Term Scheduler (LTS)
    • READY to SUSPEND READY : Medium Term Scheduler (MTS)
    • READY to RUN : Short Term Scheduler (STS)
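A sketch of the linked-list queue idea from the first bullet above; the pcb struct and the enqueue/dequeue helpers are invented for this example and simply model a FIFO ready queue.

    #include <stdio.h>
    #include <stdlib.h>

    /* One node per process in the queue (illustrative fields only). */
    typedef struct pcb {
        int pid;
        struct pcb *next;
    } pcb;

    typedef struct { pcb *head, *tail; } queue;

    static void enqueue(queue *q, int pid) {
        pcb *p = malloc(sizeof *p);
        p->pid  = pid;
        p->next = NULL;
        if (q->tail) q->tail->next = p;
        else         q->head = p;
        q->tail = p;
    }

    /* The short-term scheduler dispatches the process at the front. */
    static int dequeue(queue *q) {
        pcb *p = q->head;
        if (!p) return -1;
        int pid = p->pid;
        q->head = p->next;
        if (!q->head) q->tail = NULL;
        free(p);
        return pid;
    }

    int main(void) {
        queue ready = { NULL, NULL };
        enqueue(&ready, 101);
        enqueue(&ready, 102);
        printf("dispatch pid %d\n", dequeue(&ready));   /* 101 */
        printf("dispatch pid %d\n", dequeue(&ready));   /* 102 */
        return 0;
    }

The same structure works for the block queue and the suspend queues; only the criterion used by the scheduler to pick the next node differs.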
