
Operating Systems | Lock Variable Synchronization Mechanism

To understand the lock variable synchronization mechanism, you first need to know the different modes in which an operating system executes instructions.

There are two basic modes of execution in a system: User Mode and Kernel Mode.

Kernel Mode allows unrestricted access to every resource in the system, which is why it is reserved for the most trusted, low-level functions of the operating system.
Direct access to hardware or to arbitrary memory is not allowed in User Mode, because of the security risk posed by higher-level applications. System APIs provide the pathway through which these resources are reached.
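As a small illustration (not from the original post): a user-mode C program cannot touch the terminal hardware directly; it asks the kernel to perform the I/O through a system call such as write(), at which point execution switches into Kernel Mode.

    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const char *msg = "Hello from user mode\n";

        /* The process runs in User Mode; write() traps into the kernel,
           which performs the privileged I/O in Kernel Mode on its behalf. */
        write(STDOUT_FILENO, msg, strlen(msg));
        return 0;
    }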


Features of Lock Variable

  • The lock variable synchronization mechanism is a software solution implemented in User Mode.
  • It is a busy-waiting method of synchronization.
  • A single lock variable mechanism can handle multiple processes.

Every synchronization method has two basic sections that decide the nature of the algorithm: the Entry section and the Exit section, arranged around the critical section.

In the lock variable method, a global variable Lock acts as the key decision parameter that controls the entry of processes into, and their exit from, the critical section.

The structure of the synchronization program is as follows:
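A minimal sketch in C (the original post presents this as an image; the shared flag Lock is assumed to start at 0):

    int Lock = 0;          /* 0 = critical section free, 1 = occupied */

    /* Entry section */
    while (Lock != 0)
        ;                  /* busy-wait until the critical section is free */
    Lock = 1;              /* raise the flag HIGH to claim the section */

    /* Critical section: work on the shared resource here */

    /* Exit section */
    Lock = 0;              /* reset the flag LOW to release the section */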

The code in the entry and exit sections decides whether a process gets access to the critical resource or not.

If a process sees the Lock variable as zero, it means no process is currently in the critical section. The while( Lock != 0 ); loop therefore terminates, the process sets Lock HIGH again to keep other processes out, and it enters the critical section.

Once the process leaves the critical section, it resets the Lock flag to LOW to indicate that the critical resource is free.



Performance Of Lock Variable Synchronization Method

The assembly equivalent of the entry section of the lock variable method is as shown below:
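The original post shows this listing as an image; a representative four-instruction decomposition (exact mnemonics and register names are assumed here) looks like this:

    1:  LOAD  R0, Lock     ; read the shared Lock flag into a register
    2:  CMP   R0, #0       ; is the critical section free?
    3:  JNZ   1            ; if Lock != 0, jump back and keep waiting
    4:  STORE Lock, #1     ; set Lock HIGH to claim the critical section

The crucial point is that the test (instructions 1-3) and the set (instruction 4) are separate instructions, so a process can be preempted between them.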

  • Let the Lock flag be zero ( Lock = 0 ).
  • A process ( say P1 ) executes its entry section and gets preempted at instruction number four.
  • { At instruction 4, the barrier of the while loop has been crossed, but the Lock flag is not yet raised to HIGH. }

  • Now another process ( say P2 ) executes its entry section and enters the critical section.
  • { This is possible since Lock was not raised to HIGH by P1. }

  • P2 may then be preempted inside the critical section, and P1 rescheduled to resume from where it was preempted; P1 executes instruction four and also enters the critical section.

This means the lock variable fails to prevent one process from accessing a shared resource while another process is already working on the same resource.

That is, mutual exclusion is not achieved, so a more sophisticated algorithm is needed to implement synchronization among processes.
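As a rough demonstration of this failure (not from the original post; threads stand in for processes to keep the example self-contained, and because the interleaving is timing-dependent the violation may not appear on every run or with every compiler):

    #include <pthread.h>
    #include <stdio.h>

    /* Naive lock variable plus a counter of how many threads are
       inside the critical section at the same moment. */
    volatile int Lock = 0;
    volatile int in_cs = 0;
    volatile int violations = 0;

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            /* Entry section: the test and the set are separate steps,
               so the thread can lose the CPU between them. */
            while (Lock != 0)
                ;              /* busy wait */
            Lock = 1;

            /* Critical section */
            in_cs++;
            if (in_cs > 1)     /* another thread is already inside */
                violations++;
            in_cs--;

            /* Exit section */
            Lock = 0;
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;

        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);

        /* A non-zero count means mutual exclusion was violated. */
        printf("mutual exclusion violations: %d\n", violations);
        return 0;
    }

Compiled with something like gcc -pthread race.c, a non-zero count confirms that the plain lock variable does not guarantee mutual exclusion.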
