
Operating System Full Course (All In One)

Operating System

(Units 1 to 5)


Unit 1: Basics of Operating System.

Basic Overviews of The Course:

What is an Operating System?

An Operating System is fundamental software of the computer which establishes the communication between the user and the computer system or hardware.


An Operating System is a program which acts as an intermediary between the user and the computer hardware.

The one program that runs all the time in the computer is called the kernel; it is the core part of the operating system.

Goals of operating System:

Structure of Computer System

Computer System can be divided into 4 components:

  1. Hardware: These are the physical components of the computer used to perform specific tasks, like the printer, speaker, monitor, mouse, and more.
  2. Operating System: It is the special program used to operate the computer system. It establishes the communication between the user and the computer hardware.
  3. Application Programs: Application programs are special kinds of programs used to perform a specific operation, like a word processor, a game, a compiler, and more.
  4. Users: Every system requires a user to give instructions, so the user is also a part of the structure of the computer system.

What Does an Operating System Do?

An operating system performs many tasks, but some of its basic and fundamental functions are:

Resource Allocation: The major task of the operating system is allocating resources to multiple processes through scheduling. Resource allocation is a central concept of the operating system, where multiple resources are allocated to and deallocated from multiple processes at the same time.

Task Management: Task management is also a duty of the operating system, where the OS decides the priority and timing of each task.

Multi-User Handling: Some operating systems support multiple users, which means a single operating system can handle the tasks of multiple users at the same time with efficiency and correctness.

Memory Management: The allocation of memory to various live programs is also performed by the operating system.

Generations of Operating System:

There are a total of four generations of OS:

The First Generation ( 1945 – 1955 ): Vacuum Tubes and Plugboards

The Second Generation ( 1955 – 1965 ): Transistors and Batch Systems:

The Third Generation ( 1965 – 1980 ): Integrated Circuits and Multiprogramming

The Fourth Generation ( 1980 – Present ): Personal Computers

Types of OS:

Batch Operating System:

Time Sharing Operating System

Structures of Operating System

Simple Structure

This OS approach is not well designed and not secure, because any external process or user can easily access the bottom hardware layer.

Monolithic Structure

Layered Approach


Services of OS:

Following are a few common services provided by an operating system:

System Calls:

The interface between a process and the operating system is provided by system calls. In general, system calls are available as an assembly-language interface.

In computing, a system call is the programmatic way in which a computer program requests a service from the kernel of the operating system on which it is executed. 

A system call is a way to interact with the operating system via program.

System calls provide the services of the operating system to user programs via the Application Programming Interface (API); that is, application software uses system calls in the background to perform various operations.

System calls are the only entry points into the kernel.

All programs needing resources must use system calls.

Needs of System Calls:

There are six different categories of system calls:

Process control: end, abort, create, terminate, allocate and free memory.

File management: create, open, close, delete, read file etc.

Device management: ioctl(), read(), write()

Information maintenance: getpid(), alarm(), sleep()

Communication: pipe(), shmget(), mmap()

Protection: chmod(), umask(), chown()


System Boot:

The procedure of starting the computer system by loading the kernel is known as system boot.

On most of the computer systems, a small piece of code known as the bootstrap loader, is responsible for loading the kernel into main memory when the user switches on the computer.

Bootstrap loaders reside in the ROM of the computer so that they cannot be edited, updated, or removed.

Booting is the process in which the processor and BIOS of the computer system check all devices connected to the motherboard.

In simple words we can say that Booting is the process of loading all the essential drivers & programs into the main memory.

Types of Booting:

Cold Booting: Turning on a computer from a powered-off state is called cold booting. Initially, the system has no electric supply.

Warm Booting: Sometimes the computer hangs due to some technical error or task overloading. The user then has to restart the computer, and this process of restarting a running computer is called warm booting.

Steps of System Boot:

System Programs:

In an operating system, a user is able to use different types of system programs, and system programs are responsible for the performance of all the application software on the computer. We can say that all application software depends on system programs for its operations.

System programs are responsible for the development and execution of programs. They are built on top of system calls: different types of system programs use different system calls for their tasks.

System Programs can be divided into these categories:

Types of System Programs:

Utility Programs: Help to access additional functionalities of the OS. Examples: antivirus software, backup software, and disk tools.

Device drivers: Include the drivers for all peripheral devices. Examples: printer driver, scanner driver, storage device driver, etc.

Directory reporting tools: Help to navigate the computer's file system. Examples: dir, ls, Windows Explorer, etc.

Protection and Security:


It is the mechanism for controlling the access of processes or users to the resources defined by the OS. Protection is all about preventing malicious activity inside the computer system.


It is all about protecting the computer system against internal and external threats. The security system of the OS allows only authorised processes to execute.

There are various protection & Security methods:

Unit – 2: Process management

Process Concepts:

A process is basically the execution state of a program. The execution of a process must progress in a sequential fashion.

In simple words, we write our computer programs in a text file and when we execute this program, it becomes a process which performs all the tasks mentioned in the program. One by one each of the instructions will be executed in sequential manner.

When a program is loaded into the memory and it becomes a process, it can be divided into four sections:


Program is a piece of code which may be in millions of lines. A Computer Program is basically a set of instructions written in any programming language like C, C++, Java, python, and more.

A computer program is a set of instructions which performs a specific task when executed by a computer.

When we compare a program with a process, we can conclude that a process is a dynamic instance of a computer program.

A part of a computer program that performs a well-defined task is known as an algorithm.

Process States or Process Life Cycle:

When a process executes, it passes through different states. These states may differ in different operating systems, and the names of these states are also not standardised.

In general, a process can have one of the following five states at a time:


New or Start:

This is the initial state, when a process is first created.

Ready:

The process is waiting in the ready queue to be assigned to a processor.

Running:

Once the process has been assigned to a processor by the OS scheduler, the process state is set to running and the processor executes its instructions.


Waiting:

The process moves into the waiting state if it needs to wait for any I/O operation, such as user input or the availability of some hardware.

After the completion of the I/O operation, the process is moved back to the Ready state, and the OS scheduler schedules it for CPU allocation accordingly.

Terminated or Exit:

Once the process finishes its execution, it is moved to the terminated state where it waits to be removed from main memory.

Process Control Block (PCB):

A Process Control Block is a data structure maintained by the operating system for every process. A system may contain a great many processes, and the PCB was introduced to manage them in an organised way.

It is basically a small block of memory consisting of the fundamental information about a process.

A PCB keeps all the information needed to keep track of a process.

The PCB is maintained for a process throughout its lifetime, and is deleted once the process terminates.

There are following sections of PCB:


CPU Scheduling Criteria:

Different CPU scheduling algorithms have different properties, and the choice of a particular algorithm depends on various factors.

On the basis of the type of problem or situation, we use one of the suitable scheduling algorithms.

Many criteria have been suggested for comparing CPU scheduling algorithms:

1. CPU utilisation

The main objective of any CPU scheduling algorithm is to keep the CPU as busy as possible. 

Theoretically, CPU utilisation can range from 0 to 100 percent, but in a real system it varies from 40 to 90 percent depending on the load on the system.

2. Throughput:

It can be defined as the number of processes executed by the CPU in a given amount of time. It is used to find the efficiency of a CPU.

3. Response Time: 

Response time is the time taken to start a job after it enters the queue; the scheduler should try to minimise it.

Response time = Time at which the process gets the CPU for the first time – Arrival time

4. Turnaround time:

Turnaround time is the total amount of time spent by the process from coming in the ready state for the first time to its completion.

Turnaround time = Burst time + Waiting time


Turnaround time = Exit time – Arrival time

5. Waiting time:

Waiting time is the time a job spends in the ready queue while many jobs compete for execution; the scheduler should try to minimise it.

Waiting time = Turnaround time – Burst time

Burst time is the amount of time required by a process for execution on the CPU. It is also called execution time or running time.

All Criteria in Short:

Turn Around Time = Completion Time  –  Arrival Time

Waiting Time = Turn Around Time  –  Burst Time

Scheduling Algorithms and their Evaluation:

Process scheduling is the activity of the process manager that handles the removal of an active process from the CPU and the selection of another process on the basis of a particular strategy.

There are three types of process schedulers:

CPU scheduling is the process of deciding which process will own the CPU to use while another process is suspended. 

The main function of CPU scheduling is to ensure that whenever the CPU becomes idle, the OS selects one of the processes available in the ready queue for execution.

Basically, a CPU scheduling algorithm is required to achieve multi-programming or multi-tasking.

Preemptive & Non-preemptive Algorithms:

Non-preemptive algorithms are designed so that once a process enters the running state, it cannot be preempted until it terminates or moves to the waiting state.

The preemptive scheduling is based on priority where a scheduler may preempt a low priority running process anytime when a high priority process enters into a ready state.

First-Come, First-Served (FCFS) Scheduling:

Let's understand with an example:

1. Shortest Job Next (SJN):

Let's understand with an example:

2. Shortest Remaining Time First (SRTF)
3. Round Robin Scheduling (RR):


Difference between Process and Thread:

1. A process is heavyweight and requires comparatively more resources; a thread is lightweight, taking fewer resources than a process.
2. Process switching needs interaction with the operating system; thread switching does not.
3. In multiple-processing environments, each process executes the same code but has its own memory and file resources; all threads of a process can share the same set of open files and child processes.
4. If one process is blocked, then no other cooperating process can execute until the first process is unblocked; while one thread is blocked and waiting, another thread in the same task can run.
5. Multiple processes without threads use more resources; multithreaded processes use fewer resources.
6. Each process operates independently of the others; one thread can read, write, or change another thread's data.


Advantages of Thread:

Types of Thread:

Threads are implemented in following two ways:

Threading Issues:

The most common threading issues are:

1. Race conditions: A race condition occurs when multiple threads access and modify shared data concurrently without proper synchronization. This can lead to unpredictable results because the order of execution or access to shared resources is not controlled. For example, if two threads simultaneously attempt to update a shared variable, the final value of the variable may depend on the timing and interleaving of their operations. Race conditions can cause data corruption, incorrect computations, or other unexpected behavior.

To address race conditions, synchronization mechanisms such as locks, mutexes, or atomic operations are used. These mechanisms ensure that only one thread can access the shared data at a time, preventing simultaneous modifications and maintaining data integrity.

2. Deadlocks: A deadlock occurs when two or more threads are waiting for each other to release resources, resulting in a circular dependency. For instance, Thread A holds Resource X and waits for Resource Y, while Thread B holds Resource Y and waits for Resource X. As a result, both threads are stuck, unable to proceed.

Deadlocks can happen if synchronization primitives are not used correctly or if resources are not properly managed. To prevent deadlocks, operating systems and programming languages provide techniques such as deadlock detection, avoidance, and recovery. These techniques involve careful resource allocation, resource ordering, and algorithms to detect and break circular dependencies.

3. Livelocks: Livelocks are similar to deadlocks in that threads are unable to make progress. However, in a livelock, threads are not blocked or waiting for resources. Instead, they are constantly responding to each other’s actions, resulting in a loop where no thread can proceed.

Livelocks often occur due to incorrect algorithms or designs that involve excessive interaction between threads. For example, in a distributed system, livelocks can arise when multiple nodes continuously exchange messages but cannot progress toward a solution.

Mitigating livelocks involves careful design and analysis of algorithms to avoid situations where threads are constantly reacting to each other without making progress. Introduction of randomness or adaptive strategies can also help break the cycles and allow threads to move forward.

4. Priority inversion: Priority inversion happens when a low-priority thread holds a resource that a high-priority thread needs. This can occur when a medium-priority thread preempts the low-priority thread and prevents the high-priority thread from making progress.

To understand this, consider a scenario where a low-priority thread locks a shared resource, and a high-priority thread becomes ready to execute and requires access to that resource. However, due to the priority of the low-priority thread, the high-priority thread is not scheduled, and the low-priority thread continues to hold the resource, causing a delay.

Priority inversion can be mitigated by mechanisms like priority inheritance or priority ceiling protocols. These techniques temporarily elevate the priority of a low-priority thread to prevent higher-priority threads from being blocked by it. This ensures that the high-priority thread can access the required resource promptly.

5. Starvation: Starvation occurs when a thread is indefinitely delayed or denied access to resources it needs to make progress. This can happen if a thread with lower priority constantly gets scheduled ahead of a thread with higher priority, causing the higher-priority thread to be starved of resources.

Starvation can arise from improper scheduling algorithms or incorrect priority assignment. To mitigate starvation, operating systems use scheduling policies and mechanisms that ensure fairness, such as priority-based scheduling, round-robin scheduling, or aging techniques. These techniques ensure that all threads have a fair chance to execute, preventing any particular thread from being indefinitely delayed or denied resources.

6. Thread synchronization issues: Threads often need to synchronize their access to shared resources to maintain consistency. Without proper synchronization, issues such as data races, inconsistent states, or incorrect outcomes can arise.

Synchronization mechanisms like locks, semaphores, and condition variables are used to coordinate thread access to shared resources. These mechanisms ensure that only one thread can modify a resource at a time or that threads wait for specific conditions before proceeding. By enforcing synchronization, threads can safely access shared data without interfering with each other or causing inconsistencies.

However, improper use of synchronization mechanisms, such as forgetting to release a lock or incorrectly ordering lock acquisitions, can lead to deadlocks, race conditions, or other synchronization issues. Thorough testing, proper design, and understanding of synchronization primitives are essential to avoid such problems.

7. Oversubscription: Oversubscription occurs when there are more threads or processes in the system than the available processing resources can handle efficiently. This can lead to increased context switching overhead, resource contention, and degraded overall system performance.

When the number of threads exceeds the system’s capacity to execute them simultaneously, the operating system has to schedule and switch between threads frequently, resulting in increased overhead. Additionally, excessive competition for resources, such as CPU time, memory, or I/O bandwidth, can cause bottlenecks and slowdowns.

To address oversubscription, system administrators and developers need to carefully manage the number of threads and processes, taking into account the available resources and workload characteristics. Techniques such as load balancing, resource monitoring, and optimization can help ensure efficient utilization of resources and prevent performance degradation caused by oversubscription.

Overall, understanding and addressing these threading issues are crucial for developing robust, efficient, and reliable multi-threaded applications and operating systems.


Unit – 3: Process Synchronization & Deadlock


The running state of a program is called a process.

On the basis of synchronization, processes are categorized as one of the following two types:

Independent Process: The execution of one process does not affect the execution of other processes.

Cooperative Process: A process that can affect or be affected by other processes executing in the system. Cooperative processes share common variables, resources, memory, and code.

The process synchronization problem arises in the case of cooperative processes because resources are shared among them.

Process Synchronization:

When two or more processes cooperate with each other, their order of execution must be preserved otherwise there can be conflicts in their execution and inappropriate outputs can be produced.

A cooperative process is the one which can affect the execution of other process or can be affected by the execution of other processes. Such processes need to be synchronized so that their order of execution can be guaranteed.

The procedure involved in preserving the appropriate order of execution of cooperative processes is known as Process Synchronization. There are various synchronization mechanisms that are used to synchronize the processes.

Process synchronization problems occur when two processes running concurrently share the same data or the same variable. The value of that variable may not be updated correctly before it is used by the second process. Such a condition is known as a race condition. There are software as well as hardware solutions to this problem.

Race Condition:

A Race Condition typically occurs when two or more threads try to read, write and possibly make the decisions based on the memory that they are accessing concurrently.

When more than one process executes the same code, or accesses the same memory or a shared variable, there is a possibility that the output or the value of the shared variable is wrong. Because all the processes are racing to make their own result the final one, this condition is known as a race condition.

Critical Section Problem:

The regions of a program that try to access shared resources and may cause race conditions are called critical sections. To avoid race condition among the processes, we need to assure that only one process at a time can execute within the critical section.

For example, if two programs access the same shared data, then the portion of each program that does so is placed in a critical section.

In the entry section, the process requests for entry in the Critical Section.

Any solution to the critical section problem must satisfy three requirements:

We can solve critical section problem in two ways:

  1. Software solutions: These refer to the use of algorithms such as Peterson's algorithm, which is discussed below.
  2. Hardware Solution: The hardware level solution is called Hardware Synchronization.


Peterson’s Solution: 

boolean flag[i]: Initialized to FALSE, initially no one is interested in entering the critical section

int turn: Indicates whose turn it is to enter the critical section.

Peterson’s Solution preserves all three conditions:

Disadvantages of Peterson’s solution:

Hardware Synchronization:

There are three algorithms in the hardware approach of solving Process Synchronization problem: 

Hardware instructions in many operating systems help in the effective solution of critical section problems. 

1. Test and Set:

Here, the shared variable is lock which is initialized to false. 

TestAndSet(lock) algorithm works in this way – 

Test and Set Pseudocode – 

// Shared variable lock, initialized to false
boolean lock = false;

boolean TestAndSet (boolean &target) {
    boolean rv = target;
    target = true;
    return rv;
}

// Each process then uses:
while (TestAndSet(lock))
    ;                  // busy wait
// critical section
lock = false;
// remainder section

2. Swap:

Remaining topics of this portion: Semaphores, Classic Problems of Synchronization, Monitors.


The main task of the operating system is to allocate resources to the processes. We have several types of resources such as the printer, scanner, hard disk, RAM, and more.

Every process needs some resources to complete its execution. However, the resource is granted in a sequential order:

A deadlock is a situation where each of the processes waits for a resource which has been assigned to some other process. In this situation, none of the processes gets executed.

For example, Process 1 is holding Resource 1 and waiting for Resource 2, while Process 2 is holding Resource 2 and waiting for Resource 1.

In this scenario, a cycle is formed between the two processes. Neither process is progressing; both keep waiting. The computer becomes unresponsive since the involved processes are blocked.

System Model :

For the purposes of deadlock discussion, we should have a proper model in which we can specify the number of resources available and the processes requesting them.

Operations :

In normal operation, a process must request a resource before using it and release it when finished:

Deadlock Characterization:

A deadlock happens in the operating system when two or more processes need some resource to complete their execution that is held by the other process.

A deadlock occurs if the four Coffman conditions become true:

  1. Mutual Exclusion: There should be a resource that can only be held by one process at a time.
  2. Hold and Wait: A process can hold multiple resources and still request more resources from other processes which are holding them.
  3. No Preemption: Once a process holds a resource, that resource cannot be taken away from that process until the process voluntarily releases it.
  4. Circular Wait: A process is waiting for the resource held by the second process, which is waiting for the resource held by the third process and so on, till the last process is waiting for a resource held by the first process. This forms a circular chain.

Deadlock Prevention:

We can prevent deadlock by eliminating one of the four deadlock conditions: Mutual Exclusion, Hold and Wait, No Preemption, and Circular Wait.

If we eliminate any one of the four conditions, then deadlock cannot occur.

Let us check whether we can eliminate any one of those conditions or not.

1. Mutual Exclusion: It means a resource can be used by only one process at a time, which makes resources non-sharable by default.

It is not possible to share a resource simultaneously among multiple processes. If we do so then It will produce the wrong result.

For example, if we allocate a printer to processes P1 and P2 simultaneously, then we will get a mixture of P1's and P2's output.

So it is not possible to eliminate mutual exclusion.

2. Hold and Wait: A process holding some resources while waiting for additional resources is in the hold-and-wait condition.

There are two approaches of Eliminating this condition:

3. No Preemption: A resource cannot be taken away from a process until the process voluntarily releases it; that is, a process releases a resource only after finishing with it.

One way to eliminate this condition is to preempt resources from a process when those resources are required by other, higher-priority processes.

4. Circular Wait: Each resource is assigned a numerical value, and a process may request resources only in increasing order of numbering.

For example, if process P1 has been allocated resource R5, then a later request by P1 for R4 or R3 (numbered lower than R5) will not be granted; only requests for resources numbered higher than R5 will be granted.

Resource Allocation Graph:

The resource allocation graph is the pictorial representation of the state of a system. As its name suggests, the resource allocation graph is the complete information about all the processes which are holding some resources or waiting for some resources.

It also contains the information about all the instances of all the resources whether they are available or being used by the processes.

In a resource allocation graph, a process is represented by a circle while a resource is represented by a rectangle.

Let’s see the types of vertices and edges in detail:


Let's consider three processes P1, P2 and P3, and two resources R1 and R2, each having one instance.

According to the graph, R1 is being used by P1, P2 is holding R2 and waiting for R1, P3 is waiting for R1 as well as R2.

The graph is deadlock free since no cycle is being formed in the graph.

Deadlock Detection & Avoidance:

Deadlock Detection:

In this approach, The OS doesn’t apply any mechanism to avoid or prevent the deadlocks. Therefore the system considers that the deadlock will definitely occur. 

In order to get rid of deadlocks, the OS periodically checks the system for any deadlock. In case it finds one, the OS recovers the system using some recovery technique.

The main task of the OS is detecting the deadlocks. The OS can detect the deadlocks with the help of a Resource allocation graph.

The deadlock Detection Algorithm is of two types:

Wait-for-Graph Algorithm: 


Step 1: Take the first process (Pi) from the resource allocation graph and check the path in which it is acquiring resource (Ri), and start a wait-for-graph with that particular process.

Step 2: Draw an edge in the wait-for graph directly from the current process (Pi) to the next process (Pj), dropping the resource vertex in between; then, from that process (Pj), find the resource (Rj) it is waiting for and the process (Pk) holding it, and continue in the same way.

Step 3: Repeat Step 2 for all the processes.

Step 4: After completion of all processes, if we find a closed-loop cycle then the system is in a deadlock state, and deadlock is detected.

Now we will see the working of this Algorithm with an Example. Consider a Resource Allocation Graph with 4 Processes P1, P2, P3, P4, and 4 Resources R1, R2, R3, R4. 

Find if there is a deadlock in the Graph using the Wait for Graph-based deadlock detection algorithm.

 Step 1: First take Process P1 which is waiting for Resource R1, resource R1 is acquired by Process P2, Start a Wait-for-Graph for the above Resource Allocation Graph. 

Step 2: Now we can observe that there is a path from P1 to P2, as P1 is waiting for R1 which has been acquired by P2. After removing resource R1, the graph looks like this.

Step 3: From P2 we can observe a path from P2 to P3, as P2 is waiting for R4 which is acquired by P3. So draw a path from P2 to P3; after removing resource R4, the graph looks like this.

Step 4: From P3 we find a path to P4, as P3 is waiting for R3 which is acquired by P4. After removing R3, the graph looks like this.

Step 5: Here we can find Process P4 is waiting for R2 which is acquired by P1. So finally the Wait-for-Graph is as follows: 

Step 6: Finally In this Graph, we found a cycle as the Process P4 again came back to the Process P1 which is the starting point.

So, According to the Algorithm if we found a closed loop, then the system is in a deadlock state. So here we can say the system is in a deadlock state.


Deadlock Avoidance:

Deadlock avoidance is mainly achieved with the Banker's algorithm.

The Banker's Algorithm is a resource allocation and deadlock avoidance algorithm which tests every resource request made by a process. It checks for a safe state: if the system remains in a safe state after granting the request, the request is allowed; otherwise, it is denied.

Banker’s algorithm is also used for the detection of Deadlock.

Inputs to Banker’s Algorithm: 

The request will only be granted under the below condition: 


Total resources in system:

A    B    C    D

6    5    7    6

Available system resources are:

A   B   C   D

3    1   1   2

Processes (currently allocated resources):

         A    B    C   D

P1    1    2    2    1

P2   1     0     3    3

P3   1    2     1    0

Processes (maximum resources):

          A    B   C    D

P1    3    3    2     2

P2   1     2    3    4

P3   1    3    5    0

Need = maximum resources – currently allocated resources.

Processes (need resources):

           A    B    C    D

P1      2    1    0    1

P2      0    2    0    1

P3      0    1    4    0

Note: Deadlock prevention is more strict than Deadlock Avoidance.

Above Observation in Table Format:

Process   Allocated    Maximum Need   Available      Remaining Need
          A B C D      A B C D        A B C D        A B C D
P1        1 2 2 1      3 3 2 2        3 1 1 2        2 1 0 1
P2        1 0 3 3      1 2 3 4        4 3 3 3        0 2 0 1
P3        1 2 1 0      1 3 5 0        5 3 6 6        0 1 4 0

Total allocated = 3 4 6 4, so initially Available = Total - Allocated = (6 5 7 6) - (3 4 6 4) = (3 1 1 2). The Available column then grows as each process in the safe sequence finishes and releases its allocation.


Recovery from Deadlock in Operating System:

When a deadlock detection algorithm determines that a deadlock has occurred in the system, the system must recover from it. There are two approaches to breaking a deadlock: 

1. Process Termination: 

To eliminate the deadlock, we can simply kill one or more processes. For this, we use two methods: 

Abort all the Deadlocked Processes: 

Aborting all the processes will certainly break the deadlock, but at great expense. The deadlocked processes may have computed for a long time, and the results of those partial computations must be discarded and will probably have to be recomputed later.

Abort one process at a time until deadlock is eliminated: 

Abort one deadlocked process at a time until the deadlock cycle is eliminated from the system. This method may incur considerable overhead, because after aborting each process we have to run the deadlock detection algorithm again to check whether any processes are still deadlocked.

2. Resource Preemption: 

To eliminate deadlocks using resource preemption, we preempt some resources from processes and give those resources to other processes. This method will raise three issues:

Selecting a victim: 

We must determine which resources and which processes are to be preempted and also the order to minimize the cost. 


Rollback:

We must determine what should be done with the process from which resources are preempted. One simple idea is total rollback: abort the process and restart it.


Starvation:

In a system, it may happen that the same process is always picked as a victim. As a result, that process will never complete its designated task. This situation is called starvation and must be avoided. One solution is that a process may be picked as a victim only a finite number of times.


Unit – 4: Memory management

The term Memory can be defined as a collection of data in a specific format. It is used to store instructions and processed data.

The primary motive of a computer system is to execute programs. 

These programs, along with the information they access, should be in the main memory during execution.

The CPU fetches instructions from memory according to the value of the program counter. 

To achieve a degree of multiprogramming and proper utilization of memory, memory management is important.

What is Main Memory?

Main memory is volatile memory, which holds data only while power is supplied to the computer.

It is faster than secondary storage devices, which is why the CPU uses main memory to fetch data as required.

Main memory is also called Random Access Memory (RAM); it loses its data when a power interruption occurs.

Main memory is a repository of rapidly available information shared by the CPU and I/O devices.

Main memory is directly connected with the processor to provide the required data in an extremely fast manner.

What is Memory Management?

In a multiprogramming computer, the operating system resides in a part of memory and the rest is used by multiple processes.

The task of subdividing the memory among different processes is called memory management.

Memory management is a method in the operating system to manage operations between main memory and disk during process execution.

The main aim of memory management is to achieve efficient utilisation of memory.  

Why Memory Management is required

Logical and Physical Address Space

Logical Address space: An address generated by the CPU is known as a "Logical Address", also called a virtual address. The logical address space can be defined as the size of the process. A logical address can be changed.

Physical Address space: The actual address that exists in memory is called the "Physical Address". The physical address always remains constant.

The run-time mapping from virtual to physical addresses is done by a hardware device called the Memory Management Unit (MMU).


Contiguous Memory Allocation:

Contiguous memory allocation is the oldest memory management technique used to allocate RAM to requesting data or processes.

As the name suggests, contiguous means memory is allocated to processes in a continuous manner (one after another).

The memory is usually divided into two partitions: one for the resident operating system and one for the user processes.

Adjacent blocks of memory are allocated to the new process in the contiguous memory allocation approach.

Contiguous memory allocation was mainly used in mainframe computers.

There are two approaches or types of Contiguous memory allocation:


Let's understand paging with an example:

Let us consider a main memory of size 16 KB and a frame size of 1 KB; the main memory will therefore be divided into a collection of 16 frames of 1 KB each.

There are 4 processes in the system, P1, P2, P3 and P4, of 4 KB each. Each process is divided into pages of 1 KB each, so that one page can be stored in one frame.

Initially, all the frames are empty, therefore the pages of the processes will get stored in a contiguous way.

See the below image:

Let us consider that, P2 and P4 are moved to a waiting state after some time. Now, 8 frames become empty and therefore other pages can be loaded in that empty place. The process P5 of size 8 KB (8 pages) is waiting inside the ready queue.

Now, we have 8 non-contiguous frames available in the memory and paging provides the flexibility of storing the process at different places. Therefore, we can load the pages of process P5 in the place of P2 and P4.

Structure of Page Table:

Page Table is a data structure used by the virtual memory system to store the mapping between logical addresses and physical addresses. 

The logical address generated by the CPU consists of two parts: a page number and a page offset.

Each process has its own page table to maintain information about the physical addresses of all its pages. The number of entries is equal to the number of pages of the process.

The entries of the page table contain frame numbers of the main memory. For example, the 1st entry of the page table contains the frame number of the first page of the process.

How Paging Works?

Let's understand how the MMU generates a physical address from a logical address through the page table, with an example:

Let us have a process of size 8 bytes. We divide the process into 4 equal pages of 2 bytes each, and the size of each frame of main memory is also 2 bytes.

Now we load all 4 pages of the process into frames.

Now in the page table we store the frame numbers of the respective pages, in the following manner.

This means that we get the page table:





Page Table

RAM frames (frame size 2; each frame holds two consecutive byte addresses):

Frame 0:  0   1
Frame 1:  2   3
Frame 2:  4   5
Frame 3:  6   7
Frame 4:  8   9
Frame 5: 10  11
Frame 6: 12  13



For example, the CPU generates the logical address 2, which in binary is:

1 0

Here 1 is the page number (pointing to the 1st entry of the page table) and 0 is the offset within the page.

Now, the 1st entry of the page table contains frame number F4.

Now, convert the 4 into a binary number, which is 100.

After this we just write the offset after 100, so we get 1000. The numerical value of 1000 is 8, which means the CPU requires the data of the 8th byte.
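The translation above can be sketched in a few lines. This is a minimal sketch assuming a page/frame size of 2, as in the example; only the page-1 → frame-4 mapping comes from the text, the other page-table entries are hypothetical values for illustration:

```python
# Logical -> physical address translation through a page table.
PAGE_SIZE = 2                              # 1 offset bit, as in the example
page_table = {0: 5, 1: 4, 2: 3, 3: 6}     # page number -> frame number
                                           # (only page 1 -> frame 4 is from the text)

def translate(logical_address):
    page = logical_address // PAGE_SIZE    # high bits: page number
    offset = logical_address % PAGE_SIZE   # low bits: offset within the page
    frame = page_table[page]
    # appending the offset after the frame bits = frame * PAGE_SIZE + offset
    return frame * PAGE_SIZE + offset

print(translate(2))   # page 1, offset 0 -> frame 4 -> physical address 8
```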

Before discussing the types of page table structures, we have to understand why they are required, because we already have a simple page table.

As we know, RAM is divided into an equal number of fixed partitions called frames. The page tables are also stored in these frames.

If the size of the page table exceeds the limit of a frame, then an overflow problem will occur.

In order to avoid this problem, 3 types of page table structures have been introduced:

Hierarchical Page Table:

In the hierarchical page table structure there are following steps:

  1. First, divide the long page table into an equal number of partitions as per the frame size, so that these partitions can easily be accommodated in frames.
  2. Second, create one outer page table (a page table of page tables) in which we fill the entries with page-table numbers, just as we store frame numbers in a single page table.
  3. In this approach we can also create more than 2 levels of hierarchy, in order to further break down the page tables.
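The two-level lookup described in the steps above can be sketched as follows. The page size, the split of the page number into outer/inner indices, and all table contents here are hypothetical values chosen only for illustration:

```python
# Two-level (hierarchical) page table lookup sketch.
PAGE_SIZE = 16          # 4 offset bits (hypothetical)
INNER_ENTRIES = 4       # each inner page table holds 4 entries (hypothetical)

# outer index -> inner page table; inner index -> frame number
outer_table = {0: {0: 7, 1: 2, 2: 9, 3: 1}}

def translate(logical_address):
    offset = logical_address % PAGE_SIZE
    page = logical_address // PAGE_SIZE
    inner = page % INNER_ENTRIES        # index into the inner page table
    outer = page // INNER_ENTRIES       # index into the outer page table
    frame = outer_table[outer][inner]
    return frame * PAGE_SIZE + offset

print(translate(0x25))  # page 2 -> frame 9; 9*16 + 5 = 149
```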

Hashed Page Tables:

The hashed page table is an advanced page table management scheme where searching for any page is extremely fast, O(1) on average (constant time).

In a hashed page table, the virtual addresses are hashed into the hash table.

Each element in the table comprises a linked list of elements to avoid collisions.

Each node of the linked list comprises two parts: a data field containing the virtual page number, and a pointer field pointing to the next node.

Hashed page tables are common in address spaces greater than 32 bits.

Following are the steps of Hash Table in order to generate physical address:

  1. First of all, the CPU gives a logical address, in which the first part contains the virtual page number & the second contains the offset within the frame.
  2. Now, we pass the first part of the logical address to the hash function.
  3. The hash function computes the corresponding entry (bucket) in the hash table for that virtual page number.
  4. The virtual page number is matched against the first field of each node in that bucket, the virtual address.
  5. If a match is found, the corresponding frame number followed by the offset is used to form the desired memory address.
  6. If a match is not found, the linked list is traversed using the next pointer until a match is found.
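The lookup steps above can be sketched with a simple chained hash table. The bucket count, the hash function (plain modulo), and the entries are hypothetical values for illustration:

```python
# Hashed page table with chaining: each bucket holds a chain of
# (virtual_page_number, frame_number) nodes.
NUM_BUCKETS = 8
buckets = [[] for _ in range(NUM_BUCKETS)]

def insert(vpn, frame):
    buckets[vpn % NUM_BUCKETS].append((vpn, frame))

def lookup(vpn):
    # traverse the chain in the hashed bucket until the VPN matches
    for entry_vpn, frame in buckets[vpn % NUM_BUCKETS]:
        if entry_vpn == vpn:
            return frame
    return None   # no mapping for this virtual page

insert(3, 12)
insert(11, 5)      # 11 % 8 == 3: collides with vpn 3, lands in the same chain
print(lookup(11))  # 5
```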


Inverted Page Tables:

Working Process:

  1. The CPU generates the logical address for the page it needs to access.
  2. The logical address consists of three parts: process id, page number, and offset.
  3. The process id identifies the process whose page has been demanded, the page number indicates which page of that process is being asked for, and the offset value indicates the position within the page.
  4. The process id and associated page number are searched for in the page table.
  5. If the match is found at the ith entry of the page table, then i and the offset together generate the physical address for the requested page.
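The search in steps 4–5 can be sketched as a linear scan over the frame entries. The table contents and page size here are hypothetical, chosen only for illustration; the key point is that the table is indexed by frame, not by page:

```python
# Inverted page table sketch: one entry per physical frame,
# holding (process_id, page_number). Entries are hypothetical.
PAGE_SIZE = 1024
inverted_table = [("P2", 0), ("P1", 3), ("P1", 0), ("P3", 1)]  # index = frame

def translate(pid, page, offset):
    # linear search: find the frame i whose entry matches (pid, page)
    for i, entry in enumerate(inverted_table):
        if entry == (pid, page):
            return i * PAGE_SIZE + offset
    return None   # page fault: the page is not in memory

print(translate("P1", 3, 100))   # match at frame 1 -> 1*1024 + 100 = 1124
```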


Need of Segmentation:

Problem with Paging:
Solution using Segmentation:


Translation of Logical address into physical address by segment table:

Virtual Memory:


Working Process:

Page Fault:

Demand Paging:

Page Replacement Algorithms:

Page Replacement Algorithms are all about deciding which page needs to be replaced when a new page comes in. 

A page fault occurs when the requested page of the process is not available in the Main Memory. 

In order to resolve page faults, the OS uses various page replacement algorithms for replacing a less useful page with the newly requested page. These are as follows:

1. First In First Out (FIFO):

This is the simplest page replacement algorithm in which the Oldest page of main memory is selected for replacement.

In this algorithm, the operating system keeps track of all pages in a queue.

When a page fault occurs, if an empty frame is available then there is no need for page replacement.

But if all frames are full, then we replace the oldest page with the newly requested page.

Example 1: Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Find the number of page faults.
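This example can be checked with a small FIFO simulator (a minimal sketch, not tied to any particular OS):

```python
from collections import deque

# FIFO page replacement: evict the page that entered memory earliest.
def fifo_faults(refs, num_frames):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page in frames:
            continue                        # hit: nothing to do
        faults += 1                         # page fault
        if len(frames) == num_frames:
            frames.remove(queue.popleft())  # evict the oldest page
        frames.add(page)
        queue.append(page)
    return faults

print(fifo_faults([1, 3, 0, 3, 5, 6, 3], 3))   # 6 page faults
```

Only the reference to page 3 at position 4 is a hit; the final reference to 3 faults again because FIFO evicted it to make room for 6.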

2. Optimal Page replacement:

In this algorithm, the page that would not be used for the longest duration of time in the future is replaced.

Look in the forward direction and replace the page whose next use is farthest away, or which would not be used again at all.

Example: Consider the page references 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page frames. Find number of page faults. 
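This example can be checked with a small simulator for the optimal (farthest-future-use) policy (a minimal sketch):

```python
# Optimal page replacement: evict the resident page whose next use
# lies farthest in the future (or which is never used again).
def optimal_faults(refs, num_frames):
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
        else:
            def next_use(p):
                future = refs[i + 1:]
                return future.index(p) if p in future else float("inf")
            frames.remove(max(frames, key=next_use))
            frames.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3]
print(optimal_faults(refs, 4))   # 6 page faults
```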

3. Least Recently Used (LRU) algorithm:

In this algorithm, a page will be replaced which is least recently used. 

It is easy to implement: keep a list and replace pages by looking back in time to find which page was used least recently.

Example: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page frames. Find number of page faults. 
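This example can be checked with a small LRU simulator that records the last time each resident page was used (a minimal sketch):

```python
# LRU page replacement: evict the resident page whose last use is oldest.
def lru_faults(refs, num_frames):
    frames, last_used, faults = set(), {}, 0
    for t, page in enumerate(refs):
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                # evict the least recently used resident page
                victim = min(frames, key=lambda p: last_used[p])
                frames.remove(victim)
            frames.add(page)
        last_used[page] = t    # record most recent use
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3]
print(lru_faults(refs, 4))   # 6 page faults
```

For this particular reference string LRU happens to match the optimal count; in general LRU can fault more often than optimal.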

Allocation of Frames in OS:

The main memory of the operating system is divided into various frames.

The pages of a process are stored in these frames, and once they are loaded into frames, the CPU may run the process.

In order to allocate frames to the pages of running processes, the OS uses various frame allocation algorithms.

Demand paging is an essential operating system feature, used to implement virtual memory. 

It requires the development of a page replacement mechanism and a frame allocation system.

There are mainly five ways of frame allocation algorithms in the OS. These are as follows:

1. Equal Frame Allocation:

As the name suggests, Equal Frame Allocation means the allocation of available frames amongst processes is equal.

For Example, we have 2 processes P1 has 10 Pages & P2 has 30 Pages, and available memory is 40 Frames.

Now the number of frames is equally distributed amongst P1 & P2, so both will get 20 frames each.

So the problem here is: in the case of P1, 10 frames get wasted, and in the case of P2, 10 pages will not get memory.

Disadvantage:  Allocation of a large number of frames to a small process will eventually create lots of holes in memory, which will cause the wastage of memory.

2. Proportional Frame Allocation

The Proportional Frame allocation overcomes the problem of Equal Frame Allocation.

Here the Number of frames are allocated on the basis of the Size of the process & available frames.

The allocated frames for a process pi of size si are ai = (si/S)*m, in which S represents the total of all process sizes, and m represents the number of frames in the system.

For example: we have 2 processes P1 has 10 Pages & P2 has 30 Pages, and available memory is 40 Frames.

So we just put values in the formula:

P1 = 10 / (10 + 30) * 40 = 10 / 40 * 40 = 10 Frames

P2 = 30 / (10 + 30) * 40 = 30 / 40 * 40 = 30 Frames

So here based on the size of the process the memory will be allocated, which means that large processes will get a high number of frames & small processes will get less number of  frames.
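The formula ai = (si/S)*m can be checked directly against the example above (a minimal sketch; integer division is used here, assuming frame counts are whole numbers):

```python
# Proportional frame allocation: a_i = (s_i / S) * m,
# where S is the total of all process sizes and m is the frame count.
def proportional_frames(sizes, m):
    S = sum(sizes.values())
    return {p: s * m // S for p, s in sizes.items()}

# Example from the text: P1 has 10 pages, P2 has 30 pages, m = 40 frames.
print(proportional_frames({"P1": 10, "P2": 30}, 40))   # {'P1': 10, 'P2': 30}
```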

Disadvantage: The only drawback of this algorithm is that it doesn’t allocate frames based on priority. Priority frame allocation solves this problem.

3. Priority Frame Allocation:

As the name Suggests, Priority Frame Allocation Technique allocates Frames to the processes on the basis of Priority. 

Here the process with the high priority will be allocated more number of frames &  the low priority process will get less number of frames.

So here based on the size of the process & the priority of the process the memory will be allocated.

4. Global Replacement Algorithm:

5. Local Replacement Algorithms:

This is also similar to the Global Replacement Algorithm.

Only the difference is that here One process can’t take the frame of another process.

If the page fault occurs then the OS can only replace frames of its fixed allocated number of frames.

So ultimately we can say that the local replacement algorithm is a static frame allocation approach, where the number of allocated frames can't be increased or decreased.




How to Eliminate Thrashing:


Unit – 5: Storage Management

Storage management is a process that is used to optimize the use of storage devices.

Storage management is used to achieve the following key points:

  1. Performance
  2. Reliability
  3. Recoverability
  4. Capacity 

Storage management is generally a basic system component of information systems, and is used to improve the performance of their data storage resources.

Mass Storage Structure Overview:

Magnetic Tape:

Tape was an early type of secondary storage; it is now mostly used for backup.

Magnetic Disk:

Disk Structure:

Platter —> Surfaces —> Tracks —> Sectors —> Data

Platters: Platters are the round disks which are connected to each other through a spindle. The job of the spindle is to rotate all platters together. Each platter has an upper & lower surface.

Tracks: These are circular rings on the platter's lower & upper surfaces, used to store data.

Sectors: The small equal parts of Tracks are called sectors where data is stored.

Note: The point to be noted here is that outer tracks are bigger in size than the inner tracks but they contain the same number of sectors and have equal storage capacity. This is because the storage density is high in sectors of the inner tracks whereas the bits are sparsely arranged in sectors of the outer tracks.

R/W head: Read/write heads are used to write data on the upper & lower surfaces of the platters. They are also responsible for fetching or reading data from the sectors. They can only move in the forward & backward direction, in order to perform R/W operations from the innermost to the outermost track of a platter.

Arm: Arms are used to place the read/write heads over the platter's surface.

Actuator Arm / Arm Assembly: The arm assembly holds all arms together so that all read/write heads move simultaneously.


Disk Attachment:

Disks can be attached to the computer in the following ways:

1. Host-attached storage:

2. Network-attached storage:

3. Storage area network:

Disk Scheduling:

Disk scheduling is done by operating systems to schedule I/O requests arriving for the disk. Disk scheduling is also known as I/O scheduling. 

Disk scheduling is important because: 

Some important terms must be noted here: 


Disk Scheduling Algorithms:

The Ultimate goal of all scheduling Algorithms is to reduce seek time as much as possible.

1. FCFS (First Come First Served)

FCFS is the simplest of all the Disk Scheduling Algorithms.

In FCFS, the requests are addressed in the order they arrive in the disk queue.


Suppose the order of request is: 82,170,43,140,24,16,190

Current position of Read/Write head is : 50

So, total seek time: 

= (82-50)+(170-82)+(170-43)+(140-43)+(140-24)+(24-16)+(190-16)

= 642


There is no starvation in the FCFS algorithm, which means none of the requests waits indefinitely.

It is the simplest approach of Disk Scheduling.


The seek time of this algorithm is higher compared to other algorithms, because it serves all requests one by one in the order they arrive.
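The FCFS calculation above can be reproduced by simply summing the head movements in arrival order (a minimal sketch):

```python
# FCFS disk scheduling: visit requests in arrival order and sum the
# absolute head movements.
def fcfs_seek(head, requests):
    total = 0
    for r in requests:
        total += abs(r - head)   # distance moved to serve this request
        head = r
    return total

print(fcfs_seek(50, [82, 170, 43, 140, 24, 16, 190]))   # 642
```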


2. Shortest Seek Time First (SSTF):

This algorithm overcomes the disadvantage of the FCFS algorithm.

In SSTF (Shortest Seek Time First), requests having the shortest seek time are executed first. 

So, the seek time of every request is calculated in advance in the disk queue and then they are scheduled according to their calculated seek time.

In this approach, the nearest track is served first.


Suppose the order of request is: 82,170,43,140,24,16,190

Current position of Read/Write head is : 50

So, total seek time: 

= (50-43)+(43-24)+(24-16)+(82-16)+(140-82)+(170-140)+(190-170) 

= 208
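The SSTF result can be reproduced by greedily serving the nearest pending request at each step (a minimal sketch):

```python
# SSTF disk scheduling: at each step, serve the pending request
# closest to the current head position.
def sstf_seek(head, requests):
    pending, total = list(requests), 0
    while pending:
        nearest = min(pending, key=lambda r: abs(r - head))
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

print(sstf_seek(50, [82, 170, 43, 140, 24, 16, 190]))   # 208
```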


3. Scan Algorithm:

In the SCAN algorithm the disk arm moves into a particular direction and serves the requests coming in its path and after reaching the end of the disk, it reverses its direction and again serves the request arriving in its path. 

So, this algorithm works as an elevator and hence is also known as elevator algorithm.

The direction is given in advance in this algorithm, e.g. towards the larger values or towards the smaller values.

Initially, the read/write head travels up to the last track of the disk, but in the reverse direction it only has to move till the last requested value.


Suppose the order of request is: 82,170,43,140,24,16,190

Current position of Read/Write head is : 50

Direction: Towards the larger value.

So, total seek time:

= (199-50)+(199-16) 

= 332


In a real-life scenario, if the R/W head is moving and a new request arrives for a position behind it, it will not serve that request immediately. The new request will be served in the next sweep.
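The SCAN seek time can be reproduced as follows, assuming a 0–199 disk (the calculation above uses 199 as the last track) and movement toward the larger values first:

```python
# SCAN (elevator) disk scheduling, direction: towards larger values.
def scan_seek(head, requests, last_track=199):
    behind = [r for r in requests if r < head]   # served on the return sweep
    total = last_track - head                    # sweep up to the last track
    if behind:
        total += last_track - min(behind)        # reverse down to the farthest-back request
    return total

print(scan_seek(50, [82, 170, 43, 140, 24, 16, 190]))   # 332
```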


4. CSCAN Disk Scheduling Algorithm:

In CSCAN, or Circular SCAN, the R/W head first moves in the given direction till the end of the disk, serving requests on its way; after that it jumps to the opposite end without serving any request, and from that end it starts serving requests again.


Suppose the order of request is: 82,170,43,140,24,16,190

Current position of Read/Write head is : 50

Direction: Towards the larger value.

Seek time is calculated as:

= (199-50)+(199-0)+(43-0)

= 391


5. LOOK Disk Scheduling Algorithm:

LOOK is similar to the SCAN algorithm; the only difference is that LOOK doesn't travel the wasted extra path. In SCAN the R/W head initially traverses till the end track, but LOOK only goes as far as the last request.

It is an enhanced version of SCAN Algorithm.


Suppose the order of request is: 82,170,43,140,24,16,190

Current position of Read/Write head is : 50

Direction: Towards the larger value. 

So, the seek time is calculated as:

= (190-50)+(190-16)

= 314

6. CLOOK Disk Scheduling Algorithm

CLOOK is similar to the CSCAN disk scheduling algorithm. 

The only difference here is that it also does not traverse any extra path.

In both directions rather than moving till the last track, it moves till the last request.


Suppose the order of request is: 82,170,43,140,24,16,190

Current position of Read/Write head is : 50

Direction: Towards the larger value.

So, the seek time is calculated as:

=( 190-50)+(190-16)+(43-16) 

= 341
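The C-LOOK result can be reproduced as follows (a minimal sketch; direction towards the larger values, with the jump back counted as in the calculation above):

```python
# C-LOOK disk scheduling, direction: towards larger values.
def clook_seek(head, requests):
    larger = sorted(r for r in requests if r >= head)
    smaller = sorted(r for r in requests if r < head)
    total = larger[-1] - head if larger else 0   # serve up to the last request
    if smaller:
        total += larger[-1] - smaller[0]         # jump back to the smallest request
        total += smaller[-1] - smaller[0]        # serve the remaining requests upward
    return total

print(clook_seek(50, [82, 170, 43, 140, 24, 16, 190]))   # 341
```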


File System Interface

The basic Concept of a File:

A file is a collection of related information or data that is recorded on secondary storage.

So the File System is the module or software in the OS which manages the files in the secondary storage device.

How the data will be stored & fetched is maintained by the file system of the OS.

Actually, the data is stored in secondary storage in the form of bits, and we can't understand that raw data, so the file system of the OS provides us an interface to access that data in a structured manner.

Before discussing the file system, let's understand some basics about files.

The name  of the file is divided into two parts:

Files Operation, Types, & attributes:

File Operations: Create, Open, Read, Write, Append, Truncate, Delete, Close.

File Types: doc, exe, jpg, xls, c, java, py, class, jpeg, gif, and more.

File Attributes: Name, Extension (Type), Identifier, Location, Size, Author, Last Modification, Protection, Creation Date, Permissions.


Access Methods of a File:

There are three ways to access a file into a computer system: Sequential-Access, Direct Access, Indexed Sequential Access. 

Let’s understand Each of them One by One:

1. Sequential Access:

Here the operations are like this:


2. Direct Access:

Here the operations are like this (here n refers to the record number):


3. Indexed Sequential Access:

Directory Structures:

A directory is a container that is used to contain folders and files. It organizes files and folders in a hierarchical manner. 

There are several logical structures of a directory, which are as follows:

1. Single Level Directory:

2. Two Level Directory

3. Tree Structure Directory:

4. Acyclic Graph Directory:

The acyclic graph directory is mainly designed in order to share a common file amongst different directories.

File System Structure:

Logical file system:

When an application program asks for a file, the first request is directed to the logical file system. The logical file system contains the Metadata of the file and directory structure. If the application program doesn’t have the required permissions of the file then this layer will throw an error. Logical file systems also verify the path to the file.

File organization Module:

Hard disk is divided into various tracks and sectors. Therefore, in order to store and retrieve the files, the logical blocks need to be mapped to physical blocks. This mapping is done by the File organization module. It is also responsible for free space management.

Basic file system:

Once the File organization module decides which physical block the application program needs, it passes this information to the basic file system. The basic file system is responsible for issuing the commands to I/O control in order to fetch those blocks.

I/O Control level: 

I/O controls contain the codes by which it can access the hard disk. These codes are known as device drivers. I/O controls are also responsible for handling interrupts.


Allocation Methods:

The allocation methods define how the files are stored in the disk blocks.

There are three main file allocation methods:

The main idea behind these methods is to provide:

1. Contiguous Allocation



2. Linked List Allocation:



3. Indexed Allocation:




Free Space Management:

Free space management is one of the special tasks of the OS, where it manages the free space available in memory after allocating space for some files.

There are four approaches used by the OS in order to manage free space:

1. Bit Vector / Bitmap:

We know that a hard disk contains a collection of blocks; file information is stored in those blocks.

Here each block is represented by a bit (0 or 1). 

Bit 1 specifies that the block is free and Bit 0 specifies that the block is Occupied.

For example, we have 16 bits: 0000111000000110; each bit specifies the status of the corresponding block, as shown in the diagram below.
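Scanning the bit vector for free blocks is a one-line operation (a minimal sketch, using the convention above that bit 1 means free):

```python
# Bit vector free-space management: bit 1 = free block, bit 0 = occupied.
bitmap = "0000111000000110"

def free_blocks(bits):
    # collect the indices of all free (bit == 1) blocks
    return [i for i, b in enumerate(bits) if b == "1"]

print(free_blocks(bitmap))   # [4, 5, 6, 13, 14]
```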


2. Linked List:



3. Grouping:


4. Counting:
