What Are The Functions Of An Operating System?

Introduction

An operating system is the heart and brain of a computer. It not only keeps the system running smoothly but also gives users control over it. Nowadays operating systems run on multi-processor CPUs.

That means multiple processes can be triggered, processed and completed within a single time quantum (the fixed slice of CPU time allotted to a process at a time). Managing all these processes, along with the resources and memory associated with them, is a tedious task; this is where the operating system comes into play, managing these tasks and scheduling the processes so that the system runs smoothly.

Lifecycle of a Process

A process can be defined as a program under execution. The context of each process in the system is captured in a data structure called the Process Control Block (PCB), which contains information regarding:


  • Process Identification: Contains all the data that identifies the process like:
    • Process Identifier 
    • Parent Process Identifier
    • User Identifier

This information helps the Operating System identify the process.

  • Process Control Information: Contains all the information regarding:
    • Memory Management: pointers to the segment or page tables that describe the process’s virtual memory.
    • Resource Utilisation: information about all the resources controlled by the process.
    • Process Privileges
  • Process State Information: Contains all the information regarding a process that an operating system uses for its scheduling:
    • Process State defines the current state of the process.
    • Process Priority Flags
    • Blocked Events
    • Scheduling Information, which includes the algorithm used to schedule the process, its accumulated waiting time and other scheduling parameters.
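
The fields above can be pictured as a single record kept per process. Here is a minimal sketch in Python; the field names are illustrative and do not correspond to any real kernel’s PCB layout:

```python
from dataclasses import dataclass, field

@dataclass
class ProcessControlBlock:
    # Process identification
    pid: int                    # process identifier
    ppid: int                   # parent process identifier
    uid: int                    # user identifier
    # Process control information
    page_table: dict = field(default_factory=dict)    # memory-management pointers
    resources: list = field(default_factory=list)     # resources held by the process
    privileges: set = field(default_factory=set)
    # Process state information
    state: str = "new"          # new, ready, running, waiting, suspended, terminated
    priority: int = 0
    waiting_time: int = 0       # scheduling information

pcb = ProcessControlBlock(pid=42, ppid=1, uid=1000)
print(pcb.state)                # -> new
```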

Now each process has a state, and there are rules that every process must abide by while moving from one state to another. These states are:

  • New: When the process is just created.
  • Running: When the process holds the CPU.
  • Waiting: When the process is waiting for an event to occur like resource allocation or interrupt event.
  • Ready: When the process has all the resources to execute but is waiting for the CPU.
  • Suspended: When the process is temporarily stopped to execute another process.
  • Terminated: When the process is removed from the scheduler and its Process Control Block is deleted.
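
A rough way to picture these rules is a table of allowed transitions. The sketch below is illustrative and does not model any particular kernel:

```python
# Allowed state transitions, following the states described above.
TRANSITIONS = {
    "new":        {"ready"},
    "ready":      {"running", "suspended"},
    "running":    {"ready", "waiting", "terminated"},  # preempted, blocked, or finished
    "waiting":    {"ready", "suspended"},              # awaited event occurs
    "suspended":  {"ready"},                           # resumed by the scheduler
    "terminated": set(),
}

def move(state, new_state):
    """Return new_state if the transition is allowed, otherwise raise an error."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state

s = move("new", "ready")
s = move(s, "running")
print(s)                                               # -> running
```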

Functions of An Operating System

Operating systems are Event-Driven, which means that processes are created whenever the operating system encounters an event. This event can be anything, like a mouse click or an error. So, whenever an event occurs, a process changes its state.

As processes change state, some of them will be using memory while others may need data from the disk.

So, the operating system acts as a binding agent between all these tasks. There are several different functions that an operating system performs like :

  • Process Management
  • Memory Management 
  • File Management 
  • Device Management 
  • Security 
  • Error Handling 
  • Deadlock Prevention 
  • Interrupt Handling and many more…

Process Management

At any given time there can be thousands of processes in a system, and by nature every process tries to get executed as soon as possible. Left unmanaged, this causes chaos in the system: it overburdens the processor, lets processes hold on to resources, and can lead to critical problems like deadlocks and system failure.

Such failures can be severe enough to bring the whole system down. So, here comes the role of the operating system: to prevent such scenarios from happening.

For this, an operating system decides which process will be executed, for how long, and which process will be executed next; this is called Process Scheduling. Process Scheduling is implemented using different algorithms.

Each algorithm has a different goal. Multiple algorithms can be implemented in a single system, each working on different kinds of processes.

Scheduling algorithms operate through schedulers, which act like a pipeline in which processes get scheduled. There are three types of schedulers:

  • Long Term Scheduler: It interacts between secondary memory, where all the jobs are stored, and the main memory. It decides which process will be submitted for processing: it selects jobs from a pool of I/O-bound and CPU-bound jobs and loads them into the main memory for execution.
  • Medium Term Scheduler: It acts as a mediator between swap space and main memory. It swaps processes out of main memory, and a swapped-out process can later be reintroduced into the main memory and continued from where it left off.
  • Short Term Scheduler: It is the fastest of all the schedulers and decides which process will be executed next. It is also called CPU Scheduler. It selects only those processes that are ready to execute. 

Scheduling Algorithms 

Scheduling algorithms are the rules that decide which process will be scheduled after the current one and which process to execute when the CPU becomes idle.

There are two types of scheduling algorithms:

  • Non-preemptive Algorithms: In these algorithms, a process can only move from the running state to the waiting or terminated state. Once a process starts to execute, it keeps the CPU until it finishes its life cycle or blocks.
  • Preemptive Algorithms: In these algorithms, a process can also move from the running state to the ready state, or from the waiting state to the ready state. Once a process starts to execute, it may go through several context switches before it completes its life cycle.

The main motives of a scheduling algorithm are to:

  • Maximise CPU utilization
  • Increase Throughput
  • Reduce Turnaround Time
  • Reduce the Total Waiting Time
  • Reduce Response Time

Examples of Scheduling Algorithms:

  • FCFS – First Come First Serve Scheduling Algorithm:
    • It is a non-preemptive scheduling algorithm. 
    • The process that arrives first is executed first.
    • This algorithm works fine for processes with small burst times, but when a process with a large burst time arrives first, the processes with small burst times behind it have to wait, which increases the average waiting time. This is known as the Convoy Effect.
  • SJF – Shortest Job First Scheduling Algorithm:
    • It can be both non-preemptive and preemptive. 
    • The preemptive type of this Algorithm is called Shortest Remaining Time First. 
    • The jobs with small burst time are scheduled ahead of jobs with large burst time. 
    • The main advantage of this algorithm is that it ensures a minimum average waiting time. 
    • But in the preemptive version of this algorithm, a job with a large burst time may have to wait for a very long time if short jobs keep getting added.
  • Priority Scheduling Algorithm:
    • It can be both preemptive and non-preemptive. 
    • Each process has a priority assigned to it.
    • The process with the highest priority is executed first (in many systems, the highest priority is represented by the lowest priority number).
  • Round Robin Scheduling Algorithm:
    • It is the most commonly used preemptive scheduling algorithm.
    • This scheduling algorithm is specially designed for time-sharing operating systems. 
    • Each process gets equal time on CPU i.e. ‘q’ quantum. 
    • The ready queue acts as a circular queue: if a process is not finished after its quantum, it rejoins the end of the queue.
    • The main advantage of this algorithm is that it gives every process a fair share of the CPU and provides good response time.
    • But if the time quantum is very small, then most of the CPU time is consumed in context switching.
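
To make the preemptive behaviour concrete, here is a small Round Robin simulation in Python. The burst times and the quantum are made-up values, and all processes are assumed to arrive at time 0:

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Simulate Round Robin for processes that all arrive at time 0.
    Returns the completion time of each process."""
    remaining = dict(burst_times)            # pid -> remaining burst time
    ready_queue = deque(burst_times)         # circular ready queue of pids
    clock = 0
    completion = {}
    while ready_queue:
        pid = ready_queue.popleft()
        run = min(quantum, remaining[pid])   # run for at most one quantum
        clock += run
        remaining[pid] -= run
        if remaining[pid] == 0:
            completion[pid] = clock          # process finished
        else:
            ready_queue.append(pid)          # not finished: rejoin the tail of the queue
    return completion

bursts = {"P1": 10, "P2": 4, "P3": 6}        # hypothetical burst times
completion = round_robin(bursts, quantum=3)
print(completion)                            # {'P2': 13, 'P3': 16, 'P1': 20}
# With arrival time 0, turnaround time equals completion time:
print(sum(completion.values()) / len(completion))   # average turnaround time ≈ 16.33
```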

Memory Management 

Memory is the storehouse of an operating system. Main memory is where every process that is executed must reside. All data in memory is stored at an address: memory is organised as a large array of words, each of which has an address associated with it.

There are two types of addresses:

  • Physical Address: This is the real address or the actual memory address where the data is stored in the memory.
  • Logical address: This is a virtual address that is generated by the CPU. 
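
As a simple illustration of the mapping between the two, here is a sketch of how a logical address could be translated using a relocation (base) register and a limit register; the register values are made up:

```python
BASE = 14000          # relocation register: where the process starts in physical memory
LIMIT = 3000          # limit register: size of the process's logical address space

def to_physical(logical_address):
    """Translate a CPU-generated logical address into a physical address."""
    if not 0 <= logical_address < LIMIT:
        raise MemoryError("logical address outside the process's address space")
    return BASE + logical_address

print(to_physical(250))   # -> 14250
```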

Managing all these addresses, loading processes into the main memory and keeping track of memory allocation are functions of the operating system.

Before a process can execute, it has to be loaded into the main memory, and the memory it occupies has to be set aside for it. This process of reserving memory is called Memory Allocation. There are two types of memory allocation:

1. Contiguous Memory Allocation

The whole program is stored in one consecutive block of memory locations.

  • The main memory is usually divided into Low Memory and High Memory.
  • The Low Memory holds the resident operating system and the interrupt vector table. The High Memory holds User Processes.
  • Each Process is contained in a separate block of memory.
  • There are holes in the memory. A hole is an available block of memory, and these holes are scattered throughout the memory.
  • To allocate a process to a hole, different algorithms are used (a small sketch of each follows this list):
    • First-Fit: Allocates the first hole that is big enough to hold the process.
    • Best-fit: Allocate the smallest hole that is big enough; must search the entire list, unless ordered by size.
    • Worst-fit: Allocate the largest hole, must also search the entire list. 
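
A rough sketch of the three strategies, with holes represented as (start address, size) pairs; the hole list below is made up for illustration:

```python
def first_fit(holes, size):
    """Return the first hole that is big enough, or None."""
    for hole in holes:
        if hole[1] >= size:
            return hole
    return None

def best_fit(holes, size):
    """Return the smallest hole that is still big enough, or None."""
    candidates = [h for h in holes if h[1] >= size]
    return min(candidates, key=lambda h: h[1]) if candidates else None

def worst_fit(holes, size):
    """Return the largest hole, provided it is big enough, or None."""
    candidates = [h for h in holes if h[1] >= size]
    return max(candidates, key=lambda h: h[1]) if candidates else None

holes = [(0, 100), (300, 40), (500, 250), (900, 60)]   # (start, size), made-up values
print(first_fit(holes, 50))    # (0, 100)
print(best_fit(holes, 50))     # (900, 60)
print(worst_fit(holes, 50))    # (500, 250)
```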

2. Non-Contiguous Memory Allocation:

In Non-Contiguous Memory Allocation, the parts of a program are stored in non-contiguous memory locations. Non-Contiguous Memory Allocation is of two types:

  • Paging:
    • Paging divides logical memory into fixed-size blocks called Pages and physical memory into blocks of the same size called Frames. The page (and frame) size is a power of 2.
    • A program is divided into pages, and each page can be stored separately in any free frame of physical memory.
    • To access the data in pages and frames, the Operating System maintains page tables that translate logical addresses to physical addresses (a small translation sketch follows this list).
  • Segmentation:
    • It is a memory management scheme that supports the user view of memory.
    • A Program is a collection of segments.
    • A segment can be any logical unit of a program, like a stack, a function, a procedure, local variables, global variables or an array.
    • To implement segmentation, an Operating System uses a segment table, each entry of which consists of:
      • Base: It stores the starting physical address of where the segment resides in the memory.
      • Limit: It specifies the length of a segment. 
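
The sketch below shows how a page table could translate a logical address into a physical one. The page size and the page-table contents are assumptions made for illustration:

```python
PAGE_SIZE = 4096                     # 4 KiB pages, a power of 2 (assumed)

page_table = {0: 5, 1: 2, 2: 7}      # hypothetical mapping: page number -> frame number

def translate(logical_address):
    """Split a logical address into (page, offset) and map the page to its frame."""
    page_number = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    frame_number = page_table[page_number]   # a missing entry would mean a page fault
    return frame_number * PAGE_SIZE + offset

print(translate(8300))   # page 2, offset 108 -> frame 7 -> 7 * 4096 + 108 = 28780
```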

File Management 

File Management is one of the most important functions of the Operating System, and it is the one we can relate to the most, as we create, delete, rename and relocate files on a daily basis.

It’s the responsibility of the Operating System to keep track of things like File Attributes, File Operations, the Directory Structure and File Allocation.

  • File Attributes:
    • It’s the responsibility of an operating system to manage all the file attributes like:
      • Name
      • Type 
      • Location
      • Size 
      • Metadata like creation date, modification date, created by and modified by.
  • File Operations:
    • An operating system manages all the file operations like opening, renaming, closing, deleting and updating a file.
    • All these file operations are events that can either be triggered by a user or a system call. 
    • The Operating System manages all these calls and synchronises these operations.
  • Directory Structure:
    • The Operating System is responsible for managing the directory structure to hold the files in the memory.
    • Directory Structure is a collection of nodes containing information about all the files. 
    • The data and the directory structure reside on the same disk and hence the operating system has to manage the memory as well.
  • File Allocation:
    • File Allocation is defined as the way a file is stored on the disk.
    • File Allocation is of different types:
      • Contiguous Allocation: The file is stored in a contiguous set of blocks on the disk.
      • Linked Allocation: Each file is stored as a linked list of disk blocks, and these blocks can be scattered all across the disk.
      • Indexed Allocation: Each file is stored as blocks on the disk, and the addresses of all these blocks are kept in an index block. This index block is accessed whenever the file has to be accessed (a small sketch follows this list).
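
As a toy illustration of indexed allocation, the sketch below keeps a file’s data blocks and its index block in a dictionary standing in for the disk; all block numbers are made up:

```python
disk = {}                                    # block number -> contents (a stand-in for the disk)

def write_file(index_block, data_blocks, chunks):
    """Store each chunk in its data block and record the block addresses in the index block."""
    disk[index_block] = list(data_blocks)    # the index block holds the list of block addresses
    for block, chunk in zip(data_blocks, chunks):
        disk[block] = chunk

def read_file(index_block):
    """Follow the index block to gather the file's data blocks in order."""
    return "".join(disk[block] for block in disk[index_block])

write_file(index_block=19, data_blocks=[9, 16, 1], chunks=["Hel", "lo ", "OS"])
print(read_file(19))                         # -> Hello OS
```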

Interrupt Handling 

As discussed above, we now know that the Operating System is event-driven. These events can occur in two forms i.e. Interrupt or Exception. 

An interrupt is an event that is triggered by a user or a program to get the attention of the Operating System. The Interrupts can be of two types:

  • Hardware Interrupts: These are generated by external hardware devices, for example a mouse click or a keystroke.
  • Software Interrupts: These are generated either by a user program or by software, for example a system call or a program termination.

Interrupts are not generated by the Operating System but are handled by an Operating System. Each interrupt is identified by the operating system using an interrupt number.

The number is used as an index into a region of memory to find the address of the interrupt handler for that device. This region of memory is called the Interrupt Vector Table (or Interrupt Descriptor Table).

Whenever an interrupt is encountered by the operating system, the system checks in the interrupt vector table and calls the interrupt handler to handle that interrupt. 
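
A toy model of this lookup in Python; the interrupt numbers and handlers below are purely illustrative:

```python
def keyboard_handler():
    print("handling keystroke")

def timer_handler():
    print("handling timer tick")

# The interrupt vector table: interrupt number -> handler routine
interrupt_vector = {1: keyboard_handler, 32: timer_handler}

def on_interrupt(number):
    """Look up the handler for an interrupt number and invoke it."""
    handler = interrupt_vector.get(number)
    if handler is None:
        print(f"unhandled interrupt {number}")   # a real OS must deal with this case too
    else:
        handler()

on_interrupt(1)     # -> handling keystroke
on_interrupt(99)    # -> unhandled interrupt 99
```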

Interrupt handling is very important: an interrupt may represent a critical action requested by the user or the system, such as shutting down the system, and if the Operating System does not handle the interrupt, the system might crash or go into an infinite loop.

Deadlock Prevention 

A set of processes is Deadlocked when every process in the set is waiting for a resource that is currently allocated to another process in the set. 

Deadlock is a very critical situation for an Operating System: the stuck processes can bottleneck the processing power of the CPU and may cause the system to hang or crash.

Deadlock can happen only when all four of the following conditions hold at the same time:

  • Mutual Exclusion: At least one resource must be held in a non-sharable mode; If any other process requests this resource, then that process must wait for the resource to be released.
  • Hold and Wait: A process must be simultaneously holding at least one resource and waiting for at least one resource that is currently being held by some other process.
  • No preemption: Once a process is holding a resource ( i.e. once its request has been granted ), then that resource cannot be taken away from that process until the process voluntarily releases it.
  • Circular Wait: A set of processes { P0, P1, P2, . . ., PN } must exist such that every P[ i ] is waiting for P[ ( i + 1 ) % ( N + 1 ) ]. ( Note that this condition implies the hold-and-wait condition, but it is easier to deal with the conditions if the four are considered separately. )

It is the responsibility of the operating system to detect these conditions and resolve them to decrease the chances of deadlock.

Deadlock Handling:

  • Deadlock Prevention: The operating system structurally rules out at least one of the four conditions above, for example by requiring a process to request all of its resources up front, which removes hold and wait.
  • Deadlock Avoidance: The general idea of deadlock avoidance is to keep the system in a safe state by checking every resource request before granting it; the classic example is Banker’s Algorithm (a sketch of its safety check follows this list).
  • Deadlock Ignorance: If deadlocks occur only once in a very long while, and the system is not a critical one, simply ignore the deadlock and restart the system when it happens.
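
Since Banker’s Algorithm is mentioned above, here is a hedged sketch of its safety check. The resource snapshot (the available, allocation and need matrices) is made up for illustration:

```python
def is_safe(available, allocation, need):
    """Banker's algorithm safety check: True if some order lets every process finish."""
    work = list(available)
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i in range(len(allocation)):
            # pick an unfinished process whose remaining need fits in the available resources
            if not finished[i] and all(n <= w for n, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]   # it runs, then releases
                finished[i] = True
                progress = True
    return all(finished)

# Hypothetical snapshot: 3 processes, 2 resource types
available  = [2, 1]
allocation = [[1, 0], [1, 1], [0, 1]]
need       = [[2, 1], [1, 0], [3, 2]]
print(is_safe(available, allocation, need))   # -> True (safe sequence P0, P1, P2 exists)
```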

Frequently Asked Questions

What are the five functions of the operating system?

An operating system performs several functions, such as:

1. Security
2. File management
3. Device Management
4. Resource Allocation
5. Error Handling

What are the five key functions and responsibilities of an OS?

Key functions and responsibilities of an operating system are:

1. To provide users with a smooth user interface
2. Protect all the information and resources
3. Deadlock Prevention
4. Manage Memory and process allocation
5. Interrupt Handling

What is the need for an operating system?

An operating system is very important for the smooth functioning of a system: it binds the user and the hardware together and acts as the interface through which a user can interact with the underlying hardware.

What is an operating system explain different types of operating systems?

An operating system is a program that controls other programs and their execution. It is the interface through which a user can interact with the underlying hardware.

Some different types of operating systems are:
1. Batch Operating Systems
2. Real-Time Operating Systems
3. Network Operating Systems

What are the two types of operating systems?

Two common types of operating systems are:

1. Distributed Operating Systems
2. Time Sharing Operating Systems

What are operating systems and examples?

An operating system is system software that controls the functioning of other programs and helps the user interact with the hardware.
Examples of operating systems are Microsoft Windows, Linux Ubuntu, Kali Linux and Apple’s macOS.

By Sarthak Dhawan
