An operating system is the link between the user and the computer: it enables users to interact smoothly with the machine and instruct it to perform different operations. Windows, macOS, and Android are some popular operating systems you have almost certainly come across.
Operating systems have made our lives easier by giving us a medium through which we can easily tell our machines what we need them to do.
In this operating system tutorial, we will go over the basic concepts of operating systems and work through various examples along with code.
Let’s dive right in!
Types of Operating Systems
There are different kinds of operating systems. Some widely used ones are mentioned below:
- Batch OS: This operating system stores jobs in memory in the form of a ‘batch’. One process is taken up for execution while the others wait; another process is taken from the batch only after the first one completes its execution.
- Multiprogramming OS: As the name suggests, this operating system does not wait for one process to complete the way a batch operating system does. It assigns another process to the CPU as soon as the currently running process starts waiting for I/O or some other event.
- Multitasking OS: This operating system is the best for cases where the user needs to interact with the operating system during its run time. This operating system uses a CPU scheduling paradigm to switch between jobs and make sure ample resources are allotted to every process.
What are Threads?
Just as the cell is the basic unit of life, a thread is the basic unit of CPU utilization. Processes use threads to perform their tasks, and a process that uses multiple threads at once is said to be a multithreaded process. Threads execute independently, yet they are also interdependent: they share the resources of the same process with each other to ensure its smooth execution.
Types of threads
- User Level Threads: These are implemented in user space by the user, which makes them easier to create and manage. Their context switching time is comparatively low. One drawback of user-level threads is that if one thread performs a blocking operation, the entire process comes to a halt and is blocked.
- Kernel Level Threads: These threads are implemented not by the user but by the kernel itself. Their context switching time is comparatively high. The advantage of kernel-level threads is that if one thread performs a blocking operation, the entire process is not blocked and can continue.
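To see threads sharing the resources of one process, here is a minimal Python sketch (the process names and counts are illustrative, not from the tutorial). Two threads update the same variable, which is why a lock is needed:

```python
import threading

counter = 0
lock = threading.Lock()  # threads share the process's memory, so updates must be synchronized

def increment(times):
    global counter
    for _ in range(times):
        with lock:  # mutual exclusion around the shared counter
            counter += 1

# Two threads cooperating within one process
threads = [threading.Thread(target=increment, args=(1000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 2000
```

Without the lock, the two threads could interleave their read-modify-write steps and lose updates, which illustrates why shared process resources make threads interdependent.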
What are Processes?
A process is a program in execution. Processes can be initiated by internal system activity as well as by user interactions with the operating system. Every process has its own control block, termed the process control block (PCB).
What is Process Scheduling?
Process scheduling, as the name suggests, is the mechanism by which the CPU is utilized as effectively as possible: each process gets executed in time, and multiple processes can make progress concurrently. Process scheduling makes the operating system work faster, as one process does not have to wait for the previous process to complete its execution.
Some terms associated with process scheduling:
- Arrival Time: This is the time when a process marks its arrival in the ready queue. A ready queue is a queue where all processes are stored for execution.
- Completion Time: This marks the time at which a process undergoes completion of its execution.
- Burst Time: This is the time required by a certain process to get executed.
- Turn Around Time: This is the total time taken by a process, from its arrival to the point where its execution is completed.
Turn Around Time = Completion Time – Arrival Time
- Waiting Time: This is the time span that indicates the time a process had to wait before it was sent for execution.
Waiting Time = Turn Around Time – Burst Time
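As a quick check of the two formulas above, consider a hypothetical process (the numbers are made up for illustration) that arrives at time 2, needs a burst of 4, and completes at time 10:

```python
arrival_time, burst_time, completion_time = 2, 4, 10

turnaround_time = completion_time - arrival_time  # total time spent in the system
waiting_time = turnaround_time - burst_time       # time spent waiting, not executing

print(turnaround_time, waiting_time)  # 8 4
```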
Process Scheduling Algorithms
1. First Come First Serve
You will be provided with different processes, their durations, arrival order, and arrival times. To implement first come first serve, you execute the processes in the order of their arrival. Take as input each process along with its burst time.
Finally, you need to figure out the waiting time, turnaround time, average waiting time, and average turnaround time using the formulae given above. For the complete C++ code, check out this repository by pvsmounish.
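The linked repository is in C++; as a language-neutral sketch, here is a short Python version of the same idea, using hypothetical sample processes (name, arrival time, burst time):

```python
def fcfs(processes):
    """First come first serve. processes: list of (name, arrival, burst).
    Returns {name: (waiting_time, turnaround_time)}."""
    result = {}
    time = 0
    # Serve strictly in order of arrival
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        time = max(time, arrival)        # CPU may sit idle until the process arrives
        completion = time + burst
        turnaround = completion - arrival
        waiting = turnaround - burst
        result[name] = (waiting, turnaround)
        time = completion
    return result

print(fcfs([("P1", 0, 4), ("P2", 1, 3), ("P3", 2, 1)]))
# {'P1': (0, 4), 'P2': (3, 6), 'P3': (5, 6)}
```

From the per-process pairs you can then average to get the average waiting time and average turnaround time.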
2. Shortest Job First Preemptive
As the name suggests, the shortest job first algorithm puts that process first which has the shortest execution time thereby reducing the average waiting time. In the preemptive form of the SJF process scheduling, the processes are put inside the ready queue immediately as they arrive.
The process to be picked first is decided by the burst time. When a process arrives with a shorter burst time, the process currently being executed is preempted and pushed back into the ready queue, and the CPU is given to the process with the shorter burst time.
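A minimal Python sketch of preemptive SJF (also called shortest remaining time first), simulated one time unit at a time; the sample processes are hypothetical:

```python
def srtf(processes):
    """Shortest Remaining Time First (preemptive SJF).
    processes: list of (name, arrival_time, burst_time)."""
    remaining = {name: burst for name, arrival, burst in processes}
    arrivals = {name: arrival for name, arrival, burst in processes}
    completion = {}
    time = 0
    while remaining:
        # Among the processes that have arrived, pick the one with the
        # least remaining time; preemption happens naturally each tick.
        ready = [n for n in remaining if arrivals[n] <= time]
        if not ready:
            time += 1          # CPU idles until the next arrival
            continue
        current = min(ready, key=lambda n: remaining[n])
        remaining[current] -= 1
        time += 1
        if remaining[current] == 0:
            del remaining[current]
            completion[current] = time
    # (waiting_time, turnaround_time) per process
    return {name: (completion[name] - arrival - burst,
                   completion[name] - arrival)
            for name, arrival, burst in processes}

print(srtf([("P1", 0, 5), ("P2", 1, 2), ("P3", 2, 4)]))
# {'P1': (2, 7), 'P2': (0, 2), 'P3': (5, 9)}
```

Notice how P2, arriving at time 1 with the shortest burst, preempts P1 and finishes with zero waiting time.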
3. Shortest Job First Non-Preemptive
The non-preemptive version of the shortest job first scheduling algorithm differs in that once a process is put into execution, it runs to completion before a new process is taken up.
For the complete C++ code, check out this repository by pvsmounish.
4. Priority Preemptive
After a process arrives, the priority preemptive scheduling algorithm takes its priority into consideration and if a process arrives at a later time with higher priority then the current process being executed is halted and the one with the higher priority is executed first. For the complete C++ code, check out this repository by pvsmounish.
5. Priority Non-Preemptive
The non-preemptive version of the priority scheduling algorithm differs in that once a process is put into execution, no other process, even one with a higher priority, is taken up until the current process has finished its execution. For the complete C++ code, check out this repository by pvsmounish.
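A short Python sketch of non-preemptive priority scheduling, with hypothetical sample data (here a lower number means a higher priority, which is a convention choice, not something fixed by the algorithm):

```python
def priority_np(processes):
    """Non-preemptive priority scheduling; lower number = higher priority.
    processes: list of (name, arrival_time, burst_time, priority)."""
    pending = sorted(processes, key=lambda p: p[1])  # order by arrival
    time = 0
    order = []
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:
            time = pending[0][1]   # CPU idles; jump to the next arrival
            continue
        # The highest-priority ready process runs to completion
        p = min(ready, key=lambda q: q[3])
        pending.remove(p)
        time += p[2]
        order.append(p[0])
    return order

print(priority_np([("P1", 0, 4, 2), ("P2", 1, 3, 1), ("P3", 2, 2, 3)]))
# ['P1', 'P2', 'P3']
```

P1 runs first only because it is the sole arrived process at time 0; once it finishes, priority decides between P2 and P3.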
6. Round Robin
The Round Robin Process Scheduling algorithm executes processes in a cyclic manner where each process is executed for a certain time span called the time quantum. For the complete C++ code, check out this repository by pvsmounish.
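A minimal Python sketch of Round Robin; for simplicity it assumes all processes arrive at time 0, and the sample processes and quantum are made up:

```python
from collections import deque

def round_robin(processes, quantum):
    """Round Robin with a fixed time quantum.
    processes: list of (name, burst_time); all assumed to arrive at time 0.
    Returns {name: completion_time}."""
    queue = deque(processes)
    completion = {}
    time = 0
    while queue:
        name, remaining = queue.popleft()
        slice_ = min(quantum, remaining)   # run for at most one quantum
        time += slice_
        if remaining > slice_:
            queue.append((name, remaining - slice_))  # back of the queue
        else:
            completion[name] = time
    return completion

print(round_robin([("P1", 5), ("P2", 3), ("P3", 1)], quantum=2))
# {'P3': 5, 'P2': 8, 'P1': 9}
```

The time quantum controls the trade-off: a small quantum gives more responsive interleaving but more context switches.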
What are Deadlocks?
A deadlock is a situation where a process gets blocked because some other process is using a resource that the current process requires. There are four necessary conditions that must hold together for a deadlock to occur:
- Mutual Exclusion: There exists a resource that can be used by only one process at a time, which prevents any other process from using that resource simultaneously.
- Hold and Wait: A process is already holding one resource while waiting for another. The held resource stays blocked and cannot be used by any other process.
- No Preemption: A resource is given exclusively to one process and cannot be taken away; it can be assigned to another process only after the process holding it releases it.
- Circular Wait: A set of processes each wait for a resource held by the next process in the set, forming a cycle.
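Circular wait can be spotted by looking for a cycle in a wait-for graph. Here is a small Python sketch; the graph shape (each process waiting on at most one other) and the process names are hypothetical simplifications:

```python
def has_deadlock(wait_for):
    """Detect circular wait in a wait-for graph.
    wait_for: dict mapping a process to the process it is waiting on."""
    for start in wait_for:
        seen = set()
        node = start
        while node in wait_for:
            if node in seen:
                return True        # revisited a process: a cycle exists
            seen.add(node)
            node = wait_for[node]
    return False

# P1 waits on P2, P2 waits on P3, P3 waits on P1 -> circular wait
print(has_deadlock({"P1": "P2", "P2": "P3", "P3": "P1"}))  # True
print(has_deadlock({"P1": "P2", "P2": "P3"}))              # False
```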
There are three ways to handle and do away with deadlocks, given below:
- Deadlock prevention or avoidance: There are certain operating systems that incorporate several methods to prevent deadlocks altogether and ensure that they do not occur within the system.
- Deadlock detection and recovery: If a deadlock has occurred and the system detects it, we can use preemption to handle the said deadlock to recover the system.
- Ignore the problem: In some operating systems, such as Windows and UNIX, deadlocks happen rarely. If one does occur, you can simply restart the system.
What is Paging?
Paging is a memory management technique used by many operating systems in which memory is divided into fixed-size blocks. Each process is broken down into pages and main memory is broken down into frames of the same size. One page is stored in one frame, and the pages of a process need not be stored contiguously in memory.
Example: The main memory has a size of 16 KB and frame size of 1 KB. This implies that the main memory will be divided into 16 frames (memory size/frame size). The frame size will be the same as the page size (1 KB).
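Continuing the 1 KB page example, a logical address splits into a page number and an offset, and a page table maps pages to frames. The page table contents below are hypothetical:

```python
PAGE_SIZE = 1024  # 1 KB pages, as in the example above

# Hypothetical page table: page number -> frame number
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical_address):
    page = logical_address // PAGE_SIZE    # which page the address falls in
    offset = logical_address % PAGE_SIZE   # position inside the page
    frame = page_table[page]               # look up the frame holding that page
    return frame * PAGE_SIZE + offset      # physical address

print(translate(1030))  # page 1, offset 6 -> frame 2 -> 2054
```

Because the offset is simply carried over, pages can live in any frame without the process noticing.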
Page Replacement Algorithms
1. First in First Out (FIFO)
This page replacement algorithm is the simplest algorithm. The pages are placed in a queue where the first page in the queue is the one that came first (oldest). For page replacement, the first page in the queue is chosen for removal.
To check out the program in C refer to this GitHub repository by pateldevang.
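The linked program is in C; as a compact sketch of the same idea, here is a Python version that counts page faults under FIFO. The reference string and frame count are illustrative:

```python
from collections import deque

def fifo_faults(reference_string, capacity):
    """Count page faults under FIFO replacement for a given frame capacity."""
    frames = deque()
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == capacity:
                frames.popleft()   # evict the oldest page
            frames.append(page)
    return faults

print(fifo_faults([1, 3, 0, 3, 5, 6, 3], capacity=3))  # 6
```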
2. Least Recently Used (LRU)
For this algorithm, the page which is least recently used will be replaced during page replacement. To check out the program in C refer to this GitHub repository by pateldevang.
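A short Python sketch of LRU page replacement, again with an illustrative reference string; the frames list keeps the least recently used page at the front:

```python
def lru_faults(reference_string, capacity):
    """Count page faults under LRU replacement."""
    frames = []   # ordered: least recently used first
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.remove(page)      # refresh: move to most-recently-used end
        else:
            faults += 1
            if len(frames) == capacity:
                frames.pop(0)        # evict the least recently used page
        frames.append(page)
    return faults

print(lru_faults([7, 0, 1, 2, 0, 3, 0, 4], capacity=3))  # 6
```

Unlike FIFO, a hit changes the eviction order, which is what makes LRU track recency of use.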
3. Optimal Page Replacement
For this page replacement algorithm, the page that will not be used for the longest duration in the future is picked for replacement. This is an ideal algorithm: since it requires knowing future references, it is generally not usable in real-life scenarios and instead serves as a benchmark for other algorithms.
To check out the program in C refer to this GitHub repository by pateldevang.
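Because the reference string is fully known in this offline sketch, optimal replacement can be simulated in Python (the data is illustrative):

```python
def optimal_faults(reference_string, capacity):
    """Count page faults under optimal replacement: evict the page whose
    next use is farthest in the future (or that is never used again)."""
    frames = []
    faults = 0
    for i, page in enumerate(reference_string):
        if page in frames:
            continue                       # hit: nothing to do
        faults += 1
        if len(frames) < capacity:
            frames.append(page)            # free frame available
            continue
        future = reference_string[i + 1:]
        # Evict the frame not needed for the longest time (never-used pages
        # get a distance beyond the end of the future references).
        victim = max(frames,
                     key=lambda p: future.index(p) if p in future else len(future) + 1)
        frames[frames.index(victim)] = page
    return faults

print(optimal_faults([7, 0, 1, 2, 0, 3, 0, 4], capacity=3))  # 6
```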
Operating System Projects
Given below are some projects you can build by applying the operating system concepts covered in this tutorial:
- Client-Server Application
- File System
- Web Server
- Shell Interpreter
- Online Compiler
Frequently Asked Questions
What is memory management in an operating system?
Memory management in an operating system is a technique to handle the available memory resources and allot them to processes as per their use and requirements.
What does OS mean in technology?
OS stands for operating system.
Is operating systems important for placements?
For placements where you need to build software products, operating systems definitely play a crucial role. The subject deepens your understanding of how things work under the hood and of how to make your application use your user's resources efficiently.
What is an operating system tutorial?
An operating system tutorial is a guide that helps you understand how an operating system works.
What are the four types of operating systems?
The four types of operating systems are batch operating systems, multiprogramming operating systems, multitasking operating systems, and time-sharing operating systems.
What are the 10 types of operating systems?
The 10 types of operating systems are batch, multitasking/time-sharing, multiprocessing, real-time, distributed, network, mobile, stand-alone, embedded, and single-user and multi-user operating systems.
Operating systems is a subject that is extremely crucial for every engineer to understand, as it explains how CPU and other system resources are utilised properly. In addition to this, operating systems are a fundamental part of every technical interview.
While you can look up ‘OS basic interview questions’ on the internet, you need to follow a structured approach to learn operating systems and OS fundamentals.
A guided approach through this operating system tutorial will help you ace the interview in no time!
Here is the newly launched Operating Systems Guided Path by Coding Ninjas, which will take you one step closer to your dream job.
By Pooja Gera