In this blog, we will learn about demand paging in an OS (operating system). Every process in a virtual memory system has many pages, and swapping in all of a process's pages at once may be inefficient, because the process may need only a few of those pages to run. Consider the following scenario: you have a 500 MB program that needs only 100 MB worth of pages at a time; in this situation, there is no need to swap in all the pages at once.
What is Demand Paging in OS?
Demand paging is similar to a paging system with swapping, except that processes reside primarily in secondary memory (usually the hard disk) and their pages are brought into main memory only when required. As a result, demand paging is a procedure that addresses the problem above by loading pages on demand. This is also called lazy swapping: a page is never swapped into memory unless it is needed.
A pager is a kind of swapper that deals with the individual pages of a process.
Demand paging is a method in which a page is brought into main memory only when the CPU requests it. At first, only the pages directly required by the process are loaded. Pages that are never accessed are thus never loaded into physical memory.
Why Demand Paging?
A single process can have a lot of pages, but putting all of them into memory at once only wastes memory. The concept of demand paging was introduced to solve this problem: a page is brought into main memory only when the CPU requests it.
Let's understand this with the help of an example. Suppose a user writes a program to calculate the overall cost of the fencing required for a field. The program's code is split across different pages, like:
Page 1: The overall cost of the fence
Page 2: The labour cost
Page 3: The surface area of the field.
If the user only wants to calculate the overall cost of the fence, there is no need to load all three pages, and this is where demand paging comes into play. Only Page 1 will be loaded, not all the pages.
How does Demand Paging in OS work?
When the CPU requests access to a page, the page table is used to locate that page in main memory. If the page is found in main memory, it is used directly; if it is not, a page fault occurs.
A page fault occurs when the CPU wants to access a page, but that page is not present in main memory. Then, how is this handled?
The missing page is swapped in: swapping in refers to moving a page from secondary memory (the hard drive) into main memory (RAM). If the page is already in main memory, it is simply retrieved from there; the remaining pages stay in secondary memory. Each page-table entry also carries a valid/invalid bit, which indicates whether the page is present in main memory. A valid bit means the page is legal and present in memory, while an invalid bit means the page is not valid or not present in memory.
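The valid/invalid-bit mechanism can be sketched in a few lines of Python. This is a minimal illustration, not a real OS API: the `PageTable` class, the page names, and the fault counter are all assumptions made for the example.

```python
# Minimal sketch of demand paging with valid/invalid bits.
# All names here (PageTable, access, page1...) are illustrative.

class PageTable:
    def __init__(self, pages):
        # Initially every page lives on "disk": valid bit = False.
        self.valid = {p: False for p in pages}
        self.page_faults = 0

    def access(self, page):
        if self.valid[page]:           # valid bit set: page is in RAM
            return f"hit: {page}"
        # Invalid bit: page fault -> swap the page in from secondary memory.
        self.page_faults += 1
        self.valid[page] = True        # mark the page present after swap-in
        return f"page fault: {page} swapped in"

pt = PageTable(["page1", "page2", "page3"])
print(pt.access("page1"))  # first access: page fault, page loaded on demand
print(pt.access("page1"))  # second access: hit, no disk I/O
print(pt.page_faults)      # 1
```

Note that a page is loaded only on its first access; pages that are never touched (here, `page2` and `page3`) never enter "memory" at all, which is the core idea of demand paging.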
Common Terms in Demand Paging Operating System
Following are some common terms in demand paging operating systems:
- Page Fault
There will be a miss if the referenced page is not present in the main memory; this is known as a page miss or page fault.
The CPU must look up the missing page in secondary memory. When the number of page faults is significant, the system's effective access time increases dramatically.
Swapping a process out comprises either removing all of the process's pages from memory or marking them so that the page replacement algorithm can remove them.
When a process is suspended, it is unable to run, but it can be swapped out for a while: the system swaps the process from primary memory to secondary memory and back over a period of time.
Thrashing describes a condition in which a process spends more time swapping pages in and out than doing useful work. If the number of page faults equals the number of page references, or the fault rate is so high that the CPU is mostly reading pages from secondary memory, the effective access time approaches the time needed to read one word from secondary memory. This situation is known as thrashing.
If the page fault rate is PF, the time spent retrieving a page from secondary memory and resuming execution is S (service time), and the memory access time is ma, the effective access time may be calculated as follows:
EAT = PF x S + (1 - PF) x (ma)
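A quick worked example of the formula above, written as a small Python function. The specific numbers (a 10% fault rate, an 8 ms page service time, a 100 ns memory access time) are illustrative assumptions, not values from the article.

```python
# Effective access time: EAT = PF * S + (1 - PF) * ma.
# All times are in nanoseconds; the sample values are assumptions.

def effective_access_time(pf, service_ns, ma_ns):
    """pf: page fault rate (0..1), service_ns: page fault service time,
    ma_ns: main-memory access time."""
    return pf * service_ns + (1 - pf) * ma_ns

pf = 0.10              # 10% of references cause a page fault
service = 8_000_000    # 8 ms to fetch a page from disk
ma = 100               # 100 ns main-memory access

print(effective_access_time(pf, service, ma))  # 800090.0 ns
```

Even a 10% fault rate makes the effective access time roughly 8000 times slower than a plain memory access, which is why keeping the page fault rate low matters so much.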
Common Algorithms used for Demand Paging in OS
The target for all algorithms is to reduce the number of page faults. Here are some common algorithms used for Demand Paging in OS:
- First In First Out (FIFO): The page that was brought into memory first is replaced first. The operating system maintains a queue of the pages currently in memory: when a new page is loaded, it is added to the tail of the queue (enqueue), and when a page must be replaced, the page at the head of the queue (the oldest one) is removed.
- Optimal Page Replacement Algorithm: As the name suggests, the optimal algorithm replaces the page that will not be needed for the longest time in the future, compared to the other pages in the frames. Belady's anomaly does not occur here, as it is a stack-based algorithm. Since future references are not known in advance, it is used mainly as a benchmark for other algorithms.
- Least Recently Used (LRU) Algorithm: Whenever a page fault occurs, the least recently used page is replaced with the new page. Hence the page that has not been used for the longest time gets evicted. LRU generally produces fewer page faults than the other practical algorithms and, being stack-based, lends itself to complete analysis.
- Page Buffering Algorithm: This algorithm is used to get a process to start quickly and keep a pool of free frames.
- Least Frequently Used (LFU) Algorithm: In this algorithm, the least frequently used page is replaced. The page with the fewest accesses in a given period of time is removed; among pages with the same frequency, the one that arrived first is replaced first.
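To make the comparison concrete, here is a short sketch that counts page faults for FIFO and LRU on the same reference string with 3 frames. The reference string and frame count are illustrative assumptions chosen for the example, not data from the article.

```python
# Count page faults under FIFO and LRU replacement (3 frames assumed).
from collections import OrderedDict, deque

def fifo_faults(refs, frames):
    memory, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:          # evict the oldest page
                memory.discard(queue.popleft())
            memory.add(page)
            queue.append(page)
    return faults

def lru_faults(refs, frames):
    memory, faults = OrderedDict(), 0          # insertion order = recency
    for page in refs:
        if page in memory:
            memory.move_to_end(page)           # mark as most recently used
        else:
            faults += 1
            if len(memory) == frames:          # evict least recently used
                memory.popitem(last=False)
            memory[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(fifo_faults(refs, 3))  # 10
print(lru_faults(refs, 3))   # 9
```

On this particular reference string LRU causes one fewer fault than FIFO, because FIFO can evict a page that is still being used heavily (it only looks at arrival order, not recency).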
Advantages of Demand Paging in OS
Demand paging has the following benefits:
- Memory can be put to better use.
- If we use demand paging, then we can have a large virtual memory.
- By using demand paging, we can run programs that are larger than physical memory.
- In demand paging, there is no requirement for compaction.
- In demand paging, the sharing of pages is easy.
- Partition management is simple in demand paging because all partitions (pages) have a fixed, equal size and loading can be non-contiguous.
Disadvantages of Demand Paging in OS
Demand paging has the following drawbacks:
- Internal fragmentation is a possibility with demand paging.
- It takes longer to access memory (page table lookup).
- The page tables themselves add to the memory requirements.
- More elaborate structures, such as guarded page tables or inverted page tables, may be needed to keep the page-table overhead manageable.
The Difference Between Swapping and Demand Paging in OS
| Demand Paging | Swapping |
| --- | --- |
| A memory management technique to store and retrieve data in the RAM. | A technique for temporarily removing inactive programs from the computer system's main memory. |
| Allows the memory of the process address space to be non-contiguous. | Allows multiple programs in the OS to run simultaneously. |
| Only a part of the process gets transferred to the disk. | The entire process is transferred to the disk. |
| Allows many processes to exist in main memory and is performed on active processes. | Reduces the number of processes in memory and is performed on inactive processes. |
| More adaptable, as relocation of the process is allowed. | Less adaptable, as the whole process moves back and forth. |
| Appropriate for heavy workloads. | Appropriate for light to medium workloads. |
| Used in non-contiguous memory management. | Can be performed without any memory management. |
| Helps to implement virtual memory. | Helps the CPU access processes faster. |
Frequently Asked Questions
What are demand paging and pre-paging in OS?
Demand paging loads a page from disk into RAM when it's accessed. Pre-paging loads multiple adjacent pages into RAM, assuming they'll be needed soon, to reduce disk I/O delays.
What is demand paging vs swapping?
Demand paging is a memory management technique that loads data from disk into RAM when it's accessed. Swapping involves moving entire processes in and out of RAM to free up memory.
What is swapping in operating system?
Swapping in an operating system involves moving parts of processes or entire processes from RAM to disk to free up memory and manage system resources efficiently.
In this article, we have extensively discussed demand paging in operating systems. We started with a brief introduction, explained concepts like page fault, swapping, and thrashing, and covered the advantages and disadvantages of demand paging.
Do upvote our blogs if you find them helpful and engaging!