
Memory Management in Operating System

PRASHANT SINGH
Last Updated: Aug 13, 2022
Difficulty Level: EASY

Introduction

With the introduction of modern operating systems, CPU performance attained new heights: the CPU handles several processes at a time, increasing the overall performance of the system.

Memory is an essential component of a computer used to store data. Because the amount of main memory available in a computer system is fairly restricted, its management is crucial. Memory management is among the most critical functions of an operating system: it facilitates the movement of processes between main memory and disk during execution.


This article covers various memory management techniques in detail, along with GATE questions on memory management in operating systems and their solutions and explanations.

Let’s start with learning about the address first.

Address Binding 

Before it can run, a program must be stored somewhere on the system. Every program is stored on secondary storage (disk) as a binary executable file. When the program has to be executed, it is loaded into main memory. The process accesses instructions and data in main memory as it executes. When the process finishes, its memory space is marked as free.

Operating systems allow processes to reside in any part of the physical memory. Physical memory refers to the actual RAM of the system. Along the way, a memory address can be represented in a variety of ways.

  • Addresses in a source program are generally symbolic (such as the variable count). The compiler typically binds these symbolic addresses to relocatable addresses.

For example, 14 bytes from the beginning of this module.

  • The linkage editor or loader will, in turn, bind the relocatable addresses to absolute addresses.

For example, 74014.

Address Binding is the process of mapping one address space to another. In simpler words, mapping the symbol count to the address 74014 in the above example is address binding.

Instructions and data can be bound to memory addresses at any of the following stages (a small illustration follows the list):

  1. Compile Time
  2. Load Time
  3. Execution Time or Run Time
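
To make the run-time case concrete, here is a minimal sketch. It assumes a typical Linux system with address-space layout randomization (ASLR) enabled, in which case the same symbol can land at a different virtual address on every run, showing that final binding happens at load/run time rather than at compile time:

```c
#include <stdio.h>

int global_counter = 0;   /* a symbolic name the compiler binds to a relocatable address */

int main(void)
{
    /* The actual address is fixed only when the loader maps the binary;
       with ASLR it typically changes from one run to the next. */
    printf("address of global_counter: %p\n", (void *)&global_counter);
    return 0;
}
```

Running the program twice and comparing the printed addresses makes the load-time/run-time binding visible.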

Now, let’s dig deeper into logical and physical addresses and their differences.

Logical versus Physical Address Space 

  • An address generated by the CPU is known as a logical address. It has no physical existence in the memory of the computer system.
  • An address that physically exists somewhere in memory is known as a physical address. This is the address loaded into the memory-address register of the memory.

The compile-time and load-time address binding methods generate identical logical and physical addresses. However, the run-time address binding scheme results in different logical and physical addresses. In this scenario, the logical address is commonly referred to as a virtual address.

The set of all the logical addresses is known as logical address space, and similarly, the set of all the physical addresses is known as physical address space.

At runtime, we need to find the physical address corresponding to a given virtual address. This process is known as mapping. Mapping is done with the help of a hardware device known as the Memory Management Unit (MMU). The method used by the MMU is known as dynamic relocation.
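
As a rough sketch of how dynamic relocation works (an illustration of the idea, not any particular MMU's interface), the MMU checks each logical address against a limit register and then adds a relocation (base) register to it:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical per-process registers; a real MMU implements this in hardware. */
typedef struct {
    uint32_t base;   /* relocation register: where the process starts in physical memory */
    uint32_t limit;  /* size of the process's logical address space */
} mmu_t;

/* Translate a logical address into a physical one, as dynamic relocation does. */
static bool translate(const mmu_t *mmu, uint32_t logical, uint32_t *physical)
{
    if (logical >= mmu->limit)   /* out of range: real hardware would trap */
        return false;
    *physical = mmu->base + logical;
    return true;
}

int main(void)
{
    mmu_t mmu = { .base = 14000, .limit = 4096 };
    uint32_t phys;
    if (translate(&mmu, 346, &phys))
        printf("logical 346 -> physical %u\n", phys);   /* prints 14346 */
    return 0;
}
```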

Now, it’s time to know what is loading and linking in memory management.

Loading and Linking

For a process to execute, the complete program and all of its data must be in physical memory, so the size of a process is restricted by the amount of physical memory available. We can get better memory utilization with dynamic loading, which avoids loading a routine until it is called.

In dynamic loading, a routine is not loaded until it is called. All routines are kept on disk in a relocatable load format. The main program is loaded into memory and executed. When a routine wants to call another routine, it first checks whether the other routine has been loaded. If not, the relocatable linking loader loads the requested routine into memory and updates the program's address tables. Control is then passed to the newly loaded routine.

Dynamic loading has the advantage of never loading a routine that isn't used. This method is handy when large amounts of code are needed to handle infrequently occurring cases, such as error routines. Operating systems provide library routines to implement dynamic loading.
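
On POSIX systems, the dl family of routines (dlopen, dlsym, dlclose) is one such facility. Below is a minimal sketch of loading a routine only at the point of use; the library name "libm.so.6" is a Linux-specific assumption, and the program is typically built with cc prog.c -ldl:

```c
#include <dlfcn.h>   /* POSIX dynamic loading API */
#include <stdio.h>

int main(void)
{
    /* Load the module only now, at the point of use. */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Look up the routine's address in the newly loaded module. */
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (cosine)
        printf("cos(0.0) = %f\n", cosine(0.0));   /* prints 1.000000 */

    dlclose(handle);   /* release the module when done */
    return 0;
}
```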

 

Static vs Dynamic Loading

The choice between static and dynamic loading must be made during the development phase. The primary difference between static and dynamic loading is as follows:

Static Loading:
  • The entire program is compiled and linked at compile time, leaving no external module dependencies.
  • The program and data are loaded into memory at load time, so execution can begin immediately.

Dynamic Loading:
  • The compiler compiles the program and leaves only references to the modules we want to include dynamically; the remaining work is completed at execution time.
  • The dynamically loaded routines of a library are stored on disk in relocatable form and loaded into memory only when the program requires them.

 

Static vs Dynamic Linking

The fundamental difference between static and dynamic linking is as follows:

Static Linking:
  • Every module a program requires is combined into a single executable file, avoiding runtime dependencies.
  • Linking occurs at compile time.

Dynamic Linking:
  • The actual module or library need not be combined with the program; only a reference to the dynamic module is recorded at compile time and resolved later.
  • Linking occurs at run time.

 

 

Swapping

Swapping is a mechanism for temporarily moving a process from main memory to secondary storage, making that memory available to other processes. The system swaps the process back from secondary storage into main memory at a later time.

For example, consider a multiprogramming environment with round-robin CPU scheduling. When a process's quantum expires, the memory manager begins swapping that process out and swapping another process into the freed memory space.

Note: Though swapping degrades performance, it enables the concurrent execution of several large processes, which is why it is also known as a memory compaction technique.

Memory Allocation

The main memory must accommodate both the operating system and the various user processes. Therefore, we must allocate main memory as efficiently as possible. Memory is divided into two partitions: one in which the operating system resides and another for user processes. All processes are allocated memory in the user space.

The two main memory allocation methods are fixed and variable size partitions. Let’s look at them one by one.

Fixed Partitioning

One of the simplest ways to allocate memory is to divide it into several partitions of fixed size, each of which can contain exactly one process. When a partition is free, a process is selected from the input queue and loaded into it. When the process finishes, the partition becomes available for another process.

Variable Partitioning

In the variable partition scheme, the operating system maintains a table, which indicates which parts of the memory are available and which parts are occupied. Initially, all memory is available to user processes.

When processes enter the system, they are placed in an input queue. The operating system considers the memory requirements of each process and the amount of available memory space to determine which processes can be allocated memory. When a process is allocated space, it is loaded into memory.
 

The algorithm of variable size partitioning is given below:

  • Available memory contains a set of holes (free blocks of memory) of various sizes scattered throughout memory.
  • When a process arrives and needs memory, the system searches the set for a hole large enough for it.
  • If the hole is too big, it is split into two parts: one part is allocated to the arriving process, and the other is returned to the set of holes.
  • When the process finishes, it releases its block of memory, which is placed back into the set of holes.
  • If the new hole is adjacent to other holes, the adjacent holes merge to form a larger hole.
  • At this point, the system may check whether any processes are waiting for memory and whether the newly freed and merged memory can satisfy any of those waiting processes.
     

There are various strategies for selecting a free hole from the set of available holes for a process.

The three most common strategies are explained below; a code sketch comparing them follows the list:

  1. First fit: Allocate the first hole that is big enough. Searching can start either at the beginning of the set of holes or where the previous first-fit search ended. We can stop searching as soon as we find a large enough free hole.
  2. Best fit: Allocate the smallest hole that is big enough. We must search the entire list unless it is sorted by size. This strategy produces the smallest leftover hole.
  3. Worst fit: Allocate the largest hole. Again, we must search the entire list unless it is sorted by size. This strategy produces the largest leftover hole, which may be more useful than the small leftover hole from a best-fit approach.
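
Here is a minimal sketch of the three strategies over a toy list of hole sizes (the hole sizes and request size below are invented for illustration):

```c
#include <stddef.h>
#include <stdio.h>

typedef enum { FIRST_FIT, BEST_FIT, WORST_FIT } strategy_t;

/* Return the index of the chosen hole, or -1 if no hole is large enough. */
static int choose_hole(const size_t *holes, int n, size_t request, strategy_t s)
{
    int chosen = -1;
    for (int i = 0; i < n; i++) {
        if (holes[i] < request)
            continue;                 /* too small: not a candidate */
        if (s == FIRST_FIT)
            return i;                 /* first large-enough hole wins */
        if (chosen == -1 ||
            (s == BEST_FIT  && holes[i] < holes[chosen]) ||  /* smallest fit */
            (s == WORST_FIT && holes[i] > holes[chosen]))    /* largest fit */
            chosen = i;
    }
    return chosen;
}

int main(void)
{
    size_t holes[] = { 100, 500, 200, 300, 600 };   /* free-block sizes */
    size_t request = 212;
    printf("first fit -> hole %d\n", choose_hole(holes, 5, request, FIRST_FIT)); /* 1 (500) */
    printf("best fit  -> hole %d\n", choose_hole(holes, 5, request, BEST_FIT));  /* 3 (300) */
    printf("worst fit -> hole %d\n", choose_hole(holes, 5, request, WORST_FIT)); /* 4 (600) */
    return 0;
}
```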
     

Points to consider: 

All three strategies have their advantages and disadvantages; their strengths on different criteria are listed below:

  • First fit is generally faster than best fit and worst fit.
  • Memory utilization with best fit is better than with first fit, as it searches for the smallest free partition available.
  • Worst fit can be a good choice for variable partitioning because it leaves behind large holes, and there is a good chance that these holes will meet the needs of arriving processes.

Fragmentation

The above memory allocation strategies suffer from fragmentation. As processes are loaded into and removed from memory, the free memory space is broken into smaller and smaller pieces. Fragmentation, in general, means the total memory space is sufficient to satisfy a request, but the available space is not contiguous.

For example, consider a multiple-partition allocation scheme with an 18,464-byte block of free memory, and suppose a process requests 18,462 bytes. If we allocate exactly the requested amount, we are left with a 2-byte hole. The overhead of keeping track of this hole is substantially greater than the hole itself.

Memory fragmentation is of two types: internal or external. Let’s see them one by one.

Internal Fragmentation

Internal fragmentation occurs when the memory block allotted to a process is larger than the space the process needs: the leftover memory inside the block goes unused because no other process can use it. It can be minimised by allocating the smallest partition that is still large enough for the process.

External Fragmentation

External fragmentation occurs when the total available memory is sufficient to satisfy a request but is not contiguous, so it cannot be used for the allocation. It can be reduced by compaction: shuffling the memory contents to place all free memory together in one large block.

 

Now, let’s discuss another essential memory management technique widely used by modern operating systems, i.e. paging.

Paging

Paging is a memory management technique that allows a process’s physical address space to be noncontiguous. External fragmentation and the necessity for compaction are avoided by paging. It also solves the significant difficulty of fitting memory chunks of various sizes onto the backing store, which plagued most memory management techniques before the introduction of paging.

Most operating systems use paging in one form or another. Traditionally, hardware has been responsible for paging support. Recent designs, particularly on 64-bit microprocessors, have accomplished paging by tightly integrating the hardware and operating system.

Basic methods to implement paging

The most basic way to implement paging is to break physical memory into fixed-sized blocks called frames and to break logical memory into blocks of the same size called pages.

When a process is ready to run, its pages are loaded from their source into any available memory frames. The backing store is organised into fixed-size blocks that are the same size as the memory frames.
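
Here is a minimal sketch of the resulting address translation (the page-table contents below are made up for illustration): the logical address is split into a page number, which indexes the page table to find a frame, and a page offset, which is carried over unchanged.

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE   4096u   /* 4 KB pages, so the offset needs 12 bits */
#define OFFSET_BITS 12

int main(void)
{
    /* Toy page table: page_table[p] holds the frame that page p lives in. */
    uint32_t page_table[] = { 5, 6, 1, 2 };

    uint32_t logical  = 2 * PAGE_SIZE + 100;         /* 100 bytes into page 2 */
    uint32_t page     = logical >> OFFSET_BITS;      /* page number = 2       */
    uint32_t offset   = logical & (PAGE_SIZE - 1);   /* offset = 100          */
    uint32_t physical = (page_table[page] << OFFSET_BITS) | offset;

    printf("logical %u -> page %u, offset %u -> physical %u\n",
           logical, page, offset, physical);         /* frame 1 -> 4196 */
    return 0;
}
```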

Advantages and Disadvantages of Paging

The following is a list of the advantages and disadvantages of paging.

  • Paging eliminates external fragmentation, but internal fragmentation remains.
  • Paging is a simple memory management approach that is believed to be effective.
  • Swapping is simple due to the identical size of the pages and frames.
  • Page tables demand additional memory space, so they may not be suitable for systems with limited RAM (see the calculation below).
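
To see why the last point matters, here is a back-of-the-envelope calculation assuming a 32-bit virtual address space, 4 KB pages, and 4-byte page table entries (all assumed figures, not from any particular system):

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t va_bits    = 32;   /* 32-bit virtual addresses     */
    uint64_t page_bits  = 12;   /* 4 KB pages -> 12 offset bits */
    uint64_t entry_size = 4;    /* bytes per page table entry   */

    uint64_t entries = 1ull << (va_bits - page_bits);   /* 2^20 pages */
    uint64_t bytes   = entries * entry_size;            /* table size */

    printf("%llu entries, %llu MiB of page table per process\n",
           (unsigned long long)entries,
           (unsigned long long)(bytes >> 20));   /* 1048576 entries, 4 MiB */
    return 0;
}
```

That is 4 MiB of page table per process before any data is stored, which is one reason multilevel page tables (see GATE problem 7 below) are used in practice.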

Segmentation

The separation of the user’s view of memory from the actual physical memory is a crucial feature of memory management. Segmentation is a memory management strategy that supports the user view of memory.

Now, you might be thinking, what is the user view?

A user view means what a user thinks of a program. A user sees a program as a main method, variables, data structures, library functions, and so on. A user does not think about where these items are placed in memory.

In segmentation, a job is divided into several smaller segments, each containing parts that perform related functions. Together, the segments make up the program's logical address space. Each segment has a different length, determined by its function in the program, and is assigned a segment number for ease of implementation.

As a result, a logical address consists of a two-tuple: <segment-number, offset>.
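
Here is a minimal sketch of how such a <segment-number, offset> pair could be translated; the segment table values below are invented for illustration:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t base;    /* where the segment starts in physical memory */
    uint32_t limit;   /* length of the segment */
} segment_t;

/* Translate <segment-number, offset>; the offset must fall within the limit. */
static bool translate(const segment_t *table, int nsegs,
                      int seg, uint32_t offset, uint32_t *physical)
{
    if (seg < 0 || seg >= nsegs || offset >= table[seg].limit)
        return false;                  /* real hardware would raise a trap */
    *physical = table[seg].base + offset;
    return true;
}

int main(void)
{
    segment_t table[] = {              /* invented example segments */
        { .base = 1400, .limit = 1000 },   /* segment 0 */
        { .base = 6300, .limit =  400 },   /* segment 1 */
    };
    uint32_t phys;
    if (translate(table, 2, 1, 53, &phys))
        printf("<1, 53> -> physical %u\n", phys);   /* prints 6353 */
    return 0;
}
```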

Some of the disadvantages of segmentation are mentioned below:

  1. Possibility of external fragmentation.
  2. Allocating contiguous memory to variable-sized partitions is difficult.
  3. Segmentation is a costly memory management technique.

Gate Problems 

1. Which of the following is suitable for virtual memory?

  1. Large main memory
  2. Large secondary memory
  3. The illusion of large main memory
  4. None of the above

Ans: c. The illusion of large main memory

Explanation: Virtual Memory is a storage allocation scheme in which secondary memory can be addressed as though it were part of the main memory. It is the illusion of large main memory.

2. Increasing a computer's RAM typically improves performance because:

  1. Larger RAMs are faster
  2. Virtual memory increases
  3. Fewer segmentation faults occur
  4. Fewer page faults occur

Ans: d. Fewer page faults occur

Explanation: More RAM means more mapped virtual pages in physical memory, which means fewer page faults. A page fault degrades performance because the page must be loaded from a secondary device.

3. Belady's anomaly affects which of the following page replacement algorithms?

  1. First In First Out (FIFO)
  2. Least Recently Used (LRU)
  3. Optimal Page Replacement
  4. Both a and b

Ans: a. First In First Out (FIFO)

Explanation: The phenomenon known as Belady's anomaly occurs when the number of page frames increases, resulting in a rise in the number of page faults for a particular memory access pattern. This phenomenon is commonly experienced in the First in first out (FIFO) page replacement algorithm.

4. When does a page fault occur?

  1. When a requested page is not in the memory
  2. Whenever a page is corrupted
  3. When a requested page is in memory
  4. When an exception is thrown

Ans: a. When a requested page is not in the memory

Explanation: When a requested page is mapped in virtual address space but not present in memory, a page fault occurs.

5. What is the purpose of the disk's swap space?

  1. Used for storing the super-block
  2. Used for saving process data
  3. Used for storing device drivers
  4. Used for saving temporary HTML pages

Ans: b. Used for saving process data

Explanation: Swap space is a space on a hard disk that is a substitute for physical memory. It is used as virtual memory which contains process memory images.

6. The important content(s) in each page table entry is/are

  1. Virtual page number
  2. Access right information
  3. Page frame number
  4. Both a and c

Ans: c. Page frame number

Explanation: Page frame number must be included in a page table entry. A virtual page number is commonly used as an index in a page table to obtain the corresponding page frame number.

7. In order to translate a virtual address to a physical address, a multilevel page table is preferred over a single-level page table because:

  1. The translation lookaside buffer requires it.
  2. It aids in the reduction of page faults in page replacement algorithms.
  3. It aids in reducing the size of the page table required to implement a process's virtual address space.
  4. It shortens the memory access time (the time it takes to read or write a memory location).

Ans: c. It aids in reducing the size of the page table required to implement a process's virtual address space.

Explanation: The page table size may become too large to fit in a single contiguous space. As a result, page tables are usually divided into levels.

8. A computer system can use both 32-bit virtual addresses and 32-bit physical addresses. Because the virtual address space is the same size as the physical address space, the operating system designers decide to eliminate virtual memory completely. Which of the following statements is correct?

  1. It is no longer possible to implement multi-user support efficiently.
  2. Memory management hardware support is no longer required.
  3. CPU scheduling can now be improved to be more efficient.
  4. The processor cache organization can now be made more efficient.

Ans: b. Memory management hardware support is no longer required.

Explanation: Memory Management Unit special hardware support is required to support virtual memory. Because operating system designers have decided to abandon virtual memory entirely, hardware support for memory management is no longer required.

9. A CPU generates 32-bit virtual addresses (VA), and the page size is 4 KB. The processor has a TLB (translation lookaside buffer) that can hold a total of 128 page table entries and is 4-way set associative. What is the minimum size of the TLB tag?

  1. 13 bits
  2. 12 bits
  3. 15 bits
  4. 20 bits

Ans: c. 15 bits

Explanation: Page size = 4 KB = 2^12 bytes, so the page offset needs 12 bits.

Bits required for the virtual page number = 32 - 12 = 20.

If a set contains n cache lines, the placement is known as n-way set associative. Because the TLB is 4-way set associative and holds a total of 128 (2^7) page table entries, the number of sets is 2^7 / 2^2 = 2^5, so 5 bits are needed to address a set.

As a result, the TLB tag needs 20 - 5 = 15 bits.
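
The same arithmetic as a tiny sanity-check program (the constants come straight from the question):

```c
#include <stdio.h>

/* Integer log2 for powers of two. */
static int log2u(unsigned x) { int b = 0; while (x >>= 1) b++; return b; }

int main(void)
{
    int va_bits     = 32;      /* virtual address width  */
    int page_size   = 4096;    /* 4 KB page              */
    int tlb_entries = 128;     /* total TLB entries      */
    int ways        = 4;       /* 4-way set associative  */

    int offset_bits = log2u(page_size);            /* 12 */
    int vpn_bits    = va_bits - offset_bits;       /* 20 */
    int set_bits    = log2u(tlb_entries / ways);   /* log2(32) = 5 */

    printf("TLB tag = %d bits\n", vpn_bits - set_bits);   /* 15 */
    return 0;
}
```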

10. Thrashing occurs

  1. When a page fault occurs
  2. When processes on the system are in the waiting state
  3. When processes on the system are in a running state
  4. When processes on the system frequently access pages, not memory

Ans: d. When processes on the system frequently access pages, not memory

Explanation: Thrashing occurs when the processes on the system require more memory than the system has. When processes do not have enough frames, the page-fault rate is extremely high. This results in low CPU utilization, with the operating system spending most of its time swapping pages to and from disk. This situation is called thrashing.

Now, let’s move on to the Frequently asked questions.

Frequently Asked Questions

What is memory management in OS?

Memory management refers to dividing memory across multiple tasks. It is the operating system's approach to managing the movement of data between main memory and disk during process execution.

What are memory management techniques?

Some of the essential memory management techniques are: Fixed and Variable Partitioning, Paging and Segmentation.

What is thrashing?

Thrashing is a phenomenon that occurs when virtual memory is employed. It happens when the computer rapidly exchanges pages with the hard drive, to the exclusion of most application-level work. As main memory fills up, more and more pages must be swapped into and out of virtual memory.

How many types of RAM are there?

There are two types of integrated RAM chips: SRAM (Static RAM) and DRAM (Dynamic RAM).
SRAM: SRAM is an abbreviation for Static Random Access Memory. This type of RAM stores data using the state of a six-transistor memory cell. Static RAM is mainly used as cache memory for CPU processors.
DRAM: DRAM is an abbreviation for Dynamic Random Access Memory. It is a type of RAM in which each bit of data is stored in a separate capacitor within a specific integrated circuit.

What purpose does paging serve in an operating system?

Paging is a method of gaining faster access to data. When a program requires a page, it is available in the main memory because the operating system copies a set number of pages from your storage device to the main memory. Paging allows a process's physical address space to be noncontiguous.

What is a page fault?

When the page referenced by the CPU is not found in the main memory, the situation is called Page Fault.

Conclusion

This article contained a detailed description of various ways of organizing memory and various memory-management techniques, including paging and segmentation, along with GATE questions on memory management in operating systems and their solutions and explanations.

For a much deeper understanding of operating system concepts, we recommend exploring the operating system guided path provided by Coding Ninjas.

We hope that this blog has helped you enhance your knowledge regarding memory management in operating systems, and if you would like to learn more, check out our articles in the code studio library. Do upvote our blog to help other ninjas grow. Happy Coding!
