Memory Management Techniques In Operating System

Introduction

With the arrival of modern operating systems, CPU performance reached new heights: the CPU handles several processes at one time, increasing the overall throughput of the system.

Memory acts as the central point of all operations inside a modern computer system. It is made up of a large array of words or bytes, each with its own unique address.

To execute a process, the CPU fetches instructions from memory. Depending on those instructions, it may also load data from, and store data to, specific memory addresses.

The main memory and the registers built into the processor are the only storage the CPU can access directly. Any instruction being executed, and any data it uses, must therefore reside in one of these direct-access stores. If the data is not in memory, it must be moved there before the CPU can begin execution.

The operating system is responsible for managing all of the above so that memory is used as efficiently as possible. In this article, we’ll discuss the various memory management techniques in detail.

Let’s start with learning about the address first.

Address Binding 

Before a program can run, it must be stored somewhere on the system. Every program resides on secondary storage (disk) as a binary executable file. When the program is to be executed, it is loaded into main memory, which the process then uses to access instructions and data. When the process finishes, its memory space is marked free.

Operating systems allow processes to reside in any part of the physical memory, i.e. the actual RAM of the system. Along the way, a program’s addresses are represented in several different forms.

For example:

  • In the source program, addresses are usually symbolic, e.g. a variable named count.

  • A compiler binds these symbolic addresses to relocatable addresses, e.g. “14 bytes from the beginning of this module”.

  • The linkage editor or loader will, in turn, bind the relocatable addresses to absolute addresses, e.g. 74014.

Address binding is the process of mapping one address space to another. In simpler words, mapping the symbol count to the absolute address 74014 in the example above is address binding.

Binding of instructions and data to memory addresses can be done at any of the following stages:

  1. Compile time: if the process’s location in memory is known in advance, the compiler can generate absolute code directly.
  2. Load time: if the location is not known at compile time, the compiler generates relocatable code, and binding is delayed until the program is loaded.
  3. Execution time (run time): if the process can be moved around in memory during execution, binding is delayed until run time; this requires special hardware support.

Now, let’s dig deeper into logical and physical addresses and their differences.

Logical versus Physical Address Space 

  • An address generated by the CPU is known as a logical address. It has no physical existence in any memory of the computer system.
  • An address that physically exists somewhere in the system is known as the physical address. This address is loaded into the memory address register of the memory.

The compile-time and load-time address binding methods generate identical logical and physical addresses. However, the run-time address binding scheme results in different logical and physical addresses. In this scenario, the logical address is commonly referred to as a virtual address.

The set of all the logical addresses is known as logical address space, and similarly, the set of all the physical addresses is known as physical address space.

At run time, the physical address corresponding to a given virtual address must be found. This translation is known as mapping, and it is done by a hardware device called the Memory Management Unit (MMU). The method used by the MMU is known as dynamic relocation.

One variation of dynamic relocation is implemented using a base register. Let’s see how it works.

Dynamic relocation using Base Register

In this scheme, the base register is called a relocation register. The value in the relocation register is added to every logical address a process generates to obtain the corresponding physical address in memory.

For example, if the relocation value is 14000, location 346 is mapped to location 14346. 

The value 14346 is the physical address, where the instructions and data are stored in the memory.
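
To make the arithmetic concrete, here is a minimal sketch of what the MMU does with a relocation register, using the base value 14000 from the example above. The limit register and the exact trap behaviour are illustrative assumptions, not part of the original example.

```c
#include <stdio.h>
#include <stdlib.h>

/* Illustrative values: relocation (base) register = 14000, and an
   assumed limit register of 16000 for the usual bounds check. */
#define RELOCATION_REG 14000u
#define LIMIT_REG      16000u

unsigned translate(unsigned logical) {
    if (logical >= LIMIT_REG) {            /* out-of-range access: trap */
        fprintf(stderr, "trap: logical address %u out of range\n", logical);
        exit(EXIT_FAILURE);
    }
    return logical + RELOCATION_REG;       /* what the MMU does in hardware */
}

int main(void) {
    printf("logical 346 -> physical %u\n", translate(346)); /* 14346 */
    return 0;
}
```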

The diagram below shows the working of the MMU and the complete process of converting a logical address into a physical address.

Now, let’s see what loading and linking mean in memory management.

Loading and Linking

The complete program and all the data must be stored in physical memory for a process to execute. As a result, the size of a process is restricted by the amount of physical memory available. We can use dynamic loading to get better memory utilisation by avoiding loading a routine until it is called.

In dynamic loading, a routine is not loaded until it is called. All routines are kept on disk in a relocatable load format. The main program is loaded into memory and executed. When one routine wants to call another, the calling routine first checks whether the other routine has been loaded. If it has not, the relocatable linking loader loads the requested routine into memory and updates the program’s address tables. Control is then passed to the newly loaded routine.

Dynamic loading has the advantage that a routine is never loaded unless it is actually used. This method is handy when large amounts of code are needed only to handle infrequently occurring cases, such as error routines. Operating systems provide library routines to help programs implement dynamic loading.
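
As a concrete illustration, POSIX systems expose dynamic loading to programs through dlopen and dlsym. The sketch below loads the standard math library only at the moment it is needed and looks up cos at run time; the library name "libm.so.6" is a common Linux value and is an assumption that may differ on other systems.

```c
#include <dlfcn.h>   /* POSIX dynamic loading: dlopen, dlsym, dlclose */
#include <stdio.h>

int main(void) {
    /* Load the math library only when we actually need it. */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Look up the routine by name and call it through a pointer. */
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (cosine)
        printf("cos(0.0) = %f\n", cosine(0.0));

    dlclose(handle);
    return 0;   /* compile with: cc demo.c -ldl */
}
```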

Static vs Dynamic Loading

The choice between static and dynamic loading must be made during the development phase. The primary difference between static and dynamic loading is as follows:

Static loading:

  • The entire program is compiled and linked at compile time, leaving no external module dependencies.
  • The program and its data are loaded into memory when the program is loaded, so execution can begin immediately.

Dynamic loading:

  • The compiler compiles the program but only embeds references to the modules that are to be included dynamically; the remaining work is completed at execution time.
  • The dynamically loaded library routines are stored on disk in relocatable form and loaded into memory only when the program requires them.

Static vs Dynamic Linking

The fundamental difference between static and dynamic linking is as follows:

Static linking:

  • Every module the program requires is combined into a single executable file, avoiding run-time dependencies.
  • Linking occurs at compile time.

Dynamic linking:

  • The actual module or library is not linked into the program; only a reference to the dynamic module is embedded at compile time, and the module is linked in at run time.
  • Linking occurs at run time.

Swapping

Swapping is a mechanism for temporarily moving a process out of main memory to secondary storage, making that memory available to other processes. Later, the system swaps the process back from secondary storage into main memory.

For example, consider a multiprogramming environment with round-robin CPU scheduling. When a process’s quantum expires, the memory manager can swap that process out and swap another process into the freed memory space.

Note: Though swapping degrades performance because of the transfer overhead, it allows several large processes to execute concurrently by sharing the same physical memory over time.

The diagram below shows that the process P1 is swapped out from the main memory into the disk space. Similarly, process P2 is swapped in from secondary storage, i.e. disk, to the main memory for execution.

Memory Allocation

The main memory must accommodate both the operating system and the various user processes, so we must allocate it as efficiently as possible. Memory is therefore divided into two partitions: one in which the operating system resides and another for user processes. All processes are allocated memory in the user space.

The two main memory allocation methods are fixed and variable size partitions. Let’s look at them one by one.

Fixed Partitioning

One of the simplest ways to allocate memory is to divide it into several partitions of fixed size, where each partition may contain exactly one process. When a partition is free, a process is selected from the input queue and loaded into it. When the process terminates, the partition becomes available for another process.

Variable Partitioning

In the variable partition scheme, the operating system maintains a table, which indicates which parts of the memory are available and which parts are occupied. Initially, all memory is available to user processes.

When processes enter the system, they are placed in an input queue. The operating system then considers the memory requirements of each process. It then checks the size of the available memory space to determine which processes can be allocated memory. When a process is allocated space, it is loaded into memory.

The variable-size partitioning scheme works as follows:

  • Available memory consists of a set of holes (free blocks of memory) of various sizes scattered throughout memory. 
  • When a process arrives and needs memory, the system searches for a hole large enough for the process. 
  • If the chosen hole is too big, it is split into two parts: one part is assigned to the arriving process, and the other is returned to the set of holes. 
  • When a process finishes, it releases its block of memory, which is placed back in the set of holes. 
  • If the new hole is adjacent to other holes, the adjacent holes are merged to form one larger hole (see the coalescing sketch after this list). 
  • At this point, the system may check whether any processes are waiting for memory and whether the newly freed and recombined memory can satisfy any of them.
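
Here is a minimal sketch, with assumed toy values, of how adjacent holes can be merged. The free list is kept sorted by start address, so two holes are adjacent exactly when one ends where the next begins.

```c
#include <stdio.h>

#define MAXHOLES 8

/* A hole is a [start, start+size) range of free memory. The list is
   kept sorted by start address. The values below are illustrative. */
struct hole { int start, size; };

/* Coalesce: merge every run of adjacent holes into one larger hole. */
int coalesce(struct hole *h, int n) {
    int out = 0;
    for (int i = 1; i < n; i++) {
        if (h[out].start + h[out].size == h[i].start)
            h[out].size += h[i].size;        /* adjacent: grow the hole */
        else
            h[++out] = h[i];                 /* gap: keep as a new hole */
    }
    return out + 1;                          /* new hole count */
}

int main(void) {
    struct hole holes[MAXHOLES] = {{0, 100}, {100, 50}, {300, 200}, {500, 100}};
    int n = coalesce(holes, 4);
    for (int i = 0; i < n; i++)
        printf("hole at %d, size %d\n", holes[i].start, holes[i].size);
    /* -> hole at 0, size 150; hole at 300, size 300 */
    return 0;
}
```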

There are several strategies for selecting a free hole from the set of available holes. The three most common strategies are explained below, followed by a short sketch that implements all three:

  1. First fit: Allocate the first hole that is big enough. Searching can start either at the beginning of the set of holes or where the previous first-fit search ended, and it stops as soon as a large enough free hole is found.
  2. Best fit: Allocate the smallest hole that is big enough. We must search the entire list, unless it is ordered by size. This strategy produces the smallest leftover hole.
  3. Worst fit: Allocate the largest hole. Again, we must search the entire list unless it is sorted by size. This strategy produces the largest leftover hole, which may be more useful than the small leftover hole from a best-fit approach.
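
The sketch below implements all three strategies over an assumed set of hole sizes (100, 500, 200, 300 and 600 KB) and a 212 KB request, so the different choices can be compared directly.

```c
#include <stdio.h>

#define NHOLES 5

/* Assumed free-hole sizes, in KB, for the illustration. */
int holes[NHOLES] = {100, 500, 200, 300, 600};

/* Each function returns the index of the chosen hole, or -1 if none fits. */
int first_fit(int request) {
    for (int i = 0; i < NHOLES; i++)
        if (holes[i] >= request) return i;   /* stop at the first fit */
    return -1;
}

int best_fit(int request) {
    int best = -1;
    for (int i = 0; i < NHOLES; i++)         /* scan the whole list */
        if (holes[i] >= request && (best == -1 || holes[i] < holes[best]))
            best = i;                        /* smallest hole that fits */
    return best;
}

int worst_fit(int request) {
    int worst = -1;
    for (int i = 0; i < NHOLES; i++)         /* scan the whole list */
        if (holes[i] >= request && (worst == -1 || holes[i] > holes[worst]))
            worst = i;                       /* largest hole that fits */
    return worst;
}

int main(void) {
    int request = 212;  /* a process asking for 212 KB */
    int f = first_fit(request), b = best_fit(request), w = worst_fit(request);
    printf("first fit -> hole %d (%d KB)\n", f, holes[f]); /* hole 1, 500 KB */
    printf("best fit  -> hole %d (%d KB)\n", b, holes[b]); /* hole 3, 300 KB */
    printf("worst fit -> hole %d (%d KB)\n", w, holes[w]); /* hole 4, 600 KB */
    return 0;
}
```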

Points to consider: 

All three strategies have their advantages and disadvantages. Advantages on different criteria are mentioned below:

  • The first fit is generally faster than the best fit and the worst fit. 
  • Memory utilization in the best fit is much better than the first fit as it searches for the smallest free partition available.
  • The worst fit technique can be a good choice for variable partitioning because it leaves behind large holes, and there is a good chance that such a hole will satisfy the needs of an arriving process.

For a better understanding of the strategies mentioned above, refer to the diagram below. 

Blue shaded blocks are already used. Grey shaded blocks are free to allocate.

Some problems may arise while allocating memory; fragmentation of memory is one such problem. Let us understand the fragmentation problem in detail.

Fragmentation

The memory allocation strategies above suffer from fragmentation. As processes are loaded into and removed from memory, the free memory space gets broken into smaller and smaller pieces. Fragmentation, in general, means that the total free memory is sufficient to satisfy a request, but the available space is not contiguous.

The diagram below shows the fragmentation in memory. 

For example, consider a multi-partition allocation scheme with an 18,464-byte block of free memory, and suppose a process requests 18,462 bytes. If we allocate exactly the requested bytes, we are left with a 2-byte hole, and the cost of keeping track of this hole is far greater than the hole itself.

Memory fragmentation is of two types: internal or external. Let’s see them one by one.

Internal Fragmentation

When the memory block allotted to a process is larger than the space it needs, the leftover memory inside that block goes unused, because no other process can use it. This is known as internal fragmentation. It can be minimised by allocating the smallest partition that is still large enough for the process.

External Fragmentation

When the total available memory is sufficient to satisfy a process’s request but is not contiguous, it cannot be used for the allocation. This type of fragmentation is known as external fragmentation. It can be reduced by compaction: shuffling the memory contents so that all free memory is placed together in one large block.
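
Compaction itself is easy to picture: slide every block that is in use toward one end of memory so all the free space coalesces into a single hole. Below is a toy sketch over a 16-cell “memory” (an assumption for illustration; real compaction must also update the relocated processes’ base registers).

```c
#include <stdio.h>
#include <string.h>

/* A toy memory of cells; '.' marks a free cell. Compaction slides
   every used cell to the front so all free space forms one hole. */
static void compact(char *mem, size_t n) {
    size_t write = 0;
    for (size_t read = 0; read < n; read++)
        if (mem[read] != '.')
            mem[write++] = mem[read];   /* move used cells to the front */
    memset(mem + write, '.', n - write); /* one big free hole at the end */
}

int main(void) {
    char mem[] = "AA..BBB.C...DD..";
    printf("before: %s\n", mem);
    compact(mem, strlen(mem));
    printf("after : %s\n", mem);        /* AABBBCDD........ */
    return 0;
}
```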

Now, let’s discuss another essential memory management technique widely used by modern operating systems, i.e. paging.

Paging

Paging is a memory management technique that allows a process’s physical address space to be noncontiguous. External fragmentation and the necessity for compaction are avoided by paging. It also solves the significant difficulty of fitting memory chunks of various sizes onto the backing store, which plagued most memory management techniques before the introduction of paging.

Most operating systems use paging in one form or another. Traditionally, hardware has been responsible for paging support. Recent designs, particularly on 64-bit microprocessors, have accomplished paging by tightly integrating the hardware and operating system.

Basic methods to implement paging

The most basic way to implement paging is to break physical memory into fixed-size blocks called frames and logical memory into blocks of the same size called pages.

When a process is ready to run, its pages are loaded from their source into any available memory frames. The backing store is organised into fixed-size blocks that are the same size as the memory frames.

The CPU divides each address into two parts: a page number p and a page offset d. The page number is used as an index into the page table, which stores the base address of each page in physical memory. Combining this base address with the page offset gives the physical address that is sent to the memory unit. The figure below depicts the paging model of memory.

We have no external fragmentation when we utilise a paging scheme: any free frame can be allocated to a process that requires it. Internal fragmentation, on the other hand, is a possibility. 

The size of a process, expressed in pages, is checked when it arrives in the system to be executed. One frame is required for each page of the process. As a result, if the process calls for n pages, at least n frames must be available in memory. If there are n frames available, they will be assigned to the arriving process.
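
Here is a minimal sketch of the translation just described, assuming 256-byte pages and a small hard-coded page table (the frame numbers are made up for illustration).

```c
#include <stdio.h>
#include <stdint.h>

/* Assumed toy parameters: 256-byte pages, a 4-entry page table.
   page_table[p] holds the frame number where page p is loaded. */
#define PAGE_SIZE 256u
static const uint32_t page_table[4] = {5, 6, 1, 2};

uint32_t translate(uint32_t logical) {
    uint32_t p = logical / PAGE_SIZE;   /* page number: index into table */
    uint32_t d = logical % PAGE_SIZE;   /* page offset within the page  */
    return page_table[p] * PAGE_SIZE + d;
}

int main(void) {
    /* Logical address 300 = page 1, offset 44 -> frame 6, offset 44. */
    printf("logical 300 -> physical %u\n", (unsigned)translate(300)); /* 1580 */
    return 0;
}
```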

Advantages and Disadvantages of Paging

The following is a list of the advantages and disadvantages of paging.

  • Paging eliminates external fragmentation, but internal fragmentation remains.
  • Paging is a simple memory management approach that is generally considered efficient.
  • Swapping is simple because pages and frames are of identical size.
  • Page tables demand additional memory space, so paging may not suit systems with very limited RAM; the sketch after this list quantifies the cost.
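
The last point is easy to quantify. Assuming a 32-bit virtual address space, 4 KB pages, and 4-byte page-table entries (illustrative values, not a claim about any particular system), a single process’s page table already needs 4 MB:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Assumed example parameters: 32-bit virtual addresses, 4 KB pages,
       4-byte page-table entries. */
    uint64_t address_space = 1ULL << 32;   /* 4 GB  */
    uint64_t page_size     = 4096;         /* 4 KB  */
    uint64_t entry_size    = 4;            /* bytes */

    uint64_t entries = address_space / page_size;  /* 2^20 = 1,048,576 */
    printf("page-table entries: %llu\n", (unsigned long long)entries);
    printf("page-table size: %llu MB\n",
           (unsigned long long)(entries * entry_size >> 20)); /* 4 MB */
    return 0;
}
```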

Segmentation

The separation of the user’s view of memory from the actual physical memory is a crucial feature of memory management. Segmentation is a memory management strategy that supports the user view of memory.

Now, you might be thinking, what is the user view?

A user view is how a user thinks of a program: as a main method, variables, data structures, library functions, and so on. A user does not think about where these objects live in memory.

In segmentation, a job is divided into several smaller segments, each containing parts that perform related functions. Each segment is a different logical unit of the program’s address space, and each has a different length determined by the segment’s role in the program. Segments are given segment numbers for ease of implementation.

As a result, a logical address consists of a two-tuple: <segment-number, offset>
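
Here is a minimal sketch of the segment-table lookup, using illustrative base/limit values (assumptions for the example). An offset beyond the segment’s limit traps, which is how segmentation enforces protection.

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* A segment-table entry: base = where the segment starts in physical
   memory, limit = the segment's length. The values are illustrative. */
struct segment { uint32_t base, limit; };

static const struct segment seg_table[] = {
    {1400, 1000},  /* segment 0 */
    {6300,  400},  /* segment 1 */
    {4300, 1100},  /* segment 2 */
};

uint32_t translate(uint32_t s, uint32_t offset) {
    if (offset >= seg_table[s].limit) {   /* offset past the segment end */
        fprintf(stderr, "trap: offset %u exceeds segment %u\n",
                (unsigned)offset, (unsigned)s);
        exit(EXIT_FAILURE);
    }
    return seg_table[s].base + offset;
}

int main(void) {
    /* <segment 2, offset 53> -> 4300 + 53 = 4353 */
    printf("physical = %u\n", (unsigned)translate(2, 53));
    return 0;
}
```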

The diagrammatic representation of the segmentation process is given below.

Usually, the user program is compiled, and the compiler automatically creates segments based on the input program. Segmentation techniques also have some advantages and disadvantages. 

Some of the disadvantages of segmentation are mentioned below:

  1. Possibility of external fragmentation.
  2. Allocating contiguous memory to variable-sized segments is difficult.
  3. Segmentation is a costly memory management technique.

Frequently Asked Questions

Does the OS perform memory management?

Yes, the operating system is responsible for memory management.

What is a memory management unit in an OS?

A memory management unit (MMU) is the hardware responsible for mapping logical addresses to their corresponding physical addresses.

What are memory management techniques?

Some of the essential memory management techniques are fixed and variable partitioning, paging, and segmentation.

Key Takeaways

This article gave a detailed description of the various ways of organising memory and of several memory management techniques, including partitioning, paging, and segmentation.

Operating systems is a crucial subject in competitive exams as well as placement tests. A well-designed course will help you prepare for the operating systems questions asked in software engineering interviews. Coding Ninjas offers various courses to prepare for these exams; do check them out.

For a much deeper understanding of operating system concepts, we recommend exploring the operating system guided path provided by Coding Ninjas.

Keep Learning!
