LRU Cache Implementation

Introduction

An operating system runs many processes at the same time, so it must manage memory efficiently. There are various page-replacement policies, such as FIFO, LIFO, and LRU, to accomplish this.

 

Least Recently Used (LRU) is a widely used technique. It arranges data in order of use, making it easy to see which items haven't been used for the longest time, something earlier policies like FIFO and LIFO cannot do.

 

You must have seen that in shops, the items in higher demand are placed in the outer showcases. Similarly, the pages that are used frequently are kept in the LRU cache.

 

In this article, we’ll discuss the implementation of the LRU cache. Before that, you may want to revise your concepts of memory management techniques in OS.

Implementation of LRU cache

No single data structure implements the LRU algorithm on its own. We need a combination of data structures to implement this technique.

Data structures needed to implement LRU.

 

An LRU cache can be implemented by combining these two data structures: 

 

  1. A doubly linked list implementation of a queue: the most recently used page sits at the head of the list and the least recently used page at the tail. 
  2. A hash set: to record which pages are currently in the cache, so that membership can be checked in O(1) time.

 

Algorithm

Memory is divided into frames, and a process is divided into pages. Every time a process executes, the pages it references are brought into the LRU cache.
 

If a page needed by the processor is not present in the LRU cache, we bring it in: we add a new node at the front of the queue and record the page in the hash set. If the LRU cache (the doubly linked list) is already full, we must first evict the least recently used page to make room for the required one.
 

If the page is already present somewhere in the cache, we move it to the front. Let's understand this with the help of an example.
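The miss and hit handling described above can be condensed into a short sketch. This is only an illustration, not the article's implementation: the class name, the capacity of 3, and the reference string {1, 2, 3, 2, 4} are all hypothetical choices.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;

public class LruSketch {
    public static void main(String[] args) {
        int capacity = 3;                            // hypothetical frame count
        Deque<Integer> queue = new ArrayDeque<>();   // front = most recently used
        HashSet<Integer> inCache = new HashSet<>();

        int[] pages = {1, 2, 3, 2, 4};               // hypothetical reference string
        for (int page : pages) {
            if (inCache.contains(page)) {
                queue.remove(page);                  // hit: unlink so it can move to the front
            } else if (queue.size() == capacity) {
                inCache.remove(queue.removeLast());  // miss on a full cache: evict the LRU page
            }
            queue.addFirst(page);                    // the page is now the most recently used
            inCache.add(page);
        }
        System.out.println(queue);                   // most recent first
    }
}
```

Referencing page 2 a second time moves it to the front instead of inserting a duplicate, and referencing page 4 evicts page 1, the least recently used page.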

Example: 

 

 

Initially, there are no pages in memory. The cache is implemented as a queue, so insertions and deletions take place as shown in the diagram below:





 

The processing of the reference string using the given LRU cache is shown below. 
 

Figure 1

Figure 2

Figure 3

Implementation-1

The implementation of the above algorithm is given below. 

 

  • We’ve used Java's LinkedList to model the queue: it is a doubly linked list internally and implements the Deque interface. 
  • We've used a HashSet to record which pages are currently in the cache. 
     

import java.util.*;

public class LRU_Cache {
    // Doubly linked list implementation of a queue;
    // the front holds the most recently used page
    private Deque<Integer> LRU_Queue;

    // Hash set to test in O(1) whether a page is in the cache
    private HashSet<Integer> hashSet;

    // Total number of frames in the LRU cache
    private final int TOTAL_FRAMES;

    // Constructor for initialization
    LRU_Cache(int capacity) {
        LRU_Queue = new LinkedList<>();
        hashSet = new HashSet<>();
        TOTAL_FRAMES = capacity;
    }

    // Called once for every page reference
    public void call(int page) {
        // The page is not present in the cache (a miss)
        if (!hashSet.contains(page)) {
            // If the cache is full, find and evict the least recently used page
            if (LRU_Queue.size() == TOTAL_FRAMES) {
                int last = LRU_Queue.removeLast();
                hashSet.remove(last);
            }
        }
        // The page is present in the cache (a hit): unlink it
        else {
            LRU_Queue.remove(page);
        }
        // Insert the page at the front of the cache
        LRU_Queue.push(page);
        // Record the page in the hash set
        hashSet.add(page);
    }

    public void display() {
        Iterator<Integer> itr = LRU_Queue.iterator();
        System.out.println("The LRU Cache contains:");
        while (itr.hasNext()) {
            System.out.println("|_" + itr.next() + "_|");
        }
    }

    public static void main(String[] args) {
        LRU_Cache cache = new LRU_Cache(3);
        cache.call(1);
        cache.call(2);
        cache.call(3);
        cache.call(4);
        cache.call(1);
        cache.call(2);
        cache.display();
        cache.call(5);
        cache.call(1);
        cache.call(2);
        cache.call(4);
        cache.call(3);
        cache.display();
    }
}

 

Output

The LRU Cache contains:

|_2_|

|_1_|

|_4_|

The LRU Cache contains:

|_3_|

|_4_|

|_2_|

 

Time Complexity:

The time complexity of the above approach is dominated by the addition and deletion operations on the linked list.

  • Adding an element at the front takes O(1) time. 
  • Removing an arbitrary element (on a cache hit) takes O(N) time, since the list must be scanned to find it.

Space Complexity:

The space complexity of this approach is O(N), where N is the total number of frames in the cache. 

Implementation-2

In this approach, we use a hash map from keys to doubly linked nodes. It is more efficient than the first one because a node can be located and unlinked in O(1) time. 
 

import java.util.*;

// A node of the doubly linked list
class Cache_Node {
    int key;
    int value;
    Cache_Node prev_node;
    Cache_Node next_node;

    public Cache_Node(int key, int value) {
        this.key = key;
        this.value = value;
    }
}

// A class to implement LRU Cache
class LRUCache {
    private HashMap<Integer, Cache_Node> map;
    private int number_of_frames, count;
    private Cache_Node head, tail;

    public LRUCache(int number_of_frames) {
        this.number_of_frames = number_of_frames;
        map = new HashMap<>();
        head = tail = null;
        count = 0;
    }

    // Unlinks a node from the list, updating head and tail if needed
    public void deleteNode(Cache_Node node) {
        Cache_Node prev = node.prev_node;
        Cache_Node next = node.next_node;
        if (prev != null) {
            prev.next_node = next;
        }
        if (next != null) {
            next.prev_node = prev;
        }
        if (node == head) {
            head = next;
        }
        if (node == tail) {
            tail = prev;
        }
    }

    // Inserts a node at the head (the most recently used position)
    public void addToHead(Cache_Node node) {
        node.prev_node = null;
        node.next_node = head;
        if (head != null) {
            head.prev_node = node;
        }
        head = node;
        if (tail == null) {
            tail = node;
        }
    }

    public int get(int key) {
        Cache_Node node = map.get(key);
        if (node != null) {
            // Move the accessed node to the head
            deleteNode(node);
            addToHead(node);
            return node.value;
        }
        return -1;
    }

    public void set(int key, int value) {
        Cache_Node node = map.get(key);
        if (node != null) {
            // The key already exists: update its value and move it to the head
            node.value = value;
            deleteNode(node);
            addToHead(node);
        } else {
            node = new Cache_Node(key, value);
            map.put(key, node);
            if (count < number_of_frames) {
                count++;
            } else {
                // The cache is full: evict the least recently used node
                Cache_Node last = tail;
                deleteNode(last);
                map.remove(last.key);
            }
            addToHead(node);
        }
    }

    public void printCache() {
        Cache_Node temp = head;
        while (temp != null) {
            System.out.println("| " + temp.key + ": " + temp.value + " |");
            temp = temp.next_node;
        }
    }
}

public class LRU_Test {
    public static void main(String[] args) {
        // Creates a cache with three frames
        LRUCache cache = new LRUCache(3);

        cache.set(1, 10);
        cache.set(2, 20);
        cache.set(3, 30);
        System.out.println("Current sequence in cache");
        System.out.println(" -------");
        cache.printCache();
        System.out.println(" -------");
        System.out.println("Value for the key: 2 is " + cache.get(2));
        cache.set(4, 40);
        System.out.println("Current sequence in cache");
        System.out.println(" -------");
        cache.printCache();
        System.out.println(" -------");
        System.out.println("Value for the key: 1 is " + cache.get(1));
        System.out.println("Value for the key: 3 is " + cache.get(3));
        System.out.println("Value for the key: 4 is " + cache.get(4));

        System.out.println("Current sequence in cache");
        System.out.println(" -------");
        cache.printCache();
        System.out.println(" -------");
    }
}

 

Output

Current sequence in cache
 -------
| 3: 30 |
| 2: 20 |
| 1: 10 |
 -------
Value for the key: 2 is 20
Current sequence in cache
 -------
| 4: 40 |
| 2: 20 |
| 3: 30 |
 -------
Value for the key: 1 is -1
Value for the key: 3 is 30
Value for the key: 4 is 40
Current sequence in cache
 -------
| 4: 40 |
| 3: 30 |
| 2: 20 |
 -------

 

Time Complexity: 

The hash map makes looking up a key O(1), and the doubly linked list makes adding and removing nodes O(1), so both the get and set operations run in O(1) time.

 

Space Complexity:

The space complexity of this approach is O(N), where N is the total number of nodes in the cache.
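As an aside, and separate from the two implementations above, Java's standard library can express the same idea in a few lines: a LinkedHashMap constructed with accessOrder set to true keeps its entries ordered from least to most recently accessed, and overriding removeEldestEntry turns it into an LRU cache. A minimal sketch, with a hypothetical class name and the same three-frame capacity used in the article:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruViaLinkedHashMap {
    public static void main(String[] args) {
        final int frames = 3;  // hypothetical frame count, matching the article's examples
        // accessOrder = true makes iteration order follow recency of access
        Map<Integer, Integer> cache =
            new LinkedHashMap<Integer, Integer>(frames, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
                    // Called after each put: evict once capacity is exceeded
                    return size() > frames;
                }
            };

        cache.put(1, 10);
        cache.put(2, 20);
        cache.put(3, 30);
        cache.get(2);       // touch key 2 so it becomes the most recently used
        cache.put(4, 40);   // evicts key 1, the least recently used
        System.out.println(cache.keySet());
    }
}
```

Iteration runs from least to most recently used, so after these operations the key set prints with key 3 first and key 4 last, and key 1 is gone.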

 

Now, let’s move on to the frequently asked questions on this topic. 

Frequently asked questions

  1. What does LRU stand for?
    Answer: LRU stands for Least Recently Used. It is a technique to store elements in the Cache.
     
  2. What is the LRU algorithm for Cache?
    Answer: This caching policy keeps recently used items near the top of the cache: whenever an item is accessed, it is moved to the top. When the cache limit is reached, the items that were accessed least recently are evicted from the bottom.

Key Takeaways

In this article, we’ve learned about the implementation of LRU Cache, which is an important topic of OS. If you find any difficulty understanding this, we recommend you first understand memory management techniques in OS.

You can also prepare the top Operating System Interview Questions and their answers asked by leading tech companies here.

Happy Learning!
