Least Frequently Used (LFU) Cache Implementation

  • Difficulty Level : Expert
  • Last Updated : 05 Dec, 2019

Least Frequently Used (LFU) is a caching algorithm in which the least frequently used cache block is removed whenever the cache overflows. In LFU we track not only how old a page is but also how often it has been used: a page with a higher usage frequency than the other candidates is not removed, and if all the candidate pages have the same frequency, the tie is broken by the FIFO rule, i.e. the page that entered the cache earliest is removed.
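For example, suppose a cache of size 3 holds blocks 1, 2 and 3, where block 1 has been referenced twice and blocks 2 and 3 once each. When a new block 4 has to be brought in, one of the frequency-1 blocks must be evicted; with FIFO tie-breaking that is block 2, because it entered the cache before block 3.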

A min-heap data structure is a good option to implement this algorithm, as it handles insertion, deletion, and update in logarithmic time complexity. A tie can also be resolved by removing the least recently used cache block. The following two containers have been used to solve the problem:


  • A vector of integer pairs has been used to represent the cache, where each pair consists of the block number and the number of times it has been used. The vector is ordered in the form of a min-heap, which allows us to access the least frequently used block in constant time.
  • A hashmap has been used to store the index of each cache block within the vector, which allows any block to be located in constant time (see the sketch after this list).
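
The part that is easy to get wrong in this layout is keeping the hashmap consistent with the heap: whenever two heap slots are swapped, both index entries have to be updated as well. Below is a minimal sketch of that bookkeeping (swap_slots is a hypothetical helper used only for illustration; the implementation further down inlines the same two map updates before each swap):

#include <unordered_map>
#include <utility>
#include <vector>

// Hypothetical helper (illustration only): swap two heap slots while
// keeping the block -> index map in sync. The invariant maintained is
// indices[cache[i].first] == i for every occupied slot i.
void swap_slots(std::vector<std::pair<int, int> >& cache,
                std::unordered_map<int, int>& indices, int i, int j)
{
    indices[cache[i].first] = j; // block currently at slot i moves to slot j
    indices[cache[j].first] = i; // block currently at slot j moves to slot i
    std::swap(cache[i], cache[j]);
}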

Below is the implementation of the above approach:




// C++ program for LFU cache implementation
#include <bits/stdc++.h>
using namespace std;
  
// Utility function to swap two pairs
void swap(pair<int, int>& a, pair<int, int>& b)
{
    pair<int, int> temp = a;
    a = b;
    b = temp;
}
  
// Returns the index of the parent node
inline int parent(int i)
{
    return (i - 1) / 2;
}
  
// Returns the index of the left child node
inline int left(int i)
{
    return 2 * i + 1;
}
  
// Returns the index of the right child node
inline int right(int i)
{
    return 2 * i + 2;
}
  
// Hand-written heapify: rearranges the
// nodes in order to maintain the min-heap property
void heapify(vector<pair<int, int> >& v, 
             unordered_map<int, int>& m, int i, int n)
{
    int l = left(i), r = right(i), minim;
    if (l < n)
        minim = ((v[i].second < v[l].second) ? i : l);
    else
        minim = i;
    if (r < n)
        minim = ((v[minim].second < v[r].second) ? minim : r);
    if (minim != i) {
        m[v[minim].first] = i;
        m[v[i].first] = minim;
        swap(v[minim], v[i]);
        heapify(v, m, minim, n);
    }
}
  
// Function to increment the frequency
// of a node and rearrange the heap
void increment(vector<pair<int, int> >& v, 
               unordered_map<int, int>& m, int i, int n)
{
    ++v[i].second;
    heapify(v, m, i, n);
}
  
// Function to insert a new node in the heap,
// evicting the least frequently used block if the cache is full
void insert(vector<pair<int, int> >& v, 
            unordered_map<int, int>& m, int value, int& n)
{
       
    // If the cache is full, evict the root,
    // i.e. the least frequently used block
    if (n == v.size()) {
        m.erase(v[0].first);
        cout << "Cache block " << v[0].first
             << " removed.\n";
        v[0] = v[--n];
        // Keep the index map consistent for the
        // block that was moved to the root
        m[v[0].first] = 0;
        heapify(v, m, 0, n);
    }
    v[n++] = make_pair(value, 1);
    m.insert(make_pair(value, n - 1));
    int i = n - 1;
  
    // Insert a node in the heap by swapping elements
    while (i && v[parent(i)].second > v[i].second) {
        m[v[i].first] = parent(i);
        m[v[parent(i)].first] = i;
        swap(v[i], v[parent(i)]);
        i = parent(i);
    }
    cout << "Cache block " << value << " inserted.\n";
}
  
// Function to refer to the block value in the cache
void refer(vector<pair<int, int> >& cache, unordered_map<int,
                   int>& indices, int value, int& cache_size)
{
    if (indices.find(value) == indices.end())
        insert(cache, indices, value, cache_size);
    else
        increment(cache, indices, indices[value], cache_size);
}
  
// Driver Code
int main()
{
    int cache_max_size = 4, cache_size = 0;
    vector<pair<int, int> > cache(cache_max_size);
    unordered_map<int, int> indices;
    refer(cache, indices, 1, cache_size);
    refer(cache, indices, 2, cache_size);
    refer(cache, indices, 1, cache_size);
    refer(cache, indices, 3, cache_size);
    refer(cache, indices, 2, cache_size);
    refer(cache, indices, 4, cache_size);
    refer(cache, indices, 5, cache_size);
    return 0;
}


Output:

Cache block 1 inserted.
Cache block 2 inserted.
Cache block 3 inserted.
Cache block 4 inserted.
Cache block 3 removed.
Cache block 5 inserted.
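
When block 5 is referred, blocks 3 and 4 both have frequency 1 while blocks 1 and 2 have frequency 2, so one of the frequency-1 blocks sits at the root of the min-heap and is evicted (block 3 in this run). Each call to refer() costs a constant-time hashmap lookup plus at most one root-to-leaf heap adjustment (a sift-down on a hit, a sift-up or an eviction followed by heapify on a miss), so every reference is handled in O(log n) time for a cache of size n.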


