LRU Cache

LRU Cache Algorithm

LRU (least recently used) is a cache eviction algorithm. The implementation presented here is used in the Apache KahaDB module, in the class LRUCache. The algorithm discards the least recently used items first once the cache reaches its maximum size.

Data structure

It uses a hash table to store the entries and a doubly linked list to keep track of access order. Whenever an entry is inserted, updated, or accessed, it is unlinked and re-linked immediately before the head node, so the node before the head is the most recently used and the node after the head is the eldest. When the cache reaches its maximum size, the least recently used entry is evicted.
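To make that concrete, here is a minimal from-scratch sketch of the structure just described. It is written for illustration only (names such as SimpleLruCache, Node, and linkBeforeHead are not from KahaDB): a HashMap gives O(1) lookup, and a circular doubly linked list with a sentinel head records access order, with eviction taking the node after the head.

    import java.util.HashMap;
    import java.util.Map;

    // Illustrative sketch: hash table + circular doubly linked list with a sentinel head.
    // head.prev is the most recently used node, head.next is the eldest node.
    public class SimpleLruCache<K, V> {

        private static final class Node<K, V> {
            K key;
            V value;
            Node<K, V> prev;
            Node<K, V> next;
        }

        private final int maxSize;
        private final Map<K, Node<K, V>> table = new HashMap<>();
        private final Node<K, V> head = new Node<>(); // sentinel node

        public SimpleLruCache(int maxSize) {
            this.maxSize = maxSize;
            head.prev = head;
            head.next = head;
        }

        public V get(K key) {
            Node<K, V> node = table.get(key);
            if (node == null) {
                return null;
            }
            // accessed: re-link before the head so it becomes the most recently used
            unlink(node);
            linkBeforeHead(node);
            return node.value;
        }

        public void put(K key, V value) {
            Node<K, V> node = table.get(key);
            if (node == null) {
                node = new Node<>();
                node.key = key;
                table.put(key, node);
            } else {
                unlink(node);
            }
            node.value = value;
            linkBeforeHead(node);
            if (table.size() > maxSize) {
                // evict the eldest entry, which is the node right after the head
                Node<K, V> eldest = head.next;
                unlink(eldest);
                table.remove(eldest.key);
            }
        }

        private void unlink(Node<K, V> node) {
            node.prev.next = node.next;
            node.next.prev = node.prev;
        }

        private void linkBeforeHead(Node<K, V> node) {
            node.prev = head.prev;
            node.next = head;
            head.prev.next = node;
            head.prev = node;
        }
    }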

This simple LRU algorithm is implemented by extending LinkedHashMap. To keep track of access order, the accessOrder flag must be set to true when constructing the LinkedHashMap:

    new LRUCache(pageCacheSize, pageCacheSize, 0.75f, true)

When the cache grows past its maximum size, the eldest (least recently used) entry should be removed. This is done by overriding the removeEldestEntry method:

    protected boolean removeEldestEntry(Map.Entry eldest) {
        return size() > maxCacheSize;
    }
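Putting the two pieces together, here is a self-contained sketch of the LinkedHashMap-based cache along with a small usage example. It is modeled on the approach described above rather than copied from KahaDB, so the class name LruCache, the single-argument constructor, and the fixed 0.75f load factor are illustrative choices.

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Sketch of an LRU cache built on LinkedHashMap with accessOrder = true.
    public class LruCache<K, V> extends LinkedHashMap<K, V> {

        private final int maxCacheSize;

        public LruCache(int maxCacheSize) {
            // initial capacity, load factor, and accessOrder = true so that
            // iteration order follows access order instead of insertion order
            super(maxCacheSize, 0.75f, true);
            this.maxCacheSize = maxCacheSize;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            // evict the least recently used entry once the cache exceeds its limit
            return size() > maxCacheSize;
        }

        public static void main(String[] args) {
            LruCache<Integer, String> cache = new LruCache<>(2);
            cache.put(1, "a");
            cache.put(2, "b");
            cache.get(1);           // touch 1, so 2 becomes the least recently used
            cache.put(3, "c");      // exceeds the limit and evicts 2
            System.out.println(cache.keySet()); // prints [1, 3]
        }
    }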