Cache2Q

The 2Q cache is a variant of a multi-level LRU cache (original paper: http://www.vldb.org/conf/1994/P439.PDF). It is adaptive, scan-resistant, and can give more hits than a plain LRU.

This cache consists of three parts (In, Out and Main): 'In' receives all new elements, 'Out' receives all overflows from 'In', and 'Main' is an LRU cache which holds all long-lived data.
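The three-queue flow above can be sketched in language-agnostic pseudocode (shown here in Python). This is an illustrative sketch of the general 2Q admission policy, not the library's implementation; the class name `TwoQ` and all field names are invented for the example.

```python
from collections import OrderedDict

class TwoQ:
    """Illustrative sketch of the 2Q admission policy."""
    def __init__(self, size_in, size_out, size_main):
        self.q_in = OrderedDict()    # FIFO of newly inserted keys
        self.q_out = OrderedDict()   # FIFO of keys that overflowed from In
        self.main = OrderedDict()    # LRU of long-lived keys
        self.size_in, self.size_out, self.size_main = size_in, size_out, size_main

    def get(self, k):
        if k in self.main:
            self.main.move_to_end(k)          # LRU touch
            return self.main[k]
        if k in self.q_out:
            v = self.q_out.pop(k)             # re-accessed after leaving In:
            self.main[k] = v                  # promote to Main
            if len(self.main) > self.size_main:
                self.main.popitem(last=False)
            return v
        return self.q_in.get(k)               # hits in In do not reorder (FIFO)

    def put(self, k, v):
        if k in self.main:
            self.main[k] = v
            self.main.move_to_end(k)
            return
        if k in self.q_out:
            self.q_out[k] = v
            return
        self.q_in[k] = v                      # every new key enters In
        if len(self.q_in) > self.size_in:
            old_k, old_v = self.q_in.popitem(last=False)
            self.q_out[old_k] = old_v         # overflow from In goes to Out
            if len(self.q_out) > self.size_out:
                self.q_out.popitem(last=False)
```

Scan resistance follows from this structure: a long one-time scan only churns the small In and Out queues, while keys in Main survive untouched.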

Members

Functions

cacheEvents
auto cacheEvents()
clear
void clear()

Drop all elements from the cache.

enableCacheEvents
auto enableCacheEvents()
get
Nullable!V get(K k)

Get an element from the cache.

length
int length()

Number of elements in the cache.

put
PutResult put(K k, V v, TTL ttl = TTL())

Put an element into the cache.

remove
bool remove(K k)

Remove an element from the cache.

size
auto size(uint s)

Set the total cache size. 'In' and 'Out' each get 1/6 of the total size, and 'Main' gets 2/3.
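For illustration, the split works out as follows (assuming integer division; the library's exact rounding may differ):

```python
total = 1024            # total cache size passed to size()
size_in = total // 6    # In queue
size_out = total // 6   # Out queue
size_main = 2 * total // 3  # Main LRU cache
```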

sizeIn
auto sizeIn(uint s)

Set the In queue size.

sizeMain
auto sizeMain(uint s)

Set the Main queue size.

sizeOut
auto sizeOut(uint s)

Set the Out queue size.

ttl
void ttl(time_t v)

Set the default TTL in seconds.

Examples

// create cache with total size 1024
auto cache = () @trusted {
    auto allocator = Mallocator.instance;
    return allocator.make!(Cache2Q!(int, string))(1024);
}();

cache.sizeIn = 10;              // if you need, you can later set any size for the In queue
cache.sizeOut = 55;             // and for the Out queue
cache.sizeMain = 600;           // and for the Main cache
cache.put(1, "one");
assert(cache.get(1) == "one");  // key 1 is in the cache
assert(cache.get(2).isNull);    // key 2 is not in the cache
assert(cache.length == 1);      // number of elements in the cache
cache.clear;                    // clear the cache

Meta