Drop all elements from cache.
Get element from cache.
Number of elements in cache.
Put element to cache.
Remove element from cache.
Set total cache size. 'In' and 'Out' each get 1/6 of the total size; 'Main' gets the remaining 2/3 (e.g. a total size of 1024 gives roughly 170 for In, 170 for Out, and 682 for Main).
Set In queue size.
Set Main queue size.
Set Out queue size.
Set default TTL (seconds).
// create cache with total size 1024
auto cache = () @trusted {
    auto allocator = Mallocator.instance;
    return allocator.make!(Cache2Q!(int, string))(1024);
}();

cache.sizeIn = 10;    // if you need, later you can set any size for the In queue
cache.sizeOut = 55;   // and for the Out queue
cache.sizeMain = 600; // and for the Main cache
cache.put(1, "one");
assert(cache.get(1) == "one"); // key 1 is in cache
assert(cache.get(2).isNull);   // key 2 is not in cache
assert(cache.length == 1);     // number of elements in cache
cache.clear;                   // clear cache
A 2Q cache is a variant of a multi-level LRU cache. Original paper: http://www.vldb.org/conf/1994/P439.PDF. It is adaptive, scan-resistant, and can give more hits than a plain LRU.
This cache consists of three parts (In, Out and Main): 'In' receives all new elements, 'Out' receives all overflows from 'In', and 'Main' is an LRU cache which holds all long-lived data.
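The three-queue policy above can be sketched as follows. This is a simplified illustration written in Python for brevity (the actual library is written in D); the class name `TwoQCache` and the fixed per-queue sizes are assumptions for the sketch, and TTL handling and allocator control present in the real `Cache2Q` are omitted:

```python
from collections import OrderedDict

class TwoQCache:
    """Minimal 2Q sketch: In (FIFO), Out (FIFO), Main (LRU)."""

    def __init__(self, size_in, size_out, size_main):
        self.q_in = OrderedDict()    # new keys, FIFO order
        self.q_out = OrderedDict()   # keys that overflowed from In
        self.main = OrderedDict()    # long-lived keys, LRU order
        self.size_in = size_in
        self.size_out = size_out
        self.size_main = size_main

    def get(self, key):
        if key in self.main:
            self.main.move_to_end(key)        # LRU hit: mark most-recent
            return self.main[key]
        if key in self.q_out:
            value = self.q_out.pop(key)       # re-accessed after overflow:
            self._put_main(key, value)        # promote to Main
            return value
        return self.q_in.get(key)             # In is FIFO: no reordering

    def put(self, key, value):
        if key in self.main:
            self.main[key] = value
            self.main.move_to_end(key)
            return
        if key in self.q_out:
            self.q_out[key] = value
            return
        self.q_in[key] = value
        if len(self.q_in) > self.size_in:     # In overflow goes to Out
            k, v = self.q_in.popitem(last=False)
            self.q_out[k] = v
            if len(self.q_out) > self.size_out:
                self.q_out.popitem(last=False)  # oldest Out entry is dropped

    def _put_main(self, key, value):
        self.main[key] = value
        self.main.move_to_end(key)
        if len(self.main) > self.size_main:
            self.main.popitem(last=False)     # evict least-recently-used
```

A second access to a key that has overflowed from In proves it is not part of a one-shot scan, so only then is it promoted to Main; this is what makes the scheme scan-resistant.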