EE Seminar: Efficient admission policies for cache management and heavy hitter detection


 (The talk will be given in English)


Speaker:     Dr. Gil Einziger
                   Nokia Bell-Labs


Monday, December 11th, 2017
15:00 - 16:00

Room 011, Kitot Bldg., Faculty of Engineering


Efficient admission policies for cache management and heavy hitter detection



Caching is a fundamental technique in computer science. It creates the illusion of a faster memory by keeping some data items in a memory that is faster or "closer" to the application. The technique works because in many real workloads the access pattern is far from uniform. Once the cache is full, admitting a new item requires evicting one of the cached items. Over the last decades, the question of which item to evict has attracted plenty of research from industry and academia alike. In most cache policies, newly accessed items are always added to the cache.
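To make the design choice concrete, here is a minimal sketch (not code from the talk) of a classic LRU cache in which a newly accessed item is always admitted, evicting the least-recently-used entry when the cache is full:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: on a miss the new item is always admitted,
    evicting the least-recently-used entry when the cache is full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()  # insertion/access order = recency order

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)  # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        elif len(self.items) >= self.capacity:
            self.items.popitem(last=False)  # evict the LRU item
        self.items[key] = value  # the new item is always admitted
```

Note that `put` never asks whether the new item deserves a slot; it is the eviction side, not the admission side, that is parameterized.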

This design choice is so obvious that many papers do not even explicitly mention it. 


In this talk, I introduce the TinyLFU cache admission policy, which decides whether a newly arriving item should be admitted to the cache. Intuitively, the newly arriving item should not be admitted if doing so means dropping a 'better' data item. TinyLFU is used in many open-source projects, such as Apache Cassandra, Apache Accumulo, VMware's Corfu, JBoss, Infinispan, Druid, Neo4j, and Spring, as well as by companies such as Google, Netflix, and LinkedIn. Next, I will move to the problem of finding the most frequent items in a stream and show that admission is an important decision in that field as well. I will introduce the Randomized Admission Policy (RAP), an admission policy that greatly increases the efficiency of existing algorithms. Specifically, it improves the best-known algorithms by as much as 32x on real traces and up to 128x on synthetic traces.
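The admission idea can be sketched conceptually as follows. This is not the talk's implementation: an exact `Counter` stands in for the compact approximate frequency sketch that a real TinyLFU-style policy would use, and LRU supplies the eviction candidate.

```python
from collections import Counter, OrderedDict

class AdmissionCache:
    """Conceptual sketch of a frequency-based admission policy in the
    spirit of TinyLFU: a newly arriving item is admitted only if its
    estimated access frequency exceeds that of the eviction candidate."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()  # cached items in LRU order
        self.freq = Counter()       # exact stand-in for a frequency sketch

    def get(self, key):
        self.freq[key] += 1
        if key not in self.items:
            return None
        self.items.move_to_end(key)  # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        self.freq[key] += 1
        if key in self.items:
            self.items[key] = value
            self.items.move_to_end(key)
            return
        if len(self.items) < self.capacity:
            self.items[key] = value
            return
        victim = next(iter(self.items))  # LRU eviction candidate
        if self.freq[key] > self.freq[victim]:
            del self.items[victim]      # new item is 'better': admit it
            self.items[key] = value
        # otherwise the new item is rejected and the victim stays cached
```

The key difference from the classic design is the final branch: a one-hit-wonder arriving at a full cache is simply turned away instead of displacing a frequently accessed item.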

Tel Aviv University, P.O. Box 39040, Tel Aviv 6997801