MeasureThat.net
Lru cache non-async version: time complexity of cache-hits
(version: 0)
Lru clock 2 hand version
Comparing performance of:
Test1 vs Test2 vs Test3
Created: 5 years ago by Guest
Script Preparation code:
"use strict";

let Lru = function (cacheSize, callbackBackingStoreLoad, elementLifeTimeMs = 1000) {
    let me = this;
    let maxWait = elementLifeTimeMs;
    let size = parseInt(cacheSize, 10);
    let mapping = {};
    let buf = [];

    // Pre-fill every slot with a random placeholder key so the buffer starts full.
    for (let i = 0; i < size; i++) {
        let rnd = Math.random();
        mapping[rnd] = i;
        buf.push({ data: "", visited: false, key: rnd, time: Date.now() });
    }

    let ctr = 0;                                // first clock hand: clears "visited" bits
    let ctrEvict = parseInt(cacheSize / 2, 10); // second clock hand: finds the eviction victim
    let loadData = callbackBackingStoreLoad;

    this.get = function (key) {
        if (key in mapping) {
            if (Date.now() - buf[mapping[key]].time > maxWait) {
                // Entry expired: drop it and re-enter through the miss path.
                delete mapping[key];
                return me.get(key);
            } else {
                // Cache hit: mark the slot visited and refresh its timestamp.
                buf[mapping[key]].visited = true;
                buf[mapping[key]].time = Date.now();
                return buf[mapping[key]].data;
            }
        } else {
            // Cache miss: advance both hands until an unvisited slot is found.
            let ctrFound = -1;
            while (ctrFound === -1) {
                if (buf[ctr].visited) {
                    buf[ctr].visited = false;
                }
                ctr++;
                if (ctr >= size) {
                    ctr = 0;
                }
                if (!(buf[ctrEvict].visited)) {
                    ctrFound = ctrEvict;
                }
                ctrEvict++;
                if (ctrEvict >= size) {
                    ctrEvict = 0;
                }
            }
            // Evict the victim slot and load fresh data from the backing store.
            delete mapping[buf[ctrFound].key];
            mapping[key] = ctrFound;
            let dataKey = loadData(key);
            buf[ctrFound] = { data: dataKey, visited: false, key: key, time: Date.now() };
            return buf[ctrFound].data;
        }
    };
};

document.lru = new Lru(100, function (key) {
    /* cache miss: simulate a ~1 ms load from the data store */
    let wait = Date.now();
    let aa = 0;
    while (Date.now() - wait < 1) { aa++; }
    return aa.toString() + key;
}, 5000 /* milliseconds before the next get() invalidates the data */);

document.lru2 = new Lru(1000, function (key) {
    /* cache miss: simulate a ~1 ms load from the data store */
    let wait = Date.now();
    let aa = 0;
    while (Date.now() - wait < 1) { aa++; }
    return aa.toString() + key;
}, 5000 /* milliseconds before the next get() invalidates the data */);

document.lru3 = new Lru(10000, function (key) {
    /* cache miss: simulate a ~1 ms load from the data store */
    let wait = Date.now();
    let aa = 0;
    while (Date.now() - wait < 1) { aa++; }
    return aa.toString() + key;
}, 5000 /* milliseconds before the next get() invalidates the data */);
Tests:
Test1
let c = 0;
let tt = Date.now();

function runThis() {
    for (let i = 0; i < 10000; i++) {
        // Keys span 100 values ("0ee".."99ee"), matching the 100-element cache.
        let myData = document.lru.get(parseInt(Math.random() * 100, 10).toString() + "ee");
        console.log(myData);
    }
}

runThis();
Test2
let c = 0;
let tt = Date.now();

function runThis() {
    for (let i = 0; i < 10000; i++) {
        // Same 100 keys against the 1,000-element cache: all keys fit.
        let myData = document.lru2.get(parseInt(Math.random() * 100, 10).toString() + "ee");
        console.log(myData);
    }
}

runThis();
Test3
let c = 0;
let tt = Date.now();

function runThis() {
    for (let i = 0; i < 10000; i++) {
        // Same 100 keys against the 10,000-element cache: all keys fit.
        let myData = document.lru3.get(parseInt(Math.random() * 100, 10).toString() + "ee");
        console.log(myData);
    }
}

runThis();
Autogenerated LLM Summary (model llama3.1:latest, generated one year ago):
Let's break down the benchmark test cases and results.

**What is being tested?**

The tests evaluate the performance of an LRU (Least Recently Used) cache implementation in JavaScript, specifically the time complexity of cache hits. The cache is instantiated three times with different sizes: `document.lru` (100 elements), `document.lru2` (1,000 elements), and `document.lru3` (10,000 elements).

**What options are being compared?**

The tests compare the cost of accessing elements in the LRU cache at different cache sizes. A cache hit occurs when the requested key is present in the cache and its entry has not yet expired; a miss triggers a simulated ~1 ms load from the backing store.

**What pros and cons do these approaches have?**

* **Small cache size (document.lru)**: Pros: low memory usage and fast eviction scans; Cons: higher risk of cache misses on larger key spaces.
* **Medium cache size (document.lru2)**: Pros: balanced trade-off between memory usage and cache hit ratio; Cons: may still experience cache misses, albeit less frequently.
* **Large cache size (document.lru3)**: Pros: lowest risk of cache misses; Cons: increased memory usage and potentially longer eviction scans.

**What library or special JS feature is used?**

None. The LRU cache implementation is custom-built for this benchmark test case.

**What are the results?**

The latest benchmark results show that:

* **Test1 (document.lru)**: 2.501 executions per second
* **Test2 (document.lru2)**: 2.498 executions per second
* **Test3 (document.lru3)**: 2.436 executions per second

The performance difference between the three cache sizes is small, with the largest cache running slightly slower rather than faster. Since each test draws keys from only 100 possible values, all three caches operate at a near-100% hit ratio after warm-up, so the numbers mostly reflect the cost of a hit (plus the `console.log` call in every iteration).

Overall, this benchmark evaluates an LRU cache implementation under various cache size scenarios and demonstrates the trade-offs involved in balancing lookup cost, hit ratio, and memory use.
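The "clock 2 hand" scheme named in the benchmark description can be reduced to a single-hand sketch that shows why a hit is an O(1) map lookup while a miss pays a short second-chance scan. This is a hypothetical simplification for illustration, not the benchmark's exact `Lru` code; the name `ClockCache` and its shape are invented here, and expiry timestamps are omitted:

```javascript
// Minimal sketch of clock (second-chance) eviction, simplified to one hand.
// A "visited" bit set on each hit gives that slot a second chance: the hand
// clears the bit and moves on instead of evicting a recently used entry.
function ClockCache(size, loadData) {
  const slots = Array.from({ length: size }, () => ({
    key: null, data: null, visited: false,
  }));
  const index = new Map(); // key -> slot position
  let hand = 0;            // clock hand used to pick eviction victims

  this.get = function (key) {
    if (index.has(key)) {
      // Cache hit: O(1) map lookup; just mark the slot as recently used.
      const slot = slots[index.get(key)];
      slot.visited = true;
      return slot.data;
    }
    // Cache miss: sweep the hand, clearing "visited" bits, until it lands
    // on a slot that was not recently used.
    while (slots[hand].visited) {
      slots[hand].visited = false;
      hand = (hand + 1) % size;
    }
    const victim = slots[hand];
    if (victim.key !== null) index.delete(victim.key);
    victim.key = key;
    victim.data = loadData(key);
    victim.visited = false;
    index.set(key, hand);
    hand = (hand + 1) % size;
    return victim.data;
  };
}
```

With a two-slot cache, a key that was hit at least once survives the next eviction because its visited bit buys it a second chance, while a key that was only loaded and never re-read is evicted first.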
Related benchmarks:
Lru cache
Lru cache 50% hit ratio test
Lru cache non-async version with 99 percent hit ratio
Lru cache non-async version: time complexity of cache-miss