MeasureThat.net
Lru cache (version: 0)
Lru clock 2 hand version
Comparing performance of: Test1 vs Test2
Created: 5 years ago by: Guest
Tests:
Test1

let Lru = function(cacheSize, callbackBackingStoreLoad, elementLifeTimeMs = 1000) {
    let me = this; // needed so get() can recursively call itself on expiry
    let maxWait = elementLifeTimeMs;
    let size = parseInt(cacheSize, 10);
    let mapping = {};   // mapping from key to buffer index
    let buf = [];       // actual buffer that holds cache data
    for (let i = 0; i < size; i++) {
        let rnd = Math.random();
        mapping[rnd] = i;
        buf.push({ data: "", visited: false, key: rnd, time: 0 });
    }
    let ctr = 0;                                // second-chance "hand" of clock algorithm
    let ctrEvict = parseInt(cacheSize / 2, 10); // eviction "hand" of clock algorithm
    let loadData = callbackBackingStoreLoad;    // user-given data-store load function
    this.get = async function(key) {
        if (key in mapping) {
            // RAM speed: check whether the element's lifetime is over
            if (Date.now() - buf[mapping[key]].time > maxWait) {
                delete mapping[key];
                return await me.get(key);
            } else {
                // not over, load from RAM
                buf[mapping[key]].visited = true;
                buf[mapping[key]].time = Date.now();
                return buf[mapping[key]].data;
            }
        } else {
            // load at data-store speed (hdd, network, ...)
            // find a slot to evict for the new data
            let ctrFound = -1;
            while (ctrFound === -1) {
                // give "another chance" to recently visited slots
                if (buf[ctr].visited) {
                    buf[ctr].visited = false;
                }
                ctr++;
                if (ctr >= size) { ctr = 0; }
                // find a victim
                if (!(buf[ctrEvict].visited)) {
                    ctrFound = ctrEvict; // evict
                }
                ctrEvict++;
                if (ctrEvict >= size) { ctrEvict = 0; }
            }
            delete mapping[buf[ctrFound].key];
            mapping[key] = ctrFound;
            let dataKey = await loadData(key);
            // store the looked-up key (the original assigned key: e.now(), a typo)
            // and stamp the load time so the lifetime check above works
            buf[ctrFound] = { data: dataKey, visited: false, key: key, time: Date.now() };
            return buf[ctrFound].data;
        }
    };
};

let lru = new Lru(95 /* cache size */, async function(key) {
    // cache miss: load from the data store (simulated here by a busy loop)
    for (let i = 0; i < 10000; i++) {}
    return key;
}, 50 /* milliseconds before the next get() invalidates the data */);

async function runThis() {
    for (let i = 0; i < 1000; i++) {
        let myData = await lru.get(parseInt(Math.random() * 100, 10).toString());
        console.log(myData);
    }
}

runThis();
Test2

async function runThis() {
    for (let i = 0; i < 1000; i++) {
        for (let j = 0; j < 10000; j++) {}
        console.log(parseInt(Math.random() * 100, 10));
    }
}

runThis();
Latest run results:
Run details: (Test run date: 10 months ago)
User agent:
Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/137.0.0.0 Safari/537.36
Browser/OS:
Chrome 137 on Windows
Test name | Executions per second
Test1     | 13130.5 Ops/sec
Test2     | 142.1 Ops/sec
Autogenerated LLM Summary (model llama3.2:3b, generated one year ago):
Let's dive into the world of JavaScript microbenchmarks on MeasureThat.net.

**Benchmark Definition**

The benchmark definition is a JSON object that describes the tests to be performed. In this case it defines one benchmark, "Lru cache", with two test cases, "Test1" and "Test2". The benchmark implements a Least Recently Used (LRU) cache, an eviction policy used in many caching systems. The implementation has two main components:

1. `Lru`: a constructor function that takes three arguments: `cacheSize`, `callbackBackingStoreLoad`, and `elementLifeTimeMs`. It initializes a fixed-size buffer of cache slots and a mapping from keys to buffer indices.
2. `get`: an asynchronous method that retrieves data from the cache by key. On a miss (or an expired entry), it picks a victim slot with a second-chance "clock" sweep and loads fresh data through the user-supplied `callbackBackingStoreLoad` function.

**Options Compared**

1. **Lru cache implementation**: "Test1" runs 1000 lookups against the custom Lru cache.
2. **Busy-loop baseline**: "Test2" runs the same 1000 iterations, each spinning an empty 10000-iteration loop, which approximates the cost of every lookup being a cache miss.

**Pros and Cons**

Pros of a custom Lru cache implementation like "Test1":

* **Fine-grained control**: developers can adapt the cache implementation to their specific use case.
* **Performance tuning**: parameters such as `cacheSize` and `elementLifeTimeMs` can be adjusted for better performance.

Cons:

* **Increased complexity**: the custom implementation adds code, which increases development time and maintenance effort.
* **Potential bugs**: a poorly optimized or incorrectly implemented eviction algorithm can cause performance problems or incorrect behavior.
In contrast, the busy-loop test case in "Test2" is a simple baseline for comparing execution times. Its pros:

* **Simplicity**: the test case is easy to set up and run.
* **Consistency**: it provides a stable baseline for comparison with other implementations.

Its cons:

* **Limited insight**: it does not simulate actual caching behavior, so it says little about the Lru implementation itself.
* **Less informative**: it allows no fine-grained tuning or optimization of the Lru algorithm.

**Library and Special JS Features**

No libraries are used beyond standard JavaScript, and no JS feature such as `async`/`await` is used in a non-standard way.

**Alternatives**

Other options for running JavaScript microbenchmarks include:

* **V8's d8 shell**: runs JavaScript directly on the V8 engine in a controlled environment, useful for isolating engine-level optimization effects.
* **Benchmark.js**: a popular benchmarking library that provides a simple API for running and comparing performance tests.
* **JSPerf**: a web-based tool for running and comparing JavaScript performance tests.

These alternatives offer different levels of flexibility, complexity, and ease of use, depending on the specific needs of the project.
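The second-chance "clock" eviction at the heart of Test1 can be illustrated in isolation. The sketch below is not taken from the benchmark; `clockEvict` and the three-slot example are hypothetical names and sizes, shown only to make the sweep visible: the hand clears the `visited` bit of each recently used slot it passes (giving it a "second chance") and stops at the first slot whose bit is already clear.

```javascript
// Minimal sketch of second-chance clock eviction (illustrative, not the
// benchmark's code). `slots` is the circular buffer, `hand` the sweep index.
function clockEvict(slots, hand) {
  // Sweep until a slot with a clear "visited" bit is found.
  while (slots[hand].visited) {
    slots[hand].visited = false;         // give this slot a second chance
    hand = (hand + 1) % slots.length;    // advance the hand circularly
  }
  return hand;                           // index of the victim slot
}

// Three slots; slot 1 was not recently used, so it becomes the victim.
const slots = [
  { key: "a", visited: true },
  { key: "b", visited: false },
  { key: "c", visited: true },
];
const victim = clockEvict(slots, 0);
console.log(victim);           // → 1
console.log(slots[0].visited); // → false (slot 0 used up its second chance)
```

Test1 differs from this sketch in one respect: it runs two hands (`ctr` clears bits, `ctrEvict` picks victims) offset by half the buffer, so a slot cleared by one hand is not immediately evicted by the other.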
Related benchmarks:
memoizeOne - complex types
for-in: cached vs non-cached
for-in: cached vs non-cached 2
Memoization functions
new Intl.DateTimeFormat vs cached Intl.DateTimeFormat vs custom