MeasureThat.net
LRU vs memoizeWith (version: 0)
Comparing performance of:
LRUMemoizeWith vs Ramda memoizeWith
Created: 5 years ago by Guest
HTML Preparation code:
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.27.1/ramda.min.js" integrity="sha512-rZHvUXcc1zWKsxm7rJ8lVQuIr1oOmm7cShlvpV0gWf0RvbcJN6x96al/Rp2L2BI4a4ZkT2/YfVe/8YvB2UHzQw==" crossorigin="anonymous"></script>
Script Preparation code:
function createNode(key, value) {
  return { newer: null, older: null, key: key, value: value };
}

function LRU(size) {
  this.size = size;
  this.length = 0;
  this.head = null;
  this.last = this.head;
  // Requires a Map polyfill in older environments
  this.map = new Map();
}

LRU.prototype.has = function (key) {
  return this.map.has(key);
};

LRU.prototype._hoistNode = function (node) {
  if (this.head === node) { return; }
  if (this.last === node) {
    this.last = node.newer || node;
    this.last.older = null;
  }
  if (node.older) { node.older.newer = node.newer; }
  if (node.newer) { node.newer.older = node.older; }
  node.older = this.head;
  node.newer = null;
  if (this.head) { this.head.newer = node; }
  if (!this.last) { this.last = node; }
  this.head = node;
};

LRU.prototype.get = function (key) {
  if (!this.has(key)) { return undefined; }
  var node = this.map.get(key);
  this._hoistNode(node);
  return node.value;
};

LRU.prototype.set = function (key, value) {
  var node;
  if (this.has(key)) {
    node = this.map.get(key);
    node.value = value;
  } else if (this.length < this.size) {
    node = createNode(key, value);
    this.map.set(key, node);
    this.length += 1;
  } else {
    // Cache full: recycle the least recently used node
    node = this.last;
    this.map.delete(node.key);
    node.value = value;
    node.key = key;
    this.map.set(key, node);
  }
  this._hoistNode(node);
};

function memoizeWithLRU(size, prop, fn) {
  var cache = new LRU(size);
  return function (value) {
    var key = prop.call(null, value);
    if (cache.has(key)) {
      return cache.get(key);
    }
    var c = fn.call(this, value);
    cache.set(key, c);
    return c;
  };
}
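The preparation code above pairs a `Map` with a hand-maintained doubly linked list. The same LRU eviction behavior can be sketched more compactly by leaning on JavaScript `Map`'s insertion-order iteration guarantee (a simplified sketch with a hypothetical `memoizeWithLRUSketch` name, not the benchmark's implementation):

```javascript
// Minimal LRU memoizer using Map's insertion-order guarantee:
// on a hit, delete + re-set the key so it moves to the "most recent" end;
// on overflow, evict the first key, which is the least recently used one.
function memoizeWithLRUSketch(size, prop, fn) {
  const cache = new Map();
  return function (value) {
    const key = prop(value);
    if (cache.has(key)) {
      const hit = cache.get(key);
      cache.delete(key);     // hoist: re-insert as most recent
      cache.set(key, hit);
      return hit;
    }
    const result = fn.call(this, value);
    if (cache.size >= size) {
      // Map iterates in insertion order, so the first key is the LRU one
      cache.delete(cache.keys().next().value);
    }
    cache.set(key, result);
    return result;
  };
}

const memo = memoizeWithLRUSketch(2, i => i, i => i * 2);
memo(1); memo(2); // cache holds keys {1, 2}
memo(3);          // capacity reached: evicts key 1, cache holds {2, 3}
```

The delete-then-set hoist gives O(1) recency updates without any explicit linked list, at the cost of one extra `Map` mutation per cache hit.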
Tests:
LRUMemoizeWith
var memo = memoizeWithLRU(50, i => i, i => i * 2);
for (let i = 0; i < 5000; ++i) { memo(i); }
for (let i = 0; i < 5000; ++i) { memo(i); }
Ramda memoizeWith
var memo = R.memoizeWith(i => i, i => i * 2);
for (let i = 0; i < 5000; ++i) { memo(i); }
for (let i = 0; i < 5000; ++i) { memo(i); }
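Note the asymmetry in these test cases: the LRU variant is capped at 50 entries while touching 5000 distinct keys, so each key is evicted long before its second pass and the cache never produces a hit. A small counting sketch (the `countHits` helper is hypothetical, not part of the benchmark) makes this concrete:

```javascript
// Count cache hits for a given capacity over the benchmark's access pattern:
// 5000 distinct keys, each touched in two sequential passes.
function countHits(capacity, keys) {
  const cache = new Map();
  let hits = 0;
  for (const key of keys) {
    if (cache.has(key)) {
      hits += 1;
      cache.delete(key);
      cache.set(key, true); // hoist to most-recent position
    } else {
      if (cache.size >= capacity) {
        cache.delete(cache.keys().next().value); // evict the LRU key
      }
      cache.set(key, true);
    }
  }
  return hits;
}

const keys = [];
for (let i = 0; i < 5000; ++i) keys.push(i);
const pattern = keys.concat(keys); // two sequential passes, as in the tests

console.log(countHits(50, pattern));       // 0 — every key is evicted before its second use
console.log(countHits(Infinity, pattern)); // 5000 — an unbounded cache hits the whole second pass
```

So the LRU variant pays the cost of computation plus eviction bookkeeping on every single call, while Ramda's unbounded cache serves the entire second pass from memory.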
Latest run results:
Run details: (Test run date: one year ago)
User agent:
Mozilla/5.0 (Linux; Android 10; K) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/135.0.0.0 Mobile Safari/537.36
Browser/OS:
Chrome Mobile 135 on Android
Test name           Executions per second
LRUMemoizeWith      676.6 Ops/sec
Ramda memoizeWith   1963.6 Ops/sec
Autogenerated LLM Summary (model llama3.2:3b, generated one year ago):
Let's break down what's being tested in this benchmark.

**Benchmark Definition**

The benchmark compares two memoization strategies: a hand-rolled `memoizeWithLRU`, which bounds its cache with an LRU (Least Recently Used) eviction policy, and Ramda's `R.memoizeWith`, which caches results without any size bound.

**LRU Cache**

The LRU cache implementation consists of four main methods:

1. `has(key)`: checks whether a key is present in the cache.
2. `_hoistNode(node)`: moves a node to the front of the internal linked list, marking it most recently used.
3. `get(key)`: retrieves a cached value and hoists its node.
4. `set(key, value)`: stores a value, recycling the least recently used node when the cache is full.

The cache pairs a `Map` (for O(1) key lookup) with a doubly linked list (for O(1) recency updates); each linked-list node holds one cached value.

**memoizeWith**

The hand-rolled `memoizeWithLRU` takes three arguments:

1. `size`: the maximum number of items to keep in the cache.
2. `prop`: a key-selector function that derives a cache key from the input value.
3. `fn`: the function to be memoized.

It wraps `fn` so that each call first checks the LRU cache: on a hit it returns the cached value; on a miss it calls `fn`, stores the result, and returns it. Ramda's `R.memoizeWith`, by contrast, takes only two arguments (the key-selector and the function) and grows its cache without bound.

**Comparison**

Both test cases call a doubling function (`i => i * 2`) on 5000 distinct keys, then repeat the same 5000 calls. The LRU variant is capped at 50 entries, so with 5000 distinct keys every entry is evicted before its second use: every call is a miss that also pays eviction and list-maintenance costs. Ramda's unbounded cache misses on the first pass but hits on the entire second pass.

**Pros and Cons**

**LRU-bounded memoization** Pros: memory use is bounded, which is safe for long-running processes with many distinct keys; the eviction policy can be tuned. Cons: extra bookkeeping (hoisting and eviction) on every access; more complex to implement correctly.

**memoizeWith (Ramda)** Pros: simple API from a well-established library; no per-access bookkeeping beyond the key lookup. Cons: the cache grows without bound, which can leak memory when the key space is large.

**Other Considerations**

Choose based on the access pattern: an unbounded cache wins when the key space is small or reuse is high, while an LRU cache trades some speed for a memory ceiling. If you need fine-grained control over caching behavior, a custom LRU implementation may be worth the complexity; if you prioritize simplicity, Ramda's `memoizeWith` is a reasonable default.

**Benchmark Results**

The latest run shows `Ramda memoizeWith` at 1963.6 ops/sec versus 676.6 ops/sec for `LRUMemoizeWith`, roughly a 2.9x difference. That outcome is expected for this workload: the 50-entry LRU cache never produces a hit against 5000 distinct keys, so the LRU variant pays full computation plus eviction overhead on every call, while Ramda's unbounded cache serves the whole second pass from memory.
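For reference, Ramda's `memoizeWith` takes just two arguments, a key-generating function and the function to memoize. Its behavior can be approximated with a short sketch (a simplification with a hypothetical `memoizeWithSketch` name, not Ramda's actual source):

```javascript
// Ramda-style memoizeWith: derive a cache key from the arguments,
// compute on the first call, and serve every later call from the cache.
// The cache is a plain object with no size bound.
function memoizeWithSketch(keyGen, fn) {
  const cache = Object.create(null);
  return function (...args) {
    const key = keyGen(...args);
    if (!(key in cache)) {
      cache[key] = fn.apply(this, args);
    }
    return cache[key];
  };
}

const double = memoizeWithSketch(i => i, i => i * 2);
double(21); // computes and caches the result
double(21); // served from the cache; fn is not called again
```

The absence of any eviction step is exactly what makes this shape fast in the benchmark above, and exactly what makes it risky when the key space is unbounded.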
Related benchmarks:
Array.prototype.some vs Lodash some_2
lodash.mapKeys vs Native
lodash mapValues vs vanilla Object.keys foreach vs lodash reduce
JS Destruction vs Assign
Set (Lodash vs Lodash/fp vs Immutable) comp. test