Custom Reduce vs Vanilla JS Reduce vs Lodash Reduce with Map
(version: 3)
Comparing the performance of the reduce functions on a Map
Comparing performance of:
Vanilla JS Reduce on a Map vs Lodash Reduce on a Map vs Custom Reduce on a Map
Created: 3 years ago by Registered User
HTML Preparation code:
<script src='https://cdnjs.cloudflare.com/ajax/libs/lodash.js/4.17.21/lodash.min.js'></script>
Script Preparation code:
const reduce = (map, reducer, accumulator) => {
  let iter = map[Symbol.iterator]();
  let item;
  // When no accumulator is supplied, seed it with the first value,
  // mirroring Array.prototype.reduce without an initial value.
  if (accumulator === undefined && (item = iter.next())) {
    if (item.done) return accumulator;
    accumulator = item.value[1];
  }
  while ((item = iter.next()) && !item.done)
    accumulator = reducer(accumulator, item.value[1], item.value[0]);
  return accumulator;
};

// One million random integer values keyed by index
const randomMap = new Map(Array.from({ length: 1000000 }, (_, i) => [i, Math.floor(Math.random() * 100)]));

const sumValues = (a, b) => a + b;        // reducer for the custom reduce (receives values)
const sumKeyValues = (a, b) => a + b[1];  // reducer for arrays of [key, value] entries
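A minimal usage sketch (not part of the benchmark) of the custom reduce from the preparation code, applied to a small Map, including the seeding behavior when no accumulator is supplied:

```javascript
// Same custom reduce as in the preparation code above.
const reduce = (map, reducer, accumulator) => {
  let iter = map[Symbol.iterator]();
  let item;
  if (accumulator === undefined && (item = iter.next())) {
    if (item.done) return accumulator;
    accumulator = item.value[1]; // seed with the first value
  }
  while ((item = iter.next()) && !item.done)
    accumulator = reducer(accumulator, item.value[1], item.value[0]);
  return accumulator;
};
const sumValues = (a, b) => a + b;

const small = new Map([['a', 1], ['b', 2], ['c', 3]]);
const withSeed = reduce(small, sumValues, 0); // explicit initial accumulator
const noSeed = reduce(small, sumValues);      // first value seeds the fold
// both evaluate to 6
```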
Tests:
Vanilla JS Reduce on a Map
Array.from(randomMap).reduce(sumKeyValues, 0);
Lodash Reduce on a Map
_.reduce(_.toArray(randomMap), sumKeyValues, 0);
Custom Reduce on a Map
reduce(randomMap, sumValues, 0);
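A small sketch (an illustration, not one of the benchmark's test cases) of why the variants differ: the vanilla and Lodash tests first materialize the Map into an array of `[key, value]` entries, while the custom reduce walks the Map's iterator directly with no intermediate array.

```javascript
const m = new Map([[0, 10], [1, 20], [2, 30]]);
const sumKeyValues = (a, b) => a + b[1];

// Vanilla-style: Array.from copies every entry into a new array first.
const entries = Array.from(m);                    // [[0, 10], [1, 20], [2, 30]]
const vanilla = entries.reduce(sumKeyValues, 0);  // 60

// Direct-iteration equivalent of the custom reduce: no intermediate array.
let direct = 0;
for (const [, value] of m) direct += value;       // 60
```

At one million entries, that intermediate copy is what the vanilla and Lodash tests pay for on every run.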
Latest run results:
Run details: (Test run date: one year ago)
User agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36
Browser/OS: Chrome 131 on Windows
Test name: Executions per second
Vanilla JS Reduce on a Map: 5.0 Ops/sec
Lodash Reduce on a Map: 10.5 Ops/sec
Custom Reduce on a Map: 104.6 Ops/sec
Autogenerated LLM Summary (model: llama3.2:3b, generated one year ago):
Let's break down the provided benchmark definition and test cases.

**Benchmark Definition**

The test compares three ways of reducing a large `Map` (1,000,000 integer entries, built in the preparation code as `randomMap`) to the sum of its values:

1. **Vanilla JS**: `Array.from(randomMap)` materializes the Map into an array of `[key, value]` entries, which is then folded with `Array.prototype.reduce` and the `sumKeyValues` reducer.
2. **Lodash**: `_.reduce(_.toArray(randomMap), ...)` likewise converts the Map into an array of entries before folding it.
3. **Custom reduce**: the hand-written `reduce` from the preparation code walks the Map's iterator directly, passing each value (and key) to the `sumValues` reducer without building an intermediate array.

**Pros and Cons**

1. **Vanilla JS (`Array.from` + `reduce`)**
   * Pros: idiomatic, dependency-free, easy to read.
   * Cons: allocates a million-entry array of `[key, value]` pairs before any reduction happens, which dominates the runtime at this size (5.0 Ops/sec measured).
2. **Lodash (`_.toArray` + `_.reduce`)**
   * Pros: a consistent API across collection types.
   * Cons: also materializes the entries as an array first, and adds a library dependency (10.5 Ops/sec measured).
3. **Custom reduce**
   * Pros: iterates the Map once with no intermediate allocation, which explains its roughly 10-20x lead in the measured results (104.6 Ops/sec).
   * Cons: more code to maintain, and its seeding behavior (the first value is used when no accumulator is supplied) must be kept consistent with `Array.prototype.reduce`.

**Library**

Only the Lodash test uses an external library (lodash 4.17.21, loaded from a CDN in the HTML preparation code); the other two rely solely on built-in JavaScript.

**Special JS feature or syntax**

The custom reduce drives the Map's iterator manually via `Symbol.iterator`, and the benchmark also uses arrow functions and `Map` itself, all ES6 features.

**Other alternatives**

A plain `for...of` loop over `map.values()` would sum the values without an intermediate array, and `[...randomMap.values()].reduce(sumValues, 0)` would at least avoid carrying the keys through the fold. These keep the vanilla style while removing most of the entry-array overhead.
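A plain `for...of` loop over the Map's values is another way to compute the same sum without building an intermediate entries array; a minimal sketch (an illustration, not one of the benchmark's test cases):

```javascript
// A small Map with values 1..5 (stand-in for the benchmark's randomMap).
const m = new Map(Array.from({ length: 5 }, (_, i) => [i, i + 1]));

// Sum the values by iterating the Map's values() iterator directly.
let sum = 0;
for (const v of m.values()) sum += v;
// sum === 15
```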
Related benchmarks:
Spread vs mutating
flatMap vs reduce using push
flatMap vs reduce using push spread
Object set vs new spread when reducing over results