MeasureThat.net
reduce vs foreach with destructuring fork (version: 0)
Comparing performance of: Reduce Destructuring vs Reduce reusing accumulator
Created: 4 years ago by: Guest
Script Preparation code:

```javascript
var data = [];
for (var iterator = 0; iterator < 1000; iterator++) {
  data.push({ id: iterator, name: 'Iteration ' + iterator });
}
```

Tests:

Reduce Destructuring

```javascript
var flattened = data.reduce((entities, item) => {
  return { ...entities, [item.id]: item };
}, {});
console.log(flattened);
```

Reduce reusing accumulator

```javascript
var flattened = data.reduce((entities, item) => {
  entities[item.id] = item;
  return entities;
}, {});
console.log(flattened);
```
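Both test cases produce the same id-keyed map; they differ only in whether the accumulator is copied or mutated on each step. A minimal runnable sketch (using a shortened 3-item dataset for brevity) demonstrating that the results are identical:

```javascript
// Build a small dataset in the same shape as the preparation script.
var data = [];
for (var iterator = 0; iterator < 3; iterator++) {
  data.push({ id: iterator, name: 'Iteration ' + iterator });
}

// Copying accumulator: spreads every accumulated entry on each iteration.
var copied = data.reduce((entities, item) => ({ ...entities, [item.id]: item }), {});

// Mutating accumulator: one object, updated in place.
var mutated = data.reduce((entities, item) => {
  entities[item.id] = item;
  return entities;
}, {});

console.log(JSON.stringify(copied) === JSON.stringify(mutated)); // true
console.log(copied[1].name); // 'Iteration 1'
```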
Autogenerated LLM Summary (model `llama3.1:latest`, generated one year ago):
Let's dive into the benchmarking results.

**Benchmark Definition:**

The benchmark compares two different approaches to processing an array of objects:

1. Using `reduce()` with destructuring (`Reduce Destructuring`)
2. Using `reduce()` without destructuring, but reusing the accumulator object (`Reduce reusing accumulator`)

**Test Cases:**

### Reduce Destructuring

This test case uses the `reduce()` method to create a flattened object from an array of objects. The callback function for `reduce()` uses object spread syntax (`...`) to merge the accumulated object with the current item.

```javascript
var flattened = data.reduce((entities, item) => {
  return { ...entities, [item.id]: item };
}, {});
console.log(flattened);
```

In this approach, a new accumulator object is created on each iteration, which can lead to slower performance due to the overhead of creating and copying many objects.

### Reduce Reusing Accumulator

This test case also uses `reduce()` but reuses the same accumulator object across all iterations. The callback function updates the existing accumulator object with the current item.

```javascript
var flattened = data.reduce((entities, item) => {
  entities[item.id] = item;
  return entities;
}, {});
console.log(flattened);
```

In this approach, only one object is created and updated throughout the iterations, which can lead to better performance.

**Benchmark Results:**

The latest benchmark results show that:

* `Reduce reusing accumulator` performs significantly better than `Reduce Destructuring`, with an executions-per-second (EPS) rate of 64,168.43 compared to 6,093.47 for `Reduce Destructuring`.

**Conclusion:**

* When processing arrays of objects and reusing the accumulator object is feasible, using `reduce()` without destructuring (`Reduce reusing accumulator`) is the more performant approach.
* However, if you need to merge properties from multiple objects into one object without mutation, using `reduce()` with destructuring (`Reduce Destructuring`) might be a better choice, even though it's slower.

**Other Alternatives:**

You can also consider other approaches for processing arrays of objects:

1. **`forEach()`**: Use the `forEach()` method to iterate over the array and update an accumulator object manually.
2. **`for...of` loop**: Use a traditional `for...of` loop to iterate over the array and process each item individually.

Keep in mind that these alternatives might have different performance characteristics depending on your specific use case.
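The alternatives mentioned above can be sketched as follows; this is a minimal illustration (variable names `byIdForEach` and `byIdForOf` are ours, not part of the benchmark), both producing the same id-keyed map as the `reduce()` variants:

```javascript
// Same dataset shape as the benchmark's preparation script.
var data = [];
for (var iterator = 0; iterator < 1000; iterator++) {
  data.push({ id: iterator, name: 'Iteration ' + iterator });
}

// Alternative 1: forEach() updating an external accumulator manually.
var byIdForEach = {};
data.forEach((item) => {
  byIdForEach[item.id] = item;
});

// Alternative 2: a plain for...of loop, processing each item individually.
var byIdForOf = {};
for (const item of data) {
  byIdForOf[item.id] = item;
}

console.log(Object.keys(byIdForEach).length); // 1000
console.log(byIdForOf[42].name); // 'Iteration 42'
```

Like the accumulator-reusing `reduce()`, both mutate a single object, so they avoid the per-iteration copying cost of the spread-based approach.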
Related benchmarks:
Benchmark: flatMap vs reduce vs while vs foreach
Benchmark: flatMap vs reduce vs while vs foreach vs for of
Benchmark: flatMap vs reduce vs while vs foreach (40k)
push vs spread (reduce array)
flatMap vs reduce (push)