MeasureThat.net
reducer function performance v2
(version: 0)
Testing ops/sec of creating a new array with a spread initializer vs. accumulator.push() inside Array.prototype.reduce().
Comparing performance of:
New array with spread vs Accumulator push
Created: 6 years ago by Guest
Tests:
New array with spread
const testData = [...Array(100000).keys()];
const evens = testData.reduce((acc, v) => {
  if (v % 2 === 0) {
    return [...acc, v];
  }
  return acc;
}, []);
console.log(evens.length);
Accumulator push
const testData = [...Array(100000).keys()];
const evens = testData.reduce((acc, v) => {
  if (v % 2 === 0) {
    acc.push(v);
  }
  return acc;
}, []);
console.log(evens.length);
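Outside the MeasureThat harness, the asymptotic gap between the two cases can be seen with a plain Node.js timing sketch. This is my own standalone version, not part of the benchmark: the `console.time` labels and the smaller N (so the quadratic spread variant finishes quickly) are assumptions.

```javascript
// Smaller than the benchmark's 100000 so the quadratic case stays fast.
const N = 20000;
const testData = [...Array(N).keys()];

// Quadratic: every matching element copies the whole accumulator so far.
console.time('spread');
const evensSpread = testData.reduce(
  (acc, v) => (v % 2 === 0 ? [...acc, v] : acc),
  []
);
console.timeEnd('spread');

// Linear: one accumulator array, mutated in place.
console.time('push');
const evensPush = testData.reduce((acc, v) => {
  if (v % 2 === 0) acc.push(v);
  return acc;
}, []);
console.timeEnd('push');

// Both produce the same result; only the cost differs.
console.log(evensSpread.length, evensPush.length);
```

Even at this reduced size, the spread variant typically trails by orders of magnitude, which matches the intuition behind the benchmark.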
Latest run results:
No previous run results
This benchmark does not have any results yet. Be the first one to run it!
Autogenerated LLM Summary (model: llama3.2:3b, generated one year ago):
Let's dive into the world of JavaScript microbenchmarks on MeasureThat.net.

**What is being tested?**
The two test cases compare two ways of building a filtered array inside `Array.prototype.reduce()`:
1. **New array with spread**: the reducer returns a brand-new array on every match, copying the accumulator with the spread operator (`...`).
2. **Accumulator push**: the reducer mutates a single accumulator array in place with `push()` and returns it.

**Options compared**
* **New array with spread**: each matching element triggers a full copy of the accumulator built so far, so the reduction does quadratic work and allocates many short-lived arrays.
* **Accumulator push**: the accumulation happens in place on one array, so the reduction is linear with minimal allocation.

**Pros and cons of each approach**
* **New array with spread**:
  + Pros: purely functional style; the reducer never mutates its inputs, which reads cleanly and avoids aliasing bugs.
  + Cons: copying the accumulator on every match makes large datasets slow and allocation-heavy.
* **Accumulator push**:
  + Pros: in-place accumulation, a single allocation, and typically far better throughput on large datasets.
  + Cons: relies on mutation inside the reducer, which some functional codebases avoid.

**Library and syntax considerations**
No external library is involved; `Array.prototype.reduce()` and spread syntax are both part of standard JavaScript.
**Other alternatives**
Other ways to build an array from an existing one include:
* `Array.prototype.slice()` to copy elements from a source array.
* Utility libraries such as Lodash or Ramda for a more functional array-building style.
* `Array.from()` to create an array from any iterable.
However, this benchmark focuses specifically on the two `reduce()` patterns above, both part of the standard JavaScript library.
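For this particular task (keeping the even numbers), the alternatives collapse to one-liners. A brief sketch: `filter()` is not in the summary's list but is the most direct built-in for "keep matching elements", and `Array.from()` with a mapping callback is one of the alternatives named above. The variable names here are mine.

```javascript
const testData = [...Array(100000).keys()];

// filter(): one pass, one result allocation -- no reduce needed.
const evensFilter = testData.filter(v => v % 2 === 0);

// Array.from() with a mapping function, generating the evens directly
// instead of filtering a larger source array.
const evensFrom = Array.from({ length: testData.length / 2 }, (_, i) => i * 2);

console.log(evensFilter.length, evensFrom.length);
```

Both produce the same 50000-element result as the two benchmarked reducers.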
Related benchmarks:
Spread vs mutating
reducer function performance
reduce() push vs reduce() spread
flatMap vs reduce spread vs reduce push