filter-map vs reduce vs reduce with destructuring 2
(version: 1)
modified version of `map-filter vs reduce` that switches the order of operations
Comparing performance of:
map-filter vs reduce vs reduce with desctructuring
Created: 6 months ago by Guest
Script Preparation code:
a = [];
for (i = 0; i < 1000; i++) a.push(Number(i) / 1000);
var filtering = x => (x * 114514) % 1 > 0.5;
var mapping = x => x + 0.1919;
var reducing = (acc, x) => {
  if (filtering(x)) acc.push(mapping(x));
  return acc;
};
var reducingWithDestructuring = (acc, x) => {
  if (filtering(x)) {
    return [...acc, mapping(x)];
  }
  return acc;
};
Tests:
map-filter
a.filter(filtering).map(mapping);
reduce
a.reduce(reducing,[]);
reduce with destructuring
a.reduce(reducingWithDestructuring,[]);
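The preparation code and the three test cases above can be combined into a single self-contained script. This is a sketch for running the comparison outside MeasureThat.net (and checking that all three approaches produce identical output); the variable names follow the benchmark, but the equality check at the end is an addition, not part of the measured suite:

```javascript
// Self-contained version of the benchmark's preparation code and tests.
const a = [];
for (let i = 0; i < 1000; i++) a.push(i / 1000);

const filtering = x => (x * 114514) % 1 > 0.5;
const mapping = x => x + 0.1919;
const reducing = (acc, x) => {
  if (filtering(x)) acc.push(mapping(x));
  return acc;
};
const reducingWithDestructuring = (acc, x) =>
  filtering(x) ? [...acc, mapping(x)] : acc;

// The three benchmarked approaches.
const viaFilterMap = a.filter(filtering).map(mapping);
const viaReduce = a.reduce(reducing, []);
const viaSpread = a.reduce(reducingWithDestructuring, []);

// Sanity check: all three should produce identical arrays.
console.log(viaFilterMap.length === viaReduce.length &&
            viaFilterMap.every((v, i) => v === viaReduce[i] && v === viaSpread[i])); // true
```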
Latest run results:
Run details:
Test run date: 6 months ago
User agent:
Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/141.0.0.0 Safari/537.36
Browser/OS:
Chrome 141 on Mac OS X 10.15.7
Test name
Executions per second
map-filter
38933.5 Ops/sec
reduce
43072.5 Ops/sec
reduce with destructuring
6429.1 Ops/sec
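The large gap for the spread variant is consistent with a quadratic number of element copies: `[...acc, x]` copies the entire accumulator on every step. A rough sketch (an illustration added here, not part of the benchmark suite) that counts those copies:

```javascript
// Count how many element copies the spread-based accumulator performs
// over n reduction steps: step i copies the i elements accumulated so far,
// so the total is 0 + 1 + ... + (n-1) = n*(n-1)/2, i.e. O(n^2).
function countSpreadCopies(n) {
  let copies = 0;
  let acc = [];
  for (let i = 0; i < n; i++) {
    copies += acc.length;   // [...acc, i] copies acc.length elements
    acc = [...acc, i];
  }
  return copies;
}

console.log(countSpreadCopies(10));   // 45
console.log(countSpreadCopies(1000)); // 499500
```

By contrast, the mutating `reduce` and the `filter`/`map` pipeline copy each kept element a small constant number of times, which matches their much higher Ops/sec above.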
Autogenerated LLM Summary
(model: gpt-4o-mini, generated 6 months ago):
This benchmark tests three approaches to processing an array of numbers in JavaScript:

1. **`map-filter`**: `a.filter(filtering).map(mapping);`
   - **Description**: Uses `filter` to keep the numbers that pass the filtering function, then `map` to transform the remainder.
   - **Pros**: Readable and expressive, with a clear separation of filtering and mapping concerns; a functional style that is easy to understand and maintain.
   - **Cons**: Creates an intermediate array between the two passes, which can add noticeable overhead on large datasets.

2. **`reduce`**: `a.reduce(reducing, []);`
   - **Description**: Accumulates the result in a single pass, applying both the filter and the mapping inside one reducer.
   - **Pros**: Iterates the array only once and avoids the intermediate array, so it tends to be faster and lighter on memory for large arrays.
   - **Cons**: Less readable, since the filtering and mapping logic are merged into one function.

3. **`reduce with destructuring`**: `a.reduce(reducingWithDestructuring, []);`
   - **Description**: Like the plain `reduce`, but builds the accumulator with spread syntax (`[...acc, mapping(x)]`) whenever a value passes the filter. (Despite the name, this uses the spread operator, not destructuring.)
   - **Pros**: Still a single pass, with modern syntax that some find more appealing.
   - **Cons**: Spreading the accumulator (`...acc`) allocates a new array on every step, which incurs significant overhead on large arrays compared with the mutating `reduce`.

### Benchmark Results

Executions per second for each method on the configuration above:

- **`reduce`**: 43,072 Ops/sec
- **`map-filter`**: 38,933 Ops/sec
- **`reduce with destructuring`**: 6,429 Ops/sec

### Considerations

- **Performance**: `reduce` was the fastest. `reduce with destructuring` performed far worse, most likely because the per-iteration spread copy negates the single-pass advantage of `reduce`.
- **Readability vs. performance**: `map-filter` is the easiest to read but less efficient for large datasets because of the intermediate array; weigh the trade-off against project requirements and expected data sizes.
- **Alternatives**: A traditional `for` loop that filters and maps manually can be optimized further, and libraries such as Lodash provide performance-tuned functional utilities at the cost of an external dependency.

The benchmark illustrates the importance of choosing a method based on both performance requirements and code-maintainability priorities.
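The plain `for`-loop alternative mentioned above can be sketched as follows. The helper names (`filtering`, `mapping`) mirror the benchmark's preparation code, but this variant is an assumption for illustration, not one of the measured test cases:

```javascript
// Manual single-pass filter + map with a plain for loop:
// no intermediate array and no per-step accumulator copies.
const filtering = x => (x * 114514) % 1 > 0.5;
const mapping = x => x + 0.1919;

function filterMapLoop(arr) {
  const out = [];
  for (let i = 0; i < arr.length; i++) {
    const x = arr[i];
    if (filtering(x)) out.push(mapping(x)); // keep and transform in one step
  }
  return out;
}

const a = Array.from({ length: 1000 }, (_, i) => i / 1000);
const expected = a.filter(filtering).map(mapping);
const got = filterMapLoop(a);
console.log(got.length === expected.length &&
            got.every((v, i) => v === expected[i])); // true
```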
Related benchmarks:
filter-map vs reduce vs reduce with destructuring
filter-map vs reduce fixed
filter-map vs reduce PSOW
filter-map vs reduce, fixed.
filter-map vs reduce 2
filter-map vs reduceggg
filter-map vs reduce 100k
filter-map vs reduce with spread
segu: filter-map vs reduce