MeasureThat.net
Flat map + filter vs. Reduce (version: 0)
Comparing performance of: flat map + filter vs reduce
Created: 2 years ago by Guest
Script Preparation code:
// 10,000 objects; each has a numeric name and a ~50% chance of being assigned
var arr = Array.from({length: 10000}, (v, i) => ({name: i, assigned: Math.random() < 0.5}));
Tests:
flat map + filter
arr.flatMap((o) => (o.assigned ? [o.name] : [])).filter(Boolean);
reduce
arr.reduce((a, o) => (o.assigned && a.push(o.name), a), []);
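The two test bodies above can be run standalone on a small, deterministic array so their outputs can be inspected. This sketch uses a hypothetical 10-element input (not the benchmark's random one); note that `filter(Boolean)` in the first approach also drops falsy names such as `0`, so the two approaches are not strictly equivalent:

```javascript
// Small deterministic input: even-indexed objects are "assigned"
const arr = Array.from({ length: 10 }, (v, i) => ({ name: i, assigned: i % 2 === 0 }));

// flatMap + filter: emit [name] for assigned items, [] otherwise, then drop falsy values
const viaFlatMap = arr.flatMap((o) => (o.assigned ? [o.name] : [])).filter(Boolean);

// reduce: push assigned names into one accumulator array via the comma operator
const viaReduce = arr.reduce((a, o) => (o.assigned && a.push(o.name), a), []);

console.log(viaFlatMap); // [2, 4, 6, 8] — the name 0 was dropped by filter(Boolean)
console.log(viaReduce);  // [0, 2, 4, 6, 8]
```

Because every `o.name` here is truthy except `0`, the benchmark's timing comparison is still fair, but the semantic difference is worth knowing before copying either snippet.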
Latest run results:
Run details: (Test run date: 11 months ago)
User agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/136.0.0.0 Safari/537.36
Browser/OS: Chrome 136 on Windows
Test name            Executions per second
flat map + filter    2513.3 Ops/sec
reduce               7419.6 Ops/sec
Autogenerated LLM Summary (model llama3.2:3b, generated one year ago):
**Benchmark Definition and Options**

The benchmark measures two approaches to collecting the `name` of every assigned object in an array:

1. **Flat map + filter**: uses `flatMap()` to emit `[o.name]` for assigned objects and `[]` otherwise, then `filter(Boolean)` to drop falsy values. Note that `filter(Boolean)` also removes falsy names such as `0`, so its output is not strictly identical to the reduce version's.
2. **Reduce**: uses `reduce()` to push assigned names into a single accumulator array in one pass.

**Pros and Cons of Each Approach**

* **Flat map + filter**:
  + Pros: declarative method chain that is easy to read.
  + Cons: allocates a small intermediate array per element and makes a second pass for `filter()`, adding overhead.
* **Reduce**:
  + Pros: a single pass with one accumulator array and no intermediate allocations.
  + Cons: the comma-operator accumulator is terse but arguably harder to read.

**Other Considerations**

In this run, `reduce` was roughly three times faster (7419.6 vs. 2513.3 ops/sec), which is consistent with the per-element allocation overhead of `flatMap()` plus the extra `filter()` pass. Exact ratios will vary with the JavaScript engine and input size.

**Library and Special JS Features**

No external libraries are used; the benchmark relies only on standard array methods. Note that `Array.prototype.flatMap` was introduced in ES2019, so very old engines do not support it.

**Alternative Approaches**

Other ways to perform the same task include:

1. Using `forEach()` and pushing matching names into a new array.
2. Using `filter()` followed by `map()` (or `map()` followed by `filter()`), at the cost of one intermediate array.

In conclusion, the benchmark compares two common approaches to filtering and projecting an array of objects. The choice depends on your priorities: performance and allocation behavior favor `reduce` (or a plain loop), while readability may favor the method-chain style.
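The alternative approaches mentioned in the summary can be sketched as follows, reusing the same `arr` shape as the benchmark's preparation code (variable names here are illustrative, not part of the benchmark):

```javascript
// Same shape as the benchmark's preparation code
const arr = Array.from({ length: 10000 }, (v, i) => ({ name: i, assigned: Math.random() < 0.5 }));

// 1. forEach with an explicit output array — single pass, no intermediate arrays
const viaForEach = [];
arr.forEach((o) => { if (o.assigned) viaForEach.push(o.name); });

// 2. filter then map — readable, but allocates one intermediate array
const viaFilterMap = arr.filter((o) => o.assigned).map((o) => o.name);

// Both produce the assigned names in index order
console.log(viaForEach.length === viaFilterMap.length); // true
```

Both variants keep falsy names like `0`, matching the reduce version's semantics rather than the `filter(Boolean)` version's.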
Related benchmarks:
flatMap vs reduce vs filter.map
flatMap vs reduce vs filter.map v2
flatMap vs reduce filtering performance
flatMap vs reduce vs loop filtering vs filter/map performance