reduce vs filter+map
(version: 0)
Tests the performance of a single reduce() against filter() followed by map().
Comparing performance of:
reduce vs filter+map
Created: 4 years ago by Guest
Script Preparation code:
var items = [];
for (var i = 0; i < 10000; i++) {
  items[i] = { id: i, selected: i % 4 == 0 };
}
Tests:
reduce
items.reduce((array, item) => {
  if (item.selected) {
    array.push(item.id);
  }
  return array;
}, []);
filter+map
items.filter( x => x.selected ).map( x => x.id );
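For reference, a self-contained sketch of the two test cases on a hypothetical 12-element sample (the benchmark itself uses 10,000 items), confirming both approaches produce the same output:

```javascript
// Hypothetical sample data mirroring the benchmark's item shape.
const sample = Array.from({ length: 12 }, (_, i) => ({ id: i, selected: i % 4 === 0 }));

// Single-pass reduce: push matching ids into one accumulator array.
const viaReduce = sample.reduce((acc, item) => {
  if (item.selected) acc.push(item.id);
  return acc;
}, []);

// Two-pass filter + map: intermediate filtered array, then mapped ids.
const viaFilterMap = sample.filter(x => x.selected).map(x => x.id);

console.log(viaReduce);    // [0, 4, 8]
console.log(viaFilterMap); // [0, 4, 8]
```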
Latest run results:
Run details:
(Test run date: 5 months ago)
User agent:
Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/138.0.0.0 YaBrowser/25.8.0.0 Safari/537.36
Browser/OS:
Yandex Browser 25 on Linux
Test name  | Executions per second
reduce     | 38391.4 Ops/sec
filter+map | 25968.1 Ops/sec
Autogenerated LLM Summary (model llama3.2:3b, generated one year ago):
**Benchmark Definition:**

The benchmark tests the performance difference between two approaches to collecting the `id` values of selected items from an array:

1. **`items.reduce()`**: iterates over the `items` array once, pushing the `id` of each selected item into a single accumulator array.
2. **`filter()` + `map()`**: first uses `filter()` to discard non-selected items, then uses `map()` to transform the remaining items into their `id` values.

**Options Compared:**

The benchmark compares the execution speed of these two approaches to determine which is faster.

**Pros and Cons of Each Approach:**

1. **`reduce()`**
   * Pros:
     + More memory-efficient for large datasets, as it allocates only a single output array.
     + Performs the whole transformation in one pass over the data.
   * Cons:
     + The callback mixes filtering and mapping logic and must thread the accumulator through every iteration, which can hurt readability.
2. **`filter()` + `map()`**
   * Pros:
     + Separates the filtering and mapping concerns, making the code easier to understand and maintain.
   * Cons:
     + Creates an intermediate array between the two passes, adding memory allocation and iteration overhead.

**Special JavaScript Features/Syntax:**

Both test cases use arrow functions; no other special syntax requires explanation.

**Other Considerations:**

* Performance: the benchmark measures execution speed (operations per second), which may vary with the dataset size and the JavaScript engine.
* Memory usage: memory is a potential factor in the comparison, although the benchmark does not measure it explicitly.

**Alternatives:**

1. **`forEach()` with an external accumulator**: a plain single-pass loop; imperative, but avoids intermediate arrays.
2. **`flatMap()`**: filters and maps in a single chained call by returning `[]` for rejected items and `[item.id]` for selected ones.
3. **Libraries like Lodash or Ramda**: provide additional array-manipulation utilities and can simplify code, but may introduce their own overhead.

In summary, `reduce()` does the work in a single pass with one output array, while `filter()` + `map()` separates concerns at the cost of an intermediate array. The results above show the single-pass `reduce()` running faster on this dataset; the right choice depends on the readability and performance requirements of the specific use case.
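A minimal sketch of the `forEach()` alternative mentioned above (with a hypothetical 12-element `items` array rather than the benchmark's 10,000 items), plus a `flatMap()` variant that also filters and maps in a single chained call:

```javascript
// Hypothetical data mirroring the benchmark's item shape.
const items = Array.from({ length: 12 }, (_, i) => ({ id: i, selected: i % 4 === 0 }));

// forEach() with an external accumulator: a single pass and one output
// array, at the cost of a side-effecting callback.
const ids = [];
items.forEach(item => {
  if (item.selected) ids.push(item.id);
});

// flatMap() variant: returning [] drops an item, [item.id] keeps its id,
// so filtering and mapping happen in one chained call.
const idsFlat = items.flatMap(item => (item.selected ? [item.id] : []));

console.log(ids);     // [0, 4, 8]
console.log(idsFlat); // [0, 4, 8]
```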
Related benchmarks:
Native map & filter vs reduce with push and desctructuring (10 000 samples )
filter + map vs reduce 123
Filter and Map vs Reduce
Flat map + filter vs. Reduce