MeasureThat.net
filter-then-map vs reduce
(version: 0)
Comparing performance of:
Native filter-map vs Reduce
Created: one year ago by Guest
Script Preparation code:
var data = Array(1000000).fill({ filtering: true, mapping: 42 });
Tests:
Native filter-map
data.filter(({ filtering }) => filtering).map(({ mapping }) => mapping)
Reduce
data.reduce((acc, obj) => { if (obj.filtering) { acc.push(obj.mapping) } return acc }, [])
Latest run results:
Run details:
(Test run date: one year ago)
User agent:
Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/128.0.0.0 Safari/537.36 Edg/128.0.0.0
Browser/OS:
Chrome 128 on Windows
Test name | Executions per second
Native filter-map | 48.6 Ops/sec
Reduce | 97.0 Ops/sec
Autogenerated LLM Summary (model llama3.2:3b, generated one year ago):
Let's break down the provided benchmark and explain what's being tested, what's being compared, and the pros and cons of each approach.

**Benchmark Overview**

The benchmark compares two approaches for processing an array of objects:

1. `filter-then-map`
2. `reduce`

The sample dataset contains 1 million objects, each with `filtering` and `mapping` properties. (Note that `Array().fill()` fills every slot with the same object reference, which is fine for this benchmark's purposes.)

**Library: None**

No external library is used in the benchmark preparation code or the individual test cases.

**Special JS Feature/Syntax: None**

No special JavaScript features or syntaxes are being tested.

**Options Compared**

1. **Native filter-map**: chains `Array.prototype.filter()` and `Array.prototype.map()`. Both are built-in methods the JavaScript engine can optimize, but chaining them iterates the data twice and allocates an intermediate array for the filtered results.
2. **Reduce**: uses `Array.prototype.reduce()` with an accumulator array, performing the filtering and mapping in a single pass.

**Pros and Cons of Each Approach**

1. **Native filter-map**
   * Pros: declarative and readable; each step is a well-known built-in method.
   * Cons: two passes over the data plus an intermediate array, adding iteration and allocation overhead.
2. **Reduce**
   * Pros: processes the data in a single pass and allocates only the final output array.
   * Cons: less declarative; the callback mixes filtering and mapping logic in one place.

**Benchmark Results**

The latest run (Chrome 128 on Windows) shows:

1. **Native filter-map**: 48.6 executions per second
2. **Reduce**: 97.0 executions per second

Reduce is roughly twice as fast here, consistent with it making a single pass over the data and avoiding the intermediate array.
**Other Alternatives**

If you were to modify this benchmark or create a new one, some alternative approaches to consider are:

1. Using `Array.prototype.forEach()` or a plain `for` loop instead of `filter-then-map`
2. Implementing a custom iterator or generator function for processing the array
3. Comparing performance across different JavaScript engines (e.g., V8 vs. SpiderMonkey)
4. Adding additional data processing steps to the filter-map approach to simulate real-world use cases

Keep in mind that the choice of approach depends on the specific requirements and constraints of your project, such as memory usage, CPU cycles, or development complexity.
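The first alternative above, a `forEach`-based single pass, can be sketched as follows; the function name is illustrative only and is not part of the original benchmark.

```javascript
// Hypothetical forEach-based variant: one pass, one output array,
// equivalent in result to both test cases in the benchmark.
function forEachPass(arr) {
  const out = [];
  arr.forEach((obj) => {
    if (obj.filtering) out.push(obj.mapping);
  });
  return out;
}

const sample = [
  { filtering: true, mapping: 1 },
  { filtering: false, mapping: 2 },
  { filtering: true, mapping: 3 },
];
console.log(forEachPass(sample)); // [1, 3]
```

Like `reduce`, this avoids the intermediate array that `filter().map()` allocates, at the cost of a small amount of imperative bookkeeping.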
Related benchmarks:
Filter-Map: Lodash vs Native
Filter-Map: Lodash chain vs Native
CORRECTED: Filter-Map: Lodash vs Native
Filter-Map: Lodash vs Native (smaller array)
js vs lowdash
Comments