reduce collection (version: 0)
some reducer over collection test
Comparing performance of: mutate reducer 1 vs mutate reducer 2 vs mutate reducer 3 vs shallow copy reducer
Created: 6 years ago, by: Registered User
Script Preparation code:
var collection = Array(10000).fill().map(() => ({
  section: { id: 'id_' + ~~(Math.random() * 10) },
  user: { id: 'id_' + ~~(Math.random() * 10) },
}));
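For readers unfamiliar with the `~~` idiom in the preparation code: double bitwise NOT truncates a number toward zero, so `'id_' + ~~(Math.random() * 10)` yields one of the ten ids `id_0` through `id_9`. A minimal sketch (plain Node assumed, collection shrunk to 5 elements for inspection) confirming the shape of the generated data:

```javascript
// Same generator as the preparation code, shrunk to 5 elements.
const collection = Array(5).fill().map(() => ({
  section: { id: 'id_' + ~~(Math.random() * 10) },
  user: { id: 'id_' + ~~(Math.random() * 10) },
}));

// Every element has nested section/user objects with an 'id_<0-9>' id.
for (const { section, user } of collection) {
  console.assert(/^id_\d$/.test(section.id), 'section id shape');
  console.assert(/^id_\d$/.test(user.id), 'user id shape');
}
console.log(collection.length); // 5
```

Because only ten distinct ids exist, the 10,000-element collection groups into at most ten buckets, which matters for interpreting the shallow-copy variant below.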
Tests:
mutate reducer 1
var result = collection.reduce((acc, { section, user }) => {
  if (!acc[section.id]) {
    acc[section.id] = [user.id];
    return acc;
  }
  acc[section.id].push(user.id);
  return acc;
}, {});
mutate reducer 2
var result = collection.reduce((acc, { section, user }) => {
  acc[section.id] = acc[section.id] || [];
  acc[section.id].push(user.id);
  return acc;
}, {});
mutate reducer 3
var result = collection.reduce((acc, { section, user }) => {
  if (!acc[section.id]) {
    acc[section.id] = [];
  }
  acc[section.id].push(user.id);
  return acc;
}, {});
shallow copy reducer
var result = collection.reduce((acc, { section, user }) => ({
  ...acc,
  [section.id]: [...(acc[section.id] || []), user.id],
}), {});
Latest run results: none — this benchmark has no recorded runs yet.
Autogenerated LLM Summary (model llama3.2:3b, generated one year ago):
**Benchmark Overview** The benchmark compares four ways of writing the callback passed to `Array.prototype.reduce` when grouping a collection. The preparation code builds 10,000 objects, each with a `section.id` and a `user.id` drawn from ten possible values; every test reduces the collection into an object mapping each section id to the array of user ids seen under that section.

**Options Compared**

1. **mutate reducer 1**: branches on whether the section key exists; if not, it creates the array with the first user id already in it (`acc[section.id] = [user.id]`), otherwise it pushes onto the existing array. The accumulator is mutated in place.
2. **mutate reducer 2**: normalizes the slot with `acc[section.id] = acc[section.id] || []` on every iteration, then pushes. Also mutates in place.
3. **mutate reducer 3**: uses an `if` guard to initialize an empty array only when the key is missing, then pushes. Also mutates in place.
4. **shallow copy reducer**: returns a fresh object on every iteration by spreading the accumulator (`{ ...acc, ... }`) and spreading the relevant array, never mutating the previous accumulator.

**Pros and Cons**

- The three mutating variants do constant work per element, so the whole reduction is O(n); they differ only in how the empty-slot check is phrased, which is usually measurement noise. Their drawback is that the accumulator is shared mutable state, which matters if the same pattern is used where callers expect immutability (e.g., Redux-style reducers).
- The shallow copy variant leaves every intermediate accumulator untouched, which is easier to reason about in immutable-style code and to debug. The cost is that it copies every existing key and the growing target array on each iteration, pushing the reduction toward O(n²) and generating far more garbage (here bounded by the ten distinct section ids, but still substantially more allocation).
**Library and Special JS Features** No libraries are used. The tests do rely on two newer JavaScript features: parameter destructuring (`(acc, { section, user }) => …`, which unpacks each collection element — not the accumulator — into `section` and `user`), and, in the shallow copy variant, object and array spread (`...`). **Alternatives** Other approaches to the same grouping include: 1. A library helper such as Lodash's `_.groupBy` or Ramda's `R.groupBy`, which are well tested and often optimized. 2. The built-in `Object.groupBy` (ES2024), or a plain `for...of` loop that avoids `reduce` entirely. 3. A `Map` accumulator instead of a plain object, which can behave better with many distinct keys. Keep in mind that these alternatives may not be relevant to this specific benchmark, but they could be worth exploring in other contexts.
Related benchmarks:
map vs fromentries
Test filter and map123
Object.keys(object).includes(key) vs key in object
reduce vs plain cycle