MeasureThat.net
Methods to remove duplicate objects with a key from an array
(version: 0)
Comparing performance of:
Using a Set vs Using filter and findIndex vs Using reduce
Created: 2 years ago by Guest
Script Preparation code:
var array = [];
for (let i = 0; i < 100000; i++) {
  array.push({
    id: Math.floor((Math.random() * 10) + 1),
    someOtherKey: "someOtherValue"
  });
}
Tests:
Using a Set
function removeDuplicates(items) {
  const seenIds = new Set();
  const result = [];
  for (const item of items) {
    if (!seenIds.has(item.id)) {
      seenIds.add(item.id);
      result.push(item);
    }
  }
  return result;
}
removeDuplicates(array);
Using filter and findIndex
array.filter((item, index, array) =>
  array.findIndex(i => i.id === item.id) === index
);
Using reduce
array.reduce((acc, currentItem) => {
  const isDuplicate = acc.some(item => item.id === currentItem.id);
  if (!isDuplicate) {
    acc.push(currentItem);
  }
  return acc;
}, []);
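Outside the benchmark harness, the three test cases can be run together on a smaller input as a quick sanity check that they produce the same deduplicated result. This is a sketch for local reproduction, not part of the original benchmark; it reuses the benchmark's own code with a reduced array size so the quadratic variants stay fast:

```javascript
// Smaller input than the benchmark's 100,000 items, same shape.
const array = [];
for (let i = 0; i < 1000; i++) {
  array.push({
    id: Math.floor((Math.random() * 10) + 1),
    someOtherKey: "someOtherValue"
  });
}

// Approach 1: Set of seen ids, single pass.
function removeDuplicates(items) {
  const seenIds = new Set();
  const result = [];
  for (const item of items) {
    if (!seenIds.has(item.id)) {
      seenIds.add(item.id);
      result.push(item);
    }
  }
  return result;
}
const viaSet = removeDuplicates(array);

// Approach 2: keep an item only if its first occurrence is at this index.
const viaFilter = array.filter((item, index, arr) =>
  arr.findIndex(i => i.id === item.id) === index
);

// Approach 3: accumulate unique items with reduce.
const viaReduce = array.reduce((acc, currentItem) => {
  if (!acc.some(item => item.id === currentItem.id)) {
    acc.push(currentItem);
  }
  return acc;
}, []);

// All three keep the first occurrence of each id, in original order.
console.log(JSON.stringify(viaSet) === JSON.stringify(viaFilter) &&
            JSON.stringify(viaSet) === JSON.stringify(viaReduce));
```

Because every approach keeps the first occurrence of each `id` and preserves input order, the three result arrays are element-for-element identical.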
Latest run results:
Run details (test run date: 2 years ago)
User agent:
Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36
Browser/OS:
Chrome 122 on Windows
Test name                    Executions per second
Using a Set                  801.4 Ops/sec
Using filter and findIndex   428.6 Ops/sec
Using reduce                 450.6 Ops/sec
Autogenerated LLM Summary (model: llama3.2:3b, generated one year ago):
**Benchmark Overview**

The benchmark tests three approaches to removing duplicates from an array of objects based on a specific key (here, `id`), measuring which is the most efficient.

**Options Compared**

1. **Using a Set**: Tracks the `id`s seen so far in a `Set`. It iterates through the array once, pushing each item to the result array only if its `id` is not already in the set.
2. **Using filter and findIndex**: Keeps an item only if `findIndex()` reports that its first occurrence is at the current index, i.e. it filters out every later duplicate.
3. **Using reduce**: Accumulates the result array with `reduce()`, using `some()` to check whether each item's `id` is already in the accumulator before pushing it.

**Pros and Cons**

1. **Using a Set**:
   * Pros: a single pass with O(1) average set lookups, so O(n) overall; this matches its lead in the measured results.
   * Cons: extra memory for the set of seen keys.
2. **Using filter and findIndex**:
   * Pros: a concise, declarative one-liner.
   * Cons: `findIndex()` rescans the array from the start for every element, giving O(n²) behavior on large inputs.
3. **Using reduce**:
   * Pros: no external state; the accumulator holds only unique items, so `some()` scans at most the number of distinct keys.
   * Cons: still quadratic in the worst case (many distinct keys), and measurably slower than the Set approach here.

**Library and Special JS Features**

None of these approaches relies on a library; they use only built-in JavaScript features: `Set`, `filter()`, `findIndex()`, `some()`, and `reduce()`.

**Other Considerations**

* The preparation code draws `id`s from 1–10 over 100,000 items, so the input is dominated by duplicates; relative timings may differ on mostly-unique data.
* The test cases don't cover edge cases such as empty arrays or arrays with a single unique item.
* Results vary with the JavaScript engine, browser, and system configuration.

**Alternatives**

1. **Using a `Map`**: Like a set, but can also associate each key with its item (or a merged value).
2. **Using `indexOf()`**: Track seen keys in a plain array and check membership with `indexOf()`; simple, but also quadratic.
3. **Sorting first**: Sort the array by the key so duplicates become adjacent, then keep the first of each run (O(n log n)).

Overall, these approaches trade off performance, memory usage, and implementation complexity; the right choice depends on input size and readability preferences.
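The Map-based alternative mentioned in the summary can be sketched as follows. `dedupeByKey` is a hypothetical helper name, not part of the benchmark; like the Set approach, it keeps the first item seen for each key in a single O(n) pass, but generalizes to any key:

```javascript
// Deduplicate an array of objects by an arbitrary key using a Map.
// The first item seen for each key wins; Map preserves insertion order.
function dedupeByKey(items, key) {
  const byKey = new Map();
  for (const item of items) {
    if (!byKey.has(item[key])) {
      byKey.set(item[key], item);
    }
  }
  return [...byKey.values()];
}

const sample = [
  { id: 1, v: "a" },
  { id: 2, v: "b" },
  { id: 1, v: "c" }, // duplicate id; dropped
];
console.log(dedupeByKey(sample, "id")); // keeps the first { id: 1 } entry
```

Unlike a plain `Set` of seen ids, the `Map` also retains the winning item per key, which is convenient if you later need to merge or replace entries instead of simply keeping the first.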
Related benchmarks:
Methods to remove duplicates from array
Methods to remove duplicates from array (fork)
Methods to remove duplicates from array (test)
Methods to remove duplicates from array x2