Filter duplicates
(version: 0)
Comparing performance of:
Set vs Filter+indexOf vs Reduce+includes
Created: 4 years ago by Guest
Tests:
Set
const arr = ['audi', 'audi', 'bmw', 'renault', 'renault', 'audi'];
const a = [...new Set(arr)];
return a;
Filter+indexOf
const arr = ['audi', 'audi', 'bmw', 'renault', 'renault', 'audi'];
const a = arr.filter((brand, index) => arr.indexOf(brand) === index);
return a;
Reduce+includes
const arr = ['audi', 'audi', 'bmw', 'renault', 'renault', 'audi'];
const a = arr.reduce((acc, val) => {
  if (!acc.includes(val)) {
    acc.push(val);
  }
  return acc;
}, []);
return a;
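The three snippets above can be sanity-checked outside the benchmark harness with a standalone script like the one below. The input size (10,000 elements) and the timing labels are arbitrary illustrative choices, not part of the benchmark, and console.time gives only a rough single-run measurement where the harness averages many iterations.

// Build a large input with many duplicates (size chosen arbitrarily).
const input = Array.from({ length: 10000 }, () => 'brand' + Math.floor(Math.random() * 100));

const viaSet = (arr) => [...new Set(arr)];
const viaFilter = (arr) => arr.filter((brand, index) => arr.indexOf(brand) === index);
const viaReduce = (arr) => arr.reduce((acc, val) => {
  if (!acc.includes(val)) {
    acc.push(val);
  }
  return acc;
}, []);

// All three keep the first occurrence of each value, so their outputs match.
console.log(JSON.stringify(viaSet(input)) === JSON.stringify(viaFilter(input))); // true
console.log(JSON.stringify(viaSet(input)) === JSON.stringify(viaReduce(input))); // true

// Rough single-run timings; expect the Set version to pull ahead as the input grows.
console.time('Set');             viaSet(input);    console.timeEnd('Set');
console.time('Filter+indexOf');  viaFilter(input); console.timeEnd('Filter+indexOf');
console.time('Reduce+includes'); viaReduce(input); console.timeEnd('Reduce+includes');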
Autogenerated LLM Summary (model: llama3.2:3b, generated one year ago):

Let's break down the provided benchmark and explain what is being tested.

**Benchmark Overview**

The benchmark measures the performance of three approaches to removing duplicates from an array:

1. **Set**: spread a `new Set(arr)` into a new array of unique elements.
2. **Filter+indexOf**: use `filter()` together with `indexOf()` to keep only the first occurrence of each element.
3. **Reduce+includes**: use `reduce()` and `includes()` to push unseen elements into an accumulator array.

**Trade-offs Compared**

* **Memory usage**: the Set approach allocates an auxiliary data structure, so it can use more memory than the purely array-based approaches.
* **Algorithmic complexity**: the Set approach is O(n) on average. Filter+indexOf and Reduce+includes both perform a linear scan (`indexOf()` or `includes()`) for every element, making each of them O(n²), so they fall behind on large inputs.

**Pros and Cons**

1. **Set**
   * Pros: fastest, most efficient, and scalable; also the most concise.
   * Cons: higher memory usage; requires an additional data structure (the Set).
2. **Filter+indexOf**
   * Pros: simple to implement; uses only built-in array methods.
   * Cons: O(n²) because of the repeated `indexOf()` scans; not suitable for very large arrays.
3. **Reduce+includes**
   * Pros: a single explicit pass with full control over the accumulator.
   * Cons: `includes()` is itself a linear scan, so the overall cost is still O(n²).

**Libraries and Special JS Features**

None of the test cases use external libraries or special JavaScript features beyond standard syntax (ES2015 `Set` and spread).

**Alternative Approaches**

Other ways to remove duplicates from an array include:

1. **indexOf() + concat()**: build the result incrementally, concatenating only elements that `indexOf()` reports as unseen; essentially the same O(n²) pattern as Filter+indexOf.
2. **reduce() with custom logic**: for example, a `Set` used as a seen-set inside the reducer, keeping the single pass but with O(1) membership checks.
3. **A sorting-based approach**: sort the array, then drop elements equal to their predecessor.

These alternatives are not necessarily more efficient or scalable than the tested approaches, but they can provide an interesting comparison for specific use cases. Sketches of the last two follow.
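To make the last two alternatives concrete, here are two possible sketches. The function names (uniqueSeenSet, uniqueSorted) are illustrative, not prescribed by the summary; note that the sorting-based version changes element order, unlike the three benchmarked approaches.

function uniqueSeenSet(arr) {
  const seen = new Set();
  return arr.reduce((acc, val) => {
    // O(1) membership check instead of the O(n) includes() scan.
    if (!seen.has(val)) {
      seen.add(val);
      acc.push(val);
    }
    return acc;
  }, []);
}

function uniqueSorted(arr) {
  // Copy before sorting so the input array is left untouched.
  const sorted = [...arr].sort();
  // After sorting, duplicates are adjacent: keep an element only when
  // it differs from its predecessor.
  return sorted.filter((val, i) => i === 0 || val !== sorted[i - 1]);
}

const brands = ['audi', 'audi', 'bmw', 'renault', 'renault', 'audi'];
console.log(uniqueSeenSet(brands)); // ['audi', 'bmw', 'renault']
console.log(uniqueSorted(brands));  // ['audi', 'bmw', 'renault']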
Related benchmarks:
Filter vs Set (unique)
lodash UniqueWith vs custom filter to remove duplicates
lodash UniqueWith vs custom filter for duplicates
lodash UniqueWith vs custom filter with isEqual for duplicates
lodash UniqueWith vs custom filter for de-duplication