MeasureThat.net
Unique values (large list)
(version: 0)
Comparing performance of:
Set vs Filter vs Reduce vs Lodash
Created: 5 years ago by Guest
Script Preparation code:
var array = [];
for (var i = 0; i < 1000; i++) {
  array.push(Math.random() * 1000);
}
Tests:
Set
[...new Set(array)];
Filter
array.filter((item, index) => array.indexOf(item) === index);
Reduce
array.reduce((unique, item) => unique.includes(item) ? unique : [...unique, item], []);
Lodash
_.uniq(array)
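All four strategies produce the same unique values in first-occurrence order. A small standalone demonstration (not part of the benchmark page; a hand-rolled `uniq` stands in for Lodash's `_.uniq` so the snippet runs without installing the library):

```javascript
// A tiny array with duplicates makes the behavior easy to inspect.
const array = [3, 1, 3, 2, 1, 2];

const viaSet = [...new Set(array)];
const viaFilter = array.filter((item, index) => array.indexOf(item) === index);
const viaReduce = array.reduce(
  (unique, item) => (unique.includes(item) ? unique : [...unique, item]),
  []
);

// Stand-in for lodash's _.uniq, which also preserves first-occurrence order.
const uniq = arr => [...new Set(arr)];
const viaLodashLike = uniq(array);

console.log(viaSet);        // [3, 1, 2]
console.log(viaFilter);     // [3, 1, 2]
console.log(viaReduce);     // [3, 1, 2]
console.log(viaLodashLike); // [3, 1, 2]
```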
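To get rough numbers locally, a minimal Node.js harness (a sketch, not MeasureThat's actual runner) can time the three native strategies over the same 1000-element random array the preparation code builds:

```javascript
// Build the input exactly as the benchmark's preparation code does.
const array = [];
for (let i = 0; i < 1000; i++) array.push(Math.random() * 1000);

const candidates = {
  Set: a => [...new Set(a)],
  Filter: a => a.filter((item, index) => a.indexOf(item) === index),
  Reduce: a =>
    a.reduce(
      (unique, item) => (unique.includes(item) ? unique : [...unique, item]),
      []
    ),
};

for (const [name, fn] of Object.entries(candidates)) {
  const start = process.hrtime.bigint();
  for (let i = 0; i < 100; i++) fn(array); // repeat to smooth out noise
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`${name}: ${elapsedMs.toFixed(2)} ms for 100 runs`);
}
```

Such ad-hoc timings lack the statistical rigor of a benchmark runner (warm-up, outlier rejection), so treat them as indicative only.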
Autogenerated LLM Summary (model: llama3.2:3b, generated one year ago):
Let's break down what's being tested in this benchmark.

**What is being tested?** The benchmark measures four approaches to removing duplicates from an array of 1000 random numbers:

1. `Set`: spreads the array into a new `Set` object, which discards duplicates automatically.
2. `Filter`: uses `filter()` with `array.indexOf(item) === index` to keep only the first occurrence of each value.
3. `Reduce`: uses `reduce()` to accumulate unique elements into a new array.
4. `_.uniq` (Lodash): delegates deduplication to the Lodash utility library.

**Pros and cons**

1. **Set**: Pros: fast; membership checks are O(1) on average, so the whole pass is roughly O(n); easy to read. Cons: allocates an intermediate `Set` object.
2. **Filter**: Pros: simple, dependency-free implementation. Cons: `indexOf` rescans the array for every element, making the approach O(n²).
3. **Reduce**: Pros: composes with other reduction logic and gives full control over the accumulation. Cons: `includes()` plus spreading the accumulator on every unique element makes it O(n²) in time and heavy on allocations.
4. **`_.uniq` (Lodash)**: Pros: widely used, well tested, and internally optimized. Cons: requires an external dependency.

**Library and special JS features** The benchmark uses the Lodash library for its `_.uniq` function; Lodash is a utility library that provides a collection of helpers, including `uniq`. The native tests rely only on standard array and `Set` methods.

**Alternatives** Other options include tracking seen values manually in a `forEach()` loop, or storing unique elements in a `Map` keyed by value. A `Map`- or `Set`-backed loop can match the `Set` approach's O(n) performance; the main cost of a hand-rolled solution is the extra code to write and maintain.
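The `Map`-based alternative mentioned above can be sketched as follows (a hypothetical helper, not one of the benchmark's test cases); keying a `Map` by value gives O(n) deduplication while preserving first-occurrence order:

```javascript
// Deduplicate via a Map keyed by value; insertion order is preserved,
// so the result keeps each element's first occurrence.
function uniqueViaMap(arr) {
  const seen = new Map();
  arr.forEach(item => {
    if (!seen.has(item)) seen.set(item, true);
  });
  return [...seen.keys()];
}

console.log(uniqueViaMap([1, 2, 2, 3, 1])); // [1, 2, 3]
```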
Related benchmarks:
lodash test
Array.Sort vs Math.Min-Max
Methods to remove duplicates from array (fork)