MeasureThat.net
Unique values big list
(version: 0)
Comparing performance of:
Set vs Filter vs Reduce vs Lodash
Created: 2 years ago by: Registered User
Script Preparation code:
// Note: every element is the same current-timestamp string (aside from a
// possible millisecond rollover), so the array is almost entirely duplicates.
var array = new Array(1000).fill(null).map(() => new Date().getTime().toString());
Tests:
Set
[...new Set(array)];
Filter
array.filter((item, index) => array.indexOf(item) === index);
Reduce
array.reduce((unique, item) => unique.includes(item) ? unique : [...unique, item], []);
Lodash
_.uniq(array)
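Before comparing speed, it is worth confirming the approaches agree on output. A minimal Node.js sketch of the three native test cases (Lodash's `_.uniq(array)` behaves the same way but is omitted here to avoid the external dependency):

```javascript
// Same setup as the benchmark: 1000 copies of the current timestamp string,
// so each approach should collapse the array to one (or, rarely, two) values.
const array = new Array(1000).fill(null).map(() => new Date().getTime().toString());

const bySet = [...new Set(array)];
const byFilter = array.filter((item, index) => array.indexOf(item) === index);
const byReduce = array.reduce(
  (unique, item) => (unique.includes(item) ? unique : [...unique, item]),
  []
);

// All three preserve first-seen order and produce identical results.
console.log(JSON.stringify(bySet) === JSON.stringify(byFilter)); // true
console.log(JSON.stringify(bySet) === JSON.stringify(byReduce)); // true
```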
Autogenerated LLM Summary (model: llama3.2:3b, generated one year ago):
Let's dive into explaining the provided benchmark.

**What is tested?**

The benchmark compares four approaches to removing duplicate values from a 1000-element array of strings:

1. Spreading JavaScript's built-in `Set` into a new array
2. `Array.prototype.filter`, keeping an element only when `indexOf` returns its current index (i.e. this is its first occurrence)
3. `Array.prototype.reduce` with a callback that appends an element to the accumulator only if it is not already included
4. The `uniq` function from the Lodash library

Note that the preparation code fills the array with the current timestamp, so the values are almost entirely duplicates rather than 1000 unique values; this mainly exercises the scanning cost of each approach, not the cost of building a large result.

**Pros and cons**

* `Set`: fast and concise. A `Set` checks membership in roughly constant time per element, so the whole pass is O(n), and it uses SameValueZero comparison, which is usually what you want for primitives.
* `filter` + `indexOf`: simple to write, but `indexOf` rescans the array from the start for every element, making the approach O(n²) and noticeably slower on large arrays.
* `reduce` + `includes`: flexible and allows extra logic in the callback, but also O(n²) because `includes` is a linear scan, and spreading the accumulator (`[...unique, item]`) copies it on every insertion, adding further overhead.
* Lodash `_.uniq`: a convenient, well-tested one-liner, but it adds an external dependency, and for large arrays of primitives it is typically no faster than the native `Set` approach.

**Other considerations**

* The benchmark does not cover edge cases such as empty arrays, single-element arrays, or arrays of objects (where `Set` deduplicates by reference, not by value).

**Library usage**

The Lodash library is used for the `uniq` test: it takes an array and returns a new array with duplicate values removed, preserving first-seen order. Its inclusion adds an external dependency to the test.
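For a rough local comparison outside MeasureThat, the O(n) vs O(n²) gap can be observed with `console.time` (a minimal sketch, not the site's harness; a larger array with real duplicates is used here so the difference is visible, and absolute numbers vary by engine):

```javascript
// 20000 elements drawn from 5000 distinct string values.
const big = Array.from({ length: 20000 }, (_, i) => String(i % 5000));

console.time('Set');           // O(n): one pass with constant-time lookups
const viaSet = [...new Set(big)];
console.timeEnd('Set');

console.time('filter+indexOf'); // O(n^2): indexOf rescans for every element
const viaFilter = big.filter((item, index) => big.indexOf(item) === index);
console.timeEnd('filter+indexOf');

console.log(viaSet.length, viaFilter.length); // 5000 5000
```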
**Special JS features or syntax**

The tests rely on standard modern JavaScript (ECMAScript 2015+): `Set`, spread syntax, and arrow functions, plus `Array.prototype.includes` (ES2016) in the reduce test.

**Alternatives**

Other ways to remove duplicates from an array include:

* Tracking seen values in a `Map` or plain object while iterating, which keeps the O(n) behavior of `Set` while allowing a custom key function.
* Lodash's `_.uniqBy`, or a hand-written key function, when deduplicating objects by a property rather than by reference.

The benchmark's results can be used to compare the performance of these approaches and make informed decisions about which one to use in specific scenarios.
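The seen-values alternative can be sketched as follows (`dedupeBy` is an illustrative helper written for this summary, not part of the benchmark or of Lodash):

```javascript
// Single-pass dedup that tracks seen keys in a Set: O(n) like the native
// Set approach, but with an optional key function for deduplicating
// objects by a property instead of by reference.
function dedupeBy(items, keyFn = (x) => x) {
  const seen = new Set();
  const out = [];
  for (const item of items) {
    const key = keyFn(item);
    if (!seen.has(key)) {
      seen.add(key);
      out.push(item);
    }
  }
  return out;
}

console.log(dedupeBy([1, 2, 2, 3, 1]));                     // [ 1, 2, 3 ]
console.log(dedupeBy([{ id: 1 }, { id: 1 }], (o) => o.id)); // [ { id: 1 } ]
```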
Related benchmarks:
Set vs Array for unique list2
unique elements in array using filter v2
unique elements in array using filter v2.3
unique elements in array using filter - large array
Array of strings, null, and ints lodash uniq vs set