MeasureThat.net
Filter vs Set (unique)
(version: 0)
Compare 3 ways to remove duplicates
Comparing performance of:
Filter vs Set spread vs From set
Created: 3 years ago by Guest
Script Preparation code:
var array = Array.from({ length: 60 }, () => Math.floor(Math.random() * 140));

// Each function should deduplicate its argument; the originals accidentally
// closed over the global `array` instead of using `arr`, which is fixed here.
var filterF = function(arr) { return arr.filter((a, b) => arr.indexOf(a) === b); };
var filterS = function(arr) { return [...new Set(arr)]; };
var sf = function(arr) { return Array.from(new Set(arr)); };
Tests:
Filter
const a = filterF(array)
Set spread
const b = filterS(array)
From set
const c = sf(array)
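The three test cases can also be exercised outside the benchmark runner. A minimal Node sketch, mirroring the preparation code above, shows that all three approaches produce the same result because each one preserves first-occurrence order:

```javascript
// Mirrors the benchmark's preparation code: 60 random integers in [0, 140).
const array = Array.from({ length: 60 }, () => Math.floor(Math.random() * 140));

// Filter: keep an element only at its first index (indexOf scan per element, O(n^2)).
const filterF = (arr) => arr.filter((a, b) => arr.indexOf(a) === b);

// Set spread: a Set stores each value at most once; spread it back into an array.
const filterS = (arr) => [...new Set(arr)];

// Array.from on a Set: the same idea without spread syntax.
const sf = (arr) => Array.from(new Set(arr));

const a = filterF(array);
const b = filterS(array);
const c = sf(array);

// Set iteration follows insertion order, and filter keeps first occurrences,
// so all three results are identical.
console.log(JSON.stringify(a) === JSON.stringify(b)); // true
console.log(JSON.stringify(b) === JSON.stringify(c)); // true
```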
Autogenerated LLM Summary (model llama3.2:3b, generated one year ago):
**What is being tested?**

The benchmark compares three ways to remove duplicates from an array:

1. **Filter**: `filter()` with a callback that keeps an element only when its `indexOf` position equals its current index, so only first occurrences survive.
2. **Set spread**: spreading a `Set` built from the array (`[...new Set(arr)]`); a `Set` stores each value at most once, which removes duplicates automatically.
3. **From set**: `Array.from(new Set(arr))`, the same idea expressed without spread syntax.

**Options comparison**

* **Filter**: `indexOf` rescans the array for every element, so this approach is O(n^2). It is simple and works in pre-ES6 environments.
* **Set spread**: building the `Set` is O(n) on average, since hash-based lookups and insertions are O(1) amortized. More efficient than filter, but requires ES6+.
* **From set**: same mechanism and complexity as set spread; also requires ES6+.

**Library usage**

No external library is used.

**Special JS feature or syntax**

The `Set` data structure, `Array.from`, arrow functions, and spread syntax are all ES2015 (ES6) features. (Spread is language syntax, not a `Set` method.)
**Other alternatives**

Other approaches to deduplication that could be benchmarked include:

* Tracking seen values with a `Map` or a plain object instead of a `Set`
* Sorting the array first and skipping adjacent equal elements, at O(n log n) and at the cost of element order
* Combining the two ideas: `filter()` with an auxiliary `Set` of seen values, which keeps the filter style but avoids the O(n^2) `indexOf` scans

Each alternative would require changes to the preparation code and may carry different assumptions about input size and ordering. In summary, this benchmark compares three common approaches for removing duplicates from an array: filter, set spread, and from set. The results indicate their relative performance on the specific hardware and engine that runs them.
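The Map-based alternative mentioned in the summary can be sketched as follows; `uniqueViaMap` is a hypothetical helper for illustration, not part of the benchmark:

```javascript
// Deduplicate by recording each value in a Map the first time it is seen.
// Like Set, Map lookups are O(1) on average, so the whole pass is O(n),
// and first-occurrence order is preserved.
function uniqueViaMap(arr) {
  const seen = new Map();
  const out = [];
  for (const value of arr) {
    if (!seen.has(value)) {
      seen.set(value, true);
      out.push(value);
    }
  }
  return out;
}

console.log(uniqueViaMap([3, 1, 3, 2, 1])); // [3, 1, 2]
```

For plain values a `Set` is the more natural choice; a `Map` becomes useful when each unique value needs associated data, such as a count of occurrences.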
Related benchmarks:
Set vs Filter for unique for me
Filter vs Set (get unique elements)
Filter vs Set (unique elements)
Set from array vs array Filter unique