MeasureThat.net
Set vs Filter for unique for me
(version: 1)
Comparing performance of:
Set spread vs Array from set vs Filter
Created:
3 years ago
by:
Registered User
Script Preparation code:
var array = Array.from({length: 400000}, () => Math.floor(Math.random() * 140));
Tests:
Set spread
const f = [...new Set(array)]
Array from set
const s = new Set(array)
const l = Array.from(s)
Filter
const b = array.filter((i, index) => array.indexOf(i) === index)
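Run outside the MeasureThat.net harness, the three test snippets can be compared with a quick standalone sketch. The wrapper functions and the `performance.now()` timing loop are illustrative additions, not part of the benchmark itself:

```javascript
// Preparation code from the benchmark: 400,000 values in the range [0, 140)
const array = Array.from({ length: 400000 }, () => Math.floor(Math.random() * 140));

// The three approaches under test, wrapped as functions for timing
const setSpread = (arr) => [...new Set(arr)];
const arrayFromSet = (arr) => Array.from(new Set(arr));
const filterUnique = (arr) => arr.filter((v, i) => arr.indexOf(v) === i);

// Crude one-shot timing; a real benchmark harness warms up and averages many runs
for (const fn of [setSpread, arrayFromSet, filterUnique]) {
  const t0 = performance.now();
  const result = fn(array);
  const t1 = performance.now();
  console.log(`${fn.name}: ${result.length} unique values in ${(t1 - t0).toFixed(2)} ms`);
}
```

All three functions return the unique values in order of first occurrence, so their outputs are interchangeable; only their running times differ.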
Latest run results: none; this benchmark does not have any results yet.
Autogenerated LLM Summary (model llama3.2:3b, generated one year ago):
Let's break down the provided benchmark and explain what's being tested.

**Benchmark Definition**

The benchmark defines three approaches to removing duplicates from an array:

1. **Set spread**: `const f = [...new Set(array)]`
2. **Array.from set**: `const s = new Set(array); const l = Array.from(s)`
3. **Filter**: `const b = array.filter((i, index) => array.indexOf(i) === index)`

All three aim for the same result: an array containing each value from the input exactly once, in order of first occurrence.

**Pros and Cons of Each Approach**

1. **Set spread**: Concise and efficient; the built-in `Set` removes duplicates automatically, so constructing the set and spreading it is roughly linear in the input size. Both `Set` and the spread syntax require ES2015 (ES6) or later.
2. **Array.from set**: Functionally equivalent to the spread version, using `Array.from()` to convert the set back into an array. It has the same ES2015 requirement, since `Set` and `Array.from` were both introduced there; in practice the two Set-based variants perform similarly.
3. **Filter**: Keeps only the first occurrence of each value by checking `indexOf`. Because `indexOf` rescans the array on every iteration, this approach is O(n²) in the worst case and is typically much slower on large inputs.

**Library Usage**

Only built-in JavaScript functions (`Set`, `Array.from`, `Array.prototype.filter`) are used; no external library is required.

**Other Considerations**

* The input array has 400,000 elements drawn from only 140 possible values (`Array.from({length: 400000}, () => Math.floor(Math.random() * 140))`). The large size makes performance differences measurable, and the small value range means the array is highly duplicated.
* Results depend on the browser and JavaScript engine (the summary references desktop Chrome 103), so comparisons should be made within a single environment.

**Alternative Approaches**

A few other ways to remove duplicates from an array:

1. **Using `reduce()`**: `array.reduce((acc, curr) => acc.includes(curr) ? acc : [...acc, curr], [])`. Like the filter version, this is O(n²), because `includes` scans the accumulator on every step.
2. **Using an object or `Map` as a "seen" lookup**, which keeps the work linear without relying on `Set`.
3. **Using a third-party library**, such as Lodash's `uniq()` function.

Keep in mind that the quadratic alternatives do not scale as well as the Set-based approaches tested in this benchmark.
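As a concrete contrast to the quadratic filter and reduce approaches, a linear-time dedup can also be written without `Set` by tracking seen values in a `Map`. This is a sketch; the function name is illustrative:

```javascript
// Linear-time dedup without Set: remember seen values in a Map,
// keeping each value's first occurrence in input order.
function uniqueViaMap(arr) {
  const seen = new Map();
  const out = [];
  for (const value of arr) {
    if (!seen.has(value)) {
      seen.set(value, true);
      out.push(value);
    }
  }
  return out;
}

console.log(uniqueViaMap([1, 2, 2, 3, 1])); // [ 1, 2, 3 ]
```

Each element triggers one `Map` lookup and at most one insertion, so the whole pass is O(n), matching the Set-based approaches in complexity.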
Related benchmarks:
Unique values of array
Filter vs Set (get unique elements)
Filter vs Set (unique elements)
Set from array vs array Filter unique