MeasureThat.net
Set vs Filter for unique with 10000 items
(version: 0)
Comparing performance of:
Set spread vs Array from set vs Filter
Created: 3 years ago by Guest
Script Preparation code:
var array = Array.from({length: 10000}, () => Math.floor(Math.random() * 20000));
Tests:
Set spread
const f = [... new Set(array)]
Array from set
const s = new Set(array)
const l = Array.from(s)
Filter
const b = array.filter((i, index) => array.indexOf(i) === index)
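The three test snippets above should all yield the same deduplicated array. A minimal standalone sketch (runnable in Node.js or a browser console, not part of the site's harness) that checks this:

```javascript
// Same setup as the benchmark's preparation code: 10,000 random integers.
const array = Array.from({ length: 10000 }, () => Math.floor(Math.random() * 20000));

// 1. Set spread: spread the Set directly into a new array.
const bySpread = [...new Set(array)];

// 2. Array.from(set): build the Set, then convert it explicitly.
const byFrom = Array.from(new Set(array));

// 3. Filter: keep an element only if indexOf finds it at its own index
//    (i.e. it is the first occurrence). indexOf rescans the array each time.
const byFilter = array.filter((i, index) => array.indexOf(i) === index);

// All three preserve first-occurrence order, so the results are identical.
console.log(bySpread.length, byFrom.length, byFilter.length);
```

Because both `Set` iteration and `filter` keep elements in first-occurrence order, the three result arrays are element-for-element equal, not just equal in length.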
Autogenerated LLM Summary (model llama3.2:3b, generated one year ago):
**Benchmark Definition:** The benchmark compares three JavaScript approaches to removing duplicates from an array:

1. **Set spread**: `const f = [...new Set(array)]`
2. **Array.from(set)**: `const s = new Set(array); const l = Array.from(s)`
3. **Filter**: `const b = array.filter((i, index) => array.indexOf(i) === index)`

**Options Compared:** The benchmark measures the performance of these three approaches on an array of 10,000 random integers.

**Pros and Cons of Each Approach:**

1. **Set spread**: Concise and efficient; the spread operator materializes the `Set` directly into a new array. The only overhead is the creation of the intermediate `Set`.
2. **Array.from(set)**: Builds a `Set` from the input array and converts it to an array with `Array.from()`. More explicit and arguably easier to read; it is functionally equivalent to the spread version, so any performance difference comes down to engine internals.
3. **Filter**: Keeps each element whose first occurrence (`indexOf`) matches its current index. Because `indexOf()` rescans the array for every element, this approach has O(n²) time complexity and scales poorly on large inputs.

**Library and Purpose:** None of these approaches uses an external library.

**Special JavaScript Feature or Syntax:** The spread operator (`...`), introduced in ECMAScript 2015, creates a new array from any iterable, including a `Set`.

**Other Considerations:** When choosing an approach, weigh conciseness, performance, and readability:

* For small datasets, any of these approaches is acceptable.
* For large datasets, the `Set`-based approaches are much faster, since a `Set` deduplicates in roughly O(n) time versus O(n²) for the filter version.
* If you prefer a more explicit style, `Array.from(set)` may read better than the spread form.

**Alternative Approaches:** Other ways to remove duplicates from an array include:

1. Using `reduce()`: `const unique = arr.reduce((acc, curr) => acc.includes(curr) ? acc : acc.concat([curr]), []);` (also O(n²), because `includes()` scans the accumulator on every step)
2. Using `map()` and `Set`: `const set = new Set(arr.map(x => x)); const result = [...set];` (note that `.map(x => x)` is a no-op here, so this reduces to the Set-spread approach)
3. A custom implementation with explicit loops, e.g. tracking seen values in a `Map` or object.

These alternatives are rarely preferable in practice, as they are either slower or more verbose than the `Set`-based approaches.
Related benchmarks:
Unique values of array
Filter vs Set (get unique elements)
Filter vs Set (unique elements)
Set from array vs array Filter unique