MeasureThat.net
Deduplicate 4
(version: 0)
Improve perf of sort
Comparing performance of:
Sort vs Set
Created: 2 years ago by Guest
Tests:
Sort
let array = [];
for (let i = 0; i < 100000; i++) {
  array.push(1);
  array.push(i);
}
array.sort((a, b) => a - b);
let newArray = [array[0]];
for (let i = 1; i < array.length; i++) {
  if (array[i - 1] !== array[i]) {
    newArray.push(array[i]);
  }
}
Set
const array = [];
for (let i = 0; i < 100000; i++) {
  array.push(1);
  array.push(i);
}
const out = [...new Set(array)];
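As a sanity check (not part of the original benchmark), a minimal sketch verifying that the two test cases produce the same deduplicated values on a scaled-down version of the same input:

```javascript
// Small input with duplicates (the benchmark uses 200,000 entries).
const input = [];
for (let i = 0; i < 10; i++) {
  input.push(1);
  input.push(i);
}

// Sort-based dedup: sort numerically, then keep each element
// only if it differs from its predecessor.
const sorted = [...input].sort((a, b) => a - b);
const dedupSort = [sorted[0]];
for (let i = 1; i < sorted.length; i++) {
  if (sorted[i - 1] !== sorted[i]) dedupSort.push(sorted[i]);
}

// Set-based dedup: a Set drops duplicates; the spread operator
// restores an array. Set keeps insertion order, so sort the
// result before comparing against the sort-based output.
const dedupSet = [...new Set(input)].sort((a, b) => a - b);

console.log(JSON.stringify(dedupSort) === JSON.stringify(dedupSet)); // true
```

Both arrays contain the unique values 0 through 9, so the comparison holds.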
Latest run results:
Run details: test run date: one year ago
User agent:
Mozilla/5.0 (Android 14; Mobile; rv:135.0) Gecko/135.0 Firefox/135.0
Browser/OS:
Firefox Mobile 135 on Android
Test name | Executions per second
Sort      | 75.8 Ops/sec
Set       | 56.4 Ops/sec
Autogenerated LLM Summary (model llama3.2:3b, generated one year ago):
I'll break down the provided benchmark definition and test cases, explaining what's being tested, the compared options, pros/cons, and other considerations.

**Benchmark Definition:**

The benchmark compares two approaches to deduplicating an array: a manual approach that sorts the array and then removes adjacent duplicates in a loop (`Sort`), versus the built-in `Set` data structure (`Set`). The goal is to measure which approach executes faster.

**Options Compared:**

1. **Sort-based deduplication (Sort):**
   * Sorts the array numerically, then iterates once through it.
   * Uses a conditional (`if`) to keep each element only when it differs from its predecessor.
2. **Built-in Set data structure (Set):**
   * Constructs a `Set` from the array, which discards duplicates automatically, then spreads it back into an array (`[...new Set(array)]`).
   * Leverages the engine's optimized duplicate-removal mechanism.

**Pros and Cons:**

1. **Sort-based deduplication (Sort):**
   * Pros:
     + Easy to understand and implement.
     + The loop can be modified or extended as needed (e.g. custom equality).
   * Cons:
     + Requires an O(n log n) sort before the linear deduplication pass.
     + Changes the order of the elements.
2. **Built-in Set data structure (Set):**
   * Pros:
     + Concise; duplicate removal is optimized by the engine.
     + Eliminates the manual loop and conditional statements, and preserves the order of first occurrences.
   * Cons:
     + Less flexible, as it relies on the built-in `Set` semantics (strict value equality).

**Library/Functionality Used:** none beyond the JavaScript standard library.

**Special JS Feature/Syntax:** the `Set` test case uses the spread operator (`...`) to convert the `Set` back into an array; otherwise the code is straightforward.

**Other Considerations:**

1. **Array size:** both test cases push 200,000 elements (100,000 copies of `1` interleaved with the values 0 through 99,999).
2. **Execution time:** the benchmark reports operations per second for each approach, providing a relative comparison between the two methods.

**Alternatives:** If you were to rewrite this benchmark or explore alternative approaches, some options could include:

1. Other data structures, such as a `Map` keyed by value (note that `WeakSet` is not suitable here, since it can only hold objects, not primitive numbers).
2. Custom sorting algorithms (e.g. QuickSort, MergeSort) for comparison.
3. Caching mechanisms to avoid re-deduplicating unchanged data.
4. Parallel processing techniques (e.g. web workers) for very large arrays.

Keep in mind that the choice of alternative approach depends on the specific requirements and constraints of your use case.
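To make the first alternative concrete, here is an illustrative sketch (not part of the benchmark) of Map-based deduplication: values are used as `Map` keys, so duplicates overwrite themselves, and the keys are then read back as an array. First-occurrence order is preserved and no sort is needed.

```javascript
// Deduplicate by using each value as a Map key; inserting a
// duplicate key is a no-op for the key set, so reading the keys
// back yields each value once, in first-occurrence order.
function dedupWithMap(arr) {
  const seen = new Map();
  for (const v of arr) seen.set(v, true);
  return [...seen.keys()];
}

console.log(dedupWithMap([1, 0, 1, 2, 2, 3])); // [1, 0, 2, 3]
```

In practice `[...new Set(arr)]` does the same job more directly; a `Map` only earns its keep when you also need to track something per value, such as an occurrence count.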
Related benchmarks:
Sorting algorithms comparison (source: https://www.measurethat.net/Benchmarks/Show/3549/0/javascript)
Find Duplicate: sort, set, floyd
test sort lodash and JS
Lodash#uniq + Native v2