unique list (version: 0)
fastest way to get
Comparing performance of: Set vs fast
Created: 3 years ago by: Guest
Tests:

Set:

    const routes = [
      ['PHX', 'LAX'], ['PHX', 'JFK'], ['JFK', 'OKC'], ['JFK', 'HEL'],
      ['JFK', 'LOS'], ['MEX', 'LAX'], ['MEX', 'BKK'], ['MEX', 'LIM'],
      ['MEX', 'EZE'], ['LIM', 'BKK'],
    ];
    const uniq1 = [...new Set(routes.flat())];
fast:

    function uniq_fast(a) {
      var seen = {};
      var out = [];
      var len = a.length;
      var j = 0;
      for (var i = 0; i < len; i++) {
        var item = a[i];
        if (seen[item] !== 1) {
          seen[item] = 1;
          out[j++] = item;
        }
      }
      return out;
    }
    const routes = [
      ['PHX', 'LAX'], ['PHX', 'JFK'], ['JFK', 'OKC'], ['JFK', 'HEL'],
      ['JFK', 'LOS'], ['MEX', 'LAX'], ['MEX', 'BKK'], ['MEX', 'LIM'],
      ['MEX', 'EZE'], ['LIM', 'BKK'],
    ];
    const uniq2 = uniq_fast(routes.flat());
Latest run results: none; this benchmark has no recorded runs yet.
Autogenerated LLM Summary (model: llama3.2:3b, generated one year ago):
Let's break down the provided benchmark and explain what is being tested.

**Benchmark Overview**

The benchmark measures the performance of two approaches to removing duplicates from a flattened array of airport-route pairs: spreading a `Set` built from the array (`[...new Set(routes.flat())]`), and a hand-rolled function, `uniq_fast()`, that tracks seen values in a plain object. The benchmark compares execution time and operations per second for both approaches across browsers and devices.

**Options Compared**

1. **`Set` with the spread operator**: `new Set(routes.flat())` discards duplicates because a Set stores each value at most once; the spread operator then converts the Set back into an array.
2. **Custom `uniq_fast()` function**: iterates over the input array once, records each element as a key of a plain object, and appends each first occurrence to an output array.

**Pros and Cons**

1. **`Set` with the spread operator**:
   * Pros: concise, easy to read, widely supported, and a single O(n) pass.
   * Cons: allocates an intermediate Set, whose construction overhead can matter on very large datasets or hot code paths.
2. **Custom `uniq_fast()` function**:
   * Pros: a single pass with plain-object lookups, which can be very fast in engines that optimize property access; easy to tailor to specific use cases.
   * Cons: more code to implement and maintain; object keys are coerced to strings, so values with identical string representations are treated as duplicates.

**Library Used**

None. Both tests rely only on built-in JavaScript features: `Array.prototype.flat()`, `Set`, and the spread operator.

**Special JS Feature or Syntax**

The benchmark uses `Array.prototype.flat()` (ES2019) and the spread operator with iterables (ES2015); neither requires further explanation beyond standard JavaScript.
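For reference, on this benchmark's data both approaches yield the same ten airport codes, in first-seen order:

```javascript
const routes = [
  ['PHX', 'LAX'], ['PHX', 'JFK'], ['JFK', 'OKC'], ['JFK', 'HEL'],
  ['JFK', 'LOS'], ['MEX', 'LAX'], ['MEX', 'BKK'], ['MEX', 'LIM'],
  ['MEX', 'EZE'], ['LIM', 'BKK'],
];

// Flatten the pairs, then let Set drop repeated codes.
const uniq1 = [...new Set(routes.flat())];
console.log(uniq1);
// → ['PHX', 'LAX', 'JFK', 'OKC', 'HEL', 'LOS', 'MEX', 'BKK', 'LIM', 'EZE']
```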
**Other Considerations**

* A microbenchmark measures raw execution time, which may not reflect the actual performance impact of these approaches in production code.
* Only these two approaches are tested; other deduplication methods, such as a custom recursive function or library helpers like Lodash, are not covered.

**Alternatives**

1. **`Array.prototype.reduce()`**: can remove duplicates by accumulating unseen values, but a naive version re-scans the accumulator for every element, making it O(n²) and typically slower than either tested approach.
2. **A library like Lodash**: provides `_.uniq()` (and `_.uniqBy()` for keyed deduplication), among other utilities.
3. **A custom recursive function**: possible, but usually adds complexity and performance overhead compared to the iterative approaches here.
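The `reduce()` alternative mentioned above can be sketched as follows; the variable names are illustrative, and the `includes()` scan of the accumulator is what makes this version O(n²):

```javascript
const routes = [
  ['PHX', 'LAX'], ['PHX', 'JFK'], ['JFK', 'OKC'], ['JFK', 'HEL'],
  ['JFK', 'LOS'], ['MEX', 'LAX'], ['MEX', 'BKK'], ['MEX', 'LIM'],
  ['MEX', 'EZE'], ['LIM', 'BKK'],
];

// Keep a value only if the accumulator does not already contain it.
const uniqReduce = routes.flat().reduce(
  (acc, x) => (acc.includes(x) ? acc : [...acc, x]),
  []
);
console.log(uniqReduce.length); // 10 unique airport codes
```

Each step both scans and copies the accumulator, so this reads nicely but scales poorly next to the Set and `uniq_fast()` variants.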
Related benchmarks:
slice VS splice: who is the fastest to keep constant size
slice VS splice VS shift: who is the fastest to keep constant size 2
slice VS splice VS shift: who is the fastest to keep constant size after adding multiple
slice vs splice: which one is fastest to extract all data from a long array
at VS slice VS length: who is the fastest to get last item