MeasureThat.net
Filtering test LZ 2 (version: 0)

Comparing performance of: Set spread vs Array from set vs Filter

Created: 2 years ago by Guest
Script Preparation code:
var array = Array.from({length: 40}, () => ({ value: Math.floor(Math.random() * 140)}));
Tests:
Set spread
const values = array.map(({ value }) => value);
const f = [...new Set(values)];
Array from set
const values = array.map(({ value }) => value);
const s = new Set(values); // dedupe by numeric value, not by object reference
const l = Array.from(s);
Filter
const values = array.map(({ value }) => value);
const b = values.filter((v, index) => values.indexOf(v) === index); // keep only the first occurrence of each value
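Assuming each test is meant to deduplicate the numeric `value` fields, the preparation code and the three approaches can be combined into a standalone sketch (the harness below is illustrative, not MeasureThat's timing loop):

```javascript
// Sample data: 40 objects with random integer values in [0, 140)
const array = Array.from({ length: 40 }, () => ({ value: Math.floor(Math.random() * 140) }));
const values = array.map(({ value }) => value);

// 1. Set spread: build a Set, spread it back into an array
const bySpread = [...new Set(values)];

// 2. Array.from(set): the same Set, converted with Array.from()
const byArrayFrom = Array.from(new Set(values));

// 3. Filter: keep only the first occurrence of each value (O(n^2) due to indexOf)
const byFilter = values.filter((v, i) => values.indexOf(v) === i);

// All three produce the same deduplicated list, in first-occurrence order
console.log(bySpread.length === byArrayFrom.length && bySpread.length === byFilter.length); // true
```

Since a `Set` preserves insertion order, all three approaches return the unique values in the order they first appear.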
Latest run results:
Run details:
(Test run date: 2 years ago)
User agent:
Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36
Browser/OS:
Chrome 119 on Windows
Test name       Executions per second
Set spread      1575218.9 Ops/sec
Array from set  1335953.5 Ops/sec
Filter          509903.6 Ops/sec
Autogenerated LLM Summary (model llama3.2:3b, generated one year ago):
**Benchmark Definition:**

The benchmark tests three approaches for deduplicating an array of random values:

1. **Set Spread**: uses the spread operator (`...`) to expand a `Set` into an array.
2. **Array from Set**: creates a `Set` from the values, then converts it back into an array with `Array.from()`.
3. **Filter**: keeps only the first occurrence of each value by comparing its `indexOf` position with the current index.

**Options Compared:**

The benchmark compares the performance of these three approaches on an array of 40 objects, generated with `Array.from()` using a length of 40 and a callback that produces random integer values between 0 and 139.

**Pros and Cons:**

* **Set Spread**:
  + Pros: concise and readable; the `Set` handles deduplication in a single pass.
  + Cons: incurs the overhead of creating an intermediate `Set`.
* **Array from Set**:
  + Pros: semantically identical to the spread version, with the conversion made explicit by `Array.from()`.
  + Cons: slightly slower here, again due to the intermediate `Set` and the conversion step.
* **Filter**:
  + Pros: requires no intermediate data structure.
  + Cons: calling `indexOf` inside `filter` makes the approach O(n²), so it scales poorly and is the slowest option measured.

**Library:** No external libraries are used; the benchmark relies only on the built-in `Array` and `Set` data structures. `Array.from()` creates arrays from iterables, and the `Set` constructor builds a set from an array.

**Special JS Feature/Syntax:** The spread operator (`...`) used in the **Set Spread** approach was introduced for array literals in ECMAScript 2015 (ES6). It spreads the values of any iterable, such as a `Set`, into a new array.

**Other Alternatives:**

* A single-pass `reduce()` with a `Set` accumulator avoids the quadratic cost of the `filter` approach.
* If memory efficiency is a concern, iterators or generators can process data in chunks.
* Libraries such as Lodash or Ramda provide ready-made deduplication utilities.

Keep in mind that the performance differences between these approaches may vary with dataset size and the specific use case.
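The `reduce()` alternative mentioned in the summary can be sketched as follows; `uniqueValues` is a hypothetical helper, not part of the benchmark:

```javascript
// Deduplicate in one pass using reduce() with a Set accumulator:
// O(n) membership checks, versus the O(n^2) indexOf-inside-filter approach.
function uniqueValues(items) {
  const seen = new Set();
  return items.reduce((acc, { value }) => {
    if (!seen.has(value)) {
      seen.add(value); // remember this value
      acc.push(value); // keep first occurrence only
    }
    return acc;
  }, []);
}

console.log(uniqueValues([{ value: 1 }, { value: 2 }, { value: 1 }])); // [1, 2]
```

Like the `Set`-based tests above, this preserves first-occurrence order while avoiding repeated linear scans.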
Related benchmarks:
Unique values of array
slice vs filter2
slice vs filter 2
slice vs filter23