Count Array Duplicates
(version: 0)
Comparing performance of:
Reduce + Entries + Filter vs Reduce + Clone + Includes + Set vs Reduce + Find + Set vs Reduce + Includes + Filter
Created: 4 years ago by Registered User
Script Preparation code:
var numbers = [...Array(1000).keys()].map(() => Math.floor(Math.random() * 100));
Tests:
Reduce + Entries + Filter
const stats = numbers.reduce((memo, val) => {
  if (memo[val]) {
    memo[val]++;
  } else {
    memo[val] = 1;
  }
  return memo;
}, {});
const duplicateItems = Object.entries(stats).filter((a) => a[1] > 1);
Reduce + Clone + Includes + Set
const duplicates = numbers.reduce((memo, val, index) => {
  const omitted = [...numbers];
  delete omitted[index];
  if (omitted.includes(val)) {
    memo.push(val);
  }
  return memo;
}, []);
const deduped = [...new Set(duplicates)];
Reduce + Find + Set
const duplicates = numbers.reduce((memo, val) => {
  if (numbers.filter((num) => num === val).length > 1) {
    memo.push(val);
  }
  return memo;
}, []);
const deduped = [...new Set(duplicates)];
Reduce + Includes + Filter
numbers.reduce((memo, val) => {
  if (!memo.includes(val)) {
    if (numbers.filter((num) => num === val).length > 1) {
      memo.push(val);
    }
  }
  return memo;
}, []);
Latest run results: none. This benchmark does not have any results yet.
Autogenerated LLM Summary (model: llama3.2:3b, generated one year ago):
**Benchmark Overview**

MeasureThat.net is a website that allows users to create and run JavaScript microbenchmarks. This benchmark measures the performance of different approaches for finding duplicates in an array.

**Script Preparation Code**

The preparation code generates an array of 1000 random integers between 0 and 99 using `Array(1000).keys()` and `Math.floor(Math.random() * 100)`, producing a dataset with many duplicate values.

**Benchmark Definitions**

There are four individual test cases:

1. **Reduce + Entries + Filter** uses `reduce()` to accumulate per-value counts in an object (`stats`), then keeps the entries whose count exceeds 1 using `Object.entries()` and `filter()`.
2. **Reduce + Clone + Includes + Set** copies the array with spread syntax (`omitted`) on every iteration, deletes the current element from the copy, and checks whether the value still exists via `includes()`. The collected duplicates are then deduplicated with a `Set`.
3. **Reduce + Find + Set** (despite its name) uses `filter()` to count how often each value occurs in the original array, collects values that occur more than once, and deduplicates the result with a `Set`.
4. **Reduce + Includes + Filter** is a variation of the third approach: it first checks `memo.includes(val)` so that each duplicate value is recorded only once, making a final `Set` pass unnecessary.

**Pros and Cons**

* **Efficiency**: The first approach makes a single pass over the array (O(n)); the other three rescan the array for every element, giving O(n²) behavior on this 1000-element dataset.
* **Memory usage**: The second approach allocates a full copy of the array on every iteration, which consumes considerably more memory.
* **Readability**: Descriptive variable names (e.g., `omitted`) make the intent easier to follow.
**Library and Special Features**

None of the approaches use external libraries, but they do rely on modern JavaScript features: `Set` and spread syntax (introduced in ECMAScript 2015) and `Object.entries()` (introduced in ES2017).

**Alternative Approaches**

Some possible alternatives to these approaches include:

* Counting occurrences in a single pass with a `Map` or hash table, which is optimized for fast lookup and insertion.
* Sorting the array first and comparing adjacent elements.
* Utilizing parallel processing techniques to take advantage of multiple CPU cores, or GPU acceleration via WebGPU, though these are rarely worthwhile for arrays of this size.

Keep in mind that the best approach will depend on the specific requirements of your project, such as performance needs, memory constraints, and readability concerns.
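The single-pass, hash-based lookup suggested above can be sketched as follows. This is a minimal illustration, not one of the benchmarked test cases; the function name `findDuplicates` is hypothetical:

```javascript
// Single-pass duplicate detection using a Map of counts:
// O(n) time, one traversal, no repeated array scans.
function findDuplicates(arr) {
  const counts = new Map();
  for (const val of arr) {
    counts.set(val, (counts.get(val) || 0) + 1);
  }
  // Keep only the values that were seen more than once.
  // Map preserves insertion order, so results follow first appearance.
  return [...counts.entries()]
    .filter(([, count]) => count > 1)
    .map(([val]) => val);
}

console.log(findDuplicates([1, 2, 2, 3, 3, 3, 4])); // → [2, 3]
```

Because each element is visited exactly once and `Map` lookups are effectively constant time, this avoids the quadratic rescanning that the `includes()`- and `filter()`-based test cases perform.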
Related benchmarks:
Array.Sort vs Math.Min-Max
Methods to remove duplicates from array (fork)
Methods to remove duplicates from array (test)
Methods to remove duplicates from array x2