MeasureThat.net
Array duplicate removal for duplicates exceeding `N`-number
(version: 0)
https://codereview.stackexchange.com/q/175067/120114
Comparing performance of:
original duplicate removal vs 2-pass approach vs 2-pass with filtering
Created:
8 years ago
by:
Registered User
Tests:
original duplicate removal
// Remove duplicates that occur 3 or more times in an array,
// keeping unique values and those with less than 3
function removeMany(arr) {
  const newArr = Array.from(arr).sort()
  let count = 0;
  let result = []
  newArr.forEach((value, index, ar) => {
    count += 1;
    // refactored afterwards from (ar[index + 1] !== value)
    if (ar.lastIndexOf(value) <= index && count <= 2) {
      for (var i = 0; i < arr.length; i++) {
        if (arr[i] === value) {
          result.push(arr[i])
        }
      }
      count = 0
    } else if (ar[index + 1] !== value) {
      count = 0;
    }
  });
  // +1 is there anyway to return a result that mimics the original order of `numbers`?
  return result; // [1, 2, 2, 3, 4, 4]
}

const numbers = [1, 2, 3, 2, 4, 4, 5, 5, 5, 5];
console.log(removeMany(numbers));
2-pass approach
// Remove duplicates that occur 3 or more times in an array,
// keeping unique values and those with less than 3
function removeMany(arr) {
  let countMappings = arr.reduce(function(carry, item) {
    if (carry[item] !== undefined) {
      carry[item]++;
    } else {
      carry[item] = 1;
    }
    return carry;
  }, {});
  return arr.reduce(function(final, item) {
    if (countMappings[item] < 3) {
      final.push(item);
    }
    return final;
  }, []);
}

const numbers = [1, 2, 3, 2, 4, 4, 5, 5, 5, 5];
console.log(removeMany(numbers));
2-pass with filtering
function removeMany(numbers, max) {
  const numberMap = numbers.reduce((map, num) => {
    map[num] = map[num] ? map[num] + 1 : 1;
    return map;
  }, []);
  return numbers.filter(num => numberMap[num] < max);
}

const numbers = [1, 2, 3, 2, 4, 4, 5, 5, 5, 5];
console.log(removeMany(numbers, 3));
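Note that the three variants do not return identical arrays for the sample input: the sort-based version emits surviving values grouped in sorted order ([1, 2, 2, 3, 4, 4]), while the two reduce/filter versions preserve the input order ([1, 2, 3, 2, 4, 4]). A minimal sketch for checking that two outputs agree regardless of order (a hypothetical helper, not part of the benchmark):

```javascript
// Hypothetical helper (not one of the benchmark's test cases): two result
// arrays are treated as "equal" if they contain the same values with the
// same multiplicities. Sorting copies of both and comparing does that.
function sameMultiset(a, b) {
  const key = xs => [...xs].sort().join(',');
  return key(a) === key(b);
}

// The sort-based and order-preserving outputs for the sample input:
console.log(sameMultiset([1, 2, 2, 3, 4, 4], [1, 2, 3, 2, 4, 4])); // true
console.log(sameMultiset([1, 2, 2], [1, 2]));                      // false
```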
Latest run results:
No previous run results
This benchmark does not have any results yet.
Autogenerated LLM Summary (model llama3.2:3b, generated one year ago):
Let's break down the benchmark and its test cases.

**What is being tested?**

The benchmark measures the performance of three approaches to removing elements that occur 3 or more times in an array, keeping unique values and those with fewer than 3 occurrences:

1. **Original duplicate removal**: sorts a copy of the array, then iterates through it, checking whether each element occurs fewer than 3 times. If so, all occurrences of that element are pushed to a result array.
2. **2-pass approach**: first builds a mapping from elements to their counts using `Array.prototype.reduce()`, then uses a second `reduce()` to drop elements whose count is greater than or equal to 3.
3. **2-pass with filtering**: the same idea, but uses `Array.prototype.filter()` instead of a second `reduce()` for the final pass.

**Pros and cons**

* Original duplicate removal:
  * Pros: easy to understand; no additional libraries needed.
  * Cons: sorts the entire array, which can be slow for large arrays, and rescans the input for every kept value.
* 2-pass approach:
  * Pros: more efficient than the original approach, since it avoids sorting.
  * Cons: requires an extra pass through the data to create the count mapping.
* 2-pass with filtering:
  * Pros: similar efficiency to the 2-pass approach, but uses the built-in `filter()`, which is more concise and can be faster in some cases.
  * Cons: still requires two passes, which adds memory overhead for very large arrays.

**Other considerations**

For large arrays, the 2-pass approaches are likely to be more efficient because they avoid sorting entirely.

**Libraries used**

1. `Array.from()` is used in the original approach to copy the input into a new array before sorting.
2. `Array.prototype.sort()` and `Array.prototype.lastIndexOf()` are used in the original approach.
3. `Array.prototype.reduce()` is used in both 2-pass variants, and `Array.prototype.filter()` in the third.

**Special JS features or syntax**

Nothing beyond standard built-ins: arrow functions and the array methods listed above.

**Alternatives**

If you were to create a similar benchmark or explore other approaches, some alternatives might include:

* **Using a more efficient data structure**, such as a `Map`, to track counts (a plain object coerces its keys to strings).
* **Computing the counts and the filtered result in a single pass** when the input arrives sorted or streamed.
* **Utilizing parallel processing techniques** for very large inputs, if the resources are available.

However, given JavaScript's built-in array functions, most developers would likely stick with these approaches, as they offer a good balance between readability, performance, and standard library usage.
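The Map-based alternative suggested in the summary can be sketched as follows (hypothetical code, not one of the benchmark's test cases; the name `removeManyMap` and its signature are assumptions):

```javascript
// Hypothetical sketch of the Map-based alternative: a Map tracks exact
// counts without coercing keys to strings (as a plain object would), and
// the final filter preserves the input order.
function removeManyMap(arr, max) {
  const counts = new Map();
  for (const item of arr) {
    counts.set(item, (counts.get(item) || 0) + 1);
  }
  return arr.filter(item => counts.get(item) < max);
}

const numbers = [1, 2, 3, 2, 4, 4, 5, 5, 5, 5];
console.log(removeManyMap(numbers, 3)); // [ 1, 2, 3, 2, 4, 4 ]
```

Unlike the object-keyed versions above, this variant also handles non-string keys (e.g. mixed numbers and strings) without collisions.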
Related benchmarks:
Array duplicate removal for duplicates exceeding `N`-number
Array duplicate removal for duplicates exceeding `N`-number
Array duplicate removal for duplicates exceeding `N`-number
The Non Repeating Number