MeasureThat.net
Array duplicate removal for duplicates exceeding `N`-number
(version: 0)
https://codereview.stackexchange.com/q/175067/120114
Comparing performance of:
original duplicate removal vs 2-pass approach vs 2-pass with filtering vs 2pf-obj
Created: 8 years ago by Guest
Tests:
original duplicate removal
```js
// Remove duplicates that occur 3 or more times in an array,
// keeping unique values and those with less than 3
function removeMany(arr) {
  const newArr = Array.from(arr).sort();
  let count = 0;
  let result = [];
  newArr.forEach((value, index, ar) => {
    count += 1;
    // refactored afterwards from (ar[index + 1] !== value)
    if (ar.lastIndexOf(value) <= index && count <= 2) {
      for (var i = 0; i < arr.length; i++) {
        if (arr[i] === value) {
          result.push(arr[i]);
        }
      }
      count = 0;
    } else if (ar[index + 1] !== value) {
      count = 0;
    }
  });
  // +1 is there anyway to return a result that mimics the original order of `numbers`?
  return result; // [1, 2, 2, 3, 4, 4]
}

const numbers = [1, 2, 3, 2, 4, 4, 5, 5, 5, 5];
console.log(removeMany(numbers));
```
2-pass approach
```js
// Remove duplicates that occur 3 or more times in an array,
// keeping unique values and those with less than 3
function removeMany(arr) {
  let countMappings = arr.reduce(function (carry, item) {
    if (carry[item] !== undefined) {
      carry[item]++;
    } else {
      carry[item] = 1;
    }
    return carry;
  }, {});
  return arr.reduce(function (final, item) {
    if (countMappings[item] < 3) {
      final.push(item);
    }
    return final;
  }, []);
}

const numbers = [1, 2, 3, 2, 4, 4, 5, 5, 5, 5];
console.log(removeMany(numbers));
2-pass with filtering
```js
function removeMany(numbers, max) {
  // Note: the accumulator here is an array literal used as a sparse map;
  // the "2pf-obj" variant below uses a plain object instead
  const numberMap = numbers.reduce((map, num) => {
    map[num] = map[num] ? map[num] + 1 : 1;
    return map;
  }, []);
  return numbers.filter(num => numberMap[num] < max);
}

const numbers = [1, 2, 3, 2, 4, 4, 5, 5, 5, 5];
console.log(removeMany(numbers, 3));
```
2pf-obj
```js
function removeMany(numbers, max) {
  // Same as "2-pass with filtering", but counts into a plain object
  const numberMap = numbers.reduce((map, num) => {
    if (!map[num]) map[num] = 0;
    map[num]++;
    return map;
  }, {});
  return numbers.filter(num => numberMap[num] < max);
}

const numbers = [1, 2, 3, 2, 4, 4, 5, 5, 5, 5];
console.log(removeMany(numbers, 3));
```
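One difference between the test cases that the benchmark output will not show: the count-then-filter variants preserve the input order, while the original sorts before grouping and so returns `[1, 2, 2, 3, 4, 4]` instead of `[1, 2, 3, 2, 4, 4]`. A minimal sketch of the count-then-filter behavior (the name `removeManyCounted` is used here for illustration only and is not part of the benchmark):

```javascript
// Count-then-filter, mirroring the "2pf-obj" test case above
function removeManyCounted(numbers, max) {
  const counts = {};
  // First pass: tally how often each value occurs
  for (const n of numbers) counts[n] = (counts[n] || 0) + 1;
  // Second pass: keep only values occurring fewer than `max` times,
  // in their original positions
  return numbers.filter(n => counts[n] < max);
}

const input = [1, 2, 3, 2, 4, 4, 5, 5, 5, 5];
console.log(removeManyCounted(input, 3)); // → [1, 2, 3, 2, 4, 4]
```

Note that the second `2` stays in its original position, which answers the ordering question raised in the comment of the first test case.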
Latest run results:
No previous run results
Autogenerated LLM Summary (model: llama3.2:3b, generated one year ago):
Let's dive into what this benchmark tests and the approaches it compares.

**Benchmark Purpose**

The benchmark measures the performance of removing values that occur 3 or more times from an array, keeping unique values and those with fewer than 3 occurrences.

**Approaches Compared**

1. **Original duplicate removal**: sorts a copy of the array, walks it while counting runs of equal values, and uses `lastIndexOf` to detect the end of a run; qualifying values are then copied back out of the original array.
2. **2-pass approach**: one `reduce` pass builds an object mapping each value to its occurrence count; a second `reduce` pass collects the values whose count is below 3.
3. **2-pass with filtering**: the same count-then-filter idea, but the second pass uses `filter`, and the counting accumulator is an array literal used as a sparse map.
4. **2pf-obj** (2-pass filtering with an object): identical to "2-pass with filtering" except that the counting accumulator is a plain object (`{}`) rather than an array (`[]`).

**Pros and Cons of Each Approach**

1. **Original duplicate removal**:
   * Pros: self-contained and easy to step through.
   * Cons: sorting plus repeated `lastIndexOf` calls and an inner scan of the source array make it quadratic in the worst case, and the result is grouped by value rather than in the original order.
2. **2-pass approach**:
   * Pros: linear count-then-collect strategy that preserves input order.
   * Cons: two passes over the array and extra memory for the count map; the threshold of 3 is hard-coded.
3. **2-pass with filtering**:
   * Pros: the same linear strategy with more concise code and a configurable `max` threshold.
   * Cons: an array literal is an unconventional accumulator for keyed counting.
4. **2pf-obj**:
   * Pros: same as "2-pass with filtering", but with an object accumulator, the idiomatic choice for keyed counting.
   * Cons: the same two-pass cost and count-map memory as the other linear variants.

**Library Usage**

None of the approaches uses an external library; `reduce`, `filter`, and `lastIndexOf` are all built-in `Array.prototype` methods.

**Special JS Features or Syntax**

The tests use `reduce` and `filter` (ES5 array methods) together with `const` declarations and arrow functions (ES2015/ES6 syntax).

**Benchmark Results**

This page has no recorded runs yet, so the relative performance of the four variants is not established here. The main expected difference is between the sort-based original and the linear count-then-filter variants, while "2-pass with filtering" versus "2pf-obj" isolates the cost of an array accumulator versus an object accumulator.
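The count-then-filter strategy described in the summary can also be sketched with a `Map`, which keeps numeric keys as numbers instead of coercing them to object-property strings. This variant is an illustration only and is not one of the benchmarked tests:

```javascript
// Count occurrences into a Map, then keep values occurring fewer than `max` times
function removeManyMap(arr, max) {
  const counts = new Map();
  // Pass 1: tally occurrences per value
  for (const item of arr) counts.set(item, (counts.get(item) || 0) + 1);
  // Pass 2: filter by the tallied count, preserving input order
  return arr.filter(item => counts.get(item) < max);
}

console.log(removeManyMap([1, 2, 3, 2, 4, 4, 5, 5, 5, 5], 3)); // → [1, 2, 3, 2, 4, 4]
```

A `Map` also works when the input contains values that would collide as object keys (for example `1` and `"1"`), since `Map` keys are compared by value identity rather than string conversion.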
Related benchmarks:
Array duplicate removal for duplicates exceeding `N`-number
Array duplicate removal for duplicates exceeding `N`-number
Array duplicate removal for duplicates exceeding `N`-number
The Non Repeating Number