MeasureThat.net
Get unique and duplicate emails
(version: 3)
Comparing performance of:
For Of with has() vs For Of with size vs Reduce (current)
Created: 3 years ago by Registered User
Script Preparation code:
emailAddressList = Array.from({length: 1000}, () => `User${Math.floor(Math.random() * 1000)}@emailtesting.com.au`);
Tests:
For Of with has()
const uniqueEmailAddressList = new Set();
const suppressedDuplicateUserEmails = new Set();
for (const email of emailAddressList) {
  if (uniqueEmailAddressList.has(email)) {
    suppressedDuplicateUserEmails.add(email);
  } else {
    uniqueEmailAddressList.add(email);
  }
}
For Of with size
const uniqueEmailAddressList = new Set();
const suppressedDuplicateUserEmails = new Set();
let lastUniqueEmailAddressListSize = 0;
for (const email of emailAddressList) {
  uniqueEmailAddressList.add(email);
  if (lastUniqueEmailAddressListSize === uniqueEmailAddressList.size) {
    // Size unchanged after add: this email was already in the set.
    suppressedDuplicateUserEmails.add(email);
  } else {
    lastUniqueEmailAddressListSize = uniqueEmailAddressList.size;
  }
}
Reduce (current)
const uniqueEmailAddressList = new Set(emailAddressList);
const suppressedDuplicateUserEmails = emailAddressList.reduce((list, item, index, array) => {
  // Duplicate if another occurrence exists later in the array
  // and it has not already been collected.
  if (array.indexOf(item, index + 1) !== -1 && list.indexOf(item) === -1) {
    list.push(item);
  }
  return list;
}, []);
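The three test cases should agree on which addresses are duplicates. A minimal sketch of the `has()`-based approach run outside the benchmark harness, using a small hypothetical input instead of the generated 1000-address list:

```javascript
// Hypothetical sample input; the benchmark generates 1000 random addresses.
const emailAddressList = [
  "a@example.com",
  "b@example.com",
  "a@example.com", // duplicate
  "c@example.com",
  "b@example.com", // duplicate
];

const uniqueEmailAddressList = new Set();
const suppressedDuplicateUserEmails = new Set();
for (const email of emailAddressList) {
  if (uniqueEmailAddressList.has(email)) {
    suppressedDuplicateUserEmails.add(email);
  } else {
    uniqueEmailAddressList.add(email);
  }
}

console.log([...uniqueEmailAddressList]);
// ["a@example.com", "b@example.com", "c@example.com"]
console.log([...suppressedDuplicateUserEmails]);
// ["a@example.com", "b@example.com"]
```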
Latest run results:
No previous run results
This benchmark does not have any results yet. Be the first one to run it!
Autogenerated LLM Summary (model llama3.2:3b, generated one year ago):
**Benchmark Overview**

The benchmark measures three approaches to separating unique and duplicate emails in a list of 1000 randomly generated email addresses. The generator draws user names from only 1000 possibilities, so the list is expected to contain duplicates.

**Approaches Compared**

1. **For Of with has()**: uses `Set.prototype.has()` to check whether an address has already been seen; unseen addresses go into the unique set, repeats into the duplicate set.
2. **For Of with size**: adds every address to a `Set` and compares the set's size before and after the add; if the size did not change, the address was a duplicate.
3. **Reduce (current)**: uses `Array.prototype.reduce()` with `indexOf()` lookups to collect addresses that occur more than once.

**Pros and Cons of Each Approach**

1. **For Of with has()**
   * Pros: simple, and `has()` is an average O(1) lookup, so the whole pass is O(n).
   * Cons: maintains two sets, so slightly more memory than a single-set approach.
2. **For Of with size**
   * Pros: easy to implement; avoids an explicit `has()` call by relying on `Set.prototype.add()` being a no-op for existing members.
   * Cons: the size bookkeeping is easy to get wrong, and in practice it is no faster than calling `has()`.
3. **Reduce (current)**
   * Pros: a single functional expression with no mutable outer state.
   * Cons: each iteration performs linear `indexOf()` scans, making the approach O(n²) overall; it is likely the slowest of the three on large inputs, and the hardest to read.

**Library and Special JS Features**

None of the test cases use libraries or special JavaScript features beyond the built-in `Set` data structure and `Array.prototype.reduce()`. Engine-level optimizations of these built-ins may still affect relative results across browsers.

**Other Considerations**

When interpreting benchmarks like this one, consider:

* Data size: with only 1000 addresses, constant factors may dominate; the O(n²) cost of the `reduce()` approach grows much faster as the input grows.
* Browser and platform variations: different engines optimize `Set` and array methods differently.
* Output shape: the `Set`-based tests produce sets while the `reduce()` test produces an array, so they are not strictly interchangeable without conversion.

**Alternative Approaches**

Other ways to separate unique and duplicate emails include:

* Counting occurrences in a `Map` and partitioning addresses by count.
* A custom algorithm that exploits known properties of the input (e.g. pre-sorted data).
* Parallel or streaming processing for very large inputs.

In summary, the benchmark compares three common approaches: the `Set`-based loops are O(n) while the `reduce()` version is O(n²), so the performance gap should widen with input size.
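The `Map`-based alternative mentioned in the summary can be sketched as follows: one pass counts occurrences, then the counts are partitioned into unique and duplicate lists. Variable names and the sample input are illustrative, not part of the benchmark:

```javascript
// Hypothetical sample input; the benchmark uses 1000 random addresses.
const emails = ["a@example.com", "b@example.com", "a@example.com"];

// Count occurrences of each address in a single O(n) pass.
const counts = new Map();
for (const email of emails) {
  counts.set(email, (counts.get(email) ?? 0) + 1);
}

// All distinct addresses, and those that appeared more than once.
const uniqueEmails = [...counts.keys()];
const duplicateEmails = [...counts]
  .filter(([, n]) => n > 1)
  .map(([email]) => email);

console.log(uniqueEmails);    // ["a@example.com", "b@example.com"]
console.log(duplicateEmails); // ["a@example.com"]
```

Compared to the two-`Set` approaches, this keeps the occurrence counts around, which is useful if you also need to report how many times each duplicate appeared.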
Related benchmarks:
Methods to remove duplicates from array (test)
Filter vs Set (get unique elements)
Find unique and duplicates
Find unique and duplicate emails
Comments