MeasureThat.net
reuse of audio elements in ratechangeTick with await (version: 0)
Comparing performance of: reusing elements vs nonreusables
Created: 5 years ago by Guest
Tests:
reusing elements
(async function () {
    var reusables = [];

    // Resolves once the next "ratechange" event fires on an audio element,
    // reusing an element from the pool when one is available.
    function nextTick(cb) {
        var resolve;
        var promise = new Promise(function (resolveImpl) {
            resolve = resolveImpl;
        });
        var audio = reusables.length ? reusables.shift() : document.createElement("audio");
        audio.onratechange = function () {
            audio.onratechange = undefined; // todo: not really necessary
            audio.playbackRate = 1;
            reusables.push(audio);
            resolve(cb());
        };
        audio.playbackRate = 2; // triggers the asynchronous "ratechange" event
        promise.cancel = function () {
            audio.onratechange = undefined;
            resolve(false);
            return cb;
        };
        return promise;
    }

    var result = await nextTick(function () {});
})();
nonreusables
(async function () {
    // Same pattern as above, but a fresh audio element is created on every
    // call instead of being recycled from a pool.
    function nextTick(cb) {
        var resolve;
        var promise = new Promise(function (resolveImpl) {
            resolve = resolveImpl;
        });
        var audio = document.createElement("audio");
        audio.onratechange = function () {
            resolve(cb());
        };
        audio.playbackRate = 2; // triggers the asynchronous "ratechange" event
        promise.cancel = function () {
            audio.onratechange = undefined;
            resolve(false);
            return cb;
        };
        return promise;
    }

    var result = await nextTick(function () {});
})();
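For reference, a minimal sketch of how either nextTick() implementation above could be driven, assuming nextTick() is in scope; the loop count and variable names here are illustrative and are not part of the benchmark itself. Setting playbackRate triggers an asynchronous "ratechange" event, which the promise uses as its "next tick" signal.

(async function () {
    // Await a handful of "ratechange ticks" back to back. With the pooled
    // version the same audio element is recycled on each iteration; with
    // the non-pooled version a new element is created every time.
    for (var i = 0; i < 5; i++) {
        await nextTick(function () {
            // Callback runs when the ratechange event fires.
        });
    }

    // The returned promise also carries a cancel() method that detaches
    // the ratechange handler and resolves the promise with false.
    var pending = nextTick(function () {});
    pending.cancel();
    var cancelled = await pending; // resolves to false
})();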
Rendered benchmark preparation results:
Suite status: <idle, ready to run>
Latest run results: no previous run results. This benchmark does not have any results yet; be the first one to run it!
Autogenerated LLM Summary (model llama3.2:3b, generated one year ago):
Let's break down the provided benchmark definition and individual test cases.

**Benchmark Definition:**

The benchmark is a JavaScript microbenchmark named "reuse of audio elements in ratechangeTick with await". There is no script preparation code or HTML preparation code; each test case is self-contained. The purpose of the benchmark is to measure the performance difference between reusing `audio` elements and creating a new one for every tick.

**Individual Test Cases:**

### "reusing elements"

This test case uses a custom `nextTick()` function that takes an `audio` element from a reuse pool when one is available (creating one otherwise), sets its playback rate, and returns a promise. The promise resolves when the `onratechange` event fires, at which point the handler resets the playback rate and returns the element to the pool; a separate `cancel()` method is also attached to the promise. The test then awaits this promise.

**Library:** None

**Special JS Feature/Syntax:** The use of async/await syntax and promise manipulation (the `nextTick()` function) requires modern JavaScript features (ECMAScript 2017 or later).

The purpose of this test case is to measure the performance impact of reusing `audio` elements, i.e. whether recycling an `audio` element is faster than creating a new one.

**Pros and Cons:**

Pros:

* This approach allows more fine-grained control over the creation and reuse of resources.
* It can help identify performance bottlenecks in specific scenarios.

Cons:

* Creating and manipulating promises and async/await contexts adds complexity to the benchmarking code.
* The custom `nextTick()` function may introduce overhead not present in real-world code.

### "nonreusables"

This test case is similar to the previous one, but without reusing the `audio` element: it creates a new `audio` element on every call to `nextTick()`.

**Library:** None

**Special JS Feature/Syntax:** The same async/await and promise pattern as the first test case.

The purpose of this test case is to measure the performance impact of not reusing `audio` elements.

**Pros and Cons:**

Pros:

* This approach is simpler than the previous one, with no pool to manage.
* It provides a straightforward baseline for comparing reuse against non-reuse.

Cons:

* Creating a new `audio` element per tick may introduce additional overhead from DOM management and garbage collection.

**Other Alternatives:**

1. **Reuse of other resources**: Instead of `audio`, the benchmark could test the performance impact of reusing other kinds of resources (e.g. images, fonts, or DOM elements); see the sketch after this summary.
2. **Different browsers or platforms**: The benchmark could be run across multiple browsers or platforms to compare performance across environments.
3. **Additional complexity**: Factors such as concurrency, caching, or network latency could be added to the benchmark.

Keep in mind that these alternatives would require modifications to the benchmark definition and individual test cases.
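As a hedged illustration of the first alternative above (reusing other kinds of resources), a generic pool along these lines could be benchmarked the same way; ObjectPool and its acquire/release methods are hypothetical names introduced here for illustration, not part of this benchmark or of any library.

// Hypothetical generic pool: the same shift()/push() recycling pattern the
// "reusing elements" test applies to audio elements, extracted so it can
// wrap any kind of resource (images, DOM nodes, etc.).
function ObjectPool(create) {
    var free = [];
    return {
        acquire: function () {
            return free.length ? free.shift() : create();
        },
        release: function (item) {
            free.push(item);
        }
    };
}

// Example: pooling plain <div> elements instead of <audio> elements.
var divPool = ObjectPool(function () {
    return document.createElement("div");
});
var div = divPool.acquire();
// ... use the element ...
divPool.release(div);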
Related benchmarks:
(ColorfulCakeChen) Measure That Async () 7
HelloDDDDAA
async for vs promise all (setTimeout version)
debounce ramda vs. vanila
(ColorfulCakeChen) delayValue 8
Comments