MeasureThat.net
Stack strategies
(version: 0)
Comparing performance of: generators vs Manual
Created: 8 years ago by Guest
Tests:
generators
// Global fuel budget: each recursive step spends one unit of GAS.
var GAS = 0;

// Driver loop: refuels the gas and resumes the generator each time it
// suspends, until it completes.
function run(init, makeGen) {
  GAS = init;
  var gen = makeGen();
  while (true) {
    var res = gen.next();
    if (res.done) {
      return res.value;
    } else {
      GAS = init;
      continue;
    }
  }
}

var linkNames = ["first", "rest"];

function* link(f, r) {
  return { $name: "link", $fields: linkNames, first: f, rest: r };
}

var empty = { $name: "empty" };

function* map(f, l) {
  // Out of gas: suspend the whole yield* delegation chain back to run().
  if (GAS-- <= 0) { yield null; }
  var ans;
  var dispatch = { empty: 0, link: 1 };
  var case_step = dispatch[l.$name];
  switch (case_step) {
    case 0:
      ans = empty;
      break;
    case 1:
      var fst = l[l.$fields[0]];
      var rst = l[l.$fields[1]];
      var t1 = yield* f(fst);
      var t2 = yield* map(f, rst);
      var t3 = yield* link(t1, t2);
      ans = t3;
  }
  return ans;
}

var buildList = function(n) {
  var l = empty;
  for (var i = 0; i < n; i++) {
    // link is a generator function, so extract its return value.
    l = link(i, l).next().value;
  }
  return l;
};

run(100, function*() {
  return yield* map(function*(l) { return l + 1; }, buildList(1000));
});
Manual
// Global fuel budget, as in the generators case.
var GAS = 0;

// Driver loop: keeps suspended frames on an explicit stack. When a
// computation runs out of gas it returns a continuation (isCont) whose
// frames are moved onto this stack; the driver refuels and resumes the
// innermost frame first.
function run(init, runner) {
  GAS = init;
  var frames = [{ run: runner }];
  var thisFrame = frames.pop();
  var res;
  while (true) {
    res = thisFrame.run(thisFrame);
    if (res.isCont) {
      var len = res.frames.length;
      for (var i = 0; i < len; i++) {
        frames.push(res.frames.pop());
      }
      GAS = init;
      thisFrame = frames.pop();
    } else {
      if (frames.length <= 0) {
        break;
      }
      // A frame finished: hand its result to the caller's frame as ans.
      thisFrame = frames.pop();
      thisFrame.ans = res;
    }
  }
  return res;
}

var linkNames = ["first", "rest"];

function link(f, r) {
  return { $name: "link", $fields: linkNames, first: f, rest: r };
}

var empty = { $name: "empty" };

// map hand-compiled into a state machine: `step` is the resume point,
// and the locals are saved into / restored from frame objects.
function map(f, l) {
  var step = 0;
  var ans, t1, fst, rst, t2;
  if (f.isFrame) {
    // Resuming: restore locals and arguments from the saved frame.
    ans = f.ans;
    step = f.step;
    t1 = f.vars[0];
    fst = f.vars[1];
    rst = f.vars[2];
    t2 = f.vars[3];
    l = f.args[1];
    f = f.args[0];
  }
  if (GAS-- <= 0) {
    // Out of gas: capture the current state as a frame and suspend.
    return {
      isCont: true,
      frames: [{ run: map, isFrame: true, vars: [t1, fst, rst, t2], args: [f, l], step: step }]
    };
  }
  while (true) {
    switch (step) {
      case 0:
        var dispatch = { empty: 1, link: 2 };
        step = dispatch[l.$name];
        break;
      case 1:
        ans = empty;
        step = 5;
        break;
      case 2:
        fst = l[l.$fields[0]];
        rst = l[l.$fields[1]];
        step = 3;
        ans = f(fst);
        break;
      case 3:
        t1 = ans;
        step = 4;
        ans = map(f, rst);
        break;
      case 4:
        t2 = ans;
        step = 5;
        ans = link(t1, t2);
        break;
      case 5:
        GAS++;
        return ans;
    }
    if (ans && ans.isCont) {
      // A callee suspended: append our own frame and propagate outward.
      ans.frames.push({ run: map, isFrame: true, vars: [t1, fst, rst, t2], args: [f, l], step: step });
      return ans;
    }
  }
}

var buildList = function(n) {
  var l = empty;
  for (var i = 0; i < n; i++) {
    l = link(i, l);
  }
  return l;
};

run(100, function() {
  return map(function(l) { return l + 1; }, buildList(1000));
});
This benchmark does not have any results yet. Be the first one to run it!
Autogenerated LLM Summary (model llama3.2:3b, generated one year ago):
**Benchmark Overview**

This benchmark compares two strategies for bounding stack growth in deeply recursive JavaScript code. Both test cases build a 1000-element linked list and recursively `map` an increment function over it. Both also share a global counter `GAS`: a fuel budget, not a garbage-collection setting. Each recursive step spends one unit of gas, and when the gas runs out the computation suspends back to a driver loop (`run`), which refills the budget and resumes it.

**Test Cases**

### generators

The `generators` test case expresses suspension with ES6 generators:

1. `run` initializes `GAS` and creates a generator `gen` from the supplied generator function.
2. `map` is a generator function that recurses through the list via `yield*`; when `GAS` reaches zero it executes `yield null`, suspending the whole delegation chain.
3. `run` calls `gen.next()` in a loop, resetting `GAS = init` after every suspension.
4. When the generator completes (`res.done`), its return value is the mapped list.

This is coroutine-style programming: the language runtime saves and restores the suspended call chain automatically.

### Manual

The `Manual` test case hand-compiles the same recursion into a state machine:

1. `map` is a `switch` over an explicit `step` counter that records where execution should resume.
2. When `GAS` runs out, `map` returns a continuation object (`isCont: true`) carrying frame objects that save its arguments, local variables, and resume step; each caller appends its own frame as the continuation propagates outward.
3. The driver `run` keeps these frames on an explicit stack, refills the gas, and resumes the innermost frame first; when a frame completes, its result is handed to the next frame as `ans`.

Here the bookkeeping that generators perform in the runtime is done explicitly with plain objects.

**Performance Comparison**

MeasureThat.net reports metrics such as `ExecutionsPerSecond` for each test case, alongside environment metadata (`RawUAString`, `Browser`, `DevicePlatform`, `OperatingSystem`).
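The gas-bounded generator pattern can be shown in a smaller, self-contained sketch. The `countdown` generator below is illustrative and not part of the benchmark; `GAS` and `run` mirror the benchmark's driver:

```javascript
// Minimal sketch of the gas-bounded generator pattern: a global fuel
// budget, a driver that refuels and resumes, and a recursive generator
// that yields whenever the budget is exhausted.
var GAS = 0;

function run(init, makeGen) {
  GAS = init;
  var gen = makeGen();
  while (true) {
    var res = gen.next();        // resume the suspended computation
    if (res.done) return res.value;
    GAS = init;                  // refuel before resuming
  }
}

function* countdown(n) {
  if (GAS-- <= 0) yield null;    // out of gas: suspend back to run()
  if (n === 0) return 0;
  return (yield* countdown(n - 1)) + 1;
}

var result = run(100, function* () { return yield* countdown(1000); });
console.log(result); // 1000
```

With a budget of 100, the recursion suspends roughly every hundred calls, and the driver resumes it until `countdown` finally returns.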
**Implications**

This benchmark has no recorded runs yet, so no measured comparison is available. In general, the generator-based approach may incur overhead from allocating generator objects and resuming a `yield*` delegation chain, while the manual approach instead pays for frame-object allocation and `switch` dispatch on every step; which cost dominates depends on the JavaScript engine. Keep in mind that benchmark results can vary with factors such as engine version, system load, memory constraints, and JIT optimization, so a single run rarely settles the question.
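A stripped-down relative of the Manual strategy is the classic thunk trampoline: instead of frame objects with saved variables and resume steps, each recursive step returns a closure describing the next step, and a flat driver loop runs them. This sketch is illustrative; the names `trampoline` and `countdown` are not from the benchmark:

```javascript
// Classic trampoline: a step returns either a final value or a thunk
// (function) describing the next step. The driver loop runs thunks on
// a flat stack, so recursion depth never grows.
function trampoline(step) {
  while (typeof step === 'function') step = step();
  return step;
}

function countdown(n, acc) {
  if (n === 0) return acc;
  return function () { return countdown(n - 1, acc + 1); }; // bounce
}

var result = trampoline(countdown(100000, 0));
console.log(result); // 100000
```

A depth of 100000 would overflow the native call stack with plain recursion, but the trampoline keeps stack usage constant, at the cost of one closure allocation per step, which is the same trade-off the Manual case makes with its explicit frames.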