Toggle navigation
MeasureThat.net
Create a benchmark
Tools
Feedback
FAQ
Register
Log In
Stack strategies
(version: 0)
Comparing performance of:
generators vs Manual
Created:
9 years ago
by:
Guest
Jump to the latest result
Tests:
generators
var GAS = 0; function run(init, makeGen) { GAS = init; var gen = makeGen(); while(true) { var res = gen.next(); if(res.done) { return res.value; } else { GAS = init; continue; } } } var linkNames = ["first", "rest"]; function* link(f, r) { return { $name: "link", $fields: linkNames, first: f, rest: r } } var empty = { $name: "empty" }; function* map(f, l) { if(GAS-- <= 0) { yield null; } var ans; var dispatch = { empty: 0, link: 1 }; var case_step = dispatch[l.$name]; switch(case_step) { case 0: ans = empty break; case 1: fst = l[l.$fields[0]]; rst = l[l.$fields[1]]; var t1 = yield* f(fst); var t2 = yield* map(f, rst); var t3 = yield* link(t1, t2); ans = t3; } return ans; } var buildList = function(n) { var l = empty; for(var i = 0; i < n; i++) { l = link(i, l).next().value; } return l; } run(100, function*() { return yield* map(function*(l) { return l + 1; }, buildList(4000)); });
Manual
var GAS = 0; function run(init, runner) { GAS = init; var frames = [{ run: runner }]; var thisFrame = frames.pop(); while(true) { var res = thisFrame.run(thisFrame); if(res.isCont) { var len = res.frames.length; for(var i = 0; i < len; i++) { frames.push(res.frames.pop()); } GAS = init; thisFrame = frames.pop(); } else { if(frames.length <= 0) { break; } thisFrame = frames.pop(); thisFrame.ans = res; } } return res; } var linkNames = ["first", "rest"]; function link(f, r) { return { $name: "link", $fields: linkNames, first: f, rest: r } } var empty = { $name: "empty" }; function map(f, l) { var step = 0; if(f.isFrame) { var ans = f.ans; step = f.step; var t1 = f.vars[0]; var fst = f.vars[1]; var rst = f.vars[2]; var t2 = f.vars[3]; l = f.args[1]; f = f.args[0]; } if(GAS-- <= 0) { return { isCont: true, frames: [{ run: map, isFrame: true, vars: [t1, fst, rst, t2], args: [f, l], step: step }] }; } while(true) { switch(step) { case 0: var dispatch = { empty: 1, link: 2 }; step = dispatch[l.$name]; break; case 1: ans = empty step = 5 break; case 2: fst = l[l.$fields[0]]; rst = l[l.$fields[1]]; step = 3; ans = f(fst); break; case 3: t1 = ans; step = 4 ans = map(f, rst); break; case 4: t2 = ans; step = 5 ans = link(t1, t2); break; case 5: GAS++; return ans; } if(ans && ans.isCont) { ans.frames.push({ run: map, isFrame: true, vars: [t1, fst, rst, t2], args: [f, l], step: step }); return ans; } } } var buildList = function(n) { var l = empty; for(var i = 0; i < n; i++) { l = link(i, l); } return l; } run(100, function() { return map(function(l) { return l + 1; }, buildList(4000)); });
Rendered benchmark preparation results:
Suite status:
<idle, ready to run>
Run tests (2)
Previous results
Fork
Test case name
Result
generators
Manual
Fastest:
N/A
Slowest:
N/A
Latest run results:
No previous run results
This benchmark does not have any results yet. Be the first one
to run it!
Autogenerated LLM Summary
(model
llama3.2:3b
, generated one year ago):
I'll explain the benchmark and its options in detail. **Benchmark Definition** The benchmark definition is a JavaScript function that defines the experiment to be performed. In this case, there are two benchmark definitions: 1. "generators" 2. "Manual" Both benchmarks use a similar structure, which I'll explain later. **Options Compared** In the "Generators" benchmark, the options compared are: * Using generators (defined by the `run` function) vs * Not using generators (i.e., returning immediately from the `map` function) In the "Manual" benchmark, there is only one option compared: * Manual execution of the experiment **Benchmark Logic** Here's a high-level overview of how each benchmark works: 1. **Generators**: The `run` function uses a generator to recursively call itself until a base case is reached (i.e., until `GAS-- <= 0`). In each recursive call, the function updates some variables (`t1`, `fst`, `rst`, and `t2`) and performs some operations based on those values. The generator yields control back to the caller when it reaches the base case. 2. **Manual**: The `run` function simply executes the experiment without using generators. It iterates over a list of elements (`l`) and calls the `map` function for each element, passing in an incrementing value (i.e., `l + 1`). **Benchmark Logic (in more detail)** The "Generators" benchmark is implemented as follows: * The `run` function initializes some variables (`t1`, `fst`, `rst`, and `t2`) and executes the experiment. * In each recursive call, it updates these variables based on the current values of `t1`, `fst`, `rst`, and `t2`. * When `GAS-- <= 0`, it yields control back to the caller using a special value (`true`) as the return value. * The generator continues to yield until all base cases are reached. In contrast, the "Manual" benchmark is implemented simply by iterating over the list of elements and calling the `map` function for each element: * It initializes some variables (`t1`, `fst`, `rst`, and `t2`) and executes the experiment. * For each element in the list, it calls the `map` function with an incrementing value (i.e., `l + 1`). **Device Platform and Browser** The benchmark was run on a Mac OS X 10.12.5 device with Chrome 59 as the browser. **Results** The latest benchmark result shows two runs: * "Manual" with an execution rate of 1204.924072265625 executions per second. * "Generators" with zero executions per second (i.e., no valid results). This suggests that the manual execution method is significantly faster than using generators, which may be due to the overhead introduced by generator operations. I hope this explanation helps!
Related benchmarks:
Reduce to array
Stack vs Queue
spread vs push set creation 2
fill vs push multiple
fill vs push multiple small
Comments
Confirm delete:
Do you really want to delete benchmark?