MeasureThat.net
Computation single optimization with destructuring vs application fn - 1000 runs
(version: 0)
Comparing performance of:
apply 1 vs single 1 vs apply 2 vs single 2 vs apply 3 vs single 3
Created: 5 years ago by Registered User
Script Preparation code:
const data = ["hello", " ", "world"];

// One applicator per arity: index 0 handles zero deps, index 3 spreads any number.
const applicators = [
  ({ fn }) => fn(),
  ({ fn, deps }) => fn(read(deps[0])),
  ({ fn, deps }) => fn(read(deps[0]), read(deps[1])),
  ({ fn, deps }) => fn(...deps.map(read))
];

let cid = 0;
var dep1 = { id: 0 };
var dep2 = { id: 1 };
var dep3 = { id: 2 };

// Positional-argument variants
function sampleArgs1(_0) { return _0; }
function sampleArgs2(_0, _1) { return _0 + _1; }
function sampleArgs3(_0, _1, _2) { return _0 + _1 + _2; }

// Destructuring variants (sampleDes1 receives a single value, not an array)
function sampleDes1(_0) { return _0; }
function sampleDes2([_0, _1]) { return _0 + _1; }
function sampleDes3([_0, _1, _2]) { return _0 + _1 + _2; }

function createApplyComputation(fn, deps) {
  return { id: cid++, fn, deps, apply: applicators[deps.length] };
}

function createSingleComputation(fn, deps, isSingle) {
  return { id: cid++, fn, deps, isSingle };
}

function execApplyComputation(c) {
  return c.apply(c);
}

function execSingleComputation(c) {
  if (c.isSingle) {
    return c.fn(read(c.deps));
  }
  return c.fn(c.deps.map(read));
}

function read(s) {
  return data[s.id];
}
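As a sanity check outside the benchmark harness, the two execution strategies can be compared in isolation. This standalone sketch re-creates the relevant pieces of the preparation code in simplified form (the `applyExec`/`singleExec` names are illustrative, not part of the benchmark) and confirms both paths produce the same result:

```javascript
// Standalone sanity check: both strategies should yield the same
// concatenated string for the same dependency list.
const data = ["hello", " ", "world"];
const read = (s) => data[s.id];

const dep1 = { id: 0 }, dep2 = { id: 1 }, dep3 = { id: 2 };

// apply-style: spread the resolved dependencies as positional arguments
const applyExec = (fn, deps) => fn(...deps.map(read));

// single-style: pass the resolved dependencies as one array argument
const singleExec = (fn, deps) => fn(deps.map(read));

const args3 = (a, b, c) => a + b + c;
const des3 = ([a, b, c]) => a + b + c;

console.log(applyExec(args3, [dep1, dep2, dep3])); // "hello world"
console.log(singleExec(des3, [dep1, dep2, dep3])); // "hello world"
```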
Tests:
apply 1
for (let i = 0; i < 1000; i++) execApplyComputation(createApplyComputation(sampleArgs1, [dep1]));
single 1
for (let i = 0; i < 1000; i++) execSingleComputation(createSingleComputation(sampleDes1, dep1, true));
apply 2
for (let i = 0; i < 1000; i++) execApplyComputation(createApplyComputation(sampleArgs2, [dep1, dep2]));
single 2
for (let i = 0; i < 1000; i++) execSingleComputation(createSingleComputation(sampleDes2, [dep1, dep2]));
apply 3
for (let i = 0; i < 1000; i++) execApplyComputation(createApplyComputation(sampleArgs3, [dep1, dep2, dep3]));
single 3
for (let i = 0; i < 1000; i++) execSingleComputation(createSingleComputation(sampleDes3, [dep1, dep2, dep3]));
Latest run results: none. This benchmark does not have any results yet.
Autogenerated LLM Summary (model llama3.2:3b, generated one year ago):
**Benchmark Overview**

The benchmark measures the performance of two approaches, `apply` and `single`, for executing a function with multiple dependencies. The preparation code defines functions that take varying numbers of positional arguments (`sampleArgs1`–`sampleArgs3`), destructuring counterparts (`sampleDes1`–`sampleDes3`), and three dependency objects (`dep1`, `dep2`, `dep3`). Each test case executes one of these functions 1000 times using either the `apply` or the `single` strategy.

**Options Being Compared**

### 1. `apply`

This approach dispatches through an arity-indexed array of applicators. `createApplyComputation` attaches the applicator matching `deps.length` to the computation object; the applicator resolves each dependency with `read` and passes the results as positional arguments.

**Pros:**

* Simplifies the execution process, since the applicator handles the function call and argument passing.
* The applicator array is easy to extend to support additional arities.

**Cons:**

* Introduces overhead from creating and dispatching through applicators.
* May not be optimal for performance-critical code paths.

### 2. `single`

This approach calls the function directly, passing either one resolved dependency (when `isSingle` is set) or an array of resolved dependencies that the callee destructures.

**Pros:**

* Eliminates the applicator dispatch overhead.
* Provides more direct control over the execution path.

**Cons:**

* Shifts argument handling onto the callee's destructuring, which can introduce errors or inefficiencies if done incorrectly.

**Other Considerations**

* The benchmark reports an `ExecutionsPerSecond` value per test case; a higher value indicates better performance.
* Results can vary across browser and platform combinations, so conclusions may not be portable across environments.
* The benchmark does not account for other factors that can affect results, such as garbage collection or system load.

Overall, the `apply` approach may come out slightly faster in some environments, at the cost of applicator creation overhead, while the `single` approach is simpler and more direct but requires careful argument passing on the callee side.
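One caveat the summary's trade-off discussion points at: every benchmark iteration allocates a fresh computation object, so allocation cost is measured alongside the call strategy itself. Hoisting creation out of the loop (a hypothetical variant, not part of the original benchmark, shown here with plain values instead of `read`-resolved deps) isolates the dispatch cost:

```javascript
// Hoisted-creation variant: the computation object is built once, so the
// loop measures only the arity-indexed dispatch, not allocation.
let cid = 0;
const applicators = [
  ({ fn }) => fn(),
  ({ fn, deps }) => fn(deps[0]),
  ({ fn, deps }) => fn(deps[0], deps[1]),
  ({ fn, deps }) => fn(...deps)
];
const add2 = (a, b) => a + b;
const c = { id: cid++, fn: add2, deps: [1, 2], apply: applicators[2] };

for (let i = 0; i < 1000; i++) c.apply(c); // dispatch-only cost
console.log(c.apply(c)); // 3
```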
Related benchmarks:
single-upstream-vs-switch-apply-2
Computation single optimization vs application fn
Computation single optimization with destructuring vs application fn
Computation single optimization with destructuring vs application switch with single dep