MeasureThat.net
Assign object props
(version: 3)
Comparing performance of:
test1 vs test2 vs test3 vs test4 vs test5 vs test6 vs test7 vs test8 vs test9
Created: 3 years ago by Registered User
Script Preparation code:
// Build a non-configurable, enumerable property descriptor.
// Functions become getters; everything else becomes a read-only value.
const readonly = (prop) =>
  typeof prop === 'function'
    ? { configurable: false, enumerable: true, get: prop }
    : { configurable: false, enumerable: true, value: prop, writable: false };

// Map every entry of a plain object to its read-only descriptor.
const readonlyExpand = (props) =>
  Object.entries(props).reduce((prev, [k, v]) => {
    prev[k] = readonly(v);
    return prev;
  }, {});

// test1: plain object literal
function test1() {
  return { a: 'a', b: 'b', c: 'c', d: 'd', e: 'e', f: 'f' };
}

// test2: class instance with fields assigned in the constructor
class Test2Class {
  constructor() {
    this.a = 'a'; this.b = 'b'; this.c = 'c';
    this.d = 'd'; this.e = 'e'; this.f = 'f';
  }
}
function test2() {
  return new Test2Class();
}

// test3: literal passed through Object.defineProperties with an empty descriptor map
function test3() {
  return Object.defineProperties({ a: 'a', b: 'b', c: 'c', d: 'd', e: 'e', f: 'f' }, {});
}

// test4: read-only descriptors generated by readonlyExpand
function test4() {
  return Object.defineProperties({}, readonlyExpand({ a: 'a', b: 'b', c: 'c', d: 'd', e: 'e', f: 'f' }));
}

// test5: Object.assign onto a fresh empty object
function test5() {
  return Object.assign({}, { a: 'a', b: 'b', c: 'c', d: 'd', e: 'e', f: 'f' });
}

// test6: Object.assign with a single argument (returns the literal unchanged)
function test6() {
  return Object.assign({ a: 'a', b: 'b', c: 'c', d: 'd', e: 'e', f: 'f' });
}

// test7: read-only descriptors written out by hand
function test7() {
  return Object.defineProperties({}, {
    a: readonly('a'), b: readonly('b'), c: readonly('c'),
    d: readonly('d'), e: readonly('e'), f: readonly('f')
  });
}

// test8: class whose properties are getter-only prototype accessors
class Test8 {
  get a() { return 'a'; }
  get b() { return 'b'; }
  get c() { return 'c'; }
  get d() { return 'd'; }
  get e() { return 'e'; }
  get f() { return 'f'; }
}
function test8() {
  return new Test8();
}

// test9: object literal with inline getters
function test9() {
  return {
    get a() { return 'a'; },
    get b() { return 'b'; },
    get c() { return 'c'; },
    get d() { return 'd'; },
    get e() { return 'e'; },
    get f() { return 'f'; },
  };
}
Tests:
test1
test1();
test2
test2();
test3
test3();
test4
test4();
test5
test5();
test6
test6();
test7
test7();
test8
test8();
test9
test9();
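To reproduce this comparison outside MeasureThat.net, the nine factories can be timed with a minimal harness. This is a sketch, not the site's benchmark runner: the iteration count, the `bench` helper, and the two representative factories shown here are illustrative choices.

```javascript
// Two representative factories from the preparation code stand in for all nine.
function test1() {
  return { a: 'a', b: 'b', c: 'c', d: 'd', e: 'e', f: 'f' };
}
function test9() {
  return {
    get a() { return 'a'; },
    get b() { return 'b'; },
    get c() { return 'c'; },
    get d() { return 'd'; },
    get e() { return 'e'; },
    get f() { return 'f'; },
  };
}

// Time `fn` over `iterations` calls and report executions per second.
function bench(name, fn, iterations = 1_000_000) {
  const start = performance.now();
  let sink; // keep the result live so the loop is not trivially eliminated
  for (let i = 0; i < iterations; i++) sink = fn();
  const ms = performance.now() - start;
  console.log(`${name}: ${Math.round(iterations / (ms / 1000))} ops/sec`);
  return sink;
}

bench('test1', test1);
bench('test9', test9);
```

Note that tight microbenchmark loops like this are sensitive to JIT optimization, so absolute numbers will differ from the hosted runner; only the relative ordering is meaningful.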
Autogenerated LLM Summary (model llama3.2:3b, generated one year ago):
Here are some key statistics extracted from the recorded run data:
1. **Browser**: Chrome 110 is the most common browser, appearing in all 12 tests.
2. **Device Platform**: Desktop is the primary device platform, used in all 12 tests.
3. **Operating System**: Windows is the dominant operating system, used in all 12 tests.
4. **Executions Per Second**: The execution rates vary widely across tests, with a minimum of 1,006,694 executions per second (Test 7) and a maximum of 29,581,798 executions per second (Test 1).
5. **Test Name Distribution**:
   * Test 1 appears 3 times.
   * Test 2 appears twice.
   * Tests 3 to 9 appear once each.
Related benchmarks:
Create object
object creation: new, object.create, literal+proto
object creation+method lookup: new, object.create, literal+proto
bind props test
Comments