MeasureThat.net
getLimitsFromData optimization
(version: 3)
Comparing performance of:
Before optimization vs After optimization
Created: 2 years ago by Registered User
Script Preparation code:
var LIST_OSCILLATION_STATE = { 6: [6, 7] }
var DATE_KEY = 'dateTime'
var data = []

let id = Math.floor(Math.random() * 5)
let nextIdChange = Math.round(Math.random() * 170 + 30)
for (let i = 0; i < 2000; i++) {
  if (i >= nextIdChange) {
    id = Math.floor(Math.random() * 5)
    nextIdChange = Math.round(Math.random() * 170 + 30)
  }
  const wellDepth = Math.random() * 5000
  const dateTime = new Date(Math.round(Date.now() * Math.random())).toISOString()
  data.push({ dateTime, wellDepth, idFeedRegulator: id, state: id + 4 })
}

var makeLimit = (id, dateTime, depth) => ({
  id,
  dateStart: dateTime,
  dateEnd: dateTime,
  depthStart: depth,
  depthEnd: depth,
})
Tests:
Before optimization
const getLimitsFromData = (data, accessorName) => {
  if (data.length < 1) return []
  const out = []
  for (let i = 0; i < data.length; i++) {
    if (!data[i][accessorName] || !data[i][DATE_KEY]) continue
    const lastLimit = out?.[out.length - 1]
    let { [accessorName]: id, [DATE_KEY]: dateTime, wellDepth = null } = data[i]
    if (accessorName === 'state')
      for (const [key, states] of Object.entries(LIST_OSCILLATION_STATE))
        if (states.includes(id)) id = Number(key)
    const newDateTime = new Date(dateTime)
    if (lastLimit && lastLimit.id === id) {
      lastLimit.dateEnd = newDateTime
      lastLimit.depthEnd = wellDepth
    } else {
      out.push(makeLimit(id, newDateTime, wellDepth))
    }
  }
  return out
}

const saubLimitData = getLimitsFromData(data, 'idFeedRegulator')
const spinLimitData = getLimitsFromData(data, 'state')
After optimization
const getLimitsFromData = (data, accessorName, parseId) => {
  if (data.length < 1) return []
  const out = []
  for (let i = 0; i < data.length; i++) {
    if (!data || !data[i][accessorName] || !data[i][DATE_KEY]) continue
    const lastLimit = out[out.length - 1]
    let { [accessorName]: id, [DATE_KEY]: dateTime, wellDepth = null } = data[i]
    if (typeof parseId === 'function') id = parseId(id)
    const newDateTime = new Date(dateTime)
    if (lastLimit?.id === id) {
      lastLimit.dateEnd = newDateTime
      lastLimit.depthEnd = wellDepth
      continue
    }
    out.push(makeLimit(id, newDateTime, wellDepth))
  }
  return out
}

const saubLimitData = getLimitsFromData(data, 'idFeedRegulator')
const spinLimitData = getLimitsFromData(data, 'state', (id) =>
  Number(
    Object.keys(LIST_OSCILLATION_STATE).find((key) =>
      LIST_OSCILLATION_STATE[key].includes(id)
    ) ?? id
  )
)
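To make the aggregation behavior concrete, here is a minimal standalone run of the optimized function on a tiny hand-made fixture. The fixture data is invented for illustration and is not part of the benchmark; consecutive rows sharing the same id are merged into a single limit whose date/depth end fields track the last matching row.

```javascript
// Standalone sketch of the optimized getLimitsFromData on a small fixture.
const DATE_KEY = 'dateTime'

const makeLimit = (id, dateTime, depth) => ({
  id,
  dateStart: dateTime,
  dateEnd: dateTime,
  depthStart: depth,
  depthEnd: depth,
})

const getLimitsFromData = (data, accessorName, parseId) => {
  if (data.length < 1) return []
  const out = []
  for (let i = 0; i < data.length; i++) {
    if (!data[i][accessorName] || !data[i][DATE_KEY]) continue
    const lastLimit = out[out.length - 1]
    let { [accessorName]: id, [DATE_KEY]: dateTime, wellDepth = null } = data[i]
    if (typeof parseId === 'function') id = parseId(id)
    const newDateTime = new Date(dateTime)
    if (lastLimit?.id === id) {
      // Same id as the previous limit: extend it instead of creating a new one.
      lastLimit.dateEnd = newDateTime
      lastLimit.depthEnd = wellDepth
      continue
    }
    out.push(makeLimit(id, newDateTime, wellDepth))
  }
  return out
}

// Invented fixture: two rows with id 1, then one row with id 2.
const fixture = [
  { dateTime: '2023-01-01T00:00:00Z', wellDepth: 100, idFeedRegulator: 1 },
  { dateTime: '2023-01-01T00:01:00Z', wellDepth: 110, idFeedRegulator: 1 },
  { dateTime: '2023-01-01T00:02:00Z', wellDepth: 120, idFeedRegulator: 2 },
]

const limits = getLimitsFromData(fixture, 'idFeedRegulator')
console.log(limits.length) // 2: one merged limit for id 1, one for id 2
```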
Autogenerated LLM Summary (model llama3.2:3b, generated one year ago):
**Benchmark Definition**

The benchmark measures the performance of the `getLimitsFromData` function in JavaScript. This function processes an array of objects, extracts specific data from each object, and returns a new array with aggregated results.

**Script Preparation Code**

The script preparation code sets up some constants and variables:

* `LIST_OSCILLATION_STATE`: an object whose keys are oscillation states and whose values are arrays of corresponding IDs.
* `DATE_KEY`: the key for the date property in the data objects.
* `data`: a randomly generated array of 2000 objects, each with `dateTime`, `wellDepth`, `idFeedRegulator`, and `state` properties.

**Html Preparation Code**

There is no HTML preparation code.

**Individual Test Cases**

The benchmark consists of two test cases:

1. **Before optimization**: measures the performance of the original `getLimitsFromData` function.
2. **After optimization**: measures the performance of the modified `getLimitsFromData` function with optimizations applied.

**Options Compared**

The main difference between the two test cases is the additional `parseId` parameter in the optimized version:

* In the original function, there is no `parseId` parameter; the oscillation-state lookup is hardcoded inside the loop.
* In the optimized function, if a custom parsing function for the `id` property is provided via the `parseId` parameter, it is applied to the `id` value before processing.

**Pros and Cons**

Before optimization:

* Pros: simpler call site with no dependency on a caller-supplied callback.
* Cons: the state lookup is hardcoded into the loop, making the function less flexible and potentially slower.

After optimization:

* Pros: custom parsing logic for `id` values can be supplied per call, which can improve performance in certain scenarios and simplifies the loop body.
* Cons: requires additional setup (defining the custom parsing function) and moves responsibility for ID mapping to the caller.

**Other Considerations**

The benchmark measures the performance of the `getLimitsFromData` function using a simple yet representative test case. The random data generation and the presence of oscillation states in the data help isolate the performance impact of the optimizations. This is just one possible optimization for the `getLimitsFromData` function; other approaches or trade-offs may apply depending on the specific requirements and constraints of the application.

**Alternatives**

Other ways to measure the performance of the `getLimitsFromData` function include:

* Using a more complex or representative test case (e.g., larger data sets or additional dependencies).
* Implementing different optimization strategies (e.g., caching, parallel processing).
* Comparing performance across multiple browsers or environments.
* Measuring the impact of optimizations on specific aspects of the function's behavior (e.g., memory usage, latency).
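As one caching alternative (a sketch, not part of the benchmark): the optimized version's `parseId` callback still scans `Object.keys(LIST_OSCILLATION_STATE).find(...)` for every `state` row. A reverse lookup map built once ahead of time turns each per-row resolution into a single `Map.get` call.

```javascript
// Sketch: precompute a reverse map (state id -> oscillation key) once,
// then pass a cheap O(1) lookup as parseId instead of scanning
// Object.keys().find() for every element.
const LIST_OSCILLATION_STATE = { 6: [6, 7] }

// Build the reverse map a single time, up front.
const STATE_TO_KEY = new Map()
for (const [key, states] of Object.entries(LIST_OSCILLATION_STATE)) {
  for (const state of states) STATE_TO_KEY.set(state, Number(key))
}

// Drop-in replacement for the benchmark's parseId callback.
const parseId = (id) => STATE_TO_KEY.get(id) ?? id

console.log(parseId(7)) // 6 (mapped to its oscillation key)
console.log(parseId(5)) // 5 (unmapped ids pass through unchanged)
```

Whether this wins in practice depends on the number of oscillation states and rows; with a map this small the difference may be negligible, which is exactly the kind of question this benchmark setup could answer.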
Related benchmarks:
+new Date vs new Date().getTime() vs Date.now() 100k
Lru cache non-async version with 99 percent hit ratio
lodash groupBy vs Array.reduce 100k with gey generation 2
Every time parse date or second loop with date parse