MeasureThat.net
eval vs Function
(version: 0)
Comparing performance of:
eval vs function
Created: 4 years ago by Guest
Script Preparation code:
var modelconfig = JSON.stringify({ animation: true, animationDuration: 1000, color: [ "#60ACFC", "#5BC49F", "#FEB64D", "#B8EE4E", "#32D3E8", "#9286E7", "#458CF7", "#EFE94E", "#3FCEC7", "#6370DE", "#FF7B7B", "#668ED6", "#D660A8" ], tooltip: { show: false, tooltipHeight: "auto", tooltipWidth: "auto", triggerOn: "mousemove", formatter: "{b} <br/>{d}", textStyle: { fontFamily: "微软雅黑", fontWeight: "normal", fontStyle: "normal", color: "#333", fontSize: 14 }, backgroundColor: "rgba(251,251,251,1)", tooltipStyle: "solid", borderWidth: 0, padding: 10, tooltipRadius: 0, boxshadow: { color: "#000000", isNone: true, size: 5 }, extraCssText: "border-style: solid; border-radius:0;width:auto;height:auto;box-shadow:none" }, title: { padding: 12, show: false, text: "金字塔", top: "top", left: "center", textStyle: { fontFamily: "微软雅黑", fontWeight: "bold", fontStyle: "normal", color: "#333333", fontSize: 16 } }, legend: { show: true, padding: 12, textStyle: { fontFamily: "微软雅黑", fontWeight: "normal", fontStyle: "normal", color: "rgb(65,65,65)", fontSize: 14 }, orient: "vertical", right: 0, top: 0, data: ["序列A", "序列B", "序列C", "序列D"], itemGap: 20, icon: "rect", itemWidth: 12, itemHeight: 12 }, series: [ { name: "金字塔图", sort: "ascending", type: "funnel", gap: 1, top: 45, left: 52, width: "80%", selectedMode: "single", avoidLabelOverlap: false, label: { show: true, position: "right", fontSize: "14", fontFamily: "微软雅黑", fontWeight: "normal", fontStyle: "", formatter: "{c}", color: "#333333" }, labelLine: { show: true, length: 5, lineStyle: { width: 1, type: "solid" } }, itemStyle: { borderType: "solid", borderColor: "#fff", borderWidth: 0 }, emphasis: { itemStyle: { borderColor: "red" } }, data: [ { value: 20, name: "序列A" }, { value: 40, name: "序列B" }, { value: 60, name: "序列C" }, { value: 80, name: "序列D" } ], animationDuration: 1000 }, { name: "金字塔图", sort: "ascending", type: "funnel", gap: 1, top: 45, left: 52, width: "80%", selectedMode: "single", avoidLabelOverlap: false, label: { show: 
true, position: "inside", fontSize: 14, color: "#fff", fontFamily: "微软雅黑", fontWeight: "normal", fontStyle: "normal", formatter: "{d}%" }, labelLine: { show: true, length: 10, lineStyle: { width: 1, type: "solid" } }, itemStyle: { borderType: "solid", borderColor: "#000", borderWidth: 0 }, emphasis: { itemStyle: { borderColor: "#333333" } }, data: [ { value: 20, name: "序列A" }, { value: 40, name: "序列B" }, { value: 60, name: "序列C" }, { value: 80, name: "序列D" } ] } ] }); var model=null;
Tests:
eval
eval('model=' + modelconfig);
function
let methodFunc = new Function('"use strict"; return ' + modelconfig); model = methodFunc();
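The two test bodies above can be seen side by side in a self-contained sketch. A tiny configuration stands in for the large `modelconfig` string (an assumption for brevity; any `JSON.stringify` output behaves the same way here):

```javascript
// Stand-in for the benchmark's large modelconfig string.
var modelconfig = JSON.stringify({ animation: true, animationDuration: 1000 });

// Test case 1: eval the string as an assignment expression.
// Direct eval runs in the calling scope, so `model` is assigned here.
var model = null;
eval('model = ' + modelconfig);

// Test case 2: wrap the string in a function body and call it.
// The Function constructor compiles the string in its own scope.
var methodFunc = new Function('"use strict"; return ' + modelconfig);
var model2 = methodFunc();

console.log(model.animationDuration, model2.animationDuration); // 1000 1000
```

Both techniques produce an equivalent object; the benchmark measures only how fast each one parses and executes the string.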
Autogenerated LLM Summary (model: llama3.2:3b, generated one year ago):
**Benchmark Definition**

The benchmark consists of two test cases:

1. **eval**: uses the `eval` function to execute a string containing the model configuration (`modelconfig`) as JavaScript code.
2. **function**: creates a new function from the same configuration string using the `Function` constructor, then calls it with `model = methodFunc()`.

**What's being tested**

The benchmark compares two ways of turning a configuration string into a live JavaScript object:

1. **eval**, which can pose security risks if the input is not used carefully.
2. **function**, which creates a new function via the `Function` constructor and then calls it.

**What's being measured**

The execution speed of each approach, in executions per second.

**Latest Benchmark Result**

The latest run, on one specific device, reported:

* **eval**: 184,627.296875 executions/second
* **function**: 52,980.078125 executions/second

This suggests that on that run the `eval` method was significantly faster than the `Function` method.

**Comparison of eval and function**

The main difference between the two methods lies in how they parse and execute the input string as JavaScript code:

* **eval** executes the input string in the calling scope, which can lead to security vulnerabilities if the input is not sanitized.
* **function** compiles the string into a new function via the `Function` constructor, which is somewhat safer because the code runs in its own scope, but it carries the overhead of creating and compiling a function.

**Implications**

The results suggest that `eval` was faster than the `Function` approach here, but that speed comes at the cost of security. The choice between the two depends on the specific use case and its requirements for performance, security, and ease of use.
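One option the benchmark does not cover is worth noting: since `modelconfig` is produced by `JSON.stringify`, it is pure JSON and can be revived with `JSON.parse`, which involves no code execution at all. A minimal sketch, assuming the string contains only JSON-serializable values (as the preparation code above guarantees):

```javascript
// Stand-in for a fragment of the benchmark's modelconfig string.
var modelconfig = JSON.stringify({
  color: ["#60ACFC", "#5BC49F"],
  tooltip: { show: false }
});

// JSON.parse treats the string strictly as data, never as code,
// so untrusted input cannot execute anything.
var model = JSON.parse(modelconfig);

console.log(model.color[0]); // "#60ACFC"
```

This sidesteps the eval-vs-Function security trade-off entirely, at the cost of only handling data that round-trips through JSON (no functions, dates, or regexes).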
Related benchmarks:
eval vs new Function
eval vs Function vs lodash.template
eval vs Function vs lodash.template v3
eval vs Function vs lodash.template v4
window.eval vs new Function