tensorflow-preprocessing
(version: 0)
Comparing performance of:
await runPrediction([], []) vs await runPrediction([1], [1])
Created: 6 years ago by: Guest
HTML Preparation code:
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs/dist/tf.min.js"></script>
Script Preparation code:
// NOTE: GRAPH_SIZE, NUM_OF_TAGS, and _localPooling are not defined anywhere
// in this benchmark; the definitions below are assumed placeholders so the
// code can run. The original also referenced them via `this.`, which only
// works in a non-strict global context, so plain globals are used instead.
var GRAPH_SIZE = 32;   // assumed maximum graph dimension
var NUM_OF_TAGS = 16;  // assumed one-hot encoding depth

// Placeholder for the pooling step the benchmark never defines;
// the identity function stands in for it here.
function _localPooling(graphTensor) {
  return graphTensor;
}

function runPrediction(graphsArray, featuresArray) {
  // Build the model inputs inside tf.tidy so intermediate tensors are freed.
  let input = tf.tidy(() => {
    let featuresTensor = null;
    let graphsTensor = null;

    if (graphsArray && graphsArray.length > 0) {
      let graphsTensors = [];
      for (let graph of graphsArray) {
        // Each graph is expected to be a 2D (adjacency-matrix-like) array.
        let graphTensor = tf.tensor(graph);
        let shape = graphTensor.shape;
        if (shape[0] > GRAPH_SIZE) {
          // Crop oversized graphs to GRAPH_SIZE x GRAPH_SIZE.
          graphTensor = graphTensor.slice([0, 0], [GRAPH_SIZE, GRAPH_SIZE]);
        } else {
          // Zero-pad undersized graphs up to GRAPH_SIZE x GRAPH_SIZE.
          graphTensor = graphTensor.pad([
            [0, GRAPH_SIZE - shape[0]],
            [0, GRAPH_SIZE - shape[1]]
          ]);
        }
        graphTensor = _localPooling(graphTensor);
        graphsTensors.push(graphTensor);
      }
      graphsTensor = tf.stack(graphsTensors);
    } else {
      graphsTensor = tf.zeros([1, GRAPH_SIZE, GRAPH_SIZE]);
    }

    if (featuresArray && featuresArray.length > 0) {
      let featuresTensors = [];
      for (let features of featuresArray) {
        // Each entry is expected to be an array of integer tag indices;
        // zero-pad, then truncate to exactly GRAPH_SIZE entries.
        if (features.length < GRAPH_SIZE) {
          features = features.concat(new Array(GRAPH_SIZE).fill(0));
        }
        let featuresPadded = features.slice(0, GRAPH_SIZE);
        featuresTensors.push(tf.oneHot(
          tf.tensor(featuresPadded, [1, GRAPH_SIZE], "int32"),
          NUM_OF_TAGS
        ));
      }
      featuresTensor = tf.cast(tf.concat(featuresTensors), "float32");
    } else {
      featuresTensor = tf.zeros([1, GRAPH_SIZE, NUM_OF_TAGS]);
    }

    return { graphsTensor, featuresTensor };
  });
  // Return the prepared tensors so callers (and the test cases) receive them.
  return input;
}
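As a quick sanity check (not part of the original benchmark, and relying on the assumed constant values above), the prepared tensors can be inspected like this:

// Hypothetical usage sketch. With GRAPH_SIZE = 32 and NUM_OF_TAGS = 16 as
// assumed above, the empty-input call yields fixed-size zero tensors.
const { graphsTensor, featuresTensor } = runPrediction([], []);
console.log(graphsTensor.shape);   // [1, GRAPH_SIZE, GRAPH_SIZE] -> [1, 32, 32]
console.log(featuresTensor.shape); // [1, GRAPH_SIZE, NUM_OF_TAGS] -> [1, 32, 16]
// Dispose the tensors once done to avoid leaking WebGL memory.
graphsTensor.dispose();
featuresTensor.dispose();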
Tests:
1. await runPrediction([], [])
2. await runPrediction([1], [1])
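Note that runPrediction as defined is synchronous; awaiting its return value still works because `await` wraps non-promise values in a resolved promise, which adds a microtask hop to each call. A minimal standalone illustration:

// Awaiting a non-promise value is valid JavaScript: the value is wrapped
// in a resolved promise, deferring the continuation to a microtask.
function syncValue() { return 42; }

async function demo() {
  const v = await syncValue();
  console.log(v); // 42
}
demo();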
Latest run results:
No previous run results. This benchmark does not have any results yet. Be the first one to run it!
Autogenerated LLM Summary (model llama3.2:3b, generated one year ago):
I'll break down the provided benchmark definition and test cases for you.

**Benchmark Definition**

The provided JSON represents a benchmark from MeasureThat.net, a JavaScript microbenchmarking site. It defines a benchmark called "tensorflow-preprocessing" with two preparation blocks:

1. Script preparation code: `runPrediction(graphsArray, featuresArray)`, which prepares the input data for a TensorFlow.js model.
2. HTML preparation code: loads the TensorFlow.js library from a CDN.

**Options Compared**

The benchmark compares two invocations of the `runPrediction` function:

1. **`await runPrediction([], [])`**: runs the function with empty input arrays and awaits its result.
2. **`await runPrediction([1], [1])`**: runs the function with the non-empty input array `[1]` for both graphs and features.

**Pros and Cons**

Both approaches have their pros and cons:

* **`await runPrediction([], [])`**:
  + Pros: measures the fixed overhead of the function, and of awaiting its result, when no real preprocessing happens.
  + Cons: may not reflect the function's performance on realistic, non-empty input.
* **`await runPrediction([1], [1])`**:
  + Pros: exercises the actual padding, slicing, and one-hot encoding paths with non-empty input.
  + Cons: the measurement can be affected by other factors, such as promise-resolution overhead.

**Library: TensorFlow.js**

The benchmark uses TensorFlow.js, the JavaScript version of the popular TensorFlow machine learning library. It allows developers to build and deploy machine learning models in web applications. In this specific benchmark, TensorFlow.js is used to prepare inputs for a model that processes graph data: graphs are cropped or zero-padded to a fixed size, and feature arrays are one-hot encoded.

**Special JS Feature: Async/Await**

The benchmark uses the async/await syntax, introduced in ECMAScript 2017 (ES8), which enables writing asynchronous code in a more synchronous style that is easier to read, maintain, and debug. In this case, the `await` keyword pauses execution until the value returned by `runPrediction` resolves, so the measurement includes the overhead of awaiting that result.

**Other Alternatives**

Other tools for measuring JavaScript performance include:

1. WebPageTest: a popular tool for measuring the performance of web applications.
2. Lighthouse: an open-source, automated tool for auditing web application performance, usability, and accessibility.
3. jsPerf: a JavaScript benchmarking site that lets developers create custom benchmarks.

Keep in mind that each benchmarking tool has its strengths and weaknesses, and the choice of tool depends on the specific use case and requirements.
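To make the one-hot encoding step concrete, here is a minimal standalone sketch of the pad-then-encode transformation the script applies to each feature array. The GRAPH_SIZE and NUM_OF_TAGS values are illustrative assumptions, not values defined by the benchmark; it assumes tf (TensorFlow.js) is already loaded.

// Illustrative values only; run in its own scope.
const GRAPH_SIZE = 4;
const NUM_OF_TAGS = 3;

// A feature array shorter than GRAPH_SIZE...
let features = [2, 1];
// ...is zero-padded, then truncated to exactly GRAPH_SIZE entries.
features = features.concat(new Array(GRAPH_SIZE).fill(0)).slice(0, GRAPH_SIZE);
// features is now [2, 1, 0, 0]

// One-hot encoding yields a tensor of shape [1, GRAPH_SIZE, NUM_OF_TAGS].
const encoded = tf.oneHot(
  tf.tensor(features, [1, GRAPH_SIZE], "int32"),
  NUM_OF_TAGS
);
encoded.print();
// [[[0, 0, 1],
//   [0, 1, 0],
//   [1, 0, 0],
//   [1, 0, 0]]]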
Related benchmarks:
jsNetworkX neighbors and things
jsNetworkX big graph vs 2 smaller ones - lookup time --2
push vs spread in building graph from edges
graph test