MeasureThat.net
sha1asm (version: 5)
Comparing performance of: computeHash_direct vs computeHash_parse
Created: 9 years ago by: Registered User
HTML Preparation code:
<html>
<body>
    <script src="https://www.toolsley.com/sha1asm.js"></script>
    <div id="computeHash_direct"></div>
</body>
</html>
Script Preparation code:
var typedArray, blob;

// 1 MiB test buffer filled with a repeating 0..255 byte pattern.
typedArray = new Uint8Array(1024 * 1024);
for (var i = 0; i < typedArray.length; i++) typedArray[i] = i % 256;
blob = new Blob([typedArray], { type: 'application/octet-binary' });

// Hashes the blob with sha1asm, driving the FileReader chunk loop
// inline in the function body ("direct").
function computeHash_direct(file, hashCallback, errorCallback) {
    var hashAlgo = new sha1asm();
    var hash;
    var fileSize = file.size,
        chunkSize = 2097152, // 1024 * 1024 * 2 - read in chunks of 2 MiB
        offset = 0,
        fileReader = new FileReader();

    function readNextChunk() {
        var endFromOffset = offset + chunkSize,
            endByte = endFromOffset >= fileSize ? fileSize : endFromOffset;
        fileReader.readAsArrayBuffer(file.slice(offset, endByte));
    }

    fileReader.onloadend = function (e) {
        var arrayBuffer = e.target.result;
        hashAlgo.update(new Uint8Array(arrayBuffer));
        offset += arrayBuffer.byteLength;
        if (offset < fileSize) {
            readNextChunk();
        } else {
            hash = hashAlgo.final();
            if (!e.target.error) {
                hashCallback(hash);
            } else {
                errorCallback(e.target.error);
            }
        }
    };
    fileReader.onerror = function () { };

    hashAlgo.reset();
    readNextChunk();
}

// Hashes the blob with sha1asm, delegating the chunk loop to a reusable
// parseFile helper that invokes chunkCallback per chunk and endCallback once.
function computeHash_parse(file, hashCallback, errorCallback) {
    var hashAlgo;

    function parseFile(file, chunkCallback, endCallback) {
        var fileSize = file.size,
            chunkSize = 2097152, // 1024 * 1024 * 2 - read in chunks of 2 MiB
            offset = 0,
            fileReader = new FileReader();

        function readNextChunk() {
            var endFromOffset = offset + chunkSize,
                endByte = endFromOffset >= fileSize ? fileSize : endFromOffset;
            fileReader.readAsArrayBuffer(file.slice(offset, endByte));
        }

        fileReader.onloadend = function (e) {
            var arrayBuffer = e.target.result;
            chunkCallback(arrayBuffer);
            offset += arrayBuffer.byteLength;
            if (offset < fileSize) {
                readNextChunk();
            } else {
                endCallback(e.target.error);
            }
        };
        fileReader.onerror = function () { };

        readNextChunk();
    }

    hashAlgo = new sha1asm();
    hashAlgo.reset();
    parseFile(
        file,
        function (arrayBuffer) {
            hashAlgo.update(new Uint8Array(arrayBuffer));
        },
        function (err) {
            if (!err) {
                var hash = hashAlgo.final();
                hashCallback(hash);
            } else if (typeof errorCallback === "function") {
                errorCallback(err);
            }
        }
    );
}

function logHash(hash) { /* console.log("Hash:", hash); */ }
function logErr(err) { /* console.log("ERROR:", err); */ }
Tests:
computeHash_direct
computeHash_direct(blob, logHash, logErr);
computeHash_parse
computeHash_parse(blob, logHash, logErr);
Latest run results: none; this benchmark does not have any results yet.
Autogenerated LLM Summary (model llama3.2:3b, generated one year ago):
Let's break down the benchmark and its test cases.

**Benchmark Definition**

The benchmark measures the performance of two approaches to computing the SHA-1 hash of binary data: `computeHash_direct` and `computeHash_parse`. Both rely on the `sha1asm.js` library, an optimized (asm.js-style) JavaScript implementation of the SHA-1 algorithm.

**Script Preparation Code**

The script preparation code creates a 1 MiB `Uint8Array` (1024 * 1024 bytes) filled with a repeating 0-255 byte pattern, wraps it in a `Blob`, and stores it in the `blob` variable.

**computeHash_direct Function**

This function takes three arguments:

* `file`: the blob containing the binary data
* `hashCallback`: a callback that receives the computed hash value
* `errorCallback`: a callback invoked on a read error

It reads the blob in 2 MiB chunks with a `FileReader` and feeds each chunk to the hasher via `hashAlgo.update()`; the chunk-reading loop is written inline in the function body, hence "direct". After the last chunk, `hashAlgo.final()` produces the digest, which is passed to `hashCallback`.

**computeHash_parse Function**

This function uses the same chunked `FileReader` strategy, but factors the reading loop into a reusable `parseFile(file, chunkCallback, endCallback)` helper; `computeHash_parse` itself takes the same `file`, `hashCallback`, and `errorCallback` arguments as `computeHash_direct`. Since both variants hash identical chunks with the same library, the benchmark isolates the overhead of routing each chunk through an extra layer of callbacks.

**Html Preparation Code**

The HTML preparation code loads the `sha1asm.js` library from an external URL and declares a single `<div id="computeHash_direct">` element, which the scripts do not otherwise use.

**Individual Test Cases**

There are two test cases:

* `computeHash_direct`: calls `computeHash_direct(blob, logHash, logErr)`.
* `computeHash_parse`: calls `computeHash_parse(blob, logHash, logErr)`.

Both pass the no-op `logHash` and `logErr` callbacks defined in the preparation code. Note that both functions complete asynchronously via `FileReader` events, so a harness that does not account for deferred completion would measure only the synchronous setup work.

**Latest Benchmark Result**

The page currently lists no recorded runs, so no cross-browser comparison is available. Given that both variants perform identical hashing work, any measured difference would reflect the cost of the extra callback indirection in `computeHash_parse` rather than the hashing itself.
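The event-driven `parseFile` chunk loop described above can be restated with promises; a sketch under the assumption that the Blob API with `slice()`/`arrayBuffer()` is available (browsers, Node 18+). The helper name `forEachChunk` is invented for illustration, and the byte-counting callback stands in for `sha1asm`'s `update()`:

```javascript
// Promise-based restatement of the benchmark's parseFile loop: slice the
// Blob into fixed-size chunks and feed each chunk's bytes to a callback.
async function forEachChunk(blob, chunkSize, chunkCallback) {
  for (let offset = 0; offset < blob.size; offset += chunkSize) {
    const end = Math.min(offset + chunkSize, blob.size);
    // Blob.slice() is lazy; arrayBuffer() materializes only this window.
    const bytes = await blob.slice(offset, end).arrayBuffer();
    chunkCallback(new Uint8Array(bytes));
  }
}

// Usage: stream a 3 MiB blob in 2 MiB chunks, counting bytes seen.
const demo = new Blob([new Uint8Array(3 * 1024 * 1024)]);
let seen = 0;
forEachChunk(demo, 2 * 1024 * 1024, chunk => { seen += chunk.length; })
  .then(() => console.log(seen)); // prints 3145728 (3 MiB)
```

Because `await` serializes the reads, the chunks arrive in order, matching the sequential `readNextChunk()` recursion in the original code.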
Related benchmarks:
Map (Native vs Ramda vs Lodash vs Immutable)
Map (Native vs Ramda vs Lodash vs Immutable) - sample size 100 1
xotahal - map
Map (Native vs Ramda vs Lodash vs Immutableytu6r5bfgvnr
Ramda sort vs JS native sort vs Lodash