Lightning fast normal and incremental md5 for JavaScript


SparkMD5 is a fast implementation of the MD5 algorithm. This script is based on the JKM md5 library, which is the fastest implementation around (see:

NOTE: Please disable Firebug while performing the test! Firebug consumes a lot of memory and CPU and slows the test down considerably.

Improvements over the JKM md5 library:

  • Functionality wrapped in a closure
  • Object-oriented library
  • Incremental md5 (see below)
  • Validates using JSLint

Incremental md5 performs a lot better when hashing large amounts of data, such as files. One can read a file in chunks, using the FileReader and Blob APIs, and append each chunk for md5 hashing while keeping memory usage low. See the example below.

Normal usage:

var hexHash = SparkMD5.hash('Hi there');       // hex hash
var rawHash = SparkMD5.hash('Hi there', true); // OR raw hash

Incremental usage:

var spark = new SparkMD5();
spark.append('Hi');
spark.append(' there');
var hexHash = spark.end();                    // hex hash
var rawHash = spark.end(true);                // OR raw hash
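
The same append/end pattern works for any number of pieces. A minimal sketch (the chunks array below is just illustrative, not part of the library):

var chunks = ['Hi', ' ', 'there'],            // any number of pieces
    spark = new SparkMD5(),
    i;

for (i = 0; i < chunks.length; i += 1) {
    spark.append(chunks[i]);                  // feed each piece as it becomes available
}

console.log(spark.end());                     // same result as SparkMD5.hash('Hi there')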

Hash a file incrementally:

NOTE: If you test the code below using the file:// protocol in Chrome, you must start the browser with the --allow-file-access-from-files argument.
      Please see:

document.getElementById("file").addEventListener("change", function() {

    var fileReader = new FileReader(),
        blobSlice = File.prototype.mozSlice || File.prototype.webkitSlice || File.prototype.slice,
        file = document.getElementById("file").files[0],
        chunkSize = 2097152,                           // read in chunks of 2MB
        chunks = Math.ceil(file.size / chunkSize),
        currentChunk = 0,
        spark = new SparkMD5();

    fileReader.onload = function(e) {
        console.log("read chunk nr", currentChunk + 1, "of", chunks);
        spark.appendBinary(;           // append binary string

        if (currentChunk < chunks) {
        else {
           console.log("finished loading");
 "computed hash", spark.end()); // compute hash

    function loadNext() {
        var start = currentChunk * chunkSize,
            end = start + chunkSize >= file.size ? file.size : start + chunkSize;

        fileReader.readAsBinaryString(, start, end));


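The example above has no error handling, so a failed chunk read would silently stop the hashing. A small, purely illustrative addition is an onerror handler assigned inside the same change listener, next to fileReader.onload:

fileReader.onerror = function() {
    console.warn("oops, something went wrong reading the file");
};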

Roadmap:

  • Add support for byteArrays (see the workaround sketch after this list).
  • Add support for hmac.
  • Add native support for reading files? Maybe add it as an extension?
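
Until byteArray support lands, one possible workaround (purely illustrative; only appendBinary is part of the current API) is to turn a typed array into a binary string and hash that:

// Hypothetical helper: hash a Uint8Array using the existing binary-string API.
function hashByteArray(bytes) {
    var spark = new SparkMD5(),
        binary = '',
        i;

    for (i = 0; i < bytes.length; i += 1) {
        binary += String.fromCharCode(bytes[i]);   // one character per byte
    }

    spark.appendBinary(binary);                    // treat it as raw bytes, not UTF-8 text
    return spark.end();
}

console.log(hashByteArray(new Uint8Array([72, 105])));   // md5 of the bytes "Hi"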


Credits:

Joseph Myers (