scrypto-benchmarks

This repository contains benchmarks for authenticated AVL+ trees and other features of the Scrypto framework (only the former at the moment). You can find results obtained on our hardware in the paper Improving Authenticated Dynamic Dictionaries, with Applications to Cryptocurrencies.

Prerequisites

Java 8 and SBT (Scala Build Tool) are required. Scala 2.12.x and the other dependencies will be downloaded by SBT.

Authenticated AVL+ trees

There are two implementations of the AVL+ trees defined in the paper https://eprint.iacr.org/2016/994. The legacy implementation (in the scorex.crypto.authds.avltree.legacy package) operates without batch updates and does not support deletion of elements from a tree. The newer implementation (in the scorex.crypto.authds.avltree.batch package) supports efficient batch updates as well as removals.
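As a rough sketch of how the batching implementation is typically driven: the prover keeps the whole tree and emits one proof for a whole batch, while the verifier holds only a digest and replays the batch against that proof. The class and method names below come from the scorex.crypto.authds.avltree.batch package, but exact constructor signatures differ between Scrypto versions, so treat this as an illustration of the flow rather than a verbatim API.

```scala
import scorex.crypto.authds.avltree.batch._
import scorex.crypto.authds.{ADKey, ADValue}
import scorex.crypto.hash.{Blake2b256, Digest32}

// Prover side: holds the full tree. Parameters mirror the benchmark
// setup (32-byte keys, fixed 8-byte values); the exact constructor
// arguments are an assumption and may vary by Scrypto version.
val prover = new BatchAVLProver[Digest32, Blake2b256.type](
  keyLength = 32, valueLengthOpt = Some(8))
val startDigest = prover.digest

// Apply a batch of operations, then produce a single proof covering all of them.
val key   = ADKey @@ Array.fill[Byte](32)(1)
val value = ADValue @@ Array.fill[Byte](8)(2)
prover.performOneOperation(Insert(key, value))
prover.performOneOperation(Remove(key))
val proof = prover.generateProof()

// Verifier side: needs only the old digest and the proof, no tree storage.
// Replaying the same batch yields the new digest if the proof checks out.
val verifier = new BatchAVLVerifier[Digest32, Blake2b256.type](
  startDigest, proof, keyLength = 32, valueLengthOpt = Some(8))
verifier.performOneOperation(Insert(key, value))
verifier.performOneOperation(Remove(key))
```

The legacy implementation follows the same prover/verifier split but generates one proof per operation, which is what the batching benchmark below measures against.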

Benchmarks List

Performance Benchmark

The benchmark compares the performance of the AVL+ tree (legacy version) with authenticated treaps. To get the data, launch the benchmark with:

```
sbt "run-main scorex.crypto.benchmarks.PerformanceMeter"
```

Batching Benchmark

The benchmark prints out proof sizes and generation/verification times for the batching and legacy provers/verifiers. The tree is populated before the measurements start. Launch the benchmark with:

```
sbt "run-main scorex.crypto.benchmarks.BatchingBenchmark"
```

The rows of the output table contain the following data:

  • Step is a batch size
  • Plain size is an average proof size with no batching compression
  • GZipped size is an average proof size with GZip compression
  • Batched size is an average proof size with batching compression
  • Old apply time is an average proof generation time without batching
  • New apply time is an average proof generation time with batching
  • Old verify time is an average proof verification time without batching
  • New verify time is an average proof verification time with batching

Blockchain Processing Simulation Benchmark

This benchmark simulates two blockchain verifiers: a full verifier, which simply maintains a full on-disk (HDD or SSD) data structure of (key, value) pairs, and a light verifier, which uses our data structure and thus maintains only digests and verifies proofs, using very little RAM and no on-disk storage at all. The data structure is populated with 5 000 000 random 32-byte keys (with 8-byte values) at the start. Each simulated block contains 1500 updates of values for randomly chosen existing keys and 500 insertions of new random keys. The simulation runs for 90 000 blocks (thus ending with a data structure of 50 000 000 keys, similar to the Bitcoin UTXO set size at the time of writing).
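The final key count follows directly from the parameters above, since updates overwrite existing values and only insertions grow the key set; a quick sanity check in Scala (numbers taken from the paragraph above):

```scala
// Numbers taken from the simulation description.
val initialKeys = 5000000L      // keys loaded before the simulation starts
val insertionsPerBlock = 500L   // new random keys added per block
val blocks = 90000L             // simulated chain length

// Updates change values of existing keys and do not grow the key set,
// so only insertions contribute to the final number of keys.
val finalKeys = initialKeys + insertionsPerBlock * blocks
println(finalKeys) // 50000000
```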

The benchmark consists of the following parts:

  • Run sbt "run-main scorex.crypto.benchmarks.RunLegacyProver" to generate data for the legacy non-batching light verifier

  • Run sbt "run-main scorex.crypto.benchmarks.RunBatchProver" to generate data for the batching light verifier

  • Run sbt "run-main scorex.crypto.benchmarks.RunFullWorker" to simulate full block processing

  • Run sbt "run-main scorex.crypto.benchmarks.RunLegacyVerifier" to simulate light verifier without batching support (on proofs previously generated by scorex.crypto.benchmarks.RunLegacyProver)

  • Run sbt "run-main scorex.crypto.benchmarks.RunBatchVerifier" to simulate light verifier with batching support (on proofs previously generated by scorex.crypto.benchmarks.RunBatchProver)

To minimize the effect of OS-level caching, periodically free the OS caches during the runs; on Linux this can be done as root with `sync; echo 3 > /proc/sys/vm/drop_caches` (see http://unix.stackexchange.com/questions/87908/how-do-you-empty-the-buffers-and-cache-on-a-linux-system). We did this every 10 seconds in our tests.