
dmp-bench: Benchmarking Suite for DMP Mitigation

This repository provides benchmarks for cryptographic primitives and system components under different compiler and runtime configurations, with a focus on the secret-splitting DMP mitigation.


What does this benchmark do?

  • Measures performance (cycles, memory usage) and security-related overhead for cryptographic primitives.

  • Algorithms benchmarked include:

    • Hash functions (SHA256, SHA512, BLAKE2b)
    • Stream ciphers (Salsa20, XChaCha20)
    • HMAC (HMAC-SHA256, HMAC-SHA512)
    • Asymmetric key algorithms (ed25519, X25519)
    • Key derivation functions (Argon2id)
    • Authenticated encryption (ChaCha20-Poly1305)
    • Application flow metrics (via the Tox protocol—latency, memory, message roundtrip)
  • The benchmark compares implementations and mitigation strategies across different compiler toolchains (including custom LLVM branches), runtimes, and secret/private data handling.
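To illustrate the kind of per-primitive measurement involved (this is not the suite's actual harness, which counts CPU cycles in C), a minimal Python timing sweep over a hash function might look like:

```python
import hashlib
import time

def time_hash(data: bytes, iterations: int = 1000) -> float:
    """Average wall-clock nanoseconds per SHA-256 over `iterations` runs."""
    start = time.perf_counter_ns()
    for _ in range(iterations):
        hashlib.sha256(data).digest()
    return (time.perf_counter_ns() - start) / iterations

# Sweep increasing data sizes, as the suite does for each primitive.
timings = {size: time_hash(b"\x00" * size) for size in (64, 1024, 16384)}
```

The real benchmarks additionally vary the number of secrets and record memory usage alongside timing.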


How to Run the Benchmark

Clone and Prepare Repositories

The main Python script (bench.py) automatically clones the required submodules and repositories, so a git client, Python 3, and a C toolchain (for make) are the only prerequisites:

git clone https://github.com/dmp-mitigation/dmp-bench.git
cd dmp-bench
python3 bench.py --benchmark <benchmark_name> --iterations <num> --output <output_file>

Available benchmarks (from tests/):

  • sha256, sha512, blake2b
  • salsa20, xchacha20
  • hmac_sha256, hmac_sha512
  • ed25519, x25519
  • argon2id
  • aead_chacha20poly1305
  • toxcore (application-level)
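To sweep the whole set, you can build one bench.py invocation per benchmark name above; a sketch (the output filenames and iteration count are illustrative, not the script's defaults):

```python
import sys

# Benchmark names from tests/ (see the list above).
BENCHMARKS = [
    "sha256", "sha512", "blake2b",
    "salsa20", "xchacha20",
    "hmac_sha256", "hmac_sha512",
    "ed25519", "x25519",
    "argon2id", "aead_chacha20poly1305",
    "toxcore",
]

def build_command(benchmark: str, iterations: int = 100) -> list[str]:
    """Assemble a bench.py command line for one benchmark."""
    return [
        sys.executable, "bench.py",
        "--benchmark", benchmark,
        "--iterations", str(iterations),
        "--output", f"results_{benchmark}.txt",
    ]

# Each command can then be executed with subprocess.run(cmd, check=True).
commands = [build_command(name) for name in BENCHMARKS]
```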

You can also specify a configuration file or reuse existing artifact directories:

python3 bench.py --config <config.yml> --use-existing <existing_dir>

Running Standalone C Benchmarks

You can run individual C benchmarks directly:

cd tests/<benchmark>
make
./<benchmark>

Each benchmark prints cycles, memory usage, and additional measurements for multiple secrets/data sizes.


Example Results

After running, results appear either in text log files (resultsx.txt, etc.) or as plots generated in Python.

Text Output Example (SHA256)

=== Hash <cycles> <mem_usage>
<cycles-for-each-secret> ...
<cycles-for-each-size> ...
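A minimal parser for output in this shape (field positions are assumed from the template above, not from the actual file format; bench.py's own parse_data handles the real files) could be:

```python
def parse_hash_block(lines):
    """Parse one '=== Hash <cycles> <mem_usage>' header followed by a line of
    per-secret cycle counts and a line of per-size cycle counts."""
    _, _, cycles, mem_usage = lines[0].split()
    return {
        "cycles": int(cycles),
        "mem_usage": int(mem_usage),
        "cycles_per_secret": [int(x) for x in lines[1].split()],
        "cycles_per_size": [int(x) for x in lines[2].split()],
    }

# Hypothetical sample block in the format sketched above.
sample = [
    "=== Hash 123456 2048",
    "1100 1150 1120",
    "900 1800 3600",
]
parsed = parse_hash_block(sample)
```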

Python Plotting

After generating result files, use:

import bench
compilers = bench.parse_data('resultsx.txt')
bench.generate_plots(compilers)

This produces performance comparison charts (e.g., cycles vs data size for different toolchain variants):

[Plot: Cycles vs Data Size]


Output Metrics

  • Cycles: Average cycle count to encrypt/decrypt/hash/sign, reported for increasing data sizes and numbers of keys/secrets.
  • Memory Usage: Peak resident set size measured via getrusage.
  • Application Flow Metrics: Time for init, connection, message roundtrip, peak memory in real application (via Tox protocol).
  • Compiler Metrics: Compile time, binary size, measured across builds with/without mitigation, different branches.
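The benchmarks read peak RSS via getrusage; the same measurement is exposed in Python through the standard resource module, which may be useful for cross-checking the C numbers:

```python
import resource  # POSIX only; not available on Windows

def peak_rss() -> int:
    """Peak resident set size of this process, as reported by getrusage.
    Note the units differ by platform: kilobytes on Linux, bytes on macOS."""
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

current_peak = peak_rss()
```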

Generating Plots

  • Results from multiple runs can be plotted for comparative analysis using the built-in functions in bench.py:
    bench.generate_plots(compilers)      # Plots runtime/data size for each compiler+benchmark
    bench.generate_tables(compilers)     # CSV or table output of summary results
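If you need the summary as CSV outside of bench.py, a hand-rolled writer over parsed results might look like this (the column names and the {compiler: {metric: value}} shape are illustrative, not the script's actual schema):

```python
import csv
import io

def results_to_csv(results):
    """Render {compiler: {metric: value}} as a CSV string, one row per compiler."""
    metrics = sorted({m for row in results.values() for m in row})
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["compiler"] + metrics)
    for compiler, row in sorted(results.items()):
        writer.writerow([compiler] + [row.get(m, "") for m in metrics])
    return buf.getvalue()

# Hypothetical summary comparing a baseline and a mitigated build.
table = results_to_csv({
    "clang-baseline":  {"cycles": 1200, "binary_kb": 84},
    "clang-mitigated": {"cycles": 1850, "binary_kb": 91},
})
```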

Directory Structure

bench.py               # Python orchestration and analysis script
tests/                 # C source benchmarks for each crypto primitive/algorithm
docs/example_plot.png  # [Add result images here!]

Contribution & Extending

  • Add new benchmarks in tests/
  • Extend bench.py for new metrics or output formats

References

  • dmp-llvm: Custom LLVM branch for mitigation
  • libNa: Crypto library variants
  • dmp-rt: Runtime modifications
  • TokTok/c-toxcore: Reference implementation for application protocol benchmarks
