kleinron/bit-shift-performance
Bit Shift vs Array Lookup: A Deep Performance Investigation

This repository contains the source code, benchmarks, and findings from an investigation into whether precalculating bit masks (`1 << i`) is faster than computing them on the fly.

Repository Structure

```
blog-post/
├── README.md                          # This file - overview
├── blog-post.md                       # Main blog post
├── src/
│   ├── javascript/
│   │   ├── bit-counting-benchmark.js          # Main JS benchmark
│   │   └── powerset-bitwise-benchmark.js      # Extended JS analysis
│   └── c/
│       ├── bit_counting_benchmark.c           # C implementation
│       └── Makefile                           # Build system
└── data/
    └── results-summary.md                     # Tabulated results
```

Quick Start

JavaScript Benchmarks

```shell
cd src/javascript
node bit-counting-benchmark.js
```

C Benchmarks

```shell
cd src/c
make                    # Build with -O2 optimization
./bit_counting_benchmark

# Compare optimization levels
make clean
make benchmark-O0       # No optimization
./bit_counting_benchmark_O0
```

Benchmark Methodology

Randomized Testing: Eliminating Bias

To ensure accurate and unbiased results, the C benchmark uses randomized test interleaving:

```c
// Instead of sequential runs:
// Test1, Test1, Test1... Test2, Test2, Test2... Test3, Test3, Test3...

// We use a Fisher-Yates shuffle for random interleaving:
// Test2, Test1, Test3, Test1, Test2, Test3, Test2, Test3, Test1...
```

This eliminates several sources of bias:

  • CPU warmup bias: Each approach faces equal warm/cold CPU states
  • Thermal throttling effects: No approach consistently runs when CPU is hottest
  • Cache state advantages: Each test starts with similar cache conditions
  • OS scheduling bias: Background processes affect all approaches equally
  • Timing artifacts: Reduces impact of periodic system events

With 1000 iterations and randomized order, the benchmark produces statistically robust results with low variance. This methodology revealed that differences between approaches are real and consistent, not artifacts of test ordering.

Key Findings

  1. JavaScript (V8 JIT): Direct bit shifts are 36% faster than global precalculation
  2. C without optimization (-O0): Direct bit shifts are 46% faster
  3. C with optimization (-O2): Results depend on dataset size relative to CPU cache
  4. Cache matters: Performance characteristics flip when data exceeds L2 cache

Requirements

  • JavaScript: Node.js v18+
  • C: GCC or Clang with C99 support

License

MIT

Related

This investigation was conducted as part of the mod-counter JavaScript library project.
