- Hadoop and Spark do not need to use the same benchmark and workload.
- The goal is not to compare Hadoop vs. Spark, but to create a smaller benchmark for each that reproduces the same microarchitectural behavior inside the node (IPC, L1, L2 cache misses, ...), not the same overall performance.
- Collect metrics on a worker node.
- Scale down to a single node.
- A multicore node is OK.
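One way to check that the scaled-down single-node benchmark reproduces the per-node behavior of the full workload is to compare the counter-derived metrics directly. A minimal sketch, where the metric names, example values, and the 10% tolerance are all illustrative assumptions, not values from these notes:

```python
# Sketch: compare microarchitectural metrics of the full workload vs. a
# scaled-down single-node benchmark. The metric names, values, and the
# 10% tolerance are illustrative assumptions.

def similar(full: dict, small: dict, tol: float = 0.10) -> bool:
    """Return True if every metric of the small benchmark is within
    `tol` relative error of the full workload's value."""
    return all(
        abs(small[m] - full[m]) <= tol * abs(full[m])
        for m in full
    )

# Hypothetical per-node metrics, e.g. derived from perf counters.
full_run  = {"IPC": 1.20, "L1_miss_rate": 0.030, "L2_miss_rate": 0.010}
small_run = {"IPC": 1.15, "L1_miss_rate": 0.031, "L2_miss_rate": 0.011}

print(similar(full_run, small_run))  # True: all metrics within 10%
```

A check like this makes "same behavior" concrete: the scaled-down benchmark is accepted only if each chosen counter-derived metric stays within a fixed relative error of the full run.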