Benchmark software for ArrayFire
The benchmarking program requires the following system-level libraries: ncurses and a Python installation with Bokeh.
On Ubuntu these dependencies are most easily installed via the package manager and Anaconda Python. First install ncurses via the package manager:
sudo apt-get install libncurses5-dev
Next, download and install Anaconda from Continuum Analytics. Once this is complete, run
conda install bokeh
which will automatically download and install all required packages.
Build and install the ArrayFire library following the instructions here:
Note that you may install ArrayFire to a non-system path if needed.
Checkout and build
Basic building instructions:
git clone --recursive https://github.com/bkloppenborg/arrayfire_benchmark.git
cd arrayfire_benchmark
mkdir build
cd build
cmake ..
make
If you have ArrayFire installed in a non-standard location, specify the directory which contains the ArrayFireConfig* files. These files may be found in the share/ArrayFire subdirectory of the installation folder. For example, if ArrayFire were installed locally to /opt/ArrayFire, then we would modify the cmake command above to be:
cmake -DArrayFire_DIR=/opt/ArrayFire/share/ArrayFire ..
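Before re-running CMake, it can help to confirm that the chosen directory really does contain the config files. A minimal sketch, reusing the example /opt/ArrayFire prefix from above (adjust to your install location):

```shell
# Confirm the directory intended for -DArrayFire_DIR actually contains the
# ArrayFireConfig* files; /opt/ArrayFire is only the example prefix from above.
AF_DIR=/opt/ArrayFire/share/ArrayFire
if ls "$AF_DIR"/ArrayFireConfig* >/dev/null 2>&1; then
    AF_STATUS=found
else
    AF_STATUS=missing
fi
echo "ArrayFireConfig* files in $AF_DIR: $AF_STATUS"
```

If the files are reported missing, double-check where ArrayFire was installed before adjusting the -DArrayFire_DIR argument.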
Building on Windows
Install ArrayFire using the installer. Advanced users can opt to use custom builds, but this document will not detail steps for that.
You will also need Boost. You can install it using the Boost binary installers for Windows (VS2013 builds).
Open the CMake GUI. Source directory is arrayfire-benchmark and build directory is arrayfire-benchmark/build. Hit configure.
You may need to add/change the following:
- Set BOOST_ROOT to the Boost install directory.
- Set BOOST_LIBRARYDIR to the directory containing the compiled Boost libraries.
Run configure again.
Once the Boost libraries are found, you will need to add a prefix of "lib" to all of the Boost library entries. For example, the library filename stored in BOOST_SYSTEM_LIBRARY_RELEASE will change from its original name to the same name prefixed with "lib".
Run configure once again. Then generate.
Now open the build/af_benchmark solution and build it.
Using the benchmark suite
First generate a series of benchmark results by running one of the three benchmark programs (one per ArrayFire backend, e.g. benchmark_cuda) with the -r output_file.csv option. All three programs have the same set of options, which may be seen using the -h flag.
After this, use scripts/standalone-plot.py to visualize individual results from the benchmark suite (specify the -h option to see the possibilities).
If you need to plot a lot of results, modify the
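When there are many result files, a small shell loop can drive the plotting script over each one. A sketch, assuming standalone-plot.py accepts a CSV path as its positional argument (check its -h output for the actual interface):

```shell
# Plot every benchmark CSV in the current directory, one file at a time.
# The positional-argument interface of standalone-plot.py is an assumption;
# verify against its -h output before relying on this.
PLOTTED=0
for csv in *.csv; do
    [ -e "$csv" ] || continue           # glob matched nothing; skip cleanly
    python scripts/standalone-plot.py "$csv"
    PLOTTED=$((PLOTTED + 1))
done
echo "plotted $PLOTTED file(s)"
```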