SPEC CPU 2006 v1.2

SPEC CPU(tm) 2006 is designed to provide a comparative measure of 
compute-intensive performance across the widest practical range of hardware 
using workloads developed from real user applications. Metrics are provided for 
both integer and floating-point compute-intensive performance. Full 
documentation is available on the SPEC website: http://www.spec.org/cpu2006/. 
In order to use this benchmark, SPEC CPU must be installed and the [spec_dir]/config 
directory must be writable by the benchmark user. The runtime parameters 
defined below essentially determine the 'runspec' arguments.

SPEC CPU2006 consists of a total of 29 individual benchmarks. 12 of these 
benchmarks measure integer-related CPU performance, and the remaining 17
measure floating point performance. Aggregate scores are calculated when 
the benchmark run is int (all integer benchmarks), fp (all floating point 
benchmarks), or all (both integer and floating point benchmarks). These 
aggregate scores are calculated as the geometric mean of the medians from 
3 runs of each individual benchmark in the suite. Aggregate scores are 
calculated based on tuning (base and/or peak) and whether the run is speed
(single copy) or rate (multiple copies).
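
As a simplified illustration of the aggregation (three hypothetical benchmarks 
rather than a full suite): median ratios of 10, 20 and 40 would produce a 
geometric mean of (10 * 20 * 40)^(1/3) = 8000^(1/3) = 20.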

A few notes on execution:
1. Benchmark execution will always use the runspec action 'validate', which 
   signifies the following: build (if needed), run, check for correct answers, 
   and generate reports
2. check_version will always be 0


TESTING PARAMETERS

* benchmark                  the benchmark(s) to run - any of the benchmark 
                             identifiers listed in config/spec-benchmarks.ini 
                             may be specified. This argument can be repeated 
                             to designate multiple benchmarks. You may specify
                             'int' for all SPECint benchmarks, 'fp' for all 
                             SPECfp benchmarks and 'all' for all benchmarks. 
                             Benchmarks may be referenced either by their 
                             numeric or full identifier (e.g. --benchmark=400
                             or --benchmark=400.perlbench). Additionally, you
                             may designate benchmarks that should be removed
                             by prefixing them with a minus character
                             (e.g. --benchmark=all --benchmark=-429). May also
                             be specified using a single space or comma 
                             separated value (e.g. --benchmark "all -429")
                             DEFAULT: all
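                             For example (a sketch using identifiers from 
                             config/spec-benchmarks.ini), the following runs all 
                             benchmarks except 429.mcf and 470.lbm:
                               ./run.sh --benchmark "all -429 -470"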
                             
* collectd_rrd               If set, collectd rrd stats will be captured from 
                             --collectd_rrd_dir. To do so, when testing starts,
                             existing directories in --collectd_rrd_dir will 
                             be renamed with a .bak suffix, and upon test completion 
                             any directories not ending in .bak will be zipped
                             and saved along with other test artifacts (as 
                             collectd-rrd.zip). User MUST have sudo privileges
                             to use this option

* collectd_rrd_dir           Location where collectd rrd files are stored - 
                             default is /var/lib/collectd/rrd

* comment                    optional comment to add to the log file
                             DEFAULT: none

* config                     name of a configuration file in [spec_dir]/config 
                             to use for the run. The following macros will be 
                             automatically set via the --define argument 
                             capability of runspec (optional parameters will 
                             only be present if specified by the user):

                             rate:               if this is a rate run, this 
                                                  macro will be present defining
                                                  the number of copies

                             cpu_cache:          level 2 cpu cache 
                                                 (e.g. 4096 KB)

                             cpu_count:          the number of CPU cores present

                             cpu_family:         numeric CPU family identifier

                             cpu_model:          numeric CPU model identifier

                             cpu_name:           the CPU model name (e.g. Intel 
                                                 Xeon 5570)

                             cpu_speed:          the CPU speed in MHz 
                                                 (e.g. 2933.436)

                             cpu_vendor:         the CPU vendor 
                                                 (e.g. GenuineIntel)

                             compute_service_id: the compute service ID

                             external_id:        an external identifier for the 
                                                 compute resource

                             instance_id:        identifier for the compute 
                                                 resource under test 
                                                 (e.g. m1.xlarge)

                             ip_or_hostname:     IP or hostname of the compute 
                                                 resource

                             is32bit:            set if the OS is 32 bit

                             is64bit:            set if the OS is 64 bit

                             iteration_num:      the test iteration number 
                                                 (e.g. 2)

                             meta_*:             any of the meta parameters 
                                                 listed below

                             label:              user defined label for the 
                                                 compute resource

                             location:           location of the compute 
                                                 resource (e.g. CA, US)

                             memory_free:        free memory in KB

                             memory_total:       total memory in KB

                             numa:               set only if the system under
                                                 test has numa support

                             os:                 the operating system name 
                                                 (e.g. centos)

                             os_version:         the operating system version 
                                                 (e.g. 6.2)

                             provider_id:        the provider identifier 
                                                 (e.g. aws)

                             region:             compute resource region 
                                                 identifier (e.g. us-west)

                             run_id:             the benchmark run ID

                             run_name:           the name of the run (if 
                                                 assigned by the user)

                             sse:                the highest SSE flag supported

                             storage_config:     storage config identifier 
                                                 (e.g. ebs, ephemeral)

                             subregion:          compute resource subregion 
                                                 identifier (e.g. 1a)

                             test_id:            a user defined test identifier

                             x64:                set if the x64 parameter is 
                                                 also set

                             if this parameter value identifies a remote file 
                             (either an absolute or relative path on the 
                             compute resource, or an external reference like 
                             http://...) that file will be automatically copied 
                             into the [spec_dir]/config directory - if not specified,
                             a default.cfg file should be present in the config
                             directory
                             DEFAULT: none
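                             As an illustration (a sketch - the actual macro 
                             values depend on the system under test and the 
                             other runtime parameters), a 4-copy rate run might 
                             result in a runspec invocation resembling:
                               runspec --config=default.cfg --action=validate \
                                       --define rate=4 --define cpu_count=4 \
                                       --define os=centos --define os_version=6.2 ...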

* copies                     the number of copies to run concurrently. A higher 
                             number of copies will generally produce a better 
                             score (subject to resource availability for those 
                             copies to run). This parameter value may be one of 
                             the following:
            
                             cpu relative:    a percentage relative to the 
                                              number of CPU cores present. For 
                                              example, if copies=50% and the 
                                              compute instance has 4 cores, 2 
                                              copies will be run - standard 
                                              rounding will be used

                             fixed:           a simple numeric value 
                                              representing the number of copies 
                                              to run (e.g. copies=2)

                             memory relative: a memory to copies size ratio. 
                                              For example, if copies=2GB and 
                                              the compute instance has 16GB of 
                                               memory, then 8 copies will be run - 
                                               standard rounding will be used. 
                                              Either MB or GB suffix may be 
                                              used

                             mixed:           a combination of the above 3 types 
                                              may be used, each value separated 
                                              by a forward slash /. For example, 
                                              if copies=100%/2GB, then the number 
                                              of copies will be the lesser of 
                                              either the number of CPU cores or 
                                              memory/2GB. Alternatively, if this 
                                              value is prefixed by a +, the 
                                              greater of the values will be 
                                              used (e.g. copies=+100%/2GB)

                             The generally recommended ratio of resources per 
                             copy is 2GB of memory for 64-bit binaries, 1GB of 
                             memory for 32-bit binaries, 1 CPU core and 2-3GB 
                             of free disk space. To specify a different number
                             of copies for 32-bit binaries versus 64-bit 
                             binaries (based on the value of the x64 parameter 
                             defined below), separate the values with a pipe, 
                             and prefix the 64-bit specified value with x64: 
                             (e.g. copies="x64:100%/2GB|100%/1GB")
                             DEFAULT: x64:100%/1GB|100%/512MB (NULL for speed runs)
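                             For example, on a hypothetical instance with 4 CPU 
                             cores and 16GB of memory: copies=100%/2GB resolves 
                             to the lesser of 4 (cores) and 16GB/2GB = 8, i.e. 
                             4 copies, while copies=+100%/2GB uses the greater 
                             of the two values, i.e. 8 copies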

* define_*                   additional macros to define using the runspec 
                             --define capability (these will be accessible in 
                             the config file using the format %{macro_name}) - 
                             any number of defines may be specified. 
                             Conditional logic within the config file is 
                             supported using the format:
                               %ifdef %{macro_name}
                                 # do something
                               %else
                                 # do something else
                               %endif
                             More information is available about the use of 
                             macros on the SPEC website here: 
                             http://www.spec.org/cpu2006/Docs/config.html#sectionI.D.2
                             For flags - do not set a value for this parameter
                             (e.g. -p define_smt translates to --define smt)
                             DEFAULT: none
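                             For example (the macro names below are hypothetical 
                             and the exact translation is a sketch):
                               ./run.sh --define_smt --define_optlevel 3
                             should translate to 'runspec --define smt 
                             --define optlevel=3', making %{smt} and %{optlevel} 
                             available in the config file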

* delay                      Add a delay of the specified number of seconds 
                             before and after each benchmark. The delay is not 
                             counted toward the benchmark runtime.
                             DEFAULT: none

* failover_no_sse            When set to 1 in combination with an sse parameter,
                             benchmark execution will be re-attempted without 
                             sse if runspec execution with sse results in an 
                             error status code (runspec will be restarted 
                             without the sse macro set)
                             DEFAULT: 0

* flagsurl                   Path to a flags file to use for the run - A flags 
                             file provides information about how to interpret 
                             and report on flags (e.g. -O5, -fast, etc.) that
                             are used in a config file. The flagsurl may be an 
                             absolute or relative path in the file system, or 
                             refer to an http accessible file
                             (e.g. $[top]/config/flags/Intel-ic12.0-linux64-revB.xml)
                             Alternatively, flagsurl can be defined in the 
                             config file
                             DEFAULT: none

* huge_pages                 Whether or not to enable huge pages if 
                             supported by the OS. Prior to runspec execution, 
                             the harness checks whether /usr/lib64/libhugetlbfs.so 
                             or /usr/lib/libhugetlbfs.so exists and whether free 
                             huge pages are available in /proc/meminfo; if both 
                             conditions are met, it sets the following 
                             environment variables:
                               export HUGETLB_MORECORE=yes
                               export LD_PRELOAD=/usr/lib/libhugetlbfs.so
                             Note: In order to use huge pages, you must enable 
                             them first using something along the lines of:
                               # first clear out existing huge pages
                               echo  0 > /proc/sys/vm/nr_hugepages
                               # create 500 2MB huge pages (1GB total) - 2MB is
                               # the default huge page size on RHEL6
                               echo 500 > /proc/sys/vm/nr_hugepages
                               # mount the huge pages
                               mkdir -p /libhugetlbfs
                               mount -t hugetlbfs hugetlbfs /libhugetlbfs
                             Note: CentOS 6+ supports transparent huge pages 
                             (THP) by default. This parameter will likely have 
                             little effect on systems where THP is already 
                             enabled
                             DEFAULT: 0
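                             A quick way to verify that the preconditions above 
                             are met (a sketch; library paths vary by 
                             distribution):
                               ls /usr/lib*/libhugetlbfs.so
                               grep -i hugepages /proc/meminfo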

* ignore_errors              whether or not to ignore errors - if 0, benchmark 
                             execution will stop if any errors occur
                             DEFAULT: 0

* iterations                 How many times to run each benchmark. This 
                             parameter should only be changed if reportable=0
                             because reportable runs always use 3 iterations
                             DEFAULT: 3 (not used if reportable=1)

* max_copies                 May be used in conjunction with dynamic copies
                             calculation (see copies parameter above) in order
                             to set a hard limit on the number of copies
                             DEFAULT: none (no limit)

* nobuild                    If 1, don't build new binaries if they do not 
                             already exist
                             DEFAULT: 1
                             
* nocleanup                  Do not delete test files generated by SPEC 
                             (i.e. [spec]/benchspec/CPU2006/[benchmark]/run/*)
                             DEFAULT: 0

* nonuma                     Do not set the 'numa' macro or invoke using 
                             'numactl --interleave=all' even if numa is 
                             supported
                             DEFAULT: 0
                             
* nosse_macro                Optional macro to define when sse=optimal and no 
                             SSE flag will be set
                             
* output                     The output directory to use for writing test 
                             artifacts. If not specified, the current working 
                             directory will be used

* purge_output               Whether or not to remove run files (created in the 
                             [spec_dir]/benchspec/CPU2006/*/run/ directories) 
                             following benchmarking completion
                             DEFAULT: 1

* rate                       Whether to execute a speed or a rate run. Per the 
                             official documentation: One way is to measure how 
                             fast the computer completes a single task; this is 
                             a speed measure. Another way is to measure how many 
                             tasks a computer can accomplish in a certain amount 
                             of time; this is called a throughput, capacity or 
                             rate measure. Automatically set if 'copies' > 1
                             DEFAULT: 1

* reportable                 whether or not to designate the run as reportable - 
                             only int, fp or all benchmarks can be designated 
                             as reportable. Per the official documentation: A 
                             reportable execution runs all the benchmarks in a 
                             suite with the test and train data sets as an 
                             additional verification that the benchmark 
                             binaries get correct results. The test and train 
                             workloads are not timed. Then, the reference 
                             workloads are run three times, so that median run 
                             time can be determined for each benchmark.
                             DEFAULT: 0

* review                     Format results for review, meaning that additional 
                             detail will be printed that normally would not be 
                             present
                             DEFAULT: 0
                             
* run_timeout                The amount of time to allow each test iteration to
                             run
                             DEFAULT: 72 hours

* size                       Size of the input data to run: test, train or ref
                             DEFAULT: ref

* spec_dir                   Directory where SPEC CPU 2006 is installed. If not 
                             specified, the benchmark run script will look up 
                             the directory tree from both pwd and --output for 
                             the presence of a 'cpu2006' directory. If this 
                             fails, it will check '/opt/cpu2006'

* sse                        Run with a specific SSE optimization flag - if not
                             specified, the most optimal SSE flag will be used 
                             for the processor in use. The options available for
                             this parameter are:

                             optimal: choose the most optimal flag
                             none:    do not use SSE optimizations
                             AVX:     AVX, SSE4.2, SSE4.1, SSSE3, SSE3, SSE2 
                                      and SSE instructions
                              SSE4.2:  SSE4.2, SSE4.1, SSSE3, SSE3, SSE2 and 
                                      SSE instructions
                             SSE4.1:  SSE4.1, SSSE3, SSE3, SSE2 and SSE 
                                      instructions
                             SSSE3:   SSSE3, SSE3, SSE2 and SSE instructions
                             SSE3:    SSE3, SSE2 and SSE instructions
                             SSE2:    SSE2 and SSE instructions

                             More information is available regarding SSE compiler 
                             optimizations here: http://goo.gl/yevdH
                             DEFAULT: optimal

* sse_max                    The max SSE flag to support in conjunction with 
                             sse=optimal - if a processor supports greater than 
                             this SSE level, sse_max will be used instead
                             DEFAULT: SSE4.2

* sse_min                    The minimum SSE flag to support in conjunction with 
                             sse=optimal - if a processor does not at least 
                             support this SSE level sse optimization will not 
                             be used
                             DEFAULT: SSSE3
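                             For example, to let the harness pick the optimal 
                             SSE flag while bounding the selection between SSE2 
                             and AVX (a sketch; actual flag support depends on 
                             the compiler defined in the config file):
                               ./run.sh --sse optimal --sse_min SSE2 --sse_max AVX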

* tune                       Tuning option: base, peak or all - reportable runs 
                             must be either base or all
                             DEFAULT: base

* validate_disk_space        Whether or not to validate if there is sufficient 
                             disk space available for a run - this calculation
                             is based on a minimum requirement of 2GB per copy.
                             If this space is not available, the run will fail
                             DEFAULT: 1
                             
* verbose                    Show verbose output

* x64                        Optional parameter that will be passed into 
                             runspec using the macro --define x64 - this may be 
                             used to designate whether a run utilizes 32-bit or 
                             64-bit binaries - this parameter can also affect 
                             the dynamic calculation of the 'copies' parameter
                             described above. Valid options are 0, 1 or 2
                             DEFAULT: 2 (64-bit binaries for 64-bit systems, 
                             32-bit otherwise)

* x64_failover               This flag will cause testing to be re-attempted
                             for the opposite x64 flag if current testing 
                             fails (e.g. if initial testing is x64=1 and it 
                             fails, then testing will be re-attempted with 
                             x64=0). When used in conjunction with 
                             failover_no_sse, sse failover will take precedence 
                             followed by x64 failover
                             DEFAULT: 0
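                             For example (a sketch): attempt the run with 64-bit 
                             binaries and fall back to 32-bit binaries if it 
                             fails:
                               ./run.sh --x64 1 --x64_failover 1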


META PARAMETERS
If set, these parameters will be included in the results generated using 
save.sh. Additionally, the parameters with a * suffix can be used to change the 
values in the SPEC CPU 2006 config file using macros. When specified, each of 
these parameters will be passed in to runspec using 
--define [parameter_name]=[parameter_value] and will then be accessible in the 
config using macros %{parameter_name}
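
For example (the license number below is hypothetical), running 
./run.sh --meta_license_num 9999 should result in runspec being invoked with 
--define meta_license_num=9999, which the config file can then reference as 
%{meta_license_num}.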

* meta_burst                 If set to 1, designates testing performed in burst 
                             mode (e.g. Amazon EC2 t-series burst)

* meta_compute_service       The name of the compute service this test pertains
                             to. May also be specified using the environment 
                             variable bm_compute_service
                            
* meta_compute_service_id    The id of the compute service this test pertains
                             to. Added to saved results. May also be specified 
                             using the environment variable bm_compute_service_id
                            
* meta_cpu                   CPU descriptor - if not specified, it will be set 
                             using the 'model name' attribute in /proc/cpuinfo
                            
* meta_instance_id           The compute service instance type this test pertains 
                             to (e.g. c3.xlarge). May also be specified using 
                             the environment variable bm_instance_id
                             
* meta_hw_avail*             Date that this hardware or instance type was made 
                             available
                             
* meta_hw_fpu*               Floating point unit

* meta_hw_nthreadspercore*   Number of hardware threads per core - DEFAULT 1

* meta_hw_other*             Any other relevant information about the instance 
                             type

* meta_hw_ocache*            Other hardware primary cache

* meta_hw_pcache*            Hardware primary cache

* meta_hw_tcache*            Hardware tertiary cache

* meta_hw_ncpuorder*         Valid number of processors orderable for this 
                             model, including a unit. (e.g. "2, 4, 6, or 
                             8 chips")
                             
* meta_license_num*          The SPEC CPU 2006 license number
                            
* meta_memory                Memory descriptor - if not specified, the system
                             memory size will be used
                             
* meta_notes_N*              General notes - all of the meta_notes_* parameters 
                             support up to 5 entries (N=1-5)
                             
* meta_notes_base_N*         Notes about base optimization options
                             
* meta_notes_comp_N*         Notes about compiler invocation

* meta_notes_os_N*           Notes about operating system tuning and changes

* meta_notes_part_N*         Notes about component parts (for kit-built systems)

* meta_notes_peak_N*         Notes about peak optimization options

* meta_notes_plat_N*         Notes about platform tuning and changes

* meta_notes_port_N*         Notes about portability options
                             
* meta_notes_submit_N*       Notes about use of the submit option
                            
* meta_os                    Operating system descriptor - if not specified, 
                             it will be taken from the first line of /etc/issue
                            
* meta_provider              The name of the cloud provider this test pertains
                             to. May also be specified using the environment 
                             variable bm_provider
                            
* meta_provider_id           The id of the cloud provider this test pertains
                             to. May also be specified using the environment 
                             variable bm_provider_id
                            
* meta_region                The compute service region this test pertains to. 
                             May also be specified using the environment 
                             variable bm_region
                            
* meta_resource_id           An optional benchmark resource identifier. May 
                             also be specified using the environment variable 
                             bm_resource_id
                            
* meta_run_id                An optional benchmark run identifier. May also be 
                             specified using the environment variable bm_run_id
                            
* meta_storage_config        Storage configuration descriptor. May also be 
                             specified using the environment variable 
                             bm_storage_config
                             
* meta_sw_avail*             Date that the OS image was made available

* meta_sw_other*             Any other relevant information about the software
                            
* meta_test_id               Identifier for the test. May also be specified 
                             using the environment variable bm_test_id
                             
                             
DEPENDENCIES
This benchmark has the following dependencies:

 SPEC CPU 2006               This benchmark is licensed by spec.org. To use 
                             this benchmark harness you must have it installed
                             and available in the 'spec_dir' directory
 perl                        Used by SPEC CPU 2006
 php-cli                     Used by the test automation scripts (/usr/bin/php)
 zip                         Used to compress test artifacts
 
 
TEST ARTIFACTS
This benchmark generates the following artifacts:

collectd-rrd.zip             collectd RRD files (see --collectd_rrd)
specint2006.csv              SPECint test results in CSV format
specint2006.gif              GIF image referenced in the SPECint HTML report
specint2006.html             HTML formatted SPECint test report
specint2006.pdf              PDF formatted SPECint test report
specint2006.txt              Text formatted SPECint test report
specfp2006.csv               SPECfp test results in CSV format
specfp2006.gif               GIF image referenced in the SPECfp HTML report
specfp2006.html              HTML formatted SPECfp test report
specfp2006.pdf               PDF formatted SPECfp test report
specfp2006.txt               Text formatted SPECfp test report


SAVE SCHEMA
The following columns are included in CSV files/tables generated by save.sh. 
Indexed MySQL/PostgreSQL columns are identified by *. Columns without 
descriptions are documented as runtime parameters above. Data types and 
indexing used are documented in save/schema/speccpu2006.json. Columns can be
removed using the save.sh --remove parameter

# Individual benchmark metrics. These provide the selected, min, max and 
# (sample) standard deviation metrics for each benchmark as well as runtime and
# reftime (reftime only included for speed runs: rate=0). For rate runs 
# (i.e. rate=1) the metrics represent the 'rate' metric - signifying that 
# multiple copies of the benchmark were run in parallel (i.e. --copies > 1). 
# Rate metrics essentially represent throughput. For speed runs these columns 
# contain a 'ratio' metric derived from ([ref_time]/[base_run_time]). For speed
# runs, only 1 copy of the benchmark is run. These columns may be excluded using 
# --remove benchmark_4*
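# As a simplified illustration of the ratio calculation (hypothetical numbers): 
# a speed run where 401.bzip2 has a reference time of 9650 seconds and a median 
# base run time of 482.5 seconds yields a ratio of 9650 / 482.5 = 20.0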
benchmark_400_perlbench
benchmark_400_perlbench_max
benchmark_400_perlbench_min
benchmark_400_perlbench_reftime
benchmark_400_perlbench_runtime
benchmark_400_perlbench_stdev

benchmark_401_bzip2
benchmark_401_bzip2_max
benchmark_401_bzip2_min
benchmark_401_bzip2_reftime
benchmark_401_bzip2_runtime
benchmark_401_bzip2_stdev

benchmark_403_gcc
benchmark_403_gcc_max
benchmark_403_gcc_min
benchmark_403_gcc_reftime
benchmark_403_gcc_runtime
benchmark_403_gcc_stdev

benchmark_410_bwaves
benchmark_410_bwaves_max
benchmark_410_bwaves_min
benchmark_410_bwaves_reftime
benchmark_410_bwaves_runtime
benchmark_410_bwaves_stdev

benchmark_416_gamess
benchmark_416_gamess_max
benchmark_416_gamess_min
benchmark_416_gamess_reftime
benchmark_416_gamess_runtime
benchmark_416_gamess_stdev

benchmark_429_mcf
benchmark_429_mcf_max
benchmark_429_mcf_min
benchmark_429_mcf_reftime
benchmark_429_mcf_runtime
benchmark_429_mcf_stdev

benchmark_433_milc
benchmark_433_milc_max
benchmark_433_milc_min
benchmark_433_milc_reftime
benchmark_433_milc_runtime
benchmark_433_milc_stdev

benchmark_434_zeusmp
benchmark_434_zeusmp_max
benchmark_434_zeusmp_min
benchmark_434_zeusmp_reftime
benchmark_434_zeusmp_runtime
benchmark_434_zeusmp_stdev

benchmark_435_gromacs
benchmark_435_gromacs_max
benchmark_435_gromacs_min
benchmark_435_gromacs_reftime
benchmark_435_gromacs_runtime
benchmark_435_gromacs_stdev

benchmark_436_cactusadm
benchmark_436_cactusadm_max
benchmark_436_cactusadm_min
benchmark_436_cactusadm_reftime
benchmark_436_cactusadm_runtime
benchmark_436_cactusadm_stdev

benchmark_437_leslie3d
benchmark_437_leslie3d_max
benchmark_437_leslie3d_min
benchmark_437_leslie3d_reftime
benchmark_437_leslie3d_runtime
benchmark_437_leslie3d_stdev

benchmark_444_namd
benchmark_444_namd_max
benchmark_444_namd_min
benchmark_444_namd_reftime
benchmark_444_namd_runtime
benchmark_444_namd_stdev

benchmark_445_gobmk
benchmark_445_gobmk_max
benchmark_445_gobmk_min
benchmark_445_gobmk_reftime
benchmark_445_gobmk_runtime
benchmark_445_gobmk_stdev

benchmark_447_dealii
benchmark_447_dealii_max
benchmark_447_dealii_min
benchmark_447_dealii_reftime
benchmark_447_dealii_runtime
benchmark_447_dealii_stdev

benchmark_450_soplex
benchmark_450_soplex_max
benchmark_450_soplex_min
benchmark_450_soplex_reftime
benchmark_450_soplex_runtime
benchmark_450_soplex_stdev

benchmark_453_povray
benchmark_453_povray_max
benchmark_453_povray_min
benchmark_453_povray_reftime
benchmark_453_povray_runtime
benchmark_453_povray_stdev

benchmark_454_calculix
benchmark_454_calculix_max
benchmark_454_calculix_min
benchmark_454_calculix_reftime
benchmark_454_calculix_runtime
benchmark_454_calculix_stdev

benchmark_456_hmmer
benchmark_456_hmmer_max
benchmark_456_hmmer_min
benchmark_456_hmmer_reftime
benchmark_456_hmmer_runtime
benchmark_456_hmmer_stdev

benchmark_458_sjeng
benchmark_458_sjeng_max
benchmark_458_sjeng_min
benchmark_458_sjeng_reftime
benchmark_458_sjeng_runtime
benchmark_458_sjeng_stdev

benchmark_459_gemsfdtd
benchmark_459_gemsfdtd_max
benchmark_459_gemsfdtd_min
benchmark_459_gemsfdtd_reftime
benchmark_459_gemsfdtd_runtime
benchmark_459_gemsfdtd_stdev

benchmark_462_libquantum
benchmark_462_libquantum_max
benchmark_462_libquantum_min
benchmark_462_libquantum_reftime
benchmark_462_libquantum_runtime
benchmark_462_libquantum_stdev

benchmark_464_h264ref
benchmark_464_h264ref_max
benchmark_464_h264ref_min
benchmark_464_h264ref_reftime
benchmark_464_h264ref_runtime
benchmark_464_h264ref_stdev

benchmark_465_tonto
benchmark_465_tonto_max
benchmark_465_tonto_min
benchmark_465_tonto_reftime
benchmark_465_tonto_runtime
benchmark_465_tonto_stdev

benchmark_470_lbm
benchmark_470_lbm_max
benchmark_470_lbm_min
benchmark_470_lbm_reftime
benchmark_470_lbm_runtime
benchmark_470_lbm_stdev

benchmark_471_omnetpp
benchmark_471_omnetpp_max
benchmark_471_omnetpp_min
benchmark_471_omnetpp_reftime
benchmark_471_omnetpp_runtime
benchmark_471_omnetpp_stdev

benchmark_473_astar
benchmark_473_astar_max
benchmark_473_astar_min
benchmark_473_astar_reftime
benchmark_473_astar_runtime
benchmark_473_astar_stdev

benchmark_481_wrf
benchmark_481_wrf_max
benchmark_481_wrf_min
benchmark_481_wrf_reftime
benchmark_481_wrf_runtime
benchmark_481_wrf_stdev

benchmark_482_sphinx3
benchmark_482_sphinx3_max
benchmark_482_sphinx3_min
benchmark_482_sphinx3_reftime
benchmark_482_sphinx3_runtime
benchmark_482_sphinx3_stdev

benchmark_483_xalancbmk
benchmark_483_xalancbmk_max
benchmark_483_xalancbmk_min
benchmark_483_xalancbmk_reftime
benchmark_483_xalancbmk_runtime
benchmark_483_xalancbmk_stdev

benchmark_version: [benchmark version]
benchmarks: comma separated names of benchmarks run - or int, fp or all
collectd_rrd: [URL to zip file containing collectd rrd files]
comment
config: config file name
copies: number of copies (for rate runs only)
delay
failover_no_sse
flagsurl: flags file url
huge_pages
ignore_errors
iteration: [iteration number (used with incremental result directories)]
iterations: number of iterations - 3 is default (required for compliant runs)
max_copies
meta_burst
meta_compute_service 
meta_compute_service_id*
meta_cpu: [CPU model info]
meta_cpu_cache: [CPU cache]
meta_cpu_cores: [# of CPU cores]
meta_cpu_speed: [CPU clock speed (MHz)]
meta_instance_id*
meta_hostname: [system under test (SUT) hostname]
meta_hw_avail 
meta_hw_fpu 
meta_hw_nthreadspercore 
meta_hw_other 
meta_hw_ocache 
meta_hw_pcache 
meta_hw_tcache 
meta_hw_ncpuorder 
meta_license_num 
meta_memory 
meta_memory_gb: [memory in gigabytes]
meta_memory_mb: [memory in megabytes]
meta_notes 
meta_notes_base 
meta_notes_comp 
meta_notes_os 
meta_notes_part 
meta_notes_peak 
meta_notes_plat 
meta_notes_submit 
meta_os_info: [operating system name and version]
meta_provider 
meta_provider_id*
meta_region*
meta_resource_id 
meta_run_id 
meta_storage_config*
meta_sw_avail 
meta_sw_other 
meta_test_id*
nobuild
nonuma
numa: true if numa was supported and used (--define numa flag and numactl)
num_benchmarks: total number of individual benchmarks run
peak: true for a peak run, false for a base run
purge_output
rate: true for a rate run, false for a speed run

reportable
review
size: input data size - test, train or ref
spec_dir
specfp2006: peak, speed floating point score (only present if rate=0 and peak=1)
specfp_base2006: base, speed floating point score (only present if rate=0 and peak=0)
specfp_csv: [URL to the SPECfp CSV format report (if save.sh --store option used)]
specfp_gif: [URL to the SPECfp HTML report GIF image (if save.sh --store option used)]
specfp_html: [URL to the SPECfp HTML format report (if save.sh --store option used)]
specfp_pdf: [URL to the SPECfp PDF format report (if save.sh --store option used)]
specfp_rate2006: peak, rate floating point score (only present if rate=1 and peak=1)
specfp_rate_base2006: base, rate floating point score (only present if rate=1 and peak=0)
specfp_text: [URL to the SPECfp text format report (if save.sh --store option used)]

specint2006: peak, speed integer score (only present if rate=0 and peak=1)
specint_base2006: base, speed integer score (only present if rate=0 and peak=0)
specint_csv: [URL to the SPECint CSV format report (if save.sh --store option used)]
specint_gif: [URL to the SPECint HTML report GIF image (if save.sh --store option used)]
specint_html: [URL to the SPECint HTML format report (if save.sh --store option used)]
specint_pdf: [URL to the SPECint PDF format report (if save.sh --store option used)]
specint_rate2006: peak, rate integer score (only present if rate=1 and peak=1)
specint_rate_base2006: base, rate integer score (only present if rate=1 and peak=0)
specint_text: [URL to the SPECint text format report (if save.sh --store option used)]

sse: sse optimization used (if applicable)
sse_max
sse_min
test_started 
test_stopped 
tune: tune level - base, peak or all
valid: 1 if this was a valid run (0 if invalid)
validate_disk_space
x64: true if 64-bit binaries used, false if 32-bit
x64_failover


USAGE
# run 1 test iteration with some metadata
./run.sh --meta_compute_service_id aws:ec2 --meta_instance_id c3.xlarge --meta_region us-east-1 --meta_test_id aws-0914

# run with SPEC CPU 2006 installed in /usr/local/speccpu
./run.sh --spec_dir /usr/local/speccpu

# run for floating point benchmarks only
./run.sh --benchmark fp

# run for perlbench and bwaves only
./run.sh --benchmark 400 --benchmark 410
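
# run a reportable integer rate run with 4 copies (one possible combination of
# the parameters described above)
./run.sh --benchmark int --reportable 1 --copies 4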


# save.sh saves results to CSV, MySQL, PostgreSQL, BigQuery or via HTTP 
# callback. It can also save artifacts (e.g. the text report) to S3, Azure Blob Storage
# or Google Cloud Storage

# save results to CSV files
./save.sh

# save results from a prior run whose --output directory was ~/spec-testing
./save.sh ~/spec-testing

# save results to a PostgreSQL database
./save.sh --db postgresql --db_user dbuser --db_pswd dbpass --db_host db.mydomain.com --db_name benchmarks

# save results to BigQuery and test artifacts (SPEC reports) to S3
./save.sh --db bigquery --db_name benchmark_dataset --store s3 --store_key THISIH5TPISAEZIJFAKE --store_secret thisNoat1VCITCGggisOaJl3pxKmGu2HMKxxfake --store_container benchmarks1234
