How to run CPU2006 benchmarks on SST? #1064

Closed
Xiaoyang-Lu opened this issue Jun 13, 2018 · 8 comments

@Xiaoyang-Lu

New Issue for sst-elements

I want to run CPU2006 benchmarks on Ariel. For the 400.perlbench benchmark, I set the Ariel configuration and it works (I added the benchmark's path to the "executable" parameter):

def _arielCoreConfig(self, core_id):
        params = dict({
            "maxcorequeue"        : 256,
            "maxtranscore"        : 16,
            "maxissuepercycle"    : self.max_reqs_cycle,
            "pipetimeout"         : 0,
            "appargcount"         : 0,
            "memorylevels"        : 1,
            "arielinterceptcalls" : 1,
            "arielmode"           : 1,
            "pagecount0"          : 1048576,
            "corecount"           : self.total_cores,
            "defaultlevel"        : 0,
            "verbose"             : 16,
            "executable"          : "/home/cc/gem5_benchmark/cpu_2006/benchspec/CPU2006/400.perlbench/exe/perlbench_base.amd64-m64-gcc43-nn",
            "arieltool"           : "/home/cc/mysst/sst-elements/sst-elements-library-7.0.0/src/sst/elements/ariel/fesimple.so",
            })

But for some benchmarks that need input data, I don't know how to add the input data's path to the configuration.
For example, the 401.bzip2 benchmark needs an extra input:

data=data_dir+'401.bzip2/data/ref/input/liberty.jpg'

What kind of parameter should I add to the Ariel configuration code?
I am using CentOS 7.3 with Pin 2.14 and SST 7.0.0.

Thanks!

@nmhamster
Contributor

@Xiaoyang-Lu - you need to add each argument one by one (this is a pain).

Example:

"appargcount" : 2,
"apparg0" : "-n",
"apparg1" : "100"

@Xiaoyang-Lu
Author

Thanks a lot for your answer.
I found that some benchmarks require arguments; for example, N queens requires the argument "n". In gem5, if we omit the argument "n", the simulation will not start and it prints an error:

Missing n argument
Usage: /home/cc/gem5_benchmark/benchmarks/queens [-ac] n
n Number of queens (rows and columns). An integer from 1 to 100.
-a Find and print all solutions.
-c Count all solutions, but do not print them.

But in Ariel, even if we omit the required arguments, the simulation still starts normally and we still get the CSV output. I think these simulation results must be wrong. If we have a wrong input, how can we detect it?

@hughes-c
Member

You should be able to look at the simulation output to know whether the benchmark completed successfully. If it did, then you can use the results compiled in the CSV.

@Xiaoyang-Lu
Author

Xiaoyang-Lu commented Jun 18, 2018

@hughes-c
For the N queens benchmark, I omit the argument n here:

    def _arielCoreConfig(self, core_id):
        params = dict({
            "maxcorequeue"        : 256,
            "maxtranscore"        : 16,
            "maxissuepercycle"    : self.max_reqs_cycle,
            "pipetimeout"         : 0,
            "appargcount"         : 0,
            "memorylevels"        : 1,
            "arielinterceptcalls" : 1,
            "arielmode"           : 1,
            "pagecount0"          : 1048576,
            "corecount"           : self.total_cores,
            "defaultlevel"        : 0,
            "verbose"             : 16,
            "max_insts"           : 50000,
            "executable"          : "/home/cc/gem5_benchmark/benchmarks/queens",
            "appargcount"         : 0,
            })

The statistics:

Ariel Memory Management Statistics:
---------------------------------------------------------------------
Page Table Sizes:
- Map entries         62
Page Table Coverages:
- Bytes               253952
Simulation is complete, simulated time: 29.0577 us

The output is too big, so I put it on my GitHub: https://github.com/Xiaoyang-Lu/SST-issue.git. I cannot find any difference from the output of the run that does have the input "n". I want to know how to check that the benchmark completed successfully. It feels like something basic, but it really puzzles me. Thanks.

@hughes-c
Member

If you run your benchmark outside of a simulation environment, does it produce some output? If so, you should be able to compare it with what it produces when you run it in the simulator.
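
As a minimal sketch of that comparison, assuming the application output from the SST/Ariel run was redirected to a file (the binary path, the "8" argument, and the log filename here are illustrative assumptions):

import subprocess

# Run the benchmark natively and capture its stdout.
native = subprocess.run(
    ["/home/cc/gem5_benchmark/benchmarks/queens", "-c", "8"],
    capture_output=True, text=True,
)

# Read the application output captured from the simulated run,
# e.g. produced with: sst ariel_config.py > sst_run.log
with open("sst_run.log") as f:
    simulated = f.read()

# If the benchmark's own output also appears in the simulator log,
# the simulated run most likely completed successfully.
if native.stdout.strip() and native.stdout.strip() in simulated:
    print("Benchmark output matches the native run.")
else:
    print("Benchmark output missing or different; check the run.")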

@Xiaoyang-Lu
Author

@hughes-c
I ran this benchmark outside the simulation environment. If I omit the input argument,
./queens -c
the benchmark does not start, and the output is:

./queens: Missing n argument
Usage:  ./queens [-ac] n
	n	Number of queens (rows and columns). An integer from 1 to 100.
	-a	Find and print all solutions.
	-c	Count all solutions, but do not print them.

But in SST, this kind of output never appears; it just shows that the simulation is complete.

@nmhamster
Contributor

@Xiaoyang-Lu - does SST show that time has progressed when it exits? Is there any application output at all?

@Xiaoyang-Lu
Author

@nmhamster Thanks, this issue is fixed.
