benchmark results.json #53

Open

Jaefromkorea opened this issue Apr 2, 2019 · 8 comments

@Jaefromkorea
Dear Adrian Sampson,

I am testing the benchmarks with the ACCEPT compiler, following the tutorial.
However, I got some strange results from the command make exp.
It seems that only sobel works properly; the other applications do not produce any results.

Could you please explain what I have done wrong?

My purpose in running these applications is:

  1. First, I want to build the benchmarks with the ACCEPT compiler in order to measure QoR (quality of result).

  2. Then, I want to generate binaries from them and run them in the NoC simulator Sniper, in order to study the floating-point exchanges across the NoC.

I got this idea from the paper below:

AxNoC: Low-power Approximate Network-on-Chips using Critical-Path Isolation
Akram Ben Ahmed, Daichi Fujiki, Hiroki Matsutani, Michihiro Koibuchi, and Hideharu Amano

I just want to reproduce the paper's setup as closely as possible to get precise results.

Thank you

Here is my results.json:
{
  "blackscholes": {
    "isolated": {
      "desync": [],
      "loopperf": [],
      "npu": []
    },
    "main": [],
    "stats": {
      "desync": {
        "all": 0,
        "base": 0,
        "composite": 0,
        "test-bad": 0,
        "test-optimal": 0,
        "test-suboptimal": 0,
        "time": 0.0003631114959716797,
        "train-bad": 0,
        "train-optimal": 0,
        "train-suboptimal": 0,
        "tuned": 0
      },
      "loopperf": {
        "all": 0,
        "base": 0,
        "composite": 0,
        "test-bad": 0,
        "test-optimal": 0,
        "test-suboptimal": 0,
        "time": 0.0003218650817871094,
        "train-bad": 0,
        "train-optimal": 0,
        "train-suboptimal": 0,
        "tuned": 0
      },
      "main": {
        "all": 0,
        "base": 0,
        "composite": 0,
        "test-bad": 0,
        "test-optimal": 0,
        "test-suboptimal": 0,
        "time": 0.6068341732025146,
        "train-bad": 0,
        "train-optimal": 0,
        "train-suboptimal": 0,
        "tuned": 0
      },
      "npu": {
        "all": 0,
        "base": 0,
        "composite": 0,
        "test-bad": 0,
        "test-optimal": 0,
        "test-suboptimal": 0,
        "time": 0.0003249645233154297,
        "train-bad": 0,
        "train-optimal": 0,
        "train-suboptimal": 0,
        "tuned": 0
      }
    }
  },
  "canneal": {
    "isolated": {
      "desync": [],
      "loopperf": [],
      "npu": []
    },
    "main": [],
    "stats": {
      "desync": {
        "all": 0,
        "base": 0,
        "composite": 0,
        "test-bad": 0,
        "test-optimal": 0,
        "test-suboptimal": 0,
        "time": 0.00036406517028808594,
        "train-bad": 0,
        "train-optimal": 0,
        "train-suboptimal": 0,
        "tuned": 0
      },
      "loopperf": {
        "all": 0,
        "base": 0,
        "composite": 0,
        "test-bad": 0,
        "test-optimal": 0,
        "test-suboptimal": 0,
        "time": 0.00031495094299316406,
        "train-bad": 0,
        "train-optimal": 0,
        "train-suboptimal": 0,
        "tuned": 0
      },
      "main": {
        "all": 0,
        "base": 0,
        "composite": 0,
        "test-bad": 0,
        "test-optimal": 0,
        "test-suboptimal": 0,
        "time": 29.426498889923096,
        "train-bad": 0,
        "train-optimal": 0,
        "train-suboptimal": 0,
        "tuned": 0
      },
      "npu": {
        "all": 0,
        "base": 0,
        "composite": 0,
        "test-bad": 0,
        "test-optimal": 0,
        "test-suboptimal": 0,
        "time": 0.0003910064697265625,
        "train-bad": 0,
        "train-optimal": 0,
        "train-suboptimal": 0,
        "tuned": 0
      }
    }
  },
  "fluidanimate": {
    "isolated": {
      "desync": [],
      "loopperf": [],
      "npu": []
    },
    "main": [],
    "stats": {
      "desync": {
        "all": 6,
        "base": 6,
        "composite": 0,
        "test-bad": 0,
        "test-optimal": 0,
        "test-suboptimal": 0,
        "time": 5.352258920669556,
        "train-bad": 6,
        "train-optimal": 0,
        "train-suboptimal": 0,
        "tuned": 6
      },
      "loopperf": {
        "all": 0,
        "base": 0,
        "composite": 0,
        "test-bad": 0,
        "test-optimal": 0,
        "test-suboptimal": 0,
        "time": 0.0003380775451660156,
        "train-bad": 0,
        "train-optimal": 0,
        "train-suboptimal": 0,
        "tuned": 0
      },
      "main": {
        "all": 6,
        "base": 6,
        "composite": 0,
        "test-bad": 0,
        "test-optimal": 0,
        "test-suboptimal": 0,
        "time": 107.3139750957489,
        "train-bad": 6,
        "train-optimal": 0,
        "train-suboptimal": 0,
        "tuned": 6
      },
      "npu": {
        "all": 0,
        "base": 0,
        "composite": 0,
        "test-bad": 0,
        "test-optimal": 0,
        "test-suboptimal": 0,
        "time": 0.0003781318664550781,
        "train-bad": 0,
        "train-optimal": 0,
        "train-suboptimal": 0,
        "tuned": 0
      }
    }
  },
  "sobel": {
    "isolated": {
      "desync": [],
      "loopperf": [
        {
          "config": "loop at sobel.c:50 @ 1, loop at sobel.c:51 @ 6, loop at sobel.c:56 @ 1",
          "error_mu": 0.25624184701956954,
          "error_sigma": 0.0,
          "speedup_mu": 1.2263029031129764,
          "speedup_sigma": 0.049871042276112255
        },
        {
          "config": "loop at sobel.c:56 @ 1",
          "error_mu": 0.25624184701956954,
          "error_sigma": 0.0,
          "speedup_mu": 1.1543336900156391,
          "speedup_sigma": 0.01722010757024235
        }
      ],
      "npu": []
    },
    "main": [
      {
        "config": "loop at sobel.c:50 @ 1, loop at sobel.c:51 @ 6, loop at sobel.c:56 @ 1",
        "error_mu": 0.25624184701956954,
        "error_sigma": 0.0,
        "speedup_mu": 1.2263029031129764,
        "speedup_sigma": 0.049871042276112255
      },
      {
        "config": "loop at sobel.c:56 @ 1",
        "error_mu": 0.25624184701956954,
        "error_sigma": 0.0,
        "speedup_mu": 1.1543336900156391,
        "speedup_sigma": 0.01722010757024235
      }
    ],
    "stats": {
      "desync": {
        "all": 0,
        "base": 0,
        "composite": 0,
        "test-bad": 0,
        "test-optimal": 0,
        "test-suboptimal": 0,
        "time": 0.0002830028533935547,
        "train-bad": 0,
        "train-optimal": 0,
        "train-suboptimal": 0,
        "tuned": 0
      },
      "loopperf": {
        "all": 11,
        "base": 4,
        "composite": 3,
        "test-bad": 8,
        "test-optimal": 2,
        "test-suboptimal": 0,
        "time": 5.6406919956207275,
        "train-bad": 1,
        "train-optimal": 8,
        "train-suboptimal": 2,
        "tuned": 9
      },
      "main": {
        "all": 11,
        "base": 4,
        "composite": 3,
        "test-bad": 8,
        "test-optimal": 2,
        "test-suboptimal": 0,
        "time": 10.55833101272583,
        "train-bad": 1,
        "train-optimal": 8,
        "train-suboptimal": 2,
        "tuned": 9
      },
      "npu": {
        "all": 0,
        "base": 0,
        "composite": 0,
        "test-bad": 0,
        "test-optimal": 0,
        "test-suboptimal": 0,
        "time": 0.00033092498779296875,
        "train-bad": 0,
        "train-optimal": 0,
        "train-suboptimal": 0,
        "tuned": 0
      }
    }
  }
}
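
To see the asymmetry at a glance, a short script like the one below (my own sketch, not part of the ACCEPT tooling; it only assumes the structure visible in the file above) counts the configurations each benchmark produced:

# summarize.py: rough sketch, not part of ACCEPT. Counts configurations
# per benchmark in a results.json shaped like the one above.
import json

with open("results.json") as f:
    results = json.load(f)

for bench, data in sorted(results.items()):
    explored = data["stats"]["main"]["all"]  # configurations explored in the main run
    reported = len(data["main"])             # configurations surviving to the final report
    print("%s: explored %d, reported %d" % (bench, explored, reported))

On this data it prints zeros for blackscholes and canneal, "explored 6, reported 0" for fluidanimate, and "explored 11, reported 2" for sobel, which matches the symptom that only sobel appears to work.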

@sampsyo commented Apr 2, 2019

Hi! Can you include the verbose output of a tool run for a benchmark that produces zero results?
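
(A practical way to capture that, assuming the per-application layout from the tutorial, e.g. apps/blackscholes: run make exp 2>&1 | tee exp.log from the application's directory, which saves the complete log to exp.log while still showing it on screen.)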

@Jaefromkorea commented Apr 2, 2019 via email

@sampsyo commented Apr 2, 2019

Yes, I know! But can you include the verbose output from a tool run for one of those benchmarks?

@Jaefromkorea commented Apr 3, 2019 via email

@sampsyo commented Apr 3, 2019

Hello! It looks like the image attachments didn't work. Can you paste the actual text output?

@Jaefromkorea commented Apr 3, 2019 via email

@Jaefromkorea commented Apr 3, 2019 via email

@sampsyo commented Apr 4, 2019

Hmm… zero configurations? I'm not sure exactly what's causing that! I'm really going to have to depend on you to do the debugging yourself, though. Can you look through the code to see where configurations are generated and trace backward to see why none are being found for you?

(Your attachments still aren't working. Please look at the GitHub thread.)
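
(One concrete starting point for that trace, assuming the experiment driver is the Python code shipped in the repository: running grep -rn "isolated" --include="*.py" . from the repository root should point at where the "isolated" and "main" lists in results.json get populated, and from there you can work backward to see why they stay empty.)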
