03_rca.sh killed #1

Closed
Clingto opened this issue Jun 2, 2021 · 3 comments

Comments


Clingto commented Jun 2, 2021

Hi,
I ran the mruby example in the provided Docker container. When I ran /home/user/aurora/docker/example_scripts/03_rca.sh, I got the errors below, and I couldn't get the predicates.json, ranked_predicates.txt, ranked_predicates_verbose.txt, rankings.json, or scores_linear.csv files.
Can you give me some advice?

Finished release [optimized] target(s) in 52.66s
warning: profiles for the non root package will be ignored, specify profiles at the workspace root:
package:   /home/user/aurora/root_cause_analysis/predicate_monitoring/Cargo.toml
workspace: /home/user/aurora/root_cause_analysis/Cargo.toml
warning: profiles for the non root package will be ignored, specify profiles at the workspace root:
package:   /home/user/aurora/root_cause_analysis/trace_analysis/Cargo.toml
workspace: /home/user/aurora/root_cause_analysis/Cargo.toml
warning: profiles for the non root package will be ignored, specify profiles at the workspace root:
package:   /home/user/aurora/root_cause_analysis/root_cause_analysis/Cargo.toml
workspace: /home/user/aurora/root_cause_analysis/Cargo.toml
   Compiling itertools v0.9.0
   Compiling root_cause_analysis v0.1.0 (/home/user/aurora/root_cause_analysis/root_cause_analysis)
    Finished release [optimized] target(s) in 3.33s
warning: profiles for the non root package will be ignored, specify profiles at the workspace root:
package:   /home/user/aurora/root_cause_analysis/predicate_monitoring/Cargo.toml
workspace: /home/user/aurora/root_cause_analysis/Cargo.toml
warning: profiles for the non root package will be ignored, specify profiles at the workspace root:
package:   /home/user/aurora/root_cause_analysis/trace_analysis/Cargo.toml
workspace: /home/user/aurora/root_cause_analysis/Cargo.toml
warning: profiles for the non root package will be ignored, specify profiles at the workspace root:
package:   /home/user/aurora/root_cause_analysis/root_cause_analysis/Cargo.toml
workspace: /home/user/aurora/root_cause_analysis/Cargo.toml
    Finished release [optimized] target(s) in 0.04s
     Running `target/release/rca --eval-dir /home/user/evaluation --trace-dir /home/user/evaluation --monitor --rank-predicates`
analyzing traces
reading crashes
reading non-crashes
./03_rca.sh: line 13:  3335 Killed                  cargo run --release --bin rca -- --eval-dir $EVAL_DIR --trace-dir $EVAL_DIR --monitor --rank-predicates
mu00d8 (Collaborator) commented Jun 2, 2021

Hi there,

The only occasion we've seen this so far is when running out of memory. Can you check whether this is the case for you? If so, you can try to supply fewer crashing/non-crashing inputs.
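
(For reference, one rough way to check whether the OOM killer was the cause, assuming you can read the kernel log from the host or a privileged shell; the exact message wording varies by kernel:)

# Look for OOM-killer messages in the kernel log
dmesg | grep -iE 'out of memory|oom-killer|killed process'

# Or, on systems running systemd:
journalctl -k | grep -iE 'out of memory|oom'

# Watch memory usage live while 03_rca.sh runs
watch -n 1 free -h

If a message naming the killed process shows up around the time the script dies, memory is indeed the issue.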

Clingto (Author) commented Jun 2, 2021

Hi there,

The only occasion we've seen this so far is when running out of memory. Can you check whether this is the case for you? If so, you can try to supply fewer crashing/non-crashing inputs.

It does seem like it is running out of memory.
Last time, I ran AFL for 12 hours, and it produced 12378 files in 2 subdirs (crashes + no_crashes) at /home/user/evaluation/inputs;
now, I ran AFL for 3 minutes, and it produced 1294 files in 2 subdirs (crashes + no_crashes) at /home/user/evaluation/inputs.

Given the memory limitation, should I keep an eye on the AFL running time? How long would be better, or how many crashing/non-crashing inputs should I supply?
Thanks!

mu00d8 (Collaborator) commented Jun 6, 2021

Yeah, either run AFL for less time or randomly sample a subset of inputs from the large set you already produced in the 12h run (if you still have those inputs lying around).

I'd suggest supplying as many inputs as your machine can handle RAM-wise, as more inputs yield better predicates. I can't give you precise numbers here, as we haven't really looked into the RAM consumption in detail.
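
(A minimal sketch of such random sampling; the paths, the subdirectory names crashes/ and non_crashes/, and the sample size N are assumptions, so adjust them to your layout:)

# Hypothetical helper: copy a random subset of N inputs per class into a
# smaller evaluation directory, then point the RCA scripts at that directory.
N=500                                   # sample size per class (assumption)
SRC=/home/user/evaluation/inputs        # original AFL output (assumed layout)
DST=/home/user/evaluation_small/inputs  # reduced input set
for class in crashes non_crashes; do
    mkdir -p "$DST/$class"
    ls "$SRC/$class" | shuf -n "$N" | while read -r f; do
        cp "$SRC/$class/$f" "$DST/$class/"
    done
done

The point is just to cap the number of inputs per class so the trace analysis fits in RAM; the sampling method itself doesn't matter much.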

mu00d8 closed this as completed on Jun 6, 2021