How long will it take to run "prepare.sh"? #97
According to your log, you are running HiBench with the default data scale profile, which should finish in a few minutes per workload, even on a single node. Can you paste your …
Thank you for your reply! This is the report/join/prepare/bench.log you mentioned: 15/05/27 16:32:45 INFO HiBench.HiveData: Generating hive data files... …
And this is the information from the job0006 URL (Application Overview / Application Metrics): …
It seems your YARN cluster has no available resources to run the job; the job state has been ACCEPTED for hours, waiting to RUN. Could you please check the Memory Total under Cluster Metrics in the YARN web UI?
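The cluster metrics the commenter refers to can also be checked from a shell. A sketch under assumptions: the ResourceManager web UI listens on the default port 8088, and `xxxxx` is the placeholder hostname used elsewhere in this thread.

```shell
# Query the ResourceManager's cluster-metrics REST endpoint and pull out
# totalMB; a value of 0 means no NodeManager has registered with the cluster.
metrics=$(curl -s http://xxxxx:8088/ws/v1/cluster/metrics)
echo "$metrics" | grep -o '"totalMB":[0-9]*'

# The same information via the YARN CLI: list the registered NodeManagers.
yarn node -list
```

If `yarn node -list` prints no nodes, the Memory Total of zero follows directly: the cluster has no NodeManagers contributing resources.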
Thank you for your reply. Yes, the Memory Total of the cluster is zero. What should I do?
And my yarn-site.xml is like this: …
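The poster's yarn-site.xml itself is not preserved in this thread. For comparison, a minimal yarn-site.xml that makes NodeManager memory visible to the cluster might look like the following; the hostname and memory size are illustrative, not taken from the poster's configuration.

```xml
<configuration>
  <!-- Where NodeManagers find the ResourceManager; placeholder hostname. -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>xxxxx</value>
  </property>
  <!-- Memory this NodeManager advertises; contributes to "Memory Total". -->
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>8192</value>
  </property>
  <!-- Required for MapReduce shuffle between map and reduce tasks. -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
```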
Have you started at least one YARN NodeManager? You need to execute …
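The exact command in the comment is truncated, but starting a NodeManager in a standard Hadoop 2.x layout can be sketched as follows (assuming `$HADOOP_HOME` points at the Hadoop install, `/home/yang/hadoop` in this thread; both scripts ship with Hadoop itself):

```shell
# Start a single NodeManager on this host:
$HADOOP_HOME/sbin/yarn-daemon.sh start nodemanager

# ...or start the whole YARN stack (ResourceManager plus all NodeManagers
# listed in etc/hadoop/slaves):
# $HADOOP_HOME/sbin/start-yarn.sh

# Verify the daemon is actually running:
jps | grep NodeManager
```

Once a NodeManager registers, the Memory Total in the YARN web UI should become non-zero and the ACCEPTED job can be scheduled.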
Seems like a trivial issue. I'll close it.
I am running HiBench to verify the performance of Spark SQL. The "prepare.sh" script has been running for more than 3 hours and hasn't finished yet. This is my console output:
yang@xxxxx:~/HiBench/bin$ ./run-all.sh
Prepare join ...
Exec script: /home/yang/HiBench/workloads/join/prepare/prepare.sh
Parsing conf: /home/yang/HiBench/conf/00-default-properties.conf
Parsing conf: /home/yang/HiBench/conf/10-data-scale-profile.conf
Parsing conf: /home/yang/HiBench/conf/99-user_defined_properties.conf
Parsing conf: /home/yang/HiBench/workloads/join/conf/00-join-default.conf
Parsing conf: /home/yang/HiBench/workloads/join/conf/10-join-userdefine.conf
Probing spark verison, may last long at first time...
start HadoopPrepareJoin bench
hdfs rm -r: /home/yang/hadoop/bin/hadoop --config /home/yang/hadoop/etc/hadoop fs -rm -r -skipTrash hdfs://xxxxx:9000/HiBench/Join/Input
15/05/27 16:32:41 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
rm: `hdfs://xxxxx:9000/HiBench/Join/Input': No such file or directory
Pages:120000, USERVISITS:1000000
Submit MapReduce Job: /home/yang/hadoop/bin/hadoop --config /home/yang/hadoop/etc/hadoop jar /home/yang/HiBench/src/autogen/target/autogen-4.0-SNAPSHOT-jar-with-dependencies.jar HiBench.DataGen -t hive -b hdfs://xxxxx:9000/HiBench/Join -n Input -m 12 -r 6 -p 120000 -v 1000000 -o sequence
15/05/27 16:32:46 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/05/27 16:32:49 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
15/05/27 16:32:50 INFO mapreduce.Job: Running job: job_1432692021703_0006
And in my Hadoop web page, the log shows the following:
2015-05-27 16:42:12,268 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30000 milliseconds
2015-05-27 16:42:12,269 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 1 millisecond(s).
So, is there anything wrong with my hdfs configuration?