
run wordcount error #133

Closed
cb200664 opened this issue Sep 3, 2015 · 4 comments

Comments

cb200664 commented Sep 3, 2015

When I run this command:

hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar teragen -Dmapred.map.tasks=20 109951 terasort/100M-input

I get the following output:

15/09/03 09:32:58 INFO glusterfs.GlusterVolume: Initializing gluster volume..
15/09/03 09:32:58 INFO glusterfs.GlusterFileSystem: Configuring GlusterFS
15/09/03 09:32:58 INFO glusterfs.GlusterFileSystem: Initializing GlusterFS, CRC disabled.
15/09/03 09:32:58 INFO glusterfs.GlusterFileSystem: GIT INFO={git.commit.id.abbrev=46d9738, git.commit.user.email=bchilds@redhat.com, git.commit.message.full=Merge branch 'master' of https://github.com/gluster/glusterfs-hadoop
, git.commit.id=46d973834ae1db6eb6cf9ac025ded9a9ffa38c93, git.commit.message.short=Merge branch 'master' of https://github.com/gluster/glusterfs-hadoop, git.commit.user.name=childsb, git.build.user.name=childsb, git.commit.id.describe=2.3.13-6-g46d9738-dirty, git.build.user.email=bchilds@redhat.com, git.branch=master, git.commit.time=21.01.2015 @ 10:31:08 CST, git.build.time=21.01.2015 @ 12:01:55 CST}
15/09/03 09:32:58 INFO glusterfs.GlusterFileSystem: GIT_TAG=2.3.13
15/09/03 09:32:58 INFO glusterfs.GlusterFileSystem: Configuring GlusterFS
15/09/03 09:32:58 INFO glusterfs.GlusterVolume: Initializing gluster volume..
15/09/03 09:32:58 INFO glusterfs.GlusterVolume: Gluster volume: HadoopVol at : /mnt/glusterfs
15/09/03 09:32:58 INFO glusterfs.GlusterVolume: Working directory is : glusterfs:/user/hadoop
15/09/03 09:32:58 INFO glusterfs.GlusterVolume: Write buffer size : 131072
15/09/03 09:32:58 INFO glusterfs.GlusterVolume: Default block size : 67108864
15/09/03 09:32:58 INFO glusterfs.GlusterVolume: Directory list order : fs ordering
15/09/03 09:32:59 INFO client.RMProxy: Connecting to ResourceManager at cn0/192.168.1.40:8032
15/09/03 09:33:02 INFO glusterfs.GlusterVolume: Initializing gluster volume..
15/09/03 09:33:02 INFO glusterfs.GlusterVolume: Initializing gluster volume..
15/09/03 09:33:02 INFO glusterfs.GlusterVolume: Gluster volume: HadoopVol at : /mnt/glusterfs
15/09/03 09:33:02 INFO glusterfs.GlusterVolume: Working directory is : glusterfs:/user/hadoop
15/09/03 09:33:02 INFO glusterfs.GlusterVolume: Write buffer size : 131072
15/09/03 09:33:02 INFO glusterfs.GlusterVolume: Default block size : 67108864
15/09/03 09:33:02 INFO glusterfs.GlusterVolume: Directory list order : fs ordering
15/09/03 09:33:05 INFO terasort.TeraSort: Generating 109951 using 20
15/09/03 09:33:05 INFO mapreduce.JobSubmitter: number of splits:20
15/09/03 09:33:05 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
15/09/03 09:33:06 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1441272751391_0001
15/09/03 09:33:07 INFO glusterfs.GlusterVolume: Initializing gluster volume..
15/09/03 09:33:07 INFO glusterfs.GlusterVolume: Initializing gluster volume..
15/09/03 09:33:07 INFO glusterfs.GlusterVolume: Gluster volume: HadoopVol at : /mnt/glusterfs
15/09/03 09:33:07 INFO glusterfs.GlusterVolume: Working directory is : glusterfs:/user/hadoop
15/09/03 09:33:07 INFO glusterfs.GlusterVolume: Write buffer size : 131072
15/09/03 09:33:07 INFO glusterfs.GlusterVolume: Default block size : 67108864
15/09/03 09:33:07 INFO glusterfs.GlusterVolume: Directory list order : fs ordering
15/09/03 09:33:09 INFO impl.YarnClientImpl: Submitted application application_1441272751391_0001
15/09/03 09:33:09 INFO mapreduce.Job: The url to track the job: http://cn0:8088/proxy/application_1441272751391_0001/
15/09/03 09:33:09 INFO mapreduce.Job: Running job: job_1441272751391_0001
15/09/03 09:33:19 INFO mapreduce.Job: Job job_1441272751391_0001 running in uber mode : false
15/09/03 09:33:19 INFO mapreduce.Job: map 0% reduce 0%
15/09/03 09:33:19 INFO mapreduce.Job: Job job_1441272751391_0001 failed with state FAILED due to: Application application_1441272751391_0001 failed 2 times due to AM Container for appattempt_1441272751391_0001_000002 exited with exitCode: -1000
For more detailed output, check application tracking page:http://cn0:8088/proxy/application_1441272751391_0001/Then, click on links to logs of each attempt.
Diagnostics: File glusterfs:/tmp/hadoop-yarn/staging/hadoop/.staging/job_1441272751391_0001/job.splitmetainfo does not exist.
java.io.FileNotFoundException: File glusterfs:/tmp/hadoop-yarn/staging/hadoop/.staging/job_1441272751391_0001/job.splitmetainfo does not exist.
at org.apache.hadoop.fs.glusterfs.GlusterVolume.getFileStatus(GlusterVolume.java:368)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:409)
at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:251)
at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:61)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:357)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:356)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:60)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

Failing this attempt. Failing the application.
15/09/03 09:33:19 INFO mapreduce.Job: Counters: 0

Why does this happen?
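(Not part of the original thread, but for anyone hitting the same FileNotFoundException: one thing worth checking, as an assumption rather than a confirmed fix, is that the staging directory the ApplicationMaster is trying to localize from actually lives on the shared Gluster volume and is reachable from every NodeManager host. The path in the error, glusterfs:/tmp/hadoop-yarn/staging/..., is the default value of yarn.app.mapreduce.am.staging-dir, which can be set explicitly in mapred-site.xml:)

```xml
<!-- mapred-site.xml: staging dir where job.splitmetainfo and other
     submission files are written. The value below is the Hadoop default;
     the point is that this path must resolve to the same shared Gluster
     location from the client and from every NodeManager host. -->
<property>
  <name>yarn.app.mapreduce.am.staging-dir</name>
  <value>/tmp/hadoop-yarn/staging</value>
</property>
```

An exitCode -1000 means container localization failed before the AM even started, so it is also worth confirming from each NodeManager host that the Gluster mount (here /mnt/glusterfs) is present and readable by the user running YARN.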

cb200664 closed this as completed Sep 5, 2015
b-long commented Sep 7, 2015

I'll be using GlusterFS with Hadoop. Did this turn out to be a non-issue, @cb200664?

b-long commented Sep 9, 2015

@jayunit100 @childsb Any idea if glusterfs-hadoop works with Hadoop version 2.6.0 ?

cb200664 (Author) commented:
Yes, it's OK.

@b-long
Copy link

b-long commented Sep 16, 2015

Thanks @cb200664 , I appreciate it 😄
