When executing the wordcount MapReduce job, I get an exception (below) from the Swift FS component about the connection being closed.

I've verified that I can cat the file:

hadoop fs -cat swift://mycontainer.hp/wordcount.txt20130813015900000

Any idea what I might be missing?

Thanks,
Christian
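In case it's relevant, one thing I plan to try is raising the connector's timeout and retry settings, since cat succeeding while the map task fails roughly 26 seconds in looks more like the HTTPS connection being dropped mid-read than an auth or path problem. This is only a sketch: the property names below are the ones documented for the Apache hadoop-openstack module, so it assumes this build of the connector uses the same names, which may not be the case.

```xml
<!-- core-site.xml: hypothetical Swift connector tuning.
     Property names follow the Apache hadoop-openstack docs and
     may not apply to this build. Timeouts are in milliseconds. -->
<property>
  <name>fs.swift.connect.timeout</name>
  <value>60000</value>
</property>
<property>
  <name>fs.swift.socket.timeout</name>
  <value>60000</value>
</property>
<property>
  <name>fs.swift.connect.retry.count</name>
  <value>5</value>
</property>
```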
root@namenode:/opt/hadoop# ./bin/hadoop jar hadoop-examples-1.2.0.jar wordcount swift://mycontainer.hp/wordcount.txt20130813015900000 swift://mycontainer.hp/out-01
13/08/22 12:36:05 INFO input.FileInputFormat: Total input paths to process : 1
13/08/22 12:36:07 INFO util.NativeCodeLoader: Loaded the native-hadoop library
13/08/22 12:36:07 WARN snappy.LoadSnappy: Snappy native library not loaded
13/08/22 12:36:07 INFO mapred.JobClient: Running job: job_201308221224_0002
13/08/22 12:36:08 INFO mapred.JobClient: map 0% reduce 0%
13/08/22 12:36:34 INFO mapred.JobClient: Task Id : attempt_201308221224_0002_m_000000_0, Status : FAILED
java.io.IOException: Operation failed as connection is closed to https://region-a.geo-1.objects.hpcloudsvc.com/v1//mycontainer/wordcount.txt20130813015900000
    at org.apache.hadoop.fs.swift.http.HttpInputStreamWithRelease.assumeNotReleased(HttpInputStreamWithRelease.java:146)
    at org.apache.hadoop.fs.swift.http.HttpInputStreamWithRelease.read(HttpInputStreamWithRelease.java:180)
    at org.apache.hadoop.fs.swift.snative.SwiftNativeInputStream.read(SwiftNativeInputStream.java:146)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:256)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
    at java.io.DataInputStream.read(DataInputStream.java:83)
    at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:205)
    at org.apache.hadoop.util.LineReader.readLine(LineReader.java:169)
    at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.nextKeyValue(LineRecordReader.java:139)
    at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:531)
    at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:364)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)