S3A error: HTTP request: Timeout waiting for connection from pool #1906

Closed
akmorrow13 opened this Issue Feb 7, 2018 · 9 comments

@akmorrow13 (Contributor) commented Feb 7, 2018

When trying to read s3a://1000genomes/phase1/data/NA19685/exome_alignment/NA19685.mapped.illumina.mosaik.MXL.exome.20110411.bam from s3a and count the records, I get the following error:

com.amazonaws.SdkClientException: Unable to execute HTTP request: Timeout waiting for connection from pool

I tried setting the max connections to 5000, but this didn't help. Is there a workaround for this issue?
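For context, a minimal sketch of the failing workload (assuming adam-shell, where the ADAMContext implicits make loadAlignments available on sc):

```scala
// Sketch of the failing read, assuming adam-shell.
import org.bdgenomics.adam.rdd.ADAMContext._

val reads = sc.loadAlignments(
  "s3a://1000genomes/phase1/data/NA19685/exome_alignment/NA19685.mapped.illumina.mosaik.MXL.exome.20110411.bam")

// count() forces every partition to open the BAM (and probe for its
// index) over S3A, which is where the connection pool times out.
reads.rdd.count()
```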

@heuermh (Member) commented Feb 7, 2018

Is this from adam-shell? Could you provide the full command line and environment?

@fnothaft (Member) commented Feb 7, 2018

Can you paste the full stack trace? This is caused by an I/O resource leak.

@akmorrow13 (Contributor) commented Feb 8, 2018

 WARN TaskSetManager: Lost task 9.0 in stage 0.0 (TID 9, ip-172-31-17-143.us-west-2.compute.internal, executor 2): com.amazonaws.SdkClientException: Unable to execute HTTP request: Timeout waiting for connection from pool
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleRetryableException(AmazonHttpClient.java:1113)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1063)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
	at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4247)
	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4194)
	at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1253)
	at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1228)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:903)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:77)
	at net.fnothaft.s3a.jsr203.HadoopPath.getAttributes(HadoopPath.java:90)
	at net.fnothaft.s3a.jsr203.HadoopFileSystemProvider.readAttributes(HadoopFileSystemProvider.java:176)
	at java.nio.file.Files.readAttributes(Files.java:1737)
	at java.nio.file.Files.isRegularFile(Files.java:2229)
	at htsjdk.samtools.SamFiles.lookForIndex(SamFiles.java:72)
	at htsjdk.samtools.SamFiles.findIndex(SamFiles.java:39)
	at org.seqdoop.hadoop_bam.BAMRecordReader.initialize(BAMRecordReader.java:140)
	at org.seqdoop.hadoop_bam.BAMInputFormat.createRecordReader(BAMInputFormat.java:121)
	at org.seqdoop.hadoop_bam.AnySAMInputFormat.createRecordReader(AnySAMInputFormat.java:190)
	at org.apache.spark.rdd.NewHadoopRDD$$anon$1.liftedTree1$1(NewHadoopRDD.scala:180)
	at org.apache.spark.rdd.NewHadoopRDD$$anon$1.<init>(NewHadoopRDD.scala:179)
	at org.apache.spark.rdd.NewHadoopRDD.compute(NewHadoopRDD.scala:134)
	at org.apache.spark.rdd.NewHadoopRDD.compute(NewHadoopRDD.scala:69)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
	at org.apache.spark.scheduler.Task.run(Task.scala:108)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.http.conn.ConnectionPoolTimeoutException: Timeout waiting for connection from pool
	at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.leaseConnection(PoolingHttpClientConnectionManager.java:292)
	at org.apache.http.impl.conn.PoolingHttpClientConnectionManager$1.get(PoolingHttpClientConnectionManager.java:269)
	at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.amazonaws.http.conn.ClientConnectionRequestFactory$Handler.invoke(ClientConnectionRequestFactory.java:70)
	at com.amazonaws.http.conn.$Proxy13.get(Unknown Source)
	at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:191)
	at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185)
	at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
	at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
	at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
	at com.amazonaws.http.apache.client.impl.SdkHttpClient.execute(SdkHttpClient.java:72)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1235)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1055)
	... 49 more

@akmorrow13 (Contributor) commented Feb 8, 2018

A fix is to set fs.s3a.connection.maximum=5000, but I am not sure if this is a good idea.
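For reference, a sketch of applying that setting (assumes a live SparkContext named sc; the equivalent submit-time flag is shown in the comment):

```scala
// Workaround sketch: raise the S3A HTTP connection pool cap.
// Equivalent at submit time:
//   spark-submit --conf spark.hadoop.fs.s3a.connection.maximum=5000 ...
sc.hadoopConfiguration.setInt("fs.s3a.connection.maximum", 5000)
```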

@akmorrow13 (Contributor) commented Feb 8, 2018

This is in Python.

@fnothaft (Member) commented Feb 8, 2018

> A fix is to set fs.s3a.connection.maximum=5000, but I am not sure if this is a good idea.

s/fix/workaround/g ;)

I'll look into this a bit more; there's a stream getting left open somewhere, and I'm not sure whether it is in Hadoop-BAM or in jsr203-s3a.
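To illustrate the class of leak being described (illustrative only; the path is hypothetical and this is not the actual offending code):

```scala
import java.nio.file.{Files, Path, Paths}

// Hypothetical s3a path; resolving it requires the jsr203-s3a
// filesystem provider to be on the classpath.
val index: Path = Paths.get(java.net.URI.create("s3a://bucket/sample.bam.bai"))

// Leaky: the stream wraps a pooled S3A HTTP connection that is never
// returned; enough of these and the pool times out.
val leaky = Files.newInputStream(index)
leaky.read()

// Safe: close deterministically so the connection goes back to the pool.
val in = Files.newInputStream(index)
try {
  in.read()
} finally {
  in.close()
}
```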

@akmorrow13 (Contributor) commented Feb 8, 2018

Thanks @fnothaft! Are there any hints or known bugs that would help with hacking at this in jsr203-s3a?

@heuermh added this to the 0.24.0 milestone Feb 14, 2018

@heuermh added this to Triage in Release 0.24.0 Feb 14, 2018

@akmorrow13 (Contributor) commented Feb 15, 2018

This is no longer an issue; it was fixed in hadoop-bam 7.9.1.
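For anyone updating, a sketch of pinning the fixed release in an sbt build (coordinates as published under the org.seqdoop group on Maven Central):

```scala
// build.sbt: depend on the Hadoop-BAM release containing the fix.
libraryDependencies += "org.seqdoop" % "hadoop-bam" % "7.9.1"
```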

@akmorrow13 closed this Feb 15, 2018
