
Request fails at the server when writing large files #232

Closed

hakanai opened this issue Jan 19, 2016 · 1 comment

Comments

hakanai (Contributor) commented Jan 19, 2016

If I try to send a large file to a WebDAV server (at least ours, which is running inside Jetty), the request dies with an error in the logs:

Caused by: java.net.SocketException: Broken pipe
    at java.net.SocketOutputStream.socketWrite0(Native Method)
    at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109)
    at java.net.SocketOutputStream.write(SocketOutputStream.java:153)
    at sun.security.ssl.OutputRecord.writeBuffer(OutputRecord.java:431)
    at sun.security.ssl.OutputRecord.write(OutputRecord.java:417)
    at sun.security.ssl.SSLSocketImpl.writeRecordInternal(SSLSocketImpl.java:876)
    at sun.security.ssl.SSLSocketImpl.writeRecord(SSLSocketImpl.java:847)
    at sun.security.ssl.AppOutputStream.write(AppOutputStream.java:123)
    at org.apache.commons.io.output.ProxyOutputStream.write(ProxyOutputStream.java:90)
    at org.apache.commons.io.output.TeeOutputStream.write(TeeOutputStream.java:64)
    at org.apache.http.impl.io.SessionOutputBufferImpl.streamWrite(SessionOutputBufferImpl.java:126)
    at org.apache.http.impl.io.SessionOutputBufferImpl.flushBuffer(SessionOutputBufferImpl.java:138)
    at org.apache.http.impl.io.SessionOutputBufferImpl.write(SessionOutputBufferImpl.java:169)
    at org.apache.http.impl.io.ChunkedOutputStream.flushCache(ChunkedOutputStream.java:111)
    at org.apache.http.impl.io.ChunkedOutputStream.write(ChunkedOutputStream.java:158)
    at java.io.DataOutputStream.write(DataOutputStream.java:88)

Over on the server, it's a bit more mysterious:

2016-01-20 09:58:46.474+1100 [qtp1273689789-61] WARN  org.eclipse.jetty.http.HttpParser - badMessage: java.lang.IllegalStateException: too much data after closed for HttpChannelOverHttp@5c0ca20a{r=5,c=false,a=IDLE,uri=-}

No entry ever ends up in the access log because HttpParser doesn't seem to acknowledge it as a valid request.

Posts on Stack Overflow suggest that this means the client is misbehaving and sending data after the request was supposed to have ended, but I don't really know. If that's true, the problem lies in either Sardine or Apache HttpClient, though I'm not sure which.

I can log what goes over the wire and it looks fairly normal:

PUT /api/dav/path/to/file.dat HTTP/1.1
Overwrite: T
Content-Type: ISO-8859-1
Transfer-Encoding: chunked
Host: bucket.local:27443
Connection: Keep-Alive
User-Agent: Agent/6.3.99999 (Mac OS X 10.11.1; en_AU)
Cookie: JSESSIONID=vxa3pfv4op84iqm13ot215k0
Accept-Encoding: gzip,deflate

800
<binary data>
800
<binary data>
800
<binary data>
800<end of sent data>

I can only guess that at some point the server decides the request is too large and interrupts it. I have tried setting the max POST size to Integer.MAX_VALUE on the Jetty side, but this appears to have no effect.

Perhaps Sardine should write particularly large files one chunk at a time, so that no individual request treads on the server's limit. Even if I figure out how to make this 600 MB request go through, there will be other servers where I don't have access to raise the limit.
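One workaround worth trying before any server-side change: give Sardine the content length up front, so Apache HttpClient sends a Content-Length header instead of Transfer-Encoding: chunked. A minimal sketch, assuming the InputStream put() overload that takes an explicit content length (present in recent Sardine versions); the credentials are placeholders and the URL is taken from the wire log above:

```java
import com.github.sardine.Sardine;
import com.github.sardine.SardineFactory;

import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;

public class PutWithKnownLength {
    public static void main(String[] args) throws Exception {
        // Placeholder credentials for illustration only.
        Sardine sardine = SardineFactory.begin("user", "secret");
        File file = new File("file.dat");
        try (InputStream in = new FileInputStream(file)) {
            sardine.put("https://bucket.local:27443/api/dav/path/to/file.dat",
                    in,
                    "application/octet-stream", // explicit content type
                    true,                       // send Expect: 100-continue first
                    file.length());             // known length; avoids chunked encoding
        }
    }
}
```

With Expect: 100-continue the server gets a chance to reject the request before the body is transmitted, which also avoids the broken-pipe symptom when the server bails out early.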

Gerry33 commented Mar 3, 2016

Try a chunked upload, sending the file in byte-range pieces, if the server supports it. I managed to get this working against Tomcat, but it failed against Apache.

Perhaps something like this:

```java
// Sketch only; assumes `sardine`, `uriString`, and `file` are already set up.
import org.apache.http.HttpHeaders;

import java.io.ByteArrayInputStream;
import java.io.FileInputStream;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.FileChannel.MapMode;
import java.util.HashMap;
import java.util.Map;

final int SIZE = 10000; // bytes per piece

try (FileInputStream f = new FileInputStream(file);
     FileChannel ch = f.getChannel()) {
    long fileSize = ch.size();
    MappedByteBuffer mappedBuffer = ch.map(MapMode.READ_ONLY, 0L, fileSize);
    long from = 0;
    while (mappedBuffer.hasRemaining()) {
        int nGet = Math.min(mappedBuffer.remaining(), SIZE);
        byte[] byteArray = new byte[nGet];
        mappedBuffer.get(byteArray, 0, nGet);
        long to = from + nGet;
        Map<String, String> header = new HashMap<>();
        // Range end is inclusive; the total size is mandatory. Works against a local Tomcat.
        header.put(HttpHeaders.CONTENT_RANGE, "bytes " + from + "-" + (to - 1) + "/" + fileSize);
        // header.put(HttpHeaders.CONTENT_TYPE, "application/octet-stream"); // or try this
        // header.put(HttpHeaders.CONTENT_TYPE, "multipart/byteranges"); // or this
        // header.put(HttpHeaders.TRANSFER_ENCODING, "chunked"); // fails: Transfer-Encoding header already present
        sardine.put(uriString, new ByteArrayInputStream(byteArray), header);
        from = to;
    }
}
```
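For what it's worth, partial PUTs carrying a Content-Range header are a server-specific extension rather than standard HTTP/WebDAV behaviour, which is likely why this works against Tomcat (whose WebDAV servlet has historically accepted them) but not against Apache's mod_dav.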

Repository owner locked and limited conversation to collaborators Jun 15, 2023
dkocher converted this issue into discussion #400 Jun 15, 2023

This issue was moved to a discussion. You can continue the conversation there.
