ReadAheadRemoteFileInputStream #183

Closed
bkarge opened this issue Mar 11, 2015 · 4 comments

@bkarge
Contributor

bkarge commented Mar 11, 2015

sshj.sftp.RemoteFile.ReadAheadRemoteFileInputStream.read() leads to data overwrites and/or ArrayIndexOutOfBoundsException when len < res.recvLen.

The problem does not occur with a regular read(), because there the size of the response is limited by the len argument.

With read-ahead, the size of the response is bounded by the len argument of an earlier request.

This bug is not trivial to fix, because the read path relies on the RemoteFile.checkReadResponse Swiss army knife, so there is no easy way to buffer partially evaluated response packets.
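
To make the failure mode concrete, here is a minimal sketch of the faulty pattern (illustrative code, not sshj's actual implementation): the copy into the caller's buffer is bounded by the oldest outstanding request's length, not by the current len.

import java.util.ArrayDeque;
import java.util.Deque;

public class ReadAheadOverrunSketch {
    static class Request {
        final long offset;
        final int len; // length chosen when the request was sent
        Request(long offset, int len) { this.offset = offset; this.len = len; }
    }

    private final Deque<Request> unconfirmed = new ArrayDeque<>();
    private long requestOffset = 0;

    public int read(byte[] buf, int off, int len) {
        // A new read-ahead request is sized by the *current* len...
        unconfirmed.add(new Request(requestOffset, len));
        requestOffset += len;
        // ...but this call is served from the *oldest* outstanding request.
        Request oldest = unconfirmed.remove();
        byte[] response = receiveResponse(oldest); // up to oldest.len bytes
        // BUG: the copy is bounded by the response length (up to oldest.len),
        // not by the caller's len, so it can overrun buf.
        System.arraycopy(response, 0, buf, off, response.length);
        return response.length;
    }

    private byte[] receiveResponse(Request r) {
        return new byte[r.len]; // stand-in for an SFTP READ response
    }
}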

@hierynomus
Owner

Hi bkarge,

Do you have a sample or test case to reproduce (preferably both of) the behaviours? That would help tremendously in fixing it.

@bkarge
Contributor Author

bkarge commented Mar 27, 2015

Sir!

You use an ordered request FIFO with entries (r+0, l1), (r+1, l2), ...
When read(myBuf, off, lk) is called, you enqueue (r+k, lk) and dequeue (r+0, l1).
Then you call checkReadResponse, which effectively calls arraycopy(..., myBuf, off, l1), because SFTP will return a block of size l1 for the next response (if no problem occurred, but that's still another story).

If l1 > lk, then you overwrite the end of my buffer.

One idea for fixing the problem would be to hide the request FIFO behind an InputStream and fill myBuf byte by byte (skipping response packets internally when needed); see the sketch below. You don't have to literally use an InputStream, it's just to illustrate the technique.
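
A minimal sketch of that buffering technique (illustrative names, not sshj's API): the oversized response stays in an internal buffer, and each read() copies at most len bytes out of it.

import java.util.ArrayDeque;
import java.util.Deque;

public class BufferedReadAheadSketch {
    static class Request {
        final int len;
        Request(int len) { this.len = len; }
    }

    private final Deque<Request> unconfirmed = new ArrayDeque<>();
    private byte[] pending;   // leftover bytes from the last response
    private int pendingPos;

    public int read(byte[] buf, int off, int len) {
        if (pending == null || pendingPos == pending.length) {
            // The sketch assumes a request is always in flight at this point.
            pending = receiveResponse(unconfirmed.remove());
            pendingPos = 0;
        }
        // The caller's len bounds the copy, never the request's original length.
        int n = Math.min(len, pending.length - pendingPos);
        System.arraycopy(pending, pendingPos, buf, off, n);
        pendingPos += n;
        return n;
    }

    private byte[] receiveResponse(Request r) {
        return new byte[r.len]; // stand-in for an SFTP READ response
    }
}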

@bkarge
Contributor Author

bkarge commented Jun 3, 2015

Below is a demonstration of the behaviour.

I have submitted a pull request...

import net.schmizz.sshj.SSHClient;
import net.schmizz.sshj.sftp.OpenMode;
import net.schmizz.sshj.sftp.ReadAheadRemoteFileInputStream;
import net.schmizz.sshj.sftp.RemoteFile;
import net.schmizz.sshj.sftp.SFTPEngine;
import net.schmizz.sshj.transport.verification.HostKeyVerifier;

import java.io.InputStream;
import java.security.PublicKey;
import java.util.Arrays;
import java.util.EnumSet;
import java.util.Random;

public class SftpReadAheadTest {
    public static void main(String[] args) {
        try {
            SSHClient ssh = new SSHClient();
            ssh.loadKnownHosts();
            ssh.addHostKeyVerifier(new HostKeyVerifier()
            {
                @Override
                public boolean verify( String hostname, int port, PublicKey key ) {
                    return true;
                }
            });
            ssh.connect("localhost");
            ssh.authPublickey(System.getProperty("user.name"));

            SFTPEngine sftp = new SFTPEngine( ssh ).init();

            RemoteFile rf;
            rf = sftp.open( "/tmp/SftpReadAheadTest.bin", EnumSet.of( OpenMode.WRITE, OpenMode.CREAT ) );
            byte[] data = new byte[8192];
            new Random( 53 ).nextBytes( data );
            data[3072] = 1;
            rf.write( 0, data, 0, data.length );
            rf.close();

            rf = sftp.open( "/tmp/SftpReadAheadTest.bin" );
            InputStream rs = rf.new ReadAheadRemoteFileInputStream(16 /*maxUnconfirmedReads*/);
            byte[] test = new byte[4097];
            int n = 0;

            while (n < 2048)
                n += rs.read( test, n, 2048 - n );

            while (n < 3072)
                n += rs.read( test, n, 3072 - n );

            if (test[3072] != 0)
                System.err.println("buffer overrun!");

            n += rs.read( test, n, test.length - n ); // --> ArrayIndexOutOfBoundsException

            byte[] test2 = new byte[data.length];
            System.arraycopy( test, 0, test2, 0, test.length );

            while (n < data.length)
                n += rs.read( test2, n, data.length - n );

            if (Arrays.equals( data, test2 ))
                System.out.println("All is good!");

        }
        catch (Throwable t)
        {
            t.printStackTrace();
        }
    }
}

@hierynomus
Owner

Fixed by merge of #199

hierynomus modified the milestone: 0.13.0 Jun 17, 2015
vladimirlagunov added a commit to vladimirlagunov/sshj that referenced this issue Mar 4, 2022
…ffer is too big

Before this change, if an instance of ReadAheadRemoteFileInputStream was wrapped in a BufferedInputStream with a big buffer, the SSH client requested big packets from the server. It turned out that if the server sent a response smaller than requested, the client would not adjust to the decreased window size and would read the file incorrectly.

This change detects cases where the server is not able to fulfil the client's requests. The client now adjusts the maximum request length, sends new read-ahead requests, and ignores all read-ahead requests sent earlier.

Simply specifying some allegedly small constant buffer size would not have helped in all possible cases: there is no way to explicitly query the maximum request length inside a client, and the limits differ from server to server. For instance, OpenSSH defines SFTP_MAX_MSG_LENGTH as 256 * 1024, while Apache SSHD defines MAX_READDATA_PACKET_LENGTH as 63 * 1024 and allows that size to be redefined.

Interestingly, a similar issue hierynomus#183 was fixed many years ago, but the bug was actually in the code introduced by that fix.
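
A hedged sketch of the adjustment described above (illustrative names and fields, not the actual patch): when a response is shorter than its request, the client adopts the server's smaller limit and marks every request already in flight as stale.

public class RequestLengthAdjustSketch {
    private int maxRequestLength = 256 * 1024; // optimistic initial cap
    private long staleBefore = 0;              // request ids below this are ignored
    private long nextRequestId = 0;

    static class Request {
        final long id;
        final int len;
        Request(long id, int len) { this.id = id; this.len = len; }
    }

    Request nextRequest(int wanted) {
        // Never ask for more than the smallest cap observed so far.
        return new Request(nextRequestId++, Math.min(wanted, maxRequestLength));
    }

    // Called for each READ response; returns false if the response belongs
    // to a request issued before the cap was lowered and must be skipped.
    // (A real implementation would also have to distinguish end-of-file,
    // which can likewise produce a short response.)
    boolean onResponse(Request req, int receivedLen) {
        if (req.id < staleBefore)
            return false; // stale read-ahead, discard it
        if (receivedLen < req.len) {
            // The server served less than requested: adopt its limit and
            // invalidate everything already in flight so offsets realign.
            maxRequestLength = receivedLen;
            staleBefore = nextRequestId;
        }
        return true;
    }
}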
hierynomus pushed a commit that referenced this issue Mar 4, 2022
…ffer is too big (#769)
