Put files truncating at 100kb when not reading from file #531
Additionally, wrapping my filename in a createReadStream makes the upload work.
Alright, instead of more comments I'll just update this one with my findings.
If this is a bug, you have neglected to provide anywhere near
sufficient details for me to reproduce it. Being able to reproduce the
issue is the first step.
You need to supply:
- What version of ssh2-sftp-client are you using?
- What version of node are you running?
- What platform is the client running on (Windows?)
- What platform is the server running on (Windows?)
Also, if you can provide a complete minimal example script which
reproduces the issue, that would help a lot.
When this type of issue has been reported by others in the past, in
every case the issue was due to the client script calling end() before
the put/get or fastPut/fastGet had completed, or to the data source being
truncated before it was passed to the client (I suspect this is your
issue, based on your finding that wrapping the buffer in a read stream
makes it work). Therefore, the first step is to eliminate the simpler
possibilities.
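For illustration, a minimal sketch (a hypothetical fragment, not your code) of the client-side pattern that produces exactly this kind of truncation:

// Classic mistake: put() returns a promise; if it is not awaited,
// end() can close the connection mid-transfer and truncate the upload.
sftp.put(buffer, './remote.pdf'); // missing await
await sftp.end();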
First step, remove the call to end() and see if that makes any
difference. If the data is transferred with no problem, then we know the
issue is because end() is being called before put() has completed and we
can then focus on why that is happening.
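As a sketch, assuming `pdf` holds your buffer (the connection settings are placeholders), the test would look like:

import SFTPClient from 'ssh2-sftp-client';

const sftp = new SFTPClient();
await sftp.connect({ host: '...', port: 22, username: '...', password: '...' });
// put() is awaited and end() is deliberately omitted for this test;
// stop the process manually once the stat output has been printed.
await sftp.put(pdf, './test-no-end.pdf');
console.log(await sftp.stat('./test-no-end.pdf'));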
If removing end() makes no difference, then we need to focus on the data
source. I would do the following (a sketch of all three tests follows the list):
1. Create a test file by writing your buffer to a local file. Open a
local write stream and pipe the buffer into the stream using the stream's
pipe() method (this mirrors how the client works internally). Verify the
integrity and size of the created file.
2. Use the file created above and call put() using the filename. If this
works, then we know it is some issue with the buffer source.
3. As another test, open a read stream on the test file created in 1 and
pass that in as the source to put(). If that also works, then we have
even more confirmation the problem is with the buffer being passed in.
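A minimal sketch of those three tests, assuming `buffer` holds your data and the connection settings are filled in:

import fs from 'node:fs';
import { Readable } from 'node:stream';
import SFTPClient from 'ssh2-sftp-client';

// Test 1: write the buffer to a local file through a piped stream,
// mirroring how the client handles buffer input internally.
// Wrapping the buffer in an array makes Readable.from() emit it as a
// single chunk rather than iterating it byte by byte.
await new Promise((resolve, reject) => {
  const ws = fs.createWriteStream('./test.pdf');
  ws.on('finish', resolve).on('error', reject);
  Readable.from([buffer]).pipe(ws);
});
console.log('local size:', fs.statSync('./test.pdf').size);

const sftp = new SFTPClient();
await sftp.connect({ /* your connection settings */ });

// Test 2: upload by file path.
await sftp.put('./test.pdf', './by-path.pdf');
console.log(await sftp.stat('./by-path.pdf'));

// Test 3: upload from a read stream opened on the same file.
await sftp.put(fs.createReadStream('./test.pdf'), './by-stream.pdf');
console.log(await sftp.stat('./by-stream.pdf'));

await sftp.end();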
If we get to this point, you then need to generate a test script which
includes the buffer creation code and supply that so that I can test it
and try to reproduce the issue. If I cannot reproduce the issue, I
cannot fix it.
Note that if you are using Windows, which it appears you are, my ability
to help is very limited. I don't run Windows and I no longer have access
to a Windows environment for testing. Last time I did have access and
was able to test on Windows, all worked fine. However, that was a while
ago, so who knows.
Tristan Sokol <***@***.***> writes:
… Alright, instead of more comments I'll just update this one with my findings
const sftp = new SFTPClient();
await sftp.connect({
  host:
  port: '22',
  username:
  password:
});
console.log(pdf.length, pdf.byteLength, Buffer.isBuffer(pdf)); // 1882193 1882193 true
await sftp.put(pdf, `${name}.pdf`);
console.log(await sftp.stat(`${name}.pdf`));
await sftp.end();
Technique | Result
.put(pdf, ...) | size: 102400
writeFileSync('./test.pdf', pdf); await put('./test.pdf', ...) | size: 1882193
put(createReadStream('./test.pdf'), ...) | size: 1882193
put(Readable.from(pdf), ...) | size: 102400
put(readFileSync('./test.pdf'), ...) // reading the file logs 1882193 bytes | size: 102400
Thank you! I had actually caught that I forgot those important details and made a fast follow-on edit, but I know they don't update the emails, so I'll include them here as well.
I created the following script using the (hopefully innocent) 1 MB file I randomly found by googling:
which exhibits the issue I describe:
I tried your suggestion of removing end().
This works as intended, but I'm not sure I would agree that this indicates an issue with the buffer source (the buffer source is readFileSync, which by all means available seems to be reading the file into a buffer correctly).
This works just fine as well. I am testing locally with WSL (Ubuntu 22.04.2 LTS) but have confirmed this behavior on Linux instances in GCP. Let me know if this triggers any other thoughts!
I've had a closer look at your issue. Unfortunately, I could not
reproduce it with your data (or my own data), using either my own script
or your (slightly modified) script. This was using node v20.12.2 on
Linux (Fedora 39) and connecting to a remote SFTP server on a
Fedora 39 host running OpenSSH_9.3p1 with OpenSSL 3.1.1 (30 May 2023).
Something I did recall which may be of relevance: some time back, there
was another user having problems with truncated data. After
investigation, it turned out to be due to a network issue associated
with WSL. It was related to MTU size. I recall it because back in the
'old days', i.e. the mid 1990s, it was common to have to set a smaller MTU
size in the Windows 'winsock' settings because of problems with the
default standard MTU size (I'm sure if you google it you will find
something). I have also heard of similar issues when using Docker on Windows.
So, my suggestion would be to try testing your scripts outside a WSL
environment and see if you get a similar issue. You could also try
reducing the size of your MTU. I did a quick google on this and saw
there were lots of articles about people having to reset the MTU when
using WSL, especially with regard to large data transfers. Sorry, but
I'm not sure what a good size would be. You probably need to experiment.
For reference, below is the modified version of your script I used. The
modifications are cosmetic and I don't think they changed any
functionality. Note that there are other example scripts in the example
directory of the ssh2-sftp-client repository, including some using
buffers for input to put().
BTW, to reiterate: unless you have to use buffers, you're far better off
using either a readStream (my preferred method) or passing a file
path. Buffers are really only useful in a very limited set of use cases,
and most of those cases can now be dealt with using the far more
efficient stream model (a short stream-based sketch follows the script below).
import path from 'node:path';
import fs from 'node:fs';
import SFTPClient from 'ssh2-sftp-client';
import 'dotenv/config';

const file = fs.readFileSync('./gistfile1.txt');
console.log(`buffer size: ${file.length} Is a buffer: ${Buffer.isBuffer(file)}`);

const sftp = new SFTPClient();
await sftp.connect({
  host: process.env.SFTP_SERVER,
  username: process.env.SFTP_USER,
  password: process.env.SFTP_PASSWORD,
  port: process.env.SFTP_PORT || 22,
  // debug: (msg) => {
  //   console.error(msg);
  // },
});

const remotePath = './ts-data.txt';
await sftp.put(file, remotePath);
console.log(await sftp.stat(remotePath));
await sftp.end();
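As an illustration of the stream approach recommended above, here is a minimal sketch reusing the same env-based config (the paths are just examples):

import fs from 'node:fs';
import SFTPClient from 'ssh2-sftp-client';
import 'dotenv/config';

const sftp = new SFTPClient();
await sftp.connect({
  host: process.env.SFTP_SERVER,
  username: process.env.SFTP_USER,
  password: process.env.SFTP_PASSWORD,
  port: process.env.SFTP_PORT || 22,
});

// Stream the file instead of materialising it as a buffer first;
// the data moves through in chunks rather than being held in memory.
await sftp.put(fs.createReadStream('./gistfile1.txt'), './ts-data.txt');
console.log(await sftp.stat('./ts-data.txt'));
await sftp.end();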
Thanks again for your help. To close the loop here, I was able to get the server upgraded from OpenSSH 8.1 to 9.5, and that seems to have fixed the issue.
Hi! I'm encountering a weird bug and am hoping for some direction on where to go.
"ssh2-sftp-client": "^10.0.3"
Basically, I have files that are being truncated after the first 102,000 bytes. Smaller files work fine, but larger ones report success yet are clearly truncated. "Manually" connecting with
$ sftp
from the same machine works fine, which from what I understand eliminates the issues of #342. The code is very simple:
and the logs:
I see the log message
Uploaded data stream to /D:/Shares/0097643.pdf
which seems like everything is correct. What does the
SFTP: Inbound: Received STATUS (id:2, 4, "Failure")
mean? I initially suspected there being some kind of FTP quota or something, but again, I am able to transfer larger files just fine from the same machine with other SFTP tools. Any advice as to where to dig in more? Could this be a bug?