
Incorrect writing/reading of large BLOBs #19

Closed
gsbelarus opened this Issue Mar 20, 2019 · 3 comments

@gsbelarus commented Mar 20, 2019

We have a strange problem. Below is a test. It works fine with a BLOB string of 65,000 bytes, but when the size of the BLOB is increased to 650,000 bytes it fails. The BLOB is written without any apparent problem, but the subsequent SELECT statement reads back only 60,176 bytes of it.

We tested it with FB3 embedded and Node.js 11.12.

test('#fetch() with large blob', async () => {
    const attachment = await client.createDatabase(getTempFile('ResultSet-fetch-with-large-blob.fdb'), options);
    let transaction = await attachment.startTransaction();
    await attachment.execute(transaction, `create table t1 (x_blob blob)`);

    await transaction.commit();
    transaction = await attachment.startTransaction();
    const buffer = Buffer.from('#'.repeat(650000));  // 650,000 bytes, well over the 64 KB segment size

    await attachment.execute(transaction, `insert into t1 (x_blob) VALUES (?)`, [buffer]);

    await transaction.commit();
    transaction = await attachment.startTransaction();

    const resultSet = await attachment.executeQuery(transaction, `select x_blob from t1`);
    const result = await resultSet.fetch();
    const readStream = await attachment.openBlob(transaction, result[0][0]);

    const resultBuffer = Buffer.alloc(await readStream.length);
    await readStream.read(resultBuffer);  // a single read call for the whole blob
    await readStream.close();
    expect(resultBuffer.toString().length).toEqual(buffer.toString().length);

    await resultSet.close();
    await transaction.commit();
    await attachment.dropDatabase();
   });
@asfernandes (Owner) commented Mar 20, 2019

Both blob read and write have problems with data > 64 KB. I don't know why you say the write succeeded.

However, read() does not necessarily return the total number of bytes you request in one go. You should read in a loop until it returns -1, which represents the end of the stream. That could also be parameterized with an option to fully read the requested size in one call.
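
For illustration, such a read loop might look like the sketch below. readBlobFully is a hypothetical helper, not part of the driver; it assumes, as described above, that read(buffer) resolves to the number of bytes placed into the buffer, or -1 at end of stream.

async function readBlobFully(readStream) {
    const chunks = [];
    const chunk = Buffer.alloc(64 * 1024);

    for (;;) {
        // read() may fill only part of the buffer; it resolves to -1 at end of stream
        const bytesRead = await readStream.read(chunk);
        if (bytesRead === -1)
            break;
        chunks.push(Buffer.from(chunk.subarray(0, bytesRead)));  // copy out only the valid bytes
    }

    return Buffer.concat(chunks);
}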

@asfernandes asfernandes self-assigned this Mar 20, 2019

@gsbelarus (Author) commented Mar 20, 2019

I thought the write succeeded because no error was thrown during the method call.

Is it a limitation of the node driver, the fbclient library, or the Firebird server itself?

@asfernandes (Owner) commented Mar 20, 2019

In my test the method threw. Firebird has the limitation that each segment can have a maximum of 64 KB, so we should call it splitting the buffer.
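
For illustration, splitting a write into segments might look like the sketch below. writeBlobInSegments is a hypothetical helper; it assumes a blob stream whose write(buffer) accepts one chunk per call, and uses 65,535 bytes as the maximum segment length implied by the 64 KB limit above (the segment length is a 16-bit value).

const MAX_SEGMENT_LENGTH = 65535;  // just under 64 KB

async function writeBlobInSegments(blobStream, buffer) {
    // write the buffer as a sequence of segments, each within the limit;
    // subarray() clamps the end index, so the last segment may be shorter
    for (let pos = 0; pos < buffer.length; pos += MAX_SEGMENT_LENGTH) {
        const segment = buffer.subarray(pos, pos + MAX_SEGMENT_LENGTH);
        await blobStream.write(segment);
    }
}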

@asfernandes asfernandes added this to To do in driver@0.1.3 Mar 20, 2019

@asfernandes asfernandes moved this from To do to In progress in driver@0.1.3 Mar 20, 2019

@asfernandes asfernandes added this to In progress in driver-native@0.1.3 Mar 20, 2019

@asfernandes asfernandes moved this from In progress to Done in driver@0.1.3 Mar 21, 2019

@asfernandes asfernandes moved this from In progress to Done in driver-native@0.1.3 Mar 21, 2019
