Remove/increase the record size limit #7332

Open

dyemanov wants to merge 1 commit into master

Conversation

@dyemanov (Member) commented Oct 6, 2022

This addresses ticket #1130. After the compression improvements, the storage overhead is no longer an issue. I think we should still preserve some safety limit, e.g. 1MB. This change also suggests some other improvements, like compression of the stored temporary records (sorts, record buffers), but they may be addressed separately.

@@ -49,7 +49,7 @@ class VaryingString : public pool_alloc_rpt<SCHAR, type_str>
 	UCHAR str_data[2]; // one byte for ALLOC and one for the NULL
 };
 
-const ULONG MAX_RECORD_SIZE = 65535;
+const ULONG MAX_RECORD_SIZE = 1000000;	// just to protect from misuse
Member

Wouldn't 1048576 (1MB) be easier to document/explain?
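
For illustration, the suggestion amounts to something like this (a sketch only, not part of the actual diff):

const ULONG MAX_RECORD_SIZE = 1024 * 1024;	// 1 MiB (1048576), easy to document as "1 MB"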

@dyemanov (Member Author) commented Oct 7, 2022

Agreed. But my primary worry is whether we can foresee any other problems with this change. Increased tempspace usage is bad, but that is just a performance issue (those using very long records should keep it in mind). Longer records will also cause bigger memory usage. For very complex queries (those near the 255-context limit), if we imagine that e.g. every second stream has its rpb_record, then the maximum memory usage per query (worst case) increases from 8MB to 128MB. With many compiled statements being cached this may become a problem, although in practice we shouldn't expect all tables to be that wide. Alternatively, we could release the rpb records of cached requests when their use count drops to zero. Any other issues you can think of?
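
For reference, the worst-case arithmetic above works out roughly as follows (a back-of-the-envelope sketch; the names are illustrative, not actual engine constants, and it assumes ~128 of the 255 stream contexts each hold a max-size rpb_record):

const ULONG recordsPerQuery = 128;                     // roughly every second of the 255 contexts
const ULONG oldWorstCase = recordsPerQuery * 65535;    // ~8 MB per compiled query
const ULONG newWorstCase = recordsPerQuery * 1000000;  // ~128 MB per compiled query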

@dyemanov (Member Author) commented Oct 7, 2022

That said, the memory usage issue is not only about user statements but also about procedures/functions/triggers, which are cached as well. Maybe EXE_unwind() should delete all rpb_record's after closing the rsb's and releasing the local tables? Or should it be done by RecordStream::invalidateRecords()?
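
Roughly, the second option could look something like the sketch below. This is only an illustration of the idea; the signature and member access are assumptions and do not reflect the actual RecordStream internals:

// Sketch only: drop the stream's record buffer when a cached request is unwound,
// so a ~1 MB record does not stay allocated for the lifetime of the cache entry.
void RecordStream::invalidateRecords(Request* request) const
{
	record_param* const rpb = &request->req_rpb[m_stream];
	delete rpb->rpb_record;		// release the (possibly large) record buffer
	rpb->rpb_record = nullptr;	// it will be re-allocated on next use
}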

@livius2 commented Nov 27, 2022

As Firebird is used in LibreOffice, I suppose that 10MB would be a more rational value for them.
But of course they can build FB themselves with that value too.

@aafemt (Contributor) commented Nov 27, 2022

BTW, is there the same sanity check for the result set record size, or is it completely unlimited?

@dyemanov (Member Author)

Unlimited.
