Improve InboundTableMapper performance with bulk inserts #1626
Comments
How does this affect user code? Will we have to change existing user code when the grouping changes?
The change will be 100% transparent; it's just about changing the internal implementation to use more efficient data structures, which should allow the same SQL approach as with BulkInsertOperation with mappers in common use cases (such as the mapper usage currently in user code in the Rossini project, for example).
OK then. Nevertheless, since this will have a big impact on Rossini, I'd like to be among the reviewers as well, to see what happens. Thanks!
If we want it for Rossini, the milestone should change, right?
…e *Mapper modules to improve the performance of InboundTableMapper::queueData() with hashes of lists, to keep the data in the same format in which it will be inserted with bulk DML
InboundTableMapper bulk inserts should be made without converting the data to a list of hashes and then back again to a hash of lists; a hash of lists should be mapped in place, and any constant or identity mappings should be made on the source list directly, which should bring performance up close to that of BulkInsertOperation.
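To illustrate the difference in data handling, here is a minimal Python sketch (hypothetical function and column names; this is not the actual Qore implementation) contrasting the round-trip through a list of hashes with mapping a hash of lists in place, where identity mappings can simply reuse the source column lists:

```python
def map_via_list_of_hashes(cols, mapping):
    """Slow path: pivot column-oriented data (hash of lists) into one
    dict per row, map each row, then pivot back to a hash of lists for
    the bulk DML layer."""
    nrows = len(next(iter(cols.values())))
    rows = [{src: cols[src][i] for src in cols} for i in range(nrows)]
    mapped_rows = [{dst: row[src] for dst, src in mapping.items()} for row in rows]
    return {dst: [r[dst] for r in mapped_rows] for dst in mapping}

def map_hash_of_lists_in_place(cols, mapping):
    """Fast path: map column-oriented data directly; an identity
    mapping just rebinds the source list with no per-row dicts."""
    return {dst: cols[src] for dst, src in mapping.items()}

# Example input: two source columns, renamed for the target table
# (column and mapping names are made up for illustration).
cols = {"id": [1, 2, 3], "name": ["a", "b", "c"]}
mapping = {"customer_id": "id", "customer_name": "name"}

assert map_via_list_of_hashes(cols, mapping) == map_hash_of_lists_in_place(cols, mapping)
```

The fast path produces the same output while skipping the per-row dict allocations entirely, which is where the performance gain over the round-trip approach comes from.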