'System.OutOfMemoryException' was thrown on a Large Table #19
The script has been very helpful to my project. Thank you very much.

I have run into one problem: when I run the script on a very large table, I get the following error:

'System.OutOfMemoryException' was thrown.

The particular table has 4 columns (2 Int32 and 2 String) with about 1.7 million rows of data. Is this something we can solve, or is it an inherent system limitation?

Comments
The tool was originally designed for synchronising smaller data sets between environments via source control, so typically a few thousand rows of static data would be the maximum. Others have used it for slightly different purposes, such as performing simple ETL operations between databases. To allow it to be used for datasets of this size, with the data fully synchronised (i.e. all rows in the target inserted/updated/deleted to reflect the source data set), a fundamentally different approach is needed to merge the data reliably and efficiently. The most workable approach I can see is to pre-populate a table in the target environment with the source data (e.g. via bulk or batched inserts), then generate a MERGE statement that reads from a table source rather than from an inline VALUES list. For example:
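A minimal sketch of what that invocation might look like (the table name [dbo].[CurrencyRate] is hypothetical, and @include_values = 0 is my assumption for a parameter that suppresses the inline VALUES list):

```sql
-- Hypothetical invocation: generate a MERGE for dbo.CurrencyRate that
-- reads from a table source instead of embedding ~1.7M rows as VALUES.
-- @include_values = 0 is assumed to suppress inline VALUES generation.
EXEC [dbo].[sp_generate_merge]
    @schema = 'dbo',
    @table_name = 'CurrencyRate',
    @include_values = 0;
```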
Generates the following statement:
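Along these lines (a sketch only: the column names are invented to match the reported table shape of two int and two string columns, and the real output includes additional housekeeping such as identity handling):

```sql
-- Note that [Source] is a table object, not a huge inline VALUES list.
MERGE INTO [dbo].[CurrencyRate] AS [Target]
USING [dbo].[CurrencyRate] AS [Source]
ON ([Target].[CurrencyId] = [Source].[CurrencyId]
    AND [Target].[RateTypeId] = [Source].[RateTypeId])
WHEN MATCHED AND ([Target].[CurrencyCode] <> [Source].[CurrencyCode]
               OR [Target].[RateValue] <> [Source].[RateValue]) THEN
    UPDATE SET [CurrencyCode] = [Source].[CurrencyCode],
               [RateValue]    = [Source].[RateValue]
WHEN NOT MATCHED BY TARGET THEN
    INSERT ([CurrencyId], [RateTypeId], [CurrencyCode], [RateValue])
    VALUES ([Source].[CurrencyId], [Source].[RateTypeId],
            [Source].[CurrencyCode], [Source].[RateValue])
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;
```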
The downside is that you'd need to manually edit the generated statement to use the pre-populated table object (rather than the original table object), so that the MERGE runs against the staged copy of the source data.
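For instance (the temp table name #currencyRateBulkInserted is borrowed from a later comment in this thread, and the file path and load method are placeholders):

```sql
-- Hypothetical staging step, run in the target environment: clone the
-- table's shape into a temp table, then bulk-load the source rows.
SELECT TOP (0) *
INTO #currencyRateBulkInserted
FROM [dbo].[CurrencyRate];

BULK INSERT #currencyRateBulkInserted
FROM 'C:\exports\CurrencyRate.dat'  -- placeholder path to the exported rows
WITH (TABLOCK, BATCHSIZE = 50000);

-- The generated MERGE would then be hand-edited so that its USING clause
-- reads: USING #currencyRateBulkInserted AS [Source]
```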
I'm curious to know if this kind of general approach would suit your needs? If so, there's always the possibility of adding a new parameter to override the source table name.
"The downside is that you'd need to manually edit the generated statement to use the pre-populated table object (rather than the original table object), " A way to make it more dynamic would be add an additional parameter that would take in a string - custom table name (e.g. @custom_source_name = '#currencyRateBulkInserted') - and the procedure would use this parameter's value instead if NOT NULL (a validation can be added at the top of the merge to throw an error if an object with that name doesn't exist). Just a thought off the top of my head. It's possible that I might be missing something. |
An update on this: @EitanBlumin has very helpfully implemented a new parameter that allows you to split the source rows into multiple MERGE statements (usage sketched below). This should avoid the out-of-memory exception. If it recurs in spite of this, please comment here and I will re-open the issue.
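For reference, a usage sketch (I'm assuming the parameter is named @max_rows_per_batch; check the proc's header comments for the exact name and semantics):

```sql
-- Ask the proc to emit several MERGE statements of at most 50,000
-- source rows each, rather than one statement carrying ~1.7M rows.
EXEC [dbo].[sp_generate_merge]
    @schema = 'dbo',
    @table_name = 'CurrencyRate',
    @max_rows_per_batch = 50000;
```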