Migrated from https://code.google.com/p/roundhouse/issues/detail?id=29
2. Performance of really really large scripts is not good.
In our spike, where we test new versions of RoundhousE, we have a very large
script containing a lot of initial data. The script was generated by a SQL
tool and performed fine in earlier versions of RoundhousE: it ran in <1 minute.
Now it is taking >15 minutes. We also have a script of 3MB, and that one runs fine.
This is probably NOT a show stopper for RoundhousE, because people shouldn't make scripts
THAT big, right? :-)
As for performance: I had a huge file (15MB) that I was running when I
started splitting on a batch terminator for Access. It was a whole bunch of
statements (180,000+). We let it run for about two hours before we finally killed
it. At the time we assumed it was something with Access.
I'm thinking it's possibly a combination of RH and the database. We may want to
explore using a string builder here instead of immutable strings. Let's log a
separate issue on that.
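To illustrate why the string builder idea could matter: concatenating immutable strings copies the entire accumulated text on every append, so building a script from n statements costs O(n^2) character copies, while a builder-style approach (collect pieces, join once) is O(n). RoundhousE is C#, so this is just a sketch of the concept in Python; neither function is RoundhousE's actual code.

```python
def concat_slow(statements):
    """Repeated += on an immutable string: each append copies
    everything built so far, so total work grows quadratically."""
    script = ""
    for s in statements:
        script += s + "\n"  # copies the whole accumulated script
    return script

def concat_fast(statements):
    """StringBuilder-style approach: accumulate pieces in a list
    and join once at the end, doing linear total work."""
    parts = []
    for s in statements:
        parts.append(s)
    return "\n".join(parts) + "\n"
```

Both produce the same output; the difference only shows up as the script grows into the multi-megabyte range discussed above.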
Splitting the file up is a good workaround. Right now if you load a 25MB file you
are increasing memory usage by at least that much, probably double. At some point RH
degrades significantly trying to handle it. Not sure exactly where that point is...
I'm thinking this is a lower priority because there is a proven way to work around it.
It is indeed lower priority.
Another workaround is to use a lot of splitters in the SQL file.
A big file with a whole bunch of inserts (initial data) is handled better if all statements are separated by GO statements.
That way RH can handle files up to 60MB on our development machines in a few minutes.
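The GO workaround works because GO is the T-SQL batch separator: a file peppered with GO lines gets processed as many small batches instead of one giant statement. A minimal sketch of that kind of splitting, in Python rather than RoundhousE's actual C# splitter, might look like this (the regex and function name are illustrative assumptions):

```python
import re

# Match a line that contains only "GO" (case-insensitive), which is how
# T-SQL marks a batch boundary. Anchored so "GO" inside a statement or
# identifier is not treated as a separator.
GO_LINE = re.compile(r"^\s*GO\s*$", re.IGNORECASE | re.MULTILINE)

def split_batches(script_text):
    """Split a script into individual batches on GO lines,
    dropping empty batches left by trailing separators."""
    batches = [b.strip() for b in GO_LINE.split(script_text)]
    return [b for b in batches if b]
```

Note the usual caveat with naive splitters: a "GO" that appears alone on a line inside a string literal or comment would still be treated as a separator, so a production splitter needs to be smarter than this sketch.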