tableCreate seems extremely slow #4746
Is this a single or multi server cluster? |
single, localhost, default install |
This is expected at the moment (at least I can reproduce it). |
This just broke all of my tests after upgrading. Had to change the default timeout to be over 2s which of course makes the full set of tests take a long time to finish. This is not a production issue but is causing pain in development. Some calls take as long as 3.5s. |
@hueniverse I highly recommend emptying tables instead of dropping/creating them for each test run. Your tests will run much much faster. |
@chrisvariety Yep. Already done :-) outmoded/penseur@ae73059 |
@chrisvariety @hueniverse this is only recently the case. Prior to 2.1 it was always much faster to drop and recreate databases/tables than it would be to call |
@marshall007 Our overall experience, really since 1.12, is that truncation is a lot faster than dropping/creating. We've always seen slow-ish table creation times, but our testing datasets/fixtures are very small, so your mileage may vary 😄 |
This is definitely very annoying. We'll try to optimize this in 2.3. Sorry about this one, everyone -- we needed to get failover out, and this particular performance regression slipped through the cracks (since the entire table creation routine had to be rewritten). |
is there a set release schedule to predict when 2.3 will land? |
We shoot for releases that are ~8 weeks apart (sometimes they come out faster, sometimes it takes longer, but roughly this is the ideal time interval between releases). |
Just want to let those interested know that this is still happening as of v2.2.1

Edit: truncate workaround (coffeescript):

```coffeescript
# truncate all tables
r.dbList()
  .contains DB
  .do (result) ->
    r.branch result,
      r.tableList()
        .forEach( (table) ->
          r.table(table).delete()
        ),
      {dropped: 0}
  .run()

# drop all indexes
r.dbList()
  .contains DB
  .do (result) ->
    r.branch result,
      r.tableList()
        .forEach( (table) ->
          r.table(table).indexList().forEach (index) ->
            r.table(table).indexDrop index
        ),
      {dropped: 0}
  .run()
```
|
It is much faster to truncate your tables between tests instead of drop/create them. |
Feels wrong though, when you're testing table creation/deletion itself. |
This definitely needs to get fixed -- there are obvious ways to get around it (e.g. delete all the data in existing tables), but if nothing else, this is incredibly annoying. We'll look into it ASAP. |
Yeah, truncating between tests is a non-starter. If you have multiple files testing against a rethink instance, and you don't want to leave any mess behind when you run your tests, then you'll incur the 2-3s penalty for at least every file/suite you run. In my case that's at least 30s of dead time for each full test run. |
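A quick back-of-the-envelope check of that figure (the file count below is a hypothetical illustration; the thread only states "at least 30s"):

```javascript
// Each test file that drops/creates tables pays the 2-3 s tableCreate
// penalty at least once, so dead time scales with the number of files.
// fileCount is an assumed value for illustration, not from the thread.
const perFileDelaySec = [2, 3]; // observed tableCreate latency range
const fileCount = 15;           // hypothetical number of test files/suites
const deadTimeSec = perFileDelaySec.map((s) => s * fileCount);
console.log(deadTimeSec); // [30, 45] -> at least ~30 s of dead time per run
```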
@coffeemug -- should we move it to 2.3 polish then? |
I actually think this should be in 2.3. This isn't a showstopper, but it's immediately visible to literally every user, and is very frustrating (especially when testing). We have a good track record of fixing simple issues in polish, but more complicated ones tend to get punted, and I think this one we should actually fix asap. |
Fair enough. Let's try to fix this. |
Tracked this down. After adjusting some timers to be faster for better measurement, I saw table create delays of 1.3 to 2.3 seconds (NB: never faster), uniformly distributed (plus a peak of slower ones due to IO spikes on the machine I was testing on). The majority of the time (precisely 1.0 to 2.0 seconds of it) is caused by the raft election timeout: when we create a table, we create a new raft cluster for it, and any member joining a raft cluster (including on its initial creation) waits out a randomized election timeout before proceeding.

There's still another 300ms or so of work being done, which is more than I'd like, but just adding a fast path for the raft code will get us almost an order of magnitude speedup on non-distributed table creation. (I have yet to figure out the implementation, and depending on what it is, it might speed up distributed table creation too.)
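As an illustration of how a uniformly randomized election timeout produces exactly this latency profile (a sketch only, not RethinkDB's actual raft implementation):

```javascript
// Sketch: pick an election timeout uniformly in [minMs, maxMs). The
// 1000-2000 ms window matches the uniformly distributed 1.0-2.0 s
// component measured above; the fixed 300 ms stands in for the remaining
// table-creation work. Illustrative only, not RethinkDB's actual code.
function electionTimeoutMs(minMs, maxMs) {
  return minMs + Math.random() * (maxMs - minMs);
}

function simulatedCreateDelayMs() {
  return electionTimeoutMs(1000, 2000) + 300; // timeout + other work
}

// Sample the distribution to confirm its bounds.
const samples = Array.from({ length: 10000 }, simulatedCreateDelayMs);
const lo = Math.min(...samples);
const hi = Math.max(...samples);
console.log(lo >= 1300 && hi < 2300); // true: delays fall in [1.3 s, 2.3 s)
```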
@encryptio my hero 👍 |
👍 |
In CR 3405 with @VeXocide |
Typical table create times are down to 300-500ms. I still think we can improve this further with more effort (<50ms seems like a good goal), but this change brings us back on par with the pre-raft code. Our own |
@encryptio fantastic, thanks for the fix! |
qq: I saw a number of bug fixes made it into 2.2.4 (including features, such as map/filter support being added to system table change feeds), why didn't this make it into that release? |
@mbroadst 2.2.4 was primarily a bug fix release, so we didn't include this optimization there. This is simply about being conservative and minimizing the risk of regressions in such releases. The improved table creation performance will ship in RethinkDB 2.3.0. |
Could this degrade again somehow?

```js
let D = require("rethinkdb")
let co = require("co")

co(function* () {
  let conn = yield D.connect({
    host: process.env.RETHINKDB_HOST,
    port: parseInt(process.env.RETHINKDB_PORT),
    db: process.env.RETHINKDB_DB,
  })
  try { yield D.tableCreate("foo").run(conn) } catch (e) { }
  try { yield D.tableCreate("bar").run(conn) } catch (e) { }
  try { yield D.tableCreate("bazz").run(conn) } catch (e) { }
  try { yield D.tableDrop("foo").run(conn) } catch (e) { }
  try { yield D.tableDrop("bar").run(conn) } catch (e) { }
  try { yield D.tableDrop("bazz").run(conn) } catch (e) { }
})
```

```
$ brew info rethinkdb
rethinkdb: stable 2.3.4 (bottled)
```
|
@ivan-kleshnin It looks fine on 2.3.5 for me. (~400-600ms for a table creation). What hardware are you running this on? Is there any other load on the system? |
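For anyone wanting to reproduce these numbers, here is a minimal Node.js timing harness one could wrap around a driver call; `work` would be something like `() => D.tableCreate("foo").run(conn)` against a real server, but a dummy delay stands in below so the sketch is self-contained:

```javascript
// Measure how long an async operation takes, in milliseconds.
async function timeMs(work) {
  const start = process.hrtime.bigint();
  await work();
  return Number(process.hrtime.bigint() - start) / 1e6;
}

// Dummy stand-in for a real tableCreate call, so this runs without a server.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

timeMs(() => sleep(50)).then((elapsed) => {
  console.log(elapsed >= 50); // true: the dummy call took at least 50 ms
});
```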
@danielmewes it was my work Mac mini, a pretty normal setup, without load. However, those freezes were not constant, so I can only guess what's really going on.
As a bit of a side note that might help: rather than using Homebrew, I would generally recommend our .pkg as the best source of RethinkDB for macOS. You are mostly getting the same product, but we can do more testing on what we produce directly than on something we only have a hand in.
I doubt it will improve with 2.3.5, since nothing in this respect has changed since 2.3.4. 2.3.5 should already be available on Homebrew btw. |
"benchmark"
results
average: 2504.55ms
info
Is this latency expected?