While profiling memory usage I came across a performance bottleneck with resultsets that contain a large number of rows.
The bottleneck is not present with large resultsets in general (large in total bytes), but only with resultsets made of many rows.
For example, using a large table with 2M rows:
vagrant@ubuntu-14:~$ mysqlslap -u msandbox -pmsandbox -c 8 -i 5 -P6033 -h 127.0.0.1 --create-schema=sbtest -q "SELECT id FROM sbtest.longtable"
Warning: Using a password on the command line interface can be insecure.
Benchmark
Average number of seconds to run all queries: 16.755 seconds
Minimum number of seconds to run all queries: 15.146 seconds
Maximum number of seconds to run all queries: 21.020 seconds
Number of clients running queries: 8
Average number of queries per client: 1
According to perf, there is a clear bottleneck in memmove().
An educated guess is that the main offender is: