Sequel doesn't seem to reconnect on "Mysql2::Error: MySQL server has gone away: BEGIN" error #368
Comments
This looks like a general bug in Sequel: it doesn't apply disconnect checking when using log_connection_execute, since by default that just calls an execute method directly on the connection. It should instead call another Database method, varying per adapter, that checks for disconnect errors. While that isn't the most complex change, it requires changing all adapters, so I probably won't have a patch available for a few days (Tuesday at the earliest).
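The pattern being described can be sketched in plain Ruby. This is an illustrative toy, not Sequel's actual implementation: the class names and the `disconnect_error?` hook are made up, but it shows the idea of routing raw driver errors through an adapter-overridable check before re-raising, so the pool can recognize dead connections.

```ruby
# Toy sketch of per-adapter disconnect detection (all names hypothetical).

class ToyDatabaseError < StandardError; end
class ToyDisconnectError < ToyDatabaseError; end

class ToyDatabase
  # An adapter would override this pattern with its own driver's messages.
  DISCONNECT_ERRORS = /\AMySQL server has gone away/

  # Instead of calling conn.execute directly, classify any raised error
  # so callers can tell a dropped connection from an ordinary SQL error.
  def log_connection_execute(conn, sql)
    conn.execute(sql)
  rescue StandardError => e
    klass = disconnect_error?(e) ? ToyDisconnectError : ToyDatabaseError
    raise klass, e.message
  end

  def disconnect_error?(exception)
    !!DISCONNECT_ERRORS.match(exception.message)
  end
end

# A stub connection that always fails like a dropped MySQL link:
dead_conn = Object.new
def dead_conn.execute(_sql)
  raise "MySQL server has gone away: BEGIN"
end

db = ToyDatabase.new
begin
  db.log_connection_execute(dead_conn, "BEGIN")
rescue ToyDisconnectError => e
  puts "disconnect detected: #{e.message}"
end
```

A pool holding such a database could then evict the connection on `ToyDisconnectError` instead of handing the broken connection back out.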
Sounds great, thanks!
I took another look at the code and came up with a way that isn't invasive and should solve the issue. Unfortunately, I won't be able to test most of it until Tuesday. If you could give the patch at http://pastie.org/2159769 a test and let me know if it works for you, I'd appreciate it. I plan on testing all the adapters I can with it on Tuesday, and hopefully committing it then if it has no regressions. I have tested this on PostgreSQL and SQLite, and found no regressions on either, but they aren't really affected (SQLite has no disconnect detection and PostgreSQL does it at the connection level).
I just tested it with the mysql2 adapter, and it seems to solve the problem. Thanks a lot!
I'm having an issue with the mysql2 adapter. Everything seems to run fine until I tried some stress testing of a Sinatra app running on MySQL with the Apache benchmark tool ab. After about 900 requests it just hangs and I have to do `touch tmp/restart` to let Passenger/Apache restart the process.

Here is some info on my test environment:

```
root@Debian-60-squeeze-64-LAMP:/var/www/viu2_json# rails --version
"Phusion Passenger" is a trademark of Hongli Lai & Ninh Bui.
sequel (3.26.0)
```

Here is my testing app.rb:

```ruby
require 'sinatra'
# these also work but all have the same problem of the sinatra app
# locking up when running the benchmark with concurrency:
# as recommended by the sequel wiki, use a constant:
class User < Sequel::Model
get '/' do
get '/getAuthToken' do
post '/getAuthToken' do
get '/users' do # json request, sequel
```

Here is my config.ru:

```ruby
root_dir = File.dirname(__FILE__)
disable :run
run Sinatra::Application
```

When I run this with Passenger/Apache2, all works at first:

```
Benchmarking viu.sitweb.eu (be patient)
```

So it just hangs after 991 requests. I loved the faster startup times vs ActiveRecord and would like to get this working. Kind regards and thanks in advance for any help or advice,
If this is really an issue with Sequel itself, it should be possible to create a self-contained test that doesn't rely on Passenger (I would prefer not to have to install Passenger to help you debug this issue). Especially since numerous other Sequel users use the mysql2 adapter without any problems, it seems unlikely to be a problem in general. If you can put together a self-contained test with just Sequel that I can use to replicate your issue, I'll be happy to reopen this ticket.
Made the app even smaller:

```ruby
require 'sinatra'
```

Ok, so when I just fire up: Basically, the question is how do you run the sequel gem in production, then? Again, this code above also locks up with Passenger. What are my other options? Or rather, what do you use to get good speeds? Thanks in advance for your time. Kind regards,
The first thing I'd try is changing:

```ruby
get '/' do
```

When you use User.all, you are loading the entire table into memory at once (User.all returns an array, not a proxy object). Also, you should only be selecting the columns that you are interested in. I'd try that first and see what the effect is.
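A sketch of that suggestion follows. The `users` table, the `id`/`name` columns, and the `DATABASE_URL` environment variable are placeholders, not taken from the app above; the point is selecting only needed columns and capping the row count instead of materializing the whole table:

```ruby
require 'sinatra'
require 'sequel'
require 'json'

# Placeholder connection string; substitute your own.
DB = Sequel.connect(ENV.fetch('DATABASE_URL'))

get '/users' do
  # User.all loads every row into an Array; a dataset with select/limit
  # only fetches what the response actually needs.
  DB[:users].select(:id, :name).limit(100).all.to_json
end
```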
Indeed, the get '/' route is not optimal. But even when running `ab -n 1000 -c 10 "http://localhost:4567/user/2"` I still have the same issue. I get about 30 req/sec, which is not in the same ballpark as the 4000 req/sec I get with Passenger+ActiveRecord on the same route and same DB. I can see that the Sequel version starts faster for the first request, so it should run even better once I figure out how to properly run it in production the way I can now with the sinatra-activerecord+Passenger version. There must be something else that trips it up.
You'll have to profile the code to see where it is slow. I usually use ruby-prof for that. I'm not sure what could account for a 100x difference, so I'd have to see the profile results before making further suggestions. |
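ruby-prof gives a full per-method breakdown; as a cheap first pass you can also bracket the suspect steps with Ruby's built-in Benchmark module (the timed blocks below are stand-ins for the route's real work, not the actual app code):

```ruby
require 'benchmark'

results = Benchmark.bm(10) do |bm|
  bm.report('query:')  { sleep 0.005 }                 # stand-in for the DB call
  bm.report('render:') { 50_000.times { |i| i.to_s } } # stand-in for serialization
end

# Benchmark.bm returns one Benchmark::Tms per report:
results.each { |tms| puts format('%.4fs real', tms.real) }
```

Whichever step dominates wall-clock time is the one worth profiling in depth with ruby-prof.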
This is great, you are so wonderful. You solved my biggest problem!
Sorry, I know this has been closed a while, but it's happening to me right now. Was there a regression, perhaps? |
sequel-4.34.0 here. |
Relevant backtrace
Sequel 4.34.0 is over a year old. Please try with the current version, and also try using mysql2 directly to make sure the error is in Sequel and not in the driver. If you think there is a bug in Sequel, please create a new issue with a minimal, self-contained, reproducible example.
Again, is there a possibility of a regression for this? It happened after a long (couple of hours) idle period. Sequel: 4.48.0. Backtrace:
I don't think so. If I do:

```ruby
DB.synchronize{|c| def c.query(*) raise ::Mysql2::Error, "MySQL server has gone away" end}
DB.get(1)
```

I get:
@Krule looks like you edited your comment after posting it. Just FYI, Sequel's behavior is to raise the error as a disconnect error, which removes the connection from the pool. It does not automatically retry the query, as it may not be safe to do so. You should probably use the connection_validator extension if you want to check connection validity before use, or something like a cron job to make sure connections are not left idle. |
@jeremyevans Yes, I have added more details as I investigated the problem and applied the fix, which happened to be increasing the connection pool and utilising the connection_validator extension. Thank you for your time, and sorry for the bogus "bug" report. I am really glad it's not a bug.
For anyone that gets down this far in the thread, see the connection_validator docs. |
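For reference, enabling the extension is only a couple of lines. The extension name and the `connection_validation_timeout` accessor are Sequel's documented API; the connection URL and the timeout value below are just examples:

```ruby
require 'sequel'

DB = Sequel.connect('mysql2://user:pass@localhost/mydb') # example URL

# Check connections before use if they have been idle too long,
# transparently replacing ones the server has already dropped.
DB.extension(:connection_validator)

# Seconds of idleness after which a connection is validated on checkout;
# -1 validates on every checkout (safest, small overhead per query).
DB.pool.connection_validation_timeout = 1800
```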
You can test that this is working correctly on MySQL by tuning the server's wait_timeout:

```sql
SET GLOBAL wait_timeout=5;
```

With this setting, idle connections will be closed after 5 seconds, which surfaces any connection handling issues very quickly. The normal default (MariaDB) is 28800 seconds, or 8 hours, meaning you'll hit this with any long-running process.
I might have missed some settings, but it seems like Sequel should reconnect on these errors:
```ruby
MYSQL_DATABASE_DISCONNECT_ERRORS.match(e.message)

MYSQL_DATABASE_DISCONNECT_ERRORS = /\A(Commands out of sync; you can't run this command now|Can't connect to local MySQL server through socket|MySQL server has gone away)/
```
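That regexp can be exercised on its own. Note the `\A` anchor: it only matches when the disconnect text starts the message, which is consistent with the message in the error trace below:

```ruby
MYSQL_DATABASE_DISCONNECT_ERRORS =
  /\A(Commands out of sync; you can't run this command now|Can't connect to local MySQL server through socket|MySQL server has gone away)/

puts MYSQL_DATABASE_DISCONNECT_ERRORS.match("MySQL server has gone away: BEGIN") ? "disconnect" : "other"
# → disconnect
puts MYSQL_DATABASE_DISCONNECT_ERRORS.match("Duplicate entry '1' for key 'PRIMARY'") ? "disconnect" : "other"
# → other
```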
I haven't been able to reproduce this in the dev environment, though. So I will try to upgrade the gems, but since database restarts don't happen often, it might take a while before it happens again.
```ruby
gem 'sequel', "3.22.0"
gem 'mysql2', "0.2.7"
```
And the full error trace:

```
FATAL - [03/Jul/2011 13:23:55] "Mysql2::Error: MySQL server has gone away: BEGIN"
ERROR - [03/Jul/2011 13:23:55] "Sequel::DatabaseError - Mysql2::Error: MySQL server has gone away:
sequel-3.22.0/lib/sequel/database/logging.rb:53:in `query'
sequel-3.22.0/lib/sequel/database/logging.rb:53:in `send'
sequel-3.22.0/lib/sequel/database/logging.rb:53:in `log_connection_execute'
sequel-3.22.0/lib/sequel/database/logging.rb:32:in `log_yield'
sequel-3.22.0/lib/sequel/database/logging.rb:53:in `log_connection_execute'
sequel-3.22.0/lib/sequel/adapters/shared/mysql.rb:158:in `begin_new_transaction'
sequel-3.22.0/lib/sequel/database/query.rb:274:in `begin_transaction'
sequel-3.22.0/lib/sequel/adapters/shared/mysql.rb:168:in `begin_transaction'
sequel-3.22.0/lib/sequel/database/query.rb:222:in `_transaction'
sequel-3.22.0/lib/sequel/database/query.rb:209:in `transaction'
sequel-3.22.0/lib/sequel/connection_pool/threaded.rb:84:in `hold'
sequel-3.22.0/lib/sequel/database/connecting.rb:226:in `synchronize'
sequel-3.22.0/lib/sequel/database/query.rb:207:in `transaction'
```