
Sequel doesn't seem to reconnect on "Mysql2::Error: MySQL server has gone away: BEGIN" error #368

Closed
PatrickLef opened this issue Jul 3, 2011 · 21 comments


@PatrickLef

I might have missed some settings, but it seems like Sequel should reconnect on these errors:

MYSQL_DATABASE_DISCONNECT_ERRORS.match(e.message)
MYSQL_DATABASE_DISCONNECT_ERRORS = /\A(Commands out of sync; you can't run this command now|Can't connect to local MySQL server through socket|MySQL server has gone away)/
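The `\A` anchoring of that regex matters: in isolation it matches the bare driver message but not a message with a prefix. A standalone check (plain Ruby, no database needed; this only demonstrates the regex's behavior, not which message Sequel actually sees):

```ruby
# The disconnect-detection regex quoted above, tested directly:
MYSQL_DATABASE_DISCONNECT_ERRORS = /\A(Commands out of sync; you can't run this command now|Can't connect to local MySQL server through socket|MySQL server has gone away)/

# Matches the raw driver message:
MYSQL_DATABASE_DISCONNECT_ERRORS.match('MySQL server has gone away')

# Does not match once a prefix is prepended, because of the \A anchor:
MYSQL_DATABASE_DISCONNECT_ERRORS.match('Mysql2::Error: MySQL server has gone away: BEGIN')
```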

I haven't been able to reproduce this in the dev environment, though. I will try upgrading the gems, but since database restarts don't happen very often, it might take a while before the error occurs again.

gem 'sequel', "3.22.0"
gem 'mysql2', "0.2.7"

And full error trace:

FATAL - [03/Jul/2011 13:23:55] "Mysql2::Error: MySQL server has gone away: BEGIN"
ERROR - [03/Jul/2011 13:23:55] "Sequel::DatabaseError - Mysql2::Error: MySQL server has gone away:
sequel-3.22.0/lib/sequel/database/logging.rb:53:in `query'
sequel-3.22.0/lib/sequel/database/logging.rb:53:in `send'
sequel-3.22.0/lib/sequel/database/logging.rb:53:in `log_connection_execute'
sequel-3.22.0/lib/sequel/database/logging.rb:32:in `log_yield'
sequel-3.22.0/lib/sequel/database/logging.rb:53:in `log_connection_execute'
sequel-3.22.0/lib/sequel/adapters/shared/mysql.rb:158:in `begin_new_transaction'
sequel-3.22.0/lib/sequel/database/query.rb:274:in `begin_transaction'
sequel-3.22.0/lib/sequel/adapters/shared/mysql.rb:168:in `begin_transaction'
sequel-3.22.0/lib/sequel/database/query.rb:222:in `_transaction'
sequel-3.22.0/lib/sequel/database/query.rb:209:in `transaction'
sequel-3.22.0/lib/sequel/connection_pool/threaded.rb:84:in `hold'
sequel-3.22.0/lib/sequel/database/connecting.rb:226:in `synchronize'
sequel-3.22.0/lib/sequel/database/query.rb:207:in `transaction'

@jeremyevans
Owner

This looks like a general bug in Sequel, that it doesn't apply the disconnect checking when using log_connection_execute, since by default it just issues an execute method directly on the connection. It should instead call another Database method that varies per adapter which will check for disconnect errors. While that isn't the most complex change, it will require changing all adapters, so I probably won't have a patch available for a few days (Tuesday at the earliest).
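The shape of the fix described above can be sketched roughly like this. All names here are hypothetical illustrations, not Sequel's real internals: instead of calling the driver directly, SQL goes through an adapter-overridable method that translates recognized driver errors into a disconnect error the connection pool can act on.

```ruby
# Hypothetical illustration only -- these class/module/method names are
# invented for this sketch and do not match Sequel's actual code.
class DisconnectError < StandardError; end

module MysqlStyleAdapter
  DISCONNECT_PATTERNS = [
    /\AMySQL server has gone away/,
    /\ACommands out of sync/
  ].freeze

  # Per-adapter hook: run SQL on the raw connection, translating driver
  # errors whose message looks like a lost connection into DisconnectError,
  # so the pool can drop the dead connection instead of reusing it.
  def checked_execute(conn, sql)
    conn.query(sql)
  rescue StandardError => e
    raise DisconnectError, e.message if DISCONNECT_PATTERNS.any? { |p| p =~ e.message }
    raise
  end
end
```

The point of routing `log_connection_execute` through a method like `checked_execute` is that each adapter can supply its own disconnect patterns while the transaction code stays generic.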

@PatrickLef
Author

Sounds great, thanks!

@jeremyevans
Owner

I took another look at the code and came up with a way that isn't invasive and should solve the issue. Unfortunately, I won't be able to test most of it until Tuesday. If you could give the patch at http://pastie.org/2159769 a test and let me know if it works for you, I'd appreciate it. I plan on testing all the adapters I can with it on Tuesday, and hopefully committing it then if it has no regressions. I have tested this on PostgreSQL and SQLite, and found no regressions on either, but they aren't really affected (SQLite has no disconnect detection and PostgreSQL does it at the connection level).

@PatrickLef
Author

I just tested it with the mysql2 adapter, and it seems to solve the problem. Thanks a lot!

@w-A-L-L-e

I'm having an issue with the mysql2 adapter. Everything seems to run fine until I try some stress testing on a Sinatra app running on MySQL with the Apache benchmark tool ab.

After about 900 requests it just hangs, and I have to do touch tmp/restart to make passenger/apache restart the process.
When I change the gem from sequel to sinatra-activerecord, everything works fine. How do I best tackle this issue?

Here is some info on my test environment:

root@Debian-60-squeeze-64-LAMP:/var/www/viu2_json# rails --version
Rails 3.0.9
root@Debian-60-squeeze-64-LAMP:/var/www/viu2_json# passenger --version
Phusion Passenger version 3.0.7

"Phusion Passenger" is a trademark of Hongli Lai & Ninh Bui.
root@Debian-60-squeeze-64-LAMP:/var/www/viu2_json# mysql --version
mysql Ver 14.14 Distrib 5.1.49, for debian-linux-gnu (x86_64) using readline 6.1

sequel (3.26.0)
sinatra (1.2.6)
sinatra-activerecord (0.1.3)
mysql2 (0.2.11)
json (1.5.4)

Here is my testing app.rb:

require 'sinatra'
require 'mysql2'
require 'sequel'
require 'yaml'
require 'json'

#these also work but all have the same problem of the sinatra app locking up when running benchmark with concurrency :
#set :database, Sequel.connect( "mysql2://root:aaaa5M@localhost/ScaleChampion_development", :max_connections=>5, :socket=>'/var/run/mysqld/mysqld.sock', :encoding=>'utf8' )
#set :database, Sequel.connect( "mysql2://root:aa5M@localhost/ScaleChampion_development", :max_connections=>5, :encoding=>'utf8' )

#as recommended by the sequel wiki use a constant:
DB = Sequel.connect( "mysql2://root:thepassword@localhost/ScaleChampion_development", :max_connections=>5, :encoding=>'utf8' )

class User < Sequel::Model
end

get '/' do
  erb :getAuthToken
end

get '/getAuthToken' do
  erb :getAuthToken
end

post '/getAuthToken' do
  erb :getAuthToken
end

get '/users' do
  content_type :json
  User.all.map{|u|
    { :id=>u.id, :name => u.name, :email => u[:email] }.to_json
  }
end

#json request sequel
get '/user/:id' do
  u = User.find( params[:id] )
  content_type :json
  { :name => u.name, :email => u.email }.to_json
end

Here is my config.ru:
require './app.rb'

root_dir = File.dirname(__FILE__)
set :environment, ENV['RACK_ENV'].to_sym
set :root, root_dir
set :app_file, File.join(root_dir, 'app.rb')

disable :run

run Sinatra::Application

When I run this with Passenger/apache2, all works, except when I try to benchmark it with ab I get the following:
root@Debian-60-squeeze-64-LAMP:/var/www/viu2_json# ab -n 1000 -c 10 'http://viu.sitweb.eu/user/100'
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking viu.sitweb.eu (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
apr_poll: The timeout specified has expired (70007)
Total of 991 requests completed

So it just hangs after 991 requests.

I loved the faster startup times vs activerecord and would like to get this working.

Kind regards, and thanks in advance for any help or advice,
W.

@jeremyevans
Owner

If this is really an issue with Sequel itself, it should be possible to create a self contained test that doesn't rely on passenger (I would prefer not to have to install passenger to help you debug this issue). Especially since numerous other Sequel users use the mysql2 adapter without any problems, it seems unlikely to be a problem in general. If you can put together a self contained test with just Sequel that I can use to replicate your issue, I'll be happy to reopen this ticket.

@w-A-L-L-e

Made the app even smaller:

require 'sinatra'
require 'mysql2'
require 'sequel'
require 'yaml'
require 'json'

#these also work but all have the same problem of the sinatra app locking up when running benchmark with concurrency :
#set :database, Sequel.connect( "mysql2://root:aaaa5M@localhost/ScaleChampion_development", :max_connections=>5, :socket=>'/var/run/mysqld/mysqld.sock', :encoding=>'utf8' )
#set :database, Sequel.connect( "mysql2://root:aa5M@localhost/ScaleChampion_development", :max_connections=>5, :encoding=>'utf8' )

#as recommended by the sequel wiki use a constant:
DB = Sequel.connect( "mysql2://root:themysqlpass@localhost/ScaleChampion_development", :max_connections=>5, :encoding=>'utf8' )

class User < Sequel::Model
end

get '/' do
  content_type :json
  User.all.map{|u|
    { :id=>u.id, :name => u.name, :email => u[:email] }.to_json
  }
end

get '/users' do
  content_type :json
  User.all.map{|u|
    { :id=>u.id, :name => u.name, :email => u[:email] }.to_json
  }
end

#json request sequel
get '/user/:id' do
  u = User.find( params[:id] )
  content_type :json
  { :name => u.name, :email => u.email }.to_json
end

#run this:
#ab -n 1000 -c 10 "http://localhost:4567/"
#does not hang per se but only gets us 6req/sec ...

OK, so when I just fire up:
ruby app.rb
I can get a max of 6 req/sec. With passenger/activerecord on the same app, same route, same server, same mysql I get 4000+ req/sec.

Basically, the question is: how do you run the sequel gem in production, given that the code above also locks up under passenger? What are my other options? Or rather, what do you use to get good speeds?

Thanks in advance for your time.

Kind regards,
W.

@jeremyevans
Owner

The first thing I'd try is:

get '/' do
  content_type :json
  User.select(:id, :name, :email).map{|u|
    { :id=>u.id, :name => u.name, :email => u[:email] }.to_json
  }
end

When you use User.all, you are taking the entire table into memory at the same time (User.all returns an array, not a proxy object). Also, you should only be selecting the columns that you are interested in. I'd try that first and see what the effect is.
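As a side note on that handler (unrelated to throughput): mapping `to_json` over each row returns an array of separate JSON strings, which Sinatra concatenates into output that is not itself valid JSON; serializing the whole array once avoids that. A standalone illustration in plain Ruby, no database involved:

```ruby
require 'json'

rows = [{ id: 1, name: 'a' }, { id: 2, name: 'b' }]

# to_json per element: an Array of separate JSON strings,
# which the web framework will simply concatenate.
per_row = rows.map { |r| r.to_json }

# to_json once on the whole array: a single valid JSON document.
whole = rows.to_json

puts per_row.inspect
puts whole
```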

@w-A-L-L-e

Indeed, the get '/' route is not optimal.
The get '/user/:id' route is fine, though.

Still, even running ab -n 1000 -c 10 "http://localhost:4567/user/2"

I have the same issue. I get about 30 req/sec, which is not in the same ballpark as the 4000 req/sec I get with passenger+activerecord on the same route and same db.

I can see that the sequel version starts faster for the first request, so it should run even better once I figure out how to properly run it in production the way I can now with the sinatra-activerecord+passenger version. There must be something else tripping it up.
I saw you posted some benchmarks for Sequel where you state it is faster on PostgreSQL; unfortunately that is not an option for me, since the MySQL database is already used by 4 other daemons in our project...

@jeremyevans
Owner

You'll have to profile the code to see where it is slow. I usually use ruby-prof for that. I'm not sure what could account for a 100x difference, so I'd have to see the profile results before making further suggestions.
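Before reaching for full ruby-prof output, a quick first pass with the stdlib Benchmark module can at least separate serialization cost from everything else. A sketch: the rows here are in-memory stand-ins for the real Sequel query, so only the relative numbers are meaningful.

```ruby
require 'benchmark'
require 'json'

# Stand-in rows; in the real app this would be the result of the
# Sequel query from the /user/:id or /users route.
rows = Array.new(1_000) { |i| { id: i, name: "user#{i}", email: "u#{i}@example.com" } }

# Time just the JSON serialization step in isolation.
json_time = Benchmark.realtime do
  rows.map { |r| r.to_json }
end

puts format('serialized %d rows in %.4fs', rows.length, json_time)
```

If serialization is cheap here but the route is still slow, the bottleneck is elsewhere (query, connection handling, or the app server), which is where ruby-prof's per-method breakdown becomes useful.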

@iamhabbeboy

This is great, you are so wonderful. You solved my biggest problem!

@bougyman
Contributor

bougyman commented Jun 6, 2017

Sorry, I know this has been closed for a while, but it's happening to me right now. Was there a regression, perhaps?

@bougyman
Contributor

bougyman commented Jun 6, 2017

sequel-4.34.0 here.

@bougyman
Contributor

bougyman commented Jun 6, 2017

Relevant backtrace

Mysql2::Error: MySQL server has gone away
ruby2.1.x/lib/ruby/gems/2.1.0/gems/mysql2-0.4.4/lib/mysql2/client.rb:107:in `_query'
ruby2.1.x/lib/ruby/gems/2.1.0/gems/mysql2-0.4.4/lib/mysql2/client.rb:107:in `block in query'
ruby2.1.x/lib/ruby/gems/2.1.0/gems/mysql2-0.4.4/lib/mysql2/client.rb:106:in `handle_interrupt'
ruby2.1.x/lib/ruby/gems/2.1.0/gems/mysql2-0.4.4/lib/mysql2/client.rb:106:in `query'
ruby2.1.x/lib/ruby/gems/2.1.0/gems/sequel-4.34.0/lib/sequel/adapters/mysql2.rb:80:in `block in _execute'
ruby2.1.x/lib/ruby/gems/2.1.0/gems/sequel-4.34.0/lib/sequel/database/logging.rb:35:in `log_yield'
ruby2.1.x/lib/ruby/gems/2.1.0/gems/sequel-4.34.0/lib/sequel/adapters/mysql2.rb:80:in `_execute'
ruby2.1.x/lib/ruby/gems/2.1.0/gems/sequel-4.34.0/lib/sequel/adapters/shared/mysql_prepared_statements.rb:36:in `block in execute'
ruby2.1.x/lib/ruby/gems/2.1.0/gems/sequel-4.34.0/lib/sequel/database/connecting.rb:251:in `block in synchronize'
ruby2.1.x/lib/ruby/gems/2.1.0/gems/sequel-4.34.0/lib/sequel/connection_pool/threaded.rb:105:in `hold'
ruby2.1.x/lib/ruby/gems/2.1.0/gems/sequel-4.34.0/lib/sequel/database/connecting.rb:251:in `synchronize'
ruby2.1.x/lib/ruby/gems/2.1.0/gems/sequel-4.34.0/lib/sequel/adapters/shared/mysql_prepared_statements.rb:36:in `execute'
ruby2.1.x/lib/ruby/gems/2.1.0/gems/sequel-4.34.0/lib/sequel/dataset/actions.rb:965:in `execute'
ruby2.1.x/lib/ruby/gems/2.1.0/gems/sequel-4.34.0/lib/sequel/adapters/mysql2.rb:197:in `execute'
ruby2.1.x/lib/ruby/gems/2.1.0/gems/sequel-4.34.0/lib/sequel/adapters/mysql2.rb:156:in `fetch_rows'
ruby2.1.x/lib/ruby/gems/2.1.0/gems/sequel-4.34.0/lib/sequel/dataset/actions.rb:846:in `with_sql_each'
ruby2.1.x/lib/ruby/gems/2.1.0/gems/sequel-4.34.0/lib/sequel/dataset/actions.rb:856:in `with_sql_first'
ruby2.1.x/lib/ruby/gems/2.1.0/gems/sequel-4.34.0/lib/sequel/dataset/actions.rb:664:in `single_record!'
ruby2.1.x/lib/ruby/gems/2.1.0/gems/sequel-4.34.0/lib/sequel/dataset/actions.rb:652:in `single_record'
ruby2.1.x/lib/ruby/gems/2.1.0/gems/sequel-4.34.0/lib/sequel/dataset/actions.rb:194:in `first'
ruby2.1.x/lib/ruby/gems/2.1.0/gems/sequel-4.34.0/lib/sequel/model/base.rb:486:in `first'

@jeremyevans
Owner

Sequel 4.34.0 is over a year old, please try with the current version, and also try using mysql2 directly to make sure the error is in Sequel and not in the driver. If you think there is a bug in Sequel, please create a new issue with a minimal, self-contained, reproducible example.

@Krule

Krule commented Jul 18, 2017

Again, is there a possibility of a regression here? It happened after a long (couple of hours) idle period.

Sequel: 4.48.0
mysql2: 0.4.8

Backtrace:

Mysql2::Error: MySQL server has gone away
/var/bundle/ruby/2.4.0/gems/mysql2-0.4.7/lib/mysql2/client.rb:120:in `_query'
/var/bundle/ruby/2.4.0/gems/mysql2-0.4.7/lib/mysql2/client.rb:120:in `block in query'
/var/bundle/ruby/2.4.0/gems/mysql2-0.4.7/lib/mysql2/client.rb:119:in `handle_interrupt'
/var/bundle/ruby/2.4.0/gems/mysql2-0.4.7/lib/mysql2/client.rb:119:in `query'
/var/bundle/ruby/2.4.0/gems/sequel-4.48.0/lib/sequel/adapters/mysql2.rb:141:in `block in _execute'
/var/bundle/ruby/2.4.0/gems/sequel-4.48.0/lib/sequel/database/logging.rb:45:in `log_connection_yield'
/var/bundle/ruby/2.4.0/gems/sequel-4.48.0/lib/sequel/adapters/mysql2.rb:136:in `_execute'
/var/bundle/ruby/2.4.0/gems/sequel-4.48.0/lib/sequel/adapters/utils/mysql_mysql2.rb:37:in `block in execute'
/var/bundle/ruby/2.4.0/gems/sequel-4.48.0/lib/sequel/database/connecting.rb:301:in `block in synchronize'
/var/bundle/ruby/2.4.0/gems/sequel-4.48.0/lib/sequel/connection_pool/threaded.rb:107:in `hold'
/var/bundle/ruby/2.4.0/gems/sequel-4.48.0/lib/sequel/database/connecting.rb:301:in `synchronize'
/var/bundle/ruby/2.4.0/gems/sequel-4.48.0/lib/sequel/adapters/utils/mysql_mysql2.rb:37:in `execute'
/var/bundle/ruby/2.4.0/gems/sequel-4.48.0/lib/sequel/dataset/actions.rb:1081:in `execute'
/var/bundle/ruby/2.4.0/gems/sequel-4.48.0/lib/sequel/adapters/mysql2.rb:291:in `execute'
/var/bundle/ruby/2.4.0/gems/sequel-4.48.0/lib/sequel/adapters/mysql2.rb:254:in `fetch_rows'
/var/bundle/ruby/2.4.0/gems/sequel-4.48.0/lib/sequel/dataset/actions.rb:942:in `with_sql_each'
/var/bundle/ruby/2.4.0/gems/sequel-4.48.0/lib/sequel/dataset/actions.rb:950:in `with_sql_first'
/var/bundle/ruby/2.4.0/gems/sequel-4.48.0/lib/sequel/dataset/actions.rb:748:in `single_record!'
/var/bundle/ruby/2.4.0/gems/sequel-4.48.0/lib/sequel/dataset/actions.rb:736:in `single_record'
/var/bundle/ruby/2.4.0/gems/sequel-4.48.0/lib/sequel/dataset/actions.rb:212:in `first'

@jeremyevans
Owner

I don't think so. If I do:

DB.synchronize{|c| def c.query(*) raise ::Mysql2::Error, "MySQL server has gone away" end}
DB.get(1)

I get: Sequel::DatabaseDisconnectError: Mysql2::Error: MySQL server has gone away, so it looks like the disconnection is recognized. As I stated previously, if you think there is a bug here, please create a new issue with a minimal, self-contained, reproducible example.

@jeremyevans
Owner

@Krule looks like you edited your comment after posting it. Just FYI, Sequel's behavior is to raise the error as a disconnect error, which removes the connection from the pool. It does not automatically retry the query, as it may not be safe to do so. You should probably use the connection_validator extension if you want to check connection validity before use, or something like a cron job to make sure connections are not left idle.
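For reference, enabling that extension is a one-time setup on the Database object. A configuration sketch, where the connection string is a placeholder:

```ruby
require 'sequel'

# Placeholder connection string -- substitute your real credentials.
DB = Sequel.connect('mysql2://user:pass@localhost/mydb')

# Check connections that have sat idle before handing them out of the pool.
DB.extension(:connection_validator)

# Validate any connection idle for more than an hour (3600s is the default);
# set to -1 to validate on every checkout, at a small per-query cost.
DB.pool.connection_validation_timeout = 3600
```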

@Krule

Krule commented Jul 18, 2017

@jeremyevans Yes, I added more details as I investigated the problem and applied the fix, which turned out to be increasing the connection pool size and using connection_validator to check connections periodically.

Thank you for your time, and sorry for the bogus "bug" report. I am really glad it's not a bug.

@blaskovicz

blaskovicz commented Feb 13, 2018

For anyone who gets this far down in the thread, see the connection_validator docs.

@tadman

tadman commented Sep 22, 2021

You can test that this is working correctly on MySQL by tuning the wait_timeout to a very low value, like:

SET GLOBAL wait_timeout=5;

With that setting, idle connections are closed after 5 seconds, which surfaces any connection-handling issues very quickly. The normal default (MariaDB) is 28800s, or 8 hours, meaning you'll eventually hit this with any long-running process.
