SQL Azure - transient error retry logging is too aggressive #542

Closed
tynorton opened this issue Apr 1, 2016 · 2 comments

tynorton commented Apr 1, 2016

We've recently adopted Hangfire to run a few jobs, and we use SQL Azure as the job storage subsystem.

As I'm sure you're aware, SQL Azure DBs have expected failure states as databases are transitioned around the cluster to balance load. We employ EntLib transient fault handling in our code, which works great.
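
For context, our handling looks roughly like the sketch below (EntLib 6 Transient Fault Handling block; the retry values and connection string are illustrative, not our production settings):

    using System;
    using System.Data.SqlClient;
    using Microsoft.Practices.EnterpriseLibrary.TransientFaultHandling;

    public static class ReliableSql
    {
        // Exponential back-off: 5 attempts, delays between 1 and 30 seconds, 2 second delta.
        private static readonly RetryPolicy Policy =
            new RetryPolicy<SqlDatabaseTransientErrorDetectionStrategy>(
                new ExponentialBackoff(5,
                    TimeSpan.FromSeconds(1),
                    TimeSpan.FromSeconds(30),
                    TimeSpan.FromSeconds(2)));

        public static void ExecuteNonQuery(string connectionString, string sql)
        {
            // The detection strategy classifies the documented SQL Azure
            // reconfiguration errors as transient, so the action is retried
            // instead of failing immediately.
            Policy.ExecuteAction(() =>
            {
                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(sql, connection))
                {
                    connection.Open();
                    command.ExecuteNonQuery();
                }
            });
        }
    }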

However, Hangfire appears to log these transient exceptions as errors during the transition states. We were surprised to see Hangfire errors suddenly appearing in our logs, lots of them.

The error is shown below.
We have worked around this issue by silencing ALL HangFire.* loggers in our application, but this feels heavy-handed.
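
For reference, a minimal sketch of one way to mute everything through Hangfire's own logging abstraction (Hangfire.Logging); the class names are illustrative, and this is only one possible approach:

    using System;
    using Hangfire.Logging;

    // Illustrative no-op provider: once registered, every Hangfire.* logger is silenced.
    public class SilentLogProvider : ILogProvider
    {
        public ILog GetLogger(string name)
        {
            return new SilentLog();
        }

        private class SilentLog : ILog
        {
            // Returning false reports the level as disabled, so Hangfire never
            // even builds the message string.
            public bool Log(LogLevel logLevel, Func<string> messageFunc, Exception exception = null)
            {
                return false;
            }
        }
    }

    // At startup:
    // LogProvider.SetCurrentLogProvider(new SilentLogProvider());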

Please adjust the log levels of the retry logging so that they do not emit LogLevel.Error.
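
To make the request concrete, here is a rough sketch of the kind of classification we have in mind; the error-number list and names are only illustrative, not a proposal for the actual implementation:

    using System;
    using System.Data.SqlClient;

    public static class TransientSqlErrors
    {
        // Illustrative subset of SQL Azure error numbers seen during
        // reconfiguration; a real list would be larger and configurable.
        private static readonly int[] KnownTransientNumbers = { 4060, 10928, 10929, 40197, 40501, 40613 };

        public static bool IsTransient(Exception exception)
        {
            var sqlException = exception as SqlException;
            if (sqlException == null) return false;

            foreach (SqlError error in sqlException.Errors)
            {
                if (Array.IndexOf(KnownTransientNumbers, error.Number) >= 0) return true;
            }

            return false;
        }
    }

    // Inside the retry loop, something like:
    //   logger.Log(TransientSqlErrors.IsTransient(ex) ? LogLevel.Warn : LogLevel.Error,
    //              () => "Execution will be retried...", ex);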

Error occurred during execution of 'Hangfire.SqlServer.CountersAggregator' process. Execution will be retried (attempt 10 of 2147483647) in 00:01:24 seconds.

System.Data.SqlClient.SqlException (0x80131904): Login failed for user 'sqladmin'.
   at System.Data.ProviderBase.DbConnectionPool.TryGetConnection(DbConnection owningObject, UInt32 waitForMultipleObjectsTimeout, Boolean allowCreate, Boolean onlyOneCheckConnection, DbConnectionOptions userOptions, DbConnectionInternal& connection)
   at System.Data.ProviderBase.DbConnectionPool.TryGetConnection(DbConnection owningObject, TaskCompletionSource`1 retry, DbConnectionOptions userOptions, DbConnectionInternal& connection)
   at System.Data.ProviderBase.DbConnectionFactory.TryGetConnection(DbConnection owningConnection, TaskCompletionSource`1 retry, DbConnectionOptions userOptions, DbConnectionInternal oldConnection, DbConnectionInternal& connection)
   at System.Data.ProviderBase.DbConnectionInternal.TryOpenConnectionInternal(DbConnection outerConnection, DbConnectionFactory connectionFactory, TaskCompletionSource`1 retry, DbConnectionOptions userOptions)
   at System.Data.ProviderBase.DbConnectionClosed.TryOpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory, TaskCompletionSource`1 retry, DbConnectionOptions userOptions)
   at System.Data.SqlClient.SqlConnection.TryOpenInner(TaskCompletionSource`1 retry)
   at System.Data.SqlClient.SqlConnection.TryOpen(TaskCompletionSource`1 retry)
   at System.Data.SqlClient.SqlConnection.Open()
   at Hangfire.SqlServer.SqlServerStorage.CreateAndOpenConnection()
   at Hangfire.SqlServer.SqlServerStorage.UseConnection[T](Func`2 func)
   at Hangfire.SqlServer.SqlServerStorage.UseConnection(Action`1 action)
   at Hangfire.SqlServer.CountersAggregator.Execute(CancellationToken cancellationToken)
   at Hangfire.Server.ServerProcessExtensions.Execute(IServerProcess process, BackgroundProcessContext context)
   at Hangfire.Server.AutomaticRetryProcess.Execute(BackgroundProcessContext context)
ClientConnectionId:6029471f-9c2e-41e9-bffa-16969c36da67
Error Number:18456,State:1,Class:14
ClientConnectionId before routing:7b0d84b5-7560-4f44-972d-a72b4dd87e28
Routing Destination:bede88e3d989.tr28.westeurope1-a.worker.database.windows.net,11156
   at System.Data.ProviderBase.DbConnectionPool.TryGetConnection(DbConnection owningObject, UInt32 waitForMultipleObjectsTimeout, Boolean allowCreate, Boolean onlyOneCheckConnection, DbConnectionOptions userOptions, DbConnectionInternal& connection)
   at System.Data.ProviderBase.DbConnectionPool.TryGetConnection(DbConnection owningObject, TaskCompletionSource`1 retry, DbConnectionOptions userOptions, DbConnectionInternal& connection)
   at System.Data.ProviderBase.DbConnectionFactory.TryGetConnection(DbConnection owningConnection, TaskCompletionSource`1 retry, DbConnectionOptions userOptions, DbConnectionInternal oldConnection, DbConnectionInternal& connection)
   at System.Data.ProviderBase.DbConnectionInternal.TryOpenConnectionInternal(DbConnection outerConnection, DbConnectionFactory connectionFactory, TaskCompletionSource`1 retry, DbConnectionOptions userOptions)
   at System.Data.ProviderBase.DbConnectionClosed.TryOpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory, TaskCompletionSource`1 retry, DbConnectionOptions userOptions)
   at System.Data.SqlClient.SqlConnection.TryOpenInner(TaskCompletionSource`1 retry)
   at System.Data.SqlClient.SqlConnection.TryOpen(TaskCompletionSource`1 retry)
   at System.Data.SqlClient.SqlConnection.Open()
   at Hangfire.SqlServer.SqlServerStorage.CreateAndOpenConnection()
   at Hangfire.SqlServer.SqlServerStorage.UseConnection[T](Func`2 func)
   at Hangfire.SqlServer.SqlServerStorage.UseConnection(Action`1 action)
   at Hangfire.SqlServer.CountersAggregator.Execute(CancellationToken cancellationToken)
   at Hangfire.Server.ServerProcessExtensions.Execute(IServerProcess process, BackgroundProcessContext context)
   at Hangfire.Server.AutomaticRetryProcess.Execute(BackgroundProcessContext context)

dandcg commented Apr 11, 2016

+1

@odinserj
Member

In the latest 1.7.0 betas there will be far fewer log messages in case of storage issues, and their number doesn't depend on the worker count, because they are grouped together. The Debug level will still show all of them, but Info level and above are much more modest now.
