We've recently adopted Hangfire to run a few jobs, and we use SQL Azure as the job storage subsystem.
As I'm sure you're aware, SQL Azure databases have expected failure states while they are moved around the cluster to balance load. We employ EntLib transient fault handling in our own code, which works great.
However, during these transitions Hangfire logs these transient exceptions at Error level. We were surprised to see large numbers of Hangfire errors suddenly appearing in our logs.
The error is seen below.
We have worked around this by silencing ALL Hangfire.* loggers in our application, but this feels heavy-handed.
Please adjust the log level of the retry logging so that transient retries are not emitted at LogLevel.Error.
Error occurred during execution of 'Hangfire.SqlServer.CountersAggregator' process. Execution will be retried (attempt 10 of 2147483647) in 00:01:24 seconds.
System.Data.SqlClient.SqlException (0x80131904): Login failed for user 'sqladmin'.
at System.Data.ProviderBase.DbConnectionPool.TryGetConnection(DbConnection owningObject, UInt32 waitForMultipleObjectsTimeout, Boolean allowCreate, Boolean onlyOneCheckConnection, DbConnectionOptions userOptions, DbConnectionInternal& connection)
at System.Data.ProviderBase.DbConnectionPool.TryGetConnection(DbConnection owningObject, TaskCompletionSource`1 retry, DbConnectionOptions userOptions, DbConnectionInternal& connection)
at System.Data.ProviderBase.DbConnectionFactory.TryGetConnection(DbConnection owningConnection, TaskCompletionSource`1 retry, DbConnectionOptions userOptions, DbConnectionInternal oldConnection, DbConnectionInternal& connection)
at System.Data.ProviderBase.DbConnectionInternal.TryOpenConnectionInternal(DbConnection outerConnection, DbConnectionFactory connectionFactory, TaskCompletionSource`1 retry, DbConnectionOptions userOptions)
at System.Data.ProviderBase.DbConnectionClosed.TryOpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory, TaskCompletionSource`1 retry, DbConnectionOptions userOptions)
at System.Data.SqlClient.SqlConnection.TryOpenInner(TaskCompletionSource`1 retry)
at System.Data.SqlClient.SqlConnection.TryOpen(TaskCompletionSource`1 retry)
at System.Data.SqlClient.SqlConnection.Open()
at Hangfire.SqlServer.SqlServerStorage.CreateAndOpenConnection()
at Hangfire.SqlServer.SqlServerStorage.UseConnection[T](Func`2 func)
at Hangfire.SqlServer.SqlServerStorage.UseConnection(Action`1 action)
at Hangfire.SqlServer.CountersAggregator.Execute(CancellationToken cancellationToken)
at Hangfire.Server.ServerProcessExtensions.Execute(IServerProcess process, BackgroundProcessContext context)
at Hangfire.Server.AutomaticRetryProcess.Execute(BackgroundProcessContext context)
ClientConnectionId:6029471f-9c2e-41e9-bffa-16969c36da67
Error Number:18456,State:1,Class:14
ClientConnectionId before routing:7b0d84b5-7560-4f44-972d-a72b4dd87e28
Routing Destination:bede88e3d989.tr28.westeurope1-a.worker.database.windows.net,11156
at System.Data.ProviderBase.DbConnectionPool.TryGetConnection(DbConnection owningObject, UInt32 waitForMultipleObjectsTimeout, Boolean allowCreate, Boolean onlyOneCheckConnection, DbConnectionOptions userOptions, DbConnectionInternal& connection)
at System.Data.ProviderBase.DbConnectionPool.TryGetConnection(DbConnection owningObject, TaskCompletionSource`1 retry, DbConnectionOptions userOptions, DbConnectionInternal& connection)
at System.Data.ProviderBase.DbConnectionFactory.TryGetConnection(DbConnection owningConnection, TaskCompletionSource`1 retry, DbConnectionOptions userOptions, DbConnectionInternal oldConnection, DbConnectionInternal& connection)
at System.Data.ProviderBase.DbConnectionInternal.TryOpenConnectionInternal(DbConnection outerConnection, DbConnectionFactory connectionFactory, TaskCompletionSource`1 retry, DbConnectionOptions userOptions)
at System.Data.ProviderBase.DbConnectionClosed.TryOpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory, TaskCompletionSource`1 retry, DbConnectionOptions userOptions)
at System.Data.SqlClient.SqlConnection.TryOpenInner(TaskCompletionSource`1 retry)
at System.Data.SqlClient.SqlConnection.TryOpen(TaskCompletionSource`1 retry)
at System.Data.SqlClient.SqlConnection.Open()
at Hangfire.SqlServer.SqlServerStorage.CreateAndOpenConnection()
at Hangfire.SqlServer.SqlServerStorage.UseConnection[T](Func`2 func)
at Hangfire.SqlServer.SqlServerStorage.UseConnection(Action`1 action)
at Hangfire.SqlServer.CountersAggregator.Execute(CancellationToken cancellationToken)
at Hangfire.Server.ServerProcessExtensions.Execute(IServerProcess process, BackgroundProcessContext context)
at Hangfire.Server.AutomaticRetryProcess.Execute(BackgroundProcessContext context)
In the latest 1.7.0 betas there are far fewer log messages in case of storage issues, and their number no longer depends on the worker count, since they are grouped together. The Debug level still shows all of them, but Info level and above are now modest.
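Until those changes land, a narrower workaround than silencing every Hangfire.* logger outright is to filter those loggers by level in the logging framework's configuration. A minimal sketch assuming NLog (the target name and file path are illustrative; log4net and Serilog offer equivalent per-logger filters). Note that the second rule discards Hangfire's Error/Fatal entries entirely, which also hides genuine storage failures, so a plain level cap may be preferable:

```xml
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd">
  <targets>
    <target name="file" type="File" fileName="app.log" />
  </targets>
  <rules>
    <!-- Keep Hangfire messages up to Warn, then stop processing them -->
    <logger name="Hangfire.*" maxlevel="Warn" writeTo="file" final="true" />
    <!-- Swallow the remaining Hangfire Error/Fatal entries (no writeTo = discard) -->
    <logger name="Hangfire.*" final="true" />
    <!-- Everything else logs normally -->
    <logger name="*" minlevel="Info" writeTo="file" />
  </rules>
</nlog>
```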