We have a problem where scale-out with SQL Server holds many connections, and they don't get disposed for about 3-5 minutes.
This is the result in the database:
Here are the queries used to get the data above:
Active connections for database:
select count(*) as sessions,
s.host_name,
s.host_process_id,
s.program_name,
db_name(s.database_id) as database_name
from sys.dm_exec_sessions s
where is_user_process = 1
and DB_NAME(s.database_id) = 'DatabaseName'
and s.program_name = '.Net SqlClient Data Provider'
group by host_name, host_process_id, program_name, database_id
order by count(*) desc;
And to get sleeping connections:
SELECT es.status, last_read, last_write, text, program_name, host_process_id
FROM sys.dm_exec_connections ec
JOIN sys.dm_exec_sessions es
ON ec.session_id = es.session_id
CROSS APPLY sys.dm_exec_sql_text(ec.most_recent_sql_handle) AS dest
where login_name = 'DatabaseUser'
and text like '%SignalR%'
and es.status = 'sleeping'
Expected behavior
We don't expect one connection per client, and we expect the connections to be disposed sooner.
Actual behavior
One connection per client results in well over 100 connections, which is the default connection limit for the IIS application pool.
Steps to reproduce
Using version 2.2.2 of SignalR and SignalR.SqlServer.
A connection is "created" for each send, but connection pooling should mean that you only need a connection per "simultaneous" message. Basically, if there isn't an idle connection in the pool when a message is sent (because the other connections are being used to send messages as well), a new connection will be created. Eventually these connections will close, but we do need a separate connection for each simultaneous message.
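The pooling behaviour described above can be sketched with plain ADO.NET. This is a hedged illustration, not SignalR's internal code; the connection string values are placeholders:

```csharp
using System.Data.SqlClient;

// Hypothetical sketch of default ADO.NET pooling behaviour. Dispose() does
// NOT close the physical connection; it returns it to the pool, where SQL
// Server continues to see it as a 'sleeping' session until the pooler
// prunes it (after roughly 4-8 minutes of idle time by default, which
// matches the 3-5 minute lifetime observed in this issue).
string connectionString = "Server=.;Database=DatabaseName;Integrated Security=true";

for (int i = 0; i < 100; i++)
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open(); // reuses an idle pooled connection if one exists
        // ... send one message ...
    } // returned to the pool here, not physically closed
}
// Sequential sends like this reuse ONE physical connection; only
// concurrent sends force the pool to open additional connections.
```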
This is likely expected behavior if you have high concurrency. The SQL Server scale-out provider isn't really optimal for high-traffic applications, it's mostly designed for use in environments where one of the other providers (Redis or Service Bus) can't be used due to corporate policy.
I'm going to close this since it's mostly by design. If you have more details you'd like to add (e.g. the frequency of messages, the number of active clients, etc.) that might help us understand your scenario more, feel free to post a comment. We don't have any plans to redesign this scale-out provider though, so I'd encourage you to try a different provider if SQL isn't scaling appropriately for you.
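For reference, switching scale-out providers in SignalR 2.x is a small change at startup. A hedged sketch using the Redis provider (Microsoft.AspNet.SignalR.Redis package); the server name, port, password, and event key below are placeholders:

```csharp
using Microsoft.AspNet.SignalR;
using Owin;

// Hypothetical OWIN startup: swapping the SQL Server scale-out provider
// for Redis. UseRedis must be called before MapSignalR.
public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        GlobalHost.DependencyResolver.UseRedis("redis-server", 6379, "password", "AppName");
        app.MapSignalR();
    }
}
```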
Hi,
hubContext.Clients.Users(users).publish(eventName, data);
The code above will open one connection per user in the users list.
Are we doing something wrong, or is this the default behaviour? Is there any way to get around this?