Semaphore full exception thrown in MemcachedNode #97

theburningmonk opened this Issue Feb 8, 2012 · 5 comments



We've just upgraded from v2.8 of the Enyim.Memcached client to v2.11, and all of a sudden we're seeing a steady stream of SemaphoreFullExceptions thrown on our web servers. Here's the stack trace:

System.Threading.SemaphoreFullException: Adding the specified count to the semaphore would cause it to exceed its maximum count.
at System.Threading.Semaphore.Release(Int32 releaseCount)
at System.Threading.Semaphore.Release()
at Enyim.Caching.Memcached.MemcachedNode.InternalPoolImpl.ReleaseSocket(PooledSocket socket) in d:\d\repo\EnyimMemcached\Enyim.Caching\Memcached\MemcachedNode.cs:line 373
at Enyim.Caching.Memcached.PooledSocket.Dispose(Boolean disposing) in d:\d\repo\EnyimMemcached\Enyim.Caching\Memcached\PooledSocket.cs:line 157
at Enyim.Caching.Memcached.PooledSocket.System.IDisposable.Dispose() in d:\d\repo\EnyimMemcached\Enyim.Caching\Memcached\PooledSocket.cs:line 163
at Enyim.Caching.Memcached.MemcachedNode.<>c__DisplayClass2.b__0(Boolean readSuccess) in d:\d\repo\EnyimMemcached\Enyim.Caching\Memcached\MemcachedNode.cs:line 497
at Enyim.Caching.Memcached.Protocol.Binary.MultiGetOperation.DoReadAsync() in d:\d\repo\EnyimMemcached\Enyim.Caching\Memcached\Protocol\Binary\MultiGetOperation.cs:line 109
at Enyim.Caching.Memcached.Protocol.Binary.MultiGetOperation.EndReadAsync(Boolean readSuccess) in d:\d\repo\EnyimMemcached\Enyim.Caching\Memcached\Protocol\Binary\MultiGetOperation.cs:line 123
at Enyim.Caching.Memcached.Protocol.Binary.BinaryResponse.DoDecodeHeader(AsyncIOArgs asyncEvent, Boolean& pendingIO) in d:\d\repo\EnyimMemcached\Enyim.Caching\Memcached\Protocol\Binary\BinaryResponse.cs:line 133
at Enyim.Caching.Memcached.Protocol.Binary.BinaryResponse.DoDecodeHeaderAsync(AsyncIOArgs asyncEvent) in d:\d\repo\EnyimMemcached\Enyim.Caching\Memcached\Protocol\Binary\BinaryResponse.cs:line 124
at Enyim.Caching.Memcached.PooledSocket.AsyncSocketHelper.PublishResult(Boolean isAsync) in d:\d\repo\EnyimMemcached\Enyim.Caching\Memcached\AsyncSocketHelper.cs:line 185
at Enyim.Caching.Memcached.PooledSocket.AsyncSocketHelper.EndReceive() in d:\d\repo\EnyimMemcached\Enyim.Caching\Memcached\AsyncSocketHelper.cs:line 164
at Enyim.Caching.Memcached.PooledSocket.AsyncSocketHelper.AsyncReadCompleted(Object sender, SocketAsyncEventArgs e) in d:\d\repo\EnyimMemcached\Enyim.Caching\Memcached\AsyncSocketHelper.cs:line 113
at System.Net.Sockets.SocketAsyncEventArgs.OnCompleted(SocketAsyncEventArgs e)
at System.Net.Sockets.SocketAsyncEventArgs.ExecutionCallback(Object ignored)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean ignoreSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
at System.Net.Sockets.SocketAsyncEventArgs.FinishOperationSuccess(SocketError socketError, Int32 bytesTransferred, SocketFlags flags)
at System.Net.Sockets.SocketAsyncEventArgs.CompletionPortCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* nativeOverlapped)
at System.Threading._IOCompletionCallback.PerformIOCompletionCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* pOVERLAP)

We're using the latest version (2.11) available through NuGet.


Can you please paste your socketPool config?


I'm using the default socket pool config.
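For reference, this is roughly what the default socketPool section looks like in app.config/web.config when written out explicitly (attribute names per Enyim's config schema; the values shown are what I believe the library defaults to be, e.g. maxPoolSize of 20):

```xml
<enyim.com>
  <memcached protocol="Binary">
    <servers>
      <add address="127.0.0.1" port="11211" />
    </servers>
    <!-- These values mirror the built-in defaults, to my understanding. -->
    <socketPool minPoolSize="10" maxPoolSize="20"
                connectionTimeout="00:00:10" deadTimeout="00:00:10" />
  </memcached>
</enyim.com>
```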


We've rolled back to v2.8 and still continue to see this error. It seems to be load-related, as it happens much more frequently when we're near peak traffic.

Just to add some more context: we moved from the default transcoder to a custom transcoder that uses protobuf-net to serialize/deserialize complex object types. As a result we're running 40% fewer servers for an equivalent load, so each server is potentially doing much more work.
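The protobuf-net round-trip at the heart of such a transcoder looks roughly like this (a sketch only — the `UserProfile` type is hypothetical, and the wiring into Enyim's transcoder interface is omitted; `Serializer` is protobuf-net's):

```csharp
using System.IO;
using ProtoBuf;

[ProtoContract]
public class UserProfile    // hypothetical complex type being cached
{
    [ProtoMember(1)] public int Id { get; set; }
    [ProtoMember(2)] public string Name { get; set; }
}

public static class ProtobufCodec
{
    // Serialize any [ProtoContract] type to the byte[] stored in memcached.
    public static byte[] Serialize<T>(T value)
    {
        using (var ms = new MemoryStream())
        {
            Serializer.Serialize(ms, value);
            return ms.ToArray();
        }
    }

    // Deserialize the cached bytes back into the original type.
    public static T Deserialize<T>(byte[] data)
    {
        using (var ms = new MemoryStream(data))
            return Serializer.Deserialize<T>(ms);
    }
}
```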


Try raising the maxPoolSize.

You need to weigh the following:

  • in an ideal world you'd need as many sockets as threads (requests) running concurrently
  • if requests are coming in faster than you can release sockets, you need to raise the limit
  • with high limits it takes much longer to "fail" if a node goes down, and more requests will get stuck until the socket times out

You could also try playing with the queueTimeout setting, but as a quick fix, raise the maxPoolSize to something like 100 or 200 and see if it helps.
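Concretely, that would mean bumping the socketPool attributes along these lines (attribute names per Enyim's config schema; the timeout value is illustrative):

```xml
<!-- Raise the per-node socket ceiling; optionally tune how long a request
     waits for a free socket before failing (queueTimeout). -->
<socketPool minPoolSize="10" maxPoolSize="100" queueTimeout="00:00:00.150" />
```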


It's one of the things we're considering, but we've hit the 10k connection limit per membase node in the past, and given the number of servers and buckets we're running, we probably won't be able to raise the maxPoolSize significantly.
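To make the 10k limit concrete (all numbers here are hypothetical, just to show the arithmetic):

```
25 web servers × 4 buckets × 100 sockets (maxPoolSize) = 10,000 connections per membase node
```

So even a modest fleet exhausts the per-node ceiling once maxPoolSize goes into the hundreds.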

What danger do you see if I put a try-catch around the three places where Semaphore.Release is called inside InternalPoolImpl and simply swallow those exceptions?
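A sketch of what that swallow would look like (hypothetical, not the actual Enyim source; the field name `semaphore` is assumed). The risk worth noting: SemaphoreFullException here means Release() ran more times than a corresponding Wait() succeeded, i.e. a socket was effectively returned to the pool twice, so swallowing it hides the double-release rather than fixing it:

```csharp
// Inside InternalPoolImpl.ReleaseSocket — sketch only.
try
{
    this.semaphore.Release();
}
catch (SemaphoreFullException)
{
    // Swallowed: the semaphore is already at its maximum count.
    // The process stays up, but the pool's accounting has silently
    // drifted; logging here to find the double-release path would
    // be safer than ignoring it outright.
}
```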

@enyim enyim closed this Apr 24, 2016