make cache_lock adaptive
Summary: The cache_lock in memcached is now our "giant" lock and is a source of much contention. By making it adaptive, performance on an 8-core system with 8 server threads goes from 240,000 gets/s to over 300,000 gets/s.

Reviewed By: sgrimm

Test Plan: ran stress test
           blasted with both TCP and UDP requests

Revert: OK


git-svn-id: http://svn.facebook.com/svnroot/projects/memcached/trunk@123267 2c7ba8d8-a2f7-0310-a573-de162e16dcc7
ps committed Sep 24, 2008
1 parent 1d33e19 commit 1f405e4
Showing 1 changed file with 7 additions and 1 deletion.
thread.c (8 changes: 7 additions & 1 deletion)
@@ -5,6 +5,7 @@
  * $Id$
  */
 
+#define _GNU_SOURCE 1
 #include "generic.h"
 
 #include <assert.h>
@@ -51,6 +52,7 @@ static pthread_mutex_t conn_lock;
 
 /* Lock for cache operations (item_*, assoc_*) */
 static pthread_mutex_t cache_lock;
+static pthread_mutexattr_t cache_attr;
 
 #if defined(USE_SLAB_ALLOCATOR)
 /* Lock for slab allocator operations */
@@ -779,7 +781,11 @@ void mt_stats_aggregate(stats_t *accum) {
 void thread_init(int nthreads, struct event_base *main_base) {
     int i;
 
-    pthread_mutex_init(&cache_lock, NULL);
+    pthread_mutexattr_init(&cache_attr);
+#ifdef PTHREAD_ADAPTIVE_MUTEX_INITIALIZER_NP
+    pthread_mutexattr_settype(&cache_attr, PTHREAD_MUTEX_ADAPTIVE_NP);
+#endif
+    pthread_mutex_init(&cache_lock, &cache_attr);
     pthread_mutex_init(&conn_lock, NULL);
 #if defined(USE_SLAB_ALLOCATOR)
     pthread_mutex_init(&slabs_lock, NULL);
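
For reference, PTHREAD_MUTEX_ADAPTIVE_NP is a glibc extension: a contended adaptive mutex spins briefly in user space before falling back to sleeping in the kernel, which tends to pay off when the critical section is short and hot, as with cache_lock here. The sketch below shows the same initialization pattern as a standalone program; the demo_lock name and the main() driver are illustrative, not part of memcached.

/* Minimal sketch: make a mutex adaptive when glibc supports it. */
#define _GNU_SOURCE 1
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t demo_lock;   /* illustrative stand-in for cache_lock */

int main(void) {
    pthread_mutexattr_t attr;

    pthread_mutexattr_init(&attr);
#ifdef PTHREAD_ADAPTIVE_MUTEX_INITIALIZER_NP
    /* The initializer macro is the usual compile-time probe for the adaptive
       type, since PTHREAD_MUTEX_ADAPTIVE_NP itself is an enum, not a macro. */
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ADAPTIVE_NP);
#endif
    pthread_mutex_init(&demo_lock, &attr);
    pthread_mutexattr_destroy(&attr);

    pthread_mutex_lock(&demo_lock);
    puts("holding the (possibly adaptive) lock");
    pthread_mutex_unlock(&demo_lock);

    pthread_mutex_destroy(&demo_lock);
    return 0;
}

Build with gcc -pthread. On systems without the glibc extension, the #ifdef leaves the mutex at its default type, which matches the patch's behavior.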
