

Make gae_mini_profiler thread safe #30

wants to merge 2 commits into from

4 participants


These small changes make gae_mini_profiler thread-safe (i.e. usable with the new Python 2.7 runtime with threadsafe enabled).


The request_id needs to be thread-local since it is used outside of the lock. The middleware needs to be synchronized since it uses many instance variables that could be corrupted if multiple threads set them concurrently.
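To make the thread-local point concrete, here is a minimal, hypothetical sketch (not the actual gae_mini_profiler code; names are made up) of why a module-level request id breaks under concurrent requests and how `threading.local()` fixes it:

```python
import threading

# Hypothetical sketch: a plain module global would be shared by every
# request thread, so concurrent requests would overwrite each other's id.
# threading.local() gives each thread its own independent slot.
_local = threading.local()

def set_request_id(request_id):
    _local.request_id = request_id

def get_request_id():
    # Each thread only ever sees the value it set itself; other threads'
    # writes are invisible here.
    return getattr(_local, "request_id", None)
```

With this, two requests running concurrently each keep their own id, and a thread that never set one simply sees `None`.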


I've also run into threading issues with gae_mini_profiler, but I'm not sure this patch fully fixes them. It certainly helps now that request_id is thread-local. But the locking effectively serializes all requests. Maybe since not every request is profiled this is OK performance-wise, but wouldn't it be better to store all this state in the environ dict?

I'm not too familiar with WSGI, but it looks like environ contains per-request data. I will take a look into this and post a patch if it works.

Khan Academy member

@aburgel Give me a couple days to get this repo updated w/ our latest from the Khan Academy codebase -- we have our own multithreading patch. Will update here in the near future, and you can judge if you like the latest.


@aburgel - true, this patch serializes profiled requests. Non-profiled requests go through without locking, so this should have a small impact on production (i.e. only the percentage you choose to profile gets serialized). I feel that the locking is needed in addition to the thread-local stuff because the recorder instance fetched in wrapped_appstats_app is used later (on the line RequestStats(request_id, environ, self).store()) and without locking another thread can indirectly change it (i.e. if a different thread starts a request, I think the recorder state might be compromised).
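To illustrate the kind of corruption being described, here's a minimal, hypothetical sketch (not gae_mini_profiler code; the barrier just forces the interleaving) of per-request state stored on a middleware instance shared by all threads:

```python
import threading

# Hypothetical sketch of the race: one middleware instance serves every
# thread, so per-request state stored on `self` can be overwritten by a
# request that starts concurrently.
class UnsafeMiddleware(object):
    def __init__(self):
        self.recorder = None

    def handle(self, request_id, barrier):
        self.recorder = request_id   # per-request state on a shared instance
        barrier.wait()               # another request runs here...
        return self.recorder         # ...and may have clobbered our value
```

Run two "requests" through the same instance at once and one of them reads back the other's recorder, which is exactly why either a lock or per-request storage is needed.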

@kamens - I'm looking forward to your patch, hopefully it will give a more elegant solution to the problem.


Any updates on this bug?

@irvingpop irvingpop referenced this pull request in kamalgill/flask-appengine-template

Problems with gae-mini-profiler #36

Khan Academy member

The latest version should now play nice w/ threadsafe:true.

The new example app is running in multithreaded mode.

The latest version changed the way configurations are overridden (choosing to follow the appengine_config model). The updated instructions may be needed for those making the switch.
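For anyone unfamiliar with the appengine_config model: App Engine's lib_config mechanism picks up override names from an `appengine_config.py` at the app root. A rough sketch of the idea — the exact hook name below is a guess, so check the updated README for the real one:

```python
# appengine_config.py -- sketch of the appengine_config override model.
# lib_config looks up functions/values with the library's prefix in this
# module; the hook name here is an assumption, not confirmed API.

def gae_mini_profiler_should_profile_production():
    """Example policy: profile production requests only for admins."""
    from google.appengine.api import users
    return users.is_current_user_admin()
```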

Thx for all the patience!

@kamens kamens closed this
Commits on Jun 20, 2012
  1. @cdman

    Make gae_mini_profiler thread safe

    cdman committed
Commits on Jul 27, 2012
  1. @cdman
Showing with 44 additions and 24 deletions.
  1. +43 −23
  2. +1 −1
@@ -4,6 +4,8 @@
import os
import cPickle as pickle
import re
+import threading
+import base64
# use json in Python 2.7, fallback to simplejson for Python 2.5
@@ -29,8 +31,24 @@
config = gae_mini_profiler.config.ProfilerConfigProduction
-# request_id is a per-request identifier accessed by a couple other pieces of gae_mini_profiler
-request_id = None
+class RequestStore:
+    def __init__(self):
+        self._threadlocal = threading.local()
+
+    def clear_id(self):
+        if not hasattr(self._threadlocal, 'request_id'): return
+        self._threadlocal.request_id = None
+
+    def generate_id(self):
+        self._threadlocal.request_id = base64.urlsafe_b64encode(os.urandom(5))
+        return self._threadlocal.request_id
+
+    def get_id(self):
+        if not hasattr(self._threadlocal, 'request_id'): return None
+        return self._threadlocal.request_id
+
+requeststore = RequestStore()
class SharedStatsHandler(RequestHandler):
@@ -307,6 +325,7 @@ class ProfilerWSGIMiddleware(object):
def __init__(self, app):
+ self._lock = threading.RLock() = app
self.app_clean = app = None
@@ -319,23 +338,26 @@ def __init__(self, app):
self.end = None
def __call__(self, environ, start_response):
- global request_id
- request_id = None
- # Start w/ a non-profiled app at the beginning of each request
- = self.app_clean
- = None
- self.recorder = None
- self.temporary_redirect = False
- self.simple_timing = cookies.get_cookie_value("g-m-p-disabled") == "1"
# Never profile calls to the profiler itself to avoid endless recursion.
- if config.should_profile(environ) and not environ.get("PATH_INFO", "").startswith("/gae_mini_profiler/"):
+ if not config.should_profile(environ) or environ.get("PATH_INFO", "").startswith("/gae_mini_profiler/"):
+ result =, start_response)
+ for value in result:
+ yield value
+ return
+ self._lock.acquire()
+ try:
+ # Start w/ a non-profiled app at the beginning of each request
+ = self.app_clean
+ = None
+ self.recorder = None
+ self.temporary_redirect = False
+ self.simple_timing = cookies.get_cookie_value("g-m-p-disabled") == "1"
+ requeststore.clear_id()
# Set a random ID for this request so we can look up stats later
- import base64
- request_id = base64.urlsafe_b64encode(os.urandom(5))
+ request_id = requeststore.generate_id()
# Send request id in headers so jQuery ajax calls can pick
# up profiles.
@@ -422,12 +444,10 @@ def wrapped_appstats_app(environ, start_response):
# Just in case we're using up memory in the recorder and profiler
self.recorder = None = None
- request_id = None
+ requeststore.clear_id()
- else:
- result =, start_response)
- for value in result:
- yield value
+ finally:
+ self._lock.release()
def add_handler(self):
if self.handler is None:
@@ -476,10 +496,10 @@ def headers_with_modified_redirect(environ, headers):
reg = re.compile("mp-r-id=([^&]+)")
# Keep any chain of redirects around
- request_id_chain = request_id
+ request_id_chain = requeststore.get_id()
match ="QUERY_STRING"))
if match:
- request_id_chain = ",".join([match.groups()[0], request_id])
+ request_id_chain = ",".join([match.groups()[0], requeststore.get_id()])
# Remove any pre-existing miniprofiler redirect id
location = header[1]
@@ -29,6 +29,6 @@ def profiler_includes_request_id(request_id, show_immediately = False):
def profiler_includes():
- return profiler_includes_request_id(profiler.request_id)
+ return profiler_includes_request_id(profiler.requeststore.get_id())