worker, burst and tests #153

ouhouhsami opened this Issue Nov 21, 2012 · 2 comments





First, thanks for rq, I'm using it with django-rq, and it works great.

But I'm experiencing a weird problem when trying to test my app with work(burst=True) in my tests (getting the worker with from django_rq import get_worker).
I have already explained my problem here: ui/django-rq#9, but it seems to me that my issue comes from rq.

After adding logging to the worker class, here is what I get. I added > here I lost data I need < markers where the process goes wrong for me:

[2012-11-21 08:51] DEBUG: worker: Registering birth of worker m2049.544
[2012-11-21 08:51] INFO: worker: RQ worker started, version 0.3.2
[2012-11-21 08:51] INFO: worker: 
[2012-11-21 08:51] INFO: worker: *** Listening on default...
[2012-11-21 08:51] INFO: worker: default:<TestAd: myfunkybrand>) (d5399a33-f03b-44de-89c9-3c1c7c3b3c73)
[2012-11-21 08:51] INFO: horse: Job OK
[2012-11-21 08:51] INFO: horse: Result is kept for 500 seconds.
> here I lost data I need <
[2012-11-21 08:51] INFO: worker: 
[2012-11-21 08:51] INFO: worker: *** Listening on default...
[2012-11-21 08:51] DEBUG: worker: Registering death
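For context on what happens around that point: rq's worker forks a child process (the "horse" in the log) for each job, the job runs in that child, and the child exits once the job is done, after which the parent goes back to listening. Any in-process state the job built up dies with the child. A minimal sketch of that model, assuming a POSIX system (this is an illustration, not rq's actual code):

```python
import os

state = 'initial'

def run_job():
    # Runs in the forked child: mutates the child's copy of `state`,
    # not the parent's.
    global state
    state = 'modified by job'
    print('horse:', state)

pid = os.fork()
if pid == 0:
    run_job()       # child: the "work horse"
    os._exit(0)     # the horse exits after the job
os.waitpid(pid, 0)  # parent waits for the horse
print('worker:', state)  # → worker: initial
```

The parent never observes the child's change, which matches losing the monitoring data after each job.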

To explain my use case a bit more: I have a (django) app that launches asynchronous jobs using rq. In my tests, I have the following:

import unittest

class MyTestCase(unittest.TestCase):
    def test_it(self):
        # create some objects that invoke an asynchronous job,
        # then test that a function inside my django project has been
        # called during the asynchronous job, via a decorator that
        # counts the calls on that function
        pass

# the decorator function

def count_calls(fn):
    def _counting(*args, **kwargs):
        _counting.calls += 1
        return fn(*args, **kwargs)
    _counting.calls = 0
    return _counting
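For what it's worth, within a single process the decorator itself behaves as expected; a quick check with a hypothetical greet function:

```python
def count_calls(fn):
    def _counting(*args, **kwargs):
        _counting.calls += 1
        return fn(*args, **kwargs)
    _counting.calls = 0
    return _counting

@count_calls
def greet(name):
    return 'hello ' + name

greet('a')
greet('b')
print(greet.calls)  # → 2
```

So the counter only goes wrong when the call happens in a different process than the one reading `.calls`.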

In the log above, before the > here I lost data I need < marker, the function I want to monitor reports that it has been called; but just after the marker, the function's call count is reset.
So it appears that up to my > here I lost data I need < flag, one process is kept by my tests (I'm able to monitor that the function has been called), but as soon as the worker re-enters its job loop, just after my flag, I lose the monitoring data I had before, as if the code of my app had been reloaded, or some other thing I can't explain.

Is my explanation clear enough, and do you have an idea why this code doesn't work?
(Obviously I need this design for my app, and it works in production, but not in the test part!)


nvie commented Nov 25, 2012

The example you provide isn't really clear to me. Could you provide a minimal example that demonstrates the suspected bug?


Thanks for your reply.

A first minimal example, which is not exactly what I described above, but may help you understand my problem:

# in the main script
from redis import Redis
from rq import Queue, Worker
from rq_func import foo

foo()  # prints 'function calls 1'
foo()  # prints 'function calls 2'

q = Queue('default', connection=Redis())
job = q.enqueue(foo)
w = Worker([q])
w.work(burst=True)  # run the enqueued job and exit (burst=True, per the report above)

# in rq_func.py
count = 0

def foo():
    global count  # I know global is evil, btw it's for demonstration
    count += 1
    print 'function calls', count

And when I run the script, I get:

16:27:58: RQ worker started, version 0.3.2
16:27:58: *** Listening on default...
16:28:01: default: (9d77174a-72db-453f-bbaa-73dd4c0fd642)
function calls 1
16:28:01: Job OK
16:28:01: Result is kept for 500 seconds.
16:28:01: *** Listening on default...

So 'function calls' in the worker prints "function calls 1", which is not 'right': it should be the third call, since it runs more or less the ''same'' foo.
I think my problem comes from this kind of thing, which is not a bug.
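Since the job runs in a separate process, the usual pattern is to return the value from the job function rather than mutate module-level state, and have the caller collect it explicitly (rq does this by storing the job's return value in Redis, exposed as `job.result`). A minimal sketch of the same idea without rq, using multiprocessing (the function name and message are illustrative):

```python
from multiprocessing import Pool

def foo():
    # Return the value instead of mutating a module-level counter;
    # the caller collects it explicitly, the same way rq hands a
    # job's return value back to the enqueuer.
    return 'function was called'

pool = Pool(1)
result = pool.apply(foo)  # runs foo in a worker process
pool.close()
pool.join()
print(result)  # → function was called
```

Any state foo mutates in its own process is gone when the worker exits; only the returned value makes it back.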

@nvie nvie closed this Jan 18, 2013