Segment causing celery workers to hang in django #51
Possibly related: celery/celery#2429
I am also experiencing something similar, and in my case it was also narrowed down to the analytics library. I haven't tried higher rate limits, but it still blocks after a while, even with mild traffic.
Hey all, thanks so much for the report, and apologies for the delay in getting back on this. @calvinfo @f2prateek any idea here?
Updates?
Unfortunately I haven't gotten a chance to deeply investigate here. One question, though: how are you using celery (multiprocessing, eventlet, …)? We might move this to a coroutine approach, since other libraries might …
We are using multiprocessing with 1 thread per process. In our case, the workaround was subclassing the Client class and changing the queue to a JoinableQueue (from the multiprocessing module).
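The workaround described above can be sketched roughly like this. This is illustrative only: the names (`consumer`, `flush_events`) are hypothetical and not analytics-python internals, and the upload step is stubbed out.

```python
# Illustrative sketch of the workaround above: route events through a
# multiprocessing.JoinableQueue so the producer can join() the queue and
# know its events were actually consumed before the process exits.
# Names here are hypothetical, not the analytics-python internals.
from multiprocessing import JoinableQueue, Process

def consumer(q):
    """Drain events until a None sentinel arrives, acking each with task_done()."""
    while True:
        item = q.get()
        try:
            if item is None:
                break
            # ... batch-upload `item` to api.segment.io would go here ...
        finally:
            q.task_done()

def flush_events(events):
    """Enqueue events, then block until the consumer has processed them all."""
    q = JoinableQueue()
    worker = Process(target=consumer, args=(q,), daemon=True)
    worker.start()
    for event in events:
        q.put(event)
    q.put(None)   # sentinel: tell the consumer to exit
    q.join()      # returns only after every item got task_done()
    return len(events)
```

Unlike a plain `multiprocessing.Queue`, a `JoinableQueue` supports `join()`/`task_done()`, so a prefork celery worker can confirm delivery instead of trusting a background thread it may not even have after forking.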
I think a proper solution would be to make the threading part optional. The idea behind introducing celery is the same — push the processing away from the request/response cycle — so it's double trouble to have both of them in the way.
I'd be supportive of that. We could end up passing in an option to force …
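The option being discussed might look something like this. It is a hypothetical sketch, not the library's actual API: a client that, when `sync=True`, delivers events inline instead of handing them to a background consumer thread (`deliver` stands in for the HTTP batch upload).

```python
# Hypothetical sketch of an "optional threading" client, as discussed above.
# `deliver` stands in for the HTTP batch upload; nothing here is the real
# analytics-python API.
class SyncOptionalClient:
    def __init__(self, deliver, sync=False):
        self.deliver = deliver   # callable that uploads a list of events
        self.sync = sync         # True: send inline, e.g. from a celery task
        self._pending = []

    def track(self, event):
        if self.sync:
            self.deliver([event])        # no queue, no consumer thread
        else:
            self._pending.append(event)  # buffered until flush()

    def flush(self):
        if self._pending:
            self.deliver(self._pending)
            self._pending = []
```

With `sync=True` inside a celery task, the request/response cycle is already decoupled by celery itself, so the in-process queue and thread add nothing but another place to hang.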
Hey all, new to segment and looking to use this package from the server. We already have a celery/rabbitmq task queue pattern, and I'm concerned about bringing this package in based on this thread. Is this still an issue worth worrying about? If so, should we consider just making standard REST calls?
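For what plain REST calls could look like: Segment exposes an HTTP tracking API (`POST https://api.segment.io/v1/track`, with the write key as the basic-auth username). A minimal stdlib-only sketch, with placeholder write key and payload and no retry logic:

```python
# Minimal sketch of a direct call to Segment's HTTP tracking API, with no
# client library. The write key and payload below are placeholders.
import base64
import json
import urllib.request

SEGMENT_TRACK_URL = "https://api.segment.io/v1/track"

def build_track_request(write_key, user_id, event, properties=None):
    """Build the urllib Request for a `track` call (no network I/O here)."""
    body = json.dumps({
        "userId": user_id,
        "event": event,
        "properties": properties or {},
    }).encode("utf-8")
    # Basic auth: write key as username, empty password
    token = base64.b64encode(f"{write_key}:".encode()).decode()
    return urllib.request.Request(
        SEGMENT_TRACK_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Basic {token}",
        },
    )

def track(write_key, user_id, event, properties=None, timeout=5):
    req = build_track_request(write_key, user_id, event, properties)
    # Always set a timeout so a slow endpoint can't hang a worker forever.
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.status
```

The explicit `timeout` is the point: whatever was hanging in this thread, a bare HTTP call with a timeout bounds the worst case to a failed event rather than a stuck worker.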
We ended up not triggering segment from within workers, but from within the request. I don't like that solution, but this package is not exactly under what you'd call active development.
Gotcha. Thanks for the info, @shredding. No worries about potentially dropping events there? Are you just trusting the internal queueing mechanism? |
Not really ... |
Merging into #101 |
We use the Django post_save signal to trigger segment analytics tracking asynchronously using celery. However, when multiple events are created (about 350 in 20 seconds), all the celery workers consistently hang after the following output.
[2015-06-12 22:41:34,245: INFO/Worker-2] Starting new HTTPS connection (1): api.segment.io
[2015-06-12 22:41:34,574: DEBUG/Worker-2] "POST /v1/batch HTTP/1.1" 200 21
[2015-06-12 22:41:34,578: DEBUG/Worker-2] data uploaded successfully
When the analytics tracking is commented out, the workers function as expected. When the celery rate limit is set to "600/m", the celery workers run without hanging. We have a celery hard time limit of 30 seconds to prevent segment from hanging. We found that at a higher rate limit, the hard time limit was hit frequently and the analytics tracking was not sent through.
Not sure why the segment library is causing this to happen, please advise.
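One plausible mechanism for the symptom above (illustrative only; this is not the library's actual code): analytics clients typically buffer events on a bounded in-memory queue drained by a background consumer thread. If that thread is absent or stalled in a forked celery worker, the queue eventually fills and a blocking put() never returns, which looks exactly like a hung worker:

```python
# Illustrative only, not the library's code: show how a bounded queue with no
# consumer makes the producer block. We use a timeout so the demo itself
# doesn't hang; a real blocking put() would simply never return.
import queue

def enqueue(q, event, timeout=0.1):
    """Try to enqueue an event; return False where a blocking put() would hang."""
    try:
        q.put(event, timeout=timeout)
        return True
    except queue.Full:
        return False

buffer = queue.Queue(maxsize=2)  # small bound to make the effect visible
results = [enqueue(buffer, f"event-{i}") for i in range(3)]
# with nothing calling get(), the third put times out: [True, True, False]
```

This would also be consistent with the rate-limit observation: at "600/m" the consumer keeps up and the queue never fills, while a burst of ~350 events in 20 seconds outruns it.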