Bug 1177798 - Order contributors by recency of last edit. #3369
Force-pushed from 134a619 to ef0bda3
To be clear, there is another contributor bar in the footer of document pages, which will use the same cacheback job to populate a list of links. That list is not waffled and will use the cacheback job, and therefore Memcache and Celery, by default. If you think we should waffle the contributor footer bar as well to prevent potential issues with using Memcache/Celery, let me know.
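For anyone unfamiliar with the pattern, here is a minimal sketch of the stale-while-revalidate behavior a cacheback job follows. This is not django-cacheback's actual API and the names are illustrative; it just shows why the footer bar would hit Memcache/Celery by default:

```python
import time


class ContributorListJob:
    """Simplified sketch of the stale-while-revalidate pattern a
    cacheback job follows; not django-cacheback's real API, and the
    names here are illustrative."""

    lifetime = 60  # seconds a cached value is considered fresh

    def __init__(self):
        self.cache = {}      # stands in for Memcache
        self.refreshes = 0   # counts "Celery task" invocations

    def fetch(self, doc_id):
        # In Kuma this would be the expensive SQL query producing the
        # contributor list for a document.
        return ['contributor-for-doc-%s' % doc_id]

    def get(self, doc_id):
        now = time.time()
        entry = self.cache.get(doc_id)
        if entry is None:
            # Cache miss: store an empty placeholder so concurrent
            # misses for the same key don't all schedule refresh jobs,
            # then trigger one refresh and serve the empty value.
            self.cache[doc_id] = (now + self.lifetime, [])
            self.refresh(doc_id)
            return []
        expires, value = entry
        if now > expires:
            # Stale hit: serve the old value, refresh in the background.
            self.refresh(doc_id)
        return value

    def refresh(self, doc_id):
        # Stand-in for the asynchronous Celery task.
        self.refreshes += 1
        self.cache[doc_id] = (time.time() + self.lifetime,
                              self.fetch(doc_id))
```

The first request for a document serves the empty placeholder; subsequent requests within the lifetime serve the cached list without re-running the query.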
```diff
-<div class="contributor-avatars" data-all-text="{{ _('Show all') }}…<span class='hidden'>{{ _('contributors') }}</span>" {% if contrib_count > contrib_limit %}data-has-hidden="1"{% endif %}>
+<div class="contributor-avatars" data-all-text="{{ _('Show all') }}…<span class='hidden'>{{ _('contributors') }}</span>" {% if contributors_count > contributor_show_limit %}data-has-hidden="1"{% endif %}>
 <span class="quickstat">
   {% trans count=contrib_count %}
   by {{ count }} contributor:
```
Needs to be `{{ contributors_count }}` now?
Spot-check looks good. I also put a debug in the middle of the […]. I am still concerned about the risk that the initial spike of cacheback jobs for every document could trigger memcache overload issues again. It looks like this […]. So, if we put the entire logic of […]
@groovecoder So I just made both the header and footer bar depend on the "top_contributors" waffle flag. I moved the check for it into the view since that makes the else branch easier. Can you take a look again?
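Roughly, the view-level check looks like this. The helper names and dict-based request/document are stand-ins for illustration (Kuma uses `waffle.flag_is_active` against real request/model objects):

```python
def flag_is_active(request, name):
    # Stand-in for waffle.flag_is_active; Kuma checks the real
    # "top_contributors" flag against the request/database.
    return request.get('flags', {}).get(name, False)


def document_contributors(request, document):
    """Hypothetical view helper: with the flag check in the view, both
    the header and footer bars get the same (possibly empty) list, and
    the else branch simply skips the cacheback job."""
    if not flag_is_active(request, 'top_contributors'):
        # Flag off: no cacheback job, so no Memcache/Celery involvement.
        return []
    # Flag on: in Kuma this would call the cacheback job's get().
    return document.get('contributors', [])
```

Putting the check in the view means the templates never have to branch on the flag themselves.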
@groovecoder BTW, we could rate-limit the cacheback task to something like 120/m.
Force-pushed from cdbe9e0 to 0a61c13
This looks good to me. I toggled the flag and saw only the first request make the SQL call. If we rate-limit the cacheback job to 120/m, what happens when it hits 121/m? Will it make the next request wait, or just skip the refresh and serve from cache?
The rate limiting happens at the Celery backend level, so what will happen is that Celery will actively refrain from sending the task info to its workers. In other words, the task messages will pile up in RabbitMQ until the Celery workers are able to work through them. Since we track the number of RabbitMQ messages with Graphite, we'll see if something is wrong.

Cacheback, on the other hand, is smart enough that if a job hasn't produced results yet, it'll set the cache key to an empty value (in our case an empty list) with a TTL of 60 seconds by default. That's the assumed period of time in which we expect the cacheback task to fill the cache with an actual result (even though that may just as well be an empty list). In other words, during those 60 seconds cacheback won't try to schedule another Celery task.
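A toy model of that queueing behavior, assuming a 120/m limit (numbers and class names are illustrative, not Celery's implementation): task 121 in a given minute is not dropped or failed, it simply stays queued until the next interval.

```python
class RateLimitedWorker:
    """Toy model of Celery's per-worker rate limit: excess task
    messages pile up in the queue (like in RabbitMQ) instead of being
    dropped."""

    def __init__(self, rate_per_interval):
        self.rate = rate_per_interval
        self.queue = []  # pending task messages
        self.done = []   # processed tasks

    def submit(self, task):
        self.queue.append(task)

    def tick(self):
        # One interval (e.g. a minute) passes: process at most `rate`
        # queued tasks, leaving the rest waiting for the next interval.
        batch = self.queue[:self.rate]
        self.queue = self.queue[self.rate:]
        self.done.extend(batch)
```

Submit 121 tasks at once and the 121st just waits one extra interval; nothing is lost.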
Cool. We also have NR RabbitMQ monitoring: https://rpm.newrelic.com/accounts/263620/dashboard/4681572/page/3 |
This is the first of two steps toward getting rid of the profile table and UserProfile model, and does a bunch of things:

- moves profile fields into the custom user model
- starts using django-sundial for improved timezone field handling, for future improvements
- gets rid of jsonfield in favor of individual columns for user URLs
- migrates data from the profile to the user model
- combines profile and user styles
- refactors user model and form code to use static fields instead of generating them dynamically
- simplifies userban queries
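The data-migration step can be sketched like this. Dicts stand in for model rows and the field names are illustrative, not Kuma's exact schema:

```python
def merge_profile_into_user(user_row, profile_row):
    """Sketch of folding UserProfile fields into the user model during
    a data migration; field names are hypothetical."""
    merged = dict(user_row)
    for field in ('bio', 'timezone', 'locale', 'website_url'):
        # Only copy a profile value where the user row has none yet.
        if not merged.get(field):
            merged[field] = profile_row.get(field)
    return merged
```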
Bug 1177798 - Order contributors by recency of last edit.
This also moves the contributor bar fetching into a cacheback job.
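The ordering this change implements can be sketched as follows: unique contributors, most recent edit first. In Kuma this is done with a SQL query over document revisions; the function below is a pure-Python illustration over `(creator, created)` pairs:

```python
def contributors_by_recency(revisions):
    """Return unique contributors ordered by most recent edit.
    `revisions` is an iterable of (creator, created) pairs."""
    latest = {}
    for creator, created in revisions:
        # Remember each contributor's most recent edit time.
        if creator not in latest or created > latest[creator]:
            latest[creator] = created
    return sorted(latest, key=latest.get, reverse=True)
```

Each contributor appears once, ranked by their latest revision rather than their first.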