Description of problem:
The requests that the responders send to the Data Providers are allocated on
the global context to ensure that even if the client disconnects, there is
someone to read the reply. However, we forgot to free the structure that
represents the request, which meant that the sssd_nss process grew over time.
Version-Release number of selected component (if applicable):
1.9.2-4
How reproducible:
quite hard
Steps to Reproduce:
1. set a very low cache timeout
2. run account requests in parallel
3. observe the sssd_nss process growing
Actual results:
sssd_nss process is growing
Expected results:
memory consumption should stay roughly constant
Additional info:
This is not easily reproducible, but apart from running many requests and
watching the consumption grow, a quicker, though more involved, way might be to
check with gdb that no tevent_req structures are allocated on top of the
rctx after a request finishes. Please let me know which approach is preferable
for QE.
Cloned from Pagure issue: https://pagure.io/SSSD/sssd/issue/1600
https://bugzilla.redhat.com/show_bug.cgi?id=869443 (Red Hat Enterprise Linux 6)
Comments
Comment from jhrozek at 2012-10-24 12:32:06
Fields changed
blockedby: =>
blocking: =>
coverity: =>
design: =>
design_review: => 0
feature_milestone: =>
fedora_test_page: =>
owner: somebody => jhrozek
patch: 0 => 1
testsupdated: => 0
Comment from dpal at 2012-10-25 15:14:11
Fields changed
milestone: NEEDS_TRIAGE => SSSD 1.9.3
Comment from jhrozek at 2012-10-29 17:20:41
resolution: => fixed
status: new => closed
Comment from jhrozek at 2017-02-24 14:39:59
Metadata Update from @jhrozek: