```
==15846== 1 errors in context 1 of 1:
==15846== Invalid read of size 4
==15846== at 0x5BFB6B9: talloc_chunk_from_ptr (talloc.c:452)
==15846== by 0x5BFB6B9: __talloc_get_name (talloc.c:1486)
==15846== by 0x5BFB6B9: talloc_check_name (talloc.c:1509)
==15846== by 0x120F58: kcm_op_queue_add (kcmsrv_op_queue.c:136)
==15846== by 0x120F58: kcm_op_queue_send (kcmsrv_op_queue.c:223)
==15846== by 0x12072F: kcm_cmd_send (kcmsrv_ops.c:162)
==15846== by 0x1115B1: kcm_cmd_dispatch (kcmsrv_cmd.c:364)
==15846== by 0x1115B1: kcm_recv (kcmsrv_cmd.c:512)
==15846== by 0x1115B1: kcm_fd_handler (kcmsrv_cmd.c:600)
==15846== by 0x59F1A4F: epoll_event_loop (tevent_epoll.c:728)
==15846== by 0x59F1A4F: epoll_event_loop_once (tevent_epoll.c:930)
==15846== by 0x59EFEC6: std_event_loop_once (tevent_standard.c:114)
==15846== by 0x59EBCAC: _tevent_loop_once (tevent.c:721)
==15846== by 0x59EBECA: tevent_common_loop_wait (tevent.c:844)
==15846== by 0x59EFE66: std_event_loop_wait (tevent_standard.c:145)
==15846== by 0x7C2D0F2: server_loop (server.c:718)
==15846== by 0x110856: main (kcm.c:313)
==15846== Address 0xdccbea0 is 544 bytes inside a block of size 773 free'd
==15846== at 0x4C2FCC8: free (vg_replace_malloc.c:530)
==15846== by 0x5C024B3: _tc_free_poolmem (talloc.c:1000)
==15846== by 0x5C024B3: _tc_free_internal (talloc.c:1141)
==15846== by 0x5BFAAA7: _tc_free_children_internal (talloc.c:1593)
==15846== by 0x5BFAAA7: _tc_free_internal (talloc.c:1104)
==15846== by 0x5BFAAA7: _talloc_free_internal (talloc.c:1174)
==15846== by 0x5BFAAA7: _talloc_free (talloc.c:1716)
==15846== by 0x59ECFC0: tevent_req_received (tevent_req.c:255)
==15846== by 0x59ECFD8: tevent_req_destructor (tevent_req.c:107)
==15846== by 0x5BFAF30: _tc_free_internal (talloc.c:1078)
==15846== by 0x5BFAF30: _talloc_free_internal (talloc.c:1174)
==15846== by 0x5BFAF30: _talloc_free (talloc.c:1716)
==15846== by 0x111711: kcm_cmd_request_done (kcmsrv_cmd.c:391)
==15846== by 0x11E846: kcm_op_get_cache_uuid_list_done (kcmsrv_ops.c:1303)
==15846== by 0x11B1F7: ccdb_sec_list_done (kcmsrv_ccache_secrets.c:1147)
==15846== by 0x119DB0: sec_list_done (kcmsrv_ccache_secrets.c:224)
==15846== by 0x12495A: tcurl_request_done (tev_curl.c:746)
==15846== by 0x12495A: handle_curlmsg_done (tev_curl.c:234)
==15846== by 0x12495A: process_curl_activity.isra.0 (tev_curl.c:245)
==15846== by 0x124E4B: tcurlsock_input_available (tev_curl.c:288)
==15846== Block was alloc'd at
==15846== at 0x4C2EB1B: malloc (vg_replace_malloc.c:299)
==15846== by 0x5BFE03B: __talloc_with_prefix (talloc.c:698)
==15846== by 0x5BFE03B: _talloc_pool (talloc.c:752)
==15846== by 0x5BFE03B: _talloc_pooled_object (talloc.c:820)
==15846== by 0x59ECCAA: _tevent_req_create (tevent_req.c:73)
==15846== by 0x120C43: kcm_op_queue_send (kcmsrv_op_queue.c:206)
==15846== by 0x12072F: kcm_cmd_send (kcmsrv_ops.c:162)
==15846== by 0x1115B1: kcm_cmd_dispatch (kcmsrv_cmd.c:364)
==15846== by 0x1115B1: kcm_recv (kcmsrv_cmd.c:512)
==15846== by 0x1115B1: kcm_fd_handler (kcmsrv_cmd.c:600)
==15846== by 0x59F1A4F: epoll_event_loop (tevent_epoll.c:728)
==15846== by 0x59F1A4F: epoll_event_loop_once (tevent_epoll.c:930)
==15846== by 0x59EFEC6: std_event_loop_once (tevent_standard.c:114)
==15846== by 0x59EBCAC: _tevent_loop_once (tevent.c:721)
==15846== by 0x59EBECA: tevent_common_loop_wait (tevent.c:844)
==15846== by 0x59EFE66: std_event_loop_wait (tevent_standard.c:145)
==15846== by 0x7C2D0F2: server_loop (server.c:718)
```
Cloned from Pagure issue: https://pagure.io/SSSD/sssd/issue/3372
It seems to be a use-after-free.

Valgrind error: see the trace above.
Part of sssd-kcm.log
Comments
Comment from lslebodn at 2017-04-18 18:39:26
I have a suspicion that it is partially related to expired keys.
Comment from jhrozek at 2017-04-19 21:45:16
I have a different theory: I think it's a race condition between the request that just finished, which calls tevent_req_done and thereby frees the queue entry, and the code that first fetches the queue entry from the hash table and then checks it. The entry is first returned from the hash table, then the request is freed, and only afterwards does the queue code check the entry.
Comment from lslebodn at 2017-04-19 22:49:03
I doubt there was any other request, though it is still possible. But I haven't had time to find a reasonable reproducer.
Comment from lslebodn at 2017-04-20 07:15:46
You were probably right. A reasonable reproducer is to kinit from two different terminals with different realms at the same time.
1st terminal:
2nd terminal:
I might reproduce it more often because I run sssd-secrets and sssd-kcm under valgrind, so they are slower. But I am still not sure why it can occur even with a single kinit operation, as described in the ticket.
Comment from lslebodn at 2017-04-20 07:36:21
And it needn't even be a different realm. You can just renew the same ticket twice in parallel as a reasonable reproducer. Just ensure that the ticket is not expired :-)
e.g.

```
for i in {1..2} ; do kinit -R & done
```
Comment from jhrozek at 2017-04-26 23:01:52
Metadata Update from @jhrozek:
Comment from jhrozek at 2017-04-27 17:43:08
Metadata Update from @jhrozek:
Comment from jhrozek at 2017-04-27 17:43:08
Issue linked to Bugzilla: Bug 1446302
Comment from jhrozek at 2017-05-23 19:18:38
Metadata Update from @jhrozek:
Comment from jhrozek at 2017-05-23 19:19:07
Metadata Update from @jhrozek:
Comment from jhrozek at 2017-05-24 16:07:02
master:
Comment from jhrozek at 2017-05-24 16:07:44
Metadata Update from @jhrozek:
Comment from lslebodn at 2017-05-24 16:16:23
Metadata Update from @lslebodn: