dmq: fix memory leak #59
Conversation
Have you observed a leak here? According to the documentation for sip_msg_shm_clone() [1]: "org_msg is cloned along with most of its headers and lumps into one shm memory block (so that a shm_free() on the result will free everything)."
Yes, I observed a leak in the dmq_worker process.
The problem may be earlier in worker_loop():
The msg has already been cloned and, as I understand it, we should not parse a cloned msg, since doing so will link pkg structures to the shm msg. As we already pre-parsed the msg prior to cloning it (when adding the job to the worker queue), the parsed From header should already be available to us.
The headers parsed after cloning can be detected and cleaned up; the code should look like:
Something similar is done in tm after running failure_route. I think From is cloned if parsed before cloning.
Thanks for clarifying, Daniel. As the From header is parsed prior to cloning (and therefore included in the clone), it is pointless to parse it again and have to manually clean it up later, so this part of the code needs changing anyway. I have added a commit to master relating to the above. Andrey, can you confirm whether this also fixes the leak you observed?
Having investigated further, the code which calls parse_from_header() is new, introduced for the purpose of letting the callback function know about the sending node, and I had wrongly assumed we were already parsing the From header prior to cloning. So apologies; I would say this is OK to merge.
Are you sure it is the right fix? Is there no possibility that someone else is parsing the From header before cloning (e.g., from the config file)?
Hmm, maybe this would be correct:

void worker_loop(int id)
{
	dmq_worker_t *worker;
	dmq_job_t *current_job;
	peer_reponse_t peer_response;
	int ret_value;
	int not_parsed;
	dmq_node_t *dmq_node = NULL;

	worker = &workers[id];
	for(;;) {
		LM_DBG("dmq_worker [%d %d] getting lock\n", id, my_pid());
		lock_get(&worker->lock);
		LM_DBG("dmq_worker [%d %d] lock acquired\n", id, my_pid());
		/* multiple lock_release calls might be performed, so remove
		 * from queue until empty */
		do {
			/* fill the response with 0's */
			memset(&peer_response, 0, sizeof(peer_response));
			current_job = job_queue_pop(worker->queue);
			/* job_queue_pop might return NULL if queue is empty */
			if(current_job) {
				/* extract the from uri; remember whether the From
				 * header was already parsed before cloning */
				if(current_job->msg->from->parsed) {
					not_parsed = 0;
				} else {
					not_parsed = 1;
				}
				if(parse_from_header(current_job->msg) < 0) {
					LM_ERR("bad sip message or missing From hdr\n");
				} else {
					dmq_node = find_dmq_node_uri(node_list,
						&((struct to_body *)current_job->msg->from->parsed)->uri);
				}
				ret_value = current_job->f(current_job->msg,
					&peer_response, dmq_node);
				if(ret_value < 0) {
					LM_ERR("running job failed\n");
					continue;
				}
				/* add the body to the reply */
				if(peer_response.body.s) {
					if(set_reply_body(current_job->msg, &peer_response.body,
							&peer_response.content_type) < 0) {
						LM_ERR("error adding lumps\n");
						continue;
					}
				}
				/* send the reply */
				if(slb.freply(current_job->msg, peer_response.resp_code,
						&peer_response.reason) < 0) {
					LM_ERR("error sending reply\n");
				}
				/* if body given, free the lumps and free the body */
				if(peer_response.body.s) {
					del_nonshm_lump_rpl(&current_job->msg->reply_lump);
					pkg_free(peer_response.body.s);
				}
				/* free the From parse result only if it was attached
				 * after cloning (pkg memory linked to the shm msg) */
				if((current_job->msg->from->parsed) && (not_parsed)) {
					free_to(current_job->msg->from->parsed);
				}
				LM_DBG("sent reply\n");
				shm_free(current_job->msg);
				shm_free(current_job);
				worker->jobs_processed++;
			}
		} while(job_queue_size(worker->queue) > 0);
	}
}