
dmq: fix memory leak #59

Merged

merged 1 commit into master from AndreyRybkin-dmq on Feb 2, 2015

Conversation

AndreyRybkin

No description provided.

@charlesrchance
Member

Have you observed a leak here? According to the documentation for sip_msg_shm_clone() [1]:

"org_msg is cloned along with most of its headers and lumps into one shm memory block (so that a shm_free() on the result will free everything)."

  1. http://rpm.kamailio.org/doxygen/sip-router/branch/master/sip__msg__clone_8c.html
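
For reference, a minimal sketch of the intended clone/free pairing (assuming the sip_msg_shm_clone() signature from sip_msg_clone.h; the surrounding code and error handling are illustrative only):

    int shm_len = 0;
    struct sip_msg *clone;

    /* clone org_msg - headers and lumps included - into one shm block */
    clone = sip_msg_shm_clone(org_msg, &shm_len, 1);
    if (clone == NULL) {
            LM_ERR("cloning failed\n");
            return -1;
    }

    /* ... hand the clone to a worker, use it ... */

    /* a single shm_free() releases the msg and everything cloned into it */
    shm_free(clone);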

@AndreyRybkin
Author

Yes, I observed a leak in the dmq_worker process:

fm_status:  count=  3648 size=   1887776 bytes from <core>: parser/parse_from.c: parse_from_header(63)
fm_status:  count=  3648 size=    418016 bytes from <core>: parser/parse_addr_spec.c: parse_to_param(281)
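
(Each fm_status line reports the outstanding allocations from one allocation point: count is the number of live chunks and size the total bytes still held - here, the to_body structures created by parse_from_header() and their parsed parameters.)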

@charlesrchance
Member

The problem may be earlier in worker_loop():

    if (parse_from_header(current_job->msg) < 0) {
            LM_ERR("bad sip message or missing From hdr\n");
    } else {
            dmq_node = find_dmq_node_uri(node_list,
                            &((struct to_body*)current_job->msg->from->parsed)->uri);
    }

The msg has already been cloned and, as I understand it, we should not parse a cloned msg, since doing so links pkg-allocated structures into the shm msg.

As we already pre-parsed the msg prior to cloning it (when adding the job to the worker queue), the parsed From header should already be available to us.
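
In other words (a sketch only, under the assumption that the pre-parse really happens before the job is queued - not the actual dmq code):

    int msg_len = 0;
    struct sip_msg *cloned_msg;

    /* before queueing: parse in pkg memory, then clone - the parsed
     * From structure is copied into the same shm block as the msg */
    if (parse_from_header(msg) < 0) {
            LM_ERR("bad sip message or missing From hdr\n");
            return -1;
    }
    cloned_msg = sip_msg_shm_clone(msg, &msg_len, 1);

    /* in the worker: no re-parse needed, the clone already carries it */
    dmq_node = find_dmq_node_uri(node_list,
                    &((struct to_body*)cloned_msg->from->parsed)->uri);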

@miconda
Member

miconda commented Jan 30, 2015

Headers parsed after cloning can be detected and cleaned up; the code should be something like:

       for( hdr=req->headers ; hdr ; hdr=hdr->next ) {
               if ( hdr->parsed && hdr_allocs_parse(hdr) &&
               (hdr->parsed<(void*)t->uas.request ||
               hdr->parsed>=(void*)t->uas.end_request)) {
                       /* the header's parsed field doesn't point inside the
                        * uas.request chunk -> it was allocated after
                        * cloning -> detach and free it */
                       DBG("removing hdr->parsed %d\n", hdr->type);
                       clean_hdr_field(hdr);
                       hdr->parsed = 0;
               }
       }

Something similar is done in tm after running failure_route.

I think the From header is cloned if it was parsed before cloning.

@charlesrchance
Member

Thanks for clarifying, Daniel.

As the From header is parsed prior to cloning (and therefore included in the clone), it is pointless to parse it again and have to manually clean it up later - so this part of the code needs changing anyway.

I have added a commit to master relating to the above - Andrey, can you confirm if this also fixes the leak you observed?

@charlesrchance
Member

Having investigated further, I see that the code which calls parse_from_header() is new - introduced so that the callback function can know about the sending node - and I had wrongly assumed we were already parsing the From header prior to cloning. So apologies - I would say this is OK to merge.

AndreyRybkin pushed a commit that referenced this pull request Feb 2, 2015
@AndreyRybkin AndreyRybkin merged commit bd73d7a into master Feb 2, 2015
@AndreyRybkin AndreyRybkin deleted the AndreyRybkin-dmq branch February 2, 2015 06:31
@miconda
Member

miconda commented Feb 2, 2015

Are you sure it is the right fix? Is there no possibility that someone else parses the From header before cloning (e.g., from the config file)?
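
(For example - a hypothetical config snippet - merely evaluating $fu before handing the request to dmq forces the From header to be parsed, in pkg memory, before the message is cloned:)

    request_route {
            if ($rm == "KDMQ") {
                    # reading $fu parses the From header here,
                    # before dmq_handle_message() clones the msg
                    xlog("L_DBG", "KDMQ request from $fu\n");
                    dmq_handle_message();
                    exit;
            }
    }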

@AndreyRybkin
Author

Hmm, maybe this would be more correct?

void worker_loop(int id)
{
    dmq_worker_t* worker;
    dmq_job_t* current_job;
    peer_reponse_t peer_response;
    int ret_value;
    int not_parsed;
    dmq_node_t *dmq_node = NULL;

    worker = &workers[id];
    for(;;) {
        LM_DBG("dmq_worker [%d %d] getting lock\n", id, my_pid());
        lock_get(&worker->lock);
        LM_DBG("dmq_worker [%d %d] lock acquired\n", id, my_pid());
        /* multiple lock_release calls might be performed, so remove
         * from queue until empty */
        do {
            /* fill the response with 0's */
            memset(&peer_response, 0, sizeof(peer_response));
            current_job = job_queue_pop(worker->queue);
            /* job_queue_pop might return NULL if queue is empty */
            if(current_job) {
                /* extract the from uri */
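                /* remember whether From was already parsed before the
                 * clone, so that we only free what we allocate below */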
                if (current_job->msg->from->parsed) {
                    not_parsed = 0;
                } else {
                    not_parsed = 1;
                }
                if (parse_from_header(current_job->msg) < 0) {
                    LM_ERR("bad sip message or missing From hdr\n");
                } else {
                    dmq_node = find_dmq_node_uri(node_list, &((struct to_body*)current_job->msg->from->parsed)->uri);
                }

                ret_value = current_job->f(current_job->msg, &peer_response, dmq_node);
                if(ret_value < 0) {
                    LM_ERR("running job failed\n");
                    continue;
                }
                /* add the body to the reply */
                if(peer_response.body.s) {
                    if(set_reply_body(current_job->msg, &peer_response.body,
                                &peer_response.content_type) < 0) {
                        LM_ERR("error adding lumps\n");
                        continue;
                    }
                }
                /* send the reply */
                if(slb.freply(current_job->msg, peer_response.resp_code,
                            &peer_response.reason) < 0)
                {
                    LM_ERR("error sending reply\n");
                }

                /* if body given, free the lumps and free the body */
                if(peer_response.body.s) {
                    del_nonshm_lump_rpl(&current_job->msg->reply_lump);
                    pkg_free(peer_response.body.s);
                }
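                /* free the pkg to_body only if this loop parsed From
                 * itself; a From parsed before cloning lives inside the
                 * shm clone and is released by the shm_free() below */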
                if((current_job->msg->from->parsed)&&(not_parsed)){
                    free_to(current_job->msg->from->parsed);
                }

                LM_DBG("sent reply\n");
                shm_free(current_job->msg);
                shm_free(current_job);
                worker->jobs_processed++;
            }
        } while(job_queue_size(worker->queue) > 0);
    }
}
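
(I.e., free_to() runs only for a to_body allocated by this loop itself; a From parsed before cloning stays in the shm clone and is freed together with it.)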
