
platform: fix mem_trace trace level guard #5614

Merged
merged 1 commit on Dec 29, 2017

Conversation

maciejbocianski
Contributor

Description

This change fixes the "trace inside trace" issue in the mem_trace module.
It prevents tracing of the malloc/free calls made internally by __wrap__realloc_r/__wrap__calloc_r/SUB_REALLOC/SUB_CALLOC.

Status

READY

Migrations

Changes the mbed_mem_trace module API:
adds the functions mbed_mem_trace_level_incr and mbed_mem_trace_level_decr

YES
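
For context, a minimal sketch of the failure mode being fixed (hypothetical, simplified allocator wrappers and callback; not the actual mbed_alloc_wrappers.cpp code): when realloc is built on top of malloc/free and every allocation wrapper reports to the trace callback, a single realloc() produces extra trace events for its internal malloc/free calls.

#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for the registered mbed_mem_trace callback. */
static void trace_callback(char op, void *ptr, size_t size)
{
    (void)op; (void)ptr; (void)size;   /* a real callback would log the event */
}

static void *traced_malloc(size_t size)
{
    void *ptr = malloc(size);
    trace_callback('m', ptr, size);    /* also fires when reached from traced_realloc() */
    return ptr;
}

static void traced_free(void *ptr)
{
    free(ptr);
    trace_callback('f', ptr, 0);       /* likewise fires for realloc's internal free */
}

static void *traced_realloc(void *old_ptr, size_t old_size, size_t new_size)
{
    /* Models an allocator whose realloc is built from malloc/free: without a
     * "trace inside trace" guard the nested calls above are reported too, so
     * one realloc() yields three trace events instead of one. */
    void *new_ptr = traced_malloc(new_size);
    if (new_ptr && old_ptr) {
        memcpy(new_ptr, old_ptr, old_size < new_size ? old_size : new_size);
        traced_free(old_ptr);
    }
    trace_callback('r', new_ptr, new_size);
    return new_ptr;
}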

@0xc0170
Contributor

0xc0170 commented Nov 29, 2017

This change fixes the "trace inside trace" issue in the mem_trace module

Was this reported anywhere? Can you please amend your commit to add more detail on how this fixes the issue and why a new API is being introduced - how should it be used?

@maciejbocianski
Contributor Author

The problem was discovered during mbed_drivers-mem_trace test refactoring.
As I understand it (and according to the description in mbed_mem_trace.c), we don't want to trace nested calls of malloc/free inside realloc/calloc - the current implementation doesn't achieve that.
Maybe I'm wrong?

mbed_mem_trace.c:

/* 'trace_level' guards "trace inside trace" situations (for example, the implementation
 * of realloc() might call malloc() internally, and since malloc() is also traced, this could
 * result in two calls to the callback function instead of one. */
static uint8_t trace_level;

@bulislaw
Member

bulislaw commented Dec 1, 2017

Could we change the nomenclature used to lock/unlock? And it can be binary - I don't think we really need levels.

@maciejbocianski maciejbocianski force-pushed the mem_trace_fix branch 2 times, most recently from 18b5776 to 5500f5d on December 4, 2017 09:21
@maciejbocianski
Contributor Author

@bulislaw updated

@maciejbocianski
Contributor Author

ping

@0xc0170
Contributor

0xc0170 commented Dec 7, 2017

I do not fully understand this fix. trave_level (typo? it should be trace_level) was implemented internally to prevent this.

Why do we need an API that exposes locking? Can it not be hidden and done internally? Is a user expected to call it every time the mem_trace API is invoked? I can see that we, as a caller, use mem_trace_mutex above; why is this done on top of the mutex (it's invoked before the mutex)?

@maciejbocianski
Contributor Author

maciejbocianski commented Dec 7, 2017

Why do we need an API that exposes locking? Can it not be hidden and done internally? Is a user expected to call it every time the mem_trace API is invoked? I can see that we, as a caller, use mem_trace_mutex above; why is this done on top of the mutex (it's invoked before the mutex)?

mem_trace_mutex protects only a single trace callback call.
When we enter realloc we don't want to trace its internal calls to malloc/free; this is why we increment trace_lock_count at the beginning of the function.
Without this lock, realloc would also report the malloc/free traces.
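
To illustrate the placement, a simplified sketch assuming the lock/unlock naming the API settled on; real_realloc and caller_addr are hypothetical stand-ins and the mbed_mem_trace_* call signature is simplified:

void *guarded_realloc(void *old_ptr, size_t size)
{
    void *new_ptr;
    mbed_mem_trace_lock();                  /* taken first: nested malloc/free wrappers now see the trace as locked */
    new_ptr = real_realloc(old_ptr, size);  /* hypothetical: stands for __real__realloc_r() */
    mbed_mem_trace_realloc(new_ptr, old_ptr, size, caller_addr());  /* signature simplified */
    mbed_mem_trace_unlock();
    return new_ptr;
}

The trace functions only invoke the user callback for the outermost lock level, so the malloc/free performed inside real_realloc() are not reported.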

@0xc0170
Contributor

0xc0170 commented Dec 7, 2017

How was the previous internal level variable failing to prevent this?

@maciejbocianski
Contributor Author

maciejbocianski commented Dec 7, 2017

Previously it protected only the mem_trace_cb call.
And since it's impossible to call mem_trace_cb recursively, it protected nothing and was redundant with mem_trace_mutex.
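
For contrast, roughly what the old guard amounted to (reconstructed from this discussion, not the original source): the level was only bumped around the callback invocation itself, and since the callback can never re-enter, the check never triggered.

/* Pre-fix behaviour, approximately: */
void *old_mem_trace_malloc(void *ptr, size_t size, void *caller)
{
    if (mem_trace_cb && (trace_level == 0)) {
        trace_level++;
        mem_trace_cb(MBED_MEM_TRACE_MALLOC, ptr, caller, size);  /* never re-entered */
        trace_level--;
    }
    return ptr;
}

Nothing here stops the malloc/free performed inside __wrap__realloc_r from reaching their own trace calls, which is why the new guard is taken in the wrappers themselves.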

@0xc0170
Contributor

0xc0170 commented Dec 11, 2017

/morph build

@mbed-ci

mbed-ci commented Dec 11, 2017

Build : SUCCESS

Build number : 675
Build artifacts/logs : http://mbed-os.s3-website-eu-west-1.amazonaws.com/?prefix=builds/5614/

Triggering tests

/morph test
/morph uvisor-test
/morph export-build

@mbed-ci

mbed-ci commented Dec 11, 2017

@mbed-ci

mbed-ci commented Dec 11, 2017

@@ -91,6 +91,9 @@ extern "C" {

extern "C" void * __wrap__malloc_r(struct _reent * r, size_t size) {
void *ptr = NULL;
#ifdef MBED_MEM_TRACING_ENABLED
mbed_mem_trace_lock();
Contributor

The memory trace mutex should be held here, otherwise another thread may have acquired this lock but not the mutex, causing logging on other threads to be dropped. Could mbed_mem_trace_lock() include mem_trace_mutex->lock() to simplify this?

Contributor Author

I mistakenly assumed that the whole __wrap__malloc_r is protected by __malloc_lock, but actually only __real__malloc_r is protected.

Contributor Author

@maciejbocianski maciejbocianski Dec 12, 2017

Sure, we could encapsulate mem_trace_mutex inside the mbed_mem_trace.c module, but since it's a C file we would have to use a raw OS mutex.
Another solution could be changing this module to cpp and using SingletonPtr<PlatformMutex>.
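
A sketch of that second option, assuming the module were renamed to mbed_mem_trace.cpp (hypothetical; header paths as used elsewhere in mbed OS):

// mbed_mem_trace.cpp (hypothetical)
#include "platform/SingletonPtr.h"
#include "platform/PlatformMutex.h"

static SingletonPtr<PlatformMutex> mem_trace_mutex;
static uint8_t trace_lock_count;

extern "C" void mbed_mem_trace_lock()
{
    mem_trace_mutex->lock();
    trace_lock_count++;
}

extern "C" void mbed_mem_trace_unlock()
{
    trace_lock_count--;
    mem_trace_mutex->unlock();
}

SingletonPtr defers construction of the mutex until first use, which avoids static-initialization-order problems for early allocations.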

Contributor Author

I'm not sure which solution would be acceptable.

static uint8_t trace_level;
static uint8_t trace_lock_count;

#define TRACE_IS_LOCKED() (trace_lock_count > 1)
Contributor

This returns false even if the lock count is 1. Maybe a different name, such as TRACE_FIRST_LOCK, would be more appropriate.
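
For example (illustrative only; the final macro name and form may differ):

/* True only for the outermost lock, which is when the callback should fire: */
#define TRACE_FIRST_LOCK() (trace_lock_count < 2)

if (mem_trace_cb && TRACE_FIRST_LOCK()) {
    mem_trace_cb(MBED_MEM_TRACE_MALLOC, ptr, caller, size);
}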

@maciejbocianski
Contributor Author

updated
@c1728p9

void mbed_mem_trace_lock()
{
    mem_trace_mutex->lock();
    core_util_atomic_incr_u8(&trace_lock_count, 1);
Contributor

nitpick - this doesn't need to be atomically incremented since this is protected by the mutex.
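
In other words (a sketch of what the nitpick suggests, not the merged diff):

void mbed_mem_trace_lock()
{
    mem_trace_mutex->lock();
    trace_lock_count++;   /* the mutex already serializes this path, so a plain increment suffices */
}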

Contributor Author

Updated

@0xc0170
Contributor

0xc0170 commented Dec 19, 2017

/morph build

@mbed-ci

mbed-ci commented Dec 19, 2017

Build : SUCCESS

Build number : 719
Build artifacts/logs : http://mbed-os.s3-website-eu-west-1.amazonaws.com/?prefix=builds/5614/

Triggering tests

/morph test
/morph uvisor-test
/morph export-build

@mbed-ci

mbed-ci commented Dec 19, 2017

@mbed-ci

mbed-ci commented Dec 19, 2017

@0xc0170
Contributor

0xc0170 commented Dec 19, 2017

/morph test

@mbed-ci

mbed-ci commented Dec 19, 2017

@0xc0170
Contributor

0xc0170 commented Dec 22, 2017

/morph build

@mbed-ci

mbed-ci commented Dec 22, 2017

Build : SUCCESS

Build number : 754
Build artifacts/logs : http://mbed-os.s3-website-eu-west-1.amazonaws.com/?prefix=builds/5614/

Triggering tests

/morph test
/morph uvisor-test
/morph export-build

@mbed-ci

mbed-ci commented Dec 22, 2017

@mbed-ci

mbed-ci commented Dec 22, 2017

@cmonr
Contributor

cmonr commented Dec 28, 2017

Restarting builds. Results were not reported.

/morph build

@mbed-ci

mbed-ci commented Dec 28, 2017

Build : SUCCESS

Build number : 765
Build artifacts/logs : http://mbed-os.s3-website-eu-west-1.amazonaws.com/?prefix=builds/5614/

Triggering tests

/morph test
/morph uvisor-test
/morph export-build

@mbed-ci

mbed-ci commented Dec 28, 2017

@cmonr
Contributor

cmonr commented Dec 28, 2017

Restarting uVisor test due to odd failure with K64F.

/morph uvisor-test

@mbed-ci

mbed-ci commented Dec 29, 2017
