
Commit 34600f0

Steven Rostedt (rostedt) authored and committed
tracing: Fix race with max_tr and changing tracers
There's a race condition between the setting of a new tracer and the update of the max trace buffers (the swap). When a new tracer is added, it sets current_trace to nop_trace before disabling the old tracer. At this moment, if the old tracer uses update_max_tr(), the update may trigger the warning against !current_trace->use_max_tr, as nop_trace doesn't have that set.

As update_max_tr() requires that interrupts be disabled, we can add a check to see if current_trace == nop_trace and bail if so. Then when disabling the current_trace, set it to nop_trace and run synchronize_sched(). This will make sure all calls to update_max_tr() have completed (it was called with interrupts disabled).

As a cleanup, this commit also removes shrinking and recreating the max_tr buffer if the old and new tracers both have use_max_tr set. The old way used to always shrink the buffer and then expand it for the next tracer, which is a waste of time.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
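Condensed, the ordering the fix relies on looks roughly like the sketch below. This is only an illustration distilled from the diff further down, not the full functions: both bodies are heavily elided (the elisions are marked with comments), while the identifiers themselves (current_trace, nop_trace, update_max_tr(), tracing_set_tracer(), synchronize_sched()) are taken from the patch.

/* Illustrative sketch only; see the full diff below for the real code. */

void update_max_tr(struct trace_array *tr, struct task_struct *tsk, int cpu)
{
	WARN_ON_ONCE(!irqs_disabled());

	/* A tracer switch is in progress; leave the max buffer alone. */
	if (current_trace == &nop_trace)
		return;

	if (WARN_ON_ONCE(!current_trace->use_max_tr))
		return;

	/* ... swap the buffers under ftrace_max_lock ... */
}

static int tracing_set_tracer(const char *buf)
{
	struct tracer *t;	/* the new tracer, looked up earlier */
	bool had_max_tr;

	/* ... find t, disable and reset the old tracer ... */

	had_max_tr = current_trace && current_trace->use_max_tr;
	current_trace = &nop_trace;

	if (had_max_tr && !t->use_max_tr) {
		/*
		 * update_max_tr() runs with interrupts disabled, so once
		 * synchronize_sched() returns, no CPU can still be inside
		 * a call that saw the old tracer; shrinking max_tr is now
		 * safe.
		 */
		synchronize_sched();
		/* ... shrink max_tr ... */
	}

	/* ... install t; grow max_tr only if the old tracer didn't use it ... */
	return 0;
}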
1 parent 0a71e4c commit 34600f0

File tree

1 file changed: +22 -7 lines changed


kernel/trace/trace.c

Lines changed: 22 additions & 7 deletions
@@ -709,10 +709,14 @@ update_max_tr(struct trace_array *tr, struct task_struct *tsk, int cpu)
 		return;
 
 	WARN_ON_ONCE(!irqs_disabled());
-	if (!current_trace->use_max_tr) {
-		WARN_ON_ONCE(1);
+
+	/* If we disabled the tracer, stop now */
+	if (current_trace == &nop_trace)
 		return;
-	}
+
+	if (WARN_ON_ONCE(!current_trace->use_max_tr))
+		return;
+
 	arch_spin_lock(&ftrace_max_lock);
 
 	tr->buffer = max_tr.buffer;
@@ -3185,6 +3189,7 @@ static int tracing_set_tracer(const char *buf)
 	static struct trace_option_dentry *topts;
 	struct trace_array *tr = &global_trace;
 	struct tracer *t;
+	bool had_max_tr;
 	int ret = 0;
 
 	mutex_lock(&trace_types_lock);
@@ -3211,7 +3216,19 @@ static int tracing_set_tracer(const char *buf)
 	trace_branch_disable();
 	if (current_trace && current_trace->reset)
 		current_trace->reset(tr);
-	if (current_trace && current_trace->use_max_tr) {
+
+	had_max_tr = current_trace && current_trace->use_max_tr;
+	current_trace = &nop_trace;
+
+	if (had_max_tr && !t->use_max_tr) {
+		/*
+		 * We need to make sure that the update_max_tr sees that
+		 * current_trace changed to nop_trace to keep it from
+		 * swapping the buffers after we resize it.
+		 * The update_max_tr is called from interrupts disabled
+		 * so a synchronized_sched() is sufficient.
+		 */
+		synchronize_sched();
 		/*
 		 * We don't free the ring buffer. instead, resize it because
 		 * The max_tr ring buffer has some state (e.g. ring->clock) and
@@ -3222,10 +3239,8 @@ static int tracing_set_tracer(const char *buf)
 	}
 	destroy_trace_option_files(topts);
 
-	current_trace = &nop_trace;
-
 	topts = create_trace_option_files(t);
-	if (t->use_max_tr) {
+	if (t->use_max_tr && !had_max_tr) {
 		/* we need to make per cpu buffer sizes equivalent */
 		ret = resize_buffer_duplicate_size(&max_tr, &global_trace,
 						   RING_BUFFER_ALL_CPUS);
