
Revert "tests: tickless_concept: Fix slicing time measurement"

Unfortunately this seems to have introduced spurious failures on (at
least) qemu_x86 and qemu_xtensa.

The change limits the timeslice tolerance to +/- 1ms, which isn't
necessarily correct when the tick rate is less than 1ms (though it
will probably work on deterministic hardware as long as the system is
hitting the target at exactly the right tick), and isn't even
theoretically achievable on emulation environments where timing
granularity is limited by the host scheduling quantum.

What this needs to do is check that the deadline is off by at most one
tick, and trust the platform integration to have set the tick rate
appropriately.

(I do worry that the earlier version of the test was trying to set the
limit at half the TICKLESS_IDLE_THRESHOLD, though -- that seems weird,
and hints that maybe the test is trying to do something more
elaborate?)

Fixes #17063.

This reverts commit 62c71dc.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
andyross authored and carlescufi committed Jun 26, 2019
1 parent c790d5b commit c0630346c87bff3f7e35fdc1792ebc20031c7cbf
Showing with 11 additions and 39 deletions.
  1. +11 −39 tests/kernel/tickless/tickless_concept/src/main.c
@@ -23,16 +23,9 @@ static struct k_thread tdata[NUM_THREAD];
 
 /*slice size is set as half of the sleep duration*/
 #define SLICE_SIZE __ticks_to_ms(CONFIG_TICKLESS_IDLE_THRESH >> 1)
-#define SLICE_SIZE_CYCLES \
-	(u32_t)(SLICE_SIZE * sys_clock_hw_cycles_per_sec() / 1000)
-
-/*mimimum slice duration accepted by the test: (SLICE_SIZE - 1ms) */
-#define SLICE_SIZE_MIN_CYCLES \
-	(u32_t)(SLICE_SIZE_CYCLES - (sys_clock_hw_cycles_per_sec() / 1000))
-
-/*maximum slice duration accepted by the test: (SLICE_SIZE + 1ms) */
-#define SLICE_SIZE_MAX_CYCLES \
-	(u32_t)(SLICE_SIZE_CYCLES + (sys_clock_hw_cycles_per_sec() / 1000))
+/*maximum slice duration accepted by the test*/
+#define SLICE_SIZE_LIMIT __ticks_to_ms((CONFIG_TICKLESS_IDLE_THRESH >> 1) + 1)
 
 /*align to millisecond boundary*/
 #if defined(CONFIG_ARCH_POSIX)
@@ -51,30 +44,19 @@ static struct k_thread tdata[NUM_THREAD];
 	} while (0)
 #endif
 K_SEM_DEFINE(sema, 0, NUM_THREAD);
-static u32_t elapsed_slice;
-
-static u32_t cycles_delta(u32_t *reftime)
-{
-	u32_t now, delta;
-
-	now = k_cycle_get_32();
-	delta = now - *reftime;
-	*reftime = now;
-
-	return delta;
-}
+static s64_t elapsed_slice;
 
 static void thread_tslice(void *p1, void *p2, void *p3)
 {
-	u32_t t = cycles_delta(&elapsed_slice);
+	s64_t t = k_uptime_delta(&elapsed_slice);
 
-	TC_PRINT("elapsed slice %d, expected: <%d, %d>\n",
-		 t, SLICE_SIZE_MIN_CYCLES, SLICE_SIZE_MAX_CYCLES);
+	TC_PRINT("elapsed slice %lld, expected: <%lld, %lld>\n",
+		 t, SLICE_SIZE, SLICE_SIZE_LIMIT);
 
 	/**TESTPOINT: verify slicing scheduler behaves as expected*/
-	zassert_true(t >= SLICE_SIZE_MIN_CYCLES, NULL);
-	zassert_true(t <= SLICE_SIZE_MAX_CYCLES, NULL);
+	zassert_true(t >= SLICE_SIZE, NULL);
+	/*less than one tick delay*/
+	zassert_true(t <= SLICE_SIZE_LIMIT, NULL);
 
 	/*keep the current thread busy for more than one slice*/
 	k_busy_wait(1000 * SLEEP_TICKLESS);
@@ -126,23 +108,13 @@ void test_tickless_slice(void)
 	/*enable time slice*/
 	k_sched_time_slice_set(SLICE_SIZE, K_PRIO_PREEMPT(0));
 
-	/*synchronize to tick boundary*/
-	k_sleep(1);
-
 	/*create delayed threads with equal preemptive priority*/
 	for (int i = 0; i < NUM_THREAD; i++) {
 		tid[i] = k_thread_create(&tdata[i], tstack[i], STACK_SIZE,
 					 thread_tslice, NULL, NULL, NULL,
-					 K_PRIO_PREEMPT(0), 0,
-					 SLICE_SIZE + __ticks_to_ms(1));
+					 K_PRIO_PREEMPT(0), 0, SLICE_SIZE);
 	}
 
-	/*synchronize to tick boundary.*/
-	k_sleep(1);
-
-	/*set reference time to last tick boundary*/
-	elapsed_slice = z_tick_get_32() * sys_clock_hw_cycles_per_tick();
-
+	k_uptime_delta(&elapsed_slice);
 	/*relinquish CPU and wait for each thread to complete*/
 	for (int i = 0; i < NUM_THREAD; i++) {
 		k_sem_take(&sema, K_FOREVER);