
Conversation

@kv2019i (Collaborator) commented Jan 29, 2026

This series starts building support for running SOF LL tasks in user-space (on platforms where Zephyr user-space is supported). We already have support for DP tasks, so with both LL and DP supported, in theory all audio processing can be moved to user-space and run in a separate memory space. This isolates audio code from direct hardware access and protects kernel memory and device driver state.

This PR contains initial support for the LL scheduler and adds a separate test case that mimics the usage of a SOF audio pipeline, without yet bringing in any audio dependencies.

The telemetry infra calls privileged timer functions, so
if the Low-Latency tasks are run in user-space, telemetry must
be disabled.

Signed-off-by: Kai Vehmanen <kai.vehmanen@linux.intel.com>
The load tracking for Low-Latency tasks depends on low-overhead
access to a cycle counter (e.g. CCOUNT on Xtensa), which is not
currently available from user-space tasks. Add a dependency to
ensure the LL stats can only be enabled if LL tasks are run in
kernel mode.

Signed-off-by: Kai Vehmanen <kai.vehmanen@linux.intel.com>
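A Kconfig dependency of this shape mirrors the telemetry dependency added earlier in the series (the stats symbol name below is illustrative; the actual symbol lives in src/schedule/Kconfig):

```kconfig
config SCHEDULE_LL_STATS_LOG
	bool "Log low-latency scheduler load statistics"
	depends on !SOF_USERSPACE_LL
	help
	  Load tracking reads a low-overhead cycle counter (e.g. CCOUNT
	  on Xtensa), which is only accessible in kernel mode, so the
	  stats require LL tasks to run in kernel mode.
```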
Add an option to build SOF with support for running the LL scheduler
in user-space. This commit adds initial support in the scheduler
and does not yet allow running the full SOF application with the new
scheduler configuration, but has enough functionality to run
scheduler-level tests.

No functional change to the default build configuration, where the
LL scheduler runs in kernel mode, or to platforms with no user-space
support.

Signed-off-by: Kai Vehmanen <kai.vehmanen@linux.intel.com>
Add a test case that runs tasks with the low-latency (LL) scheduler in
user-space. The test does not yet use any audio pipeline functionality,
but uses similar interfaces towards the SOF scheduler.

Signed-off-by: Kai Vehmanen <kai.vehmanen@linux.intel.com>
Multiple style variants are used for CMakeLists.txt files in SOF,
and this file mixes several of them in the same file. Fix this and
align to Zephyr style (2-space indent, no tabs, no space before the
opening parenthesis).

Signed-off-by: Kai Vehmanen <kai.vehmanen@linux.intel.com>
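After the cleanup, conditionals in the file follow the Zephyr convention, e.g.:

```cmake
# Zephyr/upstream CMake style: 2-space indent, no tabs,
# no space between "if" and the opening parenthesis
if(CONFIG_SOF_BOOT_TEST_STANDALONE AND CONFIG_SOF_USERSPACE_LL)
  zephyr_library_sources(userspace/test_ll_task.c)
endif()
```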
Copilot AI left a comment

Pull request overview

This PR adds initial support for running SOF Low-Latency (LL) scheduler tasks in Zephyr user-space, providing memory protection and isolation between audio code and kernel resources.

Changes:

  • Adds user-space LL scheduler support with dedicated memory domains and heap management
  • Replaces spinlocks with mutexes for user-space compatibility
  • Introduces test case to validate LL task creation and execution in user-space mode

Reviewed changes

Copilot reviewed 11 out of 11 changed files in this pull request and generated 4 comments.

Summary per file:

  • zephyr/test/userspace/test_ll_task.c: New test case validating user-space LL scheduler functionality with task lifecycle management
  • zephyr/test/userspace/README.md: Documentation update describing the new LL scheduler test
  • zephyr/test/CMakeLists.txt: Build configuration to include the LL task test when CONFIG_SOF_USERSPACE_LL is enabled
  • zephyr/Kconfig: New CONFIG_SOF_USERSPACE_LL option for enabling user-space LL pipelines
  • src/schedule/zephyr_ll.c: Core LL scheduler implementation modified to support user-space execution with dynamic memory allocation
  • src/schedule/zephyr_domain.c: Domain thread management updated for user-space with mutex-based synchronization
  • src/schedule/Kconfig: Statistics logging disabled for the user-space LL scheduler
  • src/init/init.c: Initialization hook for user-space LL resources
  • src/include/sof/schedule/ll_schedule_domain.h: Header updates exposing user-space LL APIs and mutex-based locking
  • src/include/sof/schedule/ll_schedule.h: API declarations for user-space LL heap and memory domain management
  • src/debug/telemetry/Kconfig: Telemetry disabled when user-space LL is enabled


- Test Zephyr DAI interface, together with SOF DMA
wrapper from a user thread. Mimics the call flows done in
sof/src/audio/dai-zephyr.c. Use cavstool.py as host runner.
- test_ll_test.c
Copilot AI commented Jan 29, 2026

Corrected spelling of 'test_ll_test.c' to 'test_ll_task.c'.

Suggested change:
  old: - test_ll_test.c
  new: - test_ll_task.c

Collaborator Author replied:

Thanks, fixed in V2.

sch->lock = k_object_alloc(K_OBJ_MUTEX);
if (!sch->lock) {
tr_err(&ll_tr, "mutex allocation failed");
sof_heap_free(sch->heap, sch);
Copilot AI commented Jan 29, 2026

sch->heap is used here before it is initialized. The heap field is only set in the CONFIG_SOF_USERSPACE_LL branch, and this error path runs within that branch before the heap assignment at line 611. Use the local heap variable instead.

Suggested change:
  old: sof_heap_free(sch->heap, sch);
  new: sof_heap_free(heap, sch);

Collaborator commented:

sch_heap isn't set yet

Collaborator Author replied:

Ack, this is a bug, will be fixed in V2.
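The hazard can be illustrated with a plain-C stand-in (sof_heap_alloc()/sof_heap_free() replaced by malloc-based stubs; all names here are illustrative, not the actual SOF code). Freeing through sch->heap in the error path passes a heap handle that is still NULL, while freeing through the local handle is safe:

```c
#include <assert.h>
#include <stdlib.h>

struct heap;			/* opaque stand-in for the SOF heap handle */

struct scheduler {
	struct heap *heap;	/* assigned only after init succeeds */
	void *lock;
};

/* malloc-based stand-ins for sof_heap_alloc()/sof_heap_free() */
static void *stub_heap_alloc(struct heap *h, size_t size)
{
	(void)h;
	return calloc(1, size);
}

static void stub_heap_free(struct heap *h, void *ptr)
{
	(void)h;
	free(ptr);
}

static int scheduler_init(struct heap *heap, int fail_lock_alloc)
{
	struct scheduler *sch = stub_heap_alloc(heap, sizeof(*sch));

	if (!sch)
		return -1;

	if (fail_lock_alloc) {
		/*
		 * Error path: sch->heap is still NULL at this point, so
		 * free through the local 'heap' handle, as the review
		 * comment suggests, not through sch->heap.
		 */
		stub_heap_free(heap, sch);
		return -1;
	}

	sch->heap = heap;	/* field assigned only on success */
	return 0;
}
```

The suggested change then amounts to using the local heap variable in that one call.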

Comment on lines 484 to +486
ll_sch_domain_set_pdata(domain, zephyr_domain);

struct zephyr_domain_thread *dt = zephyr_domain->domain_thread + cpu_get_id();
Copilot AI commented Jan 29, 2026

Variable declaration should be at the beginning of the function or block. Move this declaration to the top of the function for consistency with C89/C90 style if required by the project, or to improve readability.

Suggested change:
  old: ll_sch_domain_set_pdata(domain, zephyr_domain);
  old: struct zephyr_domain_thread *dt = zephyr_domain->domain_thread + cpu_get_id();
  new: struct zephyr_domain_thread *dt;
  new: ll_sch_domain_set_pdata(domain, zephyr_domain);
  new: dt = zephyr_domain->domain_thread + cpu_get_id();

Comment on lines 283 to 285
#define timer_domain_init(timer, clk) zephyr_domain_init(clk)
k_tid_t zephyr_domain_thread_tid(struct ll_schedule_domain *domain);
#endif
Copilot AI commented Jan 29, 2026

Declaration of zephyr_domain_thread_tid is placed outside the CONFIG_SOF_USERSPACE_LL guard (lines 103-109) where it logically belongs based on its implementation. This function is only implemented when CONFIG_SOF_USERSPACE_LL is enabled, so its declaration should be within the same conditional block to prevent link errors.

Suggested change:
  old: #define timer_domain_init(timer, clk) zephyr_domain_init(clk)
  old: k_tid_t zephyr_domain_thread_tid(struct ll_schedule_domain *domain);
  old: #endif
  new: #define timer_domain_init(timer, clk) zephyr_domain_init(clk)
  new: #if CONFIG_SOF_USERSPACE_LL
  new: k_tid_t zephyr_domain_thread_tid(struct ll_schedule_domain *domain);
  new: #endif
  new: #endif

Collaborator Author replied:

Ack, will address this in V2.

@kv2019i (Collaborator, Author) commented Jan 29, 2026

Example test run (on Intel PTL):

[    0.000000] <inf> init: print_version_banner: FW ABI 0x301d001 DBG ABI 0x5003000 tags SOF:v2.14-pre-rc-386-g773835cd8fcd zephyr:v4.3.0-4334-gc1a2b3be459d src hash 0xeab6f675 (ref hash 0xeab6f675)
[    0.000000] <dbg> ll_schedule: zephyr_ll_heap_init: init ll heap 0xa0239000, size 94208 (cached)
[    0.000000] <dbg> ll_schedule: zephyr_ll_heap_init: init ll heap 0x40239000, size 94208 (uncached)
[    0.000000] <dbg> ll_schedule: zephyr_ll_scheduler_init: init on core 0

[    0.000000] <dbg> ll_schedule: zephyr_ll_scheduler_init: ll-scheduler init done, sch 0xa02391c0 sch->lock 0x400cd4f8
[    0.000000] <dbg> ll_schedule: zephyr_ll_task_init: ll-scheduler task 0xa01aaa00 init
[    0.000000] <inf> ipc: ipc_init: SOF_BOOT_TEST_STANDALONE, disabling IPC.
*** Booting Zephyr OS build v4.3.0-4334-gc1a2b3be459d ***
===================================================================
Running TESTSUITE userspace_ll
===================================================================
START - ll_task_test
[    0.000095] <dbg> ll_schedule: zephyr_ll_task_init: ll-scheduler task 0x400d5000 init
[    0.000095] <inf> sof_boot_test: ll_task_test: task init done
[    0.000095] <inf> ll_schedule: zephyr_ll_task_schedule_common: task add 0xa02392c0 0xa00ca950U priority 0 flags 0x0
[    0.000095] <dbg> ll_schedule: zephyr_domain_register: entry
[    0.000095] <dbg> ll_schedule: zephyr_domain_register: Grant access to 0x400cd4b8 (core 0, thread 0x400cd540)
[    0.000095] <dbg> ll_schedule: zephyr_domain_register: Added access to 0x40239100
[    0.000095] <inf> ll_schedule: zephyr_domain_register: zephyr_domain_register domain->type 1 domain->clk 0 domain->ticks_per_ms 38400 period 1000
[    0.000095] <dbg> ll_schedule: zephyr_domain_thread_tid: entry
[    0.000095] <dbg> ll_schedule: zephyr_domain_thread_tid: entry
[    0.000095] <dbg> ll_schedule: zephyr_domain_thread_tid: entry
[    0.000095] <dbg> ll_schedule: zephyr_ll_task_schedule_common: granting access to lock 0x400cd4f8 for thread 0x400cd540
[    0.000095] <dbg> ll_schedule: zephyr_domain_thread_tid: entry
[    0.000095] <dbg> ll_schedule: zephyr_ll_task_schedule_common: granting access to domain lock 0x40239090 for thread 0x400cd540
[    0.000095] <inf> sof_boot_test: ll_task_test: task scheduled and running
[    0.000095] <inf> ll_schedule: zephyr_domain_thread_fn: ll core 0 thread starting
[    0.000095] <dbg> ll_schedule: zephyr_ll_run: entry
[    0.000095] <inf> sof_boot_test: task_callback: entry
[    0.000095] <dbg> ll_schedule: zephyr_ll_run: entry
[    0.000095] <inf> sof_boot_test: task_callback: entry
[    0.000095] <dbg> ll_schedule: zephyr_ll_run: entry
[    0.000095] <inf> sof_boot_test: task_callback: entry
[    0.000096] <dbg> ll_schedule: zephyr_ll_run: entry
[    0.000096] <inf> sof_boot_test: task_callback: entry
[    0.000096] <inf> ll_schedule: zephyr_ll_task_done: task complete 0xa02392c0 0xa00ca950U
[    0.000096] <inf> ll_schedule: zephyr_ll_task_done: num_tasks 1 total_num_tasks 1
[    0.000096] <dbg> ll_schedule: zephyr_domain_unregister: entry
[    0.000096] <inf> ll_schedule: zephyr_domain_unregister: zephyr_domain_unregister domain->type 1 domain->clk 0
[    0.000096] <dbg> ll_schedule: zephyr_domain_unregister: exit
[    0.000098] <inf> sof_boot_test: ll_task_test: test complete
 PASS - ll_task_test in 0.011 seconds

#define schedule_task_init_ll zephyr_ll_task_init

struct task *zephyr_ll_task_alloc(void);
k_tid_t zephyr_ll_get_thread(int core);
Collaborator commented:

nitpick: I think we mostly use struct k_thread * in SOF, and it seems to "work better" with various simulation/testing builds; I was getting "undefined" errors when I tried to use k_tid_t.

Collaborator Author replied:

@lyakh Seems we have a mix in existing code as well. I will switch over a few places, but I won't start changing existing code away from k_tid_t in this PR.


#if defined(__ZEPHYR__)
struct ll_schedule_domain *zephyr_ll_domain(void);
struct ll_schedule_domain *zephyr_domain_init(int clk);
Collaborator commented:

zephyr_ll_domain_init()?

Collaborator Author replied:

This is existing code I just moved around. I'll actually move this back to minimize the diff in V2, but I won't do any additional renames as part of this PR.

struct k_mutex *lock; /**< standard lock */
#else
struct k_spinlock lock; /**< standard lock */
#endif
Collaborator commented:

note: this change actually modifies the current behaviour already.

Collaborator Author replied:

Ack @lyakh, discussed offline. I think I'll go back a step and keep the kernel LL build using spinlocks (and/or make it a separate PR).


#if defined(CONFIG_SOF_USERSPACE_LL)
/* Allocate mutex dynamically for userspace access */
domain->lock = k_object_alloc(K_OBJ_MUTEX);
Collaborator commented:

I think this allocates cached?

Collaborator Author replied:

Ok, checked now. With this PR, these allocs come from the system heap and will be uncached. So this is good, but I need to keep close tabs on this.


#if CONFIG_SOF_USERSPACE_LL

k_tid_t zephyr_domain_thread_tid(struct ll_schedule_domain *domain)
Collaborator commented:

struct k_thread * maybe

Collaborator Author replied:

This is clearly a new addition, so let me change this in V2.


list_init(&sch->tasks);
sch->ll_domain = domain;
sch->core = cpu_get_id();
Collaborator commented:

core

Collaborator Author replied:

Ack, will fix in V2.


ZTEST(userspace_ll, ll_task_test)
{
ll_task_test();
ztest_test_pass();
Collaborator commented:

I actually removed these from my tests, they're doing some long jumps... Are you sure you need this?

Collaborator Author replied:

Just following existing tests, but you are right, it probably should be removed.

I think this should only be used in specific cases, like: "However, if the success case for your test involves a fatal fault, you can call this function from k_sys_fatal_error_handler to indicate that the test passed before aborting the thread."

* SOF main has booted up and IPC handling is stopped.
* Run test suites with ztest_run_all.
*/
static int run_tests(void)
Collaborator commented:

would be good to have a similar test for DP


if (CONFIG_SOF_BOOT_TEST_STANDALONE AND CONFIG_SOF_USERSPACE_LL)
if(CONFIG_SOF_BOOT_TEST_STANDALONE AND CONFIG_SOF_USERSPACE_LL)
zephyr_library_sources(userspace/test_ll_task.c)
Collaborator commented:

looks like aligning to use TABs instead would make the patch smaller

Member replied:

ack - copilot align to tab 8

Collaborator Author replied:

I don't really care either way, but this 2-space indent is the style used in both upstream CMake and upstream Zephyr, so given the mix of styles we currently have, I think we should just go with this.

@softwarecki (Collaborator) left a comment

First quick remarks; I still have 2 commits left to review... There is a lot of conditional code added here. Would it not be better to make this a separate scheduler? SOF already supports multiple different schedulers.


#if defined(__ZEPHYR__) && CONFIG_SOF_USERSPACE_LL
domain = sof_heap_alloc(zephyr_ll_heap(), SOF_MEM_FLAG_USER | SOF_MEM_FLAG_COHERENT,
sizeof(*domain), sizeof(void *));
Collaborator commented:

Missing memset

Collaborator Author replied:

Ack, right, there were a few places where I'm replacing rzalloc. I'll address this in V2.

domain->lock = rzalloc(SOF_MEM_FLAG_KERNEL | SOF_MEM_FLAG_COHERENT, sizeof(*domain->lock));
#endif
if (!domain->lock) {
rfree(domain);
Collaborator commented:

heap_free(zephyr_ll_heap(), ... for CONFIG_SOF_USERSPACE_LL?

Collaborator Author replied:

Ack, I'll fix this in V2.

#endif /* CONFIG_SOF_USERSPACE_LL */

struct zephyr_domain_thread {
struct k_thread ll_thread;
Collaborator commented:

Can we keep these static objects when CONFIG_SOF_USERSPACE_LL is not enabled? We could keep pointer fields that point to static objects, similar to the dp scheduler solution.

Collaborator Author replied:

That's a good point. I went back and forth on this a bit. It is a bit messy to have both, but I do agree there is value in not touching the current implementation (and keeping the objects static). Let me try this in V2 and see how the code looks to you all.
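The pattern under discussion can be sketched in plain C (illustrative names, not the actual SOF structures): the struct always carries a pointer, which the kernel build points at a static object while the userspace build allocates one dynamically (k_object_alloc() in Zephyr; calloc() as a stand-in here):

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative stand-ins for the domain/thread structures */
struct dthread {
	int core;
};

struct zdomain {
	struct dthread *thread;	/* pointer field in both builds */
};

static struct dthread static_thread;	/* kernel build: static storage */

/* Kernel-mode build: no allocation, just point at the static object */
static void domain_init_kernel(struct zdomain *d)
{
	d->thread = &static_thread;
}

/* Userspace build: allocate the object dynamically */
static int domain_init_user(struct zdomain *d)
{
	d->thread = calloc(1, sizeof(*d->thread));
	return d->thread ? 0 : -1;
}
```

The rest of the code then dereferences d->thread identically in both configurations, which is what keeps the default kernel LL path untouched.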

}

key = k_spin_lock(&domain->lock);
k_mutex_lock(domain->lock, K_FOREVER);
Collaborator commented:

spinlock still in use without defined(__ZEPHYR__)

Collaborator Author replied:

I will implement a more conservative approach in V2 where I keep spinlock usage for all existing builds.

(void *)mem_partition.start,
heap->heap.init_bytes);

mem_partition.start = (uintptr_t)sys_cache_uncached_ptr_get(heap->heap.init_mem);
Collaborator commented:

Zephyr maps both cached and non-cached addresses when the double-map config is enabled. Maybe it is worth checking that here?

Collaborator replied:

no double mapping any more.

void zephyr_ll_resources_init(void)
{
k_mem_domain_init(&ll_mem_resources.mem_domain, 0, NULL);
k_mutex_init(&ll_mem_resources.lock);
Collaborator commented:

Static kobjects are initialized by default, so there is no need to initialize them manually.

Collaborator Author replied:

@softwarecki But when I don't have any static initializers (like Z_MUTEX_INITIALIZER()), they will be initialized to zero, so an init call seems the safer bet. Or am I missing some additional logic in Zephyr?

zephyr_domain->timer = k_object_alloc(K_OBJ_TIMER);
if (!zephyr_domain->timer) {
tr_err(&ll_tr, "timer allocation failed");
rfree(zephyr_domain);
Collaborator commented:

heap_free(zephyr_ll_heap(), ...

Collaborator Author replied:

Thanks, fixed in V2.

/* Add zephyr_domain_ops to the memory domain for user thread access */
struct k_mem_partition ops_partition;

ops_partition.start = (uintptr_t)&zephyr_domain_ops;
Collaborator commented:

Partition size must be aligned to the page size. Consider using APP_TASK_DATA in the zephyr_domain_ops declaration.
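Zephyr memory partitions must meet the platform's MPU/MMU granularity, so a partition built around an arbitrary object has to be rounded out to page boundaries. A minimal sketch of that arithmetic (the page size constant here is a stand-in for the platform's actual page size):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 4096u		/* stand-in for the platform page size */

/*
 * Round a [start, start + size) range outward to page boundaries,
 * as a partition covering a single object would need.
 */
static void page_align_range(uintptr_t start, size_t size,
			     uintptr_t *aligned_start, size_t *aligned_size)
{
	uintptr_t end = start + size;

	*aligned_start = start & ~(uintptr_t)(PAGE_SIZE - 1);
	*aligned_size = ((end + PAGE_SIZE - 1) & ~(uintptr_t)(PAGE_SIZE - 1))
			- *aligned_start;
}
```

The alternative named in the comment, placing zephyr_domain_ops in a dedicated user-accessible section, lets the linker provide this alignment instead.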

@jsarha (Contributor) left a comment

Went through this once, but did not really pick up much that was not picked up before. There are still many places I do not fully understand. I gather there is a new version coming; I'll go through this again when it's out.

k_thread_abort(&zephyr_domain->domain_thread[core].ll_thread);
if (zephyr_domain->domain_thread[core].ll_thread) {
k_thread_abort(zephyr_domain->domain_thread[core].ll_thread);
k_object_free(zephyr_domain->domain_thread[core].ll_thread);
Contributor commented:

Shouldn't it be:

#ifdef CONFIG_SOF_USERSPACE_LL
k_object_free(zephyr_domain->domain_thread[core].ll_thread);
#else
rfree(zephyr_domain->domain_thread[core].ll_thread);
#endif

ops_partition.size = sizeof(zephyr_domain_ops);
ops_partition.attr = K_MEM_PARTITION_P_RO_U_RO;

k_mutex_lock(&ll_mem_resources.lock, K_FOREVER);
Contributor commented:

zephyr_ll_mem_domain_add_partition?

Collaborator Author replied:

Ack, thanks, will fix in V2.

@lgirdwood (Member) left a comment

Only some minor things from me.

config SOF_TELEMETRY
bool "enable telemetry"
default n
depends on !SOF_USERSPACE_LL
Member commented:

I assume here we still need to map this page with timer IO for user as RO?

if (!IS_ENABLED(CONFIG_SOF_USERSPACE_LL) || !dt->ll_thread) {
/* Allocate thread structure dynamically */
#if CONFIG_SOF_USERSPACE_LL
dt->ll_thread = k_object_alloc(K_OBJ_THREAD);
Member commented:

Would an object not equally work for LL kernel? I.e. do we always need to differentiate here?

struct ll_schedule_domain *ll_domain; /* scheduling domain */
unsigned int core; /* core ID of this instance */
#if CONFIG_SOF_USERSPACE_LL
struct k_mutex *lock; /* mutex for userspace */
Member commented:

Same question re differentiation: would a mutex work here too for a kernel thread in this use case? Not a blocker or anything; it would be nice at some point to merge some of these flows around locking since, at the end of the day, we are using threads in both kernel and user mode.



@kv2019i (Collaborator, Author) left a comment

Thanks a lot for the reviews! I answered most comments inline, but no V2 is uploaded yet. I still have a few comments from @softwarecki and @lyakh to cover before uploading V2.

@softwarecki I did look at the option of moving the code to a separate scheduler file. Especially in zephyr_domain.c, this would bring benefits and keep the code readable. OTOH, most of the code is still shared, and it does look possible we can converge the kernel/user implementations more down the road. I have now implemented a bit of a compromise solution where I split out some user-LL specific functions to a separate file and implemented separate domain register/unregister functions for user/kernel builds. This will make it easier to see that I'm not modifying the existing default kernel LL implementation, while still reusing most of the common code. I'll tidy up the open items tomorrow and push a V2 for comments.
