xQueueSend between tasks on different processors #40

Open

loboris opened this issue Feb 23, 2019 · 2 comments

loboris commented Feb 23, 2019


  • BUG REPORT

Expected behavior

Sending a message with xQueueSend between tasks running on different processors should work.

Actual behavior

Task 1, running on processor #0, sends to the queue with xQueueSend.
Task 2, running on processor #1, receives from the queue with xQueueReceive.

After task1 sends the message, task2 switches to processor #0 and continues to run on it.


The source of the bug is probably in queue.c, in the way the xTaskRemoveFromEventList function is used.
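For reference, here is a simplified excerpt of the wake-up path in upstream FreeRTOS's xQueueGenericSend() (the identifiers are upstream FreeRTOS names; the dual-core commentary reflects the hypothesis above, not confirmed behaviour of this SDK's port):

if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
{
    /* xTaskRemoveFromEventList() moves the highest-priority task that
       was blocked on the queue onto a ready list. On a single core
       there is only one ready list, so this is safe. On a dual-core
       port this call runs on the *sending* core (here, processor #0),
       so the receiver can end up on processor #0's ready list and
       silently migrate away from processor #1. */
    if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE )
    {
        /* The unblocked task has a higher priority, so yield the
           current core to it, again on processor #0. */
        queueYIELD_IF_USING_PREEMPTION();
    }
}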

There is quite a large number of other FreeRTOS functions that are not prepared for dual-processor use and can present a big issue if/when used.
This should be addressed as soon as possible!


Test code

#include <stdio.h>
#include <string.h>
#include <FreeRTOS.h>
#include <task.h>
#include <semphr.h>
#include <queue.h>

typedef struct _msg_t {
    uint32_t id;
    char strdata[256];
} msg_t;

static TaskHandle_t test_handle0 = 0;
static TaskHandle_t test_handle1 = 0;
static SemaphoreHandle_t test_task_mutex = NULL;

static QueueHandle_t queue;

//---------------------------------------
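// Producer: prints a status line once per second and, on every 4th
// iteration, sends a message to the shared queue. Created on processor 0.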
static void test_task0(void *pvParameter)
{
    uint64_t n = 0, ticks;
    float x;
    msg_t message = { 0 };
    while (1) {
        vTaskDelay(1000 / portTICK_PERIOD_MS);
        n++;
        ticks = xTaskGetTickCount();
        x = (float)ticks / 1.23456789;
        if (xSemaphoreTake( test_task_mutex, 100) == pdTRUE ) {
            printf("Task0 at %lu: %lu, %lu, %.3f\n", uxPortGetProcessorId(), n, ticks, x);
            xSemaphoreGive(test_task_mutex);
        }
        if ((n % 4) == 0) {
            if (xSemaphoreTake( test_task_mutex, 100) == pdTRUE ) {
                printf("Sending message to task1\n");
                xSemaphoreGive(test_task_mutex);
            }
            message.id = n;
            sprintf(message.strdata, "From task 0, ticks=%lu", ticks);
            if (xQueueSend(queue, (void *)&message, 0) != pdTRUE) {
                if (xSemaphoreTake( test_task_mutex, 100) == pdTRUE ) {
                    printf("Send message failed\n");
                    xSemaphoreGive(test_task_mutex);
                }
            }
        }
    }
    vTaskDelete(NULL);
}

//----------------------------------------
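// Consumer: blocks on the queue with a 1 s timeout, prints any received
// message, then prints its own status line. Created on processor 1.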
static void test_task1(void *pvParameter)
{
    uint64_t n = 0, ticks;
    float x;
    msg_t message = { 0 };
    while (1) {
        if (xQueueReceive(queue, (void *)&message, 1000 / portTICK_PERIOD_MS) == pdTRUE) {
            if (xSemaphoreTake( test_task_mutex, 100) == pdTRUE ) {
                printf("Message received: id=%u, msg='%s'\n", message.id, message.strdata);
                xSemaphoreGive(test_task_mutex);
            }
        }
        //vTaskDelay(1000 / portTICK_PERIOD_MS);
        n++;
        ticks = xTaskGetTickCount();
        x = (float)ticks / 1.23456789;
        if (xSemaphoreTake( test_task_mutex, 100) == pdTRUE ) {
            printf("Task1 at %lu: %lu, %lu, %.3f\n", uxPortGetProcessorId(), n, ticks, x);
            xSemaphoreGive(test_task_mutex);
        }
    }
    vTaskDelete(NULL);
}

//------------
int main(void)
{
    printf("TEST\n");

    queue = xQueueCreate(4, sizeof(msg_t) );
    configASSERT(queue);

    test_task_mutex = xSemaphoreCreateMutex();
    configASSERT(test_task_mutex);

    xTaskCreateAtProcessor(
            0,                          // processor
            test_task0,                 // function entry
            "TASK0",                    // task name
            configMINIMAL_STACK_SIZE,   // stack depth
            NULL,                       // function argument
            1,                          // task priority
            &test_handle0);             // task handle
    configASSERT(test_handle0);

    vTaskDelay(500 / portTICK_PERIOD_MS);
    xTaskCreateAtProcessor(
            1,                           // processor
            test_task1,                  // function entry
            "TASK1",                     // task name
            configMINIMAL_STACK_SIZE,    // stack depth
            NULL,                        // function argument
            1,                           // task priority
            &test_handle1);              // task handle
    configASSERT(test_handle1);

    int cnt = 0;
    while (1) {
        vTaskDelay(1000);
        cnt++;
    }
}

SDK version

develop, 68b5563
Kendryte GNU Toolchain v8.2.0-20190213

Hardware

Sipeed MAIX-Bit, Dan-Dock

System

Ubuntu 18.04


Output produced by the test program:

Before the first xQueueSend, task1 runs on processor #1; after xQueueSend it runs on processor #0.
Task1 is started 500 ms (about 50 ticks) after task0, so while the two tasks run on different processors their printed tick counts stay roughly 50 ticks apart. Once task1 has been pulled onto processor #0 that offset collapses to a single tick, which shows that the tasks actually run on the same processor.

TEST
Task0 at 0: 1, 101, 81.810
Task1 at 1: 1, 152, 123.120
Task0 at 0: 2, 201, 162.810
Task1 at 1: 2, 252, 204.120
Task0 at 0: 3, 301, 243.810
Task1 at 1: 3, 352, 285.120
Task0 at 0: 4, 401, 324.810
Sending message to task1
Message received: id=4, msg='From task 0, ticks=401'
Task1 at 0: 4, 401, 324.810
Task0 at 0: 5, 501, 405.810
Task1 at 0: 5, 502, 406.620
Task0 at 0: 6, 601, 486.810
Task1 at 0: 6, 602, 487.620
Task0 at 0: 7, 701, 567.810
Task1 at 0: 7, 702, 568.620
Task0 at 0: 8, 801, 648.810
Sending message to task1
Message received: id=8, msg='From task 0, ticks=801'
Task1 at 0: 8, 801, 648.810
Task0 at 0: 9, 901, 729.810
Task1 at 0: 9, 902, 730.620
Task0 at 0: 10, 1001, 810.810
Task1 at 0: 10, 1002, 811.620
Task0 at 0: 11, 1101, 891.810
Task1 at 0: 11, 1102, 892.620
Task0 at 0: 12, 1201, 972.810
sunnycase added this to the future milestone Feb 27, 2019
sunnycase (Member) commented:

Currently this scenario is not supported. SMP is not implemented yet.


loboris commented Feb 27, 2019

I have fixed this issue for this and some other functions, and now my application (the real one, not only the test code above) runs as expected.
Looking forward to the full SMP implementation in this SDK ...
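The actual patch is not shown in this issue, so the following is only a rough sketch of one possible direction: make the wake-up path core-aware, so an unblocked task is put back on the ready list of its own processor instead of the sender's. uxTaskProcessor(), vAddTaskToReadyListOfProcessor() and vPortYieldOtherProcessor() are made-up names used purely for illustration, not SDK APIs.

/* Hypothetical sketch only, not the actual fix from this issue. */
UBaseType_t uxCore = uxTaskProcessor( pxUnblockedTCB );

if( uxCore == uxPortGetProcessorId() )
{
    /* The woken task belongs to this core: the normal
       single-core logic applies. */
    prvAddTaskToReadyList( pxUnblockedTCB );
}
else
{
    /* The woken task belongs to the other core: queue it there and
       request a cross-core yield so it resumes on its own processor
       instead of migrating. */
    vAddTaskToReadyListOfProcessor( pxUnblockedTCB, uxCore );
    vPortYieldOtherProcessor( uxCore );
}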
