lib: Add mailbox interface #341

Closed
bentheredonethat wants to merge 1 commit into OpenAMP:main from bentheredonethat:amd-add-mbox-interface

Conversation

@bentheredonethat
Contributor

Add a mailbox interface to initialize, deinitialize, send messages, and register callbacks.

Currently, the only implementation is for Zephyr.

The Generic and FreeRTOS implementations are just stubs.

CC @tnmysh

@bentheredonethat bentheredonethat force-pushed the amd-add-mbox-interface branch 5 times, most recently from 06d01de to d2c6627 Compare June 30, 2025 21:39


Add mailbox interface to init, deinit, send message and
register callbacks.

Currently, the only implementation is for Zephyr.

Linux, Generic and FreeRTOS implementations are just stubs.

Signed-off-by: Ben Levinsky <ben.levinsky@amd.com>
@bentheredonethat bentheredonethat force-pushed the amd-add-mbox-interface branch from d2c6627 to 9f1a6ec Compare July 10, 2025 15:45
@bentheredonethat
Contributor Author

Updated per Copilot comments. CC @arnopo

@arnopo arnopo requested a review from Copilot July 15, 2025 07:34

Copilot AI left a comment


Pull Request Overview

This PR adds a mailbox interface to libmetal, including initialization, deinitialization, enabling/disabling, sending messages, and registering callbacks. Zephyr gets a full implementation, while Linux, Generic, and FreeRTOS backends are provided as stubs. CMakeLists files are updated to include the new headers and source.

  • Introduce metal_mbox_t interface and inline wrappers in lib/mbox.h.
  • Provide Zephyr-specific implementation in zephyr/mbox.h and zephyr/mbox.c.
  • Add stub headers for Linux, Generic, and FreeRTOS and update CMakeLists to include mailbox files.
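The overview above describes thin inline wrappers in lib/mbox.h dispatching to per-system `__metal_mbox_*` implementations. A minimal sketch of that pattern follows; the struct layout, the stub backend body, and the error codes are illustrative assumptions, not the actual patch.

```c
#include <errno.h>
#include <stddef.h>

/* Hypothetical mailbox handle; the real PR defines this per system. */
typedef struct metal_mbox {
	void *priv;   /* backend-specific state (e.g. a Zephyr mbox channel) */
} metal_mbox_t;

/* Backend entry point: each system (Zephyr, Linux, ...) provides its own. */
static int __metal_mbox_send(metal_mbox_t *mbox, void *data, unsigned int len)
{
	if (!mbox)
		return -EINVAL;  /* illustrative validation only */
	(void)data;
	(void)len;
	return 0;            /* stub backend: pretend the message was sent */
}

/* Generic wrapper as in lib/mbox.h: a thin inline shim over the backend. */
static inline int metal_mbox_send(metal_mbox_t *mbox, void *data, unsigned int len)
{
	return __metal_mbox_send(mbox, data, len);
}
```

This mirrors the style of existing libmetal headers such as lib/mutex.h, which the author cites later in the thread.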

Reviewed Changes

Copilot reviewed 11 out of 11 changed files in this pull request and generated 3 comments.

Show a summary per file

  • lib/system/zephyr/mbox.h: Define Zephyr mailbox types and inline APIs
  • lib/system/zephyr/mbox.c: Implement __metal_mbox_send with payload checks
  • lib/system/zephyr/CMakeLists.txt: Include mbox.h and mbox.c in build
  • lib/system/linux/mbox.h: Add stub mailbox interface declarations
  • lib/system/linux/CMakeLists.txt: Add mbox.h to Linux build headers
  • lib/system/generic/mbox.h: Add generic stub mailbox interface
  • lib/system/generic/CMakeLists.txt: Add mbox.h to generic build headers
  • lib/system/freertos/mbox.h: Add FreeRTOS stub mailbox interface
  • lib/system/freertos/CMakeLists.txt: Add mbox.h to FreeRTOS build headers
  • lib/mbox.h: Inline wrappers mapping to backend implementations
  • lib/CMakeLists.txt: Include mbox.h in core lib headers
Comments suppressed due to low confidence (2)

lib/system/linux/mbox.h:54

  • The static inline functions __metal_mbox_enable and __metal_mbox_disable are declared without bodies, leading to compilation errors. Provide stub implementations (e.g., return 0) or remove the inline keyword.
static inline int __metal_mbox_enable(metal_mbox_t *mbox);

lib/system/generic/mbox.h:54

  • Similar to Linux, the static inline declarations here lack definitions and will not compile. Add simple stub definitions returning 0 or change them to extern declarations.
static inline int __metal_mbox_enable(metal_mbox_t *mbox);
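One way to address both suppressed comments is to give each stub a trivial body instead of a bare prototype, so the header compiles on its own. The sketch below is a guess at the shape of such a fix; the handle type and return values are assumptions.

```c
#include <errno.h>
#include <stddef.h>

typedef struct metal_mbox { void *priv; } metal_mbox_t;  /* illustrative handle */

/* A bodiless 'static inline' prototype fails as soon as a caller uses it;
 * giving the stub a trivial definition keeps the header self-sufficient. */
static inline int __metal_mbox_enable(metal_mbox_t *mbox)
{
	(void)mbox;
	return 0;          /* stub style 1: silently succeed */
}

static inline int __metal_mbox_disable(metal_mbox_t *mbox)
{
	(void)mbox;
	return -ENOSYS;    /* stub style 2: report "not implemented" */
}
```

Returning -ENOSYS makes the missing backend visible to callers, while returning 0 keeps sample code running; either is a valid stub policy.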

Comment thread lib/system/zephyr/mbox.c
return -EINVAL;
}
if (len >= max_transfer_size_bytes) {
metal_err("%s: len > max_transfer_size_bytes. %x > %x\n",

Copilot AI Jul 15, 2025


The error message says len > max_transfer_size_bytes but the check is len >= max_transfer_size_bytes. Update the message to reflect the >= condition or change the condition to > to match the text.

Suggested change
metal_err("%s: len > max_transfer_size_bytes. %x > %x\n",
metal_err("%s: len >= max_transfer_size_bytes. %x >= %x\n",

Comment thread lib/system/zephyr/mbox.c
return -EINVAL;
}
if (len >= max_transfer_size_bytes) {
metal_err("%s: len > max_transfer_size_bytes. %x > %x\n",

Copilot AI Jul 15, 2025


When logging integer sizes, use %u or %zu for unsigned values and %d for signed values instead of %x, which prints hex and may confuse readers.

Suggested change
metal_err("%s: len > max_transfer_size_bytes. %x > %x\n",
metal_err("%s: len > max_transfer_size_bytes. %u > %d\n",

Comment thread lib/mbox.h
Contributor

@arnopo arnopo left a comment


Regarding this PR, it seems to me that you need this API only to define a memory region in the Zephyr Device Tree associated with the mailbox.
If that is the case, I would suggest proposing support for an optional memory region in the Zephyr mailbox driver.
From my point of view, it does not make sense to add this API for Zephyr, which should only use Zephyr’s mailbox API.

Please notice also that the API implementation is missing for NuttX.

Comment thread lib/mbox.h
static inline int metal_mbox_send(metal_mbox_t *mbox, void *data, unsigned int len)
{
return __metal_mbox_send(mbox, data, len);
}
Contributor


This file should only declare the metal_mbox_xx functions, which would be defined in a mbox.c for the different systems.

Contributor Author


@arnopo my reference was https://github.com/openamp/libmetal/blob/main/lib/mutex.h - are you OK with that reference?

Comment thread lib/system/zephyr/mbox.c
const int max_transfer_size_bytes = mbox_mtu_get_dt(mbox->m);

/* If data is NOT NULL, then pack message. */
if (data) {
Contributor


If I remember correctly, data can be non-NULL for generic samples while the driver does not support data, so max_transfer_size_bytes != 0 should be tested first.

Contributor Author


ok can do!
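The reordering requested above can be sketched like this. The names max_transfer_size_bytes and mbox_mtu_get_dt() come from the diff context, but the surrounding function and its stand-in MTU query are hypothetical reconstructions.

```c
#include <errno.h>

/* Stand-in for mbox_mtu_get_dt(): a real Zephyr driver reports 0 when the
 * channel cannot carry a payload at all. */
static int fake_mtu_get(int supports_data) { return supports_data ? 32 : 0; }

/* Sketch of the suggested check order: validate the driver's MTU before
 * touching 'data', since callers may pass a non-NULL buffer to a driver
 * that supports only signal-style (bufferless) notifications. */
static int mbox_send_sketch(int supports_data, const void *data, unsigned int len)
{
	const int max_transfer_size_bytes = fake_mtu_get(supports_data);

	if (data) {
		if (max_transfer_size_bytes == 0)
			return -ENOTSUP;   /* driver cannot carry a payload */
		if (len >= (unsigned int)max_transfer_size_bytes)
			return -EINVAL;    /* payload too large for this channel */
		/* ...pack the message here... */
	}
	return 0;                          /* signal-only or valid buffered send */
}
```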

Comment thread lib/system/zephyr/mbox.h

#include <metal/assert.h>
#include <metal/sys.h>
#include <zephyr/drivers/mbox.h>
Contributor


Is there any use case where Zephyr would call this metal API? How do you address the legacy IPM API?
I think we should forbid this API for Zephyr, as a Zephyr application should use the Zephyr mailbox or IPM API instead.

Contributor Author


@arnopo sorry, can you add some clarification? I don't know what this refers to.

Contributor


I would like to avoid such an implementation in favor of a more portable one (from my POV).
Having a generic API header file that declares the API, with one .c file per system, seems to me a better way.
In other terms, I would like to avoid the use of #include <metal/system/@PROJECT_SYSTEM@/mbox.h>
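The structure preferred here, a single generic header that only declares the API plus one mbox.c per system, can be condensed into one file for illustration. The names and the opaque-struct choice are assumptions, not the actual layout of the patch.

```c
#include <errno.h>
#include <stddef.h>

/* --- lib/mbox.h (shared, declaration only) ---------------------------- */
struct metal_mbox;                                   /* opaque to callers  */
int metal_mbox_send(struct metal_mbox *mbox, void *data, unsigned int len);

/* --- lib/system/<os>/mbox.c (exactly one definition per system) ------- */
struct metal_mbox { int dummy; };                    /* system's own state */

int metal_mbox_send(struct metal_mbox *mbox, void *data, unsigned int len)
{
	if (!mbox)
		return -EINVAL;
	(void)data;
	(void)len;
	return 0;     /* this system's real send would go here */
}
```

With this split, no caller ever includes a system-specific header; the build system simply links the matching mbox.c.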

Comment thread lib/system/generic/mbox.h
* already set up at Zephyr init.
* @return 0 on success. Negative value on failure.
*/
extern int __metal_mbox_init(metal_mbox_t *mbox,
Contributor


Instead of using extern, define weak functions in a mbox.c that return an error.

Contributor Author


ok can do!
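The suggestion reads as: provide weak default definitions in a shared mbox.c that return an error, and let each system override them with a strong symbol. A GCC/Clang-style sketch follows; the attribute syntax, the handle type, and the -ENOSYS choice are assumptions.

```c
#include <errno.h>
#include <stddef.h>

typedef struct metal_mbox { void *priv; } metal_mbox_t;  /* illustrative */

/* Weak default in a shared mbox.c: links cleanly on every system. A port
 * that implements the mailbox defines a strong symbol of the same name,
 * which the linker prefers over this fallback. */
__attribute__((weak)) int __metal_mbox_init(metal_mbox_t *mbox, void *chan)
{
	(void)mbox;
	(void)chan;
	return -ENOSYS;   /* no mailbox support on this system */
}
```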

@bentheredonethat
Contributor Author

Regarding this PR, it seems to me that you need this API only to define a memory region in the Zephyr Device Tree associated with the mailbox. If that is the case, I would suggest proposing support for an optional memory region in the Zephyr mailbox driver. From my point of view, it does not make sense to add this API for Zephyr, which should only use Zephyr’s mailbox API.

Please notice also that the API implementation is missing for NuttX.

Hi @arnopo

the APIs we wanted to introduce as generic functions were send and receive, using generic APIs.

For send: as AMD supports buffered and bufferless interrupts (without payload), we want both in there and plan to upstream such support into the Zephyr mailbox driver ecosystem.

For receive, the Zephyr mailbox API supports registering a callback. Again, because both buffered and bufferless are handled, both are included.

Also, here are references for Zephyr send:
https://github.com/zephyrproject-rtos/zephyr/blob/main/samples/drivers/mbox/remote/src/main.c#L62
https://github.com/zephyrproject-rtos/zephyr/blob/main/samples/drivers/mbox_data/src/main.c#L58

Also, I can add the NuttX implementation, yes.

@arnopo
Contributor

arnopo commented Jul 18, 2025

the APIs we wanted to introduce as generic functions were send and receive, using generic APIs.

For send: as AMD supports buffered and bufferless interrupts (without payload), we want both in there and plan to upstream such support into the Zephyr mailbox driver ecosystem.

For receive, the Zephyr mailbox API supports registering a callback. Again, because both buffered and bufferless are handled, both are included.

This is the point we should clarify: is the need only for Zephyr?

The Zephyr mailbox API allows sending and receiving data. This data transfer is managed by the Zephyr driver associated with your platform.
If I understand your expectation correctly, you would call this new libmetal API from your Zephyr mailbox driver, right?
What I question here is whether you could or should manage the shared memory directly in your Zephyr mailbox driver instead of adding extra layers and introducing direct function calls between Zephyr and libmetal.

@bentheredonethat
Contributor Author

Hi @arnopo

What I question here is whether you could or should manage the shared memory directly in your Zephyr mailbox driver instead of adding extra layers and introducing direct function calls between Zephyr and libmetal.

In the AMD-Xilinx case, the buffers are reserved for use by these specific interrupts. In our manuals they are called buffered inter-processor interrupts. The Zephyr mailbox driver makes use of these specifically reserved buffers for the data passing. They are not arbitrary ranges in memory.

The AMD-Xilinx BSP also implements a mailbox driver for this.

Hopefully that answers your question.

@arnopo
Contributor

arnopo commented Jul 22, 2025

Hi @arnopo

What I question here is whether you could or should manage the shared memory directly in your Zephyr mailbox driver instead of adding extra layers and introducing direct function calls between Zephyr and libmetal.

In the AMD-Xilinx case, the buffers are reserved for use by these specific interrupts. In our manuals they are called buffered inter-processor interrupts. The Zephyr mailbox driver makes use of these specifically reserved buffers for the data passing. They are not arbitrary ranges in memory.

The AMD-Xilinx BSP also implements a mailbox driver for this.

Hopefully that answers your question.

So I'm lost. I really don't understand why you need this API in Zephyr. Calling an external API in Zephyr applications does not look like a good idea to me if a Zephyr API already exists.

What I would like to understand is:

  • What is missing in Zephyr that requires a new API for AMD?
  • Why can the missing features/checks not be implemented directly in Zephyr?

From my point of view, this API should be allowed only for bare-metal and FreeRTOS.

@tnmysh
Collaborator

tnmysh commented Jul 22, 2025

Hi @arnopo

What I question here is whether you could or should manage the shared memory directly in your Zephyr mailbox driver instead of adding extra layers and introducing direct function calls between Zephyr and libmetal.

In the AMD-Xilinx case, the buffers are reserved for use by these specific interrupts. In our manuals they are called buffered inter-processor interrupts. The Zephyr mailbox driver makes use of these specifically reserved buffers for the data passing. They are not arbitrary ranges in memory.
The AMD-Xilinx BSP also implements a mailbox driver for this.
Hopefully that answers your question.

So I'm lost. I really don't understand why you need this API in Zephyr. Calling an external API in Zephyr applications does not look like a good idea to me if a Zephyr API already exists.

@arnopo the goal is to use the mbox interface in an OS-agnostic manner, so an application that uses this mbox interface can be easily ported to any OS supported by libmetal.

What I would like to understand is:

  • What is missing in Zephyr that requires a new API for AMD?
  • Why can the missing features/checks not be implemented directly in Zephyr?

From my point of view, this API should be allowed only for bare-metal and FreeRTOS.

If we allow this API only for bare-metal or FreeRTOS, it defeats the purpose of using the libmetal interface when the application is ported to Zephyr. An application that uses libmetal in a standalone fashion should be portable to any OS.

@arnopo
Contributor

arnopo commented Jul 23, 2025

Hi @tnmysh

Hi @arnopo

What I question here is whether you could or should manage the shared memory directly in your Zephyr mailbox driver instead of adding extra layers and introducing direct function calls between Zephyr and libmetal.

In the AMD-Xilinx case, the buffers are reserved for use by these specific interrupts. In our manuals they are called buffered inter-processor interrupts. The Zephyr mailbox driver makes use of these specifically reserved buffers for the data passing. They are not arbitrary ranges in memory.
The AMD-Xilinx BSP also implements a mailbox driver for this.
Hopefully that answers your question.

So I'm lost. I really don't understand why you need this API in Zephyr. Calling an external API in Zephyr applications does not look like a good idea to me if a Zephyr API already exists.

@arnopo the goal is to use the mbox interface in an OS-agnostic manner, so an application that uses this mbox interface can be easily ported to any OS supported by libmetal.

Is it really possible to have the same code running on different OSes? As an OS such as Zephyr tries to be platform agnostic, would that mean that your code would be both OS and platform agnostic?
So I guess in this case it is only for AMD Zephyr downstream, right?

What I would like to understand is:

  • What is missing in Zephyr that requires a new API for AMD?
  • Why can the missing features/checks not be implemented directly in Zephyr?

From my point of view, this API should be allowed only for bare-metal and FreeRTOS.

If we allow this API only for bare-metal or FreeRTOS, it defeats the purpose of using the libmetal interface when the application is ported to Zephyr. An application that uses libmetal in a standalone fashion should be portable to any OS.

From a Zephyr perspective, I would say a Zephyr application that uses the mailbox in standalone fashion should use the Zephyr mailbox API, not an external one. Having an application portable to any OS seems to me more of an AMD objective. One of the main objectives of an OS like Zephyr is to provide a hardware abstraction layer that makes applications as platform-independent as possible.
If you introduce code in Zephyr that relies on the libmetal mailbox API, it looks like you are introducing an alternative to the Zephyr API.

Do you plan to upstream the usage of this API in Zephyr?

In any case at least we need documentation explaining the usage of this API in different OSes, because duplicating the API introduces confusion for users, as some other libmetal APIs, such as IRQ, seem to do.

If I remember correctly, we also discussed having two kinds of APIs in libmetal: a basic API mandatory for the OpenAMP porting layer, and an extended API.

@bentheredonethat
Contributor Author

@arnopo we would like to close this PR, if you are OK with it.

The reason is that we want to first make it work in a demo, and then, if needed, we will port it into libmetal.

@arnopo arnopo closed this Jul 29, 2025