drivers/rpmsg_virtio: add VIRTIO_RPMSG_F_BUFADDR to support specifying buffer address#18491

Merged
xiaoxiang781216 merged 2 commits into apache:master from CV-Bowen:rpmsg-virtio-bufaddr
Mar 5, 2026
Conversation


CV-Bowen (Contributor) commented Mar 4, 2026

Summary

Add support for the VIRTIO_RPMSG_F_BUFADDR feature bit in rpmsg virtio, allowing the host and remote sides to specify buffer physical addresses through the resource table config space.

In some multi-core systems, the shared memory pool needs to be split into separate regions for host-to-remote and remote-to-host buffers. This is useful when the TX and RX buffers must reside at specific physical addresses (e.g., different memory banks or regions with different cache/access attributes). The VIRTIO_RPMSG_F_BUFADDR feature allows this configuration to be communicated through the standard resource table mechanism.

Impact

  • No impact on existing configurations — the feature is only activated when both sides negotiate VIRTIO_RPMSG_F_BUFADDR
  • Backward compatible: the reserved field shrinks but total struct size is preserved via PACKED layout
  • No breaking changes to existing rpmsg virtio users
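The first bullet's negotiation rule can be sketched as follows: the split-buffer addresses are honored only when both sides advertise the feature, so a legacy peer silently falls back to the single shared pool. The bit position used here is a placeholder assumption; the real value is defined by the patch.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical bit position for illustration only; the actual value of
 * VIRTIO_RPMSG_F_BUFADDR is defined by the patch, not reproduced here. */
#define VIRTIO_RPMSG_F_BUFADDR_BIT 2

/* The feature takes effect only when BOTH sides set the bit in their
 * advertised virtio feature word. */
static bool bufaddr_negotiated(uint64_t host_features, uint64_t remote_features)
{
  uint64_t mask = UINT64_C(1) << VIRTIO_RPMSG_F_BUFADDR_BIT;

  return (host_features & mask) != 0 && (remote_features & mask) != 0;
}
```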

Testing

Build test with qemu-armv8a:v8a_server and qemu-armv8a:v8a_proxy configurations:

cmake -B cmake_out/v8a_server -DBOARD_CONFIG=qemu-armv8a:v8a_server -GNinja
cmake --build cmake_out/v8a_server
cmake -B cmake_out/v8a_proxy -DBOARD_CONFIG=qemu-armv8a:v8a_proxy -GNinja
cmake --build cmake_out/v8a_proxy
❯ qemu-system-aarch64 -cpu cortex-a53 -nographic \
-machine virt,virtualization=on,gic-version=3 \
-chardev stdio,id=con,mux=on -serial chardev:con \
-object memory-backend-file,discard-data=on,id=shmmem-shmem0,mem-path=/dev/shm/my_shmem0,size=4194304,share=yes \
-device ivshmem-plain,id=shmem0,memdev=shmmem-shmem0,addr=0xb \
-device virtio-serial-device,bus=virtio-mmio-bus.0 \
-chardev socket,path=/tmp/rpmsg_port_uart_socket,server=on,wait=off,id=foo \
-device virtconsole,chardev=foo \
-mon chardev=con,mode=readline -kernel ./cmake_out/v8a_server/nuttx \
-gdb tcp::7775
[    0.000000] [ 0] [  INFO] [server] pci_register_rptun_ivshmem_driver: Register ivshmem driver, id=0, cpuname=proxy, master=1
[    0.000000] [ 3] [  INFO] [server] pci_scan_bus: pci_scan_bus for bus 0
[    0.000000] [ 3] [  INFO] [server] pci_scan_bus: class = 00000600, hdr_type = 00000000
[    0.000000] [ 3] [  INFO] [server] pci_scan_bus: 00:00 [1b36:0008]
[    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar0 set bad mask
[    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar1 set bad mask
[    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar2 set bad mask
[    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar3 set bad mask
[    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar4 set bad mask
[    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar5 set bad mask
[    0.000000] [ 3] [  INFO] [server] pci_scan_bus: class = 00000200, hdr_type = 00000000
[    0.000000] [ 3] [  INFO] [server] pci_scan_bus: 00:08 [1af4:1000]
[    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar0: mask64=fffffffe 32bytes
[    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar1: mask64=fffffff0 4096bytes
[    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar2 set bad mask
[    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar3 set bad mask
[    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar4: mask64=fffffffffffffff0 16384bytes
[    0.000000] [ 3] [  INFO] [server] pci_scan_bus: class = 00000500, hdr_type = 00000000
[    0.000000] [ 3] [  INFO] [server] pci_scan_bus: 00:58 [1af4:1110]
[    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar0: mask64=fffffff0 256bytes
[    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar1 set bad mask
[    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar2: mask64=fffffffffffffff0 4194304bytes
[    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar4 set bad mask
[    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar5 set bad mask
[    0.000000] [ 3] [  INFO] [server] ivshmem_probe: shmem addr=0x8000400000 size=4194304 reg=0x10001000
[    0.000000] [ 3] [  INFO] [server] rptun_ivshmem_probe: shmem addr=0x8000400000 size=4194304

NuttShell (NSH) NuttX-12.12.0
server> [    0.000000] [ 0] [  INFO] [proxy] pci_register_rptun_ivshmem_driver: Register ivshmem driver, id=0, cpuname=server, master=0
[    0.000000] [ 3] [  INFO] [proxy] pci_scan_bus: pci_scan_bus for bus 0
[    0.000000] [ 3] [  INFO] [proxy] pci_scan_bus: class = 00000600, hdr_type = 00000000
[    0.000000] [ 3] [  INFO] [proxy] pci_scan_bus: 00:00 [1b36:0008]
[    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar0 set bad mask
[    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar1 set bad mask
[    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar2 set bad mask
[    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar3 set bad mask
[    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar4 set bad mask
[    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar5 set bad mask
[    0.000000] [ 3] [  INFO] [proxy] pci_scan_bus: class = 00000200, hdr_type = 00000000
[    0.000000] [ 3] [  INFO] [proxy] pci_scan_bus: 00:08 [1af4:1000]
[    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar0: mask64=fffffffe 32bytes
[    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar1: mask64=fffffff0 4096bytes
[    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar2 set bad mask
[    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar3 set bad mask
[    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar4: mask64=fffffffffffffff0 16384bytes
[    0.000000] [ 3] [  INFO] [proxy] pci_scan_bus: class = 00000500, hdr_type = 00000000
[    0.000000] [ 3] [  INFO] [proxy] pci_scan_bus: 00:58 [1af4:1110]
[    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar0: mask64=fffffff0 256bytes
[    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar1 set bad mask
[    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar2: mask64=fffffffffffffff0 4194304bytes
[    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar4 set bad mask
[    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar5 set bad mask
[    0.000000] [ 3] [  INFO] [proxy] ivshmem_probe: shmem addr=0x8000400000 size=4194304 reg=0x10001000
[    0.000000] [ 3] [  INFO] [proxy] rptun_ivshmem_probe: shmem addr=0x8000400000 size=4194304
[    0.000000] [ 3] [  INFO] [proxy] rptun_ivshmem_probe: Start the wdog

server> 
server> 
server> 
server> rpmsg ping all 1 1 1 1
[   14.991300] [ 7] [ EMERG] [server] ping times: 1
[   14.992000] [ 7] [ EMERG] [server] buffer_len: 1520, send_len: 17
[   14.993500] [ 7] [ EMERG] [server] avg: 0 s, 92999840 ns
[   14.994500] [ 7] [ EMERG] [server] min: 0 s, 92999840 ns
[   14.995400] [ 7] [ EMERG] [server] max: 0 s, 92999840 ns
[   14.996600] [ 7] [ EMERG] [server] rate: 0.001462 Mbits/sec
[   15.004500] [ 7] [ EMERG] [server] ping times: 1
[   15.005200] [ 7] [ EMERG] [server] buffer_len: 2008, send_len: 17
[   15.006200] [ 7] [ EMERG] [server] avg: 0 s, 77037968 ns
[   15.007100] [ 7] [ EMERG] [server] min: 0 s, 77037968 ns
[   15.008000] [ 7] [ EMERG] [server] max: 0 s, 77037968 ns
[   15.008900] [ 7] [ EMERG] [server] rate: 0.001765 Mbits/sec
server> 
server> uname -a
NuttX server 12.12.0 e9cf314c221-dirty Mar  4 2026 20:27:46 arm64 qemu-armv8a
server> rpmsg dump all
[   18.381900] [ 7] [ EMERG] [server] Local: server Remote: proxy Headrx 8
[   18.384300] [ 7] [ EMERG] [server] Dump rpmsg info between cpu (master: yes)server <==> proxy:
[   18.385800] [ 7] [ EMERG] [server] rpmsg vq RX:
[   18.387000] [ 7] [ EMERG] [server] VQ: rx_vq - size=8; free=0; queued=0; desc_head_idx=32768; available_idx=0; avail.idx=16; used_cons_idx=8; used.idx=8; avail.flags=0x0; used.flags=0x1
[   18.389800] [ 7] [ EMERG] [server] rpmsg vq TX:
[   18.390500] [ 7] [ EMERG] [server] VQ: tx_vq - size=8; free=6; queued=0; desc_head_idx=2; available_idx=0; avail.idx=6; used_cons_idx=4; used.idx=6; avail.flags=0x1; used.flags=0x0
[   18.393100] [ 7] [ EMERG] [server]   rpmsg ept list:
[   18.394200] [ 7] [ EMERG] [server]     ept NS
[   18.394900] [ 7] [ EMERG] [server]     ept rpmsg-sensor
[   18.395700] [ 7] [ EMERG] [server]     ept rpmsg-ping
[   18.396500] [ 7] [ EMERG] [server]     ept rpmsg-syslog
[   18.397300] [ 7] [ EMERG] [server]   rpmsg buffer list:
[   18.398600] [ 7] [ EMERG] [server]     RX buffer, total 8, pending 0
[   18.400300] [ 7] [ EMERG] [server]     TX buffer, total 8, pending 0
[   18.403100] [ 7] [ EMERG] [server] Remote: proxy2 state: 1
[   18.404200] [ 7] [ EMERG] [server] ept NS
[   18.404900] [ 7] [ EMERG] [server] ept rpmsg-sensor
[   18.405700] [ 7] [ EMERG] [server] ept rpmsg-ping
[   18.406600] [ 7] [ EMERG] [server] rpmsg_port queue RX: {used: 0, avail: 8}
[   18.406600] [ 7] [ EMERG] [server] rpmsg buffer list:
[   18.408800] [ 7] [ EMERG] [server] rpmsg_port queue TX: {used: 0, avail: 8}
[   18.408800] [ 7] [ EMERG] [server] rpmsg buffer list:
[   18.413600] [ 7] [ ALERT] [server] sched_dumpstack: backtrace| 5: 0x00000000402a8a20 0x00000000402aa4e8 0x000000004029c1c8 0x000000004028cc54 0x00000000402e4328 0x00000000402e43d8 0x00000000402bd054 0x00000000402bd4dc
[   18.416500] [ 7] [ ALERT] [server] sched_dumpstack: backtrace| 5: 0x00000000402ad390
[   18.418300] [ 7] [ ALERT] [server] sched_dumpstack: backtrace| 6: 0x00000000402a8e40 0x00000000402f6474 0x00000000402bc428 0x00000000402bd5c4 0x00000000402ad390
server> 

yintao and others added 2 commits March 4, 2026 20:31
…he address

Support specifying the buffer addresses via h2r_buf_addr and r2h_buf_addr
so the shared memory pool can be split.

Signed-off-by: yintao <yintao@xiaomi.com>
Signed-off-by: Bowen Wang <wangbowen6@xiaomi.com>
Add open-amp patch 0019 to extend fw_rsc_config with h2r_buf_addr
and r2h_buf_addr fields, allowing the host and remote sides to
specify buffer physical addresses through the resource table.
This enables splitting the shared memory pool into separate regions.

Also update open-amp.defs and open-amp.cmake to apply the new patch
during the open-amp build process.

Signed-off-by: Bowen Wang <wangbowen6@xiaomi.com>

CV-Bowen commented Mar 4, 2026

@xiaoxiang781216 @anchao

github-actions bot added the Area: Drivers and Size: M labels Mar 4, 2026

anchao (Contributor) commented Mar 5, 2026

@CV-Bowen Thanks. Even with this commit, the legacy rptun driver still requires extensive modifications to adapt to the VIRTIO_RPMSG_F_BUFADDR mechanism. Could we provide a sim demo to help other users get familiar with this configuration?
Additionally, @acassis, the new rpmsg virtio framework will introduce a breaking change: it requires all rpmsg users to modify their private drivers, which sends a very unfriendly signal to developers.

@xiaoxiang781216 xiaoxiang781216 merged commit 6b13940 into apache:master Mar 5, 2026
40 checks passed