
VITIS-8730: documentation of read, write device memory directly #7616

Merged
merged 2 commits into Xilinx:master on Jul 6, 2023

Conversation

@vboggara-xilinx (Contributor) commented on Jul 5, 2023

Problem solved by the commit

Added documentation of read/write device memory directly
#7615 - Closed

Bug / issue (if any) fixed, which PR introduced the bug, how it was discovered

https://jira.xilinx.com/browse/VITIS-8730

How problem was solved, alternative solutions (if any) and why they were rejected

For device-only buffers (created with the xrt::bo::flags::device_only flag), the xrt::bo::sync() operation is not required; xrt::bo::write() (or xrt::bo::read()) alone is sufficient for the DMA operation. Because a device-only buffer has no host backing storage, xrt::bo::write() (or xrt::bo::read()) performs the DMA operation to (or from) the device memory directly.

Below is an example of creating a device-only buffer.

       xrt::bo::flags device_flags = xrt::bo::flags::device_only;
       auto device_only_buffer = xrt::bo(device, size_in_bytes, device_flags, bank_grp_arg0);

Here is how the xrt::bo::write() and xrt::bo::read() APIs read from and write to a device-only buffer directly, since there is no host backing storage (a combined sketch follows this list):

  • xrt::bo::write(const void* src, size_t size, size_t seek): copies data from src directly to the device buffer.
  • xrt::bo::read(void* dst, size_t size, size_t skip): copies data from the device buffer directly to dst.
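
For illustration, here is a minimal sketch that puts the pieces together. The run_kernel() step is hypothetical, and device, size_in_bytes, and bank_grp_arg0 are assumed from the creation example above:

       #include <vector>
       #include "xrt/xrt_bo.h"
       #include "xrt/xrt_device.h"

       void device_only_io(xrt::device& device, size_t size_in_bytes, int bank_grp_arg0)
       {
           xrt::bo::flags device_flags = xrt::bo::flags::device_only;
           auto device_only_buffer = xrt::bo(device, size_in_bytes, device_flags, bank_grp_arg0);

           std::vector<char> src(size_in_bytes, 0);
           std::vector<char> dst(size_in_bytes);

           // write() DMAs straight into device memory; with no host backing
           // storage there is no sync(XCL_BO_SYNC_BO_TO_DEVICE) call to make
           device_only_buffer.write(src.data(), size_in_bytes, /*seek=*/0);

           // run_kernel(...);  // hypothetical kernel that consumes and refills the buffer

           // read() DMAs straight out of device memory into dst
           device_only_buffer.read(dst.data(), size_in_bytes, /*skip=*/0);
       }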

Risks (if any) associated with the changes in the commit

Low

What has been tested and how, request additional testing if necessary

n/a

Documentation impact (if any)

Yes

@gbuildx (Collaborator) commented on Jul 5, 2023

Build Passed!

@uday610 (Collaborator) commented on Jul 5, 2023

@vboggara-xilinx, you have added this in the same section where we are showing DMA code with a regular buffer.

       auto input_buffer = xrt::bo(device, buffer_size_in_bytes, bank_grp_idx_0);
       // Prepare the input data
       int buff_data[data_size];
       for (auto i = 0; i < data_size; ++i) {
           buff_data[i] = i;
       }

       input_buffer.write(buff_data);
       input_buffer.sync(XCL_BO_SYNC_BO_TO_DEVICE);

So, for better continuity with the ongoing topic, I think you can just write this:

For device-only buffers (created with the xrt::bo::flags::device_only flag), the xrt::bo::sync operation is not required; xrt::bo::write (or xrt::bo::read) alone is sufficient for the DMA operation. Because a device-only buffer has no host backing storage, xrt::bo::write (or xrt::bo::read) performs the DMA operation to (or from) the device memory directly.

--
That's it.

@uday610 (Collaborator) left a comment


As we are adding this to an ongoing topic, I think just adding the text below gives a better continuation. Nothing more or less is needed.

For device-only buffers (created with the xrt::bo::flags::device_only flag), the xrt::bo::sync operation is not required; xrt::bo::write (or xrt::bo::read) alone is sufficient for the DMA operation. Because a device-only buffer has no host backing storage, xrt::bo::write (or xrt::bo::read) performs the DMA operation to (or from) the device memory directly.

Signed-off-by: vboggara <vboggara@xilinx.com>
Signed-off-by: vboggara <vboggara@xilinx.com>
@gbuildx (Collaborator) commented on Jul 5, 2023

Build Passed!

@chvamshi-xilinx merged commit 54213d6 into Xilinx:master on Jul 6, 2023