
[spirv] Move external arrays into separate buffers #4121

Merged
merged 6 commits into from
Jan 27, 2022

Conversation

bobcao3
Collaborator

@bobcao3 bobcao3 commented Jan 25, 2022

Related issue = #3642

First step towards ndarray support on SPIR-V backends.

Also fixed a few small things to speed up compilation a bit.

cc: @xwang186

@bobcao3
Collaborator Author

bobcao3 commented Jan 25, 2022

/format


@bobcao3 bobcao3 changed the title [vulkan] Move external arrays into separate buffers [spirv] Move external arrays into separate buffers Jan 25, 2022
@bobcao3
Collaborator Author

bobcao3 commented Jan 25, 2022

/rerun

Contributor

@ailzhang ailzhang left a comment


Awesome thanks!

@@ -5,6 +5,7 @@
#include <functional>
#include <optional>
#include <atomic>
#include <stack>
Contributor


OOC: is this necessary?

Collaborator Author


It's used in this class.

@@ -427,9 +441,24 @@ void VkRuntime::launch_kernel(KernelHandle handle, RuntimeContext *host_ctx) {
&ti_kernel->ti_kernel_attribs().ctx_attribs, host_ctx, device_,
host_result_buffer_, ctx_buffer.get(), ctx_buffer_host.get());

std::unordered_map<int, DeviceAllocation> ext_arrays;
Contributor


FYI in opengl we don't redo device allocation if the size doesn't change. maybe it helps with perf a little bit? https://github.com/taichi-dev/taichi/blob/master/taichi/backends/opengl/opengl_api.cpp#L400 (can be a followup ofc)

Collaborator Author


Let's leave it for later; the balance between memory usage and allocation performance is hard to get right.
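The reuse strategy @ailzhang points to could be sketched roughly as below. This is a hypothetical illustration with stand-in types (`DeviceAllocation` here is a dummy struct, and `AllocCache` is an invented helper, not the actual Taichi runtime API): cache one allocation per external-array argument index and only reallocate when the requested size changes.

```cpp
#include <cassert>
#include <cstddef>
#include <unordered_map>

// Stand-in for the runtime's device-allocation handle; the real
// Taichi DeviceAllocation type differs.
struct DeviceAllocation {
  size_t size = 0;
  int id = -1;
};

// Hypothetical cache: one device allocation per external-array arg
// index, reused across launches while the size stays the same.
class AllocCache {
 public:
  DeviceAllocation &get(int arg_index, size_t size) {
    auto it = cache_.find(arg_index);
    if (it != cache_.end() && it->second.size == size) {
      return it->second;  // cache hit: skip reallocation
    }
    // Size changed (or first use): make a fresh allocation.
    DeviceAllocation alloc{size, next_id_++};
    return cache_[arg_index] = alloc;
  }

  // Number of real allocations performed so far.
  int allocations_made() const { return next_id_; }

 private:
  std::unordered_map<int, DeviceAllocation> cache_;
  int next_id_ = 0;
};
```

The trade-off mentioned above shows up here directly: the cache never shrinks, so it favors launch speed over memory footprint.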

void *host_ptr = host_ctx_->get_arg<void *>(i);
std::memcpy(host_ptr, device_ptr, ret.stride);
// void *host_ptr = host_ctx_->get_arg<void *>(i);
// std::memcpy(host_ptr, device_ptr, ret.stride);


We could remove useless comments in the code.

Member

@k-ye k-ye left a comment


LGTM! Please resolve @xwang186 's comment :-)

Context,
ListGen,
};
enum class BufferType { Root, GlobalTmps, Context, ListGen, ExtArr };
Member


OOC, do we really support ListGen buffer in SPIR-V codegen..?

Collaborator Author


We have very low-level dense->bitmasked support.
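The `ExtArr` addition to `BufferType` is what lets each external array live in its own buffer. One way such a scheme can work (this binding layout is a hypothetical illustration, not the actual SPIR-V codegen's descriptor assignment) is to give the fixed buffers low descriptor bindings and offset each external array by its argument index:

```cpp
#include <cassert>
#include <cstdint>

// Mirrors the enum from the diff.
enum class BufferType { Root, GlobalTmps, Context, ListGen, ExtArr };

// Hypothetical binding scheme: fixed buffers get bindings 0..3, and
// each external array gets its own slot after them, so every array
// binds as a separate buffer instead of sharing one.
constexpr uint32_t binding_of(BufferType type, uint32_t ext_arr_index = 0) {
  switch (type) {
    case BufferType::Root:       return 0;
    case BufferType::GlobalTmps: return 1;
    case BufferType::Context:    return 2;
    case BufferType::ListGen:    return 3;
    case BufferType::ExtArr:     return 4 + ext_arr_index;
  }
  return ~0u;  // unreachable
}
```

Keying the binding on the argument index is what makes per-array buffers possible without growing the fixed buffer list.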

@bobcao3
Collaborator Author

bobcao3 commented Jan 27, 2022

/format

@bobcao3
Collaborator Author

bobcao3 commented Jan 27, 2022

/rerun

@ailzhang ailzhang merged commit c9baf93 into taichi-dev:master Jan 27, 2022