Conversation

@kou kou commented Nov 13, 2017

arrow-gpu isn't required. If arrow-gpu.pc isn't installed, GPU support is just ignored.

@wesm wesm left a comment

+1, very nice! The GPU API is still experimental; I think we should spend some more time expanding the C++ library before we advance the bindings much further.

*
* Since: 0.8.0
*/
GArrowGPUCUDAIPCMemoryHandle *

Member

This isn't tested yet; we should try to set up some tests to make sure the IPC handle works.

arrow::gpu::CudaDeviceManager *manager;
auto status = arrow::gpu::CudaDeviceManager::GetInstance(&manager);
std::shared_ptr<arrow::gpu::CudaHostBuffer> arrow_buffer;
status = manager->AllocateHost(size, &arrow_buffer);

Member

How do you anticipate providing for multiple contexts? See ARROW-1423. I don't yet understand the precise semantics around multiple contexts and multiple GPUs, or how to deal with CUcontext objects created by other libraries (e.g. I would like to get this stuff working in MapD, which creates and manages its own contexts: https://github.com/mapd/mapd-core).

Member

We can leave this aspect as TBD. Merging this PR in the meantime

Member Author

I haven't written any programs with this API yet, but the current API (where arrow::gpu::CudaContext is received explicitly as an argument) doesn't seem bad.
I think that we can handle multiple contexts with the current API.

I hope that the Arrow API doesn't export any CUDA API such as CUcontext (i.e. doesn't include cuda.h) for integration with other libraries, since that would complicate building programs a bit. I hope that we keep using void * for CUDA objects, like we did for CudaIpcMemHandle.

An arrow::gpu::CudaContext::FromBuffer(const void *opaque_context, std::shared_ptr<CudaContext>* context) API may be a good way to use a context created by another library. It's similar to CudaIpcMemHandle::FromBuffer(). A rough sketch of that idea is below.

It would be better to discuss this in a separate issue later; a closed pull request isn't a very visible place.
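
A hypothetical sketch of the proposed entry point (illustrative only; this name and signature are not part of the arrow::gpu API in this PR):

#include <memory>

#include <arrow/status.h>

namespace arrow {
namespace gpu {

class CudaContext {
 public:
  // Wrap a CUcontext created by another library (e.g. MapD). The context is
  // passed as an opaque pointer so that Arrow headers never have to include
  // cuda.h, mirroring how CudaIpcMemHandle::FromBuffer() takes an opaque
  // serialized handle.
  static Status FromBuffer(const void* opaque_context,
                           std::shared_ptr<CudaContext>* context);
};

}  // namespace gpu
}  // namespace arrow

// The caller that owns the CUcontext would pass it as a void*:
//
//   std::shared_ptr<arrow::gpu::CudaContext> context;
//   auto status = arrow::gpu::CudaContext::FromBuffer(opaque_cu_context, &context);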

buffer = ArrowGPU::CUDABuffer.new(context, handle)
File.open(#{output.path.dump}, "w") do |output|
output.print(buffer.copy_to_host(0, 6).to_s)
end

Member

Ah, I see you did test the IPC! We should try to get a C++ test working too; it looked a bit tricky with googletest, so I didn't get it working yet.
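
As a starting point, a minimal sketch of such a test might look like the following. The method names are assumed from the arrow::gpu API referenced in this PR (ExportForIpc, Serialize, FromBuffer, OpenIpcBuffer), and the cross-process half is only outlined in comments, because a CUDA IPC handle normally has to be opened from a process other than the one that exported it.

#include <memory>

#include <gtest/gtest.h>

#include <arrow/buffer.h>
#include <arrow/memory_pool.h>
#include <arrow/gpu/cuda_api.h>

TEST(TestCudaBufferIpc, ExportAndSerializeHandle) {
  arrow::gpu::CudaDeviceManager* manager;
  ASSERT_TRUE(arrow::gpu::CudaDeviceManager::GetInstance(&manager).ok());

  std::shared_ptr<arrow::gpu::CudaContext> context;
  ASSERT_TRUE(manager->GetContext(0, &context).ok());

  // Allocate device memory and export an IPC handle for it.
  std::shared_ptr<arrow::gpu::CudaBuffer> buffer;
  ASSERT_TRUE(context->Allocate(64, &buffer).ok());

  std::shared_ptr<arrow::gpu::CudaIpcMemHandle> handle;
  ASSERT_TRUE(buffer->ExportForIpc(&handle).ok());

  // Serialize the handle so it can be sent to another process
  // (for example over a pipe or through a file).
  std::shared_ptr<arrow::Buffer> serialized;
  ASSERT_TRUE(handle->Serialize(arrow::default_memory_pool(), &serialized).ok());
  ASSERT_GT(serialized->size(), 0);

  // The receiving process would then reconstruct and open the handle:
  //
  //   std::shared_ptr<arrow::gpu::CudaIpcMemHandle> received;
  //   arrow::gpu::CudaIpcMemHandle::FromBuffer(serialized->data(), &received);
  //   std::shared_ptr<arrow::gpu::CudaBuffer> shared;
  //   context->OpenIpcBuffer(*received, &shared);
}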

Member Author

Yes! Sure!

@wesm wesm closed this in 8f2d152 Nov 14, 2017
@kou kou deleted the glib-gpu branch November 14, 2017 13:47