
[TARGET] add amd_gpu target #5645

Closed
wants to merge 6 commits

Conversation

mei-ye (Contributor) commented May 21, 2020

This target uses the Vulkan runtime. Once this change is merged upstream, a tuning log will be added to tophub. The change contains minimal target-specific code.

mei-ye (Contributor, Author) commented May 21, 2020

@tqchen

tqchen (Member) commented May 21, 2020

Thanks @mei-ye. My understanding is that this is for AMD integrated graphics. Given that the programming model is still Vulkan, we might want to discuss how to distinguish it from the ROCm part. Perhaps an alternative is to directly use vulkan -device=apu, so that we can still have model-specific optimizations (as in the case of Mali on ARM) without introducing a new target key.

See example in https://github.com/apache/incubator-tvm/blob/master/python/tvm/target/target.py#L155
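The vulkan -device=apu suggestion can be illustrated with a small, self-contained parser for TVM-style target strings. This is a pure-Python sketch of the string convention discussed above, not TVM's actual Target implementation:

```python
def parse_target(target_str):
    """Split a TVM-style target string like 'vulkan -device=apu -model=gfx900'
    into its target key and an option dictionary."""
    parts = target_str.split()
    key = parts[0]
    opts = {}
    for part in parts[1:]:
        # Strip leading dashes, then split 'name=value' at the first '='.
        name, _, value = part.lstrip("-").partition("=")
        opts[name] = value
    return key, opts

key, opts = parse_target("vulkan -device=apu -model=gfx900")
print(key, opts)  # vulkan {'device': 'apu', 'model': 'gfx900'}
```

Under this scheme the target key stays vulkan, and the APU is identified purely by the -device and -model attributes.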

tqchen changed the title from "add amd_gpu target" to "[TARGET] add amd_gpu target" on May 21, 2020
@@ -194,6 +194,10 @@ def webgpu(self, dev_id=0):
"""Construct WebGPU device."""
return self.context(15, dev_id)

def amd_gpu(self, dev_id=0):
A reviewer (Member) commented on this line:

We can directly use vulkan, so this is not necessary.
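The diff above addresses devices by integer device-type codes (the snippet uses 15 for WebGPU). A minimal sketch of why reusing the existing vulkan entry avoids adding a new one; the codes other than 15 are assumptions for illustration, not TVM's actual table:

```python
# Hypothetical device-type table; only WebGPU's code (15) appears in the
# diff above, the remaining entries are illustrative assumptions.
DEVICE_TYPES = {"cpu": 1, "gpu": 2, "vulkan": 7, "webgpu": 15}

def context(device_name, dev_id=0):
    """Return a (device_type_code, device_id) pair for a known device key."""
    return DEVICE_TYPES[device_name], dev_id

# An AMD APU can reuse the vulkan device type instead of a new code:
print(context("vulkan", 0))  # (7, 0)
```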

tmoreau89 (Contributor) commented:

@mei-ye great to see this contribution toward support for the AMD APU! @tqchen's comments are correct: ideally we would want to use the vulkan -device=apu target, perhaps with the optional -model=gfx900 flag; the same scheme could specify the opencl or vulkan backend (it might help with opencl).

mei-ye (Contributor, Author) commented May 22, 2020

Tianqi and Thierry,

Thanks for your advice. I can see your points, especially the benefit of supporting both vulkan and opencl. But I also have some concerns:

  1. We want to get rid of data movement between the CPU and GPU. Do we need a new runtime for that? If so, having a new target would be more convenient.
  2. Target-dependent condition checks rely heavily on string matching of target names. Without a new target, we would need to check both the target name and the device_name. Is device_name available throughout TVM?
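Concern 2 can be sketched as dispatch that checks the device attribute before falling back to the target key. The Target dataclass and schedule names below are hypothetical stand-ins, not TVM's actual classes:

```python
from dataclasses import dataclass

@dataclass
class Target:
    kind: str               # target key, e.g. "vulkan" or "opencl"
    device_name: str = ""   # optional device attribute, e.g. "apu" or "mali"

def schedule_for(target):
    """Pick a schedule by checking the device attribute first, then the key."""
    if target.device_name == "apu":
        return "apu_schedule"             # device-specific optimization
    if target.kind == "vulkan":
        return "generic_vulkan_schedule"  # shared vulkan path
    return "fallback_schedule"

print(schedule_for(Target("vulkan", "apu")))  # apu_schedule
print(schedule_for(Target("vulkan")))         # generic_vulkan_schedule
```

This is the pattern the Mali backend uses: one target key, with the device attribute selecting the specialized schedules.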

mei-ye (Contributor, Author) commented May 22, 2020

Also, without a new target, we would need to merge the logs for all APU models into opencl_v0.xx.log.

tqchen (Member) commented May 22, 2020

We don't need a new target key for autotvm and logs; see the case of mali: https://github.com/uwsampl/tophub/blob/master/tophub/mali_v0.04.log The device name is preserved throughout TVM.

In terms of future runtime data sharing between the CPU and GPU: I think it can be solved directly by extending the opencl or vulkan runtime to support shared memory; likely the memory scope would become vulkan_shared, analogous to pinned host memory in CUDA.
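The shared-memory idea can be sketched as an allocator that branches on the requested memory scope. The scope name vulkan_shared comes from the comment above; the allocator itself is a pure-Python stand-in, not an existing TVM API:

```python
def allocate(nbytes, scope="global"):
    """Pretend-allocate nbytes; 'vulkan_shared' would map to host-visible
    memory shared between CPU and GPU, analogous to CUDA pinned memory,
    so no explicit copy between CPU and GPU buffers is needed."""
    if scope == "vulkan_shared":
        return {"bytes": nbytes, "host_visible": True}
    return {"bytes": nbytes, "host_visible": False}  # device-local

buf = allocate(1024, scope="vulkan_shared")
print(buf["host_visible"])  # True
```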

mei-ye (Contributor, Author) commented May 22, 2020

Yes, we don't need a new target key for the autotvm log, but we would need opencl_vxx.log to combine logs for different devices, right?

tqchen (Member) commented May 22, 2020

See the example case above (the link to the mali log), which uses opencl but has a separate log of its own.
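The naming scheme implied by the mali example can be sketched as follows: the device name, not the target key, picks the tophub log file. The filename mali_v0.04.log comes from the link above; the apu filename is hypothetical:

```python
def tophub_log(target_key, device_name, version):
    """Per-device log name: the device, when set, picks the file;
    otherwise fall back to the target key."""
    name = device_name if device_name else target_key
    return f"{name}_v{version}.log"

print(tophub_log("opencl", "mali", "0.04"))  # mali_v0.04.log
print(tophub_log("vulkan", "apu", "0.01"))   # apu_v0.01.log (hypothetical)
```

This shows why the APU would not need to share opencl_v0.xx.log: its device name would give it a log of its own.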

mei-ye closed this May 22, 2020