[Relay][Strategy] Allow cuda cross compilation without physical device. #7063

Merged: 5 commits into apache:main from jwfromm's cuda_cross branch on Dec 10, 2020

Conversation

@jwfromm (Contributor) commented on Dec 8, 2020

The recent addition of tensorcore schedules has broken TVM's ability to compile for cuda on a machine without a GPU. This is because the strategy registration for tensorcores calls tvm.gpu(0).compute_version, which fails when no GPU is present. I've changed nvcc.have_tensorcore to fall back to AutotvmGlobalScope.current.cuda_target_arch when a GPU isn't present. This lets a user call something like tvm.autotvm.measure.measure_methods.set_cuda_target_arch("sm_62") to specify a cuda cross-compilation target on a machine without a GPU and build correctly.
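For example, the flow this enables looks roughly like the following minimal sketch (the Relay module is illustrative and made up; only the set_cuda_target_arch call is part of this change):

```python
import tvm
from tvm import relay
from tvm.autotvm.measure.measure_methods import set_cuda_target_arch

# Declare the CUDA architecture to cross-compile for, since
# tvm.gpu(0).compute_version cannot be queried without a device.
set_cuda_target_arch("sm_62")

# A small, made-up Relay function to compile.
x = relay.var("x", shape=(1, 3, 224, 224), dtype="float32")
w = relay.var("w", shape=(8, 3, 3, 3), dtype="float32")
mod = tvm.IRModule.from_expr(
    relay.Function([x, w], relay.nn.conv2d(x, w, padding=(1, 1)))
)

# With this change, strategy registration no longer touches a physical GPU.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="cuda")
```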

I'm not sure how to test this since it would require a CPU node that's built with the cuda toolkit. Let me know if you have an opinion on tests to add to prevent an error like this from sneaking in again.

@jwfromm (Contributor, Author) commented on Dec 8, 2020

@anwang2009 @tqchen @adelbertc Can you guys take a look at this PR?

Review thread on python/tvm/contrib/nvcc.py (outdated, resolved)
@anwang2009 (Contributor) left a comment:
lgtm afaict

@comaniac (Contributor) left a comment:

LGTM. Just a nit.

Review thread on python/tvm/contrib/nvcc.py (outdated, resolved)
@comaniac merged commit 7a3278a into apache:main on Dec 10, 2020
@comaniac (Contributor) commented:

Thanks @jwfromm @anwang2009.

TusharKanekiDey pushed a commit to TusharKanekiDey/tvm that referenced this pull request Jan 20, 2021
[Relay][Strategy] Allow cuda cross compilation without physical device. (apache#7063)

* Allow cross compilation of cuda targets without physical device.

* Formatting.

* Add warning when the architecture can't be found.

* Use target instead of autotvm arch specification (see the sketch after this list).

* Change warning message.

Co-authored-by: Ubuntu <jwfromm@jwfromm-cpu-dev.itxhlkosmouevgkdrmwxfbs5qh.xx.internal.cloudapp.net>
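
As the last two commits note, the merged version reads the architecture from the build target rather than from autotvm global state. A minimal sketch of that style, assuming the cuda target kind accepts an `-arch` attribute (check python/tvm/contrib/nvcc.py in the merged commit for the exact spelling):

```python
import tvm
from tvm import relay

# Carry the architecture on the target itself instead of autotvm state
# ("-arch=sm_62" is an assumed spelling of the target attribute).
target = tvm.target.Target("cuda -arch=sm_62")

x = relay.var("x", shape=(1, 16), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x], relay.nn.relu(x)))

with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target)
```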
trevor-m pushed a commit to neo-ai/tvm that referenced this pull request Jan 21, 2021
electriclilies pushed a commit to electriclilies/tvm that referenced this pull request Feb 18, 2021
@jwfromm deleted the cuda_cross branch on April 12, 2023