
[hip][caffe2] Enable detectron on AMD GPU #17862

Closed
wants to merge 2 commits into from

Conversation

@ghost ghost commented Mar 11, 2019

No description provided.

@ezyang ezyang requested a review from bddppq March 11, 2019 20:11

@bddppq bddppq left a comment


Nice work

@@ -19,6 +20,17 @@ if (BUILD_CAFFE2_OPS)
if (MSVC)
install(FILES $<TARGET_PDB_FILE:caffe2_detectron_ops_gpu> DESTINATION lib OPTIONAL)
endif()
elseif(USE_ROCM)
set(Caffe2_HIP_INCLUDES ${Caffe2_HIP_INCLUDES} ${CMAKE_CURRENT_SOURCE_DIR})

Adding the current directory to the include dirs list is not desired; instead we should change the includes in the source files to use paths relative to the project root (i.e. #include "modules/detectron/xxx.h" instead of #include "xxx.h").
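To illustrate the suggested include style (the header name below is a hypothetical placeholder, not an actual detectron source file):

```cpp
// Before: resolves only because CMAKE_CURRENT_SOURCE_DIR was added to the
// include path via Caffe2_HIP_INCLUDES.
// #include "some_detectron_op.h"        // hypothetical header name

// After: resolves from the project root, which is already on the include
// path, so no per-module include dir is needed.
// #include "modules/detectron/some_detectron_op.h"
```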

@@ -123,8 +123,13 @@ __global__ void PSRoIPoolForward(
roundf(offset_bottom_rois[4]) + 1.) * spatial_scale;

// Force too small ROIs to be 1x1
#ifdef __HIP_PLATFORM_HCC__
T roi_width = fmax(roi_end_w - roi_start_w, 0.1); // avoid 0

Use c10::cuda::compat::max instead, which has the right dispatch to fmax or fmaxf based on the instantiated template type (and avoids the ifdef).

@bddppq bddppq added the "module: rocm" (AMD GPU support for PyTorch) label Mar 11, 2019
@@ -63,6 +63,7 @@
"caffe2/utils/*",
"c10/cuda/*",
"c10/cuda/test/CMakeLists.txt",
"modules/detectron/*",

We can enable the entire "modules" subdirectory here; at the moment only the detectron module has GPU files, and when people add new GPU files we want them to be automatically enabled in ROCm builds.


ghost commented Mar 12, 2019

Thanks @bddppq , updated the pull request. Please review the changes.


@facebook-github-bot facebook-github-bot left a comment


@bddppq has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@bddppq bddppq self-requested a review March 12, 2019 21:28

@bddppq bddppq left a comment


LGTM

Labels
module: rocm (AMD GPU support for PyTorch); open source
4 participants