
Fixed windows MSVC build compatibility #9

Closed · wants to merge 22 commits

Conversation

yuanluxu (Contributor) commented Jan 24, 2020

Fixed a few MSVC compatibility issues (Visual Studio 2019, MSVC 19.16.27034):

  1. Replaced long with int64_t: aten::data_ptr<long> is not supported by MSVC.
  2. pytorch3d/csrc/rasterize_points/rasterize_points_cpu.cpp: the inline function is not correctly recognized by MSVC.
  3. pytorch3d/csrc/rasterize_meshes/geometry_utils.cuh: const auto kEpsilon = 1e-30; MSVC does not compile this constant into both host and device code, so it was changed to a macro.
  4. pytorch3d/csrc/rasterize_meshes/geometry_utils.cuh: const float area2 = pow(area, 2.0); the literal 2.0 is treated as a double by MSVC, which raises an error.
  5. pytorch3d/csrc/rasterize_points/rasterize_points_cpu.cpp: the return type of std::tuple<torch::Tensor, torch::Tensor> RasterizePointsCoarseCpu() does not match its declaration in rasterize_points_cpu.h.
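Fixes 3 and 4 above can be sketched in plain C++ (a minimal sketch: the names kEpsilon and area come from the list, while the helper function name is hypothetical):

```cpp
#include <cmath>

// Fix 3 (sketch): a macro is substituted textually into both the host and
// device compilation passes, so device code never references a host-only
// symbol. This mirrors the macro change described in item 3.
#define kEpsilon 1e-30f

// Fix 4 (sketch): pow(area, 2.0) promotes the computation to double under
// MSVC; squaring by multiplication keeps everything in float.
// area_squared is a hypothetical helper name, for illustration only.
float area_squared(float area) {
  return area * area;
}
```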

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Jan 24, 2020
nikhilaravi (Contributor) left a comment

Can you check any other errors raised by the circle CI build and resolve them? Thank you!
https://circleci.com/gh/facebookresearch/pytorch3d/724?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link

@@ -219,7 +219,7 @@ at::Tensor nn_points_idx_cuda(at::Tensor p1, at::Tensor p2) {
  const auto D = p1.size(2);

  AT_ASSERTM(p2.size(2) == D, "Point sets must have same last dimension.");
- auto idx = at::empty({N, P1}, p1.options().dtype(at::kLong));
+ auto idx = at::empty({N, P1}, p1.options().dtype(at::kint64_t));
nikhilaravi (Contributor):

The Circle CI build is failing because at::kint64_t is not a valid torch type. Was at::kLong causing an issue?

see: https://pytorch.org/cppdocs/api/variable_namespacetorch_1aca5be74e0c90ee0b02abcdf224231c30.html?highlight=klong

yuanluxu (Author):

Ah. Sorry this should be kept as it is.

yuanluxu (Author) left a comment

Fixed the compilation error


nikhilaravi (Contributor):

@yuanluxu Apologies for the delay in responding to this PR. We're working on Windows support, so stay tuned!

bottler (Contributor) commented Feb 18, 2020

Can I clarify the two preprocessor definitions?

I think const float kEpsilon = 1e-30f; would be fine for all compilers.

I think the problem for PixToNdc is that the same function appears in two cpu.cpp files. Can we just change inline float PixToNdc to static float PixToNdc, or put it in an anonymous namespace, in one or both places?

Thanks
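Both options bottler suggests give the function internal linkage, so each translation unit gets its own private copy and the linker no longer sees duplicate definitions. A minimal sketch (the function body is an illustrative pixel-to-NDC mapping, not copied from the repo, and both names are hypothetical variants):

```cpp
// Option A: static gives the free function internal linkage.
static float PixToNdcStatic(int i, int S) {
  // map pixel index i in [0, S) to the center of its cell in (-1, 1)
  return -1.0f + (2.0f * i + 1.0f) / S;
}

// Option B: an anonymous namespace has the same effect in C++.
namespace {
float PixToNdcAnon(int i, int S) {
  return -1.0f + (2.0f * i + 1.0f) / S;
}
}  // namespace
```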

yuanluxu (Author) commented Feb 18, 2020

@bottler

  1. For const float kEpsilon = 1e-30f;, I actually tried this and several other variants, and you get the same complaint from MSVC (see below). This might be an MSVC issue: gcc creates copies of the constant for both host and device by default, while MSVC does not.

c:\pytorch3d\pytorch3d\csrc\rasterize_meshes\geometry_utils.cuh(236): error: identifier "kEpsilon" is undefined in device code

C:/pytorch3d/pytorch3d/csrc/rasterize_meshes/rasterize_meshes.cu(390): error: identifier "kEpsilon" is undefined in device code

C:/pytorch3d/pytorch3d/csrc/rasterize_meshes/rasterize_meshes.cu(426): error: identifier "kEpsilon" is undefined in device code

C:/pytorch3d/pytorch3d/csrc/rasterize_meshes/rasterize_meshes.cu(133): error: identifier "kEpsilon" is undefined in device code

C:/pytorch3d/pytorch3d/csrc/rasterize_meshes/rasterize_meshes.cu(133): error: identifier "kEpsilon" is undefined in device code

C:/pytorch3d/pytorch3d/csrc/rasterize_meshes/rasterize_meshes.cu(141): error: identifier "kEpsilon" is undefined in device code

6 errors detected in the compilation of "C:/Users/yuanluxu/AppData/Local/Temp/tmpxft_000113dc_00000000-10_rasterize_meshes.cpp1.ii".
error: command 'C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.2\bin\nvcc.exe' failed with exit status 1

  2. Changing from inline float PixToNdc to static float PixToNdc works! I updated my fork!

bottler (Contributor) commented Feb 19, 2020

Thanks! Looks good to me.

nikhilaravi (Contributor):

@yuanluxu it seems there are conflicts with master now (some files have been updated recently). Can you fix these conflicts and update the PR? Thank you!

yuanluxu (Author):

@nikhilaravi Fixed!

nikhilaravi (Contributor):

Looks good! Thanks @yuanluxu for adding this!

facebook-github-bot:
@nikhilaravi has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

facebook-github-bot:

@yuanluxu has updated the pull request. Re-import the pull request

facebook-github-bot:
@yuanluxu has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

facebook-github-bot:
@yuanluxu merged this pull request in 9e21659.

@eclipse0922 eclipse0922 mentioned this pull request Nov 22, 2023
facebook-github-bot pushed a commit that referenced this pull request Dec 5, 2023
Summary:
Change the data type usage in the code to ensure cross-platform compatibility: long -> int64_t.

<img width="628" alt="image" src="https://github.com/facebookresearch/pytorch3d/assets/6214316/40041f7f-3c09-4571-b9ff-676c625802e9">

Tested under Win 11 and Ubuntu 22.04 with CUDA 12.1.1 and torch 2.1.1.

Related issues & PRs: #9, #1679

Pull Request resolved: #1689

Reviewed By: MichaelRamamonjisoa

Differential Revision: D51521562

Pulled By: bottler

fbshipit-source-id: d8ea81e223c642e0e9fb283f5d7efc9d6ac00d93
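The long -> int64_t change is about data models: Windows uses LLP64 (long is 32-bit) while Linux and macOS use LP64 (long is 64-bit). A quick check, independent of PyTorch (the accessor below is a hypothetical illustration, not a pytorch3d function):

```cpp
#include <cstdint>

// int64_t is 64 bits on every platform; long is 32-bit on Windows (LLP64)
// but 64-bit on Linux/macOS (LP64). Using data_ptr<int64_t>() therefore
// always matches the storage of a 64-bit integer (kLong) tensor, whereas
// data_ptr<long> breaks under MSVC.
static_assert(sizeof(std::int64_t) == 8, "int64_t is always 64 bits");

// Hypothetical portable accessor signature, for illustration only:
inline std::int64_t first_index(const std::int64_t* idx) { return idx[0]; }
```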
Labels: CLA Signed, Merged