
Can't get CUDA intellisense to work with compile_commands.json #8091

Closed
xq114 opened this issue Sep 2, 2021 · 11 comments
Labels: bug, Feature: CUDA, fixed, Language Service, quick fix

Comments

xq114 commented Sep 2, 2021

Bug type: Language Service

Describe the bug

  • OS and Version: Debian 11
  • VS Code Version: 1.59.1
  • C/C++ Extension Version: 1.6.0
  • Other extensions you installed (and if the issue persists after disabling them): Nsight Visual Studio Code Edition
  • If using SSH remote, specify OS of remote machine:
  • A clear and concise description of what the bug is, including information about the workspace (i.e. is the workspace a single project or multiple projects, size of the project, etc).

When I provide compile commands for CUDA source files, IntelliSense fails to work correctly for those files in VS Code. Specifiers like __global__ and built-in variables like blockDim are not recognized. I've made a minimal example to reproduce it.

Steps to reproduce

  1. Open a folder and add the source files.
  2. Open c_cpp_properties.json and set the compileCommands property.
  3. Provide compile_commands.json.
  4. See the error.

Expected behavior

IntelliSense works with CUDA source files and compile_commands.json.

Code sample and logs

  • Code sample
#include <cuda_runtime.h>

__global__ void
vectorAdd(const float *A, const float *B, float *C, int numElements)
{
    int i = blockDim.x * blockIdx.x + threadIdx.x;

    if (i < numElements)
    {
        C[i] = A[i] + B[i];
    }
}
[{
    "directory": "/home/xq114/Freespace/cuda-test",
    "arguments": ["/usr/local/cuda/bin/nvcc", "-c", "-Xcompiler", "-fPIE", "-I", "/usr/local/cuda/include", "-m64", "-o", "vectorAdd.o", "vectorAdd.cu"],
    "file": "vectorAdd.cu"
}]
  • Configurations in c_cpp_properties.json
{
    "configurations": [
        {
            "name": "Linux",
            "includePath": [
                "${workspaceFolder}/**"
            ],
            "compilerPath": "/usr/local/cuda/bin/nvcc",
            "defines": [],
            "cStandard": "gnu17",
            "cppStandard": "gnu++14",
            "intelliSenseMode": "linux-gcc-x64",
            "compileCommands": "${workspaceFolder}/compile_commands.json"
        }
    ],
    "version": 4
}
  • Logs from running C/C++: Log Diagnostics from the VS Code command palette
-------- Diagnostics - 2/9/2021, 3:44:19 pm
Version: 1.6.0
Current Configuration:
{
    "name": "Linux",
    "includePath": [
        "${workspaceFolder}/**"
    ],
    "compilerPath": "/usr/local/cuda/bin/nvcc",
    "defines": [],
    "cStandard": "gnu17",
    "cppStandard": "gnu++14",
    "intelliSenseMode": "linux-gcc-x64",
    "compileCommands": "${workspaceFolder}/compile_commands.json",
    "compilerPathIsExplicit": true,
    "cStandardIsExplicit": true,
    "cppStandardIsExplicit": true,
    "intelliSenseModeIsExplicit": true,
    "compilerArgs": [],
    "browse": {
        "path": [
            "${workspaceFolder}/**"
        ],
        "limitSymbolsToIncludedHeaders": true
    }
}
Translation Unit Mappings:
[ /home/xq114/Freespace/cuda-test/vectorAdd.cu ]:
    /home/xq114/Freespace/cuda-test/vectorAdd.cu
Translation Unit Configurations:
[ /home/xq114/Freespace/cuda-test/vectorAdd.cu ]:
    Process ID: 65817
    Memory Usage: 16 MB
    Compiler Path: /usr/bin/gcc
    Includes:
        /usr/local/cuda-11.4/targets/x86_64-linux/include
        /usr/include/c++/10
        /usr/include/x86_64-linux-gnu/c++/10
        /usr/include/c++/10/backward
        /usr/lib/gcc/x86_64-linux-gnu/10/include
        /usr/local/include
        /usr/include/x86_64-linux-gnu
        /usr/include
    Standard Version: c++14
    IntelliSense Mode: linux-gcc-x64
    Other Flags:
        --g++
        --gnu_version=100201
        --cuda
    compile_commands.json entry:
        directory: /home/xq114/Freespace/cuda-test
        file: vectorAdd.cu
        arguments:
            /usr/local/cuda/bin/nvcc
            -c
            -Xcompiler
            -fPIE
            -I
            /usr/local/cuda/include
            -m64
            -o
            vectorAdd.o
            vectorAdd.cu
Total Memory Usage: 16 MB
Browse Paths from compile_commands.json, from workspace folder: /home/xq114/Freespace/cuda-test
    /usr/local/cuda-11.4/targets/x86_64-linux/include

------- Workspace parsing diagnostics -------
Number of files discovered (not excluded): 4224
Number of files parsed: 1145
Unable to find host compile command in output of nvcc.

Screenshots

[Screenshot: IntelliSense works without compile_commands.json]

[Screenshot: IntelliSense does not work with compile_commands.json]

Additional context

sean-mcmanus (Collaborator) commented Sep 2, 2021

CUDA IntelliSense works in general with compile commands (i.e. our automated test for that is working), but there might be something special about the arguments used that is causing it to fail. If you set your C_Cpp.loggingLevel to "Debug" and look in the C/C++ output (not the Diagnostics output) after "Invoking nvcc with command line:", it may show more info on what is going wrong. You might double-check whether the "-I", "/usr/local/cuda/include" argument is causing the failure (i.e. whether the path exists), although I don't see that include being added to your includePath. It's also possible the "-o", "vectorAdd.o" arguments are causing the issue. Otherwise, @Colengms might investigate more next week.
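As a reminder of where that setting lives, here is a minimal sketch of a workspace .vscode/settings.json entry (user-level settings work as well; VS Code settings files accept comments):

{
    // "Debug" makes the C/C++ extension log the full nvcc invocation
    // and its output to the C/C++ output channel.
    "C_Cpp.loggingLevel": "Debug"
}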

xq114 (Author) commented Sep 3, 2021

It seems -o caused the failure. The command vscode-cpptools actually executed is

"/usr/local/cuda/bin/nvcc" -c -Xcompiler -fPIE -I /usr/local/cuda/include -m64 -o "/home/xq114/.config/Code/User/workspaceStorage/6672c0b90e76388acdf83bf2631af162/ms-vscode.cpptools/nvcc_temp/temp.cu" -c -x cu -odir "/home/xq114/.config/Code/User/workspaceStorage/6672c0b90e76388acdf83bf2631af162/ms-vscode.cpptools/nvcc_temp" -keep -keep-dir "/home/xq114/.config/Code/User/workspaceStorage/6672c0b90e76388acdf83bf2631af162/ms-vscode.cpptools/nvcc_temp" -v

which would give the error message

nvcc fatal   : No input files specified; use option --help for more information

sean-mcmanus (Collaborator) commented Sep 3, 2021

Okay, that helps -- it looks like it's caused by a typo of "--o" instead of "-o". We should have a fix for 1.7.0-insiders.

@sean-mcmanus sean-mcmanus assigned sean-mcmanus and unassigned Colengms Sep 3, 2021
@sean-mcmanus sean-mcmanus added the quick fix and fixed labels Sep 3, 2021
sean-mcmanus (Collaborator) commented

We've made a fix for 1.7.0-insiders -- if it's not fixed in that release, we'll need to investigate more.

@sean-mcmanus sean-mcmanus modified the milestones: 1.7.0, 1.7.0-insiders Sep 8, 2021
xq114 (Author) commented Oct 2, 2021

Sorry, I'm not able to test this on my Linux machine right now (I'm on vacation). On Windows the error persists. My compile_commands.json reads

[{
  "directory": "C:\\Users\\xq114\\_tmp\\snippets\\testcuda",
  "arguments": ["C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.4\\bin\\nvcc.exe", "-c", "-I", "C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.4\\include", "-m64", "-o", "build\\.objs\\main\\windows\\x64\\release\\main.cu.obj", "main.cu"],
  "file": "main.cu"
}]

and the error message says

invoking nvcc: "C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.4\\bin\\nvcc.exe" -ccbin "C:/Program Files (x86)/Microsoft Visual Studio/2019/Community/VC/Tools/MSVC/14.29.30133/bin/Hostx64/x64/cl.exe" -c -I C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.4\include -m64 -o build\.objs\main\windows\x64\release\main.cu.obj "c:\\Users\\xq114\\AppData\\Roaming\\Code\\User\\workspaceStorage\\6f036d107c9a60e7947578e0ee822493\\ms-vscode.cpptools\\nvcc_temp\\temp.cu" -c -x cu -odir "c:\\Users\\xq114\\AppData\\Roaming\\Code\\User\\workspaceStorage\\6f036d107c9a60e7947578e0ee822493\\ms-vscode.cpptools\\nvcc_temp" -keep -keep-dir "c:\\Users\\xq114\\AppData\\Roaming\\Code\\User\\workspaceStorage\\6f036d107c9a60e7947578e0ee822493\\ms-vscode.cpptools\\nvcc_temp" -v
Unable to find host compile command in output of nvcc.

The command outputs the following result for me:

nvcc fatal   : A single input file is required for a non-link phase when an outputfile is specified

Also, everything works fine without the -o parameter.

xq114 (Author) commented Oct 2, 2021

This is caused by the spaces in the flag -I C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.4\\include. VS Code removed the quotes automatically, which was incorrect.

Colengms (Collaborator) commented Oct 4, 2021

Hi @xq114 . That issue would seem to be due to cpptools assuming arguments in the arguments field in compile_commands.json include their own shell quoting (or escaping of spaces), if required. The compile_commands.json specification explicitly states that shell quoting is expected in the compile command, in the description of the command field, but is unfortunately vague about whether that requirement is different for the arguments field. https://clang.llvm.org/docs/JSONCompilationDatabase.html . The way I read it, the requirement seems to extend to the arguments field.

We could change this to instead assume that the arguments field must not contain shell quoting/escaping. However, I believe there are some tools that generate compile_commands.json files that make the first assumption (which is why it currently works this way). Unfortunately, it's not possible to support both assumptions. IMHO, it would be a good idea to ask LLVM to clarify the requirement for this field. That would help us to push back when bugs are reported that make the other assumption.
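For illustration, here is a hypothetical pair of entries (made-up paths) showing the two forms. In the command form the command line is a single string, so an include path containing spaces has to carry its own shell quoting. In the arguments form each element is a separate argument, and the open question is whether those elements are also expected to carry shell quoting/escaping:

[
    {
        "directory": "C:\\proj",
        "command": "nvcc.exe -c -I \"C:\\Program Files\\CUDA\\include\" -o main.obj main.cu",
        "file": "main.cu"
    },
    {
        "directory": "C:\\proj",
        "arguments": ["nvcc.exe", "-c", "-I", "C:\\Program Files\\CUDA\\include", "-o", "main.obj", "main.cu"],
        "file": "main.cu"
    }
]

cpptools currently reads the arguments form as if each element were already shell-quoted, which is why the unquoted path with spaces in the second entry gets split into multiple arguments when nvcc is re-invoked.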

xq114 (Author) commented Oct 5, 2021

I escaped the argument and everything finally works fine! Thanks a lot. I do agree that the LLVM specification should be more precise about this.
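For reference, a sketch of the kind of escaping that works under the current behavior (reconstructed, so the exact entry may differ): the space-containing include path keeps its own embedded quotes inside the JSON string, so the quotes survive when nvcc is re-invoked.

[{
  "directory": "C:\\Users\\xq114\\_tmp\\snippets\\testcuda",
  "arguments": ["C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.4\\bin\\nvcc.exe", "-c", "-I", "\"C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.4\\include\"", "-m64", "-o", "build\\.objs\\main\\windows\\x64\\release\\main.cu.obj", "main.cu"],
  "file": "main.cu"
}]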

@xq114 xq114 closed this as completed Oct 5, 2021
xq114 (Author) commented Oct 5, 2021

I find that for C++ files, spaces in the arguments field are handled correctly:

[{
  "directory": "C:\\test space",
  "arguments": ["C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\VC\\Tools\\MSVC\\14.29.30133\\bin\\HostX64\\x64\\cl.exe", "/c", "/EHsc", "/nologo", "/Ii n c", "/Fobuild\\src.obj", "src.cpp"],
  "file": "src.cpp"
}]

That flag gives the following result:

Folder C:/TEST SPACE/I N C/ will be indexed

It seems a flag with whitespace is acceptable there. How does that case differ from what happens in CUDA flag processing? In other words, why can't the include flags be recognized first and handled automatically?

Colengms (Collaborator) commented Oct 5, 2021

How does the case differ from what happened in CUDA flag processing?

Hi @xq114 . The difference is that the CUDA scenario involves invoking an executable (nvcc). We are invoking nvcc using an OS API that evaluates the command line as the shell would (on Windows). The shell parses the command line to discern the arguments, which requires quoting if the arguments have spaces or quotes embedded within them.

Depending on how compiler arguments are provided, they may or may not already have shell quoting/escaping present. If specified via a "command line", that form implies shell quoting/escaping is present. If arguments are provided in a list (such as the arguments field of compile_commands.json, the compilerArgs field in c_cpp_properties.json, or args in a tasks.json), things get complicated.

There are shell features people are accustomed to being able to use in arguments that require shell parsing. For example, backticks in a bash command line (to invoke another process and use its output) are shell-processed, and quotes mid-argument (such as -DMACRO="MULTI WORD MACRO") are shell-processed. So, it may be desirable to support shell parsing for args in those lists.
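To make that concrete, here is a hypothetical entry (made-up paths) where the element only means what its author intended after shell parsing: the embedded quotes are shell quoting, and stripping them yields the macro value MULTI WORD MACRO.

[{
    "directory": "/home/user/proj",
    "arguments": ["gcc", "-c", "-DMACRO=\"MULTI WORD MACRO\"", "-o", "main.o", "main.c"],
    "file": "main.c"
}]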

Internally, we may need to process the contents of arguments (such as -I or -D arguments) in order to use their values. We may also need to pass those values, with the original shell quoting/escaping, to an executable such as a compiler (nvcc). So, we need to both remove shell quoting/escaping and add (or preserve) it. We may have some inconsistencies in this logic, which I'm looking at improving in the context of this issue: #6773

@github-actions github-actions bot locked and limited conversation to collaborators Nov 19, 2021