Fix clang template function call. #1177

Conversation

PatriosTheGreat

Clang can't disambiguate a template method call after ".", so we need to spell it out manually with the template keyword.
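
A minimal sketch of the pattern being fixed (illustrative names, not the exact NCCL code): inside a template, the object has a dependent type, so clang and gcc cannot tell that the member is a template unless the call says so.

    prims.process<1, 1>(scat);          // before: nvcc accepts, clang/gcc reject
    prims.template process<1, 1>(scat); // after: valid C++ for dependent names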

@kiskra-nvidia
Member

Thank you for your contribution! So did you replace nvcc with clang as the device compiler? It's not something we do internally, so it's frankly astonishing that only four lines need to be tweaked! Could you provide more details about your environment/workflow (compiler version, options used, etc.) so that I can reproduce the problem on our end? Thanks!

@ezhulenev

Yes, we use clang as both the host and device compiler. It's very close to clang HEAD.

@kiskra-nvidia
Member

Could you share the details of the compiler options you used to compile the device code? I tried it on my end but couldn't get it to cooperate; then again, that would've been with a relatively old clang version...

@ezhulenev

Our clang invocation is buried deep inside the Bazel toolchain, but the problem is trivially reproducible: https://godbolt.org/z/x3GWqeaff - clang and gcc fail to compile it, while MSVC is fine (because it uses the EDG frontend?).

@PatriosTheGreat
Author

PatriosTheGreat commented Apr 3, 2024

Hi, sorry for the delayed reply.

Here is a CUDA reproducer:

#ifndef __global__
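// Define the CUDA attribute macros only when the compiler hasn't already provided them.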
#define __host__ __attribute__((host))
#define __global__ __attribute__((global))
#define __device__ __attribute__((device))
#define __constant__ __attribute__((constant))
#endif

template<typename RealPrimitives>
struct PrimitivesWithoutDirect {
  __device__ void directSend(int* inpIx, int* outIx, int* eltN) {
    static_cast<RealPrimitives*>(this)->send(inpIx, eltN);
  }
};

template<typename T, typename RedOp, int P2p>
class Primitives:
  public PrimitivesWithoutDirect<Primitives<T, RedOp, P2p>> {
public:
    template<int Recv, int Send, typename Fn>
    __device__ __forceinline__ void process(Fn &&fn) {
    }
};

template<typename T, typename RedOp>
struct RunWorkElement {
  template<bool BcastSendNotRecv>
  struct Scatterer {
    template<int SlicePerChunk, int MinSrcs, int MaxSrcs, int MinDsts, int MaxDsts>
    __device__ __forceinline__ void operator()(
        int tid, int tn, int slice, int maxSliceSize,
        int nSrcs, void** srcPtrs, int nDsts, void** dstPtrs, int32_t* dstSizes
      ) {
    }
  };


  __device__ __forceinline__ void run() {
    Primitives<T, RedOp, 3> prims;
    Scatterer</*BcastSendNotRecv=*/true> scat;
    for (int i = 0; i < 100; ++i) {
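      // clang and gcc reject the next line: 'process' is a dependent template name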
      prims.process</*Recv=*/1, /*Send=*/1>(scat);
    }
  }
};


__global__ void run_kernel() {
  RunWorkElement<int, int> runner;
  runner.run();
}

int main() {
    run_kernel<<<1, 1>>>();
}

I'm compiling it with the command:
clang -x cuda '--cuda-gpu-arch=sm_60' '--cuda-gpu-arch=sm_61' '--cuda-gpu-arch=sm_70' '--cuda-gpu-arch=sm_80' -c sample.cu

I'm using clang 16 and getting an error:

sample.cu:49:15: error: missing 'template' keyword prior to dependent template name 'process'
        prims.process</*Recv=*/1, /*Send=*/1>(scat);
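
For reference, the call compiles once the dependent name is disambiguated, which is what this PR does:

        prims.template process</*Recv=*/1, /*Send=*/1>(scat);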

To support clang as a kernel compiler for NCCL, we have a few other small patches like this one for CUDA clang: llvm/llvm-project#73549

@kiskra-nvidia
Member

Thank you for providing the additional details! Your fix should be included in the next NCCL release!

sjeaugey added a commit that referenced this pull request Jun 19, 2024
Rework core for NVIDIA Trusted Computing
 * Compress work structs so that they are shared between channels
 * Utilize the full amount of kernel argument space permitted (4k)
   before resorting to work fifo.
 * Rework the task preprocessing phase.
 * Use a separate abortDevFlag which is kept in sync with abortFlag
   using cudaMemcpy operations.
 * Rename src/include/align.h to src/include/bitops.h

Add lazy connection establishment for collective operations
 * Move buffer allocation and connection establishment to the first
   collective operation using that algorithm.
 * Accelerate init time and reduce memory usage.
 * Avoid allocating NVLS buffers if all calls are registered.
 * Compute algo/proto in ncclLaunchCollTasksInfo early on.
 * Connect peers in ncclCollPreconnectFunc if not connected already.
 * Also move shared buffer creation to the first send/recv call.

Accelerate intra-node NVLink detection
 * Make each rank only detect NVLinks attached to its GPU.
 * Fuse XMLs to reconstruct the full NVLink topology

Add init profiling to report time spent in different init phases.
 * Report timings of bootstrap, allgather, search, connect, etc.
 * Add new "PROFILE" category for NCCL_DEBUG_SUBSYS.

Add support for PCI p2p on split PCI switches
 * Detect split PCI switches through a kernel module exposing
   switch information.
 * Update the topology XML and graph to add those inter-switch
   connections.

Add cost estimation API
 * Add a new ncclGroupEndSimulate primitive to return the estimated
   time a group would take.

Net/IB: Add separate traffic class for fifo messages
 * Add NCCL_IB_FIFO_TC to control the traffic class of fifo messages
   independently from NCCL_IB_TC.
   Merges PR #1194

Net/IB: Add support for IB router
 * Use flid instead of lid if subnets do not match
 * Warn if flid is 0

Optimizations and fixes for device network offload (unpack)
 * Double the default number of channels
 * Cache netDeviceType
 * Fix save/increment head logic to enable Tree support.

Support ncclGroupStart/End for ncclCommAbort/Destroy
 * Allow Abort/Destroy to be called within a group when managing
   multiple GPUs with a single process.

Improve Tuner API
 * Provide to the plugin the original cost table so that the plugin
   can leave unknown or disabled algo/proto combinations untouched.
 * Remove nvlsSupport and collnetSupport.

Do not print version to stdout when using a debug file
 * Also print version from all processes with INFO debug level.
   Fixes issue #1271

Fix clang warnings in NVTX headers
 * Update NVTX headers to the latest version
   Fixes issue #1270

Disable port fusion in heterogeneous systems
 * Do not fuse ports if a mix of multi-port and single-port NICs is detected.

Fix NVLS graphs search for dual NICs.
 * Fix NVLS graph search when we have more than one NIC per GPU.

Fix crash with collnetDirect
 * Add separate graph search for collnetDirect, testing alltoall paths
   and working similarly to the NVLS search.

Fix hang when nodes have different CPU types
 * Add the CPU type to the rank peer info.
 * Align all ranks on the CPU type after the first allgather.
 * Only use the aligned CPU type for all tuning operations.
   Fixes issue #1136
   Fixes issue #1184

Fix performance of registered send/recv operations
 * Allow for single full size operations
 * Add INFO to confirm the registration of send/recv buffers.

Move all sync ops to finalize stage
 * Ensure ncclCommDestroy is non-blocking if ncclCommFinalize has
   been called.

Improve error reporting during SHM segment creation

Improve support of various compilers
   Merges PR #1177
   Merges PR #1228

Allow net and tuner plugins to be statically linked
 * Search for ncclNet or ncclTuner symbols in the main binary.
   Merges PR #979

Plugin examples includes cleanup
 * Harmonize err.h and common.h usage.
 * Add mixed plugin with both net and tuner.
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment
Labels
None yet
Projects
None yet
Development

Successfully merging this pull request may close these issues.

None yet

3 participants