Add jobs using clang as CUDA compiler #493

Merged (70 commits) on Oct 11, 2023

Commits (70)
1e5c6b3
Allow setting CUDA compiler via CMAKE_CUDA_COMPILER envvar.
jrhemstad Sep 26, 2023
388af5b
Move nvcc version check to CUB script.
jrhemstad Sep 26, 2023
b625e93
Add clang-cuda job to matrix.
jrhemstad Sep 26, 2023
260805f
Add compiler field to matrix for clang-cuda.
jrhemstad Sep 26, 2023
5ac0622
Add Thrust clang-cuda job.
jrhemstad Sep 26, 2023
a428b21
Fix formatting.
jrhemstad Sep 26, 2023
3da71f4
s/need/needs/
jrhemstad Sep 26, 2023
25efa7c
Can't spell good.
jrhemstad Sep 26, 2023
ce622cf
[skip-tests] Add clang cuda job to status check job.
jrhemstad Sep 26, 2023
13c95ba
Disable other jobs for now.
jrhemstad Sep 26, 2023
800913c
Disable other jobs in status check.
jrhemstad Sep 26, 2023
5667982
Add output to compute matrix job.
jrhemstad Sep 26, 2023
8e66201
Missin quote.
jrhemstad Sep 26, 2023
8a2560b
Fix logic for enabling CUB benchmarks.
jrhemstad Sep 26, 2023
dc550e7
Fix reference to cuda version in job name.
jrhemstad Sep 26, 2023
afd8f13
make clang-cuda job matrix over libs.
jrhemstad Sep 26, 2023
abb8235
Fix build script to use matrix lib value.
jrhemstad Sep 26, 2023
873db9e
Fix job name in status check.
jrhemstad Sep 26, 2023
93a10e5
Fix formatting.
jrhemstad Sep 26, 2023
26938c2
Fix job name.
jrhemstad Sep 26, 2023
1356a4e
Generate custom matrix with cartesian product of libs.
jrhemstad Sep 26, 2023
7437913
Add hacks that allow clang-cuda to work.
wmaxey Sep 26, 2023
81c8efa
Merge branch 'main' into clang-cuda-ci
jrhemstad Sep 26, 2023
22fcb5e
Merge branch 'clang-cuda-ci' of github.com:jrhemstad/cccl into clang-…
jrhemstad Sep 26, 2023
4d4616b
Do not build RDC tests for Clang CUDA
gevtushenko Sep 26, 2023
4e204b2
Attempt to fix thrust::complex for Clang-CUDA
gevtushenko Sep 27, 2023
9acf004
Fix macro definitions that are nvcc specific
miscco Sep 27, 2023
ff7a43a
Add missing header that is otherwise coming from the cuda side
miscco Sep 27, 2023
1f3e8c4
Fix invalid initialization order in constructor
miscco Sep 27, 2023
7feab38
Fix clang-cuda being picky about attribute orders
miscco Sep 27, 2023
9f70dc0
clang-cuda requires deduction guides to be marked as `__host__ __devi…
miscco Sep 27, 2023
621ca51
Fix some warnings about unused typedefs
miscco Sep 27, 2023
5802424
Fix invalid ifdefs
miscco Sep 27, 2023
523535d
Rename shadowing typedefs
miscco Sep 27, 2023
4085dd7
Work around compiler crash
miscco Sep 27, 2023
71b37bf
Ignore unused private member warning
miscco Sep 27, 2023
0c8b4a0
Remove non compiling code
miscco Sep 27, 2023
afa9004
Add missing includes
miscco Sep 27, 2023
69fe6ed
Fix tests
miscco Sep 27, 2023
c0872b9
We need to force include `force_include.h`
miscco Sep 27, 2023
11dd488
Avoid signed / unsigned warnings
miscco Sep 27, 2023
0be8b9d
Properly escape inside asm
miscco Sep 27, 2023
c5e2d23
Consider clang-cuda for platform-test
miscco Sep 27, 2023
5583d28
clang can handle subsumption
miscco Sep 27, 2023
0a2bd94
Avoid more signed / unsigned issues
miscco Sep 27, 2023
d071520
Escape all the things
miscco Sep 27, 2023
d634859
Silence more warnings
miscco Sep 27, 2023
4223917
Remove trailing commata
miscco Sep 27, 2023
8f4beea
Fix two tests that should fail to compile
miscco Sep 27, 2023
44febea
Fix pipeline divergent threads
miscco Sep 27, 2023
91145f4
Disable two tests that rely on managed variables
miscco Sep 27, 2023
0e2cb57
Disable two tests that rely on managed variables
miscco Sep 27, 2023
3eb8e8b
Merge branch 'clang-cuda-ci' of github.com:jrhemstad/cccl into pr/jrh…
miscco Sep 27, 2023
374ecb2
Fix one more test for SM_80
miscco Sep 27, 2023
0685884
Disable test that fails during runtime with an invalid launch parameter
miscco Sep 27, 2023
b228f26
Re-enable other jobs.
jrhemstad Sep 28, 2023
8980e65
Merge branch 'clang-cuda-ci' of github.com:jrhemstad/cccl into clang-…
jrhemstad Sep 28, 2023
4be1ed2
Re-enable other jobs in status check.
jrhemstad Sep 28, 2023
c8154ee
Update clang-cuda job names.
jrhemstad Sep 28, 2023
7501721
Try not to add invalid flag to clang
miscco Sep 29, 2023
fa10123
try to fix `is_nothrow_invocable` test
miscco Sep 29, 2023
16b6e58
Mark is_swappable test as potentially passing
miscco Sep 29, 2023
34e270a
Make MSVC pass
miscco Sep 29, 2023
894c986
Unfail test that seems to pass
miscco Sep 29, 2023
b95c9e6
Fix test for nvrtc
miscco Sep 30, 2023
42519b6
Fix fail test
miscco Sep 30, 2023
995fea5
Address review comments
miscco Oct 10, 2023
60682bf
Do not pass warnings flags similar to nvcc for clang-cuda
miscco Oct 10, 2023
e1ca980
Merge branch 'main' into clang-cuda-ci
jrhemstad Oct 10, 2023
46f0941
Merge branch 'main' into pr/jrhemstad/493
miscco Oct 11, 2023
Files changed
6 changes: 6 additions & 0 deletions .github/actions/compute-matrix/compute-matrix.sh
@@ -12,6 +12,10 @@ explode_std_versions() {
jq -cr 'map(. as $o | {std: $o.std[]} + del($o.std))'
}

explode_libs() {
jq -cr 'map(. as $o | {lib: $o.lib[]} + del($o.lib))'
}

extract_matrix() {
local file="$1"
local type="$2"
@@ -23,6 +27,8 @@ extract_matrix() {
write_output "HOST_COMPILERS" "$(echo "$nvcc_full_matrix" | jq -cr '[.[] | .compiler.name] | unique')"
write_output "PER_CUDA_COMPILER_MATRIX" "$(echo "$nvcc_full_matrix" | jq -cr ' group_by(.cuda + .compiler.name) | map({(.[0].cuda + "-" + .[0].compiler.name): .}) | add')"
write_output "NVRTC_MATRIX" "$(echo "$matrix" | jq '.nvrtc' | explode_std_versions)"
local clang_cuda_matrix="$(echo "$matrix" | jq -cr '.["clang-cuda"]' | explode_std_versions | explode_libs)"
write_output "CLANG_CUDA_MATRIX" "$clang_cuda_matrix"
write_output "EXAMPLES_MATRIX" "$(echo "$matrix" | jq -cr '.examples' )"
}

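For orientation, here is a minimal sketch of what the exploding filters do, using a hypothetical one-entry matrix: each filter turns an array-valued key into one entry per element, so chaining them yields the cartesian product the clang-cuda matrix relies on (per the "cartesian product of libs" commit above). Output formatting is approximate.

# Hypothetical input: one entry with two stds and two libs.
echo '[{"std": [17, 20], "lib": ["thrust", "cub"], "cuda": "12.x"}]' \
  | jq -cr 'map(. as $o | {std: $o.std[]} + del($o.std))' \
  | jq -cr 'map(. as $o | {lib: $o.lib[]} + del($o.lib))'
# Expected: four entries, one per (std, lib) pair:
# [{"std":17,"lib":"thrust","cuda":"12.x"},{"std":17,"lib":"cub","cuda":"12.x"},
#  {"std":20,"lib":"thrust","cuda":"12.x"},{"std":20,"lib":"cub","cuda":"12.x"}]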
18 changes: 18 additions & 0 deletions .github/workflows/pr.yml
@@ -41,6 +41,7 @@ jobs:
HOST_COMPILERS: ${{steps.set-outputs.outputs.HOST_COMPILERS}}
PER_CUDA_COMPILER_MATRIX: ${{steps.set-outputs.outputs.PER_CUDA_COMPILER_MATRIX}}
NVRTC_MATRIX: ${{steps.set-outputs.outputs.NVRTC_MATRIX}}
CLANG_CUDA_MATRIX: ${{steps.set-outputs.outputs.CLANG_CUDA_MATRIX}}
EXAMPLES_MATRIX: ${{steps.set-outputs.outputs.EXAMPLES_MATRIX}}
steps:
- name: Checkout repo
@@ -111,9 +112,25 @@ jobs:
devcontainer_version: ${{ needs.compute-matrix.outputs.DEVCONTAINER_VERSION }}
is_windows: ${{ contains(matrix.compiler, 'cl') }}

clang-cuda:
name: ${{matrix.lib}} ${{matrix.cpu}}/CTK${{matrix.cuda}}/clang-cuda
needs: compute-matrix
strategy:
fail-fast: false
matrix:
include: ${{ fromJSON(needs.compute-matrix.outputs.CLANG_CUDA_MATRIX) }}
uses: ./.github/workflows/run-as-coder.yml
with:
name: ${{matrix.lib}} CTK${{matrix.cuda}} clang-cuda${{matrix.compiler.version}}/${{matrix.std}}
runner: linux-${{matrix.cpu}}-cpu16
image: rapidsai/devcontainers:${{needs.compute-matrix.outputs.DEVCONTAINER_VERSION}}-cpp-${{matrix.compiler.name}}${{matrix.compiler.version}}-cuda${{matrix.cuda}}-${{matrix.os}}
command: |
CMAKE_CUDA_COMPILER="${{matrix.compiler.exe}}" ./ci/build_${{matrix.lib}}.sh ${{matrix.compiler.exe}} ${{matrix.std}} ${{matrix.gpu_build_archs}}

examples:
name: CCCL Examples
needs: compute-matrix
if: ${{ !contains(github.event.head_commit.message, 'skip-tests') }}
strategy:
fail-fast: false
matrix:
@@ -137,6 +154,7 @@
runs-on: ubuntu-latest
name: CI
needs:
- clang-cuda
- cub
- libcudacxx
- nvrtc
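Substituting concrete values from the clang-cuda entry in ci/matrix.yaml below (lib thrust, exe clang++, std 17, archs 70), a single matrix job's command is roughly equivalent to this local invocation (a sketch, not the exact CI environment):

# Roughly what one clang-cuda matrix job executes inside its container:
CMAKE_CUDA_COMPILER=clang++ ./ci/build_thrust.sh clang++ 17 "70"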
7 changes: 3 additions & 4 deletions ci/build_common.sh
@@ -6,12 +6,13 @@ set -eo pipefail
cd "$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )";

# Script defaults
-CUDA_COMPILER=nvcc
+CUDA_COMPILER=${CMAKE_CUDA_COMPILER:-nvcc}

# Check if the correct number of arguments has been provided
function usage {
echo "Usage: $0 [OPTIONS] <HOST_COMPILER> <CXX_STANDARD> <GPU_ARCHS>"
echo "The PARALLEL_LEVEL environment variable controls the amount of build parallelism. Default is the number of cores."
echo "The CMAKE_CUDA_COMPILER environment variable can be used to control the CUDA compiler. The -nvcc flag takes precedence."
echo "Example: PARALLEL_LEVEL=8 $0 g++-8 14 \"70\" "
echo "Example: $0 clang++-8 17 \"70;75;80-virtual\" "
echo "Possible options: "
@@ -54,9 +55,7 @@ readonly CXX_STANDARD=$2

# Replace spaces, commas and semicolons with semicolons for CMake list
readonly GPU_ARCHS=$(echo $3 | tr ' ,' ';')

readonly PARALLEL_LEVEL=${PARALLEL_LEVEL:=$(nproc)}
-readonly NVCC_VERSION=$($CUDA_COMPILER --version | grep release | awk '{print $6}' | cut -c2-)

if [ -z ${DEVCONTAINER_NAME+x} ]; then
BUILD_DIR=../build/local
@@ -83,7 +82,7 @@ COMMON_CMAKE_OPTIONS="
echo "========================================"
echo "Begin build"
echo "pwd=$(pwd)"
echo "NVCC_VERSION=$NVCC_VERSION"
echo "CUDA_COMPILER=$CUDA_COMPILER"
echo "HOST_COMPILER=$HOST_COMPILER"
echo "CXX_STANDARD=$CXX_STANDARD"
echo "GPU_ARCHS=$GPU_ARCHS"
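The new default relies on Bash parameter expansion: ${CMAKE_CUDA_COMPILER:-nvcc} evaluates to the variable's value when it is set and non-empty, and to nvcc otherwise. A quick sketch:

# Fallback behaviour of ${VAR:-default}:
unset CMAKE_CUDA_COMPILER
echo "${CMAKE_CUDA_COMPILER:-nvcc}"   # prints: nvcc
CMAKE_CUDA_COMPILER=clang++-16        # hypothetical compiler name
echo "${CMAKE_CUDA_COMPILER:-nvcc}"   # prints: clang++-16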
18 changes: 13 additions & 5 deletions ci/build_cub.sh
@@ -2,7 +2,6 @@

source "$(dirname "$0")/build_common.sh"


# CUB benchmarks require at least CUDA nvcc 11.5 for int128
# Returns "true" if the first version is greater than or equal to the second
version_compare() {
@@ -12,12 +11,20 @@ version_compare() {
echo "false"
fi
}
-readonly ENABLE_CUB_BENCHMARKS=${ENABLE_CUB_BENCHMARKS:=$(version_compare $NVCC_VERSION 11.5)}

-if [[ $ENABLE_CUB_BENCHMARKS == "true" ]]; then
-echo "CUDA version is $NVCC_VERSION. Building CUB benchmarks."
+ENABLE_CUB_BENCHMARKS="false"
+ENABLE_CUB_RDC="false"
+if [[ "$CUDA_COMPILER" == *nvcc* ]]; then
+ENABLE_CUB_RDC="true"
+NVCC_VERSION=$($CUDA_COMPILER --version | grep release | awk '{print $6}' | cut -c2-)
+if [[ $(version_compare $NVCC_VERSION 11.5) == "true" ]]; then
+ENABLE_CUB_BENCHMARKS="true"
+echo "nvcc version is $NVCC_VERSION. Building CUB benchmarks."
+else
+echo "nvcc version is $NVCC_VERSION. Not building CUB benchmarks because nvcc version is less than 11.5."
+fi
else
-echo "CUDA version is $NVCC_VERSION. Not building CUB benchmarks because CUDA version is less than 11.5."
+echo "nvcc version is not determined (likely using a non-NVCC compiler). Not building CUB benchmarks."
fi

CMAKE_OPTIONS="
@@ -32,6 +39,7 @@ CMAKE_OPTIONS="
-DTHRUST_IGNORE_DEPRECATED_CPP_DIALECT=ON \
-DCUB_IGNORE_DEPRECATED_CPP_DIALECT=ON \
-DCUB_ENABLE_BENCHMARKS="$ENABLE_CUB_BENCHMARKS"\
-DCUB_ENABLE_RDC_TESTS="$ENABLE_CUB_RDC" \
"

configure_and_build "CUB" "$CMAKE_OPTIONS"
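Because the body of version_compare is collapsed above, the sketch below assumes only its documented contract: it prints "true" when the first version is greater than or equal to the second.

# Usage sketch (contract only; the implementation is collapsed above):
version_compare 12.2 11.5   # -> true   (CUB benchmarks enabled)
version_compare 11.4 11.5   # -> false  (CUB benchmarks skipped)
version_compare 11.5 11.5   # -> true   (the comparison is >=, not >)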
2 changes: 2 additions & 0 deletions ci/matrix.yaml
@@ -51,6 +51,8 @@ pull_request:
- {cuda: *cuda_newest, os: 'windows2022', cpu: 'amd64', compiler: {name: 'cl', version: '14.36', exe: 'cl++'}, gpu_build_archs: '70', std: [14, 17, 20], jobs: ['build']}
nvrtc:
- {cuda: *cuda_newest, os: 'ubuntu22.04', cpu: 'amd64', gpu_build_archs: '70', std: [11, 14, 17, 20]}
clang-cuda:
- {lib: ['thrust', 'cub', 'libcudacxx'], cuda: *cuda_newest, os: 'ubuntu22.04', cpu: 'amd64', compiler: {name: 'llvm', version: '16', exe: 'clang++'}, gpu_build_archs: '70', std: [17, 20]}
examples:
# Strategy: Oldest CUDA + Oldest Host compiler && Newest CUDA + Newest Host Compiler
- {cuda: *cuda_oldest, os: 'ubuntu18.04', cpu: 'amd64', compiler: {name: 'gcc', version: '6', exe: 'g++'}}
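After explode_std_versions and explode_libs from compute-matrix.sh run over it, the single clang-cuda entry above should fan out into 3 libs × 2 standards = 6 jobs, conceptually:

# Sketch of the exploded clang-cuda matrix (fields abbreviated):
#   {lib: thrust,     std: 17, compiler: {name: llvm, version: 16, exe: clang++}, ...}
#   {lib: thrust,     std: 20, ...}
#   {lib: cub,        std: 17, ...}
#   {lib: cub,        std: 20, ...}
#   {lib: libcudacxx, std: 17, ...}
#   {lib: libcudacxx, std: 20, ...}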
24 changes: 22 additions & 2 deletions libcudacxx/.upstream-tests/test/CMakeLists.txt
@@ -32,10 +32,10 @@ else() # NOT LIBCUDACXX_TEST_WITH_NVRTC
set(LIBCUDACXX_CUDA_COMPILER "${CMAKE_CUDA_COMPILER}")
endif()

-if (NOT MSVC)
+if (NOT MSVC AND NOT ${CMAKE_CUDA_COMPILER_ID} STREQUAL "Clang")
set(LIBCUDACXX_WARNING_LEVEL
"--compiler-options=-Wall \
--compiler-options=-Wextra")
--compiler-options=-Wextra")
endif()

# sccache cannot handle the -Fd option generating pdb files
@@ -50,6 +50,18 @@ if (${CMAKE_CXX_COMPILER_ID} STREQUAL "IntelLLVM")
--compiler-options=-fno-fast-math")
endif()

if (${CMAKE_CUDA_COMPILER_ID} STREQUAL "Clang")
string(APPEND LIBCUDACXX_TEST_COMPILER_FLAGS
" -Xclang -fcuda-allow-variadic-functions"
" -Xclang -Wno-unused-parameter"
" -Wno-unknown-cuda-version")

find_package(CUDAToolkit)

string(APPEND LIBCUDACXX_TEST_LINKER_FLAGS
" -L${CUDAToolkit_LIBRARY_DIR} -lcuda -lcudart")
endif()

if (${CMAKE_CUDA_COMPILER_ID} STREQUAL "NVIDIA")
set(LIBCUDACXX_TEST_COMPILER_FLAGS
"${LIBCUDACXX_TEST_COMPILER_FLAGS} \
@@ -67,6 +79,14 @@
-stdpar")
endif()

if (${CMAKE_CUDA_COMPILER_ID} STREQUAL "Clang")
set(LIBCUDACXX_TEST_COMPILER_FLAGS
"${LIBCUDACXX_TEST_COMPILER_FLAGS} \
${LIBCUDACXX_FORCE_INCLUDE} \
-I${libcudacxx_SOURCE_DIR}/include \
${LIBCUDACXX_WARNING_LEVEL}")
endif()

set(LIBCUDACXX_COMPUTE_ARCHS_STRING
"${CMAKE_CUDA_ARCHITECTURES}")

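Taken together, the Clang branches above amount to roughly the following extra compile and link options. This is an illustrative sketch: the angle-bracket placeholders stand for the corresponding CMake variables, and the force-include flag is an assumption based on the LIBCUDACXX_FORCE_INCLUDE variable and the force_include.h change below.

# Approximate effect on a test compile/link line under clang-cuda:
clang++ -x cuda test.cu \
  -Xclang -fcuda-allow-variadic-functions \
  -Xclang -Wno-unused-parameter \
  -Wno-unknown-cuda-version \
  -include force_include.h \
  -I"<libcudacxx_SOURCE_DIR>/include" \
  -L"<CUDAToolkit_LIBRARY_DIR>" -lcuda -lcudart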
@@ -21,17 +21,18 @@ TEST_NV_DIAG_SUPPRESS(set_but_not_used)
__device__
void test()
{
-__shared__ cuda::barrier<cuda::thread_scope_block> b;
-init(&b, 2);
+__shared__ cuda::barrier<cuda::thread_scope_block>* b;
+shared_memory_selector<cuda::barrier<cuda::thread_scope_block>, constructor_initializer> sel;
+b = sel.construct(2);

uint64_t token;
asm volatile ("mbarrier.arrive.b64 %0, [%1];"
: "=l"(token)
: "l"(cuda::device::barrier_native_handle(b))
: "l"(cuda::device::barrier_native_handle(*b))
: "memory");
(void)token;

-b.arrive_and_wait();
+b->arrive_and_wait();
}

int main(int argc, char ** argv)
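The rewrite above (the same pattern is applied to the pipeline test later in this PR) works around clang-cuda's stricter handling of __shared__ variables: CUDA forbids dynamic initialization of __shared__ objects, a rule nvcc relaxes for types like cuda::barrier but clang-cuda appears to enforce. A minimal sketch of the pattern, assuming shared_memory_selector (from the test suite's cuda_space_selector.h) placement-constructs the object in shared memory and returns a pointer to it:

// Sketch: keep only a trivially-constructible __shared__ pointer and run
// the constructor explicitly, instead of `__shared__ barrier_t b;` whose
// dynamic initializer clang-cuda cannot emit. Requires cuda_space_selector.h.
__device__ void test()
{
    using barrier_t = cuda::barrier<cuda::thread_scope_block>;
    __shared__ barrier_t* b;   // a pointer is trivially constructible
    shared_memory_selector<barrier_t, constructor_initializer> sel;
    b = sel.construct(2);      // constructs the barrier with arrival count 2
    b->arrive_and_wait();
}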
@@ -47,7 +47,7 @@ struct storage
__host__ __device__
friend bool operator==(const storage & lhs, const T & rhs) {
for (cuda::std::size_t i = 0; i < size; ++i) {
-if (lhs.data[i] != rhs + i) {
+if (lhs.data[i] != static_cast<T>(rhs + i)) {
return false;
}
}
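The added static_cast avoids a signed/unsigned mismatch: i is a cuda::std::size_t, so rhs + i promotes to an unsigned type, and comparing that against data[i] draws a -Wsign-compare style warning under clang. A minimal illustration:

#include <cstddef>

// `element == rhs + i` would compare int against std::size_t (unsigned)
// and warn; casting the sum back to the element type compares like with like.
bool matches(int element, int rhs, std::size_t i)
{
    return element == static_cast<int>(rhs + i);
}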
@@ -9,6 +9,7 @@
//===----------------------------------------------------------------------===//

// UNSUPPORTED: pre-sm-70
// UNSUPPORTED: clang && (!nvcc)

#define _LIBCUDACXX_CUDA_ABI_VERSION 2

@@ -20,6 +20,7 @@
#include <stdlib.h>

#include "test_macros.h"
#include "cuda_space_selector.h"

TEST_NV_DIAG_SUPPRESS(static_var_with_dynamic_init)
TEST_NV_DIAG_SUPPRESS(186) // pointless comparison of unsigned integer with zero
@@ -30,7 +31,7 @@ constexpr size_t stages_count = 2; // Pipeline with two stages
// Simply copy shared memory to global out
__device__ __forceinline__ void compute(int* global_out, int const* shared_in){
auto block = cooperative_groups::this_thread_block();
-for (int i = 0; i < block.size(); ++i) {
+for (int i = 0; i < static_cast<int>(block.size()); ++i) {
global_out[i] = shared_in[i];
}
}
@@ -46,8 +47,11 @@ __global__ void with_staging(int* global_out, int const* global_in, size_t size,
size_t shared_offset[stages_count] = { 0, block.size() }; // Offsets to each batch

// Allocate shared storage for a two-stage cuda::pipeline:
-__shared__ cuda::pipeline_shared_state<cuda::thread_scope::thread_scope_block, stages_count> shared_state;
-auto pipeline = cuda::make_pipeline(block, &shared_state);
+using pipeline_state = cuda::pipeline_shared_state<cuda::thread_scope::thread_scope_block, stages_count>;
+__shared__ pipeline_state* shared_state;
+shared_memory_selector<pipeline_state, constructor_initializer> sel;
+shared_state = sel.construct();
+auto pipeline = cuda::make_pipeline(block, shared_state);

// Each thread processes `batch_sz` elements.
// Compute offset of the batch `batch` of this thread block in global memory:
@@ -21,11 +21,13 @@
# define TEST_NVCC
#elif defined(__NVCOMPILER)
# define TEST_NVCXX
#elif defined(__CUDA__)
# define TEST_CLANG_CUDA
#else
# define TEST_HOST
#endif

-#if defined(TEST_NVCC)
+#if defined(TEST_NVCC) || defined(TEST_CLANG_CUDA)

__host__ __device__ void test() {
#if defined(__CUDA_ARCH__)
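For orientation: nvcc predefines __NVCC__ (and __CUDACC__), nvc++ predefines __NVCOMPILER, and clang compiling CUDA code predefines __CUDA__ alongside __clang__, which is what the new branch keys on. The first branch of the ladder is collapsed in the diff above, so this sketch assumes it tests __NVCC__:

// Sketch of the full detection ladder (the nvcc branch is an assumption):
#if defined(__NVCC__)
#  define TEST_NVCC        // nvcc driver
#elif defined(__NVCOMPILER)
#  define TEST_NVCXX       // nvc++ (NVHPC)
#elif defined(__CUDA__)
#  define TEST_CLANG_CUDA  // clang in CUDA mode
#else
#  define TEST_HOST        // plain host compiler
#endif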
2 changes: 1 addition & 1 deletion libcudacxx/.upstream-tests/test/force_include.h
@@ -93,6 +93,6 @@ int main(int argc, char** argv)
return ret;
}

-#define main fake_main
+#define main __host__ __device__ fake_main

#endif
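The mechanics, sketched: force_include.h defines the real main (shown truncated above) before the #define takes effect, and the macro then renames each test's own main to fake_main. Adding __host__ __device__ to the expansion is what clang-cuda needs so that the renamed function can also be called from device code. Conceptually:

// A test written as:
//     int main(int, char**) { /* assertions */ return 0; }
// preprocesses, with the force-included #define, to:
//     int __host__ __device__ fake_main(int, char**) { /* assertions */ return 0; }
// and the harness's real main() then runs fake_main on the host and/or device.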
@@ -7,6 +7,7 @@
//===----------------------------------------------------------------------===//

// UNSUPPORTED: nvrtc, pre-sm-70
// XFAIL: clang && (!nvcc)

// uncomment for a really verbose output detailing what test steps are being launched
// #define DEBUG_TESTERS
@@ -7,6 +7,7 @@
//===----------------------------------------------------------------------===//

// UNSUPPORTED: nvrtc, pre-sm-70
// XFAIL: clang && (!nvcc)

// uncomment for a really verbose output detailing what test steps are being launched
// #define DEBUG_TESTERS
@@ -80,7 +80,7 @@ using aw_aw_pw1 = performer_list<
barrier_arrive_and_wait,
barrier_arrive_and_wait,
async_tester_fence,
-clear_token,
+clear_token
[Collaborator review comment] I don't even know how this compiled previously

>;

using aw_aw_pw2 = performer_list<
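The removed comma was a trailing comma inside a template argument list, which is ill-formed C++ (unlike trailing commas in enumerator lists and braced initializers); nvcc evidently tolerated it, clang does not, hence the reviewer's surprise above. A minimal reproduction:

template <class... Ts>
struct performer_list {};

// using bad = performer_list<int, float,>;   // ill-formed: trailing comma
using good   = performer_list<int, float>;    // OK

enum class e { a, b, };                       // trailing comma OK in enums
int arr[]    = { 1, 2, };                     // ...and in braced initializers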
@@ -166,7 +166,7 @@ using arithmetic_atomic_testers = extend_tester_list<
fetch_sub_tester<30, 21, 9>,
fetch_min_tester<9, 5, 5>,
fetch_max_tester<5, 9, 9>,
-fetch_sub_tester<9, 17, -8>,
+fetch_sub_tester<9, 17, -8>
>;

using bitwise_atomic_testers = extend_tester_list<
1 change: 1 addition & 0 deletions libcudacxx/.upstream-tests/test/heterogeneous/helpers.h
@@ -11,6 +11,7 @@

#include <cuda/std/type_traits>

#include <cstdio>
#include <new>
#include <thread>
#include <vector>
@@ -16,6 +16,8 @@
#include <cuda/std/concepts>
#include <cuda/std/type_traits>

#include "test_macros.h"

using cuda::std::same_as;

struct S1 {};
@@ -35,6 +37,9 @@ struct S5 {
int* p;
};

#ifdef TEST_COMPILER_CLANG_CUDA
#pragma clang diagnostic ignored "-Wunused-private-field"
#endif // TEST_COMPILER_CLANG_CUDA
class C1 {};
class C2 {
/* [[maybe_unused]] */ int i;
@@ -55,9 +55,6 @@ int main(int, char**)

// TEST(TestLayoutLeftListInitialization, test_layout_left_extent_initialization)
{
-typedef int data_t ;
-typedef size_t index_t;
-
cuda::std::layout_left::mapping<cuda::std::extents<size_t,dyn, dyn>> m{cuda::std::dextents<size_t,2>{16, 32}};

static_assert( m.is_exhaustive() == true, "" );
@@ -35,7 +35,6 @@ int main(int, char**)
typed_test_compare_left();

using index_t = size_t;
-using ext1d_t = cuda::std::extents<index_t,dyn>;
using ext2d_t = cuda::std::extents<index_t,dyn,dyn>;

{
@@ -35,7 +35,6 @@ int main(int, char**)
typed_test_compare_right();

using index_t = size_t;
-using ext1d_t = cuda::std::extents<index_t,dyn>;
using ext2d_t = cuda::std::extents<index_t,dyn,dyn>;

{
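The mdspan edits above delete local type aliases that were never referenced; clang reports these via -Wunused-local-typedef (the "warnings about unused typedefs" fixed earlier in the commit list). A minimal reproduction:

void has_warning() {
    using unused_t = int;   // clang: unused local typedef 'unused_t'
}

void is_clean() {
    using used_t = int;
    used_t x = 0;           // alias is referenced, so no warning
    (void)x;
}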