
make runtest error: no CUDA-capable device is detected #7049

Open
bxiong97 opened this issue Mar 8, 2022 · 1 comment
bxiong97 commented Mar 8, 2022

Issue summary

Hi,

I'm installing Caffe 1.0 on WSL2 Ubuntu 20.04. I already managed to get make all and make test to run without error.

However, when I run make runtest, I get a bunch of errors.

(base) b***@DESKTOP-****:/mnt/c/Users/bx/caffe-1.0$ make runtest
.build_release/tools/caffe
caffe: command line brew
usage: caffe <command> <args>

commands:
  train           train or finetune a model
  test            score a model
  device_query    show GPU diagnostic information
  time            benchmark model execution time

  Flags from tools/caffe.cpp:
    -gpu (Optional; run in GPU mode on given device IDs separated by ','.Use
      '-gpu all' to run on all available GPUs. The effective training batch
      size is multiplied by the number of devices.) type: string default: ""
    -iterations (The number of iterations to run.) type: int32 default: 50
    -level (Optional; network level.) type: int32 default: 0
    -model (The model definition protocol buffer text file.) type: string
      default: ""
    -phase (Optional; network phase (TRAIN or TEST). Only used for 'time'.)
      type: string default: ""
    -sighup_effect (Optional; action to take when a SIGHUP signal is received:
      snapshot, stop or none.) type: string default: "snapshot"
    -sigint_effect (Optional; action to take when a SIGINT signal is received:
      snapshot, stop or none.) type: string default: "stop"
    -snapshot (Optional; the snapshot solver state to resume training.)
      type: string default: ""
    -solver (The solver definition protocol buffer text file.) type: string
      default: ""
    -stage (Optional; network stages (not to be confused with phase), separated
      by ','.) type: string default: ""
    -weights (Optional; the pretrained weights to initialize finetuning,
      separated by ','. Cannot be set simultaneously with snapshot.)
      type: string default: ""
.build_release/test/test_all.testbin 0 --gtest_shuffle
Cuda number of devices: 0
Setting to use device 0
Current device id: 0
Current device name:
Note: Randomizing tests' orders with a seed of 55461 .
[==========] Running 2101 tests from 277 test cases.
[----------] Global test environment set-up.
[----------] 5 tests from EmbedLayerTest/1, where TypeParam = caffe::CPUDevice<double>
[ RUN      ] EmbedLayerTest/1.TestForwardWithBias
E0307 22:13:32.392771  7483 common.cpp:114] Cannot create Cublas handle. Cublas won't be available.
E0307 22:13:32.432719  7483 common.cpp:121] Cannot create Curand generator. Curand won't be available.
[       OK ] EmbedLayerTest/1.TestForwardWithBias (114 ms)
[ RUN      ] EmbedLayerTest/1.TestGradient
E0307 22:13:32.469251  7483 common.cpp:141] Curand not available. Skipping setting the curand seed.
[       OK ] EmbedLayerTest/1.TestGradient (7 ms)
[ RUN      ] EmbedLayerTest/1.TestForward
[       OK ] EmbedLayerTest/1.TestForward (0 ms)
[ RUN      ] EmbedLayerTest/1.TestSetUp
[       OK ] EmbedLayerTest/1.TestSetUp (0 ms)
[ RUN      ] EmbedLayerTest/1.TestGradientWithBias
[       OK ] EmbedLayerTest/1.TestGradientWithBias (11 ms)
[----------] 5 tests from EmbedLayerTest/1 (132 ms total)

[----------] 8 tests from SliceLayerTest/2, where TypeParam = caffe::GPUDevice<float>
[ RUN      ] SliceLayerTest/2.TestGradientTrivial
F0307 22:13:32.488232  7483 syncedmem.hpp:22] Check failed: error == cudaSuccess (100 vs. 0)  no CUDA-capable device is detected
*** Check failure stack trace: ***
    @     0x7fc281c001c3  google::LogMessage::Fail()
    @     0x7fc281c0525b  google::LogMessage::SendToLog()
    @     0x7fc281bffebf  google::LogMessage::Flush()
    @     0x7fc281c006ef  google::LogMessageFatal::~LogMessageFatal()
    @     0x7fc280783103  caffe::SyncedMemory::mutable_cpu_data()
    @     0x7fc280600779  caffe::Blob<>::Reshape()
    @     0x7fc280600bce  caffe::Blob<>::Reshape()
    @     0x7fc280600c80  caffe::Blob<>::Blob()
    @     0x55ab7cfc2a6a  caffe::SliceLayerTest<>::SliceLayerTest()
    @     0x55ab7cfc2e20  testing::internal::TestFactoryImpl<>::CreateTest()
    @     0x55ab7d0633c1  testing::internal::HandleExceptionsInMethodIfSupported<>()
    @     0x55ab7d05b106  testing::TestInfo::Run()
    @     0x55ab7d05b265  testing::TestCase::Run()
    @     0x55ab7d05b78c  testing::internal::UnitTestImpl::RunAllTests()
    @     0x55ab7d05b857  testing::UnitTest::Run()
    @     0x55ab7cb36217  main
    @     0x7fc2801060b3  __libc_start_main
    @     0x55ab7cb3dd9e  _start
make: *** [Makefile:534: runtest] Aborted

No CUDA device detected

It cannot find a CUDA device, but I have CUDA and the driver installed, verified with nvcc --version and nvidia-smi:

(base) b***@DESKTOP-****:/mnt/c/Users/bx/caffe-1.0$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Thu_Feb_10_18:23:41_PST_2022
Cuda compilation tools, release 11.6, V11.6.112
Build cuda_11.6.r11.6/compiler.30978841_0

(base) b***@DESKTOP-****:/mnt/c/Users/bx/caffe-1.0$ nvidia-smi
Mon Mar  7 22:25:20 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.47.03    Driver Version: 511.79       CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  On   | 00000000:01:00.0  On |                  N/A |
|  0%   51C    P8    11W / 120W |    431MiB /  3072MiB |      5%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
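nvidia-smi talks to the NVIDIA driver, while Caffe's failure comes from the CUDA runtime (cudaGetDeviceCount reporting 0), so it can help to ask the runtime directly how many devices it sees. A minimal diagnostic sketch via ctypes, not part of Caffe; the libcudart library names are assumptions for a typical Linux/WSL2 CUDA 11.x install:

```python
import ctypes

def cuda_device_count():
    """Query the CUDA runtime directly. Returns (status, count);
    status -1 means no libcudart could be loaded at all."""
    # Candidate library names are guesses; adjust to your install.
    for name in ("libcudart.so", "libcudart.so.11.0", "libcudart.so.12"):
        try:
            runtime = ctypes.CDLL(name)
            break
        except OSError:
            continue
    else:
        return (-1, 0)
    count = ctypes.c_int(0)
    # cudaError_t cudaGetDeviceCount(int* count); 0 means cudaSuccess.
    status = runtime.cudaGetDeviceCount(ctypes.byref(count))
    return (status, count.value)

status, count = cuda_device_count()
print("runtime status:", status, "devices:", count)
```

If this also reports 0 devices (or a non-zero status), the problem is between the CUDA runtime and the driver, not in Caffe itself.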

Makefile.config

## Refer to http://caffe.berkeleyvision.org/installation.html
# Contributions simplifying and improving our build system are welcome!

# cuDNN acceleration switch (uncomment to build with cuDNN).
USE_CUDNN := 1

# CPU-only switch (uncomment to build without GPU support).
# CPU_ONLY := 1

# uncomment to disable IO dependencies and corresponding data layers
# USE_OPENCV := 0
# USE_LEVELDB := 0
# USE_LMDB := 0

# uncomment to allow MDB_NOLOCK when reading LMDB files (only if necessary)
#	You should not set this flag if you will be reading LMDBs with any
#	possibility of simultaneous read and write
# ALLOW_LMDB_NOLOCK := 1

# Uncomment if you're using OpenCV 3
OPENCV_VERSION := 3

# To customize your choice of compiler, uncomment and set the following.
# N.B. the default for Linux is g++ and the default for OSX is clang++
# CUSTOM_CXX := g++

# CUDA directory contains bin/ and lib/ directories that we need.
CUDA_DIR := /usr/local/cuda
# CUDA_DIR := /usr/local/cuda-11.6
# On Ubuntu 14.04, if cuda tools are installed via
# "sudo apt-get install nvidia-cuda-toolkit" then use this instead:
# CUDA_DIR := /usr

# CUDA architecture setting: going with all of them.
# For CUDA < 6.0, comment the *_50 through *_61 lines for compatibility.
# For CUDA < 8.0, comment the *_60 and *_61 lines for compatibility.
CUDA_ARCH := -gencode arch=compute_50,code=sm_50 \
		#-gencode arch=compute_20,code=sm_20 \
		#-gencode arch=compute_20,code=sm_21 \
		#-gencode arch=compute_30,code=sm_30 \
		#-gencode arch=compute_35,code=sm_35 \
		#-gencode arch=compute_50,code=sm_50 \
		-gencode arch=compute_52,code=sm_52 \
		-gencode arch=compute_60,code=sm_60 \
		-gencode arch=compute_61,code=sm_61 \
		-gencode arch=compute_61,code=compute_61

# BLAS choice:
# atlas for ATLAS (default)
# mkl for MKL
# open for OpenBlas
BLAS := atlas
# Custom (MKL/ATLAS/OpenBLAS) include and lib directories.
# Leave commented to accept the defaults for your choice of BLAS
# (which should work)!
# BLAS_INCLUDE := /path/to/your/blas
# BLAS_LIB := /path/to/your/blas

# Homebrew puts openblas in a directory that is not on the standard search path
# BLAS_INCLUDE := $(shell brew --prefix openblas)/include
# BLAS_LIB := $(shell brew --prefix openblas)/lib

# This is required only if you will compile the matlab interface.
# MATLAB directory should contain the mex binary in /bin.
# MATLAB_DIR := /usr/local
# MATLAB_DIR := /Applications/MATLAB_R2012b.app

# NOTE: this is required only if you will compile the python interface.
# We need to be able to find Python.h and numpy/arrayobject.h.
# PYTHON_INCLUDE := /usr/include/python2.7 \
		# /usr/lib/python2.7/dist-packages/numpy/core/include

# Anaconda Python distribution is quite popular. Include path:
# Verify anaconda location, sometimes it's in root.
# ANACONDA_HOME := $(HOME)/anaconda
ANACONDA_HOME := /home/bear233/anaconda3
# PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
		# $(ANACONDA_HOME)/include/python3.9 \
		# $(ANACONDA_HOME)/lib/python2.7/site-packages/numpy/core/include

# Uncomment to use Python 3 (default is Python 2)
 PYTHON_LIBRARIES := boost_python3 python3.8
 PYTHON_INCLUDE := /usr/include/python3.8 \
                 # /usr/lib/python3.8/dist-packages/numpy/core/include

# We need to be able to find libpythonX.X.so or .dylib.
PYTHON_LIB := /usr/lib
# PYTHON_LIB := $(ANACONDA_HOME)/lib

# Homebrew installs numpy in a non standard path (keg only)
# PYTHON_INCLUDE += $(dir $(shell python -c 'import numpy.core; print(numpy.core.__file__)'))/include
# PYTHON_LIB += $(shell brew --prefix numpy)/lib

# Uncomment to support layers written in Python (will link against Python libs)
WITH_PYTHON_LAYER := 1

# Whatever else you find you need goes here.
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial/
# LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /usr/lib/x86_64-linux-gnu /usr/lib/x86_64-linux-gnu/hdf5/serial

# If Homebrew is installed at a non standard location (for example your home directory) and you use it for general dependencies
# INCLUDE_DIRS += $(shell brew --prefix)/include
# LIBRARY_DIRS += $(shell brew --prefix)/lib

# NCCL acceleration switch (uncomment to build with NCCL)
# https://github.com/NVIDIA/nccl (last tested version: v1.2.3-1+cuda8.0)
# USE_NCCL := 1

# Uncomment to use `pkg-config` to specify OpenCV library paths.
# (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)
# USE_PKG_CONFIG := 1

# N.B. both build and distribute dirs are cleared on `make clean`
BUILD_DIR := build
DISTRIBUTE_DIR := distribute

# Uncomment for debugging. Does not work on OSX due to https://github.com/BVLC/caffe/issues/171
# DEBUG := 1

# The ID of the GPU that 'make runtest' will use to run unit tests.
TEST_GPUID := 0

# enable pretty build (comment to see full commands)
Q ?= @
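One thing worth double-checking in the config above: the active -gencode lines in CUDA_ARCH need to cover the GPU's actual compute capability, or GPU kernels can fail at runtime even when the build succeeds. A throwaway helper (purely illustrative, not part of the Caffe build) that turns a capability string into the matching nvcc flag:

```python
def gencode_flag(compute_cap: str) -> str:
    """Turn a compute capability like '6.1' into an nvcc -gencode flag."""
    sm = compute_cap.replace(".", "")
    return f"-gencode arch=compute_{sm},code=sm_{sm}"

# e.g. for a compute capability 6.1 card:
print(gencode_flag("6.1"))  # -gencode arch=compute_61,code=sm_61
```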

System configuration

  • Operating system: Linux(WSL2)
  • CUDA version (if applicable): 11.6
  • CUDNN version (if applicable): 8.3.2
  • Python version (if using pycaffe): 3.9.7

Could someone please help me with this? I have already tried most of the solutions I found on the internet, with no luck. Many thanks!


bxiong97 commented Mar 8, 2022

Update:

I tried CUDA_VISIBLE_DEVICES=0 make runtest instead of make runtest.

More test cases passed, but I still hit the same issue: the CUDA device count is 0.
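For context (my understanding, not from the Caffe docs): CUDA_VISIBLE_DEVICES only filters devices the runtime already detects; it cannot make a device appear, which would explain why the count stays at 0. The prefix form simply exports the variable into the child process's environment, equivalent to this sketch:

```python
import os
import subprocess

# Equivalent of `CUDA_VISIBLE_DEVICES=0 make runtest`: run the child
# process with the variable added to its environment.
env = dict(os.environ, CUDA_VISIBLE_DEVICES="0")
out = subprocess.run(
    ["sh", "-c", "echo $CUDA_VISIBLE_DEVICES"],
    env=env, capture_output=True, text=True,
).stdout.strip()
print(out)  # 0
```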

(base) b***@DESKTOP-****:/mnt/c/Users/bx/caffe-1.0$ CUDA_VISIBLE_DEVICES=0 make runtest
.build_release/tools/caffe
caffe: command line brew
usage: caffe <command> <args>

commands:
  train           train or finetune a model
  test            score a model
  device_query    show GPU diagnostic information
  time            benchmark model execution time

  Flags from tools/caffe.cpp:
    -gpu (Optional; run in GPU mode on given device IDs separated by ','.Use
      '-gpu all' to run on all available GPUs. The effective training batch
      size is multiplied by the number of devices.) type: string default: ""
    -iterations (The number of iterations to run.) type: int32 default: 50
    -level (Optional; network level.) type: int32 default: 0
    -model (The model definition protocol buffer text file.) type: string
      default: ""
    -phase (Optional; network phase (TRAIN or TEST). Only used for 'time'.)
      type: string default: ""
    -sighup_effect (Optional; action to take when a SIGHUP signal is received:
      snapshot, stop or none.) type: string default: "snapshot"
    -sigint_effect (Optional; action to take when a SIGINT signal is received:
      snapshot, stop or none.) type: string default: "stop"
    -snapshot (Optional; the snapshot solver state to resume training.)
      type: string default: ""
    -solver (The solver definition protocol buffer text file.) type: string
      default: ""
    -stage (Optional; network stages (not to be confused with phase), separated
      by ','.) type: string default: ""
    -weights (Optional; the pretrained weights to initialize finetuning,
      separated by ','. Cannot be set simultaneously with snapshot.)
      type: string default: ""
.build_release/test/test_all.testbin 0 --gtest_shuffle
Cuda number of devices: 0
Setting to use device 0
Current device id: 0
Current device name:
Note: Randomizing tests' orders with a seed of 54062 .
[==========] Running 2101 tests from 277 test cases.
[----------] Global test environment set-up.
[----------] 3 tests from MSRAFillerTest/0, where TypeParam = float
[ RUN      ] MSRAFillerTest/0.TestFillFanIn
E0308 00:11:50.928552   340 common.cpp:114] Cannot create Cublas handle. Cublas won't be available.
E0308 00:11:50.970577   340 common.cpp:121] Cannot create Curand generator. Curand won't be available.
[       OK ] MSRAFillerTest/0.TestFillFanIn (95 ms)
[ RUN      ] MSRAFillerTest/0.TestFillAverage
[       OK ] MSRAFillerTest/0.TestFillAverage (0 ms)
[ RUN      ] MSRAFillerTest/0.TestFillFanOut
[       OK ] MSRAFillerTest/0.TestFillFanOut (1 ms)
[----------] 3 tests from MSRAFillerTest/0 (96 ms total)

[----------] 12 tests from ArgMaxLayerTest/1, where TypeParam = double
[ RUN      ] ArgMaxLayerTest/1.TestCPUMaxValTopK
E0308 00:11:50.980310   340 common.cpp:141] Curand not available. Skipping setting the curand seed.
[       OK ] ArgMaxLayerTest/1.TestCPUMaxValTopK (3 ms)
[ RUN      ] ArgMaxLayerTest/1.TestSetupAxisMaxVal
[       OK ] ArgMaxLayerTest/1.TestSetupAxisMaxVal (2 ms)
[ RUN      ] ArgMaxLayerTest/1.TestSetupMaxVal
[       OK ] ArgMaxLayerTest/1.TestSetupMaxVal (1 ms)
[ RUN      ] ArgMaxLayerTest/1.TestCPUAxisMaxValTopK
[       OK ] ArgMaxLayerTest/1.TestCPUAxisMaxValTopK (18 ms)
[ RUN      ] ArgMaxLayerTest/1.TestCPUAxis
[       OK ] ArgMaxLayerTest/1.TestCPUAxis (4 ms)
[ RUN      ] ArgMaxLayerTest/1.TestCPUTopK
[       OK ] ArgMaxLayerTest/1.TestCPUTopK (1 ms)
[ RUN      ] ArgMaxLayerTest/1.TestCPU
[       OK ] ArgMaxLayerTest/1.TestCPU (1 ms)
[ RUN      ] ArgMaxLayerTest/1.TestSetupAxis
[       OK ] ArgMaxLayerTest/1.TestSetupAxis (0 ms)
[ RUN      ] ArgMaxLayerTest/1.TestSetupAxisNegativeIndexing
[       OK ] ArgMaxLayerTest/1.TestSetupAxisNegativeIndexing (1 ms)
[ RUN      ] ArgMaxLayerTest/1.TestSetup
[       OK ] ArgMaxLayerTest/1.TestSetup (0 ms)
[ RUN      ] ArgMaxLayerTest/1.TestCPUMaxVal
[       OK ] ArgMaxLayerTest/1.TestCPUMaxVal (1 ms)
[ RUN      ] ArgMaxLayerTest/1.TestCPUAxisTopK
[       OK ] ArgMaxLayerTest/1.TestCPUAxisTopK (19 ms)
[----------] 12 tests from ArgMaxLayerTest/1 (51 ms total)

[----------] 27 tests from ReductionLayerTest/1, where TypeParam = caffe::CPUDevice<double>
[ RUN      ] ReductionLayerTest/1.TestMeanCoeff
[       OK ] ReductionLayerTest/1.TestMeanCoeff (7 ms)
[ RUN      ] ReductionLayerTest/1.TestAbsSumCoeffAxis1
[       OK ] ReductionLayerTest/1.TestAbsSumCoeffAxis1 (0 ms)
[ RUN      ] ReductionLayerTest/1.TestMeanCoeffGradient
[       OK ] ReductionLayerTest/1.TestMeanCoeffGradient (1 ms)
[ RUN      ] ReductionLayerTest/1.TestAbsSumCoeffGradient
[       OK ] ReductionLayerTest/1.TestAbsSumCoeffGradient (0 ms)
[ RUN      ] ReductionLayerTest/1.TestAbsSumGradient
[       OK ] ReductionLayerTest/1.TestAbsSumGradient (0 ms)
[ RUN      ] ReductionLayerTest/1.TestSumOfSquaresCoeff
[       OK ] ReductionLayerTest/1.TestSumOfSquaresCoeff (0 ms)
[ RUN      ] ReductionLayerTest/1.TestSumCoeff
[       OK ] ReductionLayerTest/1.TestSumCoeff (0 ms)
[ RUN      ] ReductionLayerTest/1.TestSumOfSquaresGradient
[       OK ] ReductionLayerTest/1.TestSumOfSquaresGradient (0 ms)
[ RUN      ] ReductionLayerTest/1.TestSumCoeffAxis1
[       OK ] ReductionLayerTest/1.TestSumCoeffAxis1 (0 ms)
[ RUN      ] ReductionLayerTest/1.TestSetUpWithAxis2
[       OK ] ReductionLayerTest/1.TestSetUpWithAxis2 (0 ms)
[ RUN      ] ReductionLayerTest/1.TestAbsSumCoeffAxis1Gradient
[       OK ] ReductionLayerTest/1.TestAbsSumCoeffAxis1Gradient (1 ms)
[ RUN      ] ReductionLayerTest/1.TestAbsSum
[       OK ] ReductionLayerTest/1.TestAbsSum (0 ms)
[ RUN      ] ReductionLayerTest/1.TestAbsSumCoeff
[       OK ] ReductionLayerTest/1.TestAbsSumCoeff (0 ms)
[ RUN      ] ReductionLayerTest/1.TestSumGradient
[       OK ] ReductionLayerTest/1.TestSumGradient (1 ms)
[ RUN      ] ReductionLayerTest/1.TestSumOfSquaresCoeffAxis1Gradient
[       OK ] ReductionLayerTest/1.TestSumOfSquaresCoeffAxis1Gradient (1 ms)
[ RUN      ] ReductionLayerTest/1.TestSumOfSquaresCoeffGradient
[       OK ] ReductionLayerTest/1.TestSumOfSquaresCoeffGradient (0 ms)
[ RUN      ] ReductionLayerTest/1.TestSum
[       OK ] ReductionLayerTest/1.TestSum (0 ms)
[ RUN      ] ReductionLayerTest/1.TestSumOfSquares
[       OK ] ReductionLayerTest/1.TestSumOfSquares (0 ms)
[ RUN      ] ReductionLayerTest/1.TestSetUpWithAxis1
[       OK ] ReductionLayerTest/1.TestSetUpWithAxis1 (0 ms)
[ RUN      ] ReductionLayerTest/1.TestSumOfSquaresCoeffAxis1
[       OK ] ReductionLayerTest/1.TestSumOfSquaresCoeffAxis1 (0 ms)
[ RUN      ] ReductionLayerTest/1.TestMeanCoeffGradientAxis1
[       OK ] ReductionLayerTest/1.TestMeanCoeffGradientAxis1 (1 ms)
[ RUN      ] ReductionLayerTest/1.TestMeanGradient
[       OK ] ReductionLayerTest/1.TestMeanGradient (1 ms)
[ RUN      ] ReductionLayerTest/1.TestSumCoeffGradient
[       OK ] ReductionLayerTest/1.TestSumCoeffGradient (0 ms)
[ RUN      ] ReductionLayerTest/1.TestSumCoeffAxis1Gradient
[       OK ] ReductionLayerTest/1.TestSumCoeffAxis1Gradient (1 ms)
[ RUN      ] ReductionLayerTest/1.TestMeanCoeffAxis1
[       OK ] ReductionLayerTest/1.TestMeanCoeffAxis1 (0 ms)
[ RUN      ] ReductionLayerTest/1.TestSetUp
[       OK ] ReductionLayerTest/1.TestSetUp (0 ms)
[ RUN      ] ReductionLayerTest/1.TestMean
[       OK ] ReductionLayerTest/1.TestMean (0 ms)
[----------] 27 tests from ReductionLayerTest/1 (15 ms total)

[----------] 2 tests from InfogainLossLayerTest/0, where TypeParam = caffe::CPUDevice<float>
[ RUN      ] InfogainLossLayerTest/0.TestGradient
F0308 00:11:51.111351   340 cudnn_softmax_layer.cpp:15] Check failed: status == CUDNN_STATUS_SUCCESS (1 vs. 0)  CUDNN_STATUS_NOT_INITIALIZED
*** Check failure stack trace: ***
    @     0x7fadd223e1c3  google::LogMessage::Fail()
    @     0x7fadd224325b  google::LogMessage::SendToLog()
    @     0x7fadd223debf  google::LogMessage::Flush()
    @     0x7fadd223e6ef  google::LogMessageFatal::~LogMessageFatal()
    @     0x7fadd0cad050  caffe::CuDNNSoftmaxLayer<>::LayerSetUp()
    @     0x7fadd0cf96b6  caffe::InfogainLossLayer<>::LayerSetUp()
    @     0x555a5f4d2b35  caffe::GradientChecker<>::CheckGradientExhaustive()
    @     0x555a5f6e1dfb  caffe::InfogainLossLayerTest_TestGradient_Test<>::TestBody()
    @     0x555a5f9ca211  testing::internal::HandleExceptionsInMethodIfSupported<>()
    @     0x555a5f9c204d  testing::Test::Run()
    @     0x555a5f9c2188  testing::TestInfo::Run()
    @     0x555a5f9c2265  testing::TestCase::Run()
    @     0x555a5f9c278c  testing::internal::UnitTestImpl::RunAllTests()
    @     0x555a5f9c2857  testing::UnitTest::Run()
    @     0x555a5f49d217  main
    @     0x7fadd07440b3  __libc_start_main
    @     0x555a5f4a4d9e  _start
make: *** [Makefile:534: runtest] Aborted
