cv::cudacodec::createVideoReader Not Working as expected #3359

Closed
ThinkWD opened this issue Oct 8, 2022 · 32 comments

@ThinkWD commented Oct 8, 2022

Hi, I am unable to run the video reader GPU sample using cv::cudacodec::createVideoReader. My configuration is as follows:
Ubuntu 20.04, CUDA 11.6, OpenCV 4.6.0 built from source.

OpenCV build information:

General configuration for OpenCV 4.6.0 =====================================
  Version control:               unknown

  Extra modules:
    Location (extra):            /root/workspace/opencv/opencv_contrib-4.6.0/modules
    Version control (extra):     unknown

  Platform:
    Timestamp:                   2022-09-29T08:24:21Z
    Host:                        Linux 4.15.0-193-generic x86_64
    CMake:                       3.14.4
    CMake generator:             Unix Makefiles
    CMake build tool:            /usr/bin/make
    Configuration:               RELEASE

  CPU/HW features:
    Baseline:                    SSE SSE2 SSE3
      requested:                 SSE3
    Dispatched code generation:  SSE4_1 SSE4_2 FP16 AVX AVX2 AVX512_SKX
      requested:                 SSE4_1 SSE4_2 AVX FP16 AVX2 AVX512_SKX
      SSE4_1 (18 files):         + SSSE3 SSE4_1
      SSE4_2 (2 files):          + SSSE3 SSE4_1 POPCNT SSE4_2
      FP16 (1 files):            + SSSE3 SSE4_1 POPCNT SSE4_2 FP16 AVX
      AVX (5 files):             + SSSE3 SSE4_1 POPCNT SSE4_2 AVX
      AVX2 (33 files):           + SSSE3 SSE4_1 POPCNT SSE4_2 FP16 FMA3 AVX AVX2
      AVX512_SKX (8 files):      + SSSE3 SSE4_1 POPCNT SSE4_2 FP16 FMA3 AVX AVX2 AVX_512F AVX512_COMMON AVX512_SKX

  C/C++:
    Built as dynamic libs?:      YES
    C++ standard:                11
    C++ Compiler:                /usr/bin/c++  (ver 9.4.0)
    C++ flags (Release):         -fsigned-char -ffast-math -W -Wall -Wreturn-type -Wnon-virtual-dtor -Waddress -Wsequence-point -Wformat -Wformat-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Wsuggest-override -Wno-delete-non-virtual-dtor -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections  -msse -msse2 -msse3 -fvisibility=hidden -fvisibility-inlines-hidden -O3 -DNDEBUG  -DNDEBUG
    C++ flags (Debug):           -fsigned-char -ffast-math -W -Wall -Wreturn-type -Wnon-virtual-dtor -Waddress -Wsequence-point -Wformat -Wformat-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Wsuggest-override -Wno-delete-non-virtual-dtor -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections  -msse -msse2 -msse3 -fvisibility=hidden -fvisibility-inlines-hidden -g  -O0 -DDEBUG -D_DEBUG
    C Compiler:                  /usr/bin/cc
    C flags (Release):           -fsigned-char -ffast-math -W -Wall -Wreturn-type -Waddress -Wsequence-point -Wformat -Wformat-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wuninitialized -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections  -msse -msse2 -msse3 -fvisibility=hidden -O3 -DNDEBUG  -DNDEBUG
    C flags (Debug):             -fsigned-char -ffast-math -W -Wall -Wreturn-type -Waddress -Wsequence-point -Wformat -Wformat-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wuninitialized -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections  -msse -msse2 -msse3 -fvisibility=hidden -g  -O0 -DDEBUG -D_DEBUG
    Linker flags (Release):      -Wl,--gc-sections -Wl,--as-needed -Wl,--no-undefined  
    Linker flags (Debug):        -Wl,--gc-sections -Wl,--as-needed -Wl,--no-undefined  
    ccache:                      NO
    Precompiled headers:         NO
    Extra dependencies:          m pthread cudart_static dl rt nppc nppial nppicc nppidei nppif nppig nppim nppist nppisu nppitc npps cublas cudnn cufft -L/usr/local/cuda/lib64 -L/usr/lib/x86_64-linux-gnu
    3rdparty dependencies:

  OpenCV modules:
    To be built:                 aruco barcode bgsegm bioinspired calib3d ccalib core cudaarithm cudabgsegm cudacodec cudafeatures2d cudafilters cudaimgproc cudalegacy cudaobjdetect cudaoptflow cudastereo cudawarping cudev datasets dnn dnn_objdetect dnn_superres dpm face features2d flann fuzzy gapi hfs highgui img_hash imgcodecs imgproc intensity_transform line_descriptor mcc ml objdetect optflow phase_unwrapping photo plot quality rapid reg rgbd saliency shape stereo stitching structured_light superres surface_matching text tracking ts video videoio videostab wechat_qrcode xfeatures2d ximgproc xobjdetect xphoto
    Disabled:                    world
    Disabled by dependency:      -
    Unavailable:                 alphamat cvv freetype hdf java julia matlab ovis python2 python3 sfm viz
    Applications:                tests perf_tests apps
    Documentation:               NO
    Non-free algorithms:         NO

  GUI:                           NONE
    GTK+:                        NO
    VTK support:                 NO

  Media I/O: 
    ZLib:                        /usr/lib/x86_64-linux-gnu/libz.so (ver 1.2.11)
    JPEG:                        /usr/lib/x86_64-linux-gnu/libjpeg.so (ver 80)
    WEBP:                        build (ver encoder: 0x020f)
    PNG:                         /usr/lib/x86_64-linux-gnu/libpng.so (ver 1.6.37)
    TIFF:                        /usr/lib/x86_64-linux-gnu/libtiff.so (ver 42 / 4.1.0)
    JPEG 2000:                   build (ver 2.4.0)
    OpenEXR:                     /usr/lib/x86_64-linux-gnu/libImath.so /usr/lib/x86_64-linux-gnu/libIlmImf.so /usr/lib/x86_64-linux-gnu/libIex.so /usr/lib/x86_64-linux-gnu/libHalf.so /usr/lib/x86_64-linux-gnu/libIlmThread.so (ver 2_3)
    HDR:                         YES
    SUNRASTER:                   YES
    PXM:                         YES
    PFM:                         YES

  Video I/O:
    DC1394:                      YES (2.2.5)
    FFMPEG:                      YES
      avcodec:                   YES (58.54.100)
      avformat:                  YES (58.29.100)
      avutil:                    YES (56.31.100)
      swscale:                   YES (5.5.100)
      avresample:                YES (4.0.0)
    GStreamer:                   NO
    v4l/v4l2:                    YES (linux/videodev2.h)

  Parallel framework:            pthreads

  Trace:                         YES (with Intel ITT)

  Other third-party libraries:
    VA:                          NO
    Lapack:                      NO
    Eigen:                       NO
    Custom HAL:                  NO
    Protobuf:                    build (3.19.1)

  NVIDIA CUDA:                   YES (ver 11.6, CUFFT CUBLAS NVCUVID FAST_MATH)
    NVIDIA GPU arch:             35 37 50 52 60 61 70 75 80 86
    NVIDIA PTX archs:

  cuDNN:                         YES (ver 8.4.0)

  OpenCL:                        YES (no extra features)
    Include path:                /root/workspace/opencv/opencv-4.6.0/3rdparty/include/opencl/1.2
    Link libraries:              Dynamic load

  Python (for build):            /opt/conda/bin/python3

  Java:                          
    ant:                         NO
    JNI:                         NO
    Java wrappers:               NO
    Java tests:                  NO

  Install to:                    /root/workspace/opencv/opencv
-----------------------------------------------------------------

This is the code I run:

#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/cudacodec.hpp>

int main() {
    // const std::string fname = "rtsp://admin:abcd1234@192.168.0.121:554/h264/ch1/main/av_stream";
    const std::string fname = "./video/video.mp4";

    std::cout << cv::getBuildInformation() << std::endl;

    cv::cuda::GpuMat d_frame;
    cv::Ptr<cv::cudacodec::VideoReader> d_reader = cv::cudacodec::createVideoReader(fname);
    return 0;
}

I tried both RTSP streams and video files and got the same error:

terminate called after throwing an instance of 'cv::Exception'
  what():  OpenCV(4.6.0) /root/workspace/opencv/opencv_contrib-4.6.0/modules/cudacodec/src/video_parser.cpp:63: error: (-217:Gpu API call) Unknown error code [Code = 723438896] in function 'VideoParser'

Thanks in advance!

@cudawarped (Contributor) commented Oct 9, 2022

[Code = 723438896]

Looks like an unusual error code. Are you able to run any other OpenCV CUDA functions? If so, can you:

  1. confirm that the big_buck_bunny.mp4 test video produces the same issue, and
  2. if possible, try running the encoding sample from the Nvidia Video Codec SDK?

Which GPU and driver version are you using?
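
If it helps, a minimal way to report this from the OpenCV side (a sketch, assuming only that your build includes the CUDA core module, which your configuration shows):

#include <opencv2/core/cuda.hpp>

int main() {
    // Prints the device name, compute capability and the CUDA driver/runtime
    // versions for the device OpenCV is currently using.
    cv::cuda::printCudaDeviceInfo(cv::cuda::getDevice());
    return 0;
}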

@ThinkWD (Author) commented Oct 11, 2022

Sorry for the late reply.

Looks like an unusual error code.

In fact, the error code is not consistent; I got several different codes across attempts, such as:

[Code = -244827920]
[Code = 652536048]
[Code = 1594538224]
[Code = -1155204880]
[Code = 1572964592]

Are you able to run any other OpenCV cuda functions?

and

if possible try running the encoding sample from the Nvidia Video Codec SDK?

I'm sorry, I can't verify this at the moment. Maybe you can provide an example to help me verify it?

Since I'm building OpenCV on Ubuntu, I didn't find a sample that looked right to run, and when I tried to run video_writer.cpp I got the following error:

Device 0:  "NVIDIA GeForce RTX 3080"  10010Mb, sm_86, Driver/Runtime ver.11.70/11.30
Read 1 frame
Frame Size : 672x384
Open CPU Writer
Open CUDA Writer
terminate called after throwing an instance of 'cv::Exception'
  what():  OpenCV(4.6.0) /home/lx_dir/opencv/opencv-4.6.0/modules/core/include/opencv2/core/private.cuda.hpp:112: error: (-213:The function/feature is not implemented) The called functionality is disabled for current build or platform in function 'throw_no_cuda'

confirm that the big_buck_bunny.mp4 test video produces the same issue and,

Yes, I got the same error.

Which GPU and driver version are you using?

I also suspected that it might be a problem with my device, so I tried it on another device with a different GPU, but encountered the same problem.

Device 1:

Product Name: NVIDIA GeForce RTX 3080

# nvidia-smi
NVIDIA-SMI 515.65.01    Driver Version: 515.65.01    CUDA Version: 11.7

# nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Mon_May__3_19:15:13_PDT_2021
Cuda compilation tools, release 11.3, V11.3.109
Build cuda_11.3.r11.3/compiler.29920130_0

Device 2:

Product Name: NVIDIA GeForce RTX 2060

# nvidia-smi
NVIDIA-SMI 515.65.01    Driver Version: 515.65.01    CUDA Version: 11.7

# nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Tue_Mar__8_18:18:20_PST_2022
Cuda compilation tools, release 11.6, V11.6.124
Build cuda_11.6.r11.6/compiler.31057947_0

And this is the CMake command I used when building OpenCV. Maybe there is something wrong with it?

mkdir -p build && cd build

cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=../../opencv \
-D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib-4.6.0/modules \
-D WITH_CUDA=ON \
-D WITH_CUDNN=ON \
-D OPENCV_DNN_CUDA=ON \
-D WITH_NVCUVID=ON \
-D WITH_CUBLAS=ON \
-D WITH_CUFFT=ON \
-D WITH_FFMPEG=ON \
-D ENABLE_FAST_MATH=ON \
-D CUDA_FAST_MATH=ON \
..

make -j$(nproc) && make install
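
As an aside, here is a quick sanity check (a sketch, assuming the install above succeeded) that the installed build sees a CUDA device and was compiled with NVCUVID, which cudacodec needs:

#include <iostream>
#include <string>
#include <opencv2/core.hpp>
#include <opencv2/core/cuda.hpp>

int main() {
    // 0 means no usable CUDA device was found by this OpenCV build.
    std::cout << "CUDA devices: " << cv::cuda::getCudaEnabledDeviceCount() << std::endl;
    // The build summary lists NVCUVID when hardware decoding support was compiled in.
    const std::string info = cv::getBuildInformation();
    std::cout << "NVCUVID in build info: "
              << (info.find("NVCUVID") != std::string::npos ? "yes" : "no") << std::endl;
    return 0;
}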

@cudawarped (Contributor):

I'm sorry, I can't verify this at the moment. Maybe you can provide an example to help me verify it?

I just noticed that you are using the sample code. As far as I am aware, that sample is broken, so that could be your issue.

Try replacing

cv::imshow("GPU", d_frame);

with

d_frame.download(frame);
cv::imshow("CPU", frame);

in the sample code.

The sample should display the video decoded on the CPU and then on the GPU. Does the CPU version display correctly?
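
For reference, a minimal sketch of the GPU half of the loop with that change applied (an illustration of the suggested change, not the exact sample code; the file name reuses the test path from above):

#include <opencv2/cudacodec.hpp>
#include <opencv2/highgui.hpp>

int main() {
    const std::string fname = "./video/video.mp4";
    cv::Ptr<cv::cudacodec::VideoReader> d_reader = cv::cudacodec::createVideoReader(fname);
    cv::cuda::GpuMat d_frame;
    cv::Mat frame;
    for (;;)
    {
        // Decode the next frame on the GPU; nextFrame() returns false at end of stream.
        if (!d_reader->nextFrame(d_frame))
            break;
        // imshow cannot display a GpuMat directly unless OpenCV was built with OpenGL,
        // so download the decoded frame to host memory first.
        d_frame.download(frame);
        cv::imshow("GPU decode", frame);
        if (cv::waitKey(3) > 0)
            break;
    }
    return 0;
}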

@ThinkWD (Author) commented Oct 11, 2022

When the program reaches cv::cudacodec::createVideoReader(fname), it stops and reports the error; the code after that is never executed. So I don't think this is the problem.

@cudawarped (Contributor):

Good point. Have you built the tests? If so, does opencv_test_cudaarithm work?

If you haven't built the tests, then you need to confirm that CUDA is working; a different error each time implies it could be returning an error from a previous CUDA call.

If something simple like the below

cv::cuda::GpuMat src(100, 100, CV_8UC1);
cv::cuda::GpuMat dst(1000, 1000, CV_8UC1);
cv::cuda::resize(src, dst, { 1000, 1000 });
cv::Mat dstHost; dst.download(dstHost);

works, then CUDA is working and the next step would be to download and build the Nvidia samples to see if they work.
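
If the error really is left over from an earlier CUDA call, one way to surface it explicitly (a sketch, assuming the CUDA runtime headers are available, as they are when OpenCV is built with CUDA) is to synchronize and inspect the error state right after the test above:

#include <cuda_runtime.h>
#include <iostream>

// Call this after the resize/download test; if a preceding asynchronous CUDA call
// failed, the failure is reported here rather than by whatever API call runs next.
void checkCudaState()
{
    cudaError_t err = cudaDeviceSynchronize();
    if (err == cudaSuccess)
        err = cudaGetLastError();
    std::cout << "CUDA state: " << cudaGetErrorString(err) << std::endl;
}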

@ThinkWD (Author) commented Oct 12, 2022

If something simple like the below works then CUDA is working

I modified and ran this example:

cv::cuda::GpuMat src(cv::imread("./img.jpg"));
cv::cuda::GpuMat dst;
cv::cuda::resize(src, dst, {1000, 1000});
cv::Mat dstHost; dst.download(dstHost);
cv::imwrite("./res.jpg", dstHost);

and got the correct output.

and the next step would be to download and build the Nvidia samples and see if they work.

I built the Nvidia samples and ran Video_Codec_SDK_11.1.5/Samples/AppDec with the following command:

./AppDec -i big_buck_bunny.mp4 -o ouput.mp4 -resize 800x600

I got the following output:

GPU in use: NVIDIA GeForce RTX 3080
Decode with demuxing.
[INFO ][13:15:52] Media format: QuickTime / MOV (mov,mp4,m4a,3gp,3g2,mj2)
Session Initialization Time: 3 ms 
[INFO ][13:15:52] Video Input Information
        Codec        : MPEG-4 (ASP)
        Frame rate   : 24/10 = 2.4 fps
        Sequence     : Progressive
        Coded size   : [672, 384]
        Display area : [0, 0, 672, 384]
        Chroma       : YUV 420
        Bit depth    : 8
Video Decoding Params:
        Num Surfaces : 4
        Crop         : [0, 0, 672, 384]
        Resize       : 800x600
        Deinterlace  : Weave

Total frame decoded: 125
Saved in file ouput.mp4 in NV12 format
Session Deinitialization Time: 1 ms 

I can't play ouput.mp4 (AppDec saved it as raw NV12 frames, despite the .mp4 name), but I think the decoder is working properly.
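
Since the file is raw NV12 rather than a real MP4 container, something like the following should play it (a guess at the invocation, assuming ffplay is installed and using the 800x600 resize from the command above):

ffplay -f rawvideo -pixel_format nv12 -video_size 800x600 ouput.mp4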

@cudawarped (Contributor):

Looks to be related to #3362 and #3359.

@ThinkWD (Author) commented Oct 12, 2022

So should I try an older OpenCV version?
Also, I only found a decoding example among the samples and no example of encoding and pushing streams. Is there anything relevant I can refer to? Thank you very much for your reply!

@cudawarped (Contributor):

So should I try an older OpenCV version?

I wouldn't think that would make a difference; it looks like an error generated by the driver API, but I can't recreate it on my side. That said, I'm using WSL with the same CUDA runtime and driver version (11.7), but I wouldn't expect that to be the issue.

Also, I only found a decoding example among the samples and no example of encoding and pushing streams. Is there anything relevant I can refer to?

The encoding hasn't worked for a few years, and if you want to push RTSP streams you probably want to use a different library than OpenCV.

Can you run bin/opencv_test_cudaarithm to see if you get the same errors as #3361?

@ThinkWD (Author) commented Oct 12, 2022

Of course. I ran all of the test programs whose names start with opencv_test_cuda; opencv_test_cudafilters and opencv_test_cudabgsegm did not report any errors.
The following are some of the results (only the last part of each output is shown):

opencv_test_cudaarithm
[----------] Global test environment tear-down
[==========] 11340 tests from 68 test cases ran. (4640 ms total)
[  PASSED  ] 11327 tests.
[  FAILED  ] 13 tests, listed below:
[  FAILED  ] CUDA_Arithm/Exp.Accuracy/6, where GetParam() = (NVIDIA GeForce RTX 3080, 128x128, CV_32F, whole matrix)
[  FAILED  ] CUDA_Arithm/Exp.Accuracy/7, where GetParam() = (NVIDIA GeForce RTX 3080, 128x128, CV_32F, sub matrix)
[  FAILED  ] CUDA_Arithm/Exp.Accuracy/14, where GetParam() = (NVIDIA GeForce RTX 3080, 113x113, CV_32F, whole matrix)
[  FAILED  ] CUDA_Arithm/Exp.Accuracy/15, where GetParam() = (NVIDIA GeForce RTX 3080, 113x113, CV_32F, sub matrix)
[  FAILED  ] CUDA_Arithm/PolarToCart.Accuracy/0, where GetParam() = (NVIDIA GeForce RTX 3080, 128x128, 32FC1, AngleInDegrees(false), whole matrix)
[  FAILED  ] CUDA_Arithm/PolarToCart.Accuracy/1, where GetParam() = (NVIDIA GeForce RTX 3080, 128x128, 32FC1, AngleInDegrees(false), sub matrix)
[  FAILED  ] CUDA_Arithm/PolarToCart.Accuracy/2, where GetParam() = (NVIDIA GeForce RTX 3080, 128x128, 32FC1, AngleInDegrees(true), whole matrix)
[  FAILED  ] CUDA_Arithm/PolarToCart.Accuracy/3, where GetParam() = (NVIDIA GeForce RTX 3080, 128x128, 32FC1, AngleInDegrees(true), sub matrix)
[  FAILED  ] CUDA_Arithm/PolarToCart.Accuracy/8, where GetParam() = (NVIDIA GeForce RTX 3080, 113x113, 32FC1, AngleInDegrees(false), whole matrix)
[  FAILED  ] CUDA_Arithm/PolarToCart.Accuracy/9, where GetParam() = (NVIDIA GeForce RTX 3080, 113x113, 32FC1, AngleInDegrees(false), sub matrix)
[  FAILED  ] CUDA_Arithm/PolarToCart.Accuracy/10, where GetParam() = (NVIDIA GeForce RTX 3080, 113x113, 32FC1, AngleInDegrees(true), whole matrix)
[  FAILED  ] CUDA_Arithm/PolarToCart.Accuracy/11, where GetParam() = (NVIDIA GeForce RTX 3080, 113x113, 32FC1, AngleInDegrees(true), sub matrix)
[  FAILED  ] CUDA/GpuMat_SetTo.SameVal/8, where GetParam() = (NVIDIA GeForce RTX 3080, 128x128, 8SC1, whole matrix)

13 FAILED TESTS
opencv_test_cudacodec
[----------] Global test environment tear-down
[==========] 30 tests from 8 test cases ran. (1695 ms total)
[  PASSED  ] 0 tests.
[  FAILED  ] 30 tests, listed below:
[  FAILED  ] CUDA_Codec/CheckSet.Reader/0, where GetParam() = (NVIDIA GeForce RTX 3080, "highgui/video/big_buck_bunny.mp4")
[  FAILED  ] CUDA_Codec/CheckExtraData.Reader/0, where GetParam() = (NVIDIA GeForce RTX 3080, ("highgui/video/big_buck_bunny.mp4", 45))
[  FAILED  ] CUDA_Codec/CheckExtraData.Reader/1, where GetParam() = (NVIDIA GeForce RTX 3080, ("highgui/video/big_buck_bunny.mov", 45))
[  FAILED  ] CUDA_Codec/CheckExtraData.Reader/2, where GetParam() = (NVIDIA GeForce RTX 3080, ("highgui/video/big_buck_bunny.mjpg.avi", 0))
[  FAILED  ] CUDA_Codec/CheckKeyFrame.Reader/0, where GetParam() = (NVIDIA GeForce RTX 3080, "highgui/video/big_buck_bunny.mp4")
[  FAILED  ] CUDA_Codec/CheckKeyFrame.Reader/1, where GetParam() = (NVIDIA GeForce RTX 3080, "cv/video/768x576.avi")
[  FAILED  ] CUDA_Codec/CheckKeyFrame.Reader/2, where GetParam() = (NVIDIA GeForce RTX 3080, "cv/video/1920x1080.avi")
[  FAILED  ] CUDA_Codec/CheckKeyFrame.Reader/3, where GetParam() = (NVIDIA GeForce RTX 3080, "highgui/video/big_buck_bunny.avi")
[  FAILED  ] CUDA_Codec/CheckKeyFrame.Reader/4, where GetParam() = (NVIDIA GeForce RTX 3080, "highgui/video/big_buck_bunny.h264")
[  FAILED  ] CUDA_Codec/CheckKeyFrame.Reader/5, where GetParam() = (NVIDIA GeForce RTX 3080, "highgui/video/big_buck_bunny.h265")
[  FAILED  ] CUDA_Codec/CheckKeyFrame.Reader/6, where GetParam() = (NVIDIA GeForce RTX 3080, "highgui/video/big_buck_bunny.mpg")
[  FAILED  ] CUDA_Codec/Video.Reader/0, where GetParam() = (NVIDIA GeForce RTX 3080, "highgui/video/big_buck_bunny.mp4")
[  FAILED  ] CUDA_Codec/Video.Reader/1, where GetParam() = (NVIDIA GeForce RTX 3080, "cv/video/768x576.avi")
[  FAILED  ] CUDA_Codec/Video.Reader/2, where GetParam() = (NVIDIA GeForce RTX 3080, "cv/video/1920x1080.avi")
[  FAILED  ] CUDA_Codec/Video.Reader/3, where GetParam() = (NVIDIA GeForce RTX 3080, "highgui/video/big_buck_bunny.avi")
[  FAILED  ] CUDA_Codec/Video.Reader/4, where GetParam() = (NVIDIA GeForce RTX 3080, "highgui/video/big_buck_bunny.h264")
[  FAILED  ] CUDA_Codec/Video.Reader/5, where GetParam() = (NVIDIA GeForce RTX 3080, "highgui/video/big_buck_bunny.h265")
[  FAILED  ] CUDA_Codec/Video.Reader/6, where GetParam() = (NVIDIA GeForce RTX 3080, "highgui/video/big_buck_bunny.mpg")
[  FAILED  ] CUDA_Codec/VideoReadRaw.Reader/0, where GetParam() = (NVIDIA GeForce RTX 3080, "highgui/video/big_buck_bunny.h264")
[  FAILED  ] CUDA_Codec/VideoReadRaw.Reader/1, where GetParam() = (NVIDIA GeForce RTX 3080, "highgui/video/big_buck_bunny.h265")
[  FAILED  ] CUDA_Codec/CheckParams.Reader/0, where GetParam() = NVIDIA GeForce RTX 3080
[  FAILED  ] CUDA_Codec/CheckDecodeSurfaces.Reader/0, where GetParam() = (NVIDIA GeForce RTX 3080, "highgui/video/big_buck_bunny.mp4")
[  FAILED  ] CUDA_Codec/CheckInitParams.Reader/0, where GetParam() = (NVIDIA GeForce RTX 3080, "highgui/video/big_buck_bunny.mp4", true, true, true)
[  FAILED  ] CUDA_Codec/CheckInitParams.Reader/1, where GetParam() = (NVIDIA GeForce RTX 3080, "highgui/video/big_buck_bunny.mp4", true, true, false)
[  FAILED  ] CUDA_Codec/CheckInitParams.Reader/2, where GetParam() = (NVIDIA GeForce RTX 3080, "highgui/video/big_buck_bunny.mp4", true, false, true)
[  FAILED  ] CUDA_Codec/CheckInitParams.Reader/3, where GetParam() = (NVIDIA GeForce RTX 3080, "highgui/video/big_buck_bunny.mp4", true, false, false)
[  FAILED  ] CUDA_Codec/CheckInitParams.Reader/4, where GetParam() = (NVIDIA GeForce RTX 3080, "highgui/video/big_buck_bunny.mp4", false, true, true)
[  FAILED  ] CUDA_Codec/CheckInitParams.Reader/5, where GetParam() = (NVIDIA GeForce RTX 3080, "highgui/video/big_buck_bunny.mp4", false, true, false)
[  FAILED  ] CUDA_Codec/CheckInitParams.Reader/6, where GetParam() = (NVIDIA GeForce RTX 3080, "highgui/video/big_buck_bunny.mp4", false, false, true)
[  FAILED  ] CUDA_Codec/CheckInitParams.Reader/7, where GetParam() = (NVIDIA GeForce RTX 3080, "highgui/video/big_buck_bunny.mp4", false, false, false)

30 FAILED TESTS
opencv_test_cudafeatures2d
[----------] Global test environment tear-down
[==========] 256 tests from 3 test cases ran. (283 ms total)
[  PASSED  ] 224 tests.
[  FAILED  ] 32 tests, listed below:
[  FAILED  ] CUDA_Features2D/FAST.Accuracy/0, where GetParam() = (NVIDIA GeForce RTX 3080, FAST_Threshold(25), FAST_NonmaxSuppression(false))
[  FAILED  ] CUDA_Features2D/FAST.Accuracy/1, where GetParam() = (NVIDIA GeForce RTX 3080, FAST_Threshold(25), FAST_NonmaxSuppression(true))
[  FAILED  ] CUDA_Features2D/FAST.Accuracy/2, where GetParam() = (NVIDIA GeForce RTX 3080, FAST_Threshold(50), FAST_NonmaxSuppression(false))
[  FAILED  ] CUDA_Features2D/FAST.Accuracy/3, where GetParam() = (NVIDIA GeForce RTX 3080, FAST_Threshold(50), FAST_NonmaxSuppression(true))
[  FAILED  ] CUDA_Features2D/FAST.Async/0, where GetParam() = (NVIDIA GeForce RTX 3080, FAST_Threshold(25), FAST_NonmaxSuppression(false))
[  FAILED  ] CUDA_Features2D/FAST.Async/1, where GetParam() = (NVIDIA GeForce RTX 3080, FAST_Threshold(25), FAST_NonmaxSuppression(true))
[  FAILED  ] CUDA_Features2D/FAST.Async/2, where GetParam() = (NVIDIA GeForce RTX 3080, FAST_Threshold(50), FAST_NonmaxSuppression(false))
[  FAILED  ] CUDA_Features2D/FAST.Async/3, where GetParam() = (NVIDIA GeForce RTX 3080, FAST_Threshold(50), FAST_NonmaxSuppression(true))
[  FAILED  ] CUDA_Features2D/ORB.Accuracy/0, where GetParam() = (NVIDIA GeForce RTX 3080, ORB_FeaturesCount(1000), ORB_ScaleFactor(1.2), ORB_LevelsCount(4), ORB_EdgeThreshold(31), ORB_firstLevel(0), ORB_WTA_K(2), 0, ORB_PatchSize(31), ORB_BlurForDescriptor(false))
[  FAILED  ] CUDA_Features2D/ORB.Accuracy/1, where GetParam() = (NVIDIA GeForce RTX 3080, ORB_FeaturesCount(1000), ORB_ScaleFactor(1.2), ORB_LevelsCount(4), ORB_EdgeThreshold(31), ORB_firstLevel(0), ORB_WTA_K(2), 0, ORB_PatchSize(31), ORB_BlurForDescriptor(true))
[  FAILED  ] CUDA_Features2D/ORB.Accuracy/2, where GetParam() = (NVIDIA GeForce RTX 3080, ORB_FeaturesCount(1000), ORB_ScaleFactor(1.2), ORB_LevelsCount(4), ORB_EdgeThreshold(31), ORB_firstLevel(0), ORB_WTA_K(2), 0, ORB_PatchSize(29), ORB_BlurForDescriptor(false))
[  FAILED  ] CUDA_Features2D/ORB.Accuracy/3, where GetParam() = (NVIDIA GeForce RTX 3080, ORB_FeaturesCount(1000), ORB_ScaleFactor(1.2), ORB_LevelsCount(4), ORB_EdgeThreshold(31), ORB_firstLevel(0), ORB_WTA_K(2), 0, ORB_PatchSize(29), ORB_BlurForDescriptor(true))
[  FAILED  ] CUDA_Features2D/ORB.Accuracy/4, where GetParam() = (NVIDIA GeForce RTX 3080, ORB_FeaturesCount(1000), ORB_ScaleFactor(1.2), ORB_LevelsCount(4), ORB_EdgeThreshold(31), ORB_firstLevel(0), ORB_WTA_K(3), 0, ORB_PatchSize(31), ORB_BlurForDescriptor(false))
[  FAILED  ] CUDA_Features2D/ORB.Accuracy/5, where GetParam() = (NVIDIA GeForce RTX 3080, ORB_FeaturesCount(1000), ORB_ScaleFactor(1.2), ORB_LevelsCount(4), ORB_EdgeThreshold(31), ORB_firstLevel(0), ORB_WTA_K(3), 0, ORB_PatchSize(31), ORB_BlurForDescriptor(true))
[  FAILED  ] CUDA_Features2D/ORB.Accuracy/6, where GetParam() = (NVIDIA GeForce RTX 3080, ORB_FeaturesCount(1000), ORB_ScaleFactor(1.2), ORB_LevelsCount(4), ORB_EdgeThreshold(31), ORB_firstLevel(0), ORB_WTA_K(3), 0, ORB_PatchSize(29), ORB_BlurForDescriptor(false))
[  FAILED  ] CUDA_Features2D/ORB.Accuracy/7, where GetParam() = (NVIDIA GeForce RTX 3080, ORB_FeaturesCount(1000), ORB_ScaleFactor(1.2), ORB_LevelsCount(4), ORB_EdgeThreshold(31), ORB_firstLevel(0), ORB_WTA_K(3), 0, ORB_PatchSize(29), ORB_BlurForDescriptor(true))
[  FAILED  ] CUDA_Features2D/ORB.Accuracy/8, where GetParam() = (NVIDIA GeForce RTX 3080, ORB_FeaturesCount(1000), ORB_ScaleFactor(1.2), ORB_LevelsCount(4), ORB_EdgeThreshold(31), ORB_firstLevel(0), ORB_WTA_K(4), 0, ORB_PatchSize(31), ORB_BlurForDescriptor(false))
[  FAILED  ] CUDA_Features2D/ORB.Accuracy/9, where GetParam() = (NVIDIA GeForce RTX 3080, ORB_FeaturesCount(1000), ORB_ScaleFactor(1.2), ORB_LevelsCount(4), ORB_EdgeThreshold(31), ORB_firstLevel(0), ORB_WTA_K(4), 0, ORB_PatchSize(31), ORB_BlurForDescriptor(true))
[  FAILED  ] CUDA_Features2D/ORB.Accuracy/10, where GetParam() = (NVIDIA GeForce RTX 3080, ORB_FeaturesCount(1000), ORB_ScaleFactor(1.2), ORB_LevelsCount(4), ORB_EdgeThreshold(31), ORB_firstLevel(0), ORB_WTA_K(4), 0, ORB_PatchSize(29), ORB_BlurForDescriptor(false))
[  FAILED  ] CUDA_Features2D/ORB.Accuracy/11, where GetParam() = (NVIDIA GeForce RTX 3080, ORB_FeaturesCount(1000), ORB_ScaleFactor(1.2), ORB_LevelsCount(4), ORB_EdgeThreshold(31), ORB_firstLevel(0), ORB_WTA_K(4), 0, ORB_PatchSize(29), ORB_BlurForDescriptor(true))
[  FAILED  ] CUDA_Features2D/ORB.Accuracy/12, where GetParam() = (NVIDIA GeForce RTX 3080, ORB_FeaturesCount(1000), ORB_ScaleFactor(1.2), ORB_LevelsCount(8), ORB_EdgeThreshold(31), ORB_firstLevel(0), ORB_WTA_K(2), 0, ORB_PatchSize(31), ORB_BlurForDescriptor(false))
[  FAILED  ] CUDA_Features2D/ORB.Accuracy/13, where GetParam() = (NVIDIA GeForce RTX 3080, ORB_FeaturesCount(1000), ORB_ScaleFactor(1.2), ORB_LevelsCount(8), ORB_EdgeThreshold(31), ORB_firstLevel(0), ORB_WTA_K(2), 0, ORB_PatchSize(31), ORB_BlurForDescriptor(true))
[  FAILED  ] CUDA_Features2D/ORB.Accuracy/14, where GetParam() = (NVIDIA GeForce RTX 3080, ORB_FeaturesCount(1000), ORB_ScaleFactor(1.2), ORB_LevelsCount(8), ORB_EdgeThreshold(31), ORB_firstLevel(0), ORB_WTA_K(2), 0, ORB_PatchSize(29), ORB_BlurForDescriptor(false))
[  FAILED  ] CUDA_Features2D/ORB.Accuracy/15, where GetParam() = (NVIDIA GeForce RTX 3080, ORB_FeaturesCount(1000), ORB_ScaleFactor(1.2), ORB_LevelsCount(8), ORB_EdgeThreshold(31), ORB_firstLevel(0), ORB_WTA_K(2), 0, ORB_PatchSize(29), ORB_BlurForDescriptor(true))
[  FAILED  ] CUDA_Features2D/ORB.Accuracy/16, where GetParam() = (NVIDIA GeForce RTX 3080, ORB_FeaturesCount(1000), ORB_ScaleFactor(1.2), ORB_LevelsCount(8), ORB_EdgeThreshold(31), ORB_firstLevel(0), ORB_WTA_K(3), 0, ORB_PatchSize(31), ORB_BlurForDescriptor(false))
[  FAILED  ] CUDA_Features2D/ORB.Accuracy/17, where GetParam() = (NVIDIA GeForce RTX 3080, ORB_FeaturesCount(1000), ORB_ScaleFactor(1.2), ORB_LevelsCount(8), ORB_EdgeThreshold(31), ORB_firstLevel(0), ORB_WTA_K(3), 0, ORB_PatchSize(31), ORB_BlurForDescriptor(true))
[  FAILED  ] CUDA_Features2D/ORB.Accuracy/18, where GetParam() = (NVIDIA GeForce RTX 3080, ORB_FeaturesCount(1000), ORB_ScaleFactor(1.2), ORB_LevelsCount(8), ORB_EdgeThreshold(31), ORB_firstLevel(0), ORB_WTA_K(3), 0, ORB_PatchSize(29), ORB_BlurForDescriptor(false))
[  FAILED  ] CUDA_Features2D/ORB.Accuracy/19, where GetParam() = (NVIDIA GeForce RTX 3080, ORB_FeaturesCount(1000), ORB_ScaleFactor(1.2), ORB_LevelsCount(8), ORB_EdgeThreshold(31), ORB_firstLevel(0), ORB_WTA_K(3), 0, ORB_PatchSize(29), ORB_BlurForDescriptor(true))
[  FAILED  ] CUDA_Features2D/ORB.Accuracy/20, where GetParam() = (NVIDIA GeForce RTX 3080, ORB_FeaturesCount(1000), ORB_ScaleFactor(1.2), ORB_LevelsCount(8), ORB_EdgeThreshold(31), ORB_firstLevel(0), ORB_WTA_K(4), 0, ORB_PatchSize(31), ORB_BlurForDescriptor(false))
[  FAILED  ] CUDA_Features2D/ORB.Accuracy/21, where GetParam() = (NVIDIA GeForce RTX 3080, ORB_FeaturesCount(1000), ORB_ScaleFactor(1.2), ORB_LevelsCount(8), ORB_EdgeThreshold(31), ORB_firstLevel(0), ORB_WTA_K(4), 0, ORB_PatchSize(31), ORB_BlurForDescriptor(true))
[  FAILED  ] CUDA_Features2D/ORB.Accuracy/22, where GetParam() = (NVIDIA GeForce RTX 3080, ORB_FeaturesCount(1000), ORB_ScaleFactor(1.2), ORB_LevelsCount(8), ORB_EdgeThreshold(31), ORB_firstLevel(0), ORB_WTA_K(4), 0, ORB_PatchSize(29), ORB_BlurForDescriptor(false))
[  FAILED  ] CUDA_Features2D/ORB.Accuracy/23, where GetParam() = (NVIDIA GeForce RTX 3080, ORB_FeaturesCount(1000), ORB_ScaleFactor(1.2), ORB_LevelsCount(8), ORB_EdgeThreshold(31), ORB_firstLevel(0), ORB_WTA_K(4), 0, ORB_PatchSize(29), ORB_BlurForDescriptor(true))

32 FAILED TESTS
opencv_test_cudaimgproc
[----------] Global test environment tear-down
[==========] 2948 tests from 28 test cases ran. (2110 ms total)
[  PASSED  ] 2741 tests.
[  FAILED  ] 207 tests, listed below:
[  FAILED  ] EqualizeHistIssue.Issue18035
[  FAILED  ] CUDA_ImgProc/Canny.Accuracy/0, where GetParam() = (NVIDIA GeForce RTX 3080, AppertureSize(3), L2gradient(false), whole matrix)
[  FAILED  ] CUDA_ImgProc/Canny.Accuracy/1, where GetParam() = (NVIDIA GeForce RTX 3080, AppertureSize(3), L2gradient(false), sub matrix)
[  FAILED  ] CUDA_ImgProc/Canny.Accuracy/2, where GetParam() = (NVIDIA GeForce RTX 3080, AppertureSize(3), L2gradient(true), whole matrix)
[  FAILED  ] CUDA_ImgProc/Canny.Accuracy/3, where GetParam() = (NVIDIA GeForce RTX 3080, AppertureSize(3), L2gradient(true), sub matrix)
[  FAILED  ] CUDA_ImgProc/Canny.Accuracy/4, where GetParam() = (NVIDIA GeForce RTX 3080, AppertureSize(5), L2gradient(false), whole matrix)
[  FAILED  ] CUDA_ImgProc/Canny.Accuracy/5, where GetParam() = (NVIDIA GeForce RTX 3080, AppertureSize(5), L2gradient(false), sub matrix)
[  FAILED  ] CUDA_ImgProc/Canny.Accuracy/6, where GetParam() = (NVIDIA GeForce RTX 3080, AppertureSize(5), L2gradient(true), whole matrix)
[  FAILED  ] CUDA_ImgProc/Canny.Accuracy/7, where GetParam() = (NVIDIA GeForce RTX 3080, AppertureSize(5), L2gradient(true), sub matrix)
[  FAILED  ] CUDA_ImgProc/Canny.Accuracy/8, where GetParam() = (NVIDIA GeForce RTX 3080, AppertureSize(7), L2gradient(false), whole matrix)
[  FAILED  ] CUDA_ImgProc/Canny.Accuracy/9, where GetParam() = (NVIDIA GeForce RTX 3080, AppertureSize(7), L2gradient(false), sub matrix)
[  FAILED  ] CUDA_ImgProc/Canny.Accuracy/10, where GetParam() = (NVIDIA GeForce RTX 3080, AppertureSize(7), L2gradient(true), whole matrix)
[  FAILED  ] CUDA_ImgProc/Canny.Accuracy/11, where GetParam() = (NVIDIA GeForce RTX 3080, AppertureSize(7), L2gradient(true), sub matrix)
[  FAILED  ] CUDA_ImgProc/Canny.Async/0, where GetParam() = (NVIDIA GeForce RTX 3080, AppertureSize(3), L2gradient(false), whole matrix)
[  FAILED  ] CUDA_ImgProc/Canny.Async/1, where GetParam() = (NVIDIA GeForce RTX 3080, AppertureSize(3), L2gradient(false), sub matrix)
[  FAILED  ] CUDA_ImgProc/Canny.Async/2, where GetParam() = (NVIDIA GeForce RTX 3080, AppertureSize(3), L2gradient(true), whole matrix)
[  FAILED  ] CUDA_ImgProc/Canny.Async/3, where GetParam() = (NVIDIA GeForce RTX 3080, AppertureSize(3), L2gradient(true), sub matrix)
[  FAILED  ] CUDA_ImgProc/Canny.Async/4, where GetParam() = (NVIDIA GeForce RTX 3080, AppertureSize(5), L2gradient(false), whole matrix)
[  FAILED  ] CUDA_ImgProc/Canny.Async/5, where GetParam() = (NVIDIA GeForce RTX 3080, AppertureSize(5), L2gradient(false), sub matrix)
[  FAILED  ] CUDA_ImgProc/Canny.Async/6, where GetParam() = (NVIDIA GeForce RTX 3080, AppertureSize(5), L2gradient(true), whole matrix)
[  FAILED  ] CUDA_ImgProc/Canny.Async/7, where GetParam() = (NVIDIA GeForce RTX 3080, AppertureSize(5), L2gradient(true), sub matrix)
[  FAILED  ] CUDA_ImgProc/Canny.Async/8, where GetParam() = (NVIDIA GeForce RTX 3080, AppertureSize(7), L2gradient(false), whole matrix)
[  FAILED  ] CUDA_ImgProc/Canny.Async/9, where GetParam() = (NVIDIA GeForce RTX 3080, AppertureSize(7), L2gradient(false), sub matrix)
[  FAILED  ] CUDA_ImgProc/Canny.Async/10, where GetParam() = (NVIDIA GeForce RTX 3080, AppertureSize(7), L2gradient(true), whole matrix)
[  FAILED  ] CUDA_ImgProc/Canny.Async/11, where GetParam() = (NVIDIA GeForce RTX 3080, AppertureSize(7), L2gradient(true), sub matrix)
[  FAILED  ] CUDA_ImgProc/Demosaicing.BayerBG2BGR/0, where GetParam() = NVIDIA GeForce RTX 3080
[  FAILED  ] CUDA_ImgProc/Demosaicing.BayerGB2BGR/0, where GetParam() = NVIDIA GeForce RTX 3080
[  FAILED  ] CUDA_ImgProc/Demosaicing.BayerRG2BGR/0, where GetParam() = NVIDIA GeForce RTX 3080
[  FAILED  ] CUDA_ImgProc/Demosaicing.BayerGR2BGR/0, where GetParam() = NVIDIA GeForce RTX 3080
[  FAILED  ] CUDA_ImgProc/Demosaicing.BayerBG2BGR_MHT/0, where GetParam() = NVIDIA GeForce RTX 3080
[  FAILED  ] CUDA_ImgProc/Demosaicing.BayerGB2BGR_MHT/0, where GetParam() = NVIDIA GeForce RTX 3080
[  FAILED  ] CUDA_ImgProc/Demosaicing.BayerRG2BGR_MHT/0, where GetParam() = NVIDIA GeForce RTX 3080
[  FAILED  ] CUDA_ImgProc/Demosaicing.BayerGR2BGR_MHT/0, where GetParam() = NVIDIA GeForce RTX 3080
[  FAILED  ] CUDA_ImgProc/SwapChannels.Accuracy/0, where GetParam() = (NVIDIA GeForce RTX 3080, 128x128, whole matrix)
[  FAILED  ] CUDA_ImgProc/SwapChannels.Accuracy/1, where GetParam() = (NVIDIA GeForce RTX 3080, 128x128, sub matrix)
[  FAILED  ] CUDA_ImgProc/SwapChannels.Accuracy/2, where GetParam() = (NVIDIA GeForce RTX 3080, 113x113, whole matrix)
[  FAILED  ] CUDA_ImgProc/SwapChannels.Accuracy/3, where GetParam() = (NVIDIA GeForce RTX 3080, 113x113, sub matrix)
[  FAILED  ] CUDA_ImgProc/ConnectedComponents.Concentric_Circles/0, where GetParam() = (NVIDIA GeForce RTX 3080, 8, 4, -1)
[  FAILED  ] CUDA_ImgProc/ConnectedComponents.Concentric_Circles/1, where GetParam() = (NVIDIA GeForce RTX 3080, 8, 4, 0)
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/0, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT101, BlockSize(3), ApertureSize(0))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/1, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT101, BlockSize(3), ApertureSize(3))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/2, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT101, BlockSize(3), ApertureSize(5))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/3, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT101, BlockSize(3), ApertureSize(7))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/4, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT101, BlockSize(5), ApertureSize(0))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/5, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT101, BlockSize(5), ApertureSize(3))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/6, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT101, BlockSize(5), ApertureSize(5))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/7, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT101, BlockSize(5), ApertureSize(7))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/8, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT101, BlockSize(7), ApertureSize(0))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/9, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT101, BlockSize(7), ApertureSize(3))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/10, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT101, BlockSize(7), ApertureSize(5))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/11, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT101, BlockSize(7), ApertureSize(7))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/12, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REPLICATE, BlockSize(3), ApertureSize(0))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/13, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REPLICATE, BlockSize(3), ApertureSize(3))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/14, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REPLICATE, BlockSize(3), ApertureSize(5))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/15, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REPLICATE, BlockSize(3), ApertureSize(7))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/16, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REPLICATE, BlockSize(5), ApertureSize(0))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/17, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REPLICATE, BlockSize(5), ApertureSize(3))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/18, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REPLICATE, BlockSize(5), ApertureSize(5))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/19, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REPLICATE, BlockSize(5), ApertureSize(7))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/20, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REPLICATE, BlockSize(7), ApertureSize(0))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/21, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REPLICATE, BlockSize(7), ApertureSize(3))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/22, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REPLICATE, BlockSize(7), ApertureSize(5))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/23, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REPLICATE, BlockSize(7), ApertureSize(7))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/24, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT, BlockSize(3), ApertureSize(0))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/25, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT, BlockSize(3), ApertureSize(3))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/26, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT, BlockSize(3), ApertureSize(5))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/27, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT, BlockSize(3), ApertureSize(7))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/28, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT, BlockSize(5), ApertureSize(0))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/29, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT, BlockSize(5), ApertureSize(3))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/30, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT, BlockSize(5), ApertureSize(5))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/31, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT, BlockSize(5), ApertureSize(7))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/32, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT, BlockSize(7), ApertureSize(0))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/33, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT, BlockSize(7), ApertureSize(3))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/34, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT, BlockSize(7), ApertureSize(5))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/35, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT, BlockSize(7), ApertureSize(7))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/36, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT101, BlockSize(3), ApertureSize(0))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/37, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT101, BlockSize(3), ApertureSize(3))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/38, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT101, BlockSize(3), ApertureSize(5))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/39, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT101, BlockSize(3), ApertureSize(7))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/40, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT101, BlockSize(5), ApertureSize(0))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/41, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT101, BlockSize(5), ApertureSize(3))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/42, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT101, BlockSize(5), ApertureSize(5))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/43, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT101, BlockSize(5), ApertureSize(7))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/44, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT101, BlockSize(7), ApertureSize(0))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/45, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT101, BlockSize(7), ApertureSize(3))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/46, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT101, BlockSize(7), ApertureSize(5))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/47, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT101, BlockSize(7), ApertureSize(7))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/48, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REPLICATE, BlockSize(3), ApertureSize(0))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/49, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REPLICATE, BlockSize(3), ApertureSize(3))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/50, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REPLICATE, BlockSize(3), ApertureSize(5))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/51, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REPLICATE, BlockSize(3), ApertureSize(7))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/52, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REPLICATE, BlockSize(5), ApertureSize(0))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/53, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REPLICATE, BlockSize(5), ApertureSize(3))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/54, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REPLICATE, BlockSize(5), ApertureSize(5))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/55, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REPLICATE, BlockSize(5), ApertureSize(7))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/56, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REPLICATE, BlockSize(7), ApertureSize(0))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/57, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REPLICATE, BlockSize(7), ApertureSize(3))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/58, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REPLICATE, BlockSize(7), ApertureSize(5))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/59, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REPLICATE, BlockSize(7), ApertureSize(7))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/60, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT, BlockSize(3), ApertureSize(0))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/61, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT, BlockSize(3), ApertureSize(3))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/62, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT, BlockSize(3), ApertureSize(5))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/63, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT, BlockSize(3), ApertureSize(7))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/64, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT, BlockSize(5), ApertureSize(0))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/65, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT, BlockSize(5), ApertureSize(3))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/66, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT, BlockSize(5), ApertureSize(5))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/67, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT, BlockSize(5), ApertureSize(7))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/68, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT, BlockSize(7), ApertureSize(0))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/69, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT, BlockSize(7), ApertureSize(3))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/70, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT, BlockSize(7), ApertureSize(5))
[  FAILED  ] CUDA_ImgProc/CornerHarris.Accuracy/71, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT, BlockSize(7), ApertureSize(7))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/0, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT101, BlockSize(3), ApertureSize(0))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/1, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT101, BlockSize(3), ApertureSize(3))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/2, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT101, BlockSize(3), ApertureSize(5))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/3, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT101, BlockSize(3), ApertureSize(7))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/4, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT101, BlockSize(5), ApertureSize(0))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/5, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT101, BlockSize(5), ApertureSize(3))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/6, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT101, BlockSize(5), ApertureSize(5))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/7, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT101, BlockSize(5), ApertureSize(7))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/8, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT101, BlockSize(7), ApertureSize(0))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/9, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT101, BlockSize(7), ApertureSize(3))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/10, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT101, BlockSize(7), ApertureSize(5))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/11, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT101, BlockSize(7), ApertureSize(7))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/12, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REPLICATE, BlockSize(3), ApertureSize(0))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/13, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REPLICATE, BlockSize(3), ApertureSize(3))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/14, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REPLICATE, BlockSize(3), ApertureSize(5))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/15, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REPLICATE, BlockSize(3), ApertureSize(7))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/16, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REPLICATE, BlockSize(5), ApertureSize(0))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/17, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REPLICATE, BlockSize(5), ApertureSize(3))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/18, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REPLICATE, BlockSize(5), ApertureSize(5))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/19, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REPLICATE, BlockSize(5), ApertureSize(7))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/20, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REPLICATE, BlockSize(7), ApertureSize(0))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/21, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REPLICATE, BlockSize(7), ApertureSize(3))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/22, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REPLICATE, BlockSize(7), ApertureSize(5))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/23, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REPLICATE, BlockSize(7), ApertureSize(7))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/24, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT, BlockSize(3), ApertureSize(0))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/25, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT, BlockSize(3), ApertureSize(3))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/26, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT, BlockSize(3), ApertureSize(5))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/27, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT, BlockSize(3), ApertureSize(7))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/28, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT, BlockSize(5), ApertureSize(0))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/29, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT, BlockSize(5), ApertureSize(3))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/30, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT, BlockSize(5), ApertureSize(5))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/31, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT, BlockSize(5), ApertureSize(7))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/32, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT, BlockSize(7), ApertureSize(0))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/33, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT, BlockSize(7), ApertureSize(3))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/34, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT, BlockSize(7), ApertureSize(5))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/35, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, BORDER_REFLECT, BlockSize(7), ApertureSize(7))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/36, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT101, BlockSize(3), ApertureSize(0))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/37, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT101, BlockSize(3), ApertureSize(3))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/38, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT101, BlockSize(3), ApertureSize(5))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/39, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT101, BlockSize(3), ApertureSize(7))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/40, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT101, BlockSize(5), ApertureSize(0))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/41, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT101, BlockSize(5), ApertureSize(3))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/42, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT101, BlockSize(5), ApertureSize(5))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/43, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT101, BlockSize(5), ApertureSize(7))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/44, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT101, BlockSize(7), ApertureSize(0))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/45, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT101, BlockSize(7), ApertureSize(3))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/46, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT101, BlockSize(7), ApertureSize(5))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/47, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT101, BlockSize(7), ApertureSize(7))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/48, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REPLICATE, BlockSize(3), ApertureSize(0))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/49, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REPLICATE, BlockSize(3), ApertureSize(3))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/50, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REPLICATE, BlockSize(3), ApertureSize(5))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/51, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REPLICATE, BlockSize(3), ApertureSize(7))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/52, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REPLICATE, BlockSize(5), ApertureSize(0))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/53, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REPLICATE, BlockSize(5), ApertureSize(3))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/54, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REPLICATE, BlockSize(5), ApertureSize(5))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/55, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REPLICATE, BlockSize(5), ApertureSize(7))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/56, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REPLICATE, BlockSize(7), ApertureSize(0))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/57, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REPLICATE, BlockSize(7), ApertureSize(3))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/58, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REPLICATE, BlockSize(7), ApertureSize(5))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/59, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REPLICATE, BlockSize(7), ApertureSize(7))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/60, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT, BlockSize(3), ApertureSize(0))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/61, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT, BlockSize(3), ApertureSize(3))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/62, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT, BlockSize(3), ApertureSize(5))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/63, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT, BlockSize(3), ApertureSize(7))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/64, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT, BlockSize(5), ApertureSize(0))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/65, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT, BlockSize(5), ApertureSize(3))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/66, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT, BlockSize(5), ApertureSize(5))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/67, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT, BlockSize(5), ApertureSize(7))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/68, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT, BlockSize(7), ApertureSize(0))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/69, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT, BlockSize(7), ApertureSize(3))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/70, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT, BlockSize(7), ApertureSize(5))
[  FAILED  ] CUDA_ImgProc/CornerMinEigen.Accuracy/71, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, BORDER_REFLECT, BlockSize(7), ApertureSize(7))
[  FAILED  ] CUDA_ImgProc/GoodFeaturesToTrack.Accuracy/0, where GetParam() = (NVIDIA GeForce RTX 3080, MinDistance(0))
[  FAILED  ] CUDA_ImgProc/GoodFeaturesToTrack.Accuracy/1, where GetParam() = (NVIDIA GeForce RTX 3080, MinDistance(3))
[  FAILED  ] CUDA_ImgProc/EqualizeHistExtreme.Case2/7, where GetParam() = (NVIDIA GeForce RTX 3080, 128x128, 7)
[  FAILED  ] CUDA_ImgProc/EqualizeHistExtreme.Case2/27, where GetParam() = (NVIDIA GeForce RTX 3080, 128x128, 27)
[  FAILED  ] CUDA_ImgProc/EqualizeHistExtreme.Case2/412, where GetParam() = (NVIDIA GeForce RTX 3080, 113x113, 156)
[  FAILED  ] CUDA_ImgProc/HoughLinesProbabilistic.Accuracy/1, where GetParam() = (NVIDIA GeForce RTX 3080, 128x128, sub matrix)
[  FAILED  ] CUDA_ImgProc/HoughLinesProbabilistic.Accuracy/2, where GetParam() = (NVIDIA GeForce RTX 3080, 113x113, whole matrix)
[  FAILED  ] CUDA_ImgProc/HoughLinesProbabilistic.Accuracy/3, where GetParam() = (NVIDIA GeForce RTX 3080, 113x113, sub matrix)
[  FAILED  ] CUDA_ImgProc/HoughCircles.Accuracy/0, where GetParam() = (NVIDIA GeForce RTX 3080, 128x128, whole matrix)
[  FAILED  ] CUDA_ImgProc/GeneralizedHough.Ballard/0, where GetParam() = (NVIDIA GeForce RTX 3080, whole matrix)
[  FAILED  ] CUDA_ImgProc/GeneralizedHough.Ballard/1, where GetParam() = (NVIDIA GeForce RTX 3080, sub matrix)
[  FAILED  ] CUDA_ImgProc/MatchTemplateBlackSource.Accuracy/0, where GetParam() = (NVIDIA GeForce RTX 3080, cv::TM_CCOEFF_NORMED)
[  FAILED  ] CUDA_ImgProc/MatchTemplateBlackSource.Accuracy/1, where GetParam() = (NVIDIA GeForce RTX 3080, cv::TM_CCORR_NORMED)
[  FAILED  ] CUDA_ImgProc/MatchTemplate_CCOEF_NORMED.Accuracy/0, where GetParam() = (NVIDIA GeForce RTX 3080, ("matchtemplate/source-0.png", "matchtemplate/target-0.png"))
[  FAILED  ] CUDA_ImgProc/MatchTemplate_CanFindBigTemplate.SQDIFF_NORMED/0, where GetParam() = NVIDIA GeForce RTX 3080
[  FAILED  ] CUDA_ImgProc/MatchTemplate_CanFindBigTemplate.SQDIFF/0, where GetParam() = NVIDIA GeForce RTX 3080
[  FAILED  ] CUDA_ImgProc/MeanShift.Filtering/0, where GetParam() = NVIDIA GeForce RTX 3080
[  FAILED  ] CUDA_ImgProc/MeanShift.Proc/0, where GetParam() = NVIDIA GeForce RTX 3080
[  FAILED  ] CUDA_ImgProc/MeanShiftSegmentation.Regression/0, where GetParam() = (NVIDIA GeForce RTX 3080, MinSize(0))
[  FAILED  ] CUDA_ImgProc/MeanShiftSegmentation.Regression/1, where GetParam() = (NVIDIA GeForce RTX 3080, MinSize(4))
[  FAILED  ] CUDA_ImgProc/MeanShiftSegmentation.Regression/2, where GetParam() = (NVIDIA GeForce RTX 3080, MinSize(20))
[  FAILED  ] CUDA_ImgProc/MeanShiftSegmentation.Regression/3, where GetParam() = (NVIDIA GeForce RTX 3080, MinSize(84))
[  FAILED  ] CUDA_ImgProc/MeanShiftSegmentation.Regression/4, where GetParam() = (NVIDIA GeForce RTX 3080, MinSize(340))
[  FAILED  ] CUDA_ImgProc/MeanShiftSegmentation.Regression/5, where GetParam() = (NVIDIA GeForce RTX 3080, MinSize(1364))

207 FAILED TESTS
opencv_test_cudalegacy
[----------] Global test environment tear-down
[==========] 14 tests from 5 test cases ran. (3952 ms total)
[  PASSED  ] 12 tests.
[  FAILED  ] 2 tests, listed below:
[  FAILED  ] CUDA_Legacy/NCV.HaarCascadeLoader/0, where GetParam() = NVIDIA GeForce RTX 3080
[  FAILED  ] CUDA_Legacy/NCV.HaarCascadeApplication/0, where GetParam() = NVIDIA GeForce RTX 3080

 2 FAILED TESTS
  YOU HAVE 1 DISABLED TEST
opencv_test_cudaobjdetect
[----------] Global test environment tear-down
[==========] 11 tests from 5 test cases ran. (54 ms total)
[  PASSED  ] 0 tests.
[  FAILED  ] 11 tests, listed below:
[  FAILED  ] detect/CalTech.HOG/0, where GetParam() = (NVIDIA GeForce RTX 3080, "caltech/image_00000009_0.png")
[  FAILED  ] detect/CalTech.HOG/1, where GetParam() = (NVIDIA GeForce RTX 3080, "caltech/image_00000032_0.png")
[  FAILED  ] detect/CalTech.HOG/2, where GetParam() = (NVIDIA GeForce RTX 3080, "caltech/image_00000165_0.png")
[  FAILED  ] detect/CalTech.HOG/3, where GetParam() = (NVIDIA GeForce RTX 3080, "caltech/image_00000261_0.png")
[  FAILED  ] detect/CalTech.HOG/4, where GetParam() = (NVIDIA GeForce RTX 3080, "caltech/image_00000469_0.png")
[  FAILED  ] detect/CalTech.HOG/5, where GetParam() = (NVIDIA GeForce RTX 3080, "caltech/image_00000527_0.png")
[  FAILED  ] detect/CalTech.HOG/6, where GetParam() = (NVIDIA GeForce RTX 3080, "caltech/image_00000574_0.png")
[  FAILED  ] detect/Hog_var.HOG/0, where GetParam() = (NVIDIA GeForce RTX 3080, "/hog/road.png")
[  FAILED  ] detect/Hog_var_cell.HOG/0, where GetParam() = (NVIDIA GeForce RTX 3080, "/hog/road.png")
[  FAILED  ] CUDA_ObjDetect/LBP_Read_classifier.Accuracy/0, where GetParam() = (NVIDIA GeForce RTX 3080, 0)
[  FAILED  ] CUDA_ObjDetect/LBP_classify.Accuracy/0, where GetParam() = (NVIDIA GeForce RTX 3080, 0)

11 FAILED TESTS
opencv_test_cudaoptflow
[----------] Global test environment tear-down
[==========] 46 tests from 6 test cases ran. (62 ms total)
[  PASSED  ] 0 tests.
[  FAILED  ] 46 tests, listed below:
[  FAILED  ] CUDA_OptFlow/BroxOpticalFlow.Regression/0, where GetParam() = NVIDIA GeForce RTX 3080
[  FAILED  ] CUDA_OptFlow/BroxOpticalFlow.OpticalFlowNan/0, where GetParam() = NVIDIA GeForce RTX 3080
[  FAILED  ] CUDA_OptFlow/PyrLKOpticalFlow.Sparse/0, where GetParam() = (NVIDIA GeForce RTX 3080, Chan(1), DataType(0))
[  FAILED  ] CUDA_OptFlow/PyrLKOpticalFlow.Sparse/1, where GetParam() = (NVIDIA GeForce RTX 3080, Chan(1), DataType(2))
[  FAILED  ] CUDA_OptFlow/PyrLKOpticalFlow.Sparse/2, where GetParam() = (NVIDIA GeForce RTX 3080, Chan(1), DataType(4))
[  FAILED  ] CUDA_OptFlow/PyrLKOpticalFlow.Sparse/3, where GetParam() = (NVIDIA GeForce RTX 3080, Chan(1), DataType(5))
[  FAILED  ] CUDA_OptFlow/PyrLKOpticalFlow.Sparse/4, where GetParam() = (NVIDIA GeForce RTX 3080, Chan(3), DataType(0))
[  FAILED  ] CUDA_OptFlow/PyrLKOpticalFlow.Sparse/5, where GetParam() = (NVIDIA GeForce RTX 3080, Chan(3), DataType(2))
[  FAILED  ] CUDA_OptFlow/PyrLKOpticalFlow.Sparse/6, where GetParam() = (NVIDIA GeForce RTX 3080, Chan(3), DataType(4))
[  FAILED  ] CUDA_OptFlow/PyrLKOpticalFlow.Sparse/7, where GetParam() = (NVIDIA GeForce RTX 3080, Chan(3), DataType(5))
[  FAILED  ] CUDA_OptFlow/PyrLKOpticalFlow.Sparse/8, where GetParam() = (NVIDIA GeForce RTX 3080, Chan(4), DataType(0))
[  FAILED  ] CUDA_OptFlow/PyrLKOpticalFlow.Sparse/9, where GetParam() = (NVIDIA GeForce RTX 3080, Chan(4), DataType(2))
[  FAILED  ] CUDA_OptFlow/PyrLKOpticalFlow.Sparse/10, where GetParam() = (NVIDIA GeForce RTX 3080, Chan(4), DataType(4))
[  FAILED  ] CUDA_OptFlow/PyrLKOpticalFlow.Sparse/11, where GetParam() = (NVIDIA GeForce RTX 3080, Chan(4), DataType(5))
[  FAILED  ] CUDA_OptFlow/FarnebackOpticalFlow.Accuracy/0, where GetParam() = (NVIDIA GeForce RTX 3080, PyrScale(0.3), PolyN(5), 0, UseInitFlow(false))
[  FAILED  ] CUDA_OptFlow/FarnebackOpticalFlow.Accuracy/1, where GetParam() = (NVIDIA GeForce RTX 3080, PyrScale(0.3), PolyN(5), 0, UseInitFlow(true))
[  FAILED  ] CUDA_OptFlow/FarnebackOpticalFlow.Accuracy/2, where GetParam() = (NVIDIA GeForce RTX 3080, PyrScale(0.3), PolyN(5), 0|OPTFLOW_FARNEBACK_GAUSSIAN, UseInitFlow(false))
[  FAILED  ] CUDA_OptFlow/FarnebackOpticalFlow.Accuracy/3, where GetParam() = (NVIDIA GeForce RTX 3080, PyrScale(0.3), PolyN(5), 0|OPTFLOW_FARNEBACK_GAUSSIAN, UseInitFlow(true))
[  FAILED  ] CUDA_OptFlow/FarnebackOpticalFlow.Accuracy/4, where GetParam() = (NVIDIA GeForce RTX 3080, PyrScale(0.3), PolyN(7), 0, UseInitFlow(false))
[  FAILED  ] CUDA_OptFlow/FarnebackOpticalFlow.Accuracy/5, where GetParam() = (NVIDIA GeForce RTX 3080, PyrScale(0.3), PolyN(7), 0, UseInitFlow(true))
[  FAILED  ] CUDA_OptFlow/FarnebackOpticalFlow.Accuracy/6, where GetParam() = (NVIDIA GeForce RTX 3080, PyrScale(0.3), PolyN(7), 0|OPTFLOW_FARNEBACK_GAUSSIAN, UseInitFlow(false))
[  FAILED  ] CUDA_OptFlow/FarnebackOpticalFlow.Accuracy/7, where GetParam() = (NVIDIA GeForce RTX 3080, PyrScale(0.3), PolyN(7), 0|OPTFLOW_FARNEBACK_GAUSSIAN, UseInitFlow(true))
[  FAILED  ] CUDA_OptFlow/FarnebackOpticalFlow.Accuracy/8, where GetParam() = (NVIDIA GeForce RTX 3080, PyrScale(0.5), PolyN(5), 0, UseInitFlow(false))
[  FAILED  ] CUDA_OptFlow/FarnebackOpticalFlow.Accuracy/9, where GetParam() = (NVIDIA GeForce RTX 3080, PyrScale(0.5), PolyN(5), 0, UseInitFlow(true))
[  FAILED  ] CUDA_OptFlow/FarnebackOpticalFlow.Accuracy/10, where GetParam() = (NVIDIA GeForce RTX 3080, PyrScale(0.5), PolyN(5), 0|OPTFLOW_FARNEBACK_GAUSSIAN, UseInitFlow(false))
[  FAILED  ] CUDA_OptFlow/FarnebackOpticalFlow.Accuracy/11, where GetParam() = (NVIDIA GeForce RTX 3080, PyrScale(0.5), PolyN(5), 0|OPTFLOW_FARNEBACK_GAUSSIAN, UseInitFlow(true))
[  FAILED  ] CUDA_OptFlow/FarnebackOpticalFlow.Accuracy/12, where GetParam() = (NVIDIA GeForce RTX 3080, PyrScale(0.5), PolyN(7), 0, UseInitFlow(false))
[  FAILED  ] CUDA_OptFlow/FarnebackOpticalFlow.Accuracy/13, where GetParam() = (NVIDIA GeForce RTX 3080, PyrScale(0.5), PolyN(7), 0, UseInitFlow(true))
[  FAILED  ] CUDA_OptFlow/FarnebackOpticalFlow.Accuracy/14, where GetParam() = (NVIDIA GeForce RTX 3080, PyrScale(0.5), PolyN(7), 0|OPTFLOW_FARNEBACK_GAUSSIAN, UseInitFlow(false))
[  FAILED  ] CUDA_OptFlow/FarnebackOpticalFlow.Accuracy/15, where GetParam() = (NVIDIA GeForce RTX 3080, PyrScale(0.5), PolyN(7), 0|OPTFLOW_FARNEBACK_GAUSSIAN, UseInitFlow(true))
[  FAILED  ] CUDA_OptFlow/FarnebackOpticalFlow.Accuracy/16, where GetParam() = (NVIDIA GeForce RTX 3080, PyrScale(0.8), PolyN(5), 0, UseInitFlow(false))
[  FAILED  ] CUDA_OptFlow/FarnebackOpticalFlow.Accuracy/17, where GetParam() = (NVIDIA GeForce RTX 3080, PyrScale(0.8), PolyN(5), 0, UseInitFlow(true))
[  FAILED  ] CUDA_OptFlow/FarnebackOpticalFlow.Accuracy/18, where GetParam() = (NVIDIA GeForce RTX 3080, PyrScale(0.8), PolyN(5), 0|OPTFLOW_FARNEBACK_GAUSSIAN, UseInitFlow(false))
[  FAILED  ] CUDA_OptFlow/FarnebackOpticalFlow.Accuracy/19, where GetParam() = (NVIDIA GeForce RTX 3080, PyrScale(0.8), PolyN(5), 0|OPTFLOW_FARNEBACK_GAUSSIAN, UseInitFlow(true))
[  FAILED  ] CUDA_OptFlow/FarnebackOpticalFlow.Accuracy/20, where GetParam() = (NVIDIA GeForce RTX 3080, PyrScale(0.8), PolyN(7), 0, UseInitFlow(false))
[  FAILED  ] CUDA_OptFlow/FarnebackOpticalFlow.Accuracy/21, where GetParam() = (NVIDIA GeForce RTX 3080, PyrScale(0.8), PolyN(7), 0, UseInitFlow(true))
[  FAILED  ] CUDA_OptFlow/FarnebackOpticalFlow.Accuracy/22, where GetParam() = (NVIDIA GeForce RTX 3080, PyrScale(0.8), PolyN(7), 0|OPTFLOW_FARNEBACK_GAUSSIAN, UseInitFlow(false))
[  FAILED  ] CUDA_OptFlow/FarnebackOpticalFlow.Accuracy/23, where GetParam() = (NVIDIA GeForce RTX 3080, PyrScale(0.8), PolyN(7), 0|OPTFLOW_FARNEBACK_GAUSSIAN, UseInitFlow(true))
[  FAILED  ] CUDA_OptFlow/OpticalFlowDual_TVL1.Accuracy/0, where GetParam() = (NVIDIA GeForce RTX 3080, Gamma(0))
[  FAILED  ] CUDA_OptFlow/OpticalFlowDual_TVL1.Accuracy/1, where GetParam() = (NVIDIA GeForce RTX 3080, Gamma(1))
[  FAILED  ] CUDA_OptFlow/OpticalFlowDual_TVL1.Async/0, where GetParam() = (NVIDIA GeForce RTX 3080, Gamma(0))
[  FAILED  ] CUDA_OptFlow/OpticalFlowDual_TVL1.Async/1, where GetParam() = (NVIDIA GeForce RTX 3080, Gamma(1))
[  FAILED  ] CUDA_OptFlow/NvidiaOpticalFlow_1_0.Regression/0, where GetParam() = NVIDIA GeForce RTX 3080
[  FAILED  ] CUDA_OptFlow/NvidiaOpticalFlow_1_0.OpticalFlowNan/0, where GetParam() = NVIDIA GeForce RTX 3080
[  FAILED  ] CUDA_OptFlow/NvidiaOpticalFlow_2_0.Regression/0, where GetParam() = NVIDIA GeForce RTX 3080
[  FAILED  ] CUDA_OptFlow/NvidiaOpticalFlow_2_0.OpticalFlowNan/0, where GetParam() = NVIDIA GeForce RTX 3080

46 FAILED TESTS
opencv_test_cudastereo
[----------] Global test environment tear-down
[==========] 128 tests from 9 test cases ran. (940 ms total)
[  PASSED  ] 115 tests.
[  FAILED  ] 13 tests, listed below:
[  FAILED  ] CudaStereo_StereoSGM.regression
[  FAILED  ] CUDA_StereoSGM_funcs/StereoSGM_CensusTransformImage.Image/0, where GetParam() = (NVIDIA GeForce RTX 3080, "stereobm/aloe-L.png", whole matrix)
[  FAILED  ] CUDA_StereoSGM_funcs/StereoSGM_CensusTransformImage.Image/1, where GetParam() = (NVIDIA GeForce RTX 3080, "stereobm/aloe-L.png", sub matrix)
[  FAILED  ] CUDA_StereoSGM_funcs/StereoSGM_CensusTransformImage.Image/2, where GetParam() = (NVIDIA GeForce RTX 3080, "stereobm/aloe-R.png", whole matrix)
[  FAILED  ] CUDA_StereoSGM_funcs/StereoSGM_CensusTransformImage.Image/3, where GetParam() = (NVIDIA GeForce RTX 3080, "stereobm/aloe-R.png", sub matrix)
[  FAILED  ] CUDA_Stereo/StereoBM.Regression/0, where GetParam() = NVIDIA GeForce RTX 3080
[  FAILED  ] CUDA_Stereo/StereoBM.PrefilterXSobelRegression/0, where GetParam() = NVIDIA GeForce RTX 3080
[  FAILED  ] CUDA_Stereo/StereoBM.PrefilterNormRegression/0, where GetParam() = NVIDIA GeForce RTX 3080
[  FAILED  ] CUDA_Stereo/StereoBM.Streams/0, where GetParam() = NVIDIA GeForce RTX 3080
[  FAILED  ] CUDA_Stereo/StereoBM.Uniqueness_Regression/0, where GetParam() = NVIDIA GeForce RTX 3080
[  FAILED  ] CUDA_Stereo/StereoBeliefPropagation.Regression/0, where GetParam() = NVIDIA GeForce RTX 3080
[  FAILED  ] CUDA_Stereo/StereoConstantSpaceBP.Regression/0, where GetParam() = NVIDIA GeForce RTX 3080
[  FAILED  ] CUDA_Stereo/ReprojectImageTo3D.Accuracy/0, where GetParam() = (NVIDIA GeForce RTX 3080, 128x128, CV_8U, whole matrix)

13 FAILED TESTS
opencv_test_cudawarping
[----------] Global test environment tear-down
[==========] 2864 tests from 12 test cases ran. (6737 ms total)
[  PASSED  ] 2792 tests.
[  FAILED  ] 72 tests, listed below:
[  FAILED  ] CUDA_Warping/WarpAffineNPP.Accuracy/0, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, direct, INTER_NEAREST)
[  FAILED  ] CUDA_Warping/WarpAffineNPP.Accuracy/1, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, direct, INTER_LINEAR)
[  FAILED  ] CUDA_Warping/WarpAffineNPP.Accuracy/2, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, direct, INTER_CUBIC)
[  FAILED  ] CUDA_Warping/WarpAffineNPP.Accuracy/3, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, inverse, INTER_NEAREST)
[  FAILED  ] CUDA_Warping/WarpAffineNPP.Accuracy/4, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, inverse, INTER_LINEAR)
[  FAILED  ] CUDA_Warping/WarpAffineNPP.Accuracy/5, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, inverse, INTER_CUBIC)
[  FAILED  ] CUDA_Warping/WarpAffineNPP.Accuracy/6, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC3, direct, INTER_NEAREST)
[  FAILED  ] CUDA_Warping/WarpAffineNPP.Accuracy/7, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC3, direct, INTER_LINEAR)
[  FAILED  ] CUDA_Warping/WarpAffineNPP.Accuracy/8, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC3, direct, INTER_CUBIC)
[  FAILED  ] CUDA_Warping/WarpAffineNPP.Accuracy/9, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC3, inverse, INTER_NEAREST)
[  FAILED  ] CUDA_Warping/WarpAffineNPP.Accuracy/10, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC3, inverse, INTER_LINEAR)
[  FAILED  ] CUDA_Warping/WarpAffineNPP.Accuracy/11, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC3, inverse, INTER_CUBIC)
[  FAILED  ] CUDA_Warping/WarpAffineNPP.Accuracy/12, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC4, direct, INTER_NEAREST)
[  FAILED  ] CUDA_Warping/WarpAffineNPP.Accuracy/13, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC4, direct, INTER_LINEAR)
[  FAILED  ] CUDA_Warping/WarpAffineNPP.Accuracy/14, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC4, direct, INTER_CUBIC)
[  FAILED  ] CUDA_Warping/WarpAffineNPP.Accuracy/15, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC4, inverse, INTER_NEAREST)
[  FAILED  ] CUDA_Warping/WarpAffineNPP.Accuracy/16, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC4, inverse, INTER_LINEAR)
[  FAILED  ] CUDA_Warping/WarpAffineNPP.Accuracy/17, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC4, inverse, INTER_CUBIC)
[  FAILED  ] CUDA_Warping/WarpAffineNPP.Accuracy/18, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, direct, INTER_NEAREST)
[  FAILED  ] CUDA_Warping/WarpAffineNPP.Accuracy/19, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, direct, INTER_LINEAR)
[  FAILED  ] CUDA_Warping/WarpAffineNPP.Accuracy/20, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, direct, INTER_CUBIC)
[  FAILED  ] CUDA_Warping/WarpAffineNPP.Accuracy/21, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, inverse, INTER_NEAREST)
[  FAILED  ] CUDA_Warping/WarpAffineNPP.Accuracy/22, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, inverse, INTER_LINEAR)
[  FAILED  ] CUDA_Warping/WarpAffineNPP.Accuracy/23, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, inverse, INTER_CUBIC)
[  FAILED  ] CUDA_Warping/WarpAffineNPP.Accuracy/24, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC3, direct, INTER_NEAREST)
[  FAILED  ] CUDA_Warping/WarpAffineNPP.Accuracy/25, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC3, direct, INTER_LINEAR)
[  FAILED  ] CUDA_Warping/WarpAffineNPP.Accuracy/26, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC3, direct, INTER_CUBIC)
[  FAILED  ] CUDA_Warping/WarpAffineNPP.Accuracy/27, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC3, inverse, INTER_NEAREST)
[  FAILED  ] CUDA_Warping/WarpAffineNPP.Accuracy/28, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC3, inverse, INTER_LINEAR)
[  FAILED  ] CUDA_Warping/WarpAffineNPP.Accuracy/29, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC3, inverse, INTER_CUBIC)
[  FAILED  ] CUDA_Warping/WarpAffineNPP.Accuracy/30, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC4, direct, INTER_NEAREST)
[  FAILED  ] CUDA_Warping/WarpAffineNPP.Accuracy/31, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC4, direct, INTER_LINEAR)
[  FAILED  ] CUDA_Warping/WarpAffineNPP.Accuracy/32, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC4, direct, INTER_CUBIC)
[  FAILED  ] CUDA_Warping/WarpAffineNPP.Accuracy/33, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC4, inverse, INTER_NEAREST)
[  FAILED  ] CUDA_Warping/WarpAffineNPP.Accuracy/34, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC4, inverse, INTER_LINEAR)
[  FAILED  ] CUDA_Warping/WarpAffineNPP.Accuracy/35, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC4, inverse, INTER_CUBIC)
[  FAILED  ] CUDA_Warping/WarpPerspectiveNPP.Accuracy/0, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, direct, INTER_NEAREST)
[  FAILED  ] CUDA_Warping/WarpPerspectiveNPP.Accuracy/1, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, direct, INTER_LINEAR)
[  FAILED  ] CUDA_Warping/WarpPerspectiveNPP.Accuracy/2, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, direct, INTER_CUBIC)
[  FAILED  ] CUDA_Warping/WarpPerspectiveNPP.Accuracy/3, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, inverse, INTER_NEAREST)
[  FAILED  ] CUDA_Warping/WarpPerspectiveNPP.Accuracy/4, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, inverse, INTER_LINEAR)
[  FAILED  ] CUDA_Warping/WarpPerspectiveNPP.Accuracy/5, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC1, inverse, INTER_CUBIC)
[  FAILED  ] CUDA_Warping/WarpPerspectiveNPP.Accuracy/6, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC3, direct, INTER_NEAREST)
[  FAILED  ] CUDA_Warping/WarpPerspectiveNPP.Accuracy/7, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC3, direct, INTER_LINEAR)
[  FAILED  ] CUDA_Warping/WarpPerspectiveNPP.Accuracy/8, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC3, direct, INTER_CUBIC)
[  FAILED  ] CUDA_Warping/WarpPerspectiveNPP.Accuracy/9, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC3, inverse, INTER_NEAREST)
[  FAILED  ] CUDA_Warping/WarpPerspectiveNPP.Accuracy/10, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC3, inverse, INTER_LINEAR)
[  FAILED  ] CUDA_Warping/WarpPerspectiveNPP.Accuracy/11, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC3, inverse, INTER_CUBIC)
[  FAILED  ] CUDA_Warping/WarpPerspectiveNPP.Accuracy/12, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC4, direct, INTER_NEAREST)
[  FAILED  ] CUDA_Warping/WarpPerspectiveNPP.Accuracy/13, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC4, direct, INTER_LINEAR)
[  FAILED  ] CUDA_Warping/WarpPerspectiveNPP.Accuracy/14, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC4, direct, INTER_CUBIC)
[  FAILED  ] CUDA_Warping/WarpPerspectiveNPP.Accuracy/15, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC4, inverse, INTER_NEAREST)
[  FAILED  ] CUDA_Warping/WarpPerspectiveNPP.Accuracy/16, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC4, inverse, INTER_LINEAR)
[  FAILED  ] CUDA_Warping/WarpPerspectiveNPP.Accuracy/17, where GetParam() = (NVIDIA GeForce RTX 3080, 8UC4, inverse, INTER_CUBIC)
[  FAILED  ] CUDA_Warping/WarpPerspectiveNPP.Accuracy/18, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, direct, INTER_NEAREST)
[  FAILED  ] CUDA_Warping/WarpPerspectiveNPP.Accuracy/19, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, direct, INTER_LINEAR)
[  FAILED  ] CUDA_Warping/WarpPerspectiveNPP.Accuracy/20, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, direct, INTER_CUBIC)
[  FAILED  ] CUDA_Warping/WarpPerspectiveNPP.Accuracy/21, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, inverse, INTER_NEAREST)
[  FAILED  ] CUDA_Warping/WarpPerspectiveNPP.Accuracy/22, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, inverse, INTER_LINEAR)
[  FAILED  ] CUDA_Warping/WarpPerspectiveNPP.Accuracy/23, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC1, inverse, INTER_CUBIC)
[  FAILED  ] CUDA_Warping/WarpPerspectiveNPP.Accuracy/24, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC3, direct, INTER_NEAREST)
[  FAILED  ] CUDA_Warping/WarpPerspectiveNPP.Accuracy/25, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC3, direct, INTER_LINEAR)
[  FAILED  ] CUDA_Warping/WarpPerspectiveNPP.Accuracy/26, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC3, direct, INTER_CUBIC)
[  FAILED  ] CUDA_Warping/WarpPerspectiveNPP.Accuracy/27, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC3, inverse, INTER_NEAREST)
[  FAILED  ] CUDA_Warping/WarpPerspectiveNPP.Accuracy/28, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC3, inverse, INTER_LINEAR)
[  FAILED  ] CUDA_Warping/WarpPerspectiveNPP.Accuracy/29, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC3, inverse, INTER_CUBIC)
[  FAILED  ] CUDA_Warping/WarpPerspectiveNPP.Accuracy/30, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC4, direct, INTER_NEAREST)
[  FAILED  ] CUDA_Warping/WarpPerspectiveNPP.Accuracy/31, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC4, direct, INTER_LINEAR)
[  FAILED  ] CUDA_Warping/WarpPerspectiveNPP.Accuracy/32, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC4, direct, INTER_CUBIC)
[  FAILED  ] CUDA_Warping/WarpPerspectiveNPP.Accuracy/33, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC4, inverse, INTER_NEAREST)
[  FAILED  ] CUDA_Warping/WarpPerspectiveNPP.Accuracy/34, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC4, inverse, INTER_LINEAR)
[  FAILED  ] CUDA_Warping/WarpPerspectiveNPP.Accuracy/35, where GetParam() = (NVIDIA GeForce RTX 3080, 32FC4, inverse, INTER_CUBIC)

72 FAILED TESTS

@ThinkWD
Author

ThinkWD commented Oct 13, 2022

The encoding hasn't worked for a few years and if you want to push rtsp streams you probably want to use a different library to OpenCV.

I see in the build information that FFmpeg and GStreamer are available in Video I/O. Can I use them through OpenCV? Are there any examples or other references about this? Thank you for your help.

@cudawarped
Contributor

Just to confirm:

  1. I see above that you built the samples for the latest version of Nvidia's Video Codec SDK; did you also use the headers from it when you built OpenCV?

  2. Which libs did you link against, the stubs or the driver? You can check with

    cat CMakeVars.txt | grep CUDA_nvcuvid_LIBRARY

I see in the build information that FFmpeg and GStreamer are available in Video I/O. Can I use them through OpenCV?

Usage questions should be placed on the OpenCV forum. Video writing in OpenCV is performed using the cv::VideoWriter API, which can optionally use FFmpeg or GStreamer behind the scenes. As far as I know the FFmpeg backend will only write to files, but I would have thought you could construct a GStreamer pipeline to push streams over IP.
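
For example, a GStreamer pipeline string can be passed straight to cv::VideoWriter. The following is only a minimal sketch, assuming OpenCV was built with GStreamer support, that the appsrc, videoconvert, x264enc and rtspclientsink elements are installed, and that an RTSP server which accepts published streams (e.g. mediamtx) is already listening at the example URL; the pipeline string, URL and encoder settings are illustrative, not a tested recipe.

#include <opencv2/core.hpp>
#include <opencv2/videoio.hpp>
#include <string>

int main()
{
    // Illustrative pipeline: convert incoming BGR frames, encode them with x264
    // and publish the result to an already running RTSP server.
    const std::string pipeline =
        "appsrc ! videoconvert ! x264enc tune=zerolatency bitrate=2000 "
        "! rtspclientsink location=rtsp://127.0.0.1:8554/live";

    const double fps = 25.0;
    const cv::Size frameSize(1280, 720);

    // fourcc is conventionally passed as 0 when supplying a full pipeline string.
    cv::VideoWriter writer(pipeline, cv::CAP_GSTREAMER, 0, fps, frameSize, true);
    if (!writer.isOpened())
        return -1;

    cv::Mat frame(frameSize, CV_8UC3, cv::Scalar::all(0));
    for (int i = 0; i < 250; ++i)
        writer.write(frame); // replace the black frame with real frames

    return 0;
}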

@ThinkWD
Author

ThinkWD commented Oct 13, 2022

I see above that you built the samples for the latest version of Nvidia's Video Codec SDK; did you also use the headers from it when you built OpenCV?

Yes, I used Video_Codec_SDK_11.1.5 when I built OpenCV.

Which libs did you link against, the stubs or the driver

cat CMakeVars.txt | grep CUDA_nvcuvid_LIBRARY

output

CUDA_nvcuvid_LIBRARY=/usr/local/cuda-11.3/lib64/libnvcuvid.so

I copied this library from Video_Codec_SDK_11.1.5

Usage questions should be placed on the OpenCV forum.

I will ask for help there, thank you.

@cudawarped
Contributor

Do you have both CUDA toolkit 11.3 and 11.6 on your machine?

I'm wondering if OpenCV is trying to use the stub library at runtime. What is the output from

ldd bin/opencv_test_cudacodec | grep libnvcuvid

@ThinkWD
Author

ThinkWD commented Oct 13, 2022

Do you have both CUDA toolkit 11.3 and 11.6 on your machine?

No, I have two machines, one with CUDA toolkit 11.3 and the other with 11.6.

I'm wondering if OpenCV is trying to use the stub library at runtime. What is the output from ldd bin/opencv_test_cudacodec | grep libnvcuvid

libnvcuvid.so.1 => /usr/local/cuda-11.3/lib64/libnvcuvid.so.1 (0x00007f88a8b72000)

@cudawarped
Contributor

What happens if you remove the stub library libnvcuvid.so, which you copied across to /usr/local/cuda-11.3/lib64/, and then regenerate your build files? Does it pick up the driver's version of libnvcuvid?

If not, can you remove the stub library from the installation directory /usr/local/cuda-11.3/lib64/ and build by passing

-DCUDA_nvcuvid_LIBRARY=Video_Codec_SDK_11.1.5/Lib/linux/stubs/x86_64/libnvcuvid.so

to cmake to see if this fixes your issue.

@ThinkWD
Author

ThinkWD commented Oct 13, 2022

I removed the stub library libnvcuvid.so and regenerated the build files; now cat CMakeVars.txt | grep CUDA_nvcuvid_LIBRARY outputs:

CUDA_nvcuvid_LIBRARY=/usr/lib/x86_64-linux-gnu/libnvcuvid.so

@cudawarped
Contributor

Can you try building and see if the error remains, and whether the output of

ldd bin/opencv_test_cudacodec | grep libnvcuvid

has changed.

@ThinkWD
Author

ThinkWD commented Oct 13, 2022

Yes, I'm in the process of building it; it may take a while.

@ThinkWD
Author

ThinkWD commented Oct 13, 2022

I rebuilt; now ldd bin/opencv_test_cudacodec | grep libnvcuvid outputs:

libnvcuvid.so.1 => /lib/x86_64-linux-gnu/libnvcuvid.so.1 (0x00007f2fcb831000)

But when I run bin/opencv_test_cudaarithm I get the same error as before.

After that I tried to run video_reader.cpp without any more errors and everything seems to be fine.

@cudawarped
Contributor

But when I run bin/opencv_test_cudaarithm I get the same error as before.

Is it the same error, or just an error related to OPENCV_TEST_DATA_PATH not being set?

After that I tried to run video_reader.cpp without any more errors and everything seems to be fine.

That's great, I thought it was a long shot but it seems to have paid off!
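
For reference, the video_reader sample boils down to something like the following minimal sketch, assuming OpenCV was built with the cudacodec module and NVCUVID as discussed above; the input.mp4 path is a placeholder for any file the hardware decoder supports.

#include <opencv2/core.hpp>
#include <opencv2/core/cuda.hpp>
#include <opencv2/cudacodec.hpp>
#include <iostream>
#include <string>

int main()
{
    try
    {
        // Placeholder file name; any codec supported by NVDEC on your GPU will do.
        cv::Ptr<cv::cudacodec::VideoReader> reader =
            cv::cudacodec::createVideoReader(std::string("input.mp4"));

        cv::cuda::GpuMat frame;
        int frames = 0;
        while (reader->nextFrame(frame))
            ++frames; // the frame stays on the GPU; call frame.download(...) for CPU access

        std::cout << "decoded " << frames << " frames" << std::endl;
    }
    catch (const cv::Exception& e)
    {
        // e.g. CUDA_ERROR_FILE_NOT_FOUND if the path is wrong
        std::cerr << e.what() << std::endl;
        return -1;
    }
    return 0;
}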

@ThinkWD
Author

ThinkWD commented Oct 13, 2022

Is it the same error, or just an error related to OPENCV_TEST_DATA_PATH not being set?

It's the exact same error, but I'm not sure whether it was caused by a mistake on my part, so I think I should rebuild once to verify.

@ThinkWD
Author

ThinkWD commented Oct 13, 2022

My previous builds left files scattered across several similarly named folders, so I deleted them all and rebuilt.
Now I can confirm that bin/opencv_test_cudaarithm and all the test programs starting with opencv_test_cuda output the exact same error.

@cudawarped
Contributor

So the video_reader sample is still working for you?

What is your output from

opencv_test_cudacodec --gtest_filter=CUDA_Codec/Video.Reader/0

Is it similar to

Note: Google Test filter = CUDA_Codec/Video.Reader/0
[==========] Running 1 test from 1 test case.
[----------] Global test environment set-up.
[----------] 1 test from CUDA_Codec/Video
[ RUN ] CUDA_Codec/Video.Reader/0, where GetParam() = (NVIDIA GeForce RTX 3070 Ti Laptop GPU, "highgui/video/big_buck_bunny.mp4")
unknown file: Failure
C++ exception with description "OpenCV(4.6.0-dev) /home/b/repos/opencv/opencv_contrib/modules/cudacodec/src/cuvid_video_source.cpp:66: error: (-217:Gpu API call) CUDA_ERROR_FILE_NOT_FOUND [Code = 301] in function 'CuvidVideoSource'
" thrown in the test body.
[ FAILED ] CUDA_Codec/Video.Reader/0, where GetParam() = (NVIDIA GeForce RTX 3070 Ti Laptop GPU, "highgui/video/big_buck_bunny.mp4") (419 ms)
[----------] 1 test from CUDA_Codec/Video (419 ms total)

[----------] Global test environment tear-down
[==========] 1 test from 1 test case ran. (419 ms total)
[ PASSED ] 0 tests.
[ FAILED ] 1 test, listed below:
[ FAILED ] CUDA_Codec/Video.Reader/0, where GetParam() = (NVIDIA GeForce RTX 3070 Ti Laptop GPU, "highgui/video/big_buck_bunny.mp4")

If so, you need to clone the extra repo and add

export OPENCV_TEST_DATA_PATH=extra/testdata/

or similar before running the test.

@ThinkWD
Author

ThinkWD commented Oct 13, 2022

So the video_reader sample is still working for you?

Yes, it still works.

What is your output from opencv_test_cudacodec --gtest_filter=CUDA_Codec/Video.Reader/0

It is similar to the one you mentioned.

If so, you need to clone the extra repo and add export OPENCV_TEST_DATA_PATH=extra/testdata/ or similar before running the test.

I don't quite understand how to do this, and I didn't notice where the extra repo is. Also, thanks to your help, isn't the codec already working as expected?

@cudawarped
Contributor

I don't quite understand how to do this, and I didn't notice where the extra repo is. Also, thanks to your help, isn't the codec already working as expected?

Yes, I just wanted to confirm that the errors you are seeing from opencv_test_cudacodec are a result of not having the input videos, which are contained in OpenCV's extra repository, and not a result of another internal error.

@ThinkWD
Author

ThinkWD commented Oct 13, 2022

Yes, I just wanted to confirm that the errors you are seeing from opencv_test_cudacodec are a result of not having the input videos, which are contained in OpenCV's extra repository, and not a result of another internal error.

Yes. After cloning the extra repo and adding export OPENCV_TEST_DATA_PATH=extra/testdata/, opencv_test_cudacodec now passes without any more errors,
and opencv_test_cudafeatures2d, opencv_test_cudalegacy, opencv_test_cudaobjdetect, opencv_test_cudastereo and opencv_test_cudawarping all pass without error.

@cudawarped
Contributor

Great, thanks for checking that. I guess this issue can be closed then?

@ThinkWD ThinkWD closed this as completed Oct 13, 2022
@hamidreza-hashempour

hamidreza-hashempour commented Jul 26, 2023

I rebuilt; now ldd bin/opencv_test_cudacodec | grep libnvcuvid outputs:

libnvcuvid.so.1 => /lib/x86_64-linux-gnu/libnvcuvid.so.1 (0x00007f2fcb831000)

But when I run bin/opencv_test_cudaarithm I get the same error as before.

After that I tried to run video_reader.cpp without any more errors and everything seems to be fine.

Hello there, could you explain how you built your OpenCV, as I have faced the same issue?
I have tried:
1- Setting CUDA_nvcuvid_LIBRARY directly to /usr/lib/x86_64-linux-gnu/libnvcuvid.so, and then building OpenCV with this command:
cmake -D OPENCV_EXTRA_MODULES_PATH= /opencv/opencv/contrib/modules/ -D WITH_XINE=ON -D WITH_CUDA=ON -D ENABLE_FAST_MATH=1 -D CUDA_FAST_MATH=1 -D WITH_CUBLAS=1 -D OPENCV_DNN_CUDA=ON -D WITH_NVCUVID=ON -D WITH_CUDNN=ON -D BUILD_DOCS=ON -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D WITH_TBB=ON -D WITH_V4L=ON -D INSTALL_C_EXAMPLES=ON -D INSTALL_PYTHON_EXAMPLES=ON -D BUILD_EXAMPLES=ON -D WITH_QT=ON -D WITH_OPENGL=ON -D CUDA_nvcuvid_LIBRARY=/usr/lib/x86_64-linux-gnu/libnvcuvid.so ..

2- Creating a soft link to libnvcuvid.so in /usr/lib, pointing to /usr/lib/x86_64-linux-gnu/libnvcuvid.so, and then building OpenCV with this command:
cmake -D OPENCV_EXTRA_MODULES_PATH= /opencv/opencv/contrib/modules/ -D WITH_XINE=ON -D WITH_CUDA=ON -D ENABLE_FAST_MATH=1 -D CUDA_FAST_MATH=1 -D WITH_CUBLAS=1 -D OPENCV_DNN_CUDA=ON -D WITH_NVCUVID=ON -D WITH_CUDNN=ON -D BUILD_DOCS=ON -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D WITH_TBB=ON -D WITH_V4L=ON -D INSTALL_C_EXAMPLES=ON -D INSTALL_PYTHON_EXAMPLES=ON -D BUILD_EXAMPLES=ON -D WITH_QT=ON -D WITH_OPENGL=ON -D CUDA_nvcuvid_LIBRARY=/usr/lib/libnvcuvid.so ..

In both cases, opencv_test_cudacodec is not linked to libnvcuvid, i.e.
ldd bin/opencv_test_cudacodec | grep libnvcuvid
does not output anything, and accordingly I cannot execute cv::cudacodec::createVideoReader.
CUDA 11.2
Ubuntu 22.04.2 LTS

Thanks again.

@cudawarped
Contributor

1- Setting CUDA_nvcuvid_LIBRARY directly to /usr/lib/x86_64-linux-gnu/libnvcuvid.so, and then building OpenCV with this command:
cmake -D OPENCV_EXTRA_MODULES_PATH= /opencv/opencv/contrib/modules/ -D WITH_XINE=ON -D WITH_CUDA=ON -D ENABLE_FAST_MATH=1 -D CUDA_FAST_MATH=1 -D WITH_CUBLAS=1 -D OPENCV_DNN_CUDA=ON -D WITH_NVCUVID=ON -D WITH_CUDNN=ON -D BUILD_DOCS=ON -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D WITH_TBB=ON -D WITH_V4L=ON -D INSTALL_C_EXAMPLES=ON -D INSTALL_PYTHON_EXAMPLES=ON -D BUILD_EXAMPLES=ON -D WITH_QT=ON -D WITH_OPENGL=ON -D CUDA_nvcuvid_LIBRARY=/usr/lib/x86_64-linux-gnu/libnvcuvid.so ..

If you have the Nvidia driver installed and a GPU which supports the Nvidia Video Codec SDK, then I would expect /usr/lib/x86_64-linux-gnu/libnvcuvid.so to exist and be picked up by CMake. Did /usr/lib/x86_64-linux-gnu/libnvcuvid.so exist, or did you manually create it by copying the stub library?

@hamidreza-hashempour

hamidreza-hashempour commented Jul 27, 2023

1- Setting CUDA_nvcuvid_LIBRARY directly to /usr/lib/x86_64-linux-gnu/libnvcuvid.so, and then building OpenCV with this command:
cmake -D OPENCV_EXTRA_MODULES_PATH= /opencv/opencv/contrib/modules/ -D WITH_XINE=ON -D WITH_CUDA=ON -D ENABLE_FAST_MATH=1 -D CUDA_FAST_MATH=1 -D WITH_CUBLAS=1 -D OPENCV_DNN_CUDA=ON -D WITH_NVCUVID=ON -D WITH_CUDNN=ON -D BUILD_DOCS=ON -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D WITH_TBB=ON -D WITH_V4L=ON -D INSTALL_C_EXAMPLES=ON -D INSTALL_PYTHON_EXAMPLES=ON -D BUILD_EXAMPLES=ON -D WITH_QT=ON -D WITH_OPENGL=ON -D CUDA_nvcuvid_LIBRARY=/usr/lib/x86_64-linux-gnu/libnvcuvid.so ..

If you have the Nvidia driver installed and a GPU which supports the Nvidia Video Codec SDK, then I would expect /usr/lib/x86_64-linux-gnu/libnvcuvid.so to exist and be picked up by CMake. Did /usr/lib/x86_64-linux-gnu/libnvcuvid.so exist, or did you manually create it by copying the stub library?

Thanks for the reply.
The drivers are pre-installed; I have tried the different driver versions that NVIDIA provides for my GPU.
To be more specific, I installed driver version 450.245 with a GTX 1060.
(Please note that I tried newer driver versions but the problem was not resolved.)
And yes, whenever I install a driver, the contents of /usr/lib/x86_64-linux-gnu/ change based on the installed driver (I do not copy libnvcuvid manually).

The weird thing is that, as you said, CMake should pick up /usr/lib/x86_64-linux-gnu/ as a search directory, and it does.
I rechecked it with:
cat CMakeVars.txt | grep CUDA_LIBS_PATH
CUDA_LIBS_PATH=/usr/local/cuda/lib64;/usr/lib/x86_64-linux-gnu

inside CMakeVars.txt. There are other shared libraries picked up from /usr/lib/x86_64-linux-gnu that are linked in CMakeVars.txt,
but it seems libnvcuvid is not picked up.
Another thing worth noting: I followed your suggestion to remove the soft links (so I removed the soft link from /usr/lib/libnvcuvid and just kept the libnvcuvid.so in x86_64-linux-gnu, which is created by the driver). Then I built OpenCV with this cmake command:
cmake -D OPENCV_EXTRA_MODULES_PATH=/opencv/opencv/contrib/modules/ -D WITH_XINE=ON -D WITH_CUDA=ON -D ENABLE_FAST_MATH=1 -D CUDA_FAST_MATH=1 -D WITH_CUBLAS=1 -D OPENCV_DNN_CUDA=ON -D WITH_NVCUVID=ON -D WITH_CUDNN=ON -D BUILD_DOCS=ON -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D WITH_TBB=ON -D WITH_V4L=ON -D INSTALL_C_EXAMPLES=ON -D INSTALL_PYTHON_EXAMPLES=ON -D BUILD_EXAMPLES=ON -D WITH_QT=ON -D WITH_OPENGL=ON -D WITH_OPENNI=ON -D WITH_OPENCL=ON ..

Then I checked
cat CMakeVars.txt | grep CUDA_nvcuvid_LIBRARY
=/usr/lib/libnvcuvid.so
meaning that it is not picked up from /usr/lib/x86_64-linux-gnu, but still from the old location.

@cudawarped
Contributor

cudawarped commented Jul 28, 2023

Then I checked
cat CMakeVars.txt | grep CUDA_nvcuvid_LIBRARY
=/usr/lib/libnvcuvid.so
meaning that it is not picked up from /usr/lib/x86_64-linux-gnu, but still from the old location.

You need to remove the stub library /usr/lib/libnvcuvid.so and possibly clean your build directory. The stub libs should not be on a path where they can be picked up at run time.
