Description
Hi, I'm new to this and running CodeProject.AI with Blue Iris. Every so often this error pops up and I have no clue why.... Any chance of a little help? After I restart YOLOv8 it goes back to working for a while, then the error pops up again, maybe in a day or so. There's no real timeframe, it seems.
My system:
Server version: 2.9.5
System: Windows
Operating System: Windows (Windows 11 22H2)
CPUs: Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz (Intel)
1 CPU x 8 cores. 16 logical processors (x64)
GPU (Primary): NVIDIA GeForce RTX 2080 Ti (11 GiB) (NVIDIA)
Driver: 581.29, CUDA: 10.1.105 (up to: 13.0), Compute: 7.5, cuDNN: 9.8
System RAM: 32 GiB
Platform: Windows
BuildConfig: Release
Execution Env: Native
Runtime Env: Production
Runtimes installed:
.NET runtime: 9.0.4
.NET SDK: 9.0.203
Default Python: Not found
Go: Not found
NodeJS: 18.20.7
Rust: Not found
Video adapter info:
NVIDIA GeForce RTX 2080 Ti:
Driver Version 32.0.15.8129
Video Processor NVIDIA GeForce RTX 2080 Ti
System GPU info:
GPU 3D Usage 10%
GPU RAM Usage 2.2 GiB
Global Environment variables:
CPAI_APPROOTPATH =
CPAI_PORT = 32168
Issues:
RuntimeError: CUDA error: unknown error
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
09:15:21:Object Detection (YOLOv8): [RuntimeError] : Traceback (most recent call last):
File "C:\Program Files\CodeProject\AI\modules\ObjectDetectionYOLOv8\detect.py", line 149, in do_detection
results = detector.predict(img, imgsz=640,
File "C:\Program Files\CodeProject\AI\modules\ObjectDetectionYOLOv8\bin\windows\python39\venv\lib\site-packages\ultralytics\engine\model.py", line 273, in predict
return self.predictor.predict_cli(source=source) if is_cli else self.predictor(source=source, stream=stream)
File "C:\Program Files\CodeProject\AI\modules\ObjectDetectionYOLOv8\bin\windows\python39\venv\lib\site-packages\ultralytics\engine\predictor.py", line 204, in call
return list(self.stream_inference(source, model, *args, **kwargs)) # merge list of Result into one
File "C:\Program Files\CodeProject\AI\modules\ObjectDetectionYOLOv8\bin\windows\python39\venv\lib\site-packages\torch\autograd\grad_mode.py", line 43, in generator_context
response = gen.send(None)
File "C:\Program Files\CodeProject\AI\modules\ObjectDetectionYOLOv8\bin\windows\python39\venv\lib\site-packages\ultralytics\engine\predictor.py", line 278, in stream_inference
with profilers[0]:
File "C:\Program Files\CodeProject\AI\modules\ObjectDetectionYOLOv8\bin\windows\python39\venv\lib\site-packages\ultralytics\utils\ops.py", line 47, in enter
self.start = self.time()
File "C:\Program Files\CodeProject\AI\modules\ObjectDetectionYOLOv8\bin\windows\python39\venv\lib\site-packages\ultralytics\utils\ops.py", line 62, in time
torch.cuda.synchronize(self.device)
File "C:\Program Files\CodeProject\AI\modules\ObjectDetectionYOLOv8\bin\windows\python39\venv\lib\site-packages\torch\cuda_init_.py", line 566, in synchronize
return torch._C._cuda_synchronize()
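
For what it's worth, the error text above suggests one way to narrow this down: run a single detection with CUDA_LAUNCH_BLOCKING=1 set before torch initialises CUDA, so the failure is reported at the call that actually raised it instead of asynchronously at the next synchronize. Below is a minimal sketch, assuming the torch and ultralytics packages from the module's venv; the weights file and test image names are placeholders, not paths from this report.

import os
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"  # must be set before torch initialises CUDA

import torch
from ultralytics import YOLO

print(torch.cuda.is_available(), torch.version.cuda)  # sanity-check the GPU is visible

model = YOLO("yolov8m.pt")                                 # placeholder weights
results = model.predict("test.jpg", imgsz=640, device=0)  # placeholder test image
torch.cuda.synchronize()  # force any pending CUDA error to surface right here
print(results[0].boxes)

If the same "unknown error" shows up with this, the traceback should point at the failing kernel rather than at torch.cuda.synchronize() inside the profiler, which is just where the deferred error happened to be reported.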