help/ #8027
Replies: 87 comments · 285 replies
-
I want to save the images in which an object is detected and the images in which no object is detected into different folders.
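A minimal sketch of one way to do this (the folder names `detected`/`no_detections` are made up for illustration, and the Ultralytics calls in the comment are an assumption to check against your version): keep the routing decision as a small pure function, then copy the file based on the detection count.

```python
from pathlib import Path

def route_image(img_path, num_detections,
                detected_dir="detected", empty_dir="no_detections"):
    # Choose a destination folder based on whether any boxes were found.
    dest = Path(detected_dir if num_detections > 0 else empty_dir)
    return dest / Path(img_path).name

# With Ultralytics this might look like (hypothetical usage):
#   results = model(img_path)
#   shutil.copy(img_path, route_image(img_path, len(results[0].boxes)))
```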
-
I'm seeking clarification regarding the imgsz parameter in YOLO (You Only Look Once) and its impact on image resizing and bounding boxes. In my dataset, all images have a consistent size of 1920x1080 pixels. If I set the imgsz parameter to 640, will the images be internally downscaled to 640x640 pixels by YOLO during training or inference? In the context of this resizing, I'm curious about the effect on bounding boxes. Do their coordinates change, or does YOLO handle the calculation of new bounding boxes internally to accommodate the downscaled images? I want to ensure that I understand how YOLO manages the resizing process and its implications for object detection accuracy. Any insights or pointers to relevant documentation would be greatly appreciated. Thank you.
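For what it's worth, YOLO-style pipelines typically letterbox rather than stretch: the image is scaled so its longer side fits imgsz and the remainder is padded, and boxes are mapped with the same scale and offsets internally (predictions are mapped back to original pixels, so annotations normally need no changes). A rough sketch of that arithmetic, as an illustration rather than the library's exact code:

```python
def letterbox_params(w, h, new_size=640):
    # Scale so the longer side fits new_size, preserving aspect ratio.
    r = min(new_size / w, new_size / h)
    nw, nh = round(w * r), round(h * r)
    # Padding added on each side to reach a square canvas.
    pad_x, pad_y = (new_size - nw) / 2, (new_size - nh) / 2
    return r, pad_x, pad_y

def scale_box(box, r, pad_x, pad_y):
    # Map an (x1, y1, x2, y2) box from original pixels to letterboxed pixels.
    x1, y1, x2, y2 = box
    return (x1 * r + pad_x, y1 * r + pad_y, x2 * r + pad_x, y2 * r + pad_y)

# A 1920x1080 image scales by 1/3 to 640x360, with 140 px of padding
# above and below to fill the 640x640 canvas.
r, px, py = letterbox_params(1920, 1080)
```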
-
Hi, with regard to inference/predict in YOLOv8, how can you obtain the run number, or the folder /run/detect/predictXX where XX is the sequential number? The reason is that I would like to automatically get the image with all the bounding boxes, for example /run/detect/predict46. Using :- Thanks
-
@glenn-jocher thanks for being so responsive and helpful. It's really impressive. My question is: what color does YOLOv8 use as infill when a training image is not square? Maybe I'm misunderstanding, but if the algorithm pads the image to a square size, I'm just curious what color it pads with (zeros, i.e., black?). If it makes any difference, I'm currently training a classification model.
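For reference, YOLO-style letterboxing in detection pipelines commonly fills the border with mid-gray (114, 114, 114) rather than black; classification preprocessing may simply resize or crop instead, so treat the fill value below as an assumption to verify against your installed version's LetterBox defaults. A sketch of the padding step:

```python
import numpy as np

def letterbox_pad(img, new_size=640, fill=114):
    # Pad an HxWx3 image (already scaled to fit) onto a square canvas
    # filled with a constant gray; fill=114 is the commonly used value
    # (an assumption here -- check your version's defaults).
    h, w = img.shape[:2]
    canvas = np.full((new_size, new_size, 3), fill, dtype=img.dtype)
    top, left = (new_size - h) // 2, (new_size - w) // 2
    canvas[top:top + h, left:left + w] = img
    return canvas
```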
-
Is it possible to combine two YOLOv8 weights?
-
Hello, I have a question regarding the |
-
Hello, I was wondering if you could give me some clarification on the freeze parameter for YOLOv8. When training begins, the training script automatically prints the layers of the model. There seem to be 23 blocks but over 100 different components. In your examples, you always use … In my personal training I used …
-
I was using YOLOv8 for number detection (meter readings); it worked pretty well, but I need some small help.
-
Dear community, thank you to each of the members. I want to extract tree crown boundaries using the YOLOv8 model. After training the model, when I predict on an RGB image, each tree has multiple polygons, but it should be a single polygon per tree. Do you have any idea how to address this issue?
-
In my YAML file I have 11 labels, with their respective values. But when I use the save_txt command the labels are saved to a text file as class indices; I want the exact values to be saved instead, because label 10 has the value ".", which is important for the meter reading. How can I save the values in the text file rather than the label indices? Below is my YAML file: names:
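save_txt writes the class index, not the YAML name, so a small post-processing step can map indices back to values. A sketch (the names dict here is hypothetical, standing in for the truncated YAML above):

```python
# Hypothetical mapping standing in for the data YAML's names list,
# where index 10 carries the value '.'.
names = {i: str(i) for i in range(10)}
names[10] = "."

def decode_labels(txt_lines, names):
    # Each save_txt line is 'cls cx cy w h'; map cls back to its value.
    values = []
    for line in txt_lines:
        parts = line.split()
        cls, cx = int(parts[0]), float(parts[1])
        values.append((cx, names[cls]))
    # Sort left-to-right by x-center so digits read in meter order.
    return [v for _, v in sorted(values)]
```

Joining the result then yields the reading string, e.g. "".join(...) gives "12.5" for the digits 1, 2, ., 5 in left-to-right order.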
-
I have not found any resolution to the following issue with yolov8. Whenever I start training my model, the val-set score is immediately 1 for Precision and Recall. I believe this is an error. Therefore, it's really hard to monitor training. Here's an example:
-
I want to load the model before doing prediction, like model.load_state_dict(torch.load(opt.saved_model, map_location=device)). I want to load the model right after I run the file. I'm using a custom YOLOv8 model.
-
Hi guys at Ultralytics. Still, there's the general route of training by defining everything as torch variables and then having a training loop.
-
Hello
-
I have trained YOLOv8n on a deck of cards. It was detecting most of the cards well, but there was confusion between similar cards such as 5 and 3. So I tried to fine-tune that best.pt model by providing a dataset of only the 5 and 3 cards, but now it only detects 5 and 3 and not the other cards. I have a lot of decks of cards to train and I want a single model to detect them all. I have already tried to retrain that best.pt, but it forgets the previously trained deck. I did not find an appropriate answer on the official website. Is this possible, and if so, how? Please reply as soon as possible.
-
That's very kind of you.
I have started the training. My training set consists of 60 images, but in every epoch it shows only 12 images, and mAP as zero. Am I going wrong anywhere?
…On Sun, Jun 9, 2024 at 1:27 AM Glenn Jocher ***@***.***> wrote:
Hello Swarnalatha,
Thank you for your kind words! 😊 We're thrilled to hear that the guidance
provided has been helpful.
Regarding your current progress, it looks like you're on the right track
with organizing your dataset and integrating custom modules into your YOLO
model. Let's ensure everything is set up correctly:
Dataset Organization and YAML Configuration
Your directory structure and YAML configuration should look like this:
*Directory Structure:*
dataset/
├── images/
│ ├── train/
│ └── val/
└── labels/
├── train/
└── val/
*YAML File Configuration:*
path: ../dataset  # path to dataset root directory
train: images/train  # train images (relative to 'path')
val: images/val  # val images (relative to 'path')
test:  # test images (optional)
nc: 80  # number of classes
names: ['class1', 'class2', ...]  # list of class names
Ensure that the paths in your YAML file are relative to the path
specified at the top. This should resolve the "str object does not support
item assignment" error.
Adding Ghost Bottleneck and CARAFE Modules
To integrate custom modules like Ghost Bottleneck and CARAFE, follow these
steps:
1. *Define Custom Modules:*
Create a new Python file, e.g., custom_modules.py, and implement your
modules:
import torch
import torch.nn as nn
class GhostBottleneck(nn.Module):
    # Define your Ghost Bottleneck module here
    pass

class CARAFE(nn.Module):
    # Define your CARAFE module here
    pass
2. *Modify the Model Architecture:*
Integrate these custom modules into the YOLO model architecture by
editing the model's YAML configuration file or directly modifying the
model's Python code:
# Example of modifying the model.yaml
backbone:
  # Add GhostBottleneck and CARAFE layers in the backbone
  - [GhostBottleneck, ...]
  - [CARAFE, ...]
3. *Update the Model Code:*
Ensure the custom modules are imported and used in the model
definition:
from custom_modules import GhostBottleneck, CARAFE
class CustomYOLOModel(nn.Module):
    def __init__(self):
        super(CustomYOLOModel, self).__init__()
        self.layer1 = GhostBottleneck(...)
        self.layer2 = CARAFE(...)
        # Add other layers and configurations
4. *Training and Testing:*
Train your modified model using the standard training scripts provided
by Ultralytics:
yolo train data=your_data.yaml model=custom_model.yaml epochs=100
Additional Steps
1. *Minimum Reproducible Example (MRE):*
If you encounter further issues, please provide a minimum reproducible
example. This helps us investigate and resolve your issue more efficiently.
You can find more details on creating an MRE here
<https://docs.ultralytics.com/help/minimum_reproducible_example>.
2. *Version Check:*
Ensure you are using the latest versions of torch and ultralytics. If
not, please upgrade your packages and try again.
If you need any more assistance or detailed guidance, feel free to ask.
We're here to help! 🚀
--
Thanks & Regards
Swarnalatha YV
Sci/ Eng - SC
VSSC, Trivandrum.
Ph: 0471-256-4375
-
Thank you, I will do that.
To accelerate training I am trying to use the GPU. My system has an NVIDIA GeForce GT730 GPU, and I installed the CUDA toolkit and cuDNN, but my Jupyter notebook is not using the GPU. I checked the kernel; it is not detecting the GPU.
…On Mon, 10 Jun, 2024, 5:15 pm Paula Derrenger, ***@***.***> wrote:
Hi Swarnalatha,
Thank you for your kind words and for sharing your progress! 😊
Regarding your training issue, it seems like there might be a couple of
things to check:
1. *Dataset Size and Configuration*: With 60 images in your training set,
it's expected that each epoch processes a subset of these images based on
your batch size. Could you verify your batch size setting? If it's set to
12, this would explain why you see 12 images per epoch.
2. *Zero mAP*: A zero mAP (mean Average Precision) typically indicates
that the model isn't detecting any objects correctly. This could be due to
several reasons:
- *Annotations*: Ensure that your annotations are correctly formatted
and match the images.
- *Classes*: Verify that the number of classes in your dataset
configuration file (.yaml) matches the actual classes in your
dataset.
- *Training Duration*: With only 60 images, the model might need
more epochs to start learning effectively. Consider increasing the number
of epochs.
To help us investigate further, could you please provide a minimum
reproducible example (MRE) of your code? This will allow us to reproduce
the issue on our end and provide a more accurate solution. You can find
more details on creating an MRE here
<https://docs.ultralytics.com/help/minimum_reproducible_example>.
Additionally, please ensure you are using the latest versions of torch
and ultralytics. You can update your packages with the following commands:
pip install --upgrade torch ultralytics
Here's a quick example of how you might set up your training script:
from ultralytics import YOLO
# Load the model
model = YOLO('yolov8n.pt')

# Set up training arguments
args = {
    'data': 'path/to/your_data.yaml',
    'epochs': 100,
    'batch': 12,
    'imgsz': 640
}

# Train the model
results = model.train(**args)
If you need further assistance, feel free to ask. We're here to help! 🚀
-
Thanks for the reply.
The GPU driver installed in my system is an older version, and Torch does not support it. I will upgrade my GPU driver and come back if any clarifications are required.
…On Tue, Jun 11, 2024 at 1:02 AM Paula Derrenger ***@***.***> wrote:
Hi Swarnalatha,
Thank you for providing more details about your setup! 😊 Let's see if we
can get your GPU up and running for your Jupyter notebook.
First, let's verify that your system is correctly set up to use the GPU.
Here are a few steps to troubleshoot and ensure everything is configured
properly:
1. *Check CUDA and cuDNN Installation*:
Ensure that CUDA and cuDNN are correctly installed and compatible with
your GPU. You can verify the CUDA installation by running:
nvcc --version
This should display the CUDA version installed on your system.
2. *Verify GPU Availability in PyTorch*:
Make sure PyTorch can detect your GPU. You can run the following code
in a Python environment:
import torch
print(torch.cuda.is_available())
print(torch.cuda.get_device_name(0))
This should return True and the name of your GPU if everything is set
up correctly.
3. *Update PyTorch and Ultralytics*:
Ensure you are using the latest versions of torch and ultralytics. You
can update them using:
pip install --upgrade torch ultralytics
4. *Set the Device in Your Training Script*:
Explicitly set the device to GPU in your training script. Here’s an
example:
from ultralytics import YOLO
# Load the model
model = YOLO('yolov8n.pt')

# Set up training arguments
args = {
    'data': 'path/to/your_data.yaml',
    'epochs': 100,
    'batch': 12,
    'imgsz': 640,
    'device': 'cuda'  # Ensure the model uses the GPU
}

# Train the model
results = model.train(**args)
If your Jupyter notebook still does not detect the GPU, please ensure that
the notebook kernel is running in the same environment where CUDA and
PyTorch are installed. You can check the kernel environment by running:
!which python
This should point to the Python executable in your environment with CUDA
and PyTorch.
If you continue to face issues, please provide a minimum reproducible
example (MRE) of your code so we can better assist you. You can find more
details on creating an MRE here
<https://docs.ultralytics.com/help/minimum_reproducible_example>.
Feel free to reach out if you need further assistance. We're here to help!
🚀
-
Hello,
In the model YAML file also, should we change the number of classes? Is that the reason mAP is showing as zero?
-
Hi, I had earlier trained my custom data with YOLOv8. Now I want to train with YOLOv10. I looked at the official site for the training code, found the following, and did exactly the same steps:
from ultralytics import YOLO
# Load YOLOv10n model from scratch
model = YOLO("yolov10b.yaml")
# Train the model
model.train(data="data.yaml", epochs=100, imgsz=640)
But I'm getting the following error: NotImplementedError: WARNING 'YOLO' model does not support '_new' mode for 'None' task yet. I have all the mentioned files. Can you help me out?
-
I changed the network in source code and have trained successfully. But when I want to convert the model into onnx with |
-
Hi, |
-
Hi, I want to export the YOLOv8-s model (without training it myself) to ONNX. The error is: |
-
I wonder why it goes wrong: |
-
Hello,
I installed CUDA 12.5, which is compatible with torch. When I try to import torch in a Jupyter notebook, it shows no module found with the name torch; but in IDLE it returns True when I check for CUDA availability. What might be the problem?
-
Hello, |
-
Thank you for your response. It is working now😊
…On Wed, 19 Jun, 2024, 3:26 pm Glenn Jocher, ***@***.***> wrote:
@swarnalathayv <https://github.com/swarnalathayv> hi Swarnalatha,
Thank you for reaching out! 😊 It sounds like you're encountering an
environment issue where Jupyter Notebook isn't recognizing the torch
module, while it works fine in IDLE. Let's troubleshoot this step-by-step:
1. *Verify CUDA Installation*:
Ensure that CUDA is correctly installed by running:
nvcc --version
This should display the CUDA version installed on your system.
2. *Check PyTorch Installation*:
Make sure PyTorch is installed in the same environment that Jupyter
Notebook is using. You can verify this by running:
import torch
print(torch.cuda.is_available())
print(torch.cuda.get_device_name(0))
This should return True and the name of your GPU if everything is set
up correctly.
3. *Update Packages*:
Ensure you are using the latest versions of torch and ultralytics. You
can update them using:
pip install --upgrade torch ultralytics
4. *Check Jupyter Kernel Environment*:
Ensure that your Jupyter Notebook kernel is running in the same
environment where CUDA and PyTorch are installed. You can check this by
running:
!which python
This should point to the Python executable in your environment with
CUDA and PyTorch.
5. *Set the Device in Your Training Script*:
Explicitly set the device to GPU in your training script. Here’s an
example:
from ultralytics import YOLO
# Load the model
model = YOLO('yolov8n.pt')

# Set up training arguments
args = {
    'data': 'path/to/your_data.yaml',
    'epochs': 100,
    'batch': 12,
    'imgsz': 640,
    'device': 'cuda'  # Ensure the model uses the GPU
}

# Train the model
results = model.train(**args)
If the issue persists, please provide a minimum reproducible example (MRE)
of your code so we can better assist you. You can find more details on
creating an MRE here
<https://docs.ultralytics.com/help/minimum_reproducible_example>.
Feel free to reach out if you need further assistance. We're here to help!
🚀
-
Thanks for your support.
After training, the results were saved in the runs/detect/train13 folder.
All the images given for validation are detected, but the images are
arranged in tile format (the 12 images given for validation were combined
into a single 4x3 tiled image).
And I loaded my custom model (best.pt) for testing on the images. Here is
the code snippet
model = YOLO('best.pt')
img = 'test.tif'
result = model(img)
result.print()
But it shows an error like "print" is not in the list.
How to display my input image with bounding boxes?
Kindly help.
…On Wed, Jun 19, 2024 at 11:20 PM Paula Derrenger ***@***.***> wrote:
@swarnalathayv <https://github.com/swarnalathayv> hi Swarnalatha,
Thank you for the update! I'm glad to hear that everything is working now
😊.
If you encounter any further issues or have additional questions, please
don't hesitate to reach out. For more detailed guides and resources, you
can always visit our Help Page <https://docs.ultralytics.com/help/>.
Happy coding! 🚀
-
Hi, I try to load and run inference with a YOLOv8s model in OpenCV (C++), and I hit a bug:
the result seems empty.
On the exact same image I get full results when loading with ultralytics in Python.
My code:
DNN* net;
Mat blob, prob;
*net = readNet("yolov8.onnx");
setNetInputSize("yolov8.onnx");
net->setPreferableBackend(DNN_BACKEND_CUDA);
net->setPreferableTarget(DNN_TARGET_CUDA);
std::cout << "loading succeeded" << std::endl;
blob = blobFromImage(image, 1.0, Size(640, 640), (0,0,0), true);
net->setInput(blob);
prob = net->forward();
Then I try to print the result:
std::cout<<"prob.rows="<<prob.rows<<" prob.cols="<<prob.cols<<'\n';
for (int i = 0; i < prob.rows; i++) {
for (int j = 0; j < prob.cols; j++) {
std::cout << (int)prob.at<uchar>(i, j) << " ";
}
std::cout << std::endl;
}
and get:
prob.rows=-1 prob.cols=-1
what am I doing wrong?
…On Thu, 20 Jun 2024 at 12:39, Paula Derrenger ***@***.***> wrote:
… @swarnalathayv <https://github.com/swarnalathayv> hi Swarnalatha,
Thank you for reaching out and for your support! 😊
To address your issue with displaying the input image with bounding boxes,
it seems like there might be a small confusion with the method names. The
result.print() method is not available, but you can use the result.show()
method to display the image with bounding boxes. Here’s how you can modify
your code snippet:
from ultralytics import YOLO
# Load your custom model
model = YOLO('best.pt')
# Specify the image for testing
img = 'test.tif'
# Run the model on the image
result = model(img)
# Display the image with bounding boxes
result.show()
This should display your input image with the detected bounding boxes.
Regarding the arrangement of validation images in a tile format, this is a
default behavior for visualizing multiple images together. If you prefer to
view them individually, you can save the results and view them separately:
# Save the results to a directory
result.save(save_dir='path/to/save/directory')
This will save each image with bounding boxes in the specified directory.
If you encounter any further issues or have additional questions, please
don't hesitate to reach out. For more detailed guides and resources, you
can always visit our Help Page <https://docs.ultralytics.com/help/>.
Happy coding! 🚀
--
אתי כהן (א.שאול)
-
import cv2
# Email settings
password = ""  # Your email password
server = smtplib.SMTP("smtp.gmail.com: 587")
def send_email(to_email, from_email, object_detected=1): …
class ObjectDetection: …
# Usage
# region_points = [(51, 832), (287, 820), (567, 452), (159, 428)]
Hello Glenn, I am working on a trespassing project. My goal is to detect objects that touch or enter that region. I am getting alerts when an object is detected, but I am not able to change the color of the bounding boxes of the detected objects to red when they cross or touch the region I have mentioned. Can you help me adjust the code?
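The color decision can be isolated from the detection loop: test whether a box touches the region polygon and pick red or green accordingly. A sketch under the assumption that region_points and the cv2 drawing loop look roughly like the fragments above (the point-in-polygon test is pure Python, so no extra dependencies):

```python
def point_in_polygon(pt, polygon):
    # Ray-casting test: count edge crossings to the right of the point.
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def box_color(box, region, red=(0, 0, 255), green=(0, 255, 0)):
    # Red (BGR) if any corner or the center of the box lies inside the region.
    x1, y1, x2, y2 = box
    probes = [(x1, y1), (x2, y1), (x1, y2), (x2, y2),
              ((x1 + x2) / 2, (y1 + y2) / 2)]
    return red if any(point_in_polygon(p, region) for p in probes) else green

region_points = [(51, 832), (287, 820), (567, 452), (159, 428)]
# In the drawing loop (hypothetical):
#   cv2.rectangle(frame, (x1, y1), (x2, y2),
#                 box_color((x1, y1, x2, y2), region_points), 2)
```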
-
help/
Find comprehensive guides and documents on Ultralytics YOLO tasks. Includes FAQs, contributing guides, CI guide, CLA, MRE guide, code of conduct & more.
https://docs.ultralytics.com/help/