
yolov5 does not train or detect #4839

Closed
Elektrikci21 opened this issue Sep 17, 2021 · 25 comments
Labels
bug Something isn't working

Comments

@Elektrikci21

Before submitting a bug report, please be aware that your issue must be reproducible with all of the following,
otherwise it is non-actionable, and we can not help you:

If this is a custom dataset/training question you must include your train*.jpg, val*.jpg and results.png
figures, or we can not help you. You can generate these with utils.plot_results().

🐛 Bug

When I train the coco128 dataset with YOLOv5, it trains and there are no warnings during training. After training, YOLOv5 produces folders and files such as weights, confusion_matrix.png, results.csv, results.png... But when we look at results.png, confusion_matrix.png, and results.csv, they are all empty. You can see results.png below:

results

results.csv is in the same state as results.png. Some files are not created properly, as I said, but the weights are created and they are the same size as yolov5s.pt.

If I try to predict on a sample with the trained model, it does not give a usable result: there are no rectangles, no error message, and no warning. But if I predict on an image with the default yolov5s.pt, detection works and gives a reasonable result.

You can see all the steps below:

To Reproduce (REQUIRED)

To train a model

Input:


python train.py --img 640 --batch 8 --epochs 3 --data data/coco128.yaml  --image-weights '' --name test --workers 1

Output:

train: weights=yolov5s.pt, cfg=, data=data/coco128.yaml, hyp=data/hyps/hyp.scratch.yaml, epochs=3, batch_size=8, imgsz=640, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, evolve=None, bucket=, cache=None, image_weights=True, device=, multi_scale=False, single_cls=False, adam=False, sync_bn=False, workers=1, project=runs/train, entity=None, name=test4, exist_ok=False, quad=False, linear_lr=False, label_smoothing=0.0, upload_dataset=False, bbox_interval=-1, save_period=-1, artifact_alias=latest, local_rank=-1, freeze=0, patience=100
github: skipping check (not a git repository), for updates see https://github.com/ultralytics/yolov5
YOLOv5  2021-9-11 torch 1.9.0+cu111 CUDA:0 (NVIDIA GeForce GTX 1650 Ti, 4096.0MB)

hyperparameters: lr0=0.01, lrf=0.2, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=0.05, cls=0.5, cls_pw=1.0, obj=1.0, obj_pw=1.0, iou_t=0.2, anchor_t=4.0, fl_gamma=0.0, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0, copy_paste=0.0
Weights & Biases: run 'pip install wandb' to automatically track and visualize YOLOv5  runs (RECOMMENDED)
TensorBoard: Start with 'tensorboard --logdir runs\train', view at http://localhost:6006/

                 from  n    params  module                                  arguments
  0                -1  1      3520  models.common.Focus                     [3, 32, 3]
  1                -1  1     18560  models.common.Conv                      [32, 64, 3, 2]
  2                -1  1     18816  models.common.C3                        [64, 64, 1]
  3                -1  1     73984  models.common.Conv                      [64, 128, 3, 2]
  4                -1  3    156928  models.common.C3                        [128, 128, 3]
  5                -1  1    295424  models.common.Conv                      [128, 256, 3, 2]
  6                -1  3    625152  models.common.C3                        [256, 256, 3]
  7                -1  1   1180672  models.common.Conv                      [256, 512, 3, 2]
  8                -1  1    656896  models.common.SPP                       [512, 512, [5, 9, 13]]
  9                -1  1   1182720  models.common.C3                        [512, 512, 1, False]
 10                -1  1    131584  models.common.Conv                      [512, 256, 1, 1]
 11                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']
 12           [-1, 6]  1         0  models.common.Concat                    [1]
 13                -1  1    361984  models.common.C3                        [512, 256, 1, False]
 14                -1  1     33024  models.common.Conv                      [256, 128, 1, 1]
 15                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']
 16           [-1, 4]  1         0  models.common.Concat                    [1]
 17                -1  1     90880  models.common.C3                        [256, 128, 1, False]
 18                -1  1    147712  models.common.Conv                      [128, 128, 3, 2]
 19          [-1, 14]  1         0  models.common.Concat                    [1]
 20                -1  1    296448  models.common.C3                        [256, 256, 1, False]
 21                -1  1    590336  models.common.Conv                      [256, 256, 3, 2]
 22          [-1, 10]  1         0  models.common.Concat                    [1]
 23                -1  1   1182720  models.common.C3                        [512, 512, 1, False]
 24      [17, 20, 23]  1    229245  models.yolo.Detect                      [80, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [128, 256, 512]]
Model Summary: 283 layers, 7276605 parameters, 7276605 gradients, 17.1 GFLOPs

Transferred 362/362 items from yolov5s.pt
Scaled weight_decay = 0.0005
optimizer: SGD with parameter groups 59 weight, 62 weight (no decay), 62 bias
train: Scanning '..\datasets\coco128\labels\train2017.cache' images and labels... 128 found, 0 missing, 2 empty, 0 corr
val: Scanning '..\datasets\coco128\labels\train2017.cache' images and labels... 128 found, 0 missing, 2 empty, 0 corrup
Plotting labels...

autoanchor: Analyzing anchors... anchors/target = 4.27, Best Possible Recall (BPR) = 0.9935
Image sizes 640 train, 640 val
Using 1 dataloader workers
Logging results to runs\train\test42
Starting training for 3 epochs...

     Epoch   gpu_mem       box       obj       cls    labels  img_size
       0/2     1.86G       nan       nan       nan       113       640: 100%|██████████| 16/16 [00:23<00:00,  1.44s/it]
C:\Users\monst\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\optim\lr_scheduler.py:129: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`.  Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
               Class     Images     Labels          P          R     mAP@.5 mAP@.5:.95: 100%|█| 8/8 [00:03<00:00,  2.45
                 all        128          0          0          0          0          0

     Epoch   gpu_mem       box       obj       cls    labels  img_size
       1/2     2.45G       nan       nan       nan       128       640: 100%|██████████| 16/16 [00:17<00:00,  1.08s/it]
               Class     Images     Labels          P          R     mAP@.5 mAP@.5:.95: 100%|█| 8/8 [00:03<00:00,  2.48
                 all        128          0          0          0          0          0

     Epoch   gpu_mem       box       obj       cls    labels  img_size
       2/2     2.45G       nan       nan       nan       221       640: 100%|██████████| 16/16 [00:17<00:00,  1.09s/it]
               Class     Images     Labels          P          R     mAP@.5 mAP@.5:.95: 100%|█| 8/8 [00:03<00:00,  2.39
                 all        128          0          0          0          0          0

3 epochs completed in 0.021 hours.
Optimizer stripped from runs\train\test42\weights\last.pt, 14.8MB
Optimizer stripped from runs\train\test42\weights\best.pt, 14.8MB
Using categorical units to plot a list of strings that are all parsable as floats or dates. If these strings should be plotted as numbers, cast to the appropriate data type before plotting.
Results saved to runs\train\test42
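The all-zero metrics above follow directly from the nan losses: NaN propagates through every arithmetic operation, and every comparison against NaN is False, so no prediction ever clears a confidence threshold and P/R/mAP collapse to 0. The "categorical units" warning is likely a downstream symptom of the same thing: results.csv ends up holding the literal string "nan", which matplotlib then tries to plot as a category. A minimal torch-free sketch of this behavior:

```python
import math

# NaN propagates through every arithmetic op...
loss = float("nan")
assert math.isnan(loss * 0.5 + 1.0)

# ...and every comparison against NaN is False, so a NaN
# confidence never clears any threshold in either direction:
conf = float("nan")
assert not (conf > 0.25)
assert not (conf <= 0.25)

# A results.csv row written from NaN losses contains the literal
# string "nan"; it still parses to float NaN, but plotted as a raw
# string it triggers matplotlib's "categorical units" warning.
row = "0,nan,nan,nan".split(",")
values = [float(v) for v in row[1:]]
assert all(math.isnan(v) for v in values)
```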

To detect

input:

python detect.py --source ../datasets/coco128/images/train2017/000000000009.jpg --weights runs/train/test42/weights/best.pt --img 640 --view-img

output:

detect: weights=['runs/train/test42/weights/best.pt'], source=../datasets/coco128/images/train2017/000000000009.jpg, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=True, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False
YOLOv5  2021-9-11 torch 1.9.0+cu111 CUDA:0 (NVIDIA GeForce GTX 1650 Ti, 4096.0MB)

Fusing layers...
Model Summary: 224 layers, 7266973 parameters, 0 gradients, 17.0 GFLOPs
image 1/1 C:\xampp\htdocs\Yaz\datasets\coco128\images\train2017\000000000009.jpg: 480x640 Done. (0.025s)
runs\detect\exp18\000000000009.jpg
Speed: 1.0ms pre-process, 24.9ms inference, 0.0ms NMS per image at shape (1, 3, 640, 640)
Results saved to runs\detect\exp18

Here is the output image:

000000000009
There are no labels; it looks just like the input image.

Expected behavior

A clear and concise description of what you expected to happen.

Environment

- OS:

Host Name: DESKTOP-3NBVRK6
OS Name: Microsoft Windows 10 Home
OS Version: 10.0.19042 N/A Build 19042
OS Manufacturer: Microsoft Corporation
OS Configuration: Standalone Workstation
OS Build Type: Multiprocessor Free
System Boot Time: 16.09.2021, 17:48:39
System Type: x64-based PC
Processor(s): 1 Processor(s) Installed.
[01]: Intel64 Family 6 Model 165 Stepping 2 GenuineIntel ~2400 Mhz
BIOS Version: INSYDE Corp. 1.07.10TFB5, 5.10.2020
Windows Directory: C:\Windows
System Directory: C:\Windows\system32
Boot Device: \Device\HarddiskVolume1
System Locale: tr;Türkçe
Input Locale: tr;Türkçe
Time Zone: (UTC+03:00) İstanbul
Total Physical Memory: 16.173 MB
Available Physical Memory: 7.350 MB
Virtual Memory: Max Size: 18.605 MB
Virtual Memory: Available: 7.710 MB
Virtual Memory: In Use: 10.895 MB
Page File Location(s): C:\pagefile.sys
Domain: WORKGROUP
Hotfix(s): 10 Hotfix(s) Installed.
[01]: KB5004331
[02]: KB4562830
[03]: KB4570334
[04]: KB4577586
[05]: KB4580325
[06]: KB4586864
[07]: KB4589212
[08]: KB4598481
[09]: KB5005565
[10]: KB5005699

Hyper-V Requirements: VM Monitor Mode Extensions: Yes
Virtualization Enabled In Firmware: Yes
Second Level Address Translation: Yes
Data Execution Prevention Available: Yes
- GPU:
Intel(R) UHD Graphics
NVIDIA GeForce GTX 1650 Ti

DriverVersion
27.20.100.9365
30.0.14.7111

Python version: 3.9.7

If you need any other files or information, just let me know.
Thanks.

@Elektrikci21 Elektrikci21 added the bug Something isn't working label Sep 17, 2021
@github-actions
Contributor

github-actions bot commented Sep 17, 2021

👋 Hello @Elektrikci21, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we can not help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://ultralytics.com or email Glenn Jocher at glenn.jocher@ultralytics.com.

Requirements

Python>=3.6.0 with all requirements.txt installed including PyTorch>=1.7. To get started:

$ git clone https://github.com/ultralytics/yolov5
$ cd yolov5
$ pip install -r requirements.txt

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), validation (val.py), inference (detect.py) and export (export.py) on MacOS, Windows, and Ubuntu every 24 hours and on every commit.

@glenn-jocher
Member

glenn-jocher commented Sep 17, 2021

@Elektrikci21 it appears you may have environment problems. Please ensure you meet all dependency requirements if you are attempting to run YOLOv5 locally. If in doubt, create a new virtual Python environment, clone the latest repo (code changes daily), and pip install -r requirements.txt again. We also highly recommend using one of our verified environments below.

Requirements

Python>=3.6.0 with all requirements.txt installed including PyTorch>=1.7. To get started:

$ git clone https://github.com/ultralytics/yolov5
$ cd yolov5
$ pip install -r requirements.txt

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), validation (val.py), inference (detect.py) and export (export.py) on MacOS, Windows, and Ubuntu every 24 hours and on every commit.

@Elektrikci21
Author

@glenn-jocher I redid everything from scratch, but it still does not work.

@iceisfun

iceisfun commented Sep 21, 2021

@Elektrikci21 I suggest you add many more samples to your training dataset, ideally >1k images. In my experience, with anything less than 250-350 images per class you might have trouble getting meaningful results.

Also, make sure your labels are correct and, if possible, add background images to your dataset: images with a blank or empty label txt file that do not contain anything you have labeled. Ideally you would have over 1k samples and maybe 100 background images. I now have over 10k samples for most of my classes after starting with about 1k. Trying to train with too few images is a very common issue.

If you want a good dataset to train with, you can download a preconfigured dataset with good labels and see how your dataset differs from a large, well-labeled one.

https://docs.ultralytics.com/yolov5/tutorials/train_custom_data

This is the most important bit; if you want this to work, read the following and follow it.

https://docs.ultralytics.com/yolov5/tutorials/tips_for_best_training_results

Also make sure your training dataset is representative of the input data you expect -- e.g. don't train on all super-high-end images and then run inference on images from a 1950s webcam. Try to collect training data from real-world examples of how you expect data to come in.
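The per-class sample counts and background-image counts mentioned above can be checked directly from YOLO-format label files, where each non-empty line starts with a class id and an empty file marks a background image. A minimal sketch (the directory path in the usage example is hypothetical):

```python
from collections import Counter
from pathlib import Path

def count_instances(label_dir):
    """Count labeled instances per class id across YOLO-format *.txt
    files; an empty label file counts as one background image."""
    per_class, backgrounds = Counter(), 0
    for txt in Path(label_dir).glob("*.txt"):
        lines = [l for l in txt.read_text().splitlines() if l.strip()]
        if not lines:
            backgrounds += 1              # empty file => background image
            continue
        for line in lines:
            per_class[int(line.split()[0])] += 1  # first field is class id
    return per_class, backgrounds
```

For example, `count_instances("../datasets/coco128/labels/train2017")` would return the instance counts per class and the number of label-free background images.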

@MangoloD

I get the same problem: when training runs on the GPU, box, obj, and cls are nan and AP is 0, but when it runs on the CPU I get correct results.

@glenn-jocher
Member

glenn-jocher commented Sep 28, 2021

@MangoloD it appears you may have environment problems. Please ensure you meet all dependency requirements if you are attempting to run YOLOv5 locally. If in doubt, create a new virtual Python 3.8 environment, clone the latest repo (code changes daily), and pip install -r requirements.txt again. We also highly recommend using one of our verified environments below.

Requirements

Python>=3.6.0 with all requirements.txt installed including PyTorch>=1.7. To get started:

$ git clone https://github.com/ultralytics/yolov5
$ cd yolov5
$ pip install -r requirements.txt

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), validation (val.py), inference (detect.py) and export (export.py) on MacOS, Windows, and Ubuntu every 24 hours and on every commit.

@ayayaQ

ayayaQ commented Sep 29, 2021

Perhaps this is a GPU issue? I have a GPU from the 16 series like Elektrikci21, an Nvidia GTX 1660 Ti, and my training would always produce nan values from epoch 0. I fixed this by switching to an Nvidia GTX 1070 and training immediately worked as expected.

@Elektrikci21
Author

@ayayaQ your experiment makes sense. We could be more sure if we had more examples of this situation. I can add one more: I also tried image classification with tensorflow.keras, and my training results were always wrong. When I predict with the model, it always returns the same index and the same accuracy, which is 1.

@Elektrikci21
Author

@iceisfun I don't think it is about the samples, because when I train the same dataset with the same configuration in Google Colab, the results make sense and everything is okay.

@ayayaQ

ayayaQ commented Oct 1, 2021

I think this could be an issue with the version of cuDNN bundled with PyTorch. I noticed that one of the issues fixed in the cuDNN 8.2.2 release is stated as related to this GPU series:

NVIDIA Turing GTX 16xx users of cuDNN would observe invalid values in convolution output. This issue has been fixed in this release.
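"Invalid values in convolution output" of the kind this release note describes would surface as non-finite numbers (nan/inf) in activations and losses, which is exactly the nan box/obj/cls pattern reported above. A framework-agnostic sketch of such a check (the sample values are illustrative, not real conv outputs):

```python
import math

def first_invalid(values):
    """Return the index of the first non-finite value (nan or inf),
    or None if every value is a normal float."""
    for i, v in enumerate(values):
        if not math.isfinite(v):
            return i
    return None

# A healthy convolution output contains only finite activations:
assert first_invalid([0.1, -2.3, 4.5]) is None
# The cuDNN bug described above would surface as nan/inf values:
assert first_invalid([0.1, float("nan"), 4.5]) == 1
```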

@Elektrikci21
Author

@ayayaQ Thanks for your research and for sharing it with us. The latest version of PyTorch supports CUDA 10.2 and CUDA 11.1, but cuDNN 8.2.2 supports 10.2 and 11.4, so we have to use the 10.2 version. I will try it and edit this answer.

@Elektrikci21
Author

@ayayaQ I tested your suggestion and the problem is solved. I just changed the CUDA version from 11.1 to 10.2 and there is no problem anymore. I used this command:

pip3 install torch==1.9.1+cu102 torchvision==0.10.1+cu102 torchaudio===0.9.1 -f https://download.pytorch.org/whl/torch_stable.html

You can find the same command here.

@MangoloD If you have not solved the problem yet, please try this approach and let us know your results.

Thanks, everyone.

@ayayaQ

ayayaQ commented Oct 1, 2021

I can confirm with the 10.2 installation that my training is now working as expected on the GTX 1660 Ti.

@DoSquared

Hello everybody, I have a similar problem; I hope you can help me.
Training seems to be done:
image

Then I run the detection on a directory with one test image:
image

I cannot find any labels on the image I am testing!

@glenn-jocher
Member

glenn-jocher commented Jul 1, 2022

@DaliaMahdy 👋 Hello! Thanks for asking about improving YOLOv5 🚀 training results.

Most of the time good results can be obtained with no changes to the models or training settings, provided your dataset is sufficiently large and well labelled. If at first you don't get good results, there are steps you might be able to take to improve, but we always recommend users first train with all default settings before considering any changes. This helps establish a performance baseline and spot areas for improvement.

If you have questions about your training results we recommend you provide the maximum amount of information possible if you expect a helpful response, including results plots (train losses, val losses, P, R, mAP), PR curve, confusion matrix, training mosaics, test results and dataset statistics images such as labels.png. All of these are located in your project/name directory, typically yolov5/runs/train/exp.

We've put together a full guide for users looking to get the best results on their YOLOv5 trainings below.

Dataset

  • Images per class. ≥ 1500 images per class recommended
  • Instances per class. ≥ 10000 instances (labeled objects) per class recommended
  • Image variety. Must be representative of deployed environment. For real-world use cases we recommend images from different times of day, different seasons, different weather, different lighting, different angles, different sources (scraped online, collected locally, different cameras) etc.
  • Label consistency. All instances of all classes in all images must be labelled. Partial labelling will not work.
  • Label accuracy. Labels must closely enclose each object. No space should exist between an object and its bounding box. No objects should be missing a label.
  • Label verification. View train_batch*.jpg on train start to verify your labels appear correct, i.e. see example mosaic.
  • Background images. Background images are images with no objects that are added to a dataset to reduce False Positives (FP). We recommend about 0-10% background images to help reduce FPs (COCO has 1000 background images for reference, 1% of the total). No labels are required for background images.

COCO Analysis

Model Selection

Larger models like YOLOv5x and YOLOv5x6 will produce better results in nearly all cases, but have more parameters, require more CUDA memory to train, and are slower to run. For mobile deployments we recommend YOLOv5s/m, for cloud deployments we recommend YOLOv5l/x. See our README table for a full comparison of all models.

YOLOv5 Models

  • Start from Pretrained weights. Recommended for small to medium sized datasets (i.e. VOC, VisDrone, GlobalWheat). Pass the name of the model to the --weights argument. Models download automatically from the latest YOLOv5 release.
python train.py --data custom.yaml --weights yolov5s.pt
                                             yolov5m.pt
                                             yolov5l.pt
                                             yolov5x.pt
                                             custom_pretrained.pt
  • Start from Scratch. Recommended for large datasets (i.e. COCO, Objects365, OIv6). Pass the model architecture yaml you are interested in, along with an empty --weights '' argument:
python train.py --data custom.yaml --weights '' --cfg yolov5s.yaml
                                                      yolov5m.yaml
                                                      yolov5l.yaml
                                                      yolov5x.yaml

Training Settings

Before modifying anything, first train with default settings to establish a performance baseline. A full list of train.py settings can be found in the train.py argparser.

  • Epochs. Start with 300 epochs. If this overfits early then you can reduce epochs. If overfitting does not occur after 300 epochs, train longer, i.e. 600, 1200 etc epochs.
  • Image size. COCO trains at native resolution of --img 640, though due to the high amount of small objects in the dataset it can benefit from training at higher resolutions such as --img 1280. If there are many small objects then custom datasets will benefit from training at native or higher resolution. Best inference results are obtained at the same --img as the training was run at, i.e. if you train at --img 1280 you should also test and detect at --img 1280.
  • Batch size. Use the largest --batch-size that your hardware allows for. Small batch sizes produce poor batchnorm statistics and should be avoided.
  • Hyperparameters. Default hyperparameters are in hyp.scratch-low.yaml. We recommend you train with default hyperparameters first before thinking of modifying any. In general, increasing augmentation hyperparameters will reduce and delay overfitting, allowing for longer trainings and higher final mAP. Reduction in loss component gain hyperparameters like hyp['obj'] will help reduce overfitting in those specific loss components. For an automated method of optimizing these hyperparameters, see our Hyperparameter Evolution Tutorial.

Further Reading

If you'd like to know more a good place to start is Karpathy's 'Recipe for Training Neural Networks', which has great ideas for training that apply broadly across all ML domains: http://karpathy.github.io/2019/04/25/recipe/

Good luck 🍀 and let us know if you have any other questions!

@shubhambagwari

shubhambagwari commented Aug 14, 2022

@Elektrikci21 @DaliaMahdy
I am facing the same problem. I was also training a model, but it is unable to detect anything.

While training I get nan for box, obj, and cls. What could be the reason?
image

Here is the result{graph}.
results

During training it draws boxes around some objects, but when I test, the model is unable to detect anything.
The model files are saved as best.pt and last.pt; both are in PyTorch format.

@plastic-waste-database

I can confirm with the 10.2 installation that my training is now working as expected on the GTX 1660 Ti.

Anyone manage to try using Cuda 11.3?

@DoSquared

I realized I had a problem with the dataset on the Roboflow side; it worked when I fixed it.

@glenn-jocher
Member

@wtjasmine glad to hear that you were able to identify and fix the issue with your dataset. Dataset problems can often lead to unexpected results during training. Make sure to thoroughly review and validate your dataset to ensure it is well-labeled and representative of your target objects. If you encounter any further issues or have any additional questions, feel free to ask.

@wtjasmine

Hi, I haven't been able to fix the issue. I have checked my dataset labels and all objects are well labeled. I have followed all of these recommendations for my dataset:

  • Images per class. ≥ 1500 images per class recommended

  • Instances per class. ≥ 10000 instances (labeled objects) per class recommended

  • Image variety. Must be representative of deployed environment. For real-world use cases we recommend images from different times of day, different seasons, different weather, different lighting, different angles, different sources (scraped online, collected locally, different cameras) etc.

  • Label consistency. All instances of all classes in all images must be labelled. Partial labelling will not work.

  • Label accuracy. Labels must closely enclose each object. No space should exist between an object and its bounding box. No objects should be missing a label.

  • Label verification. View train_batch*.jpg on train start to verify your labels appear correct, i.e. see example mosaic.

  • Background images. Background images are images with no objects that are added to a dataset to reduce False Positives (FP). We recommend about 0-10% background images to help reduce FPs (COCO has 1000 background images for reference, 1% of the total). No labels are required for background images.

I tried to train for 300 epochs without changing any of the model's default settings; the training stopped halfway, after 100 epochs, because the metrics showed no improvement. Although the losses for both validation and training were still decreasing, metrics like precision, recall, and mAP seem stuck around 0.53. Does this mean the training has reached its final results and should stop? Since I am trying to detect small objects in images, could you please suggest some tips to improve the results? Thank you.

@glenn-jocher
Member

@wtjasmine hi there,

I understand that you have followed the recommended guidelines for your dataset, ensuring sufficient images per class, instances per class, image variety, label consistency, label accuracy, label verification, and background images.

Regarding your training, if the losses are still decreasing but the metrics like precision, recall, and mAP appear to be stagnant around 0.53, it might indicate that the training has reached a plateau and further training might not lead to significant improvements. However, there are a few tips you can try to potentially improve the results for detecting small objects:

  1. Adjust the model architecture: You can try modifying the YOLOv5 architecture to better handle small objects. Experiment with different backbone networks, feature pyramid networks, or anchor box configurations.

  2. Increase the input image size: By increasing the image size, you provide more detailed information for the model to detect small objects. However, note that this may increase the computational requirements and training time.

  3. Augment the dataset: Apply data augmentation techniques specifically aimed at enhancing small object detection, such as random scaling, translations, rotations, or adding noise. This can help the model generalize better to small objects.

  4. Adjust the confidence threshold: You can experiment with different confidence thresholds during inference to find an optimal balance between recall and precision, particularly for small objects.

  5. Fine-tune the hyperparameters: Explore different learning rates, weight decay values, or optimizer settings to find the best hyperparameters for your specific task.

Remember to monitor the training and validation metrics closely to evaluate the impact of these changes. Feel free to share any further questions or issues you may have. Good luck with your project!
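Tip 4 above (tuning the confidence threshold) can be prototyped offline on saved predictions before re-running detect.py with a different --conf-thres. A minimal sketch, with hypothetical (confidence, class) detections:

```python
def filter_detections(dets, conf_thres):
    """Keep detections at or above the confidence threshold.
    Lowering conf_thres trades precision for recall, which can help
    surface small, low-confidence objects."""
    return [d for d in dets if d[0] >= conf_thres]

# Hypothetical detections as (confidence, class) pairs:
dets = [(0.92, "car"), (0.31, "person"), (0.18, "person")]
assert len(filter_detections(dets, 0.25)) == 2   # detect.py default threshold
assert len(filter_detections(dets, 0.10)) == 3   # lower threshold keeps more
```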

@vrushank41

I'm getting this error after training all 100 epochs, but the last step is not completed, so the training never reaches 100%.

image

@glenn-jocher
Member

@vrushank41 the link you provided seems to be broken, and I'm unable to view the error image. Could you please double-check the link and ensure it is accessible to me? Additionally, if you could provide the error message or a description of the issue, I'd be more than happy to assist you further.

@vrushank41

image

This image shows the training code provided by the Ultralytics hub.

image

This image shows the error I get after all 100 epochs are completed. It is unable to move to the next step due to a missing file (here, a .csv file).

@glenn-jocher
Member

@vrushank41 It seems you're encountering an issue with a missing .csv file following the completion of the training process. Without being able to view the content from the provided links, it's challenging to identify the exact cause of the problem. However, it sounds like there might be an unexpected behavior at the end of the training process.

To better understand the issue, it would be helpful to have a bit more information:

  1. Could you provide the specific error message or describe the missing .csv file in more detail? This will help in diagnosing and troubleshooting the problem.

  2. Additionally, if you could share the specific steps you followed and the exact command used for training, it would provide context for understanding where the issue might have arisen.

Please feel free to provide additional details, and I'd be more than happy to help you resolve this issue.
