
How to change confidence threshold in model.train? #11707

Closed
Nadayz opened this issue May 6, 2024 · 6 comments
Labels: question (Further information is requested)


Nadayz commented May 6, 2024


Question

Hello,

I want to change the confidence threshold in train mode; how can I do that?

model = YOLO("yolov8m.pt")
results = model.train(data="data.yaml",optimizer="Adam")

Additional

No response

Nadayz added the question label May 6, 2024

jdiaz97 commented May 6, 2024

I don't think that's a parameter; see https://docs.ultralytics.com/modes/train/#train-settings

@glenn-jocher (Member) commented

Hello,

You're correct; the confidence threshold is not directly adjustable during the training phase using model.train(). The confidence threshold is typically used during inference to filter out detections based on their confidence scores. When training, the model learns to predict bounding boxes and class confidences, and the actual threshold to use can be set later during validation or prediction.
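
For example, a minimal sketch of where that threshold would be applied instead (the source path and conf value below are just placeholders):

from ultralytics import YOLO

model = YOLO("yolov8m.pt")

# Apply the confidence threshold at prediction time
results = model.predict("image.jpg", conf=0.5)

# Or at validation time, to filter detections before metrics are computed
metrics = model.val(data="data.yaml", conf=0.5)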

If you have further questions or need assistance with anything else, feel free to ask! Happy coding! 🚀


jihunddok commented May 7, 2024

Umm... I understood your answer, but it doesn't fit my case.
(Sorry for my poor English.)

Anyway, to summarize what I want to do:

  1. I have 17 segmentation models, each trained to segment a single class.
  2. I use these models to predict 17 kinds of objects.
  3. I run every model on the same image.
  4. I collect each model's predictions and combine the results.
  5. I draw the masks from the combined results with OpenCV.
  6. In this setup I run into a critical problem, like this:

[image]

  1. Here, two models detect and mask the same object, and their confidences are very similar and both high.
  2. Furthermore, the object is a pen, but the pencil model predicted it with the higher confidence.
  3. So I want to solve this problem by post-processing the results:
  • check whether two masks cover the same area,
  • apply a priority rule for overlapping areas,
  • run this check over every model's predictions,
  • keeping in mind that more than 30 objects can be detected in one image.
  4. Is there any solution for this case, other than just improving model performance?

@glenn-jocher (Member) commented

@jihunddok hello! Thank you for providing the clear summary of your scenario. It seems like the challenge you're facing is mainly about handling overlapping masks and prioritizing certain detections when multiple models predict different objects in similar areas.

One effective approach could be to implement a post-processing step where you can merge or prioritize overlapping masks based on certain criteria. Here’s a basic strategy:

  1. Intersection Over Union (IoU) - Calculate the IoU for overlapping masks. If IoU exceeds a threshold (e.g., 0.5), you may consider them as overlapping.

  2. Confidence Score Priority - In cases of overlap, keep the mask with the higher confidence score and discard the lower one.

  3. Class Priority List - If you know certain objects (like 'pen' over 'pencil') are more likely or important, you can create a priority list. Use this list to determine which mask to keep when overlaps occur.

Here’s a simple implementation sketch:

import numpy as np

def process_predictions(predictions, confidence_threshold=0.5, iou_threshold=0.5):
    # predictions is a list of tuples (mask, confidence, class_id), where each
    # mask is a binary NumPy array with the same shape as the image.
    # Drop low-confidence predictions, then sort by confidence score (highest first)
    predictions = [p for p in predictions if p[1] >= confidence_threshold]
    predictions.sort(key=lambda x: x[1], reverse=True)

    final_masks = []
    for current_mask, current_conf, current_class in predictions:
        keep = True
        for final_mask, _, _ in final_masks:
            iou = calculate_iou(current_mask, final_mask)
            if iou > iou_threshold:
                # Overlaps a mask we already kept with a higher confidence
                keep = False
                break
        if keep:
            final_masks.append((current_mask, current_conf, current_class))

    return final_masks

# Utility function to calculate IoU between two binary masks
def calculate_iou(mask1, mask2):
    mask1, mask2 = mask1.astype(bool), mask2.astype(bool)
    intersection = np.logical_and(mask1, mask2).sum()
    union = np.logical_or(mask1, mask2).sum()
    return intersection / union if union > 0 else 0.0

This approach does not require improving model performance; it simply manages the outputs of the multiple models more intelligently. Use OpenCV to draw the masks in final_masks, which should now have reduced overlaps and prioritized detections. Adjusting iou_threshold and confidence_threshold in process_predictions gives you control over how aggressively overlapping or low-confidence masks are discarded.
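
If it helps, here is a rough sketch of that drawing step, assuming each mask is a binary NumPy array with the same height and width as the image (the color and alpha values are arbitrary):

import cv2

def draw_masks(image, final_masks, color=(0, 255, 0), alpha=0.4):
    # Overlay every kept mask on the image as a translucent colored region
    overlay = image.copy()
    for mask, conf, class_id in final_masks:
        overlay[mask.astype(bool)] = color
    # Blend the colored overlay with the original image
    return cv2.addWeighted(overlay, alpha, image, 1 - alpha, 0)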

Please test and adapt the code as necessary for your specific application context. Hope this helps! Let me know if you have further questions or need more specific examples. Happy coding! 🚀

Nadayz (Author) commented May 8, 2024

> Hello,
>
> You're correct; the confidence threshold is not directly adjustable during the training phase using model.train(). The confidence threshold is typically used during inference to filter out detections based on their confidence scores. When training, the model learns to predict bounding boxes and class confidences, and the actual threshold to use can be set later during validation or prediction.
>
> If you have further questions or need assistance with anything else, feel free to ask! Happy coding! 🚀

Thanks for your answer.
OK. When I train the model on my data, the precision is good but the recall is very low.
How can I fix this problem?

results.csv

@glenn-jocher (Member) commented

Hello,

Low recall often indicates that your model is missing detections, leading to fewer true positives. Here are a few suggestions you might find helpful:

  1. Adjust the IoU threshold during training - Lowering the Intersection over Union (IoU) threshold may increase the number of positives the model detects as it relaxes the criteria for a positive match.

  2. Data Augmentation - Consider augmenting your training data with varied transformations to help the model generalize better, potentially detecting more true positives (see the sketch after this list).

  3. Reevaluate the dataset - Ensure that your dataset is balanced and annotations are accurate. Sometimes, an imbalance or inaccurate annotations can lead to lower recall.
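
For the augmentation point, here is a rough sketch of turning up a few of the built-in augmentation settings in the train call (the values below are illustrative, not tuned recommendations):

from ultralytics import YOLO

model = YOLO('yolov8m.pt')

# Illustrative augmentation settings; tune these for your dataset
results = model.train(
    data='data.yaml',
    fliplr=0.5,     # horizontal flip probability
    degrees=10.0,   # random rotation range in degrees
    translate=0.2,  # random translation fraction
    scale=0.5,      # random scaling gain
    mosaic=1.0,     # mosaic augmentation probability
)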

Here's a quick example of how to adjust the IoU threshold, if your installed Ultralytics version still exposes one (iou_t is the legacy YOLOv5-style hyperparameter and is not listed among the current documented train settings):

from ultralytics import YOLO

# Load a model
model = YOLO('path_to_model.pt')

# Set a lower IoU training threshold (only where your version supports it)
results = model.train(data='data.yaml', iou_t=0.3)

Where it is supported, lowering iou_t in the training settings might help increase recall; otherwise, the documented way to trade precision for recall is to lower the conf threshold at validation or prediction time.
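
As a rough sketch of that alternative (the model path and threshold values are placeholders, and the mp/mr attributes assume the current Ultralytics metrics API), you could sweep the confidence threshold at validation time and compare recall:

from ultralytics import YOLO

model = YOLO('path_to_model.pt')

# Evaluate at several confidence thresholds; recall typically rises as conf decreases
for conf in (0.1, 0.25, 0.5):
    metrics = model.val(data='data.yaml', conf=conf)
    print(conf, metrics.box.mp, metrics.box.mr)  # mean precision / mean recall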

Reviewing the 'results.csv' you attached could provide more insight into specific reasons why recall might be low with your current settings. Looking forward to hearing back from you! 😊

Nadayz closed this as completed May 22, 2024