Trying to display it on my laptop's camera #13502
Comments
Well, I also encountered an issue like this.
Please examine the contents of the document carefully!
Hello! Thank you for sharing your code and detailing your issue. Let's work through this together to get your model displaying detections from your laptop's camera. First, ensure you have the latest versions of torch and ultralytics installed:

```
pip install --upgrade torch ultralytics
```

Your code looks mostly correct, but there are a few adjustments we can make to ensure everything runs smoothly. Here's a refined version of your script:

```python
import cv2
from ultralytics import YOLO

# Load the trained YOLOv8 model (adjust the path to your model file)
model = YOLO('box-obb.pt')

# Initialize webcam
cap = cv2.VideoCapture(0)  # 0 is the default device ID for the webcam
if not cap.isOpened():
    print("Error: Could not open webcam.")
    exit()

while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    if not ret:
        print("Failed to grab frame")
        break

    # Run the YOLOv8 model on the frame
    results = model(frame)

    # Extract the detections: each row is (xmin, ymin, xmax, ymax, confidence, class)
    detections = results[0].boxes.data.cpu().numpy()

    # Loop over detections and draw bounding boxes
    for det in detections:
        xmin, ymin, xmax, ymax, confidence, class_id = det
        if confidence > 0.5:  # Confidence threshold
            cv2.rectangle(frame, (int(xmin), int(ymin)), (int(xmax), int(ymax)), (0, 255, 0), 2)
            label = f"{model.names[int(class_id)]}: {confidence:.2f}"
            cv2.putText(frame, label, (int(xmin), int(ymin) - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)

    # Display the resulting frame
    cv2.imshow('YOLOv8 Box Detection', frame)

    # Break the loop on 'q' key press
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()
```

Key adjustments:
- Use `results[0].boxes.data` rather than `.xyxy` (which holds only the four coordinates), so each row also carries the confidence and class ID that the unpacking expects.
- Draw boxes and labels only for detections above a confidence threshold.
- Display each annotated frame with `cv2.imshow` and exit cleanly on a 'q' key press.
If you still encounter issues, please ensure your environment meets the requirements and try running the script again. If the problem persists, any error messages or additional context will help us assist you better. Feel free to refer to the documentation for more details on prediction modes and settings. Happy coding! 😊
I tried your suggestion and adjusted something in my code, and here is the error I got: AttributeError: 'NoneType' object has no attribute 'xyxy'
Since no detections are available, `results[0].boxes` is `None`. If you want to use OpenCV to show real-time detection results, you can use `results[0].plot()` to get an annotated frame. If you want to get the raw values of the detections, read them from `results[0].boxes` after checking that it is not `None`. I hope this helps.
@sunmooncode hello! Thank you for your patience and for providing the error details. It looks like the issue arises when no objects are detected, resulting in a `None` value for `results[0].boxes`. Here's an updated version of your script that includes handling for cases where no objects are detected:

```python
import cv2
from ultralytics import YOLO

# Load the trained YOLOv8 model (adjust the path to your model file)
model = YOLO('box-obb.pt')

# Initialize webcam
cap = cv2.VideoCapture(0)  # 0 is the default device ID for the webcam
if not cap.isOpened():
    print("Error: Could not open webcam.")
    exit()

while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    if not ret:
        print("Failed to grab frame")
        break

    # Run the YOLOv8 model on the frame
    results = model(frame)

    # Check if any detections were made
    if results and results[0].boxes is not None:
        # Extract the detections: each row is (xmin, ymin, xmax, ymax, confidence, class)
        detections = results[0].boxes.data.cpu().numpy()

        # Loop over detections and draw bounding boxes
        for det in detections:
            xmin, ymin, xmax, ymax, confidence, class_id = det
            if confidence > 0.5:  # Confidence threshold
                cv2.rectangle(frame, (int(xmin), int(ymin)), (int(xmax), int(ymax)), (0, 255, 0), 2)
                label = f"{model.names[int(class_id)]}: {confidence:.2f}"
                cv2.putText(frame, label, (int(xmin), int(ymin) - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)

    # Display the resulting frame
    cv2.imshow('YOLOv8 Box Detection', frame)

    # Break the loop on 'q' key press
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()
```

Key adjustments: the detection-drawing block is now guarded by a check that `results[0].boxes` is not `None`, so frames without detections are still displayed instead of raising an AttributeError. Alternatively, you can let Ultralytics draw the annotations for you:

```python
# Run the YOLOv8 model on the frame
results = model(frame)

# Plot the detections onto a copy of the frame
img = results[0].plot()

# Display the resulting frame
cv2.imshow('YOLOv8 Box Detection', img)
```

This should help handle cases where no objects are detected and ensure your script runs smoothly. If you continue to face issues, please ensure your environment is up-to-date with the latest versions of torch and ultralytics. Happy coding! 😊
Hello, thank you for your help! I was able to run it on my laptop; here is the result. May I ask how to fix it when it detects other objects/scenes as "parcels", even when I set the confidence threshold to 0.95? Currently I only want the model to detect parcels. Thank you!
To improve the model's specificity in detecting only "parcels" and reduce false positives for other objects or scenes, you can consider fine-tuning the model with adjusted class weights and additional negative examples in the training data.
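As a rough sketch of what such a fine-tuning configuration might look like (every name and value below is an illustrative assumption, not the original snippet; note that Ultralytics exposes an overall classification-loss gain `cls` rather than true per-class weights, which would require custom loss code):

```python
# Hypothetical fine-tuning configuration (illustrative values only).
# Ultralytics' `cls` argument scales the entire classification loss;
# it is not a per-class weight.
train_config = {
    "data": "parcel_data.yaml",  # assumed dataset config path
    "epochs": 100,
    "imgsz": 640,
    "cls": 1.0,       # classification loss gain (applies to all classes)
    "patience": 20,   # early stopping after 20 epochs without improvement
}

# Usage sketch (requires ultralytics and a prepared dataset):
# from ultralytics import YOLO
# model = YOLO("box-obb.pt")
# model.train(**train_config)
```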
It appears you're encountering an issue where the model misidentifies other objects as "parcels," even after setting a high confidence threshold. To address this and focus the model on parcel detection, consider the following steps:

Improve model training: retrain with more negative examples (objects and scenes that are not parcels) so the model learns what to ignore.

Post-processing: keep only detections whose class ID matches the parcel class:

```python
for det in detections:
    xmin, ymin, xmax, ymax, confidence, class_id = det
    if confidence > 0.95 and int(class_id) == parcel_class_id:  # assuming parcel_class_id is known
        # Draw bounding box and label
        cv2.rectangle(frame, (int(xmin), int(ymin)), (int(xmax), int(ymax)), (0, 255, 0), 2)
        label = f"{model.names[int(class_id)]}: {confidence:.2f}"
        cv2.putText(frame, label, (int(xmin), int(ymin) - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)
```
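The same filtering logic can be sketched as a standalone helper, independent of OpenCV (the detection tuples and the `parcel_class_id` value below are illustrative assumptions):

```python
def filter_parcels(detections, parcel_class_id, conf_thresh=0.95):
    """Keep only detections of the parcel class above the confidence threshold.

    Each detection is (xmin, ymin, xmax, ymax, confidence, class_id),
    matching the rows of results[0].boxes.data.
    """
    return [
        det for det in detections
        if det[4] > conf_thresh and int(det[5]) == parcel_class_id
    ]

# Example with made-up detections; class 1 is assumed to be "parcel":
dets = [
    (10, 10, 50, 50, 0.97, 1),   # confident parcel -> kept
    (60, 60, 90, 90, 0.99, 0),   # other object -> dropped
    (20, 30, 40, 60, 0.60, 1),   # low-confidence parcel -> dropped
]
print(filter_parcels(dets, parcel_class_id=1))  # keeps only the first tuple
```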
@1716775457damn hello! Thank you for your detailed comment. It sounds like you're encountering an issue where the model misidentifies other objects as "parcels," even after setting a high confidence threshold. Let's explore some strategies to improve the model's performance and focus on parcel detection.

Improve model training: add more varied negative examples to your dataset and retrain, so the model learns to discriminate parcels from other objects.

Post-processing: filter detections by class so only parcels are drawn.

Here's an example of how you can implement class filtering in your code:

```python
parcel_class_id = 1  # Replace with the actual class ID for parcels

for det in detections:
    xmin, ymin, xmax, ymax, confidence, class_id = det
    if confidence > 0.95 and int(class_id) == parcel_class_id:
        cv2.rectangle(frame, (int(xmin), int(ymin)), (int(xmax), int(ymax)), (0, 255, 0), 2)
        label = f"{model.names[int(class_id)]}: {confidence:.2f}"
        cv2.putText(frame, label, (int(xmin), int(ymin) - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)
```

Additional tips: verify the class ID mapping with `model.names`, and check that your training labels and class balance are correct.

By implementing these strategies, you should be able to improve the model's ability to specifically detect parcels while reducing false positives for other objects. If you continue to face issues, please ensure your environment is up-to-date with the latest versions of torch and ultralytics. Happy coding! 😊
Hello, I have gathered datasets in a real-environment scenario and labelled two classes, one for parcels and one for other objects. Is this good enough for fine-tuning and improving the model? I am also currently annotating 650+ images taken in a real environment. The goal is to only detect "parcels" and not other objects.
Hello @KennethEladistu,

Thank you for your detailed follow-up! It's great to hear that you've gathered a diverse dataset and are working on precise annotations. Here are some additional tips to ensure your model focuses on detecting parcels effectively:

Dataset and annotation tips: a two-class setup (parcel vs. other objects) is a good approach, since the second class gives the model explicit negative examples. Make sure the annotations are consistent and both classes are well represented.

Fine-tuning the model: given your goal to detect only parcels, train on your real-environment images, monitor the per-class metrics, and keep iterating until the parcel class reaches acceptable precision.
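For reference, a two-class Ultralytics dataset YAML for this setup might look like the following sketch (all paths and names are assumptions based on your description):

```yaml
# Hypothetical data.yaml for a two-class parcel dataset
path: datasets/parcels        # assumed dataset root
train: images/train
val: images/val

names:
  0: other
  1: parcel
```

The class IDs defined here are what determine the `parcel_class_id` used in the filtering snippets.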
Example code for class filtering: to ensure the model only reports parcels during inference, you can filter out detections that do not belong to the parcel class. Here's an example:

```python
parcel_class_id = 1  # Replace with the actual class ID for parcels

for det in detections:
    xmin, ymin, xmax, ymax, confidence, class_id = det
    if confidence > 0.95 and int(class_id) == parcel_class_id:
        cv2.rectangle(frame, (int(xmin), int(ymin)), (int(xmax), int(ymax)), (0, 255, 0), 2)
        label = f"{model.names[int(class_id)]}: {confidence:.2f}"
        cv2.putText(frame, label, (int(xmin), int(ymin) - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)
```

Evaluation and iteration: evaluate on a held-out validation set, inspect the false positives, and add similar negative examples to the training data where needed.

Keeping your environment updated: ensure your environment is up-to-date with the latest versions of torch and ultralytics.

If you continue to face issues, providing a minimum reproducible example would be very helpful for further investigation. You can refer to our guide on creating a minimum reproducible example. Happy coding! 😊
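For the evaluation step, per-class precision over a batch of predictions can be sketched as follows (the counts below are invented for illustration):

```python
def precision(true_positives, false_positives):
    """Precision = TP / (TP + FP); returns 0.0 when there are no predictions."""
    total = true_positives + false_positives
    return true_positives / total if total else 0.0

# Illustrative numbers: 45 correct parcel detections, 5 false alarms
print(precision(45, 5))  # -> 0.9
```

Tracking this number per class across training runs makes it easy to see whether added negative examples are actually reducing false positives.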
Search before asking
Question
Hello, currently I have trained my dataset on Roboflow and downloaded the "best.pt" file from Google Colab. I want my model to work on my laptop's camera and detect the parcels I trained it on. Why is it not displaying anything? Here is the code for my inference.py:
```python
import cv2
from ultralytics import YOLO

# Load the trained YOLOv8 model (adjust the path to your model file)
model = YOLO('box-obb.pt')

# Initialize webcam
cap = cv2.VideoCapture(0)  # 0 is the default device ID for the webcam
if not cap.isOpened():
    print("Error: Could not open webcam.")
    exit()

while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    if not ret:
        print("Failed to grab frame")
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()
```
Additional
No response