modes/track/ #7906
Replies: 86 comments 224 replies
-
Can I run two models simultaneously on one video? I want two models to work simultaneously with cumulative results. Is that possible? Please let me know. Thanks in advance!
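There is no built-in way to run two models in a single track/predict call, but you can run both on each frame and combine their outputs yourself. A minimal sketch of the combining step, assuming two already-loaded models; the helper name merge_detections and the "model_a"/"model_b" tags are illustrative, not Ultralytics API:

```python
def merge_detections(dets_a, dets_b):
    """Concatenate per-frame detections from two models, tagging each
    record with the model it came from."""
    return [("model_a", d) for d in dets_a] + [("model_b", d) for d in dets_b]

# Untested sketch of the per-frame loop:
#   ra = model_a.predict(frame)[0]
#   rb = model_b.predict(frame)[0]
#   combined = merge_detections(ra.boxes.data.tolist(), rb.boxes.data.tolist())

print(merge_detections(["car"], ["plate", "truck"]))
```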
-
Hi, first of all, I have been loving working with YOLOv8. Great tool! However, I have been having difficulty with a certain task. I want to use model.track on videos that I have, with save_crop=True, but save with a naming convention that lets me follow each person's ID. Currently, save_crop just gives me the cropped images of the detected objects, but there is no way to know which frame of the video a crop came from, nor which ID is attached to which cropped image. The visualization through cv2.imshow shows the IDs across the different frames, but I can't find a way to save them. The naming convention I am looking for is something like this: "frame_30_ID_1.jpg". My current code looks something like this:

import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # load model
video_path = "path/to/video.mp4"
cap = cv2.VideoCapture(video_path)
ret = True
while ret:
    ret, frame = cap.read()
    if ret:
        results = model.track(frame, persist=True, save_crop=True)
cap.release()

Any help would be greatly appreciated! Thanks!
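One way to get crops named by frame and ID is to skip save_crop and write the crops yourself from the tracked boxes. A minimal sketch, assuming you keep your own frame counter; build_crop_name and the loop shown in comments are illustrative, not part of the Ultralytics API:

```python
def build_crop_name(frame_idx: int, track_id: int) -> str:
    """Build a crop filename such as 'frame_30_ID_1.jpg'."""
    return f"frame_{frame_idx}_ID_{track_id}.jpg"

# Untested sketch of the saving step inside the tracking loop:
#   results = model.track(frame, persist=True)
#   boxes = results[0].boxes
#   if boxes.id is not None:
#       for (x1, y1, x2, y2), tid in zip(boxes.xyxy.int().tolist(),
#                                        boxes.id.int().tolist()):
#           cv2.imwrite(build_crop_name(frame_idx, tid), frame[y1:y2, x1:x2])

print(build_crop_name(30, 1))  # frame_30_ID_1.jpg
```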
-
Hi @pderrenger. Can I run the models using my phone's camera? Can you please share the code to invoke my mobile's camera to test the model? Thanks in advance.
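Many phone apps (for example "IP Webcam" on Android) expose the camera as a network stream, and cv2.VideoCapture or model.track can read that URL directly. A minimal sketch; the '/video' path and port are app-specific assumptions, not a fixed standard:

```python
def phone_stream_url(ip: str, port: int = 8080) -> str:
    """Stream URL exposed by a typical IP-camera phone app; the
    '/video' path is app-specific, so adjust it for your app."""
    return f"http://{ip}:{port}/video"

# Untested sketch of using the stream with Ultralytics:
#   from ultralytics import YOLO
#   model = YOLO("yolov8n.pt")
#   model.track(source=phone_stream_url("192.168.1.42"), show=True)

print(phone_stream_url("192.168.1.42"))
```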
-
Hi, help me understand why I get this error when tracking with a segmentation model. My ultimate goal is to use a custom car-plate segmentation model for tracking. Thank you very much.
-
YOLOv8 has very high overall practicality. Can I implement tracking with two cameras? I would like a car tracked by camera A to keep the same track ID when it moves to camera B, but currently an ID switch always happens. Is it because of the model's accuracy?

def cam2():
    cap = cam

a = threading.Thread(target=cam1)
a.start()
-
Hey there,
-
Hi, I saw that I can use an OpenVINO IR format model just like any other PyTorch model and then run tracking as normal. I was wondering how I would load the IR '.xml' and '.bin' files as arguments into YOLO(), or whether I should load my model using the openvino library instead? Thanks.
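As far as I know you don't pass the .xml/.bin files individually; YOLO() accepts the exported OpenVINO directory that contains them. A minimal sketch, assuming the directory follows the usual '<name>_openvino_model/' naming that Ultralytics' export produces:

```python
def openvino_model_dir(pt_name: str) -> str:
    """Directory that Ultralytics' OpenVINO export writes, holding the
    .xml and .bin pair; YOLO() can load this directory directly."""
    return pt_name.removesuffix(".pt") + "_openvino_model/"

# Untested sketch of the export/load round trip:
#   from ultralytics import YOLO
#   YOLO("yolov8n.pt").export(format="openvino")  # writes yolov8n_openvino_model/
#   ov_model = YOLO(openvino_model_dir("yolov8n.pt"))
#   ov_model.track("path/to/video.mp4")

print(openvino_model_dir("yolov8n.pt"))
```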
-
Can I use a YOLOv8 model to track and re-identify a person, keeping the same ID assigned to them across multiple camera feeds?
-
How can we only track moving objects in the "Plotting Tracks Over Time" code:

from collections import defaultdict
import cv2
from ultralytics import YOLO

# Load the YOLOv8 model
model = YOLO('yolov8n.pt')

# Open the video file
video_path = "path/to/video.mp4"
cap = cv2.VideoCapture(video_path)

# Store the track history
track_history = defaultdict(lambda: [])

# Loop through the video frames
while cap.isOpened():
    ...

# Release the video capture object and close the display window
cap.release()
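One way to keep only moving objects is to measure each track's displacement over its stored history and skip IDs that barely move. A minimal sketch of that filter; is_moving and the 5-pixel threshold are illustrative choices, not Ultralytics API:

```python
import math

def is_moving(track_points, min_disp: float = 5.0) -> bool:
    """True when the first and last stored track centers are more than
    min_disp pixels apart."""
    if len(track_points) < 2:
        return False
    (x0, y0), (x1, y1) = track_points[0], track_points[-1]
    return math.hypot(x1 - x0, y1 - y0) > min_disp

# In the plotting loop you would only draw a track when
# is_moving(track_history[track_id]) is True (untested sketch).

print(is_moving([(0, 0), (30, 40)]))  # True: 50 px of displacement
```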
-
import cv2
from ultralytics import YOLO

model = YOLO('yolov8_custom_train.engine', task="detect")

# Path to the input video file
input_video_path = '/content/gdrive/MyDrive/yolov8-tensorrt/inference/output_video.mp4'

# Path to the output video file
output_video_path = 'outputtest_video.mp4'

# Define the coordinates of the polygon
polygon_points = [(670, 66), (1237, 550), (514, 1054), (161, 295)]

# Open the input video file
cap = cv2.VideoCapture(input_video_path)

# Get video properties
frame_width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))

# Define the codec and create VideoWriter object
fourcc = cv2.VideoWriter_fourcc(*'mp4v')

# Function for finding the centroid
def calculate_centroid(box):
    ...

# Function to check if two bounding boxes overlap
def check_overlap(box1, box2):
    ...

# Read until video is completed
while cap.isOpened():
    ...

# Release video objects
cap.release()

# Close all OpenCV windows
cv2.destroyAllWindows()

In this I am tracking the label "Person", but within the next 2 to 3 frames the IDs change; is there any solution for this?
-
What is the difference between these attributes of results[0].boxes:
-
Is it possible to use our own weights as the model to track, or must we use yolov8n.pt?
-
So I am using YOLOv8 for my current project; it's been a breeze so far. I do have a question on the tracking method provided by YOLOv8. When I am using the generic yolov8n model (or even a custom model trained on a few objects), I know I can filter out the things that don't interest me by their class ID, as below:
But when I catch an object that I am interested in, can I, at that time or at that frame, issue a track command to start tracking it? If it can be done, can you tell me how? A short example would be even better! Thanks in advance.
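As far as I know there is no "start tracking now" command; the tracker assigns IDs to every detection, so the usual pattern is to note your target's track ID the first time you see it and then filter each frame's results to that ID. A minimal sketch; keep_target and the example values are illustrative, not Ultralytics API:

```python
def keep_target(track_ids, boxes, target_id):
    """Keep only the boxes whose track ID matches the chosen target."""
    return [box for tid, box in zip(track_ids, boxes) if tid == target_id]

# Untested sketch against a model.track(...) result:
#   boxes = results[0].boxes
#   ids = boxes.id.int().tolist() if boxes.id is not None else []
#   mine = keep_target(ids, boxes.xyxy.tolist(), target_id=7)

print(keep_target([3, 7, 9], ["box_a", "box_b", "box_c"], 7))
```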
-
Hi, I want some detailed help and guidance on how to use custom tracker models with my custom YOLOv8 pose model. I am facing the re-identification problem with bytetrack.yaml, so I think I should use StrongSORT or DeepSORT. I would like the ultralytics team to help me select a tracker model (or use multiple tracker models) and to guide me properly on how to use them with my custom-trained YOLOv8 model.
-
import random
import cv2
from ultralytics import YOLO

# Opening the file in read mode
my_file = open("utils/coco.txt", "r")

# Reading the file
data = my_file.read()

# Splitting the text when a newline ('\n') is seen
class_list = data.split("\n")

# Generate random colors for class list
detection_colors = []

# Load a pretrained YOLOv8n model
model = YOLO("weights/yolov8n.pt", "v8")

# Vals to resize video frames | small frame optimises the run
frame_wid = 640

def CarBehaviour(frame, color_threshold=1100):
    ...

def detect_and_draw(frame, model, class_list, detection_colors):
    ...

# Open video capture
cap = cv2.VideoCapture("/home/opencv_env/Vehicle-rear-lights-analyser-master/testing_data/road_2.mp4")
if not cap.isOpened():
    ...

while True:
    ...

# When everything done, release the capture
cap.release()
-
I have a question: how can I get tracking metrics such as MOTA and MOTP from a video? The following is my tracking code:

from collections import defaultdict
import cv2
from ultralytics import YOLO

# Load the YOLOv8 model
model = YOLO('3.pt')

# Open the video file
video_path = "5.mp4"
cap = cv2.VideoCapture(video_path)

# Store the track history
track_history = defaultdict(lambda: [])

output_path = 'video/5.mp4'
fps = cap.get(cv2.CAP_PROP_FPS)
fourcc = cv2.VideoWriter_fourcc(*'XVID')

# Loop through the video frames
while cap.isOpened():
    ...

# Release the video capture object and close the display window
cap.release()
out.release()
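Ultralytics itself doesn't report MOT metrics; tools such as the py-motmetrics package or TrackEval compute them from per-frame ground truth. The definitions are simple enough to sketch by hand, though: MOTA = 1 - (FN + FP + IDSW) / GT, and MOTP is the mean matching distance (or IoU error) over matched pairs. A minimal sketch under those standard definitions, with all counts illustrative:

```python
def mota(fn: int, fp: int, idsw: int, gt: int) -> float:
    """MOTA = 1 - (misses + false positives + ID switches) / GT objects."""
    return 1.0 - (fn + fp + idsw) / gt

def motp(match_dists) -> float:
    """MOTP = mean distance (or 1 - IoU) over all matched pairs."""
    return sum(match_dists) / len(match_dists)

print(mota(fn=10, fp=5, idsw=2, gt=100))  # 0.83
print(motp([0.1, 0.2, 0.3]))
```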
-
Hello, I am trying to run the above code in Jupyter Notebook. How can I solve this issue? Thank you!
-
While running tracking inference on a video, is it possible to tell whether it is the object detection model's performance that is hurting the overall tracking result, or the tracker's performance? (There were some objects with no detections (bboxes) at all.) If not, how should I approach this? Can you give some insight into it? I will figure it out. Thank you.
-
Hello,
Thank you!
-
Does this track function return the same thing as the predict function, just with an added ID, or does it return something else?
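As I understand it, model.track() returns the same Results objects as model.predict(), with boxes.id added, and boxes.id is None on frames where no track has been confirmed yet. A small sketch of the guard that difference requires, using a plain list to stand in for the id tensor:

```python
def extract_ids(ids):
    """Normalise boxes.id, which is None until a track is confirmed,
    to a plain (possibly empty) list; 'ids' stands in for boxes.id."""
    return [] if ids is None else list(ids)

print(extract_ids(None))       # []
print(extract_ids([1, 2, 3]))  # [1, 2, 3]
```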
-
# @title Speed Tracker
import cv2
from collections import defaultdict
from ultralytics.utils.checks import check_imshow

track_history = defaultdict(lambda: [])
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
result = cv2.VideoWriter("SpeedTracker.avi", ...)
time_per_frame = 1.0 / fps
while cap.isOpened():
    ...
result.release()

How do we calculate the accuracy for ByteTrack tracking?
-
Hello, I have a few questions.
1.1
1.2
Thank you so much!
-
Hi,

# Load a pretrained YOLOv8n model
model = YOLO('../ultralytics/ultralytics/yolo/v8/runs/weights/best_car_1.pt')
img = Image.open('./image_1.jpg')

# Define line points
line_position = 0
width = int(img_array.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(img_array.get(cv2.CAP_PROP_FRAME_HEIGHT))
x_line = int(line_position * width / 100)
pt11 = (x_line - 100, height)
region_points = [pt1, pt2, pt11, pt22]

# Keep track of detected defect IDs
result_dict = {}

def image_pred(image):
    ...

image_pred(img)
-
I tried to write this code to detect each vehicle, assign it a unique ID, follow each vehicle until it disappears from the video, and save the time, speed, and acceleration information to an Excel file.

# Mount Google Drive
drive.mount('/content/drive')

# Load the YOLO model
model = YOLO('yolov8n.pt')

# Open the video
video_path = '/content/drive/MyDrive/codejdid /video3.mp4'
fps = cap.get(cv2.CAP_PROP_FPS)

# Create a VideoWriter object to save the video
fourcc = cv2.VideoWriter_fourcc(*'mp4v')

# Dictionary to record vehicle information
vehicle_data = defaultdict(lambda: {'timestamps': [], 'positions': [], 'speeds': [], 'accelerations': []})

while cap.isOpened():
    ...

cap.release()

# Convert the data to a DataFrame
data = []
df = pd.DataFrame(data, columns=['ID', 'Timestamp', 'Position', 'Speed', 'Acceleration'])

# Save the DataFrame to an Excel file
excel_path = '/content/drive/MyDrive/codejdid/vehicle_tracking_results.xlsx'

# Check whether the file exists and download it
if os.path.exists(excel_path):
    ...

ValueError: not enough values to unpack (expected 4, got 1)
-
How is the pixel-to-metre conversion done, so that I can determine the detected vehicles' x and y positions in metres, their speeds in metres per second, and their accelerations in metres per second squared?
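For a roughly top-down or far-field view, the usual shortcut is to measure an object of known real length in the image (a lane marking, for instance) and derive a metres-per-pixel scale; position, speed, and acceleration then follow by finite differences over frame times. For an oblique camera you need a homography instead, since the scale varies across the image. A minimal sketch, with every number illustrative:

```python
def metres_per_pixel(known_length_m: float, known_length_px: float) -> float:
    """Scale derived from a reference of known real-world length."""
    return known_length_m / known_length_px

def speed_mps(p0_px, p1_px, dt_s: float, scale: float) -> float:
    """Speed from two pixel positions observed dt_s seconds apart.
    Acceleration follows as (v1 - v0) / dt over consecutive speeds."""
    dx = (p1_px[0] - p0_px[0]) * scale
    dy = (p1_px[1] - p0_px[1]) * scale
    return (dx * dx + dy * dy) ** 0.5 / dt_s

scale = metres_per_pixel(3.0, 60.0)  # a 3 m marking spans 60 px -> 0.05 m/px
print(speed_mps((0, 0), (0, 40), dt_s=0.2, scale=scale))  # ~10.0 m/s
```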
-
Hello. First of all, I am grateful for the great effort and excellent tools from ultralytics. I have a problem: I don't get any track_id in the inference; could you help me? Context: I have a model that I trained to segment an object. The model predicts correctly, and in fact its false positives are rare, but due to the difficulty of the problem the confidence of the predictions is very low (0.02 and up). With this I want to build a tracker for the objects in video. The segmentation model works well and does not lose any true positives, but its confidence, as I said, is low. The problem is that the tracker cannot assign any track_id, and that is a problem for me because I need that track_id. I've used a custom tracker config to try to improve the results, but still can't. My code is simple:

from collections import defaultdict
import cv2
from ultralytics import YOLO

track_history = defaultdict(lambda: [])
model = YOLO("path/to/best.pt")
video_path = "path/to/video.mp4"
cap = cv2.VideoCapture(video_path)
while cap.isOpened():
    success, frame = cap.read()
    if success:
        results = model.track(frame, show=True, tracker="custom_track.yaml", conf=0.001, persist=True)
        track_ids = results[0].boxes.id.int().cpu().tolist()
        print(track_ids)  # problem: there aren't any ids
        >> [None, None]
        confs = results[0].boxes.conf
        print(confs)  # yes, there are some objects
        >> [0.03, 0.09]
    else:
        break
cap.release()
cv2.destroyAllWindows()

My custom_track.yaml:

tracker_type: bytetrack
track_high_thresh: 0.01
new_track_thresh: 0.01
track_low_thresh: 0.01
track_buffer: 60
match_thresh: 0.65

Could someone please help me?
-
0: 384x640 2 heads, 330.1ms
cls: tensor([0.])
cls: tensor([0.])
If the object is being detected with more than 0.9 confidence, why is tracking not happening?
-
MASA: Matching Anything By Segmenting Anything (CVPR24)
-
Hello there!
-
modes/track/
Learn how to use Ultralytics YOLO for object tracking in video streams. Guides to use different trackers and customise tracker configurations.
https://docs.ultralytics.com/modes/track/