How to print the results of people ID, coordinates of the box in each second as csv #11762

ArielZhanghj opened this issue May 8, 2024 · 5 comments
Labels
question Further information is requested

Comments

@ArielZhanghj

ArielZhanghj commented May 8, 2024

Question

After printing the video results, I want to know how to output the identification results and use this information to draw trajectories and a heat map over a period of the video.
I want to save the frame number, person ID, and box coordinates every 20 frames (the video frame rate is 20 fps) as a CSV, as shown in the figure.
[screenshot of the desired CSV layout]

Additional

No response

@ArielZhanghj ArielZhanghj added the question Further information is requested label May 8, 2024
@glenn-jocher
Member

Hello! It sounds like you're looking to extract and save trajectory and tracking details into a CSV file at a specific frame rate. Here's a concise way to achieve this with the YOLOv8 model:

You'll need to run the tracker, collect results, and then save the relevant data (ID, bounding box coordinates, frame number) to a CSV.

import csv
from ultralytics import YOLO

# Load your model
model = YOLO('path/to/model.pt')

# Set up the CSV to save the data
with open('output.csv', 'w', newline='') as file:
    writer = csv.writer(file)
    writer.writerow(["Frame", "ID", "X", "Y", "Width", "Height"])

    # model.track with stream=True yields one Results object per frame
    for frame_idx, result in enumerate(model.track(source='path/to/video.mp4', stream=True)):
        if frame_idx % 20 != 0:  # capture every 20th frame
            continue
        boxes = result.boxes
        if boxes is None or boxes.id is None:  # no tracked objects in this frame
            continue
        for xyxy, track_id, cls in zip(boxes.xyxy, boxes.id, boxes.cls):
            if int(cls) == 0:  # class '0' for people, change this based on your class labels
                x1, y1, x2, y2 = map(int, xyxy)
                writer.writerow([frame_idx, int(track_id), x1, y1, x2 - x1, y2 - y1])

This script will save every 20th frame's tracking info into output.csv, adjusting for the class label that corresponds to 'people' in your model. Update class labels as necessary for your specific dataset. Adjust the path to your model and video accordingly. Happy tracking! 🚀
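To later rebuild per-person trajectories from output.csv, one option is to group the saved rows by track ID and take each box's center point. A minimal sketch, where the sample rows are hypothetical stand-ins in the same Frame/ID/X/Y/Width/Height format the script writes:

```python
from collections import defaultdict

# Hypothetical sample rows in the Frame, ID, X, Y, Width, Height format
rows = [
    [0, 1, 100, 200, 40, 80],
    [20, 1, 110, 205, 40, 80],
    [0, 2, 300, 150, 35, 75],
]

# Group box centers by track ID so each ID maps to an ordered trajectory
trajectories = defaultdict(list)
for frame, track_id, x, y, w, h in rows:
    cx, cy = x + w / 2, y + h / 2  # center of the bounding box
    trajectories[track_id].append((frame, cx, cy))
```

In practice you would fill `rows` with `csv.reader` over output.csv instead of the literals above.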

@ArielZhanghj
Author

Thank you so much!

@glenn-jocher
Member

You're welcome! I'm glad I could help. If you have any more questions or need further assistance as you work with YOLOv8, feel free to ask. Happy coding! 😊

@ArielZhanghj
Author


Thank you for your time in advance. There are still two questions about saving the trajectory and tracking details.
Firstly, I found that adding 'save_txt' to 'results = model.track()' saves the text shown in the figure below. The last column shows all the pedestrian IDs recognized in each frame, but I'd also like to know which frame each recognition came from, so I can recover the timing later. Is there anything that can be done?
Secondly, to obtain the trajectories and display all pedestrian trajectories recognized within a period of time in one picture, how should it be done? Is there a better way than my previous plan? (My plan was to get the coordinates of all pedestrians in each frame and concatenate the coordinates of pedestrians with the same ID; in practice, when the video is output, the trajectories are already drawn on it.)
[screenshot of the save_txt output]

@glenn-jocher
Member

@ArielZhanghj hello! I'm here to help with your questions about tracking and trajectory analysis using YOLOv8. 😊

  1. Frame Information in Output: To include frame information in your output, you can record the frame index yourself while iterating over the streamed tracking results. Here’s a quick example of how you might adjust your code:

    for frame_idx, result in enumerate(model.track(source='video.mp4', stream=True)):
        # result.boxes holds this frame's detections; frame_idx is the frame number
        # save or process the results here alongside frame_idx

    This will allow you to keep track of the frame number alongside the detections.

  2. Plotting Trajectories: Your approach to concatenate coordinates of pedestrians with the same ID over frames is correct. To visualize trajectories, you can use libraries like matplotlib or cv2 to draw lines or curves that connect these points across frames. Here’s a basic way to plot trajectories using OpenCV:

    import cv2

    # Assuming 'tracks' is a dictionary where keys are track_ids and values are
    # lists of (x, y) coordinates, and 'video_frames' is an iterable of frames
    for frame in video_frames:
        for track_id, coordinates in tracks.items():
            # Connect consecutive points of each track with a line segment
            for i in range(1, len(coordinates)):
                cv2.line(frame, tuple(coordinates[i - 1]), tuple(coordinates[i]),
                         color=(0, 255, 0), thickness=2)

        cv2.imshow('Trajectories', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    cv2.destroyAllWindows()

This method should help you visualize pedestrian paths effectively. If you need more detailed guidance or further assistance, feel free to ask. Happy tracking! 🚀
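The original question also mentioned a heat map. One simple approach is to bin the accumulated box centers into a coarse grid and count visits per cell; a minimal sketch, assuming a 640×480 frame and hypothetical center points:

```python
import numpy as np

# Hypothetical (x, y) pedestrian box centers accumulated from the tracking CSV
points = [(120, 240), (130, 245), (317, 187), (125, 242)]

h, w = 480, 640   # video frame size (assumption)
cell = 40         # bin size in pixels

# One counter per grid cell; higher counts mean more pedestrian presence
heatmap = np.zeros((h // cell, w // cell), dtype=int)
for x, y in points:
    heatmap[int(y) // cell, int(x) // cell] += 1
```

The resulting grid can then be rendered with matplotlib's imshow or upscaled and colorized with OpenCV's applyColorMap.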
