'list' object has no attribute 'masks' #13788
👋 Hello @codinglearningnovice, thank you for your interest in Ultralytics YOLOv8 🚀! We recommend a visit to the Docs for new users, where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.

If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Join the vibrant Ultralytics Discord 🎧 community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.

Install

Pip install the ultralytics package:

pip install ultralytics

Environments

YOLOv8 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

If this badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.
@codinglearningnovice hello, Thank you for reaching out and providing detailed information about your issue. It looks like you're encountering an error because the results object returned by the model is a list of Results objects, and a list itself has no masks attribute. To resolve this, you need to iterate over the results list and access the masks attribute on each individual result, for example:

while(count < TRAIN_SIZE):
    try:
        ret, frame = cap.read()
        if currentFrame % FRAME_SKIP == 0:
            count += 1
            if count % int(TRAIN_SIZE/10) == 0:
                print(str((count/TRAIN_SIZE)*100) + "% done")
            # Perform human segmentation
            results = model(frame)
            for result in results:
                person_masks = result.masks[result.boxes.cls == 0]
                person_mask_3ch = cv2.cvtColor(person_masks, cv2.COLOR_GRAY2BGR)
                masked_frame = cv2.bitwise_and(frame, person_mask_3ch)
                inverted_mask = cv2.bitwise_not(person_mask_3ch)
                result_frame = cv2.bitwise_and(masked_frame, inverted_mask)
                resized_frame = cv2.resize(result_frame, (output_width, output_height))
                name = 'trydata/resized_frame.jpg' + str(count) + '.jpg'
                cv2.imwrite(name, resized_frame)
                video.write(resized_frame.astype('uint8'))
    except Exception as e:
        print(e)
        break
    currentFrame += 1

print(str(count) + " Frames collected")
cap.release()
video.release()

Additionally, please ensure that you are using the latest versions of torch and ultralytics:

pip install --upgrade torch ultralytics

If the issue persists, please provide a minimum reproducible example so we can investigate further. You can find more details on how to create one here. I hope this helps! If you have any further questions, feel free to ask. 😊
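For reference, here is a minimal sketch of how the person masks can be pulled out of a segmentation result as a plain uint8 NumPy array that OpenCV will accept. It assumes the masks.data and boxes.cls attributes behave as in recent ultralytics releases, and person_mask_uint8 is a hypothetical helper name introduced here for illustration, not part of the library:

```python
import cv2
import numpy as np
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")  # same segmentation checkpoint used above


def person_mask_uint8(frame):
    """Return a single-channel uint8 mask (255 where a person is) sized like frame, or None."""
    result = model(frame)[0]                     # model() returns a list of Results; take the first
    if result.masks is None:                     # nothing was segmented in this frame
        return None
    cls = result.boxes.cls.cpu().numpy()         # class id per detection
    masks = result.masks.data.cpu().numpy()      # (N, h, w) float masks at model resolution
    person = masks[cls == 0]                     # keep only class 0 ("person" in COCO)
    if person.size == 0:
        return None
    merged = (person.max(axis=0) * 255).astype(np.uint8)          # merge instances, scale to 0-255
    return cv2.resize(merged, (frame.shape[1], frame.shape[0]))   # match the original frame size
```

A mask produced this way can then be expanded to three channels with cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR) before the bitwise operations used in the loop above.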
Thanks for your reply. I tried this, but it doesn't give me the result; it issues the error below:

0: 384x640 1 person, 190.0ms
1 Frames collected

Am I doing something wrong?
Hello @codinglearningnovice, Thank you for your update. It looks like the error you're encountering is related to how the masks are being handled. To help us investigate further, could you please provide a minimum reproducible example of your code? This will allow us to reproduce the issue on our end and find a solution more effectively. You can find guidelines on how to create one here. In the meantime, let's ensure that the masks are converted to NumPy arrays before they are passed to OpenCV:

while(count < TRAIN_SIZE):
    try:
        ret, frame = cap.read()
        if currentFrame % FRAME_SKIP == 0:
            count += 1
            if count % int(TRAIN_SIZE/10) == 0:
                print(str((count/TRAIN_SIZE)*100) + "% done")
            # Perform human segmentation
            results = model(frame)
            for result in results:
                person_masks = result.masks[result.boxes.cls == 0].numpy()  # Ensure masks are numpy arrays
                person_mask_3ch = cv2.cvtColor(person_masks, cv2.COLOR_GRAY2BGR)
                masked_frame = cv2.bitwise_and(frame, person_mask_3ch)
                inverted_mask = cv2.bitwise_not(person_mask_3ch)
                result_frame = cv2.bitwise_and(masked_frame, inverted_mask)
                resized_frame = cv2.resize(result_frame, (output_width, output_height))
                name = 'trydata/resized_frame.jpg' + str(count) + '.jpg'
                cv2.imwrite(name, resized_frame)
                video.write(resized_frame.astype('uint8'))
    except Exception as e:
        print(e)
        break
    currentFrame += 1

print(str(count) + " Frames collected")
cap.release()
video.release()

Additionally, please ensure you are using the latest versions of torch and ultralytics:

pip install --upgrade torch ultralytics

If the issue persists, please share the minimum reproducible example so we can assist you further. Thank you for your cooperation! 😊
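Two pitfalls the snippet above can still hit, at least in the ultralytics versions I have looked at: result.masks is None on frames with no detections, and indexing result.masks appears to return another Masks object rather than a raw array, which is what later trips up cv2.cvtColor. A minimal, hedged sketch of a guard around both, reusing the model and frame names from the loop above:

```python
results = model(frame)
for result in results:
    if result.masks is None:
        continue                               # no segmentation output for this frame
    person_idx = result.boxes.cls == 0         # boolean index of "person" detections
    if not bool(person_idx.any()):
        continue                               # detections exist, but none of them are people
    # Going through masks.data gives a raw tensor instead of a Masks wrapper object
    person_masks = result.masks.data[person_idx].cpu().numpy()
```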
Bug description: When running inference on a video to segment the person and manipulate each frame, I get an error related to the input expected by cv2.cvtColor; it seems to be a type mismatch.

MRE:

Error message:
OpenCV(4.8.0) 👎 error: (-5:Bad argument) in function 'cvtColor'

Dependencies:
ultralytics==8.2.0
Hello @codinglearningnovice, Thank you for providing a detailed description of the issue and the minimum reproducible example (MRE). It looks like the error is due to a type mismatch in the input passed to cv2.cvtColor.

First, please make sure you are using the latest versions of torch and ultralytics:

pip install --upgrade torch ultralytics

Here's a revised version of your code snippet that includes a check to ensure the person masks are not empty before they are converted:

import cv2
from ultralytics import YOLO
# Load the YOLOv8 segmentation model
model = YOLO("yolov8n-seg.pt")
cap = cv2.VideoCapture('dancee.mp4')
output_width, output_height = 96, 64 # Adjust as needed
video = cv2.VideoWriter('output_video.mp4', cv2.VideoWriter_fourcc(*'mp4v'), 30, (output_width, output_height))
count = 0
TRAIN_SIZE = 1000 # Adjust as needed
FRAME_SKIP = 5 # Adjust as needed
currentFrame = 0
while count < TRAIN_SIZE:
    try:
        ret, frame = cap.read()
        if not ret:
            break
        if currentFrame % FRAME_SKIP == 0:
            count += 1
            if count % int(TRAIN_SIZE / 10) == 0:
                print(f"{(count / TRAIN_SIZE) * 100}% done")
            # Perform human segmentation
            results = model(frame)
            for result in results:
                person_masks = result.masks[result.boxes.cls == 0].numpy()  # Ensure masks are numpy arrays
                if person_masks.size == 0:
                    continue  # Skip if no person masks are found
                person_mask_3ch = cv2.cvtColor(person_masks[0], cv2.COLOR_GRAY2BGR)  # Convert the first mask to 3 channels
                masked_frame = cv2.bitwise_and(frame, person_mask_3ch)
                inverted_mask = cv2.bitwise_not(person_mask_3ch)
                result_frame = cv2.bitwise_and(masked_frame, inverted_mask)
                resized_frame = cv2.resize(result_frame, (output_width, output_height))
                name = f'trydata/resized_frame_{count}.jpg'
                cv2.imwrite(name, resized_frame)
                video.write(resized_frame.astype('uint8'))
    except Exception as e:
        print(e)
        break
    currentFrame += 1

print(f"{count} Frames collected")
cap.release()
video.release()

This code ensures that person_masks is not empty before it is passed to cv2.cvtColor, skipping frames where no person is detected. If the issue persists, please provide any additional details or errors you encounter. This will help us further investigate and provide a more accurate solution. Thank you for your patience and cooperation! 😊
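For the black-person / white-background output described in the original question below, the bitwise combination above can also be replaced by a direct masking step once a frame-sized uint8 person mask is available. This is only a sketch under that assumption; person_mask_uint8 is the hypothetical helper sketched earlier in this thread, np is NumPy imported as in that sketch, and frame, count, output_width, output_height and video come from the script above:

```python
mask = person_mask_uint8(frame)              # 255 where a person is, 0 elsewhere
if mask is not None:
    output = np.full_like(frame, 255)        # start from an all-white frame
    output[mask > 0] = 0                     # paint the person region black
    resized_frame = cv2.resize(output, (output_width, output_height))
    cv2.imwrite(f'trydata/resized_frame_{count}.jpg', resized_frame)
    video.write(resized_frame.astype('uint8'))
```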
Search before asking
Question
I am working in Colab. I am trying to access a video and, in every frame of the video, segment the person, convert the person region to black, and turn everything else in the background white. The snippet of my code is this:
while(count < TRAIN_SIZE):
    try:
        ret, frame = cap.read()
    except Exception as e:
        print(e)
        break
    currentFrame += 1
print(str(count)+" Frames collected")
cap.release()
video.release()
but I keep getting these errors:
WARNING⚠️ 'source' is missing. Using 'source=/usr/local/lib/python3.10/dist-packages/ultralytics/assets'.
image 1/2 /usr/local/lib/python3.10/dist-packages/ultralytics/assets/bus.jpg: 640x480 4 persons, 1 bus, 1 skateboard, 12.0ms
image 2/2 /usr/local/lib/python3.10/dist-packages/ultralytics/assets/zidane.jpg: 384x640 2 persons, 1 tie, 8.8ms
Speed: 2.9ms preprocess, 10.4ms inference, 2.4ms postprocess per image at shape (1, 3, 384, 640)
'list' object has no attribute 'masks'
Additional
No response