Inference/postprocessing yolov8n on rockchip 3588 #13043
Comments
👋 Hello @ViktorPavlovA, thank you for your interest in Ultralytics YOLOv8 🚀! We recommend a visit to the Docs for new users where you can find many Python and CLI usage examples and where many of the most common questions may already be answered. If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it. If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results. Join the vibrant Ultralytics Discord 🎧 community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.

Install

Pip install the `ultralytics` package:

```shell
pip install ultralytics
```

Environments

YOLOv8 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
Status

If this badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.
Hello! It looks like you're working with the output tensors from the YOLOv8 model on a Rockchip RK3588. The output tensors you're seeing correspond to different detection scales, each containing bounding box (bbox) information and class predictions. For YOLO models, each prediction typically includes:

- Bounding box center coordinates (`x_center`, `y_center`)
- Bounding box dimensions (`width`, `height`)
- An objectness confidence score
- Per-class probabilities

The order in your output tensors likely follows this format: `[x_center, y_center, width, height, confidence, class_prob_0, class_prob_1, ...]`. To process these outputs, you'll need to apply post-processing steps such as:

- Filtering predictions by confidence threshold
- Decoding the bbox coordinates back to the original image scale
- Applying non-maximum suppression (NMS) to remove overlapping boxes
Here’s a simplified example of how you might start to interpret these outputs:

```python
import numpy as np

# Tune these thresholds for your use case
CONFIDENCE_THRESHOLD = 0.5
CLASS_CONFIDENCE_THRESHOLD = 0.5

# Example to interpret the output tensor
for output in outputs:
    # Assuming the output shape is (1, num_predictions, attributes)
    predictions = output[0]
    for pred in predictions:
        x_center, y_center, width, height, conf, *class_probs = pred
        if conf > CONFIDENCE_THRESHOLD:
            class_id = np.argmax(class_probs)
            class_confidence = class_probs[class_id]
            if class_confidence > CLASS_CONFIDENCE_THRESHOLD:
                # This is a valid detection
                print(f"Detected class: {class_id} with bbox: {x_center}, {y_center}, {width}, {height}")
```

Adjust the thresholds to suit your model. Hope this helps! Let me know if you have more questions. 😊
Thank you for the help and explanation! I wrote a guide on how to work with post-processing on the Rockchip 3588. Maybe it will be useful for the YOLO community.
Hello Viktor, Thank you so much for sharing your guide on post-processing with YOLOv8 on the Rockchip 3588! 🌟 It's fantastic to see community members contributing valuable resources. I'm sure many will find it helpful. We appreciate your efforts in enriching the YOLO community. Keep up the great work!
Hi, my model has the same shape, and I'm using the code provided in your comment: #13043 (comment). Could the reason be that I'm using my custom trained model?

```python
import numpy as np

outputs = rknn.inference(inputs=[input_img])

for id, i in enumerate(outputs):
    print(id)
    print(i.shape)
    # print(i[0])
    print("\n")

print("_" * 20)

CONFIDENCE_THRESHOLD = 0.5
CLASS_CONFIDENCE_THRESHOLD = 0.5

# Example to interpret the output tensor
for output in outputs:
    # Assuming the output shape is (1, num_predictions, attributes)
    predictions = output[0]
    for pred in predictions:
        x_center, y_center, width, height, conf, *class_probs = pred
        if conf > CONFIDENCE_THRESHOLD:
            class_id = np.argmax(class_probs)
            class_confidence = class_probs[class_id]
            if class_confidence > CLASS_CONFIDENCE_THRESHOLD:
                # This is a valid detection
                print(f"Detected class: {class_id} with bbox: {x_center}, {y_center}, {width}, {height}")
                # Extract the output tensor and squeeze extra dimensions
```
@livelove1987, so does it work now? About the custom model: no, I also used a custom model with the script in
👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help. For additional resources and information, please see the links below:
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed! Thank you for your contributions to YOLO 🚀 and Vision AI ⭐
Search before asking
Question
Hello, YOLO community.
I'm trying to use the yolov8n model on a Rockchip, but I don't understand how to use the output tensors after inference. Can you answer: what is the order of values in each tensor? Is it [x_c, y_c, w, h, confidence, ?, ?, ?], and if so, what are the remaining values?
OUTPUT
Additional
rknn-lite-1.5.2