convert yolov8n hand pose to int8 tflite face value error #13514
Comments
Hi Kris,

Thank you for reaching out and providing the details of your issue. Let's work together to resolve this.

Firstly, could you please confirm that you are using the latest versions of `torch` and `ultralytics`?

```bash
pip install --upgrade torch ultralytics
```

Next, to better understand and reproduce the issue, could you provide a minimum reproducible code example? This will help us investigate the problem more effectively. You can refer to our guide on creating one here: Minimum Reproducible Example.

Additionally, ensure that your `data` argument points to a valid dataset YAML, since INT8 export uses it for calibration.

Here's a refined version of your code snippet to ensure all parameters are correctly set:

```python
from ultralytics import YOLO

# Load the pretrained model
model = YOLO("model/best.pt")

# Define the image size
image_size = 224

# Export the model to INT8 TFLite format
model.export(format="tflite", imgsz=image_size, int8=True, data="hand_keypoint.yaml")
```

If the issue persists, please share the exact error message or any additional logs you receive. This information will be crucial for diagnosing the problem. Looking forward to your response so we can get this resolved! 😊
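For background (this is not from the thread itself): `int8=True` produces a full-integer model in which float values are mapped to 8-bit integers with a per-tensor scale and zero point. A minimal numpy sketch of the affine quantization scheme TFLite uses — the `scale` and `zero_point` values here are illustrative, not taken from any real model:

```python
import numpy as np

def quantize(x, scale, zero_point):
    """Affine quantization: float -> int8, as used by TFLite full-integer models."""
    q = np.round(x / scale) + zero_point
    return np.clip(q, -128, 127).astype(np.int8)

def dequantize(q, scale, zero_point):
    """Inverse mapping: int8 -> approximate float."""
    return (q.astype(np.float32) - zero_point) * scale

# Example parameters for an input range of [0, 1]
x = np.array([0.0, 0.25, 1.0], dtype=np.float32)
scale, zero_point = 1.0 / 255.0, -128
q = quantize(x, scale, zero_point)          # int8 values in [-128, 127]
x_hat = dequantize(q, scale, zero_point)    # recovers x up to one quantization step
```

The round trip is lossy: `x_hat` matches `x` only to within one `scale` step, which is why a representative calibration dataset matters for picking good scales.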
Hi @glenn-jocher, I changed to the latest torch and ultralytics, and the conversion succeeded. Thanks,
Hi @glenn-jocher, I want to run inference with the INT8 TFLite model to check the results, but I am facing an error.
Thanks,
@kris-himax hi Kris,

Thank you for reaching out and providing the details of your issue. It's great to hear that you successfully converted your model to INT8 TFLite format! Let's address the inference part now.

To perform inference using a TFLite model, you need to use the TensorFlow Lite Interpreter. Here's an example of how you can modify your code to load and run inference with a TFLite model:

```python
import os

import cv2
import numpy as np
import tensorflow as tf

# Load the TFLite model
tflite_model_path = "model/best_saved_model_yolov8n_hand_pose_imgz_224_tflite/best_full_integer_quant.tflite"
interpreter = tf.lite.Interpreter(model_path=tflite_model_path)
interpreter.allocate_tensors()

# Get input and output tensors
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Define the image size
image_size = 224

# Function to preprocess the image
def preprocess_image(image_path):
    image = cv2.imread(image_path)
    image = cv2.resize(image, (image_size, image_size))
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    image = np.expand_dims(image, axis=0).astype(np.float32)
    return image

# Function to run inference
def run_inference(image):
    interpreter.set_tensor(input_details[0]['index'], image)
    interpreter.invoke()
    output_data = interpreter.get_tensor(output_details[0]['index'])
    return output_data

# Folder containing test images
folder = "./test_image_hand"
for filename in os.listdir(folder):
    image_path = os.path.join(folder, filename)
    image = preprocess_image(image_path)

    # Run inference
    results = run_inference(image)

    # Process results (this part will depend on your specific model's output format)
    # For demonstration, let's assume the results contain bounding box coordinates
    # and you need to draw them on the image
    for result in results:
        # Assuming result contains [x, y, w, h]
        x, y, w, h = result
        x1, y1 = int(x - w / 2), int(y - h / 2)
        x2, y2 = int(x + w / 2), int(y + h / 2)
        cv2.rectangle(image[0], (x1, y1), (x2, y2), (0, 255, 0), 2)

    # Display the image with annotations
    cv2.imshow("YOLOv8 Inference", image[0])
    cv2.waitKey(0)

cv2.destroyAllWindows()
```

This code snippet demonstrates how to load a TFLite model, preprocess images, run inference, and display the results. Please adjust the result processing part according to your specific model's output format.

If you encounter any further issues or have additional questions, feel free to ask. We're here to help! 😊
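A side note not raised in the thread: a `full_integer_quant.tflite` model typically declares an int8 input tensor, so feeding the float32 array produced by `preprocess_image` may raise a dtype error. The fix is to quantize the input yourself using the scale and zero point reported by `interpreter.get_input_details()[0]['quantization']`. A minimal numpy sketch of that step — the parameter values below are hypothetical placeholders for what the interpreter would report:

```python
import numpy as np

def to_int8_input(image_float, scale, zero_point):
    """Map a float32 image (0-255 pixel range) into the int8 domain a
    full-integer quantized model expects, using its input quantization params."""
    image_float = image_float / 255.0            # normalize to [0, 1] first
    q = np.round(image_float / scale) + zero_point
    return np.clip(q, -128, 127).astype(np.int8)

# Hypothetical quantization params; real values come from
# interpreter.get_input_details()[0]['quantization'].
scale, zero_point = 1.0 / 255.0, -128
img = np.full((1, 224, 224, 3), 128.0, dtype=np.float32)  # mid-gray test image
q_img = to_int8_input(img, scale, zero_point)
# q_img can now be passed to interpreter.set_tensor(...)
```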
Hi @glenn-jocher, Thank you. It works fine.
@kris-himax hi Kris, That's fantastic to hear! 🎉 If you have any more questions or run into any other issues, feel free to reach out. We're here to help!
I ran the code that you provided and I am facing this error: `ValueError: too many values to unpack (expected 4)` for my converted TFLite model. The output shape is `(1, 5, 8400)`, and I have no idea how to retrieve the bounding box coordinates or the confidence scores from this output shape. Can you kindly assist with this? Thanks in advance.
Hi @JellyJ98,

Thank you for providing the detailed error message and the code snippet. It looks like the output shape of your TFLite model is `(1, 5, 8400)`, where the 5 channels correspond to `[x, y, w, h, confidence]` for 8400 candidate boxes. To properly parse the output, you need to reshape and interpret these values correctly. Here's an updated version of your code to handle the output format:

```python
import os

import cv2
import numpy as np
import tensorflow as tf

# Load the TFLite model
tflite_model_path = "model/best_saved_model_yolov8n_hand_pose_imgz_224_tflite/best_full_integer_quant.tflite"
interpreter = tf.lite.Interpreter(model_path=tflite_model_path)
interpreter.allocate_tensors()

# Get input and output tensors
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Define the image size
image_size = 224

# Function to preprocess the image
def preprocess_image(image_path):
    image = cv2.imread(image_path)
    image = cv2.resize(image, (image_size, image_size))
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    image = np.expand_dims(image, axis=0).astype(np.float32)
    return image

# Function to run inference
def run_inference(image):
    interpreter.set_tensor(input_details[0]['index'], image)
    interpreter.invoke()
    output_data = interpreter.get_tensor(output_details[0]['index'])
    return output_data

# Function to process the (1, 5, 8400) output
def process_output(output_data, threshold=0.5):
    output_data = np.squeeze(output_data)  # -> (5, 8400)
    boxes = []
    for i in range(output_data.shape[1]):
        x, y, w, h, conf = output_data[:, i]
        if conf > threshold:
            x1, y1 = int((x - w / 2) * image_size), int((y - h / 2) * image_size)
            x2, y2 = int((x + w / 2) * image_size), int((y + h / 2) * image_size)
            boxes.append((x1, y1, x2, y2, conf))
    return boxes

# Folder containing test images
folder = "./test_image_hand"
for filename in os.listdir(folder):
    image_path = os.path.join(folder, filename)
    image = preprocess_image(image_path)

    # Run inference
    results = run_inference(image)

    # Process results
    boxes = process_output(results)

    # Draw bounding boxes on the image
    for (x1, y1, x2, y2, conf) in boxes:
        cv2.rectangle(image[0], (x1, y1), (x2, y2), (0, 255, 0), 2)

    # Display the image with annotations
    cv2.imshow("YOLOv8 Inference", image[0])
    cv2.waitKey(0)

cv2.destroyAllWindows()
```

This code snippet includes a `process_output` function that squeezes the `(1, 5, 8400)` tensor to `(5, 8400)`, unpacks each column as `[x, y, w, h, confidence]`, filters by confidence threshold, and scales the normalized coordinates to pixel values.

If you encounter any further issues or have additional questions, feel free to ask. We're here to help! 😊
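One caveat about the thresholding approach above: it keeps every candidate over the confidence threshold, so a single hand will usually produce many overlapping boxes. The standard fix is non-max suppression, which the thread does not cover. A minimal greedy numpy sketch (the function and its parameters are illustrative, not part of the code above):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.45):
    """Greedy non-max suppression. boxes: (N, 4) as [x1, y1, x2, y2].
    Returns indices of boxes to keep, highest-scoring first."""
    order = np.argsort(scores)[::-1]  # process boxes from highest score down
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # Intersection rectangle between box i and each remaining box
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter + 1e-9)
        order = rest[iou <= iou_thresh]  # drop boxes overlapping box i too much
    return keep

# Two near-duplicate boxes plus one separate box: the duplicate is suppressed.
boxes = np.array([[10, 10, 50, 50], [12, 12, 52, 52], [100, 100, 140, 140]], dtype=np.float32)
scores = np.array([0.9, 0.8, 0.7])
kept = nms(boxes, scores)  # -> [0, 2]
```

This would slot in after `process_output` by splitting its `(x1, y1, x2, y2, conf)` tuples into a box array and a score array, then indexing with the returned `kept` list.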
👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed! Thank you for your contributions to YOLO 🚀 and Vision AI ⭐
Search before asking
Question
Hi,
I use the pretrained model from here.
And use the following Python code to convert it to INT8 TFLite, but I face an error.
How can I fix it?
Thanks,
Kris
Additional
No response