my model is optimizing the weights and giving me the option of preview and deployment #732
👋 Hello @PrakharJoshi54321, thank you for raising an issue about Ultralytics HUB 🚀! Please visit our HUB Docs to learn more:
If this is a 🐛 Bug Report, please provide screenshots and steps to reproduce your problem to help us get started working on a fix. If this is a ❓ Question, please provide as much information as possible, including dataset, model, environment details etc. so that we might provide the most helpful response. We try to respond to all issues as promptly as possible. Thank you for your patience! |
@PrakharJoshi54321 Hello! The "Optimizing weights" process can take a while. Let's wait for a bit to see if the process finishes successfully. If the process fails, could you share your model ID (URL) so I can investigate? |
Hello @PrakharJoshi54321, Thank you for providing the details and the screenshot. It looks like your model has completed the training process but encountered an issue during the weight optimization phase. Let's address this step-by-step:
For more detailed guidance, you can refer to the Ultralytics HUB Models Documentation. If the issue persists, please provide any error messages or logs you encounter, and we can further investigate the problem. Thank you for your patience and cooperation. The YOLO community and the Ultralytics team are here to help you! |
@PrakharJoshi54321 It looks like your model didn’t successfully upload the weights, which is why Ultralytics HUB is asking you to resume training from the last checkpoint (62). I suggest resuming training as recommended in the UI. |
# Path to Tesseract executable
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'

# Load the models
speed_model = YOLO("yolov8n.pt")  # Model for speed detection and tracking

# Path to the video file
video_path = 'video.mp4'  # Replace with your video file path

# Initialize video capture
cap = cv2.VideoCapture(video_path)
w, h = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

# Video writer
video_writer = cv2.VideoWriter("output_video.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

line_pts = [(0, h // 2), (w, h // 2)]  # Update line points based on video resolution

# Init speed-estimation object
speed_obj = solutions.SpeedEstimator(

while cap.isOpened():

cap.release()

comment: ultralytics is just amazing, any help will be appreciated |
check if the speed is greater than 50 km/hr store the vehicle no, speed and track id in the excel sheet |
Hello @PrakharJoshi54321, Thank you for your kind words about Ultralytics! We're thrilled to hear that you're enjoying using our tools. Let's enhance your script to store vehicle information in an Excel sheet when the speed exceeds 50 km/hr. Here's an updated version of your script that includes this functionality:

import cv2
from ultralytics import YOLO, solutions
import pytesseract
from PIL import Image
import numpy as np
import pandas as pd
# Path to Tesseract executable
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'
# Load the models
speed_model = YOLO("yolov8n.pt") # Model for speed detection and tracking
plate_model = YOLO('epoch-68.pt') # Model for number plate detection
# Path to the video file
video_path = 'video.mp4' # Replace with your video file path
# Initialize video capture
cap = cv2.VideoCapture(video_path)
assert cap.isOpened(), "Error opening video file"
w, h = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = int(cap.get(cv2.CAP_PROP_FPS))
# Video writer
video_writer = cv2.VideoWriter("output_video.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
line_pts = [(0, h // 2), (w, h // 2)] # Update line points based on video resolution
# Init speed-estimation object
speed_obj = solutions.SpeedEstimator(
reg_pts=line_pts,
names=speed_model.model.names,
view_img=True,
)
# DataFrame to store vehicle information
vehicle_data = pd.DataFrame(columns=["Track ID", "Vehicle No", "Speed (km/hr)"])
while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        print("Error reading frame from video.")
        break

    plate_text = ""  # default so the logging loop below never sees an undefined name

    # Speed detection and tracking (track() keeps per-vehicle IDs across frames)
    results = speed_model.track(im0, persist=True)
    if results:
        print(f"Tracks detected: {len(results)}")
    else:
        print("No tracks detected in this frame.")

    # Ensure tracks have valid data
    for result in results:
        for box in result.boxes:
            x1, y1, x2, y2 = map(int, box.xyxy[0])
            print(f"Vehicle detected at: {x1, y1, x2, y2}")
            cropped_image = im0[y1:y2, x1:x2]

            # Perform number plate detection
            plate_results = plate_model(cropped_image)
            for plate_result in plate_results:
                plate_boxes = plate_result.boxes.xyxy.cpu().numpy()
                if len(plate_boxes) == 0:
                    print("No number plate detected in this vehicle bounding box.")
                for plate_box in plate_boxes:
                    px1, py1, px2, py2 = map(int, plate_box)
                    plate_cropped_image = cropped_image[py1:py2, px1:px2]

                    # Convert the cropped image to a format suitable for OCR
                    plate_cropped_image_rgb = cv2.cvtColor(plate_cropped_image, cv2.COLOR_BGR2RGB)
                    pil_image = Image.fromarray(plate_cropped_image_rgb)

                    # Use Tesseract to extract text (--psm 8 treats the crop as a single word)
                    plate_text = pytesseract.image_to_string(pil_image, config='--psm 8').strip()
                    print(f'Detected Number Plate: {plate_text}')

                    # Draw the bounding box for the plate and add the text
                    cv2.rectangle(im0, (x1 + px1, y1 + py1), (x1 + px2, y1 + py2), (0, 255, 0), 2)
                    cv2.putText(im0, plate_text, (x1 + px1, y1 + py1 - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)

    # Write the frame with detections and speed estimation
    im0, speeds = speed_obj.estimate_speed(im0, results)
    video_writer.write(im0)

    # Store vehicle information if speed exceeds 50 km/hr
    for track_id, speed in speeds.items():
        if speed > 50:
            new_row = pd.DataFrame([{"Track ID": track_id, "Vehicle No": plate_text, "Speed (km/hr)": speed}])
            vehicle_data = pd.concat([vehicle_data, new_row], ignore_index=True)

cap.release()
video_writer.release()
cv2.destroyAllWindows()

# Save the vehicle data to an Excel file
vehicle_data.to_excel("vehicle_data.xlsx", index=False)

This script will now store the vehicle number, speed, and track ID in an Excel sheet whenever the speed exceeds 50 km/hr. If you encounter any issues or have further questions, please let us know. The YOLO community and the Ultralytics team are always here to help! |
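Since `DataFrame.append` was deprecated in pandas 1.4 and removed in 2.0, row accumulation is best done by collecting dicts and building the frame once. A minimal, pipeline-independent sketch of that pattern (the speeds dict and plate string below are dummy values for illustration, not output from the real tracker):

```python
import pandas as pd

# Dummy per-track speeds standing in for the tracker output (assumed shape:
# dict mapping track ID -> speed in km/hr)
speeds = {1: 72.5, 2: 43.0, 3: 88.1}
plate_text = "KA01AB1234"  # hypothetical plate string for illustration

# Collect only the vehicles exceeding the 50 km/hr limit
rows = [
    {"Track ID": track_id, "Vehicle No": plate_text, "Speed (km/hr)": speed}
    for track_id, speed in speeds.items()
    if speed > 50
]

# Building the DataFrame once avoids the removed DataFrame.append API
vehicle_data = pd.DataFrame(rows, columns=["Track ID", "Vehicle No", "Speed (km/hr)"])
print(vehicle_data["Track ID"].tolist())  # → [1, 3]
```

Writing the result with `to_excel` additionally requires an engine such as `openpyxl` to be installed.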
This code is throwing an error: the function here does not return two values, yet you tell me to store the result in two variables. How is this possible? "im0, speeds = speed_obj.estimate_speed(im0, results)" |
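For context on the error: Python's multiple-assignment syntax requires the right-hand side to be an iterable with exactly as many items as there are targets, so a function that returns a single non-iterable object cannot feed `im0, speeds = ...`. A tiny stand-alone illustration (the stub below is hypothetical, not the real ultralytics API):

```python
def estimate_speed_stub(frame):
    """Hypothetical stand-in for an API that returns only the annotated frame."""
    return frame

frame = object()  # non-iterable, like a single image object

im0 = estimate_speed_stub(frame)  # fine: one target, one value

try:
    im0, speeds = estimate_speed_stub(frame)  # two targets, one non-iterable value
except TypeError as err:
    print("unpacking failed:", type(err).__name__)  # → unpacking failed: TypeError
```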
pro.zip — please do this for me; all efforts will be appreciated |
Hello @PrakharJoshi54321, Thank you for sharing your project files and providing details about your requirements. You are right about the unpacking error: in recent ultralytics releases, SpeedEstimator.estimate_speed() returns only the annotated frame, so it cannot be unpacked into two variables. The rest of the script is the same as the one above; only the speed section inside the loop changes:

# ... inside the while cap.isOpened() loop, replace the speed section with:
im0 = speed_obj.estimate_speed(im0, results)  # returns only the annotated frame
video_writer.write(im0)

# Per-track speeds live on the estimator itself (dist_data maps track ID ->
# speed in the ultralytics versions this thread targets — verify the attribute
# name against your installed version)
for track_id, speed in speed_obj.dist_data.items():
    if speed > 50:
        new_row = pd.DataFrame([{"Track ID": track_id, "Vehicle No": plate_text, "Speed (km/hr)": speed}])
        vehicle_data = pd.concat([vehicle_data, new_row], ignore_index=True)  # DataFrame.append was removed in pandas 2.0

If you encounter any further issues or have additional questions, please let us know. The YOLO community and the Ultralytics team are here to support you! |
Is it working in your system? Please share a screenshot and the detailed process — it's my college project. |
Please do the correct OCR. |
Hello @PrakharJoshi54321, Thank you for reaching out! To assist you effectively, we need to ensure a few things:
Regarding your OCR integration, here’s a refined approach to ensure accurate OCR detection:
Here’s an example of how you can preprocess the image and configure Tesseract:

import cv2
import pytesseract
from PIL import Image

# Path to Tesseract executable
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'

def preprocess_image(image):
    # Convert to grayscale
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Apply thresholding
    _, thresh = cv2.threshold(gray, 150, 255, cv2.THRESH_BINARY)
    return thresh

def extract_text_from_image(image):
    # Preprocess the image
    preprocessed_image = preprocess_image(image)
    # Convert to PIL Image
    pil_image = Image.fromarray(preprocessed_image)
    # Use Tesseract to extract text
    text = pytesseract.image_to_string(pil_image, config='--psm 8').strip()
    return text

# Example usage
image = cv2.imread('path_to_image.jpg')
text = extract_text_from_image(image)
print(f'Detected Text: {text}')

This example demonstrates how to preprocess the image before passing it to Tesseract for OCR. You can adjust the preprocessing steps based on your specific requirements. If you continue to face issues, please share the minimal reproducible example, and we’ll be happy to assist you further. The YOLO community and the Ultralytics team are here to help! |
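The thresholding step above has simple semantics worth knowing when tuning the cutoff: with `THRESH_BINARY`, pixels strictly above the threshold become `maxval` and everything else becomes 0. A NumPy-only sketch of the same rule, so you can experiment without OpenCV installed:

```python
import numpy as np

def binary_threshold(gray, cutoff=150, maxval=255):
    """Mimics cv2.threshold(gray, cutoff, maxval, cv2.THRESH_BINARY)[1]."""
    return np.where(gray > cutoff, maxval, 0).astype(np.uint8)

# A 2x3 "image": note that a pixel exactly equal to the cutoff maps to 0
gray = np.array([[10, 140, 151],
                 [200, 150, 255]], dtype=np.uint8)
print(binary_threshold(gray).tolist())  # → [[0, 0, 255], [255, 0, 255]]
```

Raising the cutoff removes faint background noise but can also erase thin plate strokes, so it is worth sweeping a few values on sample crops.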
I am taking 5 km/hr for testing and it is showing me this:

Vehicle detected at: (815, 196, 871, 255)
0: 640x608 1 0, 116.3ms
# Write the frame with detections and speed estimation
|
packages in environment at C:\Users\cairuser1\miniconda3\envs\speedss:

Name          Version    Build           Channel
asttokens     2.4.1      pyhd8ed1ab_0    conda-forge
… (rest of the package list) |
Hello @PrakharJoshi54321, Thank you for providing the detailed list of packages in your environment; it looks like you're encountering an issue.

Step 1: Verify Package Versions
First, ensure that you are using the latest versions of torch, ultralytics, and hub-sdk:

pip install --upgrade torch ultralytics hub-sdk

Step 2: Minimum Reproducible Example
To help us diagnose the issue more effectively, could you please provide a minimum reproducible code example? This will allow us to replicate the problem on our end and provide a more accurate solution. You can refer to our Minimum Reproducible Example Guide for more details.

Step 3: Correcting the …
|
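Before upgrading, it can help to see which versions are actually importable in the active environment; a small stdlib-only check (the package names simply mirror the pip command above):

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(package):
    """Return the installed version string, or None if the package is absent."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

# Report each package the support reply asks to upgrade
for pkg in ("torch", "ultralytics", "hub-sdk"):
    print(pkg, installed_version(pkg) or "not installed")
```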
Traceback (most recent call last):
…

provide me fast please |
resolve this fast please |
Hello @PrakharJoshi54321, Thank you for your patience. Let's address the issue you're facing.

Step 1: Verify Package Versions
First, ensure you are using the latest versions of torch, ultralytics, and hub-sdk:

pip install --upgrade torch ultralytics hub-sdk

Step 2: Correcting the …
|
Is this correct?
Hello @PrakharJoshi54321, Thank you for reaching out! Let's address your issue step-by-step to ensure we provide the best possible support.

Step 1: Minimum Reproducible Example
To help us diagnose the issue effectively, could you please provide a minimum reproducible code example? This will allow us to replicate the problem on our end and offer a more accurate solution. You can refer to our Minimum Reproducible Example Guide for more details. Having a reproducible example is crucial for us to investigate and resolve the issue efficiently.

Step 2: Verify Package Versions
Please ensure you are using the latest versions of torch, ultralytics, and hub-sdk:

pip install --upgrade torch ultralytics hub-sdk

Using the most recent versions helps ensure that any known bugs are fixed and you have access to the latest features and improvements.

Step 3: Correcting the …
|