Why does YOLOv8 use torch.zeros(1, ch, s, s) as the input to the forward pass when calculating the model stride? Can someone explain? Will different tasks have different strides?
#4492
```python
# From the "Build strides" section of tasks.py
m = self.model[-1]  # Detect()
if isinstance(m, (Detect, Segment, Pose)):
    s = 256  # 2x min stride
    m.inplace = self.inplace
    forward = lambda x: self.forward(x)[0] if isinstance(m, (Segment, Pose)) else self.forward(x)
    m.stride = torch.tensor([s / x.shape[-2] for x in forward(torch.zeros(1, ch, s, s))])  # forward
    self.stride = m.stride
    m.bias_init()  # only run once
else:
    self.stride = torch.Tensor([32])  # default stride for i.e. RTDETR
```
@crackwitz in YOLOv8, the stride calculation using torch.zeros(1, ch, s, s) as input to the forward pass is a way to determine the stride of each output feature map relative to the input image. Here "stride" means the cumulative downsampling factor between the input image and a feature map: a stride of 32 means one feature-map cell corresponds to a 32×32 patch of input pixels. It's crucial for scaling the output feature maps back to the original image dimensions during tasks like detection, segmentation, and pose estimation.
The reason for using a tensor of zeros is that the actual output values are irrelevant to this calculation; only the spatial dimensions of the output feature maps matter. By passing a tensor of known size s through the model and dividing s by each output's height (x.shape[-2]), the code measures each detection layer's cumulative downsampling factor directly instead of hard-coding it.
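To make the mechanism concrete, here is a minimal, self-contained sketch of the same trick. The three-head toy model below is hypothetical (it is not the YOLOv8 architecture), but like the real Detect head it emits three feature maps downsampled by 8x, 16x, and 32x, so the dummy forward pass recovers exactly those strides:

```python
import torch
import torch.nn as nn

class ToyMultiScaleModel(nn.Module):
    """Hypothetical stand-in: three heads downsampling by 8x, 16x and 32x."""
    def __init__(self, ch=3):
        super().__init__()
        # Strided convs that shrink the spatial dims by 8, 16 and 32 respectively
        self.p3 = nn.Conv2d(ch, 16, kernel_size=8, stride=8)
        self.p4 = nn.Conv2d(ch, 16, kernel_size=16, stride=16)
        self.p5 = nn.Conv2d(ch, 16, kernel_size=32, stride=32)

    def forward(self, x):
        return [self.p3(x), self.p4(x), self.p5(x)]

model = ToyMultiScaleModel()
s, ch = 256, 3
# Same recipe as tasks.py: dummy zeros in, strides out of the output shapes
stride = torch.tensor([s / x.shape[-2] for x in model(torch.zeros(1, ch, s, s))])
print(stride)  # tensor([ 8., 16., 32.])
```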
Different tasks can indeed end up with different strides, because the head architecture and how it processes the input can vary by task. That said, the standard YOLOv8 detection, segmentation, and pose heads all sit on the same P3/P4/P5 feature pyramid, so in practice they share strides of 8, 16, and 32; measuring dynamically simply means any architecture variant is handled without hard-coding. This is also why forward is wrapped in a lambda that indexes [0] for Segment and Pose: those heads return a tuple, and only the first element (the multi-scale detection maps) is needed to measure the output shapes.
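A quick empirical check, assuming the ultralytics package is installed and the pretrained weights can be downloaded:

```python
from ultralytics import YOLO

for weights in ("yolov8n.pt", "yolov8n-seg.pt", "yolov8n-pose.pt"):
    print(weights, YOLO(weights).model.stride)
# All three standard heads report tensor([ 8., 16., 32.]); an RT-DETR
# model would instead fall through to the torch.Tensor([32]) default above.
```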
The stride is then stored in m.stride, which is used later during inference to correctly scale and interpret the model's predictions relative to the input image size. This step is essential for accurate localization of objects, segmentation masks, or keypoints in the original image space. The stride calculation is a one-time setup operation that configures the model for subsequent forward passes with actual data.
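As a simplified sketch of that scaling step: the 0.5 cell-center offset below mirrors the grid_cell_offset used by ultralytics' make_anchors helper, but the function itself is a hypothetical illustration rather than the actual decode path. Multiplying feature-map grid coordinates by the stride maps each cell back into input-image pixel space:

```python
import torch

def cell_centers_in_image_space(h, w, stride):
    """Map each feature-map cell to the pixel coordinate of its center."""
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    # +0.5 targets the cell center, then the stride scales it into image pixels
    return (torch.stack((xs, ys), dim=-1).float() + 0.5) * stride

centers = cell_centers_in_image_space(8, 8, stride=32.0)  # the P5 map of a 256 input
print(centers[0, 0], centers[-1, -1])  # tensor([16., 16.]) tensor([240., 240.])
```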