For:

model.fit(X, Y)  # where model is a RetinaNet object built with keras_cv.models.RetinaNet() and an appropriate backbone

X would be the array of images. For X0 (the first image), should Y0 be an array of [class, xmin, xmax, ymin, ymax] entries, one per object in the image? What is the correct format?

Right now I have the annotations saved in [class, xmin, xmax, ymin, ymax] format, in a separate text file for each image. Is there an alternative for this case?

Thank you for any help.
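For what it's worth, KerasCV's detection models generally take labels as a dictionary with "classes" and "boxes" keys rather than one flat array per image, with the box layout declared through the model's bounding_box_format argument. Below is a minimal sketch of converting per-image "[class, xmin, xmax, ymin, ymax]" text lines into that shape; the parse_annotations helper and the reordering to xyxy (xmin, ymin, xmax, ymax) are my own assumptions for illustration, not KerasCV API:

```python
import numpy as np

def parse_annotations(lines):
    """Hypothetical helper: turn 'class, xmin, xmax, ymin, ymax' rows
    into a {'classes', 'boxes'} dict with boxes reordered to xyxy."""
    rows = np.array([[float(v) for v in line.split(",")] for line in lines])
    classes = rows[:, 0].astype(np.int32)
    # Source column order is xmin, xmax, ymin, ymax ->
    # reorder to xmin, ymin, xmax, ymax ("xyxy").
    boxes = rows[:, [1, 3, 2, 4]].astype(np.float32)
    return {"classes": classes, "boxes": boxes}

labels = parse_annotations(["0, 10, 50, 20, 80", "2, 5, 30, 5, 40"])
print(labels["classes"])   # [0 2]
print(labels["boxes"][0])  # [10. 20. 50. 80.]
```

Since images in a detection dataset have varying numbers of objects, these per-image dicts are usually batched with padding (or ragged tensors) rather than stacked into one dense Y array.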
@adityaroy10,
RetinaNet is a popular single-stage detector that is both accurate and fast. It uses a feature pyramid network to efficiently detect objects at multiple scales, and it introduces a new loss, the focal loss, to alleviate the extreme foreground-background class imbalance. There is also the COCO2017 dataset, which has around 118k images for training and testing.
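The focal loss mentioned above down-weights easy, well-classified examples so the huge number of background anchors does not dominate training. A NumPy sketch of the binary form FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t), using the gamma=2.0, alpha=0.25 defaults from the RetinaNet paper (this is an illustrative reimplementation, not the keras_cv loss object):

```python
import numpy as np

def focal_loss(y_true, p, gamma=2.0, alpha=0.25):
    """Binary focal loss: cross-entropy scaled by (1 - p_t)**gamma,
    so confident (easy) predictions contribute almost nothing."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    p_t = np.where(y_true == 1, p, 1 - p)             # prob. of the true class
    alpha_t = np.where(y_true == 1, alpha, 1 - alpha)  # class-balancing weight
    return -alpha_t * (1 - p_t) ** gamma * np.log(p_t)

# Easy background anchor (y=0, p=0.01) vs. a hard one (y=0, p=0.9):
# the easy example's loss is suppressed by (1 - 0.99)**2 = 1e-4.
losses = focal_loss(np.array([0, 0]), np.array([0.01, 0.9]))
print(losses)
```

With gamma=0 and alpha=0.5 this reduces to (half of) ordinary binary cross-entropy, which is a handy sanity check.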