@JunMa11 Hello, and thank you for your work. There is one point I haven't fully understood and would like to ask you about: how should training data be prepared when there are multiple objects, with a large object containing smaller ones?

Do you generate one binary mask image per object for each image? For example, an image `1.png` with three objects would become `1.png`+`1mask.png`, `1.png`+`2mask.png`, `1.png`+`3mask.png`. With multiple objects, the number of masks ends up several times larger than the number of images.

In that case, how should the data input be handled? My idea is that an image with multiple objects returns the format below once per object; only the `image` is the same each time, and the number of samples is determined by the number of masks:

```python
return {
    "image": torch.tensor(img_padded).float(),
    "gt2D": torch.tensor(gt2D[None, :, :]).long(),
    "bboxes": torch.tensor(bboxes[None, None, ...]).float(),  # (B, 1, 4)
    "image_name": img_name,
    "new_size": torch.tensor(np.array([img_resize.shape[0], img_resize.shape[1]])).long(),
    "original_size": torch.tensor(np.array([img_3c.shape[0], img_3c.shape[1]])).long()
}
```
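The per-mask indexing described above can be sketched as follows. This is a minimal illustration, not the MedSAM dataloader: the class name `PerMaskDataset` and the file names are hypothetical, and real code would load and preprocess the images instead of returning paths. The key point is that the dataset's length equals the total mask count, so one image with three objects yields three samples.

```python
# Sketch (illustrative, not the repo's code): a dataset indexed by
# (image, mask) pairs, so the same image appears once per object.
class PerMaskDataset:
    def __init__(self, pairs):
        # pairs: list of (image_path, mask_path); one entry per object,
        # so one image_path may appear several times.
        self.pairs = pairs

    def __len__(self):
        # Length follows the number of masks, not the number of images.
        return len(self.pairs)

    def __getitem__(self, idx):
        img_path, mask_path = self.pairs[idx]
        # A real __getitem__ would load and preprocess the files here;
        # returning the paths is enough to show the indexing scheme.
        return {"image": img_path, "gt2D": mask_path}

# One image, three objects -> three training samples.
pairs = [("1.png", "1mask.png"), ("1.png", "2mask.png"), ("1.png", "3mask.png")]
ds = PerMaskDataset(pairs)
```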
@JunMa11
Hi @skycat88 ,
I would suggest separating it into two files, where one file contains the larger object and the other contains the smaller object.
@JunMa11 thank you, I split multiple targets into individual binary images for training.