How to convert from COCO instance segmentation format to YOLOv5 instance segmentation Without Roboflow? #10621
👋 Hello @ichsan2895, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://ultralytics.com or email support@ultralytics.com.

Requirements

Python>=3.7.0 with all requirements.txt installed, including PyTorch>=1.7. To get started:

git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export and benchmarks on macOS, Windows, and Ubuntu every 24 hours and on every commit.
You can code it yourself in Python; just keep in mind that COCO boxes use a top-left origin with absolute pixel coordinates, while YOLO uses normalized center coordinates.
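For the bounding-box part, here is a minimal sketch of that coordinate change, assuming a COCO-style [x_min, y_min, width, height] box in pixels and a known image size (the function name is illustrative, not from any particular library):

```python
def coco_box_to_yolo(box, img_w, img_h):
    """Convert a COCO bbox [x_min, y_min, w, h] (pixels, top-left origin)
    to a YOLO bbox [x_center, y_center, w, h] normalized to 0-1."""
    x_min, y_min, w, h = box
    x_center = (x_min + w / 2) / img_w
    y_center = (y_min + h / 2) / img_h
    return [x_center, y_center, w / img_w, h / img_h]

# example: 640x480 image, COCO box at (100, 50) sized 200x100
print(coco_box_to_yolo([100, 50, 200, 100], 640, 480))
# -> [0.3125, 0.2083..., 0.3125, 0.2083...]
```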
Using this script, you can convert the COCO segmentation format to the YOLO segmentation format. RectLabel is an offline image annotation tool for object detection and segmentation.
Thanks, I will check out the JSON2YOLO script. I will report back if I see any trouble.

Did you see any trouble? @ichsan2895
Sorry for the slow response. Yes, JSON2YOLO (https://github.com/ultralytics/JSON2YOLO) failed to work on my computer. The log seemed successful, but the label/annotation txt files were not found. I'm not sure what happened. The COCO dataset was made with the labelme annotator, so it has the following directory structure:

Fortunately, after one week of debugging, I created a Jupyter notebook that mixes code from JSON2YOLO and Stack Overflow to convert COCO to YOLO, and you can download it here. Just change the last cell as desired.
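For anyone who wants to script this directly, here is a minimal sketch of the core conversion step for instance segmentation: it reads a COCO JSON, normalizes each polygon by the image size, and writes one YOLO-style label line per object (class index followed by x y pairs). The file paths and the class-index mapping are assumptions you would adapt to your own dataset:

```python
import json
from collections import defaultdict
from pathlib import Path

def coco_seg_to_yolo(coco_json="annotations.json", out_dir="labels"):
    """Sketch: write YOLO segmentation labels (class x1 y1 x2 y2 ...) from a COCO file."""
    data = json.loads(Path(coco_json).read_text())
    images = {img["id"]: img for img in data["images"]}
    # map COCO category ids to contiguous 0-based YOLO class indices
    cat_to_cls = {c["id"]: i for i, c in enumerate(sorted(data["categories"], key=lambda c: c["id"]))}

    lines = defaultdict(list)
    for ann in data["annotations"]:
        seg = ann.get("segmentation")
        if not seg or ann.get("iscrowd", 0):  # skip RLE/crowd regions in this sketch
            continue
        img = images[ann["image_id"]]
        w, h = img["width"], img["height"]
        poly = seg[0]  # first polygon only; multi-part polygons need merging
        norm = [f"{v / w:.6f}" if i % 2 == 0 else f"{v / h:.6f}" for i, v in enumerate(poly)]
        lines[img["file_name"]].append(f"{cat_to_cls[ann['category_id']]} " + " ".join(norm))

    Path(out_dir).mkdir(parents=True, exist_ok=True)
    for file_name, rows in lines.items():
        (Path(out_dir) / (Path(file_name).stem + ".txt")).write_text("\n".join(rows) + "\n")
```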
The Python notebook worked perfectly for me, thank you!

Can you share a sample COCO JSON file? This code didn't work for me.
Sure, I have made this sample COCO JSON with labelme. Please take a look.
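For readers who have never looked inside a COCO file, a minimal illustration of the instance-segmentation structure that converters like the notebook above expect (field names follow the standard COCO spec; the concrete values are made up):

```python
# A minimal COCO instance-segmentation file, shown as a Python dict.
# "segmentation" holds flattened polygon coordinates [x1, y1, x2, y2, ...] in pixels.
sample_coco = {
    "images": [
        {"id": 1, "file_name": "example.jpg", "width": 640, "height": 480}
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,
            "category_id": 1,
            "segmentation": [[100.0, 50.0, 300.0, 50.0, 300.0, 150.0, 100.0, 150.0]],
            "bbox": [100.0, 50.0, 200.0, 100.0],  # [x_min, y_min, width, height]
            "area": 20000.0,
            "iscrowd": 0,
        }
    ],
    "categories": [
        {"id": 1, "name": "car", "supercategory": "vehicle"}
    ],
}
```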
Hi @ichsan2895,
I'm sure there are many apps in the Supervisely ecosystem that can help solve your tasks.
👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help. For additional resources and information, please see the links below:
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed! Thank you for your contributions to YOLO 🚀 and Vision AI ⭐
Hi 👋🏻 I'm probably late to the party, but you can convert between formats with supervision:

import supervision as sv

sv.DetectionDataset.from_coco(
    images_directory_path='...',
    annotations_path='...',
    force_masks=True
).as_yolo(
    images_directory_path='...',
    annotations_directory_path='...',
    data_yaml_path='...'
)
@SkalskiP thanks for sharing your solution! We appreciate your input and contribution to the YOLOv5 community. Your code snippet using the supervision library will be a helpful reference for anyone looking to convert between formats.
Is segmentation supported?

@lonngxiang yes, the force_masks=True argument in the snippet above is there to carry segmentation masks through the conversion.
We updated our general_json2yolo.py script so that RLE masks with holes can be converted to the YOLO segmentation format correctly.
@ryouchinsa thank you for sharing the update to the general_json2yolo.py script. This will be helpful for anyone converting RLE masks with holes.
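For context on what such a conversion involves, here is a rough sketch (not the actual general_json2yolo.py code) of decoding a COCO RLE mask into normalized polygon points with pycocotools and OpenCV. It assumes the RLE is already in compressed form and ignores holes, which is exactly the hard part the script update handles:

```python
import cv2
import numpy as np
from pycocotools import mask as mask_utils

def rle_to_yolo_polygons(rle, img_w, img_h):
    """Decode a compressed COCO RLE dict {'size': [h, w], 'counts': ...} and
    return normalized polygons (outer contours only; holes are ignored here)."""
    binary = np.ascontiguousarray(mask_utils.decode(rle))  # (H, W) uint8 mask
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    polygons = []
    for c in contours:
        if len(c) < 3:  # a polygon needs at least 3 points
            continue
        pts = c.reshape(-1, 2).astype(float)
        pts[:, 0] /= img_w  # normalize x
        pts[:, 1] /= img_h  # normalize y
        polygons.append(pts.flatten().tolist())
    return polygons
```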
Thanks, I tried using this dataset with 1 label: https://universe.roboflow.com/naumov-igor-segmentation/car-segmetarion. But when I use the coco2yolo script, I get 2 labels, and I don't know why.
@lonngxiang it looks like the issue might be related to the conversion process. One possibility is that the COCO dataset includes multiple categories, leading to the creation of multiple labels during the conversion. You may want to review the original COCO annotations and ensure that only the desired category (in this case, "car") is included. Additionally, you might want to inspect the category mapping produced by the conversion script; a quick check is sketched below. Feel free to reach out if you have further questions or need additional assistance!
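A quick way to see why two labels appear is to print the categories actually present in the downloaded COCO file and how many annotations reference each one. The file name below is a placeholder for whatever annotation JSON your export contains:

```python
import json
from collections import Counter

# placeholder path; point this at the annotation JSON from your download
with open("annotations.json") as f:
    data = json.load(f)

print("categories:", [(c["id"], c["name"]) for c in data["categories"]])
print("annotations per category:", Counter(a["category_id"] for a in data["annotations"]))
```

One common cause of a surprise second class is an extra placeholder or supercategory entry in the export's categories list; if that is the case here, filtering it out before (or after) conversion should leave only the "car" class.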
Thanks, but how do I use this script with the downloaded dataset?
@lonngxiang I understand your question, but as an open-source contributor, I am unable to guide you on using specific third-party datasets, such as the one from Roboflow, as I am not associated with them. I recommend referencing the documentation or support resources provided by Roboflow for guidance on using their datasets with the conversion script.
Thanks. I have utilized the Supervision library to convert the COCO segmentation format to the YOLO format. However, when I ran the Ultralytics command, the results were not as expected.
Hi @lonngxiang 👋🏻, I'm the creator of Supervision. Have you been able to solve your conversion problem?
Yes, but sv.DetectionDataset.from_coco().as_yolo() did not work for the YOLO segmentation format. I finally fixed it by using the method in https://github.com/ultralytics/JSON2YOLO/blob/master/general_json2yolo.py
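For anyone following the same route, a sketch of calling the converter in that script directly is below. The function name and arguments reflect how general_json2yolo.py has typically been structured, but they are an assumption here, so verify them against your checkout of the repo:

```python
# Assumed entry point of JSON2YOLO's general_json2yolo.py; verify against your copy.
from general_json2yolo import convert_coco_json

convert_coco_json(
    "path/to/coco/annotations",  # directory containing the COCO *.json files
    use_segments=True,           # write polygon (segmentation) labels instead of boxes
    cls91to80=False,             # only relevant for the original 91-class COCO mapping
)
# YOLO-format .txt label files (one per image) are written to an output folder.
```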
@lonngxiang it’s great to hear that you found a solution! If you have any other questions or encounter more issues in the future, feel free to ask. We’re here to help. Good luck with your project!
Hi @SkalskiP,
@ryouchinsa, regarding this part of the polygon-merging code in the script:

for k in range(2):
    # forward connection
    if k == 0:
        # idx_list: [[5], [12, 0], [7]]
        for i, idx in enumerate(idx_list):
            # middle segments have two indexes
            # reverse the index of middle segments
            # every segment except the first and the last has two indexes
            # idx_list = [ [p], [p, q], [p, q], ... , [q]]
            if len(idx) == 2 and idx[0] > idx[1]:
                idx = idx[::-1]
                # segments[i] : (N, 2)
                segments[i] = segments[i][::-1, :]
            segments[i] = np.roll(segments[i], -idx[0], axis=0)
            segments[i] = np.concatenate([segments[i], segments[i][:1]])
            # deal with the first segment and the last one
            if i in [0, len(idx_list) - 1]:
                s.append(segments[i])
            else:
                idx = [0, idx[1] - idx[0]]
                s.append(segments[i][idx[0] : idx[1] + 1])
    else:
        for i in range(len(idx_list) - 1, -1, -1):
            if i not in [0, len(idx_list) - 1]:
                idx = idx_list[i]
                nidx = abs(idx[1] - idx[0])
                s.append(segments[i][nidx:])
return s

Well, in the end it depends on how the training code parses it, but I'm curious whether that method is efficient.
Hi @youngjae-avikus,

For example, we are going to merge 3 polygons into one polygon.

k == 0: forward
k == 1: backward

If you have any questions, please let us know.
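To illustrate the general idea being discussed, here is a simplified, hypothetical sketch (not the actual merge_multi_segment implementation): YOLO segmentation labels hold a single polygon per object, so multi-part masks are spliced into one point list through the closest pair of vertices, roughly like this:

```python
import numpy as np

def merge_two_polygons(poly_a, poly_b):
    """Splice polygon B into polygon A at their closest pair of vertices.

    poly_a, poly_b: (N, 2) and (M, 2) arrays of xy points.
    Returns a single point list that traces A up to the closest vertex,
    detours around the whole of B, then finishes A.
    """
    # pairwise distances between every vertex of A and every vertex of B
    d = np.linalg.norm(poly_a[:, None, :] - poly_b[None, :, :], axis=2)
    i, j = np.unravel_index(np.argmin(d), d.shape)
    # rotate B so it starts at its closest vertex, and close the ring
    b = np.roll(poly_b, -j, axis=0)
    b = np.concatenate([b, b[:1]])
    # walk A up to vertex i, go around B, then resume A from vertex i
    return np.concatenate([poly_a[: i + 1], b, poly_a[i:]])

# two squares that should end up as one YOLO polygon
a = np.array([[0, 0], [4, 0], [4, 4], [0, 4]], dtype=float)
b = np.array([[6, 0], [8, 0], [8, 2], [6, 2]], dtype=float)
merged = merge_two_polygons(a, b)  # shape (10, 2)
```

Doing this pairwise for several polygons is quadratic in the number of vertices, which is usually negligible next to the cost of reading and decoding the masks themselves.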
@ryouchinsa looks correct to me.

@glenn-jocher, thanks for reviewing my explanation.

@ryouchinsa you're welcome! If you have any more questions or need further assistance, feel free to reach out. Happy to help!
Search before asking
Question
Hello, is it possible to convert a custom COCO instance segmentation dataset to a YOLOv5 instance segmentation dataset (without Roboflow), or maybe to create one from scratch?
I already checked the Train On Custom Data tutorial first, and then the Format of YOLO annotations tutorial.
Most tutorials only describe the bounding-box format and don't explain how to convert COCO to YOLO.
I can't find any tutorial for converting COCO to YOLOv5 without Roboflow.
Can somebody help me?
Thanks for sharing
Additional
No response