
2 problems happened when I used entry.py for evaluate #117

Open
xpzwzwz opened this issue Jan 5, 2024 · 2 comments
xpzwzwz commented Jan 5, 2024

When I executed the following command in the terminal on Ubuntu, two problems occurred:

  1. WARNING:datasets.registration.register_vlp_datasets:WARNING: Cannot find VLPreDataset. Make sure datasets are accessible if you want to use them for training or evaluation.
  2. AssertionError: /home/xupeng/AIprojects/Segment-Everything-Everywhere-All-At-Once-1.0/xdecoder_data/coco/panoptic_val2017/000000000139.png

First, I didn't make any changes to focalt_unicl_lang.yaml.
Second, the path "/xdecoder_data/coco/panoptic_val2017/" contains 000000000139.jpg from the COCO official website's val2017 image set, which holds 5000 .jpg images in total. Is 000000000139.jpg the same as 000000000139.png? How do I obtain 000000000139.png?

Waiting for the author's response. Thx!!!!
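For anyone hitting the same assertion, a quick sanity check can distinguish the two cases (the directory path below is hypothetical; point it at your own xdecoder_data checkout):

```python
from pathlib import Path

def count_images(panoptic_dir):
    """Count .jpg photos vs .png panoptic masks in a directory."""
    d = Path(panoptic_dir)
    return (len(list(d.glob("*.jpg"))), len(list(d.glob("*.png"))))

# Hypothetical path; adjust to your local layout.
jpgs, pngs = count_images("xdecoder_data/coco/panoptic_val2017")
print(f"{jpgs} .jpg files, {pngs} .png files")
# The loader asserts on .png masks, so a result like "5000 jpg, 0 png"
# means the folder holds the val2017 photos, not the panoptic
# ground-truth annotations the evaluator expects.
```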

The following content is the detailed information of the command and error.

CUDA_VISIBLE_DEVICES=0,1 mpirun -n 2 python entry.py evaluate \
        --conf_files configs/xdecoder/focalt_unicl_lang.yaml \
        --overrides \
        COCO.INPUT.IMAGE_SIZE 1024 \
        MODEL.DECODER.CAPTIONING.ENABLED True \
        MODEL.DECODER.RETRIEVAL.ENABLED True \
        MODEL.DECODER.GROUNDING.ENABLED True \
        COCO.TEST.BATCH_SIZE_TOTAL 2 \
        COCO.TRAIN.BATCH_SIZE_TOTAL 2 \
        COCO.TRAIN.BATCH_SIZE_PER_GPU 1 \
        VLP.TEST.BATCH_SIZE_TOTAL 2 \
        VLP.TRAIN.BATCH_SIZE_TOTAL 2 \
        VLP.TRAIN.BATCH_SIZE_PER_GPU 1 \
        MODEL.DECODER.HIDDEN_DIM 512 \
        MODEL.ENCODER.CONVS_DIM 512 \
        MODEL.ENCODER.MASK_DIM 512 \
        ADE20K.TEST.BATCH_SIZE_TOTAL 2 \
        FP16 True \
        WEIGHT True \
        RESUME_FROM /home/xupeng/AIprojects/Segment-Everything-Everywhere-All-At-Once-1.0/pretrained_model/xdecoder_focalt_last.pt

No protocol specified
WARNING:utils.arguments:Overrided COCO.INPUT.IMAGE_SIZE from 1024 to 1024
WARNING:utils.arguments:Overrided MODEL.DECODER.CAPTIONING.ENABLED from True to True
WARNING:utils.arguments:Overrided MODEL.DECODER.RETRIEVAL.ENABLED from True to True
WARNING:utils.arguments:Overrided MODEL.DECODER.GROUNDING.ENABLED from True to True
WARNING:utils.arguments:Overrided COCO.TEST.BATCH_SIZE_TOTAL from 8 to 2
WARNING:utils.arguments:Overrided COCO.TRAIN.BATCH_SIZE_TOTAL from 2 to 2
WARNING:utils.arguments:Overrided COCO.TRAIN.BATCH_SIZE_PER_GPU from 1 to 1
WARNING:utils.arguments:Overrided VLP.TEST.BATCH_SIZE_TOTAL from 3 to 2
WARNING:utils.arguments:Overrided VLP.TRAIN.BATCH_SIZE_TOTAL from 2 to 2
WARNING:utils.arguments:Overrided VLP.TRAIN.BATCH_SIZE_PER_GPU from 1 to 1
WARNING:utils.arguments:Overrided MODEL.DECODER.HIDDEN_DIM from 512 to 512
WARNING:utils.arguments:Overrided MODEL.ENCODER.CONVS_DIM from 512 to 512
WARNING:utils.arguments:Overrided MODEL.ENCODER.MASK_DIM from 512 to 512
WARNING:utils.arguments:Overrided ADE20K.TEST.BATCH_SIZE_TOTAL from 8 to 2
WARNING:utils.arguments:Overrided COCO.INPUT.IMAGE_SIZE from 1024 to 1024
WARNING:utils.arguments:Overrided MODEL.DECODER.CAPTIONING.ENABLED from True to True
WARNING:utils.arguments:Overrided MODEL.DECODER.RETRIEVAL.ENABLED from True to True
WARNING:utils.arguments:Overrided MODEL.DECODER.GROUNDING.ENABLED from True to True
WARNING:utils.arguments:Overrided COCO.TEST.BATCH_SIZE_TOTAL from 8 to 2
WARNING:utils.arguments:Overrided COCO.TRAIN.BATCH_SIZE_TOTAL from 2 to 2
WARNING:utils.arguments:Overrided COCO.TRAIN.BATCH_SIZE_PER_GPU from 1 to 1
WARNING:utils.arguments:Overrided VLP.TEST.BATCH_SIZE_TOTAL from 3 to 2
WARNING:utils.arguments:Overrided VLP.TRAIN.BATCH_SIZE_TOTAL from 2 to 2
WARNING:utils.arguments:Overrided VLP.TRAIN.BATCH_SIZE_PER_GPU from 1 to 1
WARNING:utils.arguments:Overrided MODEL.DECODER.HIDDEN_DIM from 512 to 512
WARNING:utils.arguments:Overrided MODEL.ENCODER.CONVS_DIM from 512 to 512
WARNING:utils.arguments:Overrided MODEL.ENCODER.MASK_DIM from 512 to 512
WARNING:utils.arguments:Overrided ADE20K.TEST.BATCH_SIZE_TOTAL from 8 to 2
INFO:trainer.distributed_trainer:Setting SAVE_DIR as ../../data/output/test
INFO:trainer.distributed_trainer:Using CUDA
WARNING:trainer.utils.mpi_adapter:----------------
WARNING:trainer.utils.mpi_adapter:MPI Adapter data
WARNING:trainer.utils.mpi_adapter:----------------
WARNING:trainer.utils.mpi_adapter:environment info: single-node AML or other MPI environment
WARNING:trainer.utils.mpi_adapter:init method url: tcp://127.0.0.1:36873
WARNING:trainer.utils.mpi_adapter:world size: 2
WARNING:trainer.utils.mpi_adapter:local size: 2
WARNING:trainer.utils.mpi_adapter:rank: 0
WARNING:trainer.utils.mpi_adapter:local rank: 0
WARNING:trainer.utils.mpi_adapter:master address: 127.0.0.1
WARNING:trainer.utils.mpi_adapter:master port: 36873
WARNING:trainer.utils.mpi_adapter:----------------
WARNING:trainer.utils.mpi_adapter:trying to initialize process group ...
INFO:trainer.distributed_trainer:Setting SAVE_DIR as ../../data/output/test
INFO:trainer.distributed_trainer:Using CUDA
WARNING:trainer.utils.mpi_adapter:----------------
WARNING:trainer.utils.mpi_adapter:MPI Adapter data
WARNING:trainer.utils.mpi_adapter:----------------
WARNING:trainer.utils.mpi_adapter:environment info: single-node AML or other MPI environment
WARNING:trainer.utils.mpi_adapter:init method url: tcp://127.0.0.1:36873
WARNING:trainer.utils.mpi_adapter:world size: 2
WARNING:trainer.utils.mpi_adapter:local size: 2
WARNING:trainer.utils.mpi_adapter:rank: 1
WARNING:trainer.utils.mpi_adapter:local rank: 1
WARNING:trainer.utils.mpi_adapter:master address: 127.0.0.1
WARNING:trainer.utils.mpi_adapter:master port: 36873
WARNING:trainer.utils.mpi_adapter:----------------
WARNING:trainer.utils.mpi_adapter:trying to initialize process group ...
WARNING:trainer.utils.mpi_adapter:process group initialized
WARNING:trainer.utils.mpi_adapter:process group initialized
INFO:trainer.distributed_trainer:Save config file to ../../data/output/test/conf_copy.yaml
INFO:trainer.distributed_trainer:Base learning rate: 0.0001
INFO:trainer.distributed_trainer:Number of GPUs: 2
INFO:trainer.distributed_trainer:Gradient accumulation steps: 1
INFO:trainer.utils.hook:Adding global except hook for the distributed job to shutdown MPI if unhandled exception is raised on some of the ranks.
INFO:trainer.distributed_trainer:Base learning rate: 0.0001
INFO:trainer.distributed_trainer:Number of GPUs: 2
INFO:trainer.distributed_trainer:Gradient accumulation steps: 1
INFO:trainer.utils.hook:Adding global except hook for the distributed job to shutdown MPI if unhandled exception is raised on some of the ranks.
INFO:trainer.default_trainer:Imported base_dir at base_path ./
INFO:trainer.default_trainer:Imported base_dir at base_path ./
WARNING:datasets.registration.register_vlp_datasets:WARNING: Cannot find VLPreDataset. Make sure datasets are accessible if you want to use them for training or evaluation.
WARNING:datasets.registration.register_vlp_datasets:WARNING: Cannot find VLPreDataset. Make sure datasets are accessible if you want to use them for training or evaluation.
WARNING:datasets.registration.register_vlp_datasets:WARNING: Cannot find VLPreDataset. Make sure datasets are accessible if you want to use them for training or evaluation.
WARNING:datasets.registration.register_vlp_datasets:WARNING: Cannot find VLPreDataset. Make sure datasets are accessible if you want to use them for training or evaluation.
WARNING:datasets.registration.register_vlp_datasets:WARNING: Cannot find VLPreDataset. Make sure datasets are accessible if you want to use them for training or evaluation.
WARNING:datasets.registration.register_vlp_datasets:WARNING: Cannot find VLPreDataset. Make sure datasets are accessible if you want to use them for training or evaluation.
WARNING:datasets.registration.register_vlp_datasets:WARNING: Cannot find VLPreDataset. Make sure datasets are accessible if you want to use them for training or evaluation.
WARNING:datasets.registration.register_vlp_datasets:WARNING: Cannot find VLPreDataset. Make sure datasets are accessible if you want to use them for training or evaluation.
WARNING:datasets.registration.register_vlp_datasets:WARNING: Cannot find VLPreDataset. Make sure datasets are accessible if you want to use them for training or evaluation.
WARNING:datasets.registration.register_vlp_datasets:WARNING: Cannot find VLPreDataset. Make sure datasets are accessible if you want to use them for training or evaluation.
INFO:trainer.default_trainer:Pipeline for training: XDecoderPipeline
INFO:trainer.default_trainer:-----------------------------------------------
INFO:trainer.default_trainer:Evaluating model ...
cfg, here is modeling.language:
{'PIPELINE': 'XDecoderPipeline', 'TRAINER': 'xdecoder', 'SAVE_DIR': '../../data/output/test', 'base_path': './', 'RESUME': False, 'WEIGHT': True, 'RESET_DATA_LOADER': False, 'RESUME_FROM': '/home/xupeng/AIprojects/Segment-Everything-Everywhere-All-At-Once-1.0/pretrained_model/xdecoder_focalt_last.pt', 'EVAL_AT_START': False, 'WANDB': False, 'LOG_EVERY': 100, 'FIND_UNUSED_PARAMETERS': False, 'FP16': True, 'PORT': '36873', 'LOADER': {'JOINT': True, 'KEY_DATASET': 'coco'}, 'VERBOSE': True, 'MODEL': {'NAME': 'xdecoder_model', 'HEAD': 'xdecoder_head', 'MASK_ON': False, 'KEYPOINT_ON': False, 'LOAD_PROPOSALS': False, 'DIM_PROJ': 512, 'BACKBONE_DIM': 768, 'TEXT': {'ARCH': 'vlpencoder', 'NAME': 'transformer', 'TOKENIZER': 'clip', 'CONTEXT_LENGTH': 77, 'WIDTH': 512, 'HEADS': 8, 'LAYERS': 12, 'AUTOGRESSIVE': True}, 'BACKBONE': {'NAME': 'focal_dw', 'PRETRAINED': '', 'LOAD_PRETRAINED': False, 'FOCAL': {'PRETRAIN_IMG_SIZE': 224, 'PATCH_SIZE': 4, 'EMBED_DIM': 96, 'DEPTHS': [2, 2, 6, 2], 'FOCAL_LEVELS': [3, 3, 3, 3], 'FOCAL_WINDOWS': [3, 3, 3, 3], 'DROP_PATH_RATE': 0.3, 'MLP_RATIO': 4.0, 'DROP_RATE': 0.0, 'PATCH_NORM': True, 'USE_CONV_EMBED': True, 'SCALING_MODULATOR': True, 'USE_CHECKPOINT': False, 'USE_POSTLN': True, 'USE_POSTLN_IN_MODULATION': False, 'USE_LAYERSCALE': True, 'OUT_FEATURES': ['res2', 'res3', 'res4', 'res5'], 'OUT_INDICES': [0, 1, 2, 3]}}, 'ENCODER': {'NAME': 'transformer_encoder_fpn', 'IGNORE_VALUE': 255, 'NUM_CLASSES': 133, 'LOSS_WEIGHT': 1.0, 'CONVS_DIM': 512, 'MASK_DIM': 512, 'NORM': 'GN', 'IN_FEATURES': ['res2', 'res3', 'res4', 'res5'], 'DEFORMABLE_TRANSFORMER_ENCODER_IN_FEATURES': ['res3', 'res4', 'res5'], 'COMMON_STRIDE': 4, 'TRANSFORMER_ENC_LAYERS': 6}, 'DECODER': {'NAME': 'xdecoder', 'TRANSFORMER_IN_FEATURE': 'multi_scale_pixel_decoder', 'MASK': True, 'GROUNDING': {'ENABLED': True, 'MAX_LEN': 5, 'TEXT_WEIGHT': 2.0, 'CLASS_WEIGHT': 0.5}, 'DETECTION': False, 'CAPTION': {'ENABLED': True, 'PHRASE_PROB': 0.0, 'SIM_THRES': 0.95}, 'CAPTIONING': {'ENABLED': True, 
'STEP': 50}, 'RETRIEVAL': {'ENABLED': True, 'DIM_IMG': 768, 'ENSEMBLE': True}, 'DEEP_SUPERVISION': True, 'NO_OBJECT_WEIGHT': 0.1, 'CAPTION_WEIGHT': 1.0, 'CAPTIONING_WEIGHT': 2.0, 'RETRIEVAL_WEIGHT': 2.0, 'BACKBONER_WEIGHT': 8.0, 'GCLASS_WEIGHT': 0.4, 'GMASK_WEIGHT': 1.0, 'GDICE_WEIGHT': 1.0, 'OCLASS_WEIGHT': 0.4, 'OMASK_WEIGHT': 1.0, 'ODICE_WEIGHT': 1.0, 'CLASS_WEIGHT': 2.0, 'MASK_WEIGHT': 5.0, 'DICE_WEIGHT': 5.0, 'BBOX_WEIGHT': 5.0, 'GIOU_WEIGHT': 2.0, 'HIDDEN_DIM': 512, 'NUM_OBJECT_QUERIES': 101, 'NHEADS': 8, 'DROPOUT': 0.0, 'DIM_FEEDFORWARD': 2048, 'PRE_NORM': False, 'ENFORCE_INPUT_PROJ': False, 'SIZE_DIVISIBILITY': 32, 'TRAIN_NUM_POINTS': 12544, 'OVERSAMPLE_RATIO': 3.0, 'IMPORTANCE_SAMPLE_RATIO': 0.75, 'DEC_LAYERS': 10, 'TOP_GROUNDING_LAYERS': 3, 'TOP_CAPTION_LAYERS': 3, 'TOP_CAPTIONING_LAYERS': 3, 'TOP_RETRIEVAL_LAYERS': 3, 'TEST': {'SEMANTIC_ON': True, 'INSTANCE_ON': True, 'PANOPTIC_ON': True, 'OVERLAP_THRESHOLD': 0.8, 'OBJECT_MASK_THRESHOLD': 0.8, 'SEM_SEG_POSTPROCESSING_BEFORE_INFERENCE': False}}}, 'COCO': {'INPUT': {'MIN_SIZE_TRAIN': 800, 'MAX_SIZE_TRAIN': 1333, 'MIN_SIZE_TRAIN_SAMPLING': 'choice', 'MIN_SIZE_TEST': 800, 'MAX_SIZE_TEST': 1333, 'IMAGE_SIZE': 1024, 'MIN_SCALE': 0.1, 'MAX_SCALE': 2.0, 'DATASET_MAPPER_NAME': 'coco_panoptic_lsj', 'IGNORE_VALUE': 255, 'COLOR_AUG_SSD': False, 'SIZE_DIVISIBILITY': 32, 'RANDOM_FLIP': 'horizontal', 'MASK_FORMAT': 'polygon', 'FORMAT': 'RGB', 'CROP': {'ENABLED': True}}, 'DATASET': {'DATASET': 'coco'}, 'TEST': {'DETECTIONS_PER_IMAGE': 100, 'NAME': 'coco_eval', 'IOU_TYPE': ['bbox', 'segm'], 'USE_MULTISCALE': False, 'BATCH_SIZE_TOTAL': 2, 'MODEL_FILE': '', 'AUG': {'ENABLED': False}}, 'TRAIN': {'ASPECT_RATIO_GROUPING': True, 'BATCH_SIZE_TOTAL': 2, 'BATCH_SIZE_PER_GPU': 1, 'SHUFFLE': True}, 'DATALOADER': {'FILTER_EMPTY_ANNOTATIONS': False, 'NUM_WORKERS': 2, 'LOAD_PROPOSALS': False, 'SAMPLER_TRAIN': 'TrainingSampler', 'ASPECT_RATIO_GROUPING': True}}, 'VLP': {'INPUT': {'IMAGE_SIZE': 224, 'DATASET_MAPPER_NAME': 'vlpretrain', 
'IGNORE_VALUE': 255, 'COLOR_AUG_SSD': False, 'SIZE_DIVISIBILITY': 32, 'MASK_FORMAT': 'polygon', 'FORMAT': 'RGB', 'CROP': {'ENABLED': True}}, 'TRAIN': {'BATCH_SIZE_TOTAL': 2, 'BATCH_SIZE_PER_GPU': 1}, 'TEST': {'BATCH_SIZE_TOTAL': 2}, 'DATALOADER': {'FILTER_EMPTY_ANNOTATIONS': False, 'NUM_WORKERS': 16, 'LOAD_PROPOSALS': False, 'SAMPLER_TRAIN': 'TrainingSampler', 'ASPECT_RATIO_GROUPING': True}}, 'INPUT': {'PIXEL_MEAN': [123.675, 116.28, 103.53], 'PIXEL_STD': [58.395, 57.12, 57.375]}, 'DATASETS': {'TRAIN': ['coco_2017_train_panoptic_filtall_with_sem_seg_caption_grounding', 'vlp_train'], 'TEST': ['coco_2017_val_panoptic_with_sem_seg', 'vlp_captioning_val', 'refcocog_val_umd', 'vlp_val', 'ade20k_panoptic_val'], 'SIZE_DIVISIBILITY': 32, 'PROPOSAL_FILES_TRAIN': []}, 'DATALOADER': {'FILTER_EMPTY_ANNOTATIONS': False, 'NUM_WORKERS': 16, 'LOAD_PROPOSALS': False, 'SAMPLER_TRAIN': 'TrainingSampler', 'ASPECT_RATIO_GROUPING': True}, 'SOLVER': {'BASE_LR': 0.0001, 'STEPS': [0.88889, 0.96296], 'MAX_ITER': 1, 'GAMMA': 0.1, 'WARMUP_FACTOR': 1.0, 'WARMUP_ITERS': 10, 'WARMUP_METHOD': 'linear', 'WEIGHT_DECAY': 0.05, 'OPTIMIZER': 'ADAMW', 'LR_SCHEDULER_NAME': 'WarmupMultiStepLR', 'LR_MULTIPLIER': {'backbone': 0.1, 'lang_encoder': 0.1}, 'WEIGHT_DECAY_NORM': 0.0, 'WEIGHT_DECAY_EMBED': 0.0, 'CLIP_GRADIENTS': {'ENABLED': True, 'CLIP_TYPE': 'full_model', 'CLIP_VALUE': 5.0, 'NORM_TYPE': 2.0}, 'AMP': {'ENABLED': True}, 'MAX_NUM_EPOCHS': 50}, 'ADE20K': {'INPUT': {'MIN_SIZE_TRAIN': 640, 'MIN_SIZE_TRAIN_SAMPLING': 'choice', 'MIN_SIZE_TEST': 640, 'MAX_SIZE_TRAIN': 2560, 'MAX_SIZE_TEST': 2560, 'MASK_FORMAT': 'polygon', 'CROP': {'ENABLED': True, 'TYPE': 'absolute', 'SIZE': '(640, 640)', 'SINGLE_CATEGORY_MAX_AREA': 1.0}, 'COLOR_AUG_SSD': True, 'SIZE_DIVISIBILITY': 640, 'DATASET_MAPPER_NAME': 'mask_former_panoptic', 'FORMAT': 'RGB'}, 'DATASET': {'DATASET': 'ade'}, 'TEST': {'BATCH_SIZE_TOTAL': 2}}, 'REF': {'INPUT': {'PIXEL_MEAN': [123.675, 116.28, 103.53], 'PIXEL_STD': [58.395, 57.12, 57.375], 
'MIN_SIZE_TEST': 512, 'MAX_SIZE_TEST': 1024, 'FORMAT': 'RGB'}, 'DATALOADER': {'FILTER_EMPTY_ANNOTATIONS': False, 'NUM_WORKERS': 0, 'LOAD_PROPOSALS': False, 'SAMPLER_TRAIN': 'TrainingSampler', 'ASPECT_RATIO_GROUPING': False}, 'TEST': {'BATCH_SIZE_TOTAL': 1}}, 'SUN': {'INPUT': {'PIXEL_MEAN': [123.675, 116.28, 103.53], 'PIXEL_STD': [58.395, 57.12, 57.375], 'MIN_SIZE_TEST': 512, 'MAX_SIZE_TEST': 1024}, 'DATALOADER': {'FILTER_EMPTY_ANNOTATIONS': False, 'NUM_WORKERS': 0, 'LOAD_PROPOSALS': False, 'SAMPLER_TRAIN': 'TrainingSampler', 'ASPECT_RATIO_GROUPING': False}, 'TEST': {'BATCH_SIZE_TOTAL': 8}}, 'SCAN': {'INPUT': {'PIXEL_MEAN': [123.675, 116.28, 103.53], 'PIXEL_STD': [58.395, 57.12, 57.375], 'MIN_SIZE_TEST': 512, 'MAX_SIZE_TEST': 1024}, 'DATALOADER': {'FILTER_EMPTY_ANNOTATIONS': False, 'NUM_WORKERS': 0, 'LOAD_PROPOSALS': False, 'SAMPLER_TRAIN': 'TrainingSampler', 'ASPECT_RATIO_GROUPING': False}, 'TEST': {'BATCH_SIZE_TOTAL': 8}}, 'BDD': {'INPUT': {'PIXEL_MEAN': [123.675, 116.28, 103.53], 'PIXEL_STD': [58.395, 57.12, 57.375], 'MIN_SIZE_TEST': 800, 'MAX_SIZE_TEST': 1333}, 'DATALOADER': {'FILTER_EMPTY_ANNOTATIONS': False, 'NUM_WORKERS': 0, 'LOAD_PROPOSALS': False, 'SAMPLER_TRAIN': 'TrainingSampler', 'ASPECT_RATIO_GROUPING': False}, 'TEST': {'BATCH_SIZE_TOTAL': 8}}, 'CITY': {'INPUT': {'MIN_SIZE_TRAIN': 1024, 'MIN_SIZE_TRAIN_SAMPLING': 'choice', 'MIN_SIZE_TEST': 1024, 'MAX_SIZE_TRAIN': 4096, 'MAX_SIZE_TEST': 2048, 'CROP': {'ENABLED': True, 'TYPE': 'absolute', 'SIZE': '(512, 1024)', 'SINGLE_CATEGORY_MAX_AREA': 1.0}, 'COLOR_AUG_SSD': True, 'SIZE_DIVISIBILITY': -1, 'FORMAT': 'RGB', 'DATASET_MAPPER_NAME': 'mask_former_panoptic', 'MASK_FORMAT': 'polygon'}, 'TEST': {'EVAL_PERIOD': 5000, 'BATCH_SIZE_TOTAL': 8, 'AUG': {'ENABLED': False, 'MIN_SIZES': [512, 768, 1024, 1280, 1536, 1792], 'MAX_SIZE': 4096, 'FLIP': True}}, 'DATALOADER': {'FILTER_EMPTY_ANNOTATIONS': True, 'NUM_WORKERS': 4}}, 'command': 'evaluate', 'conf_files': ['configs/xdecoder/focalt_unicl_lang.yaml'], 'overrides': 
['COCO.INPUT.IMAGE_SIZE', '1024', 'MODEL.DECODER.CAPTIONING.ENABLED', 'True', 'MODEL.DECODER.RETRIEVAL.ENABLED', 'True', 'MODEL.DECODER.GROUNDING.ENABLED', 'True', 'COCO.TEST.BATCH_SIZE_TOTAL', '2', 'COCO.TRAIN.BATCH_SIZE_TOTAL', '2', 'COCO.TRAIN.BATCH_SIZE_PER_GPU', '1', 'VLP.TEST.BATCH_SIZE_TOTAL', '2', 'VLP.TRAIN.BATCH_SIZE_TOTAL', '2', 'VLP.TRAIN.BATCH_SIZE_PER_GPU', '1', 'MODEL.DECODER.HIDDEN_DIM', '512', 'MODEL.ENCODER.CONVS_DIM', '512', 'MODEL.ENCODER.MASK_DIM', '512', 'ADE20K.TEST.BATCH_SIZE_TOTAL', '2', 'FP16', 'True', 'WEIGHT', 'True', 'RESUME_FROM', '/home/xupeng/AIprojects/Segment-Everything-Everywhere-All-At-Once-1.0/pretrained_model/xdecoder_focalt_last.pt'], 'world_size': 2, 'local_size': 2, 'rank': 1, 'local_rank': 1, 'CUDA': True, 'GRADIENT_ACCUMULATE_STEP': 1, 'EVAL_PER_UPDATE_NUM': 0, 'LR_SCHEDULER_PARAMS': {}, 'device': device(type='cuda', index=1), 'BASENAME': 'focalt_unicl_lang.yaml'}
INFO:trainer.default_trainer:Evaluation start ...
Traceback (most recent call last):
  File "entry.py", line 75, in <module>
    main()
  File "entry.py", line 70, in main
    trainer.eval()
  File "/home/xupeng/AIprojects/Segment-Everything-Everywhere-All-At-Once-1.0/trainer/default_trainer.py", line 79, in eval
    results = self._eval_on_set(self.save_folder)
  File "/home/xupeng/AIprojects/Segment-Everything-Everywhere-All-At-Once-1.0/trainer/default_trainer.py", line 87, in _eval_on_set
    results = self.pipeline.evaluate_model(self, save_folder)
  File "./pipeline/XDecoderPipeline.py", line 122, in evaluate_model
    eval_batch_gen = self.get_dataloaders(trainer, dataset_label, is_evaluation=True)
  File "./pipeline/XDecoderPipeline.py", line 60, in get_dataloaders
    dataloaders = build_eval_dataloader(self._opt)
  File "/home/xupeng/AIprojects/Segment-Everything-Everywhere-All-At-Once-1.0/datasets/build.py", line 421, in build_eval_dataloader
    dataloaders += [build_detection_test_loader(cfg, dataset_name, mapper=mapper)]
  File "/home/xupeng/AIprojects/Segment-Everything-Everywhere-All-At-Once-1.0/modeling/utils/config.py", line 84, in wrapped
    explicit_args = _get_args_from_config(from_config, *args, **kwargs)
  File "/home/xupeng/AIprojects/Segment-Everything-Everywhere-All-At-Once-1.0/modeling/utils/config.py", line 137, in _get_args_from_config
    ret = from_config_func(*args, **kwargs)
  File "/home/xupeng/AIprojects/Segment-Everything-Everywhere-All-At-Once-1.0/datasets/build.py", line 154, in _test_loader_from_config
    dataset = get_detection_dataset_dicts(
  File "/home/xupeng/AIprojects/Segment-Everything-Everywhere-All-At-Once-1.0/datasets/build.py", line 124, in get_detection_dataset_dicts
    dataset_dicts = [DatasetCatalog.get(dataset_name) for dataset_name in dataset_names]
  File "/home/xupeng/AIprojects/Segment-Everything-Everywhere-All-At-Once-1.0/datasets/build.py", line 124, in <listcomp>
    dataset_dicts = [DatasetCatalog.get(dataset_name) for dataset_name in dataset_names]
  File "/home/xupeng/anaconda3/envs/ape/lib/python3.8/site-packages/detectron2/data/catalog.py", line 58, in get
    return f()
  File "/home/xupeng/AIprojects/Segment-Everything-Everywhere-All-At-Once-1.0/datasets/registration/register_coco_panoptic_annos_semseg.py", line 145, in <lambda>
    lambda: load_coco_panoptic_json(panoptic_json, image_root, panoptic_root, sem_seg_root, metadata),
  File "/home/xupeng/AIprojects/Segment-Everything-Everywhere-All-At-Once-1.0/datasets/registration/register_coco_panoptic_annos_semseg.py", line 124, in load_coco_panoptic_json
    assert PathManager.isfile(ret[0]["pan_seg_file_name"]), ret[0]["pan_seg_file_name"]
AssertionError: /home/xupeng/AIprojects/Segment-Everything-Everywhere-All-At-Once-1.0/xdecoder_data/coco/panoptic_val2017/000000000139.png
WARNING:trainer.utils.hook:
************************************
WARNING:trainer.utils.hook:DefaultTrainer:
WARNING:trainer.utils.hook: Uncaught exception on rank 1.
WARNING:trainer.utils.hook: Calling MPI_Abort() to shut down MPI...
WARNING:trainer.utils.hook:******************************************

MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.

@MaureenZOU (Collaborator)

Please refer to Mask2Former for panoptic segmentation dataset preparation: https://github.com/facebookresearch/Mask2Former
xxx.png is the panoptic ground truth.

xpzwzwz (Author) commented Jan 5, 2024

@MaureenZOU Thx!
