
How to add evaluation? #94

Closed
wangyakunn opened this issue Jun 9, 2023 · 6 comments
@wangyakunn

Thanks a lot!
I found "# no evaluation" at line 357 of custom_dataset.py, which means this is a version without evaluation. How can I modify it to add evaluation?


@OrangeSodahub
Owner

Yes, we didn't produce the validation datasets or the evaluation script. You could refer to pcdet/tools/pred.py for how to get custom predictions, and to the other evaluation scripts in pcdet for the evaluation metric calculation.
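
For illustration, a minimal sketch of the prediction step, assuming the standard OpenPCDet inference API (pred.py in this repo may differ in detail; the config path and checkpoint name below are placeholders):

    import torch

    from pcdet.config import cfg, cfg_from_yaml_file
    from pcdet.datasets import build_dataloader
    from pcdet.models import build_network, load_data_to_gpu
    from pcdet.utils import common_utils

    # Hypothetical paths -- replace with your own config and checkpoint.
    cfg_from_yaml_file('tools/cfgs/custom_models/my_model.yaml', cfg)
    logger = common_utils.create_logger()

    # training=False builds the loader over the dataset's 'test' split
    test_set, test_loader, _ = build_dataloader(
        dataset_cfg=cfg.DATA_CONFIG, class_names=cfg.CLASS_NAMES,
        batch_size=1, dist=False, workers=4, logger=logger, training=False
    )

    model = build_network(model_cfg=cfg.MODEL, num_class=len(cfg.CLASS_NAMES),
                          dataset=test_set)
    model.load_params_from_file(filename='my_ckpt.pth', logger=logger, to_cpu=False)
    model.cuda().eval()

    det_annos = []
    with torch.no_grad():
        for batch_dict in test_loader:
            load_data_to_gpu(batch_dict)
            pred_dicts, _ = model(batch_dict)  # boxes, scores, labels per frame
            # Convert raw predictions into the dataset's annotation format
            det_annos += test_set.generate_prediction_dicts(
                batch_dict, pred_dicts, cfg.CLASS_NAMES
            )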

@OrangeSodahub
Owner

BTW, we use our own custom evaluation tools here; however, they are not recommended for you. You'd better refer to the official evaluation tools from KITTI, Waymo, etc.
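
For reference, a hedged example of calling the KITTI evaluation code that OpenPCDet bundles (gt_annos and det_annos are assumed to be KITTI-format annotation lists, e.g. ground truth from your infos and detections from generate_prediction_dicts() as sketched above):

    from pcdet.datasets.kitti.kitti_object_eval_python import eval as kitti_eval

    # gt_annos: ground-truth annotations in KITTI format (one dict per frame)
    # det_annos: detections in the same format -- both assumed to exist already
    result_str, result_dict = kitti_eval.get_official_eval_result(
        gt_annos, det_annos, current_classes=['Car', 'Pedestrian', 'Cyclist']
    )
    print(result_str)  # AP for bbox / BEV / 3D boxes at the official thresholds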

@wangyakunn
Author

wangyakunn commented Jun 9, 2023

Thanks for your reply!

# 2022.04.30: Remove evaluation

logger.info('**********************Start evaluation %s/%s(%s)**********************' %
            (cfg.EXP_GROUP_PATH, cfg.TAG, args.extra_tag))
# Build the validation dataloader; training=False selects the 'test' split
test_set, test_loader, sampler = build_dataloader(
    dataset_cfg=cfg.DATA_CONFIG,
    class_names=cfg.CLASS_NAMES,
    batch_size=args.batch_size,
    dist=dist_train,
    workers=args.workers,
    logger=logger,
    training=False
)
print("len(test_loader) =", len(test_loader))
eval_output_dir = output_dir / 'eval' / 'eval_with_train'
eval_output_dir.mkdir(parents=True, exist_ok=True)
# Only evaluate the last args.num_epochs_to_eval epochs
args.start_epoch = max(args.epochs - args.num_epochs_to_eval, 0)

repeat_eval_ckpt(
    model.module if dist_train else model,
    test_loader, args, eval_output_dir, logger, ckpt_dir,
    dist_test=dist_train
)
logger.info('**********************End evaluation %s/%s(%s)**********************' %
            (cfg.EXP_GROUP_PATH, cfg.TAG, args.extra_tag))

I tried to uncomment this code, but it doesn't seem to work. I found that this build_dataloader() cannot load the val data.

test_set, test_loader, sampler = build_dataloader(
    dataset_cfg=cfg.DATA_CONFIG,
    class_names=cfg.CLASS_NAMES,
    batch_size=args.batch_size,
    dist=dist_train,
    workers=args.workers,
    logger=logger,
    training=False
)

Why does it work for the train dataset but not the test dataset? And I would like to know whether it is complex to add evaluation based on your code.
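
A likely cause, sketched under the assumption that custom_dataset.py follows OpenPCDet's DatasetTemplate conventions: with training=False the dataset reads the 'test' entries of its config, so DATA_SPLIT and INFO_PATH both need a 'test' entry and the corresponding val info file must have been generated:

    # Simplified sketch (names follow OpenPCDet's DatasetTemplate/KittiDataset
    # conventions) of how a dataset picks its split; custom_dataset.py should
    # do the equivalent for 'test' if training=False is to load val data.
    class SplitSketch:
        def __init__(self, dataset_cfg, training):
            self.mode = 'train' if training else 'test'  # training=False -> 'test'
            self.split = dataset_cfg['DATA_SPLIT'][self.mode]      # e.g. 'val'
            self.info_paths = dataset_cfg['INFO_PATH'][self.mode]  # e.g. val infos

    dataset_cfg = {
        'DATA_SPLIT': {'train': 'train', 'test': 'val'},
        'INFO_PATH': {'train': ['custom_infos_train.pkl'],
                      'test': ['custom_infos_val.pkl']},
    }
    print(SplitSketch(dataset_cfg, training=False).split)  # -> val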

@OrangeSodahub
Owner

What does "I find that this build_dataloader() cannot load val data" mean?

Our code is mainly based on OpenPCDet, so adding evaluation is not difficult; you can treat our code the same as OpenPCDet and add it. Two steps are suggested: get the predictions from the model (refer to pred.py, which differs from OpenPCDet), and evaluate them (same as in OpenPCDet, but it also needs to fit your custom dataset).
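
As an illustration of the second step, a hypothetical evaluation() hook that could be added inside the custom dataset class, mirroring the convention in pcdet/datasets/kitti/kitti_dataset.py (the KITTI-format branch and the self.custom_infos attribute are assumptions, not this repo's code):

    # Hypothetical method for the custom dataset class in custom_dataset.py;
    # eval scripts in OpenPCDet call dataset.evaluation(det_annos, class_names).
    import copy

    from pcdet.datasets.kitti.kitti_object_eval_python import eval as kitti_eval

    def evaluation(self, det_annos, class_names, **kwargs):
        # Assumes self.custom_infos holds ground-truth annos in KITTI format
        # (a placeholder attribute name -- adapt to your dataset).
        gt_annos = [copy.deepcopy(info['annos']) for info in self.custom_infos]
        ap_result_str, ap_dict = kitti_eval.get_official_eval_result(
            gt_annos, det_annos, class_names
        )
        return ap_result_str, ap_dict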

@wangyakunn
Author

I'm sorry, I made a small mistake about "cannot load val data", and I have now solved it.
I carefully read the parts of the OpenPCDet code about eval. Thanks for your suggestions; I've got it.
