
Unable to use pub_test.py with v1.0-test! #22

Closed · YoushaaMurhij opened this issue Jul 30, 2020 · 21 comments

Comments

@YoushaaMurhij

I am facing this error because I am trying to evaluate on the test set. How can I view the predictions after running dist_test.py?

Traceback (most recent call last):
  File "tools/tracking/pub_test.py", line 192, in <module>
    eval_tracking()
  File "tools/tracking/pub_test.py", line 160, in eval_tracking
    args.root
  File "tools/tracking/pub_test.py", line 176, in eval
    nusc_dataroot=root_path,
  File "/home/*******/CenterPoint_ws/nuscenes-devkit/python-sdk/nuscenes/eval/tracking/evaluate.py", line 85, in __init__
    gt_boxes = load_gt(nusc, self.eval_set, TrackingBox, verbose=verbose)
  File "/home/*******/CenterPoint_ws/nuscenes-devkit/python-sdk/nuscenes/eval/common/loaders.py", line 94, in load_gt
    'Error: You are trying to evaluate on the test set but you do not have the annotations!'
AssertionError: Error: You are trying to evaluate on the test set but you do not have the annotations!

First, I ran:

python tools/dist_test.py configs/centerpoint/nusc_centerpoint_voxelnet_dcn_0075voxel_flip_testset.py --work_dir work_dirs/nusc_centerpoint_voxelnet_dcn_0075voxel_flip_testset --checkpoint work_dirs/nusc_centerpoint_voxelnet_dcn_0075voxel_flip_testset/epoch_20.pth --speed_test --testset

After that:
bash tracking_scripts/centerpoint_voxel_1440_dcn_flip_testset.sh

@tianweiy (Owner)

AssertionError: Error: You are trying to evaluate on the test set but you do not have the annotations!

This is not an error. The test set annotations are not available (you need to submit to the test server). There is no built-in function to visualize the tracking results. To see the detection output, you can comment out the if statement here

and the devkit will plot some images. That said, the visualization in the devkit is not that good (it doesn't really give a sense of your detection quality without gt annotations), so you probably want to use other tools to visualize the detection/tracking in camera view / 3D.
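For example, a minimal third-party sketch using open3d (not part of this repo) that overlays the boxes from one sample of prediction.pkl on its lidar sweep. Both paths and the 'box3d_lidar' key/layout ([x, y, z, w, l, h, ..., yaw]) are assumptions to check against your own output.

import pickle
import numpy as np
import open3d as o3d

with open('work_dirs/val/prediction.pkl', 'rb') as f:  # example path
    predictions = pickle.load(f)

token, pred = next(iter(predictions.items()))
boxes = np.asarray(pred['box3d_lidar'])  # if these are torch tensors, use .cpu().numpy()

# nuScenes lidar .bin files store (x, y, z, intensity, ring); keep xyz only
points = np.fromfile('data/nuScenes/samples/LIDAR_TOP/<file>.bin',
                     dtype=np.float32).reshape(-1, 5)[:, :3]
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)

geometries = [pcd]
for b in boxes:
    rot = o3d.geometry.get_rotation_matrix_from_axis_angle(
        np.array([0.0, 0.0, b[-1]]))  # yaw about the z axis
    obb = o3d.geometry.OrientedBoundingBox(b[:3], rot, b[3:6])
    obb.color = (1.0, 0.0, 0.0)
    geometries.append(obb)

o3d.visualization.draw_geometries(geometries)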

@tianweiy (Owner)

This function is useful for visualizing an object in the camera view: https://github.com/nutonomy/nuscenes-devkit/blob/274725ae1b3a2d921725016e3f4b383b8b218d3a/python-sdk/nuscenes/nuscenes.py#L903
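For reference, minimal usage of that helper (version and paths are examples). Note that it takes a ground-truth annotation token, so it cannot render predicted boxes directly; a later comment in this thread deals with that case.

from nuscenes.nuscenes import NuScenes

# render the first annotation of the first sample to a file
nusc = NuScenes(version='v1.0-trainval', dataroot='data/nuScenes')
sample = nusc.sample[0]
nusc.render_annotation(sample['anns'][0], out_path='ann.png')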

@YoushaaMurhij (Author)

Thanks!

@iamsiddhantsahu

@tianweiy The predicted bounding box coordinates are with respect to the lidar frame. Before transforming them into a particular camera frame, we first need to determine which camera's translation and rotation matrices to use. The render_annotation() function from the nuScenes devkit takes an annotation token as input, from which the image path is known and the bounding boxes are plotted.

But in our case, how can we determine which camera to transform the predicted bounding box coordinates into, and then plot the boxes?
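A minimal sketch of one way to decide this with the devkit: move the lidar-frame box into the global frame, then into each camera's frame, and keep the cameras where the devkit's box_in_image check passes. It assumes the prediction is already wrapped in a devkit Box in the lidar frame; the helper name is illustrative.

import numpy as np
from pyquaternion import Quaternion
from nuscenes.nuscenes import NuScenes
from nuscenes.utils.data_classes import Box
from nuscenes.utils.geometry_utils import box_in_image, BoxVisibility

CAMERAS = ['CAM_FRONT', 'CAM_FRONT_LEFT', 'CAM_FRONT_RIGHT',
           'CAM_BACK', 'CAM_BACK_LEFT', 'CAM_BACK_RIGHT']

def cameras_seeing_box(nusc: NuScenes, sample_token: str, box_lidar: Box):
    """Yield (channel, box_in_camera_frame) for every camera that sees the box."""
    sample = nusc.get('sample', sample_token)
    lidar_sd = nusc.get('sample_data', sample['data']['LIDAR_TOP'])
    lidar_cs = nusc.get('calibrated_sensor', lidar_sd['calibrated_sensor_token'])
    lidar_pose = nusc.get('ego_pose', lidar_sd['ego_pose_token'])

    for channel in CAMERAS:
        box = box_lidar.copy()
        # lidar frame -> ego frame (at the lidar timestamp)
        box.rotate(Quaternion(lidar_cs['rotation']))
        box.translate(np.array(lidar_cs['translation']))
        # ego frame -> global frame
        box.rotate(Quaternion(lidar_pose['rotation']))
        box.translate(np.array(lidar_pose['translation']))

        cam_sd = nusc.get('sample_data', sample['data'][channel])
        cam_cs = nusc.get('calibrated_sensor', cam_sd['calibrated_sensor_token'])
        cam_pose = nusc.get('ego_pose', cam_sd['ego_pose_token'])
        # global frame -> ego frame (at the camera timestamp)
        box.translate(-np.array(cam_pose['translation']))
        box.rotate(Quaternion(cam_pose['rotation']).inverse)
        # ego frame -> camera frame
        box.translate(-np.array(cam_cs['translation']))
        box.rotate(Quaternion(cam_cs['rotation']).inverse)

        intrinsic = np.array(cam_cs['camera_intrinsic'])
        if box_in_image(box, intrinsic, (cam_sd['width'], cam_sd['height']),
                        vis_level=BoxVisibility.ANY):
            yield channel, box

Once a camera is chosen, view_points(box.corners(), intrinsic, normalize=True) from the same module gives the 2D corner pixels to draw.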

@abhigoku10

@YoushaaMurhij @iamsiddhantsahu @tianweiy I am trying to run inference but am unable to find a command to run it on nuScenes. I found the command mentioned above, which uses tools/dist_test.py, but in configs I cannot find "nusc_centerpoint_voxelnet_dcn_0075voxel_flip_testset.py". Can you please share some insights on this,
or tell me which command to use to run inference on the nuScenes and Waymo datasets?
Thanks in advance

@tianweiy (Owner)

Thanks for the interest. We made some basic updates to the codebase recently. You can replace the original config with

The other commands to generate nuScenes results stay the same.

For Waymo models, we are not able to share them publicly due to the license agreement; you can send me an email to access those models. Please provide the necessary information mentioned here:
https://github.com/tianweiy/CenterPoint/tree/master/configs/waymo

tianweiy reopened this Feb 12, 2021
@YoushaaMurhij (Author)

YoushaaMurhij commented Feb 21, 2021

I used tools/dist_test.py to get predictions.pkl for the validation set, to use for tracking:
python tools/dist_test.py /home/josh94mur/centerpoint/CenterPoint/configs/nusc/pp/nusc_centerpoint_pp_02voxel_two_pfn_10sweep.py --work_dir work_dirs/val --checkpoint working_dir/val/latest.pth --speed_test --testset --gpus 2
Can I use the same script to get predictions for the test set? I got a GT-related error: KeyError: 'gt_names'.
I also want to check mAP. Can I use the resulting .pkl for that (i.e. submit it to the server)?

@tianweiy (Owner)

Yeah, to fix the bug, change the config to the following:

train_anno = "data/nuScenes/infos_train_10sweeps_withvelo_filter_True.pkl"
val_anno = "data/nuScenes/infos_val_10sweeps_withvelo_filter_True.pkl"
test_anno = "data/nuScenes/infos_test_10sweeps_withvelo_filter_True.pkl"

data = dict(
    samples_per_gpu=4,
    workers_per_gpu=8,
    train=dict(
        type=dataset_type,
        root_path=data_root,
        info_path=train_anno,
        ann_file=train_anno,
        nsweeps=nsweeps,
        class_names=class_names,
        pipeline=train_pipeline,
    ),
    val=dict(
        type=dataset_type,
        root_path=data_root,
        info_path=val_anno,
        test_mode=True,
        ann_file=val_anno,
        nsweeps=nsweeps,
        class_names=class_names,
        pipeline=test_pipeline,
    ),
    test=dict(
        type=dataset_type,
        root_path=data_root,
        info_path=test_anno,
        test_mode=True,
        ann_file=test_anno,
        nsweeps=nsweeps,
        class_names=class_names,
        pipeline=test_pipeline,
        version='v1.0-test'  # load the annotation-free test split; fixes the KeyError: 'gt_names'
    ),
)

You need to take the generated json file, zip it, and then submit it to the server.
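As a tiny illustration of that step (both file names are examples; check what the eval server expects):

import zipfile

# wrap the generated results json in a zip archive for upload
with zipfile.ZipFile('submission.zip', 'w', zipfile.ZIP_DEFLATED) as zf:
    zf.write('work_dirs/test/infos_test_10sweeps_withvelo.json',
             arcname='results.json')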

@YoushaaMurhij (Author)

YoushaaMurhij commented Feb 22, 2021

Thank you for your fast and clear answer!
The resulting .json file gave me 0.0 mAP on the server. I used infos_test_10sweeps_withvelo.pkl.
I can't figure out what's wrong! Any suggestions?

@tianweiy (Owner)

Have you first tested the model on val (dist_test.py without the --testset flag)?

@tianweiy (Owner)

If val is OK, please send me an email with your generated json file so that I can take a look.

@YoushaaMurhij (Author)

The val is OK. I will send you an e-mail!

@YoushaaMurhij (Author)

YoushaaMurhij commented Mar 15, 2021

I tried:
python tools/nusc_tracking/pub_test.py --work_dir working_dir/track --checkpoint working_dir/test/infos_test_10sweeps_withvelo.json --max_age 3 --root data/nuScenes/v1.0-test --version v1.0-test
to get the tracking results on the test set, but got this error:

    'Error: Requested split {} which is not compatible with NuScenes version {}'.format(eval_split, version)
AssertionError: Error: Requested split val which is not compatible with NuScenes version v1.0-test

@tianweiy Do I need to modify the config for test-set tracking?
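For context, that assertion comes from the devkit, which requires the eval split to be compatible with the dataset version. A hedged sketch of a pairing that can run locally (paths are examples); the test split has no public annotations, so it cannot be evaluated locally at all, which leads to the answer below:

from nuscenes.eval.common.config import config_factory
from nuscenes.eval.tracking.evaluate import TrackingEval

tracking_eval = TrackingEval(
    config=config_factory('tracking_nips_2019'),
    result_path='work_dirs/track/tracking_result.json',
    eval_set='val',                # must be compatible with the version below
    output_dir='work_dirs/track',
    nusc_version='v1.0-trainval',  # 'test' would pair with 'v1.0-test'
    nusc_dataroot='data/nuScenes',
)
tracking_eval.main()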

@tianweiy (Owner)

I think the file is already generated in the folder. You need to submit it to the server for evaluation.

@tianweiy (Owner)

Otherwise, please attach the full log, in particular which line of code this error comes from.

@YoushaaMurhij (Author)

YoushaaMurhij commented Mar 15, 2021

I can see the .pkl and .json in the folder. I zipped the .json and submitted it to the detection task.
Do I need to do the same thing for tracking?

Thank you for your response and good luck with your upcoming work :)

@tianweiy (Owner)

Yeah, zip the json and submit it to the tracking server.

@YoushaaMurhij (Author)

Sorry, but submitting the .json to the tracking server gives a Failed status. The same .json gave normal results on the detection server. What am I doing wrong?

@tianweiy (Owner)

Is the json file called "tracking_result.json"?

@tianweiy (Owner)

Wait, that means tracking is not running. Please attach all output after you run pub_test.py.

@YoushaaMurhij (Author)

I think I found it. The files are there, despite the previous error. Thanks!
