These are the evaluation scripts for the scoring trials of the xDR Challenge 2023 (IPIN 2023 competition, Track 5).
This README introduces the evaluation indexes computed by the scripts and the requirements for using them.
Note that the dataset for the xDR Challenge 2023 is not included in this repository; it is provided exclusively to registered participants of the competition. To run the evaluation with the sample dataset of the xDR Challenge 2023, copy all files from the gt and gis folders of the dataset into the corresponding folders (gt, gis) under /xDR-Challenge-2023-evaluation/dataset/.
Name of Index | Corresponding indicators | Description |
---|---|---|
I_ce | CE (Circular Error) | Checks the absolute positional error between the trajectory and the ground truth at check points. |
I_ca | CA_l (Circular Accuracy in the local space) | Checks the deviation of the error distribution in the local x-y coordinate system. |
I_eag | EAG (Error Accumulation Gradient) | Checks the speed of error accumulation from the correction points. |
I_ve | VE (Velocity Error) | Checks the velocity error with respect to the ground-truth velocity. |
I_obstacle | Requirement for Obstacle Avoidance | Checks the percentage of trajectory points inside the walkable area. |
I_ve is evaluated by averaging the velocity errors with respect to the GT within a 1 sec. time window (+/- 0.5 sec around each evaluation point).
This averaging is intended to smooth out peaky, high-frequency fluctuations of the velocity.
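The windowed averaging above can be sketched as follows. This is a minimal illustration, not the script's actual implementation; the function name and array layout are assumptions.

```python
import numpy as np

def windowed_velocity_error(t, v_est, v_gt, half_window=0.5):
    """Average |v_est - v_gt| over t_i +/- half_window seconds, per sample.

    t, v_est, v_gt: 1-D arrays of equal length (seconds, m/s).
    Names are illustrative, not the evaluation script's API.
    """
    err = np.abs(v_est - v_gt)
    out = np.empty_like(err)
    for i, ti in enumerate(t):
        mask = (t >= ti - half_window) & (t <= ti + half_window)
        out[i] = err[mask].mean()  # mean error inside the 1 s window
    return out
```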
Index | Max Score (100) | Min Score (0) | Formula |
---|---|---|---|
I_ce | ce < 1.0 | 30 < ce | 100 - (100 * (ce - 1.0))/29 |
I_ca | ca = 0.0 | 10 < ca | 100 - (10 * ca) |
I_eag | eag < 0.05 | 2.0 < eag | 100 - (100 * (eag - 0.05))/1.95 |
I_ve | ve < 0.1 | 2.0 < ve | 100 - (100 * (ve - 0.1))/1.9 |
I_obstacle | obs = 1.0 | obs = 0.0 | 100 * obs |
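The first four formulas are the same linear map, clipped to [0, 100]: full score at the "best" threshold, zero at the "worst". A minimal sketch (the helper name is an assumption, not the script's API):

```python
def index_score(value, best, worst):
    """Linear score: 100 when value <= best, 0 when value >= worst.

    E.g. I_ce uses best=1.0, worst=30.0, which reduces to
    100 - 100*(ce - 1.0)/29 inside the range. I_obstacle is simply
    100 * obs and does not use this helper.
    """
    score = 100.0 * (worst - value) / (worst - best)
    return min(100.0, max(0.0, score))
```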
The winner of the competition is determined by the weighted sum of the indexes. The weights are as follows:
I_ce = 0.25
I_ca = 0.20
I_eag = 0.25
I_ve = 0.15
I_obstacle = 0.15
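Since the weights sum to 1.0, the integrated score stays on the same 0-100 scale as the per-index scores. A sketch of the weighted sum (dictionary layout is illustrative):

```python
# Competition weights as listed in this README.
WEIGHTS = {"I_ce": 0.25, "I_ca": 0.20, "I_eag": 0.25,
           "I_ve": 0.15, "I_obstacle": 0.15}

def integrated_score(scores):
    """Weighted sum of per-index scores (each in [0, 100])."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)
```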
Note that the evaluation frequency depends on the frequency of the ground-truth data, which is about 100 Hz for the xDR Challenge 2023. If the sampling frequency of your estimation is lower than 100 Hz, it cannot be evaluated accurately. We therefore recommend estimating trajectories at 100 Hz, or up-sampling them to 100 Hz.
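One simple way to up-sample is linear interpolation onto a uniform 100 Hz time base, sketched below. This is a suggestion, not part of the evaluation scripts; floor IDs are categorical and should be carried over (e.g. nearest-neighbour), not interpolated.

```python
import numpy as np

def upsample_to_100hz(t, x, y):
    """Linearly interpolate a trajectory onto a uniform 100 Hz grid.

    t: timestamps in seconds (ascending); x, y: positions in metres.
    """
    t_new = np.arange(t[0], t[-1], 0.01)  # 100 Hz time base
    return t_new, np.interp(t_new, t, x), np.interp(t_new, t, y)
```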
python==3.8.5
numpy==1.23.4
pandas==1.5.0
scipy==1.8.1
matplotlib==3.3.2
seaborn==0.10.1
Filename | Description |
---|---|
do_evaluation_XC2023.py | Evaluation script that computes the indexes |
requirements.txt | List of required packages |
git clone --recursive https://github.com/PDR-benchmark-standardization-committee/xDR-Challenge-2023-evaluation
cd xDR-Challenge-2023-evaluation
pip install -r requirements.txt
Please place (copy) the estimated trajectory files at [dataset]/[traj]/. Estimated trajectories with and without BLE information are distinguished by their filename suffixes:
- with BLE: _est
- without BLE: _pdr_est

The file structure of the evaluation scripts is shown below.
xDR-Challenge-2023-evaluation/
├ dataset/
| ├ gis/
| | ├ beacon_list.csv
| | ├ FLD01_0.01_0.01.bmp
| | ├ FLU01_0.01_0.01.bmp
| | └ FLU02_0.01_0.01.bmp
| |
| ├ gt/
| | ├ *_*_gt.csv
| | └ *_*_gt.csv
| |
| └ traj/
| ├ *_*_est.csv [**estimation with BLE files**]
| └ *_*_pdr_est.csv [**estimation without BLE files**]
|
├ evtools/
├ output/
├ do_evaluation_XC2023.py
├ requirements.txt
└ README.md
The contents of the estimated trajectory file are comma-separated, with the columns below. Note that headers must not be included in the trajectory file.
Timestamp (s) | x (m) | y (m) | floor |
---|---|---|---|
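For illustration, a file in this format can be read with pandas as sketched below. The sample rows, column names, and floor label are hypothetical; the real files contain your estimates and, as noted above, no header line.

```python
import io
import pandas as pd

# Hypothetical sample rows in the four-column, header-less CSV format.
sample = "0.00,12.34,5.67,FLU01\n0.01,12.35,5.68,FLU01\n"

# header=None because the trajectory files must not contain a header;
# the names are assigned only for convenient access in this sketch.
est = pd.read_csv(io.StringIO(sample), header=None,
                  names=["timestamp_s", "x_m", "y_m", "floor"])
```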
You need to specify the estimation and ground-truth folder paths for the evaluation:
python do_evaluation_XC2023.py -t [estimation_path]
If you want to see the demo estimation score results, just execute the following script:
python do_evaluation_XC2023.py -t dataset/traj/
The results are the evaluation indexes and the integrated index, computed for each trajectory. The average of the indexes over the trajectories in the dataset is used for the competition. The results are saved in the [output] folder.
The evaluation scripts accept optional arguments.
If you add the "--draw" option, you obtain a histogram for CA, a graph for EAG, and a map of obstacle interference (OE). They are saved in a folder named after the trajectory inside the output folder.
python do_evaluation_XC2023.py -t [estimation_folder] --draw
If you add the "--output_path" option, you can specify the name of the output folder.
python do_evaluation_XC2023.py -t [estimation_folder] --output_path new_output_folder/
If you add the "--est_weight" option, you can change the index weights used to calculate the competition score via index_weights.ini. The default weights are those used for the competition.
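Such an .ini file can be read with the standard-library configparser, sketched below. The section and key names here are assumptions; check the index_weights.ini shipped with the scripts for the actual layout.

```python
import configparser

# Hypothetical layout of index_weights.ini (values from this README).
ini_text = """
[WEIGHTS]
I_ce = 0.25
I_ca = 0.20
I_eag = 0.25
I_ve = 0.15
I_obstacle = 0.15
"""

cfg = configparser.ConfigParser()
cfg.optionxform = str  # preserve the case of keys like "I_ce"
cfg.read_string(ini_text)
weights = {k: cfg.getfloat("WEIGHTS", k) for k in cfg["WEIGHTS"]}
```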