Spacecraft Pose Estimation: Robust 2D and 3D-Structural Losses and Unsupervised Domain Adaptation by Inter-Model Consensus
Update: October 2023
We are happy to announce that an extended version of our previous work has been published in the IEEE Transactions on Aerospace and Electronic Systems.
We have updated the repository to include:
- Support for a lighter ResNet model from [1].
- Faster, more efficient ways to generate heatmaps.
- Bug correction in the pseudo-label generation process.
If you find our work or code useful, please cite:
@article{perez2023spacecraft,
title={Spacecraft Pose Estimation: Robust 2D and 3D-Structural Losses and Unsupervised Domain Adaptation by Inter-Model Consensus},
author={P{\'e}rez-Villar, Juan Ignacio Bravo and Garc{\'\i}a-Mart{\'\i}n, {\'A}lvaro and Besc{\'o}s, Jes{\'u}s and Escudero-Vi{\~n}olo, Marcos},
journal={IEEE Transactions on Aerospace and Electronic Systems},
year={2023},
publisher={IEEE}
}
This paper presents the second-ranked solution to the Kelvins Pose Estimation 2021 Challenge. The proposed solution ranked second in both the Sunlamp and Lightbox categories, with the best total average error over the two datasets.
The main contributions of the paper are:
- A spacecraft pose estimation algorithm that incorporates 3D structure information during training, providing robustness to intensity-based domain shift.
- An unsupervised domain adaptation scheme based on robust pseudo-label generation and self-training.
The proposed architecture, with the losses incorporating the 3D information, is depicted in the following figure:
This section contains the instructions to execute the code. The repository has been tested in a system with:
- Ubuntu 18.04
- CUDA 11.2
- Conda 4.8.3
You can download the original SPEED+ dataset from Zenodo. The dataset has the following structure:
Dataset structure (click to open)
speedplus
│ LICENSE.md
│ camera.json # Camera parameters
│
└───synthetic
│ │ train.json
│ │ validation.json
│ │
│ └───images
│ │ img000001.jpg
│ │ img000002.jpg
│ │ ...
│
└───sunlamp
│ │ test.json
│ │
│ └───images
│ │ img000001.jpg
│ │ img000002.jpg
│ │ ...
│
└───lightbox
│ │ test.json
│ │
│ └───images
│ │ img000001.jpg
│ │ img000002.jpg
│ │ ...
SPEED+ provides the ground-truth information as pairs of images and poses (relative position and orientation of the spacecraft w.r.t. the camera). Our method assumes the ground truth is provided as key-point maps. We generate the key-point maps prior to training to speed it up. You can choose to download our pre-computed key-points or create them manually.
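As a sketch of how the SPEED+ annotations can be read, the helper below loads the shared camera parameters and one split's label file following the directory structure above. This is an illustrative snippet, not the repository's data loader, and the exact keys inside the label entries are not assumed here.

```python
import json
from pathlib import Path

def load_split(speedplus_root, split="synthetic", json_name="train.json"):
    """Read the camera parameters and one split's pose labels.

    speedplus_root points at the 'speedplus' folder shown above;
    each label entry pairs an image filename with its ground-truth pose.
    """
    root = Path(speedplus_root)
    with open(root / "camera.json") as f:
        camera = json.load(f)
    with open(root / split / json_name) as f:
        labels = json.load(f)
    return camera, labels
```

For the test splits (sunlamp, lightbox), pass `json_name="test.json"` as in the structure above.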
Download and decompress the kptsmap.zip file. Place the kptsmap folder under the synthetic folder of the speedplus dataset.
- Download from Mega
- Download from GoogleDrive
Notes from update: These heatmaps only work with the data loader "loaders/speedplus_segmentation_precomputed.py".
We provide two methods to generate the heatmaps:
- The legacy method based on .npz files:
python create_maps.py --cfg configs/experiment.json
Note: if heatmaps based on .npz files are to be used, use them in conjunction with the data loader "loaders/speedplus_segmentation_precomputed.py"
- The new method based on .png files. This method should be faster:
python create_maps_image.py --cfg configs/experiment.json
Note: if heatmaps based on .png files are to be used, use them in conjunction with the data loader "loaders/speedplus_segmentation_precomputed_image.py"
Please make sure that the correct "split_submission" field is set in the config file before generation.
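A common way to render a key-point as a heatmap is a 2D Gaussian peak centered on the key-point. The sketch below is illustrative, not the repository's implementation; the `sigma` value is an assumption, and `alpha` mirrors the "alpha_heatmap" config field described later.

```python
import numpy as np

def gaussian_heatmap(rows, cols, kpt_xy, sigma=3.0, alpha=10.0):
    """Render a single key-point as a 2D Gaussian peak scaled by alpha.

    sigma and alpha are illustrative defaults, not the repository's
    exact values.
    """
    ys, xs = np.mgrid[0:rows, 0:cols]
    x, y = kpt_xy
    d2 = (xs - x) ** 2 + (ys - y) ** 2
    return alpha * np.exp(-d2 / (2.0 * sigma ** 2))

# One 64x64 map with a key-point at (x=32, y=16)
hm = gaussian_heatmap(64, 64, (32, 16))
```

One such map is produced per key-point; stacking them yields the multi-channel target the network regresses.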
Place the keypoints file "kpts.mat" into the speed_root folder.
To clone the repository, type in your terminal:
git clone https://github.com/JotaBravo/spacecraft-uda.git
After installing conda, go to the spacecraft-uda folder and type in your terminal:
conda env create -f env.yml
conda activate spacecraft-uda
The training process is controlled with configuration files defined in .json. You can find example configuration files under the folder "configs/".
To train a model, simply modify the configuration file with your required values. NOTE: the current implementation only supports square images.
Configuration example (click to open)
{
"root_dir" : "path to your datasets",
"path_pretrain" : "path to your pretrained weights", # Put "" for no weight initialization
"path_results" : "./results",
"device" : "cuda",
"start_epoch" :0, # Starting epoch
"total_epochs" :20, # Number of total epochs (N-1)
"save_tensorboard" :100, # Number of steps to save to tensorboard
"save_epoch" :5, # Save every number of epochs
"save_optimizer" :false, # Flag to save or not the optimizer
"mean" :41.3050, # Mean value of the training dataset
"std" :37.0706, # Standard deviation of the training dataset
"mean_val" :41.1280, # Mean value of the validation dataset
"std_val" :36.9064, # Standard deviation of the validation dataset
"batch_size" :8, # Batch size to input the GPU during training
"batch_size_test" :1, # Batch size to input the GPU during test
"num_stacks" :2, # Number of stacks of the hourglass network
"lr" :2.5e-4, # Learning rate
"num_workers" :8, # Number of CPU workers (might fail in Windows)
"pin_memory" :true,
"rows" :640, # Resize input image rows (currently only supporting rows=cols)
"cols" :640, # Resize input image cols (currently only supporting rows=cols)
"alpha_heatmap":10, # Scaling factor (alpha) of the ground-truth heatmaps
"activate_lpnp":true, # Flag to activate pnp loss
"activate_l3d": true, # Flag to activate 3D loss
"weigth_lpnp": 1e-1, # Weight of the PnP loss
"weigth_l3d": 1e-1, # Weight of the 3D loss
"split_submission": "synthetic", # Dataset to use to generate labels
"isloop":false # Flag to true if training with pseudo-labels, false otherwise
}
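Note that standard JSON does not allow comments, so the "#" annotations above are documentation only and must not appear in the actual file. A minimal loader sketch (illustrative, not the repository's code) that also checks the square-image constraint:

```python
import json

def load_config(path):
    """Load an experiment config and check the square-image constraint."""
    with open(path) as f:
        cfg = json.load(f)
    # The current implementation only supports square images (rows == cols).
    assert cfg["rows"] == cfg["cols"], "only square images are supported"
    return cfg
```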
Then, after properly modifying the configuration file under the repository folder type:
python main.py --cfg "configs/experiment.json"
Notes from update: If you wish to use a simpler ResNet model please execute the following command:
python main_resnet.py --cfg "configs/experiment_resnet34.json"
And make sure that the "resnet_size" field is set in the config file.
The script will take the initial configuration file and the training weights associated with that configuration file to generate pseudo-labels and train a new model. In every iteration, a new configuration file is generated automatically so that results are not overwritten.
To train the pseudo-labelling loop, you first need to configure the "main_loop.py" script by specifying the path to the folder where the configuration files will be stored, the initial configuration file, and the number of iterations. In each iteration, a new configuration file is created in the BASE_CONFIG folder with an increased niter counter. For example, first create the folder "configs_loop_sunlamp_10_epoch" and place the config file "loop_sunlamp_niter_0000.json" under it. For the next iteration of the pseudo-labelling, a new configuration file "loop_sunlamp_niter_0001.json" will be created.
NITERS = 100
BASE_CONFIG = "configs_loop_sunlamp_10_epoch" # folder path
BASE_FILE = "loop_sunlamp_niter_0000.json"
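The niter increment described above can be sketched as follows (an illustrative helper, not the repository's exact code):

```python
import re

def next_config_name(fname):
    """Bump the zero-padded niter counter in a loop config filename,
    e.g. loop_sunlamp_niter_0000.json -> loop_sunlamp_niter_0001.json."""
    def bump(m):
        return "niter_{:04d}".format(int(m.group(1)) + 1)
    return re.sub(r"niter_(\d{4})", bump, fname)
```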
After you have created the configuration files, you will need to manually place the weights used for the first iteration of the pseudo-labelling process. Under the "results" folder, create a folder with the BASE_CONFIG name, and then a subfolder with the BASE_FILE name, e.g. "results/configs_loop_sunlamp_10_epoch/loop_sunlamp_niter_0000.json". Under that folder, place a new subfolder called "ckpt" containing a weights file named "init.pth". The final path should look like "results/configs_loop_sunlamp_10_epoch/loop_sunlamp_niter_0000.json/ckpt/init.pth"
The init.pth should be the weights of the model trained over the synthetic domain. If you want to skip that training phase you can use our available weights in Section 5 of this page.
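The folder layout described above can be created with a few lines of Python (a convenience sketch; the repository expects you to create it manually):

```python
from pathlib import Path

BASE_CONFIG = "configs_loop_sunlamp_10_epoch"
BASE_FILE = "loop_sunlamp_niter_0000.json"

# Create results/<BASE_CONFIG>/<BASE_FILE>/ckpt; the initial weights
# (trained on the synthetic domain) then go into ckpt/init.pth.
ckpt_dir = Path("results") / BASE_CONFIG / BASE_FILE / "ckpt"
ckpt_dir.mkdir(parents=True, exist_ok=True)
```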
Go to the folder where you have the dataset saved and duplicate the Sunlamp and Lightbox folders, renaming the copies "sunlamp_train" and "lightbox_train". The new pseudo-labels will be generated and stored in these folders.
python main_loop.py
You can monitor the training process via TensorBoard by typing in the command line:
tensorboard --logdir="path to your logs folder"
This work is supported by the Comunidad Autónoma de Madrid (Spain) under Grant IND2020/TIC-17515.
[1] - Xiao, B., Wu, H., & Wei, Y. (2018). Simple baselines for human pose estimation and tracking. In Proceedings of the European conference on computer vision (ECCV) (pp. 466-481).