
Utilizing geospatial data for assessing energy security: Mapping small solar home systems using unmanned aerial vehicles and deep learning

This repository is the implementation of Utilizing geospatial data for assessing energy security: Mapping small solar home systems using unmanned aerial vehicles and deep learning (currently under review), by Simiao Ren, Jordan Malof, T. Robert Fetter, Robert Beach, Jay Rineer and Kyle Bradbury.

This repository builds on the MRS framework developed by Dr. Bohao Huang. All of the major components are forked directly from the MRS framework; this repository is an application of it. For guidance on using the framework, please check the framework repo's original demos.

Installation

Dependencies are listed in the env.yml file.
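
Assuming env.yml is a standard conda environment file (the environment name is defined inside the file itself), the environment can typically be created with:

conda env create -f env.yml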

Flow chart process

The individual steps to reproduce each experiment are illustrated in the flow chart below, which lays out a roadmap for recreating the results in our paper. More detailed explanations of each step follow.

flow_chart

Sample imagery

sample_img

Dataset download

Duke Forest dataset (a.k.a. main dataset, catalyst dataset) download

The main dataset used in this work can be downloaded here. Please refer to the README.txt that comes with the dataset for a detailed guide to the dataset layout. To reproduce the work, only imgs.zip is needed (for the moving-speed experiment, photos are cut from video and included in imgs.zip along with their annotations).

Rwanda drone imagery dataset download (raw)

Follow the instructions on the MLhub website. You can find the original paper publishing this dataset here.

Rwanda drone imagery dataset solar PV annotation download

We manually annotated the solar home systems in the Rwanda drone imagery above; the masks can be downloaded here.

Dataset pre-processing

First, run the Jupyter notebook that comes with the dataset; detailed instructions are given inside the notebook. This prepares the data needed for all six experiments we ran (including those in the appendix). Note that this step only sets up the images and annotation positions.

pre_processing_imgs.ipynb

Then, as part of using the MRS framework, you need to run its dataset pre-processing. Go to the ./data folder and run the MRS framework pre-processor:

python Exp1_preprocessing.py

Model training

Model training follows the MRS framework. Simply change the corresponding entries in config.json and run:

python train.py --config config.json

If you do not want to spend time training models or sweeping hyper-parameters, feel free to use ours! Below is the list of models we trained:

| Experiment index | Details | Config file | Model |
| --- | --- | --- | --- |
| Exp1_1_d1 | Final model on main dataset, 1.6cm to 2.2cm (d1: altitude 45m-65m) | config | Box |
| Exp1_1_d2 | Final model on main dataset, 2.2cm to 3.0cm (d2: altitude 65m-85m) | config | Box |
| Exp1_1_d3 | Final model on main dataset, 3.0cm to 3.7cm (d3: altitude 85m-105m) | config | Box |
| Exp1_1_d4 | Final model on main dataset, 3.7cm to 4.5cm (d4: altitude 105m-125m) | config | Box |
| Exp1_2_res_7.5 | Final model on simulated "satellite" imagery at an effective resolution of 7.5cm | config | Box |
| Exp1_2_res_15 | Final model on simulated "satellite" imagery at an effective resolution of 15cm | config | Box |
| Exp1_2_res_30 | Final model on simulated "satellite" imagery at an effective resolution of 30cm | config | Box |
| Exp3_rwanda | Final model on the Rwanda dataset | config | Box |

For Exp1_2_res_60, the model failed to learn anything useful, so no model is included. At that resolution it is also impossible for humans to identify any solar panels in the resized imagery.

Config.json details

The path entries to be changed to your own locations:

| Field name | Usage |
| --- | --- |
| data_dir | The directory containing the patches cut in the Exp1_preprocessing step above. Change it according to the experiment you are working on. |
| train_file | The training file list. This should be consistent with data_dir above and point to file_list_train.txt. |
| valid_file | The validation file list. This should be consistent with data_dir above and point to file_list_valid.txt. |
| finetune_dir | The pretrained model checkpoint to start from. In most cases this is a model pretrained on satellite imagery. |
| save_root | Where to save the trained model weights, logs, etc. |
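
As an illustration only (the paths below are hypothetical placeholders, not values shipped with this repo), these entries in config.json might look like:

```json
{
  "data_dir": "/path/to/patches/exp1_d1",
  "train_file": "/path/to/patches/exp1_d1/file_list_train.txt",
  "valid_file": "/path/to/patches/exp1_d1/file_list_valid.txt",
  "finetune_dir": "/path/to/pretrained/satellite_checkpoint.pth.tar",
  "save_root": "/path/to/models"
}
```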

The hyper-parameters we tuned:

| Field name | Usage |
| --- | --- |
| class_weight | Important hyper-parameter: the weights for the negative and positive classes; the latter entry is for the positive class. |
| loss_weights | The weights for combining the two losses: cross entropy and soft IoU. |
| learn_rate_encoder | The learning rate of the encoder. |
| learn_rate_decoder | The learning rate of the decoder. |
| decay_rate | The factor applied to the learning rate at each decay step (see next row). |
| decay_step | The epochs at which the learning rate decays; the decay is applied once per listed step, not repeatedly thereafter. |

Other hyper-parameters whose variable names may cause confusion:

| Field name | Usage |
| --- | --- |
| mean | The mean of the pretraining dataset (no need to change). |
| std | The standard deviation of the pretraining dataset (no need to change). |
| num_workers | The number of workers for data loading. |
| alpha, gamma | Hyper-parameters of the optimizer. |
| bp_loss_idx | The indices into criterion_name whose losses are included in the back-propagated training loss (the remaining losses are reported only; no gradients flow through them). |
| save_epoch | Save the model every X epochs. |
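
For concreteness, a sketch of what the tuned fields could look like (the values below are illustrative assumptions, not our tuned settings; consult the released config files for the values we actually used):

```json
{
  "class_weight": [1, 20],
  "loss_weights": [1, 0.5],
  "learn_rate_encoder": 1e-4,
  "learn_rate_decoder": 1e-4,
  "decay_rate": 0.1,
  "decay_step": [60, 80],
  "save_epoch": 5
}
```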

Model selection

There are two ways to choose a model; they usually lead to very similar choices since the metrics are highly correlated. The first method is IoU-based selection, which is the simpler one since it only requires looking at TensorBoard:

tensorboard --logdir=../models/

Three values are recorded for both the training and validation sets: IoU, cross entropy, and soft IoU. The models selected with this method in this work were chosen by validation IoU. Note that these values are pixel-wise averages rather than object-wise averages.

The other method chooses the best trained model by object-wise scores. Since this overlaps heavily with the model inference step below, please refer to that section for more details.

The models selected with this method in this work were chosen by object-wise Average Precision.

Inference

Adjust the corresponding fields according to the comments and run:

python infer.py

Running the command above generates the pixel-wise confidence maps under the SAVE_ROOT specified at the top of the script. If you hit a 'file_list_raw' not found error, run:

python make_file_list.py

Object-wise post-processing

Since the inference code only outputs confidence maps, post-processing steps are needed to convert them into predicted solar panel objects, which can then be scored object-wise. The post-processing procedure is illustrated in the figure below:

post_process

This process is built into the evaluation step below (which assumes ground truth masks of the solar panels are available). For application-stage inference, where no ground truth masks exist, you may want to run the post-processing on its own without comparing against ground truth. Run the following script:

python post_processing.py
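
For intuition, here is a minimal sketch of the general idea (an assumption-level illustration, not the repo's exact post_processing.py logic): threshold the confidence map, group pixels into connected components, and assign each candidate object a confidence score:

```python
import numpy as np
from scipy import ndimage


def confidence_map_to_objects(conf_map, threshold=0.5, min_size=10):
    """Turn a pixel-wise confidence map into discrete object predictions."""
    binary = conf_map >= threshold             # binarize the confidence map
    labels, n_objects = ndimage.label(binary)  # group pixels into connected components
    objects = []
    for idx in range(1, n_objects + 1):
        mask = labels == idx
        if mask.sum() < min_size:              # discard tiny spurious blobs
            continue
        # score each candidate object by its mean pixel confidence
        objects.append({"mask": mask, "confidence": float(conf_map[mask].mean())})
    return objects
```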

Evaluation

Once we have the confidence map predictions, we can generate precision-recall curves from them. Adjust the corresponding fields according to the comments and run:

python object_pr.py

Make sure you change the fields labeled !!! Change this !!! to your own directories.
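
As background for what object_pr.py produces, object-wise PR curves are conventionally built by matching predicted objects to ground-truth objects and then sweeping the confidence threshold; a generic sketch (our matching rules may differ in detail) is:

```python
import numpy as np


def object_pr_curve(confidences, matched, n_ground_truth):
    """Object-wise precision-recall curve.

    confidences   : confidence score of each predicted object
    matched       : bool per prediction, True if it hits a ground-truth object
    n_ground_truth: total number of ground-truth solar panel objects
    """
    order = np.argsort(confidences)[::-1]  # highest-confidence predictions first
    hits = np.asarray(matched, dtype=bool)[order]
    tp = np.cumsum(hits)                   # true positives at each threshold
    fp = np.cumsum(~hits)                  # false positives at each threshold
    precision = tp / (tp + fp)
    recall = tp / n_ground_truth
    return precision, recall
```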

Cost estimation for drone operation

Go to cost estimation for the Jupyter notebook covering everything related to the cost estimates.

total_cost compare_cost

Rwanda experiment

Follow the exact same procedure as Exp 1, but with the Rwanda dataset instead.

Rwanda_confusion

Contributing and issue report

If you would like to contribute or file a bug report, feel free to raise an issue or open a pull request.

Funding

Support for this work was provided by the Nicholas Institute for the Environment's Catalyst program and by Alfred P. Sloan Foundation Grant G-2020-13922 through the Duke University Energy Data Analytics Ph.D. Student Fellowship.

Credit

We would like to express special thanks to Dr. Leslie Collins for useful feedback and discussion. We also thank Dr. Bohao Huang for his MRS framework code, Mr. Wei (Wayne) Hu for his help developing the code and for discussion, Mr. Trey Gowdy for helpful discussions and his expertise in energy data, and the other energy fellows and mentors in the Duke University Energy Data Analytics Ph.D. Student Fellowship Program for their suggestions, questions, and comments!

We also thank the Duke Forest for the use of their UAV flight zone for data collection!

License

The project is licensed under the MIT license.

Please cite this work if some of the code or datasets are helpful in your scientific endeavours. For specific datasets, please also cite the respective original source(s), given in the preprint/manuscript.
