# mfauvel/RS_Lab

Mathieu Fauvel, Update for 2018

# 1 Introduction

## 1.1 Objectives of the labworks

The main objective of these labworks is to be able to use and to extract information from remote sensing images for land management.

Information can be any knowledge of a given landscape (landcover, land-use, humidity, …) that is used to understand the configuration and/or the evolution of landscape.

In terms of competencies, by the end of the sessions you should master the items listed in Tables c:1, c:2, c:3, c:4 and c:5. Each of them is organized as a set of tasks to be mastered progressively.

Competence 1:

• Remember: Properties of remote sensing images.
• Understand: Physical meaning of each sampling of a given image.
• Apply: Open and visualize a remote sensing image; extract its properties.
• Analyze: Describe a remote sensing image; recognize specific objects.
• Evaluate: Choose the right image for what you are looking for.
• Create: Create the set of properties needed for your problem.

Competence 2:

• Remember: Definition of spectral indices in general and of the NDVI in particular.
• Understand: What the NDVI emphasizes, and why.
• Apply: Compute spectral indices.
• Analyze: Analyze the vegetation cover using the NDVI.
• Evaluate: Choose the right spectral index.
• Create: Select a set of possible indices from the literature.

Competence 3:

• Remember: Spectral signatures of vegetation and water objects.
• Understand: Histograms of images.
• Apply: Compute a histogram.
• Analyze: Extract radiometric properties of some classes.
• Evaluate: Choose relevant spectral bands and/or indices for the segmentation of several classes.
• Create: Perform a segmentation using radiometric statistics on one or several spectral variables and/or indices.

Competence 4:

• Remember: Definition of pixel-wise supervised classification.
• Understand: The parameters of several classification algorithms; how the spatial sampling of the ground-truth data influences the training and validation steps.
• Apply: Classification algorithms.
• Analyze: Interpret the confusion matrix and the thematic map; assess the quality of a ground truth.
• Evaluate: Compare two classification maps/results.
• Create: Choose the most appropriate classifier for a given application; build good ground-truth data.

Competence 5:

• Remember: How to combine several bands and apply a given function to train a classifier or to predict a thematic map.
• Understand: The different inputs and outputs of OTB functions; how to use their documentation.
• Apply: Apply a set of different functions in a pipeline.
• Analyze: Define the different processing steps needed to perform a given task.
• Evaluate: Evaluate the accuracy of a given processing; check for errors.
• Create: Shell scripts that automate most of the processes, in order to apply them to a large set of images or to chain several processes.

Competence 6:

• Remember: How to implement your algorithm in QGIS.
• Understand: The different inputs and outputs of the QGIS Processing GUI; how to use the corresponding documentation.
• Apply: Create a Python algorithm working on its own (outside of QGIS).
• Analyze: Define the different constraints of your plugin (libraries, integer or float images, same projection, …).
• Evaluate: Check for errors and/or alert the user within QGIS using alert or log messages.
• Create: A QGIS plugin whose algorithms are available in the Processing toolbox, so that they can be combined with other QGIS algorithms.

## 1.2 Remote sensing software

In these labworks, free and open-source software will be used to visualize remote sensing images, to process them and to implement processing chains. In the following, each software tool is briefly described. Interested readers can find more information on the associated websites. In particular, the installation process is not detailed; however, each tool can be freely downloaded and installed on most operating systems from its official website.

Students from the MASTER SIGMA, Specialization Agrogéomatique (A. Greco), have made a YouTube channel to help you install and use the different software tools:

### 1.2.1 Orfeo ToolBox (OTB)

OTB is a C++ library for remote sensing image processing. It has been developed by the CNES (French space agency) during the ORFEO program to prepare, accompany and promote the use and exploitation of the images derived from the Pléiades satellites (PHR). Processing tools from OTB are well suited to large images: when possible, processes are parallelized and tiled automatically for the user. Many applications derived from OTB, called OTB-Applications, are directly usable for most common processing tasks; they are described here. For advanced users, it is possible to develop programs based on the OTB library (not considered in these labworks).

Monteverdi2 is a graphical user interface that allows users to visualize and process remote sensing images with the OTB-Applications. It is also developed by the CNES as part of the ORFEO program.

### 1.2.2 QGIS

QGIS is a Geographic Information System (GIS). It is used to open, visualize and process digital maps. It includes several spatial analysis tools, working mainly on vector data. QGIS can be extended by plugins (https://plugins.qgis.org/) and modules, such as the OTB applications.

### 1.2.3 Geospatial Data Abstraction Library (GDAL)

GDAL is a library for the processing of raster and vector data. Similar to OTB, it provides several applications that can be used directly. For advanced users, it is possible to develop programs based on the GDAL library (not considered in these labworks).

### 1.2.4 Python

Python is a programming language. It has several programming capabilities, such as object-oriented and functional programming, dynamic typing and automatic memory management, that make it widely used in many applications:

• Web and internet development,
• Scientific and numeric computing,
• Software development.

It has a large number of available packages that can be used in many applications. For instance, it is possible to call OTB-Applications or GDAL from Python.

## 1.3 Sequences

| Sequences | Type | Volume | Topics |
|---|---|---|---|
| [2018-02-27 Tue 10:10-12:10] | CTD | 02:00:00 | Visualization of remote sensing data |
| [2018-02-27 Tue 13:30-17:30] | CTD | 04:00:00 | Spectral indices + Segmentation |
| [2018-02-28 Wed 13:30-17:30] | CTD | 04:00:00 | Detection of floods + Classification 1/2 |
| [2018-03-02 Fri 08:00-10:00] | Project | 00:00:00 | Not in-person -> Training/validation sets |
| [2018-03-02 Fri 10:00-12:00] | CTD | 02:00:00 | Classification 2/2 + SITS 1/2 |
| [2018-03-02 Fri 13:30-17:30] | CTD | 04:00:00 | SITS 2/2 |
| [2018-03-06 Tue 13:30-17:30] | CTD | 04:00:00 | Dynamic Habitat Index |
| [2018-03-08 Thu 08:00-10:00] | Exam | 02:00:00 | Exam Group 1 |
| [2018-03-08 Thu 10:10-12:10] | Exam | 02:00:00 | Exam Group 2 |
| Total | | 30:00:00 | |

## 1.4 During the labworks

For the in-person sequences, you will not have to write a report, but you will have to build your own personal notes on remote sensing. You are encouraged to write them progressively during the sessions: they will be the only document allowed for the exam (together with those on Moodle). The length of each sequence should leave you enough time to write them.

For the sequences that are not in person, you will be asked to write a document that briefly describes the results and how you obtained them. A discussion between all groups will take place during the next session.

# 2 Data sets

## 2.1 Pléiades images

These images were acquired over the Fabas forest in 2013, on the <2013-10-12 Sat> and the <2013-12-10 Tue>, respectively. A true color composition is given in Figure fabas_1.

Images are stored using the GeoTIFF format. It is an extended version of the TIFF format, which allows geospatial information to be embedded within the file. GeoTIFF can be read by most remote sensing and GIS software. Table tab:bands:pleiades gives the band order of the data.

| Band | Channel |
|---|---|
| 1 | Red |
| 2 | Green |
| 3 | Blue |
| 4 | Infra-red |

## 2.2 Formosat 2 Satellite image time series

This time series was acquired in 2012 over the region of Saint-Lys. It consists of a set of Formosat-2 images spread along the year 2012. Figure \ref{fig:SITS} provides information about the acquisition dates, and Figure sits:f2 shows a false-color composition for the date [2012-05-03 Thu]. Table tab:bands:formosat gives the band order of the data.

| Band | Channel |
|---|---|
| 1 | Blue |
| 2 | Green |
| 3 | Red |
| 4 | Infra-red |

## 2.3 Historical Maps

Figure fig:hm shows a historical map. It is a scan, made by the IGN, of an old hand-drawn map.

# 3 Visualization of remote sensing data

## 3.1 Visualization of remote sensing images

The visualization of remote sensing images can be done either with Monteverdi2 or with QGIS[fn:: The Python library matplotlib is not adapted to the visualization of remote sensing images and should be avoided.]. QGIS might be more efficient when it comes to visualizing several images, or for the visualization of vector layers; it will be used in these labworks.

Most of the information regarding the visualization of raster data with QGIS can be found online at http://docs.qgis.org/2.14/en/docs/user_manual/working_with_raster/raster_properties.html.

More generally, how to use raster data with QGIS is described here: http://docs.qgis.org/2.14/en/docs/user_manual/working_with_raster/index.html.

In this labwork, a few properties will be reviewed and you are encouraged to check (at least) the given references.

### 3.1.1 Visualization of grayscale images

Open the image fabas_10_12_2013.tif with QGIS. The default view is a colour composition, with the band/channel association given in Table tab:asso. To start simply, we open one band at a time: right-click on the name of the opened image in the Layers panel and select Properties. Then select the Style tab and Band rendering. In Render type, select Singleband gray and the band you want to display.

You will most likely need to apply a Contrast enhancement; check the documentation for that.

| Band | Channel |
|---|---|
| 1 | Red |
| 2 | Green |
| 3 | Blue |

### 3.1.2 Visualization of colour images

Now you can visualize a colour image by selecting three spectral bands among those available in the data. Again, a Contrast enhancement should be applied.

## 3.2 Get data information

Before opening a remote sensing image, it is possible to get information about its properties. For instance, using gdalinfo it is possible to extract several pieces of information. It can be used as

gdalinfo fabas_10_12_2013.tif

Help on the function can be obtained by using the command alone, or by doing:

man gdalinfo

Equivalently, it is possible to get the same information using the function otbcli_ReadImageInfo from the OTB-Applications:

otbcli_ReadImageInfo -in fabas_10_12_2013.tif
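If you need this information in a script, the text output of gdalinfo can be parsed; a minimal Python sketch (the excerpt below is illustrative, not the actual output for this image):

```python
import re

# Illustrative excerpt of gdalinfo text output (not the real output
# for fabas_10_12_2013.tif)
sample_output = """Driver: GTiff/GeoTIFF
Size is 1000, 800
Band 1 Block=1000x1 Type=UInt16, ColorInterp=Gray"""

def parse_size(text):
    """Return (width, height) parsed from gdalinfo output, or None."""
    match = re.search(r"Size is (\d+), (\d+)", text)
    if match is None:
        return None
    return int(match.group(1)), int(match.group(2))

print(parse_size(sample_output))  # (1000, 800)
```

In practice the text would come from running the command, e.g. with the subprocess module.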

# 4 Spectral indices: Normalized Difference Vegetation Index

Among the available radiometric indices, only the NDVI is considered in this labwork. NDVI is widely used for vegetation monitoring because it can be related to chlorophyll content and photosynthesis.

Using the OTB-Applications, the NDVI can be computed with otbcli_BandMath. We need to provide the input image, the output image and the expression of the processing:

otbcli_BandMath -il fabas_12_10_2013.tif -out ndvi_fabas.tif -exp "(im1b4-im1b1)/(im1b4+im1b1)"

1. Compare the NDVI obtained for each date and explain your results.
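The same expression can be cross-checked in Python with NumPy; a sketch with illustrative reflectance values, where the red and infrared bands follow the band order of Table tab:bands:pleiades (im1b1 = red, im1b4 = infrared):

```python
import numpy as np

# Toy 2x2 red and infrared bands (illustrative reflectances)
red = np.array([[0.20, 0.30], [0.10, 0.40]])
ir = np.array([[0.80, 0.30], [0.50, 0.40]])

# NDVI = (IR - R) / (IR + R), values lie in [-1, 1]
ndvi = (ir - red) / (ir + red)
print(ndvi)
```

High values indicate dense green vegetation, values near zero bare soil, and negative values water.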

# 5 Segmentation of remote sensing images

## 5.2 Segmentation of 1D histogram

In this part, the extraction of image pixels sharing the same radiometric behavior is considered. The analysis of the histogram is used to estimate this behavior. When only one material is segmented, the output is a binary image (with values 0 or 1), where pixels with value 1 belong to that material. Figure fig:mask:water gives an example of such an output. When several materials are considered, the output is an image with integer values (1, 2, 3, …), depending on the number of materials.

A usual workflow is proposed in this part: first, QGIS is used to analyze the data and set up the processing (parameters, etc.); then, the OTB-Applications are used to automate the processing.

## 5.3 Graphical Modeler

For the segmentation of the NDVI, two processing steps are required:

1. First, the computation of the NDVI from the original image,
2. Second, the definition of the interval of values to extract the relevant pixels.

With the graphical modeler, it is possible to define your workflow and to automate complex tasks. Take a look at http://docs.qgis.org/2.14/en/docs/user_manual/processing/modeler.html.
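The two steps listed above can also be sketched outside QGIS, e.g. in Python with NumPy (the NDVI values and interval bounds below are illustrative placeholders, to be tuned from the histogram):

```python
import numpy as np

# Illustrative NDVI values (step 1 would produce these from the image)
ndvi = np.array([[0.75, 0.10], [0.55, -0.20]])

# Step 2: keep pixels whose NDVI falls in a chosen interval; the
# bounds are placeholders to be adjusted from the histogram analysis
lower, upper = 0.5, 1.0
mask = ((ndvi >= lower) & (ndvi <= upper)).astype(np.uint8)
print(mask)  # binary image: 1 = vegetation, 0 = other
```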

# 6 Change detection: Detection of floods

Change detection in remote sensing consists in detecting differences between two images, or within a set of images. It can be used to detect changes in vegetation properties or in land cover. It is also used in disaster management, to detect impacted areas. In this labwork, we are dealing with floods. Figure fig:cd shows two QuickBird images, before and after a flood. The objective is to identify the impacted area and to provide a map of these zones.
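As an illustration of the idea (not the prescribed method), a simple pixel-wise change indicator thresholds the difference between the two dates; a NumPy sketch with illustrative values:

```python
import numpy as np

# Illustrative pixel values before and after the flood (e.g. a
# water-sensitive index); the threshold is a placeholder to be tuned
before = np.array([[0.1, 0.2], [0.1, 0.8]])
after = np.array([[0.9, 0.2], [0.7, 0.8]])

threshold = 0.5
# Pixels whose value changed by more than the threshold are flagged
flooded = (np.abs(after - before) > threshold).astype(np.uint8)
print(flooded)  # 1 = changed (candidate flooded area), 0 = unchanged
```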

# 7 Classification of remote sensing images

## 7.1 Introduction

The aim of this labwork is to perform the classification of remote sensing images using supervised algorithms. The principle is the same as for segmentation, but now the gray-level intervals are not defined manually and the definition of a radiometric behavior is not limited to a rectangular area in the spectral domain. Furthermore, since all the computations are done by supervised algorithms, it is possible to use more information than one or two bands: the full multispectral image can be used. In fact, more than one image can be used. In this work, the two Fabas images will be classified, first separately and then jointly.

The OTB proposes various classifiers, each one having different characteristics. In order to train (or learn) the classifier, labeled pixels must be provided. The ground truth (set of labeled pixels) can be constructed in different ways:

• Using a GIS layer and extracting the relevant information at the pixel level.
• Doing a field survey and using a GPS to identify pixels.
• Doing photo-interpretation when possible.

In this work, the ground truth is provided as a vector file, see fig:gt. Five classes are considered; they are given in Table tab:classes.

| Class | Attribute |
|---|---|
| Sparse vegetation | 1 |
| Bare soil | 2 |
| Woody vegetation | 3 |
| Water | 4 |
| Built up | 5 |

During this labwork, it is proposed to compare, in terms of classification accuracy and processing time, some of the classifiers available in OTB and all the combinations of input data, i.e.:

• K-nn, Bayes, SVM and Random Forest.
• The ground truth being composed of pixels from one date, or from two concatenated dates.

## 7.2 Getting started with OTB

There are several steps to do a classification.

1. Learn the classifier: this is done with TrainImagesClassifier. It takes as inputs the (set of) remote sensing image(s), the ground truth (in vector format) and some parameters of the method. To learn the classifier, only the pixels inside the ground truth are used. After this step, a model that contains the parameters is saved. If asked, a confusion matrix is computed.
2. Classify the image: once the classifier is learned, the model can be applied to all the pixels of the image. This is done with ImageClassifier.
3. Compute the accuracy of the thematic map according to some ground truth. This ground truth should not be spatially correlated with the one used for training. The confusion matrix can be computed using the function ComputeConfusionMatrix.

## 7.3 Automatize the process with scripts (shell, python or the Graphical modeler)

It is possible to run the OTB-Applications directly from the command line. This way, several operations can be run automatically on one or several data sets. A brief introduction to command-line tools is given in Appendix #sec:shell.

The three previous OTB-Applications are available from the command-line interface (CLI), under the same name with the prefix otbcli_:

• otbcli_TrainImagesClassifier,
• otbcli_ImageClassifier,
• otbcli_ComputeConfusionMatrix.

The same inputs as in QGIS should be provided (raster and vector files, algorithm parameters, …). For instance, if you are in the directory where the data are, learning the KNN classifier with default parameters, classifying the whole image and computing the confusion matrix reduce to

otbcli_TrainImagesClassifier \
-io.il fabas_12_10_2013.tif \
-io.vd train_fabas.shp \
-classifier knn \
-io.out model.mod
otbcli_ImageClassifier \
-in fabas_12_10_2013.tif \
-model model.mod \
-out fabas_classif.tif
otbcli_ComputeConfusionMatrix \
-in fabas_classif.tif \
-out matconf.csv \
-ref vector \
-ref.vector.in valid_fabas.shp

This is nothing else than what you provide in QGIS! In the following, we are going to combine Python scripts and the OTB-Applications to define our processing chain. Two Python modules will be used: os and glob. These modules are very convenient to manage files and folders and to launch applications. We are also going to benefit from Python's string-processing abilities.

# Load the module
import os

# Launch the application
os.system('otbcli_TrainImagesClassifier -io.il fabas_12_10_2013.tif -io.vd train_fabas.shp -classifier knn -io.out model.mod')
os.system('otbcli_ImageClassifier -in fabas_12_10_2013.tif -model model.mod -out fabas_classif.tif')
os.system('otbcli_ComputeConfusionMatrix -in fabas_classif.tif -out matconf.csv -ref vector -ref.vector.in valid_fabas.shp')

or equivalently:

# Load the module
import os

# Define processing
train = 'otbcli_TrainImagesClassifier -io.il fabas_12_10_2013.tif -io.vd train_fabas.shp -classifier knn -io.out model.mod'
classify = 'otbcli_ImageClassifier -in fabas_12_10_2013.tif -model model.mod -out fabas_classif.tif'
validate = 'otbcli_ComputeConfusionMatrix -in fabas_classif.tif -out matconf.csv -ref vector -ref.vector.in valid_fabas.shp'

# Launch the application
os.system(train)
os.system(classify)
os.system(validate)
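As mentioned above, glob makes it easy to apply the same chain to a whole set of images; a sketch (file, model and output names are illustrative):

```python
import os

def build_commands(images):
    """Build one classification command per image in the list."""
    commands = []
    for image in images:
        base = os.path.splitext(image)[0]
        commands.append('otbcli_ImageClassifier -in {0} -model model.mod '
                        '-out {1}_classif.tif'.format(image, base))
    return commands

# In practice the list would come from glob.glob('*.tif')
for cmd in build_commands(['fabas_10_12_2013.tif', 'fabas_12_10_2013.tif']):
    print(cmd)  # replace print with os.system(cmd) to actually run the chain
```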

Additional useful references are given in section #sec:python; take the time to read them.

Alternatively, it is possible to use the batch processing interface provided by the graphical modeler of QGIS. It allows the batch execution of a model over several inputs. Take a look at the following link for further details:

According to your preference, use one of these techniques to carry out the tasks in the following sections.

## 7.4 Multi dates

For the same area, two dates are available. They can be used jointly in many ways; two possible solutions are considered here. The first one considers the second date as additional data, i.e., there are twice as many pixels in the training set: for each pixel, we have its reflectance on the [2013-10-12 Sat] and on the [2013-12-10 Tue]. The second one considers that we have the temporal evolution of the reflectance of each pixel.

The first approach can simply be done by providing the two images as inputs to the training function. The classification of the whole images is then done independently (two classification maps). The second approach requires concatenating the two dates before training; the concatenation can be done with the function otbcli_ConcatenateImages. The classification of the whole image is then done jointly (only one classification map).

| Pixel | Date | Class | B | G | R | IR |
|---|---|---|---|---|---|---|
| $\mathbf{x}_1$ | [2013-10-12 Sat] | Broadleaf | 0.30 | 0.40 | 0.20 | 0.80 |
| $\mathbf{x}_1$ | [2013-12-10 Tue] | Broadleaf | 0.40 | 0.45 | 0.43 | 0.40 |
| $\mathbf{x}_2$ | [2013-10-12 Sat] | Conifer | 0.29 | 0.41 | 0.18 | 0.75 |
| $\mathbf{x}_2$ | [2013-12-10 Tue] | Conifer | 0.27 | 0.36 | 0.30 | 0.70 |
| $\mathbf{x}_3$ | [2013-10-12 Sat] | Bare soil | 0.39 | 0.37 | 0.38 | 0.39 |
| $\mathbf{x}_3$ | [2013-12-10 Tue] | Bare soil | 0.42 | 0.44 | 0.43 | 0.40 |
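The two ways of combining the dates can be illustrated with NumPy arrays, using the values of pixel $\mathbf{x}_1$ from the table above:

```python
import numpy as np

# Reflectances (B, G, R, IR) of pixel x1 at the two dates, from the table
x1_oct = np.array([0.30, 0.40, 0.20, 0.80])
x1_dec = np.array([0.40, 0.45, 0.43, 0.40])

# Approach 1: the second date adds new samples -> two 4-feature samples
samples = np.vstack([x1_oct, x1_dec])
print(samples.shape)  # (2, 4)

# Approach 2: the dates are concatenated -> one 8-feature sample per pixel
concatenated = np.hstack([x1_oct, x1_dec])
print(concatenated.shape)  # (8,)
```

The first approach ignores which date a sample comes from; the second keeps the spectro-temporal profile of each pixel.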

Works:

1. Using the pixels from Table tab, plot in a spreadsheet all the pixels according to both approaches. Discuss the advantages and drawbacks of each approach in terms of how it captures the spectro-temporal behavior of the different classes.
2. Perform the classification using both approaches, for all the classifiers.
3. Report the results on the collaborative spreadsheet.

## 7.5 Influence of the spatial distribution of the learning samples

In order to evaluate the influence of the validation samples, you will investigate several reference layers to compute the confusion matrix. Since OTB only selects a few samples from all the available ones (this can be controlled with the options samples.mt and samples.mv), we need to repeat the experiment several times to avoid bias.

Select one classifier for all the experiments. You are encouraged to define a python/shell script or a modeler.

# 8 Satellite Image Time Series

## 8.1 Objectives

The objectives of this part are twofold. First, it is proposed to build a Satellite Image Time Series (SITS) from a set of images acquired over the same area. Then, we are going to classify winter/summer crops using the SITS. Reference and validation samples were extracted from the RPG for the same year. Table tab:RPG lists the different classes available in this area. Details about the data are given in #sec:sits.

| Value | Label | Class | Attribute |
|---|---|---|---|
| 1 | Wheat | Winter Crop | 1 |
| 2 | Grain maize and silage | Summer Crop | 2 |
| 3 | Barley | Winter Crop | 1 |
| 4 | Other cereals | Winter Crop | 1 |
| 5 | Rapeseed | Winter Crop | 1 |
| 6 | Sunflower | Summer Crop | 2 |
| 7 | Other oleaginous | Summer Crop | 2 |
| 8 | Protein crops | Summer Crop | 2 |
| 15 | Grain leguminous | Winter Crop | 2 |
| 16 | Fodder | Grassland | 3 |
| 18 | Permanent grassland | Grassland | 3 |

## 8.2 Construction of the SITS

Before classifying the SITS, you need to build it. In this labwork, two SITS will be considered: one built with all the spectral bands, and the other using the NDVI only.

## 8.3 Classification of the SITS

Two scenarios will be considered in this labwork: classification of the whole SITS, and classification of the best date.

## 8.4 Extraction of the best couple of dates

We have seen in the previous part that one date is not enough to reach a correct classification rate. In this section, we are going to test all the possible couples of dates, to find the best one in terms of classification accuracy.

How to do it? Just test all the possible combinations! Be aware that using t1 and t2 is the same as using t2 and t1. Here we have 13 dates, so the total number of couples is $\binom{13}{2}=78$. I really hope you can use bash scripts by now…

The code given in figure code:best:dates might help you. It extracts all the possible couples of files from a set of files in a given directory, the files ending with *m.tif.
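A possible sketch of such a script, using itertools.combinations (the file names below are illustrative; in practice the list would come from glob):

```python
import itertools

def couples_of_files(filenames):
    """Return all unordered couples from a list of file names."""
    return list(itertools.combinations(sorted(filenames), 2))

# In practice: files = glob.glob('path/to/directory/*m.tif')
files = ['date01m.tif', 'date02m.tif', 'date03m.tif']
for f1, f2 in couples_of_files(files):
    print(f1, f2)  # 3 couples for 3 files, 78 for 13 files
```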

Analyze the three best results in terms of accuracy. Interpret the results given the classes to be classified, the geographical area and practical considerations (should we buy the complete SITS, or just some periods of the year? …).

# 9 Dynamic Habitat Index

## 9.1 Introduction

In this labwork, we are going to compute several indices of habitat dynamics in order to define several ecozones. It is based on the following paper:

Nicholas C. Coops, Michael A. Wulder, Dennis C. Duro, Tian Han, Sandra Berry, The development of a Canadian dynamic habitat index using multi-temporal satellite estimates of canopy light absorbance, Ecological Indicators, Volume 8, Issue 5, September 2008, Pages 754-766, ISSN 1470-160X, http://dx.doi.org/10.1016/j.ecolind.2008.01.007. (http://www.sciencedirect.com/science/article/pii/S1470160X08000071)

These indicators characterize vegetation dynamics; they are usually computed from the fraction of photosynthetically active radiation (fPAR) absorbed by the vegetation. However, these data are not available here, so in this lab the NDVI will be used instead. The data are described in #sec:sits.

## 9.2 Construction of the SITS

Before analyzing the SITS, you need to build it.

## 9.3 Computation of the dynamic indices

The second part of the labwork concerns the computation of the dynamic indices. Let us denote by $\mathbf{x}_i=[\mathbf{x}_i(t_1),\ldots,\mathbf{x}_i(t_d)]$ the vector of fPAR values of pixel $i$. Three indices have been defined:

1. The cumulative annual greenness,
2. The annual minimum cover,
3. The greenness coefficient of variation.
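Assuming the usual definitions from Coops et al. (sum, minimum and coefficient of variation of the annual series), these three indices can be sketched in NumPy for one pixel (the values are illustrative):

```python
import numpy as np

# Illustrative NDVI time series of one pixel over d dates
x = np.array([0.2, 0.4, 0.7, 0.8, 0.6, 0.3])

cumulative = x.sum()             # 1. cumulative annual greenness
minimum = x.min()                # 2. annual minimum cover
variation = x.std() / x.mean()   # 3. greenness coefficient of variation

print(cumulative, minimum, variation)
```

In the labwork, the same operations are applied band-wise over the whole SITS (e.g. with otbcli_BandMath).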

## 9.4 Characterization of ecozones

Perform a segmentation of the SITS using the three indices as input values. A preliminary study suggests that the number of ecozones is 4 for this area. Look at the function otbcli_KMeansClassification to perform the automatic segmentation of your data.

# 10 Python for Remote Sensing data analysis

## 10.1 Template filters

### 10.1.1 Introduction

In this labwork, images are provided as NumPy arrays saved in the .npy format. How to open remote sensing images with GDAL will be addressed later. To load an image, just do

import numpy as np
image = np.load('dataname.npy')

In the following, you will have to write Python functions (mainly image filters). Figure code:skeleton provides a skeleton of such a function, using a simple docstring convention. You are highly encouraged to put comments in your code!

<code:skeleton>
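As an illustration of the expected docstring convention, here is a minimal sketch of what such a skeleton might look like (the function name is illustrative and the body is a placeholder):

```python
import numpy as np

def my_filter(image, size=3):
    """Template of an image filter.

    Parameters
    ----------
    image : 2-D array, the image to be filtered
    size : int, odd size of the (square) sliding window

    Returns
    -------
    out : 2-D array, the filtered image
    """
    out = image.copy()
    # ... the actual processing goes here; the identity is a placeholder
    return out
```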

### 10.1.2 Template filters function

Using the previously defined skeleton, implement the following filters: max, min, median, mean. You can use the NumPy array functions described below:

https://docs.scipy.org/doc/numpy/reference/generated/numpy.amax.html

https://docs.scipy.org/doc/numpy/reference/generated/numpy.amin.html

https://docs.scipy.org/doc/numpy/reference/generated/numpy.mean.html

https://docs.scipy.org/doc/numpy/reference/generated/numpy.median.html
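As an example, here is a possible implementation of the mean filter with a square sliding window (a sketch; border pixels, where the window does not fit, are simply left unchanged):

```python
import numpy as np

def mean_filter(image, size=3):
    """Mean filter with a square sliding window of odd size."""
    half = size // 2
    out = image.astype(float).copy()
    rows, cols = image.shape
    for i in range(half, rows - half):
        for j in range(half, cols - half):
            window = image[i - half:i + half + 1, j - half:j + half + 1]
            out[i, j] = np.mean(window)
    return out

im = np.array([[1., 1., 1.], [1., 10., 1.], [1., 1., 1.]])
print(mean_filter(im))  # the centre becomes (8*1 + 10)/9 = 2.0
```

The max, min and median filters follow the same structure, replacing np.mean by np.amax, np.amin or np.median.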

### 10.1.3 To go further

• Extend your functions to multidimensional images.
• Extend the functions to a rectangular template (with odd sizes).
• For the mean filter, add a new parameter that defines the number of cycles, i.e., the number of times the filter is applied iteratively to the image.
• Implement the following filters and try to explain their action:
• max(im)-im
• im-min(im)
• max(im)-min(im)

## 10.2 Historical map processing

This section is dedicated to the implementation of the filtering part of the following paper:

P-A Herrault, D Sheeren, M Fauvel, and M Paegelow. Automatic extraction of forests from historical maps based on unsupervised classification in the cielab color space. In Geographic Information Science at the Heart of Europe, pages 95–112. Springer International Publishing, 2013.

It concerns the filtering of historical maps; see #sec:data:hm for the data to be processed. The filtering consists in applying the following filters consecutively, on every band of the color image:

1. Local max filter,
2. Local min filter,
3. Local median filter,

as illustrated in the (simplified) processing chain shown in fig:hm. You should write a script that takes as arguments:

• The name of the image,
• the name of the output image,
• the size of the min/max filter,
• the size of the median filter,

and, of course, performs the right processing.

1. Play with it to find the best pair of parameters to remove the black lines in the historical map.
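One way to handle such arguments in a Python script is the standard argparse module; a sketch (the argument names and defaults are illustrative):

```python
import argparse

# The arguments follow the list above; defaults are placeholders
parser = argparse.ArgumentParser(description='Historical map filtering')
parser.add_argument('input_image', help='name of the image')
parser.add_argument('output_image', help='name of the output image')
parser.add_argument('--minmax-size', type=int, default=3,
                    help='size of the min/max filter')
parser.add_argument('--median-size', type=int, default=5,
                    help='size of the median filter')

# Normally parse_args() reads sys.argv; a list is passed here for the demo
args = parser.parse_args(['map.tif', 'filtered.tif', '--minmax-size', '5'])
print(args.minmax_size, args.median_size)  # 5 5
```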

## 10.3 QGIS Script

QGIS offers facilities to call Python scripts directly from QGIS. We are going to see how to use our previous script, with slight modifications, directly in QGIS. Doing this has two main advantages:

1. Let QGIS do the job of selecting and visualizing the images,
2. Use QGIS pre-defined functions to handle the arguments of our functions.

### 10.3.1 Install Script Runner

Script Runner is a QGIS plugin that allows you to integrate Python scripts very easily: you don’t need to worry about linking your script to QGIS, it is done automatically. To install it, just go to Extensions -> Manage and Install Plugins and select the plugin.

Once the plugin is installed in QGIS, you can import your scripts.

### 10.3.2 Creating script for QGIS and Script Runner

Your script needs slight modifications in order to be imported in Script Runner.

1. Add the following modules to interact with QGIS:
from PyQt4.QtCore import *
from PyQt4.QtGui import *
from qgis.core import *
from qgis.gui import *

2. You should define a function run_script(iface) where the processing is done. iface is a Python object that gives access to several QGIS objects and classes; it is used to communicate with QGIS. You can also add arguments to the function: when the script is run, a window will automatically be created asking you to fill in the value of each parameter. For instance, if you need to define the bands corresponding to infrared and red and the name of the output file:
def run_script(iface, IR, R, data_name):

3. The second modification needed is to get the name of the data of the current layer in QGIS. This is done using the iface object:
layer = iface.activeLayer() # Get the current layer
name = layer.dataProvider().dataSourceUri() # Get the path of the layer


Then, once the processed file has been saved on the hard drive, it is possible to load the file directly into QGIS, again by using iface:

raster_lyr = iface.addRasterLayer('name_of_the_data','name_of_the_layer')

4. The last modification is not mandatory, but it will simplify the use of your script: do not use import; instead, put all your additional functions directly before the run_script function.

## 10.4 A plugin to make accessible an algorithm in the Qgis Processing toolbox

Thanks to the Processing Toolbox developed by Victor Olaya, it is now possible to use your algorithm together with other QGIS algorithms. The plugin automatically creates a Graphical User Interface (GUI); hence there are some guidelines to follow in order to be compatible with QGIS.

### 10.4.1 Plugin Builder in QGIS

Plugin Builder is a QGIS plugin which allows you to automate the creation of plugins by giving it some information (name of your processing plugin, author name, bug tracker URL, …). Some of these fields are mandatory, because the aim of Plugin Builder is to let you share your creation with every QGIS user. Once you have filled in this information, you will find your plugin in the same folder as the other installed plugins:

# ~/ refer to your user /home/USER folder
cd ~/.qgis2/python/plugins/YourPluginClassName

After the creation of your plugin, your folder should be organized as follows:

ls #list of files and folders in YourPluginClassName
__init__.py  pluginModule_algorithm.py  pylintrc          test
Makefile     pluginModule_provider.py   README.html

As there is not much documentation available on the Processing Toolbox GUI, the easiest way to find the functions and classes to call is to read the code: Inputs (parameters), Outputs.

#### 10.4.2.1 Setting inputs

By default, pluginModule_algorithm.py is the file that contains the lines used to create the standard GUI and to run the script. This file imports different classes:

from PyQt4.QtCore import QSettings
from qgis.core import QgsVectorFileWriter

from processing.core.GeoAlgorithm import GeoAlgorithm
from processing.core.parameters import ParameterVector
from processing.core.outputs import OutputVector
from processing.tools import dataobjects, vector

For our work (turning the Historical Map filtering process into a QGIS algorithm), we need two different kinds of inputs:

• Raster image
• Integer values (closing filter size, median filter size and median filter iteration)

When looking at Inputs (parameters) in the processing code, we see different input classes corresponding to each of our needs:

class ParameterNumber(Parameter) # line 302
# ...
# then
# ...
class ParameterRaster(Parameter) # line 364

Now, to import these two classes, simply add to your pluginModule_algorithm.py:

from processing.core.parameters import ParameterRaster,ParameterNumber

Parameters can take different arguments (e.g., accept only shapefiles of polygon type, not points, etc.). Looking at the parameter file (__init__ of ParameterNumber):

# __init__ of the ParameterNumber class
def __init__(self, name='', description='', minValue=None, maxValue=None,
             default=None, optional=False, metadata={})

Once you have found the arguments in the __init__ of your parameter class, add the required variable in the defineCharacteristics function of your pluginModule_algorithm.py:

def defineCharacteristics(self):
    # add an input of Number type
    # ...
    self.addParameter(ParameterNumber(
        self.CLOSING_SIZE,
        self.tr('Window size of closing filter'),
        minValue=3,
        default=5))

Just don’t forget to define the variable CLOSING_SIZE at the beginning of your pluginClassAlgorithm(GeoAlgorithm):

class pluginClassAlgorithm(GeoAlgorithm):
    #...
    CLOSING_SIZE = 'CLOSING_SIZE'
    # And so on (INPUT_RASTER...)

#### 10.4.2.2 Setting outputs

The Historical Map filtering process outputs one raster. Looking at the outputs file in the processing code, we see every output type compatible with the Processing GUI; for a raster it is:

class OutputRaster(Output)

To use this output type, simply import the class:

from processing.core.outputs import OutputRaster

Then declare it in the defineCharacteristics function:

def defineCharacteristics(self):
    #...
    self.addOutput(
        OutputRaster(
            self.OUTPUT_RASTER,
            self.tr('Output raster (filtered image)')))

As for the inputs, don't forget to define the variable in your class root:

class pluginClassAlgorithm(GeoAlgorithm):
    CLOSING_SIZE = 'CLOSING_SIZE'
    OUTPUT_RASTER = 'OUTPUT_RASTER'
    # And so on (INPUT_RASTER...)

#### 10.4.2.3 Connect the processing code to your function

Once inputs and outputs are correctly defined, you need to use them to run your class or function. To get the values the user chose for the input and output parameters, the GeoAlgorithm class has a function named getParameterValue. Simply use it to retrieve the values and pass them as arguments to your function.

def processAlgorithm(self, progress):
    """Here is where the processing itself takes place."""

    INPUT_RASTER = self.getParameterValue(self.INPUT_RASTER)
    #...
    OUTPUT_RASTER = self.getOutputValue(self.OUTPUT_RASTER)

    # classify
    historicalMapProcess(INPUT_RASTER,OUTPUT_RASTER,*args)

### 10.4.3 Debug and/or talk with the final user

It is hard to debug code inside QGIS. That is why you have to make sure your code works perfectly outside of QGIS before implementing it inside this tool.

#### 10.4.3.1 Log a message

From the qgis.core module, you can import QgsMessageLog, which allows you to write lines to the log panel below the map in the main QGIS window.

from qgis.core import QgsMessageLog

Then, when you need to verify some arguments, or to write something useful for the user or for yourself, simply write:

QgsMessageLog.logMessage("Input raster is "+str(INPUT_RASTER))

The log is a good place to store information (a great option for debugging), but it is not the best choice to alert the user when there is a problem. QGIS also lets you display a message over the map with a colored background: the user cannot ignore it!

from qgis.utils import iface

You must first retrieve the interface currently in use (iface), then push the message over the map:

iface.messageBar().pushMessage("Error", "I'm just a student, I do mistakes, lot of mistakes!")

You can set the duration of the message (it will hide automatically afterwards) and the alert level (from informational to critical):

from qgis.gui import QgsMessageBar

# ALL IS WELL! Green background.
iface.messageBar().pushMessage("Well done!", "My algorithm works perfectly!", level=QgsMessageBar.SUCCESS, duration=3)

# Uh oh... there's a bug, isn't there? Light blue background.
iface.messageBar().pushMessage("Ohoh...", "It does not quite work as I expected...", level=QgsMessageBar.INFO, duration=5)

# CATASTROPHE! Red background, it's horrifying!
iface.messageBar().pushMessage("Error", "There's a serious problem over here... Maybe I should rewrite all the code...", level=QgsMessageBar.CRITICAL, duration=10)


# 11 Appendix

## 11.1 Short introduction to shell

This section provides an introduction to shell programming and shell scripts. A script is a set of commands; it allows you to write a processing chain for a given image, or to apply one processing step to a set of images (and of course the two can be mixed). You can easily find more information on the web; a good starting point is the Wikibook.

Shell is a programming language that is available on all GNU/Linux distributions. It can be used directly from the prompt (interactive mode), or by writing a file with a set of commands to be run. This file should start with the line

#!/bin/bash

In the following, it is assumed that we are working on the file script.sh. To insert a comment inside the script, use the symbol #.

# This is a comment

With Linux, a file can be writable, readable and/or executable. To be run as a script, it should at least be executable by the OS. This can be done by running the following command:

chmod +x script.sh

To run it, just do

./script.sh

### 11.1.1 Basic commands

• cd: Change directory. To enter a directory, do cd Name_Of_Directory.
• ls: List all the files in the current directory.
• pwd: Return the name of the current directory.
• cp: Copy a file/directory, for instance cp A B.
• mv: Move a file to another, for instance mv A B.
• mkdir: Create a directory, mkdir Name_Of_Directory.

For instance, to get all the tif files in the current folder:

ls *tif
fabas_10_12_2013.tif  fabas_12_10_2013.tif

### 11.1.2 Variables

In shell, a variable is a string (not a number). It can be defined as:

var1='Mathieu' # Store "Mathieu" in the variable "var1"
var2='Fauvel'
var3='34'

Be careful with spaces: there must be no space around the = sign, otherwise an error is returned! A variable is displayed with the echo command, and its value is accessed with the symbol $.

echo $var1 $var2      # print "Mathieu Fauvel"
echo "$var3 ans"      # print "34 ans"
echo '$var3 ans'      # print "$var3 ans"

Mathieu Fauvel
34 ans
$var3 ans

Note the difference between the single quote ' and the double quote ": the single quote does not evaluate the variable while the double quote does.

It is possible to pass parameters to the script, simply by adding them when the script is called. They are accessed with $ followed by their order of appearance on the command line. Let us define the script.sh file:

# ./script.sh Name FamilyName Age
echo $1 $2
echo "J ai (eu) $3 ans !"

When we run it, we get the following output:

chmod +x script.sh
./script.sh Mathieu Fauvel 33
Mathieu Fauvel
J ai (eu) 33 ans !

### 11.1.3 Loop

As in any programming language, loops are very useful to apply a series of processing steps to all the elements of a sequence. The example below applies a processing step to all the tif files of the current directory:

for i in *.tif            # For all tif files
do
cp $i ndvi_$i             # create a new file, prepending ndvi_ to the filename
done

### 11.1.4 Sequence

It is possible to define sequences of strings like this:

for name in bayes libsvm knn rf
do
echo $name
done
bayes
libsvm
knn
rf

Sequences of numbers can be defined like this:

for i in $(seq 1 5)
do
echo $i
done

1
2
3
4
5

## 11.2 Short introduction to Python

A good starting point is the following link: http://kitchingroup.cheme.cmu.edu/pycse/pycse.html. Here, I just review a few things that are useful for the labworks, but Python is far more than this short introduction.

### 11.2.1 String

Handling strings with Python is very easy. It is possible to add strings together, as with numbers! Pay attention to spaces…

name="Mathieu"
surname="Fauvel"
print name + surname

To use numbers in strings, it is necessary to convert them, using the function str:

print "Bonjour j'ai eu " + str(36) + " ans"

### 11.2.2 Loop

It is very easy to iterate over a list with Python. The list can be made of numbers, strings, etc. Since a list is iterable, defining a for loop is just:

listeNumber = [1,2,3,4]
print listeNumber
for item in listeNumber:
    print(item)

listeString = ['knn','bayes','libsvm','rf']
print listeString
for item in listeString:
    print(item)

### 11.2.3 Glob

The glob module finds all the pathnames matching a given pattern. It uses standard Unix (shell) path expansion rules. However, results are returned in arbitrary order, so a sorting operation is sometimes necessary. It returns a list of pathnames, or an iterator, which can be useful for large processing. Below are some examples of how it works. First, check what is in the figures directory:

ls figures/

If we want to get all the files, we just need to do

import glob
files = glob.glob("figures/*")
for files_ in files:
    print files_

If we only want the png files:

import glob
files = glob.glob("figures/*.png")
for files_ in files:
    print files_

The iterator version is iglob; it does the same job as glob, but without storing all the results simultaneously.

import glob
for files_ in glob.iglob("figures/*.png"):
    print files_

### 11.2.4 Argparse

Argparse (https://docs.python.org/3.6/library/argparse.html) is a module to parse options and arguments from the command-line interface. It defines which arguments are mandatory, and generates help and usage messages and errors at runtime. For instance, suppose we have a function that needs two parameters: the name of a multispectral file and the size of the template filter. Argparse handles everything:

import argparse

# Initialization of the parser
parser = argparse.ArgumentParser()

# Add arguments
parser.add_argument("-in",dest="image",help="Image to be processed",type=str)
parser.add_argument("-p",help="Size of the template",type=int,default=1)
args = parser.parse_args()

print args.image
print args.p

# 12 Python and Shell Codes for the labworks

## 12.1 Visualization

### 12.1.1 Get pixel mean values from a collaborative spreadsheet

import scipy as sp
import urllib2
import time
import os
import matplotlib.pyplot as plt
import matplotlib
matplotlib.style.use('ggplot')

URL = ['https://framacalc.org/fauvel_rs_water','https://framacalc.org/fauvel_rs_grassland','https://framacalc.org/fauvel_rs_forest','https://framacalc.org/fauvel_rs_baresoil']
LABEL = ['Water','Grassland','Forest','Bare soil']
COLOR = ['b','g','r','m']

for i,url in enumerate(URL):
    try:
        with open('temp.csv','wb') as file:
            file.write(urllib2.urlopen(url+'.csv').read())
        data = sp.genfromtxt('temp.csv',delimiter=',',skip_header=1)
        os.remove("temp.csv")
        m,s = data.mean(axis=0),data.std(axis=0)
        plt.figure()
        plt.plot(range(1,5),m,'k',lw=2)
        plt.title(LABEL[i])
        plt.ylim([0,1000])
        plt.draw()
    except KeyboardInterrupt:
        exit()
    except:
        print("Error in reading class {0}".format(LABEL[i]))
plt.show()

### 12.1.2 Boxplots

import scipy as sp
import urllib2
import time
import os
import matplotlib.pyplot as plt
import matplotlib
matplotlib.style.use('ggplot')

URL = ['https://framacalc.org/fauvel_rs_water','https://framacalc.org/fauvel_rs_grassland','https://framacalc.org/fauvel_rs_forest','https://framacalc.org/fauvel_rs_baresoil']
LABEL = ['Water','Grassland','Forest','Bare soil']

fig = plt.figure()
for i,url in enumerate(URL):
    try:
        with open('temp.csv','wb') as file:
            file.write(urllib2.urlopen(url+'.csv').read())
        data = sp.genfromtxt('temp.csv',delimiter=',',skip_header=1)
        os.remove("temp.csv")
        ax = fig.add_subplot(4,1,i+1)
        plt.boxplot(data)
        ax.set_title(LABEL[i])
        ax.set_ylim([0,1000])
    except KeyboardInterrupt:
        exit()
    except:
        print("Error in reading class {0}".format(LABEL[i]))
plt.show()

## 12.2 Change detection

#!/bin/bash
# NDVI BEFORE
otbcli_BandMath -il tsunami_before.dat -out ndvi_before.tif -exp "(im1b4-im1b3)/(im1b4+im1b3)"

# NDVI AFTER
otbcli_BandMath -il tsunami_after.dat -out ndvi_after.tif -exp "(im1b4-im1b3)/(im1b4+im1b3)"

# Diff NDVI
otbcli_BandMath -il ndvi_after.tif ndvi_before.tif -out temp.tif -exp "im1b1-im2b1"
otbcli_BandMath -il temp.tif -out diff.tif -exp "(im1b1>1?1:im1b1)" # if(im1b1>1,1,im1b1)

# Threshold
otbcli_BandMath -il diff.tif -out floods.tif -exp "(im1b1<-0.4?1:0)" # if(im1b1<-0.4,1,0)

## 12.3 Comparison of several classifiers and several data sets

import scipy as sp
import urllib2
import time
import os
import matplotlib.pyplot as plt
import matplotlib
#matplotlib.style.use('ggplot')

url = 'https://framacalc.org/fauvel_res_classification'
classifier = ['RF','GMM','KNN']

# Read the file
with open('temp.csv','wb') as file:
    file.write(urllib2.urlopen(url+'.csv').read())
data = sp.genfromtxt('temp.csv',delimiter=',',skip_header=2)
os.remove("temp.csv")

index = sp.arange(3)
bar_width = 0.35
opacity = 0.4
error_config = {'ecolor': '0.3','lw':2,'capsize':5,'capthick':2}

# Compute the statistics
m,s = sp.nanmean(data,axis=0),sp.nanstd(data,axis=0)
for i in xrange(3):
    plt.figure()
    plt.bar(index,m[i*3:i*3+3],width=bar_width,alpha=opacity,color='b',yerr=s[i*3:i*3+3],error_kw=error_config)
    plt.xticks(index+bar_width/2,classifier)
    plt.axis([-0.5, 3.5, 75, 100])
    plt.grid('on')
plt.show()

for name in fabas*.tif
do
for classifier in bayes svm rf knn
do
# Train
otbcli_TrainImagesClassifier -io.il ${name} -io.vd train_fabas.shp -io.out model.txt -classifier ${classifier}

# Predict
otbcli_ImageClassifier -in ${name} -model model.txt -out classif.tif

# Compute confusion
otbcli_ComputeConfusionMatrix -in classif.tif -ref vector -ref.vector.in valid_fabas.shp -out confu_${classifier}_${name%%.tif}.csv
done
done
rm model.txt classif.tif
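Each confu_*.csv file produced by the loop above contains a confusion matrix. As a sketch of what can then be done with it in Python (the helper name overall_accuracy and the toy matrix below are invented for illustration), the overall accuracy is simply the trace of the matrix divided by its total sum:

```python
import numpy as np

def overall_accuracy(confusion):
    """Overall accuracy (%): correctly classified samples over all samples."""
    confusion = np.asarray(confusion, dtype=float)
    return 100.0 * np.trace(confusion) / confusion.sum()

# Toy 3-class confusion matrix (rows: reference, columns: prediction)
mat = [[50, 2, 3],
       [4, 40, 1],
       [0, 5, 45]]
print(overall_accuracy(mat))  # 135 correct out of 150 -> 90.0
```

This is the same quantity computed in section 12.5 to rank the couples of dates.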

## 12.4 Multidate classification from tabular data

import matplotlib.pyplot as plt

plt.figure()
for i,t in enumerate(tab):
    plt.plot(range(1,5),t[3:],label='p'+str(i+1))

plt.legend()
plt.show()
import matplotlib.pyplot as plt
import scipy as sp
plt.figure()
x = sp.zeros((3,8))
for i in range(3):
    x[i,:4] = tab[2*i][3:]
    x[i,4:] = tab[2*i+1][3:]
    plt.plot(range(1,9),x[i,:],label='p'+str(i+1))

plt.legend()
plt.show()

## 12.5 Selecting the best couple of dates

# Compute the NDVI
for name in Sud*.tif
do
otbcli_BandMath -il ${name} -out ndvi_${name} -exp "(im1b4-im1b3)/(im1b4+im1b3)" -progress 0
done

# Compute the classification accuracy for each couple of dates
FILE=$(ls ndvi_*m.tif)    # Get all the files that end with 'm.tif'
EFILE=''
for file in $FILE
do
# Add the current file to the variables to be excluded from the second loop
EFILE="$EFILE $file"

# Exclude these variables from the next loop
FILES=$FILE               # Copy the variable
for efile in $EFILE
do
# Remove from FILES all the files from EFILE (substitute with nothing)
FILES=$(echo $FILES | sed "s/\b$efile\b//g")
done

# Do the process for the given couple of images
for files in $FILES
do
# Concatenation
otbcli_ConcatenateImages -il $file$files -out ndvi_concat.tif -progress 0

# Train
otbcli_TrainImagesClassifier -io.il ndvi_concat.tif -io.vd rpg_pur_train.shp  -io.out model -classifier rf -progress 0

# Classify the image
otbcli_ImageClassifier -in ndvi_concat.tif -model model -out classif.tif -progress 0

# Confusion
otbcli_ComputeConfusionMatrix -in classif.tif -out confu_${file:22:8}_${files:22:8}.csv -ref vector -ref.vector.in rpg_purs_validation.shp -progress 0

done
done
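The two nested shell loops above enumerate each unordered couple of dates exactly once. The same enumeration can be sketched in Python with itertools.combinations from the standard library (the file names below are made up for illustration):

```python
from itertools import combinations

files = ["ndvi_20130410m.tif", "ndvi_20130612m.tif", "ndvi_20130905m.tif"]

# Each unordered couple appears exactly once, as in the shell version
couples = list(combinations(files, 2))
for a, b in couples:
    print(a, b)

print(len(couples))  # 3 couples for 3 files
```

For n files this yields n(n-1)/2 couples, which is why the shell script removes the already-processed files from the inner loop.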

import scipy as sp
import glob
import matplotlib.pyplot as plt

CONFU = glob.glob('confu_20*_*.csv')
CONFU.sort()

OA=[]

for confu in CONFU:
    temp = sp.genfromtxt(confu, delimiter=',')  # load the confusion matrix
    OA.append(100.0*sp.diag(temp).sum()/temp.sum())
ind = sp.argmax(OA)

# plot the result
plt.plot(OA)
plt.plot(ind,OA[ind],'sk')
plt.title('Best couples of dates: ' + CONFU[ind])
plt.show()
The same computation can be done in R:

LISTE=list.files(pattern='confu_')
n=length(LISTE)
OA=array(dim=n)
for(i in 1:n){
nom=LISTE[i]
mat=as.matrix(read.csv(nom,header=FALSE,comment.char='#'))  # load the confusion matrix
OA[i]=sum(diag(mat))/sum(mat)
}
OA
index=which(OA==max(OA))
LISTE[index]
plot(1:n,OA,type='l',col='red',lwd=2)
points(index,OA[index],pch=19,col='green')


## 12.6 Dynamic Habitat

#!/bin/bash

# Crop the rasters
for name in *tif
do
gdal_translate -a_nodata 0 -projwin 517844.362981 6257253.98798 545735.795673 6240635.7476 \
-of GTiff ${name} cut_${name}
done

# Temporal Gap filling
for name in cut_serie_spot5_coteaux_green.tif cut_serie_spot5_coteaux_mir.tif cut_serie_spot5_coteaux_nir.tif cut_serie_spot5_coteaux_red.tif
do
otbcli_ImageTimeSeriesGapFilling -in ${name} -mask cut_serie_spot5_coteaux_NUA.tif -out filtered_${name} -comp 1 -it spline -id dates.txt
done

# Compute the NDVI
for i in $(seq 1 15)
do
# Compute the NDVI, rescaled to [0,1], with a zero-denominator guard
otbcli_BandMath -il filtered_cut_serie_spot5_coteaux_nir.tif filtered_cut_serie_spot5_coteaux_red.tif -out ndvi_${i}.tif -exp "(im1b${i}+im2b${i}==0?0:((im1b${i}-im2b${i})/(im1b${i}+im2b${i})+1)/2)"
done

# Compute the SITS
otbcli_ConcatenateImages -il ndvi_{1..15}.tif -out sits.tif
rm ndvi_*.tif

# Compute the cumulative greenness
otbcli_BandMath -il sits.tif -out cumgreen.tif -exp "im1b1+im1b2+im1b3+im1b4+im1b5+im1b6+im1b7+im1b8+im1b9+im1b10+im1b11+im1b12+im1b13+im1b14+im1b15"

# Compute the minimal annual cover
otbcli_BandMath -il sits.tif -out mincover.tif -exp "min(im1b1,im1b2,im1b3,im1b4,im1b5,im1b6,im1b7,im1b8,im1b9,im1b10,im1b11,im1b12,im1b13,im1b14,im1b15)"

# Compute the seasonal variation
otbcli_BandMath -il sits.tif cumgreen.tif -out seasonvar.tif -exp "sqrt(((im2b1/15-im1b1)^2+(im2b1/15-im1b2)^2+(im2b1/15-im1b3)^2+(im2b1/15-im1b4)^2+(im2b1/15-im1b5)^2+(im2b1/15-im1b6)^2+(im2b1/15-im1b7)^2+(im2b1/15-im1b8)^2+(im2b1/15-im1b9)^2+(im2b1/15-im1b10)^2+(im2b1/15-im1b11)^2+(im2b1/15-im1b12)^2+(im2b1/15-im1b13)^2+(im2b1/15-im1b14)^2+(im2b1/15-im1b15)^2)/15)/im2b1*15"

# Create a multiband image stacking the three indices
otbcli_ConcatenateImages -il cumgreen.tif mincover.tif seasonvar.tif -out temp.tif

# Rescale values between -1 and 1 for each spectral band
otbcli_Rescale -in temp.tif -out stack.tif -outmin -1 -outmax 1
rm temp.tif

# Apply KMEANS classification for different numbers of classes
for classe in {1..10}
do
otbcli_KMeansClassification -in stack.tif -ts 1000000 -out kmean_${classe}.tif -nc ${classe} -ram 2048 -outmeans centroids_${classe}
done
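The three Dynamic Habitat indices computed above with otbcli_BandMath (cumulative greenness, minimal annual cover and seasonal variation) can be sketched with NumPy on an NDVI series stored as a (rows, columns, dates) array. The array below is random and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
sits = rng.random((4, 5, 15))  # toy NDVI series: 4x5 pixels, 15 dates

cumgreen = sits.sum(axis=2)    # cumulative greenness
mincover = sits.min(axis=2)    # minimal annual cover

# Seasonal variation: coefficient of variation around the annual mean,
# matching the BandMath expression sqrt(sum((mean - b_i)^2)/15)/mean
mean = cumgreen / 15.0
seasonvar = np.sqrt(((sits - mean[:, :, None]) ** 2).sum(axis=2) / 15.0) / mean

print(cumgreen.shape, mincover.shape, seasonvar.shape)  # three (4, 5) maps
```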

## 12.7 Python

### 12.7.1 Display image

import matplotlib.pyplot as plt
import scipy as sp
import myfilters as mf

plt.figure()
plt.imshow(image,cmap="gray")

# # Filter max
# plt.figure()
# filmax = mf.filterMax(image,p=1)
# plt.imshow(filmax,cmap="gray")

# plt.figure()
# filmin = mf.filterMin(image,p=1)
# plt.imshow(filmin,cmap="gray")

# plt.figure()
# filmedian = mf.filterMedian(image,p=1)
# plt.imshow(filmedian,cmap="gray")

# plt.figure()
# filmean = mf.filterMean(image,p=1)
# plt.imshow(filmean,cmap="gray")

for p in range(1,6):
    plt.figure()
    out = mf.filterMedian(image,p=p)
    plt.imshow(out,cmap="gray")
plt.show()

### 12.7.2 Test speed

import myfilters as mf
import time
import scipy as sp

#
tic = time.time()
im1 = mf.filterMax(image,p=1)
print "Processing time : {}".format(time.time()-tic)

tic = time.time()
im1 = mf.filterMaxS(image,p=1)
print "Processing time : {}".format(time.time()-tic)
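The timing pattern above does not depend on myfilters; the self-contained sketch below (sum_loop is an invented stand-in) shows the same time.time() bracketing and why vectorized NumPy calls are preferred over explicit pixel loops:

```python
import time
import numpy as np

image = np.arange(10000, dtype=float).reshape(100, 100)

def sum_loop(im):
    """Sum all pixels with an explicit double loop."""
    total = 0.0
    for i in range(im.shape[0]):
        for j in range(im.shape[1]):
            total += im[i, j]
    return total

tic = time.time()
s1 = sum_loop(image)
print("Loop       : {:.6f} s".format(time.time() - tic))

tic = time.time()
s2 = image.sum()  # vectorized version, typically orders of magnitude faster
print("Vectorized : {:.6f} s".format(time.time() - tic))
```

Both calls return the same value; only the processing time differs.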

### 12.7.3 Raster Tool

import scipy as sp
from osgeo import gdal

def open_data(filename):
'''
The function opens and loads the image given its filename.
The type of the data is checked from the file and the scipy array is initialized accordingly.
Input:
filename: the name of the file
Output:
im: the data cube
GeoTransform: the geotransform information
Projection: the projection information
'''
data = gdal.Open(filename, gdal.GA_ReadOnly)
if data is None:
print 'Impossible to open '+filename
exit()
nc = data.RasterXSize
nl = data.RasterYSize
d  = data.RasterCount

# Get the type of the data
gdal_dt = data.GetRasterBand(1).DataType
if gdal_dt == gdal.GDT_Byte:
dt = 'uint8'
elif gdal_dt == gdal.GDT_Int16:
dt = 'int16'
elif gdal_dt == gdal.GDT_UInt16:
dt = 'uint16'
elif gdal_dt == gdal.GDT_Int32:
dt = 'int32'
elif gdal_dt == gdal.GDT_UInt32:
dt = 'uint32'
elif gdal_dt == gdal.GDT_Float32:
dt = 'float32'
elif gdal_dt == gdal.GDT_Float64:
dt = 'float64'
elif gdal_dt == gdal.GDT_CInt16 or gdal_dt == gdal.GDT_CInt32 or gdal_dt == gdal.GDT_CFloat32 or gdal_dt == gdal.GDT_CFloat64 :
dt = 'complex64'
else:
print 'Data type unknown'
exit()

# Initialize the array and read the bands
if d == 1:
im = sp.empty((nl,nc),dtype=dt)
im[:,:] = data.GetRasterBand(1).ReadAsArray()
else:
im = sp.empty((nl,nc,d),dtype=dt)
for i in range(d):
im[:,:,i] = data.GetRasterBand(i+1).ReadAsArray()
GeoTransform = data.GetGeoTransform()
Projection = data.GetProjection()
data = None # Close the file
return im,GeoTransform,Projection

def write_data(outname,im,GeoTransform,Projection):
'''
The function writes the image to the hard drive.
Input:
outname: the name of the file to be written
im: the image cube
GeoTransform: the geotransform information
Projection: the projection information
Output:
Nothing --
'''
nl = im.shape[0]
nc = im.shape[1]
if im.ndim == 2:
d=1
else:
d = im.shape[2]

driver = gdal.GetDriverByName('GTiff')
dt = im.dtype.name
# Get the data type
if dt == 'bool' or dt == 'uint8':
gdal_dt=gdal.GDT_Byte
elif dt == 'int8' or dt == 'int16':
gdal_dt=gdal.GDT_Int16
elif dt == 'uint16':
gdal_dt=gdal.GDT_UInt16
elif dt == 'int32':
gdal_dt=gdal.GDT_Int32
elif dt == 'uint32':
gdal_dt=gdal.GDT_UInt32
elif dt == 'int64' or dt == 'uint64' or dt == 'float16' or dt == 'float32':
gdal_dt=gdal.GDT_Float32
elif dt == 'float64':
gdal_dt=gdal.GDT_Float64
elif dt == 'complex64':
gdal_dt=gdal.GDT_CFloat64
else:
print 'Data type not supported'
exit()

dst_ds = driver.Create(outname,nc,nl, d, gdal_dt)
dst_ds.SetGeoTransform(GeoTransform)
dst_ds.SetProjection(Projection)

if d==1:
out = dst_ds.GetRasterBand(1)
out.WriteArray(im)
out.FlushCache()
else:
for i in range(d):
out = dst_ds.GetRasterBand(i+1)
out.WriteArray(im[:,:,i])
out.FlushCache()
dst_ds = None # Close the file
out = None
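The if/elif chains that translate between NumPy dtype names and GDAL data types in open_data and write_data can also be written as a dictionary lookup. The sketch below maps dtype names to GDAL type names as plain strings, so it runs without the gdal module; the helper gdal_type_name is hypothetical:

```python
# numpy dtype name -> GDAL data type name (the subset handled by write_data)
NP2GDAL = {
    'bool': 'GDT_Byte', 'uint8': 'GDT_Byte',
    'int8': 'GDT_Int16', 'int16': 'GDT_Int16',
    'uint16': 'GDT_UInt16',
    'int32': 'GDT_Int32', 'uint32': 'GDT_UInt32',
    'int64': 'GDT_Float32', 'uint64': 'GDT_Float32',
    'float16': 'GDT_Float32', 'float32': 'GDT_Float32',
    'float64': 'GDT_Float64',
    'complex64': 'GDT_CFloat64',
}

def gdal_type_name(dtype_name):
    """Return the GDAL type name for a numpy dtype name, or raise ValueError."""
    try:
        return NP2GDAL[dtype_name]
    except KeyError:
        raise ValueError('Data type not supported: ' + dtype_name)

print(gdal_type_name('uint8'))    # GDT_Byte
print(gdal_type_name('float64'))  # GDT_Float64
```

In write_data the string values would be replaced by the actual gdal.GDT_* constants.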

### 12.7.4 Historical Map (HM) filtering

import myfilters as mf
import rasterTool as rt

p= 10

im,geo,proj = rt.open_data("/tmp/decoup.tif")
print im.shape

# filter
im_fmax = mf.filterMulti(im,p=p,filter='max')
del im

im_fmin = mf.filterMulti(im_fmax,p=p,filter='min')
del im_fmax

im_fmedian = mf.filterMulti(im_fmin,p=p,filter='median')
del im_fmin

# save data
outname = "/tmp/toto_"+str(p) +".tif"
rt.write_data(outname,im_fmedian,geo,proj)

### 12.7.5 Parser

import myfilters as mf
import rasterTool as rt
import argparse

# Initialize the parser
parser = argparse.ArgumentParser()

# Add the arguments used below (the flag names are a suggestion; the
# destination names must match args.imageIn, args.imageOut and args.p)
parser.add_argument("-in",dest="imageIn",help="Image to be processed",type=str)
parser.add_argument("-out",dest="imageOut",help="Filtered image",type=str)
parser.add_argument("-p",dest="p",help="Size of the filter window",type=int,default=1)

args = parser.parse_args()

# Some tests
if args.imageIn is None:
    exit()

im,geo,proj = rt.open_data(args.imageIn)
print im.shape

# filter
im_fmax = mf.filterMulti(im,p=args.p,filter='max')
del im

im_fmin = mf.filterMulti(im_fmax,p=args.p,filter='min')
del im_fmax

im_fmedian = mf.filterMulti(im_fmin,p=args.p,filter='median')
del im_fmin

# save data
rt.write_data(args.imageOut,im_fmedian,geo,proj)
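The parser can be exercised without the command line by passing an explicit argument list to parse_args. The sketch below reuses the flags of the argparse example from the appendix (section 11.2.4); the simulated values are arbitrary:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("-in", dest="image", help="Image to be processed", type=str)
parser.add_argument("-p", help="Size of the template", type=int, default=1)

# Simulate: python script.py -in fabas.tif -p 3
args = parser.parse_args(["-in", "fabas.tif", "-p", "3"])
print(args.image)  # fabas.tif
print(args.p)      # 3
```

This is a convenient way to test the parsing logic before wiring it to the filtering functions.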

### 12.7.6 QGIS Script

from PyQt4.QtCore import *
from PyQt4.QtGui import *
from qgis.core import *
from qgis.gui import *

import scipy as sp
from osgeo import gdal
## FILTERS
def filterMax(imin,p=1):
"""This function applies the max filter on image imin with parameter p.

Input:
-----
imin = image to be processed, Scipy 2D-Array
p = window size = 2p+1. Should be strictly positive

Output:
------
imout = filtered image, Scipy 2D-Array, not necessarily of the same
type as imin, depending on the filter
"""
## Initialization
if imin.ndim != 2: # Check  the number of dimensions of the image:
# should be 2
return 0

H,W = imin.shape # Get the image shape

imout = sp.empty((H,W),dtype=imin.dtype) # Initialization of the output image
## Size of the window
if p<=0:
return 0

## Loop over the pixels
for i in range(p,H-p):
for j in range(p,W-p):
imout[i,j]=imin[(i-p):(i+p+1),(j-p):(j+p+1)].max()

## Fill border
imout[:,:p]=imin[:,:p]
imout[:,-p:]=imin[:,-p:]
imout[:p,:]=imin[:p,:]
imout[-p:,:]=imin[-p:,:]

return imout
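The core of filterMax is the window slice imin[(i-p):(i+p+1),(j-p):(j+p+1)], a (2p+1)x(2p+1) neighbourhood centred on pixel (i,j). A quick sanity check of this indexing on a toy array:

```python
import numpy as np

imin = np.arange(25).reshape(5, 5)  # values 0..24, row-major
p = 1                               # 3x3 window
i, j = 2, 2                         # centre pixel

window = imin[(i - p):(i + p + 1), (j - p):(j + p + 1)]
print(window.shape)  # (3, 3)
print(window[p, p])  # 12, the centre value imin[2, 2]
print(window.max())  # 18, the bottom-right neighbour
```

The same slice is reused unchanged in filterMin, filterMean and filterMedian; only the reduction applied to the window differs.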

def filterMin(imin,p=1):
"""This function applies the min filter on image imin with parameter p.

Input:
-----
imin = image to be processed, Scipy 2D-Array
p = window size = 2p+1. Should be strictly positive

Output:
------
imout = filtered image, Scipy 2D-Array, not necessarily of the same
type as imin, depending on the filter
"""
## Initialization
if imin.ndim != 2: # Check  the number of dimensions of the image:
# should be 2
return 0

H,W = imin.shape # Get the image shape

imout = sp.empty((H,W),dtype=imin.dtype) # Initialization of the output image
## Size of the window
if p<=0:
return 0

## Loop over the pixels
for i in range(p,H-p):
for j in range(p,W-p):
imout[i,j]=imin[(i-p):(i+p+1),(j-p):(j+p+1)].min()

## Fill border
imout[:,:p]=imin[:,:p]
imout[:,-p:]=imin[:,-p:]
imout[:p,:]=imin[:p,:]
imout[-p:,:]=imin[-p:,:]

return imout

def filterMean(imin,p=1):
"""This function applies the mean filter on image imin with parameter p.

Input:
-----
imin = image to be processed, Scipy 2D-Array
p = window size = 2p+1. Should be strictly positive

Output:
------
imout = filtered image, Scipy 2D-Array, not necessarily of the same
type as imin, depending on the filter
"""
## Initialization
if imin.ndim != 2: # Check  the number of dimensions of the image:
# should be 2
return 0

H,W = imin.shape # Get the image shape

imout = sp.empty((H,W),dtype=imin.dtype) # Initialization of the output image
## Size of the window
if p<=0:
return 0

## Loop over the pixels
for i in range(p,H-p):
for j in range(p,W-p):
imout[i,j]=imin[(i-p):(i+p+1),(j-p):(j+p+1)].mean()

## Fill border
imout[:,:p]=imin[:,:p]
imout[:,-p:]=imin[:,-p:]
imout[:p,:]=imin[:p,:]
imout[-p:,:]=imin[-p:,:]

return imout

def filterMedian(imin,p=1):
"""This function applies the median filter on image imin with parameter p.

Input:
-----
imin = image to be processed, Scipy 2D-Array
p = window size = 2p+1. Should be strictly positive

Output:
------
imout = filtered image, Scipy 2D-Array, not necessarily of the same
type as imin, depending on the filter
"""
## Initialization
if imin.ndim != 2: # Check  the number of dimensions of the image:
# should be 2
return 0

H,W = imin.shape # Get the image shape

imout = sp.empty((H,W),dtype=imin.dtype) # Initialization of the output image
## Size of the window
if p<=0:
return 0

## Loop over the pixels
for i in range(p,H-p):
for j in range(p,W-p):
imout[i,j]=sp.median(imin[(i-p):(i+p+1),(j-p):(j+p+1)])

## Fill border
imout[:,:p]=imin[:,:p]
imout[:,-p:]=imin[:,-p:]
imout[:p,:]=imin[:p,:]
imout[-p:,:]=imin[-p:,:]

return imout

def filterMulti(imin,p=1,filter='median'):
"""This function applies the selected filter on a data cube.

Input:
-----
imin = image to be processed, Scipy 3D-Array
p = window size = 2p+1. Should be strictly positive
filter = the kind of filter
Output:
------
imout = filtered image, Scipy 3D-Array, not necessarily of the same
type as imin, depending on the filter
"""
H,W,D = imin.shape # Get the image shape

imout = sp.empty((H,W,D),dtype=imin.dtype) # Initialization of the output image

# Select the filter
if filter == 'max':
function = filterMax
elif filter == 'min':
function = filterMin
elif filter == 'mean':
function = filterMean
else :
function = filterMedian

for d in range(D):
imout[:,:,d]=function(imin[:,:,d],p=p)

return imout

## RASTER TOOL
def open_data(filename):
'''
The function opens and loads the image given its filename.
The type of the data is checked from the file and the scipy array is initialized accordingly.
Input:
filename: the name of the file
Output:
im: the data cube
GeoTransform: the geotransform information
Projection: the projection information
'''
data = gdal.Open(filename, gdal.GA_ReadOnly)
if data is None:
print 'Impossible to open '+filename
exit()
nc = data.RasterXSize
nl = data.RasterYSize
d  = data.RasterCount

# Get the type of the data
gdal_dt = data.GetRasterBand(1).DataType
if gdal_dt == gdal.GDT_Byte:
dt = 'uint8'
elif gdal_dt == gdal.GDT_Int16:
dt = 'int16'
elif gdal_dt == gdal.GDT_UInt16:
dt = 'uint16'
elif gdal_dt == gdal.GDT_Int32:
dt = 'int32'
elif gdal_dt == gdal.GDT_UInt32:
dt = 'uint32'
elif gdal_dt == gdal.GDT_Float32:
dt = 'float32'
elif gdal_dt == gdal.GDT_Float64:
dt = 'float64'
elif gdal_dt == gdal.GDT_CInt16 or gdal_dt == gdal.GDT_CInt32 or gdal_dt == gdal.GDT_CFloat32 or gdal_dt == gdal.GDT_CFloat64 :
dt = 'complex64'
else:
print 'Data type unknown'
exit()

# Initialize the array and read the bands
if d == 1:
im = sp.empty((nl,nc),dtype=dt)
im[:,:] = data.GetRasterBand(1).ReadAsArray()
else:
im = sp.empty((nl,nc,d),dtype=dt)
for i in range(d):
im[:,:,i] = data.GetRasterBand(i+1).ReadAsArray()
GeoTransform = data.GetGeoTransform()
Projection = data.GetProjection()
data = None # Close the file
return im,GeoTransform,Projection

def write_data(outname,im,GeoTransform,Projection):
'''
The function writes the image to the hard drive.
Input:
outname: the name of the file to be written
im: the image cube
GeoTransform: the geotransform information
Projection: the projection information
Output:
Nothing --
'''
nl = im.shape[0]
nc = im.shape[1]
if im.ndim == 2:
d=1
else:
d = im.shape[2]

driver = gdal.GetDriverByName('GTiff')
dt = im.dtype.name
# Get the data type
if dt == 'bool' or dt == 'uint8':
gdal_dt=gdal.GDT_Byte
elif dt == 'int8' or dt == 'int16':
gdal_dt=gdal.GDT_Int16
elif dt == 'uint16':
gdal_dt=gdal.GDT_UInt16
elif dt == 'int32':
gdal_dt=gdal.GDT_Int32
elif dt == 'uint32':
gdal_dt=gdal.GDT_UInt32
elif dt == 'int64' or dt == 'uint64' or dt == 'float16' or dt == 'float32':
gdal_dt=gdal.GDT_Float32
elif dt == 'float64':
gdal_dt=gdal.GDT_Float64
elif dt == 'complex64':
gdal_dt=gdal.GDT_CFloat64
else:
print 'Data type not supported'
exit()

dst_ds = driver.Create(outname,nc,nl, d, gdal_dt)
dst_ds.SetGeoTransform(GeoTransform)
dst_ds.SetProjection(Projection)

if d==1:
out = dst_ds.GetRasterBand(1)
out.WriteArray(im)
out.FlushCache()
else:
for i in range(d):
out = dst_ds.GetRasterBand(i+1)
out.WriteArray(im[:,:,i])
out.FlushCache()
dst_ds = None # Close the file
out = None

## The core function
def run_script(iface,p,output_name):

## Get the filename of the data to be processed
layer = iface.activeLayer() # Get the current layer
name = layer.dataProvider().dataSourceUri() # Get the path of the layer

## Do the processing (copy/paste from previous function)
im,geo,proj = open_data(name)

# filter
im_fmax = filterMulti(im,p=p,filter='max')
del im

im_fmin = filterMulti(im_fmax,p=p,filter='min')
del im_fmax

im_fmedian = filterMulti(im_fmin,p=p,filter='median')
del im_fmin

# save data
write_data(output_name,im_fmedian,geo,proj)

raster_lyr = iface.addRasterLayer(output_name,'FilterImage')

# 13 TO DO

## 13.1 To complete

• [X] Write up how the lab sessions work
• [ ] Make a mind map
• [X] Presentation of the shell scripts
• [X] Rework the objectives in terms of competences rather than activities
• [X] Make the “test” map for the spatialization
• [ ] Have the students produce a map of Fabas for François.
• [X] Write the script for the classification and multitemporal results
• [100%] Add the band details in the Data sets section
• [X] SITS
• [X] Change detection
• [ ] Put the HM image online
• [X] Add the Python code for opening images, with the comments removed.
• [X] Write code to test the computation times of the loop traversals
• [ ] Redo the box plots per band rather than per class
• [ ] Write a syntax guide for Band Math
• Standard operations
• Comparisons, “if”, etc.


• [ ] Modify the #sec:classif:automatize part for the Python section, using the internal commands.

## 13.2 Sequences more or less ready

• [X] Opening and visualization of images
• [X] Image segmentation
• [X] Spectral indices
• [X] Building a time series
• [X] Change detection
• [X] Classification of remote sensing images
• [ ] Python: introduction to image processing
• [X] Historical Maps

## 13.3 Distance-learning sequences

• Segmentation of 2D histograms ?
• Classification of images for François.
• Influence of the spatial correlation for training/testing set

## 13.4 To modify

• [X] The Python script is a mess :(
• [X] Collect all the solutions for the scripts
• [X] Give the bands for Fabas and the other images
• [X] Modify the Quickbird images to remove the zero band at the end…
• [X] Add the spectral bands for SITS
• [X] Add batch processing in QGIS https://docs.qgis.org/2.14/fr/docs/user_manual/processing/batch.html?highlight=batch
• [ ] Distance-learning sequence for the Dynamic Habitat Index

## 13.5 Column view

| ITEM | FORMATION | DURATION | SEQUENCE TAG |
|------|-----------|----------|--------------|
| Introduction | Presential | 0:30 | 1 |
| Data sets | Presential | 0 | |
| Visualization of remote sensing data | Presential | 1:10 | 1 |
| Spectral indices: Normalized Difference Vegetation Index | Presential | 01:00 | 2 |
| Segmentation of remote sensing images | Presential | 2:00 | 2 |
| Change detection: Detection of floods | Presential | 01:40 | 3 |
| Classification of remote sensing images | Presential | 04:00 | 3-4 |
| Influence of the spatial distribution of the learning samples | Non Presential | 01:00 | 4 |
| Satellite Image Time Series | Presential | 04:00 | 4-5 |
| Extraction of the best couple of dates | Non Presential | 01:40 | |
| Dynamic Habitat Index | Presential | 04:00 | 5 |
| Template filters | Presential | 04:00 | |