---
title: "Variable preparation"
author: "Robert Schlegel"
date: "2019-07-25"
output: workflowr::wflow_html
editor_options:
  chunk_output_type: console
csl: FMars.csl
bibliography: MHWNWA.bib
---
```{r global_options, include = FALSE}
knitr::opts_chunk$set(fig.width = 8, fig.align = 'center',
                      echo = TRUE, warning = FALSE, message = FALSE,
                      eval = TRUE, tidy = FALSE)
```
## Introduction
This vignette walks through the steps needed to create mean 'whatever' states during all of the MHWs detected in the previous [SST preparation](https://robwschlegel.github.io/MHWNWA/sst-prep.html) vignette. These 'whatever' states are any of the abiotic variables that appear in the ocean heat budget equation and that have been deemed relevant to the forcing of extreme ocean surface temperatures.
After first going through this entire process with NAPA model output, it has become clear that this analysis should be run first with observation/reanalysis data, against which the NAPA model results may be compared as desired. To this end we will now use the following variables as found in the following data products:
- NOAA OISST: SST
- GLORYS: Mixed layer depth, U & V current vectors (1m)
- ERA 5: Latent & sensible heat flux, shortwave and longwave radiation, U & V wind vectors (10m), air temperature (2m)
All of these variables have a daily resolution from at least 1993 -- 2018. The spatial resolution varies from the coarse (1 degree) OAFlux data to the high-resolution (1/12 degree) GLORYS data. The OISST, ERA 5, and GLORYS (1993 -- 2015) data are natively at 1/4 degree, though the coordinates are slightly offset between OISST and ERA 5 + GLORYS. The ERA 5 data are also hourly, so will need to be binned into mean daily values. To balance efficiency against accuracy we will convert everything to the 1/4 degree grid of the OISST data. This will be accomplished with the newly published __`grainchanger`__ package.
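The hourly-to-daily binning for the ERA 5 data is handled inside `load_all_ERA5()` further below; as a rough sketch of the idea (the column names here are hypothetical), the hourly values are grouped by pixel and calendar day and then averaged:

```{r era5-daily-sketch, eval=FALSE}
# Sketch only: bin hourly values into mean daily values per pixel
# (hypothetical column names; the real conversion happens in load_all_ERA5())
library(dplyr)

hourly <- data.frame(lon = 0.125, lat = 40.125,
                     time = seq(as.POSIXct("1993-01-01 00:00", tz = "UTC"),
                                by = "hour", length.out = 48),
                     t2m = rep(c(10, 12), each = 24))

daily <- hourly %>%
  mutate(t = as.Date(time)) %>%
  group_by(lon, lat, t) %>%
  summarise(t2m = mean(t2m, na.rm = TRUE)) %>%
  ungroup()
```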
In order to compare the OISST dataset to the other 1/4 degree products, 0.125 will be added to the OISST latitude values and 0.125 subtracted from the longitude values. This is because the OISST coordinates give the centre of each pixel, whereas the other products reference the north-west (top left) corner.
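As a minimal sketch of that shift (assuming a data frame of OISST pixel-centre coordinates):

```{r align-grid-sketch, eval=FALSE}
# Sketch only: shift OISST pixel-centre coordinates to the
# north-west-corner convention used by the other products
library(dplyr)

oisst_coords <- data.frame(lon = c(-60.875, -60.625),
                           lat = c(40.125, 40.375))

oisst_shifted <- oisst_coords %>%
  mutate(lat = lat + 0.125,
         lon = lon - 0.125)
```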
```{r startup}
# Packages used in this vignette
library(jsonlite, lib.loc = "../R-packages/")
library(tidyverse) # Base suite of functions
library(tidync, lib.loc = "../R-packages/")
library(lubridate) # For convenient date manipulation
# Load functions required below
# source("code/functions.R")
# Set number of cores
doMC::registerDoMC(cores = 50)
# Disable scientific notation for numeric values
# I just find it annoying
options(scipen = 999)
# Corners of the study area
NWA_corners <- readRDS("data/NWA_corners.Rda")
# The region pixel lon/lat values
NWA_info <- readRDS("data/NWA_info.Rda")
# Load the OISST grid
OISST_grid <- readRDS("data/OISST_grid.Rda")
# Load the OISST grid trimmed to the region coords
OISST_grid_region <- readRDS("data/OISST_grid_region.Rda")
# Load MHW results
OISST_region_MHW <- readRDS("data/OISST_region_MHW.Rda")
# MHW Events
MHW_event <- OISST_region_MHW %>%
  select(-cats) %>%
  unnest(events) %>%
  filter(row_number() %% 2 == 0) %>%
  unnest(events)
```
For the upcoming variable prep we are also going to want the OISST coordinates that fall within our chosen study area, as defined by `NWA_corners`.
<!-- We will also create a subsetted bathy coord file, too, so as to have an effective land mask. This will help us to reduce a lot of computational cost as we go along because many of the pixels over land are given 0 values, rather than missing values, which is a strange choice... -->
```{r OISST-grid-sub}
# The OISST coordinates for the study area only
OISST_grid_study <- OISST_grid %>%
  filter(lon >= NWA_corners[1], lon <= NWA_corners[2],
         lat >= NWA_corners[3], lat <= NWA_corners[4])
# saveRDS(OISST_grid_study, "data/OISST_grid_study.Rda")
```
## Variable datasets
Rather than performing all of the calculations below on piecemeal bits of data, it should be possible to load the entire dataset for each variable into memory at once. In this section we will create functions that load each variable from the different data products. We will then create and save these complete datasets for ease of use later on.
```{r variable-datasets, eval=FALSE}
# Load the functions used below
# I've chosen to house a lot of code here to keep this vignette tidier
source("code/functions.R")
## SST
# The files with data in the study area
OISST_sst_files <- data.frame(files = dir("../../data/OISST", full.names = T),
                              lon = c(seq(0.125, 179.875, by = 0.25), seq(-179.875, -0.125, by = 0.25))) %>%
  filter(lon >= NWA_corners[1]+0.125, lon <= NWA_corners[2]+0.125) %>%
  mutate(files = as.character(files))
OISST_sst_files <- as.vector(OISST_sst_files$files)
# Aggregate data
system.time(
  OISST_sst <- load_all_OISST(OISST_sst_files)
) # xxx seconds
# save
saveRDS(OISST_sst, "data/OISST_sst.Rda")
## Air temperature at 2 metres
# Grab file location
ERA5_t2m_files <- dir("../../oliver/data/ERA/ERA5/T2M", full.names = T, pattern = "ERA5")[15:40]
# Aggregate data
system.time(
  ERA5_t2m <- load_all_ERA5(ERA5_t2m_files)
) # 2548 seconds
ERA5_t2m$t2m <- round(ERA5_t2m$t2m, 2)
# save
saveRDS(ERA5_t2m, "data/ERA5_t2m.Rda")
```
### Net heat flux
This is the one variable I was not able to source directly. The OAFlux product does have a net heat flux layer, but it ends in 2009, which is not long enough. We will therefore create our own net heat flux layer by adding together the latent & sensible heat flux layers and the short & longwave radiation layers, all from the ERA 5 product. When doing so we must ensure that the signs of the terms are matched correctly (i.e. that they all represent positive downward flux).
```{r net-heat-flux, eval=FALSE}
# The code used to perform these steps may be seen in code/workflow.R
```
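As a hedged sketch of that summation (the column names follow the ERA5 short names `slhf`, `sshf`, `ssr`, and `str` for the latent, sensible, shortwave, and longwave terms; the values below are made up, and the sign convention of each term in a given download should be verified before summing):

```{r qnet-sketch, eval=FALSE}
# Sketch only: net heat flux as the sum of the four ERA5 terms
# (invented values; units W m^-2, all terms positive downward)
library(dplyr)

flux <- data.frame(lon = -60.0, lat = 40.0, t = as.Date("1993-01-01"),
                   slhf = -120, sshf = -30, ssr = 200, str = -60)

flux_net <- flux %>%
  mutate(qnet = slhf + sshf + ssr + str)
```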
## Variable climatologies
The data packets we create for the SOMs (see below) must include anomaly values. To calculate these we first need daily climatology values for each variable at each pixel, which means loading all of the files and then working pixel-wise through the seasonal (daily) climatologies. This will be done with the same function (`ts2clm()`) that is used for the MHW climatologies. One-by-one we will load an entire dataset (created above) into memory so that we can perform the necessary calculations. Hold onto your hats, this is going to be RAM heavy...
```{r variable-climatologies, eval=FALSE}
# Load functions required below
source("code/functions.R")
## SST
# Load
OISST_sst <- readRDS("data/OISST_sst.Rda")
# Calculate clims
system.time(
  OISST_sst_clim <- OISST_sst %>%
    group_by(lon, lat) %>%
    nest() %>%
    mutate(clims = map(data, ts2clm,
                       climatologyPeriod = c("1993-01-01", "2018-12-31"),
                       clmOnly = T, roundClm = FALSE)) %>%
    select(-data) %>%
    unnest()
) # 1139 seconds
OISST_sst_clim$thresh <- NULL
# Save
saveRDS(OISST_sst_clim, "data/OISST_sst_clim.Rda")
# The code used to create the other climatologies may be found in code/workflow.R
```
## Variable anomalies
The last step before we can begin creating our data packets is to subtract the climatology data from the base data for each of the variables used in this study. We will then save these large anomaly cubes to allow for easy synoptic state creation.
```{r variable-anomalies, eval=FALSE}
## SST
# Load
OISST_sst <- readRDS("data/OISST_sst.Rda") %>%
  mutate(doy = format(t, "%m-%d"))
OISST_sst_clim <- readRDS("data/OISST_sst_clim.Rda") %>%
  left_join(doy_index, by = c("doy" = "doy_int")) %>%
  select(-doy) %>%
  dplyr::rename(doy = doy.y)
# Join and calculate anomalies
system.time(
  OISST_sst_anom <- left_join(OISST_sst, OISST_sst_clim, by = c("lon", "lat", "doy")) %>%
    mutate(sst_anom = round(temp - seas, 2)) %>%
    select(lon, lat, t, sst_anom)
) # 43 seconds
# Save
saveRDS(OISST_sst_anom, "data/OISST_sst_anom.Rda")
## Air temperature
system.time(
  ERA5_t2m_anom <- anom_one(ERA5_t2m, ERA5_t2m_clim, "t2m_anom", 2)
) # 85 seconds
saveRDS(ERA5_t2m_anom, "data/ERA5_t2m_anom.Rda")
# The code used to create the rest of the anomalies may be found in code/workflow.R
```
## Synoptic states
With the variables chosen from a few different data products, the next step is to create mean synoptic states for each variable during each of the MHWs detected in each region. To make that process go more smoothly we will first create time (date) and space (lon/lat) indexes for each data product/layer. Unfortunately this means that from here on the code in this vignette will only run on the server I am working from. The data output of this vignette will however be publicly available [here](https://github.com/robwschlegel/MHWNWA/tree/master/data), insofar as GitHub's data hosting limits allow. For much larger datasets please contact me directly and we can make a plan.
### Variable extractor
We need a list of the dates present in each file so that we can load only the NetCDF files required to extract our desired variables. The dates we want span the range of each of the MHWs detected in the [SST preparation](https://robwschlegel.github.io/MHWNWA/sst-prep.html) vignette. In the chunk below we create a function that decides which files should have their variables loaded, and a function that binds everything up into tidy data packets that our SOM can ingest.
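As a hedged sketch of what such a file/date index might look like (the file naming pattern here is entirely hypothetical; the real `NAPA_files_dates` index is built from the actual files on the server):

```{r file-date-index-sketch, eval=FALSE}
# Sketch only: build a file/date index by parsing dates out of file names
# (hypothetical file names; not the actual NAPA naming scheme)
files <- c("NAPA_grid_T_19930101.nc", "NAPA_grid_T_19930106.nc")

NAPA_files_dates <- data.frame(
  file = files,
  date = as.Date(sub(".*_(\\d{8})\\.nc$", "\\1", files), format = "%Y%m%d"),
  stringsAsFactors = FALSE
)
```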
```{r extractor-funcs}
# Function for extracting the desired variables from a given NetCDF file
# testers...
# file_name <- NAPA_files[1]
# extract_all_var <- function(file_name){
#
# # Extract and join variables with a for loop
# # NB: This should be optimised...
# NAPA_vars_extracted <- data.frame()
# system.time(
# for(i in 1:length(NAPA_vars$name)){
# extract_one <- extract_one_var(NAPA_vars$name[i], file_name = file_name)
# if(nrow(NAPA_vars_extracted) == 0){
# NAPA_vars_extracted <- rbind(extract_one, NAPA_vars_extracted)
# } else {
# NAPA_vars_extracted <- left_join(NAPA_vars_extracted, extract_one,
# by = c("lon_index", "lat_index", "lon", "lat", "bathy", "t"))
# }
# }
# ) # 18 seconds for one
# NAPA_vars_extracted <- dplyr::select(NAPA_vars_extracted,
# lon_index, lat_index, lon, lat, t, bathy, everything())
#
# # Exit
# return(NAPA_vars_extracted)
# }
# Function for extracting variables from as many files as a MHW event requires
# testers...
# event_sub <- NAPA_MHW_event[23,]
data_packet <- function(event_sub){
  # Create date and file index for loading
  date_idx <- seq(event_sub$date_start, event_sub$date_end, by = "day")
  file_idx <- filter(NAPA_files_dates, date %in% date_idx) %>%
    mutate(file = as.character(file)) %>%
    select(file) %>%
    unique()
  # Load required base data
  # system.time(
  packet_base <- plyr::ldply(file_idx$file, extract_all_var) %>%
    filter(t %in% date_idx) %>%
    mutate(doy = format(t, "%m-%d"))
  # ) # 125 seconds for seven files
  # Join to climatologies
  packet_join <- left_join(packet_base, NAPA_clim_vars, by = c("lon", "lat", "doy"))
  # Create anomaly values and remove clim columns
  packet_anom <- packet_join %>%
    mutate(emp_oce_anom = emp_oce - emp_oce_clim,
           fmmflx_anom = fmmflx - fmmflx_clim,
           mldkz5_anom = mldkz5 - mldkz5_clim,
           mldr10_1_anom = mldr10_1 - mldr10_1_clim,
           qemp_oce_anom = qemp_oce - qemp_oce_clim,
           qns_anom = qns - qns_clim,
           qt_anom = qt - qt_clim,
           ssh_anom = ssh - ssh_clim,
           sss_anom = sss - sss_clim,
           sst_anom = sst - sst_clim,
           taum_anom = taum - taum_clim) %>%
    dplyr::select(lon, lat, doy, emp_oce:taum_anom,
                  -c(colnames(NAPA_clim_vars)[-c(1:3)]))
    # dplyr::select(-c(colnames(packet_base)[-c(3,4,ncol(packet_base))]),
    #               -c(colnames(NAPA_clim_vars)[-c(1:3)]))
  # Create mean synoptic values
  packet_mean <- packet_anom %>%
    select(-doy) %>%
    # NB: The lowest pixels are a forcing edge and shouldn't be included
    # We can catch these out by filtering pixels whose SST is exactly 0
    filter(sst != 0) %>%
    group_by(lon, lat) %>%
    summarise_all(mean, na.rm = T) %>%
    arrange(lon, lat) %>%
    ungroup() %>%
    nest(.key = "synoptic")
  # Combine results with MHW dataframe
  packet_res <- cbind(event_sub, packet_mean)
  # Test visuals
  # ggplot(packet_mean, aes(x = lon, y = lat)) +
  #   geom_point(aes(colour = sst_anom)) +
  #   scale_colour_gradient2(low = "blue", high = "red") +
  #   coord_cartesian(xlim = NWA_corners[1:2],
  #                   ylim = NWA_corners[3:4])
  # Exit
  return(packet_res)
}
```
With our functions sorted, it is now time to create our data packets.
```{r synoptic-states, eval=FALSE}
# Set number of cores
# NB: Was set to 25 as someone else was using the server at the time
doMC::registerDoMC(cores = 25)
# Create one big packet
# system.time(
synoptic_states <- plyr::ddply(NAPA_MHW_event, c("region", "sub_region", "event_no"), data_packet, .parallel = T)
# ) # 82 seconds for the first 2; 6125 seconds (102 minutes) for all
# Save
# saveRDS(synoptic_states, "data/synoptic_states.Rda")
```
With all of our synoptic snapshots for our chosen variables created, it is now time to feed them to the [Self-organising map (SOM) analysis](https://robwschlegel.github.io/MHWNWA/som.html).