Lab 05 - Data Wrangling

# install.packages(c("data.table", "dtplyr", "dplyr", "leaflet", "tidyverse", "mgcv"))
library(data.table)
library(dtplyr)
library(dplyr)
library(leaflet)
library(tidyverse)
library(ggplot2)
library(mgcv)

Learning goals

  • Use the merge() function to join two datasets.
  • Deal with missing values and impute data (see the toy sketch after this list).
  • Identify relevant observations using quantile().
  • Practice your GitHub skills.
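
As a quick warm-up, here is a minimal toy sketch of the first three goals on made-up data (the names obs, meta, and toy are hypothetical, not part of the lab): join two tables with merge(), mean-impute a missing value, and extract a quantile with quantile().

# Toy tables (hypothetical, for illustration only)
obs  <- data.frame(id = c(1, 2, 3), temp = c(20.1, NA, 27.4))
meta <- data.frame(id = c(1, 2, 3), state = c("CA", "TX", "FL"))

# Join the two tables on the shared key
toy <- merge(x = obs, y = meta, by = "id", all.x = TRUE)

# Impute the missing temperature with the mean of the observed values
toy$temp[is.na(toy$temp)] <- mean(toy$temp, na.rm = TRUE)

# The median (0.5 quantile) temperature
quantile(toy$temp, probs = 0.5)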

Lab description

For this lab we will be dealing with the meteorological dataset met. In this case, we will use data.table to answer some questions regarding the met dataset, while at the same time practicing your Git+GitHub skills for this project.

This markdown document should be rendered using the github_document output format.
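
If your copy of the template is missing it, the YAML header of README.Rmd would look roughly like this (a sketch; adjust the title to your own):

---
title: "Lab 05 - Data Wrangling"
output: github_document
---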

Part 1: Setup a Git project and the GitHub repository

  1. Go to wherever you are planning to store the data on your computer, and create a folder for this project

  2. In that folder, save this template as "README.Rmd". This will be the markdown file where all the magic will happen.

  3. Go to your GitHub account and create a new repository of the same name that your local folder has, e.g., "JSC370-labs".

  4. Initialize the Git project, add the "README.Rmd" file, and make your first commit.

  5. Add the repo you just created on GitHub.com to the list of remotes, and push your commit to origin while setting the upstream.

Most of the steps can be done using the command line:

# Step 1
cd ~/Documents
mkdir JSC370-labs
cd JSC370-labs

# Step 2
wget https://raw.githubusercontent.com/JSC370/jsc370-2023/main/labs/lab05/lab05-wrangling-gam.Rmd
mv lab05-wrangling-gam.Rmd README.Rmd
# if wget is not available,
curl https://raw.githubusercontent.com/JSC370/jsc370-2023/main/labs/lab05/lab05-wrangling-gam.Rmd --output README.Rmd

# Step 3
# Happens on github

# Step 4
git init
git add README.Rmd
git commit -m "First commit"

# Step 5
git remote add origin git@github.com:[username]/JSC370-labs
git push -u origin master

You can also complete the steps in R (replace with your paths/username when needed):

# Step 1
setwd("~/Documents")
dir.create("JSC370-labs")
setwd("JSC370-labs")

# Step 2
download.file(
  "https://raw.githubusercontent.com/JSC370/jsc370-2023/main/labs/lab05/lab05-wrangling-gam.Rmd",
  destfile = "README.Rmd"
  )

# Step 3: Happens on Github

# Step 4
system("git init && git add README.Rmd")
system('git commit -m "First commit"')

# Step 5
system("git remote add origin git@github.com:[username]/JSC370-labs")
system("git push -u origin master")

Once you are done setting up the project, you can now start working with the MET data.

Setup in R

  1. Load the data.table package (and also dtplyr and dplyr if you plan to work with those).

  2. Load the met data from https://github.com/JSC370/jsc370-2023/blob/main/labs/lab03/met_all.gz (or use https://raw.githubusercontent.com/JSC370/jsc370-2023/main/labs/lab03/met_all.gz to download it programmatically), and also the station data. For the latter, you can use the code we used during lecture to pre-process the stations data:

fn <- "https://raw.githubusercontent.com/JSC370/jsc370-2023/main/labs/lab03/met_all.gz"
if (!file.exists("met_all.gz")) {
  download.file(fn, destfile = "met_all.gz")
}

met <- data.table::fread("met_all.gz")
# Download the stations data
stations <- fread("ftp://ftp.ncdc.noaa.gov/pub/data/noaa/isd-history.csv")
stations[, USAF := as.integer(USAF)]

# Dealing with NAs and 999999
stations[, USAF   := fifelse(USAF == 999999, NA_integer_, USAF)]
stations[, CTRY   := fifelse(CTRY == "", NA_character_, CTRY)]
stations[, STATE  := fifelse(STATE == "", NA_character_, STATE)]

# Selecting the three relevant columns, and keeping unique records
stations <- unique(stations[, list(USAF, CTRY, STATE)])

# Dropping NAs
stations <- stations[!is.na(USAF)]

# Removing duplicates
stations[, n := 1:.N, by = .(USAF)]
stations <- stations[n == 1,][, n := NULL]

  3. Merge the data as we did during the lecture.

met <- merge(
  x = met, y = stations,
  by.x = "USAFID", by.y = "USAF",
  all.x = TRUE, all.y = FALSE
)

# Tidyverse equivalent:
# met <- left_join(met, stations, by = c("USAFID" = "USAF"))
met_lz <- lazy_dt(met, immutable = FALSE)

Question 1: Representative station for the US

Across all weather stations, what is the median station in terms of temperature, wind speed, and atmospheric pressure? Look for the three weather stations that best represent the continental US using the quantile() function. Do these three coincide?

Knit the document, commit your changes, and save it on GitHub. Don't forget to add README.md to the tree the first time you render it.
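
For reference, the commit could be done from R with system(), following the same pattern as the setup above (the commit message is just an example):

system("git add README.Rmd README.md")
system('git commit -m "Render lab 05"')
system("git push")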

# Average for each station
met_avg_lz <- met_lz |>
  group_by(USAFID) |>
  summarise(
    across(
      c(temp, wind.sp, atm.press),
      function(x) mean(x, na.rm = TRUE)
    )
  )
met_avg_lz
# Median for each station
met_med_lz <- met_avg_lz |>
  summarise(
    across(
      c(temp, wind.sp, atm.press),
      function(x) quantile(x, probs = 0.5, na.rm = TRUE)
    )
  )

met_med_lz
# Temperature
temp_us_id <- met_avg_lz |>
  mutate(temp_diff = abs(temp - (met_med_lz |> pull(temp)))) |>
  arrange(temp_diff) |>
  slice(1) |>
  pull(USAFID)

# Wind speed
wind_us_id <- met_avg_lz |>
  mutate(wind_diff = abs(wind.sp - (met_med_lz |> pull(wind.sp)))) |>
  arrange(wind_diff) |>
  slice(1) |>
  pull(USAFID)

# Atmospheric pressure
atm_us_id <- met_avg_lz |>
  mutate(atm_diff = abs(atm.press - (met_med_lz |> pull(atm.press)))) |>
  arrange(atm_diff) |>
  slice(1) |>
  pull(USAFID)

cat("ID with median ...",
    "\n    temperature: ", temp_us_id,
    "\n    wind: ", wind_us_id,
    "\n    atm: ", atm_us_id)
met_lz |>
  select(USAFID, lon, lat) %>% 
  distinct() %>% 
  filter(USAFID %in% c(temp_us_id, wind_us_id, atm_us_id))

These three stations do not coincide; we get three different stations. The station that represents the median temperature is 720458, the median wind speed is 720929, and the median atmospheric pressure is 722238.

Question 2: Representative station per state

Just like the previous question, you are asked to identify the most representative (median) station per state. This time, instead of looking at one variable at a time, look at the Euclidean distance. If multiple stations tie at the median, select the one located at the lowest latitude.
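
Concretely, for each station s, using its state's medians (subscript med), the distance minimized below is

$$d_s = \sqrt{(\text{temp}_s - \text{temp}_{med})^2 + (\text{wind.sp}_s - \text{wind.sp}_{med})^2 + (\text{atm.press}_s - \text{atm.press}_{med})^2}$$

where each term compares a station's mean to its state's median.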

# Mean for each station
met_avg_state_lz <- met |>
  group_by(USAFID) |>
  summarise(
    temp = mean(temp, na.rm = TRUE),
    wind.sp = mean(wind.sp, na.rm = TRUE),
    atm.press = mean(atm.press, na.rm = TRUE),
    STATE = unique(STATE)[1],
    lat = unique(lat)[1],   # unique() has no na.rm argument; take the first value
    lon = unique(lon)[1]
  )
met_avg_state_lz <- na.omit(met_avg_state_lz)
met_avg_state_lz
# Median per state
met_med_avg_state_lz <- met_avg_state_lz |>
  group_by(STATE) |>
  summarise(across(
    c(temp, wind.sp, atm.press),
    function(x) quantile(x, probs = 0.5, na.rm = TRUE)
  ))

met_means_med_state <- merge(
  x = met_avg_state_lz,
  y = met_med_avg_state_lz,
  by = "STATE",
  all.x = TRUE,
  all.y = TRUE
)
# Euclidean distance between station means (.x) and state medians (.y)
udist <- function(temp1, temp2, wind1, wind2, atm1, atm2) {
  sqrt((temp1 - temp2)^2 + (wind1 - wind2)^2 + (atm1 - atm2)^2)
}

met_means_med_state$euclid <- udist(
  met_means_med_state$temp.x,      met_means_med_state$temp.y,
  met_means_med_state$wind.sp.x,   met_means_med_state$wind.sp.y,
  met_means_med_state$atm.press.x, met_means_med_state$atm.press.y
)

met_means_med_state_answer <- met_means_med_state |>
  group_by(STATE) |>
  slice_min(euclid) |>   # station closest to the state medians
  slice_min(lat)         # tie-break: lowest latitude
met_means_med_state_answer

# Sanity check: one representative station per state
nrow(met_means_med_state_answer)
length(unique(met$STATE))

Knit the doc and save it on GitHub.

We have checked that the result has the same number of rows as there are unique states in the data.

Question 3: In the middle?

For each state, identify the station that is closest to the midpoint of the state. Combining these with the stations you identified in the previous question, use leaflet() to visualize all ~100 points in the same figure, applying different colors for those identified in this question.

# State midpoints: mean lon/lat across all records in each state
met_lon_lat_avg <- met_lz |>
  group_by(STATE) |>
  summarise(lon = mean(lon, na.rm = TRUE),
            lat = mean(lat, na.rm = TRUE)) |>
  as.data.frame()
met_lon_lat_avg$category <- "mean"   # these points are means (midpoints)

# Representative (median) stations from Question 2
met_means_med_state_answer_filter <- met_means_med_state_answer |>
  select(STATE, lat, lon)
met_means_med_state_answer_filter$category <- "median"

lat_lon_answer <- rbind(met_means_med_state_answer_filter, met_lon_lat_avg)
lat_lon_answer

pal <- colorFactor(c("navy", "red"), domain = c("mean", "median"))
leaflet(lat_lon_answer) |>
  addProviderTiles("OpenStreetMap") |>
  addCircleMarkers(lat = ~lat, lng = ~lon, opacity = 0.7, radius = 5,
                   color = ~pal(category))
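
The code above plots the state midpoints themselves; a minimal sketch of actually picking the station nearest each midpoint (assuming met_avg_state_lz from Question 2 still holds one row per station with STATE, lat, and lon; met_closest_mid is a hypothetical name) could be:

# Hypothetical sketch: station nearest each state's midpoint
met_closest_mid <- merge(met_avg_state_lz, met_lon_lat_avg,
                         by = "STATE", suffixes = c("", ".mid")) |>
  group_by(STATE) |>
  # planar lon/lat distance; fine for ranking points within one state
  mutate(mid_dist = sqrt((lat - lat.mid)^2 + (lon - lon.mid)^2)) |>
  slice_min(mid_dist) |>
  select(STATE, USAFID, lat, lon)
met_closest_mid

These points could then be given their own category before the rbind() above.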

Knit the doc and save it on GitHub.

Question 4: Means of means

Using the quantile() function, generate a summary table that shows the number of states included, average temperature, wind-speed, and atmospheric pressure by the variable "average temperature level," which you'll need to create.

Start by computing the states' average temperature. Use that measurement to classify them according to the following criteria:

  • low: temp < 20
  • Mid: temp >= 20 and temp < 25
  • High: temp >= 25
met_avg_state_lz_q4 <- met |>
  group_by(STATE) |>
  summarise(
    number_of_na = sum(is.na(temp)),
    temp = mean(temp, na.rm = TRUE),
    wind.sp = mean(wind.sp, na.rm = TRUE),
    atm.press = mean(atm.press, na.rm = TRUE),
    number_of_records = n(),                 # rows (entries) in the state
    number_of_station = n_distinct(USAFID)   # distinct stations in the state
  )
met_avg_state_lz_q4

met_avg_state_lz_q4 <- met_avg_state_lz_q4 |>
  mutate("average temperature level" = case_when(
    temp >= 25 ~ "High",
    temp < 20 ~ "Low",
    temp >= 20 & temp < 25 ~ "Mid"
  ))
met_avg_state_lz_q4
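
Equivalently, the level variable could be created with a single cut() call (a sketch; level2 is a hypothetical column name):

# Same three levels via cut(); right = FALSE gives [20, 25)-style intervals
met_avg_state_lz_q4$level2 <- cut(
  met_avg_state_lz_q4$temp,
  breaks = c(-Inf, 20, 25, Inf),
  labels = c("Low", "Mid", "High"),
  right = FALSE
)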

Once you are done with that, you can compute the following:

  • Number of entries (records),
  • Number of NA entries,
  • Number of stations,
  • Number of states included, and
  • Mean temperature, wind-speed, and atmospheric pressure.

All by the levels described before.

met_avg_state_lz_q4_answer <- met_avg_state_lz_q4 |>
  group_by(`average temperature level`) |>
  summarise(
    num_entries = sum(number_of_records),
    num_NA = sum(number_of_na),
    num_stations = sum(number_of_station),
    num_states = n(),
    mean_temp = mean(temp, na.rm = TRUE),
    mean_wind.sp = mean(wind.sp, na.rm = TRUE),
    mean_atm.press = mean(atm.press, na.rm = TRUE)
  )
met_avg_state_lz_q4_answer
met_avg_state_lz_q4_answer

Knit the document, commit your changes, and push them to GitHub.

Question 5: Advanced Regression

Let's practice running regression models with smooth functions on X. We need the mgcv package and its gam() function to do this.

  • Using your data with the median values per station, examine the association between median temperature (y) and median wind speed (x). Create a scatterplot of the two variables using ggplot2. Add both a linear regression line and a smooth line.

met_med_lz_q5 <- met_lz |>
  group_by(USAFID) |>
  summarise(
    temp = median(temp, na.rm = TRUE),       # medians per station, as asked
    wind.sp = median(wind.sp, na.rm = TRUE)
  )

# Collect the lazy table into a plain data frame for plotting and modeling
met_med_q5 <- met_med_lz_q5 |>
  as.data.frame() |>
  filter(!is.na(wind.sp) & !is.na(temp))

met_med_q5 |>
  ggplot(aes(x = wind.sp, y = temp)) +
  geom_point() +
  geom_smooth(method = "lm", formula = y ~ x, color = "navy") +
  geom_smooth()

  • Fit both a linear model and a spline model (use gam() with a cubic regression spline on wind speed). Summarize and plot the results from the models and interpret which model is the best fit and why.

model1 <- lm(temp ~ wind.sp, data = met_med_q5)
summary(model1)
plot(model1)

model2 <- gam(temp ~ s(wind.sp, bs = "cr", k = 20), data = met_med_q5)
summary(model2)
plot(model2)

Both models are statistically significant at the 0.05 level: the wind-speed coefficient in the linear model and the smooth term in the spline model each have very small p-values. The extremely small p-value on the smooth term suggests, though does not guarantee, that the spline fits better than the linear model. I think the spline model is the better fit, as its adjusted R-squared is higher.
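
A further check, assuming both models were fit on the same rows, is to compare information criteria; the model with lower AIC balances fit and complexity better:

# Lower AIC indicates the preferred model
AIC(model1, model2)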
