# Basic time series analysis and prediction
Dexin Sun
```{r, include=FALSE}
knitr::opts_chunk$set(echo = TRUE, warning = FALSE, message = FALSE)
```
This is a tutorial on time series analysis and prediction. It covers the process of building a time series model and how to carry out the analysis.
## Motivation
Time series analysis studies the relationship between a variable and time. We usually write $Y_t$ for the value of the series at time $t$. The main purpose of time series analysis is to make predictions based on historical data, and this tutorial focuses on building a time series model and using it for prediction.
## Data input
We use the built-in R dataset `sunspot.month`, which contains the monthly numbers of sunspots, as recorded by the World Data Center.
```{r}
library(tseries)
library(itsmr)
library(forecast)
data("sunspot.month")
```
## Time series plot
First, we can use `plot.ts()` to draw the time series plot, which is a direct way to show the relationship between a variable and time.
Most time series can be decomposed into three components: a stationary series, a trend, and seasonal variation. A series is stationary when its joint probability distributions do not change with time, or at least its mean and variance do not change with time, meaning that the values fluctuate around a constant level and the range of fluctuation is bounded.
For example, in the plot below we can see clear seasonality, while the trend is not obvious. It is the stationary component that we can choose a model to fit.
The autocorrelation function (ACF) gives the correlation coefficient between $Y_t$ and the lag-$k$ value $Y_{t-k}$, and the partial autocorrelation function (PACF) gives the correlation between $Y_t$ and $Y_{t-k}$ after removing the influence of the intermediate values $Y_{t-1}, Y_{t-2}, \dots$. ACF and PACF plots help assess whether the series is stationary. Both are shown below:
```{r}
plot.ts(sunspot.month)
acf(sunspot.month)
pacf(sunspot.month)
```
To obtain a stationary series, we need to remove the trend and seasonal variation. There are two common approaches: smoothing and differencing. Smoothing estimates the trend and seasonality as functions of time, which can be hard to specify. Differencing is the approach this tutorial focuses on.
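As a quick illustration of the smoothing approach (not used in the rest of this tutorial), base R's `stl()` can estimate the seasonal and trend components by loess smoothing; a minimal sketch:

```{r}
# Decompose the series into seasonal, trend, and remainder components by loess
decomp <- stl(sunspot.month, s.window = "periodic")
plot(decomp)
# The remainder component approximates the stationary part of the series
remainder <- decomp$time.series[, "remainder"]
```

Subtracting the estimated trend and seasonal components from the original series leaves the remainder, which plays the role of the stationary series here.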
## Differencing
We can use `adf.test()` (the augmented Dickey-Fuller test) to check whether the series needs differencing, and `ndiffs()` to suggest the differencing order.
```{r}
adf.test(sunspot.month)
ndiffs(sunspot.month)
```
```{r}
newST <- diff(sunspot.month,1)
plot.ts(newST)
acf(newST)
pacf(newST)
```
## Stationary series model
There are three common models for stationary series:

- AR(p): autoregression. The variable of interest is regressed on its own lagged values.
- MA(q): moving average. The regression error is a linear combination of error terms occurring contemporaneously and at various times in the past.
- ARMA(p,q): autoregressive moving average, combining the AR and MA parts.

Applied to a differenced series, the model becomes ARIMA(p,d,q), where:

- I: integrated. The data values have been replaced with the differences between consecutive values.
- p: the order (number of time lags) of the autoregressive part
- d: the degree of differencing
- q: the order of the moving-average part
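In equation form, with $\epsilon_t$ denoting white noise, $\phi_i$ the autoregressive coefficients, and $\theta_j$ the moving-average coefficients, the three models can be written as:

$$\text{AR}(p):\quad Y_t = c + \sum_{i=1}^{p} \phi_i Y_{t-i} + \epsilon_t$$

$$\text{MA}(q):\quad Y_t = \mu + \epsilon_t + \sum_{j=1}^{q} \theta_j \epsilon_{t-j}$$

$$\text{ARMA}(p,q):\quad Y_t = c + \sum_{i=1}^{p} \phi_i Y_{t-i} + \epsilon_t + \sum_{j=1}^{q} \theta_j \epsilon_{t-j}$$

ARIMA(p,d,q) is simply ARMA(p,q) applied to the series after differencing it $d$ times.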
From the figures above, the differenced series appears stationary, and its autocorrelations drop quickly to 0 after lag 1, which further supports stationarity. Based on the ACF and PACF plots, ARIMA(5,1,2) is a reasonable starting point; from this starting parameterization we can make small adjustments to find the best model.
```{r}
arima(x=sunspot.month,order=c(5,1,2),method="ML")
```
```{r}
arima(x=sunspot.month,order=c(4,1,2),method="ML")
arima(x=sunspot.month,order=c(3,1,2),method="ML")
arima(x=sunspot.month,order=c(2,1,2),method="ML")
arima(x=sunspot.month,order=c(1,1,2),method="ML")
```
Comparing the AIC values above, ARIMA(2,1,2) has the smallest AIC, so it is the best-fitting model among the candidates.
Alternatively, `auto.arima()` can be used to find the best-fitting model, which is an easier method:
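The manual comparison above can also be scripted compactly; a sketch that collects the AIC for each candidate AR order, holding d = 1 and q = 2 fixed:

```{r}
# Fit ARIMA(p,1,2) for p = 1..5 and collect the AIC of each fit
aic_values <- sapply(1:5, function(p) {
  AIC(arima(sunspot.month, order = c(p, 1, 2), method = "ML"))
})
names(aic_values) <- paste0("ARIMA(", 1:5, ",1,2)")
aic_values
which.min(aic_values)  # position of the smallest AIC
```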
```{r}
auto.arima(sunspot.month)
```
## Testing
```{r}
TSmodel <- auto.arima(sunspot.month)
shapiro.test(TSmodel$residuals)
Box.test(TSmodel$residuals)
```
Model checking includes two parts: the significance test of the parameters, and tests of normality and independence of the residuals. We can use `shapiro.test()` to check the normality of the residuals and `Box.test()` to check their randomness. Since the p-value of the normality test is smaller than 0.05, the residuals are not normally distributed.
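The forecast package also provides `checkresiduals()`, which combines a residual time plot, an ACF plot, a histogram, and a Ljung-Box test in a single call; a quick sketch:

```{r}
# One-stop residual diagnostics for the fitted model
checkresiduals(TSmodel)
```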
## Prediction
Since a first-order difference was taken earlier, the differencing must be inverted to obtain predictions on the original scale; `forecast()` handles this automatically and can be used to produce the forecasts.
```{r}
pre <- forecast(TSmodel,50)
plot(pre)
```
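To gauge forecast quality, a common sanity check (not part of the original workflow) is to hold out the last few years, refit on the remaining data, and compare the forecasts against the held-out observations; a sketch using `window()` and `accuracy()`, assuming a 60-month hold-out:

```{r}
# Split off the last 60 months as a test set, keeping the ts time attributes
n          <- length(sunspot.month)
split_time <- time(sunspot.month)[n - 60]
train      <- window(sunspot.month, end = split_time)
test       <- window(sunspot.month, start = split_time + deltat(sunspot.month))

# Refit on the training window and forecast over the hold-out horizon
fit <- auto.arima(train)
fc  <- forecast(fit, h = length(test))
accuracy(fc, test)  # error measures on both training and test sets
```

Lower test-set errors (e.g. RMSE, MAE) indicate that the model generalizes beyond the data it was fit on.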
## Reference
https://en.wikipedia.org/wiki/Autoregressive_integrated_moving_average#Definition