AR model
Expresses the current value of a time series as a linear combination of its past values and a random error term.
For an AR model of order p (AR(p)):
* Xt = ϕ1Xt-1 + ϕ2Xt-2 + … + ϕpXt-p + ϵt
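The AR(p) recursion above can be simulated directly. A minimal pure-Python sketch; the coefficients ϕ1 = 0.6, ϕ2 = −0.2 and the helper name `simulate_ar` are illustrative choices, not from the notes:

```python
import random

# Simulate X_t = phi1*X_{t-1} + ... + phip*X_{t-p} + eps_t with eps_t ~ N(0, 1).
# phis[0] multiplies the most recent value.
def simulate_ar(phis, n, seed=0):
    rng = random.Random(seed)
    p = len(phis)
    x = [0.0] * p  # zero start-up values act as a crude burn-in
    for _ in range(n):
        eps = rng.gauss(0, 1)
        x.append(sum(phi * x[-i - 1] for i, phi in enumerate(phis)) + eps)
    return x[p:]

# Illustrative stationary AR(2): phi1 = 0.6, phi2 = -0.2
series = simulate_ar([0.6, -0.2], 500)
```

Because the chosen coefficients satisfy the stationarity conditions, the simulated series fluctuates around zero with bounded variance rather than drifting away.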
MA model
Relates the current value of a time series to past forecast errors (residuals).
For an MA model of order q (MA(q)):
* Xt = μ + θ1ϵt-1 + θ2ϵt-2 + … + θqϵt-q + ϵt
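For MA(1), the definition implies a lag-1 autocorrelation of θ1 / (1 + θ1²) and zero autocorrelation beyond lag 1, which a quick simulation can confirm. A sketch with illustrative values μ = 0, θ1 = 0.5 (helper names are hypothetical):

```python
import random

# Simulate X_t = mu + eps_t + theta1*eps_{t-1}; theory gives
# corr(X_t, X_{t-1}) = theta1 / (1 + theta1**2) and zero beyond lag 1.
def simulate_ma1(mu, theta1, n, seed=1):
    rng = random.Random(seed)
    eps_prev = rng.gauss(0, 1)
    out = []
    for _ in range(n):
        eps = rng.gauss(0, 1)
        out.append(mu + eps + theta1 * eps_prev)
        eps_prev = eps
    return out

def sample_acf(x, k):
    # Sample autocorrelation at lag k
    m = sum(x) / len(x)
    num = sum((x[t] - m) * (x[t - k] - m) for t in range(k, len(x)))
    den = sum((v - m) ** 2 for v in x)
    return num / den

x = simulate_ma1(0.0, 0.5, 20000)
r1 = sample_acf(x, 1)  # theory: 0.5 / (1 + 0.25) = 0.4
```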
ARMA(p,q)
Combines AR(p) and MA(q) components
* Xt = ϕ1Xt-1 + ϕ2Xt-2 + … + ϕpXt-p + θ1ϵt-1 + θ2ϵt-2 + … + θqϵt-q + ϵt
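Combining the two recursions, a zero-mean ARMA(1,1) simulator is only a few lines. A sketch with illustrative parameter values:

```python
import random

# Simulate X_t = phi1*X_{t-1} + theta1*eps_{t-1} + eps_t (zero-mean ARMA(1,1))
def simulate_arma11(phi1, theta1, n, seed=2):
    rng = random.Random(seed)
    x_prev, eps_prev = 0.0, 0.0
    out = []
    for _ in range(n):
        eps = rng.gauss(0, 1)
        x = phi1 * x_prev + theta1 * eps_prev + eps
        out.append(x)
        x_prev, eps_prev = x, eps
    return out

series = simulate_arma11(0.5, 0.4, 300)
```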
Behaviour based on ϕ1
* For AR(1): ϕ1 > 0 gives positively correlated, smoothly wandering values with an ACF that decays exponentially; ϕ1 < 0 gives values that alternate in sign with an ACF that decays while oscillating; the closer |ϕ1| is to 1, the slower the decay
Behaviour based on θ1
* For MA(1): the lag-1 autocorrelation is θ1 / (1 + θ1²), so it takes the sign of θ1; autocorrelations beyond lag 1 are zero
AR Stationarity Conditions
An AR(p) process is stationary when all roots of the characteristic equation 1 − ϕ1z − … − ϕpz^p = 0 lie outside the unit circle
* For AR(1) this reduces to |ϕ1| < 1
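For AR(2) the condition can be checked by solving the quadratic characteristic equation explicitly. A sketch (the helper name and test values are illustrative):

```python
import cmath

# Stationarity check for AR(2): the roots of 1 - phi1*z - phi2*z**2 = 0
# (equivalently phi2*z**2 + phi1*z - 1 = 0) must all satisfy |z| > 1.
def ar2_is_stationary(phi1, phi2):
    if phi2 == 0:  # degenerates to AR(1): single root z = 1/phi1
        return abs(phi1) < 1
    disc = cmath.sqrt(phi1 ** 2 + 4 * phi2)  # complex sqrt handles negative discriminant
    roots = [(-phi1 + disc) / (2 * phi2), (-phi1 - disc) / (2 * phi2)]
    return all(abs(z) > 1 for z in roots)
```

For example, ϕ1 = 0.6, ϕ2 = −0.2 is stationary (complex roots of modulus √5), while ϕ1 = 0.5, ϕ2 = 0.6 is not (one root falls inside the unit circle).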
Patterns for model identification
* AR(p): ACF tails off gradually; PACF cuts off after lag p
* MA(q): ACF cuts off after lag q; PACF tails off gradually
* ARMA(p,q): both ACF and PACF tail off gradually
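The PACF used in these identification patterns can be computed from the autocorrelations via the Durbin-Levinson recursion. A sketch assuming the autocorrelations ρ0 = 1, ρ1, … have already been estimated (the function name is illustrative):

```python
# Durbin-Levinson recursion: partial autocorrelations phi_kk from
# autocorrelations rho = [1, rho_1, rho_2, ...]
def pacf_from_acf(rho):
    K = len(rho) - 1
    phi = [[0.0] * (K + 1) for _ in range(K + 1)]
    phi[1][1] = rho[1]
    out = [rho[1]]
    for k in range(2, K + 1):
        num = rho[k] - sum(phi[k - 1][j] * rho[k - j] for j in range(1, k))
        den = 1.0 - sum(phi[k - 1][j] * rho[j] for j in range(1, k))
        phi[k][k] = num / den
        for j in range(1, k):
            phi[k][j] = phi[k - 1][j] - phi[k][k] * phi[k - 1][k - j]
        out.append(phi[k][k])
    return out

# For AR(1) with phi1 = 0.5, rho_k = 0.5**k, so the PACF cuts off after lag 1
p = pacf_from_acf([1.0, 0.5, 0.25, 0.125])  # -> [0.5, 0.0, 0.0]
```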
MA Stationarity Conditions
Always stationary, because an MA(q) process is a finite linear combination of white-noise terms (a finite number of past errors)
Box-Ljung Test
Tests whether the residual autocorrelations up to a chosen lag are jointly statistically significant
* Null Hypothesis H0: Residuals are uncorrelated (white noise)
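The test statistic is Q = n(n+2) Σ_{k=1..h} ρ̂k² / (n−k), compared against a χ²(h) distribution. A pure-Python sketch; note the closed-form χ² survival function shown only covers even degrees of freedom, and the white-noise sample is illustrative:

```python
import math
import random

# Q = n(n+2) * sum_{k=1..h} rho_k^2 / (n - k); under H0 (white noise),
# Q is approximately chi-square with h degrees of freedom
def ljung_box_q(x, h):
    n = len(x)
    m = sum(x) / n
    den = sum((v - m) ** 2 for v in x)
    q = 0.0
    for k in range(1, h + 1):
        rho = sum((x[t] - m) * (x[t - k] - m) for t in range(k, n)) / den
        q += rho * rho / (n - k)
    return n * (n + 2) * q

def chi2_sf(x, df):
    # Chi-square survival function P(X > x); closed form valid for even df only
    assert df % 2 == 0
    return math.exp(-x / 2) * sum((x / 2) ** i / math.factorial(i) for i in range(df // 2))

rng = random.Random(42)
white = [rng.gauss(0, 1) for _ in range(1000)]
p_value = chi2_sf(ljung_box_q(white, 10), 10)  # p-value for H0: series is white noise
```

A small p-value leads to rejecting H0, i.e. the residuals still contain autocorrelation and the model should be revised.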
Box-Jenkins Approach
An iterative three-stage modelling cycle:
* Identification: choose candidate orders (p, q) from the ACF/PACF patterns
* Estimation: fit the candidate model, typically by maximum likelihood
* Diagnostic checking: verify the residuals are white noise (e.g. with the Box-Ljung test); if not, revise the model
Akaike Information Criterion (AIC)
Measures the relative quality of a statistical model for a given dataset by trading off goodness of fit ( -2ln(L), where L is the maximized likelihood ) against the number of estimated parameters k ( penalty 2k )
* AIC = -2ln(L) + 2k
* Lower AIC is better
* Useful when comparing models with the same data but different numbers of parameters
Bayesian Information Criterion (BIC)
Similar to AIC but with a stronger penalty for model complexity (number of parameters)
* BIC = -2ln(L) + kln(n), where n is the sample size
* Lower BIC is better; since ln(n) > 2 once n ≥ 8, BIC favours simpler models than AIC
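Both criteria follow directly from these formulas. A sketch computing them from a Gaussian log-likelihood (helper names are illustrative):

```python
import math

# Maximized Gaussian log-likelihood ln(L) for residuals, with the error
# variance estimated as the mean squared residual
def gaussian_loglik(resid):
    n = len(resid)
    s2 = sum(r * r for r in resid) / n
    return -0.5 * n * (math.log(2 * math.pi * s2) + 1)

def aic(loglik, k):
    return -2 * loglik + 2 * k

def bic(loglik, k, n):
    return -2 * loglik + k * math.log(n)
```

Comparing candidate ARMA orders then amounts to fitting each model, computing both criteria from its residuals, and keeping the model with the lowest value.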