Taylor Flashcards

(22 cards)

1
Q

EDF general form

A

ln(pi(y; theta, phi)) = (y * theta - b(theta)) / alpha(phi) + c(y, phi)
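As a sanity check on this form, here is a minimal Python sketch (an illustration, not from the paper) showing that the Poisson with mean mu fits the EDF template with theta = ln(mu), b(theta) = exp(theta), alpha(phi) = 1, and c(y, phi) = -ln(y!):

```python
import math

# Hypothetical numbers, chosen only for illustration.
mu, y = 3.0, 5

# EDF form with the Poisson ingredients:
theta = math.log(mu)                 # canonical parameter
b = math.exp(theta)                  # cumulant function b(theta)
c = -math.lgamma(y + 1)              # c(y, phi) = -ln(y!)
log_pi_edf = (y * theta - b) / 1.0 + c   # alpha(phi) = 1 for Poisson

# Direct Poisson log-density: y*ln(mu) - mu - ln(y!)
log_pi_direct = y * math.log(mu) - mu - math.lgamma(y + 1)

print(abs(log_pi_edf - log_pi_direct) < 1e-12)  # the two forms agree
```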

2
Q

y in EDF

A

the observed value of the response variable Y

3
Q

theta in EDF

A

location parameter; canonical parameter

4
Q

phi in EDF

A

dispersion parameter; scale parameter

5
Q

b(theta) in EDF

A

cumulant function, which determines the shape of the distribution
E[Y] = b'(theta), the derivative of b evaluated at theta

6
Q

exp(c(y, phi))

A

normalizing factor producing a unit total mass for the distribution

7
Q

Var(Y) in EDF

A

Var(Y) = alpha(phi) * V(mu) = phi * mu^p
- in general, alpha(phi) = phi
- V(mu) is called the variance function; usually V(mu) = mu^p
So the variance is a function of the dispersion parameter and the mean.

8
Q

Tweedie Sub-Family general (formula to solve for theta)

A

restrict the variance function:
V(mu) = mu^p, p <= 0 or p >= 1
where mu = [(1 - p) * theta] ^ (1 / (1 - p))
- when p = 1, mu = exp(theta), which gives the Poisson

9
Q

Tweedie Sub-Family p | Distribution | Variance

A

p | Distribution | Var(Y)
0 | Normal | phi
1 | ODP | phi * mu
2 | Gamma | phi * mu^2
3 | Inverse Gaussian | phi * mu^3
(1, 2) | Compound Poisson with Gamma severity | phi * mu^p

10
Q

Selections to be made for a GLM

A
  • cumulant function, controlling the model’s assumed error distribution
  • index p, controlling the relationship between the model’s mean and variance
  • covariates xi^T (the variables that explain mui)
  • link function, controlling the relationship between the mean mui and its covariates (usually the log link)
11
Q

Fully specify the ODP Mack model as a GLM

A

From the cumulative loss triangle:
Y = observation matrix of the individual development factors minus 1 (Fkj - 1)
w = weight matrix of cumulative losses (for the 12-24 period, use the 12-month cumulative loss)
X = design matrix of dummy variables for the development periods
Beta = vector of fj - 1

Yhat or mu = h^-1(X * Beta)
h = identity function
Fkj - 1 ~ ODP[fj - 1, phij / Xkj]
Output of the GLM is the best-fit parameter estimates of the fj - 1
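A small numeric illustration (hypothetical triangle, not from the paper) of why this GLM reproduces the chain ladder: the MLE factor under the weighted ODP setup is the Xkj-weighted average of the individual factors Fkj, which equals the conventional all-year volume-weighted CL factor:

```python
# Hypothetical cumulative loss triangle (rows = AYs, cols = dev ages);
# None marks the unobserved lower-right cells.
cum = [
    [1000, 1800, 2200, 2400],
    [1100, 2000, 2500, None],
    [1200, 2100, None, None],
    [1300, None, None, None],
]
n = len(cum)
factors = []
for j in range(n - 1):
    rows = [k for k in range(n) if cum[k][j + 1] is not None]
    # GLM/MLE form: X_kj-weighted average of individual factors F_kj
    w = [cum[k][j] for k in rows]
    F = [cum[k][j + 1] / cum[k][j] for k in rows]
    f_mle = sum(wi * Fi for wi, Fi in zip(w, F)) / sum(w)
    # conventional all-year volume-weighted CL factor
    f_cl = sum(cum[k][j + 1] for k in rows) / sum(cum[k][j] for k in rows)
    assert abs(f_mle - f_cl) < 1e-9   # the two coincide
    factors.append(f_cl)
print(factors)
```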

12
Q

Scaled Deviance (measure of Goodness of Fit)

A

D*(Y, Yhat) = 2 * sum[ log-likelihood of saturated model - log-likelihood of our GLM ]

The saturated model perfectly predicts every observed value, so its deviance is zero; the goal is to find the GLM parameters that minimize the scaled deviance (equivalently, maximize the likelihood)
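A minimal sketch for the ODP case, where the per-cell contribution to the scaled deviance reduces to 2 * [y * ln(y / yhat) - (y - yhat)] / phi (the observations below are hypothetical):

```python
import math

def odp_scaled_deviance(y, yhat, phi=1.0):
    # 2 * sum(loglik of saturated model - loglik of fitted model),
    # which for the ODP reduces to 2*sum[y*ln(y/yhat) - (y - yhat)] / phi
    total = 0.0
    for yi, mi in zip(y, yhat):
        term = (yi * math.log(yi / mi) if yi > 0 else 0.0) - (yi - mi)
        total += 2.0 * term
    return total / phi

y = [120.0, 95.0, 80.0]
yhat = [110.0, 100.0, 82.0]
print(odp_scaled_deviance(y, yhat))   # positive for an imperfect fit
print(odp_scaled_deviance(y, y))      # saturated model: deviance is 0
```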

13
Q

estimating phi with deviance

A

phihat = D(Y, Yhat) / (n - p), where D is the unscaled deviance (D = phi * D*), n = number of observations, and p = number of parameters

14
Q

Standardized Pearson Residual -> Std Deviance Residuals

A

RiP = (Yi - Yhati) / sigmahati

RiD = sgn(Yi - Yhati) * sqrt(di / phihat)

sgn is the sign function, returning -1, 0, or +1 when its argument is negative, zero, or positive, respectively

deviance residuals are useful because they are closer to normally distributed than Pearson residuals
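Both residual types can be sketched for the ODP case, where sigmahati = sqrt(phi * Yhati) and the unit deviance is di = 2 * [Yi * ln(Yi / Yhati) - (Yi - Yhati)] (hypothetical numbers):

```python
import math

def odp_residuals(y, yhat, phi=1.0):
    """Standardized Pearson and deviance residuals under an ODP model."""
    pearson, deviance = [], []
    for yi, mi in zip(y, yhat):
        # Pearson: (y - yhat) / sigma, with sigma = sqrt(phi * yhat)
        pearson.append((yi - mi) / math.sqrt(phi * mi))
        # unit deviance d_i, always >= 0
        di = 2.0 * ((yi * math.log(yi / mi) if yi > 0 else 0.0) - (yi - mi))
        # sgn(y - yhat) * sqrt(d_i / phi), via copysign
        deviance.append(math.copysign(math.sqrt(di / phi), yi - mi))
    return pearson, deviance

rp, rd = odp_residuals([120.0, 95.0], [110.0, 100.0])
print(rp)  # signs match y - yhat
print(rd)
```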

15
Q

Nonparametric Mack assumptions in Taylor paper

A

M1: AYs are stochastically independent
M2: For each AYk, the cumulative losses Xkj form a Markov chain
M3a: E[Xkj+1 | Xkj] = fj * Xkj for some parameter fj > 0
M3b: Var(Xkj+1 | Xkj) = sigmaj^2 * Xkj for some parameter sigmaj > 0

16
Q

Parametric Mack Model assumptions

A

M1-M3a are the same as the non-parametric assumptions:
- M1: AYs are stochastically independent
- M2: For each AY k, the cumulative losses Xkj form a Markov chain
- M3a: E[Xkj+1 | Xkj] = fj * Xkj for some parameter fj > 0
M3b: the variance assumption instead follows one of the
EDF, Tweedie, or ODP distributions

17
Q

Theorem 3.1

A

Given a full triangle of data, under the EDF Mack model:
- If M3b holds, then the maximum likelihood estimators of the fj are the conventional CL estimators, and they are unbiased
- In the special case of the ODP Mack model where the dispersion parameters phi kj depend only on the column j (not on the row k), the conventional CL estimators are minimum variance unbiased estimators (MVUEs); the cumulative loss estimates and reserve estimates are also MVUEs

18
Q

Theorem 3.2

A

Given the EDF cross-classified model assumptions plus the ODP cross-classified restrictions:
- Ykj is restricted to the ODP distribution
- the dispersion parameters phi kj are identical for all cells (one phi value)
Then the MLE fitted values and forecasts Yhat kj are the same as those given by the conventional CL method

19
Q

Theorem 3.3

A

In general, the MLEs Yhat kj will not be unbiased
However, if the ODP cross-classified model assumptions apply (Theorem 3.2) AND the fitted values and forecasts are corrected for bias, then they are MVUEs of Ykj and Rk

20
Q

Fully specify the ODP Cross-Classified model as a GLM

A

From the incremental loss triangle:
Y = incremental loss matrix
X = design matrix of dummy variables; one set for the AYs (alphas) and one for the dev periods (betas)
Beta = parameter vector of ln(alphak) (AY effects) followed by ln(betaj) (dev-period effects)

Yhat or mukj = exp(ln(alphak) + ln(betaj)) = alphak * betaj
h = log link function
Ykj ~ ODP(mukj, phi)

21
Q

Describe two plots for GLM model validation

A

Residual plot of standardized Pearson residuals vs. development age
- Residuals should be random around zero (unbiased) and have a similar variance from left to right (homoscedasticity).
Histogram of standardized deviance residuals
- Deviance residuals should be normally distributed. This plot can also show outliers that we may want to address.

22
Q

Calculate the parameter estimates for an ODP Cross-Classified model

A

Start with an incremental loss triangle (ex: 4x4)
alpha1 = sum across the oldest AY's row (assumed fully developed)
beta48 = sum down the 48-month incremental column / alpha1
alpha2 = sum across AY2's row / (1 - beta48)
beta36 = sum down the 36-month incremental column / (alpha1 + alpha2)
alpha3 = sum across AY3's row / (1 - beta48 - beta36)
beta24 = sum down the 24-month incremental column / (alpha1 + alpha2 + alpha3)
alpha4 = sum across AY4's row / (1 - beta48 - beta36 - beta24)
beta12 = sum down the 12-month incremental column / (alpha1 + alpha2 + alpha3 + alpha4)
Under this normalization the betas sum to 1.
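The recursion above can be sketched in Python; the 4x4 triangle below is hypothetical, built so the recovered parameters are easy to verify:

```python
# Hypothetical 4x4 incremental triangle (rows = AY1..AY4, cols = dev
# periods 12/24/36/48); None marks unobserved future cells.
tri = [
    [400, 300, 200, 100],
    [480, 360, 240, None],
    [560, 420, None, None],
    [640, None, None, None],
]

n = len(tri)
alpha = [0.0] * n   # alpha[k]: AY level parameters
beta = [0.0] * n    # beta[j]: dev-period shares (beta[0]=12mo ... beta[3]=48mo)

for step in range(n):
    k, j = step, n - 1 - step          # solve one AY row and one dev column per step
    # alpha for AY k+1 = (observed row total) / (1 - betas already solved)
    row_sum = sum(v for v in tri[k] if v is not None)
    alpha[k] = row_sum / (1.0 - sum(beta[j + 1:]))
    # beta for column j = (observed column total) / (alphas solved so far)
    col_sum = sum(tri[r][j] for r in range(k + 1))
    beta[j] = col_sum / sum(alpha[:k + 1])

print(alpha)   # AY parameters
print(beta)    # dev-period shares; they sum to 1
```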