AAA #1 Question: What basic principles should be present in a risk classification?
Answer:
• Reflects expected cost differences – among classes, other factors, etc.
• Distinguishes among risks on the basis of relevant cost-related factors – must relate to losses
• Applied objectively – understandable rules
• Practical and cost-effective – cannot be too costly or too difficult to use
• Acceptable to the public – the public must feel it is fair
AAA #2 Question: Compare a Government insurance program to a private insurance program.
Answer:
Similarities: Pooling of risks. Pools should be large enough to guarantee reasonable predictability of total losses.
Differences:
• Government is provided by law; private is provided by contract.
• Government is usually compulsory; private is voluntary.
• Government does not need to be self-supporting; private must support itself.
AAA *** List the: Three Primary Purposes of Risk Classification
AAA—Risk Classification Ratemaking
1. PROTECT THE INSURANCE SYSTEM’S FINANCIAL SOUNDNESS
Risk classification is the primary means to control adverse selection.
2. BE FAIR
Risk classification should produce prices reflective of expected costs.
3. Permit ECONOMIC INCENTIVES to operate and thus ENCOURAGE widespread COVERAGE AVAILABILITY
A proper class system will allow an insurer to write and better serve both higher- and lower-cost risks.
AAA *** List the “Program Design Elements” (3) and how they relate to risk classification
[PEC]
1. DEGREE OF BUYER CHOICE:
Compulsory programs ~ Broad classification
Voluntary programs ~ Refined classification
AAA ** Four Differences Between: Public vs. Private Insurance Programs
AAA ** Operational Considerations in Classification Ratemaking (7)
MAMA ACE
AAA *** Considerations in Designing a Risk Classification System (9)
AAA *** Five Basic Principles of a Sound Risk Classification System
AAA * 3 Mechanisms for Coping with Risk
AAA * 3 Means of Establishing a Fair Price
FAIR PRICING METHODS:
Bailey & Simon: *** 3 Major Conclusions on the Actuarial Credibility of a Single Auto
• Adding a 2nd year increased credibility by roughly 2/5.
• Adding a 3rd year increased credibility by another 1/6.
Bailey & Simon: ** Four reasons Multi-year Credibility does not Grow Linearly from 1-yr Credibility
Multi-Year Credibilities are Not Linear because:
Bailey & Simon: ** The Credibility of PPA Experience Rating depends on ______ and ______
Experience Rating Credibility Depends on:
Bailey & Simon: ** Why is EPPR used instead of earned exposures as the FREQUENCY BASE for calculating credibility of a single PPA?
According to Hazam, what conditions must be met?
Use EPPR as a base to AVOID THE MALDISTRIBUTION that results when higher-claim-frequency territories produce more X, Y, and B risks and also produce higher territory premiums. Basically, to avoid overlap between territory rating and experience rating.
To use EPPR as a base for eliminating maldistribution [Hazam], two conditions must be met:
1. High-frequency territories must also be higher-premium territories.
2. Territory differentials must be proper.
Alternative: apply the Bailey–Simon method to loss costs instead of claim frequencies.
Bailey & Simon: *** Single PPA Credibility: define Credibility (Z), Modification (Mod), R, and m
Mod = Relative Freq = Z×R + (1 − Z)
    = (# claims class / EPPR class) / (# claims total / EPPR total)
Z = 1 − Mod, for a class that is n-yr claim-free (1+, 2+, 3+), since R = 0 for claim-free risks
Z = (Mod − 1) / (R − 1), for a group of risks WITH claim experience (0, 1, 2)
R = Actual Freq / E[Freq]; R = 0 for accident-free risks
  = [1 − e^(−m)]^(−1) if frequency is Poisson distributed
E[Freq] is usually the class AVG frequency
m = (# claims total) / (EE total)
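These formulas can be sketched in code. A minimal Python sketch with hypothetical claim counts and exposure figures (the data and function names are mine, not Bailey & Simon's):

```python
import math

def modification(claims_class, eppr_class, claims_tot, eppr_tot):
    """Mod = relative frequency: (class claims per EPPR) / (total claims per EPPR)."""
    return (claims_class / eppr_class) / (claims_tot / eppr_tot)

def credibility_claim_free(mod):
    """Claim-free classes have R = 0, so Mod = Z*0 + (1 - Z)  =>  Z = 1 - Mod."""
    return 1.0 - mod

def credibility_with_claims(mod, m):
    """Groups WITH claims: R = [1 - e^(-m)]^(-1) under a Poisson frequency
    assumption, and Z = (Mod - 1) / (R - 1)."""
    r = 1.0 / (1.0 - math.exp(-m))
    return (mod - 1.0) / (r - 1.0)

# Hypothetical totals: overall frequency m = total claims / total earned exposures
m = 10_000 / 100_000  # 0.10

# A claim-free class whose frequency per EPPR is below the overall average
mod = modification(900, 10_000, 10_000, 95_000)   # relative frequency ~0.855
z = credibility_claim_free(mod)                   # Z = 1 - Mod ~0.145
print(f"Mod = {mod:.3f}, Z = {z:.3f}")
```

Under the Poisson assumption R exceeds 1, so for a with-claims group a Mod above 1 yields a small positive Z via the second formula.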
Bailey & Simon:
How to determine which class has more stability:
**Stability Across Time:** Examine (n-yr Cred / 1-yr Cred) for each class. The more linear the multi-year credibility, the MORE STABLE (ratio = n), i.e. the class with the RATIO CLOSEST TO n is the MOST STABLE. Logic: If an insured’s chance of an accident remained constant from year to year, with no risks entering or leaving, then credibility should vary in proportion to the number of years.
**Stability Within a Class:** Examine (n-yr Cred / freq per EE<sub>total</sub>) for each class. The LOWEST RATIO indicates the MOST STABLE individual risks: lowest variation within its HG, or the most narrowly defined/most homogeneous class. Logic: If the variation of individual insureds’ chances of an accident were the same within each class, credibility should vary in proportion to the average claim frequency. *Recall: there are 5 classes shown; each class is divided into experience groups A, X, Y, B.*
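The two ratios can be illustrated numerically. A small Python sketch using hypothetical credibilities and frequencies (not the paper's published table):

```python
classes = {
    # class: {"z1": 1-yr cred, "zn": n-yr cred, "freq": freq per EE} (made up)
    "A": {"z1": 0.05, "zn": 0.14, "freq": 0.080},
    "B": {"z1": 0.07, "zn": 0.20, "freq": 0.110},
}
n = 3  # number of years underlying the multi-year credibility

# Stability across time: the ratio closest to n marks the most stable class.
time_ratios = {c: v["zn"] / v["z1"] for c, v in classes.items()}
most_stable_time = min(time_ratios, key=lambda c: abs(time_ratios[c] - n))

# Stability within a class: the lowest cred-to-frequency ratio marks the
# most homogeneous class.
freq_ratios = {c: v["zn"] / v["freq"] for c, v in classes.items()}
most_stable_within = min(freq_ratios, key=freq_ratios.get)

print(most_stable_time, most_stable_within)
```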
Mahler 1 *
Credibility & Shifting Risk Parameters
Background for the paper’s analysis
Background:
Past Experience used to predict the future
New Estimate = Data*(z) + (Prior Est)*(1-z)
z = Credibility, Prior Est = Class AVG
Parameters may shift over time, posing the question: how should we combine different years of historical data? The paper suggests:
• May want to vary the weights given to the prior estimate
Mahler 1 ***
Criteria for Evaluating Credibility Weighting
Schemes (3)
for weighting past experience against expected future
experience
3 Criteria for evaluating Credibility weighting:
Mahler 1 *
What is the MAXIMUM REDUCTION in MSE
that can be attained by using credibility to
combine two estimates?
Lowest MSE possible = 0.75 × Min[MSE(Z=1), MSE(Z=0)]
Optimal Credibility (Z) = 1 − (MSE(Z=1) / MSE(Z=0))
Mahler 1 **
Mahler 1 - Study of Credibility
Mahler’s Various Findings:
Mahler 1 ***
3 Tests to see if parameters shift over time
(Three tests Mahler uses to examine the winning percentages of the teams)
What does each test reveal?
1. Test if all teams win the same amount. Metric: do team winning percentages fall within a 95% confidence interval around the grand mean?
2. Test for shift over time (same as Chi-Squared). Metric:
Mahler 1 (mine)
Exponential Smoothing
X[j+1] = Z × Y[j] + (1 − Z) × X[j]
X[j+1] = next year’s estimate
Y[j] = prior year’s actual result
X[j] = prior year’s estimate
X[0] could be the grand mean
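A minimal Python sketch of this recursion; the winning percentages and the choice Z = 0.2 below are hypothetical:

```python
def smooth(actuals, x0, z):
    """Return the sequence of estimates X[1]..X[n] given actuals Y[0]..Y[n-1],
    via X[j+1] = z * Y[j] + (1 - z) * X[j]."""
    estimates = []
    x = x0
    for y in actuals:
        x = z * y + (1 - z) * x   # blend the latest result with the prior estimate
        estimates.append(x)
    return estimates

grand_mean = 0.500                     # X[0]: start from the grand mean
winning_pcts = [0.600, 0.550, 0.650]   # Y[j]: observed results by year
print(smooth(winning_pcts, grand_mean, z=0.2))
```

Each estimate moves only a fraction Z toward the newest observation, so older data is down-weighted geometrically rather than dropped.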
Anderson ***
Failings of one-way analysis (2)
Failings of One-way Analysis:
1. Can be DISTORTED BY CORRELATIONS between rating factors
Youth are more concentrated in some territories: territory and age are correlated; one-way analysis overlooks this
2. Does NOT CONSIDER INTERDEPENDENCIES between factors (aka interactions)
Youth+High Performance Car = extra risky driver, but Elderly+High Performance Car = extra careful driver; impact of high performance vehicle changes across, or “interacts” with age
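The first failing can be shown with a toy example. In the hypothetical data below, cost depends only on age, yet the one-way territory averages differ because young drivers cluster in the urban territory (data and names are mine, purely illustrative):

```python
def one_way_avg(records, key):
    """Average cost by each level of a single factor, ignoring all others."""
    totals, counts = {}, {}
    for rec in records:
        level = rec[key]
        totals[level] = totals.get(level, 0.0) + rec["cost"]
        counts[level] = counts.get(level, 0) + 1
    return {lvl: totals[lvl] / counts[lvl] for lvl in totals}

# True cost depends ONLY on age (young = 200, old = 100).
records = [
    {"age": "young", "terr": "urban", "cost": 200.0},
    {"age": "young", "terr": "urban", "cost": 200.0},
    {"age": "young", "terr": "rural", "cost": 200.0},
    {"age": "old",   "terr": "urban", "cost": 100.0},
    {"age": "old",   "terr": "rural", "cost": 100.0},
    {"age": "old",   "terr": "rural", "cost": 100.0},
]
print(one_way_avg(records, "terr"))  # urban ~166.7, rural ~133.3
```

A one-way territory analysis would build territory relativities from those averages even though territory has no effect here; the apparent difference is entirely the age mix leaking through the correlation.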
Anderson ***
Failings of Minimum Bias (2)
Failings of Minimum Bias:
1. Lack of a statistical framework
2. Iterative calculations are considered computationally inefficient
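For context on the "iterative calculations": a minimal sketch of a two-factor multiplicative minimum bias fit via the balance principle. The data, weights, and iteration count are hypothetical, and this is one common form of the method, not Anderson's exact presentation:

```python
def min_bias(losses, weights, iters=100):
    """Fit multiplicative factors x[i] * y[j] to cell loss costs by
    alternately re-solving each set of factors with the other held fixed."""
    n_rows, n_cols = len(losses), len(losses[0])
    x = [1.0] * n_rows   # row factors (e.g. age classes)
    y = [1.0] * n_cols   # column factors (e.g. territories)
    for _ in range(iters):
        for i in range(n_rows):   # balance each row: sum of fitted = sum of actual
            x[i] = (sum(weights[i][j] * losses[i][j] for j in range(n_cols))
                    / sum(weights[i][j] * y[j] for j in range(n_cols)))
        for j in range(n_cols):   # then balance each column
            y[j] = (sum(weights[i][j] * losses[i][j] for i in range(n_rows))
                    / sum(weights[i][j] * x[i] for i in range(n_rows)))
    return x, y

losses = [[100.0, 150.0], [200.0, 290.0]]   # hypothetical cell loss costs
weights = [[1.0, 1.0], [1.0, 1.0]]          # hypothetical exposures
x, y = min_bias(losses, weights)
```

Even this 2×2 case needs repeated passes over every cell, which is the computational-inefficiency complaint; a GLM reaches an equivalent multiplicative fit within a statistical framework that also supplies diagnostics.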