Untitled Deck Flashcards

(219 cards)

1
Q

Definition of an exposure and an exposure base

A

An exposure is a basic unit of risk underlying the premium.

An exposure base is the unit used to measure exposure; it varies by line of business.

2
Q

3 Criteria for a good exposure base

A

1) Proportional to expected loss

2) Practical – easy to verify, inexpensive to obtain

3) Historical Precedence – it is costly to change because
a) It results in large premium swings for insureds
b) Requires rating algorithm changes which are expensive
c) Cannot use prior data for ratemaking, or significant adjustments needed

3
Q

Relationship between Written, Earned, and Unearned premium

A

CY Written = CY Earned + Change in Unearned during the CY

PY Written = PY Earned + PY Unearned
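The written/earned/unearned relationship can be illustrated with a pro-rata calculation for a single policy (a minimal sketch; the function name and policy values are illustrative, not from the source):

```python
from datetime import date

def earned_premium(written: float, eff: date, exp: date, eval_date: date) -> float:
    """Pro-rata premium earned on one policy as of eval_date."""
    if eval_date <= eff:
        return 0.0
    if eval_date >= exp:
        return float(written)
    return written * (eval_date - eff).days / (exp - eff).days

# $1,200 annual policy effective 2023-01-01, evaluated 2023-04-01 (90 days in)
earned = earned_premium(1200, date(2023, 1, 1), date(2024, 1, 1), date(2023, 4, 1))
unearned = 1200 - earned
# written = earned + unearned holds at any evaluation date
```

The in-force premium for this policy would be its full-term $1,200 as long as the evaluation date falls inside the policy period.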

4
Q

Definitions of written, earned, unearned, and in–force

A

Written – total amount coming from policies issued during a given time period, as of a certain point in time

Earned – The portion of written for which coverage has been provided as of a point in time

Unearned – The portion of written for which coverage has NOT yet been provided as of a point in time.

In–Force – The portion of policies for which coverage is currently being provided as of a point in time. (Full–Term premium is used)

5
Q

2 Ways to Aggregate Exposures

A

Calendar Year – considers all policies during the year regardless of effective date

Policy Year – considers only policies with effective dates during the year.

6
Q

Formula for Reported Losses

A

Reported Loss = Paid Loss + Change in Case Reserves

7
Q

4 Ways to Aggregate Claims

A

1) Calendar Year – considers all claims within a set calendar year regardless of accident/report date

2) Accident Year – considers claims only with an accident date in the given year.

3) Policy Year – considers claims on policies with an effective date in that year. (Best match of claims to premiums)

4) Report Year – considers claims on policies with a report date in the given year.

8
Q

Fundamental Insurance Equation

A

Premium = Loss + LAE + UW Expense + UW Profit
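When the UW profit provision is targeted as a percentage of premium, the equation can be solved for the balancing premium (a hedged sketch; the function name and dollar amounts are illustrative):

```python
def balanced_premium(loss: float, lae: float, uw_expense: float,
                     profit_share: float) -> float:
    """Solve Premium = Loss + LAE + UW Expense + UW Profit,
    where UW Profit = profit_share * Premium."""
    return (loss + lae + uw_expense) / (1.0 - profit_share)

# illustrative inputs: $600 loss, $60 LAE, $240 UW expense, 5% profit target
premium = balanced_premium(600.0, 60.0, 240.0, 0.05)
```

At this premium, losses, LAE, expenses, and the profit share of premium add back to the premium exactly, which is what "balancing the fundamental insurance equation" means.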

9
Q

Total Profit for an Insurer

A

Profit = UW Profit + Net Investment Income

10
Q

Reason that fundamental insurance equation must be balanced at individual and aggregate levels

A

Aggregate balance ensures that insurer achieves overall UW profit.

Individual balance ensures that the rates are fair for different risks.

11
Q

Three main objectives of aggregating data by time

A

1) Accurately matching premiums/exposures to loss

2) Using the most recent data available

3) Minimizing cost of data collection and retrieval

12
Q

Definition of CY with advantages and disadvantages

A

CY – considers all premium and loss transactions during a given year, regardless of policy effective dates

Advantages – Quickly available as it is fixed at year end, and used for financial reporting

Disadvantage – Poor relation of premiums to loss

13
Q

Definition of AY with advantages and disadvantages

A

AY – considers all losses with accident dates in the given year

Advantage – Good match between losses and premiums, preferable if we want to isolate major claim events

Disadvantage – takes time to develop

14
Q

Definition of PY with advantages and disadvantages

A

PY – considers all measures from policies with effective dates in the given period

Advantage – true match between losses and premiums. Preferable if we want to isolate UW or policy changes such as limits or deductibles being written.

Disadvantage – significantly longer to develop than AY

15
Q

Definition of RY with advantages and disadvantages

A

RY – considers losses on policies with report dates in the given period

Advantage – great for claims-made ratemaking; the number of claims is known (fixed) at year end. Preferable if we want to isolate claims practice changes such as case reserve adequacy.

Disadvantage – captures IBNER development, but there is no pure IBNR for RY.

16
Q

4 things to review for dataset reliability

A

1) Consistency with financial statement data

2) Consistency with data from prior analyses

3) Reasonableness of data

4) Data definitions – do we know what each field is?

17
Q

Types of external data sources

A

1) Statistical Plans – plans that aggregate data across companies. (ISO and NCCI)

2) Other aggregated industry data – Fast Track

3) Competitor Filings/Manuals – can be
obtained from public records.

4) Other 3rd party data – geographic, census, etc.

18
Q

Why caution should be taken in using external data

A

Data may be different because of differences in products, coverage definitions, underwriting criteria, expense levels, claims practices, mix of business, etc.

19
Q

Frequency Formula and Use

A

Frequency = Claims/Exposures

Changes in Frequency can help identify trends in claim occurrence and measure effectiveness of underwriting changes.

20
Q

Severity Formula and Use

A

Severity = Loss/Claims

Can be used to analyze changes in inflation or claims handling procedures.

21
Q

Pure Premium Formula and Use

A

Pure Premium = Loss/Exposure

This is the average loss cost per exposure, with no expenses loaded in (LAE is included only if the losses include LAE). Pure Premium = Frequency * Severity.

22
Q

Average Premium Formula and Use

A

Average Premium = Premiums/Exposures

Average premium changes measure changes in mix of business and rate changes made by an insurer (using written measures this shows up faster)

23
Q

Loss Ratio Formula and Use

A

Loss Ratio = Losses/Premiums = Pure Premium/Average Premium

Loss Ratios give an understanding of how we are doing relative to how much we’ve made. Rate adequacy.
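The identity Loss Ratio = Pure Premium / Average Premium ties the last few cards together and can be verified numerically (all figures are illustrative):

```python
# illustrative book of business
claims, exposures = 120, 1_000
losses, premium = 480_000.0, 900_000.0

frequency  = claims / exposures    # claims per exposure
severity   = losses / claims       # average loss per claim
pure_prem  = losses / exposures    # equals frequency * severity
avg_prem   = premium / exposures
loss_ratio = losses / premium      # equals pure_prem / avg_prem
```

Both decompositions use the same exposure base in the denominator, which is why the exposures cancel cleanly.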

24
Q

LAE Ratio Formula and Use

A

LAE Ratio = LAE/Losses

It can also be LAE/Premium; be sure to check which definition to use.

It is used to monitor claims department costs.

25
UW Expense Ratio Formula and Use
UW Expense Ratio = UW Expense/Premium

Usually, general expenses are divided by earned premium and all other UW expenses by written premium.

It is used to monitor UW expense costs.
26
Close Ratio Formula and Use
Close Ratio = Quotes Accepted/Total Quotes

The definition of total quotes may vary (do 2 quotes for the same policy count twice or once?)

Used for marketing and analyzing competitive position.
27
Operating Expense Ratio Formula
OER = UW Exp Ratio + LAE/Earned Premium
28
Combined Ratio Formula and Use
Combined Ratio = Loss Ratio + OER, or Loss Ratio + LAE/EP + UW Expense/WP

Measures an insurer's overall profitability.
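A quick numeric sketch of the combined ratio using the OER definition from the prior card (all ratios are illustrative):

```python
# illustrative ratios
loss_ratio   = 0.62   # losses / earned premium
lae_ratio    = 0.11   # LAE / earned premium
uw_exp_ratio = 0.27   # UW expense / written premium

oer      = uw_exp_ratio + lae_ratio   # operating expense ratio
combined = loss_ratio + oer           # > 1.00 signals an underwriting loss
```

Here the combined ratio lands at exactly 1.00, i.e. underwriting break-even before investment income.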
29
Retention Ratio Formula and Use
Retention Ratio = Policies Renewed/Policies Available to Renew

Can vary by insurer depending on the denominator (do cancelled policies count?)

Measures competitiveness and is watched closely after rate or service changes. Also useful for projecting future premiums.
30
4 Principles of P&C Ratemaking
1) A rate is an estimate of the expected value of future costs
2) A rate provides for all costs associated with the transfer of risk
3) A rate provides for the costs associated with an individual risk transfer
4) A rate is reasonable, and not excessive, inadequate, or unfairly discriminatory, if it follows the prior 3 principles.
31
Examples of non–pricing measures to mitigate catastrophe exposure for an insurer
1) Writing in lower risk areas
2) Requiring higher deductibles in high risk areas
3) Purchasing reinsurance
32
Shock Loss Examples
1) In Auto – accident involving multiple parties
2) In Home – total loss of a house
3) In Workers Comp – permanent disability of a young worker
33
Why shock loss definitions might vary by insurer
Premium volume and insurer size
34
What happens to future losses when you don't smooth out shock/CAT losses
Future losses are overestimated if shock losses are in historical data, underestimated if they are not in the data.
35
Ways to adjust data for shock losses
1) Cap losses at basic limits
2) Cap losses and apply an excess loss load
3) Remove ground-up shock losses and apply a shock loss loading
36
Common choices for a level to cap shock losses
1) Arbitrary amount
2) Percentile of the size-of-loss distribution – e.g., sort all claims in increasing order and cap all losses at the 99th percentile loss amount
3) Loss as a percent of insured value – usually in home/property; cap at x% of insured value.
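The percentile option can be sketched in a few lines (the nearest-rank percentile definition and the function name are assumptions for illustration; practical implementations may interpolate):

```python
import math

def cap_at_percentile(losses, pct=99):
    """Cap each loss at the pct-th percentile of the size-of-loss
    distribution (nearest-rank definition; an assumption)."""
    ordered = sorted(losses)
    k = max(0, math.ceil(pct * len(ordered) / 100) - 1)
    cap = ordered[k]
    return [min(x, cap) for x in losses], cap

# 100 fabricated losses of sizes 1..100, capped at the 99th percentile
capped, cap = cap_at_percentile(list(range(1, 101)), pct=99)
```

The excess amounts removed by the cap would then be spread back across all risks via an excess loss load, as described in the surrounding cards.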
37
Goal in deciding number of years to use for excess loss load or non–modeled CAT load
Balancing stability of long–term average and its responsiveness to changes.
38
Ways to account for changes in severity for excess loss loads
1) Trend the historical losses, then calculate the excess ratio
2) Index the cap level to reflect changing cost levels, so the cap varies by year.
39
Why CAT loads are often split into modeled and non–modeled components
Non–modeled CAT losses are frequent enough that a long–term average provides enough data for a reasonable estimate (think of these as shock losses).

CAT models are used to estimate losses from large–scale events such as hurricanes, where the long–term average doesn't have enough data.
40
Two ways a benefit change can apply to losses
1) To all losses on policies written after a certain date (slanted line in the Parallelogram Method)
2) To all new losses occurring after a certain date (vertical line in the Parallelogram Method)
41
Why adjust for one–time changes in ratemaking?
So that the data is most representative of the future policy period being priced. Not making these adjustments might lead to excessive or inadequate rates.
42
Types of effects for one–time changes
Direct effects are the direct and obvious impacts of the change on premiums/losses/expenses.

Indirect effects are impacts on these measures from changes in human behavior that are consequences of the change (e.g., a high rate change causing more policies to cancel).
43
Why direct effects of coverage increases on losses are difficult to quantify
Historical losses may have been capped at lower levels of coverage or the coverage may not have existed historically.
44
3 Ways to calculate the direct effect of a coverage change on losses
1) Restate individual claims at new coverage levels – preferred, but requires historical individual claim data
2) Calculate the effect on a representative group of claims – use a weighted average of the impact of the change on different segments
3) Simulate losses under new coverage levels – can be hard to pick parameters.
45
2 methods to on–level premiums
1) Extension of Exposures – preferred, but requires transactional data
2) Parallelogram Method – easy to perform, but assumes even writings, which may not be true.
46
Advantages and Disadvantages of Extension of Exposures
Advantage – most accurate way to on–level.

Disadvantage – requires historical transactional data, which may be difficult to acquire; assumptions may be needed for new rating variables (rate gaps); schedule rating may also be difficult to incorporate.
47
Advantages and Disadvantages of the Parallelogram Method
Advantage – easy to perform.

Disadvantage – assumes constant, even writings, which is not true in practice. Also, aggregate on–level factors are not appropriate for class–level ratemaking.
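The Parallelogram Method's mechanics can be sketched for annual policies written evenly with a single rate change during the calendar year: the premium earned at the new rate forms a triangle, and the on-level factor is the current rate level divided by the average earned rate level (the function name and numbers are illustrative):

```python
def on_level_factor(rate_change: float, eff_time: float) -> float:
    """On-level factor for one CY of earned premium under the
    Parallelogram Method: annual policies, uniform writings, one
    rate change of `rate_change` effective at `eff_time` (fraction
    of the way through the calendar year)."""
    new_area = (1.0 - eff_time) ** 2 / 2.0          # triangle earned at new rate
    avg_level = (1.0 - new_area) + (1.0 + rate_change) * new_area
    return (1.0 + rate_change) / avg_level

# +10% rate change effective July 1: new-rate area = 0.125
factor = on_level_factor(0.10, 0.5)
```

Under uneven writings the triangle areas are wrong, which is exactly the weakness the card above notes; aggregating historical premium by rate level avoids the geometric assumption entirely.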
48
Ways to correct for uneven writings with the Parallelogram Method
Use shorter time periods to reduce inaccuracy. A more accurate way is to aggregate historical data by rate level.
49
Parallelogram Method assumption for losses
Losses are uniformly distributed over the historical period being used.
50
Reason for adjusting for continuous changes
To ensure historical data reflects mix of business and levels of social and economic inflation expected in the future period.
51
Possible data sources for trends
1) Historical Insurer Data 2) Industry Data 3) Economic Data
52
Data Adjustments required before trending
Adjust for large events & anomalies, one–time changes, and seasonality.
53
Advantages and Disadvantages of using written premium to determine premium trends
Advantage – we can use more recent data, and written premium is most responsive to changes in mix of business.

Disadvantage – we are ultimately trending earned premium, so using written premium may be less accurate.
54
Why frequency and severity are typically analyzed separately
In the end we want pure premium, and frequency and severity are its two components. Analyzing them separately makes underlying changes more apparent.
55
Advantages and Disadvantages of using CY data to determine loss trends
Advantage – you don't need to estimate ultimate losses, which are uncertain.

Disadvantage – accident year provides a better match between losses and exposures.
56
Advantages and Disadvantages of using paid loss data compared to reported loss data to determine loss trends
Advantage – not subject to changes in case reserve practices.

Disadvantage – reported data contains more information, since case reserves provide information that will eventually show up in the paid data.
57
Two alternative methods for loss trends
1) Using econometric or GLMs 2) Using rolling CY data by AY
58
Three ways to calculate a trend from historical data
1) Take the average of percent changes by period
2) Linear regression – assumes a constant (absolute) change each period
3) Exponential regression – assumes a constant percent change each period
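Exponential regression (method 3) can be sketched as a least-squares fit on log values; `exponential_trend` and the sample severities are illustrative, not from the source:

```python
import math

def exponential_trend(values):
    """Least-squares fit of log(values) against time; returns the
    implied constant annual percent change (exponential regression)."""
    n = len(values)
    xs = list(range(n))
    ys = [math.log(v) for v in values]
    x_bar, y_bar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
             / sum((x - x_bar) ** 2 for x in xs))
    return math.exp(slope) - 1.0

# severities growing at exactly 5% per year recover a 5% annual trend
trend = exponential_trend([1000.0 * 1.05 ** i for i in range(6)])
```

Replacing `math.log` with the raw values and skipping the `exp` step at the end would give the linear-regression variant instead.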
59
Two reasons why excess severity trends are greater than basic or total limits trends
1) For losses above the basic limit, the trend falls entirely in the excess layer
2) Losses just under the basic limit contribute very little trend to the basic layer, while they create new losses for the excess layer
60
Two assumptions commonly made in determining trend periods
1) Policies are uniformly written over time. 2) Premiums are earned uniformly and losses occur uniformly over the policy period.
61
Information needed to determine the average earned date of a future policy period
1) Future rate change effective date 2) Length of time rates are to be in effect 3) Policy term
62
Overlap Fallacy between development and trending
There is no overlap. The overlap fallacy claims that development and trending double count because development includes trend. In fact, development brings losses to their ultimate settlement value, while trending adjusts losses from historical cost levels to the future period's cost level.
63
Two ways to perform the first step of a two–step trend
1) Use the ratio of the latest average WP to each year's historical average EP to adjust each year
2) Calculate a trend factor for each year to trend to the current level (similar to the 1–step method)
64
Reason to Develop losses in Ratemaking
So that rates in the future policy period will be adequate to cover the ultimate costs coming from policies written using those rates.
65
Four steps in estimating ultimates
1) Exploratory analysis of the data – identify key characteristics and anomalies, and balance to verified sources
2) Apply appropriate techniques to estimate ultimates
3) Evaluate the conflicting results of different techniques – reconcile and explain different outcomes
4) Monitor projections of actual versus expected development – update or correct projections with new information
66
Areas in estimating ultimates for which actuarial judgement is needed
1) Determining the optimal combination of claims to use (granularity)
2) Assessing the effect of operational changes on data
3) Adjusting for known and quantifiable events
4) Evaluating strengths and weaknesses of each reserving method
5) Making the final selection of the estimated ultimate
67
What changes can you see if you analyze closed claim counts over reported claim counts
Whether there has been a speedup or slowdown in closing claims over time (settlement rates).
68
What changes can you see if you analyze an average paid on closed claims triangle
Reflects both severity trends and speedups or slowdowns in closing of small claims relative to large claims.
69
What changes can you see if you analyze an average case reserve triangle
Reflects severity trends, speedups or slowdowns of closing small claims relative to large claims, and changes in case reserve adequacy.
70
What changes can you see if you analyze a paid loss to reported loss triangle
Reflects speedups or slowdowns in closing of claims (not relative to small or large) and changes in case reserve adequacy.
71
At least 3 key characteristics to use in deciding triangle granularity
1) Similarity of coverage (line of business)
2) Credibility (volume of claim counts)
3) Reliability of case reserves
4) Report lag
5) Settlement lag
6) Claim severity
7) Likelihood of claims re–opening
72
Reasons that losses can develop downwards
Reasons for downward loss development include case reserve decreases, deductible recoveries, and salvage and subrogation.
73
Possible selections for age–to–age factors
1) Straight average
2) Weighted average
3) Geometric average
4) Medial average (same as excluding high and low)
5) Judgement
6) Latest year
7) Industry benchmark factors
74
Characteristics actuaries look at when selecting age–to–age factors
1) Smooth progression of factors across columns – should steadily decrease with age
2) Stability of factors down a column – more stability means more consistent development
3) Credibility of experience – if data is limited or unstable, we can use industry benchmark factors
4) Changes in patterns – systematic patterns can identify changes in operations or environment
5) Applicability of historical experience – do we expect future development to be like past development?
75
Do triangles need to be adjusted for one–time changes?
Usually not, as changes can be isolated to appear on separate rows if needed. However, triangles do need to be adjusted for law or benefit level changes that impact open claims or in–force policies. This is done by on–leveling or considering impact of changes when selecting LDF's.
76
What would the impact be on future estimates based on reported chain ladder with a recent decrease in case reserve adequacy
Ultimate losses will be understated as the historical data will use the prior case reserve adequacy which was higher.
77
What would the impact be on future estimates based on paid chain ladder with an increase in claim settlement rates
Ultimate losses will be overstated. With faster settlement, the latest paid losses are more mature, but the historical development factors reflect the slower settlement pattern and therefore imply more remaining development than will actually occur.
78
Common methods for selecting a tail factor
1) Special study that contains more years of data
2) Industry benchmark tail factor
3) Fitting a curve to LDFs and extrapolating the tail factor
4) Reported–to–paid ratios at the latest paid development period
5) Judgement
79
Formulas for ultimate, IBNR, and unpaid claims estimates
Ultimate = Reported Loss + IBNER + IBNYR = Latest Value * Age–to–Ultimate Factor

IBNR = Ultimate – Latest Reported Loss

Unpaid = Ultimate – Latest Paid Loss
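These formulas can be tied together with a toy chain-ladder calculation using volume-weighted age-to-age factors (a simplified sketch with no tail factor; the triangle values are fabricated for illustration):

```python
def chain_ladder(triangle):
    """Volume-weighted age-to-age factors and chain-ladder ultimates
    (no tail factor). `triangle` holds rows of cumulative reported
    losses; each later accident year has one fewer evaluation."""
    n = max(len(row) for row in triangle)
    ldfs = []
    for j in range(n - 1):
        rows = [row for row in triangle if len(row) > j + 1]
        ldfs.append(sum(row[j + 1] for row in rows) / sum(row[j] for row in rows))
    ultimates = []
    for row in triangle:
        cdf = 1.0
        for f in ldfs[len(row) - 1:]:
            cdf *= f                       # age-to-ultimate factor
        ultimates.append(row[-1] * cdf)
    return ldfs, ultimates

triangle = [[100.0, 150.0, 165.0],
            [120.0, 180.0],
            [140.0]]
ldfs, ultimates = chain_ladder(triangle)
ibnr_latest = ultimates[-1] - triangle[-1][-1]   # IBNR = Ultimate - Latest Reported
```

For the newest year, 140 reported times the 12-24 and 24-36 factors gives the ultimate, and subtracting the latest reported loss recovers the IBNR, matching the card's formulas.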
80
What is the impact of a large loss in a cumulative triangle on age–to–age factors
A spike in age–to–age factors for that period, followed by a drop.
81
2 ways to incorporate ALAE in indications
1) Included with loss – add loss and ALAE and treat as one amount (possible because ALAE is attributed to each claim)
2) Analyzed separately – done when ALAE is large, as in some commercial lines; it is then trended and developed separately from losses.
82
Pure Premium Method Formula
Indicated Average Rate = (Selected Pure Premium including LAE + Fixed Expense per exposure) / (1 – Variable Expense Ratio – UW Profit Provision)
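A numeric sketch of the Pure Premium method (function name and inputs are illustrative); at the resulting rate, losses, fixed expenses, variable expenses, and profit exactly exhaust the premium:

```python
def indicated_avg_rate(pure_premium: float, fixed_per_expo: float,
                       var_exp_ratio: float, profit: float) -> float:
    """Pure Premium method: rate = (PP + FE) / (1 - V - Q)."""
    return (pure_premium + fixed_per_expo) / (1.0 - var_exp_ratio - profit)

# illustrative inputs: $300 pure premium incl. LAE, $25 fixed expense
# per exposure, 25% variable expense ratio, 5% profit provision
rate = indicated_avg_rate(300.0, 25.0, 0.25, 0.05)
```

The divisor (1 − V − Q) is the variable permissible loss ratio from the card further below, which is what makes the equation balance per exposure.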
83
Loss Ratio Method Formula
Indicated Rate Change = (Selected Loss & LAE Ratio + Fixed Expense Ratio) / (1 – Variable Expense Ratio – UW Profit Provision) – 1

Remember to subtract 1 to get the rate change.
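The Loss Ratio method can be sketched the same way (function name and ratios are illustrative):

```python
def indicated_rate_change(loss_lae_ratio: float, fixed_exp_ratio: float,
                          var_exp_ratio: float, profit: float) -> float:
    """Loss Ratio method: change = (LR + F) / (1 - V - Q) - 1."""
    return (loss_lae_ratio + fixed_exp_ratio) / (1.0 - var_exp_ratio - profit) - 1.0

# 65% loss & LAE ratio, 7% fixed expense ratio, 25% variable, 5% profit
change = indicated_rate_change(0.65, 0.07, 0.25, 0.05)
```

Unlike the Pure Premium method, the output is a multiplicative change to current rates rather than a rate itself, which is why the trailing subtraction of 1 matters.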
84
3 Methods for incorporating UW expenses in pricing
1) All Variable Expense Method – treats all UW expenses as variable with premium. Assumes future ratios will be consistent with historical.
2) Premium–Based Projection Method – assumes some percentage of expenses is fixed, but makes the same assumption that future ratios will be consistent with historical.
3) Exposure/Policy–Based Projection Method – divides fixed expenses by exposures and variable expenses by premium.
85
Which expense categories relate to what type of premium measure
General expenses – earned premium

All other expenses – written premium
86
Formula for Variable Permissible Loss Ratio (PLR) and Total PLR
Variable PLR = 1 – Variable Expense Ratio – UW Profit

Total PLR = 1 – UW Expense Ratio – UW Profit
87
Which expense categories generally use state vs countrywide data
General Expenses – countrywide data
Other Acquisition – countrywide data
Taxes, Licenses & Fees – state data
Commissions – state data
88
Shortcomings of All Variable Expense Method
If expenses are truly fixed, this method distorts rates: it overcharges risks with higher–than–average premiums and undercharges risks with lower–than–average premiums.
89
One way to address the shortcomings of the all variable method (list the shortcoming too)
A shortcoming of the method is that it would over/under charge insureds that had higher than or lower than average premium respectively. A premium discount or expense constant would alleviate these issues.
90
3 Shortcomings of the Premium–Based Method and how they can be addressed
1) One–time changes – can cause historical expense ratios to differ from expected levels. Fix: on–level premiums to future levels.
2) Continuous changes – can again shift expected levels. Fix: trend the premiums AND expenses to future levels.
3) Allocation of countrywide fixed expenses to state – can cause inequitable rates for multi–state insurers; higher average premium states get overcharged, and vice versa. Fix: calculate expense ratios by state.
91
4 Possible Enhancements to Exposure/Policy Based Projection Method
1) Finding a more accurate way to split expenses into fixed and variable components
2) Finding a more equitable way to allocate countrywide expenses to states
3) Some expenses considered fixed actually vary by policy characteristics (such as new vs. renewal business); to the extent the distribution of these characteristics changes over time, this could result in inequitable allocations of fixed expenses
4) Economies of scale – in a changing book of business, these lead to increasing/decreasing future average expenses per exposure; the impacts could be identified and quantified more directly.
92
2 ways to incorporate non–proportional reinsurance costs into ratemaking
1) Restate all premium and loss data to be net of reinsurance
2) Calculate the net cost of reinsurance and treat it as a fixed expense
93
Difference between Pure Premium and Loss Ratio Methods
1) The Loss Ratio method relies on loss ratios, which require on–level, trended premiums. The Pure Premium method requires well–defined exposures, which may need to be trended and developed.
2) One produces an indicated rate, the other an indicated rate change. Mathematically they are equivalent, but differing assumptions may lead to different answers.
94
Principles of Claims–Made Ratemaking
1) Claims–made policies should always cost less than occurrence policies as long as claim costs are increasing.
2) If there is a sudden, unpredictable change in underlying trends, a claims–made policy priced using the prior trend will be closer to the correct price than an occurrence policy priced using the prior trend.
3) If there is a sudden, unexpected shift in the claims reporting pattern, the cost of a claims–made policy is barely affected compared to an occurrence policy.
4) Claims–made policies incur no liability for pure IBNR, so the risk of reserve inadequacy is greatly reduced.
5) Investment income earned on claims–made policies is substantially less than under occurrence policies.
95
What is a coverage trigger?
A coverage trigger is the event that must occur for an insurance policy to apply to a claim.
96
Two types of coverage triggers in insurance
1) Occurrence – covers claims that occur during the effective policy period, regardless of when the claim is reported to the insurer. The coverage trigger is the date of occurrence of the accident.
2) Claims–made – covers claims that are reported to the insurer during the effective period, regardless of when the claim occurred. The coverage trigger is the report date of the claim.
97
Purpose of retroactive date provision
A retroactive date provision limits coverage under a claims–made policy to claims that occur after the retroactive date (and are reported during the claims–made policy term). This prevents overlap with any prior occurrence policies.
98
Purpose of extended reporting endorsement
To cover claims that occur during a claims–made period but are reported after the claims–made policy expires. This is commonly used when the insured switches from a claims–made policy to an occurrence policy.
99
Definitions of adverse selection and favorable selection
Adverse Selection – not using a risk characteristic that is used by competitors and, as a result, attracting underpriced customers.

Favorable Selection – using a risk characteristic not used by competitors, allowing the insurer to attract profitable customers.
100
4 Social Criteria for evaluating a rating variable
1) Affordability – especially desirable for compulsory insurance programs; subsidies often exist to promote affordability
2) Causality – a cause/effect relationship should be intuitive or easily evident so the public accepts the variable
3) Controllability – insureds should feel able to change their class
4) Privacy – insureds should not feel their privacy is being infringed upon
101
Definition of skimming the cream
Identifying a lower cost group of insureds that has not been identified by competition and using underwriting or marketing plans INSTEAD OF rating to acquire them
102
Definition of risk classification
The grouping of risks with similar risk characteristics (expected costs) with the purpose of setting prices
103
4 Criteria for Evaluating Rating Variables
1) Statistical 2) Social 3) Legal 4) Operational
104
3 Statistical Criteria for evaluating a rating variable
1) Statistical significance – expected costs should vary by class at an acceptable confidence level, and this should be consistent over time
2) Credibility – classes should be large enough to allow credible statistical predictions, but a class need not be fully credible on its own
3) Homogeneity – expected costs for individual risks within a class should be similar, with no clearly identifiable subclasses that would have significantly different loss potential
105
3 Operational Criteria for Evaluating a Rating Variable
1) Objective – classes should be measurable and clearly defined, i.e., exhaustive and mutually exclusive
2) Inexpensive to administer – the cost of establishing and maintaining the data to establish classes should not be too high
3) Verifiable – levels should be verifiable and minimize the ability for INTENTIONAL misclassification
106
Advantages and Disadvantages of Adjusted Pure Premium Approach
Advantage – considers correlations of exposures and does not require on–leveling premiums.

Disadvantage – calculating relativities by weighted average can be cumbersome in a plan with many variables.
107
Main Distortion in the Pure Premium Approach to class ratemaking
It does not consider correlation of exposures (distributional biases). The approach will double count the experience of the levels if exposures in a rating variable are correlated with another variable whose exposures are being analyzed.
108
How the Loss Ratio approach corrects for distributional bias
On–Level Premiums are used which account for distributional bias as premiums have rating variables built into them
109
Advantages and Disadvantages of Univariate Analysis
Advantage – simple to calculate and intuitive.

Disadvantage – does not account for correlations of exposures; this is very important, as many variables in insurance are correlated.
110
Three Reasons GLMs have grown in popularity
1) Improvements to computing 2) Competitive pressure 3) Better data availability (and granularity)
111
Benefits of Multivariate Methods (particularly GLMs)
1) They properly adjust for correlations of exposures
2) They attempt to focus on the signal and ignore the noise
3) They provide statistical diagnostics
4) They allow for consideration of interactions between variables
112
Challenges in determining indicated rates for territories
Territory tends to be highly correlated with other variables. Territories do not have very much credibility or homogeneity.
113
Advantage and Disadvantages of Minimum Bias Procedures
Advantage – accounts for correlations of exposures.

Disadvantage – can be difficult to calculate for rating plans with multiple variables, and minimum bias procedures do not provide diagnostic tests.
114
Describe Sequential Analysis
First, perform a standard univariate analysis to obtain indicated relativities for a single variable. Next, apply the Adjusted Pure Premium method to the second variable, adjusting exposures by the prior variable's selected relativities. Repeat the Adjusted Pure Premium method for all remaining variables, adjusting for all prior variables at each step. Only one pass through the variables is made; the procedure is not iterative.

The main criticism is that the results depend on the order in which the variables are analyzed.
115
Some important steps in creating GLMs
Compiling a dataset with enough data for modeling, selecting link functions, specifying the distribution of the underlying process, and using MLE's to calculate parameters.
116
Why GLMs are usually run on frequency and severity instead of loss ratios
There is no need to on–level premiums at a granular level; actuaries have a priori expectations for frequency/severity patterns but not for loss ratios; loss ratio models become obsolete when rates are changed; and there is no standard distribution for modeling loss ratios.
117
Some data mining techniques
1) MARS – Multivariate Adaptive Regression Splines: turns continuous variables into categorical variables
2) CART – Classification And Regression Trees: builds a set of if–then rules for use in classification
3) Neural networks – training algorithms to identify patterns; can identify unknown interactions between variables
4) Cluster analysis – combines similar risks into groups, e.g., creating territories from ZIP codes
5) Factor analysis – reduces the number of variables needed in a classification ratemaking analysis, e.g., a symbol variable.
118
Some common GLM diagnostic tests
1) Standard errors (confidence intervals) around estimates
2) Chi–squared, F–tests, and other deviance tests to compare models
3) Running the model on different years of data to see if estimated parameters are consistent over time
4) Using a holdout dataset
119
How actuaries can play a key role in using GLMs
1) Obtaining reliable data for use in modeling 2) Exploring anomalous results in the GLM with additional analysis 3) Considering model results from both a statistical and business perspective 4) Developing appropriate methods to communicate the model results based on the company's ratemaking objectives.
120
Common types of external data used in GLMs
1) Geo–demographic info: such as population density 2) Weather data: average rainfall 3) Property characteristics such as square footage 4) Information about insured individuals or businesses such as credit scores
121
Challenges in determining indicated rates for territories
1) Territory tends to be highly correlated with other rating variables. Address this using multivariate methods 2) Territories are often defined as small areas (e.g., ZIP codes) that have very limited credibility.
122
Two steps in territorial ratemaking
1) Establishing territorial boundaries 2) Determining indicated rates for each territory (use GLM)
123
Two basic spatial smoothing approaches and advantages of each
1) Distance-Based – the current unit's data is credibility weighted with data from other geographic units, with weights diminishing by distance. Advantage: easy to understand and implement. Disadvantage: a given distance has the same impact regardless of urban/rural differences. Best for weather perils 2) Adjacency-Based – the current unit's data is weighted with data from rings of surrounding geographic units, with weights diminishing for wider rings. Better accounts for physical boundaries and urban/rural differences. Best for socio-demographic perils such as theft.
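As an illustration, here is a minimal distance-based smoothing sketch in Python; the exponential decay weight and the bandwidth `h` are hypothetical modeling choices, not part of any prescribed plan:

```python
import math

def smooth_relativity(units, i, h=10.0):
    """Distance-based spatial smoothing: the relativity of unit i becomes a
    weighted average of all units' relativities, with weights that decay
    exponentially with distance (bandwidth h is a judgment call)."""
    xi, yi, _ = units[i]
    num = den = 0.0
    for (x, y, rel) in units:
        d = math.hypot(x - xi, y - yi)   # distance between unit centroids
        w = math.exp(-d / h)             # weight diminishes with distance
        num += w * rel
        den += w
    return num / den

# Each unit: (x, y, indicated relativity); the distant unit gets little weight
units = [(0, 0, 1.40), (1, 0, 0.90), (2, 0, 1.00), (50, 0, 2.00)]
print(round(smooth_relativity(units, 0), 3))   # pulled toward nearby units
```

An adjacency-based variant would replace the distance weight with weights assigned to successive rings of neighboring units.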
124
Clustering Routines for Territorial Ratemaking
1) Quantile Method – clusters have equal # of observations or equal weights 2) Similarity Methods – clusters are based on closeness of estimated relativities.
125
Why is standard ratemaking problematic for determining ILFs
1) Generally less data at higher limits so results are volatile 2) Analyses can produce results that are impractical to implement (lower price for higher limit)
126
Assumptions commonly made in ILF pricing
1) Frequency is the same for all limits and is independent of severity 2) Expenses and profit are variable and do not vary by limit
127
Why should loss data be trended and developed for ILF pricing
Higher limits can experience higher severity trends and development can take longer on larger claims
128
What does an Expense Constant account for?
It accounts for expense costs that do not vary by size of risk. Particularly important for small policies since expenses may be large portion of premium
129
Two issues with underinsurance (not insuring to value)
1) The insured will not be fully covered in the case of a total loss 2) If the insurer assumes ITV for all homes when calculating rates, then the premium charged for underinsured policies will be understated and will not be adequate to cover expected losses for those policies.
130
How does premium rate per XX change as ITV increases based on skew of severity distribution
Generally, rate per XX decreases as ITV increases 1) Large Losses Predominate – decrease at increasing rate 2) Small Losses Predominate – decrease at decreasing rate 3) Uniform – decrease at constant rate
131
3 Criteria for Credibility
1) 0 ≤ Z ≤ 1 2) dZ/dN > 0 (Z increases as the size of risk increases) 3) Z must increase at a decreasing rate (concave down)
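These criteria can be checked numerically for the classical square-root rule Z = min(1, sqrt(n / n_full)); the full-credibility standard of 1,082 claims below is a commonly quoted example, used here purely for illustration:

```python
import math

def z_classical(n, n_full=1082):
    """Classical credibility: Z = sqrt(n / n_full), capped at 1."""
    return min(1.0, math.sqrt(n / n_full))

zs = [z_classical(n) for n in range(1, 2001)]

assert all(0.0 <= z <= 1.0 for z in zs)             # criterion 1: 0 <= Z <= 1
assert all(b >= a for a, b in zip(zs, zs[1:]))      # criterion 2: dZ/dN >= 0
# criterion 3: below full credibility, Z increases at a decreasing rate
diffs = [b - a for a, b in zip(zs, zs[1:]) if b < 1.0]
assert all(d2 <= d1 + 1e-12 for d1, d2 in zip(diffs, diffs[1:]))
print("square-root rule satisfies all three criteria")
```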
132
6 Desirable Qualities for Complement of Credibility
1) Accurate – close to target 2) Unbiased – should be on target on average 3) Statistically Independent – otherwise errors compound 4) Easy to Calculate – difficult to justify if not 5) Available – otherwise it's impractical 6) Logical relationship to base statistic – otherwise difficult to justify
133
Advantage/Disadvantage of Complement: Loss Costs of A Larger Group Including Group Being Rated
Advantages – available, logical relationship, accurate (more data → more stability), easy to calculate. Disadvantage – can be biased and not independent (it can be made independent if the group being rated is removed)
134
Advantage/Disadvantage of Complement: Loss Costs of A Larger Related Group
Advantage – Accurate (sometimes), Available, Independent, Logical Relationship (if chosen reasonably), Easy to Compute Disadvantage – can be biased
135
Advantage/Disadvantage of Complement: Rate change from larger group applied to present rates
Advantage – unbiased, available, easy to compute, independent and accurate. Disadvantage – possibly not a logical relationship unless chosen reasonably
136
Advantage/Disadvantage of Excess Loss Complement: Fitted Curves
If the fitted curve is accurate, then the complement is accurate, less biased, and has a logical relationship. Disadvantages: it is less independent since subject data is included (losses below the layer), it is not easy to calculate, and the necessary data is not usually available.
137
Advantage/Disadvantage of Complement: Harwayne's Method
It performs well on nearly all of the desirable qualities; its only drawback is that it is difficult to calculate.
138
Advantage/Disadvantage of Complement: Trended Present Rates
Advantage – unbiased, available, easy to compute, and logical relationship. Disadvantage – may or may not be accurate or independent based on stability of indications
139
Advantage/Disadvantage of Complement: Competitor Rates
Advantage – statistically independent, usually has logical relationship and easy to compute Disadvantage – may be biased, difficult to obtain and may not be accurate
140
Advantage/Disadvantage of Excess Complement: Increased Limits Analysis
Advantage – independent and practical if data is available. Disadvantage – if ILFs are based on a different size-of-loss distribution than the experience, it is biased. It may also be inaccurate and may not have a logical relationship.
141
Advantage/Disadvantage of Excess Complement: Lower Limits Analysis
Advantage – independent, practical if data available. More accurate than limits analysis (stable) Disadvantage – more biased than limits analysis, may not have logical relationship
142
Advantage/Disadvantage of Excess Complement: Limits Analysis
Advantage – independent, practical if data is available. Disadvantage – biased if ILFs are based on different distribution, may be inaccurate due to low volume of data, and may not have logical relationship
143
Why is individual risk rating more common in commercial lines
1) Exposure to loss varies more from risk to risk 2) Classification plans are not as refined due to limited data 3) Individual insureds can be large enough to have credibility
144
ISO Experience Mod Formula
Mod = Z × (AER − EER) / EER
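A toy numeric check of the formula; the Z, AER, and EER values below are invented for illustration:

```python
def iso_mod(z, aer, eer):
    """ISO experience mod as a credit/debit: Z * (AER - EER) / EER.
    A positive result is a debit (actual worse than expected)."""
    return z * (aer - eer) / eer

# Hypothetical inputs: 40% credibility, actual ratio worse than expected
mod = iso_mod(z=0.40, aer=0.90, eer=0.80)
print(round(mod, 4))   # 0.05, i.e., a 5% debit
```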
145
ISO AER Components
AER = (Actual Basic Limits Losses Limited by MSL + Expected Unreported Losses Limited by MSL) / CSLC, where CSLC = Company Subject Loss Cost (ultimate basic limits losses NOT limited by MSL)
146
ISO EER Components
EER = Ultimate Basic Limits Losses Limited by MSL / Ultimate Basic Limits Losses NOT Limited by MSL
147
Steps to Calculate ISO Mod
1) Find Basic Limits Expected Loss for each subline – ELR*basic limits premium 2) Detrend BLEL by year and subline to get CSLC 3) Sum CSLC for all years and sublines 4) Development = CSLC by year*EER*(%Unrept) 5) Add Development for each year 6) Cap actual loss at basic limits then add ALAE and cap at MSL. Sum result 7) Calculate Mod
148
NCCI Mod Formula – Both ways
1) Mod = [Ap·Zp + Ep·(1 − Zp) + Ae·Ze + Ee·(1 − Ze)] / E, where E = Ep + Ee 2) Mod = [Ap + W·Ae + (1 − W)·Ee + B] / (E + B), where Zp = E / (E + B) and Ze = W·Zp
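The two forms are algebraically equivalent when Zp = E/(E+B) and Ze = W·Zp, which can be verified numerically; all inputs below are hypothetical:

```python
def mod_split(Ap, Ep, Ae, Ee, W, B):
    """NCCI mod, credibility-weighted form."""
    E = Ep + Ee                  # total expected losses
    Zp = E / (E + B)             # primary credibility
    Ze = W * Zp                  # excess credibility
    return (Ap * Zp + Ep * (1 - Zp) + Ae * Ze + Ee * (1 - Ze)) / E

def mod_ballast(Ap, Ep, Ae, Ee, W, B):
    """NCCI mod, weighting-value/ballast form."""
    E = Ep + Ee
    return (Ap + W * Ae + (1 - W) * Ee + B) / (E + B)

args = dict(Ap=60_000, Ep=50_000, Ae=40_000, Ee=70_000, W=0.3, B=25_000)
m1, m2 = mod_split(**args), mod_ballast(**args)
print(round(m1, 4), round(m2, 4))   # the two forms agree
```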
149
Where does D ratio apply?
D Ratio gives primary loss. 1–D ratio gives excess loss
150
When can schedule rating be used?
To reflect individual risk characteristics that are not already reflected in rates and are not captured in experience rating
151
Unique Considerations for Large Deductible Pricing
1) Who pays ALAE below deductible? 2) Does deductible apply to loss or loss+ALAE 3) Credit risk 4) Risk margin – more profit margin due to difficulty in estimating loss above deductible (more risk = more profit)
152
Retro Rating Formula
R = (b + c·A)·T, where b = e − (c − 1)·E[A] + c·I and I = (Insurance Charge − Savings)·E[A]. The result is capped between the minimum and maximum retro premiums, which are set as percentages of standard premium.
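A sketch of the calculation; the basic premium, loss conversion factor, tax multiplier, and min/max ratios below are all hypothetical:

```python
def retro_premium(losses, b, c, T, standard, min_ratio, max_ratio):
    """Retro premium R = (b + c*A) * T, capped between the minimum and
    maximum retro premiums (ratios applied to standard premium)."""
    R = (b + c * losses) * T
    return min(max(R, min_ratio * standard), max_ratio * standard)

standard = 100_000
b = 20_000   # basic premium: b = e - (c - 1)*E[A] + c*I
c = 1.10     # loss conversion factor
T = 1.05     # tax multiplier

print(retro_premium(30_000, b, c, T, standard, 0.5, 1.5))   # within the caps
print(retro_premium(200_000, b, c, T, standard, 0.5, 1.5))  # capped at the max
```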
153
Examples of Rate regulation
1) Limiting the amount of rate change 2) Requiring notices to insureds receiving large changes 3) Prohibiting rating variables 4) Prescribing specific techniques (e.g., California) 5) Revising assumptions used in ratemaking (e.g., selecting a new trend). These cause delays and changes.
154
Actions to take to respond to regulatory restrictions
1) Legal Action 2) Revise UW guidelines to limit business at inadequate rates 3) Revise marketing to limit business at inadequate rates 4) Use proxy variables when others are prohibited
155
Two operational constraints
1) Systems Limitations – the cost to implement changes in systems 2) Resource Constraints – a new variable requires staff to collect and process the data (e.g., ITV inspections)
156
Factors influencing an insured's decision to purchase
1) Competitor Price 2) Overall Cost 3) Rate Changes (existing customers only) 4) Insured Characteristics – sensitivity to price 5) Brand Loyalty/Customer Satisfaction
157
Differences between distributional analysis and policyholder dislocation analysis
Distributional analysis identifies which segments are growing or shrinking over time and changes in market share. Dislocation analysis can be used to identify the impact of rate changes on existing customers. We want to quantify how many customers receive rate changes by % and $ amount, as those with larger changes may cancel.
158
Traditional vs Asset Share Pricing
Traditional only considers experience of a single period in time. Does not consider persistency between risks. Asset share has multiple periods, persistency, and different assumptions for new and renewal business.
159
Steps in UW Cycle (Hard Market vs Soft Market)
1) Hard Market – prices and profits are high but growth is low 2) Some insurers lower rates to attract new customers 3) All insurers lower rates to maintain competitiveness. Profit goes down and we enter soft market 4) Insurers respond to low profits by not writing business and raise prices 5) All insurers follow and we enter hard market
160
Considerations in Premium Transition Rules
1) Cap Amounts 2) Should allow for large changes but limit overall impact 3) Shorter transition periods are preferable 4) How to implement and file?
161
Third Principle of Unpaid Claim Estimates
Actual amounts that will be paid for defined group of claims will likely differ from estimated future payments implied by a reasonable unpaid claims estimate. Actual payments can be known only with certainty when all payments for such claims have been made.
162
Entities for which accurate unpaid claims estimates are important
1) Internal Management – for pricing/underwriting and business decisions 2) Investors – profitability for insurer and returns paid to investors 3) Regulators – estimates are used to monitor solvency of insurer
163
Accounting, Valuation and Review Dates
1) Accounting Date – defines the group of claims being analyzed and determines the date for the paid/unpaid split 2) Valuation Date – defines the date through which transactions are included in the analysis. Can be the same as, prior to, or after the accounting date 3) Review Date – the date through which material information known to the actuary is included in the analysis (on or after the valuation date)
164
First Principle of Unpaid Claims
Unpaid claims estimate for defined group of claims is reasonable if it is derived from reasonable assumptions and appropriate methods and has been validated by appropriate tests all evaluated consistent with review/valuation date.
165
Second Principle of Unpaid Claims
Unpaid claims estimates are inherently uncertain. This stems from a dependence on the amount of future claims payments on facts and circumstances unknown when an estimate is made. A range of estimates is reasonable.
166
Assumption of Expected Claims Method
Prior development is not predictive of future development. Also that a reasonable ECR can be obtained.
167
Common uses of Expected Claims Method
1) When entering new line of business (no data available) 2) When operational/environmental changes make historical data irrelevant 3) Ultimates for long tailed lines at early maturities where age–to–ult is highly leveraged
168
Difficulty of using Expected Claims Method
1) Determining appropriate exposure base 2) Estimating claims relative to that exposure base
169
Expected Claims method under Mix of Business Changes
Only affected if either 1) The segments of changing business have different ECRs 2) The new business has the same ECR but different development patterns AND this development change causes the estimate of the ECR from historical data (via the chain ladder) to be inaccurate
170
Expected Claims method under Growth/Shrinkage
Not usually affected; only if the average accident date changes. Depends on the period in which the change occurred and whether the ECR is affected. The estimate will be affected in the same direction as the chain ladder but to a lesser degree (growth would lead to an underestimate).
171
Assumptions of the BF Method
IBNR can be better estimated based on a–priori estimate than using experience. Prior development has no significance to future development.
172
Advantage of BF and Benktander Method
They balance stability and responsiveness compared to the Expected Claims and Chain Ladder methods. Benktander is more responsive than BF but not as stable (due to its greater weight on the chain ladder).
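The ordering can be seen in a small sketch; the reported losses, CDF, and a priori expected claims below are hypothetical:

```python
def chain_ladder(reported, cdf):
    return reported * cdf

def bornhuetter_ferguson(reported, cdf, expected):
    return reported + (1 - 1 / cdf) * expected

def benktander(reported, cdf, expected):
    # GB = BF applied a second time, using the BF ultimate as the a priori
    return reported + (1 - 1 / cdf) * bornhuetter_ferguson(reported, cdf, expected)

reported, cdf, expected = 600_000, 2.0, 1_000_000

cl = chain_ladder(reported, cdf)                    # 1,200,000: fully responsive
bf = bornhuetter_ferguson(reported, cdf, expected)  # 1,100,000: more stable
gb = benktander(reported, cdf, expected)            # 1,150,000: in between
print(cl, bf, gb)
assert bf < gb < cl   # experience here is worse than the prior estimate
```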
173
How to deal with CDF<1 for BF or Benktander?
1) Continue using method 2) Limit CDF to minimum of 1 3) Rely on different technique to select ultimates for years with CDFs below 1
174
BF/Benktander under changing claims ratios
These methods do not fully react to changes in claims ratios. Reported is more responsive than paid since more weight will be given to development technique when reported is used.
175
BF/Benktander under growth/shrinkage
Unaffected unless average accident date changes. Effect is in same direction as chain ladder.
176
Cape Cod under mix of business change
Will be inaccurate if 1) Segments that are changing have different development patterns 2) Segments that are changing have different ECRs
177
Assumption of Cape Cod Method
Unreported claims develop based on expected claims. Expected claims are derived using reported claims and earned premium
178
Advantage of Cape Cod
The ECR is estimated from historical data rather than judgmentally selected. Random fluctuations at early maturities do not significantly distort estimates.
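The historical ECR estimate can be sketched as follows; the premiums, reported losses, and CDFs below are hypothetical:

```python
def cape_cod_ecr(rows):
    """Cape Cod expected claims ratio:
    ECR = sum(reported) / sum(on-level premium * % reported),
    where % reported = 1 / CDF, so green years get little weight."""
    num = sum(rep for rep, _, _ in rows)
    den = sum(prem / cdf for _, prem, cdf in rows)
    return num / den

# (reported losses, on-level earned premium, reported CDF to ultimate)
rows = [
    (700_000, 1_000_000, 1.10),   # mature year: ~91% reported
    (500_000, 1_000_000, 1.50),
    (250_000, 1_000_000, 3.00),   # green year: only ~33% reported
]
print(round(cape_cod_ecr(rows), 4))
```

Unreported claims for each year are then ECR × on-level premium × (1 − 1/CDF).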
179
Disadvantages of Cape Cod
1) Can't be used for a new line since there is no data to calculate the ECR 2) Estimates are highly dependent on on-level premium, which can be difficult to calculate 3) If data is thin or volatile, it is not reliable; BF may perform better.
180
Cape Cod under Case Adequacy Changes
Overstated if Case Adequacy Increases. Error is larger than BF method but lower than chain ladder (since it considers latest year)
181
Cape Cod under Claims Ratio Changes
More responsive than BF since it uses historical data to calculate ECR. Not fully responsive though.
182
Cape Cod under Growth/Shrinkage
Only if average accident date changes. Growth would mean understating ultimates.
183
Considerations of maturity age to calculate tail severity
1) Combine when results become erratic since it may provide stability 2) Influence on total projections (if the age doesn't matter, we don't need to analyze further) 3) % of claims expected to be closed beyond selected age. Enough claims should exist to provide more stable severity estimate when grouped but not too many since some should remain to provide estimates for earlier maturities where age–to–age is more stable.
184
Assumptions of Frequency–Severity Techniques
1) Claim counts/severities will continue to develop based on historical development 2) Claim counts are consistent in definition throughout the experience period 3) The mix of claims is relatively homogeneous. The disposal rate technique also requires no significant partial payments.
185
Advantages of Freq–Sev Techniques
1) The disposal rate technique isn't impacted by case reserve changes (paid data) 2) Can incorporate trends/inflation and disposal rate information 3) Greater insight into the claims process by analyzing frequency and severity separately (e.g., settlement rates and the average dollar value of claims).
186
Disadvantage of Freq–Sev Techniques
1) Highly sensitive to trend rate 2) Changes in claim count definitions impact estimates 3) Changes in reporting/processing impact estimates 4) Data may not be available and should be relatively homogeneous
187
Assumption of Case Outstanding Techniques
1) Prior development is predictive of future development 2) Case Outstanding to date provides useful information about future claims Also consistent processing, mix of claims, stable limits/deductibles.
188
When are Case Outstanding techniques most useful?
1) Report Year triangles (no IBNR) and claims made policies 2) AY triangles where most claims are reported by first maturity
189
Disadvantages of Case Outstanding Techniques
1) In most lines, case outstanding does not provide significant information about pure IBNR 2) Lack of industry benchmark data for AY applications 3) Ratios are not intuitive (paid to case and case to case) 4) Projections can be distorted by case reserves for large losses. 5) Second method is highly leveraged for immature AY's and can be distorted by industry vs insured mix of business differences.
190
Two ways to deal with operational changes in unpaid claims estimates
1) Data selection and rearrangement to isolate or neutralize impact of changes 2) Use data adjustments to restate historical data as if changes had not occurred.
191
Examples of using data selection/rearrangement for unpaid claims estimates
1) Earned exposure instead of claim counts when definition of counts has changed 2) PY data instead of AY if limits have changed 3) RY instead of AY if reporting patterns have changed 4) Shorter time periods if avg accident date has changed
192
When would subdividing triangles by size of loss be useful?
If there was a shift in handling large claims vs small claims
193
Assumptions of Berquist Sherman Paid Adjustment
1) Changes in disposal rates are due to changes in settlement rates 2) Higher disposal rates mean a higher % of claims paid
194
Assumptions of Berquist Sherman Case Adjustment
Any differences between annual changes in average case reserves and severity trend is due to change in case reserve adequacy.
195
Challenge in using paid loss data to find severity trend for Berquist Sherman Adjustment
It assumes average paid severity is changing only because of severity trend (not because of prioritization between large and small claims). For certain lines, limited paid data is available at early maturities, which distorts the trend, and irregular settlements or variation in the rate of claims closed without payment can also distort trends.
196
Why is linear interpolation within quarters more reasonable than in a year?
Development is usually higher at early maturities and tends to decrease over time. As such, linear interpolation is usually not reasonable over longer periods, since development tends to occur earlier in the year rather than uniformly throughout it.
197
Benefits of using multiple methods to estimate unpaid claims
Seeing multiple estimates allows you to better understand range and distribution of possible outcomes and sensitivity of estimates to varying assumptions.
198
Common diagnostics for reviewing unpaid claim estimates
Implied Ultimate Freq/Sev, Claims Ratios, Pure Premiums, and Unpaid Severities.
199
2 reasons why unpaid claims estimates should be reviewed annually
1) Unpaid claims estimates should be revised if there have been changes 2) Check if claims have been developing as expected to check if unpaid estimates are reasonable.
200
Different options to take if actual emergence is different from expected emergence
1) Reduce IBNR – would be appropriate if speedup in reporting 2) Leave IBNR unchanged – for example if there was a large reported claim (and expected development will return to future levels) 3) Increase IBNR – for example if claims ratio increased (deteriorated)
201
Tail Factor Relationships for Excess of Loss and Quota Share Treaties
1) Excess of Loss – tail factor for ceded is larger than gross since once the retention is hit all the development is in the ceded layer. The tail factor for net will be lower since losses are capped by reinsurance. 2) Quota Share – all tail factors (ceded,net,gross) are same since ceded and net are multiples of gross triangle.
202
Considerations on whether to develop gross and ceded separately or to develop net
1) Availability of Data 2) Characteristics of Reinsurance Program
203
Assumption of Ratio approach to ALAE estimation for unpaid ALAE
The relationship between ALAE and claims is stable over the experience period.
204
Advantage of additive ratio method for unpaid ALAE estimates
Additive method is more stable if paid to paid ratios are small at early maturities
205
Advantage of using Ratio Approach to estimate Unpaid ALAE
1) Recognizes inherent relationship between claims and ALAE 2) Development factors are not as highly leveraged as chain ladder 3) Can judgmentally select ratios
206
Disadvantages in using Ratio Approach to estimate Unpaid ALAE
1) For some lines, there may be claims with no claim payment but substantial ALAE 2) Errors in ultimate claim estimates will lead to errors in ultimate ALAE estimates
207
Assumption of dollar based techniques for unpaid ULAE
ULAE costs track with claim costs in both timing and amount. (A single 10K claim has same ULAE as ten 1K claims)
208
Claims Basis Formula for Generalized Kittel
B = u1·R + u2·P + u3·C, where R = ultimate cost of claims REPORTED during the calendar year, P = claims PAID during the calendar year, C = ultimate cost of claims CLOSED during the calendar year, and u1 + u2 + u3 = 1
209
Three ways to estimate unpaid ULAE using Generalized Kittel
1) Expected Claims Method – W × (Expected Ultimate Basis) − Paid ULAE 2) BF Method – W × (Expected Ultimate Basis − Basis to date) 3) Development Method – Paid ULAE × (Ultimate Basis / Basis to date − 1)
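Under the Conger–Nolibos framing, with W the selected ratio of ULAE to the claims basis, the three estimators can be sketched as follows (the BF-style estimator replaces paid ULAE with W times the basis reported to date; all figures are hypothetical):

```python
def unpaid_ulae(W, ultimate_basis, basis_to_date, paid_ulae):
    """Three ways to estimate unpaid ULAE given a ULAE-to-basis ratio W."""
    expected = W * ultimate_basis - paid_ulae                    # expected claims style
    bf       = W * (ultimate_basis - basis_to_date)              # BF style
    develop  = paid_ulae * (ultimate_basis / basis_to_date - 1)  # development style
    return expected, bf, develop

exp_m, bf_m, dev_m = unpaid_ulae(
    W=0.08, ultimate_basis=10_000_000, basis_to_date=6_000_000, paid_ulae=500_000
)
print(exp_m, bf_m, dev_m)   # three estimates of the same unpaid ULAE
```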
210
Assumptions of Generalized Kittel
1) ULAE costs are proportional to dollars of claims 2) ULAE amounts spent opening claims are proportional to the ultimate cost of claims reported 3) ULAE amounts spent maintaining claims are proportional to payments made 4) ULAE amounts spent closing claims are proportional to the ultimate cost of claims being closed. Also assumes no cost to re-open or re-close claims.
211
Disadvantage of Generalized Kittel
Does not account for when ULAE inflation is occurring at different rate than claims inflation
212
Assumptions of Classical Approach to unpaid ULAE
1) 50/50 opening and closing ULAE costs 2) No partial payments, all payments made when claims are closed.
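Under the 50/50 assumption, half the ULAE ratio applies to case reserves (the opening half was already spent) and the full ratio applies to IBNR; a sketch with hypothetical figures:

```python
def classical_unpaid_ulae(ulae_ratio, case_reserves, ibnr):
    """Classical (paid-to-paid) ULAE: half the ratio applies to case
    reserves, the full ratio applies to IBNR (both halves remain)."""
    return ulae_ratio * (0.5 * case_reserves + 1.0 * ibnr)

# Hypothetical: historical paid ULAE / paid claims ratio of 10%
print(classical_unpaid_ulae(0.10, case_reserves=2_000_000, ibnr=1_500_000))
```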
213
When does the classical approach for ULAE not work well?
Long tailed lines and if ULAE is trending at different rate than claims. Also if insurer is growing/shrinking significantly and if the 50/50 assumption is not appropriate.
214
Two issues in dollar based techniques that count based techniques address
1) ULAE is not directly proportional to claims 2) ULAE to claims ratios results in volatile ULAE when claims are volatile
215
Assumptions of Count Based ULAE Techniques
1) Same kind of transaction costs the same ULAE regardless of claim size The challenge is obtaining accurate and consistent claim count data.
216
Brian Technique for Unpaid ULAE
5 types of transactions: open, maintain open, pay, close, and reopen. Assumes the ULAE cost per transaction is similar and can be estimated using the historical ratio of ULAE to the number of transactions. One weakness is the assumption that all transactions have similar ULAE; another is that it is difficult to estimate the future number of transactions and the average cost per transaction.
217
Wendy Johnson Technique for Unpaid ULAE
Assumes ULAE can be split into opening and maintaining claims. Uses a count-based technique but allows costs to vary between transaction types, using relative ULAE per transaction type (for example, maintaining costs 1.5x opening).
218
Mango Allen Staffing Approach to Unpaid ULAE
Calculate a new count base as Opened Counts + Closed Counts + Open Pending Counts (OCP). Then calculate Staff/OCP and project OCP to get projected staff. Use projected staff and expected ULAE per staff member to get ULAE. A concern is that the ULAE estimates are sensitive to the magnitude of the selected parameters.
219
Spalla Technique for Unpaid ULAE
Spalla suggests modern computer systems can track the time spent on individual claims by each employee. We can then find ULAE per type of transaction and load for other ULAE such as overhead. A relative technique like Wendy Johnson's can also be applied here.