Reserving Flashcards

Theory (36 cards)

1
Q

Basic chain ladder method (BCL)

Definition

A

The BCL method is a statistical method of estimating outstanding claims, specifically the ultimate value of a set of development data. It projects claims to ultimate by calculating weighted averages of past claim development (known as link ratios or development factors) and applying them to the experience to date. The method can be applied to many different categories of data, including premiums, paid claims, incurred claims, and numbers of claims.

2
Q

BCL Method

Step by step method

A
  1. Tabulate Claims: Tabulate the claims data (e.g. cumulative paid or incurred amounts) in a run-off triangle format, grouped by origin year (e.g. accident year or underwriting year) and development year.
  2. Calculate Development Ratios (Link Ratios): Calculate the ratios of accumulated claims in successive development periods (link ratios).
  3. Apply Ratios: Apply these ratios to the cumulative claims data to complete the triangle (i.e. project the data to ultimate maturity).
  4. Find Future Payments: From the completed cumulative results, find the expected future payments in each cell (usually by disaccumulating the ultimate cumulative claims and then calculating the outstanding amount by subtracting claims paid to date).
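The four steps above can be sketched in Python. The triangle figures below are hypothetical, and the link ratios are volume-weighted averages:

```python
import numpy as np

# Hypothetical cumulative paid-claims run-off triangle:
# rows = origin years, columns = development years, NaN = not yet observed.
tri = np.array([
    [1000.0, 1500.0, 1700.0, 1800.0],
    [1100.0, 1650.0, 1870.0, np.nan],
    [1200.0, 1800.0, np.nan, np.nan],
    [1300.0, np.nan, np.nan, np.nan],
])

n = tri.shape[1]

# Step 2: weighted-average link ratios between successive development columns.
factors = []
for j in range(n - 1):
    obs = ~np.isnan(tri[:, j + 1])              # rows with both columns observed
    factors.append(tri[obs, j + 1].sum() / tri[obs, j].sum())

# Step 3: apply the ratios to project each origin year to ultimate.
full = tri.copy()
for j in range(n - 1):
    missing = np.isnan(full[:, j + 1])
    full[missing, j + 1] = full[missing, j] * factors[j]

# Step 4: outstanding claims = ultimate minus paid to date (latest diagonal).
ultimate = full[:, -1]
paid_to_date = np.array([row[~np.isnan(row)][-1] for row in tri])
reserves = ultimate - paid_to_date
print([round(f, 4) for f in factors])   # approximately [1.5, 1.1333, 1.0588]
print(np.round(reserves, 1))            # approximately [0., 110., 360., 1040.]
```

The link ratio for each development period is taken as a column-sum ratio (the standard weighted average), so larger origin years carry more weight than a simple average of individual ratios would give them.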
3
Q

BCL Method

Assumptions

A
  1. Stability of development pattern: it assumes that the future claims development will be in line with the past claims development. The core assumption is that the claim development pattern is stable for different cohorts.
  2. Inflation implication: the basic method implicitly assumes that past inflation will continue into the future.
  3. Homogeneity: the method assumes that the data is homogenous and that the run-off pattern is the same for each origin period.
4
Q

BCL Method

Advantages and disadvantages

A

Advantages

  • Simplicity and Clarity: Conceptually straightforward, and it is easy to relate results back to the pattern of development.
  • Wide Applicability: Can be applied to a wide variety of sets of data (paid, incurred, or claim numbers).
  • Flexibility: The basic method can easily be modified to allow for data distortions.
  • Foundation for Other Methods: It can be developed to serve as a starting point for a number of other methods, such as the Bornhuetter-Ferguson method.

Disadvantages

  • Distortion by Anomalies: Results can be distorted by unusual experience (e.g., very good or very bad claims experience).
  • Volatility: If development factors (link ratios) are volatile or unstable, projections may be difficult.
  • Dependence on Experience to Date: It projects the claims experience to date. If experience is light or heavy, applying the development factor could result in an under- or over-estimation of the ultimate value without adjustment.
  • Ignores Claim Numbers/Amounts: On its own, it doesn’t separate the effects of changes in claim frequency versus claim severity.
5
Q

Bornhuetter Ferguson (BF) Method

Definition

A

The Bornhuetter-Ferguson (BF) method is a reserving method that can be viewed as a credibility estimate. It is a weighted average that combines an expected level of claims (usually estimated using the Expected Loss Ratio approach) with a projection of ultimate claims based on experience to date (usually estimated using the Chain Ladder method).

6
Q

Bornhuetter Ferguson (BF) Method

Step by Step Method

A
  1. Determine Initial Ultimate Estimate (Loss Ratio): Determine an initial expected ultimate claim amount (LR) for the relevant origin period. This loss ratio is typically derived from the previous year’s best estimate results, adjusted for inflation, rate changes, large claims, and other factors.
  2. Estimate Developed Proportion: Estimate the proportion of claims that are currently incurred (p) or paid for the period. This proportion often comes from the Chain Ladder run-off pattern.
  3. Derive Undeveloped Proportion: Derive the proportion of claims yet to be incurred or paid, calculated as (1−p).
  4. Calculate Expected Future Claims (IBNR): Determine the expected value of the undeveloped claims (IBNR/IBNER) by multiplying the initial expected ultimate claims (LR) by the undeveloped proportion (1−p): LR×(1−p).
  5. Calculate Final Ultimate Claims: Calculate the final expected ultimate claims for the origin period by summing the current value of the incurred or paid claims (C) and the expected value of the undeveloped claims (from Step 4): C+LR×(1−p).
  6. Determine Reserve: To calculate the required reserve, deduct the amount of claims paid to date from the final ultimate claims estimate.
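A minimal numerical sketch of the BF calculation, with hypothetical figures for a single origin year:

```python
# Hypothetical single origin year, following the six steps above.
premium = 10_000.0
expected_loss_ratio = 0.85
p = 0.60                     # proportion developed, e.g. from the chain ladder pattern
incurred_to_date = 5_400.0   # C, claims incurred to date
paid_to_date = 4_000.0

initial_ultimate = premium * expected_loss_ratio   # Step 1: LR = 8500
ibnr = initial_ultimate * (1 - p)                  # Step 4: LR x (1 - p) = 3400
bf_ultimate = incurred_to_date + ibnr              # Step 5: C + LR x (1 - p) = 8800
reserve = bf_ultimate - paid_to_date               # Step 6: 8800 - 4000 = 4800
print(round(bf_ultimate, 2), round(reserve, 2))    # 8800.0 4800.0
```

Note how the credibility interpretation shows through: the experience to date (C) replaces the expected claims for the developed portion p, while the prior estimate fills in the undeveloped portion (1 − p).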
7
Q

Bornhuetter Ferguson (BF) Method

Assumptions

A
  1. A Priori Estimate Credibility: The method relies on the assumption that the initial expected ultimate claims (loss ratio) used as the prior estimate is correct.
  2. Assumed Run-off Pattern: It relies on an assumed run-off pattern, usually derived from the Chain Ladder technique. This pattern (p) dictates the speed at which claims are expected to emerge.
8
Q

Bornhuetter Ferguson (BF) Method

Advantages and Disadvantages

A

Advantages

  • Stability: It provides a result that is less dependent on the volatility of claims experience to date, unlike the Chain Ladder method.
  • Utility with Sparse Data: It is very useful where the available data for a cohort is sparse (e.g., recent cohorts, long-tailed business).
  • Incorporates External Information: Combines stable external estimates (loss ratio) with internal data patterns (Chain Ladder factors).

Disadvantages

  • Subjective Prior Estimate: It can be difficult to gather information for the prior loss ratio estimate.
  • Sensitivity: The result, particularly at early stages of development, can be heavily dependent on the prior loss ratio estimate.
  • Accuracy Risk: If the selected loss ratio is inappropriate (e.g., too low or too high), the resulting estimate will be similarly biased.
9
Q

Expected Loss Ratio (ELR) Method

Definition

A

The Expected Loss Ratio method can be used to estimate future claims. It works by applying historical loss ratios (based on the company’s or industry data) to current premiums to obtain future claim estimates. It is a simple, high-level approach and is often used to derive the a priori estimate required by the Bornhuetter-Ferguson method.

10
Q

Expected Loss Ratio (ELR) Method

Step by step Method

A
  1. Gather Premium Data: Collect earned premium data (or ultimate premiums) for the cohort being reserved.
  2. Determine Historical Loss Ratio: Determine historical loss ratios, adjusting them for current factors such as claims inflation and rate changes.
  3. Select Loss Ratio: Select the estimated loss ratio (LR_expected) for the period in question.
  4. Estimate Ultimate Claims: Calculate the ultimate claim estimate by multiplying the earned premium (P) by the selected loss ratio: Ultimate Claims = P × LR_expected.
  5. Calculate Reserve: Determine the required outstanding claims reserve by subtracting claims paid to date from the estimated ultimate claims.
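The ELR calculation is simple enough to show in a few lines; all figures below are hypothetical:

```python
# Hypothetical cohort figures.
earned_premium = 12_000.0
selected_loss_ratio = 0.75   # historical ratio adjusted for inflation and rate changes
paid_to_date = 2_500.0

ultimate_claims = earned_premium * selected_loss_ratio   # Step 4: P x LR_expected = 9000
reserve = ultimate_claims - paid_to_date                 # Step 5: 9000 - 2500 = 6500
print(ultimate_claims, reserve)   # 9000.0 6500.0
```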
11
Q

Expected Loss Ratio (ELR) Method

Assumptions

A
  1. Loss Ratio Credibility: Assumes that the selected historical or a priori loss ratio is correct.
  2. Underlying Consistency: Assumes that the underlying premium rates and claims data used to calculate the historical loss ratio are relevant to the current business being reserved.
12
Q

Expected Loss Ratio (ELR) Method

Advantages and Disadvantages

A

Advantages

  • Simplicity: The method is simple to apply.
  • Utility in Sparse Data: Especially useful when data is sparse, unreliable, missing altogether, or when a company has started a new line of business.
  • Benchmarking: Gives a useful reference point against which to compare results from other, more complex methods.

Disadvantages

  • Subjectivity: The underlying assumptions, particularly when based on underwriter opinion or external benchmarks, can be subjective.
  • Model Risk: It ignores claim number development and payment patterns, relying solely on an assumed ratio.
  • Obscured Trends: Trends in frequency and severity are difficult to identify separately, as the method relies only on the aggregate ratio.
13
Q

Average Cost Per Claim (ACPC) Method

Definition

A

The Average Cost per Claim (ACPC) method is a reserving approach that relies on the average cost of claims paid or incurred. It is a member of the family of statistical methods often employed in reserving work using triangulations.

The method estimates the expected loss cost by analyzing two separate components for each origin year: claims frequency (the number of claims) and claims severity (the average size of those claims). It is not uniquely defined, and many variations are possible regarding whether it is applied to incurred, reported, settled, or pure Incurred But Not Reported (IBNR) claims, or whether the average size is based on origin, development, or calendar years.

14
Q

Average Cost Per Claim (ACPC) Method

Step by Step Method

A

The ACPC method requires development tables for both total claim amounts and claim numbers. A common approach, where the average ultimate claim size is estimated, involves the following steps:

  1. Tabulate Data: Create two separate run-off triangles (tabulated by origin year and development year): one for the number of claims (claim frequency) and one for the total claim amounts (paid or incurred).
    ◦ Note: If using incurred claims data, each cell typically comprises claims paid to date plus case estimates of outstanding reported claims.
  2. Ensure Consistency: Verify that the aggregate claim amounts used correspond accurately to the claim numbers used (i.e., the claim frequency and claim severity are consistent). Special care must be taken with claims involving partial payments (which should typically be aggregated if the claim is fully settled to avoid understating the average cost) or zero claims.
  3. Project Claim Numbers (Frequency): Apply a projection technique (such as the basic Chain Ladder method) to the claim number triangle to estimate the ultimate number of claims for each origin period.
  4. Calculate Average Cost per Claim (Severity): Calculate the historical average claim size (severity) for each cell in the triangle by dividing the total claim amount by the number of claims in that corresponding cell.
  5. Estimate Ultimate Severity: Apply trending or projection methods to the historical average claim sizes to arrive at an estimated average ultimate claim size for each origin period.
  6. Calculate Ultimate Claims: Calculate the estimated ultimate claim amount for the origin period by multiplying the estimated ultimate number of claims (Step 3) by the estimated average ultimate claim size (Step 5).
  7. Determine Reserve: Calculate the required outstanding claims reserve by subtracting claims paid to date (obtained from a separate source if working with incurred or reported claims data) from the estimated ultimate claims amount.

Alternative Reserve Calculation: If the average claim size used in Step 5 is the average of future payments rather than the ultimate average claim, the required claims reserve can be calculated directly by multiplying the estimated number of future claims by the estimated average claim size.
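The final two steps can be sketched as follows, assuming the ultimate claim numbers (Step 3) and average claim sizes (Step 5) have already been projected; all figures are hypothetical:

```python
# Hypothetical projected ultimates for three origin years.
ultimate_numbers = [120, 130, 140]              # from the claim-number triangle (Step 3)
ultimate_avg_cost = [2500.0, 2600.0, 2700.0]    # trended average claim sizes (Step 5)
paid_to_date = [280_000.0, 250_000.0, 150_000.0]

# Step 6: ultimate claims = ultimate number of claims x ultimate average cost.
ultimate_claims = [n * s for n, s in zip(ultimate_numbers, ultimate_avg_cost)]
# Step 7: reserve = ultimate claims - paid to date.
reserves = [u - p for u, p in zip(ultimate_claims, paid_to_date)]
print(ultimate_claims)   # [300000.0, 338000.0, 378000.0]
print(reserves)          # [20000.0, 88000.0, 228000.0]
```

Because frequency and severity are held separately, an expected change affecting only one component (say, a 10% severity uplift from inflation) can be applied to the relevant list alone before Step 6.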

15
Q

Average Cost Per Claim (ACPC) Method

Assumptions

A
  1. Stability of Average Cost per Claim: The method relies on the fundamental assumption of the assumed stability of the average cost per claim.
  2. Consistency of Definition: It assumes that the definition of a claim and the average cost of a claim remain consistent over time.
  3. Accuracy of Frequency and Severity Projections: It relies on the ability to accurately project the development patterns for both claim numbers and average claim sizes.
16
Q

Average Cost Per Claim (ACPC) Method

Advantages and Disadvantages

A

Advantages

  • Separates Frequency and Severity: Enables more accurate adjustments to be made if expected changes only affect frequency or only affect severity.
  • Utility when Protocols Change: Can be applied to settled claims, which can be useful when claims reserving protocols have changed over the development history, potentially invalidating other methods.
  • Latent Claims: Can be useful for estimating latent claims (reported long after occurrence) because it allows the actuary to make explicit assumptions about the average claim size, long-term inflation effects, and the expected number of claims.
  • Treatment of Large Losses: Easily allows for the adjustment of data for large claims (by excluding them or capping amounts) to focus the projection on attritional claims.

Disadvantages

  • Data Intensive: Requires reliable data for both claim amounts and claim counts.
  • High Sensitivity: Results can be easily invalidated by a change in the definition of a claim or the average cost of a claim.
  • Complexity in Payment Structures: Changes in how partial payments or zero claims are processed can disturb the method.
  • Potential Instability: The calculation can be influenced by acceleration or slowing in claims payments or settlements.
17
Q

Stochastic Reserving Methods

Step by step method

A

Stochastic claims reserving models can be broadly categorized as analytical, simulation, or Bayesian methods. Simulation methods, particularly bootstrapping, are commonly used:

  1. Fit a Model and Calculate Residuals: Fit a deterministic model (such as the Chain Ladder/ODP GLM) to the historical claims triangle. Calculate the expected values (fitted values) that would have occurred if the data perfectly followed the model, and then calculate the residuals (the differences between actual values and fitted values).
  2. Create Pseudo-Data Sets (Sampling): Re-sample (with replacement) from the residual distribution and combine these residuals with the fitted values to create a large number of alternative or pseudo-data sets.
  3. Calculate Projections: Apply the original deterministic reserving method (e.g., Chain Ladder) to each of the generated pseudo-data sets to obtain a revised reserve estimate for each set.
  4. Collate and Analyze Distribution: Collate the resulting reserve projections to determine the distribution, moments (mean, variance), and percentiles of the predicted reserve distribution.
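A simplified sketch of this residual bootstrap on a hypothetical triangle. It uses unscaled Pearson residuals; a production implementation would also adjust the residuals for degrees of freedom and add a process-variance stage when simulating future payments:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical cumulative claims triangle (NaN = not yet observed).
cum = np.array([
    [1000.0, 1400.0, 1620.0, 1700.0],
    [1200.0, 1900.0, 2150.0, np.nan],
    [1100.0, 1700.0, np.nan, np.nan],
    [1250.0, np.nan, np.nan, np.nan],
])

def dev_factors(c):
    """Volume-weighted chain ladder link ratios."""
    f = []
    for j in range(c.shape[1] - 1):
        obs = ~np.isnan(c[:, j + 1])
        f.append(c[obs, j + 1].sum() / c[obs, j].sum())
    return f

def cl_reserve(c):
    """Total chain ladder reserve for a cumulative triangle."""
    full, f = c.copy(), dev_factors(c)
    for j in range(c.shape[1] - 1):
        miss = np.isnan(full[:, j + 1])
        full[miss, j + 1] = full[miss, j] * f[j]
    latest = np.array([row[~np.isnan(row)][-1] for row in c])
    return full[:, -1].sum() - latest.sum()

# Step 1: fitted values by backwards recursion from the latest diagonal,
# then residuals between actual and fitted incremental claims.
f = dev_factors(cum)
fit_cum = np.full_like(cum, np.nan)
for i in range(cum.shape[0]):
    last = np.flatnonzero(~np.isnan(cum[i]))[-1]
    fit_cum[i, last] = cum[i, last]
    for j in range(last - 1, -1, -1):
        fit_cum[i, j] = fit_cum[i, j + 1] / f[j]

def to_inc(c):
    inc = c.copy()
    inc[:, 1:] = c[:, 1:] - c[:, :-1]
    return inc

inc, fit_inc = to_inc(cum), to_inc(fit_cum)
obs = ~np.isnan(inc)
resid = (inc[obs] - fit_inc[obs]) / np.sqrt(fit_inc[obs])   # unscaled Pearson residuals

# Steps 2-3: resample residuals, rebuild pseudo-triangles, re-run the chain ladder.
boot = []
for _ in range(1000):
    r = rng.choice(resid, size=obs.sum(), replace=True)
    pseudo_inc = inc.copy()
    pseudo_inc[obs] = fit_inc[obs] + r * np.sqrt(fit_inc[obs])
    pseudo_cum = np.where(np.isnan(inc), np.nan, np.nancumsum(pseudo_inc, axis=1))
    boot.append(cl_reserve(pseudo_cum))

# Step 4: summarise the simulated reserve distribution.
boot = np.array(boot)
print(round(boot.mean(), 1), round(boot.std(), 1), round(np.percentile(boot, 75), 1))
```

The mean of the bootstrap distribution sits close to the deterministic chain ladder reserve, while the spread and percentiles quantify the uncertainty around it.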
17
Q

Stochastic Reserving Methods

Definition

A

Stochastic reserving refers to methods used to assess the uncertainty surrounding reserve estimates. While traditional (deterministic) methods like the Chain Ladder produce a single best estimate of the claims reserve, stochastic methods provide a confidence interval or a full distribution of the reserves.

By modelling the random variation around the chosen development pattern, stochastic methods quantify the likely error involved in using a best estimate, thus providing information about the distribution of possible outcomes (e.g., the variance).

18
Q

The Mack Method

Definition and Methodology

A

The Mack method is an analytical stochastic reserving model that builds upon the assumptions of the basic Chain Ladder method. It is effectively distribution-free, focusing on the first two moments (mean and variance) of the ultimate reserve outcome, rather than specifying the precise underlying distribution. The mean outcome matches the result derived by the standard Chain Ladder method.

The methodology involves estimating the prediction error (the variance of the forecast error) by calculating variance parameters (σj²) from the observed run-off data and combining them across development periods.
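For reference, one common presentation of the Mack estimators, with C_{i,j} the cumulative claims of origin year i at development year j in an n-year triangle:

```latex
\hat{f}_j = \frac{\sum_{i=1}^{n-j} C_{i,j+1}}{\sum_{i=1}^{n-j} C_{i,j}},
\qquad
\hat{\sigma}_j^2 = \frac{1}{n-j-1} \sum_{i=1}^{n-j} C_{i,j}
  \left( \frac{C_{i,j+1}}{C_{i,j}} - \hat{f}_j \right)^2
```

These combine into Mack's estimator of the mean-squared prediction error of the ultimate for origin year i, reflecting both process and estimation uncertainty:

```latex
\widehat{\mathrm{mse}}\!\left(\hat{C}_{i,n}\right) = \hat{C}_{i,n}^2
  \sum_{j=n+1-i}^{n-1} \frac{\hat{\sigma}_j^2}{\hat{f}_j^2}
  \left( \frac{1}{\hat{C}_{i,j}} + \frac{1}{\sum_{k=1}^{n-j} C_{k,j}} \right)
```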

19
Q

The ODP Model (Over-Dispersed Poisson)

Definition and Methodology

A

The Over-Dispersed Poisson (ODP) model is a Generalised Linear Model (GLM) applied to claims triangles in a stochastic context. It can be used in either a deterministic or a stochastic (bootstrap) form.

The model assumes that incremental claims follow an ODP distribution, meaning the variance of the claim amount is proportional to the mean, but typically greater (hence over-dispersed). The expected values obtained from fitting the ODP model (a special case of GLM) are exactly the same as the basic Chain Ladder estimates.

The ODP model is often used in conjunction with bootstrapping to obtain a full distribution of possible outcomes:

  1. Fit the GLM: Fit the GLM assuming an ODP distribution to the incremental claims data, resulting in parameters and expected values.
  2. Calculate Residuals: Calculate the Pearson residuals (a measure of noise).
  3. Bootstrap: Sample repeatedly from these residuals to create pseudo-data sets.
  4. Refit and Forecast: Refit the model (or the corresponding Chain Ladder mechanics) to each pseudo-data set to obtain a distribution of parameters and a forecast distribution of reserve estimates.
20
Q

Accident Year (AY) Cohort

Definition

A

An Accident Year grouping combines all claims related to loss events that occurred within a specific 12-month period. This grouping is applied irrespective of when the claim was reported or paid, and regardless of when the period of cover started. This basis is consistent with a losses-occurring policy (LOD).

21
Q

Accident Year (AY) Cohort

Assumptions

A

  1. Consistency of Exposure: It is assumed that all claims stem from the same exposure cohort (the period of loss occurrence).
  2. Date Identifiability: It assumes that the date of loss is identifiable.
  3. Run-off Stability: Assumes that the future development pattern will be stable and in line with past claims development.

22
Q

Accident Year (AY) Cohort

Advantages and Disadvantages

A

Advantages

  • All claims stem from the same exposure cohort.
  • Variations between cohorts can be easily related to external influences operating at that time, such as legislative changes or changes in business volume.
  • Can be aligned with the accounting year for comparing emerging losses with the financial charges made for that period.
  • The appropriate exposure measure, earned premium, naturally aligns with the accident year approach.

Disadvantages

  • The full ultimate number or amount of claims is not known until the last claim is reported.
  • The date of loss is not always known for certain classes (e.g., subsidence, asbestos, or some London Market business), making the approach potentially impractical.
  • Rules must be established to consistently treat claims where the event date cannot be precisely determined (e.g., asbestos claims allocated over the period of exposure).
23
Q

Underwriting Year (UY) Cohort

Definition

A

An Underwriting Year grouping (also known as the year of account or policy year) combines all claims relating to loss events that can be attributed to all policies that commenced cover within a specific calendar year. This grouping applies regardless of when the claim occurred, was reported, or was paid. This basis is consistent with a risks-attaching policy.

24
Underwriting Year (UY) Cohort Assumptions
1. Relativity to Premium: The central assumption is that the outcome of all policies written in that year can be followed, allowing the use of results to test the adequacy of the premiums charged for that policy set. 2. Pattern Stability: Assumes that the run-off pattern is consistent across different underwriting cohorts. 3. Written Premium Alignment: Written premiums are the appropriate measure for loss ratio calculations using this cohort.
25
Q

Underwriting Year (UY) Cohort

Advantages and Disadvantages

A

Advantages

  • Allows tracking of the total outcome of all policies written in each year.
  • Can be used to test the adequacy of the premiums, as the policies are subject to the same set of rates and conditions.
  • IBNR and URR are automatically included in the projection to ultimate.
  • This is the most commonly used approach in many reinsurance and London Market classes.

Disadvantages

  • Claims in a particular cohort have occurred over a wider risk period (potentially two or more calendar/accident years), leading to a consequent lengthening of the characteristic run-off pattern.
  • It takes more than one year before all claims under that cohort have occurred (e.g., up to two years if all policies are annual).
  • In the early stages, claims emergence relates to pre-notified claims, new IBNR claims, and the emergence of new claims for new periods of exposure (the unexpired risk portion).
26
Q

Reporting Year (RY) Cohort

Definition

A

A Reporting Year grouping combines all claims that are reported within a given calendar year. This is applied regardless of when the policy commenced, when the incident occurred, or when the claims are ultimately paid. This basis is consistent with a claims-made policy.
27
Q

Reporting Year (RY) Cohort

Assumptions

A
  1. Fixed Cohort: It assumes that no further claims will be added to the cohort after the end of the original reporting period.
  2. Settlement Stability: Assumes that the development pattern for settlement delays is stable across cohorts.
  3. Consistency of Reporting Date: Assumes a consistent definition is used for the claim notification date.
28
Q

Reporting Year (RY) Cohort

Advantages and Disadvantages

A

Advantages

  • After the end of the reporting period, there is a fixed group of claims to track.
  • The claims development is quicker because it excludes the reporting delay phase.
  • Helps to monitor the development of notified claims, to assess the delay before reliable estimates can be observed once claims are notified to the insurer.

Disadvantages

  • The projection will not allow for pure IBNR (claims not yet notified); a separate estimate is needed for IBNR claims.
  • Claims come from several different exposure periods, which may have differed in business volume, coverage, and claim settlement patterns, thus obscuring underlying trends.
  • There is no natural exposure base that consistently corresponds to the risk under development, making loss ratio calculations difficult.
29
Q

Supervision of Solvency / Statutory Returns

Reserving basis

A

Prudent basis, or best estimate plus risk margin. The basis must comply with regulation and professional guidance. The goal is typically to use the most prudent basis possible to protect policyholders. For Solvency II, a best estimate reserve (the mean of all possible outcomes), discounted for expected future investment income, plus a risk margin is required. The rules may require a going concern, run-off, or break-up basis.
30
Q

Published Accounts

Reserving basis

A

Prudent side of best estimate, or best estimate. The basis depends heavily on the legislation and accounting principles governing the territory (e.g., IFRS 4, IFRS 17, Solvency II, UK GAAP). It must consider whether a going concern basis is used and whether the accounts must show a true and fair view. Often, a prudent approach is taken to ensure stability and consistency from year to year. Reserves may or may not be discounted, depending on requirements.
30
Q

Estimating Claims Costs for Premium Rating

Reserving basis

A

Best estimate basis. The premium rating process requires a realistic basis for the expected claim costs. If a prudent basis were used, premiums would be set too high; if an optimistic basis were used, premiums would be too low. Unlike solvency reserving, the rating basis aims to avoid implicit margins.
31
Q

Internal Management Accounts, Business Plans, and Budgets

Reserving basis

A

Best estimate basis. The aim is to give a realistic view of the company's financial condition and expected future experience. Assumptions should be realistic and not distorted by margins. Management may also request figures based on alternative bases or scenarios to test sensitivity (e.g., changes in claim frequency, claim size, or claims inflation).
32
Q

Valuation for Sale or Purchase / Transferring a Book of Business

Reserving basis

A

Close to best estimate. The valuation is a key component in M&A transactions. While best estimate is the likely starting point, the seller may prefer a more optimistic estimate, and the buyer may prefer a more prudent or pessimistic estimate to obtain a lower purchase price.
33
Q

Ascertaining Tax Liabilities

Reserving basis

A

Best estimate basis is likely. It is crucial to comply with tax regulations in the relevant country. Tax authorities may penalise over-reserving because this delays the emergence of profit and the payment of tax.
34
Q

Testing the Adequacy of Previous Reserve Estimates

Reserving basis

A

Same basis as the original estimation. For a meaningful comparison, if analysing incurred claims, the actuary should use the same assumptions as those used in the previous reserving exercise. A range of bases may be used to test resilience.