Geometric return
The assumption is that interim payments (e.g., dividends or coupons) are continuously reinvested. Note that this approach ensures that the asset price can never be negative.
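A minimal sketch of the geometric (log) return in Python; the helper names are illustrative, and the interim payment is assumed to be reinvested into the position:

```python
import math

def geometric_return(p_prev, p_now, d_now=0.0):
    """Geometric (log) return, with any interim payment d_now
    assumed continuously reinvested into the position."""
    return math.log((p_now + d_now) / p_prev)

def price_from_return(p_prev, r):
    """Implied end-of-period price: exp(r) > 0 for any r, so the
    price can never go negative no matter how large the loss."""
    return p_prev * math.exp(r)
```

Even a very large negative return maps to a small positive price, which is the sense in which geometric returns rule out negative asset prices.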
Some important properties:

Estimate VaR using a historical simulation approach (formula, how to calculate)
For n observations, the observation that determines VaR at the (1 − α) confidence level is: (α × n) + 1, counting down from the largest loss.
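A sketch of the calculation, assuming losses are reported as positive numbers:

```python
def hs_var(losses, confidence=0.95):
    """Historical-simulation VaR: with n observations, the VaR at the
    (1 - alpha) confidence level is the (alpha*n + 1)-th largest loss."""
    n = len(losses)
    alpha = 1 - confidence
    k = int(alpha * n) + 1                  # observation number, 1-indexed
    return sorted(losses, reverse=True)[k - 1]
```

For example, with 1,000 losses at 95% confidence, (0.05 × 1000) + 1 = 51, so the VaR is the 51st largest loss.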
Normal VaR
In equation form, the VaR at significance level α is: VaR = −µ + σ × z, where z is the standard normal deviate for the chosen confidence level (e.g., 1.645 at 95%); multiplying by the position value expresses VaR in currency units.

Lognormal VaR
The lognormal distribution is right-skewed with positive outliers and bounded below by zero. As a result, the lognormal distribution is commonly used to rule out the possibility of negative asset prices (Pt).
The calculation of lognormal VaR (geometric returns) and normal VaR (arithmetic returns) will be similar when we are dealing with short time periods and small expected returns.
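A sketch comparing the two measures (the daily figures below are purely illustrative). For short horizons the two estimates nearly coincide, while the lognormal VaR can never exceed the position value:

```python
import math
from statistics import NormalDist

def normal_var(mu, sigma, value, confidence=0.95):
    """Normal VaR with arithmetic returns: VaR = (-mu + sigma * z) * value."""
    z = NormalDist().inv_cdf(confidence)
    return (-mu + sigma * z) * value

def lognormal_var(mu, sigma, value, confidence=0.95):
    """Lognormal VaR with geometric returns:
    VaR = value * (1 - exp(mu - sigma * z)); bounded above by the position value."""
    z = NormalDist().inv_cdf(confidence)
    return value * (1 - math.exp(mu - sigma * z))
```

With daily parameters (say mu = 0.0005, sigma = 0.01) the two differ by well under 1%; with an extreme sigma the normal VaR can exceed the position value while the lognormal VaR cannot.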

Expected shortfall
A major limitation of the VaR measure is that it does not tell the investor the magnitude of the actual loss; VaR only provides the maximum amount we can expect to lose at a given confidence level. The expected shortfall (ES) provides an estimate of the tail loss by averaging the VaRs for increasing confidence levels in the tail. Specifically, the tail mass is divided into n equal slices and the corresponding n − 1 VaRs are computed.
Note that as n increases, the expected shortfall will increase and approach the theoretical true loss.
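The slicing-and-averaging construction can be sketched for a standard normal loss distribution (function names are illustrative); as n grows, the average climbs toward the true ES (about 2.063 at the 95% level for a standard normal):

```python
from statistics import NormalDist

def expected_shortfall(confidence=0.95, n_slices=1000):
    """Estimate ES by dividing the tail mass beyond the confidence level
    into n equal slices and averaging the n - 1 VaRs (quantiles) at the
    interior slice boundaries."""
    tail = 1 - confidence
    q = NormalDist().inv_cdf
    tail_vars = [q(confidence + i * tail / n_slices) for i in range(1, n_slices)]
    return sum(tail_vars) / len(tail_vars)
```

With n = 10 the estimate is around 2.03; with thousands of slices it approaches the theoretical 2.063, illustrating the note above.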
Coherent risk measures
A more general risk measure than either VaR or ES is known as a coherent risk measure. A coherent risk measure is a weighted average of the quantiles of the loss distribution where the weights are user-specific based on individual risk aversion.
ES (as well as VaR) is a special case of a coherent risk measure. When modeling the ES case, the weighting function is set to [1 / (1 − confidence level)] for all tail losses. All other quantiles have a weight of zero.
Under expected shortfall estimation, the tail region is divided into equal probability slices and then multiplied by the corresponding quantiles. Under the more general coherent risk measure, the entire distribution is divided into equal probability slices weighted by the more general risk aversion (weighting) function.
This coherent risk measure is more sensitive to the choice of n than expected shortfall, but will converge to the risk measure’s true value for a sufficiently large number of observations. The intuition is that as n increases, the quantiles will be further into the tails where more extreme values of the distribution are located.
Properties of coherent risk measures
Where X and Y represent future values and rho, ρ(.), is the risk measure, a coherent risk measure must meet all four of the following conditions:
1. Monotonicity: if Y ≥ X, then ρ(Y) ≤ ρ(X)
2. Subadditivity: ρ(X + Y) ≤ ρ(X) + ρ(Y)
3. Positive homogeneity: ρ(hX) = h × ρ(X) for h > 0
4. Translation invariance: ρ(X + n) = ρ(X) − n for a riskless amount n
Both expected shortfall (ES) and value at risk (VaR) are special cases of the general risk measure; however, only ES qualifies as a spectral measure. Spectral measures are necessarily coherent because (in part) they reflect well-behaved risk aversion. Expected shortfall is coherent, which implies ES is always sub-additive. Value at risk, on the other hand, is not a spectral measure: VaR is not always sub-additive (i.e., VaR is only sub-additive if the distribution is elliptical, e.g., normal) and therefore VaR is not coherent.
Non-subadditivity is treacherous because it suggests that diversification might be a bad thing, which would suggest the laughable conclusion that putting all your eggs into one basket might be good risk management practice!
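The classic illustration of this (a hypothetical two-bond example, not taken from the text above) shows VaR violating sub-additivity: each bond alone has a 95% VaR of zero, but combining them produces a positive VaR.

```python
def discrete_var(outcomes, confidence=0.95):
    """VaR of a discrete loss distribution given as (loss, probability)
    pairs: the smallest loss l such that P(loss <= l) >= confidence."""
    cum = 0.0
    for loss, prob in sorted(outcomes):
        cum += prob
        if cum >= confidence:
            return loss
    raise ValueError("probabilities must sum to 1")

# Two independent bonds, each losing 100 with probability 0.04 (else 0):
bond = [(0, 0.96), (100, 0.04)]
# Portfolio of both: 0, 100, or 200 lost depending on how many default
portfolio = [(0, 0.96 * 0.96), (100, 2 * 0.96 * 0.04), (200, 0.04 * 0.04)]
```

Each bond's 95% VaR is 0 (default probability 4% < 5%), but the portfolio defaults at least once with probability 7.84% > 5%, so its VaR is 100: VaR(A + B) > VaR(A) + VaR(B).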

General risk measure
A coherent risk measure is a special case of the general risk measure, which itself is a weighted average of the quantiles (denoted qp) of the loss distribution.
Spectral measures are coherent
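A sketch of a sample-based spectral measure as a weighted average of quantiles (order statistics). With the ES weighting function it reproduces the tail average, and any increasing weighting function (well-behaved risk aversion) produces a measure at least as large as the plain mean:

```python
def spectral_measure(losses, weight_fn):
    """General/spectral risk measure on a sample: weight each order
    statistic (empirical quantile) by phi(p) and take the weighted average."""
    xs = sorted(losses)
    n = len(xs)
    weighted = total = 0.0
    for i, x in enumerate(xs, start=1):
        w = weight_fn(i / n)        # p = i/n is the cumulative probability
        weighted += w * x
        total += w
    return weighted / total

def es_weights(confidence):
    """ES as a special case: weight 1/(1 - c) on tail quantiles, 0 elsewhere."""
    return lambda p: 1 / (1 - confidence) if p > confidence else 0.0

losses = list(range(1, 101))        # 100 equally likely losses
```

With these 100 losses and the 95% ES weights, the measure is the average of the worst five losses (96 through 100), i.e., 98.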

Evaluate estimators of risk measures by estimating their standard errors
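One standard result here is that the standard error of an estimated p-quantile from n observations is approximately sqrt(p(1 − p)/n) divided by the density at that quantile. A sketch (the numbers are purely illustrative):

```python
import math
from statistics import NormalDist

def quantile_se(p, n, pdf_at_quantile):
    """Approximate standard error of an estimated p-quantile from n
    observations: sqrt(p * (1 - p) / n) / f(q_p)."""
    return math.sqrt(p * (1 - p) / n) / pdf_at_quantile

# Illustration: the 5% quantile (95% VaR) of a standard normal, n = 1000
nd = NormalDist()
q = nd.inv_cdf(0.05)
se = quantile_se(0.05, 1000, nd.pdf(q))
```

Note the usual behavior: quadrupling the sample size halves the standard error, and the standard error grows as the quantile moves deeper into the tail (where the density f is smaller).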



Quantile-quantile (QQ) plot
The quantile-quantile (QQ) plot is a straightforward way to visually examine if empirical data fits the reference or hypothesized theoretical distribution.
A QQ plot is useful in a number of ways:
Other:
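A QQ plot is just matched pairs of empirical and theoretical quantiles; a sketch using the (i − 0.5)/n plotting positions (one common convention among several):

```python
from statistics import NormalDist

def qq_points(data):
    """Quantile-quantile pairs against the standard normal: the i-th
    sorted observation is matched with the (i - 0.5)/n theoretical quantile."""
    xs = sorted(data)
    n = len(xs)
    nd = NormalDist()
    return [(nd.inv_cdf((i - 0.5) / n), x) for i, x in enumerate(xs, start=1)]
```

If the data really come from a normal distribution with mean m and standard deviation s, the points fall on the straight line y = m + s·x; curvature or stray tail points flag a mismatch with the reference distribution.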
Parametric approaches to estimating VaR
We can think of parametric approaches as fitting curves through the data and then reading off the VaR from the fitted curve.
In making use of a parametric approach, we therefore need to take account of both the statistical distribution and the type of data to which it applies.
Bootstrap historical simulation
Describe historical simulation using non-parametric density estimation
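A sketch of bootstrapped historical simulation (the resample count and seed are illustrative): resample the data with replacement many times, take the VaR of each resample, average the resampled VaRs, and use their spread as a confidence interval.

```python
import random

def hs_var(losses, confidence=0.95):
    """Historical-simulation VaR: quantile of the empirical loss distribution."""
    xs = sorted(losses)
    return xs[int(confidence * len(xs))]

def bootstrap_var(losses, confidence=0.95, n_boot=1000, seed=42):
    """Bootstrap HS: VaR of each resample-with-replacement, averaged,
    with a 95% confidence interval from the resampled VaRs."""
    rng = random.Random(seed)
    vars_ = sorted(
        hs_var(rng.choices(losses, k=len(losses)), confidence)
        for _ in range(n_boot)
    )
    avg = sum(vars_) / n_boot
    ci = (vars_[int(0.025 * n_boot)], vars_[int(0.975 * n_boot)])
    return avg, ci
```

The same resampling machinery works for ES: replace the inner VaR with a tail average.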

Age-weighted Historical Simulation
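A sketch of the age-weighting (hybrid) scheme: the weight on the i-th most recent observation is λ^(i−1)(1 − λ)/(1 − λ^n), and VaR is found by accumulating weights over losses sorted from largest to smallest until the tail probability is reached. The choice λ = 0.98 is illustrative, and losses are assumed ordered most recent first.

```python
def age_weights(n, lam=0.98):
    """Age weights w(i) = lam**(i-1) * (1 - lam) / (1 - lam**n),
    where i = 1 is the most recent observation; weights sum to 1."""
    return [lam ** (i - 1) * (1 - lam) / (1 - lam ** n) for i in range(1, n + 1)]

def age_weighted_var(losses, confidence=0.95, lam=0.98):
    """losses[0] is the most recent observation. Accumulate age weights
    over losses sorted largest first; VaR is the loss at which the
    cumulative weight reaches 1 - confidence."""
    w = age_weights(len(losses), lam)
    pairs = sorted(zip(losses, w), reverse=True)    # largest loss first
    cum = 0.0
    for loss, weight in pairs:
        cum += weight
        if cum >= 1 - confidence:
            return loss
    return pairs[-1][0]
```

The effect: the same extreme loss produces a higher VaR when it is recent (large weight) than when it is old (tiny weight), which is exactly the sensitivity equal weighting lacks.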

Volatility-weighted Historical Simulation
Another approach is to weight the individual observations by volatility rather than proximity to the current date. The intuition is that if recent volatility has increased, then using historical data will underestimate the current risk level. Similarly, if current volatility is markedly reduced, the impact of older data with higher periods of volatility will overstate the current risk level.
This approach can be combined with the age-weighted approach to increase the sensitivity of risk estimates to large losses, and to reduce the potential for distortions and ghost effects. It can also be combined with order statistics or bootstrap methods to estimate confidence intervals for VaR or ES.
There are several advantages of the volatility-weighted method:
Algorithm:
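A simplified sketch of the algorithm, assuming EWMA volatility estimates stand in for whatever conditional volatility model is used (λ = 0.94 and the seeding of the recursion are illustrative choices):

```python
def ewma_vols(returns, lam=0.94):
    """EWMA volatility estimate prevailing at each date t
    (seeded, as an assumption, with the first squared return)."""
    var = returns[0] ** 2
    vols = []
    for r in returns:
        vols.append(var ** 0.5)
        var = lam * var + (1 - lam) * r ** 2
    return vols

def vol_weighted_returns(returns, lam=0.94):
    """Scale each historical return by the ratio of the current
    volatility forecast to the volatility prevailing at that date."""
    vols = ewma_vols(returns, lam)
    current = (lam * vols[-1] ** 2 + (1 - lam) * returns[-1] ** 2) ** 0.5
    return [r * current / v for r, v in zip(returns, vols)]
```

HS VaR or ES is then computed on the adjusted returns: if current volatility is double the old level, old returns are roughly doubled before the quantile is read off, so the risk estimate reflects today's conditions.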

Correlation-weighted Historical Simulation
For two positions, the adjustment uses the Choleski decomposition of the correlation matrix,
A = [1, 0; ρ, (1 − ρ²)^0.5],
together with the inverse of a 2×2 matrix A = [a11, a12; a21, a22], which is (1/det(A)) × [a22, −a12; −a21, a11].
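For two standardized, uncorrelated series the adjustment reduces to y′ = ρ·x + (1 − ρ²)^0.5·y, i.e., applying the 2×2 Choleski factor. A sketch (the tiny four-point series is purely illustrative):

```python
def correlation_adjust(x, y, rho):
    """Impose target correlation rho on two standardized, uncorrelated
    series via the Choleski factor [[1, 0], [rho, sqrt(1 - rho**2)]]."""
    s = (1 - rho ** 2) ** 0.5
    return x[:], [rho * xi + s * yi for xi, yi in zip(x, y)]

def corr(a, b):
    """Sample (population-style) correlation coefficient."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / n
    sa = (sum((ai - ma) ** 2 for ai in a) / n) ** 0.5
    sb = (sum((bi - mb) ** 2 for bi in b) / n) ** 0.5
    return cov / (sa * sb)

x = [1.0, -1.0, 1.0, -1.0]     # standardized and exactly uncorrelated
y = [1.0, 1.0, -1.0, -1.0]
```

In the general case the historical returns are first multiplied by the inverse of the old Choleski factor (to strip out the old correlation) and then by the new one, which is where the 2×2 inverse formula above comes in.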
Filtered Historical Simulation
Filtered historical simulation is a form of semi-parametric bootstrap which aims to combine the benefits of HS with the power and flexibility of conditional volatility models such as GARCH.
Key steps of the filtered HS:
Identify advantages and disadvantages of non-parametric estimation methods
Advantages of non-parametric methods include the following:
Disadvantages of non-parametric methods include the following:
Absolute vs relative VaR
absolute VaR = −µ + σ × α; i.e., the worst expected loss relative to the current (initial) position, so the mean return µ enters
relative VaR = σ × α; i.e., the worst expected loss relative to the expected future (end-of-period) position
(here α denotes the standard normal deviate for the chosen confidence level, e.g., 1.645 at 95%)
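A sketch of both measures; by construction the two differ exactly by the expected profit µ over the horizon:

```python
from statistics import NormalDist

def relative_var(sigma, value, confidence=0.95):
    """VaR relative to the expected end-of-period value: sigma * z."""
    return sigma * NormalDist().inv_cdf(confidence) * value

def absolute_var(mu, sigma, value, confidence=0.95):
    """VaR relative to the current value (zero): -mu + sigma * z."""
    return (-mu + sigma * NormalDist().inv_cdf(confidence)) * value
```

With a positive expected return, absolute VaR is the smaller of the two, since part of the potential loss is absorbed by the expected gain.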
Ghost effects
We can have a VaR that is unduly high (or low) because of a small cluster of high loss observations, or even just a single high loss; the measured VaR will continue to be high (or low) until n days or so have passed and the observation has fallen out of the sample period. At that point the VaR will fall again, but the fall is only a ghost effect created by the weighting structure and the length of the sample period used.
There is an abrupt change in the n-day HS estimate as an extreme loss moves from (t − n) to (t − n − 1), i.e., drops out of the estimation window.
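A small worked illustration of a ghost effect in a 100-day rolling HS window (all numbers illustrative): a two-day cluster of large losses props up the 98% VaR for the next 100 days, then the VaR collapses the day the cluster leaves the window.

```python
import math

def hs_var(losses, confidence=0.98):
    """HS VaR as the observation just beyond the confidence-level cutoff
    (math.ceil avoids floating-point index surprises)."""
    xs = sorted(losses)
    return xs[math.ceil(confidence * len(xs))]

# 200 daily losses: quiet at 1, with a two-day loss cluster of 50
losses = [1] * 50 + [50, 50] + [1] * 148
rolling = [hs_var(losses[t - 100:t]) for t in range(100, len(losses) + 1)]
```

Nothing about current risk changes on the day the VaR drops from 50 to 1; the jump is purely an artifact of the equal-weight window.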
Spectral Risk Measure and its features

Does delta-normal (parametric) absolute VaR always increase with a longer holding period?
While absolute VaR always increases with higher confidence levels, the impact of a longer holding period is ambiguous: the return (drift) scales linearly with time while volatility scales with the square root of time, so with a positive drift the −µT term can eventually outweigh the σ√T term and absolute VaR can even decline at long horizons.
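A quick check with illustrative annualized numbers (µ = 10%, σ = 10%): VaR rises from a quarter-year to a one-year horizon, but turns negative by four years because the linear drift term dominates the square-root volatility term.

```python
from statistics import NormalDist

def delta_normal_var(mu_annual, sigma_annual, horizon_years, confidence=0.95):
    """Delta-normal absolute VaR over horizon T (as a fraction of value):
    VaR(T) = -mu * T + sigma * sqrt(T) * z."""
    z = NormalDist().inv_cdf(confidence)
    return -mu_annual * horizon_years + sigma_annual * horizon_years ** 0.5 * z
```

A negative VaR at long horizons simply says that, at that confidence level, the position is expected to be above its starting value despite the volatility.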
Component VaR
Component VaRi / Portfolio VaR = position weighti × betai; equivalently, Component VaRi = wi × βi × Portfolio VaR, and the component VaRs sum to the portfolio VaR.
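A sketch of the decomposition, with the betas computed from an illustrative covariance matrix; the components sum back to the portfolio VaR because the weighted betas sum to one:

```python
def component_vars(weights, cov, portfolio_var):
    """Component VaR_i = w_i * beta_i * portfolio VaR, where
    beta_i = cov(r_i, r_p) / var(r_p)."""
    n = len(weights)
    # portfolio variance and each asset's covariance with the portfolio
    var_p = sum(weights[i] * weights[j] * cov[i][j]
                for i in range(n) for j in range(n))
    betas = [sum(weights[j] * cov[i][j] for j in range(n)) / var_p
             for i in range(n)]
    return [weights[i] * betas[i] * portfolio_var for i in range(n)]
```

For example, with weights [0.5, 0.5] and covariances [[0.04, 0.01], [0.01, 0.09]], the betas are 2/3 and 4/3, so the components split a 1,000 portfolio VaR into roughly 333 and 667.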