
Parametric Distributions

Parametric distributions are families of distribution functions in several variables.  Various parametric distributions are given in the exam tables.  Each input variable, or dimension, of the distribution function is called a parameter.  While studying, keep in mind that parameters are simply abstract devices built into a distribution function that allow us, by manipulating them, to tweak the shape of the distribution.  Ultimately, we are still only interested in quantities like Pr(X\le x); the parameters merely help describe the distribution of X.

Transformations 

  1. Scaling:  If a random variable X has a scalable parametric distribution with parameters (a_1, a_2, \ldots, a_n, \theta), then \theta is called the scale parameter.  The scalable property means that cX can be described by the same distribution family as X, except that the parameters of its distribution are (a_1, a_2, \ldots, a_n, c\theta), where c is the scale factor.  In terms of probability, scaling a random variable has the following effect: if Y = cX with c > 0, then Pr(Y \le y) = Pr(cX\le y) = Pr(X \le \frac{y}{c}).
    Caveat: The Inverse Gaussian as given in the exam tables has a \theta among its parameters; however, that \theta is not a scale parameter, so the Inverse Gaussian is not a scale distribution in this sense.  To scale a Lognormal distribution by a factor c, adjust the parameters to (\mu + \ln{c}, \sigma), where \mu and \sigma are the usual parameters.  All the rest of the distributions given in Appendix A are scale distributions.
  2. Raising to a power:  A random variable raised to a positive power is called transformed.  If it is raised to the power -1, it is called inverse.  If it is raised to any other negative power, it is called inverse transformed.  When raising to a power, the scale parameter must be readjusted so that it remains a scale parameter in the new distribution.
  3. Exponentiating:  An example is the lognormal distribution.  If X is normal, then Y = e^X is lognormal.  In terms of probability, F_Y(y) = F_X(\ln{y}).
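The transformation rules can be made concrete with a small numerical sketch.  This is just an illustration with arbitrary parameter values, using an exponential for the scaling rule and a normal/lognormal pair for the exponentiation rule:

```python
import math

# Exponential CDF with scale parameter theta: F(x) = 1 - exp(-x/theta)
def exp_cdf(x, theta):
    return 1.0 - math.exp(-x / theta)

# Normal-family CDF, used for the lognormal check below
def norm_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

theta, c, y = 2.0, 3.0, 5.0  # arbitrary illustrative values

# Scaling: if Y = cX then Pr(Y <= y) = Pr(X <= y/c); for the exponential
# this is the same as replacing theta with c*theta.
assert math.isclose(exp_cdf(y / c, theta), exp_cdf(y, c * theta))

# Exponentiating: if X is Normal(mu, sigma), then Y = e^X is lognormal
# and F_Y(y) = F_X(ln y).
mu, sigma = 0.0, 1.0
lognormal_cdf = norm_cdf(math.log(y), mu, sigma)

# Scaling a lognormal by c shifts mu by ln(c), matching the caveat above:
# Pr(cX <= y) = Pr(X <= y/c) under the (mu + ln c, sigma) parameterization.
assert math.isclose(norm_cdf(math.log(y), mu + math.log(c), sigma),
                    norm_cdf(math.log(y / c), mu, sigma))
print("transformation checks passed")
```

Swapping in other scale distributions from the tables works the same way: replace `exp_cdf` and verify the same identity.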
Splicing
You can create a new distribution function by defining different probability densities on different intervals of the domain.  As long as the spliced density integrates to 1 over the whole domain, it is a valid distribution.  Since the total probability must be exactly 1, rescaling each piece so that the pieces' masses sum to 1 is the essential tool.
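Here is a minimal numerical sketch of splicing: two exponential densities joined at a breakpoint, each rescaled so its piece carries a chosen weight.  The breakpoint, weights, and means are arbitrary illustrations:

```python
import math

# Splice two exponential densities at breakpoint d: use density f1 on [0, d)
# and f2 on [d, inf). Each piece is rescaled so its mass equals the chosen
# weight, with w1 + w2 = 1, making the spliced function a valid density.
d, w1, w2 = 1.0, 0.6, 0.4      # illustrative breakpoint and weights
t1, t2 = 0.5, 3.0              # illustrative exponential means for the pieces

def exp_pdf(x, theta):
    return math.exp(-x / theta) / theta

def spliced_pdf(x):
    if x < d:
        # mass of f1 on [0, d) is 1 - exp(-d/t1); rescale so this piece has mass w1
        return w1 * exp_pdf(x, t1) / (1.0 - math.exp(-d / t1))
    # mass of f2 on [d, inf) is exp(-d/t2); rescale so this piece has mass w2
    return w2 * exp_pdf(x, t2) / math.exp(-d / t2)

# Numerically check that the spliced density integrates to 1 (Riemann sum
# over [0, 100], which is far into the tail of the second piece).
step = 0.001
total = sum(spliced_pdf(i * step) * step for i in range(100000))
print(round(total, 2))
```

The rescaling divisors are exactly the "scaling as a tool" idea: each piece is a conditional density on its interval, weighted so total probability is 1.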
Tail Weight
Since a density function must integrate to 1, it must tend to 0 at the extremities of its domain.  If density function A tends towards zero at a slower rate than density function B, then density A is said to have a heavier tail than density B.  Some important measures of tail weight:
  1. The fewer positive raw or central moments a distribution has, the heavier its tail.
  2. The limit of the ratio of one density (or survival) function over another indicates relative tail weight: if the ratio tends to infinity, the numerator has the heavier tail; if it tends to zero, the lighter.
  3. An increasing hazard rate function implies a lighter tail and vice versa.
  4. An increasing mean residual life function means a heavier tail and vice versa.
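Measure 2 can be checked numerically.  A sketch comparing a two-parameter Pareto tail against an exponential tail (parameter values are arbitrary illustrations):

```python
import math

# Survival functions: two-parameter Pareto vs exponential
def pareto_sf(x, alpha, theta):
    return (theta / (x + theta)) ** alpha

def exp_sf(x, theta):
    return math.exp(-x / theta)

alpha, theta = 2.0, 1.0  # illustrative parameters

# The ratio of the Pareto survival function to the exponential one grows
# without bound as x increases, so the Pareto has the heavier tail.
ratios = [pareto_sf(x, alpha, theta) / exp_sf(x, theta) for x in (1, 10, 50, 100)]
assert all(a < b for a, b in zip(ratios, ratios[1:]))  # ratio keeps increasing

# This agrees with measure 1: the Pareto with alpha = 2 has a finite mean but
# no finite higher moments, while the exponential has all positive moments.
print("Pareto tail dominates the exponential tail")
```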



Expected Values for Insurance

Before I begin, please note: I hated this chapter.  If there are any errors please let me know asap!

A deductible d is an amount that is subtracted from an insurance claim.  If you have a $500 deductible on your car insurance, your insurance company will only pay damages incurred beyond $500.  We are interested in the following random variables: (X - d)_+ and (X\wedge d).

Definitions:

  1. Payment per Loss: (X-d)_+ = \left\{ \begin{array}{ll} X-d &\mbox{ if } X>d \\ 0 &\mbox{ otherwise} \end{array} \right.
  2. Limited Payment per Loss:  (X\wedge d) = \left\{ \begin{array}{ll} X &\mbox{ if } X\le d \\ d &\mbox{ if } X>d \end{array} \right.
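The two definitions above can be sketched as simple functions (hypothetical helper names; d = 500 echoes the car-insurance example):

```python
def payment_per_loss(x, d):
    """(X - d)_+ : the insurer pays only the loss in excess of the deductible."""
    return max(x - d, 0.0)

def limited_loss(x, d):
    """X ^ d : the loss capped at d."""
    return min(x, d)

d = 500.0
for loss in (200.0, 500.0, 1300.0):
    # The two pieces always recombine to the full loss: X = (X ^ d) + (X - d)_+
    assert payment_per_loss(loss, d) + limited_loss(loss, d) == loss

print(payment_per_loss(1300.0, d), limited_loss(1300.0, d))  # 800.0 500.0
```

The per-observation identity checked in the loop is the same decomposition of E[X] that appears under Useful Relationships below.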
Expected Values:
  1. \begin{array}{rll} E[(X-d)_+] &=& \displaystyle \int_{d}^{\infty}{(x-d)f(x)dx} \\ \\ &=& \displaystyle \int_{d}^{\infty}{S(x)dx} \end{array}
     
  2. \begin{array}{rll} E[(X\wedge d)] &=& \displaystyle \int_{0}^{d}{xf(x)dx} + dS(d) \\ \\ &=& \displaystyle \int_{0}^{d}{S(x)dx} \end{array}
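Both survival-function formulas can be sanity-checked against closed forms for an exponential loss, for which S(x) = e^{-x/\theta}.  A sketch with illustrative parameter values, integrating S by a simple midpoint rule:

```python
import math

theta, d = 1000.0, 250.0  # illustrative exponential mean and deductible

def S(x):
    """Exponential survival function with mean theta."""
    return math.exp(-x / theta)

def integrate(f, a, b, step=0.1):
    """Midpoint-rule integral of f over [a, b]."""
    n = int((b - a) / step)
    return sum(f(a + (i + 0.5) * step) * step for i in range(n))

# E[(X - d)_+] = integral of S from d to infinity (truncated far into the tail)
e_excess = integrate(S, d, d + 20 * theta)
# E[X ^ d] = integral of S from 0 to d
e_limited = integrate(S, 0.0, d)

# Closed forms for the exponential: theta*e^{-d/theta} and theta*(1 - e^{-d/theta})
assert math.isclose(e_excess, theta * math.exp(-d / theta), rel_tol=1e-4)
assert math.isclose(e_limited, theta * (1.0 - math.exp(-d / theta)), rel_tol=1e-4)
print("survival-function integrals match the closed forms")
```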
We may also be interested in the payment per loss, given payment is incurred (payment per payment) X-d|X>d.
By definition:
E[X-d|X>d] = \displaystyle \frac{E[(X-d)_+]}{P(X>d)}
Since actuaries like to make things more complicated than they really are, we have special names for this expected value.  It is denoted by e_X(d) and is called mean excess loss in P&C insurance and \displaystyle {\mathop{e}\limits^{\circ}}_d is called mean residual life in life insurance.  Weishaus simplifies the notation by using the P&C notation without the random variable subscript.  I’ll use the same.
Memorize!
  1. For an exponential distribution,
    e(d) = \theta
  2. For a Pareto distribution,
    e(d) = \displaystyle \frac{\theta +d}{\alpha - 1}
  3. For a single parameter Pareto distribution,
    e(d) = \displaystyle \frac{d}{\alpha - 1} \quad (\mbox{for } d \ge \theta)
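All three memorized formulas can be verified numerically from e(d) = E[(X-d)_+]/S(d), integrating each survival function with a midpoint rule.  The parameters below are arbitrary illustrations (with d \ge \theta so the single-parameter Pareto formula applies):

```python
import math

def mean_excess(sf, d, upper, step=1.0):
    """e(d) = E[(X-d)_+] / S(d); midpoint-rule integral of S from d to upper."""
    n = int((upper - d) / step)
    integral = sum(sf(d + (i + 0.5) * step) * step for i in range(n))
    return integral / sf(d)

theta, alpha, d = 100.0, 3.0, 150.0  # illustrative parameters, d >= theta

exp_sf = lambda x: math.exp(-x / theta)
pareto_sf = lambda x: (theta / (x + theta)) ** alpha
sp_pareto_sf = lambda x: (theta / x) ** alpha  # single-parameter Pareto, x >= theta

# 1. Exponential: e(d) = theta (memoryless, so the deductible doesn't matter)
assert math.isclose(mean_excess(exp_sf, d, 6000.0), theta, rel_tol=1e-3)
# 2. Pareto: e(d) = (theta + d) / (alpha - 1)
assert math.isclose(mean_excess(pareto_sf, d, 100000.0),
                    (theta + d) / (alpha - 1), rel_tol=1e-3)
# 3. Single-parameter Pareto: e(d) = d / (alpha - 1) for d >= theta
assert math.isclose(mean_excess(sp_pareto_sf, d, 100000.0),
                    d / (alpha - 1), rel_tol=1e-3)
print("all three e(d) formulas check out")
```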
Useful Relationships:
  1. \begin{array}{rll} E[X] &=& E[X\wedge d] + E[(X-d)_+] \\ &=& E[X\wedge d] + e(d)[1-F(d)] \end{array}
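The relationship holds observation by observation, so it is easy to check with simulated losses.  A sketch using exponential losses with a fixed seed (all values are illustrative):

```python
import random

random.seed(0)
theta, d, n = 1000.0, 400.0, 200000  # illustrative mean, deductible, sample size
losses = [random.expovariate(1.0 / theta) for _ in range(n)]

mean_x = sum(losses) / n
mean_limited = sum(min(x, d) for x in losses) / n        # E[X ^ d]
mean_excess_part = sum(max(x - d, 0.0) for x in losses) / n  # E[(X - d)_+]

# E[X] = E[X ^ d] + E[(X - d)_+], exact up to floating-point rounding
assert abs(mean_x - (mean_limited + mean_excess_part)) < 1e-6 * mean_x

# Second form: E[(X - d)_+] = e(d) * [1 - F(d)], with both factors estimated
# from the observations that exceed the deductible.
payments = [x - d for x in losses if x > d]
e_d = sum(payments) / len(payments)   # empirical e(d)
s_d = len(payments) / n               # empirical 1 - F(d)
assert abs(e_d * s_d - mean_excess_part) < 1e-6 * mean_x
print("decomposition of E[X] verified on simulated losses")
```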
Actuary Speak (important for problem comprehension):
  1. The random variable (X-d)_+ is called the left censored and shifted variable: censored at zero, shifted by d.
  2. e(d) is called mean excess loss or mean residual life.
  3. The random variable X\wedge d is called the limited loss variable, and its expectation E[X\wedge d] is the limited expected value.  Depending on context, X\wedge d represents the payment per loss under a policy limit or the amount not paid due to a deductible.  d can be called a claims limit or a deductible depending on how it is used in the problem.
  4. If data are given for X as observed values with numbers of observations or probabilities, the data are called an empirical distribution.  Sometimes an empirical distribution is given in a problem, but you are still asked to assume a parametric distribution for X.

