
Other Coverage Modifications

Coinsurance \alpha is the fraction of losses covered by the policy.  For example, \alpha = 0.8 means that when a loss is incurred, 80% will be paid by the insurance company.  A claims limit u is the maximum amount that will be paid.  The order in which coinsurance, claims limits, and deductibles are applied to a loss is important and will be specified by the problem.  The expected payment per loss when all three are present in a policy is given by

E\left[Y\right] = \alpha \left[E\left[X\wedge u\right] - E\left[X \wedge d\right]\right]

where Y is the payment variable and X is the original loss variable.  The second moment is given by

E\left[Y^2\right] = \alpha^2\left(E\left[(X\wedge u)^2\right] - E\left[(X \wedge d)^2\right]-2d\left(E\left[X \wedge u\right]-E\left[X \wedge d\right]\right)\right)

The second moment can be used to find the variance of the payment per loss.  If inflation r is present, multiply the first moment by (1+r), the second moment by (1+r)^2, and divide u and d by (1+r).  For payment per payment, divide the expected values by P(X>d) = 1-F(d).
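Under an assumed loss distribution these formulas become one-liners.  A sketch assuming an exponential loss with mean \theta, for which E[X \wedge m] = \theta(1 - e^{-m/\theta}) and P(X>d) = e^{-d/\theta} (the function names and parameters are my own):

```python
import math

def limited_expected_value(theta, m):
    # E[X ∧ m] for an exponential loss with mean theta
    return theta * (1 - math.exp(-m / theta))

def expected_payment_per_loss(alpha, d, u, theta):
    # E[Y] = alpha * (E[X ∧ u] - E[X ∧ d])
    return alpha * (limited_expected_value(theta, u) - limited_expected_value(theta, d))

def expected_payment_per_payment(alpha, d, u, theta):
    # divide by P(X > d) = e^{-d/theta} for the exponential
    return expected_payment_per_loss(alpha, d, u, theta) / math.exp(-d / theta)
```

By memorylessness of the exponential, with \alpha = 1 and no limit the payment per payment should come back to \theta, which is a handy sanity check.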



Filed under Coinsurance, Coverage Modifications, Deductibles, Limits

The Lognormal Distribution

Review: If X is normal with mean \mu and standard deviation \sigma, then

Z = \displaystyle \frac{X-\mu}{\sigma}

has the standard normal distribution, with mean 0 and standard deviation 1.  To find the probability Pr(X \le x), convert X to the standard normal distribution and look up the value in the standard normal table.

\begin{array}{rll} Pr(X \le x) &=& Pr\left(\displaystyle \frac{X-\mu}{\sigma} \le \frac{x-\mu}{\sigma}\right) \\ \\ &=& \displaystyle Pr\left(Z \le \frac{x-\mu}{\sigma}\right) \\ \\ &=& \displaystyle \mathcal{N}\left(\frac{x-\mu}{\sigma}\right) \end{array}
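The table lookup can be reproduced in code via the error function, since \mathcal{N}(z) = \frac{1}{2}\left(1 + \mathrm{erf}(z/\sqrt{2})\right).  A minimal stdlib-only sketch:

```python
import math

def std_normal_cdf(z):
    # N(z) via the error function: N(z) = (1 + erf(z / sqrt(2))) / 2
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def normal_cdf(x, mu, sigma):
    # Pr(X <= x) by converting X to the standard normal first
    return std_normal_cdf((x - mu) / sigma)
```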

If V is a weighted sum of n normal random variables X_i, i = 1, \ldots, n, with means \mu_i, variances \sigma^2_i, and weights w_i, then

\displaystyle E\left[\sum_{i=1}^n w_iX_i\right] = \sum_{i=1}^n w_i\mu_i

and variance

\displaystyle Var\left(\sum_{i=1}^n w_iX_i\right) = \sum_{i=1}^n \sum_{j=1}^n w_iw_j\sigma_{ij}

where \sigma_{ij} is the covariance between X_i and X_j.  Note when i=j, \sigma_{ij} = \sigma_i^2 = \sigma_j^2.

Remember: A sum of random variables is not the same as a mixture distribution!  The expected value is the same, but the variance is not.  A sum of normal random variables is also normal.  So V is normal with the above mean and variance.

Actuary Speak: The normal is an example of a stable distribution.  The sum of independent random variables from the same distribution family produces a random variable that is also from that family.

The fun stuff:
If X is normal, then Y = e^X is lognormal.  If X has mean \mu and standard deviation \sigma, then

\begin{array}{rll} \displaystyle E\left[Y\right] &=& E\left[e^X\right] \\ \\ \displaystyle &=& e^{\mu + \frac{1}{2}\sigma^2} \\ \\ Var\left(e^X\right) &=& e^{2\mu + \sigma^2}\left(e^{\sigma^2} - 1\right)\end{array}
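These two formulas can be sanity-checked by simulation.  A sketch with hypothetical parameters \mu = 0.1 and \sigma = 0.4:

```python
import math
import random

def lognormal_mean(mu, sigma):
    # E[e^X] = exp(mu + sigma^2 / 2)
    return math.exp(mu + 0.5 * sigma**2)

def lognormal_var(mu, sigma):
    # Var(e^X) = exp(2 mu + sigma^2) * (exp(sigma^2) - 1)
    return math.exp(2 * mu + sigma**2) * (math.exp(sigma**2) - 1)

# Monte Carlo check of the mean (hypothetical parameters)
random.seed(42)
mu, sigma = 0.1, 0.4
sample = [math.exp(random.gauss(mu, sigma)) for _ in range(200_000)]
sim_mean = sum(sample) / len(sample)
```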

Recall FV = e^\delta where FV is the future value of an investment growing at a continuously compounded rate of \delta for one period.  If the rate of growth is a normal distributed random variable, then the future value is lognormal.  The Black-Scholes model for option prices assumes stocks appreciate at a continuously compounded rate that is normally distributed.

S_t = S_0e^{R(0,t)}

where S_t is the stock price at time t, S_0 is the current price, and R(0,t) is the random variable for the rate of return from time 0 to t.  Now consider the situation where R(0,t) is the sum of iid normal random variables R(0,h) + R(h,2h) + \ldots + R((n-1)h,t), where nh = t, each having mean \mu_h and variance \sigma_h^2.  Then

\begin{array}{rll} E\left[R(0,t)\right] &=& n\mu_h \\ Var\left(R(0,t)\right) &=& n\sigma_h^2 \end{array}

If h represents 1 year, this says that the expected return over 10 years is 10 times the expected one-year return, and the standard deviation is \sqrt{10} times the annual standard deviation.  This allows us to formulate functions for the mean and standard deviation with respect to time.  Suppose we write

\begin{array}{rll} \displaystyle \mu(t) &=& \left(\alpha - \delta -\frac{1}{2}\sigma^2\right)t \\ \sigma(t) &=& \sigma \sqrt{t} \end{array}

where \alpha is the stock's continuously compounded expected rate of return and \delta is the continuous rate of dividend payout.  Since all normal random variables are transformations of the standard normal, we can write R(0,t) = \mu(t) + Z\sigma(t).  The model for the stock price becomes

\displaystyle S_t = S_0e^{\left(\alpha - \delta - \frac{1}{2}\sigma^2\right)t + Z\sigma\sqrt{t}}

In this model, the expected value of the stock price at time t is

E\left[S_t\right] = S_0e^{(\alpha - \delta)t}

Actuary Speak: The standard deviation \sigma of the return rate is called the volatility of the stock.  This term comes from expressing the rate of return as an Ito process. \mu(t) is called the drift term and \sigma(t) is called the volatility term.

Confidence intervals: To find the range of stock prices that corresponds to a particular confidence interval, we need only look at the confidence interval on the standard normal distribution then translate that interval into stock prices using the equation for S_t.

Example: z \in \left[-1.96, 1.96\right] bounds the central 95% of the standard normal distribution \mathcal{N}(z).  Suppose t = \frac{1}{3}, \alpha = 0.15, \delta = 0.01, \sigma = 0.3, and S_0 = 40.  Then the 95% confidence interval for S_t is

\left[40e^{(0.15-0.01-\frac{1}{2}0.3^2)\frac{1}{3} + (-1.96)0.3\sqrt{\frac{1}{3}}},40e^{(0.15-0.01-\frac{1}{2}0.3^2)\frac{1}{3} + (1.96)0.3\sqrt{\frac{1}{3}}}\right]

which corresponds to the price interval of

\left[29.40,57.98\right]
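The arithmetic above can be scripted directly from the model for S_t.  A sketch (the function name is my own):

```python
import math

def stock_price_ci(s0, alpha, delta, sigma, t, z=1.96):
    # lognormal model: S_t = S_0 * exp((alpha - delta - sigma^2/2) t + Z sigma sqrt(t))
    mu_t = (alpha - delta - 0.5 * sigma**2) * t
    sig_t = sigma * math.sqrt(t)
    return (s0 * math.exp(mu_t - z * sig_t), s0 * math.exp(mu_t + z * sig_t))

# parameters from the example above
lo, hi = stock_price_ci(40, 0.15, 0.01, 0.3, 1/3)
```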

Probabilities: Probability calculations on stock prices require a bit more mental gymnastics.

\begin{array}{rll} \displaystyle Pr\left(S_t<K\right) &=& Pr\left(\frac{S_t}{S_0} < \frac{K}{S_0}\right) \\ \\ \displaystyle &=& Pr\left(\ln{\frac{S_t}{S_0}} < \ln{\frac{K}{S_0}}\right) \\ \\ \displaystyle &=& Pr\left(Z< \frac{\ln{\frac{K}{S_0}} - \mu(t)}{\sigma(t)}\right) \\ \\ \displaystyle &=& Pr\left(Z<\frac{\ln{\frac{K}{S_0}} - \left(\alpha - \delta - \frac{1}{2}\sigma^2\right)t}{\sigma\sqrt{t}}\right) \end{array}

Conditional Expected Value: Define

\begin{array}{rll} \displaystyle d_1 &=& -\frac{\ln{\frac{K}{S_0}} - \left(\alpha - \delta + \frac{1}{2}\sigma^2\right)t}{\sigma\sqrt{t}} \\ \\ \displaystyle d_2 &=& -\frac{\ln{\frac{K}{S_0}}- \left(\alpha - \delta - \frac{1}{2}\sigma^2\right)t}{\sigma\sqrt{t}} \end{array}

Then

\begin{array}{rll} \displaystyle E\left[S_t|S_t<K\right] &=& S_0e^{(\alpha - \delta)t}\frac{\mathcal{N}(-d_1)}{\mathcal{N}(-d_2)} \\ \\ \displaystyle E\left[S_t|S_t>K\right] &=& S_0e^{(\alpha - \delta)t}\frac{\mathcal{N}(d_1)}{\mathcal{N}(d_2)} \end{array}

This gives the expected stock price at time t given that it is less than K or greater than K respectively.

Black-Scholes formula: A call option C_t on stock S_t has value \max\left(0,S_t - K\right) at time t.  The option pays out if S_t > K.  So the value of this option at time 0 is the probability that it pays out at time t, discounted at the risk-free interest rate r, multiplied by the expected value of S_t - K given that S_t > K.  In other words,

\begin{array}{rll} \displaystyle C_0 &=& e^{-rt}Pr\left(S_t>K\right)E\left[S_t-K|S_t>K\right] \\ \\ &=& e^{-rt}\mathcal{N}(d_2)\left(E\left[S_t|S_t>K\right] - E\left[K|S_t>K\right]\right) \\ \\ &=& e^{-rt}\mathcal{N}(d_2)\left(S_0e^{(\alpha - \delta)t}\frac{\mathcal{N}(d_1)}{\mathcal{N}(d_2)} - K\right) \end{array}

Black-Scholes makes the additional assumption that all investors are risk neutral.  This means assets do not pay a risk premium for being more risky.  Long story short, \alpha - r = 0 so \alpha = r.  So in the Black-Scholes formula:

\begin{array}{rll} \displaystyle d_1 &=& -\frac{\ln{\frac{K}{S_0}} - \left(r - \delta + \frac{1}{2}\sigma^2\right)t}{\sigma\sqrt{t}} \\ \\ \displaystyle d_2 &=& -\frac{\ln{\frac{K}{S_0}}- \left(r- \delta - \frac{1}{2}\sigma^2\right)t}{\sigma\sqrt{t}} \end{array}

Continuing our derivation of C_0 but replacing \alpha with r,

\begin{array}{rll} \displaystyle C_0 &=& e^{-rt}\mathcal{N}(d_2)\left(S_0e^{(r - \delta)t}\frac{\mathcal{N}(d_1)}{\mathcal{N}(d_2)} - K\right) \\ \\ &=& S_0e^{-\delta t}\mathcal{N}(d_1) - Ke^{-rt}\mathcal{N}(d_2)\end{array}

For a put option with payout K - S_t when K > S_t and 0 otherwise, the time-0 value P_0 is

P_0 = Ke^{-rt}\mathcal{N}(-d_2) - S_0e^{-\delta t}\mathcal{N}(-d_1)
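A minimal implementation of the two pricing formulas, using the erf-based normal CDF; put-call parity, C - P = S_0e^{-\delta t} - Ke^{-rt}, makes a convenient self-check (parameter values are hypothetical):

```python
import math

def _N(z):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def black_scholes(s0, k, r, delta, sigma, t):
    # d1, d2 as defined above, with alpha replaced by the risk-free rate r
    d1 = -(math.log(k / s0) - (r - delta + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    call = s0 * math.exp(-delta * t) * _N(d1) - k * math.exp(-r * t) * _N(d2)
    put = k * math.exp(-r * t) * _N(-d2) - s0 * math.exp(-delta * t) * _N(-d1)
    return call, put

call, put = black_scholes(40, 40, 0.05, 0.01, 0.3, 0.5)
```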

These are the famous Black-Scholes formulas for option pricing.  When derived on the back of a cocktail napkin, they are indispensable for impressing the ladies at your local bar.  :p


Filed under Parametric Models, Probability

The Bernoulli Shortcut

If X has a standard Bernoulli distribution, then it takes the value 1 with probability q and the value 0 with probability 1-q.  Any random variable that can take only two values is a scaled and translated version of the standard Bernoulli distribution.

Expected Value and Variance:

For a standard Bernoulli distribution, E[X] = q and Var(X) = q(1-q).  If Y is a random variable that can take only the values a and b, with probabilities q and (1-q) respectively, then

\begin{array}{rl} Y &= (a-b)X +b \\ E[Y] &= (a-b)E[X] +b \\ Var(Y) &= (a-b)^2Var(X) \\ &= (a-b)^2q(1-q) \end{array}
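A quick check of the shortcut against a first-principles calculation, with hypothetical values a = 10, b = 2, q = 0.3:

```python
def two_point_mean_var(a, b, q):
    # Y = (a - b)X + b where X is standard Bernoulli with Pr(X = 1) = q
    mean = (a - b) * q + b
    var = (a - b)**2 * q * (1 - q)
    return mean, var

m, v = two_point_mean_var(10, 2, 0.3)

# first-principles versions for comparison
direct_mean = 10 * 0.3 + 2 * 0.7
direct_var = 0.3 * (10 - direct_mean)**2 + 0.7 * (2 - direct_mean)**2
```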



Filed under Probability

Mixture Distributions

Finite:  A finite mixture distribution is described by the following cumulative distribution function:

F_X(x) = \displaystyle \sum_{i=1}^n w_iF_{X_i}(x)

where X is the mixture random variable, X_i are the component random variables that make up the mixture, and w_i is the weighting for each component.  The weights sum to 1.

If X is a mixture of 50% X_1 and 50% X_2, then F_X(x) = 0.5F_{X_1}(x) + 0.5F_{X_2}(x).  This is not the same as X = 0.5X_1 + 0.5X_2.  The latter expression is a sum of random variables, NOT a mixture!

Moments and Variance:

\begin{array}{rl} E(X^t) &= \displaystyle \sum_{i=1}^n w_iE(X_i^t) \\ Var(X) &= E(X^2) - E(X)^2 \\ &= \displaystyle \sum_{i=1}^n w_iE(X_i^2) - \left(\sum_{i=1}^n w_iE(X_i)\right)^2 \end{array}
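A sketch of the mixture moment calculation, using two hypothetical components (X_1 with mean 1 and variance 1, so E[X_1^2] = 2; X_2 with mean 3 and variance 4, so E[X_2^2] = 13):

```python
def mixture_moments(weights, means, second_moments):
    # E[X^k] of a mixture is the weighted average of the component E[X_i^k]
    m1 = sum(w * m for w, m in zip(weights, means))
    m2 = sum(w * s for w, s in zip(weights, second_moments))
    return m1, m2 - m1**2   # mean, variance

mix_mean, mix_var = mixture_moments([0.5, 0.5], [1, 3], [2, 13])

# contrast: the SUM 0.5*X1 + 0.5*X2 (independent components) has a smaller variance
sum_var = 0.5**2 * 1 + 0.5**2 * 4
```

The mixture and the weighted sum share the same mean, but their variances differ, as the warning above says.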


Filed under Probability

Variance and Expected Value Algebra

Linearity of Expected Value: Suppose X and Y are random variables and a and b are scalars.  The following relationships hold:

E[aX+b] = aE[X]+b

E[aX+bY] = aE[X] +bE[Y]

Variance:

Var(aX+bY) = a^2Var(X)+2abCov(X,Y)+b^2Var(Y)

Suppose X_i, i = 1, \ldots, n, are n independent identically distributed (iid) random variables.  Then Cov(X_i,X_j) = 0 for i\ne j and

\displaystyle Var\left({\sum_{i=1}^n X_i}\right) = \sum_{i=1}^n Var(X_i)

Example:

X is the stock price of AAPL at market close.  Y is the sum of closing AAPL stock prices for 5 days.  Then

\begin{array}{rl} Var(Y) &= \displaystyle \sum_{i=1}^5 Var(X_i) \\ &= 5Var(X) \end{array}.  

Contrast this with the variance of Z = 5X.  In other words, Z is a random variable that takes a value of 5 times the price of AAPL at the close of any given day.  Then

\begin{array}{rl} Var(Z) &= Var(5X) \\ &= 5^2Var(X) \\ &= 25Var(X) \end{array}

The distinction between Y and Z is subtle but very important.

Variance of a Sample Mean:

In situations where the sample mean \bar{X} is a random variable over n iid observations (e.g. the average price of AAPL over 5 days), the following formula applies:

\begin{array}{rl} Var(\bar{X}) &= \displaystyle Var\left(\frac{1}{n} \displaystyle \sum_{i=1}^n X_i\right) \\ &= \displaystyle \frac{nVar(X)}{n^2} \\ &= \displaystyle \frac{Var(X)}{n} \end{array} 
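A simulation contrasting Y (the sum of 5 independent draws) with Z (5 times a single draw), assuming for illustration that X is normal with mean 100 and standard deviation 4, so Var(X) = 16:

```python
import random
import statistics

random.seed(0)
n, trials = 5, 100_000
ys = []   # Y: sum of 5 independent draws  -> Var(Y) = 5 * 16 = 80
zs = []   # Z: 5 times a single draw       -> Var(Z) = 25 * 16 = 400
for _ in range(trials):
    draws = [random.gauss(100, 4) for _ in range(n)]
    ys.append(sum(draws))
    zs.append(n * draws[0])

var_y = statistics.variance(ys)
var_z = statistics.variance(zs)
```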


Filed under Probability

Functions and Moments

Some distribution functions:

Survival function

\displaystyle S(x) = 1-F(x) = \Pr(X>x)  

where F(x) is a cumulative distribution function.

Hazard rate function

\displaystyle h(x) = \frac{f(x)}{S(x)} = -\frac{d\ln{S(x)}}{dx}

where f(x) is a probability density function.

Cumulative hazard rate function

\displaystyle H(x) =\int_{-\infty}^x{h(t)dt} = -\ln{S(x)}

The following relationship is often useful:

S(x) = \displaystyle e^{-\int_{-\infty}^x{h(t)dt}}
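These identities are easy to verify for the exponential distribution, whose hazard rate is the constant \lambda.  A sketch with a hypothetical rate \lambda = 0.5:

```python
import math

lam = 0.5   # hypothetical constant hazard rate

def S(x):
    # survival function of an Exponential(lam): e^{-lam x}
    return math.exp(-lam * x)

def h(x):
    # hazard rate f(x)/S(x), which collapses to lam
    f = lam * math.exp(-lam * x)
    return f / S(x)

def H(x):
    # cumulative hazard -ln S(x) = lam * x
    return -math.log(S(x))
```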

Expected Value:

\displaystyle E[X] = \int_{-\infty}^\infty{xf(x)dx}

Or more generally,

\displaystyle E[g(X)] = \int_{-\infty}^\infty{g(x)f(x)dx}

When g(X) = X^n, the expected value of such a function is called the nth raw moment and is denoted by \mu'_n.  Let \mu be the first raw moment; that is, \mu = E[X].  E[(X-\mu)^n] is called the nth central moment and is denoted by \mu_n.

Moments are used to generate some statistical measures.

Variance \sigma^2

\displaystyle Var(X) = E[(X-\mu)^2] = E(X^2) - E(X)^2

The coefficient of variation is \displaystyle \frac{\sigma}{\mu}.

Skewness \gamma_1

\displaystyle \gamma_1 = \frac{\mu_3}{\sigma^3}

Kurtosis \gamma_2

\displaystyle \gamma_2 = \frac{\mu_4}{\sigma^4}

Covariance of two distribution functions

\displaystyle Cov(X,Y) = E[(X-\mu_X)(Y-\mu_Y)] = E[XY] - E[X]E[Y]

*Note: if X and Y are independent, then Cov(X,Y)=0.  The converse does not hold in general.

Correlation coefficient \rho_{XY}

\displaystyle \rho_{XY} = \frac{Cov(X,Y)}{\sigma_X\sigma_Y}

All of the above definitions should be memorized.  Some things that might be tested in the exam are:

  • Given a particular distribution function, what happens to skewness or kurtosis in the limit of a certain parameter?
  • What is the expected value, variance, skewness, kurtosis of a given distribution function?
  • What is the covariance or correlation coefficient of two distribution functions?

Additional Notes

Central moments can be calculated from raw moments.  Know how to calculate raw moments using the statistics function on the calculator; this can be a useful timesaver in the exam.  Using binomial coefficients with alternating signs, write an expression for \mu_n with the raw moments \mu'_k and \mu as the two terms of the binomial expansion.

Example:

\mu_4 = \mu'_4 - 4\mu'_3\mu + 6\mu'_2\mu^2 - 4\mu'_1\mu^3 + \mu^4

Since \mu'_1 = \mu, the two terms on the end simplify to -3\mu^4.  The result is

\mu_4 = \mu'_4 - 4\mu'_3\mu + 6\mu'_2\mu^2 - 3\mu^4
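The binomial-expansion recipe \mu_n = \sum_{k=0}^n \binom{n}{k}(-1)^{n-k}\mu'_k\mu^{n-k} can be written out once and reused.  A sketch (the helper name is my own):

```python
from math import comb

def central_from_raw(raw):
    # raw[k] = E[X^k] with raw[0] = 1; returns the central moment mu_n
    # for n = len(raw) - 1, via the alternating binomial expansion
    n = len(raw) - 1
    mu = raw[1]
    return sum(comb(n, k) * (-1)**(n - k) * raw[k] * mu**(n - k)
               for k in range(n + 1))
```

For a Bernoulli(q) all raw moments equal q, so the second central moment should come back as q(1-q); for the standard normal (raw moments 1, 0, 1, 0, 3) the fourth central moment is 3.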

Moment Generating Function:

If the moment generating function M(t) is known for a random variable X, its nth raw moment can be found by taking the nth derivative of M(t) and evaluating it at 0.  Moment generating functions take the form:

M(t) = \displaystyle E[e^{tX}]

If X and Y are independent and Z = X + Y, then M_Z(t) = M_X(t)\cdot M_Y(t).
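A numerical sketch of moment extraction, assuming an exponential distribution with rate \lambda = 2, whose MGF is \lambda/(\lambda - t), and approximating the derivative at 0 with a central finite difference (both function names are my own):

```python
import math

def mgf_exponential(t, lam=2.0):
    # M(t) = lam / (lam - t) for an Exponential with rate lam (valid for t < lam)
    return lam / (lam - t)

def nth_moment_via_mgf(M, n, h=1e-3):
    # n-th derivative of M at 0 by an n-point central finite difference
    return sum((-1)**k * math.comb(n, k) * M((n / 2 - k) * h)
               for k in range(n + 1)) / h**n
```

For this distribution the first raw moment is 1/\lambda = 0.5 and the second is 2/\lambda^2 = 0.5, which the finite differences recover to good accuracy.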


Filed under Probability