Maximum Likelihood Estimators

With Maximum Likelihood Estimator (MLE) problems, a parametric distribution is named, one or more of its parameter values are unknown, a set of data from this distribution is given, and you’re asked to find the parameter values that maximize the probability of observing the data.  You do this by brute force.  For each data point, you write down the likelihood of observing it, which is simply the density function of whatever named distribution was given in the problem, evaluated at that point.  Because the observations are independent, the likelihood of observing the whole set of data is simply the product of the individual likelihoods.  If there are lots of data points, you end up with a massive function.

For example, suppose N follows an exponential distribution, whose density function is given by:

f_N(n;\lambda) = \lambda e^{-\lambda n}, \quad n \ge 0

in which \lambda is the unknown parameter.  You are also given a set of observations a, b, c and you must find the value of \lambda which maximizes the probability of seeing these particular values.  The likelihood of seeing these values is given by L(\lambda):

L(\lambda) = \left( \lambda e^{-\lambda a}\right)\left( \lambda e^{-\lambda b}\right)\left( \lambda e^{-\lambda c}\right) = \lambda^3 e^{-\lambda (a+b+c)}
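To make this concrete numerically, here is a minimal Python sketch, assuming hypothetical observations a = 1, b = 2, c = 3, that evaluates L(\lambda) over a grid of candidate values:

```python
import numpy as np

# Hypothetical observations a, b, c (placeholders for illustration).
data = np.array([1.0, 2.0, 3.0])

def likelihood(lam, x):
    # L(lam) = product of the exponential densities lam * exp(-lam * x_i)
    return np.prod(lam * np.exp(-lam * x))

# Evaluate L(lam) over a grid and pick the best candidate.
grid = np.linspace(0.01, 2.0, 200)
values = [likelihood(lam, data) for lam in grid]
print(grid[np.argmax(values)])  # about 0.5 = 3 / (1 + 2 + 3)
```

A grid search like this scales poorly, which is exactly why the calculus approach below is preferred.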

To find the \lambda which maximizes this function, you take its derivative, set it equal to 0, and solve for \lambda.  If you’re looking a few steps ahead, you should realize that differentiating this product directly will be messy.  To get around this, we can take the logarithm of the likelihood function and then find the maximum.  Since the logarithm is strictly increasing, this does not change the maximizing value of \lambda.  The log-likelihood l(\lambda) is then:

l(\lambda) = 3\ln(\lambda) -\lambda(a+b+c)

The derivative with respect to \lambda is

\displaystyle \frac{d}{d\lambda}l(\lambda) = \frac{3}{\lambda} - (a+b+c)

Setting this equal to 0 and solving for \lambda, we have

\lambda = \displaystyle \frac{3}{a+b+c}
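This is just the reciprocal of the sample mean.  As a sanity check, a short sketch that minimizes the negative log-likelihood numerically (again with the hypothetical observations a = 1, b = 2, c = 3) recovers the closed-form answer:

```python
import numpy as np
from scipy.optimize import minimize_scalar

data = np.array([1.0, 2.0, 3.0])  # hypothetical observations a, b, c

def neg_log_likelihood(lam):
    # l(lam) = n*ln(lam) - lam*sum(x); negated so the minimizer finds the max
    return -(len(data) * np.log(lam) - lam * data.sum())

result = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 10.0), method="bounded")
print(result.x)                # numerical MLE, about 0.5
print(len(data) / data.sum())  # closed form 3 / (a + b + c) = 0.5
```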

