A parametric model makes an assumption about the type of distribution underlying a set of data. The data are then used to estimate the parameters of the assumed distribution. For example, if the underlying distribution is assumed to be uniform, the data may be used to estimate the maximum value that the random variable can take. If the assumed underlying distribution is Poisson, the data may be used to estimate the rate parameter $\lambda$. There are three measures of quality for these estimators: bias, consistency, and mean squared error.
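As a minimal sketch of these two cases in Python (the observation values below are made up purely for illustration): under a Uniform$(0, \theta)$ assumption the sample maximum is the maximum likelihood estimate of $\theta$, and under a Poisson assumption the sample mean of the counts estimates the rate parameter.

```python
import numpy as np

# Hypothetical observed values, made up purely for illustration.
sample = np.array([2.3, 4.1, 1.7, 3.9, 0.8, 2.6, 3.2, 1.1])

# Assumed Uniform(0, theta): the sample maximum estimates theta,
# the largest value the random variable can take.
theta_hat = sample.max()

# Assumed Poisson(lambda): the sample mean of the observed counts
# estimates the rate parameter lambda.
claim_counts = np.array([0, 1, 0, 2, 0, 0, 1, 0])  # hypothetical claims per person
lambda_hat = claim_counts.mean()

print(theta_hat, lambda_hat)  # 4.1 and 0.5
```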
Definitions:
- Let $\hat{\theta}$ be an estimator and $\theta$ be the true parameter being estimated. The bias is defined by $\mathrm{bias}(\hat{\theta}) = E[\hat{\theta}] - \theta$. An estimator is said to be unbiased if $E[\hat{\theta}] = \theta$.
- An estimator is said to be consistent if for any $\epsilon > 0$, $\lim_{n \to \infty} P(|\hat{\theta}_n - \theta| > \epsilon) = 0$, where $\hat{\theta}_n$ is the estimator based on $n$ observations.
- The mean squared error is $\mathrm{MSE}(\hat{\theta}) = E[(\hat{\theta} - \theta)^2]$. A low mean squared error is desirable.
The following relationship is useful: $\mathrm{MSE}(\hat{\theta}) = \mathrm{Var}(\hat{\theta}) + [\mathrm{bias}(\hat{\theta})]^2$.
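As a sanity check, a short simulation can verify this relationship numerically. The sketch below assumes Uniform$(0, 10)$ data and uses the sample maximum as the estimator of $\theta$; both choices are arbitrary examples.

```python
import numpy as np

rng = np.random.default_rng(0)
true_theta, n, trials = 10.0, 20, 100_000

# Each trial: draw n observations from Uniform(0, true_theta) and estimate
# theta with the sample maximum.
samples = rng.uniform(0, true_theta, size=(trials, n))
estimates = samples.max(axis=1)

bias = estimates.mean() - true_theta
var = estimates.var()
mse = np.mean((estimates - true_theta) ** 2)

print(f"bias         = {bias:.4f}")   # negative: the sample max understates theta
print(f"var          = {var:.4f}")
print(f"mse          = {mse:.4f}")
print(f"var + bias^2 = {var + bias**2:.4f}")   # matches mse up to rounding
```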
CONFIDENCE INTERVALS
To construct a confidence interval for an estimator, you simply add and subtract $z_{\alpha/2}\sqrt{\mathrm{Var}(\hat{\theta})}$ to the estimate, where $z_{\alpha/2}$ is the appropriate z-value from the standard normal table and $\sqrt{\mathrm{Var}(\hat{\theta})}$ is the square root of the variance of the estimator. So a 95% confidence interval for an estimator $\hat{\theta}$ would be $\hat{\theta} \pm 1.96\sqrt{\mathrm{Var}(\hat{\theta})}$.
You can usually express the variance of the estimator in terms of the true parameter, based on your assumption of the underlying distribution. Since the true parameter is unknown, you substitute the estimate into that expression to approximate the variance of the estimator.
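A sketch of this procedure in Python, assuming a hypothetical sample of 100 policyholders with 5 claims in total and a Poisson frequency model:

```python
import math

# Hypothetical scenario: 100 policyholders, 5 claims observed in total,
# assumed Poisson number of claims per policyholder.
n, total_claims = 100, 5
lam_hat = total_claims / n                 # estimator: sample mean = 0.05

# Under the Poisson assumption, Var(lam_hat) = lam / n; since lam is unknown,
# substitute the estimate lam_hat into the variance expression.
se = math.sqrt(lam_hat / n)

z = 1.96                                   # z-value for a 95% confidence interval
lower, upper = lam_hat - z * se, lam_hat + z * se
print(f"95% CI: ({lower:.4f}, {upper:.4f})")
```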
When the variance of an estimator can be expressed in terms of the true parameter that you are trying to estimate, a more accurate confidence interval can be derived by setting
$$(\hat{\theta} - \theta)^2 = z_{\alpha/2}^2\,\mathrm{Var}(\hat{\theta}),$$
with $\mathrm{Var}(\hat{\theta})$ written as a function of $\theta$. Solving for $\theta$ gives the interval.
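Continuing the hypothetical Poisson example, $\mathrm{Var}(\hat{\lambda}) = \lambda/n$, so the equation above becomes a quadratic in $\lambda$ that can be solved directly. A sketch:

```python
import math

# Same hypothetical Poisson setup: n observations, estimate lam_hat.
n, lam_hat, z = 100, 0.05, 1.96

# Set (lam_hat - lam)^2 = z^2 * Var(lam_hat) with Var(lam_hat) = lam / n.
# Expanding gives a quadratic in lam:
#   lam^2 - (2*lam_hat + z^2/n) * lam + lam_hat^2 = 0
b = -(2 * lam_hat + z**2 / n)
c = lam_hat**2
disc = math.sqrt(b**2 - 4 * c)

lower, upper = (-b - disc) / 2, (-b + disc) / 2
print(f"95% CI solving for lam: ({lower:.4f}, {upper:.4f})")
```

Note that this interval is not symmetric about $\hat{\lambda}$, unlike the interval obtained by substituting the estimate into the variance expression.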
STATISTICS OF ESTIMATORS
An important point to keep in mind is that the results you observe in the data are the outcomes of random variables. Since estimators are calculated from these observations, they are themselves functions of random variables. If you make some assumptions about the underlying distribution, you can calculate the statistics of an estimator, such as its variance. For example, in a population of 100 people you observe 5 claims. You assume the underlying distribution for the number of claims per person is Poisson and you estimate its parameter to be $\hat{\lambda} = 5/100 = 0.05$. This means your equation for the estimator is:
$$\hat{\lambda} = \frac{1}{100}\sum_{i=1}^{100} X_i,$$
where $X_i$ represents the number of claims for person $i$ and follows the true underlying distribution. Thus the variance of your estimator is given by
$$\mathrm{Var}(\hat{\lambda}) = \frac{1}{100^2}\sum_{i=1}^{100}\mathrm{Var}(X_i) = \frac{\mathrm{Var}(X_i)}{100}.$$
Since you've assumed the underlying distribution to be Poisson, $\mathrm{Var}(X_i) = \lambda$, so $\mathrm{Var}(\hat{\lambda}) = \lambda/100$, where $\lambda$ is the variance (as well as the mean) of $X_i$. This is how you can arrive at equations for the statistics of estimators based on an assumed underlying distribution. The key is to realize that there is a link between the estimator and the true underlying distribution.
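A quick simulation illustrates this link. The sketch below assumes a true $\lambda = 0.05$ (a value chosen to match the example) and compares the simulated variance of the estimator with $\lambda/100$:

```python
import numpy as np

rng = np.random.default_rng(1)
true_lam, n, trials = 0.05, 100, 100_000

# Each trial: simulate claim counts for 100 people under Poisson(true_lam)
# and compute the estimator lam_hat = average number of claims per person.
counts = rng.poisson(true_lam, size=(trials, n))
lam_hats = counts.mean(axis=1)

print(f"simulated Var(lam_hat): {lam_hats.var():.6f}")
print(f"theoretical lam / n   : {true_lam / n:.6f}")   # 0.0005
```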