Best Unbiased Estimators
Basic Theory
Consider again the basic statistical model, in which we have a random experiment that results in an observable random variable \({X}\) taking values in a set \(S\). Once again, the experiment is typically to sample \(n\) objects from a population and record one or more measurements for each item.
In this case, the observable random variable has the form
\[ {X} = (X_1, X_2, \ldots, X_n) \]
where \(X_i\) is the vector of measurements for the \(i\)th item.
Suppose that \(\theta\) is a real parameter of the distribution of \({X}\), taking values in a parameter space \(\Theta\). Let \(f_\theta\) denote the probability density function of \({X}\) for \(\theta \in \Theta\). Note that the expected value, variance, and covariance operators also depend on \(\theta\), although we will sometimes suppress this to keep the notation from becoming too unwieldy.
Definitions
Suppose now that \(\lambda = \lambda(\theta)\) is a parameter of interest that is derived from \(\theta\). (Of course, \(\lambda\) might be \(\theta\) itself, but more generally might be a function of \(\theta\).) In this section we will consider the general problem of finding the best estimator of \(\lambda\) among a given class of unbiased estimators. Recall that if \(U\) is an unbiased estimator of \(\lambda\), then \(\text{var}_\theta(U)\) is the mean square error. Mean square error is our measure of the quality of unbiased estimators, so the following definitions are natural.
Suppose that \(U\) and \(V\) are unbiased estimators of \(\lambda\).
- If \(\text{var}_\theta(U) \le \text{var}_\theta(V)\) for all \(\theta \in \Theta\) then \(U\) is a uniformly better estimator than \(V\).
- If \(U\) is uniformly better than every other unbiased estimator of \(\lambda\), then \(U\) is a Uniformly Minimum Variance Unbiased Estimator (UMVUE) of \(\lambda\).
Given unbiased estimators \(U\) and \(V\) of \(\lambda\), it may be the case that \(U\) has smaller variance for some values of \(\theta\) while \(V\) has smaller variance for other values of \(\theta\), so that neither estimator is uniformly better than the other. Of course, a minimum variance unbiased estimator is the best we can hope for.
The Cramér-Rao Lower Bound
We will show that under mild conditions, there is a lower bound on the variance of any unbiased estimator of the parameter \(\lambda\). Thus, if we can find an estimator that achieves this lower bound for all \(\theta\), then the estimator must be an UMVUE of \(\lambda\). The derivative of the log likelihood function, sometimes called the score, will play a critical role in our analysis. A lesser but still important role is played by the negative of the second derivative of the log likelihood function. Life will be much easier if we give these functions names.
For \({x} \in S\) and \(\theta \in \Theta\), define
\[ \begin{align} L_1(x, \theta) & = \frac{d}{d \theta} \ln\left(f_\theta(x)\right) \\ L_2(x, \theta) & = -\frac{d}{d \theta} L_1(x, \theta) = -\frac{d^2}{d \theta^2} \ln\left(f_\theta(x)\right) \end{align} \]
In the rest of this subsection, we consider statistics \(h(X)\) where \(h: S \to \mathbb{R}\) (and so in particular, \(h\) does not depend on \(\theta\)). We need a fundamental assumption:
We will consider only statistics \(h(X)\) with \(E_\theta\left(h^2(X)\right) \lt \infty\) for \(\theta \in \Theta\). We also assume that
\[ \frac{d}{d \theta} E_\theta\left(h(X)\right) = E_\theta\left(h(X) L_1(X, \theta)\right) \]
This is equivalent to the assumption that the derivative operator \(d / d\theta\) can be interchanged with the expected value operator \(E_\theta\).
Generally speaking, the fundamental assumption will be satisfied if \(f_\theta(x)\) is differentiable as a function of \(\theta\), with a derivative that is jointly continuous in \(x\) and \(\theta\), and if the support set \(\left\{x \in S: f_\theta(x) \gt 0 \right\}\) does not depend on \(\theta\).
The first important consequence of the fundamental assumption is that the score has mean 0:
\[ E_\theta\left(L_1(X, \theta)\right) = 0 \]
for \(\theta \in \Theta\).
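A brief sketch of the argument: taking \(h \equiv 1\) in the fundamental assumption gives
\[ E_\theta\left(L_1(X, \theta)\right) = E_\theta\left(1 \cdot L_1(X, \theta)\right) = \frac{d}{d\theta} E_\theta(1) = \frac{d}{d\theta} 1 = 0 \]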
If \(h(X)\) is a statistic then
\[ \begin{align} \text{cov}_\theta\left(h(X), L_1(X, \theta)\right) & = \frac{d}{d \theta} E_\theta\left(h(X)\right) \\ \text{var}_\theta\left(L_1(X, \theta)\right) & = E_\theta\left(L_1^2(X, \theta)\right) \end{align} \]
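Both identities follow quickly from the previous result and the fundamental assumption; here is a sketch:
\[ \begin{align} \text{cov}_\theta\left(h(X), L_1(X, \theta)\right) & = E_\theta\left(h(X) L_1(X, \theta)\right) - E_\theta\left(h(X)\right) E_\theta\left(L_1(X, \theta)\right) = \frac{d}{d\theta} E_\theta\left(h(X)\right) \\ \text{var}_\theta\left(L_1(X, \theta)\right) & = E_\theta\left(L_1^2(X, \theta)\right) - \left[E_\theta\left(L_1(X, \theta)\right)\right]^2 = E_\theta\left(L_1^2(X, \theta)\right) \end{align} \]
since \(E_\theta\left(L_1(X, \theta)\right) = 0\).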
The following theorem gives the general Cramér-Rao lower bound on the variance of a statistic. The lower bound is named for Harald Cramér and C. R. Rao:
If \(h(X)\) is a statistic then
\[ \text{var}_\theta\left(h(X)\right) \ge \frac{\left(\frac{d}{d \theta} E_\theta\left(h(X)\right) \right)^2}{E_\theta\left(L_1^2(X, \theta)\right)} \]
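A sketch of the proof: by the Cauchy-Schwarz (correlation) inequality and the previous result,
\[ \left(\frac{d}{d\theta} E_\theta\left(h(X)\right)\right)^2 = \text{cov}_\theta^2\left(h(X), L_1(X, \theta)\right) \le \text{var}_\theta\left(h(X)\right) \, \text{var}_\theta\left(L_1(X, \theta)\right) = \text{var}_\theta\left(h(X)\right) \, E_\theta\left(L_1^2(X, \theta)\right) \]
Dividing both sides by \(E_\theta\left(L_1^2(X, \theta)\right)\) gives the stated bound.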
We can now give the first version of the Cramér-Rao lower bound for unbiased estimators of a parameter.
Suppose now that \(\lambda(\theta)\) is a parameter of interest and \(h(X)\) is an unbiased estimator of \(\lambda\). Then
\[ \text{var}_\theta\left(h(X)\right) \ge \frac{\left(d\lambda / d\theta\right)^2}{E_\theta\left(L_1^2(X, \theta)\right)} \]
An estimator of \(\lambda\) that achieves the Cramér-Rao lower bound must be a uniformly minimum variance unbiased estimator (UMVUE) of \(\lambda\).
Equality holds in the previous theorem, and hence \(h(X)\) is an UMVUE, if and only if there exists a function \(u(\theta)\) such that (with probability 1)
\[ h(X) = \lambda(\theta) + u(\theta) L_1(X, \theta) \]
The quantity \(E_\theta\left(L_1^2(X, \theta)\right)\) that occurs in the denominator of the lower bounds in the previous two theorems is called the Fisher information number of \(X\), named after Sir Ronald Fisher. The following theorem gives an alternate version of the Fisher information number that is usually computationally better.
If the appropriate derivatives exist and if the appropriate interchanges are permissible then
\[ E_\theta\left(L_1^2(X, \theta)\right) = E_\theta\left(L_2(X, \theta)\right) \]
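A sketch of the argument, in the continuous case: differentiating the identity \(\int_S L_1(x, \theta) f_\theta(x) \, dx = 0\) with respect to \(\theta\), and interchanging the derivative and the integral, gives
\[ 0 = \int_S \left[\frac{d}{d\theta} L_1(x, \theta)\right] f_\theta(x) \, dx + \int_S L_1(x, \theta) \frac{d}{d\theta} f_\theta(x) \, dx = -E_\theta\left(L_2(X, \theta)\right) + E_\theta\left(L_1^2(X, \theta)\right) \]
since \(\frac{d}{d\theta} f_\theta(x) = L_1(x, \theta) f_\theta(x)\).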
The following theorem gives the second version of the Cramér-Rao lower bound for unbiased estimators of a parameter.
If \(\lambda(\theta)\) is a parameter of interest and \(h(X)\) is an unbiased estimator of \(\lambda\) then
\[ \text{var}_\theta\left(h(X)\right) \ge \frac{\left(d\lambda / d\theta\right)^2}{E_\theta\left(L_2(X, \theta)\right)} \]
Random Samples
Suppose now that \(X = (X_1, X_2, \ldots, X_n)\) is a random sample of size \(n\) from the distribution of a random variable \(X\) having probability density function \(g_\theta\) and taking values in a set \(R\). Thus \(S = R^n\). We will use lower-case letters for the corresponding functions for the sampling distribution: the derivative of the log likelihood of a single observation and the negative of its second derivative.
For \(x \in R\) and \(\theta \in \Theta\) define
\[ \begin{align} l_1(x, \theta) & = \frac{d}{d\theta} \ln\left(g_\theta(x)\right) \\ l_2(x, \theta) & = -\frac{d^2}{d\theta^2} \ln\left(g_\theta(x)\right) \end{align} \]
\(E_\theta\left(L_1^2(X, \theta)\right)\) can be written in terms of \(l_1\), and \(E_\theta\left(L_2(X, \theta)\right)\) in terms of \(l_2\) (a sketch of the argument follows the list):
- \(E_\theta\left(L_1^2(X, \theta)\right) = n E_\theta\left(l_1^2(X, \theta)\right)\)
- \(E_\theta\left(L_2(X, \theta)\right) = n E_\theta\left(l_2(X, \theta)\right)\)
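A sketch of the argument: since the sample variables are independent with common probability density function \(g_\theta\), the joint density factors, so
\[ \ln\left(f_\theta(X)\right) = \sum_{i=1}^n \ln\left(g_\theta(X_i)\right), \quad L_1(X, \theta) = \sum_{i=1}^n l_1(X_i, \theta), \quad L_2(X, \theta) = \sum_{i=1}^n l_2(X_i, \theta) \]
The terms \(l_1(X_i, \theta)\) are independent with mean 0, so \(E_\theta\left(L_1^2(X, \theta)\right) = \text{var}_\theta\left(L_1(X, \theta)\right) = n E_\theta\left(l_1^2(X, \theta)\right)\); the second identity follows from the additivity of expected value.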
The following theorem gives the second version of the general Cramér-Rao lower bound on the variance of a statistic, specialized for random samples.
If \(h(X)\) is a statistic then
\[ \text{var}_\theta\left(h(X)\right) \ge \frac{\left(\frac{d}{d\theta} E_\theta\left(h(X)\right) \right)^2}{n E_\theta\left(l_1^2(X, \theta)\right)} \]
The following theorem gives the third version of the Cramér-Rao lower bound for unbiased estimators of a parameter, specialized for random samples.
Suppose now that \(\lambda(\theta)\) is a parameter of interest and \(h(X)\) is an unbiased estimator of \(\lambda\). Then
\[ \text{var}_\theta\left(h(X)\right) \ge \frac{(d\lambda / d\theta)^2}{n E_\theta\left(l_1^2(X, \theta)\right)} \]
Note that the Cramér-Rao lower bound varies inversely with the sample size \(n\). The following theorem gives the fourth version of the Cramér-Rao lower bound for unbiased estimators of a parameter, again specialized for random samples.
If \(\lambda(\theta)\) is a parameter of interest, \(h(X)\) is an unbiased estimator of \(\lambda\), and the appropriate derivatives exist and the appropriate interchanges are permissible, then
\[ \text{var}_\theta\left(h(X)\right) \ge \frac{\left(d\lambda / d\theta\right)^2}{n E_\theta\left(l_2(X, \theta)\right)} \]
To summarize, we have four versions of the Cramér-Rao lower bound for the variance of an unbiased estimate of \(\lambda\): versions 1 and 2 in the general case, and versions 3 and 4 in the special case that the data form a random sample from the distribution of \(X\). If an unbiased estimator of \(\lambda\) achieves the lower bound, then the estimator is an UMVUE.
Examples and Special Cases
We will apply the results above to several parametric families of distributions. First we need to recall some standard notation. Suppose that \(X = (X_1, X_2, \ldots, X_n)\) is a random sample of size \(n\) from the distribution of a real-valued random variable \(X\) with mean \(\mu\) and variance \(\sigma^2\). The sample mean is
\[ M = \frac{1}{n} \sum_{i=1}^n X_i \]
Recall that \(E(M) = \mu\) and \(\text{var}(M) = \sigma^2 / n\). The special version of the sample variance, when \(\mu\) is known, and the standard version of the sample variance are, respectively,
\[ \begin{align} W^2 & = \frac{1}{n} \sum_{i=1}^n (X_i - \mu)^2 \\ S^2 & = \frac{1}{n - 1} \sum_{i=1}^n (X_i - M)^2 \end{align} \]
The Bernoulli Distribution
Suppose that \(X = (X_1, X_2, \ldots, X_n)\) is a random sample of size \(n\) from the Bernoulli distribution with unknown success parameter \(p \in (0, 1)\). In the usual language of reliability, \(X_i = 1\) means success on trial \(i\) and \(X_i = 0\) means failure on trial \(i\); the distribution is named for Jacob Bernoulli. Recall that the Bernoulli distribution has probability density function
\[ g_p(x) = p^x (1 - p)^{1-x}, \quad x \in \{0, 1\} \]
The basic assumption is satisfied. Moreover, recall that the mean of the Bernoulli distribution is \(p\), while the variance is \(p (1 - p)\).
\(p (1 - p) / n\) is the Cramér-Rao lower bound for the variance of unbiased estimators of \(p\).
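A sketch of the computation: from the probability density function,
\[ \ln\left(g_p(x)\right) = x \ln(p) + (1 - x) \ln(1 - p), \quad l_1(x, p) = \frac{x}{p} - \frac{1 - x}{1 - p}, \quad l_2(x, p) = \frac{x}{p^2} + \frac{1 - x}{(1 - p)^2} \]
so \(E_p\left(l_2(X, p)\right) = \frac{1}{p} + \frac{1}{1 - p} = \frac{1}{p (1 - p)}\), and with \(\lambda(p) = p\) the lower bound is \(\frac{1}{n E_p\left(l_2(X, p)\right)} = \frac{p (1 - p)}{n}\).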
The sample mean \(M\) (which is the proportion of successes) attains the lower bound in the previous exercise and hence is an UMVUE of \(p\).
The Poisson Distribution
Suppose that \(X = (X_1, X_2, \ldots, X_n)\) is a random sample of size \(n\) from the Poisson distribution with parameter \(\theta \in (0, \infty)\). Recall that this distribution is often used to model the number of random points in a region of time or space and is studied in more detail in the chapter on the Poisson Process. The Poisson distribution is named for Simeon Poisson and has probability density function
\[ g_\theta(x) = e^{-\theta} \frac{\theta^x}{x!}, \quad x \in \mathbb{N} \]
The basic assumption is satisfied. Recall also that the mean and variance of the distribution are both \(\theta\).
\(\theta / n\) is the Cramér-Rao lower bound for the variance of unbiased estimators of \(\theta\).
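A sketch of the computation: from the probability density function,
\[ \ln\left(g_\theta(x)\right) = -\theta + x \ln(\theta) - \ln(x!), \quad l_1(x, \theta) = \frac{x}{\theta} - 1, \quad l_2(x, \theta) = \frac{x}{\theta^2} \]
so \(E_\theta\left(l_2(X, \theta)\right) = \frac{E_\theta(X)}{\theta^2} = \frac{1}{\theta}\), and the lower bound is \(\frac{1}{n / \theta} = \frac{\theta}{n}\).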
The sample mean \(M\) attains the lower bound in the previous exercise and hence is an UMVUE of \(\theta\).
The Normal Distribution
Suppose that \(X = (X_1, X_2, \ldots, X_n)\) is a random sample of size \(n\) from the normal distribution with mean \(\mu \in \mathbb{R}\) and variance \(\sigma^2 \in (0, \infty)\). Recall that the normal distribution plays an especially important role in statistics, in part because of the central limit theorem. The normal distribution is widely used to model physical quantities subject to numerous small, random errors, and has probability density function
\[ g_{\mu,\sigma^2}(x) = \frac{1}{\sqrt{2 \, \pi} \sigma} \exp\left[-\frac{1}{2}\left(\frac{x - \mu}{\sigma}\right)^2 \right], \quad x \in \mathbb{R} \]
The basic assumption is satisfied with respect to both of these parameters. Recall also that the fourth central moment is \(E\left((X - \mu)^4\right) = 3 \, \sigma^4\).
\(\sigma^2 / n\) is the Cramér-Rao lower bound for the variance of unbiased estimators of \(\mu\).
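A sketch of the computation, treating \(\mu\) as the parameter with \(\sigma^2\) fixed:
\[ l_1(x, \mu) = \frac{x - \mu}{\sigma^2}, \quad E\left(l_1^2(X, \mu)\right) = \frac{\text{var}(X)}{\sigma^4} = \frac{1}{\sigma^2} \]
so the lower bound is \(\frac{1}{n / \sigma^2} = \frac{\sigma^2}{n}\).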
The sample mean \(M\) attains the lower bound in the previous exercise and hence is an UMVUE of \(\mu\).
\(\frac{2 \sigma^4}{n}\) is the Cramér-Rao lower bound for the variance of unbiased estimators of \(\sigma^2\).
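A sketch of the computation, treating \(\sigma^2\) as the parameter with \(\mu\) fixed:
\[ l_1(x, \sigma^2) = -\frac{1}{2 \sigma^2} + \frac{(x - \mu)^2}{2 \sigma^4}, \quad E\left(l_1^2(X, \sigma^2)\right) = \frac{\text{var}\left[(X - \mu)^2\right]}{4 \sigma^8} = \frac{3 \sigma^4 - \sigma^4}{4 \sigma^8} = \frac{1}{2 \sigma^4} \]
Here we use the fact that \(l_1(X, \sigma^2)\) has mean 0, so its second moment equals its variance, together with the fourth central moment recalled above. The lower bound is therefore \(\frac{1}{n / (2 \sigma^4)} = \frac{2 \sigma^4}{n}\).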
The sample variance \(S^2\) has variance \(\frac{2 \sigma^4}{n-1}\) and hence does not attain the lower bound in the previous exercise.
If \(\mu\) is known, then the special sample variance \(W^2\) attains the lower bound above and hence is an UMVUE of \(\sigma^2\).
If \(\mu\) is unknown, no unbiased estimator of \(\sigma^2\) attains the Cramér-Rao lower bound above.
The Gamma Distribution
Suppose that \(X = (X_1, X_2, \ldots, X_n)\) is a random sample of size \(n\) from the gamma distribution with known shape parameter \(k \gt 0\) and unknown scale parameter \(b \gt 0\). The gamma distribution is often used to model random times and certain other types of positive random variables, and is studied in more detail in the chapter on Special Distributions. The probability density function is
\[ g_b(x) = \frac{1}{\Gamma(k) b^k} x^{k-1} e^{-x/b}, \quad x \in (0, \infty) \]
The basic assumption is satisfied with respect to \(b\). Moreover, the mean and variance of the gamma distribution are \(k b\) and \(k b^2\), respectively.
\(\frac{b^2}{n k}\) is the Cramér-Rao lower bound for the variance of unbiased estimators of \(b\).
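A sketch of the computation: from the probability density function,
\[ \ln\left(g_b(x)\right) = -\ln\left(\Gamma(k)\right) - k \ln(b) + (k - 1) \ln(x) - \frac{x}{b}, \quad l_1(x, b) = -\frac{k}{b} + \frac{x}{b^2} \]
Since \(l_1(X, b)\) has mean 0, \(E_b\left(l_1^2(X, b)\right) = \frac{\text{var}(X)}{b^4} = \frac{k}{b^2}\), so the lower bound is \(\frac{1}{n k / b^2} = \frac{b^2}{n k}\).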
\(\frac{M}{k}\) attains the lower bound in the previous exercise and hence is an UMVUE of \(b\).
The Beta Distribution
Suppose that \(X = (X_1, X_2, \ldots, X_n)\) is a random sample of size \(n\) from the beta distribution with left parameter \(a \gt 0\) and right parameter \(b = 1\). Beta distributions are widely used to model random proportions and other random variables that take values in bounded intervals, and are studied in more detail in the chapter on Special Distributions. In our specialized case, the probability density function of the sampling distribution is
\[ g_a(x) = a \, x^{a-1}, \quad x \in (0, 1) \]
The basic assumption is satisfied with respect to \(a\).
The mean and variance of the distribution are
- \(\mu = \frac{a}{a+1}\)
- \(\sigma^2 = \frac{a}{(a + 1)^2 (a + 2)}\)
The Cramér-Rao lower bound for the variance of unbiased estimators of \(\mu\) is \(\frac{a^2}{n \, (a + 1)^4}\).
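A sketch of the computation: from the probability density function,
\[ \ln\left(g_a(x)\right) = \ln(a) + (a - 1) \ln(x), \quad l_1(x, a) = \frac{1}{a} + \ln(x), \quad l_2(x, a) = \frac{1}{a^2} \]
so \(E_a\left(l_2(X, a)\right) = \frac{1}{a^2}\). With \(\lambda(a) = \mu = \frac{a}{a + 1}\) we have \(\frac{d\lambda}{da} = \frac{1}{(a + 1)^2}\), so the lower bound is \(\frac{1 / (a + 1)^4}{n / a^2} = \frac{a^2}{n \, (a + 1)^4}\).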
The sample mean \(M\) does not achieve the Cramér-Rao lower bound in the previous exercise, and hence is not an UMVUE of \(\mu\).
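For a quick check of this claim, note that
\[ \text{var}_a(M) = \frac{\sigma^2}{n} = \frac{a}{n \, (a + 1)^2 (a + 2)} \gt \frac{a^2}{n \, (a + 1)^4} \]
since \((a + 1)^2 = a^2 + 2 a + 1 \gt a (a + 2) = a^2 + 2 a\).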
The Uniform Distribution
Suppose that \(X = (X_1, X_2, \ldots, X_n)\) is a random sample of size \(n\) from the uniform distribution on \([0, a]\) where \(a \gt 0\) is the unknown parameter. Thus, the probability density function of the sampling distribution is
\[ g_a(x) = \frac{1}{a}, \quad x \in [0, a] \]
The basic assumption is not satisfied.
The Cramér-Rao lower bound for the variance of unbiased estimators of \(a\) is \(\frac{a^2}{n}\). Of course, the Cramér-Rao Theorem does not apply, by the previous exercise.
Recall that \(V = \frac{n+1}{n} \max\{X_1, X_2, \ldots, X_n\}\) is unbiased and has variance \(\frac{a^2}{n (n + 2)}\). This variance is smaller than the Cramér-Rao bound in the previous exercise.
The reason that the basic assumption is not satisfied is that the support set \(\left\{x \in \mathbb{R}: g_a(x) \gt 0\right\}\) depends on the parameter \(a\).
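The following is a minimal simulation sketch (not part of the original text), using made-up values \(a = 2\) and \(n = 10\), that checks the variance of \(V\) against the formal bound:

```python
import numpy as np

# Estimate the variance of V = (n + 1) / n * max(X_1, ..., X_n) for a uniform
# sample on [0, a], and compare it with a^2 / (n (n + 2)) and with the formal
# Cramer-Rao bound a^2 / n, which does not apply to this model.
rng = np.random.default_rng(0)
a, n, reps = 2.0, 10, 100_000

samples = rng.uniform(0.0, a, size=(reps, n))
V = (n + 1) / n * samples.max(axis=1)

print("simulated var(V):     ", V.var())               # close to a^2 / (n (n + 2))
print("a^2 / (n (n + 2)):    ", a**2 / (n * (n + 2)))
print("formal bound a^2 / n: ", a**2 / n)
```

The simulated variance is far below the formal bound, illustrating that the Cramér-Rao theorem simply does not apply here.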
Best Linear Unbiased Estimators
We now consider a somewhat specialized problem, but one that fits the general theme of this section. Suppose that \(X = (X_1, X_2, \ldots, X_n)\) is a sequence of observable real-valued random variables that are uncorrelated and have the same unknown mean \(\mu \in \mathbb{R}\), but possibly different standard deviations. Let \(\sigma = (\sigma_1, \sigma_2, \ldots, \sigma_n)\) where \(\sigma_i = \text{SD}(X_i)\) for \(i \in \{1, 2, \ldots, n\}\).
We will consider estimators of \(\mu\) that are linear functions of the outcome variables. Specifically, we will consider estimators of the following form, where the vector of coefficients \(c = (c_1, c_2, \ldots, c_n)\) is to be determined:
\[ Y = \sum_{i=1}^n c_i X_i \]
\(Y\) is unbiased if and only if \(\sum_{i=1}^n c_i = 1\).
The variance of \(Y\) is
\[ \text{var}(Y) = \sum_{i=1}^n c_i^2 \sigma_i^2 \]
The variance is minimized, subject to the unbiased constraint, when
\[ c_j = \frac{1 / \sigma_j^2}{\sum_{i=1}^n 1 / \sigma_i^2}, \quad j \in \{1, 2, \ldots, n\} \]
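A sketch of the argument, via the Cauchy-Schwarz inequality: for any coefficients with \(\sum_{i=1}^n c_i = 1\),
\[ 1 = \left(\sum_{i=1}^n c_i\right)^2 = \left(\sum_{i=1}^n (c_i \sigma_i) \frac{1}{\sigma_i}\right)^2 \le \left(\sum_{i=1}^n c_i^2 \sigma_i^2\right) \left(\sum_{i=1}^n \frac{1}{\sigma_i^2}\right) \]
so \(\text{var}(Y) \ge 1 \big/ \sum_{i=1}^n (1 / \sigma_i^2)\), with equality if and only if \(c_i \sigma_i\) is proportional to \(1 / \sigma_i\), that is, \(c_i \propto 1 / \sigma_i^2\). Normalizing the coefficients to sum to 1 gives the weights above.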
This exercise shows how to construct the Best Linear Unbiased Estimator (BLUE) of \(\mu\), assuming that the vector of standard deviations \(\sigma\) is known.
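As a small numerical illustration (a sketch, not part of the original text; the data values and standard deviations below are arbitrary), the BLUE weights and estimate can be computed directly:

```python
import numpy as np

# A minimal sketch, assuming the standard deviations sigma are known and the
# observations are uncorrelated. Computes the BLUE weights
# c_j = (1 / sigma_j^2) / sum_i (1 / sigma_i^2) and the resulting estimate.
def blue_estimate(x, sigma):
    x = np.asarray(x, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    precision = 1.0 / sigma**2           # inverse variances
    c = precision / precision.sum()      # weights sum to 1, so Y is unbiased
    y = np.dot(c, x)                     # Y = sum_i c_i X_i
    min_var = 1.0 / precision.sum()      # var(Y) = 1 / sum_i (1 / sigma_i^2)
    return y, c, min_var

# Example: three measurements of the same quantity with different precision.
x = [10.2, 9.8, 10.5]
sigma = [0.5, 1.0, 2.0]
print(blue_estimate(x, sigma))
```

More precise observations (smaller \(\sigma_i\)) receive proportionally larger weights, as the formula for \(c_j\) indicates.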
Suppose now that \(\sigma_i = \sigma\) for \(i \in \{1, 2, \ldots, n\}\), so that the outcome variables have the same standard deviation. In particular, this would be the case if the outcome variables form a random sample of size \(n\) from a distribution with mean \(\mu\) and standard deviation \(\sigma\).
In this case the variance is minimized when \(c_i = 1 / n\) for each \(i\) and hence \(Y = M\), the sample mean.
This exercise shows that the sample mean \(M\) is the best linear unbiased estimator of \(\mu\) when the standard deviations are the same, and that moreover, we do not need to know the value of the standard deviation.