
Proof of the Law of Large Numbers

Preliminaries: Monte Carlo Simulation - Part 2, How do mathematicians model randomness?

In Monte Carlo Simulation - Part 2, I introduced the Weak and Strong Laws of Large Numbers as well as the Central Limit Theorem. While the proof of the latter is a bit more technical, I wanted to include a rather intuitive proof of the (Weak) Law of Large Numbers.

The LLN essentially states that, given a sufficiently large sample size, a random sample mean is unlikely to fall far from the long-run mean. Thus, it should come as no surprise that the standard proof consists of taking the limit as $n \rightarrow \infty$ of an inequality that bounds how far a random variable is likely to stray from its mean.

The Law of Large Numbers states that, for large sample sizes, sample means will likely converge to the theoretical mean. (Image courtesy of Wikipedia.)
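
To make the picture above concrete, here is a minimal simulation sketch (my own illustration, not from the original post) in Python with NumPy: the running sample mean of simulated fair die rolls drifts toward the theoretical mean of 3.5 as the number of rolls grows. The seed and sample size are arbitrary choices.

import numpy as np

rng = np.random.default_rng(seed=0)
rolls = rng.integers(1, 7, size=100_000)   # simulated fair six-sided die rolls (values 1 through 6)
running_means = np.cumsum(rolls) / np.arange(1, rolls.size + 1)

for n in (10, 100, 1_000, 10_000, 100_000):
    print(f"n = {n:>6}: sample mean = {running_means[n - 1]:.4f}")

For large $n$, the printed means should cluster ever more tightly around 3.5, which is exactly the behavior the figure depicts.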


Below, I'll prove two inequalities for random variables, Markov's Inequality and Chebyshev's Inequality, and then show you how the (Weak) LLN falls right out of the latter when we take a limit.

Markov's Inequality: If $X$ is a non-negative random variable, then for any positive value $a$, the probability that $X$ is at least $a$ is no greater than the mean of $X$ divided by $a$. In symbols: $$
{\Bbb P}(X \geq a) \leq \frac{{\Bbb E}(X)}{a}
$$ Proof: Let $I$ be the indicator random variable for the event $X \geq a$. In other words, $I$ is a random variable which equals 1 if $X \geq a$ and 0 otherwise. Then it is always the case that $aI \leq X$:

  • If $X<a$, then $I=0$, so $aI=0 \leq X$, since $X$ is non-negative.
  • If $X \geq a$, then $I=1$, so $aI = a \cdot 1 = a \leq X$ since $X \geq a$.
Since $aI \leq X$ always holds, and taking expectations preserves inequalities, the means must satisfy ${\Bbb E}(aI) \leq {\Bbb E}(X) \ ( \spadesuit )$. But the left side of this inequality is equal to: 
$$
\begin{align}
{\Bbb E}(aI) &= a {\Bbb E}(I) \\
&= a(1\cdot {\Bbb P}(I=1) + 0 \cdot {\Bbb P}(I=0)) \\
&= a {\Bbb P}(I=1) \\
&= a {\Bbb P}(X \geq a) \tag{$\dagger$}
\end{align}
$$ Substituting $( \dagger )$ into $( \spadesuit )$ gives
$$ a {\Bbb P}(X \geq a) \leq {\Bbb E}(X) $$ Dividing both sides by $a$ yields the desired inequality.
$\square$
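
As a quick, informal sanity check (my own sketch, not part of the original post), we can estimate both sides of Markov's inequality by simulation. Here I use exponentially distributed samples with mean 2 purely as an example of a non-negative random variable; the distribution, seed, and thresholds are arbitrary.

import numpy as np

rng = np.random.default_rng(seed=1)
samples = rng.exponential(scale=2.0, size=1_000_000)   # non-negative, theoretical mean E(X) = 2

for a in (1.0, 2.0, 5.0, 10.0):
    empirical = np.mean(samples >= a)   # estimated P(X >= a)
    bound = 2.0 / a                     # Markov bound E(X)/a, using the known mean
    print(f"a = {a:>4}: P(X >= a) ~ {empirical:.4f} <= E(X)/a = {bound:.4f}")

For each threshold $a$, the empirical frequency of $\{X \geq a\}$ should stay at or below ${\Bbb E}(X)/a$ (up to simulation noise), though the bound is often far from tight.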

Chebyshev's Inequality: Let $X$ be a random variable with mean $\mu$ and non-zero, finite variance $\sigma^2$, and let $k > 0$. Then the probability is at most $\frac{1}{k^2}$ that $X$ falls at least $k$ standard deviations from its mean. (The bound is only informative when $k > 1$, since probabilities never exceed 1, but it holds for any positive $k$.) In symbols: $$
{\Bbb P}( | X - \mu | \geq k \sigma ) \leq \frac{1}{k^2}
$$ Chebyshev's inequality puts a maximum on how frequently a random variable will exhibit values far away from its mean.

Proof: $|X - \mu| \geq k \sigma$ if and only if $(X-\mu)^2 \geq (k \sigma)^2$. Since $(X-\mu)^2$ is non-negative, we can apply Markov's inequality with $a = (k \sigma)^2$: $$
{\Bbb P}\left[ (X- \mu)^2 \geq (k \sigma)^2 \right] \leq \frac{{\Bbb E}\left[ (X-\mu)^2 \right]}{(k \sigma)^2}
$$ But ${\Bbb E}\left[ (X-\mu)^2 \right]$ is the definition of the variance $\sigma^2$, so $$
{\Bbb P} \left[ (X- \mu)^2 \geq (k \sigma)^2 \right] \leq \frac{\sigma^2}{(k \sigma)^2} = \frac{1}{k^2}
$$ as desired.
$\square$
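
Chebyshev's inequality is distribution-free, so we can check it against any distribution with finite variance. Below is another informal simulation sketch of mine (the skewed gamma distribution, its parameters, and the seed are arbitrary choices), comparing the empirical tail frequency with the $1/k^2$ bound.

import numpy as np

rng = np.random.default_rng(seed=2)
mu, sigma = 5.0, 3.0
# A skewed gamma distribution with the desired mean and standard deviation:
# shape * scale = mu and shape * scale^2 = sigma^2.
samples = rng.gamma(shape=(mu / sigma) ** 2, scale=sigma ** 2 / mu, size=1_000_000)

for k in (1.0, 1.5, 2.0, 3.0):
    empirical = np.mean(np.abs(samples - mu) >= k * sigma)   # estimated tail probability
    print(f"k = {k}: P(|X - mu| >= k*sigma) ~ {empirical:.4f} <= 1/k^2 = {1 / k ** 2:.4f}")
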

Now, we can apply Chebyshev's inequality to the sample mean $\overline{X}_n$ to obtain the Weak Law of Large Numbers. In what follows, let $X_1, X_2, X_3, \dotsc$ be independent and identically distributed (iid) random variables with mean $\mu$ and finite variance $\sigma^2$.

Proof of the (Weak) Law of Large Numbers: In Monte Carlo Simulation - Part 2, we saw that the expected value of the sample mean is the long-run mean, i.e. ${\Bbb E}(\overline{X}_n) = \mu$. Furthermore, the variance of $\overline{X}_n$ is $$
\begin{align}
{\rm var} \left( \overline{X}_n \right) &= {\rm var} \left(\frac{1}{n} \sum_{i=1}^{n}{X_i} \right) \\[2mm]

&= \frac{1}{n^2} \sum_{i=1}^{n}{{\rm var}(X_i)} \\[2mm]

&= \frac{n \sigma^2}{n^2} = \frac{\sigma^2}{n}
\end{align}
$$ where the second equality uses the independence of the $X_i$ (the variance of a sum of independent random variables is the sum of their variances). Now, let $\epsilon > 0$, and write $\sigma_{\overline{X}_n}$ for the standard deviation of $\overline{X}_n$. Then $$
\begin{align}
k \sigma_{\overline{X}_n} = \epsilon
&\iff k^2 {\rm var} \left( \overline{X}_n \right) = \epsilon^2 \\[2mm]

&\iff \frac{1}{k^2} = \frac{{\rm var}\left( \overline{X}_n \right)}{\epsilon^2} = \frac{\sigma^2}{n \epsilon^2}
\end{align}
$$ Thus, by Chebyshev's inequality, $$
{\Bbb P}\left( \left| \overline{X}_n - \mu \right| \geq \epsilon \right)
\leq \frac{1}{k^2}
= \frac{\sigma^2}{n \epsilon^2}
$$ Taking the limit of both sides yields $$
\lim_{n \rightarrow \infty}{\Bbb P}\left( \left| \overline{X}_n - \mu \right| \geq \epsilon \right)
\leq \lim_{n \rightarrow \infty}{\frac{\sigma^2}{n \epsilon^2}} = 0
$$ Finally, since probabilities are non-negative, the left-hand side is squeezed between $0$ and a quantity tending to $0$, so the limit must equal zero. This is precisely the statement of the Weak Law of Large Numbers.
$\square$
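
To close the loop, here is one more simulation sketch of my own (not from the post) illustrating the bound just derived. It repeatedly draws sample means of $n$ iid Uniform$(0,1)$ variables, for which $\mu = 1/2$ and $\sigma^2 = 1/12$, and compares the empirical frequency of $\{ | \overline{X}_n - \mu | \geq \epsilon \}$ with the Chebyshev bound $\sigma^2 / (n \epsilon^2)$. The choice of distribution, $\epsilon$, seed, and number of trials are all arbitrary.

import numpy as np

rng = np.random.default_rng(seed=3)
mu, sigma2, eps = 0.5, 1 / 12, 0.02   # Uniform(0, 1): mean 1/2, variance 1/12
trials = 2_000                        # number of independent sample means per n

for n in (100, 1_000, 10_000):
    sample_means = rng.random((trials, n)).mean(axis=1)      # 2,000 draws of the sample mean
    empirical = np.mean(np.abs(sample_means - mu) >= eps)    # estimated P(|mean - mu| >= eps)
    bound = sigma2 / (n * eps ** 2)                          # Chebyshev bound sigma^2 / (n * eps^2)
    print(f"n = {n:>6}: P(|mean - mu| >= {eps}) ~ {empirical:.4f}, bound = {bound:.4f}")

Both the empirical frequency and the bound should shrink toward zero as $n$ grows, exactly as the Weak LLN predicts (for small $n$ the bound may even exceed 1, in which case it says nothing).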

I hope this post helped clarify what the Law of Large Numbers says and why it's true. Thanks for reading, and feel free to ask any questions in the Comments section.
