AM-GM Inequality (Part 2)

In Part 1 of this post, I introduced the AM-GM Inequality and showed a few examples of how to apply it to prove certain inequalities and find the minimum value of certain functions. In Part 2, I will start off with the proof (which I skipped in Part 1) and then proceed to a few more miscellaneous examples of clever applications. If you're not interested in the proof, then you can skip the next section and go straight to the examples below.

Proof of the AM-GM Inequality

AM-GM Inequality: For $x_1, x_2, \dotsc , x_n \geq 0$, $$
\dfrac{x_1 + x_2 + \dotsb + x_n}{n} \geq \sqrt[n]{x_1 x_2 \dotsm x_n}
$$ with equality if and only if $x_1 = x_2 = \dotsb = x_n$.

Proof: The proof is by induction. The base case for induction is $n=1$, for which it's obvious that the inequality holds (in fact, it's an equality).

Now suppose we have $n+1$ non-negative real numbers $x_1, x_2, \dotsc , x_n, x_{n+1}$, and assume that for any $n$ non-negative real numbers, the AM-GM inequality holds. For notational simplicity, define $\alpha$ to be the arithmetic mean of the $n+1$ numbers, $$
\alpha = \dfrac{x_1 + x_2 + \dotsb + x_n + x_{n+1}}{n+1}
$$ which  can be re-arranged as $$
(n+1) \alpha = x_1 + x_2 + \dotsb + x_n + x_{n+1} \tag{$\star$}
$$ If each of the $n+1$ numbers is equal to $\alpha$, then the AM and GM are both $\alpha$, so there is nothing to prove. Otherwise, there must be at least one of the $x_i$'s that is greater than $\alpha$ and at least one that is less than $\alpha$. Let's say $x_n > \alpha$ and $x_{n+1} < \alpha$ (if this isn't the case, rearrange the labels so that it is). So we know that $$
(x_n - \alpha)(\alpha - x_{n+1}) > 0 \tag{$\star \star$}
$$ a fact which we will use later.

Now define $y = x_n + x_{n+1} - \alpha$. Note that $y$ is positive: since $x_{n+1} \geq 0$, we have $y \geq x_n - \alpha > 0$. Furthermore, the AM of the $n$ numbers $x_1, x_2, \dotsc , x_{n-1}, y$ is $\alpha$: $$
\begin{aligned}
\dfrac{x_1 + x_2 + \dotsb + x_{n-1} + y}{n} &= \dfrac{x_1 + x_2 + \dotsb + x_{n-1} + (x_n + x_{n+1} - \alpha)}{n} \\[3mm]
&= \dfrac{(x_1 + x_2 + \dotsb + x_{n-1} + x_n + x_{n+1}) - \alpha}{n} \\[3mm]
&= \dfrac{(n+1)\alpha - \alpha}{n} \ \ \  \text{[by $\star$]} \\[3mm]
&= \dfrac{n \alpha}{n} \\[2mm]
&= \alpha
\end{aligned}
$$ By the induction hypothesis, the AM-GM inequality holds for the $n$ numbers $x_1, x_2, \dotsc , x_{n-1}, y$, i.e. $$
\begin{aligned}
\sqrt[n]{x_1 x_2 \dotsm x_{n-1} y} \ &\leq \ \alpha \\
\implies x_1 x_2 \dotsm x_{n-1} y \ &\leq \ \alpha^n \\
\implies x_1 x_2 \dotsm x_{n-1} y \cdot \alpha \ &\leq \ \alpha^{n+1} \tag{$\spadesuit$}
\end{aligned}
$$ The last ingredient is the following: $$
\begin{aligned}
(x_n - \alpha)(\alpha - x_{n+1}) &= \underbrace{(x_n + x_{n+1} - \alpha)}_{y} \alpha  - x_n x_{n+1} \\[2mm]
&> 0 \ \ \  \text{[by $\star \star$]} \\[2mm]
\implies y \alpha &> x_n x_{n+1}
\end{aligned}
$$ Plugging this into $( \spadesuit )$ yields $$
\begin{aligned}
\alpha^{n+1} &\geq x_1 x_2 \dotsm x_{n-1} (y \alpha) \\[2mm]
&> x_1 x_2 \dotsm x_{n-1} (x_n x_{n+1}) \\[2mm]
\implies \alpha &> \sqrt[n+1]{x_1 x_2 \dotsm x_{n-1} x_n x_{n+1}}
\end{aligned}
$$ i.e. when the $n+1$ numbers are not all equal, the arithmetic mean is strictly greater than the geometric mean. Combined with the trivial all-equal case, this establishes both the inequality and its equality condition, completing the induction.
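As a quick numerical sanity check of the theorem (not part of the proof), the following Python sketch compares the two means on random non-negative tuples; the sample sizes, ranges, and seed are arbitrary choices of mine:

```python
import math
import random

def am(xs):
    """Arithmetic mean of a sequence."""
    return sum(xs) / len(xs)

def gm(xs):
    """Geometric mean of a sequence of non-negative numbers."""
    return math.prod(xs) ** (1.0 / len(xs))

random.seed(0)  # arbitrary seed, for reproducibility
for _ in range(1000):
    n = random.randint(1, 8)
    xs = [random.uniform(0, 10) for _ in range(n)]
    # AM-GM: the arithmetic mean dominates the geometric mean
    assert am(xs) >= gm(xs) - 1e-9

# Equality holds when all the numbers are equal
xs = [3.7] * 5
assert abs(am(xs) - gm(xs)) < 1e-9
```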

Some more examples of AM-GM applications

In Part 1, I showed one easy example and two examples where we could apply the AM-GM inequality to find the minimum value of a function subject to a volume-like constraint. In this part, I'll show three more examples of clever applications, in order of increasing difficulty.

Example 4: Let $a_1, a_2, \dotsc , a_n$ be a sequence of positive numbers and $b_1, b_2, \dotsc , b_n$ be a permutation of the $a_i$'s. Show that $$
\dfrac{a_1}{b_1} + \dfrac{a_2}{b_2} + \dotsb + \dfrac{a_n}{b_n} \geq n
$$ Proof: The AM-GM inequality implies $$
\dfrac{1}{n} \left[ \dfrac{a_1}{b_1} + \dfrac{a_2}{b_2} + \dotsb + \dfrac{a_n}{b_n} \right]
\geq \sqrt[n \uproot4]{\dfrac{a_1}{b_1} \dfrac{a_2}{b_2} \dotsm \dfrac{a_n}{b_n}}
= 1
$$ where the last equality is true since the $a_i$'s and $b_i$'s are the same list of numbers, just rearranged. Multiplying both sides by $n$ yields the desired inequality.
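Here's a quick numerical check of Example 4 (my own sketch, not from the original problem; the ranges and seed are arbitrary):

```python
import random

def ratio_sum(a, b):
    """Sum of a_i / b_i for two equal-length sequences."""
    return sum(ai / bi for ai, bi in zip(a, b))

random.seed(1)  # arbitrary seed, for reproducibility
for _ in range(1000):
    n = random.randint(1, 10)
    a = [random.uniform(0.1, 10.0) for _ in range(n)]
    b = a[:]
    random.shuffle(b)  # b is a permutation of the a_i's
    # Example 4: the sum of the ratios a_i/b_i is at least n
    assert ratio_sum(a, b) >= n - 1e-9
```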

In Example 4, there was no volume-like constraint on the values of the $a_i$'s, but we were able to produce one on the fractions $\frac{a_i}{b_i}$ by multiplying them all together. In Example 5, we can do something similar. This one is admittedly set up shamelessly for an AM-GM application, but I still thought it was cool enough to show.

Example 5: Find the positive solutions to the system of equations: $$
x_1 + \dfrac{1}{x_2} = 4 \\[3mm]
x_2 + \dfrac{1}{x_3} = 1 \\[3mm]
\vdots \tag{$\diamondsuit$}\\[3mm]
x_{99} + \dfrac{1}{x_{100}} = 4 \\[3mm]
x_{100} + \dfrac{1}{x_1} = 1
$$ Solution: The sum $x_1 + \frac{1}{x_2}$ is 2 times the arithmetic mean of the numbers $x_1$ and $\frac{1}{x_2}$, so by the AM-GM inequality, we have (applying the same logic to each of the other 99 equations as well): $$
x_1 + \dfrac{1}{x_2} \geq 2 \sqrt{\dfrac{x_1}{x_2}} \\[3mm]
x_2 + \dfrac{1}{x_3} \geq 2 \sqrt{\dfrac{x_2}{x_3}} \\[3mm]
\vdots \tag{$\clubsuit$} \\[3mm]
x_{100} + \dfrac{1}{x_1} \geq 2 \sqrt{\dfrac{x_{100}}{x_1}}
$$ Note that putting the $x$'s under square roots is valid since we are only considering positive solutions. Now, both sides of $( \clubsuit )$ are positive in each of these inequalities, so we can multiply them together without reversing the direction of the inequality: $$
\begin{aligned}
\left( x_1 + \dfrac{1}{x_2} \right)
\left( x_2 + \dfrac{1}{x_3} \right)
\dotsm
\left( x_{100} + \dfrac{1}{x_1} \right)
&\geq 2^{100} \sqrt{\dfrac{x_1}{x_2} \dfrac{x_2}{x_3} \dotsm \dfrac{x_{100}}{x_1}} \\[2mm]
&= 2^{100} \cdot 1 \\[2mm]
&= 2^{100}
\end{aligned}
$$ thus $$
\left( x_1 + \dfrac{1}{x_2} \right)
\left( x_2 + \dfrac{1}{x_3} \right)
\dotsm
\left( x_{100} + \dfrac{1}{x_1} \right)
\geq 2^{100} \tag{$\heartsuit$}
$$ On the other hand, multiplying together the original system of equations $( \diamondsuit )$ shows that the product on the left side of $( \heartsuit )$ is equal to $4^{50} \cdot 1^{50} = 2^{100}$. In other words, the inequality $( \heartsuit )$ is an exact equality. This implies, in turn, that each of the inequalities in $( \clubsuit )$ is an exact equality too, since otherwise $( \heartsuit )$ would be a strict inequality.

Finally, we can solve for the variables: $$
\begin{aligned}
x_1 + \dfrac{1}{x_2} &= 2 \sqrt{\dfrac{x_1}{x_2}} \\[3mm]
\implies x_1 - 2 \sqrt{\dfrac{x_1}{x_2}} + \dfrac{1}{x_2} &= 0 \\[3mm]
\implies \left( \sqrt{x_1} - \sqrt{\dfrac{1}{x_2}} \right)^2 &= 0 \\[3mm]
\implies \sqrt{x_1} &= \sqrt{\dfrac{1}{x_2}} \\[3mm]
\implies x_1 &= \dfrac{1}{x_2}
\end{aligned}
$$ Similarly, $x_2 = \frac{1}{x_3}, \dotsc, x_{100} = \frac{1}{x_1}$. These, combined with the original system of equations $( \diamondsuit )$, show that the solution is $x_1 = 2, x_2 = \frac{1}{2}, x_3 = 2, x_4 = \frac{1}{2}, \dotsc, x_{99} = 2, x_{100} = \frac{1}{2}$.
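As a quick check (my own, not part of the original solution), the following Python snippet plugs the alternating solution back into all 100 equations of $( \diamondsuit )$:

```python
# x_1, ..., x_100 alternating 2, 1/2 (x_odd = 2, x_even = 1/2); x[0] is x_1
x = [2.0 if i % 2 == 0 else 0.5 for i in range(100)]

for i in range(100):
    lhs = x[i] + 1.0 / x[(i + 1) % 100]   # x_i + 1/x_{i+1}, wrapping around at the end
    expected = 4.0 if i % 2 == 0 else 1.0  # the right-hand sides alternate 4, 1, 4, 1, ...
    assert abs(lhs - expected) < 1e-12
```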

The sixth and final example will require proving a lemma, but I thought this example (and the lemma itself) was so neat that it would be worth the effort.

Lemma: Let $p(x) = a_n x^n + a_{n-1} x^{n-1} + \dotsb + a_1 x + a_0$ be a polynomial of degree $n \geq 2$ in the variable $x$ (the variable and coefficients may be real or complex, with $a_n \neq 0$), and let $r_1, r_2, \dotsc, r_n$ be the roots of $p(x)$, counted with multiplicity (in general, these are complex numbers). Then the sum of the squares of the roots is related to the coefficients by the following formula: $$
r_1^2 + r_2^2 + \dotsb + r_n^2 = \left( \dfrac{a_{n-1}}{a_n} \right)^2 - 2 \dfrac{a_{n-2}}{a_n}
$$ Proof: The polynomial $p(x)$ can alternatively be written as $$
p(x) = a_n (x-r_1)(x-r_2) \dotsm (x-r_n) \tag{$\dagger$}
$$ When expanding this product, we obtain a sum of terms, each equal to $a_n$ times a choice of either $x$ or $-r_i$ from each of the $n$ factors; we then combine "like terms," i.e. those with the same power of $x$. For $0 \leq k \leq n$, a term in the expansion contributes to the $x^k$ coefficient when, from the $n$ factors in the product $( \dagger )$, we choose an $x$ from $k$ of them and the $-r_i$ from the other $n-k$. Therefore, the coefficient of $x^k$ in the expansion is $$
a_n (-1)^{n-k} \sum_{1 \leq i_1 < i_2 < \dotsb < i_{n-k} \leq n}{r_{i_1} r_{i_2} \dotsm r_{i_{n-k}}} \tag{$\ddagger$}
$$ Since the product $( \dagger )$ is an equivalent way of writing $p(x)$, the sum $( \ddagger )$ must be equal to the coefficient $a_k$.

On the other hand, the sum of the squares of the roots can be written as $$
\begin{aligned}
r_1^2 + r_2^2 + \dotsb + r_n^2
&= (r_1 + r_2 + \dotsb + r_n)^2 - 2 \sum_{1 \leq i_1 < i_2 \leq n}{r_{i_1} r_{i_2}} \\[3mm]
&= \left( \dfrac{1}{a_n (-1)^{n-(n-1)}}a_{n-1} \right)^2
-2 \dfrac{1}{a_n (-1)^{n-(n-2)}}a_{n-2} \tag{$\maltese$} \\[3mm]
&= \left( \dfrac{a_{n-1}}{a_n} \right)^2 - 2 \dfrac{a_{n-2}}{a_n}
\end{aligned}
$$ where the equality $( \maltese )$ was obtained by plugging in $(n-1)$ and $(n-2)$ in for $k$ in $( \ddagger )$. This completes the proof.

Note: the result in the lemma is one of Newton's identities (or Newton sums) for polynomials, and the formulas $( \ddagger )$ (one formula for each value of $k$) are called Vieta's formulas. There are similar Newton's identities for the sum of the third, fourth, fifth, etc. powers of the roots. The formulas are recursive, each depending on the lower-degree Newton sums.
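Both the lemma and the formulas $( \ddagger )$ are easy to sanity-check numerically. The Python sketch below (my own; the roots and leading coefficient are chosen arbitrarily) expands $a_n (x-r_1) \dotsm (x-r_n)$, compares each coefficient against its signed elementary symmetric sum, and then checks the sum-of-squares formula:

```python
import math
from itertools import combinations

def expand(roots, an=1.0):
    """Coefficients [c_0, ..., c_n], lowest degree first, of a_n * (x - r_1)...(x - r_n)."""
    coeffs = [an]
    for r in roots:
        shifted = [0.0] + coeffs        # multiply the current polynomial by x...
        for i, c in enumerate(coeffs):
            shifted[i] -= r * c         # ...then subtract r times it
        coeffs = shifted
    return coeffs

def esym(roots, m):
    """Elementary symmetric sum e_m: sum of products of m distinct roots."""
    return sum(math.prod(c) for c in combinations(roots, m))

roots = [1.0, 2.0, 3.0]   # arbitrarily chosen real roots
an = 2.0                  # arbitrarily chosen leading coefficient
n = len(roots)
coeffs = expand(roots, an)  # p(x) = 2(x-1)(x-2)(x-3) = 2x^3 - 12x^2 + 22x - 12

# Vieta's formulas: the x^k coefficient is a_n * (-1)^(n-k) * e_{n-k}
for k in range(n + 1):
    assert abs(coeffs[k] - an * (-1) ** (n - k) * esym(roots, n - k)) < 1e-9

# The lemma: sum of squared roots = (a_{n-1}/a_n)^2 - 2 a_{n-2}/a_n
a_n, a_n1, a_n2 = coeffs[n], coeffs[n - 1], coeffs[n - 2]
assert abs(sum(r * r for r in roots) - ((a_n1 / a_n) ** 2 - 2 * a_n2 / a_n)) < 1e-9
```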

With the lemma and a slick AM-GM application, the final example will be a piece of cake.

Example 6: A polynomial with all coefficients equal to $\pm 1$ and only real roots has degree at most 3.

Proof: Suppose we have a polynomial $p(x) = a_n x^n + a_{n-1} x^{n-1} + \dotsb + a_1 x + a_0$ where each of the $a_i$'s is either $1$ or $-1$ and with roots $r_1, r_2, \dotsc, r_n$. We can assume that the leading coefficient $a_n$ is $+1$ since the polynomials with $a_n = -1$ are just the negatives of polynomials with $a_n = +1$.

By the lemma (with $a_n = 1$), the sum of the squares of the roots is $a_{n-1}^2 - 2 a_{n-2}$. Also, plugging $k=0$ into $( \ddagger )$ shows that $a_0 = (-1)^n r_1 r_2 \dotsm r_n$, so the product of the squares of the roots is $a_0^2$.

By the AM-GM inequality, we have $$
\begin{aligned}
\dfrac{r_1^2 + r_2^2 + \dotsb + r_n^2}{n} &\geq \sqrt[n \uproot2]{r_1^2 r_2^2 \dotsm r_n^2} \\[3mm]
\implies \dfrac{a_{n-1}^2 - 2 a_{n-2}}{n} &\geq \sqrt[n]{a_0^2} \\[3mm]
\implies \dfrac{3}{n} &\geq 1 \ \ \ [\text{since } a_{n-1}^2 = a_0^2 = 1 \text{, and } a_{n-2} = -1 \text{ (else the left side is } \tfrac{-1}{n} < 1)] \\[3mm]
\implies 3 &\geq n
\end{aligned}
$$ $\square$
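As an aside (my own addition), the bound $n = 3$ is actually attained: $x^3 + x^2 - x - 1 = (x-1)(x+1)^2$ has all coefficients $\pm 1$ and only the real roots $1, -1, -1$. A quick Python check of that factorization:

```python
def p(x):
    """x^3 + x^2 - x - 1: all coefficients are +1 or -1."""
    return x ** 3 + x ** 2 - x - 1

def factored(x):
    """The claimed factorization (x - 1)(x + 1)^2, which has only real roots."""
    return (x - 1) * (x + 1) ** 2

# Two degree-3 polynomials that agree at 4 or more points are identical,
# so these checks confirm p(x) = (x - 1)(x + 1)^2.
for x in [-3, -2, -1, 0, 1, 2, 3]:
    assert p(x) == factored(x)
```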

That will do it for this post. Please post any questions in the comments section. Thanks for reading, and stay tuned for part 3...

