
Section 3.3 Expected Value and Variance

Subsection 3.3.1 Expected Values

Consider the probability space \((S,P)\) with sample space \(S = \{1,2,3\}\) and probability function \(P\) defined by \(P(1)=4/5\text{,}\) \(P(2)=1/10\text{,}\) and \(P(3)=1/10\text{.}\) Assume we choose an element in \(S\) according to this probability function. Let \(X\) be the random variable whose value is equal to the element in \(S\) that is chosen. Thus, as a function \(X : S \rightarrow \mathbb{R}\text{,}\) we have \(X(1)=1\text{,}\) \(X(2)=2\text{,}\) and \(X(3)=3\text{.}\)

The “expected value” of \(X\) is the value of \(X\) that we observe “on average”. How should we define this? Since \(X\) has a much higher probability to take the value \(1\) than the other two values \(2\) and \(3\text{,}\) the value \(1\) should get a larger “weight” in the expected value of \(X\text{.}\) Based on this, it is natural to define the expected value of \(X\) to be

\begin{equation*} 1 \cdot P(1) + 2 \cdot P(2) + 3 \cdot P(3) = 1 \cdot \frac{4}{5} + 2 \cdot \frac{1}{10} + 3 \cdot \frac{1}{10} = \frac{13}{10} . \end{equation*}
Definition 3.3.1. Expected Value.

Let \((S,P)\) be a probability space and let \(X : S \rightarrow \mathbb{R}\) be a random variable. The expected value (or expectation or weighted average) of \(X\) is defined to be

\begin{equation*} \mathbb{E}(X) = \sum_{s \in S} X(s) \cdot P(s)\text{.} \end{equation*}
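As a quick sanity check (an illustrative sketch, not part of the formal development), Definition 3.3.1 can be computed directly in Python; here we verify the example from the start of this section, where the helper `expected_value` is our own name for the sum in the definition:

```python
from fractions import Fraction

def expected_value(X, P, S):
    """E(X) = sum over s in S of X(s) * P(s) (Definition 3.3.1)."""
    return sum(X(s) * P[s] for s in S)

# The example above: S = {1, 2, 3} with P(1) = 4/5, P(2) = P(3) = 1/10,
# and X the identity function on S.
S = [1, 2, 3]
P = {1: Fraction(4, 5), 2: Fraction(1, 10), 3: Fraction(1, 10)}
X = lambda s: s

print(expected_value(X, P, S))  # 13/10
```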

Assume we flip a fair coin, in which case the sample space is \(S = \{H,T\}\) and \(P(H) = P(T) = 1/2\text{.}\) Define the random variable \(X\) to have value

\begin{equation*} X = \left\{\begin{array}{ll} 1 & \mbox{if the coin comes up heads,} \\ 0 & \mbox{if the coin comes up tails.} \end{array} \right. \end{equation*}

Thus, as a function \(X : S \rightarrow \mathbb{R}\text{,}\) we have \(X(H)=1\) and \(X(T)=0\text{.}\) The expected value \(\mathbb{E}(X)\) of \(X\) is equal to

\begin{equation*} \begin{array}{rl} \mathbb{E}(X) & = X(H) \cdot P(H) + X(T) \cdot P(T) \\ & = 1 \cdot \frac{1}{2} + 0 \cdot \frac{1}{2} \\ & = \frac{1}{2} . \end{array} \end{equation*}

This example shows that the term “expected value” is a bit misleading: \(\mathbb{E}(X)\) is not necessarily a value that we expect to observe. In fact, as in this example, the value of \(X\) may never equal its expected value.

Definition 3.3.3. Bernoulli Trial.

A Bernoulli trial is a special kind of experiment that can have only two outcomes: 1 or 0. A 1 is called a “success” and a 0 is called a “failure”. The probability of success is denoted by \(p\text{,}\) and the probability of failure is therefore \(1-p\text{,}\) often denoted by \(q\text{.}\) If \(X\) is a random variable that represents the outcome of a Bernoulli trial, then

\begin{equation*} P(X=1) = p \end{equation*}

and

\begin{equation*} P(X=0) = 1-p = q \end{equation*}

where \(p + q = 1\text{.}\)

In the preceding example (Example 3.3.2) we defined a random variable \(X\) where

\begin{equation*} X = \left\{\begin{array}{ll} 1 & \mbox{if the coin comes up heads,} \\ 0 & \mbox{if the coin comes up tails.} \end{array} \right. \end{equation*}

Each coin flip is a Bernoulli trial.

By Definition 3.3.1,

\begin{equation*} \mathbb{E}(X) = 1\cdot p + 0 \cdot (1-p) = p \, \Box \end{equation*}

Assume we roll a fair die. Define the random variable \(X\) to be the value of the result. Then, \(X\) takes each of the values in \(\{1,2,3,4,5,6\}\) with equal probability \(1/6\text{,}\) and we get

\begin{equation*} \begin{array}{rl} \mathbb{E}(X) = & 1 \cdot \frac{1}{6} + 2 \cdot \frac{1}{6} + 3 \cdot \frac{1}{6} + 4 \cdot \frac{1}{6} + 5 \cdot \frac{1}{6} + 6 \cdot \frac{1}{6} \\ = & \frac{7}{2} . \end{array} \end{equation*}

Now define the random variable \(Y\) to be equal to one divided by the result of the die. In other words, \(Y = 1/X\text{.}\) This random variable takes each of the values in \(\{1,1/2,1/3,1/4,1/5,1/6\}\) with equal probability \(1/6\text{,}\) and we get

\begin{equation*} \begin{array}{rl} \mathbb{E}(Y) & = 1 \cdot \frac{1}{6} + \frac{1}{2} \cdot \frac{1}{6} + \frac{1}{3} \cdot \frac{1}{6} + \frac{1}{4} \cdot \frac{1}{6} + \frac{1}{5} \cdot \frac{1}{6} + \frac{1}{6} \cdot \frac{1}{6} \\ & = \frac{49}{120} . \end{array} \end{equation*}

Note that \(\mathbb{E}(Y) \neq 1 / \mathbb{E}(X)\text{.}\) Thus, this example shows that, in general, \(\mathbb{E}(1/X) \neq 1 / \mathbb{E}(X)\text{.}\)
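The inequality \(\mathbb{E}(1/X) \neq 1/\mathbb{E}(X)\) is easy to confirm numerically; the following sketch (assuming exact rational arithmetic via Python's `fractions` module) reproduces both expectations for the fair die:

```python
from fractions import Fraction

# Fair die: X is the result, Y = 1/X, each outcome with probability 1/6.
p = Fraction(1, 6)
E_X = sum(k * p for k in range(1, 7))               # E(X) = 7/2
E_Y = sum(Fraction(1, k) * p for k in range(1, 7))  # E(1/X) = 49/120

print(E_Y, 1 / E_X)  # 49/120 2/7
```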

Consider a fair red die and a fair blue die, and assume we roll them independently, just as in Example 3.1.7. The sample space is

\begin{equation*} S = \{ (i,j) : 1 \leq i \leq 6 , 1 \leq j \leq 6 \} , \end{equation*}

where \(i\) is the result of the red die and \(j\) is the result of the blue die. Each outcome \((i,j)\) in \(S\) has the same probability of \(1/36\text{.}\)

Let \(X\) be the random variable whose value is equal to the sum of the results of the two rolls. As a function \(X : S \rightarrow \mathbb{R}\text{,}\) we have \(X(i,j) = i+j\text{.}\) The matrix below gives all possible values of \(X\text{.}\) The leftmost column indicates the result of the red die, the top row indicates the result of the blue die, and each other entry is the corresponding value of \(X\text{.}\)

\begin{equation*} \begin{array}{|c||c|c|c|c|c|c|} \hline &1 &2 &3 &4 &5 &6 \\ \hline \hline 1 &2 &3 &4 &5 &6 &7 \\ 2 &3 &4 &5 &6 &7 &8 \\ 3 &4 &5 &6 &7 &8 &9 \\ 4 &5 &6 &7 &8 &9 &10 \\ 5 &6 &7 &8 &9 &10 &11 \\ 6 &7 &8 &9 &10 &11 &12 \\ \hline \end{array} \end{equation*}

The expected value \(\mathbb{E}(X)\) of \(X\) is equal to

\begin{equation*} \begin{array}{rl} \mathbb{E}(X) & = \sum_{(i,j) \in S} X(i,j) \cdot P(i,j) \\ & = \sum_{(i,j) \in S} (i+j) \cdot \frac{1}{36} \\ & = \frac{1}{36} \sum_{(i,j) \in S} (i+j) \\ & = \frac{1}{36} \cdot \text{the sum of all matrix entries} \\ & = \frac{1}{36} \cdot 252 \\ & = 7 . \end{array} \end{equation*}
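The sum of all \(36\) matrix entries need not be added by hand; as an informal check, the computation above can be reproduced in a few lines of Python:

```python
from fractions import Fraction

# All 36 equally likely outcomes (i, j) of the red and blue dice.
S = [(i, j) for i in range(1, 7) for j in range(1, 7)]

# E(X) = sum over (i,j) of (i + j) * 1/36, as in the derivation above.
E_X = sum((i + j) * Fraction(1, 36) for (i, j) in S)

print(E_X)  # 7
```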

Subsubsection 3.3.1.1 Comparing the Expected Values of Comparable Random Variables

Consider a probability space \((S,P)\text{,}\) and let \(X\) and \(Y\) be two random variables on \(S\text{.}\) Recall that \(X\) and \(Y\) are functions that map elements of \(S\) to real numbers. We will write \(X \leq Y\text{,}\) if for each element \(s \in S\text{,}\) we have \(X(s) \leq Y(s)\text{.}\) In other words, the value of \(X\) is at most the value of \(Y\text{,}\) no matter which outcome \(s\) is chosen.

Using Definition 3.3.1 and the assumption that \(X \leq Y\text{,}\) we obtain

\begin{equation*} \begin{array}{rl} \mathbb{E}(X) & = \sum_{s \in S} X(s) \cdot P(s) \\ & \leq \sum_{s \in S} Y(s) \cdot P(s) \\ & = \mathbb{E}(Y) . \end{array} \end{equation*}

Subsubsection 3.3.1.2 An Alternative Expression for the Expected Value

In Example 3.3.6, we used Definition 3.3.1 to compute the expected value \(\mathbb{E}(X)\) of the random variable \(X\) that was defined to be the sum of the results when rolling two fair and independent dice. This was a painful way to compute \(\mathbb{E}(X)\text{,}\) because we added all \(36\) entries in the matrix. There is a slightly easier way to determine \(\mathbb{E}(X)\text{:}\) By looking at the matrix, we see that the value \(4\) occurs three times. Thus, the event “\(X=4\)” has size \(3\text{,}\) i.e., if we consider the subset of the sample space \(S\) that corresponds to this event, then this subset has size \(3\text{.}\) Similarly, the event “\(X=7\)” has size \(6\text{,}\) because the value \(7\) occurs \(6\) times in the matrix. The table below lists the sizes of all non-empty events, together with their probabilities.

\begin{equation*} \begin{array}{|c|c|c|} \hline \text{event} & \text{size of event} & \text{probability} \\ \hline \hline X=2 & 1 & 1/36 \\ X=3 & 2 & 2/36 \\ X=4 & 3 & 3/36 \\ X=5 & 4 & 4/36 \\ X=6 & 5 & 5/36 \\ X=7 & 6 & 6/36 \\ X=8 & 5 & 5/36 \\ X=9 & 4 & 4/36 \\ X=10 & 3 & 3/36 \\ X=11 & 2 & 2/36 \\ X=12 & 1 & 1/36 \\ \hline \end{array} \end{equation*}

Based on this, we get

\begin{equation*} \begin{array}{rl} \mathbb{E}(X) & = 2 \cdot \frac{1}{36} + 3 \cdot \frac{2}{36} + 4 \cdot \frac{3}{36} + 5 \cdot \frac{4}{36} + 6 \cdot \frac{5}{36} + 7 \cdot \frac{6}{36} + \\ & 8 \cdot \frac{5}{36} + 9 \cdot \frac{4}{36} + 10 \cdot \frac{3}{36} + 11 \cdot \frac{2}{36} + 12 \cdot \frac{1}{36} \\ & = 7 . \end{array} \end{equation*}

Even though this is still quite painful, less computation is needed. What we have done is the following: In the definition of \(\mathbb{E}(X)\text{,}\) i.e.,

\begin{equation*} \mathbb{E}(X) = \sum_{(i,j) \in S} X(i,j) \cdot P(i,j) , \end{equation*}

we rearranged the terms in the summation. That is, instead of taking the sum over all elements \((i,j)\) in \(S\text{,}\) we

  • grouped together all outcomes \((i,j)\) for which \(X(i,j) = i+j\) has the same value, say, \(k\text{,}\)

  • multiplied this common value \(k\) by the probability that \(X\) is equal to \(k\text{,}\)

  • and took the sum of the resulting products over all possible values of \(k\text{.}\)

This resulted in

\begin{equation*} \mathbb{E}(X) = \sum_{k=2}^{12} k \cdot P(X=k) . \end{equation*}

The following theorem states that this can be done for any random variable.

Recall that the event “\(X=x\)” corresponds to the subset

\begin{equation*} A_x = \{ s \in S : X(s) = x \} \end{equation*}

of the sample space \(S\text{.}\) We have

\begin{equation*} \begin{array}{rl} \mathbb{E}(X) & = \sum_{s \in S} X(s) \cdot P(s) \\ & = \sum_x \sum_{s : X(s) = x} X(s) \cdot P(s) \\ & = \sum_x \sum_{s : X(s) = x} x \cdot P(s) \\ & = \sum_x \sum_{s \in A_x} x \cdot P(s) \\ & = \sum_x x \sum_{s \in A_x} P(s) \\ & = \sum_x x \cdot P \left( A_x \right) \\ & = \sum_x x \cdot P(X=x) . \end{array} \end{equation*}

When determining the expected value of a random variable \(X\text{,}\) it is usually easier to use Theorem 3.3.8 than Definition 3.3.1. To use Theorem 3.3.8, you have to do the following:

  • Determine all values \(x\) that \(X\) can take, i.e., determine the range of the function \(X\text{.}\)

  • For each such value \(x\text{,}\) determine \(P(X=x)\text{.}\)

  • Compute the sum of all products \(x \cdot P(X=x)\text{.}\)
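The three steps above can be sketched in Python (an informal illustration, using the two-dice example; the names `counts` and `P_X` are our own):

```python
from collections import Counter
from fractions import Fraction

# Steps 1 and 2: the range of X = i + j, and P(X = x) for each value x,
# obtained by grouping the 36 equally likely outcomes by their value.
counts = Counter(i + j for i in range(1, 7) for j in range(1, 7))
P_X = {x: Fraction(n, 36) for x, n in counts.items()}

# Step 3: sum the products x * P(X = x) (Theorem 3.3.8).
E_X = sum(x * q for x, q in P_X.items())

print(P_X[7], E_X)  # 1/6 7
```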

Subsection 3.3.2 Linearity of Expectation

We now come to one of the most useful tools for determining expected values:

Recall that both \(X\) and \(Y\) are functions from \(S\) to \(\mathbb{R}\text{.}\) Define the random variable \(Z\) to be \(Z=aX+bY\text{.}\) That is, as a function \(Z : S \rightarrow \mathbb{R}\text{,}\) \(Z\) is defined by

\begin{equation*} Z(s) = a \cdot X(s) + b \cdot Y(s) \end{equation*}

for all \(s\) in \(S\text{.}\) Using Definition 3.3.1, we get

\begin{equation*} \begin{array}{rl} \mathbb{E}(Z) & = \sum_{s \in S} Z(s) \cdot P(s) \\ & = \sum_{s \in S} \left( a \cdot X(s) + b \cdot Y(s) \right) \cdot P(s) \\ & = a \sum_{s \in S} X(s) \cdot P(s) + b \sum_{s \in S} Y(s) \cdot P(s) \\ & = a \cdot \mathbb{E}(X) + b \cdot \mathbb{E}(Y) \, \Box \end{array} \end{equation*}

Let us return to the example in which we roll two fair and independent dice, one being red and the other being blue. Define the random variable \(X\) to be the sum of the results of the two rolls. We have seen two ways to compute the expected value \(\mathbb{E}(X)\) of \(X\text{.}\) We now present a third way, which is the easiest one: We define two random variables

\begin{equation*} Y = \mbox{ the result of the red die} \end{equation*}

and

\begin{equation*} Z = \mbox{ the result of the blue die.} \end{equation*}

We have already seen that

\begin{equation*} \mathbb{E}(Y) = 1 \cdot \frac{1}{6} + 2 \cdot \frac{1}{6} + 3 \cdot \frac{1}{6} + 4 \cdot \frac{1}{6} + 5 \cdot \frac{1}{6} + 6 \cdot \frac{1}{6} = \frac{7}{2} . \end{equation*}

By the same computation, we have

\begin{equation*} \mathbb{E}(Z) = \frac{7}{2} . \end{equation*}

Observe that

\begin{equation*} X = Y + Z . \end{equation*}

Then, by the Linearity of Expectation (i.e., Theorem 3.3.9), we have

\begin{equation*} \begin{array}{rl} \mathbb{E}(X) & = \mathbb{E}(Y+Z) \\ & = \mathbb{E}(Y) + \mathbb{E}(Z) \\ & = \frac{7}{2} + \frac{7}{2} \\ & = 7 . \end{array} \end{equation*}
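As an informal check, the identity \(\mathbb{E}(Y+Z) = \mathbb{E}(Y) + \mathbb{E}(Z)\) for the two dice can be verified by brute force over the joint sample space:

```python
from fractions import Fraction
from itertools import product

# Joint sample space of the red and blue dice; each outcome has probability 1/36.
p = Fraction(1, 36)
S = list(product(range(1, 7), repeat=2))

E_Y = sum(i * p for (i, j) in S)        # result of the red die
E_Z = sum(j * p for (i, j) in S)        # result of the blue die
E_X = sum((i + j) * p for (i, j) in S)  # X = Y + Z

print(E_X == E_Y + E_Z)  # True
```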

We have stated the Linearity of Expectation for two random variables. The proof of Theorem 3.3.9 can easily be generalized to any finite sequence of random variables:

Subsection 3.3.3 The Geometric Distribution

Say we are performing repeated independent Bernoulli trials such that each one is successful with probability \(p\) and fails with probability \(1-p\text{.}\) What is the expected number of times that we must perform the trial before we see a success?

We model this problem in the following way: Assume we have a coin that comes up heads with probability \(p\) and, thus, comes up tails with probability \(1-p\text{.}\) We flip this coin repeatedly and independently until it comes up heads for the first time. Define the random variable \(X\) to be the number of times that we flip the coin; this includes the last coin flip, which resulted in heads. We want to determine the expected value \(\mathbb{E}(X)\) of \(X\text{.}\)

The sample space is given by

\begin{equation*} S = \{ T^{k-1} H : k \geq 1 \} , \end{equation*}

where \(T^{k-1} H\) denotes the sequence consisting of \(k-1\) tails followed by one heads. Since the coin flips are independent, the outcome \(T^{k-1} H\) has a probability of \((1-p)^{k-1} p = p (1-p)^{k-1}\text{,}\) i.e.,

\begin{equation*} P \left( T^{k-1} H \right) = p (1-p)^{k-1} . \end{equation*}

For any integer \(k \geq 1\text{,}\) \(X=k\) if and only if the coin flips give the sequence \(T^{k-1} H\text{.}\) It follows that

\begin{equation*} P(X=k) = P \left( T^{k-1} H \right) = p (1-p)^{k-1} . \end{equation*}
Definition 3.3.11. Geometric Distribution.

Let \(p\) be a real number with \(0 <p <1\text{.}\) A random variable \(X\) has a geometric distribution with parameter \(p\text{,}\) if its distribution function satisfies

\begin{equation*} P(X=k) = p (1-p)^{k-1} \end{equation*}

for any integer \(k \geq 1\text{.}\)

Informally, this makes sense: if a success occurs with probability \(p\) in each trial, then on average we should see one success for every \(1/p\) trials (for example, if \(p = 1/n\text{,}\) then we expect to perform \(1/p = n\) trials before seeing the first success).

A formal proof requires calculus, so it is not given here.

For example, if we flip a fair coin (in which case \(p=1/2\)) repeatedly and independently until it comes up heads for the first time, then the expected number of coin flips is equal to \(2\text{.}\)
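Although the formal proof is omitted, the claim \(\mathbb{E}(X) = 1/p\) can be checked numerically without calculus by truncating the infinite sum \(\sum_{k \geq 1} k \cdot p (1-p)^{k-1}\) at a large cutoff (a sketch; the cutoff of \(10{,}000\) terms is an arbitrary choice large enough that the remaining tail is negligible):

```python
# Truncated version of E(X) = sum_{k >= 1} k * p * (1-p)^(k-1)
# for a geometrically distributed X with parameter p.
def geometric_mean_truncated(p, cutoff=10_000):
    return sum(k * p * (1 - p) ** (k - 1) for k in range(1, cutoff + 1))

for p in (0.5, 0.25, 0.1):
    print(p, geometric_mean_truncated(p))  # close to 1/p in each case
```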

Subsection 3.3.4 The Binomial Distribution

Say, as in Subsection 3.3.3, we are performing repeated independent Bernoulli trials such that each one is successful with probability \(p\) and fails with probability \(1-p\text{.}\) But now we repeat the experiment a fixed number of times, say \(n\) times, for some integer \(n \geq 1\text{.}\) How many successes can we expect to see in those \(n\) trials?

We again model this problem using a coin that comes up heads with probability \(p\) and, thus, comes up tails with probability \(1-p\text{.}\) We flip the coin, independently, \(n\) times and define the random variable \(X\) to be the number of times the coin comes up heads. We want to determine the expected value \(\mathbb{E}(X)\text{.}\)

Let \(n \geq 1\) and \(k\) be integers with \(0 \leq k \leq n\text{.}\) Then, \(X=k\) if and only if there are exactly \(k\) \(H\)'s in the sequence of \(n\) coin flips. The number of such sequences is equal to \({n \choose k}\text{,}\) and each one of them has probability \(p^k (1-p)^{n-k}\text{.}\)

Definition 3.3.13. Binomial Distribution.

Let \(n \geq 1\) be an integer and let \(p\) be a real number with \(0 < p < 1\text{.}\) A random variable \(X\) has a binomial distribution with parameters \(n\) and \(p\text{,}\) if its distribution function satisfies

\begin{equation*} P(X=k) = {n \choose k} p^k (1-p)^{n-k} \end{equation*}

for any integer \(k\) with \(0 \leq k \leq n\text{.}\)

We define a sequence \(X_1,X_2,\ldots,X_n\) of random variables each representing a Bernoulli trial that takes the value 1 with probability \(p\) and the value 0 with probability \(1-p\text{.}\) Observe that

\begin{equation*} X = X_1 + X_2 + \cdots + X_n , \end{equation*}

because

  • \(X\) counts the number of heads in the sequence of \(n\) coin flips, and

  • the summation on the right-hand side is equal to the number of \(1\)'s in the sequence \(X_1,X_2,\ldots,X_n\text{,}\) which, by definition, is equal to the number of successes in the sequence of \(n\) Bernoulli trials.

Using the Linearity of Expectation (Theorem 3.3.10), we have

\begin{equation*} \begin{array}{rl} \mathbb{E}(X) & = \mathbb{E} \left( \sum_{i=1}^n X_i \right) \\ & = \sum_{i=1}^n \mathbb{E} \left( X_i \right) . \end{array} \end{equation*}

Thus, we have to determine the expected value for each \(X_i\text{.}\) Since each \(X_i\) is a Bernoulli trial, by Theorem 3.3.4,

\begin{equation*} \mathbb{E} \left( X_i \right) = p . \end{equation*}

We conclude that

\begin{equation*} \begin{array}{rl} \mathbb{E}(X) & = \sum_{i=1}^n \mathbb{E} \left( X_i \right) \\ & = \sum_{i=1}^n p \\ & = pn . \end{array} \end{equation*}
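The conclusion \(\mathbb{E}(X) = pn\) can also be checked directly from Theorem 3.3.8, summing \(k \cdot P(X=k)\) over the binomial distribution function (an informal sketch with exact rational arithmetic; the parameters \(n=10\text{,}\) \(p=3/10\) are an arbitrary illustration):

```python
from fractions import Fraction
from math import comb

# E(X) = sum over k of k * C(n, k) * p^k * (1-p)^(n-k) (Theorem 3.3.8).
def binomial_mean(n, p):
    return sum(k * comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n + 1))

n, p = 10, Fraction(3, 10)
print(binomial_mean(n, p), n * p)  # 3 3
```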

Subsection 3.3.5 Variance

The usefulness of the expected value as a prediction for the outcome of an experiment is increased when the outcome is not likely to deviate too much from the expected value. In this section we shall introduce a measure of this deviation, called the variance.

First, we must define what we mean by deviation.

Definition 3.3.15. Deviation of a Random Variable.

Let \(X\) be a random variable with expected value \(\mathbb{E}(X)\text{.}\) Then the deviation of \(X\) at \(s \in S\) is

\begin{equation*} (X(s) - \mathbb{E}(X)). \end{equation*}

The deviation can be thought of as a measure of how far \(X(s)\) is from the expected value of \(X\text{.}\)

The variance is the weighted average (or expectation) of the square of the deviation. This can be seen as answering the question “how much on average does the value of \(X\) vary from its expected value?”

Definition 3.3.16. Variance.

Let \(X\) be a random variable with expected value \(\mathbb{E}(X)\text{.}\) Then the variance of \(X\text{,}\) denoted by \(V(X)\) or \(\sigma^2\text{,}\) is

\begin{equation*} V(X) = \sum_{s \in S}(X(s) - \mathbb{E}(X))^2 \cdot P(s). \end{equation*}

Note that because of the squaring, the variance is not in the same units as \(X(s)\) and \(\mathbb{E}(X)\text{.}\) A low variance indicates that the values of \(X\) tend to be close to the expected value, while a large variance indicates that \(X\)'s outcomes are spread out over a wider range.

Definition 3.3.17. Standard Deviation of a Random Variable.

Let \(X\) be a random variable with variance \(V(X)\text{.}\) Then the standard deviation of \(X\) is

\begin{equation*} \sigma = \sqrt{V(X)} \end{equation*}

Like the variance, a low standard deviation indicates that the values of \(X\) tend to be close to the expected value, while a high standard deviation indicates that they are spread out over a wider range of values. The standard deviation is often more useful than the variance because it is in the same units as \(X\) and \(\mathbb{E}(X)\text{.}\)

Continuing our scenario from Example 3.3.5, assume we roll a fair die and define the random variable \(X\) to be the value of the result. Then \(X\) takes each of the values in \(S = \{1,2,3,4,5,6\}\) with equal probability \(1/6\text{,}\) and we have already calculated \(\mathbb{E}(X) = \frac{7}{2}\text{.}\) To use the variance formula in Definition 3.3.16, we calculate the squared deviation \((X(s) - \mathbb{E}(X))^2\) for each outcome, shown in the table below:

\begin{equation*} \begin{array}{l|l|l} \hline \hline X(s) & P(s) & (X(s) - \frac{7}{2})^2\\ \hline 1 & 1/6 & 25/4 \\ 2 & 1/6 & 9/4 \\ 3 & 1/6 & 1/4 \\ 4 & 1/6 & 1/4 \\ 5 & 1/6 & 9/4 \\ 6 & 1/6 & 25/4 \\ \end{array}. \end{equation*}

From the table we can calculate

\begin{equation*} \begin{array}{rl} V(X) & = \frac{1}{6}\left(\frac{25}{4}+\frac{9}{4}+\frac{1}{4}+\frac{1}{4}+\frac{9}{4}+\frac{25}{4}\right)\\ & = \frac{35}{12} = 2 \frac{11}{12} . \end{array} \end{equation*}

We can calculate the variance of the fair die a second way, using Theorem 3.3.19. First we calculate

\begin{equation*} (\mathbb{E}(X))^2 = \left(\frac{7}{2}\right)^2 = \frac{49}{4}\text{.} \end{equation*}

Then we must calculate the expectation of the square of \(X\text{:}\)

\begin{equation*} \begin{array}{rl} \mathbb{E}(X^2) & = 1^2 (1/6) + 2^2 (1/6) + 3^2 (1/6) + 4^2 (1/6) + 5^2 (1/6) + 6^2 (1/6)\\ & = 1 (1/6) + 4(1/6) + 9(1/6) + 16(1/6) + 25(1/6) + 36(1/6)\\ & = 91/6 . \end{array} \end{equation*}

Finally:

\begin{equation*} V(X) = 91/6 - 49/4 = 35/12 = 2 \frac{11}{12}. \end{equation*}
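Both routes to \(V(X) = 35/12\) can be verified side by side in a short Python sketch using exact rational arithmetic:

```python
from fractions import Fraction

# Fair die: each of the values 1..6 has probability 1/6.
p = Fraction(1, 6)
values = range(1, 7)
E_X = sum(x * p for x in values)                 # E(X) = 7/2

# Definition 3.3.16: V(X) = sum over s of (X(s) - E(X))^2 * P(s).
V_def = sum((x - E_X) ** 2 * p for x in values)

# Shortcut: V(X) = E(X^2) - (E(X))^2.
V_alt = sum(x**2 * p for x in values) - E_X**2

print(V_def, V_alt)  # 35/12 35/12
```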

If \(X\) is a random variable representing a Bernoulli trial, then we know from Theorem 3.3.4 that \(\mathbb{E}(X) = p.\) By Definition 3.3.1

\begin{equation*} \mathbb{E}(X^2) = 1^2 \cdot p + 0^2 \cdot (1-p) = p. \end{equation*}

It follows using Theorem 3.3.19 that

\begin{equation*} \begin{array}{rl} V(X) = & \mathbb{E}(X^2) - (\mathbb{E}(X))^2 \\ = & p - p^2 \\ = & p(1-p)\\ = & pq \end{array} \end{equation*}

The proof requires calculus, so it is not given here.

As in the proof of Theorem 3.3.14, we define a sequence of random variables \(X_1,X_2,\ldots,X_n\text{,}\) each representing a Bernoulli trial that takes the value 1 with probability \(p\) and the value 0 with probability \(1-p\text{.}\) We know from Theorem 3.3.22 that

\begin{equation*} V(X_i) = pq. \end{equation*}

Therefore using Bienaymé's Formula, Theorem 3.3.23, the variance for the whole distribution is

\begin{equation*} \begin{array}{rl} V(X) & = V(X_1 + X_2 + \dots + X_n) \\ & = V(X_1) + V(X_2) + \dots + V(X_n)\\ & = nV(X_i)\\ & = np(1-p)\\ & = npq . \end{array} \end{equation*}
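As a final informal check, \(V(X) = npq\) can be confirmed by computing \(\mathbb{E}(X^2) - (\mathbb{E}(X))^2\) directly from the binomial distribution function (a sketch; the parameters \(n=12\text{,}\) \(p=1/4\) are an arbitrary illustration):

```python
from fractions import Fraction
from math import comb

# V(X) = E(X^2) - (E(X))^2, computed from P(X=k) = C(n,k) p^k (1-p)^(n-k).
def binomial_variance(n, p):
    pmf = [comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n + 1)]
    mean = sum(k * q for k, q in enumerate(pmf))
    return sum(k**2 * q for k, q in enumerate(pmf)) - mean**2

n, p = 12, Fraction(1, 4)
print(binomial_variance(n, p), n * p * (1 - p))  # 9/4 9/4
```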

Exercises 3.3.6 Exercises for Section 3.3

1.

A number is chosen at random from the set \(S = \{-1, 0, 1\}\text{.}\) Let \(X\) be the number chosen. Find the expected value, variance, and standard deviation of \(X\text{.}\)

Answer

Because the number is chosen uniformly at random:

\begin{equation*} P(X = -1) = P(X = 0) = P(X = 1) = 1/3. \end{equation*}

The expected value of \(X\) is then

\begin{equation*} \mathbb{E}(X) = (-1)1/3 + (0) 1/3 + (1) 1/3 = 0. \end{equation*}

Using \(V(X) =\mathbb{E}(X^2) - (\mathbb{E}(X))^2\text{:}\)

\begin{equation*} \begin{array}{rl} \mathbb{E}(X^2) = & (-1)^2 (1/3) + (0)^2 (1/3) + (1)^2 (1/3) = 2/3\\ V(X) = & 2/3 - (0)^2 \\ = & 2/3. \end{array} \end{equation*}

The standard deviation is then:

\begin{equation*} \sqrt{2/3} \approx 0.82 \end{equation*}
2.

A random variable \(X\) has the distribution

\begin{equation*} \begin{array}{rl} P(X=0) = & 1/3 \\ P(X=1) = & 1/3 \\ P(X=2) = & 1/6 \\ P(X=4) = & 1/6 \\ \end{array} \end{equation*}

Find the expected value, variance, and standard deviation of \(X\text{.}\)

3.

A coin is tossed three times. Let \(X\) be the number of heads that turn up. Find \(V(X)\) and \(\sigma(X)\) (the standard deviation of \(X\)).

Answer

This is a straightforward application of the variance of a binomial distribution: \(X\) is a random variable with a binomial distribution with parameters \(n = 3\) and \(p = 1/2\text{.}\)

\begin{equation*} V(X) = npq = 3 \cdot \frac{1}{2} \cdot \frac{1}{2} = \frac{3}{4} \end{equation*}
\begin{equation*} \sigma(X) = \sqrt{3/4} \approx 0.87 \end{equation*}
4.

A random sample of 2400 people are asked if they favor a government proposal to develop new nuclear power plants. If 40 percent of the people in the country are in favor of this proposal, find the expected value and the standard deviation for the number of people in the sample who favored the proposal.

5.

In Las Vegas, a roulette wheel has 38 slots numbered 0, 00, 1, 2, . . . , 36. The 0 and 00 slots are green and half of the remaining 36 slots are red and half are black. A croupier spins the wheel and throws in an ivory ball. If you bet 1 dollar on red, you win 1 dollar if the ball stops in a red slot and otherwise you lose 1 dollar.

You place a 1-dollar bet on black. Let \(X\) be your winnings. Define \(S\text{,}\) and calculate the distribution of \(X\) as well as \(\mathbb{E}(X)\) and \(V(X)\text{.}\)

Answer

Here the set of outcomes \(S\) is the pair of colors we care about: \(\{\text{black}, \text{not black}\}\text{.}\) Let \(X\) be the random variable that represents your winnings; it takes the value \(1\) if a spin results in a black slot, and the value \(-1\) otherwise. The probability of winning a spin, \(P(X = 1)\text{,}\) is the probability that the ball lands on a black slot: \(\frac{18}{38}\text{.}\) The probability of losing a spin, \(P(X = -1)\text{,}\) is the probability that the ball lands on a green or red slot: \(\frac{20}{38}\text{.}\) Therefore:

\begin{equation*} \begin{array}{rl} \mathbb{E}(X) = & (1)\frac{18}{38} + (-1)\frac{20}{38}\\ = & \frac{-2}{38} \\ = & \frac{-1}{19}\\ \approx & - 5 \text{ cents}. \end{array} \end{equation*}

To calculate the variance, we first calculate \(\mathbb{E}(X^2)\text{:}\)

\begin{equation*} \begin{array}{rl} \mathbb{E}(X^2) = & (1)^2\frac{18}{38} + (-1)^2\frac{20}{38}\\ = & \frac{38}{38} \\ = & 1 . \end{array} \end{equation*}

Using \(V(X) =\mathbb{E}(X^2) - (\mathbb{E}(X))^2\text{:}\)

\begin{equation*} \begin{array}{rl} V(X) = & 1 - \left(\frac{-1}{19}\right)^2 \\ = & 1 - \frac{1}{361} \\ = & \frac{360}{361} \\ \approx & 0.997 . \end{array} \end{equation*}
6.

Another form of bet for roulette is to bet that a specific number (say 17) will turn up. If the ball stops on your number, you get your dollar back plus 35 dollars. If not, you lose your dollar.

You place a 1-dollar bet on the number 17. Let \(Y\) be your winnings. Define \(S\text{,}\) and calculate the distribution of \(Y\) as well as \(\mathbb{E}(Y)\) and \(V(Y)\text{.}\) Compare your answers to those from Exercise 5: \(\mathbb{E}(X)\) versus \(\mathbb{E}(Y)\text{,}\) and \(V(X)\) versus \(V(Y)\text{.}\) What do these computations tell you about the nature of your winnings if you make a sequence of bets, betting each time on a number versus betting each time on a color?

7.

We flip a fair coin 27 times (independently). For each heads, you win 3 dollars, whereas for each tails, you lose 2 dollars. Define the random variable \(Y\) to be the amount of money that you win. Compute the expected value \(\mathbb{E}(Y)\text{.}\)

Answer

For a single flip:

\begin{equation*} P(Y = 3) = P(\text{heads}) = 1/2 \end{equation*}
\begin{equation*} P(Y = -2) = P(\text{tails}) = 1/2 \end{equation*}
\begin{equation*} \begin{array}{rl} \mathbb{E}(Y) = & (3)\frac{1}{2} + (-2)\frac{1}{2}\\ = & \frac{1}{2} \\ = & 50 \text{ cents}. \end{array} \end{equation*}

Therefore the expected amount of winnings is \(27 \cdot \frac{1}{2} = 13.5\) dollars.

Alternatively we can think of this as a binomial distribution with \(p = 1/2, n = 27\text{.}\) Let \(X\) be a random variable that takes the value 1 for each head and 0 for each tail.

\begin{equation*} P(X = 1) = p = 1/2 \end{equation*}
\begin{equation*} P(X = 0) = (1-p) = 1/2 \end{equation*}

By Theorem 3.3.14

\begin{equation*} \mathbb{E}(X) = pn = 27 \cdot \frac{1}{2} = 13.5 \end{equation*}

The 13.5 is the expected number of wins, so the expected winnings from heads is \(13.5 \cdot 3 = 40.5\) dollars. We must also calculate the expected losses, which is \(13.5 \cdot 2 = 27\) dollars. So the overall expected winnings is \(40.5 - 27 = 13.5\) dollars.

8.

Assume we flip a fair coin twice, independently of each other. Define the following random variables:

\begin{equation*} \begin{array}{rl} X=&\text{ the number of heads,}\\ Y=&\text{ the number of tails,}\\ Z=&\text{ the number of heads times the number of tails.}\\ \end{array} \end{equation*}
  1. Determine the expected values of these three random variables.

  2. Are \(X\) and \(Y\) independent random variables?

  3. Are \(X\) and \(Z\) independent random variables?

  4. Are \(Y\) and \(Z\) independent random variables?