Section 4.1 introduced the basic definition of an expectation as a weighted average. This definition provides a direct formula for computing an expectation: evaluate a weighted average of the possible values of the random variable, weighted by their likelihood.
Evaluating these averages can be tricky. We’ll spend a couple of weeks practicing integrals and sums, but it will take some work before we can confidently evaluate interesting expectations directly.
Nevertheless, expectations are popular summary values because they obey a variety of useful rules. These rules make it possible to compute many expectations without performing summation acrobatics. Instead of plugging into the definition, we can often find an expectation by applying the rules of expectation.
This section will introduce the most useful rules. We will start with an example for motivation, then will explore three sets of rules. Once we’ve built up a list of rules, we’ll come back and complete our example without working through the sum directly.
Binomial Example¶
Suppose that $X \sim \text{Binomial}(n, p)$. What is $E[X]$?
Count variables are discrete, so we could start with a sum:

$$E[X] = \sum_{k=0}^{n} k \Pr(X = k)$$
That’s not an easy sum. We could try simplifying by plugging in the binomial PMF:

$$E[X] = \sum_{k=0}^{n} k \binom{n}{k} p^k (1-p)^{n-k}$$
but, unless you’re quite clever, it’s not clear how to progress. Sums are easy to set up, but often are hard to close.
Closing the Sum
To close the sum, start by focusing on the combinatorial term involving all the factorials. It is:

$$k \binom{n}{k} = k \cdot \frac{n!}{k!\,(n-k)!} = \frac{n!}{(k-1)!\,(n-k)!}$$
This is close to, but not quite, $n$ choose $k-1$. It falls short since the terms in the denominator add to $n-1$, not $n$. That said, with a little rearranging, we can relate it to $n-1$ choose $k-1$:

$$\frac{n!}{(k-1)!\,(n-k)!} = n \cdot \frac{(n-1)!}{(k-1)!\,(n-k)!} = n \binom{n-1}{k-1}$$
So, we can write our sum:

$$E[X] = \sum_{k=0}^{n} n \binom{n-1}{k-1} p^k (1-p)^{n-k}$$
Now the term inside the sum looks almost like the PMF of a binomial, but the multiplicity (the choose factor) uses $n-1$ and $k-1$, while the powers are $k$ and $n-k$. The power $n-k$ isn’t an issue since $n-k = (n-1)-(k-1)$. To handle the extra power of $p$, write:

$$p^k = p \cdot p^{k-1}$$
Now our sum is:

$$E[X] = n p \sum_{k=0}^{n} \binom{n-1}{k-1} p^{k-1} (1-p)^{(n-1)-(k-1)}$$
Take a look at the bounds of the sum again. We started from $k = 0$. But $k = 0$ will cause us headaches if we plug $k - 1 = -1$ into a factorial. This is a clue to back up and look at the original expression. Originally, we had:

$$E[X] = \sum_{k=0}^{n} k \binom{n}{k} p^k (1-p)^{n-k}$$
The first term in this sum is $0 \cdot \binom{n}{0} p^0 (1-p)^{n} = 0$. Adding zero doesn’t change the value of a sum, so we can ignore the $k = 0$ term:

$$E[X] = \sum_{k=1}^{n} k \binom{n}{k} p^k (1-p)^{n-k}$$
So, dropping the “zero” term from our sum, we now have:

$$E[X] = n p \sum_{k=1}^{n} \binom{n-1}{k-1} p^{k-1} (1-p)^{(n-1)-(k-1)}$$
Now all the terms in the sum involve $k \geq 1$, and the bounds of the sum make sense. At this point it may look like we’ve gone to a lot of effort just to arrive at a similar sum. However, we’re actually almost done. Like most math problems, our initial steps are all about expanding terms until we reach an expanded form that is easier to simplify.
To simplify, it helps to come up with a strategy. So far we’ve just simplified what was available, then organized terms. Now, notice that the resulting sum is a sum over $n$ different terms, where each term inside the sum looks like the value of a binomial PMF on $n - 1$ trials, for $k - 1$ successes. Since binomial PMFs are a PMF, they are normalized. If we write the sum as a sum over all the values of a binomial PMF, then we will know its value must equal one!
Let $j = k - 1$. Then, when $k = 1$, $j = 0$, and when $k = n$, $j = n - 1$. Therefore,

$$E[X] = n p \sum_{j=0}^{n-1} \binom{n-1}{j} p^{j} (1-p)^{(n-1)-j}$$
Let:

$$p_Y(j) = \binom{n-1}{j} p^{j} (1-p)^{(n-1)-j}$$

denote the binomial PMF for a generic binomial random variable $Y \sim \text{Binomial}(n-1, p)$. Then, we’ve just shown that, when $X \sim \text{Binomial}(n, p)$:

$$E[X] = n p \sum_{j=0}^{n-1} p_Y(j)$$
So, by manipulating the sum, we’ve replaced a weighted average of possible values over a binomial PMF on $n$ trials, with just a sum of binomial PMF values on $n - 1$ trials. Since the second sum is the sum of a binomial PMF over all possible inputs:

$$\sum_{j=0}^{n-1} p_Y(j) = 1$$
We don’t need to use any algebra to close this sum. Instead, we’re just recalling normalization.
So, putting it all together:

$$E[X] = n p \cdot 1 = n p$$
It turns out that:

$$E[X] = n p$$
That’s a wonderfully simple formula. The expected number of successes in a string of independent, identical, binary trials is the number of trials, $n$, times the chance each individual trial succeeds, $p$. If I run 10 experiments, and each succeeds with chance 1/5, then I expect to see $10 \cdot \tfrac{1}{5} = 2$ successes.
This answer also closely tracks what we learned about the mode of the binomial. The most likely outcome for a binomial random variable is near $n p$.
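The conclusion $E[X] = n p$ is easy to sanity-check by simulation. Here is a minimal sketch in Python; the simulation sizes and helper name are our own choices, not anything from the text:

```python
import random

# A minimal simulation sketch (our own setup) checking E[X] = n * p
# for the running example: n = 10 trials, each succeeding with chance p = 1/5.
random.seed(0)
n, p, trials = 10, 0.2, 100_000

def binomial_draw(n, p):
    # Count successes in n independent Bernoulli(p) trials.
    return sum(random.random() < p for _ in range(n))

sample_mean = sum(binomial_draw(n, p) for _ in range(trials)) / trials
# sample_mean should be close to n * p = 2
```

Averaging many simulated counts approximates the expectation, so the sample mean should land near 2.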
Deriving the expectation directly is exhausting. If you haven’t, open the dropdown above to see how much work it took to get here. Whenever we reach an answer that is suspiciously simple, through a process that is decidedly opaque, we should ask, “was there a better way to find this answer?” Often, if your answer is intuitive, but your work is ornate, there is a simpler method. The rest of this chapter will develop a series of rules that make this sort of calculation a breeze.
Rules of Expectation¶
Here’s our basic strategy:
When given a random variable, always try to write out the expectation directly as a sum or an integral. If you can close it, go ahead. There’s no need to try anything sharper.
If the sum/integral is tricky, try to rewrite the random variable as a combination of simpler random variables.
Then, apply rules of expectation to break up the original expectation into a combination of the expectations of its parts. If each part is simple enough, then we can use the expectations of the parts to put together the expectation of the whole.
Expectations of Key Distributions¶
To use this strategy, we will need to know the expectations of some key reference distributions. Here are three you should always be ready to use:
Constants: If $X = c$ for some constant $c$, then $E[X] = c$.
This result follows immediately from either interpretation of the expectation. If all of the mass of the distribution is at one value, then that value must be the center of mass. Alternately, if $X$ is always $c$, then any sample average of a string of samples will be a sample average of a string of $c$’s, so must equal $c$.
Bernoulli (binary) Random Variables: Suppose that $X$ is an indicator random variable for some event $A$. Then $X \sim \text{Bernoulli}(p)$ where $p = \Pr(A)$. What is $E[X]$? Weighting each of the two possible values by its probability:

$$E[X] = 1 \cdot \Pr(A) + 0 \cdot (1 - \Pr(A)) = \Pr(A) = p$$
Symmetric Distributions: Suppose that $X$ is drawn from a distribution that is symmetric about some value $c$. Then, to balance the distribution, the only possible midpoint is $c$, so the center of mass is $c$. It follows that:

$$E[X] = c$$
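Each of these three reference expectations can be checked by simulation. A sketch with toy numbers of our own choosing:

```python
import random

# A minimal sketch (our own toy numbers) checking each reference expectation
# by simulation: a constant, an indicator, and a symmetric distribution.
random.seed(1)
N = 200_000

# Constant: X = c always, so every sample average is exactly c.
c = 7.0
const_mean = sum(c for _ in range(N)) / N

# Indicator: X = 1 with probability p, else 0, so E[X] = p.
p = 0.3
bern_mean = sum(1 if random.random() < p else 0 for _ in range(N)) / N

# Symmetric: Uniform(m - 1, m + 1) is symmetric about m, so E[X] = m.
m = 4.0
sym_mean = sum(random.uniform(m - 1, m + 1) for _ in range(N)) / N
```

The constant case is exact; the other two sample means should sit within a fraction of a percent of $p$ and $m$.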
Linearity of Expectation¶
Next, we will need rules that help us compute expectations of transformations of random variables. These are just standard algebra rules for averages.
The simplest transformation is a linear function. We’ll break this rule into three parts. The first two are each special cases of the third.
Translations: If $Y = X + c$ for some constant $c$, then $E[Y] = E[X] + c$.
As usual, we can either prove this rule using the weighted average formula for expectations, or argue it using the interpretations of expectation. Let’s work by interpretation.
Adding a constant $c$ to $X$ just shifts its distribution rightward by the constant, since $\Pr(Y = x + c) = \Pr(X = x)$. If I translate a distribution horizontally by a distance $c$, then I must also translate its center of mass horizontally by a distance of $c$. So, the new center of mass is the old center of mass, plus $c$.
Scaling: If $Y = a X$ for some constant $a$, then $E[Y] = a E[X]$.
Let’s prove this one using the weighted average formula. We’ll do the discrete case. The continuous case works for the same reason.

$$E[aX] = \sum_{x} a x \, p_X(x) = a \sum_{x} x \, p_X(x) = a E[X]$$
Combining these two rules produces the following: if $Y = a X + c$, then

$$E[aX + c] = a E[X] + c$$
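A quick way to see the combined rule in action is to apply it to simulated draws. The numbers below are our own toy choices; note that the identity holds exactly for sample averages, not just in the limit:

```python
import random

# A sketch (our own toy numbers) of linearity: if Y = aX + c,
# then E[Y] = a E[X] + c.
random.seed(2)
a, c, N = 3.0, -2.0, 100_000

xs = [random.random() for _ in range(N)]   # X ~ Uniform(0, 1), so E[X] = 1/2
ys = [a * x + c for x in xs]               # Y = aX + c

mean_x = sum(xs) / N
mean_y = sum(ys) / N
# mean_y agrees with a * mean_x + c, up to floating-point error
```

Because averaging is itself a linear operation, the sample mean of $Y$ matches $a$ times the sample mean of $X$ plus $c$ on every run, not merely on average.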
Additivity of Expectation¶
The final, and most useful, property of expectation is another statement about sums. This time, it regards the expectations of sums of random variables:

$$E[X + Y] = E[X] + E[Y]$$
To prove this result, we’ll use some of the ideas from Sections 1.3 and 1.5. Let $S = X + Y$. Then, by the weighted average formula for expectations:

$$E[S] = \sum_{s} s \Pr(S = s)$$
We can expand the chance that $S = s$ by summing over all pairs $(x, y)$ that add to $s$. Each distinct pair $(x, y)$ that adds to $s$ is a disjoint event, so, by the addition rule (see Section 1.3):

$$\Pr(S = s) = \sum_{x, y \,:\, x + y = s} \Pr(X = x, Y = y)$$
Then, moving all terms inside the sum:

$$E[S] = \sum_{s} \sum_{x, y \,:\, x + y = s} s \Pr(X = x, Y = y)$$
The sum over all possible $s$, of each pair $(x, y)$ that could add to $s$, is just the sum over all pairs $x$ and $y$. So, since $s = x + y$ for each such pair, we can write our sum more simply:

$$E[S] = \sum_{x} \sum_{y} (x + y) \Pr(X = x, Y = y)$$
Now, simplifying:

$$E[S] = \sum_{x} \sum_{y} x \Pr(X = x, Y = y) + \sum_{x} \sum_{y} y \Pr(X = x, Y = y)$$
The probabilities in each sum are joint probabilities. We can expand each using the multiplication rule from Section 1.5. For example:

$$\Pr(X = x, Y = y) = \Pr(X = x) \Pr(Y = y \mid X = x)$$
Now, let’s split up the sum. Sum over all $x$ first, then, sum over all $y$:

$$\sum_{x} \sum_{y} x \Pr(X = x, Y = y) = \sum_{x} x \Pr(X = x) \left( \sum_{y} \Pr(Y = y \mid X = x) \right)$$
Here’s the kicker. The sum inside the parentheses on the right is the sum of a distribution over all possible values of the associated random variable. Anytime we sum the PMF of a random variable (here, the PMF of $Y$ given $X = x$) over all possible values, we must get back 1. All PMFs are normalized.
So:

$$\sum_{x} \sum_{y} x \Pr(X = x, Y = y) = \sum_{x} x \Pr(X = x) = E[X]$$
The same argument applies for the second term in our original sum, so:

$$E[X + Y] = E[X] + E[Y]$$
Essentially the same arguments apply in the continuous case.
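Notably, nothing in the proof required $X$ and $Y$ to be independent. A simulation sketch (with a toy dependent pair of our own choosing) makes the point concrete:

```python
import random

# A sketch (our own toy example) of additivity: E[X + Y] = E[X] + E[Y]
# even when Y depends on X. Here Y = X**2, a deterministic function of X.
random.seed(3)
N = 100_000

xs = [random.random() for _ in range(N)]   # X ~ Uniform(0, 1), E[X] = 1/2
ys = [x * x for x in xs]                   # Y = X^2, E[Y] = 1/3, fully dependent on X

mean_x = sum(xs) / N
mean_y = sum(ys) / N
mean_sum = sum(x + y for x, y in zip(xs, ys)) / N
# mean_sum equals mean_x + mean_y up to floating-point error --
# additivity holds for sample averages regardless of dependence.
```

Even with $Y$ a deterministic function of $X$, the average of the sum matches the sum of the averages.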
Expectations of Count Variables via Additivity¶
A count variable is an integer-valued random variable that represents some sort of count. For instance, binomial random variables count successes. Geometric random variables count trials until a success. The rules established above make it easy to find the expectations of count variables, since most count variables can be expanded as a sum. After all, most counting processes occur as sequences where, each time an instance occurs, we add 1 to our running count.
Binomial Random Variables¶
Let’s try to find the expectation of a binomial again. This time, we’ll use rules instead of brute force algebra.
First, suppose that $X \sim \text{Binomial}(n, p)$. Then $X$ is the number of successes in a string of $n$ independent, identical, binary trials. So, if we let $X_i$ be an indicator for the event that the $i$th trial succeeds, then:

$$X = X_1 + X_2 + \dots + X_n$$
Then, using the additivity property:

$$E[X] = E[X_1] + E[X_2] + \dots + E[X_n]$$
Each $X_i$ is an indicator, so $X_i$ is a Bernoulli random variable with success probability $p$. Since the expectation of any Bernoulli random variable is its success probability:

$$E[X] = \underbrace{p + p + \dots + p}_{n \text{ times}} = n p$$
Done! Compare this proof to the dropdown argument provided at the start of the section. This one is much better.
It is better in two ways:
It is simpler. It involves fewer steps and is easier to follow/remember.
Each of its steps is meaningful and relies on clearly motivated logical arguments that walk directly toward the desired result. Unlike the algebraic proof, which required a large number of little steps, none of which except the last carried much intrinsic meaning, each step in this proof uses a powerful idea: count variables are sums of indicators, expectations of sums are sums of expectations, and the expectation of an indicator is its chance of success.
This is why rules are so helpful. They will allow us to find expectations in situations where direct application of the weighted average formula is ungainly.
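As a final sanity check, the two routes can be compared exactly: the weighted-average definition, evaluated term by term, should reproduce $n p$. A sketch using the running example’s numbers:

```python
from math import comb

# A sketch checking that the weighted-average definition and the indicator
# argument agree for a Binomial(n, p): sum_k k * P(X = k) equals n * p.
# The numbers n = 10, p = 0.2 come from the running example.
n, p = 10, 0.2

def binom_pmf(k, n, p):
    # P(X = k) for X ~ Binomial(n, p)
    return comb(n, k) * p**k * (1 - p)**(n - k)

direct = sum(k * binom_pmf(k, n, p) for k in range(n + 1))
# direct ≈ n * p = 2.0, up to floating-point error
```

The sum the dropdown closed by hand is, numerically, just $n p$.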
Hypergeometric Random Variables¶
Suppose you sampled $n$ individuals from a pool of total size $N$. You sample uniformly, but sample without replacement. You make sure that your sample of $n$ individuals never includes the same individual twice. Of the $N$ individuals, $N p$ possess a characteristic of interest. For example, perhaps you wanted to know what fraction of Berkeley data science majors are double majors. Then $N$ would be about 2,000, $p$ would be 44%, and $N p$ would be the number of double majors, which is about 880 students. The $n$ individuals could be a sample of 100 data science majors selected from Data 89.
Let $X$ denote the number of individuals in your sample of $n$ who possess the characteristic of interest. In our example, $X$ could be the number of students in our sample who are double majors. Abstractly, $X$ is the number of successful draws, in a sequence of $n$ uniform draws, made without replacement, from a fixed pool. Random variables of this kind are called hypergeometric random variables.
What is $E[X]$?
First, try to write the expectation as a weighted average:

$$E[X] = \sum_{k} k \Pr(X = k)$$
To fill in the sum, first we need to work out the support of $X$. The minimum and maximum values of $X$ depend on $n$, $N$, and $p$. If $n \leq N p$, then it is possible that every student we sample is a double major, so $X$ could be as large as $n$. If $n > N p$, then, at most, we sample every double major in data science, and $X \leq N p$. So, $X \leq \min(n, N p)$. Similar logic applies to $n - X$, the number of single majors in our sample. The number of single majors must be less than $n$, and less than $N(1-p)$, so $n - X \leq \min(n, N(1-p))$, which implies $X \geq \max(0, n - N(1-p))$. So:

$$E[X] = \sum_{k = \max(0,\, n - N(1-p))}^{\min(n,\, N p)} k \Pr(X = k)$$
Already this looks tough.
The PMF is even worse. To find the chance that $X = k$, use probability by proportion. There are $\binom{N}{n}$ ways to select $n$ individuals without replacement from a pool of $N$. There are $\binom{N p}{k}$ ways to select $k$ individuals with the characteristic of interest from the $N p$ in the pool. There are $\binom{N(1-p)}{n-k}$ ways to select the remaining $n - k$ from the $N(1-p)$ individuals in the pool who don’t have the characteristic of interest.
Therefore:

$$\Pr(X = k) = \frac{\binom{N p}{k} \binom{N(1-p)}{n-k}}{\binom{N}{n}}$$
So, the expectation is:

$$E[X] = \sum_{k = \max(0,\, n - N(1-p))}^{\min(n,\, N p)} k \, \frac{\binom{N p}{k} \binom{N(1-p)}{n-k}}{\binom{N}{n}}$$
That is a properly difficult sum!
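Though hard to close by hand, the sum is easy to evaluate numerically. A sketch, using rounded versions of the example’s numbers (our own choice of concrete values):

```python
from math import comb

# A sketch evaluating the hypergeometric sum numerically, using rounded
# versions of the example's numbers: a pool of N = 2000 with G = 880
# successes (44%), and n = 100 draws made without replacement.
N, G, n = 2000, 880, 100

def hypergeom_pmf(k, N, G, n):
    # Chance of exactly k successes in n draws without replacement.
    return comb(G, k) * comb(N - G, n - k) / comb(N, n)

lo, hi = max(0, n - (N - G)), min(n, G)
expectation = sum(k * hypergeom_pmf(k, N, G, n) for k in range(lo, hi + 1))
# expectation should match n * (G / N) = 44.0, up to floating-point error
```

The numerical answer foreshadows the result below: the sum collapses to $n$ times the success fraction.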
To solve it, we’ll adopt the same approach we used for the expectation of the binomial.
First, notice that $X$ is a count variable. So, let’s expand it as a sum of indicators. Imagine drawing the $n$ individuals in sequence, and checking, one at a time, whether they possess the characteristic of interest. Let $X_i$ be an indicator for the event that the $i$th individual has the characteristic of interest (e.g. is a double major). Then, just like we saw for the binomial:

$$X = X_1 + X_2 + \dots + X_n$$
So, by additivity:

$$E[X] = E[X_1] + E[X_2] + \dots + E[X_n]$$
Notice that, unlike in the binomial case, the indicators in this example are dependent. They are dependent since, each time we sample an individual, we remove them from the pool.
However, their dependence doesn’t matter since the additivity rule applies to any pair of random variables!
Now we’re almost done. As before, the expectation of an indicator is the chance that the corresponding event occurs. So, $E[X_i]$ is the probability that the $i$th individual we pick has the characteristic of interest. This is a marginal probability. It does not depend on the other individuals sampled. On any particular draw, ignoring the other draws, the chance that the selected individual has the characteristic of interest is $p$, since $100p$% of all individuals in the pool have the desired characteristic. In other words, the chance the 10th student selected is a double major is 44%, as is the chance that the 40th student selected is a double major.
So:

$$E[X] = \underbrace{p + p + \dots + p}_{n \text{ times}} = n p$$
So, just like binomial random variables, the expectation of a hypergeometric random variable is the number of draws, $n$, times the marginal chance each draw succeeds, $p$.
Notice the power of working by properties. Even though the hypergeometric PMF is much harder to work with, its expectation is just as easy as the binomial’s. Both can be broken into a sum of simple expectations using additivity, even though, when sampling without replacement, the draws are all dependent on each other!
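The dependence between draws is easy to see, and easy to confirm harmless, by simulation. A sketch with small toy numbers of our own choosing:

```python
import random

# A simulation sketch (our own small numbers): draw n individuals without
# replacement from a pool of N containing G successes, and check that the
# average count of successes is close to n * (G / N) despite dependent draws.
random.seed(4)
N, G, n, trials = 50, 20, 10, 20_000   # G / N = 0.4, so E[X] = n * 0.4 = 4

pool = [1] * G + [0] * (N - G)         # 1 marks an individual with the characteristic
sample_mean = sum(sum(random.sample(pool, n)) for _ in range(trials)) / trials
# sample_mean should be close to n * G / N = 4
```

`random.sample` draws without replacement, so each simulated count is a genuine hypergeometric draw, yet the average still lands near $n p$.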