The tail of a distribution refers to the distribution function at extreme, or limiting, values of the random variable.
Suppose that X is binomial. Then E[X] = np (see Section 4.2) and SD(X) = sqrt(np(1 - p)) (see HW 5). So, most of the mass of the distribution will lie between E[X] - 2 SD(X) and E[X] + 2 SD(X). So, we could define the tails as all values more than two standard deviations from the mean. We’ve highlighted the region outside two standard deviations of the mean in orange in the figure below.
Notice that the tails of the distribution correspond to rare events. These are unusually extreme values of X that occur with small probability. In general, a tail event occurs if X is surprisingly small or surprisingly large.
There is no hard-and-fast rule that determines where the tail of a distribution starts. We could have just as well used the regions outside 3 standard deviations of the mean. This corresponds to X < E[X] - 3 SD(X) and X > E[X] + 3 SD(X). In either case, the tails correspond to unusually large or small X.
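We can make the two-standard-deviation tail concrete with a short numerical sketch. The parameter values below (n = 50, p = 0.5) are illustrative assumptions, not values from the text:

```python
from math import comb, sqrt

# Illustrative example (assumed values): X ~ Binomial(n, p)
n, p = 50, 0.5
mu = n * p                      # mean
sigma = sqrt(n * p * (1 - p))   # standard deviation

def binom_pmf(k):
    """P(X = k) for X ~ Binomial(n, p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Total mass in the tails: values more than 2 standard deviations from the mean
tail_mass = sum(binom_pmf(k) for k in range(n + 1) if abs(k - mu) > 2 * sigma)
print(f"P(|X - mu| > 2 sigma) = {tail_mass:.4f}")
```

The printed tail mass is small, confirming that most of the distribution lies within two standard deviations of the mean.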
Tails are interesting when we want to study rare events. In each example listed below, the tails are the most important feature of study:
A seismologist is interested in the frequency and likelihood of dangerously large earthquakes.
An insurance company is interested in the frequency and likelihood of catastrophes.
An investor is interested in the frequency and likelihood of sudden changes in the value of an asset.
A gambler is interested in the chance an unlikely bet pays off.
Often, we describe the tails of a distribution using a survival function.
Often, we use survival functions, or CDFs, to find tail probabilities. A tail probability is the chance that a random variable is greater than, or less than, some threshold. In the binomial examples above, our thresholds were E[X] - 2 SD(X) (lower) and E[X] + 2 SD(X) (upper).
If X is unbounded, then tail probabilities often require a sum with infinitely many terms, or an integral with infinite bounds.
For example, if:
X is discrete, integer valued, and unbounded above, then: P(X > n) = Σ_{k=n+1}^∞ P(X = k)
X is continuous and unbounded above, then: P(X > t) = ∫_t^∞ f_X(x) dx
Sums with infinitely many terms are often harder to close than integrals with an infinite bound. Accordingly, we will pay more attention to the discrete case in this chapter.
Examples:
Geometric Distribution
Suppose that X ~ Geometric(p) for some success probability p. Here’s the geometric PMF for X from Section 2.2: P(X = k) = (1 - p)^{k-1} p, for k = 1, 2, 3, …
In a sense, the geometric distribution is all tail. Its mode is at 1, and its PMF decreases monotonically for increasing k.
Since the most likely outcome is always X = 1, the geometric distribution only has a right tail: unusually large, positive values of X.
So, to find a tail probability, we should compute the survival function P(X > n) for some unusually large n: P(X > n) = Σ_{k=n+1}^∞ (1 - p)^{k-1} p
How should we solve for the value of this sum?
Before we try anything clever, let’s simplify it as much as possible. First, take the constant multiple of p outside the sum. Then, let m = k - 1 so that we don’t need to keep track of the -1 in the exponent: P(X > n) = p Σ_{m=n}^∞ (1 - p)^m
Let j = m - n so that m = j + n. Then the sum is: P(X > n) = p Σ_{j=0}^∞ (1 - p)^{j+n} = p (1 - p)^n Σ_{j=0}^∞ (1 - p)^j
Notice that we did not need to update the upper bound of the sum since the sum runs to infinity, and infinity ± any finite offset is still infinity.
Finally, for concision, let q = 1 - p denote the failure probability. Then, the survival function can be expressed: P(X > n) = p q^n Σ_{j=0}^∞ q^j
So, to find the survival function, it is sufficient to close the sum Σ_{j=0}^∞ q^j.
This is an example of a geometric series: a sum of the form Σ_{j=0}^∞ q^j where 0 ≤ q < 1.
It’s not obvious how to simplify a geometric series.
We’ll start with two probability arguments, then check our result using the standard algebraic solution.
First, we can convert the sum over infinitely many terms into a sum with finitely many terms by thinking in terms of the complement: P(X > n) = 1 - P(X ≤ n) = 1 - Σ_{k=1}^n (1 - p)^{k-1} p
Expressing the survival function in terms of the CDF converted the infinite sum to a sum with finitely many terms, since geometric random variables are bounded below.
Notice that, while this answer is sufficient for small n, it becomes unwieldy for large n. If n = 100, then we would need to add up 100 terms to use this formula. Clearly, this formula does not generalize well.
Alternately, let’s think about the event X > n in more detail. A geometric random variable is the number of attempts up to, and including, the first success in a string of identical, independent Bernoulli trials. So, X > n means that it took at least n failures before the first success. So, X > n is the same as the event that trials 1 through n all fail. That is: P(X > n) = P(trial 1 fails, trial 2 fails, …, trial n fails)
Since the trials are independent, we can use the multiplication rule to expand the joint probability: P(X > n) = P(trial 1 fails) × P(trial 2 fails) × ⋯ × P(trial n fails) = (1 - p)^n
Therefore: P(X > n) = (1 - p)^n = q^n
Then, since P(X > n) = p q^n Σ_{j=0}^∞ q^j, we can solve for the value of the geometric series: Σ_{j=0}^∞ q^j = q^n / (p q^n) = 1/p = 1/(1 - q)
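As a sanity check, here is a short sketch (with an assumed success probability p = 0.3 and threshold n = 5) comparing a truncated version of the infinite sum against the closed form q^n:

```python
p = 0.3      # assumed success probability
q = 1 - p    # failure probability
n = 5        # tail threshold

# Survival function as a (truncated) infinite sum of the geometric PMF
approx = sum(q**(k - 1) * p for k in range(n + 1, 1000))

# Closed form from the multiplication-rule argument: P(X > n) = q^n
exact = q**n

print(approx, exact)  # both roughly 0.168
```

The truncation error is on the order of q^1000, which is negligible here.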
An Alternate Probability Argument
Consider the sum again: P(X > n) = p q^n Σ_{j=0}^∞ q^j
Notice that, while the left-hand side is a function of the threshold n, the geometric series Σ_{j=0}^∞ q^j does not depend on n. That means that the geometric series must take on the same value for all choices of n. So, to solve for the value of the geometric series, just pick a convenient n.
If n = 1, then P(X > 1) = 1 - P(X = 1). The chance a geometric random variable equals 1 is the chance of success on the first trial, p, so P(X > 1) = 1 - p = q.
Therefore: q = P(X > 1) = p q Σ_{j=0}^∞ q^j
Rearranging and cancelling like terms: Σ_{j=0}^∞ q^j = q / (p q) = 1/p
Therefore, for any 0 ≤ q < 1: Σ_{j=0}^∞ q^j = 1/(1 - q)
and the geometric survival function is: P(X > n) = p q^n × (1/p) = q^n
This matches the answer we derived by the multiplication rule.
We’ve shown that the survival function of a geometric distribution is: P(X > n) = (1 - p)^n
This function decays rapidly as n increases. In fact, it decays exponentially as a function of n: (1 - p)^n = e^{n log(1 - p)} = e^{-cn}, where c = -log(1 - p) > 0.
Accordingly, the geometric distribution has exponential tails. We will often describe a distribution’s tail behavior by comparing the rate of decay of its PMF, PDF, or survival function as a function of its input.
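A quick numerical sketch of this decay rate (p = 0.3 is an assumed value): dividing -log P(X > n) by n should give the same constant, -log(1 - p), for every choice of n:

```python
from math import log

p = 0.3      # assumed success probability
q = 1 - p    # failure probability

# The survival function is q^n; its exponential decay rate is -log(q).
rates = [-log(q**n) / n for n in (1, 5, 10, 20)]
print(rates)  # every entry is roughly 0.357 = -log(0.7)
```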
We can derive the CDF of a geometric distribution directly from its survival function: F_X(n) = P(X ≤ n) = 1 - P(X > n) = 1 - (1 - p)^n
Here’s a plot of the CDF (for a fixed choice of p):
Notice that the geometric CDF looks like the difference between 1 and an exponential function of n. That exponential function is the survival function we derived above.
This formula for the geometric CDF is interesting, since we could also write the CDF explicitly as a sum of the PMF over all k ≤ n. Equating both sides gives: Σ_{k=1}^n (1 - p)^{k-1} p = 1 - (1 - p)^n
Simplifying (let q = 1 - p): Σ_{k=0}^{n-1} q^k = (1 - q^n)/(1 - q)
This is the general form for the partial geometric sum.
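The partial-sum formula is easy to verify numerically. A minimal sketch, with assumed values p = 0.3 and n = 10:

```python
p, n = 0.3, 10
q = 1 - p

# Left side: partial sum of the geometric series
lhs = sum(q**k for k in range(n))   # sum_{k=0}^{n-1} q^k
# Right side: the closed form (1 - q^n) / (1 - q)
rhs = (1 - q**n) / (1 - q)

print(lhs, rhs)  # the two agree up to floating point error
```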
Algebraic Proof
Harmonic Series and Power Laws
The geometric distribution decays to zero fairly quickly for large inputs. In this sense it has “light” tails. As shown above, its tails are exponential. Let’s study a “heavy tailed” distribution as a contrast.
Power laws occur in a variety of applications. Here are some examples:
The frequency of use of a word is inversely proportional to its rank, when words are ordered by frequency. This means that the k-th most common word occurs with chance 1/k relative to the most common word. This specifies a distribution that obeys a power law with power 1. This observation is sometimes called Zipf’s law.
Income and wealth distributions are often Pareto distributed, so often obey power laws.
The number of connections at a node in a large graph often obeys a power law (for example, the number of friends each individual has in a social network, or the number of links pointing to a webpage).
Estimated signal-to-noise ratios in hypothesis testing. In this case, the distribution of possible signal-to-noise ratios often looks bell-shaped, but has power-law tails. The power of the power law usually increases with more samples.
The larger the power a, the faster k^{-a} decays. So, to study a case with “heavy” tails, let’s pick a small. Of our examples, the smallest suggestion was a = 1. Can we define a discrete distribution of the form: P(X = k) ∝ 1/k for k = 1, 2, 3, …?
In order for this model to be valid, the PMF must be normalized. Normalization requires: Σ_{k=1}^∞ c/k = 1
for some choice of normalizing constant c. The normalizing constant is: c = 1 / Σ_{k=1}^∞ (1/k)
The constant c is nonzero if and only if the sum in the denominator converges to a finite number. The sum, Σ_{k=1}^∞ 1/k, is the infamous harmonic series.
To show that the sum diverges, we will bound it from below, then show that the lower bound diverges.
Consider the plot comparing the series Σ_{k=1}^∞ 1/k (shaded blue area) with the integral ∫_1^∞ (1/x) dx (area under the gold curve).
Since the bar chart lies above the curve 1/x, the area under the bar chart, Σ_{k=1}^∞ 1/k, is greater than the area under the curve, ∫_1^∞ (1/x) dx. Therefore: Σ_{k=1}^∞ 1/k ≥ ∫_1^∞ (1/x) dx
If the integral diverges, then the sum must diverge as well. This is our first example of an integral test.
The integral is: ∫_1^∞ (1/x) dx = lim_{b→∞} [log(b) - log(1)] = ∞
The integral diverges since the logarithm is unbounded above. Therefore, the sum must also diverge!
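We can watch the divergence numerically: the partial sums H_n grow like log(n), so they grow without bound (the gap H_n - log(n) settles near the Euler–Mascheroni constant, about 0.577):

```python
from math import log

def harmonic(n):
    """Partial sum of the harmonic series: H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / k for k in range(1, n + 1))

for n in (10, 1000, 100000):
    # H_n tracks log(n), which is unbounded above
    print(n, harmonic(n), log(n))
```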
So, we cannot define a discrete random variable that is unbounded above with tails that decay proportional to 1/k!
Power laws with larger powers are possible; however, in each case, the associated random variables have “heavy tails.”
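To see how much heavier a power-law tail is than an exponential one, here is a sketch comparing survival functions: a power law with power 2 (normalizing constant 6/π², since Σ_{k=1}^∞ 1/k² = π²/6) versus a geometric distribution with an assumed p = 0.5:

```python
from math import pi

# Power law with power a = 2: P(X = k) = c / k^2 for k = 1, 2, 3, ...
c = 6 / pi**2   # normalizing constant, since sum of 1/k^2 is pi^2 / 6

def powerlaw_survival(n, terms=10**6):
    """P(X > n) for the power law, approximated by truncating the tail sum."""
    return c * sum(1.0 / k**2 for k in range(n + 1, terms + 1))

def geometric_survival(n, p=0.5):
    """P(X > n) = (1 - p)^n for an assumed Geometric(p)."""
    return (1 - p)**n

n = 100
print(powerlaw_survival(n))   # roughly 6e-3
print(geometric_survival(n))  # roughly 8e-31
```

At the same threshold, the power-law tail probability is dozens of orders of magnitude larger: rare events are far more likely under the heavy-tailed model.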
When tails decay slowly, the random variable behaves more erratically, and rare events occur more frequently. When a distribution has very heavy tails, typical samples from the distribution will be rare events since most of the mass of the distribution is in its tail!