# Exponential Families

December 21, 2012

In my last post I discussed log-linear models. In this post I’d like to take another perspective on log-linear models, by thinking of them as members of an *exponential family*. There are many reasons to take this perspective: exponential families give us efficient representations of log-linear models, which is important for continuous domains; they always have conjugate priors, which provide an analytically tractable regularization method; finally, they can be viewed as maximum-entropy models for a given set of sufficient statistics. Don’t worry if these terms are unfamiliar; I will explain all of them by the end of this post. Also note that most of this material is available on the Wikipedia page on exponential families, which I used quite liberally in preparing the below exposition.

**1. Exponential Families**

An *exponential family* is a family of probability distributions, parameterized by $\theta \in \mathbb{R}^n$, of the form

$$p(x \mid \theta) \propto h(x)\exp(\theta^{\top}\phi(x)). \qquad (1)$$

Notice the similarity to the definition of a log-linear model, which is

$$p(x \mid \theta) \propto \exp(\theta^{\top}\phi(x)). \qquad (2)$$

So, a log-linear model is simply an exponential family model with $h(x) = 1$. Note that we can re-write the right-hand side of (1) as $\exp(\theta^{\top}\phi(x) + \log h(x))$, so an exponential family is really just a log-linear model with one of the coordinates of $\theta$ constrained to equal $1$ (namely, the coordinate multiplying the extra feature $\log h(x)$). Also note that the normalization constant in (1) is a function of $\theta$ (since $\theta$ fully specifies the distribution over $x$), so we can express (1) more explicitly as

$$p(x \mid \theta) = \frac{1}{Z(\theta)}\, h(x)\exp(\theta^{\top}\phi(x)), \qquad (3)$$

where

$$Z(\theta) = \int h(x)\exp(\theta^{\top}\phi(x))\, dx. \qquad (4)$$

Exponential families are capable of capturing almost all of the common distributions you are familiar with. There is an extensive table on Wikipedia; I’ve also included some of the most common below:

- *Gaussian distributions.* Let $\phi(x) = (x, x^2)$. Then $p(x \mid \theta) \propto \exp(\theta_1 x + \theta_2 x^2)$. If we let $\theta = \left(\frac{\mu}{\sigma^2}, -\frac{1}{2\sigma^2}\right)$, then $p(x \mid \theta) \propto \exp\!\left(\frac{\mu x}{\sigma^2} - \frac{x^2}{2\sigma^2}\right) \propto \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$. We therefore see that Gaussian distributions are an exponential family for $\phi(x) = (x, x^2)$.
- *Poisson distributions.* Let $h(k) = \frac{1}{k!}$ and $\phi(k) = k$, for $k$ a non-negative integer. Then $p(k \mid \theta) \propto \frac{\exp(\theta k)}{k!}$. If we let $\theta = \log \lambda$ then we get $p(k \mid \theta) \propto \frac{\lambda^k}{k!}$; we thus see that Poisson distributions are also an exponential family.
- *Multinomial distributions.* Suppose that $x \in \{1, \ldots, n\}$. Let $\phi(x)$ be an $n$-dimensional vector whose $x$th element is $1$ and where all other elements are zero. Then $p(x \mid \theta) \propto \exp(\theta_x)$. If we let $\theta_i = \log p_i$, then we obtain an arbitrary multinomial distribution. Therefore, multinomial distributions are also an exponential family.
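
To make these parameterizations concrete, here is a minimal sketch (my own illustration, not code from the post) that writes the Gaussian and Poisson densities in the exponential-family form above and checks them against `scipy`; the function names and test values are arbitrary.

```python
import numpy as np
from scipy.stats import norm, poisson

# Minimal check (not from the post) that the exponential-family parameterizations
# above reproduce the usual Gaussian and Poisson densities.

def gaussian_expfam_logpdf(x, mu, sigma2):
    # Natural parameters theta = (mu/sigma^2, -1/(2 sigma^2)), sufficient
    # statistics phi(x) = (x, x^2), base measure h(x) = 1.
    theta = np.array([mu / sigma2, -1.0 / (2.0 * sigma2)])
    phi = np.array([x, x ** 2])
    # log Z(theta), written here directly in terms of (mu, sigma^2).
    log_Z = 0.5 * np.log(2.0 * np.pi * sigma2) + mu ** 2 / (2.0 * sigma2)
    return theta @ phi - log_Z

def poisson_expfam_logpmf(k, lam):
    # theta = log(lambda), phi(k) = k, h(k) = 1/k!, Z(theta) = exp(lambda).
    theta = np.log(lam)
    log_h = -np.sum(np.log(np.arange(1, k + 1)))  # log(1/k!)
    return theta * k + log_h - lam

print(gaussian_expfam_logpdf(1.3, mu=0.5, sigma2=2.0),
      norm.logpdf(1.3, loc=0.5, scale=np.sqrt(2.0)))
print(poisson_expfam_logpmf(4, lam=2.5), poisson.logpmf(4, mu=2.5))
```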

**2. Sufficient Statistics**

A *statistic* of a random variable is any deterministic function of that variable. For instance, if $X$ is a vector of Gaussian random variables, then the sample mean and sample variance are both statistics.

Let $p(x \mid \theta)$ be a family of distributions parameterized by $\theta$, and let $X$ be a random variable with distribution given by some unknown $\theta_0$. Then a vector $T(X)$ of statistics are called *sufficient statistics* for $\theta$ if they contain all possible information about $\theta$; that is, for any function $f$, we have

$$\mathbb{E}[f(X) \mid T(X) = T_0,\ \theta = \theta_0] = G_f(T_0) \qquad (5)$$

for some function $G_f$ that has no dependence on $\theta_0$.

For instance, let $X = (X_1, \ldots, X_n)$ be a vector of independent Gaussian random variables with unknown mean $\mu$ and variance $\sigma^2$. It turns out that $T(X) = \left(\sum_{i=1}^{n} X_i,\ \sum_{i=1}^{n} X_i^2\right)$ is a sufficient statistic for $\mu$ and $\sigma^2$. This is not immediately obvious; a very useful tool for determining whether statistics are sufficient is the **Fisher-Neyman factorization theorem**:

**Theorem 1 (Fisher-Neyman).** *Suppose that $X$ has a probability density function $p(X \mid \theta)$. Then the statistics $T(X)$ are sufficient for $\theta$ if and only if $p(X \mid \theta)$ can be written in the form*

$$p(X \mid \theta) = h(X)\, g_{\theta}(T(X)). \qquad (6)$$

*In other words, the probability of $X$ can be factored into a part that does not depend on $\theta$, and a part that depends on $\theta$ only via $T(X)$.*

What is going on here, intuitively? If $p(X \mid \theta)$ depended only on $T(X)$, then $T(X)$ would definitely be a sufficient statistic. But that isn't the only way for $T(X)$ to be a sufficient statistic: $p(X \mid \theta)$ could also just not depend on $\theta$ at all, in which case $T(X)$ would trivially be a sufficient statistic (as would anything else). The Fisher-Neyman theorem essentially says that the only way in which $T(X)$ can be a sufficient statistic is if the density is a product of these two cases.

*Proof:* If (6) holds, then we can check that (5) is satisfied:

$$\mathbb{E}[f(X) \mid T(X) = T_0,\ \theta = \theta_0] = \frac{\int f(x)\, h(x)\, g_{\theta_0}(T_0)\, \mathbb{I}[T(x) = T_0]\, dx}{\int h(x)\, g_{\theta_0}(T_0)\, \mathbb{I}[T(x) = T_0]\, dx} = \frac{\int f(x)\, h(x)\, \mathbb{I}[T(x) = T_0]\, dx}{\int h(x)\, \mathbb{I}[T(x) = T_0]\, dx},$$

where the right-hand side has no dependence on $\theta_0$ (the factor $g_{\theta_0}(T_0)$ cancels).

On the other hand, if we compute $\mathbb{E}[f(X) \mid T(X) = T_0,\ \theta = \theta_0]$ for an arbitrary density $p(x \mid \theta)$, we get

$$\mathbb{E}[f(X) \mid T(X) = T_0,\ \theta = \theta_0] = \frac{\int f(x)\, p(x \mid \theta_0)\, \mathbb{I}[T(x) = T_0]\, dx}{\int p(x \mid \theta_0)\, \mathbb{I}[T(x) = T_0]\, dx}.$$

If the right-hand side cannot depend on $\theta_0$ for *any* choice of $f$, then the term that we multiply $f(x)$ by must not depend on $\theta_0$; that is, $p(x \mid \theta_0) \big/ \int p(x' \mid \theta_0)\, \mathbb{I}[T(x') = T_0]\, dx'$ must be some function $\tilde{h}(x, T_0)$ that depends only on $x$ and $T_0$ and not on $\theta_0$. On the other hand, the denominator $\int p(x' \mid \theta_0)\, \mathbb{I}[T(x') = T_0]\, dx'$ depends only on $T_0$ and $\theta_0$; call this dependence $g_{\theta_0}(T_0)$. Finally, note that $T_0 = T(x)$ is a deterministic function of $x$, so let $h(x) = \tilde{h}(x, T(x))$. We then see that $p(x \mid \theta_0) = \tilde{h}(x, T_0)\, g_{\theta_0}(T_0) = h(x)\, g_{\theta_0}(T(x))$, which is the same form as (6), thus completing the proof of the theorem.

Now, let us apply the Fisher-Neyman theorem to exponential families. By definition, the density for an exponential family factors as

$$p(x \mid \theta) = \frac{1}{Z(\theta)}\, h(x)\exp(\theta^{\top}\phi(x)).$$

If we let $T(x) = \phi(x)$ and $g_{\theta}(\phi(x)) = \frac{1}{Z(\theta)}\exp(\theta^{\top}\phi(x))$, then the Fisher-Neyman condition is met; therefore, $\phi(x)$ is a vector of sufficient statistics for the exponential family. In fact, we can go further:

**Theorem 2.** *Let $x_1, \ldots, x_n$ be drawn independently from an exponential family distribution with fixed parameter $\theta$. Then the empirical expectation $\hat{\phi} = \frac{1}{n}\sum_{i=1}^{n}\phi(x_i)$ is a sufficient statistic for $\theta$.*

*Proof:* The density for $x_1, \ldots, x_n$ given $\theta$ is

$$p(x_1, \ldots, x_n \mid \theta) = \prod_{i=1}^{n}\frac{1}{Z(\theta)}\, h(x_i)\exp(\theta^{\top}\phi(x_i)) = \left(\prod_{i=1}^{n} h(x_i)\right)\frac{\exp\!\left(n\,\theta^{\top}\hat{\phi}\right)}{Z(\theta)^{n}}.$$

Letting $h(x_1, \ldots, x_n) = \prod_{i=1}^{n} h(x_i)$ and $g_{\theta}(\hat{\phi}) = Z(\theta)^{-n}\exp(n\,\theta^{\top}\hat{\phi})$, we see that the Fisher-Neyman conditions are satisfied, so that $\hat{\phi}$ is indeed a sufficient statistic.
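
As a quick illustration of Theorem 2 (my own sketch, not from the post): for the Poisson family $\phi(x) = x$, so two datasets of the same size with the same mean should give log-likelihood curves that differ only by a constant coming from the $h(x_i) = 1/x_i!$ terms, which do not involve the parameter.

```python
import numpy as np
from scipy.stats import poisson

# Illustration (not from the post): two Poisson datasets with the same size and
# mean carry the same information about the parameter; their log-likelihoods
# differ only by a parameter-independent constant.

data_a = np.array([1, 2, 3, 6])   # mean 3
data_b = np.array([0, 3, 4, 5])   # also mean 3

lams = np.linspace(0.5, 8.0, 50)
loglik_a = np.array([poisson.logpmf(data_a, lam).sum() for lam in lams])
loglik_b = np.array([poisson.logpmf(data_b, lam).sum() for lam in lams])

# The difference is the same for every value of the parameter (up to float error).
print(np.ptp(loglik_a - loglik_b))
```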

Finally, we note (without proof) the same relationship as in the log-linear case to the gradient and Hessian of $\log p(x_1, \ldots, x_n \mid \theta)$ with respect to the model parameters:

**Theorem 3.** *Again let $x_1, \ldots, x_n$ be drawn from an exponential family distribution with parameter $\theta$. Then the gradient of $\log p(x_1, \ldots, x_n \mid \theta)$ with respect to $\theta$ is*

$$\nabla_{\theta}\log p(x_1, \ldots, x_n \mid \theta) = \sum_{i=1}^{n}\phi(x_i) - n\,\mathbb{E}_{x \sim \theta}[\phi(x)],$$

*and the Hessian is*

$$\nabla_{\theta}^{2}\log p(x_1, \ldots, x_n \mid \theta) = -n\,\mathrm{Cov}_{x \sim \theta}[\phi(x)].$$

This theorem provides an efficient algorithm for fitting the parameters of an exponential family distribution (for details on the algorithm, see the part near the end of the log-linear models post on parameter estimation).
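
Here is a hedged sketch of what that fitting algorithm can look like for the Poisson family, where $\mathbb{E}_\theta[\phi(x)] = e^{\theta}$; this is my own illustration (the variable names and step size are arbitrary), not code from the original post.

```python
import numpy as np

# Sketch (not the post's code): fit the natural parameter of a Poisson by gradient
# ascent using Theorem 3. For the Poisson, phi(x) = x and E_theta[phi(x)] = exp(theta),
# so the gradient of the average log-likelihood is mean(x) - exp(theta).

rng = np.random.default_rng(0)
data = rng.poisson(lam=3.5, size=1000)

theta = 0.0          # natural parameter, theta = log(lambda)
step_size = 0.1
for _ in range(200):
    grad = data.mean() - np.exp(theta)   # (1/n) times the gradient from Theorem 3
    theta += step_size * grad

print(np.exp(theta), data.mean())  # fitted lambda vs. sample mean; they should agree
```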

**3. Moments of an Exponential Family**

If $X$ is a real-valued random variable, then the *$p$th moment* of $X$ is $\mathbb{E}[X^{p}]$. In general, if $X$ is a random variable on $\mathbb{R}^{n}$, then for every sequence $p_1, \ldots, p_n$ of non-negative integers, there is a corresponding moment $M_{p_1, \ldots, p_n} = \mathbb{E}\!\left[X_1^{p_1}\cdots X_n^{p_n}\right]$.

In exponential families there is a very nice relationship between the normalization constant $Z(\theta)$ and the moments of $X$. Before we establish this relationship, let us define the *moment generating function* of a random variable $X$ as $M(\lambda) = \mathbb{E}[\exp(\lambda^{\top}X)]$.

**Lemma 4.** *The moment generating function for a random variable $X$ is equal to*

$$M(\lambda) = \sum_{p_1, \ldots, p_n \geq 0}\frac{M_{p_1, \ldots, p_n}}{p_1!\cdots p_n!}\,\lambda_1^{p_1}\cdots\lambda_n^{p_n}.$$

The proof of Lemma 4 is a straightforward application of Taylor's theorem, together with linearity of expectation (note that in one dimension, the expression in Lemma 4 would just be $\sum_{p=0}^{\infty}\mathbb{E}[X^{p}]\,\frac{\lambda^{p}}{p!}$).

We now see why $M(\lambda)$ is called the moment generating function: it is the exponential generating function for the moments of $X$. The moment generating function for the sufficient statistics of an exponential family is particularly easy to compute:

**Lemma 5.** *Let $x$ be distributed according to an exponential family with parameter $\theta$, base measure $h(x)$, and sufficient statistics $\phi(x)$. Then the moment generating function of $\phi(x)$ is*

$$M(\lambda) = \mathbb{E}\!\left[\exp(\lambda^{\top}\phi(x))\right] = \frac{Z(\theta + \lambda)}{Z(\theta)}.$$

*Proof:*

$$\mathbb{E}\!\left[\exp(\lambda^{\top}\phi(x))\right] = \int \frac{h(x)\exp(\theta^{\top}\phi(x))}{Z(\theta)}\exp(\lambda^{\top}\phi(x))\, dx = \frac{Z(\theta + \lambda)}{Z(\theta)}\int \frac{h(x)\exp((\theta + \lambda)^{\top}\phi(x))}{Z(\theta + \lambda)}\, dx = \frac{Z(\theta + \lambda)}{Z(\theta)},$$

where the last step uses the fact that $\frac{1}{Z(\theta + \lambda)}h(x)\exp((\theta + \lambda)^{\top}\phi(x))$ is a probability density and hence integrates to $1$.

Now, by Lemma 4 (applied to the random variable $\phi(x)$), $\frac{M_{p_1, \ldots, p_n}}{p_1!\cdots p_n!}$ is just the coefficient of $\lambda_1^{p_1}\cdots\lambda_n^{p_n}$ in the Taylor series for the moment generating function $M(\lambda)$, and hence we can compute $M_{p_1, \ldots, p_n}$ as $\left.\frac{\partial^{\,p_1 + \cdots + p_n}M(\lambda)}{\partial\lambda_1^{p_1}\cdots\partial\lambda_n^{p_n}}\right|_{\lambda = 0}$. Combining this with Lemma 5 gives us a closed-form expression for the moments of the sufficient statistics in terms of the normalization constant $Z(\theta)$:

**Lemma 6.** *The moments of the sufficient statistics of an exponential family can be computed as*

$$\mathbb{E}\!\left[\phi_1(x)^{p_1}\cdots\phi_n(x)^{p_n}\right] = \frac{1}{Z(\theta)}\left.\frac{\partial^{\,p_1 + \cdots + p_n}Z(\theta + \lambda)}{\partial\lambda_1^{p_1}\cdots\partial\lambda_n^{p_n}}\right|_{\lambda = 0} = \frac{1}{Z(\theta)}\frac{\partial^{\,p_1 + \cdots + p_n}Z(\theta)}{\partial\theta_1^{p_1}\cdots\partial\theta_n^{p_n}}.$$
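
Here is a small numerical check of Lemma 6 (my own sketch, not from the post) for the Poisson family, where $Z(\theta) = \exp(e^{\theta})$; the first two moments should come out to $\lambda$ and $\lambda + \lambda^2$.

```python
import numpy as np

# Numerical check of Lemma 6 (my illustration) for the Poisson family, where
# phi(k) = k, h(k) = 1/k!, and Z(theta) = exp(exp(theta)).

def Z(theta):
    return np.exp(np.exp(theta))

lam = 2.5
theta = np.log(lam)
eps = 1e-5

# First and second derivatives of Z by central finite differences.
dZ = (Z(theta + eps) - Z(theta - eps)) / (2 * eps)
d2Z = (Z(theta + eps) - 2 * Z(theta) + Z(theta - eps)) / eps ** 2

print(dZ / Z(theta), lam)              # E[k]   should equal lambda
print(d2Z / Z(theta), lam + lam ** 2)  # E[k^2] should equal lambda + lambda^2
```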

For those who prefer cumulants to moments, I will note that there is a version of Lemma 6 for cumulants with an even simpler formula.

**Exercise:** Use Lemma 6 to compute $\mathbb{E}[X^6]$, where $X$ is a Gaussian with mean $\mu$ and variance $\sigma^2$.
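
If you want to check your answer (or avoid taking the sixth derivative by hand), here is a symbolic sketch using `sympy`; it is my own illustration, and the closed form of $Z(\theta)$ used below comes from the standard Gaussian integral rather than anything derived in the post.

```python
import sympy as sp

# Symbolic sketch of the exercise (my illustration). For the Gaussian, phi(x) = (x, x^2),
# theta = (mu/sigma^2, -1/(2 sigma^2)), h(x) = 1, and the standard Gaussian integral gives
# Z(theta) = sqrt(pi / (-theta_2)) * exp(-theta_1^2 / (4 theta_2)).

t1, t2, mu, s2 = sp.symbols('theta_1 theta_2 mu sigma2')
Z = sp.sqrt(sp.pi / (-t2)) * sp.exp(-t1 ** 2 / (4 * t2))

# By Lemma 6, E[x^6] = (1/Z) * d^6 Z / d theta_1^6, since phi_1(x) = x.
moment6 = sp.simplify(sp.diff(Z, t1, 6) / Z)

# Substitute the natural parameters corresponding to mean mu and variance sigma^2.
moment6 = sp.expand(sp.simplify(moment6.subs({t1: mu / s2, t2: -1 / (2 * s2)})))
print(moment6)  # expected: mu**6 + 15*mu**4*sigma2 + 45*mu**2*sigma2**2 + 15*sigma2**3
```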

**4. Conjugate Priors**

Given a family of distributions $p(X \mid \theta)$, a *conjugate prior family* $p(\theta \mid \alpha)$ is a family that has the property that

$$p(\theta \mid X, \alpha) = p(\theta \mid \alpha')$$

for some $\alpha'$ depending on $\alpha$ and $X$. In other words, if the prior over $\theta$ lies in the conjugate family, and we observe $X$, then the posterior over $\theta$ also lies in the conjugate family. This is very useful algebraically as it means that we can get our posterior simply by updating the parameters of the prior. The following are examples of conjugate families:

- (Gaussian-Gaussian) Let $X \sim \mathcal{N}(\mu, \sigma^2)$ with $\sigma^2$ fixed, and let $\mu \sim \mathcal{N}(\mu_0, \sigma_0^2)$. Then, by Bayes' rule,

$$p(\mu \mid X) \propto p(\mu)\, p(X \mid \mu) \propto \exp\!\left(-\frac{(\mu - \mu_0)^2}{2\sigma_0^2}\right)\exp\!\left(-\frac{(X - \mu)^2}{2\sigma^2}\right),$$

which is again proportional to a Gaussian density in $\mu$ (the exponent is a quadratic in $\mu$ with negative leading coefficient). Therefore, $(\mu_0, \sigma_0^2)$ parameterize a family of priors over $\mu$ that is conjugate to $X \sim \mathcal{N}(\mu, \sigma^2)$.

- (Beta-Bernoulli) Let $X \in \{0, 1\}$, let $\theta \in [0, 1]$, let $p(X = 1 \mid \theta) = \theta$, and let $p(\theta \mid \alpha, \beta) \propto \theta^{\alpha - 1}(1 - \theta)^{\beta - 1}$. The distribution over $X$ given $\theta$ is then called a *Bernoulli distribution*, and that of $\theta$ given $\alpha$ and $\beta$ is called a *beta distribution*. Note that $p(X \mid \theta)$ can also be written as $\theta^{X}(1 - \theta)^{1 - X}$. From this, we see that the family of beta distributions is a conjugate prior to the family of Bernoulli distributions (see the code sketch after this list), since

$$p(\theta \mid X, \alpha, \beta) \propto \theta^{\alpha - 1}(1 - \theta)^{\beta - 1}\,\theta^{X}(1 - \theta)^{1 - X} = \theta^{(\alpha + X) - 1}(1 - \theta)^{(\beta + 1 - X) - 1}.$$

- (Gamma-Poisson) Let $p(X = k \mid \lambda) = \frac{\lambda^{k}e^{-\lambda}}{k!}$ for $k \in \mathbb{Z}_{\geq 0}$. Let $p(\lambda \mid \alpha, \beta) \propto \lambda^{\alpha - 1}e^{-\beta\lambda}$. As noted before, the distribution for $X$ given $\lambda$ is called a *Poisson distribution*; the distribution for $\lambda$ given $\alpha$ and $\beta$ is called a *gamma distribution*. We can check that the family of gamma distributions is conjugate to the family of Poisson distributions; unlike in the last two examples, the normalization constant for the Poisson distribution actually depends on $\lambda$, and so we need to include it in our calculations:

$$p(\lambda \mid X, \alpha, \beta) \propto \lambda^{\alpha - 1}e^{-\beta\lambda}\cdot\frac{\lambda^{X}e^{-\lambda}}{X!} \propto \lambda^{(\alpha + X) - 1}e^{-(\beta + 1)\lambda},$$

which is again a gamma distribution.

**Important note:** in general, a family of distributions will always have *some* conjugate family, as if nothing else the family of all probability distributions over $\theta$ will be a conjugate family. What we really care about is a conjugate family that itself has nice properties, such as tractably computable moments.
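
Here is the Beta-Bernoulli example as a few lines of code, as promised above (a minimal sketch of my own, not from the post): starting from a $\mathrm{Beta}(\alpha, \beta)$ prior, each observation simply increments one of the two hyperparameters.

```python
import numpy as np

# Beta-Bernoulli conjugacy in code (my illustration). Starting from a Beta(alpha, beta)
# prior over theta, each Bernoulli observation increments one of the two hyperparameters,
# exactly as in the posterior computation in the list above.

rng = np.random.default_rng(1)
true_theta = 0.3
observations = rng.random(50) < true_theta   # 50 Bernoulli(true_theta) draws

alpha, beta = 1.0, 1.0                       # Beta(1, 1), i.e. a uniform prior
for x in observations:
    alpha += x                               # x = 1 increments alpha
    beta += 1 - x                            # x = 0 increments beta

posterior_mean = alpha / (alpha + beta)
print(posterior_mean, observations.mean())
```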

Conjugate priors have a very nice relationship to exponential families, established in the following theorem:

**Theorem 7.** *Let $p(x \mid \theta) = \frac{1}{Z(\theta)}h(x)\exp(\theta^{\top}\phi(x))$ be an exponential family. Then*

$$p(\theta \mid \eta, \kappa) \propto \frac{\exp(\eta^{\top}\theta)}{Z(\theta)^{\kappa}}$$

*is a conjugate prior for $p(x \mid \theta)$, for any choice of $(\eta, \kappa)$. The update formula upon observing $x$ is $(\eta, \kappa) \mapsto (\eta + \phi(x), \kappa + 1)$. Furthermore, $p(\theta \mid \eta, \kappa)$ is itself an exponential family, with sufficient statistics $(\theta, -\log Z(\theta))$.*

Checking the theorem is a matter of straightforward algebra, so I will leave the proof as an exercise to the reader. Note that, as before, there is no guarantee that this conjugate family will be tractable; however, in many cases the conjugate prior given by Theorem 7 is a well-behaved family. See the Wikipedia page on conjugate priors for examples, many of which correspond to exponential family distributions.
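
As a sketch of Theorem 7 in action (my own illustration, with my own variable names): for the Poisson family, $\phi(x) = x$ and $Z(\theta) = \exp(e^{\theta})$, so the prior $p(\theta \mid \eta, \kappa) \propto \exp(\eta\theta)/Z(\theta)^{\kappa}$ is, after the change of variables $\lambda = e^{\theta}$, a $\mathrm{Gamma}(\eta, \kappa)$ distribution over the rate $\lambda$; applying the single-observation update $n$ times gives $(\eta, \kappa) \mapsto (\eta + \sum_i x_i, \kappa + n)$, which reproduces the Gamma-Poisson update from the previous section.

```python
import numpy as np
from scipy.stats import gamma

# Sketch of Theorem 7's generic update for the Poisson family (my illustration).
# With theta = log(lambda), phi(x) = x, and Z(theta) = exp(exp(theta)), the prior
# p(theta | eta, kappa) proportional to exp(eta * theta) / Z(theta)^kappa becomes,
# after the change of variables lambda = exp(theta), a Gamma(eta, kappa) over lambda.

rng = np.random.default_rng(2)
data = rng.poisson(lam=4.0, size=30)

eta, kappa = 2.0, 1.0          # prior hyperparameters, i.e. Gamma(2, 1) over lambda
eta += data.sum()              # Theorem 7: eta   <- eta + sum_i phi(x_i)
kappa += len(data)             # Theorem 7: kappa <- kappa + n

posterior = gamma(a=eta, scale=1.0 / kappa)   # Gamma with shape eta and rate kappa
print(posterior.mean(), data.mean())          # posterior mean of lambda vs. sample mean
```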

**5. Maximum Entropy and Duality**

The final property of exponential families I would like to establish is a certain *duality property*. What I mean by this is that exponential families can be thought of as the maximum entropy distributions subject to a constraint on the expected value of their sufficient statistics. For those unfamiliar with the term, the *entropy* of a distribution over $x$ with density $p(x)$ is $H[p] = -\int p(x)\log p(x)\, dx$. Intuitively, higher entropy corresponds to higher uncertainty, so a maximum entropy distribution is one specifying as much uncertainty as possible given a certain set of information (such as the values of various moments). This makes them appealing, at least in theory, from a modeling perspective, since they "encode exactly as much information as is given and no more". (Caveat: this intuition isn't entirely valid, and in practice maximum-entropy distributions aren't always necessarily appropriate.) In any case, the duality property is captured in the following theorem:

**Theorem 8.** *The distribution over $x$ with maximum entropy such that $\mathbb{E}[\phi(x)] = T$ lies in the exponential family with sufficient statistic $\phi(x)$ and $h(x) = 1$.*

Proving this fully rigorously requires the calculus of variations; I will instead give the "physicist's proof".

*Proof:* Let $p^*$ be the density for $x$. Then we can view $p^*$ as the solution to the constrained maximization problem:

$$\begin{aligned}
\underset{p}{\text{maximize}} \quad & -\int p(x)\log p(x)\, dx \\
\text{subject to} \quad & \int p(x)\phi(x)\, dx = T, \quad \int p(x)\, dx = 1.
\end{aligned}$$

By the method of Lagrange multipliers, there exist $\lambda$ and $c$ such that

$$\frac{\partial}{\partial p(x)}\left[-\int p(x)\log p(x)\, dx - \lambda^{\top}\!\left(\int p(x)\phi(x)\, dx - T\right) - c\left(\int p(x)\, dx - 1\right)\right] = 0.$$

This simplifies to:

$$-\log p^*(x) - 1 - \lambda^{\top}\phi(x) - c = 0,$$

which implies

$$p^*(x) = A\exp(-\lambda^{\top}\phi(x))$$

for some constants $A$ and $\lambda$. In particular, if we let $\theta = -\lambda$ and $Z(\theta) = 1/A$, then we recover the exponential family with $h(x) = 1$, as claimed.
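
Here is a numerical illustration of Theorem 8 (my own sketch, not from the post): maximize the entropy of a distribution on $\{0, \ldots, 5\}$ subject to a mean constraint, and check that the solution has the exponential-family form $p(x) \propto \exp(\theta x)$, i.e. that $\log p(x)$ is affine in $x$.

```python
import numpy as np
from scipy.optimize import minimize

# Numerical illustration of Theorem 8 (my sketch): find the maximum-entropy
# distribution on {0, ..., 5} with mean 2.0, and check that log p(x) is affine in x,
# i.e. that the solution is proportional to exp(theta * x).

xs = np.arange(6)
target_mean = 2.0

def neg_entropy(p):
    return np.sum(p * np.log(p))

constraints = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},               # normalization
    {"type": "eq", "fun": lambda p: np.sum(p * xs) - target_mean},  # mean constraint
]
bounds = [(1e-9, 1.0)] * len(xs)
p0 = np.full(len(xs), 1.0 / len(xs))

result = minimize(neg_entropy, p0, bounds=bounds, constraints=constraints)
p = result.x

# If Theorem 8 holds, log p(x) = theta * x - log Z, so the consecutive differences
# of log p should all be (roughly) the same number theta.
print(np.diff(np.log(p)))
```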

**6. Conclusion**

Hopefully I have by now convinced you that exponential families have many nice properties: they have conjugate priors, simple-to-fit parameters, and easily computed moments. While exponential families aren't always appropriate models for a given situation, their tractability makes them the model of choice when no other information is present; and, since they can be obtained as maximum-entropy families, they are actually appropriate models in a wide range of circumstances.

**Comments**

Very nice write up, extremely helpful, thank you!

Very nice introduction to exponential families! Thank you for sharing it.

I think there is a flaw in your proof of Lemma 5.

The “\lambda^T x” should be “\lambda^T \phi(x)”

i.e., it is not the moment generating function of X, but of the sufficient statistics \phi(x).

Thus, the exercise works only because there is a component \phi(x) = x in the Gaussian case, and the straightforward computation is still exhausting if done by hand: it is the 6th derivative of exp(f(t)), where f(t) is a 2nd-order polynomial in t.

Thanks! If I understand correctly, the actual place where something goes wrong is Lemma 6, correct? (I.e., the moment formula I give is only correct in the case where phi(x) = x, but the statement of Lemma 5 is in fact correct even though the proof is wrong.)

Best,

Jacob