Welcome to my blog! For my first post, I decided that it would be useful to write a short introduction to Bayesian learning, and its relationship with the more traditional optimization-theoretic perspective often used in artificial intelligence and machine learning, presented in a minimally technical fashion. We begin by introducing an example.
Example: binary classification using a fully connected network
First, let’s introduce notation. For simplicity suppose there are no biases, and define the following.
- $\boldsymbol{y} \in \{0, 1\}^N$: a binary vector where each element $y_i$ is a target data point, where $N$ is the amount of input data.
- $\boldsymbol{X} \in \mathbb{R}^{N \times d}$: a matrix where each row $\boldsymbol{x}_i$ is an input data vector, where $d$ is the dimensionality of each input.
- $\boldsymbol{W} \in \mathbb{R}^{h \times d}$: the matrix that maps the input to the hidden layer, where $h$ is the number of hidden units.
- $\boldsymbol{v} \in \mathbb{R}^h$: the vector that maps the hidden layer to the output.
- $\sigma$: the network's activation function, for instance a ReLU function.
- $s$: the softmax function.
The standard approach
We begin by defining an optimization problem. Let $\boldsymbol{\theta} \in \mathbb{R}^p$ be a $p$-dimensional vector consisting of all values of $\boldsymbol{W}$ and $\boldsymbol{v}$ stacked together. Our network's prediction is given by

$$
f(\boldsymbol{x}_i) = s\big(\boldsymbol{v}^T \sigma(\boldsymbol{W} \boldsymbol{x}_i)\big).
$$
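To make the prediction concrete, here is a minimal numpy sketch of the forward pass. The function name `forward`, the shape conventions, and the use of the logistic function as the two-class case of the softmax $s$ are illustrative choices on my part, not something fixed by the setup above.

```python
import numpy as np

def forward(theta, X, h):
    """Compute f(x) = s(v^T sigma(W x)) for every row x of X.

    theta stacks W (h x d) and v (h,) into one flat vector,
    matching the parameter vector defined above.
    """
    N, d = X.shape
    W = theta[:h * d].reshape(h, d)        # maps input to hidden layer
    v = theta[h * d:]                      # maps hidden layer to output
    hidden = np.maximum(X @ W.T, 0.0)      # sigma: ReLU activation
    logits = hidden @ v
    return 1.0 / (1.0 + np.exp(-logits))   # s: logistic, i.e. two-class softmax
```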
Now, we proceed to learn the weights. Let $\hat{\boldsymbol{\theta}}$ be the learned values for $\boldsymbol{\theta}$, let $\|\cdot\|_2$ be the Euclidean norm, fix some $\lambda > 0$, and set

$$
\hat{\boldsymbol{\theta}} = \operatorname*{arg\,min}_{\boldsymbol{\theta} \in \mathbb{R}^p} \left\{ -\sum_{i=1}^N \Big[ y_i \ln f(\boldsymbol{x}_i) + (1 - y_i) \ln\big(1 - f(\boldsymbol{x}_i)\big) \Big] + \lambda \|\boldsymbol{\theta}\|_2^2 \right\}.
$$
The expression being minimized is called cross-entropy loss.1 The loss is differentiable, so we can minimize it by using gradient descent or any other method we wish. Learning takes place by minimizing the loss, and the values we learn—here, $\hat{\boldsymbol{\theta}}$—are a point in $\mathbb{R}^p$.
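As a sketch of what this minimization might look like in code, the following implements the regularized cross-entropy loss and a plain gradient-descent loop, reusing `forward` from above. The finite-difference gradient, step size, and iteration count are illustrative stand-ins; a real implementation would use automatic differentiation.

```python
def loss(theta, X, y, h, lam):
    """Cross-entropy loss plus the L2 penalty, matching the display above."""
    p = np.clip(forward(theta, X, h), 1e-12, 1 - 1e-12)  # guard the logs
    ce = -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    return ce + lam * np.sum(theta ** 2)

def fit(X, y, h, lam, steps=500, lr=1e-2, eps=1e-6):
    """Gradient descent with a finite-difference gradient (illustration only)."""
    rng = np.random.default_rng(0)
    theta = 0.1 * rng.normal(size=h * X.shape[1] + h)
    for _ in range(steps):
        grad = np.empty_like(theta)
        for j in range(theta.size):  # numerical gradient: O(p) loss evaluations
            e = np.zeros_like(theta)
            e[j] = eps
            grad[j] = (loss(theta + e, X, y, h, lam)
                       - loss(theta - e, X, y, h, lam)) / (2 * eps)
        theta -= lr * grad
    return theta
```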
Why cross-entropy rather than some other mathematical expression? In most treatments of classification, the reasons given are purely intuitive: for instance, cross-entropy is often said to stabilize the optimization algorithm. More rigorous treatments1 might introduce ideas from information theory. We will provide another explanation.
The Bayesian approach
Let us now define the exact same network, but this time from a Bayesian perspective. We begin by making probabilistic assumptions on our data. Since we have that $y_i \in \{0, 1\}$, and since we assume that the order in which the data is presented cannot affect learning—this is formally called exchangeability—there is one and only one distribution that $y_i$ can follow: the Bernoulli distribution. The parameter of that distribution is the same expression as before. Hence, let

$$
y_i \mid \boldsymbol{\theta} \sim \operatorname{Bernoulli}\big[f(\boldsymbol{x}_i)\big].
$$
This is called the likelihood: it describes the assumptions we are making about the data given the parameters $\boldsymbol{\theta}$—here, that the data is binary and exchangeable. Now, define the prior for $\boldsymbol{\theta}$ as

$$
\boldsymbol{\theta} \sim \operatorname{N}\big(\boldsymbol{0}, (2\lambda)^{-1} \boldsymbol{I}\big).
$$
This describes our assumptions about $\boldsymbol{\theta}$ external to the data—here, we have assumed that all components of $\boldsymbol{\theta}$ are a priori independent mean-zero Gaussians. We can combine the prior and likelihood using Bayes' Rule

$$
p(\boldsymbol{\theta} \mid \boldsymbol{y}) = \frac{p(\boldsymbol{y} \mid \boldsymbol{\theta})\, p(\boldsymbol{\theta})}{p(\boldsymbol{y})}
$$
to obtain the posterior $p(\boldsymbol{\theta} \mid \boldsymbol{y})$. This is a probability distribution: it describes what we learned about $\boldsymbol{\theta}$ from the data. Learning takes place through the use of Bayes' Rule, and the values we learn—here, $p(\boldsymbol{\theta} \mid \boldsymbol{y})$—are a probability distribution on $\mathbb{R}^p$.
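In code, the posterior is easy to work with up to the normalizing constant $p(\boldsymbol{y})$: the unnormalized log posterior is just the log likelihood plus the log prior. A sketch, reusing `forward` and the illustrative conventions from above:

```python
def log_posterior(theta, X, y, h, lam):
    """Unnormalized log posterior: Bernoulli log likelihood + Gaussian log prior.

    The prior N(0, (2*lam)^{-1} I) contributes -lam * ||theta||^2 up to a constant.
    """
    p = np.clip(forward(theta, X, h), 1e-12, 1 - 1e-12)
    log_lik = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    log_prior = -lam * np.sum(theta ** 2)
    return log_lik + log_prior  # drops log p(y), which does not depend on theta
```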
Connecting the two approaches
Is there any relationship between $\hat{\boldsymbol{\theta}}$ and $p(\boldsymbol{\theta} \mid \boldsymbol{y})$? It turns out, yes—let's show it. First, let's write down the posterior

$$
p(\boldsymbol{\theta} \mid \boldsymbol{y}) \propto p(\boldsymbol{y} \mid \boldsymbol{\theta})\, p(\boldsymbol{\theta}) \propto \prod_{i=1}^N f(\boldsymbol{x}_i)^{y_i} \big(1 - f(\boldsymbol{x}_i)\big)^{1 - y_i} \exp\big(-\lambda \|\boldsymbol{\theta}\|_2^2\big).
$$
Now, let's take logs and simplify:

$$
\ln p(\boldsymbol{\theta} \mid \boldsymbol{y}) = \sum_{i=1}^N \Big[ y_i \ln f(\boldsymbol{x}_i) + (1 - y_i) \ln\big(1 - f(\boldsymbol{x}_i)\big) \Big] - \lambda \|\boldsymbol{\theta}\|_2^2 + \text{const}.
$$
Having computed that, note that taking logs and adding constants preserve optima, and consider the posterior mode:

$$
\operatorname*{arg\,max}_{\boldsymbol{\theta}} \, p(\boldsymbol{\theta} \mid \boldsymbol{y}) = \operatorname*{arg\,min}_{\boldsymbol{\theta}} \left\{ -\sum_{i=1}^N \Big[ y_i \ln f(\boldsymbol{x}_i) + (1 - y_i) \ln\big(1 - f(\boldsymbol{x}_i)\big) \Big] + \lambda \|\boldsymbol{\theta}\|_2^2 \right\} = \hat{\boldsymbol{\theta}}.
$$
What have we shown? Minimizing cross-entropy loss is equivalent to maximizing the posterior distribution. The loss function maps to the likelihood, and the regularization term maps to the prior.
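We can also check this numerically: as coded above, `loss` and `log_posterior` are exact negatives of each other, so their sum should not depend on $\boldsymbol{\theta}$. A quick sanity check on random data, with all sizes chosen arbitrarily for illustration:

```python
rng = np.random.default_rng(1)
N, d, h, lam = 20, 3, 4, 0.1                  # illustrative sizes
X = rng.normal(size=(N, d))
y = rng.integers(0, 2, size=N).astype(float)

# loss(theta) = -log_posterior(theta) as coded, so each sum below is zero
t1 = rng.normal(size=h * d + h)
t2 = rng.normal(size=h * d + h)
s1 = loss(t1, X, y, h, lam) + log_posterior(t1, X, y, h, lam)
s2 = loss(t2, X, y, h, lam) + log_posterior(t2, X, y, h, lam)
print(np.isclose(s1, 0.0), np.isclose(s2, 0.0))  # True True
```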
What it all means
Why is this useful? It gives us a probabilistic interpretation for learning, which helps us to construct and understand our models. This is especially true in more complicated settings: for instance, we might ask, where does $f$ come from? In fact, we can use ideas from Bayesian nonparametrics to derive $f$ by considering a likelihood on a function space under a ReLU basis expansion.2 The network's loss and architecture can both be explained in a Bayesian way.
There is much more: we could consider drawing samples from the posterior distribution, to quantify uncertainty about how much we learned about $\boldsymbol{\theta}$ from the data. Markov Chain Monte Carlo3 methods are the most common way of doing so; a minimal example is sketched below. We can use ideas from hierarchical Bayesian models to define better regularizers than $\|\cdot\|_2$—the Horseshoe4 prior is a popular example. For brevity, I'll omit further examples—the book Bayesian Data Analysis5 is a good introduction, though it largely focuses on methods of interest mainly to statisticians.
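As an illustration of the first point, here is a minimal random-walk Metropolis sampler targeting `log_posterior` from above. The proposal scale, chain length, and seed are arbitrary choices for this sketch; practical applications typically use more sophisticated samplers such as Hamiltonian Monte Carlo.

```python
def metropolis(log_post, theta0, steps=5000, scale=0.05, seed=2):
    """Random-walk Metropolis: propose a Gaussian step, accept with
    probability min(1, posterior ratio), otherwise stay put."""
    rng = np.random.default_rng(seed)
    theta = theta0.copy()
    lp = log_post(theta)
    samples = []
    for _ in range(steps):
        prop = theta + scale * rng.normal(size=theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject in log space
            theta, lp = prop, lp_prop
        samples.append(theta.copy())
    return np.array(samples)

# e.g. samples = metropolis(lambda t: log_posterior(t, X, y, h, lam), t1)
# the spread of the samples quantifies posterior uncertainty about theta
```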
At the end of the day, having many different mathematical perspectives enables us to better understand how learning works, because things that are not obvious from one perspective might be easy to see from another. Whereas the optimization-theoretic approach we began with did not give a clear reason why we should use cross-entropy loss, from a Bayesian point of view it follows directly from the binary nature of the data. Sometimes, the Bayesian approach has little to say about a particular problem, other times it has a lot. It is useful to know how to use it when the need arises, and I hope this short example has given at least one reason to read about Bayesian statistics in more detail.
References
See Chapter 5 of Deep Learning.6
See Chapter 20 of Bayesian Data Analysis.5
See Chapter 11 of Bayesian Data Analysis,5 but note that MCMC methods are far more general than presented there. An article7 by P. Diaconis gives a rather different overview.
C. M. Carvalho, N. G. Polson, and J. G. Scott. The Horseshoe estimator for sparse signals. Biometrika, 97(2):465–480, 2010.
A. Gelman, J. B. Carlin, H. S. Stern, D. B. Dunson, A. Vehtari, and D. B. Rubin. Bayesian Data Analysis. CRC Press, third edition, 2013.
I. Goodfellow, Y. Bengio, and A. Courville. Deep Learning. MIT Press, 2016.
P. Diaconis. The Markov Chain Monte Carlo revolution. Bulletin of the American Mathematical Society, 46(2):179–205, 2009.