Bayesian Inference on Clusters and GPUs - the Hardware/Software Approach to Scalable Computation

· Talk · Carnegie Mellon University

Abstract. Gibbs sampling is a Markov chain Monte Carlo (MCMC) method often used in Bayesian learning. MCMC methods can be difficult to deploy on parallel and distributed systems due to their inherently sequential nature. We study asynchronous Gibbs sampling, which achieves parallelism by simply ignoring sequential requirements. This method has been shown to produce good empirical results for some hierarchical models, and is popular in the topic modeling community, but has also been shown to diverge for other targets. We introduce a theoretical framework for analyzing asynchronous Gibbs sampling and other extensions of MCMC that do not possess the Markov property. We prove that asynchronous Gibbs can be modified so that it converges under appropriate regularity conditions - we call this the exact asynchronous Gibbs algorithm. We compare the exact and approximate algorithms on a set of examples, including two where asynchronous Gibbs works well and one where it fails dramatically. We conclude with a set of heuristics describing settings where the algorithm can be used effectively.
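To make the idea of "ignoring sequential requirements" concrete, here is a minimal toy sketch (an illustrative assumption, not the talk's algorithm): a Gibbs sampler for a bivariate normal with correlation `rho`, where a `stale` parameter lets each coordinate update condition on a lagged copy of the other coordinate, mimicking workers that read stale state instead of waiting for synchronization.

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_iter, seed=0, stale=0):
    """Gibbs sampler for a bivariate normal with correlation rho.

    stale=0 gives the standard sequential sampler; stale>0 mimics
    asynchronous Gibbs by conditioning each update on values that lag
    `stale` steps behind (a toy stand-in for stale reads across
    workers -- an assumption for illustration, not the talk's method).
    """
    rng = np.random.default_rng(seed)
    s = np.sqrt(1.0 - rho**2)            # conditional standard deviation
    hist_x, hist_y = [0.0], [0.0]        # full update histories
    out = np.empty((n_iter, 2))
    for t in range(n_iter):
        # update x, reading a possibly stale copy of y
        y_read = hist_y[-1 - min(stale, len(hist_y) - 1)]
        x = rho * y_read + s * rng.standard_normal()
        hist_x.append(x)
        # update y, reading a possibly stale copy of x
        x_read = hist_x[-1 - min(stale, len(hist_x) - 1)]
        y = rho * x_read + s * rng.standard_normal()
        hist_y.append(y)
        out[t] = x, y
    return out
```

With `stale=0` this is ordinary sequential Gibbs, and the empirical correlation of the draws approaches `rho`. With `stale>0` the chain loses the Markov property, which is exactly the regime the abstract's theoretical framework is built to analyze: for some targets such staleness is harmless, while for others it causes divergence.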

Gibbs sampling is a widely used Markov chain Monte Carlo (MCMC) method for numerically approximating integrals of interest in Bayesian statistics and other mathematical sciences. Many implementations of MCMC methods do not extend easily to parallel computing environments, as their inherently sequential nature incurs a large synchronization cost. In the case study presented here, we show how to perform Gibbs sampling in a fully data-parallel manner on a graphics processing unit, for a large class of exchangeable models that admit latent variable representations. Our approach takes a systems perspective, with emphasis placed on efficient use of compute hardware. We demonstrate our method on a Horseshoe Probit regression model and find that our implementation scales effectively to thousands of predictors and millions of data points simultaneously.
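The data-parallel structure can be sketched with the classic Albert-Chib latent-variable representation of Probit regression (using a flat prior on the coefficients for brevity; the Horseshoe prior from the talk is omitted, so this is a simplified assumption, not the talk's full model). Given the coefficients, the n latent variables are conditionally independent, so their update is embarrassingly parallel: on a GPU each thread would own one data point, and NumPy vectorization plays that role below.

```python
import numpy as np

def probit_gibbs(X, y, n_iter, seed=0):
    """Gibbs sampler for Bayesian Probit regression via Albert-Chib
    data augmentation, with a flat prior on beta (a simplification;
    the talk uses a Horseshoe prior).

    z-update: each z_i is an independent truncated normal given beta,
    so all n draws can run in parallel (vectorized here, one GPU
    thread per data point in a CUDA implementation).
    beta-update: a single multivariate normal draw.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    chol = np.linalg.cholesky(XtX_inv)
    beta = np.zeros(p)
    sign = np.where(y == 1, 1.0, -1.0)   # z_i > 0 iff y_i = 1
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        mu = X @ beta
        # parallel z-update: truncated normals via vectorized rejection
        z = mu + rng.standard_normal(n)
        bad = sign * z < 0
        while bad.any():
            z[bad] = mu[bad] + rng.standard_normal(bad.sum())
            bad = sign * z < 0
        # beta-update: beta | z ~ N((X'X)^{-1} X'z, (X'X)^{-1})
        mean = XtX_inv @ (X.T @ z)
        beta = mean + chol @ rng.standard_normal(p)
        draws[t] = beta
    return draws
```

The z-update touches every data point independently, which is why the method scales with the number of data points on hardware built for throughput; the vectorized rejection step used here is only one simple way to draw the truncated normals, chosen for clarity rather than efficiency.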