About

I’m a graduate student working at the intersection of Bayesian statistical theory, machine learning, artificial intelligence, and high-performance computing. I study the relationship between conditional probability theory, statistical learning, and computation in parallel and distributed environments. This includes the theory and implementation of Markov chain Monte Carlo methods on parallel and distributed systems, including GPUs and compute clusters. My work has found application in natural language processing and other areas where big data and scalable methods are important. Selected works include the following.

  • Asynchronous Gibbs Sampling. I propose a way of analyzing Markov chain Monte Carlo methods executed asynchronously on a compute cluster – a setting where the Markov property doesn’t hold. I show that such algorithms can be made to converge if worker nodes are allowed to reject other worker nodes’ messages; a toy sketch of this accept/reject idea appears after this list.

  • Pólya Urn Latent Dirichlet Allocation. I propose an algorithm for training Latent Dirichlet Allocation that is exact for large data sets, massively parallel, avoids the memory bottlenecks of previous approaches, and has the lowest computational complexity of any method in its class. A sketch of the classical collapsed Gibbs baseline it improves on also appears below.
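
To make the first idea concrete, here is a minimal single-process toy in Python: two simulated workers each own one coordinate of a bivariate Gaussian, exchange updates through delayed message queues, and accept or reject each incoming message with a Metropolis-Hastings style probability. This is only a sketch of the general accept/reject mechanism under assumptions chosen for illustration – the Gaussian target, the staleness model, and names like gibbs_step and inbox are hypothetical, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.9  # correlation of the bivariate Gaussian target

def log_density(x, y):
    # Unnormalized log density of the correlated bivariate Gaussian.
    return -(x**2 - 2 * rho * x * y + y**2) / (2 * (1 - rho**2))

def gibbs_step(other):
    # Exact conditional update: x | y ~ N(rho * y, 1 - rho^2).
    return rho * other + np.sqrt(1 - rho**2) * rng.standard_normal()

local = np.zeros(2)   # each worker's own coordinate
stale = np.zeros(2)   # each worker's (possibly stale) view of the other
inbox = [[], []]      # delayed message queues, one per worker
samples = []

for t in range(20000):
    for i in range(2):
        # Sample from the full conditional given the stale remote value,
        # then send the result with a random delivery delay of 1-4 steps.
        local[i] = gibbs_step(stale[i])
        inbox[1 - i].append((t + rng.integers(1, 5), local[i]))
    for i in range(2):
        # Deliver due messages, accepting each with a Metropolis-Hastings
        # style probability min(1, pi(proposed view) / pi(current view)).
        due = [m for m in inbox[i] if m[0] <= t]
        inbox[i] = [m for m in inbox[i] if m[0] > t]
        for _, value in due:
            if i == 0:  # worker 0 owns x; its stale view is of y
                logr = log_density(local[0], value) - log_density(local[0], stale[0])
            else:       # worker 1 owns y; its stale view is of x
                logr = log_density(value, local[1]) - log_density(stale[1], local[1])
            if np.log(rng.uniform()) < logr:
                stale[i] = value  # accept the message; otherwise reject it
    samples.append((local[0], local[1]))

samples = np.array(samples[5000:])  # discard burn-in
print("empirical correlation:", np.corrcoef(samples.T)[0, 1])
```

In this toy, the printed correlation should land reasonably close to the target’s ρ = 0.9 despite the stale messages; without the rejection check, the workers would trust arbitrarily stale values.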
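
For context on the second item, the sketch below is the classical collapsed Gibbs sampler for LDA – the serial, dense-count-table baseline, not the Pólya urn sampler from the paper. The corpus is synthetic and all sizes are arbitrary; it is included only to show the per-token full conditional and the count tables whose serialization and memory costs a massively parallel sampler must avoid.

```python
import numpy as np

rng = np.random.default_rng(1)
V, K, alpha, beta = 50, 5, 0.1, 0.01  # vocab size, topics, Dirichlet priors

# A synthetic corpus: 100 documents of 20-60 word ids each.
docs = [rng.integers(0, V, size=rng.integers(20, 60)) for _ in range(100)]

# Dense count tables: document-topic, topic-word, and per-topic totals --
# the memory footprint that sparse samplers are designed to shrink.
ndk = np.zeros((len(docs), K))
nkw = np.zeros((K, V))
nk = np.zeros(K)

def update(d, w, k, delta):
    # Add or remove one token's assignment from all three count tables.
    ndk[d, k] += delta
    nkw[k, w] += delta
    nk[k] += delta

# Random initial topic assignment for every token.
z = [rng.integers(0, K, size=len(doc)) for doc in docs]
for d, doc in enumerate(docs):
    for w, k in zip(doc, z[d]):
        update(d, w, k, +1)

for sweep in range(20):
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            update(d, w, z[d][i], -1)  # remove the current assignment
            # Full conditional over topics: p(z = k | rest) is
            # proportional to (ndk + alpha)(nkw + beta)/(nk + V beta).
            p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
            z[d][i] = rng.choice(K, p=p / p.sum())
            update(d, w, z[d][i], +1)  # add the new assignment

print("top word ids per topic:\n", np.argsort(-nkw, axis=1)[:, :5])
```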

Alexander Terenin

Google Scholar · GitHub · arXiv · Curriculum Vitae

Statistics PhD student
Imperial College London

a.{my-last-name}17@imperial.ac.uk