About

I’m a PhD student working at the intersection of Bayesian theory and high-performance computing, with applications to models used in statistical machine learning, artificial intelligence, and engineering. This includes the theory and implementation of Markov Chain Monte Carlo methods on parallel and distributed systems, including GPUs and compute clusters. My work has found application in natural language processing and other areas where big data and scalable methods are important. Selected works include the following.

  • Asynchronous Gibbs Sampling. I propose a way of analyzing Markov Chain Monte Carlo methods executed asynchronously on a compute cluster – a setting where the Markov property doesn’t hold. I show that such algorithms can be made to converge if worker nodes are allowed to reject other worker nodes’ messages.
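The flavor of the idea can be sketched in a toy simulation (my own illustrative construction, not the algorithm from the paper; all names, the delay length, and the bivariate-Gaussian target are assumptions): two workers each own one coordinate, see the other coordinate only through a stale message, and treat each conditional draw as a Metropolis-Hastings proposal that can be rejected to correct for the staleness.

```python
import math
import random

random.seed(0)
RHO = 0.8  # correlation of the toy bivariate Gaussian target (assumed)
DELAY = 3  # messages between workers arrive this many iterations late (assumed)

def log_density(x, y):
    # log of the unnormalized bivariate normal density with correlation RHO
    return -(x * x - 2 * RHO * x * y + y * y) / (2 * (1 - RHO ** 2))

def conditional_draw(other):
    # exact Gibbs conditional: N(RHO * other, 1 - RHO^2)
    return random.gauss(RHO * other, math.sqrt(1 - RHO ** 2))

x, y = 0.0, 0.0
x_hist, y_hist = [x] * DELAY, [y] * DELAY  # stale copies seen by the other worker
samples = []
for _ in range(20000):
    # worker 1 proposes a new x from the conditional given a *stale* y;
    # the Metropolis correction below lets it reject the move, restoring
    # invariance of the target despite the broken Markov structure
    x_prop = conditional_draw(y_hist[0])
    log_a = (log_density(x_prop, y) - log_density(x, y)
             + log_density(x, y_hist[0]) - log_density(x_prop, y_hist[0]))
    if random.random() < math.exp(min(0.0, log_a)):
        x = x_prop
    # worker 2 does the symmetric update with a stale x
    y_prop = conditional_draw(x_hist[0])
    log_a = (log_density(x, y_prop) - log_density(x, y)
             + log_density(x_hist[0], y) - log_density(x_hist[0], y_prop))
    if random.random() < math.exp(min(0.0, log_a)):
        y = y_prop
    x_hist = x_hist[1:] + [x]
    y_hist = y_hist[1:] + [y]
    samples.append((x, y))
```

Despite every proposal being built from out-of-date information, the accept/reject step keeps the empirical mean near zero and the empirical correlation near RHO.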

  • Pólya Urn Latent Dirichlet Allocation. I propose an algorithm for training Latent Dirichlet Allocation that is exact for large data sets, massively parallel, avoids the memory bottlenecks of previous approaches, and has the lowest computational complexity of any method in its class.
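For context on what such a sampler speeds up, here is the standard sequential collapsed Gibbs update for LDA, which parallel samplers like the one above build on. This is a generic textbook sketch, not the Pólya urn algorithm itself; the tiny corpus, topic count, and hyperparameter values are assumptions for illustration.

```python
import random

random.seed(1)

# Tiny synthetic corpus: each document is a list of word ids (all assumed)
docs = [[0, 1, 0, 2], [3, 4, 3], [0, 2, 1, 1]]
V, K = 5, 2              # vocabulary size, number of topics (assumed)
ALPHA, BETA = 0.5, 0.1   # symmetric Dirichlet hyperparameters (assumed)

# Random initial topic assignments and the three count tables the sampler keeps
z = [[random.randrange(K) for _ in doc] for doc in docs]
ndk = [[0] * K for _ in docs]        # document-topic counts
nkw = [[0] * V for _ in range(K)]    # topic-word counts
nk = [0] * K                         # topic totals
for d, doc in enumerate(docs):
    for i, w in enumerate(doc):
        k = z[d][i]
        ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1

def gibbs_sweep():
    # One full sweep: resample every token's topic from its collapsed conditional
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            ndk[d][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1  # remove this token
            # p(z = j | rest) ∝ (ndk + α) (nkw + β) / (nk + Vβ)
            weights = [(ndk[d][j] + ALPHA) * (nkw[j][w] + BETA) / (nk[j] + V * BETA)
                       for j in range(K)]
            k = random.choices(range(K), weights=weights)[0]
            z[d][i] = k
            ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1  # add it back

for _ in range(50):
    gibbs_sweep()
```

The per-token dependence on the shared count tables is what makes this hard to parallelize naively, and motivates reformulations like the Pólya urn approach.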

Alexander Terenin

Google Scholar · arXiv · GitHub · Curriculum Vitae

Statistics PhD student
Imperial College London

a.{my-last-name}17@imperial.ac.uk