Jekyll feed · generated 2023-06-02T01:34:19+00:00 · https://avt.im/feed.xml · Alexander Terenin

Physically Structured Neural Networks for Smooth and Contact Dynamics
2023-05-12 · https://avt.im/talks/2023/05/12/Physically-Structured-Networks
<p>A neural network’s architecture encodes key information and inductive biases that guide its predictions. In this talk, we discuss recent work that leverages the perspective of neural ordinary differential equations to design network architectures encoding the structure of classical mechanics. We examine both smooth dynamics and non-smooth contact dynamics. The resulting architectures are easy to understand, show excellent performance and data efficiency on simple benchmark tasks, and are a promising emerging tool for robot learning and related areas.</p>
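As a minimal sketch of the idea (illustrative, not code from the talk): a network that parameterizes a scalar Hamiltonian H(q, p), with the dynamics recovered as dq/dt = ∂H/∂p and dp/dt = −∂H/∂q, respects the structure of classical mechanics by construction. Here, central finite differences and a harmonic-oscillator H stand in for automatic differentiation through a learned network.

```python
def hamiltonian_vector_field(H, eps=1e-5):
    # Structured dynamics dq/dt = dH/dp, dp/dt = -dH/dq from a scalar H(q, p).
    # Central finite differences stand in for automatic differentiation
    # through a parametric H (e.g. a small neural network).
    def field(q, p):
        dH_dq = (H(q + eps, p) - H(q - eps, p)) / (2 * eps)
        dH_dp = (H(q, p + eps) - H(q, p - eps)) / (2 * eps)
        return dH_dp, -dH_dq
    return field

def rk4_step(field, q, p, dt):
    # One classical fourth-order Runge-Kutta step of the structured ODE.
    k1q, k1p = field(q, p)
    k2q, k2p = field(q + 0.5 * dt * k1q, p + 0.5 * dt * k1p)
    k3q, k3p = field(q + 0.5 * dt * k2q, p + 0.5 * dt * k2p)
    k4q, k4p = field(q + dt * k3q, p + dt * k3p)
    q_next = q + dt / 6 * (k1q + 2 * k2q + 2 * k3q + k4q)
    p_next = p + dt / 6 * (k1p + 2 * k2p + 2 * k3p + k4p)
    return q_next, p_next

# Harmonic oscillator as a stand-in for a learned Hamiltonian.
H = lambda q, p: 0.5 * (q ** 2 + p ** 2)
field = hamiltonian_vector_field(H)
```

In the setting of the talk, H would instead be a trained network fit so that integrated trajectories match observed data; conservation structure is then built in rather than learned.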
<p>Alexander Terenin is a Postdoctoral Research Associate at the University of Cambridge. He is interested in statistical machine learning, particularly in settings where the data is not fixed but is gathered interactively by the learning machine. This leads naturally to Gaussian processes and data-efficient interactive decision-making systems such as Bayesian optimization, to areas such as multi-armed bandits and reinforcement learning, and to techniques for incorporating inductive biases and prior information, such as symmetries, into machine learning models.</p>

Physically Structured Neural Networks for Smooth and Contact Dynamics
2023-04-14 · https://avt.im/talks/2023/04/14/Physically-Structured-Networks (same abstract as the 2023-05-12 entry)
Pathwise Conditioning and Non-Euclidean Gaussian Processes
2023-03-15 · https://avt.im/talks/2023/03/15/Pathwise-Conditioning
<p>In Gaussian processes, conditioning and computation of posterior distributions are usually done in a distributional fashion, by working with finite-dimensional marginals. However, there is another way to think about conditioning: using actual random functions rather than their probability distributions. This perspective is particularly helpful in decision-theoretic settings such as Bayesian optimization, where it enables efficient computation of a wider class of acquisition functions than otherwise possible. In this talk, we describe these recent advances and discuss their broader implications for Gaussian processes. We then present a class of Gaussian process models on graphs and manifolds, which makes it possible to perform Bayesian optimization while taking symmetries and constraints into account in an intrinsic manner.</p>
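The random-function view of conditioning can be realized through Matheron's update rule: a posterior sample is a prior sample plus a data-dependent correction, f_post(·) = f_prior(·) + K(·, X)(K(X, X) + σ²I)⁻¹(y − f_prior(X) − ε), with ε ~ N(0, σ²I). A minimal sketch under assumed choices (squared-exponential kernel, arbitrary inputs), not the talk's implementation:

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0):
    # Squared-exponential kernel k(a, b) = exp(-(a - b)^2 / (2 l^2)).
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def pathwise_posterior_sample(x_test, x_train, y_train, noise=1e-8, rng=None):
    # One posterior sample via Matheron's update:
    # f_post(.) = f_prior(.) + K(., X)(K(X, X) + s^2 I)^{-1}(y - f_prior(X) - eps).
    rng = np.random.default_rng() if rng is None else rng
    x_all = np.concatenate([x_train, x_test])
    K_all = rbf_kernel(x_all, x_all) + 1e-8 * np.eye(len(x_all))
    # Joint prior sample over training and test inputs.
    f_prior = np.linalg.cholesky(K_all) @ rng.standard_normal(len(x_all))
    f_train, f_test = f_prior[: len(x_train)], f_prior[len(x_train):]
    eps = np.sqrt(noise) * rng.standard_normal(len(x_train))
    K_xx = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_sx = rbf_kernel(x_test, x_train)
    update = K_sx @ np.linalg.solve(K_xx, y_train - f_train - eps)
    return f_test + update
```

Because each sample is an entire function rather than a marginal, global quantities of the sample path, such as its minimizer, are directly available, which is the source of the wider class of acquisition functions mentioned above.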
Pathwise Conditioning and Non-Euclidean Gaussian Processes (same abstract and speaker bio as above) was also presented on:

2023-01-12 · https://avt.im/talks/2023/01/12/Pathwise-Conditioning
2022-11-18 · https://avt.im/talks/2022/11/18/Pathwise-Conditioning
2022-11-16 · https://avt.im/talks/2022/11/16/Pathwise-Conditioning
2022-11-15 · https://avt.im/talks/2022/11/15/Pathwise-Conditioning
2022-11-10 · https://avt.im/talks/2022/11/10/Pathwise-Conditioning
2022-11-09 · https://avt.im/talks/2022/11/09/Pathwise-Conditioning
2022-11-08 · https://avt.im/talks/2022/11/08/Pathwise-Conditioning