<h1>How to show that a Markov chain converges quickly</h1>
<p><em>Alexander Terenin, 2019-08-16</em></p>
<p>Markov chains appear everywhere: they are used as a computational tool within Bayesian statistics, and a theoretical tool in other areas such as optimal control and reinforcement learning.
Conditions under which a general Markov chain $X_t$ eventually converges to a stationary distribution $\pi$ are well-studied, and can largely be considered classical results.
These are, informally, as follows.</p>
<ul>
<li>$\phi$-irreducibility: the chain can transition from any state into any other state.</li>
<li>Aperiodicity: the chain does not cycle between disjoint regions of the state space at fixed regular intervals.</li>
<li>Harris recurrence: from any starting point, the chain returns to suitable sets infinitely often.</li>
</ul>
<p>Some care is needed when defining these conditions formally<sup id="fnref:mt"><a href="#fn:mt" class="footnote">1</a></sup> for general Markov chains: for example, in a chain defined over an uncountable state space any given point will have probability zero.
To formulate irreducibility precisely we need to talk about sets of states by, say, introducing a reference measure $\phi$.</p>
<h1 id="convergence-to-stationarity-in-finite-time">Convergence to stationarity in finite time</h1>
<p>Proving convergence of a Markov chain is often the first step taken as part of a larger analysis.
In fact, this can be done for broad classes of chains – for instance, for Metropolis-Hastings chains with appropriately well-behaved transition proposals<sup id="fnref:rr"><a href="#fn:rr" class="footnote">2</a></sup>.
This makes stationarity analysis of MCMC algorithms used in Bayesian statistics close to a non-issue.</p>
<p>Unfortunately, from convergence of a Markov chain alone, we can say little about the chain’s distribution after a given number of steps.
This makes it interesting to study chains which are <em>geometrically ergodic</em> – such chains converge at a prescribed rate given by</p>
<p>[
\norm{P^t(\mu) - \pi}_{\text{TV}} \leq c_\mu \rho^t
]</p>
<p>where $\mu$ is the initial distribution, $P$ is the chain’s Markov operator, $t$ is the current iteration, $\pi$ is the stationary distribution, $c_\mu$ and $0 < \rho < 1$ are constants, and $\norm{\cdot}_{\text{TV}}$ is the total variation norm over the Banach space of signed measures.</p>
<p>Convergent chains that are not geometrically ergodic are not well-behaved.
Such chains converge more slowly than any geometric rate, and functionals $f(X_t)$ of their output don’t necessarily satisfy a Central Limit Theorem.
In particular, the distribution of $f(X_t)$ can be heavy-tailed, which can cause the asymptotic variance of $\frac{1}{T} \sum_{t=1}^T f(X_t)$ to be infinite even if $\E_{x\dist\pi}(f(x))$ is finite.
This can be problematic.</p>
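<p>To make the geometric rate concrete, here is a minimal sketch — a hypothetical two-state example, not taken from the references — where the total variation distance to stationarity can be computed exactly, and decays geometrically with rate equal to the second-largest eigenvalue modulus of the transition matrix.</p>

```python
import numpy as np

# A hypothetical two-state chain, used purely for illustration.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Stationary distribution: the normalized left eigenvector with eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()

mu = np.array([1.0, 0.0])  # initial distribution: start in state 0
tv = []
dist = mu
for t in range(12):
    dist = dist @ P  # one application of the chain's Markov operator
    tv.append(0.5 * np.abs(dist - pi).sum())  # total variation distance to pi

# Geometric ergodicity in action: successive ratios tv[t+1] / tv[t] are
# constant, equal to the second-largest eigenvalue modulus of P (here 0.7).
ratios = [tv[t + 1] / tv[t] for t in range(10)]
```

<p>For chains that are not geometrically ergodic, no such uniform per-step contraction holds.</p>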
<h1 id="minorizing-measures-and-regeneration">Minorizing measures and regeneration</h1>
<p>How does one know whether or not a chain is geometrically ergodic?
For simplicity, let’s first consider the case where we set $c_\mu = c$ to be constant with respect to the initial distribution $\mu$.
Such a chain is called <em>uniformly ergodic</em>.
This will occur for a chain defined over a compact state space, and we comment on the general case once the issues in this setting are clear.</p>
<p>We begin by considering two chains, $X_t$ and $Y_t$ with same stationary distribution $\pi$.
We think of $X_t$ as the chain of interest with initial distribution $\mu$, and $Y_t$ as an auxiliary chain.
Now, we make two assumptions.</p>
<ol>
<li>$X_t$ and $Y_t$ share the same random number generator.</li>
<li>$Y_t$ starts from the stationary distribution $\pi$.</li>
</ol>
<p>This setup can be visualized via the diagram</p>
<p>[
\begin{aligned}
&\mu & &\dist & &X_1 & &\goesto & &X_2 & &\goesto & &X_3 & &\goesto & &\dots
\\
& & & & &\,\,| & & & &\,\,| & & & &\,\,| & & & &\,\,|
\\
&\pi & &\dist & &Y_1 & &\goesto & &Y_2 & &\goesto & &Y_3 & &\goesto & &\dots
\end{aligned}
]</p>
<p>where vertical bars indicate shared random numbers, and we’ve used the identically distributed symbol $\dist$ in opposite of its usual order.</p>
<p>Given this setup, if <em>both chains make a draw from the same distribution</em> during their one-step-ahead transitions, we can conclude that $X_t \dist Y_t \dist \pi$ and so the chain has converged.</p>
<p>How is this condition ever possible?
Suppose that, for every possible current state, we can write the one-step transition distribution as a mixture of a fixed distribution $\nu$ with mixture probability $\gamma$ and some other, state-dependent distribution with probability $(1-\gamma)$.
Then at each time point, because the chains share their random numbers, the mixture coin can be flipped once for both chains, and with probability $\gamma$ both draw from the same distribution $\nu$.
Let’s draw a picture.</p>
<div style="text-align: center;">
<svg width="240px" viewBox="0 0 240 150" xmlns="http://www.w3.org/2000/svg">
<g>
<rect x="10" y="10" width="220" height="130" style="stroke: rgb(0, 0, 0); fill: none;"></rect>
<path style="stroke: rgb(0, 0, 0); fill: rgba(0, 0, 0, 0.1);" d="M 10 140 C 10 140 14.626 -2.562 64.662 68.041 C 107.925 129.086 230 140 230 140"></path>
<path style="stroke: rgb(0, 0, 0); fill: rgba(0, 0, 0, 0.1);" d="M 10 140 C 10 140 14.626 -2.562 64.662 68.041 C 107.925 129.086 230 140 230 140" transform="matrix(-1, 0, 0, 1, 240, 0)"></path>
</g>
<g>
<foreignObject x="10" y="15" width="100" height="30">
<div xmlns="http://www.w3.org/1999/xhtml">
$X_t \given X_{t-1}$
</div>
</foreignObject>
<foreignObject x="140" y="15" width="100" height="30">
<div xmlns="http://www.w3.org/1999/xhtml">
$Y_t \given Y_{t-1}$
</div>
</foreignObject>
<foreignObject x="105" y="112.5" width="30" height="30">
<div xmlns="http://www.w3.org/1999/xhtml">
$\nu$
</div>
</foreignObject>
</g>
</svg>
</div>
<p>Here, both one-step-ahead transition distributions possess an overlapping shaded region.
The probability of each chain landing in this region is $\gamma$, and the distribution within that region is $\nu$.
We say that $\nu$ is a <em>minorizing measure</em><sup id="fnref:mm"><a href="#fn:mm" class="footnote">3</a></sup>.
It can be shown<sup id="fnref:r"><a href="#fn:r" class="footnote">4</a></sup> that to prove uniform ergodicity, it suffices to exhibit such a minorizing measure.</p>
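<p>The coupling argument can be sketched in code. The following illustration — hypothetical, and not taken from the references — uses a two-state chain for which a minorization $P(x, \cdot) \geq \gamma \nu(\cdot)$ holds for every state $x$, and simulates both chains using a shared mixture coin: the time until both chains draw from $\nu$ is geometric with success probability $\gamma$.</p>

```python
import numpy as np

# Hypothetical two-state example of the split-chain coupling construction.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
gamma = 0.2                        # minorization constant (one valid choice)
nu = np.array([2/3, 1/3])          # minorizing measure: P(x,.) >= gamma * nu
residual = (P - gamma * nu) / (1 - gamma)  # leftover transition kernels

rng = np.random.default_rng(0)

def coupling_time():
    x, y = 0, 1                    # the two chains start in different states
    for t in range(1, 100_000):
        if rng.random() < gamma:   # shared coin: both chains draw from nu
            x = y = rng.choice(2, p=nu)
            return t               # the chains have met, and stay together
        x = rng.choice(2, p=residual[x])  # otherwise, evolve separately
        y = rng.choice(2, p=residual[y])
    return None

times = [coupling_time() for _ in range(5_000)]
mean_time = float(np.mean(times))  # approximately 1 / gamma = 5
```

<p>Once the shared coin comes up heads, both chains take the same value from $\nu$, after which they evolve identically.</p>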
<p>For a non-compact state space and arbitrary current states, such a measure can be impossible to find.
However, by virtue of convergence, our Markov chain should eventually spend most of its time in a compact subset of the state space.
One can make this intuition precise and develop techniques for proving geometric ergodicity, by introducing a suitable <em>Lyapunov drift condition</em><sup id="fnref:mt:1"><a href="#fn:mt" class="footnote">1</a></sup> which, once satisfied, largely reduces the analysis to the preceding case.</p>
<p>Once one has the existence of a minorizing measure, there will be a set of random times at which the chain will draw from the minorizing measure and forget its current location.
This allows the analysis of the averages $\frac{1}{T} \sum_{t=1}^T f(X_t)$ to be reduced, in some sense, to the IID case.
Following this line of thought, one can eventually prove a Central Limit Theorem for the ergodic averages, even though successive $f(X_t)$ are not independent.
The study of techniques originating from this observation is called <em>regeneration theory</em>.</p>
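<p>The regenerative structure can be illustrated with a small simulation — again a hypothetical sketch, not taken from the references. We simulate a two-state chain through its split representation, so that draws from a minorizing measure $\nu$, which occur with probability $\gamma$ at each step, are explicit regeneration times; the tours between them are independent, and a ratio of their sums recovers the stationary expectation.</p>

```python
import numpy as np

# Hypothetical two-state chain with minorization P(x,.) >= gamma * nu(.),
# simulated via its split representation so regenerations are explicit.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
gamma = 0.2
nu = np.array([2/3, 1/3])
residual = (P - gamma * nu) / (1 - gamma)

rng = np.random.default_rng(1)
T = 200_000
x = 0
block_sums, current = [], 0.0
for t in range(T):
    if rng.random() < gamma:            # regeneration: forget the past
        x = rng.choice(2, p=nu)
        block_sums.append(current)      # close the tour; tours are iid
        current = 0.0
    else:
        x = rng.choice(2, p=residual[x])
    current += x                        # f(x) = x, a test functional

# Mean tour sum divided by mean tour length (1 / gamma) recovers
# E_pi[f] = 1/3 for the stationary distribution pi = (2/3, 1/3).
estimate = float(np.mean(block_sums) * gamma)
```

<p>Because the tour sums are independent and identically distributed, classical limit theorems for IID sequences apply to them, which is the route to a Central Limit Theorem for the ergodic averages.</p>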
<h1 id="concluding-remarks">Concluding remarks</h1>
<p>Here, we examined one technique by which it’s possible to prove that a Markov chain converges quickly.
The analysis is general and provides insights of practical interest in Bayesian models.
For instance, if using a Gibbs sampler for a hierarchical model, one can examine the full conditionals to see whether or not a minorization condition is present.
Even if the minorization constant is not calculated, this gives some idea as to the chain’s numerical performance before ever running it.
In a practical application, this can be useful.</p>
<p>Other techniques for analyzing the mixing rate of a Markov chain are also available.
In particular, reversible chains admit Markov operators which are self-adjoint: this allows one to study their spectral properties, and relate them to mixing times<sup id="fnref:rr2"><a href="#fn:rr2" class="footnote">5</a></sup>.
I hope this article has provided a useful overview as to the need to consider finite-time mixing properties, and illustrated the key idea behind minorization techniques by which they can be studied.</p>
<h1 id="references">References</h1>
<div class="footnotes">
<ol>
<li id="fn:mt">
<p>S. Meyn and R. Tweedie. Markov Chains and Stochastic Stability. 1993. <a href="#fnref:mt" class="reversefootnote">↩</a> <a href="#fnref:mt:1" class="reversefootnote">↩<sup>2</sup></a></p>
</li>
<li id="fn:rr">
<p>G. Roberts and J. Rosenthal. General state space Markov chains and MCMC algorithms. Probability Surveys, 2004. <a href="#fnref:rr" class="reversefootnote">↩</a></p>
</li>
<li id="fn:mm">
<p>Rather than working with a pair $(\gamma,\nu)$ where $\gamma\in[0,1]$ and $\nu$ is a probability measure, most technical papers equivalently work with a single finite measure. We use $(\gamma,\nu)$ here because it is intuitive and lends well to visualization. <a href="#fnref:mm" class="reversefootnote">↩</a></p>
</li>
<li id="fn:r">
<p>J. Rosenthal. Minorization conditions and convergence rates for Markov chain Monte Carlo. JASA 90(430). 1995. <a href="#fnref:r" class="reversefootnote">↩</a></p>
</li>
<li id="fn:rr2">
<p>G. Roberts and J. Rosenthal. Geometric ergodicity and hybrid Markov chains. ECP 2(2). 1997. <a href="#fnref:rr2" class="reversefootnote">↩</a></p>
</li>
</ol>
</div>
<h1>Some macros for making TeX source more readable</h1>
<p><em>Alexander Terenin, 2019-06-08</em></p>
<p>The TeX typesetting system is a lovely bit of software: one can easily use it to typeset production-grade documents such as mathematical papers.
However, typesetting complex equations can be tedious, learning to use TeX well can involve memorizing a large number of macros, and it can be difficult to understand the meaning of an equation from looking purely at its source.
TeX can be made more readable by utilizing packages and introducing macros to simplify code.</p>
<p>Some time ago, I moved for handwritten note-taking to a fully paperless workflow: more-or-less every bit of mathematics I work on is written either on a whiteboard, or in LaTeX directly, including over 100 pages of notes from a functional analysis course, which I typeset in real time as the lectures were being given.
This post contains a collection of useful macros that made this possible.</p>
<h1 id="overloading-accent-macros-in-math-mode">Overloading accent macros in math mode</h1>
<p>One-character macros are an important part of readable LaTeX.
Consider the union symbol $\cup$ given by the macro <code class="highlighter-rouge">\cup</code> – this describes the <em>shape</em> of the symbol, not its <em>meaning</em>, which would be better described by <code class="highlighter-rouge">\union</code>, or more concisely, <code class="highlighter-rouge">\u</code>.</p>
<p>Unfortunately, the macro <code class="highlighter-rouge">\u</code> is already defined by LaTeX to be the underline macro for text mode.
Since the symbol $\cup$ is only used in math mode to begin with, it makes sense for us to extend <code class="highlighter-rouge">\u</code> without changing the original functionality.
This may be achieved by writing</p>
<figure class="highlight"><pre><code class="language-tex" data-lang="tex"><span class="k">\let\textu\u</span> <span class="c">% redefinition: underline</span>
<span class="k">\renewcommand</span><span class="p">{</span><span class="k">\u</span><span class="p">}{</span><span class="k">\relax\ifmmode\cup\else\expandafter\textu\fi</span><span class="p">}</span> <span class="c">% union</span></code></pre></figure>
<p>which redefines <code class="highlighter-rouge">\u</code> in math mode only.
This is achieved by first copying the original definition of <code class="highlighter-rouge">\u</code> to <code class="highlighter-rouge">\textu</code>.
Then, <code class="highlighter-rouge">\ifmmode</code> is used to check if LaTeX is in math mode: if it is, <code class="highlighter-rouge">\u</code> expands to <code class="highlighter-rouge">\cup</code>, if not, it expands to <code class="highlighter-rouge">\expandafter\textu</code>.
Since <code class="highlighter-rouge">\textu</code> is a LaTeX macro which accepts an argument, it is important that it is expanded after <code class="highlighter-rouge">\ifmmode</code> is completed, otherwise <code class="highlighter-rouge">\fi</code> would become its argument, resulting in an error.
This is achieved via <code class="highlighter-rouge">\expandafter</code>, which causes <code class="highlighter-rouge">\textu</code> to be expanded after the redefined <code class="highlighter-rouge">\u</code> has completed.</p>
<p>Similarly, one can use this trick to redefine <code class="highlighter-rouge">\v</code> to make a symbol bold italic – a widely-used notation for vectors.
Redefinition is done the same way, except we use <code class="highlighter-rouge">\expandafter\boldsymbol</code> in place of <code class="highlighter-rouge">\cup</code>.
The <code class="highlighter-rouge">\expandafter</code> similarly allows <code class="highlighter-rouge">\boldsymbol</code> to correctly accept an input argument.
This lets one write <code class="highlighter-rouge">$\v{x}$</code> to produce $\v{x}$ while still writing <code class="highlighter-rouge">\v{c}</code> to produce č, avoiding bibliography errors in cases where eastern European author names are present.</p>
<p>I use this trick everywhere: <code class="highlighter-rouge">\P</code> becomes the probability symbol $\P$, and <code class="highlighter-rouge">\c{X}</code> becomes a calligraphic X, i.e. $\c{X}$, while their original definitions in text mode are retained.
This helps make my source easier to read and write, and goes a long way toward making it possible to typeset large expressions in real time.</p>
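<p>For concreteness, the <code class="highlighter-rouge">\P</code> and <code class="highlighter-rouge">\c</code> overloads can be written following exactly the same pattern as <code class="highlighter-rouge">\u</code> above. This is a sketch: the choices of <code class="highlighter-rouge">\mathbb</code> and <code class="highlighter-rouge">\mathcal</code> are a guess at the intended fonts, and the names <code class="highlighter-rouge">\textP</code> and <code class="highlighter-rouge">\textc</code> are this example's own.</p>

```latex
\let\textP\P % redefinition: paragraph symbol
\renewcommand{\P}{\relax\ifmmode\mathbb{P}\else\textP\fi} % probability symbol
\let\textc\c % redefinition: cedilla accent
\renewcommand{\c}{\relax\ifmmode\expandafter\mathcal\else\expandafter\textc\fi} % calligraphic letter
```

<p>Note that <code class="highlighter-rouge">\P</code> takes no argument, so its false branch needs no <code class="highlighter-rouge">\expandafter</code>, whereas <code class="highlighter-rouge">\c</code> and <code class="highlighter-rouge">\mathcal</code> both accept one, so the same <code class="highlighter-rouge">\expandafter</code> trick as before applies.</p>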
<h1 id="the-indicator-symbol">The indicator symbol</h1>
<p>I prefer to use the blackboard bold symbol 𝟙 for indicators.
Unfortunately, this symbol is defined in the <strong>bbold</strong> package, which changes the AMS blackboard bold font used for the probability symbol $\P$, and by default produces a blurry font.
The blurriness can be avoided by installing the <strong>bbold-type1</strong> package – once this is done, the font can be loaded by writing</p>
<figure class="highlight"><pre><code class="language-tex" data-lang="tex"><span class="c">% requires packages bbold and bbold-type1 to avoid bitmap font</span>
<span class="k">\DeclareSymbolFont</span><span class="p">{</span>bbold<span class="p">}{</span>U<span class="p">}{</span>bbold<span class="p">}{</span>m<span class="p">}{</span>n<span class="p">}</span>
<span class="k">\DeclareSymbolFontAlphabet</span><span class="p">{</span><span class="k">\mathbbold</span><span class="p">}{</span>bbold<span class="p">}</span></code></pre></figure>
<p>which defines the command <code class="highlighter-rouge">\mathbbold</code>.
From here, one can use <code class="highlighter-rouge">\newcommand{\1}{\mathbbold{1}}</code> to define <code class="highlighter-rouge">\1</code> to be the indicator symbol.</p>
<h1 id="a-less-verbose-enumerate-and-itemize">A less-verbose enumerate and itemize</h1>
<p>When writing notes, I often prefer to use enumerated and bullet-point lists in order to make the document easier to read.
In LaTeX, using <code class="highlighter-rouge">\begin{enumerate}</code>, <code class="highlighter-rouge">\item</code>, and <code class="highlighter-rouge">\end{enumerate}</code> is rather verbose.
A more concise and more readable syntax can be obtained by writing</p>
<figure class="highlight"><pre><code class="language-tex" data-lang="tex"><span class="k">\1</span> Here's my first item.
<span class="k">\2</span> The second item!
<span class="k">\1*</span> A bullet point.
<span class="k">\2*</span> Another bullet point!
<span class="k">\0*</span>
<span class="k">\3</span> A third item.
<span class="k">\0</span></code></pre></figure>
<p>to produce</p>
<ol>
<li>Here’s my first item.</li>
<li>The second item!
<ul>
<li>A bullet point.</li>
<li>Another bullet point!</li>
</ul>
</li>
<li>A third item.</li>
</ol>
<p>where <code class="highlighter-rouge">\0*</code> closes the itemize, <code class="highlighter-rouge">\0</code> closes the enumerate.
As before <code class="highlighter-rouge">\1</code> is defined separately for text and math mode.
This syntax supports custom labels and more-or-less arbitrary nesting.
It is compatible with the <code class="highlighter-rouge">enumerate</code> package, and is largely unproblematic: the only TeX package I am aware of which defines <code class="highlighter-rouge">\1</code> or <code class="highlighter-rouge">\2</code> is <strong>xymatrix</strong>, and even there these macros appear to be seldom used.
The code for the simplified enumerate and itemize syntax is given below.</p>
<figure class="highlight"><pre><code class="language-tex" data-lang="tex"><span class="k">\providecommand</span><span class="p">{</span><span class="k">\1</span><span class="p">}{}</span> <span class="c">% xymatrix workaround</span>
<span class="k">\renewcommand</span><span class="p">{</span><span class="k">\1</span><span class="p">}{</span><span class="k">\relax\ifmmode\mathbbold</span><span class="p">{</span>1<span class="p">}</span><span class="k">\else\expandafter\@</span>onenonmath<span class="k">\fi</span><span class="p">}</span> <span class="c">% indicator function and enumerate/itemize shorthand</span>
<span class="k">\newcommand</span><span class="p">{</span><span class="k">\@</span>onenonmath<span class="p">}{</span><span class="k">\@</span>ifstar<span class="k">\@</span>onestarred<span class="k">\@</span>onenonstarred<span class="p">}</span>
<span class="k">\newcommand</span><span class="p">{</span><span class="k">\@</span>onestarred<span class="p">}{</span><span class="nt">\begin{itemize}</span><span class="k">\item</span><span class="p">}</span> <span class="c">% itemize shorthand</span>
<span class="k">\newcommand</span><span class="p">{</span><span class="k">\@</span>onenonstarred<span class="p">}</span>[1][]<span class="p">{</span><span class="k">\ifx\\</span>#1<span class="k">\\</span><span class="nt">\begin{enumerate}</span><span class="k">\item\else</span><span class="nt">\begin{enumerate}</span>[#1]<span class="k">\item\fi</span><span class="p">}</span> <span class="c">% enumerate with possible iteration choice</span>
<span class="k">\providecommand</span><span class="p">{</span><span class="k">\2</span><span class="p">}{}</span> <span class="c">% xymatrix workaround</span>
<span class="k">\renewcommand</span><span class="p">{</span><span class="k">\2</span><span class="p">}{</span><span class="k">\@</span>ifstar<span class="k">\item\item</span><span class="p">}</span> <span class="c">% enumerate/itemize shorthand</span>
<span class="k">\newcommand</span><span class="p">{</span><span class="k">\3</span><span class="p">}{</span><span class="k">\@</span>ifstar<span class="k">\item\item</span><span class="p">}</span> <span class="c">% enumerate/itemize shorthand</span>
<span class="k">\newcommand</span><span class="p">{</span><span class="k">\4</span><span class="p">}{</span><span class="k">\@</span>ifstar<span class="k">\item\item</span><span class="p">}</span> <span class="c">% enumerate/itemize shorthand</span>
<span class="k">\newcommand</span><span class="p">{</span><span class="k">\5</span><span class="p">}{</span><span class="k">\@</span>ifstar<span class="k">\item\item</span><span class="p">}</span> <span class="c">% enumerate/itemize shorthand</span>
<span class="k">\newcommand</span><span class="p">{</span><span class="k">\6</span><span class="p">}{</span><span class="k">\@</span>ifstar<span class="k">\item\item</span><span class="p">}</span> <span class="c">% enumerate/itemize shorthand</span>
<span class="k">\newcommand</span><span class="p">{</span><span class="k">\7</span><span class="p">}{</span><span class="k">\@</span>ifstar<span class="k">\item\item</span><span class="p">}</span> <span class="c">% enumerate/itemize shorthand</span>
<span class="k">\newcommand</span><span class="p">{</span><span class="k">\8</span><span class="p">}{</span><span class="k">\@</span>ifstar<span class="k">\item\item</span><span class="p">}</span> <span class="c">% enumerate/itemize shorthand</span>
<span class="k">\newcommand</span><span class="p">{</span><span class="k">\9</span><span class="p">}{</span><span class="k">\@</span>ifstar<span class="k">\item\item</span><span class="p">}</span> <span class="c">% enumerate/itemize shorthand</span>
<span class="k">\newcommand</span><span class="p">{</span><span class="k">\0</span><span class="p">}{</span><span class="k">\@</span>ifstar<span class="k">\@</span>zerostarred<span class="k">\@</span>zerononstarred<span class="p">}</span> <span class="c">% close enumerate/itemize</span>
<span class="k">\newcommand</span><span class="p">{</span><span class="k">\@</span>zerostarred<span class="p">}{</span><span class="nt">\end{itemize}</span><span class="p">}</span> <span class="c">% close itemize</span>
<span class="k">\newcommand</span><span class="p">{</span><span class="k">\@</span>zerononstarred<span class="p">}{</span><span class="nt">\end{enumerate}</span><span class="p">}</span> <span class="c">% close enumerate</span></code></pre></figure>
<h1 id="math-mode-inline-text-with-correct-spacing">Math-mode inline text with correct spacing</h1>
<p>Sometimes, it’s useful to write a snippet of text inside a mathematical expression, for instance in</p>
<p>[
f(x) = \begin{cases}
1 \t{if} x=0 \\
0 \t{otherwise.}
\end{cases}
]</p>
<p>The LaTeX command <code class="highlighter-rouge">\text</code> does not by default take spacing into account, so I prefer to redefine <code class="highlighter-rouge">\t</code> in math mode in a way that produces spacing.
This allows me to write</p>
<figure class="highlight"><pre><code class="language-tex" data-lang="tex">f(x) = <span class="nt">\begin{cases}</span>
1 <span class="k">\t</span><span class="p">{</span>if<span class="p">}</span> x=0 <span class="k">\\</span>
0 <span class="k">\t</span><span class="p">{</span>otherwise.<span class="p">}</span>
<span class="nt">\end{cases}</span></code></pre></figure>
<p>to produce the above.
LaTeX has a number of commands that automatically determine spacing: the one that I find to work best for inline text is <code class="highlighter-rouge">\mathrel</code>.
The full definition of <code class="highlighter-rouge">\t</code> in the above is given below.</p>
<figure class="highlight"><pre><code class="language-tex" data-lang="tex"><span class="k">\let\textt\t</span> <span class="c">% redefinition: tie accent</span>
<span class="k">\renewcommand</span><span class="p">{</span><span class="k">\t</span><span class="p">}{</span><span class="k">\relax\ifmmode\expandafter\mathrel\expandafter\text\else\expandafter\textt\fi</span><span class="p">}</span> <span class="c">% text with spacing</span></code></pre></figure>
<h1 id="concluding-remarks">Concluding remarks</h1>
<p>Using customs TeX macros helps make the source more readable, which makes it easier to typeset notes in real time.
Beyond the macros listed here, I define a number of commands for readability: <code class="highlighter-rouge">\m</code> for upface bold symbols typically used for matrices, i.e. $\m{x}$, <code class="highlighter-rouge">\N</code> for $\N$, <code class="highlighter-rouge">\R</code> for $\R$, and many others.
I also define <code class="highlighter-rouge">\<</code> and <code class="highlighter-rouge">\></code> to be <code class="highlighter-rouge">\begin{align}</code> and <code class="highlighter-rouge">\end{align}</code>, define <code class="highlighter-rouge">\?</code> to be <code class="highlighter-rouge">\begin{gather}</code> and <code class="highlighter-rouge">\end{gather}</code>, and redefine <code class="highlighter-rouge">\[</code> and <code class="highlighter-rouge">\]</code> to automatically number equations.</p>
<p>For collaboratively-written documents, custom macros can improve source readability, but will generally be unfamiliar to others.
In my experience, the benefits of having a more concise and easier-to-read source tend to be worth the costs, because it’s usually straightforward to figure out what was meant.</p>
<p>I hope these tricks help make TeX source easier to read, and make it slightly simpler for anyone attending a mathematical course to take their notes directly in LaTeX.</p>Alexander TereninThe TeX typesetting system is a lovely bit of software: one can easily use it to typeset production-grade documents such as mathematical papers. However, typesetting complex equations can be tedious, learning to use TeX well can involve memorizing a large number of macros, and it can be difficult to understand the meaning of an equation from looking purely at its source. TeX can be made more readable by utilizing packages and introducing macros to simplify code.Building this website2018-09-15T00:00:00+00:002018-09-15T00:00:00+00:00https://avt.im/blog/2018/09/15/building-this-website<p>Lots of people, both in the academic and software communities, have personal websites.
Building one with today’s frameworks is easier than perhaps at any point in history, yet many people still have websites consisting of an index file inside of a folder hosted by some outdated service.
In this post, I describe how this website is built, showcasing software used to make all aspects of developing and maintaining a blog intuitive and easy.</p>
<p>Using modern tools is worthwhile: sites that are minimally styled tend to display tiny fonts on mobile devices, making them inconvenient for readers.
They are also inconvenient for authors: if updating a website is cumbersome, then it is more likely to never be updated and contain out-of-date information.
These issues are entirely avoidable, without spending time or paying anyone.
Let’s see how.</p>
<h1 id="building-the-site-with-jekyll">Building the site with Jekyll</h1>
<p>This is a static website: its HTML code is generated when it is created, not when the user opens it.
The code is generated by a static site generator called <a href="https://jekyllrb.com">Jekyll</a><sup id="fnref:jekyll"><a href="#fn:jekyll" class="footnote">1</a></sup>, written in Ruby.
We can think of Jekyll as a magic box that takes in a directory of files, and outputs a fancy formatted website.
Let’s take a look at a minimal example directory.</p>
<figure class="highlight"><pre><code class="language-text" data-lang="text">.
├── _config.yml
├── _posts
| └── 2018-09-15-building-this-website.md
├─ about.md
└─ index.md</code></pre></figure>
<p>Here, the file <code class="highlighter-rouge">_config.yml</code> is Jekyll’s configuration file.
Jekyll is blog-aware: the <code class="highlighter-rouge">_posts</code> directory is where it expects to find blog posts.
The remaining files are Markdown text files to be used for generating individual pages and blog posts.
Let’s examine a fairly minimal configuration file.</p>
<figure class="highlight"><pre><code class="language-yaml" data-lang="yaml"><span class="na">title</span><span class="pi">:</span> <span class="s">My new website</span>
<span class="na">description</span><span class="pi">:</span> <span class="s">A really cool website</span></code></pre></figure>
<p>This defines the title and description fields which will be used by the theme.
Now, let’s see what a minimal post such as <code class="highlighter-rouge">2018-09-15-building-this-website.md</code> might look like.</p>
<figure class="highlight"><pre><code class="language-markdown" data-lang="markdown"><span class="nn">---</span>
<span class="na">layout</span><span class="pi">:</span> <span class="s">post</span>
<span class="na">title</span><span class="pi">:</span> <span class="s">Building this website</span>
<span class="na">author</span><span class="pi">:</span> <span class="s">Example Author</span>
<span class="nn">---</span>
<span class="gh"># Welcome to the new post</span>
Here's a new post about how the website is built!</code></pre></figure>
<p>The section surrounded by <code class="highlighter-rouge">---</code> at the top is called the post’s front matter: it tells Jekyll that the post’s HTML should be generated using the <code class="highlighter-rouge">post</code> layout, and that its title is <code class="highlighter-rouge">Building this website</code>.
The rest of the file is just Markdown text: <code class="highlighter-rouge"># Welcome to the new post</code> will render as an HTML header that says <code class="highlighter-rouge">Welcome to the new post</code>, and <code class="highlighter-rouge">Here's a new post about how the website is built!</code> will render as a line of text.</p>
<p>To have Jekyll generate the site, we run it, by typing <code class="highlighter-rouge">jekyll serve</code> into a terminal window.
This starts up a web server, and we can view our page by navigating to <code class="highlighter-rouge">localhost:4000</code>.
We get a fully functional website, containing an index page, an about page, and the post.
Jekyll automatically inserts the correct author and date into the post.
Jekyll’s default theme includes a homepage layout, and Jekyll will automatically create a link to the blog post from there.
At the end of this process, we have a working site - all without ever touching any HTML code.</p>
<h1 id="hosting-the-site-with-github-pages">Hosting the site with GitHub Pages</h1>
<p>In order to have the website be publicly visible, we need to host it somewhere.
It used to be that the easiest way to do so was to acquire a server and copy files onto it.
Today, we can instead use <a href="https://pages.github.com">GitHub Pages</a><sup id="fnref:ghpages"><a href="#fn:ghpages" class="footnote">2</a></sup>, which greatly simplifies this process, hosts our website, and costs absolutely nothing.</p>
<p>GitHub Pages works very simply.
First, the user creates a repository using the version control software Git and hosts it for free on GitHub.
Then, every time files are committed and pushed to the Git repository, GitHub automatically runs Jekyll to build and publish the site.
That’s it!</p>
<p>This process is exceedingly simple and tends to just work.
We don’t need to do anything other than keep the website in a Git repository, which is good practice regardless, as it allows us to maintain version history and undo changes that broke something if need be.
By default, GitHub will host the site at <code class="highlighter-rouge">{username}.github.io</code>, but it’s possible to buy a custom domain from any provider for a few dollars per year and configure it easily.
The domain name is the only thing I pay for: everything else is completely free.</p>
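<p>As a hypothetical sketch of the custom-domain setup: one commits a file named <code class="highlighter-rouge">CNAME</code> to the repository root containing the bare domain, and adds a matching DNS record at the domain provider pointing at GitHub Pages. The domain below is a placeholder.</p>

```text
www.example.com
```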
<h1 id="responsive-design-with-bootstrap-and-the-minima-reboot-theme">Responsive design with Bootstrap and the Minima Reboot theme</h1>
<p>Jekyll ships with a small number of built-in themes, and allows users to easily select other ones.
Its default theme, <a href="https://github.com/jekyll/minima">Minima</a><sup id="fnref:minima"><a href="#fn:minima" class="footnote">3</a></sup>, is very good.
It is well-designed, its style is simple but modern, and it includes a navigation menu for mobile devices that have a narrow screen width.
Unfortunately, it also renders pages at a narrow width on large desktop screens, and making custom pages with responsive design elements - parts of the website that render differently on mobile devices compared to desktops - is cumbersome.</p>
<p>When first creating this blog, I wanted to do better, and to learn a bit of web development, so I wrote my own theme called <a href="https://github.com/aterenin/minima-reboot">Minima Reboot</a><sup id="fnref:minimareboot"><a href="#fn:minimareboot" class="footnote">4</a></sup> – named so because it’s essentially a rewrite of Minima.
The theme is enabled by adding the following line to <code class="highlighter-rouge">_config.yml</code>.</p>
<figure class="highlight"><pre><code class="language-yaml" data-lang="yaml"><span class="na">remote_theme</span><span class="pi">:</span> <span class="s">aterenin/minima-reboot</span></code></pre></figure>
<p>The main functional difference between Minima and Minima Reboot is that the latter is written using the <a href="http://getbootstrap.com">Bootstrap</a><sup id="fnref:bootstrap"><a href="#fn:bootstrap" class="footnote">5</a></sup> frontend framework.
Bootstrap makes it easy to design responsive websites that render the same on all recent browsers - a task that can be rather difficult because older browsers, especially those made by Microsoft, do not always follow web standards correctly.
The technical details are out of scope of this post, but for those interested Minima Reboot’s code can be found in its <a href="https://github.com/aterenin/minima-reboot">GitHub repository</a>.</p>
<p>This site has a few additional customizations on top of the theme, such as removing the footer and making the color of hyperlinks less bright compared to Bootstrap’s default.
It also uses the Open Sans font for headers, loading it in a browser-consistent way using the Google Fonts service.</p>
<h1 id="typesetting-mathematics-with-katex">Typesetting mathematics with KaTeX</h1>
<p>This blog is, to a large degree, about mathematics.
Hence, it includes mathematical equations that need to be rendered and displayed.
The most popular way to do this - used on websites such as arXiv and Stack Overflow - is to use a JavaScript package called <a href="http://mathjax.org">MathJax</a>.
MathJax works and is very popular, but it’s big, complicated, and slow - so, this blog uses a newer package called <a href="https://katex.org/">KaTeX</a><sup id="fnref:katex"><a href="#fn:katex" class="footnote">6</a></sup>.
To load it, we simply add a <code class="highlighter-rouge"><script></code> element into the <code class="highlighter-rouge"><head></code> element of our website, as described in the package’s documentation.</p>
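<p>As a concrete sketch, the additions to <code class="highlighter-rouge"><head></code> look roughly like the following - the version number here is purely illustrative, and the current URLs and integrity hashes should be copied from KaTeX’s documentation:</p>
<figure class="highlight"><pre><code class="language-html" data-lang="html"><link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/katex@0.11.1/dist/katex.min.css">
<script defer src="https://cdn.jsdelivr.net/npm/katex@0.11.1/dist/katex.min.js"></script>
<script defer src="https://cdn.jsdelivr.net/npm/katex@0.11.1/dist/contrib/auto-render.min.js"
    onload="renderMathInElement(document.body)"></script></code></pre></figure>
<p>The auto-render extension’s <code class="highlighter-rouge">renderMathInElement</code> call is what scans the page for math; its options argument is where delimiter overrides like those described next are configured.</p>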
<p>By default, KaTeX and MathJax use the <code class="highlighter-rouge">\(</code>, <code class="highlighter-rouge">\)</code> delimiters for inline math, and the <code class="highlighter-rouge">$$</code> delimiters for display-style math.
I prefer to instead use <code class="highlighter-rouge">$</code> for inline math and <code class="highlighter-rouge">\[</code>, <code class="highlighter-rouge">\]</code> for display-style math, so I override the default configuration to use these instead.
Note that since the <code class="highlighter-rouge">\</code> character is not escaped, this means that my display-style delimiters are <code class="highlighter-rouge">\[</code>,<code class="highlighter-rouge">\]</code> in Markdown, but <code class="highlighter-rouge">[</code>,<code class="highlighter-rouge">]</code> in HTML.
I also use a variety of custom macros and aliases designed to make my LaTeX more readable, which both packages allow me to define.
Therefore, I can write <code class="highlighter-rouge">e^{2\pi i} - 1 = 0</code> to get $e^{2\pi i} - 1 = 0$, and can write</p>
<figure class="highlight"><pre><code class="language-latex" data-lang="latex"><span class="p">\[</span><span class="nb">
</span><span class="nv">\int</span><span class="p">_{</span><span class="nv">\R</span><span class="p">}</span><span class="nb"> </span><span class="nv">\frac</span><span class="p">{</span><span class="m">1</span><span class="p">}{</span><span class="nv">\sqrt</span><span class="p">{</span><span class="m">2</span><span class="nv">\pi\sigma</span><span class="p">^</span><span class="m">2</span><span class="p">}}</span><span class="nb"> </span><span class="nv">\exp\cbr</span><span class="p">{</span><span class="nv">\frac</span><span class="p">{</span><span class="o">(</span><span class="nb">x</span><span class="o">-</span><span class="nv">\mu</span><span class="o">)</span><span class="p">^</span><span class="m">2</span><span class="p">}{</span><span class="o">-</span><span class="m">2</span><span class="nv">\sigma</span><span class="p">^</span><span class="m">2</span><span class="p">}}</span><span class="nb"> </span><span class="nv">\d</span><span class="nb"> x </span><span class="o">=</span><span class="nb"> </span><span class="m">1</span><span class="nb">.
</span><span class="p">\]</span></code></pre></figure>
<p>to get the equation</p>
<p>[
\int_{\R} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\cbr{\frac{(x-\mu)^2}{-2\sigma^2}} \d x = 1.
]</p>
<p>Generally, these packages work seamlessly.
However, sometimes <a href="http://kramdown.gettalong.org">Kramdown</a>, the Markdown preprocessor used by Jekyll, interprets LaTeX code as something other than text.
In these situations, it can insert things into the LaTeX code that KaTeX and MathJax don’t understand - this includes <code class="highlighter-rouge"><br></code> elements which interrupt processing of multi-line equations.
This can be avoided by adding the HTML tag <code class="highlighter-rouge"><p></code> around the relevant LaTeX code, which ensures that Kramdown treats it as raw HTML and doesn’t modify it.</p>
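<p>For instance, a multi-line display equation - a hypothetical aligned derivation here - can be wrapped as follows so that Kramdown passes it through untouched:</p>
<figure class="highlight"><pre><code class="language-html" data-lang="html"><p>
\[
\begin{aligned}
a &= b + c \\
  &= d
\end{aligned}
\]
</p></code></pre></figure>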
<p>At present, there is no good way of making either KaTeX or MathJax responsive.
This means that large equations will occasionally go off-screen for users viewing the site on mobile devices.
This isn’t ideal, so hopefully one day there will be software that lets us avoid it.</p>
<h1 id="concluding-remarks">Concluding Remarks</h1>
<p>Building a personal website is easier than ever using modern technology.
Jekyll makes it easy to build a site, GitHub Pages makes it easy to host it, and tools such as Bootstrap make it easy to create a theme that looks good on all modern browsers, desktops, and mobile devices.
Modern mathematical typesetting packages make it just as easy to display high-quality equations on the web as when using LaTeX to typeset a document.</p>
<p>Given my interest in statistical theory, I find the format offered by a blog to be incredibly useful.
In my time practicing statistics, I have come across ideas that were worth communicating to other researchers, but too simple to write a paper about, or already published but using arcane notation difficult to understand.
A blog post provides a wonderful way to communicate these ideas - it is more time-efficient to write a post once and refer people to it, rather than re-derive the same idea repeatedly when it comes up in discussion.
I hope that this post showcases how easy building such a platform is in today’s world.</p>
<h1 id="references">References</h1>
<div class="footnotes">
<ol>
<li id="fn:jekyll">
<p><a href="https://jekyllrb.com">Jekyll</a> <a href="#fnref:jekyll" class="reversefootnote">↩</a></p>
</li>
<li id="fn:ghpages">
<p><a href="https://pages.github.com">GitHub Pages</a> <a href="#fnref:ghpages" class="reversefootnote">↩</a></p>
</li>
<li id="fn:minima">
<p><a href="https://github.com/jekyll/minima">Minima</a> <a href="#fnref:minima" class="reversefootnote">↩</a></p>
</li>
<li id="fn:minimareboot">
<p><a href="https://github.com/aterenin/minima-reboot">Minima Reboot</a> <a href="#fnref:minimareboot" class="reversefootnote">↩</a></p>
</li>
<li id="fn:bootstrap">
<p><a href="https://getbootstrap.com">Bootstrap</a> <a href="#fnref:bootstrap" class="reversefootnote">↩</a></p>
</li>
<li id="fn:katex">
<p><a href="https://katex.org/">KaTeX</a> <a href="#fnref:katex" class="reversefootnote">↩</a></p>
</li>
</ol>
</div>
<h1 id="how-to-use-r-packages-such-as-ggplot-in-julia">How to use R packages such as ggplot in Julia</h1>
<p><em>2018-03-23</em></p>
<p>Julia is a wonderful programming language.
It’s modern, with good functional programming support, and unlike R and Python - both of which are slow - Julia is fast.
Writing packages is straightforward, and high performance can be obtained without bindings to a lower-level language.
Unfortunately, its plotting frameworks are, at least in my view, not as good as the ggplot package in R.
Fortunately, Julia’s interoperability with other programming languages is outstanding.
In this post, I illustrate how to make ggplot work near-seamlessly with Julia using the RCall package.</p>
<h1 id="calling-r-packages-in-julia">Calling R packages in Julia</h1>
<p>R packages can be loaded in Julia<sup id="fnref:jl"><a href="#fn:jl" class="footnote">1</a></sup> through the RCall<sup id="fnref:rcall"><a href="#fn:rcall" class="footnote">2</a></sup> package by using</p>
<figure class="highlight"><pre><code class="language-julia" data-lang="julia"><span class="n">using</span> <span class="n">RCall</span>
<span class="nd">@rlibrary</span> <span class="n">ggplot2</span></code></pre></figure>
<p>which works much like the popular <code class="highlighter-rouge">@pyimport</code> macro in the PyCall<sup id="fnref:pycall"><a href="#fn:pycall" class="footnote">3</a></sup> package.
It is important to note that this <em>properly loads</em> an R package as a Julia module, rather than simply defining a set of bindings to it.
This means that every function in the R package can automatically be called with Julia data structures as arguments, which will be automatically transformed into R data structures.
There is no need to painstakingly convert every input, as is often necessary when making different languages interface with one another - it is done automatically using the magic offered by 21st century programming languages.
So, we can write</p>
<figure class="highlight"><pre><code class="language-julia" data-lang="julia"><span class="n">qplot</span><span class="x">(</span><span class="mi">1</span><span class="x">:</span><span class="mi">10</span><span class="x">,[</span><span class="n">i</span><span class="o">^</span><span class="mi">2</span> <span class="k">for</span> <span class="n">i</span> <span class="k">in</span> <span class="mi">1</span><span class="x">:</span><span class="mi">10</span><span class="x">])</span></code></pre></figure>
<p>and a plot generated by the ggplot<sup id="fnref:gg"><a href="#fn:gg" class="footnote">4</a></sup> function <code class="highlighter-rouge">qplot</code> shows up, even though <code class="highlighter-rouge">1:10</code> is a Julia range and <code class="highlighter-rouge">[i^2 for i in 1:10]</code> is a Julia array.</p>
<h1 id="data-frame-interoperability">Data frame interoperability</h1>
<p>RCall can automatically convert Julia <code class="highlighter-rouge">DataFrame</code> objects into R <code class="highlighter-rouge">data.frame</code> objects.
For example, the following code is valid.</p>
<figure class="highlight"><pre><code class="language-julia" data-lang="julia"><span class="n">using</span> <span class="n">DataFrames</span>
<span class="n">d</span> <span class="o">=</span> <span class="n">DataFrame</span><span class="x">(</span><span class="n">v</span> <span class="o">=</span> <span class="x">[</span><span class="mi">3</span><span class="x">,</span><span class="mi">4</span><span class="x">,</span><span class="mi">5</span><span class="x">],</span> <span class="n">w</span> <span class="o">=</span> <span class="x">[</span><span class="mi">5</span><span class="x">,</span><span class="mi">6</span><span class="x">,</span><span class="mi">7</span><span class="x">],</span> <span class="n">x</span> <span class="o">=</span> <span class="x">[</span><span class="mi">1</span><span class="x">,</span><span class="mi">2</span><span class="x">,</span><span class="mi">3</span><span class="x">],</span> <span class="n">y</span> <span class="o">=</span> <span class="x">[</span><span class="mi">4</span><span class="x">,</span><span class="mi">5</span><span class="x">,</span><span class="mi">6</span><span class="x">],</span> <span class="n">z</span> <span class="o">=</span> <span class="x">[</span><span class="mi">1</span><span class="x">,</span><span class="mi">1</span><span class="x">,</span><span class="mi">2</span><span class="x">])</span>
<span class="n">ggplot</span><span class="x">(</span><span class="n">d</span><span class="x">,</span> <span class="n">aes</span><span class="x">(</span><span class="n">x</span><span class="o">=</span><span class="x">:</span><span class="n">x</span><span class="x">,</span><span class="n">y</span><span class="o">=</span><span class="x">:</span><span class="n">y</span><span class="x">))</span> <span class="o">+</span> <span class="n">geom_line</span><span class="x">()</span></code></pre></figure>
<p>Note that the <code class="highlighter-rouge">aes</code> function uses Julia symbols like <code class="highlighter-rouge">:x</code> to refer to data frame columns.
We don’t need to do any Julia to R type conversions - the code simply works.</p>
<h1 id="dealing-with-dots-formulas-and-other-r-quirks">Dealing with dots, formulas, and other R quirks</h1>
<p>There are a few issues that arise when making complicated plots.
For example, ggplot R commands such as</p>
<figure class="highlight"><pre><code class="language-r" data-lang="r"><span class="n">geom_point</span><span class="p">(</span><span class="n">na.rm</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="kc">TRUE</span><span class="p">)</span></code></pre></figure>
<p>don’t translate directly to Julia code because the <code class="highlighter-rouge">.</code> in <code class="highlighter-rouge">na.rm</code> is interpreted as Julia syntax.
Similar issues arise if, for instance, an R function uses <code class="highlighter-rouge">end</code> as an argument name.
The solution to this problem is to use the <code class="highlighter-rouge">var</code> string macro provided by RCall, which enables us to write</p>
<figure class="highlight"><pre><code class="language-julia" data-lang="julia"><span class="n">geom_point</span><span class="x">(</span><span class="n">var</span><span class="s">"na.rm"</span> <span class="o">=</span> <span class="n">true</span><span class="x">)</span></code></pre></figure>
<p>in place of the above R code.
This macro works by defining a Julia symbol that includes the dot, which we couldn’t have done with standard syntax.</p>
<p>Another useful feature is the <code class="highlighter-rouge">R</code> string macro, which enables us to write R code in line with Julia code.
For example, the Julia code <code class="highlighter-rouge">R"~z"</code> will execute the R code <code class="highlighter-rouge">~z</code>, which creates an R formula object with the variable <code class="highlighter-rouge">z</code>, and returns it as an R object in Julia.
This can be useful for functions such as <code class="highlighter-rouge">facet_grid</code> and <code class="highlighter-rouge">facet_wrap</code> that accept formulas as input.
It enables us to write</p>
<figure class="highlight"><pre><code class="language-julia" data-lang="julia"><span class="n">ggplot</span><span class="x">(</span><span class="n">d</span><span class="x">,</span> <span class="n">aes</span><span class="x">(</span><span class="n">x</span><span class="o">=</span><span class="x">:</span><span class="n">x</span><span class="x">,</span><span class="n">y</span><span class="o">=</span><span class="x">:</span><span class="n">y</span><span class="x">))</span> <span class="o">+</span> <span class="n">geom_point</span><span class="x">()</span> <span class="o">+</span> <span class="n">facet_wrap</span><span class="x">(</span><span class="n">R</span><span class="s">"~z"</span><span class="x">)</span></code></pre></figure>
<p>as well as execute R functions such as <code class="highlighter-rouge">data.frame</code> if we need to.
We can also use this macro to fix issues arising when automatic data frame conversion doesn’t behave as intended.
This occasionally happens for data frames that contain symbols or strings.
For example, we can write code such as</p>
<figure class="highlight"><pre><code class="language-julia" data-lang="julia"><span class="n">d</span> <span class="o">=</span> <span class="n">d</span> <span class="o">|></span>
<span class="n">x</span> <span class="o">-></span> <span class="n">R</span><span class="s">"</span><span class="si">$</span><span class="s">x[,1] = as.numeric(</span><span class="si">$</span><span class="s">x[,1]); </span><span class="si">$</span><span class="s">x"</span> <span class="o">|></span>
<span class="n">x</span> <span class="o">-></span> <span class="n">R</span><span class="s">"</span><span class="si">$</span><span class="s">x[,2] = as.numeric(</span><span class="si">$</span><span class="s">x[,2]); </span><span class="si">$</span><span class="s">x"</span> <span class="o">|></span>
<span class="n">x</span> <span class="o">-></span> <span class="n">R</span><span class="s">"</span><span class="si">$</span><span class="s">x[,3] = as.numeric(</span><span class="si">$</span><span class="s">x[,3]); </span><span class="si">$</span><span class="s">x"</span> <span class="o">|></span>
<span class="n">x</span> <span class="o">-></span> <span class="n">R</span><span class="s">"</span><span class="si">$</span><span class="s">x[,4] = as.factor(as.numeric(</span><span class="si">$</span><span class="s">x[,4])); </span><span class="si">$</span><span class="s">x"</span> <span class="o">|></span>
<span class="n">x</span> <span class="o">-></span> <span class="n">R</span><span class="s">"</span><span class="si">$</span><span class="s">x[,5] = as.factor(as.character(</span><span class="si">$</span><span class="s">x[,5])); </span><span class="si">$</span><span class="s">x"</span> <span class="o">|></span>
<span class="n">x</span> <span class="o">-></span> <span class="n">names!</span><span class="x">(</span><span class="n">d</span><span class="x">,</span> <span class="x">[:</span><span class="n">u_min</span><span class="x">,</span> <span class="x">:</span><span class="n">u_max</span><span class="x">,</span> <span class="x">:</span><span class="n">x</span><span class="x">,</span> <span class="x">:</span><span class="n">u</span><span class="x">,</span> <span class="x">:</span><span class="n">solution</span><span class="x">])</span></code></pre></figure>
<p>to convert strings to factors inside our data frame - inline.
There are a couple of points worth expanding on here.
Note first the functional style: we use a pipe<sup id="fnref:magrittr"><a href="#fn:magrittr" class="footnote">5</a></sup> to input the data frame <code class="highlighter-rouge">d</code> into a function that takes <code class="highlighter-rouge">x</code> as input and executes the string macro <code class="highlighter-rouge">R"$x[,1] = as.numeric($d[,1]); $x"</code> and returns its results.
These are immediately piped into another function.
The code <code class="highlighter-rouge">$x</code> in the line <code class="highlighter-rouge">R"$x[,1] = as.numeric($d[,1]); $x"</code> means that the Julia variable <code class="highlighter-rouge">x</code> is passed into the R code.
This syntax allows us to execute R code without ever worrying about manually passing variables between Julia and R.</p>
<p>Putting everything together, it’s easy to make a layered plot such as</p>
<figure class="highlight"><pre><code class="language-julia" data-lang="julia"><span class="n">ggplot</span><span class="x">(</span><span class="n">d</span><span class="x">,</span> <span class="n">aes</span><span class="x">(</span><span class="n">x</span><span class="o">=</span><span class="x">:</span><span class="n">x</span><span class="x">))</span> <span class="o">+</span>
<span class="n">geom_ribbon</span><span class="x">(</span><span class="n">aes</span><span class="x">(</span><span class="n">ymin</span><span class="o">=</span><span class="x">:</span><span class="n">u_min</span><span class="x">,</span> <span class="n">ymax</span><span class="o">=</span><span class="x">:</span><span class="n">u_max</span><span class="x">),</span> <span class="n">fill</span><span class="o">=</span><span class="s">"blue"</span><span class="x">,</span> <span class="n">alpha</span><span class="o">=</span><span class="mf">0.5</span><span class="x">)</span> <span class="o">+</span>
<span class="n">geom_line</span><span class="x">(</span><span class="n">aes</span><span class="x">(</span><span class="n">y</span><span class="o">=</span><span class="x">:</span><span class="n">u</span><span class="x">),</span> <span class="n">color</span><span class="o">=</span><span class="s">"blue"</span><span class="x">)</span> <span class="o">+</span>
<span class="n">lims</span><span class="x">(</span><span class="n">x</span><span class="o">=</span><span class="x">[</span><span class="mi">0</span><span class="x">,</span><span class="mi">5</span><span class="x">],</span> <span class="n">y</span><span class="o">=</span><span class="x">[</span><span class="mi">0</span><span class="x">,</span><span class="mi">10</span><span class="x">])</span> <span class="o">+</span>
<span class="n">geom_line</span><span class="x">(</span><span class="n">aes</span><span class="x">(</span><span class="n">y</span><span class="o">=</span><span class="x">:</span><span class="n">solution</span><span class="x">),</span> <span class="n">color</span><span class="o">=</span><span class="s">"red"</span><span class="x">)</span> <span class="o">|></span>
<span class="n">p</span> <span class="o">-></span> <span class="n">ggsave</span><span class="x">(</span><span class="s">"p1.pdf"</span><span class="x">,</span> <span class="n">p</span><span class="x">)</span></code></pre></figure>
<p>and save it to a PDF file using functional syntax, without ever writing a line of R code.
In doing so, we sacrifice very little and retain essentially all aspects of ggplot that make it a user-friendly and productive package.
I’ll conclude by noting that everything here is just ordinary use of the RCall package and would work with any R package – in all of the above, we did not use any ggplot-specific Julia packages, nor did we write a single line of language bindings.</p>
<h1 id="why-ggplot-arent-we-using-julia-in-order-to-not-use-r">Why ggplot? Aren’t we using Julia in order to not use R?</h1>
<p>Why bother with ggplot when Julia offers its own full-featured plotting packages such as Gadfly<sup id="fnref:gadfly"><a href="#fn:gadfly" class="footnote">6</a></sup> and Plots.jl<sup id="fnref:plotsjl"><a href="#fn:plotsjl" class="footnote">7</a></sup>?
In my view – and I’m not generally a fan of criticizing other people’s hard work, but I find it warranted here and will be as gentle as I can – neither of these frameworks has a well-designed programming interface.
Let’s look at what the issues are, and why ggplot handles them better.</p>
<p>Plots.jl is a powerful, fully-featured plotting package.
Unfortunately, its interface is very similar to that of base R: making a complicated plot requires executing a list of commands.
This is its main downside: to use it effectively, the user needs to memorize every command and its options individually – there is no over-arching principle upon which commands are based, which users can learn instead of the commands themselves.
Indeed, this is one of the major features of the Wickham-Wilkinson Grammar of Graphics<sup id="fnref:ggbooks"><a href="#fn:ggbooks" class="footnote">8</a></sup> interface, which works as follows.</p>
<ul>
<li>Plots are visualizations of data frames consisting of layered geometric objects.</li>
<li>Aesthetic mappings describe how individual data points are mapped to geometric objects.</li>
</ul>
<p>For example, to plot a function and a 95% probability interval around that function, we create a data frame where each row contains the function’s $x$ and $f(x)$ values at a point, together with the lower and upper interval endpoints $a$ and $b$.
We then add a <em>line</em> geometric object with the aesthetic mapping $(x,y) \goesto (x, f(x))$, as well as a <em>ribbon</em> geometric object with the mapping $(x,\min,\max) \goesto (x,a,b)$.
We do not need to memorize how lines and ribbons work to use them, and simply follow the principles given by the bullet points above.
If we need to use a new geometric object that we’ve never seen before, all we need to do is look at what kind of aesthetic mappings it utilizes – we never need to memorize any other details.</p>
<p>On the other hand, consider the Plots.jl code that I wrote for a project</p>
<figure class="highlight"><pre><code class="language-julia" data-lang="julia"><span class="n">contour</span><span class="x">(</span><span class="o">-</span><span class="mi">3</span><span class="x">:</span><span class="mf">0.1</span><span class="x">:</span><span class="mi">3</span><span class="x">,</span> <span class="o">-</span><span class="mi">3</span><span class="x">:</span><span class="mf">0.1</span><span class="x">:</span><span class="mi">3</span><span class="x">,</span> <span class="x">(</span><span class="n">x</span><span class="x">,</span><span class="n">y</span><span class="x">)</span> <span class="o">-></span> <span class="n">pdf</span><span class="x">(</span><span class="n">MultivariateNormal</span><span class="x">(</span><span class="mi">2</span><span class="x">,</span><span class="mi">1</span><span class="x">),[</span><span class="n">x</span><span class="x">,</span><span class="n">y</span><span class="x">]))</span>
<span class="n">scatter!</span><span class="x">(</span><span class="n">θ</span><span class="x">[</span><span class="n">i</span><span class="x">][</span><span class="mi">1</span><span class="x">,:],</span> <span class="n">θ</span><span class="x">[</span><span class="n">i</span><span class="x">][</span><span class="mi">2</span><span class="x">,:])</span></code></pre></figure>
<p>and note how this syntax differs from</p>
<figure class="highlight"><pre><code class="language-julia" data-lang="julia"><span class="n">plot!</span><span class="x">(</span><span class="n">hcat</span><span class="x">(</span><span class="n">L</span><span class="x">,</span><span class="nb">error</span><span class="x">),</span><span class="n">layout</span><span class="o">=</span><span class="mi">2</span><span class="x">,</span> <span class="n">label</span><span class="o">=</span><span class="x">[</span><span class="s">"L: test"</span> <span class="s">"Error: test"</span><span class="x">],</span> <span class="n">alpha</span><span class="o">=</span><span class="mf">0.5</span><span class="x">)</span></code></pre></figure>
<p>where a single matrix is used as input rather than two ranges and a function.
It is <em>a priori</em> unclear whether the input to a particular plotting function should be an array, data frame, or something else.
Looking a bit further, imagine setting color labels in a complicated multilayered plot - in which layer’s command should we specify how labels are displayed?
Ambiguity like this wastes time by forcing the user to spend time reading documentation rather than making their plots, and in my experience the time saved by having concise commands like <code class="highlighter-rouge">plot(x,y)</code> in simple cases does not outweigh the cost in complicated ones.</p>
<p>It’s true that the Grammar of Graphics interface is not well-suited to every kind of plot, but it works well for most of the ones encountered in everyday data science.
Most importantly, it offers a single unified way to think about plots and how to construct them.
Writing plots in it can be more verbose, but I would rather be verbose and consistent than concise and different in every scenario.
I don’t have time to memorize individual commands in a plotting package that doesn’t contain a central set of guiding principles – and neither should you.</p>
<p>So if I don’t prefer Plots.jl due to its interface, what about Gadfly, which is Grammar of Graphics based?
Unfortunately, Gadfly is missing useful features such as transparency and geometric objects like <code class="highlighter-rouge">geom_raster</code>, and suffers from a whole other set of issues that make it difficult to use.
One particular problem is that it uses a varargs-based interface rather than a functional one.
This makes us write things like</p>
<figure class="highlight"><pre><code class="language-julia" data-lang="julia"><span class="n">plot</span><span class="x">(</span><span class="n">plot_data_1</span><span class="x">,</span> <span class="n">x</span><span class="o">=</span><span class="s">"x"</span><span class="x">,</span> <span class="n">y</span><span class="o">=</span><span class="s">"u"</span><span class="x">,</span> <span class="n">Geom</span><span class="o">.</span><span class="n">line</span><span class="x">,</span>
<span class="n">layer</span><span class="x">(</span><span class="n">Geom</span><span class="o">.</span><span class="n">line</span><span class="x">,</span> <span class="n">x</span> <span class="o">=</span> <span class="s">"x"</span><span class="x">,</span> <span class="n">y</span> <span class="o">=</span> <span class="s">"solution"</span><span class="x">,</span> <span class="n">Theme</span><span class="x">(</span><span class="n">default_color</span><span class="o">=</span><span class="s">"red"</span><span class="x">)),</span>
<span class="n">layer</span><span class="x">(</span><span class="n">Geom</span><span class="o">.</span><span class="n">line</span><span class="x">,</span> <span class="n">x</span><span class="o">=</span><span class="s">"x"</span><span class="x">,</span> <span class="n">y</span><span class="o">=</span><span class="s">"u_mc"</span><span class="x">,</span> <span class="n">Theme</span><span class="x">(</span><span class="n">default_color</span> <span class="o">=</span> <span class="s">"purple"</span><span class="x">)),</span>
<span class="n">layer</span><span class="x">(</span><span class="n">Geom</span><span class="o">.</span><span class="n">line</span><span class="x">,</span> <span class="n">x</span><span class="o">=</span><span class="s">"x"</span><span class="x">,</span> <span class="n">y</span><span class="o">=</span><span class="s">"u_mf"</span><span class="x">,</span> <span class="n">Theme</span><span class="x">(</span><span class="n">default_color</span> <span class="o">=</span> <span class="s">"orange"</span><span class="x">))</span>
<span class="x">)</span></code></pre></figure>
<p>instead of</p>
<figure class="highlight"><pre><code class="language-julia" data-lang="julia"><span class="n">ggplot</span><span class="x">(</span><span class="n">plot_data_1</span><span class="x">,</span> <span class="n">aes</span><span class="x">(</span><span class="n">x</span><span class="o">=</span><span class="s">"x"</span><span class="x">,</span> <span class="n">y</span><span class="o">=</span><span class="s">"u"</span><span class="x">))</span> <span class="o">+</span>
<span class="n">geom_line</span><span class="x">(</span><span class="n">color</span><span class="o">=</span><span class="s">"blue"</span><span class="x">)</span> <span class="o">+</span>
<span class="n">geom_line</span><span class="x">(</span><span class="n">aes</span><span class="x">(</span><span class="n">y</span><span class="o">=</span><span class="x">:</span><span class="n">u_mc</span><span class="x">),</span> <span class="n">color</span><span class="o">=</span><span class="s">"purple"</span><span class="x">)</span> <span class="o">+</span>
<span class="n">geom_line</span><span class="x">(</span><span class="n">aes</span><span class="x">(</span><span class="n">y</span><span class="o">=</span><span class="x">:</span><span class="n">u_mf</span><span class="x">),</span> <span class="n">color</span><span class="o">=</span><span class="s">"orange"</span><span class="x">)</span></code></pre></figure>
<p>which is much simpler.
The issue here is that a <code class="highlighter-rouge">...</code>-based interface requires the user to waste time on the irritating task of balancing commas and parentheses.
Plots.jl suffers just as much from the exact same problem.</p>
<p>This code raises another major issue: Gadfly doesn’t follow the Grammar of Graphics strictly enough: a color not given by an aesthetic mapping should be defined as part of a geometric object, not part of a theme.
Themes are supposed to control parts of the plot that have nothing to do with the data or geometric objects, such as the font size for the plot’s title – certainly not the color of a line.
This is an inconsistency that a user needs to learn, rather than a consequence of a set of principles that is immediately obvious.</p>
<p>At the end of the day, memorizing a plotting package is not a good use of my time or yours, and after spending a good bit of time with both packages I’ve found dealing with R-Julia interoperability and its occasional difficulties to be a lesser problem compared to the issues raised above.</p>
<h1 id="concluding-thoughts">Concluding thoughts</h1>
<p>Julia is wonderful, made even more so through its strong interoperability given by RCall<sup id="fnref:rcall:1"><a href="#fn:rcall" class="footnote">2</a></sup> and PyCall<sup id="fnref:pycall:1"><a href="#fn:pycall" class="footnote">3</a></sup>.
I find it better than R, and much better than Python.
It does have its flaws.
Its syntax isn’t ideal in certain situations, particularly when writing highly functional code, and would be improved by being more like Scala, or even like pipe-oriented R written with the magrittr<sup id="fnref:magrittr:1"><a href="#fn:magrittr" class="footnote">5</a></sup> package.
Multiple dispatch is not a proper replacement for Python-style objects, and a language feature similar to Rust’s <em>Implementations</em> would be a major improvement.
This said, in my view Julia is already ahead of R and Python, which have bigger issues than the above.
Usability and cleanliness are critically important in a programming language, and this is why it’s worth using ggplot in Julia.</p>
<h1 id="references">References</h1>
<div class="footnotes">
<ol>
<li id="fn:jl">
<p><a href="https://julialang.org">Julia</a> <a href="#fnref:jl" class="reversefootnote">↩</a></p>
</li>
<li id="fn:rcall">
<p><a href="https://github.com/JuliaInterop/RCall.jl">RCall</a> <a href="#fnref:rcall" class="reversefootnote">↩</a> <a href="#fnref:rcall:1" class="reversefootnote">↩<sup>2</sup></a></p>
</li>
<li id="fn:pycall">
<p><a href="https://github.com/JuliaPy/PyCall.jl">PyCall</a> <a href="#fnref:pycall" class="reversefootnote">↩</a> <a href="#fnref:pycall:1" class="reversefootnote">↩<sup>2</sup></a></p>
</li>
<li id="fn:gg">
<p><a href="http://ggplot2.tidyverse.org">ggplot</a> <a href="#fnref:gg" class="reversefootnote">↩</a></p>
</li>
<li id="fn:magrittr">
<p><a href="https://cran.r-project.org/web/packages/magrittr/vignettes/magrittr.html">magrittr</a> <a href="#fnref:magrittr" class="reversefootnote">↩</a> <a href="#fnref:magrittr:1" class="reversefootnote">↩<sup>2</sup></a></p>
</li>
<li id="fn:gadfly">
<p><a href="http://gadflyjl.org">Gadfly</a> <a href="#fnref:gadfly" class="reversefootnote">↩</a></p>
</li>
<li id="fn:plotsjl">
<p><a href="https://github.com/JuliaPlots/Plots.jl">Plots.jl</a> <a href="#fnref:plotsjl" class="reversefootnote">↩</a></p>
</li>
<li id="fn:ggbooks">
<p>See the original book<sup id="fnref:grammarofgraphics"><a href="#fn:grammarofgraphics" class="footnote">9</a></sup> and ggplot manual<sup id="fnref:ggplot2"><a href="#fn:ggplot2" class="footnote">10</a></sup>. <a href="#fnref:ggbooks" class="reversefootnote">↩</a></p>
</li>
<li id="fn:grammarofgraphics">
<p>L. Wilkinson. The Grammar of Graphics. 2005. <a href="#fnref:grammarofgraphics" class="reversefootnote">↩</a></p>
</li>
<li id="fn:ggplot2">
<p>H. Wickham. ggplot2: Elegant Graphics for Data Analysis. 2016. <a href="#fnref:ggplot2" class="reversefootnote">↩</a></p>
</li>
</ol>
</div>Alexander TereninJulia is a wonderful programming language. It’s modern with good functional programming support, and unlike R and Python - both slow - Julia is fast. Writing packages is straightforward, and high performance can be obtained without bindings to a lower-level language. Unfortunately, its plotting frameworks are, at least in my view, not as good as the ggplot package in R. Fortunately, Julia’s interoperability with other programming languages is outstanding. In this post, I illustrate how to make ggplot work near-seamlessly with Julia using the RCall package.What does it really mean to be Bayesian?2018-02-09T00:00:00+00:002018-02-09T00:00:00+00:00https://avt.im/blog/2018/02/09/real-meaning-of-bayesian<p>In my previous posts, I introduced Bayesian models and argued that they are meaningful.
I claimed that studying them is worthwhile because the probabilistic interpretation of learning that they offered can be more intuitive than other interpretations.
I showcased an example illustrating what a Bayesian model looks like.
I did not, however, say what a Bayesian model actually is – at least not in a sufficiently general setting to encompass models people regularly use.
I’m going to discuss that in this post, and then showcase some surprising behavior in infinite-dimensional settings where the general approach is necessary.
The subject matter here can be highly technical, but will be discussed at an intuitive level meant to explain what is going on.</p>
<p><strong>Definition.</strong>
A model $\s{M}$ is <em>mathematically Bayesian</em> if it can be fully specified via a prior $\pi(\theta)$ and likelihood $f(x \given \theta)$ for which the posterior distribution $f(\theta \given x)$ is well-defined.</p>
<p>Here, $\theta$ is an abstract parameter, and $x$ is an abstract data set.
The argument for using Bayesian learning, given by Cox’s Theorem, is that conditional probability can be interpreted as an extension of true-false logic under uncertainty.
This is great – but, formality considerations aside, there are scenarios that involve learning from data that are not included in the above definition.
Let’s look at one.</p>
<h1 id="a-motivating-example">A motivating example</h1>
<p>To illustrate a case not covered by the above definition, consider the problem of learning a function from a finite set of points.
Here, we have a set of points $(y_i, x_i)$, $i=1,\ldots,n$, and we want to learn a function $y = f(x)$ from the data.
A simple Bayesian model for the data can be written as</p>
<p>[
\begin{aligned}
y_i &= f(x_i) + \eps_i
&
\eps_i &\iid N(0,\sigma^2)
&
f &\dist\f{GP}(\mu, \Sigma)
\end{aligned}
]</p>
<p>What are we saying here?
If we know $f$, we can use a set of points $x_i$ to generate $y_i$ by calculating $f(x_i)$ and adding Gaussian noise $\eps_i$.
Since we don’t know $f$, we specify its prior probability distribution as a Gaussian process with mean function $\mu: \R \goesto \R$ and covariance function $\Sigma: \R \cross \R \goesto \R$.
Since we’ve specified a conditional and marginal distribution, this defines a joint distribution, so we can try to get the posterior distribution using Bayes’ Rule
[
f(f \given \v{y},\v{x}) \propto f(\v{y} \given \v{x}, f) \pi(f)
.
]</p>
<p><em>Except we can’t do that</em>.
The above expression is not well-defined – $\pi(f)$ does not exist, because the probability distribution $f \dist\f{GP}(\mu, \Sigma)$ is a distribution over a space of functions, not of real numbers – therefore, it has no density in the standard sense<sup id="fnref:leb"><a href="#fn:leb" class="footnote">1</a></sup>.</p>
<p>Why not?
A probability density is a function that assigns a weight to every unit of volume in space.
In one dimension, every interval of the form $[a,b]$ is assigned volume $|a-b|$ – this depends only on its length, not its location.
In infinite-dimensional spaces, this is impossible.
It can be proven that any notion of volume must depend both on the length and location – more formally, the infinite-dimensional Lebesgue measure is not locally finite<sup id="fnref:infleb"><a href="#fn:infleb" class="footnote">2</a></sup>.</p>
<p>So what do we do?
Is there a sense in which we can consider the above model Bayesian?
Let’s discuss that.</p>
<h1 id="bayesian-learning-as-conditional-probability">Bayesian learning as conditional probability</h1>
<p>If we’re not allowed to discuss probability densities, what else can we do?
One thing that the definition says is that a model is <em>Bayesian</em> if it is <em>probabilistic</em>.
This entails two parts.</p>
<ol>
<li>$\s{M}$ is specified via a joint probability density $f(\theta, x)$ over the parameters and data.</li>
<li>Learning takes place via conditional probability.</li>
</ol>
<p>It turns out that these two intuitive notions are precisely the ones we need.
Informally, this leads to the definition below.</p>
<p><strong>Definition.</strong>
A model $\s{M}$ is <em>mathematically Bayesian</em> if it is fully specified via a random variable $(x,\theta)$ for which the conditional probability distribution $\theta \given x$ exists for all $x$.</p>
<p>This definition can be made formal using measure-theoretic notions such as <em>regular conditional probability</em><sup id="fnref:rcp"><a href="#fn:rcp" class="footnote">3</a></sup> and <em>disintegration</em><sup id="fnref:disint"><a href="#fn:disint" class="footnote">4</a></sup>.
These have various flavors with different technical requirements on $(x,\theta)$ that need to be checked to ensure that writing down a probability distribution conditional on a set of data points actually makes sense.
Let’s now look at two different ways of specifying $(x,\theta)$ in infinite-dimensional settings where the usual approach fails.</p>
<h1 id="two-infinite-dimensional-approaches">Two infinite-dimensional approaches</h1>
<p>One way to define Bayesian models in infinite-dimensional settings is through a <em>top-down</em> approach.
Here, we specify $\theta \given x$ by selecting a complicated but well-defined infinite-dimensional notion of volume.
Often, the prior distribution is used to select this notion of volume.
From there, we can specify how the posterior distribution changes that volume, by writing down a <em>Radon-Nikodym derivative</em><sup id="fnref:rn"><a href="#fn:rn" class="footnote">5</a></sup>.
This viewpoint is often used in the Gaussian measure and Bayesian inverse problem literatures.
The main price we pay is that for many infinite-dimensional models, the prior and posterior distributions may not have the same support – they may fail to be <em>absolutely continuous</em><sup id="fnref:ac"><a href="#fn:ac" class="footnote">6</a></sup>, in which case the Radon-Nikodym derivative between them would not exist.</p>
<p>Alternatively, we could use a <em>bottom-up</em> approach.
Here, we define a family of probability distributions using finite-dimensional slices of our parameter space, using Kolmogorov’s Extension Theorem as our primary theoretical tool for handling the infinite-dimensional object.
This is the primary viewpoint in the Gaussian process and Dirichlet process literatures.
The main price we pay is that from this perspective, we can only reason about the infinite-dimensional object we wish to study indirectly.
This may cause us to make poor choices, such as writing down algorithms that stop working as we approach the infinite-dimensional limit<sup id="fnref:pcn"><a href="#fn:pcn" class="footnote">7</a></sup>, which are easily avoided with a more direct perspective.</p>
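<p>To make the bottom-up approach concrete, here is a minimal sketch in Python with NumPy (the squared-exponential kernel, the data, and all names are my own illustration, not from the references above). The infinite-dimensional Gaussian process is never represented directly: we only ever manipulate its finite-dimensional Gaussian slices, which is exactly what Kolmogorov’s Extension Theorem licenses.</p>

```python
import numpy as np

def sq_exp_kernel(a, b, lengthscale=1.0):
    # Squared-exponential covariance between two finite sets of input points.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / lengthscale ** 2)

def gp_posterior(x_train, y_train, x_test, noise_var=0.1):
    # Ordinary multivariate-Gaussian conditioning on a finite-dimensional slice.
    K = sq_exp_kernel(x_train, x_train) + noise_var * np.eye(len(x_train))
    K_star = sq_exp_kernel(x_test, x_train)
    K_ss = sq_exp_kernel(x_test, x_test)
    mean = K_star @ np.linalg.solve(K, y_train)
    cov = K_ss - K_star @ np.linalg.solve(K, K_star.T)
    return mean, cov

x_train = np.array([0.0, 1.0, 2.0])
y_train = np.sin(x_train)
mean, cov = gp_posterior(x_train, y_train, np.array([0.5, 1.5]))
```

<p>Every quantity above is a finite vector or matrix: the function-space prior enters only through the kernel, so the question of a density $\pi(f)$ never arises.</p>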
<h1 id="cromwells-rule-and-some-surprising-consequences">Cromwell’s Rule and some surprising consequences</h1>
<p>We briefly mentioned that in infinite-dimensional settings, prior and posterior distributions may not be absolutely continuous with one another.
This property deserves some attention.
Consider Bayes’ Rule for probabilities</p>
<p>[
\P(B \given A) = \frac{\P(A \given B) \P(B)}{\P(A)}
]</p>
<p>and note that for $\P(A)$ nonzero, then $\P(B) = 0$ implies $\P(B \given A) = 0$ – no matter what $A$ is.
By analogy, if $A$ is data and $B$ is an event of interest, then Bayes’ Rule ignores the data if the prior probability is zero.
This is often not desirable, which leads to <em>Cromwell’s Rule</em><sup id="fnref:cr"><a href="#fn:cr" class="footnote">8</a></sup>, given below.</p>
<blockquote>
<p>To avoid making learning impossible, the use of prior probabilities that are zero or one should be avoided.</p>
</blockquote>
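<p>In a discrete, finite-dimensional setting this is easy to verify numerically. The sketch below, a toy three-hypothesis example of my own rather than anything from the text, shows that a hypothesis with prior probability zero keeps posterior probability zero no matter how strongly the data favor it.</p>

```python
import numpy as np

def posterior(prior, likelihood):
    # Bayes' Rule for a discrete parameter: normalize likelihood times prior.
    unnorm = prior * likelihood
    return unnorm / unnorm.sum()

prior = np.array([0.0, 0.5, 0.5])            # hypothesis 0 ruled out a priori
likelihood = np.array([0.99, 0.005, 0.005])  # data strongly favor hypothesis 0
post = posterior(prior, likelihood)
# P(B) = 0 forces P(B | A) = 0, however extreme the likelihood.
```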
<p>Except, in many infinite-dimensional settings, this doesn’t apply because $\P(A)$ may be zero.
Indeed, it is easy to construct examples where the prior probability of an event is zero, but the posterior probability is nonzero – more formally, where the posterior is not absolutely continuous with respect to the prior.
This is not an esoteric occurrence: even something as basic as adding a mean function to a Gaussian process can break absolute continuity<sup id="fnref:cmt"><a href="#fn:cmt" class="footnote">9</a></sup>.
Let’s examine a case where this happens.</p>
<h1 id="breaking-probabilistic-impossibility">Breaking probabilistic impossibility</h1>
<p>Consider the following model.</p>
<p>[
\begin{aligned}
y_i &\given F \iid F
&
F &\dist\f{DP}(\alpha, \delta_0)
\end{aligned}
]</p>
<p>where $\delta_0$ is a Dirac measure that places all of its probability on zero.
Under the prior, we have
[
\P(F \neq \delta_0) = 0
.
]
The standard posterior for this model is
[
F \given \v{y} \dist\f{DP}\del{\alpha + n, \frac{\alpha}{\alpha+n}\delta_0 + \frac{n}{\alpha+n}\hat{F}_n}
]
where $n$ is the length of $\v{y}$ and $\hat{F}_n$ is the empirical distribution of $\v{y}$.
But we can tell immediately that</p>
<p>[
\P(F \neq \delta_0 \given \v{y}) > 0
.
]</p>
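<p>This can be seen numerically with a truncated stick-breaking simulation – a rough sketch of my own, not from the post, using the standard constructive representation of the Dirichlet process. A draw from the prior puts all of its mass at zero, while a draw from the posterior places strictly positive mass at the observed data points.</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_draw(concentration, support, support_probs, n_atoms=200):
    # Truncated stick-breaking draw from a DP whose base measure is discrete
    # with the given support points and weights.
    betas = rng.beta(1.0, concentration, size=n_atoms)
    sticks = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
    weights = betas * sticks
    locations = rng.choice(support, size=n_atoms, p=support_probs)
    return locations, weights

alpha = 1.0
y = np.array([1.7, 2.3, -0.4])  # observed data, none of it equal to zero
n = len(y)

# Prior: base measure delta_0, so every atom of the draw sits at zero.
prior_loc, prior_w = dp_draw(alpha, np.array([0.0]), np.array([1.0]))

# Posterior: base measure mixes delta_0 with the empirical distribution of y.
post_support = np.concatenate([[0.0], y])
post_probs = np.concatenate([[alpha], np.ones(n)]) / (alpha + n)
post_loc, post_w = dp_draw(alpha + n, post_support, post_probs)

prior_mass_off_zero = prior_w[prior_loc != 0.0].sum()
post_mass_off_zero = post_w[post_loc != 0.0].sum()
```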
<p>This example illustrates a whole host of bizarre consequences.
Since $F \given \v{y}$ is not absolutely continuous with respect to $F$, we see that in infinite dimensions, data may convince us to believe in something we in a sense thought was impossible.
Furthermore, $\f{DP}(\alpha_1, \delta_0)$ and $\f{DP}(\alpha_2, \delta_0)$ are, as probability distributions, identical – but their respective posterior distributions are not.
So, what matters for Bayesian learning in infinite dimensions is not the distribution of the prior, but the <em>functional form of the joint probability measure</em>.
This behavior is both surprising and typical – conditional probability can act in complicated ways.</p>
<h1 id="what-it-all-means">What it all means</h1>
<p>In my view, an abstract model is <em>Bayesian</em> if it is <em>probabilistic</em> and learning takes place through <em>conditional probability</em>.
In well-behaved finite-dimensional settings, this means that learning takes place using Bayes’ Rule.
There, we have a <em>likelihood</em> $f(x \given \theta)$ that acts as the generative distribution for the data given the parameters, and a <em>prior</em> that describes what sorts of parameters we’d like to regularize the learning process towards.
In full generality, however, neither the generative nature of the likelihood nor the use of Bayes’ Rule matters: it is the use of conditional probability that is important.
From a philosophical standpoint this makes sense: learning is just reasoning about something we don’t know using the things we do, and Cox’s Theorem<sup id="fnref:ct"><a href="#fn:ct" class="footnote">10</a></sup> tells us that true-false reasoning under uncertainty must have the same mathematical structure as conditional probability.</p>
<p>Once we’ve taken the general perspective, we are free to define models in infinite-dimensional settings.
Such models are powerful and have proven useful in many applications, but at times they may behave bizarrely.
It’s worthwhile to take a moment to step back, appreciate, and understand why the expressions we calculate are the way they are.</p>
<h1 id="references">References</h1>
<div class="footnotes">
<ol>
<li id="fn:leb">
<p>The standard notion of volume is taken to be the Lebesgue measure. See Chapter 3 of Probability and Stochastics<sup id="fnref:cinlar"><a href="#fn:cinlar" class="footnote">11</a></sup>. <a href="#fnref:leb" class="reversefootnote">↩</a></p>
</li>
<li id="fn:infleb">
<p>See Section 1.2 of Analysis and Probability on Infinite-Dimensional Spaces<sup id="fnref:eldredge"><a href="#fn:eldredge" class="footnote">12</a></sup>. <a href="#fnref:infleb" class="reversefootnote">↩</a></p>
</li>
<li id="fn:rcp">
<p>See Chapter 2 of Probability and Stochastics<sup id="fnref:cinlar:1"><a href="#fn:cinlar" class="footnote">11</a></sup>. <a href="#fnref:rcp" class="reversefootnote">↩</a></p>
</li>
<li id="fn:disint">
<p>See Section 2 of Conditioning as Disintegration<sup id="fnref:condasdisint"><a href="#fn:condasdisint" class="footnote">13</a></sup>. <a href="#fnref:disint" class="reversefootnote">↩</a></p>
</li>
<li id="fn:rn">
<p>A Radon-Nikodym derivative tells us how to re-weight one probability measure to obtain another one. See Chapter 5 of Probability and Stochastics<sup id="fnref:cinlar:2"><a href="#fn:cinlar" class="footnote">11</a></sup>. <a href="#fnref:rn" class="reversefootnote">↩</a></p>
</li>
<li id="fn:ac">
<p>If two measures are absolutely continuous, they assign nonzero probability to the same events. See Chapter 5 of Probability and Stochastics<sup id="fnref:cinlar:3"><a href="#fn:cinlar" class="footnote">11</a></sup>. <a href="#fnref:ac" class="reversefootnote">↩</a></p>
</li>
<li id="fn:pcn">
<p>A recent line of work<sup id="fnref:infmcmc"><a href="#fn:infmcmc" class="footnote">14</a></sup> has sought to prevent Markov Chain Monte Carlo algorithms from slowing down for high-dimensional models by ensuring their infinite-dimensional limits are well-defined. <a href="#fnref:pcn" class="reversefootnote">↩</a></p>
</li>
<li id="fn:cr">
<p>See Chapter 6 Section 8 of Understanding Uncertainty<sup id="fnref:lindley"><a href="#fn:lindley" class="footnote">15</a></sup>. <a href="#fnref:cr" class="reversefootnote">↩</a></p>
</li>
<li id="fn:cmt">
<p>The space of vectors that can be added to a Gaussian measure while preserving absolute continuity is called its <em>Cameron-Martin</em> space. See Chapter 5 of Lectures on Gaussian Processes<sup id="fnref:lecgm"><a href="#fn:lecgm" class="footnote">16</a></sup>. <a href="#fnref:cmt" class="reversefootnote">↩</a></p>
</li>
<li id="fn:ct">
<p>A. Terenin and D. Draper. Cox’s Theorem and the Jaynesian Interpretation of Probability. <a href="https://arxiv.org/abs/1507.06597">arXiv:1507.06597</a>, 2015. <a href="#fnref:ct" class="reversefootnote">↩</a></p>
</li>
<li id="fn:cinlar">
<p>E. Çınlar. Probability and Stochastics. 2010. <a href="#fnref:cinlar" class="reversefootnote">↩</a> <a href="#fnref:cinlar:1" class="reversefootnote">↩<sup>2</sup></a> <a href="#fnref:cinlar:2" class="reversefootnote">↩<sup>3</sup></a> <a href="#fnref:cinlar:3" class="reversefootnote">↩<sup>4</sup></a></p>
</li>
<li id="fn:eldredge">
<p>N. Eldredge. Analysis and Probability on Infinite-Dimensional Spaces. 2016. <a href="#fnref:eldredge" class="reversefootnote">↩</a></p>
</li>
<li id="fn:condasdisint">
<p>J. T. Chang and D. Pollard. Conditioning as Disintegration. Statistica Neerlandica 51(3). 1997. <a href="#fnref:condasdisint" class="reversefootnote">↩</a></p>
</li>
<li id="fn:infmcmc">
<p>S. L. Cotter, G. O. Roberts, A. M. Stuart, and D. White. MCMC Methods for Functions: Modifying Old Algorithms to Make Them Faster. Statistical Science 28(3), 2013. <a href="#fnref:infmcmc" class="reversefootnote">↩</a></p>
</li>
<li id="fn:lindley">
<p>D. Lindley. Understanding Uncertainty. 2006. <a href="#fnref:lindley" class="reversefootnote">↩</a></p>
</li>
<li id="fn:lecgm">
<p>M. Lifshits. Lectures on Gaussian Processes. 2012. <a href="#fnref:lecgm" class="reversefootnote">↩</a></p>
</li>
</ol>
</div>Alexander TereninIn my previous posts, I introduced Bayesian models and argued that they are meaningful. I claimed that studying them is worthwhile because the probabilistic interpretation of learning that they offered can be more intuitive than other interpretations. I showcased an example illustrating what a Bayesian model looks like. I did not, however, say what a Bayesian model actually is – at least not in a sufficiently general setting to encompass models people regularly use. I’m going to discuss that in this post, and then showcase some surprising behavior in infinite-dimensional settings where the general approach is necessary. The subject matter here can be highly technical, but will be discussed at an intuitive level meant to explain what is going on.What does it mean to be Bayesian?2017-11-03T00:00:00+00:002017-11-03T00:00:00+00:00https://avt.im/blog/2017/11/03/meaning-of-bayesian<p>Bayesian statistics provides powerful theoretical tools, but it is also sometimes viewed as a philosophical framework.
This has led to rich academic debates over what statistical learning is and how it should be done.
Academic debates are healthy when their content is precise and independent issues are not conflated.
In this post, I argue that it is not always meaningful to consider the merits of Bayesian learning directly, because the fundamental questions surrounding it encompass not one issue, but several, that are best understood independently.
These can be viewed informally as follows.</p>
<ul>
<li>A model is <em>mathematically Bayesian</em> if it is defined using Bayes’ Rule.</li>
<li>A procedure is <em>computationally Bayesian</em> if it involves calculation of a full posterior distribution.</li>
</ul>
<p>The key idea of this post is that the two notions above are different, and that the common term <em>Bayesian</em> is often ambiguous.
This makes it unclear, for instance, that there are situations where it makes sense to be mathematically but not computationally Bayesian.
Let’s disentangle the terminology and explore the concepts in more detail.</p>
<h1 id="motivating-example-logistic-lasso">Motivating Example: Logistic Lasso</h1>
<p>To make my arguments concrete, I now introduce the Logistic Lasso model, beginning with notation.
Let $\m{X}_{N \times p}$ be the matrix to be used for predicting the binary vector $\v{y}_{N\times 1}$, let $\v\beta$ be the parameter vector, and let $\phi$ be the logistic function.</p>
<p>From the classical perspective, the Logistic Lasso model<sup id="fnref:lasso"><a href="#fn:lasso" class="footnote">1</a></sup> involves finding the estimator</p>
<p>[
\v{\hat\beta} = \underset{\v\beta}{\arg\min}\cbr{ \sum_{i=1}^N -y_i\ln\del{ \phi(\m{X}_i\v\beta) } - (1-y_i)\ln\del{1 - \phi(\m{X}_i\v\beta)} + \lambda\vert\vert\v\beta\vert\vert_1}
]</p>
<p>for $\lambda \in \R^+$, where $\vert\vert\cdot\vert\vert_1$ denotes the $L^1$ norm. On the other hand, the Bayesian Logistic Lasso model<sup id="fnref:blasso"><a href="#fn:blasso" class="footnote">2</a></sup> is specified using the likelihood and prior</p>
<p>[
\begin{aligned}
y_i \given \v\beta &\dist \f{Ber}\del{\phi(\m{X}_i\v\beta)}
&
\v\beta&\dist \f{Laplace} (\lambda^{-1})
\end{aligned}
]</p>
<p>for which the posterior distribution is found via Bayes’ Rule.</p>
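<p>The equivalence between the two formulations can be checked directly: the classical objective and the negative log-posterior (with the Laplace normalizing constant dropped) differ by at most an additive constant, so they share the same minimizer $\v{\hat\beta}$. Below is a NumPy sketch of my own with simulated data, not code from either reference.</p>

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
beta_true = np.array([1.0, -2.0, 0.0])
y = (rng.random(50) < 1.0 / (1.0 + np.exp(-X @ beta_true))).astype(float)
lam = 0.5

def penalized_loss(beta):
    # Classical objective: cross-entropy plus an L1 penalty.
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)) + lam * np.sum(np.abs(beta))

def neg_log_posterior(beta):
    # Bayesian objective: Bernoulli log-likelihood plus Laplace log-prior,
    # dropping the prior's normalizing constant.
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    log_lik = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    log_prior = -lam * np.sum(np.abs(beta))
    return -(log_lik + log_prior)

b1, b2 = rng.normal(size=3), rng.normal(size=3)
diff1 = penalized_loss(b1) - neg_log_posterior(b1)
diff2 = penalized_loss(b2) - neg_log_posterior(b2)
# Both differences equal the same constant (here zero), so the argmin agrees.
```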
<p>For the Logistic Lasso, both formulations are equivalent<sup id="fnref:bda"><a href="#fn:bda" class="footnote">3</a></sup> in the sense that they yield the same point estimates.
This connection is discussed in detail in my <a href="/blog/2017/07/05/bayesian-learning">previous post</a>.
Since the same model can be expressed both ways, it may be unclear to someone unfamiliar with Bayesian statistics what people might disagree about here.
Let’s proceed to that.</p>
<h1 id="statistical-learning-theory">Statistical Learning Theory</h1>
<p>The first philosophical question we consider is what statistical learning is.
This fundamental question has been considered by a variety of people throughout history.
One formulation – due to Vapnik<sup id="fnref:vv"><a href="#fn:vv" class="footnote">4</a></sup> – involves defining a <em>loss function</em> $L(y, \hat{y})$ for predicted data, and finding a function $f$ that minimizes the expected loss</p>
<p>[
\underset{f}{\arg\min} \int_\Omega L(y, f(x)) \dif F(x,y)
]</p>
<p>with respect to an unknown distribution $F(x,y)$.
This loss is then approximated in various ways because the data is finite – for instance, by restricting the domain of optimization.
In this approach, a <em>statistical learning problem</em> is defined to be a <em>functional optimization problem</em>, the problem’s <em>answer</em> is given by the function $f$, and the model $\mathscr{M}$ is given by the loss function together with whatever approximations are made. For Logistic Lasso, we assume that the functional form of $f$ is given by $\phi(\m{X}\v\beta)$, and that $L$ is $L^1$-regularized cross-entropy loss.</p>
<h1 id="bayesian-theory">Bayesian Theory</h1>
<p>The other formalism we consider involves defining statistical learning more abstractly.
We suppose that we are given a parameter $\theta$ and data set $x$.
We define a set $\Omega$ consisting of true-false statements $\theta = \theta’$ and $x = x’$ for all possible parameter values $\theta’$ and data values $x’$.
From the data, we know the statement $x=x’$ is true – but we do not know which $\theta’$ makes it so that $\theta = \theta’$ is true.
Thus, we cannot simply deduce $\theta$ via logical reasoning, and must extend the concept of logical reasoning to accommodate uncertainty.</p>
<p>To do so, we suppose that there is a relationship between $x$ and $\theta$ such that different values of $x$ may change the relative truth of different values of $\theta$.
Thus, we seek to define a function $\P(\theta = \theta’ \given x = x’)$ such that if $x=x’$ is true, the function tells us how close to true or to false $\theta=\theta’$ is.
It turns out under appropriate formal definitions<sup id="fnref:ct"><a href="#fn:ct" class="footnote">5</a></sup>, any reasonable such function is isomorphic to conditional probability.
Thus, to perform <em>logical reasoning under uncertainty</em>, we need to specify two probability distributions – the <em>likelihood</em> $f(x \given \theta)$ and <em>prior</em> $\pi(\theta)$, and calculate</p>
<p>[
f(\theta \given x) = \frac{f(x \given \theta) \pi(\theta)}{\int_\Theta f(x \given \theta) \pi(\theta) \dif \theta} \propto f(x \given \theta) \pi(\theta)
]</p>
<p>using Bayes’ Rule, which gives us the <em>posterior</em> distribution.
In this approach, <em>statistical learning</em> is taken to mean <em>reasoning under uncertainty</em>, the <em>answer</em> is given by the probability distribution $f(\theta \given x)$, and the model $\mathscr{M}$ is given by the likelihood together with the prior.
For Logistic Lasso, we assume that the likelihood is Bernoulli, and that the prior is Laplace.</p>
<h1 id="interpretation-of-models">Interpretation of Models</h1>
<p>At first glance, the theories may appear somewhat different, but the Logistic Lasso – and just about every model used in practice – can be formalized in both ways.
This leads to the first question.</p>
<blockquote>
<p>Should we interpret statistical models as probability distributions or as loss functions?</p>
</blockquote>
<p>The answer, of course, depends on the preferences of the person being asked – if we want, we may interpret a model whose loss function corresponds to a posterior distribution in a Bayesian way.
The probabilistic structure it possesses can be a useful theoretical tool for understanding its behavior.
This lets us see for instance that if priors are considered subjective, regularizers must be as well.
We conclude with an informal definition of this class of models.</p>
<p><strong>Definition.</strong>
A model $\mathscr{M}$ is <em>mathematically Bayesian</em> if it can be fully specified via a prior $\pi(\theta)$ and likelihood $f(x \given \theta)$ for which the posterior distribution $f(\theta \given x)$ is well-defined.</p>
<h1 id="assessment-of-inferential-uncertainty">Assessment of Inferential Uncertainty</h1>
<p>The second question does not concern the model in a mathematical sense.
Instead, we consider an abstract procedure $\mathscr{P}$ that utilizes a model $\mathscr{M}$ to do something useful.
Here, we encounter our second question.</p>
<blockquote>
<p>Should we assess uncertainty regarding what was learned about $\theta$ from the data by computing the posterior distribution $f(\theta \given x)$?</p>
</blockquote>
<p>Often, assessing inferential uncertainty is interesting, but not always.
One important note is that for any given data set, the uncertainty given by $f(\theta \given x)$ is completely determined by the specification of $\mathscr{M}$.
If $\mathscr{M}$ is not the correct model, its uncertainty estimates may be arbitrarily bad, even if its predictions are good.
Thus, we may prefer to not assess uncertainty at all, rather than delude ourselves into thinking we know it.</p>
<p>Similarly, for some problems there may exist a simple and easy way to determine whether $\theta$ is good or not.
For example, in image classification, we might simply ask a human if the labels produced by $\theta$ are reasonable.
This might be far more effective than using the probability distribution $f(\theta \given x)$ to compare the chosen value for $\theta$ to other possible values, especially when calculating $f(\theta \given x)$ is challenging.</p>
<p>This leads to a choice undertaken by the practitioner: should $f(\theta \given x)$ be calculated, or is picking one value $\hat\theta$ good enough?
In some cases, such as when a decision-theoretic analysis is performed, $f(\theta \given x)$ is indispensable; at other times, it is unnecessary.
We conclude with an informal definition encompassing this choice.</p>
<p><strong>Definition.</strong>
A statistical procedure $\mathscr{P}$ that makes use of a model $\mathscr{M}$ is <em>computationally Bayesian</em> if it involves calculation of the full posterior distribution $f(\theta \given x)$ in at least one of its steps.</p>
<h1 id="disentangling-the-disagreements">Disentangling the Disagreements</h1>
<p>It is unfortunate that the term <em>Bayesian</em> has come to mean <em>mathematically Bayesian</em> and <em>computationally Bayesian</em> simultaneously.
In my opinion, these distinctions should be considered separately, because they concern two very different questions.
In the mathematical case, we are asking whether or not to interpret our model using its probabilistic representation.
In the computational case, we are asking whether calculating the entire distribution is necessary, or whether one value suffices.</p>
<p>A model’s Bayesian representation can be useful as a theoretical tool, whether we calculate the posterior or not.
If one value does suffice, we should not discard the probabilistic interpretation entirely, because it might help us understand the model’s structure.
For the Logistic Lasso, the Bayesian approach makes it obvious where cross-entropy loss comes from: it maps uniquely to the Bernoulli likelihood.</p>
<p>It is unfortunate that the two cases are often conflated.
It is common to hear practitioners say that they are not interested in whether models are Bayesian or frequentist – instead, it matters whether or not they work.
More often than not, models can be interpreted both ways, so the distinction’s premise is itself an illusion.
Every mathematical perspective tells us something about the objects we are studying.
Even if we do not perform Bayesian calculations, it can often still be useful to think of models in a Bayesian way.</p>
<h1 id="references">References</h1>
<div class="footnotes">
<ol>
<li id="fn:lasso">
<p>R. Tibshirani. Regression Shrinkage and Selection via the Lasso. JRSSB 58(1), 1996. <a href="#fnref:lasso" class="reversefootnote">↩</a></p>
</li>
<li id="fn:blasso">
<p>T. Park and G. Casella. The Bayesian Lasso. JASA 103(402), 2008. <a href="#fnref:blasso" class="reversefootnote">↩</a></p>
</li>
<li id="fn:bda">
<p>A. Gelman, J. B. Carlin, H. S. Stern, D. B. Dunson, A. Vehtari, and D. B. Rubin. Bayesian Data Analysis. 2013. <a href="#fnref:bda" class="reversefootnote">↩</a></p>
</li>
<li id="fn:vv">
<p>V. Vapnik. The Nature of Statistical Learning Theory. 1995. <a href="#fnref:vv" class="reversefootnote">↩</a></p>
</li>
<li id="fn:ct">
<p>A. Terenin and D. Draper. Cox’s Theorem and the Jaynesian Interpretation of Probability. <a href="https://arxiv.org/abs/1507.06597">arXiv:1507.06597</a>, 2015. <a href="#fnref:ct" class="reversefootnote">↩</a></p>
</li>
</ol>
</div>Alexander TereninBayesian statistics provides powerful theoretical tools, but it is also sometimes viewed as a philosophical framework. This has lead to rich academic debates over what statistical learning is and how it should be done. Academic debates are healthy when their content is precise and independent issues are not conflated. In this post, I argue that it is not always meaningful to consider the merits of Bayesian learning directly, because the fundamental questions surrounding it encompass not one issue, but several, that are best understood independently. These can be viewed informally as follows.Deep Learning with function spaces2017-08-16T00:00:00+00:002017-08-16T00:00:00+00:00https://avt.im/blog/2017/08/16/deep-learning-function-spaces<p>Deep learning is perhaps the single most important breakthrough in statistics, machine learning, and artificial intelligence that has been popularized in recent years.
It has allowed us to classify images - for decades a challenging problem - often with better-than-human accuracy.
It has enabled computers to master the game of Go, which for decades was the classical example of a board game that was exceedingly difficult for computers to play.
But what exactly is deep learning?</p>
<p>Many popular explanations involve analogies with the human brain, where deep learning models are interpreted as complex networks of neurons interacting with one another.
These perspectives are useful, but they’re not math: just because deep learning models mimic the brain doesn’t mean they provably work.
This post will highlight some ideas that may be helpful in moving toward an understanding of why deep learning works, presented at an intuitive level.
The focus will be on high-level concepts, omitting algebraic details such as the precise form of tensor products.</p>
<h1 id="the-function-space-perspective">The Function Space Perspective</h1>
<p>The key idea of this post is that to understand why deep learning works, we should not work with the network directly.
Instead, we will define a model for learning on a space of functions, truncate that model, and obtain deep learning.</p>
<p>Consider the model</p>
<p>[
\hat{\v{y}} = f(\m{X})
]</p>
<p>where the goal is to learn the function $f$ that maps data $\m{X}$ to the predicted value $\hat{\v{y}}$.
But wait, how do we go about learning a function?
Let’s first consider a single-variable function $f(x): \R \goesto \R$ and recall that a broad class of functions may be written as an infinite sum with respect to a location-scale basis, i.e. we have for an appropriately defined function $\sigma$ that</p>
<p>[
f(x) = \sum_{k=1}^\infty a_k \, \sigma(b_k x + c_k) + d_k
.
]</p>
<p>What’s happening here?
We’re taking the function $\sigma$, stretching it horizontally by $b_k$, shifting it left-right by $c_k$, scaling it vertically by $a_k$, and shifting it up-down by $d_k$.
As long as $\sigma$ is sufficiently rich to form a basis on $\R$, if we add up infinitely many of them, we can approximate $f$ to any precision we want.
To make learning possible, let’s truncate the sum, so that we sum $K$ elements instead of $\infty$, and get</p>
<p>[
f(x) = \sum_{k=1}^K a_k \, \sigma(b_k x + c_k) + d_k
.
]</p>
<p>We now have a finite set of parameters, so given a data set $(\m{X},\v{y})$, we can define a probability distribution for $\v{y}$ under the predicted values $\hat{\v{y}}$, and <a href="/blog/2017/07/05/bayesian-learning">learn the coefficients using Bayes’ Rule</a>.</p>
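<p>As a concrete numerical sketch of my own (not from the post): with the stretches $b_k$ and shifts $c_k$ fixed at random values, the truncated expansion is linear in the remaining coefficients, so for this illustration we can fit them by simple least squares rather than Bayes’ Rule.</p>

```python
import numpy as np

# Sketch: approximate f(x) = sin(x) with the truncated expansion
#   f(x) ~ sum_{k=1}^K a_k * sigma(b_k x + c_k) + d,
# using ReLU as sigma. The stretches b_k and shifts c_k are drawn at
# random and held fixed, so the fit is linear in (a_1..a_K, d).
rng = np.random.default_rng(0)

def sigma(z):
    return np.maximum(z, 0.0)  # ReLU basis function

K = 50
b = rng.normal(size=K)   # horizontal stretches
c = rng.normal(size=K)   # horizontal shifts

x = np.linspace(-3, 3, 200)
f = np.sin(x)            # target function to approximate

# Design matrix: one column per basis element, plus a constant column.
Phi = np.column_stack([sigma(np.outer(x, b) + c), np.ones_like(x)])
coef, *_ = np.linalg.lstsq(Phi, f, rcond=None)
approx = Phi @ coef

print(np.max(np.abs(approx - f)))  # small: K = 50 elements already suffice
```

<p>Even a modest number of basis elements reproduces the target closely, which is the sense in which truncation makes learning tractable.</p>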
<p>But wait: the expressions we get by following this procedure, extended to matrices and vectors, are exactly those given by a <a href="/blog/2017/07/05/bayesian-learning">1-layer fully connected network</a>.
This is what a fully connected network does, and this is why it works: we are expanding an arbitrary function with respect to a basis, and learning the coefficients of the expansion using Bayes’ Rule<sup id="fnref:be"><a href="#fn:be" class="footnote">1</a></sup>.
That’s it!</p>
<h1 id="going-deep">Going Deep</h1>
<p>With the above perspective in mind, let’s consider deep learning.
We’re going to apply another trick: rather than learning $f$ directly, let’s instead define functions $f^{(1)},f^{(2)},f^{(3)}$ such that</p>
<p>[
\hat{\v{y}} = f(\m{X}) = f^{(1)}\cbr{f^{(2)}\sbr{f^{(3)}\del{\m{X}}}}
]</p>
<p>It’s not obvious why we should do this, but let’s go with it for now.
Then, let $\sigma$ be the ReLU function, and expand $f^{(3)}$ with respect to that basis, just as we did above, but with matrix-vector notation, to get</p>
<p>[
\hat{\v{y}} = f^{(1)}\cbr{f^{(2)}\sbr{ \v{a}^{(3)} \sigma\del{\m{X}\v{b}^{(3)} + \v{c}^{(3)}} + \v{d}^{(3)} }}
.
]</p>
<p>Now, let’s expand $f^{(2)}$, yielding</p>
<p>[
\hat{\v{y}} = f^{(1)}\cbr{\v{a}^{(2)}\sigma\sbr{\del{\v{a}^{(3)} \sigma\del{\m{X}\v{b}^{(3)} + \v{c}^{(3)}} + \v{d}^{(3)}}\v{b}^{(2)} + \v{c}^{(2)}} + \v{d}^{(2)}}
.
]</p>
<p>Notice that we can set $\v{b}^{(2)} = \v{1}$ and $\v{c}^{(2)} = \v{0}$ with no loss of generality to slightly simplify our expression.
Upon expanding $f^{(1)}$, we are left with</p>
<p>[
\hat{\v{y}} = \v{a}^{(1)}\sigma\cbr{\v{a}^{(2)}\sigma\sbr{\v{a}^{(3)} \sigma\del{\m{X}\v{b}^{(3)} + \v{c}^{(3)}} + \v{d}^{(3)}} + \v{d}^{(2)}} + \v{d}^{(1)}
]</p>
<p>which is exactly the expression for a 3-layer fully connected network.</p>
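<p>To make the correspondence concrete, here is a small sketch (with made-up shapes) evaluating the nested expression layer by layer, exactly as a 3-layer fully connected network would.</p>

```python
import numpy as np

# Sketch: evaluate the nested expansion
#   y_hat = a1 s{ a2 s[ a3 s(X b3 + c3) + d3 ] + d2 } + d1
# as a standard forward pass. All shapes below are arbitrary choices.
rng = np.random.default_rng(1)

def relu(z):
    return np.maximum(z, 0.0)

X = rng.normal(size=(4, 3))  # 4 data points, 3 features

b3, c3 = rng.normal(size=(3, 5)), rng.normal(size=5)
a3, d3 = rng.normal(size=(5, 5)), rng.normal(size=5)
a2, d2 = rng.normal(size=(5, 5)), rng.normal(size=5)
a1, d1 = rng.normal(size=(5, 2)), rng.normal(size=2)

h3 = relu(X @ b3 + c3) @ a3 + d3   # expansion of f^(3)
h2 = relu(h3) @ a2 + d2            # expansion of f^(2), with b = I, c = 0
y_hat = relu(h2) @ a1 + d1         # expansion of f^(1)

# The one-line nested expression gives the identical result.
nested = relu(relu(relu(X @ b3 + c3) @ a3 + d3) @ a2 + d2) @ a1 + d1
print(np.allclose(y_hat, nested))  # True
```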
<p>So, what is deep learning?
Deep learning is a model that learns a function $f$ by splitting it up into a sequence of functions $f^{(1)},f^{(2)},f^{(3)},..$, performing a ReLU basis expansion on each one, truncating it, and learning the remaining coefficients using Bayes’ Rule.</p>
<h1 id="example-why-residual-networks-work">Example: why Residual Networks work</h1>
<p>This perspective can be used to understand a recently popularized technique in deep learning.
For illustrative purposes, let’s consider a 3-layer residual network.
Suppose $\m{X}$ is of the same dimensionality as the network.
A residual network is a model of the form</p>
<p>[
\begin{aligned}
\hat{\v{y}} = f(\m{X}) = &f^{(1)}\cbr{f^{(2)}\sbr{f^{(3)}\del{\m{X}} + \m{X}} + \sbr{f^{(3)}\del{\m{X}} + \m{X}}}
\nonumber
\\
&+ \cbr{f^{(2)}\sbr{f^{(3)}\del{\m{X}} + \m{X}} + \sbr{f^{(3)}\del{\m{X}} + \m{X}}}
.
\end{aligned}
]</p>
<p>So, why do residual networks perform better?
Consider the above from a Bayesian learning point of view: we start with a prior distribution - determined uniquely by the regularization term - and end with a posterior distribution that describes what we learned.
Suppose that nothing is learned in the 3rd layer.
Then the posterior distribution must be the same as the prior.
With $L^2$ regularization, this means that the posterior mode of the coefficients of the basis expansion of $f^{(3)}$ will be zero.
Hence,</p>
<p>[
f^{(3)}(x) = \sum_{k=1}^K 0 \, \sigma(0 \times x + 0) + 0 = 0
]</p>
<p>and the model collapses to</p>
<p>[
\hat{\v{y}} = f(\m{X}) = f^{(1)}\cbr{f^{(2)}\sbr{\m{X}} + \m{X}} + \cbr{f^{(2)}\sbr{\m{X}} + \m{X}}
.
]</p>
<p>Contrast this with a non-residual network, which collapses to</p>
<p>[
\hat{\v{y}} = f(\m{X}) = f^{(1)}\cbr{f^{(2)}\sbr{\v{0}}} = \text{constant}
.
]</p>
<p>In reality, of course, the network learns <em>something</em> in deeper layers, so behavior isn’t quite this bad.
But, if we suppose that deeper layers learn less and less given the same data, the model must eventually stop working if we keep adding layers.
Thus, standard networks don’t work if we make them too deep.
Residual networks fix the problem.</p>
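<p>A tiny sketch of my own illustrating the collapse argument: with every coefficient of a layer’s expansion set to zero, a residual block passes its input through unchanged, whereas a plain block wipes the input out entirely.</p>

```python
import numpy as np

# Sketch: compare a plain stack of layers with a residual stack when the
# posterior mode of every expansion coefficient is zero ("nothing learned").
def relu(z):
    return np.maximum(z, 0.0)

def layer(x, a, b, c, d):
    return relu(x @ b + c) @ a + d  # one expanded function f^(i)

rng = np.random.default_rng(2)
X = rng.normal(size=(4, 3))

# All-zero coefficients, as forced by the L2-regularized posterior mode.
a, b = np.zeros((3, 3)), np.zeros((3, 3))
c, d = np.zeros(3), np.zeros(3)

plain = X
resid = X
for _ in range(3):
    plain = layer(plain, a, b, c, d)          # collapses to a constant
    resid = layer(resid, a, b, c, d) + resid  # identity skip connection

print(np.allclose(plain, 0.0), np.allclose(resid, X))  # True True
```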
<h1 id="what-have-we-gained-from-this-perspective">What have we gained from this perspective?</h1>
<p>Thinking about function spaces can make deep learning substantially more understandable.
Instead of thinking about networks, which are complicated, we can think about functions, which are in my view simpler.</p>
<p>The ideas above can for instance be used to understand what convolutional networks do: they make assumptions on how each $f^{(i)}$ behaves over space.
Similarly, we can see why ReLU<sup id="fnref:relu"><a href="#fn:relu" class="footnote">2</a></sup> units might perform slightly better than sigmoid units: because they are unbounded, fewer of them may be required to approximate a given function well.
<p>Part of what makes functions simpler is that it is easy to visualize what scaling and shifting does to them.
For example, it is easy to see that switching from ReLU to Leaky ReLU<sup id="fnref:lrelu"><a href="#fn:lrelu" class="footnote">3</a></sup> units is the same as increasing the bias term in the basis expansion.
It’s certainly possible that this may sometimes be helpful, but it would be a big surprise to me if doing this resulted in substantially better performance across the board.</p>
<p>One major question that the function space perspective raises is why learning $f^{(1)}, f^{(2)}, f^{(3)},..$ separately is so much easier than learning $f$ directly.
I don’t know of a good answer to this question.</p>
<p>A key benefit of thinking with function spaces is that it gives us a principled way to derive the expressions needed to define and train networks.
The residual networks presented here differ slightly from the original work in which they were presented<sup id="fnref:resnet"><a href="#fn:resnet" class="footnote">4</a></sup> – more recent work has proposed precisely the formulas derived here<sup id="fnref:resnetidentity"><a href="#fn:resnetidentity" class="footnote">5</a></sup> which were found to improve performance.</p>
<p>I’m not sure why deep learning is not typically presented in this way – the function space perspective is largely omitted from the classical text <em>Deep Learning</em><sup id="fnref:dlintro"><a href="#fn:dlintro" class="footnote">6</a></sup>.
Overall, I hope that this short introduction has been useful for understanding deep learning and making the structure present in the models more transparent.</p>
<h1 id="references">References</h1>
<div class="footnotes">
<ol>
<li id="fn:be">
<p>See Chapter 20 of Bayesian Data Analysis<sup id="fnref:bda"><a href="#fn:bda" class="footnote">7</a></sup>. <a href="#fnref:be" class="reversefootnote">↩</a></p>
</li>
<li id="fn:relu">
<p>R Hahnloser, R. Sarpeshkar, M. A. Mahowald, R. J. Douglas, H. S. Seung (2000). Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit. Nature 405(6789), 2000. <a href="#fnref:relu" class="reversefootnote">↩</a></p>
</li>
<li id="fn:lrelu">
<p>A. L. Maas, A. Y. Hannun, A. Y. Ng. Rectifier Nonlinearities Improve Neural Network Acoustic Models. ICML 30(1), 2013. <a href="#fnref:lrelu" class="reversefootnote">↩</a></p>
</li>
<li id="fn:resnet">
<p>K. He, X. Zhang, S. Ren, and J. Sun. Deep Residual Learning for Image Recognition. CVPR 28(1), 2015. <a href="#fnref:resnet" class="reversefootnote">↩</a></p>
</li>
<li id="fn:resnetidentity">
<p>K. He, X. Zhang, S. Ren, and J. Sun. Identity Mappings in Deep Residual Networks. ECCV 14(1), 2016. <a href="#fnref:resnetidentity" class="reversefootnote">↩</a></p>
</li>
<li id="fn:dlintro">
<p>See Chapter 6 of Deep Learning<sup id="fnref:dl"><a href="#fn:dl" class="footnote">8</a></sup>. <a href="#fnref:dlintro" class="reversefootnote">↩</a></p>
</li>
<li id="fn:bda">
<p>A. Gelman, J. B. Carlin, H. S. Stern, D. B. Dunson, A. Vehtari, and D. B. Rubin. Bayesian Data Analysis. 2013. <a href="#fnref:bda" class="reversefootnote">↩</a></p>
</li>
<li id="fn:dl">
<p>I. Goodfellow, Y. Bengio, A. Courville. <a href="http://www.deeplearningbook.org">Deep Learning</a>. 2016. <a href="#fnref:dl" class="reversefootnote">↩</a></p>
</li>
</ol>
</div>Alexander TereninDeep learning is perhaps the single most important breakthrough in statistics, machine learning, and artificial intelligence that has been popularized in recent years. It has allowed us to classify images - for decades a challenging problem - with nowadays usually better-than-human accuracy. It has solved Computer Go, which for decades was the classical example of a board game that was exceedingly difficult for computers to play. But what exactly is deep learning?Bayesian Learning - by example2017-07-05T00:00:00+00:002017-07-05T00:00:00+00:00https://avt.im/blog/2017/07/05/bayesian-learning<p>Welcome to my blog!
For my first post, I decided that it would be useful to write a short introduction to Bayesian learning, and its relationship with the more traditional optimization-theoretic perspective often used in artificial intelligence and machine learning, presented in a minimally technical fashion.
We begin by introducing an example.</p>
<h1 id="example-binary-classification-using-a-fully-connected-network">Example: binary classification using a fully connected network</h1>
<p>First, let’s introduce notation. For simplicity suppose there are no biases, and define the following.</p>
<ul>
<li>$\v{y}_{N\times 1}$: a binary vector where each element is a target data point. $N$ is the number of data points.</li>
<li>$\m{X}_{N\times p}$: a matrix where each row is an input data vector. $p$ is the dimensionality of each input.</li>
<li>$\v\beta^{(x)}_{p \times m}$: the matrix that maps the input to the hidden layer. $m$ is the number of hidden units.</li>
<li>$\v\beta^{(h)}_{m \times 1}$: the vector that maps the hidden layer to the output.</li>
<li>$\sigma$: the network’s activation function, for instance a ReLU function.</li>
<li>$\phi$: the softmax function.</li>
</ul>
<div style="text-align: center;">
<svg width="250px" viewBox="0 0 250 265" xmlns="http://www.w3.org/2000/svg">
<g>
<line style="stroke: rgb(0, 0, 0);" x1="50" y1="200" x2="200" y2="125" />
<line style="stroke: rgb(0, 0, 0);" x1="50" y1="50" x2="200" y2="125" />
<line style="stroke: rgb(0, 0, 0);" x1="50" y1="125" x2="125" y2="162.5" />
<line style="stroke: rgb(0, 0, 0);" x1="50" y1="125" x2="125" y2="87.5" />
<line style="stroke: rgb(0, 0, 0);" x1="50" y1="200" x2="125" y2="87.5" />
<line style="stroke: rgb(0, 0, 0);" x1="50" y1="50" x2="125" y2="162.5" />
</g>
<g>
<ellipse style="stroke: rgb(0, 0, 0); fill: rgb(167, 167, 167);" transform="matrix(1, 0.000003, -0.000003, 1, -73.458435, -2.691527)" cx="123.459" cy="52.691" rx="25" ry="25" />
<ellipse style="stroke: rgb(0, 0, 0); fill: rgb(167, 167, 167);" transform="matrix(1, 0.000003, -0.000003, 1, -73.45842, 147.308301)" cx="123.459" cy="52.691" rx="25" ry="25" />
<ellipse style="stroke: rgb(0, 0, 0); fill: rgb(167, 167, 167);" transform="matrix(1, 0.000003, -0.000003, 1, -73.458427, 72.308369)" cx="123.459" cy="52.691" rx="25" ry="25" />
<ellipse style="fill: rgb(216, 216, 216); stroke: rgb(0, 0, 0);" transform="matrix(1, 0.000003, -0.000003, 1, 1.541482, 34.808468)" cx="123.459" cy="52.691" rx="25" ry="25" />
<ellipse style="fill: rgb(216, 216, 216); stroke: rgb(0, 0, 0);" transform="matrix(1, 0.000003, -0.000003, 1, 1.541482, 109.808331)" cx="123.459" cy="52.691" rx="25" ry="25" />
<ellipse style="stroke: rgb(0, 0, 0); fill: rgb(167, 167, 167);" transform="matrix(1, 0.000003, -0.000003, 1, 76.541375, 72.308369)" cx="123.459" cy="52.691" rx="25" ry="25" />
</g>
<g>
<foreignObject x="35" y="235" width="30" height="30">
<div xmlns="http://www.w3.org/1999/xhtml">
$\m{X}$
</div>
</foreignObject>
<foreignObject x="80" y="235" width="30" height="30">
<div xmlns="http://www.w3.org/1999/xhtml">
$\v\beta^{(x)}$
</div>
</foreignObject>
<foreignObject x="150" y="235" width="30" height="30">
<div xmlns="http://www.w3.org/1999/xhtml">
$\v\beta^{(h)}$
</div>
</foreignObject>
<foreignObject x="190" y="235" width="30" height="30">
<div xmlns="http://www.w3.org/1999/xhtml">
$\v{y}$
</div>
</foreignObject>
</g>
</svg>
</div>
<h1 id="the-standard-approach">The standard approach</h1>
<p>We begin by defining an optimization problem.
Let $\v\beta$ be a $k$-dimensional vector consisting of all values of $\v\beta^{(x)}$ and $\v\beta^{(h)}$ stacked together.
Our network’s prediction $\v{\hat{y}} \in [0,1]^N$ is given by</p>
<p>[
\hat{\v{y}} = \phi\del{\sigma\del{\m{X} \v\beta^{(x)}} \v\beta^{(h)}}
]</p>
<p>Now, we proceed to learn the weights.
Let $\v{\hat\beta}$ be the learned values for $\v\beta$, let $\vert\vert\cdot\vert\vert$ be the $L^2$ norm, fix some $\lambda \in \R^+$, and set</p>
<p>[
\v{\hat\beta} = \underset{\v\beta}{\arg\min}\cbr{ \sum_{i=1}^N -y_i\ln(\hat{y}_i) - (1-y_i)\ln(1 - \hat{y}_i) + \lambda\vert\vert\v\beta\vert\vert^2}
.
]</p>
<p>The expression being minimized is called <em>cross entropy loss</em><sup id="fnref:ce"><a href="#fn:ce" class="footnote">1</a></sup>.
The loss is differentiable, so we can minimize it by using gradient descent or any other method we wish.
Learning takes place by minimizing the loss, and the values we learn – here, $\v{\hat\beta}$ – are a point in $\R^k$.</p>
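<p>As a runnable sketch of this optimization step, consider the simplest special case with no hidden layer, so that each $\hat{y}_i$ is the logistic function of a linear score; the penalized cross-entropy loss can then be minimized with a few lines of gradient descent. The data and settings below are made up for illustration.</p>

```python
import numpy as np

# Sketch: minimize cross-entropy loss plus an L2 penalty by gradient
# descent, for a logistic model y_hat = 1 / (1 + exp(-X beta)).
rng = np.random.default_rng(0)
N, p, lam, lr = 200, 3, 0.1, 0.01

X = rng.normal(size=(N, p))
beta_true = np.array([1.5, -2.0, 0.5])
y = (rng.random(N) < 1 / (1 + np.exp(-X @ beta_true))).astype(float)

def loss(beta):
    q = 1 / (1 + np.exp(-X @ beta))
    return -(y * np.log(q) + (1 - y) * np.log(1 - q)).sum() + lam * beta @ beta

beta = np.zeros(p)
initial = loss(beta)
for _ in range(1000):
    q = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (q - y) + 2 * lam * beta  # gradient of the loss above
    beta -= lr * grad

print(loss(beta) < initial)  # True: the loss has decreased
```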
<p>Why cross-entropy rather than some other mathematical expression?
In most treatments of classification, the reasons given are purely intuitive: for instance, it is often said to stabilize the optimization algorithm.
More rigorous treatments<sup id="fnref:ce:1"><a href="#fn:ce" class="footnote">1</a></sup> might introduce ideas from information theory.
We will provide another explanation.</p>
<h1 id="the-bayesian-approach">The Bayesian approach</h1>
<p>Let us now define the exact same network, but this time from a Bayesian perspective. We begin by making probabilistic assumptions on our data.
Since we have that $\v{y} \in \cbr{0,1}^N$, and since we assume that the order in which $\v{y}$ is presented cannot affect learning – this is formally called exchangeability – there is one and only one distribution that $\v{y}$ can follow: the Bernoulli distribution.
The parameter of that distribution is the same expression $\v{\hat{y}}$ as before.
Hence, let</p>
<p>[
\v{y} \given \v\beta \dist\f{Ber}\sbr{\phi\del{\sigma\del{\m{X} \v\beta^{(x)}} \v\beta^{(h)}}}
.
]</p>
<p>This is called the <em>likelihood</em>: it describes the assumptions we are making about the data $\v{y}$ given the parameters $\v\beta$ – here, that the data is binary and exchangeable.
Now, define the <em>prior</em> for $\v\beta$ as</p>
<p>[
\v\beta \dist\f{N}_k\del{\v{0}, \frac{\lambda^{-1}}{2}\m{I}}
.
]</p>
<p>This describes our assumptions about $\v\beta$ external to the data – here, we have assumed that all components of $\v\beta$ are <em>a priori</em> independent mean-zero Gaussians.
We can combine the prior and likelihood using Bayes’ Rule</p>
<p>[
f(\v\beta \given \v{y}) = \frac{f(\v{y} \given \v\beta) \pi(\v\beta)}{\int_{\R^k} f(\v{y} \given \v\beta) \pi(\v\beta) \dif \beta} \propto f(\v{y} \given \v\beta) \pi(\v\beta)
]</p>
<p>to obtain the <em>posterior</em> $\v\beta \given \v{y}$.
This is a probability distribution: it describes what we learned about $\v\beta$ from the data.
Learning takes place through the use of Bayes’ Rule, and the values we learn – here, $\v\beta \given \v{y}$ – are a probability distribution on $\R^k$.</p>
<h1 id="connecting-the-two-approaches">Connecting the two approaches</h1>
<p>Is there any relationship between $\v{\hat\beta}$ and $\v\beta \given \v{y}$?
It turns out, yes – let’s show it. First, let’s write down the posterior</p>
<p>[
f(\v\beta \given \v{y}) \propto f(\v{y} \given \v\beta) \pi(\v\beta) \propto \sbr{\prod_{i=1}^N \hat{y}_i^{y_i} (1 - \hat{y}_i)^{1 - y_i}} \exp\del{-\lambda\v\beta^T\v\beta}
.
]</p>
<p>Now, let’s take logs and simplify:</p>
<p>[
\ln f(\v\beta \given \v{y}) = \sum_{i=1}^N y_i \ln(\hat{y}_i) + (1-y_i)\ln(1 - \hat{y}_i) - \lambda\vert\vert\v\beta\vert\vert^2 + \f{const}
.
]</p>
<p>Having computed that, note that taking logs and adding constants preserves optima, and consider the posterior mode:</p>
<p>[
\begin{aligned}
\underset{\v\beta}{\arg\max}\cbr{f(\v\beta \given \v{y})} &= \underset{\v\beta}{\arg\max}\cbr{\ln f(\v\beta \given \v{y})} =
\nonumber
\\
&=\underset{\v\beta}{\arg\max}\cbr{ \sum_{i=1}^N y_i \ln(\hat{y}_i) + (1-y_i)\ln(1 - \hat{y}_i) - \lambda\vert\vert\v\beta\vert\vert^2 } =
\nonumber
\\
&=\underset{\v\beta}{\arg\min}\cbr{ \sum_{i=1}^N -y_i \ln(\hat{y}_i) - (1-y_i)\ln(1 - \hat{y}_i) + \lambda\vert\vert\v\beta\vert\vert^2 } =
\nonumber
\\
&= \v{\hat{\beta}}
.
\end{aligned}
]</p>
<p>What have we shown? Minimizing cross-entropy loss is equivalent to maximizing the posterior distribution.
The loss function maps to the likelihood, and the regularization term maps to the prior.</p>
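<p>This equivalence can be checked numerically. The sketch below (again using the no-hidden-layer logistic special case, with made-up data) verifies that the unnormalized log posterior and the negative penalized loss differ by the same additive constant at any two points, so they share the same optimizer.</p>

```python
import numpy as np

# Sketch: log posterior (up to the normalizing integral) vs. negative
# penalized cross-entropy loss; their difference is constant in beta.
rng = np.random.default_rng(0)
N, p, lam = 50, 2, 0.3

X = rng.normal(size=(N, p))
y = (rng.random(N) < 0.5).astype(float)

def log_lik(beta):
    q = 1 / (1 + np.exp(-X @ beta))
    return (y * np.log(q) + (1 - y) * np.log(1 - q)).sum()

def neg_loss(beta):
    return log_lik(beta) - lam * beta @ beta

def log_post(beta):
    # log prior of N(0, (lambda^{-1}/2) I), normalizing constant included
    log_prior = -lam * beta @ beta + 0.5 * p * np.log(lam / np.pi)
    return log_lik(beta) + log_prior

b1, b2 = rng.normal(size=p), rng.normal(size=p)
diff1 = log_post(b1) - neg_loss(b1)
diff2 = log_post(b2) - neg_loss(b2)
print(np.isclose(diff1, diff2))  # True: same constant everywhere
```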
<h1 id="what-it-all-means">What it all means</h1>
<p>Why is this useful?
It gives us a probabilistic interpretation for learning, which helps us to construct and understand our models.
This is especially true in more complicated settings: for instance, we might ask, where does $\v{\hat{y}} = \sigma\del{\m{X} \v\beta^{(x)}} \v\beta^{(h)}$ come from? In fact, we can use ideas from <em>Bayesian Nonparametrics</em> to derive $\v{\hat{y}}$ by considering a likelihood on a function space under a ReLU basis expansion<sup id="fnref:be"><a href="#fn:be" class="footnote">2</a></sup>.
The network’s loss and architecture can both be explained in a Bayesian way.</p>
<p>There is much more: we could consider drawing samples from the posterior distribution, to quantify uncertainty about how much we learned about $\v\beta$ from the data.
<em>Markov Chain Monte Carlo</em><sup id="fnref:mcmc"><a href="#fn:mcmc" class="footnote">3</a></sup> methods are the most common class of methods for doing so.
We can use ideas from hierarchical Bayesian models to define better regularizers compared to $L^2$ – the <em>Horseshoe</em><sup id="fnref:hs"><a href="#fn:hs" class="footnote">4</a></sup> prior is a popular example.
For brevity, I’ll omit further examples – the book <em>Bayesian Data Analysis</em><sup id="fnref:bda"><a href="#fn:bda" class="footnote">5</a></sup> is a good introduction, though it largely focuses on methods of interest mainly to statisticians.</p>
<p>How general is this perspective?
Very: an abstract result called Cox’s Theorem states, in modern terms, that <em>every true-false logic under uncertainty is isomorphic to conditional probability</em>.
This means that <em>all learning formalizable in the above sense is Bayesian</em>.
So, if you <em>can’t</em> represent a given method in a Bayesian way, I would be rather worried.
For a formal statement and details, see my preprint<sup id="fnref:ct"><a href="#fn:ct" class="footnote">6</a></sup> on the subject.</p>
<p>At the end of the day, having many different mathematical perspectives enables us to better understand how learning works, because things that are not obvious from one perspective might be easy to see from another.
Whereas the optimization-theoretic approach we began with did not give a clear reason for why we should use cross-entropy loss, from a Bayesian point of view it follows directly out of the binary nature of the data.
Sometimes, the Bayesian approach has little to say about a particular problem, other times it has a lot.
It is useful to know how to use it when the need arises, and I hope this short example has given at least one reason to read about Bayesian statistics in more detail.</p>
<h1 id="references">References</h1>
<div class="footnotes">
<ol>
<li id="fn:ce">
<p>See Chapter 5 of Deep Learning<sup id="fnref:dl"><a href="#fn:dl" class="footnote">7</a></sup>. <a href="#fnref:ce" class="reversefootnote">↩</a> <a href="#fnref:ce:1" class="reversefootnote">↩<sup>2</sup></a></p>
</li>
<li id="fn:be">
<p>See Chapter 20 of Bayesian Data Analysis<sup id="fnref:bda:1"><a href="#fn:bda" class="footnote">5</a></sup>. <a href="#fnref:be" class="reversefootnote">↩</a></p>
</li>
<li id="fn:mcmc">
<p>See Chapter 11 of Bayesian Data Analysis<sup id="fnref:bda:2"><a href="#fn:bda" class="footnote">5</a></sup>, but note that MCMC methods are far more general than presented there. An article<sup id="fnref:pdmcmc"><a href="#fn:pdmcmc" class="footnote">8</a></sup> by P. Diaconis gives a rather different overview. <a href="#fnref:mcmc" class="reversefootnote">↩</a></p>
</li>
<li id="fn:hs">
<p>C. M. Carvalho, N. G. Polson, and J. G. Scott. The Horseshoe estimator for sparse signals. Biometrika, 97(2):1–26, 2010. <a href="#fnref:hs" class="reversefootnote">↩</a></p>
</li>
<li id="fn:bda">
<p>A. Gelman, J. B. Carlin, H. S. Stern, D. B. Dunson, A. Vehtari, and D. B. Rubin. Bayesian Data Analysis. 2013. <a href="#fnref:bda" class="reversefootnote">↩</a> <a href="#fnref:bda:1" class="reversefootnote">↩<sup>2</sup></a> <a href="#fnref:bda:2" class="reversefootnote">↩<sup>3</sup></a></p>
</li>
<li id="fn:ct">
<p>A. Terenin and D. Draper. Cox’s Theorem and the Jaynesian Interpretation of Probability. <a href="https://arxiv.org/abs/1507.06597">arXiv:1507.06597</a>, 2015. <a href="#fnref:ct" class="reversefootnote">↩</a></p>
</li>
<li id="fn:dl">
<p>I. Goodfellow, Y. Bengio, A. Courville. <a href="http://www.deeplearningbook.org">Deep Learning</a>. 2016. <a href="#fnref:dl" class="reversefootnote">↩</a></p>
</li>
<li id="fn:pdmcmc">
<p>P. Diaconis. The Markov Chain Monte Carlo revolution. Bulletin of the American Mathematical Society, 46(2):179–205, 2009. <a href="#fnref:pdmcmc" class="reversefootnote">↩</a></p>
</li>
</ol>
</div>Alexander TereninWelcome to my blog! For my first post, I decided that it would be useful to write a short introduction to Bayesian learning, and its relationship with the more traditional optimization-theoretic perspective often used in artificial intelligence and machine learning, presented in a minimally technical fashion. We begin by introducing an example.