{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# SVI Part I: An Introduction to Stochastic Variational Inference in Pyro\n", "\n", "Pyro has been designed with particular attention paid to supporting stochastic variational inference as a general purpose inference algorithm. Let's see how we go about doing variational inference in Pyro.\n", "\n", "## Setup\n", "\n", "We're going to assume we've already defined our model in Pyro (for more details on how this is done see [Introduction to Pyro](intro_long.ipynb)).\n", "As a quick reminder, the model is given as a stochastic function `model(*args, **kwargs)`, which, in the general case takes arguments. The different pieces of `model()` are encoded via the mapping:\n", "\n", "1. observations $\\Longleftrightarrow$ `pyro.sample` with the `obs` argument\n", "2. latent random variables $\\Longleftrightarrow$ `pyro.sample`\n", "3. parameters $\\Longleftrightarrow$ `pyro.param`\n", "\n", "Now let's establish some notation. The model has observations ${\\bf x}$ and latent random variables ${\\bf z}$ as well as parameters $\\theta$. It has a joint probability density of the form \n", "\n", "$$p_{\\theta}({\\bf x}, {\\bf z}) = p_{\\theta}({\\bf x}|{\\bf z}) p_{\\theta}({\\bf z})$$\n", "\n", "We assume that the various probability distributions $p_i$ that make up $p_{\\theta}({\\bf x}, {\\bf z})$ have the following properties:\n", "\n", "1. we can sample from each $p_i$\n", "2. we can compute the pointwise log pdf $\\log p_i$ \n", "3. $p_i$ is differentiable w.r.t. the parameters $\\theta$\n", "\n", "\n", "## Model Learning\n", "\n", "In this context our criterion for learning a good model will be maximizing the log evidence, i.e. we want to find the value of $\\theta$ given by\n", "\n", "$$\\theta_{\\rm{max}} = \\underset{\\theta}{\\operatorname{argmax}} \\log p_{\\theta}({\\bf x})$$\n", "\n", "where the log evidence $\\log p_{\\theta}({\\bf x})$ is given by\n", "\n", "$$\\log p_{\\theta}({\\bf x}) = \\log \\int\\! d{\\bf z}\\; p_{\\theta}({\\bf x}, {\\bf z})$$\n", "\n", "In the general case this is a doubly difficult problem. This is because (even for a fixed $\\theta$) the integral over the latent random variables $\\bf z$ is often intractable. Furthermore, even if we know how to calculate the log evidence for all values of $\\theta$, maximizing the log evidence as a function of $\\theta$ will in general be a difficult non-convex optimization problem. \n", "\n", "In addition to finding $\\theta_{\\rm{max}}$, we would like to calculate the posterior over the latent variables $\\bf z$:\n", "\n", "$$ p_{\\theta_{\\rm{max}}}({\\bf z} | {\\bf x}) = \\frac{p_{\\theta_{\\rm{max}}}({\\bf x} , {\\bf z})}{\n", "\\int \\! d{\\bf z}\\; p_{\\theta_{\\rm{max}}}({\\bf x} , {\\bf z}) } $$\n", "\n", "Note that the denominator of this expression is the (usually intractable) evidence. Variational inference offers a scheme for finding $\\theta_{\\rm{max}}$ and computing an approximation to the posterior $p_{\\theta_{\\rm{max}}}({\\bf z} | {\\bf x})$. Let's see how that works.\n", "\n", "## Guide\n", "\n", "The basic idea is that we introduce a parameterized distribution $q_{\\phi}({\\bf z})$, where $\\phi$ are known as the variational parameters. This distribution is called the variational distribution in much of the literature, and in the context of Pyro it's called the **guide** (one syllable instead of nine!). The guide will serve as an approximation to the posterior. We can think of $\\phi$ as parameterizing a space or family of probability distributions. Our goal will be to find the (not necessarily unique) probability distribution in that space that is the best possible approximation to the posterior distribution.\n", "\n", "Just like the model, the guide is encoded as a stochastic function `guide()` that contains `pyro.sample` and `pyro.param` statements. It does _not_ contain observed data, since the guide needs to be a properly normalized distribution. Note that Pyro enforces that `model()` and `guide()` have the same call signature, i.e. both callables should take the same arguments. \n", "\n", "Since the guide is an approximation to the posterior $p_{\\theta_{\\rm{max}}}({\\bf z} | {\\bf x})$, the guide needs to provide a valid joint probability density over all the latent random variables in the model. Recall that when random variables are specified in Pyro with the primitive statement `pyro.sample()` the first argument denotes the name of the random variable. These names will be used to align the random variables in the model and guide. To be very explicit, if the model contains a random variable `z_1`\n", "\n", "```python\n", "def model():\n", " pyro.sample(\"z_1\", ...)\n", "```\n", "\n", "then the guide needs to have a matching `sample` statement\n", "\n", "```python\n", "def guide():\n", " pyro.sample(\"z_1\", ...)\n", "```\n", "\n", "The distributions used in the two cases can be different, but the names must line-up 1-to-1. \n", "\n", "Once we've specified a guide (we give some explicit examples below), we're ready to proceed to inference.\n", "Learning will be setup as an optimization problem where each iteration of training takes a step in $\\theta-\\phi$ space that moves the guide closer to the exact posterior.\n", "To do this we need to define an appropriate objective function. \n", "\n", "## ELBO\n", "\n", "A simple derivation (for example see reference [1]) yields what we're after: the evidence lower bound (ELBO). The ELBO, which is a function of both $\\theta$ and $\\phi$, is defined as an expectation w.r.t. to samples from the guide:\n", "\n", "$${\\rm ELBO} \\equiv \\mathbb{E}_{q_{\\phi}({\\bf z})} \\left [ \n", "\\log p_{\\theta}({\\bf x}, {\\bf z}) - \\log q_{\\phi}({\\bf z})\n", "\\right]$$\n", "\n", "By assumption we can compute the log probabilities inside the expectation. And since the guide is assumed to be a parametric distribution we can sample from, we can compute Monte Carlo estimates of this quantity. Crucially, the ELBO is a lower bound to the log evidence, i.e. for all choices of $\\theta$ and $\\phi$ we have that \n", "\n", "$$\\log p_{\\theta}({\\bf x}) \\ge {\\rm ELBO} $$\n", "\n", "So if we take (stochastic) gradient steps to maximize the ELBO, we will also be pushing the log evidence higher (in expectation). Furthermore, it can be shown that the gap between the ELBO and the log evidence is given by the [Kullback-Leibler divergence](https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence) (KL divergence) between the guide and the posterior:\n", "\n", "$$ \\log p_{\\theta}({\\bf x}) - {\\rm ELBO} = \n", "\\rm{KL}\\!\\left( q_{\\phi}({\\bf z}) \\lVert p_{\\theta}({\\bf z} | {\\bf x}) \\right) $$\n", "\n", "This KL divergence is a particular (non-negative) measure of 'closeness' between two distributions. So, for a fixed $\\theta$, as we take steps in $\\phi$ space that increase the ELBO, we decrease the KL divergence between the guide and the posterior, i.e. we move the guide towards the posterior. In the general case we take gradient steps in both $\\theta$ and $\\phi$ space simultaneously so that the guide and model play chase, with the guide tracking a moving posterior $\\log p_{\\theta}({\\bf z} | {\\bf x})$. Perhaps somewhat surprisingly, despite the moving target, this optimization problem can be solved (to a suitable level of approximation) for many different problems.\n", "\n", "So at high level variational inference is easy: all we need to do is define a guide and compute gradients of the ELBO. Actually, computing gradients for general model and guide pairs leads to some complications (see the tutorial [SVI Part III](svi_part_iii.ipynb) for a discussion). For the purposes of this tutorial, let's consider that a solved problem and look at the support that Pyro provides for doing variational inference. \n", "\n", "## `SVI` Class\n", "\n", "In Pyro the machinery for doing variational inference is encapsulated in the [`SVI`](https://docs.pyro.ai/en/stable/inference_algos.html?highlight=svi) class.\n", "\n", "The user needs to provide three things: the model, the guide, and an optimizer. We've discussed the model and guide above and we'll discuss the optimizer in some detail below, so let's assume we have all three ingredients at hand. To construct an instance of `SVI` that will do optimization via the ELBO objective, the user writes\n", "\n", "```python\n", "import pyro\n", "from pyro.infer import SVI, Trace_ELBO\n", "svi = SVI(model, guide, optimizer, loss=Trace_ELBO())\n", "```\n", "\n", "The `SVI` object provides two methods, `step()` and `evaluate_loss()`, that encapsulate the logic for variational learning and evaluation:\n", "\n", "1. The method `step()` takes a single gradient step and returns an estimate of the loss (i.e. minus the ELBO). If provided, the arguments to `step()` are piped to `model()` and `guide()`. \n", "\n", "2. The method `evaluate_loss()` returns an estimate of the loss _without_ taking a gradient step. Just like for `step()`, if provided, arguments to `evaluate_loss()` are piped to `model()` and `guide()`.\n", "\n", "For the case where the loss is the ELBO, both methods also accept an optional argument `num_particles`, which denotes the number of samples used to compute the loss (in the case of `evaluate_loss`) and the loss and gradient (in the case of `step`). \n", "\n", "## Optimizers\n", "\n", "In Pyro, the model and guide are allowed to be arbitrary stochastic functions provided that\n", "\n", "1. `guide` doesn't contain `pyro.sample` statements with the `obs` argument\n", "2. `model` and `guide` have the same call signature\n", "\n", "This presents some challenges because it means that different executions of `model()` and `guide()` may have quite different behavior, with e.g. certain latent random variables and parameters only appearing some of the time. Indeed parameters may be created dynamically during the course of inference. In other words the space we're doing optimization over, which is parameterized by $\\theta$ and $\\phi$, can grow and change dynamically.\n", "\n", "In order to support this behavior, Pyro needs to dynamically generate an optimizer for each parameter the first time it appears during learning. Luckily, PyTorch has a lightweight optimization library (see [torch.optim](http://pytorch.org/docs/master/optim.html)) that can easily be repurposed for the dynamic case. \n", "\n", "All of this is controlled by the [`optim.PyroOptim`](https://docs.pyro.ai/en/stable/optimization.html?highlight=optim#pyro.optim.optim.PyroOptim) class, which is basically a thin wrapper around PyTorch optimizers. `PyroOptim` takes two arguments: a constructor for PyTorch optimizers `optim_constructor` and a specification of the optimizer arguments `optim_args`. At high level, in the course of optimization, whenever a new parameter is seen `optim_constructor` is used to instantiate a new optimizer of the given type with arguments given by `optim_args`. \n", "\n", "Most users will probably not interact with `PyroOptim` directly and will instead interact with the aliases defined in `optim/__init__.py`. Let's see how that goes. There are two ways to specify the optimizer arguments. In the simpler case, `optim_args` is a _fixed_ dictionary that specifies the arguments used to instantiate PyTorch optimizers for _all_ the parameters:\n", "\n", "```python\n", "from pyro.optim import Adam\n", "\n", "adam_params = {\"lr\": 0.005, \"betas\": (0.95, 0.999)}\n", "optimizer = Adam(adam_params)\n", "```\n", "\n", "The second way to specify the arguments allows for a finer level of control. Here the user must specify a callable that will be invoked by Pyro upon creation of an optimizer for a newly seen parameter. This callable must take the Pyro name of the parameter as an argument.\n", "This gives the user the ability to, for example, customize learning rates for different parameters. For an example where this sort of level of control is useful, see the [discussion of baselines](svi_part_iii.ipynb). Here's a simple example to illustrate the API:\n", "\n", "```python\n", "from pyro.optim import Adam\n", "\n", "def per_param_callable(param_name):\n", " if param_name == 'my_special_parameter':\n", " return {\"lr\": 0.010}\n", " else:\n", " return {\"lr\": 0.001}\n", "\n", "optimizer = Adam(per_param_callable)\n", "```\n", "\n", "This simply tells Pyro to use a learning rate of `0.010` for the Pyro parameter `my_special_parameter` and a learning rate of `0.001` for all other parameters.\n", "\n", "## A simple example\n", "\n", "We finish with a simple example. You've been given a two-sided coin. You want to determine whether the coin is fair or not, i.e. whether it falls heads or tails with the same frequency. You have a prior belief about the likely fairness of the coin based on two observations:\n", "\n", "- it's a standard quarter issued by the US Mint\n", "- it's a bit banged up from years of use\n", "\n", "So while you expect the coin to have been quite fair when it was first produced, you allow for its fairness to have since deviated from a perfect 1:1 ratio. So you wouldn't be surprised if it turned out that the coin preferred heads over tails at a ratio of 11:10. By contrast you would be very surprised if it turned out that the coin preferred heads over tails at a ratio of 5:1—it's not _that_ banged up.\n", "\n", "To turn this into a probabilistic model we encode heads and tails as `1`s and `0`s. We encode the fairness of the coin as a real number $f$, where $f$ satisfies $f \\in [0.0, 1.0]$ and $f=0.50$ corresponds to a perfectly fair coin. Our prior belief about $f$ will be encoded by a Beta distribution, specifically $\\rm{Beta}(10,10)$, which is a symmetric probability distribution on the interval $[0.0, 1.0]$ that is peaked at $f=0.5$. " ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/html" }, "source": [ "
Figure 1: The distribution Beta that encodes our prior belief about the fairness of the coin.
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To learn something about the fairness of the coin that is more precise than our somewhat vague prior, we need to do an experiment and collect some data. Let's say we flip the coin 10 times and record the result of each flip. In practice we'd probably want to do more than 10 trials, but hey this is a tutorial.\n", "\n", "Assuming we've collected the data in a list `data`, the corresponding model is given by\n", "\n", "```python\n", "import pyro.distributions as dist\n", "\n", "def model(data):\n", " # define the hyperparameters that control the Beta prior\n", " alpha0 = torch.tensor(10.0)\n", " beta0 = torch.tensor(10.0)\n", " # sample f from the Beta prior\n", " f = pyro.sample(\"latent_fairness\", dist.Beta(alpha0, beta0))\n", " # loop over the observed data\n", " for i in range(len(data)):\n", " # observe datapoint i using the Bernoulli \n", " # likelihood Bernoulli(f)\n", " pyro.sample(\"obs_{}\".format(i), dist.Bernoulli(f), obs=data[i])\n", "```\n", "\n", "Here we have a single latent random variable (`'latent_fairness'`), which is distributed according to $\\rm{Beta}(10, 10)$. Conditioned on that random variable, we observe each of the datapoints using a Bernoulli likelihood. Note that each observation is assigned a unique name in Pyro.\n", "\n", "Our next task is to define a corresponding guide, i.e. an appropriate variational distribution for the latent random variable $f$. The only real requirement here is that $q(f)$ should be a probability distribution over the range $[0.0, 1.0]$, since $f$ doesn't make sense outside of that range. A simple choice is to use another Beta distribution parameterized by two trainable parameters $\\alpha_q$ and $\\beta_q$. Actually, in this particular case this is the 'right' choice, since conjugacy of the Bernoulli and Beta distributions means that the exact posterior is a Beta distribution. In Pyro we write:\n", "\n", "```python\n", "def guide(data):\n", " # register the two variational parameters with Pyro.\n", " alpha_q = pyro.param(\"alpha_q\", torch.tensor(15.0), \n", " constraint=constraints.positive)\n", " beta_q = pyro.param(\"beta_q\", torch.tensor(15.0), \n", " constraint=constraints.positive)\n", " # sample latent_fairness from the distribution Beta(alpha_q, beta_q)\n", " pyro.sample(\"latent_fairness\", dist.Beta(alpha_q, beta_q))\n", "```\n", "\n", "There are a few things to note here:\n", "\n", "- We've taken care that the names of the random variables line up exactly between the model and guide.\n", "- `model(data)` and `guide(data)` take the same arguments.\n", "- The variational parameters are `torch.tensor`s. The `requires_grad` flag is automatically set to `True` by `pyro.param`.\n", "- We use `constraint=constraints.positive` to ensure that `alpha_q` and `beta_q` remain non-negative during optimization. Under the hood an exponential transform ensures positivity.\n", "\n", "Now we can proceed to do stochastic variational inference. \n", "\n", "```python\n", "# set up the optimizer\n", "adam_params = {\"lr\": 0.0005, \"betas\": (0.90, 0.999)}\n", "optimizer = Adam(adam_params)\n", "\n", "# setup the inference algorithm\n", "svi = SVI(model, guide, optimizer, loss=Trace_ELBO())\n", "\n", "n_steps = 5000\n", "# do gradient steps\n", "for step in range(n_steps):\n", " svi.step(data)\n", "``` \n", "\n", "Note that in the `step()` method we pass in the data, which then get passed to the model and guide. \n", "\n", "The only thing we're missing at this point is some data. So let's create some data and assemble all the code snippets above into a complete script:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import math\n", "import os\n", "import torch\n", "import torch.distributions.constraints as constraints\n", "import pyro\n", "from pyro.optim import Adam\n", "from pyro.infer import SVI, Trace_ELBO\n", "import pyro.distributions as dist\n", "\n", "# this is for running the notebook in our testing framework\n", "smoke_test = ('CI' in os.environ)\n", "n_steps = 2 if smoke_test else 2000\n", "\n", "assert pyro.__version__.startswith('1.9.0')\n", "\n", "# clear the param store in case we're in a REPL\n", "pyro.clear_param_store()\n", "\n", "# create some data with 6 observed heads and 4 observed tails\n", "data = []\n", "for _ in range(6):\n", " data.append(torch.tensor(1.0))\n", "for _ in range(4):\n", " data.append(torch.tensor(0.0))\n", "\n", "def model(data):\n", " # define the hyperparameters that control the Beta prior\n", " alpha0 = torch.tensor(10.0)\n", " beta0 = torch.tensor(10.0)\n", " # sample f from the Beta prior\n", " f = pyro.sample(\"latent_fairness\", dist.Beta(alpha0, beta0))\n", " # loop over the observed data\n", " for i in range(len(data)):\n", " # observe datapoint i using the ernoulli likelihood\n", " pyro.sample(\"obs_{}\".format(i), dist.Bernoulli(f), obs=data[i])\n", "\n", "def guide(data):\n", " # register the two variational parameters with Pyro\n", " # - both parameters will have initial value 15.0. \n", " # - because we invoke constraints.positive, the optimizer \n", " # will take gradients on the unconstrained parameters\n", " # (which are related to the constrained parameters by a log)\n", " alpha_q = pyro.param(\"alpha_q\", torch.tensor(15.0), \n", " constraint=constraints.positive)\n", " beta_q = pyro.param(\"beta_q\", torch.tensor(15.0), \n", " constraint=constraints.positive)\n", " # sample latent_fairness from the distribution Beta(alpha_q, beta_q)\n", " pyro.sample(\"latent_fairness\", dist.Beta(alpha_q, beta_q))\n", "\n", "# setup the optimizer\n", "adam_params = {\"lr\": 0.0005, \"betas\": (0.90, 0.999)}\n", "optimizer = Adam(adam_params)\n", "\n", "# setup the inference algorithm\n", "svi = SVI(model, guide, optimizer, loss=Trace_ELBO())\n", "\n", "# do gradient steps\n", "for step in range(n_steps):\n", " svi.step(data)\n", " if step % 100 == 0:\n", " print('.', end='')\n", "\n", "# grab the learned variational parameters\n", "alpha_q = pyro.param(\"alpha_q\").item()\n", "beta_q = pyro.param(\"beta_q\").item()\n", "\n", "# here we use some facts about the Beta distribution\n", "# compute the inferred mean of the coin's fairness\n", "inferred_mean = alpha_q / (alpha_q + beta_q)\n", "# compute inferred standard deviation\n", "factor = beta_q / (alpha_q * (1.0 + alpha_q + beta_q))\n", "inferred_std = inferred_mean * math.sqrt(factor)\n", "\n", "print(\"\\nBased on the data and our prior belief, the fairness \" +\n", " \"of the coin is %.3f +- %.3f\" % (inferred_mean, inferred_std))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Sample output:\n", "\n", "```\n", "based on the data and our prior belief, the fairness of the coin is 0.532 +- 0.090\n", "```\n", "\n", "This estimate is to be compared to the exact posterior mean, which in this case is given by $16/30 = 0.5\\bar{3}$.\n", "Note that the final estimate of the fairness of the coin is in between the the fairness preferred by the prior (namely $0.50$) and the fairness suggested by the raw empirical frequencies ($6/10 = 0.60$). " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## References\n", "\n", "[1] `Automated Variational Inference in Probabilistic Programming`,\n", "
    \n", "David Wingate, Theo Weber\n", "\n", "[2] `Black Box Variational Inference`,
    \n", "Rajesh Ranganath, Sean Gerrish, David M. Blei\n", "\n", "[3] `Auto-Encoding Variational Bayes`,
    \n", "Diederik P Kingma, Max Welling\n", "\n", "[4] `Variational Inference: A Review for Statisticians`,
    \n", "David M. Blei, Alp Kucukelbir, Jon D. McAuliffe" ] } ], "metadata": { "celltoolbar": "Raw Cell Format", "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.7" } }, "nbformat": 4, "nbformat_minor": 2 }