# SVI Part II: Conditional Independence, Subsampling, and Amortization

## The Goal: Scaling SVI to Large Datasets

For a model with \(N\) observations, running the `model` and `guide` and constructing the ELBO involves evaluating log pdf’s whose complexity scales badly with \(N\). This is a problem if we want to scale to large datasets. Luckily, the ELBO objective naturally supports subsampling provided that our model/guide have some conditional independence structure that we can take advantage of. For example, in the case that the observations are conditionally independent given the latents, the log likelihood term in the ELBO can be approximated with

\[\sum_{i=1}^N \log p({\bf x}_i | {\bf z}) \approx \frac{N}{M} \sum_{i \in \mathcal{I}_M} \log p({\bf x}_i | {\bf z})\]

where \(\mathcal{I}_M\) is a mini-batch of indices of size \(M\) with \(M<N\) (for a discussion please see references [1,2]). Great, problem solved! But how do we do this in Pyro?
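To see why this mini-batch approximation is reasonable, here is a quick plain-Python sanity check (independent of Pyro) that the scaled mini-batch sum is an unbiased estimator of the full sum: averaging the \(\tfrac{N}{M}\)-scaled estimate over many random mini-batches recovers the sum over all \(N\) terms. The numbers below are arbitrary stand-ins for per-datapoint log likelihoods.

```python
import random

random.seed(0)

# stand-ins for the per-datapoint log likelihood terms log p(x_i | z)
N = 100
log_liks = [random.gauss(-1.0, 0.5) for _ in range(N)]
full_sum = sum(log_liks)

# mini-batch estimate: scale the sum over M random indices by N / M
M = 10

def minibatch_estimate():
    idx = random.sample(range(N), M)
    return (N / M) * sum(log_liks[i] for i in idx)

# averaging many independent estimates approaches the full-data sum
avg = sum(minibatch_estimate() for _ in range(5000)) / 5000
print(abs(avg - full_sum) / abs(full_sum))  # small relative error
```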

## Marking Conditional Independence in Pyro

If a user wants to do this sort of thing in Pyro, they first need to make sure that the model and guide are written in such a way that Pyro can leverage the relevant conditional independencies. Let’s see how this is done. Pyro provides two language primitives for marking conditional independencies: `irange` and `iarange`. Let’s start with the simpler of the two.

### `irange`

Let’s return to the example we used in the previous tutorial. For convenience let’s replicate the main logic of `model` here:

```
def model(data):
    # sample f from the beta prior
    f = pyro.sample("latent_fairness", dist.beta, alpha0, beta0)
    # loop over the observed data
    for i in range(len(data)):
        # observe datapoint i using the bernoulli likelihood
        pyro.observe("obs_{}".format(i), dist.bernoulli, data[i], f)
```

For this model the observations are conditionally independent given the latent random variable `latent_fairness`. To explicitly mark this in Pyro we basically just need to replace the Python builtin `range` with the Pyro construct `irange`:

```
def model(data):
    # sample f from the beta prior
    f = pyro.sample("latent_fairness", dist.beta, alpha0, beta0)
    # loop over the observed data [WE ONLY CHANGE THE NEXT LINE]
    for i in pyro.irange("data_loop", len(data)):
        # observe datapoint i using the bernoulli likelihood
        pyro.observe("obs_{}".format(i), dist.bernoulli, data[i], f)
```

We see that `pyro.irange` is very similar to `range` with one main difference: each invocation of `irange` requires the user to provide a unique name. The second argument is an integer, just like for `range`.

So far so good. Pyro can now leverage the conditional independence of the observations given the latent random variable. But how does this actually work? Basically `pyro.irange` is implemented using a context manager. At every execution of the body of the `for` loop we enter a new (conditional) independence context, which is then exited at the end of the `for` loop body. Let’s be very explicit about this:

- because each `pyro.observe` statement occurs within a different execution of the body of the `for` loop, Pyro marks each observation as independent
- this independence is properly a *conditional* independence *given* `latent_fairness`, because `latent_fairness` is sampled *outside* of the context of `data_loop`

Before moving on, let’s mention some gotchas to be avoided when using `irange`. Consider the following variant of the above code snippet:

```
# WARNING do not do this!
my_reified_list = list(pyro.irange("data_loop", len(data)))
for i in my_reified_list:
    pyro.observe("obs_{}".format(i), dist.bernoulli, data[i], f)
```

This will *not* achieve the desired behavior, since `list()` will enter and exit the `data_loop` context completely before a single `pyro.observe` statement is called. Similarly, the user needs to take care not to leak mutable computations across the boundary of the context manager, as this may lead to subtle bugs. For example, `pyro.irange` is not appropriate for temporal models where each iteration of a loop depends on the previous iteration; in this case a `range` should be used instead.
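To make the `list()` failure mode concrete, here is a minimal pure-Python stand-in (not Pyro’s actual implementation) for an `irange`-like generator that wraps each yielded index in a context manager. With normal iteration the loop body runs inside each context; once reified with `list()`, every context has already been entered and exited before the loop body ever runs:

```python
import contextlib

events = []

@contextlib.contextmanager
def independence_context(i):
    # record entry/exit so we can see the ordering
    events.append(("enter", i))
    yield
    events.append(("exit", i))

def irange_like(n):
    # simplified stand-in for pyro.irange: each index is yielded
    # from inside its own context manager
    for i in range(n):
        with independence_context(i):
            yield i

# correct usage: each loop body executes *inside* its context
events.clear()
for i in irange_like(2):
    events.append(("body", i))
assert events == [("enter", 0), ("body", 0), ("exit", 0),
                  ("enter", 1), ("body", 1), ("exit", 1)]

# reified usage: list() exhausts the generator first, so all contexts
# are entered and exited before any loop body runs
events.clear()
for i in list(irange_like(2)):
    events.append(("body", i))
assert events == [("enter", 0), ("exit", 0), ("enter", 1), ("exit", 1),
                  ("body", 0), ("body", 1)]
```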

### `iarange`

Conceptually `iarange` is the same as `irange` except that it is a vectorized operation (as `torch.arange` is to `range`). As such it potentially enables large speed-ups compared to the explicit `for` loop that appears with `irange`. Let’s see how this looks for our running example. First we need `data` to be in the form of a tensor:

```
data = Variable(torch.zeros(10, 1))
data[0:6, 0].data = torch.ones(6) # 6 heads and 4 tails
```

Then we have:

```
with iarange('observe_data'):
    pyro.observe('obs', dist.bernoulli, data, f)
```

Let’s compare this to the analogous `irange` construction point-by-point:

- just like `irange`, `iarange` requires the user to specify a unique name
- note that this code snippet only introduces a single (observed) random variable (namely `obs`), since the entire tensor is considered at once
- since there is no need for an iterator in this case, there is no need to specify the length of the tensor(s) involved in the `iarange` context

Note that the gotchas mentioned in the case of `irange` also apply to `iarange`.

## Subsampling

We now know how to mark conditional independence in Pyro. This is useful in and of itself (see the dependency tracking section in SVI Part III), but we’d also like to do subsampling so that we can do SVI on large datasets. Depending on the structure of the model and guide, Pyro supports several ways of doing subsampling. Let’s go through these one by one.

### Automatic subsampling with `irange` and `iarange`

Let’s look at the simplest case first, in which we get subsampling for free with one or two additional arguments to `irange` and `iarange`:

```
for i in pyro.irange("data_loop", len(data), subsample_size=5):
    pyro.observe("obs_{}".format(i), dist.bernoulli, data[i], f)
```

That’s all there is to it: we just use the argument `subsample_size`. Whenever we run `model()` we now only evaluate the log likelihood for 5 randomly chosen datapoints in `data`; in addition, the log likelihood will be automatically scaled by the appropriate factor of \(\tfrac{10}{5} = 2\). What about `iarange`? The incantation is entirely analogous:

```
with iarange('observe_data', size=10, subsample_size=5) as ind:
    pyro.observe('obs', dist.bernoulli, data.index_select(0, ind), f)
```

Importantly, `iarange` now returns a tensor of indices `ind`, which in this case will be of length 5. Note that in addition to the argument `subsample_size` we also pass the argument `size`, so that `iarange` is aware of the full size of the tensor `data` and can compute the correct scaling factor. Just like for `irange`, the user is responsible for selecting the correct datapoints using the indices provided by `iarange`.

Finally, note that the user must pass the argument `use_cuda=True` to `irange` or `iarange` if `data` is on the GPU.

### Custom subsampling strategies with `irange` and `iarange`

Every time the above `model()` is run, `irange` and `iarange` will sample new subsample indices. Since this subsampling is stateless, it can lead to some problems: for a sufficiently large dataset, even after a large number of iterations there’s a nonnegligible probability that some of the datapoints will never have been selected. To avoid this the user can take control of subsampling by making use of the `subsample` argument to `irange` and `iarange`. See the docs for details.
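One common strategy, sketched here in plain Python, is epoch-based shuffling: precompute a random permutation of all indices, carve it into mini-batches, and feed one batch per model invocation, which guarantees every datapoint is visited once per epoch. (The function below is a hypothetical helper for illustration; each `batch` it yields is the sort of index list one could pass via the `subsample` argument.)

```python
import random

def epoch_minibatches(n_data, batch_size, rng):
    """Yield mini-batches of indices that together cover the whole dataset."""
    indices = list(range(n_data))
    rng.shuffle(indices)
    for start in range(0, n_data, batch_size):
        yield indices[start:start + batch_size]

rng = random.Random(0)
seen = set()
for batch in epoch_minibatches(10, 3, rng):
    # each batch would be handed to irange/iarange as the custom subsample
    seen.update(batch)

# after one epoch every datapoint has been selected exactly once
assert seen == set(range(10))
```

Note that the last batch of an epoch may be smaller than `batch_size` (here 10 is not divisible by 3), which any scaling logic should account for.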

### Subsampling when there are only local random variables

We have in mind a model with a joint probability density given by

\[p({\bf x}, {\bf z}) = \prod_{i=1}^N p({\bf x}_i | {\bf z}_i) p({\bf z}_i)\]

For a model with this dependency structure the scale factor introduced by subsampling scales all the terms in the ELBO by the same amount. Consequently there’s no need to invoke any special Pyro constructs. This is the case, for example, for a vanilla VAE. This explains why for the VAE it’s permissible for the user to take complete control over subsampling and pass mini-batches directly to the model and guide without using `irange` or `iarange`. To see how this looks in detail, see the VAE tutorial.
### Subsampling when there are both global and local random variables

In the coin flip examples above `irange` and `iarange` appeared in the model but not in the guide, since the only thing being subsampled was the observations. Let’s look at a more complicated example where subsampling appears in both the model and guide. To make things simple let’s keep the discussion somewhat abstract and avoid writing a complete model and guide.

Consider the model specified by the following joint distribution:

\[p({\bf x}, {\bf z}, \beta) = p(\beta) \prod_{i=1}^N p({\bf x}_i | {\bf z}_i) p({\bf z}_i | \beta)\]

There are \(N\) observations \(\{ {\bf x}_i \}\) and \(N\) local latent random variables \(\{ {\bf z}_i \}\). There is also a global latent random variable \(\beta\). Our guide will be factorized as

\[q({\bf z}, \beta) = q(\beta) \prod_{i=1}^N q({\bf z}_i | \beta, \lambda_i)\]

Here we’ve been explicit about introducing \(N\) local variational parameters \(\{\lambda_i \}\), while the other variational parameters are left implicit. Both the model and guide have conditional independencies. In particular, on the model side, given the \(\{ {\bf z}_i \}\) the observations \(\{ {\bf x}_i \}\) are independent. In addition, given \(\beta\) the latent random variables \(\{ {\bf z}_i \}\) are independent. On the guide side, given the variational parameters \(\{\lambda_i \}\) and \(\beta\) the latent random variables \(\{ {\bf z}_i \}\) are independent. To mark these conditional independencies in Pyro and do subsampling we need to make use of either `irange` or `iarange` in *both* the model *and* the guide. Let’s sketch out the basic logic using `irange` (a more complete piece of code would include `pyro.param` statements, etc.). First, the model:

```
def model(data):
    beta = pyro.sample("beta", ...) # sample the global RV
    for i in pyro.irange("locals", len(data)):
        # note that each local RV needs a unique name
        z_i = pyro.sample("z_{}".format(i), ...)
        # compute the parameter used to define the observation
        # likelihood using the local random variable
        theta_i = compute_something(z_i)
        pyro.observe("obs_{}".format(i), dist.mydist, data[i], theta_i)
```

Note that in contrast to our running coin flip example, here we have `pyro.sample` statements both inside and outside of the `irange` context. Next the guide:

```
def guide(data):
    beta = pyro.sample("beta", ...) # sample the global RV
    for i in pyro.irange("locals", len(data), subsample_size=5):
        # sample the local RVs using the local variational parameters
        pyro.sample("z_{}".format(i), ..., lambda_i)
```

Note that, crucially, the indices will only be subsampled once, in the guide; the Pyro backend makes sure that the same set of indices is used during execution of the model. For this reason `subsample_size` only needs to be specified in the guide.
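The key point of the scaling in this global/local setting can be summarized in plain Python (a sketch of the bookkeeping, not Pyro’s implementation): terms attached to sites inside the subsampled context get rescaled by \(N/M\), while the global \(\beta\) term is left unscaled, and the result is still an unbiased estimate of the full-data quantity.

```python
import random

random.seed(0)

N, M = 100, 5

# stand-in log-probability contributions (arbitrary numbers, not a real model)
log_p_beta = -2.0                                      # global term: never rescaled
local_terms = [random.gauss(-1.0, 0.3) for _ in range(N)]  # per-datapoint terms

def subsampled_estimate(indices):
    # only the local terms inside the subsampled context get the N/M factor
    return log_p_beta + (N / len(indices)) * sum(local_terms[i] for i in indices)

full = log_p_beta + sum(local_terms)

# averaging over many random mini-batches recovers the full-data value
avg = sum(subsampled_estimate(random.sample(range(N), M))
          for _ in range(20000)) / 20000
print(abs(avg - full))  # small
```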

## Amortization

Let’s again consider a model with global and local latent random variables and local variational parameters:

\[p({\bf x}, {\bf z}, \beta) = p(\beta) \prod_{i=1}^N p({\bf x}_i | {\bf z}_i) p({\bf z}_i | \beta) \qquad\qquad q({\bf z}, \beta) = q(\beta) \prod_{i=1}^N q({\bf z}_i | \beta, \lambda_i)\]

For small to medium-sized \(N\), using local variational parameters like this can be a good approach. If \(N\) is large, however, the fact that the space we’re doing optimization over grows with \(N\) can be a real problem. One way to avoid this nasty growth with the size of the dataset is *amortization*.

This works as follows. Instead of introducing local variational parameters, we’re going to learn a single parametric function \(f(\cdot)\) and work with a variational distribution that has the form

\[q(\beta) \prod_{i=1}^N q({\bf z}_i | f({\bf x}_i))\]

The function \(f(\cdot)\)—which basically maps a given observation to a set of variational parameters tailored to that datapoint—will need to be sufficiently rich to capture the posterior accurately, but now we can handle large datasets without having to introduce an obscene number of variational parameters. This approach has other benefits too: for example, during learning \(f(\cdot)\) effectively allows us to share statistical power among different datapoints. Note that this is precisely the approach used in the VAE.
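The parameter-count argument can be made concrete with a deliberately tiny plain-Python sketch. Here \(f\) is a hypothetical affine map from an observation to a \((\text{loc}, \log\text{scale})\) pair; in practice \(f\) would be a neural network, but the point stands: the local-parameter approach needs \(2N\) numbers, while the amortized approach needs a fixed handful regardless of \(N\).

```python
# with local variational parameters, each datapoint owns a
# (loc, log_scale) pair, so the parameter count grows linearly in N
def n_local_params(n_data):
    return 2 * n_data

# amortized alternative: one shared map f(x) -> (loc, log_scale),
# here a hypothetical affine map with just 4 parameters total
w_loc, b_loc = 0.1, 0.0
w_scale, b_scale = 0.1, -1.0

def f(x):
    # maps observation x to the variational parameters of q(z | f(x))
    return (w_loc * x + b_loc, w_scale * x + b_scale)

# local parameters scale with the dataset; amortized parameters do not
assert n_local_params(10_000) == 20_000
assert f(0.0) == (0.0, -1.0)
```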

## References

[1] `Stochastic Variational Inference`, Matthew D. Hoffman, David M. Blei, Chong Wang, John Paisley

[2] `Auto-Encoding Variational Bayes`, Diederik P Kingma, Max Welling