Bayesian Regression

Regression is one of the most common and basic supervised learning tasks in machine learning. Suppose we’re given a dataset \(\mathcal{D}\) of the form

\[\mathcal{D} = \{ (X_i, y_i) \} \qquad \text{for}\qquad i=1,2,...,N\]

The goal of linear regression is to fit a function to the data of the form:

\[y = w X + b + \epsilon\]

where \(w\) and \(b\) are learnable parameters and \(\epsilon\) represents observation noise. Specifically \(w\) is a matrix of weights and \(b\) is a bias vector.

Let’s first implement linear regression in PyTorch and learn point estimates for the parameters \(w\) and \(b\). Then we’ll see how to incorporate uncertainty into our estimates by using Pyro to implement Bayesian linear regression.

Setup

As always, let’s begin by importing the modules we’ll need.

In [ ]:
import numpy as np
import torch
import torch.nn as nn

from torch.autograd import Variable

import pyro
from pyro.distributions import Normal
from pyro.infer import SVI
from pyro.optim import Adam

Data

We’ll generate a toy dataset with one feature and \(w = 3\) and \(b = 1\) as follows:

In [ ]:
N = 100  # size of toy data
p = 1    # number of features

def build_linear_dataset(N, noise_std=0.1):
    X = np.linspace(-6, 6, num=N)
    y = 3 * X + 1 + np.random.normal(0, noise_std, size=N)
    X, y = X.reshape((N, 1)), y.reshape((N, 1))
    X, y = Variable(torch.Tensor(X)), Variable(torch.Tensor(y))
    return torch.cat((X, y), 1)

Note that we generate the data with a fixed observation noise \(\sigma = 0.1\).

Regression

Now let’s define our regression model. We’ll use PyTorch’s nn.Module for this. Our input \(X\) is a matrix of size \(N \times p\) and our output \(y\) is a vector of size \(p \times 1\). The function nn.Linear(p, 1) defines a linear transformation of the form \(Xw + b\) where \(w\) is the weight matrix and \(b\) is the additive bias.

In [ ]:
class RegressionModel(nn.Module):
    def __init__(self, p):
        super(RegressionModel, self).__init__()
        self.linear = nn.Linear(p, 1)

    def forward(self, x):
        return self.linear(x)

regression_model = RegressionModel(p)

Training

We will use the mean squared error (MSE) as our loss and Adam as our optimizer. We would like to optimize the parameters of the regression_model neural net above. We will use a somewhat large learning rate of 0.01 and run for 500 iterations.

In [ ]:
loss_fn = torch.nn.MSELoss(size_average=False)
optim = torch.optim.Adam(regression_model.parameters(), lr=0.01)
num_iterations = 500

def main():
    data = build_linear_dataset(N, p)
    x_data = data[:, :-1]
    y_data = data[:, -1]
    for j in range(num_iterations):
        # run the model forward on the data
        y_pred = regression_model(x_data)
        # calculate the mse loss
        loss = loss_fn(y_pred, y_data)
        # initialize gradients to zero
        optim.zero_grad()
        # backpropagate
        loss.backward()
        # take a gradient step
        optim.step()
        if (j + 1) % 50 == 0:
            print("[iteration %04d] loss: %.4f" % (j + 1, loss.data[0]))
    # Inspect learned parameters
    print("Learned parameters:")
    for name, param in regression_model.named_parameters():
        print("%s: %.3f" % (name, param.data.numpy()))

if __name__ == '__main__':
    main()

Sample Output:

[iteration 0050] loss: 4405.9824
[iteration 0100] loss: 2636.4561
[iteration 0150] loss: 1497.4330
[iteration 0200] loss: 811.6588
[iteration 0250] loss: 429.7031
[iteration 0300] loss: 234.2143
[iteration 0350] loss: 142.6732
[iteration 0400] loss: 103.5445
[iteration 0450] loss: 88.2896
[iteration 0500] loss: 82.8650
Learned parameters:
linear.weight: 2.996
linear.bias: 1.094

Not too bad - you can see that the regressor learned parameters that were pretty close to the ground truth of \(w = 3, b = 1\). But how confident should we be in these point estimates?

Bayesian modeling (see here for an overview) offers a systematic framework for reasoning about model uncertainty. Instead of just learning point estimates, we’re going to learn a distribution over values of the parameters \(w\) and \(b\) that are consistent with the observed data.

Bayesian Regression

In order to make our linear regression Bayesian, we need to put priors on the parameters \(w\) and \(b\). These are distributions that represent our prior belief about reasonable values for \(w\) and \(b\) (before observing any data).

random_module()

In order to do this, we’ll ‘lift’ the parameters \(w\) and \(b\) to random variables. We can do this in Pyro via random_module(), which effectively takes a given nn.Module and turns it into a distribution over the same module; in our case, this will be a distribution over regressors. Specifically, each parameter in the original regression model is sampled from the provided prior. This allows us to repurpose vanilla regression models for use in the Bayesian setting. For example:

In [ ]:
mu = Variable(torch.zeros(1, 1))
sigma = Variable(torch.ones(1, 1))
# define a unit normal prior
prior = Normal(mu, sigma)
# overload the parameters in the regression module with samples from the prior
lifted_module = pyro.random_module("regression_module", regression_model, prior)
# sample a regressor from the prior
sampled_reg_model = lifted_module()

Model

We now have all the ingredients needed to specify our model. First we define priors over \(w\) and \(b\). Because we’re uncertain about the parameters a priori, we’ll use relatively wide priors \(\mathcal{N}(\mu = 0, \sigma = 10)\). Then we wrap regression_model with random_module and sample an instance of the regressor, lifted_reg_model. We then run the regressor forward on the inputs x_data. Finally we use the pyro.observe statement to condition on the observed data y_data. Note that we use the same fixed observation noise that was used to generate the data.

In [ ]:
def model(data):
    # Create unit normal priors over the parameters
    x_data = data[:, :-1]
    y_data = data[:, -1]
    mu, sigma = Variable(torch.zeros(p, 1)), Variable(10 * torch.ones(p, 1))
    bias_mu, bias_sigma = Variable(torch.zeros(1)), Variable(10 * torch.ones(1))
    w_prior, b_prior = Normal(mu, sigma), Normal(bias_mu, bias_sigma)
    priors = {'linear.weight': w_prior, 'linear.bias': b_prior}
    # lift module parameters to random variables sampled from the priors
    lifted_module = pyro.random_module("module", regression_model, priors)
    # sample a regressor (which also samples w and b)
    lifted_reg_model = lifted_module()
    # run the regressor forward conditioned on data
    prediction_mean = lifted_reg_model(x_data).squeeze()
    # condition on the observed data
    pyro.observe("obs", Normal(prediction_mean, Variable(0.1 * torch.ones(data.size(0)))),
                 y_data.squeeze())

Guide

In order to do inference we’re going to need a guide, i.e. a parameterized family of distributions over \(w\) and \(b\). Writing down a guide will proceed in close analogy to the construction of our model, with the key difference that the guide parameters need to be trainable. To do this we register the guide parameters in the ParamStore using pyro.param() and make sure each PyTorch Variable has the flag requires_grad set to True.

In [ ]:
softplus = torch.nn.Softplus()

def guide(data):
    # define our variational parameters
    w_mu = Variable(torch.randn(p, 1), requires_grad=True)
    # note that we initialize our sigmas to be pretty narrow
    w_log_sig = Variable(-3.0 * torch.ones(p, 1) + 0.05 * torch.randn(p, 1),
                         requires_grad=True)
    b_mu = Variable(torch.randn(1), requires_grad=True)
    b_log_sig = Variable(-3.0 * torch.ones(1) + 0.05 * torch.randn(1),
                         requires_grad=True)
    # register learnable params in the param store
    mw_param = pyro.param("guide_mean_weight", w_mu)
    sw_param = softplus(pyro.param("guide_log_sigma_weight", w_log_sig))
    mb_param = pyro.param("guide_mean_bias", b_mu)
    sb_param = softplus(pyro.param("guide_log_sigma_bias", b_log_sig))
    # guide distributions for w and b
    w_dist, b_dist = Normal(mw_param, sw_param), Normal(mb_param, sb_param)
    dists = {'linear.weight': w_dist, 'linear.bias': b_dist}
    # overload the parameters in the module with random samples
    # from the guide distributions
    lifted_module = pyro.random_module("module", regression_model, dists)
    # sample a regressor (which also samples w and b)
    return lifted_module()

Note that we choose Gaussians for both guide distributions. Also, to ensure positivity, we pass each log sigma through a softplus() transformation (an alternative to ensure positivity would be an exp()-transformation).

Inference

To do inference we’ll use stochastic variational inference (SVI) (for an introduction to SVI, see SVI Part I). Just like in the non-Bayesian linear regression, each iteration in our training loop will take a gradient step, with the difference that in this case, we’ll use the ELBO objective instead of the MSE loss.

The Pyro backend will construct the ELBO objective function for us; this logic is handled by the SVI class:

In [ ]:
optim = Adam({"lr": 0.01})
svi = SVI(model, guide, optim, loss="ELBO")

Here Adam is a thin wrapper around torch.optim.Adam (see here for a discussion). The complete training loop is as follows:

In [ ]:
def main():
    pyro.clear_param_store()
    data = build_linear_dataset(N, p)
    for j in range(num_iterations):
        # calculate the loss and take a gradient step
        loss = svi.step(data)
        if j % 100 == 0:
            print("[iteration %04d] loss: %.4f" % (j + 1, loss / float(N)))

if __name__ == '__main__':
    main()

To take an ELBO gradient step we simply call the step method of SVI. Notice that the data argument we pass to step will be passed to both model() and guide().

Validating Results

Let’s compare the variational parameters we learned to our previous result:

In [ ]:
for name in pyro.get_param_store().get_all_param_names():
    print("[%s]: %.3f" % (name, pyro.param(name).data.numpy()))

Sample Output:

[guide_log_sigma_weight]: -3.217
[guide_log_sigma_bias]: -3.164
[guide_mean_weight]: 2.966
[guide_mean_bias]: 0.941

As you can see, the means of our parameter estimates are pretty close to the values we previously learned. Now, however, instead of just point estimates, the parameters guide_log_sigma_weight and guide_log_sigma_bias provide us with uncertainty estimates. (Note that the sigmas are in log-space here, so the more negative the value, the narrower the width).

Finally, let’s evaluate our model by checking its predictive accuracy on new test data. This is known as point evaluation. We’ll sample 20 regressors from our posterior and run them on the new test data, then average across their predictions and calculate the MSE of the predicted values compared to the ground truth.

In [ ]:
X = np.linspace(6, 7, num=20)
y = 3 * X + 1
X, y = X.reshape((20, 1)), y.reshape((20, 1))
x_data, y_data = Variable(torch.Tensor(X)), Variable(torch.Tensor(y))
loss = nn.MSELoss()
y_preds = Variable(torch.zeros(20, 1))
for i in range(20):
    # guide does not require the data
    sampled_reg_model = guide(None)
    # run the regression model and add prediction to total
    y_preds = y_preds + sampled_reg_model(x_data)
# take the average of the predictions
y_preds = y_preds / 20
print "Loss: ", loss(y_preds, y_data).data[0]

Sample Output:

Loss:  0.0310659464449

Bayesian nonlinear regression can be implemented analogously by using random_module to lift neural network modules.

See the full code on Github.