We will use data from the 2009 British Social Attitudes Survey. You can download the data here. The codebook is available here.

Our goal is to predict the following output variable (the name in brackets corresponds to the variable name in the codebook): the respondent's estimate of the percentage of non-Western immigrants in the UK [imm_brit].

We will use the following input variables (their names reappear in the coefficient output below): resp_female, resp_age, resp_household_size, the party identification dummies (resp_party_cons, resp_party_lab, resp_party_libdem, resp_party_snp, resp_party_green, resp_party_ukip, resp_party_bnp, resp_party_other), resp_newspaper, resp_internet_hrs, resp_religious, resp_time_current_employment, resp_urban_area, resp_health, and resp_household_income.

Let’s have a look at the distribution of the output variable. The red dashed line shows the true percentage of non-Western immigrants.

The plot shows that 756 of 1,044 respondents (or 72.4%) overestimate the percentage of non-Western immigrants in the UK.
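This overestimation share can be reproduced once the data are loaded (see the data preparation below); a minimal sketch, where true_pct is a hypothetical placeholder for the true percentage, whose value comes from the codebook rather than from this document:

```r
# Share of respondents who overestimate the percentage of
# non-Western immigrants (true_pct is a hypothetical placeholder)
n_over <- sum(bsas_data$imm_brit > true_pct)
c(n_over, nrow(bsas_data), 100 * n_over / nrow(bsas_data))
```

In our data this yields 756 of 1,044 respondents, or 72.4%.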

Data Preparation

The original data set contains 1,044 respondents.


# Set working directory
# setwd(...)

# Load data set

# Declare factor variables (the %>% pipe requires the dplyr package)
library(dplyr)
bsas_data <- bsas_data %>%
  dplyr::mutate(resp_urban_area = factor(resp_urban_area,
                                         levels = 1:4,
                                         labels = c("rural", "rather rural",
                                                    "rather urban", "urban")),
                resp_health = factor(resp_health,
                                     levels = 0:3,
                                     labels = c("bad", "fair", "fairly good", "good")))

We will use the glmnet() function in the glmnet package to perform ridge regression and the lasso. Before doing so, we use the model.matrix() function to create a matrix of input variables (also called “design matrix”). This function automatically transforms any qualitative variables into dummy variables. This is important because glmnet() can only take quantitative inputs.

We remove the intercept from the matrix produced by model.matrix() because glmnet() will automatically include an intercept. We also exclude the input resp_party_cons, which will serve as the baseline in our model.

# Matrix of input variables (remove the intercept and resp_party_cons)
x <- model.matrix(imm_brit ~ . -1 -resp_party_cons, bsas_data)

# Output variable
y <- bsas_data$imm_brit
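Before fitting anything, it is worth sanity-checking the design matrix. Given the 1,044 respondents and the 23 ridge coefficients reported below (which include the intercept), x should have 1,044 rows and 22 columns:

```r
# Inspect the design matrix: dimensions and dummy-coded columns
dim(x)       # expect 1044 rows and 22 columns
colnames(x)  # factor levels appear as dummy columns, e.g. resp_healthfair
```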

Exercise 1: Ridge Regression

The goal of the first exercise is to fit a ridge regression to our data.

Problem 1.1

We first split the data into a training set and a test set.

# Set seed

# Split data into a training set and a test set
train <- sample(1:nrow(x), nrow(x) / 2)
y_test <- y[-train]

We next define a vector of values for the tuning parameter \(\lambda\).

# Define grid of lambda values
grid <- 10^seq(10, -2, length = 100)
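The grid is worth a quick look: it contains 100 values of \(\lambda\), equally spaced on the log scale, running from \(10^{10}\) (which shrinks all coefficients essentially to zero) down to \(10^{-2}\) (which is essentially least squares):

```r
# Inspect the lambda grid
length(grid)  # 100 values
range(grid)   # from 0.01 to 1e+10
```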

We then use the training set to fit a ridge regression for each \(\lambda\) value. We do this by using the glmnet() function in the glmnet package. The glmnet() function has an alpha argument that determines what type of model is fit. If alpha = 0, then a ridge regression model is fit, and if alpha = 1, then a lasso model is fit. Note that by default, the glmnet() function standardizes the variables so that they are on the same scale.


# Fit a ridge regression for each lambda using the glmnet function
library(glmnet)
ridge_out <- glmnet(x[train, ], y[train], alpha = 0, lambda = grid)

Associated with each value of \(\lambda\) is a vector of ridge regression coefficients, stored in a matrix that can be accessed by coef(). In our case, this is a \(23 \times 100\) matrix, with a row for each input variable and the intercept and a column for each value of \(\lambda\).

dim(coef(ridge_out))
## [1]  23 100
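We can also inspect the coefficients associated with a single value of \(\lambda\); a short sketch, where the index 50 is an arbitrary choice:

```r
# Lambda value and ridge coefficients at the 50th grid point
ridge_out$lambda[50]   # the lambda value itself
coef(ridge_out)[, 50]  # the 23 coefficients (including the intercept)
```

Moving along the columns from large to small \(\lambda\), the coefficient estimates grow from near zero toward their least squares values.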

Finally, we use the plot() function to create a graph that shows the shrinkage of the coefficient estimates as a function of \(\lambda\).

# Plot shrinkage of coefficient estimates
plot(ridge_out, xvar = "lambda", label = TRUE)

Problem 1.2

We use cross-validation to choose the optimal tuning parameter \(\lambda\). We do this by using the cross-validation function cv.glmnet() in the glmnet package. By default, the function performs 10-fold cross-validation, though we can change this using the argument nfolds. Here, we do 5-fold cross-validation.

# Perform cross-validation
ridge_cv_out <- cv.glmnet(x[train, ], y[train], alpha = 0, lambda = grid, nfolds = 5)

We then use the plot() function to create a graph that shows the CV estimate of the expected test MSE associated with each value of \(\lambda\).

# Plot MSE associated with each lambda value
plot(ridge_cv_out)

Next, we determine the optimal value of \(\lambda\) and use the test set to compute the test MSE associated with our optimal model.

# Determine optimal lambda value
(lambda_opt <- ridge_cv_out$lambda.min)
## [1] 4.641589
# Use test set to compute test error
ridge_out_pred <- predict(ridge_out, s = lambda_opt, newx = x[-train, ])
mean((ridge_out_pred - y_test)^2)
## [1] 375.5659
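To put this test MSE in context, we can compare it against a naive baseline that ignores the inputs and always predicts the training-set mean of the output; a minimal sketch:

```r
# Test MSE of predicting the training mean for every test observation
mean((mean(y[train]) - y_test)^2)
```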

Let’s see how much benefit ridge regression provides over least squares regression.

ridge_out_pred_ls <- predict(ridge_out, s = 0, x = x[train, ], y = y[train],
                             newx = x[-train, ], exact = TRUE)
mean((ridge_out_pred_ls - y_test)^2)
## [1] 379.4074

For this data set, ridge regression decreases the test error somewhat relative to least squares.

Finally, we re-fit our ridge regression model on the full data set, using the optimal value of \(\lambda\).

ridge_final <- glmnet(x, y, alpha = 0)
predict(ridge_final, type = "coefficients", s = lambda_opt)[1:23, ]
##                  (Intercept)                  resp_female 
##                 37.031952483                  5.937727674 
##                     resp_age          resp_household_size 
##                 -0.041434850                  1.148784733 
##               resp_party_lab            resp_party_libdem 
##                 -2.577587782                 -3.895195596 
##               resp_party_snp             resp_party_green 
##                  3.818745888                 -3.339799903 
##              resp_party_ukip               resp_party_bnp 
##                 -4.152818311                  8.821819329 
##             resp_party_other               resp_newspaper 
##                  2.725760372                  1.612797275 
##            resp_internet_hrs               resp_religious 
##                 -0.025656308                  0.511561245 
## resp_time_current_employment         resp_urban_arearural 
##                 -0.002295805                 -1.223758130 
##  resp_urban_arearather rural  resp_urban_arearather urban 
##                 -0.860122923                  0.286077178 
##         resp_urban_areaurban              resp_healthfair 
##                  1.664619958                 -1.214776029 
##       resp_healthfairly good              resp_healthgood 
##                 -0.238974681                 -0.102325897 
##        resp_household_income 
##                 -1.200245890
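To see which inputs the final model weights most heavily, we can rank the coefficients by absolute size; a sketch (note that although glmnet() standardizes internally, the reported coefficients are on the original scale, so magnitudes are not strictly comparable across inputs measured in different units):

```r
# Rank the final ridge coefficients by absolute size (intercept excluded)
ridge_coefs <- predict(ridge_final, type = "coefficients", s = lambda_opt)[-1, ]
sort(abs(ridge_coefs), decreasing = TRUE)
```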

Exercise 2: The Lasso

The goal of the second exercise is to fit a lasso model to our data.

Problem 2.1

In order to fit a lasso model, we again use the glmnet() function, but now with the argument alpha = 1. We then use the plot() function to create a graph that shows the shrinkage of the coefficient estimates.

# Fit a lasso model for each lambda using the glmnet function
lasso_out <- glmnet(x[train, ], y[train], alpha = 1, lambda = grid)

# Plot shrinkage of coefficient estimates
plot(lasso_out, xvar = "lambda", label = TRUE)
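A defining property of the lasso, visible in the plot, is that coefficient estimates hit exactly zero as \(\lambda\) grows, so the lasso performs variable selection. We can count the nonzero coefficients at any given \(\lambda\); a sketch using an arbitrarily chosen \(\lambda = 1\):

```r
# Number of nonzero lasso coefficients at lambda = 1 (intercept excluded)
lasso_coefs <- coef(lasso_out, s = 1)[-1, ]
sum(lasso_coefs != 0)
```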