1 Introduction

bayesm is an R package that facilitates statistical analysis using Bayesian methods. The package provides a set of functions for commonly used models in applied microeconomics and quantitative marketing.

The goal of this vignette is to make it easier for users to adopt bayesm by providing a comprehensive overview of the package’s contents and detailed examples of certain functions. We begin with the overview, followed by a discussion of how to work with bayesm. The discussion covers the structure of function arguments, the required input data formats, and the various output formats. The final section provides detailed examples to demonstrate Bayesian inference with the linear normal, multinomial logit, and hierarchical multinomial logit regression models.

2 Package Contents

For ease of exposition, we have grouped the package contents into:

  • Posterior sampling functions
  • Utility functions
  • S3 methods1
  • Datasets

Because the first two groups contain many functions, we organize them into subgroups by purpose. Below, we display each group of functions in a table with one column per subgroup.

Posterior Sampling Functions

| Linear Models | Limited Dependent Variable Models | Hierarchical Models | Density Estimation |
|---|---|---|---|
| runireg* | rbprobitGibbs** | rhierLinearModel | rnmixGibbs* |
| runiregGibbs | rmnpGibbs | rhierLinearMixture | rDPGibbs |
| rsurGibbs* | rmvpGibbs | rhierMnlRwMixture* | |
| rivGibbs | rmnlIndepMetrop | rhierMnlDP | |
| rivDP | rscaleUsage | rbayesBLP | |
| | rnegbinRw | rhierNegbinRw | |
| | rordprobitGibbs | | |

*bayesm also offers the utility function breg, which provides a related but more limited set of capabilities than runireg; similarly, rmultireg relates to rsurGibbs, rmixGibbs to rnmixGibbs, and rhierBinLogit to rhierMnlRwMixture.

**rbiNormGibbs provides a tutorial-like example of Gibbs Sampling from a bivariate normal distribution.

Utility Functions

| Log-Likelihood (data vector) | Log Density (univariate) | Random Draws | Mixture-of-Normals | Miscellaneous |
|---|---|---|---|---|
| llmnl | lndIChisq | rdirichlet | clusterMix | cgetC |
| llmnp | lndIWishart | rmixture | eMixMargDen | condMom |
| llnhlogit | lndMvn | rmvst | mixDen | createX |
| | lndMvst | rtrun | mixDenBi | ghkvec |
| | | rwishart | momMix | logMargDenNR |
| | | | | mnlHess |
| | | | | mnpProb |
| | | | | nmat |
| | | | | numEff |
| | | | | simnhlogit |
S3 Methods

| Plot Methods | Summary Methods |
|---|---|
| plot.bayesm.mat | summary.bayesm.mat |
| plot.bayesm.nmix | summary.bayesm.nmix |
| plot.bayesm.hcoef | summary.bayesm.var |
Datasets

| | | |
|---|---|---|
| bank | customerSat | orangeJuice |
| camera | detailing | Scotch |
| cheese | margarine | tuna |

Some discussion of the naming conventions may be warranted. All function names use camelCase: multi-word names capitalize interior words but begin with a lowercase letter. Posterior sampling functions begin with r to match R’s convention for naming random number generation functions, since these functions all draw from (posterior) distributions. Common abbreviations include DP for Dirichlet Process, IV for instrumental variables, MNL and MNP for multinomial logit and probit, SUR for seemingly unrelated regression, and hier for hierarchical. Utility functions that begin with ll calculate the log-likelihood of a data vector, while those that begin with lnd provide the log-density. Other abbreviations should be straightforward; please see the help file for a specific function if its name is unclear.

3 Working with bayesm

We expect most users of the package to interact primarily with the posterior sampling functions. These functions take a consistent set of arguments as inputs (Data, Prior, and Mcmc — each is a list) and they return output in a consistent format (always a list). summary and plot generic functions can then be used to facilitate analysis of the output because of the classes and methods defined in bayesm. The following subsections describe the format of the standardized function arguments as well as the required format of the data inputs and the format of the output from bayesm’s posterior sampling functions.

3.1 Input: Function Arguments

The posterior sampling functions take three arguments: Data, Prior, and Mcmc. Each argument is a list.

As a minimal example, assume you’d like to perform a linear regression and that you have in your workspace y (a vector of length \(n\)) and X (a matrix of dimension \(n \times p\)). For this example we use the default values for Prior, so we do not specify the Prior argument. (The components Data, Prior, and Mcmc, as well as their elements such as R and nprint, are discussed in the subsections that follow.) The bayesm syntax is then simply:

mydata <- list(y = y, X = X)
mymcmc <- list(R = 1e6, nprint = 0)

out <- runireg(Data = mydata, Mcmc = mymcmc)

The list elements of Data, Prior, and Mcmc must be named. For example, you could not use the following code because the Data argument mydata2 has unnamed elements.

mydata2 <- list(y, X)
out <- runireg(Data = mydata2, Mcmc = mymcmc)

3.1.1 Data Argument

bayesm’s posterior sampling functions do not accept data stored in dataframes; data must be stored as vectors or matrices.

For non-hierarchical models, the list of input data simply contains the appropriate data vectors or matrices from the set {y, X, w, z}2 and possibly a scalar (length one vector) from the set {k, p}.

  • For functions that require only a single data argument (such as the two density estimation functions, rnmixGibbs and rDPGibbs), y is that argument. More typically, y is used for LHS3 variable(s) and X for RHS variables. For estimation using instrumental variables, the variables in the structural equation are separated into “exogenous” variables w and an “endogenous” variable x; z is a matrix of instruments. (A brief sketch of these Data lists follows this list.)

  • For the scalars, p indicates the number of choice alternatives in discrete choice models and k is used as the maximum ordinate in models for ordinal data (e.g., rordprobitGibbs).
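To make this concrete, here is a minimal sketch of Data lists for two non-hierarchical functions, assuming y, X, p, x, w, and z already exist in the workspace (as in the minimal example above). The element names shown follow the descriptions above; confirm the exact names each function expects in its help file.

# Multinomial logit (e.g., rmnlIndepMetrop): y is a length-n vector of choices
# coded 1, ..., p and X is the stacked design matrix of covariates
mnldata <- list(y = y, X = X, p = p)

# Instrumental variables (e.g., rivGibbs): x is the endogenous variable,
# w the exogenous variables, and z the matrix of instruments
ivdata <- list(y = y, x = x, w = w, z = z)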

For hierarchical models, the input data has up to 3 components. We’ll discuss these components using the mixed logit model (rhierMnlRwMixture) as an example. For rhierMnlRwMixture, the Data argument is list(lgtdata, Z, p).

  1. The first component for all hierarchical models is required. It is a list of lists named either regdata or lgtdata, depending on the function. In rhierMnlRwMixture, lgtdata is a list of lists, with each interior list containing a vector or one-column matrix of multinomial outcomes y and a design matrix of covariates X for one cross-sectional unit. In Example 3 below, we show how to convert data from a data frame to this required list-of-lists format; a minimal sketch also appears after this list.

  2. The second component, Z, is present but optional for all hierarchical models. Z is a matrix of cross-sectional unit characteristics that drive the mean responses; that is, a matrix of covariates for the individual parameters (e.g. \(\beta_i\)’s). For example, the model (omitting the priors) for rhierMnlRwMixture is:

    \[ y_i \sim \text{MNL}(X_i'\beta_i) \hspace{1em} \text{with} \hspace{1em} \beta_i = \Delta' z_i + u_i \hspace{1em} \text{and} \hspace{1em} u_i \sim N(\mu_j, \Sigma_j) \]

    where \(i\) indexes individuals (the cross-sectional units), \(z_i\) is the \(i\)-th row of \(Z\), and \(j\) indexes the components of the normal mixture.

  3. The third component (if accepted) is a scalar, such as p or k, and like the arguments by the same names in the non-hierarchical models, is used to indicate the size of the choice set or the maximum value of a scale or count variable. In rhierMnlRwMixture, the argument is p, which is used to indicate the number of choice alternatives.
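As a minimal sketch of the structure only (Example 3 below demonstrates a full conversion from a data frame, including construction of the design matrices), suppose hypothetical objects y_list and X_list already hold each unit’s outcomes and design matrix. The required Data argument could then be assembled as follows.

# Assemble the list-of-lists format required by hierarchical functions.
# y_list[[i]] holds unit i's multinomial outcomes; X_list[[i]] holds that
# unit's design matrix (for MNL models, rows are stacked by choice
# alternative -- see createX and the function's help file for the layout).
N <- length(y_list)
lgtdata <- vector("list", N)
for (i in seq_len(N)) {
  lgtdata[[i]] <- list(y = y_list[[i]], X = X_list[[i]])
}
mydata <- list(lgtdata = lgtdata, p = p)   # optionally add Z (unit characteristics)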

Note that rbayesBLP (the hierarchical logit model with aggregate data as in Berry, Levinsohn, and Pakes (1995) and Jiang, Manchanda, and Rossi (2009)) deviates slightly from the standard data input. rbayesBLP uses j instead of p to be consistent with the literature and calls the LHS variable share rather than y to emphasize that aggregate (market share instead of individual choice) data are required.

3.1.2 Prior Argument

Specification of prior distributions is model-specific, so our discussion here is brief.

All posterior sampling functions offer default values for parameters of prior distributions. These defaults were selected to yield proper yet reasonably-diffuse prior distributions (assuming the data are in approximately unit scale). In addition, these defaults are consistent across functions. For example, normal priors have default values of mean 0 with value 0.01 for the scaling factor of the prior precision matrix.

Specification of the prior is important. Significantly more detail can be found in chapters 2–5 of BSM4 and the help files for the posterior sampling functions. We strongly recommend you consult them before accepting the default values.
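As an illustration, consider runireg, whose prior parameters betabar, A, nu, and ssq appear in the console output shown in Section 3.2.2 below. A user who prefers a tighter prior on \(\beta\) than the default could pass the Prior argument explicitly. This is a sketch (y and X are assumed to exist in the workspace); check the help file for the exact element names each function expects.

k <- ncol(X)
myprior <- list(betabar = rep(0, k),      # prior mean for beta
                A       = 0.1 * diag(k),  # prior precision scale (default 0.01)
                nu      = 3,              # d.o.f. for the variance prior
                ssq     = var(y))         # scale for the variance prior
out <- runireg(Data = list(y = y, X = X), Prior = myprior,
               Mcmc = list(R = 2000, nprint = 0))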

3.1.3 Mcmc Argument

The Mcmc argument controls parameters of the sampling algorithm. The most common components of this list include:

  • R: the number of MCMC draws
  • keep: a thinning parameter indicating that every keep\(^\text{th}\) draw should be retained
  • nprint: an option to print the estimated time remaining to the console after each nprint\(^\text{th}\) draw

Draws produced by MCMC methods are autocorrelated rather than iid. As a result, a large simulation size may be required to obtain reliable results. We recommend setting R large enough to yield an adequate effective sample size and letting keep default to the value 1 unless doing so imposes memory constraints. A careful reader of the bayesm help files will notice that many of the examples set R equal to 1000 or less. This low number of draws was chosen for speed, as the examples are meant to demonstrate how to run the code and do not necessarily suggest best practices for a proper statistical analysis.

nprint defaults to 100, but for large R, you may want to increase the nprint option. Alternatively, you can set nprint=0, in which case the priors will still be printed to the console, but the estimated time remaining will not.

Additional components of the Mcmc argument are function-specific, but typically include starting values for the algorithm. For example, the Mcmc argument for runiregGibbs takes sigmasq as a scalar element of the list. The Gibbs Sampler for runiregGibbs first draws \(\beta | \sigma^2\), then draws \(\sigma^2 | \beta\), and then repeats. For the first draw of \(\beta\) in the MCMC chain, a value of \(\sigma^2\) is required. The user can specify a value using Mcmc$sigmasq, or the user can omit the argument and the function will use its default (sigmasq = var(Data$y)).
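Putting these pieces together, a sketch of an Mcmc argument for runiregGibbs might look like the following (y and X are again assumed to exist in the workspace; R, keep, nprint, and sigmasq are the elements described above).

mymcmc <- list(R = 20000,       # number of MCMC draws
               keep = 10,       # retain every 10th draw
               nprint = 5000,   # print estimated time remaining every 5000 draws
               sigmasq = 1)     # starting value for sigma^2
out <- runiregGibbs(Data = list(y = y, X = X), Mcmc = mymcmc)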

3.2 Output: Returned Results

bayesm posterior sampling functions return a consistent set of results and output to the user. At a minimum, this output includes draws from the posterior distribution. bayesm provides a set of summary and plot methods to facilitate analysis of this output, but the user is free to analyze the results as he or she sees fit.

3.2.1 Output Formats

All bayesm posterior sampling functions return a list. The elements of that list include a set of vectors, matrices, and/or arrays (and possibly a list), the exact set of which depends on the function.

  • Vectors are returned for draws of parameters with no natural grouping. For example, runireg implements an iid sampler to draw from the posterior of a homoskedastic univariate regression with a conjugate prior (i.e., a Bayesian analog to OLS regression). One output list element is sigmasqdraw, a length R/keep vector for the scalar parameter \(\sigma^2\).

  • Matrices are returned for draws of parameters with a natural grouping. Again using runireg as the example, the output list includes betadraw, an R/keep \(\times\) ncol(X) matrix for the vector of \(\beta\) parameters.

    In contrast to the array format described in the next bullet, draws for the parameters of a variance-covariance matrix are returned in matrix form. For example, rmnpGibbs implements a Gibbs Sampler for a multinomial probit model where one set of parameters is the \((p-1) \times (p-1)\) matrix \(\Sigma\). The output list for rmnpGibbs includes the list element sigmadraw, which is a matrix of dimension R/keep \(\times\ (p-1)^2\), with each row containing a draw (in vector form) of all the elements of the matrix \(\Sigma\). bayesm’s summary and plot methods (see below) are designed to handle this format.

  • Arrays are used when parameters have a natural matrix-grouping, such that the MCMC algorithm returns R/keep draws of the matrix. For example, rsurGibbs returns a list that includes Sigmadraw, an \(m \times m \times\)R/keep array, where \(m\) is the number of regression equations. As a second example, rhierLinearModel estimates a hierarchical linear regression model with a normal prior, and returns a list that includes betadraw, an \(n \times k \times\)R/keep array, where \(n\) signifies the number of individuals (each with their own \(\beta_i\)) and \(k\) signifies the number of covariates (ncol(X) = \(k\)).

  • For functions that use a mixture-of-normals or Dirichlet Process prior, the output list includes a list (nmix) pertaining to that prior. nmix is itself a list with 3 components: probdraw, zdraw, and compdraw. probdraw reports the probability that each draw came from a particular normal component; zdraw indicates which mixture-of-normals component each draw is assigned to; and compdraw provides draws for the mixture-of-normals components (i.e., mean vectors and Cholesky roots of covariance matrices). Note that if you specify a “mixture” with only one normal component, there will be no useful information in probdraw. Also note that zdraw is not relevant for density estimation and will be null except in rnmixGibbs and rDPGibbs.
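A quick way to see which of these formats a particular function returns is to inspect the output object directly. The sketch below assumes out holds the list returned by runireg and out_mnp the list returned by rmnpGibbs (with p choice alternatives).

str(out)                  # lists every returned element and its dimensions
dim(out$betadraw)         # R/keep x ncol(X) matrix of beta draws
length(out$sigmasqdraw)   # R/keep vector of sigma^2 draws

# Each row of rmnpGibbs' sigmadraw holds a vectorized (p-1) x (p-1) matrix;
# a single draw can be reshaped back into matrix form:
Sigma1 <- matrix(out_mnp$sigmadraw[1, ], nrow = p - 1, ncol = p - 1)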

3.2.2 Classes and Methods

In R generally, objects can be assigned a class and then a generic function can be used to run a method on an object with that class. The list elements in the output from bayesm posterior sampling functions are assigned special bayesm classes. The bayesm package includes summary and plot methods for use with these classes (see the table in Section 2 above). This means you can call the generic function (e.g., summary) on individual list elements of bayesm output and it will return specially-formatted summary results, including the effective sample size.

To see this, the code below provides an example using runireg. Here the generic function summary dispatches the method summary.bayesm.mat because the betadraw element of runireg’s output has class bayesm.mat. This example also shows the information about the prior that is printed to the console during the call to a posterior sampling function. Notice, however, that no remaining time is printed because nprint is set to zero.

set.seed(66)
R <- 2000
n <- 200
X <- cbind(rep(1,n), runif(n))
beta <- c(1,2)
sigsq <- 0.25
y <- X %*% beta + rnorm(n, sd = sqrt(sigsq))
out <- runireg(Data = list(y = y, X = X), Mcmc = list(R = R, nprint = 0))
##  
## Starting IID Sampler for Univariate Regression Model
##   with  200  observations
##  
## Prior Parms: 
## betabar
## [1] 0 0
## A
##      [,1] [,2]
## [1,] 0.01 0.00
## [2,] 0.00 0.01
## nu =  3  ssq=  0.5721252
##  
## MCMC parms: 
## R=  2000  keep=  1  nprint=  0
## 
summary(out$betadraw, tvalues = beta)
## Summary of Posterior Marginal Distributions 
## Moments 
##   tvalues mean std dev num se rel eff sam size
## 1       1  1.0    0.07 0.0015    0.85     1800
## 2       2  2.1    0.12 0.0029    1.01      900
## 
## Quantiles 
##   tvalues 2.5%  5% 50% 95% 97.5%
## 1       1 0.88 0.9 1.0 1.1   1.2
## 2       2 1.83 1.9 2.1 2.3   2.3
##    based on 1800 valid draws (burn-in=200)
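The plot generic dispatches in the same way: because out$betadraw has class bayesm.mat, the call below invokes plot.bayesm.mat and produces the default plots of the posterior draws (see the plot.bayesm.mat help file for the specific panels).

plot(out$betadraw)   # dispatches plot.bayesm.mat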

3.3 Access to Code

bayesm was originally created as a companion to BSM, at which time most functions were written in R. The package has since been expanded to include additional functionality, and most code has been converted to C++ via Rcpp for faster performance. However, users interested in the original R implementation of a posterior sampling function can still access the last version of bayesm prior to the C++/Rcpp conversion (version 2.2-5) from the package archive on CRAN.

To access the R code in the current version of bayesm, the user can simply type a function’s name without parentheses, for example, bayesm::runireg. However, most posterior sampling functions only perform basic checks in R and then call an unexported C++ function to do the heavy lifting (i.e., the MCMC draws). This C++ source code is not available via the installed bayesm package because C++ code is compiled upon package installation on Linux machines and pre-compiled by CRAN for Mac and Windows. To access this source code, the user must download the “package source” from CRAN. This can be accomplished by clicking the appropriate link at the bayesm package archive or by executing the R command download.packages(pkgs="bayesm", destdir=".", type="source"). Either method provides a compressed file “bayesm_version.tar.gz” that can be uncompressed; the C++ code can then be found in the “src” subdirectory.

4 Examples

We begin with a brief introduction to regression and Bayesian estimation. This will help set the notation and provide background for the examples that follow. We do not claim that this will be a sufficient introduction for readers to whom these ideas are new. We refer those readers to excellent texts on regression analysis by Cameron & Trivedi, Davidson & MacKinnon, Goldberger, Greene, Wasserman, and Wooldridge.5 For Bayesian methods, we recommend Gelman et al., Jackman, Marin & Robert, Rossi et al., and Zellner.6

4.1 What is Regression

Suppose you believe a variable \(y\) varies with (or is caused by) a set of variables \(x_1, x_2, \ldots, x_k\). For notational convenience, we’ll collect the set of \(x\) variables into \(X\). These variables \(y\) and \(X\) have a joint distribution \(f(y, X)\). Typically, interest will not fall on this joint distribution, but rather on the conditional distribution of the “outcome” variable \(y\) given the “explanatory” variables (or “covariates”) \(x_1, x_2, \ldots, x_k\); this conditional distribution being \(f(y|X)\).

To carry out inference on the relationship between \(y\) and \(X\), the researcher then often focuses attention on one aspect of the conditional distribution, most commonly its expected value. This conditional mean is assumed to be a function \(g\) of the covariates such that \(\mathbb{E}[y|X] = g(X, \beta)\) where \(\beta\) is a vector of parameters. A function for the conditional mean is known as a “regression” function.

The canonical introductory regression model is the normal linear regression model, which assumes that \(y \sim N(X\beta, \sigma^2)\). Most students of regression will have first encountered this model as a combination of deterministic and stochastic components. There, the stochastic component is defined as deviations from the conditional mean, \(\varepsilon = y - \mathbb{E}[y|X]\), such that \(y = \mathbb{E}[y|X] + \varepsilon\) or that \(y = g(X, \beta) + \varepsilon\). The model is then augmented with the assumptions that \(g(X, \beta) = X \beta\) and \(\varepsilon \sim N(0,\sigma^2)\) so that the normal linear regression model is:

\[ y = X \beta + \varepsilon \text{ with } \varepsilon \sim N(0,\sigma^2) \hspace{1em} \text{or} \hspace{1em} y \sim N(X\beta, \sigma^2) \]

When taken to data, additional assumptions are made which include a full-rank condition on \(X\) and often that \(\varepsilon_i\) for \(i=1,\ldots,n\) are independent and identically distributed.

Our first example will demonstrate how to estimate the parameters of the normal linear regression model using Bayesian methods made available by the posterior sampling function runireg. We then provide an example to estimate the parameters of a model when \(y\) is a categorical variable. This second example is a multinomial logit model, in which the conditional mean is obtained by applying the logit (“softmax”) transformation to the linear index \(X\beta\), as shown below. Our third and final example will extend the multinomial logit model to permit individual-level parameters. This is known as a hierarchical model and requires panel data to perform the estimation.
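For reference, the multinomial logit model of Examples 2 and 3 specifies the choice probabilities in the standard form

\[ \Pr(y_i = j \mid x_i, \beta) = \frac{\exp(x_{ij}'\beta)}{\sum_{k=1}^{p} \exp(x_{ik}'\beta)}, \qquad j = 1, \ldots, p \]

where \(x_{ij}\) collects the covariates for alternative \(j\) faced by individual \(i\) and \(p\) is the number of choice alternatives.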

Before launching into the examples, we briefly introduce Bayesian methodology and contrast it with classical methods.

4.2 What is Bayesian Inference

Under classical econometric methods, \(\beta\) is most commonly estimated by minimizing the sum of squared residuals, maximizing the likelihood, or matching sample moments to population moments. The distributions of the estimators (e.g., \(\hat{\beta}\)) and test statistics derived from these methods rely on asymptotic approximations and are based on hypothetical repeated samples that are never observed.

In contrast, Bayesian inference provides the benefits of (a) exact finite-sample results, (b) integration of decision-making, estimation, testing, and model selection, and (c) a full accounting of uncertainty. These benefits rely heavily on probability theory and, in particular, distributional theory, some elements of which we now briefly review.

Recall the relationship between the joint and conditional densities for random variables \(A\) and \(B\):

\[ P_{A|B}(A=a|B=b) = \frac{P_{A,B}(A=a, B=b)}{P_B(B=b)} \]
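Applying this in both directions gives \(P_{A,B}(A=a, B=b) = P_{A|B}(A=a|B=b)\,P_B(B=b) = P_{B|A}(B=b|A=a)\,P_A(A=a)\).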

This relationship can be used to derive Bayes’ Theorem, which we write with \(D\) for “data” and \(\theta\) as the parameters (and with implied subscripts):

\[ P(\theta|D) = \frac{P(D|\theta)P(\theta)}{P(D)} \]

Noticing that \(P(D)\) does not contain the parameters of interest (\(\theta\)) and is therefore simply a normalizing constant, we can instead write:

\[ P(\theta|D) \propto P(D|\theta)P(\theta) \]

Introducing Bayesian terminology, we have that the “Posterior” is proportional to the Likelihood times the Prior.

Thus, given (1) a dataset (\(D\)), (2) an assumption on the data generating process (the likelihood, \(P(D|\theta)\)), and (3) a specification of the prior distribution of the parameters (\(P(\theta)\)), we can find the exact (posterior) distribution of the parameters given the observed data. This is in stark contrast to classical econometric methods, which typically only provide the asymptotic distributions of estimators.

However, for any problem of practical interest, the posterior distribution is a high-dimensional object. Additionally, it may not be possible to analytically calculate the posterior or its features (e.g., marginal distributions or moments such as the mean). To handle these issues, the modern approach to Bayesian inference relies on simulation methods to sample from the (high-dimensional) posterior distribution and then construct marginal distributions (or their features) from the sampled draws of the posterior. As a result, simulation and summaries of the posterior play important roles in modern Bayesian statistics.

bayesm’s posterior sampling functions (as their name suggests) sample from posterior distributions. bayesm’s summary and plot methods can be used to analyze those draws. Unlike most classical econometric methods, the MCMC methods implemented in bayesm’s posterior sampling functions provide an estimate of the entire posterior distribution, not just a few moments. Given this “rich” result from Bayesian methods, it is best to summarize posterior distributions using histograms or quantiles. We advise that you resist the temptation to simply report the posterior mean and standard deviation; for non-normal distributions, those moments may have little meaning.

In the examples that follow, we will describe the data we use, present the model, demonstrate how to estimate it using the appropriate posterior sampling function, and provide various ways to summarize the output.

4.3 Example 1: Linear Normal Regression

4.3.1 Data

For our first example, we will use the cheese dataset, which provides 5,555 observations of weekly sales volume for a package of Borden sliced cheese, as well as a measure of promotional display activity and price. The data are aggregated to the “key” account (i.e., retailer-market) level.

data(cheese)
names(cheese) <- tolower(names(cheese))
str(cheese)
## 'data.frame':    5555 obs. of  4 variables:
##  $ retailer: Factor w/ 88 levels "ALBANY,NY - PRICE CHOPPER",..: 42 43 44 19 20 21 35 36 64 31 ...
##  $ volume  : int  21374 6427 17302 13561 42774 4498 6834 3764 5112 6676 ...
##  $ disp    : num  0.162 0.1241 0.102 0.0276 0.0906 ...
##  $ price   : num  2.58 3.73 2.71 2.65 1.99 ...

Suppose we want to assess the relationship between sales volume and price and promotional display activity. For this example, we will abstract from whether these relationships vary by retailer or whether prices are set endogenously. Simple statistics show a negative correlation between volume and price, and a positive correlation between volume and promotional activity, as we would expect.

options(digits=3)
cor(cheese$volume, cheese$price)
## [1] -0.227
cor(cheese$volume, cheese$disp)
## [1] 0.173

4.3.2 Model

We model the expected log sales volume as a linear function of log(price) and promotional activity. Specifically, we assume \(y_i\) to be iid with \(p(y_i|x_i,\beta)\) normally distributed with a mean linear in \(x\) and a variance of \(\sigma^2\). We will denote observations with the index \(i = 1, \ldots, n\) and covariates with the index \(j = 1, \ldots, k\). The model can be written as:

\[ y_i = \sum_{j=1}^k \beta_j x_{ij} + \varepsilon_i = x_i'\beta + \varepsilon_i \hspace{1em} \text{with} \hspace{1em} \varepsilon_i \sim iid\ N(0,\sigma^2) \]

or equivalently but more compactly as:

\[ y \sim MVN(X\beta,\ \sigma^2I_n) \]

Here, the notation \(N(0, \sigma^2)\) indicates a univariate normal distribution with mean \(0\) and variance \(\sigma^2\), while \(MVN(X\beta,\ \sigma^2I_n)\) indicates a multivariate normal distribution with mean vector \(X\beta\) and variance-covariance matrix \(\sigma^2I_n\). In addition, \(y_i\), \(x_{ij}\), \(\varepsilon_i\), and \(\sigma^2\) are scalars while \(x_i\) and \(\beta\) are \(k \times 1\) dimensional vectors. In the more compact notation, \(y\) is an \(n \times 1\) dimensional vector, \(X\) is an \(n \times k\) dimensional matrix with row \(x_i\), and \(I_n\) is an \(n \times n\) dimensional identity matrix. With regard to the cheese dataset, \(k = 2\) and \(n = 5,555\).

When employing Bayesian methods, the model is incomplete until the prior is specified. For our example, we elect to use natural conjugate priors, meaning the family of distributions for the prior is chosen such that, when combined with the likelihood, the posterior will be of the same distributional family. Specifically, we first factor the joint prior into marginal and conditional prior distributions:

\[ p(\beta,\sigma^2) = p(\beta|\sigma^2)p(\sigma^2) \]

We then specify the prior for \(\sigma^2\) as inverse-gamma (written in terms of a chi-squared random variable) and the prior for \(\beta|\sigma^2\) as multivariate normal:

\[ \sigma^2 \sim \frac{\nu s^2}{\chi^2_{\nu}} \hspace{1em} \text{and} \hspace{1em} \beta|\sigma^2 \sim MVN(\bar{\beta},\sigma^2A^{-1}) \]
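Written as a density, the prior on \(\sigma^2\) above is a scaled inverse chi-squared (equivalently, an inverse-gamma) distribution:

\[ p(\sigma^2) \propto (\sigma^2)^{-(\nu/2 + 1)} \exp\left(-\frac{\nu s^2}{2\sigma^2}\right) \]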

Other than convenience, we have little reason to specify priors from these distributional families; however, we will select diffuse priors so as not to impose restrictions on the model. To do so, we must pick values for \(\nu\) and \(s^2\) (the degrees of freedom and scale parameters for the inverted chi-squared prior on \(\sigma^2\)) as well as \(\bar{\beta}\) and \(A^{-1}\) (the mean vector and variance-covariance matrix for the multivariate normal prior on the \(\beta\) vector). The bayesm posterior sampling function for this model, runireg, defaults to the following values:

  • \(\nu = 3\)
  • \(s^2 =\) var(y)
  • \(\bar{\beta} = 0\)
  • \(A = 0.01*I\)

We will use these defaults, as they are chosen to be diffuse for data with a unit scale. Thus, for each \(\beta_j | \sigma^2\) we have specified a normal prior with mean 0 and variance \(100\sigma^2\), and for \(\sigma^2\) we have specified an inverse-gamma prior with \(\nu = 3\) and \(s^2 = \text{var}(y)\). We graph these prior distributions below.

par(mfrow = c(1,2))

curve(dnorm(x,0,10), xlab = "", ylab = "", xlim = c(-30,30),
      main = expression(paste("Prior for ", beta[j])),
      col = "dodgerblue4")

nu  <- 3
ssq <- var(log(cheese$volume))
# density of sigma^2 ~ nu*ssq/chisq_nu, obtained by change of variables
curve(dchisq(nu*ssq/x, nu) * nu*ssq/x^2, xlab = "", ylab = "", xlim = c(0,1),
      main = expression(paste("Prior for ", sigma^2)), 
      col = "darkred")

par(mfrow = c(1,1))