Hence, the conditional posterior distribution of b given a is normal with mean b̃ₐ and covariance matrix Ã⁻¹, so it is easily sampled directly, as noted in the above description of the MH algorithm. Observe that b̃ₐ is a matrix-weighted average of the prior mean b̄ and the sample estimate b̂, where the weights are the precisions of b̄ and b̂ conditional on a. This weighting can be interpreted as shrinking the sample estimate b̂ toward its prior mean b̄, with the degree of shrinkage depending on the relative reliability of the sample estimate. This shrinkage effect is discussed further in Section II.
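The precision-weighted shrinkage described above can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the function names, prior precisions, and numerical values are invented for the example, and the combined-precision formula shown is the standard conjugate-normal result that the matrix-weighted average in the text corresponds to.

```python
import numpy as np

def shrinkage_mean(b_prior, P_prior, b_hat, P_sample):
    """Matrix-weighted average of the prior mean and the sample estimate,
    with weights given by their precisions (inverse covariance matrices)."""
    P_post = P_prior + P_sample  # combined (posterior) precision
    # Posterior mean solves P_post @ b_post = P_prior @ b_prior + P_sample @ b_hat
    b_post = np.linalg.solve(P_post, P_prior @ b_prior + P_sample @ b_hat)
    return b_post, np.linalg.inv(P_post)  # conditional posterior mean, covariance

# Illustrative numbers: a tight prior (high precision) relative to a noisy
# sample estimate pulls the posterior mean most of the way toward the prior.
b_prior = np.array([0.0, 1.0])   # prior mean
b_hat = np.array([0.5, 2.0])     # sample estimate
P_prior = np.eye(2) * 4.0        # prior precision (tight)
P_sample = np.eye(2) * 1.0       # sample precision (noisy estimate)

b_post, cov_post = shrinkage_mean(b_prior, P_prior, b_hat, P_sample)
print(b_post)  # → [0.1 1.2], i.e., 80% of the way from b_hat back to b_prior
```

With equal precisions the posterior mean would sit halfway between the two; as the sample precision grows relative to the prior precision, the shrinkage toward the prior mean vanishes, matching the "relative reliability" interpretation in the text.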
We find that the first and second posterior moments of b, computed using the MH algorithm, are well approximated by the moments of the conditional posterior density of b given a, evaluated at a reasonable estimate of a (using (25) and (26)). An estimate of a for this purpose is computed in two steps. First, using (26), the posterior mean of b conditional on a = â is computed, and its value is denoted b*.
The final estimate of a is computed as the posterior mean of a conditioned on b = b*, using the conditional posterior density for a that arises when a and b are made independent in the normal-inverted-gamma prior (i.e., the proposal density for a in the MH algorithm). In the empirical analyses in Sections II and III, we present a one-stock example based on the MH algorithm, but we use the approximation to compute posterior moments for a large number of stocks, since performing the MH algorithm for each stock would be computationally prohibitive.
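The two-step approximation can be sketched in a deliberately simplified setting. This is a toy version under strong assumptions that are not in the paper: b is a scalar location parameter, a is a residual variance with an inverted-gamma conditional (so that both conditional posterior means have closed forms), and all data and hyperparameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=1.0, size=200)  # illustrative data
n = y.size

# Illustrative hyperparameters: normal prior on b, inverted-gamma prior on a,
# treated as independent (as in the proposal density for a in the MH algorithm).
b0, tau2 = 0.0, 10.0   # prior mean and prior variance of b
nu0, s0 = 4.0, 1.0     # inverted-gamma shape/scale hyperparameters for a

# Step 1: posterior mean of b conditional on a fixed at an initial estimate,
# via the usual precision-weighted average; denote the result b_star.
a_init = y.var(ddof=1)
prec = 1.0 / tau2 + n / a_init
b_star = (b0 / tau2 + y.sum() / a_init) / prec

# Step 2: posterior mean of a conditional on b = b_star. For an
# inverted-gamma posterior with shape > 1, the mean is scale / (shape - 1).
shape = nu0 / 2 + n / 2
scale = (nu0 * s0 + ((y - b_star) ** 2).sum()) / 2
a_hat = scale / (shape - 1)
print(b_star, a_hat)  # final plug-in estimates of b and a
```

The point of the two steps is computational: a single pass through two closed-form conditional means replaces a full MH run, which is what makes the approximation feasible for a large cross-section of stocks.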