Stein’s Paradox and What Partial Pooling Can Do For You

tl;dr

Sometimes a mathematical result is strikingly contrary to generally held belief even though an obviously valid proof is given. Charles Stein of Stanford University discovered such a paradox in statistics in 1955. His result undermined a century and a half of work on estimation theory. (Efron & Morris, 1977, p. 119)

The James-Stein estimator leads to better predictions than simple means. Though I don’t recommend you actually use the James-Stein estimator in applied research, understanding why it works might help clarify why it’s time for social scientists to consider defaulting to multilevel models for their workaday projects.

The James-Stein estimator can help us understand multilevel models.

I recently noticed someone—I wish I could recall who—tweet about Efron and Morris’s classic, Stein’s Paradox in Statistics. At the time, I was vaguely aware of the paper but hadn’t taken the chance to read it. The tweet’s author mentioned how good a read it was. Now that I’ve read it, I concur. I’m not a sports fan, but I really appreciated their primary example using batting averages from baseball players in 1970. It clarified why partial pooling leads to better estimates than taking simple averages.

In this project, I’ll walk through Efron and Morris’s baseball example in R and then link it to contemporary Bayesian multilevel models.

I assume things.

For this project, I’m presuming you are familiar with logistic regression, vaguely familiar with the basic differences between frequentist and Bayesian approaches to fitting regression models, and have heard of multilevel models. All code is in R, with heavy use of the tidyverse—which you might learn a lot about here, especially chapter 5—and the brms package for Bayesian regression.

Behold the baseball data.

Stein’s paradox concerns the use of observed averages to estimate unobservable quantities. Averaging is the second most basic process in statistics, the first being the simple act of counting. A baseball player who gets seven hits in 20 official times at bat is said to have a batting average of .350. In computing this statistic we are forming an estimate of the player’s true batting ability in terms of his observed average rate of success. Asked how well the player will do in his next 100 times at bat, we would probably predict 35 more hits. In traditional statistical theory it can be proved that no other estimation rule is uniformly better than the observed average.

The paradoxical element in Stein’s result is that it sometimes contradicts this elementary law of statistical theory. If we have three or more baseball players, and if we are interested in predicting future batting averages for each of them, then there is a procedure that is better than simply extrapolating from the three separate averages…

As our primary data we shall consider the batting averages of 18 major-league players as they were recorded after their first 45 times at bat in the 1970 season. (p. 119)
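
Just to anchor that warm-up arithmetic in R:

# the paper's warm-up example: 7 hits in 20 official times at bat
7 / 20        # the observed batting average, .350
0.350 * 100   # the naive prediction for the next 100 times at bat, 35 hits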

Let’s enter the baseball data.

library(tidyverse)

baseball <- 
  tibble(player = c("Clemente", "F Robinson", "F Howard", "Johnstone", "Berry", "Spencer", "Kessinger", "L Alvarado", "Santo", "Swoboda", "Unser", "Williams", "Scott", "Petrocelli", "E Rodriguez", "Campaneris", "Munson", "Alvis"),
         hits = c(18:15, 14, 14:12, 11, 11, rep(10, times = 5), 9:7),
         times_at_bat = 45,
         true_ba = c(.346, .298, .276, .222, .273, .27, .263, .21, .269, .23, .264, .256, .303, .264, .226, .286, .316, .2))

Here’s what they look like.

glimpse(baseball)
## Observations: 18
## Variables: 4
## $ player       <chr> "Clemente", "F Robinson", "F Howard", "Johnstone", "Berry", "Spencer", "Kess…
## $ hits         <dbl> 18, 17, 16, 15, 14, 14, 13, 12, 11, 11, 10, 10, 10, 10, 10, 9, 8, 7
## $ times_at_bat <dbl> 45, 45, 45, 45, 45, 45, 45, 45, 45, 45, 45, 45, 45, 45, 45, 45, 45, 45
## $ true_ba      <dbl> 0.346, 0.298, 0.276, 0.222, 0.273, 0.270, 0.263, 0.210, 0.269, 0.230, 0.264,…

We have data from 18 players. The main columns are hits, the number of hits each player got in his first 45 times_at_bat. I got the player, hits, and times_at_bat values directly from the paper. However, Efron and Morris didn’t include the end-of-the-season batting averages in the paper. Happily, I was able to find those values online. They’re included in the true_ba column.

…These were all the players who happened to have batted exactly 45 times the day the data were tabulated. A batting average is defined, of course, simply as the number of hits divided by the number of times at bat; it is always a number between 0 and 1. (p. 119)

I like to use a lot of plots to better understand what I’m doing. Before we start plotting, I should point out the color theme in this project comes from here. [Haters gonna hate.]

navy_blue <- "#0C2C56"
nw_green  <- "#005C5C"  
silver    <- "#C4CED4"
theme_set(theme_grey() +
            theme(panel.grid = element_blank(),
                  panel.background = element_rect(fill = silver),
                  strip.background = element_rect(fill = silver)))

We might use a histogram to get a sense of the hits.

baseball %>% 
  ggplot(aes(x = hits)) +
  geom_histogram(color = nw_green,
                 fill  = navy_blue,
                 size  = 1/10, binwidth = 1) +
  scale_x_continuous("hits during the first 45 trials",
                     breaks = 7:18)

And here is the distribution of the end-of-the-season batting averages, true_ba.

library(tidybayes)

baseball %>% 
  ggplot(aes(x = true_ba, y = 0)) +
  geom_halfeyeh(color = navy_blue,
                fill  = alpha(nw_green, 2/3),
                point_interval = median_qi, .width = .5) +
  geom_rug(color = navy_blue,
           size = 1/3, alpha = 1/2) +
  ggtitle(NULL, subtitle = "The dot and horizontal line are the median and\ninterquartile range, respectively.")

James-Stein will help us achieve our goal.

For each of the 18 players in the data, our goal is to do the best job we can of using the data from their first 45 times at bat (i.e., hits and times_at_bat) to predict their batting averages at the end of the season (i.e., true_ba). Before Charles Stein, the conventional reasoning was that their initial batting averages (i.e., hits / times_at_bat) were the best way to do this. It turns out that would be naïve. To see why, let

  • y (i.e., \(y\)) = the batting average for the first 45 times at bat
  • y_bar (i.e., \(\overline y\)) = the grand mean for the first 45 times at bat
  • c (i.e., \(c\)) = shrinking factor
  • z (i.e., \(z\)) = James-Stein estimate
  • true_ba (i.e., theta, \(\theta\)) = the batting average at the end of the season

The first step in applying Stein’s method is to determine the average of the averages. Obviously this grand average, which we give the symbol \(\overline y\), must also lie between 0 and 1. The essential process in Stein’s method is the “shrinking” of all the individual averages toward this grand average. If a player’s hitting record is better than the grand average, then it must be reduced; if he is not hitting as well as the grand average, then his hitting record must be increased. The resulting shrunken value for each player we designate \(z\). (p. 119)

As such, the James-Stein estimator is:

\[z = \overline y + c(y - \overline y)\]

And in the paper, \(c = .212\). Let’s get some of those values into the baseball data.

(
  baseball <-
  baseball %>% 
  mutate(y     = hits / times_at_bat) %>% 
  mutate(y_bar = mean(y),
         c     = .212) %>% 
  mutate(z     = y_bar + c * (y - y_bar),
         theta = true_ba)
  )
## # A tibble: 18 x 9
##    player       hits times_at_bat true_ba     y y_bar     c     z theta
##    <chr>       <dbl>        <dbl>   <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
##  1 Clemente       18           45   0.346 0.4   0.265 0.212 0.294 0.346
##  2 F Robinson     17           45   0.298 0.378 0.265 0.212 0.289 0.298
##  3 F Howard       16           45   0.276 0.356 0.265 0.212 0.285 0.276
##  4 Johnstone      15           45   0.222 0.333 0.265 0.212 0.280 0.222
##  5 Berry          14           45   0.273 0.311 0.265 0.212 0.275 0.273
##  6 Spencer        14           45   0.27  0.311 0.265 0.212 0.275 0.27 
##  7 Kessinger      13           45   0.263 0.289 0.265 0.212 0.270 0.263
##  8 L Alvarado     12           45   0.21  0.267 0.265 0.212 0.266 0.21 
##  9 Santo          11           45   0.269 0.244 0.265 0.212 0.261 0.269
## 10 Swoboda        11           45   0.23  0.244 0.265 0.212 0.261 0.23 
## 11 Unser          10           45   0.264 0.222 0.265 0.212 0.256 0.264
## 12 Williams       10           45   0.256 0.222 0.265 0.212 0.256 0.256
## 13 Scott          10           45   0.303 0.222 0.265 0.212 0.256 0.303
## 14 Petrocelli     10           45   0.264 0.222 0.265 0.212 0.256 0.264
## 15 E Rodriguez    10           45   0.226 0.222 0.265 0.212 0.256 0.226
## 16 Campaneris      9           45   0.286 0.2   0.265 0.212 0.252 0.286
## 17 Munson          8           45   0.316 0.178 0.265 0.212 0.247 0.316
## 18 Alvis           7           45   0.2   0.156 0.265 0.212 0.242 0.2
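
As a quick sanity check on the formula, here’s Clemente’s \(z\) computed by hand, plugging in his early-season average (\(y = .400\)), the grand mean (\(\overline y \approx .2654\)), and \(c = .212\).

# the James-Stein estimate for Clemente, by hand: y_bar + c * (y - y_bar)
0.2654321 + 0.212 * (0.400 - 0.2654321)  # about 0.294, matching the z column above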

Which set of values, \(y\) or \(z\), is the better indicator of batting ability for the 18 players in our example? In order to answer that question in a precise way one would have to know the “true batting ability” of each player. This true average we shall designate \(\theta\) (the Greek letter theta). Actually it is an unknowable quantity, an abstraction representing the probability that a player will get a hit on any given time at bat. Although \(\theta\) is unobservable, we have a good approximation to it: the subsequent performance of the batters. It is sufficient to consider just the remainder of the 1970 season, which includes about nine times as much data as the preliminary averages were based on. (p. 119)

Now that we have both \(y\) and \(z\) in the data, let’s compare their distributions.

baseball %>% 
  select(y, z) %>% 
  gather() %>% 
  mutate(label = ifelse(key == "z", 
                        "the James-Stein estimate", 
                        "early-season batting average")) %>% 
  
  ggplot(aes(x = value, y = label)) +
  geom_vline(color = "white",
             xintercept = 0.2654321, linetype = 2) +
  geom_halfeyeh(color = navy_blue,
                fill  = alpha(nw_green, 2/3),
                point_interval = median_qi, .width = .5, relative_scale = 4) +
  labs(x = "batting average", y = NULL) +
  coord_cartesian(ylim = c(1.25, 5.25))

As implied by the formula, the James-Stein estimates are substantially shrunken toward the grand mean, y_bar. To get a sense of which estimate is better, we can subtract each estimate from theta, the end-of-the-season batting average.

baseball <-
  baseball %>% 
  mutate(y_error = theta - y,
         z_error = theta - z)

Since y_error and z_error are error distributions, we prefer their values to be as close to zero as possible. Let’s take a look.

baseball %>% 
  select(y_error:z_error) %>% 
  gather() %>% 
  
  ggplot(aes(x = value, y = key)) +
  geom_vline(xintercept = 0, linetype = 2,
             color = "white") +
  geom_halfeyeh(color = navy_blue,
                fill  = alpha(nw_green, 2/3),
                point_interval = median_qi, .width = .5, relative_scale = 2.5) +
  labs(x = NULL, y = NULL) +
  coord_cartesian(ylim = c(1.25, 4))

The James-Stein errors (i.e., z_error) are much more concentrated toward zero. In the paper, we read: “One method of evaluating the two estimates is by simply counting their successes and failures. For 16 of the 18 players the James-Stein estimator \(z\) is closer than the observed average \(y\) to the ‘true,’ or seasonal, average \(\theta\)” (pp. 119–121). We can compute that with a little ifelse().

baseball %>% 
  transmute(closer_to_theta = ifelse(abs(y_error) - abs(z_error) == 0, "equal",
                                     ifelse(abs(y_error) - abs(z_error) > 0, "z", "y"))) %>% 
  group_by(closer_to_theta) %>% 
  count()
## # A tibble: 2 x 2
## # Groups:   closer_to_theta [2]
##   closer_to_theta     n
##   <chr>           <int>
## 1 y                   2
## 2 z                  16

A more quantitative way of comparing the two techniques is through the total squared error of estimation… The observed averages \(y\) have a total squared error of .077, whereas the squared error of the James-Stein estimators is only .022. By this comparison, then, Stein’s method is 3.5 times as accurate. (p. 121)

baseball %>% 
  select(y_error:z_error) %>% 
  gather() %>% 
  group_by(key) %>% 
  summarise(total_squared_error = sum(value * value))
## # A tibble: 2 x 2
##   key     total_squared_error
##   <chr>                 <dbl>
## 1 y_error              0.0755
## 2 z_error              0.0214

We can get the 3.5 value with simple division.

0.07548795 / 0.02137602
## [1] 3.531431

So it does indeed turn out that shrinking each player’s initial estimate toward the grand mean of those initial estimates does a better job of predicting their end-of-the-season batting averages than using their individual batting averages. To get a sense of what this looks like, let’s make our own version of the figure on page 121.

baseball %>% 
  select(y, z, theta, player) %>% 
  gather(key, value, -player) %>% 
  mutate(time = ifelse(key == "theta", "theta", "estimate")) %>% 
  bind_rows(
    baseball %>% 
      select(player, theta) %>% 
      rename(value = theta) %>% 
      mutate(key  = "theta", 
             time = "theta")
  ) %>% 
  mutate(facet = rep(c("estimate = y", "estimate = z"), each = n() / 4) %>% rep(., times = 2)) %>% 
  
  ggplot(aes(x = time, y = value, group = player)) +
  geom_hline(yintercept = 0.2654321, linetype = 2,
             color = "white") +
  geom_line(alpha = 1/2,
            color = nw_green) +
  geom_point(alpha = 1/2,
             color = navy_blue) +
  labs(x = NULL,
       y = "batting average") +
  theme(axis.ticks.x = element_blank()) +
  facet_wrap(~facet)

The James-Stein estimator works because of its shrinkage. The shrinkage factor is \(c\). In the first parts of the paper, Efron and Morris just told us \(c = .212\). A little later in the paper, they gave the actual formula for \(c\). If you let \(k\) be the number of means (i.e., the number of clusters), then:

\[c = 1 - \frac{(k - 3)\sigma^2}{\sum (y - \overline y)^2}\]

The difficulty with that formula is we don’t know the value of \(\sigma^2\). It’s not the simple variance of \(y\) (i.e., var(y)). An answer to this stackexchange question appears to have uncovered the method Efron and Morris used in the paper: treat \(\sigma^2\) as the binomial sampling variance of an average based on 45 times at bat, \(\sigma^2 = \overline y (1 - \overline y) / 45\).

Following along, we can compute sigma_squared like so:

(sigma_squared <- mean(baseball$y) * (1 - mean(baseball$y)) / 45)
## [1] 0.004332842

Now we can reproduce the \(c\) value from the paper.

baseball %>% 
  select(player, y:c) %>% 
  mutate(squared_deviation = (y - y_bar)^2) %>%
  summarise(c_by_hand = 1 - ((n() - 3) * sigma_squared / sum(squared_deviation)))
## # A tibble: 1 x 1
##   c_by_hand
##       <dbl>
## 1     0.212

Let’s go Bayesian.

This has been fun. But I don’t recommend you actually use the James-Stein estimator in your research.

The James-Stein estimator is not the only one that is known to be better than the sample averages…

The search for new estimators continues. Recent efforts [in the 1970s, that is] have been concentrated on achieving results like those obtained with Stein’s method for problems involving distributions other than the normal distribution. Several lines of work, including Stein’s and Robbins’ and more formal Bayesian methods seem to be converging on a powerful general theory of parameter estimation. (p. 127, emphasis added)

The James-Stein estimator is not Bayesian, but it is a precursor to the kind of analyses we now do with Bayesian multilevel models, which pool cluster-level means toward a grand mean. To get a sense of this, let’s fit a couple models. First, let’s load the brms package.

library(brms)

I typically work with the linear regression paradigm. If we were to analyze the baseball data, we’d use an aggregated binomial model, which is a particular kind of logistic regression. You can learn more about it here or here. If we wanted a model that corresponded to the \(y\) estimates, above, we’d use hits as the criterion and allow each player to get his own separate estimate. Since we’re working within the Bayesian paradigm, we also need to assign priors. In this case, we’ll use a weakly-regularizing \(\text{Normal}(0, 1.5)\) prior on the intercepts. See this wiki for more on weakly-regularizing priors.
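
As a quick point of reference for the no-pooling idea, here’s a minimal frequentist sketch using base R’s glm(). Its maximum likelihood estimates, converted from the log-odds scale back to probabilities, are exactly the observed batting averages in y. (The fit_glm name is just for this sketch.)

# the frequentist no-pooling analogue: a separate intercept for each player
fit_glm <- glm(cbind(hits, times_at_bat - hits) ~ 0 + player,
               family = binomial, data = baseball)

# convert the player-specific log-odds back to the probability metric
plogis(coef(fit_glm))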

Here’s the code to fit the model in brms.

fit_y <-
  brm(data = baseball, family = binomial,
      hits | trials(45) ~ 0 + player,
      prior(normal(0, 1.5), class = b),
      seed = 1)

If you were curious, that model followed the statistical formula

\[ \begin{eqnarray} \text{hits}_i & \sim & \text{Binomial} (n = 45, p_i) \\ \text{logit}(p_i) & = & \alpha_\text{player} \\ \alpha_\text{player} & \sim & \text{Normal} (0, 1.5) \end{eqnarray} \]

where \(p_i\) is the probability of a hit for player \(i\), \(\alpha_\text{player}\) is a vector of \(\text{player}\)-specific intercepts from within the logistic regression model, and each of those intercepts is given a \(\text{Normal}(0, 1.5)\) prior on the log-odds scale. (If this is all new and confusing, don’t worry. I’ll recommend some resources at the end of this post.)
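
In case you’re wondering what a \(\text{Normal}(0, 1.5)\) prior on the log-odds scale implies for batting averages, here’s a quick simulation sketch. It draws from the prior and converts the draws to the probability metric with base R’s plogis(); the middle 95% of the prior spans batting averages from roughly .05 to .95, which is why we call it weakly regularizing.

# a prior-predictive sketch: what does Normal(0, 1.5) on the log-odds scale
# imply on the batting-average (i.e., probability) scale?
set.seed(1)

tibble(log_odds = rnorm(1e5, mean = 0, sd = 1.5)) %>% 
  mutate(p = plogis(log_odds)) %>% 
  summarise(`2.5%`  = quantile(p, .025),
            `50%`   = quantile(p, .5),
            `97.5%` = quantile(p, .975))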

For our analogue to the James-Stein estimate \(z\), we’ll fit the multilevel version of that last model. While each player still gets his own estimate, those estimates are now partially-pooled toward the grand mean.

fit_z <-
  brm(data = baseball, family = binomial,
      hits | trials(45) ~ 1 + (1 | player),
      prior = c(prior(normal(0, 1.5), class = Intercept),
                prior(normal(0, 1.5), class = sd)),
      seed = 1)

And that model followed the statistical formula

\[ \begin{eqnarray} \text{hits}_i & \sim & \text{Binomial} (n = 45, p_i) \\ \text{logit}(p_i) & = & \alpha + \alpha_\text{player} \\ \alpha & \sim & \text{Normal} (0, 1.5) \\ \alpha_\text{player} & \sim & \text{Normal} (0, \sigma_\text{player}) \\ \sigma_\text{player} & \sim & \text{HalfNormal} (0, 1.5) \end{eqnarray} \]

where \(\alpha\) is the grand mean among the \(\text{player}\)-specific intercepts and \(\alpha_\text{player}\) is the vector of \(\text{player}\)-specific deviations from that grand mean. Those deviations are Normally distributed with a mean of zero and a standard deviation of \(\sigma_\text{player}\), which is itself estimated from the data.
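
In terms of brms output, \(\alpha\) lives in fixef() and the \(\alpha_\text{player}\) deviations live in ranef(). Adding the two recovers each player’s intercept on the log-odds scale, as in this little sketch (the posterior means add up exactly; the intervals don’t, which is part of why we’ll rely on coef() later on).

# the grand mean (alpha) plus the player-specific deviations (alpha_player),
# both as posterior means, on the log-odds scale
fixef(fit_z)["Intercept", "Estimate"] +
  ranef(fit_z)$player[, "Estimate", "Intercept"]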

Here are the model summaries.

fit_y$fit
## Inference for Stan model: 1d2456d7f7a08ebf8ef5fda01ce9b808.
## 4 chains, each with iter=2000; warmup=1000; thin=1; 
## post-warmup draws per chain=1000, total post-warmup draws=4000.
## 
##                      mean se_mean   sd   2.5%    25%    50%    75%  97.5% n_eff Rhat
## b_playerAlvis       -1.63    0.00 0.39  -2.44  -1.89  -1.62  -1.35  -0.89  7862    1
## b_playerBerry       -0.78    0.00 0.31  -1.41  -0.98  -0.77  -0.56  -0.19  8886    1
## b_playerCampaneris  -1.34    0.00 0.35  -2.05  -1.57  -1.33  -1.10  -0.69  7628    1
## b_playerClemente    -0.39    0.00 0.30  -0.99  -0.59  -0.39  -0.19   0.18 10134    1
## b_playerERodriguez  -1.21    0.00 0.35  -1.93  -1.45  -1.20  -0.97  -0.55 10145    1
## b_playerFHoward     -0.59    0.00 0.31  -1.22  -0.79  -0.58  -0.38   0.03 10787    1
## b_playerFRobinson   -0.49    0.00 0.30  -1.08  -0.69  -0.49  -0.28   0.10 10544    1
## b_playerJohnstone   -0.69    0.00 0.31  -1.32  -0.88  -0.68  -0.48  -0.09  9763    1
## b_playerKessinger   -0.88    0.00 0.33  -1.55  -1.10  -0.88  -0.67  -0.28  9094    1
## b_playerLAlvarado   -0.98    0.00 0.32  -1.63  -1.20  -0.97  -0.76  -0.37 10622    1
## b_playerMunson      -1.48    0.00 0.38  -2.27  -1.72  -1.46  -1.21  -0.77 11067    1
## b_playerPetrocelli  -1.21    0.00 0.33  -1.89  -1.43  -1.20  -0.99  -0.59  9253    1
## b_playerSanto       -1.10    0.00 0.33  -1.78  -1.32  -1.09  -0.87  -0.46  9619    1
## b_playerScott       -1.22    0.00 0.36  -1.94  -1.45  -1.20  -0.98  -0.54 10948    1
## b_playerSpencer     -0.78    0.00 0.33  -1.45  -0.99  -0.77  -0.55  -0.14  8511    1
## b_playerSwoboda     -1.10    0.00 0.35  -1.81  -1.33  -1.10  -0.87  -0.42 10665    1
## b_playerUnser       -1.21    0.00 0.35  -1.92  -1.44  -1.21  -0.97  -0.54 11893    1
## b_playerWilliams    -1.22    0.00 0.35  -1.96  -1.45  -1.20  -0.97  -0.56  8597    1
## lp__               -73.45    0.08 2.93 -79.97 -75.21 -73.15 -71.34 -68.48  1444    1
## 
## Samples were drawn using NUTS(diag_e) at Sat Feb 23 17:19:53 2019.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at 
## convergence, Rhat=1).
fit_z$fit
## Inference for Stan model: 33295e60ce033f843c74128ac973bc03.
## 4 chains, each with iter=2000; warmup=1000; thin=1; 
## post-warmup draws per chain=1000, total post-warmup draws=4000.
## 
##                                   mean se_mean   sd   2.5%    25%    50%    75%  97.5% n_eff Rhat
## b_Intercept                      -1.02    0.00 0.09  -1.21  -1.08  -1.02  -0.96  -0.84  3116    1
## sd_player__Intercept              0.17    0.00 0.11   0.01   0.08   0.16   0.24   0.42  1643    1
## r_player[Alvis,Intercept]        -0.13    0.00 0.20  -0.62  -0.22  -0.07   0.00   0.17  2411    1
## r_player[Berry,Intercept]         0.05    0.00 0.17  -0.26  -0.03   0.02   0.13   0.44  4251    1
## r_player[Campaneris,Intercept]   -0.08    0.00 0.17  -0.51  -0.16  -0.04   0.02   0.21  3621    1
## r_player[Clemente,Intercept]      0.14    0.00 0.20  -0.13   0.00   0.09   0.25   0.63  2902    1
## r_player[E.Rodriguez,Intercept]  -0.05    0.00 0.16  -0.43  -0.12  -0.02   0.04   0.27  4722    1
## r_player[F.Howard,Intercept]      0.09    0.00 0.18  -0.20  -0.01   0.05   0.19   0.54  3081    1
## r_player[F.Robinson,Intercept]    0.12    0.00 0.19  -0.17   0.00   0.07   0.22   0.58  2766    1
## r_player[Johnstone,Intercept]     0.07    0.00 0.17  -0.22  -0.02   0.04   0.15   0.47  4122    1
## r_player[Kessinger,Intercept]     0.03    0.00 0.16  -0.29  -0.05   0.01   0.09   0.40  4051    1
## r_player[L.Alvarado,Intercept]    0.00    0.00 0.17  -0.37  -0.08   0.00   0.08   0.36  4060    1
## r_player[Munson,Intercept]       -0.10    0.00 0.19  -0.59  -0.18  -0.05   0.01   0.19  3625    1
## r_player[Petrocelli,Intercept]   -0.05    0.00 0.17  -0.46  -0.14  -0.02   0.04   0.25  4014    1
## r_player[Santo,Intercept]        -0.02    0.00 0.16  -0.40  -0.09  -0.01   0.05   0.30  4388    1
## r_player[Scott,Intercept]        -0.05    0.00 0.17  -0.45  -0.13  -0.02   0.04   0.26  3650    1
## r_player[Spencer,Intercept]       0.05    0.00 0.17  -0.27  -0.04   0.02   0.13   0.43  3611    1
## r_player[Swoboda,Intercept]      -0.03    0.00 0.16  -0.38  -0.10  -0.01   0.05   0.28  4562    1
## r_player[Unser,Intercept]        -0.05    0.00 0.16  -0.44  -0.13  -0.02   0.04   0.25  3412    1
## r_player[Williams,Intercept]     -0.05    0.00 0.17  -0.44  -0.13  -0.02   0.04   0.26  4306    1
## lp__                            -73.87    0.13 4.11 -82.53 -76.49 -73.67 -71.00 -66.54  1053    1
## 
## Samples were drawn using NUTS(diag_e) at Sat Feb 23 17:20:43 2019.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at 
## convergence, Rhat=1).

If you’re new to aggregated binomial or logistic regression, those estimates might be confusing. For technical reasons—see here—they’re in a log-odds metric. But we can use the brms::inv_logit_scaled() function to convert them back to a probability metric. Why would we want a probability metric, you might ask? As it turns out, batting average is itself a probability metric. So you might also think of the inv_logit_scaled() function as turning the model results into a batting-average metric. For example, if we wanted the estimated batting average for E. Rodriguez based on the fit_y model (i.e., the model corresponding to the \(y\) estimator), we might do something like this.

fixef(fit_y)["playerERodriguez", 1] %>% 
  inv_logit_scaled()
## [1] 0.2293629

To double check the model returned a sensible estimate, here’s the corresponding y value from the baseball data.

baseball %>% 
  filter(player == "E Rodriguez") %>% 
  select(y)
## # A tibble: 1 x 1
##       y
##   <dbl>
## 1 0.222

It’s a little off, but in the right ballpark. Here is the corresponding estimate from the multilevel model, fit_z:

coef(fit_z)$player["E Rodriguez", 1, ] %>% inv_logit_scaled()
## [1] 0.2558496

And indeed that’s pretty close to the z value from the baseball data, too.

baseball %>% 
  filter(player == "E Rodriguez") %>% 
  select(z)
## # A tibble: 1 x 1
##       z
##   <dbl>
## 1 0.256

Now that we have these two competing ways to model the data from the first 45 times at bat, let’s see how well their estimates predict the true_ba values. We’ll do so with a couple plots. This first one is for the single-level model, which did not pool the batting averages.

# get the `fitted()` draws and wrangle a bit
f_y <-
  baseball %>% 
  distinct(player) %>% 
  add_fitted_draws(fit_y, dpar = "mu") %>% 
  left_join(baseball %>% 
              select(player, true_ba))
  
# save the plot
p1 <-
  f_y %>% 
  ggplot(aes(x = mu, y = reorder(player, true_ba))) +
  geom_vline(xintercept = mean(baseball$true_ba), color = "white") +
  stat_intervalh(.width = .95, alpha = 1/3, color = nw_green) +
  stat_intervalh(.width = .50, alpha = 1/3, color = nw_green) +
  geom_point(data = baseball,
             aes(x = true_ba),
             size = 2, alpha = 3/4,
             color = navy_blue) +
  labs(x = "batting average", 
       y = NULL,
       subtitle = "fit_y, the no pooling model") +
  coord_cartesian(xlim = c(0, .6)) +
  theme(axis.text.y   = element_text(hjust = 0),
        axis.ticks.y  = element_blank(),
        plot.subtitle = element_text(hjust = .5))

Note our use of some handy convenience functions (i.e., add_fitted_draws() and stat_intervalh()) from the tidybayes package.

This second plot is almost the same as the previous one, but this time based on the partial-pooling multilevel model.

f_z <-
  baseball %>% 
  distinct(player) %>% 
  add_fitted_draws(fit_z, dpar = "mu") %>% 
  left_join(baseball %>% 
              select(player, true_ba))

p2 <-
  f_z %>% 
  ggplot(aes(x = mu, y = reorder(player, true_ba))) +
  geom_vline(xintercept = mean(baseball$true_ba), color = "white") +
  stat_intervalh(.width = .95, alpha = 1/3, color = nw_green) +
  stat_intervalh(.width = .50, alpha = 1/3, color = nw_green) +
  geom_point(data = baseball,
             aes(x = true_ba),
             size = 2, alpha = 3/4,
             color = navy_blue) +
  labs(x = "batting average", 
       y = NULL,
       subtitle = "fit_z, the multilevel pooling model") +
  coord_cartesian(xlim = c(0, .6)) +
  theme(axis.text.y   = element_text(hjust = 0),
        axis.ticks.y  = element_blank(),
        plot.subtitle = element_text(hjust = .5))

Here we join them together.

library(gridExtra)

grid.arrange(p1, p2, ncol = 2)

In both panels, the end-of-the-season batting averages (i.e., \(\theta\)) are the blue dots. The model-implied estimates are depicted by 95% and 50% interval bands (i.e., the lighter and darker green horizontal lines, respectively). The white line in the background marks off the mean of \(\theta\). Although neither model was perfect, the multilevel model, our analogue to the James-Stein estimates, yielded predictions that appear both more valid and more precise.

We might also compare the models by their prediction errors. Here we’ll subtract the end-of-the-season batting averages from the model estimates. But unlike the y and z estimates above, our fit_y and fit_z models yielded entire posterior distributions. Therefore, we’ll express our prediction errors as error distributions rather than single values.

# save the `fit_y` plot
p3 <-
  f_y %>% 
  # the error distribution is just the model-implied values minus 
  # the true end-of-season values
  mutate(error = mu - true_ba)  %>% 

  ggplot(aes(x = error, y = reorder(player, true_ba))) +
  geom_vline(xintercept = c(0, -.2, .2), size = c(1/2, 1/4, 1/4), 
             linetype = c(1, 3, 3), color = "white") +
  geom_halfeyeh(point_interval = mean_qi, .width = .95,
                color = navy_blue, fill = alpha(nw_green, 2/3)) +
  coord_cartesian(xlim = c(-.35, .35)) +
  labs(x = "error", 
       y = NULL,
       subtitle = "fit_y, the no pooling model") +
  theme(axis.text.y   = element_text(hjust = 0),
        axis.ticks.y  = element_blank(),
        plot.subtitle = element_text(hjust = .5))

# save the `fit_z` plot
p4 <-
  f_z %>%   
  mutate(error = mu - true_ba)  %>% 
  
  ggplot(aes(x = error, y = reorder(player, true_ba))) +
  geom_vline(xintercept = c(0, -.2, .2), size = c(1/2, 1/4, 1/4), 
             linetype = c(1, 3, 3), color = "white") +
  geom_halfeyeh(point_interval = mean_qi, .width = .95,
                color = navy_blue, fill = alpha(nw_green, 2/3)) +
  coord_cartesian(xlim = c(-.35, .35)) +
  labs(x = "error", 
       y = NULL,
       subtitle = "fit_z, the multilevel pooling model") +
  theme(axis.text.y   = element_text(hjust = 0),
        axis.ticks.y  = element_blank(),
        plot.subtitle = element_text(hjust = .5))

# now combine the two and behold
grid.arrange(p3, p4, ncol = 2)

For consistency, I’ve ordered the players along the y-axis the same as above. In both panels, we see the prediction error distribution for each player in green and then summarize those distributions in terms of their means and percentile-based 95% intervals. Since these are error distributions, we prefer them to be as close to zero as possible. Although neither model made perfect predictions, the overall errors in the multilevel model were clearly smaller. Much like with the James-Stein estimator, the partial pooling of the multilevel model made for better end-of-the-season estimates.
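
If you’d like a single number in the spirit of the total squared error comparison from earlier, you might summarize the posterior-mean estimates from each model like so. This is just a sketch and the exact values will depend on your posterior draws.

# total squared error of the posterior-mean estimates, by model
bind_rows(f_y %>% mutate(fit = "fit_y, no pooling"),
          f_z %>% mutate(fit = "fit_z, partial pooling")) %>% 
  group_by(fit, player) %>% 
  summarise(estimate = mean(mu),
            true_ba  = mean(true_ba)) %>% 
  summarise(total_squared_error = sum((estimate - true_ba)^2))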

The paradoxical [consequence of Bayesian multilevel models] is that [they can contradict] this elementary law of statistical theory. If we have [two] or more baseball players, and if we are interested in predicting future batting averages for each of them, then [the Bayesian multilevel model can be better] than simply extrapolating from [the] separate averages. (p. 119)

This is another example of how the KISS principle isn’t always the best bet with data analysis.

Next steps

If you’re new to logistic regression, multilevel models or Bayesian statistics, I recommend any of the following texts:

And if you choose Statistical Rethinking, do check out these great lectures on the text or my project translating the code in the text to brms and the tidyverse.

Also, don’t miss the provocative preprint by Davis-Stober, Dana and Rouder, When are sample means meaningful? The role of modern estimation in psychological science.

Session info

sessionInfo()
## R version 3.5.1 (2018-07-02)
## Platform: x86_64-apple-darwin15.6.0 (64-bit)
## Running under: macOS High Sierra 10.13.6
## 
## Matrix products: default
## BLAS: /Library/Frameworks/R.framework/Versions/3.5/Resources/lib/libRblas.0.dylib
## LAPACK: /Library/Frameworks/R.framework/Versions/3.5/Resources/lib/libRlapack.dylib
## 
## locale:
## [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
## 
## attached base packages:
## [1] stats     graphics  grDevices utils     datasets  methods   base     
## 
## other attached packages:
##  [1] gridExtra_2.3   brms_2.7.0      Rcpp_1.0.0      bindrcpp_0.2.2  tidybayes_1.0.3 forcats_0.3.0  
##  [7] stringr_1.3.1   dplyr_0.7.6     purrr_0.2.5     readr_1.1.1     tidyr_0.8.1     tibble_2.0.1   
## [13] ggplot2_3.1.0   tidyverse_1.2.1
## 
## loaded via a namespace (and not attached):
##   [1] colorspace_1.3-2          ggridges_0.5.0            rsconnect_0.8.8          
##   [4] rprojroot_1.3-2           ggstance_0.3              markdown_0.8             
##   [7] base64enc_0.1-3           rstudioapi_0.7            rstan_2.18.2             
##  [10] svUnit_0.7-12             DT_0.4                    fansi_0.4.0              
##  [13] mvtnorm_1.0-8             lubridate_1.7.4           xml2_1.2.0               
##  [16] bridgesampling_0.4-0      knitr_1.20                shinythemes_1.1.1        
##  [19] bayesplot_1.6.0           jsonlite_1.5              broom_0.5.1              
##  [22] shiny_1.1.0               compiler_3.5.1            httr_1.3.1               
##  [25] backports_1.1.2           assertthat_0.2.0          Matrix_1.2-14            
##  [28] lazyeval_0.2.1            cli_1.0.1                 later_0.7.3              
##  [31] prettyunits_1.0.2         htmltools_0.3.6           tools_3.5.1              
##  [34] igraph_1.2.1              coda_0.19-2               gtable_0.2.0             
##  [37] glue_1.3.0                reshape2_1.4.3            cellranger_1.1.0         
##  [40] nlme_3.1-137              blogdown_0.8              crosstalk_1.0.0          
##  [43] xfun_0.3                  ps_1.2.1                  rvest_0.3.2              
##  [46] mime_0.5                  miniUI_0.1.1.1            gtools_3.8.1             
##  [49] MASS_7.3-50               zoo_1.8-2                 scales_1.0.0             
##  [52] colourpicker_1.0          hms_0.4.2                 promises_1.0.1           
##  [55] Brobdingnag_1.2-5         parallel_3.5.1            inline_0.3.15            
##  [58] shinystan_2.5.0           yaml_2.1.19               StanHeaders_2.18.0-1     
##  [61] loo_2.0.0                 stringi_1.2.3             dygraphs_1.1.1.5         
##  [64] pkgbuild_1.0.2            rlang_0.3.1               pkgconfig_2.0.2          
##  [67] matrixStats_0.54.0        evaluate_0.10.1           lattice_0.20-35          
##  [70] bindr_0.1.1               rstantools_1.5.0          htmlwidgets_1.2          
##  [73] labeling_0.3              processx_3.2.1            tidyselect_0.2.4         
##  [76] plyr_1.8.4                magrittr_1.5              bookdown_0.7             
##  [79] R6_2.3.0                  generics_0.0.2            pillar_1.3.1             
##  [82] haven_1.1.2               withr_2.1.2               xts_0.10-2               
##  [85] abind_1.4-5               modelr_0.1.2              crayon_1.3.4             
##  [88] arrayhelpers_1.0-20160527 utf8_1.1.4                rmarkdown_1.10           
##  [91] grid_3.5.1                readxl_1.1.0              callr_3.1.0              
##  [94] threejs_0.3.1             digest_0.6.18             xtable_1.8-2             
##  [97] httpuv_1.4.4.2            stats4_3.5.1              munsell_0.5.0            
## [100] viridisLite_0.3.0         shinyjs_1.0
