Posts

tl;dr When people conclude that results from group-level data will tell them about individual-level processes, they commit the ecological fallacy. This is true even of the individuals whose data contributed to those group-level results. This phenomenon can seem odd and counterintuitive. Keep reading to improve your intuition. We need history. The ecological fallacy is closely related to Simpson’s paradox. It is often attributed to sociologist William S. Robinson’s (1950) paper Ecological Correlations and the Behavior of Individuals.

CONTINUE READING

tl;dr If you are under the impression that group-level data and group-based data analyses will inform you about within-person processes, you would be wrong. Stick around to learn why. This is gonna be a long car ride. Earlier this year I published a tutorial on a statistical technique that allows you to analyze the multivariate time series data of a single individual. It’s called the dynamic p-technique. The method has been around since at least the 1980s (Molenaar, 1985) and its precursors date back to at least the 1940s (Cattell, Cattell, & Rhymer, 1947).

CONTINUE READING

Hit the pause I started this series intending to add semiregular installments. Perhaps I’d add one every other month or so. It turns out I’m just not up to it, right now. Someone important to me died during this plague. Reading and blogging on this book was and has been a way to connect with them—to honor their life. The experience has been rich and raw and meaningful. I hope that not too far off in the future, I’ll be in a better place to take up this task.

CONTINUE READING

Version 1.0.0 In the last post, we covered how the Poisson distribution is handy for modeling count data. Binary data are even weirder than counts. They typically take on only two values: 0 and 1. Sometimes 0 is a stand-in for “no” and 1 for “yes” (e.g., Are you an expert in Bayesian power analysis? For me that would be 0). You can also have data of this kind if you ask people whether they’d like to choose option A or option B.
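
To make that concrete, here’s a minimal sketch of how one might simulate Bernoulli data and fit an intercept-only model with brms. The sample size, the true probability, and the object names below are placeholders for the sake of illustration, not the values from the post.

```r
library(brms)

set.seed(1)
n <- 100
d <- data.frame(y = rbinom(n, size = 1, prob = .7))  # 0/1 outcomes, 70% "yes"

fit <- brm(
  data   = d,
  family = bernoulli(),  # logit link by default
  y ~ 1
)

fixef(fit)                # the intercept, on the log-odds scale
plogis(fixef(fit)[1, 1])  # back-transformed to the probability scale
```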

CONTINUE READING

Version 1.0.0 tl;dr So far we’ve covered Bayesian power simulations from both a null hypothesis orientation (see part I) and a parameter width perspective (see part II). In both instances, we kept things simple and stayed with Gaussian (i.e., normally distributed) data. But not all data follow that form, so it might do us well to expand our skill set a bit. In the next few posts, we’ll cover how we might perform power simulations with other kinds of data.

CONTINUE READING

Version 1.0.0 tl;dr When researchers decide on a sample size for an upcoming project, there are more things to consider than null-hypothesis-oriented power. Bayesian researchers might like to frame their concerns in terms of precision. Stick around to learn what that means and how to do it. Are Bayesians doomed to refer to \(H_0\) when planning their sample sizes? If you read my last post, you may have found yourself thinking: Sure, last time you avoided computing \(p\)-values with your 95% Bayesian credible intervals.
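
If precision is the focus, the quantity of interest is the width of the posterior interval rather than a \(p\)-value. As a rough sketch (assuming a fitted brms model called fit with a population-level intercept named b_Intercept), you might compute that width like so:

```r
library(brms)
library(posterior)

draws <- as_draws_df(fit)                                 # posterior draws as a data frame
ci    <- quantile(draws$b_Intercept, probs = c(.025, .975))
unname(ci["97.5%"] - ci["2.5%"])                          # width of the 95% credible interval
```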

CONTINUE READING

Version 1.0.0 tl;dr If you’d like to learn how to do Bayesian power calculations using brms, stick around for this multi-part blog series. Here with part I, we’ll set the foundation. Power is hard, especially for Bayesians. Many journals, funding agencies, and dissertation committees require power calculations for your primary analyses. Frequentists have a variety of tools available to perform these calculations (e.g., here). Bayesians, however, have a more difficult time of it.
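
For a sense of where the series is headed, here’s a bare-bones sketch of the simulation approach: pick an assumed effect size, simulate many data sets under it, fit the model to each, and record how often the 95% credible interval excludes zero. Everything below (the effect size, the n, the number of simulations) is a placeholder, not the post’s actual code.

```r
library(brms)

# simulate a two-group data set with an assumed standardized effect
sim_d <- function(seed, n = 50, effect = .5) {
  set.seed(seed)
  data.frame(group = rep(0:1, each = n),
             y     = rnorm(2 * n, mean = rep(c(0, effect), each = n), sd = 1))
}

# compile the model once ...
fit <- brm(y ~ group, data = sim_d(1), family = gaussian(), chains = 4, cores = 4)

# ... then refit it to many simulated data sets via update()
n_sim <- 100
hits  <- sapply(1:n_sim, function(i) {
  new_fit <- update(fit, newdata = sim_d(i), seed = i)
  ci <- fixef(new_fit)["group", c("Q2.5", "Q97.5")]
  ci[1] > 0  # did the 95% interval exclude zero?
})

mean(hits)  # the proportion of simulations "detecting" the effect
```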

CONTINUE READING

A colleague reached out to me earlier this week with a plotting question. They had fit a series of Bayesian models, all containing a common parameter of interest. They knew how to plot their focal parameter one model at a time, but were stumped on how to combine the plots across models into a seamless whole. It reminded me a bit of this gif, which I originally got from Jenny Bryan’s great talk, Behind every great plot there’s a great deal of wrangling.
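
In case you’re facing a similar problem, here’s one hypothetical way to go about it: pull the posterior draws from each model, stack them with a label, and plot. The model names (fit1 through fit3) and the parameter name (b_x) are made up for the sake of the example.

```r
library(brms)
library(posterior)
library(dplyr)
library(ggplot2)

# stack the draws for the shared parameter, tagged by model
draws <- bind_rows(
  as_draws_df(fit1) %>% transmute(model = "fit1", b_x),
  as_draws_df(fit2) %>% transmute(model = "fit2", b_x),
  as_draws_df(fit3) %>% transmute(model = "fit3", b_x)
)

# overlay the three posteriors in a single plot
draws %>%
  ggplot(aes(x = b_x, fill = model)) +
  geom_density(alpha = 1/2)
```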

CONTINUE READING

tl;dr Sometimes a mathematical result is strikingly contrary to generally held belief even though an obviously valid proof is given. Charles Stein of Stanford University discovered such a paradox in statistics in 1955. His result undermined a century and a half of work on estimation theory. (Efron & Morris, 1977, p. 119) The James-Stein estimator leads to better predictions than simple means. Though I don’t recommend you actually use the James-Stein estimator in applied research, understanding why it works might help clarify why it’s time social scientists considered defaulting to multilevel models for their workaday projects.
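
For reference, one common way to write the estimator (roughly following Efron and Morris’s presentation) goes like this. With \(k\) observed group means \(y_i\), their grand mean \(\bar y\), and a known sampling variance \(\sigma^2\), the James-Stein estimate shrinks each observed mean toward the grand mean:

\[
\hat \theta_i^\text{JS} = \bar y + c \left( y_i - \bar y \right), \quad \text{where} \quad c = 1 - \frac{(k - 3) \, \sigma^2}{\sum_{i = 1}^k \left( y_i - \bar y \right)^2}.
\]

The smaller the spread among the observed means relative to their sampling variance, the more aggressively \(c\) pulls them toward \(\bar y\), which is the same basic logic behind partial pooling in multilevel models.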

CONTINUE READING

tl;dr There’s more than one way to fit a Bayesian correlation in brms. Here’s the deal. In the last post, we considered how we might estimate correlations when our data contain influential outlier values. Our big insight was that if we used variants of Student’s \(t\)-distribution as the likelihood rather than the conventional normal distribution, our correlation estimates were less influenced by those outliers. And we mainly did that as Bayesians using the brms package.
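
If you’re curious what that looks like in code, here’s a minimal sketch, assuming a data frame d with two continuous variables x and y: fit a multivariate intercept-only model with the residual correlation turned on, and swap in the Student-\(t\) likelihood for the robust version.

```r
library(brms)

fit <- brm(
  data   = d,
  family = student(),                      # or gaussian() for the conventional version
  bf(mvbind(x, y) ~ 1) + set_rescor(TRUE)  # estimate the residual correlation
)

# the correlation shows up in the output as rescor(x, y)
summary(fit)
```

Since the model contains nothing but intercepts, the residual correlation is just the correlation between x and y.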

CONTINUE READING