Individuals are not small groups, II: The ecological fallacy

tl;dr When people conclude that results from group-level data will tell them about individual-level processes, they commit the ecological fallacy. This is true even of the individuals whose data contributed to those group-level results. This phenomenon can seem odd and counterintuitive. Keep reading to improve your intuition. We need history. The ecological fallacy is closely related to Simpson’s paradox. It is often attributed to sociologist William S. Robinson’s (1950) paper, “Ecological Correlations and the Behavior of Individuals.”
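
The between/within gap can be hard to picture in the abstract, so here is a quick toy simulation of my own devising (not from the post): the group-level correlation, computed from person means, comes out strongly positive even though the association is negative within every single person.

```r
set.seed(1)

n_id  <- 10   # number of people
n_obs <- 50   # observations per person

d <- do.call(rbind, lapply(1:n_id, function(i) {
  x <- rnorm(n_obs, mean = i, sd = 0.5)       # each person centers at a different x
  y <- i - (x - i) + rnorm(n_obs, sd = 0.5)   # within-person slope is negative
  data.frame(id = i, x = x, y = y)
}))

# between-person (group-level) association, based on person means: strongly positive
person_means <- aggregate(cbind(x, y) ~ id, data = d, FUN = mean)
with(person_means, cor(x, y))

# within-person (individual-level) associations: negative for every single person
sapply(split(d, d$id), function(di) cor(di$x, di$y))
```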

Individuals are not small groups, I: Simpson's paradox

tl;dr If you are under the impression that group-level data and group-based data analyses will inform you about within-person processes, you’d be wrong. Stick around to learn why. This is gonna be a long car ride. Earlier this year I published a tutorial on a statistical technique that will allow you to analyze the multivariate time series data of a single individual. It’s called the dynamic p-technique. The method has been around since at least the 80s (Molenaar, 1985), and its precursors date back to at least the 40s (Cattell, Cattell, & Rhymer, 1947).

Bayesian power analysis: Part III.b. What about 0/1 data?

Version 1.0.0 In the last post, we covered how the Poisson distribution is handy for modeling count data. Binary data are even weirder than counts. They typically only take on two values: 0 and 1. Sometimes 0 is a stand-in for “no” and 1 for “yes” (e.g., Are you an expert in Bayesian power analysis? For me that would be 0). You can also have data of this kind when you ask people whether they’d like to choose option A or B.
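
To make the setup concrete, here is a minimal sketch (my own toy example, not the post’s) of simulating 0/1 data and fitting an intercept-only Bernoulli model with brms. The sample size, true probability, and prior are placeholder assumptions.

```r
library(brms)

set.seed(1)
n <- 100
d <- data.frame(y = rbinom(n, size = 1, prob = .6))  # 1 = "yes", 0 = "no"

fit <- brm(
  y ~ 1,
  data   = d,
  family = bernoulli(link = "logit"),
  prior  = prior(normal(0, 1.5), class = Intercept),
  seed   = 1
)

# posterior for the probability of a 1, back-transformed from the logit scale
inv_logit_scaled(fixef(fit)["Intercept", c("Estimate", "Q2.5", "Q97.5")])
```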

Bayesian power analysis: Part III.a. Counts are special.

Version 1.0.0 tl;dr So far we’ve covered Bayesian power simulations from both a null hypothesis orientation (see part I) and a parameter width perspective (see part II). In both instances, we kept things simple and stayed with Gaussian (i.e., normally distributed) data. But not all data follow that form, so it might do us well to expand our skill set a bit. In the next few posts, we’ll cover how we might perform power simulations with other kinds of data.
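
As a companion sketch for count data, here is a hypothetical example (not from the post) of simulating Poisson counts and fitting an intercept-only model with brms default priors; the rate of 4 is just a placeholder.

```r
library(brms)

set.seed(1)
d <- data.frame(y = rpois(100, lambda = 4))  # simulated counts

fit <- brm(
  y ~ 1,
  data   = d,
  family = poisson(link = "log"),
  seed   = 1
)

# posterior for the rate, back-transformed from the log scale
exp(fixef(fit)["Intercept", c("Estimate", "Q2.5", "Q97.5")])
```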

Bayesian power analysis: Part II. Some might prefer precision to power

Version 1.0.0 tl;dr When researchers decide on a sample size for an upcoming project, there are more things to consider than null-hypothesis-oriented power. Bayesian researchers might like to frame their concerns in terms of precision. Stick around to learn what and how. Are Bayesians doomed to refer to \(H_0\) with sample-size planning? If you read my last post, you may have found yourself thinking: Sure, last time you avoided computing \(p\)-values with your 95% Bayesian credible intervals.
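
For a taste of what a precision criterion looks like in code, here is a rough sketch assuming `fit` is an already-fitted brms model with a focal predictor named `group` (both names are hypothetical stand-ins): instead of asking whether the 95% interval excludes zero, we ask how wide it is.

```r
library(brms)

# `fit` is assumed to be an already-fitted brms model with a predictor named
# `group`; both names are hypothetical stand-ins
ci    <- fixef(fit)["group", c("Q2.5", "Q97.5")]
width <- ci["Q97.5"] - ci["Q2.5"]

width        # how wide is the 95% interval?
width < 0.7  # is it narrower than an assumed target width of 0.7?
```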

Bayesian power analysis: Part I. Prepare to reject $H_0$ with simulation.

Version 1.0.0 tl;dr If you’d like to learn how to do Bayesian power calculations using brms, stick around for this multi-part blog series. Here with part I, we’ll set the foundation. Power is hard, especially for Bayesians. Many journals, funding agencies, and dissertation committees require power calculations for your primary analyses. Frequentists have a variety of tools available to perform these calculations (e.g., here). Bayesians, however, have a more difficult time of it.
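
To give a sense of the general simulation workflow, here is a bare-bones sketch of the idea: simulate data under an assumed effect, refit the model, and count how often the 95% interval excludes zero. The effect size, the per-group n, the brms default priors, and the 100 iterations are all placeholder assumptions, not the post’s settings.

```r
library(brms)

# simulate a two-group data set under an assumed standardized effect of 0.5
sim_data <- function(seed, n = 50, effect = 0.5) {
  set.seed(seed)
  d <- data.frame(group = rep(0:1, each = n))
  d$y <- rnorm(2 * n, mean = effect * d$group, sd = 1)
  d
}

# compile the model once, then refit it to each new simulated data set
fit0 <- brm(y ~ group, data = sim_data(1), seed = 1)

rejections <- sapply(1:100, function(i) {
  ci <- fixef(update(fit0, newdata = sim_data(i), seed = i))["group", c("Q2.5", "Q97.5")]
  ci["Q2.5"] > 0   # does the whole 95% interval sit above zero?
})

mean(rejections)   # the simulation-based power estimate
```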

Would you like all your posteriors in one plot?

A colleague reached out to me earlier this week with a plotting question. They had fit a series of Bayesian models, all containing a common parameter of interest. They knew how to plot their focal parameter one model at a time, but were stumped on how to combine the plots across models into a seamless whole. It reminded me a bit of this gif, which I originally got from Jenny Bryan’s great talk, “Behind every great plot there’s a great deal of wrangling.”
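
One possible approach (a sketch only, assuming `fit1`, `fit2`, and `fit3` are already-fitted brms models that share a parameter named `b_x`; all of those names are hypothetical) is to stack the posterior draws from each model with a label and hand the whole thing to a single ggplot.

```r
library(brms)
library(purrr)
library(dplyr)
library(ggplot2)

# `fit1`, `fit2`, and `fit3` are assumed, already-fitted brms models sharing `b_x`
fits <- list(`model 1` = fit1, `model 2` = fit2, `model 3` = fit3)

# stack the posterior draws for the focal parameter, labeled by model
draws <- map_dfr(fits, ~ tibble(b_x = as_draws_df(.x)$b_x), .id = "model")

# one plot, one panel per model
draws %>%
  ggplot(aes(x = b_x)) +
  geom_density(fill = "grey75") +
  facet_wrap(~ model, ncol = 1)
```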

Stein’s Paradox and What Partial Pooling Can Do For You

tl;dr Sometimes a mathematical result is strikingly contrary to generally held belief even though an obviously valid proof is given. Charles Stein of Stanford University discovered such a paradox in statistics in 1955. His result undermined a century and a half of work on estimation theory. (Efron & Morris, 1977, p. 119) The James-Stein estimator leads to better predictions than simple means. Though I don’t recommend you actually use the James-Stein estimator in applied research, understanding why it works might help clarify why it’s time for social scientists to consider defaulting to multilevel models for their workaday projects.
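
For the curious, here is a minimal sketch of the James-Stein shrinkage formula for a vector of observed group means with a common, known sampling variance; the toy inputs below are hypothetical, not real data.

```r
# James-Stein shrinkage toward the grand mean, for a vector of observed group
# means `y` with common known sampling variance `sigma2`
james_stein <- function(y, sigma2) {
  k     <- length(y)
  y_bar <- mean(y)
  # shrinkage factor; values near 0 pull the estimates hard toward the grand mean
  c_hat <- 1 - ((k - 3) * sigma2) / sum((y - y_bar)^2)
  y_bar + c_hat * (y - y_bar)
}

# toy example: five noisy group means, each measured with sd = 0.1
set.seed(1)
y <- rnorm(5, mean = 0.3, sd = 0.1)

cbind(raw = y, james_stein = james_stein(y, sigma2 = 0.1^2))
```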

Bayesian Correlations: Let’s Talk Options.

tl;dr There’s more than one way to fit a Bayesian correlation in brms. Here’s the deal. In the last post, we considered how we might estimate correlations when our data contain influential outlier values. Our big insight was that if we use variants of Student’s \(t\)-distribution as the likelihood rather than the conventional normal distribution, our correlation estimates are less influenced by those outliers. And we mainly did that as Bayesians using the brms package.

Bayesian robust correlations with brms (and why you should love Student’s $t$)

[edited June 18, 2019] In this post, we’ll show how Student’s \(t\)-distribution can produce better correlation estimates when your data have outliers. As is often the case, we’ll do so as Bayesians. This post is a direct consequence of Adrian Baez-Ortega’s great blog post, “Bayesian robust correlation with Stan in R (and why you should use Bayesian methods)”. Baez-Ortega worked out the approach and code for direct use with the Stan computational environment.
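
As a rough preview of the approach (a sketch using my own toy data, not Baez-Ortega’s), one way to do this in brms is to model the two variables jointly with a multivariate Student-\(t\) likelihood and read the correlation off the residual-correlation parameter.

```r
library(brms)

# toy bivariate data, contaminated with a few wild outliers
set.seed(1)
n <- 100
x <- rnorm(n)
y <- 0.6 * x + rnorm(n, sd = 0.8)
d <- data.frame(x = x, y = y)
d$y[1:3] <- c(8, 9, 10)

fit <- brm(
  bf(mvbind(x, y) ~ 1) + set_rescor(TRUE),
  data   = d,
  family = student(),
  seed   = 1
)

# the robust correlation shows up in the output as rescor(x, y)
summary(fit)
```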