brms

Bayesian power analysis: Part III.b. What about 0/1 data?

Version 1.0.0 In the last post, we covered how the Poisson distribution is handy for modeling count data. Binary data are even weirder than counts. They typically take on only two values: 0 and 1. Sometimes 0 is a stand-in for “no” and 1 for “yes” (e.g., Are you an expert in Bayesian power analysis? For me that would be 0). You can also have data of this kind if you ask people whether they’d like to choose option A or B.
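In brms terms, the move is from `family = poisson()` to `family = bernoulli()`. Here’s a minimal sketch of the idea with simulated data (the 60% “yes” rate and the variable names are mine, not the post’s):

```r
library(brms)

# simulate 100 binary responses with a 60% "yes" rate
set.seed(1)
d <- data.frame(y = rbinom(100, size = 1, prob = .6))

# an intercept-only model with the Bernoulli likelihood
fit <- brm(y ~ 1, data = d, family = bernoulli())

# the intercept's posterior is on the log-odds scale;
# inv_logit_scaled() converts it back to a probability
inv_logit_scaled(fixef(fit))
```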

Bayesian power analysis: Part II. Some might prefer precision to power

Version 1.0.0 tl;dr When researchers decide on a sample size for an upcoming project, there are more things to consider than null-hypothesis-oriented power. Bayesian researchers might like to frame their concerns in terms of precision. Stick around to learn what and how. Are Bayesians doomed to refer to \(H_0\) with sample-size planning? If you read my last post, you may have found yourself thinking: Sure, last time you avoided computing \(p\)-values with your 95% Bayesian credible intervals.
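As a rough sketch of the precision idea: simulate one data set, fit the model, and ask how wide the 95% interval for the focal parameter is. The effect size of 0.5 and n = 50 per group below are placeholders I chose, not the post’s values:

```r
library(brms)

# simulate data for a two-group comparison
set.seed(2)
n <- 50
d <- data.frame(group = rep(0:1, each = n))
d$y <- rnorm(2 * n, mean = d$group * .5, sd = 1)

fit <- brm(y ~ group, data = d)

# precision-oriented criterion: how wide is the 95% interval
# for the group difference?
ci <- fixef(fit)["group", c("Q2.5", "Q97.5")]
ci["Q97.5"] - ci["Q2.5"]
```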

Bayesian power analysis: Part I. Prepare to reject $H_0$ with simulation.

Version 1.0.0 tl;dr If you’d like to learn how to do Bayesian power calculations using brms, stick around for this multi-part blog series. Here with part I, we’ll set the foundation. Power is hard, especially for Bayesians. Many journals, funding agencies, and dissertation committees require power calculations for your primary analyses. Frequentists have a variety of tools available to perform these calculations (e.g., here). Bayesians, however, have a more difficult time of it.
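To give a flavor of the simulation approach, here’s a minimal sketch: compile the model once, refit it to fresh simulated data with `update()`, and count how often the 95% interval for the effect excludes zero. The effect size, n, and number of iterations are stand-ins of mine; the post works through the details properly.

```r
library(brms)

sim_and_fit <- function(seed, fit, n = 50) {
  set.seed(seed)
  d <- data.frame(group = rep(0:1, each = n))
  d$y <- rnorm(2 * n, mean = d$group * .5, sd = 1)
  # refit the already-compiled model to the new data
  update(fit, newdata = d, seed = seed)
}

# compile once on an initial data set
set.seed(1)
d <- data.frame(group = rep(0:1, each = 50))
d$y <- rnorm(100, mean = d$group * .5, sd = 1)
fit <- brm(y ~ group, data = d)

# "power" here = proportion of 95% intervals excluding zero
# (only a handful of iterations shown; you'd want many more)
sims <- lapply(1:10, sim_and_fit, fit = fit)
mean(sapply(sims, function(f) fixef(f)["group", "Q2.5"] > 0))
```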

Would you like all your posteriors in one plot?

A colleague reached out to me earlier this week with a plotting question. They had fit a series of Bayesian models, all containing a common parameter of interest. They knew how to plot their focal parameter one model at a time, but were stumped on how to combine the plots across models into a seamless whole. It reminded me a bit of this gif which I originally got from Jenny Bryan’s great talk, Behind every great plot there’s a great deal of wrangling.
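The general recipe is to extract the posterior draws from each fit, tag them with a model index, bind them together, and plot. A hedged sketch, where `fit1` through `fit3` and the parameter `b_x` are hypothetical stand-ins:

```r
library(brms)
library(dplyr)
library(ggplot2)

# suppose fit1, fit2, and fit3 are brmsfit objects that share
# a parameter named b_x (names are mine, for illustration)
draws <- bind_rows(
  as_draws_df(fit1) |> mutate(model = "fit1"),
  as_draws_df(fit2) |> mutate(model = "fit2"),
  as_draws_df(fit3) |> mutate(model = "fit3")
)

# one density per model, all in a single panel
draws |>
  ggplot(aes(x = b_x, fill = model)) +
  geom_density(alpha = 1/2)
```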

Stein’s Paradox and What Partial Pooling Can Do For You

tl;dr Sometimes a mathematical result is strikingly contrary to generally held belief even though an obviously valid proof is given. Charles Stein of Stanford University discovered such a paradox in statistics in 1955. His result undermined a century and a half of work on estimation theory. (Efron & Morris, 1977, p. 119) The James-Stein estimator leads to better predictions than simple means. Though I don’t recommend you actually use the James-Stein estimator in applied research, understanding why it works might help clarify why it’s time social scientists considered defaulting to multilevel models for their workaday projects.
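For reference, the James-Stein estimator shrinks each observed mean toward the grand mean. A small sketch following the Efron and Morris formulation, with toy numbers of my own:

```r
# James-Stein shrinkage toward the grand mean: y is a vector
# of k observed means with (assumed known) sampling variance
# sigma2; the shrinkage factor pulls each mean toward mean(y)
james_stein <- function(y, sigma2) {
  k <- length(y)
  shrink <- 1 - ((k - 3) * sigma2) / sum((y - mean(y))^2)
  mean(y) + shrink * (y - mean(y))
}

# toy example: shrink five noisy group means
y <- c(.35, .40, .22, .28, .31)
james_stein(y, sigma2 = .01)
```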

Bayesian Correlations: Let’s Talk Options.

tl;dr There’s more than one way to fit a Bayesian correlation in brms. Here’s the deal. In the last post, we considered how we might estimate correlations when our data contain influential outlier values. Our big insight was that when we used variants of Student’s \(t\)-distribution as the likelihood rather than the conventional normal distribution, our correlation estimates were less influenced by those outliers. And we mainly did that as Bayesians using the brms package.
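One of those options looks something like this: fit a multivariate intercept-only model, let brms estimate the residual correlation, and use the Student-\(t\) likelihood for robustness. The simulated data below are mine, not the post’s:

```r
library(brms)

# simulate correlated data, then contaminate with outliers
set.seed(3)
d <- data.frame(x = rnorm(100))
d$y <- .6 * d$x + rnorm(100, sd = .8)
d$y[1:3] <- d$y[1:3] + 6

# the residual correlation (rescor) is the correlation estimate;
# the Student-t likelihood keeps the outliers from dominating it
fit <- brm(
  bf(mvbind(x, y) ~ 1) + set_rescor(TRUE),
  data = d, family = student
)

posterior_summary(fit, variable = "rescor__x__y")
```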

Bayesian robust correlations with brms (and why you should love Student’s $t$)

[edited June 18, 2019] In this post, we’ll show how Student’s \(t\)-distribution can produce better correlation estimates when your data have outliers. As is often the case, we’ll do so as Bayesians. This post is a direct consequence of Adrian Baez-Ortega’s great blog, “Bayesian robust correlation with Stan in R (and why you should use Bayesian methods)”. Baez-Ortega worked out the approach and code for direct use in the Stan computational environment.

Robust Linear Regression with Student’s $t$-Distribution

[edited Feb 3, 2019] The purpose of this post is to demonstrate the advantages of Student’s \(t\)-distribution for regression with outliers, particularly within a Bayesian framework. I make a few assumptions: I’m presuming you are familiar with linear regression, familiar with the basic differences between frequentist and Bayesian approaches to fitting regression models, and have a sense that the issue of outlier values is a pickle worth contending with. All the code is in R, with heavy use of the tidyverse–which you might learn a lot about here, especially chapter 5–and Paul Bürkner’s brms package.
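The core move is a one-word change in the `brm()` call: swap the default Gaussian likelihood for `family = student`. A minimal sketch with outlier-contaminated data I simulated for illustration:

```r
library(brms)

# simulate a regression, then add a couple of gross outliers
set.seed(4)
d <- data.frame(x = runif(50, 0, 10))
d$y <- 2 + .7 * d$x + rnorm(50)
d$y[1:2] <- d$y[1:2] + 15

# family = student replaces the default Gaussian likelihood;
# the estimated nu parameter controls how heavy the tails are
fit <- brm(y ~ x, data = d, family = student)

summary(fit)
```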

Make rotated Gaussians, Kruschke style

[edited Dec 23, 2018] tl;dr You too can make sideways Gaussian density curves within the tidyverse. Here’s how. Here’s the deal: I like making pictures. Over the past several months, I’ve been slowly chipping away at John Kruschke’s Doing Bayesian data analysis, Second Edition: A tutorial with R, JAGS, and Stan. Kruschke has a unique plotting style. One of his quirks is that, once in a while, he likes to express the results of his analyses in plots where he shows the data alongside density curves of the model-implied data-generating distributions.
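The basic trick is to evaluate `dnorm()` over a grid and map the density values to the horizontal axis rather than the vertical one. A minimal sketch of the sideways-density idea (not Kruschke’s exact plots):

```r
library(tidyverse)

# evaluate each density on a grid, then draw it sideways by
# mapping the density to the x axis, offset by the curve's
# horizontal position (mu)
curves <-
  tibble(mu = c(1, 2, 3)) |>
  mutate(curve = map(mu, function(m) {
    tibble(y = seq(m - 3, m + 3, length.out = 200),
           d = dnorm(y, mean = m, sd = 1))
  })) |>
  unnest(curve)

curves |>
  ggplot(aes(x = mu - d, y = y, group = mu)) +
  geom_path()
```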

Bayesian meta-analysis in brms

[edited Feb 27, 2019] Preamble I released the first bookdown version of my Statistical Rethinking with brms, ggplot2, and the tidyverse project a couple weeks ago. I consider it the 0.9.0 version. I wanted a little time to step back from the project before giving it a final edit for the first major edition. I also wanted to give others a little time to take a look and suggest edits, which some thankfully have.
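As a preview of where the post goes, a random-effects meta-analysis in brms uses the `se()` term to fix each study’s known standard error while varying intercepts capture between-study heterogeneity. A sketch with made-up toy studies (the data and column names are mine):

```r
library(brms)

# toy data: effect sizes (yi) and their standard errors (sei)
# from five hypothetical studies
d <- data.frame(
  study = 1:5,
  yi    = c(.2, .35, .1, .45, .25),
  sei   = c(.10, .12, .08, .15, .11)
)

# se() fixes each study's measurement error; the (1 | study)
# term estimates the between-study heterogeneity
fit <- brm(yi | se(sei) ~ 1 + (1 | study), data = d)

summary(fit)
```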