What Does Randomization Do to Confounders?

Lately, I have been reading a lot of descriptions of experiments and randomization by applied researchers. One kind of phrase particularly bugs me: I frequently see language like “experimental randomization controls for potential alternative explanations…” or something similar.

Specifically, I don’t like “controls for” as a description of the function of randomization, or, more generally, the metaphor of the experiment as a method of controlling for confounding variables. The phrase comes from the regression-based model of statistical inference that was dominant in most poli sci graduate programs until sometime in the early 2000s, in which you deal with an alternative explanation by adding a “control variable” to a regression.
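To make that habit concrete, here is a toy sketch in Python (the variable names and coefficients are invented purely for illustration): a confounder z drives both the treatment x and the outcome y, the naive regression of y on x is biased, and adding z as a “control variable” recovers the true coefficient.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

z = rng.normal(size=n)                       # confounder
x = 0.8 * z + rng.normal(size=n)             # treatment depends on z
y = 1.0 * x + 2.0 * z + rng.normal(size=n)   # true effect of x is 1.0

def ols(y, X):
    """Least-squares coefficients, with an intercept prepended."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Naive regression of y on x alone: the slope is biased upward
# because z moves both x and y.
print(ols(y, x[:, None])[1])               # roughly 1.98, not 1.0

# Adding z as a "control variable" recovers the true coefficient.
print(ols(y, np.column_stack([x, z]))[1])  # roughly 1.0
```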

I’m no expert, but here is my one-sentence description of randomization and confounders: Randomization ensures that treatment assignment is independent of the potential outcomes and of every confounder, observed or unobserved, so that in any given sample each confounder is balanced across treatment groups in expectation. Let me be clear: relying on this property is usually perfectly reasonable. But stating it this way helps to illustrate the differences between experiments and regression-based adjustments. Careful description of what experiments do is critical for careful thinking about what experiments can and cannot tell us, and that is undoubtedly good social science.
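To see the “in expectation” part concretely, here is a small simulation (again in Python, with invented numbers): hold the sample and its confounder fixed, re-randomize treatment many times, and track the treated-minus-control difference in the confounder.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 100, 5_000
z = rng.normal(size=n)  # a fixed confounder in a fixed sample

diffs = []
for _ in range(reps):
    d = rng.permutation(np.repeat([0, 1], n // 2))  # complete randomization
    diffs.append(z[d == 1].mean() - z[d == 0].mean())
diffs = np.asarray(diffs)

print(diffs.mean())  # close to 0: balance in expectation
print(diffs.std())   # nonzero: any one draw can be imbalanced
```

Note what this does and does not buy you: the independence is a property of the assignment mechanism, so the balance guarantee holds in expectation over randomizations, not as a promise about any particular draw.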

Now, the dirty little secret is that while people of my generation who were trained in multiple regression always nod when they hear the words “control for,” it’s devilishly hard to explain in words what control variables actually do. (Illustrative task: explain to a smart undergraduate, one who can talk back, exactly what a control variable does. I bet you will find it a lot harder than you expect.)
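For what it’s worth, one precise answer to that task comes from the Frisch-Waugh-Lovell theorem: the coefficient on a treatment x “controlling for” z is exactly the slope you get after purging both x and y of the variation that z linearly predicts. Here is a sketch, reusing the same invented setup as the earlier snippet:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(size=n)
y = 1.0 * x + 2.0 * z + rng.normal(size=n)

def resid(v, z):
    """Residual of v after a least-squares fit on an intercept and z."""
    Z = np.column_stack([np.ones(len(z)), z])
    return v - Z @ np.linalg.lstsq(Z, v, rcond=None)[0]

# Purge x and y of the variation that z linearly predicts.
x_tilde, y_tilde = resid(x, z), resid(y, z)

# Slope of the residual-on-residual regression: identical to the
# coefficient on x in the full regression of y on x and z.
print((x_tilde @ y_tilde) / (x_tilde @ x_tilde))  # roughly 1.0
```

So “controlling for z” means comparing only the variation in x and y that z does not linearly predict, which is a very different operation from physically randomizing x.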

Methodologists/experimentalists: did I get my description of randomization right? If not, how can I fix it?