Then there’s Munder (2013), which is a meta-meta-analysis on whether meta-analyses of confounding by researcher allegiance effect were themselves meta-confounded by meta-researcher allegiance effect. He found that indeed, meta-researchers who believed in researcher allegiance effect were more likely to turn up positive results in their studies of researcher allegiance effect (p < .002).

Everything about it is a delight. The layers of meta-analysis. The English noun-phrase-constructing rules that permit the construction of a sentence in which the prefix "meta-" appears five times, variously modifying words which themselves are modifying other "meta-"-modified words.
I wonder if the same researcher bias/confounding exists in fields where the experiments are entirely done on computers. Can researchers' belief in the effectiveness of certain machine learning techniques affect their experiments? What about physics simulations? I don't see how, but of course I deeply believe in the inviolable sanctity of mathematics. This is an opinion founded in my acknowledged bias. Maybe coders would self-sabotage by writing bad code, so that experiments run slower? ... but in the end this wouldn't affect the actual outcome, just the agony and feasibility of running the experiment many times.
On a larger scale, I am supremely happy that scientists are using their scientific reasoning to criticize the very practice of science itself. In the same way that I frequently remind myself that the basis of the field studying privacy is "trust no one"*, it would be nice to have big science conferences where we all get together and just shake our heads at how unreliable the current practice of science is. Apparently. I mean, check out this conclusion:
But rather than speculate, I prefer to take it as a brute fact. Studies are going to be confounded by the allegiance of the researcher. When researchers who don’t believe something discover it, that’s when it’s worth looking into.... which sounds convincing.
But.
You know what?
I'm skeptical.
This post's theme word is obverse, "the more conspicuous of two alternatives or cases or sides." The skeptic and his obverse performed a coordinated, randomized, double-blind study.
*Or, as I memorably put it during a job interview, "We've known for a long time that almost everything is impossible."
4 comments:
Plausible mechanisms for similar effects in computer-based fields:
I) One is more likely to overfit data so that one's beliefs are supported.
II) One imagines that it is harder to find bugs/errors in code/math that provides results that support one's beliefs (see the Reinhart-Rogoff hubbub from a few years back).
III) Overcomplicating models so that they are understandable only at a handwavy level to anyone else, and then seeing how they genially reproduce one's biases. (Related to I, but not necessarily the same; and I believe people who have worked with Nate Silver have levelled these sorts of criticisms at him.)
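Mechanism I is easy to demonstrate. The sketch below (a hypothetical illustration, not from the original post; all data are synthetic noise) shows how a researcher who keeps adding model complexity can manufacture an apparent effect: a degree-9 polynomial through 10 pure-noise points fits them essentially perfectly in-sample, even though there is nothing to find.

```python
# A minimal sketch of mechanism I: overfitting pure noise until the
# in-sample fit appears to "support" a belief. All data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)
y = rng.normal(size=10)  # pure noise: no real signal at all

def in_sample_r2(degree):
    """In-sample R^2 of a degree-`degree` polynomial fit,
    evaluated on the same points it was fit to."""
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    return 1 - resid.var() / y.var()

# A simple model finds little structure in the noise...
print(f"degree 1: in-sample R^2 = {in_sample_r2(1):.3f}")
# ...but a degree-9 polynomial through 10 points interpolates them,
# so the in-sample fit looks essentially perfect.
print(f"degree 9: in-sample R^2 = {in_sample_r2(9):.3f}")
```

The point, of course, is that the degree-9 "result" evaporates on any held-out data — which is exactly why a motivated modeler might not bother holding any out.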
Also, apparently I don't know how to post comments here in Tor.
Too bad on most other matters Scott Alexander is completely terrible. So much so that I don't want to read even the occasional good post he writes to avoid sending traffic his way.
I'm sorry to hear that --- what "other matter" is he so terrible on?
He supports eugenics, belittles feminism, and seems to have more than a passing spiritual kinship with MRAs. He also is quite enamored with IQ and doesn't seem to get it is complete bullshit. In fact, I am 95% sure he wants to select embryos for IQ. He gives way too much of an audience to the awful people who show up in his comments sections (they are much much worse than he is of course, people like Steve Sailer and all sorts of neoreactionaries).
I don't like the "rationalist" community generally, since they seem to be mostly smug techno-utopian libertarians who believe they have privileged access to truth. They are perfectly happy entertaining any argument, no matter how disgusting, as long as it's framed as intellectually rigorous by their bizarre standards. Hence the sort of philosophical exercises where people attempt hopeless projects of deriving everything from "first principles." SSC seems to be somewhat more liberal than others in this community, so on some issues the blog is basically a long exercise in its author trying to convince himself to be less shitty. Often he succeeds!