So, if you’ve been reading my series of posts so far, I hope that I’ve convinced you that science does sometimes ignore good ideas. This post is going to answer a simple question: so what? If this problem is real, what should we do about it?
My answer: we should ignore science’s positive pronouncements—only pay attention to what science says is wrong, not what it says is right.
But before explaining why I think that, I need to deal with a problem I’ve run into. The ignoring-good-ideas-when-they-are-not-made-loudly issue needs a better name, and not only because I’m tired of typing all that out. More importantly, when we give a concept a short, manageable name, we “crystallize the pattern.” An idea becomes a lot easier to talk about when it has a usable name (that doesn’t have eight hyphens). So, I’m officially calling this problem science has of ignoring good ideas that aren’t made loudly the “haystack problem.” Science may be right about which ideas are awful when it holds them up for scrutiny, but it sucks at finding the needle-in-a-haystack good ideas in the first place.
So, why should we[FN 1] react to science’s haystack problem by only paying attention to the scientific establishment when it says something is wrong?
Well, to answer that, I’ve been thinking of a contrived hypothetical. Imagine that, for some reason (let’s say you’ve been captured by a mad social scientist conducting a psychology experiment), you are presented with ten theories you’ve never heard of before, along with all the true facts you need to evaluate them. The mad scientist tells you that five of the theories are standard ones held by the scientific establishment and five are alternative theories that the establishment has not weighed in on, either positively or negatively, but doesn’t tell you which are which. The mad scientist forces you to evaluate the plausibility of each theory, and you come to some conclusion about each one.
After the experiment is over, the mad social scientist mentions (while untying your ropes) which of the ten theories were the establishment ones. How much should this change your estimate of how likely those theories are to be true? Or, to be more concrete: say you had evaluated one theory as seeming 70% likely to be true before the scientist released you. Once you find out that it’s an alternative theory, how likely should you then think it is?
Here’s what I’d think: I’d think back to Gregor Mendel, Ralph Baldwin, and the oil companies that knew about continental drift. I’d think about all the papers that are never published, and all the published ideas that are never followed up on. I’d remember that science has a bad haystack problem, and I’d think that the fact that science has not embraced this theory ought not count against it. True, the scientific establishment might not have embraced the theory because it’s incorrect. But it’s much more likely that the reason the scientific establishment hasn’t addressed the idea is that science fails to address the vast majority of ideas that aren’t brought to its attention.
In short, I’d keep estimating the odds of that theory being true as pretty much 70%.
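To make the arithmetic behind that explicit: in Bayesian terms, learning that science hasn’t weighed in should only move your estimate to the extent that establishment silence is likelier for false theories than for true ones. Here’s a minimal sketch in Python; the 70% prior comes from the hypothetical, while the silence probabilities are made-up numbers of my own, just to illustrate the update:

```python
def updated_credence(prior, p_silence_if_true, p_silence_if_false):
    """Bayes' rule: revise P(theory is true) after learning that
    the scientific establishment has been silent about it."""
    p_silence = (p_silence_if_true * prior
                 + p_silence_if_false * (1 - prior))
    return p_silence_if_true * prior / p_silence

# The haystack problem says science ignores almost every idea that
# isn't brought to its attention, true or false alike. If silence is
# about equally likely either way, the update is a wash:
print(updated_credence(0.70, 0.95, 0.95))  # 0.70 -- unchanged

# Silence would only count against a theory if science reliably
# sifted the haystack, so that true ideas rarely went unnoticed:
print(updated_credence(0.70, 0.20, 0.95))  # roughly 0.33
```

On the haystack view, the first case is the realistic one, which is why the 70% estimate stays put.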
If we take the haystack problem seriously, we should pay very little attention to the positive pronouncements of the scientific establishment. We can and should agree with scientists when they’ve looked into a question and come to a firm decision (“vaccines don’t cause autism”), but we shouldn’t think that scientists have the best answer out there for any given issue they haven’t looked into. In particular, if some outsider (whether that’s a non-scientist or a scientist outside the academic establishment) comes up with a new idea, we should evaluate that idea on its merits, and not dismiss it because it’s outside the mainstream. Unless and until science looks into it and determines that it is bunk, the fact that science has failed to agree with it provides approximately zero evidence that it’s wrong.
I want to defend my argument from two opposite objections: first, that it is right but obvious, and second, that it is dangerously wrong.
On the right-but-obvious point: it is likely true that most people who’ve thought about the issue would agree that science can ignore good ideas. In fact, even the post I started off disagreeing with acknowledges this issue. The same post that states that “scientific consensus is almost always an accurate reflection of the best knowledge we have at the time” also includes a parenthetical disclaimer:
(and I’m making it even easier for myself in that I say “scientific consensus for” when I probably mean “no scientific consensus against”. I don’t claim that 90%+ of scientists always believe true things, only that there are very few cases where 90%+ of scientists believe things which smarter people know to be false.)
And the author of that post, Scott Alexander, wrote elsewhere about medical advances that seem promising but don’t get more research. Scott wondered whether “everyone mutually assume[s] that if something this revolutionary were true, someone would have noticed beyond a single article,” and so no one bothers to investigate the idea. He also wrote about how the scientific establishment often ignores contrarians who are right until someone more abrasive comes along to make the same point more stridently. In short, if Scott were to read these posts, there’s a pretty good chance he’d agree that the haystack problem is something that happens in science. And I bet a lot of other people who still place their faith in the scientific establishment would also agree that the haystack problem exists.
So, they could reasonably object, if we all agree that the haystack problem exists, what are you adding to the conversation? If you’re not telling us anything we didn’t already know, what justifies posting about it (let alone in a five-part series!)?
My reply: it’s not really a question of knowing about the haystack problem’s existence. The issue is emphasis.
My argument is not just that the haystack problem exists; it’s that it’s a big deal. My claim is that basically every scientific theory could be badly wrong in all sorts of ways, even ways that smart people have pointed out, simply because science hasn’t noticed. My claim is that we should take the haystack problem seriously, and be very willing to consider ideas that are outside the scientific mainstream but aren’t contradicted by strong evidence.
Another way of putting this point is that, as Overcoming Bias has pointed out, “there are two kinds of ‘no evidence.’” There’s the “no evidence” that means someone has looked into a question and all the evidence came out the other way. And then there’s the “no evidence” that means no one has bothered to look. I think the haystack problem is a prominent example of science not bothering to look, and I think that many people, even people who would acknowledge that the haystack problem exists, don’t take the issue seriously enough. As a result, they err by treating science’s type-2 no evidence as type-1 no evidence, and place far more weight on the scientific establishment’s positive views than those views deserve.
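That distinction can be put in likelihood-ratio terms, continuing the made-up illustrative numbers from the sketch above (they are my assumptions, not anything the Overcoming Bias post commits to):

```python
# A likelihood ratio P(observation | false) / P(observation | true)
# measures how strongly an observation favors "theory false".

# Type-1 no evidence: science looked, and the studies found nothing.
# That outcome is far likelier if the theory is false, so the ratio
# is large and the theory takes a real hit.
type1_ratio = 0.90 / 0.05   # 18.0: strong evidence against

# Type-2 no evidence: nobody bothered to look. Given the haystack
# problem, establishment silence is about equally likely either way.
type2_ratio = 0.95 / 0.95   # 1.0: approximately zero evidence

print(type1_ratio, type2_ratio)
```

Treating type-2 silence as if it carried the type-1 ratio is exactly the error I’m describing.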
The other objection is that my suggestion, discounting science except when it tells us something is wrong, is dangerous. Sure, my view lets me share the scientific establishment’s conclusion that vaccines don’t cause autism now that the British Medical Journal has written a long review of the evidence against the theory. But back when it was just a fringe idea, wouldn’t my view of the haystack problem have committed me to taking the theory seriously, and prevented me from pointing to the silence of the scientific establishment as good evidence against it?
I have three responses to this objection:
First, zero evidence from silence doesn’t mean zero evidence altogether. Even if I didn’t count the scientific establishment’s silence as a reason to object to the vaccine-autism theory, I might still have rejected it on its merits: it lacked strong evidentiary support and a convincing theoretical justification. There are lots of junk theories out there; I’m not saying we should start believing every crackpot who comes along. I’m just saying that we should evaluate any ideas that science hasn’t shouted down on their individual merits.
Second, I guess I am saying, at least a bit, that people should have taken that theory seriously before the science came in to reject it. When I hear about new theories (“freeze-dried poop capsules may help fight obesity,” for example), thinking about the haystack problem makes me a bit more inclined to give them real thought. Yes, this means taking the anti-vaxxers of the world seriously until they are shouted down by science. But it also means not missing the Mendels of the world. And the whole point of our haystack-problem discussion is that science will get around to disproving the anti-vaxxers pretty quickly, but may miss the Mendels for decades or forever.
Third, I want to emphasize that I’m talking about what we ought to do when we’re really trying to figure out the truth of the matter. We may not always be doing that, and that’s fine. Sometimes, we may be motivated to fit in or avoid looking foolish—and it might be better to avoid the unusual errors that look dumb even if the price is making a bunch of common errors that no one will care about. One thing that following the scientific establishment has going for it is that it is the scientific establishment, and you’re unlikely to look like an idiot if the only ways you’re wrong are the same ways that a bunch of fancily credentialed experts were also wrong.
Another thing the scientific establishment has going for it is that it’s usually not trying to sell you anything.[FN 2] If the people pitching alternative theories are trying to sell you something, then adopting a sort of epistemic learned helplessness may be the safer course of action. This is especially true if you lack some combination of the time, educational background, energy, and patience that it would take to evaluate the question on its merits (with “on its merits” here understood to include accounting for biases in the sources of available information).
* * *
In sum: if what you really want is to get at the truth, the haystack problem should cause you to treat the scientific establishment as a rare sort of creature that is only ever capable of saying “no.” If science tells you it has studied something and found it to be definitely wrong, believe it. But if science tells you some theory is right and that there’s nothing better out there, well, maybe. And if science tells you what to do, take very seriously the possibility that science just hasn’t looked at the better option.
Because of the haystack problem, we ought to be much more open to ideas outside the scientific establishment, so long as science hasn’t yelled those ideas down. And maybe we won’t miss the next Mendel, even if science does.