So, if you’ve been reading my series of posts so far, I hope I’ve convinced you that science does sometimes ignore good ideas. This post is going to answer a simple question: so what? If this problem is real, what should we do about it?
My answer: we should ignore science’s positive pronouncements—only pay attention to what science says is wrong, not what it says is right.
But before explaining why I think that, I need to deal with a problem I’ve noticed. The ignoring-good-ideas-when-they-are-not-made-loudly issue needs a better name, and not only because I’m tired of typing all that out. More importantly, when we give a concept a short, manageable name, we “crystallize the pattern.” An idea becomes a lot easier to talk about when it has a usable name (one that doesn’t have eight hyphens). So, I’m officially calling this problem—science’s habit of ignoring good ideas that aren’t made loudly—the “haystack problem.” Science may be right about which ideas are awful when it holds them up for scrutiny, but it is bad at finding the needle-in-a-haystack good ideas in the first place.
So, why should we* react to science’s haystack problem by only paying attention to the scientific establishment when it says something is wrong?

*When I say “we,” I mean that literally: you and me. There’s an interesting question about how “society” should solve the haystack problem, or about what a Science Czar should change to make the haystack problem go away. But I’m not the Science Czar, and I’m guessing most of you aren’t either. I’m more interested in how knowledge of the haystack problem should change our own, individual-level beliefs and actions.
Stepping back from these three examples, can we see any commonalities? In all three cases, the problem was not that science shouted down an alternative theory, but rather that science shrugged and either failed to notice or failed to follow up on a promising, even revolutionary theory or body of evidence. This suggests a very specific sort of scientific failure: one where science goes wrong not because it rejects good ideas but because it never notices them in the first place.
In this series, we’ve been talking about how the scientific establishment frequently ignores good ideas, even if it doesn’t actively shout them down very often. In the first post, I gave the example of Gregor Mendel, who was ignored from 1866 through 1900 (and only fortuitously rediscovered). The second post focused on a more recent example: the theory that Earth has been subjected to cataclysmic asteroid impacts throughout its history. This theory was ignored from 1942 through 1980, when the shouting-down phase began: the scientific establishment stopped ignoring the theory and started arguing with it, a process that ended in fairly short order with the theory’s acceptance (as it usually does for correct theories).
People usually tell the story of the fight over continental drift as a rare example of the scientific establishment disagreeing with a good idea for longer than the 10-or-so years that seem to be typical for paradigm shifts. And you can definitely read it that way. You can cite the German climatologist and rugged Arctic explorer Alfred Wegener as one of those rare iconoclasts who got it right in the face of a hostile establishment.
Last time, we talked about how science’s biggest failures come not from shouting down heretical-but-correct ideas but from ignoring good ideas. I gave the example of Gregor Mendel, who was ignored for decades even though he’d made discoveries that, once recognized, revolutionized biology. We ended by noting that Mendel lived in the 1800s, and I promised to give a more recent example to show that science has not fully fixed its problem with ignoring good ideas.
So now it’s time for an example with a bit more punch: the asteroid impact that killed off the dinosaurs. In 1942, the astrophysicist Ralph Baldwin, professor of astronomy at the University of Michigan, first propounded the idea that Earth was subject to perpetual, frequent, and cataclysmic asteroid bombardment.
The first post on a new blog! Starting fresh on a clean slate—but also starting without any readers. So, what better subject to start with than the thought that most ideas are never read and, of the few that are read, most are ignored?
Scott Alexander recently had a post that listed ten examples of scientists who were roundly mocked before later being vindicated. Scott went through each example and showed that most of them weren’t true—either the scientists weren’t really mocked at all, or they were mocked only briefly before being quickly vindicated. He also looked at several areas where he once thought that the scientific consensus was badly wrong but now thinks that it is right, either because he had misunderstood what the consensus position was, or because the consensus position changed rapidly to embrace the good ideas of a new paradigm. From all this, he concludes that “scientific consensus is almost always an accurate reflection of the best knowledge we have at the time.” (Of course, his argument is more nuanced than this; read the whole thing for his thoughts.)
I disagree with Scott about the accuracy of science. I think that science can go badly wrong in (at least) two ways, and Scott looked at only one of the failure modes. Scott asked whether science is frequently confronted with a loud, prominent critic and rejects that critic publicly for years—only for that critic to later turn out to have been right all along. He determined that this is rare, and I agree. But the question he didn’t ask was whether science, when confronted with a quiet criticism that doesn’t demand a response, is also likely to change to the correct view.
What I want to argue is that the biggest way for science to go wrong is for it to quietly ignore good ideas, often for decades. These good ideas might literally receive no response, or they might get people to say, “Huh, that’s interesting, someone should really look into that.” But what they don’t get is the attention and energy that would make them as revolutionary as they should be.
So, science ignores good ideas. I’m going to argue for this claim by telling a couple of stories about when it happened, and then explaining why stories alone don’t tell the whole story. I’m going to present some statistics to show that the issue I’m worried about isn’t a minor exception, but rather a major, systematic problem. And then I’m going to talk a bit about how all this changes how I think about the scientific establishment.