On Ignoring Good Ideas (in science!) — Part 4

So far, this series has presented three examples of science ignoring good ideas: Gregor Mendel’s theory of heredity, Ralph Baldwin’s theory of asteroid impacts, and oil companies’ use of magnetometers to prove continental drift. I’m devoting this post to arguing that these three examples aren’t isolated occurrences, but are typical of a much larger pattern.

Stepping back from these three examples, can we see any commonalities? In all three cases, the problem was not that science shouted down an alternative theory, but rather that science shrugged: it failed to notice, or failed to follow up on, a promising, even revolutionary theory or body of evidence. This suggests a very specific kind of scientific failure, one of passive neglect rather than active rejection.

Another commonality: in all three cases we’ve discussed, we are lucky to know what we do about the originators of the idea. With Mendel, we’re lucky that Carl Correns wouldn’t let de Vries get away with claiming credit. With Baldwin, we’re lucky that he didn’t just give up in the face of the apathy he confronted. “A normal astronomy instructor, perhaps worried about tenure, would have gone back to stellar spectra [his previous research area, which had been well received].” Baldwin, however, decided that the tepid reaction to his initial presentation was a “fiasco” and, in a reaction that was “typical of him,” decided to write an entire book on the subject. And the role of oil prospectors in establishing the evidence for continental drift still isn’t widely appreciated or documented: I had to dig through some obscure industry-specific sources to find as much as I did.

So, figuring out how often science ignores good ideas is not as simple as listing out every ignored idea and then comparing that total to every accepted idea. Some ignored ideas (like Mendel’s) will be fortuitously rediscovered in time for us to note our collective error in passing them by. Others (like Baldwin’s) will have proponents so tenacious that they will persevere in the face of apathy and indifference. But many ideas will be lost for good, permanently ignored.

We are trying to measure the number of times someone has a genuinely good idea and that idea is ignored. “Ideas that were ignored,” potentially forever, is intrinsically a dark number: one that cannot be counted directly or precisely. Accordingly, much of our evidence will be unavoidably indirect. But, direct or indirect, what clarity can statistics provide?

Well, here’s a question for you: say you are an eager young academic, who has devoted the better part of a year to writing, revising, and publishing an academic paper (papers, like babies, take about nine months to be produced). It’s finally been published, and is available to the world. You sit back and wait for your paper to start influencing other papers. You keep a casual eye on the count of citations to your paper. What are you most likely to see?

Absolutely nothing. Bupkis. Zero. Zilch. Nada. The details vary by field, but by far the most common number of citations for a paper to receive is zero. An astounding 82% of social science research is never cited, as is 43% of legal scholarship. The numbers are better for natural science (23%) and medicine (12%), but even in those fields our hypothetical junior academic is more likely to receive zero citations than any other number. (And the 12% figure strikes me as surprisingly high, given the high cost of medical research.)

Even if we set aside the modal outcome and talk about averages instead, the picture isn’t much better. The average academic paper is cited 10.81 times. Again, this varies by field: the average immunology paper is cited about twenty-two times, while the average economics paper is cited only about six. And, of course, these averages are pulled upward by a small number of heavily cited papers; the most cited economics papers are each cited thousands of times.
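To make the arithmetic concrete, here is a minimal simulation of how a mean of roughly ten citations can coexist with zero as the single most common outcome. The log-normal shape and its parameters are my own illustrative assumptions, chosen only to produce a heavy right skew, not fitted to any real bibliometric data:

```python
import random

random.seed(0)

# Simulate a heavily right-skewed citation distribution: most papers
# get zero citations, a few get thousands. The log-normal shape and
# its parameters are illustrative assumptions, not real bibliometric data.
papers = [max(0, int(random.lognormvariate(0.5, 2.0)) - 1)
          for _ in range(100_000)]

mean = sum(papers) / len(papers)
uncited = sum(p == 0 for p in papers) / len(papers)

print(f"mean citations per paper: {mean:.2f}")     # pulled up by the long tail
print(f"share never cited:        {uncited:.0%}")  # zero is the modal outcome
```

On a distribution this skewed, the mean tells you almost nothing about the typical paper’s fate.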

Now, of course it’s true that papers can have an impact on the conversation or the development of ideas without ever being cited. They can be read without being cited, and frequently are: it’s probably not true, as some have suggested, that the average academic paper is read by only three people. Maybe uncited papers have their ideas percolate through the community (or even be stolen by someone who really should have cited them!). But the low citation numbers at least leave open the possibility that some good ideas are out there—have been published, even—and yet are being ignored by scientists generally.

One defense(?) of the scientific establishment would be to concede that peer review sucks at screening out nonsense: that all those papers that are published but never cited were awful ideas that didn’t deserve to be cited, and that the good ideas are reliably recognized and rise to the top. And there’s certainly room to criticize the job academic journals do at screening out dross; even randomly generated papers have had surprising success at getting published.

A sophisticated version of this argument would make claims about false positives and false negatives. If every published article were great and well-cited, that would suggest the filter was too strict, and that we were screening out useful work along with the junk. Maybe it’s better to be permissive about what gets published, and count on finding the best out of a broad field. As Frank Easterbrook put it:

A free mind is apt to err—most mutations in thought, as well as in genes, are neutral or harmful—but because intellectual growth flows from the best of today standing on the shoulders of the tallest of yesterday, the failure of most scholars and their ideas is unimportant. High risk probably is an essential ingredient of high gain.

* * *

So, given all this, how confident should we be that science finds the good ideas? I think it comes down to this: We know science sometimes ignores good ideas (Mendel, etc.). We know science is faced with an overwhelming flood of published ideas, many of which get little or no response. Based on this, it would be amazing if science didn’t keep missing some good ones, especially absent a mechanism to ensure that it doesn’t.

And everything we’ve said so far has focused just on published ideas, that is, the ideas most accessible to the scientific establishment. As we saw with the oil companies’ proof of continental drift, science also has a problem with ignoring good ideas that are never published in scientific journals at all. Another obligatory xkcd:

(Title text: “Some engineer out there has solved P=NP and it’s locked up in an electric eggbeater calibration routine. For every 0x5f375a86 we learn about, there are thousands we never see.”)
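For readers who miss the reference: 0x5f375a86 is a magic constant for the fast inverse-square-root bit hack, a trick that spread through game-programming folklore (most famously via the Quake III source code, which used the closely related constant 0x5f3759df) well before it received academic analysis. As a rough sketch, here is the trick in Python; the original circulated as C, and this port is mine:

```python
import struct

def fast_inv_sqrt(x: float) -> float:
    """Approximate 1/sqrt(x) with the classic bit-twiddling hack."""
    i = struct.unpack('<I', struct.pack('<f', x))[0]  # float bits as uint32
    i = 0x5f375a86 - (i >> 1)                         # the constant from the comic
    y = struct.unpack('<f', struct.pack('<I', i))[0]  # bits back to a float
    return y * (1.5 - 0.5 * x * y * y)                # one Newton-Raphson step

print(fast_inv_sqrt(4.0))  # roughly 0.5
```

The point of the joke, of course, is that ideas like this one live in shipped code and folklore, not in journals.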

Science can ignore good ideas by letting them slip by, uncited and unnoticed, in the torrent of published scholarship. Science can ignore good ideas by never hearing about them at all, because they were developed outside the scientific establishment altogether. One way or another, science ignoring good ideas seems like a real risk. Next time, I’ll talk about what I think we should do about it.
