On Ignoring Good Ideas (in science!) — Part 5

So, if you’ve been reading my series of posts so far, I hope that I’ve convinced you that science does sometimes ignore good ideas. This post is going to answer a simple question: so what? If this problem is real, what should we do about it?

My answer: we should ignore science’s positive pronouncements—only pay attention to what science says is wrong, not what it says is right.

But before explaining why I think that, I’ve realized a big problem. The ignoring-good-ideas-when-they-are-not-made-loudly issue needs a better name, and not only because I’m tired of typing all that out. More importantly, when we give a concept a short, manageable name, we “crystallize the pattern.” An idea becomes a lot easier to talk about when it has a usable name (one that doesn’t have eight hyphens). So, I’m officially naming science’s habit of ignoring good ideas that aren’t made loudly the “haystack problem.” Science may be right about which ideas are awful when it holds them up for scrutiny, but it sucks at finding the needle-in-a-haystack good ideas in the first place.

So, why should we[FN 1] react to science’s haystack problem by only paying attention to the scientific establishment when it says something is wrong?

Well, to answer that, I’ve been thinking of a contrived hypothetical. Imagine that, for some reason (let’s say you’re captured by a mad social scientist conducting a psychology experiment), you are presented with ten theories you’ve never heard before, along with all the true facts you need to evaluate them. The mad scientist tells you that five of the theories are standard ones held by the scientific establishment and five are alternative theories that the establishment has not weighed in on, either positively or negatively. The mad scientist forces you to evaluate the plausibility of each theory, and you come to some conclusion about each one.

After the experiment is over, the mad social scientist mentions (while untying your ropes) which of the ten theories were the establishment ones. How much should this change your estimate of how likely those theories are to be true? Or, to be more concrete: say you had evaluated one theory as seeming 70% likely to be true before the scientist released you. Once you find out that it’s an alternative theory, how likely should you then think it is?

Here’s what I’d think: I’d think back to Gregor Mendel, Ralph Baldwin, and the oil companies that knew about continental drift. I’d think about all the papers that are never published, and all the published ideas that are never followed up on. I’d remember that science has a bad haystack problem, and I’d think that the fact that science has not embraced this theory ought not count against it. True, the scientific establishment might not have embraced the theory because it’s incorrect. But it’s much more likely that the reason the scientific establishment hasn’t addressed the idea is that science fails to address the vast majority of ideas that aren’t brought to its attention.

In short, I’d keep estimating the odds of that theory being true as pretty much 70%.
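To put that intuition in rough Bayesian terms (just a sketch, with the probabilities standing in for my informal judgments), learning that a theory lacks the establishment’s endorsement should only move my estimate to the extent that science is more likely to stay silent about false theories than about true ones:

\[ P(\text{true} \mid \text{no endorsement}) = \frac{P(\text{no endorsement} \mid \text{true})}{P(\text{no endorsement})} \times P(\text{true}) \]

If the haystack problem means that science overlooks the vast majority of quietly made ideas whether or not they are true, then P(no endorsement | true) is roughly equal to P(no endorsement | false), the ratio in front is close to 1, and the posterior stays close to the prior of 0.70.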

If we take the haystack problem seriously, we should pay very little attention to the positive pronouncements of the scientific establishment. We can and should agree with scientists when they’ve looked into a question and come to a firm decision (“vaccines don’t cause autism”), but we shouldn’t think that scientists have the best answer out there for any given issue they haven’t looked into. In particular, if some outsider (whether that’s a non-scientist or a scientist outside the academic establishment) comes up with a new idea, we should evaluate that idea on its merits, and not dismiss it because it’s outside the mainstream. Unless and until science looks into it and determines that it is bunk, the fact that science has failed to agree with it provides approximately zero evidence that it’s wrong.

I want to defend my argument from two opposite objections: first, that it is right but obvious, and second, that it is dangerously wrong.

On the right-but-obvious point: it is likely true that most people who’ve thought about the issue would agree that science can ignore good ideas. In fact, even the post I started off disagreeing with acknowledges this. The same post that states that “scientific consensus is almost always an accurate reflection of the best knowledge we have at the time” also includes a parenthetical disclaimer:

(and I’m making it even easier for myself in that I say “scientific consensus for” when I probably mean “no scientific consensus against”. I don’t claim that 90%+ of scientists always believe true things, only that there are very few cases where 90%+ of scientists believe things which smarter people know to be false.)

And the author of that post, Scott Alexander, wrote elsewhere about medical advances that seem promising but don’t get more research. Scott wondered whether “everyone mutually assume[s] that if something this revolutionary were true, someone would have noticed beyond a single article,” and so no one bothers to investigate the idea. He also wrote about how the scientific establishment often ignores contrarians who are right until someone more abrasive comes along to make the same point more stridently. In short, if Scott were to read these posts, there’s a pretty good chance he’d agree that the haystack problem is something that happens in science. And I bet a lot of other people who still place their faith in the scientific establishment would also agree that the haystack problem exists.

So, they could reasonably object, if we all agree that the haystack problem exists, what are you adding to the conversation? If you’re not telling us anything we didn’t already know, what justifies posting about it (much less in a five-part series!)?

My reply: it’s not really a question of knowing about the haystack problem’s existence. The issue is emphasis.

My argument is not just that the haystack problem exists, it’s that it’s a big deal. My claim is that basically every scientific theory could be badly wrong in all sorts of ways—even in ways that smart people have pointed out—but that science just hasn’t noticed. My claim is that we should take the haystack problem seriously, and be very willing to consider ideas that are outside the scientific mainstream but aren’t contradicted by strong evidence.

Another way of putting this point is that, as Overcoming Bias has pointed out, “there are two kinds of ‘no evidence.'” There’s the “no evidence” that means someone has looked into a question, and all the evidence comes out the other way. And then there’s the “no evidence” that means no one has bothered to look. I think the haystack problem is a prominent example of science not bothering to look, and I think that many people—even people who would acknowledge the existence of the haystack problem—don’t take the issue seriously enough. As a result, they err by treating science’s type-2 no evidence as type-1 no evidence, and place far more weight on the scientific establishment’s positive views than those views deserve.

The other objection is that my suggestion (discounting science except when it tells us something is wrong) is dangerous. Sure, my view lets me share in the scientific establishment’s conclusion that vaccines don’t cause autism now that the British Medical Journal has written a long review of the evidence against the theory. But back when it was just a fringe idea, wouldn’t my view of the haystack problem have committed me to taking the theory seriously and prevented me from pointing to the silence of the scientific establishment as good evidence against it?

I have three responses to this objection:

First, giving the establishment’s silence zero evidential weight is not the same as giving a theory positive support. Even if I didn’t count the scientific establishment’s silence as a reason to object to the vaccine-autism theory, I might still have rejected it on its merits: it did not have strong evidentiary support or a convincing theoretical justification. There are lots of junk theories out there; I’m not saying we should start believing every crackpot who comes along. I’m just saying that we should evaluate—on their individual merits—any ideas that science hasn’t shouted down.

Second, I guess I am saying, at least a bit, that people should have taken that theory seriously before the science came in to reject it. When I hear about new theories—“freeze-dried poop capsules may help fight obesity,” for example—thinking about the haystack problem makes me a bit more inclined to give them real thought. Yes, this means taking the anti-vaxxers of the world seriously until they are shouted down by science. But it also helps not to miss the Mendels of the world. And the whole point of our haystack-problem discussion is that science will get around to disproving the anti-vaxxers pretty quickly, but may miss the Mendels for decades or forever.

Third, I want to emphasize that I’m talking about what we ought to do when we’re really trying to figure out the truth of the matter. We may not always be doing that, and that’s fine. Sometimes, we may be motivated to fit in or avoid looking foolish—and it might be better to avoid the unusual errors that look dumb even if the price is making a bunch of common errors that no one will care about. One thing that following the scientific establishment has going for it is that it is the scientific establishment, and you’re unlikely to look like an idiot if the only ways you’re wrong are the same ways that a bunch of fancily credentialed experts were also wrong.

Another thing the scientific establishment has going for it is that it’s usually not trying to sell you anything.[FN 2] If the people pitching alternative theories are trying to sell you something, then adopting a sort of epistemic learned helplessness may be the safer course of action. This is especially true if you lack some combination of the time, educational background, energy, and patience that it would take to evaluate the question on its merits (with “on its merits” here understood to include accounting for biases in the sources of available information).

* * *

In sum: if what you really want is to get at the truth, the haystack problem should cause you to treat the scientific establishment as a rare sort of creature that is only ever capable of saying “no.” If science tells you it has studied something, and that thing is definitely wrong, then believe it. But if science tells you some theory is right and that nothing better is out there, well, maybe. And if science tells you what to do, take very seriously the possibility that science just hasn’t looked at a better option.

Because of the haystack problem, we ought to be much more open to ideas outside the scientific establishment, so long as science hasn’t yelled those ideas down. And maybe we won’t miss the next Mendel, even if science does.

Ok, looking over this post, more than half the links are to Scott Alexander’s blogs. Maybe that’s somewhat defensible because this series started off as a response to one of Scott’s posts. But still, it seems excessive—I don’t want this to turn into a fan blog. Next time, I’m cutting myself off; no more of Scott’s posts for a while. (Instead, I’ll be arguing with a different one of my favorite bloggers—and on a totally different subject.)

7 thoughts on “On Ignoring Good Ideas (in science!) — Part 5”

  1. I enjoyed reading this. I’m sorry to come late to the comments section.

    Your posts have focused on the haystack of ideas that were ignored by the scientific community and never tested, but I worry about a much larger haystack: the ideas that no one has ever had. There is an infinite variety of scientific theories that no one has thought of. There is even an infinite variety of theories that are compatible with all data that any human has ever seen (or will see)! This haystack is HUGE, and it is impossible for us to sort through it. Even if we did manage to examine every possible theory, we would find that many theories match the data equally well. How can we ever find the Truth?

    In my opinion, the fact that data is not sufficient to discriminate between many similar theories makes focusing on which theories can be disproven essential.

    1. There is an infinite variety of scientific theories that no one has thought of. There is even an infinite variety of theories that are compatible with all data that any human has ever seen (or will see)! This haystack is HUGE,

      Agreed.

      and it is impossible for us to sort through it.

      Why do you say that?

      It seems to me that we can definitely sort through the theories. For example, a theory that says the sun is governed by an unknown rule that causes it not to rise starting next Tuesday would technically be compatible with all the data that any human has ever seen, but it seems much less plausible than a theory that predicts the sun will come up next Tuesday largely the way it did last Tuesday. It seems to me that one of the key jobs of science–of any epistemological framework, really, but I’m focusing on science–is to help us sort through those theories and distinguish the plausible from the implausible.

      Moreover, I think science generally does an ok job of this and, on the whole, seems to be moving us closer to truth. So my critique is more limited: that science sometimes goes so far as to reject plausible theories–by overlooking them–when it shouldn’t. But it’s not a global critique of the entire project, if that makes sense.

      I think Science is too likely to screen something out as nonsense and ignore it, but I also acknowledge that Science does a pretty good job of correctly screening out a lot of nonsense. Put differently, Science’s nonsense filter has too many false positives; that doesn’t mean that it has no true positives or is worse than chance or any more extreme claim I could be making.
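      To make “sort through those theories” a little more concrete, here’s a toy sketch (my own illustration, with made-up numbers and a made-up penalty, nothing rigorous): score each theory by how well it fits the observed data, minus a penalty for every ad hoc rule it tacks on. Both theories below fit every past sunrise equally well; the only difference is the unexplained extra rule.

      # Toy plausibility ranking: identical fit to the data, different complexity.
      theories = {
          # name: (log-likelihood of all observed sunrises, number of ad hoc extra rules)
          "sun keeps rising the way it always has": (0.0, 0),
          "sun stops rising next Tuesday":          (0.0, 1),  # same fit, one unexplained rule
      }

      PENALTY_PER_RULE = 5.0  # arbitrary weight; the point is only that it is greater than zero

      def score(fit_loglik, extra_rules):
          # Higher is more plausible: data fit minus a simplicity penalty.
          return fit_loglik - PENALTY_PER_RULE * extra_rules

      for name, (fit, rules) in theories.items():
          print(f"{name}: score = {score(fit, rules):.1f}")

      The numbers are arbitrary; the point is just that once simplicity (or any other plausibility consideration) gets some weight, theories that are empirically identical need not come out tied.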

      1. I say that it may be impossible to sort through the haystack because two theories may give EXACTLY the same predictions — not only about data that has already been observed but about all possible future observations. Also, metatheoretical considerations, like which theory seems more “plausible” or simpler, can be open to debate. An important example is the continuing indecision about the interpretation of quantum mechanics. There are several quantum theories that make exactly the same predictions for all measurements but that make radically different claims about the nature of reality. Because their predictions are identical, physicists have spent decades arguing about which is most plausible or parsimonious. The debate has been interesting, but we have not made any progress toward deciding which interpretation matches reality.

        In my opinion, this same situation may happen in other scientific domains. That it does not happen more often is only because humans’ creativity in inventing theories is limited, and usually scientists are poorly motivated to think of an equivalent theory when we have one that works.

        1. I don’t think that works. Yes, quantum mechanics supplies one example where, given two theories that are equally consistent with observed facts, plausibility/parsimony did not cause one to win out over the other.

          But from that, you seem to conclude that plausibility/parsimony will never (or, at least, rarely) cause one theory to win out. But that seems like a strong claim. As I see it, those are just two theories that happen to be (roughly) equally plausible. But there are many other situations where two theories might have very different plausibility even though both are equally consistent with observed reality—a child might present the theory that an owl flew in the open window and snatched the cookie, but that seems much less plausible than the theory that the child is lying and ate the cookie. Even if both theories are consistent with all observed facts, one is overwhelmingly more plausible.

          (If your position is that there are not only infinitely many theories that are consistent with observed reality but also infinitely many equally plausible theories, then I think I disagree but would be interested in hearing why you think that.)

          (Also, apologies if any elements of this blog were briefly an obnoxious shade of yellow. I was tweaking the formatting while at the same time you were commenting, so things might have been a little odd.)

          1. > If your position is that there are not only infinitely many theories that are consistent with observed reality but also infinitely many equally plausible theories, then I think I disagree but would be interested in hearing why you think that.

            That is a good summary of my position. I would add the nuance that the theories may not be exactly equally plausible, but I have no algorithm that calculates or measures the plausibility of a theory. A sufficiently clever person (or AI or God or whatever) can always think up a theory that is consistent with all observations, makes exactly the same measurable predictions as some existing theory, is approximately as plausible, but makes significantly different claims about the truth of reality.

            The mere potential existence of such a competing theory makes me uncertain that my current theory matches reality.

            I would agree that (at least so far in the history of science) usually we have been able to use plausibility/parsimony to decide between competing theories, but I am not certain that will continue as our theories become even more abstract. It seems to me that the interpretative problems of quantum field theory and string theory are even more difficult than those we see in non-relativistic quantum mechanics.

            (I didn’t notice any weird formatting problems. 🙂 )

  2. The haystack problem is a major issue in science, and it’s one major reason why I try hard to generate a comprehensive list of steelmanned hypotheses in my own research. Thanks for writing this series. I enjoyed it greatly. (Found your blog via the recent SSC classifieds thread.)

    However, I disagree with your answer in part 5 that we should only pay attention to what science says is wrong. I think this is true only if scientists in a field exercise due diligence. Often, when someone does consider multiple hypotheses, they strawman or weakman certain hypotheses and then claim this proves those hypotheses wrong. Ultimately, if you want something done right, in many fields this means you have to do it yourself. (This certainly seems true in my field, liquid jet breakup.)

    You can find a lot more examples of the haystack problem in papers identified as “sleeping beauties”: http://www.nature.com/news/sleeping-beauty-papers-slumber-for-decades-1.17615

    I think a better solution to the haystack problem would be encouraging comprehensive literature surveys, steelmanning of hypotheses, and better hypothesis generation (to take into account some of Scott Glancy’s criticism; I’ve thought about this exact problem many times before). I’m aware of precious few scientific papers that do the right things on even a basic level. If I started a journal, it would certainly set these as the basic standard for papers in the journal.

    One contributor to the haystack problem: papers not written in English tend to be unfairly neglected. It’s not that these papers are being ignored per se; rather, people aren’t even aware that they exist. In liquid jet breakup research, I’ve been amazed by the volume of high quality Russian research that no one outside of Russia seems to be aware of. It’s not difficult to find citations for many of these papers. Just open up a translation of any Russian textbook on the subject (there are several, and some of them are available online for free) and look at the references. Many of these papers have been translated into English, but you often need to know how to use an interlibrary loan service to get them. To most researchers, if a paper isn’t available online, they have no idea how to obtain it. That’s also a problem with getting copies of the original Russian papers. There’s the further problem of translating foreign language papers into English, which is much easier today thanks to online translation tools like Google Translate, but still not trivial. All in all, these obstacles mean the foreign literature can be a good source of neglected ideas.

    1. Yeah, I agree with a lot of what you’re saying.

      I think the main difference is that you’re coming at the problem from the position of a fellow scientist. So for a scientist in the field I think “encouraging comprehensive literature surveys, steelmanning of hypotheses, and better hypothesis generation” is a better solution. Similarly, a domain-expert is in the position to say “if you want something done right, in many fields this means you have to do it yourself.”

      From my position as a non-domain expert, though, I don’t think those solutions are good ideas. If I were to spend the next six months studying liquid jet breakup as a full-time job, I am not sure I would have the knowledge to reach any better conclusions than I can by reading the (likely misleading and incomplete) expert reports.

      I’m also quite sure that I’m not going to spend the next 1,000 hours building up domain knowledge in liquid jet breakup. More generally, it’s inevitable that we’ll all have important—even critical—gaps in our knowledge. And, in those gaps, we have to confront the question of how much to trust existing expertise. My claim is that—when we can’t do it ourselves—we should pay more attention to what science says is wrong than to what it says is right. I agree, though, that when we can do it ourselves, we should.

      (Thanks for the kind words and for the link. I have another post (series?) that I want to write along similar lines, and I’ll definitely use that link when I do.)
