  The Problem with Truth and Certainty

  As with most misconceptions, there is a kernel of truth in the critics’ case that has gone awry. Science aims at truth. It does this by attempting to test its theories rigorously against empirical data. The vetting is savage. As we saw with Karl Popper, if a theory meets with falsifying data, there is a problem. We may not know at the outset whether the theory can be saved through judicious modifications but, on pain of ceasing to be scientific, eventually something must be done. Yet even if a theory passes every test with flying colors, we still cannot be certain that it is true. Why not? Because, as we shall see in this chapter, that is not how science actually works. The only thing we can be certain of in science is that when a theory fails to be consistent with the empirical evidence there must be something wrong, either with the theory itself or with one of the auxiliary assumptions that support it. But even when a theory is consistent with the evidence, we can never be sure that this is because the theory is true or merely because it has worked out so far.

  As Popper, Kuhn, and many others in the philosophy of science long ago recognized, scientific theories are always tentative. And that is the foundation for both the strength and the flexibility of scientific reasoning. Any time we are dealing with empirical data, we face the problem that our knowledge is open ended, because it is always subject to revision based on future experience. The problem of induction (which we met briefly in the last chapter) is this: if we are trying to formulate a hypothesis about how the world works, and are basing this hypothesis on the data that we have examined so far, we are making a rather large assumption that future data will conform to what we have experienced in the past. But how do we know this? Just because all of the swans we have seen in the past are white, this does not preclude the existence of a black swan in the future. The problem here is a deep one, for it undermines not just the idea that we can be certain that any of our proposals about the world are true (no matter how well they may conform to the data), but also the idea that, technically speaking, our proposals are even more likely to be true, given the vanishingly small size of the sample of the world we have examined so far compared to the set of possible experiences we may have in the future. How can we be sure that the sample of the world we have seen so far is representative of the rest of it? Just as we cannot be sure that the future is going to be like the past, we cannot be sure that the piece of the world we have met in our limited experience can tell us anything at all about what it is like elsewhere.

  Naturally, there is a remaining debate here (courtesy of Karl Popper) over whether science actually uses induction. Even though science tries to draw general conclusions about how the world works based on our knowledge of particular circumstances, Popper proposed a way to do this that avoids the uncertainties of the problem of induction. As we saw in chapter 1, if we use modus tollens, then we can learn from data in a way that is deductively valid. If rather than trying to verify our theory we instead try to falsify it, we will be on more solid logical footing.
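
  To put the schema on the page (a gloss in standard logical notation, not Popper's own wording): if theory T implies observation O, and O turns out to be false, then T must be false. That is, from "T → O" and "not-O" we may validly infer "not-T." Inferring the truth of T from the truth of O, by contrast, would be the fallacy of affirming the consequent, which is why verification cannot carry the same logical force as falsification.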

  But it is now time to realize that this way of proceeding faces deep criticisms. For all of the logical virtues of Karl Popper’s theory, it is questionable (1) whether his account does justice to the way that science is actually carried out and (2) whether he can avoid becoming entangled in the need to rely on positive (confirming) instances. The Duhem–Quine thesis says that it is impossible to perform a “crucial test” in science, because every theory exists within a web of supporting assumptions. This means that even when there are falsifying instances for a theory, we can always sacrifice one of its auxiliary assumptions in order to save it. According to strict falsificationism, this must seem wrong. Popper himself, however, claims that he had already anticipated this objection and accommodated it within his theory.1 According to the logic of falsification, when a theory is falsified we must abandon it. But in actual practice, Popper recognized, scientists are loath to give up a well-ensconced theory just because of one falsifying instance. Maybe they made a mistake; maybe there was something wrong with their apparatus. Popper admits as much when he writes “he who gives up his theory too easily in the face of apparent refutations will never discover the possibilities inherent in his theory.”2

  In explicating his falsificationist account, Popper nonetheless prefers to illustrate its virtues with stories of those larger-than-life dramatic instances when a theorist made a risky prediction that was later vindicated by the data. As we’ve seen, Einstein’s general theory of relativity made a bold prediction about the bending of light in a strong gravitational field, which was confirmed during the total solar eclipse of 1919. If the prediction had been wrong, the theory would have been rejected. But, since it was right, the epistemic reward was tremendous.3

  But most science does not work like this. In his book Philosophy of Science: A Very Short Introduction, Samir Okasha recounts the example of John Couch Adams and Urbain Le Verrier, whose calculations led to the discovery of the planet Neptune in 1846, when they were working (independently) within the Newtonian paradigm and noticed a slight perturbation in the orbit of the planet Uranus. Without Neptune, this would have been a falsifying instance for Newtonian theory, which held (following Kepler) that all planets should move in perfectly elliptical orbits, unless some other force was acting upon them. But, instead of abandoning Newton, Adams and Le Verrier sought out and found that other gravitational force.4

  Some might complain that this is not even close to a falsifying instance, since the theorists were working well within the predictions that had been made by Newtonian theory. Indeed, Popper himself has sometimes cited this very example as just such an instance where scientists were wise not to give up on a theory too quickly. Yet contrast this with a similar challenge to Newtonian theory that surfaced not long after the discovery of Neptune: the slight perihelion advance in the orbit of the planet Mercury, which Le Verrier himself identified in 1859.5 How could this be explained? Astronomers tried various ad hoc solutions (sacrificing auxiliary assumptions), but none were successful. Finally, it was none other than Le Verrier himself who proposed that the slight perturbation in Mercury’s orbit could be explained by the existence of an unseen planet—which he named Vulcan—between Mercury and the Sun. Although his attempts to find it were unsuccessful, Le Verrier went to his grave in 1877 believing that Vulcan existed. Virtually all of the other astronomers at the time agreed with him in principle that—whether Vulcan existed or not—there must be some Newtonian explanation. Forty years later Einstein pulled this thread a little harder and the whole Newtonian sleeve fell off, for this was an instance in which the anomalous orbit was explained not by the gravitational force of another planet, but instead by the non-Newtonian idea that the Sun’s gravitational force could warp the space around it. When the orbit of Mercury fit those calculations, it ended up being a major mark in favor of the general theory of relativity. Yet this was not a prediction but a “retrodiction,” which is to say that Einstein’s theory was used to explain a past falsifying instance that Newtonian theorists had been living with for over half a century! How long is too long before one “gives up too easily” in the face of a falsified theory? Popper provides no rules for judging this. Falsification may capture the logic of science well, but it gives few guidelines for how scientists should actually choose between theories.

  As Thomas Kuhn demonstrated, there are no easy answers to the question of when a falsifying instance should take down a well-believed theory and when it should merely lead us to keep searching for answers within the dominant paradigm. In Kuhn’s work, we find a rich description of the way that scientists actually struggle with the day-to-day puzzles that present themselves in the course of “normal science,” the period during which they work to accommodate the predictions, errors, and apparent falsifying instances within the four corners of whatever well-accepted theory they may be working under at the time.6 Of course, Kuhn also recognized that sometimes science does take a turn toward the dramatic. When anomalies pile up and scientists begin to have a hard time reconciling their paradigm with things it cannot explain, enough force builds up that a scientific revolution may occur, as the field shifts rapidly from one paradigm to another. But, as Kuhn tells us, this is often about more than mere lack of fit with the evidence; it can also encompass scope, simplicity, fruitfulness, and other “subjective” or “social” factors that Popper would be reluctant to account for within his logical account of falsification.

  But there are other problems for Popper’s theory, too, even if a hypothesis passes all of the rigorous tests we can throw at it. As Popper admits, even when a theory succeeds it cannot be accepted as true—or even approximately true—but must always inhabit the purgatory of having merely survived “so far.”7 As powerful as scientific testing is, we are always left at the end with a potentially infinite number of hypotheses that could fit the data, and an infinite supply of possible empirical evidence that could overthrow any theory. Scientific reasoning thus is forced to make peace with the fact that it will always be open ended, because the data will always be open ended too. Even if we are not “inductivists” about our method, we must admit that even when a theory has passed many tests there are always going to be more ahead.

  Popper tried to solve this problem through his account of corroboration, in which he said that after a theory had survived many rigorous tests, it built up a kind of credibility such that we would be foolish to abandon it cavalierly; as Popper put it, some theories have “proven their mettle.”8 To some ears, however, this began to sound very much like the type of verification and confirmation that Popper insisted he had abandoned. Of course, we cannot say that a theory is true just because it has passed many tests. Popper admits as much. But the problem is that we also cannot say that a theory is even more likely to be true on the basis of these tests. At some level, Popper seems to understand this danger (as well he should, for this is just the problem of induction again), but it is unclear how he proposes to deal with it.9 Remember that induction undermines not just certainty, but also probability. If the set of potential tests is infinite, then any sample we have chosen to test our theory against is infinitesimally small by comparison. Thus the fact that a theory is well corroborated cannot speak to its verisimilitude. At various points, Popper says that falsification is a “purely logical affair.”10 But why then does he introduce a notion like corroboration? Perhaps this is a rare concession to the practical issues faced by scientists? Yet what, then, to make of Popper’s claim that falsification is not concerned with practical matters?11

  Philosophers of science will continue to fight over the soul of Karl Popper. Meanwhile, many are left to draw the inevitable conclusion that inductivist-type problems cannot help but accrue even for surviving theories, even if one is a falsificationist. The idea that there are always more data that can overthrow a theory—along with the related idea that there are infinitely many potential theories to explain the evidence that we already have—is frequently dismissed as a philosopher’s worry by many scientists, who continue to maintain that when a theory survives rigorous testing it is either true or more likely to be true. But I am not sure how, in their bones, they could possibly believe this, for scientists as well as philosophers of science know full well that the history of science is littered with the wreckage of such hubris.12 Perhaps this is one instance in which some of those who wish to defend science may be tempted to pretend that science can prove a theory, even if they know it cannot. Sometimes, in the excitement of discovery, or the heat of criticism, it may seem expedient to maintain that one’s theory is true—that certainty is possible. But I submit that those who wish to defend science have a special obligation to accept the peculiarities of science for what they are and not retreat behind half-truths and wishful thinking when called upon to articulate what is most special about it. Whether we believe that science is based on inductive reasoning or not—whether we believe in falsificationism or not—we must accept that (no matter how good the evidence) science cannot prove the truth of any empirical theory, nor can we say that it is even, technically speaking, more probably true.

  Yet as we shall see, this does not mean that we have no grounds for believing a scientific theory.

  “Just a Theory”

  At this point it is important to confront a second misconception about science, which is that if a scientific claim falls short of “proof,” “truth,” or “verification” then it is “just a theory” and should not be believed. Sometimes this is expressed as the claim that another theory is “just as likely to be true” or “could be true”; others maintain that any theoretical knowledge is simply inferior.

  The first thing to understand is that there is a difference between a theory and a hypothesis. A hypothesis is in some ways a guess. It is not normally a wild guess; it is usually informed by some prior experience with the subject matter in question. Normally a hypothesis arises after someone has noticed the beginnings of a pattern in the data and said, “Huh, that’s funny, I wonder if …” Then comes the prediction and the test. A hypothesis has perhaps been “back-tested” by our reason to see if it fits with the data we have encountered so far, but this is a far cry from the sort of scrutiny that will meet a hypothesis once it has become a theory.

  A scientific theory must not only be firmly embedded in empirical evidence, it must also be capable of predictions that can be extrapolated into the wider world, so that we can see whether it survives a rigorous comparison with new evidence. The standards are high. Many theories meet their demise before they are even put forth by their proponents, as a result of meticulous self-testing in advance of peer review. Customarily a theory must also include an explanatory account for why the predictions are expected to work, so that there is a way to reason back from any falsification of a prediction to what might be wrong with the theory that produced it. (As we saw with the example of the perihelion advance of Mercury, it is also a plus if a theory can explain or retrodict any anomalies that scientists have been contending with in their previous theory.)

  Here we should return to Karl Popper for a moment and give him some credit. Although he may not have been right in the details of his account of falsifiability—or his more general idea that one can demarcate science from nonscience on a logical basis—he did at least capture one essential element that explains how science works: our knowledge of the world grows by keeping close to the empirical evidence. We can love a theory only provisionally and must be willing to abandon it either when it is refuted or when the data favor a better one. This is to say that one of the most special things about science is that evidence counts.

  The Nobel Prize–winning physicist Richard Feynman said it best:

  In general, we look for a new law by the following process: First we guess it. … Then we compute the consequences of the guess to see what … it would imply. And then we compare the computation results to nature, or we say compare to experiment or experience, compare it directly with observations to see if it works. If it disagrees with experiment, it’s wrong. In that simple statement is the key to science. It doesn’t make any difference how beautiful your guess is, it doesn’t make any difference how smart you are, who made the guess, or what his name is. If it disagrees with experiment, it’s wrong. That’s all there is to it.13

  In this sense, it is not entirely wrong to say that “scientific method” does capture something important about the process of scientific reasoning. Even if it does not quite serve as a criterion of demarcation, it does demonstrate the critical state of mind that one must have when testing a theory against sensory evidence—that is the hallmark of empirical knowledge. We see something strange, we make a hypothesis about it, we make a prediction, we test it, and if all goes well we may have a theory on our hands.14 This kind of reasoning may not be unique to science, but without it one can hardly see how science could go forward.

  A theory arises when we are prepared to extrapolate a hypothesis more widely into the world. A theory is bigger than a hypothesis because it is the result of a hypothesis that has been shaped by its clash with the data, and has survived rigorous testing before someone puts it forward. In some sense, a theory is just short of a law of nature. And you can’t get any more solid than a law of nature. Indeed, some have held that this is what scientists have been searching for all along when they say that they are searching for the “truth” about the empirical world. They want to discover a scientific law that unifies, predicts, and explains the world we observe. But laws have to be embedded in a theory. And a theory has to be more than a guess. A theory is the result of enormous “beta testing” of a hypothesis against the data, and projection of a reason why the pattern should hold for future experience. The apple falls because it is pulled by gravity. The global temperature is rising because of greenhouse gas emissions. A scientific theory seeks to explain both how and why we see what we see, and why we will see such things in the future. A theory offers not just a prediction but an explanation that is embedded in the fabric of our experience.

  Ideally, a scientific theory should (1) identify a pattern in our experience, (2) support predictions of that pattern into the future, and (3) explain why that pattern exists. In this way, a theory is the backbone of the entire edifice of scientific explanation. For example, when one looks at Newton’s theory of gravity, it is remarkable that it unified both Galileo’s theory of terrestrial motion and Kepler’s theory of celestial motion. No longer did we have to wonder why objects fell to Earth or planets orbited the Sun; both were accounted for by the law of gravity. This fit well with our experience and explained why, if you throw a ball near the surface of the Earth, it will—like the planets—trace an elliptical path (and why, if you threw it hard enough, it would go into orbit), and it supported predictions as well (for instance, the appearance and return of comets). Did it give an account of how gravity did this? Not quite yet. Newton famously said that he “framed no hypotheses” on the subject of what gravity was, leaving it for others to deal later with the anomalies of action at a distance and the question of how attraction could operate through empty space. He had a theory, but not yet a mechanism.
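
  To make the unification concrete (a gloss in modern notation, which is not how Newton himself expressed it): the law of gravity says that any two masses m₁ and m₂ separated by a distance r attract each other with a force F = Gm₁m₂/r², where G is the gravitational constant. The same single equation governs the falling apple and the orbiting planet, which is precisely the unification of terrestrial and celestial motion just described.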