Principal Problems with Principles
This is one of a series of essays discussing problems with the way people commonly think about science. This particular essay looks at misunderstandings concerning the nature and value of scientific knowledge, the product of scientific work. Our focus is the concept of truth as the goal of science.
Many people believe that science is the best route, if not the only route, to truth about the natural world. Other people, including many scientists, believe that scientific knowledge may not be perfectly true, but it is closer to the truth than other sources of knowledge and belief.
Both those views are misguided, not because of any problem with scientific knowledge itself, but because of our overly simple beliefs about it. In this essay I will argue that totally correct knowledge -- "truth" -- is neither the goal, nor the product, nor any part of the process of scientific work.
Thinking about the goal of scientific work as perfect knowledge, whether as something we actually attain or as something we approach, distorts our understanding of the process. It was only by giving up clinging to certainty that science became such a powerful source of understanding. We endlessly continue improving our ways of studying and thinking about the natural world, with no access to standards of perfect knowledge and no need for them.
Although the root meaning of the word "science" is "to know," a word meaning "to learn" or "to explore" might have been more appropriate. Improving the various aspects of our knowledge, and the tools we use to advance it, is what science seems to be all about. Every aspect of the process and contents of science is open to repeated examination, criticism and revision.
Sometimes these revisions are quite spectacular. For example, the Earth used to be the center of the Universe. Then we learned that the Earth was revolving around the Sun. Later we learned that the Sun and Earth together were revolving around the center of our local galaxy (part of which we see as the Milky Way). These days we know that there are countless other galaxies, and we have no reason to suppose that ours is central. Indeed, we aren't sure it makes any sense to say that the Universe has a center at all.
The unceasing flow of scientific progress makes talking about the truth of scientific knowledge seem quite paradoxical, actually. How can knowledge that is completely correct -- which is what we mean when we say that it is true -- keep changing all the time?
We know that many facts and principles that we currently accept will be replaced by ones that are obviously superior -- not just a little closer to the truth, but whole new ways of looking at things -- theories that reveal facets of the natural world that today we can't even imagine. That progress is obviously quite wonderful, but it does create a problem for people who want to believe that science produces or approximates truth: We have no way to know if what we currently believe about any particular aspect of the natural world is the final word on the subject -- or if it will someday be thoroughly transformed in the light of new discoveries, more powerful analytical tools, and more accurate measurements.
In this essay we look more deeply into our conviction that knowledge must be true to be valuable, and examine the notion of truth itself -- the ideal of completely correct knowledge. The idea that knowledge can be true and should be true is actually just another concept, a theory, a belief about how things are or how they could be or should be. This belief has no particularly privileged status: As with other notions in science, our ideas about knowledge and truth have changed many times, and may well change again. In the main part of this essay, titled Changing Concepts of Truth in Science, we'll look at how some of these changes have come about.
Looking into some details of the history of science, and of peoples' ideas about science, we see that thinking about scientific theories as true, or as approaching truth, is actually quite suspect. Sometimes concepts in science outlive their usefulness: This may be what has happened with the idea that the Universe has a center, for example. And it may be that the same fate awaits the concept of truth as the goal of science.
Science is not a defective process that should produce truth, but can't quite get the job done. Perhaps at this point we can afford to take a good look at how the concept of truth is used in talking about science, and ask ourselves if it really serves any useful function. The teaching of science might well do without it, and the practice of science seems not to need it at all. We might consider giving up the imaginary ideal of perfect truth as a goal, and replacing it with the idea of continually improving understanding. That's what we conclude in the final part of this essay.
Scientific research is very good at improving our understanding of the natural world, including ourselves. Many of the things that people do, like learning and using language, like feeding themselves, like having and raising children, have been carefully studied by scientists with very interesting and useful results. However, science itself, which is after all another type of human activity, hasn't been studied much, scientifically. As a result, many of our beliefs about science don't really make much sense.
This article is about one of those beliefs, the notion that science is a method for discovering truth. We are discussing this now not just from an interest in the history of science, but because confidence in the certainty of scientific knowledge still distorts our understanding of what scientists do.
From the time of the discovery of the basic physical principles which we know as Newton's Laws, in the middle of the Seventeenth Century, until the revolutionary developments in physics and mathematics at the beginning of the Twentieth Century, scientifically educated people believed that science produces completely correct knowledge.
In an amazing feat of sustained intelligence, Copernicus, Galileo, Tycho, Kepler and Newton had developed a body of research that combined observation, experimentation and theory in a new way. The resulting Laws of Motion and Universal Gravitation seemed to explain beautifully every movement on the Earth and in the heavens with a small set of simple principles. Deductive certainty, familiar from the logical precision of geometry, guaranteed the truth of the natural laws revealed by science.
Consider Magee's account, in his Confessions of a Philosopher, of the unique certainty once attributed to scientific knowledge.
Toward the end of the Nineteenth Century this view was still dominant, even stronger (if that is possible) after two hundred years of dramatically successful applications and extensions of Newton's principles. Indeed, after Maxwell's equations for electromagnetism were added to the list of well established physical principles, one could hear in scientifically sophisticated company the quite serious claim that physics was finished — nothing of real importance remained to be explained, nothing which could not be understood by application of the principles which were already known.
This confident promotion of scientific knowledge as infallible and nearly complete fell silent when Einstein's two amazing theories, the Special and General Theories of Relativity, along with the bizarre but powerful methods of quantum mechanics, emerged as the new physics.
The formula for certainty that Magee describes can be summarized like this: Careful Observation + Careful Deductive Reasoning = Completely Correct Knowledge. By the first decades of the Twentieth Century, all three of the components of this formula for infallible scientific truth had become highly suspect.
On the left side of the formula, observations (facts) were being revealed as conceptual in nature, just as theoretical in their own way as the more general theories they supported. Further, deductive logic clearly depended on human judgment, in several different ways. These two developments demolished the seeming infallibility of both of the ingredients in the recipe.
Furthermore, it became clear that new theories sometimes replace (rather than merely adding to) older theories — which would certainly remove any need to explain how scientific theories could be infallible, since they obviously are not. Scientific theories, even extremely well developed and useful scientific theories, can be incorrect — not just incomplete, but wrong.
That is the main focus of this particular essay. To simplify the discussion, let's call it the Certainty Paradox: Knowledge that was regarded as certainly correct turned out to be incorrect.
You may wonder why we're making such a big deal about this, because there's obviously an easy answer to the certainty paradox: Whoever believed that the knowledge was certainly correct was just wrong. The knowledge wasn't correct, and what's more, the belief that the knowledge was certainly correct was also wrong.
When the new physics challenged the general confidence in science as certain knowledge, people could have simply accepted the obvious implication, that scientific knowledge isn't certain at all. However, that is not what happened. That challenge elicited a number of different responses, responses that are very important because they form the basis of the currently common views about the nature of scientific knowledge.
Scientific Knowledge as True but Incomplete
One way to deal with this contradiction in our beliefs about scientific knowledge would be to ignore it. Some scholars actually did make remarkable efforts to deny and discredit these problems: They continued to believe, even after fundamental revisions of physical theory and other scientific theories, that science inevitably produces a steadily increasing store of completely correct knowledge. The story they tried to tell claimed that the older theories were not incorrect, but merely incomplete. Many people still believe this.
Thinking about the revolution in physics can help to put this claim in perspective: Einstein and his colleagues replaced the comfortable Newtonian view of the world as a well-ordered machine. The new physics was a quantum and relativistic Wonderland, in which the ability to "believe six impossible things before breakfast" became an important part of the physicist's daily work.
Now, it may be true that Newton's Laws can reasonably be considered as special cases of relativity, under conditions of low speeds, low masses, and low energies. However, that somewhat misses the point: Newton's Laws were supposed to explain all movements of all objects, on the Earth and in the heavens. They were supposed to apply to all of physical reality. They don't.
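The "special case" claim can be made concrete with a standard textbook expansion (this derivation is mine, added for illustration; the essay itself does not work it out). The relativistic kinetic energy reduces to Newton's familiar expression when the speed v is small compared to the speed of light c:

```latex
E_k = (\gamma - 1)\,m c^2,
\qquad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
\;\approx\; 1 + \frac{v^2}{2c^2} + \frac{3v^4}{8c^4} + \cdots
\qquad (v \ll c)
```

```latex
E_k \;\approx\; \frac{1}{2} m v^2 \left( 1 + \frac{3v^2}{4c^2} \right)
\;\xrightarrow{\;v/c \,\to\, 0\;}\; \frac{1}{2} m v^2
```

At everyday speeds the correction term is vanishingly small, which is why Newton's formula works so well -- and why it nevertheless fails, as the essay argues, to describe all of physical reality.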
Let's look at another example. Two hundred years ago, we knew nothing about dinosaurs. Then we began to learn about them, and one of the things we learned was that the last dinosaurs became extinct 65 million years ago. Everyone who knew anything about dinosaurs knew that they were extinct. Although there were many mysteries and big gaps in our knowledge about dinosaurs, the fact that they were extinct was one of the things we could be sure about. However, the theory is now being revised: It seems that birds are descended from dinosaurs, so they aren't all extinct. The theory that dinosaurs were all extinct was partly wrong.
Please note that that is not at all the same as saying that the theory was correct, but incomplete, with some dinos surviving on some island that no one knew about, for example. The same scientists who studied the evolution of dinosaurs were studying the evolution of birds. The theory clearly made incorrect statements about a part of its domain.
So the idea that there is no real problem, because older scientific theories were not incorrect, but merely incomplete, is just wrong.
Scientific Knowledge as Statistical Approximations of Truth
Far more plausible than these efforts to deny that there is a problem are the results of trying to deal with the certainty paradox more directly. One strategy for facing the difficulties and working out their implications led to a view that is still quite popular -- the notion that scientific knowledge is never perfectly true, but science gets closer and closer to the truth. However, this notion, that scientific findings are approximations to the truth, doesn't work either -- although many people, including many scientists, believe it.
If we take this sort of claim seriously, we should be clear about what it means. However, although there are several obvious ways of trying to do that, none of them works, except in a limited range of cases -- cases which are not at all like the problem that led to the paradox in the first place.
One way of being clear about the notion of getting closer to the truth is the idea that scientific knowledge gets more and more accurate the way a measurement gets more and more accurate, as scientists devote their time and intelligence to determining it more precisely. Another way is the idea that scientific knowledge becomes more and more likely to be true (or that scientists are more and more justified in being confident that it is true).
Let's consider these two versions of the relationship between scientific knowledge and truth. Regarding the first, if measurement were all there is to science, then of course it would make sense to say that, although we don't have the exact values of all (or any) of our important measurements, they are getting more and more accurate and thus science is getting closer and closer to the truth.
The second version has two variants. One is based on what is called inductive logic. The classical example is to imagine observing swans: One after another, we see that they are white, and eventually we have seen so many white ones, without seeing any of another color, that we begin to suspect that they are all white. Even though we can't be sure that all swans are white, the more white ones we see, without any exceptions, the more likely it is that they are all white. (This example works better if you stay away from Australia, where the native swans are black.)
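One classical way to quantify this inductive intuition is Laplace's rule of succession (my own illustrative choice, not something the essay relies on): after seeing n white swans and no swans of any other color, the probability that the next swan is white is (n+1)/(n+2). A minimal sketch:

```python
from fractions import Fraction

def next_swan_white(white, nonwhite=0):
    """Laplace's rule of succession: probability that the next swan
    observed is white, after `white` white and `nonwhite` non-white sightings."""
    return Fraction(white + 1, white + nonwhite + 2)

# Confidence climbs with each uniform observation...
for n in (1, 10, 100, 1000):
    print(n, float(next_swan_white(n)))
```

Note that the probability approaches but never equals one, no matter how many white swans we tally -- exactly the limitation of induction the essay goes on to discuss.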
The other way of being clear about the idea that scientific knowledge becomes more and more likely to be true is an analogy to statistical hypothesis testing. Collecting more and more evidence decreases the probability that a pattern could have resulted from random variation, rather than resulting from the phenomenon we are trying to study.
These several versions of the statistical approximations or probable truth metaphors have clear meanings when we are talking about certain types of scientific evidence. However, if we try to extend any of those analogies to the conclusions, to the principles and theories that we base on our evidence, none of them works.
The analogy to measurement fails because neither of the main methods we use for improving measurements can be applied to theories or principles. With a measurement we can make a good estimate of the uncertainty of the current values by repeating the measurement a number of times, as carefully as we can, and looking at how large the differences are among the various values. We can reduce that uncertainty by figuring out how to make the measurement more precisely. We can see a definite reduction in the variability, which means that typical measurements tend to be closer to the true value. Or, if we don't know how to reduce the variability of the measurements, we can still improve the quality of our measurements just by measuring more instances of the phenomenon, and averaging the results. However, neither of these standard techniques for improving measurements makes any sense at all if we are talking about improvements in theories.
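The averaging technique mentioned above can be sketched in a few lines (the quantity and noise level are invented for illustration): averages of many noisy measurements scatter less and less around the underlying value as the sample grows.

```python
import random
import statistics

random.seed(42)
TRUE_VALUE = 9.81   # the quantity being measured (hypothetical)
NOISE_SD = 0.5      # scatter of a single raw measurement

def averaged_measurement(n):
    """Average of n independent noisy measurements of the same quantity."""
    return statistics.fmean(random.gauss(TRUE_VALUE, NOISE_SD) for _ in range(n))

def scatter(n, repeats=300):
    """How much size-n averaged measurements vary among themselves."""
    return statistics.stdev(averaged_measurement(n) for _ in range(repeats))

# Scatter shrinks roughly as 1/sqrt(n): averaging really does converge.
print(scatter(1), scatter(25), scatter(100))
```

This is precisely the kind of convergence a measurement offers and a theory does not: there is no analogue of "averaging 300 repetitions" for a theory of gravitation.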
The analogy to inductive logic fails for many reasons. Obviously, as in the case of the swans, our sampling plan may be biased toward observations that are easy for the observer -- convenient, inexpensive, legal, or whatever. We've been sampling from a bag of colored balls, and found that it contains only balls of a certain color -- but there may be other bags we haven't sampled at all.
Even if our sampling pattern is relatively comprehensive and unbiased, in fact, even if you could examine all the instances of a phenomenon, induction still couldn't give you certain knowledge. Why? One reason is that the population that we are sampling from can suddenly change in ways that overturn our findings.
Consider the example of the dinosaurs again, from another perspective: We are studying dinosaurs, and we suspect that they are extinct. Every one that anyone has ever found, anywhere in the world, is fossilized, embedded in rock strata that are at least 65 million years old. Lots of dinosaurs are found, many different sorts, that lived for over a hundred million years before that, but none are found in rocks younger than that. We can imagine the inductive logician tallying each new find in the 'Extinct' column, and looking smugly at the empty 'Living' column. Our logician knows that inductive evidence can never lead to certainty, but the case is so strong now that no one outside the comic books and science fiction novels suggests that any dinos are still living.
But then the theory changes: The characteristics of dinosaurs are compared carefully to the characteristics of birds, and birds are reclassified as dinosaurs. Now the dinos suddenly are not extinct at all -- not because someone found living sauropods in a hidden geothermally heated valley in Antarctica, but because all the millions and millions of birds that we knew about all along suddenly were dinosaurs. Even the black Australian swans are dinosaurs.
The analogy to statistical hypothesis testing fails for a similar reason: In the realm of improvements in theories we are no longer talking about the probability that the result could have occurred by chance -- we are talking about the probability that the phenomenon is caused by anything other than the cause specified in the current theory: chance or any other cause or combination of causes. Obviously, we have no way at all to measure the probability that a particular theory will someday be revised. We can estimate it, though: It's quite likely.
Most, if not all, of our current theories will eventually be revised in various ways. Improvements in our knowledge don't make further improvements less likely. Actually, the real situation is exactly the opposite: The more we learn about a particular topic, the easier it becomes to learn even more.
If we were actually approaching perfectly correct knowledge, the situation would be very different. We can see a good analogy by looking at what happens when programmers debug their computer software: At first, problems are easy to find and the rate of bug fixes is very high. As the process continues, the bugs get harder to find, and the rate of revisions drops off. That is what would be happening in science if we were actually approaching truth -- the rate of new discoveries and other improvements would be decreasing, not increasing.
When we consider improvements in theories, the analogies to precision of measurement, to inductive reasoning, and to statistical hypothesis testing all break down. We are no longer talking about moving closer to a certain point on a line, or becoming more and more confident that a simple hypothesis is correct. In the realm of improvements in mathematically expressed theory the statistical approximation metaphor is useless, and even the more general metaphor of getting closer to the truth is strained. When we consider qualitative theories and evidence, the metaphor of getting closer to the truth falls apart entirely, except as some sort of vague abstraction that will certainly wind up being more trouble than it is worth.
In the complex realm of possible theories, models, different families of equations or symmetry groups, classification schemes and all the rest, thinking of scientific progress as moving closer to some perfectly correct ideal makes the metaphor so abstract that one begins to realize that it is no more than a metaphor.
Metaphors and analogies are useful, sometimes even powerful cognitive tools, if we know them for what they are. In this case, though, we're talking about the leftover intellectual baggage of a failed romance between science and wishful thinking. The notion that science yields certain knowledge was incorrect, and trying to save it by talking about approximations to the truth doesn't work either.
Our understanding of ourselves and our world, and our understanding of science itself, grew out of pre-scientific beliefs and practices. One of those beliefs is the concept of truth.
We have been discussing some of the history of the concept of truth as applied to scientific knowledge. In light of that history, we can see that to regard scientific knowledge as true, or even as approaching truth, is actually quite suspect.
Scientific concepts are often refined and clarified, but sometimes concepts are seen to have outlived their usefulness, and we just give them up. This may be what has happened with the idea that the Universe has a center, for example. And it may be that the same fate awaits the concept of truth as the product or goal of science.
The pre-scientific notion that the Earth is the center of the Universe was widely believed before Copernicus discovered that the Earth is revolving around the Sun. That discovery came at the very beginning of what we now call "science" -- it started the series of developments that led to Newton's Laws of Motion and Universal Gravitation. These powerful principles seemed to offer a new source of truth: not divine revelation, just the careful work of intelligent people.
Our later discovery that the Sun and Earth together were revolving around the center of our galaxy meant that even our Sun could no longer be considered the center of the Universe. The opening image on this page shows another galaxy, M100, as seen by the Hubble Space Telescope. M100 is so far away that the light that reaches our telescopes left that galaxy over 50 million years ago. Scientists now can see galaxies that are much, much further away than that -- billions of light years away.
We have no reason to suppose that our galaxy is any more central than any of them. All distant galaxies are speeding away from us, evidence for the uniform expansion of the Universe. According to the currently accepted interpretation, any observer, anywhere in the Universe, would observe the same thing: The further away a galaxy is, the faster it moves away from the observer.
All these dramatic improvements in our understanding haven't shown us the true center of the Universe. Instead, they suggest that looking for the center of the Universe may not be a very interesting way to think about things.
The idea that scientific knowledge is true, or should be true, seems similarly useless. The pre-scientific notion -- that principles are valuable if and only if they are certainly true -- is obviously wrong. Newton's Laws were incredibly useful, and still are, even though they were wrong about most of what they supposedly explained.
When people learned that scientific theories are often incorrect in various ways -- not just incomplete, but wrong -- they did not give up the ideal of certain truth. Instead, they became convinced that truth plays a more indirect role in science.
We now have quite a varied menu of options: Many people still believe that scientific knowledge is simply true; others believe that it is true as far as it goes, but incomplete; many believe that science approaches truth as an unattainable goal; and many people believe that science produces knowledge that is more and more likely to be correct. Some folks think about science in several of these ways, using one or the other at different times as the mood strikes them.
One option that never made it onto that list is the one I'm suggesting here: Let's just give up the pre-scientific notion that principles are valuable only if they are true for certain.
What would science look like, without truth as its imaginary goal? It would look exactly the way it looks now, but a lot of people would be less confused about it. Knowledge, or understanding, is the real goal of science, and knowledge doesn't need to be certain in order to be useful.
What do we really gain by saying that our research has brought us closer to the truth? What truth? Isn't that just vaguely waving at some kind of imaginary perfectly correct set of ideal facts and principles, as if there were actually something of that sort that we are talking about? What is the point of complicating the discussion by introducing complex imaginary ideals of perfect knowledge?
We can easily improve our ability to communicate with each other by giving up talking as if truth were the goal or product of science. Instead, we can simply say that the theory, or our understanding, or our measuring instrument, or science in general, is improving.
How can we know science is improving, if it has no goal to get closer to? It's very simple: That's what scientists have been doing all along. We've never been able to compare our actual knowledge to the imaginary perfection called 'truth,' and we never needed to do so. We improve our knowledge by finding errors in the ways we've been thinking about things, or better ways to observe and measure things.
Look at these two images of the galaxy M100: The one on the right is the image you saw at the top of this page, taken with the Hubble Space Telescope. The one on the left was taken before astronauts installed improvements in the optical system.
Scientists didn't build the Space Telescope because they thought it would give correct images; they built it because they expected it to provide better images. The Hubble's successors will continue the process. There seems to be no end to the improvements we can make, in just about every aspect of every field of science, and they don't require that we compare our theories and methods to an imaginary goal of perfect knowledge.
We cloud the minds of our readers when we talk about scientific knowledge being true as far as it goes, but incomplete; or when we say that scientific knowledge gets more and more likely to be correct, but we can never really be certain that it is true. Science is not a defective process that should produce truth, but can't quite get the job done. We started with pre-scientific beliefs, found various ways of improving them, and have continued doing that, improvements following other improvements, seemingly without end. There's no reason to expect this process to stop -- indeed, the growth of our knowledge in almost every field is accelerating.
I like to remind people of how insects grow: A caterpillar is not a defective butterfly. Similarly, Newton's Laws were not a defective Theory of Relativity. Giving up the imaginary goal of perfect knowledge, along with all its lame offspring like "statistical approximations to the truth," frees us to celebrate the process of discovery and learning without having to pay lip service to the antiquated notion that knowledge must be certain to be worth anything.
Facing the facts about our scientific evidence and principles won't stop the growth of knowledge. Science never had the infallibility that simplistic beliefs attributed to it -- it never could have had it, and it never needed to. All our progress in understanding has been accomplished with just the limited, fallible but endlessly self-correcting cognitive equipment that human beings have been working with all along. Facing the Terrible Truth about Truth will make it easier for us to celebrate that process.
The core of M100, a large spiral galaxy, similar to our own Milky Way, which contains over 100 billion stars. M100 is so far away that we see it as it looked over 50 million years ago. Discovered in 1781 in the constellation Coma Berenices, it was one of the "nebular objects" believed to be no more distant than the stars.
David Berlinski; Newton's Gift: How Sir Isaac Newton Unlocked the System of the World.
James Burke; The Day the Universe Changed.
Brian Magee; Confessions of a Philosopher.
Revised on October 10, 2001
Copyright © 2001 Dharma Haven