The Terrible Truth About Truth
The main goal of this series of essays is to improve our understanding of science and how it works -- how scientists are able to keep making such remarkable contributions to our knowledge of the natural world. One of the most persistently confusing facets of the scientific approach is the relationship between abstract principles, including those expressed in mathematical and logical systems, and the various aspects of the actual world that the principles are intended to inform us about.
For example, over the centuries many scholars have argued about the existence of the abstract entities -- lines, points, surfaces, figures, shapes, and so on -- that appear in mathematical proofs: Do they perhaps exist in a separate world populated by ideal forms, as the Greek philosopher Plato believed?
Mathematicians and logicians have sometimes created purely formal worlds which have no known connection with the world we live in, but the sources of both mathematics and logic were firmly grounded in the familiar world, and the principles we will discuss here are those that scientists have found useful in their work.
Specifically, what I want to discuss is the psychological difference between scientific statements and logical proofs, and some of the consequences of that difference. The difference itself can be stated simply: Working with and studying physical systems and natural phenomena requires perceptual judgments that are not at all required for working with and studying logical and mathematical systems.
Where scientific observation is based on perception, logic depends on imagination. If a proof calls for a triangle with two sides equal in length, we just imagine one. We don't have to find one and measure it, or make or draw one. Even if we wanted to, we couldn't, since the lines in the proof are supposed to be perfectly straight and of infinitesimal thickness. We might develop a diagram to help us think about or remember or communicate something about the triangle in the proof, but the actual drawing will differ in many ways from the imaginary ideal triangle that the proof is talking about.
Natural systems always deviate from the ideal of any principle, in many ways. The ancient Greeks may have been reminded of this fact by the poor quality (by our standards) of their coinage. The image rarely fit precisely over the center of the coin — sometimes one edge was off the piece entirely — and often the coin was obviously not really round. With daily reminders of the difference between the artist's intention and the coin makers' result, it is no wonder that we find Plato describing two parallel worlds, the realm of perfect forms and the realm of imperfect materializations of those forms.
However, we do not really need two separate worlds to explain why perfect ideas often miss the details of actual situations. Ideas -- ideal principles -- we can consider one at a time, in our imagination; but in the actual world there are very many aspects of any situation, with various influences all acting at the same time.
We in the Western world often talk about the cause of something, as if anything that happens must have a single cause, and arguments develop over which of two alternatives is the real cause. For example, we've seen this in the arguments over heredity versus environment (nature vs. nurture) in education.
Authors with a bit more insight carefully explain that both heredity and environment influence all human characteristics. They might ask us to consider, as an example, the sounds of instrumental music, which are influenced both by the choice of instrument (the physical structure of the instrument) and by how the instrument is played. A particular difference between two musical passages may be due solely to a change in the instrument, or solely to a difference in the playing, but in general, instrumental music gets all of its various characteristics from the physics of instruments interacting with the way they are played.
Similarly, a particular difference in the traits of similar organisms (like eye color, for example) may be due to a genetic difference, but any trait or organ can develop only by the interaction of an appropriate genetic endowment with an appropriate environment.
Now please notice that we've oversimplified this discussion by dumping everything except genetic influences into a catch-all category called environmental influences. Maternal nutrition, diseases, chemical pollutants, radiation -- all the many factors that can affect the growth of an embryo -- are separate influences that can act independently. Furthermore, each of these categories, like "genetics" or "nutrition," is a catch-all term for a very complex set of influences.
The Buddhist logicians of ancient India were a couple thousand years ahead of us in understanding this. To them it was obvious that there are myriad causes of any particular event, a web of chains of causes and causes of causes, spreading back and back as far as anyone cares to go — actually, much further than anyone could conceivably go.
When we focus on one aspect of a complex situation (from this perspective, all actual situations are complex), we sometimes get a clear view of that aspect of what is happening. Comparing that principle to the actuality, though, we can notice many discrepancies.
For example, although the Earth is approximately a sphere, its surface is not completely smooth, and its overall shape is not perfectly round.
Each of the principles behind all the various aspects of these facts has a relatively simple form, when we examine it by itself in our imagination; but when we try to decide whether any aspect of the natural world fits that form, the decision is often not an easy one. Even the overall roundness of the Earth, which is obvious in a photograph taken from several thousand miles out in space, was difficult to figure out when people were limited to exploring the surface.
With so many different causal influences operating in any real situation, it is a wonder that we are able to discover any of them; yet the messiness of the natural world is often considered to be a failing of scientific knowledge. The principle says that the world is round, but it's obviously not really round, so the principle is wrong; and some people believe that science itself is deeply flawed because it is full of principles that obviously oversimplify the wonderful complexity of the real world.
Science is often wrong in all sorts of ways, but this is not one of them. Any principle which reveals relationships in the natural world will have this character. To criticize science because its perfect principles differ from the details of the natural world may reveal little more than the critic's ignorance of what a principle fundamentally is, and what it is for.
A scientific principle helps us understand our world by holding up for our inspection an important influence or pattern. Principles are intentionally simpler than the world itself. That is exactly how they help our understanding, by being simple.
We have no need for cognitive tools that merely copy the world in all its detail, even if we could get them: We have a perfectly good world already. We can appreciate principles for giving us simple ways of gaining insight into important aspects of this complex world. When a principle is generally valid, a little knowledge can go a long way.
One way that principles can be misleading, though, is when they are overextended, overgeneralized -- which they often are. Once we have discovered a principle, it is all too easy to decide that it will hold, and must hold, always and everywhere. Some basic scientific principles seem to express relationships that must necessarily be true, without any possibility of exceptions. This is one of the ways in which the power of the imagination can lead to problems.
In the history of science and mathematics we can find many examples of principles extended, in imagination, far beyond their actual range of valid application. The best known example is the case of Newton's "Laws of Motion and Universal Gravitation," which turned out to be somewhat less than universal. Although these principles were supposed to be able to explain any movement, anywhere, Einstein and the quantum physicists discovered that the range of application of Newton's amazing laws is limited.
Newton's belief that his Laws were universal didn't really cause any particular problems. Until the work on relativity and quantum physics there was no reason to doubt them, and as soon as the new evidence and theoretical analysis emerged, the new theories were generally accepted within a few years. This was very similar to what had happened a few centuries earlier, when Newton showed that the older principle "What goes up, must come down" only applied in a limited range of situations.
In the main part of this essay I want to focus on an overgeneralization of principle far more serious than Newton's belief that his Laws were universal. I want to explore the basis of the unfortunate idea that science produces completely correct knowledge.
For centuries the notion that scientific knowledge is infallible, or should be, has been distorting our understanding of how science works. Even though today most scientifically literate people would deny believing that scientific knowledge is true for certain, our beliefs about how scientific knowledge is developed were profoundly shaped by this notion, much to the detriment of real understanding of science and its methods.
From Newton's time until Einstein, the prevalent view was that scientific knowledge is certain knowledge: scientific knowledge comes from direct observation, refined and extended by logical inference. When used with appropriate care, both of these ingredients were considered to give results that were beyond question.
Since the early part of the Twentieth Century we've had clear evidence that none of the elements of this formula for scientific certainty is correct. However, popular and even professional views of science and its workings are still thoroughly contaminated with partially digested remnants of these notions. This series of essays is an effort toward cleaning up this messy intellectual legacy.
I began my discussion of these issues in an earlier paper, The Terrible Truth About Truth, which focused on the fact that scientific knowledge sometimes changes. New scientific theories may replace (rather than merely add to) older theories. That certainly removes any need to explain how scientific knowledge could be infallible, since it obviously isn't. Scientific theories, even extremely well developed and powerful ones, are sometimes found to be incorrect: not just incomplete, but wrong in important ways.
I argued that scientific theories being wrong in various ways is not a problem for the scientist, but rather an opportunity. Totally correct knowledge -- "truth" -- is not actually the goal, nor the product, nor any part of the process of scientific work. I concluded the discussion by suggesting that we should replace the idea that truth is the goal of science with the idea of continually improving understanding.
Although there's no longer any need to explain how science can be infallible, here in this paper I want to explore the question of how we ever came to expect scientific knowledge to be infallible in the first place.
Since this is not a mystery novel, I suppose it won't hurt to give a brief outline of the plot right away. We'll start with the Greek geometers, who took some useful facts about triangles and developed a way of proving that they had to be true about triangles in general. This established the idea of logical deduction as a source of certain knowledge about the natural world.
Then we jump ahead two thousand years to Isaac Newton, who developed a mathematical system that explained the movements of heavenly bodies and objects on the earth with the same set of simple principles. Scholars were used to thinking of logical deduction as a source of certain knowledge, and Newton presented his principles in the form of a series of illustrated arguments that looked a lot like geometry proofs. That's just about the whole story, in a nutshell, except that Newton's Laws worked so well that for over two hundred years the idea that they were correct -- absolutely, certainly correct -- was essentially unchallenged.
By extension, scholars naturally expected that all scientific research had the potential of yielding such impeccable results.
In this essay we'll look at some of the details of this history, and at some of the problems with these highly influential overgeneralizations. Taking a look at how these ideas emerged and how they fell from grace may help us get free from the tangled mess we find ourselves in today.
So how did the idea that scientific knowledge is true for certain come to dominate Western scholarly culture? The idea of deductive proof as a source of infallible insight into the natural world came to us from the ancient Greek geometers. By the time of Euclid's Elements, the idea of deriving complex facts from the implications of simple obvious principles was already thoroughly developed.
In Euclid's geometry proofs, as in later mathematical proofs, a series of simple deductive steps is used to establish that a particular relationship, a theorem, follows logically from generally accepted principles, or from principles that have already been proven. The proof takes one beyond an empirical generalization that something seems always to be true, through a logical argument showing that it must necessarily be true.
Many of the geometric principles that geometers worked with came from insights used in practical measurement, tricks like using the lengths of shadows to measure the height of a tower, for example. Once the geometers proved a particular relationship to be true of triangles in general, it was obvious that it would still be true of the actual physical triangles involved in practical tasks. Thus basic principles that were obviously true could be extended by logical inference to yield indisputable knowledge about complex properties of the natural world.
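The shadow trick mentioned above rests on a single proportion: measured at the same moment, the tower and a reference stick cast shadows in the same ratio as their heights, because the sun's rays form similar triangles with each. Here is a minimal sketch; the stick and shadow lengths are invented purely for illustration:

```python
def height_from_shadows(ref_height, ref_shadow, target_shadow):
    """Similar triangles: height / shadow-length is the same
    for every upright object at a given moment."""
    return ref_height * target_shadow / ref_shadow

# A 2 m stick casts a 3 m shadow; the tower's shadow is 45 m long.
tower_height = height_from_shadows(2.0, 3.0, 45.0)
print(tower_height)  # 30.0
```

The same proportion underlies Thales' reported measurement of the pyramids: no need to climb anything, only to compare two shadows.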
With a two-thousand-year-old tradition of using deductive reasoning to justify complete confidence in the principles of geometry, extending that confidence to the dynamic physical principles discovered by Newton followed naturally. The way Newton chose to present his work, in Mathematical Principles of Natural Philosophy, by deriving principles from more basic principles, made that connection all the more likely.
At the beginning of his presentation, Newton laid out a few definitions, followed by three Laws of Motion, which are explicitly called "Axioms." Then in the main body of the work he develops a set of principles presented as consequences deduced from these laws, principles such as that the motions of bodies in elliptical orbits result from centripetal forces that vary as the inverse square of the distance from the focus. The entire work is done in what is obviously intended to be a rigorously logical Euclidean style, with propositions, lemmas, corollaries, theorems and scholia, illustrated with diagrams, some of which look like they could have come right out of the Elements.
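Newton established the inverse-square result by geometric deduction; a modern reader can at least probe one of its consequences numerically. The sketch below is entirely my own illustration, not anything from the Principia (arbitrary units, invented initial conditions): it integrates a body moving under an inverse-square central force and checks that energy and angular momentum stay essentially constant, as they must for any central-force orbit:

```python
import math

def simulate_orbit(x, y, vx, vy, mu=1.0, dt=1e-3, steps=20000):
    """Velocity-Verlet integration of motion under an inverse-square
    central acceleration a = -mu * r / |r|^3 directed at the origin."""
    def accel(x, y):
        r3 = (x * x + y * y) ** 1.5
        return -mu * x / r3, -mu * y / r3

    ax, ay = accel(x, y)
    for _ in range(steps):
        vx += 0.5 * dt * ax
        vy += 0.5 * dt * ay
        x += dt * vx
        y += dt * vy
        ax, ay = accel(x, y)
        vx += 0.5 * dt * ax
        vy += 0.5 * dt * ay
    return x, y, vx, vy

def energy(x, y, vx, vy, mu=1.0):
    """Specific orbital energy: kinetic plus potential."""
    return 0.5 * (vx * vx + vy * vy) - mu / math.hypot(x, y)

def ang_momentum(x, y, vx, vy):
    """Specific angular momentum (z-component)."""
    return x * vy - y * vx

# A slightly eccentric bound orbit: start at r = 1 with
# less than circular speed.
state0 = (1.0, 0.0, 0.0, 0.9)
state1 = simulate_orbit(*state0)
print(abs(energy(*state1) - energy(*state0)))            # tiny drift
print(abs(ang_momentum(*state1) - ang_momentum(*state0)))  # tiny drift
```

Conservation of these quantities is exactly the kind of consequence Newton deduced from his axioms; here we merely watch it hold in a simulation.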
Let's just touch back on our theme of principles as existing in the imagination, and thus at risk of being overextended: Because the idea, and the ideal, of deductive proof as the way to go from mere empirical generalizations to necessary truths existed in his imagination, Newton was able to imagine applying the same method to his new physical principles. Newton's principles certainly did seem as though they must necessarily be true, without any possibility of exceptions. However, when we examine the issue carefully, with the advantage of hindsight, we find several serious problems with carrying over the idea of deductive proof from geometry into physics.
First, none of Newton's axioms are principles so obvious that there can be no point in giving reasons for them, nor are they logically derived from the principles that were already proved in Euclid's Elements. The fundamental principles in Euclid's geometry are supposed to be obvious assumptions or postulates, on which are based deductions about more complex relationships in the proofs of theorems. In contrast, Newton's fundamental principles, the Laws of Motion, are the hard-won results of the reasoning process, which were not at all obvious before the work began.
For example, Law I reads "Every body perseveres in its state of rest, or of uniform motion in a right line, unless it is compelled to change that state by forces impressed thereon." The first part of this "axiom," that bodies remain at rest unless moved by some force, may indeed seem so obvious that there is no point in discussing it; but the second part actually contradicts a view held by Aristotle (and thus accepted by most scholars until Galileo). The older view was that objects in motion continue in motion only by continued application of a motive force, without which they would come to rest.
Furthermore, not only are his axioms far from obvious, but Newton's justifications of his propositions are not really deductive proofs: They might be more appropriately described as explanations.
Overall, we can see that what Newton has done is what many modern scientists are trained to do: Present the results of experiments and mathematical analysis as following logically from the principles that were learned by performing them. His desire to emulate Euclid is so strong that he does this even though he says in the same text that "In this philosophy particular propositions are inferred from the phenomena, and afterwards rendered general by induction."
This leads us to the second problem with carrying over the idea of deductive proof from geometry into physics: When we examine the logic in the correct order, corresponding to what Newton and others actually believed themselves to be doing, the place of the axioms, from which the principles are to be derived, is held by the empirical observations. The principles are "inferred from the phenomena." However, the empirical basis of these principles is a set of experimental results and observations that are not at all certain. Even observations and measurements obtained "under controlled conditions on many different occasions by trained and competent observers" are fallible: Scientists are forever finding ways of improving them. This steady and many-faceted refinement of the empirical basis of any science is of course all to the good, but the fact that it is a major part of scientific work should burst any illusions that the results of our observations are infallibly correct.
Newton holds quite an interesting set of views on this issue. Rule IV in his "Rules of Reasoning in Philosophy" states "In experimental philosophy we are to look upon propositions collected by general induction from phænomena as accurately or very nearly true, notwithstanding any contrary hypotheses that may be imagined, till such time as other phænomena occur, by which they may either be made more accurate, or liable to exceptions."
The "Rules of Reasoning" section comes toward the end of the Principia , and shows that Newton did not himself believe that scientific knowledge is infallible. He explicitly includes in his last Rule of Reasoning the possibility that future discoveries may lead to improvement in the principles. We can't credit him with predicting Einstein and quantum mechanics, but he did acknowledge that relevant new discoveries were possible.
On the other hand, Newton clearly knew that the principles he had codified were no mere speculations. His discussion of his third Rule leaves no doubt: Principles derived appropriately from clear evidence are valid, beyond doubt. After alluding to the evidence for gravitation, he concludes: "we must, in consequence of this rule [Rule III], universally allow that all bodies whatsoever are endowed with a principle of mutual gravitation.... This is immutable."
If we look briefly back at Rule IV, we can note that he's also not interested in having debates with people who have bright ideas and no evidence. All in all, the textual evidence from the Principia suggests that while Newton may not have been directly responsible for the notion that science produces certain knowledge, his views may well have contributed to the notion that repeated careful observation yields impeccable evidence.
We will explore this issue further in the following section, on the logic of induction. For our current purpose, pointing out problems with importing deductive proof into physics, we've already given enough attention to this particular point.
The third problem is both deeper and more general than the preceding two. I'll begin by asking you to consider again the quote from Einstein given earlier: "As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality." This applies not only to mathematical certainties, but to any logical certainties. This is so important, and generally so poorly understood, that I will go over it in some detail.
Nowadays, when we discuss the assumptions that form the basis of a chain of deductive reasoning, there is a sense of choice — that we could have chosen assumptions other than the ones we've decided to use. In Euclid's Elements, the axioms are not considered as being chosen from a set of alternatives. They are accepted as basic principles that are so obviously true that there is nothing to be gained by trying to give reasons for believing them.
The axioms in the Elements fall into two categories: general principles of reasoning like "Things which are equal to the same thing are also equal to one another," and principles specific to geometry, like "It is possible to draw a straight line between any two points."
One of Euclid's geometrical postulates, the Fifth, known as the Parallel Postulate, proved troublesome. All the other postulates in Euclid's system do indeed seem obvious. The Parallel Postulate may have seemed true as well, but it was not as obvious as the others. Historically, many mathematicians, including Euclid himself, were troubled by this complex assumption. Some tried to derive it from simpler, more obvious assumptions, but without success.
Then in the Nineteenth Century some mathematicians developed alternatives which contradicted the Parallel Postulate. Lobachevsky and others explored the assumption that there are at least two lines parallel to a given line through any point not on the line. Another group, including Riemann, tried assuming that parallel lines are impossible. It turned out that both of these alternatives to the Fifth Postulate can be shown to be consistent with the rest of Euclid's system. Each may be said to generate a geometry which is just as valid as Euclid's. Euclid's system works for plane (flat) surfaces, Riemann's works on the surface of a sphere, and the system that Lobachevsky studied works on saddle-shaped surfaces.
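The difference between these geometries can be exhibited with a concrete computation. On a flat plane the angles of every triangle sum to exactly 180 degrees; on a sphere they sum to more. The sketch below (my own illustration, using plain vector arithmetic) builds the geodesic triangle whose vertices are the north pole and two points a quarter turn apart on the equator, and finds three right angles:

```python
import math

def angle_at(vertex, p, q):
    """Angle at `vertex` between the great-circle arcs running to p
    and to q, for points on the unit sphere given as 3-vectors."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    def tangent_toward(target):
        # Project `target` onto the plane tangent to the sphere
        # at `vertex`, then normalize.
        d = dot(vertex, target)
        t = [x - d * v for x, v in zip(target, vertex)]
        n = math.sqrt(dot(t, t))
        return [x / n for x in t]
    u, w = tangent_toward(p), tangent_toward(q)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot(u, w)))))

# North pole, plus two equator points 90 degrees of longitude apart.
A, B, C = [0.0, 0.0, 1.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]
total = angle_at(A, B, C) + angle_at(B, A, C) + angle_at(C, A, B)
print(total)  # ≈ 270 degrees: 90 more than any flat triangle allows
```

The 90-degree surplus (the "spherical excess") is proportional to the triangle's area, which is one way of seeing that the sphere's geometry really is a different, internally consistent system rather than a defective version of the plane's.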
Here again is a confusion that can be traced to the fact that principles are imaginary: Because the imagination is so powerful, it is quite possible to have three seemingly incompatible sets of principles, each of which is internally consistent. This realization requires us to reconsider the supposed infallibility of logical deduction in science.
Deductive reasoning is a way of exploring the implications of what we know — in a deductive argument, if the premises, or assumptions, are true, we can be certain that the conclusion is true as well. So far, so good. However, since we now have equally valid alternative geometries, actually using a logical argument obviously requires choosing a set of premises to start from. So although deductive inference may indeed yield conclusions that must be correct if the premises are correct, the choice of those premises clearly depends on fallible human judgment. That means that any time we want to use what we have proved in geometry in thinking about the actual world, the entire argument depends on fallible human judgment.
The discovery (or development) of non-Euclidean geometries forced us to face the fact that the forms in the logical realm are not the physical forms we experience. In order to apply a geometry to a particular physical surface, for example, we have to choose whether to treat the surface as a flat plane or as the curved surface of a sphere, because the two ideal realms have different geometries.
Suddenly we are no longer able to build an unshakable fortress of logic in the realm of imaginary ideal forms that can be simply identified with aspects of the actual world. Whether a particular formal model is appropriate or inappropriate depends as much upon the criteria for appropriateness, and our ability to determine if they have been met, as it does on the various details of the physical situation. We are in the same position when we attempt to decide if a particular scientific theory is applicable to a particular situation or to a particular kind of situation. This human decision -- whether the principle is appropriate as a model of the relevant aspects of the situation -- can never be removed from the equation.
Deciding that something has a particular characteristic is also a process which is performed by human beings and which can be difficult or misleading in various ways. Even the basic rules of inference, such as Euclid's first common notion, are problematic in this way, when we try to apply them to real situations.
Certainly it makes sense to say two things that are each equal to some third thing must also be equal to each other; but how do we decide that two things are "equal?" Normally by this we mean that they share some feature that is relevant to our current focus of interest; they may be the same color or size or have the same name or share some other characteristic. However, unless they are not just equal things but exactly the same thing, they also differ in various ways that we are currently ignoring. Some of those differences we may be choosing to ignore, and some we may be unable to detect.
In an Appendix titled "On the Inequality of Equality," below, I give a demonstration that if we try to use perceptual equality -- inability to detect a difference -- in Euclid's First Common Notion, we can prove that two things that are obviously different must be the same. In that demonstration, the basis of this contradiction is the presumption that this principle of formal logic can be applied to real objects as they are judged equal by human beings.
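The core of that demonstration can be mimicked in a few lines. Suppose "equal" means "differing by less than some detection threshold." That relation is not transitive: a chain of pairwise-indistinguishable items can connect two items that differ plainly, so Euclid's First Common Notion fails for it. The threshold and lengths below are arbitrary, invented only to make the point:

```python
THRESHOLD = 1.0  # smallest difference we can detect (arbitrary units)

def perceptually_equal(a, b):
    """'Equal' in the perceptual sense: no detectable difference."""
    return abs(a - b) < THRESHOLD

# A chain of lengths, each indistinguishable from its neighbor...
lengths = [10.0, 10.9, 11.8, 12.7, 13.6]
neighbors_equal = all(
    perceptually_equal(a, b) for a, b in zip(lengths, lengths[1:])
)
# ...yet the two ends differ by far more than the threshold.
ends_equal = perceptually_equal(lengths[0], lengths[-1])
print(neighbors_equal, ends_equal)  # True False
```

If perceptual equality obeyed "things equal to the same thing are equal to one another," repeatedly applying the rule along the chain would force the first and last lengths to be "equal" — the contradiction the Appendix develops.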
In concluding this section of the essay, we might say that if what you mean by 'logical deduction' is starting from premises or assumptions that are certainly true, and proceeding by logical steps that are certainly appropriate, to reach conclusions that are certainly true -- which is what many people do mean by the term -- then we can agree with Bertrand Russell in his statement that deduction has no role in the scientific study of the natural world.
However, if you take every occurrence of the word 'certainly' out of that sentence, and replace it with the phrase 'seem likely to be,' then you may have a useful description of an important part of scientific work, used, for example, in exploring what may be the implications of what we already know, and what we suspect.
That's about all for now about deduction. In The Terrible Truth About Truth we disposed of any need to explain how scientific knowledge could be true for certain, which is a good thing, because now we've seen that deductive logic could never have allowed us to draw conclusions about the natural world with certainty, even if we were certain of the assumptions that we started with. However, we still haven't finished cleaning up the mess left over from the failure of this unnecessary but terribly influential project.
Deduction can't create new information. In the next section we'll look into the idea of inductive logic, which was supposed to be the source of the new insights that would be needed if this notion was going to explain how science produces valid knowledge about the actual world.
Deduction offers a way to explore the implications of general principles; but where do those principles come from? Scientific principles are ultimately based on observation. However, for several centuries scholars -- philosophers and scientists and others -- have generally accepted the notion that we can't directly observe general principles. What we can observe are particular objects, particular events, particular processes. Going from specific observations to general principles was said to require a different type of logical operation, called inductive inference, or simply induction.
The main topic of this section is to evaluate classical inductive logic, commonly taught side by side with deductive logic as necessary components of scientific inference. Before we launch into that topic, though, I want to make a few comments on the introductory paragraph. First, I want to point out that many people who have studied perception, perceptual learning, and the role of perception in scientific work, are aware of the fact that patient observation with no preconceived agenda does seem to lead, after a while, to what seems to be direct realization of patterns and regularities which are not far removed from general principles. Real perception is much richer than the "sense data" in the theories proposed by some philosophers of science.
Second, the term 'induction' is used much more widely than in the sense of the limited form of classical inductive "logic" that we will be discussing in this particular essay. Sometimes the term is used for any system that can learn from experience. Obviously, a large part of scientific work involves learning from experience. Here, though, I'm focusing on one particular logical model, the standard view of how we get to general principles from particular observations.
Most basic discussions of the logic of induction offer little if anything beyond the following statement: The conclusion of an inductive argument contains information which is not present, even implicitly, in the premises.
A classical example of induction is going from seeing several white swans, without seeing any swans of any other color, to the conclusion that all swans, including all those we haven't examined, are white. Seeing an individual swan, and noticing that it is white in color, is a particular observation, and the idea that all swans are white is a general principle derived from making a number of similar observations with no conflicting observations.
Induction is often expressed in the form of a syllogism: Swan number one is white; swan number two is white; swan number three is white; and so on, with no exceptions; therefore, all swans are white.
What's wrong with this picture? The philosopher David Hume and many other scholars have criticized this schema for an inductive logic as a source of general principles: Obviously, even if the premises were all true, the conclusion could still be false. Unless we've examined all the swans, we can't be sure that none of them are black.
One response to this criticism is to claim that the conclusion of an inductive inference is not certain but probable. Back when the whole point of inductive logic was to explain how the impeccably correct principles required for the certainty of scientific knowledge could have some connection to the actual world, conceding that inductive conclusions were merely likely to be correct left a pretty serious gap in the certainty of the whole project. It was in an effort to fill that gap that the so-called 'hypothetico-deductive method' was developed.
I discussed the problems with the hypothetico-deductive account of scientific methodology at some length in my essay "The Myth of the Magical 'Scientific Method'", and my essay titled "The Terrible Truth About Truth" goes into difficulties with the notion that scientific knowledge is probably true, so I won't say more about those notions here.
Here I want to point out a much more serious problem with this approach to inductive logic: namely that it is uninteresting, devoid of insight, and largely irrelevant to science.
Describing induction in terms of the logic expressed in the syllogism or in examples like "All swans are white" just misses the point, if the point is to provide insight into the origin of scientific principles. Scientists don't just run around collecting facts and proclaiming them to be principles as soon as they are sure there are no known counterexamples. Scientists develop principles for the purpose of explaining things that they are trying to understand.
Having no contrary examples is a secondary criterion, and actually not a necessary one. The primary criterion is that the principle should do a good job of explaining the phenomenon we are studying. A good explanation is a real treasure, and scientists often work hard, and creatively, to see if apparent failures might themselves be misleading in some way.
The expression "The exception that proves the rule" is commonly misunderstood as meaning that somehow an exception could provide conclusive evidence that a general principle is correct. This is obviously nonsense: Actually, the word "prove" is being used in an older sense, meaning "test." The exception tests the rule, challenges it, and from that testing comes deeper understanding.
In science, we want to understand the phenomena we observe, not just record their obvious characteristics. When we do understand a causal pattern, like a mechanical device, for example, we are no longer stuck with assuming, as our only way of predicting what we'll see in the next instance, that we'll see in the future what we saw in the past.
Suppose, for example, that we've been given a music box. We know from following the instructions that when we wind it up and flip this switch, the little dancer spins around and a tune plays. We can see, if we look inside, that little pegs are playing the tune on some metal strips. If we push one of the strips out of reach of the pegs, that note is missing from the song. If we push them all out of reach of the pegs, the song is gone entirely, but the dancer still spins.
Go back to the point in the story where we discovered that moving one of the metal strips out of reach of the pegs would take every sounding of that particular note out of the song. Most investigators would not then proceed to check if a similar effect would result from displacing each of the other metal strips. We would probably assume that the other strips would work the same way.
We wouldn't be merely assuming that similar-looking thingies have similar characteristics. We can see how this particular thingie works. We can see how the pegs push the strips aside, and how the strips spring back into place as the peg passes. We can hear the note sounding exactly when that happens. We can make any of the strips sound its note by plucking it with a fingernail. We can see that the pegs are doing the same thing we did when we plucked the strip ourselves. That was why we tried moving one of the strips aside, to see if we were correct about how it works.
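The causal point here can be put in a few lines of code. This is a toy sketch, invented for illustration -- the notes, and the idea of representing a displaced strip as exclusion from a set, are my assumptions, not anything from the music box itself. Once we can model how the mechanism works, we can predict the effect of displacing any strip without testing each one.

```python
# Toy causal model of the music box (invented for illustration):
# each peg plays a note by striking a metal strip, and a strip
# pushed out of the pegs' reach contributes nothing to the song.

tune = ["C", "E", "G", "E", "C"]   # notes the pegs try to play, in order
displaced_strips = {"E"}           # strips pushed out of reach of the pegs

def play(tune, displaced):
    """Each note sounds only if its strip is still in the pegs' path."""
    return [note for note in tune if note not in displaced]

print(play(tune, displaced_strips))   # ['C', 'G', 'C'] -- every E is gone
print(play(tune, set(tune)))          # []  -- all strips displaced: no song
```

Understanding the mechanism, rather than just tabulating past observations, is what licenses the prediction that displacing any other strip will remove its note too.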
If you'll look again at the quotation from Newton's Principia given at the beginning of this section, you'll see that his description of where principles come from includes an intermediate step, between the observation of the phenomena and the inductive generalization. First "particular propositions are inferred from the phenomena" and only then "rendered general by induction."
The particular propositions that Newton was referring to are nothing less than the profound insights that so deeply transformed our understanding of the natural world. Certainly they were worthy of being generalized far beyond the observations that led to them. The fact that they don't really explain all motion that ever occurs anywhere in the universe hardly detracts from their significance.
Working toward principles that might explain something adds another dimension to the process of learning from experience. If the problem of coming up with a satisfactory explanation is at all difficult, which it often is, the "Ah ha!" reaction, when the needed insight finally arrives, can be quite powerful. As soon as we find an explanation that makes sense of a particular instance of a phenomenon, it is quite natural to assume immediately that all the instances of the phenomenon are just like that one.
Some scientists are conscious of this process. For example, in his Rules of Reasoning in Philosophy Newton states, in Rule III, that "The qualities of bodies ... which are found to belong to all bodies within reach of our experiments, are to be esteemed the universal qualities of all bodies whatsoever."
Actually, our experiments don't have to reach very far to elicit this reaction. As in the case of the music box, even a single clear, revealing example can be enough to convince us that we know how a whole system works, and other systems like it.
Of course, we may be wrong about how far our new understanding can be extended. Quite often, overgeneralizing principles is part of the process: We develop an amazing new explanation and at first it seems to apply everywhere, but sooner or later we'll need to face a few facts that require revision of the overly simple story we've been telling.
The fact that Newton's generalization of his principles to explain all movement everywhere in the universe turned out to be overly optimistic is no surprise. What is surprising is that it took over two centuries for the limitations of those principles to be revealed.
We can sum up the main point of this section quite simply: Because classical inductive logic has no way of addressing the central role of explanation in the development of scientific principles, it is useless as a model of the way scientists work toward understanding by making observations -- of how they develop principles from evidence.
Fortunately, we do have available an alternative approach to logic which makes explanatory success the primary criterion for principles to be accepted. Abductive inference, also called retroductive inference, can be described as reasoning to the best explanation -- reasoning from a surprising finding (observation or experimental result) to a hypothesis that makes sense of that finding. The philosopher Charles Sanders Peirce expressed the schema this way:

    The surprising fact, C, is observed.
    But if A were true, C would be a matter of course.
    Hence, there is reason to suspect that A is true.
Obviously, this schema for abductive inference offers no insight as to how we might come up with an adequate explanation -- the formula might as well say 'Insert miracle here" -- but it does at least indicate that a bit of intelligence is required at a certain point in the process. In contrast, classical inductive logic has no room for the required miracles: No intelligence is required, because nothing is accomplished.
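The contrast can be made concrete with a deliberately trivial sketch. Everything here is invented for illustration -- the finding, the candidate hypotheses, and the numeric "fit" scores are all assumptions. Abduction selects the hypothesis under which the finding would be most a matter of course, but nothing in the schema says where the hypotheses or the scores come from; that is exactly the "Insert miracle here" step.

```python
# Toy sketch of abductive selection (my illustration, not Peirce's or
# Hanson's formalism).  Given a surprising finding, prefer the
# hypothesis under which that finding would be a matter of course.

finding = "the lawn is wet"

# Assumed scores for how much of a "matter of course" the finding
# would be under each hypothesis -- producing this list, and these
# numbers, is the part the schema leaves unexplained.
hypotheses = {
    "it rained overnight":   0.9,
    "the sprinkler ran":     0.7,
    "a water balloon fight": 0.1,
}

best = max(hypotheses, key=hypotheses.get)
print(best)   # it rained overnight
```

The selection itself is a one-liner; all the intelligence went into generating and scoring the candidates before the selection was made.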
Starting with the concept of abductive inference, Norwood Russell Hanson did a fine job of describing how scientists actually proceed from surprising evidence to theories that might explain that evidence, in his Patterns of Discovery and his paper "Is there a logic of scientific discovery?"

N.R. Hanson, "Is there a logic of scientific discovery?" In H. Feigl and G. Maxwell (Eds.), Current Issues in the Philosophy of Science. Holt, 1961.
None of the terms in a formal logical proof, in geometry or in any other mathematical discipline, can simply be identified with items and actions in our ordinary world without risking problems of various sorts. The triangles, lines, curves, trajectories, surfaces and so on, which take the stage in these proofs, are imaginary perfect forms, which differ in many ways from the actual triangles and such in our ordinary experience. Whether these differences are important or not depends on how we are planning to use the information presented in the proof.
Even the basic rules of inference, such as Euclid's first common notion, are problematic in this way. Consider a simple perceptual judgment, like a judgment of size, shape, color, or rate of movement. Let's say that what we mean by two things being equal, perceptually, is that we can't tell them apart. Specifically, let's say that by 'can't tell them apart' we mean that performance in an 'odd-ball' discrimination test, where three stimuli are presented and the task is to say which one is different from the other two, is no better than random guessing. Let's work with judgments of shape, since shape is an important distinguishing feature for many types of objects.
Imagine a square (call it Figure A) and an equilateral triangle (call it Figure B) with sides of equal length. Now imagine gradually shrinking the top edge of the square, so that it becomes a trapezoid and eventually a triangle identical to Figure B. Imagine a series of snapshots taken of Figure A during this process -- call them A1, A2, A3, and so on. Now, if we make the interval between successive snapshots small enough, it will be impossible to reliably tell them apart. So, as far as our perceptual judgment is concerned, they are equal. Indeed, each snapshot in the series is indistinguishable from the ones on either side, unless we look at the labels on the back of the photographs.
Now, if you've followed all this preparation, we are ready for our demonstration. Since snapshots A1 and A3 are both equal to the same thing, namely A2, Euclid's First Common Notion tells us that they are equal to each other. But A1 is also equal to A4, because A4 is equal to A3, and we have already established that A1 is equal to A3. We can proceed in this way to the end of the series, and when we have done so, we will have proved that the two original figures are equal! Obviously, though, when we compare them directly, they are not equal: They are not even similar.
What is the basis of this contradiction? Our perception works as well as it ever did, and the logical assumption hasn't changed in over two thousand years. What's wrong is the presumption that this principle of formal logic can be applied to real objects as they are judged equal by human beings.
Note that this demonstration does not depend on an infinite regress. Because our ability to distinguish different shapes is limited, we could actually do this experiment successfully with no more than a few dozen or a few hundred steps in the series, depending on the conditions of observation. For example, if the three shapes we are comparing are on separate tables, and we have to look at one and then walk across the room to look at another, we won't need very many steps in the series. When the three shapes to be compared are right next to each other and we can see them all at the same time, we'll need a more finely spaced series of snapshots -- but still no more than a few hundred. A series with a few thousand steps might be required if we are allowed to use a measuring instrument -- the number of steps required will depend on the accuracy of the measuring instrument and our skill in using it, but it will certainly be a moderately small, unquestionably finite number.
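The finite character of the demonstration is easy to check with a short simulation. This is a sketch under stated assumptions: each snapshot is reduced to a single number (the length of the square's shrinking top edge), and the threshold standing in for a just-noticeable difference is invented, not measured.

```python
# Sketch of the snapshot series: the square's top edge shrinks from
# 1.0 (Figure A, the square) to 0.0 (a triangle identical to Figure
# B).  JND is an assumed stand-in for a just-noticeable difference.

JND = 0.02     # differences below this are perceptually undetectable
STEPS = 200    # snapshots A1 .. A201; adjacent ones differ by 1/200

snapshots = [1.0 - i / STEPS for i in range(STEPS + 1)]

def indistinguishable(a, b, threshold=JND):
    """Perceptual 'equality': we can't tell a from b."""
    return abs(a - b) < threshold

# Every adjacent pair in the series is perceptually "equal" ...
adjacent_equal = all(indistinguishable(snapshots[i], snapshots[i + 1])
                     for i in range(STEPS))

# ... so chaining Euclid's First Common Notion down the series would
# "prove" the first and last snapshots equal.  Compared directly, though:
endpoints_equal = indistinguishable(snapshots[0], snapshots[-1])

print(adjacent_equal)    # True: each step falls below the threshold
print(endpoints_equal)   # False: the square and triangle are easily told apart
```

Two hundred steps suffice here; a smaller threshold (a sharper observer, or a measuring instrument) would simply require a more finely spaced, but still finite, series.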
Note further that we could perform similar demonstrations on any of the perceptual dimensions that distinguish any objects we care to discuss. Consider digital morphing of images, for example.
Revised on September 24, 2001
Copyright © 2001 Dharma Haven