by Dr. Terry Halwes
The procedure that gets taught as "The Scientific Method" is entirely misleading. Studying what scientists actually do is far more interesting.
what's wrong with this picture
what keeps the myth alive?
so is there a scientific method?
teaching science with no method
can we improve on the formula?
Modern science is an amazing phenomenon, and people naturally wonder how it works. Oddly, science has never been thoroughly studied scientifically, so we have quite an array of different answers to this question, some of them accurate and some of them ridiculous. Unfortunately, the answer that became most popular was a guess made by some philosophers, which turned out to be worse than useless. Even more unfortunately, that guess is now commonly believed to be the simple truth about how science proceeds to develop new knowledge.
Discussions of methodology in science are clouded by a dreadful confusion because the phrase "the scientific method" is used in two very different ways, one appropriate and one highly misleading. The appropriate one speaks in a very general way of science as a powerful process for improving understanding. People who use the phrase in this general way may be criticizing dogmatic clinging to beliefs and prejudices, or appreciating careful and systematic reasoning about empirical evidence. Although vague, this general use of the phrase can be more or less appropriate.
On the other hand, the phrase is also commonly used in a much more specific sense -- an entirely misleading sense -- which implies that there is a unique standard method central to scientific progress. There is no such unique standard method -- scientific progress requires many methods -- but students in introductory science courses are taught that "The Scientific Method" is a straightforward procedure in which hypotheses derived from a theory are tested in order to evaluate that theory.
The "hypothetico-deductive" schema taught to students was not developed as a method at all: It was intended as a logical analysis of how scientific theories derive support from evidence, and it was developed in a process that intentionally excluded consideration of the process of discovery in science. Few people learn that this notion came by a tangled route from an unreliable source (philosophical speculation), or that actual research on how science proceeds is still in its infancy. The question of how science is so successful at improving understanding is hardly ever presented as a question at all.
The current situation is harmful in many ways: People in some immature scientific disciplines are actually trying to use this "method" as a guide to research practice; others are required to pretend to have followed it when they report their results; and everyone is denied the benefit of useful, insightful analysis of how science works.
If you owned a swimming pool and had in your employ a swimming coach who wasn't helping anyone -- who was actually increasing the danger for some of your clients -- you'd need to get someone who would actually do the job; and first you'd have to get the current coach down off the chair to make room for a replacement. Wouldn't you?
Before (or soon after) reading this page, I invite you to read two sections of another page, Dispelling Some Common Myths About Science. The section titled No Special Method is Required argues that there's a good reason why the effort to provide a cookbook for scientific research failed: Science is just not that simple. There is no unique "Scientific Method." Scientists use many methods of investigation and reasoning, and most of them are also used in other fields of human endeavor. Then a section titled So Why Is Science So Powerful? explores the factors that contributed to the rise and advancement of modern science, and concludes that there is no need to postulate an arcane new mode of reasoning to explain how science continually improves our understanding of ourselves and of the natural world.
In this article I'm going to focus on what's wrong with the hypothetico-deductive account of scientific reasoning as an explanation of what scientists do, in the sections titled What's Wrong With This Picture? and What Keeps the Myth Alive?. I'll recommend some alternative ways of thinking about the logic of research, in a section titled So Is There a Scientific Method?, and offer some hints about better ways to teach science in a section titled Science Without the Magic Method. Finally, Can We Improve on the Formula? offers an attempt at developing a replacement for the standard methodological dogma.
Thus what people often mean when they refer to "The Scientific Method" is that cut-and-dried schema, also called the "Hypothetico-Deductive Method." It gets stated in many ways. This one is typical:
We won't be concerned here with this formula as the original attempt to explain how scientists use evidence to justify their belief in the correctness of their theories. (I've provided a few suggestions below in the Loose Ends section for those who are interested in that aspect of the analysis.) In this section we focus on problems with the hypothetico-deductive account taken as a method -- indeed, as the unique and essential scientific method.
The expression given above does relate to two key aspects of scientific progress: considering existing beliefs to be open to revision, and valuing evidence based on experience and careful reasoning. However, it fails to do justice to many aspects of scientific work that lead to increases in understanding.
1. The most obvious failing of the "method" as a guide to furthering (or understanding) scientific progress is that it ignores or distorts the role of careful observation as a source of knowledge. Some versions omit mention of observation entirely, but most of the versions that do mention it, like the one given above, leave the entirely misleading impression that information gained from observation only becomes relevant to science when it is used to test theories.
Let's consider an example: Until 1803, occasional reports of rocks falling from the sky were not believed by scientists. Then, in late April of that year, many people in the village of L'Aigle, France, saw bright lights streaking across the sky, followed by "three violent detonations," after which "nearly 3000 stones fell into the fields with loud hissing noises." Each of the stones was found resting in the center of a small crater; the stones smelled of sulfur. Scientists heard the report, investigated the site, and proposed that the stones had come from space. [sources]
On the face of it, this seems to be a clear example of scientifically important information gained by observation, with no hypothesis testing or theories involved at all. I chose it because the interesting phenomenon came out of the blue (literally), with no preparation on anyone's part. Scientific observations span the range from surprises like that one, through observations made possible by careful preparation, like the invention of the microscope or an archaeological dig, all the way up to observations that require preparatory work of astounding dedication and inventiveness, like those made using the Hubble Space Telescope. However much or little preparation is involved, though, observation informs scientists just as it has been informing sentient organisms for hundreds of millions of years: If you pay attention to something, you learn about it.
Scientists do work with their observations in ways that are rarely used by invertebrates -- making careful and systematic descriptions, drawings, photographs, videos, and a host of different types of measurements, for example -- but none of these refinements are in any way illuminated by the "method."
Of course, someone who wanted to defend the hypothetico-deductive schema as the essential key to scientific progress could find a way to cast any of these examples, even finding the meteorites, in those terms. For example, whatever the scientists believed before they interviewed the villagers could be said to be their hypothesis (e.g. "People who report falling stones are delusional"), which their journey to the village was designed to test. Stretching the description in that way is certainly possible, but what's the point of doing so? If the H-D schema isn't going to make it easier to understand how scientists do what they do, and why it works so well, then why learn it?
The H-D schema overemphasizes testing scientific principles, at the expense of the growing corpus of scientific fact. The people who originally developed the H-D logic did so as a way to explain how theories gain support from evidence, and they were naturally focusing on the testing of principles. Accordingly, turning the H-D logic around and using it as a method for testing theories, and arguing that facts were developed by another method (like systematic observation, for example) might have made sense; but teaching that H-D schema as the unique and essential method by which scientific progress is made -- as the scientific method -- while ignoring most of what scientists do and accomplish, makes no sense at all.
2. Not all revisions of theories are based on new evidence; for example, an improved theory may result from logical and mathematical work that makes possible a better understanding of the existing evidence.
3. Scientists don't only test hypotheses in order to test theories. Scientists explore their world, just like children do. When a good theory is available to serve as a map, it gets used. If no theory / map is available, the exploration goes on anyway.
We test hypotheses to explore what a theory really means, to try to understand its implications -- and if no good theory is available we test hypotheses anyway, just to see what happens, or to follow a hunch. Trial and error is an apt description of a lot of scientific work, and a good theory helps; but it certainly is not required. (Again, glorifying whatever little notion one has decided to explore as a "theory" just to get the H-D account to cover these cases only clouds the issue.)
The magic "method" implies that scientific progress is impossible without a theory to test. In disciplines that have already developed a strong tradition of successful investigation, actual scientific work is relatively unaffected by the dictates of the "method;" and scientists in those fields already have good theories to work with anyway.
Unfortunately, however, since the Behaviorist revolution in psychology, immature disciplines seem to be at risk of trying to actually use the magic method as a recipe for research. For example, one professor, when asked why he continued to do experiments designed to test a theory that was obviously wrong -- which he was sure was wrong -- answered, "Without a theory to test, we wouldn't be able to do experiments at all!"
In an immature scientific discipline, the prevailing lack of understanding of the domain of interest makes coming up with a good theory difficult, if not impossible. A good theory helps, but a bad theory, prematurely scraped together in the mistaken belief that a theory is required before scientific work can proceed, can stifle interesting scientific work.
Exploring without a map is preferable to exploring with a map that inspires no confidence. Researchers who follow the magic method and its implication that a bad theory is better than no theory at all are in for trouble.
4. The relationships among theories, hypotheses, evidence, surprise, and opportunities for learning are far more complex than the magical method implies. (Remember, we are not talking about technical differences among scientific specialties -- chromatography vs. telescopy -- the "scientific method" we are discussing is supposed to be a form of inference essential to all the sciences.) This is important enough to warrant discussion in some detail:
When something surprising happens, we learn from it. Animals and children do too. We might stare at it, pupils dilated, and make excited noises: "Did you see that?!" If a theory is a good expression of what we already believe, then it may seem as though a surprising result has taught us something by showing our theory to be incorrect. Often, however, a theory is a way to be more clear about something we don't really understand, using the power of mathematics and logic to extend our ideas into unfamiliar territory. In those cases, a result that agrees with the theory may be surprising, and we learn from that surprising event.
Does anyone really think that the physicists didn't learn anything from the first nuclear explosion, because it was predicted that it would work? A surprise accelerates learning, even if a theory tells us we should have expected it. In that case, where a theory isn't really believed because it makes predictions that seem impossible, a surprising confirmation increases our confidence in the theory.
The most common version of the H-D account implies that no information is gained about a theory when a hypothesis derived from it is confirmed (since the purpose of testing hypotheses is to disprove theories -- to discover how they need to be changed). The older version of the H-D account, which gives hypothesis testing the role of confirming theories, would have been right at home in this case, but would have no way of making sense of surprising disconfirmations of theories. The H-D account is bound to miss the boat pretty often, whichever of the two alternatives you pick. What is needed is an account of scientific progress that is a little richer and a lot more flexible.
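One richer tool already exists: Bayes' rule makes precise the earlier point that a surprising confirmation increases our confidence in a theory far more than an unsurprising one. Here is a minimal sketch in Python; the probabilities are invented, purely illustrative numbers, not drawn from any real case.

```python
def posterior(prior, p_e_given_t, p_e_given_not_t):
    """Bayes' rule: confidence in theory T after evidence E is observed."""
    p_e = prior * p_e_given_t + (1 - prior) * p_e_given_not_t
    return prior * p_e_given_t / p_e

# A theory held with modest prior confidence (0.2) predicts event E.
# If E would be astonishing without the theory (base rate 0.05),
# observing E raises confidence dramatically:
surprising = posterior(0.2, 0.9, 0.05)   # ~0.82

# If E was likely anyway (base rate 0.8), the very same confirmation
# barely moves our confidence:
unsurprising = posterior(0.2, 0.9, 0.8)  # ~0.22
```

The asymmetry between the two cases is exactly what neither the "testing to confirm" nor the "testing to falsify" version of the H-D schema can express.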
The fact that we learn a lot from paying attention to a surprising event certainly does not depend on following the intricate procedures of some magical "scientific method." Infants do that; pigeons do that. Hundreds of millions of years of evolution prepared animals to pay special attention to novel events, and we're all very good at it.
In fact, no explicit theory is required at all for scientists to learn from surprising discoveries. The first person to see microorganisms moving around in a drop of pond water had no theory stating that they should be there, or that they shouldn't be there. Surprise amplifies learning when the surprising discovery supports a theory, when it is at odds with a theory, and when there is no relevant theory.
Furthermore, a lot of what we learn in science is not particularly surprising at all, because we have no idea what to expect, or because we know perfectly well what to expect, and we're right. After you've learned that a certain species of corn smut has hundreds of different sexes, finding another species that has a few dozen more may seem pretty ordinary. The results are interesting, perhaps -- interesting at least to the people who make those observations and record them and talk to each other about them and publish papers about them -- interesting, but not surprising.
The people who make these interesting but unsurprising observations may have a theory that covers the area they are studying, and they may not. They learn from their observations in either case. So here is another instance of learning in science that has nothing to do with testing theories: As mentioned in the first point, above, animals and children learn about whatever they pay attention to, and so do scientists.
In summary, all the various possible relationships between theory and evidence can lead to improvements in scientific understanding: evidence surprisingly supporting theories, surprising evidence clashing with theories, and surprising evidence discovered with no theory involved at all; unsurprising evidence gathered to explore the implications of a theory, and unsurprising evidence developed just to satisfy someone's curiosity; and theories revised with no evidence at all except mathematical elegance or someone's bright idea.
No matter which version of the hypothetico-deductive schema you pick, it will exclude most of these possibilities.
5. Let's be still more explicit about this. One key point is the issue of what testing is supposed to accomplish. The "method" is taught in several different ways: testing aimed at confirming the theory, testing aimed at falsifying the theory, or just "to test the theory" with nothing said about what the relevant outcomes might be.
Now, what actually happens in scientific work, often, is more like this: In the early development of a field of study, there are no good theories; and in trying to develop one, a lot of trial and error goes on. You have a bright idea, and you check it in several ways to see if it has any chance of working. At this point, both positive and negative results are interesting, so whether you were taught that the purpose of testing is to confirm the theory or that it is to falsify the theory, what you learned was only half correct (less than half, actually, because even if the results are inconclusive, you learn more about whatever you are studying just in the process of designing and setting up and running and evaluating even inconclusive tests).
So on that analysis it would seem that the method should be taught just as testing the theory, period -- with no mention of confirmation or disconfirmation. However, even that is incorrect if you're talking about a mature scientific discipline, one with a really good theory. At this point, as with work on the human genome project, for example, they are not testing the theory -- they did that already. By now they are quite certain that the theory is basically correct. What they are doing is using the theory to guide exploration of new areas. In this phase of "normal science," as Thomas Kuhn says in "The Structure of Scientific Revolutions," results that don't seem to make sense in terms of the theory aren't taken as proof that the theory is wrong -- they are usually set aside as puzzles to be dealt with later after we know more about what we are doing.
None of the ways of presenting the magic method -- testing to disconfirm, testing to confirm, or just plain testing -- do justice to this normal science work, using a well understood theory to see what else we can make sense of.
6. What do you do if the test fails? The version that says you test hypotheses in an effort to falsify theories, and the version that says nothing about why you test hypotheses, both imply that if the hypothesis fails the test you have set for it, you will conclude that the theory is incorrect and get rid of it. That's often not what scientists actually do. What they do do makes a lot more sense.
Think about how this failure came about. You came up with a hypothesis that certain consequences were implied by the theory: If the theory is correct, and we do this, then we will observe that. So we developed a procedure for doing this, and a procedure for observing or measuring that. Then we used both procedures, but we didn't see what our that-observation procedure told us to look for.
Now it could be that the theory is just wrong, and we have successfully devised and performed a test which revealed one of its flaws. There are several other obvious possibilities, however: First, we could have made a mistake in deriving the hypothesis. Second, we could have made an error in developing our procedure for doing this, or third, made an error in carrying out that procedure. Fourth and fifth, we could have made errors in developing or executing the procedure for observing that.
Mistakes in all of those aspects of the overall process of hypothesis testing are quite common, and scientists often look carefully at these potential problems before putting much effort into worrying about whether the theory needs to be revised. The more confidence one has in a particular theory -- the more valuable the theory -- the more thoroughly one will tend to check for possible errors before concluding that the failure tells us anything about the theory. Of course, it works the other way too: Procedures that are well understood are more likely to be trusted when they indicate that a prediction has failed.
This is all so complex and flexible that anyone who tries can certainly find examples to support whatever version of the magical method they wish to defend. That's hardly the point, however. We aren't discussing some subtle point in philosophy here -- we are discussing a pedagogical tool, a teaching strategy. It should be becoming clear, by now, that this particular strategy is very misleading, which is exactly not what you want in a teaching strategy.
7. Some versions of the "method" omit the "observation" option for testing hypotheses. This has led some students (many of whom eventually became professors) to believe that experimentation is the only way for a scientist to learn anything. You would think that the paleontologists and astronomers would have protested (since manipulating stars and dinosaurs is so difficult that it is hardly ever done), but they may not have been paying attention. (Actually some astronomers have tried to help out by calling it an experiment when they change the filter in their telescope.)
This might seem like a minor point -- in a way it is -- but if you are a student in an immature scientific discipline, in which careful observation is the appropriate way to advance your understanding, you would be very unfortunate to find yourself working with a teacher who believes that you must do experiments or you won't learn anything. Neither of you knows enough to do an interesting experiment in that field.
1. Perhaps it seems reasonable because we've heard it so many times, from various people who are supposed to be authorities on science. The philosophers who originally developed the hypothetico-deductive method were certainly well respected, at least within the limited circle of their professional peers. The notion was disseminated, and widely accepted, with the enthusiastic support of a bunch of behaviorist psychologists. The fact that it was just a proposal for a logical analysis that had never been completed was somehow lost in the commotion.
Soon after that the criticisms started, and they have continued; the authors of the proposal pretty much gave up on it long ago [details]. Surprisingly, however, and quite unfortunately, although the arguments against the magic method would seem to be quite devastating, they failed to reach most science teachers, who are still requiring their students to try to think this way. They also have failed to protect scientists in certain fields from having to pretend to work this way, and failed to release scientists in some areas from actually trying to work this way.
2. It's difficult to think clearly about the magical "scientific method" -- it may seem simple, but it is actually quite complicated. It is supposed to be a general schema that explains, for all sciences in all stages of their development, how theories depend on evidence. This problem is harder than general epistemology (where you just have to make sense of how anyone can know anything): Scientific principles seem deeply true in a way that is not required of ordinary facts, which only have to be accurate enough to be useful.
To simplify the task, the process of scientific discovery was excluded from the analysis. All sorts of things can lead to discoveries, with things like accidents and dreams serving as sources of insight, which made giving a logical analysis of that process seem exceedingly difficult, if not impossible. Excluding all that messy detail about the sometimes irrational sources of discoveries wasn't supposed to matter, since what was to be understood was the logical relationship between evidence and theory. What resulted was an analysis not of the process of scientific work, but of the resulting network of logical inferences among theory, hypotheses and evidence that give us reason to believe that a theory is correct -- what has been called "the logic of the finished scientific report."
Some logicians are still trying to get this scheme to work after well over half a century of trying [details]. But that's not the so-called "Scientific Method." We're not done yet. The next step is to take this analysis, from which the process of scientific discovery has been excluded, and simply use it as a method for generating scientific knowledge. The supposed logic of the finished scientific report, which might go something like "We had this theory. We derived this hypothesis from it. We did the following experiments to test the hypothesis. We therefore conclude that ...." becomes "Get a theory. Derive a hypothesis from it. Devise an experiment to test the hypothesis." and so on.
How you get the theory doesn't seem to matter, because that is the process that was excluded from the original analysis. It's like telling someone who wants to build and sell automobiles, "It doesn't matter how you build it; what matters is how you test it to see if it is made correctly."
This completely distorts our sense of what actually matters most in scientific research: the creativity and intelligence and hard work that lets someone come up with a new understanding of a mysterious event or process.
3. The actual situation in science is quite complicated (as we saw earlier in What's Wrong With This Picture?), and it's nice to have a tidy story to tell about what is going on, especially when trying to introduce students to learning about research. For example, if you think about all the different kinds of relationships between evidence and theory, and the various ways that they can lead to progress, it's quite involved. The so-called "scientific method" supposedly provides a simple story to tell about how science, in general, works.
Unfortunately, given the complexity of actual science, together with the complexity of the H-D account as discussed in point 2, comparing the H-D theory with what it is supposed to be a theory about is extremely difficult -- which makes evaluating H-D theory as an account of how scientific inference works extremely difficult. That complexity is increased dramatically when the H-D theory of the relation between evidence and theory is turned around to serve as a methodological map for scientific research. This injects the H-D logic forcibly back into the very domain (scientific discovery) that was originally excluded from the account to give the logic any hope of working at all.
The few attempts to make sense of the whole thing, like Weimer's Notes on the Methodology of Scientific Research, are not easy reading, and are certainly not required reading in university curricula that train science teachers (Weimer's book has been out of print for years). That's actually one main reason for writing this article: to provide an overview of the serious problems with teaching the H-D logic as The Scientific Method -- an overview that would be, hopefully, relatively easy to understand.
4. Many scientists (or would-be scientists) have been trained to lie about their work in order to get it published. (I am not making this up!) The basic logic is expressed beautifully in this passage from Science Made Stupid:
The gist of the argument presented in the section "No Special Method is Required" (in Dispelling Some Common Myths About Science) is this: Scientists actually use quite a lot of methods -- there is no single method that all scientists use, and most of the methods they do use are not all that special -- they're used in a lot of other professions; methods like careful observation and trial and error, for example.
If we need a short summary, we could say that what successful scientists do is to be as intelligent as possible in examining whatever interests them. In the words of one physicist, a scientist at work "is completely free to adopt any course that his ingenuity is capable of suggesting to him."
What we don't want to do is to call this "The Scientific Method." There seem to be endless little tricks and strategies, as well as quite a few different major methods that are used again and again for certain types of problems, and at different stages in the development of a scientific discipline. I don't mean several competing alternatives, as in the arguments among philosophers; I mean several different methods used for different kinds of scientific work, which have different functions. Eventually, I think we will have something like a "natural history" of scientific methods, which may provide the basis for some truly profound understanding of scientific learning. (The best work I know of on this project is that of Herbert Simon and his colleagues, published as an interim report in their book Scientific Discovery: Computational Explorations of the Creative Processes.)
One of the more important chapters in the natural history of scientific methods will probably be the one on retroductive (or abductive) inference: Reasoning to the best explanation -- that is, reasoning from a surprising finding (observation or experimental result) to a hypothesis that makes sense of that finding.
The surprising phenomenon, X, is observed.
But if hypothesis H were true, X would be a matter of course.
Hence, there is reason to suspect that H is true.
N. Russell Hanson, starting from Peirce's work on abductive inference, has done a fine job of explaining how scientists actually proceed from surprising evidence to theories that might explain that evidence, in his book Patterns of Discovery and his paper "Is there a logic of scientific discovery?"
Recently, this form of reasoning has interested researchers in Artificial Intelligence, who are using it for modeling human evidential reasoning. Abductive Inference: Computation, Philosophy, Technology, edited by John R. Josephson and Susan G. Josephson, reports on the progress of a major investigation at the Laboratory for Artificial Intelligence Research at Ohio State University. The book argues that "knowledge arises from experience by processes of abductive inference, in contrast with the view that knowledge arises non inferentially, or that deduction and inductive generalization are sufficient to account for knowledge."
It "reports key discoveries about abduction that were made as a result of designing, building, testing, and analyzing knowledge based systems for medical diagnosis and other abductive tasks. These systems demonstrate that abductive inference can be described precisely enough to achieve good performance, even though this description lies largely outside the classical formal frameworks of mathematical logic and probability."
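The abductive pattern those systems implement can be sketched in miniature: among candidate hypotheses, keep the ones that account for every observation, then pick the most plausible. The sketch below is only an illustrative toy -- the symptoms, rules, and priors are invented for this example and are not the Josephsons' actual architecture.

```python
def abduce(observations, hypotheses, explains, plausibility):
    """Reasoning to the best explanation: among candidate hypotheses,
    keep those that account for every observation, then return the
    most plausible one (or None if nothing explains the data)."""
    candidates = [h for h in hypotheses
                  if all(explains(h, o) for o in observations)]
    if not candidates:
        return None
    return max(candidates, key=plausibility)

# Toy diagnosis example (all names, rules, and priors are invented):
symptoms = {"fever", "cough"}
rules = {"flu": {"fever", "cough", "aches"},
         "cold": {"cough", "sniffles"},
         "pneumonia": {"fever", "cough", "chest pain"}}
priors = {"flu": 0.3, "cold": 0.5, "pneumonia": 0.05}

best = abduce(symptoms, rules,
              explains=lambda h, o: o in rules[h],
              plausibility=lambda h: priors[h])
# "cold" fails to explain the fever, so despite its high prior it is
# ruled out; of the survivors, "flu" is more plausible than "pneumonia".
```

Notice that, as the Josephsons observe, nothing in this procedure is deduction or inductive generalization: it is a distinct pattern of inference from surprising data back to an explanatory hypothesis.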
We are remarkably adaptable creatures, we humans. The Russian scientists, during the Soviet era, had to maintain two separate minds about the work they were doing, one that made sure that their reports conformed to the Marxist expectations of their Communist Party bosses, and one that could actually do the work and carefully communicate with other scientists. Unfortunately, we require that kind of cognitively expensive split mentality of all our science students, by requiring them to learn methodological principles that don't actually make sense.
Our focus in this section is quite specific: Having given up teaching hypothetico-deductivism as "The Scientific Method," how can we help students to a general understanding of the workings of science? (The emphasis is on "general" understanding because that is what the magic method was supposed to provide.)
First, of course, one could focus on some of the qualities that do tend to be characteristic of science in general, qualities such as those discussed in the section titled So Why Is Science So Powerful? in my article Dispelling Some Common Myths About Science. Those generalities, though, will certainly not be adequate as a way of introducing people to how science actually proceeds. What about the specifics of general scientific methodology?
Since the magic method was a methodological prescription injected where none should ever have been, we can't just replace it with a more accurate version, and there is really no need to do so. There is no unique standard method essential to all scientific progress, and there is nothing to be gained by continuing the pretence that we can sum up the essence of all scientific work in a few phrases.
I can see at least two ways of proceeding. We can try to develop a less misleading formulaic statement of the key points of an important type of scientific inference, such as abduction, with a clear indication that the schema is only one of many reasoning strategies that scientists use. I've begun exploring that option in the next section, which is titled Can we improve on the formula? Another option would be to dispense with synoptic formulae altogether. That is the approach I'll explore briefly here in this section.
What I would suggest is to develop a set of case studies from the history of science and mathematics, illustrating various aspects of the way human beings have come to understand progressively more and more about themselves and their world.
For the importance and power of observation I'm thinking of examples like these: John Ostrom finding the claw of Deinonychus and knowing immediately how important it was, because he'd spent so much time examining and thinking about the anatomy of related dinosaurs; Dr. Tinbergen watching the gulls year after year, until what he was seeing them doing began to make sense, and then beginning to use experiments to answer specific questions; how building a special instrument or going to a special place can extend our powers of observation; and how terribly hard it was for Galileo to understand what he was seeing through the telescope. The range of variation in preparation is enormous, from casually noticing an unusual reaction in rubber spilled on a stove, to spending billions of dollars putting the Hubble Space Telescope in orbit and periodically sending people into space to repair or improve it.
(It is a useful simplification to consider experiment as a special form of observation: arranging to see what would happen -- what you would observe -- under a particular set of circumstances.)
The emphasis should keep shifting from the intrinsic interest of the work and discoveries to what they can tell us about the many ways we increase our understanding. With the same approach one can explore the various powers and difficulties of naturalistic observation, of experimentation, of logical and mathematical analysis, and so on.
Before closing this section, let me beg you to kindly refrain from dragging students through the history of whatever discipline you may be teaching, detailing each advance in terms of who did the work and when and where and with what materials and instruments, with the arguments on all sides of every important issue, and so on. I'm suggesting that a part of the course be devoted to great examples of human intelligence at work, and most of the history of any discipline doesn't warrant that level of attention in an introductory course.
One could try to find a balance between presenting science as a tidily packaged body of knowledge and as an ongoing mystery story with us and our ancestors and children as the team of detectives. I believe that you and your students will find that science is much more magical without the method.
The following books may be useful in developing a historical case-study approach to science teaching:
An alternative to teaching general scientific methodology with no cut-and-dried cookbook formula would be to come up with a less misleading replacement for the hypothetico-deductive schema. In response to comments, I decided to have a go at that task, even though I wasn't convinced that it was a good idea, and I wasn't sure it could be done. I surprised myself. Here's what I've come up with so far:
I'm much more comfortable with this than I expected to be, for several reasons:
First, it's easy to avoid most of the problems of the H-D schema just by refraining from claiming that we are talking about a unique and essential procedure worthy of the title "The Scientific Method." We then find ourselves simply trying to come up with a clear and, hopefully, useful shorthand description of something that scientists often find it worthwhile to do. We can reinforce this sense of modesty by using the article "a," rather than "the," and by being more specific about what the strategy is designed to do. Thus I've called it "A Scientific Mystery-Solving Strategy."
Second, we've managed to include abductive inference (discussed above in the section So Is There a Scientific Method?) and the notion that imagination and intelligence are required for coming up with hypotheses worth testing. This takes the "Magic" out of the method, and puts it in the scientist, where it belongs.
Third, both positive and negative outcomes are explicitly mentioned in the testing section, and both alternative hypotheses and further testing are mentioned explicitly in the evaluation section.
Fourth, there is no artificial imposition of theory as the necessary reason for generating a hypothesis and testing it. This mystery-solving strategy is used at all levels in science, whether figuring out a way to measure something, figuring out what went wrong in an experiment, or figuring out how to interpret an observation.
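The four points above describe a cycle: imagine candidate explanations, test them, evaluate survivors against rivals, and test further. As a purely illustrative sketch (the function, the puzzle, and the test are all invented here, not the author's strategy statement), the loop structure might look like this:

```python
# Toy sketch of a hypothesize-test-evaluate cycle.
# The "experiment" is just a predicate supplied by the caller;
# the loop structure, not the science, is the point.

def investigate(puzzle, generate_hypotheses, run_test, max_rounds=10):
    """Repeatedly test candidate hypotheses, keeping survivors.

    generate_hypotheses(puzzle) -> candidate explanations (the abductive step)
    run_test(hypothesis)        -> True (consistent) or False (ruled out)
    """
    candidates = list(generate_hypotheses(puzzle))
    for _ in range(max_rounds):
        survivors = [h for h in candidates if run_test(h)]
        if len(survivors) <= 1 or survivors == candidates:
            return survivors      # one best explanation, none left, or no
                                  # further discrimination possible
        candidates = survivors    # further testing of rival hypotheses

    return candidates

# Hypothetical example: which numbers below 20 fit the clues?
clues = {"even": True, "divisible_by_3": True}
result = investigate(
    clues,
    generate_hypotheses=lambda p: range(1, 20),
    run_test=lambda n: n % 2 == 0 and n % 3 == 0,
)
print(result)  # prints [6, 12, 18]
```

Note what the sketch cannot capture, which is exactly the author's point: generating hypotheses worth testing, and devising tests that discriminate among them, takes imagination and intelligence that no formula supplies.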
I'd be happy to hear comments on this strategy statement.
My statement that the "hypothetico-deductive method" is inadequate even as a logical analysis of how scientific theories are justified by evidence is not addressed in this paper. I'd suggest looking at Weimer's Notes on the Methodology of Scientific Research, N.R. Hanson's Patterns of Discovery, and Bartley's The Retreat to Commitment. (The books by Weimer and Hanson are, regrettably, out of print; but they can probably be obtained through an Interlibrary Loan.)
Not all participants in the debate would agree with my statement: Some philosophers still take quite seriously the H-D account of how scientific theories are justified by evidence. A very good example is the paper Hypothetico-Deductivism: The Current State of Play; The Criterion of Empirical Significance: Endgame, by Ken Gemes.
N.R. Hanson, "Is there a logic of scientific discovery?" In H. Feigl and G. Maxwell, Current Issues in The Philosophy of Science. Holt, 1961.
Books -- Web Sites -- Image Credits
"On Scientific Method" by Percy W. Bridgman
Thomas S. Kuhn; The Structure of Scientific Revolutions.
N.R. Hanson, Patterns of Discovery.
Walter B. Weimer; Notes on the Methodology of Scientific Research.
John Barrow; The World Within the World.
W.W. Bartley, III; The Retreat to Commitment.
John R. Josephson and Susan G. Josephson (Eds.); Abductive Inference: Computation, Philosophy, Technology.
Imre Lakatos, Paul K. Feyerabend, Matteo Motterlini (Editor); For and Against Method: Including Lakatos's Lectures on Scientific Method and the Lakatos-Feyerabend Correspondence.
Pat Langley, Herbert A. Simon, Gary L. Bradshaw, Jan M. Zytkow; Scientific Discovery: Computational Explorations of the Creative Processes.
David Klahr, Herbert A. Simon, Christian D. Schunn; Exploring Science: The Cognition and Development of Discovery Processes.
Hallucigenia -- A fossil Onychophoran of the Burgess Shale beds, here incorrectly reconstructed as walking on what were probably dorsal protective spines. In his book The Crucible of Creation: The Burgess Shale and the Rise of Animals Simon Conway Morris discusses how his interpretation of the fossil was later shown to be upside-down and probably backwards.
I chose this image as a sort of visual joke: The upside-down reconstruction is a snapshot of zoological theory taken at a moment when scientists were wrong about an important detail; similarly, the "hypothetico-deductive method" is a snapshot of the theory of scientific methodology, taken at a moment when the theory was dramatically distorted in several ways.
The vivid clarity and beauty of this image reminds me of how we all create solid, seemingly perfect images of whatever we happen to believe -- and then create bright new visions of perfection when those beliefs change.
Detail of an image by John Gurche, published on pages 50 and 51 of The Rise of Life, by John Reader (Alfred A. Knopf, 1986).
Revised on April 27, 2000
Copyright © 2000 Dharma Haven