By Casey Luskin

The Science Behind Intelligent Design Theory


Intelligent design is a scientific theory with roots in information theory and in observations about intelligent action. The theory compares the kinds of complexity that intelligent agents are observed to produce with the kinds of information that purely natural processes can produce, and from that comparison infers that life was designed by an intelligence or multiple intelligences. It makes no claims about the identity of the intelligent designer(s), but merely holds that intelligent action was involved at some point in the origins of various aspects of biological life.

Figure 1. Diagram showing the different processes that can produce entities: natural processes (chance- and law-based processes), intelligent design, or unknown natural laws. Only within intelligent design is specified complexity found, along with its special case, irreducible complexity. Thus, at this point, intelligent design theory exclusively predicts that specified complex information will be found.

Intelligent design begins with observations about the types of information that intelligent agents produce in the real world. Even the atheist zoologist Richard Dawkins concedes that, intuitively, “[b]iology is the study of complicated things that give the appearance of having been designed for a purpose.”1 Dawkins would say that natural selection did the actual “designing”; however, intelligent design theorist Stephen C. Meyer rightly notes that “[i]ndeed, in all cases where we know the causal origin of ‘high information content,’ experience has shown that intelligent design played a causal role.”3 Thus, like any true scientific theory, intelligent design theory begins with empirical observations of the natural world.

Critics of intelligent design have argued that although experience shows various structures are always made by intelligence, those structures could still be attributed to natural processes. A typical example is the arch: in our experience humans make arches, yet arches, such as the one found in Arches National Park, can be explained naturally. The answer is that we have experience of some arches being made by humans, and experience of other arches being made by natural processes, exactly as witnessed at Arches National Park. A quick experiment with sand and water at the beach can roughly reproduce what happened there, empirically verifying that natural processes can create arches. But this is no surprise: natural arches themselves contain only small amounts of information. In our experience we have no instance of specified complex information created through natural processes alone. Thus, as seen in Figure 1, intelligent design theory makes a testable prediction from observations of the natural world: that specified complex information will be found.

As seen in Figure 2, intelligent action can potentially produce almost any level of information content. However, as Dembski argues in “No Free Lunch” and “The Design Inference,” there is an upper limit to the information content that natural processes can produce (represented by Curve C). Where we see high [Dembski would add “specified”] information content, we know that natural processes were not involved, and that intelligent design alone can be responsible. Thus, we can infer design. When only low information content is involved, it could have been designed, but from our understanding of what natural processes can do, the probability shifts toward the information having been produced by natural processes.

Figure 2. Point A represents something probably made by natural processes. Point B represents something made by intelligent design. Curve C represents the upper limit to what natural processes can produce. Inferences made from both points A and B are based upon probabilities.
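Curve C's upper limit can be sketched as a simple probability cutoff. The sketch below is illustrative only: the function name is my own, and the specific cutoff is an assumption drawn from Dembski's proposed “universal probability bound” of roughly 1 in 10^150 (about 500 bits), not a value given in this article.

```python
# Dembski's proposed "universal probability bound": events less probable
# than this are, on his argument, beyond the reach of chance-law processes.
UNIVERSAL_PROBABILITY_BOUND = 1e-150

def beyond_natural_processes(event_probability: float) -> bool:
    """True if the event is too improbable to attribute to chance-law processes."""
    return event_probability < UNIVERSAL_PROBABILITY_BOUND

print(beyond_natural_processes(1e-40))    # False: point A territory, below Curve C
print(beyond_natural_processes(1e-200))   # True: point B territory, above Curve C
```

As with points A and B in the figure, the output is a probabilistic judgment about where an event falls relative to the limit, not a certainty.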

As Dembski says, describing the process by which we explain things, “the [explanatory] filter asks three questions in the following order: (1) Does a law explain it? (2) Does chance explain it? (3) Does design explain it?”5 If law or chance can explain low complexity, then a chance-law explanation may be the most appropriate one for the origin of low information content. However, if only design can explain high information content, then we are justified in inferring design (Figure 3 below).
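The filter's ordered questions can be sketched as a short decision procedure. This is an illustrative sketch only; the function name and boolean inputs are assumptions standing in for the probabilistic analysis Dembski actually prescribes.

```python
def explanatory_filter(law_explains: bool, chance_explains: bool,
                       specified: bool) -> str:
    """Dembski's three questions, asked strictly in order."""
    if law_explains:
        return "law"          # question 1: a regularity accounts for it
    if chance_explains:
        return "chance"       # question 2: chance plausibly accounts for it
    if specified:
        return "design"       # question 3: complex AND matches a specification
    return "unattributed"     # complex but unspecified: no design inference

# A crystal's regular lattice is explained by law; a random coin-flip
# run by chance; only what survives both questions reaches question 3.
print(explanatory_filter(True, False, False))    # law
print(explanatory_filter(False, False, True))    # design
```

Note that the ordering matters: design is inferred only after law and chance have both been eliminated.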

Information is a very real entity which may or may not be created by a conscious intelligent being. Design theorist William Dembski says, “No one disputes that there is such a thing as information. As Keith Devlin (1991, p. 1) remarks, ‘Our very lives depend upon it, upon its gathering, storage, manipulation, transmission, security, and so on. Huge amounts of money change hands in exchange for information. People talk about it all the time. Lives are lost in its pursuit. Vast commercial empires are created in order to manufacture equipment to handle it.’”2

Dembski borrows accepted definitions from information theory to define information as the actualization of one possible event or scenario to the exclusion of others. In other words, information is the narrowing down of what you are talking about. He quotes Fred Dretske: “Information theory identifies the amount of information associated with, or generated by, the occurrence of an event (or the realization of a state of affairs) with the reduction in uncertainty, the elimination of possibilities, represented by that event or state of affairs.”2 Another definition Dembski gives is from Robert Stalnaker: “Content requires contingency. To learn something, to acquire information, is to rule out possibilities. To understand the information conveyed in a communication is to know what possibilities would be excluded by its truth.”2

The complexity of information is measured by assigning probabilities to the excluded scenarios. When the observed scenario has a low probability and the excluded scenarios have a high probability, we have information of high complexity. Through a mathematical transformation involving logarithms, the probabilities of scenarios can be converted into units of information, measured in bits. DNA as a genetic molecule contains information because it specifies what to produce: one entity rather than other entities.
(A DNA molecule on its own, without the machinery to produce proteins, would not carry nearly as much information. So when I speak of DNA in this context, I mean the entire machinery for using the genetic code to create biological structures.)
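The logarithmic transformation mentioned above is the standard one from information theory: an event of probability p carries −log2(p) bits. A minimal sketch (the function name is my own):

```python
import math

def information_in_bits(probability: float) -> float:
    """Shannon's measure: the less probable the event, the more bits it carries."""
    return -math.log2(probability)

# A fair coin flip rules out one of two equally likely outcomes: 1 bit.
print(information_in_bits(1 / 2))      # 1.0
# One specific outcome out of 1,024 equally likely ones: 10 bits.
print(information_in_bits(1 / 1024))   # 10.0
```

Halving the probability of the observed scenario adds exactly one bit, which is why highly improbable, narrowly specified outcomes correspond to high information content.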

Functions are biological features that do things for the organism. The purpose of intelligent design theory is to examine various functions and ask whether they bear the marks of having been designed by an intelligence.

In other words, when we see that the DNA machinery which produces biological structures can create some structures and not others, performing one specific action and not another, we can legitimately say that we have complex genetic information. When we specify this information as necessary for some function, given a pre-existing pattern, then we can say it was designed. This is called “complex specified information,” or “CSI.”

However, because the intelligent design took place in the past, intelligent design theorists can only detect design in the biological realm after it has happened; they cannot know the specification, or desired target, before the design occurs. Dembski notes, however, that “a pattern corresponding to a possibility, though formulated after the possibility has been actualized, can constitute a specification.”2 In other words, by observing things in the present, we can deduce the specified target of the designer in the past.

Dembski gives an analogy of six children who give their parents individual anniversary gifts, which, when put together, make a complete set of chinaware. We didn’t know this complete set was possible or expected before the gifts were given, but we can still deduce and detect a pattern. But what sort of a pattern is there to which we can retroactively see that life corresponds?

Dembski argues that functionality is the pre-existing pattern to which life must always correspond. He discusses functionality as follows: “Arno Wouters (1995) cashes it out globally in terms of viability of whole organisms. Michael Behe (1996) cashes it out in terms of the irreducible complexity and minimal function of biochemical systems. Even the staunch Darwinist Richard Dawkins will admit that life is specified functionally, cashing out the functionality of organisms in terms of reproduction of genes. Thus Dawkins (1987, p. 9) will write: ‘Complicated things have some quality, specifiable in advance, that is highly unlikely to have been acquired by random chance alone. In the case of living things, the quality that is specified in advance is . . . the ability to propagate genes in reproduction.’”2

If a function vital to the survival of an organism of a given structure (the pre-existing specified pattern) could occur only if a given set of parts (the complex information) were present, and that complex set of parts were to come into being, then we could justifiably infer design. Because we can observe intelligence manipulating parts in an innovative manner to create novel CSI, the presence of CSI indicates design at some level and removes the possibility that a chance-law mechanism, such as the mutation-selection mechanism, was responsible for it. Novel CSI cannot be generated by a chance-law process; it can only be shuffled around.4 As Stephen Meyer says, “Because we know intelligent agents can (and do) produce complex and functionally specified sequences of symbols and arrangements of matter (i.e., information content), intelligent agency qualifies as a sufficient causal explanation for the origin of this effect.”3

The pattern is most easily distinguished retroactively when we know that a specific part is necessary for functionality. A useful way to do this is to consider the alleged evolutionary origin of a given function and the specifications involved along the way:

For example, let us say that a primitive organism uses a hemoglobin-like molecule to dispose of unwanted oxygen. We observe that there are other organisms which use oxygen for respiration, and we understand that their use of oxygen involves a base number of interacting enzymes and organ parts not present in the organism that does not use oxygen. Thus, for an organism to use oxygen for respiration, some pre-specified level of target complexity is necessary beyond the complexity found in organisms that do not use oxygen for respiration. Through “reverse engineering” of biological systems, we can arrive at a target complexity retroactively. Irreducibly complex systems are useful in detecting design because they clearly show that some target level of specified complexity was necessary for some base level of functionality to be present. Examining the steps necessary in the hypothetical construction of irreducibly complex systems may reveal many places where specifications must have existed in the past, and thus where complex specified information exists in the present.

Figure 3. Design detection might be rudimentarily seen as a spectrum, as a function of complex-specified information content. Structures with low CSI content are best described as having been produced by evolution, while those with high CSI content are best explained through design.

In all of this, there has been no mention of God, religion, or adherence to any religious text; rather, we use observations about how intelligent design works in the present to examine aspects of the natural world and ask whether they are designed. Intelligent design theory is based solely upon applying observations about intelligent action and principles of information theory to the construction of biological systems, and nothing more. There is nothing mystical, supernatural, religious, or non-scientific about intelligent design theory. In its current form, intelligent design theory can say nothing about the designer other than that the designer was intelligent. Whether you agree with the methodology of intelligent design theory or not, you have to agree with one thing: it has a scientific basis.

References Cited

1. Dawkins, Richard [zoologist and Professor for the Public Understanding of Science, Oxford University], “The Blind Watchmaker,” [1986], Penguin: London, 1991, reprint, p.1

2. Dembski, William, “Intelligent Design as a Theory of Information,” http://www.arn.org/docs/dembski/wd_idtheory.htm

3. Meyer, Stephen C., “DNA and Other Designs,” http://www.arn.org/docs/meyer/sm_dnaotherdesigns.htm

4. Dembski, William, “No Free Lunch” (2001)

5. Dembski, William, “The Explanatory Filter: A three-part filter for understanding how to separate and identify cause from intelligent design”

Casey Luskin

Associate Director and Senior Fellow, Center for Science and Culture
Casey Luskin is a geologist and an attorney with graduate degrees in science and law, giving him expertise in both the scientific and legal dimensions of the debate over evolution. He earned his PhD in Geology from the University of Johannesburg, and BS and MS degrees in Earth Sciences from the University of California, San Diego, where he studied evolution extensively at both the graduate and undergraduate levels. His law degree is from the University of San Diego, where he focused his studies on First Amendment law, education law, and environmental law.