William Dembski holds that Darwinism cannot account for complex specified information (CSI), but that Intelligent Design can. He proposes intelligent design as a scientific theory with empirical consequences. In Dembski’s own words:
Intelligent design can be unpacked as a theory of information. Within such a theory, information becomes a reliable indicator of design as well as a proper object for scientific investigation. In my essay, I shall (1) show how information can be reliably detected and measured, and (2) formulate a conservation law that governs the origin and flow of information. My broad conclusion is that information is not reducible to natural causes, and that the origin of information is best sought in intelligent causes. Intelligent design, thereby, becomes a theory for detecting and measuring information, explaining its origin, and tracing its flow.
Dembski holds that complex specified information (CSI) is a reliable indicator of design. Before I start, it’s important to get clear on Dembski’s terminology, and not to confuse it with more intuitive understandings of the terms.
- Probability: “To assess the probability of an event E is to assess its probability relative to certain background information H.”
- Information: “Thus, we define the measure of information in an event of probability p as -log2 p.” In other words, anything that is probabilistic (or contingent) has information.
- Complex: low probability; specifically, a probability less than 1/10^150 (equivalently, more than 500 bits of information).
- Specificity: Information is specified if it is independently identifiable by a pattern.
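Taken together, these definitions amount to a simple computation: the information in an event is its surprisal, and the event is "complex" once that surprisal crosses Dembski's universal bound. A minimal sketch in Python (the function names are mine, not Dembski's):

```python
import math

def information_bits(p: float) -> float:
    """Dembski's measure: the information in an event of probability p is -log2(p)."""
    return -math.log2(p)

COMPLEXITY_THRESHOLD_BITS = 500  # corresponds roughly to p < 1/10^150

def is_complex(p: float) -> bool:
    """'Complex' in Dembski's sense: information above the 500-bit universal bound."""
    return information_bits(p) > COMPLEXITY_THRESHOLD_BITS

# A fair coin flip carries exactly 1 bit; 501 fair flips carry 501 bits.
print(information_bits(0.5))    # 1.0
print(is_complex(0.5 ** 501))   # True: just past the 500-bit threshold
print(is_complex(0.5 ** 100))   # False: improbable, but not improbable enough
```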
Dembski acknowledges that formalizing the criteria for a specified pattern is difficult, but he thinks some examples can help clear things up. Dembski gives an example of an archer standing 50 meters from a large blank wall with a bow and arrow. In the first scenario, the archer shoots an arrow at the blank wall. In the second scenario, the archer paints a target then shoots an arrow directly hitting the bull’s-eye. In both cases, there is information, because any location that the arrow hits is probabilistic. In both cases, there is complexity, because where the arrow lands has a low probability. But there is specificity in only the second case.
Dembski warns that patterns constructed only in an ad hoc manner do not count as specified. For example, suppose the archer painted the target around the arrow after he shot it; that would count not as a specification but as a post hoc fabrication. Nonetheless, according to Dembski, there are many cases of specified information that do come post hoc; namely, information regarding the origin of life.
In another example, a man sees stones on the ground but sees no pattern. For this man the stones’ formation is not specified. But suppose an astronomer recognizes the formation of the stones as corresponding to some constellation. For the astronomer this formation is specified; therefore, he has grounds for thinking the stones were intentionally arranged. As Perakh points out, this has the consequence that subjective pattern recognition is all it takes for there to be specification.
With the definitions in hand, we can get to the substance. Dembski proposes the use of an “explanatory filter” as a decision procedure for choosing the best explanation for an observation. If something has a high probability, we attribute it to regularity. If something has a low probability and is not specified, we attribute it to chance. If something has a low probability (more than 500 bits of information) and is specified, we attribute it to design. Dembski argues that the explanatory filter has successfully avoided false positives; in other words, he claims it is infallible.
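The filter can be sketched as a three-way decision procedure. This is an illustrative reconstruction, not Dembski's own code: he gives no exact cutoff for "high probability," so the `HIGH_PROBABILITY` constant below is an assumption, and specification is passed in as a boolean rather than detected by the code:

```python
import math

UNIVERSAL_BOUND_BITS = 500  # Dembski's complexity threshold (~1/10^150)
HIGH_PROBABILITY = 0.5      # assumed cutoff for "regularity"; Dembski gives no exact value

def explanatory_filter(p: float, specified: bool) -> str:
    """Dembski's explanatory filter: regularity, then chance, then design."""
    if p >= HIGH_PROBABILITY:
        return "regularity"
    bits = -math.log2(p)
    if bits > UNIVERSAL_BOUND_BITS and specified:
        return "design"
    return "chance"

print(explanatory_filter(0.9, specified=False))          # regularity
print(explanatory_filter(2.0 ** -600, specified=False))  # chance: complex but unspecified
print(explanatory_filter(2.0 ** -600, specified=True))   # design
```

Note that the verdict depends entirely on the probability assigned under one's background knowledge H, which is the hinge of the snowflake objection below.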
One might think a snowflake is an example of CSI, but Dembski disagrees. He says a snowflake isn’t a case of CSI, because snowflake formation is a matter of physical necessity. (I think he means a high probability given our background knowledge, but sometimes he uses “necessity” to mean determinism.)
My problem with this response is that the probability of a snowflake’s formation could have been very low given the background knowledge of people a thousand years ago. If that’s the case, snowflakes could have satisfied the criteria for CSI, and people should have inferred that they were intelligently designed. So it’s doubtful that his “explanatory filter” is infallible. In fact, I don’t see how any design inference could be infallible.
It seems that the design inference from the explanatory filter is an argument from ignorance, but Dembski denies this claim.
… note first that even though specified complexity is established via an eliminative argument, it is not fair to say that it is established via a purely eliminative argument. If the argument were purely eliminative, one might be justified in saying that the move from specified complexity to a designing intelligence is an argument from ignorance (not X therefore Y). But unlike Fisher’s approach to hypothesis testing, in which individual chance hypotheses get eliminated without reference to the entire set of relevant chance hypotheses that might explain a phenomenon, specified complexity presupposes that the entire set of relevant chance hypotheses has first been identified.
My guess is that Dembski says this is not an argument from ignorance because he claims to have eliminated all possible naturalistic chance hypotheses. Dembski claims that (mindless) natural causes are, in principle, incapable of explaining the origin of CSI.
[Natural causes] can explain the flow of CSI, being ideally suited for transmitting already existing CSI. What they cannot do, however, is originate CSI. This strong proscriptive claim, that natural causes can only transmit CSI but never originate it, I call the Law of Conservation of Information. It is this law that gives definite scientific content to the claim that CSI is intelligently caused. … To see that natural causes cannot account for CSI is straightforward. Natural causes comprise chance and necessity. Because information presupposes contingency, necessity is by definition incapable of producing information, much less complex specified information.
Dembski is saying that it is an a priori mathematical truth that natural (mindless) causes cannot explain the origin of CSI.
Natural causes are properly represented by nondeterministic functions (stochastic processes). Just what these are in precise mathematical terms is not important. The important thing is that functions map one set of items to another set of items and in doing so map a given item to one and only one other item. Thus for a natural cause to “generate” CSI would mean for a function to map some item to an item that exhibits CSI. But that means the complexity and specification in the item that got mapped onto gets pushed back to the item that got mapped. In other words, natural causes just push the CSI problem from the effect back to the cause, which now in turn needs to be explained.
Suppose j is an event with CSI, i is a prior event, and f is the natural function. Dembski holds that i has just as much CSI as j, because a deterministic f is merely a conduit that maps any CSI in i to j. (It’s unclear to me why f is deterministic rather than stochastic.) But why can’t it be f that adds information? Dembski explains (in a roundabout way, using more math) that the information in f itself would then need to be explained. If my interpretation is right, Dembski is saying that if there are only (mindless) natural causes, then all the information is in the initial conditions of the universe; therefore, it’s a priori true that there can’t be a (mindless) natural explanation for CSI. On this interpretation, his argument is a hidden fine-tuning argument.
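The conduit claim for a deterministic f can at least be checked numerically: the probability of an output j is the total probability of f's preimage of j, so P(j) ≥ P(i) whenever f(i) = j, and hence -log2 P(j) ≤ -log2 P(i). A toy sketch (the distribution and map are invented for illustration):

```python
import math
from collections import defaultdict

def pushforward(p_in: dict, f) -> dict:
    """Distribution of j = f(i): P(j) is the summed probability of f's preimage of j."""
    p_out = defaultdict(float)
    for i, p in p_in.items():
        p_out[f(i)] += p
    return dict(p_out)

# Toy input distribution over four states and a deterministic, many-to-one map.
p_in = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}
f = lambda i: i % 2

p_out = pushforward(p_in, f)
for i, p in p_in.items():
    # A deterministic map only merges probability mass, so the output event
    # is never more informative (in the -log2 p sense) than its input.
    assert -math.log2(p_out[f(i)]) <= -math.log2(p)
print(p_out)
```

Note this bounds only the -log2 p measure of each outcome; it says nothing about specification, which is where the real dispute lies.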
On a final note, here are two amusing quotes by Dembski. The first is a rebuttal to the claim that we already have a natural example of CSI (whatever CSI means now) in nylonase, an enzyme that lets certain bacteria digest nylon.
The problem with this argument is that Miller fails to show that the construction/evolution of nylonase from its precursor actually requires CSI at all. As I develop the concept, CSI requires a certain threshold of complexity to be achieved (500 bits, as I argue in my book No Free Lunch). It’s not at all clear that this threshold is achieved here (certainly Miller doesn’t compute the relevant numbers). … the nylonase enzyme seems “pre-designed”.
It’s funny that Dembski points out that Miller hasn’t computed the numbers for the probability of the existence of nylonase, as if such a computation were feasible. I don’t see how it could be anything but hand-waving. And, as far as I know, Dembski hasn’t computed the numbers for nylonase either. (500 bits corresponds to a probability of about 1/10^150.) In any case, what’s the point of computing the numbers when it’s supposed to be a priori true that natural causes can’t generate CSI?
This second quote displays his amazing confidence.
CSI is what all the fuss over information has been about in recent years, not just in biology, but in science generally. It is CSI that for Manfred Eigen constitutes the great mystery of biology, and one he hopes eventually to unravel in terms of algorithms and natural laws. It is CSI that for cosmologists underlies the fine-tuning of the universe, and which the various anthropic principles attempt to understand. It is CSI that David Bohm’s quantum potentials are extracting when they scour the microworld for what Bohm calls “active information.” It is CSI that enables Maxwell’s demon to outsmart a thermodynamic system tending toward thermal equilibrium. It is CSI on which David Chalmers hopes to base a comprehensive theory of human consciousness. It is CSI that within the Kolmogorov-Chaitin theory of algorithmic information takes the form of highly compressible, nonrandom strings of digits.
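The last sentence of the quote is the one claim here that is easy to illustrate: in the Kolmogorov-Chaitin picture, a patterned string has a short description while a random one does not. A rough proxy, using zlib compression in place of the (uncomputable) Kolmogorov complexity:

```python
import os
import zlib

patterned = b"AB" * 5000            # highly compressible, nonrandom
random_bytes = os.urandom(10000)    # incompressible with overwhelming probability

print(len(zlib.compress(patterned)))     # a few dozen bytes
print(len(zlib.compress(random_bytes)))  # roughly 10000 bytes, slightly larger
```

Whether high compressibility has anything to do with design is, of course, exactly what is in question.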
Dembski, William. No Free Lunch: Why Specified Complexity Cannot Be Purchased Without Intelligence.
Perakh, Mark. The Dream World of William Dembski’s Creationism.
Pennock, Robert. Intelligent Design Creationism and Its Critics.