I recently ran across Jonathan Weisberg’s interesting paper The Argument from Indifference.
Before getting to the argument, let’s first set up the fine-tuning dialectic.
We’ve always known that the laws and constants are life-permitting. After all, if they weren’t, we wouldn’t be here. What we didn’t know was whether the life-permitting range of the constants is narrow or wide. The recent discovery that the constants need to fall within a narrow range was said to be evidence for design.
The fact that there is life is old data—we’ve always known that life exists. The new data is that the constants need to be in a narrow range for the universe to be life-permitting. This new data gave rise to a new breed of fine-tuning arguments; and the new data is supposed to provide additional support for design over and above the old data of life.
Let N = the new data of narrow range constants, D = Design, and O = the old data of life. The fine-tuning argument in a likelihood formulation says:
(FTA) P(N|D & O) > P(N|¬D & O).
That is, the probability of the narrow constants given design and life is greater than the probability of the narrow constants given blind chance and life.
- The Likelihood Principle says: if P(E|H) > P(E|¬H) then E supports H over ¬H.
Given the likelihood principle, the FTA says that new data of fine-tuning supports design over blind chance (given the old data of life).
But is the FTA true? To figure that out, Weisberg first introduces some assumptions that should seem plausible to the fine-tuning proponent.
- Divine Intent: P(O|D) = 1. The designer will always create life. (Some may say that this is unfairly high, given free will, and that it unfairly helps the FTA, but let’s put this at 1 for the sake of the argument.)
- Blind Indifference: if there is no designer, P(·|O&¬D) is a uniform distribution over the O-possibilities. In other words, given mindless chance, each O-possibility is equally probable.
- Divine Indifference: if there is a designer, P(·|O&D) is a uniform distribution over the O-possibilities. In other words, given a designer, each O-possibility is equally probable, since she does not favor any O-world over another. All she cares about is having a life-permitting world.
Blind Indifference and Divine Indifference together entail Divine and Blind Irrelevance. That is, D and ¬D are irrelevant to the probabilities of the O-possibilities, since any O-possibility is equally probable given either D or ¬D.
These assumptions, which should be plausible to the design theorist, entail that the new fine-tuning data N supports blind chance (¬D) over design (D). More formally:
(1) P(N|¬D) > P(N|D)
Here’s an intuitive way to see why. A designer will only pick among O-worlds (life-permitting worlds), while blind chance is indifferent between O-worlds and non-O-worlds. Call laws robust if they permit life across a wide range of constants, and fragile if they permit life only on a narrow range. Since most O-worlds have robust rather than fragile laws (it’s easier to get life when a wide range of constants is life-permitting), the designer is more likely than blind chance to pick a world with robust laws, which is the same as saying that N (narrow range) is more likely on blind chance than on design. More formally, P(N|¬D) > P(N|D): the new data N favors blind chance over design. (In the paper, Weisberg gives a mathematical proof of P(D|N) < P(D), which is equivalent to the likelihood formulation, since P(H|E) > P(H) iff P(E|H) > P(E|¬H).)
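To make this concrete, here is a small toy model of my own (the specific numbers are arbitrary illustrations, not from Weisberg’s paper). A world is a (law, constant) pair; the robust law permits life for 8 of 10 constant settings, the fragile law for only 1 of 10. Blind chance is uniform over all worlds; the designer, by Divine Intent and Divine Indifference, is uniform over the O-worlds.

```python
from fractions import Fraction

# Toy world space (arbitrary numbers, purely illustrative): the "robust" law
# permits life for 8 of 10 constant settings, the "fragile" law for 1 of 10.
LIFE = {"robust": set(range(8)), "fragile": {0}}
worlds = [(law, c) for law in ("robust", "fragile") for c in range(10)]
o_worlds = [(law, c) for (law, c) in worlds if c in LIFE[law]]  # life-permitting

# N = the world's law is fragile (narrow life-permitting range).
def p_N_given_notD():
    # Blind chance: uniform over all 20 worlds.
    return Fraction(sum(1 for (law, _) in worlds if law == "fragile"), len(worlds))

def p_N_given_D():
    # Designer: uniform over the 9 O-worlds (Divine Intent + Divine Indifference).
    return Fraction(sum(1 for (law, _) in o_worlds if law == "fragile"), len(o_worlds))

print(p_N_given_notD(), p_N_given_D())  # 1/2 vs 1/9
assert p_N_given_notD() > p_N_given_D()  # (1): N favors blind chance over design
```

Because robust laws contribute far more O-worlds than fragile laws do, the designer lands on a fragile (narrow) world only 1 time in 9, while blind chance does so half the time.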
The previous likelihood did not include O (that life exists) in our background knowledge. So what happens when we put O into our background knowledge? Using the assumptions of Blind Indifference and Divine Indifference, we can see directly that:
(2) P(N|D&O) = P(N|¬D&O).
That is, given that life exists, the new data supports neither design nor blind chance: they receive equal support. This can be seen intuitively. Given Blind Indifference and Divine Indifference, all O-possibilities are equally probable on both design and blind chance; conditional on O, the two hypotheses amount to the same uniform selection mechanism over the O-worlds.
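The equality in (2) can be checked in a toy model of my own (arbitrary numbers, not Weisberg’s): the robust law permits life for 8 of 10 constant settings, the fragile law for 1 of 10. Conditionalizing blind chance on O by Bayes’ theorem yields exactly the designer’s distribution over O-worlds.

```python
from fractions import Fraction

# Toy world space (arbitrary illustrative numbers):
LIFE = {"robust": set(range(8)), "fragile": {0}}
worlds = [(law, c) for law in ("robust", "fragile") for c in range(10)]
is_O = lambda w: w[1] in LIFE[w[0]]   # world is life-permitting
is_N = lambda w: w[0] == "fragile"    # world has a narrow (fragile) law

# Blind chance: uniform over all worlds; condition on O by Bayes.
p_O = Fraction(sum(map(is_O, worlds)), len(worlds))
p_N_and_O = Fraction(sum(1 for w in worlds if is_O(w) and is_N(w)), len(worlds))
p_N_given_notD_O = p_N_and_O / p_O

# Design: uniform over O-worlds directly (Divine Intent + Divine Indifference).
o_worlds = [w for w in worlds if is_O(w)]
p_N_given_D_O = Fraction(sum(map(is_N, o_worlds)), len(o_worlds))

print(p_N_given_D_O, p_N_given_notD_O)  # both 1/9
assert p_N_given_D_O == p_N_given_notD_O  # (2): equal support given O
```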
Darren Bradley, in his reply to Weisberg, argues that there is still an argument to be made for fine-tuning; more specifically, Bradley argues that the amount of support O gives to D increases as N gets narrower—so N acts as indirect support for D. To see this, let B stand for maximally broad constants where all constants support life. In that case, we have:
- P(O|D&B) = P(O|¬D&B) = 1.
In other words, given B, life will surely exist on both design and blind chance. Since the probabilities are equal, O is indifferent between D and ¬D (given B). But now suppose that the constants need to be in a narrow range to permit life, which we represent as N. Then we’d get: P(O|D&N) = 1 (assuming Divine Intent) and P(O|¬D&N) < 1 (assuming blind chance). So that:
(3) P(O|D&N) > P(O|¬D&N).
In other words, given N, life supports D over ¬D. (And the narrower N is, the more D is supported.)
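Bradley’s point can be put in numbers with a toy parameterization of my own (not from his paper): suppose the laws permit life for w of 100 constant settings, so that small w corresponds to N and w = 100 corresponds to B. The likelihood ratio P(O|D)/P(O|¬D) grows as the window narrows.

```python
from fractions import Fraction

# Toy illustration (my own arbitrary numbers): the laws permit life for
# w of 100 constant settings; small w plays the role of N, w = 100 of B.
def likelihood_ratio(w):
    p_O_given_D = Fraction(1)          # Divine Intent: the designer ensures life
    p_O_given_notD = Fraction(w, 100)  # blind chance hits the window w% of the time
    return p_O_given_D / p_O_given_notD

for w in (100, 50, 10, 1):
    print(w, likelihood_ratio(w))  # 1, 2, 10, 100

# At w = 100 (maximally broad B) the ratio is 1: O is indifferent between D and ¬D.
assert likelihood_ratio(100) == 1
# The narrower the window, the more O favors D over ¬D:
assert likelihood_ratio(1) > likelihood_ratio(10) > likelihood_ratio(50)
```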
While Weisberg thinks Bradley may be right that, as N gets narrower, it increases the support O gives to D, it is still the case that learning N after we have learned O does not increase the net support for D. The disconfirmation of D given by (1) is exactly balanced by the support for D given by (3), since (2) shows that learning N after learning O favors neither design nor blind chance.
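This balancing can be verified numerically in a toy model of my own (arbitrary numbers: a robust law permitting life for 8 of 10 constant settings, a fragile law for 1 of 10, and a 50/50 prior on design). The posterior on D after learning both N and O equals the posterior after learning O alone.

```python
from fractions import Fraction

# Toy world space (arbitrary illustrative numbers), with a 50/50 prior on design.
LIFE = {"robust": set(range(8)), "fragile": {0}}
worlds = [(law, c) for law in ("robust", "fragile") for c in range(10)]
o_worlds = [w for w in worlds if w[1] in LIFE[w[0]]]

prior_D = Fraction(1, 2)

def posterior(p_E_given_D, p_E_given_notD):
    # Bayes' theorem for P(D|E) with the 50/50 prior.
    num = prior_D * p_E_given_D
    return num / (num + (1 - prior_D) * p_E_given_notD)

# Learning O alone:
p_O_given_D = Fraction(1)                              # Divine Intent
p_O_given_notD = Fraction(len(o_worlds), len(worlds))  # 9/20
P_D_given_O = posterior(p_O_given_D, p_O_given_notD)

# Learning N and O together:
fragile_o = sum(1 for (law, _) in o_worlds if law == "fragile")
p_NO_given_D = Fraction(fragile_o, len(o_worlds))   # designer picks among O-worlds
p_NO_given_notD = Fraction(fragile_o, len(worlds))  # chance over all worlds
P_D_given_NO = posterior(p_NO_given_D, p_NO_given_notD)

print(P_D_given_O, P_D_given_NO)  # 20/29 and 20/29
assert P_D_given_O == P_D_given_NO  # N adds nothing once O is known
```

O alone confirms design (20/29 > 1/2), but adding N on top of O moves the posterior not at all: the boost from (3) and the drop from (1) cancel exactly.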
A note on evidence
The notion of evidence used here is Bayesian. Evidence, alone, only boosts one’s degree of credence in a hypothesis one way or the other; it does not determine the overall plausibility of the hypothesis. On a Bayesian view, what you should believe given new evidence depends on the prior probabilities of your beliefs. To use John Hawthorne’s example, the existence of cheese is evidence for a God with a cheese fetish; but, given the low priors, we don’t actually find the cheese-fetish God plausible.
Evidence comes in degrees, sometimes weak, sometimes strong. Suppose there is a raffle with 100 tickets. If I bought one ticket, that would be weak evidence that I will win; it is still evidence, but given its weakness, it wouldn’t be plausible that I will win. If I bought 99 tickets, that would be strong evidence that I will win, and it would indeed be plausible that I will win.
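The raffle arithmetic, spelled out (a trivial sketch of the example above):

```python
from fractions import Fraction

# 100-ticket raffle; I hold k tickets. Holding no tickets, P(win) = 0,
# so each ticket purchase is evidence for winning; the question is how strong.
def p_win(k, total=100):
    return Fraction(k, total)

print(p_win(1))   # 1/100: a boost from 0, but winning remains implausible
print(p_win(99))  # 99/100: a large boost, and winning is now plausible
assert p_win(1) < Fraction(1, 2) < p_win(99)
```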
The first worry for the fine-tuning argument is the normalizability problem, which Timothy McGrew, Lydia McGrew, and Eric Vestrup explain:
> Probabilities make sense only if the sum of the logically possible disjoint alternatives adds up to one … But if we carve an infinite space up into equal finite-sized regions, we have infinitely many of them; and if we try to assign them each some fixed positive probability, however small, the sum of these is infinite.
The design theorist could restrict herself to a probability distribution over a finite range to avoid this problem—something like Robin Collins’ “epistemically illuminated region”.
A second worry has to do with Divine Indifference. So far, our designer hypothesis has had the auxiliary hypothesis that the designer is equally likely to pick among any of the O-worlds. If we changed our auxiliary hypothesis by giving the designer certain intentions, it could turn out that N is evidence for design (given O) as John Hawthorne explains in this video.
What Hawthorne doesn’t mention is that, on other auxiliary hypotheses about Divine intentions, it could turn out that N is evidence against design (given O).
A note on skeptical theism
As Hawthorne notes near the end of the video, the skeptical theist is not in a position to use fine-tuning arguments. Skeptical theism is a response to the evidential problem of evil. The skeptical theist reasons that, given our finite human epistemic position, we are in no position to make empirical judgments about the existence of gratuitous evils, for God could have morally sufficient reasons beyond our ken. Effectively, skeptical theists block empirical/Bayesian reasoning from the appearance of gratuitous evils to the conclusion that a good God does not exist. But given skeptical theism, one should also accept that one is in no position to know that God couldn’t have morally sufficient reasons to create robust laws; this should likewise block empirical/Bayesian reasoning about fine-tuning.
It seems the skepticism should run even deeper in the robust/fragile laws case than in the gratuitous evil case, for what information does the skeptical theist have about God’s preferences concerning laws? In the moral case, the skeptical theist can at least point to some of God’s moral properties, like lovingness, honesty, and generosity; these properties should give us some general idea of what God would do, yet the skeptical theist denies even this. In the fine-tuning case, what properties of God can we point to that allow us to say, with any confidence, which laws or types of laws God would create? It seems none. So skepticism about fine-tuning is more warranted than skepticism about gratuitous evil.
I’ve ignored talk of multiverses so far and assumed a single-universe hypothesis. Things change once we consider multiverses. Some interesting papers concerning the multiverse are Roger White’s Fine-Tuning and Multiple Universes, Kai Draper, Paul Draper, and Joel Pust’s Probabilistic Arguments for Multiple Universes, and Darren Bradley’s A Defense of the Fine-Tuning Argument for the Multiverse.