Some have taken the fine-tuned constants of the universe to support a design inference. Some of their opponents appeal to the weak anthropic principle (WAP) to defuse that inference. Supporters of the WAP reason that we can only observe a universe with life-permitting conditions; so, in that sense, fine-tuning is not surprising. In response, Leslie and Swinburne have offered a firing squad analogy to expose the flaws of the weak anthropic response. The question is: does the firing squad response work?
Before we get into the details, we should formulate the design argument under consideration. Elliott Sober and Robin Collins both formulate the design argument in terms of likelihoods (or the prime principle of confirmation, in Collins’ terminology) rather than probabilities. The (posterior) probability of a hypothesis H given an observation O is represented as P(H|O), while the likelihood is represented as P(O|H). The two are related by Bayes’ theorem:
- P(H|O) = P(O|H)P(H)/P(O)
It follows from Bayes’ theorem that:
- P(H1|O) > P(H2|O) if and only if P(O|H1)P(H1) > P(O|H2)P(H2)
This comparison depends on the prior probabilities P(H1) and P(H2). Since prior probabilities are controversial, Sober thinks the design argument is best formulated in terms of likelihoods. While probability formulations can serve as guides to what to believe, likelihood comparisons are more modest: they only tell you whether an observation favors one hypothesis over another. Nonetheless, if you are a subjective Bayesian, you can combine your subjective priors with the likelihoods to update your beliefs. A likelihood formulation of the design argument will look like P(O|H1) > P(O|H2). More specifically, the design theorist says:
- P(Fine-tuning|Design) > P(Fine-tuning|Chance)
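The gap between a likelihood comparison and a posterior comparison can be made vivid with a quick calculation. The following sketch uses entirely made-up numbers for two generic hypotheses H1 and H2; it is only meant to show why Sober's restriction to likelihoods is a substantive choice.

```python
# Toy Bayes calculation (all numbers hypothetical) showing why a
# likelihood comparison alone does not settle which hypothesis is
# more probable: skewed priors can reverse the posterior ordering.

def posterior(likelihood, prior, total_prob_of_obs):
    """Bayes' theorem: P(H|O) = P(O|H) * P(H) / P(O)."""
    return likelihood * prior / total_prob_of_obs

# Hypothetical likelihoods: the observation favors H1 over H2.
p_obs_given_h1 = 0.9   # P(O|H1)
p_obs_given_h2 = 0.3   # P(O|H2)

# Skeptical priors that heavily favor H2.
p_h1, p_h2 = 0.05, 0.95

# P(O) by the law of total probability (assuming H1 and H2 exhaust the options).
p_obs = p_obs_given_h1 * p_h1 + p_obs_given_h2 * p_h2

post_h1 = posterior(p_obs_given_h1, p_h1, p_obs)
post_h2 = posterior(p_obs_given_h2, p_h2, p_obs)

print(f"P(H1|O) = {post_h1:.3f}, P(H2|O) = {post_h2:.3f}")
# The likelihoods favor H1 (0.9 > 0.3), yet the posterior favors H2.
```

So even if the design theorist's likelihood inequality holds, a skeptic with a low prior on Design can coherently assign Chance the higher posterior; that is exactly why the likelihood formulation is the more modest claim.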
According to Elliott Sober (2004), the fine-tuning argument is flawed because it fails to take into account observation selection effects (OSEs). The weak anthropic principle is a specific instance of the more general OSE. Generally speaking, an OSE occurs when the method of evidence collection introduces a bias. Eddington (1939) gives an example. Suppose we want to know the size of the fish in a certain lake. We catch a bunch of fish with a net, and they are all bigger than 10 inches, so we conclude that all the fish in the lake are bigger than 10 inches. But it turns out the net has 10-inch holes, so of course we would only catch fish bigger than 10 inches. The net introduced an OSE, so we should be wary of our inference that all the fish are bigger than 10 inches.
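Eddington's net can be sketched as a tiny simulation. The fish sizes and the hole width below are made up; the point is just that the sampling method, not the lake, produces the "all fish are big" pattern.

```python
import random

# Toy simulation of Eddington's net (sizes and counts are hypothetical):
# the lake's fish vary widely in size, but a net with 10-inch holes
# only retains fish bigger than 10 inches, biasing the sample.

random.seed(0)
lake = [random.uniform(2, 20) for _ in range(10_000)]  # true sizes, in inches

NET_HOLE = 10
catch = [f for f in lake if f > NET_HOLE]  # smaller fish slip through

print(f"smallest fish in lake:  {min(lake):.1f} in")
print(f"smallest fish in catch: {min(catch):.1f} in")
# Every caught fish exceeds 10 inches even though the lake contains
# far smaller fish -- the observation selection effect at work.
```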
In the case of the fine-tuning argument, the OSE is the fact that we are bound to observe that the universe satisfies the conditions necessary for observers to exist. Sober argues that once we build the OSE into the likelihood formulation of the fine-tuning argument, we get:
- P(Fine-tuning|Design & OSE) = P(Fine-tuning|Chance & OSE) = 1
If this is right, it would defeat the likelihood formulation of the fine-tuning argument, but objectors have replied with a firing squad analogy. Swinburne, following Leslie, explains:
On a certain occasion the firing squad aim their rifles at the prisoner to be executed. There are twelve expert marksmen in the firing squad, and they fire twelve rounds each. However, on this occasion all 144 shots miss. The prisoner laughs and comments that the event is not something requiring any explanation because if the marksmen had not missed, he would not be here to observe them having done so. But of course, the prisoner’s comment is absurd; the marksmen all having missed is indeed something requiring explanation; and so too is what goes with it – the prisoner’s being alive to observe it. And the explanation will be either that it was an accident (a most unusual chance event) or that it was planned (e.g., all the marksmen had been bribed to miss). Any interpretation of the anthropic principle which suggests that the evolution of observers is something which requires no explanation in terms of boundary conditions and laws being a certain way (either inexplicably or through choice) is false.
The idea is that just as anthropic reasoning should not work for the prisoner, so it should not work in the case of fine-tuning. To make this precise, define:
- ID = the firing squad decides that it will spare my life.
- Chance = the firing squad decides that it will fire in randomly chosen directions.
Sober (2004) held that there was an OSE in the likelihood formulation; that is:
- P(the prisoner survived | ID & the prisoner survived) = P(the prisoner survived | Chance & the prisoner survived) = 1
Jonathan Weisberg (2005) convinced Sober that he was wrong about an OSE in firing squads. Weisberg explains:
[T]he alleged OSE is the fact that you can’t observe your own non-survival. Sober says that this undermines your observation of survival as evidence because it entails the evidence. But no such entailment holds. What information does the prisoner have about her methods of data collection that guarantees her observation of her own survival? None, to be sure. After all, if she had such information she wouldn’t have to worry about being shot! Rather, what she does know is that if she observes anything at all, it will be her survival. And this in no way entails that she will survive. Sober’s contention that there is an OSE in firing squad cases rests on a confusion between two propositions:
S: I will observe that I survive.
S’: If I observe whether I survive, I will observe that I survive.
While S certainly entails the evidence, it is clearly inappropriate for the prisoner to use S as a background assumption when evaluating the evidential import of her survival. And S’, though it may be a legitimate background assumption, doesn’t entail that the prisoner will survive.
Leslie and Swinburne’s firing squad case can be formulated as:
- P(I am alive at t3 | ID at t1) > P(I am alive at t3 | Chance at t1)
And Sober (2009) now agrees that the following OSE formulation is mistaken:
- P(I am alive at t3 | ID at t1 & I observe at t3 whether I am alive) = P(I am alive at t3 | Chance at t1 & I observe at t3 whether I am alive) = 1
But Sober thinks the following formulation will illustrate the disanalogy between the firing squad and fine-tuning:
- P(I observe at t3 that I am alive | the firing squad decides at t1 that it will spare my life when it fires at t2 & I am alive at t2 when the squad fires) > P(I observe at t3 that I am alive | the firing squad decides at t1 that it will fire in randomly chosen directions at t2 & I am alive at t2 when the squad fires)
The prisoner’s being alive at t2 does not screen off ID and Chance from his observing at t3 that he is alive. But my being alive at t2 does screen off ID and Chance from my observing at t3 that the constants are right.
I take this to mean that in the firing squad case the information at t2 does not prevent us from discriminating between ID and Chance, while in the fine-tuning case it does.
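Sober's screening-off contrast can be checked with a small enumeration. "A screens off H from O" means P(O | H & A) comes out the same for every hypothesis H. All the numbers below (priors, likelihoods, the chance a life-permitting universe actually produces life) are hypothetical; only the structural contrast matters.

```python
# Toy enumeration of Sober's screening-off point (all numbers made up).

def conditional(worlds, event, given):
    """P(event | given) over a list of (probability, world) pairs."""
    num = sum(p for p, w in worlds if event(w) and given(w))
    den = sum(p for p, w in worlds if given(w))
    return num / den

# --- Fine-tuning: being alive at t2 entails the constants are right. ---
# world = (hypothesis, constants_right, alive_at_t2)
ft_worlds = []
for hyp, p_hyp in [("Design", 0.5), ("Chance", 0.5)]:
    p_right = 1.0 if hyp == "Design" else 1e-6   # hypothetical likelihoods
    for right in (True, False):
        p_r = p_right if right else 1 - p_right
        for alive in (True, False):
            # life requires the right constants: alive -> right
            p_a = (0.5 if right else 0.0) if alive else (0.5 if right else 1.0)
            ft_worlds.append((p_hyp * p_r * p_a, (hyp, right, alive)))

for hyp in ("Design", "Chance"):
    p = conditional(ft_worlds,
                    lambda w: w[1],                        # constants are right
                    lambda w, h=hyp: w[0] == h and w[2])   # hyp & alive at t2
    print(f"fine-tuning  P(constants right | {hyp} & alive@t2) = {p}")
# Both come out 1.0: alive-at-t2 screens off Design vs Chance.

# --- Firing squad: alive at t2 (when the squad fires) does NOT entail
# surviving to t3.  world = (hypothesis, alive_at_t3)
fs_worlds = []
for hyp, p_hyp in [("ID", 0.5), ("Chance", 0.5)]:
    p_survive = 1.0 if hyp == "ID" else 0.001    # hypothetical likelihoods
    fs_worlds.append((p_hyp * p_survive, (hyp, True)))
    fs_worlds.append((p_hyp * (1 - p_survive), (hyp, False)))

for hyp in ("ID", "Chance"):
    p = conditional(fs_worlds,
                    lambda w: w[1],                # alive at t3
                    lambda w, h=hyp: w[0] == h)    # everyone is alive at t2
    print(f"firing squad P(alive@t3 | {hyp} & alive@t2) = {p}")
# 1.0 vs 0.001: the likelihoods still differ, so the prisoner's survival
# continues to discriminate between ID and Chance.
```

In the toy model the fine-tuning conditionals are equal (both 1) while the firing squad conditionals are not, which is precisely the disanalogy Sober is pointing to.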
Sober, Elliott. “The Design Argument.” 2004.
Sober, Elliott. “Absence of Evidence and Evidence of Absence: Evidential Transitivity in Connection with Fossils, Fishing, Fine-Tuning, and Firing Squads.” 2009.
Weisberg, Jonathan. “Firing Squads and Fine Tuning: Sober on the Design Argument.” 2005.