## Edward Feser’s Aristotelian Proof for God

In Five Proofs of the Existence of God, Feser runs an Aristotelian-inspired argument for God. While there’s a lot that can be questioned about the argument, I’ll narrow this post to what I think is one of the weaker parts of the proof: his proof that the first cause (i.e. pure act) must be intelligent. To motivate the idea that there must be an intelligent first cause, Feser appeals to the principle of proportionate causality (PPC).

PPC: Whatever is in the effect must in some way or other be in the cause.

Feser explains the PPC:

Suppose, for example, that I give you $20. The effect in this case is your having the $20, and I am the cause of this effect. … [There are] different ways in which the cause may have what is in the effect. When I myself have a $20 bill ready to hand and I cause you to have it, what is in the effect was in the cause formally, to use some traditional jargon. That is to say, I myself was an instance of the form or pattern of having a $20 bill, and I caused you to become another instance of that form or pattern. When I don’t have the $20 bill ready to hand but I do have at least $20 credit in my bank account, you might say that what was in the effect was in that case in the cause virtually. For though I didn’t actually have the $20 on hand, I did have the power to get hold of it. And when I get Congress to grant me the power to manufacture $20 bills, you might say (once again to use some traditional jargon) that I had the $20 eminently. Because in that case, I not only have the power to acquire already existing $20 bills, but the more “eminent” power of causing them to exist in the first place. When it is said, then, that what is in an effect must in some way be in its cause, what is meant is that it must be in the cause at least “virtually” or “eminently” even if not “formally”. (2017, 33)

John Cottingham has criticized a variation of the PPC as implying an implausible heirloom view of causation where properties are passed down from cause to effect:

a sponge cake… has many properties – e.g. its characteristic sponginess – which were simply not present in any of the material ingredients (the eggs, flour, butter). … But this fact simply does not support the conclusion that the sponginess was somehow present in some form in the materials from which it arose. (Cottingham, 1986, 51)

Feser thinks this is a mistaken objection: the PPC doesn’t entail that sponginess in the cake requires there to be sponginess in the ingredients, because that would be to presuppose that the effect must be in the cause formally, i.e. that the cause must have the form of sponginess. What the PPC says is that the effect has to be in the cause formally or virtually or eminently.

Cottingham considers a weaker reading of the PPC:

(One may be tempted to say that the sponginess must have been ‘potentially’ present in the materials, but this seems to defend the [PPC] at the cost of making it trivially true.) (Cottingham, 1986, 51)

Feser replies that Cottingham must have in mind Molière’s “dormitive virtue” objection. According to that objection, to explain opium’s power to cause sleep by saying it has a dormitive power is a tautology or trivially true, since a dormitive power is defined as a power that causes sleep. That is, you would be saying nothing more than, “Opium causes sleep because it has the power to cause sleep.” Feser’s response is that while this statement is “minimally informative”, it’s not a tautology, because the existence of powers can be denied. For example, Humeans about causation deny causal powers.

So, since I do think there are causal powers, I do think the PPC is true, because to say that the effect is in the cause “eminently” is just to say that the cause has the power to produce the effect. I also think it’s minimally informative, which will play into my objection to Feser’s proof that the first cause (pure act) must be intelligent.

From the PPC, and since the first cause (pure act) is the cause of every possible form or pattern, Feser argues:

38. Whatever is in an effect is in its cause in some way, whether formally, virtually, or eminently (the principle of proportionate causality).
39. The purely actual actualizer is the cause of all things. [I think it’s safe to assume Feser means all things besides itself.]
40. So, the forms or patterns manifest in all the things it causes must in some way be in the purely actual actualizer.
41. These forms or patterns can exist either in the concrete way in which they exist in individual particular things, or in the abstract way in which they exist in the thoughts of an intellect.
42. They cannot exist in the purely actual actualizer in the same way they exist in individual particular things.
43. So, they must exist in the purely actual actualizer in the abstract way in which they exist in the thoughts of an intellect.
44. So, the purely actual actualizer has intellect or intelligence. (p 37)

Earlier Feser argued that the first cause (pure act) must be immaterial. I’ll grant that for the sake of argument. I’ve already accepted 38, the PPC. I’ll accept 39 for the sake of argument. I accept 40 because it follows from 38 and 39. Here’s an example to show how modest accepting these steps is. Say I cause someone to have a black eye. The form of black is in me in the eminent sense that I had the power to produce the black eye; I needn’t have a black eye myself to give another person a black eye, which is to say that the effect needn’t be in me formally. I think if you accept causal powers (and pure act just for the sake of argument), then you should accept 40. So Feser is using two points to motivate the idea that the first cause must be intelligent: (1) the PPC, and (2) the first cause is the cause of all things. Steps 41–44 are where things move too quickly for me. He motivates these steps earlier:

… what follows is that the forms or patterns of things must exist in the purely actual cause of things; and they must exist in it in a completely universal or abstract way, because this cause is the cause of every possible thing fitting a certain form or pattern. But to have forms or patterns in this universal or abstract way is just to have that capacity which is fundamental to intelligence. (33-34)

Feser is saying that since the forms or patterns of things must exist in the first cause (based on the PPC), they must exist in the cause as universals, because the first cause is the cause of every possible particular thing that could have that form. Think of all the possible particular round things: a particular basketball, a particular orange, etc. Since the first cause is the cause of all possible round things, Feser is saying that the universal roundness must be in the first cause, and that just is what is fundamental to intelligence. Here, concepts will play the role of universals. (By “universal”, we’re talking about properties that particular things have in common, like roundness.)

I think this move is too quick. All the “minimally informative” PPC requires of me is that the cause have the power to produce the effect, i.e. that the effect is in the cause eminently. And the fact that the first cause is the cause of all things doesn’t suggest to me that universals must be in the cause.

[edited 1/10/18] I think Feser’s reasoning is that the first cause can’t be another instance fitting a certain form, since it is the cause of “every possible thing fitting a certain form.” But why think that? It seems the first cause must also have some form, and that form can’t itself be caused. This is similar to the Third Man objection to Platonism. So it seems it can’t be the cause of every possible thing fitting a certain form. If some forms don’t need causes by way of universals, then why do any? At this point Feser may appeal to analogies or to simplicity in the first cause, but I find that both of those make the first cause unintelligible.

One may object that Feser uses an Augustinian argument for divine conceptualism in chapter 3 to argue that the first cause must be intelligent, and that that can be supplemented here. But I think that that is beside the point. The point of this post is to object to how he derives intelligence from (1) the PPC and (2) being the first cause of all things in chapter 1’s Aristotelian argument. For my review and criticisms of divine conceptualism see this.

## Robin Collins, the FTA and the problem of evil

In this post I will be commenting on issues in the fine-tuning argument (FTA) from the problem of evil (PoE) as Robin Collins presents it in his chapter in the Blackwell Companion to Natural Theology.

I find the evidential PoE to be a persuasive argument against God, but I find it philosophically boring. By contrast, I think the fine-tuning argument (FTA) is philosophically interesting. Robin Collins is probably the premier defender of the FTA, and for Collins the PoE is very relevant to the FTA; so we’ll have to interact with the PoE.

Why is the PoE relevant to the FTA? Ultimately, Collins argues that life-permitting constants (Lpcs) are more probable under theism (T) than under naturalism (N), thus supporting theism. The reason is that God qua good being wants to create embodied moral agents (EMAs); it’s his goodness along with his other omni-attributes that inclines us to think that he’d create EMAs and thus the Lpcs. This is where the PoE comes in, because if the creation of EMAs doesn’t lead to an overall good, then it would be unlikely that God would create Lpcs for the EMAs. Collins says:

Thus, in order for God to have a reason to adjust [the constants] so that [the universe] contains our type of embodied moral agents, there must be certain compensatory goods that could not be realized, or at least optimally realized, without our type of embodiment. This brings us directly to the problem of evil.

If we have an adequate theodicy, then we could plausibly argue that [we] would have positive grounds for thinking that God had more reason to create the universe so that EMA is true, since it would have good reason to think that the existence of such beings would add to the overall value of reality. … On the other hand, if we have no adequate theodicy, but only a good defense – that is, a good argument showing that we lack sufficient reasons to think that a world such as ours would result in more evil than good – then [the probability of Lpc would be indeterminate]. (p 255)

Collins thinks that with an adequate theodicy the Lpcs would be very favorable to T over N. And, surprisingly, even if we lack a theodicy and only have a good defense, the Lpcs would still favor T over N. By “good defense”, I suspect Collins means either the free will defense or skeptical theism. (I’m not sure it matters to the FTA which one it is.) Let’s look at the case where we only have a “good defense.” In that case we have:

• P(Lpc|T) = indeterminate
• P(Lpc|N) << 1 (close to zero).

Normally, according to the likelihood principle, if P(Lpc|T) > P(Lpc|N) then the observation Lpc supports T over N. In the above case where P(Lpc|T) is indeterminate, Collins still thinks it would support T over N. This doesn’t seem right to me. It seems to me that when you’re using the likelihood principle, as Collins does, you compare a probability with a probability and not a probability with something indeterminate.

Collins explains his motivation for this in footnote 40.

One might challenge this conclusion by claiming that … a positive, known probability exist .… This seems incorrect, as can be seen by considering cases of disconfirmation in science. For example, suppose some hypothesis h conjoined with suitable auxiliary hypotheses, A, predict e, but e is found not to obtain. Let ~E be the claim that the experimental results were ~e. Now, P(~E|h & A & k) << 1, yet P(~E|h & A & k) ≠ 0 because of the small likelihood of experimental error. Further, often P(~E|~(h & A) & k) will be unknown or indeterminate, since we do not know all the alternatives to h nor what they predict about e. Yet, typically we would take ~E to disconfirm h & A in this case because P(~E|h & A & k) << 1 and ~P(~E|~(h & A) & k) << 1.

To paraphrase, Collins says that in science we often disconfirm some hypothesis when it is highly unlikely that we don’t see e given that hypothesis and it turns out that we don’t see e. I think Collins is reasoning that if P(~E|h & A & k) << 1, then ~(h & A) is confirmed even when it leads to the observation being indeterminate. Presumably, ~(h & A) is an infinite disjunction of mutually exclusive hypotheses, or what’s called a catch-all hypothesis. Even if this is right (which I doubt), there is a disanalogy because h and ~h are dichotomies, while, oddly, N and T are not. That’s because, as Collins runs the argument, T is a good God; so T doesn’t include an evil God, or a non-omnipotent God, among other things. If anything it seems Collins should say ~N is confirmed; and that doesn’t necessarily mean that T is confirmed, since T is a small subset of ~N.

I don’t think Collins’s reasoning here is consistent with the likelihood principle (keep in mind he runs the core FTA based on the likelihood principle), as the principle seems to rule out comparisons involving indeterminate probabilities. Collins’s reasoning would make more sense if indeterminate meant .50, but it doesn’t. Suppose there are marbles in a vase where each marble has a number from 1 to 10 written on it. What is the probability that we pick a marble with a number greater than 1 if it is indeterminate how the numbers are assigned to each marble? The mistake would be to think indeterminate means that we can apply a principle of indifference so that each number has a 1 in 10 chance. If that were so, we’d have a high probability that the marble picked would be greater than 1. But that’s not what indeterminate means; indeterminate means that we’re not in a position to know whether the numbers were randomly assigned or otherwise. We’re simply “in the dark”, to borrow the skeptical theist’s phrase. So I can’t see how you can compare a “positive, known probability” with something indeterminate, just as you can’t say whether the marble picked will probably be greater than 1. If you think that it will be greater than 1, then it’s not indeterminate after all.
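The marble example can be made concrete. Both of the following assignments are consistent with an indeterminate assignment process (the specific vases are my own illustration, not part of Collins’s setup), yet they give opposite verdicts:

```python
# Two ways the numbers might have been assigned to the ten marbles; with an
# indeterminate assignment process we cannot rule either one out.
uniform = list(range(1, 11))   # indifference: exactly one marble per number
all_ones = [1] * 10            # another possibility: every marble labeled 1

def prob_greater_than_one(vase):
    """Fraction of marbles in the vase numbered greater than 1."""
    return sum(1 for m in vase if m > 1) / len(vase)

print(prob_greater_than_one(uniform))   # 0.9 — high, but only under indifference
print(prob_greater_than_one(all_ones))  # 0.0 — equally consistent with "indeterminate"
```

Since both vases fit the description, no single probability of drawing a marble greater than 1 can be read off; that is what “in the dark” amounts to.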

I’m not sure science should be rationally reconstructed as Collins does in footnote 40. It seems Collins is saying that we can compare and confirm hypotheses whose observation probabilities are indeterminate. I don’t think we test hypotheses against their catch-alls, as the footnote implies. If Elliott Sober is right, we test hypotheses against non-catch-all hypotheses, like the general theory of relativity against Newton’s theory; neither theory being tested is a catch-all.

Conclusion
It seems one can’t be a skeptical theist and support the FTA; you do need to say something about what God would do with some probability.


## Hume’s Lapse?

Hume famously divided knowledge into “relations of ideas” and “matters of fact.” In more modern terminology, he divided knowledge into the analytic a priori and synthetic a posteriori. This idea is captured by Hume’s fork in An Enquiry Concerning Human Understanding:

If we take in our hand any volume; of divinity or school metaphysics, for instance; let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames: for it can contain nothing but sophistry and illusion.

This fork was later adopted by the logical positivists, and it was noticed that the fork is self-refuting: it is itself neither a relation of ideas nor a matter of fact; so, by its own criterion, it is nothing but sophistry and illusion, or meaningless.

I think I spotted another lapse in Hume. In A Treatise of Human Nature, Hume says:

To form a clear idea of anything, is an undeniable argument for its possibility, and is alone a refutation of any pretended demonstration against it.

I’m not sure, but this may be the earliest form of the popular principle that conceivability is a guide to metaphysical possibility. (Some think there are counterexamples to this, so David Chalmers makes a distinction between primary and secondary conceivability to mitigate that.)

The lapse, as I see it, is that this conceivability-to-possibility principle is neither a relation of ideas nor a matter of fact, so it must be meaningless by his own lights. For the sake of argument, let’s say that it is a relation of ideas. The problem doesn’t go away, because analytic a priori statements are just relations between ideas and say nothing informative about the world. Saying that something is metaphysically possible is an informative statement about the world. The better option is to say that it is a matter of fact; we’d at least be saying something substantive then. But how could it be? After all, conceiving is done from the armchair. We don’t see metaphysical possibilities any more than we see causation.

The strange thing is that in Dialogues concerning Natural Religion (9.6), Hume says:

The words, therefore, necessary existence, have no meaning …

If necessary existence has no meaning, then how can contingent existence have meaning? I suspect that Bertrand Russell may have already noticed the point that I’m making, for in his famous debate with Frederick Copleston he says:

I don’t admit the idea of a Necessary Being and I don’t admit that there is any particular meaning in calling other beings “contingent.” These phrases don’t for me have a significance except within a logic that I reject.

I think the consistent thing for a full-blown Humean to say is that contingent existence doesn’t have any meaning either.

## William Lane Craig on Does the Vastness of the Universe Support Naturalism?

If a small universe is evidence for theism, is a vast universe evidence for atheism? I want to consider Craig’s reply to this, but before I do that I should introduce some basic concepts about the symmetry of evidence; that is, the possibility of evidence for a hypothesis entails the possibility of evidence against it.

On likelihoodism, observation O is evidence for hypothesis H over ¬H iff P(O|H) > P(O|¬H). Since P(O|H) + P(¬O|H) = 1 and P(O|¬H) + P(¬O|¬H) = 1, we can substitute into the inequality to get an interesting result:

• 1 – P(¬O|H) > 1 – P(¬O|¬H)
• P(¬O|¬H) > P(¬O|H)

So, in English, O is evidence for H over ¬H iff ¬O is evidence for ¬H over H. This means that you can have evidence for a hypothesis iff you can have evidence against it.
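The biconditional above can be spot-checked numerically; here is a minimal sketch (the helper name `supports` is mine, and exact fractions are used so the arithmetic has no rounding):

```python
from fractions import Fraction
import random

def supports(p_o_given_h, p_o_given_not_h):
    """True iff observing O favors H over ¬H under the likelihood principle."""
    return p_o_given_h > p_o_given_not_h

# Sample many pairs of likelihoods and confirm the symmetry in every case.
random.seed(0)
for _ in range(10_000):
    p_o_h = Fraction(random.randint(0, 100), 100)   # P(O|H)
    p_o_nh = Fraction(random.randint(0, 100), 100)  # P(O|¬H)
    # P(¬O|H) = 1 − P(O|H), and likewise for ¬H.
    assert supports(p_o_h, p_o_nh) == supports(1 - p_o_nh, 1 - p_o_h)
print("symmetry holds in all sampled cases")
```

The assertion never fires: O favoring H over ¬H and ¬O favoring ¬H over H are one and the same inequality.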

Craig applies this principle to an example:

David Manley was making the point that on the cozy, pre-Copernican cosmology—what C. S. Lewis called “the discarded image” of the cosmos—theism seemed vastly more probable than atheism. Like a Fabergé egg, the little universe centered on the Earth, with the spheres of the planets and fixed stars revolving about it, cried out for an explanation in terms of a Cosmic Designer. But if you agree that theism is more likely than atheism on such a view, then, Manley argued, you must also agree that a vast cosmos, such as we observe, counts against God’s existence.

Here we see Manley arguing that if a certain observation (small universe) supports theism, then ¬observation (vast universe) supports atheism. Craig agrees to this, but he softens the blow by saying:

… the degree to which the vastness of the universe increases the probability of atheism is marginal! It scarcely changes the odds at all. So while the smallness of the universe would greatly increase the probability of theism, the vastness of the universe only negligibly increases the probability of atheism.

Let’s see how he derives this conclusion. He starts with the odds form of Bayes’ theorem, which says that the ratio of posteriors = ratio of priors x ratio of likelihoods.

$\frac{P(\text{Theism} \mid \text{Small Universe})}{P(\text{Atheism} \mid \text{Small Universe})} = \frac{P(\text{Theism})}{P(\text{Atheism})} \times \frac{P(\text{Small Universe} \mid \text{Theism})}{P(\text{Small Universe} \mid \text{Atheism})}$

Here Craig picks his numbers:

Suppose we say that P(Small Universe|Theism) = .01 and P(Small Universe | Atheism) = .0001. That reflects our conviction that given a small, pre-Copernican universe, God’s existence is much more probable than atheism. This assumes that the prior or intrinsic probability of theism or atheism is exactly the same; otherwise Manley’s argument collapses. So we’ll just assume for the sake of argument that P(Theism) = 0.5.

Craig is pointing out that if the prior probability of atheism is something small like .01 (making theism’s prior .99), then evidence from a vast universe will get drowned out anyway for our posterior probability of atheism. For the sake of argument, we’ll assume the priors for atheism and theism are both 0.5.

Plugging in Craig’s numbers it turns out the posterior ratios are

$\frac{P(\text{Theism} \mid \text{Small Universe})}{P(\text{Atheism} \mid \text{Small Universe})} = \frac{.5}{.5} \times \frac{.01}{.0001} = 100$

$\frac{P(\text{Theism} \mid \text{Vast Universe})}{P(\text{Atheism} \mid \text{Vast Universe})} = \frac{.5}{.5} \times \frac{.99}{.9999} \approx .990099$

So the end result is that given a small universe, theism is 100 times more probable than atheism; and given a vast universe, atheism is only slightly more probable. Craig is a happy man.
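The two calculations above are easy to reproduce; here is a minimal sketch (the `posterior_odds` helper is my own naming, not Craig’s):

```python
def posterior_odds(prior_t, prior_a, lik_t, lik_a):
    """Odds form of Bayes: posterior odds = prior odds × likelihood ratio."""
    return (prior_t / prior_a) * (lik_t / lik_a)

# Craig's numbers: P(Small|Theism) = .01, P(Small|Atheism) = .0001, equal priors.
small = posterior_odds(0.5, 0.5, 0.01, 0.0001)
# A vast universe is just ¬(small universe), so its likelihoods are .99 and .9999.
vast = posterior_odds(0.5, 0.5, 0.99, 0.9999)

print(small)  # ≈ 100: theism a hundred times more probable given a small universe
print(vast)   # ≈ 0.990099: atheism only marginally favored given a vast universe
```

Because Craig makes a small universe unlikely on both hypotheses, the complementary vast-universe likelihoods are both near 1, so their ratio barely moves the odds.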

The reason this happens is the suspect initial numbers Craig plugs in. Why does Craig estimate P(Small Universe|Theism) = .01 and P(Small Universe|Atheism) = .0001? Those numbers look really low given the initial intuition about pre-Copernican cosmology, the Fabergé egg. That pre-Copernican intuition should be reflected by P(Small Universe|Theism) > P(Vast Universe|Theism), which means there’s a greater than 50% chance of a small universe given Theism.

Craig hints at why he chose such low numbers:

Now I’ve read enough of the philosophical and scientific literature on fine-tuning to know that the vastness of the cosmos is not really surprising on theism. For example, John Barrow and Frank Tipler in their important book The Anthropic Cosmological Principle (Oxford University Press, 1985) emphasize that the size and age of the universe are just what we should expect to observe. For the carbon that makes up our bodies was synthesized in the interior of stars and then distributed throughout the universe via supernovae. It takes aeons for galaxies of stars to form and even more time for the carbon requisite for life to be spread abroad to become the foundation of biological life. No other element could substitute for carbon in this role. So the universe must be as old as it is for life to exist and, hence, as big as it is, since the universe is in a state of cosmic expansion since its inception in the Big Bang 13.7 billion years ago. So the size (my italics) and age of the universe are just what one ought to expect given the fine-tuning of the initial conditions of the universe (my italics), which, many have argued, is best explained through design.

It seems he’s using the fine-tuned constants, the initial conditions, and the laws (though unstated) as background knowledge. Given that, that’s why a vast universe has such high probability on theism, according to Craig. But if we’re using that as background knowledge, we need to use it for Atheism too, in which case the background knowledge does all the work and P(Vast Universe|Theism+k) = P(Vast Universe|Atheism+k) ≈ 1. The only reason the two probabilities would differ is if God performed miracles to affect the size of the universe, overruling what would naturally happen. If the background knowledge makes the two probabilities equal, then we can’t compare the hypotheses, and we need to choose different background knowledge.

The intuition Manley was getting at was that, in the pre-Copernican era, “like a Fabergé egg, the little universe centered on the Earth, with the spheres of the planets and fixed stars revolving about it, cried out for an explanation in terms of a Cosmic Designer.” Let’s find more reasonable numbers for our imagined pre-Copernican: P(Small Universe|Theism) = .8 and P(Small Universe|Atheism) = .1. Our pre-Copernican expects a small universe given theism and a vast universe given atheism.

$\frac{P(\text{Theism} \mid \text{Small Universe})}{P(\text{Atheism} \mid \text{Small Universe})} = \frac{.5}{.5} \times \frac{.8}{.1} = 8$

$\frac{P(\text{Theism} \mid \text{Vast Universe})}{P(\text{Atheism} \mid \text{Vast Universe})} = \frac{.5}{.5} \times \frac{.2}{.9} = \frac{2}{9}$

A small universe would be significant evidence for Theism and a vast universe would be significant evidence for Atheism.
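Converting these posterior odds into probabilities makes the contrast vivid; a quick sketch (the `odds_to_prob` helper is my own naming):

```python
def odds_to_prob(odds):
    """Convert odds for Theism (vs Atheism) into P(Theism)."""
    return odds / (1 + odds)

# The pre-Copernican's numbers: P(Small|T) = .8, P(Small|A) = .1, equal priors.
odds_small = (0.5 / 0.5) * (0.8 / 0.1)  # = 8
odds_vast = (0.5 / 0.5) * (0.2 / 0.9)   # = 2/9

print(odds_to_prob(odds_small))  # ≈ 0.889: theism strongly favored by a small universe
print(odds_to_prob(odds_vast))   # ≈ 0.182: theism strongly disfavored by a vast one
```

With likelihoods that actually encode the Fabergé-egg intuition, the vast universe is no longer negligible evidence.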

When evaluating the evidence, there are two questions: (1) the qualitative question of which hypothesis the evidence points to, and (2) the quantitative question of how strong the evidence is. The numbers I plugged in were subjective probabilities and will differ depending on your own theory of God and of the atheistic universe. How strong the evidence is will depend on those subjective theories. I think the numbers Craig plugged in completely misconstrued the dialectic as Manley presented it, since on Craig’s numbers the pre-Copernican theist expects a vast universe! No wonder a vast universe is only negligible evidence for atheism.

The main point of this post is not so much finding the exact numbers to plug in, but that the possibility of evidence for entails the possibility of evidence against, a point with which Craig now seems to agree.

For some practice, let’s see how this works for other examples.

• If successful prayer experiments are evidence for God, then unsuccessful prayer experiments are evidence against God. If unsuccessful prayer experiments aren’t evidence against God, then successful prayer experiments aren’t evidence for God. (I’ll leave out the contrapositive hereafter.)
• If divine hiddenness isn’t evidence against God, then divine appearance isn’t evidence for God. (Surely an odd result. There must be something wrong with the antecedent.)
• If suffering is evidence against God, then non-suffering (happiness) is evidence for God. (Notice that atheists who find the problem of evil persuasive have to admit that happiness is evidence for God.)
• If fine-tuned constants are evidence for God, then coarse-tuned (wide-ranging) constants are evidence against God.
• If finding intermediate fossils is evidence for common ancestry, then not finding intermediate fossils is evidence against common ancestry.

The above examples only speak to the qualitative (or binary) nature of evidence. The quantitative aspect will depend on your priors and particular theory. If you’re wondering about the quantitative aspect for any of these examples, plug your own numbers into the odds form of Bayes’ theorem as I did in the vast universe example. Elliott Sober explains in his paper (p. 16) that the reason not finding an intermediate fossil is negligible evidence compared to the support gained from finding one is the low probability of finding a fossil at all. (He goes through the math in the paper.) Evolutionists need not worry.
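Sober’s fossil point can be illustrated with a toy calculation (the numbers f and q below are made up for illustration; they are not Sober’s):

```python
# Toy setup: under common ancestry (CA) an intermediate form existed, and f is
# the tiny chance that it fossilized and was found. Under ¬CA an intermediate
# form existed anyway with only probability q.
f, q = 1e-6, 0.01

p_find_ca = f          # P(find | CA)
p_find_not_ca = f * q  # P(find | ¬CA)

# Finding a fossil: a large likelihood ratio, strong evidence for CA.
print(p_find_ca / p_find_not_ca)              # ≈ 100
# Not finding one: a ratio barely below 1, negligible evidence against CA.
print((1 - p_find_ca) / (1 - p_find_not_ca))  # ≈ 0.999999
```

Because f is so small, the non-finding likelihoods on both hypotheses sit just under 1, so their ratio carries almost no evidential weight, exactly as the asymmetry in the prose suggests.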


## William Lane Craig, presentism, Zeno’s paradox, and the Kalam

I have been puzzled by how Craig’s Aristotelian solution to Zeno’s paradoxes is consistent with presentism. I finally got around to reading the part of his book that explains this. If I am understanding his view correctly, it seems that his Aristotelian solution comes at the cost of (1) giving up on presentism, and (2) giving up on the supporting premise for the second premise of the Kalam cosmological argument. The Kalam, Craig says, depends on the A-theory of time where temporal becoming is real; presentism—the view that only the present exists—is the most popular version of A-theory.

What is the Aristotelian solution to Zeno’s paradox?

… before Achilles could cross the stadium, he would have to cross half-way; but before he could cross half-way, he would have to cross a quarter of the way; but before he could cross a quarter of the way, he would have to cross an eighth of the way, and so on to infinity. Therefore, Achilles could not arrive at any point. Zeno’s paradox is resolved by noting that the intervals traversed by Achilles are potential and unequal. Zeno gratuitously assumes that any finite interval is composed of an infinite number of points, whereas Zeno’s opponents, like Aristotle, take the interval as a whole to be conceptually prior to any divisions which we might make in it. Moreover, Zeno’s intervals, being unequal, add up to a merely finite distance. By contrast, in the case of an infinite past the intervals are actual and equal and add up to an infinite distance.[1]

Craig makes two points here. First, against those who argue for actual infinities based on Zeno’s paradox, he notes that there is a disanalogy between the infinities in Achilles’ case and the case of an infinite temporal past. Second, and more relevant, the Aristotelian solution is to take the whole to be conceptually prior to the parts. Think of a length of 1 meter. That meter is not composed of points; rather, the whole meter exists prior to the points, and any points within the meter are merely a result of our conceptual division, our thinking of it. Furthermore, conceptual divisions don’t entail an actual infinity, because we can only go through the process of conceptually specifying a potential infinity.
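The first point, that Zeno’s unequal intervals sum to a finite distance while equal past intervals do not, is easy to see numerically (a sketch with 50 terms standing in for “to infinity”):

```python
# Zeno's intervals halve each time, so their partial sums stay bounded by 1;
# equal past intervals (say, unit years) grow without bound as they accumulate.
zeno = sum(0.5 ** k for k in range(1, 51))  # 1/2 + 1/4 + ... approaches 1
equal = sum(1 for _ in range(50))           # fifty equal unit intervals

print(round(zeno, 12))  # 1.0 — a merely finite distance, however many terms
print(equal)            # 50 — grows linearly, unbounded in the limit
```

Adding more terms pushes the geometric sum only ever closer to 1, while the equal-interval sum increases by 1 per term, which is the disanalogy Craig is pressing.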

What puzzled me was: How could this Aristotelian view be applied to presentism, given that, on presentism, only the present exists and the future and past do not? The question for Craig is: In the past 13.7 billion years, do we traverse an actual infinity of points? For the Aristotelian solution to work—for the whole to be prior to the parts—the whole has to exist. The Aristotelian solution couldn’t work because the only thing we could be conceptually dividing is the present instead of the whole 13.7 billion years. So, at this point, the Aristotelian solution does not seem to solve Zeno’s paradox as applied to past time.

To see how Craig would reply, we need to go into his book A Tensed Theory of Time: A Critical Examination. A natural question arises for presentism: What is the extent of the present? Craig goes over three options: (1) instantaneous, (2) atomic, and (3) non-metric. I’ll go through each in turn.

Instantaneous present
An instantaneous present has been criticized “because a concrete object can no more exist with zero duration than with zero breadth and length” (Williams 1951). The idea is that it is puzzling how something with zero duration can exist. (Those who reject this criticism don’t find zero duration problematic, since they think that space is composed of an uncountably infinite number of extensionless points. The cosmological singularity, if real, would be an example of an extensionless point.)

Craig criticizes the instantaneous present view because of Zeno’s paradoxes of motion and plurality. Consider Zeno’s paradox of motion, given the continuity of time:

… let us suppose … that the basic parts of time are instants. In order for the present instant to elapse it must be succeeded by another. But there is no immediate successor to the present instant. Before the succeeding instant can become present, an infinite number of succeeding instants will have to become present first. … For no instant can immediately succeed the present. The present would therefore exist as the nunc stans of the classical doctrine of eternity, not the nunc movens of A-theoretic time. (Craig 2000, p.235-6)

If we conceive temporal becoming to proceed instant by instant, the length of time between some past event or moment and the present could never increase, since the lapse of durationless instants adds nothing to the interval between the past instant and the present instant. But then there is no “flow” of time at all, and we are left again with the nunc stans of the present instant, never able to recede into the past. The doctrine of the instantaneous present is thus incompatible with objective temporal becoming. (Craig 2000, p.236)

In sum, Craig sees two problems for instantaneous presents: (1) there can be no ‘motion’ of the present, and (2) there can be no duration, since summing a series of zeros still yields zero.
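The arithmetic behind worry (2) can be put in standard measure-theoretic terms (my gloss, not Craig’s own formulation). For any countable collection of durationless instants, countable additivity gives:

```latex
\mu\Bigl(\bigcup_{i=1}^{\infty}\{t_i\}\Bigr)
  = \sum_{i=1}^{\infty}\mu(\{t_i\})
  = \sum_{i=1}^{\infty} 0
  = 0,
\qquad \text{whereas} \qquad
\mu([a,b]) = b - a > 0.
```

On the continuum picture, an interval’s positive length is not built up by summing the durations of its instants at all: the points of $[a,b]$ are uncountable, and uncountable sums of zeros are simply undefined.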

Consider how instantaneous presents affect the Kalam. Craig offers two philosophical arguments that the universe must have a beginning. The first argues against the metaphysical possibility of an actual infinite, using thought experiments like Hilbert’s Hotel. The second grants the possibility of an actual infinite but denies that an actual infinite can be formed by successive addition. (This second argument is dialectically stronger.) It is this second argument that seems to lead to problems for instantaneous presents. If Craig is right about Zeno’s paradoxes, then not only can an infinite not be formed by successive addition; a finite interval cannot be formed either.

Atomic present
Alternatively, one might think of the present as having some minimal, indivisible duration: a temporal atom, or chronon. “One disturbing feature of such a model of temporal becoming, however, is that temporal becoming seems to be ‘jerky’ rather than smooth” (Craig 2000, p.242). (Zeno’s stadium paradox is relevant here, but I’ll skip the details.)

The problem of jerkiness is enough for Craig to favor the next view.

Non-metrical present
We can maintain that the extent of the present depends upon the extent of the entity described as present. To quote again Andros Loizou: “… no event or state of affairs is ever present simpliciter—it is present by implicit or explicit reference to a kind of events or states of affairs, as when we speak of the present eclipse, or by reference to a time scale, as when we speak of the present hour or day, and so on.” … There is no such thing as “the present” simpliciter: it is always “the present ____,” where the blank is usually filled by a reference to some thing or event. The duration of the present will be as long or as short as the event or thing under discussion (italics mine). (Craig 2000, p.245)

Craig is saying that it makes no sense to speak of the present moment (the now) unless context is added. But surely our deciding to speak of the present hour in one context and the present day in another does not affect the metaphysics of time: we don’t control what exists by deciding what to think. I think Craig would agree, but I don’t see how his non-metric view avoids entailing this odd result.

Craig brings up a worry about his non-metric view going back to Augustine.

The nerve of Augustine’s argument against some interval of positive duration’s being present is the assumption that in order to be present, the interval in question must be incapable of analysis into past, present, and future phases. But if what we have said so far is correct, this assumption is not incumbent upon the A-theorist. He may instead hold that an interval is present if any phase of it is present. (Craig 2000, p.247)

I share Augustine’s worry. The non-metric view entails that any interval can be present. Presentism is the view that only the present exists, but if the future and past can fall within the present interval, then Craig’s view seems to collapse into a quasi-B-theory. On the B-theory there is no privileged now; all moments of time have equal ontological status, all being equally real. On the non-metric view, the future and past can apparently be said to exist in virtue of being contextually included in the present interval. Again, this raises the same problem as before: we can think things into existence.

What about the lower and upper limits on the “present ____?” As for lower limits:

… there need be no such minimum length or temporal duration because both space and time are potentially infinitely divisible. The duration stipulated to be present will be an arbitrary, finite duration centered on a conceptually specified instant (Craig 2000, p.246-7).

Here Craig applies the Aristotelian view of the continuum to his non-metric view. Since, on the Aristotelian view, the whole is prior to the parts, and any parts are merely a result of our conceptualization, there is no lower limit, because we can always conceive of something smaller. Craig does not say whether there can be an upper limit on the duration of the present ____, and I don’t see how he could have a non-arbitrary one. If this is right, nothing stops us from speaking of the present 13.7 billion years, or even the present actual infinity! So it seems one can adopt Craig’s own non-metric view and still hold to actual infinities.

Stephen Puryear on Craig
Finally, Stephen Puryear (2014) has pointed out that Craig’s Aristotelian view can equally be used by the infinitist who believes in an actually infinite past, thus undermining the Kalam. The infinitist can say that the whole infinite past is prior to its parts, and since we can only conceptually divide it into a potential infinity of parts, there isn’t an actual infinity of events to traverse (even if the duration is actually infinite).

In addition, there seems to be a tension between the Aristotelian view and Craig’s argument for the impossibility of the formation of an actual infinite by successive addition—the stronger argument I was referring to earlier.

2.21 A collection formed by successive addition cannot be actually infinite.
2.22 The temporal series of past events is a collection formed by successive addition.
2.23 Therefore, the temporal series of past events cannot be actually infinite.

This argument does not seem to go through on the Aristotelian view. Given that the whole is prior to the parts, the duration of past events is not built up by successive addition but carved out by conceptual division; successive addition presupposes that the parts are prior to the whole, contrary to the Aristotelian view.

Conclusion
This is not to say that there isn’t some mystery as to how an actual infinite duration can be traversed; rather, it’s that the non-metric view does not seem plausible, and that the Aristotelian view does not seem compatible with Craig’s view that the temporal series of past events is a collection formed by successive addition.

The other two options for presentism, the instantaneous and atomic views, seem more plausible to me, but I’ll leave that for another time.

References