Joseph Nebus has recently written a couple of posts (here and here) in which he discusses an interesting attempt by Józef Maria Hoëne-Wronski to create a purely numerical definition of the mathematical constant π which is independent of the classical, geometric definition of “the ratio of the circumference of a circle to its diameter.” This has been a goal of many mathematicians, since π seems more fundamental to mathematics than a definition based on circles would suggest– as evidenced by the fact that it shows up in areas of mathematics which are seemingly unrelated to circles. Wronski’s idea, to this end, was the following formula:
π = (4∞/√(−1)) · {(1 + √(−1))^(1/∞) − (1 − √(−1))^(1/∞)}
At first glance, the formula seems inherently nonsensical. After all, ∞ is not a number, and therefore cannot be utilized in numerical operations in this way. However, one can get a sense of what Wronski may have *intended* by this equation. It appears that Wronski wanted to utilize ∞ to represent an infinite number, and modern mathematics actually gives us several tools for handling this sort of idea. One which might be of particular use, here, is Non-Standard Analysis with its infinite and infinitesimal Hyperreal numbers. In NSA, we have the ability to perform calculations with and upon infinite numbers perfectly consistently and reasonably.

First things first, let’s translate Wronski’s equation into a more modern form. Borrowing from the work Joseph Nebus already did in his second post on the subject, we can replace all the √(−1)‘s in the equation with i‘s, instead, in order to get:

π = (4∞/i) · {(1 + i)^(1/∞) − (1 − i)^(1/∞)}
Now, we can use our tools from NSA to find suitable substitutes for ∞ in the above equation. One immediate problem which a mathematician might notice is that replacing the three ∞ symbols with positive, infinite Hyperreal numbers N₁, N₂, and N₃ will lead to different solutions for the equation when one uses different values for N₁, N₂, and N₃.

However, Wronski died well before Georg Cantor‘s brilliant work showing that there are different sizes of infinite sets was even published, let alone accepted by mainstream mathematicians. As such, it is very reasonable to assume that Wronski believed his ∞ symbol was referring to a single, specific quantity, rather than a range of possible infinite quantities. So, let’s replace all ∞ symbols with a single positive, infinite Hyperreal number, N. This gives us:

π = (4N/i) · {(1 + i)^(1/N) − (1 − i)^(1/N)}
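As a quick sanity check on this equation, we can evaluate 4N/i · ((1 + i)^(1/N) − (1 − i)^(1/N)) with ordinary complex floating-point arithmetic, substituting large finite values for the infinite Hyperreal N and taking principal complex roots. Both substitutions are my own assumptions– this is a finite sketch, not NSA– but it shows where the expression actually heads:

```python
import cmath

# Evaluate 4*N/i * ((1 + i)**(1/N) - (1 - i)**(1/N)) for growing finite N.
# With principal roots the result comes out (numerically) real, but it
# heads toward 2*pi rather than pi.
for N in (10, 1_000, 100_000):
    val = 4 * N / 1j * ((1 + 1j) ** (1 / N) - (1 - 1j) ** (1 / N))
    print(N, val.real, abs(val.imag))
```

Whatever one ultimately makes of the Hyperreal version, these finite evaluations converge on roughly 6.283 ≈ 2π– not on π itself.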
Starting with the expression within the braces, we can explore to find something which may be a bit easier to work with. This takes a little bit of work, but we can show that:

Let’s zoom in on our equation a little bit more, now. The expression is a Complex Hyperreal number which is infinitely close to the Real number, 1. As such, its reciprocal is also infinitely close to 1. Given this information, we know that the expression must simplify into some infinitesimal Complex Hyperreal number. Let’s call this number a + bi, for Hyperreals a and b.

Similarly, we know that is a non-Complex Hyperreal number which is infinitely close to 1. Let’s call this number 1 + ε, where ε is some non-zero infinitesimal. Multiplying this by our earlier result yields (1 + ε)(a + bi). We can then take this expression and substitute it for the entire braced expression from our full equation:

This, in turn, can now be simplified to:

We’re still far from anything which clearly resembles the π which we all know and love, but now we are getting to a place where we can really start to see some of the implications of Wronski’s definition. Notably, either , or else NSA seems to show that Wronski’s π is not a Real number. As such, it seems like Wronski’s definition is a failure if — presumably, Wronski was not attempting to redefine π out of the set of Real numbers!

However, it seems quite dubious that it would be the case that . Looking back for a moment, we defined our as the Real part of the expression . Let’s break this down a bit further, now. The term is a Complex Hyperreal number which is infinitely close to the Real number, ; let’s call it , for infinitesimal Hyperreals and . I’ll spare my readers a few more convoluted formulae (feel free to work this out yourself!), but if and only if . However, it seems fairly clear that

One of the properties of e^(iθ) for all Real θ is that it has a magnitude equal to 1. This means that for any Complex number x + yi such that x and y are Real and that x + yi = e^(iθ), it will be true that x² + y² = 1. The Transfer Principle of the Hyperreals allows us to extend this statement over the Hyperreal numbers, as well. Since the Complex Hyperreal which we were concerned with is , we therefore know that . Since is non-Complex, we know that its square must be positive or zero. Similarly, must be positive and greater-than-or-equal-to 1. As such, the only way for to be true is in the case that .

For this to be the case, then . In order for this to be true, it must be the case that . However, this contradicts our initial definition for N as a positive, infinite Hyperreal number.

Unfortunately, it seems that Wronski’s attempt to create a non-geometric definition for π simply does not work. That said, I’m still very curious about his thought process, here. What led him to this particular formulation, in the first place? Is it, perhaps, possible to salvage his work? Could there be some actual truth hidden underneath the apparent incoherence? It will certainly be fun to unravel this puzzle even further.

One of the common claims which is utilized in arguments for the existence of God is that actual infinities cannot exist, implying that there cannot be an infinite regress of causal events in the history of the universe. If there cannot be such an infinite regress, then there must be some First Cause. Theologians then put forth other arguments attempting to show that this First Cause must be God. Blake Giunta, a Christian apologist, has constructed a very interesting and quite useful website cataloging common lines of argumentation from both sides of the debate (color-coded Green for Christian arguments and Red for opposing arguments), along with citations and documentation for those claims, called BeliefMap.org. It does not take very long for a fairly cursory perusal of Belief Map to bring one to this exact claim regarding the actually infinite.

While I disagree with Mr. Giunta on many of his views, I have a great deal of respect for him and I think that his work with Belief Map is absolutely fantastic. He truly does attempt to give an irenic and charitable view to the positions of his opposition, and he does sincerely want to discuss the actual arguments being made, instead of being content to knock down Straw Men. To that end, I would like to help Mr. Giunta add to his encyclopedia of apologetics by addressing the manner in which one might answer the claims about actual infinities.

Under the heading, “**Logically, prior events can’t number to infinity**,” Belief Map separates the discussion into two further green claims. The first of these is, “**Infinity can’t exist in the real world**,” which is further subdivided into three green categories and four red. Two of the reds are theological in basis, and not of much concern to me, but the other two are mathematical and interesting. Each of these red categories contains a minor rejoinder, so I’ll be addressing them as best I can, as well. The second claim under the “**Logically, prior events can’t number to infinity**” heading is that “**Infinity can’t be formed by adding**.” After discussing all of the “can’t exist” categories, I will then consider this one.

The problem with this argument is that it is not even cogent. Infinity is not a number. There are, classically, two ways in which “infinity” is discussed as a concept in the philosophy of mathematics (Katz 45-50). The first is the “potential infinity,” which is the idea that an iterative process can be repeated without any apparent bound. In this case, “infinity” is a description of the manner in which a process is carried out, and certainly not a number. The second way in which the concept is discussed is the case of “actual infinity,” which is the idea that a completed set can contain a number of elements which is greater than any Natural number. In this case, “infinity” is not a number, but rather a quality of numbers– a number can be either “finite” or “infinite.” And just as there are a multitude of finite numbers, some of which are greater than others, there are similarly a multitude of infinite numbers, some of which are greater than others (Katz 795; Conway; Robinson).

Numerical operations can only be performed upon numbers. For this reason, the expression “Infinity – Infinity” is entirely incoherent. It is mathematically no different than saying “Red – Red” or “Delicious – Delicious” or “Blake Giunta – Blake Giunta.” These are not mathematical statements, and as such, we cannot draw mathematical conclusions from them.

If the argument is amended to discuss the subtraction of infinite numbers instead of the subtraction of infinity, it loses all weight. There exist systems of mathematics in which infinite numbers can be subtracted from infinite numbers perfectly consistently– for example, on Surreal numbers (Conway) or Hyperreal numbers (Robinson). These do not lead to the purported contradictions espoused by apologists.
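To illustrate the idea behind such systems, here is a toy sketch, loosely inspired by the sequence construction of the Hyperreals. A genuine construction requires a non-principal ultrafilter, and every name below is my own invention rather than Robinson’s notation– but it shows how subtracting specific infinite numbers gives specific, definite answers:

```python
# Model a number as a sequence a(1), a(2), a(3), ...; an "infinite number"
# is one whose terms grow without bound. Subtraction is termwise.
def seq_sub(a, b, n_terms=5):
    return [a(n) - b(n) for n in range(1, n_terms + 1)]

omega = lambda n: n                      # an infinite number: 1, 2, 3, ...
print(seq_sub(lambda n: n + 5, omega))   # (omega + 5) - omega is just 5
print(seq_sub(lambda n: 2 * n, omega))   # 2*omega - omega is omega again
```

Once we say *which* infinite numbers we mean, “infinite minus infinite” can be finite, or infinite, or anything in between– definite in every case, contradictory in none.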

Belief Map offers three cases as examples of metaphysically impossible scenarios: an infinite tug of war, an infinite hotel, and an infinite popsicle.

The infinite tug of war is actually just a restatement of the question of subtracting one infinite number from another, which we’ve already discussed. It makes precisely the same mistake as before, treating “infinity” as a number and not recognizing that there are numerous infinite numbers, not all of which are equal. As such, it is therefore easily resolved by proper mathematics.

The infinite hotel illustrates a counter-intuitive property of actual infinities, but it does not illustrate a metaphysical impossibility or a contradiction. The only way one might legitimately claim that this is an absurdity would be to already reject the possibility of actual infinities. However, since this is being utilized as an argument in support of just such a rejection, to do so would simply be fallaciously circular question begging.
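The hotel’s trick is a single uniform rule, which we can spot-check in a few lines. This sketch is my own illustration of the usual “shift every guest up one room” bijection, checking only the first few of the infinitely many rooms:

```python
# Move the guest in room n to room n + 1: a one-to-one map from the
# occupied rooms {1, 2, 3, ...} into {2, 3, 4, ...} that frees room 1.
def new_room(n):
    return n + 1

moves = {n: new_room(n) for n in range(1, 11)}   # first ten of infinitely many
assert len(set(moves.values())) == 10            # no two guests collide
assert 1 not in moves.values()                   # room 1 is now free
print(moves)
```

Counter-intuitive, certainly– but at no point does any guest lack a room, and at no point do two guests share one.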

The infinite popsicle does present something of an absurdity. I’ll agree that Benardete’s scenario is metaphysically impossible, but not for the reason which he suggests. Popsicles are composed of atoms. Atoms have a significantly non-infinitesimal volume. As such, one cannot create a popsicle with an infinite number of layers in 4 cubic inches (or any other finite volume) of space. This thought experiment doesn’t work, but not due to any metaphysical absurdity relating to infinity.

This bald assertion is an unfortunate bit of question begging. One of the primary definitions of an actually infinite set is a set which contains a proper subset of equal cardinality (Katz 792-795). Belief Map offers no good reason to accept the claim that proper parts always contain less than wholes. One might as well argue that actual infinities can’t exist because actual infinities can’t exist.
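That definition is easy to exhibit concretely. Here is a small sketch (my own example) of the classic pairing between the Natural numbers and the even Naturals– a proper subset matched one-to-one with its whole:

```python
# Pair each natural number n with the even number 2n. The evens are a
# proper subset of the naturals, yet the pairing leaves nothing in
# either collection unmatched.
pairing = {n: 2 * n for n in range(1, 11)}   # first ten of infinitely many pairs
assert len(set(pairing.values())) == len(pairing)   # one-to-one
assert all(v % 2 == 0 for v in pairing.values())    # every image is even
print(pairing)
```

The “part is less than the whole” intuition is exactly the property that infinite sets, by definition, do not share with finite ones.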

Belief Map cites this as an objection to the claim that actual infinities cannot exist, and it is absolutely correct to do so. For more than 100 years, mathematicians have been developing and utilizing a valid and consistent framework for math which deals perfectly well with actual infinities.

However, Belief Map offers a rejoinder to this: “**But so what?** A concept’s being *logically* possible (free of formal contradictions) doesn’t entail that it is *actually/metaphysically* possible.” Certainly, I agree– though, I must say, I find a little bit of irony in this position being raised here, since I quite often see the exact same sentiment brought up by atheists in regards to God’s possibility.

That said, at best this argument is merely inconclusive. It does not say that actual infinities *aren’t* metaphysically possible, but only that they *might not* be. In 1925, David Hilbert addressed this precise line of argumentation:

Also old objections which we supposed long abandoned still reappear in different forms. For example, the following recently appeared: Although it may be possible to introduce a concept without risk, i.e., without getting contradictions, and even though one can prove that its introduction causes no contradictions to arise, still the introduction of the concept is not thereby justified. Is not this exactly the same objection which was once brought against complex-imaginary numbers when it was said: “True, their use doesn’t lead to contradictions. Nevertheless their introduction is unwarranted, for imaginary magnitudes do not exist”? If, apart from proving consistency, the question of the justification of a measure is to have any meaning, it can consist only in ascertaining whether the measure is accompanied by commensurate success. Such success is in fact essential, for in mathematics as elsewhere success is the supreme court to whose decisions everyone submits. (Hilbert)

The mathematics of the infinite *has* been successful– inordinately successful, in fact. It forms the basis upon which mathematics has been securely founded. Given that the previous arguments on Belief Map aren’t very convincing– or even coherent, in cases– I see no reason to think that actual infinities are not metaphysically possible.

This is another good objection to the impossibility of actual infinities. If we are discussing an interval with cardinality of at least ℵ₀, then there are an infinite number of subintervals contained therein (Katz 792). Consider, for example, the mathematical interval from zero to one. There are an infinite number of intervals within this interval– for example, [0, 1/2] and [0, 1/4] and [0, 1/8] and [0, 1/16], et cetera, et cetera.

Belief Map’s rejoinder to this is to claim that such intervals can only be *potentially* infinitely divided, and are not actually infinitely divided. However, this seems very clearly not to be the case. All of the subintervals of the interval from zero to one are entirely coextensive with that interval. They exist equally as much as the parent interval does and are not simply potentialities waiting around to be actualized.

Now, perhaps Mr. Giunta might respond that he has already granted that such mathematical intervals may reasonably be consistent, but that he is arguing against *physical* intervals, and that these are only potentially infinitely divisible. However, this seems to be yet another bit of question begging, and is only a reasonable assumption if one already denies the metaphysical possibility of actual infinities. After all, if (for the sake of thought experiment) we adopt the assumption that actual infinities are metaphysically possible, then all of the infinite subintervals of a given, physical interval would be coextensive with that interval.

Since adding a finite number to a finite number always results in a finite number, Belief Map argues that an infinite collection cannot be formed by the sequential addition of finite elements. However, this seems to be just another circular attempt to reject actual infinities by rejecting actual infinities. No one is suggesting that adding a finite number to a finite number will yield an infinite number. We are suggesting that adding a finite number to an infinite number yields an infinite number, and similarly that adding an infinite number to a finite number yields an infinite number.

To be fair, Belief Map does note that a possible objection to this claim is that past events may “have *always* been infinite in number.” Of course, if it is the case that past events *are* infinite in number, then it *must* be the case that past events have *always* been infinite in number. This is a necessary consequence of the infinitude of past events. It would therefore seem that the claim that “infinity can’t be formed by adding” is entirely irrelevant to the situation under discussion.

From time immemorial, the infinite has stirred men’s emotions more than any other question. Hardly any other idea has stimulated the mind so fruitfully. Yet, no other concept needs clarification more than it does. (Hilbert)

Belief Map, unfortunately, has some ill-formed views regarding the nature and mathematics of infinity. This is owed, at least in part, to the fact that Mr. Giunta borrows heavily from William Lane Craig’s work in the discussion of this subject. However, as I have discussed before, William Lane Craig has a gross misunderstanding of the concept of infinity (Part 1 and Part 2). Hopefully, the information which I have presented here can help Mr. Giunta to improve his wonderful work and correct some of the misconceptions which Belief Map’s arguments present in regards to the actually infinite.

Conway, J. H. *On numbers and games*. A.K. Peters, 2006.

Hilbert, David. “On the Infinite.” 1925. URL: https://math.dartmouth.edu/~matc/Readers/HowManyAngels/Philosophy/Philosophy.html

Katz, Victor J. *A History of Mathematics: An Introduction*. Pearson, 2018.

Robinson, Abraham. *Non-Standard Analysis*. North-Holland Pub., 1974.

In the video, Dr. Wildberger claims that there are three different ways in which √2 is commonly discussed: the Applied, the Algebraic, and the Analytical. He does a fairly good job of discussing the manner in which the ancient Greeks discovered that there exists no ratio of two whole numbers which can be equal to √2, which is a topic I have covered here, as well. He then explains what he means by each of the above three categories.

Since we have shown that there is no ratio of two whole numbers which can equal √2 exactly, the Applied path seeks to find ratios which simply come close to equaling that number– approximations with an arbitrarily large or small error. We are not searching for an exact solution, on the Applied path, and indeed we are content to agree that there is no exact solution which can be attained, according to Dr. Wildberger. We can, for example, find that 1.414, when squared, gives a solution quite close to 2, but it is not exactly 2.
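The Applied path is easy to demonstrate. The classical side-and-diagonal recurrence below is my own choice of example– any convergent scheme would do– and it churns out rationals whose squares come as close to 2 as we please, without ever reaching it:

```python
from fractions import Fraction

# p/q -> (p + 2q)/(p + q) drives the square of p/q ever closer to 2.
p, q = 1, 1
for _ in range(8):
    p, q = p + 2 * q, p + q
    print(Fraction(p, q), float(Fraction(p, q) ** 2))

# The error shrinks at every step, but no rational ever squares to exactly 2.
assert Fraction(p, q) ** 2 != 2
```

After eight steps we reach 1393/985, whose square already differs from 2 by about one part in a million.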

For the Algebraic path, we can construct an extension to the rational numbers which contains some exact solution to the question of √2– Dr. Wildberger gives the example of an arithmetic using pairs of rational numbers a and b, such that the pair (a, b) behaves as a + b√2. He notes that this can be done in such a way that it conforms to all the usual laws of arithmetic, but objects that the √2 in this scenario “has nothing whatsoever to do with that one-point-four-one-four-et-cetera that we were talking about previously.”
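This sort of extension can be sketched in a few lines. Here a pair of rationals stands for a + b√2; the representation and function names are my own, and Dr. Wildberger’s exact presentation may differ:

```python
from fractions import Fraction

def add(x, y):
    # (a + b*r) + (c + d*r) = (a + c) + (b + d)*r
    return (x[0] + y[0], x[1] + y[1])

def mul(x, y):
    # (a + b*r)*(c + d*r) = (ac + 2bd) + (ad + bc)*r, using r*r = 2
    a, b = x
    c, d = y
    return (a * c + 2 * b * d, a * d + b * c)

root2 = (Fraction(0), Fraction(1))   # the pair standing for sqrt(2)
print(mul(root2, root2))             # the pair standing for the rational 2
```

Within this arithmetic, the pair (0, 1) squares to (2, 0)– an exact “square root of 2,” obtained without any appeal to decimals.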

Finally, Dr. Wildberger presents the Analytic path, which he describes as “the square root of 2 is some infinite decimal which starts out 1.414 and goes on in some fashion.” He unequivocally refers to the Analytic path as “wrong thinking,” and unabashedly goes on to claim that such an object “does not exist, my friends.” It is quite clear that Dr. Wildberger has no love for Analysis. Quite the contrary, he is openly hostile to the idea.

While there are minor statements that I could nitpick in Dr. Wildberger’s treatments of the Applied and Algebraic approaches to the square root of 2, it is his handling of the Analytic approach with which I’ll interact in this article. His discussion of the subject is incredibly hyperbolic, highly oversimplified, and entirely uncharitable. Dr. Wildberger doesn’t even pretend to consider the idea that the Analytic approach may have some reasonable underpinnings which he nevertheless finds to be flawed; rather, he simply dismisses the entire field of analysis as being incorrect and accuses it of being the ruin of mathematics. He treats the subject in this manner despite the fact that, as he will well admit, the overwhelmingly vast majority of all the world’s mathematicians from the past hundred years find the Analytic approach to be perfectly good. In fact, Dr. Wildberger rather boldly claims that these other mathematicians “are all wrong. They are seriously wrong.”

The closest which Dr. Wildberger comes to giving an accurate description of the Analytic approach is when he is discussing the number line. According to Dr. Wildberger,

…this Analytic approach to root 2… pretends that, somewhere on the line (which up ’til now only consists of Rational numbers), somewhere there’s a new place, and it’s somewhere between 1 and 2, and there’s a new number called ‘root 2,’ and it has the property that its square is 2, and we can find out what this thing is by making a calculation.

To say that this is a mischaracterization of Analysis is quite an understatement. In truth, Analysis is based upon an assumption regarding the number line, but it does not simply try to plop an object called √2 somewhere between 1 and 2, as Dr. Wildberger claims. Rather, the assumption regarding the number line upon which Analysis is built is a fairly reasonable one– the idea that the number line is continuous. That is to say, Analysis assumes that there are no gaps or holes in the number line. If the number line only consisted of Rational numbers, as Dr. Wildberger claims it did, then there would be a great many holes in it, indeed, as there are a great many mathematical statements which produce values which cannot be expressed as Rational numbers– uncountably infinitely many, in fact.

The idea that the number line is continuous did not originate with Analysis. It had been an openly discussed question in mathematics since at least the ancient Greeks. The Analysts simply decided to explore what it would mean for such a continuum to exist. Quite happily, they found that assuming continuity led to very beautiful developments in mathematics– exactly the opposite of the picture Dr. Wildberger paints.

If one assumes that the number line is continuous, as the Analysts did, then there is no need to try to create a place for √2 to go, despite Dr. Wildberger’s intimations otherwise. It’s already there, occupying a gap between the Rational numbers. Analysis simply asks, “What can we learn about this gap?” It was not arbitrarily placed between 1 and 2, as Dr. Wildberger hints. Analysis helps us to discover that it is there.

Nor is it true that Analysis claims “we can find out what this thing is by making a calculation.” We already know what this thing is: it is the square root of two. Dr. Wildberger is conflating “what this thing is” with the manner by which we symbolize this thing when using a particular notation. That is to say, Dr. Wildberger is attempting to claim that the number **is** its decimal representation. This is why he takes such offense at the ellipsis which is used to show that the decimal representation is incomplete. For Dr. Wildberger, the decimal representation **is** the number.

This is, of course, a silly notion. The symbols which we use to represent an idea are not equivalent to that idea. Nobody thinks, for example, that the color blue necessarily consists of the letters “b,” “l,” “u,” and “e.” Nor would anyone claim that the numeral “2” is a more proper symbol for the number it represents than is “two” or “два” or “二” or even “||.” Similarly, it seems more than a little misguided that Dr. Wildberger is so inordinately attached to the decimal representation of the square root of 2. The fact of the matter is that, so long as it is clear that we are talking about the square root of 2, then it doesn’t matter if we represent that notion with √2 or with 2^(1/2) or with 1.41421… or with “the ratio of the magnitude of the diagonal of a square to that of one of its sides.”

So when Dr. Wildberger writes…
√2 = 1.41421356…
…and asks, “Is this a correct and meaningful statement?” the answer to both is, “Yes.” None of the displayed digits is incorrect and the ellipsis acknowledges that the display is incomplete. This statement gives us a good bit of information about √2, and that alone makes it meaningful. When Dr. Wildberger asks about moving the ellipsis to display fewer and fewer digits, the expression remains correct and meaningful, but becomes less useful as we omit more information. The simple fact of the matter is that a mathematical statement can most certainly be “meaningful” without carrying perfectly complete information.

Even when Dr. Wildberger presents the question of “√2 = …” in an attempt to show that the ellipsis is absurd, he is misguided, as this statement actually does have meaning– it tells us that √2 is equal to a number. Now, Dr. Wildberger is correct to point out that one is not likely to get any credit for such an answer on homework or an exam, but his reasoning is incorrect. As Dr. Wildberger well knows, good math homework and exams care less about the completeness of the answer than they do about how the student arrived at that answer. After all, which should receive more credit on a test: a correct answer with incorrect work shown or an incorrect answer with the correct work shown? So, while “√2 = …” may be a *technically* correct response, it does nothing to show that the student has any understanding of whatever mathematical concept is actually being tested.

This idea that the decimal expansion of √2 contains an infinite number of non-repeating digits seems to be the only real objection which Dr. Wildberger presents in this video, but his opposition to it seems misplaced, at best. In the description to the video, Dr. Wildberger notes that he will further discuss the logical problems which he purports to exist in the treatment of irrational numbers in his videos on Cauchy sequences and Dedekind cuts, so I will be sure to watch these as well; however, his bold pronouncement that “none of them work” seems more than a little arrogant. We’re not talking about some fringe development in a little known field which is sparking controversy and debate. On the contrary, Dr. Wildberger is overtly stating that hundreds of years worth of the world’s greatest mathematical discoveries are completely wrong.

I believe I understand why Dr. Wildberger makes such outlandish claims. In some of his other work, I have seen him explicitly reject the axioms of infinity and of choice utilized in modern set theoretic frameworks. Certainly, without these axioms, our understanding of the irrationals becomes far less rigorous. However, Dr. Wildberger’s aversion to these axioms has led him to caricature his opposition rather than to treat the opposing viewpoint with even the remotest sense of charity. As such, it seems fairly difficult to take his claims on the subject seriously.

Norman Wildberger’s video on the square root of 2 does not contain the “inconvenient truths” which it purports to show. Worse, it contains rather convenient falsehoods which Dr. Wildberger has utilized in his attempt to denigrate Analysis.

We can use a function for just such a purpose. A function is a specific mathematical tool which allows us to describe an entire set of data points all at once, which we symbolize as f(x) (read “f of x”). We encode the data by means of a mathematical formula. For example, our exemplary rolling ball might well have been encoded by the function f(t) = t/2, where the t represents the time, in seconds, that the ball has been rolling, and the value of the function, f(t), tells us the distance in meters which the ball has traveled in that time. In this particular function, the coefficient of t tells us the rate at which distance changes as time passes– that is, 1/2 of a meter per second. When the boy first rolls it, the ball is traveling at 1/2 of a meter per second; when it finishes it had been traveling at 1/2 of a meter per second; and at any single point during the journey the ball is traveling at 1/2 of a meter per second.

However, this is a very simple example. It describes a situation involving a constant velocity. Things become a bit more muddied when the rate at which a change occurs is, itself, changing.

Our example above describes a **linear function**. Linear functions are so named because they can be graphed on a Cartesian plane to form a straight line. The equation for a linear function is of the form y = mx + b, where b represents the y-intercept (the point at which the line crosses over the y-axis of the plane) and where m represents the slope of the line (the rate of change for the function). Utilizing the function from our example, f(t) = t/2, we have a slope of 1/2, an intercept of 0, and we can produce the following graph:

It’s very easy to see, intuitively, that this line’s slope, or rate of change, is constant throughout the whole function. We don’t even need to see the equation which generated this graph to see that this is the case, if we presume that the line on the graph is actually as straight as it appears. That very straightness is precisely what we mean by a constant rate of change. As such, it is perfectly clear that the graph has the same slope at t = 1 as it does at t = 2 or t = 3 or t = 4. Regardless of how far along the graph we look, it will always have the same rate of change.
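We can confirm this numerically. Assuming the example’s linear function is f(t) = t/2 (two meters covered in four seconds), the average rate of change over any interval we care to test comes out the same:

```python
# For a linear function, (f(b) - f(a)) / (b - a) is the same for every
# choice of interval [a, b] -- the defining feature of a straight line.
def f(t):
    return t / 2

for a, b in [(0, 4), (1, 3), (2, 6)]:
    print(a, b, (f(b) - f(a)) / (b - a))   # always 0.5
```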

However, this is not true of all graphs. When a function ceases to be linear, the rate of change of that function ceases to be constant. Take, for example, the following graph of the function f(t) = t²/8:

Let’s pretend that, instead of rolling the ball across a flat floor, the little boy has instead set the ball atop a ramp and let go. The ball starts moving slowly, but builds up more and more speed as it moves farther and farther from the boy. After four seconds, the ball is two meters away from the boy– just as in our first example– which means that the ball still averaged 1/2 of a meter per second, overall. However, it seems entirely clear that the ball was not moving at that speed at every single moment in the journey, the way it had when the boy rolled it across the floor. At the start of its roll, the ball is moving much slower than 1/2 of a meter per second, while at the end it is moving much faster than 1/2 of a meter per second.

This introduces a very interesting, and very important, question: how can we tell what the rate of change is at any given point? What is the **instantaneous rate of change**?

For example, let’s say I want to know how fast the ball is moving precisely 3 seconds after the boy has set it rolling. A person might think that they can simply determine how far the ball has gone in that time– 9/8 meters– and then divide that distance by the time– 3 seconds– to conclude that the ball is traveling at 3/8 of a meter per second. However, this has the same problem as the whole 4 second journey: the ball seems to be traveling slower than 3/8 of a meter per second at the start and faster than 3/8 of a meter per second toward the end.

One way in which we know this fact is by looking at how far the ball travels between the second and third seconds of its journey. So, after two seconds, the ball is 1/2 a meter from its starting point. After three seconds, it is 9/8 of a meter from the starting point. This indicates that the ball traveled 5/8 of a meter in one second. But this, again, falls prey to the same problem we’ve been having: the ball seems to be moving more slowly than 5/8 of a meter per second at the 2 second mark and more swiftly at the 3 second mark. We’re closer to the speed of the ball at 3 seconds than we were before, but we still haven’t determined it, quite yet.

We can continue to take smaller intervals of time in order to find better and better approximations of the speed of the ball at the 3 second mark. For example, using the distance the ball moves between the 2.5 second and 3 second marks, or the 2.75 second and 3 second marks, or the 2.99999999998 second and 3 second marks. We can come really, really close to the answer we’re trying to find by doing this, but we don’t end up with the exact answer– and mathematicians are not happy to settle for an inexact answer.
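Taking f(t) = t²/8 as the ramp’s distance function (matching the two-meters-in-four-seconds roll), these shrinking intervals are easy to compute:

```python
# Average speed over [3 - h, 3] for ever-smaller h: each value is a
# better approximation of the ball's speed at exactly t = 3 seconds.
def f(t):
    return t ** 2 / 8

for h in [1, 0.5, 0.25, 0.01, 1e-6]:
    print(h, (f(3) - f(3 - h)) / h)
```

The quotients crowd in on 0.75 meters per second, but no finite interval ever produces that value exactly– which is precisely the mathematician’s complaint.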

Let’s think about what we are doing in these approximations.

If the ball had traveled at a constant speed from the start, at time 0, to the 3 second mark, then its journey could be represented with the line y = (3/8)t. The slope of this line is 3/8– which is the approximate speed we determined when considering the ball over this period. Similarly, the line y = (5/8)t − 3/4 has a slope of 5/8, our approximation from the 2 second mark to the 3 second mark. If we were to calculate the slope of the line connecting the graph’s points at 2.5 seconds and 3 seconds, our approximation would get even better. Visually, in the graph above, we can see that the linear graphs are getting closer and closer to the parabolic graph– but there’s always some tiny bit of space between the two.

Algebraically speaking, what are we doing in these approximations? How can we translate this problem into our mathematical language?

Well, we are taking the distance which the ball has traveled after 3 seconds– which, in our math language, is f(3)– and we are subtracting the distance which the ball had traveled at an earlier time– say, f(2) or f(2.5) or f(2.99999999998)– to find the distance which the ball has traveled between those two times. We are then dividing this distance by the amount of time which has elapsed between the two points: 1 second or 0.5 seconds or 0.00000000002 seconds.

Now let’s try to generalize this. We have our function, f(x) = x²/4. We are looking at the difference between the value of the function at some point, x, and the value of the function at some subsequent point, x + h; we are then dividing that difference by the difference in our two points, (x + h) − x– which is just h. So, this leads us to the expression (f(x + h) − f(x))/h.

As we have seen, the smaller the gap between our two x-values, the closer our approximation becomes. Naturally, we might then think that we can find an exact solution to our problem if we just remove the gap, entirely– that is to say, what happens if we set h equal to zero in the expression that we found, above? However, we very quickly come to a problem if we do that. Evaluating the expression, we’ll see that (f(x + 0) − f(x))/0 = 0/0. This is certainly problematic– any middle school child should be able to point out that we simply cannot evaluate that 0/0, since division by zero is undefined.

But what if we had some number which wasn’t zero, and yet that number was infinitely close to zero? In such a case, we could reasonably assume that our answer is infinitely close to being correct.

Thankfully, in the first part of this series, we learned that we do have such numbers: the infinitesimals. So now, if I replace the h from our above expression with any arbitrary infinitesimal– let’s call it ε– we’ll find that (f(x + ε) − f(x))/ε evaluates to something infinitely close to the answer which we are looking to find. For the exact answer, as we mentioned, we would like to have been able to replace the h with zero; but now we can be clever. Instead of trying to do undefined operations of math, dividing zero by zero, we can find the Real number solution which is infinitely close to our evaluated expression, which (as you will recall) is called the **standard part** of the expression. By taking the standard part of (f(x + ε) − f(x))/ε, we can find the exact answer to our problem.

Let’s go back to our rolling ball, now, to see how we can put this into use. We want to find the exact speed of the ball at the 3 second mark. Translating this into our expression, we get:
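Reconstructing that computation– assuming, consistent with the figures above, that the ball’s distance function is f(x) = x²/4, so that f(3) = 9/4:

```latex
\operatorname{st}\!\left(\frac{f(3+\varepsilon)-f(3)}{\varepsilon}\right)
= \operatorname{st}\!\left(\frac{\frac{(3+\varepsilon)^2}{4}-\frac{9}{4}}{\varepsilon}\right)
= \operatorname{st}\!\left(\frac{6\varepsilon+\varepsilon^{2}}{4\varepsilon}\right)
= \operatorname{st}\!\left(\frac{3}{2}+\frac{\varepsilon}{4}\right)
= \frac{3}{2}
```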

So, precisely at the 3 second mark, we now know that the ball is traveling at exactly 3/2 of a meter per second. However, we can do even better than this. As mentioned earlier, mathematicians are greedy. We don’t just want to know what’s going on at a few of the points; we want to know what is going on at *all* of the points. So, rather than solving for some particular value of x, such as 3, we can solve the expression for *all* values of x, like so:
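Carrying out the same computation for a general x (again assuming f(x) = x²/4):

```latex
\operatorname{st}\!\left(\frac{f(x+\varepsilon)-f(x)}{\varepsilon}\right)
= \operatorname{st}\!\left(\frac{(x+\varepsilon)^{2}-x^{2}}{4\varepsilon}\right)
= \operatorname{st}\!\left(\frac{2x\varepsilon+\varepsilon^{2}}{4\varepsilon}\right)
= \operatorname{st}\!\left(\frac{x}{2}+\frac{\varepsilon}{4}\right)
= \frac{x}{2}
```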

This new function, f′(x) = x/2, is called the **derivative** of our original function. We denote the derivative of f(x) with an apostrophe, written as f′(x) and often read as “f-prime of x.”

The derivative is a very powerful tool. It gives us a way of describing the instantaneous rate of change for *all* points of a given function. When discussing speed or velocity, as we have been doing for our exemplary ball, the derivative of the function for distance gives a function describing velocity. The derivative of the function describing velocity will, in turn, give us a function describing acceleration. Taking the derivative of that function will then tell us how quickly our acceleration is, itself, increasing or decreasing– and so on and so forth. When we take derivatives of derivatives, like this, we refer to them as second, third, fourth derivatives (and so on). So, as we have now seen, the second derivative of a distance function is an acceleration function.
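As a quick numerical illustration of this chain of derivatives, the sketch below approximates each one with a symmetric difference quotient– a standard numerical device, not the infinitesimal method of this post– again assuming the ball’s distance function is t²/4:

```python
def central_diff(f, x, h=1e-3):
    """Approximate the derivative of f at x with a symmetric difference quotient."""
    return (f(x + h) - f(x - h)) / (2 * h)

def distance(t):
    return t * t / 4  # the ball's distance function, in meters (assumed, as above)

def velocity(t):
    return central_diff(distance, t)       # first derivative of distance

def acceleration(t):
    return central_diff(velocity, t)       # second derivative of distance
```

Here velocity(3) comes out at about 1.5 meters per second, and the acceleration is a constant 1/2 meter per second squared, since the assumed distance function is quadratic.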

The derivative was developed by mathematicians for the express purpose of describing the changes in change. By its use and exploration, we can conquer a great many problems which are incredibly difficult– or even impossible– without this wonderful tool. And, at the very heart of the derivative lie the infinitesimals– these numbers between our numbers– which give this mathematical tool its power.

There are numbers in between the Rational numbers, too. We can define some number, such as √2, which is not equal to any Rational number. There are Rational numbers which are greater than √2, and those which are less than √2, but somehow our number squeezes itself into a gap in between the Rational numbers. In order to find such a number, we need to further extend our understanding of “number” to include the Real numbers. This should all be very familiar to the average high-school student.

Now, what happens if we extend this idea one step further? Are there more numbers which are in between the Real numbers?

For thousands of years, mathematicians have had heated debates about this question. There is a well-known concept in number theory called the Archimedean property, named after the famous mathematician Archimedes (though he, himself, had attributed the idea to his friend and mentor, Eudoxus). Euclid described the notion by saying, “Magnitudes are said to have a ratio to one another which can, when multiplied, exceed one another.” In short, this means that given any two positive numbers, x and y, such that x < y, we should be able to add x to itself a finite number of times in order to find a number which is larger than y. For example, given the numbers 5 and 34, I can add 5 to itself seven times in order to get a number greater than 34– that is to say, 7 × 5 = 35 > 34.
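The Archimedean property is easy to demonstrate in code. This sketch (my own illustration, with hypothetical names) simply keeps adding x to itself until the running total exceeds y:

```python
def archimedean_multiple(x, y):
    """Return how many times the positive number x must be added to itself
    before the total exceeds y. The Archimedean property guarantees that,
    for Real numbers, this loop always terminates."""
    count, total = 1, x
    while total <= y:
        count += 1
        total += x
    return count
```

For the example in the text, archimedean_multiple(5, 34) returns 7, since 7 × 5 = 35 > 34.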

However, this property led to a very curious problem when mathematicians began trying to discuss the number of points contained in a given line– particularly when those mathematicians attempted to compare the number of points in one line to the number of points in another line. The eminent philosopher, Aristotle, came to the conclusion that such discussions could be nothing but nonsense, and that any attempt to quantify the number of points in a given line would simply lead to confusion and folly. As an example of this, Aristotle discussed what has come to be known as his Paradox of the Wheel. Take a look at the following figure:

Ancient Greek mathematicians, while studying circles, wanted to find some way to discuss the circumference of the circle in the same way in which they talk about other magnitudes. So, they began “unrolling” circles to create a straight line equal in length to the circumference of the circle. Aristotle noticed that, given a larger and a smaller circle which share a centerpoint, rolling out the wheel to produce a straight line equal in length to the circumference of the larger circle causes the smaller circle to produce an equally long line. But how can this be? The smaller circle obviously has a smaller circumference, but rolling it out at the same rotational rate as the larger circle makes it seem to have an equal circumference to the big one!

Galileo Galilei, two millennia after Aristotle, attempted to resolve this paradox by arguing that there must be gaps in the continuum– that is to say, there must be empty spaces between the points in any line or figure– and that these gaps account for how the smaller circle’s circumference can be stretched to equal that of the larger circle. However, other mathematicians were quick to note that this would run afoul of the Archimedean principle. If such gaps existed, it should be possible to continue to stretch them until they became noticeably large. We should be able to magnify a line until we literally see it rend apart into pieces.

In the latter half of the 17th Century, Gottfried Wilhelm Leibniz began to argue that there are numbers which are infinitely small. To the absolute shock of the mathematical community, Leibniz was claiming that a 2000-year-old immutable law of number theory was, in fact, incorrect. There existed numbers, Leibniz claimed, which violate the Archimedean principle; numbers which are greater than zero, but which are nonetheless so much smaller than any Real number that it is impossible to find a finite ratio between that infinitely small number and any Real. You could add the number to itself a thousand times, a million, a quintillion, a googolplex– even Graham’s number of times– and that number would still remain smaller than any Real number which you could possibly imagine.

Not only did Leibniz believe that such numbers exist, he utilized them in order to create an entirely new method of mathematics: Calculus. However, the idea was so incredibly controversial that even other proponents of Calculus– like Isaac Newton, who independently developed that field of mathematics– railed against Leibniz for his reliance upon such an insane concept. Still, Leibniz’s results were indisputable, and a number of mathematicians joined with him in an attempt to find some rigorous and logical means of discussing these infinitely small numbers. However, after a great deal of failure, other avenues began to be explored in order to place Calculus on a rigorous footing. Particularly, the notion of the Limit was put forth, expanded, and eventually made rigorous in the 19th Century by Karl Weierstrass. With a rigorous and logical footing finally established for Calculus, the infinitely small numbers of Leibniz’s devising were abandoned and Calculus classes began being taught based on the idea of the Limit.

Thankfully, this was not the end for our strange and tiny numbers. One-hundred years after Weierstrass, and three-hundred after Leibniz, a model theorist named Abraham Robinson began to attack the problem. He was fascinated by Leibniz, and wanted to gain a better understanding of the mind which had invented the calculus. Robinson’s work led to his development of a new number system: one which did not adhere to the Archimedean principle, but which otherwise behaved in exactly the same manner as did the Real numbers. He called this new system the Hyperreal numbers. Just as the mathematicians had extended the Integers to find the Rationals, and then extended the Rationals to find the Reals, Robinson extended the Real numbers in order to find the Hyperreals.

The Hyperreals contain all of the Real numbers, so any number on the Real line is also on the Hyperreal line. However, the Hyperreal number line also contains two very special types of numbers which are not contained in the Reals. The first of these are Infinite numbers– numbers which have an absolute value greater than that of any Real number. That is to say, we can define some number, ω, such that, for any given Real number, r, it is true that |ω| > |r|. The second type is the Infinitesimal numbers. Infinitesimals are the reciprocals of Infinite numbers, and as such, have an absolute value which is smaller than that of any Real number (except 0, which is considered to be Infinitesimal): for ε = 1/ω and any non-zero Real number r, we have |ε| < |r|.

Any number which is not Infinite is called a Finite number– including the Infinitesimals. Once the system is in place, it becomes quite easy to prove some simple, but powerful, properties of the Hyperreals. Given any positive Infinite numbers, ω and ω′; any positive Real numbers, r and s; and any positive, non-zero Infinitesimals, ε and δ; we can derive the following:

- ω + ω′, ω + r, and ω + ε are Infinite
- ω · ω′ and r · ω are Infinite
- ω/ε, ω/r, and r/ε are Infinite
- r − s is Finite (and possibly Infinitesimal, in the case r = s)
- r + ε is Finite and non-Infinitesimal
- r + s and r · s are Finite and non-Infinitesimal
- ε + δ, r · ε, and ε/r are Infinitesimal

You may notice that there are several cases missing from the above list. These cases are indeterminate forms– that is to say, without knowing more about the particular numbers involved, it is impossible to tell whether the result will be Infinitesimal or Finite or Infinite. The indeterminate forms are: ω − ω′, ω/ω′, ω · ε, and ε/δ.

We can also derive another very important notion:

For any Finite Hyperreal number, a, there is exactly one Real number, r, such that a − r is Infinitesimal. In such a case, we call r the **standard part** of a, denoted as st(a).

Any two numbers which are only separated by an Infinitesimal are said to be **infinitely close** to one another. As such, another way of wording the above is that the standard part of any Finite Hyperreal number is the Real number which is infinitely close to it.

So, now, we can answer the question with which our article started. Are there numbers between the Real numbers? We find that the Hyperreals allow us to answer this with a resounding, “Yes!” Given any Real number, r, and any Infinitesimal number, ε, we can be absolutely certain that there are no Real numbers which come between r and r + ε. This concept is the absolute foundation of Infinitesimal Calculus.

Many other mathematicians and philosophers of the time rightfully balked at the notion. It seemed entirely ludicrous. Bishop George Berkeley famously scoffed at Newton, asking if his fluxions were “the ghosts of departed quantities.” However, it was quite plain that the mathematics which Leibniz and Newton presented *worked*. When the results which could be found from the methods of Calculus were able to be confirmed using other methods, they were found to be accurate and true. Indeed, the Calculus was such a powerful tool that even most mathematicians and philosophers who recognized its flaws continued to utilize it in their work. Many began searching for some way to make the Calculus just as rigorous as the rest of mathematics. These efforts culminated in the work of Karl Weierstrass, who found a way to base Calculus upon a different tool. Instead of the Newtonian “fluxion” or the Leibnizian “differential,” Weierstrass gave mathematics a well-defined notion of the limit.

It is Weierstrass’ method of limits which is still taught, even to this day, in nearly every Calculus textbook in the world; but perhaps it is time to abandon this notion and return to the concept which Newton and Leibniz pioneered.

In the 1960’s, a mathematician named Abraham Robinson developed a rigorously well-defined number system called the Hyperreal numbers. This number system included numbers which are larger than any given Real number– known as “infinite” or “unlimited” numbers– as well as their reciprocals, which are greater than zero but nonetheless smaller than any real number– known as “infinitesimals.” Robinson explicitly noted that his development of the Hyperreals came out of a desire which he had for better understanding Leibniz’s thought processes. Indeed, the infinitesimals of the Hyperreal numbers look very much like the “fluxions” and “differentials” of that early Calculus. In 1986, H. Jerome Keisler wrote a textbook for the subject, Elementary Calculus: An Infinitesimal Approach, in which he provides a method for teaching Calculus without the need for limits, while still maintaining the rigor desired in mathematics.

Unfortunately, Dr. Keisler’s work has not yet gotten much of a foothold in the educational system. The method of limits has been taught for so long that it would be exceedingly difficult to displace it. However, there are some very distinct pedagogical advantages in Keisler’s approach which may make the whole ordeal well worth the effort.

Let’s look at a simple example. One early Calculus problem with which every student is presented is to find the derivative of the function y = x². For those who don’t remember, the derivative of a function, f(x), tells us how much the value of that function changes with respect to a change in the value of *x*. So, let’s say that the value of *y* increases by some amount which we will call Δy when the value of *x* is increased by some amount Δx. Algebraically, we would write this as y + Δy = (x + Δx)², for the equation we are discussing. We can then take this new equation, and solve it for the value of Δy/Δx as follows:
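The algebra runs as follows– reconstructed here, with the step numbering that the “Step 6” reference later in this post relies upon:

```latex
\begin{aligned}
\text{1.}\quad & y + \Delta y = (x + \Delta x)^2 \\
\text{2.}\quad & y + \Delta y = x^2 + 2x\,\Delta x + (\Delta x)^2 \\
\text{3.}\quad & \Delta y = x^2 + 2x\,\Delta x + (\Delta x)^2 - y \\
\text{4.}\quad & \Delta y = 2x\,\Delta x + (\Delta x)^2 \quad (\text{substituting } y = x^2) \\
\text{5.}\quad & \frac{\Delta y}{\Delta x} = \frac{2x\,\Delta x + (\Delta x)^2}{\Delta x} \\
\text{6.}\quad & \frac{\Delta y}{\Delta x} = 2x + \Delta x
\end{aligned}
```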

It is these final three steps which the mathematicians of Newton and Leibniz’s day found to be offensive. According to the Calculus, the derivative was the function which results from setting Δx = 0 in Δy/Δx = 2x + Δx. If we are to say that Δy/Δx actually has a value, then it must be true that Δx ≠ 0, because division by zero is undefined. However, if 2x + Δx = 2x, then it must be true that Δx = 0, because zero is the only additive identity. Thus, we are left with a contradiction if we claim that Δx ≠ 0.

Later mathematicians, culminating in Weierstrass, resolved this issue by redefining the derivative to be a limit as the change in *x* approaches zero. Specifically, they said that f′(x) = lim_{Δx → 0} Δy/Δx. Of course, this raises a new question: what, precisely, is a limit? Well, if f(x) is defined on an open interval about x₀, except possibly at x₀ itself, then lim_{x → x₀} f(x) = L if, for every number ε > 0, there exists a corresponding number δ > 0 such that for all *x* it is true that 0 < |x − x₀| < δ implies that |f(x) − L| < ε. Needless to say, this is a fairly complex idea, which is why a large amount of time needs to be spent on teaching students how to properly find and evaluate limits.

Keisler’s resolution to the derivative problem we presented is somewhat simpler, and quite a bit more intuitive. In his Elementary Calculus, the Δx in the equations above is defined to be a non-zero infinitesimal. The derivative is then defined to be f′(x) = st(Δy/Δx), where *st()* means “the standard part of…” The *standard part* of a finite Hyperreal number, *a*, is the Real number which is infinitely close to *a*; and two numbers are infinitely close if they only differ by an infinitesimal value. Looking again at Step 6 from our work above, we had the expression Δy/Δx = 2x + Δx. Since we know that Δx is infinitesimal, we know that 2x + Δx is infinitely close to 2x. Thus, for any Real number, *x*, we can see that f′(x) = st(2x + Δx) = 2x.
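This style of computation can even be mimicked mechanically. The sketch below uses dual numbers– pairs a + b·ε in which ε² is treated as exactly zero– as a deliberately simplified stand-in for Robinson’s Hyperreals; the class and function names are my own illustration, not Keisler’s notation:

```python
class Dual:
    """A number a + b*eps, where eps is an infinitesimal with eps**2 = 0.

    This truncated arithmetic is a simplified stand-in for the Hyperreals,
    but it is enough to carry out the standard-part derivative computation."""

    def __init__(self, real, eps=0.0):
        self.real = real
        self.eps = eps

    def _coerce(self, other):
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        other = self._coerce(other)
        return Dual(self.real + other.real, self.eps + other.eps)

    __radd__ = __add__

    def __sub__(self, other):
        other = self._coerce(other)
        return Dual(self.real - other.real, self.eps - other.eps)

    def __mul__(self, other):
        other = self._coerce(other)
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps**2 = 0
        return Dual(self.real * other.real,
                    self.real * other.eps + self.eps * other.real)

    __rmul__ = __mul__

def derivative(f, x):
    """st((f(x + eps) - f(x)) / eps): the change in f is f'(x)*eps, so
    dividing by eps and taking the standard part leaves its coefficient."""
    eps = Dual(0.0, 1.0)
    dy = f(Dual(x) + eps) - f(x)
    return dy.eps
```

For y = x², derivative(lambda t: t * t, 3.0) yields 6.0, matching f′(x) = 2x.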

From a pedagogical standpoint, it would seem that Keisler’s method is superior. Hyperreal variables can be manipulated algebraically in exactly the same way students are already familiar with manipulating Real variables. The *standard part* function is quite a bit easier and more intuitive to learn than the *limit* function. The method is far closer to the original ideas which created Calculus in the first place, and it is just as rigorous a treatment as is the method by limits. Keisler and others have reported that they’ve seen students take to the material more easily, in this manner. Perhaps the time has come to leave off the use of limits, and to return to the method of infinitesimals for teaching Calculus.

Ancient languages maintain these problems, but add an entirely new layer of obfuscation which is not found even in most culturally distinct modern languages. Over the past few thousand years, human understanding of the world around us has changed quite significantly. Just one hundred years ago, no one had ever viewed the ground from five miles up in the air. Two hundred years ago, we had no idea that microscopic organisms cause disease. Three hundred years ago, humanity had no idea that oxygen exists. Four hundred years ago, the world was shocked to learn that the planet Jupiter has moons. The manner in which religion, philosophy, and science have discussed a myriad of things about reality has changed so greatly in recent millennia that very often even one word in a single language can mean something exceedingly different to people living in different periods of time.

The documents which comprise the New Testament of the Christian Bible were written 2000 years ago. In those ensuing twenty centuries, many of the words used by the original authors and many of the concepts which they espoused have engendered incredible amounts of revision, alteration, and nuance by subsequent philosophers and theologians which would have been wholly alien to those initial ancient writers. The vast majority of modern readers– including an embarrassingly large number of modern scholars of the text– seem wholly ignorant of this fact when they read a passage from their Bibles.

As an example of what I mean, let’s take a look at a short verse from one of the Gospels, Mark 1:10. The English Standard Version of the Bible, which I generally consider to be a good translation, renders this passage as:

And when he came up out of the water, immediately he saw the heavens being torn open and the Spirit descending on him like a dove.

This is a verse from Mark’s description of the baptism of Jesus by John the Baptist. It seems extremely straightforward, to a modern Christian reader. As Jesus comes up from the water, the Holy Spirit came out of Heaven and alighted upon Jesus in the way a beautiful bird might come down from its flight.

However, consider this alternate translation of the same text:

And when he came straight up from the water, he saw the skies being divided and the wind came down toward him like a pigeon.

This is very different, indeed. No mention of Heaven, or the Holy Spirit. It gained the adverb “straight,” but lost the adverb “immediately.” Some of the words are similar, though slightly altered, like “divided” instead of “torn open” and “pigeon” instead of “dove.”

So which translation is correct? Well, as I insinuated earlier, “correct” may not even be a word which we can use when describing translations. However, there are definitely some good reasons to prefer my translation over that of the ESV. Let’s talk about the two biggest changes in my translation over the other: “heavens” versus “skies,” and “the Spirit” versus “the wind.”

The word which the ESV translates as “heavens” and which I translate as “skies” is οὐρανους (ouranous), which is the plural form of the word οὐρανος (ouranos). This one word is sometimes translated as “sky” and other times as “Heaven” by nearly every English translation, including the English Standard Version. Given how different the words are to a modern Christian, this might seem confusing. However, the ancient Greek language didn’t have different words for these two things. Neither did ancient Hebrew, nor Aramaic, nor Latin, nor any other ancient language of which I am aware.

There is a very good reason for this. The modern conception of Heaven as a place which exists wholly removed from the physical cosmos did not exist to these ancient people. When the ancients revered “the heavens” as a divine realm, they were literally talking about the sky which they looked up and saw every day. They referred to “Heaven” as being “above” them or “higher” than them because that’s where the sky actually is. This language doesn’t even make any sense on a cosmic scale, let alone when discussing something wholly distinct from the physical cosmos. The “Heaven” which is discussed by modern theologians is not “above” us, as it has no physical relation to us.

To further illustrate this, look at what the Gospels record Jesus, himself, as saying. In a number of passages (Matthew 24:30, 26:64; Mark 13:26, 14:62), Jesus talks about the Son of Man being seen in “Heaven” coming on the clouds. Modern readers know that clouds are just collections of water vapor in the sky– very physical things, and distinctly not what theologians would consider to be a part of the realm of the divine.

So, then, if the ancients were referring to the sky when using the word οὐρανος, then why do they pluralize it in many places, including the passage which we are here discussing? This is yet another place where ancient culture and modern collide. To us, there is only one sky. It wouldn’t even occur to most people that the word can be pluralized. However, the ancients had a very different understanding of that which resides above us than we do. They thought that the objects which we see in the sky above us– the sun, moon, planets, and stars– were literally attached to crystalline spheres each of which rotated at different distances from the ground. Those spheres were what the ancients meant when they were talking about the “heavens.” Humanity was able to distinguish seven celestial bodies which were distinct from the background of stars through the use of the naked eye. Each of these was considered to be attached to a distinct sphere rotating over the ground at different heights. According to Aristotle, the lowest of these heavens was the Moon, followed by Mercury, then Venus, then the Sun, then Mars, then Jupiter, and finally Saturn in the highest sphere.

Modern Christian theologians generally do not believe that there are multiple divine realms, so pluralizing οὐρανος makes no more sense in light of modern theology than it does in modern cosmology; but it was perfectly rational to an ancient people who truly believed that there were multiple skies above us. In fact, in the New Testament, itself, we have an example of another writer who most certainly espoused this view. Paul, the eminent apostle whose name is attached to nearly half of all the books that make up the New Testament, says in 2 Corinthians 12:2, “I know a man in Christ who fourteen years ago (whether in the body or out of the body I do not know, God knows) was carried off to the third heaven.” Again, this fits rather perfectly with the ancient understanding of the world, but clashes rather significantly with modern cosmology and most modern Christian theology.

For these reasons, I think that “sky” is a much better, and much more preferable, translation of the word οὐρανος than is “Heaven.”

The distinction between the ESV’s “Spirit” translation and my “wind” is a very similar case. Here, the word being translated is πνεῦμα (pneuma). Just as before, the ESV and other English translations alternately use “spirit,” “Spirit,” “breath,” and “wind” to translate this word. You’ll notice that I listed both lower-case “spirit” and upper-case “Spirit,” separately. I did so intentionally, because when translators use that capitalized “S” version of the word, they are saying that the author was referring to the Holy Spirit– as in, the third person of the Trinity– as opposed to any other “spirit.”

Again, the word πνεῦμα carries with it cultural connotations which are somewhat alien to modern readers. The Greek word primarily means “breath” or “wind.” However, again, the ancient people had no concept of modern physics or chemistry. They didn’t know that air is composed of molecules which move and bounce off of other molecules, imparting Newtonian forces in order to cause the motion which we see. All that they knew was that, somehow, the invisible forces of “breath” and “wind” could affect that which was visible. The ancient Hebrews, as well as a few other ancient Near East cultures, came to associate this invisible force with those invisible qualities of a person which animate the visible. As such, in ancient Hebrew, the word רוח (ruach) literally meant “wind” or “breath,” but the “wind” of a person was the part of that person which truly gave them life. This carries down even into modern English in idioms like “the breath of life.”

As Hebrew people in the Greek-speaking Diaspora of the Roman Empire began to utilize ancient Greek in addition to– or even in place of– their ethnic tongue, they began to have the need to discuss these concepts in the common language of the area. As such, they chose to use the word πνεῦμα in the same way that they had utilized רוח, previously. While the Greek word also conveyed a sort of sense of invisible force, most of the Hellenic citizenry of ancient Rome didn’t see πνεῦμα as being something personal or intelligent. This connotation seems to have been the result of a syncretization between Hebrew and Hellenic cultures.

Modern theologians, just as with “Heaven,” do not regard “spirit” to be a physical thing, in the least. To them it is, in fact, the precise opposite of physical. It is entirely non-physical, and while it (somehow) imparts personhood into a being, the physical body of that being is just a shell to contain the spirit. But, again as before, this was not a concept held by ancient peoples. To them, a person’s wind was categorically no different than a storm’s wind. The idea that something might be wholly removed from the physical world would have been entirely alien to most ancient people. Among those who did hold to such a concept– for example, Plato and those who accepted his theory of universals– it would have been entirely anathema to refer to such things as “wind.” After all, “wind” is a thing which can certainly be perceived– perhaps not by the eyes, but certainly by senses like touch and hearing and sometimes even taste or smell. The Platonists insisted that the universals were entirely imperceptible, and that notions like space and time– which can certainly be applied to wind– are entirely meaningless in regard to the universals. It seems quite unlikely that the authors of the New Testament had in mind the modern conception of “spirit” when they used the word πνεῦμα in their writings.

For these reasons, I believe that “wind” or “breath” are far more preferable renderings of the word πνεῦμα than is the word “spirit.”

The words which the ESV used in translating Mark 1:10– “heavens” and “Spirit”– are part of a category of terminology which I refer to as Theologically Loaded Language. These are words which have undergone literally millennia of theological revision and discussion, and which have come to mean very different things than the original text which they translate. These two are just a very tiny example of a rather huge list which includes very common Christian words like “gospel,” “Christ,” “sin,” “angel,” “devil,” “baptism,” “Scriptures,” and many, many others.

For some time, now, I’ve wanted to do a translation of the New Testament books which avoids utilizing this sort of Theologically Loaded Language. I honestly believe that such a translation would be eminently useful to *all* people interested in the Bible, believer and skeptic alike. I would start with Mark– the earliest and the shortest of the Gospels– and progress from there. Unfortunately, however, this would require a great deal of time and effort, even just to produce a single book. I’ve thought about trying to drum up some interest with a crowdfunding site like Kickstarter, IndieGoGo, Patreon, or GoFundMe, but I’ve been somewhat hesitant. Would this be something in which you, my readers, might be interested? If so, please let me know in the comments. If I can engender enough interest, I may well move forward with such a project.

Consider this slightly modified version of the thought experiment…

Fred is sitting in a room at 8:00 am. There exist four Grim Reapers along with Fred, each of which is currently dormant. When any individual Grim Reaper becomes activated, if Fred is not going to be killed by the next Reaper in the order, then this Reaper will instantaneously kill Fred; otherwise, this Reaper will return to a dormant state and continue to do nothing. Each of the Grim Reapers is timed to activate at a specific time after 8:00 am. The first Reaper will activate at 8:15 am. The second activates at 8:30 am. The third activates at 8:45 am. The fourth activates at 9:00 am.

Now, 8:15 arrives and the first Reaper activates. Does it kill Fred or not? If it does kill Fred, because the second Reaper is not going to kill Fred, then the 3rd Reaper in the line is not going to kill Fred– it can’t, obviously, since Fred is already dead. However, if that’s the case, then the second Reaper *is* going to kill Fred (since those conditions are met) and the first Reaper’s conditions are no longer valid. So, even though we started assuming that the first Reaper killed Fred, we’ve learned that this cannot be the case. Indeed, the same holds true for the second Reaper– if the second Reaper kills Fred, then the fourth Reaper cannot kill Fred meaning that the third Reaper should kill Fred, violating our initial assumption. So, we see that the second Reaper is not going to kill Fred. But if the second Reaper isn’t going to kill Fred, then the first Reaper should– except that we’ve already seen this cannot happen.

Unlike Pruss’s formulation of the paradox, this problem cannot be resolved by simply claiming that actual infinites cannot exist. We’re not relying on actual infinities, here. We are looking at a finite number of Grim Reapers. Nor does it seem reasonable to come to the sort of conclusion which Pruss does in his proposed solution to the paradox. If a person tried to claim that the number “four” cannot actually be a number which applies to the real world because of this paradox, we would all laugh in their faces.

It’s a little bit easier to see the point I was trying to make in my other post, now. Regardless of whether one is an A-Theorist or a B-Theorist as far as Time is concerned, both camps agree that events which lie in the future do not alter the ontology of events in the present. On the A-Theory view of things, I cannot make a decision based upon a future which has not yet been actualized. Things which are not yet actual cannot affect that which is actual, and as such, it is clear that my version of the Grim Reaper Paradox violates this view of things.

Similarly, on the B-Theory, causality is a description of a relation between two events, but it doesn’t affect the ontology of those events. So an event in the future cannot alter the ontology of something in the present. Both events are actualized and static, and my version of the Grim Reaper Paradox violates this precept. However, this also means that events in the present do not alter the ontology of events in the future. The future is just as actual and static as are the past and present, on the B-Theory. As such, it becomes immediately clear that Pruss’ version of the Grim Reaper Paradox violates this same precept, since it is dependent upon the idea that an event can affect the ontology of future events.

I do not think that Pruss’ version of the Grim Reaper paradox shows that actual infinities are inapplicable to the real world any more than my version of this thought experiment shows that the number “four” is inapplicable to the real world. In fact, it seems to me that the paradox is best resolved by abandoning an antiquated and untenable idea of the nature of Time. Apologists like William Lane Craig have attempted to cite the Grim Reaper paradox in order to support the Kalam Cosmological Argument. Ironically, it may be the case that the Grim Reaper Paradox actually *undermines* the KCA, since that argument is entirely dependent upon the tensed A-Theory of Time.

Today, we will be discussing Part 17 of the *Excursus*. If you read my article on Part 16, you might remember that I was actually quite excited for this, due to Dr. Craig’s promise to discuss the plausibility of Design as an explanation of the universe’s fine-tuning. As I mentioned, whenever I have discussed the idea of Intelligent Design with an apologist, I have brought up this very subject. Unfortunately, I’ve only ever been met with answers about the purported improbability of chance or necessity. I’ve never been proffered any answers with positive evidence for the idea of Design, nor even with a proposed mechanism by which the Fine-Tuning of the universe *could* be Designed.

Early on in the discussion, Dr. Craig makes a statement with which I wholeheartedly agree:

But we cannot infer immediately to design because sometimes it can be justified to believe in an improbable explanation. You would be justified in believing in some improbable explanation just in case there were no better explanation available of the phenomenon in question…

The question we are facing now with regard to the fine-tuning of the universe is: is design a better explanation than chance or physical necessity?

Yes, this most certainly is the question! So, how does Dr. Craig answer this question? Does he define what, exactly, he means by the term “design?” Does he offer some method for differentiating something which is “designed” from something which is not “designed?” Does he then apply this standard to the question of Fine-Tuning in order to show that the constants and quantities of the universe more keenly fit into the “designed” category than the “not designed” category?

Dr. Craig does none of this. He never even attempts to establish that Design is plausible. Instead, he simply *presumes* that Design is plausible, then spends the rest of the time talking about a poor line of argument from Richard Dawkins. Seriously, that’s it. William Lane Craig seems to be claiming that because Dawkins makes a bad argument refuting Design, Design is therefore more plausible than Chance or Physical Necessity in explaining the Fine-Tuning question.

In response, I can think of nothing more appropriate than a paraphrase of Dr. Craig’s own words:

*I think everyone will find that conclusion jarring because the conclusion “Therefore, Design is more plausible than Chance or Necessity” doesn’t follow from the fact that Dawkins made a poor attempt at refuting Design. There are no rules of logic that would permit you to derive such an inference. There are no rules of logic that would draw that conclusion from the truth of that statement. Craig’s argument is just plainly invalid. The central argument of Craig’s Fine-Tuning discussion is a patently invalid argument.*

I’m fairly certain that this is the shortest response I’ve written to one of William Lane Craig’s arguments. Richard Dawkins is a biologist, and not a philosopher. He’s a vitriolic anti-theist, and not a theologian. When he makes a laughably invalid argument, it’s to be expected. William Lane Craig, on the other hand, holds a PhD in Philosophy. Philosophy is his profession. When *he* makes a laughably invalid argument, there is simply no excuse.

Articles in this series:

- WLC doesn’t understand infinity, Part 1 (re: Excursus #9)
- WLC doesn’t understand infinity, Part 2 (re: Excursus #10)
- WLC doesn’t understand cosmology (re: Excursus #16)

Unfortunately for our esteemed theologian, his understanding of cosmology seems to be just as poor as his understanding of mathematics.

The first statement which I would like to address is, ostensibly, a summary regarding the previous Part 15 of the *Excursus*. I neglected to address that segment more fully, because I have written a similar article previously. So, when Dr. Craig states that, in Part 15:

We saw that the fundamental constants and boundary conditions of the universe are fine-tuned for the evolution and existence of embodied conscious agents in a degree that is incomprehensibly delicate as well as complex.

…I must vehemently disagree. It is, undeniably, true that there are quite a number of constants, in our current cosmological models, whose alteration would result in a very different universe than the one which we see. However, that does not imply that these “fundamental constants and boundary conditions of the universe are fine-tuned for the evolution and existence of embodied conscious agents.” The fact that the universe would be different if we were to change its parameters does not imply that those parameters have *any* specific teleology, let alone that they are finely-tuned explicitly for life.

The Fine-Tuning problem, in physics, is the question, “Why does the universe have the values for constants which it has, rather than other values?” It is not, as Dr. Craig likes to pretend, “Why is the universe finely-tuned for the existence of life?”

Continuing, Dr. Craig states that, again in Part 15:

We’ve already seen that the first alternative – that this is a matter of physical necessity – is highly implausible. This is contrary to the best evidence of science. The best evidence indicates that these constants and quantities are independent of the laws of nature, and that there is nothing physically that would determine that they should have the finely tuned values that they do.

Once again, Dr. Craig oversteps the bounds of reason in this claim. Had Dr. Craig simply stated, “There is no good reason from cosmology to think that these constants and quantities have their values as a matter of physical necessity,” he would have been fairly accurate. However, he instead erroneously states that, “The best evidence indicates that these constants and quantities are independent of the laws of nature.” This is absolutely incorrect. We have no good reason to think that these constants *are* physically necessary, but neither do we have a good reason for thinking that they *are not* physically necessary.

Essentially, Dr. Craig is claiming that because we do not have a good reason to believe that these constants are physically necessary, they are therefore not physically necessary. This is a rather blatant *Argument from Ignorance* fallacy, and should be plainly evident as such to as studied a philosopher as William Lane Craig.

From here, Dr. Craig moves into discussing whether or not our finely-tuned universe could have been as a result of chance. He claims that:

The fundamental problem with this explanation is that the chances of a life-permitting universe’s existing are so remote that this alternative becomes unreasonable.

…John Barrow, who is a Cambridge University physicist, gives the following illustration of the sense in which it can be said that it is highly improbable that a finely tuned universe should exist. Barrow said let’s imagine a sheet of paper and put on it a dot representing our universe. Now alter some of the fundamental constants and quantities by just tiny amounts. That will then be a description of a new universe. If that universe is life-permitting, make it another red dot. If it is life-prohibiting, we will make it a black dot. Then do it again, do it again, and do it again until your sheet of paper is filled with dots. What you wind up with is a sea of black with only a couple of pinpricks of red in the field. It is in that sense that it is overwhelmingly improbable that the universe should be life-permitting. There are simply many more life-prohibiting universes than life-permitting universes in our local area of possible universes.

There is a very, very glaring problem in this model, which becomes immediately apparent when one begins to actually consider the manner in which probability works. The simple fact that there are vastly more ways to arrange the parameters of the universe which are “life-prohibiting” than ways which are “life-permitting” does not imply that life-prohibiting universes are therefore more probable. Dr. Craig is making the baseless assumption that any specific arrangement of universal parameters is just as likely to occur as any other.

Here’s a quick illustration of what I mean. Take a regular six-sided die, like you might find in a Monopoly board game. Now, if we were to roll that die, there are six possible values which can be attained: 1, 2, 3, 4, 5, and 6. There are six different possibilities. The chances of rolling any particular value are equal: 1-in-6, or about 16.7%. It is just as probable that your roll will result in a value of 6 as in a value of 1, or of 2, or of 3, or of 4, or of 5. This would be somewhat akin to the model which Dr. Craig is explicating. Every possible value has an equal chance of appearing, so if we needed to roll, say, a 6 then it would be quite probable that our roll will fail.

Now, let’s change things up a little bit. Instead of one six-sided die, let’s look at what happens if we roll two six-sided dice. Now, the possible values we can attain are 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, and 12. There are eleven different possibilities; however, the odds of any particular value are *not* 1-in-11. It is absolutely not true that you have just as much chance of rolling a 2 as you do a 3, or a 4, or a 5, for example. In fact, you are more likely to roll a 7 than to roll any other specific value. The reason for this is that the value of your roll is determined by the combination of the two dice. The first die has six different possible results, and the second die also has six different possible results, and the result of each die is independent of the other. Because of this, there are actually thirty-six different possible combinations. Of these thirty-six, only one combination of the two dice will result in a value of 2 (when both dice show 1’s). Therefore the probability of rolling a 2 is only 1-in-36. However, there are six possible combinations which will result in a value of 7 (1-6, 2-5, 3-4, 4-3, 5-2, and 6-1), which means that we have a 1-in-6 chance of receiving this value. Unlike the picture Dr. Craig is trying to paint, in this case, not every result is equally likely.
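The thirty-six combinations are few enough to enumerate directly, if you’d like to verify the arithmetic. Here’s a quick sketch (my own illustration) that tallies the distribution of two fair dice:

```python
from collections import Counter
from fractions import Fraction

# Enumerate all 36 equally likely combinations of two fair six-sided dice
# and tally how many combinations produce each total.
totals = Counter(a + b for a in range(1, 7) for b in range(1, 7))
probs = {t: Fraction(n, 36) for t, n in totals.items()}

print(probs[2])  # -> 1/36 : only the (1, 1) combination yields a 2
print(probs[7])  # -> 1/6  : six combinations yield a 7
```

As the tally shows, a 7 is six times as likely as a 2, even though each is “one value” on the sheet of possible totals.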

Now, let’s really throw things for a loop. Instead of two six-sided dice, let’s think about two twenty-sided dice. However, these are not normal twenty-sided dice. Let’s think about dice in which the numbers 1, 2, 3, 4, and 5 appear on exactly one side of the die, while the number 6 appears on the remaining fifteen sides. Just as in our last example, these two dice can attain eleven possible values: 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, and 12. However, the probabilities in this example are far more skewed than before. There are 400 different possible combinations which can be rolled on the two dice. However, of these 400 combinations, 225 will result in our rolling a value of 12.

Now, let’s liken this to Dr. Craig’s example. Let’s say that a result of 12 is life-permitting, while a result of anything else is life-prohibiting. If we were to take a paper and draw one red dot for our 12, and ten black dots for the values 2 through 11, it would look like our life-permitting universe is very unlikely. However, there’s actually a better-than-50% chance– 225 out of 400, or 56.25%– that we will roll a 12!
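Again, the arithmetic is easy to verify by brute force. This sketch (my own illustration of the weighted dice described above) enumerates all 400 side-combinations and counts how many yield our “life-permitting” 12:

```python
from fractions import Fraction

# One weighted die: faces 1 through 5 appear on one side each,
# while the face 6 covers the remaining fifteen sides.
sides = [1, 2, 3, 4, 5] + [6] * 15   # 20 sides in total

# Enumerate all 400 equally likely side-combinations of two such dice.
combos = [(a, b) for a in sides for b in sides]
twelves = sum(1 for a, b in combos if a + b == 12)
p12 = Fraction(twelves, len(combos))

print(twelves, len(combos), float(p12))  # -> 225 400 0.5625
```

Only one *value* out of eleven is life-permitting, yet more than half of the equally likely *combinations* produce it, which is exactly why counting dots on a sheet of paper tells you nothing about probability.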

Dr. Craig does nothing to show that every particular set of parameters for the universe is equally as probable as any other. As such, his dot-drawing example is particularly silly, and his later example of white and orange ping-pong balls suffers from the same problem.

Towards the very end of Part 16 of the *Excursus*, Dr. Craig discusses the manner in which some have attempted to answer the Fine-Tuning problem with appeals to a Multiverse:

Therefore, theorists have come to recognize that the Anthropic Principle will not eliminate the need of an explanation of the fine-tuning unless it is conjoined with a so-called Many Worlds Hypothesis or multiverse hypothesis. According to the Many Worlds Hypothesis our universe is just one member of a World Ensemble of parallel randomly ordered universes, preferably infinite in number. Often this ensemble is called the multiverse. If all of these other universes really exist and they are randomly ordered in their constants and quantities then by chance alone life-permitting worlds will appear in the ensemble. Since only finely tuned universes have observers in them, any observers existing in the World Ensemble will naturally look out and observe their worlds to be finely tuned. So the claim is no appeal to design is necessary in order to explain fine-tuning.

…Before I comment on the World Ensemble hypothesis, let’s just be sure we all understand it – how it is an attempt to rescue the alternative of design, and how it explains the fine-tuning of the universe that we observe.…

In order to explain fine-tuning, we are being asked to believe not only that there are other unobservable universes but that there are an infinite number of these universes, and moreover that they randomly vary in their constants and quantities. All of this is needed in order to guarantee that life-permitting universes like ours will appear by chance in the ensemble. This is really extraordinary when you think about it. It is a sort of back-handed compliment, if you will, to the design hypothesis. Because otherwise sober scientists would not be flocking to adopt so speculative and extravagant a view as the Many Worlds Hypothesis unless they felt absolutely compelled to do so.…

The design hypothesis enjoys independent reasons for thinking that such a being exists whereas there is no independent reason for thinking that the World Ensemble exists. It is simply postulated to explain the fine-tuning without any independent evidence for thinking that there is such a thing.

I know that this is a rather large block of text, so I’d like to highlight the really important portions of what Dr. Craig is, here, claiming. Important, mind you, because they are so incredibly wrong.

Dr. Craig seems to be claiming that Many Worlds was only developed as a means of metaphysically explaining away the Fine-Tuning problem. As if “otherwise sober scientists” decided to simply make up a completely *ad hoc* and ridiculous assertion for the sole purpose of being able to avoid appealing to design as a possibility. This is, of course, preposterously wrong. However, it is more than just wrong. William Lane Craig is overtly insulting both the character and the intelligence of all those who hold to the idea of Many Worlds. It is, quite frankly, rather disappointing to find Dr. Craig making use of such an abhorrent rhetorical tactic.

Many Worlds was not, as Dr. Craig implies, developed for the sole purpose of standing as a stop-gap against the idea of design in the Fine-Tuning debate. In fact, Many Worlds was not developed to discuss the Fine-Tuning debate, at all. Many Worlds is one of a number of different possible ways to interpret the mathematics of Quantum Physics, and it is for *that* purpose that Hugh Everett first proposed the idea in 1957. The concept stood, and was argued for and against, for decades before anyone thought to propose that the Many Worlds interpretation might offer some unique answers to the Fine-Tuning question.

The Many Worlds interpretation is certainly no more “extravagant” or “speculative” than any other interpretation of Quantum Mechanics. Indeed, it is actually a far simpler explanation than a number of other possible interpretations of QM. It is for *this* reason that Many Worlds began to become popular among physicists. There is absolutely no reason to think that holding to Many Worlds should imply that a physicist is not being a “sober scientist” as a result.

However, Dr. Craig doubles-down on his ludicrous line-of-thought by addressing a particular sociological survey:

In fact, when I was doing the seminar on fine-tuning last summer at St. Thomas University, one of the other professors in the seminar was Neil Manson, professor of philosophy. Neil had done an extraordinary sociological survey of contemporary cosmologists about issues like fine-tuning. I think this is the first and only such sociological survey done by a reputable organization published in a peer-reviewed journal that I know of. What Manson asked the cosmologists was, “Do you think that other theorists who adopt the multiverse hypothesis do so in order to avoid the design hypothesis?” He was very clever to ask it that way. He didn’t ask “Do *you* adopt it for that reason?” That would make them have to confess, “Yes, I as a scientist am really trying to avoid design, and that is why I believe in the World Ensemble.” No, he said, “Do you think your colleagues who believe in the multiverse are motivated by a desire to get away from design?”

I agree with Dr. Craig that Professor Manson was “very clever” to ask the question with the particular wording quoted here (assuming that it is accurately quoted). Of course, I differ with Dr. Craig on *why* that particular wording can be considered clever. If I know 1000 theorists, even if I only know 3 who “adopt the multiverse hypothesis… in order to avoid the design hypothesis,” then in order to answer the question honestly, I would have to say, “Yes, I think that other theorists who adopt the multiverse hypothesis do so in order to avoid the design hypothesis.” Even if I didn’t know any such theorists, personally, but I had heard rumors that some exist, it is quite likely I would answer that question in the affirmative. Even worse, if I neither knew any such theorists nor had heard rumors of such theorists, but held an unjustified belief that such theorists nonetheless exist, I would still answer that question with a “Yes.” Honestly, this question (as presented by Dr. Craig) seems to be very poorly worded, and far too vague to be of any real use.

The fact that there may be some theorists who adopt the multiverse hypothesis in order to avoid the design hypothesis does not, in any way, imply that the majority of multiverse supporters are so biased. Nor does it imply that the multiverse hypothesis is, at all, problematic.

The next time somebody says to you, “Oh, well, it could have happened by chance!” or “The improbable happens!” or “It was just dumb luck!” then ask them, “If that is the case, why do the detractors of design feel compelled to embrace an extravagance like the World Ensemble hypothesis in order to avoid design?” The fact that they would resort to such a metaphysical hypothesis I think is, as I say, the best evidence that the chance hypothesis is in deep trouble.

I am absolutely perplexed to hear such a statement issued from the mouth of a professional philosopher. Dr. Craig is fairly clearly claiming that the “best evidence that the chance hypothesis is in deep trouble” is a rather egregious Genetic Fallacy. Even if it was the case that “detractors of design feel compelled to embrace an extravagance like the World Ensemble hypothesis in order to avoid design,” it does not therefore follow that the Fine-Tuning of the universe could not have been the result of chance. William Lane Craig should be completely aware that the origin of a belief is irrelevant to the veracity of that belief.

Dr. Craig then shows his complete unfamiliarity with Many Worlds with the following:

One way to respond to the Many Worlds Hypothesis would be to show that the multiverse itself also requires fine-tuning. In order to be scientifically credible, some plausible mechanism has to be suggested for generating the many worlds in the ensemble. But if the Many Worlds Hypothesis is to be successful in attributing fine-tuning to chance alone, then the mechanism that generates the many worlds had better not be fine-tuned itself. Otherwise, you’ve just kicked the problem upstairs, and the whole debate arises all over again on the level of the multiverse.

The Many Worlds in the ensemble are not “generated,” at all. They are parallel. There does not need to be a “plausible mechanism… for generating the many worlds” any more than there needs to be a plausible mechanism by which the X-Axis generates the Y-Axis on a Cartesian Plane in order for us to plot a few points on a graph. This isn’t like an assembly-line machine popping out new Worlds every so often, as illustrated in this cartoon from Dr. Craig on the subject. Many Worlds simply proposes that every possible state of a Quantum Wave Function represents the actual state of some real world. These worlds are not “generated.” They don’t pop into existence due to some action. This does not require any fine-tuning, itself.

Now, with all that said, even if we were to ignore everything which I said in this article, and even if– for the sake of argument– we were to accept Dr. Craig’s claims that physical necessity and chance are unlikely explanations for Fine-Tuning, he’s left with a larger problem. Simply stating that other propositions are unlikely does not imply that your preferred option is *more* likely. Dr. Craig still has the burden to show that Design is even a valid possibility for explaining the Fine-Tuning problem. Of course, Dr. Craig recognizes this problem and ends part 16 with this:

So what about design? Is design any better an explanation of the fine-tuning of the universe? Or is it equally implausible? That will be the question that we take up next week.

Honestly, I was very excited to hear this. Whenever I have discussed the idea of Intelligent Design with an apologist, I have brought up these very questions. Unfortunately, I’ve only ever been met with answers about the purported improbability of chance or necessity. I’ve never been proffered any answers with positive evidence for the idea of Design, nor even with a proposed mechanism by which the Fine-Tuning of the universe *could* be Designed. In my next article, we’ll see if *Excursus* Part 17 can actually answer these questions reasonably.

Articles in this series:

- WLC doesn’t understand infinity, Part 1 (re: Excursus #9)
- WLC doesn’t understand infinity, Part 2 (re: Excursus #10)
- WLC dodges his own question (re: Excursus #17)