We can use a function for just such a purpose. A function is a mathematical tool which allows us to describe an entire set of data points all at once. We encode the data by means of a formula. For example, our exemplary rolling ball might well have been encoded by the function $f(t) = \frac{t}{2}$, where $t$ represents the time, in seconds, that the ball has been rolling, and the value of the function, $f(t)$, tells us the distance, in meters, which the ball has traveled in that time. In this particular function, the coefficient of $t$ tells us the rate at which distance changes as time passes– that is, half a meter per second. When the boy first rolls it, the ball is traveling at half a meter per second; when it finishes, it is traveling at half a meter per second; and at any single point during the journey the ball is traveling at half a meter per second.

However, this is a very simple example. It describes a situation involving a constant velocity. Things become a bit more muddied when the rate at which a change occurs is, itself, changing.

Our example above describes a **linear function**. Linear functions are so named because they can be graphed on a Cartesian plane to form a straight line. The equation for a linear function is of the form $y = mx + b$, where $b$ represents the y-intercept (the point at which the line crosses the y-axis of the plane) and where $m$ represents the slope of the line (the rate of change for the function). Utilizing the function from our example, we have a slope of $\frac{1}{2}$ and an intercept of $0$, and we can produce the following graph:

It’s very easy to see, intuitively, that this line’s slope, or rate of change, is constant throughout the whole function. We don’t even need to see the equation which generated this graph to know that this is the case, provided that the line on the graph is actually as straight as it appears. That very straightness is precisely what we mean by a constant rate of change. As such, it is perfectly clear that the graph has the same slope at $t = 1$ as it does at $t = 2$ or $t = 3$ or $t = 4$. Regardless of how far along the graph we look, it will always have the same rate of change.

However, this is not true of all graphs. When a function ceases to be linear, the rate of change of that function ceases to be constant. Take, for example, the following graph of the function $f(t) = \frac{t^2}{8}$:

Let’s pretend that, instead of rolling the ball across a flat floor, the little boy has instead set the ball atop a ramp and let go. The ball starts moving slowly, but builds up more and more speed as it moves farther and farther from the boy. After four seconds, the ball is two meters away from the boy– just as in our first example– which means that the ball still traveled half a meter per second, overall. However, it seems entirely clear that the ball was not moving at that speed at every single moment of the journey, the way it had when the boy rolled it across the floor. At the start of its roll, the ball is moving much slower than half a meter per second, while at the end it is moving much faster than half a meter per second.

This introduces a very interesting, and very important, question: how can we tell what the rate of change is at any given point? What is the **instantaneous rate of change**?

For example, let’s say I want to know how fast the ball is moving precisely 3 seconds after the boy has set it rolling. A person might think that they can simply determine how far the ball has gone in that time– $f(3) = \frac{9}{8}$ meters– and then divide that distance by the time– 3 seconds– to conclude that the ball is traveling at $\frac{3}{8}$ of a meter per second. However, this has the same problem as the whole 4-second journey: the ball seems to be traveling slower than $\frac{3}{8}$ of a meter per second at the start and faster than $\frac{3}{8}$ of a meter per second toward the end.

One way in which we know this fact is by looking at how far the ball travels between the second and third seconds of its journey. After two seconds, the ball is $\frac{1}{2}$ a meter from its starting point. After three seconds, it is $\frac{9}{8}$ of a meter from the starting point. This indicates that the ball traveled $\frac{5}{8}$ of a meter in one second. But this, again, falls prey to the same problem we’ve been having: the ball seems to be moving more slowly than $\frac{5}{8}$ of a meter per second at the 2-second mark and more swiftly at the 3-second mark. We’re closer to the speed of the ball at 3 seconds than we were before, but we still haven’t determined it, quite yet.

We can continue to take smaller intervals of time in order to find better and better approximations of the speed of the ball at the 3-second mark– using, for example, the distance the ball moves between the 2.5-second and 3-second marks, or the 2.75-second and 3-second marks, or the 2.99999999998-second and 3-second marks. We can come really, really close to the answer we’re trying to find by doing this, but we never end up with the exact answer– and mathematicians are not happy to settle for an inexact answer.
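
To see these approximations converge, here is a quick sketch in Python (my own illustration, not part of the original text) which computes the average speed of the ball over ever-smaller intervals ending at the 3-second mark, using the distance function $f(t) = \frac{t^2}{8}$:

```python
def f(t):
    # Distance (in meters) the ball has rolled after t seconds: f(t) = t^2 / 8
    return t**2 / 8

# Average speed over shrinking intervals that all end at t = 3 seconds.
for start in [0, 2, 2.5, 2.75, 2.99999999998]:
    average_speed = (f(3) - f(start)) / (3 - start)
    print(f"from t = {start} to t = 3: {average_speed} m/s")
```

The printed speeds creep steadily upward toward a single value– the instantaneous speed we are hunting for– without any one interval ever landing on it exactly.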

Let’s think about what we are doing in these approximations.

If the ball had traveled at a constant speed from the start, at time 0, to the 3-second mark, then its journey could be represented with the line $y = \frac{3}{8}t$. The slope of this line is $\frac{3}{8}$– which is the approximate speed we determined when considering the ball over this period. Similarly, the line through the points at the 2-second and 3-second marks has a slope of $\frac{5}{8}$, our approximation from the 2-second mark to the 3-second mark. If we were to calculate the slope of the line through the 2.5-second and 3-second points, our approximation would get even better. Visually, in the graph above, we can see that the linear graphs are getting closer and closer to the parabolic graph– but there’s always some tiny bit of space between the two.

Algebraically speaking, what are we doing in these approximations? How can we translate this problem into our mathematical language?

Well, we are taking the distance which the ball has traveled after 3 seconds– which, in our math language, is $f(3)$– and we are subtracting the distance which the ball had traveled at an earlier time– say, $f(0)$ or $f(2)$ or $f(2.5)$– to find the distance which the ball has traveled between those two times. We are then dividing this distance by the amount of time which has elapsed between the two points: 3 seconds or 1 second or 0.5 seconds.

Now let’s try to generalize this. We have our function, $f(x)$. We are looking at the difference between the value of the function at some point, $x$, and the value of the function at some subsequent point, $x + h$; we are then dividing that difference by the difference between our two points, $(x + h) - x$– which is just $h$. So, this leads us to the expression $\frac{f(x + h) - f(x)}{h}$.
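
As a sketch (mine, not from the original article), this general expression translates directly into code; the helper name `difference_quotient` is just an illustrative choice:

```python
def difference_quotient(f, x, h):
    # The average rate of change of f between x and x + h:
    #   (f(x + h) - f(x)) / h
    return (f(x + h) - f(x)) / h

ball = lambda t: t**2 / 8  # the distance function from our example

print(difference_quotient(ball, 3, 1))      # 0.875
print(difference_quotient(ball, 3, 0.1))    # closer to the true speed
print(difference_quotient(ball, 3, 0.001))  # closer still
```

Note that passing `h = 0` raises a `ZeroDivisionError`– the code runs into exactly the obstacle the next paragraph describes.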

As we have seen, the smaller the gap between our two $x$-values, the closer our approximation becomes. Naturally, we might then think that we can find an exact solution to our problem if we just remove the gap entirely– that is to say, what happens if we set $h$ equal to zero in the expression that we found above? However, we very quickly come to a problem if we do that. Evaluating the expression, we’ll see that $\frac{f(x + 0) - f(x)}{0} = \frac{0}{0}$. This is certainly problematic– any middle school child should be able to point out that we simply cannot evaluate that $\frac{0}{0}$, because division by zero is undefined.

But what if we had some number which wasn’t zero, and yet which was infinitely close to zero? In such a case, we could reasonably assume that our answer is infinitely close to being correct.

Thankfully, in the first part of this series, we learned that we do have such numbers: the infinitesimals. So now, if I replace the $h$ from our above expression with an arbitrary infinitesimal– let’s call it $\varepsilon$– we’ll find that $\frac{f(x + \varepsilon) - f(x)}{\varepsilon}$ evaluates to something infinitely close to the answer which we are looking to find. For the exact answer, as we mentioned, we would like to have been able to replace the $h$ with zero; but now we can be clever. Instead of attempting an undefined operation of math, dividing zero by zero, we can find the Real number which is infinitely close to our evaluated expression, which (as you will recall) is called the **standard part** of the expression. By taking the standard part of $\frac{f(x + \varepsilon) - f(x)}{\varepsilon}$, we can find the exact answer to our problem.

Let’s go back to our rolling ball, now, to see how we can put this into use. We want to find the exact speed of the ball at the 3-second mark. Translating this into our expression, we get:

$$\text{st}\left(\frac{f(3 + \varepsilon) - f(3)}{\varepsilon}\right) = \text{st}\left(\frac{\frac{(3 + \varepsilon)^2}{8} - \frac{9}{8}}{\varepsilon}\right) = \text{st}\left(\frac{6\varepsilon + \varepsilon^2}{8\varepsilon}\right) = \text{st}\left(\frac{6 + \varepsilon}{8}\right) = \frac{6}{8} = \frac{3}{4}$$

So, precisely at the 3-second mark, we now know that the ball is traveling at exactly $\frac{3}{4}$ of a meter per second. However, we can do even better than this. As mentioned earlier, mathematicians are greedy. We don’t just want to know what’s going on at a few of the points; we want to know what is going on at *all* of the points. So, rather than solving for some particular value of $x$, such as 3, we can solve the expression for *all* values of $x$, like so:

$$\text{st}\left(\frac{f(x + \varepsilon) - f(x)}{\varepsilon}\right) = \text{st}\left(\frac{\frac{(x + \varepsilon)^2}{8} - \frac{x^2}{8}}{\varepsilon}\right) = \text{st}\left(\frac{2x\varepsilon + \varepsilon^2}{8\varepsilon}\right) = \text{st}\left(\frac{2x + \varepsilon}{8}\right) = \frac{2x}{8} = \frac{x}{4}$$

This new function, $f'(x) = \frac{x}{4}$, is called the **derivative** of our original function. We denote the derivative of $f(x)$ with an apostrophe, written as $f'(x)$ and often read as “$f$-prime of $x$.”
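
We can even mimic this infinitesimal arithmetic in ordinary code. The sketch below is my own illustration, not part of the original series: it uses so-called dual numbers, which behave like a quantity $a + b\varepsilon$ in which terms of order $\varepsilon^2$ are simply discarded (true Hyperreal infinitesimals do not vanish when squared, so this is only an analogy). The names `Dual` and `derivative` are invented for this example:

```python
class Dual:
    # Represents a + b*eps, where eps is treated as an infinitesimal
    # so small that any eps**2 term may be discarded.
    def __init__(self, a, b=0.0):
        self.a = a  # the "standard part"
        self.b = b  # the coefficient on the infinitesimal

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a1 + b1*eps)(a2 + b2*eps) = a1*a2 + (a1*b2 + a2*b1)*eps + O(eps**2)
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

    __rmul__ = __mul__

def derivative(f, x):
    # Evaluate f at x + eps; the eps-coefficient of the result is f'(x).
    return f(Dual(x, 1.0)).b

ball = lambda t: t * t * (1 / 8)   # f(t) = t^2 / 8
print(derivative(ball, 3))          # 0.75
```

For our ball, this reports an instantaneous speed of $\frac{3}{4}$ of a meter per second at $t = 3$, and more generally it reproduces $f'(x) = \frac{x}{4}$ at any point we ask.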

The derivative is a very powerful tool. It gives us a way of describing the instantaneous rate of change for *all* points of a given function. When discussing speed or velocity, as we have been doing for our exemplary ball, the derivative of the function for distance gives a function describing velocity. The derivative of the function describing velocity will, in turn, give us a function describing acceleration. Taking the derivative of that function will then tell us how quickly our acceleration is, itself, increasing or decreasing– and so on and so forth. When we take derivatives of derivatives, like this, we refer to them as second, third, fourth derivatives (and so on). So, as we have now seen, the second derivative of a distance function is an acceleration function.

The derivative was developed by mathematicians for the express purpose of describing the changes in change. By its use and exploration, we can conquer a great many problems which are incredibly difficult– or even impossible– without this wonderful tool. And, at the very heart of the derivative lie the infinitesimals– these numbers between our numbers– which give this mathematical tool its power.


There are numbers in between the Rational numbers, too. We can define some number– $\sqrt{2}$, for example– which is not equal to any Rational number. There are Rational numbers which are greater than $\sqrt{2}$, and those which are less than $\sqrt{2}$, but somehow our number squeezes itself into a gap in between the Rational numbers. In order to find such a number, we need to further extend our understanding of “number” to include the Real numbers. This should all be very familiar to the average high-school student.

Now, what happens if we extend this idea one step further? Are there more numbers which are in between the Real numbers?

For thousands of years, mathematicians have had heated debates about this question. There is a well-known concept in number theory called the Archimedean property, named after the famous mathematician Archimedes (though he, himself, had attributed the idea to his friend and mentor, Eudoxus). Euclid described the notion by saying, “Magnitudes are said to have a ratio to one another which can, when multiplied, exceed one another.” In short, this means that given any two positive numbers, $a$ and $b$, such that $a < b$, we should be able to add $a$ to itself a finite number of times in order to find a number which is larger than $b$. For example, given the numbers 5 and 34, I can add 5 to itself seven times in order to get a number greater than 34– that is to say, $7 \times 5 = 35 > 34$. Given the numbers $\frac{1}{2}$ and $3$, we can find that $7 \times \frac{1}{2} = \frac{7}{2} > 3$. Given the numbers $0.01$ and $2$, we can find that $201 \times 0.01 = 2.01 > 2$.
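
A small Python sketch (my own, with the invented helper name `copies_needed`) finds the count the Archimedean property promises– the smallest number of copies of $a$ needed to exceed $b$:

```python
import math

def copies_needed(a, b):
    # Smallest positive integer n with n * a > b, assuming 0 < a < b.
    return math.floor(b / a) + 1

print(copies_needed(5, 34))   # 7, since 7 * 5 = 35 > 34
```

The Archimedean property guarantees that such a finite count always exists– which is exactly the guarantee that infinitesimals will later violate.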

However, this property led to a very curious problem when mathematicians began trying to discuss the number of points contained in a given line– particularly when those mathematicians attempted to compare the number of points in one line to the number of points in another line. The eminent philosopher, Aristotle, came to the conclusion that such discussions could be nothing but nonsense, and that any attempt to quantify the number of points in a given line would simply lead to confusion and folly. As an example of this, Aristotle discussed what has come to be known as his Paradox of the Wheel. Take a look at the following figure:

Aristotle noticed that, given a larger and a smaller circle which share a centerpoint, rolling out the wheel to produce a straight line equal in length to the circumference of the larger circle causes the smaller circle to produce an equally long line. But how can this be? The smaller circle obviously has a smaller circumference, but rolling it out at the same rotational rate as the larger circle makes it seem to have an equal circumference to the big one!

Galileo Galilei, two millennia after Aristotle, attempted to resolve this paradox by arguing that there must be gaps in the continuum– that is to say, there must be empty spaces between the points in any line or figure– and that these gaps account for how the smaller circle’s circumference can be stretched to equal that of the larger circle. However, other mathematicians were quick to note that this would run afoul of the Archimedean principle. If such gaps existed, it should be possible to continue to stretch them until they became noticeably large. We should be able to magnify a line until we literally see it rend apart into pieces.

In the latter half of the 17th Century, Gottfried Wilhelm Leibniz began to argue that there are numbers which are infinitely small. To the absolute shock of the mathematical community, Leibniz was claiming that a 2000-year-old immutable law of number theory was, in fact, incorrect. There existed numbers, Leibniz claimed, which violate the Archimedean principle; numbers which are greater than zero, but which are nonetheless so much smaller than any Real number that it is impossible to find a finite ratio between that infinitely small number and any Real. You could add the number to itself a thousand times, a million, a quintillion, a googolplex– even Graham’s number of times– and that number would still remain smaller than any positive Real number which you could possibly imagine.

Not only did Leibniz believe that such numbers exist, he utilized them in order to create an entirely new method of mathematics: Calculus. However, the idea was so incredibly controversial that even other proponents of Calculus– like Isaac Newton, who independently developed that field of mathematics– railed against Leibniz for his reliance upon such an insane concept. Still, Leibniz’s results were indisputable, and a number of mathematicians joined with him in an attempt to find some rigorous and logical means of discussing these infinitely small numbers. However, after a great deal of failure, other avenues began to be explored in order to place Calculus on a rigorous footing. Particularly, the notion of the Limit was put forth, expanded, and eventually made rigorous in the 19th Century by Karl Weierstrass. With a rigorous and logical footing finally established for Calculus, the infinitely small numbers of Leibniz’s devising were abandoned and Calculus classes began being taught based on the idea of the Limit.

Thankfully, this was not the end for our strange and tiny numbers. One hundred years after Weierstrass, and three hundred after Leibniz, a model theorist named Abraham Robinson began to attack the problem. He was fascinated by Leibniz, and wanted to gain a better understanding of the mind which had invented the calculus. Robinson’s work led to his development of a new number system: one which did not adhere to the Archimedean principle, but which otherwise behaved in exactly the same manner as the Real numbers. He called this new system the Hyperreal numbers. Just as mathematicians had extended the Integers to find the Rationals, and then extended the Rationals to find the Reals, Robinson extended the Real numbers in order to find the Hyperreals.

The Hyperreals contain all of the Real numbers, so any number on the Real line is also on the Hyperreal line. However, the Hyperreal number line also contains two very special types of numbers which are not contained in the Reals. The first of these are the Infinite numbers– numbers which have an absolute value greater than that of any Real number. That is to say, we can define some number $H$ such that, for any given Real number $r$, it is true that $|H| > |r|$. The second type are the Infinitesimal numbers. Infinitesimals are the reciprocals of Infinite numbers, and as such, have an absolute value which is smaller than that of any Real number (except 0, which is itself considered to be Infinitesimal): for any non-zero Real number $r$, we have $\left|\frac{1}{H}\right| < |r|$.

Any number which is not Infinite is called a Finite number– including the Infinitesimals. Once the system is in place, it becomes quite easy to prove some simple, but powerful, properties of the Hyperreals. Given any positive Infinite numbers, $H$ and $K$; any positive Real numbers, $a$ and $b$; and any positive, non-zero Infinitesimals, $\varepsilon$ and $\delta$; we can derive the following:

- $H + K$, $H + a$, and $H + \varepsilon$ are Infinite
- $H \cdot K$ and $H \cdot a$ are Infinite
- $\frac{H}{a}$, $\frac{H}{\varepsilon}$, and $\frac{a}{\varepsilon}$ are Infinite
- $a - b$ is Finite (and possibly Infinitesimal, in the case $a = b$)
- $a + \varepsilon$ is Finite and non-Infinitesimal
- $a + b$ and $a \cdot b$ are Finite and non-Infinitesimal
- $a \cdot \varepsilon$, $\frac{\varepsilon}{a}$, and $\frac{a}{H}$ are Infinitesimal

You may notice that there are several cases missing from the above list. These cases are indeterminate forms– that is to say, without knowing more about the particular numbers involved, it is impossible to tell whether the result will be Infinitesimal or Finite or Infinite. The indeterminate forms are:

- $\frac{\varepsilon}{\delta}$
- $\frac{H}{K}$
- $H \cdot \varepsilon$
- $H - K$

We can also derive another very important notion:

For any Finite Hyperreal number, $N$, there is exactly one Real number, $r$, such that $N - r$ is Infinitesimal. In such a case, we call $r$ the **standard part** of $N$, denoted as $\text{st}(N)$.

Any two numbers which are only separated by an Infinitesimal are said to be **infinitely close** to one another. As such, another way of wording the above is that the standard part of any Finite Hyperreal number is the Real number which is infinitely close to it.
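
As an illustration (my own examples, with $\varepsilon$ standing for any positive infinitesimal), a few standard parts work out as follows:

$$\text{st}(3 + \varepsilon) = 3 \qquad \text{st}(\varepsilon) = 0 \qquad \text{st}\left(7 - \frac{\varepsilon}{2}\right) = 7$$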

So, now, we can answer the question with which our article started. Are there numbers between the Real numbers? We find that the Hyperreals allow us to answer this with a resounding, “Yes!” Given any Real number, $r$, and any positive Infinitesimal, $\varepsilon$, we can be absolutely certain that there are no Real numbers which come between $r$ and $r + \varepsilon$. This concept is the absolute foundation of Infinitesimal Calculus.


Many other mathematicians and philosophers of the time rightfully balked at the notion. It seemed entirely ludicrous. Bishop George Berkeley famously scoffed at Newton, asking if his fluxions were “the ghosts of departed quantities.” However, it was quite plain that the mathematics which Leibniz and Newton presented *worked*. When the results which could be found from the methods of Calculus were able to be confirmed using other methods, they were found to be accurate and true. Indeed, the Calculus was such a powerful tool that even most mathematicians and philosophers who recognized its flaws continued to utilize it in their work. Many began searching for some way to make the Calculus just as rigorous as the rest of mathematics. These efforts culminated in the work of Karl Weierstrass, who found a way to base Calculus upon a different tool. Instead of the Newtonian “fluxion” or the Leibnizian “differential,” Weierstrass gave mathematics a well-defined notion of the limit.

It is Weierstrass’ method of limits which is still taught, even to this day, in nearly every Calculus textbook in the world; but perhaps it is time to abandon this notion and return to the concept which Newton and Leibniz pioneered.

In the 1960s, a mathematician named Abraham Robinson developed a rigorously well-defined number system called the Hyperreal numbers. This number system included numbers which are larger than any given Real number– known as “infinite” or “unlimited” numbers– as well as their reciprocals, which are greater than zero but nonetheless smaller than any positive Real number– known as “infinitesimals.” Robinson explicitly noted that his development of the Hyperreals came out of his desire to better understand Leibniz’s thought processes. Indeed, the infinitesimals of the Hyperreal numbers look very much like the “fluxions” and “differentials” of that early Calculus. In 1986, H. Jerome Keisler wrote a textbook for the subject, *Elementary Calculus: An Infinitesimal Approach*, in which he provides a method for teaching Calculus without the need for limits, while still maintaining the rigor desired in mathematics.

Unfortunately, Dr. Keisler’s work has not yet gotten much of a foothold in the educational system. The method of limits has been taught for so long that it would be exceedingly difficult to displace it. However, there are some very distinct pedagogical advantages in Keisler’s approach which may make the whole ordeal well worth the effort.

Let’s look at a simple example. One early Calculus problem with which every student is presented is to find the derivative of the function $y = x^2$. For those who don’t remember, the derivative of a function, $\frac{dy}{dx}$, tells us how much the value of that function changes with respect to a change in the value of *x*. So, let’s say that the value of *y* increases by some amount which we will call $dy$ when the value of *x* is increased by some amount $dx$. Algebraically, we would write this as $y + dy = (x + dx)^2$, for the equation we are discussing. We can then take this new equation, and solve it for the value of $\frac{dy}{dx}$ as follows:

1. $y + dy = (x + dx)^2$
2. $y + dy = x^2 + 2x\,dx + dx^2$
3. $dy = x^2 + 2x\,dx + dx^2 - y$
4. $dy = x^2 + 2x\,dx + dx^2 - x^2$
5. $\frac{dy}{dx} = \frac{2x\,dx + dx^2}{dx}$
6. $\frac{dy}{dx} = 2x + dx$
7. Let $dx = 0$
8. $\frac{dy}{dx} = 2x + 0$
9. $\frac{dy}{dx} = 2x$

It is these final three steps which the mathematicians of Newton and Leibniz’s day found to be offensive. According to the Calculus, the derivative was the function which results from setting $dx$ equal to zero. But if we are to say that $\frac{dy}{dx}$ actually has a value, then it must be true that $dx \neq 0$, because division by zero is undefined. However, if $2x + dx = 2x$, then it must be true that $dx = 0$, because zero is the only additive identity. Thus, we are left with a contradiction if we claim that $\frac{dy}{dx} = 2x$.

Later mathematicians, culminating in Weierstrass, resolved this issue by redefining the derivative to be a limit as the change in *x* approaches zero. Specifically, they said that $\frac{dy}{dx} = \lim_{\Delta x \to 0} \frac{f(x + \Delta x) - f(x)}{\Delta x}$. Of course, this raises a new question: what, precisely, is a limit? Well, if $f(x)$ is defined on an open interval about $x_0$, except possibly at $x_0$ itself, then $\lim_{x \to x_0} f(x) = L$ if, for every number $\varepsilon > 0$, there exists a corresponding number $\delta > 0$ such that for all *x* it is true that $0 < |x - x_0| < \delta$ implies that $|f(x) - L| < \varepsilon$. Needless to say, this is a fairly complex idea, which is why a large amount of time needs to be spent on teaching students how to properly find and evaluate limits.
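
To get a feel for what the limit definition is claiming, here is a quick numerical sketch (my own, not Weierstrass’s) for $y = x^2$ at $x = 3$, where the limit of the difference quotient should be $2x = 6$:

```python
def quotient(x, dx):
    # The difference quotient ((x + dx)**2 - x**2) / dx, which
    # simplifies algebraically to 2x + dx for any non-zero dx.
    return ((x + dx)**2 - x**2) / dx

for dx in [0.1, 0.01, 0.001, 0.0001]:
    print(f"dx = {dx}: quotient = {quotient(3, dx)}")
```

Each shrinking choice of `dx` lands within `dx` of 6– exactly the sort of bookkeeping the $\varepsilon$–$\delta$ definition formalizes– yet no single non-zero `dx` ever reaches 6 itself.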

Keisler’s resolution to the derivative problem we presented is somewhat simpler, and quite a bit more intuitive. In his *Elementary Calculus*, the $dx$ in the equations above is defined to be a non-zero infinitesimal. The derivative is then defined to be $\frac{dy}{dx} = \text{st}\left(\frac{f(x + dx) - f(x)}{dx}\right)$, where *st()* means “the standard part of…” The *standard part* of a finite Hyperreal number, *a*, is the Real number which is infinitely close to *a*; and two numbers are infinitely close if they differ only by an infinitesimal value. Looking again at Step 6 from our work above, we had the expression $\frac{dy}{dx} = 2x + dx$. Since we know that $dx$ is infinitesimal, we know that $2x + dx$ is infinitely close to $2x$. Thus, for any Real number, *x*, we can see that $\text{st}(2x + dx) = 2x$.

From a pedagogical standpoint, it would seem that Keisler’s method is superior. Hyperreal variables can be manipulated algebraically in exactly the same way students are already familiar with manipulating Real variables. The *standard part* function is quite a bit easier and more intuitive to learn than the *limit*. The method is far closer to the original ideas which created Calculus in the first place, and it is just as rigorous a treatment as the method by limits. Keisler and others have reported that students take to the material more easily when it is taught in this manner. Perhaps the time has come to leave off the use of limits, and to return to the method of infinitesimals for teaching Calculus.


Ancient languages maintain these problems, but add an entirely new layer of obfuscation which is not found even in most culturally distinct modern languages. Over the past few thousand years, human understanding of the world around us has changed quite significantly. Just one hundred years ago, no one had ever viewed the ground from five miles up in the air. Two hundred years ago, we had no idea that microscopic organisms cause disease. Three hundred years ago, humanity had no idea that oxygen exists. Four hundred years ago, the world was shocked to learn that the planet Jupiter has moons. The manner in which religion, philosophy, and science have discussed a myriad of things about reality has changed so greatly in recent millennia that very often even one word in a single language can mean something exceedingly different to people living in different periods of time.

The documents which comprise the New Testament of the Christian Bible were written 2000 years ago. In those ensuing twenty centuries, many of the words used by the original authors and many of the concepts which they espoused have engendered incredible amounts of revision, alteration, and nuance by subsequent philosophers and theologians which would have been wholly alien to those initial ancient writers. The vast majority of modern readers– including an embarrassingly large number of modern scholars of the text– seem wholly ignorant of this fact when they read a passage from their Bibles.

As an example of what I mean, let’s take a look at a short verse from one of the Gospels, Mark 1:10. The English Standard Version of the Bible, which I generally consider to be a good translation, renders this passage as:

And when he came up out of the water, immediately he saw the heavens being torn open and the Spirit descending on him like a dove.

This is a verse from Mark’s description of the baptism of Jesus by John the Baptist. It seems extremely straightforward, to a modern Christian reader. As Jesus comes up from the water, the Holy Spirit comes out of Heaven and alights upon Jesus in the way a beautiful bird might come down from its flight.

However, consider this alternate translation of the same text:

And when he came straight up from the water, he saw the skies being divided and the wind came down toward him like a pigeon.

This is very different, indeed. There is no mention of Heaven, or of the Holy Spirit. The translation gained the adverb “straight,” but lost the adverb “immediately.” Some of the words are similar, though slightly altered, like “divided” instead of “torn open” and “pigeon” instead of “dove.”

So which translation is correct? Well, as I insinuated earlier, “correct” may not even be a word which we can use when describing translations. However, there are definitely some good reasons to prefer my translation over that of the ESV. Let’s talk about the two biggest changes in my translation over the other: “heavens” versus “skies,” and “the Spirit” versus “the wind.”

The word which the ESV translates as “heavens” and which I translate as “skies” is οὐρανους (ouranous), which is the plural form of the word οὐρανος (ouranos). This one word is sometimes translated as “sky” and other times as “Heaven” by nearly every English translation, including the English Standard Version. Given how different the words are to a modern Christian, this might seem confusing. However, the ancient Greek language didn’t have different words for these two things. Neither did ancient Hebrew, nor Aramaic, nor Latin, nor any other ancient language of which I am aware.

There is a very good reason for this. The modern conception of Heaven as a place which exists wholly removed from the physical cosmos did not exist to these ancient people. When the ancients revered “the heavens” as a divine realm, they were literally talking about the sky which they looked up and saw every day. They referred to “Heaven” as being “above” them or “higher” than them because that’s where the sky actually is. This language doesn’t even make any sense on a cosmic scale, let alone when discussing something wholly distinct from the physical cosmos. The “Heaven” which is discussed by modern theologians is not “above” us, as it has no physical relation to us.

To further illustrate this, look at what the Gospels record Jesus, himself, as saying. In a number of passages (Matthew 24:30, 26:64; Mark 13:26, 14:62), Jesus talks about the Son of Man being seen in “Heaven” coming on the clouds. Modern readers know that clouds are just collections of water vapor in the sky– very physical things, and distinctly not what theologians would consider to be a part of the realm of the divine.

So, then, if the ancients were referring to the sky when using the word οὐρανος, then why do they pluralize it in many places, including the passage which we are here discussing? This is yet another place where ancient culture and modern collide. To us, there is only one sky. It wouldn’t even occur to most people that the word can be pluralized. However, the ancients had a very different understanding of that which resides above us than we do. They thought that the objects which we see in the sky above us– the sun, moon, planets, and stars– were literally attached to crystalline spheres each of which rotated at different distances from the ground. Those spheres were what the ancients meant when they were talking about the “heavens.” Humanity was able to distinguish seven celestial bodies which were distinct from the background of stars through the use of the naked eye. Each of these was considered to be attached to a distinct sphere rotating over the ground at different heights. According to Aristotle, the lowest of these heavens was the Moon, followed by Mercury, then Venus, then the Sun, then Mars, then Jupiter, and finally Saturn in the highest sphere.

Modern Christian theologians generally do not believe that there are multiple divine realms, so pluralizing οὐρανος makes no more sense in light of modern theology than it does in modern cosmology; but it was perfectly rational to an ancient people who truly believed that there were multiple skies above us. In fact, in the New Testament, itself, we have an example of another writer who most certainly espoused this view. Paul, the eminent apostle whose name is attached to nearly half of all the books that make up the New Testament, says in 2 Corinthians 12:2, “I know a man in Christ who fourteen years ago (whether in the body or out of the body I do not know, God knows) was carried off to the third heaven.” Again, this fits rather perfectly with the ancient understanding of the world, but clashes rather significantly with modern cosmology and most modern Christian theology.

For these reasons, I think that “sky” is a much better, and much more preferable, translation of the word οὐρανος than is “Heaven.”

The distinction between the ESV’s “Spirit” translation and my “wind” is a very similar case. Here, the word being translated is πνεῦμα (pneuma). Just as before, the ESV and other English translations alternately use “spirit,” “Spirit,” “breath,” and “wind” to translate this word. You’ll notice that I listed both lower-case “spirit” and upper-case “Spirit,” separately. I did so intentionally, because when translators use that capitalized “S” version of the word, they are saying that the author was referring to the Holy Spirit– as in, the third person of the Trinity– as opposed to any other “spirit.”

Again, the word πνεῦμα carries with it cultural connotations which are somewhat alien to modern readers. The Greek word primarily means “breath” or “wind.” However, again, the ancient people had no concept of modern physics or chemistry. They didn’t know that air is composed of molecules which move and bounce off of other molecules, imparting Newtonian forces in order to cause the motion which we see. All that they knew was that, somehow, the invisible forces of “breath” and “wind” could affect that which was visible. The ancient Hebrews, as well as a few other ancient Near East cultures, came to associate this invisible force with those invisible qualities of a person which animate the visible. As such, in ancient Hebrew, the word רוח (ruach) literally meant “wind” or “breath,” but the “wind” of a person was the part of that person which truly gave them life. This carries down even into modern English in idioms like “the breath of life.”

As Hebrew people in the Greek-speaking Diaspora of the Roman Empire began to utilize ancient Greek in addition to– or even in place of– their ethnic tongue, they began to have the need to discuss these concepts in the common language of the area. As such, they chose to use the word πνεῦμα in the same way that they had utilized רוח, previously. While the Greek word also conveyed a sense of invisible force, most of the Hellenic citizenry of ancient Rome didn’t see πνεῦμα as being something personal or intelligent. That personal connotation seems to have been the result of a syncretization between Hebrew and Hellenic cultures.

Modern theologians, just as with “Heaven,” do not regard “spirit” to be a physical thing, in the least. To them it is, in fact, the precise opposite of physical. It is entirely non-physical, and while it (somehow) imparts personhood into a being, the physical body of that being is just a shell to contain the spirit. But, again as before, this was not a concept held by ancient peoples. To them, a person’s wind was categorically no different than a storm’s wind. The idea that something might be wholly removed from the physical world would have been entirely alien to most ancient people. Among those who did hold to such a concept– for example, Plato and those who accepted his theory of universals– it would have been entirely anathema to refer to such things as “wind.” After all, “wind” is a thing which can certainly be perceived– perhaps not by the eyes, but certainly by senses like touch and hearing and sometimes even taste or smell. The Platonists insisted that the universals were entirely imperceptible, and that notions like space and time– which can certainly be applied to wind– are entirely meaningless in regard to the universals. It seems quite unlikely that the authors of the New Testament had in mind the modern conception of “spirit” when they used the word πνεῦμα in their writings.

For these reasons, I believe that “wind” and “breath” are far better renderings of the word πνεῦμα than is the word “spirit.”

The words which the ESV used in translating Mark 1:10– “heavens” and “Spirit”– are part of a category of terminology which I refer to as Theologically Loaded Language. These are words which have undergone literally millennia of theological revision and discussion, and which have come to mean very different things than the original text which they translate. These two are just a very tiny example of a rather huge list which includes very common Christian words like “gospel,” “Christ,” “sin,” “angel,” “devil,” “baptism,” “Scriptures,” and many, many others.

For some time, now, I’ve wanted to do a translation of the New Testament books which avoids utilizing this sort of Theologically Loaded Language. I honestly believe that such a translation would be eminently useful to *all* people interested in the Bible, believer and skeptic alike. I would start with Mark– the earliest and the shortest of the Gospels– and progress from there. Unfortunately, however, this would require a great deal of time and effort, even just to produce a single book. I’ve thought about trying to drum up some interest with a crowdfunding site like Kickstarter, IndieGoGo, Patreon, or GoFundMe, but I’ve been somewhat reluctant. Would this be something in which you, my readers, might be interested? If so, please let me know in the comments. If I can engender enough interest, I may well move forward with such a project.

]]>

Consider this slightly modified version of the thought experiment…

Fred is sitting in a room at 8:00 am. There exist four Grim Reapers along with Fred, each of which is currently dormant. When any individual Grim Reaper becomes activated, if Fred is not going to be killed by the next Reaper in the order, then this Reaper will instantaneously kill Fred; otherwise, this Reaper will return to a dormant state and continue to do nothing. Each of the Grim Reapers is timed to activate at a specific time after 8:00 am. The first Reaper will activate at 8:15 am. The second activates at 8:30 am. The third activates at 8:45 am. The fourth activates at 9:00 am.

Now, 8:15 arrives and the first Reaper activates. Does it kill Fred or not? If it does kill Fred, because the second Reaper is not going to kill Fred, then the 3rd Reaper in the line is not going to kill Fred– it can’t, obviously, since Fred is already dead. However, if that’s the case, then the second Reaper *is* going to kill Fred (since those conditions are met) and the first Reaper’s conditions are no longer valid. So, even though we started assuming that the first Reaper killed Fred, we’ve learned that this cannot be the case. Indeed, the same holds true for the second Reaper– if the second Reaper kills Fred, then the fourth Reaper cannot kill Fred meaning that the third Reaper should kill Fred, violating our initial assumption. So, we see that the second Reaper is not going to kill Fred. But if the second Reaper isn’t going to kill Fred, then the first Reaper should– except that we’ve already seen this cannot happen.
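For readers who prefer to see the inconsistency mechanically, here is a minimal sketch in Python. It assumes one particular formalization (my own, not spelled out in the thought experiment): a Reaper “fires” whenever its activation condition is met, a Reaper’s condition is met exactly when the next Reaper in order will not fire, and Fred can only actually be killed once. Under those assumptions, no assignment of firings is consistent:

```python
from itertools import product

# Check every possible assignment of "this Reaper fires" to the four
# Reapers, under one hypothetical formalization of the rules.
consistent = []
for fires in product([False, True], repeat=4):
    # There is no fifth Reaper, so Reaper 4's "next Reaper" never fires.
    next_fires = list(fires[1:]) + [False]
    # Rule: each Reaper fires iff the next Reaper in order will not fire.
    obeys_rule = all(fires[i] == (not next_fires[i]) for i in range(4))
    # Physical constraint: Fred dies at most once, so at most one Reaper
    # can actually carry out a killing.
    at_most_one_kill = sum(fires) <= 1
    if obeys_rule and at_most_one_kill:
        consistent.append(fires)

print(consistent)  # -> [] : no assignment satisfies both constraints
```

The rule alone forces the single pattern (dormant, fires, dormant, fires), which would require two separate killings; adding the constraint that Fred dies only once rules even that out, so on this formalization the scenario admits no consistent outcome at all.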

Unlike Pruss’s formulation of the paradox, this problem cannot be resolved by simply claiming that actual infinites cannot exist. We’re not relying on actual infinities, here. We are looking at a finite number of Grim Reapers. Nor does it seem reasonable to come to the sort of conclusion which Pruss does in his proposed solution to the paradox. If a person tried to claim that the number “four” cannot actually be a number which applies to the real world because of this paradox, we would all laugh in their faces.

It’s a little bit easier to see the point I was trying to make in my other post, now. Regardless of whether one is an A-Theorist or a B-Theorist as far as Time is concerned, both camps agree that events which lie in the future do not alter the ontology of events in the present. On the A-Theory view of things, I cannot make a decision based upon a future which has not yet been actualized. Things which are not yet actual cannot affect that which is actual, and as such, it is clear that my version of the Grim Reaper Paradox violates this view of things.

Similarly, on the B-Theory, causality is a description of a relation between two events, but it doesn’t affect the ontology of those events. So an event in the future cannot alter the ontology of something in the present. Both events are actualized and static, and my version of the Grim Reaper Paradox violates this precept. However, this also means that events in the present do not alter the ontology of events in the future. The future is just as actual and static as are the past and present, on the B-Theory. As such, it becomes immediately clear that Pruss’ version of the Grim Reaper Paradox violates this same precept, since it is dependent upon the idea that an event can affect the ontology of future events.

I do not think that Pruss’ version of the Grim Reaper paradox shows that actual infinities are inapplicable to the real world any more than my version of this thought experiment shows that the number “four” is inapplicable to the real world. In fact, it seems to me that the paradox is best resolved by abandoning an antiquated and untenable idea of the nature of Time. Apologists like William Lane Craig have attempted to cite the Grim Reaper paradox in order to support the Kalam Cosmological Argument. Ironically, it may be the case that the Grim Reaper Paradox actually *undermines* the KCA, since that argument is entirely dependent upon the tensed A-Theory of Time.

]]>

Today, we will be discussing Part 17 of the *Excursus*. If you read my article on Part 16, you might remember that I was actually quite excited for this, due to Dr. Craig’s promise to discuss the plausibility of Design as an explanation of the universe’s fine-tuning. As I mentioned, whenever I have discussed the idea of Intelligent Design with an apologist, I have brought up this very subject. Unfortunately, I’ve only ever been met with answers about the purported improbability of chance or necessity. I’ve never been proffered any answers with positive evidence for the idea of Design, nor even with a proposed mechanism by which the Fine-Tuning of the universe *could* be Designed.

Early on in the discussion, Dr. Craig makes a statement with which I wholeheartedly agree:

But we cannot infer immediately to design because sometimes it can be justified to believe in an improbable explanation. You would be justified in believing in some improbable explanation just in case there were no better explanation available of the phenomenon in question…

The question we are facing now with regard to the fine-tuning of the universe is: is design a better explanation than chance or physical necessity?

Yes, this most certainly is the question! So, how does Dr. Craig answer this question? Does he define what, exactly, he means by the term “design?” Does he offer some method for differentiating something which is “designed” from something which is not “designed?” Does he then apply this standard to the question of Fine-Tuning in order to show that the constants and quantities of the universe more keenly fit into the “designed” category than the “not designed” category?

Dr. Craig does none of this. He never even attempts to establish that Design is plausible. Instead, he simply *presumes* that Design is plausible, then spends the rest of the time talking about a poor line of argument from Richard Dawkins. Seriously, that’s it. William Lane Craig seems to be claiming that because Dawkins makes a bad argument refuting Design, Design is therefore more plausible than Chance or Physical Necessity in explaining the Fine-Tuning question.

In response, I can think of nothing more appropriate than a paraphrase of Dr. Craig’s own words:

*I think everyone will find that conclusion jarring because the conclusion “Therefore, Design is more plausible than Chance or Necessity” doesn’t follow from the fact that Dawkins made a poor attempt at refuting Design. There are no rules of logic that would permit you to derive such an inference. There are no rules of logic that would draw that conclusion from the truth of that statement. Craig’s argument is just plainly invalid. The central argument of Craig’s Fine-Tuning discussion is a patently invalid argument.*

I’m fairly certain that this is the shortest response I’ve written to one of William Lane Craig’s arguments. Richard Dawkins is a biologist, and not a philosopher. He’s a vitriolic anti-theist, and not a theologian. When he makes a laughably invalid argument, it’s to be expected. William Lane Craig, on the other hand, holds a PhD in Philosophy. Philosophy is his profession. When *he* makes a laughably invalid argument, there is simply no excuse.

Articles in this series:

- WLC doesn’t understand infinity, Part 1 (re: Excursus #9)
- WLC doesn’t understand infinity, Part 2 (re: Excursus #10)
- WLC doesn’t understand cosmology (re: Excursus #16)

]]>

Unfortunately for our esteemed theologian, his understanding of cosmology seems to be just as poor as his understanding of mathematics.

The first statement which I would like to address is, ostensibly, a summary regarding the previous Part 15 of the *Excursus*. I neglected to address that segment more fully, because I have written a similar article previously. So, when Dr. Craig states that, in Part 15:

We saw that the fundamental constants and boundary conditions of the universe are fine-tuned for the evolution and existence of embodied conscious agents in a degree that is incomprehensibly delicate as well as complex.

…I must vehemently disagree. It is, undeniably, true that there are quite a number of constants, in our current cosmological models, whose alteration would result in a very different universe than the one which we see. However, that does not imply that these “fundamental constants and boundary conditions of the universe are fine-tuned for the evolution and existence of embodied conscious agents.” The fact that the universe would be different if we were to change its parameters does not imply that those parameters have *any* specific teleology, let alone that they are finely-tuned explicitly for life.

The Fine-Tuning problem, in physics, is the question, “Why does the universe have the values for constants which it has, rather than other values?” It is not, as Dr. Craig likes to pretend, “Why is the universe finely-tuned for the existence of life?”

Continuing, Dr. Craig states that, again in Part 15:

We’ve already seen that the first alternative – that this is a matter of physical necessity – is highly implausible. This is contrary to the best evidence of science. The best evidence indicates that these constants and quantities are independent of the laws of nature, and that there is nothing physically that would determine that they should have the finely tuned values that they do.

Once again, Dr. Craig oversteps the bounds of reason in this claim. Had Dr. Craig simply stated, “There is no good reason from cosmology to think that these constants and quantities have their values as a matter of physical necessity,” he would have been fairly accurate. However, he instead erroneously states that, “The best evidence indicates that these constants and quantities are independent of the laws of nature.” This is absolutely incorrect. We have no good reason to think that these constants *are* physically necessary, but neither do we have a good reason for thinking that they *are not* physically necessary.

Essentially, Dr. Craig is claiming that because we do not have a good reason to believe that these constants are physically necessary, they are therefore not physically necessary. This is a rather blatant *Argument from Ignorance* fallacy, and should be plainly evident as such to as studied a philosopher as William Lane Craig.

From here, Dr. Craig moves into discussing whether or not our finely-tuned universe could have been as a result of chance. He claims that:

The fundamental problem with this explanation is that the chances of a life-permitting universe’s existing are so remote that this alternative becomes unreasonable.

…John Barrow, who is a Cambridge University physicist, gives the following illustration of the sense in which it can be said that it is highly improbable that a finely tuned universe should exist. Barrow said let’s imagine a sheet of paper and put on it a dot representing our universe. Now alter some of the fundamental constants and quantities by just tiny amounts. That will then be a description of a new universe. If that universe is life-permitting, make it another red dot. If it is life-prohibiting, we will make it a black dot. Then do it again, do it again, and do it again until your sheet of paper is filled with dots. What you wind up with is a sea of black with only a couple of pinpricks of red in the field. It is in that sense that it is overwhelmingly improbable that the universe should be life-permitting. There are simply many more life-prohibiting universes than life-permitting universes in our local area of possible universes.

There is a very, very glaring problem, in this model, which is immediately apparent when one begins to actually consider the manner in which probability works. The simple fact that there are vastly more ways to arrange the parameters of the universe which are “life-prohibiting” than ways which are “life-permitting” does not imply that life-prohibiting universes are therefore more probable. Dr. Craig is making the baseless assumption that any specific arrangement of universal parameters is equally likely as all the rest to occur.

Here’s a quick illustration of what I mean. Take a regular six-sided die, like you might find in a Monopoly board game. Now, if we were to roll that die, there are six possible values which can be attained: 1, 2, 3, 4, 5, and 6. There are six different possibilities. The chances of rolling any particular value are equal: 1-in-6, or about 16.7%. It is just as probable that your roll will result in a value of 6 as in a value of 1, or of 2, or of 3, or of 4, or of 5. This would be somewhat akin to the model which Dr. Craig is explicating. Every possible value has an equal chance of appearing, so if we needed to roll, say, a 6 then it would be quite probable that our roll will fail.

Now, let’s change things up a little bit. Instead of one six-sided die, let’s look at what happens if we roll two six-sided dice. Now, the possible values we can attain are 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, and 12. There are eleven different possibilities; however, the odds of any particular value are *not* 1-in-11. It is absolutely not true that you have just as much chance of rolling a 2 as you do a 3, or a 4, or a 5, for example. In fact, you are more likely to roll a 7 than to roll any other specific value. The reason for this is that the value of your roll is determined by the combination of the two dice. The first die has six different possible results, and the second die also has six different possible results, and the result of each die is independent of the other. Because of this, there are actually thirty-six different possible combinations. Of these thirty-six, only one combination of the two dice will result in a value of 2 (when both dice show 1’s). Therefore the probability of rolling a 2 is only 1-in-36. However, there are six possible combinations which will result in a value of 7 (1-6, 2-5, 3-4, 4-3, 5-2, and 6-1), which means that we have a 1-in-6 chance of receiving this value. Unlike the picture Dr. Craig is trying to paint, in this case, not every result is equally likely.
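These counts are easy to verify by brute force. Here is a quick sketch in Python, enumerating all thirty-six ordered outcomes of two fair six-sided dice:

```python
from itertools import product

# Enumerate all 36 ordered outcomes of rolling two fair six-sided dice.
rolls = list(product(range(1, 7), repeat=2))
counts = {}
for a, b in rolls:
    counts[a + b] = counts.get(a + b, 0) + 1

# A total of 2 arises from only one combination, while 7 arises from six.
print(counts[2], counts[7], len(rolls))  # -> 1 6 36
```

So the probability of rolling a 7 is 6-in-36, or 1-in-6, while the probability of rolling a 2 is only 1-in-36– exactly as described above.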

Now, let’s really throw things for a loop. Instead of two six-sided dice, let’s think about two twenty-sided dice. However, these are not normal twenty-sided dice. Let’s think about dice in which the numbers 1, 2, 3, 4, and 5 appear on exactly one side of the die, while the number 6 appears on the remaining fifteen sides. Just as in our last example, these two dice can attain eleven possible values: 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, and 12. However, the probabilities in this example are far more skewed than before. There are 400 different possible combinations which can be rolled on the two dice. However, of these 400 combinations, 225 will result in our rolling a value of 12.

Now, let’s liken this to Dr. Craig’s example. Let’s say that a result of 12 is life-permitting, while a result of anything else is life-prohibiting. If we were to take a paper and draw one red dot for our 12, and ten black dots for the values 2 through 11, it would look like our life-permitting universe is very unlikely. However, there’s actually a better-than-50% chance that we will roll a 12!
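The same brute-force check confirms the weighted example. The dice below are the hypothetical twenty-sided dice described above, with the faces 1 through 5 appearing once each and 6 filling the remaining fifteen faces:

```python
from itertools import product

# A weighted d20: faces 1-5 appear once each; 6 appears on 15 faces.
weighted_die = [1, 2, 3, 4, 5] + [6] * 15
assert len(weighted_die) == 20

rolls = list(product(weighted_die, repeat=2))  # all 400 ordered outcomes
twelves = sum(1 for a, b in rolls if a + b == 12)

print(len(rolls), twelves, twelves / len(rolls))  # -> 400 225 0.5625
```

A value of 12 requires both dice to show a 6, which happens in 15 × 15 = 225 of the 400 combinations– a 56.25% chance, despite 12 being only one of the eleven possible values.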

Dr. Craig does nothing to show that every particular set of parameters for the universe is equally as probable as any other. As such, his dot-drawing example is particularly silly, and his later example of white and orange ping-pong balls suffers from the same problem.

Towards the very end of Part 16 of the *Excursus*, Dr. Craig discusses the manner in which some have attempted to answer the Fine-Tuning problem with appeals to a Multiverse:

Therefore, theorists have come to recognize that the Anthropic Principle will not eliminate the need of an explanation of the fine-tuning unless it is conjoined with a so-called *Many Worlds Hypothesis* or *multiverse hypothesis*. According to the Many Worlds Hypothesis our universe is just one member of a World Ensemble of parallel randomly ordered universes, preferably infinite in number. Often this ensemble is called the multiverse. If all of these other universes really exist and they are randomly ordered in their constants and quantities then by chance alone life-permitting worlds will appear in the ensemble. Since only finely tuned universes have observers in them, any observers existing in the World Ensemble will naturally look out and observe their worlds to be finely tuned. So the claim is no appeal to design is necessary in order to explain fine-tuning.

…Before I comment on the World Ensemble hypothesis, let’s just be sure we all understand it – how it is an attempt to rescue the alternative of chance, and how it explains the fine-tuning of the universe that we observe.…

In order to explain fine-tuning, we are being asked to believe not only that there are other unobservable universes but that there are an infinite number of these universes, and moreover that they randomly vary in their constants and quantities. All of this is needed in order to guarantee that life-permitting universes like ours will appear by chance in the ensemble. This is really extraordinary when you think about it. It is a sort of back-handed compliment, if you will, to the design hypothesis. **Because otherwise sober scientists would not be flocking to adopt so speculative and extravagant a view as the Many Worlds Hypothesis unless they felt absolutely compelled to do so.** …The design hypothesis enjoys independent reasons for thinking that such a being exists whereas **there is no independent reason for thinking that the World Ensemble exists. It is simply postulated to explain the fine-tuning without any independent evidence for thinking that there is such a thing.**

I know that this is a rather large block of text, so I’ve added bold emphasis to highlight the really important portions of what Dr. Craig is, here, claiming. Important, mind you, because they are so incredibly wrong.

Dr. Craig seems to be claiming that Many Worlds was only developed as a means of metaphysically explaining away the Fine-Tuning problem– as if “otherwise sober scientists” decided to simply make up a completely *ad hoc* and ridiculous assertion for the sole purpose of being able to avoid appealing to design as a possibility. This is, of course, preposterously wrong. However, it is more than just wrong. William Lane Craig is overtly insulting both the character and the intelligence of all those who hold to the idea of Many Worlds. It is, quite frankly, rather disappointing to find Dr. Craig making use of such an abhorrent rhetorical tactic.

Many Worlds was not, as Dr. Craig implies, developed for the sole purpose of standing as a stop-gap against the idea of design in the Fine-Tuning debate. In fact, Many Worlds was not developed to discuss the Fine-Tuning debate, at all. Many Worlds is one of a number of different possible ways to interpret the mathematics of Quantum Physics, and it is for *that* purpose that Hugh Everett first proposed the idea in 1957. The concept stood, and was argued for and against, for decades before anyone thought to propose that the Many Worlds interpretation might offer some unique answers to the Fine-Tuning question.

The Many Worlds interpretation is certainly no more “extravagant” or “speculative” than any other interpretation of Quantum Mechanics. Indeed, it is actually a far simpler explanation than a number of other possible interpretations of QM. It is for *this* reason that Many Worlds began to become popular among physicists. There is absolutely no reason to think that holding to Many Worlds should imply that a physicist is not being a “sober scientist” as a result.

However, Dr. Craig doubles-down on his ludicrous line-of-thought by addressing a particular sociological survey:

In fact, when I was doing the seminar on fine-tuning last summer at St. Thomas University, one of the other professors in the seminar was Neil Manson, professor of philosophy. Neil had done an extraordinary sociological survey of contemporary cosmologists about issues like fine-tuning. I think this is the first and only such sociological survey done by a reputable organization published in a peer-reviewed journal that I know of. What Manson asked the cosmologists was, “Do you think that other theorists who adopt the multiverse hypothesis do so in order to avoid the design hypothesis?” He was very clever to ask it that way. He didn’t ask “Do *you* adopt it for that reason?” That would make them have to confess, “Yes, I as a scientist am really trying to avoid design, and that is why I believe in the World Ensemble.” No, he said, “Do you think your colleagues who believe in the multiverse are motivated by a desire to get away from design?”

I agree with Dr. Craig that Professor Manson was “very clever” to ask the question with the particular wording quoted here (assuming that it is accurately quoted). Of course, I differ with Dr. Craig on *why* that particular wording can be considered clever. If I know 1000 theorists, even if I only know 3 who “adopt the multiverse hypothesis… in order to avoid the design hypothesis,” then in order to answer the question honestly, I would have to say, “Yes, I think that other theorists who adopt the multiverse hypothesis do so in order to avoid the design hypothesis.” Even if I didn’t know any such theorists, personally, but I had heard rumors that some exist, it is quite likely I would answer that question in the affirmative. Even worse, if I neither knew any such theorists nor had heard rumors of such theorists, but held an unjustified belief that such theorists nonetheless exist, I would still answer that question with a “Yes.” Honestly, this question (as presented by Dr. Craig) seems to be very poorly worded, and far too vague to be of any real use.

The fact that there may be some theorists who adopt the multiverse hypothesis in order to avoid the design hypothesis does not, in any way, imply that the majority of multiverse supporters are so biased. Nor does it imply that the multiverse hypothesis is, at all, problematic.

The next time somebody says to you, “Oh, well, it could have happened by chance!” or “The improbable happens!” or “It was just dumb luck!” then ask them, “If that is the case, why do the detractors of design feel compelled to embrace an extravagance like the World Ensemble hypothesis in order to avoid design?” The fact that they would resort to such a metaphysical hypothesis I think is, as I say, the best evidence that the chance hypothesis is in deep trouble.

I am absolutely perplexed to hear such a statement issued from the mouth of a professional philosopher. Dr. Craig is fairly clearly claiming that the “best evidence that the chance hypothesis is in deep trouble” is a rather egregious Genetic Fallacy. Even if it was the case that “detractors of design feel compelled to embrace an extravagance like the World Ensemble hypothesis in order to avoid design,” it does not therefore follow that the Fine-Tuning of the universe could not have been the result of chance. William Lane Craig should be completely aware that the origin of a belief is irrelevant to the veracity of that belief.

Dr. Craig then shows his complete unfamiliarity with Many Worlds with the following:

One way to respond to the Many Worlds Hypothesis would be to show that the multiverse itself also requires fine-tuning. In order to be scientifically credible, some plausible mechanism has to be suggested for generating the many worlds in the ensemble. But if the Many Worlds Hypothesis is to be successful in attributing fine-tuning to chance alone, then the mechanism that generates the many worlds had better not be fine-tuned itself. Otherwise, you’ve just kicked the problem upstairs, and the whole debate arises all over again on the level of the multiverse.

The Many Worlds in the ensemble are not “generated,” at all. They are parallel. There does not need to be a “plausible mechanism… for generating the many worlds” any more than there needs to be a plausible mechanism by which the X-Axis generates the Y-Axis on a Cartesian Plane in order for us to plot a few points on a graph. This isn’t like an assembly-line machine popping out new Worlds every so often, as illustrated in this cartoon from Dr. Craig on the subject. Many Worlds simply proposes that every possible state of a Quantum Wave Function represents the actual state of some real world. These worlds are not “generated.” They don’t pop into existence due to some action. This does not require any fine-tuning, itself.

Now, with all that said, even if we were to ignore everything which I said in this article, and even if– for the sake of argument– we were to accept Dr. Craig’s claims that physical necessity and chance are unlikely explanations for Fine-Tuning, he’s left with a larger problem. Simply stating that other propositions are unlikely does not imply that your preferred option is *more* likely. Dr. Craig still has the burden to show that Design is even a valid possibility for explaining the Fine-Tuning problem. Of course, Dr. Craig recognizes this problem and ends part 16 with this:

So what about design? Is design any better an explanation of the fine-tuning of the universe? Or is it equally implausible? That will be the question that we take up next week.

Honestly, I was very excited to hear this. Whenever I have discussed the idea of Intelligent Design with an apologist, I have brought up these very questions. Unfortunately, I’ve only ever been met with answers about the purported improbability of chance or necessity. I’ve never been proffered any answers with positive evidence for the idea of Design, nor even with a proposed mechanism by which the Fine-Tuning of the universe *could* be Designed. In my next article, we’ll see if *Excursus* Part 17 can actually answer these questions reasonably.

Articles in this series:

- WLC doesn’t understand infinity, Part 1 (re: Excursus #9)
- WLC doesn’t understand infinity, Part 2 (re: Excursus #10)
- WLC dodges his own question (re: Excursus #17)

]]>

Well, as I mentioned, the very counterintuitive nature of the result led at least one of my readers to question its validity. As such, I thought I would lay out one proof of this concept, in order to make it easier for those who do not accept the result to pinpoint exactly where they disagree. I’ll break my proof down into numbered steps, to ease in that venture.

By the symbol, 0.999…, I mean to say an infinite decimal expansion in which all digits to the right of the decimal point are 9’s. Mathematically, we can express this as:

**(1)** $0.999\ldots = \sum_{n=1}^{\infty} \frac{9}{10^n}$

Those of you who remember your Calculus might immediately recognize this summation as a textbook example of a convergent geometric series. However, for those who do not, let’s work through the steps to determine the limit of this expression.

Provided the series converges, we say that the value of the summation is equal to that series’ limit.

**(a)** $\sum_{n=1}^{\infty} \frac{9}{10^n} = \lim_{k \to \infty} S_k$

Similarly, if convergent, the limit of the series is equal to the limit of the partial sums of the series. In general, the *k*th partial sum of our series, $S_k$, can be seen to be:

**(b)** $S_k = \sum_{n=1}^{k} \frac{9}{10^n} = 1 - \frac{1}{10^k}$

So, as long as the series converges, we can see that:

**(c)** $\sum_{n=1}^{\infty} \frac{9}{10^n} = \lim_{k \to \infty} \left(1 - \frac{1}{10^k}\right)$

If the partial sums form a convergent sequence, then the whole series converges. A convergent sequence is one which has an existent, finite limit.

**(a)** $S_k = 1 - \frac{1}{10^k}$

**(b)** $\lim_{k \to \infty} S_k = \lim_{k \to \infty} \left(1 - \frac{1}{10^k}\right)$

**(c)** $\lim_{k \to \infty} \left(1 - \frac{1}{10^k}\right) = 1 - \lim_{k \to \infty} \frac{1}{10^k}$

So, now, if $\lim_{k \to \infty} \frac{1}{10^k}$ exists and is finite, then it follows that $\lim_{k \to \infty} S_k$ exists and is finite. And, if that is the case, then it is evident that our summation series from (1) converges.

We say that the limit of some function, $f(k)$, exists and is finite if, as *k* is made arbitrarily large, $f(k)$ becomes arbitrarily close to some Real number, *L*.

So, our question now becomes: as *k* is made any arbitrarily large value, does $\frac{1}{10^k}$ become arbitrarily close to any single Real number? It’s fairly obvious that the larger the *k* we utilize, the closer $\frac{1}{10^k}$ gets to 0. We can, in fact, make $\frac{1}{10^k}$ as close to 0 as we want, simply by choosing a large enough value of *k*.

In general, it will always be true that:

**(a)** $\frac{1}{10^k} > 0$

…for any Real number, *k*. Additionally, for any Real number *r* such that $r > 0$, we can choose a value of *k* which would make it true that:

**(b)** $\frac{1}{10^k} < r$

And for all *k*, it is true that:

**(c)** $\left| \frac{1}{10^k} - 0 \right| = \frac{1}{10^k}$

Therefore:

**(d)** $\lim_{k \to \infty} \frac{1}{10^k} = 0$

Since 0 is a finite, Real number, it is clear that the limit exists.

Now, we have everything we need to evaluate our initial summation.

**(a)** $0.999\ldots = \displaystyle\sum_{k=1}^{\infty} 9\left(\frac{1}{10}\right)^{k}$

**(b)** $\displaystyle\sum_{k=1}^{\infty} 9\left(\frac{1}{10}\right)^{k} = \lim_{n\to\infty}\left(1 - \left(\frac{1}{10}\right)^{n}\right)$

**(c)** $\displaystyle\lim_{n\to\infty}\left(1 - \left(\frac{1}{10}\right)^{n}\right) = 1 - \lim_{n\to\infty}\left(\frac{1}{10}\right)^{n}$

**(d)** $1 - \displaystyle\lim_{n\to\infty}\left(\frac{1}{10}\right)^{n} = 1 - 0$

**(e)** $1 - 0 = 1$

**(f)** $0.999\ldots = 1$


Despite the fact that it is fairly simple to prove that 0.999…=1, the concept is so counterintuitive that I find people struggle against it– even when they know and accept the reasoning behind the equality. One such attempt comes from Presh Talwalkar. In the following video, Mr. Talwalkar attempts to demonstrate that on the Surreal number system, 0.999…≠1.

Unfortunately for Mr. Talwalkar, he is wrong. Even on the Surreals, it is still true that 0.999…=1.

In the video, Mr. Talwalkar acknowledges that it is absolutely true that 0.999…=1 on the Real numbers. However, he then asserts that it is *not* true that 0.999…=1 on the Surreal numbers. Right away, this should look fairly suspect to anyone familiar with the Surreals. The reason for that is that the Surreals are a superset which *contains* the Real numbers. Anything which is true for a number which exists within the Real numbers will similarly be true for that number in the Surreals. It is therefore entirely incoherent to claim that 0.999… is a different number in the Surreals than it is in the Reals.

Most of Mr. Talwalkar’s video is fairly accurate, though I would prefer a more rigorous treatment of its subject matter. The point where it goes wrong, however, comes when he attempts to discuss some “weird numbers” which can be constructed on the Surreals. He begins by discussing $\varepsilon$, which he erroneously claims to be “1 divided by infinity” and “point zero repeating, with a one at the end.” Neither of these descriptions is even coherent. Infinity is not a number. You cannot divide 1 by infinity any more than you can divide 1 by Blue, or by Sweet, or by Alexander Hamilton. The latter description, “point zero repeating, with a one at the end,” is quite obviously self-contradictory. If we have “point zero repeating,” then there is no “end” at which to write a “one.” That said, the Surreal number which Mr. Talwalkar defines here, $\varepsilon$, *is* an actual number and *can* be utilized in mathematics. As defined, $\varepsilon$ represents a number which is greater than zero, but smaller than every positive Real number. That is to say, for any Real number $r > 0$, it is true that $0 < \varepsilon < r$.

He continues by defining another "weird number," $1 - \varepsilon$. Again, this number *is* correctly defined, and it *is* an actual number. It is therefore possible to deduce certain properties which are possessed by $1 - \varepsilon$. For example, since we know that $\varepsilon > 0$, we can know with certainty that $1 - \varepsilon < 1$. However, Mr. Talwalkar oversteps the bounds of logic when he baldly asserts that "you can think about it as point nine repeating." He gives absolutely no justification for asserting that $0.999\ldots = 1 - \varepsilon$, and it is actually quite simple to prove that this assertion is, in fact, entirely untrue. I shall do so, now, by *Reductio ad Absurdum*.

Let’s start by assuming Mr. Talwalkar’s assertion to be true. Then, by exploring the properties of the numbers, I’ll show that this assertion leads to a logical contradiction, and that it therefore cannot be true.

1. $0.999\ldots = 1 - \varepsilon$ (Mr. Talwalkar’s assertion, assumed for the *reductio*)
2. $0.999\ldots = 1$ (proven above; since the Reals are contained within the Surreals, this remains true on the Surreals)
3. $0 = 0$ (a number equals itself)
4. $\varepsilon > 0$ (by the definition of $\varepsilon$)
5. For any numbers $a$ and $b$, if $a > b$, then $a \neq b$
6. Given (1) and (2), $1 - \varepsilon = 1$, and therefore $\varepsilon = 0$
7. Given (4) and (6), $0 > 0$
8. Given (5) and (7), $0 \neq 0$
9. Given (3) and (8), $0 = 0$ and $0 \neq 0$

This, of course, is nonsensical. A number must always equal itself. Therefore, our premise (1) cannot be true.

It doesn’t matter whether we are talking about the Rational numbers or the Real numbers or the Hyperreal numbers or the Surreal numbers. The simple fact of the matter is that 0.999… is equal to 1. For some reason, many people find this very difficult to accept, but it is absolutely true. Presh Talwalkar’s attempt to show otherwise fails just as surely as all those which came before it. To quote Vi Hart on the issue:

If you’re having Math problems, I feel bad for you, son.

I got 98.999… problems, but 0.999… equals 1.


Trent began by describing some of the more popular Cosmological arguments. Concerning the formulation put forward by Thomas Aquinas, he says:

What Thomas argues, in the *Summa Theologica*, he says, “Let’s suppose the universe were eternal– it’s always existed.” Uh, there’s still things in the universe that need explanation. For example, “Why is there something rather than nothing?” Uh, why is there a universe of motion when, even if the universe is eternal, it could just be a static block.

I would agree that such things need explanation. However, this does not imply that the explanation needs to be found outside of the properties of the universe, itself. After all, even if the universe *is* contingent, and even if Trent is correct to assert that it *was* created by God, we have not answered the question, “Why is there something rather than nothing?” God is still something, rather than nothing. I think Trent would agree that the simple fact that we can ask, “Why does God exist rather than not existing?” does not imply that it is therefore possible God does not exist. Similarly, the simple fact that we have a question about the universe’s existence does not imply that the universe might therefore have not existed.

Another argument says, “Well, another reason the universe doesn’t have to exist is that, at one time, there was nothing. Nada. There was no thing, at all, and then suddenly the universe came into existence. And you can’t get something from nothing.”

This raises a rather important question: was there a time when there was nothing? This formulation of the Cosmological argument seems to assume that there was, but I would wholeheartedly disagree with that claim. In fact, I would argue that it doesn’t even make any sense. Trent and I both agree that Time is not nothing. So if there *is* a time, then something exists, and it would be completely wrong to then claim that nothing exists at that time. In fact, it seems entirely incoherent to claim that the philosophical concept of Nothing could exist.

After this, I mentioned that I find the Leibnizian Cosmological argument to be the best form of that series, and so we moved on to discussing that part. Trent summarized my view as follows:

Joe’s objection, however, is that perhaps the universe is necessary, like God is. The universe– there’s never– there could have never been a state of affairs where the universe did not exist. Uh, maybe it’s just the case, there always has to be a universe

This is precisely my position. We have two options: either there could have been a state of affairs in which the universe did not exist, or else there could not have been. If one wants to support either option, he needs to rely on better arguments than, “we do not know that the converse is true.” I was not claiming that there could never have been a state of affairs where the universe did not exist. I was asking Trent to defend the claim that there could have been such a state of affairs, since all Cosmological arguments for God are predicated upon it.

TRENT: Here are some reasons that I would give to think that the universe– why think the universe is contingent, as opposed to necessary. Well, one reason that I would give would be a conceptual one, and that would be the fact that while one can imagine, uh, the universe not existing, uh, that’s not really the case with necessary objects…Now, there is, this objection can go back to God but before we get there, what do you think of that? That being able to conceive that something doesn’t exist is evidence for its contingency?

BP: I wouldn’t really agree with that, because I– I mean, you– I would argue that I can, uh, I can imagine a world in which God does not exist–

TRENT: Right, Joe, I know the objection can go there, that you could imagine God doesn’t exist, and I have a rejoinder to that, but before that, I’m just talking– I’m not even talking about God right now, I’m just talking about whether the universe is necessary or contingent. And the fact that I can imagine it not existing makes it more like a contingent thing…

Unfortunately, Trent didn’t get the opportunity to elaborate on his rejoinder to the objection that one could imagine God doesn’t exist. Hopefully, this is in his book, *Answering Atheism*, and I’ll get a chance to respond to it after I receive my copy. However, I will still note, as I did previously, that the simple fact that we can imagine a thing not existing implies nothing at all about the actual possibility of its existence or non-existence.

For example, a triangle with three sides– you can’t imagine any other kind of triangle. You know, it’s three sides are necessary. Uh, but I could imagine a triangle that’s blue or red or purple or all kinds of colors. That shows whatever color a triangle might be is contingent. It could be– I could imagine it being different. And the same would go for anything else, but the sides– I can’t imagine it any other way.

…You know, we don’t ask, “Why does the square have four sides? Why is the circle round?” That’s just the way it is.

The reason one can’t imagine any other kind of triangle than one with three sides is that a triangle, by definition, is a plane figure with three sides. The reason we can imagine a triangle of different colors is that the definition of “triangle” implies nothing at all about the color of that plane figure. The fact that we can or can’t imagine it being different is a *consequent* of our understanding of the object’s properties. It is not the *reason* for those properties. Precisely the same thing can be said regarding the square’s four sides: a square is a quadrilateral *by definition*.

As for a circle’s roundness, we can say the same thing but in reverse. When we say something is “round,” we mean that it is approaching circularity or sphericality in shape. “Round,” by definition, references the shape of a circle. We *can* certainly ask why a circle has its particular shape as opposed to others– in fact, mathematicians have been doing so for thousands of years. You see, unlike a triangle or square whose definitions delineate the particular number of sides, a circle has a more abstract definition: it is the set of all points in a plane which are equidistant from a given point. So, if we were given a point *O* and a distance *r*, we would say that all of the points which lie at exactly distance *r* from *O* comprise a circle. We can then explore the implications which this definition has on the properties of that shape.

While I wasn’t able to be so verbose in my response, I did mention in my call that this was due to the defined properties of triangles as opposed to the force of our imaginative ability. Trent answered by saying,

Well, I’m not saying that the universe’s contingency is necessary, or that it– I mean– I guess there’s a way God could make the universe have to exist, somehow. Uh, I’m just saying there’s no reason to think the universe is necessary, and there are reasons to think it’s contingent, or it could fail to exist.

I was responding to one of the reasons which was given in support of the idea that the universe could fail to exist, and arguing that it does not actually demonstrate that the universe could fail to exist. Even if Trent was correct to assert that there is no reason to think the universe is necessary, I was asking what reasons there are to think that it’s contingent. I do not feel that the conceptual argument which he presented gives us a very good reason to think that the universe is contingent.

Trent then asked me about the example of Mars, and as to whether I thought Mars was contingent. I told him that I did, because there was a time when Mars did not exist.

Okay, so that– that’s one way to go about it. If something didn’t exist, it’s not necessary. And we could talk later, if we had more time, that I think that the universe– there was a time it didn’t exist.

This would seem wholly incoherent. Later in the discussion, Trent and I both agree that Time is a part of the universe. If this is the case, then it is logically impossible for there to have been a time when the universe didn’t exist. There cannot have been a time when Time did not exist– that’s a nonsensical assertion. I talk about this in more detail in my article, The Universe Has Always Existed, defending my position with a very simple logical argument:

1. The universe is the set of all physical things which really exist.
2. Time is a physical thing which really exists.
3. If Time exists, then the universe exists. (1,2)
4. Time is the set of all moments which exist.
5. There exists some moment, *t*.
6. If *t* exists, then Time exists. (4,5)
7. Therefore, if *t* exists, then the universe exists. (6,3)
8. The phrase “always” is defined as meaning “for all moments of time.”
9. There are no moments of time in which the universe does not exist. (7)
10. Therefore, the universe has always existed. (8,9)

Referring again to Mars, Trent asked me to consider a situation in which I was unaware as to whether or not there was a time in which Mars did not exist. He then asked:

Wouldn’t it follow that you can imagine a world without Mars or without your living room couch, or you could imagine certain things not existing, so it follows they are contingent?

No. I do not see how that follows, at all. Once again, the fact that we can imagine something not existing implies nothing at all about the nature of that thing’s ontology. Again, Trent is saying that the presence of the question implies the answer to that question. The simple fact that we could imagine things being different does not imply that they could be different, nor that the explanation for why they are not different is due to contingency. Allow me to give an example from mathematics.

We can ask the question, “Why is there an infinite number of Prime numbers rather than a finite number?” In fact, anyone who has ever taken even an introductory course on Number Theory has likely been asked this very question. However, the very fact that we can ask this question does not imply that the infinitude of the Primes is a contingent fact. It is, quite the contrary, most certainly a necessary fact, derived from the properties of numbers, themselves. Nor does the fact that we now know the answer to the question change the situation, at all. There was a time when Humanity asked about the infinitude of the Primes prior to having an answer to the question. The presence of the question no more implied that this infinitude was a contingent fact, at that point in time, than it does, now.
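That necessity is not merely asserted; Euclid’s classic proof of the infinitude of the Primes is constructive, and can be sketched in a few lines (the function names here are my own, purely for illustration): given any finite list of primes, their product plus one must have a prime factor outside the list, since dividing it by any listed prime leaves remainder 1.

```python
def smallest_prime_factor(n):
    """Smallest prime factor of n >= 2, by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

def prime_outside(primes):
    """Euclid's construction: a prime not in the given finite list of primes."""
    product = 1
    for p in primes:
        product *= p
    # product + 1 leaves remainder 1 when divided by every listed prime,
    # so its smallest prime factor cannot be in the list.
    return smallest_prime_factor(product + 1)

print(prime_outside([2, 3, 5]))     # 31 = 2*3*5 + 1
print(prime_outside([2, 3, 5, 7]))  # 211 = 2*3*5*7 + 1
```

No matter which finite list of primes one starts from, the construction yields a new one, which is exactly why the infinitude is a necessary consequence of the properties of numbers rather than a contingent fact.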

I then attempted to direct the conversation back to my main question. I admitted that it may be possible for the universe to be contingent, but I have not seen it demonstrated to be so. Trent then asked if I had any reason to think that the universe might be necessary. Giving an extremely brief summary of the position I elucidate above regarding Time, I stated that I cannot see any cogent way to present the idea that the universe did not exist, because the universe has literally existed for all moments of Time. Trent then responded:

Right, but– but you can have non-existence without Time. In fact, I would say that, uh, a state of affairs where there is nothing but Time doesn’t make sense. It’s not possible, because all Time is is a measure of change, and if nothing exists to change, you actually can’t have time at all.

I never argued that it was possible for a state of affairs to exist in which only Time, absent of any change, existed. What I said was that it is incoherent to claim that there was a time in which the universe did not exist. The universe has literally always existed, and it is incoherent to claim that there could have been a state of affairs in which it did not exist.

Trent moved on to the idea that all things within the universe are contingent, in order to attempt to expand that idea to the universe as a whole. He says:

There didn’t have to be Mars. There didn’t have to be a Milky Way. Eventually, when you say, there didn’t have to be all the stuff in the universe, there didn’t have to be a universe, at all. I mean, some people say, “Well, maybe the basic units of the universe, like quarks, they have to exist.” Maybe not. What if they were strings instead of quarks or some other– the fact that things could be different requires an explanation for why it’s not different.

This presents an extremely narrow view of the universe. Trent only discusses material objects, here: Mars, the Milky Way, quarks, strings. However, when we talk about “the universe” in philosophy (and also in cosmology) we’re not simply talking about matter. In fact, Trent himself defined the universe much more broadly, when he first began explaining the Cosmological family of arguments. He said that the universe is “all of space, time, matter, and energy.” So, even if one were to show that all of the matter in the universe is contingent (and one might put forth similar arguments for energy), Trent has done nothing at all to show that space-time, itself, is contingent. Since space-time is the very fabric of the universe, on modern cosmological models, it becomes necessary to show that space-time is contingent if one wants to demonstrate that the universe, as a whole, is contingent.

Now, God is a necessary and infinite being Who can’t be different, Who just is, and He serves, as I think, the best explanation for why there is a universe rather than that there isn’t one.

As I mentioned earlier, all of the same questions which Trent asked about the physical cosmos can also be asked about God. Why is there God rather than no God? Why does God have property *X* instead of not having property *X*? I think Trent and I will both agree that the presence of these questions implies nothing at all about whether God is necessary or contingent. I therefore find it very curious that he thinks the presence of these questions bears any sort of implication upon the necessity or contingency of the physical cosmos. Now, as mentioned, Trent says he has a rejoinder to this objection. If it is in his book, I will certainly address it when my copy arrives; however, if it is not, I’ll try to get in touch with Trent personally to find out if it can resolve this apparent discrepancy.

Once again, I do want to thank Trent Horn and Patrick Coffin for taking my call, treating me respectfully, and sincerely trying to address my concerns as I presented them. I greatly enjoyed the discussion, and found it immensely thought-provoking. It is wonderful to find an apologist who truly is concerned with having a fair and irenic dialogue with those who disagree. All too often, people on both sides of an argument talk *at* the other person, rather than *with* him– regurgitating canned responses and pat answers without regard for what the other person is actually saying. Trent Horn did not do this. He listened to what I had to say, and he addressed my questions and concerns as best he could in the time we had; and for that, he has my utmost respect and gratitude.
