Mathematicians are not most people.

For quite a long time, now, mathematicians have recognized that there are at least two very distinct ways in which we use numbers to describe things. Being the scholarly, academic types that they are, mathematicians have assigned names to these two different types of numbers which sound heady and difficult to the average person: *ordinal* numbers and *cardinal* numbers. Indeed, even mathematics students sometimes need quite a bit of work and explanation in order to really grasp the difference between these two types of number; but I’m going to do my best to explain these things in a very simple way for a casual audience.

Ordinal and cardinal numbers roughly correspond to the ideas of *value* and *size*, respectively.

Everyone can fairly intuitively recognize that value and size are entirely distinct concepts. If I have a bar of lead and a bar of gold which are both 30 cubic centimeters, it doesn’t take much to realize that both are the same size. However, if I were to allow you to choose one to take, who would hesitate to grab the gold bar? Despite being equal in size, the two bars are certainly not equal in value. The gold has far, far greater value than does the lead. What most people do not realize is that there is a similar concept to be found among numbers.

In order to explain, I’m going to ask you to take a look at the picture below.

In this picture, we have a set of ravens and a set of owls. Without counting or using numbers to describe either set, how can we know that both sets are the same size? Most people will immediately intuit that this is the case, even before recognizing that there are 5 of each, ravens and owls. However, those same people usually don’t understand why. You see, our brains tend to naturally make a connection that we might not consciously recognize at first. We notice, without even realizing it, that for each raven there is also an owl. We can match them up in what mathematicians refer to as a one-to-one relationship, as in the image below.
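This pairing idea is easy to make concrete. Here is a minimal Python sketch (my own illustration, not from the original post; the raven and owl labels are made up) which decides whether two sets are the same size purely by pairing off elements, never counting either one:

```python
def same_cardinality(a, b):
    """Pair off one element from each set until one (or both) runs out."""
    a, b = set(a), set(b)
    while a and b:
        a.pop()  # remove one raven...
        b.pop()  # ...and its matching owl
    return not a and not b  # same size iff both sets ran out together

ravens = {"raven_1", "raven_2", "raven_3", "raven_4", "raven_5"}
owls = {"owl_1", "owl_2", "owl_3", "owl_4", "owl_5"}
assert same_cardinality(ravens, owls)
```

Notice that the function never takes a `len()` of anything: the one-to-one matching alone settles the question.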

When the elements of two sets can be placed into a one-to-one relationship, like this, we say that they have the same *cardinality*. If they cannot, then the cardinality of the two sets is different. As such, cardinality is a means of discussing the size of a set.

However, as mentioned, there is another way in which we think about numbers, too. Have a look at the following image. In it, you’ll see a series of symbols being compared to one another using less-than symbols (<). Again, without assigning any numbers to the symbols, can you place them in order from least to greatest?

It’s not a very difficult puzzle. We see that the Mirror is less than the Mask, but the Wolf is less than the Mirror. Similarly, the Rose is less than the Scepter, but the Mask is less than the Rose. All it takes is a little rearrangement of the information which we have been given to see the following:

Without using any numbers, at all, we’ve placed these unfamiliar symbols into order from least to greatest. We don’t need any numbers to know, for example, that Wolf is less than Mask or that Mirror is less than Scepter. In short, we’ve established an order of value for these symbols, without needing to know anything else about them, at all. This sense of ordering things by their value is what we mean when we discuss *ordinal* numbers.
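For the curious, this kind of rearrangement can even be automated. The sketch below is my own (with symbol names standing in for the pictured glyphs); it repeatedly pulls out whichever symbol never appears on the greater side of any remaining comparison:

```python
# The four comparisons from the puzzle, as (lesser, greater) pairs.
relations = [("Mirror", "Mask"), ("Wolf", "Mirror"),
             ("Rose", "Scepter"), ("Mask", "Rose")]

def order(pairs):
    """Sort the symbols least-to-greatest using only the comparison pairs."""
    symbols = {s for pair in pairs for s in pair}
    result = []
    while symbols:
        # A least element never appears on the "greater" side of a comparison.
        least = next(s for s in symbols if all(g != s for _, g in pairs))
        result.append(least)
        symbols.remove(least)
        pairs = [(a, b) for a, b in pairs if a != least]
    return result

assert order(relations) == ["Wolf", "Mirror", "Mask", "Rose", "Scepter"]
```

No numbers appear anywhere: the ordering falls out of the relations alone.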

Thus far, everything we’ve discussed should seem fairly simple and straightforward. You might, in fact, wonder why I’ve even chosen to discuss it, at all. Sure, ordinality and cardinality might be two different concepts, but what difference does it really make? One does not need to consciously understand these differences in order to talk about numbers in a meaningful way.

Indeed, when we are discussing the numbers with which most people are quite familiar, saying that ordinal numbers are different than cardinal numbers seems like a distinction without a real difference. Who cares if we are talking about the cardinal number 5 or the ordinal number 5, for instance? Isn’t the number 5 just the number 5 regardless of whether we’re talking about ordinality or cardinality?

In the case of finite sets, this holds quite well. The cardinal 5 and the ordinal 5 aren’t really very different, other than in some minor quality about how we use them. The whole notion seems like quibbling pedantry, to the layperson. The problem arises, however, when we begin talking about infinite sets. It does not take very long to realize that with infinite sets, ordinals and cardinals are very different things.

Let me explain by discussing one of the ways in which we can construct the Natural numbers. We begin with an empty set– that is, a set which contains no elements. Now, we can play a little game with one very simple rule. The rule is that we are allowed to create a new set called a Successor which contains as its elements every other set which we already have. Since we currently only have the empty set, this means that we can create a new Successor set which contains the empty set as its only element. Once we have done that, we can apply the rule again, this time creating a Successor set which contains the empty set and the set containing the empty set. We can then apply the rule again to find another Successor, and we can continue this process ad infinitum.

If this seems confusing, take a look at the image below. The boxes represent our sets, while commas separate the elements within a set.

Now, all this “Successor to the Successor to the…” language is the confusing part of all this, so let’s make things easier by assigning symbols to represent each of the sets which we have created. We can assign the symbol “0” to the Empty Set, the symbol “1” to the Successor to 0, the symbol “2” to the Successor to 1, and the symbol “3” for the Successor to 2. I’m sure you now see where I am going with this.

If we continue playing our Successor game, we can probably already see that the set “4” will contain 0, 1, 2, and 3. The set “5” will have sets 0, 1, 2, 3, and 4 as its elements. The set “6” will be constructed of elements 0, 1, 2, 3, 4, and 5. Et cetera, et cetera. This is all very straightforward. All of the numbers which are created in this way are termed the Natural numbers by mathematicians.

This setup gives us a clear method for discussing the *ordinality*, or value (as I have been describing it), of a set. A set is said to be “less than” another set if the former is an element of the latter. So, 1 is less than 3 because 3 contains 1 as an element, in this construction. Similarly, 0 is less than all of the other sets because 0 is an element of all of the other sets. Conversely, 2 is not less than 1 because 2 is not an element of 1; and no Natural number is less than 0 because no Natural number is an element of 0.
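The Successor game and its membership-based ordering can be sketched directly in code. This is my own illustration using Python frozensets, not something from the original post:

```python
# Each new "number" is the set of every number built before it.
empty = frozenset()   # the Empty Set plays the role of 0
numbers = [empty]
for _ in range(6):
    numbers.append(frozenset(numbers))  # the Successor rule
# numbers[n] now stands in for the Natural number n

def less_than(a, b):
    """'Less than' as defined above: a < b iff a is an element of b."""
    return a in b

assert less_than(numbers[1], numbers[3])      # 1 < 3: 3 contains 1 as an element
assert not less_than(numbers[2], numbers[1])  # 2 is not an element of 1
assert all(less_than(numbers[0], n) for n in numbers[1:])  # 0 < every other set
```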

We can also use this method to define what it means for sets to be *ordinally equal*. If we have two sets, A and B, we can say that A and B are *ordinally equal* if A is not less than B and B is not less than A. So, for example, if we were to compare our set “3” to a copy of itself, we would see that original-3 is not less than copy-3 (because the set “3” does not contain 3 as an element), and copy-3 is not less than original-3 for the same reason. Therefore, we would conclude that 3 is ordinally equal to 3.

We can also use this setup as a very easy way of discussing the *cardinality*, or size, of a set. For an illustration of this, take a look at the next image.

Let’s say that we want to compare the set which we called “3” with the set of owls, pictured here. If you will recall from our discussion of ravens and owls earlier, if the elements of two sets can be placed in a one-to-one relationship with one another, then those two sets have the same cardinality. So our set of owls, in this picture, has the same cardinality as our set “3” has because we can match each of the owls to each of the elements in 3. When two sets have the same cardinality we can say that they are *cardinally equal*.

With the Natural numbers, it is plain to see that two sets which are *ordinally equal* are always going to be *cardinally equal*, as well. The set 3, for example, is not less than itself and it can have its elements matched one-to-one with itself. Similarly, if two numbers are not ordinally equal they will not be cardinally equal either. So, 3 is less than 4, and there is no way to match up all of the elements of 3 and 4 so that they will be in a one-to-one relationship.
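As a quick, self-contained check of this claim for a small finite case (the owl labels are hypothetical, and the sets are rebuilt inline):

```python
# The von Neumann-style set "3" = {0, 1, 2}, matched one-to-one
# against a three-element set of owls.
zero = frozenset()
one = frozenset([zero])
two = frozenset([zero, one])
three = frozenset([zero, one, two])
owls = ["owl_a", "owl_b", "owl_c"]

pairing = list(zip(sorted(three, key=len), owls))  # one owl per element
assert len(pairing) == 3   # every element of "3" got exactly one owl: same cardinality
assert three not in three  # "3" is not less than itself: ordinally equal to itself
```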

This all may seem rather obvious to some. Of course two sets which are ordinally equal are going to be cardinally equal! How could things be any other way? One might wonder why anyone would bother to worry about the distinction between ordinality and cardinality, at all. The truth is, however, that numbers are far stranger than most people will ever realize.

Let’s say that I create a new set which contains as its elements all of the Natural numbers. We will call this set “ω” (the Greek letter *omega*). By the definition of “less than” which we used earlier, it is clear that every Natural number is less than ω, since every Natural number is an element of ω. Similarly, we see that ω is not less than any Natural number, since no Natural number would have ω as an element. We have created a completely new ordinal number.

Now, we can apply our Successor rule again to create a set which contains all of the Natural numbers and ω as its elements. We’ll call this Successor set “ω+1.” Clearly, since ω is an element of ω+1, we can see that ω is less than ω+1. Again, we can continue to apply the Successor rule to create new sets which we might call “ω+2,” “ω+3,” “ω+4,” and so on, and we can continue to see that our ordinal “less than” rule still holds for each of these. We call ω and the Successors which can be generated from it *transfinite* ordinals.

One might easily think that, just as was the case with the Natural numbers, if two transfinite sets are not ordinally equal, then they can’t be cardinally equal, either. But, as we shall see, the world of the infinite is a very, very different place.

Now, it’s clear that ω does not have the same cardinality as any of the Natural numbers, because no matter which Natural number one chooses, attempting to set up a one-to-one relationship between its elements and those of ω will result in unmatched elements remaining in ω; and given all the precedent which we saw in the Natural numbers, we might expect that ω+1 has a different cardinality than does ω. Indeed, at first glance this might seem to be the case. When we try matching the elements of ω to those of ω+1, the obvious thing would be to match 0 to 0, 1 to 1, 2 to 2, and so on. If we were to go about it in this way, we would match up every Natural number in the set ω to its counterpart in ω+1 and there would still be one unmatched element in ω+1.

However, what if we were to be a little bit clever? Instead of matching each Natural number to its counterpart in the other set, let’s instead match the ω-element of ω+1 with the 0-element of ω. We can then match each Natural number element in ω+1 with its Successor element in ω, so that 0 matches with 1, 1 matches with 2, et cetera. Now, we find every element in ω+1 *does* pair up with an element in ω in a one-to-one relationship.
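The clever matching can be sketched for a finite prefix of the two sets. This is my own illustration; the string “ω” is just a stand-in token for the set ω itself:

```python
OMEGA = "ω"  # a stand-in token for the element ω inside ω+1

def match(element):
    """Map an element of ω+1 to its partner in ω: ω goes to 0, n goes to n+1."""
    return 0 if element == OMEGA else element + 1

omega_plus_1_prefix = [OMEGA, 0, 1, 2, 3, 4]
partners = [match(e) for e in omega_plus_1_prefix]
assert partners == [0, 1, 2, 3, 4, 5]       # every element found a partner in ω
assert len(set(partners)) == len(partners)  # and no element of ω is used twice
```

Every element of ω+1 gets a distinct partner in ω, which is exactly what a one-to-one relationship requires.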

Despite the fact that ω is ordinally less than ω+1, we can see that these two sets are cardinally equal! In fact, ω has the same cardinality as *any* of the Successor sets which can be generated from it. So, ω has the same cardinality as ω+1 or ω+15 or ω+3672. Mathematicians refer to the cardinality of ω with another symbol, ℵ₀ (pronounced “Aleph null” or “Aleph zero,” from the Hebrew letter Aleph). So, despite the fact that the ordinal number 3 and the cardinal number 3 don’t seem to be very different, it becomes exceedingly clear that the ordinal number ω and the cardinal number ℵ₀ are not the same thing, at all.

A bar of lead and a bar of gold might be the same size, but the bar of lead has far less value than the bar of gold. Similarly, we now see that it is entirely possible for two sets to be the same size, even if one has far less value than the other. Though we do not usually see it in our everyday experience, there is a very distinct and meaningful difference between numbers which are used ordinally and numbers which are used cardinally. Despite the fact that the concept is so thoroughly ingrained in us, it is not always obvious what it is that we mean when we use numbers.


The First Way is Aquinas’ Argument from Motion. The philosopher, being heavily entrenched in Aristotelian philosophy, means something different by “motion” than is commonly understood, today. When Aquinas says that a thing is “in motion,” he means that some change which has yet to occur is being made in that thing. In addition to movement through space, this can refer to any sort of change—for example, a chameleon which is perfectly still, but changing colors, would be “in motion,” on Aquinas’ view. We can summarize this argument as follows:

- There are some things which are in motion.
- If a thing is in motion, that motion must originate in the action of some other thing.
- This chain cannot go to infinity, because there must be a First Mover.
- Therefore, there is a First Mover whose motion does not originate in the action of some other thing. (3)

Laid out in this fashion, it seems perfectly clear that Aquinas’ First Way is not even a valid argument, let alone a sound one. His third premise is a blatant instance of question begging. Aquinas asserts that there must be a First Mover, therefore there is a First Mover. Furthermore, the conclusion which he draws violates his second premise, making it a fairly clear example of special pleading. If Aquinas’ conclusion is true, then his second premise is false, invalidating the argument.

Aquinas’ dedication to Aristotle is again evident in his Second Way, the Argument from the Formality of Efficient Causation. Aristotle’s view of the notion of causation divided the concept into four categories: material, efficient, formal, and final. The “material” cause is that which is being changed, the “efficient” cause is the agent effecting the change, the “formal” cause is the manner in which the change occurs, and the “final” cause is the end purpose which drove the change (Aristotle’s *Physics* 2.3). Using an argument which is similar to his First Way, Aquinas argues:

- Changes have Efficient Causes.
- No change can be its own Efficient Cause.
- There cannot be an infinite chain of Efficient Causes.
- Therefore, there must be a First Efficient Cause. (1, 2, 3)

This argument is slightly better than Aquinas’ First Way, as this one is at least valid and free of overt logical fallacies. However, it is not necessarily sound. The third premise is fairly suspect, and Aquinas does not give very good support for it. The priest notes that in any series of efficient causes, a subsequent efficient cause is always the result of a prior. He then baldly asserts that there must be a first Efficient Cause. This, again, shows Aristotle’s influence on Aquinas, as the Greek philosopher was vehemently opposed to the notion of actually infinite sets of entities (*Physics* 6.1). Unfortunately for Aquinas, Aristotle’s understanding of infinity is nearly as antiquated as was his belief that the Sun, planets, and stars revolve around the Earth. Mathematicians and physicists have been regularly utilizing completed infinite sets in real-world applications for more than 400 years, now, and it has been nearly 100 years since the concept has even been a source of much controversy in the philosophy of mathematics. If Aquinas wants to demonstrate that there cannot be an infinitely receding chain of Efficient Causes, he must do a better job than to simply note the fact that this would imply that the chain lacks a first cause.

The Third Way is an Argument from Contingency. Aquinas points out that there are things which exist that exhibit the potential to not exist. Four centuries later, in a similar argument, Leibniz would say that such things are “contingent,” as opposed to “necessary” things—that is, things which cannot logically be non-existent. Aquinas’ Third Way is summed up as follows:

- Some things exist which have the potential to cease existing.
- If a thing can cease to exist, then there must have been a prior time in which it did not exist.
- If everything has the potential to cease existing, then there must have been a time when nothing existed. (1, 2)
- If nothing existed, nothing can have been brought into existence.
- Therefore, it cannot be that everything has the potential to cease existing, since (4) contradicts (1).

There are two problems with this Third Way which are of particular concern. Firstly, Aquinas offers absolutely no justification for his second premise. It does not follow that a thing potentially ceasing to exist implies that it must have not existed at some time in the past. However, even if that premise is true, the most which Aquinas might draw from this argument is that there must exist *at least* one thing which is necessary. He has not demonstrated that there can only be a single necessary thing. Aquinas gives no reason to think that there might not be a multitude of necessary things which do not have their necessity caused, but rather exist necessarily in and of themselves. Certainly, if numerous such things were in existence, Aquinas would not say that the whole set of them is what is meant by the name, “God.”

Aquinas proposes an Argument from Gradation as his Fourth Way. He discusses the manner in which things are described, noting that some things exhibit a certain property to a greater or lesser scale than do others. He claims that this gradation is judged in comparison to some singular exhibitor which most exemplifies that property.

- There are some properties of things which can be accorded as greater or lesser.
- A property can only be accorded as greater or lesser as it resembles a thing which is the greatest exhibitor of that property.
- “Being,” “goodness,” and every other perfection can be accorded as greater or lesser in a thing.
- Therefore, there must exist a greatest exhibitor of “being,” “goodness,” and every other perfection. (1, 2, 3)

The second premise of this argument is worse than just dubious. Indeed, it would seem to be glaringly false. In his own example supporting the premise, Aquinas states that “hotter” is judged in comparison to fire, which must be considered “hottest,” since it is the source of all heat– yet another antiquated and abandoned position which Aquinas acquired from Aristotle. However, it is now known that fire is *not* the source of all heat, and that there exist things which are far, far hotter than fire. Even more damning to this perspective, however, is that a person need not ever know anything of fire to know that a stone which had been left out in the sun is hotter than one left in the shade. Nor is it even clear that “hottest” is a notion which can be applied to any single, universal thing. The gradation of a property is not judged by comparison to the maximal extremes of that property, but rather by comparison to other things which exhibit that property. An oven is hotter than a refrigerator and a blast furnace is hotter, still, but these are all still exceptionally cold in comparison to the surface temperature of a star.

Finally, the Fifth Way is an Argument from Teleology. Aquinas claims that there exist things which lack intelligence, and which yet act to a purposeful end. This purpose, he claims, must be instilled into those things by an agent with intelligence.

- There exist things which lack intelligence and act for some purpose.
- A thing which lacks intelligence cannot act for a purpose unless it is directed to do so by a being with intelligence.
- Therefore, some intelligent being exists by whom all things which lack intelligence are directed to their purpose. (1, 2)

Once again, Aquinas’ formulation seems problematic. Firstly, it is not clear that things which lack intelligence are acting in accordance with some distinct purpose. His attempted justification for this point does not seem to really lend it much support. Aquinas claims that these things act “always, or nearly always, in the same way, to obtain the best result.” It is far from demonstrated that this is the case; however, even if it is, it does not follow that their actions are therefore directed by an intelligent being. Aquinas simply asserts that this is so. Again, this would seem to be a function of the priest’s reliance upon Aristotle, and his ignorance of the physics which would be discovered centuries after his *Summa*. Furthermore, as with the Third Way, Aquinas gives no good reason to think that all of these natural, intelligence-lacking things must therefore be directed by a *single* entity, other than the earlier argument regarding Efficient Causation which has already been seen to be problematic. Even if his premises were true, Aquinas is not justified in this leap of logic. The planets, for example, might each be driven in their orbits by an intelligent being distinct from that driving any of the others, and neither of the first two premises would be violated. As before, it is doubtful Aquinas would agree that all intelligent beings are what is meant by the name, “God.”

The Five Ways of Thomas Aquinas’ *Summa Theologica* are, to be certain, an important part of the history of philosophy. However, their brilliance and validity are very often overstated by proponents. Aquinas’ views are mired in a foundation of Aristotelian physics which, though extremely influential in 13th and 14th Century Christendom, is now extremely antiquated and has been abandoned by all serious scholarship. Even within that framework, however, his arguments are rather loosely proffered and lacking in rigor. Aquinas’ Five Ways do not exhibit the strength of argument with which they are commonly characterized. As such, the question of God’s existence does not seem to be quite so well founded as the Italian priest claimed.

Joseph Nebus has recently written a couple of posts (here and here) in which he discusses an interesting attempt by Józef Maria Hoëne-Wronski to create a purely numerical definition of the mathematical constant π which is independent of the classical, geometric definition of “the ratio of the circumference of a circle to its diameter.” This has been a goal of many mathematicians, since the idea of π seems like it is more fundamental to mathematics than a definition based on circles would make it seem– as evidenced by the fact that it shows up in areas of mathematics which are seemingly unrelated to circles. Wronski’s idea, to this end, was the following formula:

At first glance, the formula seems inherently nonsensical. After all, ∞ is not a number, and therefore cannot be utilized in numerical operations in this way. However, one can get a sense of what Wronski may have *intended* by this equation. It appears that Wronski wanted to utilize ∞ to represent an infinite number, and modern mathematics actually gives us several tools for handling this sort of idea. One which might be of particular use, here, is Non-Standard Analysis with its infinite and infinitesimal Hyperreal numbers. In NSA, we have the ability to perform calculations with and upon infinite numbers perfectly consistently and reasonably.

First things first, let’s translate Wronski’s equation into a more modern form. Borrowing from the work Joseph Nebus already did in his second post on the subject, we can rewrite Wronski’s notation in more modern symbols in order to get:

Now, we can use our tools from NSA to find suitable substitutes for ∞ in the above equation. One immediate problem which a mathematician might notice is that replacing the three ∞ symbols with three different positive, infinite Hyperreal numbers will lead to different solutions for the equation when one uses different values for those three numbers.

However, Wronski died well before Georg Cantor‘s brilliant work showing that there are different sizes of infinite sets was even published, let alone accepted by mainstream mathematicians. As such, it is very reasonable to assume that Wronski believed his ∞ symbol was referring to a single, specific quantity, rather than a range of possible infinite quantities. So, let’s replace all of the ∞ symbols with a single positive, infinite Hyperreal number. This gives us:

Starting with the expression within the braces, we can explore to find something which may be a bit easier to work with. This takes a little bit of work, but we can show that:

Let’s zoom in on our equation a little bit more, now. The expression is a Complex Hyperreal number which is infinitely close to the Real number, . As such, its reciprocal is also infinitely close to . Given this information, we know that the expression must simplify into some infinitesimal Complex Hyperreal number. Let’s call this number for Hyperreals and .

Similarly, we know that is a non-Complex Hyperreal number which is infinitely close to . Let’s call this number , where epsilon is some non-zero infinitesimal. Multiplying this by our earlier result yields . We can then take this expression and substitute it for the entire braced expression from our full equation:

This, in turn, can now be simplified to:

We’re still far from anything which clearly resembles the π which we all know and love, but now we are getting to a place where we can really start to see some of the implications of Wronski’s definition. Notably, either , or else NSA seems to show that Wronski’s π is not a Real number. As such, it seems like Wronski’s definition is a failure if — presumably, Wronski was not attempting to redefine π out of the set of Real numbers!

However, it seems quite dubious that it would be the case that . Looking back for a moment, we defined our as the Real part of the expression . Let’s break this down a bit further, now. The term is a Complex Hyperreal number which is infinitely close to the Real number, ; let’s call it , for infinitesimal Hyperreals and . I’ll spare my readers a few more convoluted formulae (feel free to work this out yourself!), but if and only if . However, it seems fairly clear that

One of the properties of e^{iθ}, for all Real θ, is that it has a magnitude equal to 1. This means that for any Complex number such that and are Real and that , it will be true that . The Transfer Principle of the Hyperreals allows us to extend this statement over the Hyperreal numbers, as well. Since the Complex Hyperreal which we were concerned with is , we therefore know that . Since is non-Complex, we know that its square must be positive or zero. Similarly, must be positive and greater-than-or-equal-to 1. As such, the only way for to be true is in the case that .
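The magnitude property being leaned on here can be spot-checked numerically. This is an illustration of mine using ordinary complex floats (Python’s cmath works over standard Complex numbers, not Hyperreals): e^{iθ} always lies on the unit circle, and more generally |e^{a+bi}| = e^{a}.

```python
import cmath
import math

# e^{iθ} has magnitude 1 for every Real θ.
for theta in (0.0, 1.0, math.pi / 4, 2.5):
    assert math.isclose(abs(cmath.exp(1j * theta)), 1.0)

# More generally, |e^{a+bi}| = e^a: the imaginary part only rotates.
for a, b in ((0.5, 1.2), (-2.0, 3.0)):
    assert math.isclose(abs(cmath.exp(complex(a, b))), math.exp(a))
```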

For this to be the case, then . In order for this to be true, it must be the case that . However, this contradicts our initial definition, which required a positive, infinite Hyperreal number.

Unfortunately, it seems that Wronski’s attempt to create a non-geometric definition for π simply does not work. That said, I’m still very curious about his thought process, here. What led him to this particular formulation, in the first place? Is it, perhaps, possible to salvage his work? Could there be some actual truth hidden underneath the apparent incoherence? It will certainly be fun to unravel this puzzle even further.

One of the common claims which is utilized in arguments for the existence of God is that actual infinities cannot exist, implying that there cannot be an infinite regress of causal events in the history of the universe. If there cannot be such an infinite regress, then there must be some First Cause. Theologians then put forth other arguments attempting to show that this First Cause must be God. Blake Giunta, a Christian apologist, has constructed a very interesting and quite useful website cataloging common lines of argumentation from both sides of the debate (color coded Green for Christian arguments and Red for opposing arguments), along with citations and documentation for those claims, called BeliefMap.org. It does not take very long for a fairly cursory perusal of Belief Map to bring one to this exact claim regarding the actually infinite.

While I disagree with Mr. Giunta on many of his views, I have a great deal of respect for him and I think that his work with Belief Map is absolutely fantastic. He truly does attempt to give an irenic and charitable view to the positions of his opposition, and he does sincerely want to discuss the actual arguments being made, instead of being content to knock down Straw Men. To that end, I would like to help Mr. Giunta add to his encyclopedia of apologetics by addressing the manner in which one might answer the claims about actual infinities.

Under the heading, “**Logically, prior events can’t number to infinity**,” Belief Map separates the discussion into two further, green claims. The first of these is, “**Infinity can’t exist in the real world**,” which is further subdivided into three green categories and four red. Two of the reds are theological in basis, and not of much concern to me, but the other two are mathematical and interesting. Each of these red categories contains a minor rejoinder, so I’ll be addressing them as best I can, as well. The second claim under the “**Logically, prior events can’t number to infinity**” heading is that “**Infinity can’t be formed by adding**.” After discussing all of the “can’t exist” categories, I will then consider this one.

The problem with this argument is that it is not even cogent. Infinity is not a number. There are, classically, two ways in which “infinity” is discussed as a concept in the philosophy of mathematics (Katz, 45-50). The first is the “potential infinity,” which is the idea that an iterative process can be repeated without any apparent bound. In this case, “infinity” is a description of the manner in which a process is carried out, and certainly not a number. The second way in which the concept is discussed is the case of “actual infinity,” which is the idea that a completed set can contain a number of elements which is greater than any Natural number. In this case, “infinity” is not a number, but rather a quality of numbers– a number can be either “finite” or “infinite.” And just as there are a multitude of finite numbers, some of which are greater than others, there are similarly a multitude of infinite numbers, some of which are greater than others (Katz 795; Conway; Robinson).

Numerical operations can only be performed upon numbers. For this reason, the expression “Infinity – Infinity” is entirely incoherent. It is mathematically no different than saying “Red – Red” or “Delicious – Delicious” or “Blake Giunta – Blake Giunta.” These are not mathematical statements, and as such, we cannot draw mathematical conclusions from them.
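As a loose illustration of mine (IEEE-754 floats are certainly not the Hyperreals, and this is only an analogy), even everyday floating-point arithmetic encodes the convention that “infinity minus infinity” has no answer:

```python
import math

inf = float("inf")
assert math.isnan(inf - inf)  # ∞ − ∞ evaluates to NaN: literally "not a number"
assert inf + 1 == inf         # while ∞ + 1 is still well-defined as ∞
```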

If the argument is amended to discuss the subtraction of infinite numbers instead of the subtraction of infinity, it loses all weight. There exist systems of mathematics in which infinite numbers can be subtracted from infinite numbers perfectly consistently– for example, on Surreal numbers (Conway) or Hyperreal numbers (Robinson). These do not lead to the purported contradictions espoused by apologists.

Belief Map offers three cases as examples of metaphysically impossible scenarios: an infinite tug of war, an infinite hotel, and an infinite popsicle.

The infinite tug of war is actually just a restatement of the question of subtracting one infinite number from another, which we’ve already discussed. It makes precisely the same mistake as before, treating “infinity” as a number and not recognizing that there are numerous infinite numbers, not all of which are equal. As such, it is easily resolved by proper mathematics.

The infinite hotel illustrates a counter-intuitive property of actual infinities, but it does not illustrate a metaphysical impossibility or a contradiction. The only way one might legitimately claim that this is an absurdity would be to already reject the possibility of actual infinities. However, since this is being utilized as an argument in support of just such a rejection, to do so would simply be fallacious, circular question begging.

The infinite popsicle does present something of an absurdity. I’ll agree that Benardete’s scenario is metaphysically impossible, but not for the reason which he suggests. Popsicles are composed of atoms. Atoms have a significantly non-infinitesimal volume. As such, one cannot create a popsicle with an infinite number of layers in 4 cubic inches (or any other finite volume) of space. This thought experiment doesn’t work, but not due to any metaphysical absurdity relating to infinity.

This bald assertion is an unfortunate bit of question begging. One of the primary definitions of an actually infinite set is a set which contains a proper subset of equal cardinality (Katz 792-795). Belief Map offers no good reason to accept the claim that a proper part must always be smaller than its whole. One might as well argue that actual infinities can’t exist because actual infinities can’t exist.
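That definition is easy to see in action: the pairing n ↦ 2n matches the Natural numbers one-to-one with their proper subset, the even Naturals. A quick sketch (my own illustration) checks the pairing on a finite prefix:

```python
# The naturals map one-to-one onto their proper subset, the even
# naturals, via n -> 2n; checking any finite prefix shows the pairing
# never collides and misses no even number below the cutoff.
def to_even(n):
    return 2 * n

naturals = range(10)
evens = [to_even(n) for n in naturals]

print(evens)  # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
# The map is injective on this prefix...
assert len(set(evens)) == len(evens)
# ...and hits every even number below 20.
assert set(evens) == {n for n in range(20) if n % 2 == 0}
```

No number in the prefix is left unpaired, and no even number below the cutoff is missed– the proper part is exactly as numerous as the whole.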

Belief Map cites this as an objection to the claim that actual infinities cannot exist, and it is absolutely correct to do so. For more than 100 years, mathematicians have been developing and utilizing a valid and consistent framework for math which deals perfectly well with actual infinities.

However, Belief Map offers a rejoinder to this: “**But so what?** A concept’s being *logically* possible (free of formal contradictions) doesn’t entail that it is *actually/metaphysically* possible.” Certainly, I agree– though, I must say, I find a little bit of irony in this position being raised here, since I quite often see the exact same sentiment brought up by atheists in regards to God’s possibility.

That said, at best this argument is merely inconclusive. It does not say that actual infinities *aren’t* metaphysically possible, but only that they *might not* be. In 1925, David Hilbert addressed this precise line of argumentation:

Also old objections which we supposed long abandoned still reappear in different forms. For example, the following recently appeared: Although it may be possible to introduce a concept without risk, i.e., without getting contradictions, and even though one can prove that its introduction causes no contradictions to arise, still the introduction of the concept is not thereby justified. Is not this exactly the same objection which was once brought against complex-imaginary numbers when it was said: “True, their use doesn’t lead to contradictions. Nevertheless their introduction is unwarranted, for imaginary magnitudes do not exist”? If, apart from proving consistency, the question of the justification of a measure is to have any meaning, it can consist only in ascertaining whether the measure is accompanied by commensurate success. Such success is in fact essential, for in mathematics as elsewhere success is the supreme court to whose decisions everyone submits. (Hilbert)

The mathematics of the infinite *has* been successful– inordinately successful, in fact. It forms the basis upon which mathematics has been securely founded. Given that the previous arguments on Belief Map aren’t very convincing– or even coherent, in cases– I see no reason to think that actual infinities are not metaphysically possible.

This is another good objection to the claim that actual infinities cannot exist. If we are discussing an interval with cardinality of at least ℵ₀, then there are an infinite number of subintervals contained therein (Katz 792). Consider, for example, the mathematical interval from zero to one. There are an infinite number of intervals within this interval– for example, the interval from 0 to 1/2, and from 1/4 to 3/4, and from 1/2 to 1, and from 0 to 1/4, et cetera, et cetera.

Belief Map’s rejoinder to this is to claim that such intervals can only be *potentially* infinitely divided, and are not actually infinitely divided. However, this seems to very clearly not be the case. All of the subintervals of the interval from zero to one are entirely co-existent with it. They exist equally as much as the parent interval does and are not simply potentialities waiting around to be actualized.

Now, perhaps Mr. Giunta might respond that he has already granted that such mathematical intervals may reasonably be consistent, but that he is arguing against *physical* intervals, and that these are only potentially infinitely divisible. However, this seems to be yet another bit of question begging, and is only a reasonable assumption if one already denies the metaphysical possibility of actual infinities. After all, if (for the sake of thought experiment) we adopt the assumption that actual infinities are metaphysically possible, then all of the infinite subintervals of a given, physical interval would be co-existent with that interval.

Since adding a finite number to a finite number always results in a finite number, Belief Map argues that an infinite collection cannot be formed by the sequential addition of finite elements. However, this seems to be just another circular attempt to reject actual infinities by rejecting actual infinities. No one is suggesting that adding a finite number to a finite number will yield an infinite number. We are suggesting that adding a finite number to an infinite number yields an infinite number, and similarly that adding an infinite number to a finite number yields an infinite number.

To be fair, Belief Map does note that a possible objection to this claim is that past events may “have *always* been infinite in number.” Of course, if it is the case that past events *are* infinite in number, then it *must* be the case that past events have *always* been infinite in number. This is a necessary consequence of the infinitude of past events. It would therefore seem that the claim that “infinity can’t be formed by adding” is entirely irrelevant to the situation under discussion.

From time immemorial, the infinite has stirred men’s emotions more than any other question. Hardly any other idea has stimulated the mind so fruitfully. Yet, no other concept needs clarification more than it does. (Hilbert)

Belief Map, unfortunately, has some ill-formed views regarding the nature and mathematics of infinity. This is owed, at least in part, to the fact that Mr. Giunta borrows heavily from William Lane Craig’s work in the discussion of this subject. However, as I have discussed before, William Lane Craig has a gross misunderstanding of the concept of infinity (Part 1 and Part 2). Hopefully, the information which I have presented here can help Mr. Giunta to improve his wonderful work and correct some of the misconceptions which Belief Map’s arguments present in regards to the actually infinite.

Conway, J. H. *On numbers and games*. A.K. Peters, 2006.

Hilbert, David. “On the Infinite.” 1925. URL: https://math.dartmouth.edu/~matc/Readers/HowManyAngels/Philosophy/Philosophy.html

Katz, Victor J. *A History of Mathematics: An Introduction*. Pearson, 2018.

Robinson, Abraham. *Non-Standard Analysis*. North-Holland Pub., 1974.

In the video, Dr. Wildberger claims that there are three different ways in which √2 is commonly discussed: the Applied, the Algebraic, and the Analytical. He does a fairly good job of discussing the manner in which the ancient Greeks discovered that there exists no ratio of two whole numbers which can be equal to √2, which is a topic I have covered here, as well. He then explains what he means by each of the above three categories.

Since we have shown that there is no ratio of two whole numbers which can equal √2 exactly, the Applied path seeks to find ratios which simply come close to equaling that number– approximations with an arbitrarily small error. We are not searching for an exact solution, on the Applied path, and indeed we are content to agree that there is no exact solution which can be attained, according to Dr. Wildberger. We can, for example, find that 1.414, when squared, gives a result quite close to 2, but it is not exactly 2.
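The Applied path is easy to demonstrate. As a sketch (my own example, not Dr. Wildberger’s), Newton’s iteration produces ratios of whole numbers whose squares come as close to 2 as we please, while never reaching it exactly:

```python
from fractions import Fraction

def sqrt2_approximations(steps):
    """Newton's iteration x -> (x + 2/x) / 2 on exact fractions: each
    step is a ratio of whole numbers whose square is closer to 2."""
    x = Fraction(3, 2)
    out = []
    for _ in range(steps):
        x = (x + 2 / x) / 2
        out.append(x)
    return out

for r in sqrt2_approximations(4):
    print(r, "squared differs from 2 by", abs(r * r - 2))
```

Each approximation is a perfectly good Rational number, and the error can be made arbitrarily small– but it never becomes zero.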

For the Algebraic path, we can construct an extension to the rational numbers which contains some exact solution to the equation x² = 2– Dr. Wildberger gives the example of an arithmetic using pairs of rational numbers a and b, behaving as the formal expression a + b√2, such that (√2)² = 2. He notes that this can be done in such a way that it conforms to all the usual laws of arithmetic, but objects that the √2 in this scenario “has nothing whatsoever to do with that one-point-four-one-four-et-cetera that we were talking about previously.”
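Here is a sketch of that pair construction (the class name `Root2Ext` and its details are my own illustration of the standard idea): each pair (a, b) of rationals behaves as the formal expression a + b·r, with the single stipulation that r² = 2.

```python
from fractions import Fraction

class Root2Ext:
    """a + b*r, where r is a formal symbol with r*r = 2 and a, b rational."""
    def __init__(self, a, b):
        self.a, self.b = Fraction(a), Fraction(b)

    def __mul__(self, other):
        # (a + b r)(c + d r) = (ac + 2bd) + (ad + bc) r, using r^2 = 2
        return Root2Ext(self.a * other.a + 2 * self.b * other.b,
                        self.a * other.b + self.b * other.a)

    def __eq__(self, other):
        return (self.a, self.b) == (other.a, other.b)

    def __repr__(self):
        return f"{self.a} + {self.b}r"

r = Root2Ext(0, 1)   # the formal square root of 2
print(r * r)         # 2 + 0r -- an exact solution to x^2 = 2
```

The element (0, 1) squares to exactly 2– an exact, purely algebraic solution, with no decimals in sight.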

Finally, Dr. Wildberger presents the Analytic path, which he describes as “the square root of 2 is some infinite decimal which starts out 1.414 and goes on in some fashion.” He unequivocally refers to the Analytic path as “wrong thinking,” and unabashedly goes on to claim that such an object “does not exist, my friends.” It is quite clear that Dr. Wildberger has no love for Analysis. Quite the contrary, he is openly hostile to the idea.

While there are minor statements that I could nitpick in Dr. Wildberger’s treatments of the Applied and Algebraic approaches to the square root of 2, it is his handling of the Analytic approach with which I’ll interact in this article. His discussion of the subject is incredibly hyperbolic, highly oversimplified, and entirely uncharitable. Dr. Wildberger doesn’t even pretend to consider the idea that the Analytic approach may have some reasonable underpinnings which he nevertheless finds to be flawed; rather, he simply dismisses the entire field of analysis as being incorrect and accuses it of being the ruin of mathematics. He treats the subject in this manner despite the fact that, as he will well admit, the overwhelmingly vast majority of all the world’s mathematicians from the past hundred years find the Analytic approach to be perfectly good. In fact, Dr. Wildberger rather boldly claims that these other mathematicians “are all wrong. They are seriously wrong.”

The closest which Dr. Wildberger comes to giving an accurate description of the Analytic approach is when he is discussing the number line. According to Dr. Wildberger,

…this Analytic approach to root 2… pretends that, somewhere on the line (which up ’til now only consists of Rational numbers), somewhere there’s a new place, and it’s somewhere between 1 and 2, and there’s a new number called ‘root 2,’ and it has the property that its square is 2, and we can find out what this thing is by making a calculation.

To say that this is a mischaracterization of Analysis is quite an understatement. In truth, Analysis is based upon an assumption regarding the number line, but it does not simply try to plop an object called √2 somewhere between 1 and 2, as Dr. Wildberger claims. Rather, the assumption regarding the number line upon which Analysis is built is a fairly reasonable one– the idea that the number line is continuous. That is to say, Analysis assumes that there are no gaps or holes in the number line. If the number line only consisted of Rational numbers, as Dr. Wildberger claims it does, then there would be a great many holes in it, indeed, as there are a great many mathematical statements which produce values which cannot be expressed as Rational numbers– uncountably infinitely many, in fact.

The idea that the number line is continuous did not originate with Analysis. It had been an openly discussed question in mathematics since at least the ancient Greeks. The Analysts simply decided to explore what it would mean for such a continuum to exist. Quite happily, they found that assuming continuity led to very beautiful developments in mathematics– exactly the opposite of the picture Dr. Wildberger paints.

If one assumes that the number line is continuous, as the Analysts did, then there is no need to try to create a place for √2 to go, despite Dr. Wildberger’s intimations otherwise. It’s already there, occupying a gap between the Rational numbers. Analysis simply asks, “What can we learn about this gap?” The number √2 was not arbitrarily placed between 1 and 2, as Dr. Wildberger hints. Analysis helps us to discover that it is there.

Nor is it true that Analysis claims “we can find out what this thing is by making a calculation.” We already know what this thing is: it is the square root of two. Dr. Wildberger is conflating “what this thing is” with the manner by which we symbolize this thing when using a particular notation. That is to say, Dr. Wildberger is attempting to claim that the number **is** its decimal representation. This is why he takes such offense at the ellipsis which is used to show that the decimal representation is incomplete. For Dr. Wildberger, the decimal representation **is** the number.

This is, of course, a silly notion. The symbols which we use to represent an idea are not equivalent to that idea. Nobody thinks, for example, that the color blue necessarily consists of the letters “b,” “l,” “u,” and “e.” Nor would anyone claim that “2” is a more proper symbol for the number it represents than is “two” or “два” or “二” or ||. Similarly, it seems more than a little misguided that Dr. Wildberger is so inordinately attached to the decimal representation of the square root of 2. The fact of the matter is that, so long as it is clear that we are talking about the square root of 2, then it doesn’t matter if we represent that notion with √2 or with 1.41421356… or with “the positive solution of x² = 2” or with “the ratio of the magnitude of the diagonal of a square to that of one of its sides.”

So when Dr. Wildberger writes

√2 = 1.41421356…

and asks, “Is this a correct and meaningful statement?” the answer to both is, “Yes.” None of the displayed digits is incorrect and the ellipsis acknowledges that the display is incomplete. This statement gives us a good bit of information about √2, and that alone makes it meaningful. When Dr. Wildberger asks about moving the ellipsis to display fewer and fewer digits, the expression remains correct and meaningful, but becomes less useful as we omit more information. The simple fact of the matter is that a mathematical statement can most certainly be “meaningful” without carrying perfectly complete information.
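Those displayed digits are not a matter of faith, either; they can be checked exactly. A quick sketch (my own, using Python’s exact integer square root rather than floating-point arithmetic) confirms the leading digits while also confirming that the display is incomplete:

```python
from math import isqrt

# isqrt(2 * 10**40) yields the first 21 decimal digits of the square
# root of 2 exactly, with no floating-point rounding.
n = isqrt(2 * 10**40)
digits = str(n)
print(digits[0] + "." + digits[1:])  # 1.414213562373095048801...

# Every displayed digit is correct, and squaring shows the display is
# incomplete: the truncation squared falls just short of 2 * 10**40.
assert n * n <= 2 * 10**40 < (n + 1) * (n + 1)
```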

Even when Dr. Wildberger presents the question of √2 = … in an attempt to show that the ellipsis is absurd, he is misguided, as this statement actually does have meaning– it tells us that √2 is equal to a number. Now, Dr. Wildberger is correct to point out that one is not likely to get any credit for such an answer on homework or an exam, but his reasoning is incorrect. As Dr. Wildberger well knows, good math homework and exams care less about the completeness of the answer than they do about how the student arrived at that answer. After all, which should receive more credit on a test: a correct answer with incorrect work shown or an incorrect answer with the correct work shown? So, while “√2 = …” may be a *technically* correct response, it does nothing to show that the student has any understanding of whatever mathematical concept is actually being tested.

This idea that the decimal expansion of √2 contains an infinite number of non-repeating digits seems to be the only real objection which Dr. Wildberger presents in this video, but his opposition to it seems misplaced, at best. In the description to the video, Dr. Wildberger notes that he will further discuss the logical problems which he claims exist in the treatment of irrational numbers in his videos on Cauchy sequences and Dedekind cuts, so I will be sure to watch these as well; however, his bold pronouncement that “none of them work” seems more than a little arrogant. We’re not talking about some fringe development in a little-known field which is sparking controversy and debate. On the contrary, Dr. Wildberger is overtly stating that hundreds of years’ worth of the world’s greatest mathematical discoveries are completely wrong.

I believe I understand why Dr. Wildberger makes such outlandish claims. In some of his other work, I have seen him explicitly reject the axioms of infinity and of choice utilized in modern set theoretic frameworks. Certainly, without these axioms, our understanding of the irrationals becomes far less rigorous. However, Dr. Wildberger’s aversion to these axioms has led him to caricature his opposition rather than to treat the opposing viewpoint with even the remotest sense of charity. As such, it seems fairly difficult to take his claims on the subject seriously.

Norman Wildberger’s video on the square root of 2 does not contain the “inconvenient truths” which it purports to show. Worse, it contains rather convenient falsehoods which Dr. Wildberger has utilized in his attempt to denigrate Analysis.

We can use a function for just such a purpose. A function is a specific mathematical tool which allows us to describe an entire set of data points all at once, which we symbolize as f(x) (read “f of x”). We encode the data by means of a mathematical formula. For example, our exemplary rolling ball might well have been encoded by the function f(t) = t/2, where the t represents the time, in seconds, that the ball has been rolling, and the value of the function, f(t), tells us the distance in meters which the ball has traveled in that time. In this particular function, the coefficient of t tells us the rate at which distance changes as time passes– that is, half a meter per second. When the boy first rolls it, the ball is traveling at half a meter per second; when it finishes, it had been traveling at half a meter per second; and at any single point during the journey, the ball is traveling at half a meter per second.

However, this is a very simple example. It describes a situation involving a constant velocity. Things become a bit more muddied when the rate at which a change occurs is, itself, changing.

Our example above describes a **linear function**. Linear functions are so named because they can be graphed on a Cartesian plane to form a straight line. The equation for a linear function is of the form y = mx + b, where b represents the y-intercept (the point at which the line crosses over the y-axis of the plane) and where m represents the slope of the line (the rate of change for the function). Utilizing the function from our example, f(t) = t/2, we have a slope of 1/2, an intercept of 0, and we can produce the following graph:

It’s very easy to see, intuitively, that this line’s slope, or rate of change, is constant throughout the whole function. We don’t even need to see the equation which generated this graph to see that this is the case, if we presume that the line on the graph is actually as straight as it appears. That very straightness is precisely what we mean by a constant rate of change. As such, it is perfectly clear that the graph has the same slope at any one point as it does at any other. Regardless of how far along the graph we look, it will always have the same rate of change.
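We can verify this constant rate numerically. Assuming the linear example is f(t) = t/2 (a ball that covers two meters in four seconds), the change over every one-second interval is identical:

```python
# For a linear function (here f(t) = t/2, my assumed encoding of a ball
# that rolls two meters in four seconds), the change over each
# one-second interval is identical no matter where we look.
def f(t):
    return t / 2

changes = [f(t + 1) - f(t) for t in range(6)]
print(changes)  # [0.5, 0.5, 0.5, 0.5, 0.5, 0.5]
assert all(c == 0.5 for c in changes)
```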

However, this is not true of all graphs. When a function ceases to be linear, the rate of change of that function ceases to be constant. Take, for example, the following graph of the function f(t) = t²/8:

Let’s pretend that, instead of rolling the ball across a flat floor, the little boy has instead set the ball atop a ramp and let go. The ball starts moving slowly, but builds up more and more speed as it moves farther and farther from the boy. After four seconds, the ball is two meters away from the boy– just as in our first example– which means that the ball still traveled half a meter per second, overall. However, it seems entirely clear that the ball was not moving at that speed at every single moment in the journey, the way it had when the boy rolled it across the floor. At the start of its roll, the ball is moving much slower than half a meter per second, while at the end it is moving much faster than half a meter per second.

This introduces a very interesting, and very important, question: how can we tell what the rate of change is at any given point? What is the **instantaneous rate of change**?

For example, let’s say I want to know how fast the ball is moving precisely 3 seconds after the boy has set it rolling. A person might think that they can simply determine how far the ball has gone in that time– 9/8 meters– and then divide that distance by the time– 3 seconds– to conclude that the ball is traveling at 3/8 of a meter per second. However, this has the same problem as the whole 4 second journey: the ball seems to be traveling slower than 3/8 of a meter per second at the start and faster than 3/8 of a meter per second toward the end.

One way in which we know this fact is by looking at how far the ball travels between the second and third seconds of its journey. So, after two seconds, the ball is 1/2 a meter from its starting point. After three seconds, it is 9/8 of a meter from the starting point. This indicates that the ball traveled 5/8 of a meter in one second. But this, again, falls prey to the same problem we’ve been having: the ball seems to be moving more slowly than 5/8 of a meter per second at the 2 second mark and more swiftly at the 3 second mark. We’re closer to the speed of the ball at 3 seconds than we were before, but we still haven’t determined it, quite yet.

We can continue to take smaller intervals of time in order to find better and better approximations of the speed of the ball at the 3 second mark. For example, using the distance the ball moves between the 2.5 second and 3 second marks, or the 2.75 second and 3 second marks, or the 2.99999999998 second and 3 second marks. We can come really, really close to the answer we’re trying to find by doing this, but we don’t end up with the exact answer– and mathematicians are not happy to settle for an inexact answer.
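Here is what those shrinking intervals look like numerically, assuming the ramp example is f(t) = t²/8 (which matches two meters traveled in four seconds):

```python
from fractions import Fraction

# Secant slopes over ever-smaller intervals ending at t = 3, for the
# assumed ramp function f(t) = t**2 / 8: the approximations close in
# on the instantaneous speed without ever reaching it.
def f(t):
    return t * t / Fraction(8)

for h in [Fraction(1), Fraction(1, 2), Fraction(1, 100), Fraction(1, 10**6)]:
    slope = (f(3) - f(3 - h)) / h
    print(f"interval of {h} s: slope = {slope} = {float(slope)}")
```

The slopes creep toward 3/4 of a meter per second, but a tiny gap always remains– precisely the frustration described above.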

Let’s think about what we are doing in these approximations.

If the ball had traveled at a constant speed from the start, at time 0, to the 3 second mark, then its journey could be represented with the line y = (3/8)t. The slope of this line is 3/8– which is the approximate speed we determined when considering the ball over this period. Similarly, the line through the points at the 2 second and 3 second marks has a slope of 5/8, our approximation from the 2 second mark to the 3 second mark. If we were to calculate the slope of the line from the 2.5 second mark to the 3 second mark, our approximation would get even better. Visually, in the graph above, we can see that the linear graphs are getting closer and closer to the parabolic graph– but there’s always some tiny bit of space between the two.

Algebraically speaking, what are we doing in these approximations? How can we translate this problem into our mathematical language?

Well, we are taking the distance which the ball has traveled after 3 seconds– which, in our math language, is f(3)– and we are subtracting the distance which the ball had traveled at an earlier time– say, f(0) or f(1) or f(2)– to find the distance which the ball has traveled between those two times. We are then dividing this distance by the amount of time which has elapsed between the two points: 3 seconds or 2 seconds or 1 second.

Now let’s try to generalize this. We have our function, f(x). We are looking at the difference between the value of the function at some point, x, and the value of the function at some subsequent point, x + h; we are then dividing that difference by the difference in our two points, (x + h) − x– which is just h. So, this leads us to the expression [f(x + h) − f(x)]/h.

As we have seen, the smaller the gap between our two x-values, the closer our approximation becomes. Naturally, we might then think that we can find an exact solution to our problem if we just remove the gap, entirely– that is to say, what happens if we set h equal to zero in the expression that we found, above? However, we very quickly come to a problem if we do that. Evaluating the expression, we’ll see that [f(x + 0) − f(x)]/0 = 0/0. This is certainly problematic– any middle school child should be able to point out that we simply cannot evaluate that 0/0.

But what if we had some number which wasn’t zero, and yet that number was infinitely close to zero? In such a case, we could reasonably assume that our answer is infinitely close to being correct.

Thankfully, in the first part of this series, we learned that we do have such numbers: the infinitesimals. So now, if I replace the h from our above expression with any arbitrary infinitesimal– let’s call it ε– we’ll find that [f(x + ε) − f(x)]/ε evaluates to something infinitely close to the answer which we are looking to find. For the exact answer, as we mentioned, we would like to have been able to replace the h with zero; but now we can be clever. Instead of trying to do undefined operations of math, dividing zero by zero, we can find the Real number solution which is infinitely close to our evaluated expression, which (as you will recall) is called the **standard part** of the expression. By taking the standard part of [f(x + ε) − f(x)]/ε, we can find the exact answer to our problem.
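We can even mimic this bookkeeping in code. The sketch below (my own illustration) uses dual numbers– quantities a + b·ε in which ε² is simply discarded– which are a much cruder gadget than Robinson’s Hyperreals, but which capture the same “evaluate with an infinitesimal, then take the standard part” maneuver. I again assume the ramp example is f(t) = t²/8:

```python
from fractions import Fraction

class Dual:
    """Numbers a + b*eps, where eps is treated as so small that eps*eps
    vanishes -- a computational stand-in for an infinitesimal. The 'a'
    component plays the role of the standard part."""
    def __init__(self, a, b=0):
        self.a, self.b = Fraction(a), Fraction(b)

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a + b eps)(c + d eps) = ac + (ad + bc) eps, since eps^2 -> 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)
    __rmul__ = __mul__

def f(t):  # the assumed ramp function, t^2 / 8
    return t * t * Fraction(1, 8)

eps = Dual(0, 1)
x = Dual(3) + eps   # evaluate infinitely close to t = 3
# f(3 + eps) = f(3) + (slope at 3) * eps; the eps-coefficient is the slope.
print(f(x).b)       # 3/4 -- the exact instantaneous rate
```

The a-component acts as the standard part, and the ε-coefficient delivers the exact slope– no approximation involved.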

Let’s go back to our rolling ball, now, to see how we can put this into use. We want to find the exact speed of the ball at the 3 second mark. Translating this into our expression, we get:

st([f(3 + ε) − f(3)]/ε) = st([(3 + ε)²/8 − 9/8]/ε) = st([6ε + ε²]/8ε) = st(3/4 + ε/8) = 3/4

So, precisely at the 3 second mark, we now know that the ball is traveling at exactly 3/4 of a meter per second. However, we can do even better than this. As mentioned earlier, mathematicians are greedy. We don’t just want to know what’s going on at a few of the points; we want to know what is going on at *all* of the points. So, rather than solving for some particular value of x, such as 3, we can solve the expression for *all* values of x, like so:

st([f(x + ε) − f(x)]/ε) = st([(x + ε)²/8 − x²/8]/ε) = st([2xε + ε²]/8ε) = st(x/4 + ε/8) = x/4

This new function, f′(x) = x/4, is called the **derivative** of our original function. We denote the derivative of f(x) with an apostrophe, written as f′(x) and often read as “f-prime of x.”

The derivative is a very powerful tool. It gives us a way of describing the instantaneous rate of change for *all* points of a given function. When discussing speed or velocity, as we have been doing for our exemplary ball, the derivative of the function for distance gives a function describing velocity. The derivative of the function describing velocity will, in turn, give us a function describing acceleration. Taking the derivative of that function will then tell us how quickly our acceleration is, itself, increasing or decreasing– and so on and so forth. When we take derivatives of derivatives, like this, we refer to them as second, third, fourth derivatives (and so on). So, as we have now seen, the second derivative of a distance function is an acceleration function.
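As a quick numerical check of this chain (again assuming the distance function is f(t) = t²/8), difference quotients recover both the velocity and the acceleration; for a quadratic, the central differences below happen to be exact:

```python
from fractions import Fraction

def f(t):  # distance, in meters, for the assumed ramp example
    return t * t / Fraction(8)

h = Fraction(1, 1000)
t = Fraction(3)

velocity = (f(t + h) - f(t - h)) / (2 * h)                # first derivative, t/4
acceleration = (f(t + h) - 2 * f(t) + f(t - h)) / h**2    # second derivative, 1/4

print(velocity, acceleration)  # 3/4 1/4
```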

The derivative was developed by mathematicians for the express purpose of describing the changes in change. By its use and exploration, we can conquer a great many problems which are incredibly difficult– or even impossible– without this wonderful tool. And, at the very heart of the derivative lie the infinitesimals– these numbers between our numbers– which give this mathematical tool its power.

There are numbers in between the Rational numbers, too. We can define some number, such as √2, which is not equal to any Rational number. There are Rational numbers which are greater than √2, and those which are less than √2, but somehow our number squeezes itself into a gap in between the Rational numbers. In order to find such a number, we need to further extend our understanding of “number” to include the Real numbers. This should all be very familiar to the average high-school student.

Now, what happens if we extend this idea one step further? Are there more numbers which are in between the Real numbers?

For thousands of years, mathematicians have had heated debates about this question. There is a well-known concept in number theory called the Archimedean property, named after the famous mathematician Archimedes (though he, himself, had attributed the idea to his friend and mentor, Eudoxus). Euclid described the notion by saying, “Magnitudes are said to have a ratio to one another which can, when multiplied, exceed one another.” In short, this means that given any two numbers, a and b, such that a < b, we should be able to add a to itself a finite number of times in order to find a number which is larger than b. For example, given the numbers 5 and 34, I can add 5 to itself seven times in order to get a number greater than 34– that is to say, 7 × 5 = 35 > 34. The same holds for any pair we might choose, no matter how small the first number nor how large the second.
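The property is simple enough to compute with directly. As a sketch (my own helper, not Euclid’s), given 0 < a < b, the required number of copies of a is just the first whole number past b/a:

```python
from fractions import Fraction
from math import ceil

def archimedes_count(a, b):
    """Smallest number of copies of a (with 0 < a < b) whose sum exceeds b."""
    n = ceil(b / a)
    if n * a == b:  # need strictly greater, so take one more copy
        n += 1
    return n

print(archimedes_count(Fraction(5), Fraction(34)))       # 7
print(archimedes_count(Fraction(1, 1000), Fraction(2)))  # 2001
```

No matter how tiny a is, a finite count always suffices– that is the Archimedean property, and it is exactly what infinitesimals will violate.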

However, this property led to a very curious problem when mathematicians began trying to discuss the number of points contained in a given line– particularly when those mathematicians attempted to compare the number of points in one line to the number of points in another line. The eminent philosopher, Aristotle, came to the conclusion that such discussions could be nothing but nonsense, and that any attempt to quantify the number of points in a given line would simply lead to confusion and folly. As an example of this, Aristotle discussed what has come to be known as his Paradox of the Wheel. Take a look at the following figure:

Ancient Greek mathematicians, while studying circles, wanted to find some way to discuss the circumference of the circle in the same way in which they talk about other magnitudes. So, they began “unrolling” circles to create a straight line equal in length to the circumference of the circle. Aristotle noticed that, given a larger and a smaller circle which share a centerpoint, rolling out the wheel to produce a straight line equal in length to the circumference of the larger circle causes the smaller circle to produce an equally long line. But how can this be? The smaller circle obviously has a smaller circumference, but rolling it out at the same rotational rate as the larger circle makes it seem to have an equal circumference to the big one!

Galileo Galilei, two millennia after Aristotle, attempted to resolve this paradox by arguing that there must be gaps in the continuum– that is to say, there must be empty spaces between the points in any line or figure– and that these gaps account for how the smaller circle’s circumference can be stretched to equal that of the larger circle. However, other mathematicians were quick to note that this would run afoul of the Archimedean principle. If such gaps existed, it should be possible to continue to stretch them until they became noticeably large. We should be able to magnify a line until we literally see it rend apart into pieces.

In the latter half of the 17th Century, Gottfried Wilhelm Leibniz began to argue that there are numbers which are infinitely small. To the absolute shock of the mathematical community, Leibniz was claiming that a 2000-year-old immutable law of number theory was, in fact, incorrect. There existed numbers, Leibniz claimed, which violate the Archimedean principle; numbers which are greater than zero, but which are nonetheless so much smaller than any Real number that it is impossible to find a finite ratio between that infinitely small number and any Real. You could add the number to itself a thousand times, a million, a quintillion, a googolplex– even Graham’s number of times– and that number would still remain smaller than any Real number which you could possibly imagine.

Not only did Leibniz believe that such numbers exist, he utilized them in order to create an entirely new method of mathematics: Calculus. However, the idea was so incredibly controversial that even other proponents of Calculus– like Isaac Newton, who independently developed that field of mathematics– railed against Leibniz for his reliance upon such an insane concept. Still, Leibniz’s results were indisputable, and a number of mathematicians joined with him in an attempt to find some rigorous and logical means of discussing these infinitely small numbers. However, after a great deal of failure, other avenues began to be explored in order to place Calculus on a rigorous footing. Particularly, the notion of the Limit was put forth, expanded, and eventually made rigorous in the 19th Century by Karl Weierstrass. With a rigorous and logical footing finally established for Calculus, the infinitely small numbers of Leibniz’s devising were abandoned and Calculus classes began being taught based on the idea of the Limit.

Thankfully, this was not the end for our strange and tiny numbers. One hundred years after Weierstrass, and three hundred after Leibniz, a model theorist named Abraham Robinson began to attack the problem. He was fascinated by Leibniz, and wanted to gain a better understanding of the mind which had invented the calculus. Robinson’s work led to his development of a new number system: one which did not adhere to the Archimedean principle, but which otherwise behaved in exactly the same manner as did the Real numbers. He called this new system the Hyperreal numbers. Just as mathematicians had extended the Integers to find the Rationals, and then extended the Rationals to find the Reals, Robinson extended the Real numbers in order to find the Hyperreals.

The Hyperreals contain all of the Real numbers, so any number on the Real line is also on the Hyperreal line. However, the Hyperreal number line also contains two very special types of numbers which are not contained in the Reals. The first of these are Infinite numbers– numbers which have an absolute value greater than that of any Real number. That is to say, we can define some number *H* such that, for any given Real number *r*, it is true that |*H*| > *r*. The second type refers to Infinitesimal numbers. Infinitesimals are the reciprocals of Infinite numbers, and as such, have an absolute value which is smaller than that of any Real number (except 0, which is considered to be Infinitesimal): |*ε*| < |*r*| for every non-zero Real number *r*.

Any number which is not Infinite is called a Finite number– including the Infinitesimals. Once the system is in place, it becomes quite easy to prove some simple, but powerful, properties of the Hyperreals. Given any positive Infinite numbers, *H* and *K*; any positive Real numbers, *a* and *b*; and any positive, non-zero Infinitesimals, *ε* and *δ*; we can derive the following:

- *H* + *K*, *H* + *a*, and *H* + *ε* are Infinite
- *H* · *K* and *H* · *a* are Infinite
- *H* / *a*, *H* / *ε*, and *a* / *ε* are Infinite
- *a* − *b* is Finite (and possibly Infinitesimal, in the case *a* = *b*)
- *a* + *ε* is Finite and non-Infinitesimal
- *a* · *b* and *a* / *b* are Finite and non-Infinitesimal
- *ε* + *δ*, *ε* · *a*, and *ε* / *a* are Infinitesimal

You may notice that there are several cases missing from the above list. These cases are indeterminate forms– that is to say, without knowing more about the particular numbers involved, it is impossible to tell whether the result will be Infinitesimal or Finite or Infinite. The indeterminate forms are *H* − *K*, *H* / *K*, *H* · *ε*, and *ε* / *δ*.
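For readers who like to experiment, the classification rules above can be transcribed into a small lookup table. This is only an illustrative sketch– the kind labels and function name are my own, not standard notation:

```python
# Classify sums, products, and quotients of *positive* Hyperreals by kind.
# Kinds: "I" = Infinite, "R" = positive Real (finite, non-infinitesimal),
# "E" = positive non-zero Infinitesimal. "?" marks an indeterminate form.
# The tables simply transcribe the rules stated in the article.

SUM = {
    ("I", "I"): "I", ("I", "R"): "I", ("I", "E"): "I",
    ("R", "R"): "R", ("R", "E"): "R",
    ("E", "E"): "E",
}

PRODUCT = {
    ("I", "I"): "I", ("I", "R"): "I", ("I", "E"): "?",
    ("R", "R"): "R", ("R", "E"): "E",
    ("E", "E"): "E",
}

QUOTIENT = {  # kind of x / y, keyed by (kind of x, kind of y)
    ("I", "I"): "?", ("I", "R"): "I", ("I", "E"): "I",
    ("R", "I"): "E", ("R", "R"): "R", ("R", "E"): "I",
    ("E", "I"): "E", ("E", "R"): "E", ("E", "E"): "?",
}

def classify(op, x, y):
    if op in ("+", "*"):
        table = SUM if op == "+" else PRODUCT
        return table.get((x, y)) or table[(y, x)]   # + and * are symmetric
    return QUOTIENT[(x, y)]                          # / is not symmetric

print(classify("/", "R", "E"))  # a Real over an Infinitesimal is Infinite: "I"
print(classify("*", "I", "E"))  # Infinite times Infinitesimal: "?" (indeterminate)
```

Note that the quotient table is read in order: a Real divided by an Infinite number, for instance, is Infinitesimal, while the reverse is Infinite.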

We can also derive another very important notion:

For any Finite Hyperreal number, *a*, there is exactly one Real number, *r*, such that *a* − *r* is Infinitesimal. In such a case, we call *r* the **standard part** of *a*, denoted as st(*a*).

Any two numbers which are only separated by an Infinitesimal are said to be **infinitely close** to one another. As such, another way of wording the above is that the standard part of any Finite Hyperreal number is the Real number which is infinitely close to it.

So, now, we can answer the question with which our article started. Are there numbers between the Real numbers? We find that the Hyperreals allow us to answer this with a resounding, “Yes!” Given any Real number, *r*, and any positive Infinitesimal number, *ε*, we can be absolutely certain that there are no Real numbers which come between *r* and *r* + *ε*. This concept is the absolute foundation of Infinitesimal Calculus.

Many other mathematicians and philosophers of the time rightfully balked at the notion. It seemed entirely ludicrous. Bishop George Berkeley famously scoffed at Newton, asking if his fluxions were “the ghosts of departed quantities.” However, it was quite plain that the mathematics which Leibniz and Newton presented *worked*. When the results which could be found from the methods of Calculus were able to be confirmed using other methods, they were found to be accurate and true. Indeed, the Calculus was such a powerful tool that even most mathematicians and philosophers who recognized its flaws continued to utilize it in their work. Many began searching for some way to make the Calculus just as rigorous as the rest of mathematics. These efforts culminated in the work of Karl Weierstrass, who found a way to base Calculus upon a different tool. Instead of the Newtonian “fluxion” or the Leibnizian “differential,” Weierstrass gave mathematics a well-defined notion of the limit.

It is Weierstrass’ method of limits which is still taught, even to this day, in nearly every Calculus textbook in the world; but perhaps it is time to abandon this notion and return to the concept which Newton and Leibniz pioneered.

In the 1960s, a mathematician named Abraham Robinson developed a rigorously well-defined number system called the Hyperreal numbers. This number system included numbers which are larger than any given Real number– known as “infinite” or “unlimited” numbers– as well as their reciprocals, which are greater than zero but nonetheless smaller than any Real number– known as “infinitesimals.” Robinson explicitly noted that his development of the Hyperreals came out of a desire to better understand Leibniz’s thought processes. Indeed, the infinitesimals of the Hyperreal numbers look very much like the “fluxions” and “differentials” of that early Calculus. In 1986, H. Jerome Keisler wrote a textbook for the subject, *Elementary Calculus: An Infinitesimal Approach*, in which he provides a method for teaching Calculus without the need for limits, while still maintaining the rigor desired in mathematics.

Unfortunately, Dr. Keisler’s work has not yet gotten much of a foothold in the educational system. The method of limits has been taught for so long that it would be exceedingly difficult to displace it. However, there are some very distinct pedagogical advantages in Keisler’s approach which may make the whole ordeal well worth the effort.

Let’s look at a simple example. One early Calculus problem with which every student is presented is to find the derivative of the function *y* = *x*². For those who don’t remember, the derivative of a function, written d*y*/d*x*, tells us how much the value of that function changes with respect to a change in the value of *x*. So, let’s say that the value of *y* increases by some amount which we will call Δ*y* when the value of *x* is increased by some amount Δ*x*. Algebraically, we would write this as *y* + Δ*y* = (*x* + Δ*x*)², for the equation we are discussing. We can then take this new equation, and solve it for the value of Δ*y*/Δ*x* as follows:
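The intervening algebra, reconstructed here step by step for *y* = *x*²:

```latex
\begin{align}
y + \Delta y &= (x + \Delta x)^2 \\
y + \Delta y &= x^2 + 2x\,\Delta x + (\Delta x)^2 \\
\Delta y &= x^2 + 2x\,\Delta x + (\Delta x)^2 - y \\
\Delta y &= x^2 + 2x\,\Delta x + (\Delta x)^2 - x^2 \\
\Delta y &= 2x\,\Delta x + (\Delta x)^2 \\
\frac{\Delta y}{\Delta x} &= 2x + \Delta x
\end{align}
```

The derivative, 2*x*, is then obtained by discarding the remaining Δ*x*– and that discarding is precisely where the trouble begins.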

It is these final three steps which the mathematicians of Newton and Leibniz’s day found to be offensive. According to the Calculus, the derivative was the function which results from setting Δ*x* = 0 in the expression Δ*y*/Δ*x* = 2*x* + Δ*x*. If we are to say that Δ*y*/Δ*x* actually has a value, then it must be true that Δ*x* ≠ 0, because division by zero is undefined. However, if Δ*y*/Δ*x* = 2*x* exactly, then it must be true that Δ*x* = 0, because 0 is the only additive identity. Thus, we are left with a contradiction if we claim that Δ*y*/Δ*x* = 2*x*.

Later mathematicians, culminating in Weierstrass, resolved this issue by redefining the derivative to be a limit as the change in *x* approaches zero. Specifically, they said that d*y*/d*x* = lim(Δ*x* → 0) Δ*y*/Δ*x*. Of course, this raises a new question: what, precisely, is a limit? Well, if *f*(*x*) is defined on an open interval about *c*, except possibly at *c* itself, then lim(*x* → *c*) *f*(*x*) = *L* if, for every number *ε* > 0, there exists a corresponding number *δ* > 0 such that for all *x* it is true that 0 < |*x* − *c*| < *δ* implies that |*f*(*x*) − *L*| < *ε*. Needless to say, this is a fairly complex idea, which is why a large amount of time needs to be spent on teaching students how to properly find and evaluate limits.
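To get a feel for how much machinery the ε-δ definition involves, here is a small numerical spot-check of the claim that lim(*x* → 3) 2*x* = 6, using the choice *δ* = *ε*/2. The code is purely illustrative, and the function names are my own:

```python
# Spot-check the epsilon-delta definition for f(x) = 2x at c = 3, where
# the limit L is 6. For this f, delta = epsilon / 2 suffices, since
# |2x - 6| < epsilon exactly when |x - 3| < epsilon / 2.

def f(x):
    return 2 * x

c, L = 3.0, 6.0

def delta_for(epsilon):
    return epsilon / 2              # the delta our proof proposes

def check(epsilon, samples=10000):
    d = delta_for(epsilon)
    for i in range(1, samples):
        x = c - d + (2 * d) * i / samples   # sample the punctured interval
        if x == c:
            continue                         # the definition excludes x = c
        if 0 < abs(x - c) < d and not abs(f(x) - L) < epsilon:
            return False                     # found an x where the bound fails
    return True

print(all(check(eps) for eps in (1.0, 0.1, 0.001)))  # -> True
```

Even for a function this simple, verifying the definition means juggling two quantifiers and an implication– which is exactly the complexity that students must be trained to handle.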

Keisler’s resolution to the derivative problem we presented is somewhat simpler, and quite a bit more intuitive. In his *Elementary Calculus*, the Δ*x* in the equations above is defined to be a non-zero infinitesimal. The derivative is then defined to be d*y*/d*x* = st(Δ*y*/Δ*x*), where *st()* means “the standard part of…” The *standard part* of a finite Hyperreal number, *a*, is the Real number which is infinitely close to *a*; and two numbers are infinitely close if they only differ by an infinitesimal value. Looking again at Step 6 from our work above, we had the expression Δ*y*/Δ*x* = 2*x* + Δ*x*. Since we know that Δ*x* is infinitesimal, we know that 2*x* + Δ*x* is infinitely close to 2*x*. Thus, for any Real number, *x*, we can see that d*y*/d*x* = st(2*x* + Δ*x*) = 2*x*.

From a pedagogical standpoint, it would seem that Keisler’s method is superior. Hyperreal variables can be manipulated algebraically in exactly the same way students are already familiar with manipulating Real variables. The *standard part* function is quite a bit easier and more intuitive to learn than the *limit*. The method is far closer to the original ideas which created Calculus in the first place, and it is just as rigorous a treatment as is the method of limits. Keisler and others have reported that students take to the material more easily when it is taught in this manner. Perhaps the time has come to leave off the use of limits, and to return to the method of infinitesimals for teaching Calculus.
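To see how mechanical Keisler’s approach can be, here is a toy model of it in code: so-called dual numbers, in which an infinitesimal term d*x* is carried symbolically and its square is treated as vanishing. This is only a sketch of the flavor of infinitesimal arithmetic– truncating at the first power of d*x* is my simplification, not the full Hyperreal system, and all of the names here are my own:

```python
# Dual numbers: values of the form a + b*dx, where dx is treated as an
# infinitesimal with dx*dx = 0. The "standard part" simply discards the
# infinitesimal term.

class Dual:
    def __init__(self, real, infinitesimal=0.0):
        self.real = real              # the standard (Real) part
        self.inf = infinitesimal      # coefficient of the infinitesimal dx

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.real + other.real, self.inf + other.inf)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a + b dx)(c + d dx) = ac + (ad + bc) dx, since dx*dx vanishes
        return Dual(self.real * other.real,
                    self.real * other.inf + self.inf * other.real)

    __rmul__ = __mul__

def st(x):
    """The standard part: the Real number infinitely close to x."""
    return x.real if isinstance(x, Dual) else x

def derivative(f, x):
    """Compute st(Delta-y / Delta-x) by carrying dx symbolically."""
    dx = Dual(0.0, 1.0)               # a non-zero infinitesimal
    dy = f(Dual(x) + dx)              # f evaluated at x + dx
    return dy.inf                     # the coefficient of dx is Delta-y/Delta-x's standard part

print(derivative(lambda t: t * t, 3.0))   # derivative of y = x^2 at x = 3 -> 6.0
```

Running `derivative` on *y* = *x*² at *x* = 3 performs exactly the st(2*x* + Δ*x*) computation described above– plain algebra followed by discarding an infinitesimal.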

Ancient languages maintain these problems, but add an entirely new layer of obfuscation which is not found even in most culturally distinct modern languages. Over the past few thousand years, human understanding of the world around us has changed quite significantly. Just one hundred years ago, no one had ever viewed the ground from five miles up in the air. Two hundred years ago, we had no idea that microscopic organisms cause disease. Three hundred years ago, humanity had no idea that oxygen exists. Four hundred years ago, the world was shocked to learn that the planet Jupiter has moons. The manner in which religion, philosophy, and science have discussed a myriad of things about reality has changed so greatly in recent millennia that very often even one word in a single language can mean something exceedingly different to people living in different periods of time.

The documents which comprise the New Testament of the Christian Bible were written 2000 years ago. In the ensuing twenty centuries, many of the words used by the original authors and many of the concepts which they espoused have engendered incredible amounts of revision, alteration, and nuance by subsequent philosophers and theologians which would have been wholly alien to those initial ancient writers. The vast majority of modern readers– including an embarrassingly large number of modern scholars of the text– seem wholly ignorant of this fact when they read a passage from their Bibles.

As an example of what I mean, let’s take a look at a short verse from one of the Gospels, Mark 1:10. The English Standard Version of the Bible, which I generally consider to be a good translation, renders this passage as:

And when he came up out of the water, immediately he saw the heavens being torn open and the Spirit descending on him like a dove.

This is a verse from Mark’s description of the baptism of Jesus by John the Baptist. It seems extremely straightforward to a modern Christian reader. As Jesus comes up from the water, the Holy Spirit comes out of Heaven and alights upon Jesus in the way a beautiful bird might come down from its flight.

However, consider this alternate translation of the same text:

And when he came straight up from the water, he saw the skies being divided and the wind came down toward him like a pigeon.

This is very different, indeed. No mention of Heaven, or the Holy Spirit. It gained the adverb “straight,” but lost the adverb “immediately.” Some of the words are similar, though slightly altered, like “divided” instead of “torn open” and “pigeon” instead of “dove.”

So which translation is correct? Well, as I insinuated earlier, “correct” may not even be a word which we can use when describing translations. However, there are definitely some good reasons to prefer my translation over that of the ESV. Let’s talk about the two biggest changes in my translation over the other: “heavens” versus “skies,” and “the Spirit” versus “the wind.”

The word which the ESV translates as “heavens” and which I translate as “skies” is οὐρανους (ouranous), which is the plural form of the word οὐρανος (ouranos). This one word is sometimes translated as “sky” and other times as “Heaven” by nearly every English translation, including the English Standard Version. Given how different the words are to a modern Christian, this might seem confusing. However, the ancient Greek language didn’t have different words for these two things. Neither did ancient Hebrew, nor Aramaic, nor Latin, nor any other ancient language of which I am aware.

There is a very good reason for this. The modern conception of Heaven as a place which exists wholly removed from the physical cosmos did not exist to these ancient people. When the ancients revered “the heavens” as a divine realm, they were literally talking about the sky which they looked up and saw every day. They referred to “Heaven” as being “above” them or “higher” than them because that’s where the sky actually is. This language doesn’t even make any sense on a cosmic scale, let alone when discussing something wholly distinct from the physical cosmos. The “Heaven” which is discussed by modern theologians is not “above” us, as it has no physical relation to us.

To further illustrate this, look at what the Gospels record Jesus, himself, as saying. In a number of passages (Matthew 24:30, 26:64; Mark 13:26, 14:62), Jesus talks about the Son of Man being seen in “Heaven” coming on the clouds. Modern readers know that clouds are just collections of water vapor in the sky– very physical things, and distinctly not what theologians would consider to be a part of the realm of the divine.

So, then, if the ancients were referring to the sky when using the word οὐρανος, then why do they pluralize it in many places, including the passage which we are here discussing? This is yet another place where ancient culture and modern collide. To us, there is only one sky. It wouldn’t even occur to most people that the word can be pluralized. However, the ancients had a very different understanding of that which resides above us than we do. They thought that the objects which we see in the sky above us– the sun, moon, planets, and stars– were literally attached to crystalline spheres each of which rotated at different distances from the ground. Those spheres were what the ancients meant when they were talking about the “heavens.” Humanity was able to distinguish seven celestial bodies which were distinct from the background of stars through the use of the naked eye. Each of these was considered to be attached to a distinct sphere rotating over the ground at different heights. According to Aristotle, the lowest of these heavens was the Moon, followed by Mercury, then Venus, then the Sun, then Mars, then Jupiter, and finally Saturn in the highest sphere.

Modern Christian theologians generally do not believe that there are multiple divine realms, so pluralizing οὐρανος makes no more sense in light of modern theology than it does in modern cosmology; but it was perfectly rational to an ancient people who truly believed that there were multiple skies above us. In fact, in the New Testament, itself, we have an example of another writer who most certainly espoused this view. Paul, the eminent apostle whose name is attached to nearly half of all the books that make up the New Testament, says in 2 Corinthians 12:2, “I know a man in Christ who fourteen years ago (whether in the body or out of the body I do not know, God knows) was carried off to the third heaven.” Again, this fits rather perfectly with the ancient understanding of the world, but clashes rather significantly with modern cosmology and most modern Christian theology.

For these reasons, I think that “sky” is a much better, and much more preferable, translation of the word οὐρανος than is “Heaven.”

The distinction between the ESV’s “Spirit” translation and my “wind” is a very similar case. Here, the word being translated is πνεῦμα (pneuma). Just as before, the ESV and other English translations alternately use “spirit,” “Spirit,” “breath,” and “wind” to translate this word. You’ll notice that I listed both lower-case “spirit” and upper-case “Spirit,” separately. I did so intentionally, because when translators use that capitalized “S” version of the word, they are saying that the author was referring to the Holy Spirit– as in, the third person of the Trinity– as opposed to any other “spirit.”

Again, the word πνεῦμα carries with it cultural connotations which are somewhat alien to modern readers. The Greek word primarily means “breath” or “wind.” However, again, the ancient people had no concept of modern physics or chemistry. They didn’t know that air is composed of molecules which move and bounce off of other molecules, imparting Newtonian forces in order to cause the motion which we see. All that they knew was that, somehow, the invisible forces of “breath” and “wind” could affect that which was visible. The ancient Hebrews, as well as a few other ancient Near East cultures, came to associate this invisible force with those invisible qualities of a person which animate the visible. As such, in ancient Hebrew, the word רוח (ruach) literally meant “wind” or “breath,” but the “wind” of a person was the part of that person which truly gave them life. This carries down even into modern English in idioms like “the breath of life.”

As Hebrew people in the Greek-speaking Diaspora of the Roman Empire began to utilize ancient Greek in addition to– or even in place of– their ethnic tongue, they began to have the need to discuss these concepts in the common language of the area. As such, they chose to use the word πνεῦμα in the same way that they had utilized רוח, previously. While the Greek word also conveyed a sort of sense of invisible force, most of the Hellenic citizenry of ancient Rome didn’t see πνεῦμα as being something personal or intelligent. This connotation seems to have been the result of a syncretization between Hebrew and Hellenic cultures.

Modern theologians, just as with “Heaven,” do not regard “spirit” to be a physical thing, in the least. To them it is, in fact, the precise opposite of physical. It is entirely non-physical, and while it (somehow) imparts personhood into a being, the physical body of that being is just a shell to contain the spirit. But, again as before, this was not a concept held by ancient peoples. To them, a person’s wind was categorically no different than a storm’s wind. The idea that something might be wholly removed from the physical world would have been entirely alien to most ancient people. Among those who did hold to such a concept– for example, Plato and those who accepted his theory of universals– it would have been entirely anathema to refer to such things as “wind.” After all, “wind” is a thing which can certainly be perceived– perhaps not by the eyes, but certainly by senses like touch and hearing and sometimes even taste or smell. The Platonists insisted that the universals were entirely imperceptible, and that notions like space and time– which can certainly be applied to wind– are entirely meaningless in regard to the universals. It seems quite unlikely that the authors of the New Testament had in mind the modern conception of “spirit” when they used the word πνεῦμα in their writings.

For these reasons, I believe that “wind” or “breath” are far more preferable renderings of the word πνεῦμα than is the word “spirit.”

The words which the ESV used in translating Mark 1:10– “heavens” and “Spirit”– are part of a category of terminology which I refer to as Theologically Loaded Language. These are words which have undergone literally millennia of theological revision and discussion, and which have come to mean very different things than the original text which they translate. These two are just a very tiny example of a rather huge list which includes very common Christian words like “gospel,” “Christ,” “sin,” “angel,” “devil,” “baptism,” “Scriptures,” and many, many others.

For some time, now, I’ve wanted to do a translation of the New Testament books which avoids utilizing this sort of Theologically Loaded Language. I honestly believe that such a translation would be eminently useful to *all* people interested in the Bible, believer and skeptic alike. I would start with Mark– the earliest and the shortest of the Gospels– and progress from there. Unfortunately, however, this would require a great deal of time and effort, even just to produce a single book. I’ve thought about trying to drum up some interest with a crowdfunding site like Kickstarter, IndieGoGo, Patreon, or GoFundMe, but I’ve been somewhat reluctant. Would this be something in which you, my readers, might be interested? If so, please let me know in the comments. If I can engender enough interest, I may well move forward with such a project.

Consider this slightly modified version of the thought experiment…

Fred is sitting in a room at 8:00 am. In the room with him are four Grim Reapers, each of which is currently dormant. When any individual Grim Reaper becomes activated, if Fred is not going to be killed by the next Reaper in the order, then this Reaper will instantaneously kill Fred; otherwise, this Reaper will return to a dormant state and continue to do nothing. Each of the Grim Reapers is timed to activate at a specific time after 8:00 am. The first Reaper will activate at 8:15 am. The second activates at 8:30 am. The third activates at 8:45 am. The fourth activates at 9:00 am.

Now, 8:15 arrives and the first Reaper activates. Does it kill Fred or not? Suppose that it does kill Fred– which requires that the second Reaper is not going to kill Fred. But if Fred dies at 8:15, then the third Reaper in line is not going to kill Fred– it can’t, obviously, since Fred is already dead. However, if that’s the case, then the second Reaper *is* going to kill Fred (since its conditions are met), and the first Reaper’s conditions are no longer valid. So, even though we started by assuming that the first Reaper killed Fred, we’ve learned that this cannot be the case. Indeed, the same holds true for the second Reaper– if the second Reaper kills Fred, then the fourth Reaper cannot kill Fred, meaning that the third Reaper should kill Fred, violating our initial assumption. So, we see that the second Reaper is not going to kill Fred. But if the second Reaper isn’t going to kill Fred, then the first Reaper should– except that we’ve already seen this cannot happen.
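The case analysis above can even be made mechanical. The sketch below brute-forces every possible scenario, under my reading that each of the first three Reapers kills Fred exactly when the next Reaper will not, that the fourth kills him exactly when no one else has, and that Fred can die at most once. No scenario satisfies all of the conditions:

```python
from itertools import product

# kills[i] = True means "Reaper i+1 is the one who kills Fred".
# Encoding assumptions (mine, for illustration):
#   - Reapers 1-3 kill Fred exactly when the next Reaper is not going to
#   - Reaper 4, having no successor, kills Fred exactly when no one else has
#   - physically, Fred can die at most once

def consistent(kills):
    k1, k2, k3, k4 = kills
    if sum(kills) > 1:                  # Fred dies at most once
        return False
    if k1 != (not k2): return False     # Reaper 1's instruction
    if k2 != (not k3): return False     # Reaper 2's instruction
    if k3 != (not k4): return False     # Reaper 3's instruction
    if k4 != (not (k1 or k2 or k3)):    # Reaper 4's instruction
        return False
    return True

solutions = [k for k in product([False, True], repeat=4) if consistent(k)]
print(solutions)  # -> [] : no consistent scenario exists
```

All sixteen possible scenarios fail at least one condition, which is exactly the contradiction the prose walks through by hand.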

Unlike Pruss’s formulation of the paradox, this problem cannot be resolved by simply claiming that actual infinities cannot exist. We’re not relying on actual infinities, here. We are looking at a finite number of Grim Reapers. Nor does it seem reasonable to come to the sort of conclusion which Pruss does in his proposed solution to the paradox. If a person tried to claim that the number “four” cannot actually be a number which applies to the real world because of this paradox, we would all laugh in their faces.

It’s a little bit easier to see the point I was trying to make in my other post, now. Regardless of whether one is an A-Theorist or a B-Theorist as far as Time is concerned, both camps agree that events which lie in the future do not alter the ontology of events in the present. On the A-Theory view of things, I cannot make a decision based upon a future which has not yet been actualized. Things which are not yet actual cannot affect that which is actual, and as such, it is clear that my version of the Grim Reaper Paradox violates this view of things.

Similarly, on the B-Theory, causality is a description of a relation between two events, but it doesn’t affect the ontology of those events. So an event in the future cannot alter the ontology of something in the present. Both events are actualized and static, and my version of the Grim Reaper Paradox violates this precept. However, this also means that events in the present do not alter the ontology of events in the future. The future is just as actual and static as are the past and present, on the B-Theory. As such, it becomes immediately clear that Pruss’ version of the Grim Reaper Paradox violates this same precept, since it is dependent upon the idea that an event can affect the ontology of future events.

I do not think that Pruss’ version of the Grim Reaper paradox shows that actual infinities are inapplicable to the real world any more than my version of this thought experiment shows that the number “four” is inapplicable to the real world. In fact, it seems to me that the paradox is best resolved by abandoning an antiquated and untenable idea of the nature of Time. Apologists like William Lane Craig have attempted to cite the Grim Reaper paradox in order to support the Kalam Cosmological Argument. Ironically, it may be the case that the Grim Reaper Paradox actually *undermines* the KCA, since that argument is entirely dependent upon the tensed A-Theory of Time.