## More on 0.999…=1

In my last post, I discussed a particular video which I found to be more than a bit misleading. The discussion centered around a simple, but extremely counterintuitive notion of mathematics: the fact that the number 0.999…, or zero-point-nine-repeating, is equal to 1.

Well, as I mentioned, the very counterintuitive nature of the result led at least one of my readers to question its validity. As such, I thought I would lay out one proof of this concept, to make it easier for those who do not accept the result to pinpoint exactly where they disagree. I’ll break my proof down into numbered steps to aid in that effort.

## (1) Definition of 0.999…

By the symbol 0.999…, I mean an infinite decimal expansion in which all digits to the right of the decimal point are 9’s. Mathematically, we can express this as:

$$0.999\ldots = \sum_{n=1}^{\infty} \frac{9}{10^n}$$

## (2) Partial Sums

Those of you who remember your calculus might immediately recognize this summation as a textbook example of a convergent geometric series. However, for those who do not, let’s work through the steps of determining the limit of this expression.

Provided the series converges, we say that the value of the summation is equal to that series’ limit.

**(a)** $\displaystyle \sum_{n=1}^{\infty} \frac{9}{10^n} = \lim_{n \to \infty} \sum_{k=1}^{n} \frac{9}{10^k}$

Similarly, if convergent, the limit of the series is equal to the limit of the partial sums of the series. In general, the *n*th partial sum of our series can be seen to be:

**(b)** $\displaystyle S_n = \sum_{k=1}^{n} \frac{9}{10^k} = 1 - \frac{1}{10^n}$

So, as long as the series converges, we can see that:

**(c)** $\displaystyle \sum_{n=1}^{\infty} \frac{9}{10^n} = \lim_{n \to \infty} \left(1 - \frac{1}{10^n}\right)$
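As a numerical companion to the partial sums (a Python sketch of my own, not part of the proof; `partial_sum` is an invented name), each partial sum can be computed exactly and checked against the closed form 1 - 1/10^n:

```python
from fractions import Fraction

def partial_sum(n):
    """Exact n-th partial sum of 9/10 + 9/100 + ... + 9/10**n."""
    return sum(Fraction(9, 10**k) for k in range(1, n + 1))

for n in [1, 2, 3, 10]:
    s = partial_sum(n)
    # Each partial sum agrees with the closed form 1 - 1/10**n.
    assert s == 1 - Fraction(1, 10**n)
    print(n, s)
```

Each partial sum falls short of 1 by exactly 1/10^n, and it is that shrinking gap which the limit argument drives to zero.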

## (3) Convergence

If the partial sums form a convergent sequence, then the whole series converges. A convergent sequence is one which has an existent, finite limit.

**(a)** $\displaystyle \lim_{n \to \infty} S_n = \lim_{n \to \infty} \left(1 - \frac{1}{10^n}\right)$

**(b)** $\displaystyle \lim_{n \to \infty} \left(1 - \frac{1}{10^n}\right) = \lim_{n \to \infty} 1 - \lim_{n \to \infty} \frac{1}{10^n}$

**(c)** $\displaystyle \lim_{n \to \infty} S_n = 1 - \lim_{n \to \infty} \frac{1}{10^n}$

## (4) Limit

So, now, if $\lim_{n \to \infty} \frac{1}{10^n}$ exists and is finite, then it follows that $\lim_{n \to \infty} \left(1 - \frac{1}{10^n}\right)$ exists and is finite. And, if that is the case, then it is evident that our summation series from (1) converges.

We say that the limit of some function, $f(k)$, exists and is finite if, as *k* is made arbitrarily large, $f(k)$ becomes arbitrarily close to some Real number, *L*.

So, our question now becomes: as *k* is made arbitrarily large, does $\frac{1}{10^k}$ become arbitrarily close to any single Real number? It’s fairly obvious that the larger the *k* we utilize, the closer $\frac{1}{10^k}$ gets to 0. We can, in fact, make $\frac{1}{10^k}$ as close to 0 as we want, simply by choosing a large enough value of *k*.

In general, it will always be true that:

**(a)** $\displaystyle \frac{1}{10^k} > 0$

…for any Real number, *k*. Additionally, for any Real number *r* such that $r > 0$, we can choose a value of *k* which would make it true that:

**(b)** $\displaystyle \frac{1}{10^k} < r$

And for all *k*, it is true that:

**(c)**

Therefore:

**(d)** $\displaystyle \lim_{k \to \infty} \frac{1}{10^k} = 0$

Since 0 is a finite, Real number, it is clear that the limit exists.
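The “arbitrarily close” reasoning is constructive: for any positive *r*, one can exhibit a *k* with 1/10^k < r. A small Python sketch of my own (the function name is invented):

```python
from fractions import Fraction

def witness_k(r):
    """Return a k such that 1/10**k < r, for any rational r > 0."""
    k = 1
    while Fraction(1, 10**k) >= r:
        k += 1
    return k

for r in [Fraction(1, 2), Fraction(1, 1000), Fraction(1, 10**9 + 7)]:
    k = witness_k(r)
    # The found k really does bring 1/10**k below r.
    assert Fraction(1, 10**k) < r
    print(r, k)
```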

## (5) Evaluating the Summation Series

Now, we have everything we need to evaluate our initial summation.

**(a)** $\displaystyle 0.999\ldots = \sum_{n=1}^{\infty} \frac{9}{10^n}$

**(b)** $\displaystyle \sum_{n=1}^{\infty} \frac{9}{10^n} = \lim_{n \to \infty} \left(1 - \frac{1}{10^n}\right)$

**(c)** $\displaystyle \lim_{n \to \infty} \left(1 - \frac{1}{10^n}\right) = \lim_{n \to \infty} 1 - \lim_{n \to \infty} \frac{1}{10^n}$

**(d)** $\displaystyle \lim_{n \to \infty} 1 - \lim_{n \to \infty} \frac{1}{10^n} = 1 - 0$

**(e)** $\displaystyle \sum_{n=1}^{\infty} \frac{9}{10^n} = 1$

**(f)** $0.999\ldots = 1$

Just for fun I googled “definition of real number”. Pity for the high school students who go on to study math at college. This is what I got from the first page and a half:

Real numbers are numbers that can be found on the number line.

The Real Numbers did not have a name before Imaginary Numbers were thought of. They got called “Real” because they were not Imaginary. That is the actual answer!

A ‘real number’ is any element of the set R, which is the union of the set of rational numbers and the set of irrational numbers.

A real number is a rational number or the limit of a sequence of rational numbers.

Seriously, this is worth a look:

http://www.math.ubc.ca/~cass/courses/m446-03/dedekind.pdf

I would be interested in a proof that every real number, as defined by Dedekind, has a constructible decimal expansion. The simple proof of uncountability depends on this.

Finally, and this is fundamentally important in math, what is written in symbols is only a representation of a number, and so one can probably live with the fact that 1.000000(rec) and 0.99999(rec) are two representations of the multiplicative identity of the real number system.

Jolly interesting stuff.

What would it mean for a decimal expansion to be constructible in this context? My immediate reaction would be to say that uncountability implies that most real numbers wouldn’t have a decimal expansion that is ‘constructible’, but it depends on what you mean.

Following Dedekind’s definition of “The Cut”, it requires that every rational number can be assigned to only one of the two sets “above the cut” and “below the cut”. This is all very well in theory, but only possible in practice if there is a finite description of the cut, as in “It is the square root of 2”, and hence a finite description of the decision algorithm. So in this case, and of course in many actual cases, the algorithm can allow the decision to be made: take a rational number, square it; if more than 2, the number is in the “above” set. Now we are left with the problem of the countability of the set of all algorithms, and here I stop!

I can’t let go of this!

The difficulty with the simple proof is the subtraction, as it is only formally possible, so I thought of adding:

0.99999(rec) + 0.100000 = 1.099999(rec) a finite doable operation

0.99999(rec) + 0.010000 = 1.009999(rec) a finite doable operation

and each time the sum is nearer to 1

With a loose version of epsilon/delta I can say “Give me a number bigger than 1 and I can find a rational number which makes the above sum less than your number”

Or, in simpler terms, anything you add to 0.9999(rec) gives you a number bigger than 1

Hence 0.99999(rec) and 1.00000(rec) are representations of the unit 1

And also, if 3+4 = 7 then 0.99999(rec) = 1.00000(rec) = 1, with the commonly accepted meaning of the equals sign.
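The “give me a number bigger than 1” move above can be mechanized. Taking 0.999(rec) at its limiting value 1 (which is, of course, the very point in dispute), the sums 0.999(rec) + 1/10^m equal 1 + 1/10^m, and for any target above 1 a suitable m can be found. A sketch of my own (`find_exponent` is an invented name):

```python
from fractions import Fraction

def find_exponent(target):
    """For rational target > 1, return m with 1 + 1/10**m < target."""
    m = 1
    while 1 + Fraction(1, 10**m) >= target:
        m += 1
    return m

m = find_exponent(Fraction(1001, 1000))
# The chosen power of ten brings the sum strictly below the target.
assert 1 + Fraction(1, 10**m) < Fraction(1001, 1000)
print(m)
```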

I looked into how real numbers were defined because I wanted to understand how a concept as intangible as ‘infinity’ could be used in a supposedly logic-based subject like mathematics.

Attempts at defining real numbers include declaring them as equivalence classes of Cauchy sequences of rational numbers, Dedekind cuts, or certain infinite “decimal representations”. These definitions all require the acceptance of ‘infinitely many’ iterations or occurrences.

But when mathematicians say “for n = 1 to infinity” or “infinitely many” what number type are they referring to (do they even know or think about it – I doubt it)?

It can’t be natural numbers or integers because these cannot “go to infinity”. By definition they can only take finite values. It can’t even be ‘reals’ because these do not allow infinity as a value.

So presumably they must be talking about something like the Hyperreal numbers or the Surreal numbers? Note that these are extensions to the reals, which they both take as already being well-defined.

So all definitions of ‘reals’ are based on the assumption that ‘reals’ are already well-defined. In short, real numbers are not well-defined at all.

This is a glaring problem in ‘Step(1)’ of the main article above, where the notion of “for n = 1 to infinity” has been used as if this is obviously an acceptable thing to use.

This comes as no surprise because mainstream mathematics cannot define ‘infinity’. It merely assumes it exists and treats any weird consequences as simply discovering the strange nature of infinity rather than indicating a mistake.

As for the simple proof of ‘10x – x’, it is scandalous that any mathematician should claim this is in any way valid (but they all still do). The flaw in the so-called proof is clearly shown in this video, from 06:00 to 08:45: https://www.youtube.com/watch?v=--HdatJwbQY.

In order to make the subtraction work, you have to ‘borrow’ an extra term from ‘infinity’ for one of the two endless series. If you accept infinity, then there are ‘infinitely many’ other ways you could ‘borrow’ different amounts of terms ‘from infinity’ and the subtraction would not cancel out completely (thus there are ‘infinitely many’ ways 0.999… does not =1).

Another big problem with the main article above is in step (2) where it is stated: “Provided the series converges, we say that the value of the summation is equal to that series’ limit”.

You can define the notion of a ‘limit’ and you can say the limit has a value of 1 and that this limit is associated with the series 0.999…, but you cannot simply assert that the value of the summation is equal to the limit. This is to simply assume what you set out to prove.

By the way, you can define 0.999… without using ‘infinitely many’; you can say it is the series where the sum to the n-th term is 1 minus (1 / (10 to the power n)).

This approach can be taken for all series, whether ‘converging’ or not. The ‘fixed part’ will equate to what you call the ‘limit’ for series that are said to ‘converge’. Here we have a property with a clear meaning that applies to any type of series and which does not require us to assert the series must equate to it.

I can absolutely assure you that mathematicians have put an incredible amount of thought into this subject, including a great many volumes of erudite work and quite a bit of heated discussion. And they have done so for no less than 400 years.

When a mathematician says “for *n* = 1 to infinity” or “as *n* goes to infinity,” they are not proposing that there is any time when the integer *n* will equal infinity; that would be entirely incoherent. Rather, these phrases act as a shorthand for “let us explore the trend in the change in value of a particular function as *n* is made arbitrarily larger with no upper bound.”

This is simply not true. I’ve given one definition of “infinity” above, when discussing infinity in the evolution of a function. When discussing “infinity” as a property of numbers, there is a very simple definition: for all real numbers *r*, an infinite number *N* is one such that 0 < |*r*| < |*N*|.

Once again, Mr. Peny’s video is entirely incorrect. I find it exceedingly curious that you acknowledge the nigh-unanimous agreement of professional mathematicians on this subject, but you would instead trust the flawed analysis of a YouTube video produced by someone who is not a mathematician.

There is no “borrowing” of a term from infinity, any more than you “borrow” a zero from infinity when you say that 1*10=10.

It is not to simply assume what we set out to prove. Rather, it is to define the notion of equality for functions. What does it mean to say that f(x) is equal to g(x)? We are defining this notion in a consistent manner and showing that, based upon this definition, 0.999… is equal to 1.

You *could* define it as such, but then you would not be talking about the same thing which mathematicians are talking about when they refer to 0.999…; mathematicians mean precisely what I wrote in (1) when they use the symbol 0.999… in a discussion. Your alternate definition, here, does not define a single number, but rather a whole set of numbers; one which, ironically, contains an infinite number of terms.

You said: “they are not proposing that there is any time when the integer n will equal infinity”.

And: “let us explore the trend in the change in value of a particular function as n is made arbitrarily larger with no upper bound”

But if n is an integer, the ‘trend’ is immaterial because we know that n cannot reach ‘infinity’ and so we know the value of the function (the sum to the n-th term of the series) cannot equal the ‘limit’ for any n. Thus your starting definition in (1) is false.

You said: “there is a very simple definition: for all real numbers r, an infinite number N is one such that 0<|r|<|N|.”

I could just as easily declare a Ponderer number, P, to be one such that 0 < |r| < |N| < |P|.

And because I am not showing you how I construct a P number, and because I claim that any counter-intuitive answers that result from using P are just down to the nature of P, you will be unable to prove that P is not a valid number. I will have denied you the ability to prove it is invalid through my vague definition.

You said: “Once again, Mr. Peny’s video is entirely incorrect.”

But once again, you fail to identify the flaw in his argument.

You said: “We are defining this notion in a consistent manner and showing that, based upon this definition, 0.999… is equal to 1”

If you think of 1 as being the series 1 +(zero times n), then the sum to the n-th term will always be 1. This allows the two series to be compared. The series for 1 is clearly different to the series where the sum to the n-th term is 1 minus (1 / (10 to the power n)).

You have chosen to compare two functions by asserting they must both equate to a single numeric value. You cannot prove the sum will somehow equate to the limit after ‘infinitely many’ iterations (which cannot be realised, if n is an integer). So you simply assert what you want to prove: that the sum equals the series limit.

You said: “You could define it as such, but then you would not be talking about the same thing which mathematicians are talking about when they refer to 0.999…”

But then why have mathematicians chosen to use a definition that makes no sense (for the reasons I have just explained) rather than this one that does?

I don’t see why you think this is relevant. I very clearly defined what I meant by “limit” in the main article, and that definition does not require that *n* ever equal infinity. As I said, I completely agree that *n* cannot ever equal infinity, and that this would not even be a cogent statement. That’s completely irrelevant to what it means for a particular number to be the limit of a particular function.

You absolutely could proffer such a definition. It would then be incumbent upon you to show that such numbers are useful and can be discussed by some consistent mathematics; for example, the way that Abraham Robinson expounded upon the Hyperreal numbers back in the 1960s.

The flaw comes around 8:05 in the video which you keep posting, in which Mr. Peny decides to simply omit the ellipsis denoting that the sequence is an infinite expansion, and then pretends that he’s discussing the same number which he had been before.

Nothing in my argument, in the original article, ever required that we perform infinitely many iterations of any iterative function, so I am not sure why you think this is at all relevant.

They haven’t. They’ve chosen to utilize a definition which is perfectly sensible and which is consistent with all the mathematics developed over the last several millennia of human scholarship.

You said: “I completely agree that n cannot ever equal infinity, and that this would not even be a cogent statement. That’s completely irrelevant to what it means for a particular number to be the limit of a particular function”

Then which of these statements is wrong:

(1) After n terms have been added, we cannot add ‘infinitely many’ more terms because however far we extend n, it can only contain a finite value

(2) Any finite value for n will result in a sum that is less than 1

(3) As n cannot reach ‘infinity’, the sum cannot reach the value of 1

(4) Asserting that the sum must equal the limit based on the ‘trend’ directly contradicts (3)

And which of these statements is wrong:

(1) The sequence of positive non-zero terms (in 0.999…) is endless, meaning there is no last term

(2) As there is no ‘last term’, any attempt to calculate the sum as a single fixed value must fail

Or does this logic make perfect sense?

(1) Despite the notion of ‘endlessness’ indicating we cannot have a completed infinity, we will assume you can

(2) Despite not being able to prove the sum can have a fixed value, we will assume it can

(3) We will examine the trend as n increases and we will assert that the sum must equal the limit, and we will call this proof

You said: “Mr. Peny decides to simply omit the ellipsis denoting that the sequence is an infinite expansion, and then pretends that he’s discussing the same number”

Let me explain the flaw in another way to add clarity…

You start off with x = 0.999…; here we have ‘infinitely many’ terms.

When we multiply by 10 to get 10x, we still have ‘infinitely many’ terms but each term has been multiplied by 10.

The illusion here is that we have created an extra term because the decimal representation is 9.999…, but we have not. This is easier to visualise if you think in terms of a series of individual terms as shown in the video. You have not created an extra term by multiplying by 10!

The 9.999… series needs to have 1 more term than the 0.999… series in order for the endless parts of 10x and x to match exactly (and thus cancel out) when performing the subtraction.

Now if we are allowed to borrow extra terms from infinity, then there are ‘infinitely many’ ways that different numbers of terms could be borrowed and where 0.999… would appear to not equate to 1 (using this same so-called proof).
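The disputed subtraction can be carried out exactly on finite truncations, where no question of ‘borrowing’ arises: 10·x_n − x_n always leaves 9 minus a leftover of 9/10^n, and that leftover shrinks toward zero. A Python sketch of my own (not from either commenter; `truncation` is an invented name):

```python
from fractions import Fraction

def truncation(n):
    """x_n = 0.999...9 with exactly n nines, as an exact fraction."""
    return 1 - Fraction(1, 10**n)

for n in [1, 5, 20]:
    x = truncation(n)
    # 10*x_n - x_n = 9*x_n = 9 - 9/10**n: the shortfall from 9
    # is exactly 9/10**n, and it shrinks as n grows.
    assert 10 * x - x == 9 - Fraction(9, 10**n)
    print(n, float(9 - (10 * x - x)))
```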

We’ll start with (1), which is wrong. The fact that the *n*th partial sum is finite does not imply that we cannot add infinitely many more terms. After that, (2) is correct. However, (3) is incorrect; once again, the definition of a limit is not dependent upon *n* reaching infinity. Since (3) is wrong, it then follows that (4) is also wrong.

Here, statement (2) is wrong. The fact that there is no final term does not imply that “any attempt to calculate the sum as a single fixed value must fail.” I defined a method for calculating a single, fixed value for this series which does not fail, which is rigorous, and which is consistent with the rest of mathematics.

Here, I see no reason to accept the premise of (1), that the concept of endlessness implies we cannot have a completed infinity. I disagree with (2), and have shown that we can prove the sum has a single, fixed, and meaningful value which is consistent and useful in the rest of mathematics. In (3) we are not asserting that the sum must equal the limit, we are defining what we mean when we say two functions are equal. Once again, this definition yields results which are entirely consistent with the rest of mathematics.

I agree so far.

I’m confused: who has ever asserted that we have created an “extra term”? Assuming *x* is the series which I defined in (1) of the article, I completely agree that 10*x* has the same number of terms as *x*. This seems entirely irrelevant.

No, it doesn’t. Again, both series have exactly the same number of terms. Both 9.999… and 0.999… have a 9 for the digit in the tenths place. Both have a 9 for the digit in the hundredths place. Both have a 9 in the thousandths place. And, in general, both have a 9 in the 10^(-*k*) place. Since 9 minus 9 always equals zero, the 10^(-*k*) places from the two numbers will always cancel with one another. This doesn’t require that 9.999… have any more terms in its series than does 0.999…

If we really give this some wider thinking, we will see that 1.0000(rec)… is not the same thing as 1.

You said: “The fact that the nth partial sum is finite does not imply that we cannot add infinitely many more terms”

You agree that n is an integer and cannot become infinite, and you have defined your summation using n, and yet you still claim you can add an ‘infinite amount’ of terms. Can you really not see a problem?

Then you said; “(3) is incorrect; once again, the definition of a limit is not dependent upon n reaching infinity”

But point (3) does not mention a limit. It says “As n cannot reach ‘infinity’, the sum cannot reach the value of 1”. It says nothing about the definition of a limit being dependent on reaching infinity. You are arguing against something you have asserted it says, not what it actually says.

You said: “I defined a method for calculating a single, fixed value for this series”

In my first two replies I pointed out why I believe you did not.

You said: “In (3) we are not asserting that the sum must equal the limit, we are defining what we mean when we say two functions are equal.”

Surely the most sensible way to compare a number with an endless series is to consider the number as an endless series [e.g. one where the n-th sum is 1 plus (zero times n)] and then compare the n-th sums.

Rather than elevating the simpler object to the same structure as the more complicated object for comparison, you are deciding to extract a property from the more complicated object (the limit) that happens to be of the same type as the simpler object, and you are using that as the basis for your comparison.

I am not disputing that you believe it is better to consider the series as being equal to some fixed value, or that you have defined the way to do this is to assert that the sum equals the series limit.

You can argue that this is the accepted or ‘defined’ way to compare functions, but just saying something is defined in a certain way does not make it the best way of doing something.

I am not disputing that the limit is 1.

Therefore by asserting/defining that the sum equals the series limit, you have asserted/defined what you set out to prove.

You said: “I’m confused– who has ever asserted that we have created an “extra term?””

And later you said: “in general, both have a 9 in the 10^(-k) place”

But here you have created an extra term without even realising it.

If you start off with a series where the n-th term is n/(10 to the power n) then when you multiply the series by 10 you just get a series where the n-th term is 10n/(10 to the power n).

In the so-called proof, you have to compare the first (n+1) terms from the 10x series with the first n terms from the x series in order for the trailing parts to cancel each other out.

The same flawed logic can be used with diverging series. For example,

Series1 = 1+2+4+8+…

Series2 = 2 x Series1

Series1 = Series2 – Series1

= (2 + 4 + 8 + …) – (1 + 2 + 4 + 8 + …)

= -1

Therefore 1+2+4+8+… = -1

And by subtracting -1 from both sides: 2+4+8+… = -2

But, if we are allowing this logic where we can line-up whatever terms we want to cancel out the trailing parts (which it appears we are), then

1 + 2 + 4 + … = 1 + (1+1) + (1+1+1+1) + … = 1 + 1 + 1 + …

And by the same logic

2+4+8+… = 1+1+1+…

Therefore

-1 = 1+1+1+… = -2

Contradiction!

Correction: I meant to say “by subtracting 1 from both sides” not -1 obviously!
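For contrast with the convergent case, the divergent example above can be probed numerically: its partial sums grow without bound, which is precisely why the limit definition assigns it no value and why the term-shifting algebra is not licensed there. A sketch of my own (`divergent_partial` is an invented name):

```python
def divergent_partial(n):
    """n-th partial sum of 1 + 2 + 4 + 8 + ...; equals 2**n - 1."""
    return sum(2**k for k in range(n))

for n in [1, 5, 10, 30]:
    # The sums keep doubling and never settle toward a finite limit.
    print(n, divergent_partial(n))

assert divergent_partial(10) == 2**10 - 1
```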

I start with the reasonable assertion that 0.333(rec)… is the result of dividing 1 by 3, and 1 divided by 3 is one third, a well defined number.

So let us divide 2 by 2. Well, it’s easier to see what is going on if we divide 20 by 2, using a valid but non-standard variation of the “standard algorithm”.

Here goes:

Write 20 as 20.00000(rec)…

Then start the algorithm: 2 goes into 20 nine times, with remainder 2

Bring down the first decimal zero and get remainder 2.0

2 goes into 2.0 nine times, with remainder 0.2

…and so on…

At each step we get another 9 in the decimal expansion (quotient as a decimal), and so the process is unending, and gives 9.9999(rec)…

But we “know” that 20 divided by 2 is 10, so 9.9999(rec)… and 10 are representations of the same number.

One could start with 2 divided by 2, but this requires a bit more suspension of disbelief.

Clearly the problem lies with the general repeating decimal expansion process, as for 1/7 etcetera. 0.9999(rec) is not “special”.
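The non-standard division described above is mechanical enough to sketch in code (my own illustration; `divide_digits` is an invented helper). Capping each quotient digit at 9 is exactly what produces the endless run of nines:

```python
def divide_digits(dividend, divisor, n_digits):
    """Long division restricted to single digits 0-9, per the
    non-standard variation above (valid while dividend // divisor < 100).
    Returns (units digit, list of decimal digits)."""
    d = min(dividend // divisor, 9)       # largest *single* digit
    remainder = dividend - d * divisor
    decimals = []
    for _ in range(n_digits):
        remainder *= 10                   # "bring down" a zero
        q = min(remainder // divisor, 9)
        decimals.append(q)
        remainder -= q * divisor
    return d, decimals

print(divide_digits(20, 2, 6))   # 20 / 2 comes out as 9.999999...
print(divide_digits(1, 3, 6))    # the ordinary 0.333333...
```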

You said: “I start with the reasonable assertion that 0.333(rec)… is the result of dividing 1 by 3, and 1 divided by 3 is one third”

Hold on a minute, this is far from reasonable.

After n digits have been processed by the decimal expansion of 1/3, you have 0.333…3 (i.e. a string of n ‘3’ digits) plus (1/3)(1/(10 to the power n)).

You need the last term for equality.

Unless you know how to change from a finite number of iterations into an infinite number of iterations, you can only ever process a finite number of digits.

So you are doing two very unreasonable things here.

First you are assuming that ‘infinitely many’ can somehow be achieved.

Secondly you are asserting that when this mysterious value is reached, the last expression (that I pointed out above) will somehow disappear.

You cannot prove any of these claims.

With your 20 divided by 2 example, after processing n digits you need to add an expression that takes account of your remainder in order to achieve equality.

So after two digits, 9.9 you would have to add 0.1. However far you go you will not achieve ‘infinitely many’ nines and have no remainder, at least not in any mathematically rigorous sense.
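Both finite remainder identities in this comment do check out exactly: after n digits, the truncated expansion plus the remainder term recovers the starting value on the nose. A Python sketch of my own (both function names invented):

```python
from fractions import Fraction

def third_truncation(n):
    """0.333...3 (n threes) plus the remainder term (1/3)/10**n."""
    digits = sum(Fraction(3, 10**k) for k in range(1, n + 1))
    return digits + Fraction(1, 3) / 10**n

def twenty_truncation(n):
    """9.99...9 (n decimal nines) plus remainder/divisor = (2/10**n)/2."""
    quotient = 10 - Fraction(1, 10**n)
    return quotient + Fraction(2, 10**n) / 2

for n in [1, 2, 8]:
    assert third_truncation(n) == Fraction(1, 3)   # equals 1/3 exactly
    assert twenty_truncation(n) == 10              # equals 10 exactly
```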

It would seem that you do not accept the notion of an infinite decimal expansion at all, and so the discussion of the equality or not between 0.999(rec) and 1 is somewhat pointless.

No-one is arguing that 0.9999…(as many nines as you like) + the correct bit left over is not equal to 1.

Here’s a little verse I posted in August 2014

Infinity, a place beyond.

That most strange place, infinity,

Is somewhere I don’t want to be.

I’d rather stay with Brouwer

In his ivory tower.

I would gladly accept the notion of an infinite decimal expansion if it could be shown how a value expressed in totally finite terms (like 1/3, or the square root of 2, or pi = the limit of the circumference of a circle with diameter 1, etc.) can be transformed into a decimal containing ‘infinitely many’ digits.

I don’t need to see all the digits, I just want to know how the algorithm can end and achieve equality with the starting value. I fail to understand how anyone can accept the notion without this proof.

The great thing about trying to prove 0.999… equals 1 is that if it could be done, then it would clearly show how a finite object can transform into an infinite object with the same value. But sadly, this has never been proved; people just assume it to be true.

By saying you’d rather stay with Brouwer I assume you have some sympathy with his Intuitionism, in which all infinity is considered to be potential infinity (= a mathematical procedure in which there is an unending series of steps).

But you can still have endlessness without the use of expressions like ‘potential infinity’. Regardless of how precisely such terms are mathematically defined, they will always carry mysterious connotations, which do not help matters. I believe the philosopher David Hume wanted all notions of infinity removed from mathematics. The words used in mathematics should help to provide clarity and assist understanding, not create confusion and hint at mysticism.