Almost universally, when Calculus is taught to modern students, we preface the entire subject by introducing those students to a concept known as a “limit.” The reason for this, historically, was to ensure that mathematics was taught in a rigorous and well-defined manner. When Leibniz (and, independently, Newton) first developed methods for performing calculus, the concept of a limit was nowhere to be found. The tool which these men *did* utilize in their work, however, was something they had not, at the time, rigorously defined. Newton called it a “fluxion” and Leibniz called it a “differential,” but the concept was the same: a number which was not zero, but which was so small that adding it to any Real number did not yield a different Real number.
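To see the differential at work, consider a standard illustrative example (not taken from Newton's or Leibniz's own writings): differentiating $f(x) = x^2$ in the Leibnizian style, where $dx$ is one of these vanishingly small quantities.

```latex
\frac{(x + dx)^2 - x^2}{dx}
  = \frac{2x\,dx + (dx)^2}{dx}
  = 2x + dx
```

Because $dx$ is so small that adding it changes nothing, the result is simply $2x$. Notice the sleight of hand: $dx$ must be nonzero to serve as a divisor in the first step, yet is treated as zero when it is discarded at the end.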

Many other mathematicians and philosophers of the time rightfully balked at the notion. It seemed entirely ludicrous. Bishop George Berkeley famously scoffed at Newton, asking whether his fluxions were “the ghosts of departed quantities.” However, it was quite plain that the mathematics which Leibniz and Newton presented *worked*. Whenever the results of the Calculus could be confirmed by other methods, they were found to be accurate and true. Indeed, the Calculus was such a powerful tool that even most mathematicians and philosophers who recognized its flaws continued to utilize it in their work. Many began searching for some way to make the Calculus just as rigorous as the rest of mathematics. These efforts culminated in the work of Karl Weierstrass, who found a way to base Calculus upon a different tool. Instead of the Newtonian “fluxion” or the Leibnizian “differential,” Weierstrass gave mathematics a well-defined notion of the limit.
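For reference, the definition that emerged from Weierstrass's program, stated in modern notation: the limit of $f(x)$ as $x$ approaches $a$ is $L$ precisely when

```latex
\lim_{x \to a} f(x) = L
  \iff
\forall \varepsilon > 0,\ \exists \delta > 0 :\
  0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon
```

No infinitely small quantity appears anywhere; every $\varepsilon$ and $\delta$ is an ordinary, finite Real number, which is exactly what made the definition rigorous by the standards of the day.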

It is Weierstrass’ method of limits which is taught, to this day, in nearly every Calculus textbook in the world; but perhaps it is time to abandon this notion and return to the concept which Newton and Leibniz pioneered.
