# FAQ: When is .999... less than 1? Robinson on non-standard integers in N* – N

Recently, Prof. Katz took a look at the notorious student-teacher frictions over .999..., and weighed in... on the side of the students.

David Tall comments as follows in his latest book:

"[A mathematician] may think of the physical line as an approximation to the infinity of numbers, so that the line is a practical representation of numbers... [and] of the number line as a visual representation of a precise numerical system of decimals. However, this still does not alter the fact that there are connections in the minds of students based on experiences with the number line that differ from the formal theory of real numbers and cause them to feel confused."

Students find .999... confusing. The key question is whether or not we are dealing with the real numbers in discussing .999... Let's refer to the standard evaluation of .999... to 1 as the unital evaluation. Let's examine the problem of student resistance to the unital evaluation. The fundamental problem is that the curricula are set up in such a way that students are exposed to unital evaluation before they learn about either the real number system, or the rigorous notion of a limit.

Before the number system has been specified, the students' hunch that .999... falls infinitesimally short of 1 can be justified in a mathematically rigorous fashion, in the framework of the hyperreal number system. Interpretations of the symbol .999... other than the unital evaluation are possible, interpretations more in line with the students' naive initial intuition, persistently reported by teachers.

Question 1. Aren't there many standard proofs that 0.999...=1? Since we can't have that and also 0.999...≠1 at the same time, if mathematics is consistent, then isn't there necessarily a flaw in the proof given in the text "strict non-standard inequality"?

Answer. The standard proofs are of course correct, in the context of the standard real numbers. However, the annals of the teaching of decimal notation are full of evidence of student frustration with the unital evaluation of .999... This does not mean that we should tell the students an untruth. What this does mean is that it may be instructive to examine why exactly the students are frustrated. The important observation here is that the students are not told about either of the following two items:
1. the real number system;
2. limits,

before they are exposed to the topic of decimal notation, as well as the problem of unital evaluation. What the text "strict non-standard inequality" argues is that so long as the number system has not been specified explicitly, the students' hunch that .999... falls infinitesimally short of 1 can be justified in a rigorous fashion, in the framework of Abraham Robinson's non-standard analysis.

Question 2. Isn't a problem with the proof that the definitions aren't precise? The text says that 0.999... has an "unbounded number of repeated digits 9". That is not a meaningful mathematical statement; there is no such number as "unbounded". If it is to be precise, then the text needs to provide a formal definition of "unbounded", which it hasn't done.

Answer. The text does not mean for this comment to be a precise definition. The precise definition of .999... as a real number is well known. The thrust of the argument is that before the number system has been explicitly specified, one can reasonably consider that the ellipsis "..." in the symbol .999... is in fact ambiguous. From this point of view, the notation .999... stands, not for a single number, but for a class of numbers, all but one of which are less than 1.

Note that F. Richman argued in '99 (Math. Mag. vol. 72) that a strict inequality .999... < 1 would necessarily require certain cancellations to be disallowed. He constructed a natural semiring, motivated by constructivist considerations. In the context of the semiring, the absence of certain cancellations (i.e. subtractions) leads to a system where a strict inequality .999... < 1 is satisfied.

Question 3. Doesn't decimal representation have the same meaning in standard analysis as non-standard analysis?

Answer. Yes and no. A. Harold Lightstone has developed an extended decimal notation, available for every (say, finite) hyperreal number, that gives more precise information about the hyperreal. In his notation, the standard real .999... would appear as

.999...;...999...

Question 4. Since non-standard analysis is a conservative extension of the standard reals, shouldn't all existing properties of the standard reals continue to hold?

Answer. Certainly, .999...;...999... equals 1, on the nose, in the hyperreal number system as well. An accessible account of the hyperreals can be found in Chapter 6, "Ghosts of departed quantities", of Ian Stewart's popular book From Here to Infinity.

Question 5. The text on "strict non-standard inequality" mentions that Lightstone represents a nonstandard number less than 1 as .999...;...999^. Wouldn't he consider this as something different from .999..., since he uses a different notation, and wouldn't he say

.999...;...999^   <   .999... = 1?

Question 6. Isn't the text arbitrarily redefining .999... as equal to .999...;...999^, which would contradict the standard definition?

Answer. No, the contention is that the notation is ambiguous, and could reasonably be applied to a class of numbers infinitely close to 1 (including the one above).

Question 7. The text claims that "there is an unbounded number of 9s in .999..., but saying that it has an infinite number of 9s is only a figure of speech". Now there are several problems with such a claim. First, there is no such object as an "unbounded number". Second, isn't "infinitely many 9s" not a figure of speech, but rather quite precise: infinite in this context would mean the cardinal number aleph_0?

Answer. One can certainly choose to call the output of a series whatever one wishes. The terminology "infinite sum" is a useful and intuitive term when it comes to understanding standard calculus. In other ways, it can be misleading. Thus, the term contains no hint of the fact that such an "aleph_0-fold sum" is only a partial operation, unlike the inductively defined n-fold sums. Namely, a series can diverge, in which case the infinite sum is undefined (to be sure, this does not happen for series representing real numbers). A more serious problem with the "aleph_0-fold sum" intuition is that it creates a serious impediment to understanding Lightstone's extended decimals

.a_1 a_2 a_3 ... ; ... a_H ...

If one thinks of the standard real as an aleph_0-fold sum of the countably many terms a_1/10, a_2/100, a_3/1000, etc., then it would look as though Lightstone's extended decimals add additional positive (infinitesimal) terms to the real value one started with (which seems to be already "present" to the left of the semicolon). It then becomes difficult to understand how such an extended decimal can represent a number LESS than 1. For this reason, it becomes necessary to analyze the "infinite sum" figure of speech, with an emphasis on the built-in limit.
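The role of the built-in limit can be made concrete with exact arithmetic. The following Python sketch (the helper name `partial_sum` is illustrative, not from the text) verifies that every finite partial sum of .9 + .09 + .009 + ... falls short of 1 by exactly 1/10^n; the value 1 appears only after the limit is taken:

```python
from fractions import Fraction

def partial_sum(n):
    """Exact value of 9/10 + 9/100 + ... + 9/10^n."""
    return sum(Fraction(9, 10**k) for k in range(1, n + 1))

# Every finite partial sum misses 1 by exactly 1/10^n;
# reaching 1 itself requires the limit step.
for n in (1, 5, 20):
    assert 1 - partial_sum(n) == Fraction(1, 10**n)
```

The same identity, 1 − partial_sum(n) = 1/10^n, is what the non-standard reading transfers to an infinite hyperinteger H.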

Question 8a. Are you trying to convince me that the expression infinite sum, routinely used in Calculus, is only a figure of speech?

Answer. The debate over whether or not an infinite sum is a figure of speech, is in a way a re-enactment of the Newton-Berkeley debate. The founders of the calculus thought of
1. the derivative as a ratio of a pair of infinitesimals, and of
2. the integral as an infinite sum of terms f(x)dx.
Bishop Berkeley most famously criticized the former as follows. The infinitesimal

dx
appearing in the denominator is expected to be nonzero at the beginning of the calculation, yet at the end of the calculation it is neglected as if it were zero. The implied stripping away of an infinitesimal at the end of the calculation occurs in evaluating an integral, as well. Abraham Robinson solved the 300-year-old logical puzzle of the infinitesimal definition of the integral, in terms of the standard part function. The integral is not an infinite Riemann sum, but rather the standard part of the latter. From this viewpoint, calling it an infinite sum is merely a figure of speech, as the crucial final step is left out.

Question 8b. Perhaps the historical definition of an integral, as an infinite sum of infinitesimals, had been a figure of speech. But why is an infinite sum of a sequence of real numbers more of a figure of speech than a sum of two real numbers?

Answer. Foundationally speaking, the two issues are closely related. Namely, the rigorous justification of the notion of an integral is identical to the rigorous justification of the notion of a series. One can accomplish it finitistically with epsilontics, or one can accomplish it infinitesimally with the standard part. In either case, one is dealing with an issue of an entirely different nature than finite n-fold sums.

Question 9. You have claimed that "saying that it has an infinite number of 9s is only a figure of speech". Of course "infinity" is not a number in standard analysis: this word refers to a number in the cardinal number system, i.e. the cardinality of the set of digits; it does not refer to a number in the real number system.

Answer. One can certainly consider an infinite string of 9s labeled by the standard natural numbers. However, when challenged to write down a precise definition of .999..., one invariably falls back upon the limit concept (and presumably the respectable epsilon, delta definition thereof). Thus, it turns out that .999... is really the limit of the sequence .9, .99, .999, etc. Note that such a definition never uses an infinite string of 9s labeled by the standard natural numbers. Informally, when the students are confronted with the problem of the unital evaluation, they are told that the decimal in question is zero, point, followed by infinitely many 9s. Well, taken literally, this describes the hyperreal number

.999...;...999000...

perfectly well: we have zero, point, followed by H-infinitely many 9s. Moreover, this statement is in a way truer than the one about the standard decimal, as explained above (the infinite string is never used in the actual standard definition). The hyperreal is an infinite sum on the nose: it is neither a limit, nor can it be approximated by finite sums.

Question 10. Do limits have a role in the hyperreal approach?

Answer. Let u_1 = .9, u_2 = .99, u_3 = .999, etc. Then the limit, from the hyperreal viewpoint, is the standard part of u_H for any infinite hyperinteger H. The standard part strips away the (negative) infinitesimal, resulting in the standard value 1, and the students are right almost everywhere.
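In symbols, writing u_n = 1 − 1/10^n for the n-th truncation, the computation sketched above reads:

```latex
\operatorname{st}(u_H) = \operatorname{st}\left(1 - 10^{-H}\right)
                       = 1 - \operatorname{st}\left(10^{-H}\right) = 1 - 0 = 1,
```

since 10^{-H} is a positive infinitesimal whenever H is an infinite hyperinteger, so its standard part is 0.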

Question 11. A mathematical notation is whatever it is defined to be, no more and no less. Isn't .999... defined to be equal to 1?

Answer. As far as teaching is concerned, it is not necessarily up to pure mathematicians to decide what is good notation and what is not; rather, this should be determined by the teaching profession and its needs.

Question 12. In its normal context, ".999..." is defined unambiguously, shouldn't it therefore be taught as a single mathematical object?

Answer. Indeed, in the context of the ZFC standard reals and the appropriate notion of limit, the definition is unambiguous. The issue here is elsewhere: what does .999... LOOK LIKE to highschoolers when they are exposed to the problem of unital evaluation, before learning about R and lim?

Question 13. Don't standard analysis texts provide a unique definition of .999... that is almost universally accepted, as a certain infinite sum that (independently) happens to evaluate to 1?

Answer. More precisely, it is a limit of finite sums, whereas "infinite sum" is a figurative way of describing the limit. Note that the hyperreal sum from 1 to H, where H is an infinite hyperinteger, can also be described figuratively as an "infinite sum", or more precisely H-infinite sum.

Question 14. There are certain operations that happen to work with "formal" manipulation, such as dividing each digit by 3 to result in 0.333... But shouldn't such manipulation be taught as merely a convenient shortcut that happens to work, but needs to be verified independently with a rigorous argument before it is accepted?

Answer. Correct. The best rigorous argument, of course, is that the sequence .9, .99, .999, etc. gets closer and closer to 1 (and therefore 1 is the limit by definition). The students would most likely find the previous sentence (before the parenthesis) unobjectionable. Meanwhile, the parenthetical remark is unintelligible to them, unless they have already taken calculus.

Question 15a. Isn't it very misleading to change the standard meaning of .999..., even though it may be convenient? This is in the context of standard analysis, since non-standard analysis is not taught very often because it has its own set of issues and complexities.

Answer. Recently a course in calculus was taught using H. Jerome Keisler's textbook Elementary Calculus. The course was taught to a group of 25 freshmen. The TA had to be trained as well, as the material was new to the TA. The students had never been so excited about learning calculus, according to repeated reports from the TA. At the end of the semester, teacher evaluation forms were handed out; the scores were in the area of 7 out of a maximum of 7. Two of the students happened to be highschool teachers (they were somewhat exceptional in an otherwise teenage class). They said they were so excited about the new approach that they had already started using infinitesimals in teaching basic calculus to their 12th graders. After the class was over, the TA paid a special visit to the professor's office, so as to place a request that next year the course be taught using the same approach. Furthermore, the TA volunteered to speak to the chairman personally, so as to impress upon him the advantages of the new method. The 0.999... issue was not emphasized in the class.

Question 15b. How can one possibly teach the construction of infinitesimals to students?

Answer. The construction of the reals (Cauchy sequences or Dedekind cuts) is not presented in a typical standard calculus class. Rather, the lecturer relies on intuitive descriptions, judging correctly that there is no reason to get bogged down in technicalities. There is no more reason to present a construction of infinitesimals, either, so long as the students are given clear ideas as to how to perform arithmetic operations on infinitesimals, finite numbers, and infinite numbers. This replaces the rules for manipulating limits found in the standard approach.

Question 16. How would one express the number π in the ".999...;...999" notation?

Answer. The digits of a standard real appearing after the semicolon are, to a considerable extent, determined by the digits before the semicolon. The following interesting fact might begin to clarify the situation. Let

d_min

be the least digit occurring infinitely many times in the standard decimal expansion of π. Similarly, let

d*_min

be the least digit occurring in an infinite place of the extended decimal expansion of π. Then the following equality holds:

d_min = d*_min.
This equality indicates that our scant knowledge of the infinite decimal places of π is not due to "arbitrariness of the construction using the axiom of choice", as has sometimes been claimed; but rather to our scant knowledge of the standard decimal expansion: no "natural" irrationals are known to possess infinitely many occurrences of any specific digit.

Question 17. What does the odd expression "H-infinitely many" mean exactly?

Answer. A typical application of an infinite hyperinteger H is the proof of the extreme value theorem. Here one partitions the interval, say [0,1], into H-infinitely many equal subintervals (each subinterval is of course infinitesimally long). Then we find the maximum x_{i_0} among the H+1 partition points by the transfer principle, and point out that by continuity, the standard part of the hyperreal x_{i_0} gives a maximum of the real function. Meanwhile, an application of non-standard analysis to differential geometry may be found at arXiv:0902.3126.
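A finite shadow of this hyperfinite argument can be run directly. In the Python sketch below (the helper name `grid_max` is illustrative), a continuous function is maximized over the n + 1 partition points of [0,1]; in the non-standard proof, n is replaced by an infinite hyperinteger H, and the standard part of the maximizing partition point yields the true maximum:

```python
def grid_max(f, n):
    """Maximum of f over the n + 1 partition points k/n of [0, 1]."""
    return max(f(k / n) for k in range(n + 1))

# As the partition is refined, the grid maximum approaches the
# true maximum of the continuous function f(x) = x(1 - x), namely 1/4.
f = lambda x: x * (1 - x)
assert abs(grid_max(f, 10**6) - 0.25) < 1e-6
```

In the hyperfinite version there is no error term to estimate: continuity plus the standard part function do the work that the limit n → ∞ does here.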

Question 18. I am still bothered by changing the meaning of the notation .999..., as it can be misleading. I recall being taught that it is preferable to use the y' or y_x notation until one is familiar with derivatives, since dy/dx can be very misleading, even though it can be extremely convenient. Shouldn't it be avoided?

Answer. There is a very good reason for this, already pointed out by Bishop Berkeley 300 years ago! Namely, standard analysis has no way of justifying these manipulations rigorously. Naturally, in the standard approach the notation dy/dx should be introduced as late as possible, once the students are already comfortable with derivatives, as the implied ratio can indeed be misleading. With the introduction of infinitesimals such as Δx, one defines the derivative f '(x) as

f '(x) = st(Δy / Δx),
where "st" is the standard part function. Then one sets dx=Δx, and defines dy=f '(x)dx. Then f '(x) is truly the ratio of two infinitesimals: f '(x)=dy/dx, as envisioned by Leibniz and justified by Robinson.
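For readers who want to experiment, the two-step ritual "form Δy/Δx, then take the standard part" has a small executable model in the dual numbers. Caveat: dual numbers (with ε² = 0) are a toy model chosen here for computability, not Robinson's hyperreals, in which no nonzero infinitesimal squares to zero; the class and function names are illustrative:

```python
class Dual:
    """Numbers a + b*eps with eps**2 = 0: 'finite part' a plus an
    'infinitesimal part' b*eps (a toy model, not the hyperreals)."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps**2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

    __rmul__ = __mul__

def st(x):
    """Standard part: strip the infinitesimal component."""
    return x.a

def derivative(f, x):
    # Delta x = eps; the eps-coefficient of f(x + eps) is exactly
    # the standard part of the difference quotient Delta y / Delta x.
    return f(Dual(x, 1.0)).b

assert derivative(lambda t: t * t, 3.0) == 6.0  # d/dt t^2 = 2t at t = 3
```

The design mirrors the text: arithmetic is carried out with the infinitesimal present, and only at the very end is the infinitesimal part discarded.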

Question 19. How does one relate hyperreal infinities to cardinality? It still isn't clear to me what "H-infinitely many 9s" means. Is it aleph_0, aleph_1, the continuum, or something else?

Answer. To begin to understand what is going on, one needs to get away from the naive cardinals of Cantor's theory, and focus instead on the distinction between a language and a model. A language (more precisely, a theory in a language, such as first order logic) is a collection of propositions. One then interprets such propositions with respect to a particular model. A key notion here is that of an internal set. Each set S of reals has a natural extension S* over R*, but also atomic elements of R* are considered internal, so the collection of internal sets is somewhat larger than just the natural extensions of real sets. However, as a first approximation, one can think of internal sets as natural extensions of real sets.

A key observation is that, when the language is being applied to the non-standard extension, the propositions are being interpreted as applying only to internal sets, rather than to all sets. In more detail, there is a certain set-theoretic construction of R*, but the language will be interpreted as applying only to internal sets and not all set-theoretic subsets of R*. Such an interpretation is what makes it possible for the transfer principle to be true, when applied to a theory in first order language.

Question 20. I still have no idea what the extended decimal expansion is.

Answer. In Robinson's theory, the standard natural numbers N are embedded inside the collection of hyperreal natural numbers, denoted N*. The elements of the difference N* – N are sometimes called (positive) infinite hyperintegers, or non-standard integers. The standard decimal expansion is thought of as a string of digits labeled by N. Similarly, the extended expansion can be thought of as a string labeled by N*. Thus an extended decimal expansion for a hyperreal in the unit interval will appear as

.a_1 a_2 a_3 ... ; ... a_{H-2} a_{H-1} a_H ...

Such extended decimal expansions were developed by Lightstone. The digits before the semicolon are the "standard" ones. Given an infinite hyperinteger H, the string containing H-infinitely many 9s will be represented by

.999...;...999

where the last digit 9 appears in position H. It falls short of 1 by the infinitesimal amount 1/10^H.
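The shortfall can be seen as a transferred finite identity. For every standard n,

```latex
1 - \sum_{k=1}^{n} \frac{9}{10^{k}} = \frac{1}{10^{n}},
```

and by the transfer principle the same identity holds with n replaced by an infinite hyperinteger H; since 1/10^H is then a positive infinitesimal, the extended decimal with H-infinitely many 9s is strictly, but only infinitesimally, less than 1.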

Question 21. What happens if one decreases .999...;...999 further, by the same infinitesimal amount 1/10^H ?

Answer. One obtains the hyperreal number

.999...;...998,
with digit "8" appearing in position H.

Question 22. You mention that the students have not been taught about R and lim before being introduced to non-terminating decimals. Perhaps the best solution is to delay the introduction of non-terminating decimals? What point is there in seeking the "right" approach, if in any case the students will not know what you are talking about?

Answer. How would you propose to implement such a scheme? More specifically, just how much are we to divulge to the students about the result of the long division of 1 by 3?

Question 23a. Just between the two of us, in the end, there is still no theoretical explanation for the strict inequality .999... < 1, is there? You did not disprove the equality .999... = 1. Are there any schoolchildren that could understand Lightstone's notation?

Answer. The point is not to teach Lightstone's notation to schoolchildren, but to open up their horizons by mentioning the existence of larger number systems, in which their "hunch" that .999... falls short of 1, can be justified in a mathematically sound fashion, consistent with the idea of an "infinite string of 9s" they are already being told about. The underbrace notation that appears in the article "strict non-standard inequality" is more self-explanatory than Lightstone's notation.

Question 23b. The multitude of bad teachers will stumble and misrepresent whatever notation you come up with. For typesetting purposes, Lightstone's notation is more suitable than the underbrace appearing in "strict non-standard inequality". Isn't an able mathematician committing a capital sin by promoting a pet viewpoint as the cure-all solution to the problems of math education?

Answer. Your assessment is that the situation is bleak, and the teachers are weak. On the other hand, you seem to be making a hidden assumption that the status-quo cannot be changed in any way. Without curing all ills of mathematics education, one can ask what educators think of a specific proposal addressing a specific minor ill, namely student frustration with the problem of unital evaluation.

One solution would be to dodge the discussion altogether. In practice, this is not what is done; rather, the students are indeed presented with the claim that .999... evaluates to 1. This is done before they are taught R or lim. The facts on the ground are that such teaching is indeed going on, whether in 12th grade or at the freshman level.

Question 24. Are hyperreals conceptually easier than the common reals? Will modern children interpret sensibly "infinity minus one," say?

Answer. David Tall, a towering mathematics education figure, has published the results of an "interview" with a pre-teen, who quite naturally developed a number system where 1, 2, 3 can be added to "infinity" to obtain other, larger, "infinities". This indicates that the idea is not as counterintuitive as it may seem to us, through the lens of our standard education.

Question 25. If the great Kronecker could not digest Cantor's infinities, how are modern children to interpret them?

Answer. No, schoolchildren should not be taught the arithmetic of the hyperreals. On the other hand, the study by K. Sullivan in the Chicago area indicates that students following the non-standard calculus course were better able to interpret the sense of the mathematical formalism of calculus than a control group following a standard syllabus. Are these students greater than Kronecker? Certainly not. On the other hand, Kronecker's commitment to the ideology of finitism was as powerful as most mathematicians' commitment to the standard reals is today.

Question 26. Isn't the more sophisticated reader going to wonder why Lightstone stated that decimal representation is unique (in the seminal paper on decimal representation of the hyperreals, published in the American Mathematical Monthly), while the recent text on the "strict non-standard inequality" is making a big fuss over the nonuniqueness of decimal notation and the strict inequality?

Answer. Lightstone was referring to the convention of replacing each terminating decimal by a tail of 9s. Beyond that, it is hard to get into Lightstone's head. Necessarily remaining in the domain of speculation, one could mention the following points. The responses received so far to the "strict non-standard inequality .999... < 1" range from shocked to scandalized. Now Lightstone was interested in, well, publishing his article. There is more than one person involved in publishing an article. Namely, an editor also has a say, and one of his priorities is defining the level of controversy acceptable in his periodical. Lightstone could have made the point that all but one of the extended expansions starting with 999... give a hyperreal value strictly less than 1. Instead, he explicitly reproduces only the expansion equal to 1. In addition, he explicitly reproduces an additional expansion--and explains why it does not exist! Perhaps he wanted to stay away from the issue of non-uniqueness as well as the related strict inequality, and concentrate instead on getting a minimal amount of material on non-standard analysis published in a mainstream popular periodical. All this is in the domain of speculation. As far as the reasons for "strict non-standard inequality" are concerned, they are more specific: first, the way the issue is currently handled by the educational establishment is not fair to the students; furthermore, the standard treatment conceals the power of infinitesimals in this particular issue.

Question 27. Amazing. Where can I find out more?

Answer. Ian Stewart's newly published book Professor Stewart's Hoard of Mathematical Treasures (released in Oct '09) contains a section "What is point nine recurring?", subtitled After decades of institutionalized denial, research mathematician reveals: .999... can be less than 1, almost everywhere. Read all about it at this arXiv post: A strict non-standard inequality .999... < 1.

A more detailed version of these remarks may be found online at Montana Mathematics Enthusiast, vol. 7 ('10): when is .9 less than 1?