Counting Information: A Note on Physicalized Numbers

Minds and Machines, Vol. 6, pp. 229-238
Over the last two decades the question of what computers can in fact and in principle do in our universe, what is termed the ultimate physical limits of computation, has been systematically investigated by various physicists and computer scientists. Pre-eminent among these are Rolf Landauer and his co-workers, who have organized their approach around the insistence that all information associated with computational processes, whatever function it performs, is necessarily physical. One consequence of this physicalism is Landauer's contention that classical continuum mathematics is an inappropriate and indeed misleading tool to study a universe that is in all likelihood discrete, and that such mathematics has to be eliminated if a self-consistent theory of the limits in question is to be developed. The present paper accepts Landauer's starting point and his conclusion about the continuum, but argues that his critique of mathematics is not radical enough: not only continuum mathematics but the density of the rationals, and hence the infinitude of the natural numbers, has to be rejected if a coherent theorization of ultimate computational limits is to be forthcoming.
In a recent article (Landauer 1991) summarizing the work on ultimate limits,
programmatically entitled "Information Is Physical", Landauer averred that the fact that "there are no unavoidable energy consumption requirements per step in a computer" in relation to dissipation is now well established (contrary to what many had believed), and called attention instead to other issues that remain problematic concerning the "operating environment" of the universe and, more pointedly, the mathematical formalism physicists use to investigate physical law. In order to pursue these I need to reprise some of the history and context laid out by Landauer.
The question of what computers can and can't do in the universe we inhabit has occupied Landauer for a considerable period; since, in fact, his pioneering paper, Landauer (1961), made a thermodynamic connection between information and dissipation. What he showed was that physically irreversible operations dissipate energy: a computation "requires a minimal heat generation, per machine cycle, typically of the order of kT for each irreversible operation". Logical irreversibility, typically the erasure of information, is associated in a direct way with physical irreversibility. So that, for example, in order to destroy one bit of information at least kT ln 2 units of energy need to be dissipated, where T is the absolute temperature and k is Boltzmann's constant.
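As a numerical illustration (mine, not the paper's), the kT ln 2 bound can be evaluated at room temperature:

```python
import math

k = 1.380649e-23   # Boltzmann's constant, in joules per kelvin
T = 300.0          # room temperature, in kelvin

# Landauer's bound: minimum energy dissipated in erasing one bit
E_bit = k * T * math.log(2)
print(E_bit)       # about 2.87e-21 joules per erased bit
```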
Any computation of the kind familiar to us (word processing, calculation, pattern generation, and so on) is permeated with irreversible steps. This follows from the fact that the logical basis for these computations is one-directional, since the standard AND and OR gates assumed by contemporary programming languages shed information: they have two inputs but only a single output.
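A minimal sketch (my own, not from the paper) of why such a gate sheds information: three distinct input pairs of an AND gate collapse onto the same output, so the inputs cannot be recovered from the output alone:

```python
from itertools import product

def AND(a, b):
    return a & b

# Group the four possible input pairs by the output they produce.
preimages = {}
for a, b in product([0, 1], repeat=2):
    preimages.setdefault(AND(a, b), []).append((a, b))

print(preimages[0])  # three distinct input pairs all yield output 0
print(preimages[1])  # only (1, 1) yields output 1
```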
The limitative nature of Landauer's result was the starting point of the work of Charles Bennett. In 1973 Bennett showed that whilst Landauer's absolute minimum of entropy production was unavoidable, the premise of its applicability - the widespread and universally accepted reliance on AND- and OR-based irreversible logic - could be circumvented. Bennett demonstrated that irreversible logic gates, and the consequent erasures of information they engender, are not essential to computation. (Of course, as Bennett observes, any computation can be made reversible by saving all the information along the way. But this is unacceptable, since it only transfers the activity of erasure - and the consequent entropy production - to the machine's memory tape which, when the tape comes to be re-used, would have to be wiped clean.)
Instead, Bennett argued that it was necessary to institute a wholesale re-writing of computational logic - what Fredkin and Toffoli (1982) elaborated as "conservative logic" - to replace the familiar and accepted one that fails to conserve directionality. Such a re-writing demonstrates how each previously irreversible step - basically an erasure that destroyed information - could in principle be replaced by a much longer, more complex computation that used this information positively, in a bidirectional way. He accomplished this by giving an abstract specification of a class of Turing machines that would in theory run on reversible logic, as well as specifying a physical example - the biosynthesis of messenger RNA - of reversible computation. Since then others (Benioff, Toffoli, Feynman, Zurek) have added to the list of ideal reversible machines - both of a classically deterministic and a quantum kind - as well as providing specific proposals, such as Fredkin gates, for physically implementing reversible logic.
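The Fredkin gate mentioned above is a controlled swap: when the control bit is 1 the two data bits are exchanged, otherwise they pass through unchanged. A sketch (my illustration) showing that the gate is its own inverse, and hence destroys no information:

```python
from itertools import product

def fredkin(c, a, b):
    """Controlled swap: exchange a and b exactly when control c is 1."""
    return (c, b, a) if c else (c, a, b)

# Applying the gate twice restores every input: the gate is reversible.
for bits in product([0, 1], repeat=3):
    assert fredkin(*fredkin(*bits)) == bits

# It also conserves the number of 1 bits, the hallmark of conservative logic.
for bits in product([0, 1], repeat=3):
    assert sum(fredkin(*bits)) == sum(bits)

print("Fredkin gate: reversible and bit-conserving")
```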
Bennett's achievement (which as a bonus led to a re-working of the Szilard-Gabor-Brillouin solution to Maxwell's Demon) has been widely appreciated and influential, and has extended and thematized the agenda, set originally by Landauer, for what will be called here the information-is-physical approach. I want now to take a closer look at how, according to this approach, one is to understand the original quest for the ultimate physical limits of computation.
With computation understood in terms of reversibility and governed by the appropriate conservative logic, the major source of dissipation is removed from the scene, and we are left with the effects of thermal vibration arising from the fact that any computational act takes place at a temperature which, by the third law of thermodynamics, is necessarily non-zero. The outcome of this is the production of noise, which degrades signals and produces errors. Leaving aside the various quantum models of reversible computing that have been put forward, and simplifying the discussion of the classical alternative, two quite different approaches have been proposed to deal with noise.
One, the so-called billiard-ball or ballistic model, eliminates noise and hence errors entirely. It does this by being composed of idealized parts with perfectly reflecting surfaces. But, because they need to be assembled with perfect precision, such machines are held to be unrealistic (at least in their classical formulation), and since I make a certain appeal here to realizability I shall ignore them in what follows.
The other encompasses a class of machines Bennett calls Brownian. With these the idea is not to deny or attempt to eliminate the thermal environment but work, as best as possible, with it:
Brownian computers allow thermal noise to influence the trajectory so strongly that it becomes a random walk through the entire accessible (low potential-energy) portion of the computer's configuration space. In these computers, a simple assemblage of simple parts determines a low-energy labyrinth isomorphic to the desired computation, through which the system executes its random walk, with a slight drift velocity due to a weak driving force in the direction of forward computation. ... [T]he drift velocity is proportional to the driving force, and hence the energy dissipated approaches zero only in the limit of zero speed. (Bennett 1982, p. 905)
For such machines the resistance they encounter is like the viscous drag of hydrodynamics or the resistance of electrical circuits rather than conventional static friction: the slower one computes the less work is needed to overcome resistance, i.e., the less the dissipation.
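A toy numerical sketch of this trade-off (the drag constant a and the step count N are hypothetical values of my own choosing): for a computation of fixed length, total dissipation falls linearly with drift velocity, but vanishes only in the zero-speed limit, at the price of an unboundedly long running time.

```python
k, T = 1.380649e-23, 300.0   # Boltzmann's constant and temperature
a, N = 1.0, 10**6            # hypothetical drag constant; steps in the computation

def total_dissipation(v):
    # Viscous-style drag: dissipation per step is proportional to velocity v.
    return N * a * k * T * v

def running_time(v):
    # A slower drift means the same N steps take proportionally longer.
    return N / v

for v in [1.0, 0.1, 0.01]:
    print(v, total_dissipation(v), running_time(v))
```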
With this solution to the problem of computational noise - described in Toffoli (1984) as "virtually nondissipative computation" and summarized by Landauer (1991) as the ability "to minimize dissipation to any desired extent" - we reach the fulfillment of what we might call the internal aspect of the information-is-physical agenda: no unavoidable energy requirements of computing arise from purely energy-theoretic, thermodynamical constraints. "Are there then", Landauer goes on to ask, "no limits imposed by physics?". The answer is that, of course, there are, namely, the external constraints of what is out there. We are likely, he observes, to be in a finite universe; it's difficult to believe that nature would allow an unlimited memory; there is everything from corrosion, degradation and electromigration to cosmic rays and spilled coffee to worry about; if we deal with errors by redundancy and building more massive machines, then we might more quickly run out of parts; and so on and so forth.
In addition to these external effects, Landauer also sees a more abstract difficulty in the way of understanding the ultimate limits of computation:
In contrast with the physical situation, mathematics has taught us to think in terms of an unlimited sequence of operations. We have all grown up with the sense of values of the mathematician: 'Given any epsilon, there exists an N, such that ...'. We can calculate pi to any required number of places. But that requires an unlimited memory unlikely to be available in our real physical universe. Therefore all of classical continuum mathematics, normally invoked in our formulation of the laws of physics, is not really physically executable. The reader may object: Can we not define the real numbers within a formal mathematical postulate system? ... Undoubtedly we can. But physics demands more than that; it requires us to go beyond a closed formal system and to calculate actual numbers. If we cannot distinguish pi from a terribly close neighbor, then all the differential equations that constitute the laws of physics are only suggestive ... (Landauer 1991, p. 28)
Thus, Landauer's response is to reject classical continuum mathematics. But he observes - with greater acuity than the majority of physicists - that things aren't that simple. Such a rejection, if not thoroughgoing, fails to be consistent:
Others have, in a variety of ways, suggested that space and time in the universe are not really described by a continuum and that there is some sort of discretization, or some limit on the information associated with a limited range of space and time. Most of these investigators, however, consider that to be a description of the physical universe and are still willing to invoke continuum mathematics to describe their picture. (Landauer 1991, p. 29)
Who these others are we're not told, but it would not be difficult to find examples of physicists fitting the description. In any event, Landauer's quest for a self-consistent theorization of the physics of computation points to two issues: an ontological one of external features - the possibilities and limitations of what is out there, "the construction materials and operating environments available in our actual universe"; and an epistemological one of the mathematical formalism, whose idealized character leads to the possibility of a mismatch between it and physical reality. These two issues are not, as I shall indicate, able to be separated. Of course, worries about a lack of fit between mathematics and physical reality are not new. In the early days of quantum theory Eddington had misgivings about the validity of mathematical formalism for distance scales smaller than 10^-13 centimeters, and recently Roger Penrose articulated not dissimilar doubts. Referring to the property of density - between any two real numbers there is always a third - Penrose observed:
It is not at all clear that physical distances or times can realistically be said to have this property. If we continue to divide up the physical distance between two points, we would eventually reach scales so small that the very concept of distance, in the ordinary sense, could cease to have meaning. It is anticipated that at the 'quantum gravity' scale of 10^20th of the size of a sub-atomic particle, this would indeed be the case. But to mirror the real numbers, we would have to go to scales indefinitely smaller than this: 10^200th, 10^2000th, ... of a particle size, for example. It is not at all clear that such absurdly tiny scales have any physical meaning whatever. A similar remark would hold for correspondingly tiny intervals of time. (Penrose 1989, p. 86)
Neither Eddington nor Penrose saw fit to make their doubts into any principled refusal of the continuum as a tool for theorizing physical reality. Not so Landauer, as we have seen, and not so John Wheeler (invoked by Landauer) who, as is well known, has campaigned widely against the use of continuum mathematics within the formulation of physical law. The question is, however, whether Wheeler and Landauer go far enough. After all, the real number continuum is itself only an outwork of the integers - an algebraic-topological completion, in fact, of the rationals. Problematizing the gap between the infinitely fine gradations of real numbers and what seems to be the all too finite nature of reality only raises the question of why we are so secure about the endless progression of integers and the rationals based on them. Avoiding the reals - deciding beforehand that continuum mathematics is "not really executable", as Landauer has it, and should form no part of our formulation of the laws of physics - though it pushes in the right direction, hardly settles the matter. It leaves open the question: Why do we believe that the mathematics of the integers is "executable"?

At first blush this seems a bizarre question. How could there possibly be any doubts about the role of the familiar whole numbers in the formulation of physical law? Before I confront Landauer's program with this, let me try to remove some of the question's strangeness by commenting on Penrose's doubts (which are presumably not far removed from those motivating Landauer and Wheeler). What exercises Penrose about the reals, in relation to their use by physics, is their density, which implies a mirroring of real number intervals by unreally small distance and time scales. But one doesn't need the continuum to encounter this problem: density is already fully present as a property of the rationals. And the rationals are fully present - by a simple finitary operation - as soon as the integers are given.
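The density of the rationals can be sketched with exact rational arithmetic (my illustration): between any two rationals lies a third, their midpoint, so the subdivision never terminates and never leaves the rationals.

```python
from fractions import Fraction

a, b = Fraction(0), Fraction(1)

# Halve the interval 2000 times; every midpoint is again a rational.
for _ in range(2000):
    b = (a + b) / 2

# After 2000 halvings the interval has width exactly 2^-2000, a scale
# far finer than any physically meaningful distance.
print(b == Fraction(1, 2**2000))
```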
An examination of the integers’ “executability”, then, seems inescapable.
Observe that neither Landauer, Wheeler, nor any of their co-workers has called for a mathematics that would put the integers into question or suggested that we treat them with the kind of suspicion reserved for the reals. The question arises, then, whether the mathematics on which Landauer’s program is based does not make for an incoherence parallel to the one he diagnoses, but for the endless progression of integers rather than the continuum. Thus, might it not be the case that not only do we have continuum mathematics illegitimately describing a discrete universe, but infinitistic mathematics equally illegitimately describing a finite world? If such is the case then the resulting incoherence would undoubtedly prevent the attainment of the self-consistent theory Landauer calls for. What guarantee is there that this doesn’t happen, that there is no mismatch or divergence between the information-is-physical program for examining the ultimate limits of computation and the mathematical apparatus it uses to articulate and carry out this program?
To answer this we need to return to the internal portion of the program that seems (at least to its adherents) settled and unproblematic, namely the lack of any unavoidable energy requirements per computational step, summarized under the slogan "virtually dissipation free computing". What, we can ask, does virtually mean? Equivalently, what is involved in claiming that - in Landauer's words - the "effects of noise can be offset to any required degree"? Or again, what are the mathematical assumptions lying behind the assertion that the dissipation approaches zero "in the limit of zero speed"? These expressions, and others such as 'arbitrarily small', 'as close to zero as one pleases', and so on, are all translations of the same bit of mathematics, namely, the fundamental idea of a limit. As is well known, this idea assumes the prior existence of the endless progression of integers. But 'assume' is too weak a word here: the expression 'limit to zero', and indeed the entire limit formalism as this is interpreted within contemporary mathematics, is without meaning - cannot be said or made intelligible - except in relation to this unlimited progression. Arbitrarily small rationals and arbitrarily large integers go together.
The employment of the limit formalism raises, then, the obvious parallel to Penrose's question: What is the physical meaning of computational speed being arbitrarily close to zero? How small a computational distance could be traversed in a unit of time? Or how long would one - could one, in our universe - wait for the next step of a computation?
Such questions are not idle philosophical speculations. Nor are they unnatural in the context of physics' examination of the status of its own limits. Nor can they be dismissed out of hand. They point to a fundamental difficulty of physics' deployment of mathematical objects that are nameable with ridiculous ease - 10^-100 the diameter of an atom, or 10^-1000 second, and so on - but which lack a smidgen of connection to any existing, projected or even currently imaginable physical theory.
The limit operation is only problematic if the numbers being endless is problematic. So the question we need to ask is: In what sense do the whole numbers go on for ever? The standard response runs: If they didn't, there would be a largest number N; and since it's always possible to add one and get the number N + 1, there can be no largest number; hence the integers are endless. To which there is an obvious (but all too avoidable) question: Who - or what - can always add one more? Or, putting it in terms of the integers rather than in terms of agency, where did the number N + 1 come from? Where, in general, do the numbers come from?
Numbers are inseparable from counting. So the question becomes: Who or what is counting? If you're a Platonist about mathematics, as most scientists are, that's a dumb question. Numbers, like points, lines, functions, limits, circles, spaces and every other kind of mathematical object, are just 'out there'. Timeless and spaceless and originless. They don't answer to a 'where?', a 'who?' or a 'what?'. They exist and have always existed: uncreated, ideal, transcendental entities; in this perspective counting is merely a progression through an already existing infinite sequence of objects. Now Landauer is anything but a Platonist, and this sort of metaphysical realism is clearly unacceptable to the information-is-physical proponents. They, on the contrary, have to address the physicality of endless - that is, arbitrarily long - counting. Moreover, for them counting must, in some sense, boil down to computing. In other words, their characterization of classical, non-quantum counting would be in terms of a deterministic computing device and would run something like the following.
One has a machine MU which, though it is strictly not necessary for the work we shall assign to it, can for the sake of simplicity be assumed, after the work of Bennett, to operate reversibly. The typical step executed by MU produces j + 1 as output when presented with an integer j as input. More precisely, MU produces a coding of j + 1, say in binary notation (or decimal, or more generally r-ary positional notation), when given a corresponding coding of j. If we start MU off with an input of zero and let it run, then we would want to say - and we could check its output whenever we liked - that it was counting. Let us look briefly at the thermodynamic operation of MU.
101011111 -> MU -> 101100000
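MU's typical step can be sketched as a binary increment (my illustration; the coding shown above is 101011111 for three hundred fifty-one becoming 101100000 for three hundred fifty-two):

```python
def mu_step(bits: str) -> str:
    """One step of MU: replace the binary coding of j by that of j + 1."""
    j = int(bits, 2)
    return format(j + 1, "b")

print(mu_step("101011111"))  # '101100000', i.e. 351 -> 352 as in the diagram

# Started at a coding of zero and left to run, MU counts.
state = "0"
for _ in range(5):
    state = mu_step(state)
print(state)  # '101': after five steps MU has counted to 5
```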
We can write down a simple energy equation for MU. Thus, let k be Boltzmann's constant, and suppose MU counts up to N operating at speed V and temperature T. Then the entropy term - the energy dissipated at each step - will be akTV.
Besides dissipation there is the energy required to do computational work. First, storage: it is known (Bennett 1984) that energy of "at least kT must be tied up in the physical representation of a bit during its transmission or storage". This means that the energy needed to maintain the integrity of the input j during the jth step will be proportional to kT log(j). Second, processing: as well as maintaining integrity during storage and transmission, MU also computes, processing the representation by adding 1 at each step to obtain the new representation (e.g. changing three hundred fifty-one to three hundred fifty-two as illustrated); the energy requirement for this is on average proportional to kT for each step. Taking a summation up to N gives the total energy E(N) used by MU to count up to N as
E(N) = kT(aNV + b log(N!) + cN), for suitable constants a, b, c. Note that the energy required for the Nth step, i.e., to go from N to N + 1, is E(N + 1) - E(N) = kT(aV + b log N + c), which is an increasing and unbounded function of N.
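The equation can be checked numerically; in the sketch below (mine) the constants a, b, c and the speed V are hypothetical values chosen purely for illustration, and log(N!) is computed via the log-gamma function:

```python
import math

k = 1.380649e-23            # Boltzmann's constant
T, V = 300.0, 1.0           # temperature and speed, chosen for illustration
a, b, c = 1.0, 1.0, 1.0     # hypothetical proportionality constants

def E(N):
    # E(N) = kT(aNV + b log(N!) + cN); lgamma(N + 1) equals log(N!)
    return k * T * (a * N * V + b * math.lgamma(N + 1) + c * N)

# The cost of the Nth step, E(N + 1) - E(N) = kT(aV + b log N + c),
# keeps growing as N does.
step_costs = [E(N + 1) - E(N) for N in (10, 10**3, 10**6)]
print(step_costs)
```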
We see from this calculation that for large enough N the energy required by MU to count one more time, that is to go from N to N + 1, is greater than any pre-assigned quantity. Moreover, if in order to make the dissipation arbitrarily small we make the velocity of MU arbitrarily close to 0, then the action involved in counting up to N, for any N, becomes arbitrarily large. But it is by no means evident - in fact it is quite obscure - how the universe would permit such phenomena; would permit, that is, greater than any pre-assigned quantity of energy and/or quantity of action to be available or take place within the region of spacetime enveloping some specific, actualizable machine MU.
Matters seem locked into a circle here. We use the integers as part of our mathematical formalism to describe and theorize computing in terms of physical quantities such as energy and action, and we use computing as the model and instantiation of counting from which we derive the integers. Is there a way out of the loop? Within the present context there seems not to be, since the sort of scientific investigation implied by an examination of the ultimate limits of computation seems to require that neither computing nor counting be specifiable except in terms of each other. Of course, one could build and run a 'computer' without theorizing the process via mathematics (after all, this is what evolution did to produce our brains). And one could, as I have explained elsewhere, in Rotman (1993), understand the process of counting and the whole numbers associated with it as a form of repetitive semiotic activity that didn't mention computers. But outside an evolutionary account of computation or a semiotic understanding of number there seems to be no satisfactory way to break the circle. A conclusion arrived at some time ago by Toffoli as a quite general feature of the relation between physics and computing:
Certain problems are not ... 'solvable' or 'unsolvable'. Rather, they are to some extent self-referential or circular, and as such they don't ask to be solved - they have to be understood and lived. In my opinion, 'physical limits of computation' and 'computational models of physics' are two poles that ... encompass one of the deepest and most vital of these circular issues. (Toffoli 1982, p. 174)
Of course, this doesn’t absolve us from the need to be critical about our assumptions, about what features of either computing or counting we’re willing to accept as given. With this in mind, let’s return to Landauer’s search for a self-consistent formalism.
Assuming virtually dissipation-free computing to be meaningful enmeshes us, then, in a curious kind of self-enhancing logic. This is because to describe such computing - actually, merely to say it - one has to invoke the limit construction V -> 0. This, in turn, invokes an infinite sequence of integers. The integers are defined to be the result of counting. But counting is a certain kind of computational process, a sequence of physical steps. The obvious question: In what sense is it physically coherent for there to be a potential infinity of physical acts executable by a machine MU, each of which is associated with a definite amount of dissipation, energy and action? And on what logical basis - even supposing the idea coherent and not counter to physics - can one assume the potential realizability of an infinitude of computational acts as part of one's theory of the nature of any computational act? What epistemological advance is it, in other words, to arrive at a conclusion about the physical nature and limits of the particular informational process we call counting by folding into the very language of one's understanding the notion that numbers go on endlessly?
It seems, then, that Landauer's quest for a self-consistent theory of the physics of computation runs up against a certain barrier. The difficulty is not that Landauer is unaware of the danger of assuming in one's mathematics what is denied in one's picture of physical reality. Unlike the run of physicists he is, as we've seen, extremely sensitive to the issue, to the lack of contact between an idealized mathematics and an unco-operatively real world; going so far as to castigate the mathematicians for saddling us all with the "given an epsilon, there exists an N ..." type of inference that produces, among many other things, such illusions of unlimited accuracy as the endless decimal expansion of pi. Yet it is precisely this sort of mathematical move, the machinery of the limit, that he is obliged to use - seemingly without noticing, or at any rate without critical mention - when he assents, as he does, to the meaningfulness of virtually dissipation-free computing. The problem is that in spite of his sensitivity he misses a discrete version of the danger he sees so well in the case of the continuum. What haunts the information-is-physical program here is not continuity but non-finitude: a finite world being theorized and described by an infinite sequence of numbers. Only this time, when the physics is about numbers, the worry is far more pressing. And, since the arbitrarily small is the problem, the only solution is to reject the density property of not only the reals but the rationals too. But to deny the applicability of the density of the rationals is to deny that arbitrarily small rational distances, e.g. 10^-n for large n, correspond to physical distances, which in turn is to assert that beyond some region the integers themselves have no purchase on physical reality insofar as this reality is held to be measurable.
There is a certain irony in all this. The very sub-discipline of computer science - the physics of computation - that one would expect to address the issue of counting arbitrarily far begs the question. And the fact that Landauer explicates the field under the banner "Information Is Physical" only sharpens the irony, since it is the very unphysicality - the idealized, transcendental nature - of the infinitary mathematical formalism that is the heart, or perhaps one should say immortal soul, of the problem. Thus, if we are to take "information is physical" seriously, then we have to apply it to all information, which means recognizing that intelligence and symbolic activity - information about information, if you will - is physical. Such meta-information is the mathematical formalism, the whole symbolic apparatus of number itself. How else do we articulate, describe, define, compare, represent, manipulate and signify information? Plainly, to talk of "information" in a way that makes it arithmetically quantifiable, to apply mathematics to "bits" to theorize the connection between informational and thermodynamic degrees of freedom in the first place, to define "degrees of freedom", to connect bits to an already mathematized entropy, to articulate a computational account of physics, and conversely to develop a physics of computation, is already to work within a thoroughly mathematized horizon to the idea of what can pass for information. In brief, mathematics functions as the source of all the information about information that the theory of computing seems prepared to countenance as scientific.
To summarize: the proper pursuit of Landauer's goal of a self-consistent account of the ultimate physical limits of computation would demand - as he indeed does, but with insufficient rigor - that mathematical formalism itself be sufficiently de-idealized and brought into the field of what we hold to be physical, material, realizable. To do this entails not just inveighing against continuum mathematics, but mounting a thoroughgoing assault against the uncritical acceptance of endless counting that prevails in contemporary mathematics and physics. Moreover, such an assault could hardly remain at the level of a purely negative critique but would, as part of its analysis of infinitism, need to map out a positive account, a view of counting and hence of the whole numbers radically different from the current one. Recently, in Rotman (1993), I carried out a particular version of such a program, and as part of its positive content outlined the arithmetic of a de-infinitized sequence of natural numbers, what I called non-Euclidean arithmetic, that emerges from re-interpreting the three dots in the familiar expression 1, 2, 3, ... as an instruction to continue for as long as possible in our universe.
References

Bennett, C. (1982), 'The Thermodynamics of Computation - a Review', International Journal of Theoretical Physics 21 (12), pp. 905-940.

Bennett, C. (1984), 'Thermodynamically Reversible Computation', Physical Review Letters 53 (12).

Fredkin, E. and Toffoli, T. (1982), 'Conservative Logic', International Journal of Theoretical Physics 21 (3/4), pp. 219-253.

Landauer, R. (1961), 'Irreversibility and Heat Generation in the Computing Process', IBM Journal of Research and Development 5 (3), pp. 183-191.

Landauer, R. (1991), 'Information Is Physical', Physics Today, May, pp. 23-29.

Penrose, R. (1989), The Emperor's New Mind. London: Penguin Books.

Rotman, B. (1993), Ad Infinitum ... The Ghost in Turing's Machine. Stanford: Stanford University Press.

Toffoli, T. (1982), 'Physics and Computation', International Journal of Theoretical Physics 21 (3/4), pp. 165-175.

Toffoli, T. (1984), 'Comment on "Dissipation in Computation"', Physical Review Letters 53 (12), p. 1204.